Governments around the world are increasingly recognising the threat of autonomous weapons systems and ‘killer robots’, according to a new report.
The report, published by Human Rights Watch, suggests a growing number of countries would support plans to impose a ban on autonomous weapons and prevent further development of such systems.
Thirty countries have expressed a desire for the establishment of an international treaty, the report says, while a number of policymakers, artificial intelligence experts, private companies and international organisations have also endorsed calls to ban autonomous weapons systems.
The vast majority of countries, the human rights group said, regard “human control and decision-making as critical to the acceptability and legality of weapons systems”.
“Stopping Killer Robots: Country Positions on Banning Fully Autonomous Weapons and Retaining Human Control” reviews the policies of 97 countries that have publicly discussed or considered autonomous weapons systems since 2013.
In 2013, Human Rights Watch launched the Campaign to Stop Killer Robots. Since then, the charity said, the issue has “steadily climbed the international agenda” and has now reached a point where it believes global action must be taken.
“Removing human control from the use of force is now widely regarded as a grave threat to humanity that, like climate change, deserves urgent multilateral action,” said Mary Wareham, arms division advocacy director at Human Rights Watch.
“An international ban treaty is the only effective way to deal with the serious challenges raised by fully autonomous weapons,” Wareham added.
Nations took part in eight Convention on Conventional Weapons (CCW) meetings between 2014 and 2019 that explored the use of lethal autonomous weapons systems.
Countries including Austria, Brazil and Chile have all proposed negotiations on a legally binding treaty aimed at ensuring “meaningful human control” over the critical functions of such weapons systems.
The report, however, notes that a number of military powers, including Russia and the United States, have stifled efforts to explore the introduction of regulations.
Furthermore, these nations continue to invest heavily in the military applications of AI-based weapons systems.
Human Rights Watch said the UK is not among the countries calling for a ban on autonomous weapons systems. At the Human Rights Council in May 2013, the UK said it considered international humanitarian law to be “sufficient to regulate the use” of killer robots.
However, in 2017 the UK said that “there must always be human oversight and authority” with regard to the use of lethal autonomous systems. The report notes that Britain is currently developing weapons systems with autonomous functions.
“It’s abundantly clear that retaining meaningful human control over the use of force is an ethical imperative, a legal necessity, and a moral obligation,” Wareham said. “All countries need to respond with urgency by opening negotiations on a new international ban treaty.”
Concerns over the rise of ‘killer robots’ and the military applications of automated technologies have increased in recent years, particularly in the global technology sector.
In 2018, employees at Google spoke out against the company’s involvement in Project Maven, a US Department of Defense drone project. An open letter, signed by hundreds of academics, was published supporting the employee revolt and urging the tech giant to cease its involvement in military projects.