Killer Robot Deployment Would Breach International Humanitarian Laws

Ross Kelly, Staff Writer

A report by Human Rights Watch claims that the use of fully autonomous weapons systems could breach existing international humanitarian law, and calls for a preemptive ban treaty.

The deployment of autonomous weapons during a conflict would amount to a breach of international law, according to campaigners and experts.

As nations continue to invest in and develop autonomous weapons systems or artificial intelligence for battlefield purposes, calls for international regulation are increasing. Wars of the future could see AI-powered weapons, ships and aircraft deployed to the battlefield without being subject to human control or monitoring.

In a new report published in collaboration with Harvard Law School's International Human Rights Clinic, Human Rights Watch argues that fully autonomous weapon systems would violate the Martens Clause, a widely acknowledged provision of international humanitarian law.

Stop the (AI) War

Currently, 26 nations support a prohibition on fully autonomous weapons, with Austria, China and Belgium recently declaring their support for a ban that is backed by thousands of scientists and AI experts.

On 27 August, representatives from more than 70 governments will meet at the United Nations in Geneva to discuss the issue of fully autonomous weapons. The talks, formalised under a disarmament treaty in 2017, will address the ongoing debate surrounding the use – and ethics – of AI and autonomous weaponry.

Last month, some of the biggest names in tech promised not to support the development, manufacture or trade of lethal autonomous weapons. Elon Musk, Google's Jeff Dean and AI pioneer Yoshua Bengio of the University of Montreal were among those who signed the Lethal Autonomous Weapons Pledge.

Martens Clause

The Martens Clause is a long-standing provision of international humanitarian law that requires emerging technologies to be judged by the “principles of humanity” and the “dictates of public conscience” when they are not already covered by other treaty provisions.

Because they lack emotion and legal and ethical judgement, the report states, fully autonomous weapons would face “significant obstacles in complying” with these principles, which require both the humane treatment of others and respect for human life and dignity. Legal and ethical judgement enables people to minimise harm toward others; autonomous weapons systems, lacking that judgement, would pose a grave danger.

“Permitting the development and use of killer robots would undermine established moral and legal standards,” said Bonnie Docherty, senior arms researcher at Human Rights Watch.

Docherty added: “Countries should work together to preemptively ban these weapons systems before they proliferate around the world.

“The groundswell of opposition among scientists, faith leaders, tech companies, nongovernmental groups, and ordinary citizens shows that the public understands that killer robots cross a moral threshold. Their concerns, shared by many governments, deserve an immediate response.”

Science Fiction or Reality?

The idea of fully autonomous weapons systems, robots and killer AI evokes images of science fiction films, such as The Terminator franchise. Although weapons systems such as these do not yet exist, a number of military officials have suggested that the use of such devices could be commonplace on battlefields in the near future.

Nearly 400 partly autonomous weapon and military robotics systems have been deployed by military forces or are in development. Israel, Russia, France, the UK and the United States all have development programmes underway.

In Korea's demilitarised zone (DMZ), mechanised sentries already stand guard, and Israel's Iron Dome has been deployed, though it cannot yet function in a fully automated capacity.
