
Will There Be a Ban on Killer Robots?


nir


An autonomous missile under development by the Pentagon uses software to choose between targets. An artificially intelligent drone from the British military identifies firing points on its own. Russia showcases tanks that don’t need soldiers inside for combat.
 

A.I. technology has for years led military leaders to ponder a future of warfare that needs little human involvement. But as capabilities have advanced, the idea of autonomous weapons reaching the battlefield is becoming less hypothetical.

The possibility of software and algorithms making life-or-death decisions has added new urgency to the efforts of a group called the Campaign to Stop Killer Robots, which has pulled together arms control advocates, human rights groups and technologists to urge the United Nations to craft a global treaty banning weapons without people at the controls. As in cyberspace, where there are no clear rules of engagement for online attacks, no red lines have been drawn over the use of automated weaponry.

Without a nonproliferation agreement, some diplomats fear the world will plunge into an algorithm-driven arms race.

In a speech at the start of the United Nations General Assembly in New York on Sept. 25, Secretary General António Guterres listed the technology as a global risk alongside climate change and growing income inequality.

“Let’s call it as it is: The prospect of machines with the discretion and power to take human life is morally repugnant,” Mr. Guterres said.

Two weeks earlier, Federica Mogherini, the European Union’s high representative for foreign affairs and security policy, said the weapons “impact our collective security,” and that decisions of life and death must remain in human hands.

Twenty-six countries have called for an explicit ban requiring some form of human control over the use of force. But the prospects for an A.I. weapons ban are low: several influential countries, including the United States, are unwilling to place limits while the technology is still in development.
 

Diplomats have been unable to reach a consensus on how a global policy could be implemented or enforced. Some have called for a voluntary agreement; others want rules that are legally binding.

A meeting of more than 70 countries organized by the United Nations in Geneva in August made little headway, as the United States and others said a better understanding of the technology was needed before sweeping restrictions could be imposed. Another round of talks is expected later this year.

Some have raised concerns that a ban would affect civilian research. Much of the most cutting-edge work in artificial intelligence and machine learning comes from universities and from companies such as Google and Facebook, but much of that technology can be adapted to military use.

“A lot of A.I. technologies are being developed outside of government and released to the public,” said Jack Clark, a spokesman for OpenAI, a Silicon Valley group that advocates for more measured adoption of artificial intelligence. “These technologies have generic capabilities that can be applied in many different domains, including in weaponization.”

Major technical challenges remain before any robot weaponry reaches the battlefield. Maaike Verbruggen, a researcher at the Institute for European Studies who specializes in emerging military and security technology, said communication is still limited, making it hard for humans to understand why artificially intelligent machines make the decisions they do. Better safeguards are also needed to ensure robots act as predicted, she said.

But significant advancements will come in the next two decades, said Derrick Maple, an analyst who studies military spending for the market research firm Jane’s by IHS Markit in London. As the technology changes, he said, any international agreement could be futile; countries will tear it apart in the event of war.

“You cannot dictate the rules of engagement,” Mr. Maple said. “If the enemy is going to do something, then you have to do something as well. No matter what rules you put in place, in a conflict situation the rules will go out the window.”

Defense contractors, identifying a new source of revenue, are eager to build the next-generation machinery. Last year, Boeing reorganized its defense business to include a division focused on drones and other unmanned weaponry. The company also bought Aurora Flight Sciences, a maker of autonomous aircraft. Other defense contractors such as Lockheed Martin, BAE Systems and Raytheon are making similar shifts.

Mr. Maple, who has worked in the field for over four decades, estimates that military spending on unmanned vehicles such as drones and ships will top $120 billion over the next decade.
 


No completely autonomous weapons are known to be deployed on the battlefield, but militaries have been automating parts of their arsenals for years. Israel’s Iron Dome air-defense system automatically detects and destroys incoming rockets. South Korea uses autonomous equipment to detect movements along the North Korean border.

Mr. Maple expects more collaboration between humans and machines before there is an outright transfer of responsibility to robots. Researchers, for example, are studying how aircraft and tanks can be backed by artificially intelligent fleets of drones.

In 2016, the Pentagon highlighted its capabilities during a test in the Mojave Desert. More than 100 drones were dropped from a fighter jet in a disorganized heap before quickly coming together to race toward and encircle a target. In a radar video shared by the Pentagon, the drones looked like a flock of migrating starlings.

There were no humans at the controls of the drones as they flew overhead, and the machines didn’t look much different from those any person can buy from a consumer-electronics store. The drones were programmed to communicate with each other independently to collectively organize and reach the target.
 

“They are a collective organism, sharing one distributed brain for decision-making and adapting to each other like swarms in nature,” William Roper, director of the Pentagon’s strategic capabilities office, said at the time.
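The swarming behavior Mr. Roper described is reminiscent of classic flocking algorithms, in which each agent steers using only local information about its neighbors and a shared goal. The sketch below, in Python, is purely illustrative and is not a description of the Pentagon's system; the agent count, steering weights and target coordinates are invented for the example.

    # A minimal, hypothetical sketch of decentralized swarm coordination,
    # loosely in the spirit of boids-style flocking. It is not the system
    # described above; the agent count, weights and target are invented.
    import math
    import random

    NUM_DRONES = 20          # illustrative swarm size
    TARGET = (100.0, 100.0)  # shared rendezvous point (assumed)
    STEPS = 200

    def step(positions):
        """Each drone updates itself using only local information:
        the centroid of the other drones (cohesion) and the shared target."""
        updated = []
        for i, (x, y) in enumerate(positions):
            others = [p for j, p in enumerate(positions) if j != i]
            cx = sum(p[0] for p in others) / len(others)  # flock centroid
            cy = sum(p[1] for p in others) / len(others)
            tx, ty = TARGET
            # Weighted pull toward the flock and the target, plus a little noise.
            vx = 0.05 * (cx - x) + 0.10 * (tx - x) + random.uniform(-0.5, 0.5)
            vy = 0.05 * (cy - y) + 0.10 * (ty - y) + random.uniform(-0.5, 0.5)
            updated.append((x + vx, y + vy))
        return updated

    if __name__ == "__main__":
        # Start scattered ("dropped in a disorganized heap"), then let the
        # local rules pull the swarm together around the target.
        drones = [(random.uniform(0, 50), random.uniform(0, 50)) for _ in range(NUM_DRONES)]
        for _ in range(STEPS):
            drones = step(drones)
        dists = [math.hypot(x - TARGET[0], y - TARGET[1]) for x, y in drones]
        print(f"mean distance to target after {STEPS} steps: {sum(dists) / len(dists):.1f}")

Even in this toy version, the design choice at the heart of the debate is visible: once the rules are written and the swarm is released, no human intervenes while they run.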

To those fearful of the advancement of autonomous weapons, the implications were clear.

“You’re delegating the decision to kill to a machine,” said Thomas Hajnoczi, the head of the disarmament department for the Austrian government. “A machine doesn’t have any measure of moral judgment or mercy.”

Source

 
