Killer Robots! Have you ever heard this term before? SpaceX founder Elon Musk and many other AI (Artificial Intelligence) and robotics leaders have warned the UN of the dangers of developing lethal autonomous weapons. They said that these weapons "threaten to become the third revolution in warfare," after the invention of gunpowder and nuclear bombs.
Recently, a South Korean university partnered with weapons manufacturer Hanwha Systems to develop artificial intelligence for weapons. Although the university later clarified that it would not be developing "autonomous lethal weapons," the partnership raises ethical concerns about the application of all technologies, including artificial intelligence.
At a time when the United Nations is discussing how to contain the threat posed to international security by autonomous weapons, it is regrettable that a prestigious institution like KAIST (Korea Advanced Institute of Science and Technology) appears to be accelerating the arms race to develop such weapons.
If autonomous weapons are developed, they will be the third revolution in warfare. They will permit war to be fought faster and at a scale greater than ever before. They have the potential to be weapons of terror: despots and terrorists could use them against innocent people, free of any ethical restraints. This is a Pandora's box that, once opened, will be hard to close.
South Korea already deploys robotic sentries along its border with North Korea. The Samsung SGR-A1, for example, carries a machine gun that can be switched to autonomous mode but is, at present, operated by humans via camera links.
So what are the merits and demerits of AI in warfare, and should AIs replace soldiers on the battlefield?
The most obvious benefit is the potential for reduced human casualties if AI prospers. A machine that takes action in place of humans could also reduce psychological harm in soldiers.
But when it comes down to ethics, is it possible to create a moral robot? Who will take the blame if AI weaponry makes a fatal mistake: the manufacturer, the developer, or the robot itself? Will AIs prove to be an asset or a liability to their human creators?
These questions will eventually demand answers, and with experts warning of an impending arms race, those answers had better come sooner rather than later.
As Human Rights Watch puts it: "Fully autonomous weapons, also known as 'killer robots,' would be able to select and engage targets without human intervention. Precursors to these weapons, such as armed drones, are being developed and deployed by nations including China, Israel, South Korea, Russia, the United Kingdom and the United States. It is questionable that fully autonomous weapons would be capable of meeting international humanitarian law standards, including the rules of distinction, proportionality, and military necessity, while they would threaten the fundamental right to life and principle of human dignity. Human Rights Watch calls for a preemptive ban on the development, production, and use of fully autonomous weapons. Human Rights Watch is a founding member and serves as global coordinator of the Campaign to Stop Killer Robots."