The UN Human Rights Council will discuss the morality of “killer robots” at its next meeting in Geneva. A moratorium has been requested on their use until the Council reaches a decision. This is not the complete ban that some groups would like to see, but it would allow time to work through the underlying philosophical dilemmas. The robots in question, currently under development in the US, UK, and Israel, are called “lethal autonomous robots”; they are pre-programmed to kill people or destroy specified targets in war, but they are also able to make decisions on their own, adjusting their actions when circumstances change.
Supporters of these machines believe they will save countless lives in the long run by reducing the number of soldiers deployed in battle. Human rights groups counter that removing the human element from war raises serious moral questions about our approach to combat. They question the robots’ ability to distinguish between targets and civilians, and ask who would be held responsible if civilian casualties occur.
“The traditional approach is that there is a warrior, and there is a weapon,” said Christof Heyns, the UN expert examining the use of the robots, in a BBC News article, “but what we now see is that the weapon becomes the warrior, the weapon takes the decision itself.”