When Robots Kill – Is it Justified?
Terrorism is an ever-increasing threat to major American cities, and more and more police and sheriffs’ departments are making bomb disposal and neutralization robots important parts of their arsenals. Being near the Mexican border (and thus a potential target for terrorists who might cross over into Texas), the Dallas Police Department purchased three such robots from Northrop Grumman at a cost of $200,000 each.
Northrop Grumman designed the Remotec Andros robot (shown in the image above) to help police departments and security forces deactivate bombs of various types without putting humans at risk. I doubt very seriously, though, that the company ever imagined that their life-saving invention would be used to kill intentionally – at least not here in America. However, that’s exactly what happened when the Dallas P.D. used one of its Remotec Andros robots to carry and detonate an explosive device that killed Micah Xavier Johnson, the sniper responsible for shooting and killing five Dallas police officers and injuring seven more on July 7.
Life-Saving Robot Used to Kill Suspect
Dallas police armed the robot with a device designed to deliver a controlled explosion and neutralize Johnson without injuring others in the area. While I am not dismissing the idea that the killing was justified (after all, Johnson had just killed five officers without reason or provocation), the tactic the Dallas P.D. used to neutralize Johnson has certainly caused a controversy. In fact, many pundits are already calling the decision to use the robot to kill Johnson the beginning of the Robocop era.
While the Dallas Police Department is almost surely the first law enforcement agency to use a robot to kill a suspect on American soil, the U.S. military has been using the tactic for quite a while now. The military has frequently used drones and other semi-autonomous devices and robots to take out dangerous combatants and terrorists in far-off locations like Afghanistan, Somalia and Yemen. The fact that the United States is not at war with these countries does not stop our military from using drones and robots to kill these people no matter where they hide.
Death By Robot is Not New
While it is arguable that using drones and robots to kill enemies of the United States or members of terrorist organizations is indeed justified, the sad fact remains that many innocents have been killed as well. Of course, these innocent civilian deaths were mistakes, but they happened just the same.
While President Obama claims that fewer than 120 civilians have been killed by drones or robots since 2009, independent tallies are much, much higher. In fact, independent counts and studies performed by The Foreign Policy Group and the Bureau of Investigative Journalism reveal that as many as 1,300 civilians may in fact have been killed as collateral damage in drone or robot strikes. Whatever the actual count, the tally is far too high, and nobody is really buying the Obama administration’s numbers.
While debate surrounding the military’s use of drones is sure to continue for years to come, the use of robots to kill suspects here at home is unprecedented. The fact that it has happened now has many discussing other possible uses, such as using robots to disperse crowds with water cannons, employing tasers or stun guns, and even deploying chemical agents to incapacitate whole crowds.
What Will Happen in the Future?
Just as is the case with military drones, the robot the Dallas P.D. used to kill Johnson was controlled remotely by a real live person (in this case, a police officer). However, it is not difficult to predict how more autonomous robots, programmed with sophisticated artificial intelligence, might be used in armed conflicts between police and criminals or unruly crowds in the future.
Notable figures such as Elon Musk, Steve Wozniak and Stephen Hawking have already warned about the dangers of robot AI and autonomous weapons (as have more than 1,000 other experts and leading robotics researchers). In fact, the group of prominent thinkers collectively signed an open letter warning of a “military artificial intelligence arms race” and calling for a ban on “offensive autonomous weapons.”
Although it’s yet to be determined if countries and defense contractors will pay heed to the warnings, some really smart people already see the writing on the wall and are pushing for the ethical use of autonomous robots. While the use of the killer robot in Dallas may have in fact been justified, it does open up the possibility that these types of tactics will become more commonplace.
If we continue to create smarter, more lethal robots to handle dangerous situations and kill criminals and terrorists, I have to wonder if it’s only a matter of time before robots use their artificial intelligence to decide that humanity in general is not worth the effort and simply get rid of the problem altogether. A world like that of Terminator or I, Robot may not be as far off as we think. Please let me know what you think in the comments section.