Although today’s robots are a far cry from the “Terminator” machines of science fiction films, researchers around the world, including at Cal Poly, are looking into the issues that may arise from using such robots in warfare.
Due to its current involvement in two wars, the U.S. military’s interest in using armed robots has grown, but, as with all new technology, unforeseen issues often arise.
For example, in October 2007, a semi-autonomous robotic cannon deployed by the South African army malfunctioned, killing nine “friendly” soldiers and wounding 14 others.
“To whom would we assign blame – and punishment – for improper conduct and unauthorized harms caused by an autonomous robot (whether by error or intentional): the designers, robot manufacturer, procurement officer, robot controller/supervisor, field commander, president of the United States… or the robot itself?”
This question is the premise of the preliminary report, “Autonomous Military Robotics: Risk, Ethics, and Design,” written by researchers of the Ethics and Emerging Technologies Group at Cal Poly.
“There are significant driving forces towards this trend,” said Patrick Lin, one of the authors of the report. “Congress actually mandated that by 2010, supposedly one-third of aerial vehicles need to be unmanned. By 2015 all ground vehicles need to be unmanned.”
“These deadlines apply increasing pressure to develop and deploy robotics, including autonomous vehicles; yet a ‘rush to market’ increases the risk for inadequate design or programming,” the report said.
The report was funded by a grant from the U.S. Department of the Navy’s Office of Naval Research, awarded to a select few faculty in the College of Liberal Arts who proposed the study. Lin and fellow author Keith Abney are Cal Poly philosophy lecturers. They teamed up with George Bekey, a special consultant in the College of Engineering, emeritus professor and founder of the University of Southern California robotics lab.
The preliminary report concentrates on the semi-autonomous (unmanned) robots already used in war as a foundation for questions about the possible risks of fully autonomous combat robots, which would have to make decisions a human would normally make.
According to the report, thousands of robots are being used today in Iraq and Afghanistan to replace human soldiers in “dull, dirty, and dangerous” jobs. The known military robots in use, the report said, are all semi-autonomous: robots such as the unmanned Predator drone can fly reconnaissance missions but need human authorization to fire missiles. In this case, Air Force pilots control the Predators from trailers outside Las Vegas.
As of 2007, robots that look more like Disney and Pixar’s animated Wall-E than fighting machines had been credited with neutralizing 10,000 improvised explosive devices (IEDs). IEDs have been responsible for 40 percent of U.S. casualties in Iraq since 2003.
The report suggests that these robots would be beneficial not only in saving American soldiers’ lives, but also in protecting civilians from human soldiers who commit atrocities.
“To the extent that military robots can considerably reduce unethical conduct on the battlefield – greatly reducing human and political costs – there is a compelling reason to pursue their development as well as to study their capacity to act ethically,” it read.
“Those of us who work in robotics, now that we are becoming aware of all the ethical issues in addition to the technology, should take it seriously and try to make sure that our students also take them seriously,” Bekey said. “So that robots are used for the good of humanity.”
The group is currently working on the second edition of the report while awaiting feedback from the Office of Naval Research on the initial version. The Association for Practical and Professional Ethics in Indiana and the International Conference on Robotics and Automation in Japan have both invited the Cal Poly group to present their work at their events in March and May, respectively.