Shaun Kahmann
Special to Mustang News
Cal Poly’s Ethics + Emerging Sciences Group may be a glimmer of hope standing between mankind and extinction at the hands of robots.
Founded in 2007, the organization focuses on the ethics of applying robotics and artificial intelligence to human life. That focus is also the subject of the book “Robot Ethics: The Ethical and Social Implications of Robotics,” edited by group co-founders and Cal Poly ethics professors Keith Abney and Patrick Lin, along with former University of Southern California professor of engineering George Bekey. The group has conducted research for the Navy, and its findings have appeared in several publications, including Wired and Forbes.
Their message? The threat robots pose to our privacy, job prospects and possibly our lives is very real.
God from the machine
“Could we grant a Predator drone the ability to determine when to fire its Hellfire missiles? Yes, we could. The question is whether or not we should, and that’s what we hope to explore.”
Machines still need humans to operate them, but not as much as you’d think. Modern unmanned aerial vehicles (UAVs), otherwise known as drones, can already take off, navigate and land on their own. But for a robot to be considered truly autonomous, it must meet three standards: it must be able to detect information coming from the external world (sense), process that information (think) and use it to make decisions (act). Technology has already advanced far enough to let robots complete certain tasks without direct human manipulation, but the biggest impediment to deploying fully autonomous machines, especially weapon systems, might be public trust.
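As an illustration of that sense-think-act cycle, here is a minimal sketch of a generic control loop; it is not the software of any real UAV, and every function and value in it is hypothetical:

```python
# Illustrative only: a generic sense-think-act control loop, not the
# software of any real UAV. All names and values here are hypothetical.
def sense():
    """Gather data about the outside world (radar, GPS, cameras)."""
    return {"obstacle_ahead": False, "altitude_m": 1200.0}

def think(observation):
    """Process the observation and choose an action."""
    return "climb" if observation["obstacle_ahead"] else "hold_course"

def act(decision):
    """Carry out the chosen action through the vehicle's controls."""
    print(f"executing: {decision}")

for _ in range(3):          # a real controller would loop continuously
    act(think(sense()))     # the closed loop that defines autonomy
```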
“Humans are currently kept in the loop when killing decisions are made, but this is not a technological necessity,” Abney said. “Before we take that step, robots need to be at least as good, if not better, than humans at not committing war crimes.”
Skepticism of artificial life forms dates back centuries. In the fifth century, a Jewish holy text described how, as man drew closer to God in wisdom, he would gain the ability to forge life forms of his own, called golems. Created from mud and clay, golems could act independently but were clumsy and unintelligent.
In the 1970s, Israel spearheaded the development of the first UAVs designed for reconnaissance, deploying them heavily in its wars with Egypt and Lebanon. Constant conflict has pushed Israel to the forefront of drone research, in concert with the United States. And because UAVs now collect more hours of reconnaissance footage than any human can monitor, there is a push to develop drones equipped with facial recognition software and automated targeting systems, though the technology is still a ways off.
“Right now we have things like the Predator drone flying around Pakistan and Afghanistan that are being piloted at an Air Force base in Nevada,” Abney said. “The robot is equipped with automatic radar for collision detection and can turn and land independently of human control.”
The same holy text states that mankind’s creations can never be as sophisticated as those created by God. And while the Israelis weigh the merits of proving their own scriptures wrong, as many as 74 other nations are perfecting their own UAV systems, according to the U.S. Government Accountability Office (GAO). The GAO has stated the U.S. government has no mechanism to keep timely records on UAV exports, creating serious worldwide proliferation concerns. An estimated 35,000 drones will be produced worldwide within the next 10 years, two-thirds of them by the U.S. and Israel.
As responsibility for armed drones shifts from the CIA to the military, unmanned combat is quickly becoming the norm in modern warfare.
Plausible deniability
“The big problem with humans is that they pass out at nine Gs, but we can build jets that can easily pull in excess of 50 Gs. As technology advances, there will be no way manned aircraft will be able to compete.”
Cal Poly lecturer Bruce Wright, who has more than 30 years of experience with the aerospace defense company Lockheed Martin, personally oversaw the development of 11 “black” (unofficial) unmanned jets commissioned by the government.
Wright said weapon systems in unmanned aerial vehicles will be able to track enemy fighters without the plane actually needing to turn, which may help alleviate communication delays.
“Sensor delays last only a fraction of a second; there is no reason to put man in harm’s way with the technology we have,” Wright said. “But targeting decisions are still made by humans, and I don’t think that’ll ever go away.”
Such delays, called latency, are a major impediment to unmanned combat. Al-Qaeda has already learned to scatter its forces in anticipation of drone strikes to outpace those delays, and has even published a how-to guide for evading drones. Despite this, more than 3,000 members of Al-Qaeda, including 50 senior leaders, have been killed by remotely piloted UAVs in Pakistan and Yemen, the epicenters of U.S. drone activity. But if drones are given more autonomy and are able to make decisions with on-board computers, the need for network communications may diminish.
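For a rough sense of scale, consider the speed-of-light delay alone on a satellite-relayed link. The back-of-the-envelope figures below assume a geostationary relay directly overhead; real routing, encoding and processing only add to them:

```python
# Back-of-the-envelope speed-of-light delay for a satellite-relayed
# drone link. Assumes a geostationary relay directly overhead; real
# routing, encoding and processing add to these figures.
C_KM_S = 299_792        # speed of light, km/s
GEO_ALT_KM = 35_786     # geostationary orbit altitude, km

one_way = 2 * GEO_ALT_KM / C_KM_S    # ground -> satellite -> drone
round_trip = 2 * one_way             # command out, video/ack back
print(f"one-way: {one_way:.2f} s, round trip: {round_trip:.2f} s")
# prints: one-way: 0.24 s, round trip: 0.48 s
```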
Enter plausible deniability.
A term coined by the CIA in the 1960s, plausible deniability is the act of withholding information from senior officers to shield them from responsibility for illegal or unethical actions. But if machines are making the decisions, who is held responsible?
“There are folks who are complete abolitionists, who argue robots should never be given lethal decision-making capabilities,” Abney said. “We do not argue for that. Instead, we argue for a set of tests. Only once it can pass these tests should it be given the capacity to kill.”
As part of the Ethics + Emerging Sciences Group’s research under an approximately $90,000 grant from the Office of Naval Research between 2007 and 2009, the group devised a set of tests to be performed on autonomous robots before they are deployed. Beyond simpler requirements, such as not killing civilians, autonomous robots must be able to understand the rules of war and must have a minimal likelihood of being turned against their owners by terrorists.
Critics might argue that government spending on ethical guidelines for self-thinking robots was a bit fanciful, especially at a time when the country was trudging through the biggest recession in recent memory. Lin said this is a line of criticism his group gets from time to time.
“It’s easier to regulate autonomous military robots before they’ve been deployed, not after the genie is out of the bottle,” Lin said. “Being early is really the only feasible option.”
But with high-profile malfunctions such as the drone that slammed into a Navy ship during a training exercise last year, it’s hard to imagine drones safely acting of their own accord anytime soon, and the group’s report reflects this. Still, the researchers say a form of programming called evolutionary computation may give robots the ability to learn from experience.
Such programming could let robots run self-simulations, estimating distributions over unknown probabilities the way humans do when they predict, but with greater accuracy. It could also allow them to learn from mistakes, formulate novel thoughts and possibly develop moral agency.
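As a toy illustration of the evolutionary-computation idea, the sketch below evolves a single number toward a hidden target. It is deliberately simplified and comes from no report or military system; real work of this kind evolves control programs or network weights rather than one value:

```python
# Toy evolutionary computation: evolve a value toward a hidden target.
# Shows only the mutate-evaluate-select loop at the heart of the idea.
import random

TARGET = 42.0  # hypothetical "correct behavior" to be learned

def fitness(x):
    """Higher is better: closeness to the target behavior."""
    return -abs(x - TARGET)

population = [random.uniform(0.0, 100.0) for _ in range(20)]
for _ in range(50):
    # Mutate: every candidate spawns a slightly altered copy.
    offspring = [x + random.gauss(0.0, 1.0) for x in population]
    # Select: keep the 20 fittest of parents plus offspring.
    population = sorted(population + offspring, key=fitness)[-20:]

best = max(population, key=fitness)
print(f"best candidate after 50 generations: {best:.2f}")
```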
But they may also become harder to control.
“Robots will likely be programmed to learn on their own. But with this kind of programming, we don’t know what they’re going to do, even in a restricted context,” Abney said. “Maybe, one day, they’ll wonder why they don’t have any rights and decide to emancipate themselves.”