A Utilitarian View of the Software’s Fight: Mechanization and Liability in War (and Peace)

Individuals increasingly rely on sophisticated technologies to perform tasks: automobiles to move, calculators to calculate, social networks to socialize. In recent years, however, technology has mechanized some very human affairs, with very human costs. The complexity of these technologies, along with the vast number of parties involved in their creation and use, makes allocating liability in the event of system error or failure a novel and complex legal, as well as moral, issue. Below are just a few instances where this issue may emerge in the coming years.

Predator Drones: Computations and Casualties

Almost 150 years ago, Herman Melville’s “A Utilitarian View of the Monitor’s Fight” recognized and lamented the dehumanizing efficiency of mechanized warfare. Yet even after the unprecedented technological development since the Civil War, his description of the Monitor, the Union’s first ironclad warship, seems hauntingly prescient of the Predator drones used today in Iraq, Afghanistan and Pakistan:

Deadlier, closer, calm ‘mid storm;
No passion; all went on by crank.
Pivot, and screw,
And calculations…

While much has been said about the ambiguous morality of unmanned drone warfare and its potential to desensitize us to violence, a surprisingly low-profile case (now settled) over the drones’ allegedly pirated and faulty positioning software exposes a new swath of legal issues: namely, how to allocate liability for system error or failure when the machine or software contributes as much as, if not more than, the individual operating it to the decision-making process. As Melville later describes the “sailors”:

War yet shall be, but the warriors
Are now but operatives…

While the details of the case are hazy (and will remain so, since the two parties recently settled, after which Netezza was acquired by IBM for $1.7 billion), ISSI alleged that Netezza illegally “hacked” ISSI’s Geospatial Toolkit and Extended SQL Toolkit and then packaged them with Netezza’s own data analysis programs, which Netezza sold to the CIA for use in unmanned Predator drones.

Particularly unsettling is evidence that both companies, and perhaps the CIA itself, knew that the software was faulty and not ready for production, potentially causing the drones to miss their targets by up to 40 feet. The question, then, is: when civilians die because of faulty targeting software, who should be held responsible? The CTO of ISSI expressed concern that his company could be held liable, and this concern at least in part motivated ISSI’s lawsuit to enjoin the use of its software in the drones.

ALADDIN: Letting the Robots Decide

ALADDIN (Autonomous Learning Agents for Decentralised Data and Information Networks), a joint project between the British defense contractor BAE Systems and several of the top universities in England (including Oxford), reimagines the decision-making process during warfare, disaster relief and other volatile, high-risk situations. Essentially, the system allows the various robots or units (fire alarms, etc.) to bargain amongst themselves for resources and to determine courses of action by comparing each unit’s own data and assessment of the situation. The developers are optimistic that this decision-making process will be more effective than if a group of human beings, with all their notorious inefficiencies and inconsistencies, were to make such decisions.
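
The bargaining step can be pictured with a toy sketch, purely illustrative and not drawn from ALADDIN itself: each unit bids for a scarce resource based only on its own local assessment, and the resource goes to the strongest claim. The unit names, fields and highest-bid rule below are all assumptions for the sake of the example.

    from dataclasses import dataclass

    @dataclass
    class Unit:
        name: str
        local_severity: float  # the unit's own reading of how serious conditions are (0-1)
        confidence: float      # how much the unit trusts its own sensors (0-1)

        def bid(self) -> float:
            # Each unit values the contested resource using only its own data.
            return self.local_severity * self.confidence

    def allocate(units: list[Unit]) -> Unit:
        # No central human decision-maker: the resource simply goes to the
        # unit whose self-assessment produces the strongest claim.
        return max(units, key=lambda u: u.bid())

    units = [
        Unit("fire_alarm_3", local_severity=0.9, confidence=0.7),
        Unit("smoke_sensor_7", local_severity=0.4, confidence=0.95),
        Unit("rescue_robot_1", local_severity=0.7, confidence=0.8),
    ]
    print(allocate(units).name)  # -> fire_alarm_3 (bid 0.63 beats 0.38 and 0.56)

Even in this simplified form, the difficulty is visible: no single human chooses the outcome, yet the outcome may carry real consequences.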

However, ALADDIN seems to take “responsibility” even further out of human hands, and during war or disaster, decisions may result in the loss of life or other severe harms. If an ALADDIN-like program were to respond automatically, who should be held liable when the program decides on a disagreeable or morally reprehensible course of action? The Royal Academy of Engineering published a report exploring culpability in an automated world, even going so far as to consider the idea of blaming a machine. The report ultimately concludes that, most importantly, such problems need to be brought into the public forum so that, as fully autonomous systems are introduced, society is prepared to handle the ramifications of using them.

Google Autonomous Cars: Automatic for the People

Google recently announced that it has successfully developed automated cars. Like the ALADDIN developers, Google is optimistic that its technology will result in fewer accidents and more efficient transportation overall. Using a wide array of sensors and high-speed data processors, Google claims to have driven 140,000 miles sans driver, with only one accident, in which another driver apparently rear-ended Google’s automated vehicle.

While actual wide-scale use of automated driving systems is still a long way off, liability rules will almost certainly be put in place before driverless vehicles are given the green light, and some practitioners are already exploring who would be held liable in the event of a crash. Although product liability will play a large part when navigation devices or systems fail, the human “driver” may still be held responsible, since any such system will likely include a human override function in case of emergency or system failure.
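
A minimal, hypothetical sketch of that override logic is below: the autonomous planner drives by default, but any human input immediately takes precedence. The function names and inputs are illustrative assumptions, not drawn from any actual vehicle system, yet they show why the human in the seat may remain within reach of liability.

    from typing import Optional

    def autonomous_command(sensor_data: dict) -> str:
        # Placeholder for the vehicle's own decision (lane keeping, braking, etc.)
        return "maintain_lane"

    def choose_command(sensor_data: dict, human_input: Optional[str]) -> str:
        # The human override, when present, always wins; this is the design
        # feature that keeps the "driver" in the decision-making loop.
        if human_input is not None:
            return human_input
        return autonomous_command(sensor_data)

    print(choose_command({"lidar": []}, None))               # -> maintain_lane
    print(choose_command({"lidar": []}, "emergency_brake"))  # -> emergency_brake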

About the Author

STLR