Recently, the European Parliament Committee on Legal Affairs (“ECLA”) issued a Draft Report (the “Report”) recommending that artificially intelligent (“AI”) robots be given legal status. Robots will soon be able to act and make decisions independently. Given this, a legal framework for tort recovery will be necessary when an individual is harmed by the decision or action of an AI robot.
The Report notes that the more autonomous an AI robot is, the less it is a mere tool in the hands of another. To the extent a robot acted autonomously, it might not be possible or practical to prove it was “defective” under a products liability theory. New rules will be needed to ensure that persons injured by AI robots have practical legal recourse.
Currently, a robot cannot be liable for its acts or omissions. An injured party would have to sue the robot’s owner, manufacturer, or user. The claim would be that the injury was foreseeable and could have been prevented by different programming. This would be difficult to prove. An AI robot would not be defective simply because it injured someone. A robot equipped with adaptive and learning abilities would interact autonomously with its environment in unforeseeable ways, encountering an infinite variety of situations. Just because someone was injured during an encounter with a robot does not mean the robot was defective. The robot could have been equipped with a state-of-the-art navigation and guidance system, yet contact occurred due to unanticipated environmental conditions beyond its technical ability to handle. Nobody’s perfect! The robot’s software might simply not have been up to the challenge, or, like any human, it may have lost control on a slippery sidewalk. A manufacturer, much less a user, could not reasonably be expected to foresee every situation in which the robot could cause harm and take action to avoid it. We do not demand products that are incapable of causing harm.
Many products are potentially dangerous; liability exists only when a product is unreasonably dangerous. It would be very difficult to prove that a robot’s software was defective except in the most egregious cases. One would need to find an expert programmer who could claim familiarity with the highly proprietary and nearly inscrutable code. Recovery on such a theory would be a tough road indeed. Given the cost of proving that a robot’s AI software was defective, attorneys would consider taking only the most grievous of injuries.
Recently, the NHTSA concluded that the Tesla that crashed into the side of a tractor-trailer while in autonomous mode was not defective. The system did not see the truck because its color blended into the color of the sky. The NHTSA found the system was not designed to be entirely autonomous, but merely to assist the driver. Had the driver been paying attention, he would have been able to stop the vehicle without consequence.
Claim Against the Owner/User
One might consider a suit against the robot’s owner or “user”. Such suits would likewise be extremely difficult to prove where the robot was acting autonomously. A few circumstances come to mind: where the owner or user asked the robot to do something illegal, inherently risky, or beyond its known capability.
A suit against an owner/user might be predicated on negligent education of the robot. It would be difficult, however, to prove that it was the education that caused the accident and not the independent learning of the robot. Differentiating between the two might require a detailed analysis of the robot’s memory. The robot’s neural connections would have to be dated to determine when and under what circumstances each connection was made, and whether it was created by the owner’s instruction or by the robot’s own experiences.
Nor are these issues confined to tort law. The ECLA makes the point that AI machines will eventually have the ability to negotiate contractual terms, conclude contracts, and decide whether and how to implement them. Thus the law of contracts may also need to be updated.
Faced with the difficulty of proving negligence against a manufacturer, owner, or user for the acts or omissions of an AI robot, one solution, recommended by the ECLA, would be to make them strictly liable for the acts or omissions of robots. Robots would be classified as if they were wild animals. We also apply strict liability to highly dangerous activities such as building demolition. Under strict liability, the plaintiff would only need to prove that the injury was substantially caused by the robot, without having to prove negligence.
Strict liability, however, would seriously inhibit the development of autonomous robots, which have the potential to profoundly advance the interests of society. A strict liability standard would also encourage fraudulent suits. Prospecting plaintiffs would likely throw themselves at the feet of robots looking for an easy payday. The robot might well be able to take the stand and defend itself by replaying its video memory of the event.
Human Negligence Standard
If not strict liability, perhaps we should hold the AI robot to the same standard that we hold ourselves. With humans, the standard is the “reasonable man under the circumstances.” This would have the benefit of being the standard people expect of independent actors in society. It might be impossible to apply, however, since AI robots would presumably be doing tasks that humans cannot, like using infra-red to see in the dark or carrying multiple objects beyond the physical capacity of the average human. It would also be arbitrary: as AI robots learned to process information faster and better than humans, it would not be fair to judge them by so low a standard of care.
Reasonable Robot Standard
We could hold them to a “reasonable robot” standard. Such a plan, however, would be difficult to implement: unlike evolution, technology advances so quickly that the standard would be continually changing. Further, there would always be a mix of old and new robots in the marketplace with different software capabilities (although older robots could presumably receive mandated software upgrades).
No Fault System
A possible solution would be a statutory insurance scheme like the one that exists for cars under no-fault laws. As under no-fault laws, the plaintiff could recover medical expenses without proof of fault and could sue only where they suffered a statutorily defined “serious injury” attributable to the robot. This would screen out frivolous claims while allowing more serious suits to proceed. Where there was a “serious injury,” the case would fall outside the no-fault system and the robot could defend itself (and its owner).
Robots would be registered with the state, and data regarding injury claims against them would be kept. The owner would have a duty to make sure the robot’s software was updated and that the robot received proper instruction. To the extent owners failed to do this, their personal assets or insurance would be at stake. To the extent the damage was caused by a factory defect affecting all units, a products liability action would be appropriate.
The responsibility for the payment of no-fault premiums could be borne either by the manufacturer or by the owner/operator of the robot. An insurance policy could be required for each class and type of robot, or the legislature could create a single general fund administered by a government agency (single payer!).
Legal Status for Robots
Finally, since recovery for the negligence of a robot might be entirely independent of its owner, it would make sense to create a form of legal status for AI robots so that they could be served with process and sued directly. Indeed, the ECLA recommended that autonomous robots be given the status of electronic persons with specific rights and obligations, presumably so that they could accept service of process and be haled into court like anyone else.
European Parliament Committee on Legal Affairs, Draft Report with Recommendations to the Commission on Civil Law Rules on Robotics, with opinions of the Committees on Employment and Social Affairs; Environment, Public Health and Food Safety; Industry, Research and Energy; and Internal Market and Consumer Protection. Rapporteur: Mady Delvaux (Initiative – Rules 46 and 52 of the Rules of Procedure). http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-//EP//NONSGML%2BCOMPARL%2BPE-582.443%2B01%2BDOC%2BPDF%2BV0//EN