Who Becomes Liable When Artificial Intelligence Causes an Injury?
Jul 13, 2021 01:33 AM EDT
Technological developments tend to disrupt more than just the market; they also disrupt the way the law operates. The introduction of the automobile, for instance, led to the enactment of laws on road safety, lemon laws, and rules governing liability for car and truck accidents. The advent of the internet prompted the creation of data privacy laws and a renewed emphasis on intellectual property law. The same will happen as technologies like artificial intelligence enter the consumer market.
When dealing with new technologies, it is all but guaranteed that there will be no statutes or precedents to use as the legal basis for a case. How should lawyers handle such a case? Who becomes liable when an artificial intelligence causes an accident?
The Sources of Liability
Normally, when building a legal argument, we follow the principle of stare decisis, the doctrine of adhering to precedent (prior court decisions on similar questions), because those decisions form part of our law. In the absence of applicable statutes and precedents, however, we look to the sources of liability to determine which party ought to be held responsible for an accident. The four major sources of liability are:
● Liability to clients arising from contracts
● Liability to third parties arising from common law
● Civil liability arising from federal securities law
● Criminal liability, which arises from criminal intent
How is liability determined in a car accident involving an artificial intelligence?
Self-driving cars rely on complex hardware and software to achieve full automation: an array of sensors that gathers data on the car's surroundings, the sheer computing power of the artificial intelligence that makes real-time driving decisions, and the internet connection that allows near-instant transmission of data. A good number of things can go wrong, and that is exactly what happened in the first fatal self-driving car accident.
Facts of the Accident
In 2018, a pedestrian was struck at high speed by a self-driving car undergoing a field test. Investigations revealed that even though the car was operating in fully autonomous mode, it was also manned by a safety driver, whose job was to intervene whenever the car failed to detect danger. The company, Uber, later admitted that the car failed to detect the crossing pedestrian because its software was faulty. The court nonetheless decided that Uber could not be held liable for the pedestrian's death, even though the company had clearly admitted the fault in its software.
Uber's admission that its software was faulty at the time of the accident could not be treated as an admission of legal fault, because Uber owed no liability to the riding public. It had no client to whom it could be held liable, nor any contract to uphold. It was merely field-testing a product, a process in which bugs and malfunctions are inherent and expected. Had Uber failed to deploy measures to minimize the risk to the public, it could have been held liable; that was not the case, however, because it had stationed a safety driver in the vehicle.
The court instead found the safety driver liable for the pedestrian's death, since it was the safety driver's obligation to intervene when the car failed to detect danger. The investigation found that the driver was distracted, which explains the late reaction to the danger.
Fortunately, more lawmakers are beginning to see the urgency of creating laws that govern the use of self-driving cars. Those laws will no doubt become a foundation for regulating other, similar technologies. And even as the legal landscape changes with each new technology released into the consumer market, professionals like this personal injury lawyer in Sacramento will remain our best chance of securing fair and full compensation.