08.21.2019

From the Field – AI and the black box – who is liable when no one is at fault?

Ironshore

AI is already on its way to transforming healthcare delivery and improving patient outcomes. However, while AI, Machine Learning, and Robotics are all designed to reduce human error and increase the predictability of patient care, they also create new risks across the healthcare liability landscape. When a healthcare provider uses AI to treat a patient who has a less-than-desired outcome (or even simply an unanticipated one), we anticipate liability suits against healthcare providers, healthcare systems, AI software companies, and robotic device manufacturers.

In this post, we will consider what happens when lawsuits get ahead of science, insurance considerations in this new liability landscape, and possible modifications to legal doctrine to address this new science.

What makes AI so compelling is its use of predictive, learning algorithms (Machine Learning) to improve the precision of the practice of medicine. AI can synthesize a vast body of research and combine it with a patient’s medical records and history to accurately predict the presence of (or the likelihood of developing) myriad medical conditions such as osteoporosis, diabetes, hypertension, and heart failure. Thus, AI will play a critical role in improving diagnostics, customizing individual treatment plans, and giving doctors the capability to leverage the most current and situationally relevant medical research.
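As a rough illustration (not a clinical tool), the sketch below shows the kind of predictive model this paragraph describes: a classifier trained on structured patient-record features to estimate the risk of a condition. The feature names, synthetic data, and model choice are illustrative assumptions, not any particular vendor’s algorithm.

```python
# Minimal sketch of a predictive model of the kind described above: it
# estimates a patient's risk of a condition from structured record features.
# All feature names, data, and thresholds here are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical structured features from a patient record:
# [age, BMI, systolic blood pressure, HbA1c]
X_train = np.array([
    [54, 31.0, 148, 7.1],
    [41, 24.5, 118, 5.2],
    [67, 29.8, 152, 6.9],
    [35, 22.1, 110, 5.0],
    [59, 33.4, 160, 7.8],
    [48, 26.0, 125, 5.4],
])
# 1 = condition documented in historical records, 0 = absent
y_train = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Risk estimate for a new patient; a clinician would interpret this
# probability alongside other findings, not treat it as a diagnosis.
new_patient = np.array([[62, 30.2, 145, 6.6]])
risk = model.predict_proba(new_patient)[0, 1]
print(f"Estimated risk of condition: {risk:.2f}")
```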

While the standard of care within the context of AI as the diagnostician is evolving, a central issue in any case against the physician will be whether relying on the output of AI clinical decision software breached the traditional “duty of care”.

Insurance Considerations

We anticipate healthcare professionals will experience a learning curve as they acclimate to incorporating AI effectively into their medical practice.1 The physician’s employer could also face vicarious liability for the acts of its employed physician, with exposure turning on the employer’s due diligence in vetting the AI as well as its policies governing the technology’s use and its oversight of the physicians using it.

The AI software company itself could face a litany of claims, including products liability, false advertising, and negligent training and supervision. Liability against the software company will often require a finding of fault, which may be impossible to determine because much of the neural network processing occurs in a “black box”. Because AI relies on machine learning, the program is continually evolving and changing as it learns. Thus, there may be a duty to continuously test the results of the algorithm to ensure that its reasoning remains sound as it learns from additional data.

Plaintiffs may argue that the software companies failed to test and update their algorithms and make necessary adjustments. Given previous allegations levied against manufacturers of new healthcare technologies, AI software companies should consider investing in training for the healthcare providers who use their products to help ensure patient safety.2
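To make the “duty to continuously test” concrete, here is a minimal sketch of one way such ongoing validation could look in practice: after each update of a continuously learning model, it is re-scored against a fixed, clinically reviewed benchmark set and flagged if performance regresses from the level documented at initial validation. The metric, threshold, and retraining step are illustrative assumptions, not a regulatory standard.

```python
# Hypothetical sketch of ongoing revalidation for a continuously learning
# clinical model. The approved AUC, tolerance, and retraining hook are
# illustrative assumptions, not an established standard.
from sklearn.metrics import roc_auc_score

APPROVED_AUC = 0.85   # performance documented when the tool was first validated
TOLERANCE = 0.02      # allowed regression before the model is pulled for review


def revalidate(model, benchmark_features, benchmark_labels):
    """Score the current model on a frozen, clinically reviewed benchmark set.

    Returns the AUC and whether it still meets the approved performance level.
    """
    scores = model.predict_proba(benchmark_features)[:, 1]
    auc = roc_auc_score(benchmark_labels, scores)
    return auc, auc >= APPROVED_AUC - TOLERANCE


# Illustrative gating step after each retraining cycle (names are hypothetical):
# updated_model = retrain(current_model, newly_accumulated_records)
# auc, still_fit_for_use = revalidate(updated_model, X_benchmark, y_benchmark)
# if not still_fit_for_use:
#     escalate_for_clinical_review(auc)   # hypothetical escalation hook
```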

Legal Considerations

What is an AI system, and how will its use in medical care be viewed by the legal system? Is it a medical device, software, or a person? It is software and data running on hardware. It is programmed to review information, draw a conclusion, and make an informed recommendation to achieve outcomes based on a “duty of care” programmed into its “black box”. Since AI processes learn and become increasingly autonomous over time, manufacturers and programmers can no longer control or reasonably foresee all outcomes. This lack of agency and control makes it very difficult to apply current product liability concepts of negligence and vicarious liability, or even to find a “responsible” party among the many designers, developers, and component manufacturers involved. As AI becomes further integrated into medicine and health care, it becomes clear that current legal standards and doctrines regarding medical malpractice are insufficient. The innovations are unprecedented, and we need to design solutions to the new problems they present.

What happens when lawsuits get ahead of science and precedent? In the past, anecdotal results have driven findings for the plaintiff until rigorous studies were completed. In the case of AI, its “black box” nature will likely require modifying current law, or even creating new legal doctrines, in order to produce fair and predictable legal outcomes in AI-related medical malpractice.3

Some solutions that have been put forward include:

- Conferring personhood onto the AI system and viewing it as a principal under the law, which could allow it to be insured much as doctors are, with a form of malpractice coverage whose cost would be spread between the creators and users of the technology.4
- Imposing common enterprise liability, which would remove the need to determine fault and assign it to a single party (if that were even possible); instead, if an AI system causes injury, all groups involved in the use and implementation of the system would bear some responsibility.4
- Modifying the standard of care to include a requirement that health care facilities and clinicians have a duty to validate the results of any “black box” algorithm and ensure it is fit for purpose.5 Under this model, health care professionals are responsible for harm if they did not take adequate measures to properly evaluate the AI technologies used in caring for the patient.

Conclusion

As the use of AI in healthcare improves patient outcomes, it will also create new areas of liability for doctors, health care systems, and AI companies. As AI becomes an increasingly integral part of medical care, who will be held liable by the courts in the event of a missed diagnosis or adverse outcome? Which types of insurance policies will respond to claims arising out of this new AI-driven medical diagnosis and care? Since it is unclear whether healthcare AI liability will be analyzed under product liability or tort law, all parties should ensure their current insurance program responds to both product liability and errors and omissions claims in the event of a bodily injury claim allegedly caused by AI.

Additionally, now is the time for regulators to weigh in on these AI healthcare issues before the first patient injury claims are filed. New liability regulations and legal doctrine tailored to health AI applications would create transparency and security for stakeholders in the field. Insurers could customize their policies and offer coverage solutions as appropriate. In the interim, if AI companies and healthcare providers are implementing new diagnostic or predictive software, they should consult with their brokers and insurance partners with respect to how their insurance program would respond in the event of claim activity.

Ironshore knows the future is changing, and we are changing to meet it. We have a nimble operating model designed to ensure a stable and scalable operation capable of supporting our growth trajectory. An important part of this operating model is and will continue to be adapting quickly to new technology and market conditions. Using AI to leverage the tremendous amount of digital information about behavior and outcomes will be key to that adaptability. AI, Robotics, and Machine Learning will be used to support underwriters by taking over repeatable manual processes so they can focus on making complex risk decisions and supporting insureds facing new technologies, changing market conditions, and shifting environmental patterns. Ironshore has a well-earned reputation for leveraging deep industry expertise to solve complex problems. Access to senior leadership, our in-house claims specialists, and a nimble approach have helped us stay as agile and relentless as the companies we protect.

1 Olthof E, Nio D, Bemelman WA. The learning curve of robot-assisted laparoscopic surgery. In: Bozovic V, ed. Medical Robotics. Available from: http://cdn.intechopen.com/pdfs/633/InTech-The_learning_curve_of_robot_assisted_laparoscopic_surgery.pdf

2 Although “duty to train” cases have not gained much traction in the medical device field due to federal pre-emption, it is unclear whether the theory might be successfully advanced in an AI context. Regardless of the ultimate success of such allegations, companies will nevertheless need to defend against them. See, e.g., Glennen v. Allergan, Inc., Cal. Rptr. 3d, 2016 WL 1732243 (Cal. Ct. App. Apr. 29, 2016); Taylor v. Intuitive Surgical, Inc., 187 Wash. 2d 743, 754, 389 P.3d 517, 523 (2017) (“While Taylor argued that ISI had a duty to train to the trial court, Taylor does not raise that claim to this court.”).

3 AMA J Ethics. 2019;21(2):E160-166. doi: 10.1001/amajethics.2019.160.

4 Vladeck DC. Machines without principals: liability rules and artificial intelligence. Wash Law Rev. 2014;89(1):117-150.

5 Price WN. Medical malpractice and black-box medicine. In: Big Data, Health Law, and Bioethics. Cambridge, UK: Cambridge University Press; 2018.