
Autopilot will fly you into a mountain: navigating liability of AI in pediatrics

June 1, 2024

A 2-year-old female is brought into an ambulatory clinic with a three-day history of fever and rash. The family recently immigrated from Central America. The child’s vaccine history is unclear. The parents report that the child has decreased energy and oral intake. Two older siblings are at home with upper respiratory symptoms.

The pediatrician notes that the right tympanic membrane is erythematous. The office recently updated its charting system to include artificial intelligence (AI) clinical tools, so the pediatrician consults the AI program for a differential diagnosis. Relying solely on the AI-generated differential diagnosis and management plan, the pediatrician concludes that the child has otitis media with an associated viral exanthem. Amoxicillin is prescribed, and the toddler is discharged home with her parents. The progress note is generated entirely by an AI program.

Ten hours later, the family presents to the local emergency department. The child is comatose, and a lumbar puncture shows gram-negative intracellular diplococci.

Did the pediatrician deviate from the standard of care by relying solely on AI? Is the AI company liable for a potential missed/delayed diagnosis?

To answer these questions, one must have a basic understanding of AI, the elements of a tort case and how AI is being used in health care.

Q: What is AI?

A: AI is the ability of machines and computers to perform tasks that normally would require human intelligence, such as recognizing patterns and making predictions.

Seemingly countless articles are available on AI topics such as large language models, neural networks, deep learning and data mining. (See resources for recent AAP News articles on AI.)

Q: What are medical applications of AI?

A: AI is being used in pediatrics, internal medicine, surgery, radiology, oncology and other medical specialties. In pediatrics, work is being done in imaging, predictive modeling, natural language processing, drug discovery and many other areas. Some electronic health record (EHR) systems include clinical decision support that can assist a physician with differential diagnoses and decision-making.

Q: What are challenges of utilizing AI in health care?

A: As is commonly seen with new technologies, widespread adoption often precedes a detailed understanding of the new systems’ implications. The use of AI tools may require users to understand not only AI’s findings but also their reliability and the best way to use the tools. Additionally, substantial financial interests exist in the AI space, with many companies moving into health care AI.

Many questions regarding utilization and potential liability for use of AI remain unanswered. There is essentially no case law to guide understanding of these issues. It is unclear if liability involving health care AI would meet all four elements of a tort suit (duty, breach, causation and damages).

Q: Have any missteps been seen with use of AI in health care and other industries?

A: As AI is increasingly incorporated into health care and other industries, there have been some unexpected outcomes.

In a well-known legal case, lawyers submitted a legal brief written by ChatGPT. The citations in the brief contained nonexistent court cases, and the lawyers were found to have acted in bad faith and made “acts of conscious avoidance and false and misleading statements to the court.”  

In a recent pathology case, AI was markedly more accurate at diagnosing skin cancer than pathologists. However, the AI’s accuracy was driven largely by scanning the images of skin lesions for the presence of a ruler.

Q: Does an AI company have a “duty” to a patient?

A: Does a professional relationship exist between the patient and the AI company? Do AI disclaimers preclude a professional relationship with the patient? What actions by the physician and the AI company are “reasonably foreseeable”?

A possible analogy can be seen with EHR companies. They typically have “hold harmless” clauses that help ensure the physician, clinic or hospital that purchased the EHR will not be able to blame the EHR company for any harm caused when using the system. The bottom line is that it will be very difficult to demonstrate that an AI company has a “duty” to the patient.

Scenario analysis

In the scenario above, is it reasonable for the pediatrician caring for the febrile toddler to rely solely on an AI program for a differential diagnosis? In 2024, the likely answer is “no.”

Currently, input from AI could be considered a “consultation” and not a substitute for a pediatrician evaluating/treating a patient. Pediatricians are encouraged, when indicated, to use available online sources of information when evaluating a clinical case.

AI input is being utilized increasingly to support/enhance human decision-making and often can be integrated safely. However, this integration requires that physicians behave in a prudent manner. The pediatrician is obligated to provide “reasonable care” under the circumstances.

Does AI change the standard of the reasonable person? Should AI be held to a different negligence standard? Would AI plus a human create a new reasonable person standard?

These are compelling questions. Psychologists have described how increased reliance on AI has led to deterioration in human decision-making.

Currently, there is no known case law on these questions. A variety of negligence standards are being proposed for the use of AI, including hybrid negligence standards that would reflect the combined performance of a physician and AI engaging in an activity.

Take-home points

  • General risks of using AI include overreliance on technology, not confirming/verifying results, incomplete/ineffective communication with families and not protecting a patient’s privacy.
  • Because there is a lack of case law, pediatricians are encouraged to behave in a prudent manner and contemplate the reasonably foreseeable consequences of their actions.
  • Goals for practitioners include communicating with families, documenting and striving for reasonable use of AI.
  • It will be difficult to demonstrate that an AI company has a “duty” to a patient.
  • It is not reasonable to depend solely on recommendations from AI.

Dr. Turbow, Dr. Srinivasakumar and Dr. Khanna are members of the AAP Committee on Medical Liability and Risk Management.

