Regulating AI in Health Requires Focus on Algorithms

A physician’s role revolves around the constant assessment and reassessment of risk: How likely is a procedure to succeed? Will the patient experience severe symptoms? When should follow-up tests be scheduled? Amid these critical decisions, the emergence of artificial intelligence (AI) promises to make clinical environments safer while helping doctors focus their attention on high-risk patients.

However, researchers from the MIT Department of Electrical Engineering and Computer Science (EECS), Equality AI, and Boston University emphasize the need for stronger regulation of AI technologies in medicine. Their call for oversight appears in a commentary published in the October issue of the New England Journal of Medicine AI (NEJM AI), following a rule recently introduced by the U.S. Office for Civil Rights (OCR) under the Affordable Care Act (ACA).

In May, the OCR issued a final rule banning discrimination on the basis of race, color, national origin, age, disability, or sex in “patient care decision support tools,” a term that covers both AI-based tools and traditional non-automated methods used in healthcare.

Prompted by President Biden’s 2023 Executive Order on Safe, Secure, and Trustworthy AI, the new rule reflects the administration’s ongoing commitment to health equity by preventing discrimination in medical technologies.

Marzyeh Ghassemi, senior author of the commentary and an associate professor in EECS, considers the rule a significant advance. Affiliated with the MIT Abdul Latif Jameel Clinic for Machine Learning in Health, CSAIL, and the Institute for Medical Engineering and Science (IMES), Ghassemi asserts that “this rule should enforce equity-oriented enhancements to the existing non-AI algorithms and clinical decision-support tools utilized across diverse medical fields.”

The number of AI-enabled devices approved by the U.S. Food and Drug Administration (FDA) has risen sharply over the past decade. Since the first such device, the PAPNET Testing System for cervical screening, was approved in 1995, the agency has authorized nearly 1,000 AI-enabled tools as of October, many of which aid in clinical decision-making.

Nevertheless, the clinical risk scores produced by these decision-support tools are subject to far less regulatory oversight, a gap that raises concern given that about 65 percent of U.S. physicians rely on such tools regularly to guide their patient care.

In response to these concerns, the Jameel Clinic will host another regulatory conference in March 2025. Last year’s conference sparked a crucial dialogue among faculty, regulators from around the world, and industry experts on the governance of AI in healthcare.

Isaac Kohane, chair of the Department of Biomedical Informatics at Harvard Medical School and editor-in-chief of NEJM AI, remarked, “Clinical risk scores offer greater transparency compared to complex AI algorithms, as they usually utilize a few variables arranged in a straightforward model. However, their effectiveness is contingent upon the quality of datasets used for training and the variables selected by experts. If these scores influence healthcare decisions, they must adhere to standards comparable to their more intricate AI counterparts.”
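Kohane’s point can be made concrete with a sketch. Below is a minimal, hypothetical example in Python of the kind of few-variable, point-based model he describes; the variable names, weights, and thresholds are invented purely for illustration and are not drawn from any real clinical score.

```python
# A minimal, hypothetical clinical risk score: a few expert-chosen
# variables, each mapped to points and summed into a total. The variables,
# weights, and thresholds below are invented for illustration and do not
# correspond to any validated clinical instrument.

def risk_score(age: int, systolic_bp: int, has_diabetes: bool, is_smoker: bool) -> int:
    """Return a point total computed from a small set of patient variables."""
    points = 0
    points += 2 if age >= 65 else 0           # age cutoff chosen arbitrarily
    points += 1 if systolic_bp >= 140 else 0  # elevated systolic blood pressure
    points += 1 if has_diabetes else 0
    points += 1 if is_smoker else 0
    return points

# Every contribution to the total is visible, which is what makes such
# scores more transparent than complex AI models. But the chosen variables,
# cutoffs, and the data used to set them can still encode bias.
print(risk_score(age=70, systolic_bp=150, has_diabetes=True, is_smoker=False))  # 4
```

Transparency here is a property of the model’s form, not a guarantee of fairness: as Kohane notes, the score is only as sound as the training data and the expert-selected variables behind it.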

Even decision-support tools that do not use AI can contribute to biases in healthcare, and they too require regulation.

“Regulating clinical risk scores is a challenging endeavor due to the widespread integration of decision-support tools in electronic medical records and their common use in healthcare practices,” stated Maia Hightower, CEO of Equality AI. “Yet, this regulation is vital for ensuring transparency and preventing discrimination.”

Hightower added that regulating these clinical risk scores may prove especially difficult given the new administration’s emphasis on deregulation and its opposition to the Affordable Care Act and certain nondiscrimination policies.

