Strategies to Unlock AI's Potential in Health Care, Part 2: FDA’s Approach to Protecting Patients & Promoting Innovation
Artificial intelligence—AI—is the future of everything. But when patient health is on the line, can we trust algorithms to make decisions instead of patients or their health care providers? This post, the second in our blog series on AI in health care, explores FDA’s proposed regulatory model, which is intended to be better suited to AI (and similar technologies) while still protecting patients. You can read our previous posts by clicking here.
FDA has made news recently by allowing marketing of novel technologies that incorporate AI. One device is software (yes, software can be a device) that uses AI to analyze images of the eye and determine whether the patient has more than mild diabetic retinopathy. Another uses AI to alert radiologists when a CT scan reveals a potential intracranial hemorrhage.
Cool stuff.
But despite these authorizations, FDA is still in the early stages of figuring out its approach to AI. “We’re actively developing a new regulatory framework to promote innovation in this space and support the use of AI-based technologies,” FDA Commissioner Scott Gottlieb said in a speech earlier this year. That framework flows from FDA’s historically hands-off approach to software and digital health technologies. When I worked at FDA, the goal was to create a regulatory paradigm for digital health technologies—particularly software—that was light on premarket review, which we thought would promote innovation. In 2015 and 2016, FDA issued guidance outlining a generally hands-off approach to mobile medical apps and general wellness devices. Congress liked what FDA was doing so much that, in the 21st Century Cures Act, it removed certain software functions from the statutory definition of a medical device. That law also spurred FDA to issue additional guidance clarifying how it would regulate products such as clinical and patient decision support software, which may use AI to offer diagnostic, dosing, or other recommendations to patients and health care providers.
The next step is the much-hyped Digital Health Software Precertification Program, known as Pre-Cert. Pre-Cert envisions certifying software developers to legally market devices rather than having FDA review specific products. Traditional FDA premarket review focuses on the product—is it safe? Is it effective? Do its benefits outweigh its risks?—while Pre-Cert looks first at the software or technology developer to determine whether it has a culture of quality and organizational excellence.
This is revolutionary. Today, FDA’s decisions about whether a device can be legally marketed are largely based on data submitted by the manufacturer to demonstrate that the device meets FDA’s marketing standard. With Pre-Cert, FDA is saying that if it pre-certifies you based on criteria it establishes (and on which it is seeking public comment), you are free to introduce new products to the market with minimal or no FDA premarket review. This is good because software is typically developed iteratively and can be modified quickly in response to adverse events or other glitches; deploying updates should not be delayed by a regulatory model that relies on something like the 90-day statutory timeframe for review of a 510(k).
But there are questions and potential downsides. FDA says Pre-Cert works in part by relying on postmarket surveillance to monitor the performance of software and digital health products once they are marketed, but the agency’s postmarket surveillance system has historically been underfunded and has drawn scrutiny for failing to address device problems quickly enough. FDA’s solution is to leverage the National Evaluation System for health Technology (NEST), a public-private partnership, to improve postmarket surveillance of medical devices by enabling better detection of safety signals using data from electronic health records, device registries, and other sources. It is not yet clear exactly how this system—itself still in its infancy—will integrate with the Pre-Cert program.
Further, will patients accept products marketed by certified developers rather than approved or cleared by FDA? Without FDA looking at these products, are patients and consumers putting their faith in Silicon Valley and other potentially unscrupulous product developers? Consider the recent buzz around privacy and security on the Internet. What if hackers manipulate an algorithm that makes life-or-death decisions or recommendations? Neither patients nor product developers would benefit from such occurrences. Will the device ecosystem be more vulnerable without FDA premarket review of security mitigations?
Questions may also arise about preemption, a provision of the Federal Food, Drug, and Cosmetic Act that bars certain state tort claims against manufacturers of some medical products. As Congress considers new authorities to enable Pre-Cert (authorities many think are needed), will some constituencies seek to codify into law immunity from liability for health care decisions made by or with the help of AI?
There is enormous potential for AI to improve health care. Industry, patients, and other stakeholders should applaud FDA for taking a creative approach to regulating AI and similar technologies. But they should also look closely at the details of what FDA is proposing, submit feedback, and encourage Congress to provide the necessary authority and safeguards so the final regulatory framework appropriately balances innovation and patient safety.