
FDA and CTTI Hold Joint Workshop on AI in Drug Development – AI: The Washington Report

  • On August 6, the Food and Drug Administration (FDA) and Clinical Trials Transformation Initiative (CTTI) held a joint workshop to explore “Artificial Intelligence in Drug & Biological Product Development.”
  • The workshop featured speakers and panelists from the FDA, academia, industry, and public interest groups, nearly all of whom agreed that greater regulatory clarity is needed around AI in drug development, along with greater public awareness of and trust in AI.
  • The head of the FDA’s Center for Drug Evaluation and Research, Patrizia Cavazzoni, stated that the agency is currently drafting risk-based guidance on the use of AI for drug development, though it is unclear when this guidance will be published.
     

   
On August 6, 2024, the Food and Drug Administration (FDA) and the Clinical Trials Transformation Initiative (CTTI) hosted a full-day workshop on “Artificial Intelligence in Drug & Biological Product Development.” The workshop featured four panels with speakers from the FDA, academia, advocacy groups, and industry. While the panels covered a range of issues related to the use of AI for drug development, nearly all of the panelists called for greater involvement by government regulators, both to provide more explicit guidance on such uses of AI and to supply additional resources for the beneficial development of AI, especially through public-private partnership opportunities. Panelists also emphasized the need for increased public awareness of the capabilities of AI in drug development to foster greater trust in AI and future innovation.

Morgan Hanger, Executive Director at CTTI, delivered the opening remarks, setting the stage for the workshop by noting how AI has impacted every facet of the drug development lifecycle in recent years. She explained that although there are numerous potential applications of AI throughout the development lifecycle, the workshop would focus on clinical development and explore the use of AI to optimize study designs and other aspects of the drug development process.

Following the opening remarks, Patrizia Cavazzoni, M.D., the Director of the FDA’s Center for Drug Evaluation and Research (CDER), delivered the keynote address, which likewise highlighted the proliferation of AI use cases across the drug development process. She noted that the FDA has received more than 300 drug approval applications with AI elements. In March 2024, the agency published “Artificial Intelligence and Medical Products,” which detailed its commitment to promoting the safe and effective use of AI. However, Dr. Cavazzoni acknowledged that the industry has not had the clarity it needs around AI and stated that the FDA is drafting risk-based guidance on the use of AI for drug development to provide a greater level of predictability and certainty for the industry, with the goal that increased clarity will foster greater innovation.

Session 1: Optimizing Model Design Through Multidisciplinary Expertise

The first panel of the workshop explored the importance of multidisciplinary expertise in the creation of AI models used for drug development. The panelists emphasized the need to bring together experts from diverse fields to work on AI models and explore the full potential benefits of AI for drug development. One of the panelists noted that AI itself can be used to promote interdisciplinary collaboration, as AI can explain and visualize technical concepts in different ways for diverse audiences, bridging knowledge gaps and fostering greater understanding.

The panelists also discussed strategies for advancing AI tools for drug development. The panelists agreed that it is important to highlight AI use case successes in this space, no matter how small, to showcase the potential of AI for drug development, educate stakeholders on various use cases, and build trust among different stakeholders around AI. Additionally, each of the panelists emphasized that all stakeholders, from developers to the general public, should embrace the use and integration of AI rather than resist it because of unknown risks, as broader use and development will encourage beneficial applications of, and trust in, AI across the population.

Session 2: Using the Data We Have, Creating the Data We Need

The next panel focused on the types and quality of data used in drug development, and in particular on how AI can remedy the common data-related challenges that drug developers face. The panelists emphasized the need for research-ready data sets, and many of them noted that they do not believe most researchers and developers currently have adequate data sets, as data is often not available for a sufficiently diverse range of people and situations. The panelists, however, highlighted public-private precompetitive consortia as a promising model for generating broadly accessible data for researchers and developers who want to leverage AI in drug development.

The panelists also discussed the issue of bias in AI models and data. While acknowledging that AI models may perpetuate certain biases inherent in existing data, especially in the context of health care data, the panelists described the need for standardized controls for monitoring and evaluating bias across the entire lifecycle of AI models used for drug development. They also expressed hope that AI may be used to identify biases in data sets more effectively and efficiently and to create outputs that eliminate certain biases.

Ultimately, the panelists agreed that the data available today are not sufficient to support the broad and safe use of AI in the drug development process and that greater data transparency and accessibility are needed. In addition to calling on the FDA to develop guidance on data transparency to direct industry efforts, the panelists suggested that the federal government explore additional product development pathways (including conditional approval) to accelerate the generation of real-world performance data, as well as funding programs and partnerships aimed at expanding data availability and transparency and strengthening AI model development and safety.

Session 3: Model Performance, Explainability, and Transparency

The workshop’s third session featured panelists who have utilized AI in the drug development process. One issue that these panelists raised is the explainability and interpretability of AI models: whether people can understand how AI models, which are often viewed as black boxes, reach their decisions and the basis for those decisions. According to the presenters, increasing the explainability and interpretability of AI models is critical to the future of AI innovation in drug development. Regulators need to understand how AI models work in order to regulate them and have confidence in AI-driven results in drug development, while the industry needs to grasp how AI models work in order to trust and invest in them.

The panel also explored possible roles for regulators in encouraging the creation and use of AI models for drug development. The panelists suggested that the FDA could spotlight certain AI use cases as a way to build trust in AI and enhance the credibility of drug developers who integrate AI into their processes. As in previous sessions, the panelists recommended that the FDA issue clearer guidelines on the use of AI for drug development and consider establishing a clear regulatory framework for AI use in drug development to address specific uncertainties and unknowns that may deter investment and innovation in this space. Finally, one panelist called on the FDA to create more incentives for entities involved in drug development to integrate AI into their processes, including potential grants, awards, and funding opportunities.

Session 4: Identifying Gaps, Addressing Challenges, and Charting the Path Forward

The panelists in the final session echoed many of the same themes and ideas that the previous panelists discussed, calling for increased clarity and guidelines from regulators and the creation of government partnerships and incentive programs for AI developers. However, this panel went further, suggesting that global regulators must focus on the alignment and harmonization of AI-related terms, best practices, and standards. They also highlighted multiple challenges to the broader adoption of AI for drug development, including:

  • Keeping up with emerging US and global laws and regulations on AI.
  • Building cross-functional teams to develop and maintain AI-based systems.
  • Addressing the lack of broader, overarching rules and guidance for developing, testing, and implementing AI-based systems.
  • Overcoming the shortage of information on successful and unsuccessful use cases to guide AI development and build trust.

Building on the repeated theme of public-private partnerships to facilitate AI development, the FDA moderators asked the panelists to describe their visions for such partnerships. One panelist likened the current situation with AI to the creation and buildout of the Internet, suggesting that the federal government should be investing vast resources to establish rules and standards for the development and operation of, and access to, AI systems and data. Other panelists highlighted the need for global stakeholders and regulators to interact and collaborate to develop consensus on AI terms and standards, including data standards, to promote international harmonization and accessibility. Overall, there was broad agreement that greater participation and cooperation by a wide range of stakeholders and government agencies are necessary to address existing challenges and chart the path forward on the use of AI for drug development.

Finally, the workshop concluded with remarks from Jacqueline Corrigan-Curay, the Principal Deputy Center Director of CDER, who echoed many of the same sentiments about the possibilities of AI that Director Cavazzoni addressed in the opening keynote.

The public workshop provided evidence that the FDA and the panelists are committed to harnessing the full power of AI to enhance drug development and safety and to deliver better outcomes for patients. However, a recurring theme throughout the workshop was that the FDA and the federal government need to act, whether by providing resources, partnerships, clear guidance, or actionable standards, to help establish a stable legal framework and development path for the use of AI in the drug development process.

 


Authors

Benjamin advises pharmaceutical, medical device, and biotech companies on the FDA regulatory process, helping them identify the correct regulatory pathway and assisting with FDA communications and strategy.
Joanne counsels global clients on the regulatory and distribution-related implications of bringing a new FDA-regulated product to market and on how to ensure continued compliance after a product is commercialized.

Matthew Tikhonovsky

Matthew is a Mintz Senior Project Analyst based in Washington, DC.