On January 7, 2025, in the final weeks of the Biden Administration and before President Trump returned to the White House, the Food and Drug Administration (FDA) issued draft guidance entitled “Considerations for the Use of Artificial Intelligence To Support Regulatory Decision-Making for Drug and Biological Products.” The guidance provides recommendations on the use of AI intended to support a regulatory decision about a drug or biological product’s safety, effectiveness, or quality, and it discusses the use of AI models in the nonclinical, clinical, post-marketing, and manufacturing phases of the drug product life cycle. This is the first time the FDA has proposed draft guidance on the use of AI in the development of drug and biological products, and it may provide insight into how AI models in medical product regulation should be assessed. The FDA is seeking public comment on the draft guidance by April 7, 2025.

Since returning to office on January 20, President Trump has issued a number of executive orders, many rescinding executive orders issued under the Biden Administration, as well as a new AI-related order intended “to sustain and enhance America’s global AI dominance.” This FDA draft guidance does not appear to be affected by these orders.

The guidance proposes a risk-based credibility assessment framework that may be used to establish and evaluate the credibility (i.e., trust in performance) of an AI model for a particular context of use (COU). The framework consists of a 7-step process: (1) define the question of interest; (2) define the COU for the AI model; (3) assess the AI model risk; (4) develop a plan to establish AI model credibility; (5) execute the plan; (6) document the results of the credibility assessment plan and discuss deviations from the plan; and (7) determine the adequacy of the AI model for the COU.

The guidance is intended to provide a framework to help establish the credibility of an AI model’s output, using an approach consistent with how the FDA has been reviewing applications for drug and biological products with AI components. It was informed by feedback from an expert workshop held by the Duke Margolis Institute for Health Policy in December 2022 and by hundreds of comments on two discussion papers issued in May 2023 concerning AI use in drug development and in manufacturing. The FDA encourages entities to engage early with the agency about AI credibility assessment and the use of AI in human and animal drug development.

The Proposed Framework

The guidance proposes a 7-step risk-based framework to establish and evaluate an AI model’s credibility for a particular context of use. The FDA defines “credibility” as “trust, established through the collection of credibility evidence, in the performance of an AI model for a particular COU.” The guidance addresses the use of AI models throughout the drug product life cycle, including the nonclinical, clinical, post-marketing, and manufacturing phases. For the first three steps, it also provides examples in (a) clinical development and (b) commercial manufacturing scenarios.
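
To make the framework concrete, here is a minimal sketch, in Python, of how a sponsor might track a credibility assessment through the seven steps. This is an author illustration only; the field names and example values are assumptions, not drawn from the guidance.

```python
# Illustrative only: a structured record a sponsor might maintain while
# working through the FDA's 7-step credibility assessment framework.
# All field names and values are assumptions for illustration.
from dataclasses import dataclass, field

@dataclass
class CredibilityAssessment:
    question_of_interest: str              # Step 1: question, decision, or concern
    context_of_use: str                    # Step 2: role and scope of the model
    model_risk: str                        # Step 3: e.g., "low" / "medium" / "high"
    credibility_plan: list[str]            # Step 4: planned credibility activities
    execution_notes: list[str] = field(default_factory=list)  # Step 5
    results_and_deviations: str = ""       # Step 6: credibility assessment report
    adequate_for_cou: bool | None = None   # Step 7: adequacy determination

assessment = CredibilityAssessment(
    question_of_interest="Can the model flag patients at high risk of adverse events?",
    context_of_use="Model output supplements clinical monitoring; not sole evidence.",
    model_risk="medium",
    credibility_plan=["describe model and development data", "define evaluation protocol"],
)
print(assessment.adequate_for_cou)  # None until Step 7 is completed
```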

Step 1: Define the Question of Interest

This step involves clearly defining the specific question, decision, or concern the AI model aims to address. It sets the foundation for the subsequent steps by focusing on the problem the AI model is intended to solve, ensuring that the AI application is purpose-driven and directly aligned with a specific regulatory or development need. The FDA guidance also notes that various evidentiary sources may be used to answer the question, including but not limited to live animal testing, clinical trials, and manufacturing process validation studies used in conjunction with evidence generated from the AI model.

Step 2: Define the Context of Use for the AI Model

This step specifies the role and scope of the AI model in addressing the defined question of interest. It includes detailing what will be modeled and how the model outputs will be utilized, ensuring that the model’s application is clearly understood. This step is crucial for delineating the boundaries within which the AI model’s outputs are considered valid and reliable, thereby tailoring the AI application to its intended regulatory context.

Step 3: Model Risk Assessment

Model risk assessment combines two factors: model influence (defined as the contribution of evidence derived from the AI model relative to other evidence) and decision consequence (defined as the significance of an adverse outcome from an incorrect decision). This step involves evaluating the potential for the AI model output to lead to incorrect decisions that could result in adverse outcomes, emphasizing the need for a thorough risk evaluation to mitigate potential negative impacts on regulatory decisions.
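
The guidance describes these two factors qualitatively and does not prescribe a scoring formula. As a purely illustrative sketch, model risk can be operationalized as a lookup in an influence-by-consequence matrix; the levels and risk tiers below are assumptions, not FDA-defined values.

```python
# Illustrative risk matrix: rows = model influence, columns = decision
# consequence. The guidance does not prescribe numeric levels or cutoffs;
# this 3x3 grid and its tier labels are assumptions for illustration only.
from enum import IntEnum

class Level(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

RISK_MATRIX = {
    (Level.LOW, Level.LOW): "low",       (Level.LOW, Level.MEDIUM): "low",
    (Level.LOW, Level.HIGH): "medium",   (Level.MEDIUM, Level.LOW): "low",
    (Level.MEDIUM, Level.MEDIUM): "medium", (Level.MEDIUM, Level.HIGH): "high",
    (Level.HIGH, Level.LOW): "medium",   (Level.HIGH, Level.MEDIUM): "high",
    (Level.HIGH, Level.HIGH): "high",
}

def model_risk(influence: Level, consequence: Level) -> str:
    """Combine model influence and decision consequence into a risk tier."""
    return RISK_MATRIX[(influence, consequence)]

# Example: a model whose output is the primary evidence (high influence)
# supporting a patient-safety decision (high consequence) is high risk.
print(model_risk(Level.HIGH, Level.HIGH))  # -> "high"
```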

Step 4: Develop a Plan to Establish AI Model Credibility within the COU

This involves creating a credibility assessment plan that outlines the activities and considerations necessary to establish the trustworthiness of the AI model outputs. The plan should be tailored to the specific COU and commensurate with the assessed model risk, ensuring a structured approach to validating the AI model’s applicability and reliability for its intended use. The credibility assessment plan should (a) describe the model and model development process, and (b) describe the model evaluation process.

(a) The Model and Model Development Process – FDA recommends that sponsors take the following steps in developing a credibility assessment plan:

  • Describe each model used and the rationale for choosing each, including descriptions of inputs and outputs; architecture; features (measurable properties of an object or event with respect to a set of characteristics); the feature selection process; and parameters (internal variables of a model that affect how outputs are computed);
  • Describe the training data (used in procedures and algorithms to build an AI model) and tuning data (used to evaluate a small number of trained AI models) used to develop the model (collectively referred to by the FDA as “development data”). The data should be relevant and reliable. The description should include the following information:
    • How development datasets were split into training and tuning data (see the sketch following this list);
    • Which model development activities were performed using each dataset;
    • How the development data has been or will be collected, processed, annotated, stored, controlled, and used for training and tuning of the AI model;
    • How the development data is fit for the COU; and
    • Whether the development data is centralized;
  • Describe how the model was trained, including: learning methodologies, performance metrics, regularization techniques, whether a pre-trained model was used, ensemble methods, AI model calibration, and quality assurance and control procedures for computer software.
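
As referenced above, the following is a minimal sketch of one of these activities: splitting a development dataset into training and tuning subsets. The 80/20 proportion and the fixed random seed are illustrative assumptions; the guidance does not prescribe a split ratio, only that the split and its use be documented.

```python
import random

def split_development_data(records, tuning_fraction=0.2, seed=7):
    """Split a development dataset into training and tuning subsets.

    The guidance distinguishes training data (used to build the model)
    from tuning data (used to evaluate a small number of trained models);
    the split ratio and seed here are assumptions, not FDA requirements.
    """
    shuffled = records[:]                  # copy so the source data is untouched
    random.Random(seed).shuffle(shuffled)  # fixed seed keeps the split reproducible and documentable
    cut = int(len(shuffled) * (1 - tuning_fraction))
    return shuffled[:cut], shuffled[cut:]

# Example: 100 development records -> 80 for training, 20 for tuning.
training, tuning = split_development_data(list(range(100)))
print(len(training), len(tuning))  # 80 20
```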

(b) The Model Evaluation Process – A description of the model evaluation process should include:

  • how the test data have been or will be collected, processed, annotated, stored, controlled, and used for evaluating the AI model;
  • how data independence was achieved;
  • the applicability of the test data to the COU;
  • the agreement between the model prediction and the observed data (see the sketch following this list);
  • rationale for the chosen model evaluation methods;
  • performance metrics used to evaluate the model;
  • limitations of the approach including potential biases; and
  • quality assurance and control procedures.
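
As referenced in the list above, the following is a minimal sketch of one evaluation activity: measuring agreement between model predictions and observed outcomes on independent, held-out test data. The binary-classification framing and the specific metrics (accuracy and sensitivity) are illustrative assumptions; the guidance expects sponsors to justify whichever metrics fit their COU.

```python
# Illustrative only: compare binary model predictions against observed
# outcomes on an independent test set and report simple agreement metrics.
def evaluate_agreement(predictions: list[int], observed: list[int]) -> dict:
    """Measure agreement between model predictions and observed data."""
    assert len(predictions) == len(observed), "test data must be paired"
    tp = sum(1 for p, o in zip(predictions, observed) if p == 1 and o == 1)
    tn = sum(1 for p, o in zip(predictions, observed) if p == 0 and o == 0)
    fn = sum(1 for p, o in zip(predictions, observed) if p == 0 and o == 1)
    total = len(observed)
    return {
        "accuracy": (tp + tn) / total,
        "sensitivity": tp / (tp + fn) if (tp + fn) else None,
    }

# Example: model predictions vs. observed outcomes on five test records.
print(evaluate_agreement([1, 0, 1, 1, 0], [1, 0, 0, 1, 0]))
# -> {'accuracy': 0.8, 'sensitivity': 1.0}
```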

Step 5: Plan Execution

This step involves carrying out the credibility assessment plan. The FDA notes in the draft guidance that engaging with the agency prior to execution can help set expectations and address potential challenges, and it highlights the importance of collaboration between sponsors (a person or entity that takes responsibility for and initiates a clinical investigation) and the FDA to ensure the AI model’s credibility and applicability.

Step 6: Results Documentation

This step requires documenting the outcomes of the credibility assessment activities and any deviations from the initial plan. The results should be compiled in a credibility assessment report, which establishes the AI model’s credibility for the COU, ensuring transparency and accountability in the AI model’s evaluation process.

Step 7: Adequacy Determination

Based on the documented results, this final step assesses whether the AI model is appropriate for the intended COU. If the model’s credibility is not sufficiently established, various outcomes are possible, including downgrading model influence, increasing the rigor of credibility assessment activities, or revising the model’s COU. These options underscore the iterative nature of assessing and ensuring an AI model’s adequacy for its intended regulatory application.

Other Considerations

The draft guidance emphasizes the importance of life cycle maintenance, defined as “a set of planned activities to monitor and ensure the model’s performance and its suitability throughout its life cycle for the COU.” Because a model’s performance can change over time and across environments, the draft guidance recommends that performance metrics be monitored on an ongoing basis to ensure that the model remains fit for use and that appropriate changes are made to the model as needed.
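
As a minimal sketch of what such ongoing monitoring could look like, the check below compares a freshly computed performance metric against a documented baseline and flags the model for review when performance drifts. The baseline value, tolerance, and review action are assumptions for illustration; the guidance does not specify thresholds.

```python
# Illustrative life cycle maintenance check: recompute a performance
# metric on fresh data at regular intervals and flag the model for
# review when performance drops. Baseline and tolerance are assumed.
BASELINE_ACCURACY = 0.90   # accuracy documented in the credibility assessment (assumed)
ALERT_THRESHOLD = 0.05     # tolerated drop before triggering review (assumed)

def check_model_fitness(current_accuracy: float) -> str:
    """Compare ongoing performance against the documented baseline."""
    if BASELINE_ACCURACY - current_accuracy > ALERT_THRESHOLD:
        return "review: performance drift detected; reassess model for the COU"
    return "ok: model remains fit for use"

# Example monthly monitoring readings
for month, acc in [("Jan", 0.91), ("Feb", 0.89), ("Mar", 0.82)]:
    print(month, check_model_fitness(acc))
```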

The FDA also emphasizes engagement, encouraging sponsors and other interested parties to contact the agency to “set expectations” and “help identify potential challenges.”

Potential Implications

The FDA draft guidance sets out a 7-step process to establish and assess the credibility of AI model outputs for drug and biological products, proposing a framework that can be used by individuals and entities involved in the drug product life cycle. The framework is intended to promote credible AI models for drugs and biological products and to bring consistency and standardization to the processes used.

Furthermore, the framework has the potential to be applied more broadly to other AI model outputs in health care contexts. In proposing the draft guidance, the FDA cites various “examples of AI uses for producing information or data intended to support regulatory decision-making,” including the use of predictive modeling, the integration of data from various sources, and the processing and analysis of large data sets. We expect other subagencies of the Department of Health and Human Services (HHS) to release further guidance related to the use of AI in health care in the coming months and years; however, the timing and content of that guidance remain to be seen given the change in administration.

Public comment on the FDA draft guidance may be submitted until April 7, 2025. Organizations may wish to submit comments on this guidance, particularly as the AI regulatory landscape takes shape under the new administration. Contact a Crowell & Moring professional for further information.

Tai Williams

Tai is an associate in Crowell & Moring’s Washington, D.C., office and a member of the firm’s Health Care and International Dispute Resolution groups. In her health care practice, Tai counsels and represents managed care organizations, insurers, health care providers, and health care technology companies in various regulatory, transactional, and litigation matters.

Roma Sharma

Roma Sharma is an associate in Crowell & Moring’s Washington, D.C. office and a member of the firm’s Health Care Group. Roma primarily works with health care clients seeking to comply with regulations for state and federal health care programs, health care anti-fraud and abuse laws, and licensing laws.

Roma’s work incorporates her Master of Public Health degree in Health Policy as well as her past experiences as an extern at the Office of the General Counsel at the American Medical Association and as an intern at the Illinois Office of the Attorney General, Health Care Bureau.

Stephen Holland

Stephen Holland is a senior counsel in Crowell & Moring’s Government Affairs Group, where he leverages his extensive experience advising members of Congress and their staff as a policy advisor and attorney active in health care legislation. Stephen has been responsible for crafting dozens of provisions in law to improve food, drug, and medical device innovation and regulation at the Food and Drug Administration (FDA), health coverage and access, public health communication and coordination, prescription drug affordability, and emergency preparedness and response.

Prior to joining Crowell, Stephen served in senior policy roles in the U.S. House of Representatives for over 10 years. Most recently, Stephen spent five years on the Energy and Commerce Committee staff under the leadership of Ranking Member and former Chairman Frank Pallone of New Jersey.  On the Committee staff, he was responsible for legislative action related to numerous agencies and programs, including the FDA, the Biomedical Advanced Research and Development Authority (BARDA), and the 340B drug program. Notably, his work on the Committee included leading negotiations and drafting of the Food and Drug Omnibus Reform Act of 2022 (FDORA), a package of more than 50 policies to expand research, development, and innovation for drugs, medical devices, and personal care products. During the COVID-19 response, Stephen worked to secure billions of dollars for research, development, distribution, and promotion of vaccines, treatment, and diagnostic tests in the CARES Act, the Fiscal Year 2021 Omnibus, and the American Rescue Plan Act.

Linda Malek

Linda Malek is a partner in Crowell’s Health Care and Privacy & Cybersecurity Groups, where she advises a broad array of health care and life sciences clients on compliance with federal, state, and international law governing clinical research, data privacy, cybersecurity, and fraud and abuse. Her clients include national hospital systems and academic medical centers, genetic and biotechnology companies, pharmaceutical companies, medical device companies, financial institutions involved in health care services, research foundations, and international scientific organizations.

In the healthcare context, Linda is particularly focused on regulatory compliance issues related to clinical research and clinical trials. She creates and implements comprehensive policies governing the conduct of research involving human subjects, and advises clients on human subject research compliance issues. Linda also counsels on legal issues related to conducting secondary research on existing data repositories and tissue banks, including on data privacy and informed consent issues related to the ability to conduct future research. She has experience advising clients on a wide variety of research areas, including biologics, pharmacogenomics, translational research, secondary research, tissue banking, and data repositories. Linda also advises clients in general health care matters related to fraud and abuse, including issues under the Stark laws and federal and state anti-kickback statutes. Her work includes structuring complex transactions in compliance with such laws, assisting in the creation of internal compliance programs, and advising on issues related to the False Claims Act.