Applications of AI in medical devices are growing rapidly. The regulatory environment in both the US and the EU is also changing fast. In this dynamic environment, it is important to stay updated and practice a flexible approach to both risk management and your regulatory strategy.
Listen to the brief audio summary above for an overview of the emerging regulatory environment in these two major jurisdictions and key takeaways for risk practitioners and regulatory professionals.
The regulatory environment is changing rapidly, but there is new guidance
There is good news! A new guidance document in the form of a questionnaire was recently published by Team-NB, the European Association of Medical Devices Notified Bodies. The joint Team-NB/IG-NB Questionnaire on Artificial Intelligence in Medical Devices [1] offers device manufacturers a process-oriented roadmap to demonstrate conformity to the EU-MDR [2] (or EU-IVDR) requirements.
The term "risk(s)" appears 50 times in this questionnaire, underscoring risk management as a critical factor in ensuring the safety and effectiveness of AI devices throughout their lifecycle. Out of a total of 189 questions across 26 categories, 32 (17%) are explicitly related to risk management!
FDA’s regulatory approach is considerably less prescriptive and more collaborative. The regulatory framework for pre-market review is no different for AI-enabled devices than for medical devices in general, including Software as a Medical Device (SaMD). A majority of the nearly 1,000 AI/ML-enabled devices have been authorized as Class II devices, through either the De Novo or the 510(k) pathway. The most important requirement is to demonstrate safety and effectiveness through valid scientific evidence that the probable benefits of the intended use outweigh the probable risks [3].
Let us take a closer look at the emerging regulatory environment in the US and EU
First, there is broad alignment at a high level between FDA and the EU
At a high level, both the FDA and European notified bodies are generally aligned on the need to demonstrate the safety and effectiveness of AI-enabled medical devices. Here are three specific areas of convergence between the two jurisdictions:
1. Focus on safety and effectiveness
Both the FDA and the European approach, as reflected by Team-NB, prioritize patient safety and the effectiveness of AI-enabled medical devices.
2. Recognition of AI’s unique challenges
Both recognize that AI presents unique regulatory challenges due to its complexity, iterative nature, and reliance on data.
3. Importance of real-world monitoring
Both emphasize the need for ongoing monitoring of AI-enabled devices in real-world settings to ensure safety and performance; the sketch below illustrates one way such monitoring can be operationalized.
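To make the monitoring point concrete, here is a minimal sketch of one common post-market drift check: the Population Stability Index (PSI), which compares the distribution of a model input observed in the field against the distribution seen during development. The synthetic data, the choice of feature, and the 0.2 alert threshold are illustrative assumptions, not requirements drawn from FDA or Team-NB guidance.

```python
import numpy as np

def population_stability_index(baseline, field, bins=10):
    """PSI between a development-time feature distribution and the
    distribution of the same feature observed after deployment."""
    # Bin edges come from the baseline (development/validation) data.
    edges = np.histogram_bin_edges(baseline, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range field values
    base_counts, _ = np.histogram(baseline, bins=edges)
    field_counts, _ = np.histogram(field, bins=edges)
    eps = 1e-6  # avoid log(0) for empty bins
    base_pct = base_counts / base_counts.sum() + eps
    field_pct = field_counts / field_counts.sum() + eps
    return float(np.sum((field_pct - base_pct) * np.log(field_pct / base_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 5000)  # feature values from validation data
field = rng.normal(0.4, 1.2, 5000)     # the same feature in the field, drifted
psi = population_stability_index(baseline, field)
print(f"PSI = {psi:.3f}")  # conventional rule of thumb: > 0.2 warrants review
```

In a real surveillance program, a statistic like this would be computed routinely on incoming data and tied to defined investigation and corrective-action triggers.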
Second, the Team-NB approach focuses on certifiability using a process-oriented questionnaire
1. Process-oriented approach for safety
The European approach, as evidenced by the questionnaire, focuses on ensuring the safety of AI-based medical devices through a comprehensive evaluation of processes throughout the device lifecycle.
2. Detailed requirements and documentation
The questionnaire outlines specific requirements for documentation, competence of development teams, risk management, data management, model development, and post-market surveillance.
3. Emphasis on certifiability
The questionnaire highlights the challenges of certifying AI-based medical devices, particularly those with self-learning capabilities, and emphasizes the need for robust validation processes.
4. Consideration of AI-specific security risks
The questionnaire addresses AI-specific cybersecurity risks such as adversarial attacks and emphasizes the importance of security lifecycle management; a toy adversarial example follows below.
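As a concrete illustration of the adversarial risk the questionnaire calls out, the sketch below applies the fast gradient sign method (FGSM) to a toy logistic-regression "device model" in plain NumPy. The weights, input, and perturbation budget are all invented for illustration; real attacks target deep networks (for example, on medical images) but exploit the same principle of small, deliberately crafted input perturbations.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy "device model": logistic regression with fixed, made-up weights.
w = np.array([1.5, -2.0, 0.8])
b = 0.1

def predict(x):
    return sigmoid(x @ w + b)  # probability of the positive class

# A benign input the model correctly classifies as negative (y = 0).
x = np.array([0.2, 0.5, -0.3])
y = 0
print(f"clean input:       p(positive) = {predict(x):.3f}")

# FGSM: nudge the input in the direction that increases the loss.
# For logistic loss, the gradient w.r.t. the input is (p - y) * w.
grad = (predict(x) - y) * w
eps = 0.25                       # attack budget (illustrative)
x_adv = x + eps * np.sign(grad)
print(f"adversarial input: p(positive) = {predict(x_adv):.3f}")
```

A perturbation of at most 0.25 per feature flips the prediction across the 0.5 decision threshold, which is exactly the failure mode that adversarial-robustness testing and security lifecycle management are meant to surface.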
Finally, FDA’s approach is more collaborative and adaptive
1. Collaborative and adaptive
The FDA emphasizes collaboration with stakeholders (developers, patients, academia, global regulators) and a commitment to adapt regulations to the rapidly evolving AI landscape.
2. Focus on bias mitigation and health equity
The FDA prioritizes addressing bias in AI algorithms and promoting health equity by ensuring data representativeness.
3. Emphasis on lifecycle management
The FDA stresses the importance of managing AI applications throughout the medical product lifecycle, from design to deployment, monitoring, and maintenance.
4. Commitment to guidance and regulatory science
The FDA is actively developing guidance documents and supporting research to address the unique challenges of evaluating and regulating AI in medical products.
Key takeaways for risk practitioners and regulatory professionals
In this rapidly changing environment, it is critical for risk practitioners and regulatory professionals to stay current with evolving regulatory approaches. Here are three key takeaways to keep in mind:
1. Practice a flexible and adaptable approach to risk management
Risk practitioners and regulatory professionals need to stay informed of the latest developments and adjust their practices accordingly. They must also anticipate future changes and build flexibility into their risk management frameworks and compliance strategies.
2. Understand and address bias in AI systems
Identifying and quantifying bias in AI systems can be complex. Risk practitioners and regulatory professionals need to develop robust methodologies for assessing bias and its potential impact on patient safety and health equity. This includes understanding the sources of bias in training data, evaluating the fairness of AI algorithms, and implementing strategies for monitoring and mitigating bias in deployed systems; a minimal sketch of one such bias check follows after this list.
3. Apply a tailored approach to address regulatory concerns in each market
The FDA is primarily focused on the end product and its intended use, while the EU is taking a more process-oriented approach that emphasizes the entire AI lifecycle. These differing approaches may lead to varying risk profiles and require adjustments to risk management strategies depending on the target market. Risk practitioners and regulatory professionals need to carefully consider these differences and develop tailored strategies that meet the specific requirements of each jurisdiction.
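Returning to the second takeaway, here is a minimal sketch of what quantifying bias can look like in practice: comparing a model's sensitivity (true positive rate) across two patient subgroups. The data, the subgroup label, and the simulated error rates are synthetic assumptions made purely for illustration.

```python
import numpy as np

def sensitivity(y_true, y_pred):
    """True positive rate: fraction of actual positives the model catches."""
    positives = y_true == 1
    return (y_pred[positives] == 1).mean()

rng = np.random.default_rng(42)
n = 2000
group = rng.integers(0, 2, n)   # synthetic subgroup label (e.g., sex or site)
y_true = rng.integers(0, 2, n)  # synthetic ground-truth diagnosis

# Simulate a model that misses more true positives in group 1 (embedded bias).
miss_rate = np.where(group == 0, 0.10, 0.30)
y_pred = np.where((y_true == 1) & (rng.random(n) < miss_rate), 0, y_true)

for g in (0, 1):
    mask = group == g
    print(f"group {g}: sensitivity = {sensitivity(y_true[mask], y_pred[mask]):.2f}")
# A large sensitivity gap between subgroups is a bias signal that should
# trigger investigation and mitigation before (and after) deployment.
```

The same pattern extends to other fairness metrics (specificity, calibration, predictive value) and, critically, to ongoing post-market checks rather than a one-time pre-market analysis.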
In conclusion
It is clear that AI applications in MedTech will continue to grow. We are still in the early phase of AI adoption, especially in healthcare.
At the same time, the regulatory environment is evolving rapidly. Both the FDA and the EU are moving fast to catch up with the technology. While a focus on safety and effectiveness remains the centerpiece of the regulatory approach, there are distinct differences between these two major jurisdictions. The good news is that new guidance from these regulators continues to come out, clarifying their latest thinking.
Risk management is an essential aspect of regulatory focus. There are new and emerging concerns about the risks associated with AI/ML devices. Risk practitioners and regulatory professionals must stay current and develop flexible, adaptable, and tailored strategies to respond to this dynamic regulatory environment.
Disclaimer
This article was prepared with the help of Google NotebookLM [4], an artificial intelligence-enabled research assistant, using the following sources:
FDA white paper: Artificial Intelligence & Medical Products [5].
Team-NB - Questionnaire: Artificial Intelligence in Medical Devices [1].
FDA: Good Machine Learning Practice for Medical Device Development [6].
IMDRF: Good machine learning practice for medical device development [7].
Notes created using Google NotebookLM in response to user prompts.
All output(s), including the audio summary, were reviewed by a human for accuracy and relevance. This article is intended for educational purposes only and should not be considered as regulatory advice.
1. Team-NB Position Paper - Questionnaire: Artificial Intelligence in Medical Devices, issued Nov 14, 2024.
2. Regulation (EU) 2017/745 of the European Union, effective May 2021; website accessed Dec 2, 2024.
3. FDA: Factors to Consider When Making Benefit-Risk Determinations in Medical Device Premarket Approval and De Novo Classifications, final guidance issued August 2019.
4. Google: NotebookLM, accessed December 02, 2024. NotebookLM does not use any source of information except those specifically provided by the user.
5. FDA: Artificial Intelligence & Medical Products: How CBER, CDER, CDRH, and OCP are Working Together, issued March 2024.
6. FDA: Good Machine Learning Practice for Medical Device Development: Guiding Principles, issued October 2021.
7. IMDRF: Good machine learning practice for medical device development: Guiding principles, IMDRF/AIWG/N73 DRAFT, 2024.