How QA/RA professionals can help drive innovation in AI/ML
Insights from a Let's Talk Risk! conversation with Michael Bocchinfuso
Note: this article highlights key insights gained from a conversation with Michael Bocchinfuso as part of the Let’s Talk Risk! with Dr. Naveen Agarwal series on LinkedIn. Listen to the full recording of the discussion below.
Summary
Here are a few key points that emerged from our discussion:
Understand the difference between AI and ML
Clearly articulate and emphasize clinical benefit
Anticipate regulatory questions about model bias
Consider risks arising from the use environment
Collaborate and facilitate to help optimize for both safety and effectiveness
Understand the difference between AI and ML
There is a lot of hype around artificial intelligence (AI) and machine learning (ML) these days. That is why it is important for Quality/Regulatory professionals to go beyond the hype and understand these rapidly emerging technologies at a deeper level.
It is not uncommon for these terms to be lumped together in the context of medical devices. As an example, you might hear the phrase “AI/ML enabled medical devices”[1]. But is it just AI, or ML, or AI combined with ML? This can be confusing not just to patients and users, but also to engineers and other professionals directly involved in developing AI/ML enabled medical devices.
The difference between AI and ML is subtle, but important. AI refers to the broad category of machines (hardware or software) performing human-like tasks. Think of a pitching machine in a batting cage or robots performing repetitive tasks on an assembly line. On the other hand, ML is a specific subset of AI technologies that utilize vast amounts of data to “learn”, almost in a human-like manner, to create new and important insights. If the pitching machine is able to learn in real time so it can also throw an occasional curve ball based on an individual player’s hitting pattern, then it is considered to have intelligence based on machine learning.
Machine learning (ML) is the underlying algorithm within the product that helps it to maintain and improve its performance. The artificial intelligence (AI) aspect is performing an action that, theoretically, a human could do.
In short, all ML is AI but not all AI is ML!
The adaptive and learning nature of an ML algorithm embedded within a medical device is what makes it unique from the risk management point of view.
Clearly articulate and emphasize clinical benefit
Interest in using AI/ML technologies in medical applications is rising, as reflected in the rapid increase in the number of devices recently approved or cleared by the FDA. However, given the limited technical understanding of this emerging technology, it is not easy to satisfactorily answer the regulatory question of safety and effectiveness.
That is why it helps to clearly define the intended use and clinical benefits when presenting your case to the regulatory authorities.
A major benefit of using machine learning algorithms in medical devices is improved efficiency and accuracy of the clinical decision making process. 75% of the more than 500 AI/ML enabled medical devices authorized by the FDA so far are in the Radiology category, broadly used for enhanced image processing to facilitate a more efficient and accurate clinical diagnosis[2]. The intended use is limited to serving as an aid in diagnosis and not as a diagnostic device.
In short, AI/ML is generally deployed in a medical device as a reliable assistant to the clinician and not as a substitute for their judgment. As an example, it can quickly process radiological images and provide a reasonably reliable likelihood of disease based on a learning model using tens of thousands of previous images. It does not take a coffee break and it does not get tired! As a result, the clinician can prioritize their time with patients who are most likely to be at risk rather than having to manually review and screen every individual case.
In the near term, we are likely to see increasing use of AI/ML to lower the administrative burden and to streamline clinical decision making at an individual practice level. As these technologies mature and gain clinical confidence, we will see applications in triage and prioritization of treatment at an individual patient level based on risk.
Anticipate regulatory questions about model bias
Systematic bias in the machine learning model is a common concern of regulatory authorities when evaluating broad applicability within the scope of the intended use.
Machine learning models are developed using a training data set and tested on a separate, independent test data set. It is an iterative process which involves optimization for both accuracy and generalizability of model prediction.
The training and test data sets should be representative of the intended patient population, characterized by age, gender, sex, race, ethnicity and other relevant demographic parameters[3]. However, there is always a trade-off between generalizability, accuracy and precision. Achieving the right mix of quantity and quality in the data sets proves to be very challenging in practice.
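One common way to keep a test set demographically representative of the training data is stratified sampling: splitting within each demographic group rather than across the whole data set at once. The following is a minimal sketch of that idea in plain Python; the record structure, field names and split fraction are illustrative assumptions, not any specific manufacturer's or FDA-prescribed method.

```python
import random
from collections import defaultdict

def stratified_split(records, key, test_fraction=0.2, seed=42):
    """Split records into training and test sets while preserving the
    proportion of each demographic group (e.g. sex, age band) in both sets."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    # Bucket records by the chosen demographic attribute
    groups = defaultdict(list)
    for record in records:
        groups[record[key]].append(record)
    train, test = [], []
    # Split each demographic group separately at the same fraction
    for members in groups.values():
        rng.shuffle(members)
        n_test = round(len(members) * test_fraction)
        test.extend(members[:n_test])
        train.extend(members[n_test:])
    return train, test

# Hypothetical patient records with a single demographic attribute
records = [{"id": i, "sex": "F" if i % 3 else "M"} for i in range(100)]
train, test = stratified_split(records, key="sex")
```

Because each group is split at the same fraction, a group that makes up, say, one third of the data also makes up roughly one third of the test set, which helps surface performance gaps across subgroups during model evaluation.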
It is useful to engage with regulatory authorities early in your development process. In the US, the FDA has established the Q-Submission program[4] to help medical device manufacturers request written feedback on their potential or planned activities during device development. This can help in planning your activities related to data collection, model development and testing to improve the chances of success during the final regulatory review.
In the end, it is important to establish trust in the model prediction within the parameters of each use case. Aligning your development plan with regulatory strategy is critical to success. Model development and testing should be done in an iterative manner to optimize the outcomes against the full range of constraints associated with the scope of the intended use.
A good source to learn about FDA’s expectations and emerging regulatory framework for AI/ML enabled devices is the 2021 medical device action plan[5]. In this action plan, the FDA shares that it is supporting numerous regulatory science research efforts to develop methods for the identification and elimination of model bias.
Consider risks arising from the use environment
The whole idea of risk for software as a medical device (SaMD) broadly - and more specifically with AI/ML - is different in many ways from conventional hardware-only devices. Although the use environment introduces new risks for all medical devices, SaMD with AI/ML is particularly vulnerable to variations in the IT infrastructure and technical knowledge of the users.
Knowledge of the real-world use of an AI/ML enabled device is important for anticipating use-related failures and the potential sequence(s) of events that may lead to harm. Developers of AI/ML enabled devices are highly competent from a technical point of view, but they often have limited understanding of the real-world use environment.
The human element is always an interesting factor! Considering who the end user is, their level of technical knowledge, and how they interact with the device and use its outputs is important for developing risk mitigation strategies during development.
Each sandbox is going to be a little bit different. No two medical facilities are exactly the same in terms of their IT infrastructure. Some level of customization at the local level is expected to smoothly operate an AI/ML enabled device within the constraints of the IT infrastructure. As an example, there might be special rules required to handle certain types of images in the AI/ML model which would otherwise be considered non-compliant based on certain pre-existing criteria.
A good source to learn about risk management of AI/ML enabled devices, within the ISO 14971 framework, is a recently issued consensus report by AAMI, the Association for the Advancement of Medical Instrumentation[6]. This report offers insights on how risk management systems and processes can be adapted for AI/ML enabled devices by identifying safety-related characteristics in five areas: data management; bias; data storage, security and privacy; over-trust; and adaptive systems. It is expected that this consensus report will be reviewed and adopted by ISO as a guidance, or as an informative annex to ISO/TR 24971.
Collaborate and facilitate to help optimize for both safety and effectiveness
Having gone from a pure engineering background to a quality and regulatory role, Michael recommends that QA/RA professionals develop an understanding of the engineer’s mindset on optimizing for performance, and help balance it with the regulatory expectation of optimizing for safety.
The fastest car may not necessarily be the safest. It also needs safety features such as seatbelts, which are also required to comply with regulations.
QA/RA professionals can play an important role early in product development by increasing awareness of the regulatory requirements. But it is not sufficient to simply list these requirements and then step back, following up later only on documentation and traceability. Understanding the engineering mindset is important.
Communication and collaboration are critical to mission success! By working closely and collaboratively with engineers, QA/RA professionals can build a better awareness of the constraints and trade-offs involved in optimization; documentation alone shows what was done, but not why it was done that way. In turn, they can help improve their team’s understanding of the regulatory requirements, and why they are necessary.
Collaboration with our cross-functional partners offers us an opportunity to facilitate an inclusive conversation that leads to a solution that is optimal for both performance and safety, while also being compliant.
We are going to see more of AI/ML enabled devices in the near future, not less. In this fast moving regulatory environment, QA/RA professionals are uniquely positioned to play a leading role in helping their organizations quickly get to market with highly innovative and safe medical devices.
About Michael Bocchinfuso
Michael Bocchinfuso is currently the Director of Regulatory Compliance and Quality at Koios Medical. He is an electrical and software engineer by training. Early in his engineering career, he worked in avionics designing PCBs, and testing and improving the reliability of those systems. He then moved into a Quality Engineer role in the medical device industry, helping to ensure high quality standards and compliance during product development and beyond. In his current role, he closely collaborates with product development teams working on AI/ML enabled medical devices in radiology applications, while maintaining an effective and compliant QMS and coordinating regulatory submissions in countries around the world.
About Let’s Talk Risk! with Dr. Naveen Agarwal
Let’s Talk Risk! with Dr. Naveen Agarwal is a weekly live audio event on LinkedIn, where we talk about risk management related topics in a casual, informal way. Join us at 11:00 am EST every Friday on LinkedIn.
Disclaimer
Information and insights presented in this article are for educational purposes only. Views expressed by all speakers are their own and do not reflect those of their respective organizations.
See 10 guiding principles of good machine learning practice for medical device development identified by the US FDA, Health Canada and UK’s MHRA
See FDA guidance document on the Q-Submission Program
See AAMI CR 34971:2022, Guidance on the Application of ISO 14971 to Artificial Intelligence and Machine Learning.