Artificial intelligence (AI) is rapidly unlocking new applications in the medical arena, and it is challenging to keep up with a field moving so fast. ChatGPT was launched just a couple of years ago, and we are already talking about artificial general intelligence (AGI) for medical purposes!
In this Let’s Talk Risk! conversation, Prof. Papademetris shares that indeed we are in a new game with plenty of opportunities. But we have to start learning in a new way. We will fail in non-obvious ways, and we will fail often. How do we harness the power of this technology without compromising patient safety or other ethical concerns?
Case in point: AI is just one part of the overall system in which a medical device needs to operate safely. We need to design the overall software for its intended use by considering the system-level requirements. It is the whole software that must serve its intended purpose when used within the broader healthcare system.
“People drive cars, not an engine!”, Prof. Papademetris reminds us so eloquently in this conversation.
Patient safety is an important factor for a medical device. So far, FDA has managed the risk by including a human in the decision-making loop and not allowing a stand-alone, autonomous AI system for clinical decisions. An important question is how to share information with the user. There is a human factors component to AI, which needs to be carefully considered.
In practice this proves to be quite challenging.
Application of AI in medical software requires a true cross-functional approach. As Prof. Papademetris reminds us, we need to learn to speak each other’s language:
The biggest challenge we face is not being good in our own discipline, but becoming at least competent in other disciplines to enable a truly cross-functional problem-solving approach when building medical software.
In the rapidly evolving world of AI, we need a new approach to medical software development. In traditional software development, data was secondary to code; now code is becoming fairly standard, and the key question is about data. How well does it represent the use cases you intend to cover? Bias in the model is a serious concern.
Listen to this Let’s Talk Risk! conversation with Prof. Papademetris which also includes an open discussion with the audience.
About Prof. Xenophon Papademetris
Xenophon Papademetris is currently a Professor at the Yale University School of Medicine, where he focuses on research in image analysis and software development. He has recently launched a certificate course for industry professionals to provide a comprehensive understanding of both the technical and regulatory aspects of medical software and medical AI. He is also the lead author of a new textbook, Introduction to Medical Software: Foundations for Digital Health, Devices and Diagnostics (Cambridge University Press, 2022), and the main instructor for the companion Coursera class.
About Let’s Talk Risk! with Dr. Naveen Agarwal
Let’s Talk Risk! with Dr. Naveen Agarwal is a weekly live audio event on LinkedIn, where we discuss risk management topics in a casual, informal way. Join us at 11:00 am EST every Friday on LinkedIn.
Disclaimer
Information and insights presented in this podcast are for educational purposes only. Views expressed by all speakers are their own and do not reflect those of their respective organizations.