“The classic thing with AI is that the hard stuff is easy and the easy stuff is hard. It can do math I cannot do, but it cannot do the reasoning I find easy.”
In this Let’s Talk Risk! conversation, we discuss key challenges and opportunities for applying Artificial Intelligence/Machine Learning (AI/ML) in MedTech. It was an open discussion with a live audience, held as part of the weekly Let’s Talk Risk! series on LinkedIn.
AI/ML applications in MedTech are growing rapidly. FDA has authorized nearly 1,000 such applications, and this number is only expected to grow. Our conversation covered a variety of topics in this rapidly evolving field.
The discussion included comments from Emanuel Tkach, MD, Bijan Elahi, Edwin Bills, Rafael Pozos, Wag Hanna, Phil Deming, Andy David, and Ritam Priya.
Jump to a section of interest using these timestamps.
00:03:30 Key factors related to AI/ML applications in MedTech
00:05:30 Dynamic nature of AI/ML causing performance drift
00:07:30 Upcoming ISO guidance on risk considerations for AI/ML applications
00:09:00 Keeping the human in the loop
00:10:25 Data quality issues and best practices for AI/ML
00:12:17 Cybersecurity considerations affecting safety
00:14:20 Lessons learned from clinical evaluation of conventional devices
00:16:25 Is agile software development for AI/ML too slow?
00:19:12 Treating AI/ML as a tool and a team member, and its limitations
00:21:35 Watch out for human over-reliance on AI/ML
00:23:30 A few examples of AI/ML applications in MedTech
00:27:44 Experience with ChatGPT prompts
00:32:22 Closing comments and key takeaways
If you enjoyed this podcast, consider subscribing to the Let’s Talk Risk! newsletter.
Suggested links:
LTR: AI/ML in MedTech
FDA: QA/RA aspects of AI/ML devices
Disclaimer
Information and insights presented in this podcast are for educational purposes only. Views expressed by all speakers are their own and do not reflect those of their respective organizations.