
AI to Advance Patient Safety

Wendy Chapman

In November, I attended the AI to Advance Patient Safety Summit in San Diego. Experts from diverse fields — patient safety, AI research, health care, venture capital and policy — came together for thought-provoking discussions on how to prioritize patient safety while leveraging AI's transformative potential in health care.



I took away several key messages:

  1. We should not be thinking about implementing AI into the system we have today. If your AI solution can just plug into the current system, it’s probably not going to make a big change in healthcare. How can we leverage AI to transform the inadequate system we have?

  2. Most AI is not ready for prime time. We must invest in turning scientific discoveries into practical, real-world healthcare solutions. This includes evaluating and improving workflows, preparing the workforce through training and planning, and building the necessary infrastructure (see this article for more).

  3. We don’t have all the pieces yet. We should be thinking not just about what we should do, but about who should be doing it and how.


“We have Paleolithic emotions, medieval institutions, and god-like technology.” Edward O. Wilson, debate at the Harvard Museum of Natural History, Cambridge, Mass., 9 September 2009

But let me start from the beginning - per the Chatham House Rule, I will share the ideas that were discussed without attributing them to any particular person.


We haven’t made improvements in safety since To Err Is Human


The SafeCare Study, published in 2023, showed that 24% of admitted patients experienced an adverse event, 9% of admissions suffered serious harm, and 23% of adverse events were judged preventable. Twenty-three years after To Err Is Human, and no improvement.


AI can potentially predict adverse events before they occur. Will plugging AI algorithms into our current systems help us improve patient safety?

  • Currently, when adverse event reporting is automated, the number of events reported increases 25-fold

  • Some organizations turn on automatic reporting, discover the amount of harm being done, and then promptly turn it off

  • Do we have the teams and processes to intervene and prevent the predicted adverse events?

  • We will need to re-organize how we approach safety, but isn’t it paradoxical to think we are going to use AI, which is quite unsafe, to improve patient safety?

  • New technologies and processes will be needed - should we be thinking about patient safety as a public good? 


AI brings new risks 


AI will introduce more patient safety issues, and we need a code of conduct.

  • Human vigilance is not the answer

  • AI assurance labs won’t be enough - your own organization will be different

  • Certification misses so many things, because it’s the implementation that will determine the success or failure of AI

  • Complex adaptive systems theory says we can’t control outcomes, but a handful of guidelines can set the context for an increased likelihood of success, so the National Academy of Medicine has released an AI Code of Conduct. 




It was great to see Don Detmer, who was a member of the committee that produced To Err Is Human.


Who should use AI?


AI governance needs to integrate with existing governance structures. A key part of governance will be transparency: who is the consumer of the AI output? Do they have the knowledge needed to leverage it? Patients are using AI (see #PatientsUseAI), and they can more easily join the safety movement now.

  • Why don’t we let patients report safety events?

  • Why don’t we mine patient-generated data more? Why is patients’ data not connected to healthcare safety reporting? AI models don’t include their data

  • We could arm patients with tools to pursue safety, but is there a business case for patient-facing AI tools? 


Industry and AI Regulation


There is a lot of evidence that some vendors are putting AI into existing systems without disclosing it. AI is not coming out of DARPA the way the internet did; it is coming out of the market without surveillance. The FDA statute governing device regulation was written 50 years ago, when a device was just a piece of hardware. It never contemplated software, iterative learning, or the fact that AI output can be continually updated rather than delivered as a single discrete event from a device. We need modern regulation, because without it, why would a vendor feel a business need to say, “I’d better get my AI tool assured”? Vendors will need to be compelled to reach a higher bar than the one that currently exists.

New AI companies are launching constantly, but even for companies with good governance and structures, strong development shops, and healthcare collaborations, there is no playbook. Some important questions remain unanswered:

  • How should regulation work when it is not feasible to require getting new approval every time you update an algorithm?

  • How do you get something out to market before you run out of cash? AI will become more expensive when we do it right.


Some sage advice that stood out to me:


Some say “AI has to fit into my workflow” - not if your workflow is bad and outdated! 

What are you using AI for? Don’t try to use AI to fix bad tech. If you have bad tech, fix it before implementing AI.
