Wendy Chapman

Is AI in Health Too Risky?

Before I get into that...

I am in Pittsburgh, Pennsylvania right now on a business trip that includes a workshop at the NIH. We lived here for 10 years (2000-2010). If you are feeling like having a bit of fun, take the quiz and let me know how you can tell I'm in the US based on these pictures. And in the spirit of Regional Linguistics Quirks: in Pittsburgh the plural form of "you" is "yinz", and a derogatory term for people in this region is "yinzers".



Continuing my thoughts on AI, I'd like to share a foreword I wrote for an upcoming textbook on AI in Healthcare.


Imagine a world where you receive treatment that is precisely targeted to give you the best outcome possible; where care is efficient and safe; and where you have guidance in managing your health at home and are a partner in your health care. We dream of a world like this, and you as a reader of this book probably see a role for AI in making that dream come true.


Rear Admiral Grace Hopper, developer of the first compiler for a computer language, repeated a motto coined by John August Shedd: “A ship in port is safe; but that is not what ships are built for. Sail out to sea and do new things.” In applying AI to healthcare, we are leaving a safe harbor to do something hard and something risky.


I left that safe harbor in 1994 when I pivoted from my humanities degree in linguistics and Chinese to enroll in a graduate program in medical informatics at the University of Utah. My husband introduced me to the field, and my love of language steered me to the new world of natural language processing. In classrooms, I raised my hand to ask about the meaning of basic words like “algorithm,” “heuristic,” and “MI,” and as the only female in the campus computer lab at 2 am, I sat with my two-year-old son asleep at my feet, trying to get my linked list to compile. I was an outsider in the world of computers and AI, and I was an outsider in the world of healthcare. But I brought unique experience, and by embracing it I have been able both to develop methodological innovations and to apply them to problems like disease surveillance and the creation of research cohorts from electronic health record data.


The department where I studied was chaired by Homer R. Warner, who founded the department in 1964.


Homer Warner…developed in 1961 the first computerized program for diagnosing disease. Basing it on 1,000 children with various congenital heart diseases, Warner showed that Bayes’ [theorem] could identify their underlying problems quite accurately. “Old cardiologists just couldn’t believe that a computer could do something better than a human,” Warner recalled.


Homer sailed from a safe harbor and, in doing so, he developed one of the first electronic medical records to collect data not only to improve access to information but ultimately to transform the diagnostic and patient care process through building intelligence into the system.
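For readers curious about the mechanics, the Bayesian approach Warner applied can be sketched in a few lines. This is a minimal illustration, not his actual program: the diseases, findings, and probabilities below are invented, and it assumes findings are conditionally independent given the disease (the "naive Bayes" simplification).

```python
# A minimal sketch of Bayesian diagnosis in the spirit of Warner's 1961
# program. Diseases, findings, and probabilities are hypothetical,
# invented purely for illustration.

def posterior(priors, likelihoods, findings):
    """Compute P(disease | findings) via Bayes' theorem, assuming
    findings are conditionally independent given the disease."""
    unnormalized = {}
    for disease, prior in priors.items():
        p = prior
        for f in findings:
            p *= likelihoods[disease][f]  # multiply in P(finding | disease)
        unnormalized[disease] = p
    total = sum(unnormalized.values())  # normalize so posteriors sum to 1
    return {d: p / total for d, p in unnormalized.items()}

# Hypothetical example: two congenital heart defects, two observed findings.
priors = {"VSD": 0.6, "ASD": 0.4}
likelihoods = {
    "VSD": {"murmur": 0.9, "cyanosis": 0.2},
    "ASD": {"murmur": 0.5, "cyanosis": 0.1},
}
print(posterior(priors, likelihoods, ["murmur", "cyanosis"]))
# → {'VSD': 0.84375, 'ASD': 0.15625}
```

The insight that made Warner's system work is visible even in this toy: each observed finding multiplicatively reweights the prior probability of each candidate diagnosis, so the computer's "reasoning" is nothing more than careful bookkeeping over conditional probabilities.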


And now we are here, in this open sea. It’s been over half a century since Homer Warner and other pioneers developed and implemented AI systems in healthcare. The age of AI seems to be upon us. The big tech industry sees the opportunity and is making unprecedented investments in healthcare.


So what’s the risk?

When creating innovative healthcare solutions using AI, challenges will arise in at least three stages: development, application, and implementation. In developing ML and AI models, we won’t be able to fully trust the output of the tools, because the learnings and predictions are only a potentially deceptive proxy of the real situation, and the algorithms are most likely learning from biased data and biased healthcare delivery practices. In applying ML and AI models to healthcare problems, we will encounter unintended consequences: our predictive model may lead to a rapid upsurge in overdiagnoses, for example. In implementing our tools, we may discover the reality doesn’t match the hype: most never even make it to the real world, and when they do, the results are often disappointing. When you put a new technology into the complex system of healthcare, everything around it changes. We will shift relationships, we will shift workflows, and we will shift power differentials. And those changes can cause harm.


Given the risks and the difficulty we will face, should we even launch our boat into the sea of healthcare AI? Yes! We need the smartest brains tackling problems more consequential than how to get people to click on ads.


Pedro Domingos said, “People worry that computers will get too smart and take over the world, but the real problem is that they're too stupid and they've already taken over the world.” If you come from outside of healthcare, you may be surprised at how “stupid” its computers are and how long the path may be to apply your cutting-edge innovation. If you come from within healthcare, you understand why it’s taken so long to bring the innovations you see around you into healthcare, but you may not understand the technical aspects well enough to bridge the gap. As we become more informed partners, we will accelerate our journey to improved health and discovery through AI.

