AI, or artificial intelligence, is being developed and implemented in many industries around the world. From personalized shopping and voice assistants in our homes to flagging changes in banking transaction patterns and detecting nutrient deficiencies in soil, there are many tasks that AI can help humans perform better. Health care is no exception.
We are fortunate to have Dr. Philip Payne, Associate Dean and Chief Data Scientist at Washington University School of Medicine (WUSM) available to share his knowledge. Dr. Payne is the Janet and Bernard Becker Professor and Director of the WUSM Institute for Informatics, Data Science, and Biostatistics, where he oversees strategic approaches to data, analytics, and artificial intelligence for the medical school.
Dr. Payne was also recently appointed to the steering committee of the Artificial Intelligence Code of Conduct Project at the National Academy of Medicine. This committee will guide a framework for ensuring that AI's application in health, medical care, and health research is done accurately, safely, reliably, and ethically to provide better care.
Our Lab Director, Ali Kosydor, recently sat down with him to discuss his thoughts on AI, how we can use it to help us here at BJC/WUSM, and how AI could play an important role in the greater health care community in the future.
Thank you for joining us, Dr. Payne! Given the roles you play within the School of Medicine, can you give us your perspective on AI in the health care industry today?
It’s interesting. Everyone is very excited about the role of AI in health care, and much of that excitement comes from a perception that AI is very new. In reality, AI has been around in health care since the 1960s and is very accessible to most people in the health care industry. Now, more than ever, people have an understanding and awareness of how AI might change patient and provider experiences, quality and safety outcomes, value, research, and education. That can be both good and bad.
The good is that awareness comes with expectations and drives us to innovate and deploy these technologies. The bad may be that the perception of “newness” could cause us to ignore that history if we’re not careful about learning from what’s been done previously.
So, with the idea that this is not new, and we’ve had some exposure to and experience with AI in the past, where do you think health systems should focus their AI journey, and how should they engage with AI further?
I think there is a lower bound of potential applications of AI that are technically feasible but will not substantially change health care. These are things like workflow automation or even prediction tasks that may not actually improve upon human workflow that already exists. If we spend our time and energy working on those types of problems, we probably won’t see the impact we want.
Then there is a higher bound area where we’re not really clear what the implications of AI are. For example: closed-loop systems that may use AI to make a decision and then actually implement it, whether something as simple as an IV pump or something as complex as a robot in the operating room. The technology is not quite ready for us to take the human out of the middle of dynamic health care.
We don’t want to be operating at so low a level that we don’t see the impact we want, but we don’t want to be operating at so high a level that we face unintended consequences of deploying AI. There are tasks, though, that fall in the middle, such as documentation tasks like information capture and summarization. These are central to the practice of medicine but may not be the best use of physician or patient time and cognitive energy.
We can also think about things like the patient experience (scheduling appointments, speaking with providers, refilling medications) and research (doing a better job of asking and answering questions about the data we have rather than creating more data). These are all things we can do today, but I’d emphasize that all of them involve augmenting human capabilities. AI is not about replacing humans; it’s about making humans better and allowing humans to do the things that humans are uniquely able to do.
It’s so important for us to remember that when we speak about AI, we’re not replacing the human element. With that in mind, what risks should we be aware of as we engage with AI in health care systems?
There are probably three big areas of risk.
The first has to do with how we build the AI, because AI is only as good as the data we have to train the algorithms or tools and the labels we put on that data. For example, if we build a predictive algorithm that tells us which patients are at risk of developing sepsis, we have to train that algorithm using data from historical patient encounters. Ideally, we would be able to tell that algorithm which patients had sepsis and which did not. But diagnostic reasoning is not quite as easy as it sounds: how do we know which patients truly had sepsis? Training on uncertain or mislabeled data has the potential to produce an algorithm that does not actually generate the results we want.
The second area of concern is how we navigate the “intermediate effect”: every time we introduce a new technology, the performance of the humans interacting with that technology declines before it improves in terms of accuracy, quality, safety, and so on. There’s a real risk of not planning for and supporting human beings as they navigate that intermediate effect. We have to make sure we support the humans interacting with the technology, that it doesn’t put our patients at risk, and that it’s financially viable.
The third risk is trustworthiness. There are going to be significant concerns about the use of AI in health care that will not necessarily be based on factual information, and that could lead people to distrust the technology before we’ve had a chance to prove its viability and impact. The concept of AI literacy, both for experts and the general public, is critical.
That’s very helpful. May I round out our time by asking for your perspective on the future landscape of AI in health care?
I think the future landscape is going to see us moving back toward a model of health care that is much more humanistic, where computers and technology get out of the middle of the interaction between humans in the health care system and become more the supporting entities they’re meant to be. I’m anxious to see an environment where you don’t see a computer physically sitting between a patient and a provider, but rather where we’re capturing the data in the background so we can provide superior care.
The second area that is going to be interesting is that AI will put us in a position to make decisions that take advantage of human capabilities. Instead of Googling or combing through journals in exhaustive information seeking, researchers, educators, practitioners, or patients could have a meaningful summary of all the high-quality knowledge they need, with an answer in front of them right away. That takes the lengthy information-seeking out of the mix.
The other thing I’m interested to see is consumerism in health care. We want people not just to be receiving health care but to be active participants in the health care system, doing things like interacting with scheduling systems and understanding and acting on their test results.
I don’t think the future of AI-enabled health care is really a technical endeavor. It’s a workflow and humanistic endeavor, and the technology will follow. As long as we understand it’s not an engineering problem but rather a human opportunity, we’ll get to the brave new world where information is more free-flowing.
Thank you, Dr. Payne, for your insights and we look forward to hearing more from you as we advance our knowledge of AI in health care!