Is my doctor using artificial intelligence to diagnose me during our appointment? Is he recording our conversation to create an AI summary of our visit?
The use of AI in health care is still new enough that many people may not know what to make of it. Most Americans expressed “significant discomfort” about the idea of their doctors using AI to help manage their care, according to a 2023 survey. But AI is not likely to go away. The use of AI applications in medical care is growing, and it is important for patients to understand the uses that could improve care — and the reasons for continued caution.
I wanted to know how AI is already aiding in diagnosis and helping direct treatment, and what clinicians think about the use of AI. And finally, what are areas of concern, and what is being done to address those?
To guide us with these questions, I spoke with CNN wellness expert Dr. Leana Wen. Wen is an emergency physician, adjunct associate professor at George Washington University and a nonresident senior fellow at the Brookings Institution, where her research includes the intersection of technology, medicine and public health. She previously was Baltimore’s health commissioner.
Dr. Leana Wen: First, it helps to know the difference between predictive and generative artificial intelligence, or AI.
Predictive AI uses mathematical models and pattern recognition to predict the future. For example, a predictive AI algorithm can identify which patients with pneumonia are most likely to require hospitalization.
Let’s say you’re the patient. Using past experiences with many other patients with a similar condition — such as pneumonia, diabetes or heart disease — an algorithm could come up with a care plan for you based on factors that could impact your illness, such as your age, gender, other medical conditions, laboratory data and racial and ethnic background. The algorithm can help doctors decide, for instance, whether you need to be hospitalized and what treatment is most likely to be effective for your specific set of circumstances.
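For readers curious what such an algorithm might look like under the hood, here is a minimal, purely illustrative sketch in Python. The patient factors, the numbers and the tiny training set are invented for illustration; real clinical prediction tools are trained on thousands of records and validated before they are used in care.

```python
# Hypothetical sketch: estimate a pneumonia patient's chance of needing
# hospitalization from a few patient factors, using a simple statistical model.
# All data below are made up for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row describes one past patient:
# [age, has_diabetes (0/1), has_heart_disease (0/1), oxygen_saturation (%)]
past_patients = np.array([
    [34, 0, 0, 97],
    [71, 1, 1, 89],
    [58, 1, 0, 93],
    [25, 0, 0, 98],
    [82, 0, 1, 90],
    [45, 0, 0, 95],
])
was_hospitalized = np.array([0, 1, 1, 0, 1, 0])  # outcome for each past patient

# "Learn" the pattern linking those factors to hospitalization
model = LogisticRegression().fit(past_patients, was_hospitalized)

# New patient: 67 years old, diabetic, no heart disease, oxygen saturation 91%
new_patient = np.array([[67, 1, 0, 91]])
risk = model.predict_proba(new_patient)[0, 1]
print(f"Estimated probability of needing hospitalization: {risk:.0%}")
```

A real system would use far richer data and careful validation, but the basic idea is the same: past patients' outcomes teach the model which combinations of factors tend to signal higher risk.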
Generative AI uses large language models to generate humanlike interactions. Many people may be familiar with ChatGPT, a form of generative AI that answers user questions in a conversational manner. Generative AI can summarize huge quantities of information far faster than any human could. Some studies have suggested that generative AI models can “learn” enough to pass medical licensing exams and can generate easy-to-understand, well-written patient instructions on a variety of topics.
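To make that concrete, here is a brief, hypothetical sketch of how a program might ask a generative AI service to turn a visit note into plain-language patient instructions. It assumes an OpenAI API key is configured and uses a placeholder model name and an invented visit note; it is an illustration of the idea, not any particular hospital's tool.

```python
# Hypothetical sketch: ask a generative AI model to rewrite a clinical note
# as easy-to-understand patient instructions. The note and model name are
# illustrative assumptions, not a real product.
from openai import OpenAI

client = OpenAI()  # assumes an API key is set in the environment

visit_note = (
    "Patient seen for community-acquired pneumonia. Started on oral "
    "antibiotics. Follow up in one week. Return if fever or shortness "
    "of breath worsens."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name for illustration
    messages=[
        {"role": "system",
         "content": "Rewrite clinical notes as short, plain-language patient instructions."},
        {"role": "user", "content": visit_note},
    ],
)
print(response.choices[0].message.content)
```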
There are, though, concerns that these models could “hallucinate” and come up with responses that are misleading or inaccurate. And with both predictive and generative AI, how well the models work depends on the data they were trained on. When assessing the utility of AI in health care, it’s important to look at each AI tool separately and to understand how it was developed and in what circumstances it is meant to be used.