A third of Americans say they have asked AI to decode their medical results
As more people turn to chatbots for medical advice, the technology is revealing both its promise and its risks.
By Cody Cottier, edited by Tanya Lewis

When Judith Miller received the results of a medical imaging study last year, the 77-year-old Wisconsin resident did what many patients do today: She asked AI to explain them. Claude, a large language model (LLM) developed by the company Anthropic, helpfully outlined the possible interpretations. With the chatbot analysis in hand, Miller began her follow-up appointment feeling ready to have a productive conversation with her doctor. As she says, Claude’s responses “allowed me to better understand my health and participate more fully in shared decision-making.”
This scene has become commonplace in clinics across the country. Two recent polls found that about a third of American adults have turned to LLMs for health information: to make sense of lab results, diagnose symptoms, research treatment options or learn about prescription medications. “The use of tools like these has doubled in the past year,” says Robert Wachter, a physician at the University of California, San Francisco. “I suspect they will double again next year.”
But these chatbots can also provide misleading or inaccurate advice, which is why experts urge caution when using them. Anthropic, for its part, agrees. “Claude is not designed or marketed for clinical diagnosis,” a company spokesperson said. Its proper use is to “help people prepare for conversations with their doctors, not replace them.”
For many patients, AI offers a welcome fix for the overabundance of personal health data unleashed by the 21st Century Cures Act, which requires that patients have immediate online access to medical records such as test results and clinical notes. “If you’ve ever looked at this stuff,” says health care blogger and activist Dave deBronkart, “you know it leaves you with the gigantic question: What does all this mean?” Just a few years ago that meaning was hidden behind a wall of medical jargon that only doctors could parse. And because patients can now view results online before speaking with a physician, they often find themselves anxiously wondering what to make of it all. Today, however, general-purpose chatbots and a host of specialized health models can translate the jargon into plain language in seconds, potentially allaying unfounded fears.
Yet they can also needlessly inflame anxiety, or worse. LLMs remain error-prone: they can present falsehoods as facts and sycophantically reinforce users’ preexisting (and sometimes mistaken) beliefs. Although these flaws may fade as models grow more capable, many experts worry about the risks of using today’s AI models this way. “There aren’t a lot of guardrails to keep them from breaking down, from telling you real misinformation,” says Cait DesRoches, executive director of OpenNotes, a nonprofit that promotes patient access to medical records. She adds that there is little research on what happens when people treat an LLM as a health authority: “I don’t think we have any idea how effective this is for average patients.”
Worst-case scenarios have already surfaced. In December a 75-year-old Seattle man died of a treatable form of leukemia; he had reportedly refused treatment because of AI-generated advice that incorrectly suggested he had a rare complication. And some early research into how people use AI for medical diagnosis is sobering. In a Nature Medicine study published in February, researchers asked participants to diagnose a hypothetical condition with the help of various LLMs; they reached the correct conclusion only about a third of the time.
Still, most experts agree that chatbots can be useful to people seeking medical information, if used carefully. “I don’t think people should avoid using them,” DesRoches says, “but I think people should use them with their eyes open.” Adam Rodman, a general internist at Beth Israel Deaconess Medical Center, goes even further: “I would say that LLMs, if used appropriately (that’s a big caveat), are the best patient-empowerment tool ever invented.”
Hoping to harness the technology without compromising safety, researchers have devised a suite of strategies to counteract AI’s shortcomings. For example, they suggest asking a chatbot to pretend it’s a doctor, which can “incentivize the model to collect data like a doctor,” Rodman says. Other tactics include asking an LLM to rigorously reevaluate its own reasoning and seeking a “second opinion” from a different model. Rodman also stresses the importance of removing personal details, such as your name and Social Security number, from anything you feed a chatbot in order to protect your privacy.
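For readers comfortable with a little programming, these tactics can be scripted as well as typed into a chat window. What follows is a minimal sketch, assuming Anthropic’s official Python SDK (pip install anthropic) and an API key in the ANTHROPIC_API_KEY environment variable; the model name, file name, redaction patterns and prompts are illustrative, not a vetted clinical workflow.

import re
import anthropic  # Anthropic's official Python SDK

def redact(text):
    # Strip obvious identifiers before sending anything to a chatbot.
    # (Names and addresses still need to be removed by hand.)
    text = re.sub(r"\b\d{3}-\d{2}-\d{4}\b", "[SSN REMOVED]", text)        # U.S. Social Security numbers
    text = re.sub(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b", "[PHONE REMOVED]", text)  # U.S. phone numbers
    return text

client = anthropic.Anthropic()  # reads the ANTHROPIC_API_KEY environment variable
report = redact(open("lab_results.txt").read())  # hypothetical file of test results

# Tactic 1: ask the model to approach the results the way a doctor would.
first = client.messages.create(
    model="claude-sonnet-4-20250514",  # illustrative; any current Claude model works
    max_tokens=1024,
    system=("Act as a physician reviewing a patient's lab results. Explain the "
            "findings in plain language and list questions to ask the real doctor."),
    messages=[{"role": "user", "content": report}],
)

# Tactic 2: ask the model to rigorously recheck its own reasoning.
second = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": report},
        {"role": "assistant", "content": first.content[0].text},
        {"role": "user", "content": "Reexamine your answer step by step and flag "
                                    "anything uncertain or possibly wrong."},
    ],
)
print(second.content[0].text)

Sending the same redacted report to a different vendor’s model and comparing the answers approximates the “second opinion” Rodman describes.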
Ideally, after all this digital dialogue, patients would come away with more informed questions to ask their doctors. Wachter describes the trend as “generally healthy,” although he sometimes has to waste valuable appointment time debunking Dr. Chatbot’s misguided advice. “I have 15 minutes for this appointment,” he says, “and I’m going to have to spend the first 10 minutes talking the patient out of what GPT told them to do.”
In many cases, LLMs are probably replacing actual clinical consultation entirely, especially for people who are uninsured or face long waits for an appointment. “The access problem is at a critical level,” says Laura Adams, senior adviser on AI at the National Academy of Medicine. Despite the technology’s limitations, she argues, we should compare it not with perfection but with reality, in which the alternative may be no medical guidance at all. “It’s better than nothing,” she says.
With AI and medical advice, Adams notes, “the horse is out of the barn.” As more people rely on chatbots to manage their health, researchers and patient advocates say the moment demands a new kind of AI literacy. “The cure is not to keep people in ignorance,” deBronkart says. “It’s to teach them to do better,” by educating children and adults alike. Beyond that, newer LLMs will likely become better suited to medical uses. Wachter suggests some models could eventually be board-certified, as doctors are.
For now, people like Miller are already approaching AI the way DesRoches recommends: with eyes open, aware of its tendency to hallucinate and to confirm users’ biases. However sophisticated chatbot responses may seem, they are assembled from statistical patterns in vast data sets, an impressive trick but one that still falls short of the breadth and reliability of human clinical reasoning. “It’s just coming up with words that are probable,” Miller says. “I don’t consider it a source of absolute truth.”