AI chatbots may seem medically savvy, but their performance falters when they interact with real people.
In the laboratory, AI chatbots could identify medical problems with 95 percent accuracy and correctly recommended actions such as calling a doctor or going to urgent care more than 56 percent of the time. But when humans presented medical scenarios to the chatbots conversationally, things got more complicated. Accuracy fell to less than 35 percent for diagnosing the disease and to about 44 percent for identifying the right action, researchers reported Feb. 9 in Nature Medicine.
The decline in chatbot performance between the lab and real-world settings indicates that “AI has the medical knowledge, but people are having trouble getting useful advice from it,” says Adam Mahdi, a mathematician at the University of Oxford who directs the Machine Reasoning Lab, which conducted the study.
To test the accuracy of the chatbots’ diagnoses in the lab, Mahdi and his colleagues submitted scenarios describing 10 medical conditions to the large language models (LLMs) GPT-4o, Command R+ and Llama 3. They tracked how well each chatbot diagnosed the problem and advised what to do about it.
The team then randomly assigned nearly 1,300 study volunteers to take those scenarios to one of the LLMs or to use any other method they liked to decide what to do in the situation. Volunteers were also asked what they thought the medical problem was and why they came to that conclusion. Most people who weren’t using chatbots plugged their symptoms into Google or other search engines. Participants using chatbots not only performed worse than the chatbots did when evaluating the lab scenarios, but also worse than participants using search engines. Those who consulted Dr. Google identified the problem more than 40 percent of the time, compared with an average of 35 percent for those who used chatbots. That’s a statistically significant difference, Mahdi says.
The AI chatbots were state-of-the-art when the study was carried out in late 2024, and so accurate that it would have been difficult to improve their medical knowledge much further. “The problem was interacting with people,” says Mahdi.
In some cases, the chatbots provided incorrect, incomplete or misleading information. But the problem seems to lie mostly in the way people engage with LLMs. People tend to dole out information a bit at a time instead of telling the whole story at once, Mahdi says. And chatbots can easily be thrown off by irrelevant or partial information. Participants also sometimes ignored the chatbots’ diagnoses even when they were correct.
Small changes in the way people described the scenarios made a big difference in the chatbots’ responses. For example, two people described a subarachnoid hemorrhage, a type of stroke in which blood floods the space between the brain and the tissue covering it. Both participants told GPT-4o about headache, sensitivity to light and neck stiffness. One volunteer said he had “suddenly developed the worst headache ever,” prompting GPT-4o to correctly advise seeking immediate medical attention.
Another volunteer described it only as a “terrible headache.” GPT-4o suggested that this person might have a migraine and should rest in a dark, quiet room, a recommendation that could have killed the patient.
It’s unclear why such subtle changes in the description altered the answer so dramatically, Mahdi says. That’s part of the black box problem of AI, in which even a model’s creators cannot follow its reasoning.
The study results suggest that “none of the language models tested were ready for deployment in direct patient care,” say Mahdi and colleagues.
Other groups have reached the same conclusion. In a report released January 21, the global patient safety nonprofit ECRI listed the use of AI chatbots for medicine, at both ends of the stethoscope, as the biggest risk linked to health technologies for 2026. The report cites AI chatbots confidently suggesting misdiagnoses, inventing body parts, recommending medical products or procedures that might be dangerous, advising unnecessary tests or treatments, and reinforcing biases or stereotypes that can worsen health disparities. Studies have also shown how chatbots can make ethical missteps when used as therapists.
Still, most doctors now use chatbots in one way or another, such as to transcribe medical records or review test results, says Scott Lucas, ECRI’s vice president for device safety. OpenAI announced ChatGPT for Healthcare and Anthropic launched Claude for Healthcare in January. ChatGPT already answers over 40 million healthcare questions daily.
And it’s no wonder people are turning to chatbots for medical assistance, Lucas says. “They can access billions of data points and aggregate the data and present it in an understandable, credible, compelling format that can give you precise advice on almost exactly the question you were asking and do it with confidence.” But “commercial LLMs are not ready for prime-time clinical use. Relying on LLM results alone is not safe.”
Eventually, AI models and users could become sophisticated enough to close the communications gap highlighted by Mahdi’s study, Lucas says.
The study confirms concerns about the safety and reliability of LLMs in patient care that the machine learning community has long discussed, says Michelle Li, a medical AI researcher at Harvard Medical School. This study and others have illustrated the weaknesses of AI in real medical settings, she says. Li and colleagues published a study Feb. 3 in Nature Medicine suggesting possible improvements in the training, testing and implementation of AI models, changes that could make them more reliable in various medical contexts.
Mahdi plans to conduct additional studies on AI interactions in other languages and over time. The results could help AI developers design better models that make it easier for users to get accurate answers, he says.
“The first step is to solve the measurement problem,” says Mahdi. “We haven’t measured what matters,” which is how well AI works for real people.
