Study Reveals Trust in AI Medical Advice Despite Risks

A recent study highlights a concerning trend: people increasingly trust medical advice from artificial intelligence (AI) over advice from human doctors, even though AI often delivers inaccurate information. Conducted by researchers at the Massachusetts Institute of Technology and published in the New England Journal of Medicine, the study asked 300 participants to evaluate medical responses from three sources: a medical doctor, an online healthcare platform, and an AI model such as ChatGPT.

The findings show that participants, medical experts and laypersons alike, rated AI-generated responses as more accurate, valid, trustworthy, and complete than those from human doctors. Alarmingly, neither group could reliably distinguish AI-generated responses from those written by medical professionals.

Concerns Over AI-Generated Medical Advice

The researchers further examined how participants reacted to AI-generated medical advice that was known to be inaccurate. They reported, “Participants not only found these low-accuracy AI-generated responses to be valid, trustworthy, and complete/satisfactory, but also indicated a high tendency to follow the potentially harmful medical advice and incorrectly seek unnecessary medical attention as a result of the response provided.”

This trend raises significant concerns, as there are documented instances in which individuals suffered serious harm from misguided AI recommendations. In one case, a 35-year-old man in Morocco required emergency treatment after a chatbot advised him to wrap rubber bands around a hemorrhoid. In another, a 60-year-old man suffered severe poisoning after following ChatGPT’s suggestion to consume sodium bromide, a substance typically used for pool sanitation; he was hospitalized for three weeks with paranoia and hallucinations, as detailed in a case study published in Annals of Internal Medicine: Clinical Cases.

Dr. Darren Lebl, chief of research service in spine surgery at the Hospital for Special Surgery in New York, expressed concern about the reliability of AI-generated medical recommendations. He noted, “The problem is that what they’re getting out of those AI programs is not necessarily a real, scientific recommendation with an actual publication behind it. About a quarter of them were made up.”

Public Trust in Artificial Intelligence

The study’s findings reflect a broader pattern of public trust in AI. A recent survey conducted by Censuswide found that roughly 40 percent of respondents trusted medical advice from AI bots such as ChatGPT. This growing reliance on artificial intelligence for health-related information poses significant challenges for healthcare professionals and raises pressing questions about patient safety.

As AI technology continues to evolve, it is critical for both users and healthcare providers to understand the limitations and risks associated with AI-generated medical advice. The potential for misinformation and harmful recommendations underscores the need for vigilance and further research in this rapidly changing landscape.