Google Faces Scrutiny Over Health AI Disclaimers and User Safety

Google is under fire for potentially endangering users by minimizing the health warnings attached to its AI-generated medical advice. The company says its AI Overviews, which appear prominently above search results, advise users to consult healthcare professionals rather than rely solely on AI summaries. Yet a recent Guardian investigation reveals that these disclaimers are not visible when users first encounter the medical advice.

According to Google, AI Overviews are intended to tell users when they should seek expert help. However, the safety warnings surface only after users request additional information by clicking a button labeled “Show more.” Even then, the disclaimers appear in a smaller font below the supplementary details, where many readers are likely to overlook them. The disclaimer states: “This is for informational purposes only. For medical advice or a diagnosis, consult a professional. AI responses may include mistakes.”

A Google spokesperson did not contest the central finding that no prominent disclaimer appears at users’ initial point of contact with medical information. The spokesperson said that AI Overviews encourage users to seek professional medical advice and often include that guidance within the summaries themselves where appropriate.

Expert Concerns About Misinformation

The absence of clear disclaimers at the outset raises significant concerns among AI experts and patient advocates. Pat Pataranutaporn, an assistant professor and technologist at the Massachusetts Institute of Technology, emphasized the dangers of relying solely on AI-generated information. “Even the most advanced AI models still hallucinate misinformation or prioritize user satisfaction over accuracy,” he stated. “In healthcare contexts, this can be genuinely dangerous.”

Gina Neff, a professor of responsible AI at Queen Mary University of London, echoed these sentiments, asserting that the design of AI Overviews prioritizes speed over accuracy. “This leads to mistakes in health information, which can be dangerous,” she remarked.

The Guardian’s investigation exposed the risks posed by misleading health information disseminated through Google’s AI Overviews. Neff said the findings underscore the critical need for prominent disclaimers. “Users may skim through information quickly and mistakenly believe it is reliable,” she warned.

Calls for Change

In light of these findings, Google has removed AI Overviews for some medical searches, but concerns remain. Sonali Sharma, a researcher at Stanford University, highlighted the problem of users receiving what appear to be complete answers at the top of search results pages, which can discourage further investigation and lead users to accept potentially flawed information. “AI Overviews can contain both correct and incorrect information, making it difficult for users to discern accuracy,” she explained.

Tom Bishop, head of patient information at Anthony Nolan, a blood cancer charity, called for urgent action from Google. He stressed the importance of making disclaimers more visible to encourage users to critically evaluate the information they receive. “That disclaimer needs to be much more prominent,” he said. “It should be the first thing users see, ideally in the same size font as the rest of the information.”

As the debate over AI-generated health information continues, the need for transparency and accuracy in medical advice remains paramount. Google’s approach to displaying disclaimers may need significant revisions to protect users and ensure that they understand the limitations of AI-generated content.