Google Under Scrutiny for Health Advice in AI Overviews

Google faces criticism for potentially putting users at risk by minimizing health warnings on its AI-generated medical advice. The company’s AI Overviews, designed to provide quick answers to health-related queries, lack prominent disclaimers cautioning users that the information presented may be inaccurate. Some experts argue this omission could lead people to make dangerous health decisions based on incomplete or misleading information.

When users search for health-related topics, Google’s AI Overviews appear at the top of the search results. Although the company says these summaries prompt users to seek professional help, many users see no clear safety warning on first contact with the AI-generated content. A disclaimer appears only if they click a button labeled “Show more,” making it easy to miss. It reads: “This is for informational purposes only. For medical advice or a diagnosis, consult a professional. AI responses may include mistakes.”

Despite Google’s claims that AI Overviews encourage users to verify information, the lack of immediate disclaimers has drawn concern from health experts and patient advocates. Pat Pataranutaporn, an assistant professor and AI researcher at the Massachusetts Institute of Technology (MIT), emphasizes these risks. “The absence of disclaimers when users are initially served medical information creates several critical dangers,” he stated. “Even the most advanced AI models can generate misinformation, which is particularly dangerous in healthcare contexts.”

Gina Neff, a professor of responsible AI at Queen Mary University of London, argues that the flaws stem from a design that prioritizes speed over accuracy. “The problem with bad AI Overviews is by design,” she noted. “This leads to mistakes in health information, which can be dangerous.”

In an earlier investigation, the Guardian found that users were being served misleading health information through Google’s AI Overviews. Neff noted that requiring users to click through to find disclaimers could lead them to believe the initial information is more reliable than it actually is. “People reading quickly may think the information they get from AI Overviews is better than what it is,” she said.

Following the Guardian’s reporting, Google removed AI Overviews from some medical searches, though not all. Sonali Sharma, a researcher at Stanford University’s Center for Artificial Intelligence in Medicine and Imaging, expressed concern about how AI Overviews are presented. “These AI Overviews appear at the very top of the search page and often provide what feels like a complete answer,” she explained. That sense of completeness can discourage users from seeking further information, potentially leading to misinformed health decisions.

Tom Bishop, head of patient information at the blood cancer charity Anthony Nolan, has called for urgent changes to how Google presents health information. “We know misinformation is a real problem, but when it comes to health misinformation, it’s potentially really dangerous,” he stated. Bishop advocates for disclaimers to be made more prominent, suggesting they should be the first thing users see. “I’d like this disclaimer to be right at the top, ideally in the same font size as everything else,” he added.

As debate over AI-generated medical advice continues, the need for clear, visible disclaimers grows more urgent. Experts agree that without them, users may unknowingly put their health at risk on the basis of unreliable information. The ongoing scrutiny underscores a critical intersection of technology and healthcare, raising questions about the responsibility of major tech companies to safeguard user well-being.