Mind is launching a significant inquiry into artificial intelligence and mental health after a Guardian investigation exposed how Google’s AI Overviews gave people “very dangerous” medical advice.
Through a year-long commission, the mental health charity, which operates in England and Wales, will examine the risks and safeguards needed as AI increasingly influences the lives of millions of people affected by mental health problems worldwide.
The inquiry – the first of its kind globally – will bring together the world’s leading doctors and mental health professionals, as well as people with lived experience, health providers, policymakers and tech companies. Mind says it will aim to shape a safer digital mental health ecosystem, with strong regulation, standards and safeguards.
The launch comes after the Guardian revealed how people were being put at risk of harm by false and misleading health information in Google AI Overviews. The AI-generated summaries are shown to 2 billion people a month, and appear above traditional search results on the world’s most visited website.
After the reporting, Google removed AI Overviews for some but not all medical searches. Dr Sarah Hughes, chief executive officer of Mind, said “dangerously incorrect” mental health advice was still being provided to the public. In the worst cases, the bogus information could put lives at risk, she said.
Hughes said: “We believe AI has enormous potential to improve the lives of people with mental health problems, widen access to support, and strengthen public services. But that potential will only be realised if it is developed and deployed responsibly, with safeguards proportionate to the risks.
“The issues exposed by the Guardian’s reporting are among the reasons we’re launching Mind’s commission on AI and mental health, to examine the risks, opportunities and safeguards needed as AI becomes more deeply embedded in everyday life.
“We want to ensure that innovation does not come at the expense of people’s wellbeing, and that those of us with lived experience of mental health problems are at the heart of shaping the future of digital support.”
Google has said its AI Overviews, which use generative AI to provide snapshots of essential information about a topic or question, are “helpful” and “reliable”.
But the Guardian found some AI Overviews served up inaccurate health information and put people at risk of harm. The investigation uncovered false and misleading medical advice across a range of issues, including cancer, liver disease and women’s health, as well as mental health conditions.
Experts said some AI Overviews for conditions such as psychosis and eating disorders offered “very dangerous advice” and were “incorrect, harmful or could lead people to avoid seeking help”.
The Guardian also found that Google downplays safety warnings that its AI-generated medical advice may be wrong.
Hughes said vulnerable people were being served “dangerously incorrect guidance on mental health”, including “advice that could prevent people from seeking treatment, reinforce stigma or discrimination and in the worst cases, put lives at risk”.
She added: “People deserve information that is safe, accurate and grounded in evidence, not untested technology presented with a veneer of confidence.”
The commission will gather evidence on the intersection of AI and mental health, and provide an "open space" where the experiences of people with mental health conditions will be "seen, recorded and understood".
Rosie Weatherley, information content manager at Mind, said that although Googling mental health information “wasn’t perfect” before AI Overviews, it usually worked well. She said: “Users had a good chance of clicking through to a credible health website that answered their query, and then went further – offering nuance, lived experience, case studies, quotes, social context and an onward journey to support.
“AI Overviews replaced that richness with a clinical-sounding summary that gives an illusion of definitiveness. They give the user more of one form of clarity (brevity and plain English), while giving them less of another form of clarity (security in the source of the information, and how much to trust it). It’s a very seductive swap, but not a responsible one.”
A Google spokesperson said: “We invest significantly in the quality of AI Overviews, particularly for topics like health, and the vast majority provide accurate information.
“For queries where our systems identify a person might be in distress, we work to display relevant, local crisis hotlines. Without being able to review the examples referenced, we can’t comment on their accuracy.”