A 60-year-old man with no history of mental illness presented at a hospital emergency department insisting that his neighbour was poisoning him. Over the next 24 hours he developed worsening hallucinations and tried to escape the hospital.
Doctors eventually discovered the man was on a daily diet of sodium bromide, an inorganic salt mainly used for industrial and laboratory purposes including cleaning and water treatment.
He bought it over the internet after ChatGPT told him he could use it in place of table salt; he had been worried about the health impacts of salt in his diet. Sodium bromide can accumulate in the body, causing a condition called bromism, with symptoms including hallucinations, stupor and impaired coordination.
Cases like this have Alex Ruani, a doctoral researcher in health misinformation at University College London, concerned about the launch of ChatGPT Health in Australia.
A limited number of Australian users can already access the artificial intelligence platform, which allows them to “securely connect medical records and wellness apps” to generate responses “more relevant and useful to you”. ChatGPT users in Australia can join a waitlist for access.
“ChatGPT Health is being presented as an interface that can help people make sense of health information and test results or receive diet advice, while not replacing a clinician,” Ruani said.
“The challenge is that, for many users, it’s not obvious where general information ends and medical advice begins, especially when the responses sound confident and personalised, even if they mislead.”
Ruani said there had been too many “horrifying” examples of ChatGPT “leaving out key safety details like side effects, contraindications, allergy warnings, or risks around supplements, foods, diets, or certain practices”.
“What worries me is that there are no published studies specifically testing the safety of ChatGPT Health,” Ruani said. “Which user prompts, integration paths, or data sources could lead to misguidance or harmful misinformation?”
ChatGPT is developed by OpenAI, which used its evaluation tool HealthBench in building ChatGPT Health. HealthBench relies on doctors to test and evaluate how well AI models respond to health questions.
Ruani said the full methodology used by HealthBench, and its evaluations, are “mostly undisclosed, rather than outlined in independent peer-reviewed studies”.
“ChatGPT Health is not regulated as a medical device or diagnostic tool. So there are no mandatory safety controls, no risk reporting, no post-market surveillance, and no requirement to publish testing data.”
An OpenAI spokesperson told Guardian Australia that the company had worked in partnership with more than 200 physicians from 60 countries “to advise and improve the models powering ChatGPT Health”.
“ChatGPT Health is a dedicated space where health conversations stay separate from the rest of your chats, with strong privacy protections by default,” the spokesperson said.
ChatGPT Health data is encrypted and subject to privacy protections by default, and sharing with third parties happens with user consent or in limited circumstances outlined in OpenAI’s privacy policy.
The chief executive of the Consumers Health Forum of Australia, Dr Elizabeth Deveny, said rising out-of-pocket medical costs and long wait times to see doctors were driving people to AI.
She said ChatGPT Health could be useful in helping people manage well-known chronic conditions, and to research ways to stay well. AI’s ability to give answers in different languages “provides a real benefit to people who don’t have English proficiency”, she said.
Deveny said she was concerned that people would take advice given by ChatGPT Health at face value, and that “large global tech companies are moving faster than governments”, setting their own rules around privacy, transparency and data collection.
“This is not a small not-for-profit experimenting in good faith. It’s one of the largest technology companies in the world.
“When commercial platforms define the norms, the benefits tend to flow to people who already have resources, education, and system knowledge. The risks fall on those who do not.”
She said a failure of governments to act had left health consumers to navigate the social transformation brought by AI largely alone.
“We need clear guardrails, transparency and consumer education so people can make informed choices about if and how they use AI for their health,” she said.
“This isn’t about stopping AI. It’s about acting before mistakes, bias, and misinformation are replicated at speed and scale, in ways that are almost impossible to unwind.”