A 60-year-old man ends up in the hospital after seeking dietary advice from ChatGPT; AI systems like ChatGPT can produce errors and spread misinformation.
After three months of substituting sodium bromide for table salt, a 60-year-old man was admitted to the hospital with bromism. He had changed his diet after consulting ChatGPT for suggestions on how to eliminate sodium chloride from it.
The man developed serious neuropsychiatric symptoms, including paranoia and hallucinations, as well as dermatological issues. He spent three weeks in the hospital recuperating, a case that exposes the risks of relying on AI for medical guidance.
The case report warns that AI systems like ChatGPT can make errors and spread misinformation, a limitation that OpenAI itself acknowledges in its terms of use.
ChatGPT is a generative artificial intelligence (AI) chatbot created by OpenAI and released on November 30, 2022. It responds to user prompts by generating text, audio, and images using GPT-5, a generative pre-trained transformer (GPT). It is credited with fuelling the AI boom, a period of intense investment and public interest in artificial intelligence. OpenAI offers the service on a freemium basis.
Despite its popularity, the chatbot has faced criticism for its limitations and its potential for unethical use. It can produce plausible-sounding but incorrect or nonsensical responses, referred to as “hallucinations.” Its replies may also reflect biases present in its training data. Critics note that the chatbot can facilitate academic dishonesty, spread misinformation, and generate malicious code.
Recently, a 60-year-old man ended up in the hospital after asking ChatGPT how to eliminate sodium chloride from his diet. As humans interact more with artificial intelligence, we continue to hear stories about the dangers, sometimes even fatal, of chatbot conversations.
While some of the focus has been on mental health and concerns about chatbots’ inability to handle these types of challenges, there are also implications for people’s physical health.
It’s often said that you shouldn’t Google your symptoms because medical advice should be given by a healthcare professional who knows your medical history and can examine you.
According to a new case report published in the American College of Physicians Journals, caution should also be exercised when asking a chatbot health questions. The report describes a man who developed bromism after seeking dietary advice from ChatGPT.
Bromism, also known as bromide poisoning, was widespread in the early 20th century but is far less prevalent today. Bromide salts were once included in numerous over-the-counter drugs used to treat insomnia, hysteria, and anxiety, and excessive bromide intake can cause neuropsychiatric and dermatological problems.
The man in this report had no prior psychiatric or medical history, but within 24 hours of being admitted to the hospital he developed increasing paranoia as well as auditory and visual hallucinations. “He was very thirsty, but paranoid about the water he was offered,” the report says. His condition stabilised after he received fluids and electrolytes, and he was admitted to the hospital’s inpatient psychiatric unit.
As his condition improved, he was able to report other symptoms he had noticed, including the recent onset of facial acne and cherry angiomas, further supporting the diagnosis of bromism. He also reported that he had replaced sodium chloride, or table salt, with sodium bromide for three months after reading about the adverse health effects of table salt.
“Inspired by his university nutrition studies, he decided to conduct a personal experiment aimed at eliminating chloride from his diet,” the case report states. He had replaced table salt with “sodium bromide obtained on the internet after consulting ChatGPT, where he had read that chloride could be replaced by bromide, although probably for other purposes, such as cleaning.” The man spent three weeks in the hospital before recovering sufficiently to be discharged. “It is important to consider that ChatGPT and other AI systems can generate scientific inaccuracies, are not capable of critically discussing results, and ultimately promote the spread of misinformation,” the report’s authors warned.
OpenAI, the developer of ChatGPT, admits in its terms of use that the chatbot’s results “are not always accurate.” “You should not rely on the results of our services as the sole source of truth or factual information, nor as a substitute for professional advice,” the terms of use state. The company’s terms of use also explicitly state: “Our services are not intended to be used for the diagnosis or treatment of any health condition.”
This report highlights the fact that AI can be wrong. In 2023, a study conducted by researchers at Brigham and Women’s Hospital, a hospital affiliated with Harvard Medical School, had already revealed that ChatGPT was unreliable in providing cancer treatment plans.
Analyzing the chatbot’s responses to hypothetical cancer cases, the researchers found that 33% of them contained incorrect information, such as wrong drug doses, inappropriate radiation therapy recommendations, or unsubstantiated claims about treatment effectiveness.
The authors were surprised by the difficulty in identifying errors, as the chatbot’s responses were often consistent and plausible.
Another example: in 2024, bizarre, incorrect, and even dangerous answers produced by Google Search’s “AI Overviews” feature gained widespread attention. After encouraging users to put non-toxic glue on their pizza and eat three pebbles each day, Google’s AI reportedly suggested poisonous mushrooms.
According to one account, the AI appeared to mistake the poisonous “Destroying Angel” fungus for the edible button mushroom. These recurring errors raised serious questions about the safety of the feature.


