
‘Bixonimania’ Is a Fake Disease, but ChatGPT and Other AI Chatbots Diagnosed It to Thousands

A new investigation published in Nature has revealed that major AI chatbots, including ChatGPT, Google Gemini, Microsoft Copilot, and Perplexity, have been confidently telling users about a disease that does not exist. The condition, called “bixonimania,” was entirely fabricated by a researcher who wanted to test how easily AI systems could be tricked into spreading false medical information.

The results are alarming for anyone in healthcare. Not only did the chatbots repeat the fake diagnosis, but they elaborated on it, offered clinical-style advice, and even recommended that patients visit an ophthalmologist. For nurses who are already fielding more questions from patients who “Googled their symptoms,” this experiment is a wake-up call about a new and growing threat to patient safety.

With ECRI naming AI chatbot misuse the top health technology hazard of 2026, the stakes could not be higher for nurses on the front lines of patient care.

Medical researcher Almira Osmanovic Thunström at the University of Gothenburg, Sweden, launched the experiment in early 2024. She created a fictional eye condition called bixonimania, described as eyelid discoloration and sore eyes supposedly caused by blue light exposure from mobile devices. She then uploaded two fake academic papers to a preprint server to see whether AI chatbots would absorb and repeat the false information.

The papers were loaded with red flags that should have been impossible to miss.

  • The fictional lead author supposedly worked at “Asteria Horizon University” in the nonexistent “Nova City, California.”
  • The acknowledgements section thanked “Professor Maria Bohm at The Starfleet Academy” on the USS Enterprise, with funding credited to “the Professor Sideshow Bob Foundation for its work in advanced trickery.”
  • The papers even stated outright that “this entire paper is made up.”

Thunström chose the name bixonimania deliberately. The suffix “-mania” is used exclusively in psychiatry, so no legitimate eye condition would ever carry that label. 

Despite the warning signs deliberately included in the papers, the AI systems failed spectacularly.

  • Microsoft Copilot declared that “bixonimania is indeed an intriguing and relatively rare condition.”
  • Google Gemini told users that “bixonimania is a condition caused by excessive exposure to blue light” and advised people to visit an ophthalmologist.
  • Perplexity AI went even further, telling one user that 90,000 people worldwide were suffering from the disorder.


The bixonimania experiment did not happen in a vacuum. Separate research has found that large language models are especially vulnerable to medical misinformation when the source material looks professional.

  • A study examining 20 different LLMs found that AI chatbots hallucinate and elaborate on false information at higher rates when the source text is formatted like a clinical paper or hospital discharge note than when it resembles a social media post.

“When the text looks professional and written as a doctor writes, there’s an increase in the hallucination rates,” researcher Omar noted in the Nature report.

The real-world consequences have already arrived.

  • Three researchers at the Maharishi Markandeshwar Institute of Medical Sciences and Research in India published a paper in Cureus, a peer-reviewed journal published by Springer Nature, that cited the bixonimania preprints as legitimate sources.
  • That paper was later retracted once the hoax was discovered.

The problem extends far beyond one fake disease. ECRI’s 2026 Health Technology Hazard Report found that chatbots have suggested incorrect diagnoses, recommended unnecessary testing, promoted substandard medical supplies, and even invented nonexistent anatomy when responding to medical questions. All of this is delivered in the confident, authoritative tone that makes AI responses so convincing.


The scale of the risk is enormous. More than 40 million people turn to ChatGPT daily for health information, according to an analysis from OpenAI. As rising healthcare costs and clinic closures reduce access to care, even more patients are likely to use chatbots as a substitute for professional medical advice.

This story matters to nurses because you are the professionals most likely to encounter patients who have already consulted an AI chatbot before walking through the door. A patient may arrive convinced they have a condition they read about on ChatGPT or Gemini, complete with symptoms and treatment recommendations generated by a system that cannot tell the difference between a real disease and one funded by “the Professor Sideshow Bob Foundation.”

Nurses should be prepared to gently redirect patients who present with AI-sourced health claims, treating each encounter as an opportunity to reinforce the value of professional clinical judgment. ECRI recommends that health systems establish AI governance committees, provide clinicians with AI literacy training, and regularly audit AI tool performance. If your facility has not started these conversations, this is the moment to advocate for them.

🤔 Have you had a patient come to you with medical information they got from an AI chatbot? How did you handle it? Share your experience in the comments below.
