News
12 July, 2025
AI chatbots are not doctors
Artificial intelligence chatbots can be easily manipulated to spread false and dangerous health information, according to a world-first study published today in the Annals of Internal Medicine.

The findings have prompted urgent warnings from health experts: trust your doctor, not a chatbot.
An international team of researchers from the University of South Australia, Flinders University, Harvard Medical School, University College London, and the Warsaw University of Technology assessed five leading AI systems from OpenAI, Google, Meta, Anthropic, and X Corp.
Using developer tools, the team successfully reprogrammed each AI to provide inaccurate health advice, often using fabricated references and a formal, scientific tone to appear credible.
“In total, 88 per cent of all responses were false,” said UniSA researcher Dr Natansh Modi, “and yet they were presented with scientific terminology, a formal tone, and fabricated references that made the information appear legitimate.
“The disinformation included claims about vaccines causing autism, cancer-curing diets, HIV being airborne, and 5G causing infertility.”
Four of the five systems delivered false responses 100 per cent of the time; the fifth did so in 40 per cent of cases.
Dr Modi’s team also identified existing health disinformation bots in OpenAI’s public GPT Store, and created their own, revealing how easily these tools can be misused.
“Without immediate action, these systems could be exploited by malicious actors to manipulate public health discourse at scale, particularly during crises such as pandemics or vaccine campaigns,” Dr Modi warned.