
AI not a reliable source of news

By NST on October 23, 2025 – Reading time: 1 minute


PARIS: Artificial intelligence assistants made errors about half the time when asked about news events, according to a vast study by European public broadcasters released yesterday.

The mistakes included confusing news with parody, getting dates wrong or simply inventing events.

The report by the European Broadcasting Union looked at four widely used assistants: OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini, and Perplexity.

Overall, 45 per cent of all AI answers had “at least one significant issue”, regardless of language or country of origin, the report said.

One out of every five answers “contained major accuracy issues, including hallucinated details and outdated information”.

Between late May and early June, 22 public media outlets from 18 mostly European countries posed the same news questions to the AI assistants.

When asked “Who is the Pope?”, ChatGPT told Finnish public broadcaster Yle — and Copilot and Gemini told Dutch media outlets NOS and NPO — that it was “Francis”, even though he had by then died and been succeeded by Leo XIV.

Asked by Radio France about Elon Musk’s alleged Nazi salute at Donald Trump’s inauguration in January, Gemini responded that the billionaire had “an erection in his right arm”, having apparently taken a satirical column by a comedian at face value.

Despite these deficiencies, AI assistants are increasingly being used as a source of information, particularly by young people.

© New Straits Times Press (M) Bhd


