Can People Trust AI-Produced News?

New research published today, led by the BBC and coordinated by the European Broadcasting Union (EBU), has found that AI assistants routinely misrepresent news content, regardless of the language, territory, or AI platform being tested.
The researchers evaluated more than 3,000 responses from ChatGPT, Copilot, Gemini, and Perplexity.
The findings of this large international study, presented on October 22, 2025, at the EBU News Assembly, identify multiple systemic issues across the four leading AI tools.
Key findings include: 45% of all AI answers had at least one significant issue; 31% of responses showed serious sourcing problems (missing, misleading, or incorrect attribution); and 20% contained major accuracy issues, including hallucinated details and outdated information.
The researchers reported that Gemini performed worst, with significant issues in 76% of its responses, more than double the rate of the other assistants.
Comparing the BBC's results from earlier this year with this study reveals some improvement, but error rates remain high.
These issues matter because AI assistants are already replacing search engines for many users.
According to the Reuters Institute's Digital News Report 2025, 7% of total online news consumers use AI assistants to get their news, rising to 15% of under-25s.
'This research conclusively shows that these failings are not isolated incidents,' said EBU Media Director and Deputy Director General Jean Philip De Tender, in a press release.
'They are systemic, cross-border, and multilingual, and we believe this endangers public trust. When people don't know what to trust, they end up trusting nothing at all, and that can deter democratic participation.'
The research team has also released a 'News Integrity in AI Assistants Toolkit'.
Building on the extensive insights and examples identified in the current research, the Toolkit addresses two main questions: "What makes a good AI assistant response to a news question?" and "What are the problems that need to be fixed?".
In addition, the EBU and its Members are pressing EU and national regulators to enforce existing laws on information integrity, digital services, and media pluralism. They also stress that ongoing independent monitoring of AI assistants is essential, given the fast pace of AI development, and are exploring options for continuing the research on a rolling basis.
The findings also echo a recent Google News report, which found that fewer than 25% of news stories cited clinical research, particularly on vaccines.
With last-minute travelers already deferring approximately 18% of their protective vaccines for lack of time before departure, reliance on AI-produced news may further increase the risk of under-vaccination.
As former U.S. Surgeon General Dr. C. Everett Koop urged, people should take charge of their health by consulting a healthcare provider before making essential decisions based on digital information.