
BBC News finds that AI tools “distort” its journalism into “a confused cocktail” with many errors

This article was originally published by Nieman Journalism Lab on 13/2/2025 and is hereby reproduced by iMEdD with permission. Any reprint permissions are subject to the original publisher.

When the BBC tested four generative AI tools on articles from its own site, it found many “significant issues” and factual errors, the company said in a report released Tuesday.

The BBC gave four AI assistants — OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini, and Perplexity — “access to our website for the duration of the research and asked them questions about the news, prompting them to use BBC News articles as sources where possible. AI answers were reviewed by BBC journalists, all experts in the question topics, on criteria including accuracy, impartiality, and how they represented BBC content,” Pete Archer, the BBC’s program director for Generative AI, wrote.

The AI assistants’ answers contained “significant inaccuracies and distorted content from the BBC,” the company said. Over half (51%) of the AI answers contained “significant issues of some form,” 19% of answers “introduced factual errors — incorrect factual statements, numbers, and dates,” and “13% of the quotes sourced from BBC articles were either altered from the original source or not present in the article cited.”

A few examples of errors, from the report:

- Google’s Gemini incorrectly stated that “The NHS advises people not to start vaping, and recommends that smokers who want to quit should use other methods.” In fact, the NHS does recommend vaping as a method to quit smoking.
- Microsoft’s Copilot incorrectly stated that Gisèle Pelicot uncovered the crimes against her when she began having blackouts and memory loss. In fact, she found out about the crimes when the police showed her videos they had found after confiscating her husband’s electronic devices.
- Perplexity misstated the date of Michael Mosley’s death and misquoted a statement from Liam Payne’s family after his death.
- OpenAI’s ChatGPT claimed in December 2024 that Ismail Haniyeh, who was assassinated in Iran in July 2024, was still part of Hamas leadership.

The Guardian also noted that “in response to a question about whether the convicted neonatal nurse Lucy Letby was innocent, Gemini responded: ‘It is up to each individual to decide whether they believe Lucy Letby is innocent or guilty.’ The context of her court convictions for murder and attempted murder was omitted in the response, the research found.”

“It’s not hard to see how quickly AI’s distortion could undermine people’s already fragile faith in facts and verified information,” Deborah Turness, CEO of BBC News and current affairs, wrote in a blog post. “We live in troubled times, and how long will it be before an AI-distorted headline causes significant real-world harm?”

Turness also pointed to the BBC’s recent reporting on Apple’s AI-generated news notifications. A notification in December, for instance, “made it appear BBC News had published an article claiming Luigi Mangione, the man arrested following the murder of healthcare insurance CEO Brian Thompson in New York, had shot himself. He had not.” Last month, Apple suspended the AI-generated notifications.

Note from the author: The BBC clarified to me that it amended its robots.txt to allow the AI assistants to crawl the site for the duration of the experiment.
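
For context, robots.txt is a plain-text file served at a site’s root that tells crawlers which paths they may fetch. The BBC has not published the exact file it used, but a temporary allowance for the assistants’ crawlers might look like the sketch below. The user-agent tokens shown (GPTBot for OpenAI, Google-Extended for Google, PerplexityBot for Perplexity, and Bingbot for Microsoft, whose Copilot draws on Bing’s index) are the vendors’ documented crawler names; pairing them with a blanket Allow rule is an assumption for illustration, not the BBC’s actual configuration.

```
# Hypothetical robots.txt excerpt: temporarily permit documented AI
# crawlers for the duration of the research. The BBC's real file is
# not public; tokens below are the vendors' published user agents.

User-agent: GPTBot           # OpenAI (ChatGPT)
Allow: /

User-agent: Google-Extended  # Google (Gemini)
Allow: /

User-agent: PerplexityBot    # Perplexity
Allow: /

User-agent: Bingbot          # Microsoft (Copilot answers draw on Bing)
Allow: /

# All other crawlers keep the site's usual default policy.
```

Reverting the file at the end of the research window would restore the previous crawl policy, which is consistent with the note’s “for the duration of the experiment.”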