How can AI be used in investigative journalism and fact-checking? How do journalists (not) cover AI? These were some of the questions explored at this year’s AI and the Future of News conference, hosted by the Reuters Institute for the Study of Journalism on March 17, in a series of panel discussions that are also available online.
Reporting on AI with no fear
When it comes to covering AI, jargon and “big tech narratives” can be daunting for non-expert journalists, said both Niamh McIntyre of the Bureau of Investigative Journalism and Joanna Kao of the Pulitzer Center. Talking with other journalists and experts can help reporters decode AI.
A lack of public information leaves some AI stories in the dark, McIntyre noted. To address this, journalists can turn to sources that may be more willing to speak, such as the lowest-paid workers in tech companies, she added. Following up on people’s claims can also uncover stories, according to Kao.
Investigate with AI, but always verify
AI supports the investigations of smaller, resource-constrained newsrooms, for instance through data analysis, highlighted Elfredah Kevin-Alerechi of the Colonist Report. However, verification is essential, she cautioned, a point reiterated by Reuters’ Ryan McNeill and The Economist’s Sondre Ulvund Solstad in a discussion of vibe-coding.
Because AI’s outputs rely on the data it is trained on, the lack of reliable data, in rural areas for example, poses a major challenge, Kevin-Alerechi noted. “That’s why I tell journalists that AI cannot take your job – because you can go to the field,” she said, pointing to the opportunity these data gaps create for reporters.
AI as a fact-checking tool
Despite its role in disseminating misinformation, AI also serves as a powerful fact-checking tool. Fact-checking organizations like Maldita and Full Fact use LLMs to identify and categorize claims. To fact-check the 2024 UK elections, Full Fact used Google Research’s BERT language model, which has been trained on millions of sentences in more than 100 languages, to filter claims by topic and keywords and surface the most important ones to check. Meanwhile, Aos Fatos is developing “Busca Fatos”, a newsroom tool for fact-checking live coverage.
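To make the triage step concrete, here is a minimal sketch of filtering claims by topic keywords. It is not Full Fact’s actual pipeline (which uses a BERT classifier rather than keyword matching), and all topic names and keywords below are illustrative assumptions.

```python
# Hypothetical claim triage by topic keywords -- a simplified stand-in
# for a trained classifier. Topics and keywords are invented examples.

TOPIC_KEYWORDS = {
    "economy": {"inflation", "gdp", "taxes", "jobs"},
    "health": {"nhs", "hospital", "vaccine", "waiting"},
    "immigration": {"migrants", "asylum", "border", "visas"},
}

def tag_claims(claims):
    """Assign each claim the topics whose keywords it mentions."""
    tagged = []
    for claim in claims:
        words = set(claim.lower().split())
        topics = [t for t, kws in TOPIC_KEYWORDS.items() if words & kws]
        tagged.append((claim, topics))
    return tagged

claims = [
    "inflation has doubled and taxes are at a record high",
    "nhs waiting lists have grown every year",
    "the weather was lovely at the rally",
]
for claim, topics in tag_claims(claims):
    print(claim, "->", topics or "no match, lower priority")
```

A production system would replace the keyword sets with model predictions, but the output shape is the same: each incoming claim gets zero or more topic labels, and unmatched claims can be deprioritized.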
The Guardian’s approach to AI
Training on AI and its limitations is mandatory for the Guardian’s reporters, who use AI to curate the outlet’s tag pages and to support complex investigations, such as one that used LLMs to explore 100 years of anti-immigration rhetoric in the British Parliament, said the Guardian’s Chris Moran. The outlet remains skeptical of building chatbots, such as those employed by the FT and The Washington Post, due to accuracy and accountability concerns, Moran added.