
The role of Artificial Intelligence in the newsroom

What challenges do journalists face when implementing Artificial Intelligence (AI) in their newsrooms, and how do they address them?

“It was the first time I saw machine learning in an investigation, and my heart went faster,” says Flor Coelho during a break at iMEdD’s International Journalism Forum 2023. Sitting on the stairs behind the venue at Piraeus 260, Flor tells iMEdD how serving as a judge for the Online Journalism Awards in investigative and data journalism changed her life a few years ago. That was when she realized the capabilities of Artificial Intelligence. Now, in La Nación’s data unit, she researches the challenges AI poses and the opportunities the technology offers journalists. But her news organization is not the only one integrating Artificial Intelligence into its practices.

Discussions on AI in journalism certainly permeated the Forum. While trust in the media had been the Forum’s central theme in 2022, the interplay between journalism and AI introduced an additional layer of complexity into that relationship. This tension was discussed during the panel “AI in the Newsroom: A Debate on AI’s Role in the Media”. But is the journalistic world ready for this discussion?

The five experts on the panel “AI in the Newsroom: A Debate on AI’s Role in the Media”. Photo by: Alex Grymanis

A small newsroom and a significant change 

In the 2019 JournalismAI report, respondents were already aware of the possible impact AI could have on their newsrooms. Of the 71 news and media organizations surveyed, those not yet using AI expressed the intention to adopt the technology within five years at most. However, when ChatGPT entered the picture in November 2022, the steps that followed were bigger and, most importantly, faster.

“When discussing AI, I divide my history as before and after ChatGPT, in the same way human history is usually told as before and after Christ […] or before and after COVID,” says Flor, smiling. The chatbot application, introduced by OpenAI, embodies the abilities of generative AI in a simple, easy-to-use tool of great power. What sets it apart from traditional AI is its capacity to learn patterns from input data without relying on explicit human rules, while still mimicking human creativity.

The challenge in the realm of generative AI lies in accessing large amounts of high-quality data, which researchers often source from the internet. It was the availability of this public knowledge that enabled ChatGPT’s rapid development. As mentioned in the panel, in less than a year, generative AI applications have become multimodal, meaning that Artificial Intelligence can now process and generate different types of content, including text, audio, and images.

The latest JournalismAI report, compiled and published by the POLIS project with funding from Google, surveyed 105 news and media organizations in 26 countries. As the report notes, by 2023 news organizations had already integrated AI into every stage of the news production process. The most popular applications were audio transcription and editing, fact-checking, web scraping, and image generation. Among the participating newsrooms, 80% expect increased use of AI for journalistic purposes.

Nevertheless, fast implementation demands awareness of the challenges it brings. The Center for News, Technology, and Innovation (CNTI), an independent global policy research center for media, attributes the complexity of generative AI to its creative ability: it tends to “hallucinate,” producing inaccurate facts, absorbing strong biases, and committing plagiarism because of the data it relies on to generate content. As Charlie Beckett, Director of the JournalismAI project at Polis, the London School of Economics and Political Science (LSE), notes, many newsrooms found themselves unprepared for the change. The prevailing sentiment is that they now face starting anew and reevaluating their decisions.

Tech companies have created amazing products that have been fantastic to help us create and distribute journalism. It can be positive and negative, especially when those tech companies change their minds

Charlie Beckett, Director of the JournalismAI project at Polis, the London School of Economics and Political Science (LSE)

Tech and Journalism: A match made in heaven? 

In July 2023, Reuters revealed that Google was in talks with various news organizations, including the Washington Post, the Wall Street Journal, and the New York Times, about producing AI-generated articles. “The AI tool pitched is called Genesis,” reported Reuters, “and it’s designed to help with journalists’ productivity.” As Reuters pointed out, the Associated Press had announced its partnership with OpenAI to implement AI in the news-making process a few days earlier. These developments have prompted many newsrooms, like Flor’s, to reconsider their collaborations with tech companies.

“Any news organization, when dealing with tech companies, knows they don’t exist to serve journalism. They’re there to make money and serve many other industries and the public in general,” says Beckett. “We know from our own experience that in the past, tech companies have created amazing products that have been fantastic to help us create and distribute journalism. It can be positive and negative, especially when those tech companies change their minds”.  

“I think we must be transparent about this relationship and consider from whom we receive money,” says Flor. “If we feel that it’s changing our reporting on issues related to this company, we should stop being involved in this kind of relationship.” For her team, that means transparency and separating tech reporting from money received for technology-related projects. Beckett says this is the way forward for many newsrooms. “All the editors I speak to know this; they know they must build their own language models. But you have to be realistic. Technology companies are way ahead. They’ve invested the money and know what they’re doing, so there is a rationale for collaboration.”

However, collaboration has to be built on trust from both parties. In August 2023, OpenAI gave website publishers the option to prevent its GPT web crawler from using their data for generative purposes, without, however, removing previously scraped material already used to train ChatGPT. Many newsrooms worldwide blocked the crawler from their websites. Beckett is optimistic about this move. “It seems that, in some ways, it’s quite a sensible precaution. In some ways, it’s a bit late. It has already happened,” he says. “Going forward, I can see that it’s good. First, it’s a sort of hygiene thing, […] but also a good negotiating stance; if you want to use our data to build your product, you should be paying us.” It is worth noting that some of these newsrooms are also on the list of Google’s collaborations for the Genesis tool. So, how can news organizations strike a balance between securing their intellectual property and exploring the possibilities of AI?
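
In practice, the opt-out works through the long-standing robots.txt convention. As a minimal sketch, assuming the GPTBot user-agent token that OpenAI documented in August 2023, a publisher blocks the crawler by adding the following lines to the robots.txt file at the root of its website:

    # Disallow OpenAI’s GPTBot crawler from the entire site
    User-agent: GPTBot
    Disallow: /

Because robots.txt only governs future crawling, such rules cannot retract material that has already been collected, which is precisely the limitation newsrooms pointed to.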

Florencia Coelho during her workshop “AI in small newsrooms”. Photo by: Alex Grymanis

When discussing AI, I divide my history as before and after ChatGPT, in the same way human history is usually told as before and after Christ

Florencia Coelho, New Media Research & Training Manager, La Nación Data

Finding our footing: Laws and Guidelines

The journalistic community has been vocal in calling for AI rules with the general force of law. In light of the European Union’s negotiations over the AI Act, Reporters Without Borders (Reporters sans frontières, RSF) spoke up, denouncing fully automated sites that reproduce fake news. “AI systems must be fully audited to verify their compatibility with journalistic ethics before they can be used,” writes Vincent Berthier, Head of RSF’s Tech Desk, who also took part in the Forum’s panel discussion.

On the national and international level, efforts by legislative bodies have only just begun to show. On 6 December 2023, the European Union published the AI Act after two years of development. First proposed in 2019 and drafted in 2021, what is now called the AI Act provides clear outlines of the basic definitions surrounding the technology, calls for transparency before products are placed on the European market, and opens up development opportunities. Similarly, on 30 October 2023, U.S. President Biden signed an executive order regulating AI, “requiring new safety assessments, equity and civil rights guidance and research on AI’s impact on the labor market”. This followed the May hearing before the United States Congress at which Sam Altman, CEO of OpenAI, the company behind ChatGPT, testified about the importance of regulating Artificial Intelligence.

Legislation is still vague, and newsrooms must establish their own safety net to govern the relationship between themselves, AI companies, and, most significantly, their audiences. Back in 2019, the most striking finding of the JournalismAI report was that, of the newsrooms surveyed, only three had a clear strategy on AI and how it would be implemented. After ChatGPT was released, the issue of strategy became pressing.

In the past year, there have been a couple of cases, reported by the Guardian and The Verge, of news articles written by AI. The fear of journalists being replaced by technology is not new, but it becomes more prominent when such cases come to light. “Credibility is key for a news publisher. And if the audience loses trust in what you do, […] you are done,” said Juan Carlos Van Meek, Director of Digital Information and Programming at Al Jazeera Media Network, during the panel discussion. All four journalists agreed that guidelines and AI strategies are essential for every newsroom, and that these guidelines should always be adapted to the technological advancements of the era.

While pressing for stricter regulations at the European level, RSF is already working on a global AI charter for the media, to help organizations identify the critical elements of their guidelines. On 6 September 2023, the European Publishers Council, a group of chairmen and CEOs of leading news organizations such as the Times and the Guardian, released the Global Principles for Artificial Intelligence (AI) in an attempt to help media professionals integrate AI into their production workflows. As of now, however, these principles have not seen widespread adoption by newsrooms.

“I’m scared because the creators of these tools are scared,” says Flor when asked about her feelings regarding AI. “They asked for regulation because it’s like driving a Ferrari. It depends on whether a kid or an adult drives it. […] The same opportunities these tools have for good can be used for bad.” It is no different for journalists, and Beckett confirms it. Rapid adaptation has left newsrooms uneasy but eager for change. “There are all kinds of legal issues around the responsibility and accountability of this technology, and that’s more of a policy issue. I’m more interested in how it will shape the work of individual journalists and production flows, which sounds quite dry. But for me, that’s the heart of it.”

Watch the full panel discussion here: