Stories

What We Took Away From This Year’s European Data & Computational Journalism Conference 

We attended the fifth European Data & Computational Journalism Conference, held for the first time in Athens. Here, we gather some of the key takeaways.

Featured Image: Kuku Digital, European Data & Computational Journalism Conference

“It probably will come as absolutely no surprise to anyone in this room that we all routinely and frequently think about the future,” said Nick Diakopoulos, Professor in Communication Studies and Computer Science at Northwestern University, addressing journalists during his keynote at the fifth European Data & Computational Journalism Conference 2025.

The conference took place in Athens for the first time, with the Institute for People-Centred AI at the University of Surrey, the School of Computer Science & Informatics at Cardiff University, and the National & Kapodistrian University of Athens (UoA), which also hosted the event, as academic partners.

From September 8 to 10, 2025, journalists and experts in data and computational journalism from Greece, the rest of Europe, and the United States gathered to discuss the evolving relationship between the journalistic community and technology. As Dr. Constantinos Mourlas, Professor in the Department of Communication and Media Studies at UoA, put it: “We want to bring together experts in the field […] to explain the issues where journalism and technology intersect, the topics of computational journalism, and to engage in discussions with them, to learn from them […]”.

Although public trust in journalism is being shaken, faith in journalism as a “moral guardian” for artificial intelligence remains strong 

Nick Diakopoulos presented a study based on data from the Open Society Foundations’ AI in Journalism Futures 2024 research project, in which participants shared their perspectives on various versions of a future mediated by artificial intelligence.

The analysis of the various participant scenarios by Diakopoulos’s team revealed an optimistic note: the view that journalists can exert a highly positive influence on the trajectory of artificial intelligence. “[…] We see that people write about how journalists play a positive role as content providers for AI. They supply raw material, verify facts, help shape and curate the content fed into AI systems, act as moral guardians, and provide editorial oversight. They uphold professional and ethical standards. So, people are really emphasizing the very positive potential of the interaction between artificial intelligence and journalists — or between journalists and artificial intelligence.” 

The only way not to fear artificial intelligence is to experiment 

Several newsrooms have already begun taking bold steps to integrate the new technology into their journalistic workflows. This was highlighted in the session entitled “AI Innovation in Newsrooms: From Promise to Impact”, which featured professionals in strategic innovation roles at major news organizations, including, among others, the BBC, the Financial Times, and Tamedia in Switzerland.

The panel titled “AI Innovation in Newsrooms: From Promise to Impact.” From left to right: Laura Ellis, Oli Hawkins, Titus Plattner, Karyn Fleeting. Photo: Dimitris Adamis, European Data and Computational Journalism Conference

Although the initial global frenzy over ChatGPT and other conversational models caused hesitation in newsrooms, editorial teams allowed their staff to freely test generative AI and experiment with it, either through off-the-shelf tools or with models developed in-house for their own projects. As emerged from the discussion, the curiosity of some journalists generated expertise that became essential for the entire editorial team. “I am absolutely loving what’s coming out of this [while trying to keep up], which is a closer relationship between all of the media organizations we see here and many others,” said Laura Ellis, Head of Technology Forecasting at the BBC. “We’re all looking together at how we deal with this, how we work on things like guidelines, how we work on transparency. And I think that’s a nice thing to do.”

Marc Lavallee, whose work focuses precisely on the intersection of media and artificial intelligence, further discussed this idea in his keynote speech. He argued that journalists should follow their curiosity, step beyond the narrow confines of journalism, and evaluate new tools with fresh perspectives—even if that means leaving their professional bubble. He urged journalists to “Bring the idea of play to your newsroom.”  

‘Vibe coding’ for quick results on small projects: it’s like knowing how to code without knowing how to code

In one of the workshops hosted in the Union of Athens Daily Newspaper Editors’ building, Dhrumil Mehta, Associate Professor of Journalism at Columbia University, and Aarushi Sahejpal, Adjunct Professorial Lecturer in Information Technology & Analytics at American University in Washington D.C., showed participating journalists how to create charts using Vibe Coding. 

This is a method of programming with the help of artificial intelligence: instead of writing code ourselves, we describe the task we want to accomplish to a chatbot backed by a Large Language Model (LLM), and the chatbot responds with code that carries it out.

In this case, the two professors demonstrated how journalists without programming knowledge can easily and quickly create charts for small projects. Sahejpal noted that it’s not necessary to be experts in charting tools like the D3 library. Often, the focus should be on doing good journalism. Mehta added that the method also has educational value: journalists can study the code generated by the model and try to understand the process that produces the desired outcome. 
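The loop the professors describe can be sketched in miniature. The following is an illustrative toy, not material from the workshop: the prompt is invented, and the `bar_chart` function stands in for the kind of snippet a chatbot might return.

```python
# Vibe-coding loop in miniature: describe the task in plain language,
# and the model answers with runnable code.
PROMPT = (
    "Write a Python function that takes a dict of category -> value "
    "and prints a simple horizontal bar chart in the terminal."
)

# The kind of code a chatbot might send back (hypothetical response):
def bar_chart(data, width=40):
    """Print a text bar chart, scaling the longest bar to `width` characters."""
    top = max(data.values())
    for label, value in data.items():
        bar = "#" * round(width * value / top)
        print(f"{label:>10} | {bar} {value}")

bar_chart({"Athens": 125, "Cardiff": 80, "Surrey": 45})
```

In a real session, the journalist would paste the model’s code into a notebook, run it, and keep refining the prompt until the chart looks right.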

Dhrumil Mehta, Associate Professor of Data Journalism at the Columbia University Graduate School of Journalism. Photo: Kuku Digital, European Data and Computational Journalism Conference

Journalism lacks internal accountability mechanisms, but some are trying to create one.   

In another session, Bette Dam, an independent journalist who spent most of her career reporting from Afghanistan, and Dhrumil Mehta presented the UNHEARD project, one of the first attempts to use AI to detect whose voices are being quoted in news articles.

The project began with Dam’s study of the New York Times and Associated Press coverage of the war in Afghanistan. Her team manually analysed 1,500 articles and found that AP and the Times relied mainly on the accounts of U.S. officials, while the Afghan officials mentioned in the articles largely echoed the Western line.  

Today, the project’s AI-driven analysis of articles is still in its early stages, but Dam and Mehta plan to expand it so that it can identify sources in crisis reporting and beyond. “We intend to work with newsrooms to enable introspection and change,” said Mehta, who is also Deputy Director at the Tow Center for Digital Journalism at the Columbia University Graduate School of Journalism.
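To make the idea of machine-detecting quoted voices concrete, here is a toy sketch in Python. It is not the UNHEARD pipeline, whose internals the presenters did not detail: it only matches one common English attribution pattern with a regular expression and tallies the names, where a real system would need far more robust natural-language processing.

```python
import re
from collections import Counter

# Toy illustration: count who gets quoted by matching the common
# '"...," said First Last' attribution pattern in article text.
ATTRIBUTION = re.compile(r'"[^"]+,?"\s+said\s+([A-Z][a-z]+(?:\s[A-Z][a-z]+)*)')

def quoted_sources(articles):
    """Return a Counter of names credited with direct quotes."""
    counts = Counter()
    for text in articles:
        counts.update(ATTRIBUTION.findall(text))
    return counts

articles = [
    '"The situation is stable," said John Smith, a U.S. official.',
    '"We agree," said John Smith. "Progress is real," said Amina Karimi.',
]
print(quoted_sources(articles))
```

Even this crude tally hints at the question the project asks: whose account dominates the coverage, and who barely appears.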

Everyone benefits from open data — so let’s make it available to the journalistic community 

When shared, data can serve as a bridge for collaboration among journalists, especially in small local newsrooms. This was the idea championed by veteran journalist Cheryl Phillips, a two-time Pulitzer Prize winner who teaches in Stanford University’s Journalism Program. She is also the founder and co-director of the data-sharing platform Big Local News, which was the focus of her talk at the conference.

Cheryl Phillips, Hearst Professional in Residence in the Journalism Program at Stanford University. Photo: Dimitris Adamis, European Data and Computational Journalism Conference

“Our resources are thin, and it takes a lot of time to tell important stories. That’s where data comes in,” she said, later adding that “the partnership with local newsrooms proved powerful because they amplified the content […] You can effect change at the local level more than at the national level”.

Nevertheless, many news organisations, even when they make the data they collect openly available, tend to treat it mainly as a potential source for their own future stories. A University of the Aegean study, presented in the talk by researcher Georgios Papageorgiou titled “Exploring Media Contributions to the Open Data Ecosystem – An Analysis of GitHub Repositories from Large News Organisations”, found that 13 of the 20 organisations studied had repositories on GitHub, but only eight of them actively provided open data.

Sometimes we use ChatGPT for the wrong reasons, even though better-suited tools are available. 

LLMs are not designed to replicate content exactly. “You might get a table, but probably not the table itself,” said Jonathan Soma, Knight Chair Professor of Professional Practice in Data Journalism and Director of the Professional Program in Data Journalism at Columbia University, in the workshop “AI-native, spatially-aware document processing with Natural PDF”.

One of the many challenges in journalistic research is extracting data trapped in PDF files. Inspired by pdfplumber, a tool by New York Times data editor Jeremy Singer-Vine, Soma created a PDF processing library called Natural PDF to further aid document research. Only a few lines of code are needed to use the library, which makes PDF processing feel, indeed, very natural. Soma, who presented a step-by-step guide to Natural PDF in his workshop at the conference, makes all related materials openly available on GitHub.

The more we use artificial intelligence, the more often we need to ask ourselves about the cost of error. 

Many speakers pointed to productivity gains in newsrooms thanks to artificial intelligence. For example, the BBC uses it for translation and text summarization, as well as for the pilot creation of voiceovers in summaries of English football news, read in the tone and accent of its journalists from different parts of the United Kingdom. Over the past decade, Bloomberg has scaled up automation to produce thousands of articles, mainly routine news such as corporate earnings reports, so that journalists can use the outputs of semi-automated systems to conduct in-depth reporting instead of performing repetitive tasks. Finally, the Austrian Press Agency is running an ongoing project for the automatic generation of alt-text descriptions for every static, data-based chart.

The challenge, however, is that “you can’t just plug AI into existing editorial processes and expect positive results,” as emphasised by Oli Hawkins, data scientist at the Financial Times. “Certain conditions must be met for there to be a truly positive impact. To decide where to apply artificial intelligence, it is essential to ask the question: ‘What is the cost of error?’”
