Interview

Nick Diakopoulos: Artificial Intelligence and Journalism Beyond the Hype 

Professor Nick Diakopoulos speaks to iMEdD about how generative artificial intelligence is affecting newsrooms and journalism’s comparative advantage in content production. He comments on the competitive pressures media face in distribution and sees the opportunity for trust in journalism as still very much alive.

A reference figure in the field of computational journalism since the time when programming, data, and technology-assisted reporting had yet, one might dare say, to fully cross to the other side of the Atlantic, Nick Diakopoulos is a Professor of Communication Studies and Computer Science at Northwestern University in Illinois, USA, and Director of its Computational Journalism Lab. His research has for years focused on automation and the use of algorithms in the production and distribution of news, on artificial intelligence and ethics, and on algorithmic transparency and accountability, which he long ago urged the journalistic world to adopt as one of the next day’s beats.

His book Automating the News: How Algorithms Are Rewriting the Media (Harvard University Press) was a publishing event in 2019 for the journalistic community, which at the time was still trying to find its footing in the new era taking shape for the news process. Since then, technological developments and the popularization of artificial intelligence through its mass availability have, of course, been rapid. Yet the “hype,” it seems, comes in cycles. And that is where we picked up our conversation with Diakopoulos when we met in Athens, on the sidelines of this year’s European Data & Computational Journalism Conference.

You have been doing extensive work in computational journalism and AI since long before the hype. What changes have you noticed compared with the time you wrote your landmark book, Automating the News?

It’s interesting —we go through these hype cycles with technology and journalism. When I wrote my book in 2018 (it was published in 2019), we were still in the AI and machine-learning hype cycle, and people in journalism were pretty excited about all those technologies. By 2020–2021, we were coming out of that cycle, and it was becoming more of a normal technology. It was just, “yeah, we’re going to use some machine learning to improve our news recommender systems or make news more relevant to our readers.” 

Then, in 2022, generative AI really hit with the launch of ChatGPT. Of course, generative AI had been around, percolating for a few years before that, but it wasn’t mainstream. You had the “ChatGPT moment” in 2022 and then this brand-new hype cycle around generative AI and journalism. I think we’re still in that hype cycle. I’ve dedicated the last couple of years to essentially trying to inject some empiricism and critical thinking into it — to get journalists to be more careful about accepting claims from technology companies out of the box and to push them to understand where generative AI actually has value for news production or the news media in general. I’m pushing news organizations to think about how to evaluate generative systems (kick the tires, measure benchmarks, and identify the right metrics to track) and to educate journalists about responsible prompting methods and the ethical and legal issues to be aware of.

There’s obviously a lot to keep up with on this topic, but what I’m starting to see now is that the generative AI bubble is beginning to deflate a little. What I’m looking for in the next couple of years is that, as these technologies continue to mature and work their way through organizations, they get boring in a way —and maybe that’s a good thing. Well, at least until the next bubble. Until the next hype cycle. 

You’re saying that’s probably a good thing because, once people get bored, they’re more careful about what they’re getting or how they’re using technologies —is that right? 

Once the hype bubble deflates, people are just like, “All right, does this help me do my job? Is this actually useful? Is it valuable?” At the end of the day, organizations need to make money to be sustainable, and once the hype glasses come off you get serious: “Does it save me money? Does it make me money?” 

Professor Nick Diakopoulos during his keynote speech at the University of Athens for the European Data & Computational Journalism Conference 2025, which took place in September 2025 in Athens, Greece. Photo: Dimitris Adamis/DataJConf2025

The core competence of news organizations is in gathering and content production. Where they’re struggling a bit more is distribution, because so much of the distribution system is beyond their control.

As you’ve stated several times, there are currently three main areas of AI use in journalism and the media: information gathering, content production, and distribution. Do media organizations deal equally and consistently with these areas when it comes to AI and generative AI? 

News gathering is the most expensive part of the news operation because you need people going out into the world, spending a lot of time preparing, interviewing, observing, recording, and so on. Then you have news production, which is also time-intensive but has much more technological support — software systems that transcribe information, tag it, and keep track of it, and so on. And then you have dissemination, which in some sense is the least labor-intensive. Certainly, when newspapers were printed, distribution was more labor-intensive —you had to hire people to drive vans around and physically move the paper. But now we’re operating digitally.
 
What I see is that the core competence of news organizations is in gathering and content production. Where they’re facing more competitive pressures, and maybe struggling a bit more, is distribution: “How do we deal with distribution in this new world?” Because so much of the distribution system is beyond their control. It’s whatever audiences want, wherever audiences are, and you have to adapt your distribution strategy. There are also all kinds of competitive forces from tech companies about how they want people, or how they think people want, to consume information — via AI summaries, chatbots, and so on — and news organizations don’t control that channel. I think they’re struggling with how to maintain their identity and still have a relationship with their audience when the distribution mechanism is really upset by AI companies. 

Big tech companies and platforms — like Meta, Google, TikTok, OpenAI, and other chatbots — increasingly influence how people access the news. How can the media adapt to this reality, which also sets new standards, particularly for online traffic? 

If people’s habit becomes “I go to my chatbot for current events,” that’s not a good thing for news companies. So, the question is: how do you build habits where people come directly to you for information?

I think it’s a question on a lot of people’s minds: what do we do strategically? There are a couple of options, and the strategy you choose probably depends on the kind of media outlet you are. Some outlets are mass-market, so they’re going to be hit a lot more by any decline in search traffic [or similar shifts]. Other news organizations are more niche, with audiences that come directly to their site or through newsletters. Those organizations may be less exposed to shifts in how content or traffic is referred, for example through chatbots. 

I don’t think there’s any one strategy that fits everyone. But generally, to the extent that news organizations can control their relationship with audiences, through direct distribution or a strong brand presence, they want to develop habits with people. If people’s habit becomes “I go to my chatbot for current events,” that’s not a good thing for news companies. It might be a good thing for users, to the extent they feel like that meets their information need.

So, the question is: how do you build habits where people come directly to you for information? There are a lot of tactics involved. You can have a big-picture strategy, but then how do you translate that into concrete tactics? For example, you have to be on social media. If people see a video on TikTok and wonder whether it’s real, you want to be there, so they see you as a place to go to understand.

After the social media boom in previous decades, do you think the media industry is more prepared? In other words, are there lessons learned that can help it confront, adapt, or evolve in this new era of “platformization”? 

The simple lesson is to be skeptical of the platforms, because they’re not really your friend. If they’re offering a deal, know that it’s because the platform wants something from the news publisher in that deal — and it’s probably not actually a favorable deal, at least not in the long term. I think that could be one lesson: you need to be pretty skeptical of negotiations with platforms. I don’t know the details of any of these deals, so who am I to say? Maybe it is worth it for media companies to make those deals.

How do you see the balance between news organizations, traditional journalism production, and platforms evolving over the next few years? 

There are a lot of ways it could evolve. In one possible future, news organizations might pursue a model of “content as data,” licensing their content to chatbot providers and maybe even using that to generate subscription revenue. Don’t forget — something like OpenAI is a subscription business. Why not have subscription add-ons for premium content? 

In one possible future, news organizations might pursue a model of “content as data,” licensing their content to chatbot providers and maybe even using that to generate subscription revenue.

We’ve seen this model work in streaming: I pay for Amazon Prime as my base service, but then I might want HBO content or some other package. So, [there could be] business arrangements where there is a subscription that people want, like the AI chatbot, and the publishers provide additional value on top as an add-on. This could attract a new set of customers who wouldn’t traditionally pay [for news content].

You coined the concept of algorithmic accountability reporting. How should —and how can— journalists adopt this approach in the age of generative AI and opaque systems like ChatGPT? 
 
I’ve started thinking about AI accountability as an evolution of algorithmic accountability. I think there are a few roles journalists can play. One is continuing to investigate these systems: understanding the data that goes into them, their environmental impact, and the harm they can perpetrate on individuals or communities. That’s just bread-and-butter reporting —telling the public what’s going on with these systems, what’s not working well, and how people are being hurt. 

There’s another, slightly more abstract dimension: how the media help set societal expectations around AI systems —the norms and expectations people have for how an AI chatbot should act or interact. I think the media has a more subtle impact on people. For example, if you’re reporting on a chatbot interacting with children, people come to understand what might be inappropriate. So, it starts setting norms and expectations for these systems. This is a more indirect impact of reporting, but it may also be important. It can also come through opinion writing and essays from people who are thinking critically and helping the public consider what is appropriate with these tools.

Something I’ve been thinking about is the potential for AI systems to really destroy social capital. Democracy really relies on relationships, communities, conversations, debates, and so on. AI, if used in the wrong way, can upset those relationships.

What do you see as the biggest ethical risks when journalists use AI and generative AI, both internally in their workflows and externally in their published work? 

When it comes to ethical risks, there’s inadvertent misinformation — being inaccurate if you’re not carefully checking everything. There can also be privacy risks if you’re using data that contains private information; there could be leakage of that data. There are certainly transparency issues: the ethics of explaining how you know what you know, what sources you’re relying on, and the provenance of information. These are some of the standard risks. 

But in ethics we don’t always think about the longer-term [implications]. Something I’ve been thinking about is the potential for AI systems to really destroy social capital. Democracy really relies on relationships, communities, conversations, debates, and so on. AI, if used in the wrong way, can upset those relationships. For example, science reporters are often freelancers, at least in the U.S. That means they’re constantly pitching ideas to editors. If they start using AI to write their pitches, how does that diminish their relationship with other people? That’s a microcosm. If you start eroding these relationships, what happens to the community over time? I don’t know if we always frame this as an ethical issue, but I think it is, because it speaks to the appropriateness of the activity.

After your keynote here at the conference, someone asked you about one of your biggest fears regarding AI in everyday newsroom use. You mentioned a future scenario involving fake bylines. This sounded more like a distrust of journalism than a distrust of generative AI systems. Is there a broader distrust of journalism that we need to address before we can discuss trust or distrust in AI systems?

I think there’s a reason for diminished trust in the media, which is that when you have more access to information, you don’t necessarily need to trust as much —you can just find other sources. Why rely on what you say if I can check a chatbot? 

What I mean is, journalists need to care about trust and be trustworthy. That can be one of their differentiators in an information environment where, much of the time, you can’t trust AI bots. Managing the trust question is definitely going to be a big challenge for journalism institutions. As they use AI as part of their processes, they also need to figure out how to communicate it effectively to audiences to maintain trust.

I made this argument several years ago in the Columbia Journalism Review, when I was starting to study deepfakes: as misinformation online grows, people need places that are reality-based. They need to know there’s a group of people that really went slow, had a careful methodology, and did their best to understand the real context, the truth around something. I feel there’s still a big opportunity for established media brands to leverage the trust they have with people to become the destination people turn to.

Nick Diakopoulos was in Athens, Greece, for the European Data & Computational Journalism Conference 2025, organised in academic partnership with the People-Centred AI Institute at the University of Surrey, the School of Computer Science and Informatics at Cardiff University, and the National and Kapodistrian University of Athens (NKUA), which hosted the event.

This interview was edited for length and clarity.
