What journalists can do to increase transparency in algorithmic systems 

AI and opaque algorithms increasingly drive life-altering outcomes, shaping everything from online content to government decisions. As their influence expands, how can journalists push for greater accountability? Ojooluwa Ibiloye, an SNF Ithaca Graduate Fellow at the University of Delaware, reports on expert insights from the 2024 iMEdD International Journalism Forum in Athens for the Forum’s Pop-Up Newsroom.

At the 2024 iMEdD International Journalism Forum, investigative journalists Daniel Howden, Giorgos Christides, Pablo Jiménez Arandia, and Pierluigi Bizzini showcased how journalism can critically examine artificial intelligence's (AI's) inner workings. During the panel discussion "Inside the machine: Investigating AI and algorithms", they shared insights from investigations that promoted transparency and accountability in AI systems, covering tools used for migrant surveillance, risk assessment algorithms in the criminal justice system, and welfare fraud detection software. The panel was moderated by Timothy Large, Director of Independent Media Programmes at the International Press Institute (IPI).

From left to right: Daniel Howden, Giorgos Christides, Pablo Jiménez Arandia, Pierluigi Bizzini and Timothy Large.

AI surveillance on migrants: lack of transparency blocks journalism 

Giorgos Christides, a Greece-based investigative journalist focusing on migration, started by saying that his team's findings "are ominous." Across Europe, significant EU and national funding is being allocated to the development and purchase of surveillance programs. "Dozens of programs and lots of money are being funneled into AI and smart border technologies to detect asylum seekers before they come close to the EU borders and stop them beforehand," he said, adding: "We have found surveillance tools such as drones and data-scraping applications being deployed for border management, often without the consent or awareness of the people involved."

Christides emphasized that tech companies and public institutions are blocking access to information on how the "smart algorithms" used to monitor migrants are designed, making it harder for investigative journalists to do their work. In some cases, he said, people's rights have been violated, including their right to claim asylum. "We've seen a staggering lack of transparency in how these algorithms operate. Much of our reporting has depended on whistleblowers and on-the-ground investigations."

Examining predictive algorithms in criminal justice 

While AI offers incredible potential, it also poses significant ethical and democratic challenges. Pablo Jiménez Arandia, a Spanish journalist, has been working on the Decision Machines project, which examines, from a human rights perspective, how governments in three EU countries (Spain, Italy, and the Netherlands) use predictive algorithms in sensitive areas such as courts, prisons, and policing. In Spain, RisCanvi, a risk assessment tool, has been used for 15 years in Catalonia's prisons, relying on socioeconomic characteristics to predict the likelihood of an inmate reoffending after release. "The algorithm's score is applied in various ways, influencing decisions about inmate treatment and rehabilitation programs, and is also used by judges and prosecutors when determining parole eligibility," he said.
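RisCanvi's full specification is not public, but actuarial tools of this kind typically sum weighted yes/no risk factors into a score that is then mapped to a risk band officials act on. The Python sketch below uses invented items, weights, and thresholds purely for illustration; none of them are RisCanvi's actual parameters.

```python
# Illustrative sketch of an actuarial risk-scoring tool of the kind the
# panel described. Items, weights, and thresholds are hypothetical.

RISK_ITEMS = {
    "prior_offenses": 3,                 # invented weight per yes/no item
    "age_at_first_offense_under_21": 2,
    "no_stable_employment": 2,           # a socioeconomic input
    "substance_abuse_history": 2,
    "lacks_family_support": 1,           # another socioeconomic input
}

def risk_band(inmate: dict) -> str:
    """Sum the weights of the items flagged for this inmate, then map
    the total score to a risk band."""
    total = sum(w for item, w in RISK_ITEMS.items() if inmate.get(item))
    if total >= 7:
        return "high"
    if total >= 4:
        return "medium"
    return "low"

# A single socioeconomic flag can push someone into a higher band.
print(risk_band({"prior_offenses": True, "no_stable_employment": True}))
# -> "medium"
```

Even in this toy version, a socioeconomic flag such as unstable employment shifts the score, which is why such inputs draw scrutiny from a human rights perspective.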

AI in policing: ethical concerns and accountability 

The use of AI in crime control is also on the rise, often at the expense of the rights and dignity of those under surveillance. Pierluigi Bizzini, a freelance journalist based in Sicily, has been investigating KeyCrime, a predictive policing software used by police in the Milan area of Italy since 2008 to combat crime. When a new crime occurs in a specific area, the software collects extensive interview data and compares details of the latest incident with data from past cases to identify patterns and potential repeat offenders. However, concerns have been raised that conducting interviews immediately after an incident may pressure victims into making statements while still in shock. The practice of gathering and using data from crime suspects further complicates the ethical issues surrounding predictive algorithms in law enforcement. The accumulation of collaborative investigative work done in Europe provides a model for how journalists across the globe can expose the inadequacies of algorithmic systems that affect many people's lives.
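KeyCrime's internal matching logic is not public. As a rough illustration of the general crime-linkage technique described above, comparing a new incident's details against past cases, here is a minimal Python sketch using a simple feature-overlap score; the cases and features are hypothetical.

```python
# Minimal sketch of crime linkage by feature overlap. Everything here is
# invented for illustration; it is not KeyCrime's actual logic.

from dataclasses import dataclass, field

@dataclass
class Incident:
    case_id: str
    features: set = field(default_factory=set)  # e.g. weapon, transport, time

def similarity(a: Incident, b: Incident) -> float:
    """Jaccard overlap between two incidents' feature sets."""
    if not a.features or not b.features:
        return 0.0
    return len(a.features & b.features) / len(a.features | b.features)

def rank_matches(new: Incident, history: list[Incident], top_k: int = 3):
    """Return the past cases most similar to the new incident."""
    scored = [(similarity(new, old), old.case_id) for old in history]
    return sorted(scored, reverse=True)[:top_k]

history = [
    Incident("2015-031", {"handgun", "motorbike", "morning", "pharmacy"}),
    Incident("2017-112", {"knife", "on_foot", "evening", "bar"}),
]
new = Incident("2024-007", {"handgun", "motorbike", "morning", "bank"})
print(rank_matches(new, history))  # the 2015 robbery scores highest
```

The ethical questions raised by the panel live in the inputs: what counts as a "feature", who records it, and under what conditions the interview data behind it was obtained.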

Pierluigi Bizzini added that, in Italy, Freedom of Information Act (FOIA) requests are often unhelpful. "Journalists either receive no response or must wait two to three months for one," he said. The state's lax stance on transparency leaves critical questions unanswered about how these algorithms are trained and how they function.

There is a role for journalists to devise and distribute strategies for breaking through the roadblocks surrounding access to algorithm information.

Daniel Howden, founder and director of Lighthouse Reports

Exposing biased algorithms in welfare systems 

Daniel Howden, the founding director of Lighthouse Reports, has led investigative efforts that show that algorithms are far from neutral. Algorithms are being used to influence government decisions, which has far-reaching consequences in sensitive areas like social welfare. An 18-month investigation by Lighthouse Reports into AI-driven risk-scoring systems in welfare programs across Europe revealed that algorithms often reflect the biases of the data they are fed. The report showed that the algorithmic ‘Suspicion Machine’ used to detect welfare fraud in Rotterdam, Netherlands, disproportionately targets individuals based on ethnicity, language, gender, parenthood, appearance, and relationship status. Despite analyzing over 315 data points, the algorithm performed only slightly better than random guessing and has been widely criticized for worsening inequalities. 
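"Slightly better than random guessing" is a testable claim: a journalist with access to a model and its data can compare the model's ROC AUC against the 0.5 baseline of a coin flip. The Python sketch below illustrates that generic test on synthetic data; the model and data are stand-ins, not Rotterdam's actual system.

```python
# Generic test behind "only slightly better than random guessing":
# compare a risk model's ROC AUC to the 0.5 of random selection.
# Synthetic data only; not the Rotterdam model or its inputs.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 20))   # stand-in for hundreds of profile fields
# Outcome with only a weak relationship to the features:
y = (X[:, 0] + rng.normal(scale=5, size=5000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"model AUC: {auc:.2f}  (random guessing: 0.50)")
# An AUC barely above 0.5 means the people flagged for investigation are
# selected almost at random, despite the system's apparent sophistication.
```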

As Howden explained, "Being flagged because the machine has decided that, according to your profile, you may be guilty of fraud can mean losing your rights to welfare, losing the ability to make rent, losing access to payments that put food on the table for your children." Following widespread criticism of its discriminatory outcomes, Rotterdam suspended the use of the algorithm in 2021. After discontinuing the system, the city authorities issued a statement encouraging cities all over the world to subject themselves to the same level of scrutiny.

What should journalists do to get under the bonnet of AI?

Watchdog journalism: “There is a role for journalists to devise and distribute strategies for breaking through the roadblocks surrounding access to algorithm information,” said Howden. The lack of transparency regarding how algorithmic data is collected, used, shared, and stored creates an urgent need for journalists to hold public authorities and tech developers accountable for what goes into these systems.

Cross-sector collaboration: “Journalists need to develop expertise or seek collaboration with techies and data scientists to peer inside the box of these mystical machines,” noted Christides. For algorithms to be used fairly and responsibly, there must be a higher level of transparency.

Alliance with communities: "Journalists need to create alliances with communities," said Bizzini, emphasizing that since the decisions of AI systems are largely driven by their training data, broadening those data points is key. Empowering communities with knowledge of AI systems is critical for mitigating their unfair impacts. Deliberative democracy offers an opportunity for journalists, policymakers, and data scientists, including academics, to collaboratively account for the normative values that could help build legitimacy around AI infrastructure.

Ensuring fairness in AI: the essential role of journalism

AI is supposed to augment, not replace, human decisions, particularly in public services that demand a high level of fairness and accountability. Ensuring greater oversight and understanding of how these algorithms operate is essential to building fairer and more responsible systems. What is the role of journalists, and how can they collaborate with policymakers, private companies, educators, and communities to improve the quality of the data used in building AI systems?

We spoke with two experts at the University of Delaware: Prof. Danilo Yanich, a Professor of Urban Affairs and Public Policy at the Joseph R. Biden, Jr. School of Public Policy, and Dr. Timothy Shaffer, Director of the Stavros Niarchos Foundation (SNF) Ithaca Initiative.

Algorithmic systems designed by Big Tech may be intended to serve the public interest or driven primarily by profit motives; if the latter, serious ethical concerns arise, said Prof. Yanich. This calls for a more participatory approach to AI design, one that includes the voices of the people most affected by these fast-moving technologies. Journalists can help by verifying claims and providing evidence. "This is the way to go. We need to verify whether AI-driven decisions work or not," said Prof. Yanich.

"Media plays a crucial role in facilitating conversations, uncovering and telling stories that otherwise would never be known, helping to inform people about different realities, and guiding them through the complexities of how to trust or question AI-based decisions," said Dr. Timothy Shaffer. He also called for collaboration between academics and journalists, a relationship that reinforces critical thinking about the sociological, historical, and ethical dimensions of emerging technologies. "Education," he suggested, "should help us understand that nothing, including AI, exists in isolation. Everything comes from somewhere and is often more complex than we might assume."

You can watch the discussion again here.

Check out all Pop-Up Newsroom 2024 stories here.