The point where AI slop and propaganda converge is journalism’s latest challenge. When institutions are posting deepfakes of ordinary citizens and journalists themselves are being targeted, how can the public know what is real?
Featured illustration: Evgenios Kalofolias/iMEdD
On Orthodox Easter Sunday (April 12), President Donald Trump posted an image of himself in flowing white robes, leaning over a man in a hospital gown and laying his right hand on the patient’s forehead. As a woman prays by the man’s bedside, a halo of light forms around his head and he smiles, eyes closed. In his left palm Trump cups another sparkling ball of light, presumably ready to heal some more. In the background is a panoply of patriotic imagery: an American flag, a bald eagle, fighter jets, the Statue of Liberty, the Capitol, and soldiers marching across the sky.
This particular image likening him to Jesus Christ may have prompted a rare outcry (Trump took it down the next day, saying that he thought the picture depicted him “as a doctor”), but by now the world has become accustomed to Trump posting fantastical images of himself. On his personal Truth Social page and on the official White House X account, he has appeared as both the Pope and a Jedi; he has walked side by side with a penguin in Greenland and dumped loads of excrement on anti-government protesters in New York while flying a fighter jet.
These images of patriotic cosplay are unmistakably created with the aid of artificial intelligence. They are interspersed among posts about significant matters of state, such as war updates or press briefings. The White House’s X account has also taken to posting pictures of individuals detained by police, with captions detailing their alleged crimes.
On January 22, 2026, the White House posted on X an image of a Black woman being led away in handcuffs after her arrest at a protest in Minneapolis. In the photograph, captioned “Arrested: Far Left Agitator Nekima Levy Armstrong for orchestrating Church Riots in Minnesota,” Armstrong is makeup-less and sobbing, mouth open, as tears stream down her cheeks.
One of these is not like the other
The public only became aware that this image was the product of AI because the original photograph had been posted by then-Homeland Security Secretary Kristi Noem a few hours before the White House’s post. In the untouched photograph, Armstrong is composed and wearing bright pink lipstick. Her skin is also noticeably brighter, and her ears and nostrils smaller.
In the White House’s X post, nothing suggested that this was an altered image of a real U.S. citizen being arrested by law enforcement. Nearly two months later, the post remains up, albeit with an X community note flagging it as false and linking to the original photograph.

Digital blackface
According to Dr. Omekongo Dibinga, professor of intercultural communication at American University and author of the book Lies About Black People, Trump is intentionally using centuries-old racist signifiers to get his messages across.
“Nothing Trump is doing is new,” Dibinga told iMEdD. “Part of the way that they justified enslavement in the United States was by putting out images of us that made us look barbaric, savage, and animalistic, and likening us to monkeys and apes; portraying black people with very large lips, very large noses and big eyes. The idea is, the darker you are, the more you are likely to be perceived as being a criminal in this country.”

Part of the way that they justified enslavement in the United States was by putting out images of us that made us look barbaric, savage, and animalistic.
Dr. Omekongo Dibinga, author and professor of intercultural communication, American University
Dibinga says that Trump’s racist posts, like the doctored image of Armstrong and an AI video showing the heads of former president Barack Obama and former first lady Michelle Obama on the bodies of monkeys (which Trump was forced to delete after a rare outcry), are being called ‘digital blackface.’
“Trump is old enough to understand how the media works. He’s been in the media forever. So, fast forward to today and what he’s doing, putting out doctored images, particularly the racist ones; he knows that it doesn’t have to be so, it just has to be shown, and then people just start running with narratives.”
Behind the memes
According to Macquarie University philosophy professor Dr. Mark Alfano, all these posts, whether racist or fabulist, are actually ‘slopaganda’: a term he coined in a recent paper exploring the interaction between generative AI and propaganda.
Alfano and his co-authors, Drs. Michał Klincewicz and Amir Ebrahimi Fard, define slopaganda as “unwanted AI-generated content that is spread in order to manipulate beliefs and other attitudes to achieve political ends.”

The portmanteau merges the words propaganda and slop, the latter being Merriam-Webster’s 2025 Word of the Year (defined as “digital content of low quality that is produced usually in quantity by means of artificial intelligence”).
A lot of the AI slop that we’re seeing […] is sort of emblematic or aspirational. And that means that it can’t be true or false. It’s instead meant to unlock potentials for action and intention that people wouldn’t otherwise adopt.
Dr. Mark Alfano, philosophy professor, Macquarie University
Slop is everywhere, from Ballerina Cappuccina to the Kitty Olympics, but what makes something slopaganda?
In a joint interview with Klincewicz, Alfano told iMEdD that propaganda “doesn’t have to be false, it can be true. It can also just not be true or false. And I think the images in particular that generative AI produces fall into this category. So, Donald Trump walking with a penguin into Greenland: is that true? It didn’t happen, but I’m not sure that it’s meant to represent something that did happen rather than something that’s aspired to. And a lot of the AI slop that we’re seeing in American politics, but also in politics in other countries, is more like that. It’s sort of emblematic or aspirational. And that means that it can’t be true or false. It’s instead meant to unlock potentials for action and intention that people wouldn’t otherwise adopt.”

It’s really elusive what is going on in these images. And I think that’s maybe an unintentional consequence of this being AI generated, that uncanniness. There’s something about them that pulls you in.
Dr. Michał Klincewicz, associate professor of computational cognitive science, Tilburg University
Klincewicz, who is an associate professor of computational cognitive science at Tilburg University, notes that the weirdness of the AI images somehow softens them and works in their favor.
“They look kind of real, but they (also) don’t. And that is important,” he says.
Referring to a deepfake video that Trump posted on X in September 2025, in which Senate Minority Leader Chuck Schumer says that everybody hates Democrats now while House Minority Leader Hakeem Jeffries stands beside him sporting a cartoon Mexican sombrero and a fluffy mustache, Klincewicz tells iMEdD:
“I mean, there’s something really offensive about that image, something demeaning, patronizing, but also kind of cute and soft. It’s really elusive what is going on in these images. And I think that’s maybe an unintentional consequence of this being AI generated, that uncanniness. There’s something about them that pulls you in and it’s like, what the hell am I looking at? It’s captivating.”

Abdication of responsibility
Vincent Berthier, head of the Technology and Journalism Desk at Reporters Without Borders (RSF), says that the use of slopaganda by a powerful institution like the White House “sends a signal that it’s OK now to lie to the citizens. It is a direct attack against the public’s right to have access to better information.”
According to Berthier, there is at present no unified legal recourse against malicious deepfakes for anyone, whether public figure or private individual.
[The use of slopaganda by a powerful institution like the White House] sends a signal that it’s OK now to lie to the citizens. It is a direct attack against the public’s right to have access to better information.
Vincent Berthier, head of the Technology and Journalism Desk, RSF
“It’s the institutions that are failing,” says Klincewicz. “Basically, you have a situation in which there’s a total abdication of responsibility.”
Alfano believes this abdication is intentional. “The companies that enable people to make these slopaganda items have had the capacity for years to watermark generative AI content. And they chose not to make that available to academics or journalists or teachers, maybe even law enforcement – the people who would really need to know whether an image or a piece of text is authentic.”
Klincewicz says that journalists will be both the last resort for truth and a target.
“The target, ultimately, is going to be you [journalists]. Truth, in a world where all you see is an ocean of slop, is impossible. There are no authorities, no sources of reality; you just pick the one that fits with what you believe to be true. And journalists are going to get replaced – that’s part of the point that we make in the paper, that you can create stories with images that are local.”

Attacking journalistic credibility
Klincewicz’s dystopian predictions are already coming true.
In a recent RSF investigation, Berthier and his team examined 100 deepfakes of journalists posted on social media between 2023 and 2025. They varied in content: some were scams using a journalist’s likeness to sell pharmaceuticals, while in others journalists purportedly spoke out against their government or announced plans to commit electoral fraud. Seventy-four percent of the journalists targeted were women.
“It’s an attack against a journalist’s credibility,” says Berthier. “And the question is not only what journalists can do, because basically they can do only what they know; they are already doing their job.”
Berthier says that newsrooms need to become even more transparent with their audiences by investing in content-marking systems, so that the public can see their metadata and any modifications made to their content.
More significantly, he says that platforms need to be compelled by law to flag both AI-generated and authentic content. That way, if content doesn’t display its origins, it should automatically be considered suspicious.
RSF also recommends that a specific criminal offence be created for spreading malicious deepfakes.
“You can’t fight with people who just don’t follow the same rules in the public debate. Journalists are under a very, very impressive pressure and that’s what allows the fakes to spread,” says Berthier.
