Crisis Reporting Resource

As misinformation surges during the Israel-Hamas war, where is AI?

This article was originally published by Poynter on October 19, 2023, and is reproduced with permission. Further reprint permissions are subject to the original publisher.

Despite the panic about generative artificial intelligence, deepfakes have not been a major factor in the flood of falsehoods during the conflict.

A recent crowdsourced fact check — from X’s Community Notes — claimed a graphic image from the Israel-Hamas war was generated with artificial intelligence.

It wasn’t.

The note was later removed from the tweet. But it was a grim illustration that, as the conflict in Gaza plays out on social media, generative AI has not been a major factor in the flood of misinformation.

The dominant threat has instead been real footage used out of context. The vast majority of images and videos fact-checkers have debunked during the war are genuine footage taken from other places, like Syria or Turkey, or from the past, like this video that was actually from a previous conflict in Gaza.

“There’s just a lot of stuff out there to misrepresent when it comes to wars, whether it’s footage of this conflict, of previous conflicts, or for that matter video game conflicts,” said Mike Caulfield, a research scientist at the University of Washington’s Center for an Informed Public. “Wars are densely documented things, which means if you are looking for media to misrepresent there’s plenty of choices.”

In a previous post, I explained that the best way to verify images and videos is to track down the original source through a reverse image search. (Here’s a more comprehensive guide to that, featuring MediaWise ambassador Hari Sreenivasan.)
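For readers curious about what powers that kind of verification, here is a minimal sketch of one building block behind reverse image search: perceptual hashing, which can match an image against earlier copies even after resizing or recompression. This is an illustration rather than how any particular search engine works, and it assumes the open-source Pillow and imagehash Python libraries; the file names are hypothetical.

```python
# Sketch: flag a viral image as possible reused footage by comparing
# its perceptual hash against a small local archive of older images.
# Assumes: pip install Pillow imagehash. File names are hypothetical.
from PIL import Image
import imagehash


def looks_like_reused_footage(candidate_path, archive_paths, threshold=8):
    """Return (path, distance) pairs from the archive whose perceptual
    hash is close to the candidate's; smaller distance = more similar."""
    candidate_hash = imagehash.phash(Image.open(candidate_path))
    matches = []
    for path in archive_paths:
        # Subtracting two ImageHash objects gives the Hamming distance.
        distance = candidate_hash - imagehash.phash(Image.open(path))
        if distance <= threshold:
            matches.append((path, distance))
    return sorted(matches, key=lambda m: m[1])


# Example: compare a viral post against frames saved from older conflicts.
print(looks_like_reused_footage(
    "viral_post.jpg",
    ["gaza_2021_frame.jpg", "syria_2016_frame.jpg"],
))
```

A consumer reverse image search, of course, runs a comparison like this against billions of indexed images rather than a handful of local files, which is why tracking down the original source is best left to those tools.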

While there has been scrutiny of the harmful content AI image generators can be used to create, restrictions on graphic or violent imagery may have slowed the production of war-related images and videos, said Sam Gregory, executive director of the human rights nonprofit WITNESS, who recently testified before a U.S. Senate subcommittee about the harms of AI. And the technology still can't produce synthetic videos or images that match the quality of the out-of-context war footage spreading online.

PolitiFact, which is based at Poynter, debunked a deepfake of Joe Biden announcing a military draft. While the post garnered more than 675,000 views on TikTok and tens of thousands of impressions on Facebook, the quality is abysmal, and the deepfake itself is months old, predating the latest war between Israel and Hamas.

The war was the first real test of experts’ warnings about the threat of generative AI. Their hypothesis that it would increase the quality and quantity of misinformation has so far remained unproven. Coincidentally, a group of experts Wednesday published a peer-reviewed article in the Harvard Kennedy School Misinformation Review calling such concerns “overblown.”

While AI may increase the quantity of misinformation, people still have finite attention spans, according to the commentary. Just because the number of fake images increases doesn't necessarily mean the demand for those falsehoods will. (The full paper is definitely worth the read.)

“It strikes me that a more significant concern than AI-generated misinformation are influential accounts endorsing and promoting problematic or misleading sources, as seen in the case of Elon Musk,” said Felix Simon, a co-author and Oxford Internet Institute doctoral researcher. “Previous research indicates that misinformation frequently originates from prominent figures, with certain politicians and partisan media playing a disproportionately large role.”

Despite its limited influence on the current information ecosystem, I'm not totally sold on AI being a nonissue as we approach the 2024 election. The viral AI image of the Pope wearing a Balenciaga puffer jacket showed the technology is capable of fooling plenty of people when the stakes are low and the image meets a threshold of plausibility. And audio deepfakes emerged in the recent election in Slovakia.

“It seems more likely that if we see generative AI usage, based on current patterns, it would either be fake audio (which has got significantly easier to make) or claims of AI used to dismiss real content (either image or audio), which we’re seeing frequently,” Gregory said.

But for now, trolls and propagandists have plenty of existing war footage from conflicts in the Middle East to continue flooding social media with misinformation without using AI.

“If there’s a wealth of existing media, there’s really no need to create something new,” Caulfield said. “As a matter of fact, since the best lies have a kernel of truth, something fully fabricated may not be the most effective route anyway.”