Following Mark Zuckerberg’s unexpected decision to end third-party fact-checking in the U.S., iMEdD explores how Europe is responding to the new reality on Facebook.
European fact-checkers are increasingly concerned about the growing conflation of fact-checking with censorship. In an interview with iMEdD, Carlos Hernández-Echevarría, assistant director of the Spanish journalism organization Maldita, one of Meta’s independent fact-checking partners, argued that Mark Zuckerberg’s recent comments—where he labeled fact-checkers as “politically biased” and blamed them for a decline in public trust—misrepresent the situation.
“Mark Zuckerberg can make the decisions he sees fit in running his company. It’s his company, he can do it. What I cannot accept is that he calls us names in the meantime. Not because I have an overbearing sense of honor, it’s because he’s lying. He implies that we have imposed censorship.”
In a five-minute video titled “More Speech and Fewer Mistakes,” Mark Zuckerberg announced on January 7 that Meta would be ending its third-party fact-checking program in the U.S., replacing it with the Community Notes model. Citing issues like freedom of expression and referring to the recent U.S. election as a “cultural tipping point,” the 40-year-old CEO revealed the discontinuation of the fact-checking system his company had relied on since 2016 to combat misinformation.
Meta will end its eight-year partnership with independent American journalists and adopt a Community Notes model similar to X’s.
Meta’s Fact-Checking Program
Maldita began its partnership with Meta’s fact-checking initiative in 2019. Six years later, Carlos Hernández-Echevarría argues that the program was designed to protect freedom of expression, directly contradicting Zuckerberg’s description. According to Hernández-Echevarría, the goal of fact-checkers was never to remove information, but rather to add context and provide users with more tools for understanding.
Meanwhile, the European Fact-Checking Standards Network (EFCSN) issued a statement condemning Zuckerberg’s remarks linking fact-checking to censorship. “The abrupt change in how Meta describes fact-checking, which, as we said, has been celebrated as a successful endeavor, to dangerously describing it as censorship is concerning,” said Stephan Mündges, EFCSN Coordinator, in an interview with iMEdD.
Hernández-Echevarría stresses that it’s crucial to draw a clear line in the ongoing debate about fact-checking. He emphasizes that fact-checkers have never sought the power to remove content from Facebook.
“It’s not only that we didn’t do it, it’s that we also explicitly and constantly push Meta and all the other major digital platforms not to use removals of content as a way to fight disinformation,” he said.
“We have been accused of censorship that we never could do and have never accepted. And that, for me, is the big problem here—that Mr. Zuckerberg is mischaracterizing what we have been doing as part of the Facebook program,” Hernández-Echevarría added.
Meta launched its fact-checking program in 2016—while still operating as Facebook—following the U.S. election, with the goal of combating misinformation on the platform.
“Back then, Facebook was under a lot of pressure because it was kind of an actual cesspool of disinformation and manipulation,” comments Carlos Hernández-Echevarría. To address this, the company decided to outsource fact-checking to professional organizations with high standards and expertise in the countries where it already had a presence.
The role of the fact-checkers in Meta’s project was to provide users with cross-checked evidence regarding the accuracy of the content they were seeing.
“We put the actual evidence in front of the user. And in the end, if you still wanted to share something that we had flagged as misinformation, you could do so,” notes the Spanish journalist.
Citing data for the European Union, Hernández-Echevarría points out that 37% of users who received a notification about potentially misleading content ultimately chose not to share it.
“If you are about to share something on, like reposting something on Facebook that we have rated as false, you could get kind of an interstitial that would say, ‘Are you sure you want to go ahead sharing this, knowing that Maldita has rated this post as false?’ And then you would be absolutely free to do it. But 37% of everyone in the European Union who was already going to share this information—so that person had already pushed the button—37% of them decided not to,” he explains to iMEdD.
A 3% Error Rate for Fact-Checkers
At one point in the January 7 video, Meta’s CEO claimed that fact-checkers now make “too many mistakes, too often.” In response, Carlos Hernández-Echevarría presented relevant figures for the EU. According to Meta’s September 2024 Transparency Report to the European Commission on Facebook, the company received 172,780 user complaints between April and September 2024 over the incorrect “demotion” and “reduced visibility” of content following fact-checking. After review, only 5,440 of these cases resulted in the demotion being reversed.
“That’s 3%. So, Meta itself has been saying that, at most, fact-checkers might be wrong 3% of the time, which I think is pretty great, if you ask me, considering that Meta’s own moderators in other categories, or their automated systems, have an error rate of 60-90% in some categories,” Hernández-Echevarría says.
According to the report, demotions applied to content involving bullying were reversed by the company at a rate of 92%, while for hate speech the reversal rate was about 62%.
The Next Day
Meta’s third-party fact-checking program will be replaced in the U.S. by the Community Notes model already adopted by Elon Musk’s platform, X. Carlos Hernández-Echevarría believes that although the model has some positive aspects, it requires better implementation than what has been seen so far on X, which he describes as having “very significant gaps.”
Currently, Meta’s fact-checking program includes over 80 organizations in more than 60 languages worldwide, and according to the company, it operates in 119 countries.
In his video announcement, Zuckerberg hinted that similar changes could be implemented beyond the U.S. In regard to Europe, he claimed it has “institutionalized censorship,” referencing the region’s strict laws on the matter. He also said he would work with President Trump to fight censorship in other parts of the world.
A European Commission spokesperson stated that if Meta tried something similar in the EU, it would first need to submit a “risk assessment” to the Commission. Meanwhile, another Commission spokesperson denied any allegations of censorship.
EFCSN Coordinator, Stephan Mündges, notes that “The EU must not be deterred in its efforts to stop the spread of mis- and disinformation” and “must stand strong in the enforcement of its own laws.”
“European fact-checkers will continue their important work and the EFCSN will support the fact-checking community through any future changes,” he adds.
So, what does Carlos Hernández-Echevarría himself think?
“I think the program has been very useful. Is it going to continue in the future? Is it possible for Meta to comply with EU provisions around risk mitigation without fact-checkers? All those are questions that I can’t really answer. It remains to be seen…”