Let’s fact-check Mark Zuckerberg’s fact-checking announcement
On Tuesday, the CEO of a social network born to rate the hotness of college students announced he was taking it back to its roots by “restoring free expression.”
The plan includes relocating Meta’s Trust & Safety employees to Texas (this may be sleight of hand, if most of its moderation staff is already Texas-based) and allowing a little bit more misogyny and homophobia. It also unwinds algorithmic scanning for harmful content, a change that will result in more Meta users getting harassed. (Watch all five minutes of his video here.)
But his marquee announcement, the one Zuckerberg opened with because he knew it would get him the response he craved from the president-elect of the United States, was that he will terminate a long-standing partnership to fight misinformation.
The program pays fact-checking projects to review viral and potentially false claims through a dedicated tool. When a fact-checker labels something as false, Meta reduces that content’s future reach. In addition, a fact-checking label is added to the post, giving users context and a link to the full fact check:
First launched in 2016 in response to criticism about rampant fake news, the program has since grown into a multimillion-dollar affair involving 90 partners covering 130 countries. (This is a good moment to flag that as the director of the fact-checkers’ association from 2015 to 2019, I advocated for Meta to fight misinformation and was closely involved in some key discussions that shaped the program.)
Facebook has over the years repeatedly trumpeted this feature as a sign that it is a Very Responsible Social Network. Testifying to a group of U.S. members of Congress in March 2021, Zuckerberg called the fact-checking program “unprecedented.” His written submission branded it “industry-leading.”
No longer. Here’s Zuck’s statement on the topic in its entirety:
First, we’re going to get rid of fact-checkers and replace them with Community Notes similar to X, starting in the U.S. After Trump first got elected in 2016, the legacy media wrote nonstop about how misinformation was a threat to democracy. We tried in good faith to address those concerns without becoming the arbiters of truth. But the fact-checkers have just been too politically biased and have destroyed more trust than they have created, especially in the U.S. So over the next couple of months we’re going to phase in a more comprehensive Community Notes system.
There is so much bad-faith reasoning in 96 words that it’s hard to know where to start. But let’s go in order.
Meta has eight years’ worth of data with which to prove that the fact-checking program was biased. Zuckerberg shared none of it. Instead, he chose to ignore research showing that politically asymmetric interventions against misinformation can result from politically asymmetric sharing of misinformation. As you can see in a chart from that research, below, American conservatives tended to share more URLs from false news websites on Twitter even when the definition of “false news” was left to a vote of a bipartisan group of laypeople rather than professional fact-checkers.
Back in 2016, Zuckerberg’s staff desperately sought an independent verification system for potential partners of the fact-checking program that couldn’t be gamed by blatantly bad actors like Alex Jones. The code of principles of the International Fact-Checking Network (IFCN) provided it; flawed as it is, it imposes stringent transparency requirements that are reviewed by an external assessor annually.
That code proved nonpartisan enough to allow for the certification of the conservative magazine The Weekly Standard, which was far from uncontroversial at the time. Meta’s U.S. fact-checking partners also include Check Your Fact, a project tied to the Daily Caller, a website cofounded by Tucker Carlson.
Meta’s CEO knows all this, by the way. When asked by Alexandria Ocasio-Cortez about Check Your Fact in another congressional hearing in 2019, Zuck said the IFCN had “a rigorous standard for who they allow to serve as a fact-checker.”
Zuckerberg didn’t mention that a big chunk of the content fact-checkers have been flagging is not political speech. Instead, it is the low-quality spammy clickbait that Meta platforms have commodified, turning him into a billionaire who wears $900,000 watches.
PolitiFact, one of Meta’s U.S.-based fact-checking partners, collects all the falsehoods it has labeled as part of the program in one place. I reviewed more than 100 of them and found that only about 21% concerned politically sensitive posts, such as those about Social Security cuts or Trump’s impeachment. A similar share consisted of spammy political hoaxes, like ones about Mitch McConnell’s health condition or Kash Patel calling the FBI “gay.” The relative majority (45%) was not political at all, including such extremely sensitive speech as posts about suspended NFL referees or a ship pretending to be the Titanic. (Even around Election Day, labeled posts were as likely to be about voting irregularities as they were to be spammy fakes about Trump saying he hated SNL or Melania Trump endorsing Kamala Harris.)
Zuckerberg’s justifying the termination of this program as a defense of free speech is particularly galling given that fact-checking labels did not lead to the underlying posts getting removed. Meta (rightly) treated fact-checking as a contextual intervention that reduced a post’s reach but didn’t prevent users from continuing to access it. I am no First Amendment expert, but this seems like a decent application of Justice Louis Brandeis’ exhortation to fight falsehoods with “more speech.”
The fact-checking program was not perfect, and fact-checkers have no doubt erred in some percentage of their labels. But Meta’s own transparency report suggests this error rate in the EU may be as low as ~3%, orders of magnitude lower than the error rate for other demoted content.
Meta had every right to eventually terminate its contract. But getting rid of fact-checkers in this manner was politics, not policy.
As is typical for Meta, it is also a choice driven by American politics. In 2016, Facebook launched the fact-checking program only once it was in hot water with domestic stakeholders; earlier warnings from the Philippines had been ignored. Today, Meta’s non-U.S. fact-checking partners are left assuming, but not knowing, that the program will be terminated globally. The requirements under Articles 34 and 35 of the Digital Services Act may mean Meta holds on to its EU partnerships, or tapers them off more gradually.
And now for Zuckerberg’s proposed alternative to censorious fact-checkers.
It is unusual to see a CEO say he will emulate another platform’s product, especially after having threatened to wrestle that company’s owner.
But let’s assume that Zuckerberg is genuinely committed to a crowdsourced effort to combat misinformation in an unbiased and pro-speech manner.
He should probably read the research suggesting that Community Notes users are motivated by partisanship and tend to disproportionately rate their political adversaries. He should also probably be aware that as many as 90% of Community Notes never get displayed on X.
I was running crowdsourced fact-checking projects into the ground long before Zuckerberg entered his bling-heavy midlife crisis. So I’m not opposed in principle to user-driven fact checks (and will write more on those in my newsletter next week). But the quality of a crowdsourced project hinges on the incentive structures of the underlying crowd. There is very little in Meta’s history to suggest those incentives will be enlightened.