Meta’s Decision to End Fact-Checking Program Sparks Concerns Over Disinformation and Hate Speech
Meta’s recent announcement that it is phasing out its third-party fact-checking program has sent shockwaves through the online community, with experts warning that the move could allow disinformation and hate speech to flourish across its platforms. The company says it will replace the program with a crowdsourced approach to content moderation, similar to X’s Community Notes.
The decision has drawn criticism from fact-checking organizations and experts, who argue that the new system will be far less effective at combating misinformation. Angie Drobnic Holan, director of the International Fact-Checking Network (IFCN) at Poynter, warned of the potential consequences. "It’s going to hurt Meta’s users first," she said. "The program worked well at reducing the virality of hoax content and conspiracy theories."
Holan emphasized that fact-checkers have worked closely with Meta for nearly a decade, following the IFCN Code of Principles as well as Meta’s own policies. Fact-checkers review content and rate its accuracy, but the final call on removing content or limiting its reach rests with Meta.
The fact-checking program served as an effective "speed bump" for false information: content flagged as questionable was covered with a warning screen before users could view it. The process covered a broad range of topics, from false reports of celebrity deaths to claims about miracle cures. Meta launched the program in 2016 amid growing public concern about social media’s power to amplify unverified rumors online.
Some critics have suggested that Meta’s decision is motivated by a desire to curry favor with President-elect Trump, who has been vocal about his support for free speech and criticism of fact-checking. Mark Zuckerberg’s video announcement described recent elections as "a cultural tipping point" toward free speech, and the company recently named Republican lobbyist Joel Kaplan as its new chief global affairs officer.
Nina Jankowicz, CEO of the nonprofit American Sunlight Project and an adjunct professor at Syracuse University who researches disinformation, expressed her concerns about the implications of Meta’s decision. "Zuck’s announcement is a full bending of the knee to Trump and an attempt to catch up to [Elon] Musk in his race to the bottom," she said.
Community Notes-style moderation has likewise been met with skepticism from experts, who say it has done little to curb misinformation. Michael Khoo, climate disinformation program director at Friends of the Earth, likened the approach to the fossil fuel industry’s marketing of recycling as a solution to plastic waste. "Companies need to own the problem of disinformation that their own algorithms are creating," he said.
The impact of Meta’s decision on online safety and transparency has also been a major concern for experts. Imran Ahmed, founder and CEO of the Center for Countering Digital Hate, described it as a "huge step back" for online safety. "By abandoning fact-checking, Meta is opening the door to unchecked hateful disinformation about already targeted communities like Black, brown, immigrant and trans people," he said.
Nicole Sugerman, campaign manager at Kairos, a nonprofit that works to counter race- and gender-based hate online, echoed those concerns. "This could have terrible offline consequences in the form of real-world harm," she said.
Scientists and environmental groups have also criticized the decision, warning that it could accelerate the spread of anti-scientific content on Meta’s platforms. "Disinformation’s effects on our policies have become more and more obvious," said Kate Cell, senior climate campaign manager at the Union of Concerned Scientists.
The fallout from Meta’s decision has been far-reaching, with experts warning of serious consequences for online safety and transparency. As one expert noted, "It’s not just about what happens online; it’s about the impact offline."