Meta’s decision to terminate its professional fact-checking program in favour of a community-based moderation system known as “Community Notes” has sparked concern. Such a shift could enable the unchecked spread of disinformation and hate speech, threatening democracy, access to information and freedom of expression. While Meta argues that decentralising content moderation empowers users, critics fear that it will create an environment where false narratives, propaganda, and hate speech thrive. Trust in the platform has also dropped: as soon as Meta announced its new policy, Google reported a surge in searches for how to delete or cancel Facebook, Instagram and Threads accounts.
By eliminating third-party fact-checking, Meta shifts responsibility for identifying and addressing falsehoods directly onto its users. This mirrors the model implemented by X, which struggled to combat misinformation effectively during the US elections. While proponents claim this democratises fact-checking and prevents institutional bias, detractors highlight the inability of crowd-sourced moderation to keep up with sophisticated disinformation campaigns, including those orchestrated by state actors, especially when it comes to misinformation targeting ethnic minorities, migrant workers and other marginalised groups.
Unchecked misinformation on social media can harm many areas of our lives and many social groups, as several real-life cases illustrate. One of the most striking examples is Facebook’s role in the Rohingya genocide in Myanmar. In 2017, Facebook was used to spread hate speech and incite violence against the Rohingya Muslim minority. The UN later identified Facebook’s failure to curb misinformation as a major factor in fuelling the genocide, with UN investigators concluding that Facebook had “substantively contributed to the level of acrimony and dissension and conflict” in Myanmar.
A key concern regarding Meta’s new policy is the increased risk of disinformation in political and security-sensitive contexts. The Business & Human Rights Resource Centre has called Meta’s decision a “reckless gamble,” emphasising that it jeopardises its 4 billion users by potentially allowing unchecked falsehoods to circulate freely. Similarly, the Lowy Institute warns that the removal of professional fact-checkers could make Meta’s platforms fertile ground for authoritarian regimes seeking to manipulate public opinion, influence elections, and spread divisive rhetoric. For instance, in Brazil’s 2022 presidential election, widespread misinformation on social media platforms, including Meta’s, contributed to political unrest and distrust in democratic institutions, as reports suggest that the vast majority of viral messages containing false information were right-wing.
Beyond the political sphere, Meta’s decision to end its fact-checking policy risks fuelling disinformation in public health. During the COVID-19 pandemic, Facebook and Instagram became breeding grounds for false claims about vaccines, unproven treatments, and conspiracy theories. The Center for Countering Digital Hate found that just 12 influencers were responsible for 65% of anti-vaccine misinformation circulating on Facebook in 2021. The World Health Organization described this as an “infodemic”, where the spread of false health information became as dangerous as the virus itself.
Meta’s shift also raises alarms about the spread of misinformation around climate change. Its platforms have played a pivotal role in amplifying both accurate scientific research and climate denialism. Research by Climate Action Against Disinformation found that platforms like Facebook allowed climate misinformation to spread unchecked, undermining efforts to combat global warming.
Lastly, Meta’s decision will also disproportionately affect activists and journalists who expose human rights abuses. Without robust fact-checking, online harassment campaigns could escalate, with authoritarian regimes using misinformation to discredit dissidents and suppress press freedom. This was seen in the Philippines under Rodrigo Duterte, where Facebook was widely used to spread false accusations against journalists and activists critical of the government. Maria Ressa, a Nobel Peace Prize-winning journalist, was repeatedly targeted by disinformation campaigns that sought to undermine her credibility and justify her persecution.
Meta’s decision to end fact-checking marks a turning point in the battle against online misinformation and its broader impact on human rights. While the company argues that shifting to a user-driven model promotes open discourse, the risks of unchecked disinformation, state-sponsored propaganda, and political manipulation far outweigh the potential benefits. Without professional fact-checkers, the burden of verifying truth now falls on individual users, many of whom lack the expertise or resources to distinguish fact from fiction. This shift not only threatens public trust in online information but also raises urgent questions about the role of social media companies in safeguarding human rights and democratic values.
#humanrights #meta #socialmedia #communitynotes #misinformation #factchecking #freedomofexpression
Image credits – Photo by Magnus Mueller on Pexels
https://www.pexels.com/photo/photo-of-hand-holding-a-black-smartphone-2818118