[UNITED STATES] In a concerning development just weeks before the highly anticipated 2024 US presidential election, a recent study has exposed significant vulnerabilities in the content moderation systems of major social media platforms. The investigation, conducted by the nonprofit organization Global Witness, found that TikTok and Facebook approved advertisements containing harmful election disinformation, raising alarm bells about the potential impact on the integrity of the upcoming vote.
Key Findings of the Study
The Global Witness investigation tested the election integrity commitments of three major social media platforms: TikTok, Facebook, and YouTube. Researchers submitted a series of advertisements containing false election claims and threats to assess how well these platforms could detect and block harmful content. The results were eye-opening:
TikTok's Performance: Despite its policy prohibiting all political advertisements, TikTok approved 50% of the submitted ads containing disinformation.
Facebook's Results: While showing improvement from previous tests, Facebook still accepted one ad with harmful disinformation.
YouTube's Response: YouTube initially approved 50% of the ads but ultimately blocked publication of all of them pending formal identification, demonstrating a more robust barrier against disinformation.
TikTok's Troubling Performance
TikTok's failure to detect and block harmful content is particularly alarming, given its strict policy on political content. The platform explicitly prohibits all political ads, yet it performed the worst in this test. This raises serious questions about the effectiveness of TikTok's content moderation systems and its ability to protect users from misleading information during critical election periods.
Ava Lee, Digital Threats Campaign Lead at Global Witness, expressed her concern: "Days away from a tightly fought US presidential race, it is shocking that social media companies are still approving thoroughly debunked and blatant disinformation on their platforms."
Facebook's Mixed Results
While Facebook showed some improvement compared to previous tests, the fact that it still approved an ad containing harmful disinformation is troubling. This highlights the ongoing challenges faced by even the most established social media platforms in combating the spread of false information during election seasons.
The Threat to Election Integrity
The findings of this study underscore the potential risks to the integrity of the US presidential election. With political debates increasingly taking place online, the inability of major platforms to consistently detect and block disinformation poses a significant threat to informed democratic participation.
"In 2024, everyone knows the danger of electoral disinformation and how important it is to have quality content moderation in place. There's no excuse for these platforms to still be putting democratic processes at risk," Lee emphasized.
The Role of "Algospeak" in Bypassing Moderation
One notable aspect of the study was the use of "algospeak" in the submitted advertisements. This technique involves using numbers and symbols as stand-ins for letters to bypass content moderation filters. The success of this method in getting disinformation approved highlights the need for more sophisticated detection systems on social media platforms.
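To illustrate why algospeak defeats naive moderation, consider a filter that matches flagged phrases by exact string comparison: swapping "0" for "o" or "3" for "e" is enough to slip past it. The minimal sketch below shows one simple countermeasure, normalizing common character substitutions before matching. The substitution table and blocklist here are illustrative assumptions, not drawn from any platform's actual moderation system.

```python
# Illustrative sketch: folding common "algospeak" substitutions back to
# plain letters before checking ad copy against a phrase blocklist.
# The mapping and blocklist are hypothetical examples.

LEET_MAP = str.maketrans({
    "0": "o", "1": "i", "3": "e", "4": "a",
    "5": "s", "7": "t", "@": "a", "$": "s",
})

BLOCKLIST = {"stolen election"}  # hypothetical flagged phrase

def normalize(text: str) -> str:
    """Lowercase the text and fold character substitutions to letters."""
    return text.lower().translate(LEET_MAP)

def is_flagged(ad_text: str) -> bool:
    """Flag ad copy if a blocklisted phrase appears after normalization."""
    cleaned = normalize(ad_text)
    return any(phrase in cleaned for phrase in BLOCKLIST)
```

A plain exact-match filter would pass "ST0LEN EL3CTION" untouched, while this version flags it; real systems layer far more sophisticated signals (embeddings, human review) on top of this kind of normalization.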
Comparison to Previous Investigations
Global Witness has conducted similar investigations in the past, including tests during the 2022 US Midterms, the 2022 Brazilian General Election, and the 2024 Indian General Election. The consistent findings across these studies suggest that the problem of disinformation on social media platforms is a global issue that requires urgent attention.
Platform Responses and Commitments
In response to the study's findings, the social media platforms provided statements addressing their content moderation efforts:
TikTok: A spokesperson stated, "Four ads were incorrectly approved during the first stage of moderation, but did not run on our platform. We do not allow political advertising and will continue to enforce this policy on an ongoing basis."
Facebook (Meta): The company acknowledged the limited scope of the study but emphasized its ongoing efforts to improve enforcement of its policies.
YouTube: While not providing a direct comment on this study, YouTube has previously highlighted its multi-layered approach to combating abuse on its platform.
The Broader Impact on US Elections
The implications of this study extend beyond the immediate concerns about disinformation. As American voters increasingly rely on social media for information that shapes their voting decisions, the responsibility of these platforms in safeguarding the integrity of the electoral process becomes even more critical.
A quote from the Free Malaysia Today article underscores the gravity of the situation: "Five out of eight ads with false election claims submitted by an advocacy group for testing were accepted." This statistic highlights the scale of the problem and the potential for widespread dissemination of false information.
Recommendations for Improvement
Global Witness has called on Facebook and TikTok, in particular, to increase their efforts to protect political debate in the US from harmful disinformation. Some recommendations include:
Enhancing AI-powered content moderation systems to better detect subtle forms of disinformation.
Increasing human oversight in the ad approval process, especially for politically sensitive content.
Implementing stricter verification processes for advertisers seeking to run political or election-related ads.
Improving transparency in the ad approval process and providing more detailed explanations for rejected ads.
Collaborating with fact-checking organizations to quickly identify and remove false claims.
The Ongoing Challenge of Balancing Free Speech and Misinformation
The struggle to combat disinformation while preserving free speech remains a significant challenge for social media platforms. Striking the right balance between allowing open political discourse and preventing the spread of harmful falsehoods is a complex task that requires ongoing refinement of policies and technologies.
Looking Ahead: The 2024 US Presidential Election
As the United States approaches the 2024 presidential election, the findings of this study serve as a wake-up call for both social media companies and voters. The potential for disinformation to influence election outcomes highlights the need for increased vigilance from all stakeholders in the democratic process.
Voters must become more discerning consumers of online information, while platforms must redouble their efforts to create robust safeguards against the spread of false and misleading content. Policymakers, too, have a role to play in establishing clear guidelines and consequences for platforms that fail to adequately protect against election disinformation.
Conclusion
The Global Witness study has exposed significant weaknesses in the content moderation systems of major social media platforms, particularly TikTok and Facebook. As the 2024 US presidential election draws near, these findings underscore the urgent need for improved detection and removal of harmful disinformation.
The integrity of democratic processes in the digital age depends on the ability of social media companies to effectively combat the spread of false information. As voters increasingly turn to these platforms for political information, the responsibility of companies like TikTok, Facebook, and YouTube to safeguard the truth has never been greater.
As we move forward, it is clear that addressing this challenge will require a concerted effort from technology companies, policymakers, and citizens alike. Only through collective action and ongoing vigilance can we hope to preserve the integrity of our elections and the health of our democratic institutions in the face of digital disinformation.