Amid growing concerns over misinformation, platforms including Meta and Google (which owns YouTube) are pausing political ads, but experts argue that past missteps could leave the election vulnerable to continued disinformation.
In the lead-up to the high-stakes 2024 U.S. presidential election, tech giants such as Meta and Google are taking new steps to curb the spread of misinformation, including temporarily halting political ads on their platforms. The move aims to prevent the manipulation of public sentiment during the election period, especially in the crucial days when ballots are being counted. However, experts argue that these measures may be insufficient, as the digital ecosystem has already been flooded with false narratives that could undermine trust in the electoral process.
A Changing Landscape
Meta, which owns Facebook and Instagram, took the first step last week by banning new political ads across its platforms in a bid to combat misinformation. Initially set to expire on Tuesday, the ban has been extended for an unspecified period. Google followed suit, announcing a similar pause on election-related ads once the last polls close, with no end date announced. TikTok, which has banned political ads since 2019, maintains its strict policy.
In contrast, Elon Musk's X (formerly Twitter) reversed its ban on political ads after Musk acquired the platform in 2022. Despite recent concerns about disinformation, X has not announced any plans to pause political ads during the election.
These ad pauses come as part of a broader effort by tech companies to ensure the integrity of the election, particularly as misinformation and disinformation continue to spread unchecked across platforms. Election officials and fact-checkers have been battling viral false claims, such as baseless allegations of voter fraud and machine interference, for weeks.
The Risks of Late Action
Despite these measures, many experts believe the pauses come too late and do too little. Social media platforms, including X, have cut their internal trust and safety teams, reducing their ability to monitor and respond to misinformation effectively. Since Musk's acquisition, X has come under fire for allowing false claims about elections, including misleading assertions about undocumented voters and widespread fraud, to proliferate unchecked.
Imran Ahmed, CEO of the Center for Countering Digital Hate (CCDH), pointed out that these platforms have already failed to prevent disinformation from spreading widely before the election. He noted that while the temporary pauses may slow down the impact of paid political ads, they do little to mitigate the effect of organic content—posts and tweets that are naturally amplified by algorithms for their engagement value.
“The platforms are algorithmically designed to promote the most contentious, engaging content,” Ahmed said. “Even without paid ads, disinformation thrives in an environment that rewards polarization and controversy.”
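To make Ahmed's point concrete, here is a minimal, purely illustrative sketch of engagement-weighted feed ranking. It is an invented toy, not any platform's actual system; the post fields and scoring weights are assumptions chosen only to show the mechanism:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int   # shares push content to new audiences
    replies: int  # heated reply threads often signal controversy

def engagement_score(post: Post) -> float:
    # Weights are invented for illustration: shares and replies count
    # more than likes because they spread the post further.
    return post.likes + 3 * post.shares + 2 * post.replies

def rank_feed(posts: list[Post]) -> list[Post]:
    # Nothing in this scoring checks accuracy: a false but inflammatory
    # post with high engagement outranks a dry, factual one.
    return sorted(posts, key=engagement_score, reverse=True)
```

Because no term in such a scoring function accounts for truthfulness, pausing paid ads leaves this organic amplification untouched.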
The Backslide in Election Integrity
The decision to pause ads follows years of evolving policies in the wake of online interference in past elections. After the 2016 U.S. presidential election and the January 6, 2021, attack on the U.S. Capitol, social media companies, including Meta and Twitter, strengthened their election-related safety measures, removing harmful content and suspending accounts that spread false information.
However, over time, these efforts have been rolled back. Tech companies have cut resources devoted to election integrity, including trust and safety teams, and have relaxed policies on removing false claims. A prime example is the recent decision by several platforms to stop removing posts claiming the 2020 election was stolen, despite widespread evidence to the contrary. This "backslide" in policy enforcement has left the platforms less equipped to handle misinformation in 2024, experts argue.
Sacha Haworth, executive director of the Tech Oversight Project, criticized this rollback, stating that the platforms have failed to evolve to meet the challenges posed by modern disinformation campaigns. “Platforms are hotbeds for false narratives,” she said.
Misinformation’s Evolving Threat
Beyond paid ads, misinformation on social media has grown increasingly sophisticated: the rise of artificial intelligence tools has made it easier to generate convincing fake content, including images, videos, and audio. This raises the stakes for tech platforms, which are grappling with a proliferation of deepfakes and manipulated media designed to sow discord and confusion.
Platforms like YouTube, TikTok, and Meta say they are working to address these challenges by flagging unverified claims, partnering with fact-checkers, and directing users to authoritative sources. Meta, for instance, removes posts that may intimidate voters or interfere with election processes and labels content that is deemed false.
TikTok has also ramped up efforts to label unverified claims, particularly those that may prematurely declare a winner or suggest widespread fraud before the final results are in. Meanwhile, YouTube has focused on removing content that encourages violence or promotes harmful conspiracy theories, particularly in the wake of the 2021 Capitol riot.
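For illustration only, here is a toy sketch of the kind of labeling rule these policies describe. Every marker string and label below is invented; real systems combine machine-learning classifiers, human reviewers, and fact-checking partnerships, none of which is modeled here:

```python
# Hypothetical keyword markers -- invented for illustration only.
PREMATURE_MARKERS = ("declared the winner", "race is over", "it's official")
FRAUD_MARKERS = ("rigged", "stolen election", "widespread fraud")

def label_post(text: str, results_certified: bool) -> str | None:
    """Return a warning label to attach to the post, or None if no rule fires."""
    lowered = text.lower()
    # Flag premature winner declarations while results are uncertified.
    if not results_certified and any(m in lowered for m in PREMATURE_MARKERS):
        return "Results may not be final. See your official election authority."
    # Flag unverified fraud claims regardless of timing.
    if any(m in lowered for m in FRAUD_MARKERS):
        return "Fact-checkers dispute claims of widespread election fraud."
    return None
```

Even this crude outline shows why labeling lags behind generation: rules must be written and maintained faster than new false narratives emerge.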
Despite these efforts, experts caution that the damage may already be done, particularly on platforms like X, where Musk’s rhetoric and the platform’s lax approach to disinformation have made it a breeding ground for false claims.
Conclusion: Too Little, Too Late?
As the 2024 election draws nearer, tech companies are taking steps to minimize the impact of misinformation through ad pauses and enhanced content moderation. However, experts believe that these actions, while necessary, may not be enough to prevent the ongoing flood of disinformation that has already spread throughout the online ecosystem.
The real challenge, they argue, lies in addressing the deep-rooted issues that allow misinformation to thrive on social media platforms. Without a broader, long-term commitment to strengthening trust and safety mechanisms, and ensuring that platforms are held accountable for the content they host, the fight against election-related disinformation may be doomed to fail.
As we approach one of the most contentious elections in U.S. history, it remains to be seen whether these temporary fixes will be enough to safeguard the integrity of the election process—or whether the damage has already been done.