All major social media platforms have outlined plans to tackle misinformation around the midterm election and its immediate aftermath, but questions remain about the effectiveness of these measures and the platforms' ability to implement them adequately after struggling to do so during the 2020 elections.
Meta, which owns Facebook and Instagram, has temporarily suspended all “political, electoral and social issue” ads from its platforms, as it did during the 2020 election, and says it will reject all ads that discourage people from voting or question the legitimacy of the elections. The company will also take down posts that promote voter suppression, including misinformation about polling dates, locations, timings and voter eligibility.
TikTok says it has been labeling all content around the midterms with links to its “Election Center” page, which the company says will offer users “authoritative information” about the polls.
The platform will take down any content pushing election misinformation, harassment of poll workers, hateful behavior and violent extremism, and any content that is in the process of being fact-checked by its partners will not be recommended on users’ ‘For You’ feeds.
Twitter says it plans to get ahead of misinformation on the platform using “prebunks”—prompts that appear on a user’s timeline to “proactively address topics that may be the subject of misinformation.”
As it did in 2020, Twitter says it will continue to apply labels to tweets sharing election-related misinformation, claiming that these labels help direct people toward debunking content while reducing engagement with the labeled tweets.
Both YouTube and its parent Google will rely on the Associated Press to display “authoritative election results” on their platforms, while YouTube will also elevate content from “authoritative news sources” like “CNN and Fox News,” limit the spread of “harmful election misinformation” and add information panels on top of all election-related search results.
What To Watch For
All eyes will be on Twitter after its recent acquisition by billionaire and self-styled “free speech absolutist” Elon Musk. While Twitter continues to have policies in place to deal with election misinformation, there have been concerns about its ability to enforce them, as the company is set to lose nearly half of its workforce in a mass layoff on Friday. Earlier this week, Twitter’s Head of Safety and Integrity Yoel Roth acknowledged that the platform had seen a brief surge in hateful content after Musk’s takeover, but attributed most of these issues to a small number of troll accounts. Bloomberg also reported this week that most members of Twitter’s Trust and Safety team have been restricted from accessing the platform’s internal moderation tools. In addition, Musk himself faced criticism over the weekend after he tweeted out an unfounded conspiracy theory about the attack on House Speaker Nancy Pelosi’s husband. Musk, who has engaged in a war of words with progressive Rep. Alexandria Ocasio-Cortez (D-N.Y.), responded to a clip of the congresswoman accusing him of restricting her Twitter account by writing: “What can I say? It was a naked abuse of power.”
The policies outlined by the major social media platforms largely build upon measures that were in place for the 2020 elections. While most of those policies were enacted as outlined, there were serious questions about the effectiveness of labels and fact checks by “authoritative sources.” Immediately after the 2020 election, all platforms had to scramble to deal with former President Donald Trump’s refusal to concede and the various conspiracy theories shared by him and his supporters about the legitimacy of the election process. The proliferation of these false claims, led by the former president, came to a head when Trump supporters stormed the Capitol building on January 6, resulting in Trump being banned from all major social media platforms.
It is unclear what kind of “authoritative news” content YouTube plans to elevate, but the choice of Fox News as one of the sources may raise some eyebrows, as the network faces a $1.6 billion lawsuit from voting machine maker Dominion, which has accused it of amplifying false claims that its voting machines were used to rig the 2020 elections.
Siladitya Ray, Forbes Staff