Stuart Thompson writes about online information flows.
The election dashboards are back online, the fact-checking teams have reassembled, and warnings about misleading content are cluttering news feeds once again.
As the United States marches toward another election season, social media companies are steeling themselves for a deluge of political misinformation. Those companies, including TikTok and Facebook, are trumpeting a series of election tools and strategies that look similar to their approaches in previous years.
Disinformation watchdogs warn that while many of these programs are useful — especially efforts to push credible information in multiple languages — the tactics proved insufficient in previous years and may not be enough to combat the wave of falsehoods pushed this election season.
Here are the anti-misinformation plans for Facebook, TikTok, Twitter and YouTube.
Facebook’s approach this year will be “largely consistent with the policies and safeguards” from 2020, Nick Clegg, president of global affairs for Meta, Facebook’s parent company, wrote in a blog post last week.
Posts rated false or partly false by one of Facebook’s 10 American fact-checking partners will get one of several warning labels, which can force users to click past a banner reading “false information” before they can see the content. In a change from 2020, those labels will be used in a more “targeted and strategic way” for posts discussing the integrity of the midterm elections, Mr. Clegg wrote, after users complained that they were “over-used.”
Facebook will also expand its efforts to address harassment and threats aimed at election officials and poll workers. Misinformation researchers said the company had taken greater interest in moderating content that could lead to real-world violence after the Jan. 6 attack on the U.S. Capitol.
Facebook greatly expanded its election team after the 2016 election, to more than 300 people. Mark Zuckerberg, Facebook’s chief executive, took a personal interest in safeguarding elections.
But Meta has changed its focus since the 2020 election. Mr. Zuckerberg is now focused instead on building the metaverse and tackling stiff competition from TikTok. The company has dispersed its election team and signaled that it could shut down CrowdTangle, a tool that helps track misinformation on Facebook, sometime after the midterms.
“I think they’ve just come to the conclusion that this is not really a problem that they can tackle at this point,” said Jesse Lehrich, co-founder of Accountable Tech, a nonprofit focused on technology and democracy.
In a statement, a spokesman for Meta said that its elections team had been absorbed into other parts of the company and that more than 40 teams were now focused on the midterms.
In a blog post announcing its midterm plans, Eric Han, TikTok’s head of U.S. safety, said the company would continue its fact-checking program from 2020, which prevents some videos from being recommended until outside fact checkers verify them. It also introduced an election information portal, which provides voter information like how to register, six weeks earlier than it did in 2020.
Even so, there are already clear signs that misinformation has thrived on the platform throughout the primaries.
“TikTok is going to be a massive vector for disinformation this cycle,” Mr. Lehrich said. He added that the platform’s short video and audio clips were harder to moderate, enabling “massive amounts of disinformation to go undetected and spread virally.”
TikTok said its moderation efforts would focus on stopping creators who were paid for posting political content in violation of the company’s rules. TikTok has never allowed paid political posts or political advertising. But the company said some users were circumventing or ignoring those policies during the 2020 election. A representative from the company said TikTok would start approaching talent management agencies directly to outline its rules.
Disinformation watchdogs have criticized the company for a lack of transparency over the origins of its videos and the effectiveness of its moderation practices. Experts have called for more tools to analyze the platform and its content — the kind of access that other companies provide.
“The consensus is that it’s a five-alarm fire,” said Zeve Sanderson, the founding executive director at New York University’s Center for Social Media and Politics. “We don’t have a good understanding of what’s going on there,” he added.
Last month, Vanessa Pappas, TikTok’s chief operating officer, said the company would begin sharing some data with “selected researchers” this year.
In a blog post outlining its plans for the midterm elections, Twitter said it would reactivate its Civic Integrity Policy — a set of rules adopted in 2018 that the company uses ahead of elections around the world. Under the policy, warning labels, similar to those used by Facebook, will again be added to false or misleading tweets about elections, voting or election integrity, often pointing users to accurate information or additional context. Tweets that receive the labels are not recommended or distributed by the company’s algorithms. The company can also remove false or misleading tweets entirely.
Those labels were redesigned last year, resulting in 17 percent more clicks for additional information, the company said. Interactions, like replies and retweets, fell on tweets that used the modified labels.
The strategy reflects Twitter’s attempts to limit false content without always resorting to removing tweets and barring users.
The approach may help the company navigate difficult freedom-of-speech issues, which have dogged social media companies as they try to limit the spread of misinformation. Elon Musk, the Tesla executive, made freedom of speech a central criticism during his attempts to buy the company this year.
Unlike the other major online platforms, YouTube has not released an election misinformation plan for 2022 and has typically stayed quiet about its strategy.
“YouTube is nowhere to be found still,” Mr. Sanderson said. “That sort of aligns with their general P.R. strategy, which just seems to be: Don’t say anything and no one will notice.”
Google, YouTube’s parent company, published a blog post in March emphasizing efforts to surface authoritative content through the streamer’s recommendation engine and remove videos that mislead voters. In another post aimed at creators, Google detailed how channels can receive “strikes” for sharing certain kinds of misinformation and how a channel that receives three strikes within 90 days will be terminated.
The video streaming giant has played a major role in distributing political misinformation, giving an early home to conspiracy theorists like Alex Jones, who was later barred from the site. It has taken a stronger stance against medical misinformation, stating last September that it would remove all videos and accounts sharing vaccine misinformation. The company ultimately barred some prominent conservative personalities.
More than 80 fact checkers at independent organizations around the world signed a letter in January warning YouTube that its platform was being “weaponized” to promote voter-fraud conspiracy theories and other election misinformation.
In a statement, Ivy Choi, a YouTube spokeswoman, said its election team had been meeting for months to prepare for the midterms and added that its recommendation engine was “continuously and prominently surfacing midterms-related content from authoritative news sources and limiting the spread of harmful midterms-related misinformation.”