Twitter said Thursday that it has new policies regarding election misinformation, banning or labeling misleading content about ballot tampering, election rigging, election results, and other topics.
Twitter’s move, which follows a similar initiative announced by Facebook last week, sets the stage for a growing battle between the social media companies and President Donald Trump and his allies, who have been sharing misleading information and false claims about voting in recent weeks.
Twitter’s expanded policies come at a time when social media, more than ever, has become the central electoral information battlefield, as the coronavirus pandemic has drastically limited traditional rallies and door-to-door campaigning. The tech companies have heeded the advice of experts who predict that the election result may not be settled quickly in part because of mail-in ballots this year, leading to potential confusion about who wins.
Trump has more than 85 million Twitter followers, and the company has flagged misleading claims from the president including the assertion that mail-in ballots are fraudulent.
Twitter said the policies would go into effect Thursday, Sept. 10, 2020, and would remain in force until the election results are officially called. In some cases, the company will remove a tweet entirely. In others, it will add labels to tweets, including when misleading information does not seek to directly manipulate or disrupt civic processes but leads to confusion, said Twitter spokesman Trenton Kennedy.
Twitter’s algorithms normally surface tweets in a person’s timeline both from accounts the person follows and from tweets that people in the person’s network are engaging with. Under the new policy, a labeled tweet will be shown only to people who follow the account. Even for those followers, the tweet will be masked by the label, which will say the information in the tweet is disputed and link to mainstream news and official sources (The Washington Post has been included as one of these sources). Twitter will not amplify the tweet through its algorithms or other methods, such as injecting it into the timelines of people who do not follow the account, even if people are discussing it.
Last week, Facebook said that it would prohibit new political ads in the seven days before the election, though ads placed earlier could continue running. The company said it would expand its efforts to remove content that might suppress voting and would attach labels to posts suggesting that casting a ballot puts voters at risk of contracting the novel coronavirus, as Trump did on Twitter recently.
Twitter banned political advertising last year.
Facebook’s labels give users an option to visit a voting information center where the company shares accurate information about elections from official sources. The labels have been criticized because they do not themselves serve as a warning about the veracity of the information.