Twitter bans deepfakes and deceptive media ahead of US elections

Content designed to cause 'confusion or misunderstanding' will come with warnings

Anthony Cuthbertson
Wednesday 05 February 2020 12:36 GMT

Twitter has announced a partial ban on fake videos and photos in an effort to stifle the spread of misinformation on its platform.

From next month, the company will label manipulated videos, known as deepfakes, as well as doctored images that may result in “confusion or misunderstanding” among its users.

“You may not deceptively share synthetic or manipulated media that are likely to cause harm,” Twitter stated in a blog post outlining the new rule.

“In addition, we may label tweets containing synthetic and manipulated media to help people understand the media’s authenticity and to provide additional context.”

Any media shared in a tweet that has been significantly and deceptively altered will be labelled as such, while anyone attempting to retweet or like the tweet will be shown a warning. The visibility of the tweet will also be reduced and more context could be provided.

Twitter’s updated rules come after calls from users to crack down on this type of content, with more than 6,500 responses gathered from its #TwitterPolicyFeedback survey.

Feedback was also sought from prominent users, including Tesla CEO and prolific tweeter Elon Musk. At a recent employee conference, the entrepreneur appeared via video link to implore Twitter to make it easier to differentiate between real and fake users.

Just days after his appearance, he once again raised concerns about the “dire problem” of trolls and bots eroding the integrity of Twitter.

The company told The Independent this week that it would continue "adapting to bad actors' evolving methods".

Twitter's latest policy update follows a similar announcement from YouTube, which pledged to remove misleading content relating to the US election.

YouTube’s new rules prohibit “content that has been technically manipulated or doctored in a way that misleads users and may pose a serious risk of egregious harm”.

Social media and content-sharing apps have come under increasing pressure to deal with misinformation following the 2016 US presidential election and Brexit referendum, during which foreign actors spread false news in an effort to influence the results.

Online misinformation campaigns surrounding the 2020 US presidential elections are already underway, despite poll booths not opening for another nine months. Ahead of the Iowa caucuses, a conservative organisation was accused of spreading false information across Facebook and Twitter designed to mislead voters.

Facebook has been one of the most heavily criticised firms, but it has consistently pushed back on requests to ban political advertising and other measures to stop malicious campaigns.

Nick Clegg, Facebook’s head of communications, justified the tech giant’s stance by claiming it was to protect freedom of expression.

“In the end, you need to be careful once you have curtailed free speech, because once you have curtailed it you can’t turn back,” he said in January.
