Hedging against Deepfakes

Last week, Twitter announced plans to label or take down material that appears to have been digitally manipulated, such as deepfakes. There is one caveat, however, pointed out by the New York Times: the company will only remove such content if it is likely to cause harm, such as threats to physical safety or a risk of mass violence or widespread civil unrest. In similar news, Alphabet’s Jigsaw has released a new tool intended to help journalists more easily spot deepfakes and manipulated images. These moves can be viewed as safeguards ahead of this year’s US presidential election, which will reportedly be a billion-dollar campaign for the incumbent, President Trump. Just last Thursday, Trump posted an edited video of Nancy Pelosi, depicting her as though she had ripped up a copy of his State of the Union address while he honoured citizens. These developments coincide with a controversial article claiming that America needs a Ministry of Truth to combat deepfakes.

Will platform liability test the Special Relationship? 

In the context of post-Brexit trade relations between the UK and the US, Damian Collins, UK MP and former chair of the Digital, Culture, Media and Sport Committee, sat down with Politico to talk about platform liability in a prospective trade agreement. He argued that the UK should make clear to the US government that it will not accept Section 230-style trade language, which limits websites’ liability for what users post, since such a clause would “kick away all the good work that’s been done” under the Online Harms White Paper. In line with that framework, the UK could appoint the watchdog Ofcom as the new regulator to enforce a statutory duty of care requiring tech giants to protect users from online harms. This coincides with a report on online targeting by the UK government’s advisory body on AI ethics, which called for regulation of online platforms’ recommendation algorithms.

In the news

  • Last December, an investigation by The Guardian exposed a network that had been profiting from Islamophobic hate. Facebook promised to remove this network, but two months later it remains active.
  • BuzzFeed has uncovered a network of almost 100 sites that republish news from reputable media organisations, impersonate local news and financial outlets, and manipulate Google News and search results to earn money through ads, email subscriptions, or referrals to dubious investments.

Good reads

  • A recent piece by Coda Story takes a look at the people behind the anti-5G movement and their management of Stop 5G Facebook groups, which often spread disinformation. The article also sheds light on the public health fears surrounding 5G.
  • It’s not misinformation, it’s faith: A report by Columbia Journalism Review reveals the inner workings of the anti-vaccination movement. Last spring, the author closely followed the actions of anti-vaxxers around the time of the US measles outbreak, from their communications on WhatsApp to protests in New York.
  • ICYMI – We’ve released a blog post on our November 2019 investigation into a network around linfonational.net, which impersonated French politicians on Facebook to amplify disinformation. The investigation showed just how easy it is to use Facebook’s features to manage a disinformation network.
  • The Berkman Klein Center’s new report, Content and Conduct: How English Wikipedia Moderates Harmful Speech, finds that Wikipedia is largely successful at identifying and quickly removing the vast majority of harmful content, despite the site’s scale.

Events and announcements

  • Digital Action is looking for an intern.
  • The Reuters Institute seeks an Associate Director of the Journalist Fellowship Programme.
  • Sciences Po’s Médialab is hiring an Associate Professor in computational propaganda.