FINAL CALL: EU Code of Practice on Disinformation – WDYT?

On behalf of the European Commission, EU DisinfoLab is taking part in an independent assessment of the Code of Practice led by VVA Economics and Policy. VVA has created a survey to collect expert feedback on the Code’s effectiveness. Please let us know what you think via this link.

A Mirage of Independence?

Facebook published provisional bylaws for its Oversight Board last Tuesday, and they were met with raised eyebrows. CNBC noted how these rules firmly leave the platform in control. After all, according to CNBC, “decisions made by the oversight board will, by default, apply narrowly to the specific piece of content that is being reviewed, and will not create any precedents that Facebook has to follow in the future for similar types of violations”. Interestingly, though, Wired’s Steven Levy is more hopeful: he believes the Board may actually kill Facebook’s political ad policy. Levy argues that an ad that intentionally circulates a lie about someone is hard to square with the values that underpin the Board (authenticity, safety, privacy, and dignity), meaning the Board – in theory – would rule against such an ad.

What’s the latest: US Presidential Election

Ahead of the Iowa caucuses, Democratic White House hopeful Sen. Elizabeth Warren last week announced her plan to combat 2020 election disinformation, including a self-commitment to honest campaigning. In it, she writes that the “tactics employed by the Russian government are just as easily accessible to domestic groups seeking to promote or oppose candidates and political or social issues”. In this context, The Guardian released an investigation detailing how Donald Trump’s campaign had spent almost $20 million on Facebook ads in 2019, with the media and immigration topping the list of topics that dominate those ads. In related news, there’s an interesting piece outlining 10 things tech platforms can do to safeguard the 2020 US election.

In the news

Good reads

  • Reuters Institute held a seminar that aimed to debunk myths surrounding filter bubbles. A key takeaway was that by too narrowly focusing on filter bubbles and platforms’ algorithms, we fail to fully understand the mechanisms at play and obscure other issues pertaining to polarisation.
  • Why are U.S. presidential candidates running political ads against animal cruelty? A new Business Insider piece affirms that this strategy is an effort to connect with the community of animal rights enthusiasts on Facebook. The author also notes that “research shows that the human brain is wired to respond with unique attentiveness to the sight of animals”.
  • Auditing radicalization pathways on YouTube: a new study examined whether YouTube users progress towards more extreme content on the platform by analysing the likes, comments, and views of videos. It found that users who engaged with middle-ground right-wing content migrated to commenting on the most fringe far-right content. TechCrunch has a concise write-up here.
  • Drawing on a disinformation campaign that targeted the White Helmets, Kate Starbird and Tom Wilson unpack the lessons learnt from cross-platform disinformation campaigns and offer insight into the steps needed to combat them.

Events and Announcements