Good day, Disinfo Update readers,

Welcome to our biweekly newsletter on disinformation, offering you a curated selection of news, events, and announcements in the disinformation field from around the world.

In today’s edition, you’ll find the latest investigation from our friends at CheckFirst, #FacebookHustles, a massive scam involving more than 1,500 Facebook ads luring users to fake media sites. Together with EDMO BELUX, we dug deeper into the Belgian case to figure out how the deception targeted local outlets, politicians, and audiences.

We also dive into the intricate world of AI and its impact on disinformation (again!), and explore the latest developments in Twitter’s content moderation.

In terms of events, we have two exciting webinars lined up before we head off for a well-deserved summer break: one tomorrow, 28 June, on the disinformation landscape in Europe, and one on climate change disinformation on 3 July. And don’t forget to check the #Disinfo2023 programme and register for our annual conference, which will bring together the counter-disinformation community in Krakow on 11-12 October!

With that, let’s dive in!

Disinfo news & updates

  • Content moderation. Elon Musk declared last week that Twitter will abide by EU laws to combat disinformation and hate speech online, referring to the Digital Services Act (DSA), which will start applying in late August. “If a law is enacted, Twitter commits to comply with it,” Musk said in a pre-recorded interview on France 2. That same week, Thierry Breton, the European Commissioner for the Internal Market, who was visiting platforms in the US, including Facebook and Twitter, reminded them that “they have to respect new legal obligations to crack down on things like disinformation, cyberbullying and threats to public health and safety by the fall. By the end of August, these so-called very large online platforms (with over 45 million users in the EU) have to hand the Commission a first detailed assessment of their major risks for users.”
  • New detection tool. With the rise of AI-generated media, false profiles have become increasingly sophisticated. LinkedIn’s partnership with UC Berkeley has led to a groundbreaking detection method that “accurately identifies artificially generated profile pictures 99.6% of the time while misidentifying genuine pictures as fake only 1% of the time”.
  • Upcoming law against disinformation Down Under. While this goes beyond our usual geographic scope, an interesting development is coming from Australia, which has proposed a new law increasing oversight of digital platforms and introducing severe penalties for spreading misinformation.

What we’re reading

  • An ignored AI issue. Disinformation expert Nina Jankowicz relates how she discovered that her face had been stolen and that she was the subject of deepfake porn online. “Many commentators have been tying themselves in knots over the potential threats posed by artificial intelligence […]. Yet policymakers have all but ignored an urgent AI problem that is already affecting many lives, including mine.” An eye-opening story to read in The Atlantic.
  • AI challenge to elections. This Brennan Center for Justice analysis provides insight into how widely accessible artificial intelligence tools could fuel the rampant spread of disinformation and create other hazards to democracy at election time.
  • Pivotal moment. The DFRLab unveiled the Task Force for a Trustworthy Future Web’s final report, Scaling Trust on the Web, which explores the dynamics and gaps that impact the trustworthiness and usefulness of online spaces.
  • Code of Conduct. The “Our Common Agenda” report on the United Nations’ Sustainable Development Goals includes proposals for a Code of Conduct to address the spread of disinformation. The objective is to “provide a gold standard for guiding action to strengthen information integrity”. Find out more here.
  • Encrypted Messaging Apps & disinfo. Curious to find out what an Encrypted Messaging App (EMA) is? How do they work? What is their role in the spread of disinformation? This report from the propaganda team at the Center for Media Engagement at the University of Texas breaks down how EMAs work, shares the features of six popular apps, and gives examples of how they’ve been used for political manipulation.
  • Disinformation impact – case studies. IBERIFIER, the Iberian Media Research & Fact-Checking EDMO hub, has just released a comparative study on the impact of disinformation on political, economic, social, and security issues, governance models, and good practices in Spain and Portugal.

This week’s recommended read

This week’s recommended read by Maria Giovanna Sessa, Senior Researcher at EU DisinfoLab, is a Financial Times investigation titled “You can’t unsee it.” In Kenya, 184 content moderators filed a lawsuit against Meta for alleged human rights violations and wrongful termination of contracts. 

It is a crucial testimony of people suffering indescribable trauma to (barely) make a living from filtering out social media’s most disturbing content, while receiving no mental health support from their employer.

Many issues emerge: the need for human moderators with cultural and linguistic expertise, the unfairness of the gig economy, and the chokehold of NDAs. Ultimately, it tackles a global problem that becomes increasingly urgent in fragile and conflict-torn countries. A strongly recommended read!

The latest from EU DisinfoLab

  • Belgian spin-off. Last week, our friends at CheckFirst unveiled a massive scam involving more than 1,500 Facebook ads luring users to fake media sites; this week, EDMO BELUX and EU DisinfoLab dug deeper into the Belgian case to figure out how the deception targeted local outlets, politicians, and audiences. Read the Belgian case.
  • Platforms’ policies on misinformation. How are platforms (Facebook, Instagram, YouTube, Twitter and TikTok) defining health and elections misinformation? What are the potential harms of this type of misinformation? And how are platforms addressing this risk? Answers to those questions, and more, are included in this factsheet on platforms’ policies on elections misinformation, and in this one, specific to health misinformation.
  • Disinformation landscape in European countries. This Monday, we released a new batch of country disinformation factsheets, covering the disinformation landscape in Austria, Bulgaria, Greece, Hungary, Lithuania, and Luxembourg. Discover them, and the previous ones (Belgium, Finland, France, Germany, Ireland, Italy, Spain, and Sweden), here!
  • Webinars before the summer break! Join us on 28 June for the eye-opening “Disinformation landscape across Europe” webinar, which will explore our country factsheets on the disinformation landscape across EU Member States. Then, on 3 July, get ready for our enlightening webinar “Polluting the truth about climate change.”
  • Last, but certainly not least! While sign-ups for #Disinfo2023 are ongoing, the programme for the annual EU DisinfoLab conference was published a few weeks ago. Have you had a look at the promising and exciting lineup yet? Secure your spot for this in-person, two-day conference here.

Events

  • 28 June: Tomorrow, we’re hosting our webinar, “Disinformation landscape across Europe”, with the Friedrich Naumann Foundation (2-3 PM). We will delve into our series of country factsheets that highlight the disinformation landscape across EU Member States, from the most emblematic cases to recurrent narratives, community actors, and policy initiatives. Register here.
  • 29 June: The Horizon Europe research projects AI4media, AI4Trust, and TITAN, in cooperation with the European Commission, host a conference, “Meet the future of AI”, which focuses on various facets of Artificial Intelligence and the disinformation landscape. Sign up here.
  • 30 June: The “Countering Disinformation During the Rise of AI: 2 Years of EMIF’s Impact” conference will take place in Brussels and online. Register here.
  • 3 July: “Polluting the truth about climate change”, a webinar by EU DisinfoLab and the Heinrich Böll Foundation (2-3 PM), with Alexandra Geese, German MEP of The Greens/EFA; Jennie King, Head of Climate Research and Policy at the Institute for Strategic Dialogue (ISD); DeSmog’s Climate News Reporter Adam Barnett; and Ana Romero Vicente, Researcher at EU DisinfoLab. Don’t miss this opportunity to gain valuable insights and join the conversation on how to combat climate change disinformation. Register here.
  • 4 July: This high-level event at the European Parliament, “How best to ensure the integrity of the 2024 European elections?”, will focus on safeguarding the upcoming elections through the participation of civil society. Sign up here.
  • 27-28 July: Register here for the Cambridge Disinformation Summit, a hybrid event gathering global thought leaders to discuss strategic disinformation.
  • 11-12 October: Registration for #Disinfo2023, EU DisinfoLab’s annual conference, is ongoing. Don’t miss the chance to meet your peers in the counter-disinformation field for a two-day conference in Krakow! Register now, here!

Job opportunities

Check out this thread!