Dear Disinfo Update readers,

The EU elections are now behind us, and it’s time to dig into the analysis and apply the lessons learnt. Many important elections are still on the horizon in this super election year. What more can be done to protect them from disinformation and foreign (or domestic) interference? How to ensure that democracy remains a cornerstone, not just a cornice, in our societies? In this edition, we start the work by taking a look at how online platforms are approving political ads and labelling content, how AI deepfakes deceive voters, and how false narratives are used to sow distrust and division, among other topics.

We’ve also refreshed our two popular factsheets on platform policies: one on election misinformation and another on AI-manipulated and generated misinformation – both highly relevant resources in the context of elections.

If you’ve had enough of elections and platforms, there’s plenty more to explore on other topics in this newsletter. We’ve got an interesting read to recommend bringing clarity to the polluted climate information ecosystem, a variety of events to suit everyone’s tastes, and job opportunities to further the fight against disinformation. Get stuck in!

Our webinars

Upcoming – register now!
Past – watch the recordings!
  • Cleaning the climate advertising industry | Duncan Meisel from Clean Creative guided us through the world’s most powerful creative agencies, revealing their connections to Big Oil and uncovering their techniques to delay and deny climate solutions.
  • Beyond deepfakes: AI-related risks for elections | In this session, Sophie Murphy Byrne from Logically discussed the risks posed by AI technologies in spreading electoral disinformation, with a focus on recent electoral events.

Your message here?

Want to reach the 10,000+ readers of this newsletter?

Disinfo news & updates

  • Platforms & election confusion. This report from Check First highlights issues during the 2024 Finnish presidential elections, focusing on confusing Google search predictions, unclear content labelling in Google News, and political imbalances on TikTok and Instagram.
  • Election integrity risks. Meta’s new transparency tool, which aims to boost accountability, doesn’t seem to adequately address the deeper risks of misinformation on the platform.
  • Pro-Russian propaganda. In May, just weeks before the EU Parliament elections, Meta approved 275 political ads lacking mandatory disclaimers, reaching over three million users across Italy, Germany, France, and Poland. These ads pose significant risks to civic discourse and electoral integrity, and the findings reveal critical vulnerabilities in Meta’s ad moderation systems.
  • AI fakes for young voters. A BBC investigation found that TikTok is exposing young voters to misleading, AI-generated videos about key political figures and issues in the run-up to the UK general election. Despite TikTok’s announcements to increase its efforts to counter misinformation, many users remain confused about the authenticity of the content.
  • Debunked, but… Fact-checking is a crucial first response to disinformation. But what happens next, once fact-checked content is reported to platforms? Does it get labelled? This report shows that around 45% of flagged content infringing platforms’ disinformation policies is not acted on. The worst-performance medal in these EU elections goes to X and YouTube, which failed to act on around 75% of the cases.
  • Convergence of anger. False narratives circulated in the runup to and during the European Parliament elections, aiming to exploit voter anger and distrust. These misleading claims focused on climate change, immigration, and Ukraine, and particularly on the protests by European farmers.
  • FactCRICIS. The European Fact-Checking Response in Climate Crises (FactCRICIS) was launched by the EFCSN to support fact-checking organisations to identify and debunk disinformation campaigns related to climate change and other crises.
  • Godfathers of climate chaos. The Secretary General of the United Nations, António Guterres, made a major speech in New York on 5 June, urging a ban on advertising by fossil fuel companies to stop greenwashing campaigns.
  • AI at the Games. A Microsoft Threat Intelligence report reveals Russian influence campaigns targeting the 2024 Paris Olympic Games. Russia-affiliated actors are using AI-generated deepfakes and spoofed media, including videos with an AI-mimicked Tom Cruise voice.

Reading & resources

  • Profiting from propaganda. This report by Reset reveals that Meta is profiting from pro-Kremlin election meddling ads in Moldova, mirroring the messages of EU/US-sanctioned oligarch Ilan Shor. It follows up on the investigation that uncovered a massive paid ad campaign on Facebook – presented also in our webinar in February.
  • Please check. This investigation by Check First and Reset dissects Operation Overload, a systematic campaign by pro-Russian propagandists to overwhelm fact-checkers and newsrooms, using coordinated emails, Telegram channels, fake accounts on X, and Russia-aligned websites. Don’t miss our webinar on 20 June with Check First’s Guillaume Kuster to learn more about this operation!
  • Israel & US. Israel has been linked to a disinformation campaign targeting US lawmakers, according to Mandiant. The campaign involved fake personas and news outlets to promote pro-Israel military content and discredit critics.
  • Social media unplugged. This study investigates how deactivating Facebook and Instagram accounts affected political knowledge, attitudes, and behaviour during the 2020 US election. Although it could be criticised for focusing on short-term effects, the study remains an intriguing contribution to understanding social media’s impact on political behaviour.
  • Accurate but deceptive. This study found that misleading yet factually accurate mainstream news content on Facebook significantly increased COVID-19 vaccine hesitancy, being much more impactful than flagged misinformation.
  • Investigating digital ad libraries. This guide by Craig Silverman teaches you how to track spending, narratives, and campaigns using ad libraries from Google, Meta and other platforms.

This week’s recommended read

This week, our researcher Ana Romero-Vicente recommends reading ‘Climate Obstruction across Europe’. Coordinated by the Climate Social Science Network (CSSN), this research-based book includes insights from leading experts. Released just in time amidst the evolving political landscape, it offers readers a broader context to understand how climate disinformation persists – and how it affects us all.

The book provides a very lucid explanation of disinformation techniques, their underlying objectives, key disseminators, and an additional treasure trove of information: ‘Climate Obstruction across Europe’ explores how the idiosyncrasies of climate disinformation vary across different European countries. It is genuinely intriguing to discover how the dynamics of climate disinformation differ from one country to another based on culture, political agendas, industry needs, and economic and geopolitical interests.

This is a book that requires a highlighter to mark countless essential points. It brings clarity to the polluted climate information ecosystem and is an excellent example of how less is more. Time well invested.

For more insights into climate disinformation, visit our Climate Clarity hub. Just updated with the latest news, research, events, resources, and more, it consolidates knowledge and expertise to help you stay on top of the facts.

The latest from EU DisinfoLab

We’ve refreshed our resources that shed light on how major online platforms are handling (or not) the risks posed by electoral misinformation and AI-generated content to election integrity and our democracies.

  • Platforms’ policies on elections misinformation (V2) | Very large online platforms (VLOPs) face stringent regulations under the EU Digital Services Act (DSA) to tackle false and misleading content related to elections. This factsheet explores how Facebook, Instagram, YouTube, TikTok, and X are addressing election misinformation. 
  • Platforms’ policies on AI-manipulated and generated misinformation (V3) | This factsheet delves into how some of the main platforms approach AI-manipulated or AI-generated content and how they have updated their terms of use, exploring how they address the risk of such content becoming mis- and disinformation. Spoiler alert: most of the actions taken in 2024 reaffirm the previous approach, with labelling as the main solution against mis- and disinformation.

Check out our entire collection of factsheets highlighting how platforms operate and how they can be weaponised for misinformative purposes.

Events & announcements

  • 17-19 June: The European Dialogue on Internet Governance, EuroDIG 2024, will be organised in Vilnius, Lithuania.
  • 7-28 June: The AI-CODE project hosts a series of co-creation workshops online, aimed at developing innovative tools to help media professionals leverage generative AI effectively and responsibly.
  • 19 June: Two consecutive events in Brussels will delve into AI. In the morning, AI4Media will host ‘EU Vision for Media Policy in the Era of AI’, focusing on AI’s impact on media, EU legislation, and the ethical and legal challenges AI poses to media freedom and expression. In the afternoon, AI4Media, Titan, veraAI, AI4Trust, AI4Debunk, and AI-CODE will organise the ‘Meet the Future of AI’ event, addressing challenges around generative AI for public good and democracy. Participation in both events is free but requires separate registrations.
  • 20 June: Organised in the framework of the International Weather and Climate Forum, this workshop brings together TV weather presenters and representatives of international organisations for an in-depth reflection on how to communicate climate change to viewers and policymakers.
  • 20-21 June: The Media & Learning 2024 Conference, titled ‘Back to the Future?’, will take place in Leuven, Belgium. The two-day event will explore the latest innovations and trends in educational technology.
  • 26–27 June: The Fourth AI Community Workshop 2024, co-organised by the European Networks of Excellence in AI, Data, and Robotics (AI NoEs), will take place in Thessaloniki, Greece, and online. This two-day event will focus on fostering AI research collaborations and AI education, featuring workshops, keynotes, and panel discussions with leading experts.
  • 26-28 June: The International Fact-Checking Network’s (IFCN) 11th Global Fact-Checking Summit, GlobalFact 11, will be held in Sarajevo, Bosnia and Herzegovina, and online.
  • 16-18 July: The 2024 International Conference on Social Media & Society will gather leading social media researchers from around the world in London.
  • 1 October: The Tech and Society Summit will bring together civil society and EU decision-makers in Brussels, Belgium, to explore the interplay between technology, societal impacts, and environmental sustainability.
  • 9-10 October: #Disinfo2024, our annual conference, will gather leading specialists from diverse backgrounds to explore the pressing challenges in the realm of disinformation. Secure your spot now!
  • 29 October: Coordinated Sharing Behavior Detection Conference will bring together experts to showcase, discuss, and advance the state of the art in multimodal and cross-platform coordinated behaviour detection.
  • 14 November: Arcom, the French Audiovisual and Digital Communication Regulatory Authority, is calling for research proposals around the themes of information economics, digital transformation, audience protection, and media regulation for its 3rd Arcom Research Day. Submit your proposal by 1 September.
  • AI & market power fellowship. The European AI & Society Fund has announced a call for proposals for its Global Fellowship Programme on AI & Market Power. Individuals with diverse skills, especially in journalism, OSINT, and technical research, are encouraged to apply by the deadline of 8 July.
