Dear Disinfo Update readers,

Welcome to the latest edition of our newsletter, where we hope to inspire those dedicated to countering disinformation.

This year, we’ve identified a few key work streams that intertwine with our mission: AI, climate change disinformation, and foreign information manipulation and interference (FIMI), especially zooming in on elections. Diving deep into these critical areas, we’ll harness our expertise and that of the wider community to produce impactful research, formulate actionable policy recommendations, and offer easily digestible insights (like our webinars – check out the list of upcoming ones below!). In addition to these focus areas, we keep monitoring and analysing evolving disinformation trends and relevant developments and initiatives in the field. Our goal is to provide support, facilitate collaboration, and amplify the impact of our collective efforts. Don’t hesitate to reach out to us with your ideas and suggestions!

We are also gearing up for the highlight of our year, #Disinfo2024. In response to our open call, we received enough proposals to fill at least seven conferences! As much as we’d like to organise them all, we’ll have to settle for one. We’ll do our best to incorporate as many ideas as possible into the conference programme, and to accommodate some of these topics in our upcoming webinars. We’re sorry not to be able to get in touch with everybody, but we are deeply moved by your commitment and the exceptionally high quality of submissions! We look forward to welcoming you at the conference to meet, share, learn, and connect with fellow professionals and enthusiasts. Mark your calendars for 9-10 October 2024, and stay tuned for more details on registration!

Read on to collect more building blocks of knowledge and opportunities for action in our shared mission!

Disinfo news & updates

  • Self-regulation for democracy. In this crucial election year, major AI developers, including OpenAI, Google, and Meta, are racing to put measures in place to curb AI’s potential misuse in the political sphere. A big tech coalition of 20 major companies (including Microsoft, Meta, Google, and OpenAI) signed an agreement at the Munich Security Conference to collaborate on stopping deepfakes from interfering with global elections. The agreement covers developing tools to detect and prevent misleading AI content (by watermarking or embedding metadata), sharing best practices, and launching public awareness campaigns. Anthropic, an AI company heavily backed by Amazon, announced an initiative to combat misinformation ahead of the US presidential elections, exploring the possibility of redirecting users of its AI chatbot Claude who ask about political subjects to “authoritative” sources of voting information.
  • Matching words with deeds. In a move to align with the Digital Services Act (DSA), TikTok announced in-app election centres for each of the 27 EU Member States. The initiative, aimed at combating online misinformation ahead of the June elections, involves collaborating with local electoral bodies, civil society groups, and fact-checking organisations. Meta joined the party by announcing its own EU-specific operations centre to combat election misinformation, influence operations, and risks related to the abuse of AI.
  • Requesting platform data. AlgorithmWatch and AI Forensics are among the first to use the DSA to request platform data. The success of the initiative could demonstrate the DSA’s effectiveness in protecting citizens’ rights online, especially during crucial election periods.
  • Countering AI misuse. OpenAI, with Microsoft Threat Intelligence, has disrupted five state-affiliated actors linked to China, Iran, North Korea, and Russia that attempted to misuse AI services for malicious cyber operations.
  • Fake celebrity cameo II. This DFRLab investigation reveals a disinformation campaign against Moldova’s President Maia Sandu. The campaign, similar to the one conducted to denigrate Ukraine’s President Zelensky in late 2023, involves manipulated videos of celebrities on Cameo, falsely portraying them as supporting the overthrow of Sandu.
  • Explaining the Biden robocalls. The man behind the robocalls that used an AI-generated voice mimicking Joe Biden to advise against voting in the primary has been identified. Steve Kramer, a consultant working for Biden’s opponent, stated that his aim was to highlight AI’s political dangers.
  • United against climate misinformation. An alliance of seven media outlets in Spain – among them 20minutos, CadenaSER, La Marea, La Vanguardia, and Servimedia – has been formed to fight climate misinformation. In addition, one of the media outlets from the Network of Local Journalists against climate disinformation will be selected. Read more (in Spanish) here.

Brussels corner

  • Digital Services Coordinators. The Digital Services Act (DSA) officially came into force on 17 February 2024, establishing a framework for responsible content moderation, transparency, and accountability for online platforms. Context published a preliminary list of the national Digital Services Coordinators (DSCs) responsible for supervising, enforcing, and monitoring the DSA. Read the article (in French) and find the list here, and don’t forget to sign up for our 21 March webinar on the topic!
  • TikTok under DSA scrutiny. The European Commission has opened formal proceedings to assess whether TikTok has breached the Digital Services Act (DSA) in areas linked to the protection of minors, advertising transparency, data access for researchers, as well as the risk management of addictive design and harmful content. 
  • DSA election integrity guideline public consultation. The European Commission’s public consultation on draft Digital Services Act (DSA) guidelines for platforms on the mitigation of systemic risks for electoral processes is open until 7 March.

Our webinars

Register now for these upcoming webinars!

What we’re reading

  • Faltering on Facebook, ignored on Instagram. Graphika’s latest report examines the activities of Russian state-controlled media on Meta platforms two years after the invasion of Ukraine.
  • Countering hybrid warfare in the Black Sea region. This report by the Center for the Study of Democracy (CSD) examines Russian disinformation operations in the context of weapons of mass destruction non-proliferation and how it fits within the Kremlin’s hybrid warfare strategy in the Black Sea region.
  • Persuasive AI. This study explores the effectiveness of propaganda generated by artificial intelligence. A survey of US participants comparing the persuasiveness of news articles written by foreign propagandists with articles generated by GPT-3 found that the model can create highly persuasive content. This raises concerns about the potential use of AI in crafting compelling propaganda with minimal effort.
  • What’s up with AI? The newly published white paper, “Generative AI and Disinformation: Recent Advances, Challenges, and Opportunities”, is a collaborative effort by four EU-funded projects (among them TITAN, AI4Media, and AI4Trust) to enhance the understanding of both the disinformation-generation capabilities of state-of-the-art AI and its use in developing new technologies for disinformation detection.
  • Russian disinfo in Germany. This study (in German) by the Center for Monitoring, Analysis and Strategy (CeMAS) offers evidence-based insights into the prevalence and impact of pro-Russian conspiracy narratives and propaganda in Germany, using survey data to analyse how Russian narratives are influencing the population.
  • Russosphere returns. As a follow-up to their investigation into the ‘Russosphere’ influence campaign in early 2023, Logically’s recent report assesses new campaigns related to promoting pro-Russian sentiments and US-based independence movements claiming to have support from PMC Wagner and the Russian government.

This week’s recommended read

This week, our Managing Director Gary Machado suggests taking a careful look at this X thread by Bellingcat’s Eliot Higgins.

Following the death of Alexei Navalny, a massive smear campaign was launched against Yulia Navalnaya, purporting to ‘reveal’ her affair with investigative journalist Christo Grozev. Bellingcat responded with timely and brilliantly conducted open-source intelligence (OSINT), digging into the supposed hotel room reservation of the ‘couple’ by examining the missing digits in the booking confirmation, and swiftly debunking the false allegations.

Events & announcements

  • 26-27 February: The EDMO Scientific Conference, ‘Navigating the Complex Landscape of Disinformation’, is ongoing in Amsterdam.
  • 27 February – 1 March: The European Digital and Media Literacy Conference, a three-day event on digital and media literacy, takes place in Brussels.
  • 11 March: The first module of EDMO’s training series on election integrity will focus on election integrity on online platforms. Read more here, and apply to participate here.
  • 18 March: The second EDMO election integrity training session is aimed at journalists covering the upcoming elections. Find the details here, and apply here.
  • 13-14 May: The 2024 EDMO Annual Conference will take place in Brussels. Save the date!
  • 25-26 September: The fifth edition of the JRC DISINFO workshop, organised by the European Commission Joint Research Centre (JRC), and focusing this year on elections and the role of generative AI in disinformation, will take place in Ispra, Italy. Save the date!
  • 9-10 October: The EU DisinfoLab 2024 Annual Conference will take place on 9-10 October 2024 in Riga, Latvia. Registrations will open soon – stay tuned!