Dear Disinfo Update readers,

Proudly presenting the last edition of our newsletter before heading into the summer break! Intriguing reads ahead on climate change disinformation, elections, FIMI, AI, platforms, and more. Be sure to bookmark them for holiday reading to keep you grounded in case your beach novels get too breezy.

Enjoy your summer, and we look forward to reconnecting with fresh insights!

Our webinars

There’s just one session left before we’re off: ‘False Façade and CopyCop: two names for a new Russian influence operation’, with Beatriz Marin from the European External Action Service (EEAS) and Clément Briens from Recorded Future. After that, our webinar series will take a pause, with no sessions scheduled until late August.

So far this year, we’ve hosted 19 insightful webinars, covering a wide range of topics relevant to our community and giving the floor to experts to share their knowledge. If you attended all these sessions, chapeau! If not, you can find the recordings on our website to catch up.

We’ll resume our series with more engaging content after the break. Stay tuned for updates!

Your message here?

Want to reach the 10,000+ readers of this newsletter?

Disinfo news & updates

  • Climate litigation. A report by the Grantham Research Institute on Climate Change and the Environment at the London School of Economics reveals a surge in climate lawsuits against companies. The increase is driven by accusations of “climate-washing”, misleading environmental claims, and failure to disclose climate risks, and it highlights the growing use of legal systems to enforce corporate accountability on climate change.
  • Undermining Kaja. This analysis outlines how Russian state media and pro-Kremlin networks disseminated false narratives and manipulated information to undermine the credibility of Estonian Prime Minister Kaja Kallas after the European Council nominated her as the candidate for High Representative of the Union for Foreign Affairs and Security Policy.
  • Russia and Iran targeting French elections. This analysis by Recorded Future reveals that Russian and Iranian influence operations actively targeted the French elections, spreading misleading narratives on social media to sway public opinion and to destabilise the political landscape. The report underscores the ongoing threat posed by FIMI campaigns to electoral integrity.
  • Bye, Bye Biden. Wired reports that generative AI was used to create the cheapfake video of US President Joe Biden announcing his resignation, spread on social media platforms by the Doppelganger network.
  • Unruly AI bots. This NBC News report reveals that Microsoft’s Copilot and OpenAI’s ChatGPT spread misinformation about the US presidential debate, falsely claiming that the broadcast delay had been extended to allow time to edit the content before airing. Although the false claim was swiftly debunked, the AI chatbots continued to repeat it.

Brussels corner

  • ARCOM: trusted flaggers. The French Digital Services Coordinator (DSC), ARCOM, has published its procedure for appointing “trusted flaggers” under Article 22 of the Digital Services Act (DSA). Under that legislation, hosting providers and platforms must handle all notices of illegal content in a timely way, but notices from entities recognised by a national DSC as “trusted flaggers” must be treated as a priority. Trusted flagger status awarded by any DSC must be recognised by all platforms covered by the DSA, which will require a very high level of consistency in the approval of “trusted flaggers” across the EU. Interestingly, the French application procedure allows applicant organisations to claim expertise on topics that go beyond illegal content and, therefore, beyond the scope of what they could report as “trusted flaggers”. This raises some obvious questions about how easy or difficult it will be to maintain a strictly harmonised approach.

Reading & resources

  • Information integrity. The United Nations introduced the Global Principles for Information Integrity to combat the escalating threats of misinformation, disinformation, and hate speech, by promoting trust, empowering users, and ensuring media freedom. The principles call for increased transparency, protection of human rights, support for independent media, and ethical use of AI, and include key recommendations for governments, tech companies, and advertisers to refrain from amplifying harmful content and ensure robust protections for journalists and civil society.
  • QAnon and climate denialism. This investigation by Lighthouse Reports and EUobserver reveals how QAnon has shifted its focus from COVID-19 and Ukraine conspiracies to climate denialism, significantly influencing European mainstream discourse.
  • Extreme weather events misinformation. This report by the European Fact-Checking Standards Network (EFCSN) examines climate misinformation and denial narratives in Italy, Greece, and Spain. It analyses various extreme weather events in these countries over recent years, exploring the narratives that emerged in their aftermath.
  • Revived hacktivism demands vigilance. This blog post offers an analysis of the hacktivism threat landscape by Mandiant, providing tools to understand and assess the risks posed by these groups.
  • Far-right extremism on TikTok. This VSquare investigation uncovered how far-right groups in Europe are leveraging TikTok to spread conspiracy theories and target young users with extremist narratives.
  • The TikTok effect. This report by CheckFirst and Faktabaari, under the CrossOver Finland project, highlights concerning trends regarding TikTok’s influence on the 2024 European elections in the country.
  • Holistic approach. This report highlights what the UK could learn from Taiwan, Estonia, and Lithuania in the fight against disinformation. It stresses the necessity of a whole-of-society approach, engaging civil society, researchers, and journalists to enhance resilience against disinformation campaigns.
  • Information ecology. Alicia Wanless, the director of the Partnership for Countering Influence Operations at the Carnegie Endowment for International Peace, discusses in an interview the complexities of the modern information environment and the concept of “information ecology”. Read (or listen to) the interview here, or watch the video here.

This week’s recommended read

Raquel Miguel, Senior Researcher at EU DisinfoLab, recommends reading the recent Google DeepMind study “Generative AI Misuse: A Taxonomy of Tactics and Insights from Real-World Data”.

Based on the analysis of 200 real incidents, the piece sheds light on how GenAI models are being abused or exploited in practice. Among its conclusions: “manipulation of human likeness and falsification of evidence” – e.g., impersonation, appropriated likeness, falsification, counterfeiting – are the most prevalent tactics in real-world cases, deployed mainly to influence public opinion or for scam, fraud, or profit purposes. The report supports the premise that perpetrators avoid technological sophistication, relying instead on simpler techniques for deception.

Although the conclusions are not surprising, the study joins recent attempts to provide empirical evidence on the misuse of this technology and is a step forward from basing claims on mere speculation, as has often been the case so far. The evidence presented can also inform researchers’ and stakeholders’ approach to AI governance and mitigations.

In addition, the taxonomy of GenAI misuse tactics presented in the paper is a valuable contribution to the community and can help harmonise the terminology and standards used by researchers in the field.

The latest from EU DisinfoLab

  • Platforms’ AI policy updates: Labelling as the silver bullet? Following the release of the updated version 3 of our factsheet on platforms’ policies on AI-manipulated or generated misinformation, this blogpost, published as part of the veraAI project, provides an overview of recent updates on how online platforms are tackling AI misinformation.
  • Back to the future of AI. The second edition of the Meet the Future of AI event, organised in Brussels on 19 June by our veraAI project and the “AI against disinformation” cluster, explored the intersection of generative AI and democracy. Key discussions included AI’s dual role as both a threat and a tool against disinformation, with panellists and keynote speakers presenting AI-powered solutions and regulatory perspectives, and emphasising the need for collaborative efforts and clear steps to leverage AI in combating disinformation.
  • Tackling FIMI in Europe. To counter the threats of Foreign Information Manipulation and Interference (FIMI), the EU has mobilised a range of strategic initiatives. These are reflected in various projects that employ innovative tools and collaborative approaches to enhance public awareness, strengthen policy frameworks, and safeguard democratic processes. Read more about the projects involved in this cluster – ATHENA (which EU DisinfoLab is proud to be part of), ARM, DE-CONSPIRATOR, RESONANT, and SAUFEX – here.

Events & announcements

  • 15-16 July: The European Media and Information Fund (EMIF) Summer Conference, “Impact and Future Outlook,” will take place in Lisbon, Portugal. Our Research Manager, Maria Giovanna Sessa, will speak on a panel titled “Two sides of the same coin: algorithmic accountability and user empowerment?” during the second day of the conference.
  • 16-18 July: The 2024 International Conference on Social Media & Society will gather leading social media researchers from around the world in London.
  • 1 October: The Tech and Society Summit will bring together civil society and EU decision-makers in Brussels, Belgium, to explore the interplay between technology, societal impacts, and environmental sustainability.
  • 9-10 October: The programme for our annual conference #Disinfo2024 is out, and you won’t want to miss it. If you haven’t requested your tickets yet, now is the perfect time to do so! And if you have, but didn’t finalise the purchase, do it ASAP to avoid losing your seat!
  • 10 October: News Impact Summit: Fighting climate misinformation in Copenhagen, organised by the European Journalism Centre, will address how climate misinformation undermines public trust in climate policies and stalls progress toward a green transition.
  • 16 October: UNESCO will organise a webinar “Countering climate disinformation: strengthening global citizenship education and media literacy.”
  • 29 October: Coordinated Sharing Behavior Detection Conference will bring together experts to showcase, discuss, and advance the state of the art in multimodal and cross-platform coordinated behaviour detection.
  • 14 November: ARCOM, the French Audiovisual and Digital Communication Regulatory Authority, is calling for research proposals around the themes of information economics, digital transformation, audience protection, and media regulation for its 3rd Arcom Research Day. Submit your proposal by 1 September.
  • Verifying climate claims. This free 45-minute online course by AFP dives into verifying content and claims about climate change, spotting “greenwashing”, and selecting your sources.


This good X!