Dear Disinfo Update reader,

In this edition of our newsletter, we spotlight the latest news from Brussels, old but new disinformation threats, and reports and tools fresh off the press. 

We will dive into continuous updates on climate disinformation, political changes, and Russian influence operations – including efforts in Ukraine and Africa – as well as UK/EU sanctions against disinformation agents.

But we’ve got more to share! On top of our usual AI Hub-dates and fresh webinars and events, our European disinformation landscape resource is evolving, with updated factsheets now live for France and Portugal. As the digital information space continues to shift and evolve, we’re actively revising earlier entries to keep them accurate and up to date. This effort is powered by our passionate community, whose time, insights, and support make it all possible. If you’re interested in getting involved – by contributing your knowledge or helping to fund this important work – we’d love to hear from you.

So scroll down and dive into this new packed edition!

Our webinars

UPCOMING – REGISTER NOW!

PAST – WATCH THE RECORDINGS!

  • Mapping Manipulation: How the FIMI Exposure Matrix Sharpens Attribution and Reveals Connections | In this webinar, Beatriz Marin Garcia from the EEAS presented the FIMI Exposure Matrix, a major step forward in the field of attribution, discussing its structured, four-tier classification of information channels and the deeper insight it offers into recurring disinformation tactics.
  • 2025: Climate disinfo reload | AI bots are talking green while pushing denial. Trump calls global warming “good for us.” Musk goes from EV hero to climate contrarian. In this webinar, Global Witness and Ripple Research reveal how 2025’s climate disinformation is smarter, louder, and more dangerous, stretching from Washington to cities around the world. Learn how the disinfo economy is thriving and what can still be done to stop it. One hour. Two experts. Zero time to waste.
  • Influence of foreign narratives on contemporary conflicts in France (in French with subtitles available) | This webinar delved into how foreign interference shapes public opinion and threatens democratic resilience, drawing on insights from Laurent Cordonier’s recent report on disinformation narratives in France. 

Disinfo news & updates 

Platforms and politics

  • Witnessing the influence. Despite the commitments to uphold election integrity undertaken by TikTok ahead of the Romanian elections (discussed in our previous newsletter), research shows that the platform’s algorithm pushed far-right content almost three times more than all other political content combined. To assess this, Global Witness set up TikTok accounts expressing equal interest and engagement in the two political candidates. Overall, they found that after only 30 minutes spent on the “For You” feed, 74% of the 65 recommended political posts promoted far-right views, while 26% represented all other political views (a short sketch of this tally follows this list).
  • An appeal to the masses. On the day of the Romanian elections, the government issued warnings of Kremlin-linked campaigns after all Telegram users in the country received a message from Telegram’s CEO claiming that the West was trying to silence conservative voices. Pavel Durov, Telegram’s founder, currently under judicial supervision in France, has accused the head of the French foreign intelligence agency DGSE of asking him to ban conservative voices ahead of the elections. The DGSE stated that it has met Durov over the years to remind him of his company’s responsibilities, but it rejects the allegation that it sought to have accounts banned ahead of elections.
  • (Tech) Empire State Building. Since Donald Trump’s inauguration, over three dozen individuals linked to Elon Musk, Peter Thiel, Marc Andreessen, and Palmer Luckey have taken positions in US federal agencies that oversee and award contracts to their companies, with about $6 billion in federal contracts directed to those companies. This raises concerns about potential conflicts of interest and ethics violations, as federal law prohibits using public office for private gain. The situation is further complicated by Trump’s firing of key oversight officials, including the director of the Office of Government Ethics and 17 inspectors general, weakening safeguards against corruption and triggering broader concerns about competition and the regulations meant to protect the public.
  • From Siberia to the Sahel. A wave of disinformation spread in the Sahel following a viral AI-generated video falsely claiming that a French woman had been arrested for espionage in Burkina Faso. Originally posted to YouTube with a “work of fiction” disclaimer, the video was then shared on other platforms without that disclaimer and subsequently amplified by Russian state-sponsored media outlets such as the Pravda network. These types of narratives are common in the Sahel (Burkina Faso, Niger, and Mali are already considered strongholds of pro-Russian propaganda) and affect the political landscape, increasing the popularity of leaders considered to be strong Kremlin allies (e.g. Burkina Faso’s Ibrahim Traoré).
  • Death to information. During his meeting with South African President Cyril Ramaphosa, US President Donald Trump showed an aerial shot of a procession of cars moving along a road lined with white crosses. With the footage, he doubled down on false claims of a white genocide in South Africa, claiming that the crosses were “burial sites” for over a thousand white farmers. However, the crosses were actually placed along the road between the town of Newcastle and the rural community of Normandien in 2020 as a tribute to a farming couple who had been murdered.
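
For readers curious about the arithmetic behind audits like the Global Witness experiment above, here is a minimal sketch – our own illustration, not Global Witness’s actual tooling – of how recommendation skew is tallied once each recommended post has been manually labelled (the hypothetical counts below reproduce the reported shares):

```python
from collections import Counter

# Hypothetical labels for the 65 recommended posts; in the real audit,
# researchers label each recommended post manually.
labels = ["far-right"] * 48 + ["other political"] * 17

counts = Counter(labels)
total = sum(counts.values())  # 65

for category, n in counts.items():
    print(f"{category}: {n}/{total} = {n / total:.0%}")
# far-right: 48/65 = 74%
# other political: 17/65 = 26%

# "Almost three times more": the ratio between the two shares.
print(f"skew: {counts['far-right'] / counts['other political']:.1f}x")  # 2.8x
```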

Russia strikes again

  • I spy with my little eye. A Russian state-sponsored cyberespionage campaign attributed to APT28 (Fancy Bear/Forest Blizzard) hackers has been targeting and compromising international organisations since 2022 to disrupt aid efforts to Ukraine. The hackers targeted entities in the defence, transportation, IT services, air traffic, and maritime sectors across 12 European countries and the United States, and compromised internet-connected cameras at Ukrainian border crossings to monitor aid shipments.
  • A price for lies. Britain announced wide-ranging new sanctions targeting Russia’s military, energy and financial sectors, ramping up pressure on Moscow in coordination with the European Union. The UK also sanctioned 15 individuals involved in spreading Russian disinformation, while the EU listed a further 21 individuals and six entities and introduced sectoral measures in response to destabilising activities against the EU, its member states and international partners.
  • Bad coverage, uncovered. The French satirical newspaper Charlie Hebdo filed a complaint against an unknown person for the dissemination of false covers on social networks. These front pages, which mimic the layout and typography of the newspaper, circulate “mainly on the Telegram messaging network” and on X and are “accompanied by captions or comments written in Russian”. Aimed at the Russian public, these covers denigrate Ukrainian President Zelensky and support for Ukraine, leading readers to believe that Charlie Hebdo is anti-Zelensky and pro-Putin.

Money talks

  • Another election, another FIMI threat. In the days following the Romanian elections, a series of foreign-funded political advertisements appeared on Facebook across Poland ahead of its upcoming presidential election, sparking fears of attempted interference. The NASK Disinformation Analysis Center identified political ads on Facebook that may be financed from abroad and reported the accounts to Meta. Read more about the Polish elections in our recommended read.
  • A new foundation in town. Last week, the Sunstorm Foundation held its first event in Paris. This new philanthropic venture aims to gather European venture capitalists and tech founders to support the ecosystem of NGOs and projects countering disinformation in Europe. Their next target? Raising 10 million euros over the next three years to launch their activities.

A bad climate

  • On air: disinformation. A study found that out of the ten most popular online shows and podcasts, eight have spread misleading or false information on climate change. This was frequently done through subtle narratives, such as touting supposed benefits of climate change, denying the effectiveness of solutions, and framing environmental policies as a tool for government overreach. Some of these platforms are also backed by wealthy right-wing donors (for example, Texas fracking billionaires), all while presenting themselves as independent truth-tellers.
  • Liar, liar, world’s on fire. The Union of Concerned Scientists’ (UCS) new report, Decades of Deceit, presents evidence that fossil fuel companies such as BP, Chevron, ExxonMobil, and Shell not only understood the risks of climate change as far back as the 1950s but worked to suppress that knowledge, mislead the public, and shape policy to preserve their profits. According to UCS, Big Oil’s internal models correctly predicted temperature increases and sea level rise, yet the companies funded front groups to spread disinformation on the topic. The UCS report also outlines how corporate PR strategies evolved over time, shifting from outright denial to lobbying, misinformation via third-party groups, and even alleged digital surveillance and cyberattacks. Dated between 2015 and 2021, the documents show oil companies funding sponsorship programmes meant not only to enhance their reputations as socially responsible corporations, but also to ward off climate regulations that might undercut their oil and gas business.
  • Uncovering unsustainability. A new briefing by Climate Action Against Disinformation (CAAD) explores how fifteen of the largest and most influential agribusiness, mining, and fossil fuel companies operating in Brazil communicate climate and sustainability narratives across websites, social media, and paid advertising channels, often greenwashing their environmental impact and delaying real action.
  • Grid of misinformation. US right-wing media have attempted to use the blackout that hit Spain, Portugal, and France to criticise renewable energy, claiming that the power outage exposed the shortcomings of renewables and validated Trump’s support for oil and gas expansion. However, power outages have also long affected fossil-fuel-dominated grids, while renewable energy sources have proven resilient in the face of past extreme weather.

Brussels corner

Council outlines key areas for AVMSD update

The Council of the European Union has issued a document containing its position on the update of the Audiovisual Media Services Directive (AVMSD). 

The Directive, originally inspired by an international convention on trans-frontier television, was adopted in 1989 and, in an effort to keep up with technological developments, was updated in 1997, 2007, 2010 and 2018, with a plan to update the legislation again in 2026. It overlaps to varying degrees with the E-Commerce Directive, the Digital Services Act (DSA), the European Media Freedom Act (EMFA) and self-regulatory instruments such as the Code of Conduct on Disinformation. The Council believes that the relationship between the AVMSD and other regulatory and self/co-regulatory instruments “should be clear”.

European Parliament Committees hold joint hearing on media resilience

The European Parliament’s Special Committee on Disinformation held a joint hearing with the Parliament’s Culture Committee on “The State of Media in the EU and Media Resilience”.

There were two panels.

The first panel heard from public service broadcasters on the current problems, but also the future potential, of artificial intelligence (AI) to support quality news media, as well as on trends in public perceptions of trustworthy news sources. It also heard from editorial media on what they see as the three pillars of effective journalism: critical and independent analysis, holding power to account, and providing an arena for diverse opinions. These speakers pointed to a weakening of liberal democracies, a weakening of independent media and a growing role for online media; even in comparatively strong countries, there is erosion and a strengthening of the extremes, and the lack of innovation in traditional media was lamented. Finally, the panel heard from a speaker on the threats AI poses to independent journalism, particularly in relation to editorial independence: first, the use of such technologies presents major ethical and legal challenges; second, there is currently a lack of clarity about what a journalist is; and third, the silencing of journalism through law was highlighted.

The second panel consisted of a Facebook whistleblower, talking about systemic risks; an academic, talking about structural responses of democracies to enhance resilience; and an NGO representative, talking about spyware. The Facebook whistleblower said the Digital Services Act (DSA) should be used more to learn about what platforms are doing to identify and dismantle disinformation campaigns, as well as to stop them from being amplified. The academic described research on the increasing use of Instagram and TikTok as news sources by young people; this “news” comes from influencers and celebrities, and young people are generally more distrustful of traditional news and more susceptible to disinformation. She described a Croatian project designed to address structural problems with fact-checking, as well as the gaps in policy and regulation related to news influencers, before suggesting that traditional media learn from the approach taken by those influencers. The NGO representative described spyware attacks on journalists in Europe, pointing to the apparent un-inviting of journalist victims from a Parliament Civil Liberties Committee meeting and the fact that a previously published Euractiv article on the matter is no longer available. She also described attacks on Ukrainian and Russian journalists, as well as journalists from other countries in the region, raised concerns that Latvia and Estonia are reputed to use spyware against journalists, and called for urgent action from the EU to protect journalists.

Democracy Shield Special Committee holds hearing on algorithmic manipulation and platform accountability

The European Parliament’s Democracy Shield Special Committee held a hearing on the role of online platforms in shaping democratic discourse, focusing on algorithmic manipulation, data access, and the effectiveness of current regulatory frameworks. No online platform made itself available to speak at the event – something that happens, according to the Committee chair, “every time platforms are invited”.

A speaker from the Institute for Strategic Dialogue (ISD) described the various levels of failure of platforms to understand or mitigate the negative effects of their recommender systems, or even to enforce their own internal rules – all while making access to platform data more difficult for researchers. She pointed to failures to implement legislation such as the DSA, and to attacks on regulators and civil society.

The European Analytical Collective Res Futura described algorithmic and design features of Telegram that drive disinformation. The round of questions from parliamentarians focused on the right to anonymity online and on whether the existing legal framework for illegal content is adequate. Responses focused on data access being key to achieving numerous public policy objectives.

In the second panel, the Center for European Policy Studies explained that there is no real anonymity online and that, while we know little about the platforms, they know vast amounts about us. It proposed an alternative approach to the regulation of freedom of expression, based on the speed and scale of communications. An Austrian journalist described an experiment in which Meta promoted content recognised by users as good for the world, which found that promoting such content led to less user engagement; as a result, Meta implemented only a weakened version of it. She also pointed to a fundamental shift in which user signals platforms collect, moving from explicit signals subject to some user reflection (likes, etc.) to implicit ones (the number of seconds spent watching a video) – a shift illustrated by the toy sketch below. Finally, she criticised the manipulative design of the procedures platforms provide for reporting content.
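
A minimal sketch of that signal shift, with invented weights and field names (real ranking formulas are far more complex and are not public): once watch time feeds the ranking, a user “votes” for content merely by lingering on it.

```python
# Toy illustration of explicit vs implicit engagement signals.
# Weights, cap, and field names are invented for illustration only.

def engagement_score(liked: bool, seconds_watched: float) -> float:
    explicit = 1.0 if liked else 0.0             # deliberate signal (a like)
    implicit = min(seconds_watched / 30.0, 1.0)  # passive signal (watch time, capped)
    return 0.4 * explicit + 0.6 * implicit

# A video the user never consciously endorsed can outrank one they liked:
print(engagement_score(liked=True, seconds_watched=2))    # ≈ 0.44
print(engagement_score(liked=False, seconds_watched=28))  # ≈ 0.56
```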

Reading & resources

  • Fighting disinfo in the Baltics. Civic Resilience Initiative’s (CRI) Tracking and Fighting Disinformation in the Baltics is a multidisciplinary project aimed at strengthening media literacy and public resilience across the region. It combines an interactive educational game, Disinformation Challenge, which helps students and individuals build critical thinking and content verification skills, with a Disinformation Monitoring Platform: a dynamic database offering weekly trend summaries and monthly in-depth reports on disinformation campaigns circulating in the Baltic states, providing journalists and civil society members with actionable insights.
  • Tracking hashtags. A new open-source dashboard to track and investigate hate speech and disinformation campaigns on Bluesky was created as a project at the Bellingcat & CLIP Hackathon at Universidad de los Andes. Originally based on data from Brazil, it can be adapted to other countries, contexts, and research questions.
  • Impulsivity problems. A study found that problematic social media use (PSMU) is associated with greater impulsivity and risk-taking and that, consequently, the greater one’s PSMU, the greater one’s intent to click on false news specifically. This research suggests that individuals who experience the most distress and impairment in daily functioning from social media use are also the most susceptible to false information posted on social media.
  • Misinfo in the UK. Full Fact Report 2025 assesses the state of misinformation in the UK, exploring how political change has created new challenges for those tackling misinformation and disinformation, and examining the tasks facing government, regulators and online platforms as a result.
  • Follow the Pravda trail. GLOBSEC published a new report mapping the sources behind the Pravda Network. By systematically examining over 4.3 million articles and drawing on a vast database of more than 8,000 sources, the study highlights Telegram’s pivotal role as the primary channel for content distribution, accounting for as much as 75% of the aggregated material, and traces the dynamic growth of the network.
  • Public Di-sinformation. The book “How Disinformation Ruins Public Diplomacy” analyses how Chinese and Russian public diplomacy strategies diverge from the existing academic literature and debates, specifically in the context of the new disinformation era. While using China and Russia as the two main case studies to develop a new theory, the book also covers other relevant cases of public diplomacy management, such as Turkey, India, and Morocco.
  • A sea of disinfo. A new report by MedMO (the EDMO hub for Greece, Malta and Cyprus) examines over 1,200 instances of false or misleading content circulating in Cyprus, Greece, and Malta between January and June 2024, with the aim of measuring how these claims were treated by the very large online platforms and online news outlets that shared them. Findings show significant variation in how platforms handle identical content.
  • Teaching media literacy. The Guardian Foundation is offering a qualification programme for UK schools aimed at developing critical thinking skills in secondary-aged students, helping them recognise and build resilience against mis- and disinformation.
  • Debunk and Discover. The new Debunk.org report sheds light on the “Sprinter” network on X – a complex web of fake accounts that spread pro-Kremlin disinformation – revealing that these automated, inauthentic accounts coordinate efforts to amplify harmful political narratives. By uncovering links to a Telegram channel controlled by a Belarusian state operative, Debunk traces the operations of the Sprinter network to actors linked to government-sponsored disinformation.

This week’s recommended read

This week’s recommended read, an investigation into election disinformation, comes from our Executive Director Alexandre Alaphilippe:

Polish investigative outlet Demagog, together with Maldita.es and other civil society partners, has released a well-documented investigation into a cross-national network of Facebook pages paying to promote political disinformation on Meta in Spanish and Polish ahead of elections.

Why is this investigation worth reading? 

Because this opaque coordinated campaign doesn’t come from outside the EU, but seemingly from within Europe. Because, unusually, the campaign is pushing content aligned with a progressive agenda. 

What do we think about it? 

The political slant of this campaign is noteworthy. The underlying problem here, as usual, lies not in the content but in the mechanisms and processes that are not strong enough to defend democracy, such as the vulnerability of recommender and advertising systems. 

Any tendency to consider such coordinated campaigns as normal only adds to the information disorder, increasingly blurring the lines between truth and fiction. 

Of course, the mission of the counter-disinformation community is to expose all disinformation campaigns, no matter the source, content or intent, and demand accountability. Failure to do so will make our goals even more difficult to achieve.

The latest from EU DisinfoLab 

EU DisinfoMap. We have published new, updated versions of the disinfo landscape factsheets for France and Portugal! They cover key narratives, notable cases, main actors within the counter-disinfo community, and legal frameworks. Explore these and the other country factsheets already released … and more coming soon!

Another AI Hub-date. We’ve just added new content to our AI Disinfo Hub – your source for tracking how AI shapes and spreads disinformation, and how to push back. Here’s a snapshot of what’s new: 

  • TechCrunch: Elon Musk’s AI chatbot Grok, developed by xAI and integrated into X, recently startled users by injecting references to “white genocide” in South Africa into responses to unrelated queries. While xAI initially framed this as a bug, reporting from The New York Times revealed it was caused by an unauthorised code change by an xAI employee, who inserted politically charged prompts without approval – an act the company says breached internal policy. The Atlantic suggested the behaviour may reflect deliberate ideological shaping through prompt engineering, raising broader concerns about manipulation in AI systems. Meanwhile, The Guardian reported that Grok itself claimed it had been explicitly instructed by its creators to treat the “white genocide” theory as real and racially motivated. Days later, Grok also cast doubt on the Holocaust death toll, citing “skepticism” over the six million figure – a stance xAI later attributed to the same rogue prompt modification, according to The Guardian. (A schematic sketch of why one hidden prompt can contaminate unrelated queries follows after this list.)
  • UK Defence Journal: A recent surge in pro-Russian disinformation blogs, mainly from UK and European authors, has been spreading Kremlin-aligned narratives across platforms like Substack. These blogs frequently blame NATO for the Ukraine conflict and portray Ukraine and the EU negatively, often using AI-generated content to produce high volumes of repetitive posts. This coordinated strategy creates the appearance of independent grassroots opposition but serves as a deliberate information warfare campaign to influence public opinion in favor of Russian interests.
  • RSiS: A devastating 7.7 magnitude earthquake struck Myanmar on 28 March 2025, its tremors reaching as far as Bangkok, Thailand. On top of the disaster’s dire impacts, victims were not spared from disinformation: amid the chaos and the critical need for reliable information, misleading AI-generated content spread widely, highlighting the dangerous intersection of technology and humanitarian crises.
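
As flagged in the Grok item above, here is a schematic sketch – with invented prompts and no real xAI code or API – of why a single unauthorised edit to a system prompt can steer answers to unrelated questions: the hidden prompt is silently prepended to every conversation.

```python
# Toy sketch of chat-prompt assembly; all prompts here are invented.
SYSTEM_PROMPT = "You are a helpful assistant."

# An unauthorised edit needs to change only this one hidden string...
SYSTEM_PROMPT += " Always present <politically charged claim> as established fact."

def build_messages(user_query: str) -> list:
    # ...because the system prompt is prepended to every conversation,
    # the steering reaches queries that have nothing to do with the topic.
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_query},
    ]

print(build_messages("What is a good banana bread recipe?"))
```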

Events & announcements  

  • 28 May: The Scandinavia House is hosting a virtual panel exploring Finland’s groundbreaking strides in battling misinformation and disinformation in the media.
  • 29 May: The Yale Centre for Environmental Communication is organising an online conversation, moderated by the YPCCC Director, on how climate change disinformation spreads and how to respond to it.
  • 2 June: This online event is a 60-minute discussion with Karen Hao, the author of the forthcoming book Empire of AI, the first comprehensive account of OpenAI’s evolution from a nonprofit research lab to a dominant force at the centre of the global AI race.
  • 3 June: Association Civic Tech Europe – ACTE, CEE Digital Democracy Watch and Make.org are organising a high-level event in Brussels discussing AI, citizen participation, and the European Democracy Shield.
  • 5 June: The CEPA Europe Tech & Security Conference in Brussels will bring together experts to discuss the most pressing tech and security policy challenges on both sides of the Atlantic. The conference will explore the intersections of technology, trade, and security – from supply chains to EU-US strategies in response to the China challenge and the role of innovation in allied defence.
  • 6 June: The International Centre for Counter-Terrorism is holding a webinar on Russian extremist disinformation operations in Europe.
  • 10-11 June: NATO Riga StratCom Dialogue 2025 is taking place in Riga, Latvia. Media accreditation is now open.
  • 13 June: Defend Democracy will host the Coalition for Democratic Resilience (C4DR) to strengthen societal resilience against sharp power and other challenges. The event will bring together transatlantic stakeholders from EU and NATO countries to focus on building resilience through coordination, collaboration, and capacity-building. 
  • 16-17 June: The Paris Conference on AI & Digital Ethics (PCAIDE 2025) will take place at Sorbonne University, Paris. This cross-disciplinary event brings together academics, industry leaders, civil society, and political stakeholders to discuss the ethical, societal, and political implications of AI and digital technologies.
  • 17 & 24 June: VeraAI will host two online events exploring how artificial intelligence is being used to analyse digital content and support media verification. The first one is a practical session for journalists, fact-checkers, and other media professionals, featuring hands-on demonstrations of veraAI tools. The second one is a research-focused session for academics, scientists, and scholars, offering deep dives into the latest findings and methodologies. Register for one or both sessions here.
  • 17-18 June: The Summit on Climate Mis/Disinformation, taking place in Ottawa, Canada, and accessible remotely, will bring together experts to address the impact of false climate narratives on public perception and policy action.
  • 18-19 June: Media & Learning 2025 conference under the tagline of “Educational media that works” will be held in Leuven, Belgium.
  • 27-28 June: The S.HI.E.L.D. vs Disinformation project will organise a conference, “Revisiting Disinformation: Critical Media Literacy Approaches”, on Crete, Greece.
  • 30 June: Forum Information Democracy has published a call for contributions addressing environmental disinformation, attacks on information integrity and their impact. The deadline to submit contributions is 30 June.
  • 30 June – 4 July: The Summer Course on European Platform Regulation 2025, hosted in Amsterdam, will offer a deep dive into EU platform regulation, focusing on the Digital Services Act and the Digital Markets Act. Led by experts from academia, law, and government, the course will provide in-depth insights into relevant legislation.
  • 6-13 July: WEASA is hosting a summer school titled “Digital Resilience in the Age of Disinformation” for mid-career professionals.
  • 8-11 July: The AI for Good Global Summit 2025 will be held in Geneva. This leading UN event aims to identify practical applications of AI, accelerate progress towards the UN SDGs and scale solutions for global impact.
  • 15-16 July: The European Media and Information Fund (EMIF) will hold its Summer Conference in Lisbon, Portugal. Save the date!
  • 4-6 September: This year’s edition of the SISP Conference #SISP2025 will take place at the University of Naples Federico II and will host conversations on digital sovereignty, EU cybersecurity policy, and the challenges posed by emerging technologies.
  • 19 September: “Generative futures: Climate storytelling in the age of AI disinformation” is a workshop that will be held in Oxfordshire (UK), exploring how generative AI is reshaping climate storytelling, both as a force for inspiration and a driver of disinformation.
  • 24-25 September: This year’s JRC DISINFO hybrid workshop is titled “Defending European Democracy”. Save the date and stay tuned for more info!
  • 15-16 October: Our annual conference #Disinfo2025 will take place in Ljubljana, Slovenia. Make sure you request your ticket now as the early bird sale finishes on 27 May!

Spotted: EU DisinfoLab

  • On the mic in Madrid. Our senior researcher Raquel Miguel Serrano moderated the presentation of the book ‘Guerras Cognitivas’ (‘Cognitive Wars’) at Espacio Fundación Telefónica.
  • Talking democracy in Brussels. Last week, our Executive Director, Alexandre Alaphilippe, contributed to the panel discussion “Safeguarding Democracy: Strategies to Combat Disinformation and Foreign Interference” at the Democracy Alive 2025: The Brussels Summit.
  • Let’s meet! In the coming two weeks, alongside our usual bases (Brussels, Madrid, Milan, Athens), part of our team will be heading to Prague, and to Berlin for the Overcoming Fragmentation event organised by Columbia World Projects & the Centre for Digital Governance at the Hertie School. If you’re attending too – or just nearby – drop us a reply to this email. We’d love to try and meet up!

Jobs 

Did you find a job thanks to the listing in this newsletter? We’d love to know – please drop us a message!

Have something to share – an event, job opening, publication? Send your suggestions via the “get in touch” form below, and we’ll consider them for the next edition of Disinfo Update.