Dear Disinfo Update reader,

Glad to have you with us for this new edition, and what an eclectic one it is: from new AI-generated scandals to old familiar faces, the disinformation landscape is full of new stories. If you are wondering who the familiar names may be, scroll down to our recommended read.

In the face of disinformation, defenders of democracy are often drawn into endless battles against falsehoods; rather than playing by their rules, we must focus on protecting our shared rights and strengthening democracy. We discussed more of this in our newest blogpost!

But that is not the only update from us! We’ve been busy updating our European disinformation landscape resource, with refreshed factsheets now available for Finland and Sweden. What began with 20 countries has steadily grown, and as the information space keeps shifting, we’re working our way through earlier entries to keep them up to date. This wouldn’t be possible without the dedication of our brilliant community, who continue to lend their time and insight. If you’d like to be part of this effort – either by sharing your knowledge or financially supporting the work – we’d love to hear from you.

And if you’re looking for an even more hands-on way to get involved, we’re on the lookout for a Policy and Advocacy Intern to join our Brussels team. It’s a great opportunity to work closely with our Senior Policy Expert and gain experience at the intersection of EU policy and disinformation.

In this edition, we trace everything from AI-generated hoaxes to Russian foreign interference, from the link between conspiracy theories and insomnia to an insightful – yet grim – glance at the state of fact-checkers worldwide, and much more. Dive in to learn more about recent falsehoods and the community’s responses to them.

P.S.: #Disinfo2025 registration is open – don’t miss your chance to join us in Ljubljana this October!

Our webinars

UPCOMING – REGISTER NOW

  • 17 April: Disinformation and real estate exploitation: the case of Varosha | This webinar with Demetris Paschalides from the University of Cyprus examines the intersection of disinformation campaigns and real estate exploitation in occupied Cyprus, specifically focusing on the “ghost city” of Varosha/Famagusta – a case study also analysed in the ATHENA project’s broader research into foreign information manipulation and interference (FIMI).
  • 30 April: Exploring political advertising data – who targets me? | In this session, Sam Jeffers will use tools and dashboards developed by Who Targets Me to explore how political advertising campaign spending, targeting, and messaging are tracked across platforms. We’ll look at what data is available, how to access it, and how to better read political advertising campaigns – particularly during elections – and consider how this data can complement research.
  • 8 May: Influence of foreign narratives on contemporary conflicts in France (in French with subtitles available) | This webinar will dive into how foreign interference shapes public opinion and threatens democratic resilience, drawing on insights from Laurent Cordonier’s recent report on disinformation narratives in France. The session will be held in French, with real-time subtitles available.
  • 15 May: 2025: Climate disinfo reload | AI bots are talking green while pushing denial. Trump calls global warming “good for us.” Musk goes from EV hero to climate contrarian. In this explosive webinar, Global Witness and Ripple Research reveal how 2025’s climate disinformation is smarter, louder, and more dangerous, stretching from Washington to cities around the world. Learn how the disinfo economy is thriving and what can still be done to stop it. One hour. Two experts. Zero time to waste.
  • 29 May: UK riots: how coordinated messaging mobilised rioters — lessons in detecting inauthentic behaviour | In this webinar, Kamila Koronska from the University of Amsterdam presents new research on how coordinated messaging on X and Facebook helped spark the 2024 UK riots, revealing how disinformation and hate speech fuelled unrest both online and offline.

PAST – WATCH THE RECORDINGS!

  • Safeguarding information integrity – the DSA in action | In this webinar, Luboš Kukliš, Lead on Information Integrity for the DSA framework at the Commission, shared the latest developments around the DSA’s implementation – specifically in the area of information integrity.
  • LLM grooming: a new strategy to weaponise AI for FIMI purposes | This session with Sophia Freuden from The American Sunlight Project unpacked the emerging concept of LLM grooming – a novel strategy to manipulate large language models (LLMs) for foreign information manipulation and interference (FIMI) purposes.

Find all our past and upcoming webinars here.

Disinfo news & updates

  • AI deepfakes, again. An AI-generated audio clip of US Vice President JD Vance slamming Elon Musk went viral, receiving more than 1.9 million views and countless shares before being deleted. The deepfake referenced Musk’s South African origins and criticised his “audacity to act like an elected official”, sparking backlash and briefly damaging the public image of the Trump administration.
  • From content moderation to “contract” moderation. Meta’s contractor Telus has cut 2,000 jobs at its content moderation centre in Barcelona. The team was responsible for content moderation in Catalan, Dutch, French, Hebrew, Portuguese, and Spanish. The company announced that this is part of a plan to relocate its services, not to suspend content review efforts. So far, employees have been placed on leave, with no work to carry out but continuing to receive full salaries.
  • IT’s a scam. North Korea’s IT worker scam has expanded widely into Europe after years of focusing on US companies, according to Google’s Threat Intelligence Group – partly because law enforcement operations have disrupted scammers’ tools in the US. The scam involves the Democratic People’s Republic of Korea placing workers in IT roles at multiple companies to earn hefty salaries and eventually extort organisations, often using fake documents and multiple personas as references.
  • “Services point to an eastern trace”. During the first week of April, the IT systems of Polish Prime Minister Donald Tusk’s Civic Platform party were hit by a cyberattack, amid growing concern over foreign interference during elections. The party blames Russian or Russia-linked actors (such as Belarus) operating on behalf of the Kremlin. Russia has denied all allegations.
  • Hacked from afar. The Czech Prime Minister has confirmed that his account on the social media platform X was hacked “from abroad”, with a series of false messages posted, including those claiming a Russian attack on Czech soldiers and a response to US tariffs.
  • From Russia with Love. A Romanian man was arrested upon arrival at London Stansted on suspicion of aiding a Russian sabotage campaign. The man was identified as part of an investigation into a fire at a DHL warehouse in Birmingham, caused by an incendiary device believed to have been delivered to the warehouse by air, similar to those used in other attacks.
  • Security matters. The British government is placing Russia in the top tier of a security programme to shield the UK from malign foreign influence. As part of it, any person or company “carrying out activity as part of any arrangement” with Russian authorities will need to register with the Foreign Influence Registration Scheme from 1 July. Anyone who doesn’t sign up faces up to five years in prison. The covered Russian bodies include government agencies, armed forces, intelligence services, and parliament, as well as political parties controlled by the Kremlin.
  • Protection from freedom of expression. The editor-in-chief of Deutschland-Kurier, an outlet linked to the far-right AfD (Alternative for Germany), has been sentenced to seven months in prison for manipulating an image. The montage depicted Federal Minister of the Interior Nancy Faeser holding a shield reading “I hate freedom of expression”, replacing the original text, which commemorated the victims of National Socialism. The photo montage was not recognisable as such, leading to charges of “defamation against people of political life”. The sentence has been suspended.
  • A vaccine against conspiracies. Anti-vaccine group Children’s Health Defense, founded by Robert F. Kennedy Jr., has claimed that the recent measles-related death of a six-year-old girl in Texas was not actually caused by measles. Instead, they argue it was due to a medical error in treating her pneumonia, using the incident as an opportunity to promote anti-vaccine rhetoric. The claim has been rejected by state and federal officials, as well as the hospital that treated her.

Brussels corner

MORE DISCUSSIONS ON THE DSA IMPLEMENTATION IN THE IMCO COMMITTEE

In the European Parliament’s Committee on Internal Market and Consumer Protection (IMCO), the European Commission Vice-President Henna Virkkunen said that her services were advancing “with resolve” on the proceedings against platforms, including with regard to elections and civic discourse. She also pointed to the support (guidelines and a toolkit) the Commission gave to platforms to help them mitigate systemic risks to electoral processes, as well as to TikTok’s additional mitigation measures for the upcoming Romanian election, as a success of the Digital Services Act (DSA). In response to questions about when the first decisions under the DSA and Digital Markets Act would be taken, she said, twice, that it is important to enforce the rules without delay while respecting due process. She also argued that other measures, such as dialogues, guidelines, and codes, were having an impact.

In response to questions from Members of the European Parliament in the same Committee, European Commissioner for Democracy, Justice, the Rule of Law and Consumer Protection Michael McGrath said:

  • The DSA is an adopted law, and the Commission services are fully committed to its implementation.
  • On the Transparency of Political Advertising Regulation, the Commission is working to ensure that the implementation deadline (10 October) is respected and is preparing a number of supporting measures, such as implementing acts, guidance, and web portals. The implementing act on the labels and format of transparency notices is due in October; the repository work will take somewhat longer.
  • On the Democracy Shield, Foreign Information Manipulation and Interference (FIMI) and mis- and disinformation are key priorities.

DEMOCRACY SHIELD CONSULTATION CONTINUES, AMIDST DEBATE

Commissioner McGrath also spoke at the Special Committee on the Democracy Shield. He started by giving a summary of the main workstreams envisaged by the Commission.

On FIMI and disinformation, he pointed to the intention to increase capacity to counter threats, including through pre- and debunking, and building a European network of fact-checking. 

He also said that equality of arms of candidates online is a concern, as is transparency of recommender systems. He added that the Commission is very invested in listening to citizens in the ongoing consultation (open until 26 May), as he sees it very much as a matter of Europe’s democracy.

The debate, to misquote Shakespeare, generated more heat than light, with members complaining at length about being silenced and asking the Commissioner to get involved in discussions about the European Parliament’s internal rules and about matters of national competence, such as court rulings. He emphasised strongly that civil society was not the target of the proposed Directive on transparency of interest representation carried out on behalf of third countries, currently being discussed in the IMCO Committee, and that he was open to amendments to put this point beyond doubt.

The Democracy Shield may or may not lead to new legislation – the decision has not been made yet. It will probably take the form of a Communication, although this may include plans for subsequent legislation.

Reading & resources

  • Municipal Matters. As cities drive over 70% of global emissions, local climate leadership is crucial, but increasingly undermined by false and misleading information. Municipal Matters explores how disinformation disrupts trust, delays policy, and harms vulnerable communities. With limited capacity to respond, local governments in Canada must now tackle the information crisis alongside the climate one.
  • Misinformation goes native. A study found that misinformation in local languages spreads rapidly because it feels personal and deeply relatable. That is the case, for instance, with hashtags and videos in Hausa – a language widely spoken across Northern Nigeria, Niger, and other Sahelian states – which are being used to spread authoritarian narratives on TikTok, tapping into regional grievances.
  • Conspiracies never sleep. Two studies examined the link between sleep quality and belief in conspiracy theories, as well as the underlying mechanisms. The studies found that poorer sleep quality and insomnia were positively correlated with conspiracy theory beliefs. However, the indirect relationships between both sleep quality and insomnia with conspiracy beliefs, via anger or paranoia, were inconsistent.
  • 131% more likely. A study used crowd-sourced assessments from X’s Community Notes program to examine partisan differences in the sharing of misleading information in the US. The research found that posts by Republicans were 131% more likely to be flagged as misleading than posts by Democrats, despite no over-representation of Republicans among X users and no political bias among raters. This indicates that Republicans may be sanctioned more frequently than Democrats even if platforms transition from professional fact-checking to Community Notes.
  • DAVOS reports. The World Economic Forum’s Annual Meeting addressed the impact of disinformation and conspiracy theories on elections and society. In 2024, half of the global population took part in elections while disinformation ran rampant. In Moldova, for instance, foreign disinformation networks interfered with the electoral process by disseminating anti-EU deepfakes. This unfolds in a context where generative AI has amplified the spread of false and misleading content, but also holds promise for identifying and filtering it. Strikingly, 40% of the global population supports the use of disinformation as a tool to drive change.
  • What a year. Close to 90% of fact-checkers see financial sustainability as the top challenge for their organisations, following the end of grants (like in Latin America and the US) and the end of the Meta fact-checking initiative. The apprehension is not only limited to funds, as nearly 80% of fact-checkers said they faced threats or online abuse in 2024, directed both at individuals and organisations. As for methodology, fact-checkers are now directing their efforts both to internet-based and political misinformation, using quick visuals (which have been found to be the most engaging) and making limited use of AI due to ethical and financial concerns.
  • A new weapon against disinformation. Spread Analyser, developed by the FERMI project, completed its first pilot, aimed at testing its capabilities in the context of right-wing extremism risks in Finland following disinformation and Russian hybrid operations. More specifically, the tool is fed X posts one by one before analysing their spread, distinguishing between bots and human-operated accounts, and assessing their influence. Used as part of a broader analysis, this can help authorities devise measures to minimise impact, prevent crime, or catch and investigate potential perpetrators.
  • Hot Air. This tool, developed by Tortoise, helps track climate disinformation, from scepticism to outright mis- and disinformation.
  • OSCE new Mission. The Organization for Security and Co-operation in Europe (OSCE) Mission in Kosovo has launched a publication comprising six policy briefs on media and information literacy-related issues, including artificial intelligence in education, media ethics, gender representation, and online risks. This collection is aimed at fostering a media-literate society to enhance democratic resilience and counter misinformation.
  • For those shaping the message. The Institute for Public Relations (IPR) has launched Disinformation Awareness Month, a month-long initiative to highlight new research and equip communicators with strategies to combat misinformation through educational resources, events, and a call-to-action.
  • Russia bots go green. In research for The American Sunlight Project, Benjamin Shultz and Nina Jankowicz reveal how a sprawling Russian disinformation network dubbed “EcoBoost” is posing as grassroots environmental activists to infiltrate left-leaning movements online. Over 600 fake X accounts have posted more than 245,000 messages since June 2024, targeting Western democracies with AI-generated personas promoting climate causes.

This week’s recommended read, an investigation by the German investigative outlet Correctiv, comes from our Executive Director Alexandre Alaphilippe:

Aeza, a Russian company known for hosting assets used in the Doppelganger disinformation campaign, has become the target of a major enforcement action – by Russian authorities themselves. 

According to Correctiv, Aeza’s CEO, founder, and several employees were arrested on suspicion of involvement in Black Sprut, a large darknet drug marketplace. Notably, Aeza’s offices are located at the same address as the former Internet Research Agency – the infamous troll factory linked to the 2016 US election interference, once run by Yevgeny Prigozhin.

The article highlights how operations linked to state-backed disinformation efforts often intersect with cybercrime and illicit digital infrastructure.

The latest from EU DisinfoLab

  • We fight disinformation because we fight for democracy. In the face of disinformation, defenders of democracy are often drawn into endless battles against falsehoods. This is precisely the trap laid by anti-democratic movements. In our latest statement, we reflect on how disinformation isn’t just about lies online – it’s a carefully staged performance, designed to twist reality into absurd contortions and drag us into the spotlight for the wrong reasons. But we don’t have to play along.
  • Updating the disinfo map. We continue maintaining our resource that maps the European disinformation landscape. The latest additions to the series are updated factsheets for Finland and Sweden. Since the initial round, which covered 20 countries, we’ve published a number of new factsheets. And as things are moving fast, we are now in the process of updating the previously published ones. This work is being done in collaboration with our amazing community, who generously contribute their time and insight. If you’re interested in supporting our efforts – whether through your expertise or by helping fund the coordination of this critical task – don’t hesitate to get in touch!
  • AI & disinfo: What’s new? Take a look at the latest round of updates to our AI Disinfo Hub – your source for tracking how AI shapes and spreads disinformation, and how to push back.
    • The Jianwei Xun case. This story revolves around Andrea Colamedici, an editor who created a fictional philosopher named Jianwei Xun and attributed to him the authorship of a book on “hypnocracy”, a theory exploring how AI manipulates society. Presented as a genuine intellectual work, the book was later revealed to be a philosophical experiment and artistic performance co-created with AI – a fact the author had never disclosed. The book was widely accepted and cited by academics and media outlets before the truth came to light, sparking a debate on the ethical use of AI in content creation. The revelation raised important questions about the transparency of AI-generated work, highlighting both the risks and the potential of AI in shaping knowledge and perception, even within renowned intellectual circles.
    • Emotional prompting. This study explores how the emotional tone of user prompts, specifically politeness versus impoliteness, affects the likelihood of large language models (LLMs) generating disinformation. Results showed that polite prompts significantly increased disinformation output across all tested models, including the latest versions of GPT.
    • AI ≠ truth. This research exposes how a climate-change denial website, ScienceofClimateChange.org, published a study claiming that global warming isn’t caused by humans. The lead author listed on the paper wasn’t a human, but Grok 3. Climate change deniers quickly amplified the study on social media, calling it “proof” that the climate crisis is a hoax, seeing Grok’s involvement as a sign of credibility. But the truth is, as highlighted in the article, that climate change is real and driven by human activities, and AI, while powerful, can be manipulated through biased prompts and misused to spread misinformation. 
  • Updated: Climate Clarity. We’ve just added new content to our Climate Clarity Hub, the platform that consolidates knowledge and expertise on climate mis/disinformation, with the support of the Heinrich-Böll-Stiftung European Union. Here’s what’s new:
    •  Trump’s next climate move. Trump isn’t just denying climate change; he’s rewriting the science. His team is working to replace key regulations, downplay human-driven warming, and frame global warming as beneficial. Climate data is being wiped from federal websites, and partisan researchers are being brought in to reshape official reports. Under pressure, major companies are already rolling back climate commitments. But there’s hope: civic efforts are underway to preserve critical environmental data and fight back against this coordinated retreat.
    • Greenwash No More: Cities, citizens, and the fossil ad rebellion. Nearly half of EU citizens support banning fossil fuel ads, and cities like The Hague or Saint-Gilles in Brussels are already stepping up. But while local action grows, fossil fuel companies continue to push misleading native ads, disguised as news, to shape public opinion. Media giants and PR firms profit from this deception, despite growing pressure from the public and the UN to end fossil advertising. Meanwhile, advertising authorities are cracking down, but inconsistently. And whistleblowers are exposing how greenwashing extends beyond Big Oil, revealing deep-rooted issues within the ad industry itself.

Events and announcements

  • 23-25 April: The 2025 Cambridge Disinformation Summit will discuss research regarding the efficacy of potential interventions to mitigate the harms of disinformation.
  • 23 April: The Adaptation AGORA project is co-organising this online webinar on Disinformation and Climate Adaptation in collaboration with TUUWI.
  • 24 April: Reality Crisis will host a webinar “Conspiracy Theories & How to Talk to People About Them”.
  • 24 April: Gendered Disinformation and Disinformation about Gender in Europe. This event will take place in person and on Zoom. Register here.
  • 24 April: WhatsApp in the World: Disinformation, Encryption and Extreme Speech. Hybrid event, more information here.
  • 28 April: The event Journalistic Weapons in the Age of Disinformation, hosted by the University of Bergen and Media Cluster Norway, will explore what must be done to ensure that journalism remains a force for truth and accountability. 
  • 28 April: A study presentation and roundtable discussion on disinformation and electoral interference, titled “Testing the fault lines: Afro-European strategies against disinformation”, will be organised by the Heinrich-Böll-Stiftung and the Global Unit for Human Security in Brussels. Register by 24 April.
  • 8-9 May: Data & Society will host an online workshop on the intersection of generative AI technologies and work. This workshop aims to foster a collaborative environment to discuss how we investigate, think about, resist, and shape the emerging uses of generative AI technologies across a broad range of work contexts.
  • 22-25 May: Dataharvest, the European Investigative Journalism Conference, will be held in Mechelen, Belgium. Tickets are sold out, but you can still join the waiting list.
  • 13 June: Defend Democracy will host the Coalition for Democratic Resilience (C4DR) to strengthen societal resilience against sharp power and other challenges. The event will bring together transatlantic stakeholders from EU and NATO countries to focus on building resilience through coordination, collaboration, and capacity-building. 
  • 17 & 24 June: veraAI will host two online events on artificial intelligence and content analysis, including the verification of digital media items with the help of technology. The first session is dedicated to end user facing tools addressing journalists and fact-checkers as a target audience. The second session will be primarily directed at the research community.
  • 16-17 June: The Paris Conference on AI & Digital Ethics (PCAIDE 2025) will take place at Sorbonne University, Paris. This cross-disciplinary event brings together academics, industry leaders, civil society, and political stakeholders to discuss the ethical, societal, and political implications of AI and digital technologies.
  • 17-18 June: The Summit on Climate Mis/Disinformation, taking place in Ottawa, Canada, and accessible remotely, will bring together experts to address the impact of false climate narratives on public perception and policy action.
  • 18-19 June: Media & Learning 2025 conference under the tagline of “Educational media that works” will be held in Leuven, Belgium.
  • 26 June – 2 July: The training Facts for Future – Countering Climate Disinformation in Youth Work will bring together youth workers, trainers & civil society pros in Eisenach, Germany to tackle climate disinformation and populist narratives. Apply by 11 May!
  • 27-28 June: The S.HI.E.L.D. vs Disinformation project will organise a conference “Revisiting Disinformation: Critical Media Literacy Approaches” in Crete, Greece.
  • 30 June – 4 July: The Summer Course on European Platform Regulation 2025, hosted in Amsterdam, will offer a deep dive into EU platform regulation, focusing on the Digital Services Act and the Digital Markets Act. Led by experts from academia, law, and government, the course will provide in-depth insights into relevant legislation.
  • 8-11 July: The AI for Good Global Summit 2025 will be held in Geneva. This leading UN event aims to identify practical applications of AI, accelerate progress towards the UN SDGs and scale solutions for global impact. 
  • 14-18 July: The hybrid AIDA Symposium and Summer School – co-organised by AIDA and Aristotle University of Thessaloniki – will explore the latest in AI and ML with expert-led lectures, special sessions, and hands-on tutorials.
  • 19 September: Generative futures: Climate storytelling in the age of AI disinformation is a workshop that will be held in Oxfordshire (UK) exploring how generative AI is reshaping climate storytelling, both as a force for inspiration and a driver of disinformation. 
  • 15-16 October: #Disinfo2025 is heading to Ljubljana, Slovenia. Mark your calendars – and request your ticket now to make sure you don’t miss out!
  • 25 or 26 October: Researchers and practitioners on trustworthy AI are invited to submit papers to TRUST-AI, the European Workshop on Trustworthy AI organised as part of the 28th European Conference on Artificial Intelligence ECAI 2025, Bologna, Italy.
  • 29-30 October: The 2nd European Congress on Disinformation and Fact-Checking, organised by UC3M MediaLab, will take place under the subtitle “Beyond 2025: Emerging threats and solutions in the global information ecosystem” in Madrid, Spain, with the possibility to join remotely.
  • 20-24 November: The 2025 Global Investigative Journalism Conference will be held in Kuala Lumpur, Malaysia.

Spotted: EU DisinfoLab

  • Recognised by EU Civil Protection. EU DisinfoLab is highlighted by the European Civil Protection and Humanitarian Aid Operations (ECHO) as a key resource on disinformation. Their platform brings together guidance and tools on countering information manipulation in crisis settings.
  • On air: Tackling FIMI. Our Executive Director Alexandre Alaphilippe joined French radio RFI to discuss how Europe is responding to information manipulation and foreign interference. The interview (in French) is part of the programme “Les dessous de l’infox”. Listen to the episode here.
  • In the BELUX spotlight. On 8 April, we were at the EDMO BELUX workshop “Mitigating disinformation in Belgium and Luxembourg”. During the session “Foreign information manipulation and interference: trends, threats and responses”, our Project Officer Inès Gentil shared key findings on FIMI in Belgium and Luxembourg, drawn from evidence-based studies encoded by our Research Manager Maria Giovanna Sessa.
  • Let’s meet! In the next two weeks, on top of our usual offices (Brussels, Madrid, Milan, Athens), our team will also travel to Berlin. If you’ll be around and would like to meet, just reply to this email – we’ll do our best to make it happen.

Jobs

Did you find a job thanks to the listing in this newsletter? We’d love to know – please drop us a message!

Have something to share – an event, job opening, publication? Send your suggestions via the “get in touch” form below, and we’ll consider them for the next edition of Disinfo Update.