Dear Disinfo Update reader, 

Welcome to the latest Disinfo Update – and what a packed edition it is! With new crises, elections, and platform changes, there’s plenty to catch up on.

From Brazil to South Korea, the disinformation landscape takes many shapes, and we do our best to trace them all – thanks in part to your input.

#Disinfo2025 will bring together leading voices from across the disinformation and digital policy space on 15–16 October in Ljubljana, Slovenia. We already have an impressive line-up of confirmed speakers – and this is just the beginning. More names will be added in the coming weeks, so stay tuned for updates. New this year, we’re introducing Day Zero workshops ahead of the main conference – designed to offer more quality time for deeper dives and unhurried exchanges. Details coming soon. Don’t miss out – request your ticket now to join us.

For now, get ready and dive in!

Our webinars

UPCOMING – REGISTER NOW!

PAST – WATCH THE RECORDINGS!

Disinfo news & updates 

Update (on) your apps

  • “Grok, is this true?” As X has scaled back its investment in fact-checkers, users have turned to the platform’s integrated chatbot for confirmation. Yet the model has often given responses riddled with disinformation. In the context of the recent Indian strikes on Pakistan, Grok wrongly identified old video footage from Sudan as a missile strike on a Pakistani airbase and misidentified unrelated footage of a burning building in Nepal as “likely” showing Pakistan’s military response to Indian strikes.
  • “Grok, summarise this text”. Telegram has partnered with Elon Musk’s AI company, xAI, to integrate its chatbot, Grok, into Telegram and apps available on Telegram for one year. The chatbot’s features will be available to all users, offering writing suggestions, summaries of chats or documents, and possibly assistance with moderation.
  • Tips and tropes. More than half of the top trending videos on TikTok offering mental health advice contain misinformation, including misused therapeutic language, “quick fix” solutions and false claims. The dubious advice includes the promotion of supplements with a limited evidence base for alleviating anxiety, methods to heal trauma within an hour, and guidance presenting normal emotional experiences as a sign of borderline personality disorder or abuse.
  • Monetise your data. Brazil has announced the rollout of a data ownership pilot that will allow its citizens to manage, own, and profit from their digital footprint. The pilot, led by state-owned firms and the private sector, involves a small group of Brazilians who will use data wallets for payroll loans. When users apply for a new loan, the data in the contract will be collected in their data wallets, which companies will be able to bid on. The project has raised concerns, however, that functionally illiterate individuals (3 in 10 Brazilians fall into this category) may be led to believe their data can simply be bought for a fee – data that could potentially be used against them.

Disinfo and geopolitics

  • Disinfo and denial. African countries face a growing disinformation crisis fuelled by global power rivalries, fragile media systems, and declining public trust, all of which make misinformation harder to eradicate. Kenyan President William Ruto falsely claimed that Germany would provide 250,000 jobs for Kenyan youth – a statement amplified by major media outlets before being denied by the German Interior Ministry. Ruto’s supporters dismissed the denial as motivated by the “sensitive nature of immigration in German politics” rather than as a refutation of Ruto’s inflated figures.
  • From China to Czechia. The Czech Republic has blamed China for a cyberattack, attributed to the APT31 group, on the foreign ministry’s unclassified network dating back to 2022, which potentially exposed emails exchanged between embassies and EU institutions. After detecting the intrusion, the Foreign Minister summoned China’s ambassador to warn that hostile cyber actions threaten bilateral relations; a new secure communication system has since been deployed.
  • The currency of disinfo. Since joining the EU in 2007, Bulgaria has faced political instability and corruption, fueling public euroscepticism. Foreign disinformation campaigns have heightened fears about economic changes, with false social media claims suggesting the EU might seize unused savings or introduce a digital euro to control citizens.
  • Brazilian moves against disinfo. The Brazilian Attorney General’s Office filed an appeal before the Supreme Court requesting immediate action against digital platforms that contribute to the spread of disinformation. Brazilian authorities have raised concerns about over 300 fraudulent ads on Meta’s platforms, which used doctored images of officials such as Finance Minister Fernando Haddad to mislead users with fake compensation offers related to social security fraud. Other concerns include minors dying in dangerous TikTok challenges and the fraudulent use of the National Health Surveillance Agency’s logo to sell unapproved medications.
  • Another case of election disinfo. Ahead of South Korea’s recently concluded election, disinformation about ballots and candidates spread widely. The main narratives targeted opposition leader and election frontrunner Lee Jae-myung, using deepfakes and (debunked) photos – showing the candidate next to a Mao statue and wearing a face mask bearing the Chinese flag – to claim he supports Beijing.
  • Source? I made it up. The Trump administration released a report last week that it billed as a “clear, evidence-based foundation” for action on a range of children’s health issues – yet it cited studies that did not exist. The errors were consistent with hallmarks of generative artificial intelligence, which has caused similar problems in legal filings and elsewhere. After news outlets pointed out the fabrications, the White House uploaded a corrected copy of the report.
  • Disinfo sparks in crisis. During the recent military tensions between India and Pakistan, Indian newsrooms became engulfed in a wave of misinformation. False reports, including a fake military coup in Pakistan and fabricated Indian strikes on Karachi, spread rapidly across major Indian news networks and social media, often without verification. Major Indian channels aired unfounded reports claiming Indian forces had entered Pakistan, cities were destroyed, and leaders were in hiding. Some used unrelated footage from other conflicts or even video games to support their claims.

Russian disinfo

  • Washing away the enemy’s files. Dutch intelligence has uncovered a new Russian cyber-espionage group, Laundry Bear. Its tactics involve purchasing login credentials from infostealer shops, breaching individual accounts at its desired targets, using open-source tools to expand access, and then scraping and stealing files and email inboxes. Laundry Bear attacks primarily support Russia’s war effort in Ukraine, targeting military and diplomatic figures in NATO member states, defence contractors, and high-tech businesses involved in military production linked to Ukraine.
  • A US(SR) official. The acting under-secretary for public diplomacy and public affairs who dismantled the US government’s Russian disinformation unit has significant personal ties to, and views aligned with, the Kremlin. In the years before joining the government, Mr Beattie wrote social media posts praising Vladimir Putin, suggesting that Western institutions should be “infiltrated” by Putin, and arguing that the US should surrender Taiwan to Beijing, while attacking the “globalist American empire”. Moreover, the senior official is married to a woman whose uncle has held several positions in Russian politics and once received a personal “thank you” message from Vladimir Putin for his help in the election campaign that brought Putin to power.
  • Headline impersonator. Pro-Russian actors have repeatedly used Euronews’ likeness and visual identity in recent weeks to spread propaganda against various European countries. Several of those videos alleged criminality and health issues involving Moldova, such as claims that Moldovans rank first among STD carriers in Europe or that criminal gangs in Italy are increasingly producing false documents for Moldovan immigrants.
  • Disinfo is child’s play. Russia has launched a new children’s TV show on geopolitics aimed at “instilling patriotism from an early age”. The trailer features a video call between baby animated versions of Donald Trump, Elon Musk, Vladimir Putin, Emmanuel Macron, Kim Jong-un, and Recep Tayyip Erdogan, bickering and mocking each other over recent news and their personal lives. The show was created by Vladimir Solovyov, who was sanctioned by the European Union and Britain for spreading disinformation that undermines Ukraine’s sovereignty.

Brussels corner

Media Board provides input on Democracy Shield

The European Board for Media Services, the successor to the European Regulators Group for Audiovisual Services, has issued its input on the EU Democracy Shield. Its key advice is to focus on the effective implementation of the EU’s existing legislative framework, rather than introducing new legislation. It also emphasises the importance of leveraging national regulators to ensure successful enforcement.

The document describes disinformation as “a serious and growing threat to democratic societies, undermining public trust and distorting democratic discourse”. It calls for greater investments in “initiatives that empower users to distinguish between high-quality, trustworthy information and manipulative or misleading content”. Notably, the Board urges continued monitoring of the Code of Conduct on Disinformation and states that, if necessary, “stronger regulatory measures must remain an option to ensure the resilience of European democracies”. However, the document does not clarify the conditions under which such measures would be considered, or whether this option is currently being weighed.

The Board also calls for increased funding for fact-checkers and trustworthy media, alongside protection for content developed under editorial standards. Additionally, it advocates for improved transparency in recommender systems, although specific mechanisms or benchmarks for achieving this transparency are not detailed. 

The document also looks forward to the planned review of the Audiovisual Media Services Directive (AVMSD) to address certain issues, noting that “reconsideration of the jurisdiction criteria related to satellite distribution under Art. 2 par. 4 AVMSD to complement the regime under Art. 17 EMFA may be relevant to address procedural shortcomings of the existing provisions”. It also points to the introduction of training modules and certification systems to encourage influencers to be more transparent and to meet professional and ethical standards, calling on the Democracy Shield to focus on this increasingly important group.

“The Media Board believes that the European Democracy Shield should support a systemic, multidimensional, EU-wide approach to media literacy, grounded in the principles and best practices established by ERGA and the Media Board”. Somewhat nebulously, the Board urges the Democracy Shield to have very large online platforms “promote measures to embed ‘media literacy by design’ principles into their products and services”.

Presidency conclusions on strengthening EU democratic resilience

The outgoing Council Presidency adopted a Conclusions document on democratic resilience, with the support of 25 of the 27 Member States. It starts by recognising some basic principles, including the legal protection for fundamental rights in the EU legal framework, the importance of democratic resilience, the role of civil society organisations, and the framework of initiatives in this policy area. It also highlights the importance of independent media.

The document acknowledges the threat to democracy of disinformation in general and foreign manipulation in particular. It states that “failure to address these risks may pose serious threats to the integrity of democratic processes and citizens’ engagement by undermining public trust in democratic institutions and procedures.”

The text was drafted by Poland, which has extensive positive experience using “resilience councils” to strengthen the country’s resilience against various challenges in a cooperative, multi-stakeholder environment. It pointedly calls on the Commission and the EEAS “to explore ways to bring together all relevant stakeholders including Member States, EU institutions, civil society, research, academia, private entities and other relevant experts from different areas in a systematic manner in order to share best practices and to provide strategic guidance on policies pertaining to democratic resilience, making best use of existing efforts and with due respect for Member States’ competences”.

Reading & resources

  • Keeping an eye on cognitive warfare. OpenMinds offers access to its Cognitive Defence Monitor, a collection of reports on cognitive warfare topics such as FIMI, social media governance, and AI.
  • DSA on the radar. CheckFirst is releasing RADAR (Regulatory Assessment for Digital Service Act Risks), an open-source framework that introduces standardised tags for Digital Services Act (DSA) infringements. The communities involved in DSA enforcement are warmly invited to test, use, and help build the framework, so it can be improved and better support navigating the numerous DSA violations. Visit the RADAR website or explore the GitHub repository to learn more about the project.
  • Community’s eye on AI. WITNESS’ TRIED report presents a comprehensive benchmark for evaluating AI detection tools through a sociotechnical lens, emphasising their real-world impact and ability to support key information ecosystem actors (e.g., journalists, fact-checkers, civil society). Rather than focusing solely on technical accuracy, the benchmark integrates community feedback, insights from deceptive AI casework (via WITNESS’ Deepfakes Rapid Response Force), and findings from global consultations.
  • A cure for disinfo? The BBC is testing proactive responses to mis- and disinformation in places where such interventions are rarely studied, such as Libya, Algeria, and Tunisia. Among the tactics, they used inoculation – the basis of prebunking – which suggests that exposing people to a weakened version of a manipulative technique and showing them how to refute it can build resilience against future exposure. While the attempt in Tunisia did not work, in Algeria it improved recognition of emotionally manipulative language and discernment of scapegoating content.
  • Slowing down solutions. In this report, the World Economic Forum explores the growing trend of businesses embracing climate action, recognising its financial and environmental benefits. It warns that the spread of misleading information – receiving an estimated 1.36 million daily views on platforms like Facebook – delays climate action and significantly hinders progress towards science-aligned climate policies.
  • The disinfo grid. The new Climate Action Against Disinformation (CAAD) report discusses how the Iberian power outage was opportunistically seized upon by climate disinformation actors, set against the backdrop of ongoing Spanish, Portuguese, and overall European efforts to reduce greenhouse gas emissions and increase the share of energy generated by renewables, and amplified by media and social media platforms to fill the information void surrounding the causes of the outage. 
  • Extremism and disinfo. The International Centre for Counter-Terrorism has shared a policy brief exploring the intersection between disinformation and violent extremism, discussing the complex dynamics and risks of disinformation spread by extremist groups, with a focus on Boko Haram in Nigeria.
  • A wide network. In this investigation, Maldita explores how the ‘Spain News Pravda’, part of the 100-domain-wide Russian propaganda and disinformation network Pravda, and related pages in Catalan, Basque, and Galician have generated over 150,000 articles in just six months. They amplified coverage of events like the April 28 blackout and the May 21 shooting of Ukrainian figure Andriy Portnov in Madrid, and boosted Spanish-language Telegram channels, some with under 3,000 followers. In the context of the blackout, they shared nearly 1,200 publications in two days, among which was the first hoax detected during the crisis: an alleged statement by European Commission President Ursula von der Leyen blaming a Russian cyberattack.
  • Meta vs covert operations. Meta revealed it dismantled three covert influence campaigns originating from Iran, China, and Romania, targeting users in Romania, Azerbaijan, Turkey, Myanmar, Taiwan, and Japan. The company says it detected and removed these campaigns before they were able to build authentic audiences on its apps. The full report can be read here.

This week’s recommended read

Raquel Miguel, Senior Researcher at EU DisinfoLab, recommends reading the article “How Deepfake Fraud Is Rewiring Our Minds” by The Spectator World, which explores how the rise of deepfake and voice-cloning technologies has the potential to drive profound changes in human psychology and societies – changes that go far beyond deception and scams.

Scammers can now convincingly mimic family members, coworkers, or public officials, making it incredibly hard to tell what’s real and what’s fake. As a result, people are starting to doubt even urgent calls from loved ones, afraid they might be falling for a scam.

But the problem goes beyond fraud, tapping into how our minds function. As the article explains, humans are biologically wired to trust by default; this is an evolutionary trait that enables cooperation, connection, and social life. However, as fake content becomes indistinguishable from reality, that natural instinct becomes a vulnerability. People are being driven into a state of what the author calls “pre-emptive mistrust”, constantly second-guessing even close relationships. This ongoing doubt has a psychological toll: elevated stress levels, impaired memory, and reduced emotional well-being.

This growing distrust doesn’t just affect individuals – it’s reshaping society. As people grow more cautious online, they may withdraw from digital platforms and rely more on in-person contact or analog alternatives to feel safe. As the author reflects, the danger isn’t just financial, but deeply human: this erosion of trust could reshape how people interact with one another, with institutions, and with the very concept of truth.

The latest from EU DisinfoLab 

We’ve just added new content to our AI Disinfo Hub – your source for tracking how AI shapes and spreads disinformation, and how to push back. Here’s a snapshot of what’s new: 

  • Hey chatbot, is this true? AI ‘factchecks’ sow misinformation, France 24

AI chatbots, increasingly relied upon for instant fact-checking, have been shown to frequently spread misinformation rather than correct it. During India’s recent conflict with Pakistan, these tools wrongly identified unrelated video footage as military strikes, fueling confusion. Beyond this, investigations revealed that chatbots sometimes fabricate details, as when an AI-generated image of a woman was falsely confirmed as authentic by a chatbot in Uruguay. The decline in human fact-checkers at major tech platforms has exacerbated the problem, raising concerns about the reliability, political bias, and manipulation of AI-powered fact-checking tools.

  • A weaponized AI chatbot is flooding Canadian city councils with climate misinformation, DeSmog

A group called KICLEI, mimicking the international environmental network ICLEI, has been sending thousands of AI-generated emails to over 500 Canadian municipalities, urging councils to abandon net-zero climate targets. Using a custom AI chatbot dubbed the “Canadian Civic Advisor,” KICLEI crafts tailored messages that downplay climate change, focus on “real pollution, not CO2,” and cast doubt on the scientific consensus. Several municipalities, including Thorold, Ontario and Lethbridge, Alberta, have already voted to weaken or withdraw from key climate initiatives after receiving KICLEI materials. Scientists have labelled many of KICLEI’s claims as misinformation, while the group denies spreading falsehoods.

  • Deepfakes just got even harder to detect: Now they have heartbeats, BBC

Deepfakes have advanced in a critical area that could make them significantly harder to detect. A new study published in Frontiers in Imaging reveals that synthetic videos are now capable of replicating realistic pulse signals in human bodies, biological cues whose absence was previously used to identify fakes. This development may render many existing detection tools less effective. Experts warn that this breakthrough could further erode public trust in visual media, and emphasize the need for cryptographic authentication methods, not just more advanced detectors, as a long-term defense strategy.

Events & announcements  

  • 13 June: Defend Democracy will host the Coalition for Democratic Resilience (C4DR) to strengthen societal resilience against sharp power and other challenges. The event will bring together transatlantic stakeholders from EU and NATO countries to focus on building resilience through coordination, collaboration, and capacity-building. 
  • 17 & 24 June: VeraAI will host two online events exploring how artificial intelligence is being used to analyse digital content and support media verification. The first one is a practical session for journalists, fact-checkers, and other media professionals, featuring hands-on demonstrations of VeraAI tools. The second one is a research-focused session for academics, scientists, and scholars, offering deep dives into the latest findings and methodologies. Register for one or both sessions here.
  • 16-17 June: The Paris Conference on AI & Digital Ethics (PCAIDE 2025) will take place at Sorbonne University, Paris. This cross-disciplinary event brings together academics, industry leaders, civil society, and political stakeholders to discuss the ethical, societal, and political implications of AI and digital technologies.
  • 17-18 June: The Summit on Climate Mis/Disinformation, taking place in Ottawa, Canada, and accessible remotely, will bring together experts to address the impact of false climate narratives on public perception and policy action.
  • 18-19 June: Media & Learning 2025 conference under the tagline of “Educational media that works” will be held in Leuven, Belgium.
  • 19 June: Deadline to apply to the “Climate Action Against Disinformation Fellowship on Countering Disinformation 2025”, which supports grassroots organisers from the Global South who have experienced the impact of online climate disinfo in their territories and want to explore it more deeply.
  • 30 June: The Forum on Information and Democracy has published a call for contributions addressing environmental disinformation, attacks on information integrity, and their impact. The deadline to submit contributions is 30 June.
  • 27-28 June: The S.HI.E.L.D. vs Disinformation project will organise a conference “Revisiting Disinformation: Critical Media Literacy Approaches” on Crete, Greece.
  • 30 June-4 July: The Summer Course on European Platform Regulation 2025, hosted in Amsterdam, will offer a deep dive into EU platform regulation, focusing on the Digital Services Act and the Digital Markets Act. Led by experts from academia, law, and government, the course will provide in-depth insights into relevant legislation.
  • 8-11 July: The AI for Good Global Summit 2025 will be held in Geneva. This leading UN event aims to identify practical applications of AI, accelerate progress towards the UN SDGs and scale solutions for global impact. 
  • 6-13 July: WEASA is hosting a summer school titled “Digital Resilience in the Age of Disinformation” for mid-career professionals.
  • 15-16 July: The European Media and Information Fund (EMIF) will hold its Summer Conference in Lisbon, Portugal. Save the date!
  • 4-6 September: This year’s edition of the SISP Conference #SISP2025 will take place at the University of Naples Federico II and will host conversations on digital sovereignty, EU cybersecurity policy, and the challenges posed by emerging technologies.
  • 19 September: “Generative futures: Climate storytelling in the age of AI disinformation” is a workshop that will be held in Oxfordshire (UK), exploring how generative AI is reshaping climate storytelling, both as a force for inspiration and a driver of disinformation.
  • 24-25 September: This year’s JRC DISINFO hybrid workshop is titled “Defending European Democracy”. Save the date and stay tuned for more info!
  • 15-16 October: Our annual conference #Disinfo2025 will take place in Ljubljana, Slovenia. Make sure you request your ticket now as the early bird sale finishes on 27 May!

Spotted: EU DisinfoLab

  • On 11 June, our Executive Director Alexandre Alaphilippe will be moderating the discussion “Understanding the Information Environment to Protect Democracy” held by Carnegie Endowment Europe in Brussels. 
  • Between 25 and 27 June, our Research Manager, Maria Giovanna Sessa, will attend the Countering Foreign Interference (CFI) Dialogues at the European University Institute in Florence. 
  • Let’s meet! If you’re attending too – or just nearby – drop us a reply to this email. We’d love to try and meet up!

Jobs 

  • The German public broadcaster NDR (Norddeutscher Rundfunk) is searching for a freelance journalist with strong OSINT and data skills for a verification and fact-checking team based in Hamburg. The ideal candidate is specialised in visual investigations (e.g. experience in evaluating satellite images) and data journalism. Fluent German is required. If you are interested, please contact Sven Lohmann (s.lohmann@ndr.de).
  • RAND Corporation has various open positions for AI researchers or technicians.
  • The Imperial Policy Forum is recruiting Research Fellows to work on a new initiative connecting research in fundamental and applied AI to policy.

Did you find a job thanks to the listing in this newsletter? We’d love to know – please drop us a message!

Have something to share – an event, job opening, publication? Send your suggestions via the “get in touch” form below, and we’ll consider them for the next edition of Disinfo Update.