Dear Disinfo update reader,
As autumn settles in, the disinformation landscape shows no signs of slowing down. This edition covers all the latest disinformation news spanning continents and platforms, from Russia’s evolving propaganda campaigns in Spain and Moldova, to the US retreating from global efforts against foreign disinformation, to France’s bold new “digital firepower” strategy.
We will also delve into how platforms are handling responsibility – from the child safety debate involving TikTok, Meta, and Snapchat, to the growing wave of climate disinformation amplified by politicians and media figures. We will also spotlight the latest on platform accountability, policy battles in Brussels, and fresh resources to help you make sense of it all.
Since our last update, we’ve hosted two thought-provoking webinars – on Synthetic Propaganda and on EDMO’s report on major platforms’ commitments against disinformation – both of which are available to watch on our website. Speaking of webinars, don’t forget to register for our next one, this week on 18 September, titled “Operation Overload”, where we will dig into the latest on this rapidly developing AI-powered Russian propaganda campaign.
What’s more, #Disinfo2025 is just around the corner! Check out the programme – you will not want to miss this highly anticipated and invigorating event. Spots are filling up quickly as the date approaches, so be sure to register now and secure your seat. We are counting down the days until we see you all in Ljubljana!
Get reading and enjoy!
Our Webinars
UPCOMING – REGISTER NOW!
- 18 September: Operation Overload: smarter, bolder, powered by AI | The Russian propaganda operation targeting media organisations and fact-checkers is stronger than ever. Operation Overload, first documented in June 2024, is now conquering new platforms and harnessing AI tools to target fact-checkers, media, and international audiences with Kremlin propaganda. Aleksandra Atanasova and Guillaume Kuster will shed light on how the campaign has evolved, what makes it increasingly sophisticated, and how the community can respond.
- 2 October: Navigating the DSA Delegated Act on Data Access | With the imminent entry into force of the delegated act on data access under the DSA, speaker Joao Vinaigre, a researcher from the JRC of the European Commission, will give an overview of what is mandated and under what conditions, with a chance for attendees to ask questions and seek clarifications.
- 23 October: Effective AI Detection tools for frontline actors: introducing the TRIED benchmark | With the rapid development of generative AI, AI detection tools have become a key resource for information actors to verify the authenticity of content and combat disinformation. In practice, however, these tools often fail to effectively support frontline actors operating in the most urgent and complex global contexts. This webinar will present TRIED (the Truly Innovative and Effective AI Detection Benchmark), developed by WITNESS, which offers a new sociotechnical framework to evaluate detection solutions based on their real-world effectiveness in diverse environments.
PAST – WATCH THE RECORDINGS!
- Synthetic Propaganda – Generative AI and the Future of Political Communication | From conflict parties sharing AI war spam, to presidents posting themselves as pope, wrestler or superstar, to right-wing extremists synthetically remixing Third Reich aesthetics: generative AI is not only used for deception and disinformation, but increasingly also as a tool for political communication and propaganda. Marcus Bösch dissects current case studies and discusses ethics and effects.
- The EU Code of Practice on disinformation: evaluating VLOPSE compliance and effectiveness | How well are major platforms putting the EU Code of Practice on Disinformation into action? A new EDMO report reviews Meta, Google, Microsoft, and TikTok, looking at their transparency reporting and the impact of measures taken in 2024. In this webinar, co-author Trisha Meyer shares the findings and reflects on what they mean for the effectiveness of the Code.
Disinfo news & updates
Russian disinformation watch
- Russian Disinfo in Spain. According to a report by German fact-checker CORRECTIV, Russian-backed media outlets have once again been behind the spread of misleading content – this time, videos depicting burning or exploding solar-powered systems, such as street lights, during the severe heat waves in Spain this August. A closer analysis revealed that most of these videos are old (dating as far back as 2021), were taken elsewhere, such as in China (or have no verified location at all), or are simply fabricated.
- Information War in Moldova. Russia continues to intrude on Moldova’s democratic processes, aiming to erode trust in EU leadership, most notably through the “Russia is not my enemy” campaign. Watchdog Moldova has identified almost 1,000 social media accounts dedicated to spreading Russian propaganda on platforms such as YouTube, TikTok, Facebook, Instagram, and Threads. Coming ahead of elections, this campaign strategically amplifies the support for Russia that some Moldovans already hold, creating a precarious situation for the EU and Moldova’s European integration process.
US yields ground in fight against FIMI
- Trump administration makes way for Russian electoral interference. Since his inauguration in January, President Donald Trump has abandoned efforts to counter foreign disinformation. This retreat has left an open pathway for Russian propagandists to disseminate destabilising disinformation through social media, as highlighted by the upcoming Moldovan parliamentary elections.
- Has America lost the global info battle? Recent moves by the American government have quietly shifted its foreign policy towards protecting governments that manipulate and censor information. The United States has long funded programs like Voice of America and Radio Free Asia to help people in repressive regimes access uncensored news. The Trump administration, however, is gutting these efforts – appointing Kari Lake to lead the US Agency for Global Media, slashing budgets, and cutting staff. Critics warn this move cedes ground to Chinese and Russian propaganda, undermines US security, and weakens the global push for democratic ideals.
- The US withdraws efforts countering FIMI from Russia, China, and Iran. The United States is withdrawing from international agreements to combat foreign disinformation. These Biden-era agreements, led by the Global Engagement Center (GEC), aimed to unite allies against hostile state propaganda. Critics call this move by the Trump administration an “act of unilateral disarmament” amid rising AI-driven threats from Russia and China, while Republicans frame it as defending free speech against government “censorship”.
- US House Judiciary hearing & Farage. The U.S. House Judiciary Committee held a hearing targeting the UK’s Online Safety Act and the EU’s Digital Services Act. At the centre of the session were disagreements over the definition of “censorship” and who has the authority to decide its boundaries. The speakers included David Kaye (law professor, former UN Special Rapporteur), Morgan Reed (App Association president), and Lorcán Price (barrister, Alliance Defending Freedom International). Nigel Farage (leader of the populist Reform UK party) heavily criticised the UK’s Online Safety Act. Just hours later, Farage welcomed Nadine Dorries, the minister who had introduced the bill, as the latest defector to Reform UK.
Interdisciplinary initiatives
- Cyberattack disinformation nexus. Ongoing research at the CyberPeace Institute has identified an interconnection between cyberattacks and disinformation, where digital manipulation tactics and disinformation campaigns merge to undermine, mislead, and compel. The Institute has identified ten different categories of these nexus processes, which fall under two types: those beginning with cyberattacks and those beginning with information manipulation.
- Disinformation at play. This unique collaboration between Cologne Theatre and nonprofit media organisation CORRECTIV brings investigative research to the stage. On 10 September, the play titled “Secret Plan Against Germany” took to the stage and shed light on the Potsdam meeting of right-wing extremist actors. Watch the recording here.
National responses & strategies
- France’s fight against disinformation. France is taking steps forward in its fight against disinformation with the launch of “French Response,” an official X account aimed at actively countering foreign disinformation. This initiative marks a move from monitoring to direct engagement, pushing back on hostile narratives – including recent claims by US Secretary of State Marco Rubio that France’s plans to recognise a Palestinian state caused a breakdown in hostage negotiations. Supporters see it as a bold adaptation to online information warfare, while critics warn it risks credibility if ethical boundaries are not maintained.
- Swiss push against privacy tech. Switzerland is weighing a proposal that would force digital platforms to collect user IDs, store subscriber data, and potentially disable encryption. This move has prompted outcry from privacy and digital-freedoms advocates and companies like Proton. Critics caution that the measure, introduced without parliamentary approval, could undermine online anonymity and put vulnerable groups at risk, even as officials justify it as a tool against cybercrime and terrorism.
The climate disinfo files
- Spread of science denial. This article, written by scientists Michael Mann and Peter Hotez, argues that “weaponised disinformation” poses a serious threat to human civilisation, undermining our ability to address critical issues like climate change and public health crises. The writers point to the “five Ps” – plutocrats, petrostates, paid promoters, propagandists, and parts of the press – as drivers of antiscience campaigns. Comparing Australia’s resilience with that of the US, they stress the need to confront and hold accountable the powerful forces funding this disinformation machine.
- US pushes fossil fuels, not facts. This article reports on US Energy Secretary Chris Wright’s lobbying efforts in Europe to promote fossil fuels at a time when European nations are largely moving towards cleaner energy. European climate experts and policymakers contend that Wright’s arguments are based on climate disinformation, including a misleading report published by his own department that downplays global warming’s impacts. Critics say his push undermines the EU’s Green Deal and risks locking Europe into fossil dependence, in direct opposition to the Paris Agreement goals. The overall argument is that this agenda prioritises short-term corporate profits while discouraging collective climate action.
- Trump vs. turbines. Heatmap News outlines the Trump administration’s climate and energy record, highlighting “Trump’s War Against Wind Energy”, a timeline of efforts to stall offshore wind through permit revocations and stop-work orders. The piece also covers the quiet dissolution of a controversial climate-contrarian group and warns that Trump’s broader policy rollbacks could sharply reduce US emissions cuts compared with the previous administration.
- Rogan vs. reality. This article by The Guardian reports that podcaster Joe Rogan misrepresented a scientific climate study to claim the Earth is cooling, an assertion that is directly refuted by the paleoclimatologists who authored the research. These scientists emphasise that while the Earth’s climate has naturally fluctuated over millions of years, today’s rapid, human-driven warming is unprecedented. The article warns that misinformation from high-profile podcasts spreads “old-school denier nonsense,” undermining public understanding of the climate crisis.
AI disinfo updates
- BBC reveals web of spammers profiting from AI Holocaust images. A network of spammers is using AI to create fake images of Holocaust victims on Facebook to earn money through content monetisation. Survivors and remembrance groups say these fabricated photos and stories are disrespectful, distort history, and cause distress. While some accounts have been removed for spam and impersonation, experts warn that the spread of such “AI slop” risks undermining Holocaust education and remembrance.
- MIGS launches new report “Wired for War: How Authoritarian States are Weaponising AI against the West.” The report Wired for War examines how authoritarian states are weaponising AI to undermine democratic societies. By deploying deepfakes, bots, and algorithmic amplification, they accelerate foreign information manipulation and interference (FIMI), expand the reach of cognitive warfare, and challenge the West’s ability to defend its information space.
- Are bad incentives to blame for AI hallucinations? Recent research highlights persistent challenges with AI chatbots. OpenAI’s study on GPT-5 explains why large language models often produce hallucinations (plausible but false statements), tracing the issue to their training, which predicts the next word without true/false labels, and to evaluation systems that reward guessing over admitting uncertainty. Separately, articles by Wired and IJNet reveal that chatbots can be manipulated or “jailbroken” using psychological tactics or prompt reframing, causing them to generate objectionable or false content despite built-in safety rules. Together, these findings underscore both the inherent limitations of AI models and the vulnerabilities that allow users to bypass safeguards, emphasising the need for deeper alignment and robust safety mechanisms.
- After Kirk’s assassination, AI ‘Fact Checks’ spread false claims. After Charlie Kirk’s September 10, 2025, assassination, AI chatbots like Perplexity and Elon Musk’s Grok spread false information, denying the shooting or misidentifying suspects. For instance, Grok wrongly claimed a Utah Democrat named Michael Mallinson was the shooter, even though he was in Toronto. The case shows how AI “fact-checks” can amplify confusion and spread conspiracy claims, and why they cannot replace traditional verification from official sources.
Want to stay on top of the latest in AI and disinformation? Our AI Disinfo Hub has just been updated. Take a look!
Platform accountability & privacy
- Testing the Code. This report from the European Fact-Checking Standards Network (EFCSN) evaluates major online platforms’ adherence to the Code of Conduct on Disinformation. The EFCSN expresses concern that many platform signatories are abandoning their commitments to fact-checking despite the Code’s continued relevance as a crucial risk mitigation tool against disinformation. Their analysis outlines how platforms like Google, YouTube, Microsoft (Bing Search and LinkedIn), TikTok, and Meta (Facebook and Instagram) have either significantly reduced their fact-checking efforts, withdrawn from relevant commitments, or have plans to do so. The report highlights instances of decreased cooperation with fact-checkers and a lack of transparency regarding their anti-disinformation measures.
Social media and public safety
- Viral violence. Within minutes of the fatal shooting of Charlie Kirk at a Utah rally, graphic videos of the attack spread across X, Instagram, YouTube, and Telegram, amassing millions of views. Despite long-standing policies against violent content, users easily evaded detection by altering footage. Experts warn that the viral spread of such politically charged violence marks a dangerous moment for US civic life and exposes weakened platform guardrails.
- TikTok and child safety in focus. A French lawmaker is pushing for a criminal investigation into TikTok, alleging the platform “deliberately endangers” users after a six-month inquiry linked the app’s algorithm to harmful content. The committee’s report described TikTok as a “slow poison” and proposed banning social media for children under 15, a “digital curfew” for teens, and tougher EU regulation. TikTok defends its safety measures, but the case reflects rising global concern over social media’s impact on youth.
- Snapchat under the influence. A Danish study finds Snapchat is failing to block drug dealers, exposing children to illegal substances. Using test profiles of 13-year-olds, researchers found that Snapchat’s algorithm promoted drug-dealing accounts despite obvious usernames and repeated reports. The findings suggest a lack of “will,” not technology, to curb the problem, raising major child safety concerns.
- Meta mutes child safety findings. Four current and former employees claim that Meta’s legal team intervened to shape, edit, and even veto studies highlighting the risks that VR use poses to children and teens. This alleged suppression sought to establish plausible deniability of the negative effects of the company’s products after previous leaks led to congressional hearings.
- Community alerts fact checks. Meta is enhancing its Community Notes fact-checking system, a crowdsourced program designed to combat misinformation on its platforms, including Facebook, Instagram, and Threads. The key updates include notifications when posts that users have interacted with receive a note, and the ability for anyone to request or rate a note. While the system relies on consensus to flag misleading content, critics express concern about its effectiveness, given the difficulty of achieving timely consensus and its potential limitations in private areas of the platforms.
Brussels Corner
The General Court annulled the Commission’s decisions setting the supervisory fee applicable to Facebook, Instagram and TikTok
The EU’s Digital Services Act (DSA) requires very large online platforms and search engines to pay annual supervisory fees based on their user numbers. To set the 2023 supervisory fee, the Commission calculated the number of average monthly active recipients in the European Union (AMAR) using a common methodology based on third-party data, annexed to implementing decisions. This led Meta and TikTok to challenge the decisions before the Court of Justice of the European Union (CJEU). The Court ruled that the Commission should have adopted the methodology for calculating AMAR through a delegated act rather than an implementing decision, as set out in Article 43(3) of the DSA. The Court therefore annulled the Commission’s decisions setting the supervisory fee applicable to Facebook, Instagram and TikTok; however, the effects of the annulled decisions are provisionally maintained for up to 12 months, until the Commission establishes a new methodology.
Parliamentary questions
In recent parliamentary questions, members of the far-right PfE party, together with Mary Khan from the ESN, challenged two key EU laws.
- PfE members called on the Commission to suspend or postpone the European Media Freedom Act (EMFA), even though most of its provisions have already entered into force and the Commission has no authority to halt its application.
- Mary Khan (ESN) and Petra Steger (PfE) also asked under what conditions the Commission would agree to abolish or relax the Digital Services Act (DSA), citing alleged US lobbying efforts.
Both requests go well beyond the Commission’s powers, since neither the EMFA nor the DSA can be unilaterally abolished or weakened by the Commission. Neither question has yet received an answer from the Commission; answers should be expected within six weeks of the submission date.
Reading & resources
- Social media and mass atrocity prevention. This article examines the increasingly complex role social media plays in preventing mass atrocities, identifying three pathways of influence: structural prevention, operational prevention, and crisis response. It confronts an existing research gap by bridging insights from mass atrocity studies and journalism scholarship, moving beyond the field’s earlier focus on humanitarian aid and intervention.
- Monitoring conflict through rampant misinformation. Bellingcat’s new guide outlines methods for monitoring conflicts amid heavy disinformation, using OSINT to verify weapon imagery, identify armed groups, and assess drone warfare claims. The guide shows how researchers can navigate internet shutdowns and disinformation to uncover the realities of conflicts like those in Manipur, India.
- Prebunking thwarts election misinformation. A new study comparing strategies to counter election misinformation in the US and Brazil finds that prebunking outperforms traditional fact-checking in boosting election confidence and reducing fraud beliefs. The research shows prebunking’s impact comes mainly from its factual content rather than warnings about deception, making it a practical tool for safeguarding trust in democratic institutions.
- Weapons of information warfare. A handbook detailing Russian disinformation tactics in the war in Ukraine. This guide explores the processes that determine how manipulated content is created and spread, the strategies that influence the perception of information, and the soft power tools utilised by Russia to control public opinion.
- Strategic Foresight Report 2025. The latest Strategic Foresight report from the European Commission to the European Parliament and Council mentions disinformation as a threat to democracy, specifically in the context of climate-related disinformation, electoral interference, the role of social media and AI algorithms in amplifying disinformation, and the importance of supporting media literacy efforts.
This week’s recommended read
This week’s recommended read by Maria Giovanna Sessa is a report on the online experiences of political candidates in the 2024 Irish elections.
The Irish media regulator (Coimisiún na Meán) took a data-rich look at how candidates campaigned online and what they faced. Findings reveal a broad experience of abuse: 48% of local and 59% of general election candidates encountered offensive, intimidating, or impersonation behaviours, with some receiving threats to kill or cause serious harm. Unsurprisingly, women, migrants, and LGBTIQ+ candidates were disproportionately targeted, contributing to a chilling effect where contenders self-censor on contentious policy areas.
Providing yet another example of the weight of identity-based disinformation, this important contribution combines rigorous evidence with actionable insights for platform accountability under the Digital Services Act (DSA) and practical candidate support, making its recommendations useful far beyond Ireland for anyone working on election integrity!
The latest from EU Disinfo Lab
- New FIMI Response Team reports. The FIMI Defenders For Election Integrity (FDEI) project brings together ten member organisations of the Foreign Information Manipulation and Interference Information Sharing and Analysis Centre (FIMI-ISAC) to develop a multistakeholder FIMI framework for elections – one that effectively monitors, responds to, and counters FIMI threats before and during elections, while strengthening FIMI defender communities and democratic institutions in four selected countries. Two new reports, with EU DisinfoLab among the collaborators, have been published: Country Election Risk Assessments for Moldova and Czechia.
Events & announcements
- 18-19 September: The 2025 edition of the International Democracy Day Brussels Conference will take place under the title “A World Turned Upside Down: Democracy And Inclusion In An Age Of Insecurity”.
- 19 September: “Generative futures: Climate storytelling in the age of AI disinformation” is a workshop that will be held in Oxfordshire (UK), exploring how generative AI is reshaping climate storytelling, both as a force for inspiration and a driver of disinformation.
- 24 September: Media & Learning’s next Wednesday Webinar “Persuasion by design: understanding Influence(rs)” will explore how audiovisual and social media shape our values, norms, and beliefs.
- 3 October: The French research Chair on Content Moderation is organising the “Hack the DSA” workshop in Paris. This workshop seeks to analyse the reports and data made available under the Digital Services Act (DSA).
- 9 October: Climate at a Crossroads 2025. Tackling Disinformation in Climate and Economic Policy. Experts, policymakers, and civil society gather in Ottawa, Canada, and online to address climate disinformation and its impact on governance.
- 15-16 October: Our annual conference #Disinfo2025 will take place in Ljubljana, Slovenia. The perfect time to get your ticket is now – we bet you won’t want to miss it.
- 17 October: MLA4MedLit online conference. Stay tuned; the event description and list of speakers will be announced shortly.
- 23-25 October: The IPI World Congress and Media Innovation Festival will gather in Vienna to discuss topics under the theme “Defending the Future of Free Media.”
- 24-31 October: Global Media and Information Literacy Week 2025. Minds Over AI – MIL in Digital Spaces. Stakeholders around the world organise events during the week, and UNESCO co-hosts the conference with a Member State in Cartagena de Indias, Colombia.
- 25 or 26 October: Researchers and practitioners on trustworthy AI are invited to submit papers to TRUST-AI, the European Workshop on Trustworthy AI organised as part of the 28th European Conference on Artificial Intelligence ECAI 2025, Bologna, Italy.
- 29-30 October: The 2nd European Congress on Disinformation and Fact-Checking, organised by UC3M MediaLab, will take place under the subtitle “Beyond 2025: Emerging threats and solutions in the global information ecosystem” in Madrid, Spain, with the possibility to join remotely.
- 19 November: Media & Learning Webinar: Understanding and responding to Health Disinformation. The session will offer practical tools for educators, librarians, NGOs, and civil society to counter pseudoscience, conspiracy narratives and support informed health communication.
- 20-24 November: The 2025 Global Investigative Journalism Conference will be held in Kuala Lumpur, Malaysia.
- 17 December: Media & Learning Wednesday Webinar about Lines of speech: hate, harm and the laws across borders.
- MediaEval 2025 Synthetic Images challenge. As part of the veraAI and AI-CODE projects, a new challenge invites researchers to tackle synthetic image detection and manipulation localisation using real-world data. More info, registration, and dataset links are available on the MediaEval 2025 website and GitHub.
- AI tools & FIMI. A short survey is open as part of the EU-funded RESONANT project, exploring the effectiveness and reliability of AI tools in detecting disinformation and Foreign Information Manipulation and Interference (FIMI). If you work in this area, we invite you to share your insights.
- Education on nuclear disinformation. Are you a secondary school teacher or student based in Belgium? Do you have your own good idea of possible “fake news” related to ionising radiation or radiation protection? Submit it before 30 September.
Spotted: EU DisinfoLab
- FIMI Analysis Report. A new report by International IDEA presents a methodology for analysing Foreign Information Manipulation and Interference (FIMI) – originally presented last November at the International IDEA workshop. During the workshop, EU DisinfoLab’s Project Officer, Inès Gentil, gave a presentation on “How do platforms’ algorithmic recommendation systems enable (electoral) FIMI?”, exploring two case studies.
- State of the threat. Last week, the French government gathered stakeholders at a roundtable hosted by the French Ministry of Foreign Affairs in Paris. Our Executive Director, Alexandre Alaphilippe, contributed to a session on the state of the threat. Discussions covered the diversification of actors – from Russian operations to emerging networks in Africa and the MAGA sphere – and the growing spread of manipulative techniques. The debate also underlined the importance of enforcing existing frameworks to safeguard democratic debate, transparency, and fundamental freedoms.
Jobs
- Civitates is looking for an Impact and Learning Manager.
- Tech Policy Press is now accepting applications for its 2026 Fellowship Cohort.
- The Institute for Strategic Dialogue (ISD) is now looking for a Senior Research Analyst focused on Extremism. Applications are due no later than 1 October.
- The Global Disinformation Index is accepting applications for a part-time Fundraising Manager.
- NewsGuard is looking for a Staff Reporter tracking false narratives spreading online and reporting on digital news trends.
- NewsGuard is also looking for a Business Development and Social Media Intern for Fall 2025.
- ProPublica is now accepting applications for the Assistant Director of Leadership Gifts position.
- MoonShot is looking for a Financial Controller.
- CORRECTIV is looking for a Trainer and Coordinator for Media Literacy.
- Global Media Registry is looking for a Project Lead & Analyst, EU Media Regulation.
Did you find a job thanks to the listing in this newsletter? We’d love to know – please drop us a message!
Have something to share – an event, job opening, publication? Send your suggestions via the “get in touch” form below, and we’ll consider them for the next edition of Disinfo Update.
