Dear Disinfo update reader,

Two weeks ago, our community came together for the #Disinfo2025 annual conference, and what an inspiring event it was! Over three days, academics, researchers, policymakers, and journalists from across the world shared strategies and insights to strengthen the fight against disinformation. A heartfelt thank you to the 600+ participants who joined us and made this year’s event a huge success.

Couldn’t make it? Don’t worry, a selection of presentation slides and photos will be available on our website by the end of this week. Stay tuned for updates and relive the best moments and conversations that shaped this year’s discussions. 

Meanwhile, this issue dives into a busy policy month in Brussels, with the Digital Services Act’s delegated act on data access and the new Transparency and Targeting of Political Advertising Regulation coming into force. We also highlight new research on disinformation prevalence, climate narratives in French media, and inequalities in X’s Community Notes. 

And finally, we’re thrilled to announce that applications are now open for our 2026 Community and Communications Internship! If you’re passionate about digital integrity, community engagement, and strategic communication, this is your chance to join our dynamic team in Brussels and contribute to our mission.

Get reading and enjoy!

Our Webinars

UPCOMING – REGISTER NOW!

  • 6 November: Weaponising the past: historical disinformation as a tool to legitimate aggressive politics | This session traces how history and archaeology have been manipulated to legitimise power, first by Nazi Germany and today by the Russian state. Under the Nazis, archaeological “research” and historical myths about racial purity and ancestral lands were used to justify persecution and conquest. In modern Russia, the government promotes a heroic, selective version of World War II and uses heritage sites and historical narratives to claim moral and territorial legitimacy, including for the war in Ukraine. Chiara Torrisi, historical researcher, will show us how old mechanisms of distortion have been adapted to today’s information environment, turning the past into a political weapon.
  • 20 November: Command and Control: How ANO Dialog surveils the Russian info space for the Kremlin | Behind Russia’s polished state messaging lies a vast monitoring apparatus: ANO Dialog. As the nerve centre of the Kremlin’s information control, it quietly manages propaganda and online manipulation across thousands of social media channels. In this session, Serge Poliakoff (Univ. of Amsterdam) uncovers how ANO Dialog works, why it represents a new model of state-controlled disinformation, and what its reach means for the information space today.

PAST – WATCH THE RECORDINGS!

  • Are AI Detection Tools Effective? Introducing TRIED, a WITNESS Benchmark | With the rapid development of generative AI, AI detection tools have become a key resource for information actors to verify the authenticity of content and combat disinformation. How do we ensure these tools truly serve the people who need them most, and strengthen the work of fact-checkers, journalists, and civil society groups? In this webinar, Zuzanna Wojciak presents TRIED, the Truly Innovative and Effective AI Detection Benchmark: a practical framework developed by WITNESS to evaluate whether detection tools are genuinely useful and effective, while also guiding AI developers and policymakers in designing and promoting inclusive, sustainable, and innovative detection solutions.
  • Delegated act on data access | In this webinar, João Vinagre, researcher at the European Commission’s Joint Research Centre (JRC), unpacks what the delegated act on data access means in practice and how researchers can leverage these new rights. While no recording of this webinar is available online, you can now access the presentation slides on our website.

Disinfo news & updates 

⚖️Platform accountability and policy shifts

  • Denmark’s social media ban. In a recent speech, Danish Prime Minister Mette Frederiksen announced plans for an upcoming social media ban for children aged 15 and younger. The proposed ban is expected to span several platforms and is planned to come into effect as early as next year. The move is fueled by claims that “mobile phones and social media are stealing our children’s childhood,” and is accompanied by plans to ban smartphones in all schools in Denmark. The government links excessive online activity to rising youth anxiety, depression, and declining reading and concentration skills.
  • NYC sues major platforms over youth mental health crisis. New York City has filed a lawsuit in federal court against Meta, Google, Snapchat, and ByteDance, accusing them of designing addictive platforms that have fueled a growing youth mental health crisis. The city claims the companies acted with gross negligence, creating a public nuisance by exploiting children’s and teens’ neurophysiology for profit. Citing data showing that 77.3% of NYC high school students spend three or more hours a day on screens, as well as a rise in harmful trends like subway surfing, the city seeks damages to offset the costs of addressing the crisis. The lawsuit joins over 2,000 similar suits nationwide. Google has denied the allegations concerning YouTube.
  • EU political ad transparency backfires. New EU regulations aimed at increasing transparency in political advertising have produced unintended results, as major platforms like Meta and Google have implemented blackouts on political advertising. While the new law aims to prevent manipulation and foreign interference, critics warn it could actually cause adverse effects by silencing smaller parties and independents. The broad definition of political advertising has also led to the blocking of ads about key social issues like climate change, migration, and social justice and human rights initiatives, raising fears of lost data and diminished public insight.
  • Meta’s AI advisor controversy. This article by The Guardian critically assesses the appointment of Robby Starbuck as an adviser on AI bias at Meta, a role he acquired through a lawsuit settlement after threatening legal action over a Meta AI output that falsely claimed he was a supporter of the QAnon conspiracy theory and had participated in the January 6 riot at the Capitol. Starbuck, a vocal anti-DEI activist, has been widely criticised for spreading disinformation on issues ranging from vaccines to transgender rights. His appointment has raised concerns about Meta’s commitment to platform integrity and its apparent appeasement of right-wing political forces.
  • Google cuts funding to Full Fact. After three years of support, Google has abruptly ended its funding of Full Fact, a prominent UK-based fact-checking organisation. In the past year alone, Google provided over £1m, either directly or through affiliate programmes, helping Full Fact develop advanced AI tools for combating disinformation. According to Full Fact, none of this funding has been renewed, suggesting that US tech companies may be retreating from fact-checking efforts to appease political pressure from the current administration. Full Fact warns that this emerging trend undermines efforts to uphold verifiable truth online, and is urging public support to continue its independent work.
  • TikTok and ICE. This Forbes investigation reveals that TikTok has quietly updated its policies to allow broader sharing of users’ personal data with government and regulatory bodies, including US Immigration and Customs Enforcement (ICE). The changes walk back previous commitments to notify users before disclosing their information, making it harder for users to contest government subpoenas.
  • X transparency features. As AI-generated bots become more sophisticated, X is rolling out an experimental feature that it claims will enhance trust and verify content authenticity. According to head of product Nikita Bier, the update will display more specific information on profiles, such as account creation date, location, and username history; these features will allegedly help users spot fake or malicious accounts. While users can opt out of sharing this information, doing so will be flagged on their profile, potentially signalling suspicious behaviour.

🎭Disinformation campaigns & narrative manipulation

  • Food access misinformation in Gaza. An investigation by the Washington Post has revealed that Google allowed Israeli government-backed ads containing misinformation about food access in Gaza to remain on YouTube. The ads, part of a campaign asserting “There is food in Gaza,” featured footage from mid-2023 showing stocked markets while failing to mention the severe inflation and scarcity that made this food largely unaffordable, and thus unattainable, for most Palestinians. Internal emails reviewed by the Washington Post show that Google’s Trust and Safety team determined that the ads did not violate misinformation policies, effectively enabling the Israeli government to continue using the platform to shape narratives amid the ongoing conflict.
  • Trump’s green new scam messaging. An analysis by Grist uncovers the political intent behind Donald Trump’s use of the phrase “Green New Scam,” coined during a 2023 campaign event in New Hampshire as part of a broader effort to delegitimise climate policy. The slogan can be seen as part of a three-pronged strategy: erasing language related to climate change from government discourse, undermining climate science, and promoting pro-fossil-fuel narratives. Experts describe this as a classic propaganda strategy, using catchy, repetitive language to reshape and control public perception.
  • COP30 climate disinformation surge. This Substack article exposes a surge of disinformation campaigns targeting COP30, which President Lula has framed as the “COP of truth.” The article describes a transnational “supply chain of lies” combining imported conspiracies like “globalists” and the “deep state” with disinformation narratives locally tailored to Brazil. It highlights how disinformation is being monetised, puts environmental defenders under threat, and fuels a rise in negative COP30-related content on social media; some posts, for example, use AI-generated content and real videos shared out of context to discredit the host city. Despite the risks associated with these campaigns, major platforms have yet to mount a climate-specific response.
  • How to fight Russia’s disinformation campaign. In this piece by Foreign Policy, the authors argue that most authoritarian regimes, like Putin’s Russia, share one core vulnerability: the fear of losing control over their own people. The authors call for the West to strategically exploit this weakness through ethical cognitive deterrence or coordinated information efforts exposing corruption, economic strain, and the realities of war. Rather than simply copying disinformation tactics, they urge truthful, targeted communication to make authoritarian elites fear internal instability more than the West fears escalation.

🤖AI Disinfo updates

  • AI video generators are now so good, you can no longer trust your eyes. The New York Times: OpenAI’s new video generator Sora could mark “the end of visuals as proof,” as anyone can now fabricate convincing footage from a text prompt. NewsGuard put that to the test and found that Sora 2 generated fake videos advancing false claims in 80% of cases, including Russian disinformation and brand impersonations. Together, the findings highlight how text-to-video AI is blurring the line between evidence and illusion, and how OpenAI does not seem to be putting enough effort into ensuring compliance with its usage policies, which, for example, ‘prohibit misleading others through impersonation, scams, or fraud’.
  • We say you want a revolution. PRISONBREAK – An AI-enabled Influence operation aimed at overthrowing the Iranian regime. Citizen Lab reports on an AI-enabled influence operation, referred to by the researchers as “PRISONBREAK”, consisting of more than 50 inauthentic X profiles spreading AI-generated videos, fake news screenshots, and impersonations of media outlets to incite Iranians to revolt against their government. The campaign was mainly active during the 13–24 June 2025 Iran–Israel conflict and was synchronised with operations by the Israel Defense Forces (IDF). 
  • Russian hackers turn to AI as old tactics fail, Ukrainian CERT says. The Record: As artificial intelligence reshapes both cyberwarfare and information control, new evidence shows how Russia is weaponising AI across digital domains. Ukraine’s CERT-UA has observed Russian hackers using AI to automate cyberattacks, generate malware, and exploit zero-click vulnerabilities, marking a shift toward “Steal & Go” operations and AI-assisted hybrid warfare coordinated with missile strikes. Meanwhile, a SAGE Journals study reveals how Russian disinformation networks are poisoning large language models (LLMs) to embed Kremlin narratives in AI systems, an effort to manipulate collective digital memory and sustain what Timothy Snyder calls the “politics of eternity.”
  • Generative AI and news report 2025: How people think about AI’s role in journalism and society. Reuters Institute: AI is redefining how people find and make sense of information, raising fresh challenges for journalism and fact-checking alike. A new report shows that the public’s use of generative AI has surged across six countries, and that searching for information has become the main reason people use these tools, with weekly use for this purpose more than doubling in the past year, from 11% to 24%. Meanwhile, others are raising alarms about the dangers of using AI in journalism: NewsGuard’s latest Reality Check exposes how a French outlet used ChatGPT to invent quotes from real experts, a reminder that AI shortcuts can manufacture credibility just as easily as they can undermine it, deepening the trust crisis facing journalism.

Want to stay on top of the latest in AI and disinformation? Our AI Disinfo Hub has just been updated. Take a look!

Brussels Corner

New rules on data access and political advertising 

October is a busy month for EU digital policy, with several legislative acts coming into force. The Delegated Act on Data Access under the Digital Services Act enters into application this month, requiring major online platforms to grant researchers access to data to study systemic risks in the EU. This is a key step toward better monitoring of online harms, although whether this framework delivers meaningful transparency will largely depend on how platforms handle upcoming data requests.

Also taking effect on 10 October is the Regulation on the Transparency and Targeting of Political Advertising (TTPA), designed to help people recognise political ads. The TTPA could be an important measure to mitigate the risk of disinformation efforts influencing elections in several EU countries. However, a backlash from Big Tech companies risks causing a harmful loss of information.

In a reaction that could only be described as petulant, Meta and Google have paused political ads entirely, claiming the new rules are too burdensome. At a recent scrutiny session held at the European Parliament by the Committee on the Internal Market and Consumer Protection (IMCO), Meta argued that the regulation would reduce ad effectiveness and that it ignores the “benefits” of personalised ads, while rapporteur Sandro Gozi (IT, Renew) questioned whether complexity is the real issue or simply an excuse to avoid compliance.

MEPs back report on transparency of interest representation for third-country funding

During the same IMCO committee meeting last week, MEPs backed a report (based on an initial proposal from the European Commission), led by Adina Valean MEP (EPP, Romania), supporting the creation of national transparency registers for organisations receiving funds from outside the EEA for “influence operations”. Eight of the 43 MEPs who voted declined to back the report, probably at least in part due to concerns about the proposal’s lack of clarity. The proposal still awaits a plenary vote. While the Commission promotes its “Digital Omnibus” as simplification for businesses, this legislation risks adding complexity and regulatory overhead for NGOs.

The next Multiannual Financial Framework (MFF) and AgoraEU

The EU’s next MFF, or seven-year budget (2028–2034), will introduce AgoraEU, a new programme combining the Creative Europe, Media+, and CERV+ programmes. This funding programme is intended to create the legal basis for the continued funding of research projects that are crucial to understanding, and fighting, disinformation campaigns in Europe.

The European Parliament has co-legislative powers in adopting individual spending programmes, making deliberations at the committee level particularly important. On 22 October, the Parliament decided that the Civil Liberties (LIBE) and Culture (CULT) Committees will have joint competence in the new AgoraEU negotiations, meaning shared responsibility for shaping the regulation. Additional opinions will be provided by the Budgetary Control, Budgets, and Women’s Rights Committees.

Securing this crucial counter-disinformation funding in the next MFF will require our community to coordinate efforts in the coming months. If you’re interested in learning more about joint efforts towards stronger support for CSOs in the next MFF, feel free to reach out to EU DisinfoLab.

Reading & resources

  • Community notes on X. New research presents the first large-scale quantitative analysis of X’s crowdsourced moderation tool, Community Notes, examining over 1.8 million entries. The study identifies major challenges in how the system operates: a small group of users produces most notes, consensus among contributors is rare, and many notes are used for debate rather than factual correction. Additionally, the average delay before notes are published suggests the system struggles with timeliness.
  • Disinformation on major platforms in Europe. A new report by Science Feedback and partners offers a cross-platform measurement of structural indicators for online misinformation across six major platforms in four EU member states. The study finds that TikTok has the highest prevalence of misinformation, with a 20% exposure-weighted misinformation rate on public-interest topics (see the short sketch after this list for what exposure weighting means). It also identifies a “misinformation premium,” with low-credibility accounts earning more engagement per post than reliable sources on most platforms except LinkedIn. These findings underscore how platform design and policy choices amplify misleading content, issues the Digital Services Act seeks to address.
  • Climate Disinformation on French TV and Radio. An eight-month monitoring project by Science Feedback and partners has uncovered 529 misleading or false climate-related claims broadcast on French television and radio between January and August 2025. The volume of misinformation spiked around major political and geopolitical events, with 70% of cases targeting climate solutions, especially renewable energy, rather than climate science itself. The study identifies 19 recurring narratives and warns that mainstream outlets, though widely trusted, are inadvertently amplifying falsehoods through guests and commentators. The findings call for stronger safeguards to protect the integrity of environmental information in public discourse.    
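
A note on the “exposure-weighted” figure above: the idea is that a post’s misinformation label counts in proportion to how many people saw it, not once per post. The minimal sketch below is our own illustration of that arithmetic, not Science Feedback’s published methodology; the function name and data layout are hypothetical.

```python
from typing import List, Tuple

def exposure_weighted_rate(posts: List[Tuple[int, bool]]) -> float:
    """Share of total views that landed on posts flagged as misinformation.

    `posts` holds (view_count, is_misinfo) pairs. Weighting by views
    means one viral false post outweighs many little-seen ones, so this
    rate can differ sharply from a raw per-post rate.
    """
    total_views = sum(views for views, _ in posts)
    misinfo_views = sum(views for views, flagged in posts if flagged)
    return misinfo_views / total_views if total_views else 0.0

# One viral false post and two small accurate ones: 1 of 3 posts is
# misinformation (33% per-post), yet 90% of all views land on it.
sample = [(90_000, True), (5_000, False), (5_000, False)]
print(f"{exposure_weighted_rate(sample):.0%}")  # prints 90%
```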

This week’s recommended read

Raquel Miguel, Senior Researcher at EU DisinfoLab, recommends reading this series of three articles published by Spanish fact-checker Maldita together with Italian media outlet Facta, as part of a project funded by Journalismfund Europe. The authors coined the term ‘pop fascism’ to explain how elements of pop culture are being exploited to amplify fascist narratives, disinformation, and conspiracy theories. The strategy includes the use of AI-generated content, coded language, and masking through humour to evade content moderation on digital platforms.

Memes, pop music, and football help amplify these narratives and make them easier to digest, idolising figures such as Adolf Hitler, Benito Mussolini, and Francisco Franco and presenting them as friendly, cool, and less radical.

Beyond specific content, the research illustrates how disinformation actors are exploring and using novel communication strategies, including cultural elements with high emotional impact, to pollute the information ecosystem and ultimately expand and normalise anti-democratic narrative frameworks, especially among young audiences.

The latest from EU DisinfoLab

  • Documenting counter-disinformation setbacks in Europe and Germany. In a new report for the Friedrich Naumann Foundation, EU DisinfoLab team members Raquel Miguel Serrano and Maria Giovanna Sessa highlight the mounting challenges facing the fight against disinformation. The report identifies a dual threat: increasingly sophisticated foreign interference campaigns and a deteriorating political and corporate climate within Europe. The study warns that domestic political shifts and major tech platforms’ loosening of moderation controls are weakening regulatory safeguards like the Digital Services Act. Using Germany as a case study, the report underscores both the promise of strong regulation and its fragility in the face of external manipulation and internal political pressures, calling for stronger, more resilient defences across Europe.
  • New factsheet: disinformation landscape in Poland. Our latest disinformation landscape release turns the spotlight on Poland, mapping key narratives, trends, and actors shaping the country’s information space. The factsheet explores notable disinformation cases influencing public debate, highlights both drivers and defenders of false narratives, and reviews Poland’s legal, media, and policy responses. Produced in collaboration with Mateusz Zadroga and NASK, this study contributes to our broader European mapping effort.

Events & announcements  

  • 29-30 October: The 2nd European Congress on Disinformation and Fact-Checking, organised by UC3M MediaLab, will take place under the subtitle “Beyond 2025: Emerging threats and solutions in the global information ecosystem” in Madrid, Spain, with the possibility to join remotely.
  • 5-7 November: The Sofia Information Integrity Forum (SIIF) will take place, bringing Southeast Europe and the Black Sea region together for regional dialogue, international collaboration, and strategic innovation on the topic of FIMI vulnerability. 
  • 19 November: Media & Learning Webinar: Understanding and responding to Health Disinformation. The session will offer practical tools for educators, librarians, NGOs, and civil society to counter pseudoscience and conspiracy narratives and to support informed health communication.
  • 20-24 November: The 2025 Global Investigative Journalism Conference will be held in Kuala Lumpur, Malaysia.
  • 17 December: Media & Learning Wednesday Webinar on “Lines of speech: hate, harm and the laws across borders”.
  • 23-24 January: The Political Tech Summit, held in Berlin, offers an opportunity for political professionals working at the intersection of tech, campaigning, and democracy to exchange knowledge and discuss fresh perspectives shaping digital politics.
  • 16-17 February: The DSA and Platform Regulation Conference will take place at the Amsterdam Law School to reflect on the DSA and European platform regulation, providing an opportunity to discuss its broader legal and political context through the overall theme of platform governance and democracy.
  • 8-10 April: The Cambridge Disinformation Summit is expected to gather the world’s leading scholars, professionals, and policymakers to explore interventions against systemic risks from disinformation.

Spotted: EU DisinfoLab

  • Our senior researcher, Raquel Miguel Serrano, will be participating in the Women in AI international talk in Valencia on 12 November. She will be speaking in the thematic session “AI Impact on fundamental rights: guidelines and tools combating disinformation.”

Jobs 

Did you find a job thanks to the listing in this newsletter? We’d love to know – please drop us a message!

Have something to share – an event, job opening, publication? Send your suggestions via the “get in touch” form below, and we’ll consider them for the next edition of Disinfo Update.