Dear Disinfo update reader,

Welcome to the November 2025 release of the Disinfo Update! In this edition, we trace how power, platforms, and narratives continue to shape the information ecosystem. From X’s algorithmic bias to Meta’s role in the global scam economy and China-linked cyber operations across Europe, there’s plenty to unpack. 

This edition also marks the update of three of EU DisinfoLab’s flagship hubs: Climate Clarity, Conflict & Crisis, and AI Disinfo. You’ll find a quick snapshot of the latest insights, investigations, and resources from each. And if you want to dive deeper, explore the hubs themselves: each one is a living library with curated materials and tools to help you stay informed, spot emerging trends, and keep ahead in the fight against disinformation.

Get reading and enjoy!

Our Webinars

UPCOMING – REGISTER NOW!

  • 20 November: Command and Control: How ANO Dialog surveils the Russian info space for the Kremlin | Behind Russia’s polished state messaging lies a vast monitoring apparatus: ANO Dialog. As the nerve centre of the Kremlin’s information control, it quietly manages propaganda and online manipulation across thousands of social media channels. In this session, Serge Poliakoff (Univ. of Amsterdam) uncovers how ANO Dialog works, why it represents a new model of state-controlled disinformation, and what its reach means for the information space today.

PAST – WATCH THE RECORDINGS!

  • 6 November: Weaponising the past: historical disinformation as a tool to legitimate aggressive politics | This session traces how history and archaeology have been manipulated to legitimise power, first by Nazi Germany and today by the Russian state. Under the Nazis, archaeological “research” and historical myths about racial purity and ancestral lands were used to justify persecution and conquest. In modern Russia, the government promotes a heroic, selective version of World War II and uses heritage sites and historical narratives to claim moral and territorial legitimacy, including for the war in Ukraine. Chiara Torrisi, historical researcher, will show us how old mechanisms of distortion have been adapted to today’s information environment, turning the past into a political weapon.
  • Are AI Detection Tools Effective? Introducing TRIED, a WITNESS Benchmark | With the rapid development of generative AI, AI detection tools have become a key resource for information actors seeking to verify the authenticity of content and combat disinformation. How do we ensure AI detection tools truly serve the people who need them most, and strengthen the work of fact-checkers, journalists, and civil society groups? In this webinar, Zuzanna Wojciak presents TRIED, the Truly Innovative and Effective AI Detection Benchmark: a practical framework developed by WITNESS to evaluate whether detection tools are genuinely useful and effective, while also guiding AI developers and policymakers in designing and promoting inclusive, sustainable, and innovative detection solutions.
  • Delegated act access to data | In this webinar, João Vinagre, researcher at the European Commission’s Joint Research Centre (JRC), unpacks what the delegated act on data access means in practice and how researchers can leverage these new rights. While there is no recording of this webinar available online, you can now access the presentation slides on our website.

Disinfo news & updates 

🧠Platforms and information integrity  

  • X algorithm amplifies right-wing and extreme content. A Sky News investigation finds that Elon Musk’s X algorithm systematically boosts right-wing and extremist content, reshaping online political discourse in the UK. In tests with politically varied new accounts, over 60% of recommended political content leaned right, and more than half of posts came from users employing hateful or extreme language, regardless of user preference. The report warns that Musk’s growing influence on British politics, including endorsements of fringe candidates, poses a threat to democratic debate. Analysts suggest the platform’s algorithmic bias toward outrage and engagement has turned X into a space where right-wing voices dominate, reflecting the owner’s ideological priorities.
  • Meta’s profits tied to scam ads. Internal documents reviewed by Reuters reveal that Meta earned an estimated 10% of its annual revenue (around $16 billion) from ads promoting scams and banned goods, despite long-standing awareness of widespread fraud across Facebook, Instagram, and WhatsApp. The leaks indicate Meta’s systems serve users up to 15 billion “high-risk” scam ads daily, making the company a major pillar of the global fraud economy, responsible for roughly one-third of all successful scams in the U.S. Rather than removing offenders, Meta has at times charged suspected scammers higher ad rates, prioritising profit over enforcement. In response to this growing crisis, former Meta executives Rob Leathern and Rob Goldman have launched CollectiveMetrics.org, a new non-profit dedicated to bringing transparency and accountability to digital advertising. Concerned that major platforms have stalled in addressing AI-driven scams and deepfakes, the founders aim to use independent data and analysis to track the prevalence of online fraud and reveal how opaque ad systems generate massive profits for tech giants. The initiative seeks to empower journalists, researchers, and policymakers with clear metrics to hold platforms accountable for deceptive advertising practices.
  • EU weighs delay to AI Act enforcement amid US pressure. According to a recent article by The Guardian, the European Commission is reportedly considering delaying parts of the EU Artificial Intelligence Act, the world’s first comprehensive AI regulation, under pressure from US businesses and the Trump administration, which has threatened tariffs over what it calls unfair tech rules. The proposed adjustments include a one-year grace period for violations involving high-risk AI systems and postponed fines for transparency breaches. Critics, particularly those who helped draft the law, warn that such a delay would undermine legal clarity and leave citizens vulnerable to the very AI risks the regulation seeks to prevent.
  • Spyware firms enter the US market, tech giants sound the alarm. A new article by The Guardian reports that as Israeli-founded cyber-surveillance companies NSO Group and Paragon Solutions expand into the US market, major tech firms like Apple and WhatsApp are pledging to warn users targeted by spyware. Paragon has already struck a deal with US Immigration and Customs Enforcement (ICE) for its Graphite software, raising alarm among civil rights advocates. While the firms claim their tools fight serious crime, previous misuse by government clients against journalists, activists, and business leaders fuels fears of a growing “silent spyware epidemic” within the US and a new wave of digital rights violations.  
  • Grokipedia and the illusion of neutrality. A recent WIRED investigation reveals that Elon Musk’s AI-generated encyclopedia, Grokipedia, launched by his xAI startup, is drawing criticism for ideological bias. Marketed as an alternative to Wikipedia, which Musk and other conservative critics accuse of having a left-leaning bias, Grokipedia appears to lean the other way, allegedly promoting far-right narratives and factual distortions, such as linking pornography to the AIDS epidemic and portraying social media as a “contagion” influencing gender identity. While Grokipedia accuses mainstream media outlets of liberal bias, the platform appears to heap praise on Musk himself. Another investigation by TIME magazine compares Elon Musk’s portrayal on Wikipedia versus Grokipedia. The platform paints a glowing picture of Musk, glossing over controversies such as the gesture widely compared to a Nazi salute and his amplification of “white genocide” claims. Critics say Grokipedia replaces one bias with another rather than offering true balance.
  • Digital evidence at risk. According to a 404 Media investigation, the FBI has subpoenaed the domain registrar Tucows to identify who operates archive.today, one of the internet’s most widely used archiving platforms. Known for preserving deleted posts, government pages, and other disappeared content, the site is vital for investigators, especially those documenting Russian war crimes in Ukraine. However, the site’s opaque ownership, alleged ties to Russia, and unstable funding raise alarms about the fragility of this critical infrastructure. If the service were to shut down, years of archived digital evidence could vanish overnight, underscoring the urgent need for open, transparent, and publicly supported archiving systems to safeguard truth and accountability online.
  • China-linked hackers exploit Windows zero-day against European diplomats. This report by Bleeping Computer covers a recent cyber-espionage campaign attributed to the China-linked group UNC6384 (Mustang Panda), which has been targeting European diplomats and government bodies. The attackers are exploiting an unpatched Windows zero-day vulnerability using malicious files themed around NATO defence procurement workshops to deploy the PlugX remote access trojan malware, allowing them to monitor diplomatic communications. Spearphishing emails disguised as diplomatic event invitations lure victims into opening infected files, enabling data theft and surveillance, especially in Hungary, Belgium, and Serbia. With no Microsoft patch yet available, experts urge organisations to implement mitigations like blocking LNK files to reduce exposure.

Our three hubs, Climate Clarity, AI Disinfo, and Conflict & Crisis, have all been updated! Below, we feature a few highlights to give you a snapshot of the latest content.

🌱Climate Clarity hub


⚔️Conflict & Crisis hub

🤖AI Disinfo updates

  • Chatbots pushing content from sanctioned entities: Leading AI chatbots are repeating Russian propaganda from sanctioned entities when asked about the war in Ukraine, according to research by the Institute for Strategic Dialogue (ISD) published by Wired. Around 18% of responses cited Kremlin-linked or state-funded outlets such as RT, Sputnik, or the Strategic Culture Foundation. The researchers call on the European Commission to clarify how EU sanctions apply to chatbot outputs that cite sanctioned state media.
  • LLM grooming or data voids? A new study challenges the idea that pro-Kremlin narratives in chatbot outputs stem from “LLM grooming”, arguing instead that they arise from data voids: topics where little reliable content exists for models to draw on. Based on a controlled audit of 416 responses from ChatGPT, Gemini, Copilot, and Grok, researchers writing in the HKS Misinformation Review suggest that these AI vulnerabilities owe less to coordinated manipulation and foreign interference than to such gaps in the underlying data.
  • AI regulation & platform accountability: The EU may delay parts of its AI Act under pressure from big tech and the Trump administration, potentially giving companies a grace period on compliance and transparency rules, according to The Guardian. Meanwhile, platforms like Instagram, YouTube, and TikTok fail to consistently label AI content (~30% compliance), raising trust concerns, as shown by Indicator.
  • AI assistants & news integrity: ChatGPT, Gemini, Copilot, and Perplexity misrepresent news in 45% of cases, exposing gaps in sourcing, accuracy, and context, as revealed by the EBU and BBC. But who loses audience trust, and who gets blamed, when a chatbot misrepresents the news? A UK BBC–Ipsos survey finds that the public spreads responsibility: 36% say AI providers, 31% say regulators, and 23% say news outlets should be accountable, even when mistakes come from AI summarisation. The findings suggest that machine-made mistakes can damage credibility across the entire information chain.
  • AI risks to users & trust: AI chatbots and generative tools are raising privacy, safety, and literacy concerns. A study warns that user conversations are routinely used to train models, sometimes including children’s data, according to Stanford HAI. Meanwhile, AI enables hyper-realistic threats and violent deepfakes, as revealed by The New York Times, while teens are struggling to distinguish real from synthetic videos, according to ABC. Additionally, as published by The Verge, AI-powered browsers like ChatGPT Atlas and Edge Copilot introduce new privacy and security risks, from data exposure to hijacking by malicious actors.
  • AI for espionage: Former U.S. security official Anne Neuberger argues that AI is remaking espionage and intelligence work. Drawing on her NSA and White House roles, she says AI can speed up analysis and improve threat prediction, but considers that democracies should update their practices without eroding transparency or civil liberties. As authoritarian rivals use mass data for surveillance, she calls for “responsible innovation” to protect public trust. The core test, she concludes, is whether open societies can harness AI’s power without becoming what they oppose, as reported by ASPI.

Want to stay on top of the latest in AI and disinformation? Our Climate Clarity, Conflict & Crisis, and AI Disinfo Hubs have just been updated. Take a look!

Brussels Corner

Latest news on the EU’s Democracy Shield

If you’re not in Brussels, you might get lost in translation, as multiple initiatives labelled “Democracy Shield” are being run in parallel. Now that the Commission has released its Communication on the European Democracy Shield (EUDS), it is worth disentangling the different initiatives that share the name.

The European Commission published its Communication on the Democracy Shield on 12 November. However, initial reactions (paywall) point to the “Democracy Shield” being little more than a political communication exercise rather than a concrete reinforcement of the EU’s democratic defences. More on this in our next newsletter.

In parallel, the European Parliament (EP) has appointed a temporary special committee on the Democracy Shield. This committee will publish a draft European Parliament Resolution on the Democracy Shield in January (which will be open for comments and amendments). A final Resolution, to be voted on by the full Parliament, is expected later next year.

As we look forward to the EP’s forthcoming EUDS draft report expected in January, the EP Research Unit has just published a study for the EUDS Committee, presented during a committee meeting on 5 November. The study explores the EU’s evolving framework to safeguard democracy and issues recommendations for the upcoming report.

Among its key recommendations, the study calls for a clear mandate and streamlined governance for any new structures created under the Democracy Shield. This would help avoid duplication of efforts, something that should be especially considered in light of the proposed European Centre for Democratic Resilience, a new structure whose legal status remains unclear. The new Centre was announced by President von der Leyen, and a similar suggestion was made in the EUDS Committee’s own working document. Will it be a reshuffling of the existing Rapid Alert System? Will it become a new EDMO working group?

The study also underscores the importance of sustainable funding, urging the Parliament to secure sufficient resources under the current and future Multiannual Financial Framework (MFF) to strengthen the resilience of EU democracy. Accessible long-term funding is crucial for civil society, as existing EU instruments, such as Horizon Europe, the Citizens, Equality, Rights and Values programme (CERV), or ad hoc tenders, provide only project-based, short-term funding.

Where is AgoraEU in the Council and Parliament?

Following the European Commission’s proposals for the 2028–2034 Multiannual Financial Framework (MFF), work in the Council is well underway. After an initial policy debate in the General Affairs Council (GAC) on 18 July, the Ad Hoc Working Party on the MFF (AHWP MFF) began identifying possible elements for a future Negotiating Box, a non-binding, evolving document prepared by the Council Presidency to structure the MFF negotiations.

So far, in October, discussions in the GAC focused on Heading 2 – European Competitiveness Fund and Horizon Europe. AgoraEU was also discussed under this Heading. In December, the GAC will discuss Global Europe, Administration, and revenue issues, aiming to table the first draft Negotiating Box ahead of the European Council in December 2025.
In the European Parliament, Civil Liberties (LIBE) and Culture (CULT) Committees will have joint competence in the new AgoraEU negotiations, and are in the process of assigning rapporteurs for this file. It appears that the rapporteurship on the CULT side may fall to a member from the S&D group, while in LIBE, the allocation of this important role will be decided later this month. This takes place against an evolving political backdrop around the MFF, after combined pressure from EPP, S&D, Renew Europe, and Greens/EFA on the Commission. In response, the Commission has already signalled its intention to adjust its proposal.

Reading & resources

  • Conspiracy theories and Charlie Kirk. In this video by WIRED, Joan Donovan and Nina Jankowicz discuss the potency of conspiracy theories following the assassination of Charlie Kirk, revealing how disinformation and misinformation ecosystems exploit confusion and fear. Nina and Joan explain that conspiracies often emerge as a psychological coping mechanism during chaotic or confusing events, mirroring past patterns from the JFK and even Lincoln assassinations. Online narratives framed the attack as a “false flag” orchestrated by the “violent left”, while influencers like Candace Owens amplified claims blaming Israel, fueling cross-platform spread from YouTube to Instagram and back to X. They also highlight that the kernel of conspiracy theories, and of some of the most effective disinformation, is how real-life events are taken out of context and weaponised in several ways. Algorithm-driven platforms reward outrage and virality, where cheapfakes and sensational claims often outperform facts. With platforms stepping back from content moderation, such as TikTok’s shift towards AI-based moderation, we are entering a post-content-moderation world where disinformation circulates faster and more profitably, resulting in an information ecosystem that rewards outrage over accuracy.
  • Report: Agents of Chaos. A report by the Casimir Pulaski Foundation exposes a long-term hybrid operation led by Russian and Belarusian intelligence services targeting the transatlantic realm through an “architecture of cognitive warfare” rather than conventional conflict. The campaign blends narrative manipulation, psychological operations, and coordinated disinformation campaigns to erode public trust and disrupt Western cohesion. This campaign functions across three interlocking layers: Cognitive Infiltration of discourse, Tokenized Execution via proxy agents conducting sabotage, and Micro-Operations aligned with information warfare. The study warns that the West must move beyond a reactive defence and adopt a proactive, offensive approach, treating cognitive security as a core element of national defence.
  • OSINT resource. Founder of Farallon, Claudia Tietze, announces the free release of the 2025 Not Really Ultimate OSINT Resource for Dorks: a compact, tested compendium that helps anyone from beginner to seasoned investigator level up their search game using advanced search operators (Google Dorks). The guide includes Basics, Upskilled & Uncommon Search, practical Dorky Ninjitsu for crafting effective queries, Foreign language and country-specific techniques, and a toolkit of Dorky Helpers (plus pro-tips from real-world practice).
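For readers new to the technique, “dorks” are simply queries built from advanced search operators. The examples below illustrate the general operator syntax; they are our own illustrations, not taken from the guide itself:

```text
site:europa.eu filetype:pdf "information manipulation"   (PDFs hosted on europa.eu containing the exact phrase)
intitle:"annual report" disinformation                   (pages whose title contains "annual report")
"foreign interference" -site:wikipedia.org               (exact-phrase match, excluding Wikipedia results)
```

Combining two or three such operators is usually enough to cut a vague search down to a handful of highly relevant documents.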
  • Prebunking at scale. Full Fact, Maldita.es, and the European Fact-Checking Standards Network (EFCSN) are participating in the development of Prebunking at Scale (PAS): a shared AI-powered tool and a common methodology for prebunking designed to anticipate early signals of emerging disinformation narratives, so fact checkers can publish prebunks before they spread. PAS is not just a piece of technology. It’s a collaborative infrastructure for the fact-checking community. At the core of PAS is a multilingual monitoring system that scans short-form videos on TikTok, YouTube Shorts, and Instagram Reels. Using AI technology developed by Fundación Maldita.es, the system clusters emerging claims into narratives that cross languages and platforms — helping fact-checkers act early and effectively. 
  • Beyond deepfakes. Are deepfakes a distraction? Disinformation researcher Zea Szebeni confronts this question in her latest piece, arguing that our obsession with deepfakes misses the real danger of AI-driven disinformation: “deep lore.” Rather than manipulating our feelings through real-life moments, deep lore floods social media with AI-generated images and stories, depicting politicians as heroes, surreal memes, and playful absurdities, that build emotional narratives and modern mythologies. While we know these images aren’t real, their constant repetition shapes feelings, perceptions, and cultural memory, much like digital folklore. This marks a shift toward a new “second orality”, Szebeni argues, where truth is measured by resonance and virality rather than actual facts, allowing these mythical stories to subtly influence how people interpret reality itself.
  • Winning the information battle. A new strategic paper by the International Centre for Defence and Security warns that the information domain has become a central battleground of modern geopolitics, with authoritarian powers like Russia and China weaponising digital manipulation to weaken democracies. The paper calls for a shift toward proactive, intelligence-led strategies, including greater investment, strategic autonomy, and even ethical offensive operations, to reassert democratic values in the information battle domain. 

This week’s recommended read

This week’s recommended read is brought to you by Maria Giovanna Sessa and features a commentary on AI hallucinations published by the HKS Misinformation Review. The article introduces a conceptual framework that distinguishes “AI hallucinations” – inaccurate outputs generated by AI systems without human intent to deceive – from traditional human-driven misinformation. Unlike humans who may transmit falsehoods out of bias or deceptive intent, these hallucinations emerge from structural and technical vulnerabilities such as gaps in training data, opaque model processes, and weak gatekeeping. They also interact with users in novel ways that shape perception and trust, often creating a misleading sense of legitimacy.

The latest from EU Disinfo Lab

  • Updated factsheet: disinformation landscape in Romania. Our updated disinformation landscape factsheet offers fresh insights into Romania’s evolving information ecosystem. This report highlights the latest disinformation narratives, notable cases shaping public debate, and the key actors both spreading and countering false information. It also reviews recent legal, media, and policy responses aimed at strengthening resilience against information manipulation. Developed with the support of Ciprian Cucu, Susana Dragomir, and Madalina Botan, this study is part of our broader effort to map Europe’s disinformation dynamics, fostering stronger cross-border understanding and collaboration.

Events & announcements  

  • 19 November: Media & Learning Webinar: Understanding and responding to Health Disinformation. The session will offer practical tools for educators, librarians, NGOs, and civil society to counter pseudoscience and conspiracy narratives and to support informed health communication.
  • 20-24 November: The 2025 Global Investigative Journalism Conference will be held in Kuala Lumpur, Malaysia.
  • 17 December: Media & Learning Wednesday Webinar about Lines of speech: hate, harm and the laws across borders.
  • 23-24 January: The Political Tech Summit, held in Berlin, offers an opportunity for political professionals working at the intersection of tech, campaigning, and democracy to exchange knowledge and discuss fresh perspectives shaping digital politics. 
  • 23 January-June 2026: The Cyber for Good Media Academy will take place with the mission to protect and better equip journalists against interference and manipulation in the digital space, with a focus on OSINT and cybersecurity. Applications open on 3 November and close on 5 December.
  • 16-17 February: The DSA and Platform Regulation Conference will take place at the Amsterdam Law School, offering an opportunity to reflect on the DSA and European platform regulation and to discuss their broader legal and political context under the overall theme of platform governance and democracy.
  • 8-10 April: The Cambridge Disinformation Summit is expected to gather the world’s leading scholars, professionals, and policy-makers to explore interventions on systemic risks from disinformation.

Spotted: EU DisinfoLab

  • Our senior researcher, Raquel Miguel Serrano, will be speaking at the Women in AI International talk in Valencia, Spain, today, 12 November. She will be participating in the thematic session “AI impact on fundamental rights: Guidelines and tools combating disinformation” and will explore how AI can erode the integrity of information. She will also present our AI Disinfo Hub.

Jobs 

Did you find a job thanks to the listing in this newsletter? We’d love to know – please drop us a message!

Have something to share – an event, job opening, publication? Send your suggestions via the “get in touch” form below, and we’ll consider them for the next edition of Disinfo Update.