Dearest readers,
Happy New Year from all of us at EU DisinfoLab.
We’re back, bonjour! We’ve spent the past week catching up on what matters most to bring you a condensed snapshot of what’s shaping the information space as the year begins.
Unsurprisingly, 2026 starts where 2025 left off: with disinformation escalating, accountability under pressure, and the rules of a free, trustworthy and pluralistic information space openly contested.
This first EU DisinfoLab newsletter of the year brings together the stories setting the tone: from US travel bans targeting European counter-disinfo practitioners, to Venezuela and the rapid spread of disinformation following the US operation, and to platforms continuing to profit from fraudulent advertising. We also examine how AI is accelerating political and climate deception and highlight several recent FIMI investigations.
From Brussels, our policy team is already back on the ground, unpacking the battles that will shape 2026: scaling up EU enforcement against disinformation and its business model, and pushing to establish new European funding opportunities for the counter-disinfo community in the next EU budget.
There is some good news too: disinformation is no longer going unchecked, as regulators, and sometimes courts, are beginning to act. Let’s see whether this sends a lasting signal.
Along the way, we also explore deeper and substantive questions: *Can information warfare be a precursor to armed conflict?* *Are we talking about AI all wrong?* Read on to explore what this means in practice.
This edition also offers a wide range of opportunities for the community. We kick off the year with a new season of EU DisinfoLab webinars, alongside major conferences, training programmes, tools and resources, calls for participation, and a curated list of job openings across research, policy, OSINT and journalism.
We’re glad to have you back with us. Thank you, and enjoy the read!
Our Webinars
UPCOMING – REGISTER NOW!

15 January: Pop fascism: memes, music, and the digital revival of historical extremism
Franco in glitter sunglasses. Mussolini dancing in a school corridor. Hitler celebrating like Cristiano Ronaldo. These are not “just jokes”: they are disinformation tactics.
This webinar exposes how “pop fascism” operates by laundering extremist ideology through memes, music, football fandom, AI-generated videos, and nostalgic aesthetics. Based on a cross-border investigation by Maldita.es (Spain) and Facta (Italy), Coral García and Francesca Capoccia will reveal how this content systematically moves from normalisation to acceptance and idolisation across TikTok, X, Telegram, and YouTube, embedding conspiracy theories, historical distortion, and extremist narratives into viral culture.

29 January: Climate deception ‘on air’
Mainstream TV and radio remain some of the most influential sources of information, yet they increasingly host misleading claims about climate science and climate policy.
In this webinar, Louna Wemaere (QuotaClimat) presents new comparative findings from France and Brazil, revealing how broadcasters shape public perception by spreading misinformation about renewables, electric vehicles, energy prices, deforestation, and climate measures. Join us to understand how climate falsehoods reach millions, why they matter for democracy, and what can be done to counter them.
PAST – WATCH THE RECORDINGS!
- Starting over: A new strategy for US information integrity | The United States government has chosen to unilaterally disarm in the global contest for truth and integrity, even as authoritarians and technology companies find increasing alignment. The abrupt shutdown of our information-integrity tools poses significant risks, leaving Americans and people worldwide increasingly exposed to an information ecosystem designed to manipulate the public and erode social trust. Speaker Adam Fivenson explores what “starting over” should look like, the hard lessons that must be confronted, and the strategic choices that will define the future of democratic resilience.
- Are platforms curbing disinformation? Scientific, cross-platform evidence from six VLOPs | How much mis- and disinformation do people actually see on major online platforms? Are repeat misinformers being structurally rewarded with extra reach and monetisation? There has been no scientific, cross-platform way to answer these questions… until now. This webinar introduces the first results from SIMODS (Structural Indicators to Monitor Online Disinformation Scientifically), the first project to scientifically measure the prevalence of mis- and disinformation across platforms and languages. With Emmanuel Vincent, from Science Feedback.
Don’t miss out, watch the recordings and explore all our past EU DisinfoLab webinars.
| Did you know? 📈 In 2025, EU DisinfoLab hosted 31 webinars, with an audience growing at an incredible pace. From counter-disinformation investigations and case studies to AI, climate disinformation, elections, policy, FIMI, and IBD, our webinars continue to unpack how information manipulation works, and how to counter it. 🧡 A huge thank you to all our speakers, partners, and participants for making every conversation sharper, deeper, and more impactful. If your company or institution is interested in partnering with us and sponsoring our webinars, please reach out to discuss how we can work together: info@disinfo.eu |
Disinfo news & updates
🇻🇪 Venezuela, tracking what spreads
- In the hours after Venezuelan leader Nicolás Maduro’s capture, long-debunked election conspiracies and AI-generated images flooded platforms such as X, Instagram, TikTok and Facebook. The lack of early, authoritative information accelerated the spread of fabricated visuals and recycled footage, shaping narratives in real time.
- As expected, Russia’s influence apparatus quickly weighed in after the US operation in Venezuela. Unable to protect an ally, Moscow’s digital assets deployed narrative confusion, as DFRLab states in a recent report. Its network of propaganda websites known as Portal Kombat, alongside associated social media accounts, pushed coordinated messages attacking US military credibility and portraying the operation as aggressive and unreliable.
- Analysis by Cyabra found that thousands of inauthentic profiles coordinated posts before, during and after the operation, helping shift perceptions by first normalising the intervention and then reframing the capture as an “illegal kidnapping.”
- Reporting by The Washington Post shows that Chinese influence activity also sought to shape U.S.-focused discourse.
Scroll down to AI-generated disinfo updates for more on how AI hijacked the Venezuela story.
🎯 New attacks against the disinfo community
- In late December 2025, the Trump administration officially barred five prominent European tech regulators and anti-disinformation researchers from entering the US. Secretary of State Marco Rubio framed the move as a defence against “foreign censorship”, but the European Commission and other critics considered it political retaliation for Europe’s Digital Services Act (DSA), which requires platforms to moderate harmful content.
- The list includes former EU Commissioner Thierry Breton, Imran Ahmed (Center for Countering Digital Hate), Clare Melford (Global Disinformation Index), and Josephine Ballon and Anna-Lena von Hodenberg (HateAid).
- The dispute has already moved to the courts: on Christmas Day, a US federal judge issued a temporary restraining order to block the detention or deportation of Imran Ahmed, a green card holder, ruling that his residency cannot be revoked simply because the government disagrees with his research.
- Do not miss this op-ed at Tech Policy Press by Imran Ahmed. He argues that transparency is a free-speech issue, not a regulatory extra, and claims that “platform accountability only works when transparency rules are backed by real consequences.”
- The administration’s stance has faced further scrutiny following reports that it simultaneously intervened to facilitate the return of Lauren Chen, the Canadian founder of Russian-funded Tenet Media.
- The timing of the sanctions has prompted discussion. Only weeks ago, the European Commission imposed a €120 million fine on X, Elon Musk’s social media platform, for failing to comply with transparency and content-moderation rules.
A full understanding of how the DSA was applied in this case requires access to the legal basis and arguments relied upon. As the European Commission’s decision has not yet been published, we have submitted an official access-to-documents request to obtain it as soon as possible (the status of our request is available here).
Scroll down to ‘readings and resources’ for more analysis: “The Trump Lie About Europe and Why it Matters.”
📢 In ads we don’t trust
- TikTok and Instagram allowed 37 advertisers to keep running thousands of fraudulent ads, repeatedly violating platform rules and the EU’s Digital Services Act. The investigation, run by Maldita.es, reveals that these ads masquerade as legitimate brand sales but function as scam storefronts.
- Similar patterns have also been documented by Graphika, which has tracked coordinated ad networks promoting counterfeit luxury goods across TikTok and Meta platforms.
- This is disinformation for profit, embedded in advertising infrastructure rather than posts, and is as damaging to information integrity as other disinformation. Despite this, Reuters has revealed that Meta accepted billions in fraudulent advertising revenue, particularly from China. The company even developed a “playbook” to intentionally delay and dilute action against scam ads. How? By mimicking likely regulator searches in its Ad Library to make fraudulent ads harder to find.
- Furthermore, Sweden’s news media trade body, Utgivarna, has reported Meta chief executive Mark Zuckerberg to the police for fake adverts posted on his company’s social media platform Facebook.
- Speaking out against these practices has come at a steep personal cost. Reporting by The Washington Post shows how tech whistleblowers who challenged Meta’s advertising and integrity systems faced professional exile, legal pressure and financial hardship.
🌡️ Climate denial: human and artificial
- Climate denial isn’t fringe in Washington: it’s in charge. A new Center for American Progress report finds that nearly 70% of leadership positions across Congress, the Cabinet and key federal agencies are held by climate deniers. In Congress alone, 119 members still deny climate change and have collectively received $51.4 million in lifetime fossil fuel donations.
- AI chatbots are actively amplifying climate disinformation through personalisation, rather than simply making occasional factual errors. According to Global Witness, this “AI sycophancy” (the tendency to agree with users) is promoting climate denial, amplifying known disinformers, and even encouraging more extreme language to drive engagement.
⚖️ Conspiracies, disinformation and sanctions
- A French court has convicted ten people for a coordinated cyberharassment campaign against Brigitte Macron, marked by gender-based abuse and conspiracy-driven falsehoods spread across social media. Most received suspended prison sentences while one defendant was sentenced to six months in prison and others were fined. The ruling represents a rare and significant legal response to online harassment, highlighting how disinformation-fuelled campaigns can spill from digital platforms into criminal courts.
- France is not alone: the EU also sanctioned American propagandist John Mark Dougan, linked to pro-Kremlin disinformation networks targeting Western elections, subjecting him to asset freezes and EU travel bans.
⚔️ Information warfare, precursor to armed conflict? Yes, according to a confidential German military plan seen by Politico. Berlin views cyberattacks, sabotage and disinformation campaigns as potential early indicators of war and, rather than background noise, these “hybrid” tactics are treated as preparatory steps toward armed conflict.
🔄 Antisemitism across the extremes. This ISD Digital Dispatch reveals how US-based violent extremist networks systematically use antisemitic conspiracy narratives to mobilise across ideologies and platforms. Analysing data from 1,000+ extremist accounts, it shows how spikes in hate align with real-world events.
📈 Google Trends trap. After the Bondi shooting, social media users misread Google Trends data to claim people searched for the attacker’s name before the shooting, allegedly from places like Tel Aviv. This fueled conspiracy theories suggesting the attacker was part of a foreign plot.
🤖 AI-generated disinfo updates
- We’re talking about AI all wrong. Here’s how we can fix the narrative. This article, published in The Conversation, examines how the metaphors and narratives we use to describe AI shape public understanding, and, in turn, how AI is designed, adopted, and governed. The author argues that portrayals of AI as humanlike “assistants,” artificial brains, and the ubiquitous humanoid robot can obscure what today’s AI systems actually are, exaggerate their capabilities, and blur their limitations, making it harder to use and regulate this technology.
- Google’s and OpenAI’s chatbots can strip women in photos down to bikinis. Grok has become the focal point of a growing AI scandal after users showed that the chatbot can be used directly on X to “undress” people and generate non-consensual sexualised images, including of minors. The fallout has triggered investigations and warnings from regulators across the EU, UK, France, India and beyond. After the uproar, Grok restricted image creation and editing to paying subscribers, with the intention of holding them accountable for how the tool is used. The problem is not isolated: Google’s Gemini and OpenAI’s ChatGPT can also be coaxed into producing similar “bikini” deepfakes, exposing wider failures of safeguards across mainstream AI tools.
- Phony visuals of Maduro’s real capture. Following the capture of Nicolás Maduro by US forces, social media was flooded with AI-generated and out-of-context images and videos falsely claiming to show the operation, amassing more than 14 million views on X in days. These visuals often closely resemble reality, making them harder to debunk.
- First draft Code of Practice on transparency of AI-generated content. The European Commission has released a first draft of its Code of Practice on marking and labelling of AI-generated content, outlining how AI content, including deepfakes and synthetic text, could be clearly labelled across the European Union.
Want to stay on top of the latest in AI and disinformation? Our AI Disinfo Hub has been recently updated. Take a look!
Brussels Corner
New Presidency of the Council
On 1 January 2026, Cyprus took over the Presidency of the Council of the EU, guiding discussions on all current topics, including policy areas with implications for the fight against disinformation. The incoming presidency reports that it is itself the target of a disinformation attack, one that it says “bears the hallmarks” of a Russian campaign, so the problem already has top billing.
A key priority to watch is the negotiation of the EU’s next Multiannual Financial Framework (MFF) 2028–2034. Most notably, Cyprus will be expected to finalise the “negotiating box” including concrete figures by the end of its presidency in June 2026. This step will set the parameters for subsequent talks, including for the proposed AgoraEU programme, which is needed to support civil society working on countering disinformation under its Citizens, Equality, Rights and Values (CERV+) strand (more on AgoraEU in this Brussels Corner).
Another priority for the Cyprus presidency will be advancing the “omnibus” proposals under the EU’s so-called “simplification” agenda. The Commission presented its digital omnibus package on 19 November with some of the measures including rollbacks on data protection rules. These changes could enable more “targeted” (based on tracking) advertising and more tailored disinformation campaigns. While Cyprus will try to move discussions towards an agreement in the negotiations for these proposals, some of the controversies may pose challenges. Positions among member states are divided and strong lobbying is expected both from tech companies and digital rights advocates.
EUDS updates
On 18 December, the European Parliament held a plenary debate on the European Democracy Shield. The debate was opened by an oral question to the Commission tabled by Alexandra Geese (Greens/EFA) calling on it to investigate and act against biased, divisive online business models. MEPs broadly clashed over the proposed Shield. The main political groups in the European Parliament – EPP, S&D, Renew and Greens – agreed that platform algorithms pose a systemic risk to elections and public debate and that foreign interference demands tougher enforcement of the Digital Services Act (DSA), particularly during elections. However, some voices from EPP (centre-right) expressed concerns about overreach of proposed measures, stressing the need for safeguards. On the other hand, Renew (centrist), Greens and S&D (centre-left) argued that the measures in the Shield do not go far enough. Far-right political groups rejected the Shield altogether, portraying it, as usual, as a “censorship” tool, de facto supporting foreign big tech companies.
The European Parliament’s Special Committee on the European Democracy Shield is expected to release its draft report in January, which will then be open for amendments to be tabled by MEPs.
Reading & resources
- “The Trump Lie About Europe and Why it Matters”. A comprehensive critique of the current US administration’s attack on the EU Digital Services Act from globally renowned free speech expert David Kaye, a law professor at the University of California, Irvine and former UN Special Rapporteur on freedom of expression.
- Narrative control during the Israel–Iran war. This Graphika investigation examines how pro-Iran state and state-aligned actors mobilised information operations during the days of active warfare in the Israel–Iran war in June 2025. The report exposes a coordinated playbook combining state media, inauthentic social media accounts, bot networks and multilingual messaging.
- Russian manipulation: “A fatal distortion of reality”. In this interview with deutschland.de, Julia Smirnova, senior researcher at the Center for Monitoring, Analysis, and Strategy (CeMAS), explains how Russia conducts information warfare, from disinformation and cyberattacks to AI-driven propaganda, using Germany as a key target within a broader strategy to destabilise democracies.
- The Catholic News Agency of Mexico, a distribution tool for pro-Russian disinfo campaigns. A joint investigation by Maldita.es and Verificado.mx reveals how a Mexico-based Catholic news outlet systematically republishes pro-Kremlin disinformation in Spanish. By blending religious narratives, culture-war content and conspiracy theories, the outlet acts as a cross-border channel for Russian influence operations targeting Spain, Mexico and the wider Spanish-speaking world.
- Georgia’s information war. A new EU-backed study reveals how foreign influence operations and domestic disinformation are reinforcing an emerging “informational autocracy” in Georgia, using coordinated narratives and manipulation tactics to shape public opinion and undermine resilience.
- 2025 in review: Russian disinfo in focus. In this year-in-review analysis, EUvsDisinfo (EEAS) looks back at how Kremlin-aligned actors used disinformation throughout 2025 to project strength, exaggerate military success in Ukraine, and shape the terms of future negotiations. The piece shows how narrative control remains central to Russian FIMI strategy.
- Climate misinformation, a national security risk. Writing in The Conversation, Sadaf Mehrabi warns that false narratives around wildfires, evacuations and climate policy are already undermining public trust in Canada, weakening emergency response and putting lives and infrastructure at risk.
- Fact-checking works on Facebook. Drawing on 18 months of research from Sciences Po (December 2021–mid-2023), this Indicator piece shows that Facebook’s fact-checking programme curbed the spread of disinformation. The findings are especially relevant as platforms shift away from independent fact-checking models.
- Useful tools for researchers:
- Financial OSINT: Tracing corporate assets & networks. A practical OSINT guide to mapping company ownership, financial links, and hidden networks using global corporate registries.
- AI-Assisted OpSec Self-Assessment Handbook. A concise handbook to help researchers identify digital risks and strengthen operational security in an AI-driven environment.
This week’s recommended podcast
This week, our recommended listen, proposed by our Project Officer Inès Gentil, is the podcast series The Wargame, produced by Sky News and Tortoise Media.
The series uses a fictional simulation of a Russian hybrid attack on the UK to explore how modern crises unfold across military, cyber, and information domains. The strategic wargame features former senior policymakers, military officials and security experts reacting in real time, offering a rare insight into how decision-makers conceptualise and respond to hybrid threats.
Although the podcast does not focus solely on disinformation, it is particularly relevant for understanding information attacks as part of broader “grey-zone” strategies, where cyber operations, narrative control, media pressure and disinformation are used to destabilise societies. The series highlights how confusion, competing narratives, and uncertainty are deliberately weaponised to shape public perception and constrain political decision-making during crises.
By placing information manipulation alongside cyber and kinetic threats, The Wargame helps contextualise disinformation as a strategic tool of statecraft, rather than an isolated online phenomenon. For civil society, researchers and policymakers working on FIMI and democratic resilience, the podcast offers insights into how information operations are understood, and sometimes underestimated, at the highest levels of crisis management.
👀 Spotted: EU DisinfoLab
📍 Paris
Our own Alexandre Alaphilippe joined an informal gathering of part of the French counter-disinformation community in Paris last week. Discussions covered recent disinformation campaigns, emerging tools such as AskVera, and the future of fact-checking. The range of topics, and of participants, was broad.
Fancy joining?
There was clear interest in meeting on a regular basis. EU DisinfoLab will help bring the community together, and we’d be glad to connect anyone interested in joining or setting up future meetups, in Paris and beyond!
Events & announcements
- 23-24 January: The Political Tech Summit, to be held in Berlin, offers an opportunity for political professionals working at the intersection of tech, campaigning, and democracy to exchange knowledge and discuss fresh perspectives shaping digital politics.
- 23 January-June 2026: The Cyber for Good Media programme will take place, with the mission of protecting and better equipping journalists against interference and manipulation in the digital space, with a focus on OSINT and cybersecurity.
- 31 January-1 February: FOSDEM 2026, a free open source software event, will take place in Brussels, Belgium.
- 16-17 February: The DSA and Platform Regulation Conference will take place at the Amsterdam Law School, to reflect on the DSA and European platform regulation, providing an opportunity to discuss its broader legal and political context, through the overall theme of platform governance and democracy.
- 25 February: This year’s Digital Platforms Summit 2026 will examine how the Digital Markets Act (DMA) is reshaping online markets and enforcement, while looking ahead to the upcoming Digital Fairness Act (DFA). The event will explore platform governance, consumer and child protection, dark patterns, interoperability, and the future of EU digital regulation, alongside new research findings from CERRE.
- 8-10 April: The Cambridge Disinformation Summit is expected to gather the world’s leading scholars, professionals, and policy-makers to explore interventions on systemic risks from disinformation.
- Other opportunities:
- The Data Tank is inviting small and medium media and fact-checking organisations to join a new action research project aimed at building collective leverage over Big GenAI, to protect media sustainability and information integrity across Europe.
- Facts For Future, an Erasmus+ training course in Germany, is equipping youth workers with media literacy tools to counter climate disinformation and strengthen democratic, fact-based climate dialogue.
- CPDP 2026 Call for Papers under the title Competing Visions, Shared Futures for a conference to be held on 20-22 May 2026.
Jobs
- The University of Washington’s Center for an Informed Public is accepting applications for postdoctoral scholars conducting research on information integrity, online manipulation and propaganda and healthy information systems.
- OpenAI is looking for a Global safety response operations analyst.
- Earthsight is recruiting two researchers to work on cutting-edge investigations.
- ActiveFence is looking for a Freelancer – OSINT / WEBINT / Intelligence Researcher.
- NewsGuard is looking for a full-time Staff Reporter and a Politics Reporter.
- Moonshot is looking for an OSINT Analyst.
- The Journal of Marketing Management has issued a call for papers on The Disinformation Economy: Digital Markets of Influence, Conflict, and Polarisation.
- The Interdisciplinary Transformation University (IT:U) is hiring two PhD students in Human Rights and Technology.
- The Center for the Study of Organised Hate is looking for a Researcher, Disinformation & Influence Operations.
- The Centre for Information Resilience has opened its talent pool for an OSINT investigator (contractor, Russian/Ukrainian speaker).
- AFP is now accepting applications for its Digital Investigative Journalist position in Seoul, Islamabad, and Jakarta.
- The Center for Democracy & Technology is hiring interns and policy professionals to advance digital rights, internet freedom, and responsible technology governance.
Did you find a job thanks to the listing in this newsletter? We’d love to know – please drop us a message!
Have something to share – an event, job opening, publication? Send your suggestions via the “get in touch” form below, and we’ll consider them for the next edition of Disinfo Update.
