Dearest readers,

Bonjour! We’re kicking off February with cold weather and plenty of rain, but the weather won’t stop us from bringing you exciting updates and developments – some challenging, others very promising.

We’re thrilled to announce that the #Disinfo2026 conference call for proposals is now open until the end of February! Don’t miss the opportunity to share the topics and ideas you’d like to see covered at our annual conference, which will take place in Vilnius, Lithuania, on 6–8 October 2026.

In case you missed it, we’ve got four hubs, and they’ve just been updated! In this edition, we’re giving you access to the best each one has to offer. Check out our Climate Clarity Hub, AI Disinfo Hub, Conflict & Crisis Hub, and Doppelganger Hub. 

Key investigations continue to unfold, from TikTok protests for profit to wikilaundering by PR firms, highlighting the need for platform accountability. Meanwhile, the EU’s #ClimateFactsMatter campaign is addressing climate disinformation and supporting public resilience.

And from Brussels, two significant updates you don’t want to miss: The European Democracy Shield draft report is out and the Commission’s decision on X’s fine has been published, but… not by the Commission itself. 

This newsletter brings you more than the latest on disinformation: we also recommend an excellent book, sneak a peek into a community member’s LinkedIn 🙃, share where our EU DisinfoLab colleagues go when they’re not in the office, and highlight upcoming events, job opportunities, and more… including a little trick to bypass X restrictions without logging in 😎.

Stay informed and stay involved! Thanks for reading.


Our Webinars

UPCOMING – REGISTER NOW!

5 February: Who is most vulnerable to AI-generated mis/disinformation? Psychological drivers of media literacy and belief in harmful online content

The session will explore the psychological factors that shape how people respond to false or misleading content, with a particular focus on AI-generated and online misinformation. Led by Dr Jason Potel, Visiting Researcher at Goldsmiths, it will draw on existing research to show how people may be more likely to believe or share misleading information when they feel a lack of control, confidence, or social connection.

12 February: Scroll, stop, verify. How the EU is bringing climate facts forward

Jeremy Herry from the European Commission (Directorate-General for Climate Action) will introduce its newly launched campaign #ClimateFactsMatter. This will be followed by Stephanie Hankey from Tactical Tech, who will share insights on climate disinformation trends and solutions from her latest publication, The RePlaybook.

PAST – WATCH THE RECORDINGS!
  • Climate deception ‘on air’ | Mainstream TV and radio remain among the most powerful information channels, and increasingly, vectors of climate disinformation. In this webinar, Louna Wemaere (QuotaClimat) presents comparative research from France and Brazil, showing how broadcasters amplify misleading and false claims about renewables, electric vehicles, energy prices, deforestation, and climate policy, shaping public opinion and undermining democratic debate. 
  • Pop fascism: memes, music, and the digital revival of historical extremism | Franco in glitter sunglasses. Mussolini dancing in a school corridor. Hitler celebrating like Cristiano Ronaldo. These are not “just jokes”; they are disinformation tactics. This webinar exposes how “pop fascism” operates by laundering extremist ideology through memes, music, football fandom, AI-generated videos, and nostalgic aesthetics. With Coral García (Maldita.es) and Francesca Capoccia (Facta).

Don’t miss out – watch the recordings and explore all our past EU DisinfoLab webinars.


Disinfo news & updates 

Major investigations
  • TikTok protests for profit. A Maldita.es investigation found 550 TikTok accounts posting over 5,800 AI-generated “protest” videos linked to Iran, Venezuela, Palestine and more, amassing over 89 million views. Creators use polarising, emotional AI content to rapidly build follower counts, then generate profit via TikTok’s Creator Rewards Programme or by selling accounts – a practice the outlet argues breaches platform rules and raises Digital Services Act compliance concerns.
  • China-linked spoofed media network. Graphika uncovered a network of 43 domains and 37 subdomains posing as reputable outlets like The New York Times, The Guardian and The Wall Street Journal to push pro-China messaging. The infrastructure shows technical links to Chinese companies tied to prior pro-China campaigns and overlaps with Spamouflage-linked amplification on Western platforms.
  • Wikilaundering exposed. The Bureau of Investigative Journalism reports that London PR firm Portland Communications allegedly ran a covert “wikilaundering” operation, using subcontractors and sockpuppet accounts to edit Wikipedia pages for governments and billionaires in ways that buried critical reporting and amplified favourable narratives. The investigation warns the stakes are rising as Wikipedia content is increasingly reused by AI search summaries and chatbots, meaning subtle edits can shape public perception far beyond the platform itself.
Resilience & Security
  • Germany, Italy back anti-disinfo hub. Both countries pledged to strengthen the European Centre of Excellence for Countering Hybrid Threats after the US, which dismissed it as “wasteful”, withdrew under Trump. The move underlines Europe’s focus on information resilience amid Russian hybrid threats.
  • Germany targets “disposable agents”. Germany plans to amend its Criminal Code to punish “disposable agents” acting on behalf of foreign powers, amid sabotage investigations that authorities suspect may be linked to Russia. The move aims to strengthen penalties for deniable interference and espionage-related activity.
  • Portugal election disinfo. EDMO/IBERIFIER monitoring found 48 significant disinformation cases in Portugal’s 2026 presidential election, with anti-immigration narratives being the most prominent, often linked to “Great Replacement” framing. Researchers also flagged fraud claims and attacks spreading across X, TikTok, Facebook and Instagram.
Influence ops
  • Greenland anxiety disinfo. Pro-Kremlin accounts circulated fabricated videos impersonating Western outlets to claim Europe’s support for Ukraine has left Greenland exposed to Trump’s annexation threats. The clips spread across X, Telegram and TikTok and were amplified by Russian state and pro-Kremlin media.
  • Lavrov’s three-hour disinfo. Russia’s Foreign Minister Sergey Lavrov used his 2026 annual press conference, lasting nearly three hours, as a coordinated FIMI offensive against Europe, personally targeting EU leaders and portraying Europe’s security structures – NATO, the OSCE, and the EU – as hostile, weak and illegitimate. 
AI disinfo watch

🔎  For more AI-related disinformation news and resources, see our regularly updated AI Disinfo Hub.

Conflict information disorder 
Crisis information disorder 
  • New French health disinfo strategy. The government unveiled a national plan to combat health disinformation, creating an observatory and an “infovigilance” system to track viral falsehoods.
  • Germany rebuts RFK Jr. The German health minister rejected Robert F. Kennedy Jr.’s claims that German doctors are being prosecuted for issuing COVID vaccine or mask exemptions, calling them “completely unfounded” and “factually incorrect”. She said prosecutions only related to fraud or forged documents.
  • Nutrition myths go viral. A new review warns that social media is fuelling nutrition misinformation (detox ads, misleading “healthy” products), calling for stronger health literacy and evidence-based messaging.
  • “Weather manipulation” conspiracy surge. A research report finds that “weather manipulation” conspiracy theories spiked in 2025 around disasters and political events, spreading heavily on right-wing platforms (especially Truth Social). 

🔎 Explore more: Our Conflicts & Crisis Hub tracks key narratives, recurring tactics, and past cases, with archived items and resources to go deeper.


Policy & EU action
  • #ClimateFactsMatter: EU campaign against climate disinformation. The European Commission (Directorate-General for Climate Action) has launched #ClimateFactsMatter, an EU‑wide campaign to strengthen public resilience against climate disinformation. Built around prebunking, it helps people recognise manipulation tactics early and stay anchored in credible information.
    • BE AWARE, BE PREPARED, BE INFORMED. The campaign focuses on these three pillars and offers shareable content and a toolkit for stakeholders and amplifiers. EU DisinfoLab is partnering with DG CLIMA to support this effort with research‑based insights on how disinformation spreads.
    • When climate disinformation clouds reality, you see clearly. Recent studies show that 71% of Europeans report frequent exposure to disinformation, and among 25–39-year-olds, nearly 9 out of 10 encountered it in just the past week. The European Commission has therefore made countering climate-related disinformation a priority. By making the issue clearer and more accessible, the campaign encourages citizens across the EU to stay alert, recognise misleading claims, and feel confident challenging them when they appear.
    • Toolkit to tackle climate disinfo. The campaign offers tips on how to spot disinformation, videos, a dedicated toolkit for stakeholders and amplifiers, and practical examples that show how climate disinformation works. It is designed to reach people who care about climate change but don’t always have the time to cut through the noise, such as busy millennials balancing work and family life.
  • EU backs climate information integrity. On 20 January 2026, the EU endorsed the Declaration on Information Integrity on Climate Change launched during COP30 in Brazil, reaffirming its commitment to evidence-based climate debate.
  • Amsterdam bans fossil fuel ads. From 1 May 2026, Amsterdam will become the first capital city in the world to ban fossil fuel and meat ads in public spaces.
Media & narratives
  • France update: climate scepticism vs trust in science. A Fondation Descartes survey tracks how French perceptions of climate information have shifted since 2022, showing slightly higher climate scepticism, but continued strong trust in scientists and growing demand for clearer media coverage orientated towards concrete solutions and more practical advice.
  • Climate disinformation is reshaping geopolitics. NewClimate Institute argues that climate disinformation has become a geopolitical and security issue, increasingly targeting the politics of climate action (cost, trust, polarisation) rather than denying climate science.
  • Ad industry whistleblowers: “lip service” on climate & safety. DeSmog reports on an anonymous memo from senior advertising executives warning that major agencies and brands continue funding harmful online content and legitimising polluting industries.
Influence & obstruction

 “The only people who benefit from backtracking on climate action are the polluters, billionaires and powerful vested interests profiting from fossil fuels.”

– Mike Childs, head of policy at Friends of the Earth.

Guides & resources
  • Spotting climate disinformation. Euronews breaks down common tactics behind climate disinformation and shares practical tips to verify what you see online.
  • Pause and check. Yale Climate Connections shares simple “pause and check” prompts to avoid spreading misleading climate content.
  • Training opportunity: Strategic Climate Change Communication (Yale Online) is a 14-week online certificate helping professionals turn climate science into effective messaging. Applications open on 9 February 2026 (and close on 9 March 2026).

🔎 This edition’s Climate Clarity Corner brings together a few selected items on climate disinformation, building on the recently updated Climate Clarity Hub.


Brussels Corner

The Commission’s decision on X’s fine is published, but not by the Commission

The United States House Judiciary Committee has released the European Commission’s full decision behind its €120 million fine against X, before the EU had made the document publicly available.

In early January, we submitted an official access-to-documents request to obtain the Commission’s full decision behind imposing a fine against X. The Commission rejected our request, explaining it was still preparing a redacted public version, pending consultations with the concerned third party.

Only a few days later, the House Judiciary GOP published the decision with confidential parts already redacted. In a thread on X, the House Judiciary Committee framed the decision as evidence that the Commission is punishing innovation and using the Digital Services Act as a censorship instrument to target the platform for allowing free speech. This interpretation is in line with a broader disinformation campaign against the Digital Services Act (DSA), which has included hearings and reports portraying Europe as a foreign censorship threat. Moreover, the US imposed travel bans affecting European disinformation researchers and civil society – Clare Melford (GDI), Imran Ahmed (CCDH), Josephine Ballon and Ana-Lena von Hodenberg (HateAid), as well as former European Commissioner Thierry Breton. The House Judiciary Committee was not specific about whether the “censorship” it referred to was cracking down on deceptive use of the blue check mark, penalising the lack of transparency of X’s ads repository, or punishing the illegal blocking of researcher access to public data.

Against the backdrop of restrictions against civil society members working to defend digital rights, it is especially important to note that much of the evidence used by the Commission to establish X’s alleged non-compliance drew heavily on civil society contributions. This once again underlines the essential role European civil society plays in DSA enforcement, and the urgent need for stable, long-term and independent funding to support that work.

AI and copyright

On 28 January, the European Parliament’s Legal Affairs Committee (JURI) adopted a non-legislative report “Copyright and generative artificial intelligence – opportunities and challenges”, aimed at addressing the relationship between AI development and copyright law. The report is led by rapporteur Axel Voss (EPP) and ostensibly seeks to strike a balance between technological innovation and the protection of rights holders.

According to reporting by Contexte, key compromises on the report have been reached after prolonged negotiations. The report is closely watched by stakeholders, as it is currently the main parliamentary discussion on AI and copyright, ahead of the European Commission’s planned evaluation of the Copyright Directive, expected in mid-2026.

The final report appears to favour an “opt-out” mechanism allowing copyright holders and news outlets to exclude their content from training data. The principle the report follows is that technological developments must respect existing law, while existing law should not slow down innovation. Nevertheless, this approach seems to place more of the burden on rightsholders than on AI providers.

While the report is non-binding, it is expected to influence future legislative initiatives. A final vote on the report in the European Parliament’s plenary is planned in March.

European Democracy Shield Draft Report

The European Parliament’s Special Committee on the European Democracy Shield (EUDS) has published its draft report setting out the Committee’s findings and recommendations.

The report’s rapporteur (the Parliamentarian in charge), Thomas Tobé, presented the draft at an EUDS Committee meeting on 29 January. Overall, the report is more ambitious than the Commission’s communication released in November last year.

On one of the key proposals – the Centre for Democratic Resilience – the report points out that the Commission’s proposal lacks operational detail and calls on the Commission to give the Centre a clear mandate. During the meeting, Mr Tobé also called on the Commission to consider a financing mechanism for the Centre that would involve the major online platforms. Moreover, the draft report emphasises that the AgoraEU programme under the next Multiannual Financial Framework (MFF) should provide for funding that meets the needs of civil society organisations.

The deadline for tabling amendments to this report is 11 February. The Committee is expected to vote on 23 June, with a plenary vote so far pencilled in for July.


Reading & resources

  • UN chief: unity vs disinfo. UN Secretary-General António Guterres has recently warned that the world is entering an era of “chaos” marked by conflict, inequality, climate breakdown and weakening respect for international law. In outlining his priorities for 2026, he highlighted the need for “unity in an age of division”, explicitly calling out disinformation as a driver of polarisation and exclusion.
  • On DSA systemic risks. ISD published “Bridging Policy and Research under the DSA: A Structured Research Agenda for Identifying Systemic Risks”, outlining how researchers and regulators can better assess systemic risks under the DSA, including risks linked to asymmetric amplification of political content.
  • UK democracy, trust & Ukraine disinfo. A new Resilience & Reconstruction report argues disinformation in the UK works less by persuading with single falsehoods than by exploiting deep distrust in politics, media and institutions, weakening public resilience and shaping attitudes to Ukraine and Russia.
  • US authoritarian drift. In a New York Times Opinion column, journalist and author M. Gessen argues that one year into Trump’s second term the US is rapidly normalising authoritarian practices (detentions, intimidation, pressure on media and elections), urging civil society to act now while democratic space still remains.
  • Illiberal playbook. An analysis by the AUTHLIB project links legal, political and discursive attacks on civil society to a wider illiberal strategy in Europe, urging stronger EU enforcement and more resilient funding to protect democratic checks and balances.
  • Finland’s media literacy model. Finland teaches media literacy from the age of three as a national resilience tool against propaganda, and is now adding AI literacy to help students spot synthetic images and disinformation.
  • Trump censorship tracker. The American Sunlight Project has launched a public dashboard cataloguing alleged censorship actions linked to Trump’s second-term administration. Updated weekly, it tracks incidents by category, geography and actors – useful as a monitoring tool, though framed from an advocacy perspective.
  • The Baltic Flank (new Substack). Baltic reporters have launched a new English-language investigative newsletter tracking Russia’s hybrid warfare, from espionage and sabotage to propaganda and sanctions evasion, with a frontline perspective from NATO’s eastern flank.
  • M.A.T.T.E.R. – a newsroom meme checklist. A practical editorial checklist by Marcus Bösch (understanding-tiktok.com) to help journalists assess whether to cover or embed a meme, mapping its meaning, emotional “vibe”, format/platform dynamics, lifecycle, ethical risks (including manipulation/harassment), and real-world relevance before amplifying it.

The latest from EU DisinfoLab


This week’s recommended read

This week’s recommended read is brought to you by Katrina Luize Asmane, Policy Intern at EU DisinfoLab:

I have just finished reading “The Hour of the Predator” by Giuliano da Empoli, a striking and unsettling exploration of today’s global order. Through a series of encounters with autocrats and tech billionaires, da Empoli offers a reflection on how cynical scheming and force are increasingly determining international relations.

In what he calls “the hour of the predator,” the author suggests we are witnessing the collapse of the belief that rules and norms can restrain the struggle for power. As he writes, this moment is “essentially just a return to normality,” implying that the true anomaly was the short period when we imagined that the “bloody quest for power” could be curbed through systems of law and shared values.

“The Hour of the Predator” is a compelling and thought-provoking read for anyone trying to understand the forces reshaping international politics today.


👀  Spotted: EU DisinfoLab 

  • On 21 January, our Research Manager, Maria Giovanna Sessa, gave a lecture at the Winter School on EU Policy-Making “Analysing Tech Policies in Today’s EU” at the VUB Brussels School of Governance. The session focused on the overlaps between identity-based disinformation and FIMI, with hands-on recommendations on policy enforcement to tackle the problem.
  • On 26 January, our Executive Director Alexandre Alaphilippe was interviewed by the Portuguese outlet Gerador. He discussed several key issues, including the evolving disinformation ecosystem across the EU Member States, the role of platforms in amplifying and monetising false content, the limits of media literacy, the impact of foreign interference, and the growing risk posed by AI-generated content and deepfakes.
  • On 2 February, Ana Romero delivered a lecture at the Universidad Autónoma de Madrid on “Climate Change in Southern Europe: Building Transversal Skills for Understanding and Action”. The session explored the complexities of environmental disinformation, emphasising the balance between freedom of expression and preventing harm, as well as strategies like prebunking and debunking to combat misleading narratives.
  • Brussels community meetups. We regularly get together in Brussels for informal community meetups: good conversations, shared ideas, and the chance to put faces to names. If you’re interested in joining one of the next meetups, just reply to this email and let us know.

Events & announcements  


One thing we loved

After our daily scrolling through platforms, it’s refreshing to come across a post like this one. 
Inês Narciso flagged a compelling video by TikTok news creator Dylan Page (NEWSDADDY), who explains why he took a break after his content was repeatedly demonetised and unfairly removed.     

According to Inês, what makes the piece stand out isn’t outrage or conspiratorial framing, but clarity. Dylan offers a rational, well-researched account of how content moderation (and appeals) can fail at scale. Inês highlights three key takeaways:

  1. platforms often buckle under speed and volume rather than censoring with intent; 
  2. AI still struggles to moderate contextually at scale (especially when appeal feedback loops don’t work); 
  3. creators sharing experiences can help reveal systemic patterns that remain invisible without better transparency and accountability under the EU’s DSA.  

Jobs

Did you find a job thanks to the listing in this newsletter? We’d love to know – please drop us a message!


💡 Tip of the week 

Bypass X restrictions? Are you part of the crowd that left X for good reason, but colleagues, friends and family still share posts from X that you can’t properly access because you’re not logged in to the website? Here is a tip that might be useful: xcancel.com, a bypass provided by the same people who created Nitter. By replacing x.com with xcancel.com in any URL, you can access a post and read its replies without having to use any X account.
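For the curious, the trick is nothing more than a hostname swap – the path of the post stays exactly the same. Here is a minimal Python sketch of that rewrite (the account name and status ID are made-up examples, and the set of hostnames to match is our assumption):

```python
# Minimal sketch: rewrite an x.com post URL to its xcancel.com mirror.
# Only the hostname changes; the path (user/status ID) is kept as-is.
from urllib.parse import urlparse, urlunparse

X_HOSTS = {"x.com", "www.x.com", "twitter.com", "www.twitter.com"}

def to_xcancel(url: str) -> str:
    """Swap an x.com/twitter.com host for xcancel.com; leave other URLs untouched."""
    parts = urlparse(url)
    if parts.netloc.lower() in X_HOSTS:
        parts = parts._replace(netloc="xcancel.com")
    return urlunparse(parts)

print(to_xcancel("https://x.com/SomeAccount/status/1234567890"))
# → https://xcancel.com/SomeAccount/status/1234567890
```

Of course, simply editing the address bar by hand works just as well – the snippet only makes explicit what the tip asks you to do.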

Have something to share – an event, job opening, publication? Send your suggestions via the “get in touch” form below, and we’ll consider them for the next edition of Disinfo Update.