Dear Disinfo Update reader,
The threats to the information space and democracy are escalating rapidly. In this edition of our newsletter, we’re, of course, focusing on the German elections, which will be held in a few days. Additionally, we’ll explore some of our usual suspects – climate change disinformation, genAI, FIMI – and dive into the fallout from the US power shift. Our best shot at facing these challenges and pushing back is to work together – sharing insights, strategies, and resources across borders and disciplines.
An opportunity to do just that is our annual conference, bringing together experts, researchers, and practitioners from across the field. The call for #Disinfo2025 session proposals closes in a few days – last chance to shape the agenda! We would like to thank the first contributors, who have already proposed over 80 topics this year. If you have ideas, research, insights, or strategies to share, submit your proposal and help drive the conversation forward.
The counter-disinformation community is only as strong as its connections, so let’s strengthen them: discuss, challenge, collaborate. Check out below our upcoming webinars, recommended reads, the latest from EU DisinfoLab and the team, as well as key events and job openings. The challenges are mounting, but so is our collective knowledge.
Our webinars
UPCOMING – REGISTER NOW
- 20 February: Clearing the air: building trust against climate misinformation storms | How do false or misleading narratives about climate change spread across social media platforms? This session with Jodie Molyneux from Resolver will explore case studies, including the LA wildfires and East Coast hurricanes, that demonstrate how mis- and disinformation can cross over into other risk areas – hate, harassment, health misinformation, and ultimately violence.
- 27 February: Countering gender and identity disinformation – threats and strategies | Gender and Identity Disinformation (GID) is a growing and sophisticated form of information manipulation targeting women, gender-diverse individuals, and LGBT+ communities to undermine democracy and fuel division. In this webinar with Kate Saner from Artemis Alliance, we’ll explore how malign actors exploit social tensions, and showcase evidence-based strategies to counter GID, drawing from case studies in Armenia, Poland, and Georgia.
- 6 March: Melodies of malice: understanding how AI fuels the creation and spread of extremist music | In this session, we will explore “Melodies of Malice”, a report by the Global Network on Extremism and Technology (GNET), with Heron Lopes, uncovering how far-right online communities harness AI to craft music propaganda and spread disinformation across platforms like Facebook, Instagram, and YouTube.
PAST – WATCH THE RECORDINGS!
- Free Our Feeds: a path to build open, public interest social media | Social media is changing fast, with open protocols and new products emerging and bringing about the promise of change. This session with Robin Berjon, IPFS Foundation / Free Our Feeds, explored opening the door to public interest social media infrastructure and a much healthier information environment.
Disinfo news & updates
Climate disinfo heating up
- Trading green goals. The EU’s delegation to the US has hired a consultancy with ties to fossil fuel lobbyists and climate science denialists, to advise on trade and public engagement – raising questions about how such a firm could align with the European Union’s climate commitments.
- Weaponising climate lies. A report by Poland’s military counterintelligence service finds that Russia has been spreading climate disinformation in Poland as part of a “cognitive war” to sow division and weaken trust in democratic institutions. The report warns that Poland’s response has been inadequate and calls for stronger policies to counteract disinformation “as a matter of national security”.
Platforms shape perception
- Verified fake. X granted official verification to a fake account impersonating Malta’s president – while her legitimate account remains unverified. The imposter amassed 50,000 followers, pushed political content, and promoted a supposed national cryptocurrency before being suspended.
- CaaS (Censorship as a Service). An investigation reveals that Google has facilitated requests from autocratic regimes to remove unfavourable content from its platforms.
- Profits trump principles. Google executives have defended the company’s decision to roll back diversity initiatives and drop its pledge against developing AI for weaponry and surveillance, citing legal compliance and evolving geopolitical considerations. Civil society groups and worker activists warn that these changes prioritise profit and political convenience over ethics, diversity, and human rights.
Germany’s democracy in the spotlight
- Democracy for sale? Ahead of Germany’s 2025 federal election, this investigation (in German) by CORRECTIV reveals how German political parties are running paid social media ads that spread false claims, hate speech, and misleading narratives. With Meta scaling back fact-checking efforts, these ads continue to reach hundreds of thousands of voters unchecked.
- The Mid-Atlantic Ridge: political edition. After U.S. Vice President JD Vance’s Munich speech criticising Germany’s policy to prevent building political coalitions with far-right parties, German Chancellor Olaf Scholz pushed back, calling foreign interference in the country’s elections unacceptable and rejecting Vance’s framing of political exclusion as an attack on democracy.
- Moscow’s voices. This investigation by Insikt Group reveals Russian-backed influence operations attempting to manipulate Germany’s elections through inauthentic news sites, AI-generated content, and media impersonation that amplify far-right messaging and undermine trust in democratic processes.
- Storm-1516 & R-FBI. This report from CEMAS.io and A4E (available also in German) exposes Russian influence operations “Storm-1516” and the “Foundation to Battle Injustice” (R-FBI) that aim to interfere in Germany’s federal election by disseminating fabricated news and AI-manipulated videos to discredit politicians, spreading false allegations, and inciting societal divisions.
- Destructive disinfo. German authorities are investigating a false flag disinformation campaign in which over 270 vehicles were vandalised across the country. Initially blamed on radical climate activists, the attacks were allegedly orchestrated by operatives paid by Moscow to stir up public anger against the Greens and their chancellor candidate. Read more here (in German).
- Legal challenges for data access. X is appealing a German court ruling that requires the platform to provide researchers with real-time data access for monitoring potential election interference. The case, brought by Democracy Reporting International, highlights the growing friction between tech companies and the EU’s Digital Services Act (DSA), which obliges platforms to share data for specific studies. As platforms become more confrontational towards these regulations, legal disputes are expected to grow more frequent.
- Access to (paid) data. This analysis by Digihumanism of coordinated election influence on X shows how paid data access can help to reveal astroturfing campaigns shaping online discourse ahead of Germany’s elections.
- Watching the narrative. Another resource for researchers who are monitoring disinformation related to the federal election in Germany is NewsGuard’s German Elections Misinformation Tracking Center, where false claims related to the vote are identified and debunked.
- AI on the campaign trail. This investigation (in German) from Tagesschau exposes a network of AI-generated fake accounts on X, Instagram, and TikTok that are spreading far-right messages, manipulating political debates ahead of Germany’s elections. Disguised as young women, these profiles promote anti-immigration rhetoric, amplify AfD campaign messages, and discredit political opponents.
More Russian influence in the pipeline
- Unplugging from Russian power. Lithuania, Latvia, and Estonia disconnected from Russia’s power grid, ending a decades-old reliance that leaders say left them vulnerable to geopolitical blackmail. Officials warn of potential retaliation, including disinformation campaigns aimed at undermining the transition.
- Fake news, real threats. Czech Senator Miroslava Němcová faced threats from Russian officials over a fabricated social media post falsely attributing to her a call for a Leningrad blockade. The fake statement, widely spread by Russian media, has led to harsh attacks, including derogatory remarks and death threats against her from former Russian President Dmitry Medvedev.
- Propaganda without borders. A website called “International Reporters,” posing as independent journalism, has been flagged by Reporters Without Borders (RSF). The copycat organisation has been caught targeting international audiences with content from known propagandists, with narratives that align with Russian geopolitical interests.
…and finally: Greetings from across the pond!
- Orwellian void: USAID erased. The abrupt dismantling of USAID has created an information void, sparking public confusion and a surge in Google searches. With its official website offline, misinformation is spreading unchecked, reshaping perceptions, eroding trust, and threatening global humanitarian efforts.
- Blending the past & present. A Guardian investigation reveals that US Immigration and Customs Enforcement (ICE) has manipulated Google search results by resurfacing years-old press releases about immigration arrests with updated timestamps. This has created a misleading impression of widespread recent enforcement actions, fuelling fear and confusion among immigrant communities.
- No regulation is the best regulation. In his confusing Munich Security Conference speech last weekend, US Vice President JD Vance bashed Europe, accusing its governments of retreating from their values while painting efforts to combat disinformation as an assault on free speech.
- AI: boom or balance? Earlier the same week at the AI Action Summit, Vance urged Europe to ease AI regulations, warning that strict rules could stifle innovation. The UK and US refused to sign the summit’s final statement, reportedly due to concerns over its call for “inclusive and sustainable AI.”
- Disarming democracy. The Trump administration is dismantling key federal teams that combat foreign election interference, including the disinformation monitoring unit of the Cybersecurity and Infrastructure Security Agency (CISA). CISA has frozen all its election security activities, cutting off support for state and local officials. Experts and election officials warn that this weakens defences against disinformation targeting American voters.
- Never heard of it. During his Senate confirmation hearing, Trump’s FBI director pick Kash Patel denied ever promoting QAnon – despite years of documented evidence to the contrary.
- Public sector, private interests? Former Tesla engineer and Elon Musk ally Thomas Shedd is now leading the US General Services Administration’s tech arm, pushing an “AI-first strategy” to reshape federal operations. He has reportedly said he wants an agency that operates like a “startup software company” – a vision that has raised concerns over the prioritisation of private industry interests, a lack of transparency, and the risks of AI-driven misinformation in government services.
Brussels corner
- From promise to practice. On 13 February, the European Commission and the European Board for Digital Services endorsed the formal integration of the Code of Practice on Disinformation into the Digital Services Act (DSA) framework as a code of conduct. This means, among other things, that adherence to the code can be considered an appropriate risk mitigation measure, and that it can be included in the audits carried out to verify compliance with the DSA. Although codes of conduct are nominally voluntary, the DSA stretches that notion to its limits. For example, recital 92 states that “the refusal without proper explanations by a provider of an online platform or of an online search engine of the Commission’s invitation to participate in the application of such a code of conduct could be taken into account, where relevant, when determining whether the online platform or the online search engine has infringed the obligations laid down by this Regulation.” What participation means in this context is also unclear. The DSA also deviates, without explanation, from the Council of Europe’s best practices for effective legal and procedural frameworks for self-regulatory and co-regulatory content moderation mechanisms, as it includes no way of expelling non-compliant companies. The Code of Conduct on Hate Speech has already been integrated into the DSA framework.
- Interpret wisely. The first Work Programme of the newly installed European Commission was published on 11 February. It contains many elements which offer hope – or not, depending on how they are interpreted. For example, there is a promise to ensure effective implementation of legislation, which is crucial for the fight against disinformation, while the promise of simplification will sound to many like a promise to deregulate. Similarly, the prioritisation of the protection of democracy offers hope, while the spreading of responsibility for disinformation across Vice-President and High Representative Kallas, Commissioners McGrath (Democracy Shield, etc.), Virkkunen (DSA, etc.), Lahbib (media literacy, including online and AI threats), and others may seem to many not to be a simplification. One point is, however, unequivocal – namely that “this Commission will step up its engagement to support, protect and empower civil society”, with a specific strategy planned for the third quarter of 2025, presumably at the same time as the democracy shield, which is also planned for Q3.
- DSA enforcement at a glance. The NGO European Digital Rights (EDRi) has prepared a very helpful and extensive database of key information about enforcement of the EU Digital Services Act (DSA). This includes details regarding the national enforcement body (Digital Services Coordinator or DSC) in each Member State, investigations launched by each DSC, as well as the so far around 50 cases launched by the European Commission in relation to matters under its competence. The resource is available here.
- Digital sovereignty! A group of 42 Members of the European Parliament (MEPs) has signed a letter to the European Commission promoting investment in European digital sovereignty. The letter sets out six core strategies for a reliable, clean, autonomous, competitive digital infrastructure in Europe.
Reading & resources
- Curated free speech. This commentary by LSE’s Grantham Research Institute examines how Meta’s claim to champion free speech is undermined by its algorithms designed to prioritise engagement over facts, fuelling disinformation, harming democracy, and eroding trust in public debate. Without transparency and accountability, content moderation remains ineffective.
- Great freedom, great responsibility. This paper examines the tension between free speech and the fight against disinformation, particularly in light of Trump’s executive order to “restore freedom of speech” online. The author challenges the false framing of fact-checking as censorship, highlights the role of algorithms in shaping online discourse, and argues that algorithmic transparency and fact-checking are essential for democracy.
- Hacking the narrative. State-backed cybercriminals aren’t just stealing data – they’re exploiting cyber vulnerabilities to disrupt institutions and shape global narratives. This report shows how cybercrime is increasingly being used as a tool for foreign information manipulation and interference (FIMI).
- Tracking cyber & disinfo threats. The CyberPeace Institute has launched a new platform to track cyberattacks and threats targeting civil society. It combines insights from cybersecurity assistance and publicly recorded incidents, and plans to expand its focus beyond cyber threats to include disinformation operations.
- Tracking deepfakes. DeepfakesTracker.org, developed by the Social Media Lab at Toronto Metropolitan University, provides real-time monitoring and analysis of AI-generated deepfakes and manipulated media. Its Misleading and Manipulated Media Dashboard tracks content on X flagged by Community Notes members.
- Bias in, bias out. This analysis by ISD found that TikTok’s search and recommender algorithms consistently link marginalised groups to harmful and degrading content, reinforcing societal biases in the name of engagement and profit.
- Algorithmic bias. This computational analysis uncovers a significant engagement shift on X in mid-July 2024, coinciding with Elon Musk’s endorsement of Donald Trump. The findings suggest algorithmic changes disproportionately increased engagement with Republican-leaning accounts.
- Hate gets the algorithm’s love. This study reveals that, following Elon Musk’s acquisition of X in October 2022, hate speech on the platform increased by approximately 50%, with such content also receiving significantly more “likes”. Despite Musk’s commitment to reduce bot activity, the prevalence of inauthentic accounts remained unchanged during this period. The study highlights concerns about the platform’s moderation policies and their impact on the spread of harmful content.
- Training the next generation of investigators. This piece from the Reuters Institute explores how equipping young people with open-source investigation skills – verifying information and analysing digital evidence – can help counter falsehoods and strengthen democracy.
- Who checks the facts? In this article, Maldita founder Clara Jiménez explores the potential consequences of eliminating fact-checking mechanisms in media platforms. Drawing on recent developments, such as Meta’s decision to replace its fact-checking program with Community Notes, the article underscores the critical role that fact-checkers play in maintaining the integrity of information.
- Better notes. This study by Maldita analyses over a million Community Notes on X, revealing that fact-checking organisations significantly improve their accuracy and visibility.
- Understanding Community Notes. The Community Notes Dashboard, developed by Check First, is a platform providing real-time insights into the evolution, distribution, and impact of Community Notes on X.
- Manipulation for domestic audiences. Instead of an outright ban, Russia has systematically throttled YouTube, disrupting access while maintaining plausible deniability. This investigation reveals how the tactic of blaming “technical issues” limits access to independent content while protecting YouTube’s role in the domestic economy.
- States of disinformation. This report highlights how foreign adversaries, including Russia, China, and Iran, are targeting US states with tailored information operations. It provides state-specific case studies detailing the tactics and objectives of these foreign actors.
This week’s recommended read
This week’s recommended read comes from our Researcher Ana Romero Vicente. She highly recommends reading the report “Behind the mask: Investigating EPH’s coal exit claims” that delves into whether EPH, a major European energy company, is genuinely moving away from coal or simply greenwashing its efforts. The report suggests that by transferring coal assets to a subsidiary called EP Energy Transition (EPETr), EPH is maintaining its coal investments while presenting a cleaner image to investors and regulators, with claims like being “almost coal-free by 2025” and “fully coal-free by 2030.”
Commissioned by Beyond Fossil Fuels and carried out by FIND researchers, the investigation highlights the close ties between EPH and its subsidiary, both of which are owned by Czech billionaire Daniel Křetínský. The report raises an important concern: this restructuring may be more about appearances than real environmental change, potentially misleading stakeholders about the company’s true impact on the planet. This type of greenwashing is not just misleading, it obstructs meaningful climate action, distorting public perception and delaying the necessary steps to address the climate crisis.
The latest from EU DisinfoLab
- Update: AI Disinfo Hub. Our AI Disinfo Hub remains your go-to resource for understanding AI’s impact on disinformation and strategies to combat it. In addition to the AI news in this newsletter, you can find all the latest updates here – here’s a glimpse at what’s new:
- Close enough? The BBC warned in a recent report that AI assistants risk distorting its journalism, with research showing significant inaccuracies in AI-generated news responses. A study of leading AI chatbots found that over half of their answers had major issues, including factual errors and misrepresented BBC content.
- Multilingual fail. The world’s ten leading chatbots generate more false claims in Russian, Chinese, and Spanish than in English. In Russian and Chinese, over 50% of responses either contained false claims or failed to provide an answer, according to a NewsGuard audit conducted in seven languages.
- Friend and foe. This report by VIGINUM examines both the threats and opportunities of AI in combating information manipulation. It emphasises the need for global cooperation and best-practice sharing to strengthen defences against disinformation.
- Climate Clarity. We’ve also updated the Climate Clarity Hub, our platform featuring the latest insights, research, and tools to tackle climate disinformation.
- Let’s talk about disinfo. Our team has been busy tackling disinformation across the podcast world. Tune in to hear them share insights on some of today’s most pressing challenges:
- Some Dare Call It Conspiracy. Our Executive Director Alexandre Alaphilippe and Research Manager Maria Giovanna Sessa joined “Some Dare Call It Conspiracy” podcast to discuss disinformation from Doppelganger to identity-based disinformation and Illuminati. Listen to their insights here: Alexandre’s episode & Maria Giovanna’s episode.
- Big Laws for Big Tech. As online platforms shut down fact-checking programs, legal enforcement is becoming the only viable path to holding them accountable for the spread of disinformation. In this episode of “War Against the Trolls” podcast, Alexandre Alaphilippe explains the role of the Digital Services Act (DSA) in regulating tech giants, and how AI-driven content curation affects tracing disinformation. Listen now on Spotify or Apple Podcasts.
- Everyday FIMI. Foreign Information Manipulation and Interference (FIMI) is not just a political issue – it affects our daily lives in ways we don’t always notice. In the first episode of the ATHENA podcast, Alexandre Alaphilippe joins host Tim Bayer to unpack what FIMI is, and what we can do to fight back.
Events and announcements
- 19 February: The Maldita conference, From Russia with Hoax, will explore the strategies behind disinformation campaigns and foreign interference, and the growing risks these pose to Spain and Europe.
- 19 February: Climate Action Against Disinformation organises a webinar Techno-Feudalism: Big Carbon and Big Tech’s Disinfo.
- 24 February: Next in the series of EDMO BELUX Lunch Lectures, focus will be on climate change disinformation, with the session “Ecological knick-knacks, green sect, or prophecy? Sceptical argumentation in climate crisis discourse.”
- 24-27 February: The 13th edition of RightsCon, hosted by Access Now, will take place in Taipei, Taiwan, and online.
- 28 February – 1 March: The European Festival of Journalism and Media Literacy will take place in Zagreb, Croatia.
- 19-20 March: The third edition of Les Journées Infox sur Seine will gather the francophone community working on disinformation and fact-checking to meet and present their current work in Paris, France.
- 8 April: Save the date for this EDMO BELUX workshop & networking event “Mitigating disinformation in Belgium and Luxembourg: Where are we now, where are we going?”
- 9-13 April: The International Journalism Festival will be held in Perugia, Italy. The second edition of the Centre for Information Resilience (CIR) Open Source Film Awards ceremony will be organised as part of the event.
- 23-25 April: The 2025 Cambridge Disinformation Summit will discuss research regarding the efficacy of potential interventions to mitigate the harms from disinformation.
- 22-25 May: Dataharvest, the European Investigative Journalism Conference, will be held in Mechelen, Belgium. Tickets are sold out, but you can still join the waiting list.
- 17-18 June: The Summit on Climate Mis/Disinformation, taking place in Ottawa, Canada, and accessible remotely, will bring together experts to address the impact of false climate narratives on public perception and policy action.
- 18-19 June: Media & Learning 2025 conference under the tagline of “Educational media that works” will be held in Leuven, Belgium. Submit your proposal for a session by the end of January.
- 27-28 June: The S.HI.E.L.D. vs Disinformation project will organise a conference, “Revisiting Disinformation: Critical Media Literacy Approaches”, in Crete, Greece. The call for papers is open until 10 March.
- 15-16 October: #Disinfo2025 open call, final chance! Join the counter-disinformation community in Ljubljana, Slovenia, for two days of collaboration, learning, and debate. Have research, insights, or ideas to share? Don’t miss your opportunity to contribute – submit your proposal by 21/02! Interested in partnering with us as a sponsor for the conference? Get in touch (by replying to this email)!
- 22–23 October: The 17th Dubrovnik Media Days, under the theme “Generative AI: Transforming Journalism, Strategic Communication, and Education”, will take place in Dubrovnik, Croatia. The call for submissions is open until 16 March.
- 20-24 November: The 2025 Global Investigative Journalism Conference will be held in Kuala Lumpur, Malaysia.
- Boosting fact-checking activities in Europe. The European Media and Information Fund’s (EMIF) initiative offers grants to independent fact-checking organisations to enhance their capacity to counter disinformation, especially during critical events like elections and public health crises. The current funding round is open until 28 February.
- Funding for safer online spaces. Civitates has launched a call to support organisations in Hungary, Italy, Poland, Slovakia, Spain, Romania, and the Netherlands working to create safer, more inclusive online spaces by improving the enforcement of EU tech regulations. Applications are open until 14 March, and two online info sessions are organised on 26 and 27 February.
Jobs
- The Global Public Policy Institute (GPPi) is hiring a Research Assistant in Peace and Security based in Berlin.
- ProPublica is hiring a Deputy Data Editor and a Deputy News Applications Editor for hybrid roles based in New York.
- Civitates is hiring an Impact and Learning Manager in Brussels. Apply by 28 February.
- The International Fund for Public Interest Media is hiring a Senior Communications Associate in Washington, D.C., to lead media engagement, digital strategy, and public relations.
- Freedom of the Press Foundation is looking for a Senior Software Engineer for a remote, US-based position.
- European Digital Rights (EDRi) is looking for an Executive Director. Submit your application by 9 March.
- NewsGuard is hiring for multiple roles, including Staff Reporter for Media and Misinformation, and Staff Reporter for Business and Finance Misinformation.
- The Center for Countering Digital Hate has several open positions, US-based and hybrid, including Director of Development, Director of Advertising Industry Advocacy, Senior Communications Manager, and Senior Communications Officer.
- The Center for Democracy & Technology (CDT) is hiring a State Engagement Lead in Washington, D.C., and offering academic-year externships.
- Liberties is looking for Berlin-based interns to join as an Advocacy & Research Assistant, Communications & Campaign Assistant, or Operations Team Assistant.
- Bluesky is looking for a Senior Trust and Safety lead for a remote position.
Did you find a job thanks to the listing in this newsletter? We’d love to know – please drop us a message!
This good post!
Take a look at ‘Letter to My Captors’ from our former colleague Nicolas Hénin.
