Dear Disinfo Update reader,

The response to our open call for #Disinfo2025 was incredible – well over two hundred proposals, packed with great ideas and fresh perspectives. The programme is taking shape, and you’ll be hearing more shortly. In the meantime, an important heads-up: registration opens soon! Follow us on social media to keep up with the latest.

Germany’s recent elections saw a surge in disinformation, platform biases, and foreign influence operations. Several studies shed light on what appears to be an increasingly manipulated information landscape, raising urgent questions about platform accountability. In this edition, we dive into the key dynamics at play. Scroll down for more insights.

Beyond Germany’s elections, this edition also explores platform policies, AI’s role in disinformation, and Russia’s evolving influence operations. From Meta’s latest content moderation shift to TikTok’s trust and safety cuts, and new research on Kremlin-backed AI manipulation – read on for the latest developments.

Our webinars

UPCOMING – REGISTER NOW

PAST – WATCH THE RECORDINGS!

Disinfo news & updates

Germany votes

In the weeks leading up to the German elections on 23 February – in which the far-right AfD finished second – numerous investigations sought to shed light on the role of platforms in the dissemination of disinformation, or on their potential bias in favour of that party. Shortly before the election, several organisations published reports on the behaviour of platforms like X and TikTok. (Others could not, for lack of access to data – access that X itself contested in court.) The results vary depending on the design and objectives of each study and do not always demonstrate violations of the Digital Services Act (DSA), but they underline the growing concern about platforms prioritising the content of certain parties and about their role in what appears to be an increasingly manipulated information landscape.

  • Court commands, X defies. Elon Musk’s X challenged a German court ruling forcing it to grant researchers real-time access to election-related data ahead of the country’s February vote. This case is a key test of the EU’s Digital Services Act, highlighting tensions between transparency and platform control.
  • Far from right. New research by Global Witness reveals significant far-right bias in the recommendation algorithms of TikTok and X ahead of Germany’s federal election. Testing showed TikTok recommended 78% far-right content, while X recommended 64%. The platforms primarily promoted AfD party content, highlighting a right-leaning skew. 
  • The Musk effect. X’s most viral content during Germany’s election was dominated by references to Elon Musk and his support for the AfD, according to this study by AlgorithmWatch and DFRLab, raising concerns about content moderation, transparency, and X’s role in shaping public discourse.
  • TikTok’s pulse. This analysis by ISD examines the risks of algorithmic amplification of political content on TikTok in the lead-up to Germany’s federal election, assessing potential “asymmetric amplification” of party-political content through the “For You” feed and the labelling of election-related videos. It found that AfD content was disproportionately represented among the first five political party videos played, and that only 59% of content from party or candidate accounts, and 47% from fan pages, was labelled as political.
  • Dual disinfo front. Ahead of the election, Russian influence operations and Elon Musk’s support for German far-right groups fueled a disinformation campaign that targeted the country’s democratic process. Researchers warned that a torrent of falsehoods, deepfakes, and bot-driven propaganda threatened to erode trust in political institutions. As the Kremlin and Musk reinforced each other’s narratives, German voters faced an unprecedented challenge in navigating a manipulated information landscape.
  • Weaponising climate. An investigation uncovered how disinformation tactics are being used to push Trumpian ideologies in Germany, especially regarding climate change. Politicians, think tanks, and economists are aligning with US groups that have a history of spreading climate denial and promoting anti-climate policies. Additionally, anti-climate attacks heated up before the German elections, and delay and denial clouded debate on green issues, with potentially devastating impacts on Europe’s energy transition.

Platforms: tech, tactics, troubles

  • Cuts over trust. TikTok is reducing its trust and safety team globally while increasing its reliance on AI for content moderation. The layoffs affect staff in the US, Europe, and Asia-Pacific, following a restructuring push by parent company ByteDance. The move has sparked concerns over misinformation and political manipulation on the platform.
  • Security or silencing? Meta has fired around 20 employees for leaking internal information. With recent policy shifts, layoffs, and the dismantling of DEI programs, morale within the company has reportedly taken a hit – raising questions about whether these firings are about protecting company secrets or silencing internal critics.
  • Fines and safety lines. Australia slaps Telegram with a 1 million AUD fine (approximately 600,000 EUR) for failing to promptly answer questions about combating child abuse and extremist content. The penalty highlights growing concerns over tech platforms’ responsibility in ensuring online safety.
  • Signals of attacks. Russia-aligned threat actors are increasingly compromising Signal Messenger accounts, targeting individuals of interest to Russia’s intelligence services. As Signal gains popularity among at-risk communities, such as journalists and activists, its secure platform becomes a key target for espionage. This growing threat is expected to expand beyond Ukraine, affecting other messaging apps like WhatsApp and Telegram.
  • Digital independence. Germany’s Economy Minister and Green chancellor candidate, Robert Habeck, criticises Europe’s reliance on tech giants like Elon Musk, warning that their power threatens democracy. He calls for a European alternative to curb Silicon Valley’s influence, stressing that unchecked tech power undermines European values and autonomy.
  • Boosting Beijing? This opinion piece in The Hill highlights concerns over YouTube amplifying pro-China narratives, including content creators with ties to state media. It examines how the platform boosts propaganda and calls for greater transparency in addressing foreign influence operations.
  • Virality over veracity. As Meta phases out third-party fact-checking in the US, it is also reshaping its creator bonus programs to reward content based on engagement. While aiming to boost activity on the company’s platforms, it’s likely to further incentivise sensational or misleading content, amplifying the spread of misinformation.

Russian manoeuvres

  • Silenced and fined. A Russian court has fined Google 3.8 million roubles (approximately 40,000 EUR) for hosting a YouTube video showing Russian soldiers how to surrender. This fine is part of Russia’s ongoing efforts to control content related to the war in Ukraine, with the government also accused of slowing YouTube’s speeds to limit access to critical material.
  • AI spins. This article is an overview of how Russia has spread 302 false claims about the war in Ukraine over three years, using AI and deepfake technology to amplify these narratives. The focus shifted from accusations of Nazism to claims of corruption against Ukrainian officials.
  • Digital manipulation efforts. This investigation reveals that the Russian “Pravda” website ecosystem, previously known as “Portal Kombat,” has significantly expanded in 2024. The operation, linked to the Crimea-based IT company TigerWeb, has broadened its reach across Europe, Africa, and Asia, repurposing hundreds of automated news portals to spread pro-Kremlin content. New domains and subdomains have targeted countries ranging from the US to African nations and New Zealand.
  • Russia off the radar. The Trump administration is deprioritising Russia as a cybersecurity threat, omitting it from official statements on cyber risks and reportedly instructing US analysts to stop tracking Russian cyber activities. Meanwhile, Defense Secretary Pete Hegseth has ordered US Cyber Command to halt all planning against Russia.
  • A fool and a tool. In this commentary, Nina Jankowicz unpacks how Trump, three years after Russia’s invasion of Ukraine, is not only siding with Putin but actively spreading Russian disinformation. His false narratives about Ukraine’s actions and leadership align with Russian propaganda, further undermining truth and supporting autocratic regimes.
  • Security first. Kaspersky, a cybersecurity company founded in Russia, has been banned from Australian government systems due to concerns over potential interference and espionage. The company’s products must be removed by 1 April.
  • Information void. Experts warn that the Trump administration’s halt on US foreign aid could trigger a surge in Russian disinformation throughout Eastern Europe. With independent media outlets in the region shutting down due to funding cuts, a void in credible information may emerge.

Brussels corner

  • Democracy Shield in action. The first substantive meeting of the new European Parliament Special Committee on the Democracy Shield was held in Brussels on 17 February. It consisted of four main sections, covering:
    • The VIGINUM report on the Romanian elections. The report was presented as a pedagogical tool showing how easily coordinated fake accounts can fool the recommender algorithms of the main social media platforms, a phenomenon exacerbated by (micro-)influencers. It was argued that a similar campaign would be possible elsewhere in Europe, although not necessarily with such heavy reliance on TikTok, which is particularly strong in Romania. It was also stressed that the offline elements of campaigning, including oversight of election expense rules, should not be neglected.
    • Insights from the German DSC on the country’s election. The presentation from the German Digital Services Coordinator (DSC), the Federal Network Agency, was positive about the enforcement tools available under the Digital Services Act (DSA) but highly critical of the lack of transparency of recommender systems. Measures like the European Commission’s guidelines for platforms during election periods were praised as valuable for the work of DSCs.
    • Findings from the EEAS Stratcom division. The speaker stressed that the work of the European External Action Service (EEAS) concentrates on foreign manipulation, not on adjudicating what is true or false. She presented key figures on the domains and accounts behind the Doppelganger campaign and outlined the elements of the “FIMI Toolbox”: the situational awareness platform, the Rapid Alert System run in cooperation with the Member States, resilience-building through the EUvsDisinfo database of disinformation examples, legislation, and international cooperation.
    • The rapid response mechanism under the DSA. The European Commission speaker made it clear that the mechanism is a last resort measure for extraordinary situations. A voluntary incident protocol has been developed to create a structured deliberation in case such a situation were to arise. The presentation mainly described other legal, self-regulatory and co-regulatory tools that can be relied on first before more exceptional measures.
  • A shield but no swords? French media outlet Contexte interviewed various stakeholders, revealing an emerging vision in Brussels regarding the scope of the Democracy Shield. National agencies such as VIGINUM would handle forensics and operational analysis under a “center for situational awareness”, while civil society and fact-checkers would focus on narrative analysis, coordinated by EDMO (or its successor). While this framework emphasises monitoring, it remains unclear whether it will include stronger enforcement mechanisms to tackle manipulation of public discourse at its roots.
  • CoP meets DSA – what now? With the EU’s Code of Practice on Disinformation now a co-regulatory instrument under the DSA, what changes in practice? This article explores key implications, from enforcement challenges to industry reactions, and what to watch in the months ahead.

Reading & resources

  • Disinformation playground. It’s official: social media is now the number one news source for young people in the EU, surpassing television and the print and digital press. As platforms like TikTok, Instagram, and YouTube take the lead, they also expose young Europeans to a growing risk of disinformation. With many struggling to identify fake news, the dangers go beyond misinformation itself, extending to effects such as heightened mistrust, anxiety, and ignorance.
  • Faster, trusted. Fundación Maldita.es analysed over 1.1 million proposed community notes on X, revealing that fact-checking organisations are the third most cited source. Despite the quality and speed of these fact-based notes in addressing misinformation, many remain hidden, as X’s algorithm favors consensus over accuracy.
  • Blow to fact-checking. Logically, a UK-based fact-checking startup, has laid off around 40 employees as part of a global cost-cutting effort. The company, which previously secured major contracts to combat misinformation, is transitioning to a more product-focused approach while restructuring its operations.
  • Together in this. The European Fact-Checking Standards Network (EFCSN) condemns the police raid on the Center for Research, Transparency and Accountability (CRTA), which runs the Serbian fact-checking platform Istinomer.rs. EFCSN describes the raid as an attack on civil society, freedom of expression, and independent media, and calls for an end to such intimidation tactics.
  • Scammers by force. Over 250 workers from 20 countries have been freed from telecom fraud centers in Myanmar’s Karen State and brought to Thailand. Lured by false job offers, they were forced into online crimes like cyber fraud and money laundering. Thai authorities are investigating potential human trafficking.
  • Anti-climate playbook. This InfluenceMap report reveals how the oil, gas, and utilities industries are using a shared playbook to delay building electrification globally, obstructing policies aimed at reducing fossil gas use. Case studies from Australia, the EU, and the US show how these industries spread misleading narratives to weaken climate action, risking public health and climate goals. Despite opposition from science-aligned organisations, fossil fuel industries continue to successfully hinder progress toward decarbonising buildings.
  • Clear Bluesky. Exploring Bluesky’s decentralised social network, this study uncovers familiar structural patterns, evaluates user-driven content curation, and finds minimal political polarisation in shared content. While users predominantly share left-center news sources with little misinformation, discussions on certain topics reveal issue-based divergences.

This week’s recommended read

Raquel Miguel, our Senior Researcher, recommends reading this report by VIGINUM – the French agency attached to the General Secretariat for Defence and National Security and tasked with protecting France and its interests against foreign digital interference – which offers a comprehensive overview of the many influence operations that Russia has deployed during the three years of war in Ukraine.

The report examines Russian operations targeting Europe (including France), such as RRN (also known as Doppelganger), Matryoshka, or Overload – campaigns targeting European media and fact-checkers. It also delves into the different tactics used in campaigns deployed in Europe, such as creating media outlets to circumvent European sanctions or weaponising offline actions on European territory. Finally, it examines influence operations launched in Ukraine and the occupied territories, such as Portal Kombat or Operation Mriya, as well as on the African continent, shedding light on Russia’s objectives beyond Europe.

This document is a must-read for anyone interested in Russian Foreign Information Manipulation and Interference (FIMI) who wants to learn about the influence operations launched in the three years since the beginning of the invasion of Ukraine. The paper offers a comprehensive overview of the topic and was written by one of the most experienced and knowledgeable organisations in the field.

The latest from EU DisinfoLab

  • Terms of (dis)service. Large language models (LLMs) are rapidly growing in use, raising concerns about their potential to spread misinformation. Our latest factsheet reviews the misinformation policies of 11 leading chatbots, focusing on text-generative AI and its wide-ranging applications. It examines key policy elements, content moderation, user reporting, and penalties for ToS violations.
  • More AI: Hub-date. Our AI Disinfo Hub remains your go-to resource for understanding AI’s impact on disinformation and strategies to combat it. In addition to the AI news in this newsletter, you can find all the latest updates here – and here’s a glimpse of what’s new:
    • AI models: A target for Russian influence? This NewsGuard report exposes efforts to manipulate Western AI models with pro-Kremlin narratives. Another investigation by The American Sunlight Project highlights similar risks, warning that Russian operations may be flooding large language models with propaganda.
    • Tackling AI misuse. OpenAI has published a report detailing its recent work to prevent the use of its AI tools for malicious purposes. The report includes two disruptions involving threat actors that appear to originate from China. According to the report, these actors used, or attempted to use, models built by OpenAI and another US AI lab in connection with an apparent surveillance operation and to generate anti-American, Spanish-language articles.

Events and announcements

Jobs

Did you find a job thanks to the listing in this newsletter? We’d love to know – please drop us a message!