Dear Disinfo Update readers,
Welcome to our first newsletter of 2025!
The year has barely begun, and things are moving at breakneck speed. In this issue, we spotlight Mark Zuckerberg’s policy reversal at Meta – what it means for fact-checking and global disinfo efforts. Our #Disinfo2025 open call is now live – your chance to propose topics for October’s Ljubljana conference. Plus, our new factsheets unpack the unique challenges Estonia and Slovenia face in tackling disinformation, and we’ve updated our AI Disinfo Hub with the latest insights on AI’s role in the evolving disinformation landscape.
There’s plenty more to explore in this Disinfo Update, from TikTok bans to greenwashing crackdowns and the ever-shifting world of Russian influence campaigns.
Keep reading!
Our webinars
UPCOMING – REGISTER NOW
- 16 January: Elections at risk? Disinformation campaigns observed in Romania and the role of the DSA in risk mitigation | In this webinar, we’ll examine some of the organised disinformation campaigns that targeted the Romanian social media space ahead of the presidential elections in November 2024. Join us as we explore how policy tools like the DSA can mitigate these risks and discuss actionable steps to effectively assess and respond to emerging risks related to elections.
- 30 January: New approaches to measuring the impact of counter-disinformation campaigns | In this webinar, Sviatoslav Hnizdovskyi, Founder and CEO of OpenMinds, will talk about his work on measuring the impact of disinformation and counter-narratives on public attitudes and behaviours.
PAST – WATCH THE RECORDINGS!
- The ban on Russian propaganda: circumventions, side effects, and impact on the 2024 EU elections | This webinar examined the extent and nature of Russian online influence operations surrounding the EU elections. With RT and Sputnik facing restrictions, this webinar explored the strategies employed by Russian proxy websites and social media channels to fill the gap.
- How to detect and analyse identity-based disinformation/FIMI | In this webinar, we explored the EEAS practical guide on using OSINT tools and methodologies to detect identity-based disinformation. Developed with EU DisinfoLab, it offers insights into protecting victims and gathering evidence.
Spotlight: Dry January
by Alexandre Alaphilippe
It wasn’t a deepfake, unfortunately. On Tuesday, 7 January, Mark Zuckerberg announced a sweeping reversal of Meta’s longstanding content moderation policies. The decision to end fact-checking in the US (for now), coupled with claims that Europe is “institutionalising censorship” and the relocation of US content moderation teams to Texas, has cast a dark shadow over the disinformation-fighting community and beyond. Wider concerns loom, too. By endorsing misleading arguments used to attack civil society, Zuckerberg risks exacerbating global harm.
U-turn and dead end
Beyond the policy shift itself, Zuckerberg’s tone raised further concerns. Blaming fact-checkers for alleged partisanship seems peculiar, especially considering Meta’s own statements. According to the company, 74% of surveyed users reported seeing either the right amount or too little information labelling. Moreover, Meta once lauded its partnership with the International Fact-Checking Network (IFCN) and the network’s rigorous standards for implementing the programme.
In parallel, the French outlet Contexte reports that Google intends to withdraw from its commitment to financially support fact-checkers in Europe under the “Strengthened” Code of Practice (sCoP). The announcement was made to sCoP stakeholders in December.
Will the Code of Conduct be strong enough?
The spotlight now turns to the European Commission and the Digital Services Act (DSA) to see whether these policy changes will impact Europe. As for Community Notes, touted as an alternative to combat disinformation, they’ve shown limited effectiveness.
The “Strengthened” Code of Practice obliges signatories to support fact-checking and provide fair remuneration. But the sCoP is scheduled to become a Code of Conduct on 1 July 2025, triggering a reshuffle of platform obligations. Companies have until 15 January 2025 to pick the commitments they want to keep binding in the future Code of Conduct.
Blue Monday
According to that ancient (and totally scientific) marketing tradition, the most depressing day of the year will be Monday, 20 January 2025. So, brace yourselves… or don’t. After all, it’s just another Monday.
Disinfo news & updates
Green claims and false flames
- Greenwashing unbanked. The UK’s advertising regulator has banned a Lloyds Bank ad promoting its “low carbon” policies, ruling it misled consumers by touting operational emissions cuts while omitting its £16 billion financing of fossil fuel projects. This decision, sparked by Adfree Cities’ complaint, comes after UN Secretary General António Guterres called for a global fossil fuel ad ban last year — marking a step towards greater accountability for greenwashing.
- Fire-fuelled falsehoods. Amid devastating wildfires in Southern California, misinformation has spread rapidly online, with bad-faith actors blaming diversity initiatives and nonexistent policies for the crisis. This article explores how weakened fact-checking on social media and climate denial fuel disinformation, overshadowing the real drivers of the disaster — climate change, drought, and infrastructure challenges.
All eyes on Meta… and TikTok
- Albania suspends TikTok. Albania’s Prime Minister announced a one-year ban on TikTok, starting in early 2025, following the fatal stabbing of a student linked to social media arguments. The EU Commission is reviewing the ban, reminding Albania that such measures should only be a last resort as the country aims for EU membership by 2030.
- And the US next? This article examines how a potential nationwide TikTok ban in the US could unfold, including legislation requiring ByteDance to sell its operations in the country and the practical impacts on users. From app store restrictions to degraded functionality, it explores the challenges of enforcing such a ban.
- TikTok & Romanian elections. The European Commission has launched an investigation into TikTok under the Digital Services Act (DSA) following allegations of foreign interference during Romania’s 2024 presidential elections. The probe was prompted by reports of disinformation campaigns on the platform, including false claims that European Commission President Ursula von der Leyen cancelled Romania’s elections. (Don’t forget to register for our webinar this week, where we’ll delve deeper into this case.)
- Meta’s Community (double) Standards. This report by AI Forensics reveals that Meta approved over 3,000 pornographic ads despite its allegedly strict Community Standards, while the same explicit content was removed when posted by regular users. The ads, which promoted dubious products and amassed millions of impressions in the EU, highlight inconsistencies in Meta’s enforcement of its policies.
- Meta turns a blind eye. As part of its decision to roll back content moderation rules, Meta is now permitting posts that allege “mental illness or abnormality” based on gender or sexual orientation. The company has also removed trans and nonbinary themes from its Messenger app, features that allowed users to customise their chat interfaces. The decisions have provoked backlash, including among Meta employees, about the company’s commitment to inclusivity, and raised questions over their potential to fuel hate speech.
FIMI: new tactics and counter-measures
- Matryoshka moves to Bluesky. The Russian disinformation network “Matryoshka” is now spreading anti-Ukrainian propaganda on Bluesky using AI-generated videos with fake expert voices. Previously active on X, the network has shifted to Bluesky, exploiting its growing user base.
- Russian bots & Croatian elections. A report by the Centre for Information Resilience suggests that Russian bot networks supported Croatian President Zoran Milanović in the 2024 election, amplifying his anti-NATO and pro-Russian rhetoric.
- Undesirable Recorded Future. Russian authorities have labelled US threat intelligence company Recorded Future an “undesirable” organisation, accusing it of aiding cyberattacks and propaganda against the country.
- Echoes of Russia. Last November, the German fact-checking organisation CORRECTIV suffered a DDoS attack, espionage, and a smear campaign. Digital traces of the attack were linked to a Russian IT firm with ties to the Ministry of Defence; the timing coincided with CORRECTIV’s reporting on Russian disinformation in Europe, suggesting a retaliatory motive. Read more here (in German).
- Telegram blocks Kremlin channels. Telegram has blocked access to channels belonging to major Russian state-owned news outlets across several European countries. This development follows earlier European Union sanctions on Kremlin-controlled media for propagating disinformation. Russian authorities have condemned the action as “political censorship” and threatened retaliatory measures.
- US sanctions on election interference. On 31 December 2024, the US Department of the Treasury’s Office of Foreign Assets Control (OFAC) announced sanctions against entities in Iran and Russia for attempting to interfere in the 2024 US elections. The action builds on an earlier announcement targeting foreign malign influence operations.
- Goodbye to Effective Countermeasures. The US State Department’s Global Engagement Center (GEC), established in 2016 to counter foreign disinformation, officially closed on 23 December 2024 after Congress declined to renew its funding. While the GEC faced criticism over its role and effectiveness, its closure raises concerns about the future of US efforts to combat foreign disinformation.
- Musk & interference. Elon Musk has come under fire for his alleged interference in European and UK political processes through his platform, X. In the UK, Musk has criticised Prime Minister Keir Starmer and explored ways to bolster the right-wing Reform UK party, including scrutinising its leadership and considering alternative candidates. Musk’s promotion of far-right AfD leader Alice Weidel in Germany has further sparked concerns about biased platform practices and manipulation of public opinion. These actions have drawn calls for stricter enforcement of the Digital Services Act (DSA) and raised questions about his broader political influence.
Policy shifts and new proposals
- Influencer accountability. Spain has passed a law requiring social media influencers – those with over 100,000 followers on one platform or 200,000 across multiple – to publicly correct false or misleading posts, aligning them with obligations already imposed on traditional media outlets.
- Insights from Spain. The Spanish Department of National Security (DSN) has published the third edition (2024) of the works of its “Forum against Disinformation Campaigns”. The publication compiles analyses, findings, and recommendations, aiming to enhance understanding and foster collaborative solutions to combat disinformation.
- Free the feeds. Could it be possible to access information without depending on algorithms set up by tech oligarchs? That’s the vision of Free our Feeds, a widely supported initiative that aims to build public-interest social media platforms.
Brussels corner
- Rise of the living dead: Media exemption rebranded. Remember the media exemption? (No? Really? Fine, here’s the backstory.) Now rebranded as a “media privilege” under Article 18 of the European Media Freedom Act, the exemption will be the subject of two days of discussions that the European Commission is hosting next week in Brussels with platforms and media lobbying teams. The upcoming guidelines aim to address seemingly minor yet critical issues, such as defining what qualifies as a media provider and deciding whether “media” content should remain online – even when it violates platform terms of service. Notably, stakeholders have conducted no assessments or studies to substantiate claims of systematic censorship of media on platforms. As our president Diana Wallis highlighted in 2023, this media exemption loophole is “creating a conflict between two laws: the DSA mandating platforms to do content moderation and the EMFA legally preventing them from doing it. This would not be a good look for the EU legislature, and until a decision of the Court comes, what will platforms do? They will likely stop moderating anything that comes close to being a ‘media’ just to avoid difficulties and costs.” Read more here.
- Shielding Democracy, for a year. The European Parliament has established a special committee, with a 12-month mandate, on the European Democracy Shield to address threats to democracy, including disinformation and foreign interference.
Reading & resources
- Europe Invasion’s playbook. This RFE/RL investigation of Europe Invasion, a high-impact X account spreading anti-immigration disinformation, exposes the tactics behind its billions of impressions. From fabricated personas to monetisation schemes, the article reveals how profit and propaganda intersect to fuel divisive narratives aligned with the rise of far-right movements in Europe.
- Western Balkans’ disinfo dynamics. This ISD report reveals how disinformation in the Western Balkans is primarily driven by local political actors, often amplifying foreign narratives to serve domestic agendas. It examines Serbia’s role as a hub for pro-Kremlin propaganda and the growing wave of anti-EU sentiment across the region.
- Science misinformation. This study by the National Academies of Sciences, Engineering, and Medicine explores the broader category of “science misinformation,” beyond specific fields like health or climate change. It examines systemic causes, the role of the information ecosystem, and proposes solutions to improve access to reliable scientific knowledge and reduce harm.
- IOHunter. This study introduces IOHunter, an advanced framework combining Graph Neural Networks (GNNs) and Language Models (LMs) to detect Information Operations (IOs) and identify coordinated manipulation across geopolitical regions, even with minimal data.
- Labeled IO data. This study addresses the lack of comprehensive control data for detecting Information Operations (IOs) on social media. It introduces labeled datasets from 26 campaigns, enabling the study of narratives, network interactions, and engagement strategies employed by coordinated accounts across various contexts.
- NextGen MAD? This article examines a “triple threat” to humanity: climate change, pandemics, and coordinated anti-science campaigns. Driven by the “5 Ps” – plutocrats, petrostates, pros, propagandists, and the press – these campaigns erode public trust and complicate efforts to combat global crises, threatening human civilisation.
- Exploring information warfare. The podcast “War Against the Trolls” dives into information warfare, featuring insights from experts including researchers, academics, defence personnel, and journalists.
- Money, power, climate policy. DeSmog investigates the deep ties between six fracking billionaires, climate denial groups, and the key figures shaping Donald Trump’s cabinet. The article uncovers how these influential actors, known for opposing climate action, have lobbied for appointments that could steer US energy policies towards expanded fossil fuel production.
- The power of misinformation. This study examines how power motives drive the sharing of content — particularly misinformation — on social media. Power-motivated users disproportionately spread falsehoods, often knowingly, in their quest to shape narratives, gain online influence, and reinforce their sense of power.
- Façade of legitimacy. Virtual offices offer benefits like flexibility and cost savings but can also be exploited by cybercriminals and disinformation actors to create a façade of legitimacy. By registering shell companies at virtual addresses, bad actors can obscure their identities, fund manipulative campaigns, and complicate OSINT investigations. This article examines how such setups facilitate fraudulent activity.
- Measuring the impact of GenAI. A recent collective paper from Carnegie outlines four fundamental questions to consider when assessing the impact of generative AI on the information ecosystem. Avoiding both GenAI doomsday scenarios and dismissive minimisation, the paper emphasises the importance of recognising the complexity of the systems in which GenAI operates. It calls for the establishment of meaningful baselines to effectively measure the changes introduced by this transformative technology.
This week’s recommended read
Raquel Miguel, Senior Researcher of EU DisinfoLab, recommends reading this Washington Post article and this Politico op-ed on the political pressure and threats suffered by NewsGuard.
In recent days, Meta’s decision to suspend its cooperation with fact-checkers has made headlines all over the world. One of its implications will undoubtedly be the economic fallout for part of the community fighting disinformation and related threats, such as Foreign Information Manipulation and Interference (FIMI). However, it is not the only threat facing this community: economic pressure now compounds the political pressure that has already been bearing down on it in the United States. One of the most emblematic cases is that of NewsGuard, a company that has been assessing the credibility of news websites since 2018 – offering advice to technology companies, nonpartisan assessments of online publishers to advertisers, and user-friendly tools to readers.
This article published by The Washington Post explains how NewsGuard has become a target of incoming Trump administration regulators and far-right Republicans in Congress. In addition, Steven Brill (co-CEO of NewsGuard and author of The Death of Truth) delves into the allegations in this Politico op-ed. Both articles shed light on this bleak picture and the threats facing professionals in the sector at a crucial moment for democratic integrity – and help explain why the work of the counter-disinformation community matters.
The latest from EU DisinfoLab
- #Disinfo2025 open call. The call for proposals for #Disinfo2025 in Ljubljana, Slovenia, on 15–16 October, is now open. Share your ideas, research, or insights on disinformation with a diverse community of experts. Submit your proposal by 21 February to be part of our 2025 annual conference!
- More country factsheets. We’ve added Estonia and Slovenia to our series on disinformation landscapes across Europe. Curious about other countries? Check out the full collection here.
- The latest from AI Disinfo Hub. We have updated our AI Disinfo Hub, your go-to resource for understanding the impact of AI on disinformation and strategies to combat it. This update includes highlights from the past few weeks, including Stanford University’s 2025 predictions – collaborative agents, AI scepticism, and new risks. We also explore the stir caused by a Financial Times report on Meta’s plans to expand the use and integration of user-generated AI profiles across its social media platforms. The plans sparked controversy, and subsequent reports revealed that the company had removed some AI-generated Instagram and Facebook profiles, created back in 2023, following confusion and controversial interactions.
Events and announcements
- 22 January: CERRE’s webinar “From Drafting to Impact: A Blueprint for Future-Proof EU Digital Laws” dives into EU digital laws like the GDPR, DSA, and AI Act, exploring better regulation principles, clearer evaluations, and better long-term adaptability.
- 23 January: The Sphera Media Lab in Brussels features independent media with talks, workshops, and screenings. The session “Frontlines of Truth” will explore disinformation in reporting on aggressive power structures and the vital role of independent media in democracy.
- 30 January: The International Institute of Communications will be hosting a legal counsel forum discussing AI and regulators.
- 24-27 February: The 13th edition of RightsCon, hosted by Access Now, will take place in Taipei, Taiwan, and online.
- 28 February – 1 March: The European Festival of Journalism and Media Literacy will take place in Zagreb, Croatia.
- 23-25 April: The 2025 Cambridge Disinformation Summit will discuss research regarding the efficacy of potential interventions to mitigate the harms from disinformation.
- 22-25 May: Dataharvest, the European Investigative Journalism Conference, will take place in Mechelen, Belgium.
- 18-19 June: The Media & Learning 2025 conference, under the tagline “Educational media that works”, will be held in Leuven, Belgium. Submit your proposal for a session by the end of January.
- 15-16 October: Our annual conference #Disinfo2025 will take place in Ljubljana, Slovenia. The call for proposals is now open – submit yours by 21 February!
- 20-24 November: The 2025 Global Investigative Journalism Conference will be held in Kuala Lumpur, Malaysia.
- Be a beta tester. Contribute to tackling online disinformation by testing a tool developed within the veraAI project to detect and analyse coordinated sharing behaviours on social media platforms. Join the Coordinated Sharing Detection Service beta testers here.
- Share your insights on platform accountability. The European Digital Media Observatory (EDMO) is conducting an EU-wide study to assess how well Very Large Online Platforms and Search Engines comply with the Code of Practice on Disinformation. Do you have experience with platform or search engine tools and partnerships related to media literacy, research collaboration, or fact-checking? As part of the EDMO network, we encourage you to share your insights by completing this survey. The findings will contribute to a better understanding of the effectiveness of current platform initiatives and inform future improvements.
Jobs
- The European External Action Service (EEAS) is hiring a Policy Officer (Data Analysis and Capabilities) in Brussels. Candidates should have experience in data analysis, information operations, and EU policy. Apply by 20 January.
- The Reuters Institute for the Study of Journalism (RISJ) at the University of Oxford is seeking a Director. Applications are open until 20 January.
- Ripple Research is hiring a fully remote Research Analyst to conduct evidence-based research on misinformation, public health, and climate change.
- The Institute for Strategic Dialogue (ISD) has several open positions, including Director/Senior Manager of Research and Policy (Berlin-based or remote within Germany).
- The Center for Countering Digital Hate (CCDH) is recruiting a London-based Senior Communications Officer to lead their UK and European media relations efforts and support strategic communications planning and operations.
- NewsGuard is recruiting for several roles, including Director, Marketing; Staff Reporter, Media and Misinformation; Contributing Analyst – UK; and Contributing Analyst – Ukraine.
- Moonshot has various London-based posts available, including Analyst (Arabic), Talent Pool – Interventions Analyst, and Talent Pool – Manager (Insights).
- CORRECTIV has several open positions across Germany.
- The European Centre for Press and Media Freedom (ECPMF) is hiring a Press & Policy Officer in Leipzig, Germany. Applications are open until 24 January.
- Internews is seeking a Monitoring, Evaluation and Learning (MEL) Specialist, Civic Defenders (remote within the UK or US) and a Senior Program Associate, Europe and Eurasia (based in Bosnia and Herzegovina, Lithuania or the UK).
- The Poynter Institute for Media Studies is hiring a Research Editor. The deadline to apply is 15 January.
Did you find a job thanks to the listing in this newsletter? We’d love to know – please drop us a message!
This good post!
