Dear Disinfo Update readers,
Rules matter. But rules alone don’t create change. What makes a difference is how we act on them – and that requires a shift in mindset.
Amid the ongoing war in the Middle East and relentless pressure on European civil society, it is easy to dwell on the disinformation problem and forget what can be done to tackle it. This week delivered several wins on that front – and they matter not just as legal victories, but as signs of a deeper shift in how Europe is beginning to treat its own rules.
In the Netherlands, Meta lost its appeal against Bits of Freedom and must now offer Dutch users access to chronological, non-algorithmic feeds. In Germany, a ruling ordered X to provide data access to researchers, showing the DSA’s transparency provisions can be litigated and won. In France, preliminary investigations were opened after fake media outlets and fake accounts targeted political parties during the municipal elections – a broader enforcement signal that goes well beyond the DSA alone.
These developments point to something more important than any individual ruling: a mindset change. Regulation only works if you use it. The rule of law is already within reach. National courts, civil society organisations, media regulators – all have mechanisms to act, win, and force accountability without waiting for Brussels. Democracy does not collapse on its own. It only does if we let it. This week is proof that when people choose to make use of the tools available to them, illegal behaviour can be challenged and stopped.
Significant challenges remain: misleading content still generates revenue on major platforms, generative AI is accelerating deceptive content at scale, and geopolitical tensions continue to fuel coordinated campaigns. But the direction is right. We will continue to report on these developments in the weeks ahead.
Enjoy the read!
Our Webinars
UPCOMING – REGISTER NOW!
26 March. AI-generated content and DSA enforcement: who is accountable?
Generative AI is testing the foundations of the Digital Services Act. If systems like ChatGPT generate content rather than just host it, who is liable? Marco Bassini (Tilburg University) unpacks how the DSA applies to generative AI, its interaction with the AI Act, and what this means for enforcement and systemic risks such as disinformation.
2 April. How can civil society defend itself? The EDRN pilot story
Join us for a behind-the-scenes look at the European Democracy Resilience Network (EDRN) pilot, a joint initiative by the CyberPeace Institute and EU DisinfoLab that supports civil society facing hybrid threats. EDRN addresses disinformation, doxing, impersonation, and other digital attacks. Inês Narciso and Tanner Wagner will share key insights from the pilot.
9 April. EVIDENCE & ENFORCEMENT: Civil society evidence under the DSA: lessons from AI Forensics
Why register for this Insider session?
- You’re a CSO or a regulator and want to know how evidence was gathered and structured.
- You want to learn about the challenges and lessons learnt from engaging with the European Commission.
- You want to explore what risks emerged for AI Forensics, especially following the publication of the full decision that affected civil society contributors.
23 April. Case Study – Decoding Russian intelligence: What medals and insignia reveal
What can military badges and medals reveal about Russia’s information operations? In this webinar, Hervé Letoqueux (CEO, CheckFirst) presents findings from OSINT investigations showing how open-source images of Russian military insignia can help uncover hidden structures within the FSB’s 16th Centre and the GRU’s Information Operations Troops.
PAST – WATCH THE RECORDINGS!
DSA: Unfolding the European Commission’s first decision against X, with Laureline Lemoine (AWO). This webinar unpacks the European Commission’s first-ever DSA non-compliance decision – a €120 million fine against X – examining the legal reasoning, the key breaches identified, and what the landmark ruling means for civil society’s role in shaping DSA enforcement.
Synthetic friends: AI companions and the future of disinformation, with Massimo Flore (Aurora Fellows). Artificial intelligence is shifting from content generation to relational interaction. As AI companions increasingly inform, adapt, and sustain emotional continuity, persuasion may move from visible content flows to private, trust-based human–AI relationships.
🎥 Don’t miss out, watch the recordings and explore all our past EU DisinfoLab webinars.
🧡 A huge thank you to all our speakers, partners, and participants for making every conversation sharper, deeper, and more impactful. If your company or institution is interested in partnering with us or sponsoring our webinars, please reach out to discuss how we can work together: info@disinfo.eu
Disinfo news & updates
Where enforcement happens: focus on Member States
Dutch court orders Meta to comply with DSA. An Amsterdam appeals court ruled in favour of digital rights group Bits of Freedom, confirming that Meta must allow Facebook and Instagram users to choose a feed not based on profiling, as required under the Digital Services Act. The court also set potential penalties of up to €10 million if the company fails to comply. The case prompted parliamentary questions in Belgium about how the DSA requirement for non-profiled feeds should apply nationally and across the EU.
Access to data: Berlin court rules in favour of researchers monitoring the Hungarian elections. An analysis by Tech Policy Press examines the implications of a February 2026 Berlin court ruling ordering X to provide Democracy Reporting International (DRI) with access to publicly available platform data under the Digital Services Act. The case highlights how researchers may enforce their right to platform data in court and what it means for studying systemic risks such as election interference.
Spain launches a tool to monitor online hate speech and disinformation. Spain has launched HODIO, a monitoring system designed to track hate speech and disinformation on social media platforms. The tool will analyse public posts to measure the reach and volume of harmful content and publish biannual reports assessing how effectively platforms are addressing online hate and polarisation.
EDRi files a DSA complaint against YouTube over recommender system design. Digital rights organisation EDRi has filed a complaint under the Digital Services Act (DSA) with Belgium’s Digital Services Coordinator (BIPT) accusing YouTube of using deceptive interface design (“dark patterns”) to push users toward profiling-based recommendations. In the legal filing, the group argues that YouTube hides or complicates access to a non-profiling recommender system by burying it in complex settings and linking it to disabling YouTube watch history, effectively discouraging its use. EDRi says these practices, which we have labelled “sludge”, may breach DSA rules on transparency, accessibility, and manipulative interface design.
Enforcement & platform accountability
X proposes changes to blue checks after EU fine. Elon Musk’s platform X has submitted a plan to the European Commission outlining how it will change its paid “blue check” verification system after being fined €120 million under the Digital Services Act. The platform also had until 16 March to pay the penalty, while regulators review whether its proposed fixes comply with EU rules against deceptive design.
EU moves to ban AI “nudification” apps after Grok deepfake scandal. EU lawmakers are pushing to prohibit AI systems that generate non-consensual sexualised images of real people after controversy surrounding X’s chatbot Grok, Politico reports. The proposed ban could take effect this summer as part of updates to the EU’s AI rulebook.
Meta report: AI boosting scams and influence operations. In its H1 2026 Adversarial Threat Report – previously issued quarterly – covering adversarial activity throughout 2025, Meta says criminal networks and state-linked influence operations are increasingly using generative AI to scale scams, fake personas and propaganda. The company reported dismantling millions of scam accounts and multiple foreign influence networks, and cracking down on AI “nudify” apps and networks tied to drug cartels, while also noting that its verification systems are being abused by cloaking services.
Civil society, research and policy debates
Researchers sue Trump administration over visa policy targeting online safety work. The Coalition for Independent Technology Research (CITR), with support from the Knight First Amendment Institute at Columbia University and Protect Democracy, has filed a lawsuit challenging a U.S. policy that could deny visas or deport noncitizen researchers working on disinformation, hate speech, and platform governance. The coalition argues the measure threatens independent research on social media and AI and violates First Amendment protections. “Suing the US government wasn’t necessarily on the bingo card for 2026, but at this moment we cannot afford to be silent,” said Brandi Geurkink, executive director of CITR, who warned that restricting independent research on online harms would leave the public “in the dark” about the impacts of technology.
Meta and TikTok weaponised harm to chase engagement, whistleblowers say. A BBC investigation by Marianna Spring draws on more than a dozen whistleblowers at Meta and TikTok to show that both companies knew exactly what their algorithms were doing, and chose to exploit it. Internal research had made the risks clear: outrage drives engagement, engagement drives revenue. This looks more like systematic risk exploitation than systemic risk management.
Fact-checking network warns of Big Tech “retreat” from disinformation commitments. A new EFCSN white paper warns that major platforms are scaling back commitments to fight disinformation, creating a “Great Retreat” in information integrity. The report calls for stronger enforcement of the Digital Services Act, sustainable funding for fact-checking, and new infrastructure to counter AI-driven misinformation.
EU and France rethink strategy against Russian disinformation. With major elections approaching in Europe, the EU and France are reassessing how to counter Russian information manipulation. New initiatives focus on strengthening societal resilience and coordination between governments, alongside existing platform regulation under the Digital Services Act.
Who owns social media? New debate emerges in platform regulation. An analysis from the DSA Observatory argues that social media ownership is an overlooked factor in platform governance. As tech billionaires gain increasing influence over major platforms, researchers warn that ownership structures may shape content moderation, political discourse, and compliance with EU regulations.
Rethinking disinformation regulation on private messaging platforms. A new analysis in Tech Policy Press argues that disinformation governance must better address private messaging platforms such as WhatsApp and Telegram, where harmful narratives increasingly spread. It calls for a feature-based regulatory approach that targets broadcast and amplification tools while safeguarding end-to-end encryption in private communications.
Simulating disinformation attacks for journalists. CheckFirst and Samsa.fr launched a training workshop that simulates an information attack to help journalists and fact-checkers detect coordinated influence operations and disinformation campaigns. Using the Tutki OSINT platform, participants analyse manipulated content across social media, messaging apps, and emails in a real-time crisis scenario.
AI Disinfo Watch
AI fuels a surge in fake war imagery during the Iran conflict. BBC Verify reports a surge of AI-generated videos and visuals circulating online and attracting hundreds of millions of views, with some creators monetising viral content. In addition, the Financial Times found AI-altered satellite images being shared as supposed evidence of military strikes. Meanwhile, NewsGuard found that Google’s AI Overviews can generate inaccurate summaries during reverse-image searches of fabricated or misleading visuals tied to the conflict, while also highlighting how AI-manipulated content is being used to blend geopolitical messaging with conspiracy narratives, including a viral fabricated image of an Iranian missile referencing the Epstein scandal.
X moves to curb undisclosed AI war videos. X will suspend creators from its revenue-sharing programme if they publish AI-generated videos depicting armed conflicts without clearly disclosing that they were created with AI. According to Engadget, first violations can lead to a 90-day suspension from monetisation, with repeat offenders permanently removed from the programme. However, the policy remains narrowly scoped: it applies only to accounts participating in the monetisation programme and only to AI-generated videos of armed conflict, rather than to AI content more broadly. Separately, The Verge reports that X is also testing a “Made with AI” label that would allow users to voluntarily mark posts containing synthetic or AI-manipulated content.
Low-quality AI reports from Meta slow abuse investigations. Meta’s AI moderation tools are reportedly generating large volumes of low-quality reports about potential child sexual abuse, overwhelming investigators and slowing cases. According to The Guardian, these AI-generated reports are sent to the National Center for Missing & Exploited Children and then forwarded to investigators, but US law enforcement officers say many lack crucial evidence or involve non-criminal content, forcing agencies to review large numbers of unviable reports.
How summarisation can perpetuate biases. AI summaries on smartphones may reproduce social biases. A report by AI Forensics found that Apple Intelligence’s on-device foundation model can introduce racial and gender biases when summarising content. In testing, the system more frequently referenced ethnicity for minority protagonists and often resolved ambiguous professional scenarios using gender stereotypes, highlighting concerns about bias in AI tools embedded directly in consumer devices.
For more AI-related disinformation news and resources, visit our AI Disinfo Hub.
Elections & foreign information manipulation
Slovenia: report raises concerns over alleged covert influence operation ahead of elections. An investigation by Mladina alleges that representatives of Black Cube, an Israeli private intelligence firm, travelled to Slovenia last December and met former Prime Minister and SDS leader Janez Janša, a candidate in the country’s 22 March parliamentary elections. The case raises concerns about potential covert influence tactics, including the use of fake companies, secret recordings, and timed leaks targeting political figures.
Multiple information risks emerge ahead of French municipal elections. Several investigations highlight how the information environment surrounding France’s municipal elections is being shaped by a mix of platform dynamics and coordinated influence campaigns. Researchers from People vs Big Tech found that recommender algorithms on X and TikTok disproportionately promoted far-right political content when tested with newly created accounts. At the same time, a NewsGuard investigation reports that a Russian disinformation operation linked to the Storm-1516 network created a fake campaign website targeting Paris mayoral candidate Pierre-Yves Bournazel. France’s VIGINUM agency has also detected a foreign disinformation campaign targeting La France insoumise candidates Sébastien Delogu in Marseille and François Piquemal in Toulouse. Le Monde also reports that the candidates were targeted by coordinated smear campaigns involving networks of fake accounts, automated amplification and fabricated blogs alleging misconduct.
Hungary’s elections face both domestic disinformation and foreign interference risks. Deepfake videos and misleading narratives targeting opposition leader Péter Magyar are circulating widely online ahead of Hungary’s election, highlighting the growing challenge of domestically generated election disinformation within EU member states. VSquare reports that a team linked to Russia’s military intelligence (GRU) has been deployed to Budapest to support influence operations ahead of the elections. According to European security sources, the operation aims to bolster Prime Minister Viktor Orbán using tactics similar to those previously observed in Moldova.
Dark money group accused of paying influencers to attack US candidate. A secretive political group allegedly offered social media influencers $1,500 to post negative content about a Democratic congressional candidate ahead of a primary election in Illinois.
Information warfare & geopolitical disinformation
Kremlin-linked network exploits Iran war to target Ukraine. A NewsGuard investigation finds that the Russian “Matryoshka” influence campaign has used the Iran conflict to circulate fabricated reports impersonating major media outlets, aiming to discredit Ukraine and Western allies. The campaign mimicked credible outlets such as Euronews.
Iran’s state media ramps up disinformation. A report by NewsGuard finds that Iranian state-linked media have increased the spread of false claims since the escalation of the US–Iran conflict in late February. The misleading narratives include fabricated battlefield victories and recycled footage circulating across social media.
Cyber warfare escalates alongside Israel–Iran conflict. Cyber attacks are intensifying as the Israel–Iran conflict expands into the digital domain. Security analysts warn that Iranian-linked groups are likely to target government networks, defence systems and critical infrastructure, while both sides increasingly use cyber operations and information warfare to shape the conflict.
Europol warns Iran conflict could fuel extremism and disinformation in Europe. Europol has warned that the escalating Israel–Iran conflict could heighten terrorism risks in Europe, alongside increased cyberattacks, online fraud and disinformation campaigns. Several EU countries have already strengthened security measures amid concerns over Iranian-linked networks and rapid online radicalisation.
From defence to deterrence: Europe’s strategy against hybrid threats. An ECFR policy brief argues that Europe should move beyond defensive responses to hybrid threats and adopt a more offensive strategy against disinformation, cyber-attacks and sabotage. It calls for stronger action to deter hostile states, particularly Russia.

Campaign spotlight
#ClimateFactsMatter: showcasing EU climate action and local impact on the ground
While focusing on building societal resilience against climate disinformation, the #ClimateFactsMatter campaign is also shedding light on local realities, highlighting how climate action is making a real difference in communities. A new phase of the campaign has been launched to help citizens be aware, be prepared, and be informed about climate disinformation, and new materials are available for stakeholders in a dedicated toolkit with local stories and videos.
Climate disinformation spreads fast, but lived experience tells the real story.
- When faced with false and manipulated climate narratives, it is important to focus on climate action happening on the ground.
- In order to do so, the campaign spotlights EU-funded projects being implemented across the campaign’s target countries, countering myths with real stories, tangible benefits, and impactful initiatives that bring visible change: producing renewable hydrogen, transforming former peat extraction areas into wetlands that store carbon, improve water quality and support biodiversity, and backing the green transition and financing EV-related production. Learn more about these local stories.
New materials available. The campaign toolkit offers new ready-to-use materials for stakeholders, with more videos and resources available. The goal is to spread the word on how climate facts matter to EU citizens.
Want to know more? Watch our dedicated webinar and learn more about how the EU is fighting climate disinformation.
Policy & governance
France, Germany and Poland address climate disinformation. In a joint declaration marking the Weimar Triangle’s 35th anniversary, the three countries reaffirmed their commitment to EU climate neutrality and highlighted the need to tackle climate disinformation as part of strengthening Europe’s climate policy and societal resilience.
France’s media regulator steps up climate disinformation oversight. France’s audiovisual regulator Arcom says it is strengthening its role in tackling climate disinformation, using both national media law and the EU’s Digital Services Act to address misleading content on TV and online platforms.
Platforms and accountability
YouTube monetises climate misinformation despite its own rules. An investigation by Maldita.es found that ads are still placed on climate misinformation videos, with 20 Spanish-language channels reaching over 21 million subscribers collectively continuing to generate revenue despite YouTube’s monetisation policies.
Media & civil society
Journalists launch initiative against climate misinformation. The International Federation of Journalists (IFJ) has launched a UNESCO-funded global project to strengthen journalists’ capacity to identify and counter climate misinformation and disinformation, including through new training resources for environmental reporting.
Climate disinformation rises on the policy agenda. A Climate Group briefing warns that shrinking climate journalism, geopolitical tensions and coordinated online campaigns are putting pressure on information integrity, while governments and civil society step up responses, including through the COP30 Declaration on Information Integrity.
Disinformation risks & emerging threats
The “green AI” narrative is questioned. Writing in Tech Policy Press, Michael Khoo (Friends of the Earth) argues that tech companies are promoting an overly optimistic narrative about AI helping to solve the climate crisis while downplaying its rapidly growing energy demands. As generative AI drives a surge in data centre electricity use, he calls for stronger transparency requirements.
Agribusiness narratives under scrutiny. A report by the Changing Markets Foundation warns that meat and dairy industry messaging and lobbying are shaping climate debates to delay action on food systems and emissions.
The end of accountability: How autonomous AI could supercharge climate disinformation. An article in Canada’s National Observer warns that autonomous AI agents could dramatically scale climate disinformation and harassment online. AI systems capable of acting independently may generate reputational attacks, fabricate claims, and spread conspiratorial narratives targeting climate scientists or policymakers, while making it harder to trace responsibility.
Social media weaponised against Indigenous and climate defenders in Guatemala. A Global Witness investigation finds that social media smear campaigns are being used to criminalise Indigenous leaders and climate activists in Guatemala, linking online disinformation to arrests, harassment and fabricated accusations.
Trump’s climate rollback meets muted response. Writing in The Guardian, journalist Rei Takver argues that the Trump administration’s sweeping dismantling of US climate policies, including the rules underpinning federal climate regulation, has faced limited resistance from political leaders, media and the climate movement. This growing “climate hush” risks weakening public debate and accountability around climate policy – a dynamic celebrated by climate denial figure Marc Morano.
Jobs and events
🎙️17 March: Ads that burn – Why cities are saying no to fossil fuel promotion (webinar) will be hosted by EU Climate Pact, featuring campaigners from Italy and the Netherlands, and MEP Benedetta Scuderi.
🎙️17 March: Dangerous Distractions: Disinformation on food and climate (webinar) will be hosted by the Changing Markets Foundation, featuring a panel of experts discussing strategies for addressing disinformation around food and climate issues.
🎙️25 March: Friends of the Earth is hosting an online workshop on how to communicate climate solutions amid rising climate denial and anti-“net zero” messaging.
🎓 PhD opportunity: Climate communication & polarization. The University of Groningen is recruiting a PhD researcher to study how climate change can be communicated effectively in an environment shaped by disinformation and social polarization.
This edition’s Climate Clarity Corner brings together a few selected items on climate disinformation, building on the recently updated Climate Clarity Hub.
Brussels Corner
Next steps for the Parliament’s work on the Democracy Shield
In our previous newsletter, we reported that the deadline for amendments to the draft Report on the EU Democracy Shield had passed, with 1,655 amendments tabled. Since then, the Parliamentarian in charge has formulated a set of proposed compromise amendments, one of the three planned political-level negotiations has taken place, and four of the nineteen staff-level meetings have been held. The committee is still scheduled to vote on 23 June.
Work getting underway on AgoraEU
We also reported in our previous newsletter that the joint Civil Liberties (LIBE) and Culture (CULT) committee had begun its work on the AgoraEU funding proposal. The committees held a hearing with civil society representatives, including EU DisinfoLab, on 26 February, and a meeting of the MEPs responsible for the proposal is expected, most probably behind closed doors, on 24 March. The first full joint meeting is not scheduled to take place until 3–4 June.
DSA risk assessment and mitigation reports criticised
On 10 March, the Civil Liberties Union for Europe and the European Partnership for Democracy published a short and critical analysis of the second round of risk assessment reports published by very large online platforms and search engines under the DSA.
The EU’s New Democratic Resilience Centre: More Spin Than Substance?
The European Centre for Democratic Resilience has been officially launched, although confusion persists about its actual institutional nature. The centre’s governance and operating model remain opaque, with no clear or transparent account of how it is intended to function. What the Commission’s own job posting from last week does make plain, however, is that the newly advertised senior adviser post sits within the Directorate-General for Communication (DG COMM) and reports directly to its Director-General, who operates under the authority of the Commission President.
That institutional placement is significant. Housing an anti-disinformation centre within DG COMM raises legitimate questions about whether its mandate is genuinely about democratic resilience – or primarily about strategic communications. The distinction matters: countering disinformation through civil society engagement and independent oversight is a fundamentally different exercise from managing institutional messaging. Member States, in particular, may want to ensure that the centre operates with genuine independence, both in its decision-making and in its management – something that its current attachment to DG COMM does not self-evidently guarantee.
This also highlights a broader tension with the EU’s existing toolkit. The Digital Services Act already provides mechanisms to tackle disinformation at its roots – through sanctions and enforcement. Layering a communications-driven centre on top of that framework risks prioritising narrative management over accountability, and may ultimately undermine the credible, independent approach that effectively countering foreign interference requires.
A further unresolved question is operational: how will the centre actually deliver on its mission? No dedicated budget has been earmarked for these activities in the EU’s current financial framework, leaving its resourcing unclear. One path forward worth considering would be a dedicated, independent financial instrument – one designed to ensure meaningful civil society participation while bridging the gaps between the many existing initiatives in this space. Without such a mechanism, there is a real risk of duplicating effort, adding bureaucratic layers, and falling short of the coherent, well-funded strategy that the challenge of foreign interference genuinely demands.
Reading & resources
UN calls for multilateral action against online hate speech. At the First Forum Against Hate in Madrid, organised by Spain’s Ministry of Migration, the UN warned that the speed and scale of online hate speech and disinformation require stronger international cooperation. UNRIC Director Sherri Aldis highlighted the role of digital platforms and algorithms in amplifying harmful content and stressed the need for global coordination to protect information integrity while safeguarding freedom of expression.
Grammarly pulls AI features that mimic real writers. Grammarly has disabled its controversial “Expert Review” feature after backlash from journalists and authors whose names and writing styles were used without permission. The company says it will rethink the tool to give experts control over how they are represented.
Disinformation campaigns target the mining sector. An Alto Intelligence analysis warns that state-aligned networks are running coordinated information campaigns around mining disputes, labour conflicts and environmental issues to shape investor perceptions and regulatory debates around critical minerals projects across Africa, Latin America and Central Asia.
Meta’s Ray-Ban AI glasses raise privacy concerns. A joint investigation by the Swedish newspapers Svenska Dagbladet and Göteborgs-Posten found that footage captured through Meta’s Ray-Ban smart glasses is sent to human reviewers in Kenya to help train AI systems. Workers reported seeing highly sensitive scenes, including people undressing or in private spaces, raising concerns about privacy, consent and compliance with European data protection rules.
Neo-Nazi networks in Spain exploit social media to spread ‘remigration’ ideology. An analysis by the Global Network on Extremism and Technology (GNET) examines the rise of Spain’s neo-Nazi group Núcleo Nacional, which uses platforms like Telegram, TikTok, and X to spread white supremacist narratives and mobilise supporters. The group promotes the “remigration” ideology and has been linked to online campaigns that amplify hate speech, disinformation, and calls for violence against migrants.
AI tools boost fact-checking capacity across the Arab world. A collaboration between Full Fact and the Arab Fact-Checking Network shows that AI tools can significantly speed up claim detection and enable real-time fact-checking across Arabic media. However, funding cuts have halted access to the tools, raising concerns about the sustainability of fact-checking infrastructure in the region.
Fact-checkers say Community Notes alone cannot curb disinformation. Writing in Tech Policy Press, EFCSN’s Stephan Mündges argues that crowdsourced notes are slow and rarely visible, citing 900 notes in six months compared with 35 million fact-check labels on Facebook in the EU, and calls for integrating professional fact-checking with the system.
No, Lyme disease was not engineered as a bioweapon. CDC vaccine adviser Robert Malone is promoting the long-debunked conspiracy theory that Lyme disease was created as a US government bioweapon, citing an “AI-driven investigation.” Experts say the claim misrepresents declassified documents and ignores scientific evidence showing the bacteria responsible for Lyme disease have existed for tens of thousands of years.
Pop culture outrage and the spread of disinformation. An analysis by the Cardigan Collective argues that viral pop-culture controversies can function as rehearsal spaces for disinformation and influence tactics, where memes, fandom mobilisation and algorithms amplify outrage and shape online narratives.
‘Support ICE’ phishing emails target marketing platform users. A phishing campaign is targeting clients of email marketing platforms with fake messages claiming a “Support ICE” donation button will be added to outgoing emails. The scam uses politically provocative messaging to pressure users into logging in and inadvertently revealing their credentials.
Europol-led operation disrupts major phishing-as-a-service platform. A coordinated international operation led by Europol has taken down key infrastructure behind Tycoon2FA, a phishing-as-a-service platform used to bypass multi-factor authentication and target organisations worldwide. Authorities seized 330 domains linked to the service, which had enabled millions of phishing messages each month.
Resources and trainings:
- Prodigioso Volcán will host a paid online training “Bulos y crisis: estrategias contra la desinformación” (“Hoaxes and crises: strategies against disinformation”, in Spanish) on 21–22 April, covering disinformation narratives, AI-driven manipulation, and strategies to respond to disinformation crises.
- Institute for Information Law (IViR) will host a paid five-day summer course on European platform regulation in Amsterdam on 29 June–3 July, offering a deep dive into EU digital policy including the Digital Services Act and Digital Markets Act, with lectures from academics, policymakers, and practitioners.
- Indicator has launched OSINT Navigator, a beta tool that helps investigators find relevant OSINT tools through natural language queries. Drawing on a curated dataset of nearly 7,500 tools from major OSINT toolkits, it suggests resources for tasks such as tracking crypto transactions or identifying website owners.
This week’s recommended read
Gary Machado, Managing Director at EU DisinfoLab, recommends reading How Tenaciously Palantir Courted Switzerland, an investigation by Adrienne Fichter and colleagues, originally published in December 2025 and now available in English and free to read.
Interestingly, the article – originally behind a paywall – is now accessible to everyone after legal action was taken in relation to it. A reminder that attempts to challenge reporting can sometimes have the opposite effect and draw wider attention to it.
Palantir’s technology is used by several law-enforcement, intelligence and health agencies across Europe, which makes scrutiny of the company’s activities particularly relevant.
Its chairman, Peter Thiel, has often positioned himself as a strong supporter of free expression and open debate. At the same time, the company is pursuing legal action over this reporting – a contrast that inevitably raises questions about how free speech is invoked in practice.
Thiel is also a major political figure in the US, known for early support for Donald Trump and for helping introduce J.D. Vance to him.
Worth reading.
👀 Spotted: EU DisinfoLab
- Trust on the agenda. Our Executive Director Alexandre Alaphilippe will be speaking at this year’s Rencontres de l’UDECAM in Paris, joining a high-level line-up of leaders from across media, business and public policy. The event looks at how trust is being built across today’s communications ecosystem.
- Community meetups. We regularly organise informal community meetups in Brussels and across Europe wherever we travel – a chance for good conversations, shared ideas, and to put faces to names. If you’re interested in joining one of the next meetups, simply reply to this email to be added to the guest list and receive more information about the time and location. 📍 Our next upcoming meetups will be in Tallinn on 24 March and Vilnius on 26 March.
Events & announcements
- Present–June: The Cyber for Good Media programme is underway, aiming to protect and better equip journalists against interference and manipulation in the digital space, with a focus on OSINT and cybersecurity.
- 17 March: Ads that burn – Why cities are saying no to fossil fuel promotion (webinar) will be hosted by EU Climate Pact, featuring campaigners from Italy and the Netherlands, and MEP Benedetta Scuderi.
- 17 March: Dangerous Distractions: Disinformation on food and climate (webinar) will be hosted by the Changing Markets Foundation, featuring a panel of experts discussing strategies for addressing disinformation around food and climate issues.
- 18 March: The ATHENA Webinar: “Weaponised Narratives – Disinformation and Geopolitical Struggle in Central and Eastern Europe” will examine how disinformation is being used as a geopolitical tool, with a focus on Central and Eastern Europe.
- 18 March: Cyber Threats and Information Operations in Times of War (webinar) will be hosted by Graphika, featuring their expert analysts in a live Ask the Expert session on cyber threats and information operations during military conflicts.
- 19 March: Building institutional capacity to counter foreign information manipulation and interference (FIMI) in Ukraine (Kyiv, in-person and online, English) will be hosted by International IDEA, bringing together Ukrainian institutions, international organisations, and civil society to discuss approaches and institutional models for countering FIMI.
- 24 March: News-polygraph Conference: The Future of Verification (Berlin, in-person, English) will present research findings on AI-supported verification tools, exploring what they can, and cannot, deliver for journalists tackling disinformation.
- 25 March: Friends of the Earth is hosting an online workshop on how to communicate climate solutions amid rising climate denial and anti-“net zero” messaging.
- 31 March: The Rencontres de l’UDECAM organises Trust in media and brands, which will explore how trust is evolving between brands, media and citizens through debates, keynotes and expert discussions.
- 8–10 April: The Cambridge Disinformation Summit is expected to gather the world’s leading scholars, professionals, and policy-makers to explore interventions on systemic risks from disinformation.
- 14 April: EDMO BELUX 2.0 Workshop: “Exploring Disinformation Through Vulnerabilities: Credibility, Accountability and Inequalities” (RTL Lëtzebuerg, Luxembourg, in-person) will bring together researchers, practitioners, journalists, and policy makers to discuss disinformation from multiple angles, hosted by EDMO BELUX.
- 16–17 April: Global Forum: FIMI & Hybrid Threats (webinar) will bring together experts to discuss hybrid threats and the impact of foreign information manipulation (FIMI) on media and democratic processes.
- 19–23 May: Antony and Cleopatra (Brussels, in-person, English) is the Brussels Shakespeare Society’s next production, set in the 2030s and exploring how disinformation and AI could serve as a road from populism to dictatorship – drawing on what historians consider one of the most successful disinformation campaigns in history.
- 15–18 June: Disinformation Summer Institute 2026: A 4-day in-person institute organised in California, US, will bring together early-career researchers and senior experts for lectures, panels and discussions on studying and countering disinformation.
- 17–19 June: GlobalFact 2026 (Vilnius, in-person, English) is the annual summit of the global fact-checking community, bringing together professionals to share best practices and strengthen collaboration against misinformation and disinformation.
- 7–8 September: EDMO BELUX 2.0 final conference “Countering Disinformation, Raising Democratic Resilience” will be organised in Brussels.
- 6–8 October: #Disinfo2026. EU DisinfoLab’s annual conference will happen in Vilnius, Lithuania. Save the date!
- Other initiatives:
- Call for collaborators (deadline: 20 March 2026): Tactical Tech is seeking experienced investigators and media professionals to develop learning resources and deliver training on AI power structures, climate and information disorder, OSINT methods, and digital influence.
- Open call: IJ4EU has reopened with €1.6 million in funding for cross-border investigative projects (deadline: 13 April). Grants of up to €50,000 (and €20,000 for freelancers) are available for teams reporting on issues of public interest, including disinformation and threats to democratic integrity.
- Call for papers (deadline: 15 September): The Journal of Marketing Management invites submissions examining how platform economies, ad tech, recommender systems and creator monetisation shape the spread of disinformation, and what interventions could strengthen societal resilience.
- The Data Tank is inviting small and medium media and fact-checking organisations to join a new action research project aimed at building collective leverage over Big GenAI, to protect media sustainability and information integrity across Europe.
🧡 THINGS WE LOVE FROM OUR COMMUNITY
A timely reminder from Fabio Votta: “There is no such thing as a ‘political ad ban’ on Meta. A ban that isn’t enforced isn’t a ban.”
The University of Amsterdam researcher recently questioned how effective Meta’s “political ad ban” really is, pointing to ads run by German politician Frauke Petry that have reached over 170,000 users.
The case raises broader questions about enforcement and the impact of voluntary platform restrictions on political advertising.
Jobs
- Reset Tech is hiring a full-time EU Policy Manager (Brussels, on-site) to lead political outreach and advocacy on EU digital regulation and platform accountability.
- Pagella Politica is hiring a full-time Social Media Manager (Milan-based, with partial or full remote options). The role covers strategy, social campaigns and digital marketing across Pagella Politica and Facta.
- The CyberPeace Institute is hiring an EU Project Researcher to support EU-funded digital policy and cybersecurity projects, focusing on legal research and reporting.
- The Center for Countering Digital Hate (CCDH) has several open positions, including a Database Manager (US-based), a Senior Policy Adviser (US-based) and a Policy Officer (Brussels-based).
- OpenAI is looking for a Global safety response operations analyst. Open until filled.
- Alice (ActiveFence) is offering several positions; scroll their page to view all open roles.
- NewsGuard is seeking a full-time Staff Reporter to analyse and rate news sources, as well as an Editorial Intern and a Business Development and Social Media Intern.
- Moonshot is seeking an OSINT Analyst (London-based) and a full-time Digital Advertising Specialist.
- The Center for Democracy & Technology (CDT) has several roles open, including a Legal Fellow and a Senior Policy Analyst.
- ProPublica is currently hiring for several roles, including a Deputy Research Editor, a Visuals Editor, and a Washington Reporter covering defense (D.C.-based).
- The University of Groningen is launching a PhD project examining how climate messaging can counter misinformation and reduce polarisation.
Did you find a job thanks to the listing in this newsletter? We’d love to know – please drop us a message!
Have something to share – an event, job opening, publication? Send your suggestions via the “get in touch” form below, and we’ll consider them for the next edition of Disinfo Update.
