Welcome to a new Disinfo Update.
Enforcement is no longer theoretical. With the Commission’s first DSA non-compliance decision, national courts compelling platform data access, regulatory investigations and new Member State strategies, Europe is entering the accountability phase of digital regulation. Through our next webinars, we’ll unpack the legal arguments used in DSA enforcement. Yet unresolved gaps, from sanctions circumvention to age assurance, expose the limits of current implementation, calling for further action to deliver accountability.
In this context, civil society is proving central to the shift towards accountability, from generating evidence and enforcing data access rights to building resistance frameworks. Sustainable funding for independent evidence and oversight work, however, remains unresolved, and we will continue pushing for this in the current AgoraEU discussions.
Meanwhile, AI is generating increasingly realistic synthetic content around geopolitical flashpoints, including recent tensions involving Iran, blurring fact and fiction as verification struggles to keep pace. Nearly one in three debunked claims now involves AI-generated or AI-manipulated content. But AI-driven disinformation is no longer confined to crisis imagery or viral deepfakes. It permeates everyday information environments, from synthetic legal dramas to personalised persuasion and AI companions designed to build emotional bonds and embed influence over time. Synthetic manipulation is no longer marginal but structural.
🎉🙏 Before diving into this edition, we want to sincerely thank you for the overwhelming response to the #Disinfo2026 call: more than 350 proposals were submitted. We look forward to opening registrations in early May, with ticket details and pricing to be published shortly!
Enjoy the read!
Our Webinars
UPCOMING – REGISTER NOW!
New Series of Webinars: ‘EVIDENCE & ENFORCEMENT’
This new series shifts the focus from rules to results, exploring how evidence gathered by researchers and civil society feeds into enforcement action, and what meaningful accountability requires in practice. We launch the series with two upcoming webinars:
10 March
DSA: Unfolding the European Commission’s first decision against X

Why register for this session?
- You want to know why X has been fined by the European Commission under the DSA.
- You are curious to see what kind of arguments X used to try to evade its responsibility.
- You want to gain insights on how the Commission dealt with this first case, both procedurally and legally.
- You want to understand which type of evidence was used and how to build a solid legal case.
9 April
Civil society evidence under the DSA: lessons from AI Forensics

Why register for this Insider session?
- You’re a CSO or a regulator and want to know how evidence was gathered and structured.
- You want to learn about the challenges and lessons learnt from engaging with the European Commission.
- You want to explore what risks emerged for AI Forensics, especially following the publication of the full decision that affected civil society contributors.
5 March
Synthetic friends: AI companions and the future of disinformation

Artificial intelligence is shifting from generating content to building relationships. AI companions are designed to inform, adapt, and sustain emotional bonds over time. In this webinar, Massimo Flore will introduce the concept of the epistemic cocoon and explore how this shift could transform disinformation: when credibility is embedded in private human–AI relationships, persuasion operates through relational trust rather than visible content.
19 March
How can civil society defend itself? The EDRN pilot story

Join us for a behind-the-scenes look at the European Democracy Resilience Network (EDRN) pilot, a joint initiative by the CyberPeace Institute and EU DisinfoLab supporting civil society facing hybrid threats. Expanding on the CyberPeace Builders programme, EDRN addresses disinformation, doxing, impersonation, and other digital attacks. Inês Narciso and Tanner Wagner will share key insights from the pilot, including what CSOs need to stay resilient.
26 March
AI-generated content and DSA enforcement: who is accountable?

Generative AI is testing the foundations of the Digital Services Act. If systems like ChatGPT generate content rather than just host it, who is liable? Marco Bassini (Tilburg University) unpacks how the DSA applies to generative AI, its interaction with the AI Act, and what this means for enforcement and systemic risks such as disinformation.
PAST – WATCH THE RECORDINGS!
- Forced to quit: gendered disinformation, synthetic abuse, and political violence. Women are being pushed out of public life through coordinated gendered disinformation, harassment, and synthetic abuse. In this webinar, Marília Gehrke (University of Groningen) introduces the “triangle of violence” framework and shows how political exit becomes a measurable outcome of sustained, systemic abuse.
- Who is most vulnerable to AI-generated mis/disinformation? Psychological drivers of media literacy and belief in harmful online content | Led by Dr Jason Potel (Goldsmiths), this session explores the psychological factors shaping responses to AI-generated and online disinformation, showing how feelings of low control, confidence, or social connection can increase susceptibility.
Don’t miss out, watch the recordings and explore all our past EU DisinfoLab webinars.
🧡 A huge thank you to all our speakers, partners, and participants for making every conversation sharper, deeper, and more impactful. If your company or institution is interested in partnering with us or sponsoring our webinars, please reach out to discuss how we can work together: info@disinfo.eu
Disinfo news & updates
MEMBER STATES STRENGTHEN THEIR RESPONSE
- France strengthens national response to FIMI. France’s new 2026–2030 strategy sets out a comprehensive framework to counter foreign digital interference, combining Digital Services Act (DSA) enforcement, AI risk oversight, enhanced attribution capacities and deeper European cooperation. The plan strengthens VIGINUM’s role in addressing systemic interference risks and highlights cooperation with civil society organisations.
- Germany moves towards ending online anonymity. While no concrete measures have been announced yet, signals from Berlin suggest change may be coming. Chancellor Friedrich Merz has called for an end to widespread online anonymity, pointed to the ongoing debate on banning social media for minors, and criticised the power of algorithms and AI, indicating that Germany could soon move towards stricter digital regulation.
ENFORCEMENT UNFOLDS AT MULTIPLE LEVELS
- Court rules against X in impersonation case. A Cologne Regional Court has ordered X to stop distributing a fake profile impersonating German satirist Jan Böhmermann, ruling that the account could not be considered parody. The case may have broader implications for how platforms are held accountable for impersonation and identity abuse amid evolving digital regulation frameworks.
- DRI wins landmark DSA data access in national court. A Berlin court has ordered X to grant Democracy Reporting International (DRI) access to platform data under the DSA for researching Hungarian elections. The decision confirms that civil society can enforce DSA data access rights in national courts, strengthening transparency and systemic risk research across the EU.
- Ireland launches GDPR probe into Grok. Ireland’s Data Protection Commission has opened a General Data Protection Regulation (GDPR) investigation into X’s chatbot Grok over personal data processing and the generation of sexualised AI images, marking a Member State regulatory response to AI-related content risks.
- Commission probes Shein under the DSA. The European Commission has opened formal proceedings against Shein over addictive design, recommender system transparency and the sale of illegal products, marking another DSA enforcement step on systemic platform risks.
DSA’S UNRESOLVED GAPS
- RT DE’s mirror network bypasses EU ban. An investigation by Correctiv finds that despite EU sanctions, RT DE continues reaching millions in Germany through more than 20 mirror domains. The report describes the enforcement gaps and regulatory confusion at national level that allow sanctioned Kremlin propaganda to remain accessible in the EU.
- Hungary makes its own internet hotline a trusted flagger. Hungary’s National Media and Infocommunications Authority, which serves as the country’s Digital Services Coordinator, has appointed its own legal aid service, the Internet Hotline, as the country’s sole trusted flagger under the DSA. While not formally breaching the DSA’s rules, the move raises questions about institutional separation and potential conflicts of interest.
- Age assurance and the DSA’s enforcement blind spot. This brief argues that protecting minors online is less about new rules and more about enforcing existing ones. Despite Article 28 DSA requiring platforms to identify minors with “reasonable certainty”, most still rely on self-declaration, exposing systemic enforcement gaps and legal tensions with the GDPR and the Audiovisual Media Services Directive (AVMSD).
- EP hearing challenges VLOPs’ child safety claims. At a cross-party hearing in the European Parliament (EP), researchers disputed platforms’ DSA risk assessments on addictive design, harmful content and age verification. The Commission acknowledged reporting gaps and pointed to the upcoming Digital Fairness Act, raising questions about how far existing DSA powers are being used.
- Age checks in app stores: A solution or a diversion? As Europe debates restricting under-16s’ access to social media, Big Tech proposes shifting age verification to Apple and Google’s app stores. Experts warn that this risks deflecting responsibility from platforms themselves, arguing that enforcement under the DSA should remain the primary tool for protecting minors online.
* What it’s like to be a teenager on social media * A 15-year-old girl describes the pervasive misogyny she faces daily on social media platforms like Instagram and TikTok, including objectifying comments, rape jokes, and sexual shaming. She explains how these harmful messages are normalised, amplified by algorithms, and affect her self-esteem and relationship with her gender.
CIVIL SOCIETY IN ACTION
- European Democracy Resilience Network (EDRN) Blueprint Report. Together with the CyberPeace Institute and more than 30 civil society organisations, EU DisinfoLab has contributed to the EDRN Blueprint Report, a practical framework to help CSOs defend against hybrid threats, combining disinformation, FIMI and cyber operations.
- Pro-science creators counter online manipulation. Scientists and medical experts are increasingly using TikTok, YouTube and Instagram to respond directly to climate denial, vaccine scepticism and health myths, meeting audiences where they are and helping rebuild trust in evidence-based information.
* Resource * Global social media age restriction tracker. Tech Policy Press has launched a community-driven tracker mapping global legislative efforts to restrict or ban minors’ access to social media. The living resource invites contributions from civil society and monitors proposals, amendments and enforcement developments.
- Funding the watchdogs. A recent EU DisinfoLab briefing explored how the next EU budget (2028–2034) could unlock structural funding for counter-disinformation work, reinforcing calls from civil society that effective DSA enforcement requires sustainable support for independent evidence generation.
👉 The message is gaining momentum across the community (see “Things we loved” below).
AI & DISINFORMATION
- Yearly fact-check intelligence report. AI is playing a growing and increasingly sophisticated role in global disinformation trends. A recent report analysing 1,357 fact-checked claims over a 57-day period found that roughly one in three cases (32%) involved AI-generated or AI-manipulated content. Political figures were the most frequent targets, political manipulation the primary motive, and outrage the dominant emotional trigger.
- Fake verdicts, fake lawyers. “AI Lawslop”, a growing genre of AI-generated legal content, is flooding YouTube with networks of channels posting fabricated courtroom dramas, synthetic bodycam footage and deepfake videos of real legal figures. An investigation by Indicator identified 24 channels that together amassed more than 1.7 billion views, often blending AI-generated scenes with real footage to make detection harder.
- Disrupting malicious uses of AI. OpenAI’s latest threat report details how malicious actors are attempting to misuse AI models in influence operations and cyber activity. Among the case studies is a Chinese influence operation that used AI-generated content across platforms, as well as other actors employing multiple AI models at different stages of their workflows, from drafting propaganda to refining messaging.
- ZDF removes New York correspondent with immediate effect. Germany’s public broadcaster ZDF has removed its New York correspondent after an internal review found she used an AI-generated video in a report on US immigration. The decision follows earlier scrutiny over the use of synthetic and misattributed visuals in the flagship heute journal, which had already prompted mandatory staff training.
- Personalisation weakens LLM safeguards. A large multilingual red-teaming study shows that even simple personalisation, based on language, country, age or political orientation, makes LLMs more likely to bypass safety guardrails and generate persuasive disinformation. The research highlights systemic weaknesses in current models and introduces AI-TRAITS, a 1.6 million-item dataset to support future detection and safety efforts.
🔎 For more AI-related disinformation news and resources, visit our AI Disinfo Hub.
BENDING THE RULES, BENDING THE TRUTH
- US portal targets European content bans. The US State Department is developing a platform (“freedom.gov”) intended to give users in Europe access to content restricted under national laws, reportedly including potential VPN functionality. The move could heighten transatlantic tensions over the Digital Services Act and raise questions about regulatory sovereignty and state-backed content circumvention.
- Google warns against EU sovereignty push. Google’s legal chief Kent Walker has publicly pushed back against the EU’s tech sovereignty drive, warning Brussels not to “erect walls” that could hurt Europe’s competitiveness. In his view, plans to reduce reliance on US tech risk slowing innovation, particularly in AI, as the EU steps up digital regulation and enforcement.
- Trump’s “cancel culture”. The Financial Times explores growing concerns over press freedom in Trump’s second term, arguing that legal, regulatory and political pressure on media outlets risks chilling journalism. A reminder that weakening independent media reshapes the information environment, and can create fertile ground for disinformation and narrative control.
- US authorities seek to unmask anonymous anti-ICE accounts. The Department of Homeland Security has issued hundreds of administrative subpoenas to tech companies seeking identifying information behind social media accounts that criticise or track ICE. While officials say the requests aim to protect officers, civil liberties groups warn that expanded use of such powers could chill lawful political speech.
Brussels Corner
1,655 amendments tabled to the European Democracy Shield (EUDS) report
Last month, the special Committee on the European Democracy Shield (EUDS) published its draft report. Read our 3 February Brussels corner update on the report here.
11 February was the deadline for MEPs to table their amendments, and many did: the total reached an impressive 1,655. Find the full list of amendments on the Parliament’s website here.
The extensive list of amendments will now mostly be consolidated into compromise amendments through negotiations between European Parliament political groups. These will then be put to a vote in committee before the resolution as a whole is adopted at a subsequent plenary sitting of the Parliament.
We will look specifically at all recommendations mentioning the development of funding mechanisms and legal protections for the counter-disinformation community.
CULT and LIBE starting work on AgoraEU
While the first full joint meeting of CULT/LIBE is not, at the time of writing, scheduled on the Parliament’s website, the Committees are starting their joint work on the AgoraEU programme. AgoraEU is part of the EU’s upcoming Multiannual Financial Framework (MFF).
The two committees share competence over the file, and the co-rapporteurs’ draft report is expected to be published in early May (timeline subject to change). The committee vote should then be held during a committee week in October, with a vote at a plenary sitting of the Parliament in November.
See more about our position on the AgoraEU proposal on our website.
X sues the Commission over the €120 million fine
X is contesting the €120 million fine issued by the European Commission for violations under the Digital Services Act (DSA) by bringing three cases before the Court of Justice of the European Union (CJEU).
To unpack the implications of the case and the role of civil society evidence in the Commission’s decision, we are hosting a closed webinar on 9 April with Marc Faddoul (AI Forensics).
It seemed inevitable that an appeal would be launched – a cynic would say that it is a logical way of maximising the cost to the European Commission for enforcing the law and making it clear that every legal machination will be used to delay justice and increase the costs of enforcement.
Three different cases were filed on 16 February. One is from X Internet Unlimited Company and X Holdings, another from xAI Holdings, and one seemingly from Elon Musk himself. In a public statement, he said what one would expect him to say about the legitimacy of the ruling.
The move appears to reflect a broader strategy used by major platforms. Companies are challenging regulatory fines in court and appealing decisions to contest and potentially delay enforcement of the DSA.
None of these (apparently) strategic moves by X can, by default, suspend enforcement of the fine, nor X’s obligation to bring an end to its breaches of EU law, at least not without the agreement of the Court. That said, there is no indication that any such respite has been ordered by the Court, nor that the Commission is insisting that X’s obligations be respected.
US sanctions target EU citizens: what is the EU’s concrete response?
On 25 February, the Committee on the Internal Market and Consumer Protection (IMCO) held an exchange of views on US sanctions targeting EU citizens involved in enforcing the Digital Services Act (DSA).
One of the targeted individuals, co-founder of HateAid, A.-L. von Hodenberg stressed that the DSA structurally relies on evidence provided by civil society and researchers to function effectively. The recent restrictions have shaken the community, making protective measures essential to ensure organisations can continue this work.
Clare Melford, co-founder of Global Disinformation Index (GDI), who was also subject to the travel ban, urged the Commission to formally recognise civil society monitoring, expand Article 40 researcher access with sanctions-resilient infrastructure, and provide sustained funding, particularly during elections and crises.
Parliamentarians from political groups including Renew, the Socialists & Democrats, the Greens/EFA and the Left expressed solidarity and criticised the Commission for not responding more strongly. Many emphasised the need for concrete protections for civil society.
The debate comes at a crucial time, as discussions on the next Multiannual Financial Framework are underway. So far, the Commission’s AgoraEU proposal addresses SLAPP protections for journalists, but does not yet foresee broader safeguards for civil society actors defending EU law.
The Commission Responds to a Parliamentary Question on User-Friendly Reporting Under the DSA
In January this year, MEP Pascal Arimont (EPP, Belgium) tabled a parliamentary question pointing out the various obstacles platforms put in the way of reporting illegal content. His question sought an answer from the European Commission on the specific steps it intends to take to ensure that platforms provide a user-friendly notice mechanism for reporting illegal content.
We wrote a blog post taking a deeper dive into the lengthy and burdensome reporting mechanisms platforms have designed to discourage users from reporting illegal content, despite the DSA’s requirement for user-friendly notice mechanisms.
The Commission answered Mr. Arimont’s question by referring to the adoption of preliminary findings against Meta in October 2025 for suspected breaches of Article 16(1) (requiring user-friendly notice and action mechanisms), and to proceedings initiated in December 2023 against X for alleged failures in handling notices on illegal content and related procedural obligations. However, the Commission did not provide the more concrete answer, including a timeline, that Mr. Arimont requested.
Reading & resources
- Cyber and narrative warfare expand in Israel–Iran conflict. As military strikes intensify, the conflict is moving into cyberspace: EuroNews reports hacking campaigns and warnings of potential disinformation aimed at shaping public perception. Yet despite Tehran’s reputation for cyber and influence operations, analysts say pro-Iranian groups have so far played a negligible role, with retaliation limited to exaggerated claims and minor disruptions, according to Bloomberg. Meanwhile, Wired and France 24 report that X was flooded with viral disinformation, including recycled footage, misattributed videos and AI-generated images exaggerating the scale and impact of the attacks. BBC Verify has debunked manipulated satellite imagery, AI-enhanced explosion images, and fake social media accounts circulating online.
- Prigozhin’s network absorbed by Russia’s foreign intelligence. A major investigation by Forbidden Stories reveals that Yevgeny Prigozhin’s influence apparatus was taken over by Russia’s Foreign Intelligence Service (SVR) after his death. Based on a leaked dataset, journalists identify 60 agents deployed across Africa and Latin America, exposing how the Kremlin has reorganised and expanded its global disinformation operations.
- A war foretold, and doubted. The Guardian reconstructs how US and UK intelligence uncovered Putin’s invasion plans, and why many European leaders dismissed the warnings. A timely read on strategic miscalculation, information credibility and the blind spots that FIMI and hybrid threats can exploit.
- Attributing Russian information influence. A new NATO Strategic Communications Centre of Excellence (StratCom COE) report applies its attribution framework to real Russian influence campaigns, clarifying evidential standards in the context of FIMI, EU sanctions and the DSA.
- Four years of war: Russian disinformation adapts. A review by Maldita of Russian disinformation since 2022 shows a shift from recycled propaganda and “denazification” claims to AI-generated content targeting global audiences. Recent campaigns focus on deterring foreign volunteers and eroding support for Kyiv, using micro-influencers, diplomatic channels and increasingly sophisticated amplification tactics.
- The rise of Russia’s vigilante movement. An Open Minds investigation charts the expansion of the “Russian Community” into a nationwide network conducting raids and migrant patrols, amplified by hundreds of coordinated social media groups. The case illustrates how nationalist narratives and digital mobilisation, alongside informal ties to law enforcement, can translate into real-world coercion, blurring the line between state and non-state actors in Russia’s hybrid landscape.
- Google disrupts Chinese-linked surveillance network. The tech company says it dismantled infrastructure used by UNC2814 (“Gallium”), a Chinese-linked hacking group that accessed 53 organisations across 42 countries, including telecom operators. The group reportedly used Google Sheets as part of its operations to evade detection, highlighting the continued convergence of cyber espionage and global hybrid threats.
- Mapping Serbia’s propaganda ensemble. Crta and Istinomer analyse the hierarchical communication network surrounding President Aleksandar Vučić, identifying 54 actors and eight dominant narratives used to sustain crisis, delegitimise opponents and manufacture internal/external threats. A detailed case study of institutionalised propaganda architecture and coordinated narrative control.
- Brand safety is not censorship. In the Financial Times, Clare Melford (Global Disinformation Index) recounts how the US State Department revoked her visa, accusing her of promoting censorship. She argues that helping advertisers avoid divisive or propaganda-linked content strengthens market choice, and will be even more vital in the age of generative AI.
- The Big Tech Lobby Playbook. This new series by SOMO examines how major tech companies shape digital laws worldwide, highlighting lobbying strategies used across the US, EU, Brazil, India, Kenya and Australia, and proposing counter-strategies to reclaim democratic oversight of digital governance.
- Australian lawmakers scrutinise climate disinformation ecosystem. Australia’s Senate select committee on information integrity is focusing on the role of social media algorithms, AI-generated content, coordinated campaigns and opaque political funding. Its hearings brought together tech platforms, industry actors, regulators and experts to examine climate disinformation and online influence dynamics.
- The AI climate hoax. A new report by Ketan Joshi examines industry claims that artificial intelligence will significantly speed up climate mitigation. It argues that companies often blur the distinction between relatively low-energy AI tools used to improve efficiency and far more energy-intensive generative AI systems driving rapid data centre expansion.
- Fake doctors’ network selling unproven supplements across Europe. A Pravda Association investigation uncovers a coordinated network of fake “medical experts” operating on social media to sell shilajit, a supplement promoted as a miracle cure. At least 30 profiles across multiple European countries are linked to an affiliate marketing scheme connected to a Vietnam-based e-commerce infrastructure.
* Resource * Guide to investigating ecommerce sites. This paid OSINT-focused resource by Craig Silverman (Indicator) offers practical tools and techniques to analyse online stores, detect ecommerce fraud, and uncover the individuals or entities behind suspicious websites.
This week’s recommended read
Raquel Miguel, Senior Researcher at EU DisinfoLab, recommends reading a new study published in Nature, which examines how two feed designs on X, algorithmic versus chronological, shape political engagement and attitudes.
By tracking what happens when users switch between feeds, the researchers find that moving from a chronological feed to an algorithmic one increases engagement and is associated with stronger conservative positions, while switching from algorithmic to chronological shows no significant effects, likely due to persistent exposure effects from the algorithmic feed.
The paper underscores a crucial, well-known point: it’s not that users change their political orientation, but that algorithmic curation can amplify polarisation and set the agenda. This reinforces the editorial role of algorithms, and therefore the responsibility platforms carry when they shape what audiences see.
The study’s findings are in line with the long-standing support in the counter-disinformation community for switching off algorithmic feeds and moving to chronological timelines. If the effects of algorithmic exposure are persistent, a time-limited switch during election periods, for instance, may not achieve much. Instead, this points to the need for a stronger request: promoting chronological feeds beyond electoral periods, or even as the default in the platforms’ design.
👀 Spotted: EU DisinfoLab
- Maria Giovanna Sessa, Research Manager at EU DisinfoLab, examines how the Russian-linked “Pravda” disinformation network targeted Italy during the Milan-Cortina 2026 Winter Olympics. In her latest analysis for Gli Stati Generali, she outlines how a coordinated ecosystem of pro-Kremlin domains amplified false claims, media impersonation and culture-war narratives to rehabilitate Russia’s image, delegitimise Italy and weaponise the Games within a broader strategy of cognitive warfare.
- Maria Giovanna has also co-authored “Gender and identity disinformation: impacts on female politicians, and implications for democratic participation – roundtable discussion” published in the Journal of Gender Studies. The publication is the result of a roundtable hosted last year by Artemis Alliance.
- Brussels community meetups. We regularly get together for informal community meetups: good conversations, shared ideas, and the chance to put faces to names. Join us for the next one on Monday, 16/03, in Brussels. Reply to this email to be added to the guest list.
Events & announcements
- Present-June: The Cyber for Good Media programme is running with the mission to protect and better equip journalists against interference and manipulation in the digital space, with a focus on OSINT and cybersecurity.
- 5 March: The seminar “The EU–Japan Security and Defence Partnership and the Challenge of Disinformation” (Madrid, in-person, English) will explore how the EU and Japan can strengthen cooperation against disinformation under their 2024 Security and Defence Partnership.
- 10–12 March: Voices – European Festival of Journalism and Media Freedom (Florence, in-person) will bring together journalists, media professionals and citizens to discuss press freedom, media literacy and responses to disinformation in Europe.
- 24 March: News-polygraph Conference: The Future of Verification (Berlin, in-person, English) will present research findings on AI-supported verification tools, exploring what they can, and cannot, deliver for journalists tackling disinformation.
- 8–10 April: The Cambridge Disinformation Summit is expected to gather the world’s leading scholars, professionals, and policy-makers to explore interventions on systemic risks from disinformation.
- 15–18 June: Disinformation Summer Institute 2026: A 4-day in-person institute organised in California, US, will bring together early-career researchers and senior experts for lectures, panels and discussions on studying and countering disinformation.
- 17–19 June: GlobalFact 2026 (Vilnius, in-person) is the annual summit of the global fact-checking community, bringing together professionals to share best practices and strengthen collaboration against misinformation and disinformation.
- 7–8 September: EDMO BELUX 2.0 final conference “Countering Disinformation, Raising Democratic Resilience” will be organised in Brussels.
- 6–8 October: #Disinfo2026. EU DisinfoLab’s annual conference will happen in Vilnius, Lithuania. Save the date!
- Other initiatives:
- Call for contributions (deadline: 8 March): EDMO BELUX 2.0 Final Conference. EDMO BELUX 2.0 invites proposals for papers/studies, themed panels and hands-on workshops on fact-checking, media literacy and disinformation research, with a focus on Belgium/Luxembourg and EU policy implementation.
- Call for collaborators (deadline: 20 March 2026). Tactical Tech is seeking experienced investigators and media professionals to develop learning resources and deliver training on AI power structures, climate and information disorder, OSINT methods, and digital influence.
- Open call. IJ4EU has reopened with €1.6 million in funding for cross-border investigative projects (deadline: 13 April). Grants of up to €50,000 (and €20,000 for freelancers) are available for teams reporting on issues of public interest, including disinformation and threats to democratic integrity.
- Call for papers (deadline: 15 September). The Journal of Marketing Management invites submissions examining how platform economies, ad tech, recommender systems and creator monetisation shape the spread of disinformation, and what interventions could strengthen societal resilience.
- The Data Tank is inviting small and medium media and fact-checking organisations to join a new action research project aimed at building collective leverage over Big GenAI, to protect media sustainability and information integrity across Europe.
🧡 Things we loved from our Community
After almost missing an invitation to the Élysée, which he spotted in his junk mail just 24 hours before the event, Marc Faddoul (AI Forensics) jumped on a train and made it to the AI Summit anniversary dinner, where he took the floor to deliver a clear message: DSA enforcement depends on civil society evidence, yet there is no structural funding to sustain that work.
As he put it, “The EU is lucky to have a strong civil society ecosystem holding big tech to account… But that ecosystem can’t run on goodwill alone.”
Marc called for a bounty system linked to fines and a dedicated EU budget line in the 2028–2036 Multiannual Financial Framework (MFF). A sharp intervention on the financial sustainability of those holding Big Tech to account, and a timely reminder that regulation must be matched with sustainable funding to deliver real impact.
🇪🇺 Things we loved from Member States
France’s official @FrenchResponse account on X delivered sharp pushback against platform pressure and disinformation:
-> Correcting misleading claims about EU rules
-> Backing Spain with a clear “Hola Spain, we’ve been there” in defence of platform regulation 🤝
Jobs
- EFCSN (European Fact-Checking Standards Network) is seeking a full-time Junior Grants & Admin Officer (remote, Europe-based). The role focuses on EU grant administration, financial reporting and compliance. Applications close on 13 March.
- Reset Tech is hiring a full-time EU Policy Manager (Brussels, on-site) to lead political outreach and advocacy on EU digital regulation and platform accountability.
- Pagella Politica is hiring a full-time Social Media Manager (Milan-based, with partial or full remote options). The role covers strategy, social campaigns and digital marketing across Pagella Politica and Facta.
- The CyberPeace Institute is hiring an EU Project Researcher to support EU-funded digital policy and cybersecurity projects, focusing on legal research and reporting.
- The Center for Countering Digital Hate (CCDH) has two open roles: a Government & Parliamentary Affairs Officer (London-based), and a Database Manager (US-based).
- OpenAI is looking for a Global Safety Response Operations Analyst. Open until filled.
- Alice (ActiveFence) is offering several positions; scroll their page to view all open roles.
- NewsGuard is seeking a full-time Staff Reporter to analyse and rate news sources, as well as an Editorial Intern and a Business Development and Social Media Intern.
- Moonshot is seeking an OSINT Analyst (London-based) and a full-time Digital Advertising Specialist.
- The Center for Democracy & Technology (CDT) has several roles open, including an AI Governance Fellow and a Senior Technologist in the AI Governance Lab.
Did you find a job thanks to the listing in this newsletter? We’d love to know – please drop us a message!
Have something to share – an event, job opening, publication? Send your suggestions via the “get in touch” form below, and we’ll consider them for the next edition of Disinfo Update.
