Dear reader,

Is the wind finally turning?

With the historic verdicts obtained this week in New Mexico and California, holding platforms liable for their impact on the mental health of children and teenagers, it is now the American justice system that is turning its attention to the algorithmic systems platforms have been developing for years.

This ruling also validates the European approach of targeting the underlying systems that amplify disinformation and deepen polarisation, since it is precisely these same systems that are now under legal fire.

On the European front, DSA enforcement continues to move forward not only through the European Commission, but increasingly through Member States. A new judicial ruling this week in Germany ordered TikTok to modify its recommender system and reporting tools to comply with DSA accessibility requirements.

As for the Commission itself: while it remains substantively active on enforcement, it must step out of its cautious posture and claim the legal victories it is already achieving. This is essential. Only by fully owning the legal confrontation will we ensure that the perpetrators and enablers of these operations face the consequences of their actions.

This matters all the more as Hungary heads to elections on 12 April. It also follows the municipal elections in France that prompted the opening of a judicial inquiry over suspected Israeli foreign interference, and media reports that the Israeli private intelligence firm Black Cube was identified by Slovenia's own intelligence agency SOVA as operating in the country's parliamentary election of 22 March.

The question of legal remedies and judicial action remains critical and calls for more action and pressure.


Our Webinars

UPCOMING – REGISTER NOW!

2 April. How can civil society defend itself? The EDRN pilot story

Join us for a behind-the-scenes look at the European Democracy Resilience Network (EDRN) pilot, a joint initiative by the CyberPeace Institute and EU DisinfoLab that supports civil society facing hybrid threats. EDRN addresses disinformation, doxing, impersonation, and other digital attacks. Inês Narciso and Tanner Wagner (CyberPeace Institute) will share key insights from the pilot. 

23 April. Case Study – Decoding Russian intelligence: What medals and insignia reveal

What can military badges and medals reveal about Russia’s information operations? In this webinar, Hervé Letoqueux (CheckFirst) presents findings from OSINT investigations showing how open-source images of Russian military insignia can help uncover hidden structures within the FSB’s 16th Centre and the GRU’s Information Operations Troops.

EVIDENCE & ENFORCEMENT WEBINARS:

9 April. Civil society evidence under the DSA: lessons from AI Forensics

The Commission’s decision to fine X under the DSA was not only built on regulatory investigation; it was also strongly supported by evidence produced by civil society organisations, such as AI Forensics.
In this session, Marc Faddoul, Director of AI Forensics, will share insights from this case. Why register for this Insider session? Register if:

  • You are a CSO or regulator and want to understand how evidence was gathered and structured.
  • You want to learn about the challenges and lessons learnt from engaging with the European Commission in this case.
  • You want to explore what risks emerged for AI Forensics, especially following the publication of the full decision that affected civil society contributors.

28 May. David vs Goliath in the DSA era: Lessons from Bits of Freedom’s victory

Article 38 of the EU Digital Services Act could be described as a masterpiece of legal drafting – 57 words to say very large online platforms must provide a version of their service not based on profiling. Meta provided a sludgy, unusable option. In the absence of action from the European Commission, Dutch NGO Bits of Freedom took on the mantle of David, and took the Meta Goliath to court. And won. And then, on appeal, won again. What lessons were learnt? Can this success be replicated elsewhere? Are there pitfalls that need to be avoided? Join the conversation to hear first-hand insights from Rejo Zenger, Policy Advisor at Bits of Freedom.


4 June. Enforcing the DSA enforcers: how can Member States legally push the Commission to act?

The DSA creates a hybrid enforcement system: the European Commission has exclusive competence over VLOPs and VLOSEs, while national Digital Services Coordinators (DSCs) oversee other intermediaries. This raises a structural tension and dependency: what legal options do EU Member States have if the Commission fails to fulfil its supervisory and enforcement role? Join this discussion with Legal Consultant & Governance Advisor Christine Allan de Lavenne from SIDE Law Office.

PAST – WATCH THE RECORDINGS!

AI-generated content and DSA enforcement: who is accountable?
This webinar with Marco Bassini (Tilburg University) explores how generative AI is reshaping content production and testing the foundations of the Digital Services Act. It examines whether AI systems that generate content can fall within the DSA’s intermediary regime, what this means for liability and risk governance, and how the framework interacts with the AI Act, including implications for enforcement and systemic risks such as disinformation.

DSA: Unfolding the European Commission’s first decision against X
This webinar with Laureline Lemoine (AWO) unpacks the European Commission’s first-ever DSA non-compliance decision – fining X €120 million – examining the legal reasoning, key breaches identified, and what the landmark ruling means for civil society’s role in shaping DSA enforcement.

🎥 Don’t miss out: watch the recordings and explore all our past webinars.


Disinfo news & updates 

EU ENFORCEMENT & REGULATORY DEVELOPMENTS

Rapid Response System activated ahead of Hungarian elections under standard DSA election procedures. The Hungarian Digital Media Observatory (HDMO) reports that the Rapid Response System under the EU Code of Practice on Disinformation has been activated ahead of the 12 April elections in Hungary. The mechanism allows trusted organisations to flag potential violations, such as fake accounts and non-transparent political advertising, to major platforms for priority review, while enforcement decisions remain with the platforms themselves.

EU Commission flags widespread failures in protecting minors on adult platforms. An assessment by the European Commission has found that several major pornography websites are not adequately safeguarding minors, citing weak age verification systems and insufficient enforcement measures. The findings highlight regulatory gaps in digital child protection and intensify pressure on platforms to comply with stricter EU online safety standards.

First rulings begin to shape DSA data access for researchers. Early decisions under Article 40 of the Digital Services Act are starting to clarify how vetted researchers can access platform data. The developments mark an important step in operationalising one of the DSA’s key transparency provisions, while also raising questions about consistency, scope, and how easily researchers will be able to enforce their rights in practice.

EU moves forward with nudify app ban while delaying broader AI rules. The European Union has reaffirmed plans to prohibit so-called “nudify” applications under its AI regulatory framework, while postponing key provisions of the AI Act, such as watermarking AI-generated content, to allow more time for implementation. The decision reflects ongoing tensions between rapid technological development and regulatory capacity, as policymakers seek to balance innovation with safeguards against harmful uses of AI.

EU STATES TAKE ACTION & PUSH FOR CLARITY

France pushes for clarity on DSA enforcement during elections.
France has asked the European Commission to clarify how Member States can enforce the DSA at the national level during European elections, warning of legal uncertainty in tackling disinformation and foreign interference. In a letter to Commission President Ursula von der Leyen, Paris calls for updated election guidelines and clearer coordination between the EU and national authorities.

German court clarifies DSA obligations in ruling on TikTok’s design practices. A German court has ruled that TikTok must adjust its web-based recommender systems and user reporting tools, finding that its implementation of the EU’s Digital Services Act (DSA) falls short of accessibility and user-friendliness requirements. The decision marks one of the first national interpretations of key DSA provisions and underscores the growing role of private enforcement in shaping how platforms operationalise regulatory obligations.

Dutch court bans AI-generated “nudify” images in landmark ruling on Grok. A Dutch court has prohibited the use of xAI’s chatbot Grok to generate or distribute non-consensual sexually explicit images, including so-called “nudified” content, finding existing safeguards insufficient. The ruling imposes strict penalties for non-compliance and places responsibility on X to prevent such harms, marking one of the first judicial interventions in Europe addressing AI-generated sexual content and reinforcing broader regulatory scrutiny of generative AI systems.

PLATFORM ACCOUNTABILITY & LEGAL PRESSURE

Women sue X over AI-generated sexualised deepfakes linked to Grok.
Women and girls are taking legal action against X, alleging the platform failed to prevent the spread of non-consensual sexualised deepfakes generated using its Grok AI system. The case highlights growing legal pressure on platforms to address harms linked to generative AI and raises questions about liability, safety-by-design, and enforcement gaps.

X engages with the EU after €120 million DSA fine. Elon Musk’s platform X is in contact with the European Commission following a €120 million fine under the Digital Services Act, although it remains unclear to what extent the company is fully complying or whether the penalty has been settled. The case marks one of the EU’s most significant enforcement actions to date and will test whether proposed platform changes meet requirements on transparency, risk mitigation, and user protection.

Meta fined $375m over misleading child safety practices. A court in New Mexico has ordered Meta to pay $375 million after finding that the company misled users about protections for minors on its platforms. The ruling underscores growing scrutiny of major technology firms’ safety claims and raises broader concerns about transparency, accountability, and the adequacy of self-regulation in protecting vulnerable users online.

Musk tweaks Grok access as pressure mounts on X’s AI strategy. Elon Musk has introduced changes to how users can interact with Grok, limiting certain features behind a paywall in a move that signals growing pressure to monetise AI tools on X. The shift highlights broader tensions between accessibility, platform incentives, and the role of AI systems in shaping information flows, as concerns grow over their impact on content moderation and misinformation dynamics.

CIVIL SOCIETY: EXPOSING GAPS, SHAPING RESPONSES

Platforms’ monetisation systems fuel disinformation.
An investigation by Maldita reveals that major platforms are enabling the spread of disinformation through monetisation schemes that reward engagement, while a lack of transparency prevents independent scrutiny. The findings raise concerns about structural incentives that amplify harmful content and the limits of current accountability mechanisms.

Misinformation persists and is often amplified across major platforms in Europe. A second wave of research by Science Feedback on Structural Indicators of Disinformation confirms that trends are systemic, not incidental. TikTok continues to show the highest levels of misinformation, while repeat spreaders, especially on X, benefit from a strong “misinformation premium” in engagement. The study also highlights the rise of largely unlabelled AI-generated content and finds that low-credibility accounts on X are growing their audiences significantly faster than reliable sources.

‘Effort aversion’ weakens effectiveness of Community Notes. New research from Clemson University (USA) suggests that X’s Community Notes system may struggle to address complex misinformation, as contributors are less likely to engage with claims that require higher effort to evaluate. The study finds that simpler, more obvious claims are more likely to receive notes, raising concerns about blind spots in crowdsourced fact-checking and the limits of relying on user-driven moderation systems.

Private messaging remains a blind spot in disinformation response. An analysis in Le Monde argues that private messaging platforms such as WhatsApp and Telegram continue to evade effective regulation, despite playing a growing role in the spread of disinformation. The piece calls for a more nuanced approach that addresses amplification features and virality without undermining encryption and fundamental rights.

Gaps in platforms’ DSA risk assessments on monetisation. What To Fix has published its third annual evaluation of Very Large Online Platforms’ (VLOPs) risk assessment reports under the EU Digital Services Act. The analysis finds that platforms’ coverage of monetisation-related risks is often incomplete, leaving gaps in how incentives for harmful content are addressed. The findings raise concerns about transparency, accountability, and regulators’ and civil society’s ability to understand how monetisation affects disinformation and user safety.

Algorithmic gatekeeping in visual fact-checking. New research finds that reverse image search tools play a critical role in determining which visual content gets verified, effectively acting as gatekeepers in the fact-checking process. The study raises questions about bias, transparency, and the infrastructure underpinning misinformation detection.

Human rights approach to disinformation calls for systemic rethink. A new Council of Europe analysis argues that responses to disinformation must be grounded in human rights principles, including freedom of expression and access to information. It calls for policies that address structural drivers of manipulation while avoiding overreach and censorship.

Global economic costs of disinformation come into focus. A new report from Sopra Steria highlights the growing economic impact of disinformation, from market disruptions to reputational damage and security risks. The analysis frames disinformation not only as a societal threat but also as a significant economic challenge for governments and businesses, while reflecting an industry perspective on the issue.

AI DISINFO WATCH

Influence campaign on TikTok uses AI videos to boost Hungary’s Orbán ahead of crucial elections.
A FIMI campaign used AI-generated content on TikTok to influence Hungary’s April 2026 elections. The operation deployed synthetic news anchors, deepfake-style celebrity endorsements, and networks of inauthentic accounts to amplify pro-Orbán narratives and smear his opponent. Combined with parallel activity identified by NewsGuard from the Russian “Matryoshka” influence operation on X and Telegram, the case illustrates how AI is lowering the cost and increasing the scale, speed, and plausibility of election interference.

Meta will move away from human content moderators in favor of more AI. Meta plans to significantly reduce its reliance on human content moderators, replacing much of the process with AI systems over the coming years. While humans will remain involved in high-risk decisions and system oversight, the move signals a deeper dependence on automated moderation at scale. The shift raises concerns about error rates and reduced human oversight: some users already report that these systems make too many mistakes and that it is difficult to get appeals reviewed by a person.

How AI content detection is being weaponized in the Iran war. AI’s impact on the Iran conflict now extends well beyond manipulated images and is increasingly shaping public perception in more insidious ways. AI detection tools and technical analyses are being weaponised, with fabricated or misapplied “forensic” evidence used to falsely discredit authentic images and erode trust in verification systems, as highlighted by Tech Policy. At the same time, AI is still being used to produce deceptive videos, including clips showing female fighter pilots intended to project regime strength. According to an Alethea report, these videos amassed 25 million views in just a few days after being amplified by a coordinated TikTok network. The New York Times has already described this phenomenon as an “alternative reality” of the conflict, driven by a flood of AI-generated fake war footage that is blurring the line between real and fabricated events at scale.

Grammarly is pulling down its explosively controversial feature that impersonates writers without their permission. Grammarly has disabled its “Expert Review” feature following backlash and a class action lawsuit over its use of AI to generate editing suggestions attributed to real writers without their consent. According to Wired, the tool presented feedback as if it came from journalists, authors, and academics, raising concerns about the unauthorised use of names and identities. Futurism reports that the feature impersonated both living and deceased writers and was withdrawn after strong criticism from those affected. In a LinkedIn post, journalist Julia Angwin confirmed she is suing the company, arguing that the feature misappropriated the identities of hundreds of professionals for commercial purposes. The case highlights growing scrutiny over AI systems that simulate real individuals’ voices and expertise without permission.

The next disinformation battlefield is private. Disinformation may increasingly shift from public platforms to private AI-driven conversations, where “synthetic friends” adapt to users’ preferences and communication styles. Massimo Flore, who recently explained this dynamic in an EUDL webinar, argues that these systems can become trusted interlocutors, shaping how individuals interpret information within personalised “epistemic cocoons,” making influence harder to detect, monitor, or challenge.

🔎  For more AI-related disinformation news and resources, visit our AI Disinfo Hub.

FOREIGN INTERFERENCE & INFORMATION WARFARE 

EEAS presents new FIMI Deterrence Playbook to strengthen response to foreign information manipulation.
At its annual conference in Brussels, the European External Action Service (EEAS) presented its 4th annual report on Foreign Information Manipulation and Interference (FIMI), introducing the FIMI Deterrence Playbook as a key framework to make FIMI activities more costly and less sustainable for threat actors. The report builds on the EEAS FIMI Toolbox and broader response framework, focusing on strengthening the EU’s capacity to deter and disrupt FIMI operations.

Russia repurposes Middle East war narratives to target Ukraine. Pro-Kremlin actors are increasingly linking the war in the Middle East to the conflict in Ukraine, blending unrelated events to push misleading narratives. Analysts warn that these campaigns aim to discredit Kyiv, suggest declining international support, and shift global attention away from Russia’s invasion.

Russia impersonates fact-checkers to undermine trust and target Armenia. A pro-Russian network has been posing as the verification organisation NewsGuard to spread false narratives targeting Armenia, particularly in the run-up to elections. The tactic reflects a broader strategy of mimicking credible sources to erode trust in fact-checking and amplify geopolitical influence operations through coordinated networks and fabricated content.

Russian-linked networks recruit operatives through low-cost incentives. Investigations reveal that individuals are being recruited into Russian-backed sabotage training programmes with minimal financial incentives and promises of travel. The scheme illustrates evolving hybrid warfare tactics, leveraging economic vulnerability and covert training to expand operational reach within Europe.

Alleged Russian plot to stage attack to influence Hungarian elections. Reports indicate that Russian actors proposed orchestrating a fake assassination attempt to sway Hungary’s electoral dynamics. The case reflects increasingly aggressive forms of election interference, combining disinformation with potential staged events to manipulate public perception and political outcomes.

Foreign disinformation as warfare exposes vulnerabilities in liberal democracies. The Foreign Affairs Committee warns that foreign states are increasingly using disinformation as a strategic tool to undermine open societies, combining digital influence operations with broader hybrid tactics. The analysis finds that liberal democracies remain particularly exposed due to their openness, with the UK’s response described as fragmented and insufficiently coordinated. It calls for a more comprehensive national strategy, stronger institutional capacity, and closer international cooperation to improve resilience against evolving information threats.

Epistemic divides between Western rationalism and Russian “Sophia” underpin information warfare strategies. Marianna Prysiazhniuk’s analysis argues that Western and Russian conceptions of truth operate within fundamentally different epistemological frameworks, creating structural vulnerabilities in the information domain. Western traditions emphasise empirical verification and rational discourse, while Russian approaches prioritise holistic and ideologically coherent understandings of truth, often subordinated to power. This asymmetry enables the exploitation of liberal epistemic norms, where openness and evidence-based reasoning become weaknesses in adversarial contexts. The study concludes that these dynamics facilitate disinformation and reflexive control strategies that erode trust and hinder coordinated responses.


Brussels Corner

Council conclusions on advancing the EU’s capacity to counter hybrid threats
On 16 March, the EU Member States in the Council of the European Union adopted a policy document (“conclusions”) on countering hybrid threats. Such documents are written in very diplomatic language. Of note are the invitation to “the Commission and the High Representative, where applicable, to make full use of the relevant Union instruments, including through the full implementation and enforcement of the Digital Services Act,” and the even more plaintive call on online platforms (the largest of which, with the exception of X, are part of the Code of Conduct on Disinformation)  “to enhance cooperation with the EU and the Member States and implement ambitious and robust measures to counter” hybrid threats. In the same week, French President Emmanuel Macron reportedly wrote to Commission President von der Leyen calling inter alia for an update of the guidelines, adopted under the DSA, on mitigating systemic risks for electoral processes.


Reading & resources

US Court ruling sheds light on political interference in climate science. A recent legal victory reveals how the Trump administration sought to undermine climate science, highlighting broader risks of political interference in knowledge production. The case raises concerns about how disinformation and institutional pressure can shape public understanding of scientific issues.

Corporate disinformation tactics from greenwashing to gaslighting. The “Toxic Accounts” report examines how companies deploy misleading narratives to shape public perception, from exaggerated sustainability claims to manipulative messaging strategies. The analysis expands the scope of disinformation debates beyond politics to corporate influence.

Climate disinformation and fossil fuel reliance pose security risks. Former defence officials have warned that continued dependence on fossil fuels, combined with coordinated climate disinformation campaigns, represents a growing threat to national security. The analysis links environmental policy, information integrity, and geopolitical stability, highlighting the strategic implications of delayed climate action.

US establishes Bureau of Emerging Threats to address evolving risks. The United States has launched a new Bureau of Emerging Threats aimed at tackling complex challenges such as cyber operations, disinformation, and technological risks. The initiative signals a strategic shift towards more integrated responses to hybrid threats in an increasingly contested global information environment.

AI-driven cognitive manipulation set to reshape disinformation landscape. A World Economic Forum analysis warns that advances in artificial intelligence will accelerate the scale and sophistication of disinformation, particularly through targeted cognitive manipulation techniques. The report emphasises the need for resilience strategies, including media literacy, regulatory frameworks, and cross-sector collaboration to mitigate emerging risks.

Fraudulent Facebook ads fuel global investment scams. A surge in fake advertisements on Facebook has been linked to international investment fraud schemes, exploiting the platform’s reach to target unsuspecting users. The trend highlights persistent challenges in moderating online advertising ecosystems and preventing financial exploitation through deceptive digital campaigns.

Meta expands AI tools and partnerships to combat online scams. Meta has announced new measures to tackle scams and fraud across its platforms, including AI systems to detect impersonation and coordinated criminal activity. The company says it is working with financial institutions and law enforcement, reflecting a broader shift toward automated enforcement and cross-sector cooperation.

Cyberattack targets Die Linke headquarters IT systems. Germany’s Left Party, Die Linke, has reported a cyberattack affecting the IT infrastructure of its central office, with indications of a ransomware incident. The breach has disrupted internal operations and prompted an ongoing investigation, with no confirmed attribution to foreign actors, highlighting persistent cybersecurity risks facing political organisations and the potential impact on party activities and sensitive data.

Resources and trainings:

  • Prodigioso Volcán will host a paid online training “Bulos y crisis: estrategias contra la desinformación” (in Spanish) on 21-22 April, covering disinformation narratives, AI-driven manipulation, and strategies to respond to disinformation crises.
  • Institute for Information Law (IViR) will host a paid five-day summer course on European platform regulation in Amsterdam on 29 June-3 July, offering a deep dive into EU digital policy, including the Digital Services Act and Digital Markets Act, with lectures from academics, policymakers, and practitioners.
  • Indicator has launched OSINT Navigator, a beta tool that helps investigators find relevant OSINT tools through natural language queries. Drawing on a curated dataset of nearly 7,500 tools from major OSINT toolkits, it suggests resources for tasks such as tracking crypto transactions or identifying website owners.

The latest from EU DisinfoLab

We’ve just published the first blogpost in a new series on the liability of online intermediaries. Written by our Senior Policy Expert Joe McNamee, the piece offers a timely and insightful overview of the EU and US approaches, tracing their shared origins while highlighting key differences in how they balance innovation, responsibility, and freedom of expression.


This week’s recommended read

This week’s recommended read is brought to you by Maria Giovanna Sessa, Research Manager at EU DisinfoLab.

A must-read if you’ve ever wondered what might happen to your Zoom calls once AI tools get involved. The 404 Media investigation unpacks how a little-known company is quietly transforming meeting recordings into AI-generated podcast-style content – often without participants fully realising it – highlighting the blurry line between productivity tools and data exploitation. 

At a time when AI meeting summaries and transcription features are becoming standard, this raises urgent questions about consent, secondary data use, and how easily everyday conversations can be repackaged into entirely new, monetisable products.


👀  Spotted: EU DisinfoLab 

On 18 March, our Executive Director Alexandre Alaphilippe participated in the European External Action Service’s annual FIMI conference, contributing to a panel on raising the costs for perpetrators, sponsors, and enablers of FIMI. He emphasised the need for a shift from developing new frameworks to actively enforcing existing tools, highlighting recent legal actions in France and ongoing civil society efforts to implement the Digital Services Act at national level.

He also authored an op-ed titled “The DSA Showed Teeth Against X but the EU Is Afraid to Call It a Win” on Tech Policy Press, reflecting on the first enforcement actions under the Digital Services Act and the broader questions around transparency and enforcement in EU platform regulation.

Over the past two weeks, our team has been active in Brussels, Tallinn, Vilnius, and Paris; hosting community meetups, building meaningful connections, and sharing our mission across Europe. A big thank you to everyone who joined us along the way! Your energy and insights are what make this work meaningful. These moments together continue to strengthen our network and reinforce our collective effort to combat disinformation at scale.

→ Want to join one of our upcoming meetups? Just reply to this email; we’d love to hear from you.


Events & announcements  


🧡 THINGS WE LOVE FROM OUR COMMUNITY

This week, we appreciated a thoughtful LinkedIn post by Alberto F, senior researcher at The Citizen Lab, reflecting on Meta’s latest Adversarial Threat Report.

While acknowledging the strong investigative work behind the report, he raises important questions about shifts in reporting frequency, the reduced prominence of coordinated inauthentic behavior (CIB), and the narrow focus on traditional geopolitical adversaries.

His post is a timely reminder of the importance of maintaining transparency, consistency, and critical scrutiny in how influence operations are documented and communicated.


Jobs

  • Pagella Politica is hiring a full-time Social Media Manager (Milan-based, with partial or full remote options). The role covers strategy, social campaigns and digital marketing across Pagella Politica and Facta.
  • The CyberPeace Institute is hiring an EU Project Researcher to support EU-funded digital policy and cybersecurity projects, focusing on legal research and reporting.
  • The Center for Countering Digital Hate (CCDH) has several open positions, including a Database Manager (US-based), a Senior Policy Adviser (US-based) and a Policy Officer (Brussels-based).
  • OpenAI is looking for a Global Safety Response Operations Analyst. Open until filled.
  • Alice (ActiveFence) is offering several positions; scroll their page to view all open roles.
  • NewsGuard is seeking a full-time Staff Reporter to analyse and rate news sources, as well as an Editorial Intern and a Business Development and Social Media Intern.
  • Moonshot is seeking an OSINT Analyst (London-based) and a full-time Digital Advertising Specialist.
  • The Center for Democracy & Technology (CDT) has several roles open, including a Legal Fellow and a Senior Policy Analyst.
  • ProPublica is currently hiring for several roles, including a Deputy Research Editor, a Visuals Editor, and a Washington Reporter covering defense (D.C.-based).
  • The University of Groningen is launching a PhD project examining how climate messaging can counter misinformation and reduce polarisation.
  • Europol is hiring a Senior Operational Analyst to support investigations through criminal intelligence analysis, reporting, and analytical capability development.


Did you find a job thanks to the listing in this newsletter? We’d love to know – please drop us a message!

Have something to share – an event, job opening, publication? Send your suggestions via the “get in touch” form below, and we’ll consider them for the next edition of Disinfo Update.