Dear Disinfo Update readers,
Welcome back to a new edition of Disinfo Update from EU DisinfoLab. In a fast-changing geopolitical context, and at a time when funding for civil society is under increasing pressure, this edition focuses on three priorities.
First, we encourage the community to mobilise around AgoraEU to help unlock sustainable EU funding for the next budget cycle. We’ll host a specific briefing on this matter and have resources available here. This is a key moment to ensure that counter-disinformation work remains properly supported and independent in the years to come.
Second, we are stepping up our focus on effective enforcement. With the Digital Services Act (DSA) now in force, the question is no longer only what the rules say, but how they are applied in practice. This is why we are launching a new webinar series on enforcement, exploring the legal grounds for the DSA fine against X, how civil society evidence contributes to these investigations, and what these processes cost organisations.
Finally, we also focus on developments in EU Member States, where new legal and policy initiatives show growing momentum to address the root causes of disinformation.
As always, everyone can engage in a way that fits their capacity, by participating, sharing, contributing, or simply staying informed. Together, we can keep building a stronger and more resilient European information space.
🔔 And one last thing: our #Disinfo2026 call for proposals is still open until the end of the month. What should we be discussing in Vilnius this 6-8 October? Send us your ideas, research, and bold questions. This is your conference too: help shape the programme.
Our Webinars
UPCOMING – REGISTER NOW!
19 February
Briefing: How to unlock more EU funding for 2028-2034 to counter disinformation?

The counter-disinformation community faces serious financial uncertainty. As philanthropic and platform funding declines and no sustainable EU alternative is yet in place, coordinated action is urgently needed to safeguard Europe’s information ecosystem. In this webinar, our Katrīna Luīze Ašmane will present AgoraEU, a proposed programme to fund research and investigations countering disinformation. This is an insider session: a webinar intended exclusively for civil society, including non-profits, academics, and researchers working on countering disinformation. Registrations will be reviewed and validated on the basis of the professional email address provided.
26 February
Forced to quit: gendered disinformation, synthetic abuse, and political violence

Women are being pushed out of public life through coordinated gendered disinformation and harassment, increasingly amplified by synthetic and multimedia content. In this webinar, Marília Gehrke (University of Groningen) explores the “triangle of violence”, examining the relationship between content, victims, and audiences, to show how gendered disinformation operates as a sustained system rather than isolated attacks. Drawing on the Forced to Quit initiative, she highlights political exit as a measurable outcome of this abuse.
5 March
Synthetic friends: AI companions and the future of disinformation

Artificial intelligence is shifting from generating content to building relationships. AI companions are designed to inform, adapt, and sustain emotional bonds over time. In this webinar, Massimo Flore explores how this shift could transform disinformation: when credibility is embedded in private human–AI relationships, persuasion operates through relational trust rather than visible content. Introducing the concept of the epistemic cocoon, the session examines the implications for counter-disinformation and governance.
19 March
How can civil society defend itself? The EDRN pilot story

Join us for a behind-the-scenes look at the European Democracy Resilience Network (EDRN) pilot, a joint initiative by the CyberPeace Institute and EU DisinfoLab supporting civil society facing hybrid threats. Expanding on the CyberPeace Builders programme, EDRN addresses disinformation, doxing, impersonation, and other digital attacks. Inês Narciso and Tanner Wagner will share key insights from the pilot, including what CSOs need to stay resilient and why collaboration in defence is essential.
New Series: ‘EVIDENCE & ENFORCEMENT’
This new series shifts the focus from rules to results, exploring how evidence gathered by researchers and civil society feeds into enforcement action, and what meaningful accountability requires in practice. We launch the series with two upcoming webinars:
10 March
DSA: Unfolding the European Commission’s first decision against X

In December 2025, the European Commission issued its first non-compliance decision under the Digital Services Act (DSA), fining X €120 million – a major milestone that sets the tone for DSA enforcement. The decision found breaches related to deceptive design, advertising transparency and access to data for researchers, and highlights the role of civil society in DSA proceedings. In this webinar, Laureline Lemoine from AWO unpacks the Commission’s legal reasoning, how the process unfolded, and what the decision means for civil society organisations seeking to increase their impact under the DSA.
9 April
Civil society evidence under the DSA: lessons from AI Forensics

The Commission’s decision to fine X under the DSA was strongly supported by evidence produced by civil society, yet collecting and submitting usable evidence remains difficult in practice. In this closed webinar, Marc Faddoul from AI Forensics shares how evidence was gathered and structured, what was learned from engaging with the Commission, and what risks emerged when protections failed, including concerns following the publication of the full decision that affected civil society contributors.
PAST – WATCH THE RECORDINGS!
- Scroll, stop, verify. How the EU is bringing climate facts forward | Jeremy Herry from the European Commission (DG CLIMA) introduces their newly launched campaign #ClimateFactsMatter. This is followed by Stephanie Hankey (Tactical Tech), who presents The RePlaybook.
- Who is most vulnerable to AI-generated mis/disinformation? Psychological drivers of media literacy and belief in harmful online content | Led by Dr Jason Potel (Goldsmiths), this session explores the psychological factors shaping responses to AI-generated and online disinformation, showing how feelings of low control, confidence, or social connection can increase susceptibility.
Don’t miss out, watch the recordings and explore all our past EU DisinfoLab webinars.
🧡 A huge thank you to all our speakers, partners, and participants for making every conversation sharper, deeper, and more impactful. If your company or institution is interested in partnering with us and sponsoring our webinars, please reach out to discuss how we can work together: info@disinfo.eu
Disinfo news & updates
EU MEMBER STATES TAKE ACTION
- European probes target X and Grok over data and sexual deepfakes. French authorities have raided X’s Paris offices as part of a probe into alleged unlawful data extraction and the spread of illegal sexual content. UK regulators have opened a separate investigation into Elon Musk’s AI tool Grok over concerns it generated sexualised deepfakes without consent. The cases mark escalating European scrutiny of X and its AI operations.
- EU ‘solidarity’ with Spain’s social media restrictions. Spain’s Prime Minister Pedro Sánchez has announced a sweeping digital crackdown, including banning social media for under-16s, criminalising algorithm manipulation that amplifies illegal content, and holding platform executives legally accountable for harmful material. The plan also introduces measures to track online hate and polarisation, signalling a more stringent approach to digital regulation. These proposals have faced criticism from figures like Elon Musk and Pavel Durov, the founder of Telegram. While the European Commission’s spokesperson for digital affairs, Thomas Regnier, noted the challenges of enforcing these measures under the Digital Services Act (DSA), the Commission later expressed “solidarity” with Spain and other EU countries moving in the same direction: France’s Assemblée Nationale has approved a bill banning social media for children under 15, and Greece is also considering a social media ban for minors.
- France expands VIGINUM’s powers ahead of elections. France aims to counter foreign digital interference, especially from Russia, ahead of upcoming elections by granting VIGINUM expanded powers. The new decree allows the agency to monitor data from search engines, AI tools, and smaller social media accounts.
- French Response: Meme diplomacy gains traction. Launched in September 2025, the Foreign Ministry’s “French Response” X account is now drawing attention for its rapid, meme-driven rebuttals to pro-Russian narratives and viral false claims. With 180,000+ followers and around 15 million weekly impressions, the initiative marks a more aggressive state-led approach to countering online disinformation.
TRACKING FIMI
Deep dives:
- Propaganda machine across three continents. Forbidden Stories reveals that a leak of 1,400+ internal documents exposes a Russia-linked network running influence campaigns across Africa, Latin America and parts of the Middle East, backed by a €7.3 million 2024 budget. The investigation alleges oversight by Russia’s foreign intelligence service (SVR), detailing tactics from paid media placement to destabilisation efforts.
- Mapping GRU information ops through medals and OSINT. A Check First investigation uses military insignia analysis and open-source intelligence to trace Russia’s GRU Information Operations Troops (VIO). By examining 118 photos of medals and symbols, researchers partially reconstructed the units’ structure, facilities and command chain, shedding light on the state actors behind covert cyber and influence operations targeting Europe.
- Kremlin network dominates German Telegram politics. An investigation by The Insider and Novaya Gazeta Europe finds that roughly half of political content in Germany’s Telegram space is driven by a Kremlin-linked network tied to RT and actors connected to a GRU psychological operations unit.
- Inside the IRA: What 350 CVs reveal. New research maps the inner workings of Russia’s Internet Research Agency (IRA) “troll factory” using 350 former employees’ CVs, revealing a structured, multi-level organisation that recruited young graduates and operated like a professional media and PR company.
Operational snapshots:
- Russia tests domain controls and disrupts YouTube access. Russia has introduced a new method to block YouTube, Instagram, and WhatsApp by removing their domains from the National System of Domain Names (NSDI), making them inaccessible without a VPN. This move follows the ongoing “slowdown” of YouTube and is seen as a way to save resources while targeting platforms like Telegram.
- Stolen Aussie accounts repurposed in pro-China influence push. X accounts previously posting about everyday Australian issues have been hijacked and repurposed to push pro-China messaging. The profiles began circulating corruption claims about flood recovery in the Philippines, in a suspected AI-enabled influence operation.
- Olympics disinfo offensive. A Kremlin-aligned network, Matryoshka, seeded at least 28 fabricated reports during the 2026 Winter Olympics, impersonating outlets like CBC and Reuters. The AI-enhanced clips falsely portrayed Ukrainian athletes as criminals and cheaters, spreading from Telegram to wider pro-Kremlin networks.
- Cross-border narratives target Ukraine. Coordinated campaigns targeting the Ukrainian Armed Forces and foreign volunteers generated millions of views, alongside largely AI-generated false corruption claims about President Volodymyr Zelensky.
- As the last US–Russia nuclear arms control treaty expires, pro-Kremlin narratives shift blame. Following the 6 February 2026 end of New START, the agreement that capped strategic nuclear weapons and enabled mutual inspections, pro-Kremlin outlets have portrayed Moscow as the responsible actor seeking to preserve nuclear stability, while blaming the West for the treaty’s collapse and amplifying nuclear brinkmanship rhetoric.
- Foreign influence threatens France’s municipal elections. Ahead of France’s municipal elections in March 2026, foreign interference looms, with over 80 fake news websites linked to pro-Russian groups targeting local campaigns.
US POLITICS MEETS EU REGULATION
- Republican lawmakers accuse the European Commission of censoring American speech. A new report from US Republican lawmakers claims the European Commission has pressured social media platforms to alter their content moderation policies, affecting free speech in the US. The report highlights concerns over the global reach of European regulations like the Digital Services Act, accusing the EU of exporting censorship and targeting conservative views.
- Privacy concerns and Commission silence. European civil society organisations are alarmed after the US report revealed EU officials’ and NGO members’ names in communications with tech platforms, while redacting the names of platform employees. Additionally, the European Commission has remained silent, failing to defend the same civil society organisations whose work it relies on for the enforcement of EU regulation.
- Double-speak. In parallel, NetChoice, a coalition of private tech companies, published its letter to the US House of Representatives, “applaud[ing] the Committee’s continued, bold leadership to put an end to foreign censorship efforts that are threatening Americans’ rights online.” The list of NetChoice members is available here.

- MAGA fund to destabilise Europe. Donald Trump has been accused by European lawmakers and campaigners of attempting to destabilise Europe through a new MAGA fund that supports think tanks and charities aligned with his agenda. At a recent event in the European Parliament, far-right groups and Trump allies rallied against EU regulations like the Digital Services Act.
AI DISINFO UPDATE
- X is testing an AI-driven version of Community Notes. The experimental feature, called “Collaborative Notes,” uses Grok to generate a draft fact-check after a contributor requests context on a post. Other contributors can then rate and edit the draft. While framed as a way to scale Community Notes and add context faster, the move signals a deeper integration of AI into the platform’s fact-checking architecture.
- Emotional reach beats AI warnings. New research from OpenMinds suggests that labelling AI-generated content is no guarantee of reduced influence. Analysing 18,500 comments on AI-generated videos posted on a likely Russian-linked YouTube channel, researchers found that 40% of users expressed support or empathy towards AI-generated war characters, while only 13% explicitly recognised the images as synthetic. Corrective comments attracted significantly less engagement. The findings reinforce growing evidence that transparency measures alone are not a silver bullet against emotionally charged synthetic influence operations.
- Emerging “AI swarms”. Combining large language models with autonomous agents could turn traditional botnets into adaptive “AI swarms.” Unlike rigid bots, these systems could coordinate, infiltrate communities and adjust messaging in real time, scaling influence operations and making them harder to detect.
- AI ads divide chatbot makers. OpenAI’s move to test ads on the free tier of ChatGPT has prompted criticism from rival Anthropic, which warned that competitors could embed targeted ads within chatbot responses, a claim OpenAI denies. The exchange highlights wider concerns that advertising in LLM systems could blur neutrality and subtly influence users, turning AI monetisation into a growing governance and trust issue.
🔎 For more AI-related disinformation news and resources, see our regularly updated AI Disinfo Hub.
COMMUNITY ACTION IN HUNGARY
- X to court over election data access. Researchers, led by Democracy Reporting International (DRI), have sued X to force the platform to grant access to data related to Hungary’s elections. They argue that EU law requires transparency to assess risks such as disinformation and foreign interference. The case tests the enforcement of the Digital Services Act, which mandates that very large platforms provide vetted researchers with access to data for public-interest scrutiny. DRI has previously taken legal action against X over similar transparency obligations and secured a favourable ruling.
- Deepfakes and ads. Lakmusz’s latest weekly roundup shows how political advertising continues to circulate despite Meta and TikTok bans. A pro-government group promoted an AI-generated deepfake video, later shared by Viktor Orbán, as a sponsored ad, while misleading claims and anti-Ukraine narratives gain traction ahead of the April elections, exposing clear enforcement gaps.
👉 Scroll down to our “One thing we loved” section for a closer look at how community members unpacked this case, and how, step by step, “it gets worse.”
Brussels Corner
Update – Funding the information ecosystem in AgoraEU
The Presidency of the Council of the European Union, currently held by Cyprus, has refined its approach to the EU’s Multiannual Financial Framework (MFF) 2028-2034, focusing so far on technical work. The file returns to the political level in March, with key discussions among European Affairs Ministers on 2-3 March (informal) and 17 March (formal), ahead of an exchange between EU leaders at the 19 March summit.
In parallel, the Council circulated a second draft compromise on 9 February, reinforcing the distinction between the “Culture” and “Media+” components and adding sectoral support for music and publishing.
The new AgoraEU programme represents a unique opportunity to strengthen Europe’s long-term resilience. However, its current design remains primarily centred on supporting media, journalism, and fact-checking, while other essential civil society organisations, such as counter-disinformation actors, are still treated as secondary beneficiaries. To fully realise the programme’s objectives, civil society organisations working to counter disinformation must be recognised as core actors, with access to dedicated funding, full inclusion in anti-SLAPP (strategic lawsuits against public participation) protections, and sustainable funding mechanisms that safeguard their independence and ensure long-term operational capacity.
The compromise amendments circulated on 9 February do not yet address these concerns.
If you are a civil society member of the EU DisinfoLab community, including non-profits, academics, and researchers working on countering disinformation, you can request to join our community briefing session on 19 February, 14:30-15:30 CET, “How to unlock more EU funding for 2028-2034 to counter disinformation?”.
Preliminary findings on TikTok
On 6 February, the European Commission preliminarily found TikTok in breach of the Digital Services Act (DSA) with regard to design features that make the platform addictive. These include infinite scroll, autoplay, push notifications, and a highly personalised recommendation system designed to keep users engaged for longer.
The Commission found that TikTok did not properly assess how these features could harm users’ physical and mental wellbeing, especially for children and vulnerable adults. It also appears that TikTok failed to put in place effective measures to reduce these risks. Tools like screen time limits and parental controls do not seem to meaningfully address the problems caused by addictive design.
TikTok now has the opportunity to respond to the Commission’s preliminary findings. However, the final decision could be delayed by lengthy arguments, as was the case with X, where it took 17 months from preliminary findings to a final ruling.
See you in September 2027? (We hope sooner!)
EMFA article 18 guidelines
The European Commission has published its long-awaited guidelines on Article 18 of the European Media Freedom Act (EMFA), which introduces a “media privilege” for (very large) online platforms. Under Article 18, very large online platforms (VLOPs) must engage in dialogue with media providers before removing or restricting their content, reflecting the idea that media organisations are subject to additional regulation and deserve stronger procedural safeguards.
EU DisinfoLab lobbied against the media exemption when it was first proposed as an amendment to the Digital Services Act (DSA); that amendment was rejected by the European Parliament’s Internal Market and Consumer Protection Committee (IMCO). However, the policy was later proposed and adopted in the EMFA under the responsibility of the Culture and Education Committee (CULT). More on EU DisinfoLab’s position against the media exemption can be read here.
Under the EMFA, entities claiming to be media service providers must submit a declaration to a very large online platform stating that they meet the necessary criteria. The platform then decides whether to accept the declaration or whether it has a “reasonable doubt” about it; in the latter case, it can ask national authorities for verification. Alternatively, platforms can simply accept all declarations.
Last year, the Commission published a call for consultations on these guidelines. EU DisinfoLab responded, warning that the “reasonable doubt” threshold is too high and unclear, and calling for a standard electronic system that would automatically share declarations with both platforms and national regulators, increasing transparency and accountability and reducing the risk that false or misleading declarations shield harmful or illegal content under the provision.
While the guidelines’ attempt to clarify “reasonable doubt” acknowledges our concerns, it is difficult to understand why the Commission did not take the simple step we suggested of automatically informing national regulators.
In the guidelines, the Commission also sees a (new pro-bono?) role for civil society organisations in reviewing the declarations, a role that could have been filled more easily, more authoritatively, and more efficiently by national authorities, if the declarations were automatically sent to them.
WhatsApp becomes the first messaging VLOP under the DSA
The European Commission has officially designated WhatsApp as a Very Large Online Platform (VLOP) under the Digital Services Act (DSA). The decision is based on WhatsApp’s Channels feature, which has reached more than 45 million users in the EU, the threshold for stricter regulation.
This makes WhatsApp the first messaging service to fall under the EU’s more comprehensive online rules. As a VLOP, WhatsApp must now take concrete steps to identify and reduce systemic risks, such as content that could harm children’s wellbeing or undermine democratic processes.
From the moment of designation, WhatsApp has four months to comply with its new obligations, although the Commission has not yet announced the exact start date.
Meanwhile, Telegram has not been designated as a VLOP, as it has not been found to exceed the 45 million user threshold in the EU. Since Telegram has its EU headquarters in Belgium, its compliance with the DSA is currently overseen by the Belgian telecom regulator (IBPT/BIPT), at least until it is found to have crossed the VLOP threshold.
Reading & resources
- A timely call to rebuild trust. A strong piece from Andy Pryce argues that rebuilding local civic connections and revitalising trusted public media are essential to drying up the swamp of discontent where disinformation thrives. As trust in institutions declines, hostile actors gain fertile ground for hybrid threats. Pryce makes the case for investing in civil society, regional journalism, and public engagement as pillars of national resilience.
- Mainstream media’s role in amplifying climate denial. The BBC has faced criticism for giving climate change denier Diana Furchtgott-Roth a platform on its Today programme. Furchtgott-Roth, a former Trump advisor, dismissed the severity of climate change, making misleading statements that were not challenged by the presenter.
- Climate journalism rollback. The Jeff Bezos-owned Washington Post has reportedly laid off much of its climate staff as part of broader newsroom cuts, significantly reducing the size of a team that had been expanded in recent years.
- Don’t declare fact-checking dead. Responding to claims that fact-checking “doesn’t work,” International Fact-Checking Network (IFCN) director Angie Drobnic Holan points to research showing it slows the spread of viral falsehoods. She warns that growing scepticism and platform rollbacks risk weakening one of the core defences against disinformation.
- AI, influence and the future of the information battlefield. A new NATO StratCom report explores how AI, automation and emerging technologies are reshaping the global information environment. It warns of algorithmic influence operations, data poisoning of AI systems, and growing risks to democratic resilience as machines increasingly shape what societies see, know and believe.
- Digital threats and election integrity: lessons from the 2025 Dutch vote. The Hybrid Election Integrity Observatory (HEIO) Final Report assesses online risks during the October 2025 Dutch parliamentary elections. It finds that while the vote remained free and fair, it faced sustained digital pressure, including AI-generated propaganda, coordinated inauthentic behaviour, fraud narratives, and weak platform enforcement. The report calls for stronger transparency, faster moderation, and improved platform accountability ahead of future elections.
- NewsGuard has filed a lawsuit against the US Federal Trade Commission (FTC), accusing the agency of violating its First Amendment rights. The company claims the FTC improperly targeted it over its media reliability ratings, including by restricting its business relationships during a major ad industry merger review. NewsGuard argues the case is about protecting independent journalism from government retaliation.
This week’s recommended read
This week’s recommended read is brought to you by Joe McNamee, our Senior Policy Expert.
Joe points us towards the Digital Services Act Hub, which he describes as a “treasure trove of news, analysis, training courses and even a primer for those starting to learn about the DSA, expertly curated by Martin Husovec, Associate Professor of Law at the London School of Economics.”
This resource is particularly valuable for anyone interested in the Digital Services Act, whether you’re just starting to explore it or already have a solid understanding. The Hub includes a primer for newcomers, online courses for those aiming to gain deep knowledge of key aspects of the legislation, a newsletter to stay informed of the latest updates, and blog posts discussing current topics, controversies, and campaigns. It’s a comprehensive platform for anyone looking to engage with the legislation at any level of expertise.
👀 Spotted: EU DisinfoLab
Alexandre Alaphilippe, Executive Director of EU DisinfoLab, was recently interviewed by Contexte about the privacy concerns raised by the US House Judiciary Committee’s investigation into EU content moderation practices. He expressed concern over the exposure of EU officials’ and civil society members’ names in confidential communications, while platform employees’ identities were redacted. Alexandre states:
“The document has now been leaked, and there is a history of confidentiality breaches. We’re not dealing with people of goodwill, and they control the narrative. It raises the question whether the Commission is truly capable of handling such sensitive issues.”
Read more here.
Events & announcements
- Present-June: The Cyber for Good Media programme is running with the mission of protecting and better equipping journalists against interference and manipulation in the digital space, with a focus on OSINT and cybersecurity.
- 24 February: Wikimedia event ‘Information Integrity & Wikipedia: How community-governed platforms can inform future policy-making’. Happening at the European Parliament, Brussels.
- 25 February: This year’s Digital Platforms Summit 2026 will examine how the Digital Markets Act (DMA) is reshaping online markets and enforcement, while looking ahead to the upcoming Digital Fairness Act (DFA). The event will explore platform governance, consumer and child protection, dark patterns, interoperability, and the future of EU digital regulation, alongside new research findings from CERRE.
- 25 February: This webinar hosted by Graphika, How Scammers Impersonate Brands Across Platforms, examines how scammers are industrialising impersonation with tactics that can target any organisation.
- 8-10 April: The Cambridge Disinformation Summit is expected to gather the world’s leading scholars, professionals, and policy-makers to explore interventions on systemic risks from disinformation.
- 15–18 June: Disinformation Summer Institute 2026. This four-day in-person institute in California, US, will bring together early-career researchers and senior experts for lectures, panels, and discussions on studying and countering disinformation.
- 7–8 September: EDMO BELUX 2.0 final conference ”Countering Disinformation, Raising Democratic Resilience” will be organised in Brussels.
- 6–8 October: #Disinfo2026. EU DisinfoLab’s annual conference will take place in Vilnius, Lithuania. Save the date. The call for proposals is open until the end of this month!
- Other initiatives:
- Call for collaborators (deadline: 20 March 2026): Tactical Tech is seeking experienced investigators and media professionals to develop learning resources and deliver training on AI power structures, climate and information disorder, OSINT methods, and digital influence. The collaboration is part of the EU-funded Collaborative and Investigative Journalism Initiative (CIJI).
- The Data Tank is inviting small and medium media and fact-checking organisations to join a new action research project. It aims to build collective leverage over Big GenAI, protecting media sustainability and information integrity across Europe.
- Call for contributions (deadline: 8 March 2026): EDMO BELUX 2.0 Final Conference. EDMO BELUX 2.0 invites proposals for papers/studies, themed panels and hands-on workshops on fact-checking, media literacy and disinformation research, with a focus on Belgium/Luxembourg and EU policy implementation.
- Call for papers (deadline: 15 September 2026): The Journal of Marketing Management invites submissions examining how platform economies, ad tech, recommender systems and creator monetisation shape the spread of disinformation, and what interventions could strengthen societal resilience.
🧡 One thing we loved
We’re proud to spotlight the work of our community members, and this LinkedIn thread is a great example of why that matters.
Tommaso Canetta, Deputy Director of Pagella Politica/Facta and Coordinator of fact-checking activities at EDMO and IDMO, unpacks how AI-driven political disinformation is escalating in Hungary ahead of the elections. He traces how a pro-government group circulated an AI-generated “phone call” between opposition leader Péter Magyar and Ursula von der Leyen, and then builds the case layer by layer: several politicians (including Viktor Orbán) amplified it; the video ran as a sponsored Facebook ad despite being clearly synthetic; and a pro-government outlet reported it in a way that blurred the line between fiction and reality. Each step is introduced with a stark refrain: “but it gets worse.”
Aleksandra Atanasova, OSINT Investigations Lead at Reset Tech, adds the platform perspective: the deepfake carries no AI label, and reporting it to Meta is far from straightforward: the “Mark the content as generated by AI” option is buried behind multiple clicks. A sharp reminder that platform design choices can amplify harm, especially in election contexts.
Jobs
- Maldita.es is hiring a (Spanish-speaking) Senior editor specialising in policy, legislation and/or judicial affairs
- OpenAI is looking for a Global safety response operations analyst. Open until filled.
- Alice (ActiveFence) is offering several positions; scroll their page to view all open roles.
- NewsGuard is seeking a full-time Staff Reporter to analyse and rate news sources, as well as report on trends in digital news and online misinformation.
- Moonshot is seeking an OSINT Analyst (London-based) and a full-time Campaigns Analyst (US, remote).
Did you find a job thanks to the listing in this newsletter? We’d love to know – please drop us a message!
Have something to share – an event, job opening, publication? Send your suggestions via the “get in touch” form below, and we’ll consider them for the next edition of Disinfo Update.
