Dear Disinfo Update reader,

In this edition, we navigate the tough climate of disinformation: taking climate quite literally with new disinformation strategies and their impacts during elections and crises, unveiling deepfakes and platform regulation, and, once again, following the relentless Kremlin trail in Europe, from FIMI to attempted arson charges.

The information landscape is wide, and so is our coverage. Our #Disinfo2025 conference in Ljubljana will keep the same spirit, covering community resilience and response, emerging threats, the role of digital platforms and evolving technologies in enabling disinformation, as well as regional case studies from across the globe, unpacking how disinformation adapts to different political, cultural, and social contexts – and how responses are evolving in turn.

Want to know more? Find more on the four conference tracks on our website and stay tuned as we share more in our upcoming newsletters. And if this early glimpse piqued your interest, secure your ticket here! Early bird rates expire on 27 May.

Scroll down and dive in!

Our webinars

UPCOMING – REGISTER NOW!

  • 15 May: 2025: Climate disinfo reload | AI bots are talking green while pushing denial. Trump calls global warming “good for us.” Musk goes from EV hero to climate contrarian. In this explosive webinar, Global Witness and Ripple Research reveal how 2025’s climate disinformation is smarter, louder, and more dangerous, stretching from Washington to cities around the world. Learn how the disinfo economy is thriving and what can still be done to stop it. One hour. Two experts. Zero time to waste.
  • 22 May: Mapping Manipulation: How the FIMI Exposure Matrix sharpens attribution and reveals connections | After defining Foreign Information Manipulation and Interference (FIMI) and exploring responses to this growing threat, the European External Action Service (EEAS) takes a bold methodological leap in its latest report that we are bringing to you in an exciting EU DisinfoLab webinar. The core of this report is the FIMI Exposure Matrix, a major step forward in the field of attribution. This webinar will discuss the matrix and its four-tier classification of information channels, as well as unpack the dynamics of Russian and Chinese FIMI campaigns, drawing from previous attacks. 
  • 29 May: UK riots: how coordinated messaging mobilised rioters — lessons in detecting inauthentic behaviour | In this webinar, Kamila Koronska from the University of Amsterdam will present new research on how coordinated messaging on X and Facebook helped spark the 2024 UK riots, revealing how disinformation and hate speech fuelled unrest both online and offline.

PAST – WATCH THE RECORDINGS!

  • Exploring political advertising data – who targets me? | In this session, Sam Jeffers showcased tools and dashboards developed by Who Targets Me to explore how political advertising campaign spending, targeting, and messaging are tracked across platforms. We looked at the data gathered by Who Targets Me, how to access it, and how to better read political advertising campaigns, particularly during elections, with examples from the UK.
  • Influence of foreign narratives on contemporary conflicts in France (in French with subtitles available) | This webinar delved into how foreign interference shapes public opinion and threatens democratic resilience, drawing on insights from Laurent Cordonier’s recent report on disinformation narratives in France.

Disinfo news & updates 

Platforms, disinfo, and societal impact

  • A chilling precedent. In a move raising serious alarm, Darren Beattie – a Trump appointee at the State Department, former White House speechwriter, and founder of the far-right outlet Revolver News – initiated a sweeping and targeted effort to expose internal communications from a State Department office tasked with countering foreign disinformation. Modelled after the “Twitter Files” playbook, Beattie’s request sought unredacted emails referencing dozens of journalists, researchers, and critics of Trump – many long vilified in right-wing media. Framed as a transparency effort, the operation amounts to a politically driven dragnet with serious risks of harassment, retaliation, and exposure of individuals and sensitive grantees. Experts warn the move is designed to produce a chilling effect on those researching disinformation and protecting democratic discourse – undermining oversight while fuelling narratives of institutional censorship.
  • Unmasking the perpetrator. An investigation led by Bellingcat, in collaboration with Danish outlets TjekDet, Politiken and the Canadian Broadcasting Corporation (CBC), has revealed the identity of a key administrator behind MrDeepFakes – an unassuming Canadian pharmacist based outside Toronto. MrDeepFakes was the largest platform for celebrity deepfake pornography, hosting 70,000 explicit and sometimes violent videos and visited millions of times every month. The site was shut down on 4 May, a week after the US Congress passed the Take It Down Act, which criminalises the distribution of non-consensual deepfake porn.
  • False judgment. Conservative social media users are sharing AI-generated images supposedly showing Wisconsin Circuit Judge Hannah Dugan’s mugshots and jail booking pictures, in an effort to disparage her and depict her as hysterical. The judge was arrested in late April 2025 for allegedly helping an undocumented migrant avoid arrest by immigration authorities.
  • Shedding light on the narratives. Disinformation surfaced during and after the power outage in Spain and Portugal, with narratives falsely attributed to outlets such as CNN or Reuters. For instance, misinformation about a rare climate phenomenon was spread under the banner of outlets like Sky News and CNN Portugal, tracing back to a deleted Reuters article. Other disinformation strategies included fabricated statements attributed to the President of the European Commission, Ursula von der Leyen, about Russian cyberattacks, and a decontextualised 2022 video of the demolition of a coal thermal plant presented as a “nuclear power plant”. Moscow also meddled with the information landscape, sharing false images of the Iberian Peninsula via the Pravda network, along with Russian content impersonating British and French newspapers that claimed the blackout was a consequence of EU sanctions against Russia, as part of ‘Operation Matryoshka’.
  • Silencing the opponent. As hostilities between India and Pakistan escalate, Meta has banned a prominent Muslim page on Instagram in India at the government’s request. The @muslim account is among the most followed Muslim news sources on Instagram, with 6.7 million followers. Alongside this, India has also banned Pakistani actors and cricketers, as well as channels and news outlets, for “spreading provocative content”.
  • Fine information. Meta has faced a $290 million fine and “unrealistic” regulatory demands from the Nigerian government, which could lead to Facebook and Instagram being shut down in the country. As for the regulations, Nigeria’s Data Protection Commission accused Meta of invasive data practices and imposed strict requirements, including prior approval for data transfers and educational content on privacy risks. 

The Kremlin’s trail

  • Storm the zone. The French counter-foreign interference agency VIGINUM released an extensive report on an Information Manipulation Set (IMS) named Storm-1516. Beyond this code name, the agency attributed at least 77 information operations (IOs) to this Russian group. While highlighting the range of narratives promoted by Storm-1516, the report also represents a significant step forward in mapping the connections between this IMS and other coordinated efforts such as Doppelganger/RRN, CopyCop, and networks previously operated by Evgeny Prigozhin.
  • Disposable minions or seasoned spies? A pro-Russian Ukrainian man was arrested in Poland after Polish law enforcement found materials for arson in his backpack. The man, living on the brink of poverty in Germany, was recruited on Telegram to take pictures of shopping malls and eventually set one on fire for 4,000 euros, with no idea of who could be behind the request. He was arrested as he attempted to leave Poland, having completed only the first part of the mission and leaving all malls unburnt. When the goal was not achieved, the same individual who had contacted him posed as a neo-Nazi in a supremacist Telegram group, asking “for comrades who make arson to the store of black migrants”. This is understood to be part of a campaign of sabotage and arson orchestrated by Moscow using vulnerable recruits, much like previous Russian intelligence operations, such as those of GRU unit 29155, whose officers were uncovered and expelled from embassies.
  • A history of interference. For the first time, French authorities have condemned the Russian military intelligence service (GRU), accusing it of having carried out a cyberoffensive targeting ten entities since 2021, including intrusion attempts against sporting bodies linked to the organisation of the 2024 Paris Olympic Games. Authorities have also implied prior Russian involvement in the hacking of Emmanuel Macron’s 2017 presidential election campaign and the 2015 cyberattack on the television channel TV5 Monde.
  • Meddling with the neighbours. A Russian disinformation operation creating fake videos and articles is targeting Moldova’s pro-Europe President Maia Sandu with claims of corruption ahead of the country’s September 2025 parliamentary elections. This comes after Sandu’s re-election in 2024, after she vocally criticised Russia and defeated a pro-Kremlin candidate in elections heavily subjected to Russian interference.

Sudden (climate) changes

  • More truth on climate policy. EU environment ministers gathered in Warsaw to discuss how to tackle disinformation targeting EU climate and environmental policies. Prompted by a wave of conspiracy theories following a major blackout in Spain and Portugal, Polish Climate Minister Paulina Hennig-Kloska warned of politically motivated disinformation often linked to foreign adversaries like Russia and Belarus. Ministers acknowledged a lack of coordinated action and stressed the need for effective countermeasures and clearer, more accessible EU communications. Climate groups also expressed concern that disinformation is fuelling backlash against EU green policies.
  • Hot climate in Turkey. Climate disinformation, scepticism and denialism have gone viral in Turkey ahead of parliament’s vote on a pivotal new climate law focused on “reducing greenhouse gas emissions” and implementing new regulations. Environmentalists have criticised the bill, arguing that it fails to address the climate crisis and mainly focuses on businesses. Online narratives were instead sceptical, with some users claiming they did not believe in climate change, others arguing the law would pave the way to diseases, and some asserting that there should be “no rush” to pass a climate law at a time when Trump has withdrawn from the Paris Agreement.
  • Deep-quakes. The earthquake that struck Myanmar earlier this year exposed the country’s fragility to disinformation, especially during a crisis. While AI can create simulations of natural calamities that are useful for climate modelling, it is also frequently used to produce deepfake disaster videos that may mislead ordinary viewers. This, combined with disinformation, erodes trust in information and institutions, and can also exacerbate the challenges of humanitarian assistance and disaster relief (HADR).

Brussels corner

European Commission holds workshop on assessment of systemic risks

On 7 May, the European Commission held a workshop on systemic risks within the context of the enforcement of the Digital Services Act (DSA). The workshop had four tracks, including one on risks to civic discourse and electoral processes. The discussions will, inter alia, feed into the Commission’s drafting of the first annual report foreseen by article 35.2 of the DSA. That article requires the Board (consisting of national digital services coordinators and the Commission), in cooperation with the Commission, to produce a systemic risk report every year.

In the workshop, particular scenarios, prepared by the Commission, were discussed to analyse possible mitigations of the risks they described:

  • The first session on civic discourse dealt with possible illegal harassment of electoral candidates, involving unspecified “elements” of gender-based harassment, with suspicions of inauthenticity, including potentially automated accounts being used.
  • The second session was more targeted, addressing a scenario where false information about elections and electoral processes was shared and amplified by domestic and foreign actors and, after the election, confidence in the electoral process was undermined.

European Parliament rapporteur publishes his Working Document on Democracy Shield

Tomas Tobé (Sweden, EPP), the Member of the European Parliament (MEP) in charge (“rapporteur”) of the planned output of the Parliament’s special committee on the “Democracy Shield”, has published his planned Working Document. As the Parliament’s procedures are geared towards providing input into the European Commission’s planned September launch of its “Democracy Shield” initiative, this document allows the rapporteur to set out his views on what the key policy actions should be. While other parliamentarians had the opportunity to share their views on what the document should include, no option to suggest amendments was provided.

The Working Document addresses ten key priorities, proposing initiatives, where appropriate.

  • 1. EU structures to combat foreign information manipulation and interference (FIMI). The rapporteur stresses the need for a comprehensive, coherent, and future-oriented response. He points to Viginum in France and the Swedish Psychological Defence Agency as best practice. Overall, however, he laments the lack of coordination and expertise. This should be addressed by a new, independent structure, building on existing experience and systems. Also, as a separate but linked issue, the EU’s intelligence capacities should be developed.
  • 2. Digital Resilience. The rapporteur describes the hybrid nature of some attacks, the exploitation of social media algorithms and disinformation via attacks on large language models as some examples. He points to the exploitation of micro-influencers and the high volume of inauthentic behaviour reported by TikTok to show the scale of the problem, and questions whether additional measures, possibly in the context of the planned Digital Fairness Act, would be needed. He says swift finalisation of the ongoing proceedings will be key, and seems to tacitly support Telegram’s possible designation as a very large online platform under the DSA. He calls on Telegram, which he describes as having a “possible role in facilitating criminal activities and foreign interference in the EU”, to sign up to the voluntary code of practice on disinformation. Finally, he calls for fostering innovation to counterbalance the influence of foreign tech giants.
  • 3. Media, information integrity, and fact-checking. The Working Document highlights the need to support media literacy and independent media, decrying all manner of actions to hinder the work of journalists and the growing level of press freedom violations in Europe. He praises EU initiatives such as the European Media Freedom Act (EMFA) and the anti-SLAPP Directive. He warns against covert actions by foreign actors in the EU media market. On the other hand, he encourages support for important media projects, including projects weakened by the removal of US funding. He supports civil society-based fact-checking and, apparently, consistent long-term funding for it.
  • 4. Civil society. The key role of civil society in exposing and fighting FIMI, in building resilience and particularly in crisis situations, is recognised. The rapporteur calls for adequate funding and conditions for funding to be made available. He also calls for the protection and promotion of the role of civil society in decision-making. While supporting measures to improve transparency of third-country funding of civil society, he urges caution and says that such measures should be designed in a way that prevents them from being misused to stigmatise the legitimate activities of civil society. The Working Document welcomes the European Civil Society Strategy and recommends that the strategy be integrated into the European Democracy Shield.
  • 5. Cybersecurity and protection of critical infrastructure. The rapporteur points to numerous examples of attacks on physical and digital infrastructure. He also warns of the EU’s dependence on foreign actors and foreign-made technologies in critical infrastructures and supply chains, and praises EU legislation such as Network and Information Security 2 (NIS2) – while lamenting lack of transposition – and the Cyber Resilience Act.
  • 6. Justice and Home Affairs. The rapporteur describes how certain entities active in the area of Justice and Home Affairs, such as Europol, Eurojust and Frontex, could play a role in achieving the goals of the Democracy Shield.
  • 7. External Action. The working document stresses the need to provide support for candidate countries and countries further afield, both to protect democracy and also to safeguard the EU’s strategic interests.
  • 8. Election systems and electoral resilience. The rapporteur calls for various measures to protect the integrity of elections, in particular support for the European Cooperation Network on Elections. He also draws attention to the vulnerabilities caused by the digitisation of elections in some member states. He stresses the need for measures to protect the physical security of candidates and elections, and calls for measures to address the actions of foreign actors in the party political landscape in Europe.
  • 9. The role of sanctions in the protection of democracy. The rapporteur provides an overview of the existing sanctions instruments, calls for a risk-based approach to better implement the instruments currently available, and calls for further work on the use of criminal law in this policy area.
  • 10. European Union’s preparedness. The final section of the working document calls for preparedness to be embedded in all aspects of policy making, because cyber attacks and hybrid attacks target the functioning of society, the economy, and our internal affairs. The rapporteur calls for a high level of solidarity and cooperation in the EU.

Parliament committee exchange of views on implementation and enforcement of the AI Act  

Members of the European Parliament held an exchange of views with the European Commission on the implementation of the Artificial Intelligence Act, focusing in particular on the draft Code of Practice for General Purpose AI (GPAI) models. 

Co-rapporteur for the legislation, Brando Benifei (S&D, Italy) criticised what he sees as a drift towards lighter obligations for AI companies. He said discussions focused on transparency, copyright and fundamental rights. He described a slide from a multi-stakeholder approach to one that is industry-centred, “due to pressures”. He expressed concerns that the process was eroding core protections. The third version of the draft code treats key safeguards for fundamental rights as optional, contradicting the AI Act, he said, adding that the relevant provisions only concern a handful of companies, so any weakening is unjustifiable.

The European Commission gave a general response about the implementation of the AI Act, which has numerous deadlines. 

In the exchange of views, Alexandra Geese (Greens, Germany) lamented the weakening of the AI Act in relation to free speech and disinformation, pointing to the more extreme political positions taken by certain companies; by facilitating foreign interference and election manipulation, she argued, these developments would undermine both European democracy and the European economy. In a point also raised in the Parliament’s Democracy Shield working document, she pointed to the poisoning of large language models. Sandro Gozi (Renew, France) echoed the point, asking about verification of training data.

The Commission responded that it was seeking to ensure that the Code would be in line with the AI Act. The representative added that, in relation to disinformation, even if certain risks are not mentioned, this does not mean that they do not have to be addressed as systemic risks.

Reading & resources

Different uses of the Internet

  • Conduits and tactics to sway elections. OpenMinds published insightful research on the state and impact of pro-Russian Telegram channels in Romania during elections. The research found that almost a quarter of the Romanian Telegram network consists of channels closely linked to pro-Kremlin media, pushing pro-Russia and conspiratorial voices, as well as anti-EU content. These channels have been posting Russian propaganda since the beginning of 2022, with content – for instance – amplifying Russian officials’ voices, such as MFA spokesperson Maria Zakharova’s claim about the “cancellation” of Russian culture in Romania, or promoting Ukraine-related news with a pro-Kremlin slant, like blaming Ukraine for halting Russian gas transit. A more subtle tactic involves unaffiliated channels sharing pro-Russian posts; this type of conduit accounts for 52% of the whole Romanian Telegram network.
  • Artificial Intelligence and unethical bots. Reddit is considering legal action against researchers at the University of Zurich for an “improper and highly unethical experiment”. The researchers ran a covert experiment in the r/changemyview subreddit, making AI bots engage in debates around sensitive issues. The bots pretended to be – for instance – rape survivors or black people opposed to the Black Lives Matter movement, and then mined the posting history of users who replied to them to determine personal details that would make the bots more effective (e.g. age, location).
  • Peacebuilding through facts. Thirty journalists from The Gambia and Guinea-Bissau were trained in advanced fact-checking covering open-source intelligence, geolocation analysis, and search engine investigations. The training was part of the UNESCO component of the UN Peacebuilding Fund-funded project titled ‘Strengthening the National Infrastructure for Peace to Promote Social Cohesion in The Gambia.’

What’s up in Europe?

  • Policy impact. This NATO report, “Impact of the Digital Services Act: A Facebook Case Study”, examines how the Digital Services Act (DSA) has affected Facebook by analysing the spread of harmful content from Polish and Lithuanian accounts, and comparing data from before and after the regulation came into force.
  • Fighting disinformation with headlines. “Seeking Truth, Ensuring Quality: Journalistic Weapons in the Age of Disinformation” brings together leading European media perspectives to confront journalism’s pressing challenges. Published by the University of Bergen and Media Cluster Norway as part of the Journalistic Weapons conference, it showcases efforts to protect information integrity and democracy through a multistakeholder approach.

The temperature is rising

  • “Developments” against climate disinfo. The United Nations Development Programme (UNDP) highlighted how climate misinformation and disinformation undermine global climate action, as misleading narratives – such as climate denial, greenwashing, and conspiracy theories – spread online, delaying responses and destabilising communities, especially in environmentally vulnerable regions. Executives need to understand how unchecked disinformation impacts policy, strategy, and trust, and ensure that fact-checking and reliance on trustworthy sources are incorporated into strategic planning.
  • Making sense of sensemaking. Climate action is undermined by populist distrust in government institutions and growing polarisation. An insightful study delves into the nuances of ‘sensemaking’, resistance, and polarisation as well as how emotions about climate transition are leveraged by disinformation, obscuring understanding. The study overall aims to better understand climate-action barriers in British Columbia, Canada.

This week’s recommended read

This week, our recommended read, proposed by our Researcher Ana Romero Vicente, concerns synthetic propaganda. In fascinating ongoing research, Marcus Bösch investigates how governments, especially the US administration, are using generative AI to craft and spread synthetic propaganda on social media. From AI-generated videos to meme-worthy filters, Bösch explores how these digital tactics blur the line between official communication and trolling, with the aim of influencing public perception.

Bösch offers early insights backed by literature and some fascinating examples. As he continues his investigation, he warmly welcomes ideas from the counter-disinfo community to enhance the research. If you have thoughts, suggestions, or relevant resources, feel free to reach out and collaborate with him on this crucial topic.

With more findings to come, this is a must-read for anyone concerned about the role of AI, media manipulation, and disinformation in today’s digital world.

The latest from EU DisinfoLab 

This week, we bring a fresh update to our AI Disinfo Hub! The repository has been expanded with new publications and insights, including the long shadow of Russian influence and the latest developments in AI regulation. A highlight you won’t want to miss – make sure to visit the hub for much more!

AI-powered disinfo in minority languages: We have recently seen how the Russian influence network Pravda exploits AI for a variety of purposes, both to create fake content and to “infect” Large Language Models to help spread its propaganda and disinformation (so-called LLM grooming). In the current edition, we present two examples of such uses. On the one hand, a site called Pravda Alba is exploiting AI to generate falsehoods in Scottish Gaelic. While the Gaelic-speaking population is small, targeting minority language communities attracts less scrutiny, leveraging AI to create material in less-monitored spaces. Meanwhile, the Pravda Australian branch is flooding Western AI chatbots such as ChatGPT, Google’s Gemini and Microsoft’s Copilot with Russian propaganda ahead of the federal election, according to ABC. Although the site has limited real-world engagement, experts warn this could retrain AI models to spread Kremlin-friendly narratives.

Copyright clash over AI training: The Trump administration has dismissed Shira Perlmutter, the nation’s top copyright official, just days after she released a report questioning the legality of using copyrighted works to train AI systems. The move follows the firing of the Librarian of Congress and has sparked criticism from Democrats, who view it as a politically motivated power grab. Perlmutter had emphasised the importance of human creativity in determining copyright protections, an approach at odds with growing industry pressures.

US legislation on deepfake harms: The US House of Representatives has passed the Take It Down Act, a bipartisan bill aimed at removing nonconsensual intimate images, including sexually explicit deepfakes and revenge porn, from online platforms. With overwhelming support, the bill now heads to President Donald Trump, who has expressed his intent to sign it into law. The legislation requires platforms to act within 48 hours to remove harmful content and establishes penalties for creating and distributing such images. While the bill offers protection for victims, concerns about its potential impact on free speech and encrypted communications have been raised by digital civil rights groups.

Events & announcements  

  • 22 May: The AI and the Future of Journalism event will take place at King’s College, University of Cambridge.
  • 22-25 May: Dataharvest, the European Investigative Journalism Conference, will be held in Mechelen, Belgium. Tickets are sold out, but you can still join the waiting list.
  • 2 June: This online event is a 60-minute discussion with Karen Hao, the author of the forthcoming book Empire of AI, the first comprehensive account of OpenAI’s evolution from a nonprofit research lab to a dominant force at the centre of the global AI race.
  • 10-11 June: NATO Riga StratCom Dialogue 2025 is taking place in Riga, Latvia. Media accreditation is now open.
  • 13 June: Defend Democracy will host the Coalition for Democratic Resilience (C4DR) to strengthen societal resilience against sharp power and other challenges. The event will bring together transatlantic stakeholders from EU and NATO countries to focus on building resilience through coordination, collaboration, and capacity-building. 
  • 17 & 24 June: VeraAI will host two online events exploring how artificial intelligence is being used to analyse digital content and support media verification. The first one is a practical session for journalists, fact-checkers, and other media professionals, featuring hands-on demonstrations of veraAI tools. The second one is a research-focused session for academics, scientists, and scholars, offering deep dives into the latest findings and methodologies. Register for one or both sessions here.
  • 16-17 June:  The Paris Conference on AI & Digital Ethics (PCAIDE 2025) will take place at Sorbonne University, Paris. This cross-disciplinary event brings together academics, industry leaders, civil society, and political stakeholders to discuss the ethical, societal, and political implications of AI and digital technologies.
  • 17-18 June: The Summit on Climate Mis/Disinformation, taking place in Ottawa, Canada, and accessible remotely, will bring together experts to address the impact of false climate narratives on public perception and policy action.
  • 18-19 June: The Media & Learning 2025 conference, under the tagline “Educational media that works”, will be held in Leuven, Belgium.
  • 26 June – 2 July: The training Facts for Future – Countering Climate Disinformation in Youth Work will bring together youth workers, trainers & civil society pros in Eisenach, Germany, to tackle climate disinformation and populist narratives.
  • 27-28 June: The S.HI.E.L.D. vs Disinformation project will organise a conference, “Revisiting Disinformation: Critical Media Literacy Approaches”, in Crete, Greece.
  • 30 June – 4 July: The Summer Course on European Platform Regulation 2025, hosted in Amsterdam, will offer a deep dive into EU platform regulation, focusing on the Digital Services Act and the Digital Markets Act. Led by experts from academia, law, and government, the course will provide in-depth insights into relevant legislation.
  • 8-11 July: The AI for Good Global Summit 2025 will be held in Geneva. This leading UN event aims to identify practical applications of AI, accelerate progress towards the UN SDGs and scale solutions for global impact. 
  • 6-13 July: WEASA is hosting a summer school titled “Digital Resilience in the Age of Disinformation” for mid-career professionals.
  • 15-16 July: The European Media and Information Fund (EMIF) will hold its Summer Conference in Lisbon, Portugal. Save the date!
  • 4-6 September: #SISP2025 at the University of Naples Federico II with panels on “Digitalization: Normative and Empirical Challenges”, “Digital Sovereignties: Governance, Security and Rights”, “Governance, security & rights in the age of AI, geopolitics, and global tech” and “Institutionalizing Cybersecurity in the European Union: A Distinctive Approach?”
  • 19 September: Generative futures: Climate storytelling in the age of AI disinformation is a workshop that will be held in Oxfordshire (UK), exploring how generative AI is reshaping climate storytelling, both as a force for inspiration and a driver of disinformation.
  • 24-25 September: This year’s JRC DISINFO hybrid workshop is titled “Defending European Democracy”. Save the date and stay tuned for more info!
  • 15-16 October: Our annual conference #Disinfo2025 will take place in Ljubljana, Slovenia. Make sure you request your ticket now as the early bird sale finishes on 27 May!

Spotted: EU DisinfoLab

  • Shedding light on CIB. On 6 May, our colleague Ana Romero Vicente shared her expertise in the webinar ‘What is Coordinated Inauthentic Behaviour (CIB) and why is it so difficult to identify?’ hosted by the Centre national de la recherche scientifique (CNRS). 
  • EU DisinfoLab sheds light on current narratives. Our work on Doppelganger was mentioned by Spanish outlet Verifica.efe in its article on pro-Russian disinformation about the blackout in Spain and Portugal. The pro-Kremlin campaign impersonated Western media outlets like The Independent to lend its disinformation credibility.
  • Impactful outreach. Newtral wrote an article about LLM Grooming, quoting our webinar on the topic with Sophia Freuden from the American Sunlight Project.
  • Let’s meet! In the coming week, alongside our usual bases (Brussels, Madrid, Milan, Athens), part of our team will be heading to Ljubljana to prepare for our annual conference, and to Denmark for the Copenhagen Democracy Summit. If you’re attending too – or just nearby – drop us a reply to this email. We’d love to meet up!

Jobs 

  • EU DisinfoLab is looking for a Policy and Advocacy Intern to join our Brussels team. This on-site role offers hands-on experience with EU policy-making alongside our Senior Policy Expert. Apply by 18 May.
  • Harvard’s Berkman Klein Center for Internet & Society welcomes applications from academics and professionals for its 2025-2026 fellowships. The deadline is 30 April.
  • Political Intelligence is seeking an EU Public Affairs intern based in Brussels. 
  • CNN is looking for an OSINT researcher in Washington DC, Atlanta, or New York. 
  • The Center for Countering Digital Hate is looking for an EU Policy Officer, remotely.
  • The EEAS is looking for an Information and Communication Assistant – Threat Actor Response Team East – in the SG.STRAT.4 Information Integrity and Countering Foreign Information Manipulation and Interference (FIMI) Division.

Did you find a job thanks to the listing in this newsletter? We’d love to know – please drop us a message!

Have something to share – an event, job opening, publication? Send your suggestions via the “get in touch” form below, and we’ll consider them for the next edition of Disinfo Update.