Dear Disinfo update reader,
Let’s start with a topic that’s quite literally about power – and what happens when it disappears. Yesterday, Spain, Portugal, and briefly some areas in the south of France were hit by a widespread power outage, triggering dangerous assumptions and offline disinformation about the causes of the blackout. Read here what our colleague in Madrid wrote about the confusion on the ground. Events like these are a reminder of how disinformation can exacerbate chaos in a crisis, and they highlight the importance of authoritative information and of crisis preparedness on the informational side as well.
If there’s one thing the blackout showed us (once again), it’s that disinformation doesn’t wait for the lights to come back on. In this packed edition, we unpack the global landscape of disinformation from the Philippines to Romania, diving into climate discourse, FIMI, and the new frontiers of hacktivism. Luckily, in response to this tough climate, new tools and regulations have come to the rescue.
Despite growing threats and funding cuts, the counter-disinformation community keeps growing – so long as we continue collaborating and staying vigilant about emerging threats. Stay updated and check out our upcoming webinars, recommended reads, events, and job openings. Most importantly, make sure to join us in Ljubljana, Slovenia, for our annual conference #Disinfo2025 – the early bird offer finishes on 27 May, so make sure you get your ticket!
So scroll down and check out all the updates!
Our webinars
UPCOMING – REGISTER NOW!
- 30 April: Exploring political advertising data – who targets me? | In this session, Sam Jeffers will use tools and dashboards developed by Who Targets Me to explore how political advertising campaign spending, targeting, and messaging are tracked across platforms. We’ll look at what data is available, how to access it, and how to better read political advertising campaigns, particularly during elections, and consider how this data can complement research.
- 8 May: Influence of foreign narratives on contemporary conflicts in France (in French with subtitles available) | This webinar will dive into how foreign interference shapes public opinion and threatens democratic resilience, drawing on insights from Laurent Cordonier’s recent report on disinformation narratives in France. The session will be held in French, with real-time subtitles available.
- 15 May: 2025: Climate disinfo reload | AI bots are talking green while pushing denial. Trump calls global warming “good for us.” Musk goes from EV hero to climate contrarian. In this explosive webinar, Global Witness and Ripple Research reveal how 2025’s climate disinformation is smarter, louder, and more dangerous, stretching from Washington to cities around the world. Learn how the disinfo economy is thriving and what can still be done to stop it. One hour. Two experts. Zero time to waste.
- 29 May: UK riots: how coordinated messaging mobilised rioters — lessons in detecting inauthentic behaviour | In this webinar, Kamila Koronska from the University of Amsterdam will present new research on how coordinated messaging on X and Facebook helped spark the 2024 UK riots, revealing how disinformation and hate speech fuelled unrest both online and offline.
PAST – WATCH THE RECORDINGS!
- LLM grooming: a new strategy to weaponise AI for FIMI purposes | This session with Sophia Freuden from The American Sunlight Project unpacked the emerging concept of LLM grooming – a novel strategy to manipulate large language models (LLMs) for foreign information manipulation and interference (FIMI) purposes.
- Disinformation and real estate exploitation: the case of Varosha | This webinar with Demetris Paschalides from the University of Cyprus examined the intersection of disinformation campaigns and real estate exploitation in occupied Cyprus, specifically focusing on the “ghost city” of Varosha/Famagusta – a case study also analysed in the ATHENA project’s broader research into foreign information manipulation and interference (FIMI).
Disinfo news & updates
Lights out on disinformation
On 28 April, Spain, Portugal, and briefly the south of France were affected by nationwide blackouts. Our researcher based in Madrid, Ana Romero Vincente, described a city plunged into uncertainty:
“Throughout the day, as both countries remained largely cut off from communication, speculation about who might be behind such an unusual event spread offline. Neighbours gathered in the streets, exchanging theories about possible causes: some spoke of terrorist attacks, others of cyberattacks, with rumours pointing towards Morocco, Israel, or Russia. “What’s next?” some wondered aloud. “We’re going to run out of water too, quick, go fill your bathtubs!” others exclaimed.
In Spain, radio (temporarily accessed mainly through car batteries) became one of the few available sources of information. While it provided an essential lifeline to the outside world, it also became a vehicle for spreading unverified estimates and fueling uncertainty. At one point, a radio host claimed the blackout might have been caused by an “induced atmospheric vibration,” implying it was a deliberate act. With no internet to fact-check and no further explanation given, listeners were left bewildered, turning once again to speculation to fill the silence. Some broadcasters suggested that the problem would be resolved within 10 hours; others, adopting a more alarmist tone, warned it could take a week or longer. Although not disinformation in the strict sense, this type of speculation sowed confusion in people already vulnerable due to the information vacuum.
Meanwhile, mis- and disinformation and false narratives quietly took root, even though Spaniards and Portuguese had little access to them until today, when electricity was finally restored. Narratives about foreign and hybrid attacks have flourished: some blame wildfires; others claim the outage was the result of a powerful solar storm. And unfortunately, many more theories – with more to come – are already beginning to take shape.”
Elections
- Yielding disinformation in the Philippines. A network of fake social media accounts rose to defend and praise former Philippine President Rodrigo Duterte and spread fake news about the International Criminal Court after the tribunal charged him over his drug war. The network is considered a deliberate and organised campaign, sophisticated enough that its fake accounts are hard to distinguish from real users. This spike in disinformation and “digital warfare” is especially concerning ahead of the upcoming mid-term elections in the Philippines.
- From Albany to Albania. The Heritage Foundation – the US group that produced Project 2025, a key blueprint for Donald Trump’s second-term policies – is reportedly assisting politicians in shaping the upcoming Albanian election. Sali Berisha, the Democratic Party candidate known for previous corruption charges, has openly boasted of the group’s involvement in his campaign, claiming that it has helped to “design” his policy platform. The group’s influence is visible in his pledge to “make Albania great again” and his conservative anti-tax agenda.
- No News in Canada. Meta blocked news from its apps in Canada in 2023 after a new law required the social media giant to compensate Canadian news publishers for their content. The effects of this ban are now showing, with hyperpartisan posts and misinformation going viral, particularly around the new Prime Minister Mark Carney, often amplified by right-wing accounts like Canada Proud.
- AI-mazon FIMI. Ahead of Canada’s federal election, a flood of AI-generated political books, many targeting the Prime Minister, appeared on Amazon. Lacking editorial oversight, these books, which appear legitimate, could mislead voters. Experts are calling for greater transparency, including mandatory disclosure of AI authorship, to safeguard the integrity of elections from the growing threat of AI-generated political content (read more in our AI Disinfo Hub).
- Chinese interference in Canada… The Canadian federal inquiry on foreign interference revealed that entities aligned with India and China interfered in recent elections, albeit without major impact on the results. In early April, Canada’s Security and Intelligence Threats to Elections (SITE) Taskforce revealed that a Chinese-language influence campaign backed by Beijing was targeting Chinese-speaking Canadians on the popular multi-function app WeChat. The messaging promoted Carney as a strong statesman, subtly positioning him as better equipped to manage relations with the United States.
- …and Australia. Chinese interference seems to be targeting Australian elections as well, with messaging against Opposition Leader Peter Dutton suggesting a preference in Beijing for Anthony Albanese to be re-elected. China-supported messages targeting Australia’s federal election have taken an overtly critical tone. They often appear in state-aligned media outlets such as the Global Times and are amplified on Chinese social media platforms such as Rednote and WeChat.
- Tough climate for Australian elections. Climate disinformation has been rampant ahead of the Australian federal election campaign. Various companies are pushing for greater reliance on gas while opposing renewable energy, misrepresenting facts about wind turbines and natural gas.
- Russian interference in Dutch EU elections. The Dutch military intelligence agency MIVD announced that Russia is strengthening its hybrid attacks against the Netherlands to destabilise society. These include attempts to influence the 2024 European elections, with cyberattacks on websites of Dutch political parties and public transport companies aimed at impeding Dutch citizens from voting.
- Doppelganger strikes in Poland. Alliance4Europe and Debunk.org published a report on how the Russian influence operation Doppelganger is interfering with the current Polish presidential elections. They analysed its activities on X between 4 March and 4 April 2025, revealing that 279 posts were used to target the elections. The operation spread anti-EU, anti-Ukrainian, and anti-Polish establishment narratives, while also attempting to create division in Polish society by exploiting domestic grievances.
- Romanian government against disinformation. To guarantee fairer processes during the upcoming election re-run, Romania’s Central Election Bureau stepped up enforcement efforts. It approved 153 complaints and ordered the removal of more than 500 online posts, including unverified content from anonymous accounts, mislabeled ads by politicians and posts by people who don’t hold public office but could possibly influence the electorate.
- TikTok is atoning for its mistakes in Romania. Following doubts about whether TikTok failed to protect the integrity of Romania’s annulled election, the platform has implemented new features to safeguard upcoming polls. These measures include a media literacy campaign in collaboration with the Romanian government; cooperation with national authorities and fact-checking organisations to detect violations; and the creation of an election centre, with a task force of experts in cybersecurity, deceptive practices, misinformation, and election integrity. To learn more about the Romanian elections and TikTok’s role, check out this short podcast.
Politics vs. disinfo?
- Mind the gap. The US has cut 90% of development funding for Ukraine, putting Ukrainian independent journalism on the brink of collapse. Since the Russian invasion, pillars of independent Ukrainian wartime reporting have relied heavily on international grants, part of the 37 billion USD invested by the US. There are now growing concerns that Russia will exploit the information void on Ukraine left by the US funding cuts.
- Team against Thai. During a parliamentary debate in Thailand, leaked internal documents of a “Cyber Team” were disclosed. The documents identified international NGOs – including Amnesty International – local civil society groups, and activists as “high-value targets”. The Cyber Team disseminated harmful and defamatory content online, responding to critics aggressively or with smear campaigns, and even attempted brute-force attacks on the accounts of prominent activists and political opponents during the 2023 Thai elections.
- Disinformation against Pope Francis. Disinformation has spread following the death of Pope Francis. Decontextualised and fake images have gone viral, such as claims of Satanic rituals at the Pope’s funeral (revealed to be unrelated footage) and a widely shared video of the Pope swatting President Trump’s hand (which had originally aired as a joke on a comedian’s late-night TV show). Some AI-generated images circulated alongside malicious links leading to scams or fraudulent websites.
- A transparent leave. Peter Marks, the Food and Drug Administration’s top vaccine regulator in the US, abruptly resigned on Friday, citing Health and Human Services Secretary Robert F. Kennedy Jr.’s efforts to spread misinformation about the safety of immunisations. Marks said he was willing to address RFK Jr.’s concerns about vaccine safety, but noted that transparency was not the Secretary’s goal.
- US vs the truth. Within President Donald Trump’s first 100 days in office, the United States has cut funding for and shut down agencies focused on disinformation research and foreign influence operations. Social media platforms have scaled back content moderation, while the National Science Foundation cancelled hundreds of research grants related to media literacy, counter-disinformation, and diversity, equity, and inclusion. This could create national security risks, giving Russia and China more freedom to sow disinformation and intensify foreign influence and interference. Meanwhile, Trump’s Federal Communications Commission (FCC) has pushed to revive the notion of “news distortion”, launching inquiries against ABC and CBS and threatening other outlets over unfavourable coverage. Conservative organisations have asked for these inquiries to be terminated, labelling them “regulatory overreach”.
- Russia vs the truth. On 9 April, Russian government institutions announced the launch of the Global Fact-Checking Network (GFCN). At an event held in Moscow last November, they stated that its goal is to “bring together fact-checking journalists who share the ‘views and values’” of its promoters, something that directly conflicts with the methodology of independent fact-checkers. Although they claim that this network will have “30 journalists from 30 countries” who “will be dedicated to exposing falsifications”, for now only 10 so-called experts appear on its website.
- Disinformation crossing oceans. The Danish intelligence services have accused Russia of orchestrating a disinformation campaign claiming that a Danish parliamentarian sought Russian help to prevent the US from annexing Greenland. According to the intelligence services, the campaign is part of a broader pattern of influence operations in which Russia attempts to sow discord in transatlantic relations and undermine Western support for Ukraine.
- Guy Fakes. Today’s hacktivists – especially those targeting critical infrastructure – have targets and timings that suggest ties to government intelligence agencies and growing connections to offensive cyber units. This is particularly relevant for groups like Killnet, Anonymous Russia, and Anonymous Sudan springing into action in support of the Kremlin’s interests, providing the government with plausible deniability. Attacks have occurred globally, with attempts by CyberArmyofRussia_Reborn1 to disrupt Texas water facilities via remote-management software. The attack may have been carried out by Russian military hackers posing as hacktivists.
Brussels corner
On 22 April, the European Parliament’s Democracy Shield Committee held an exchange of views with invited experts, organised in two sessions, each with three speakers followed by questions. The following is a very brief overview of some of the key points raised.
Session 1
- The American Sunlight Project explained that the big platforms keep people enraged as a means of keeping them engaged. These platforms are now aligned with the US president in the service of personal enrichment. As a result, foreign anti-democratic forces are taking advantage, while homegrown forces have hampered our responses, including through tactics such as the manipulation of AI large language models. The project was able to identify this abuse despite the huge expense of accessing data from X and the woefully inadequate data available from other platforms.
- A journalist and author from Bellingcat and The Insider listed physical attacks that served to reinforce online attacks. He pointed to attacks in Poland blamed on Ukrainians, attacks on internet cables, cyber-attacks, GPS jamming for flight sabotage, as well as disinformation campaigns. He said that this is a war, and it needs to be recognised as such to be fought effectively.
- Alliance4Europe presented an overview of the Counter Disinformation Network and restated the points made by the previous speaker, arguing that we can’t simply rely on “business as usual”. We need to become results-oriented and equip ourselves with the resources to be effective. They identified five major operations active during the German election campaign and explained how, in different ways, they sought to split society.
- In response to MEP questions, American Sunlight argued for better support for quality journalism, citizen education, and civil society. The Bellingcat/Insider journalist spoke in favour of more naming and shaming once information about particular operations comes to light, and of doing so much more rapidly than is currently the case. Alliance4Europe called for a full and effective implementation of the Digital Services Act (DSA), with better information reaching people who only inform themselves via social media. Finally, we need more cross-cutting cooperation across different specialisms. Bots can be shut down, although this doesn’t come cheap: if Bluesky, a small organisation with limited resources, can act decisively, the big platforms can too.
Session 2
- A journalist and co-director of the Arena Programme (Johns Hopkins University) addressed the question of how to defend against an information campaign by an enemy that does not play by any democratic rules. A democratic response is possible, but we need to understand the networked world, as Russia has learned to do. We learned to disrupt military supply chains by being more strategic. A good example is how Moldova uncovered operations before its election, unlike Romania and, in 2016, the USA. We can also look to historical successes, like Radio Free Europe. In response to questions, he called for “radical transparency” from online platforms to allow people to understand why some content is promoted and other content is demoted.
- The Swedish Psychological Defence Agency is a defence agency whose mission is to counter foreign attempts to undermine the nation’s ability to resist attacks. It builds societal resilience through research reports and handbooks, training courses, cooperation with civil society, educational films, and media literacy work. The agency also reports regularly to the Swedish government and prepares Swedish society for war, or the danger of war, before any threat becomes real, supporting – and learning from – Ukraine. It supports international cooperation, evaluates outcomes, and shares experiences.
- The Helsinki-based European Centre of Excellence for Countering Hybrid Threats (Hybrid CoE) explained that it supports international cooperation. Russia’s operations seek to undermine our democratic values and our support for Ukraine. To fight this, we need good situational awareness, robust countermeasures, and resilience. We should be able to continuously track evolving tactics and narratives. Public awareness is needed, but reaching all of society is difficult. We also need to repair weaknesses in the information environment, both offline and online.
Reading & resources
New resources
- Portal Kombat. Checkfirst and Digital Forensic Research Lab (DFRLab) have created the Pravda Dashboard, a live, hour‑by‑hour tracker of Russia’s Portal Kombat network. The dashboard offers article and domain totals, language and source breakdowns, publication-time patterns, and country-level targeting. Their dataset is open‑source on GitHub, so you can inspect, extend, or integrate it into your own research.
- A better search. Search Whisperer 2.0 is a tool that helps you craft better Google searches by translating your vague queries into precise, effective search terms. It analyses your original search, suggests specific keywords and advanced search operators (like filetype: or site:), recommends authoritative sources for your topic, and provides instant alternatives to improve your results.
- A new toolkit against IBD. The Organisation for Identity and Cultural Development has published a practical guide and framework to combat identity-based disinformation, titled “Reclaiming Our Narratives: A Practical Guide to Countering Identity‑Based Disinformation”.
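For readers who want to work with the open-source Pravda dataset mentioned above, a minimal pandas sketch of the kinds of aggregations the dashboard surfaces (country-level targeting, publication-time patterns) might look like the following. Note that the column names and values here are invented for illustration; check the actual schema in the GitHub repository before adapting this.

```python
import pandas as pd

# Hypothetical sample mirroring the dashboard's fields (the real column
# names in the GitHub dataset may differ).
articles = pd.DataFrame({
    "domain": ["pravda-fr.example", "pravda-de.example", "pravda-fr.example"],
    "language": ["fr", "de", "fr"],
    "published_at": pd.to_datetime(
        ["2025-04-01 08:00", "2025-04-01 09:30", "2025-04-02 08:15"]),
    "target_country": ["FR", "DE", "FR"],
})

# Country-level targeting: article totals per targeted country.
per_country = articles.groupby("target_country").size().sort_values(ascending=False)
print(per_country.to_dict())  # {'FR': 2, 'DE': 1}

# Publication-time patterns: article counts per hour of day.
per_hour = articles["published_at"].dt.hour.value_counts().sort_index()
print(per_hour.to_dict())  # {8: 2, 9: 1}
```

The same two groupings, run over the full dataset, would reproduce the dashboard's country-targeting and publication-time views locally.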
Same platforms, new rules (and problems)
- Blue-check. Bluesky has introduced blue checks as a new layer of verification beyond domain handles. The blue checks can also be issued by select independent organisations through the Trusted Verifiers feature. On a user’s profile, it will be possible to check which organisation verified the account, and Bluesky will also review these verifications to ensure authenticity.
- Footnotes as first footsteps against disinfo. TikTok is testing a new “community notes”-style feature, called Footnotes, to add more context to posts that may be misleading or inaccurate. Footnotes will appear alongside videos on the app to “add helpful details that may be missing”. They will only appear if they’ve been rated “helpful” by the company’s algorithm, which will take into account whether “people who usually have different opinions” agree on the note’s context.
- Latest from X. Since April, X has enforced stricter rules around parody accounts, requiring accounts that impersonate another user or person to use key words such as “fake” or “parody” at the start of their account names and to use a different image from the X account of the person they are copying.
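The Footnotes rating rule described above – a note only surfaces when “people who usually have different opinions” agree it is helpful – can be sketched as a toy function. To be clear, this is not TikTok's actual algorithm; the cluster labels and threshold are invented to illustrate the cross-opinion agreement idea.

```python
def footnote_is_helpful(ratings: list[tuple[str, bool]], threshold: float = 0.6) -> bool:
    """ratings: (opinion_cluster, rated_helpful) pairs from users.

    A note is shown only if raters from at least two different opinion
    clusters each, independently, rate it mostly helpful.
    """
    clusters: dict[str, list[bool]] = {}
    for cluster, helpful in ratings:
        clusters.setdefault(cluster, []).append(helpful)
    if len(clusters) < 2:  # need agreement across differing viewpoints
        return False
    # Every cluster must independently rate the note as mostly helpful.
    return all(sum(votes) / len(votes) >= threshold for votes in clusters.values())

print(footnote_is_helpful([("A", True), ("A", True), ("B", True)]))   # True
print(footnote_is_helpful([("A", True), ("A", True), ("B", False)]))  # False
print(footnote_is_helpful([("A", True), ("A", True)]))                # False: one cluster only
```

The design choice here mirrors “bridging-based” ranking: one-sided enthusiasm is not enough, which makes coordinated boosting by a single camp less effective.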
Monetising Meta Misinformation
- The cure to disinformation? An investigation led by AI Forensics revealed Meta’s ongoing failure to moderate advertisements, with growing concern over health-related ads. Using Meta’s Ad Library, their report examined Meta’s ads since the implementation of the Digital Services Act (DSA), identifying 46,000 advertisements for unapproved drugs and health claims that violated at least 15 of Meta’s own rules (watch also the recording of our webinar on this topic).
- Blackhat advertising has gone corporate. Some companies have turned into full-scale suppliers of ad infrastructure – renting aged Facebook agency accounts, farming verified accounts, building in-house tools to mimic legitimate advertiser behaviour, and openly sharing methods to bypass restrictions. Not only can the restrictions be bypassed with technical know-how, but enforcement is even scarcer in markets like the Philippines, Vietnam, or Latin America than in the US and EU. Enforcement by platforms like Meta is inconsistent, despite public claims, as they continue to profit from the ad spend.
A tough climate
- Climate in court. Renowned climate scientist Michael Mann is contesting court sanctions stemming from his 2024 defamation trial, where he initially won $1 million in damages against two right-wing bloggers. The judge later slashed the award and accused Mann’s legal team of misconduct for presenting grant data deemed misleading, allegations the team firmly denies. Mann’s lawyers argue the errors were corrected early on and mischaracterised by the court. The case now hangs on whether the judge will reverse these penalties, in a trial widely seen as symbolic in the fight against climate science denial.
- Climate scepticism on the rise. “Global warming isn’t real” is no longer the primary claim of climate sceptics. As most people now experience the effects of global warming, sceptics have shifted to other misleading claims, such as “climate solutions don’t work”, “climate change has some benefits”, and that pollution-reduction policies are “tools for governments to control people”. Climate-sceptic narratives have also merged with COVID-sceptic ones. Overall, climate-sceptic posts have grown on all platforms over the past three years – by 43% on YouTube and 82% on X.
- Climate activism, misrepresented. A global survey of 130,000 people across 125 countries reveals that 89% want stronger government action on climate change, but most wrongly believe they’re in the minority – a phenomenon fueling a “spiral of silence.” This misperception, researchers say, prevents collective climate action that the public actually supports. The article highlights how fossil fuel-funded disinformation campaigns have amplified a small but vocal group of climate sceptics, distorting public discourse and masking the true scale of global concern. Correcting these perception gaps could unleash powerful social momentum for climate solutions.
AI Hubdates
We’ve just updated our AI Disinfo Hub – your source for tracking how AI shapes and spreads disinformation, and how to push back. Here’s a quick view of what’s new:
- New platforms on the horizon. OpenAI is reportedly working on its own social network, a potential competitor to X. Early prototypes show a feed powered by ChatGPT, with a focus on images and interactive content. This could reshape the social media landscape, but the real intrigue lies in how it would give OpenAI a massive trove of real-time user data to enhance its AI models. The big question now is: will it disrupt social media giants like X and Meta?
- A new perspective. OpenAI is revising its safety approach, opting to no longer pre-screen its AI models for manipulation risks before release. Instead, it will rely on post-launch monitoring and terms of service enforcement. Critics argue this could open the door to mass manipulation, while supporters suggest pre-deployment testing isn’t feasible. The decision comes amid concerns about AI’s role in social media, including the growing ability of AI tools like ChatGPT to generate fake but realistic images of public figures. Despite guardrails, CBC News found these safeguards can be bypassed, potentially fueling disinformation, especially during elections.
- AI Storm hits Russia. A recent study analysing over 30 cases reveals how Russia employs generative AI in disinformation and influence ops with little strategic coherence, mainly for content flooding, laundering, and low-effort manipulation. Deepfakes are on the rise, used to discredit targets and forge deceptive communications, while popular Western AI tools remain central to these efforts. Notably, NewsGuard reports that the Kremlin-backed operation Storm-1516 illustrates this evolution: from late 2024 to early 2025, it deployed AI-powered narratives attacking France’s Macron and Ukraine across 39,000 posts. These tactics, NewsGuard warns, are now embedding misinformation directly into the digital tools people rely on for facts. Another recent NewsGuard study reveals pro-Kremlin propagandists are exploiting a viral AI trend to smear Ukrainian President Volodymyr Zelensky, generating Barbie-style “action figure” images that falsely depict him as a drug abuser.
This week’s recommended read
This week, our recommended read, proposed by our Project Officer Inès Gentil, is the latest investigation by Reset Tech: “The Dormant Danger: How Meta Ignores Large-Scale Inauthentic Behavior Networks of Malicious Advertisers”. The report exposes how Meta is failing to tackle millions of dormant Facebook pages created to serve as latent advertising assets – an ecosystem ready to be activated for political propaganda, scam campaigns or disinformation operations like the pro-Kremlin Doppelganger campaign.
Reset Tech’s findings show how malicious actors exploit Meta’s advertising systems by rotating between millions of pre-made inauthentic pages, evading detection, and reaching tens of millions of users across the EU. Despite clear indicators, Meta has overlooked these networks for years, raising serious concerns under the EU Digital Services Act (DSA) about systemic risks to civic discourse and electoral integrity.
The report stresses that countering these threats requires proactive identification and removal of dormant assets, rather than reactive moderation of visible ads. More precisely, it advocates for the use of “pattern recognition algorithms” to detect suspicious accounts at the moment of their creation and calls for the systematic removal of dormant assets before they can be exploited. It also highlights the urgent need for increased advertising transparency, better risk assessment, and strengthened regulatory oversight to effectively mitigate these systemic risks.
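To make the “pattern recognition at the moment of creation” idea concrete, here is a toy scoring heuristic – not Reset Tech's actual method – over creation-time signals of the kind the report associates with dormant inauthentic assets. Every field name and threshold below is invented for illustration.

```python
def dormancy_risk_score(page: dict) -> int:
    """Return a simple risk score for a newly created page; higher = more suspicious.

    Signals and weights are hypothetical examples, not a platform's real rules.
    """
    score = 0
    if page.get("posts_in_first_30_days", 0) == 0:  # created but never used
        score += 2
    if page.get("profile_photo") is None:           # placeholder identity
        score += 1
    if page.get("name_matches_template", False):    # auto-generated name pattern
        score += 2
    if page.get("creator_batch_size", 1) > 10:      # many pages from one creator
        score += 3
    return score

suspect = {"posts_in_first_30_days": 0, "profile_photo": None,
           "name_matches_template": True, "creator_batch_size": 50}
ordinary = {"posts_in_first_30_days": 12, "profile_photo": "me.jpg",
            "creator_batch_size": 1}

print(dormancy_risk_score(suspect))   # 8
print(dormancy_risk_score(ordinary))  # 0
```

The point of scoring at creation time, as the report argues, is that a batch of blank, templated pages can be flagged before it is ever activated, rather than after its ads are already running.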
Events & announcements
- Your voice counts. The Irish Digital Services Coordinator – Coimisiún na Meán – is carrying out a survey to understand more about researchers’ needs, readiness and barriers to data access under Article 40 of the Digital Services Act. The results from the survey will be used to inform Digital Services Coordinators (DSCs) across the EU as they prepare for the successful implementation of the vetted researcher data access provisions. The survey will remain open until close of business on 6 May, takes around 10-15 minutes to complete and can be accessed here.
- Support for counter-disinformation professionals. EMIF launched an initiative to support counter-disinformation professionals through the Support and Assistance Facility for Experts (SAFE). Current and previous EMIF grantees, members of the European Fact-Checking Standards Network (EFCSN), and members of European Digital Media Observatory (EDMO) hub networks can apply.
- 29 April – 3 May: Global Witness is teaming up with the British Natural History Museum at “Generation Hope”. It aims to teach the next generation of climate activists, scientists and experts about the importance of trustworthy information, how disinformation spreads online and why it matters for climate action.
- 7-8 May: The Delegation of the European Union to Ukraine invites journalists, educators, and civil society activists to Cherkasy to participate in a two-day offline capacity-building event, ‘Debunking disinformation on the European Union and the EU’s support to Ukraine: fact-checking Workshop.’
- 7 May: The “Arctic and Climate Change: The Intersection of Geopolitics and Disinformation” event in Ottawa will unpack how strategic narratives and disinformation campaigns distort Arctic climate discussions, and how we can build resilience against them.
- 8-9 May: Data & Society will host an online workshop on the intersection of generative AI technologies and work. This workshop aims to foster a collaborative environment to discuss how we investigate, think about, resist, and shape the emerging uses of generative AI technologies across a broad range of work contexts.
- 22-25 May: Dataharvest, the European Investigative Journalism Conference, will be held in Mechelen, Belgium. Tickets are sold out, but you can still join the waiting list.
- 13 June: Defend Democracy will host the Coalition for Democratic Resilience (C4DR) to strengthen societal resilience against sharp power and other challenges. The event will bring together transatlantic stakeholders from EU and NATO countries to focus on building resilience through coordination, collaboration, and capacity-building.
- 17 & 24 June: VeraAI will host two online events on artificial intelligence and content analysis, including the verification of digital media items with the help of technology. The first session is dedicated to end-user-facing tools aimed at journalists and fact-checkers; the second will be primarily directed at the research community.
- 16-17 June: The Paris Conference on AI & Digital Ethics (PCAIDE 2025) will take place at Sorbonne University, Paris. This cross-disciplinary event brings together academics, industry leaders, civil society, and political stakeholders to discuss the ethical, societal, and political implications of AI and digital technologies.
- 17-18 June: The Summit on Climate Mis/Disinformation, taking place in Ottawa, Canada, and accessible remotely, will bring together experts to address the impact of false climate narratives on public perception and policy action.
- 18-19 June: The Media & Learning 2025 conference, under the tagline “Educational media that works”, will be held in Leuven, Belgium.
- 26 June – 2 July: The training “Facts for Future – Countering Climate Disinformation in Youth Work” will bring together youth workers, trainers, and civil society professionals in Eisenach, Germany, to tackle climate disinformation and populist narratives. Apply by 11 May.
- 27-28 June: The S.HI.E.L.D. vs Disinformation project will organise the conference “Revisiting Disinformation: Critical Media Literacy Approaches” in Crete, Greece.
- 30 June – 4 July: The Summer Course on European Platform Regulation 2025, hosted in Amsterdam, will offer a deep dive into EU platform regulation, focusing on the Digital Services Act and the Digital Markets Act. Led by experts from academia, law, and government, the course will provide in-depth insights into relevant legislation.
- 6-13 July: WEASA is hosting a summer school titled “Digital Resilience in the Age of Disinformation” for mid-career professionals.
- 8-11 July: The AI for Good Global Summit 2025 will be held in Geneva. This leading UN event aims to identify practical applications of AI, accelerate progress towards the UN SDGs, and scale solutions for global impact.
- 19 September: “Generative futures: Climate storytelling in the age of AI disinformation” is a workshop that will be held in Oxfordshire (UK), exploring how generative AI is reshaping climate storytelling, both as a force for inspiration and a driver of disinformation.
- 15-16 October: Our annual conference #Disinfo2025 will take place in Ljubljana, Slovenia. Make sure you request your ticket now as the early bird sale finishes on 27 May!
- 25 or 26 October: Researchers and practitioners on trustworthy AI are invited to submit papers to TRUST-AI, the European Workshop on Trustworthy AI organised as part of the 28th European Conference on Artificial Intelligence ECAI 2025, Bologna, Italy.
- 29-30 October: The 2nd European Congress on Disinformation and Fact-Checking, organised by UC3M MediaLab, will take place under the subtitle “Beyond 2025: Emerging threats and solutions in the global information ecosystem” in Madrid, Spain, with the possibility to join remotely.
- 20-24 November: The 2025 Global Investigative Journalism Conference will be held in Kuala Lumpur, Malaysia.
Spotted: EU DisinfoLab
- Shedding light on CIB. Our colleague Ana Romero Vicente will share her expertise in the upcoming webinar ‘What is Coordinated Inauthentic Behavior (CIB) and why is it so difficult to identify?’ hosted by the Centre national de la recherche scientifique (CNRS). The session will take place on 6 May at 14:00 CEST. If you’re interested, you can register here.
- Let’s meet! In the coming two weeks, alongside our usual bases (Brussels, Madrid, Milan, Athens), part of our team will be heading to Denmark for the Copenhagen Democracy Summit. If you’re attending too – or just nearby – drop us a reply to this email. We’d love to try and meet up!
Jobs
- EU DisinfoLab is looking for a Policy and Advocacy Intern to join our Brussels team. This on-site role offers hands-on experience with EU policy-making alongside our Senior Policy Expert. Apply by 18 May.
- Tactical Tech is looking for a part-time Accountant based in Berlin.
- Harvard’s Berkman Klein Center for Internet & Society welcomes applications from academics and professionals for its 2025-2026 fellowships. The deadline is 30 April.
- The Digital Services Act Enforcement Team published a call for expressions of interest for Contract Agents in Function Groups IV and III to join the team enforcing the Digital Services Act. Expressions of interest are invited for the following profiles: Legal Officer (FG IV), Policy Specialist (FG IV), Operations Specialist (FG IV), Traffic Measurement Specialist (FG IV), Technology Specialists, Data Scientists and Applied Researchers (FG IV), and Legal, Policy, and Operations Assistants (FG III).
- The World Health Organization (WHO) is seeking experts to join its Health-Security Interface Technical Advisory Group (HSI-TAG). Expressions of interest are open until 5 May.
- The AI Safety Institute has various open positions.
- The Pulitzer Center is hiring a director of editorial programs. Apply before 18 May.
Did you find a job thanks to the listing in this newsletter? We’d love to know – please drop us a message!
Have something to share – an event, job opening, publication? Send your suggestions via the “get in touch” form below, and we’ll consider them for the next edition of Disinfo Update.