Dear Disinfo Update reader,

Welcome to the latest Disinfo Update – and what a packed edition it is! From shadow alliances and covert campaigns to platform shake-ups, there’s plenty to catch up on.

This time, Russia and China feature prominently, with several influence operations exposed – a reminder that, despite constant attacks, democratic societies are anything but defenceless. We also take a closer look at African countries, both as targets and actors in the information space. On top of that, there’s plenty of EU flavour this time, from the Democracy Shield to the AI Act and from platform enforcement to fresh funding. And on the unexpected side, Robert F. Kennedy Jr. has rattled the anti-vax world by backing the MMR vaccine, and, staying in the US, Trump seems ready to cast climate change as a blessing rather than a threat.

By the way, #Disinfo2025 registration is open – don’t miss your chance to join us in Ljubljana this October!

Plenty more just below – dive in!

Our webinars

UPCOMING – REGISTER NOW

  • 3 April: Safeguarding information integrity – the DSA in action | Join us for an insider’s look at how the European Commission is putting the Digital Services Act (DSA) into action. Luboš Kukliš, Lead on Information Integrity for the DSA framework at the Commission, will share the latest developments around the DSA’s implementation – specifically in the area of information integrity.
  • 10 April: LLM grooming: a new strategy to weaponise AI for FIMI purposes | This session with Sophia Freuden from The American Sunlight Project will unpack the emerging concept of LLM grooming – a novel strategy to manipulate large language models (LLMs) for foreign information manipulation and interference (FIMI) purposes.
  • 17 April: Disinformation and real estate exploitation: the case of Varosha | This webinar with Demetris Paschalides from the University of Cyprus examines the intersection of disinformation campaigns and real estate exploitation in occupied Cyprus, specifically focusing on the “ghost city” of Varosha/Famagusta – a case study also analysed in the ATHENA project’s broader research into foreign information manipulation and interference (FIMI).
  • 30 April: Exploring political advertising data – who targets me? | In this session, Sam Jeffers will use tools and dashboards developed by Who Targets Me to explore how political advertising campaign spending, targeting, and messaging are tracked across platforms. We’ll look at what data is available, how to access it, and how to better read political advertising campaigns – particularly during elections – and consider how this data can complement research.

PAST – WATCH THE RECORDINGS!

  • Clickbait Cures: How Meta & Google Fail to Stop Fake Meds Ads | This webinar delved into an investigation by Aleksandra Atanasova from Reset Tech exposing how Meta and Google continue to host deceptive health ads across Europe. The session explored the persistent gaps in platform enforcement, the risks posed by misleading medical claims, and why stronger regulatory action is necessary. Watch the replay of the discussion and learn more about the investigation’s key findings.

Find all our past and upcoming webinars here.

Disinfo news & updates

Institutional threat assessments

  • Digital iron curtain. Following its initial two reports on foreign information manipulation and interference (FIMI), one outlining the concept and providing a preliminary analysis, and the other focusing on response tools, the European External Action Service (EEAS) has now released its third report, introducing the FIMI Exposure Matrix as a key component of its counter-FIMI methodology. This latest report represents a qualitative methodological leap, particularly in the attribution of FIMI campaigns, through a four-category classification of channels and an enhanced focus on identifying recurring attack patterns. In addition, it provides an overview of recent Russian FIMI campaigns, examining both their internal interactions and links to Chinese influence operations. Based on an analysis of attacks across over 80 countries and 200 organisations, the report sheds light on the nature of these connections, often grounded in an opportunistic narrative alignment.
  • Shadow alliance. Europol warns that Russia and other state actors are using criminal networks to escalate sabotage, cyber-attacks, and migrant smuggling across the EU. A new report details a “shadow alliance” aimed at destabilising institutions through persistent, coordinated disruptions, while AI is amplifying online fraud and cybercrime.
  • Musk’s influence, not interference. Canadian officials monitoring foreign interference in the upcoming federal election clarified that while social media influence is under scrutiny, the opinions of public figures like Elon Musk do not constitute foreign interference. Musk’s posts, despite his large following in Canada, are seen as personal opinions, not covert actions. Officials emphasised that foreign interference includes deliberate, covert actions by state actors to manipulate the election, which is distinct from individual expressions.

Russia-China destabilisation axis

  • Spy exposed. Austria has uncovered a Russian-backed disinformation campaign about Ukraine, linked to the arrest of a Bulgarian woman accused of espionage. The operation, involving online activity and misleading graffiti, targeted German-speaking countries, particularly Austria, after Russia’s 2022 invasion. The suspect has admitted to her involvement, especially during that year.
  • Ghost writers. An investigation has uncovered a covert influence campaign using fabricated journalists to spread anti-French sentiment across West and Central Africa. These “ghost reporters” publish pro-Russian narratives in local media, exploiting paid content systems to push propaganda and to reshape public perception and geopolitical alliances in the region.
  • Cartoons in motion. Animated videos and cartoons blending historical grievances with modern propaganda are fueling anti-French sentiment in West Africa and boosting Russian influence. These visuals often glorify groups like the Wagner Group, portraying France as an occupier and Russia as a liberator. By tapping into colonial history, these videos shift focus from jihadist groups to France as the enemy, strengthening pro-Russian narratives in the Sahel and Central Africa.
  • Pharaohs of disinfo. Egypt’s state-run media has become a key platform for spreading Russia’s narrative on the war in Ukraine. Through partnerships with major Egyptian outlets, Russia has promoted its self-defence stance and painted Ukraine as the aggressor. Despite the war’s economic toll on Egypt, Russian propaganda continues to influence public opinion in both Egypt and the broader Middle East and North Africa (MENA) region.
  • RT proxy machine. RT is secretly funding video bloggers to spread pro-Kremlin narratives while hiding their ties to the network. These influencers, posing as independent journalists, share content on YouTube, TikTok, and Telegram to bypass bans on Russian state media. Payments are funnelled through intermediaries, making direct Kremlin links hard to trace. The content includes anti-Western rhetoric, climate skepticism, and narratives undermining Ukraine, reflecting Russia’s evolving disinformation tactics.
  • UNHCR targeted. According to a CORRECTIV investigation (in German), a Russian hacking group may have attempted to penetrate the network of the United Nations High Commissioner for Refugees (UNHCR). The phishing attempt was discovered when an UNHCR employee received a fake press inquiry from CORRECTIV. The email contained a link leading to installing malware. 
  • Ireland bound. Russian and Chinese influence networks have sent over 7.500 posts about Ireland on social media in the past year. While Russian networks show less interest in Ireland compared to other European countries, Chinese networks are more active, focusing on promoting China as a global leader.
  • Tariffs to tension. Chinese state influence actors are amplifying narratives aimed at undermining US trade policies, particularly regarding tariffs, and exacerbating diplomatic tensions between the US and Europe. Recent campaigns highlight criticism of US tariffs, claiming they harm US workers, while also promoting China as a stronger diplomatic partner. These efforts are part of Beijing’s broader strategy to influence global opinion and create divisions.
  • Tap into fired. A secretive Chinese-linked network is targeting recently fired US government workers using fake consulting companies and job ads to lure potential hires, potentially for intelligence gathering. While ties to Beijing remain unclear, experts say the tactics resemble past espionage efforts.
  • Predator of press freedom. French journalists have become targets of a massive cyber harassment campaign after their investigation into Decathlon’s ties to a Chinese company accused of using Uyghur forced labour. Amplified by Chinese state media, the campaign includes violent insults and death threats, spreads disinformation and discredits the reporters and their investigation.

Platform norms and reforms, crossing moral doors?

  • Shutting up & shutting down. X has suspended several accounts linked to opposition figures in Turkey amid widespread civil unrest. The suspensions follow protests triggered by the arrest of Ekrem İmamoğlu, Istanbul’s mayor and main rival to President Erdoğan. Many of the affected accounts belong to university-based activists sharing protest-related information. X’s actions align with Turkey’s 2022 social media law, which allows the government to suppress content, and come amid broader restrictions on social media platforms in the country.
  • Au revoir, do svidaniya, ma’a as-salama. Pavel Durov, founder of Telegram, has returned to Dubai after being allowed to leave France, where he was under investigation for failing to curb disinformation and illegal content on his messaging platform. The inquiry, which includes allegations related to child abuse material and other criminal activities, highlights concerns over Telegram’s role in spreading harmful content.
  • Censorship uncensored. Romania’s media watchdog, ANCOM, has challenged Elon Musk’s claims of censorship, defending its actions as part of a broader effort to combat Russian interference in elections. Pavel Popescu, ANCOM’s vice president, stated Romania is engaged in a “hybrid war” against disinformation. Musk criticised the regulator, linking it to censorship, but Popescu countered, suggesting Musk should engage directly with Romanian authorities. 
  • Next subpoena: Google. The representative Jim Jordan from Ohio (Republican Party) has subpoenaed Alphabet, the parent company of Google, demanding documents to investigate whether YouTube removed content at the request of the Biden-Harris administration. This move comes after Meta ended its fact-checking practices claiming to restore “free speech” on its platforms. Jordan accuses Big Tech of working with the government to suppress conservative voices, building on momentum since the 2021 removal of Trump from Twitter. 
  • Faceblocked. Papua New Guinea has temporarily shut down Facebook to address misinformation, pornography, and hate speech. The move, criticised by many as an abuse of power, has sparked outrage, especially among businesses. The government has not clarified how long the ban will last.

Brussels corner

  • Third draft of the AI Act Code of Practice attracts damning criticism.
    • The new AI Office is facilitating the drawing-up of a Code of Practice to detail “rules” on general purpose AI. The Code intends to “represent a central tool for providers to demonstrate compliance with the AI Act, incorporating state-of-the-art practices.” The third draft of this document was published this earlier month.
    • In March 2023, the European Parliament’s research service explained that the “disruptive nature raises policy questions around privacy and intellectual property rights, liability and accountability, and concerns about their potential to spread disinformation and misinformation. EU lawmakers need to strike a delicate balance between fostering the deployment of these technologies while making sure adequate safeguards are in place.”
    • In response, Members of the European Parliament (MEPs), as well as the Council Presidency negotiator have sent a damning letter to the EU’s digital Commissioner, Vice-President Henna Virkkunen saying starkly that the draft does not represent the intentions of the co-legislators. This is both politically damning, as the MEPs represent a majority of the Parliament that adopted the AI Act, but also legally damning, because any court assessing a challenge to the legality of the Code would need to assess if it respected the legislator’s intent. 
  • Shaping the shield. A coalition of 50 civil society organisations, coordinated by the European Partnership for Democracy (EPD), has released a timely contribution to the debate on the European Democracy Shield (EDS), a framework of measures designed to protect democracy, planned for Q3 of 2025 by the European Commission. Their priorities and concrete recommendations offer guidance for shaping a strong and future-proof instrument to protect European democracies.
  • Democracy Shield Special Committee meets three times. The European Parliament’s Special Committee on the Democracy Shield met three times in March.
    • On 18 March, the Special Committee held a joint exchange of views with the Civil Liberties, Justice and Home Affairs Committee (LIBE). The focus of the meeting was Artificial Intelligence. In the meeting, presentations were made by, and discussions held with:
      • DG Communications Networks, Content and Technology (CONNECT) of the European Commission, focusing on the AI Act and the DSA. He said the draft Code of Practice on AI was being prepared by independent experts and therefore the Commission could not comment on it.
      • DG Justice and Consumers (JUST) focusing on the General Data Protection Regulation (GDPR) and the Transparency of Political Advertising Regulation;
      • The Council of Europe, focusing on the recently-adopted CoE Convention on AI
      • The Centre for Governance of AI, whose representative also identified himself as one of the drafters of the draft Code of Practice on AI
      • Algorithm Watch, whose representative gave an overview of the NGO’s analysis of the legal framework
    • On 19 March, the Special Committee held a joint meeting with the Subcommittee on Security and Defence (SEDE) to discuss relevant issues with EU Commissioner Virkkunen. The Special Committee Chair pointedly asked about progress on proceedings against various online platforms under the Digital Services Act (DSA). In her first set of answers to questions, the Commissioner gave an overview of previously announced elements of the Democracy Shield while, in her second set of answers she said she is ready to make quick decisions, but that the Commission services need time to prepare cases. She said she hopes to make decisions “very soon”. In the third and final set of responses she said the platforms want to comply, even if there are critical political speeches and letters. She responded to, but did not answer, a question about the systemic risks created by the design and priorities of recommender systems. She said that the Commission is looking at how Telegram is counting its users, as this is very important to determine the level of regulation. Finally, she stressed that planned simplification measures would not involve weakening digital instruments but rather would aim to reduce bureaucratic burdens for SMEs. 
    • On 27 March, the Special Committee heard presentations and exchanged views with three entities:
      • the European Digital Media Observatory (EDMO), both its Director and its Coordinator of fact checking, on its work and working methods;
      • the Director of the Moldovan Centre for Strategic Communications and Fight against Disinformation on their work and analysis of threats;
      • the Estonian Consumer Protection and Technical Regulatory Authority (CPTRA) and GLOBSEC, on regulation of Telegram
      • The Committee also heard from the European Commission, in the form of FG ENEST (Enlargement and Eastern Neighbourhood)
  • DIGITAL funding. The European Commission has published the Digital Europe Programme (DIGITAL) work programme for 2025–2027, setting out planned funding for a range of digital initiatives, including efforts to tackle disinformation. Among the measures, 2,56 million euros is earmarked for the European Digital Media Observatory (EDMO) and 5 million euros for a European Network of Fact-Checkers. The programme also announces a 14 million euro investment in the creation of a European Democracy Shield. The published 2025–2027 work programme does not provide information about continuing the EDMO Hubs funding after their current funding ends.

Reading & resources

  • Machine of disinfo. White House Press Secretary Karoline Leavitt’s false claim about a 50 million USD Gaza condom contract highlights a growing pattern of disinformation within the Trump administration. Amplified by figures like Elon Musk, misleading statements are now central to political strategy, eroding trust in institutions and muddying public discourse. This “post-truth” era is reshaping political debates and challenging efforts to push back against misinformation.
  • Tech backpedal. The Trump administration’s Federal Trade Commission has removed key business guidance blogs, including those on AI and consumer protection cases involving Amazon and Microsoft. The move, which raises concerns about federal transparency laws, benefits big tech companies by erasing compliance advice on data use. Critics argue it could help tech firms, especially in areas like content moderation, by reducing oversight on how they handle data and user behaviour online.
  • Warming the Earth for good? Trump’s push to downplay climate change intensifies, with plans for a new federal report that could reverse climate regulations. Despite overwhelming scientific evidence, the Trump administration is considering efforts to present global warming as beneficial to humanity. This marks a new chapter in his ongoing disinformation campaign, aiming to weaken climate rules while expanding executive power. Critics warn that this could harm public health and the environment, as the fight over climate science heats up.
  • Flip rocks anti-vax world. Robert F. Kennedy Jr.’s recent support for the MMR vaccine (Measles, Mumps, Rubella) has shaken the anti-vaccine community. Once a vocal critic, Kennedy’s call for widespread availability in response to a measles outbreak has sparked outrage and confusion among his former allies. While he emphasises personal choice, his shift has raised concerns that it could weaken the anti-vaccine movement’s stance.
  • Chinese-style FIMI. EUvsDisinfo spoke with Daria Impiombato, analyst at the Australian Strategic Policy Institute, about the characteristics of China as a FIMI actor, including using foreign influencers to help improve Beijing’s image abroad, suppression of critical voices, and what we can do collectively to counter some of these malign activities. 
  • Defusing water & climate disinfo. Water and climate misinformation is on the rise, making it harder to focus on real solutions. In this roundtable organised by WaterHub, experts Abbie Richards (Media Matters) and Phil Newell (CAAD) break down how false narratives spread, who benefits from them, and what we can do to counter disinfo effectively, before, during, and after media moments.
  • From bombs to bots: Russia’s reach. Russia is waging an intensifying shadow war across Europe and the US, marked by sabotage, subversion, and cyberattacks, according to a new CSIS report. Led by the Russian foreign intelligence agency GRU, these operations have nearly tripled in the past year and target critical infrastructure, transportation, and defense industries, often tied to Western support for Ukraine. The report calls for a more assertive strategy, including offensive cyber operations, increased sanctions, and counter-influence campaigns.
  • Espionage via Chrome. Kaspersky has uncovered Operation ForumTroll, a likely state-sponsored cyber-espionage campaign targeting Russian media and academia. Hackers exploited a zero-day Chrome flaw via a wave of phishing emails impersonating organisers of a well-known Russian scientific forum. The emails contained malicious links that infected victims instantly. The Russian government blamed this campaign on the US.
  • Chatting conspiracy away. Disinformation spreads easily across social media, and traditional debates often strengthen false beliefs. Research suggests AI chatbots like ChatGPT can be more effective in persuading individuals away from conspiracy theories. A 2024 study found that conversations with the AI reduced believers’ conviction by 20%, with one in four abandoning their beliefs entirely. Additionally, “prebunking” misinformation and teaching critical thinking can help prevent false beliefs from forming. 
  • History cultivates critical thinking. In today’s digital world, teaching critical thinking through history education is key to fighting online misinformation. This article discusses how history classes help students assess sources and context, fostering “historical thinking” and “civic online reasoning.” A strong historical foundation enables students to spot misinformation and think critically about political and social issues. 
  • Same method, same flaws. This article explores Meta’s shift from its independent fact-checking program to the new Community Notes system, which will use the same visibility algorithm as X. Notes will only appear when users who typically disagree find them useful, a model that has struggled to effectively surface critical fact-checks in the past. It also raises concerns about the lack of clarity on key aspects, including the possibility for users to appeal unfair notes.
  • Room for hope. This study analyses the factors that make community-created fact-checks on X’s Community Notes platform helpful. It finds that providing links to external, unbiased sources significantly increases the perceived helpfulness of these community-created fact-checks. Additionally, those linked to highly biased sources are rated as less helpful, suggesting that the platform effectively penalises one-sided or politically motivated reasoning. 
  • New paradigms in trust and safety. Decentralised social media introduces new governance challenges with the concept of defederation, allowing servers to disconnect from others to prevent harmful content. This power to block communication between servers raises questions about balancing free speech and safety. As major platforms like Meta and Bluesky join the decentralised space, the consequences of defederation decisions become more significant. Experts are exploring how to navigate these challenges, ensuring user safety while maintaining openness and user choice, and the role of commercial platforms in supporting decentralised moderation.
  • Academic sabotage. The Alliance Defending Freedom (ADF) is targeting Indiana University’s Observatory on social media, accusing it of being part of a fake “censorship-industrial complex.” The ADF’s claims aim to discredit research on misinformation, particularly around election fraud and vaccine misinformation. The Observatory has refuted these myths, clarifying that it does not engage in content moderation and operates independently of the government. 
  • Press challenges. The Medianet 2025 Media Landscape Report reveals growing concerns among Australian journalists about AI, social media, and misinformation, with 63% not using generative AI. Many fear its impact on jobs and journalistic integrity, while 67% of journalists believe social media fosters echo chambers. Trust in media continues to decline, and misinformation remains a significant threat. 
  • Brainwashing of the people, by the people, with the people. This paper explores how both democratic and autocratic regimes use disinformation and mass persuasion, blurring the line between the two. Modern “spin dictatorships” manipulate information through social media, AI, and surveillance. Psychological factors and non-systemic drivers like leader charisma fuel susceptibility to manipulation. The paper calls for a rethinking of propaganda, advocating for pluralistic media, digital literacy, and addressing the psychological roots of disinformation.
  • VMD, a growing threat. The Visual and Multimodal Disinformation (VMD) assessment report by The University of Ottawa Information Integrity Lab provides an accessible study of the strategies available to analyse, contain, and combat VMD. Case studies include a discussion of deep fakes, impacts on public health and safety, as well as climate change information distortion.
  • France revamps info warfare. NATO’s Deputy Secretary General, Marie-Doha Besancenot, is joining France’s Ministry of Foreign Affairs as Advisor for Influence and Strategic Communication. She will work to modernise France’s strategy on influence and information warfare.

This week’s recommended read, brought to you by Maria Giovanna Sessa, is actually an entire library! The Uncensored Library is a project by Reporters Without Borders (RWB) designed to safeguard digital freedom. In a time when democracy and free speech are under threat – and media is censored or controlled in many countries – one surprising platform remains open: the world’s most popular video game, Minecraft. RWB cleverly uses this loophole to bypass internet censorship and bring the truth back to life..

The latest from EU DisinfoLab

  • #Disinfo2025 registration is open. We’ll be in Ljubljana on 15–16 October to share, learn, and connect with the counter-disinformation community. Space is limited, so request your ticket as soon as possible to secure your spot! 
  • Another AI Hub-date. We’ve just added new content to our AI Disinfo Hub – your source for tracking how AI shapes and spreads disinformation, and how to push back. Here’s a snapshot of what’s new:
    • Meta Platforms is taking steps to curb misinformation ahead of elections. In Canada, advertisers will be required to disclose AI-generated or digitally altered political and social issue ads. Meanwhile, in Australia, Meta’s independent fact-checking program will address false content, including deepfakes.
    • Language shapes censorship. Chinese AI models like DeepSeek censor sensitive topics, following China’s regulations. However, a recent analysis found censorship varies by language, with models responding more freely in English than in Chinese. Experts link this to training data biases and differing safeguards, highlighting AI’s cultural and political alignment challenges.

Events and announcements

  • 3 April: The AI-on-Demand Webinar Series launches its first webinar “AI Meets Media: Tackling Disinformation with Cutting-Edge Innovation”.
  • 6 April: EDMO BELUX’s call for abstracts for an English-French bilingual special issue “Untangling the knots between disinformation and inequalities” in the “Recherches en Communication” publication is open until 6 April.
  • 8 April: EDMO BELUX workshop & networking event “Mitigating disinformation in Belgium and Luxembourg: Where are we now, where are we going?” will take place in Brussels. Register by 2 April.
  • 9-13 April: The International Journalism Festival will be held in Perugia, Italy. The second edition of the Centre for Information Resilience (CIR) Open Source Film Awards ceremony will be organised as part of the event.
  • 10 April: The DSA Online Dispute Resolution Conference, hosted by ADROIT, will explore the evolving ecosystem under the Digital Services Act. Engage with industry leaders and gain insights into DSA compliance.
  • 10 April: The IRSEM event “Identifying and Countering Disinformation : Lessons for Europe from Estonia and France,” will take place in Brussels, with the focus on the 2024 European elections and the role of education in strengthening democratic resilience.
  • 10-11 April: AlgoSoc’s international conference The Future of Public Values in the Algorithmic Society will take place in Amsterdam, The Netherlands.
  • 23-25 April: The 2025 Cambridge Disinformation Summit will discuss research regarding the efficacy of potential interventions to mitigate the harms of disinformation.
  • 22-25 May: Dataharvest, the European Investigative Journalism Conference, will be held in Mechelen, Belgium. Tickets are sold out, but you can still join the waiting list.
  • 13 June: Defend Democracy will host the Coalition for Democratic Resilience (C4DR) to strengthen societal resilience against sharp power and other challenges. The event will bring together transatlantic stakeholders from EU and NATO countries to focus on building resilience through coordination, collaboration, and capacity-building. 
  • 17 & 24 June: veraAI will host two online events on artificial intelligence and content analysis, including the verification of digital media items with the help of technology. The first session is dedicated to end user facing tools addressing journalists and fact-checkers as a target audience. The second session will be primarily directed at the research community.
  • 17-18 June: The Summit on Climate Mis/Disinformation, taking place in Ottawa, Canada, and accessible remotely, will bring together experts to address the impact of false climate narratives on public perception and policy action.
  • 18-19 June: Media & Learning 2025 conference under the tagline of “Educational media that works” will be held in Leuven, Belgium.
  • 27-28 June: The S.HI.E.L.D. vs Disinformation project will organise a conference “Revisiting Disinformation: Critical Media Literacy Approaches” on Crete, Greece.
  • 30 June – 4 July: The Summer Course on European Platform Regulation 2025, hosted in Amsterdam, will offer a deep dive into EU platform regulation, focusing on the Digital Services Act and the Digital Markets Act. Led by experts from academia, law, and government, the course will provide in-depth insights into relevant legislation.
  • 15-16 October: #Disinfo2025 is heading to Ljubljana, Slovenia. Mark your calendars – and request your ticket now to make sure you don’t miss out!
  • 22–23 October: 17th Dubrovnik Media Days, under the theme “Generative AI: Transforming Journalism, Strategic Communication, and Education”, will take place in Dubrovnik, Croatia.
  • 25 or 26 October: Researchers and practitioners on trustworthy AI are invited to submit papers to TRUST-AI, the European Workshop on Trustworthy AI organised as part of the 28th European Conference on Artificial Intelligence ECAI 2025, Bologna, Italy.
  • 29-30 October: The 2nd European Congress on Disinformation and Fact-Checking, organised by UC3M MediaLab, will take place under the subtitle “Beyond 2025: Emerging threats and solutions in the global information ecosystem” in Madrid, Spain, with the possibility to join remotely.
  • 20-24 November: The 2025 Global Investigative Journalism Conference will be held in Kuala Lumpur, Malaysia.

Jobs

Did you find a job thanks to the listing in this newsletter? We’d love to know – please drop us a message!