Dear Disinfo Update reader,

The heat is definitely on! With our partners, we’re proud to launch HEAT: Harmful Environmental Agendas & Tactics, a cross-border investigation into how climate disinformation is heating up in France, Germany, and the Netherlands. From geoengineering conspiracies to strategic amplification networks, HEAT maps the tactics and narratives fuelling distrust – and the policy blind spots that keep letting it happen. Catch the key findings in our upcoming webinar on 26 June!

Elsewhere in this issue: from fresh revelations about Meta’s alleged revenue-sharing with sanctioned Russian outlets, to presidential X-hacks, AI-generated fashion bots, and disinformation in Argentina, it’s safe to say the so-called “slow summer season” remains a myth. Plus, we’ve got events, readings, new research – and a summer school schedule that’s keeping our team busy.

Dive in!

Our webinars

UPCOMING – REGISTER NOW!

  • 26 June: HEAT is rising – Harmful Environmental Agendas & Tactics in France, Germany, and the Netherlands | Disinformation is fuelling climate confusion and democratic distrust across Europe. This webinar launches HEAT, a cross-border investigation by Logically and EU DisinfoLab, supported by the European Media and Information Fund, uncovering how harmful climate narratives are deliberately spread in Germany, France, and the Netherlands. Expect fresh findings and concrete evidence of manipulation!
  • 3 July: Sanctions, so what? How Meta may have channelled money to EU-sanctioned Russian propaganda outlets | Using publicly available data, WHAT TO FIX found that Meta continued – and in some cases initiated new – revenue redistribution partnerships with RT, Sputnik, and other EU-sanctioned entities months after the sanctions were introduced. Our speaker Victoire Rio (WHAT TO FIX) will explain how the investigation points to serious concerns about Meta’s sanctions compliance, as well as to the need for greater scrutiny and oversight of social media monetisation services.
  • 17 July: Influence for sale: mapping the online manipulation market | This session explores the fast-growing market for online manipulation, where fake engagement and influence-for-hire services are readily available. It covers recent research into what drives the cost of these services, and introduces the Cambridge Online Trust and Safety Index – a new tool tracking manipulation across platforms and countries. Our speaker, Jon Roozenbeek (King’s College London & University of Cambridge), will unpack how this market works and suggest new directions for research and regulation.

PAST – WATCH THE RECORDINGS!

Disinfo news & updates 

Presidents & platforms

  • Radio takeover. As the Trump administration cut US-funded news broadcasts in Asia, including broadcasts on 60 shortwave radio frequencies, China has gained significant ground in the information war. Beijing has exploited that void, adding 80 new frequencies and jamming stations previously operated by the US-funded Radio Free Asia. These include 26 new Tibetan-language and 16 new Uyghur-language frequencies that extend the reach of China’s propaganda.
  • A bad entanglement. Paraguay’s President Santiago Peña’s X account was likely hacked after the leader appeared to promote trading in the cryptocurrency Bitcoin. A post shared on his account in English, alongside a Spanish-language statement purporting to be from the government, declared that the Latin American country had made Bitcoin legal tender and would roll out a $5 million Bitcoin-backed reserve fund.
  • Disinformation thrives in unrest. Since protests against immigration raids in Los Angeles began, false and misleading claims about the ongoing demonstrations have spread on social media. For instance, Russian media and conservative pro-Russian voices have embraced right-wing narratives about the protests, including one that alleged the Mexican government was encouraging the demonstrations against President Donald Trump’s immigration policies. Mexico has strongly rejected the accusation as utterly false. Conspiracy theories also run rampant, with accounts claiming that “Soros-funded organisations” had dropped off pallets of bricks near Immigration and Customs Enforcement (ICE) facilities. Worryingly, generative AI tools like Grok and ChatGPT have added fuel to the fire, producing misleading or incorrect responses about the protests that, once amplified, have further distorted public understanding during this volatile moment.
  • Control everything with a click. Russia is betting on a super-app strategy to sideline foreign platforms and control digital communications, through a new instant messaging platform promoted by the government and endowed with privileges unavailable to competitors. So far, the main contender is VK, the company behind Russia’s most popular social network, which has presented a beta version called “Max”. The prototype has drawn many comparisons to China’s WeChat – and, with them, similar criticism over censorship.

News from the internet front

  • Atoning for past mistakes. With 78% of Czechs fearing manipulation on social media, especially after the Romanian elections, TikTok is taking preemptive action ahead of the country’s October parliamentary elections by deploying a 53-member team of local moderators to monitor content on the platform. This task force will focus on detecting disinformation, flagging AI-generated political content, and enforcing transparency among political influencers.
  • Verify your age to tweet. The French government is considering designating X as a pornographic platform, which would force it to implement strict age verification requirements, as the platform’s new rules permit the distribution of pornographic content. For the same reason, Tanzania has already blocked X.
  • Alternative or innovation? Bluesky is facing criticism for allegedly becoming a left-leaning echo chamber with hostile discourse, leading some users, including investors, to disengage. This perception risks overshadowing what Bluesky actually is: a gateway to a broader ecosystem built on the open AT Protocol. The network enables users to build or join apps tailored to specific interests or communities. Despite the backlash, Bluesky has grown to over 36.5 million users and supports a range of social experiences beyond its flagship app. Promoting this diversity is crucial to prevent Bluesky from being boxed in as just another X alternative.
  • Free speech > information. YouTube has eased its moderation approach, letting more content that may break its rules stay up if it’s considered in the public interest. Starting in December 2024, the platform raised its threshold for violations and now tells reviewers to prioritise context like politics, health, and social issues. Videos are more likely to stay online if less than half of the content breaks the rules, especially when it involves public figures or controversial topics. Moderators are also urged to weigh free speech against potential harm and escalate unclear cases. This move aligns with a broader trend of reduced moderation across tech platforms following Trump’s re-election and ongoing legal scrutiny of Google.
  • Disinformation is always in fashion. Famous French fashion influencers have been targeted by an army of pro-Shein bots mass-commenting in favour of the platform and its affordability. Overall, more than 2,000 bots have been identified – profiles created in July 2024 and filled with AI-generated images: an army of ghost accounts responsible for 31,000 comments, photos, and likes relaying the arguments of the Chinese giant. Following an internal investigation, Shein denied creating these fake accounts, calling them a cynical “manoeuvre aimed at smothering authentic voices”.
  • Chats or Ch-ads. Meta announced that over the next few months, it will introduce advertisements to WhatsApp – specifically in the ‘Updates’ tab, used by approximately 1.5 billion people daily, rather than within chats. Meta said it will collect some data from users to help target the ads, including location and language, but will not collect any information from messages or calls. Privacy professionals have nonetheless raised several concerns.
  • A ticket to lie. Maldita.es recently uncovered a massive scam in which Facebook pages impersonated Spanish public transport services in 47 cities to phish personal data and credit card information. Despite being flagged and reported, and despite the EU’s Digital Services Act (DSA) requirement to remove illegal content, 93% of the posts remained public on the platform, with ads being run to promote the scams.

The FIMI files

  • Russian disinfo ops in Argentina? Argentina’s intelligence agency has reportedly identified a network of suspected Russian spies accused of spreading pro-Kremlin disinformation in the region – echoing tactics linked to Project Lakhta.
  • China & Russia. China and Russia pose similar cyber risks to Europe, says the Czech president, following a recent cyberattack on his government linked to a Beijing-backed hacking group.

Climate edition

  • From crisis to catastrophe. A major report by the International Panel on the Information Environment (IPIE) warns that climate misinformation, driven by fossil fuel lobbies, right-wing politicians, and hostile states, is obstructing action and undermining trust. Reviewing 300 studies, the report finds that denialism has evolved into targeted attacks on climate solutions.
  • EU caves on green claims. The European Commission plans to withdraw its proposed Green Claims Directive following pressure from conservative lawmakers, particularly the centre-right EPP group. The law was intended to curb corporate greenwashing by requiring companies to back up environmental claims with evidence. Its rollback marks a significant concession to political opposition and a blow to EU efforts to hold businesses accountable for misleading sustainability claims.

Brussels corner

Last week, Carlos Hernández from Maldita published a sharp and timely edition of his newsletter, focusing on policy developments in the disinformation and fact-checking space (highly recommended reading – subscribe if you haven’t already!). One particular item stood out to us.

As the “Strengthened” Code of Practice (SCoP) on disinformation is quietly being phased out – ahem, replaced – by a new Code of Conduct, few have paid attention to the final self-assessment reports submitted by the signatories. Over the past four years, these reports have been met with consistent scepticism, with critics frequently accusing platforms of failing to even report accurately on their own voluntary commitments. In a final flourish, Carlos notes that Google chose to report only on the limited set of actions it intends to carry over into the new Code of Conduct – an ironic farewell to the SCoP, if there ever was one. This seems to run counter to the Commission’s Principles for better self- and co-regulation, which suggests that what we really need is a Code of Practice on Codes of Conduct, ideally strengthening the principles already adopted by the Commission and explained on a page where none of the links work.

To capture the mood, the EU DisinfoLab team suggests a film pick: Groundhog Day. It’s the perfect metaphor for grasping – truly and deeply – the nature of both the Code of Practice and the Code of Conduct.

Reading & resources

  • Revenue or sanction escape? WHAT TO FIX has published a report on how Meta platforms may have violated EU sanctions and channelled money to RT, Sputnik, and other EU-sanctioned entities via Facebook’s Revenue Redistribution Programs. Meta has engaged in revenue redistribution partnerships with Russia Today (RT) and Sputnik pages since 2020. These pages were restricted from monetising in the immediate aftermath of Russia’s full-scale invasion of Ukraine, but Sputnik pages were re-listed for revenue redistribution from October 2022 through to October 2023. Join our webinar next week to learn more!
  • Harmonising efforts against hate online. This policy paper by the OECD maps responses to technology-facilitated gender-based violence (for example, non-consensual intimate image sharing, stalking, or hate messages) across G7 countries. It focuses on four key areas: national strategic and legal frameworks; the regulation of online platforms; data collection mechanisms; and capacity-building for justice system actors.
  • Effects of disinfo and FIMI in Georgia. This paper discusses the use of disinformation accusations as a strategy to deflect political blame. A survey experiment in Georgia (n = 1200) tested whether framing a scandal as disinformation changes public opinion. Results show that such defences, including those citing Russia, do not reduce perceived guilt: voters dismiss the candidate regardless of the excuse. These findings suggest that disinformation defences are largely ineffective and may even backfire.
  • AI learning loss. Research found that over four months, users of large language models (LLMs) consistently underperformed at neural, linguistic, and behavioural levels. While LLMs offer immediate convenience, the findings highlight potential cognitive costs; these results raise concerns about the long-term educational implications of relying on LLMs and underscore the need for deeper inquiry into AI’s role in learning.
  • Deceivingly human. The warning signs that once helped identify misinformation – grammatical errors, awkward phrasing, and linguistic inconsistencies – are rapidly becoming obsolete as AI-generated content becomes indistinguishable from human writing. This article examines the shift from purely automated detection to “transparent” systems, which empower users by sharing the reasoning behind their assessments and explaining their decision-making process.
  • “Tralalero tral-algorithm.” Brainrot was chosen by Oxford University Press as the word of 2024. Referring to bizarre, vulgar AI-generated videos featuring unrelated characters (like Ballerina Cappuccina and Tralalero Tralala) and controversial Italian audio, both the word and the material have gone algorithmically viral. This content has, however, spawned a trend of creators monetising it through ads and by selling courses and guides on building a proper digital business.
  • Media competition fuelling misinformation. This study explores how rival outlets, in a bid to outdo one another, may end up amplifying falsehoods. The result? A race to the bottom, where short-term gains come at the cost of long-term trust.
  • EU Climate MAGAfication. This DeSmog analysis details how US far-right lobbying groups, including the Heartland Institute and Heritage Foundation, are pressuring the EU to weaken its climate regulations. Framing green laws as threats to US sovereignty, these MAGA-aligned forces are fuelling a rollback of environmental standards across the Atlantic. The piece highlights how transatlantic disinformation and deregulatory pressure are chipping away at Europe’s climate commitments.
  • Mayor voices. This Guardian opinion piece is a joint statement by Sadiq Khan (Mayor of London) and Anne Hidalgo (Mayor of Paris) calling for urgent action to counter climate disinformation and its destructive impact on local climate policy.

This week’s recommended read

This week’s recommended read, an investigation into AI-generated disinformation, comes from our Executive Director Alexandre Alaphilippe.

This time, our reading recommendation turns to Jean-Marc Manach’s article in Next.ink (in French), which investigates the network of Gen-AI websites promoted by Google Discover, the algorithm that recommends news content to users based on their search and browsing activity. The investigation, which focuses solely on the French landscape, shows that Google Discover has given significant visibility to inauthentic websites whose content is generated by artificial intelligence. It reveals that these actors deliberately target niches, generating clickbait content to attract the maximum number of visits, and then profit from the advertising revenue those visits bring.

Why is this problematic? Because, on the one hand, these sites quickly publish unverified and often false information – such as the withdrawal of banknotes, the alleged discovery of gold mines, or fictitious taxes to fund Ukraine’s war efforts. In a previous investigation, Next.ink found over 4,000 such websites, 150+ of which were recommended by Google Discover. As a result, this content appears ahead of that of verified, ethically bound news outlets.

Not only do they appear first and thus attract more visits, but they also capture a portion of the advertising revenue that would otherwise go to these legitimate media outlets. This enriches the creators of these websites and encourages them to continue this rat race. This issue is truly central, as it leads to defunding high-quality, valuable journalism in favour of unverified – and now artificially generated – information.

Jean-Marc Manach told us he would like to expand his research and share his methodology with the counter-disinformation community. You can securely get in touch with him here.

The latest from EU DisinfoLab 

  • HEAT is rising. We’re proud to launch HEAT – Harmful Environmental Agendas & Tactics, a groundbreaking cross-border investigation revealing how climate disinformation is strategically deployed across Germany, France, and the Netherlands. Drawing on open-source intelligence, in-depth research, and detailed policy and platform-level recommendations, the report exposes inauthentic amplification tactics and how once-fringe conspiracy narratives are becoming embedded in mainstream discourse – amplified by domestic actors and foreign influence operations exploiting platform vulnerabilities. Despite clear harms, climate disinformation remains unregulated under the Digital Services Act (DSA), exposing a serious policy gap. HEAT addresses this directly, offering credible analysis and actionable steps to support its recognition as a systemic risk. The investigation was conducted by Logically and EU DisinfoLab, with support from the European Media and Information Fund (EMIF).
  • AI (h)updates. We’ve just added new content to our AI Disinfo Hub – your source for tracking how AI shapes and spreads disinformation, and how to push back. Here’s a snapshot of what’s new:
    • Just as humans need vaccines, so do models. Researchers from the University of Central Florida and the University of Groningen, in collaboration with the Vector Institute, proposed a novel AI training paradigm called model immunization: fine-tuning AI models on small doses of labelled falsehoods, treated as “vaccine doses”, so that they learn to recognise and proactively reject misinformation.
    • Disrupting malicious uses of AI. OpenAI has dismantled 10 covert influence operations that misused its AI tools, four of which were likely linked to the Chinese government. The campaigns used ChatGPT to generate propaganda, fake social media engagement, internal performance reviews, and even marketing materials. Other operations tied to Russia, Iran, and China included impersonating journalists and analysing sensitive correspondence. 
    • Building AI chatbots to spread fringe beliefs. Conspiracy theorists are developing and training their own AI models to build chatbots that promote fringe beliefs, spreading debunked claims and even helping users craft posts and letters to persuade others. While many conspiracy communities distrust AI, they are also exploiting it to reinforce narratives of censorship, surveillance, and state control.

Events & announcements  

  • 30 June-4 July: The Summer Course on European Platform Regulation 2025, hosted in Amsterdam, will offer a deep dive into EU platform regulation, focusing on the Digital Services Act and the Digital Markets Act. Led by experts from academia, law, and government, the course will provide in-depth insights into relevant legislation.
  • 2 & 10 July: EDMO is hosting a two-part online training series on understanding and responding to climate disinformation, open to those working in the field.
  • 8-11 July: The AI for Good Global Summit 2025 will be held in Geneva. This leading UN event aims to identify practical applications of AI, accelerate progress towards the UN SDGs and scale solutions for global impact. 
  • 6-13 July: WEASA is hosting a summer school titled “Digital Resilience in the Age of Disinformation” for mid-career professionals.
  • 15-16 July: The European Media and Information Fund (EMIF) will hold its Summer Event in Lisbon, Portugal.
  • 4-6 September: This year’s edition of the SISP Conference #SISP2025 will take place at the University of Naples Federico II and will host conversations on digital sovereignty, EU cybersecurity policy, and the challenges posed by emerging technologies.
  • 19 September: Generative futures: Climate storytelling in the age of AI disinformation is a workshop that will be held in Oxfordshire (UK), exploring how generative AI is reshaping climate storytelling, both as a force for inspiration and a driver of disinformation.
  • 24-25 September: This year’s JRC DISINFO hybrid workshop is titled “Defending European Democracy”. Save the date and stay tuned for more info!
  • 15-16 October: Our annual conference #Disinfo2025 will take place in Ljubljana, Slovenia. The perfect time to grab your ticket is now – we bet you won’t want to miss it.
  • 25 or 26 October: Researchers and practitioners working on trustworthy AI are invited to submit papers to TRUST-AI, the European Workshop on Trustworthy AI, organised as part of the 28th European Conference on Artificial Intelligence (ECAI 2025) in Bologna, Italy.
  • 29-30 October: The 2nd European Congress on Disinformation and Fact-Checking, organised by UC3M MediaLab, will take place under the subtitle “Beyond 2025: Emerging threats and solutions in the global information ecosystem” in Madrid, Spain, with the possibility to join remotely.
  • 20-24 November: The 2025 Global Investigative Journalism Conference will be held in Kuala Lumpur, Malaysia.
  • MediaEval 2025 Synthetic Images challenge. As part of the veraAI and AI-CODE projects, a new challenge invites researchers to tackle synthetic image detection and manipulation localisation using real-world data. More info, registration, and dataset links are available on the MediaEval 2025 website and GitHub.
  • AI tools & FIMI. A short survey is open as part of the EU-funded RESONANT project, exploring the effectiveness and reliability of AI tools in detecting disinformation and Foreign Information Manipulation and Interference (FIMI). If you work in this area, we invite you to share your insights.
  • Call for contributions on the DSA. The European Centre for Algorithmic Transparency is looking for researchers investigating systemic risks of Very Large Online Platforms and Search Engines – in particular risks related to the mental and physical health of minors – to present their findings to an audience of policymakers and fellow researchers. Submit a proposal by 20 August.

Spotted: EU DisinfoLab

Jobs 

  • Wikimedia Europe is looking for a Finance and Administration Officer in Brussels.
  • Debunk.org is hiring for several positions, including a Researcher and Analyst for Disinformation Analysis, a Media Literacy Expert on Disinformation, and an Administrator / Project Coordinator for Countering Disinformation.
  • The University of Bergen is recruiting a Postdoctoral Research Fellow in Media Studies to join the IMAGINE project, exploring citizen experiences with AI in news media. Apply by 7 August.
  • Privacy International is hiring a Tech Advocacy Officer for a full-time, permanent role based in London. Apply by 2 July.
  • The Global Investigative Journalism Network (GIJN) is looking for a Program Director.
  • The Public Media Alliance is seeking a Communications Officer for a hybrid role based in Norwich, UK. Apply by 30 June.
  • The Forum on Information and Democracy is recruiting a Development & Resource Officer for a full-time, Paris-based role. Apply by 27 June.

Did you find a job thanks to the listing in this newsletter? We’d love to know – please drop us a message!

Have something to share – an event, job opening, publication? Send your suggestions via the “get in touch” form below, and we’ll consider them for the next edition of Disinfo Update.