Dear Disinfo Update reader,
Summer is in full swing, and there is no better way to welcome the heat than with our last newsletter before the summer break. It’s hot outside, and so is this newsletter: full of new tools and readings, and boiling updates from Brussels on consultations, platform regulation, AI, and climate disinfo.
We say goodbye with a final webinar this Thursday on the online manipulation market, leaving you with plenty of other discussions to rewatch and a teaser of what is to come: the programme of our annual conference is now up on our website!
You read that right: you can already get a glimpse of how #Disinfo2025 will unfold in October, and you won’t want to wait until the end of summer. Get your tickets here and don’t miss out on the exciting and much-needed conversations we will have in the heart of Slovenia.
But for now, we have plenty to cover – from bots pushing Scottish independence and disinformation around missile strikes to fresh reports and consultation recaps.
Dive in, and see you soon in Ljubljana!
Our webinars
UPCOMING – REGISTER NOW!
- 17 July: Influence for sale: mapping the online manipulation market | This session explores the fast-growing market for online manipulation, where fake engagement and influence-for-hire services are readily available. It covers recent research into what drives the cost of these services, and introduces the Cambridge Online Trust and Safety Index – a new tool tracking manipulation across platforms and countries. Our speaker, Jon Roozenbeek (King’s College London & University of Cambridge), will unpack how this market works and suggest new directions for research and regulation.
PAST – WATCH THE RECORDINGS!
- HEAT is rising – Harmful Environmental Agendas & Tactics in France, Germany, and the Netherlands | Disinformation is fuelling climate confusion and democratic distrust across Europe. This webinar launched HEAT, a cross-border investigation by Logically and EU DisinfoLab, supported by the European Media and Information Fund, uncovering how harmful climate narratives are deliberately spread in Germany, France, and the Netherlands. Re-watch for fresh findings and concrete evidence of manipulation!
- Sanctions, so what? How Meta may have channelled money to EU-sanctioned Russian propaganda outlets | Using publicly available data, WHAT TO FIX found that Meta continued – and in some cases initiated new – revenue redistribution partnerships with RT, Sputnik, and other EU-sanctioned entities months after the introduction of the sanctions. Our speaker Victoire Rio (WHAT TO FIX) unpacked how the investigation points to serious concerns about Meta’s sanctions compliance, as well as to the need for greater scrutiny and oversight of social media monetisation services.
Disinfo news & updates
Governments and platforms
- What’s (the safest) App. The U.S. House Office of Cybersecurity has deemed WhatsApp a high risk to users due to the lack of transparency in how it protects user data, the absence of stored-data encryption, and potential security risks involved in its use. Officials recommend using apps like Signal, iMessage, and Teams instead. Meta disputes the ban, claiming WhatsApp is more secure. The move follows earlier app bans, such as the one on TikTok, and reports of spyware targeting WhatsApp users.
- Channels of terror. Australia has designated Terrorgram, a decentralised network of violent extremist Telegram channels that exploits social grievances to radicalise users, as a terrorist organisation, making membership punishable by up to 25 years in prison. To counter it, Australia must pair policing with stronger policies that address the root causes of extremism – economic hardship, social isolation, and political distrust – to build long-term national resilience.
- A tighter grip. In the UK, tech platforms could be forced to prevent illegal content from going viral and limit the ability for people to send virtual gifts to or record a child’s livestream, under more online safety measures proposed by Ofcom. The regulator’s proposals encompass mechanisms to report content showing illegal activity, as well as requiring platforms to use proactive technology to detect content harmful to children.
- X-tradition? French prosecutors have launched a criminal investigation into X, owned by Elon Musk, over alleged algorithm manipulation and user data misuse linked to foreign interference. The probe follows a January inquiry triggered by complaints about the platform spreading hate speech and undermining democratic debate. Two French lawmakers also recently referred X to the national digital regulator over offensive content from its AI chatbot, Grok.
Tensions and Tech
- (Dis)information warfare. During the armed conflict between Israel and Iran, NewsGuard identified 22 false claims and 62 websites spreading such narratives. Viral false posts included disinformation about the targets of Iran’s missiles, the evacuation of US troops from the Middle East, and China supporting Iran with cargo planes – a narrative that widely used chatbots also failed to counter. Notably, DFRLab found that Grok, developed by Elon Musk’s xAI, provided incorrect and inconsistent answers to users’ queries about the 12-day conflict.
- Deepfakes and deterrence. Following the Indian strike on the Nur Khan Airbase in Pakistan, social media was flooded with AI-created satellite images, deepfake videos of explosions and airfield damage, and fake audio messages in which a senior Pakistani commander is heard declaring a nuclear alert. This has heightened alertness around AI in South Asia, as forged or synthetic content within a conflict can trigger panic and chain reactions, putting nuclear stability at risk.
- FIMI blackout. In the aftermath of Israel’s attack on Iran, Iran-linked accounts posing as pro-Scottish independence users went offline after Israeli strikes on Iran’s cyber-infrastructure. These accounts, tied to the Islamic Revolutionary Guard Corps, mimicked UK users to amplify division and produced a significant share of pro-independence content to exaggerate support and destabilise the UK.
- Farming disinformation. Brazil’s ambassador to the EU has accused opponents of the Mercosur agreement of spreading disinformation. The ambassador told MEPs in Brussels that a disinformation campaign surrounding the trade deal, politically agreed in December 2024 between the EU and the Mercosur countries, centred on false “animal disease rates” in Brazil. EU member states have yet to adopt the deal, and some oppose it for fear of inadequate environmental and phytosanitary standards.
- Documenting propaganda. Billboards have appeared in Italian cities advertising the screening of RT documentaries sharing Russian propaganda, under the tagline “They ban the truth, we show it”, and organised by pro-Kremlin journalists based in Donbas. Although it is unclear whether RT is actively involved in this campaign, the initiative apparently circumvents a 2022 EU regulation, issued following the Russian invasion of Ukraine, prohibiting the distribution of RT and Sputnik content in EU member states.
Brussels corner
Financing interferences and attacks on democracy
On 23 June, the European Parliament’s special committee on the Democracy Shield held a public hearing entitled “Financing interferences and attacks on democracy”. This was split into two round tables.
First was “Monetary flows, financial networks, opaque funding of false NGOs and other strategies used to undermine democratic processes”.
- The first speaker was the Head of the Centre for Finance and Security at RUSI (Royal United Services Institute) Europe. The threat, she said, is financial, strategic, and increasingly systemic. Hostile actors are moving away from overt aggression towards financial tools – transactions instead of tanks. These operations pass under the radar, exploit gaps in our legal and organisational systems, take effect before we even notice, and are designed to destabilise democracies. The necessary response is resilience: strengthening what we already have in financial systems and political finance laws, in a manner as coordinated and adaptive as the threat itself. Enforcement remains weak and patchy. The speaker used Armenia as a case study of common failings in public oversight, coordination, and public-private cooperation.
- The second speaker, from the European Council on Foreign Relations, gave an overview of vote-buying in Moldova. She pointed to the low average income as one reason bribery is effective there, and to research by Deutsche Welle indicating that 4 million euros was spent buying votes in one region, with Telegram used to coordinate the voters. Moldovan authorities were able to block 97 Telegram bots taking part in this operation. Moldova’s election monitor is suffering severe cuts as a result of USAID restrictions, and media outlets in the country face similarly significant cutbacks. This is likely to result in increased Russian efforts to influence Moldovan politics, meaning the EU will need to redouble its efforts to defend Moldovan democracy.
In response to comments and questions from MEPs, the speaker from RUSI said she thought the required legislation was in place, but that the resources, or possibly the will, to enforce it were not necessarily available. She heavily stressed the need for cooperation between election oversight bodies and regulators of financial flows.
The second panel was entitled “Best practices and methods to combat financial interference as a threat to democracy”.
- The Deputy Executive Director for Governance at Europol said this topic falls within Europol’s work on serious and organised crime. Organised crime is increasingly involved in hybrid threats and in undermining our institutions, with the line between financially motivated and politically motivated crime increasingly blurred, as in cyberattacks. Europol’s role is to support Member State activities rather than carry out coercive measures itself, and it has several specialised centres for this work. In response to questions from MEPs, he said that Europol can only act at the request of Member States and pointed out that it has extensive expertise in tracking financial flows.
- The Deputy Director General of the European Anti-Fraud Office (OLAF), which fights the misappropriation of EU funds that undermines trust in government, described the office’s investigative mandate and its work on foreign interference. OLAF prefers prevention via anti-fraud policies: all new instruments go through a fraud-proofing process. The office also engages in awareness-raising training to help EU bodies avoid fraud, investigates fraud when it does happen, and issues administrative and other recommendations as outputs of investigations. All allegations are investigated in order to maintain trust in the EU institutions.
- A Professor of Financial Accounting at Cambridge University and Co-Director of the Centre for Financial Reporting and Accountability said that we are in the most acute information accountability crisis of our lifetimes.
- We are being drawn into a form of hybrid warfare with mafia states that enjoy the legal protections and privileges of statehood and against which domestic law enforcement has few weapons. Europe has experienced hybrid warfare most acutely through Russia’s disinformation campaigns, energy blackmail, and threats to undersea cables, but now increasingly also from the US. Social media exploits a divide-and-conquer strategy in target countries.
- He quoted Steve Bannon as saying “we viscerally stoke grievance”. He added that crypto and meme assets are a financial cesspool leveraged by many of the darkest actors on the planet.
- As for countermeasures, he urged enforcement of Articles 34 and 35 of the DSA, to identify whether recommender systems are amplifying illegal content, and of anti-money-laundering legislation, to target cryptocurrency. He also said we should not rely on mafia states’ enforcement mechanisms. In response to questions, he focussed on the dangers of mobilising aggrieved communities, and stressed the need for penalties to be dissuasive, including on a personal level, by naming the individuals behind the crimes.
European Parliament Democracy Shield Special Committee very active in June & July
Ahead of the expected publication this week of the draft report by the MEP responsible (the rapporteur), the Special Committee was very busy in June and July. Some of the highlights are briefly mentioned below.
On 24 June, there was an exchange of views with a representative of the Polish government to discuss the Polish Resilience Council to defend society from information manipulation campaigns and the Polish push for such an approach on an EU level. This was followed by an exchange of views with the European Endowment for Democracy on support for democracy in EU neighbourhood countries, looking in particular at Georgia, Ukraine and Moldova.
Also on 24 June, there was an exchange of views on North Korea’s financial warfare and cybercrime with a representative of the European External Action Service.
On 3 July, the committee held a hearing with the European Fact-Checking Standards Network (EFCSN). This was followed by an exchange of views with the Director of the Authority for European Political Parties and Foundations.
Reading & resources
Tools
- Defence and décryptage. The French Ministry of Defence has launched a new section of its website dedicated to disinformation, called “Fake news décryptage”.
- Measuring misinfo’s impact. NewsGuard launched the Reality Gap Index, which measures Americans’ propensity to believe false claims online. In June, nearly 50% of Americans believed the most viral false narratives.
Academic papers
- Everyone likes climate misinfo. A recent study published in Nature finds that social media users are more likely to engage with climate misinformation than with reliable climate content. Specifically, although misinformation was lower in volume in this study, it generated greater relative engagement.
- The rise and fall of crowd-sourced moderation. A new paper has been published on the “algorithmic resolution of crowd-sourced moderation on X in polarised settings across countries”. Its findings show that X’s Community Notes effectively captures each country’s main polarising dimension but fails by design to moderate the most biased content, posing potential risks to civic discourse and elections.
Readings
- Reporting Overload, renewed. Checkfirst and Reset Tech released an in-depth updated analysis of Operation Overload, a large-scale, multi-platform Foreign Information Manipulation and Interference (FIMI) campaign promoting narratives aligned with the Kremlin’s political agenda, leveraging AI-generated content and targeting global audiences.
- HOT takes. CAAD researched how hateful speech directed at Greta Thunberg is part of an ongoing strategy by the right and far right to discredit climate action by linking it to broader populist issues. This “issue-stacking” tactic combines climate disinfo with other divisive themes to erode support for climate policies.
- Widespread repercussions. A new comprehensive report by CEPA underlines how Russian and Chinese FIMI operations not only impact the USA’s soft power but also have repercussions on the West’s hard power, national security, and democratic integrity.
- Portugal: polls and platforms. EDMO has just published its comprehensive report on the 2025 Portuguese general elections, investigating information and disinformation on social media.
- AP-LIE. Following the expansion of access to TikTok’s Research API to Europe, AI Forensics tested the platform’s official API by querying, with different methodologies, a set of videos it had previously encountered. It found, for instance, that content from prominent accounts is inaccessible through the API, as are official TikTok company videos such as CEO statements with more than 30 million views.
- A bronze medal for disinfo. According to the newly published UN Global Risk Report, mis- and disinformation is the third most important (perceived) global risk. While climate inaction was rated a top threat across all continents, disinformation ranked higher in Europe, North America, Latin America and the Caribbean, and Sub-Saharan Africa than in other regions.
- Fight or flight with AI. RUSI’s report on Russia and the future of disinformation warfare highlights the range of actors discussing the use of AI both to automate and amplify content and as a narrative device. Different voices (e.g. the Wagner Group and pro-Russian hacktivists) portray AI as a powerful tool to overwhelm adversaries, while also fearing Western dominance of AI development and its possible impact on Russian public opinion.
This week’s recommended read
Raquel Miguel, our Senior Researcher, recommends reading Guerras Cognitivas. Cómo Estados, empresas, espías y terroristas usan tu mente como campo de batalla (English: Cognitive Wars. How States, Corporations, Spies, and Terrorists Use Your Mind as a Battlefield) by Daniel Iriarte.
In his first solo book, journalist and disinformation analyst Daniel Iriarte presents a compelling and well-documented account of how the human mind has become the new frontline in modern conflict. Cognitive Wars combines investigative journalism and analysis to illuminate a threat that, according to Iriarte, is already reshaping our societies.
The book offers a clarifying perspective by distinguishing overlapping concepts such as disinformation, propaganda, foreign interference, and FIMI, before delving into the idea of cognitive warfare. Iriarte explores how a wide range of actors – from states to extremist groups, but also companies – are exploiting and making a profit from this phenomenon to influence not just what people believe, but how they think.
Through detailed case studies, he illustrates how cognitive campaigns can manipulate elections, destabilise societies, and spill over into tangible real-world consequences.
The book is accessible to both experts and general readers, making a complex and urgent issue easier to grasp. Cognitive Wars is a must-read for anyone concerned with the integrity of information in today’s world. Its core message is a stark warning: if these strategies succeed, the very value of truth – and our capacity for thinking independently – may be at risk.
The latest from EU DisinfoLab
- The German disinfo landscape. What’s fuelling disinformation in Germany – and who’s fighting back? V2 of our factsheet, updated with the support of the Friedrich Naumann Foundation for Freedom, covers key narratives and emblematic cases, policies, and the community countering disinformation. Read it here.
- AI Disinfo Hub-date. Our AI Disinfo Hub remains your go-to resource for understanding AI’s impact on disinformation and strategies to combat it. In addition to the AI news in this newsletter, you can find all the latest updates here – here’s a glimpse of what’s new:
- Grok of hate. Elon Musk’s AI company xAI faced backlash after its chatbot Grok generated a series of antisemitic posts on social media. The newly updated version of the bot posted content that included antisemitic tropes and even expressed admiration for Adolf Hitler. xAI has apologised for Grok’s “horrific behaviour” and said that new instructions caused the AI chatbot to prioritise engagement, even if that meant reflecting “extremist views” from user posts on X.
- Deepfake diplomat assault. Malicious actors executed an AI-driven impersonation scheme targeting US Secretary of State Marco Rubio: using advanced voice-synthesis tools, they reached out to at least five senior officials, including three foreign ministers, a U.S. state governor, and a sitting member of Congress. The attackers exploited encrypted messaging platforms trusted by the administration. The State Department is investigating.
- AI note writers. X will begin using artificial intelligence to draft Community Notes in an effort to accelerate and expand its fact-checking system. Developers will be able to submit their own AI agents for review, and if their practice contributions are deemed useful, they may be authorised to produce public notes on the platform. X’s VP of Product, Keith Coleman, reassured users that humans will remain in charge of approving notes, with AI simply supporting the drafting process. There is growing criticism over this initiative, including warnings from former UK technology minister Damian Collins, who cautioned that letting AI bots write corrections could fuel the spread of “lies and conspiracy theories”.
- Trust your doctor, not a chatbot. A study demonstrates why we shouldn’t be taking health advice generated by artificial intelligence (AI). It reveals that four out of five major AI systems tested, developed by OpenAI, Google, Anthropic, Meta and X, produced disinformation in 100% of their responses to health queries when prompted via developer tools. The fabricated answers included false claims about vaccines, cancer cures, and infertility.
- Seeking Climate Clarity. We’ve also updated the Climate Clarity Hub with over 70 new items related to climate disinformation. This platform is a robust, evolving resource – your go-to destination for tracking, understanding, and countering climate disinformation. Explore the Audiovisual corner, which has been expanded with the latest webinars and podcasts on climate disinformation. A significant number of new publications have been added to the Investigation corner, with reports covering climate disinformation dynamics, policy frameworks, the intersection of AI and climate disinformation, a miscellaneous collection of articles, and a Who’s Who of the community working on these topics. The “Heated News” section includes:
- The “green” corporate hypocrisy. Major companies publicly claim climate leadership while quietly funding trade groups that obstruct climate policy, contributing to widespread greenwashing. This corporate double standard misleads the public and policymakers, fuelling a disinformation ecosystem that undermines climate action.
- The road to disinformation. A new study shows that climate disinformation on transport is not simply the result of isolated falsehoods or outright denial, but rather a complex and adaptive phenomenon deeply rooted in local political, economic, and cultural realities.
- Trump’s anti-science scientists. After shutting down the U.S. website on climate change, the Trump administration has brought on scientists known for questioning the role of human activity in global warming, following the dismissal of hundreds of federal climate experts.
- HEAT: Harmful Environmental Agendas & Tactics. This investigation offers an unprecedented cross-border analysis across Germany, France, and the Netherlands. It reveals how climate disinformation is strategically seeded and amplified across Europe, both by domestic and foreign actors. Conspiracy narratives, once confined to the margins, have now entered the mainstream. The HEAT report also exposes Coordinated Inauthentic Behaviour on major platforms and lays out urgent policy recommendations to counter this systemic threat.
- Final events rewatch. We also hosted the two veraAI final events in recent months. Check out the recordings here: both the 17 June recording and outcomes and the 24 June recording and outcomes.
Events & announcements
- 15-16 July: The European Media and Information Fund (EMIF) will hold its Summer Conference in Lisbon, Portugal. Save the date!
- 11-15 August: Bellingcat will host a five-day in-person training in Toronto that will focus on digital investigation techniques and provide extensive guided, hands-on practice.
- 14 August: Bellingcat will host an Intermediate Scraping Webinar, covering several different approaches to scraping and data access, including real case studies from Bellingcat research, and hands-on exercises.
- 4-6 September: This year’s edition of the SISP Conference #SISP2025 will take place at the University of Naples Federico II and will host conversations on digital sovereignty, EU cybersecurity policy, and the challenges posed by emerging technologies.
- 18-19 September: The 2025 edition of the International Democracy Day Brussels Conference will take place, under the title “A World Turned Upside Down: Democracy And Inclusion In An Age Of Insecurity”.
- 19 September: “Generative futures: Climate storytelling in the age of AI disinformation” is a workshop that will be held in Oxfordshire (UK), exploring how generative AI is reshaping climate storytelling, both as a force for inspiration and a driver of disinformation.
- 24-25 September: This year’s JRC DISINFO hybrid workshop is titled “Defending European Democracy”. Save the date and stay tuned for more info!
- 30 September: The 13th edition of Privacy Camp is set to take place, exploring the theme Resilience and Resistance in Times of Deregulation and Authoritarianism. Submit your session by 21 July!
- 15-16 October: Our annual conference #Disinfo2025 will take place in Ljubljana, Slovenia. The perfect time to get your ticket is now – we bet you won’t want to miss it.
- 25 or 26 October: Researchers and practitioners working on trustworthy AI are invited to submit papers to TRUST-AI, the European Workshop on Trustworthy AI, organised as part of the 28th European Conference on Artificial Intelligence (ECAI 2025) in Bologna, Italy.
- 29-30 October: The 2nd European Congress on Disinformation and Fact-Checking, organised by UC3M MediaLab, will take place under the subtitle “Beyond 2025: Emerging threats and solutions in the global information ecosystem” in Madrid, Spain, with the possibility to join remotely.
- 20-24 November: The 2025 Global Investigative Journalism Conference will be held in Kuala Lumpur, Malaysia.
- MediaEval 2025 Synthetic Images challenge. As part of the veraAI and AI-CODE projects, a new challenge invites researchers to tackle synthetic image detection and manipulation localisation using real-world data. More info, registration, and dataset links are available on the MediaEval 2025 website and GitHub.
- AI tools & FIMI. A short survey is open as part of the EU-funded RESONANT project, exploring the effectiveness and reliability of AI tools in detecting disinformation and Foreign Information Manipulation and Interference (FIMI). If you work in this area, we invite you to share your insights.
- Call for contributions on the DSA. The European Centre for Algorithmic Transparency is looking for researchers investigating systemic risks of Very Large Online Platforms and Search Engines – in particular risks related to the mental and physical health of minors – to present their findings to an audience of policymakers and fellow researchers. Submit a proposal by 20 August.
- Feedback on consultation paper. The Commission is collecting stakeholders’ feedback on a consultation paper with a view to informing its work on the guidelines under Article 18(9). The consultation paper builds on preliminary exchanges held with relevant stakeholders, namely representatives of VLOP providers, media service providers, civil society organisations, and fact-checking organisations. Submit your feedback by 23 July.
- Education on nuclear disinformation. Are you a secondary school teacher or student based in Belgium? Do you have a good idea for a possible piece of ‘fake news’ related to ionising radiation and/or radiation protection? Submit it with your registration before 30 September.
Jobs
- The Reuters Institute for the Study of Journalism at the University of Oxford is looking for a Director of Journalist Programmes. The deadline for applications is 1 September.
- Factous (Maldita.es) is looking for a Content Creator in Madrid.
- The University of Bern offers two fully funded PhD places in the project “Through a Glass, Darkly: How to Study Authoritarian Regimes”. The project investigates how foreign audiences learn about the inner workings of authoritarian regimes, with a focus on China and Russia.
Did you find a job thanks to the listing in this newsletter? We’d love to know – please drop us a message!
Have something to share – an event, job opening, publication? Send your suggestions via the “get in touch” form below, and we’ll consider them for the next edition of Disinfo Update.
