Dear Disinfo Update reader,
Hello, we are back, we missed you!
What a summer. The holidays may have been slow, but disinformation was not. From Washington–Brussels clashes over the DSA to Moldova’s pre-election stress tests and new investigations into synthetic propaganda, July and August kept the field anything but quiet.
In this edition, we’ve compiled the highlights you might have missed: key investigations, policy developments, and community resources, all setting the stage for a busy autumn, with our webinars resuming and #Disinfo2025 around the corner – check out the programme, including a new Day 0 workshop! Have you registered yet? Places are filling up quickly, so make sure to secure your spot while there are still a few left. We cannot wait to see you in Ljubljana!
👋 New faces at EU DisinfoLab! Isabella Giraldo joins as Comms Intern: a grad student in genocide prevention, here to put her comms and digital smarts to work. Katrina Luize Asmane joins as Policy Intern: a law and diplomacy grad excited to explore disinfo policy. Welcome!
Enjoy the read!
Our Webinars
UPCOMING – REGISTER NOW!
- 4 September: Synthetic Propaganda – Generative AI and the Future of Political Communication | From conflict parties sharing AI war spam, to presidents posting themselves as pope, wrestler or superstar, to right-wing extremists synthetically remixing Third Reich aesthetics, generative AI tools are changing political communication. Marcus Bösch dissects current case studies and discusses ethics and effects.
- 11 September: The EU Code of Practice on disinformation: evaluating VLOPSE compliance and effectiveness | How well are major platforms putting the EU Code of Practice on Disinformation into action? A new EDMO report reviews Meta, Google, Microsoft, and TikTok, looking at their transparency reporting and the impact of measures taken in 2024. In this webinar, co-author Trisha Meyer will share the findings and reflect on what they mean for the effectiveness of the Code.
- 18 September: Operation Overload: smarter, bolder, powered by AI | The Russian propaganda operation targeting media organisations and fact-checkers is stronger than ever. Operation Overload, first documented in June 2024, is conquering new platforms and harnessing AI tools to target fact-checkers, media, and international audiences with Kremlin propaganda. Aleksandra Atanasova and Guillaume Kuster will shed light on how the campaign has evolved, what makes it increasingly sophisticated, and how the community can respond.
PAST – WATCH THE RECORDINGS!
- Sanctions, so what? How Meta may have channeled money to EU-sanctioned Russian propaganda outlets | Drawing on five and a half years of archived data, WHAT TO FIX uncovered that Meta maintained, and in some cases initiated, revenue-sharing partnerships with RT, Sputnik, and other EU-sanctioned entities, even months after sanctions were introduced. In this webinar, Victoire Rio presents the investigation’s findings and what they mean for platform accountability and regulation.
- Influence for sale: mapping the online manipulation market | Malicious bots make up a quarter of internet traffic, fuelling scams, disinformation, and online manipulation. In this webinar, Jon Roozenbeek explores how this hidden market operates, what drives the cost of manipulation, and how these insights can inform future research and regulation. He also introduces the Cambridge Online Trust and Safety Index, a free tool tracking the price and availability of manipulation services across platforms and countries. The recording will be available shortly.
Discover and watch our full webinar series recordings here.
Disinfo news & updates
World Wide Watch: Tech, votes, and regulation
- A US-EU saga that escalated quickly. August was a month of escalating clashes between Washington and Brussels over Europe’s Digital Services Act (DSA), which the Trump administration falsely claims censors Americans and harms US tech firms. On 7 August, Secretary of State Marco Rubio ordered US diplomats to lobby European governments to repeal or amend the law. Later in the month, Federal Trade Commission Chairman Andrew Ferguson warned US tech companies they could face domestic penalties if they comply with EU or UK “censorship” rules. And by 26 August, reports surfaced that the administration is considering unprecedented sanctions, including visa bans, on European officials implementing the DSA. The conflict has also spilled into US-EU trade talks, with Washington seeking concessions on the DSA while the EU maintains the law as a red line. Meanwhile, France is pushing for a stronger EU response.
- Digital samba. Brazilian President Lula has finalised a proposal to regulate social media platforms, which will soon be sent to Congress. The move aims to set clearer rules for online content, platform responsibilities, and moderation. The exact details of the proposal are not yet public.
- Target: Moldova. The country faces a high-stakes digital battleground ahead of September elections. Russian-linked hackers “Curly COMrades” target government and energy networks, while the Overload/Matryoshka propaganda campaign spreads false narratives online to discredit the pro-EU government (similar tactics have been observed from Russian-linked groups like Storm-1516 in other European countries). The EU is stepping in with a new fact-checking hub, closer coordination with authorities, and pre-election stress tests with platforms like Meta, Google, and TikTok to detect and counter disinformation in Moldova.
- VoteVanHolland shield. Dutch authorities are gearing up for the 29 October elections by meeting with major social media platforms under the EU’s DSA. The 15 September roundtable will bring together regulators, NGOs, and platforms like TikTok, Facebook, and X to review measures for preventing disinformation, hate speech, and foreign interference ahead of the vote.
- FIMI goes orbital. The EU is boosting satellite defences after a GPS jamming incident disrupted Commission President Ursula von der Leyen’s flight to Bulgaria, suspected to be caused by Russia. More low Earth orbit satellites will be deployed and detection capabilities will be improved to strengthen resilience against such interference.
- Foreign influence gets a pass. The US dismantled its Foreign Malign Influence Center (FMIC) in mid-August, the last federal body dedicated to tracking state-sponsored disinformation and interference. As Just Security reports, FMIC’s functions have been folded into other intelligence units, a move critics warn weakens national security just as Russia, China, and Iran ramp up influence operations.
- “It’s all fine.” According to Reuters, the European Commission has stalled its investigation into Elon Musk’s social media platform X for alleged breaches of digital transparency rules under the EU’s Digital Services Act, delaying a decision until after US-EU trade talks conclude.
- +18 to surf. Australia is set to expand age verification to search engines, aiming to shield minors from harmful content, but experts warn of privacy risks, digital exclusion, and overreach.
- Critical kick. French livestreamer Raphaël Graven died during a 12-day stream on the platform Kick, drawing scrutiny to both the site and French regulators. Authorities faced criticism for failing to act, a situation complicated by Kick’s lack of proper EU representation under the DSA.
- Erase to survive. A study of content moderators in India finds that harsh working conditions, tight targets, job insecurity, and minimal support lead them to rely on automated tools and simplified rules, often removing content without considering context.
World Platforms’ latest moves
- Revolt against bots. TikTok moderators in Germany protest plans to replace humans with AI, warning that automated content checks could miss hate speech and misinformation.
- No way back. Reddit is blocking the Wayback Machine from archiving most content to stop AI companies from scraping its data and claiming to protect user privacy.
- Unlinked protections. LinkedIn joins Meta and YouTube in dropping anti-trans protections, weakening safeguards against harassment just as attacks on LGBTQ rights intensify.
- Check-mate? Google stops funding fact-checking in Australia, letting its AAP FactCheck deal expire amid a global pushback on fact-checking. Who will be next?
- Breaking the code. Meta refuses to sign the EU’s AI code of practice, calling it an “overreach” that could stunt innovation, citing restrictions on AI model development.
- X-it when data is requested. Elon Musk’s X refuses to cooperate with a French probe into alleged algorithm bias and data fraud, calling the investigation a threat to free speech.
- Ad-free elections. Meta will no longer authorise political and issue ads in the EU from October, citing complex new transparency rules, though users can still post about politics.
Heated investigations
- Falsos Amigos. Graphika’s latest report uncovers a China-linked network of websites and social media accounts using AI to translate and disguise China Global Television Network (CGTN) articles in multiple languages. The goal: spread pro-China, anti-West content under the guise of independent outlets.
- Avatar reporters. An investigation by Indicator reveals dozens of TikTok accounts are using AI-generated avatars of real journalists to spread fake news, like false child-abduction laws. These deepfakes exploit viewers’ trust, gaining millions of views and widespread engagement.
- Ban fiction. Three years after the EU sanctioned Kremlin-affiliated media outlets, ISD finds that internet service providers and social media platforms largely fail to block access to banned Russian state media. DNS-based blocking is inconsistent and easily circumvented, allowing high traffic and continued dissemination on platforms like X.
- Blamegame. This ISD report examines how the “no innocents” narrative online falsely blames all Palestinians for Hamas’ actions, spreading hate and justifying violence, with inconsistent moderation on social media platforms.
- Weathered. A Center for Countering Digital Hate (CCDH) investigation reveals that 300 viral posts from Meta, X, and YouTube, amassing over 221 million views, spread false claims about extreme weather events, often outpacing lifesaving emergency alerts and putting public safety at risk.
Russian wrongdoing
- App trap. From September, all new phones and tablets in Russia must come with the state-backed MAX app, integrated with government services. This continues Russia’s push to reduce dependence on foreign apps like WhatsApp and Telegram. Critics warn MAX could be used to track users, while authorities claim it’s safer than foreign alternatives. For a detailed report on Russia’s wider internet censorship and surveillance, read Human Rights Watch’s 2025 report.
- Digital occupation. OpenMinds and DFRLab report that Russia is using over 3,600 AI-driven Telegram bots in occupied Ukrainian territories to spread pro-Kremlin narratives and discredit Ukraine.
- Pravda push. Maldita.es reports that the Russian disinformation network Pravda spread nearly 300 false stories in five days about the Torre Pacheco riots, using Telegram channels to amplify hoaxes and sow social divisions in Spain.
- NoName, now known. European and US authorities disrupted the pro-Russian hacker group NoName057(16), known for launching DDoS attacks against Ukraine and allies, targeting hundreds of websites while recruiting volunteers via pro-Kremlin channels.
- Ticket to scam. Over 1,000 Facebook pages in 60 countries impersonated local transport services to steal credit card information, according to Maldita.es. The scam was run from Vietnamese and Russian servers, pointing to an international, coordinated phishing network.
AI Disinfo updates
- Summer update on AI & FIMI: Various countries continue to exploit AI for FIMI purposes. China is using AI firms to shape public opinion, with campaigns in Hong Kong and Taiwan and data collection on US Congress members. North Korea is also deploying AI in its long-running IT worker infiltration scheme, and an investigation shows pro-Russia networks “grooming” large language models by flooding the web with disinformation. Meanwhile, a fabricated video of exiled Hong Kong activists points to a shift toward more targeted psychological warfare.
- Summer update on AI Regulation:
- In the EU: The European Commission has released a voluntary code of practice to guide companies in complying with the EU’s new AI Act. Meta has refused to sign, calling it an “overreach” that will “stunt” innovation, while OpenAI has signed. Google will sign it too, although it warned that some provisions could slow approvals, expose trade secrets, and hurt Europe’s competitiveness.
- In the US: President Trump has signed an executive order “preventing woke AI in the federal government,” in an attempt to shape the ideological behaviour of AI. The directive could have sweeping effects, since nearly all major tech firms want government adoption of their tools. On a similar note, the Missouri Attorney General is accusing Google, Microsoft, Meta, and OpenAI chatbots of giving biased and “misleading” answers that undermine Donald Trump’s record.
- Platforms turning to AI. How platforms are using AI is becoming increasingly controversial. Content moderation is at the forefront: AI is replacing human reviewers faster than it learns the job, and TikTok is even cutting 300 UK moderator roles as it shifts to automation and regional hubs. At the same time, platforms are testing other questionable uses: from Meta’s chatbots that initiate direct messages and remember user data, to YouTube quietly editing creators’ videos with AI without their consent.
- The era of AI propaganda has arrived. As generative AI proliferates, the biggest threat may not be an avalanche of lies or hate online, but the quiet, insidious manipulation of everyday conversations, propaganda designed to blend seamlessly into digital life. A report on the Iran-Israel war warns that AI deepfakes and synthetic videos turned the conflict into a “war over reality.” Meanwhile, an academic paper introduces the concept of “slopaganda” (more on this in our Recommended read section), a trend already visible on platforms: a study found TikTok and Instagram flooded with poorly labeled synthetic content. Want to stay on top of the latest in AI and disinformation?
Our AI Disinfo Hub has just been updated, take a look!
Reading & resources
- Resent-Men. Haily Tran’s piece offers sharp insights on how male grievance is weaponised into extremist narratives, blending personal struggles with radical ideologies.
- Psychological warfare. Israel and Iran used social media and AI-driven disinformation during a 12-day conflict to manipulate public perception and influence audiences.
- #MahsaAmini. The article examines how Iran suppressed the hashtag #MahsaAmini on Persian Twitter during the 2022–2023 protests, when it became a symbol of the “Women, Life, Freedom” movement.
- Taiwan’s FYP. Experts warn China could use TikTok to subtly influence Taiwanese youth, spreading disinformation and shaping opinions in a pre-war or gray zone scenario.
- Election interference, Poland edition. FIMI Defenders, a FIMI-ISAC project, evaluates in this report the threats posed by foreign information manipulation and interference (FIMI) to the 2025 Polish presidential elections.
- Multiliteracy. Finland leads the world in media literacy education, teaching even preschoolers to spot misinformation, lies, and gossip, embedding critical digital skills early on.
- Disinfo boom. ATHENA project’s experts warn of a disinformation explosion fueled by emerging tech in its latest publication.
- Flood the zone with truth. This article stresses the need for robust climate journalism to counter misinformation and hold polluters accountable.
- Green gaps. This analysis reviews how major online platforms handle climate change disinformation, highlighting regulatory gaps under the EU Digital Services Act (DSA).
- Fossil free COP. The Climate Reality Project is asking supporters to sign an open letter to COP 30 delegations urging them to keep all fossil fuel lobbyists out of the negotiations. Also, PR firm Edelman, which works for Shell, won COP30 media duties, raising conflict-of-interest concerns.
- In research we trust. This report warns that independent technology research is under threat, but stresses that collective action can safeguard public oversight of technology’s societal impact.
- Safety sidelined? Casey Newton examines how trust and safety (T&S) teams at major tech companies, once defenders against misinformation, hate, and abuse, now face layoffs, political pressure, and policy rollbacks, while their leaders remain largely silent. He also notes that EU regulation has turned T&S into a compliance-driven function. The article sparked strong reactions and prompted Newton to write a follow-up, but it remains an important reflection on the state of online safety.
- DISARM, reloaded. This article explains how the DISARM Framework is evolving to v2, helping analysts document and counter online disinformation and influence operations.
- Course to tackle climate disinfo. UNESCO has launched a free online course to tackle climate disinformation through media and information literacy.
- DSA tool. The European Commission has launched the DSA data access portal. It was established by the delegated act on data sharing under the DSA to facilitate access to data for vetted researchers.
- Meet Oppi: CheckFirst has launched Oppi, a training platform to master OSINT analysis against information manipulation.
- Africa’s new battlefield. Alessandro Arduino’s book Money for Mayhem examines how mercenaries, drones, and AI-driven disinformation are reshaping warfare in Africa.
- Behind the badge. The podcast “Behind the badge: who really wants to be a trusted flagger?” explores what it means to hold the EU’s “trusted flagger” status, the challenges that come with it, and why not everyone is eager to take on the role.
This week’s recommended read
This week’s recommended read, proposed by our Project Officer Inès Gentil, is the article by Michał Klincewicz, Mark Alfano and Amir Ebrahimi Fard: “Slopaganda: The Interaction Between Propaganda and Generative AI.” This paper introduces a new concept – slopaganda – to describe the epistemic consequences of flooding the digital information space with low-quality and AI-generated content meant to confuse, distort understanding and weaken critical thinking.
From a cognitive science and AI ethics perspective, the authors argue that slopaganda is not classic propaganda. It doesn’t aim to convince, but rather to clog (or to saturate) the discourse with incoherence and ambiguity, making it harder for the public to reason or decide. One blatant example the authors present is that News Corp Australia is reportedly producing over 3,000 AI-generated “local” news stories every week.
The paper analyses the serious cognitive and democratic risks posed by this information pollution, from decision fatigue to false equivalence and increasing manipulation vulnerabilities. The authors also propose several ways forward: labelling AI-generated content, strengthening regulatory frameworks, improving media literacy, and building structural monitoring systems (like algorithm audits), in order to mitigate slopaganda’s impact on democratic decision-making.
What is particularly interesting in this piece is how it brings the notion of trust to the forefront – not just how it is manipulated or eroded, but what it takes to rebuild it in an age of “AI-saturated noise”.
Events & announcements
- 4-6 September: This year’s edition of the SISP Conference #SISP2025 will take place at the University of Naples Federico II and will host conversations on digital sovereignty, EU cybersecurity policy, and the challenges posed by emerging technologies.
- 15 September: EDMO BELUX Lunch Lecture series continues with the session “Seeing is Believing: Visual Misinformation at Election Time”.
- 18-19 September: The 2025 edition of the International Democracy Day Brussels Conference will take place, under the title “A World Turned Upside Down: Democracy And Inclusion In An Age Of Insecurity”.
- 19 September: “Generative futures: Climate storytelling in the age of AI disinformation” is a workshop that will be held in Oxfordshire (UK), exploring how generative AI is reshaping climate storytelling, both as a force for inspiration and a driver of disinformation.
- 24 September: Media & Learning’s next Wednesday Webinar “Persuasion by design: understanding Influence(rs)” will explore how audiovisual and social media shape our values, norms, and beliefs.
- 9 October: Climate at a Crossroads 2025. Tackling Disinformation in Climate and Economic Policy. Experts, policymakers, and civil society gather in Ottawa, Canada, and online to address climate disinformation and its impact on governance.
- 15-16 October: Our annual conference #Disinfo2025 will take place in Ljubljana, Slovenia. The perfect time to get your ticket is now – we bet you won’t want to miss it.
- 17 October: MLA4MedLit online conference. Stay tuned; the event description and list of speakers will be announced shortly.
- 24-31 October: Global Media and Information Literacy Week 2025. Minds Over AI – MIL in Digital Spaces. Stakeholders around the world will organise events, and UNESCO will co-host the flagship conference with a Member State in Cartagena de Indias, Colombia.
- 25 or 26 October: Researchers and practitioners on trustworthy AI are invited to submit papers to TRUST-AI, the European Workshop on Trustworthy AI organised as part of the 28th European Conference on Artificial Intelligence ECAI 2025, Bologna, Italy.
- 29-30 October: The 2nd European Congress on Disinformation and Fact-Checking, organised by UC3M MediaLab, will take place under the subtitle “Beyond 2025: Emerging threats and solutions in the global information ecosystem” in Madrid, Spain, with the possibility to join remotely.
- 19 November: Media & Learning Webinar: Understanding and responding to Health Disinformation. The session will offer practical tools for educators, librarians, NGOs, and civil society to counter pseudoscience and conspiracy narratives and support informed health communication.
- 20-24 November: The 2025 Global Investigative Journalism Conference will be held in Kuala Lumpur, Malaysia.
- 17 December: Media & Learning Wednesday Webinar “Lines of speech: hate, harm and the laws across borders”.
- MediaEval 2025 Synthetic Images challenge. As part of the veraAI and AI-CODE projects, a new challenge invites researchers to tackle synthetic image detection and manipulation localisation using real-world data. More info, registration, and dataset links are available on the MediaEval 2025 website and GitHub.
- AI tools & FIMI. A short survey is open as part of the EU-funded RESONANT project, exploring the effectiveness and reliability of AI tools in detecting disinformation and Foreign Information Manipulation and Interference (FIMI). If you work in this area, we invite you to share your insights.
- Education on nuclear disinformation. Are you a secondary school teacher or student based in Belgium? Do you have your own good idea of possible “fake news” related to ionising radiation or radiation protection? Submit it before 30 September.
Spotted: EU DisinfoLab
- Get a drink with us on Wednesday! Our informal community meetups in Brussels have already brought together researchers, policy folks, and civil society over the past months – and we’re picking them up again this autumn. They’ll be back on a happily flexible schedule, with the next one on 3 September, and everyone’s welcome. Interested in joining? Just reply to this newsletter to let us know.
- Our Executive Director, Alexandre Alaphilippe, will be speaking at two key events this September:
- 9 September: He will give an update on disinformation threats at “Face à la guerre informationnelle. Le Quai d’Orsay en première ligne”.
- 24–25 September: He will join a session on AI and disinformation at the JRC DISINFO hybrid workshop “Defending European Democracy”.
Jobs
- Heinrich-Böll-Stiftung European Union is looking for journalists reporting on climate disinformation for their Climate disinformation media fellowship 2025. Apply by 9 September.
- The Joint Research Centre of the European Commission is looking for a Knowledge Management Officer – Digital Collaboration.
- Free Press Unlimited is looking for a Programme Lead Safety of Journalists.
- Citizen Lab is hiring a Systems and Security Administrator.
- Civitates is looking for an Impact and Learning Manager.
Did you find a job thanks to the listing in this newsletter? We’d love to know – please drop us a message!
Have something to share – an event, job opening, publication? Send your suggestions via the “get in touch” form below, and we’ll consider them for the next edition of Disinfo Update.
