Dear Disinfo update reader,
As 2025 comes to a close and we welcome you to our final Disinfo Update of the year, the global information environment is shifting faster than regulators, platforms, and democracies can keep pace. This month’s developments illustrate a landscape marked by fragmentation: states are increasingly relying on overt influence channels, platforms continue to struggle with basic integrity functions, and political actors worldwide are reframing counter-disinformation efforts as censorship. At the same time, algorithmic incentives, lax oversight, and shrinking transparency mechanisms are deepening existing vulnerabilities in areas such as climate narratives and geopolitical flashpoints.
Our latest roundup highlights how these dynamics intersect: Russian and Chinese state actors refining their playbooks, global platforms rolling out features that inadvertently supercharge misinformation, and Western democracies confronting growing ideological battles over truth and media legitimacy. As we move into 2026, one thing is clear: safeguarding the information space requires long-term structural resilience, meaningful accountability, and renewed collaboration across sectors. We hope this edition helps you track the signals and prepare for the systemic challenges ahead.
As the year draws to a close, we would like to thank you for reading, sharing, and engaging with Disinfo Update throughout 2025. We wish you a restful end-of-year break and a strong start to the new year. We’ll be taking a short pause before the next edition, and we look forward to reconnecting with you soon in 2026.
Get reading and enjoy!
Our Webinars
UPCOMING – REGISTER NOW!
- 15 January: Pop fascism: memes, music, and the digital revival of historical extremism | Franco with glitter sunglasses. Mussolini dancing in a school corridor. Hitler celebrating like Cristiano Ronaldo. What looks like “just jokes” is, in reality, a pipeline of pop fascism. In this webinar, we unpack how fascist symbols and dictators are smuggled into pop culture through memes, music, football fandom, AI-generated videos, and nostalgic aesthetics to make extremism feel normal, funny… even desirable. Drawing on the cross-border investigation by Maldita.es (Spain) and Facta (Italy), Coral García and Francesca Capoccia will walk us through the methodology and findings, showing how this content evolves through three phases: normalisation, acceptance, and idolisation. We will explore how disinformation and conspiracy theories are embedded into viral trends, and how these narratives differ across Italy and Spain on platforms such as TikTok, X, Telegram, and YouTube. Join us to see how history is being rewritten in real time on social media and what to look for before pop fascism becomes part of everyday common sense.
- 29 January: Climate deception ‘on air’ | Mainstream TV and radio remain some of the most influential sources of information, yet they increasingly host misleading claims about climate science and climate policy. In this webinar, Louna Wemaere (QuotaClimat) presents new comparative findings from France and Brazil, revealing how broadcasters shape public perception by spreading misinformation about renewables, electric vehicles, energy prices, deforestation, and climate measures. Join us to understand how climate falsehoods reach millions, why they matter for democracy, and what can be done to counter them.
PAST – WATCH THE RECORDINGS!
- Starting over: A new strategy for US information integrity | The United States government has chosen to unilaterally disarm in the global contest for truth and integrity, even as authoritarians and technology companies find increasing alignment. The abrupt shutdown of US information-integrity tools poses significant risks, leaving Americans and people worldwide increasingly exposed to an information ecosystem designed to manipulate the public and erode social trust. Speaker Adam Fivenson explores what “starting over” should look like, the hard lessons that must be confronted, and the strategic choices that will define the future of democratic resilience.
- Are platforms curbing disinformation? Scientific, cross-platform evidence from six VLOPs | How much mis- and disinformation do people actually see on major online platforms? Are repeat misinformers being structurally rewarded with extra reach and monetisation? There has been no scientific, cross-platform way to answer these questions… until now. This webinar introduces the first results from SIMODS (Structural Indicators to Monitor Online Disinformation Scientifically), the first project to scientifically measure the prevalence of mis- and disinformation across platforms and languages. Emmanuel Vincent from Science Feedback walks us through the methodology behind this groundbreaking effort and reveals several striking findings. He also addresses a key concern for our community, namely the significant data access barriers researchers still face, and demonstrates how such rigorous, cross-platform measurement is crucial to building the Structural Indicators envisioned in the European Code of Practice on Disinformation.
- Command and control: How ANO Dialog surveils the Russian info space for the Kremlin | Behind Russia’s polished state messaging lies a vast monitoring apparatus: ANO Dialog. As the nerve centre of the Kremlin’s information control, it quietly manages propaganda and online manipulation across thousands of social media channels. In this session, Serge Poliakoff (University of Amsterdam) examines how ANO Dialog operates, why it represents a new model of state-controlled disinformation, and what its reach means for the information landscape today.
Disinfo news & updates
🛰️FIMI operations & geopolitical narratives
- Kremlin messaging gains ground in Latin America through local media amplification. Reporting by The New York Times shows that Russian state-owned outlets such as RT and Sputnik have significantly expanded their Spanish-language operations across Latin America, with a particular focus on Mexico. The effort is aimed at fuelling anti-US sentiment and reshaping regional perceptions of the war in Ukraine. US officials warn that RT’s rapid growth in Mexico, combined with limited resources to counter foreign influence, has allowed Kremlin-backed narratives to gain traction in the region. A parallel investigation by the Alliance for Securing Democracy and Fact Chequeado reveals how this strategy materialises at the local level. The study finds that the Club de Periodistas de México, through its platform Voces del Periodista Diario, has increasingly republished content from RT, Sputnik, and Cuba’s state-run Prensa Latina, with nearly three-quarters of articles since April 2025 originating from these outlets. This practice of “information laundering” allows foreign state narratives to appear as locally produced journalism, granting them legitimacy and reach within Mexico’s media ecosystem.
- Russia deploys disinformation to deter foreign fighters from joining Ukraine. A recent investigative report by Maldita reveals that Russia is actively using disinformation campaigns to discourage foreign volunteers, particularly from Latin America, from enlisting in Ukraine’s military. Targeting Colombian recruits in particular, the Kremlin spreads fabricated and AI-generated narratives claiming catastrophic Russian dominance, non-payment of death benefits by Ukraine, and even false stories of organ harvesting from deceased soldiers. By pushing emotionally charged disinformation through sensational media, these campaigns aim to erode trust, induce fear, and ultimately weaken international support for Ukraine’s defense.
- The Kremlin expands “turn to the South” with launch of RT India. Russia has unveiled RT India, a new state-funded broadcaster that signals a significant shift in the Kremlin’s global information strategy. After losing access to Western markets following the Ukraine invasion, Moscow is reallocating resources toward non-Western audiences, building on earlier successes such as RT Arabic and RT en Español. RT India reinforces a narrative of “the West vs. the rest,” drawing on Soviet-era anti-colonial rhetoric to frame Russian foreign policy as anti-imperialist and supportive of the multipolar world. To boost trust and mask its geopolitical aims, the outlet features local celebrities and familiar public figures, helping Kremlin-aligned messaging, especially on Ukraine, blend seamlessly into local media ecosystems. The launch underscores Russia’s expanding influence efforts across Latin America, Africa, the Middle East, and now South Asia, where tailored disinformation can gain traction under the guise of culturally resonant, localised content.
- Pro-Russia group arrested amid espionage concerns. French authorities have detained four individuals, including the leaders of the pro-Russia group SOS Donbass, on suspicion of conducting intelligence-gathering operations for a foreign power. The group previously ran a poster campaign promoting pro-Kremlin narratives, raising concerns about disinformation-driven influence operations. The arrests follow President Emmanuel Macron’s warnings that Russia is waging “hybrid wars” across Europe using proxies to destabilise countries supporting Ukraine. Although defense counsel argues that the accusations simply target political views, the investigation highlights growing fears that pro-Russian activism in France may extend beyond expression into covert intelligence and influence activities.
- Russia blocks Western platforms as digital control tightens. An article by The Record states that Russia has intensified its campaign against Western tech by imposing new restrictions on WhatsApp, accusing the encrypted platform of enabling terrorism and refusing to comply with domestic regulations. The state communications watchdog moved quickly, triggering service disruptions, after citing an alleged leak of sensitive diplomatic calls as evidence of WhatsApp’s “security risks.” However, this crackdown has extended beyond messaging apps. According to another recent article by The Record, Russia has expanded restrictions on Western technology platforms, including WhatsApp, Snapchat, FaceTime, and Roblox, citing national security and “public morality” concerns. The ban on Roblox has sparked notable backlash among its roughly 18 million Russian users, primarily young people who have taken to social media to protest losing access to online communities and purchased content. Authorities justified the move by alleging the presence of “extremist materials,” framing the restrictions as child protection. Critics see the crackdown as part of a broader strategy to control information flows, limit foreign platforms, and steer users toward state-approved alternatives.
- China moves to overt state-led disinformation against Japan. In 2025, China shifted its influence strategy toward openly using official government channels, such as diplomatic social media and state media, to spread disinformation targeting Japan. Rather than relying on covert networks, these campaigns leverage the credibility of formal state messaging to undermine Japan’s role as a defense partner in the Indo-Pacific. Narratives often draw on the 80th anniversary of WWII, framing Japan’s current security cooperation as a form of regional remilitarisation. This tactic aims to erode trust, isolate Japan from its allies, and normalise propaganda within diplomatic communications, marking a notable shift in China’s disinformation playbook.
📱Platform integrity failures & the spread of mis/disinformation
- EU prioritises privacy over platform monitoring in child safety law. EU member states have agreed on new online child protection legislation that does not require platforms like Google or Meta to actively detect or remove child sexual abuse material. Instead, companies must assess the risk of abuse on their platforms and apply preventative measures. The decision, seen as a win for tech companies and anti-surveillance advocates, reflects broader digital policy trends in which concerns over privacy and excessive monitoring also shape how the EU regulates disinformation and influence operations online. Enforcement will be delegated to individual member states, with support from a forthcoming EU Centre on Child Sexual Abuse.
- Location feature on X weaponised to spread misinformation. Elon Musk’s social media platform X recently introduced a feature that publicly displays users’ countries of origin, purportedly to improve authenticity and safeguard information integrity. Instead, an ABC News article reports that the tool has been quickly discredited due to widespread inaccuracies, often caused by VPN use, prompting experts to call the rollout a significant breach of user trust. The incorrect tags have already been misused to delegitimise journalists and amplify misinformation, particularly in conflict reporting. Critics argue that the incident underscores ongoing concerns that X prioritises appearance over actual platform safety, further eroding confidence in its commitment to countering disinformation.
- Tech firms bypass journalism to control narratives. An article by The Guardian reports that major tech companies are increasingly building their own media ecosystems, such as in-house publications and curated podcasts, to shape public perception without scrutiny. Firms like Palantir and Andreessen Horowitz, along with figures like Elon Musk, selectively engage with friendly outlets or create their own channels to promote unchallenged narratives. This strategy allows them to sidestep independent journalism at a time of growing public distrust and mirrors broader trends where powerful actors avoid critical media. By controlling the platforms that carry their message, these companies effectively engage in narrative manipulation, blurring the line between corporate communication and disinformation.
- EU review finds platforms’ anti-disinformation efforts mostly superficial. A new analysis by Global Policy Journal of Very Large Online Platforms’ implementation of the EU’s voluntary Code of Practice on Disinformation (CoPD) finds that compliance remains largely fragmented and performative. Despite the framework’s goal of curbing false content, platforms continue to fall short on transparency, offering researchers inconsistent data access and providing only vague, poorly documented evidence of their media literacy and fact-checking work. These gaps reveal a widening divide between the EU’s regulatory ambitions and how platforms actually operate, raising concerns ahead of upcoming legal mandates under the Digital Services Act (DSA). The authors conclude that only sustained, verifiable, and inclusive oversight will ensure platforms meaningfully address Europe’s growing disinformation challenges.
- Spoofed FBI websites fuel scams. A new security analysis details a severe escalation in cyberattacks where threat actors are spoofing the FBI’s IC3 website and extending their deception to social media. These sophisticated scams involve impersonating the trusted IC3 brand on Facebook, often using AI-generated personas and manipulated media to promote fraudulent recovery schemes that steal sensitive information. The objective of these malicious campaigns is to trick the public into providing personal data or engaging in private communication channels by masquerading as official reporting tools.
- EU fines X over deceptive blue check marks. The European Union has issued its first major penalty under the Digital Services Act (DSA), a roughly $140 million (approximately €120 million) fine against X (formerly Twitter) for violations including “deceptive design.” Regulators found that X’s blue checkmark system misleads users by making paid accounts appear “verified,” blurring the line between authentic verification and purchased status. The ruling also cites X’s failures in advertising transparency and researcher data access, key obligations under the DSA aimed at strengthening information integrity and user protection. The EU’s landmark decision signals its readiness to hold platforms accountable and restore trust in the online environment. X now faces a deadline to correct these practices or risk further sanctions.
🌱Climate disinformation & environmental narrative manipulation
- Digital distortion and climate disinformation in Europe and beyond. ClientEarth has released Digital Distortion, revealing how major social media platforms are actively amplifying climate disinformation across Europe and beyond. The analysis shows that engagement-driven algorithms and advertising incentives consistently elevate misleading narratives often pushed by fossil fuel interests, political actors, and covert networks, over verified scientific information. This dynamic not only endangers citizens during extreme weather events but also erodes public trust and political will for climate action. ClientEarth concludes that platforms’ failures constitute a systemic risk under the EU Digital Services Act, underscoring the urgent need for enforcement and structural reform.
- EU-funded food campaign accused of misleading environmental messaging. A DeSmog investigation into the EU’s €1.5 billion “Enjoy, it’s from Europe!” food promotion programme reveals that public funds are overwhelmingly used to market meat and dairy products with misleading sustainability claims. Some campaigns reportedly dismiss environmental concerns entirely; one even labeled pig farming emissions as “absolutely fake.” Members of the European Parliament are now calling for an urgent review, warning that the scheme not only undermines the EU’s own climate goals but also actively misleads consumers, echoing tactics often seen in disinformation campaigns. Critics argue the initiative serves as a financial boost to industrial agriculture rather than promoting sustainable or healthy food systems.
- Tech platforms still ill-equipped to counter climate disinformation. A new perspective piece by Tech Policy Press highlights how major tech platforms continue to fail at addressing climate disinformation, despite its severe long-term risks. The analysis points to structural blind spots within trust and safety teams, which prioritise immediate harms (like violent content) over the slower but deeply damaging effects of climate denial. Meanwhile, well-funded fossil fuel interests exploit engagement-driven algorithms, allowing misleading content to spread far more easily than messages from under-resourced climate communicators. Experts argue that fixing these systemic weaknesses will require coordinated efforts, including stronger global regulation, advertiser pressure, and new models that rebuild public trust in accurate climate information.
Want to stay on top of the latest in climate change and disinformation? Our Climate Clarity Hub has just been updated. Take a look!
⚖️Democratic governance, free speech, and the politics of disinformation
- Global survey finds disinformation among top public security fears. A 2025 global survey across 30 countries by IPSOS shows widespread anxiety about international security, with disinformation and hacking ranked as the world’s most urgent threats. While most respondents still support global cooperation and strong defense, many believe their governments should shift focus inward due to economic pressures. The survey also highlights shifting geopolitical perceptions: Canada emerges as the most positively viewed global actor, while the United States experiences a notable reputational decline, falling behind China in perceived gains in global influence.
- Macron’s fight against fake news. President Emmanuel Macron’s latest initiative to address online disinformation has ignited fierce criticism from right-wing media and political rivals, exposing a deepening ideological battle over who gets to define and label credible news. His proposal, focused on warning about “fake news” and exploring voluntary labeling by media professionals, was quickly cast by outlets linked to Vincent Bollore and figures such as Marine Le Pen as an authoritarian attempt at censorship. Though Macron denies any intention to regulate truth, opponents likened the effort to a “Ministry of Truth,” echoing polarised debates familiar in US politics.
- New US visa policy targets trust-and-safety work as “censorship.” The Guardian reports that a State Department Directive from the Trump administration seeks to deny visas to foreign nationals who have worked in areas like fact-checking, content moderation, or combating misinformation, framing these activities as threats to Americans’ free speech. The policy, initially aimed at H-1B applicants in the tech sector, requires heightened scrutiny of applicants’ professional and social media history to identify involvement in what the administration labels “censorship.”
⚔️Conflict & crisis hub
- Vaccine disinformation: Old myths, renewed impact. Vaccine-related misinformation is resurfacing across different crises, undermining public trust and fuelling real-world harm. A new global report warns that measles outbreaks are being accelerated by persistent anti-vaccine narratives that discourage immunisation, with the WHO describing measles as an early warning signal of wider vaccine failure. Separately, medical experts have condemned the US Centers for Disease Control and Prevention for giving renewed space to the long-debunked claim linking childhood vaccines to autism, a move critics warn risks legitimising conspiracy theories and deepening vaccine hesitancy. At the same time, actor Liam Neeson has lent his voice to Plague of Corruption: 80 Years of Pharmaceutical Corruption Exposed, an anti-vaccine documentary promoting Robert F. Kennedy Jr. and amplifying long-debunked claims that vaccines cause autism and toxicity.
- African swine fever and disinformation. Outbreaks of African Swine Fever in Croatia and Spain are fuelling disinformation, undermining public trust in veterinary responses. False claims about the disease’s origins, severity, and biosecurity measures are circulating, exacerbating the crisis. Fact-checkers are debunking wild conspiracy theories and clarifying the virus’s real impact on pig populations, not humans.
- Russian state-backed disinformation in Congo and Armenia. Disinformation linked to state actors is intensifying across fragile political and health environments. In the Democratic Republic of Congo, false claims alleging US-run “bioweapon labs”, amplified by Russian officials, are circulating alongside Ebola outbreaks, discouraging communities from seeking medical care. In parallel, Russia is escalating a coordinated foreign information manipulation campaign in Armenia ahead of the 2026 elections, using bots, deepfakes and impersonation sites to portray the government as corrupt or foreign-controlled while positioning Moscow as Armenia’s only credible protector.
- From cyber attacks to mistranslation: Information operations in Israel and Japan. Israel has issued a rare public warning that Iran is intensifying cyber and disinformation operations targeting civilians and critical infrastructure, describing a trajectory towards cyber-based warfare. Separately, NewsGuard reports that pro-China accounts are miscaptioning videos of Japanese influencers to falsely depict local support for Beijing’s territorial claims, using mistranslation to manufacture consent in a geopolitical dispute.
- From typhoons to floods of disinfo. Disinformation tactics like “butterfly attacks” are fuelling internal divisions in Taiwan, particularly after Typhoon Ragasa. Impostors have spread false claims about volunteer misconduct, while AI-generated content has amplified tensions. Similarly, Vietnam’s flood crisis has been complicated by AI-generated disinformation, hindering rescue efforts and prompting authorities to warn the public against spreading false content.
Want to stay on the frontlines of the latest conflict and crisis news? Our Conflict and Crisis Hub has just been updated!
🤖AI disinfo hub
- AI for FIMI. New research by Graphika shows how AI is increasingly being turned into a tool for foreign interference: Generative AI is enabling state and non-state actors to rapidly scale influence operations by producing large volumes of low-quality but highly distributable content, often referred to as “AI slop,” which is designed to undermine trust and political cohesion. In parallel, UK Foreign Secretary Yvette Cooper has warned that several countries are using AI-generated videos to undermine Western support for Ukraine, framing this activity as part of a broader strategy of information warfare targeting democratic resilience. Meanwhile, experts caution that the United States remains ill-prepared to defend against AI-driven disinformation warfare, according to Foreign Affairs.
- AI search governance. An AI Forensics analysis warns that Europe’s current regulatory framework does not clearly cover AI search. These systems, ranging from chatbots like ChatGPT to AI features embedded in search engines such as Microsoft Copilot and Google Gemini, are reshaping how people access information. Unlike traditional search, which mainly points users to existing sources, AI search often generates new responses, blurring the line between retrieving information and creating it. The shift holds promise, but so do the risks, such as misinformation, bias, and lack of transparency, heightening concerns over information integrity. Moreover, AI search sits in a regulatory grey zone between the Digital Services Act and the AI Act, leaving key gaps in oversight.
- Real case of the impact of AI hoax. A suspected AI-generated image falsely showing a collapsed railway bridge in northern England triggered the temporary suspension of train services after an earthquake, as Network Rail carried out emergency safety checks. According to the BBC, although the bridge was undamaged, the hoax mobilised inspection teams, causing delays to dozens of passenger and freight services. The case underscores how AI-manipulated content can prompt costly infrastructure responses and public disruption even without confirmed malicious intent.
- Predators on TikTok via AI-generated content. A new investigation by Maldita.es identifies a network of TikTok accounts posting sexualised content of minors, including AI-generated videos and reposted footage of real children, which attracts predatory engagement and funnels users towards Telegram. In comments and bios, accounts promote off-platform contacts where illegal child sexual abuse material is advertised for sale or exchange, while some creators monetise through TikTok subscriptions and external payment routes. Maldita.es also reports weak enforcement of TikTok’s terms of service and the DSA: after flagging accounts that appear to violate TikTok’s own rules on sexualised “youth” AI imagery, most remained accessible, pointing to failures in mitigating risks to minors.
- AI regulation in the US. Donald Trump’s executive order pressuring US states to roll back AI regulations has triggered fierce pushback, led by California, which accused the White House of undermining public safeguards and bending to Big Tech lobbying, according to The Guardian. As legal challenges loom, Axios reports that the administration has doubled down by issuing federal guidance requiring agencies to root out so-called “woke AI” and enforce ideological neutrality in government systems. At the same time, Cyberscoop explains that lawmakers are moving to strengthen penalties for AI-enabled fraud and impersonation, underscoring growing concern over misuse rather than governance. The debate has spilled into online satire, with a viral AI-generated image of Trump using a walker highlighting, as reported by NewsGuard, the very deepfake risks policymakers claim to address.
Want to stay on top of the latest in AI and disinformation? Our AI Disinfo Hub has just been updated. Take a look!
Brussels Corner
What the first DSA fine means for platform regulation
On 5 December, the Commission issued a fine of €120 million to X for breaching its transparency obligations under the Digital Services Act (DSA). These breaches include the misleading design of its “blue checkmark,” insufficient transparency in its advertising repository, and its failure to grant researchers access to public data.
In response to the fine, Elon Musk’s X cut the European Commission off from its advertising control panel. X accuses the Commission of attempting to exploit the platform’s Ad Composer to amplify its post about the fine. In doing so, X’s claim effectively draws attention to an additional design weakness in its Ad Composer, reinforcing concerns about broader systemic risks on the platform.
The issued fine marks only the beginning of DSA enforcement. X can be expected to appeal and regulators need to hold firm as case law develops. The goal of the DSA is not to collect fines but to compel platforms to change harmful business practices and comply with democratically agreed rules. Effective enforcement also requires addressing malicious semi-compliance (cases where companies technically follow the law while undermining its purpose) and applying tougher sanctions when necessary. Finally, the evidence for this non-compliance decision relied heavily on the work of civil society and fact-checking communities, whose essential contribution to supporting enforcement should be recognised and sustainably funded.
At the time of writing, the Commission’s full non-compliance decision is still not public. This decision should explain the rationale for the imposed fine and clarify the threshold of evidence required to establish a breach of the DSA.
Initial steps in EU institutions towards adopting the AgoraEU funding package
Following the publication of the European Commission’s AgoraEU funding package, part of the Multiannual Financial Framework (MFF), the EU co-legislators (the Council and Parliament) have taken initial steps toward agreeing on their positions.
The European Parliament has decided that the work will be shared between the Civil Liberties, Justice and Home Affairs Committee (LIBE), which handles a large number of legislative files, and the Culture and Education Committee (CULT), which handles a much smaller number of legislative files. Swedish Greens/EFA MEP Alice Kuhnke (LIBE) and French S&D MEP Emma Rafowicz (CULT) will be the rapporteurs for the proposal, with work expected to begin in early 2026. Meanwhile, the Council has started more substantive discussions. It has focused on procedural points, including the possible creation of a committee to oversee implementation of the various strands of the programme (even though, to our knowledge, no such committee has been necessary in the past), as well as on a rather convoluted definition of cultural industries that would exclude any that do not meet it, requiring recipients, inter alia, to “have the potential to generate innovation and jobs in particular from intellectual property.” Discussions on the CERV+ strand (“Rights, Equality, Citizens and Civil Society,” “Daphne,” and “Democratic Participation and Rule of Law”) appear to have barely touched on activities to uphold democracy.
The Democracy Shield draft report
The European Parliament’s draft report on the Democracy Shield is expected to be published next month. Signals of what might or might not be there can be found in the Parliament’s working document.
According to the working document, the report was supposed to serve as an “early contribution” to the Commission’s communication on the European Democracy Shield (EUDS), which was released on 12 November (read our position here). Arriving after the publication of the Commission’s document, the Parliament’s report has already missed the opportunity to influence the Commission’s communication; however, in certain areas, the working document indicates a stronger ambition than the Commission’s document.
The rapporteur, Tomas Tobé, looks favourably on the creation of a new independent EU-level structure dedicated to combating FIMI and appears to have a clearer vision of what such a body could do than the Commission, whose own ideas are less ambitious.
Similarly, the working document devotes a substantial section to cooperation in justice and home affairs and sanctions, whereas the Commission’s communication only briefly mentions this. Whether these stronger ambitions will be reflected in the draft report remains to be seen.
Is There a Better Way to Shield Our Democracy?
On 9 December, MEPs from the Greens/EFA, S&D, Renew, and EPP organised an event at the European Parliament on “Election Integrity and Foreign Interference in Romania, Moldova and Poland.”
Speakers shared insights into the various Foreign and Domestic Information Manipulation and Interference (FIMI and DIMI) tactics observed during elections in each country, noting that domestic actors are also used to spread disinformation.
The discussion made clear that social media is no longer just a space for entertainment but has become a weaponised tool. This dynamic is enabled by engagement-driven algorithms that amplify illegal and polarising content. It was emphasised that known interference tactics can be easily replicated across borders, making this a truly EU-wide challenge.
The issue is not a lack of legislation but a lack of enforcement, and platforms cannot be assumed to act in good faith. As co-legislators, MEPs have a responsibility to push for stronger action and to ensure the protection of Europe’s electoral integrity.
Reading & resources
- New paper calls for national institutions to build long-term information resilience. A new discussion paper by International IDEA warns that escalating disinformation and malign influence campaigns are increasingly exploiting weaknesses in national information systems, eroding public trust and democratic stability. Current counter-disinformation efforts, the authors argue, remain too fragmented and fail to address the underlying systemic vulnerabilities that make societies susceptible to manipulation. To meet this growing threat, the paper calls for a coordinated, whole-of-society approach and advocates establishing dedicated national institutions focused on long-term information resilience. These bodies would strengthen cross-sector collaboration and provide the sustained capacity needed to protect democracies from evolving information threats.
- UNESCO launches course to counter climate disinformation. UNESCO has launched a new free online course to tackle the surge in climate change disinformation, which continues to undermine public understanding of one of the world’s most urgent threats. The program strengthens Media and Information Literacy (MIL) skills by teaching participants to critically evaluate digital content, recognise misleading narratives, and respond ethically to misinformation. By empowering global citizens to identify and counter climate-related disinformation, the course supports UNESCO’s broader effort to ensure reliable information remains a public good during the climate emergency.
- 2025 marked a turning point as tech giants normalised digital deception. A new analysis by Indicator argues that 2025 was the year major institutions and tech companies openly embraced digital deception, with profound consequences for the public. The shift was symbolised by Mark Zuckerberg’s decision to end Meta’s third-party fact-checking program, an emblem of the broader rollback of moderation and oversight across platforms. This permissive environment enabled the explosive growth of AI-generated “slop,” deepfakes, ragebait marketing, and large-scale digital scams, including among venture-backed startups. The authors contend that these trends reflect a deeper systemic failure: tech giants continue to prioritise engagement and revenue over information integrity.
- New ATHENA publication. Foreign Information Manipulation and Interference: Case Studies from the ATHENA Project is the project’s latest publication, edited by Dr. David Wright and including a contribution from EU DisinfoLab. This new volume offers a comprehensive look at how state and non-state actors use FIMI to shape democratic processes worldwide. Drawing on 32 case studies and 20 expert interviews, the book examines who conducts these operations, how they target societies, which channels they exploit, and what countermeasures exist. A key highlight is its detailed analysis of the tactics, techniques, and procedures (TTPs) used in modern disinformation campaigns, including rapidly advancing deepfake text, image, audio, and video manipulation.
- Cambridge Online Trust and Safety Index. This article reveals that a thriving market for on-demand SMS verifications is a key enabler of inauthentic online activity, including bots and coordinated disinformation campaigns. Using the Cambridge Online Trust and Safety Index (COTSI), researchers tracked verification prices across 197 countries and 500 platforms, finding that costs often spike ahead of national elections, particularly on apps like Telegram and WhatsApp. The study suggests that stricter SIM card registration could help disrupt this manipulation economy and reduce the spread of disinformation.
- Reddit tests verified profiles to address identity and misinformation threats. Reddit has launched a limited alpha test of verified profiles, marked with a grey checkmark, that allow individuals and businesses to confirm their identities. The opt-in feature aims to add clarity for users engaging with experts, journalists, and brands and to “ease the burden on moderators who often verify users manually.” As platforms continue to grapple with mis/disinformation, Reddit frames the initiative as “privacy-preserving” and focused on transparency rather than amplification.
This week’s recommended read
Raquel Miguel, Senior Researcher at EU DisinfoLab, recommends two blog posts warning about the serious implications of introducing advertising into AI-driven chatbots.
Drawing on a recent leak confirming that OpenAI is moving in this direction after other companies had previously announced similar steps, Alberto Romero examines the motivations behind the decision — notably that LLMs are not economically sustainable — and the far-reaching consequences for users.
In a blog post titled “Why ads on ChatGPT are more terrifying than you think,” Romero argues that this shift means that OpenAI’s primary clients will no longer be users, but advertisers, effectively turning the company into “a full-blown media company selling access to attention.”
While similar dynamics have already transformed search engines and social media platforms, applying this model to generative AI introduces unique risks and complexities that we do not yet fully understand. Because chatbots are built around information synthesis, an LLM will be able to embed ads directly into its answers, likely prioritising responses that benefit advertisers. In “Advertising is coming to AI. It’s going to be a disaster,” Daniel Barcay also warns that “ads can be woven invisibly into the fabric of conversation itself, making it virtually impossible to detect.” He further highlights the legal implications, arguing that this model will push the boundaries of existing fair-advertising regulations.
The latest from EU DisinfoLab
- Updated factsheet: disinformation landscape in Italy. This new update highlights major cases influencing Italian discourse, recurring narratives, and the actors pushing back, from fact-checkers and public institutions to civil society. It also tracks how Italy’s policy response is evolving, including developments linked to the Digital Services Act (DSA). Developed with the support of Maria Giovanna Sessa and Mattia Caniglia, this report is part of EU DisinfoLab’s broader effort to map disinformation trends across Europe and strengthen understanding of how information manipulation operates at the national level.
- A stronger European response to FIMI. During the #Disinfo2025 conference in October, FIMI Cluster partners in the ATHENA and ARM projects convened to assess Europe’s response to Foreign Information Manipulation and Interference (FIMI). This blog post captures key takeaways from the meeting – from the systemic risks enabled by platform design, to the fragile promise of the Digital Services Act (DSA), and the growing challenges of attribution in the age of generative AI. It makes the case for moving beyond voluntary frameworks and focusing on what really matters: enforcement, data access, and coordinated action to defend democratic resilience.
- From election interference to DSA enforcement. Our latest report, Regulatory challenges & gaps in addressing systemic platform abuse, wraps up a year of work from the ‘FIMI Defenders for Election Integrity’ project, which tracks foreign information manipulation around elections in four countries. The report explores how manipulation persists despite existing platform rules. It revisits twelve case studies through a DSA lens, pointing to gaps in enforcement and the kind of evidence still needed to turn civil society monitoring into meaningful accountability.
Spotted: EU DisinfoLab
- On 9 December, our executive director, Alexandre Alaphilippe, spoke at the #BeersPoliticsEU session titled “The EU Democracy Shield: Too late to mend the cracks or better late than never?” in Brussels. The session addressed several questions about the Democracy Shield, such as what exactly it will do and how it will fight foreign interference and protect European democracy, as well as the role of the new European Centre for Democratic Resilience, media literacy, and the protection of journalists.
Events & announcements
- 17 December: Media & Learning Wednesday Webinar about Lines of speech: hate, harm and the laws across borders.
- 23-24 January: The Political Tech Summit, held in Berlin, offers an opportunity for political professionals working at the intersection of tech, campaigning, and democracy to exchange knowledge and discuss fresh perspectives shaping digital politics.
- 24 January: CDPD 2026 Call for Papers.
- 23 January-June 2026: The Cyber for Good Media Academy will take place with the mission to protect and better equip journalists against interference and manipulation in the digital space, with a focus on OSINT and cybersecurity. Applications opened on 3 November and closed on 5 December; the programme begins on 23 January.
- 31 January-1 February: FOSDEM 2026, a free open source software event, will take place in Brussels, Belgium.
- 16-17 February: The DSA and Platform Regulation Conference will take place at the Amsterdam Law School, to reflect on the DSA and European platform regulation, providing an opportunity to discuss its broader legal and political context, through the overall theme of platform governance and democracy.
- 25 February: This year’s Digital Platforms Summit 2026 will examine how the Digital Markets Act (DMA) is reshaping online markets and enforcement, while looking ahead to the upcoming Digital Fairness Act (DFA). The event will explore platform governance, consumer and child protection, dark patterns, interoperability, and the future of EU digital regulation, alongside new research findings from CERRE.
- 8-10 April: The Cambridge Disinformation Summit is expected to gather the world’s leading scholars, professionals, and policy-makers to explore interventions on systemic risks from disinformation.
Jobs
- ActiveFence is looking for a Disinformation Researcher.
- Applications are now open for our Communication and Community internship for 2026!
- NewsGuard is looking for a full-time Staff Reporter.
- NewsGuard is also accepting applications for the Politics Reporter position.
- Moonshot is looking for an OSINT Analyst.
- The Division of Journalism in the School of Communication at American University is looking for an open-rank professor of investigative journalism who will also serve as executive editor of the Investigative Reporting Workshop.
- The European Commission is looking for a Policy Officer with the AI office.
- The Journal of Marketing Management has issued a call for papers on The Disinformation Economy: Digital Markets of Influence, Conflict, and Polarisation.
- The Interdisciplinary Transformation University (IT:U) is hiring a PostDoc and two PhD students in Human Rights and Technology.
- The Heinrich-Böll-Stiftung’s Global Unit for Democracy and Human Rights in Brussels is now accepting applications for the Administration Support (Part-time) position.
- Verificat is seeking a Coordinator of Educational Projects.
- Lighthouse Reports is now accepting applications for its OSINT Fellowship.
- The Center for the Study of Organised Hate is looking for a Researcher, Disinformation & Influence Operations.
- The Centre for Information Resilience has opened its talent pool for an OSINT Investigator (Contractor, Russian/Ukrainian speaker).
- The BBC is now accepting applications for its Journalist, Disinformation Unit position.
- AFP is now accepting applications for its Digital Investigative Journalist position in Seoul, Islamabad, and Jakarta.
Did you find a job thanks to the listing in this newsletter? We’d love to know – please drop us a message!
Have something to share – an event, job opening, publication? Send your suggestions via the “get in touch” form below, and we’ll consider them for the next edition of Disinfo Update.
