Author: Ana Romero-Vicente
Contributor and reviewer: Ira Pragnya Senapati, Ripple Research
Why this update matters
This document is a revised and updated version of the technical document Platforms’ policies on climate change misinformation, first published in 2023. Back then, we mapped how five major platforms (Facebook, Instagram, YouTube, TikTok, and X/Twitter) addressed climate change disinformation through their policies and enforcement systems.
At that time, the Digital Services Act (DSA) had not yet become applicable. Platforms therefore had no legal obligation to address climate disinformation in the EU, and each addressed it in its own way, with varying levels of ambition and effectiveness.
The DSA is now in force, but climate disinformation is not explicitly recognised as a “systemic risk” under Articles 34–35. This omission limits its inclusion in platform risk assessments, mitigation efforts, and transparency reports, and leaves enforcement largely discretionary. Without specific guidance or mandates, platforms retain wide latitude in deciding whether and how to address climate harms.
With this regulatory gap in mind, we set out to examine how platform responses to climate disinformation evolved, or failed to evolve, between 2023 and 2025, and what their policies look like in practice across Facebook, Instagram, YouTube, TikTok, X, and, newly included in this edition, LinkedIn. This update therefore aims to:
- Refresh the record by documenting what measures were in place in 2023.
- Measure progress or regression, both in public commitments and enforcement practices.
- Support renewed pressure on platforms to address climate disinformation more seriously.
- Encourage EU regulators to explicitly recognise climate disinformation as a systemic risk under the DSA, and to ensure that future guidance, risk mitigation requirements, and platform transparency reflect that urgency.
As the climate crisis accelerates, it is crucial to demand that very large online platforms (VLOPs) take meaningful, measurable action to reduce the spread and amplification of harmful climate narratives, whether through misleading organic content, monetised falsehoods, algorithmic echo chambers, or paid advertisements.
Methodology
This assessment is based exclusively on a review of publicly available policy documentation provided by six major platforms (Facebook, Instagram, YouTube, TikTok, X, and LinkedIn). The focus was limited to the sections of their official websites or transparency hubs that address content moderation, misinformation, and advertising policies. No independent monitoring of disinformation content was conducted for this report. The analysis of the EU policy and regulatory context reflects internal expertise and the institutional position of EU DisinfoLab on the treatment of climate disinformation under the Digital Services Act.
Executive summary
As the climate crisis deepens, online platforms continue to play a central yet inadequately governed role in shaping public understanding of climate change. This updated analysis of platform policies from 2023 to 2025 reveals a landscape of partial progress, regulatory evasion, and growing systemic risks.
Despite new obligations under the Digital Services Act (DSA), climate disinformation remains largely unregulated, falling through the cracks of enforcement and transparency regimes. Platforms are not legally required to recognise climate disinformation as a systemic risk, and most continue to treat it as a marginal issue, if they address it at all.
Our findings confirm that:
- TikTok is the only platform with a dedicated, climate-specific content moderation policy, while others (Facebook, Instagram, YouTube, X, LinkedIn) either apply general misinformation rules or provide no relevant framework.
- YouTube has formally rejected onboarding third-party fact-checkers under the DSA, weakening accountability and setting a troubling precedent.
- Meta’s Climate Science Center and Climate Info Finder, previously referenced in its transparency and help resources, are no longer included in public-facing documentation as of 2025. This absence may indicate deprioritisation of climate-focused user resources.
- No platform addresses AI-generated climate disinformation through specific moderation tools or disclosure mechanisms, despite the accelerating use of synthetic media in climate denial and greenwashing campaigns.
- Recommender systems remain unexamined vectors for the amplification of climate disinformation. None of the assessed platforms include climate risks in their systemic risk audits under Article 34(1)(c) of the DSA.
While a few partial measures persist (such as TikTok’s definition of climate denial, periodic search interventions, and ad demonetisation policies on TikTok and YouTube), these remain narrow in scope and inconsistently enforced, and they rarely apply to unpaid organic content.
Other platforms that previously referenced climate disinformation (such as Meta) now do so only under generic misinformation categories, and have not reaffirmed or updated climate-specific enforcement frameworks in 2025.
Crucially, no platform offers climate-specific appeal pathways, takedown transparency, or defined enforcement thresholds, leaving users without clarity, remedy, or redress.
In response, this report proposes a dual framework of policy and platform action, calling on EU institutions to formally designate climate disinformation as a systemic risk and to require platforms to adopt transparent, climate-specific moderation systems. Without these measures, the EU’s digital governance goals, and its climate transition targets, remain undermined by unchecked falsehoods and opaque amplification systems.
This research has been supported by the Heinrich-Böll-Stiftung European Union.

