February 27, 2025

Author: Maria Giovanna Sessa (EU DisinfoLab)

Reviewers: Amaury Lesplingart (Check First), Joe McNamee (EU DisinfoLab)

Introduction

  • Large language models (LLMs) are rapidly proliferating. Like any technological tool, they can be harnessed for legitimate purposes but also misused or exploited by malicious actors. As these models become more integrated into everyday applications, concerns about their role in spreading misinformation continue to grow, calling for robust policies to counter this threat.
  • This factsheet collects and analyses the misinformation-related policies of 11 leading chatbots, as identified by NewsGuard. Our focus is on text-generative AI, given its widespread use across domains – including content creation, translation, and summarisation – as well as its role in assisting users by answering questions.
  • For each LLM, we examine key policy elements, including explicit references to misinformation and related prohibited activities – such as scams or impersonation. Additionally, we outline content moderation practices, user reporting mechanisms, and the consequences of violating the platform’s Terms of Service (ToS).
  • A note on methodology and limitations: the information provided here is based on publicly available sources, which vary in clarity, accessibility, and format. While we have made every effort to compile a thorough and accurate guide, gaps or omissions may remain, reflecting the information available at the time of writing. If any inaccuracies are identified, we welcome feedback and will gladly make corrections.