September 28, 2023

Author: Raquel Miguel, EU DisinfoLab

Reviewer: Noémie Krack, KU Leuven Centre for IT & IP Law – imec

Last updated: 4 June 2024

Executive Summary

The development of artificial intelligence (AI) technologies has long been a challenge for the disinformation field, allowing content to be easily manipulated and helping to accelerate its distribution. On the content side, recent technical developments and the growing use of generative AI systems by end-users have exponentially increased these challenges, making it easier not just to modify but also to create fake texts, images, and audio pieces that can look real. Despite offering opportunities for legitimate purposes (e.g., art or satire), AI-generated content is also widely produced and disseminated across the internet, causing harm and deception, whether intentionally or not.

In view of these rapid changes, it is crucial to understand how platforms face the challenge of moderating AI-manipulated and AI-generated content that may end up circulating as mis- or disinformation. Are they able to distinguish legitimate uses of such content from malign ones? Do they see the risks embedded in AI merely as an accessory to disinformation strategies or copyright infringements, or do they consider it a matter in its own right that deserves specific policies? Do they even mention AI in their moderation policies, and have they updated these policies since the emergence of generative AI to address this evolution?

Answers to these questions are crucial, as the Digital Services Act (DSA) provides new complaint mechanisms for users in the European Union regarding the lack of enforcement of terms and conditions. The DSA, while mentioning disinformation only in its recitals and not in its provisions, still contains many obligations that will help combat disinformation, including user empowerment measures and increased transparency requirements. The DSA will also require very large online platforms and search engines (VLOPs and VLOSEs) to assess their mitigation measures (and their results) against systemic risks and to implement crisis protocols under exceptional circumstances.

The present factsheet delves into how some of these VLOPs – Facebook, Instagram, TikTok, X (formerly Twitter), and YouTube – approach AI-manipulated or AI-generated content in their terms of use, exploring how they address its potential risk of becoming mis- and disinformation.

EU DisinfoLab published a first version of this factsheet in September 2023, which required an update two months later. The rapid advance of this technology, combined with the perceived threats during a year full of elections worldwide, has resulted in new recommendations by the European Commission related to the risks of AI (and specifically generative AI), and in these platforms announcing new individual or collective measures or slight changes to their policies in 2024, which we have included in this third version.

  • Following an open consultation, the European Commission published on 26 March 2024 guidelines on recommended measures for VLOPs and VLOSEs to mitigate systemic risks online that might impact the integrity of elections, with a particular focus on the European Parliament elections in June. These non-binding guidelines include a recommendation to "adopt specific mitigation measures linked to generative AI (…) for example by clearly labelling content generated by AI (such as deepfakes), adapting their terms and conditions accordingly and enforcing them adequately". The guidelines also recommend that platforms focus on AI’s challenges in their media literacy campaigns.
  • The collective steps taken by the platforms in 2024 include the signing of a voluntary pledge to adopt a common framework for fighting election-related deepfakes intended to mislead voters, and participation in the Coalition for Content Provenance and Authenticity (C2PA), which provides an open technical standard for labelling and tracing the origin of different media types.
  • The individual steps taken in 2024 reaffirm labelling content as the approach platforms focus on to tackle the problem. YouTube announced a new tool requiring creators to disclose to viewers when realistic content is AI-generated, while TikTok said it would label, in a more proactive and automatic way, AI-generated content uploaded from other platforms. For its part, Meta announced that it would rely more on labelling (with new labels and more context in the case of high-risk content) than on takedowns when dealing with AI content. From July onwards, Meta will no longer remove AI-generated or manipulated content solely on the basis of its manipulated video policy unless it violates other policies.

The analysis concluded that some definitions are divergent but have been moving towards harmonisation in 2024. In September 2023, only Facebook and TikTok mentioned "artificial intelligence" directly (including deepfakes, in the case of Facebook) in their policies aiming to tackle disinformation, while TikTok and X included "synthetic media" in their policies about manipulated and misleading media. In 2024, however, Meta, YouTube, and TikTok also refer to AI-generated content or generative AI in their policies.

While the distinction between general misinformation policies and AI-specific considerations is not always evident, there is a growing trend among platforms to incorporate specific guidelines for content altered or generated by AI. However, Meta’s recent decision to rely on other policies when removing content could be a step back. In addition, the platforms often overlook AI-generated text, referring mainly to images and videos in their policies, although they are starting to mention audio as well.

In cases like TikTok’s, where platforms explicitly address synthetic or manipulated media created with AI, they try to distinguish between allowed and banned uses. Small variations exist in the rationale behind content moderation: the driving force is either the misleading and harmful potential of the content or a more compliance-oriented approach concerning copyright and quality standards.

On a different note, all the studied platforms qualify as Very Large Online Platforms (VLOPs) under the DSA. The DSA is technologically neutral, i.e., it applies regardless of the technology used to produce the content.

Meanwhile, the strengthened Code of Practice on Disinformation has been complemented by the obligations contained in the DSA, and the co-regulatory mechanism present in the DSA will reinforce the Code once it becomes an official DSA code of conduct. Under the strengthened Code’s Commitment 15, relevant signatories (all of the studied platforms except X) are specifically called on to "establish or confirm their policies in place for countering prohibited manipulative practices for AI systems that generate or manipulate content, such as warning users and proactively detect such content".

While X has withdrawn from the Code, it still has to abide by the DSA. Therefore, all five studied platforms must comply with the DSA’s due diligence obligations and justify the means they deploy to combat disinformation on their services, which could require them to adopt new measures. Among other required actions, platforms should update their policies to meet new needs in the face of rapidly evolving technologies, enhance cooperation with experts, and take some responsibility on this complex topic instead of passing the burden to users and the AI industry.

Since the initial release of this document in September 2023 and the publication of a second version two months later, YouTube, TikTok, and Meta have announced changes to their AI-related policies in 2024, which we have incorporated into this updated version.

Platforms’ policies on AI-manipulated and generated misinformative content

EU DisinfoLab has developed an analytical framework to analyse and compare the policies of five platforms on different misinformative topics. Factsheets on electoral, health, and climate change misinformation have already been published following this framework. The same methodology (focusing on definitions and on the types of actions taken) is applied to AI-generated and manipulated misinformation. Where applicable, the notes included in the table are verbatim quotations from the platforms’ policies; in other cases, for the sake of simplification, the notes are a summary or analysis by the author.

This page was updated on 4 June 2024 with version 3 of the factsheet, and the Executive Summary was adjusted accordingly. You can find version 1, published on 28 September 2023, here (pdf), and version 2, published on 6 December 2023, which includes the policy updates announced by Meta and YouTube in November 2023, here (pdf).