by Raquel Miguel, Senior Researcher, EU DisinfoLab
In September 2023, we published a factsheet exploring how five platforms – Facebook, Instagram, YouTube, X (formerly Twitter), and TikTok – faced the challenge of moderating AI-manipulated and AI-generated content that may end up circulating as mis- or disinformation. For this purpose, we relied on a self-developed methodology to describe and assess platforms’ policies on certain sensitive issues. Two months later, we published an update covering changes announced by those platforms that qualify as Very Large Online Platforms (VLOPs) under the Digital Services Act (DSA).
The rapid advance of this technology, combined with the perceived threats during a year full of elections worldwide, has led these same platforms to announce new measures – individually or collectively – or slight changes to their policies in 2024. These steps are symptomatic of the need to react to a seemingly unstoppable trend that keeps posing new challenges.
In June 2024, we published a new version of our report “Platforms’ policies on AI-manipulated or generated misinformation”, compiling these recent steps. Spoiler alert: most of the actions taken in 2024 reaffirm the approach these platforms were already relying on to tackle the problem, i.e., labelling.
Common steps:
- Tackling election-related deepfakes. In February 2024, the five platforms under study committed to combatting AI misinformation in this year’s election cycle and signed, together with other tech giants, a voluntary pledge to adopt a common framework for fighting election-related deepfakes intended to mislead voters.
- Joining the C2PA coalition. In addition, TikTok and Google were the latest to join the Coalition for Content Provenance and Authenticity (C2PA), in which Meta and X were also participating. The Coalition provides an open technical standard for labelling and tracing the origin of different media types (e.g., describing who created an image or video, when and how it was created, and the credibility of the source). This matters because it is the standard companies say they rely on when actively tagging content uploaded to their platforms; a simplified illustration of the kind of provenance metadata the standard carries follows below.
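To make this more concrete, the sketch below shows, in Python, the kind of provenance information a C2PA manifest can carry and how a platform might check it for a generative-AI signal. It is a deliberately simplified illustration based on the publicly documented C2PA and IPTC vocabularies (e.g. the “c2pa.actions” assertion and the “trainedAlgorithmicMedia” digital source type); the field layout and the helper function are our own assumptions, not an exact manifest or any platform’s actual implementation.

```python
# Simplified illustration of C2PA-style provenance metadata. A real manifest
# is cryptographically signed and embedded in the media file; only a few
# representative fields are shown here.

example_manifest = {
    "claim_generator": "ExampleImageTool/2.1",  # hypothetical tool that produced the claim
    "assertions": [
        {
            "label": "c2pa.actions",  # C2PA assertion recording how the asset was made
            "data": {
                "actions": [
                    {
                        "action": "c2pa.created",
                        # IPTC digital source type used to signal generative AI
                        "digitalSourceType": "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia",
                    }
                ]
            },
        }
    ],
}


def indicates_generative_ai(manifest: dict) -> bool:
    """Return True if any recorded action declares a generative-AI source type."""
    for assertion in manifest.get("assertions", []):
        if assertion.get("label") != "c2pa.actions":
            continue
        for action in assertion.get("data", {}).get("actions", []):
            if "trainedAlgorithmicMedia" in action.get("digitalSourceType", ""):
                return True
    return False


print(indicates_generative_ai(example_manifest))  # -> True
```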
Individual steps:
- YouTube: a new tool and an “honour-based” request to users. On 18 March 2024, YouTube announced the introduction of a new tool in Google’s Creator Studio requiring creators to disclose to viewers when realistic content – content a viewer could easily mistake for a real person, place, scene, or event – is made with altered or synthetic media, including generative AI. There are exceptions: YouTube will not require creators to disclose content that is clearly unrealistic, is animated, includes special effects, or has used generative AI for production assistance. With this approach, YouTube relies on an “honour system” based on the goodwill of content creators (as described by The Verge). However, the platform mentions, in rather vague terms, the possibility of adding an AI disclosure to videos even if the uploader has not done so, “especially if the altered or synthetic content has the potential to confuse or mislead people.” Going forward, YouTube intends to work towards an updated privacy process for people to request the removal of “AI-generated or other synthetic or altered content that simulates an identifiable individual, including their face or voice”.
- Meta: more labels, fewer takedowns. On 6 February 2024, Meta announced its plans to start applying tags to Facebook, Instagram, and Threads posts containing AI-generated images the company has identified. On 5 April, the company announced the most significant change: an extension of content labelling, with more context added to “high-risk” content and less removal of AI-generated content. Until May, Meta was labelling photorealistic images created with its own Meta AI feature as “Imagined with AI”. From May onwards, Meta said it would apply the tag “Made with AI” to content that carries “industry standard AI image indicators” or has already been identified as “AI content” by its creator. This applies to images, video, and audio. In addition, Meta will add context to high-risk material (such as political content) to identify content that may have been created to intentionally or unintentionally deceive people. From July onwards, Meta will not remove AI-generated or manipulated content solely on the basis of its manipulated video policy unless it violates other policies (such as those on voter interference, bullying, harassment, violence and incitement, or other Community Standards). In short, Meta plans to rely more on its labelling and context-adding approach: if AI content is flagged as false or altered by fact-checkers, Meta will show it lower in the feed, so fewer people see it, and add an overlay label with additional information. A simplified sketch of this decision flow follows after this list.
- TikTok: proactively and automatically labelling content from other platforms. On 9 May 2024, TikTok announced a more proactive approach to labelling. Until then, the platform had been labelling content created using TikTok’s own AI technology and requiring creators to tag any other content they produced containing realistic AI. The change consists of gradually expanding that labelling to cover AI-generated content uploaded from other platforms. The policy applies to images and videos and will be extended to audio in the near future.
- X (formerly Twitter): same policy as in 2023, no changes in 2024. Unlike the other platforms, X has not announced any changes this year to its AI content policy, which is still based on its synthetic and manipulated media policy.
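To visualise how Meta’s announced combination of labelling, added context, demotion, and removal might fit together, here is a rough, hypothetical sketch in Python of that decision flow. The flags and names are our own illustrative assumptions derived from the announcements summarised above; they do not describe Meta’s actual systems.

```python
from dataclasses import dataclass


@dataclass
class Post:
    """Hypothetical, simplified representation of an uploaded item."""
    has_ai_indicators: bool        # carries industry-standard AI provenance signals (e.g. C2PA)
    creator_disclosed_ai: bool     # the uploader declared the content as AI-generated
    high_risk: bool                # e.g. political content with the potential to deceive
    factcheck_flagged: bool        # rated false or altered by fact-checkers
    violates_other_policies: bool  # e.g. voter interference, bullying, harassment


def moderate(post: Post) -> list[str]:
    """Illustrative decision flow mirroring the announced approach."""
    # Removal is reserved for violations of other Community Standards;
    # AI content is no longer removed under the manipulated video policy alone.
    if post.violates_other_policies:
        return ["remove"]

    actions: list[str] = []
    if post.has_ai_indicators or post.creator_disclosed_ai:
        actions.append('label: "Made with AI"')
    if post.high_risk:
        actions.append("add contextual information")
    if post.factcheck_flagged:
        actions.append("demote in feed")
        actions.append("add overlay label with additional information")
    return actions


example = Post(has_ai_indicators=True, creator_disclosed_ai=False,
               high_risk=True, factcheck_flagged=False,
               violates_other_policies=False)
print(moderate(example))  # -> ['label: "Made with AI"', 'add contextual information']
```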
On a final note on the European regulatory framework, all these platforms are bound by the DSA and, more specifically, by the systemic risk mitigation obligations it sets for VLOPs; all except X are signatories of the strengthened Code of Practice (CoP) on Disinformation, which requires “policies in place for countering prohibited manipulative practices for AI systems that generate or manipulate content” (Commitment 15). In addition, the European Commission published guidelines for VLOPs (and VLOSEs – Very Large Online Search Engines) on recommended actions to mitigate systemic risks that might impact the integrity of elections, including measures linked to generative AI (mainly focused on clearly labelling content and on enforcing adapted terms and conditions, but also on addressing the topic in media literacy campaigns). While in their policy updates the platforms appear to respond to some of these non-binding demands, it should be noted that some of the updates pre-date the publication of the guidelines, while others do not explicitly state an intention to comply with these recommendations.
Besides the DSA, the CoP, and the European guidelines, the AI Act will introduce new rules and obligations following a risk-based approach. The text has been given the green light and will soon be published in the Official Journal of the EU (available via EUR-Lex), but it will take some time before the adopted legislation fully applies.
Conclusions
The announced measures signal platforms’ willingness to address the potential risks of AI being used to generate mis- and disinformation. However, in most cases these are very minor changes built on their existing policies, and they do not open a genuinely new chapter in the management of AI-generated content.
Platforms are mostly reinforcing their focus on labelling as a solution. Especially remarkable is the case of Meta, which bolstered its labelling approach and showed greater reluctance to remove AI-generated content. Meta also shifted the basis for removal decisions from its manipulated video policy – with the risk of misleading as its rationale – to other policies. Considering that those policies may not cover all the potential harm this technology can cause, this can even be seen as a step backward in addressing the challenges posed by AI content moderation.
As a result, many of the problems we have already pointed out persist in 2024. This leads to the following final observations:
- Labelling does not address all the risks posed by AI technologies; it should complement other moderation measures and should not preclude harsher actions against harmful content.
- These platforms still overlook AI-generated text in their policies, which refer only to images and videos, and, more recently, to audio. Ignoring the risks posed by AI-generated text can be negligent, especially as AI companies such as OpenAI have recently disclosed how their tools are being used to generate text deployed in covert influence operations.
- These platforms still rely on unclear statements on some occasions, as in the case of YouTube, which says it may proactively tag content when there is a potential to mislead users, without setting out clear, objective criteria.
- Platforms continue to place the burden of labelling content on users (as in the case of YouTube) or on the AI industry. Little effort goes into detection by the platforms themselves, which leaves a loophole for content that neither the tech industry nor users identify.
This blog post was written for the veraAI project, edited by Jochen Spangenberg (DW), and originally published here: https://www.veraai.eu/posts/platforms-ai-policy-update-june2024-by-eudl