EU actions against disinformation

This article presents the actions undertaken at European level to tackle the spread of online disinformation. It will be updated regularly.

Last update: 17/06/2019

In April 2017, Vice-President Andrus Ansip, in charge of the completion of the Digital Single Market, described fake news as “a serious problem”. With the Communication “Tackling online disinformation: a European approach”, the European Commission chose to engage with stakeholders in order to define an Action Plan to tackle the spread of disinformation in Europe.

At the end of 2017, the European Commission announced the creation of a high-level expert group composed of representatives of civil society, social media platforms, news media organisations, journalists and academia. The group delivered its report in March 2018. Together with a public consultation and a multi-stakeholder conference, the expert group’s report informed the drafting of a self-regulatory Code of Practice, released at the end of September 2018.

Alongside the Code of Practice, the European Commission has launched an independent network of fact-checkers and an online platform on disinformation, as well as measures aimed at enhancing media literacy. The monitoring of the progress made, carried out with the assistance of the European Regulators Group for Audiovisual Media Services (ERGA), will examine the need for further action.

Code of Practice:

Representatives of online platforms, leading social networks, advertisers and the advertising industry agreed on a self-regulatory Code of Practice to address the spread of online disinformation and fake news. Each platform committed to the Code has presented its individual roadmap and committed to publishing monthly reports on the implementation of the Code of Practice.

On 14/06/2019, the European Commission published a report on the implementation of the Action Plan Against Disinformation. The report explains in more detail how the Action Plan and the Elections Package helped to fight disinformation in the context of the European elections. The report does not identify a cross-border disinformation campaign from external actors, but it presents evidence of sustained disinformation activities by Russian sources.

Implementation reports by online platforms:

January 2019

Facebook report available here

  • Committed to extending to the EU the transparency measures for political advertising already in place in the US, Brazil and the UK. The person paying for an ad will need to confirm their identity and location. The ads will be archived for seven years.
  • Committed to strengthening the verification of content authenticity and the detection of fake accounts.
  • Facebook’s policies for its advertising network will ban misleading, deceptive, sensational or excessively violent content, including deceptive claims (such as false news), offers or business practices.
  • Announced it will establish a European research advisory commission to support research identified as relevant to the academic community.

Google report available here

  • Mentioned it has already implemented obligations for advertisers to comply with policies against misrepresentation, complemented by a “valuable inventory” policy.
  • Committed to introducing an election ads transparency report and a searchable ad library in a downloadable format.
  • Committed to surfacing fact-check content and supporting projects that surface indicators of credibility.

Twitter report available here

  • Announced promoted content will be clearly labelled as such.
  • Committed to continuing its efforts to address spam, malicious automation and fake accounts.
  • In terms of support to the research community, Twitter has released datasets of accounts suspected of engaging in information operations and provides support to organisations focusing on the European environment (including EU Disinfolab, which received support from Twitter in 2018).

Mozilla report available here

  • Hired dedicated staff to work on disinformation projects.
  • Committed to rolling out a new version of Firefox with additional privacy settings and tracking protection aimed at reducing users’ exposure to disinformation campaigns.

Implementation reports by trade associations:

February 2019

Facebook report available here

  • Did not report on the results of the activities undertaken in January with respect to the scrutiny of ad placements.
  • Announced that a pan-EU archive for political and issue advertising would be available in March 2019.
  • Provided an update on cases of interference from third countries in EU Member States, but did not report the number of fake accounts removed due to malicious activities specifically targeting the European Union.

Google report available here

  • Provided data on actions taken during January to improve scrutiny of ad placements in the EU, broken down by Member State.
  • Published a new policy for ‘election ads’ on 29 January, and announced it would start publishing a Political Ads Transparency Report as soon as advertisers begin to run such ads.

Twitter report available here

  • Did not provide any metrics on its commitments to improve the scrutiny of ad placements.
  • On political ads transparency, contrary to what was announced in the January implementation report, Twitter postponed the decision until the February report.
  • On integrity of services, Twitter added five new account sets, comprising numerous accounts in third countries, to its Archive of Potential Foreign Operations, which is publicly available and searchable, but did not report metrics to measure progress.

March 2019

Facebook report available here

  • Announced the roll-out of the political advertisement libraries in Europe for both Facebook and Instagram, making the Ad Archive API accessible on request (a hypothetical query sketch follows this list). Facebook also announced that a report on the Ads Library would be released in mid-May.
  • Presented the actions taken against ads that violated its policies by containing low-quality, disruptive, misleading or false content.
  • Presented the actions taken to take down eight coordinated inauthentic behaviour networks originating in North Macedonia, Kosovo and Russia.
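
For context, the Ad Archive API (later rebranded as the Ad Library API) is the interface through which approved researchers and journalists can query the political ads stored in Facebook’s archive. The snippet below is a minimal, hypothetical sketch of such a query in Python; the endpoint version, parameter names, field list and the ACCESS_TOKEN placeholder are assumptions drawn from Facebook’s public developer documentation rather than from the Commission’s reports, and the interface actually offered to approved researchers may differ.

```python
import requests

# Hypothetical sketch of a query against Facebook's Ad Archive / Ad Library API.
# The endpoint version, parameter names and fields are assumptions and may have
# changed; a real access token from an approved developer account is required.
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"
ENDPOINT = "https://graph.facebook.com/v3.2/ads_archive"

params = {
    "access_token": ACCESS_TOKEN,
    "search_terms": "European elections",        # free-text search over ad content
    "ad_reached_countries": "['DE','FR','IT']",  # Member States the ads reached
    "ad_type": "POLITICAL_AND_ISSUE_ADS",
    "fields": "page_name,ad_creative_body,ad_delivery_start_time,spend",
    "limit": 25,
}

response = requests.get(ENDPOINT, params=params)
response.raise_for_status()

# Each entry in "data" describes one archived political or issue ad.
for ad in response.json().get("data", []):
    print(ad.get("page_name"), "|", ad.get("ad_delivery_start_time"))
```

In practice, access is granted on request through a Facebook developer account before queries such as the one above can be run, which is what “accessible on request” refers to.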

Google report available here

  • Announced it is increasing cooperation with fact-checking organisations and networks (e.g. FactCheckEU).
  • Launched the EU Elections Ads Transparency Report.
  • Announced Google’s Ads Library has entered the testing phase.
  • Took several additional measures, including giving more visibility to YouTube videos that receive government or public funding.

Twitter report available here

  • Made political advertisement libraries publicly accessible.
  • Provided further details on the public disclosure of political ads in Twitter’s Ad Transparency Center.
  • Provided figures on actions undertaken against spam and fake accounts.

April 2019

Facebook report available here

  • Took measures in the EU against ads that violated its policies by containing low-quality, disruptive, misleading or false content, or by trying to circumvent its systems.
  • Launched its elections operations centre in Dublin, which involves specialists covering all EU Member States and languages.
  • Further expanded its fact-checking partnerships, now including 21 partners in 14 European languages.

Google report available here

  • Improved scrutiny of ad placements in the EU, including a breakdown per Member State.
  • Took action against two EU-based AdSense accounts for violation of the company’s misrepresentative content policies.
  • Provided an update regarding its policy on transparency of political ads.
  • Rolled out expanded access to the Facebook Ad Library API for researchers to analyse ads related to politics or issues.
  • Google’s Transparency Report on political advertising in the EU and its searchable ads library have been made public and provide data on sponsor identity, amounts spent and display periods.

Twitter report available here

  • Reported on a new election integrity policy, prohibiting specific categories of manipulative behaviour and content, such as misleading information about how to participate in the elections and voter intimidation.
  • Provided figures on measures against spam and fake accounts, but did not provide further insights into these measures, such as how they relate to activity in the EU.
  • On the transparency of political ads, Twitter provided information on ads prevented from being served.
  • Created a link to access the Ads Transparency Centre directly from political campaigning ads appearing in the Twitter feed.

May 2019

Facebook report available here

  • Launched its Ads Library Report.
  • Reported that, between the launch of the ads authorisation process in late March and 29 May 2019, there were 343,736 political ads across the EU, amounting to €19.8 million in political ad spend, and provided a breakdown per Member State.
  • Established the Data Transparency Advisory Group (DTAG), tasked with providing an independent, public assessment of whether the four metrics shared in the Community Standards Enforcement Report are meaningful and accurate.
  • Updated its vaccine misinformation policy and reported that it is now reducing the distribution in the News Feed of Pages that violate this policy.
  • Reported on tools and actions specific to the European elections such as an escalation channel for political pages to report issues.

The Commission urges Facebook to provide more granular and continuous information in its upcoming annual report regarding the closure of fake accounts and coordinated inauthentic behaviour (CIB) networks, so as to better assess malicious behaviour specifically targeting the EU. Lastly, it urges Facebook to provide data on a consistent basis so as to allow an accurate and continuous assessment of the effectiveness of its policies and the progress achieved.

Google report available here

  • Said it improved scrutiny of ad placements in each EU Member State.
  • Took action against 88 website publishers for violation of its policies on valuable inventory.
  • Reported receiving 676 verification applications and successfully verifying 174 advertisers to run political ads during the campaign for the European Parliament elections between 1 May and 26 May 2019.
  • Identified and labelled more than 98,000 election ad creatives from verified advertisers.

The Commission calls upon Google to consider ways to improve its metrics to enable a more granular assessment of the progress achieved in the EU.

Twitter report available here

  • Reported 1,428 ads rejected in the EU for not complying with its Unacceptable Business Practices ads policy between 1 May and 20 May 2019, and provided a breakdown per Member State.
  • Reported that it prevented 1,975 ads from being served to EU users for non-compliance with its Quality Ads policy between 1 May and 20 May 2019 and provided a breakdown per Member State.
  • Provided an update on its election integrity policy, which prohibits three categories of manipulative behaviour and content: i) misleading information about how to participate in the elections; ii) voter suppression and intimidation; and iii) false or misleading affiliation.

The Commission urges Twitter to develop a policy on issue-based advertising, which ensures transparency and public disclosure of such ads, and to inform the Commission of its progress in its upcoming annual report.

It should be noted that on 22 May 2019, Microsoft joined the Code of Practice and subscribed to all of its commitments.
