Representatives of online platforms, leading social networks, advertisers and the advertising industry agreed to a self-regulatory Code of Practice to address the spread of online disinformation and fake news. Each platform that committed to the Code presented an individual roadmap and committed to publishing monthly reports on its implementation of the Code of Practice.

Since its inception, the Code of Practice (CoP) has expanded to include members of the advertising industry and trade associations.

In June 2020, under the 10 June 2020 Joint Communication “Tackling COVID-19 disinformation – Getting the facts right”, platform signatories began producing monthly reports specifically on their actions to promote authoritative health content and reduce disinformation related to the pandemic.

The most recent set of reports, published in February 2021, can be found here.


At the end of 2017, the European Commission announced the creation of a high-level expert group, composed of representatives of civil society, social media platforms, news media organisations, journalists, and academia.


The group delivered its report. Together with a public consultation and a multi-stakeholder conference, the expert group's report informed the drafting of a self-regulatory code of practice, released at the end of September 2018.


Facebook report available here

  • Committed to extending the transparency on political advertising already in place in the US, Brazil, and the UK. The person paying for an ad will need to confirm their location and identity. The ads will be archived for seven years.
  • Committed to strengthening the verification process of content authenticity and detection of fake accounts.
  • Facebook policies for the advertising network will prohibit the inclusion of misleading, deceptive, sensational, or excessively violent content. This includes deceptive claims (such as false news), offers, or business practices.
  • Announced it will establish a European research advisory commission to award research identified as relevant to the academic community.

Google report available here

  • Mentioned it has already implemented obligations for advertisers to comply with policies against misrepresentation complemented by a “valuable inventory policy”. 
  • Committed to introducing an election-ads transparency report and searchable ad library in a downloadable format.
  • Committed to surfacing fact-check content and supporting projects to surface indicators of credibility. 

Twitter report available here

  • Announced that promoted content will be clearly labelled as such.
  • Committed to continue its efforts to address spam, malicious automation, and fake accounts.
  • In terms of support to the research community, Twitter released datasets of accounts suspected of engaging in information operations and provides support to organisations focusing on the European environment (including EU DisinfoLab, which received support from Twitter in 2018).

Mozilla report available here

  • Hired dedicated staff to work on its disinformation project.
  • Committed to rolling out a new version of Firefox with additional privacy settings and tracking protection targeted to reduce users’ exposure to disinformation campaigns.

Facebook report available here

  • Announced the rollout of the political advertisement libraries in Europe for both Facebook and Instagram, with the Ad Library API accessible on request. Facebook also announced that a report on the Ads Library will be released in mid-May.
  • Presented the actions taken against the ads that have violated its policies for containing low quality, disruptive, misleading or false content.
  • Presented the actions taken to take down eight coordinated inauthentic behaviour networks originating in North Macedonia, Kosovo, and Russia.

Google report available here

  • Announced it is increasing cooperation with fact-checking organisations and networks (e.g. FactCheckEU).
  • Launched the EU Elections Ads Transparency Report.
  • Announced Google’s Ads Library has entered the testing phase.
  • Took several additional measures, including giving more visibility to YouTube videos that receive government or public funding.

Twitter report available here

  • Made political advertisement libraries publicly accessible.
  • Provided further details on the public disclosure of political ads in Twitter’s Ad Transparency Center.
  • Provided figures on actions undertaken against spam and fake accounts.

Facebook report available here

  • Took measures against ads in the EU that violated its policies for containing low quality, disruptive, misleading or false content, or content trying to circumvent its systems.
  • Launched its elections operation centre in Dublin, which involves specialists covering all EU Member States and languages.
  • Further expanded its fact-checking partnerships, now including 21 partners in 14 European languages.

Google report available here

  • Improved scrutiny of ad placements in the EU, including a breakdown per Member State.
  • Took action against two EU-based AdSense accounts for violation of the company’s misrepresentative content policies.
  • Provided an update regarding its policy on transparency of political ads.
  • Rolled out expanded access to Facebook Ad Library API for researchers to analyse ads related to politics or issues.
  • Google’s Transparency Report on political advertising in the EU and its searchable ads library have been made public and provide data on sponsor identity, amounts spent, and display periods.

Twitter report available here

  • Reported on a new election integrity policy that prohibits specific categories of manipulative behaviour and content, such as misleading information on how to participate in the elections, as well as voter intimidation.
  • Provided figures on measures against spam and fake accounts, but did not provide further insights on these measures, such as how they relate to activity in the EU.
  • On the transparency of political ads, Twitter provided information on ads prevented from being served.
  • Created a link to access the Ads Transparency Centre directly from political campaigning ads appearing in the Twitter feed.

Facebook report available here

  • Launched its Ads Library Report.
  • Reported that between the launch of the ads authorisation process late March up to 29 May 2019, there were 343,736 political ads across the EU, with an amount of €19.8 million of political ads spend and provided a breakdown per Member State.
  • Established the Data Transparency Advisory Group (DTAG), tasked with providing an independent, public assessment of whether the four metrics shared in the Community Standards Enforcement Report are meaningful and accurate.
  • Updated its vaccine misinformation policy and informed that it is now reducing the distribution of Pages that violate this policy in the News Feed.
  • Reported on tools and actions specific to the European elections such as an escalation channel for political pages to report issues.

The Commission urges Facebook to provide more granular and continuous information in its upcoming annual report regarding the closure of fake accounts or CIB networks so as to better assess malicious behaviour targeting specifically the EU. Lastly, it urges Facebook to provide data on a consistent basis so as to allow an accurate and continuous assessment of the effectiveness of its policies and the progress achieved.

Google report available here

  • Improved scrutiny of ad placements in each EU Member State.
  • Took action against 88 website publishers for violation of its policies on valuable inventory.
  • Reported receiving 676 verification applications and successfully verifying 174 advertisers to run political ads during the campaign for the European Parliament elections between 1 May and 26 May 2019.
  • Identified and labelled more than 98,000 election ad creatives from verified advertisers.

The Commission calls upon Google to consider ways to improve its metrics to enable a more granular assessment of the progress achieved in the EU.

Twitter report available here

  • Reported on 1,428 ads rejected in the EU for not complying with its Unacceptable Businesses Practices Ads policy between 1-20 May 2019, and provided a breakdown per Member State.
  • Reported that it prevented 1,975 ads from being delivered to EU users due to non-compliance with its Quality Ads policy between 1-20 May 2019, and provided a breakdown per Member State.
  • Updated on its election integrity policy, prohibiting three categories of manipulative behaviour and content: i) misleading information about how to participate in the elections; ii) voter suppression and intimidation; and iii) false or misleading affiliation.

The Commission urges Twitter to develop a policy on issue-based advertising, which ensures transparency and public disclosure of such ads, and to inform the Commission of its progress in its upcoming annual report.

It is important to note that, on 22 May 2019, Microsoft joined the Code of Practice and subscribed to all of its commitments.


In June 2020, under the 10 June 2020 Joint Communication “Tackling COVID-19 disinformation – Getting the facts right”, the Commission began requesting that platform signatories of the Code produce more detailed monthly reports on their actions related to COVID-19 disinformation.


After five months of reporting in a programme originally set for six months, the Commission asked the signatories to prolong the transparency measures for another six months, until June 2021. Platforms were also asked to focus on actions to limit disinformation around COVID-19 vaccines.