March 29, 2021

This paper served as the EU DisinfoLab’s submission on the forthcoming guidance to be prepared by the European Commission for a new EU-wide Code of Practice on Disinformation. It draws on lessons from the organisation’s research, our contribution to the Study on the assessment of the Code of Practice against Disinformation (SMART 2019/0041), our contribution to the ITU/UNESCO report “Balancing Act: Countering Digital Disinformation while Respecting Freedom of Expression”, and ongoing discussions with partners on how to regulate disinformation.

Executive Summary

The EU DisinfoLab sees the Code of Practice as a necessary but transitional co-regulatory tool, capable of bridging the gap between the current absence of EU-wide rules on disinformation and the eventual enforcement of the Digital Services Act in 2023/2024. The previous Code was too focused on tracking specific “behaviours” (which evolve too rapidly) and “content” (which changes form too easily). To better address disinformation, we urge the Commission to integrate a stronger focus on how disinformation is “distributed” across traditional news sites and different types of social media platforms – a point also made in the VVA study on the implementation of the initial Code of Practice.[1]

Moving forward, the EU DisinfoLab hopes the new Code can include a broader range of signatories, including crowdfunding platforms and closed messaging spaces like Telegram; more granular reporting obligations; and a more structured approach to monitoring enforcement through a permanent body, similar to the Election Integrity Partnership (EIP) set up in the US to monitor the Presidential Election. This partnership or cooperation framework could be chaired by the European Commission and include representatives from Civil Society Organisations (CSOs) capable of observing the progress made by signatories throughout the year, as well as following up on large-scale investigations carried out by researchers into prominent cases of mis- and disinformation in the EU.

We have organised more detailed feedback below according to the four thematic areas of the multi-stakeholder dialogue: I) Integrity of Services & User Empowerment; II) Access to Data, Cooperation with Fact-Checkers and Researchers; III) Scrutiny of Ad Placements & Transparency of Political and Issue-Based Advertising; IV) KPIs & Monitoring of the Code of Practice.

I) Integrity of Services & User Empowerment

Integrity of services

Rather than standardised definitions, a grid of platform definitions

Much of the discussion on “integrity of services” has centred on agreeing certain definitions of disinformative behaviours. While the largest platforms, such as Facebook and Google, may wish to engage in a discussion on standardising EU definitions, this risks creating a drawn-out process that will achieve little in the short term. Rather than attempting to standardise a single EU definition of, for example, Coordinated Inauthentic Behaviour (CIB), researchers and policy-makers must first know more about the methodologies each platform is using, as well as understand when a consistent approach is applied and when a manual override, or ad-hoc decision, is taken by the company’s leadership.[2] Eventually, the Code could oversee the creation of a publicly available grid comparing methodologies on specific behaviours from one platform to another, including an indication of when and how ad-hoc content moderation decisions are taken, or will be needed.

The challenge of non-English disinformation

More work is needed on non-English disinformation. We see an imbalance in the takedowns of websites spreading disinformation depending on the language used. We have seen this in our investigations into Tierra Pura for Spanish-language disinformation and Observateur Continental for French-language disinformation, where actors with links to Chinese opposition groups or the Russian GRU used translation to disinform in Europe. After being flagged with evidence and press reports, the English side of the operation would typically be taken down by Facebook, Twitter and YouTube, while the French or Spanish side remained online, spreading the same content in a different language.

From the perspective of tackling disinformation effectively, it is worth considering that EU citizens deserve one Digital Single Market composed of 24 different languages, not individual markets unfairly segmented along rough linguistic lines. If platforms provide services across this market, they should also be capable of moderating and enforcing decisions across all 24 languages, and of providing detailed reports on the decisions taken in the various languages of the EU. If this is not possible, a starting point could be for signatories to identify the largest disinformation outlets flagged by fact-checkers and researchers, and to publish a yearly list of disinformation items uncovered, categorised by language and detailing the targeted audience, with information on the actors and types of content used. Eventually, integrity of services should also mean platforms taking extra precautions for EU Member States deemed at risk during election cycles, for contested elections, or for Member States experiencing a declining civic space.

II) Access to Data, Cooperation with Fact-Checkers and Researchers

Meaningful transparency

The EU DisinfoLab agrees that too much transparency can pose a risk to users. However, there is a middle ground to be struck in which meaningful transparency can be provided by the Code signatories on information related to clearly identifiable disinformative narratives such as QAnon, COVID-19 conspiracies, or climate denial. This means the provision of data disaggregated by language or Member State, data on “Potential Reach”, data on amplification, insights into takedowns, explanations of the distinction between spam and information operations (IOs)[3], takedown data disaggregated by requests from law enforcement and from individuals, and data on the levels of user engagement with detected disinformation campaigns.

A new mechanism for “trusted disinformation flaggers” in the EU

The Code should introduce a reporting mechanism for platforms to respond to requests made by accredited researchers. This would, we hope, help platforms take action against information operations, bolster their moderation decisions with conclusive evidence, and shield users from harmful content. Currently, if a well-established European organisation uncovers an IO with convincing evidence, there is no guarantee action will be taken. The Commission should therefore consider creating a mechanism whereby trustworthy organisations have a process to flag disinformation and platforms have a channel to follow up and carry out the necessary checks. This could mean setting up a system analogous to granting entities “trusted flagger” status, to test the provisions outlined in Article 19 of the DSA, but specifically designed for actors in the disinformation space, with an emphasis on appropriate expertise and linguistic coverage – rather than overall capacity.[4] Eventually, the Commission could even consider convening a committee or working group to scrutinise how platforms have dealt with specific cases raised by the accredited disinformation flaggers. One model for this type of partnership is the US Election Integrity Partnership (EIP), a coalition of premier research teams focused on supporting real-time monitoring and information exchange between the research community, election officials, government agencies, civil society organisations and social media platforms.[5] While the EIP was limited in time and designed only to produce publications, it provides an excellent template for improving cooperation and connecting the various stakeholders in Europe’s disinformation ecosystem.

The European Digital Media Observatory (EDMO) could potentially be the vehicle to oversee such a structure. However, improvements would be needed to further develop EDMO’s reputation within the research community, which expects it to be effective at exposing disinformation in the EU and at financing independent research. The managers of any new cooperation mechanism must be well attuned to the needs of the EU’s disinformation research community and should earmark funding to regularly audit the landscape.

III) Scrutiny of Ad Placements & Transparency of Political and Issue-Based Advertising

KYC obligations for signatories

More transparency is needed in the scrutiny of ad placements. On this point, the EU DisinfoLab echoes the calls already made by a number of civil society organisations (CSOs), particularly the Global Disinformation Index (GDI). We call on signatories to commit to introducing stricter Know Your Customer (KYC) rules for the placement of online ads, to adopt specific advertising policies covering disinformation, and to explain how they measure and enforce these policies.

Beyond the Code: Setting stricter standards and a European Online Ad Library

More generally, the EU needs to set an agreed standard of advertising transparency that will create more certainty for advertisers and platforms. As outlined in the Joint Call for Universal Ads Transparency (Sep 2020), signed by the EU DisinfoLab and 28 other civil society organisations, we believe the time has come to acknowledge the limits of self-regulatory approaches in the ad-tech sector. As already outlined in the European Democracy Action Plan (EDAP), the EU must consider binding requirements on platforms, a governance structure guaranteeing enforcement, a verification process for advertisers, real-time transparency disclosures for individual users, and most importantly, a mandatory, public, and functional repository for online ads: a “European Online Ad Library”.

IV) KPIs & Monitoring of the Code of Practice

Clear KPIs

The EU DisinfoLab is in favour of the creation of clear and measurable Key Performance Indicators (KPIs) that would enable cross-platform comparisons and objective measurements.

Overall, our organisation agrees with the Commission’s suggestion, outlined in the discussion paper for the session on KPIs and Monitoring Enforcement of the Code, that two classes of KPIs could be pertinent: a first class of service-level indicators (covering data on user engagement, effectiveness of user empowerment tools, ad revenues flowing to disinformation websites) and a second class of structural indicators on the overall impact of the Code on disinformation.

A role for CSOs in the verification process

However, as mentioned in the VVA assessment, a greater focus is needed on monitoring the “distribution” of disinformation. This focus on distribution can already be achieved by implementing some of the measures outlined above (publicly accessible distribution metrics, online repositories for political ads), but more work is needed on the governance and oversight dimension, to ensure we do not find ourselves in a situation whereby research communities become dependent on piecemeal data provided by the tech industry. The mission report submitted to the French government (May 2019) on making content distribution more transparent laid out a structure for an informed policy dialogue, in which researchers would be guaranteed access to distribution data and civil society could participate in verifying the actions taken by social media platforms, while identifying future trends or flagging potential issues.[6]

Conclusion

Europe’s civil society is rising to the disinformation challenge with new types of expertise, like open-source intelligence (OSINT) and digital forensics, crowdsourced participation, consumer literacy and data activism.[7] These organisations are exploring unusual partnerships, for instance between journalists and classrooms, and experimenting with new techniques like artificial intelligence. This is a new sector taking shape. But despite their role in our information ecosystem, many struggle to make their voices heard; their security and sustainability are not assured. They also face capacity issues and novel cybersecurity risks.

In the 2030 Digital Compass, the Commission rightly underlines the need for the EU to build on its strengths and support a “robust civil society”.[8] Giving specialist NGOs and researchers a bigger voice in the Code of Practice is one concrete way of meeting this expectation.

Annex I

Recent Reports and Studies

On data access for disinformation researchers: Renée DiResta, Stanford Internet Observatory, https://www.ned.org/wp-content/uploads/2021/01/Disinformation-Researchers-Robust-Data-Partnerships-DiResta.pdf

On the typology of responses platforms have taken to moderate content: EU DisinfoLab and Trisha Meyer, https://www.disinfo.eu/publications/how-platforms-are-responding-to-the-disinfodemic/

On COVID-19 disinformation: policy brief by Julie Posetti and Kalina Bontcheva, https://en.unesco.org/sites/default/files/disinfodemic_deciphering_covid19_disinformation.pdf

On the completeness of Facebook’s transparency reporting, specifically its fourth-quarter content moderation report: recent reporting by Issie Lapowsky, https://www.protocol.com/facebook-hate-speech-transparency


[1] See also Brookings Institution (April 2020) “Adding a ‘D’ to the ABC Disinformation Framework” by Alexandre Alaphilippe.

[2] Notable examples include the Twitter CEO’s decisions to ban former President Donald Trump and to reverse past policies concerning him (which were explained publicly), and the Facebook CEO’s intervention to retain US conspiracy theorists like Alex Jones on the platform (not revealed publicly).

[3] Twitter talks about fighting malicious automation and spam, bundling them together. Reddit (not a signatory) talks about spam and “content manipulation” with little distinction.

[4] According to Art. 19 of the DSA: Online platforms shall take the necessary technical and organisational measures to ensure that notices submitted by trusted flaggers are processed and decided upon with priority and without delay. The status of trusted flaggers under this Regulation shall be awarded, upon application by any entities, by the Digital Services Coordinator of the Member State in which the applicant is established, where the applicant has demonstrated to meet all of the following conditions: (a) it has particular expertise and competence for the purposes of detecting, identifying and notifying illegal content; (b) it represents collective interests and is independent from any online platform; (c) it carries out its activities for the purposes of submitting notices in a timely, diligent and objective manner.

[5] The EIP was launched by the Atlantic Council’s Digital Forensic Research Lab (DFRLab), the Stanford Internet Observatory, Graphika, and the University of Washington Center for an Informed Public.

[6] Mission report submitted to the French Secretary of State for Digital Affairs (May 2019), “Creating a French framework to make social media platforms more accountable: Acting in France with a European vision”. The proposals were unfortunately not adopted.

[7] As highlighted in the EU DisinfoLab’s report (Feb 2021), “The Many Faces Fighting Disinformation: Safeguarding Civil Society’s Role in the Response to Information Disorders”.

[8] European Commission (9 March 2021), Communication “2030 Digital Compass: the European Way for the Digital Decade”, p. 1.