January 1, 2020

We’d like to thank those within our community who provided us with their insights and expertise, which helped inform the formulation of our proposals. It was greatly appreciated!

The Internet has reached an unprecedented scale, both in terms of users and the data collected and processed. Platforms have enriched connectivity between citizens, facilitating unlimited access to – and the sharing of – information. Yet, the lack of transparency mechanisms and oversight over the business model of online platforms, i.e. the “attention economy”[1], has made it possible to manipulate information on a large scale.

The agenda outlined by Commission president Ursula von der Leyen[2] presents an opportunity to ensure that Europe establishes the right regulatory framework and that platforms’ current obligations are properly enforced to tackle this challenge.

As noted by Roger McNamee (an early investor in Facebook), “hate speech, disinformation and conspiracy theories are not a by-product of the business model; they are the business model”[3]. To generate advertising revenue, online platforms rely on the massive collection of citizens’ personal data and subsequent profiling to deliver curated content designed to keep users engaged for as long as possible. That is why highly emotional disinformation and clickbait content have proliferated.

The EU Code of Practice on Disinformation was a starting point in taking urgent measures against the proliferation of disinformation in the context of the 2019 European Elections. However, disinformation is not specific to the electoral process, and self-regulation lacks the necessary oversight mechanisms to fully restore trust in the online information ecosystem. Even before the Code’s adoption, the sounding board of the multi-stakeholder forum set up by the Commission highlighted its limitations.[4]

In the meantime, attempts by EU Member States to regulate content moderation have been criticized for intruding on citizens’ rights and infringing on freedom of speech. We therefore advocate for a strong co-regulatory approach targeting content distribution processes rather than content itself[5]. This would be complemented by oversight from a regulatory body with enforcement powers, similar to the structured co-regulatory framework detailed in the French Mission’s report on “Creating a French framework to make social media platforms more accountable”[6].

Most importantly, we believe that, when opening the discussion on any regulatory proposals, the following principles should be taken into consideration for effective regulation:

  1. Understanding the platform’s digital architecture: ensuring transparency of algorithmic content curation and acceleration
  2. Favouring collective intelligence: agile regulation empowering civil society and the research community
  3. Setting the right consumer protection framework: defining and assessing online platforms’ content moderation processes
  4. Empowering users: giving users control to choose what information they see
  5. A proactive approach for building resilience: anticipating the future informational landscape

Understanding the platform’s digital architecture: ensuring transparency of algorithmic content curation and acceleration

Communication on social media is mediated by the platforms’ digital architectures[7]. While most regulatory attention to date has focused on harmful content, it is only one aspect of viral deception. The other key aspects are manipulative actors and deceptive behavior, which, together with harmful content, constitute the “ABC framework”[8]. The actors (A) and behavior (B) dimensions in particular are determined in large part by the platforms’ design, yet they remain under-investigated due to the asymmetry of information between the platforms, the regulator, and other stakeholders.

Still, the underlying architecture of platforms can have harmful consequences, which ought to be analysed. Given the impact that a new algorithmic or design feature may have on online discourse, a form of oversight is necessary. Concretely, to date, it is impossible to assess the precise extent to which deceptive posts and pages are recommended, or how much money this content generates. Greater transparency on the reach of content and online advertisements, as well as on the revenue they generate, is therefore necessary to disincentivise such strategies.

To fill the information gap between the platforms and stakeholders, regulators should have full access to machine-readable data allowing them to assess how content is distributed online and which audiences are reached by algorithmic recommendations and paid content.

Favouring collective intelligence: agile regulation empowering civil society and the research community

Academics and civil society activists regularly voice their concerns over the lack of accessible data provided by the platforms for research[9], while companies must balance this against their obligations to preserve users’ privacy. Even formalised research projects in cooperation with platforms encounter difficulties in accessing data. This was recently illustrated by the Social Science One funders warning that they would halt their research project with Facebook if the platform did not provide access to the data promised to them.[10]

Despite the limited reliable data available, civil society actors and researchers have nonetheless demonstrated the harmful impact of algorithmic recommendations and targeted advertising. For instance, AlgoTransparency[11] revealed that YouTube’s recommendation algorithm, designed to keep users on the platform for as long as possible, favors clickbait and conspiracy theories. In addition, WhoTargetsMe[12] investigates Facebook’s targeted advertising and now campaigns for specific ad transparency obligations so that users can understand why they have been targeted. Moreover, EU DisinfoLab regularly uncovers deceptive networks whose audiences have been amplified by recommendation features[13].

Regulators should have the authority to require companies to provide privacy-protected data to civil society and researchers. To enable independent monitoring and assessment, they would define a framework specifying which data should be accessible and in what format.

Setting the right consumer protection framework: defining and assessing online platforms’ content moderation processes

Current regulations require platforms to quickly remove flagged illegal content – often by automated filtering – leading to criticism over the removal of legitimate content and the negative implications this has for citizens’ freedom of speech. At the same time, we question the fairness of Facebook’s Oversight Board,[14] which raises legitimate concerns: a private body would take over the role of democratic institutions in protecting freedom of speech, while not being fully operationally independent. Critics have also pointed out that Facebook’s business model itself presents obstacles to the Board’s effectiveness.[15]

We believe regulation should not focus on the content itself, but rather focus on outlining fair and transparent processes for moderating content, which would be audited by the regulator. This implies defining clear principles and safeguards, as well as enforcement mechanisms.

Empowering users: giving users control to choose what information they see

Users should have greater autonomy and choice over the content that curation algorithms show them on the platforms. In that regard, we welcome efforts by certain platforms to allow users to deactivate recommendation algorithms[16]. This should be viewed as complementary to media and digital literacy, which is crucial for building long-term resilience in society. Ultimately, citizens must be empowered to preserve a free and fair online space.

In such a role, the regulator would aggregate third-party initiatives such as media rating and verification tools. We believe that users should be empowered to enable and disable algorithmic curation of their feeds and that self-curation features should be provided by the platforms.

A proactive approach for building resilience: anticipating the future informational landscape

New technological developments such as shallow fakes, deep fakes, and automated content generation raise strong concerns about the future of information manipulation, as do new platforms and usage patterns. These emerging trends bring new challenges to the fight against information manipulation and hate speech online, and we need to anticipate their potential consequences for the information landscape.

In close cooperation with the research community, regulators should monitor evolving disinformation tactics and anticipate emerging challenges arising from the future development of the online media industry, in order to stay ahead of them.


[1] Wu T. (2016), The Attention Merchants: The Epic Scramble to Get Inside Our Heads.

[2] The expected regulation on AI, as well as the Digital Services Act, complemented by an EU Democracy Action Plan.

[3] McNamee R. (2019), Zucked, Waking up to the Facebook catastrophe.

[4] https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=54456

[5] Vermeulen M. (2019), To regulate online content or not: is that really the question? APC.

[6] https://minefi.hosting.augure.com/Augure_Minefi/r/ContenuEnLigne/Download?id=AE5B7ED5-2385-4749-9CE8-E4E1B36873E4&filename=Mission%20Re%CC%81gulation%20des%20re%CC%81seaux%20sociaux%20-ENG.pdf

[7] https://journals.sagepub.com/doi/abs/10.1177/1077699018763307

[8] https://science.house.gov/download/francois-addendum

[9] https://blog.mozilla.org/blog/2019/04/29/facebooks-ad-archive-api-is-inadequate/

[10] https://www.ssrc.org/programs/view/social-data-initiative/sdi-statement-august-2019/

[11] https://algotransparency.org/

[12] https://whotargets.me/en/

[13] https://www.disinfo.eu/2019/09/11/suavelos-white-supremacists-funded-through-facebook/

[14] https://newsroom.fb.com/news/2019/09/oversight-board-structure/

[15] https://hbr.org/2019/10/facebooks-oversight-board-is-not-enough

[16] https://www.lifewire.com/how-to-use-twitter-timeline-algorithm-4174499