With the emergence of blogs and YouTube, quickly followed by platforms such as Facebook, Twitter, and Instagram, social media became the main content distributor and source of information for modern societies. This marked a breakthrough for democracy: it engendered an unprecedented degree of pluralism. Anyone could now publish content and potentially reach millions – a power previously reserved for traditional media.

Echo chambers and filter bubbles

While pluralism is rightly cherished in a democracy, it has also given rise to online echo chambers: users can select and process information according to their individual belief systems and worldviews. With this comes confirmation bias, a process whereby people ignore information that doesn't correspond to their beliefs, which results in polarisation. These phenomena are further compounded by filter bubbles.

Filter bubbles are created by platforms' recommendation algorithms: online platforms select the information they show users based on data acquired from their search history, liked pages and groups, past click behaviour, location, and so on.

According to Eli Pariser (2011), who coined the term, such algorithmic personalisation has a crucial impact on the quality of the online public sphere: filter bubbles can create a digital informational barrier around people, preventing them from seeing opposing viewpoints.
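This feedback loop can be illustrated with a toy model (all item names, topics, and scoring rules here are hypothetical, and no real platform's algorithm is this simple): a recommender that ranks content by how often the user has clicked on that topic before will, over repeated rounds, narrow its recommendations to a single viewpoint.

```python
# Toy sketch of a filter bubble feedback loop.
# All names, topics, and the scoring rule are hypothetical.

def recommend(click_history, catalogue, k=2):
    """Rank catalogue items by how often the user clicked their topic before."""
    topic_counts = {}
    for item in click_history:
        topic_counts[item["topic"]] = topic_counts.get(item["topic"], 0) + 1
    # Items from frequently clicked topics score highest.
    return sorted(catalogue,
                  key=lambda item: topic_counts.get(item["topic"], 0),
                  reverse=True)[:k]

catalogue = [
    {"title": "A", "topic": "politics-left"},
    {"title": "B", "topic": "politics-left"},
    {"title": "C", "topic": "politics-right"},
    {"title": "D", "topic": "science"},
]

history = [{"title": "A", "topic": "politics-left"}]
for _ in range(3):  # each round, the user clicks the top recommendation
    recs = recommend(history, catalogue)
    history.append(recs[0])

# After a few rounds, every recommended item shares one topic:
print({item["topic"] for item in recommend(history, catalogue)})
# prints {'politics-left'}
```

The point of the sketch is the loop, not the scoring: because past clicks determine future recommendations, which in turn generate the next clicks, the system converges on one viewpoint without anyone deciding to exclude the others.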

The attention economy

The attention economy is a business model based on the massive collection of our personal data and subsequent profiling to deliver curated content designed to keep us engaged online for as long as possible. The longer we're online, the more revenue online platforms make from targeted advertising. Platforms profile users based on their likes, dislikes, and engagement, offering brands the perfect opportunity to target audiences with specific content.

Filter bubbles, online echo chambers, and the process of advertising on online platforms collectively make it relatively easy for specific information to reach the right audience online. As Kelly and François (2018) note, "targeting polarising echo chambers with disinformation is a lot easier and more effective than forcing messages into the mainstream". Consequently, the more polarisation and audience segmentation there is, the greater the likelihood that false or manipulative information spreads – a dynamic reinforced by the attention economy.

The information disorder

We avoid using the term "fake news" as it is inadequate to describe just how complex the phenomenon is. As First Draft also note, it's a term that's "appropriated by some politicians to describe news outlets whose coverage they find disagreeable". Instead, we prefer the terms coined by First Draft:

Disinformation: Information that is false and deliberately created to harm a person, social group, organisation, or country.
Misinformation: Information that is false, but not created with the intention of causing harm.
Malinformation: Information that is based on reality, used to inflict harm on a person, organisation, or country.

The sheer versatility of mis-, mal-, and disinformation has given rise to new threats and trends. At present, conspiracy theories are entering the mainstream and satire is being used to excuse disinformation. Moreover, disinformation has gone beyond false information and now encompasses manipulated content, including deepfakes, readfakes, and decontextualised audio-visual material, to name a few. As new technologies develop, we foresee that disinformation may well become more sophisticated and believable.

Challenges

The fight against disinformation poses many challenges; the following are particularly worth noting:

  • Emerging disinformation campaigns and strategies are not detected fast enough to effectively mitigate their impact;
  • Our democratic institutions and electoral laws were not built to handle the changes that the public sphere has undergone in the last fifteen years;
  • Disinformation is a financially viable business; therefore, actions must be taken to disincentivise disinformation;
  • Online platforms are hijacked by a wide range of actors to spread disinformation, from individual users and foreign influence campaigns to national politicians.

Disinformation transcends borders. It's a global problem, yet culturally and contextually specific.

Looking ahead

If we're to properly tackle disinformation and restore trust in the online information ecosystem, action needs to be collective and responses need to be tailored to each stakeholder involved, including but not limited to:

  • Private actors: Providing the research community with access to data
  • National and supranational governance: Defining the right regulatory framework that provides solutions to disinformation while preserving freedom of speech
  • Citizens: Strengthening media and digital literacy
  • Media: Rigorous and independent journalism
  • Civil society and academia: Innovative research and community building