by Maria Giovanna Sessa, Raquel Miguel Serrano, Ana Romero-Vicente, Joe McNamee, Inès Gentil, and Alexandre Alaphilippe, EU DisinfoLab
This short booklet gives an introduction to disinformation in the digital age, to help defend our societies from large-scale manipulation. It breaks down the problem of disinformation into its key types and key drivers, and in each case, looks at possible solutions.
What is disinformation?
Introduction
Maybe we should start with what it is not – it is not someone mistakenly sharing information that is not true, nor is something disinformation because we, the European Parliament, or a national government do not agree with it. Disinformation is the creation, presentation and dissemination of verifiably false or misleading information in order to deceive the public, for economic gain or for political purposes.
Disinformation is far from new; there have always been efforts to mislead the public. However, the shift to the “digital age” has created new challenges and new aspects that did not exist previously. The purpose of this short booklet is to give an introduction to disinformation in the digital age, in order to help defend our societies from large-scale manipulation.
Why is this a problem?
It is a problem because individuals and groups need to make decisions and, to make a decision, one needs reliable information. We need to make decisions about which candidates and political parties to vote for, how to protect our health, how to protect our planet, and so on. If we are robbed of reliable and trusted information, we are robbed of our personal self-determination and our democracy.
Very broadly speaking, disinformation is driven by two motivations – political influence and commercial gain. The most talked-about political disinformation is foreign information manipulation and interference (FIMI). There is also a wide variety of commercial motivations behind disinformation, ranging from particular industries seeking to mislead the public for their own benefit, to individuals and groups seeking to exploit the online advertising business model for profit. An example of the former is the finding of the US House of Representatives Oversight Committee that “Big Oil” had “sought to portray itself as part of the climate solution, even as internal industry documents reveal how companies have avoided making real commitments.” A famous example of the latter was the 23-year-old US graduate who, in 2016, correctly guessed that he could make fast and easy money from Google Ads by creating a fake news website and publishing viral, entirely made-up stories about electoral fraud in the 2016 US Presidential election.
This booklet breaks down the problem of disinformation into its key types and key drivers.
What can be done?
In each case, we look at possible solutions. Of course, it will never be possible to stop all disinformation, but we can fight it by recognising that it exists, that it is a major problem and that tough choices urgently need to be taken to protect our self-determination, our health, our climate and our democracies.
Types of disinformation
Political
Foreign Information Manipulation and Interference (FIMI)
Introduction
FIMI (Foreign Information Manipulation and Interference) is a term coined by the European External Action Service (EEAS), the EU’s diplomatic service, to describe state-sponsored manipulation of information. The concept focuses on the behaviour of malign actors rather than the content of the operation – as reflected in the ABC (Actors, Behaviours, Content) theoretical framework. In particular, it draws from cyber-threat intelligence analysis to study the TTPs (tactics, techniques, and procedures) employed in disinformation campaigns and takes a whole-of-society approach to mitigate them. The EEAS promoted this terminology with the aim of harmonising concepts, so that the defender community can better research and respond to foreign influence operations. The EEAS also facilitated the creation of the FIMI-ISAC (Information Sharing and Analysis Center) to share knowledge. Several projects in the European Union are currently addressing this challenge.
Why is it a problem?
FIMI both overlaps with and extends beyond disinformation: not all disinformation is FIMI, and FIMI is not limited to disinformation. The key factors of FIMI, as defined by the EEAS, are the involvement of a third country, a manipulative pattern of information (which may or may not be true), the existence of coordination, and a clear intent to harm or negatively impact values, procedures, and political processes.
FIMI campaigns are used in the context of hybrid warfare, which is increasingly being waged in the digital realm. Recent international events such as the war in Ukraine or the war in Gaza have highlighted the use of FIMI, which researchers around the world have uncovered with increasing frequency (campaigns such as Doppelganger, Portal Kombat, CopyCop, and PAPERWALL are some examples). FIMI campaigns are frequently used by Russia and China, but also by other actors seeking to manipulate or undermine confidence in democracies, institutions or governments, or to cause other kinds of damage.
FIMI campaigns are difficult to address because, as their definition indicates, they constitute ‘a mostly non-illegal pattern of behaviour’ that cannot always be tackled by legal measures, which means that other mechanisms must be found to address them. They are also a problem because real content (not necessarily disinformation) may be included in these campaigns, and tackling them can be misunderstood as deliberate censorship. Another challenge is exploring the intersections between domestic (DIMI) and foreign manipulation of information because domestic actors can also be connected with FIMI campaigns.
What can be done?
The fight against FIMI must be understood as a long-term effort and tackled accordingly. A comprehensive set of different responses is needed:
- to build and maintain the capacity to research and expose FIMI campaigns;
- to gather evidence on and prosecute the threat actors;
- to build awareness and societal resilience;
- to enhance cooperation among different stakeholders (public and private sector, civil society, academia, media, etc);
- to build a toolbox, including legal instruments; and
- to identify accurate and proportionate responses (based on evidence and real cases).
It is also important not to exaggerate the threat and to build trust in democracies, while still addressing the vulnerabilities and internal problems that malign foreign actors can weaponise.
Further reading
ATHENA is a Horizon Europe project that contributes to Europe’s defence against foreign information manipulation and interference (FIMI). It aims at early detection of FIMI campaigns, and better understanding of the behavioural and societal effects of FIMI as well as the efficacy of deployed countermeasures before, during and after FIMI campaigns.
ATHENA is funded by the European Union under the Grant Agreement 101132686 ATHENA HORIZON-CL2-2023-DEMOCRACY-01. UK participants are supported by UKRI Grant number 10107667.
Electoral disinformation
Introduction
The deliberate spread of false or misleading information to influence electoral outcomes poses a significant threat to democratic processes worldwide. Electoral disinformation can take many forms, including false claims about candidates, incorrect information about voting procedures, or fabricated news stories designed to sway public opinion. Online platforms enable direct and continuous communication with the electorate, a hallmark of modern politics. This has amplified the reach and impact of electoral disinformation, allowing it to spread quickly and extensively at minimal cost. Since elections are fundamental to democratic governance, ensuring their integrity is crucial. This makes combating electoral disinformation a top priority for governments, civil society, and technology companies.
Why is it a problem?
Electoral disinformation undermines the democratic process by distorting the information landscape that voters rely on to make informed decisions. The Doppelganger operation is an example of how malign actors – Russian in this case – exploit technologies such as AI (to manipulate visuals, generate and translate texts, or automatically spread the content) and digital infrastructure such as the internet domain name system and paid ads. Both state and non-state actors use these campaigns as tools of hybrid warfare to destabilise political systems and influence foreign policy.
Electoral disinformation does not just come from abroad. The repetition of voter fraud conspiracies can lead to episodes like the Capitol Hill riot, which showed the dramatic offline consequences of systematic exposure to political and electoral disinformation. The erosion of public trust in electoral institutions and outcomes fosters cynicism and disengagement among voters. In this regard, online platform policies addressing disinformation are often ad hoc, while attention to the problem must remain high even beyond official electoral periods. Electoral disinformation campaigns disproportionately target vulnerable individuals (e.g., women in politics) and groups (e.g., voter suppression directed at Black, Indigenous, and people of colour communities). The consequences of electoral disinformation are profound, leading to polarisation and exacerbated societal divisions. This can potentially skew election results and decrease voter turnout, causing long-term damage to the credibility of democratic institutions and their processes.
What can be done?
The Digital Services Act should be used to its maximum extent, with flanking measures being taken on a national level:
DSA:
- The systemic risk to electoral processes generated by Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs) should be thoroughly analysed, anticipated and mitigated. This includes reinforcing internal processes, ensuring algorithmic transparency, and collaborating with fact-checking organisations to quickly identify and remove disinformation.
- Maximum access to data for vetted researchers should be ensured, so that problems can be identified quickly.
- The European Commission’s guidelines on mitigating systemic risks to electoral processes should be constantly reviewed and broadened to recognise the reality that misinformation can have a nefarious effect at any time.
National measures:
- Public information campaigns should be run at the national level, to equip populations to defend themselves.
- Robust legislation to penalise the deliberate spread of false information related to elections should be put in place and implemented.
Attention to threat disruption should go beyond electoral periods, as the harms are often imposed over a longer timeframe.
Issue-based
Climate mis- and disinformation
Introduction
Climate disinformation hides the reality and consequences of climate change and the necessity for immediate action. It is prevalent on social media platforms, where greenwashing, hate speech, and other harmful content threaten public health and security. Despite EU legislative and non-legislative measures addressing disinformation, there is a lack of specific measures targeting climate disinformation, and major platforms exploit this gap and often fail to comply with the spirit of their commitments. Strengthening and enforcing European regulation and platforms’ internal policies is essential.
Why is it a problem?
Very large online platforms (VLOPs) and very large online search engines (VLOSEs) have taken varying approaches to addressing the climate-related harm that they facilitate. They have developed and strengthened measures to tackle climate disinformation through content moderation, user guidance and improving media literacy. Still, they fall short in the design of comprehensive policies to combat this issue.
The Digital Services Act (DSA) outlines VLOPs’ and VLOSEs’ responsibilities, requiring them to “identify, analyse, and assess systemic risks” linked to their services. While the DSA provides a non-exhaustive list of these risks, it overlooks specific mention of climate-related risks. Platforms capitalise on this gap, as recent studies show their failure to tackle climate disinformation effectively.
Overall, platforms provide limited data on the prevalence of climate disinformation and their capability to mitigate it. This lack of transparency makes it challenging for researchers and policy stakeholders to fully assess the scale of the problem. Additionally, unrestricted fossil fuel advertising and a lack of specific anti-greenwashing policies exacerbate the issue.
What can be done?
- Climate change disinformation should be classified as a “systemic risk” under the DSA and mis-/disinformation about the climate should be recognised as a threat to EU public health.
- Platforms’ transparency reports should include thorough and complete climate disinformation data.
Anti-vaccine and influencers
Introduction
Most very large online platforms have made significant efforts to curb the spread of health disinformation through various policies and interventions. Measures include removing misleading content, flagging false information, and promoting accurate health resources from trusted sources. Despite these efforts, health-related disinformation remains prevalent among numerous communities and online spaces in areas such as anti-vaccine claims, miracle cures, diet and weight loss fads, or misconceptions about mental health. In this landscape, non-medical experts and influencers often play a major role in spreading this disinformation.
Why is it a problem?
The pandemic forever changed our understanding of disinformation. False and misleading narratives continue to thrive on social media to this day. For instance, unscientific false claims and conspiracy theories about the COVID-19 vaccines overlap with Russian propaganda targeting European authorities and institutions. Anti-vaxxers drive conversations on a range of diseases – such as HIV and measles – and blame vaccines for neurological conditions such as autism. Moreover, copious lifestyle tips on social media offer another channel for spreading more health disinformation, promoting unhealthy or unrealistic practices lacking scientific backing, and often targeting younger people. Similarly, non-medical advice on mental health issues can do more harm than good, preventing people from getting the right treatment.
Behind this phenomenon are usually personalities who offer health advice to their audiences on TikTok, X, Facebook, Instagram, and YouTube despite lacking the professional credentials to do so. These content creators and influencers often have large followings, and social media platforms’ algorithms tend to promote their engaging content regardless of its veracity. While some trends are relatively harmless, others can significantly influence public attitudes and behaviour, including vaccine hesitancy, the resurgence of preventable diseases, loss of trust in healthcare professionals and institutions, or inciting harmful behaviour.
What can be done?
- Platforms need to enforce robust policies to protect individuals and communities. This is crucial since many people, especially younger users, use social media to seek answers without adequate critical thinking skills and information resilience.
- Policymakers should mandate social media platforms to implement stricter verification processes for health-related content.
- Recommender algorithms should be adjusted to prioritise information from trusted sources while demoting content flagged as misleading or false.
- Health misinformation should be recognised as a systemic risk under the DSA, and platforms must be held accountable for failing to mitigate risks effectively.
Gender- and identity-based disinformation
Introduction
Identity-based disinformation (IBD) – targeting individuals and groups based on their gender, gender identity, sexual orientation, and health and reproductive rights – poses significant challenges. IBD is increasingly being weaponised by malign actors involved in foreign interference and disinformation campaigns. The threat peaks during electoral periods, particularly targeting female and LGBTQI+ politicians by exploiting existing prejudices and vulnerabilities. This amplifies negative stereotypes and harmful narratives that feed into anti-democratic, hyper-conservative thinking.
Despite recognition from multiple stakeholders, responses have been insufficient, allowing disinformation to spiral into violence and hate speech. Existing regulatory frameworks, such as the Digital Services Act (DSA) or the EU Directive on combating violence against women and domestic violence, provide a foundation for action, especially since acquired rights (including the Istanbul Convention) are now under attack. However, emerging cases of IBD and inadequate mitigation efforts – for instance, at the platform policy level – highlight the need for a more comprehensive approach that systematically integrates a gender and identity lens.
Why is it a problem?
Recent developments and the increased use of digital platforms have magnified the challenges associated with IBD. This issue is pervasive, spreading false narratives and harmful stereotypes. The success of movements like QAnon – drifting into trends such as that of ‘trad wives’ – or the straw man of inoculating schoolchildren with “gender theory” exemplifies how disinformation is weaponised against vulnerable communities, deepening societal divides and threatening democratic norms. In this context, harmful content (such as disinformation) and illegal content (such as violence) must be understood as part of the same spectrum. This threatens the safety and health of entire communities and is conducted by well-organised and well-funded transnational movements.
IBD is triggered by non-compliance with traditional gender norms (e.g., women occupying previously all-male spaces). Moreover, belonging to multiple marginalised groups intensifies the negative effects of IBD. Examples abound of women in public (including online) spaces who face disinformation campaigns aimed at undermining their reputation, credibility, and even their humanity, for the ultimate purpose of silencing them.
Unfortunately, the situation is already worsening through the misuse of artificial intelligence. Deepfakes disproportionately affect women in the form of pornographic content, ‘nudification’ apps are easily accessible, and technology-facilitated gender-based violence is thriving.
What can be done?
Labelling content online is insufficient to address the complexity and scale of identity-based disinformation. A comprehensive strategy is required; IBD online should be:
- seen as an early warning system for potential offline and online violence;
- treated as a security threat of EU-wide importance, as it presents a serious and immediate threat to the safety and freedom of individuals;
- understood as a deep-seated societal issue – much bigger and more pervasive than sexism, misogyny, homophobia or transphobia – and therefore not well addressed by soft measures like media literacy.
Therefore, online platform policies addressing misinformation and user protection should explicitly incorporate the identity (and gender) dimension.
Further reading
- Gender-Based Disinformation: Advancing Our Understanding and Response
- Gender-based disinformation 101: Theory, examples, and need for regulation
- What is Gendered Disinformation?
- Monetising misogyny
- FIMI targeting LGBTIQ+ people: Well-informed analysis to protect human rights and diversity
- Networks of Dissuasion: Mapping Online Attacks on Reproductive Rights in France
Economic drivers of disinformation
Payments to creators
Introduction
It has been credibly estimated that online platforms distribute around twenty billion US dollars of advertising revenue to content creators around the world. This amount of money is both vast and growing rapidly, doubling in the last two years. Despite the huge sums at stake and the real risk of incentivising poor quality or dangerous content, there is little to no regulation, oversight or transparency.
Why is it a problem?
A lack of due diligence on who is profiting from these huge and growing streams of money has led to a bonanza for con artists and disinformation producers, and has fuelled the development of increasingly accessible and affordable inauthentic markets for the know-how, assets (such as botnets), and tools used to exploit this lucrative area of activity.
Profit-driven actors are incentivised to optimise their payout by minimising production costs and maximising distribution. This unregulated market has led directly to a proliferation of content designed for engagement and monetisation – including sensationalised and controversial commentary, deepfakes and cheapfakes, as well as illegitimately acquired or cheaply produced content.
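The incentive can be made concrete with a purely hypothetical back-of-the-envelope calculation in Python; every figure below (view count, revenue per thousand views, creator revenue share, production cost) is an illustrative assumption, not data from any platform.

# Hypothetical figures only – chosen to illustrate the incentive structure,
# not taken from any platform's published payout data.
views = 5_000_000           # views a sensational fabricated story might attract
rpm_usd = 2.0               # assumed ad revenue per 1,000 views
creator_share = 0.55        # assumed fraction of ad revenue paid to the creator
production_cost_usd = 50.0  # assumed cost of producing a low-effort fabricated piece

payout = views / 1000 * rpm_usd * creator_share
profit = payout - production_cost_usd
print(f"payout: ${payout:,.0f}, profit: ${profit:,.0f}")
# payout: $5,500, profit: $5,450 – production cost is near zero, while revenue scales with reach

Under these assumed numbers, almost all revenue is profit, which is why minimising production quality while maximising distribution is the rational strategy in an unregulated payout market.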
Authentic creators and publishers argue that competing with low-quality and inauthentic content is impossible, especially when coupled with abuse of their trademarks and copyright.
Advertisers, already suffering from an online ad market plagued with fraud, now report that they are sold ad placements that fail to meet platforms’ commitments to brand safety and suitability and carry reputational risks.
What can be done?
- Regulators should urgently review the applicability and application of existing legislation relevant to this enormous market and close any gaps in regulation.
- Where regulation does exist, its applicability to this market needs to be carefully assessed and rigorously enforced. Horizontal legislation, such as the Digital Services Act, should recognise revenue sharing as an element of all relevant systemic risks.
- Legislators must understand that the platforms have designed this broken market to maximise profits. They will not “self-regulate” to solve problems they perceive as externalities.
Recommender systems
Introduction
Recommender systems are algorithms used to rank, filter and target individual pieces of content, such as posts or advertising, to users of a platform. As these systems regulate how users find and interact with all kinds of information on a platform, they are at the heart of user experience and of the business model of these corporations. For social media companies, such as YouTube, X (formerly known as Twitter) and Facebook, the business model is advertising driven by data collection, and data collection comes from user engagement.
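As a simplified illustration of this logic, here is a minimal sketch in Python – not any platform's actual system, and with purely hypothetical weights – showing how an engagement-driven recommender ranks candidate posts by their predicted ability to generate reactions, regardless of their accuracy:

from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_clicks: float    # model's estimated probability of a click
    predicted_comments: float  # estimated probability of a comment
    predicted_shares: float    # estimated probability of a share

def engagement_score(post: Post) -> float:
    # Hypothetical weights; real systems tune them to maximise time spent and
    # ad impressions. Nothing in this objective rewards accuracy or quality.
    return (1.0 * post.predicted_clicks
            + 3.0 * post.predicted_comments
            + 5.0 * post.predicted_shares)

def rank_feed(candidates: list[Post]) -> list[Post]:
    # The feed is simply the candidate pool sorted by expected engagement.
    return sorted(candidates, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("measured-analysis", 0.10, 0.01, 0.005),
    Post("outrage-bait", 0.25, 0.08, 0.06),
])
print([p.post_id for p in feed])  # the provocative post is ranked first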
Why is it a problem?
We find outrage more interesting than agreement, and provocation more engaging than conciliatory content. As Mark Zuckerberg himself said, “no matter where we draw the lines for what is allowed, as a piece of content gets close to that line, people will engage with it more on average.” This creates a problem of incentives for our online experience – platforms make more money from recommender systems that promote content that is inflammatory and divisive, at huge societal and human cost.
It is therefore logical and predictable that on YouTube, for example, recommended videos were 40% more likely to be reported as harmful than videos users found via specific searches. Similar results were found in 2022, when Amnesty International investigated Meta’s role in the human rights violations against Rohingya in Myanmar. The report found that Meta’s algorithm proactively amplified and promoted content that incited violence, hatred and discrimination against the Rohingya population. In Europe, an investigation of the 2021 German elections documented under-moderation of illegal content and disinformation, as well as amplification of divisive content, e.g., via automated recommendations for political pages, groups and profiles spreading hate, violence and disinformation or by placing paid ads for said content. This mechanism particularly benefited right-wing extremist parties.
This effect is worsened by payments to content creators who generate such “engaging” content. It is worsened even further by this phenomenon making it harder for traditional quality news outlets to generate engagement and revenue.
What can be done?
- Article 38 of the Digital Services Act requires platforms to offer a recommender system that is not based on profiling, although this provision is unclear and the rigorous enforcement it needs does not yet exist.
- Ultimately, these systems need to be recognised as central to the systemic risks (“risks stemming from the design or functioning of their service and its related systems, including algorithmic systems”) which are core to the regulation of very large online platforms (VLOPs) and very large online search engines (VLOSEs).
- Extensive analysis and proposals – such as “Fixing recommender systems” by Panoptykon, the Irish Council for Civil Liberties and People vs. Big Tech – have been devoted to helping the European Commission carry out its DSA enforcement function, and should be fully utilised.
Technical drivers of disinformation
The nexus between disinformation and cybercrime
Introduction
Disinformation campaigns and cybercrime have become intertwined phenomena, posing significant threats to national security, public health, and democratic processes. State actors and criminal groups often employ similar tactics, techniques, and procedures (TTPs) and leverage critical infrastructures to conduct both cyberattacks and disinformation campaigns. The interplay between state-sponsored hacking, disinformation campaigns and the erosion of public trust means that there is an urgent need to enforce more robust countermeasures.
Why is it a problem?
The convergence of cybercrime and disinformation campaigns amplifies their overall impact. The primary difference between disinformation attacks and cyberattacks lies in their targets: cyberattacks aim to compromise IT infrastructure using tools such as malware, viruses and Trojan horses, while disinformation attacks exploit cognitive biases to manipulate public perception and opinion. These attacks often overlap, as cyberattacks can support disinformation efforts and vice versa. Specifically, cyberattacks can gather data for targeted disinformation campaigns, while disinformation can act as reconnaissance for future cyberattacks.
For instance, a Distributed Denial of Service (DDoS) attack floods the targeted digital services and networks with excessive requests, overwhelming the system and preventing legitimate access. Similarly, a well-coordinated disinformation campaign fills broadcast and social channels with false information and noise. Another common threat is hack-and-leak operations, which involve the theft and public exposure of sensitive information about candidates, campaigns, and other political figures.
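The parallel can be illustrated with a minimal, hypothetical detection sketch in Python: whether the “requests” are connections hitting a server or accounts pushing near-identical posts, the tell-tale signal is an abnormal volume from individual sources within a short time window. The window length and threshold below are illustrative assumptions, not values from any real monitoring system.

from collections import Counter

WINDOW_SECONDS = 60      # observation window (illustrative assumption)
FLOOD_THRESHOLD = 1000   # events per source considered abnormal (illustrative assumption)

def flag_flooding_sources(events: list[tuple[str, float]], now: float) -> list[str]:
    # events are (source_id, timestamp) pairs: IP addresses sending requests,
    # or accounts posting a given narrative. Sources whose volume within the
    # recent window exceeds the threshold are flagged for further review.
    recent = [source for source, timestamp in events if now - timestamp <= WINDOW_SECONDS]
    counts = Counter(recent)
    return [source for source, n in counts.items() if n >= FLOOD_THRESHOLD]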
Other notable examples include the 2020 European Medicines Agency hack and the dual use of IT by the Daesh group for propaganda and cyberattacks. Additionally, the Doppelganger campaigns and activities carried out since the full-scale invasion of Ukraine are a demonstrable example of the dual use of IT capabilities for cyber warfare and information warfare.
A major problem is that policymakers and regulators have treated these attacks separately, have deployed different countermeasures, and even have teams working in silos to protect and defend against these attacks. This lack of coordination creates gaps that bad actors can exploit.
What can be done?
- Strengthening cybersecurity infrastructure by adopting advanced threat detection systems, enhancing incident response protocols, and promoting information sharing among stakeholders;
- Treating disinformation as a cybersecurity issue in order to develop effective countermeasures against cognitive hacking (such as psychological inoculation, pre-bunking and debunking).
Better awareness of this overlap is also needed among policymakers and regulators, in order to:
- Develop comprehensive strategies to protect our digital and societal infrastructure from these evolving threats;
- Reduce the opportunities for cybercriminals to weaponise disinformation;
- Educate the public about the risks of disinformation and promote AI and media literacy to mitigate the impact of these campaigns;
- Fully implement regulations such as the DSA, GDPR and the AI Act to ensure that online platforms play their part in ensuring robust data protection, security and content moderation.
Further reading
- Challenges to effective EU cybersecurity policy Briefing Paper
- How Russian hackers targeted NATO’s Vilnius summit
- AI and cybersecurity: How to navigate the risks and opportunities
- Countering Disinformation Effectively: An Evidence-Based Policy Guide
- Hybrid nature of modern threats for cybersecurity and information security
- Cybersecurity in the face of information warfare and cyberattacks
- Propaganda and Disinformation as a Security Threat
- Disinformation and Russian Issues
AI-manipulated and generated mis- and disinformation
Introduction
The development of artificial intelligence (AI) technologies has long been a challenge in the disinformation field. It allows content to be easily manipulated and can accelerate its distribution. Multiple stakeholders have recognised the problem, but have not sufficiently addressed it. Existing or forthcoming regulations and other instruments provide a basis for some actions: the Code of Practice on Disinformation (commitment 15), the DSA (articles 34 and 35 on systemic risks), and the AI Act (articles 5.1(a) and 5.1(b) in particular). However, emerging cases of disinformation created with AI and platforms’ inadequate mitigation measures show that a more holistic approach is needed.
Why is it a problem?
Recent technical developments and the growing use of generative AI systems have vastly increased the disinformation challenges described in this document. AI is making it easier to modify and create fake texts, images, and audio clips that can appear real. Despite offering opportunities for legitimate purposes (e.g., art or satire), AI content is also widely generated and disseminated across the internet, causing – intentionally or not – harm and misperceptions. However, the risks do not only reside in creating realistic-looking content: the possibilities of production at scale and increased distribution are further challenges. This technology also allows the creation of customised content that can better reach specific target audiences.
Indeed, disinformation with artificial intelligence is not only a technical issue but also opens the door to plausible deniability of all types of content and makes true content vulnerable to challenge. It thus poses an almost existential challenge to credibility. Additional problems lie in the opacity of the data that feeds these systems, conditioning their outputs, and in the public’s misunderstanding of this technology. This leads to it being used for inappropriate purposes, thus increasing confusion.
Stakeholders are reacting at different paces to this challenge, and some attempts at collaboration have been made, but much remains to be done. Another challenge is that local solutions are not valid for a global issue.
What can be done?
- Do not rely primarily on labelling of AI-generated content (the approach adopted by many platforms). Labelling alone is insufficient to address a problem that can change the nature and scale of disinformation; while it may be desirable, it cannot solve the problem on its own.
- Adopt a more holistic view of the problem – e.g., by robustly addressing the amplification of harmful AI content by recommender systems and by tracking and addressing societal harms.
- Identify the best stakeholder(s) to address specific problems and develop more shared responsibility for defining and implementing solutions.
- Make maximum use of media literacy programmes so that the pitfalls of the technology can be understood and the production of harmful content can be avoided.
Further reading
- EU DisinfoLab’s report on platforms’ policies on AI misinformation
- EU DisinfoLab’s webinar: Beyond Deepfakes: AI-related risks for elections
- OpenAI paper: Disrupting deceptive uses of AI by covert influence operations
- Meta’s Adversarial Threat Report, First Quarter 2024 (use cases of AI)
The veraAI project aims to develop and build trustworthy AI solutions in the fight against disinformation.
veraAI is co-funded by the European Commission under grant agreement ID 101070093, and by the UK and Swiss authorities.
Societal solutions for disinformation
The importance of media and information literacy in countering disinformation
Introduction
Media literacy – the ability to access, analyse, evaluate and create media – equips individuals with the essential skills to critically evaluate the wide range of information they are exposed to on a daily basis. This is why it is essential to stress the need for media and information literacy to maintain an informed and democratic society.
Why is it a problem?
Social media algorithms designed to maximise engagement often promote sensational and misleading content, making it difficult for individuals to distinguish between reliable information and misleading or false information. A lack of media literacy further complicates this issue, as individuals struggle to critically assess the credibility of information, identify biases, and differentiate between reliable sources and disinformation. Additionally, the development of “AI-Cyclopedia”, or AI outputs created without any insight into the source data on which they are based, makes it more difficult to access, identify and evaluate the sources used to generate content and, therefore, to assess its credibility.
Consequently, people may form opinions and make decisions based on inaccurate information, affecting their roles as informed members of society and consumers. Given the constantly evolving media environment, continuous learning and adaptation in media literacy strategies are essential.
Advancing media literacy is essential to enable people to develop their critical thinking skills, make informed decisions and actively participate in society. However, addressing the challenges of disinformation and rapidly evolving media landscapes requires a comprehensive approach, including not only the integration of media and digital literacy programmes into curricula, but also robust policies and regulatory frameworks, debunking and pre-bunking strategies, and (of course) funding.
What can be done?
- Integrating media literacy into educational curricula at all levels is essential. This includes understanding the motivations behind different media, recognising biases, and verifying sources. Schools and universities should offer courses on media and digital literacy, critical thinking, and responsible media consumption. For example, Finland’s comprehensive media literacy strategy serves as a model, showing the effectiveness of integrating media literacy into the national curriculum.
- Regulations and policies like the Audiovisual Media Services Directive (AVMSD) require member states to promote media literacy and mandate video-sharing platforms to provide effective media literacy tools, and the European Parliament’s Resolution on Media Literacy in a Digital World (2018) calls for coordinated action among member states. Expanding and enforcing such regulatory frameworks can enhance public resilience to disinformation.
- Additionally, programmes such as the European Commission’s Digital Education Action Plan (2021-2027) and the Media Literacy for All Programme support projects that promote media literacy, and could be developed further.
- Governments and organisations should, in parallel, launch campaigns to raise awareness about the importance of media and digital literacy. These campaigns can provide practical tips on how to spot disinformation and encourage the public to question the information they encounter. Community-based workshops and seminars can also provide hands-on training and support. Finally, encouraging the use of fact-checking services can help individuals verify the accuracy of information. Ultimately, social media platforms should integrate such tools to alert users about questionable content, such as synthetic media / deepfakes detection tools.
Further reading
- Digital and media literacy: connecting culture and classroom
- Promoting media literacy learning – a comparison of various media literacy models
- Transforming Learning: The Power of Educational Technology
- Social media literacy: A conceptual framework
- EDMO The Importance Of Media Literacy In Countering Disinformation
- Media literacy – European Commission