March 30, 2023

By Maria Giovanna Sessa, EU DisinfoLab

The author would like to thank Francesco Poldi, Rita Jonušaitė, Nicolas Hénin, Raquel Miguel Serrano and Ana Romero Vicente for their contributions to some of the definitions.

As is often the case, the field of disinformation has its own specific terminology, which is continuously enriched with new words. These terms can be hard for newcomers to grasp, and are sometimes understood differently by different people. Therefore, we decided to compile a list of over one hundred and fifty recurring terms in the context of disinformation.

For each word, you’ll find a concise definition with a concrete example, to ensure that the community has a solid, yet accessible, tool to navigate the field. Some terms are not uniquely related to disinformation, but can be used to facilitate its proliferation.

In this domain, First Draft’s information disorder glossary and Data & Society’s Lexicon of Lies were sources of inspiration when drafting this repository, which we aim to update when necessary.

The entries are listed below, grouped by letter (a pdf version of the glossary is also available):


  • ABCDE: A framework, created by Camille François, for describing and analysing influence operations, initially known as ABC: manipulative Actors, deceptive Behaviour, and harmful Content. To this, Alexandre Alaphilippe added a fourth element: Distribution. James Pamment proposed another ABCDE framework (Actors, Behaviour, Content, Degree, Effect), where the “Degree” component tackles distribution and audience, and the “Effect” component addresses the impact.
  • Active measures: A term commonly used to refer to the political warfare activities of the Soviet Union and the Russian Federation. These activities often rely on a combination of propaganda, disinformation, political pressure, and sometimes covert military activity to influence the course of world events. An example is Operation Infektion, a 1980s KGB disinformation campaign that instilled the idea that the United States had invented AIDS.
  • Agenda 2030: A set of seventeen Sustainable Development Goals to be reached by 2030, adopted by the United Nations in 2015 and designed as a “blueprint for peace and prosperity”. In the disinformation landscape, however, the concept is often tied to conspiracy theories or COVID-19 plots.
  • Agitprop: Abbreviated from Russian “agitatsiya propaganda” (agitation propaganda). It is a political strategy designed to provoke the audience to mobilise, protest, or take a particular action. Widely used by the Bolsheviks, it can be disseminated through print, film, audio, and other media. Recently, some agitprop actions have provoked public confusion and debates about their link to disinformation.
  • Algorithm: This definition is applied to the field of social media, indicating a mathematical set of rules and instructions that systematically sorts, filters, and recommends platform content for users based on how likely they are to like and interact with it.
  • Alternative facts: After Trump’s inauguration in 2017, White House press secretary, Sean Spicer, insisted that “this was the largest audience to ever witness an inauguration”. When confronted, senior aide to the President, Kellyanne Conway, replied “our press secretary gave alternative facts to that”. She coined a new concept by which arguments are used to support claims that do not conform to objective reality.
  • Alternative media: Also called “reinformative” media, it refers to outlets that spread content aimed at counterbalancing allegedly biased mainstream news coverage. Many of these websites sit at the margins of the political spectrum.
  • Alt-tech: The term refers to social media platforms and Internet service providers that have found popularity among the alt-right, far-right, and others with extreme or fringe opinions, due to their actual or perceived laxity in moderating content compared with mainstream Internet service providers.
  • API (Application Programming Interface): A set of programming code through which different web applications and server software interact, essentially acting as a facilitator. PayPal payment buttons, a fintech service that allows users to connect personal financial information to their PayPal account, are one example. PayPal payments are embedded into many websites requiring financial transactions; by interacting with PayPal through its API, these websites never gain direct access to the user’s bank or card information.
  • AI (Artificial Intelligence): The theory and development of computer systems able to mimic problem-solving and decision-making capabilities of the human mind. However, there is still an open debate about this definition, and others are available here.
  • Astroturfing: The attempt to create an impression of widespread spontaneous support for a policy, individual, or product. In reality, it is initiated and controlled by a concealed group. The term is derived from “AstroTurf”, a synthetic carpeting resembling natural grass: a play on the notion of fake support hiding behind the appearance of a grassroots effort. In 2013, Samsung resorted to astroturfing against a competitor’s product (the HTC One), paying for fake negative online comments to convince people that it was highly defective.
  • Attribution: Clear identification of the actors behind a certain piece of disinformation or information campaign, which requires extreme caution.
  • Automation: The process of designing a machine to complete a task with little or no human direction; used – among other things – to manufacture the amplification of disinformation.
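Some of the entries above are easier to grasp with a concrete sketch. The API definition, for instance, can be illustrated with a toy payment interface in Python. All names here (PaymentAPI, charge) are invented for illustration and have nothing to do with PayPal’s actual API; the point is simply that, like the payment buttons described above, the caller triggers a transaction without ever seeing the underlying card details.

```python
# A toy illustration of the API idea: the caller interacts only with a
# small public interface, never with the underlying data.
# All names (PaymentAPI, charge) are invented for illustration.

class PaymentAPI:
    """Mediates access to a user's account without exposing its details."""

    def __init__(self, card_number, balance):
        # Private state: callers of the API never see these directly.
        self._card_number = card_number
        self._balance = balance

    def charge(self, amount):
        """Public endpoint: attempt a payment, return only the outcome."""
        if 0 < amount <= self._balance:
            self._balance -= amount
            return {"status": "approved", "amount": amount}
        return {"status": "declined", "amount": amount}

# A merchant site uses the API and learns only the outcome,
# not the card number or remaining balance.
api = PaymentAPI(card_number="4111-0000", balance=50)
print(api.charge(30))  # {'status': 'approved', 'amount': 30}
print(api.charge(30))  # {'status': 'declined', 'amount': 30}
```

The merchant code above never touches `_card_number` or `_balance`: that encapsulation, mediated by a public interface, is what the glossary entry means by an API “acting as a facilitator”.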


  • Bad Sources: This 2023 EU DisinfoLab investigation digs into anti-Pakistan/China influence operations in India, and into how non-existent organisations and fake personae regularly quoted by Indian news agency Asian News International (ANI) are also picked up by well-known digital portals.
  • Bioweapons conspiracy: A conspiracy theory according to which bioweapons are being secretly created for world destruction, e.g., presenting COVID-19 as a virus artificially made in a Wuhan laboratory. In the context of the war in Ukraine, the alleged presence of bioweapon labs on Ukrainian territory was used to justify the Russian invasion.
  • Bot: A software program that performs automated, repetitive, pre-defined tasks, typically imitating or replacing human behaviour. In this regard, Bot Sentinel is an interesting platform developed to detect and track troll-bots and untrustworthy Twitter accounts.
  • Botnet: A network of bots that act in coordination and are generally operated by one person or group (known as the “bot-herder”). These are often used in Distributed Denial of Service (DDoS) attacks and phishing.
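The core of the Bot entry above (automated, repetitive, pre-defined tasks imitating human behaviour) can be sketched in a few lines of Python. This is a hypothetical, self-contained illustration: a real bot would talk to a platform’s API, but the keyword-triggered, no-human-in-the-loop logic is the same.

```python
# A minimal sketch of the automation behind a social media bot: a program
# that scans posts and reacts with pre-defined replies, with no human in
# the loop. Hypothetical example; real bots interact with platform APIs.

REPLIES = {
    "election": "Don't trust the results!",
    "vaccine": "Do your own research!",
}

def bot_respond(post: str):
    """Return an automated reply if the post matches a trigger keyword."""
    for keyword, reply in REPLIES.items():
        if keyword in post.lower():
            return reply
    return None  # no trigger matched: the bot stays silent

posts = ["Election day is coming", "Nice weather today", "New vaccine study"]
for p in posts:
    print(p, "->", bot_respond(p))
```

Run at scale across thousands of accounts and coordinated by a single operator, this kind of trivial script becomes the botnet described above.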


  • Catfishing: A form of fraud where a person creates a fake identity to target a particular victim on social media, especially on dating apps for romance scams.
  • Chatbot: A computer program that can chat with its users. Chatbots simulate conversations by sending automatic or predefined messages, synthesising voice, or offering decision buttons.
  • Cheapfake: The term was coined by Britt Paris and Joan Donovan to indicate altered media that has been manipulated without advanced processing technologies (differently from deepfakes), e.g., by speeding or slowing footage. An example is a viral video of Nancy Pelosi, which was slowed down to imply she was intoxicated.
  • Chemtrails: The false belief that long-lasting condensation trails are made of chemical agents sprayed by aircraft for hidden criminal purposes that range from weather modification to human population control.
  • CIB (Coordinated Inauthentic Behaviour): A term coined by Facebook to describe the use of multiple actors engaging in violations of the platform’s community standards, e.g., misrepresenting themselves through the creation of fake accounts or artificially boosting the popularity of content. The EU DisinfoLab created a three-part “CIB Detection Tree”: coordination, source, and impact assessment.
  • Clickbait: The practice of writing sensationalised, misleading, or false headlines in order to attract clicks on a piece of content and therefore encourage traffic.
  • Climate delayism: A systematic and coordinated strategy of baselessly questioning climate action in order to slow it down or postpone it indefinitely.
  • Climate doomism: The conviction that the battle against climate change is already lost and that climate actions or policies are therefore pointless.
  • Code of Practice on Disinformation: A tool (launched in 2018 and updated in 2022) that brings together major tech platforms, players in the advertising industry, fact-checkers, research and civil society organisations (signatories to the Code) in the fight against disinformation through a set of voluntary commitments and measures. The signatories to the Code decide which commitments they sign up to. It is their responsibility to ensure the implementation of these commitments.
  • Confirmation bias: The tendency to interpret information in a way that confirms what one already believes. For instance, during an election, people tend to believe information that paints the candidate they support positively, while dismissing information that portrays them negatively.
  • Conspiracy theory: The belief that a small group of powerful people are making secret arrangements to advance their personal interests, consequently causing harm to society. For examples, scroll down to “Great Replacement”, “Kalergi plan”, “QAnon”, etc.
  • Content moderation: The organised practice of screening user-generated content online to determine the appropriateness of the content for a given site, locality, or jurisdiction. Actions range from reducing content visibility to content and user suppression. Content moderation techniques include manual pre-moderation, manual post-moderation, reactive moderation, distributed moderation, or automated moderation.
  • Cookie: Information stored on an Internet user’s computer for session management (e.g., recall their individual login information and preferences), personalisation (e.g., using the recorded information for targeted ads), and tracking purposes (e.g., keeping items in online shopping carts and suggesting similar products).
  • Crawler: A web crawler, spider, or search engine bot is software that performs specific functions to extract information from websites for the purpose of web indexing.
  • Crypto-funding: The practice of collecting donations via crypto-currency. It is preferred by some disinformative websites as it is perceived as anonymous, anti-systemic, and censorship-proof, as illustrated in our research.
  • Cyber-squatting: The practice of buying a domain name for the sole purpose of preventing someone else from buying it, also known as “domain squatting” (see Doppelganger). This is usually done to resell the domain name at a higher price to a buyer who wishes to own the domain.
  • Cyborgs: Hybrid accounts that combine bot automation with the occasional human input.
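The Crawler entry above can be made concrete with a minimal sketch of what a crawler does at its core: parse a page and extract the links it would queue for further crawling and indexing. To keep the example self-contained and runnable, it parses an HTML string with Python’s standard library instead of fetching a live URL.

```python
# A minimal sketch of a crawler's core step: extract every link from a
# page, as a search engine bot would before queueing them for indexing.
# Self-contained: parses an HTML string rather than fetching a live URL.
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects the href of every <a> tag, as a crawler would."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

page = '<html><body><a href="/about">About</a> <a href="/news">News</a></body></html>'
parser = LinkExtractor()
parser.feed(page)
print(parser.links)  # ['/about', '/news']
```

A real crawler repeats this fetch-parse-extract loop over the discovered links, respecting robots.txt and rate limits along the way.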


  • Dangerous speech: Dangerous speech is any form of expression (e.g., speech, text, or images) that can increase the risk that its audience will condone or commit violence against members of another group. Online disinformation and hate speech constitute dangerous speech when they include elements that can lead to offline discrimination and brutality.
  • Data anonymisation: The process of protecting private or sensitive information by erasing or hashing identifiers that connect an individual to stored data, so that the data is retained but the source remains anonymous.
  • Data mining: The process of analysing big volumes of data by combining tools from statistics and artificial intelligence to recognise useful patterns. For instance, data mining techniques enable companies to predict future trends and make more informed business decisions.
  • DDoS (Distributed Denial of Service): An attack in which a targeted application or server is flooded with requests, often by a botnet, causing it to crash. The objective is to deplete the target’s bandwidth, preventing valid requests from being processed. In 2018, the largest DDoS attack recorded at the time generated peak incoming traffic of an unprecedented 1.35 Tbps and took GitHub, the largest software development platform on the Internet, offline for 15-20 minutes.
  • Deepfake: An image or footage that has been convincingly altered and manipulated through some form of machine learning (differently from cheapfakes) to misrepresent someone as doing or saying something that was not actually done or said. Soon after the Russian invasion of Ukraine, a deepfake of Ukrainian President Zelensky allegedly calling for surrender spread worldwide.
  • Deep learning: A subset of machine learning based on neural networks (i.e., models made up of interconnected information-processing units) with three or more layers. These neural networks attempt to simulate the behaviour of the human brain, allowing them to learn from large amounts of data, that is, to adapt and modify their structure based on both external data and internal information. Learning can be supervised, semi-supervised, or unsupervised.
  • Deep state: The conspiratorial conviction that an unidentified secret network of non-elected government officials and private entities operates extra-legally and in coordination to influence and enact government policy. Similar concepts are those of “shadow government” and “state within the state”.
  • Denial: The refusal to admit the truth of a concept or an event that is supported by the majority of scientific or historical evidence, such as the existence of a pandemic or a war. We have written extensively about COVID-19, monkeypox, Ukraine war, and climate change denial. Historically, one of the best-known cases is Holocaust denial (see definition).
  • Deplatforming: The removal, blocking, or banning of an account or content from an online platform due to a policy violation (see also content moderation).
  • Digital humans: A conversational type of AI, created using state-of-the-art computer-generated imagery and designed by teams of expert animators and visual effects specialists. Among them are virtual influencers, such as Lil Miquela and Imma.
  • Disinformation: Information that is false and is disseminated intentionally to cause harm.
  • Disinformation entrepreneurs: Actors who exploit major events, such as the war in Ukraine, to spread false content or propaganda for ideological, reputational, or financial gains. Our study shows how dormant and new YouTube channels exploited the war in Ukraine to spread pro-Russian disinformation.
  • Disinformation-for-hire: A growing industry in which private marketing, communications, and public relations firms are paid to sow discord by spreading false information and manipulating content online. A recent Forbidden Stories piece recounts how “digital influence mercenaries” were paid to spread online gender-based disinformation against a journalist.
  • Doppelganger: A Russia-based influence operation network that has been operating in Europe since at least May 2022. It uses multiple “clones” of authentic media that operate through different yet similar Internet domains, reproduce the same designs, and target users with fake articles, videos, and polls.
  • Dork: Also known as “dork query” or “Google dork query”, it is a search string or custom query that uses advanced search operators to find information not readily available on a website. Here is a list of dorks to search Google.
  • Doxing: The act of publishing private or identifying information about a person or organisation online, with malicious intent.
  • DSA (Digital Services Act): A ground-breaking legislation on Internet safety and platform accountability that regulates digital services (from simple websites to Internet infrastructure services and online platforms) operating in the European Union market or delivering services to European Union users, and is a part of the Digital Services Package. The DSA entered into force in late 2022 and will become applicable to all services in early 2024. It will apply to VLOPs and VLOSEs (see definitions) earlier, several months after their designation. The DSA will create a stronger incentive structure for companies to tackle disinformation, thanks to the harmonisation of regulatory oversight and the introduction of due diligence obligations for online platforms. However, it has various limits, such as an extensive focus on illegal content and the fact that only VLOPs and VLOSEs are obliged to assess and mitigate systemic risks.
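The hashing step mentioned in the Data anonymisation entry above can be sketched in a few lines of Python. This is an illustrative fragment only (the key name and token length are invented, and real anonymisation pipelines must also deal with quasi-identifiers): it shows how a direct identifier can be replaced with a keyed, irreversible token so that records remain linkable to each other but not back to the person.

```python
# A minimal sketch of pseudonymisation by hashing, as mentioned under
# "Data anonymisation": direct identifiers are replaced with a keyed hash
# so records stay linkable to each other, but not back to the person.
# Illustrative only; real pipelines also handle quasi-identifiers.
import hashlib
import hmac

SECRET_KEY = b"keep-this-key-out-of-the-dataset"  # hypothetical key

def pseudonymise(identifier: str) -> str:
    """Replace an identifier with a keyed, irreversible token."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

record = {"email": "jane@example.org", "pages_visited": 42}
record["email"] = pseudonymise(record["email"])
print(record)  # the email is now an opaque token; the visit count survives
```

Using a keyed HMAC rather than a plain hash matters: without the secret key, anyone could hash a list of known emails and match the tokens back to individuals.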


  • Echo chamber: A closed ecosystem in which participants only encounter beliefs that amplify or reinforce their pre-existing beliefs on various topics. Echo chambers are a direct consequence of filter bubbles.
  • Encrypted messaging: Encryption converts human-readable plaintext into so-called ciphertext (i.e., encrypted text transformed from plaintext using an encryption algorithm) to ensure secrecy. For example, end-to-end encryption ensures only the sender and the receiver can read or listen to what is sent.
  • EMFA (European Media Freedom Act): A proposal for a regulation to strengthen media freedom and plurality in Europe. It deals with media ownership transparency, funding for public service media, spyware against journalists, and media content online, among other issues. The last of these is of concern to the counter-disinformation community, as it has opened the door for the media exemption (see definition) to return in the legislative process. If adopted, the exemption would create a massive loophole for disinformation and undo the progress achieved in countering disinformation in recent years.
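The plaintext/ciphertext round trip described in the Encrypted messaging entry above can be made concrete with a toy cipher. To be clear: the XOR scheme below is NOT secure and is shown only to make the vocabulary tangible; real messengers use vetted algorithms such as AES or the Signal protocol.

```python
# A toy illustration of the plaintext -> ciphertext -> plaintext round
# trip described under "Encrypted messaging". This XOR scheme is NOT
# secure; it only makes the terminology concrete. Real end-to-end
# encrypted messengers use vetted algorithms (AES, the Signal protocol).
from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR each byte with the key; applying it twice restores the input."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

key = b"shared-secret"                    # known only to sender and receiver
plaintext = b"meet at noon"
ciphertext = xor_cipher(plaintext, key)   # unreadable without the key
recovered = xor_cipher(ciphertext, key)   # decryption is the same operation

print(ciphertext != plaintext, recovered == plaintext)  # True True
```

The essential idea carries over to real systems: only holders of the key can turn ciphertext back into plaintext, which is what end-to-end encryption guarantees for the sender and receiver alone.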


  • Fabricated content: Content that is 100% false, designed to deceive and do harm. To illustrate this, Hitler never said that Black people are the “true Hebrews” as some claimed on social media. This is part of First Draft’s typology to classify mis- and disinformation.
  • Fact-checking: The process of verifying information to promote the veracity and correctness of reporting and statements. Counting over a hundred verified signatories, the International Fact-Checking Network (IFCN) created a Code of Principles for fact-checking organisations.
  • Factoid: A piece of information or news that is repeated so often that it is believed to be true. An example is the belief that the Great Wall of China is visible from the moon.
  • Fake news: False or misleading information presented as news. Although it is sometimes inaccurately used as a synonym for disinformation, the term was popularised by Donald Trump, who exploited it to cast doubt upon credible news.
  • False connection: When headlines, visuals, or captions do not support the content. In early 2022, photos from Gaza were reshared online with captions claiming to show explosions in Ukraine. This is part of First Draft’s typology to classify mis- and disinformation.
  • False context: Genuine content that is shared with false contextual information. For instance, the authentic news that a nurse had fainted after receiving the COVID-19 vaccine in late 2020 was shared online with false contextual information that it was due to the lethality of the vaccine, rather than a vagal reaction. This is part of First Draft’s typology to classify mis- and disinformation.
  • Filter bubble: A state of intellectual isolation that can result from personalised searches when a website algorithm selects what information a user would like to see based on information about the user, enabling a self-confirming feed exclusively based on content that fits the user’s preferences.
  • FIMI (Foreign Information Manipulation and Interference): The European External Action Service (EEAS) defines it as “a mostly non-illegal pattern of behaviour that threatens or has the potential to negatively impact values, procedures and political processes. Such activity is manipulative in character, conducted in an intentional and coordinated manner. Actors of such activity can be state or non-state actors, including their proxies inside and outside of their own territory”.
  • Firehosing: A propaganda technique by which many messages are broadcast rapidly, repetitively, and continuously over multiple online channels without considering truthfulness or consistency.
  • Freedom Convoy: A series of protests and blockades against COVID-19 vaccine mandates and containment measures, which began in early 2022 in Canada and inspired similar protests abroad.
  • Freedom of speech: The power or right to express one’s opinions without censorship, restraint, or legal penalty. It is often weaponised to reject content moderation, although disinformation limits the targeted group’s possibility to practice their freedom of speech.


  • GAN (Generative Adversarial Network): A class of machine learning methods in which two neural networks are trained competitively in the context of a zero-sum game. This framework allows a neural network to learn how to generate new data with the same distribution as the data used in the training phase. For example, it is possible to obtain a neural network capable of generating hyper-realistic human faces.
  • Gaslighting: A form of psychological manipulation in which the abuser attempts to sow self-doubt and confusion in their victim’s mind. For example, Russia insistently denied that the “little green men” who appeared in Crimea in 2014 were Russian soldiers in unmarked uniforms.
  • GDPR (General Data Protection Regulation): A regulation in European law on data protection and privacy in the European Union and the European Economic Area. Its primary aim is to enhance individuals’ control and rights over their personal data and to simplify the regulatory environment for international business.
  • Gender-based disinformation: The spread of deceptive or inaccurate information and images against women – especially those in positions of power and visibility. It frames them as untrustworthy, unintelligent, emotional, or sexual to alter the public’s understanding of their track records. The intent is immediate political gain, as well as discouraging women from pursuing political careers or leadership roles.
  • Gender ideology or gender theory: Originally concerned simply with understanding and accepting cross-cultural similarities and differences in human views on women, men, and alternative gender identities, the concept was adopted by a global movement to articulate opposition to gender equality, reproductive rights, sexual education, and LGBTQ+ rights.
  • Great Replacement: A far-right conspiracy theory according to which white European populations are being demographically and culturally replaced with non-white peoples (especially from Muslim countries) through mass migration, demographic growth, and a drop in the birth rate of white Europeans.
  • Great Reset: An economic recovery plan launched by the World Economic Forum (WEF) after COVID-19. The term also refers to a growing online conspiracy theory that casts the WEF as a global elite using the pandemic to seize control of the global economy and impose radical social change.


  • Hack-and-leak: A tactic that combines a cyber-attack and an information operation, also falling into the category of malinformation. The leaked contents may be authentic, fabricated, doctored or a mix of all these (for an example, see “MacronLeaks”).
  • Hacktivism: The act of hacking or breaking into a computer system, for politically or socially motivated purposes.
  • Harmful content: Content that does not always strictly fall under legal prohibitions but that might nevertheless have harmful effects, such as disinformation.
  • Hate speech: Discourse that expresses hate or encourages violence towards a person or group based on inherited characteristics such as race, religion, sex, or sexual orientation. On 9 December 2021, the European Commission adopted a Communication prompting a Council decision to extend the current list of ‘EU crimes’ in Article 83(1) TFEU to hate crimes and hate speech.
  • Hoax: A deception or falsehood, which we use in general terms to indicate a single piece of mis- or disinformation.
  • Holocaust denial and distortion: Also referred to as “holo-hoax”, it is an antisemitic conspiracy theory falsely asserting that the Nazi genocide of Jews is a myth, an exaggeration, or was not committed by the Nazis.


  • Illegal content: Information items that are not compliant with a given legislation, such as hate speech, terrorism, incitement to violence, child abuse material, or intellectual property breaches. Disinformation is harmful but not necessarily illegal.
  • Imposter content: The impersonation of genuine sources (e.g., see Doppelganger). This is part of First Draft’s typology to classify mis- and disinformation.
  • Indian Chronicles: A massive operation targeting international institutions, such as the EU and the UN, to serve Indian interests, unveiled by EU DisinfoLab in 2019 and 2020. Active in Brussels and Geneva in producing and amplifying content to undermine (primarily) Pakistan, the network resurrected dead media, think-tanks, NGOs, and experts.
  • Influence operations: A term used primarily in the context of military operations, and sometimes by social networks, to indicate the deliberate and coordinated attempts of unidentified actors to manipulate the public debate using inauthentic accounts and inaccurate information.
  • Infodemic: A rapid and far-reaching spread, online and offline, of an excessive amount of both accurate and inaccurate information about an event, such as a disease outbreak.
  • Information disorder: A collective term coined by Claire Wardle to refer to disinformation, misinformation, and malinformation.
  • IRA (Internet Research Agency): A Russian company linked to oligarch Yevgeny Prigozhin, engaging in online propaganda and influence operations on behalf of Russian business and political interests (e.g., during the 2016 U.S. elections and through Project Lakhta).
  • Italygate: A pro-Trump, QAnon-affiliated conspiracy theory, which alleges that the 2020 United States presidential election was rigged using satellites and military technology to remotely switch votes from Donald Trump to Joe Biden from the U.S. Embassy in Rome.


  • Junk site: A website – also known as a “content farm” – that contains large quantities of low-quality (and often false and hyper-partisan) content, either created or aggregated from other websites for the purpose of improving its search engine rankings. To be classified as such, an outlet needs to fulfil three out of five criteria: poor professionalism, sensationalistic style, lack of credibility, high bias, and use of counterfeit content.


  • Kalergi plan: A far-right, antisemitic conspiracy theory, which denounces an alleged plot to mix white Europeans with other races via immigration (see also Great Replacement).
  • Kompromat: Damaging information about a prominent or visible person, which may be used for negative publicity, blackmail, or extortion. In 2016, a sex tape featuring former Russian Prime Minister Mikhail Kasyanov was released by the Kremlin-friendly television channel NTV to destroy his reputation.


  • Lawful but awful: A speech or action that cannot be prohibited by law (including the terms of service of platforms) but that profoundly violates many people’s sense of decency, morality, or justice.
  • Love jihad: This Islamophobic conspiracy theory, also known as “Romeo jihad”, claims that Muslim men groom Hindu women for conversion to Islam via marriage, as part of a plan to dominate India through demographic growth and replacement. It shares similarities with the antisemitic Kalergi plan conspiracy.


  • MacronLeaks: Two days prior to the 2017 French elections, over 20,000 emails related to Macron’s campaign were leaked in a failed attempt to disrupt the vote. The attack was promoted on Twitter by an army of trolls and fake accounts (bots) in a hack-and-leak operation.
  • Malign actors: General term – also known as “malicious actors” – used to describe those who intentionally create or spread disinformation.
  • Malinformation: Information that is based on reality but is used to harm or threaten a person, an organisation, or a country (e.g., see “doxing”).
  • Manipulated content: When genuine information or imagery is manipulated to deceive, for instance in the form of a doctored photo, video, or text. This is part of First Draft’s typology to classify mis- and disinformation.
  • Media exemption: A legal provision that would prevent VLOPs and VLOSEs from downranking, deleting, or even labelling any content coming from a press publication, regardless of whether a given post is actively peddling hateful, patently false, or otherwise harmful content. Although the media exemption was finally rejected in the DSA, discussions on the European Media Freedom Act (EMFA) now include an extended version of it.
  • Meme: An amusing or interesting visual item (e.g., photo, screenshot, cartoon, or video) that spreads widely online. In the context of disinformation, memes may be used to mislead or to spread false information in a humorous or culturally relevant way, especially when the audience believes that what they are seeing is true.
  • Metaverse: A virtual-reality space in which users can interact with a computer-generated environment and other users.
  • Micro-targeting: A marketing strategy that employs users’ data (e.g., collected via cookies) to segment them into groups for content targeting. It has been used for malicious purposes, especially during elections, to target voters with personalised political advertisements. In 2010, Cambridge Analytica targeted Black youth voters in Trinidad with a campaign that discouraged them from voting.
  • MIL (Media and Information Literacy): The Moscow Declaration on Media and Information Literacy (2012) defines it as “a combination of knowledge, attitudes, skills, and practices required to access, analyse, evaluate, use, produce, and communicate information and knowledge in creative, legal and ethical ways that respect human rights”. High levels of MIL contribute to society’s information resilience.
  • Militant accelerationism: The term defines a right-wing terrorist ideology that aims to bring about the collapse of liberal, democratic, and capitalist societies by accelerating existing conflicts or perceived processes of decay. This can be done both through attempts to manipulate public discourse and through violent means, according to a recent report.
  • Misinformation: Information that is false, but believed to be true by those disseminating it. It differs from disinformation in the absence of an intention to mislead or harm. For instance, during the pandemic, many people shared doctored images of wild animals flourishing in quarantined cities, thinking they were true.
  • Misleading content: Misleading use of information to frame an issue or an individual. A recurrent hoax in Italy is that “migrants receive 30 Euros per day”: a misleading statement drawn from a 2014 document reporting a call for bids for infrastructures hosting migrants, whose spending for the reception of asylum-seekers should not exceed 30 Euros per day per person. This is part of First Draft’s typology to classify mis- and disinformation.


  • New World Order: A conspiracy theory about a secretive power elite with a globalist agenda (i.e., the Illuminati) conspiring to eventually rule the world through an authoritarian one-world government. On this topic, our investigation explored how several online scammers exploit this conspiracy.


  • Online platform: A digital service that uses the Internet to facilitate interactions between two or more separate but interdependent users (whether they are companies or private individuals).
  • OSINT (Open-Source Intelligence): The collection and analysis of data gathered from publicly available sources to produce actionable intelligence.


  • Perception hacking: A small-sized intrusion that triggers an oversized psychological effect once it enters the public debate. It gives the impression that certain actors or issues on the agenda are more powerful than they actually are. For instance, the news that Russian and Iranian groups had broken into the 2020 U.S. election data system ahead of elections was exaggerated, when in fact they had simply obtained publicly available information.
  • Phishing: A fraudulent practice, which is usually part of a hacking attempt, consisting of sending messages – usually emails and direct messages – purporting to be from reputable companies in order to induce individuals to reveal personal details, such as financial information.
  • PII (Personally Identifiable Information): Information that, when used alone or combined with other relevant data, can identify an individual, including direct identifiers (such as passport information) and quasi-identifiers (such as race).
  • Pizzagate: A viral conspiracy theory during the 2016 U.S. elections that preceded QAnon, according to which a Washington pizzeria provided cover for a paedophile ring linked to members of the Democratic Party.
  • Plandemic: A conspiracy theory claiming that the COVID-19 pandemic was planned. It spread through a homonymous documentary starring Judy Mikovits, who is known for her discredited medical claims.
  • Post-truth: A situation in which emotions and beliefs, rather than facts, shape public opinion. Experts flag the Brexit referendum and Trump’s election as evidence that we live in a post-truth era, given the disproportionate role that disinformation had in these campaigns.
  • Propaganda: True or false information spread to persuade an audience, often with a political connotation. For example, white propaganda uses accurate, albeit selectively presented, information from identified sources.
  • Pwned: A term that originated in video game culture and means “owned”, in the sense that someone’s personal data has been violated and its confidentiality compromised.


  • QAnon: A U.S. far-right conspiracy theory that originated in 2017 on 4Chan, based on the interpretation of fabricated claims (called “Q drops”) made by an anonymous individual or individuals known as “Q”. Followers believe that the Trump administration was secretly fighting a cabal of paedophiles made up of public figures, who would be massively arrested on a day known as “the Storm”.
  • Querdenker: Founded in 2020 in Germany by Michael Ballweg, it is a loose, heterogeneous movement (literally meaning “lateral thinkers”) protesting against lockdowns and vaccination during the pandemic. It is supported by a variety of groups, including right-wing extremists, other anti-vax movements, and conspiracy theorists. The German Office for the Protection of the Constitution has put the movement under observation because of concerns that it may be trying to delegitimise the state. Indeed, some of its followers were recently involved in a coup plot led by the Reichsbürger, another movement that denies the legitimacy and sovereignty of the German institutions.
  • Questionable-cause logical fallacy: The fallacious idea that “correlation implies causation” and thus, two events occurring simultaneously are assumed to have a cause-and-effect relationship. This fallacy is at the basis of conspiracy thinking.


  • Radicalisation: A phased and complex process by which an individual or a group embraces a radical ideology or belief that accepts, uses, or condones violence, including acts of terrorism, to reach a specific political or ideological purpose.
  • Report: On social media, it is the process through which users can ask for content in violation of the platform’s policies to be removed or its access restricted. Content categories that can be reported vary across platforms (here are the ones for Telegram).
  • Reptilians: A conspiracy theory according to which the world is controlled by shapeshifting reptilian aliens (i.e., “lizard people”) who take on human form and gain political power to manipulate human societies.
  • Reverse image search: A process that allows users to find other Internet venues where an image (photo or key frame) has been posted. For example, tools such as WeVerify, Yandex, or TinEye let users check whether a picture has been manipulated or is used out of context.


  • Satire or parody: These may turn into disinformation when the receiver of the message does not understand the transmitter’s ironic intent and takes the message for authentic. There is thus no intention to cause harm, but the potential to fool, as our research unveils. This is also part of First Draft’s typology to classify mis- and disinformation.
  • Scam: A fraudulent or deceptive act or operation, usually via email or private message. A popular variant is the so-called “Nigerian scam”, which involves promising the victim a share in a large sum of money or a payment in return for a small up-front payment, allegedly used to obtain the large sum.
  • School shooting denial: Some U.S. conspiracy theorists claim that numerous school shootings that occurred in the country over the years (e.g., Sandy Hook, Uvalde, etc.) were false flag operations staged by the U.S. government.
  • Scraping: Web scraping, web harvesting, or web data extraction is the process of extracting data from a website. Such programs are made to emulate human browsing on the web. While web scraping can be done manually by a user, the term generally refers to automated processes implemented using a specifically crafted web crawler.
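A scraper's core job is parsing a page's markup and pulling out structured data. A minimal sketch using only Python's standard-library `html.parser` on an inline HTML snippet (a real crawler would fetch pages over the network and follow the links it collects):

```python
from html.parser import HTMLParser

class LinkScraper(HTMLParser):
    """Collects the href of every <a> tag, the kind of data a crawler extracts."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

# Inline sample page, so the sketch runs without a network connection.
html = '<p>See <a href="/about">about</a> and <a href="https://example.org">example</a>.</p>'
scraper = LinkScraper()
scraper.feed(html)
print(scraper.links)  # ['/about', 'https://example.org']
```

Running the same loop over thousands of fetched pages, rather than one string, is what turns this parsing step into automated web scraping.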
  • Sealioning: The bad-faith practice of pursuing people with persistent questioning, often demanding evidence for basic and easily retrievable information or for unrelated peripheral points. This is carried out with a loudly-insisted-upon commitment to reasonable debate, disguised as a sincere attempt to learn and communicate. The goal is to exhaust the target’s patience and make them look unreasonable.
  • SEO (Search Engine Optimisation): The process (coming from marketing) of improving a website’s visibility on search engines to help attract visitors. When this process is conducted through means that violate the search engines’ terms of services, it is known as “black hat SEO”.
  • Sock puppet: A fictitious online identity created specifically to deceive, i.e., a fake persona. Sock puppet accounts differ from catfishing as the former are short-lasting, not very detailed, and not necessarily conceived for malign intent.
  • Spam: Unsolicited or irrelevant online messages, typically sent to a large number of users for the purpose of promoting, advertising, or scamming an audience. A fun fact: the word comes from an iconic 1970 sketch by Monty Python’s Flying Circus.
  • State-controlled media: Outlets that are under editorial control or influence of the state or government (e.g., RT in Russia). These are different from state-owned media, whose editorial freedom is guaranteed by their governance structure.
  • Subvertising: The practice of making spoofs or parodies of corporate and political advertisements. For example, in Germany, Die Partei authored a fake election poster criticising the CDU’s conservative stances. It showed CDU candidate Sylvia Pantel, the CDU logo, and the slogan “Politics for all white men and their housewives”.
  • Synthetic media: Also known as “AI-generated media”. It is a catch-all term for the artificial production, manipulation, and modification of data and media by automated means, especially using artificial intelligence algorithms, such as for the purpose of misleading people or changing an original meaning (e.g., deepfakes).


  • Technological optimism: The belief that problems such as pollution, resource depletion, and overpopulation can be solved entirely through the proper application of advanced technology. The position is often adopted in conjunction with climate change denialism and delayism.
  • Terrorgram: A network of Telegram channels and accounts that subscribe to and promote militant accelerationism (see definition). Ideologically neo-fascist, these channels regularly share instructions and manuals on how to carry out acts of racially-motivated violence and anti-government, anti-authority terrorism.
  • TFGBV (Technology-Facilitated Gender-Based Violence): The United Nations Population Fund defines it as digital violence that is “committed and amplified through the use of information and communications, technologies or digital spaces against a person based on gender. It is facilitated through the design and use of existing as well as new and emerging technologies (both hardware and software). It is always evolving”. It often overlaps with gender-based disinformation.
  • ToS (Terms of Service): A document stating details about what a service provider is responsible for as well as user obligations that must be adhered to for continuation of the service.
  • Troll: A user who intentionally antagonises others online by posting inflammatory, insulting, or disruptive content to get attention, upset, or provoke. Community members fighting against these trolls have started calling themselves “elves”.
  • Troll army or troll factory: An institutionalised group of Internet trolls that seeks to interfere in political opinions and decision-making. According to the Computational Propaganda Research Project, to win the 2016 elections, Philippine President Duterte spent the equivalent of hundreds of thousands of Euros to fund troll armies that spread favourable propaganda and targeted opponents.
  • TTPs (Tactics, Techniques, and Procedures): This is the term used by cyber-security professionals to describe the behaviours, processes, actions, and strategies used by a malign actor to develop threats and engage in cyber-attacks. Tactics are the high-level description of an actor’s behaviour. Techniques are a more detailed, medium-level, description of a behaviour in the context of a tactic. Procedures are the low-level, highly detailed description in the context of a technique.


No entry


  • Vaccine hesitancy: A refusal or delay in acceptance of vaccines despite the availability of vaccine services and supporting evidence. The minority of people adhering to anti-vaccine activism are called “anti-vaxxers”.
  • Virality: The tendency of an image, video, or piece of information to be circulated rapidly and widely from one Internet user to another, regardless of its authenticity.
  • Vishing: A combination of ‘voice’ and ‘phishing’, it indicates a phone scam that uses social engineering tactics to persuade victims to provide personal information, typically with the goal of accessing financial accounts. Users are often tricked into believing that their bank account was compromised or that they received an unmissable offer.
  • VLOPs (Very Large Online Platforms): According to the DSA (see definition), a VLOP is an online platform with at least 45 million monthly active users, e.g., Facebook, Twitter, or YouTube.
  • VLOSEs (Very Large Online Search Engines): According to the DSA, an online search engine that has at least 45 million monthly active users, e.g., Bing, Google, or Yahoo.
  • Voter fraud conspiracy: Unproven claims that regular democratic elections were rigged, which started with the “Stop the Steal” campaign following Donald Trump’s defeat during the 2020 US elections. Similar trends were reported in Brazil, France, and Germany.
  • VPN (Virtual Private Network): A non-physical network to which access is provided upon authentication and authorisation. It is used to encrypt the user’s network traffic in a way that does not expose information regarding their IP address and geolocation. VPNs are banned in China, while their use surges in countries that censor the Internet, such as Russia and Iran.


  • -washing: A deceitful marketing strategy that consists in pretending to defend a socially desirable cause to improve one’s reputation rather than actually backing it up with genuine action. Examples include greenwashing (when an organisation gives the appearance of being environmentally conscious without any substance), wokewashing (appearing to promote social justice), purpose-washing (appearing to promote a cause-based purpose), and genderwashing (appearing to promote gender equality).
  • Web tracker: A piece of code that gets executed by the browser each time a user visits the webpage that contains it. Web trackers gather data on how users interact with websites, such as the number of visits, the time spent on the page, the scrolling speed, and other relevant information that allows web administrators to know more about their public and thus target specific audiences. More information on what trackers are and how they work is available here.
  • Website defacement: A form of cyber-attack on a website or web page that changes its visual appearance, modifying or replacing the hosted content.
  • Whataboutism: The tactic of responding to an accusation or difficult question by making a counter-accusation or raising a different issue. Since the outbreak of the war in Ukraine, the solidarity of Western populations has often been criticised with “what about” arguments emphasising the lack of comparable responses during extra-European conflicts.
  • White supremacy: The conspiratorial and radical belief that white people constitute a superior race and should therefore dominate society, typically to the exclusion or detriment of other racial and ethnic groups.
  • Word camouflage: Often used to avoid content moderation, this practice (also known as “leetspeak”) consists in replacing standard letters with numerals or special characters that resemble the letters in appearance. Our study analyses its use around COVID-19.


  • Xuanchuan: A Chinese term that describes a misdirection strategy on social media in which coordinated posts flood the discussion with positive messages or attempts to change the subject. This leads dissenting users to retreat from a discussion that they perceive to have become an echo chamber.


No entry


  • Zombie-NGO: Expression used for the first time in EU DisinfoLab’s “Indian Chronicles” investigation to indicate fake or defunct organisations that malign actors present as real and still active for deceptive purposes.
  • Zoom bombing: The unwanted and disruptive intrusion of Internet trolls into a video-conference call, generally displaying offensive behaviour and obscene material. Zoom fixed the issue by enhancing security features (i.e., enabling a waiting room, providing passwords, allowing the host to approve or block user entries), but the unpleasant phenomenon might still occur.


  • 5G conspiracy: The unscientific belief that 5G technology affects human health, increasing the possibilities to contract diseases, such as COVID-19.