by Raquel Miguel Serrano & Maria Giovanna Sessa
The pressing question facing our diverse community today is how to effectively respond to disinformation threats and Foreign Information Manipulation and Interference (FIMI) campaigns. While there has been significant emphasis on exposure and raising awareness – driven by defenders’ continuous uncovering of new cases – the focus is now shifting towards exploring better response strategies to enhance the overall effectiveness of the fight.
Typically, the model employed for this purpose operates as a closed circuit, beginning with hoaxes, incidents, and campaigns and concluding with the implementation of countermeasures or responses.
However, the true story rarely ends after one (or several) responses: even after a debunk, deplatforming, or suspension from social networks, disinformation continues to circulate. After sanctions are imposed, threat actors find ways to evade them. Add to this the new Tactics, Techniques, and Procedures (TTPs) that attackers adopt to counter the measures taken against them, as well as the further actions that other members of the counter-disinformation community take, inspired by their colleagues’ work.
Responses, therefore, cannot be viewed as isolated elements: they are part of a continuously interacting system. Understanding a disinformation campaign as a living, evolving entity – where each response prompts new attacks and countermeasures – is essential to grasping the complexity of the problem.
Looking at the story beyond individual responses to disinformation campaigns is thus essential in the search for more effective measures. Understanding the impact of each step is crucial if we aim to design tailored and impactful responses. The key question to ask is: what happens next?
At EU DisinfoLab, we have been dedicating time to analysing responses to disinformation campaigns and FIMI. In a previous publication, we introduced a methodology for assessing the cost-effectiveness of adopted measures using a unique case study – the Doppelganger campaign, which provided an unmatched research opportunity. The primary goal of that research was to evaluate past and present responses. However, it also served a deeper purpose: to inform evidence-based future actions, strengthen responses to FIMI, and enhance capacity to tackle emerging threats.
First, we conducted an exhaustive mapping and analysis of potential responses to disinformation campaigns, building on the DISARM Blue Framework and incorporating our contributions to address identified gaps. While we acknowledge the value of the DISARM Blue Framework, we recognise its limitations and do not necessarily endorse all the measures it suggests. This process involved evaluating the cost-effectiveness of various measures while anticipating their consequences and overall impact.
Second, we developed a response-impact framework to measure and visually represent the effect of different actions. This framework evaluates the outcomes of various responses and ensures compatibility with tools like DISARM or the Kill Chain. By doing so, it addresses a gap in the community by analysing and encoding the broader narrative beyond individual responses. Additionally, the framework allows users to filter responses based on the desired impact, making it a flexible and practical tool that the community can adapt and improve. This approach empowers stakeholders to choose tailored responses suited to their specific needs.
A THREE-STEP VISION
Step 1. Response-by-response mapping and analysis
First, we identified and mapped the measures implemented against the Doppelganger campaign, using the DISARM Blue Framework as a foundation. To address its limitations, we expanded the framework with additional responses that were not initially covered (see Annex) and divided all responses into five categories:
I. Exposure-related responses: These include journalistic reporting, fact-checking, and research to uncover and counter disinformation.
II. Community engagement-related responses: Actions designed to mobilise and strengthen the defender community against disinformation.
III. Distribution-related responses: These involve actions like content takedowns, deplatforming, and reducing content visibility, often requiring collaboration with platforms.
IV. Infrastructure-related responses: Measures targeting the technical backbone of disinformation campaigns, such as domain takedowns or disruptions to content production and distribution.
V. Sanctions and legal responses: Actions aimed at limiting the resources of bad actors, often through law enforcement measures or legal sanctions.
Next, we analysed the potential cost-effectiveness of each response, focusing on their impact by evaluating five intermediate factors that can ultimately contribute to the main objective: removing the campaign from the public space. As explained previously, these factors are:
I. Increased situational awareness: Enhancing understanding of the disinformation campaign and its dynamics.
II. Impact on threat actors’ capabilities: Assessing the extent to which responses disrupt the production of disinformation content and its distribution infrastructure.
III. Capacity to trigger new responses: Evaluating the ability of actions to prompt two types of responses, i.e., mobilisation by the defender community and tactical adaptations by threat actors.
IV. Increased opportunities for attribution: Improving the ability to identify and link threat actors to their operations.
V. Deterrence: The ultimate objective is to reduce the likelihood of future campaigns by diminishing the perceived benefits or increasing the costs for threat actors.
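To illustrate how this mapping can be made machine-readable, the sketch below encodes the five response categories and the five impact factors as plain Python structures. The names are taken from the lists above; the structure itself (an enum and a list) is our own illustrative choice, not part of the DISARM Blue Framework.

```python
from enum import Enum

# The five response categories used in Step 1 (names as listed above).
class ResponseCategory(Enum):
    EXPOSURE = "Exposure-related responses"
    COMMUNITY_ENGAGEMENT = "Community engagement-related responses"
    DISTRIBUTION = "Distribution-related responses"
    INFRASTRUCTURE = "Infrastructure-related responses"
    SANCTIONS_AND_LEGAL = "Sanctions and legal responses"

# The five intermediate impact factors assessed for each response.
IMPACT_FACTORS = [
    "Increased situational awareness",
    "Impact on threat actors' capabilities",
    "Capacity to trigger new responses",
    "Increased opportunities for attribution",
    "Deterrence",
]
```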
Step 2. A response-impact framework
Building on these categories, we developed a response-impact framework that assigns a unique ID to each impact factor, making them easily identifiable within the community. This framework fills a gap in current research by visually representing the impact of incident responses and evaluating their cost-effectiveness against the five key indicators described earlier.
EU DisinfoLab’s Response-Impact Framework
EXTERNAL ID | NAME | DESCRIPTION |
---|---|---|
I. Situational awareness | ||
EUDL2001 | Increased situational awareness | Evaluates how effectively a response improves stakeholders’ understanding of the threat landscape. Key metrics include the number of stakeholders reached, the frequency of information dissemination, and the quality of information shared, both in qualitative and quantitative terms. |
EUDL2001.1 | Cooperation among stakeholders | Assesses the extent of collaboration and information exchange among stakeholders within the defender community, measured through the variety and effectiveness of collaborative efforts and communication channels utilised. |
II. Impact on threat actors’ capabilities | ||
EUDL2002 | Impact on the threat actors’ capabilities in the production or distribution of illegal and harmful content | Assesses the response’s impact on diminishing threat actors’ capacity to produce and disseminate illegal or harmful content. Metrics include reductions in content volume, disruptions to distribution channels, and increased operational costs for threat actors. |
EUDL2002.1 | Impact on the threat actors’ technical capabilities | Measures the extent to which a response disrupts or diminishes threat actors’ technical tools and resources. Metrics include the loss of software, network infrastructure, and digital assets such as social media accounts, pages, and websites. |
EUDL2002.2 | Economic impact on threat actors | Evaluates the financial impact on threat actors resulting from disrupted operations. Metrics include reduced revenue and heightened operational costs. |
III. Triggering new responses | ||
EUDL2003 | Triggering new incident responses by the defender community | Evaluates whether a response prompts further actions from the defender community. Metrics include the initiation of additional investigations, takedowns, and collaborative efforts. |
EUDL2004 | Triggering new attack patterns by the threat actors | Assesses whether the response compelled threat actors to modify their TTPs. Metrics include observed changes in TTPs and the extent of adaptation efforts by threat actors. |
IV. Attribution | ||
EUDL2005 | Increased opportunities for attribution | Assesses the response’s effectiveness in enhancing the likelihood of identifying and attributing disinformation activities to specific threat actors. Metrics include the number of successful attributions and the quality of evidence linking activities to those actors. |
V. Deterrence | ||
EUDL2006 | Dissuasive effect: Deterrence | Evaluates the degree to which incident responses deter threat actors from persisting in their activities. Metrics include a measurable decrease in disinformation campaigns and increased cost or complexity for threat actors to continue operations. |
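Because every impact factor carries a unique external ID, the framework also lends itself to a simple machine-readable encoding that other tools could reuse. The sketch below is one hypothetical way to express the table in Python: the IDs and names come from the table above, while the record structure and field names are our own illustration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class ImpactFactor:
    """One entry of the response-impact framework, keyed by its external ID."""
    external_id: str
    name: str
    parent_id: Optional[str] = None  # sub-factors point to their parent entry

# Entries taken from the table above (descriptions omitted for brevity).
RESPONSE_IMPACT_FRAMEWORK = {
    f.external_id: f
    for f in [
        ImpactFactor("EUDL2001", "Increased situational awareness"),
        ImpactFactor("EUDL2001.1", "Cooperation among stakeholders", "EUDL2001"),
        ImpactFactor("EUDL2002", "Impact on the threat actors' capabilities in the production or distribution of illegal and harmful content"),
        ImpactFactor("EUDL2002.1", "Impact on the threat actors' technical capabilities", "EUDL2002"),
        ImpactFactor("EUDL2002.2", "Economic impact on threat actors", "EUDL2002"),
        ImpactFactor("EUDL2003", "Triggering new incident responses by the defender community"),
        ImpactFactor("EUDL2004", "Triggering new attack patterns by the threat actors"),
        ImpactFactor("EUDL2005", "Increased opportunities for attribution"),
        ImpactFactor("EUDL2006", "Dissuasive effect: deterrence"),
    ]
}

# Example: look up a factor by its external ID.
print(RESPONSE_IMPACT_FRAMEWORK["EUDL2002.2"].name)  # Economic impact on threat actors
```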
Cross-mapping impact and responses
By combining steps 1 and 2, we can analyse the impact of the various response categories, as encoded in our response-impact framework. Applied to the countermeasures identified in the Doppelganger case, this framework provides valuable insights. The table below presents this evaluation, detailing the effectiveness and influence of each response category.
Response mapping:

Response-Impact Framework | EXPOSURE | COMMUNITY ENGAGEMENT | DISTRIBUTION | INFRASTRUCTURE | SANCTIONS AND LEGAL RESPONSES |
---|---|---|---|---|---|
Improvement in situational awareness [EUDL2001] | X | X | X | X | |
Cooperation among stakeholders [EUDL2001.1] | X | X | | | |
Impact on the threat actors’ capabilities in the production or distribution of illegal and harmful content [EUDL2002] | X | X | X | X | |
Impact on the threat actors’ technical capabilities [EUDL2002.1] | X | X | X | | |
Economic impact on threat actors [EUDL2002.2] | X | X | X | | |
Triggering of new incident responses by the defender community [EUDL2003] | X | X | X | X | X |
Triggering of new attack patterns by the threat actors [EUDL2004] | X | X | | | |
Increased opportunities for attribution [EUDL2005] | X | X | X | X | X |
Dissuasive effect: deterrence [EUDL2006] | X | X | X | | |
Step 3. Response design based on the desired impact
One key advantage of this model is its ability to evaluate responses individually and filter them based on the desired impact, making it a practical tool for informed decision-making. For instance, using this model, if the goal is to enhance cooperation among stakeholders, the focus would be on measures that expose disinformation campaigns and foster greater community engagement. Conversely, if the objective is to impose economic consequences on threat actors, responses targeting their infrastructure or legal actions would be prioritised. Ultimately, achieving a deterrent effect requires a strategic combination of broad community engagement, legal actions, and measures to disrupt the infrastructure of bad actors.
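To make this filtering concrete, the sketch below shows one possible way to query the framework by desired impact. The function and variable names are hypothetical, and the mapping only encodes the pairings spelled out in the paragraph above (cooperation among stakeholders, economic impact, and deterrence), not the full cross-mapping table.

```python
# Partial cross-mapping: impact-factor ID -> response categories that contribute to it.
# Only the pairings described in the paragraph above are encoded here.
IMPACT_TO_RESPONSES = {
    "EUDL2001.1": {"Exposure", "Community engagement"},                 # cooperation among stakeholders
    "EUDL2002.2": {"Infrastructure", "Sanctions and legal responses"},  # economic impact on threat actors
    "EUDL2006": {"Community engagement", "Infrastructure",
                 "Sanctions and legal responses"},                      # deterrence
}

def responses_for_impact(*impact_ids: str) -> set:
    """Return the response categories that contribute to every requested impact factor."""
    selected = [IMPACT_TO_RESPONSES[i] for i in impact_ids if i in IMPACT_TO_RESPONSES]
    return set.intersection(*selected) if selected else set()

# Example: which categories serve both stakeholder cooperation and deterrence?
print(responses_for_impact("EUDL2001.1", "EUDL2006"))  # {'Community engagement'}
```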
CONCLUDING REMARKS
This proposal represents a significant advancement in addressing FIMI campaigns by offering a comprehensive approach to encoding and visualising incident responses and their impact. By introducing additional observables for responses, it extends the existing possibilities of DISARM and allows the framework to adapt to both current and emerging needs. While initially rooted in a single case study, the methodology is scalable and can incorporate ongoing measures, new potential measures, and insights from future case studies.
Moreover, the response-impact framework moves beyond the static “campaign→response” model to investigate the subsequent developments and the interplay of events triggered by the adopted measures. This approach provides a deeper understanding of the disinformation ecosystem’s dynamic, interconnected, and evolving nature. Additionally, integrating external IDs opens opportunities for innovative visualisation on open-source platforms.
ANNEX
Responses listed in our model
Exposure-related incident responses (Doppelganger operation)
- Responses identified in the DISARM Blue Framework:
- Media exposure [C00184];
- Expose actor and intentions [C00115];
- Provide proof of involvement [C00116];
- Engage payload and debunk [C00119];
- Debunk and defuse a fake expert/credential [C00113];
- Prebunking [C00125].
- Further responses suggested by EU DisinfoLab:
- Publish and share IoCs to identify assets used by threat actors [EUDL004].
Community engagement-related incident responses (Doppelganger operation)
- Further responses suggested by EU DisinfoLab:
- Public authorities’ engagement with the case [EUDL001];
- Potential targets’ engagement with the case [EUDL002];
- Public opinion’s engagement with the case [EUDL003].
Distribution-related incident responses (Doppelganger operation)
- Responses identified in the DISARM Blue Framework:
- Content moderation [C00107] [C00122];
- Downgrade/de-amplify so the message is seen by fewer people [C00117];
- Social media source removal [C00172];
- Remove suspicious accounts [C00197];
- Deplatform account [C00133];
- Deplatform message groups and/or message boards [C00135].
- Additional responses identified in the DISARM Blue Framework (i.e., responses that have not been identified at the time of our writing but could be potentially activated using the existing framework):
- Identify and delete or rate limit identical content [C00074];
- Platform adds warning label and decision point when sharing content [C00142];
- Platform regulation [C00012];
- Reduce political targeting [C00065];
- Use advertiser controls to stem flow of funds to bad actors [C00216];
- Change search algorithms for disinformation content [C00078].
- Further responses suggested by EU DisinfoLab:
- Action on ads [EUDL005]:
  - Content moderation of ad content [EUDL005.001];
  - Publish transparent data on ad buyers [EUDL005.002];
  - Measures to ban threat actors from acquiring ads [EUDL005.003];
- Implement transparency measures for recommender systems [EUDL006];
- Addressing systemic risk on platforms [EUDL007];
- Providing access to data for research [EUDL008].
Infrastructure-related incident responses (Doppelganger operation)
- Responses identified in the DISARM Blue Framework:
- Block source of pollution [C00071];
- Block access to disinformation resources [C00070];
- Mute content [C00085];
- Redirection/malware detection/remediation [C00182];
- Remove or rate limit botnets [C00123].
- Additional responses identified in the DISARM Blue Framework:
- Take pre-emptive action against actors’ infrastructure [C00153].
Sanctions and legal responses (Doppelganger operation)
- Responses identified in the DISARM Blue Framework:
- Legal action against for-profit engagement factories [C00060];
- Use banking to cut off access [C00129];
- Unravel/target the Potemkin villages [C00162].
- Additional responses identified in the DISARM Blue Framework:
- Ban incident actors from funding sites [C00155].