Recommendations on how to incorporate disinformation into the EU’s Cybersecurity Strategy
This paper provides the EU DisinfoLab's feedback on the EU's updated cybersecurity strategy, the Joint Communication on the EU's Cybersecurity Strategy for the Digital Decade (16 December 2020). It draws on our experience investigating sophisticated disinformation campaigns run by state-backed and non-state actors, on recent publications, and on ongoing discussions with public authorities and civil society partners in the European Union (EU), the UK, and the US on how to counter this evolving threat.
Overview
The EU’s “Cybersecurity Strategy for the Digital Decade” is a compelling attempt to link the foreign, digital, penal, and economic dimensions of the EU’s cybersecurity policy; it is therefore essential that disinformation be firmly included in this frame, especially now. Over the course of the coronavirus pandemic, disinformation and cyberattacks have grown in parallel, exposing existing fragilities and novel risks. For end-users and cybersecurity professionals alike, the expanding attack surface and the sheer quantity of data make distinguishing safe from unsafe content more difficult. Meanwhile, it remains all too easy to create and maintain assets for cybercrime and disinformation campaigns (single-use email addresses, fraudulent domains and accounts, etc.). The EU’s Cybersecurity Strategy must reflect an evolved understanding of what constitutes abusive and illegal behavior. The EU should recognise high-impact coordinated disinformation campaigns as cyberattacks. This recognition would allow it to establish the frameworks needed for effective attribution and dissuasion, first and foremost by improving information sharing and situational awareness among stakeholders from both the disinformation and cybersecurity fields, as well as governments, the private sector, and the technical community.
A. Context: Why Disinformation is a Cybersecurity Issue
Drawing on our research into coordinated disinformation campaigns and our own experience as an NGO in the field, we wish to highlight four areas of convergence between disinformation and cybersecurity of relevance to EU policymakers: the “terrain” on which disinformation is distributed (the social web and the internet stack, networking infrastructure, routing services); the “tactics” that increasingly combine disinformation with the cyberattack delivery package; the “targets”, since victims of cyberattacks are often simultaneously victims of disinformation; and what we might call the “temptations”, i.e. the lucrative potential of both disinformation campaigns and cyberattacks.
- Terrain: While there is much focus on disinformation across major social media platforms, disinformation is an inherently distributed phenomenon. Disinformation campaigns continue to make use of networking infrastructure and routing services, leveraging different levels of the internet stack. As EU DisinfoLab’s recent investigations have demonstrated, social media platforms often serve as gateways and amplifiers of disinformation websites. In this way, disinformation and cybersecurity implicate many of the same members of the private sector and the internet technical community.
- Tactics: There is significant overlap between disinformation and cybersecurity in the tools and methods of attack. Disinformation is increasingly part of the cyberattack delivery package, used to deliver malware by manipulating people’s fears and heightened emotions (for instance the deployment of “fearware”, a subset of phishing lures that rose in prominence during the pandemic and relies on anxieties and informational deficits). The continuous proliferation of hack-and-leak operations, as well as the coordination of hybrid tactics (illustrated in the Sandworm case), demonstrates this convergence. There is also significant convergence between disinformation campaigns and the tactics used in cybercrime, for example via illegal dark web transactions, illegally obtained documents, and various kinds of fraud.
- Targets: Disinformation campaigns and cyberattacks can cause similar harms and are sometimes combined to reach the same targets. While a data breach can compromise information security, so can the manipulation of data. We saw an example of this related to Covid-19 vaccines early this year, when hackers stole confidential documents from the European Medicines Agency (EMA), a European Union regulatory body, in order to sow mistrust in the Pfizer-BioNTech vaccine. Meanwhile, so-called “anti-democracy attacks” and “cyber influencing attacks” like media manipulation and astroturfing in the context of elections illustrate the hybrid nature of interference in democratic processes.
- Temptations: Hacking, cybercrime and influence operations are lucrative endeavors, often outsourced to skilled professionals. While individuals and businesses may have increased their readiness for ransomware attacks, disinformation strategies like defamation and extortion are now being used to cause reputational damage and seek profit. These activities all have strong financial incentives and as yet insufficient consequences, due in part to the challenges of attribution but also to the lack of dissuasive/restrictive measures.
B. Secure the Network: Disinformation & the Network and Information Security (NIS) Directive
Acknowledging the Disinformation Threat in NIS
The reformed Network and Information Security (NIS) Directive, once adopted, offers an opportunity to foster a more sophisticated understanding of disinformation within the cybersecurity community while introducing new and stricter obligations for companies to ensure adequate cyber preparedness and response. Elections are an exemplary area where the EU has committed itself to a “comprehensive approach”. We are confident that efforts to secure the next EU elections will foreground collaboration between the disinformation and information security communities. This comprehensive approach is also necessary to secure other critical infrastructure and processes. The case of 5G cell towers across Europe has exemplified the need to secure public perception: infrastructure must be resilient not only to cyberattacks and data breaches, but also to strategic disinformation.
Stricter transparency for the Domain Name System
Under the reformed NIS Directive, top-level domain (TLD) registries would be considered essential entities, along with other actors understood to be critical to “the integrity of the internet”. We agree with this perspective, and welcome the obligations that the new directive would put in place for maintaining accurate and up-to-date registration data. Access to accurate registration data (WHOIS data) is needed for disinformation research and response, including fact-checking and digital verification. However, the Directive’s notion of “legitimate access seekers” remains too vague: vetted disinformation researchers working in the public interest and in compliance with the GDPR should be granted lawful access to these data, as well as to the historical data needed for investigations. Such vetted researchers could obtain a trusted-notifier status to facilitate information sharing with the technical community about suspicious domains. Furthermore, the EU must also motivate a higher level of scrutiny among domain name registrars for actors who purchase domains in bulk (over 30 domains at once), register under proxies, or register under false names, as these behaviors are frequently associated with abusive disinformation activities. Currently, DNS abuse is understood to cover malware, botnets, phishing, pharming, and certain kinds of spam. We would make the case that fraudulent news websites and fraudulent domain name registrations that show a clear aim to disinform end-users constitute technical abuse of the DNS, not merely website content abuse. In this view, we would encourage members of the networking community to form a co-regulatory body for information sharing, standard setting, and collaboration with civil society.
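To illustrate the kind of registrar-level scrutiny we are calling for, below is a minimal, purely illustrative sketch in Python of how registration records might be screened for the three risk indicators named above: bulk purchases by a single registrant, proxy-shielded registrations, and placeholder or false registrant names. The record fields, threshold, and placeholder list are our own assumptions for illustration, not a description of any existing registrar or WHOIS tooling.

```python
from collections import defaultdict
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Registration:
    """A simplified, hypothetical registration record (real WHOIS/RDAP data is richer)."""
    domain: str
    registrant: str           # registrant name or organisation
    email: str
    uses_privacy_proxy: bool  # registered via a privacy/proxy service

BULK_THRESHOLD = 30                              # "over 30 domains at once"
PLACEHOLDER_NAMES = {"john doe", "test", "n/a"}  # illustrative placeholders only

def flag_registrations(records: List[Registration]) -> Dict[str, List[str]]:
    """Group domains by the risk indicators they trigger; a flag means 'review', not 'act'."""
    flags: Dict[str, List[str]] = defaultdict(list)

    # Indicator 1: bulk registration by the same registrant/email pair.
    per_registrant: Dict[tuple, List[str]] = defaultdict(list)
    for r in records:
        per_registrant[(r.registrant.lower(), r.email.lower())].append(r.domain)
    for domains in per_registrant.values():
        if len(domains) > BULK_THRESHOLD:
            flags["bulk_registration"].extend(domains)

    for r in records:
        # Indicator 2: registration hidden behind a privacy/proxy service.
        if r.uses_privacy_proxy:
            flags["proxy_registration"].append(r.domain)
        # Indicator 3: obviously false or placeholder registrant names.
        if r.registrant.strip().lower() in PLACEHOLDER_NAMES:
            flags["suspect_registrant_name"].append(r.domain)

    return dict(flags)

# Example: a single record triggering the proxy and placeholder-name indicators.
sample = [Registration("news-example-01.eu", "John Doe", "contact@mail.example", True)]
print(flag_registrations(sample))
```

Any such flag should only trigger human review rather than automated action, and an operational version would need to respect the GDPR constraints on registration data discussed above.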
C. Address the Nexus Between Disinformation and Cybercrime
While disinformation is usually understood as ‘harmful but legal’ content, disinformation campaigns often rely on illegal or unregulated practices, for instance the use of darknet forums and marketplaces. As we have witnessed since the outbreak of Covid-19, darknet marketplaces have been used to peddle false information and fraudulent equipment and remedies, with serious consequences for public health and for confidence in online trade. Before Covid-19, our research uncovered the black-market sale of social media accounts. The illegal sale of IP addresses and SIM swapping to hijack high-value or credible identities are also common practices in disinformation operations.
Given this nexus between disinformation campaigns and cybercriminal behavior, we welcome the announcement that Europol and ENISA will expand their capacity to monitor and address certain disinformation threats. Disinformation should also be integrated within the cybercrime agenda under the Security Union Strategy to ensure competence and capacity in this area across member states. Finally, given the lucrative potential of disinformation, agencies like Europol, ENISA, and FATF should be vigilant to the financial transactions behind disinformation campaigns.
Know-Your-Business-Customer (KYBC) requirements should be expanded to platforms, services, and marketplaces that are at particular risk of financing disinformation in order to identify and prevent this behavior. KYBC requirements should not be limited to the obligations placed on platforms through the Digital Services Act (DSA), but should also be extended to business customers of infrastructure services such as domain name registrars and hosting providers.
D. Safeguard the Guardians: Support for Civil Society
The EU must remain vigilant to the cybersecurity needs of civil society actors tackling disinformation day-to-day. As highlighted in the EU DisinfoLab’s report “The Many Faces Fighting Disinformation”, Europe’s civil society is rising to the disinformation challenge with new types of expertise, but these actors also face novel challenges such as cybersecurity risks and online reputational and harassment campaigns. The EU must provide additional cybersecurity support for these actors in line with its ambition to become digitally sovereign. This could come in the form of grants, software, training, or credentialing programs for CSOs addressing disinformation. Meanwhile, existing European programs that support SMEs, as well as EU cybersecurity training and recruitment programs, should include disinformation researchers and experts within their remit. Finally, DG CONNECT could examine ways to include support for disinformation researchers as part of the €1.7bn allocated to cybersecurity in the Digital Europe Programme. In the 2030 Digital Compass, the Commission rightly underlines the need for the EU to build on its strengths and support a “robust civil society”. Placing disinformation policy at the heart of the Commission’s cybersecurity strategy would not only make it easier to coordinate Member State responses to the evolving threat; it would also support the specialist NGOs and researchers in Europe’s civil society who are fighting disinformation, thereby becoming one concrete way of meeting the expectation of both a robust civil society and a more resilient EU.
E. Raise the Stakes: Sanctions
The EU is committed to reinforcing its Cyber Diplomacy Toolbox to prevent, deter, and respond to cyberattacks, and disinformation fits squarely within this frame. Defense against disinformation campaigns requires the same measures as defense against cyberattacks: multi-stakeholder information sharing and situational awareness, the capability to perform attribution, and effective dissuasive measures. Disinformation, cyberattacks, and cybercrime all carry strong financial incentives with, as yet, insufficient consequences. It is high time to raise the stakes for malicious actors and prevent them from repeating the same offenses. Sanctions are not a subject to be taken lightly, nor are they a silver bullet. Still, the EU can play a leading role in defining a hyper-targeted sanctions regime for disinformation, one that accounts for the geopolitical complexity of responsibility, implicates relevant members of the private sector, and respects fundamental human rights.
F. Lead on the Global Stage: A European Cyber Diplomacy Grounded in Human Rights
The EU’s emphasis on cyber diplomacy, for example through the EU Cyber Diplomacy Network, is an opportunity to claim leadership at the international level, notably on the Second Additional Protocol to the Budapest Convention. The EU’s commitment to International Human Rights Law and the European Human Rights Guidelines strengthens its position and makes it well-placed to continue promoting a vision for cyberspace grounded in human rights. The GDPR remains a powerful tool to strengthen cybersecurity through its protection of personal data, and an example of the EU’s ability to set international standards in cyberspace. In addition, the EU should assert its vision of a decentralised, open internet, as this is also the vision most aligned with human rights principles. The concentration of the core functions of the internet in the hands of a few private companies increases cyber vulnerability. Given that cyberspace is largely maintained by the private sector, the EU should also look to the UN Guiding Principles on Business and Human Rights as a tool to increase the safety, trustworthiness, and resilience of our digital environment.
Conclusion
Because disinformation is by nature an online risk, it is a challenge for our cybersecurity governance and ecosystem to address. In order to fully safeguard our digital infrastructure, preserve information security, and guarantee EU citizens a safe and trustworthy online environment, the EU must address disinformation and cybersecurity in the same breath.