Disinfo News and Updates

  • Vaccine Smear Campaign. A Radio Free Europe/Radio Liberty investigation discovered that a network of Russian marketing companies (known for selling nutritional supplements and malware) was behind a campaign asking social media influencers to smear the Pfizer vaccine.
  • Facebook Lab-Leak Reversal. Amid the revival of the lab-leak hypothesis, Facebook has revised its policy from February and lifted a ban on comments linking Covid-19 to the Wuhan lab.
  • Facebook Threat Report. Facebook’s recent influence operations report, covering three years of campaigns, focused on Russian and Iranian actors but also found the U.S. among the top five sources of influence operations. More from First Draft and Protocol.

EU Policy Monitor

In the EU Institutions

  • Hate Speech. Didier Reynders, EU Justice Commissioner, announced last week that the Commission will propose a common definition of hate speech to apply across EU member states.
  • AI. The Parliament adopted two new resolutions on artificial intelligence: a resolution in the Culture and Education Committee (CULT) that stresses the importance of measures to protect cultural diversity and prevent bias, and a resolution in the Internal Market Committee (IMCO) highlighting the potential of AI in private and public sector digital transition. 
  • DSA and DMA. French officials have made strong demands of the DSA, including increased enforcement capability at the member state level and stricter rules for online marketplaces. Meanwhile, France, Germany, and the Netherlands published a joint position last week asking for clarity on the interaction of the Digital Markets Act with competition law and requesting more authority at the national level.

In Member States

  • Belgium. Belgium’s interior ministry discovered a “complex, sophisticated and targeted cyberattack”, launched in April 2019. Belgian experts point to China. On May 20, Belgium’s National Security Council approved a new cybersecurity strategy.
  • Ireland. Facebook has asked the Irish Government to delay the introduction of a new Online Safety and Media Regulation Bill which could overlap with the EU’s Digital Services Act.

The Commission Issues Guidance on the Code of Practice on Disinformation

Last Wednesday, the European Commission issued a Guidance document designed to improve the EU’s Code of Practice on Disinformation, the voluntary self-regulation programme established in 2018, to which Facebook, Google, Twitter, TikTok, and a handful of other platforms and trade associations are signatories. In alignment with the Digital Services Act (DSA) regulation which focuses on illegal online content, this programme will be redesigned and fortified to better fulfill its role as the EU’s primary tool to address disinformation. Unlike the DSA, the Code of Practice is voluntary, though it could gain indirect enforcement power in a co-regulatory framework, as foreseen in the DSA’s provisions for Codes of Conduct: signatories could use their commitments in the Code to avoid some of the regulation’s penalties.*

In addition to this move towards co-regulation, the Guidance calls for reinforcing the Code through:

  • Inviting additional signatories (specifically mentioning “private messaging services” as well as fact-checkers and providers of technological solutions)
  • Reinforced commitments related to demonetising disinformation (especially in advertising) and to maintaining the integrity of services against manipulative behaviour used to spread disinformation
  • Reporting in standardised formats, with Member State breakdowns
  • Better cooperation with fact-checkers and increased coverage across EU countries and languages
  • A robust framework for access to data for researchers, to be devised
  • More information to help users navigate online, like more transparent recommender systems, tools to flag disinformation, and redress mechanisms
  • A monitoring framework with Key Performance Indicators, to be devised
  • A Transparency Centre displaying signatory performance on commitments
  • A permanent Task Force chaired by the Commission, composed of signatories, representatives from the European External Action Service, the European Regulators Group for Audiovisual Media Services (ERGA) and from the European Digital Media Observatory (EDMO)

While the Guidance is specific in some areas (“trade secrets” deserve appropriate protection, 8.1.3.), it remains vague in others. For instance, online services “that are widely used at EU level and have higher risk profiles with respect to the spread of disinformation should report every six months” (9.2.1) but it is not stated what would determine such a risk profile (this will be left to the Task Force). 

The Guidance may disappoint some civil society organisations and experts in the disinformation space, who, rather than seeing our recommendations in a concrete proposal with enforcement power, find our demands echoed back at us. Nicolas Kayser-Bril titled his analysis for AlgorithmWatch “EU Commission asks foxes to stop eating chickens but does not build fence”. Indeed, much work remains to be done; the Guidance encourages other signatories to join the Code and contribute to its development, and leaves many aspects to be “tailored” by the relevant actors. While an understandable strategy, encouraging CSOs to join the Code could create a bit of a catch-22 for stakeholders who traditionally monitor this issue independently from both platforms and the Commission.

Much in this new Guidance remains to be interpreted and specified. EU DisinfoLab has contributed our feedback to the Code and analysed the signatories’ monthly reports, and will continue to monitor the Code’s development closely.

*NB: The DSA’s passage is still likely two years away, so the Code of Practice will remain self-regulatory until at least that time. Also, the Code’s eventual co-regulatory status appears at least somewhat tied to the DSA’s risk assessment provisions (Art. 26), which the LIBE Committee suggested deleting in its draft DSA opinion (vote to take place mid-July).

Research, Reports, Long Reads …

  • A recent study from the European Parliament, requested by the EP’s Subcommittee on Human Rights (DROI), considers the impact of online disinformation on democratic processes and human rights in countries outside the bloc, and proposes actions for the EU to take.
  • A team of Georgetown researchers has demonstrated how effectively a sophisticated text-generating AI system, OpenAI’s GPT-3, can be used to generate disinformation.
  • Ayushman Kaul of the DFR Lab reports on a bogus fact-checking website “India Vs. Disinformation” created by a Canada-based communications firm to amplify pro-government content and release “fact checks” targeting political opponents.

Events, Announcements, Training Opportunities

  • The Digital Future Society has launched an international Call for Solutions, “Tech against disinformation”, to help improve fact-checking in Spain. Apply before 9 July here.
  • June 2: The Center for Countering Digital Hate (CCDH) will host an online event on the role of regulation in mitigating disinformation and saving lives. Sign-up here.
  • June 9: Heinrich Böll Stiftung Brussels is hosting an online discussion on gendered disinformation, featuring EU DisinfoLab researcher Maria Giovanna Sessa! Register here.
  • June 10: EU DisinfoLab’s Community Lab at RightsCon, “Gender and disinformation: towards a gender-based approach for researchers, activists, and allies”. More info here.
  • June 16 – 17: The WeVerify project is hosting a free two-day workshop and tech demo on AI to counter disinformation. More info here.
  • June 7 – 13: The CASOS Summer Institute aims to provide an intense and hands-on introduction to network analytics and visualization from a combined social-network, network-science, link-analysis and dynamic network analysis perspective. More here.
  • June 10 – 15: The IDeaS Summer Institute is an intense, hands-on training camp that teaches participants about the theories, methods, and tools to identify and combat disinformation, hate speech, and extremism online. More here.

Jobs

  • Digital Action is looking for a Campaigns Adviser on democracy and technology. 
  • Democracy Reporting International (DRI) is looking for a German Elections Social Media Research Expert.
  • The Atlantic Council’s Digital Forensic Research Lab (DFRLab) is seeking to hire an associate editor.
  • The World Wide Web Foundation is looking for a consultant to develop a gender policy playbook.
  • CENTR, the Council of European National Top-Level Domain Registries, is hiring a Policy Officer.
  • News literacy organisation Lie Detectors is recruiting trainees for a spring/summer 2021 start date.