October 11, 2022

By Rita Jonusaite (EU DisinfoLab), Maria Giovanna Sessa (EU DisinfoLab), Kristina Wilfore (#ShePersisted), and Lucina Di Meco (#ShePersisted)

This analysis is the result of a collaboration between EU DisinfoLab and #ShePersisted, and aims to feed into the ongoing debates on the Directive on combating violence against women and domestic violence, which the European Commission proposed in March 2022. While the Directive puts forward several valuable initiatives to protect victims offline and online, it is limited in scope and does not consider the impact of harmful content, notably gender-based disinformation (GBD). To complement this policy document, a Technical Document on the phenomenon is available here. The following sections offer general considerations on how the Directive could answer some of the challenges posed by GBD, then present concrete suggestions to co-legislators on how the Directive could be improved accordingly.

The Directive: an opportunity to mitigate cyber-violence and gender-based disinformation

The Directive on combating violence against women and domestic violence considers recent phenomena such as cyber-violence against women, recognising that it has been on the rise because Internet infrastructure allows certain forms of abuse to be amplified on a massive scale. In this sense, GBD can be understood as a form of cyber-violence in which the online space creates conditions that increase and amplify the harm it causes to an extent that would not otherwise be possible. Social media platforms act as amplifiers for these misogynistic lines of attack, and coordinated attacks can spread easily.

When digital platforms do not enforce their own rules, malign actors quickly take advantage of weak self-regulatory structures. Social media platforms do not consistently punish repeat offenders or remove posts that violate their terms of service. The curation of content based on engagement and attention rather than quality further impedes efforts to address gendered disinformation. For example, Meta, Facebook’s parent company, has policies against threats of violence, hate speech, violent and graphic content, nudity and sexual activity, cruel and insensitive content, manipulated media, deepfakes, fake accounts, and coordinated inauthentic behaviour. Yet digital forensic analysis, and even unsophisticated searches, demonstrate that gendered disinformation violating these policies is still thriving on the platform.
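To make the engagement-versus-quality point concrete, the toy Python sketch below (with invented posts and weights, not any platform’s actual ranking system) shows how a feed scored purely on interactions surfaces a policy-violating post above factual content:

```python
# Toy illustration only: a feed ranked purely by engagement ignores quality
# signals, so high-interaction abusive content rises to the top.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int
    violates_policy: bool  # quality signal the ranker never consults

def engagement_score(post: Post) -> int:
    # Invented weights: shares and comments count for more than likes.
    return post.likes + 3 * post.shares + 2 * post.comments

feed = [
    Post("Factual policy explainer", 120, 10, 15, violates_policy=False),
    Post("Sexualised smear of a candidate", 300, 450, 900, violates_policy=True),
]

# The smear tops the feed: the ranker optimises for attention, not for
# compliance with the platform's own content policies.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):>5}  {post.text}")
```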

Despite its harm, disinformation is not illegal in most countries, so legal action is often not an option for GBD victims. While disinformation cannot and, in our view, should not be criminalised in the same way as the offences listed in the Directive (Chapter 2), we believe that online violence and GBD are not mutually exclusive and that GBD should be specifically acknowledged in the Directive. It should be sufficiently recorded and problematised, as disinformation is often the precursor, background, and trigger of violence. More needs to be done to research, raise awareness of, and implement adequate policy responses to GBD. Only by considering and addressing online harms more broadly can the Directive deliver on its aims.

Streamlining solutions: ways in which the Directive can specifically address gender-based disinformation 

EU DisinfoLab suggests the following recommendations to integrate a GBD perspective into the Directive on combating violence against women and domestic violence.

Overarching recommendations

  • GBD should be recognised as a phenomenon that can lead to offline violence rather than as a series of isolated events. It should also be specifically considered when very large online platforms (VLOPs) identify and mitigate systemic risks under the Digital Services Act (DSA), paying particular attention to coordinated campaigns.
  • Given the uncertain trajectory of any one piece of legislation meant to curtail online harm, it is important to recognise that many reform principles can address online gendered harm and should be part of the political dialogue around tech policy reform from now on.
  • While complementing the responsibilities agreed in the DSA, under which platforms must ensure users’ safety when they access their services, the Directive should ensure that companies introduce policies, remedies, and mechanisms that are tailored from a gender perspective across all aspects of the platform, and that are designed in consultation with those affected.
  • When digital platforms fail to enforce their self-regulatory measures, malign actors can easily exploit these weak structures. This should be duly reflected upon and addressed when designing measures to tackle online violence, including GBD.
  • Increasing transparency would enable researchers to better understand metrics of gendered and sexualised harassment and disinformation, such as the ratio between the number of reported GBD cases and the number of cases that were actually moderated (see the sketch after this list). Such information is essential to understand and evaluate the breadth and magnitude of social media companies’ investments in addressing this issue and to identify areas for improvement. Relevant information includes the number of people working on content moderation, their cultural competency, their position and power within the companies, and their influence on more significant decisions, such as the design of income-generating algorithmic preferences.
  • It is important to ensure that targets of GBD are not treated as helpless victims but are given agency over their experience, and that GBD is analysed with the appropriate context and regional understanding in mind.
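As a purely illustrative example of the transparency metric mentioned above, the sketch below computes a moderation rate from hypothetical transparency-report figures (all numbers and platform names are invented):

```python
# Hypothetical figures only: the share of reported GBD cases a platform
# actually moderated, a metric researchers could derive from transparency data.
reports = {
    "platform_a": {"gbd_reports": 12_400, "gbd_actioned": 2_050},
    "platform_b": {"gbd_reports": 8_900, "gbd_actioned": 6_700},
}

for platform, stats in reports.items():
    moderation_rate = stats["gbd_actioned"] / stats["gbd_reports"]
    print(f"{platform}: {moderation_rate:.1%} of reported GBD cases moderated")
```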

Specific recommendations 

  • Companies should introduce a rapid-response system through which victims can flag GBD in a manner tailored to spare them from reliving their traumatic experience, and such reports should be addressed with priority to avoid further harm (article 16 of the Directive).
  • The individual assessment to identify victims’ protection needs (article 18 of the Directive) and the individual assessment of victims’ support needs (article 19 of the Directive) should consider GBD and its potential to lead to offline violence as well as other harms to the victim.
  • The guidelines for law enforcement and judicial authorities (article 23 of the Directive) should also include capacity building on GBD with specialised, gender-sensitive training. 
  • Relevant EU agencies, such as the European Institute for Gender Equality and the European Union Agency for Fundamental Rights, should be mandated to gather information on GBD and the best existing practices to address it. Moreover, when publishing independent reports and making recommendations on violence against women and domestic violence, national equality bodies should also investigate GBD (article 24 of the Directive).
  • Victim support measures should provide GBD victims with information on tools and remedies to address it (article 27 of the Directive).
  • Helplines should also offer support and advice for GBD victims (article 31 of the Directive). 
  • Preventive measures (article 36 of the Directive) such as awareness-raising campaigns, research, and education programmes should address GBD as a part of measures aimed at tackling cyber-violence. 
  • Media training activities should also aim to raise media professionals’ awareness and understanding of GBD (article 37 of the Directive), helping to reduce its risk and spread through unprejudiced and factually accurate coverage.
  • Member States should cooperate with and consult civil society organisations, including those directly working with or representing victims of GBD, on their policy responses to tackle GBD and support its victims (article 41 of the Directive). 
  • A uniform standard is needed across platforms regarding what constitutes GBD, with platforms’ Terms and Conditions clearly stating that GBD violates them (article 42 of the Directive). When reporting content on a platform, it should be possible to flag GBD specifically while also providing context on the piece of content in question.
  • Providers of intermediary services should reinforce and/or develop mechanisms for cross-platform content moderation. Victims of online violence, including GBD, should be able to refer to a decision on content that has already been moderated on another platform instead of reliving their traumatic experiences (article 42 of the Directive); a sketch of what such a report could carry follows this list.
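To illustrate the last two recommendations, the sketch below models what a GBD-specific report might carry: a dedicated category, a priority flag for rapid handling, and an optional reference to a moderation decision already issued by another platform. All field names and identifiers are our own invention, not an existing standard or any platform’s API:

```python
# Illustrative data model only: fields and structure are invented to show how
# a GBD-specific report with a cross-platform decision reference might look.
from dataclasses import dataclass, field
from typing import Optional
import heapq

@dataclass(order=True)
class GBDReport:
    priority: int                          # lower value = handled first
    content_url: str = field(compare=False)
    category: str = field(compare=False, default="gender_based_disinformation")
    context: str = field(compare=False, default="")  # victim-provided context
    # Reference to a moderation decision already issued by another platform,
    # so the victim does not have to re-document the abuse from scratch.
    prior_decision_ref: Optional[str] = field(compare=False, default=None)

queue: list[GBDReport] = []
heapq.heappush(queue, GBDReport(5, "https://example.org/post/42",
                                context="Single abusive comment"))
heapq.heappush(queue, GBDReport(1, "https://example.org/post/123",
                                context="Coordinated smear campaign, week 2",
                                prior_decision_ref="platform_b:decision/9876"))

# Reports flagged as high priority are handled ahead of routine ones.
urgent = heapq.heappop(queue)
print(urgent.content_url, "->", urgent.prior_decision_ref)
```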

Conclusions 

By mandating enhanced safeguarding and accountability measures, the Directive can ensure that GBD no longer slips through the regulatory cracks. Empowering Member States to have a direct hand in reporting and transparency procedures will help normalise sorely needed corporate accountability in the tech sector. Creating best-practice examples of future-oriented awareness-raising campaigns and digital literacy programmes can prevent sexist practices from taking root in new media. In its purpose and policy output, the Directive must display a sound understanding of the networked nature of disinformation and extremism as reproduced online. It is crucial to involve independent agencies and regulators in the campaign to increase knowledge of GBD, from the complex and multifaceted gendered narratives at play to the technological risk factors that make social media platforms and algorithms such effective distributors of hate and disinformation.

The time to act is now: a robust regulatory framework that addresses GBD specifically must be put in place, and the Directive is a crucial piece of legislation in this regard.