October 20, 2021
Acknowledgments:
The EU DisinfoLab would like to thank the co-facilitators of this session, the participants, and the RightsCon organisers. This piece is a summary of the discussion without attribution, as the session was conducted under the Chatham House Rule to protect the identity of the participants. The summary is elaborated with supporting research and contributions from the co-facilitators.

On June 10, the EU DisinfoLab co-hosted a Community Lab at RightsCon 2021 which focused on the intersection of disinformation and gender. This session emerged from the understanding that digital platforms represent an increasingly conflicted space where women, gender non-conforming people, and marginalised groups are disproportionately targeted and harassed. All too often these groups must either struggle to make their voices heard or fear for their safety when they do. The effect is that women and marginalised groups are leaving online spaces, forgoing their fundamental right to participate in civil and political life and the full enjoyment of their freedom of expression and opinion. Titled, “Gender and Disinformation: towards a gender-based approach for researchers, activists, and allies”, the session wished to give oxygen and validation to the emerging discussion on this topic and build a common position among the researchers, activists, journalists, and interested members of civil society. In a community dialogue format, we sought to better understand gendered disinformation, drawing on the diverse perspectives present to help account for the different aspects and geographies affected by the phenomenon.

Defining gendered disinformation

The first challenge encountered in the study of gendered disinformation is definitional. At present, various definitions exist with slightly different areas of focus. Lucina Di Meco describes gendered disinformation as “the spread of deceptive or inaccurate information and images against women political leaders, journalists and female public figures” in a way that draws on misogyny and societal stereotypes, “framing them as untrustworthy, unintelligent, emotional/angry/crazy, or sexual”. According to Di Meco, “this type of disinformation is designed to alter public understanding of female politicians’ track records for immediate political gain, as well as to discourage women seeking political careers or leadership roles”. In their 2020 report, the National Democratic Institute defines state-aligned online gendered disinformation as online activities which attack or undermine their targets on the basis of their gender by weaponising narratives for political, social, or economic objectives. In their 2021 examination of online gendered and sexualised disinformation campaigns against women in public life on social media platforms, Jankowicz and colleagues at the Wilson Center identify three defining characteristics of online gendered disinformation: falsity, malign intent, and coordination.

While the above definitions come from a Western European and North American perspective, a commonality of attacks bridging Global South and Global North appears to be the way that women in some dimension of public life may be targeted – whether as journalists, activists, or civic leaders – since these figures are often at the forefront of movements challenging established powers. Participants expressed the concern that gendered disinformation may have very different consequences according to the geographic or political setting. Other participants voiced the concern that the term “gendered disinformation” may be overly narrow, implying a binary assumption, despite the fact that this type of disinformation also replicates other gendered stereotypes, such as transphobia. It was agreed that the term should in practice be intersectional, globally relevant (not regionally biased), and inclusive of non-binary gender identities in addition to cis-gendered women.

Gender-based disinformation (GBD) vs. gender-based violence (GBV)

Another primary definitional issue was if and how to isolate a phenomenon that sits at the crossroads of disinformation and online violence. Despite its harm, much disinformative content is not illegal per se; consider the instance of Canada’s former environment minister Catherine McKenna being referred to as “Climate Barbie” by a colleague: the hashtag was later picked up and disseminated online thousands of times. Participants also approached the question of whether gendered hoaxes should be treated as an exclusively disinformation-related issue or placed under the umbrella of online gender-based violence (GBV). Some participants explained their preference to differentiate between the two for the practical reason that platform operators and stakeholders seem more inclined to address disinformation – which they consider to be a global, existential threat – than they are to respond to abuse against women and marginalised groups. Other times, depending on the legal context, using one term or another may allow for legal action.[1] Participants also emphasised the need to see individual attacks or pieces of content in their context, in order to understand how they may be part of a broader attempt to discredit women’s leadership, qualifications, likability, etc.

An online vs. a general problem

Participants also discussed whether to frame the issue exclusively as an ‘online’ problem or as a societal problem, since the online is often seen as a reflection of the offline. While sexist attitudes are integral to understanding gendered disinformation, social norms alone do not explain how attacks against women in politics have become so pervasive, particularly on social media. The online dimension presents specificities and tactics that need to be accounted for in their own right (with novel strategies like ‘revenge porn’ presenting an acute example). Online space may increase and amplify, rather than simply mirror, offline violence against women. For instance, online disinformation campaigns targeting female politicians may legitimise offline violence with greater magnitude and coordination, such as the plot to kidnap a US governor in retaliation for her actions to contain the spread of COVID-19. An online campaign may signal or lay the ground for an offline campaign, which may only come months later, or else an online campaign may endure longer than an offline campaign.

Some participants recalled that the online-mirrors-offline mantra may be a method to evade responsibility, by saying that little can be done to prevent misbehaviour rooted in misogynistic culture. Others referred to the engagement-based business model of platforms as partially responsible for propagating extremist views.

Towards an activist definition

Participants shared many operational concerns, related to documentation, awareness raising, and response to gendered disinformation, in agreement that the phenomenon is not sufficiently recorded, acknowledged, or problematised. Therefore, while multiple definitions of gendered disinformation exist and definitional challenges remain, participants agreed that priority should be placed on an activist definition that can be put to work in research, awareness raising, and advocacy, and used strategically to advance the issue among stakeholders with the capacity to address it. This is particularly valuable in the face of evidence of platforms’ reticence to tackle this problem on their services.[2]

Evidence of gendered disinformation

In this section, we wish to collect the examples of gender-based disinformation reported during the breakout rooms of the Community Lab. This account is by no means exhaustive, but reflects the range of contexts affected by the phenomenon. While the evidence shared was sometimes anecdotal (not supported by technical evidence), participants’ acknowledgement of similarities and trends supported their claims, and suggests a need for increased documentation and analysis.

Gendered disinformation during conflict

In conflict settings, where the boundaries between disinformation and violence and between online threats and offline risks are hard to distinguish, aggressive and defamatory rhetoric can put women’s lives at risk. In Libya, women are targeted on both sides of the country’s East-West divide. For example, congresswoman Siham Sergiwa, a target of disinformation, was kidnapped in 2019 and is currently thought to be dead. One year later, activist lawyer Hanan Al-Barassi was assassinated after becoming a victim of disinformation for calling out corruption. Examples of gendered disinformation from conflict settings show that the same phenomenon can have vastly different stakes, and serve as a reminder of the need for contextualisation and regional understanding.

Information takes on a particularly harmful and disempowering character in contexts such as the evolving situation in Afghanistan since the withdrawal of the United States, where women, particularly journalists and activists, are countering narratives put forth by Taliban representatives by highlighting the situation on the ground in the face of the concealment of facts and disinformation. Information, even regarding women’s rights, is weaponised on different sides to present diametrically opposed narratives while the voices of Afghan women are drowned out. The situation is further complicated by the fact that many Afghan women are keeping a low profile on social media for fear that identifying themselves could lead to reprisals by the Taliban. As a result, disinformation proliferates and remains unchecked in the absence of those most impacted by it, who are unable to speak up.

Organised gendered disinformation campaigns

Participants pointed to instances of anti-women disinformation campaigns conducted by state actors[3] against journalists, activists, and politicians. The anecdotes shared by participants are echoed in previous studies, which found that women are more often victims of trolling and social media attacks like brigading.[4] For instance, CNN reported on “an organised army of far-right trolls on Indian social media”, which it linked to the ruling party, being used to attack women in politics. Participants engaged in monitoring noted that activists and religious minorities are particularly subject to coordinated disinformation and state-backed campaigns, citing the Pinjra Tod movement and the historical persecution of Muslims. Participants from different countries feel coordinated disinformation campaigns are becoming more common as government institutions invest in infrastructure to target dissidents through the dissemination of incomplete and false information. These campaigns often appear organic or mix with organic activities, thus giving the state plausible deniability.

While coordinated disinformation campaigns target men as well, the nature and modus operandi of campaigns against women and sexual minorities is distinctly gendered. In Mexico, for instance, where there are numerous instances of journalists being targeted and harassed over social media “by government‐sponsored cyber troops”, Posetti et al. explain that women are often at the centre of these attacks. These campaigns also result in uniquely gendered harms for female journalists, like misogynistic slurs, ‘pile-ons’, and threats to physical safety.

As with other kinds of disinformation, large gatherings, events, and movements often become the object of gendered disinformation campaigns. Participants reflected on the Holistic Protection and Leadership of Women Human Rights Defenders (WHRDs) Generation Equality Forum held in Mexico in 2021, and noted that the increased influence of feminist movements and WHRDs has been met with backlash and slander, including in the form of well-planned and well-financed campaigns to harass WHRDs and cast doubt on the legitimacy of their work.

The weaponisation of religion: “blasphemy campaigns”

The construction of disinformation campaigns, it was discussed, can leverage the intersection between gender identity and religious beliefs. This strategy amplifies the vulnerability of these groups, drawing on context-specific narratives that can be especially destructive, such as the accusation of blasphemy against women activists in Pakistan.[5] In Western societies too, gendered disinformation can be nourished by a radical interpretation of religion, for example, Christian fundamentalism, and enabled by a patriarchal system that defends an ultraconservative conception of sexuality. In this regard, disinformation is used to manipulate definitions of gender, and to build exclusionary, homo-, bi-, and transphobic cultural narratives. An example is offered by the debate on Italy’s DDL Zan, a bill to criminalise violence and hate speech against the LGBTQ+ community, consistently opposed by the far-right and the Vatican with falsities and misconceptions.

The impact of gendered disinformation

It is typical of gender-based disinformation to portray women as incapable or unworthy of occupying space in the public arena, either due to intellectual limits or ulterior motives. This strategy seeks to remove them from a position of power and visibility, and can also discourage women from entering politics in the first place. It can also lead to politicians self-censoring and disengaging from the political debate in ways that harm their effectiveness and their well-being. A silencing effect can take many forms: from women self-censoring by deleting posts to avoid harassment or entirely deleting their social media accounts, to more extreme situations where female politicians live in hiding. Examples of the latter exist from Afghanistan to Scotland and show clearly how gendered disinformation affects democratic functioning and women’s basic rights of political participation.

Depending on the context, a disinformation campaign may be life-threatening.[6] In war-torn countries, this exclusion can go as far as preventing women from being involved in the peace process. As women withdraw from online and public spaces, or even flee their country in fear for their safety, opponents and malign actors behind disinformation campaigns may take this chance to further misrepresent them, for instance by framing them as disloyal.

Individual and collective psychological effects are difficult to assess and quantify, as are the risks for researchers and fact-checkers analysing the phenomenon in the public eye. Participants noted the cognitive dissonance and paradox in the claim that misogyny is emotionless, in contrast to the alleged over-sensitivity of women, even though gender-based disinformation itself aims to trigger an emotional response in the audience. This adds a further layer to the discussion about the mental health and safety of researchers fighting disinformation, e.g. fact-checkers, who are often the targets of attacks.

Another goal of cyber-misogyny, a core element of gender-based disinformation, is to remind women that they are inherently vulnerable; this dynamic plays along with the perpetuation of stereotypes, making it more difficult for present and future generations to eradicate these sexist constructs. For these reasons, participants reflected, it is crucial that when addressing the problem, the targets are not treated as helpless victims and are given agency over their experience.

Gendered disinformation has consequences and impact well beyond the women and other groups whom it targets. In many countries, women’s political leadership is challenging entrenched illiberal and autocratic political elites by disrupting what are often male-dominated political networks that had previously permitted corruption and abuse of power. Pushing women out of the political arena may be only the first step of a broader threat to democracy and human rights: a movement seeking to push female politicians and activists aside and to reignite gendered stereotypes and misogyny.

Responding to the phenomenon

The urgency of the need to address gendered disinformation is recognised around the world. UN Special Rapporteur Irene Khan acknowledged the problem and pointed to some responses in her report to the 47th session of the UN Human Rights Council.

“In the age of the Me Too movement, both States and companies should confront gender disinformation online as a priority and also give special attention to its consequences in the real world. Companies should introduce appropriate policies, remedies and mechanisms that are tailored from a gender perspective across all aspects of the platform experience and that are designed in consultation with those affected by this pernicious behaviour. States should also integrate fully gendered perspectives into their policies and programmes to address disinformation and misinformation, including in media, information and digital literacy programmes.”

Participants offered many recommendations during the Community Lab. We have aggregated the most clearly articulated and agreed recommendations below, but this is not a comprehensive list. A discussion of specific regulatory instruments, like the EU’s Digital Services Act and the UK’s Online Safety Bill, was beyond the scope of the session.

Document the threat

Gendered disinformation needs to be understood as a pattern rather than a set of isolated episodes. This is particularly sensitive given the proximity to online gender-based violence (GBV) and hate speech. It is essential to continue to document the threat and produce research that can be used for advocacy and reform. A growing body of research, both qualitative and quantitative, on gendered disinformation allows us to share trends and compare across contexts, to find resonance with other movements, for instance in relation to GBV or journalistic safety. Civil society organisations, including women’s organisations, and journalists must have sufficient access to platform data for this documentation and risk assessment. Resources should be devoted to understanding the misogynist roots of disinformation, perpetrated by domestic actors but which may dovetail with foreign influence operations or extremist movements.

Raise the alarm

Evidence gathering is necessary for raising the alarm among various actors best positioned to take action: relevant authorities in the private sector, governments, international institutions, human rights organisations and other civil society actors, media, etc. Participants mentioned the value of bringing instances of gendered disinformation to the attention of the United Nations and the international community who can defend the targets of campaigns through public statements of support. International media are also important, participants remarked, as this is one of the fastest ways to motivate platforms to put in place protection measures or take action on disinformation campaigns.

Early warning

The importance of recognising early signs of threats circulating online should never be underestimated. It can take months of persistent, low-scale disinformative discourse against a target before violence occurs, and the rate of escalation from online harassment into offline attacks depends on the context. Participants noted that identifying the specific audience of a disinformation attack is crucial, for instance, a call to action aimed at an armed group in a conflict setting. Efforts should be made to identify risks systematically rather than on a case-by-case basis, and to take action not only to protect women, but also to ensure the health of democratic processes like elections, peace talks, and transitions of power. To achieve this, platforms could collaborate more with relevant civil society organisations to establish an official method to quickly flag dangerous campaigns. As one expert noted, an official method to deal with dangerous gender-based disinformation campaigns should start with incentivising platforms to rigorously implement their own terms of service when it comes to threats, harassment, doxing, and manipulated imagery.

Maintain an intersectional lens

In general, disinformation has a parasitic nature that exploits audience-dividing issues, building on multi-layered narratives, as evidenced with blasphemy in conservative religious societies. It is necessary to prioritise topics that can be triggering in each context (e.g. religion, culture, politics, and other identity aspects). This comes with the recognition that targets of gendered disinformation have intersecting identities, and those most vulnerable are often targeted on multiple grounds. So far, the focus on women may reflect the visibility that they have in the public sphere, which remains binary (male-female). Future research on gender-based disinformation should aim at overcoming a binary conception of gender identities to include trans and non-binary persons.

Understand context and value expertise

To step out of a one-size-fits-all logic, it is necessary to engage local experts to detect gendered disinformation and elaborate suitable counter-disinformation, protection, and resilience efforts. Context is key, and local experts who understand the cultural norms and power structures embedded in a specific geographic area, as well as the nuances of the language, are best positioned to suggest solutions. Participants with expertise in social media monitoring stressed the importance of knowing the actors who traditionally disinform and target women in the context they are monitoring. They shared strategies for mapping these actors and building risk profiles for potential targets, closely monitoring key individuals, locations, topics, events, terms, etc. Participants agreed that content moderation by platforms requires expertise and language skills in the relevant context, as automated systems will often miss the nuances of some gendered disinformation campaigns.

Further Resources

Download the attached file for a list of resources related to the topic of gender-based disinformation. EU DisinfoLab hopes to be able to update this periodically and reflect the growing body of work on this topic.


[1] In their 2021 report, Thakur and Hankerson suggest helpfully that “One way to think of the difference between the two is that gendered disinformation involves intentionally spreading false information about persons or groups based on their gender identity, and online GBV involves targeting and abusing individuals based on their gender identity.” Thakur, D., Hankerson, D. L., & Seeger, E., 2021, Facts and their Discontents: A Research Agenda for Online Disinformation, Race, and Gender, Center for Democracy and Technology, https://cdt.org/wp-content/uploads/2021/02/2021-02-10-CDT-Research-Report-on-Disinfo-Race-and-Gender-FINAL.pdf

[2] Participants lamented that at present, there has been little effort to connect the dots between extremism and the online targeting of women leaders, for instance the growth of violence-prone Incel movements and explicitly misogynistic communities like the Proud Boys. According to the Anti-Defamation League, online misogyny is embraced by white supremacists and other right-wing extremists partly as an organising tool, to present themselves as defenders of “conservative” values in order to make their views more acceptable to a wider audience. 12 August 2021, “Venerating the Housewife:” A Primer on Proud Boys’ Misogyny, The Anti-Defamation League, https://www.adl.org/blog/venerating-the-housewife-a-primer-on-proud-boys-misogyny

[3] Attribution of campaigns, particularly to nation-states, is a challenge beyond the scope of this discussion. Without technical evidence, these statements necessarily remain anecdotal.

[4] See Tweets from Observatorio de las Violencias de Género “Ahora Que Si Nos Ven”, 14 May 2021, https://mobile.twitter.com/ahoraquesinosv4/status/1392977188350271492

[5] Ahmed, I., 2021, Asia Bibi v. The State: The politics and jurisprudence of Pakistan’s blasphemy laws. Third World Quarterly, 42(2), 274-291, https://doi.org/10.1080/01436597.2020.1826300

[6] Participants shared the value of using the Dangerous Speech framework to assess the risk of campaigns, in combination with the ABC (Actors/Behaviour/Content) framework. While the ABC framework helps bring a focus to disinformation actors, the Dangerous Speech framework helps prioritise campaigns for monitoring and response: e.g. does the campaign target a person? Who is the audience? Who are the amplifiers? What is the rhetoric? And based on these, what is the risk? For instance, a general audience may imply less risk than messages reaching an armed group.