How inaction or partial action by social media platforms enables the continuous spread of disinformation
(and what we can do about it)
The responses received from social media platforms to the EU DisinfoLab’s investigations have often been insufficient to reduce the spread of disinformation. In some cases, there was no response at all, as the emails of our researchers went unanswered. This article provides a summary of such cases, highlighting the need for greater platform accountability. For more on our proposals to fix this problem, see the EU DisinfoLab’s Open Letter on the Digital Services Act (DSA).
Disclaimer: This blog post covers the actions known to the EU DisinfoLab at the time of publication in September 2021. We revisited it on January 11, 2022, and updated it accordingly to reflect the actions platforms have taken following our publication.
Key takeaways
- Black-box decision-making prevents researchers from scrutinising platform activity. Despite partial efforts by the platforms, it is still extremely difficult for civil society actors to get any meaningful information on how platforms act to counter the activities of disinformation actors.
- Inconsistencies prevail in how the largest platforms responded to our findings on disinformation networks, ranging from no response to partial action taken behind closed doors. In some cases, the platforms (Facebook, YouTube or Twitter) removed only some of the assets linked to a single network, leaving the groups and actors spreading disinformation enough room to continue their manipulative activities.
- Unequal enforcement continues across the EU based on language and region. For the same information operation linked to Russian actors, we noticed a large discrepancy by language: all the English-language assets were taken down, while the French-language assets remain online.
- Policy infringements go unanswered. In many cases, our findings also show infringements of one or several Coordinated Inauthentic Behaviour (CIB) policies implemented by platforms. Still, the actions taken by social media platforms in response to our reports have often been, at best, very limited and, at worst, non-existent.
Over the past two years, the EU DisinfoLab has worked on nine OSINT investigations exposing the tactics and strategies used by a wide range of actors to disseminate disinformation online. Our findings were published openly on our website, covered by major media outlets, and almost always shared directly with contacts within the social media platforms concerned. The investigations showed how social media platforms are essential distribution mechanisms through which malicious actors have their false content amplified. One key way to fight disinformation is to deprive these actors of such amplification channels. Moreover, major social media platforms now have policies against Coordinated Inauthentic Behaviour (CIB). Platform definitions of CIB differ, and there is sometimes a lack of transparency or precision over what constitutes CIB; we analyse this notion in a series of blog posts that can be found here.
In most of our investigations, we found clear behaviours that went against the platforms’ policies to fight CIB. These include, among others:
- Assets hiding their location and real identity;
- The artificial amplification of external URLs through spam behaviours;
- The use of pictures and names that do not exist or, worse, names and pictures that do exist and belong to other people;
- Efforts to artificially influence organic conversation through the use of multiple accounts;
- The use of intentionally misleading profile information.
Here are some of our most notable examples.
InfoRos: What happens when social media platforms act only partially against Russian disinformation
In June 2020, we published an extensive report showing how a Russian entity called InfoRos, tied to Russian military intelligence (GRU), was using a French-language website (ObservateurContinental.fr) and an English-language one (oneworld.press) to spread disinformation and polarising narratives in European and US informational ecosystems.
Our investigation was covered by the French media Next INpact, and we shared our findings with Twitter and Facebook, the main platforms used by both websites. We did not see any action from either social media platform until July 2020, when The New York Times and AP wrote articles about how sources inside the US government considered InfoRos part of the Russian effort to spread disinformation around the COVID-19 pandemic. These articles from the US press led to an instant response from Facebook and Twitter: they banned accounts and pages tied to the English-language One World, and it remains impossible to share links to oneworld.press on both platforms. However, despite our investigation, no action was taken by Twitter or Facebook against the French side of this Russian information operation [the Facebook page and Twitter account were both still available on 11/01/2022].

This has left Observateur Continental with plenty of opportunities to disinform. For example, in October 2020, the Observateur Continental Facebook page carried a post linking to an article suggesting that the French government might try to take advantage of the pandemic to kill part of the French population, in particular the elderly. This type of conspiracy theory, revolving around a drug called Rivotril, has been fact-checked multiple times in France. In April 2021, InfoRos was put on the US Treasury sanctions list for its ties with the GRU 72nd Main Intelligence Information Center (GRITs). We hope that this decision will lead to more transparency from Facebook and Twitter about why they have decided not to take any measures against the French-language assets connected to InfoRos.
France Libre 24: How Facebook’s inaction allowed an alternative website to become one of the main sources of COVID-19 disinformation in France
In January 2020, we published an investigation, covered by POLITICO EU, showing how a new alternative media outlet, France Libre 24 (FL24), which spreads disinformation and polarising content in French, had been hiding its ties to members of the Polish far-right.
One of the main France Libre 24 contributors, designated as “head of publication” after the release of our investigation, was called Anne Van Gelder. Despite many efforts, we have not been able to find clear evidence that this person exists, and several clues suggest that the picture used to create the Facebook profile under the name “Anne Van Gelder” is AI-generated. As mentioned in several of Facebook’s Coordinated Inauthentic Behavior Reports, the use of “fictitious accounts and personas as a central part of their operations to mislead people about who they are and what they are doing” infringes Facebook policies. And yet, as of January 2022, the Anne Van Gelder account remains on the platform despite our communication about this matter with the company.

Moreover, while Twitter rapidly took action and banned the small FL24 Twitter account, Facebook has still not banned the FL24 Facebook page as of January 2022. Since January 2020, this page has doubled its number of followers (from 20k to 40k). Worryingly, FL24 has continued to appear in multiple fact-checking articles as a source and amplifier of disinformation (see here, here, or here). An analysis from NewsGuard even ranked FL24 as the leading French-language website for the spread and amplification of COVID-19 disinformation. While Facebook could have acted against this disinformation actor, it has instead allowed FL24 to become a major hub for COVID-19 disinformation in France.
Tierra Pura: How a small Spanish-language conspiracy website with ties to the Epoch Times can still expand its activities on Facebook
In February 2021, we published an investigation, covered by the Spanish press agency EFE and the Canadian media Radio-Canada, showing how a new Spanish-language media outlet called Tierra Pura, with extremely close ties to the Epoch Times, spreads conspiracy theories about the COVID-19 pandemic and the US elections.
We shared our findings with YouTube, Twitter, and Facebook. YouTube rapidly banned the Tierra Pura channel but did not give us further insight into why it took that decision. Following an internal investigation, Twitter suspended several accounts linked to Tierra Pura and permanently blocked the sharing of Tierra Pura links on the platform. Its decision was based on the connections between Tierra Pura and “The Beauty of Life” (BL.com). BL.com is connected to the Epoch Media Group and has been banned from multiple social media platforms since December 2019, when several organisations revealed how it was involved in massive disinformation campaigns.

Despite our findings and the continuous spread of conspiracy theories by Tierra Pura, Facebook seems to have taken only very limited action against most of the Facebook and Instagram pages tied to this outlet. Since our investigation, Tierra Pura has been able to continue regularly publishing disinformation on Instagram (where it has over 6k followers) and Facebook, for example about COVID-19 vaccines or the US elections, through at least four main pages (TierraPura.org, TierraPura.org Argentina, TierraPura.org/BR, TierraPura.org – Latinoamérica). Concerningly, it is unclear whether Tierra Pura has been deprived of its ability to run ads, despite clear signs that ads are another way the website disseminates disinformation. Four months after the publication of our investigation, one page was still able to publish controversial ads, some of which were used to spread conspiracy theories around the Capitol Hill riot. As a reminder, Facebook announced in August 2019 that it had banned the Epoch Times from publishing ads on the platform.
FanDeTV: A slow and incoherent response from Twitter
Our investigation was published with Le Monde in April 2019 and is available here. This network, which regularly spread disinformation and is tied to an individual likely motivated by financial gain but also by ideology, revolved mainly around a small set of Twitter accounts. As shown by our investigation, a large part of the activity linked to these Twitter accounts was automated and coordinated. While the main account (@FandeTV) and two other accounts were suspended in the months following our investigation, a majority of FandeTV’s other Twitter accounts remained online, notably @poutinefrance, @trumpfranceinfo and @votezpoisson.
Undeterred by the small steps taken by Twitter to limit his activities, Cedric D.M (aka FandeTV) also opened a new Twitter account called “Alertes Infos USA & Hydroxychloroquine” during the pandemic. This account did not hesitate to retweet false allegations of voter fraud linked to the US elections or to amplify COVID-19 disinformation. This specific account was suspended in February 2021, after we flagged it to Twitter.
Update: These accounts were finally removed following this publication’s release in September 2021: @PoutineFrance, @TrumpFranceInfo, @VotezPoisson, as were three other accounts mentioned in our investigation. Following a request for comment sent by the EU DisinfoLab on September 8th, Twitter made the following statement: “Twitter has suspended the following accounts for violating our rules: @PoutineFrance @TrumpFranceInfo @VotezPoisson. This decision was based on our team’s review of the accounts as well as the available information from accounts in a related network that were previously suspended.”
L’Infonationale: When Facebook does half the job, fake French media outlets managed from Ukraine can continue to thrive
In December 2019, we published an investigation about fake French-language media outlets managed from Ukraine. These fake outlets mainly copy-paste content from other sources (including fringe French political and alternative websites), regularly amplifying polarising messages and disinformation. Our investigation was covered by Le Monde. To amplify the content from the fake websites, the Ukrainian network has been using:
- Facebook pages impersonating French politicians (Marine Le Pen 2022, Jean-Luc Mélenchon 2022);
- Facebook pages linked to the fake media outlets (L’InfoNationale, l’InfoPopular, Republique5);
- Heavy spamming of some French Facebook groups with content from the fake media outlets.
Following our investigation, the Facebook pages impersonating French politicians were quickly removed, but the Facebook pages linked to the fake media outlets have remained online. Nor were any measures taken against the Facebook accounts heavily spamming Facebook groups with polarising political content and disinformation. As a result, you can still find accounts tied to the Ukrainian network regularly amplifying COVID-19 disinformation in the group “COVID-19 / vaccine / Professor Raoult” (now a private group renamed “COVID-19 / vaccination campaign 2022”), sharing an inflammatory article about a French MP who allegedly hopes that 70-year-old French people will get COVID-19, or sharing an interview with a controversial researcher containing anti-vaccine disinformation. Some of these accounts claim to be French but are very likely managed from Ukraine or Russia. Despite the infringement of multiple CIB policies (amplification through coordination, spam behaviours, fake accounts…), Facebook took only minor measures, which have not prevented this disinformation network from continuing its activities.
How can the Digital Services Act help us solve this problem?
The above examples of platform inaction are the result of a lack of accountability and oversight. Luckily, the EU’s draft Digital Services Act (DSA) provides an opportunity to address this problem. With a few tweaks to the European Commission’s proposal, disinformation researchers in Europe would have a standard-setting regulatory framework for the coming years. Here are some examples:
- Ensuring complaint-handling systems work for researchers. The fight against disinformation would benefit from a formalised system, defined by law, enabling accredited researchers to notify platforms of infringements of their own terms and conditions, and to notify them when they have not acted against these infringements. A similar mechanism is contained in the proposal for an “internal complaint-handling system” (Article 17 of the draft DSA). The Commission’s proposal would need to be amended, however, because under the current wording complaints can only be submitted when a platform takes a decision to remove or disable content.
- Ensuring out-of-court dispute settlement bodies can be activated by civil society. Civil society should also have access to the out-of-court dispute settlement body proposed in the DSA (Article 18). Researchers should be able to take further action against platforms if the platform in question decides not to act on content clearly breaching the rules that users abide by as defined in the T&Cs. Such a system would contribute to ending the arbitrary nature of platform decisions. It would hopefully also break the “name and shame” vicious circle that is currently the only meaningful way for civil society to get platforms to act against such malicious networks. In other words, researchers should not have to contact The New York Times whenever a malicious network needs to be taken down. This would not be necessary if there were a transparent process of accountability and a right of appeal.
- More transparency on corporate terms & conditions. A heavier reliance on the application of company terms & conditions would also necessitate more transparency about changes to those terms & conditions. To prevent platforms from making sudden changes to their terms & conditions to justify a lack of action, EU regulators should require that very large online platforms provide access to searchable archives of their terms & conditions.
For more information about how the DSA can help fight disinformation, see the EU DisinfoLab’s Open Letter, co-signed by 50+ leading experts on disinformation here.
NOTE: Both Twitter and Facebook were contacted for comment by the EU DisinfoLab on their reactions to our OSINT investigations before the publication of this piece. Twitter provided comments about the FanDeTV case while Facebook did not respond.