June 26, 2023

By Maria Giovanna Sessa, Senior Researcher, EU DisinfoLab

1. Introduction

On June 22, CheckFirst published their latest investigation uncovering a massive scam involving 1,500+ Facebook ads luring users to over 160 domains hosting fake media sites. The websites redirected visitors to a form that collected personal information under the pretence of registering them on a fictional investment platform. Pushing a get-rich-quick scheme, scammers used social engineering techniques to gain control of hundreds of content creators’ Facebook pages that collectively reached over 10 million users.

This research employs data provided by CheckFirst to EU DisinfoLab and EDMO Belux to focus on the Belgian case. Belgium is one of the countries involved in the deceptive campaign, which impersonates six news media organisations in the country, including Le Soir, RTBF, and De Standaard. The ads and misleading articles targeted the Belgian public, conveying false testimonies by national political figures such as Elio Di Rupo and Alexander De Croo.

As this is not the first time the campaign has been exposed, concerns arise regarding the enforcement of regulation and ways to limit the spread of this operation, given Meta’s failure to mitigate it. The following section presents how the deception targeted Belgian outlets, politicians, and audiences. The final section then outlines the actions that can be taken within the framework of Belgian legislation and the Digital Services Act (DSA).

2. Belgian case study

This part delves into examples from Belgium to display what the fraudulent campaign looked like online. It analyses the layout of a typical ad and provides an overview of the politicians and public personalities targeted, media impersonated, as well as the audience it aimed to reach. 

2.1 The recipe for scam ads: known media, public personalities, and a scandal

The ads suggest that a prominent media outlet either interviewed public figures or uncovered a scandal that involved them. As described below, sensationalistic language is employed, implying the revelation of a secret that “only 7% of Belgians know” or the unfolding of a scandal that “shocked the whole world” and might entail the end of Alexander De Croo’s career. 

The Belgian Prime Minister is targeted as well as Véronique Barbier, TV broadcaster for RTBF, whose logo is used to ensure credibility. The ad also contains a link to another website, promoting social benefits with a clickbait title.

Figure 1. A scam ad using impersonated media and public figures.

2.2 Belgian politicians and actors: exploiting the famous to promote an investment scam

Besides Alexander De Croo, who recurrently appears in the manipulated content identified, Elio Di Rupo, Minister-President of Wallonia, is also repeatedly featured. CheckFirst pointed out that the negative reactions, in the form of angry emojis piling up under the Facebook ads, risk being detrimental to these politicians’ image.

The scammers’ ultimate goal seems to be clickbait, as other publications portray the politicians in a more positive light. For instance, an ad claims that “Elio Di Rupo invites all Belgian to profit” from the promoted scam investment. Another article states: “Belgians made millions from home during the coronavirus pandemic thanks to Prime Minister De Croo. The big banks WANT HIM GONE!”. 

Figures 2 and 3. A Facebook ad using Elio Di Rupo (left) and a website impersonating La Libre using Alexander De Croo (right), both promoting the investment scam. 

Besides politicians, public figures are also mentioned to reinforce a notion of credibility and desirability. These include Belgian actor Benoît Poelvoorde, whose latest investment also allegedly frightened the big banks, and internationally known celebrities such as footballer Romelu Lukaku (who allegedly invested 50 million in a start-up), Elon Musk, and Jeff Bezos.

Figures 4 and 5. Websites impersonating Le Soir (up) and De Standaard (down) to promote the investment scam.

2.3 Impersonating Belgian news websites: a cross-language scam

As anticipated, a prominent strategy of this operation consisted of forging news websites. Some have already been mentioned, such as Le Soir, La Libre, and De Standaard; the others were 7sur7, De Morgen, and HLN, for a total of six Belgian news organisations. Belgium is particularly relevant in this investigation, as it has the highest number of impersonated outlets in a single country. An important clarification: despite using a similar tactic of media cloning, this operation is unrelated to Doppelganger, the campaign unveiled by EU DisinfoLab.
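To illustrate how researchers can triage candidate domains against the impersonated brands, the sketch below scores domain names against the legitimate domains of the six outlets using simple string similarity. This is a minimal illustration, not CheckFirst’s methodology: the candidate domains are hypothetical examples, and the 0.55 threshold is an arbitrary assumption.

```python
import difflib

# Legitimate domains of the six impersonated Belgian outlets.
LEGITIMATE = ["lesoir.be", "lalibre.be", "standaard.be",
              "7sur7.be", "demorgen.be", "hln.be"]

def flag_lookalikes(candidates, threshold=0.55):
    """Return (candidate, outlet, score) triples for candidate domains
    that resemble a legitimate outlet without being identical to it.
    The threshold is an arbitrary assumption for illustration."""
    hits = []
    for cand in candidates:
        for legit in LEGITIMATE:
            score = difflib.SequenceMatcher(None, cand.lower(), legit).ratio()
            if cand.lower() != legit and score >= threshold:
                hits.append((cand, legit, round(score, 2)))
    return hits

# Hypothetical candidate domains, not taken from the CheckFirst dataset.
suspects = flag_lookalikes(["lesoir-actu.com", "standaard-nieuws.net",
                            "example.org"])
```

In practice, edit-distance heuristics like this only generate leads; each flagged domain still needs manual verification of the cloned layout and logo.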

The impersonators unduly appropriated the logo and layout of these online newspapers, translating the same content into French and Dutch. To illustrate this, the two clickbait articles below celebrate the launch of “the newest platform Bit Capex Pro 360, whose objective is to help families get rich”. This fabricated initiative is attributed to Tesla in the impersonation of the Dutch-language newspaper HLN and to Amazon in that of the French-speaking outlet 7sur7, featuring photos of Elon Musk and Jeff Bezos respectively.

Figures 6 and 7. Websites impersonating Dutch-speaking HLN (up) and French-speaking 7sur7 (down) to promote the investment scam.

One of these stories caught the attention of fact-checkers, as RTBF debunked a fraudulent get-rich-quick claim featuring Tesla and Twitter’s owner. Le Soir also issued a retraction, calling out the impersonation and distancing itself from the scam.

2.4 Age, language, and country of residence: targeting Belgian audiences

The use of Belgian media and personalities in the campaign’s imagery is already enough to demonstrate the targeting of Belgian audiences. To corroborate this further, the image below shows a “why am I seeing this ad” Facebook panel. In this case, the advertiser selected French-speaking users based in Belgium. Another interesting detail is the age cohort (over 30), presumably judged to have enough economic power and interest to invest in the scam.

Figure 8. Facebook’s explanation of why a user is seeing the deceptive ad. 

3. Legal implications and actions

The astonishing size and duration of the campaign (already ongoing in 2018) evidence the limits of Meta’s detailed advertising policy and review mechanisms. The fact that 1,517 ads, identified thanks to the Ad Library, managed to bypass these safeguards raises concerns regarding the platform’s enforcement capabilities, especially in the context of the Digital Services Act (DSA) as well as the affected countries’ national legislation. The rest of the study explores the relevant articles of EU and Belgian legislation that could apply to this operation.

3.1 Applicable Belgian legislation: Is this campaign illegal?

  • Disinformation

Under Belgian law, disinformation/misinformation, as an individual behaviour consisting of sharing fake news, is not yet regulated. Therefore, it does not per se constitute illegal behaviour.

  • Impersonation of a person, impersonation of an organisation, and fraud

The impersonation of a person is punishable under the Belgian Criminal Code by imprisonment for up to 3 months and/or a fine of up to 2,400 euros (Article 231 of the Belgian Criminal Code). 

The article defines impersonation as anyone “who publicly takes on a name that does not belong to them”. Therefore, it seems applicable to the campaign that used social engineering techniques to gain control of third parties’ Facebook pages and make up false interviews with public personalities. 

Moreover, the article is considered to include the impersonation of an organisation’s name, which is otherwise not explicitly mentioned.

Article 496 of the Belgian Criminal Code defines fraud as the offence committed by a person who, intending to appropriate something belonging to another, has it delivered to him by fraudulent means, including using a false name or capacity. Fraud is punishable by imprisonment for up to five years and a fine of up to 24,000 euros (Article 496 al. 1).

  • A violation of the hosting services’ terms of service

The investigation did not find Belgium-based hosting services. Nonetheless, two were identified in neighbouring countries: Dutch AbeloHost, whose terms of service prohibit “scam/fraudulent websites” and “any websites associated with hacks or cheats”, and French OVHcloud, whose terms of service prohibit a “fraudulent use of Content, or use of Content in violation of rights belonging to a third party such as personality rights, copyrights, patents, trademarks or other intellectual property rights”.

  • Online content moderation

The “e-commerce Directive” (Directive 2000/31), as transposed into Belgian law, provides rules for information society services. In particular, Article 14(2) establishes that where the service provider has actual knowledge of an unlawful activity or information, it must immediately notify the Public Prosecutor, who shall take the appropriate measures in accordance with Art. 39bis of the Code of Criminal Procedure. Article 15(2) then states that service providers are obliged to inform the competent judicial or administrative authorities without delay of any alleged unlawful activities or information provided by the recipients of their services.

The Commission Recommendation (EU) 2018/334 on measures to effectively tackle illegal content online covers infringements of consumer protection laws, which can undermine users’ trust and damage platforms’ business models. In some instances, the service providers might even gain an advantage from such illegal activities, e.g., Facebook monetising the scam ads.

  • Unfair advertising

Unfair advertising is a form of misleading commercial practice towards consumers, tackled by Articles VI.97 to VI.100 of the Belgian Code of Economic Law. According to Article VI.97, “a commercial practice shall be regarded as misleading if it contains false information and is therefore deceptive or (…) is likely to cause the average consumer to take a transactional decision that he would not have taken otherwise”. Moreover, Article VI.99 provides that a misleading omission (e.g., not mentioning an element that may be important for the consumers to make an informed decision) may also be considered a misleading commercial practice.

3.2 Applicable DSA articles (Belgian case): Obligation to act once notified

Article 6 exempts platforms from being liable for the content they host unless they have actual knowledge of the illegal activity or illegal content. According to CheckFirst, the affected creators whose pages were hacked notified Facebook through the platform’s notification system or the Meta Business Help Centre, although each country established its own benchmarks to assess the “actual knowledge”. Nonetheless, the case was taken to court in the UK, while Germany asked Meta for a comment on the matter. Therefore, Facebook has actual knowledge about these illegal activities (Article 6.1(a)) but did not act expeditiously to remove or disable access to the illegal content (Article 6.1(b)).

Transparency of notices received and measures taken

  • Articles 15, 24, and 42 on transparency reporting will bind Meta to report on its activities in response to receiving notices, for instance, on how many notices were submitted by Belgian authorities (Article 15.1(a)), trusted flaggers (Article 15.1(b)), or through the internal complaint-handling system (Article 15.1(d)). This investigation should be a case study for an auditor to assess Meta’s general compliance under Article 37, drawing on the transparency reporting defined by Article 15 of the DSA.
  • Article 20’s internal complaint-handling system empowers users to appeal a platform’s over- and under-moderation decisions, the latter being a tool for victims to challenge Facebook’s inaction. Similarly, Article 21 introduces an out-of-court dispute settlement mechanism. At this stage, we have no information on whether Meta has implemented these provisions.
  • Article 23 on measures and protection against misuse allows a platform to suspend, for a reasonable period, the provision of its services to recipients that frequently provide manifestly illegal content. At this stage, we have no information on whether Meta has implemented this provision.

However, this case raises the issue of who is responsible for the content of a hacked Facebook Page, or for content placed through paid influence. Several affected creators reported the abuse to the platform, which sometimes proceeded to delete the Page instead of restoring legitimate ownership. Similar concerns apply to the risk mitigation measures for systemic risks.

  • Facebook failed to disclose to its users the real identity of those who paid for the advertisements, thereby violating Article 26.1(b) and (c) on online advertising transparency.

A violation of Facebook’s ads policy:
1,517 scam ads managed to circumvent Meta’s Advertising Standards, which prohibit unacceptable business practices (i.e., scamming people out of money or personal information) and misleading claims that have been fact-checked or are made on a hacked page. Gaining unauthorised access to a user’s account and circumventing the platform’s ad review are also against the platform’s Community Standards. Doubts arise regarding the adequacy of the platform’s review mechanisms (automated and manual) and policy enforcement capacity, especially in the context of the DSA’s implementation.

A repeated operation shows the risk is not being addressed properly

The risk assessment (Article 34) is crucial: Meta should consider the systemic risks of the dissemination of illegal content, such as this operation, through its services. The provision is especially relevant in countries like Belgium that do not criminalise disinformation per se. Here, the illegal content could refer to the impersonation of the media and the violation of their trademarks, or potentially to the impersonation and defamation of politicians in the ads. Furthermore, Article 35 on risk mitigation also applies, indicating the measures that Meta should take to mitigate the risks posed by the campaign. Meta should assess to what extent its advertising system can be changed to prevent operations like this from happening in the future, for instance by ensuring better authentication of advertisers.

Insufficient risk mitigation measures and lack of accountability:
In 2021 and 2022, Le Soir, RTBF, and Full Fact reported on parts of this operation. The Full Fact article notes that Martin Lewis had sued (then) Facebook Inc. but dropped the lawsuit after the company promised to take action against these scam ads. As the operation has been going on for years, Facebook failed to identify a recurring modus operandi and threat indicators. The inadequacy of risk mitigation measures calls attention to the need for platform accountability.
  • As Facebook ads are crucial to this case, Article 39 on additional online advertising transparency is of great importance. As pointed out in the section on Article 40 below, the Meta Ad Library currently only collects a small portion of ads that are qualified as issue-based or political, making it impossible to assess the size of this operation.
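To make the coverage gap concrete: the public Ad Library API only returns ads classified as political or issue-based, a restriction that any programmatic query has to encode explicitly. The sketch below illustrates such a query; it is not part of the original investigation. The Graph API version and the availability of an Ad Library API access token are assumptions, while the endpoint, parameter, and field names follow Meta’s public Graph API documentation at the time of writing.

```python
import json
import urllib.parse
import urllib.request

# Graph API version is an assumption; adjust to a currently supported one.
AD_ARCHIVE_URL = "https://graph.facebook.com/v17.0/ads_archive"

def build_query(token: str, terms: str, countries=("BE",)) -> dict:
    """Assemble Ad Library API parameters for a keyword search."""
    return {
        "access_token": token,
        "search_terms": terms,
        # The Graph API expects list parameters as JSON-encoded strings.
        "ad_reached_countries": json.dumps(list(countries)),
        # Only political/issue ads are searchable: the coverage gap
        # discussed in the bullet above.
        "ad_type": "POLITICAL_AND_ISSUE_ADS",
        "fields": "id,page_name,ad_creative_bodies,"
                  "ad_delivery_start_time,impressions,spend",
        "limit": "100",
    }

def fetch_ads(token: str, terms: str):
    """Yield ads page by page, following the 'paging.next' cursor."""
    url = AD_ARCHIVE_URL + "?" + urllib.parse.urlencode(build_query(token, terms))
    while url:
        with urllib.request.urlopen(url, timeout=30) as resp:
            payload = json.load(resp)
        yield from payload.get("data", [])
        url = payload.get("paging", {}).get("next")
```

Note that `impressions` and `spend` come back as ranges rather than exact figures, which is part of why researchers argue the current transparency level is insufficient.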

Access to data to further research this campaign

  • Article 40 on data access and scrutiny is key to ensure further research, allowing CheckFirst and the EDMO Belux network to request data about Meta’s takedowns.

What a data access request would look like:
The Facebook Ads Library was useful in gathering the ads, especially compared to other platforms such as TikTok, Yahoo, and Google, where the investigation could not progress. However, CheckFirst experienced some difficulty in retrieving ads-related data. For example, some ads would appear in the researchers’ feed or be deleted by Facebook from the controlled pages, but did not appear in the library. In some ads about “social issues, elections or politics,” the funding entity was not disclosed, in violation of Measure 6.2 of the 2022 Strengthened Code of Practice on Disinformation, signed by Meta.

To fully assess the size of this years-long scam operation, additional data requests would include data points on the ads’ reach, click-throughs, and funding. Specific data on the ads’ expenditure and audience demographics would have favoured the investigative process. In addition, a reverse image search functionality in the ad library would have been extremely helpful, considering the systematic recycling of visuals.

CheckFirst dedicated approximately 80 hours to conducting the open-source investigation over two weeks. The leading researcher commented: “with unrestricted data access, the time spent might have doubled, but the volume of discovered data could have been tenfold. We strongly believe that our current findings merely scratch the surface of the operation due to the necessity of manually conducting all processes, from searches to screenshot captures and data exports”.
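On the recycled visuals: in the absence of a reverse image search in the ad library, investigators commonly fall back on perceptual hashing to cluster reused images. The sketch below illustrates the idea with a difference hash (“dHash”) over already-decoded grayscale pixel grids; a real pipeline would decode and downscale the ad images with an imaging library, which is assumed away here, and this is an illustration rather than CheckFirst’s actual tooling.

```python
def dhash(pixels):
    """Difference hash: one bit per horizontally adjacent pixel pair,
    set when brightness increases left to right. Expects a small
    grayscale grid (list of rows of ints), e.g. a 9x8 thumbnail."""
    return [1 if left < right else 0
            for row in pixels
            for left, right in zip(row, row[1:])]

def hamming(h1, h2):
    """Number of differing bits between two hashes of equal length."""
    return sum(a != b for a, b in zip(h1, h2))

# A uniformly brightened copy keeps every left/right comparison,
# so its hash is identical: recycled visuals with light edits
# (recompression, brightness tweaks) stay within a small distance.
original = [[10, 20, 30], [30, 20, 10]]
brighter = [[p + 40 for p in row] for row in original]
```

Hashing every creative in a dataset and grouping those within a small Hamming distance would surface the systematic recycling of visuals that the investigation had to track manually.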

Potential and future application

  • There is a potential application of Article 9: had the relevant Belgian judicial or administrative authorities issued an order to act against the illegal content, Facebook would have been required to inform the authority of any effect given to the order.
  • Article 10 would be crucial to attribute illegal activities to specific persons: had the relevant Belgian judicial or administrative authorities issued an order to provide specific information, Meta would have been required to inform the authority of any effect given to the order. For instance, authorities could request information about the providers of the false URLs or Facebook ads. However, this would require one of the directly affected parties – i.e., an impersonated media outlet – to file a legal complaint with the national authorities.