Dear Disinfo Update readers,

In this edition of our newsletter, we re(-re-re-)visit Doppelganger. Despite a range of responses from key stakeholders, including sanctions and legal actions, the operation, now in its second year, persists. In our latest blog, we’ve broken down cost-effective strategies to combat this operation into five actionable categories, and this recent publication by NATO, authored by our researchers, assesses the impact of platform regulation on the EU disinformation environment in the context of the Doppelganger case.

We are also gearing up to reconnect with you at #Disinfo2024 and to dive into the latest in disinformation research and solutions. The very first draft of the conference programme is now online – take a look at the already confirmed sessions. Are YOU ready? Make sure to request your tickets by 31 May to not miss out on the Early Bird offer!

In this newsletter, you’ll also find a bucketload of updates about elections, AI, social media platforms, and the rest of the usual suspects. Read on!

Our webinars

Upcoming – register now!
Past – watch the recordings!

Your message here?

Want to reach the 10,000+ readers of this newsletter?

Disinfo news & updates

  • TikTok’s initiatives. TikTok has detailed its efforts to counter covert influence operations and enhance platform transparency. The platform states that it has identified and dismantled networks engaged in coordinated inauthentic behaviour, and announces new initiatives including API access for researchers to evaluate content moderation systems and the publication of detailed quarterly transparency reports.
  • Don’t hype it, nor hide it. This article argues that both underestimating and overstating the threat of disinformation can be detrimental to democracy – while foreign influence operations can impact open societies, exaggerating their effectiveness benefits adversarial regimes.
  • Ad-verse outcomes. According to a report by The Guardian, Meta has come under scrutiny for approving a series of AI-manipulated political ads in India that spread disinformation and incited violence. The incident raises concerns about social media platforms’ ad review processes and their role in the spread of politically motivated disinformation.
  • Costly phone prank. The perpetrator behind the robocalling operation that sent out automated calls impersonating President Joe Biden to target potential voters has been charged and fined 6 million USD by the US Federal Communications Commission (FCC).
  • Setting the AI safety bar high. Sixteen leading AI companies have pledged to adhere to a set of AI safety standards. These standards involve publishing thresholds that define unacceptable risks, detailing their strategies for mitigating such risks, and outlining measures to ensure their AI advancements remain within these limits.
  • …in the meantime, at OpenAI. After two key members left over frustrations about the lack of prioritisation of safety work, OpenAI has dissolved its AI safety and trust team. The decision raises questions about the company’s commitment to the future of its alignment research.
  • AI again. This NewsGuard investigation exposed an AI-enhanced disinformation campaign falsely claiming that six countries have withdrawn their gold reserves from the United States. The campaign leverages generative AI to produce and disseminate content, aiming to undermine trust in the country’s financial system.
  • Deepfake debate. At the Cannes Film Festival, a film featuring an AI-generated President Putin as its main character has garnered significant attention and financial success. The film explores themes of political manipulation and disinformation, resonating at many levels with contemporary global concerns about the impact of deepfake technology on politics and society.

Brussels corner

  • More Q’s than A’s. Five weeks late, the European Commission has finally answered the parliamentary question by Dutch MEP Paul Tang (S&D) on the apparent contradiction between Article 18 of the European Media Freedom Act, which obliges platforms to leave “media service provider” content online for at least 24 hours even if it appears to be spreading disinformation, and the obligations under the Digital Services Act (DSA) and the strengthened Code of Practice on Disinformation to remove disinformation. The Commission explained that the obligation to leave apparent disinformation online covers neither situations addressed by the platforms’ risk assessments and mitigation measures under Articles 34 and 35 of the DSA, nor the (voluntary) “obligations” of the strengthened Code of Practice – for reasons that the Commission did not communicate.
  • Future of digital policy. On 21 May, the Council of the European Union adopted two presidency conclusions documents related to countering disinformation. The first aims to set the scene for digital policy-making for the coming five years; disinformation gets a single mention, in a list of illegal or harmful phenomena which need to be fought while fostering innovation, entrepreneurship, and capital market development. The second focuses on democratic resilience and safeguarding electoral processes from foreign interference: it enumerates the various legislative, non-legislative, and institutional tools developed in and by the EU, highlights the importance accorded to the topic, and reinforces calls for the various institutional actors to take initiatives in this policy field.
  • United against FIMI. This joint statement by 16 Member States including Germany, France, Italy and Poland, calls for increased cooperation with civil society organisations and the involvement of diverse stakeholders, including expert bodies like the European Digital Media Observatory (EDMO), to effectively counter disinformation and foreign information manipulation and interference (FIMI).

Reading & resources

  • Chinese social media perspectives. This study examines the discourses linked to the Russia-Ukraine war on Weibo and Douyin, the two leading Chinese social media platforms. Its key findings include that Weibo amplifies Russian state perspectives, aligning closely with Russian narratives, and that Douyin frames the conflict as beneficial to Chinese national interests, stirring nationalistic sentiments. The study highlights the importance of cross-platform analysis to understand public sentiment and the Chinese state’s discursive strategies.
  • The disinformation economy. More than 80% of traffic to known sources of disinformation can be monetised through online advertising. This report’s findings highlight the role of the opaque online advertising industry in sustaining disinformation and point out how small changes to disrupt the financial incentives behind disinformation could make a significant difference in tackling the spread of false information.
  • Be election smart. As the European Parliament elections approach, the ‘Be Election Smart’ campaign by the European Digital Media Observatory (EDMO) aims to empower voters to make informed decisions and safeguard the integrity of the democratic process. The campaign resources are translated into various EU languages by the EDMO Hubs – for Belgium and Luxembourg, take a look at the campaign page of EDMO BELUX (to which EU DisinfoLab is happy to contribute)!
  • Moscow’s Balkan agenda. This documentary by the Balkan Investigative Reporting Network (BIRN) exposes the extensive reach of Russian disinformation efforts in the Balkans, highlighting how existing social and political tensions are exploited to amplify pro-Kremlin narratives.
  • Tin Foil Confessions. Brent Lee, a former conspiracy theorist who we had the honour of hosting at our 2023 annual conference, is launching a new limited series podcast and book that will explore the rise of conspiracism and its societal impact. The project seeks to enlighten the public on the allure of conspiracy theories and provide tools for helping loved ones disengage from these harmful beliefs. To bring this project to life, consider pitching in with a donation.
  • More for those who enjoy audio. How could Russian disinformation skew the EU elections? What’s behind Europe’s modest successes in countering it? This podcast discusses some of the latest examples of interference from the Kremlin, such as sharing AI-generated deepfake videos of politicians or cloning the voices of public figures to impersonate them.

This week’s recommended read

Our Executive Director Alexandre Alaphilippe’s recommendation this week is not a read, but a watch. Two weeks ago, the Swedish channel TV4 embarked on a very interesting journey by infiltrating the Sweden Democrats party. The far-right party, currently in the ruling coalition, has seen a surge in its electoral results over the past 15 years.

The twelve-month undercover investigation by TV4 (here in Swedish) reveals that the party’s communications staff has been secretly managing a troll factory, using fake accounts on TikTok, Instagram, Facebook, and YouTube to manipulate and inflame online debates with disinformation and hate speech. The report also uncovers the connections between the allegedly independent conservative media outlet Riks and the Sweden Democrats’ communications department, showing how the outlet aligned its strategy with the party in joint meetings with the party’s communications leadership. The strategy aims to push young people towards extreme-right values and turn them into permanent voters. The investigation further reveals discussions about sowing division and hate by impersonating Arab communities in Sweden to push them towards their opponents.

International coverage of this investigation can be found in both French and English.

Why is this watch recommended? While everybody acknowledges the significant issue of Foreign Information Manipulation and Interference (FIMI), the report reminds us that domestic actors can also manipulate information. The investigation shows that the Sweden Democrats’ team employed various tactics to bypass platform security measures, including buying new phones, changing IP addresses, and using VPNs. These disinformation campaigns total more than three million views per month – in a country of ten million inhabitants, that means a significant share of the population may have been exposed to this disinformation and smear campaign.

Events & announcements

  • 4 June: The event ‘How to Engage Persuadables and Combat Misinformation During an Election’ organised in London focuses on strategies to engage voters and tackle misinformation during election periods.
  • 4-6 June: The ClientEarth Summit raises awareness about some of the greatest environmental challenges of our time. This year’s topics include climate change, biodiversity loss, and disinformation.
  • 6 June: AI, Media & Democracy lab organises the webinar ‘Discussion on AI and the European Elections’ that dives into the opportunities and threats of (generative) AI for democracy and elections in Europe.
  • 6-7 June: The EU Parliamentary Election VERIFICATHON workshop will take place in Amsterdam and online. This two-day event invites journalists, fact-checkers, and activists to identify and verify AI-generated disinformation about the EU. Funded and supported by the veraAI project, the workshop includes a verification sprint and a competition. Register here.
  • 17-19 June: The European Dialogue on Internet Governance, EuroDIG 2024, will be organised in Vilnius, Lithuania.
  • 19 June: The ‘Meet the Future of AI’ event will be organised by AI4Media, Titan, veraAI, AI4Trust, AI4Debunk, and AI-CODE in Brussels. This second edition will focus on relevant issues and challenges around generative AI for the public good and democracy. Participation is free, but registration is required.
  • 19 June: ‘EU Vision for Media Policy in the Era of AI’, hosted by AI4Media in Brussels, will explore AI’s impact on media, focusing on EU legislation and the ethical and legal challenges AI poses to media freedom and expression. This morning session precedes the ‘Meet the Future of AI’ event at the same venue in the afternoon. Both events require separate registrations.
  • 26-28 June: The International Fact-Checking Network’s (IFCN) 11th Global Fact-Checking Summit, GlobalFact 11, will be held in Sarajevo, Bosnia and Herzegovina, and online.
  • 9-10 July: An ideathon ‘Elections Free of Disinformation’ is scheduled to take place in Thessaloniki, Greece, to pioneer innovative strategies to combat disinformation in future elections. Apply by 31 May to participate.
  • 16-18 July: The 2024 International Conference on Social Media & Society will gather leading social media researchers from around the world in London.
  • 1 October: The Tech and Society Summit will bring together civil society and EU decision-makers in Brussels, Belgium, to explore the interplay between technology, societal impacts, and environmental sustainability.
  • 9-10 October: Our annual conference, #Disinfo2024 will take place in Riga, Latvia. Request your ticket by 31 May to benefit from the Early Bird offer!
  • 29 October: Coordinated Sharing Behavior Detection Conference will be organised by the University of Sheffield. The event will bring together experts to showcase, discuss, and advance the state of the art in multimodal and cross-platform coordinated behaviour detection.
  • 14 November: Arcom, the French Audiovisual and Digital Communication Regulatory Authority, is calling for research proposals around the themes of information economics, digital transformation, audience protection, and media regulation for its 3rd Arcom Research Day. Submit your proposal by 1 September.
  • RightsCon 2025. Submit your proposal for RightsCon 2025 by 2 June.


This good X!