Dear Disinfo Update readers,

This edition of our newsletter addresses (among other topics) emerging threats that are shaping the upcoming EU elections, and elections in general. Below, you’ll find a collection of news and resources on artificial intelligence, social media platforms, and climate change disinformation, viewed through the election lens.

Further exploring the issue of climate change disinformation, and in support of our community’s ongoing efforts, our Climate Clarity hub brings curated insights, updates, and actions to help you navigate the complex landscape of climate change disinformation. We’ve just updated its contents – take a look!

Onward and upward! Scroll downward to explore the critical issues covered in this newsletter.

Our webinars

Upcoming – register now!
Past – watch the recordings!

Your message here?

Want to reach the 10,000+ readers of this newsletter?

Disinfo news & updates

  • Climate disinfo & elections. This article investigates the shift of conspiracy theorists from COVID-19 to climate change, using EU climate policies as a new target. It examines the potential impact of these disinformation campaigns on the upcoming EU elections.
  • Green Deal disinfo. This article highlights a surge in disinformation targeting the EU Green Deal ahead of the European Parliament elections. False claims are being spread to undermine support for the initiative, fostering distrust among voters and making informed democratic choices harder.
  • Ads amplifying antagonism. This analysis highlights how state-aligned actors in Hungary are disproportionately amplifying hostile narratives against perceived enemies, both domestic and European, via ads on social media platforms. This troubling development could have serious implications for democratic discourse, especially given the approaching EU Parliament and local elections.
  • Lost in AI translation. Generative AI is aiding fact-checkers, especially in this election-heavy year with rising disinformation via AI-created content. However, AI-based tools often lack local linguistic and contextual accuracy. Experts from Norway, Georgia, and Ghana noted significant limitations in AI training models, affecting non-Western and less widely spoken languages.
  • AI: bound to err. Research indicates that AI chatbots, inherently designed to generate responses, often produce erroneous “hallucinations” or fictitious outputs. Experts suggest that integrating robust fact-checking systems could mitigate such errors. However, given the fundamental design of large language models (LLMs), achieving perfect accuracy is unattainable, which means their use in contexts where accuracy is critical requires caution.
  • False Façade / CopyCop. In recent days, the European External Action Service (EEAS) Strategic Communication division (Stratcom), in collaboration with Spanish authorities, uncovered a new FIMI operation involving the creation and dissemination of pro-Russian content. The operation was dubbed False Façade, a reference to its aim of obscuring the content’s origins and laundering it before targeting Western audiences. The same network of websites was denounced shortly after by Recorded Future’s researchers, who chose a different name for the campaign, CopyCop, with an emphasis on the use of generative AI to plagiarise and modify content from legitimate media sources. This research delves deeper into the technical structure, ventures an attribution, and highlights coordination with other Russian campaigns such as Doppelganger and Portal Kombat.
  • DSA whistle. The EU Commission has introduced whistleblower tools for the Digital Services Act (DSA) and Digital Markets Act (DMA) that enable secure and anonymous reporting of breaches by very large online platforms (VLOPs) and very large online search engines (VLOSEs) designated under the DSA.
  • The true truths. This NewsGuard analysis reveals that websites with “truth” in their names are predominantly misleading, and that 89% of the websites analysed are major spreaders of misinformation.
  • Too little too late? A couple of weeks ago, the EU Commission launched investigations into Meta under the Digital Services Act (DSA) for inadequately addressing the spread of election disinformation. Doubts have also been expressed about the effectiveness of these investigations, deeming them too late to impact the upcoming European elections. Critics, including Baltic and Nordic NGOs and researchers, have pointed out the insufficient number of Nordic language content moderators on the company’s platforms to curb Russian disinformation campaigns targeting NATO and the EU. Read also our reaction to the investigations.
  • Anatomy of a scroll. This article, part of Politico’s ‘Bots and ballots: How artificial intelligence is reshaping elections worldwide’ series, examines TikTok’s AI algorithm and its role in spreading misinformation during the Israel-Palestine armed conflict. The analysis reveals that the platform’s design, which prioritises engaging content, promotes the spread of viral misinformation.
  • TikTok, what next? TikTok and its Chinese parent company, ByteDance, have filed a lawsuit in a US federal court seeking to block a law signed by President Joe Biden. The law was enacted in response to concerns that TikTok could pass sensitive user data to the Chinese government and spread disinformation and propaganda. According to TikTok, this represents an “extraordinary intrusion on free speech rights”. This legal action comes amidst increasing global scrutiny and bans of the app in several countries.
  • TikTok labelling. TikTok is expanding its AI content labelling policy. The platform already labels content made using its own in-app AI effects and mandates that creators identify any content they generate that includes realistic AI, as reported in the previous edition of our newsletter. Now, in partnership with the Coalition for Content Provenance and Authenticity (C2PA), TikTok is implementing additional generative AI transparency measures. Videos uploaded to the platform will be scanned for AI markers and labelled as such in-stream when detected.
  • Platform whack-a-mole. Disinformation remains prevalent in the Indian digital landscape, despite the 2020 TikTok ban, which was hoped to curb its spread ahead of the world’s largest democratic elections. The void was quickly filled by other platforms. This highlights the complexity of the challenge: broader issues need to be addressed, rather than focusing solely on specific platforms.
  • European Democracy Shield. At the Copenhagen Democracy Summit this morning, European Commission President Ursula von der Leyen committed to strengthening Europe’s defences against foreign interference and manipulation if re-elected.

Reading & resources

  • Deepfakes & democracy. This paper examines the influence of deepfakes on elections in 2023 and serves as a reference for the 2024 elections. Key findings include an increase in the number of deepfakes used during electoral campaigns. However, none had a decisive impact on the election outcomes. The psychological and social responses to these deepfakes are of greater concern.
  • Watermarking AI-generated content. There are (at least) three gaps that make watermarking an inadequate remedy for addressing AI-generated content intended to manipulate audiences: Watermarking methods embedded into models can usually be removed by downstream developers, bad actors can create fake watermarks and trick watermark detectors, and some open-source models will continue to lack watermarks even after the adoption of watermarking requirements. This article digs into the challenges and shortcomings of watermarking as a tech-based solution against AI-generated disinformation.
  • Disinformation & the rule of law. This article explores the detrimental effects of disinformation on democratic processes and legal frameworks in the EU, highlighting two main strategic responses, securitisation and self-regulation, which are depicted as competing yet coexisting strategies crucial for tackling disinformation effectively.
  • Good news about Bad News. This study assesses ‘Bad News’, a widely visible serious game designed to inoculate citizens against misinformation, in a traditional upper-secondary school classroom setting. The key results highlight the game’s potential to help students spot disinformation techniques, and enhance their digital literacy and critical thinking.
  • FIMI norms. The ‘Study on International Norms for Foreign Information Manipulation and Interference (FIMI)’ recommends the creation of specific international norms to address FIMI, and aims to utilise existing international law principles and norm-setting processes from related fields. It highlights the importance of considering multiple factors such as the content, means, methods, effects, actors, and targets of FIMI under international law, and emphasises the need for a comprehensive, multi-stakeholder approach in developing these norms to ensure inclusivity and effectiveness.
  • Spot the bot. This article offers insights into the challenges of identifying AI-generated text, pointing out the flaws in detection tools that often mislabel texts. It teaches how to spot unique AI text patterns, and highlights the need for a critical method in distinguishing machine-written content from human-authored text.
  • Climate coverage crossfire. This UNESCO report reveals that the large majority of environmental journalists have faced intimidation or violence due to their coverage of environmental and climate issues, exacerbated by the surge in disinformation. The report emphasises the urgent need for enhanced measures to protect and support these journalists and for better governance of digital platforms.
  • Educational oil spills. This article exposes how fossil fuel companies infiltrate educational systems via Science, Technology, Engineering, and Mathematics (STEM) lesson plans provided by Discovery Education, a company offering digital educational content and resources. This raises concerns about the neutrality of educational materials and their use in subtly promoting pro-fossil fuel narratives.

This week’s recommended read

This week, EU DisinfoLab’s Project & Comms Manager Heini Järvinen recommends the book ‘Foolproof: Why We Fall for Misinformation and How to Build Immunity’ by Sander van der Linden, social psychology professor at Cambridge University and co-developer of ‘Bad News’. It explores the dynamics of misinformation in contemporary society through an analogy of a virus spreading through social interactions. The book presents a psychological approach to inoculate the public against disinformation, scaling up until “herd immunity” is achieved. The author suggests strategies such as prebunking and making individuals aware of manipulation tactics before they encounter them, to foster societal resistance to disinformation.

The book’s analogy of misinformation as a virus makes it easy to grasp. It simplifies the complex mechanisms behind the spread and impact of disinformation, and makes the topic more relatable and understandable. If you’re looking for heavy reading in a light format, go for this one!

…and next on the reading list is ‘The Psychology of Misinformation (Contemporary Social Issues Series)’, a recent release from the same (co-)author.

Events & announcements

  • 17 May: A free training for journalists and educators ‘Fighting disinformation: StopFake’s best practices’ will take place in Tirana, Albania.
  • 17-19 May: ‘Truth, Lies & Democracy’, a game jam to combat fake news and misinformation, will take place in Barcelona, Spain.
  • 22 May: Media & Learning Wednesday Webinar ‘Promoting MIL and Youth Citizen Journalism through Mobile Stories’ will dive into youth and media literacy.
  • 4-6 June: The ClientEarth Summit raises awareness about some of the greatest environmental challenges of our time. This year’s topics include climate change, biodiversity loss, and disinformation.
  • 6 June: AI, Media & Democracy lab organises the webinar ‘Discussion on AI and the European Elections’ that dives into the opportunities and threats of (generative) AI for democracy and elections in Europe.
  • 17-19 June: The European Dialogue on Internet Governance, EuroDIG 2024, will be organised in Vilnius, Lithuania.
  • 26-28 June: The International Fact-Checking Network’s (IFCN) 11th Global Fact-Checking Summit, GlobalFact 11, will be held in Sarajevo, Bosnia and Herzegovina, and online.
  • 9-10 July: An ideathon ‘Elections Free of Disinformation’ is scheduled to take place in Thessaloniki, Greece, to pioneer innovative strategies to combat disinformation in future elections.
  • 16-18 July: The 2024 International Conference on Social Media & Society will bring leading social media researchers from around the world to London.
  • 25-31 August: The 2024 Digital Rights Summer School, organised by the SHARE Foundation, EDRi, and the Digital Freedom Fund, will take place in Montenegro, focusing on the intersection of new technologies and human rights. Apply by 15 May.
  • 1 October: The Tech and Society Summit will bring together civil society and EU decision-makers in Brussels, Belgium, to explore the interplay between technology, societal impacts, and environmental sustainability.
  • 9-10 October: Registrations are open for our annual conference, #Disinfo2024. Apply for your ticket by the end of May to benefit from the Early Bird offer.
  • RightsCon 2025. Submit your proposal for RightsCon 2025 by 2 June.
  • CrowdTangle. Following Meta’s announcement that it will shut down CrowdTangle in August 2024, the Mozilla Foundation’s public petition, and an open letter signed by over 170 signatories urging the company to reverse its decision, the Coalition for Independent Technology Research (CITR) has launched a CrowdTangle Research Community Survey that aims to forecast the effects of losing this vital tool for researchers monitoring the spread of disinformation on social media platforms. If you are using CrowdTangle, share your experiences via the survey by 17 May.