Dear Disinfo Update readers,

Welcome to another edition of our newsletter – your go-to destination for the latest and most pertinent news, events, and announcements, crafted exclusively for you.

In our previous Disinfo Update, we launched an open call for #Disinfo2024. We’ve already received some fantastic proposals, and now we’re hungry for more from your brilliant minds. Help us make our mission impossible – choosing the best ideas – even more challenging! Share your proposals before 25 February 2024.

Next week, on Tuesday 6 February, we’ll launch our 2024 webinars with an eye-opening session that will uncover the intricate web of deception woven by EU/US-sanctioned oligarch Ilan Shor and his pro-Russian party in their relentless bid to sabotage Moldova’s aspirations for EU membership. And this is just the beginning – stay tuned for more webinars on a wide range of timely topics, where we’ll be giving voice to experts from our valued community throughout the year!

Given the extraordinary election year unfolding, it’s no wonder that this newsletter edition is packed with insights on election disinformation and Foreign Information Manipulation and Interference (FIMI), spanning from a compilation of Russia’s ventures to Taylor Swift. We will also explore emerging narratives in climate change disinformation – moving beyond simple denial to discrediting proposed solutions – and some AI-based solutions to tackle the problem.

Don’t stop now! All that 👆 and more is just a scroll away!

Disinfo news & updates

  • ChatGPT & elections. With elections around the corner in over 50 countries, OpenAI, the creator of ChatGPT, unveiled its measures to combat election misinformation: banning the use of its AI for creating deceptive chatbots, restricting political campaigning applications, digitally watermarking AI-generated images, and partnering with organisations to redirect users to accurate voting information. The efficacy of these undeniably positive-sounding safeguards remains to be seen. 
  • Walking the talk… As a first step towards putting its good intentions into practice, OpenAI suspended the developer behind a bot impersonating Democratic Representative Dean Phillips, a candidate in the 2024 US elections.
  • More on AI & US elections. Last week, ElevenLabs reportedly closed the account responsible for an audio deep-fake that impersonated President Biden calling voters and asking them not to vote.
  • FIMI framework. The US Department of State introduced the Framework to Counter Foreign State Information Manipulation, which aims to establish a common understanding of the threat and collaborative action areas for the US and its allies. Its five key action areas are: strategies and policies; governance structures; human and technical capacity; involvement of civil society, independent media, and academia; and multilateral engagement.
  • Election disinfo: Taiwan edition. This newsletter discusses China’s disinformation tactics in Taiwan’s recent election, employing deepfake videos, Facebook personas, and AI-generated anchors to spread narratives discrediting the ruling Democratic Progressive Party (DPP) and its president-elect.
  • Russian influence in Latin America. Focusing on Spanish-language tweets related to key events during the 2022 invasion of Ukraine, this research explores Russian influence in Latin America and reveals a rise in pro-Russian sentiment. The findings suggest an increase in intentional influence operations and raise concerns about Russia using fake social media accounts for manipulation.
  • All alike. Matriochka, a new Russian operation targeting European fact-checkers, was uncovered last week by AFP in cooperation with the online activist group Antibot4Navalny. Sharing similarities with Doppelganger, this IO bombards fact-checkers with fake claims, tying up their resources in verifying online content.
  • X & Z. Digital forensic experts in Germany have uncovered a vast pro-Russia disinformation campaign against the government, waged through thousands of fake accounts on X. More than a million German-language posts were sent from an estimated 50,000 fake accounts, suggesting that the German government was neglecting the needs of Germans as a result of its support for Ukraine – both through weapons and aid and by taking in more than a million refugees.
  • Swiftly unravelled narratives. This article discusses a Fox News segment suggesting that Taylor Swift is a “front for a covert political agenda,” perpetuating disinformation that has circulated in right-wing circles, including claims ranging from her being a “Pentagon asset” to assertions of witchcraft and political involvement. Experts anticipate an increase in such attacks as the 2024 US election approaches. Last weekend, fake sexually explicit images of the singer started circulating on X, and as a temporary precaution for user safety, the platform blocked searches for her name.
  • Conspiracies on Etsy. The e-commerce platform Etsy has become the latest target of the QAnon conspiracy movement. Online groups are manipulating content on the platform – in violation of Etsy’s policies – to make unfounded allegations about child trafficking rings. This activity, originating from unmoderated platforms, is yet another example of how trust in various stakeholders – including public authorities, NGOs, companies, and elected officials – is being eroded.
  • LLMs for climate. The Endowment for Climate Intelligence (ECI) has launched ClimateGPT, an open-source AI platform fighting climate disinformation. Rather than a single chatbot, ClimateGPT is an ensemble of three task-specific large language models (LLMs), and it has passed several tests and controls ensuring that it does not spread climate misinformation. The tool provides data in 20 languages.
  • Newsflash! How can we alert the public about the dangers of disinformation? Belgium’s non-profit Centre d’Action Laïque has taken an innovative approach. They distributed ‘VRA!MENT’ (‘Really’), a mock newspaper brimming with far-right disinformation, purportedly from a fictional country named Dystonia. Complementing this campaign, a dedicated website has been set up to debunk the stories featured in the newspaper.

What we’re reading

  • EEAS on FIMI. The European External Action Service’s (EEAS) second Report on Foreign Information Manipulation and Interference (FIMI) Threats builds on the previous report by establishing a comprehensive Response Framework for coordinated defence against FIMI. Drawing on 750 incidents investigated between December 2022 and November 2023, the report emphasises effective links between analysis and response and underscores the significance of collaboration among stakeholders, with a case study demonstrating how the framework was applied to past elections in preparation for those of 2024.
  • The new climate denial. This report by the Center for Countering Digital Hate (CCDH) highlights the alarming rise of “New Denial” narratives, constituting 70% of climate denial content on YouTube in 2023. The study analyses the shift from outright denial of anthropogenic climate change to attacks on climate science and solutions, emphasising the need for climate change advocates, funders, and policymakers to adapt strategies to counter these evolving narratives that aim to undermine both solutions and scientific consensus.
  • Back to the future of AI. This study presents a deep dive into the future of artificial intelligence, drawing on the insights of 2,778 top-tier AI researchers. Covering predictions on AI milestones, uncertainties surrounding societal impacts, and a spectrum of expert opinions, the survey offers a comprehensive exploration of the perspectives shaping the AI landscape.
  • Learning from Ukraine. The Digital Forensic Research Lab (DFRLab) and the European Centre of Excellence for Countering Hybrid Threats (Hybrid CoE) have published a document showing how Ukraine has been able to respond to the information war launched by Russia.
  • Wayback Google Analytics. Bellingcat has developed a lightweight open source research tool which automates the collection of tracking codes and discovery of relationships between websites using copies of sites maintained by The Internet Archive’s Wayback Machine. This will help researchers sidestep recent changes to how Google manages its analytics data.
  • Spot the fake. This video tutorial helps you identify AI-generated fakes, explains how they are made, and provides examples of their misuse and impact.
  • Updated Social Media Research Toolkit. The Social Media Lab of the Toronto Metropolitan University has updated its Social Media Research Toolkit, a curated list of 50+ social media research tools. The toolkit includes tools that have been used in peer-reviewed academic studies, and many of them are free to use and require little or no programming.
  • Author: AI or human? In an age where generative AI models can craft content that’s often indistinguishable from human-authored work, the need to accurately discern its origin has become more pronounced than ever. To help navigate this evolving landscape, the Brookings Institution has released this guide on the implementation of AI watermarking.

This week’s recommended read

In this edition, our Research Manager Maria Giovanna Sessa recommends exploring this research conducted by the Institute for Strategic Dialogue (ISD). ISD found 127 TikTok videos and 54 X posts glorifying mass shooters, amassing 1.7M views in 4 months. The research highlights significant shortcomings in platforms’ enforcement of content policies on safeguarding minors and on combating the spread of terrorist or violent extremist content, including in cross-platform information sharing and response.

The latest from EU DisinfoLab

  • Elevate #Disinfo2024 with your ideas! Join us at #Disinfo2024 in Riga on 9-10 October, and be part of the impactful discussions! Send us your ideas for a talk or a theme via this form by 25 February.
  • Aegis against information manipulation. EU DisinfoLab is a proud member of the ATHENA project consortium. The project kicked off in November 2023 to combat foreign information manipulation and interference (FIMI), and it aims to scrutinise instances of FIMI, develop countermeasures, and enhance public awareness. We are excited to contribute to this crucial initiative, working collaboratively towards a more secure digital future for Europe.

Events & announcements

  • 2 February: Tactical Tech’s online investigation talks series ‘Exposing the Invisible’ explores practical methods, tools, and ethical considerations for investigators. The upcoming session zeros in on investigating online disinformation networks. Sign up here.
  • 6 February: We’ll kick off our 2024 webinars with Aleksandra Atanasova, Open-Source Intelligence Lead at RESET. She will shed light on their joint investigation with WatchDog.md that discovered the largest Moldovan paid ad campaign witnessed on Facebook in 2023, including coordinated inauthentic behaviour and glaring violations of platform policies. Sign up here!
  • 26-27 February: The University of Amsterdam will be hosting the European Digital Media Observatory (EDMO) Scientific Conference on Disinformation, an interdisciplinary event that aims to stimulate discussions regarding the challenges, impacts, and approaches for tackling disinformation across diverse fields.
  • 27 February – 1 March: Media Literacy Matters – The European Digital and Media Literacy Conference in Brussels brings professionals from around the EU to discuss tackling disinformation through digital and media literacy and strengthening European cooperation.
  • 18-20 March: The workshop ‘Dis/Mis-information and Young People: Developing Strategies for Critical Media Literacy’, organised by Cumberland Lodge as part of its Youth and Democracy project, aims to equip educators with tools to address the challenges that dis- and misinformation pose to secondary school students.
  • 10 June: The 3rd ACM International Workshop on Multimedia AI against Disinformation (MAD’24) will be held in Phuket, Thailand. Topics include, among others, disinformation detection in multimedia content, multimodal verification methods, and synthetic and manipulated media detection. The Call for Papers is open, with a deadline of 3 March.
  • 9-10 October 2024: This year’s EU DisinfoLab conference #Disinfo2024 brings us to Riga, Latvia. Do you have a publication or an engaging talk idea? This is your chance to share your insights with our diverse community! Submit your proposal here before 25 February.

Jobs

  • The Carnegie Endowment for International Peace seeks a Research Analyst to work with scholars on the evolving digital technology landscape in Africa in its Washington DC-based Africa Program.
  • Logically Facts has several open positions across Europe: Norway-based Fact-checker, UK-based Senior Fact-checker, and Senior Editor and Deputy Editor based in the UK or anywhere in Europe.
  • The Center for Countering Digital Hate is looking for UK and US-based (hybrid) Executive Assistants.