Dear Disinfo Update readers,

Our series of webinars exploring an array of timely topics has attracted interest and brought many of you together to hear insights from experts from our wonderful community. This afternoon, following the launch of our new Climate Clarity hub, we are organising a webinar on platforms’ responses to EU regulations aimed at preventing climate change disinformation. There’s still time to join us (starting at 14:30 CEST) and find out more about the current efforts to counter climate misinformation on LinkedIn, Facebook, Instagram, Pinterest, Snapchat, TikTok, YouTube, and X.

As announced, registration for our annual conference is open. Places are limited, so don’t miss out on securing your spot. As we prepare for #Disinfo2024, we are also looking ahead to future opportunities. If you’re interested in collaborating on our annual conferences, a new format, or Brussels policy events, get in touch!

In this edition, we bump into the usual suspects – AI, election integrity, FIMI, EU regulations, and platforms – and list some events and career opportunities that might interest you. Just scratching the surface here; dive into the deeper content below!

Our webinars

Upcoming – register now!
Past – watch the recordings!

Disinfo news & updates

  • EU Commission acts on climate disinformation. The European Commission has announced measures to monitor and analyse how climate disinformation narratives enter the public space and impact opinion and behaviour. It pledges to enhance the use of relevant policy tools, digital solutions, and communication approaches, including ensuring that disinformation is properly addressed in compliance with the Digital Services Act (DSA) and that climate science is properly covered in social media companies’ compliance with the Code of Practice on Disinformation. More insights into these developments will be shared in our webinar later today – don’t miss out!
  • DSA data access. This paper, ‘Enabling Research with Publicly Accessible Platform Data: Early DSA Compliance Issues and Suggestions for Improvement’, informed by real-world experiences gathered through the DSA40 Data Access Tracker, offers a critical look at how very large online platforms (VLOPs) are meeting their obligations under Article 40(12) of the Digital Services Act (DSA).
  • Probe into Russian influence. Belgium’s federal prosecutor has initiated an investigation into a pro-Russian propaganda network accused of paying Members of the European Parliament (MEPs) to advance the Kremlin’s agenda and interfere in democratic processes ahead of the upcoming European elections.
  • Russian influence in Germany. This op-ed recaps Russia’s recent information warfare efforts in Germany, including ‘Doppelganger’, the ‘Taurus affair’, and ‘Voice of Europe’, and reflects on possible responses to them.
  • China, AI, and elections. Online actors linked to the Chinese government are increasingly leveraging artificial intelligence to target voters in the US, Taiwan and elsewhere with disinformation, according to new cybersecurity research and US officials.
  • Meta’s update on AI content. From May, Meta will add a ‘Made with AI’ label to a wider range of content generated or modified using artificial intelligence, including videos, images, and audio. In addition, the platform will stop removing AI content solely on the basis of its manipulated video policy, unless it violates other policies.
  • AI vs. audio deepfakes. Spanish media group PRISA Media’s new AI tool designed to detect audio deepfakes aims to tackle the threat they pose to trust in news. The launch of the tool is timely given the current election year.
  • Supporting Georgian fact-checkers. The European Fact-Checking Standards Network (EFCSN) calls on the Government of Georgia to stop harassing and intimidating fact-checkers. This call to action is highlighted by concerns over the reintroduction of a Foreign Agent Law, which has drawn criticism for potentially restricting freedom of expression and affecting the operation of NGOs and independent media.
  • Election disinformation in India. A study by Access Now and Global Witness tested YouTube’s advertising content review system and found that the platform could be used to spread election-related disinformation in India. Google, the owner of YouTube, contested these findings, asserting that their multi-layered enforcement approach ensures all advertisements comply with their policies.

Reading & resources

  • Russia and the far-right. This study by the International Centre for Counter-Terrorism (ICCT) presents an analysis of Russia’s influence on far-right extremism across Europe and discusses the implications for European policy-making.
  • Misinformation, disinformation & online safety. This white paper by Cambridge Intelligence examines the risks posed by misinformation and disinformation to corporations, public sector agencies, political systems, and democracy itself. The paper sheds light on how organisations can use visual analysis tools to identify bad actors, how link analysis uncovers hidden connections across social media platforms, and how AI can be used in the fight against disinformation.
  • Detecting AI-generated content. These guidelines by the European Digital Media Observatory (EDMO) provide a quick checklist for identifying whether images, audio, video, or text are generated by artificial intelligence.
  • Stress-testing ad repositories. Tech platforms’ ad libraries are critical tools for the public to assess the role of advertising on services used by billions every day. In addition to that, the EU’s Digital Services Act requires that the largest online platforms and search engines have ad libraries. Ahead of European elections, Mozilla partnered with CheckFirst to stress test them for usability, and published a report and a set of recommendations.
For those who prefer listening over reading, explore these podcasts:
  • Climate change 101: Dismantling Denialism. This ‘Some Dare Call It Conspiracy’ episode discusses the irrationality of climate change denial in the face of clear scientific evidence, urging a proactive response to global warming challenges.
  • AI & media literacy. This podcast by UNESCO on Media and Information Literacy explores how media and information literacy can respond to the challenges posed by artificial intelligence.
  • AI, disinformation, and misinformation. This episode of ‘Communicating International Security’ explores the complex relationship between AI, disinformation, and misinformation, and how AI tools can both generate and combat fake information, affecting public opinion and political landscapes.

This week’s recommended read

This week, Maria Giovanna Sessa, our Research Manager, recommends reading this article by the Washington Post, which reveals a newly exposed element of the Doppelganger operation: troll farms and fake surveys targeted audiences on Facebook and TikTok with the purpose of stoking racial and social discord in the United States, including narratives aimed at undermining support for Ukraine.

The latest from EU DisinfoLab

  • Of domain names and ducks. How can we tackle the problem of doppelganger disinformation campaigns? This blog post explores the suitability of the classic duck test for taking faster action against the deceptive behaviour seen in the Doppelganger operation, while diligently protecting freedom of expression. We stress that ICANN should not be a content regulator.
  • Mitigation of systemic risks in the disinformation space. The European Commission conducted a public consultation on the draft guidelines for Very Large Online Platforms (VLOPs) on the measures expected of them under the Digital Services Act (DSA) to mitigate systemic risks in the context of electoral processes. EU DisinfoLab collected views on this matter from the vera.ai consortium partners for a joint response to the consultation. This blog post summarises its key points.

Events & announcements

  • 13-14 May: The EDMO Annual Conference 2024 will be held in Brussels, Belgium.
  • 30-31 May: The Radicalisation Awareness Network (RAN) Mental Health Working Group meeting, titled ‘The attraction of conspiracy narratives and disinformation: a mental health perspective’, will take place in Bucharest, Romania. The call for participants, aimed mainly at practitioners from the mental health sector working with radicalised individuals or in the prevention of radicalisation, is open until 17 April.
  • 23-27 September: The University Federico II of Naples is organising the Summer School on Signal Processing (S3P-2024) in Capri, Italy. It will cover a range of research fields from signal and image processing to machine learning, computer vision, and computer graphics. Apply by 20 April.
  • 9-10 October: This year’s EU DisinfoLab annual conference #Disinfo2024 will convene in Riga, Latvia.
  • RightsCon 2025. The call for proposals is now open for next year’s RightsCon, which will be held in February 2025. Submit your proposal by 2 June.

Jobs

  • The Atlantic Council’s Digital Forensic Research Lab (DFRLab) is seeking a Research Associate, with flexible job location options, to track disinformation and information manipulation relating to weapons of mass destruction (WMD) and chemical, biological, radiological, and nuclear (CBRN) policy issues, with a focus on narratives arising from Russia’s war against Ukraine.
  • Mozilla is recruiting a cohort of up to ten Mozilla Senior Fellows who are making trustworthy AI a reality. Register by 17 April, and complete your full application by 6 May.
  • The European Policy Centre (EPC) is seeking a Brussels-based Policy Analyst for its European Politics and Institutions Programme, to work on topics related to EU institutions, populism and radicalism, democracy and citizens’ participation. Apply by 30 April.
  • Digital Defenders Partnership is looking for a Project Officer for a remote role to support grant processing in Eastern Europe, the Caucasus, and Central Asia.
  • ARTICLE 19 is hiring a UK- or US-based Head of Digital Programme.