Dear Disinfo Update readers,

In this edition, we continue to marvel at the newly unveiled programme of our annual conference. (Better secure your spot at #Disinfo2024 now so you don’t miss out!)

We also welcome you, for the first time, to visit the green oasis of this newsletter: the Climate Clarity Corner. This section features essential insights on climate change disinformation, highlighting both recent and significant past publications. Join us in our effort to foster a sustainable, disinformation-free future!

And that’s not all, of course. Now that the European elections are behind us, there may be a temptation to consider our mission accomplished. While some attempts to disrupt elections have indeed been called off, we should remain attentive and keep our guard up. Leap in and get an overview of what the new Commission mandate might bring!

Our webinars

Upcoming – register now!
Past – watch the recordings!

Welcome to the Climate Clarity Corner, your bi-annual oasis in the Disinfo Update newsletter. Here, we’ve curated the most crucial insights on climate change disinformation, from the latest publications to still-relevant older ones. Together, let’s pave the way for a sustainable, disinformation-free future! More information at our Climate Clarity Hub!

📢 HEATED CLIMATE INFORMATION
  • Sounds like a lot of greenwashing. Plastic producers have deceived the public about recycling for decades, and there is an urgent need to clean up the climate advertising industry. Shell flaunts a green image while launching a ‘climate tech’ start-up to advertise oil industry jobs, and major oil giants are paying Google to shield them from critics. These are just a few glaring greenwashing cases…
    • Good news: We are optimistic that the UN chief’s speech on 5 June will mark the start of excellent initiatives. António Guterres called fossil-fuel companies the “godfathers of climate chaos” and urged every country to ban their advertising. A Dutch court’s decision that most of KLM’s advertising campaign claims were misleading (and thus illegal) is a historic victory and a significant step forward!
    • Initiatives: 24 climate groups and academics have sent a letter to Google urging the search engine to update its climate disinformation policy. #heyGooglecleanup
    • We recommend: What the shell!, a useful spotter’s guide to oil lies by Badvertising, and the Information pollution guide by Drilled on fossil fuels’ favourite narratives.
  • Climate watchdogs under fire. UNESCO reported in May that 70% of environmental journalists have been attacked in the last 15 years, with online disinformation on the rise. A few months earlier, the International Press Institute had reached similar conclusions.
    • Good news: Scientists achieved a victory when Michael E. Mann won a USD 1 million judgement against climate change deniers. “I hope this verdict sends a message that falsely attacking climate scientists is not protected speech,” stated Mann.
    • Initiatives: Spanish climate journalists have signed a manifesto against the rise of climate denialism and hatred on social networks. Also, the Joint Declaration on Climate Crisis and Freedom of Expression includes an important recommendation regarding access to information about climate issues.
    • We recommend: The role of journalism in exposing climate misinformation is crucial. Learn more by listening to this panel of experts at the International Journalism Festival. This article by Nature explains why the fight against misinformation is valid, warranted and urgently required, while this publication shows how the media should react to climate disinformation. 
🗓️ EVENTS AND ANNOUNCEMENTS
🕵️ POLICY & INVESTIGATION CORNER
  • Platforms and regulation. To learn about platforms’ policies on climate misinformation, don’t miss our factsheet. Additionally, explore how platforms have responded to the EU regulation by reviewing CAAD’s report “Underperforming & Unprepared”. This study highlights significant gaps in how platforms address climate disinformation.
  • AI impact. The study “AI, a threat to climate change” by Friends of the Earth reveals AI’s significant energy and water demands, while also exploring its role in spreading climate disinformation. This is corroborated by a study from NewsGuard and echoed by Inside Climate News, which claims that AI can spread climate misinformation “much cheaper and faster”.
  • Narratives. “Discourses of climate delay”, a study published by Cambridge University Press in 2020, is a must-read, and the more recent “The New Climate Denial” by CCDH complements it. Additionally, “Deny, Deceive, Delay” by ISD documents and exposes new trends in climate disinformation observed at COP26, COP27, and COP28.
  • Bad actors. To answer the eternal question of who is behind climate disinformation, delve into this recently published article by DeSmog about the EU election candidates casting doubt on climate science or painting EU regulators as part of an authoritarian “sect”. EU DisinfoLab’s research “Don’t stop me now” exposes over 30 websites worldwide dedicated exclusively to questioning the climate crisis and disseminating false and misleading content about it. “Los Eco-Ilógicos”, by Graphika, also looks into the sprawling online network that spreads climate misinformation to Spanish-speaking communities around the world. Additionally, this CNN article shows how a subsection of wellness influencers, known for spreading conspiracy theories about the pandemic, are now latching onto climate change.
🎧 AUDIOVISUAL CORNER

Webinars

Podcasts

📖 OUR TOP PICKS

Disinfo news & updates

  • Meta hits pause. Meta has suspended its AI training plans in Europe and the UK following concerns over compliance with data protection standards raised by the Irish Data Protection Commission and the UK’s Information Commissioner’s Office.
  • YouTube community notes. YouTube announced they are testing a new feature to provide viewers with more context about the content they encounter on its platform.
  • Gendered disinformation in the EU elections. The Global Disinformation Index (GDI) reported a surge in gendered disinformation and abuse, including misogyny, conspiracy theories, and various forms of bigotry, targeting female EU leaders and candidates across member states during the European Parliament elections.
  • Unexpected effect. According to Deutsche Welle, the surge of disinformation has unexpectedly bolstered the credibility of trusted media outlets. The proliferation of misinformation has led audiences to seek reliable news sources, enhancing the role of established media in countering disinformation.
  • FIMI alert: Moldova. The US, Britain, and Canada have accused Russia of plotting to influence Moldova’s upcoming election in October and attempting to undermine the country’s democratic process through covert efforts to exacerbate societal tensions and foment negative perceptions of the West.
  • Russia targets France – elections. Pro-Kremlin online networks conducted a disinformation campaign to support the French far-right Rassemblement National party during the EU Parliament elections. There are now concerns that another operation might attempt to influence the results of the upcoming legislative election.
  • Russia targets France – Olympics. Russia has intensified its disinformation efforts targeting the Paris Olympics. A fabricated video purporting to be a CIA warning to Americans about a high risk of attacks on the city’s metro, which originated on Russian platforms, has amassed over 100,000 views on X and Facebook.
  • Pentagon targets China – COVID-19. This investigation by Reuters revealed that the Pentagon conducted a secret campaign to discredit China’s Sinovac COVID-19 vaccine during the pandemic. The operation aimed to undermine confidence in Chinese vaccines and aid, particularly targeting the Philippines.

Brussels corner

  • Can EDMO become the EU Viginum? The EU Commission’s next mandate aims to establish additional efforts to reinforce counter-disinformation activities. French media outlet Contexte reported that Ursula von der Leyen plans to transform the European Digital Media Observatory (EDMO) and its hubs into a European “Viginum”. For those unaware, Viginum is the French agency countering foreign digital interference; it became well known for its publications attributing Foreign Information Manipulation and Interference (FIMI) operations, mostly to Russia for the Doppelganger operation or to Azerbaijan for disinformation surrounding the recent unrest in New Caledonia.
  • What’s next? The only issue with this comparison is that the French Viginum is not an independent civil society body but a government agency: it reports to the Secretariat-General for Defence and National Security, directly under the authority of the French Prime Minister. EDMO and its hubs, by contrast, have since their inception been run by independent consortia of academics, media, and civil society organisations. It is still unclear what the model for such an EU agency could be and how it would coordinate with parallel efforts from the European External Action Service (EEAS), the FIMI ISAC, and other civil society bodies.
  • They just want to have funds… Contexte also reports that the European Commission is looking at the Creative Europe budget to fund these counter-disinformation efforts. This includes the EDMO/Viginum project and the future European Board for Media Services (an updated ERGA). Contexte notes that tensions have arisen among Member States, especially France, as the 2025 Creative Europe budget has not yet been published. Wait and see, replied the Commission in a nutshell.

Reading & resources

  • Hate speech, misinformation, disinformation. This report provides an in-depth examination of how hate speech, misinformation, and disinformation intersect and diverge. It underscores the challenges in addressing these issues, particularly in digital spaces where they often overlap, complicating regulatory and counter-disinformation efforts, and it calls for a multi-faceted approach combining legal, educational, and technological measures to tackle them effectively.
  • Fake image factories. Midjourney’s measures to prevent the creation of AI-generated images that can influence elections are proving insufficient. Researchers from the Centre for Countering Digital Hate (CCDH) identified ongoing misuse of the platform to produce realistic and manipulative visuals that might pose a threat to the integrity of electoral processes.
  • Historical integrity. This UNESCO report highlights the potential dangers of generative AI in distorting the historical record of the Holocaust and spreading antisemitism.

This week’s recommended read

This time, Inès Gentil, our Project Officer, suggests readings to dive into generative AI and disinformation.

This insightful piece, “Information apocalypse or overblown fears – what AI mis‐ and disinformation is all about? Shifting away from technology toward human reactions”, explores the impact of generative AI on misinformation and disinformation. The authors tackle the question of whether fears about an information apocalypse are justified or ‘overblown’, arguing that focusing solely on the quantity of AI-generated content can lead to underestimating the real threat. Instead, they suggest a paradigm shift: focusing less on the mere potential of AI technology and more on human reactions to AI-fuelled disinformation. While AI facilitates the creation of disinformation, they argue, the real impact comes from how people respond to it. The fundamental question is how this new technology affects our trust in and understanding of information. The authors stress the need for better AI and media literacy, and call for cooperation between the tech industry and policymakers to enhance detection and combat AI-generated disinformation. They also advocate for psychological inoculation – informing people about how they might be misinformed – to reduce susceptibility to disinformation.

Additionally, related research presented at the “Meet the Future of AI – Generative AI and Democracy” event in Brussels last week, “What Does the Public in Six Countries Think of Generative AI in News?” by Richard Fletcher and Rasmus Kleis Nielsen, reveals that public expectations about the impact and responsible use of generative AI vary considerably across sectors. While its use in the health and science sectors is seen as more trustworthy, scepticism and fears of irresponsible use are more pronounced in sectors such as social media and news production. Although many people are undecided about the overall impact of AI, younger and more educated individuals tend to view it more optimistically.

These two articles are must-reads for anyone interested in the current and future impact of AI on our information ecosystem. They offer practical insights into how we can better prepare ourselves and our society to deal with the challenges posed by AI-generated disinformation.

The latest from EU DisinfoLab

#Disinfo2024 programme is out! Last week, we unveiled the programme of our annual conference, Disinfo2024, which will take place in Riga, Latvia, on 9-10 October. For two days, the brilliant experts in our community will come together to explore crucial issues like the impact of Foreign Information Manipulation and Interference (FIMI) on elections, new paradigms of generative AI, and cybersecurity from both European and global perspectives. You do not want to miss this – secure your spot now!

Your message here?

Want to reach the 10,000+ readers of this newsletter?

Events & announcements

  • 2 July: The European Digital Media Observatory (EDMO) will host an online training session, “The Information Laundromat”, aimed at researchers, fact-checkers, journalists, and media literacy practitioners working on tackling disinformation.
  • 10-12 July: This course (in Spanish), organised by the University of Zaragoza in collaboration with Radiotelevisión Española (RTVE), aims to equip journalists with advanced tools and innovative strategies in using AI for news verification and countering disinformation.
  • 15-16 July: The European Media and Information Fund (EMIF) Summer Conference, “Impact and Future Outlook,” will take place in Lisbon, Portugal.
  • 16-18 July: The 2024 International Conference on Social Media & Society will gather leading social media researchers from around the world in London.
  • 1 October: The Tech and Society Summit will bring together civil society and EU decision-makers in Brussels, Belgium, to explore the interplay between technology, societal impacts, and environmental sustainability.
  • 9-10 October: This year, our annual conference takes you to Riga. Check out the #Disinfo2024 programme, fresh off the drawing board, and request your tickets now!
  • 29 October: Coordinated Sharing Behavior Detection Conference will bring together experts to showcase, discuss, and advance the state of the art in multimodal and cross-platform coordinated behaviour detection.
  • 14 November: Arcom, the French Audiovisual and Digital Communication Regulatory Authority, is calling for research proposals around the themes of information economics, digital transformation, audience protection, and media regulation for its 3rd Arcom Research Day. Submit your proposal by 1 September.
  • Digital Sherlocks. Applications for the DFRLab’s Digital Sherlocks Fall 2024 cohort are now open. Apply by 5 July for this free training program to learn open-source investigative techniques, source verification, narrative analysis, geolocation, and more.
  • Boosting fact-checking activities in Europe. The European Media & Information Fund (EMIF) is supporting fact-checking projects and welcomes proposals from organisations or consortia based in the EU, UK, or EFTA countries. The eighth funding round is open until 28 June.
  • Tackling toxic content – call for submissions. The MDPI journal Information is inviting submissions for a Special Issue titled “Beyond Detection: Disinformation and the Amplification of Toxic Content in the Age of Social Media.” Articles on innovative methods and tools for analysing and mitigating the spread of toxic content and disinformation on social media platforms are welcomed. Deadline for manuscript submissions is 30 November.
  • AI & market power fellowship. The European AI & Society Fund has announced a call for proposals for its Global Fellowship Programme on AI & Market Power. Individuals with diverse skills, especially in journalism, OSINT, and technical research, are encouraged to apply by the deadline of 8 July.

Jobs

  • The Center for Countering Digital Hate (CCDH) is hiring a Portfolio Manager (Europe or the US) and a Digital Media Coordinator (US).
  • ProPublica is looking for a US-based Computational Journalist with technical knowledge and understanding of emerging technologies, to join their data team.
  • Access Now is looking for a Policy & Advocacy Manager to lead their policy and advocacy efforts in Europe. The deadline to apply is 30 June.
  • The Digital Freedom Fund is seeking a Communications Project Lead for a remote, four-month consultant position. Apply by 1 July.
  • WHAT TO FIX is looking for a Policy Lead to work on their Platform Monetization project.
  • The Thomson Reuters Foundation is looking for a Media Development Manager to join their Media Freedom Team.
  • The Organized Crime and Corruption Reporting Project (OCCRP) is hiring a Security Analyst to enhance their information security measures and support investigative journalists; the position is remote or based in Amsterdam or Sarajevo.
  • ASL19 is seeking a Technical Support Coordinator.
  • Debunk.org has several open positions, including a Researcher and Analyst for disinformation analysis and a Media Literacy Expert on disinformation.
  • Logically is recruiting for several positions, including Fact Checker – Slovenia, UK-based Senior Manager, Government Affairs, and Senior OSINT Analyst.
  • The Global Disinformation Index (GDI) is hiring a Finance Manager for a remote role based in the UK or the US.