Dear Disinfo Update readers,

Welcome to our biweekly newsletter on disinformation, offering you a curated selection of news, events, and announcements in the disinformation field from around the world.

In this edition, you’ll find a range of light and weighty summer reads: feats and follies of social media platforms, hopes and hazards of generative AI, stumbles and surging strategies for election disinformation, and more.

If you missed our two outstanding webinars, on climate change disinformation and the European disinformation landscape(s), just hit the button below to catch up on the valuable insights awaiting you.

Plenty more to see here – keep scrolling!

Disinfo news & updates

  • Empower love, debunk hate. There has been a surge in anti-LGBTQ disinformation and hate speech on social media platforms during Pride events, resulting in extreme online responses and incitements to violence. Insufficient moderation on social media platforms exacerbates the issue, with young people being particularly exposed. Read the article here.
  • Riots. Disinformation surrounding the demonstrations in France includes exaggeration, racism, and xenophobia. False stories have been spread to magnify the disorder and create a sense of chaos, with the aim of inciting anger and promoting extremist policies, using decontextualised footage and old videos. This analysis by the EDMO fact-checking network explores the case.
  • Another load of BS. A news report by the Toronto Star builds on our Bad Sources investigation, revealing the spread of false information about a conference on Sikh terrorism in Toronto, using fake experts and non-existent speakers. The article highlights the need for authorities to pay closer attention to news sources, and raises concerns about India’s press freedoms and its alleged interference in Canadian political affairs.
  • On a righteous path. Google has announced updates to its Fact Check Explorer tool, aimed at supporting fact-checkers, journalists, and individuals interested in verifying information. The global beta version now allows users to search by image, enabling fact-checkers to determine if an image has been previously checked by others. Additionally, a new feature provides the context and timeline of an image, helping combat manipulation through misplaced context.
  • Blue bird weathervane. Last week, Twitter implemented temporary limits on user access, restricting the number of tweets that can be viewed per day, citing data scraping concerns. While some speculated the move was aimed at promoting Twitter Blue subscriptions or due to capacity constraints, users sought alternatives like Mastodon, Bluesky, and Threads. Mere days after introducing the restrictions, Twitter silently eased them.
  • Mark vs. Elon. Meta launched Threads, a new app linked to Instagram, as a “friendly rival to Twitter”. The app, described as a “text-based conversation app”, bears a strong resemblance to Twitter’s dashboard and is available for free (and, oh snap, with no restrictions on the number of posts users can see). However, in the EU, it faces higher hurdles due to stricter data protection rules, and it seems it will not launch in Europe for the foreseeable future.
  • Viral pollution. A BBC investigation found hundreds of climate change-denying videos in multiple languages on TikTok, with only five percent of reported English-language videos being removed within 24 hours, despite the platform’s announcement in April that it would remove such content. Read about the investigation here, and listen to the commentary on Radio 4 Today (starting at 38:29).
  • Shiba Inus for Ukraine. This CNN article reports on NAFO (North Atlantic Fellas Organization), a decentralised online volunteer organisation using memes with cartoon-like dog avatars dressed in Ukrainian military gear to mock and discredit Russia’s war in Ukraine. While their approach may seem ridiculous, some experts have suggested that humor could be an effective weapon in countering disinformation and propaganda.
  • Upholding human rights in AI. A joint civil society statement urges the Council of Europe Committee on Artificial Intelligence (CAI) to maintain human rights standards in the Convention on AI. The statement calls for a broad scope and definition of AI systems, and rejects blanket exemptions for AI systems used in national defence and security.
  • Transparency mirage. According to Who Targets Me, Google’s ‘election ads’ policy and ad library, while intended to expose abuses and ensure political accountability, fall short of their full potential. The definition of a ‘political ad’ is narrow, failing to encompass important aspects, which leads to incomplete and inconsistent data. These transparency failures seem to be intentional, as stated in this article.
  • Defending democracy. How is the European Parliament approaching the 2024 European elections in the face of disinformation and foreign interference threats? Read some takeaways from the European Committee of the Regions’ EuroPCom 2023 here.

What we’re reading and watching

  • Measuring vulnerability to disinformation. The 2023 edition of the European Media Literacy Index, published since 2017 by the Open Society Institute, reveals that Finland ranks highest in media literacy, while countries close to the war in Ukraine remain most vulnerable to disinformation. Read more and download the full report here.
  • Still haven’t found what I’m looking for. In this article, social media and web research specialist Henk van Ess showcases social search techniques for Facebook, which will be part of the Global Investigative Journalism Network’s (GIJN) forthcoming Open Source Research Guide, to be published in September.
  • Venturing into the future of AI. The event “Meet the Future of AI: Countering Sophisticated & Advanced Disinformation” was organised in Brussels on 29 June by Horizon Europe research projects AI4Media, AI4Trust, TITAN, and vera AI, together with the European Commission. The day covered various aspects of the development and use of artificial intelligence and its relationship to disinformation, including policy implications and the challenges of fighting disinformation, discussed in a panel chaired by our Alexandre Alaphilippe. Read the event report here.
  • Monochrome migration monologue. The BRIDGES Working Paper ‘Migration narratives in media and social media – The case of Hungary’ examines two case studies, both of which share a common feature: the absence of the meaningful discussion and debate typically associated with the role of the media as a democratic institution. Download the paper here.
  • Double-edged sword. This study suggests GPT-3 is able to produce accurate information that is easier to understand than that produced by humans, but it can also produce more compelling disinformation. The study reflects on the dangers of AI for disinformation and on how information campaigns can be improved to benefit global health.
  • It’s only getting worse. By enabling the creation of manipulated but increasingly realistic videos that depict real people in fake situations, artificial intelligence is exacerbating the age-old issue of women facing sexual harassment. For an insightful perspective on the topic of deepfake pornography, watch this interview with Nina Jankowicz (following up on her eye-opening story listed in our last newsletter).

This week’s recommended read

This week’s recommended read by our Executive Director Alexandre Alaphilippe examines a dodgy Lithuanian disinfo business. According to Lithuanian National Radio and Television (LRT), Member of the European Parliament (MEP) Viktoras Uspaskichas and his ex-wife have funded a new social media platform and alternative news site, investing up to €200,000 in it during the past two years. The platform is associated with disinformation, notably on topics like the COVID-19 pandemic, vaccines, and the Ukraine war. It claims to offer a censorship-free space devoid of “system-unfavourable” facts, a narrative close to the US far-right’s so-called “alternative facts”.

This is, unfortunately, yet another demonstration that many disinformation actors are portraying themselves as “alternative media”. Being considered a media outlet (however alternative) opens loopholes for malicious actors to exploit, such as the media exemption currently under discussion in the European Media Freedom Act (EMFA). This case also shows, once again, that these alternative media can benefit from the direct financial support of powerful actors – and the asymmetry grows while independent civil society struggles to retain talent, skills, and expertise.

The latest from EU DisinfoLab

  • Polluting the truth about climate change. On 3 July, we hosted, together with the Heinrich Böll Foundation, a webinar that shed light on the various forms of climate change disinformation and explored its far-reaching consequences from different angles. Watch the recording of this insightful webinar here.
  • European disinformation landscape. Following the release of a new batch of our disinformation landscape country factsheets that unveil the most emblematic cases, recurrent narratives, community actors, and policy initiatives in each country, our webinar on 28 June took a deep dive into the topic. Missed it? Replay here.
  • #Disinfo2023. Join leading experts and stakeholders from across the disinformation world to dig into the pressing issues in the disinformation space. It isn’t (yet) too late to register for the EU DisinfoLab 2023 Annual Conference that will take place in Krakow, Poland, on 11-12 October. No live streaming, no remote participation – be there, or be polygon!

Events and announcements

  • 13 July: This webinar will present a new case study “From Catalyst for Freedom to Tool for Repression: A #ShePersisted Analysis of Gendered Disinformation in Tunisia” that analyses the patterns, impact and modus operandi of gendered disinformation campaigns targeting women leaders.
  • 4 September: EDMO BELUX Lunch Lectures will re-launch after the summer break with Guillaume Kuster as guest speaker, shedding light on CheckFirst’s incredible investigation that unearthed a large-scale Facebook ad scam operation. Sure it’s early, but you don’t want to miss this, so no harm signing up here already!
  • 11-12 October: Don’t miss out on #Disinfo2023! Explore our remarkable lineup of speakers and the draft programme with two tracks that will leave you wishing you could clone yourself, and register here.
  • 8 November: The BENEDMO Network Conference will take place in The Hague. Save the date!
  • Host an analysis device to fight misinformation. CheckFirst is extending the CrossOver network to monitor content recommendation algorithms, and volunteers are needed to host small analysis devices in Canada, the Democratic Republic of Congo, France, Mali, Morocco, Senegal, and Switzerland. If you have a 24/7 internet connection, access to your router and an available electrical outlet near it, and want to combat misinformation, enlist here!
  • Call for papers on disinformation against women and girls. The International Organisation of La Francophonie (OIF) has launched a call for papers inviting multidisciplinary researchers and project leaders to explore themes around preventing and addressing disinformation against women and girls. The selected contributions will be published on ODIL, the Francophone platform for initiatives combating disinformation, and will be showcased in a valorisation workshop. Submit your proposal (in French) by 16 July.
  • Contribute to building a collective view. The Hunt Laboratory for Intelligence Research at the University of Melbourne will be running an expert elicitation forum with the aim of establishing a collective view on the current strategies and best practice to combat, mitigate and build resilience to disinformation, misinformation, and malign influence. If you have expertise in disinformation or a related area and would like to participate or find out more, click here.

Job opportunities

This good tweet!