Dear Disinfo Update readers,

Did you miss #Disinfo2024? Are you feeling nostalgic for the insightful presentations and the crisp Latvian air? Check out #Disinfo2024 in Review for a recap of the presentations, panels, and interviews.

In this newsletter, we dive into social media platforms, AI, and Romania. This is the last Disinfo Update of the year; we hope you enjoy the holidays!

Our webinars

UPCOMING – REGISTER NOW

  • 9 January: The ban on Russian propaganda: circumventions, side effects, and impact on the 2024 EU elections | Join us for a session examining the extent and nature of Russian online influence operations surrounding the EU elections. With RT and Sputnik facing restrictions, this webinar will explore the strategies employed by Russian proxy websites and social media channels to fill the gap. Stephen Hutchings and Vera Tolz from the University of Manchester will share key findings from a new report assessing the impact and effectiveness of these operations.
  • 30 January: Measuring the effectiveness of narratives | In this webinar, Sviatoslav Hnizdovskyi, Founder and CEO of OpenMinds, will talk about his work on measuring the impact of disinformation and counter-narratives on public attitudes and behaviours.

PAST – WATCH THE RECORDINGS!

  • How to detect and analyse identity-based disinformation/FIMI | This session explored the EEAS practical guide on using OSINT tools and methodologies to detect identity-based disinformation. Developed with EU DisinfoLab, it offers insights into protecting victims and gathering evidence.
  • Building FIMI resilience through models of practice: Taiwan’s 2024 election | This webinar explored Taiwan’s resilience to China’s FIMI campaigns ahead of the January 2024 election. Ben Graham Jones from Doublethink Lab shared insights on fostering whole-society resilience and strategies for addressing global FIMI challenges. 
  • AI and disinformation: a legal perspective | Noémie Krack, legal researcher at KU Leuven, examined the EU’s evolving legal framework to tackle Gen AI-driven disinformation. The session highlighted key provisions from the AI Act, the Digital Services Act, and findings from the AI4Media project.

Disinfo news & updates

This section includes articles about Russian disinformation, climate denial, AI, and more.

  • Outrage and misinformation: This article explores the link between misinformation, outrage, and social media sharing. The research, using Facebook and Twitter data and behavioural experiments, reveals that misinformation evokes more outrage than factual news and that outrage significantly increases information sharing, regardless of its accuracy. 
  • Martial law announcement thought to be a deepfake: South Korean President Yoon Suk Yeol’s surprise declaration of martial law, swiftly overturned by parliament, has sparked a major political crisis. Opposition leader Lee Jae-myung initially dismissed the announcement as a deepfake, highlighting the shocking and unprecedented nature of the event. 
  • Climate denial influencer: Mother Jones investigates Wide Awake Media, a prolific climate denial account on X (formerly Twitter), highlighting its rapid growth and influence. The article reveals the apparent operator, Jarrod Fidden, an Australian entrepreneur previously known for a conspiracy theorist dating app. Wide Awake Media’s success is attributed to Elon Musk’s lax content moderation policies on X, which allow climate misinformation to spread and let the account monetise verification, boosting its visibility and revenue.
  • Meta on election integrity: This Meta report details the company’s efforts to maintain election integrity across its platforms during the 2024 global election cycle. Key themes include balancing free expression with user safety, mitigating the impact of AI-generated misinformation, and combating foreign interference. 
  • Bluesky and misinformation: This article analyses the migration of users and disinformation actors from X to the social media platform Bluesky. It highlights how far-right accounts and disinformation specialists are establishing themselves on Bluesky, aiming to continue spreading misinformation and divisive rhetoric. While Bluesky offers robust community-based moderation tools (user-generated content warnings and shareable blocklists), the article raises concerns about its vulnerability to manipulation, citing instances of false accusations and the hijacking of its moderation systems.
  • CEO shooting conspiracies: Forbes discusses the rapid spread of misinformation surrounding the murder of UnitedHealthcare CEO Brian Thompson. The article highlights how social media platforms, such as X and Reddit, became breeding grounds for unverified speculation and conspiracy theories surrounding the crime, often contradicting official police statements. 
  • Gender and tech: Hanah Stiverson argues that digital harassment, violence against women, and the erosion of reproductive rights are interconnected issues indicative of a global rise in authoritarianism. This blog post highlights the alarming statistics of gender-based violence and technology-facilitated harassment, exacerbated by the overturning of Roe v. Wade.
  • Russian disinformation campaign: Reset Tech details a rapidly expanding network of pro-Kremlin accounts on X, which has doubled in size over five months. The network, linked to Russia’s Doppelganger operation, uses paid verification to amplify its reach, strategically exploiting X’s features for maximum visibility and generating significant revenue for the platform. 
  • EU sanctions: The European Union is imposing sanctions on sixteen individuals and three entities for their involvement in Russia’s destabilising activities. The targeted individuals and entities represent various actors, from military intelligence operatives and propaganda networks to individuals facilitating intelligence operations within EU member states.

Romania, FIMI, and election annulment

This section includes articles that discuss foreign interference, Facebook/TikTok, the Digital Services Act, and how they relate to the recent developments in Romania.

  • Interference in Romanian election: Romania’s constitutional court annulled the first round of its 2024 presidential election, won by far-right candidate Calin Georgescu, due to alleged Russian interference. Evidence suggests Russian meddling involved cyberattacks and social media manipulation to boost Georgescu’s campaign. 
  • Meta’s response: This research note from CheckFirst discusses Meta’s failure to adequately moderate political advertising in Romania’s 2024 presidential election. A coordinated network of Facebook pages linked to the far-right AUR party ran a massive ad campaign, violating Meta’s advertising policies and potentially Romanian electoral law. The campaign’s scale (3,640 ads reaching 148 million people) and lack of platform response raise serious concerns about Meta’s accountability in protecting electoral integrity. 
  • The EU, TikTok, and the DSA: Romanian intelligence suggests a Russian-coordinated campaign used TikTok influencers to boost the surprise front-runner, Calin Georgescu, a far-right populist. The EU, leveraging its Digital Services Act, is demanding information from TikTok regarding this alleged influence operation, highlighting concerns about foreign interference in elections and the need for greater online content moderation. 
  • TikTok removes three influence operations: TikTok executives testified before the European Parliament, discussing the removal of three separate influence campaigns targeting the Romanian presidential election, including one linked to Sputnik (a Kremlin-backed media outlet). Despite claiming these were “very, very small networks,” the takedowns occurred between the first and second rounds of voting, following a surprising win for an ultranationalist candidate with a significant TikTok presence. 
  • Political ads on Facebook: On Facebook, more than €224,000 was spent on over 3,600 political ads attacking the pro-EU candidate and promoting the pro-Russia candidate, Calin Georgescu, whose unexpected victory was later annulled. The campaign, which used a network of seemingly independent pages with shared infrastructure, highlights the potential for social media manipulation to influence elections. 
  • Energy infrastructure targeted: Romania’s major electricity provider, Electrica Group, reported a cyberattack in progress, prompting an investigation involving national cybersecurity authorities. Speculation points towards pro-Russian hackers as potential perpetrators, given the recent annulment of Romania’s presidential election due to alleged Russian interference.

Reading & resources

Some of the topics discussed in this section include fact-checking, AI, education and identity.

  • Identity-based disinformation and Myanmar: This report from the Stanley Center for Peace and Security discusses the use of hate speech, inflammatory rhetoric, and the targeting of identity in Myanmar. It provides insight into the implications of identity-based disinformation for security and democracy, as well as the challenges of responding to it.
  • AI and elections: Meta claims that despite widespread concerns, AI’s influence was ultimately “modest and limited in scope”, largely due to its proactive measures against coordinated disinformation campaigns. This involved detecting and removing AI-generated misinformation, such as deepfakes, across its platforms (Facebook, Instagram, Threads). 
  • Fact-checking and decentralised networks: Wired discusses the concept of prosocial media. They argue that existing platforms, driven by algorithms that maximise engagement, often amplify divisive content, undermining civic discourse and manipulating users. The author proposes solutions like decentralised networks – exemplified by platforms such as Mastodon and Bluesky – which offer greater user control and data privacy. The article also explores community-based fact-checking, discussing community notes on X and Cofacts (a community-sourced fact-checking platform).
  • Climate denial on TikTok: Global Witness investigates TikTok’s failure to effectively moderate climate change denial in video comments, despite its stated policy against such misinformation. The report details a study in which only 1 of 20 reported instances of climate denial in comments on major news organisations’ COP29 videos was removed by TikTok, highlighting a significant gap between policy and practice.
  • Fact-checking survey: The European Digital Media Observatory (EDMO) conducted a survey on fact-checking by public and non-independent bodies within the EU. A significant proportion of respondents reported instances of government or politically affiliated entities producing fact-checking content, raising concerns about potential conflicts of interest and damage to the reputation of independent fact-checkers. The survey highlights the risk of compromised impartiality and reduced public trust if non-independent actors engage in fact-checking, emphasising the need for strict independence in this crucial area. 
  • Investigative journalism resources: The Global Investigative Journalism Network has published a list of their top guides and tipsheets from 2024. Some of the resources they provide include the “Introduction to Investigative Journalism” guide, “Introduction to Fundraising for Investigative Journalism”, and the “Guide to Investigating Extreme Heat”.
  • AI music and extremism: Heron Lopes from the Global Network on Extremism and Technology details how far-right extremist groups are exploiting AI music generation platforms to create and disseminate hateful propaganda. The report focuses on 4chan’s /pol/ board, an online forum known for discussing politics and often associated with extremist views. It identifies how users collaboratively produce high-quality extremist music, using techniques like “jailbreaking” to circumvent platform safety measures.
  • Education for counter-disinformation: This working paper explores the crucial role of education in combating disinformation. The authors argue that effective countermeasures require a multi-faceted approach, encompassing media and information literacy training, the development of critical thinking skills, and fostering positive attitudes towards reliable information. They examine various educational interventions, including gamification and the integration of subject-specific knowledge.
  • 2020 election misinformation on Facebook: This article investigates the spread of (mis)information on Facebook during the 2020 U.S. presidential election. The authors analyse the diffusion patterns of misinformation and compare them to the spread of non-misinformation content.
  • Fact-checking and Russia: GFCN is a Russian initiative to establish a seemingly independent “Global Fact-Checking Network”, orchestrated by Kremlin-linked entities such as TASS and ANO Dialog (both sanctioned for disinformation). It aims to counter legitimate fact-checking organisations by building a network that promotes pro-Kremlin narratives while misusing fact-checking terminology. The article highlights the danger of this strategy: confusing the public, undermining the credibility of established fact-checkers, and ultimately furthering the spread of disinformation. 

Holiday disinformation

Disinformation doesn’t stop for the holidays…

  • Graphika published a report on holiday-themed online scams. The report, “Holiday Hoaxes Unwrapped,” uses Graphika’s ATLAS platform to investigate and detail how scammers use social media to deceive users into losing money and personal information during the holiday shopping season. 
  • Get ready for some holiday OSINT… check out the NORAD Santa tracker!

This week’s recommended read

This week’s recommended read, brought to you by Maria Giovanna Sessa, is an HKS Misinformation Review commentary on the limits of fact-checking when it comes to conspiracy theories that blend verifiable facts with speculative possibilities, like the “climate lockdown” narrative. 

Fact-checking focuses on objectively knowable facts, such as specific claims or data, and not opinions or hypothetical future scenarios. On the one hand, this voids the “but-who-decides-what’s-true” (aka arbiters of truth) argument, as debunking operates within the realm of demonstrable truth, providing clarity on factual claims while leaving opinions and speculation outside its scope. On the other hand, this practice tends to fail when subjective beliefs come into play.

Interestingly, the author suggests a somewhat overlooked solution, emphasising the role of philosophy as providing the framework to understand the epistemic challenges of disinformation, clarifying the boundaries between objectively knowable and subjectively perceived. 

The latest from EU DisinfoLab

  • #Disinfo2024 in review. Our annual conference provided scholars, researchers, journalists, and practitioners the chance to learn about and discuss the most pressing issues in the counter-disinformation community. For a recap of the many insightful panels, interviews, and presentations, check out #Disinfo2024 in Review.
  • OSINT and IBD blog post. We recently published a blog post discussing the recent toolkit from the European External Action Service (EEAS) which explores the use of OSINT to detect identity-based disinformation-focused foreign interference. This toolkit was developed in collaboration with EU DisinfoLab researchers, Maria Giovanna Sessa and Raquel Miguel Serrano.
  • New country factsheets. We have published new country factsheets for Croatia, Denmark, and the Czech Republic, completing our series on disinformation landscapes across Europe. Check them out to learn more about the state of disinformation in each country. For the rest of our factsheets, click here.

Events and announcements

  • 6-10 January 2025: The Digital Methods Initiative (DMI), Amsterdam, is holding its annual Winter School on ‘Chatbots for Internet Research?’.
  • 30 January 2025: The International Institute of Communications will be hosting a legal counsel forum discussing AI and regulators. 
  • 24-27 February 2025: RightsCon 2025, hosted by Access Now, will be held in Taipei, Taiwan, and will bring together leaders and activists to discuss key digital rights issues.
  • 28 February – 1 March 2025: The European Festival of Journalism and Media Literacy will take place in Zagreb, Croatia.
  • 22-25 May 2025: Dataharvest, the European Investigative Journalism Conference, will take place in Mechelen, Belgium.
  • 15-16 October 2025: After this year’s brilliant outcome, the EU DisinfoLab annual conference #Disinfo2025 will be bringing the community together again, this time in Ljubljana, Slovenia. Mark your calendars!

Jobs

  • Internews has several job openings for a payroll accountant, a senior program officer, and a grants and contracts officer.
  • The Center for Countering Digital Hate has job openings for a director of advertising industry advocacy and a senior communications manager.
  • The Institute for Strategic Dialogue has several openings including digital policy coordinator, director/manager of research and policy, development coordinator, director of development, finance coordinator, and foundation grants manager.
  • ProPublica has openings for assistant director of leadership gifts, deputy Texas editor, Florida reporter, and southwest reporter.
  • The Centre for Information Resilience has openings for account manager, bookkeeper, and partnership delivery manager.
  • Recorded Future is hiring for director of intelligence services systems, senior manager of product marketing, and senior marketing operations manager.
  • NewsGuard has several openings including director of marketing, media and misinformation staff reporter, and contributing analyst.
  • Moonshot has multiple openings including analyst (Arabic) and campaigns analyst.
  • Access Now is hiring a MENA program associate.
  • Correctiv is hiring for several positions including media educator, project manager, and product manager.

This good post!

What’s the status of the EU’s tech policy enforcement? Check out this post about an “Enforcement spotlight” by Maria Koomen and Mehmet Onur Cevik from the Centre for Future Generations (CFG).