AI Disinfo Hub

The development of artificial intelligence (AI) technologies has long posed challenges for the disinformation field, enabling the manipulation of content and accelerating its spread. Recent technical developments have exponentially increased these challenges. While AI offers opportunities for legitimate purposes, AI-generated content is also widely produced and disseminated across the internet, causing – intentionally or not – harm and deception.

This hub is intended to help you better understand how AI is impacting the disinformation field. To keep you up to date on the latest developments, we collect the latest Neural News and Trends and list upcoming events and job opportunities that you cannot miss.

Are you more into podcasts and video content? You will find a repository of podcasts and webinars in AI Disinfo Multimedia, while AI Disinfo in Depth will feature research reports from academia and civil society organisations. This section will also cover the burning questions related to the regulation of AI technologies and their use. In addition, the Community working at the intersection of AI and disinformation will have a dedicated space where initiatives and resources, as well as useful tools, will be listed.

In short, this hub is your go-to resource for understanding the impact of AI on disinformation and finding ways to combat it.

Here, researchers, policymakers, and the public can access reliable tools and insights to navigate this complex landscape. Together, we’re building a community to tackle these challenges head-on, promoting awareness and digital literacy.

Join us in the fight against AI-driven disinformation. Follow us and share with the community!

NEURAL NEWS & TRENDS

We've curated a selection of articles from external sources that delve into the topic from different perspectives. Keep exploring the latest news and publications on AI and disinformation!

News

The Conversation: In May, Google introduced SynthID Detector, a tool designed to identify AI-generated content across text, images, video, and audio. However, there are important limitations. The tool is mainly effective for content produced using Google’s own AI systems, like Gemini (text), Veo (video), Imagen (images), or Lyria (audio). It won’t reliably detect content created with non-Google tools, such as ChatGPT. That’s because SynthID doesn’t actually detect AI-generated content in general; it can only recognize specific markers embedded by Google’s own AI models.

The Guardian: This article explores how artificial intelligence is fuelling a disturbing rise in digital misogyny, creating new forms of violence against women and girls. It reveals how AI is being used to build online “brothels”, generate simulated child abuse, and develop sex robots with features designed to mimic rape. The piece argues that the unchecked development of AI threatens to embed gender inequality more deeply into society, especially as men remain its dominant users and beneficiaries.

Time: This investigation by TIME and several tech watchdogs reveals that Google’s AI tool Veo 3 can create realistic deepfake videos containing misleading or inflammatory depictions of news events. Despite some safeguards, the tool was able to generate clips such as a Pakistani crowd setting fire to a Hindu temple, Chinese researchers handling a bat in a wet lab, an election worker shredding ballots, and Palestinians gratefully accepting U.S. aid in Gaza. Experts warn that, if shared on social media in the heat of a breaking news event, these videos could conceivably fuel social unrest or violence.

ABC: Leaked documents reveal that China employs advanced AI technologies alongside human censors to systematically erase public memory of the 1989 Tiananmen Square massacre. The censorship system uses machine learning to detect not only direct references but also symbolic imagery, such as sequences resembling the iconic “Tank Man” photo, even if disguised with everyday objects like bananas and apples.

France 24: AI chatbots, increasingly relied upon for instant fact-checking, have been shown to frequently spread misinformation rather than correct it. During India’s recent conflict with Pakistan, these tools wrongly identified unrelated video footage as military strikes, fuelling confusion. Beyond this, investigations revealed that chatbots sometimes fabricate details, as when an AI-generated image of a woman was falsely confirmed as authentic by a chatbot in Uruguay. The decline in human fact-checkers at major tech platforms has exacerbated the problem, raising concerns about the reliability, political bias, and manipulation of AI-powered fact-checking tools.

Reporters Without Borders: Reporters Without Borders has raised the alarm over the growing use of generative AI to impersonate trusted French media outlets in French-speaking African countries. Recent deepfakes and synthetic audio clips mimicking journalists from Radio France Internationale (RFI) and France 24 have circulated widely on platforms like WhatsApp and TikTok, misleading the public with fabricated news.

Arxiv: WITNESS has developed the Truly Innovative and Effective AI Detection Benchmark, which provides a comprehensive framework for assessing AI detection tools through a sociotechnical lens, emphasizing their effectiveness in real-world scenarios and their usefulness to critical information stakeholders. Shaped by input from communities, case studies of deceptive AI handled by the WITNESS Deepfakes Rapid Response Force, and international consultations, the benchmark delivers practical guidance and concrete recommendations to help develop, improve, and promote robust, forward-looking detection technologies with global relevance.

The New York Times: The Trump administration’s Make America Healthy Again Commission unveiled a report last week that it claimed would provide an evidence-based approach to children’s health policy. However, the report referenced studies that don’t actually exist, covering topics like drug advertising, mental health, and asthma treatments. According to Dr. Ivan Oransky, a medical journalism professor at NYU, the inaccuracies are strikingly similar to the kinds of mistakes commonly seen in content generated by AI systems.

The Conversation & Florida International University: As AI-generated disinformation grows more sophisticated, researchers at Florida International University are developing tools to fight back using the same technology. By teaching AI to analyse narratives – identifying storytellers, cultural cues, and timelines – the team is helping uncover how false stories spread and take root. From fake election videos to culturally tailored propaganda, the study highlights the power of storytelling in persuasion and the urgent need for culturally literate, narrative-aware AI systems to detect and counter digital influence campaigns.

DeSmog: A group called KICLEI, mimicking the international environmental network ICLEI, has been sending thousands of AI-generated emails to over 500 Canadian municipalities, urging councils to abandon net-zero climate targets. Using a custom AI chatbot dubbed the “Canadian Civic Advisor,” KICLEI crafts tailored messages that downplay climate change, focus on “real pollution, not CO2,” and cast doubt on the scientific consensus. Several municipalities, including Thorold, Ontario and Lethbridge, Alberta, have already voted to weaken or withdraw from key climate initiatives after receiving KICLEI materials. Scientists have labelled many of KICLEI’s claims as misinformation, while the group denies spreading falsehoods.

Brookings: Technology firms and their executives have increasingly embedded themselves within the US federal government, gaining greater access to confidential data and benefiting from a loosening of earlier AI regulations. While some tech leaders argue that AI doesn’t require strict oversight, growing public concern and real-world issues, such as privacy violations, biased algorithms, and security vulnerabilities, highlight the urgent need for thoughtful governance. History shows that when new technologies spark public unease, pressure builds for government action, making openness and accountability crucial to sustaining trust and ensuring the industry’s future stability.

IT News: A new application of AI-driven deception has been identified and tested: the Australian Army is experimenting with TrapRadio, a system that leverages artificial intelligence to generate fake radio signals imitating the behavior and patterns of important communications in order to confuse adversaries and safeguard frontline troops.

Financial Review: The rise of artificial intelligence and AI chatbots is upending the long-standing dominance of search engines, most notably Google. For the first time in decades, the tech giant’s once-unshakable monopoly faces real competition, and the question at stake is whether Google can maintain its dominance of online search at a time when AI chatbots are redefining how people access information.

The Hill: The Chicago Sun-Times published an AI-generated summer reading list containing entirely fictional book titles, which appeared in both the online and print editions without editorial oversight. The list quickly drew criticism and ridicule from readers who noticed the fake entries. The newspaper has since acknowledged the error, admitting it failed to review the content. What might seem a minor anecdote calls into question the use of artificial intelligence technologies in journalism without human oversight.

BBC: Deepfakes have advanced in a critical area that could make them significantly harder to detect. A new study published in Frontiers in Imaging reveals that synthetic videos are now capable of replicating realistic pulse signals in human bodies—biological cues whose absence was previously used to identify fakes. This development may render many existing detection tools less effective. Experts warn that this breakthrough could further erode public trust in visual media, and emphasize the need for cryptographic authentication methods, not just more advanced detectors, as a long-term defense strategy.

Events, jobs & announcements

The Paris Conference on AI & Digital Ethics (PCAIDE 2025) will take place on June 16-17 at Sorbonne University, Paris. This cross-disciplinary event brings together academics, industry leaders, civil society, and political stakeholders to discuss the ethical, societal, and political implications of AI and digital technologies. PCAIDE offers a unique platform for experts to engage in open dialogue and collaborate on addressing key issues in the development of sociotechnical systems.

The AI for Good Global Summit 2025 will be held from 8 to 11 July in Geneva. This leading UN event on AI brings together top names in AI, with a high-level lineup of global decision makers. Its goal is to identify practical applications of AI, accelerate progress towards the UN SDGs and scale solutions for global impact. 

From July 14-18, 2025, the AIDA Symposium and Summer School will explore the latest in AI and ML. Co-organised by AIDA and Aristotle University of Thessaloniki, this hybrid event offers expert-led lectures, special sessions, and hands-on tutorials.

The UK’s AI Safety Institute is recruiting for multiple roles in research, engineering, strategy, and operations. As part of a high-impact initiative focused on AI governance, successful candidates will contribute to critical work in a fast-paced, interdisciplinary environment alongside leading experts.

AI & Disinfo Multimedia

A collection of webinars and podcasts from us and the wider community, dedicated to countering AI-generated disinformation.

Webinars

A collection of our own and community webinars exploring the intersection of AI and disinformation

AI Disinfo in Depth

A repository of research papers and reports from academia and civil society organisations, alongside articles addressing key questions related to the regulation of AI technologies and their use. It also features a collection of miscellaneous readings.

Research

A compact yet potent library of research exploring AI and disinformation

About policy & regulations

A look at regulations and policies governing AI and disinformation

Miscellaneous readings

Recommended reading on AI and disinformation

Community

A list of tools to fight AI-driven disinformation, along with projects and initiatives addressing the challenges posed by AI. The ultimate aim is to foster cooperation and resilience within the counter-disinformation community.

Tools

A repository of tools to tackle AI-manipulated and/or AI-generated disinformation.

AI Research Pilot by Henk van Ess is a lightweight, browser-based tool designed to help investigators, journalists, and researchers get more out of AI, not by using AI as a source, but as a guide to real sources.

Initiatives & organisations

Organisations working in the field and initiatives launched by community members to address the challenges posed by AI in the disinformation field.

veraAI is a research and development project focusing on disinformation analysis and AI-supported verification tools and services.

AI against disinformation is a cluster of six European Commission co-funded research projects, which include research on AI methods for countering online disinformation. The focus of ongoing research is on detection of AI-generated content and development of AI-powered tools and technologies that support verification professionals and citizens with content analysis and verification.

AI Forensics is a European non-profit that investigates influential and opaque algorithms. They hold major technology platforms accountable by conducting independent and high-profile technical investigations to uncover and expose the harms caused by their algorithms. They empower the research community with tools, datasets and methodologies to strengthen the AI audit ecosystem.

AI Tracking Center is intended to highlight the ways that generative AI has been deployed to turbocharge misinformation operations and unreliable news. The Center includes a selection of NewsGuard’s reports, insights, and debunks related to artificial intelligence.

AlgorithmWatch is a non-governmental, non-profit organisation based in Berlin and Zurich. They fight for a world where algorithms and Artificial Intelligence (AI) do not weaken justice, human rights, democracy and sustainability, but strengthen them.

The European AI & Society Fund empowers a diverse ecosystem of civil society organisations to shape policies around AI in the public interest and galvanises the philanthropic sector to sustain this vital work.

The European AI Media Observatory is a knowledge platform that monitors and curates relevant research on AI in media, provides expert perspectives on the potentials and challenges that AI poses for the media sector and allows stakeholders to easily get in touch with relevant experts in the field via their directory.

GZERO’s newsletter offers exclusive insights into our rapidly changing world, covering topics such as AI-driven disinformation, and features a weekly exclusive edition written by Ian Bremmer.

Queer in AI is an initiative established by queer scientists in AI with the mission to make the AI community a safe and inclusive place that welcomes, supports, and values LGBTQIA2S+ people. Their aim is to build a visible community of queer AI scientists through different actions.

AI for Good is the United Nations’ leading platform on Artificial Intelligence for sustainable development. Its mission is to leverage the transformative potential of artificial intelligence (AI) to drive progress toward achieving the UN Sustainable Development Goals.

Omdena is a collaborative AI platform where a global community of changemakers unites to co-create real-world tech solutions for social impact. It combines collective intelligence with hands-on collaboration, empowering community members from across all industries to learn, build, and deploy meaningful AI projects.

Faked Up curates a library of academic studies and reports on digital deception and misinformation, offering accessible insights for subscribers. The collection includes studies from 2020 onward, organised into clusters like misinformation prevalence, fact-checking effects, and AI-generated deceptive content. It serves as a practical resource for understanding and addressing misinformation challenges.

AI Incident Database is dedicated to indexing the collective history of harms or near harms realized in the real world by the deployment of artificial intelligence systems. Like similar databases in aviation and computer security, the AI Incident Database aims to learn from experience to prevent or mitigate bad outcomes.

The TGuard project develops innovative methods for detecting disinformation in social media and formulating effective strategies for preventing AI-generated false reports.

The AI-on-Demand (AIoD) Platform is a European hub for trustworthy AI, offering open access to models, datasets, tools, and educational resources. Backed by the EU, it supports researchers, innovators, and public institutions in developing and sharing responsible AI technologies aligned with European values.

BBC Verify Live is a real-time news feed that gives audiences a behind-the-scenes look at how BBC journalists verify information. Using tools like open-source intelligence, satellite imagery, and data analysis, the BBC Verify team investigates disinformation, checks facts, and authenticates content as news breaks. Available on the BBC News homepage and app, this initiative aims to boost transparency and trust in journalism, especially in the face of rising threats from disinformation and AI-generated content.

Last updated: 09/06/2025

The articles and resources listed in this hub do not necessarily represent EU DisinfoLab’s position. This hub is an effort to give voice to all members of the community countering AI-generated disinformation.