AI Disinfo Hub

The development of artificial intelligence (AI) technologies has long posed a challenge for the counter-disinformation field, enabling the manipulation of content and accelerating its spread. Recent technical developments have exponentially increased these challenges. While AI offers opportunities for legitimate purposes, AI-generated content is also widely produced and disseminated across the internet, causing – intentionally or not – harm and deception.

This hub aims to help you better understand how AI is impacting the disinformation field. To keep you up to date on the latest developments, we will collect the latest Neural News and Trends and include upcoming events and job opportunities that you cannot miss.
 

Are you more into podcast and video content? You will find a repository of podcasts and webinars in AI Disinfo Multimedia, while AI Disinfo in Depth will feature research reports from academia and civil society organisations. That section will also cover the burning questions related to the regulation of AI technologies and their use. In addition, the community working at the intersection of AI and disinformation will have a dedicated space listing initiatives, resources, and useful tools.

In short, this hub is your go-to resource for understanding the impact of AI on disinformation and finding ways to combat it.

Here, researchers, policymakers, and the public can access reliable tools and insights to navigate this complex landscape. Together, we’re building a community to tackle these challenges head-on, promoting awareness and digital literacy.

Join us in the fight against AI-driven disinformation. Follow us and share with the community!

NEURAL NEWS & TRENDS

We've curated a selection of articles from external sources that delve into the topic from different perspectives. Keep exploring the latest news and publications on AI and disinformation!

News

Politico: The European Union’s main institutions have banned staff from using AI-generated videos and images in official communications, stressing the need to preserve authenticity and credibility and to avoid confusion online. The move contrasts with developments in other countries, where political actors are actively using synthetic media. It has also sparked debate, with some arguing that a total ban could limit innovation and miss an opportunity to educate the public about the responsible and transparent use of AI in political communication.

Science: As AI chatbots become a source of advice on personal and social issues, research shows they often validate users’ positions more than humans do, failing to challenge beliefs and amplifying existing biases while reducing exposure to corrective feedback, even in ethically questionable situations. This tendency can discourage users from reconsidering their actions, potentially reinforcing false beliefs and contributing to disinformation dynamics.

The Verge: The European Parliament has supported banning nudify apps amid outrage over sexualised deepfakes, while delaying key AI Act rules on watermarking and high-risk systems. The dual approach highlights tensions between addressing immediate harms and maintaining progress on broader AI transparency and disinformation safeguards.

The Verge: Wikipedia has prohibited the use of AI to write or rewrite articles, citing concerns over accuracy, verifiability, and the risk of misleading content. While limited uses such as translation and copyediting remain allowed, the move reflects growing efforts to curb unreliable AI-generated text and protect information integrity.

NewsGuard: A network of YouTube channels has used AI-generated audio to impersonate former US presidents, including Bill Clinton, Barack Obama and George W. Bush, producing political commentary on topics such as the Iran war. The content appears to be largely financially motivated, with channels monetising deepfake videos through programmatic advertising and attracting large audiences.

IWF: New data reveals a sharp rise in AI-generated child sexual abuse material, with 8,029 images and videos identified in 2025, 65% classified in the most severe legal category, which includes offences such as rape and sexual torture. Analysts warn that offenders are not only creating synthetic content but also discussing capturing real-world footage of children to convert into AI-generated abuse material, raising urgent concerns about how generative AI is lowering barriers to harm at scale.

The New Yorker: Anthropic has introduced a so-called “AI constitution” for its chatbot Claude, a set of principles the system is trained to follow, to make AI safer and more aligned with human values. However, critics argue it reflects a broader transfer of responsibility from democratic institutions to private tech firms, raising concerns about accountability and about who defines the rules and ethics governing AI.

Tech Policy: From courtroom evidence to legal advice, AI is increasingly shaping judicial processes. This analysis, published by Tech Policy Press, explores how interactions with AI chatbots are being used as evidence in criminal and civil cases, marking a new frontier in digital investigations. As users turn to AI for advice or reflection, these exchanges can become legally discoverable, raising concerns over privacy, admissibility, and reliability. Separately, and according to Commercial Litigation, OpenAI is facing a $10 million lawsuit filed by Japanese insurer Nippon Life, which claims that flawed legal advice generated by ChatGPT led a former client to initiate legal action against the company.

Hybrid CoE: A Hybrid CoE report examines how China and Russia are integrating AI into foreign information manipulation (FIMI), not as a replacement but as a force multiplier that increases scale, speed, and targeting precision. China leverages a strong domestic AI ecosystem to enable data-driven, highly personalised influence operations, including micro-targeting, synthetic media, and algorithmic amplification. Russia, with weaker AI capacities, relies on more accessible tools to scale existing tactics focused on volume, disruption, and narrative laundering. The report highlights that both actors are enhancing established disinformation strategies, with emerging developments such as agentic AI and AI ecosystem manipulation likely to further expand the reach and adaptability of hybrid influence operations.

Graphika: A new Graphika report examines how AI-powered “nudifier” services, which generate non-consensual intimate imagery, are expanding through coordinated, profit-driven online ecosystems. These services rely on large networks of inauthentic accounts, affiliate marketing schemes, and cross-platform promotion to evade moderation, including SEO poisoning, PDF injection into trusted domains, and AI-generated content designed to rank in search engines. The findings highlight how harmful AI services are industrialising distribution and monetisation strategies, raising concerns about platform enforcement gaps and the broader abuse of generative AI tools at scale.

World Economic Forum: A World Economic Forum analysis warns that AI and synthetic media are accelerating disinformation into a systemic threat to democratic stability. Advanced tools enable highly targeted manipulation, using psychological profiling and emotionally charged content to amplify polarisation and shape public perception. With deepfakes becoming harder to detect and widely accessible, the report highlights how disinformation now operates at scale, exacerbating broader global risks. It calls for stronger resilience through verification systems, media literacy, and governance frameworks to counter AI-enabled cognitive manipulation.

Tech Policy: A Tech Policy Press analysis warns that the anthropomorphic design of AI chatbots is leading users to treat their outputs as authoritative statements rather than generated text. This has already resulted in errors in journalism and legal contexts, where AI-generated responses have been misinterpreted as factual or evidentiary. The piece highlights how this misplaced trust can undermine judgment, obscure accountability, and create new risks for information integrity.

Tom’s Guide: So-called “AI slop” – low-quality, mass-produced AI content – is rapidly spreading across social media, designed to maximise engagement, outrage, or ad revenue. Unlike traditional clickbait, this content can adapt to trends and user behaviour at scale, making it harder to detect and more effective at capturing attention. Its viral spread is fuelled by near-zero production costs and platform algorithms, raising concerns about declining information quality, user manipulation, and the broader impact on the online information ecosystem. Some initiatives have emerged to track and document these trends, with accounts such as Facebook AI Slop highlighting harmful or misleading examples circulating online.

European Leadership Network: AI chatbots can generate different versions of reality depending on the language used, raising growing security concerns. Research testing major models found that responses in Russian were significantly more likely to include propaganda narratives or omit factual information, while Western systems sometimes introduced “false balance” (by exposing different perspectives) on well-established facts. These patterns suggest that language-dependent outputs are not random errors but structural biases that can be exploited to shape perceptions at scale, turning AI into a potential vector for cognitive warfare and information manipulation.

The Hallucination Herald tests fully autonomous AI journalism.
Launched in March 2026 by developer Juan Pisanu, The Hallucination Herald is a fully automated digital newspaper run by a network of AI agents acting as reporters, editors, and fact-checkers. Operating without human intervention, the project serves as an editorial experiment exploring the potential, and risks, of agentic AI in news production, including questions around accuracy, accountability, and the future of journalism.

Events, jobs & announcements

Senior Research Engineer Amruta Deshpande and Intelligence Specialist Angie Waller will share insights from Graphika’s latest research, examining how AI-generated imagery is used in real-world threat scenarios and what more effective, proactive detection strategies look like in practice.

Participants will gain a better understanding of emerging AI-enabled risks and how organisations can move from reactive responses to more anticipatory, resilience-based approaches.

🔗 Register and submit questions in advance

Schmidt Sciences is recruiting AI Institute Fellows-in-Residence for a 12–18 month programme for recent PhD graduates in AI or computer science.

📍 New York City (on-site) | ⏳ Fixed-term
🗓️ Applications: Rolling (apply early)

Fellows split their time between independent AI research and supporting the development of the AI & Advanced Computing Institute, including grantmaking and programme design. Priority areas include multi-agent systems, AI for scientific discovery, trustworthy AI and alignment, AI’s impact on the labour market, and hardware-enabled verification.

Alice (formerly ActiveFence) is hiring across a range of roles to tackle online harms, AI security risks, and trust & safety challenges at scale. The company brings together intelligence analysts, engineers, and security experts to help make digital platforms and AI systems safer and more resilient.

📍 Locations: Israel (Ramat Gan), USA (New York), Vietnam
🧭 Teams: Intelligence, Security, Infrastructure, Marketing
🗓️ Applications: Rolling

Open roles include AI analysts, mobile threat analysts, security research leads, infrastructure specialists, and product marketing positions.

🔗 View open positions & apply

The Centre for Responsible AI (CeRAI) at IIT Madras is recruiting across a range of research, technical, and policy roles focused on responsible, ethical, and governance-oriented AI.

🗓️ Applications: Rolling / no fixed deadline indicated

🔗 View openings & apply.

AI & Disinfo Multimedia

A collection of webinars and podcasts from us and the wider community, dedicated to countering AI-generated disinformation.

Webinars

A collection of our own and community webinars exploring the intersection of AI and disinformation

Podcasts

Community podcasts exploring the intersections of AI and disinformation

AI Disinfo in Depth

A repository of research papers and reports from academia and civil society organisations, alongside articles addressing key questions related to the regulation of AI technologies and their use. It also features a collection of miscellaneous readings.

Research

A compact yet potent library dedicated to what has been explored in the realm of AI and disinformation

Policy & regulations

A look at regulation and policies implemented on AI and disinformation

Miscellaneous readings

Recommended reading on AI and disinformation

Community

A list of tools to fight AI-driven disinformation, along with projects and initiatives addressing the challenges posed by AI. The ultimate aim is to foster cooperation and resilience within the counter-disinformation community.

Tools

A repository of tools to tackle AI-manipulated and/or AI-generated disinformation.

AI Research Pilot by Henk van Ess is a lightweight, browser-based tool designed to help investigators, journalists, and researchers get more out of AI, not by using AI as a source, but as a guide to real sources.

LLM Journalism Tool Advisor is an interactive guide designed to cut through the noise by walking you through a simple, step-by-step decision tree to pinpoint the best tool and strategy for your immediate task.

Digital Digging offers a handbook with seven strategies for identifying AI-generated content.

A new AI-powered tool that identifies where a photo was taken by analysing visual clues in the image. Launched by Where Is This Photo, it uses machine-learning models to predict locations — useful for quick geolocation checks or curiosity-driven searches.

Faktabaari has launched an interactive game that trains users to spot whether images are real or AI-generated, a quick, playful way to build digital and visual literacy.

The Agence France‑Presse (AFP) Digital Course, supported by the Google News Initiative, offers a 75-minute module on how AI is reshaping the information ecosystem, common types of AI-generated misinformation, and best practices for verification.

Image Whisperer is an experimental online image authenticity checker, created by Henk van Ess, designed to help journalists, researchers and fact-checkers evaluate whether a still image is likely authentic, manipulated, or AI-generated.

The Global Investigative Journalism Network (GIJN) has launched a practical verification guide for journalists to assess whether text, image, audio or video is likely AI-generated.

Rather than a single software product, it teaches reporters a structured workflow combining quick checks, deeper analysis, and multiple verification techniques under real-world time pressure. 

AI Community Notes Tracker is a live monitoring tool developed by Indicator that tracks the share of AI-generated or AI-assisted Community Notes on X. It helps researchers and practitioners see how AI is being used in X’s crowdsourced fact-checking and contextual-annotation system, and understand shifts in platform moderation practices.

NewsGuard has launched a real-time detection datastream identifying over 3,000 “AI content farms”, websites generating large volumes of undisclosed AI-written content to spread misinformation or capture ad revenue. Combining automated detection (Pangram Labs) with human verification, the tool helps platforms, advertisers, and researchers identify low-quality AI-generated sites and mitigate their impact on the information ecosystem.

Initiatives & organisations

Organisations working in the field and initiatives launched by community members to address the challenges posed by AI in the disinformation field.

veraAI is a research and development project focusing on disinformation analysis and AI supported verification tools and services.

AI against disinformation is a cluster of six European Commission co-funded research projects, which include research on AI methods for countering online disinformation. The focus of ongoing research is on detection of AI-generated content and development of AI-powered tools and technologies that support verification professionals and citizens with content analysis and verification.

AI Forensics is a European non-profit that investigates influential and opaque algorithms. They hold major technology platforms accountable by conducting independent and high-profile technical investigations to uncover and expose the harms caused by their algorithms. They empower the research community with tools, datasets and methodologies to strengthen the AI audit ecosystem.

AI Tracking Center is intended to highlight the ways that generative AI has been deployed to turbocharge misinformation operations and unreliable news. The Center includes a selection of NewsGuard’s reports, insights, and debunks related to artificial intelligence.

AlgorithmWatch is a non-governmental, non-profit organisation based in Berlin and Zurich. They fight for a world where algorithms and Artificial Intelligence (AI) do not weaken justice, human rights, democracy and sustainability, but strengthen them.

The European AI & Society Fund empowers a diverse ecosystem of civil society organisations to shape policies around AI in the public interest and galvanises the philanthropic sector to sustain this vital work.

The European AI Media Observatory is a knowledge platform that monitors and curates relevant research on AI in media, provides expert perspectives on the potentials and challenges that AI poses for the media sector and allows stakeholders to easily get in touch with relevant experts in the field via their directory.

GZERO’s newsletter offers exclusive insights into our rapidly changing world, covering topics such as AI-driven disinformation, and includes a weekly exclusive edition written by Ian Bremmer.

PR Hall of Shame is a watchdog-style list, developed by Press Gazette, exposing brands and PR networks linked to AI-generated “fake experts” quoted in the press, helping journalists spot credibility risks and reduce synthetic ‘expert’ manipulation.

AI for Good is the United Nations’ leading platform on Artificial Intelligence for sustainable development. Its mission is to leverage the transformative potential of artificial intelligence (AI) to drive progress toward achieving the UN Sustainable Development Goals.

Omdena is a collaborative AI platform where a global community of changemakers unites to co-create real-world tech solutions for social impact. It combines collective intelligence with hands-on collaboration, empowering a community from across industries to learn, build, and deploy meaningful AI projects.

Faked Up curates a library of academic studies and reports on digital deception and misinformation, offering accessible insights for subscribers. The collection includes studies from 2020 onward, organised into clusters like misinformation prevalence, fact-checking effects, and AI-generated deceptive content. It serves as a practical resource for understanding and addressing misinformation challenges.

AI Incident Database is dedicated to indexing the collective history of harms or near harms realized in the real world by the deployment of artificial intelligence systems. Like similar databases in aviation and computer security, the AI Incident Database aims to learn from experience to prevent or mitigate bad outcomes.

The TGuard project develops innovative methods for detecting disinformation in social media and formulating effective strategies for preventing AI-generated false reports.

The AI-on-Demand (AIoD) Platform is a European hub for trustworthy AI, offering open access to models, datasets, tools, and educational resources. Backed by the EU, it supports researchers, innovators, and public institutions in developing and sharing responsible AI technologies aligned with European values.

BBC Verify Live is a real-time news feed that gives audiences a behind-the-scenes look at how BBC journalists verify information. Using tools like open-source intelligence, satellite imagery, and data analysis, the BBC Verify team investigates disinformation, checks facts, and authenticates content as news breaks. Available on the BBC News homepage and app, this initiative aims to boost transparency and trust in journalism, especially in the face of rising threats from disinformation and AI-generated content.

Deepfake Glossary by Reality Defender: The Deepfake Glossary is a practical guide to the terms shaping today’s synthetic threat landscape. Review it to stay ahead of the evolving terminology.

The Universitat Politècnica de València (UPV), together with INECO, has created the AI and Diversity Observatory, a pioneering project that seeks to identify biases in artificial intelligence from an inclusive perspective. Collaborating with vulnerable groups and human rights organisations, the Observatory analyses concerns and proposals to promote equitable and non-discriminatory AI. In addition, it will monitor trends and issues related to AI in society.

Prebunking at Scale is a new European initiative led by Full Fact, Maldita.es, and EFCSN that uses AI to detect emerging misinformation narratives early and help fact-checkers pre-emptively counter false claims before they go viral, especially on short-form video platforms.

The Pulitzer Center’s AI Spotlight is a new open curriculum offering free training materials to help journalists better understand, investigate, and report on artificial intelligence and its societal impacts.

The Data Tank is a new initiative designed to help small and medium public-interest media organisations respond to the challenges posed by generative AI. The project brings together media outlets, researchers, regulators, and civil society to explore collective solutions such as data collaboratives, knowledge commons, innovative licensing models, and advocacy coalitions, aiming to strengthen media sustainability, bargaining power, and content integrity in the face of extractive AI practices.


Last updated: 10/04/2026

The articles and resources listed in this hub do not necessarily represent EU DisinfoLab’s position. This hub is an effort to give voice to all members of the community countering AI-generated disinformation.