AI Disinfo Hub

The development of artificial intelligence (AI) technologies has long been a challenge for the disinformation field, enabling the manipulation of content and accelerating its spread. Recent technical developments have exponentially increased these challenges. While AI offers opportunities for legitimate purposes, it is also widely generated and disseminated across the internet, causing – intentionally or not – harm and deception.

This hub intends to assist you to better understand how AI is impacting the disinformation field.  To be up-to-date on the latest developments, we will collect the latest Neural News and Trends and include upcoming events and job opportunities that you cannot miss.
 

Are you more into podcast and video content? You will find a repository of podcasts and webinars in AI Disinfo Multimedia, while AI Disinfo in Depth will feature research reports from academia and civil society organisations. This section will cover the burning questions related to the regulation of AI technologies and their use. In addition to this, the Community working in the intersections of AI and disinformation will have a dedicated space where initiatives and resources will be listed, as well as useful tools.

In short, this hub is your go-to resource for understanding the impact of AI on disinformation and finding ways to combat it.

Here, researchers, policymakers, and the public can access reliable tools and insights to navigate this complex landscape. Together, we’re building a community to tackle these challenges head-on, promoting awareness and digital literacy.

Join us in the fight against AI-driven disinformation. Follow us and share with the community!

NEURAL NEWS & TRENDS

We've curated a selection of articles from external sources that delve into the topic from different perspectives. Keep exploring the latest news and publications on AI and disinformation!

News
News

The Conversation: This article, published in The Conversation, examines how the metaphors and narratives we use to describe AI shape public understanding, and, in turn, how AI is designed, adopted, and governed. The author argues that many dominant portrayals of AI (humanlike “assistants,” artificial brains, and the ubiquitous humanoid robot) have little basis in reality. Instead, these myth-driven images can obscure what today’s AI systems actually are, exaggerate their capabilities, and blur their limitations, making it harder to use and regulate it.

NewsGuard: Following the real capture of Venezuela’s leader Nicolás Maduro by U.S. forces, social media was flooded with AI-generated and out-of-context images and videos falsely claiming to show the operation, amassing more than 14 million views on X in days. NewsGuard finds that these visuals often closely resemble reality, making them harder to debunk. This illustrates how AI-enhanced imagery and recycled footage are increasingly used to amplify political narratives and manipulate perception, even when the underlying event is real.

Reuters: Italy’s antitrust authority has closed its investigation into the Chinese AI system DeepSeek after the company agreed to introduce binding measures to better warn users about the risk of AI “hallucinations.” The commitments require clearer, more prominent disclosures that AI-generated responses may be inaccurate, misleading, or fabricated, addressing concerns over consumer protection and transparency.

Wired: GROK has become the focal point of a growing AI scandal after users showed that the chatbot can be used directly on X to “undress” people and generate non-consensual sexualised images, including of minors, making the abuse highly visible at scale. Euronews reports that the fallout has triggered investigations and warnings from regulators across the EU, UK, France, India and beyond, with mounting pressure on xAI over child safety, consent and liability. As Axios outlines, the controversy is also sharpening a broader legal debate: because Grok generates and publicly shares the images itself, platforms may face direct responsibility rather than relying on user-content protections. While Grok stands out for its visibility on X, reporting from Wired shows the problem is not isolated, Google’s Gemini and OpenAI’s ChatGPT can also be coaxed into producing similar “bikini” deepfakes, exposing wider failures of safeguards across mainstream AI tools.

Dysinfluence / Marc Owen Jones: This report investigates a coordinated, AI-assisted influence and disinformation ecosystem (including the use of books written with AI) involving a cluster of Emirati social media personalities, pseudo-news websites, and right-wing media outlets. It shows how AI-generated content, recycled accounts, fake or opaque news sites, and books that look to have been writen with AI, are used to launder narratives. Those narratives are aligned with UAE, pro-Israel, and European far-right talking points, especially around the Muslim Brotherhood, migration, Sudan, and Gaza. A central connective figure is Amjad Taha and his company Crestnux Media, which appear to promote, amplify, and help legitimise this network through advertising, events, and cross-platform coordination.

Wired: AI misuse is enabling new forms of fraud: Scammers in China are increasingly using AI-generated photos and videos to fake damaged goods and fraudulently claim refunds from e-commerce platforms. According to this article, as image-generation tools become cheaper and more realistic, scammers are lowering the barrier for organised and individual fraud, undermining trust-based return systems and forcing platforms to rethink verification and refund policies.

Lawfare Media: This article examines how future AI systems that can continue learning after deployment could fundamentally challenge existing and proposed AI regulations. Most regulatory approaches assume models are fixed products with stable capabilities. By contrast, systems that learn autonomously and evolve over time could complicate risk assessment, auditing, and enforcement. The article argues that this shift would also blur liability and responsibility among developers, intermediaries, and users, and urges policymakers to anticipate these challenges now, before such models become widespread.

European Commission: The European Commission has released a first draft of its Code of Practice on marking and labelling of AI-generated content, outlining how AI content, including deepfakes and synthetic text, could be clearly labelled across the European Union. The draft signals the possible use of a common visual marker and is intended to guide providers and deployers in meeting the AI Act’s transparency obligations, before the rules take effect in August 2026.

ABC: As with many breaking-news events, a surge of disinformation followed the attack at Bondi Beach in Australia last December, when 15 people were killed at a Hanukkah gathering. Much of the false content about the attackers and victims was AI-generated. ABC News Verify traced for instance one widely shared deepfake allegedly showing one of the victims staging the attack, that was created with Google’s AI tools. But AI-generated text also contributed to the confusion, with Grok, Elon Musk’s AI-driven chatbot, spreading and amplifying false narratives soon after the attack. The Financial Review reported how Grok made up the identity of a heroic by stand who disarmed an attacker as Edward Cabtree, completing it with a fabricated backstory. The chat questioned the authenticity of the confrontation and described the situation in a surreal way (as a man climbing a palm tree or an Israeli hostage taken by Hamas on October 7), as reported by Gizmodo.  Crikey highlights how this episode illustrates how AI misinformation eat its own tail,  with Grok absorbing and rapidly repeating AI-generated falsehoods.

Axios: What once seemed a remote, hypothetical risk is rapidly becoming a realistic scenario. AI systems are demonstrating the ability to carry out increasingly sophisticated hacking tasks, raising fears that autonomous cyberattacks are approaching reality. Researchers and tech companies warn that even today’s imperfect models can already find vulnerabilities, write exploits, and assist threat actors, suggesting future versions could dramatically scale cybercrime and state-backed attacks.

Pagella Politica: Most of Italy’s parliamentary parties have agreed to a voluntary commitment to refrain from using AI-generated deepfakes in political campaigning and to publicly correct any such content shared in error. The initiative, developed by fact-checkers at Pagella Politica in collaboration with Facta, was endorsed across the political spectrum, with the notable exception of the Lega. The right-wing party, led by Matteo Salvini, did not sign the pledge.

AP News: As generative AI becomes embedded in everyday digital life, militant and extremist groups are beginning to experiment with the same tools. AI lowers barriers by enabling these actors to produce and test content at scale, including propaganda and deepfakes, and improve recruitment processes though multilingual tailored messages to reach, persuade, and mobilize new audiences. At the same time, platform algorithms can amplify emotionally charged and misleading content during conflicts or crises. Combined with rapid advances in AI capabilities, the risks to information integrity, public trust, and online safety are likely to escalate quickly.

Semafor: The Washington Post has launched a beta AI tool that generates personalised news podcasts, even though internal tests found most scripts failed basic publishability checks. Staff flagged errors ranging from misquotes and fabrications to biased framing, raising fresh questions about trust and quality as newsrooms rush to roll out consumer-facing AI products.

The Alan Turing Institute: A new study by the Alan Turing Institute, in collaboration with the AI Security Institute and Anthropic, finds that large language models may be easier to poison than previously assumed. Researchers show that inserting a hidden backdoor into an LLM can require only a small, roughly constant number of malicious documents, around a few hundred, regardless of model size, suggesting that data poisoning attacks could be both scalable and practical. The findings raise fresh concerns about the security of AI systems trained on open web data and the need for stronger protections against misuse.

Yingxin Zhou and Jingbo Hou: This paper examines how the introduction of generative AI fact-like responses (via X’s chatbot Grok) affects participation in human-led fact-checking systems, specifically Community Notes. The authors find that when users can rely on AI-generated replies, engagement in crowdsourced fact-checking drops, especially among highly active contributors who are crucial to the system’s effectiveness. The study warns that AI tools may unintentionally undermine human verification ecosystems rather than complement them.

Events, jobs & announcements

One year into the second Trump administration, US AI policy has taken a sharp and unexpected turn, from rapid AI infrastructure expansion and workforce automation to shifts in regulation, public ownership, and the global export of the “American AI technology stack”.

This online discussion organised by Data & Society brings together leading experts to unpack what is really driving these changes, how AI governance is being reshaped, and what the downstream consequences may be for workers, civil rights, democracy, and global tech power.

Speakers:

  • Alondra Nelson (Institute for Advanced Study)

  • Edward Ongweso Jr. (Security in Context; This Machine Kills)

  • Vittoria Elliott (WIRED)

Format: Online
Date: 22 January 2026
Time: 2:00 PM ET

🔗 Register here

The Institute for Law & AI (LawAI) is offering Seasonal Research Fellowships for law students, professionals, and academics interested in working at the intersection of AI, law, and public policy.

📍 Remote | ⏳ Seasonal (Summer / Winter)

Fellowships are available across multiple workstreams, including:

  • EU Law
  • US Law & Policy
  • Legal Frontiers

Research fellows contribute to LawAI’s core research agendas and policy-relevant work at the cutting edge of AI governance and legal design.

🔗 More information & applications

Schmidt Sciences is recruiting AI Institute Fellows-in-Residence for a 12–18 month programme for recent PhD graduates in AI or computer science.

📍 New York City (on-site) | ⏳ Fixed-term | 💼 $150,000/year
🗓️ Deadline: Rolling applications (apply early) | 🗓️ Cohort starting 2026

Fellows split their time between independent AI research and supporting the development of the AI & Advanced Computing Institute, including grantmaking and programme design. Priority areas include AI agents, trustworthy AI, AI for science, labour impacts, and alignment.

🔗 More information & apply

ActiveFence is hiring across multiple roles to help tackle online harms, AI security risks, and trust & safety challenges at scale. The company brings together intelligence analysts, engineers, data scientists, and researchers to ensure the internet remains a safer, more resilient space.

📍 Multiple locations (Israel, UK, Vietnam, remote/hybrid roles)
🧭 Teams: R&D, Trust & Safety, AI & GenAI Security, Data Science, Engineering
🗓️ Deadline: Rolling applications

Open roles include positions in GenAI security, malware research, data science, DevOps, and platform engineering, among others.

🔗 View open positions & apply

The Centre for Responsible AI (CeRAI) at IIT Madras is currently advertising multiple research, technical, and policy roles focused on responsible, ethical, and governance-oriented AI.

📍 India (IIT Madras) | 🌍 Interdisciplinary
🗓️ Deadline: Not specified (roles appear to be open / rolling)

Roles listed include:

  • Research Scientists & Postdoctoral Fellows
  • Policy Analysts & Junior Researchers
  • AI / LLM Engineers & Software Developers
  • Project & Programme Staff (technical and non-technical)
  • Technical roles are often recruited via the Wadhwani School of Data Science & AI, while policy and social science roles are applied for directly through CeRAI.

🔗 View openings & apply

The Centre for the Governance of AI (GovAI) is recruiting for several roles and fellowships focused on AI governance, policy, and research.

📍 UK / Global
🗓️ Key deadline: 4 January 2026 (23:59 GMT)

Open opportunities include:

  • Summer Fellowship 2026 (Research Track & Applied Track)
  • Head of Community
  • Research Assistant (expression of interest, rolling)

🔗 Details & applications

AI & Disinfo Multimedia

A collection of webinars and podcasts from us and the wider community, dedicated to countering AI-generated disinformation.

Webinars

Our own and community webinar collection exploring the intersections of AI and disinformation

AI Disinfo in depth

A repository of research papers and reports from academia and civil society organisations alongside articles addressing key questions related with the regulation of AI technologies and their use. It also features a collection of miscellaneous readings.

Research

A compact yet potent library dedicated to what has been explored in the realm of AI and disinformation

About policy & regulations

A look at regulation and policies implemented on AI and disinformation

Miscellaneous readings

Recommended reading on AI and disinformation

Community

A list of tools to fight AI-driven disinformation, along with projects and initiatives facing the challenges posed by AI. The ultimate aim is to foster cooperation and resilience within the counter-disinformation community.

Tools

A repository of tools to tackle AI-manipulated and/or AI-generated disinformation.

AI Research Pilot by Henk van Ess is a lightweight, browser-based tool designed to help investigators, journalists, and researchers get more out of AI, not by using AI as a source, but as a guide to real sources.

LLM Journalism Tool Advisor is an interactive guide designed to cut through the noise, by walking you through a simple, step-by-step decision tree to pinpoint the best tool and the best strategy for your immediate task.

Digital Digging offers a handbook with seven strategies on how to identify AI-generated.

A new AI-powered tool that identifies where a photo was taken by analysing visual clues in the image. Launched by Where Is This Photo, it uses machine-learning models to predict locations — useful for quick geolocation checks or curiosity-driven searches.

Faktabaari has launched an interactive game that trains users to spot whether images are real or AI-generated, a quick, playful way to build digital and visual literacy.

The Agence France‑Presse (AFP) Digital Course, supported by the Google News Initiative, offers a 75-minute module on how AI is reshaping the information ecosystem, common types of AI-generated misinformation, and best practices for verification.

Image Whisperer is an experimental online image authenticity checker, created by Henk van Ess, designed to help journalists, researchers and fact-checkers evaluate whether a still image is likely authentic, manipulated, or AI-generated

The Global Investigative Journalism Network (GIJN) has launched a practical verification guide for journalists to assess whether text, image, audio or video is likely AI-generated.

Rather than a single software product, it teaches reporters a structured workflow combining quick checks, deeper analysis, and multiple verification techniques under real-world time pressure. 

Initiatives & organisations

Organisations working in the field and initiatives launched by community members to address the challenges posed by AI in the disinformation field.

veraAI is a research and development project focusing on disinformation analysis and AI supported verification tools and services.

AI against disinformation is a cluster of six European Commission co-funded research projects, which include research on AI methods for countering online disinformation. The focus of ongoing research is on detection of AI-generated content and development of AI-powered tools and technologies that support verification professionals and citizens with content analysis and verification.

AI Forensics is a European non-profit that investigates influential and opaque algorithms. They hold major technology platforms accountable by conducting independent and high-profile technical investigations to uncover and expose the harms caused by their algorithms. They empower the research community with tools, datasets and methodologies to strengthen the AI audit ecosystem.

AI Tracking Center is intended to highlight the ways that generative AI has been deployed to turbocharge misinformation operations and unreliable news. The Center includes a selection of NewsGuard’s reports, insights, and debunks related to artificial intelligence

AlgorithmWatch is a non-governmental, non-profit organisation based in Berlin and Zurich. They fight for a world where algorithms and Artificial Intelligence (AI) do not weaken justice, human rights, democracy and sustainability, but strengthen them.

The European AI & Society Fund empowers a diverse ecosystem of civil society organisations to shape policies around AI in the public interest and galvanises the philanthropic sector to sustain this vital work.

The European AI Media Observatory is a knowledge platform that monitors and curates relevant research on AI in media, provides expert perspectives on the potentials and challenges that AI poses for the media sector and allows stakeholders to easily get in touch with relevant experts in the field via their directory.

GZERO’s newsletter offers exclusive insights into our rapidly changing world, covering topics such as AI-driven disinformation and a weekly exclusive edition written by Ian Bremmer.

Queer in AI is an initiative established by queer scientists in AI with the mission to make the AI community a safe and inclusive place that welcomes, supports, and values LGBTQIA2S+ people. Their aim is to build a visible community of queer AI scientists through different actions.

AI for Good is the United Nations’ leading platform on Artificial Intelligence for sustainable development. Its mission is to leverage the transformative potential of artificial intelligence (AI) to drive progress toward achieving the UN Sustainable Development Goals.

Omdena is a collaborative AI platform where a global community of changemakers unites to co-create real-world tech solutions for social impact. It combines collective intelligence with hands-on collaboration, empowering the community from across all industries to learn, build, and deploy meaningful AI projects. 

Faked Up curates a library of academic studies and reports on digital deception and misinformation, offering accessible insights for subscribers. The collection includes studies from 2020 onward, organised into clusters like misinformation prevalence, fact-checking effects, and AI-generated deceptive content. It serves as a practical resource for understanding and addressing misinformation challenges.

AI Incident Database is dedicated to indexing the collective history of harms or near harms realized in the real world by the deployment of artificial intelligence systems. Like similar databases in aviation and computer security, the AI Incident Database aims to learn from experience to prevent or mitigate bad outcomes.

The TGuard project develops innovative methods for detecting disinformation in social media and formulating effective strategies for preventing AI-generated false reports.

The AI-on-Demand (AIoD) Platform is a European hub for trustworthy AI, offering open access to models, datasets, tools, and educational resources. Backed by the EU, it supports researchers, innovators, and public institutions in developing and sharing responsible AI technologies aligned with European values.

BBC Verify Live is a real-time news feed that gives audiences a behind-the-scenes look at how BBC journalists verify information. Using tools like open-source intelligence, satellite imagery, and data analysis, the BBC Verify team investigates disinformation, checks facts, and authenticates content as news breaks. Available on the BBC News homepage and app, this initiative aims to boost transparency and trust in journalism, especially in the face of rising threats from disinformation and AI-generated content.

Deepfake Glossary by Reality Defender: The Deepfake Glossary is a practical guide to the terms shaping today’s synthetic threat landscape. Review it to stay ahead of the evolving terminology.

The Universitat Politècnica de València (UPV), together with INECO, has created the AI and Diversity Observatory, a pioneering project that seeks to identify biases in artificial intelligence from an inclusive perspective. Collaborating with vulnerable groups and human rights organizations, the Observatory analyzes concerns and proposals to promote equitable and non-discriminatory AI. In addition, it will monitor trends and issues related to AI in society.

Prebunking at Scale is a new European initiative led by Full Fact, Maldita.es, and EFCSN that uses AI to detect emerging misinformation narratives early and help fact-checkers pre-emptively counter false claims before they go viral, especially on short-form video platforms.

The Pulitzer Center’s AI Spotlight is a new open curriculum offering free training materials to help journalists better understand, investigate, and report on artificial intelligence and its societal impacts.

The Data Tank is new initiative designed to help small and medium public-interest media organisations respond to the challenges posed by generative AI. The project brings together media outlets, researchers, regulators, and civil society to explore collective solutions such as data collaboratives, knowledge commons, innovative licensing models, and advocacy coalitions, aiming to strengthen media sustainability, bargaining power, and content integrity in the face of extractive AI practices.

Last updated: 08/01/2026

The articles and resources listed in this hub do not necessarily represent EU DisinfoLab’s position. This hub is an effort to give voice to all members of the community countering AI-generated disinformation.