
AI Disinfo Hub
The development of artificial intelligence (AI) technologies has long posed a challenge for the disinformation field, enabling the manipulation of content and accelerating its spread. Recent technical developments have exponentially increased these challenges. While AI offers opportunities for legitimate purposes, AI-generated content is also widely produced and disseminated across the internet, causing – intentionally or not – harm and deception.
Are you more into podcasts and video content? You will find a repository of podcasts and webinars in AI Disinfo Multimedia, while AI Disinfo in Depth will feature research reports from academia and civil society organisations and cover the burning questions around the regulation of AI technologies and their use. In addition, the community working at the intersection of AI and disinformation will have a dedicated space listing initiatives, resources, and useful tools.
In short, this hub is your go-to resource for understanding the impact of AI on disinformation and finding ways to combat it.
Here, researchers, policymakers, and the public can access reliable tools and insights to navigate this complex landscape. Together, we’re building a community to tackle these challenges head-on, promoting awareness and digital literacy.
Join us in the fight against AI-driven disinformation. Follow us and share with the community!

NEURAL NEWS & TRENDS
We've curated a selection of articles from external sources that delve into the topic from different perspectives. Keep exploring the latest news and publications on AI and disinformation!
News
We’re talking about AI all wrong. Here’s how we can fix the narrative (The Conversation, 07/01/2026)
The Conversation: This article examines how the metaphors and narratives we use to describe AI shape public understanding and, in turn, how AI is designed, adopted, and governed. The author argues that many dominant portrayals of AI (humanlike “assistants,” artificial brains, and the ubiquitous humanoid robot) have little basis in reality. These myth-driven images can obscure what today’s AI systems actually are, exaggerate their capabilities, and blur their limitations, making the technology harder to use and regulate.
Phony visuals of Maduro’s real capture (NewsGuard, 05/01/2026)
NewsGuard: Following the real capture of Venezuela’s leader Nicolás Maduro by U.S. forces, social media was flooded with AI-generated and out-of-context images and videos falsely claiming to show the operation, amassing more than 14 million views on X in days. NewsGuard finds that these visuals often closely resemble reality, making them harder to debunk. This illustrates how AI-enhanced imagery and recycled footage are increasingly used to amplify political narratives and manipulate perception, even when the underlying event is real.
Italy closes probe into DeepSeek after commitments to warn of AI 'hallucination' risks (Reuters, 05/01/2026)
Reuters: Italy’s antitrust authority has closed its investigation into the Chinese AI system DeepSeek after the company agreed to introduce binding measures to better warn users about the risk of AI “hallucinations.” The commitments require clearer, more prominent disclosures that AI-generated responses may be inaccurate, misleading, or fabricated, addressing concerns over consumer protection and transparency.
Google’s and OpenAI’s chatbots can strip women in photos down to bikinis (Wired, 23/12/2025)
Wired: Grok has become the focal point of a growing AI scandal after users showed that the chatbot can be used directly on X to “undress” people and generate non-consensual sexualised images, including of minors, making the abuse highly visible at scale. Euronews reports that the fallout has triggered investigations and warnings from regulators across the EU, UK, France, India and beyond, with mounting pressure on xAI over child safety, consent and liability. As Axios outlines, the controversy is also sharpening a broader legal debate: because Grok generates and publicly shares the images itself, platforms may face direct responsibility rather than relying on user-content protections. While Grok stands out for its visibility on X, reporting from Wired shows the problem is not isolated: Google’s Gemini and OpenAI’s ChatGPT can also be coaxed into producing similar “bikini” deepfakes, exposing wider failures of safeguards across mainstream AI tools.
Amjad Taha, muslim brotherhood maxxing and the emirati dysinfluencer factory (Dysinfluence / Marc Owen Jones, 22/12/2025)
Dysinfluence / Marc Owen Jones: This report investigates a coordinated, AI-assisted influence and disinformation ecosystem involving a cluster of Emirati social media personalities, pseudo-news websites, and right-wing media outlets. It shows how AI-generated content, recycled accounts, fake or opaque news sites, and books that appear to have been written with AI are used to launder narratives aligned with UAE, pro-Israel, and European far-right talking points, especially around the Muslim Brotherhood, migration, Sudan, and Gaza. A central connective figure is Amjad Taha and his company Crestnux Media, which appear to promote, amplify, and help legitimise this network through advertising, events, and cross-platform coordination.
Scammers in China are using AI-generated images to get refunds (Wired, 19/12/2025)
Wired: AI misuse is enabling new forms of fraud: Scammers in China are increasingly using AI-generated photos and videos to fake damaged goods and fraudulently claim refunds from e-commerce platforms. According to this article, as image-generation tools become cheaper and more realistic, scammers are lowering the barrier for organised and individual fraud, undermining trust-based return systems and forcing platforms to rethink verification and refund policies.
When AI models can continually learn, will our regulations be able to keep up? (Lawfare Media, 18/12/2025)
Lawfare Media: This article examines how future AI systems that can continue learning after deployment could fundamentally challenge existing and proposed AI regulations. Most regulatory approaches assume models are fixed products with stable capabilities. By contrast, systems that learn autonomously and evolve over time could complicate risk assessment, auditing, and enforcement. The article argues that this shift would also blur liability and responsibility among developers, intermediaries, and users, and urges policymakers to anticipate these challenges now, before such models become widespread.
First draft Code of Practice on transparency of AI-generated content (European Commission, 17/12/2025)
European Commission: The European Commission has released a first draft of its Code of Practice on marking and labelling of AI-generated content, outlining how AI content, including deepfakes and synthetic text, could be clearly labelled across the European Union. The draft signals the possible use of a common visual marker and is intended to guide providers and deployers in meeting the AI Act’s transparency obligations, before the rules take effect in August 2026.
Racist and antisemitic false information spreads online following Bondi Beach terrorism attack (ABC, 16/12/2025)
ABC: As with many breaking-news events, a surge of disinformation followed the attack at Bondi Beach in Australia last December, when 15 people were killed at a Hanukkah gathering. Much of the false content about the attackers and victims was AI-generated. ABC News Verify traced, for instance, one widely shared deepfake, created with Google’s AI tools, that allegedly showed one of the victims staging the attack. AI-generated text also added to the confusion, with Grok, Elon Musk’s AI-driven chatbot, spreading and amplifying false narratives soon after the attack. The Financial Review reported how Grok invented the identity of a heroic bystander who disarmed an attacker, naming him Edward Cabtree, complete with a fabricated backstory. The chatbot also questioned the authenticity of the confrontation and described the situation in surreal terms (as a man climbing a palm tree, or an Israeli hostage taken by Hamas on October 7), as reported by Gizmodo. Crikey highlights how this episode illustrates how AI misinformation eats its own tail, with Grok absorbing and rapidly repeating AI-generated falsehoods.
AI models are perfecting their hacking skills (Axios, 16/12/2025)
Axios: What once seemed a remote, hypothetical risk is rapidly becoming a realistic scenario. AI systems are demonstrating the ability to carry out increasingly sophisticated hacking tasks, raising fears that autonomous cyberattacks are approaching reality. Researchers and tech companies warn that even today’s imperfect models can already find vulnerabilities, write exploits, and assist threat actors, suggesting future versions could dramatically scale cybercrime and state-backed attacks.
Italy’s parties agree not to use deepfakes against their opponents (Pagella Politica, 16/12/2025)
Pagella Politica: Most of Italy’s parliamentary parties have agreed to a voluntary commitment to refrain from using AI-generated deepfakes in political campaigning and to publicly correct any such content shared in error. The initiative, developed by fact-checkers at Pagella Politica in collaboration with Facta, was endorsed across the political spectrum, with the notable exception of the Lega, the right-wing party led by Matteo Salvini, which did not sign the pledge.
Militant groups are experimenting with AI, and the risks are expected to grow (AP News, 15/12/2025)
AP News: As generative AI becomes embedded in everyday digital life, militant and extremist groups are beginning to experiment with the same tools. AI lowers barriers by enabling these actors to produce and test content at scale, including propaganda and deepfakes, and to improve recruitment through tailored multilingual messages that reach, persuade, and mobilize new audiences. At the same time, platform algorithms can amplify emotionally charged and misleading content during conflicts or crises. Combined with rapid advances in AI capabilities, the risks to information integrity, public trust, and online safety are likely to escalate quickly.
Iterate through: Why the Washington Post launched an error-ridden AI product (Semafor, 14/12/2025)
Semafor: The Washington Post has launched a beta AI tool that generates personalised news podcasts, even though internal tests found most scripts failed basic publishability checks. Staff flagged errors ranging from misquotes and fabrications to biased framing, raising fresh questions about trust and quality as newsrooms rush to roll out consumer-facing AI products.
LLMs may be more vulnerable to data poisoning than we thought (The Alan Turing Institute, 09/10/2025)
The Alan Turing Institute: A new study by the Alan Turing Institute, in collaboration with the AI Security Institute and Anthropic, finds that large language models may be easier to poison than previously assumed. Researchers show that inserting a hidden backdoor into an LLM can require only a small, roughly constant number of malicious documents, around a few hundred, regardless of model size, suggesting that data poisoning attacks could be both scalable and practical. The findings raise fresh concerns about the security of AI systems trained on open web data and the need for stronger protections against misuse.
From notes to bots: How generative AI impacts human-led fact-checking (Yingxin Zhou and Jingbo Hou, 30/09/2025)
Yingxin Zhou and Jingbo Hou: This paper examines how the introduction of generative-AI fact-check-style responses (via X’s chatbot Grok) affects participation in human-led fact-checking systems, specifically Community Notes. The authors find that when users can rely on AI-generated replies, engagement in crowdsourced fact-checking drops, especially among highly active contributors who are crucial to the system’s effectiveness. The study warns that AI tools may unintentionally undermine human verification ecosystems rather than complement them.
Events, jobs & announcements
Event, 22 January 2026, online: "One year later: What we’ve learned about Trump’s AI agenda"
One year into the second Trump administration, US AI policy has taken a sharp and unexpected turn, from rapid AI infrastructure expansion and workforce automation to shifts in regulation, public ownership, and the global export of the “American AI technology stack”.
This online discussion organised by Data & Society brings together leading experts to unpack what is really driving these changes, how AI governance is being reshaped, and what the downstream consequences may be for workers, civil rights, democracy, and global tech power.
Speakers:
Alondra Nelson (Institute for Advanced Study)
Edward Ongweso Jr. (Security in Context; This Machine Kills)
Vittoria Elliott (WIRED)
Format: Online
Date: 22 January 2026
Time: 2:00 PM ET
Fellowship opportunity: Institute for Law & AI – Seasonal Research Fellowships
The Institute for Law & AI (LawAI) is offering Seasonal Research Fellowships for law students, professionals, and academics interested in working at the intersection of AI, law, and public policy.
📍 Remote | ⏳ Seasonal (Summer / Winter)
Fellowships are available across multiple workstreams, including:
- EU Law
- US Law & Policy
- Legal Frontiers
Research fellows contribute to LawAI’s core research agendas and policy-relevant work at the cutting edge of AI governance and legal design.
Fellowship opportunity: AI Institute Fellow-in-Residence (Schmidt Sciences)
Schmidt Sciences is recruiting AI Institute Fellows-in-Residence for a 12–18 month programme for recent PhD graduates in AI or computer science.
📍 New York City (on-site) | ⏳ Fixed-term | 💼 $150,000/year
🗓️ Deadline: Rolling applications (apply early) | 🗓️ Cohort starting 2026
Fellows split their time between independent AI research and supporting the development of the AI & Advanced Computing Institute, including grantmaking and programme design. Priority areas include AI agents, trustworthy AI, AI for science, labour impacts, and alignment.
Career opportunities: ActiveFence (Trust & Safety & AI Security)
ActiveFence is hiring across multiple roles to help tackle online harms, AI security risks, and trust & safety challenges at scale. The company brings together intelligence analysts, engineers, data scientists, and researchers to ensure the internet remains a safer, more resilient space.
📍 Multiple locations (Israel, UK, Vietnam, remote/hybrid roles)
🧭 Teams: R&D, Trust & Safety, AI & GenAI Security, Data Science, Engineering
🗓️ Deadline: Rolling applications
Open roles include positions in GenAI security, malware research, data science, DevOps, and platform engineering, among others.
Career opportunities: Centre for Responsible AI (CeRAI), IIT Madras
The Centre for Responsible AI (CeRAI) at IIT Madras is currently advertising multiple research, technical, and policy roles focused on responsible, ethical, and governance-oriented AI.
📍 India (IIT Madras) | 🌍 Interdisciplinary
🗓️ Deadline: Not specified (roles appear to be open / rolling)
Roles listed include:
- Research Scientists & Postdoctoral Fellows
- Policy Analysts & Junior Researchers
- AI / LLM Engineers & Software Developers
- Project & Programme Staff (technical and non-technical)
Technical roles are often recruited via the Wadhwani School of Data Science & AI, while policy and social science roles are applied for directly through CeRAI.
Job opportunities: Centre for the Governance of AI (GovAI)
The Centre for the Governance of AI (GovAI) is recruiting for several roles and fellowships focused on AI governance, policy, and research.
📍 UK / Global
🗓️ Key deadline: 4 January 2026 (23:59 GMT)
Open opportunities include:
- Summer Fellowship 2026 (Research Track & Applied Track)
- Head of Community
- Research Assistant (expression of interest, rolling)

AI & Disinfo Multimedia
A collection of webinars and podcasts from us and the wider community, dedicated to countering AI-generated disinformation.
Webinars
A collection of our own and community webinars exploring the intersections of AI and disinformation
- Are AI detection tools effective? TRIED puts them to the test. With Zuzanna Wojciak (WITNESS). Hosted by EU DisinfoLab (23/10/2025)
- How AI tools are accelerating pro-China messages online, with Margot Fulde-Hardy and Chris Block (Graphika). Hosted by Graphika (25/09/2025)
- Synthetic propaganda – Generative AI and the future of political communication, with Marcus Bösch (University of Münster). Hosted by EU DisinfoLab (04/09/2025)
- AI Red Teaming 101. Full course (Episodes 1-10), with Amanda Minnich, Nina Chikanov (Microsoft) and Gary Lopez (ADAPT). Hosted by Microsoft (09/07/2025)
- This is what happens when you let Elon Musk build an AI, with Nolan Higdon and Sydney Sullivan. Hosted by The disinfo detox (20/05/2025)
- LLM grooming: a new strategy to weaponise AI for FIMI purposes, with Sophia Freuden (The American Sunlight Project). Hosted by EU DisinfoLab (10/04/2025)
- Melodies of malice: Understanding how AI fuels the creation and spread of extremist music, with Heron Lopes (UCDP). Hosted by EU DisinfoLab (06/03/2025)
- Safeguarding Australian elections: Addressing AI-enabled disinformation, with Kate Seward (Microsoft ANZ), Antonio Spinelli (International IDEA) and Sam Stockwell (CETaS). Hosted by ASPI (06/02/2025)
- Faking It – Information Integrity, AI and the Law (Global Game Changers Series), with Monica Attard and Michael Davis (UTS), Creina Chapman (ACMA), Cullen Jennings (Cisco Systems) and Jason M Schultz (Canva). Hosted by University of Technology Sydney (29/11/2024)
- AI and Disinformation: A legal perspective, with Noémie Krack (KU Leuven). Hosted by EU DisinfoLab (07/11/2024)
- Generative AI and Geopolitical Disruption, with Corneliu Bjola (Oxford Internet Institute), Antonio Estella and Maria Dolores Sanchez Galera (Carlos III University), Peter Pijpers (Netherlands Defence Academy), Michael Zinkanell (Austrian Institute for European and Security Policy), and Gregory Smith (RAND Corporation). Hosted by Solaris (25/10/2024)
- DisinfoCon 2024 - Taking stock of Information Integrity in the Age of AI, with Carl Miller (Center for Analysis of Social Media at Demos). Hosted by Democracy Reporting International (26/09/2024)
- Advancing synthetic media detection: introducing veraAI, with Akis (Symeon) Papadopoulos (Centre for Research and Technology Hellas – Information Technologies Institute). Hosted by EU DisinfoLab (29/08/2024)
- Using Generative AI for the production, spread, and detection of disinformation – latest insights and innovations, with Kalina Bontcheva (University of Sheffield). Hosted by EU DisinfoLab (27/06/2024)
- Beyond Deepfakes: AI-related risks for elections, with Sophie Murphy Byrne (Logically). Hosted by EU DisinfoLab (30/05/2024)
- The Top 9 AI Breakthroughs of 2024 (You Won’t Believe Are Real). By AI Uncovered (08/11/2024)
- Tools and techniques for using AI in digital investigations, with Craig Silverman (ProPublica). Hosted by EU DisinfoLab (25/04/2024)
- OSINT & AI: Advanced Analysis, with Ivan Kravtsov (Social Links) and Gary Ruddell (Independent Cyber Threat Intelligence Professional). Hosted by Social Links (16/11/2023)
Podcasts
Community podcasts exploring the intersections of AI and disinformation
- How chatbots — and their makers — are enabling AI psychosis. Hosted by The Verge (18/09/2025)
- Seriously, what is ‘Superintelligence’? Hosted by Wired (28/06/2025)
- Is technological progress always good? Hosted by Responsible bytes (02/04/2025)
- AI Is transforming geopolitics. Hosted by New Lines Magazine (21/02/2025)
- The rise of DeepSeek, the Chinese AI chatbot making waves in tech. Hosted by Teka Teka (19/02/2025)
- Privacy, digital rights, AI and the law. Hosted by Technology & Security (17/02/2025)
- How DeepSeek controls the conversation. Hosted by Digital Digging (29/01/2025)
- AI regulation and risk management in 2024. Hosted by The AI in business Podcast (21/01/2025)
- The case for human-centered AI. Hosted by McKinsey Digital (20/12/2024)
- Destination Deception 2025. Hosted by Faked Up (18/12/2024)
- What is AI slop and did it lead to a Halloween parade hoax in Dublin? Hosted by The Explainer (05/11/2024)
- Beyond the ballot: Misinformation, trust and truth in elections. Hosted by The National Security Podcast (24/10/2024)
- Do not "summarize this"! Episode 4: improve prompts to get a better summary. Hosted by Digital Digging (28/09/2024)
- How to detect fake AI-texts, episode 1 of podcast series on AI & Research. Hosted by Digital Digging (17/09/2024)
- Moderating Global Voices. Hosted by Decoding Hate (10/02/2021)

AI Disinfo in depth
A repository of research papers and reports from academia and civil society organisations, alongside articles addressing key questions related to the regulation of AI technologies and their use. It also features a collection of miscellaneous readings.
Research
A compact yet potent library of research exploring AI and disinformation
- AI as a healthcare ally: How Americans are navigating the system with ChatGPT, by OpenAI (05/01/2025)
- AI-assisted analysis of war-related content on grey zone domains, by Lund University (18/12/2025)
- Child pornography just a click away: How Pedophiles access illegal content on Telegram Via TikTok, by Maldita (11/12/2025)
- AI deepfakes of real doctors spreading health misinformation on social media, by The Guardian (05/12/2025)
- Prompt, Upload, Repeat: Agentic AI Accounts Flood TikTok, by AI Forensics (03/12/2025)
- Cheap tricks: How AI slop is powering influence campaigns, by Graphika (27/11/2025)
- Google’s Nano Banana Pro generates excellent conspiracy fuel, by The Verge (21/11/2025)
- White nationalist talking points and racial pseudoscience: welcome to Elon Musk’s Grokipedia, by The Guardian (17/11/2025)
- King of slop: How anti-migrant AI content made one Sri Lankan influencer rich, by The Bureau of Investigative Journalism (16/11/2025)
- People are more susceptible to misinformation with realistic AI-synthesized images that provide strong evidence to headlines, by Harvard Misinfo Review (10/11/2025)
- X is using AI fact-checkers, by Columbia Journalism Review (06/11/2025)
- Performance of recent reasoning-driven LMs across verification, confirmation and recursive knowledge tasks in the dataset, by Nature (01/11/2025)
- Chatbots are pushing sanctioned Russian propaganda, by Wired (27/10/2025)
- When chatbots surface Russian state media, by ISD (27/10/2025)
- AI tools amplify anti-Muslim hate on Indian social media: think tank, by Asia Nikkei (23/10/2025)
- Largest study of its kind shows AI assistants misrepresent news content 45% of the time – regardless of language or territory, by EBU (22/10/2025)
- Russian AI sites can’t stop gushing about Putin, by Newsguard (21/10/2025)
- How scammers entice targets via impersonation and fictional financial aid offers, by Graphika (21/10/2025)
- AI models get brain rot, too, by Wired (21/10/2025)
- OpenAI’s Sora: When seeing should not be believing, by Newsguard (17/10/2025)
- Resisting, refusing, reclaiming, reimagining: Charting challenges to narratives of AI inevitability, by Zenodo (17/10/2025)
- Be careful what you tell your AI chatbot, by HAI Stanford University (15/10/2025)
- LLM grooming or data voids? LLM-powered chatbot references to Kremlin disinformation reflect information gaps, not manipulation, by Misinfo Review (15/10/2025)
- Audience use and perceptions of AI assistants for news, by BBC (15/10/2025)
- The illusion of AI safety, by CCDH (14/10/2025)
- Generative AI and news report 2025: How people think about AI’s role in journalism and society, by Reuters (07/10/2025)
- You say you want a revolution. PRISONBREAK – An AI-enabled influence operation aimed at overthrowing the Iranian regime, by Citizen Lab (02/10/2025)
- Revisionist future: Russia's assault on large language models, the distortion of collective memory, and the politics of eternity, by King's College London (29/09/2025)
- This podcast company went all in on AI, by Indicator (24/09/2025)
- AI models are using material from retracted scientific papers, by Technology Review (23/09/2025)
- Are bad incentives to blame for AI hallucinations?, by TechCrunch (07/09/2025)
- Psychological tricks can get AI to break the rules, by Wired (07/09/2025)
- Chatbots spread falsehoods 35% of the time, by Newsguard (04/09/2025)
- MIGS launches new report “Wired for War: How Authoritarian States are Weaponizing AI against the West”, by MIGS Institute (02/09/2025)
- How safety measures failed when we asked AI chatbots to create false content, by International Journalists' Network (02/09/2025)
- BBC reveals web of spammers profiting from AI Holocaust images, by BBC (29/08/2025)
- One long sentence is all it takes to make LLMs misbehave, by The Register (26/08/2025)
- More powerful than lies: Taiwan's 2025 recall campaign and the rise of AI-generated mini clips, by Fact Link (20/08/2025)
- Scientists created an entire social network where every user is a bot, and something wild happened, by Futurism (19/08/2025)
- The AI created by the leader of Hazte Oír: content honouring Franco, disinformation and xenophobic messages, by El País (16/08/2025)
- The art of persuasion: how top AI chatbots can change your mind, by Financial Times (13/08/2025)
- AI revolution: Hackers increasingly taking advantage of GenAI tools to code malware and more, by Cyber Daily (04/08/2025)
- The era of AI propaganda has arrived, and America must act, by The New York Times (04/08/2025)
- AI-generated algorithmic virality, by AI Forensics (31/07/2025)
- British 999 call handler's voice cloned by Russian network using AI, by BBC (30/07/2025)
- AI chatbots often advise women to ask for lower pay than men: new study, by Women Agenda (29/07/2025)
- Iran-Israel AI war propaganda Is a warning to the world, by Carnegie Endowment (28/07/2025)
- Trump-Epstein AI fakes draw millions of views, by Newsguard (25/07/2025)
- Chinese AI Models Register a 60 Percent Fail Rate in NewsGuard Audit of Pro-China Claims, by Newsguard (25/07/2025)
- AI ‘Nudify’ websites are raking in millions of dollars, by Wired (14/07/2025)
- Bad actors are grooming LLMs to produce falsehoods, by The American Sunlight Project (11/07/2025)
- Microsoft shuts down 3,000 email accounts created by North Korean IT workers, by The Record (03/07/2025)
- Putin is weaponising AI to target Brits with disinformation campaign in new digital 'arms race', experts warn, by Daily Mail (01/07/2025)
- Q2 2025 Deepfake threat intelligence report, by Resemble.AI (01/07/2025)
- AI chatbots could spread ‘fake news’ with serious health consequences, by Unisa (30/06/2025)
- Russia, AI and the future of disinformation warfare, by Rusi (30/06/2025)
- Deciphering authenticity in the age of AI: how AI-generated disinformation images and AI detection tools influence judgements of authenticity, by Springer Nature Link (29/06/2025)
- AI is starting to wear down democracy, by The New York Times (26/06/2025)
- Operation Overload: An AI fuelled escalation of the Kremlin-linked propaganda effort, by CheckFirst (26/06/2025)
- KAIST develops AI comment detection technology to combat online manipulation in Korea, by Chosun Biz (24/06/2025)
- Grok struggles with fact-checking amid Israel-Iran war, by DFRLab (24/06/2025)
- Why do some language models fake alignment while others don’t?, by arXiv (22/06/2025)
- Disrupting malicious uses of AI: June 2025, by OpenAI (05/06/2025)
- Leaked files reveal how China is using AI to erase the history of the Tiananmen Square massacre, by ABC (02/06/2025)
- Hey chatbot, is this true? AI 'factchecks' sow misinformation, by France 24 (02/06/2025)
- Generative AI used to copy and clone French news media in French-speaking Africa, by Reporters Without Borders (02/06/2025)
- TRIED: Truly Innovative and Effective AI Detection Benchmark, by WITNESS (30/05/2025)
- Weaponized storytelling: How AI is helping researchers sniff out disinformation campaigns, by The Conversation & Florida International University (29/05/2025)
- A weaponized AI chatbot is flooding Canadian city councils with climate misinformation, by DeSmog (28/05/2025)
- Just as humans need vaccines, so do models: Model immunization to combat falsehoods, by Shaina Raza et al. (23/05/2025)
- On the conversational persuasiveness of GPT-4, by Nature (19/05/2025)
- The new wave of Russian disinformation blogs, by UK Defence Journal (18/05/2025)
- AI job recruitment tools could 'enable discrimination' against marginalised groups, research finds, by ABC News (07/05/2025)
- Synthetic propaganda, by Marcus Boesch (05/05/2025)
- How Russia is using Gaelic and AI to peddle disinformation in Scotland, by The Times (03/05/2025)
- Why does AI hinder democratization?, by PNAS (03/05/2025)
- Pro-Russian influence operation targeting Australia in lead-up to election with attempt to 'poison' AI chatbots, by ABC (02/05/2025)
- Disasters and disinformation: AI and the Myanmar 7.7 Magnitude Earthquake, by RSiS (01/05/2025)
- Generative AI in electoral campaigns: Mapping global patterns, by IPIE (01/05/2025)
- Deepfakes just got even harder to detect: Now they have heartbeats, by BBC (30/04/2025)
- Americans largely foresee AI having negative effects on news and journalists, by Pew Research Center (28/04/2025)
- Operating multi-client influence networks across platforms, by Anthropic (23/04/2025)
- AI is inherently ageist. That’s not just unethical – it can be costly for workers and businesses, by The Conversation (22/04/2025)
- Values in the wild: Discovering and analyzing values in real-world language model interactions, by Anthropic (21/04/2025)
- False face: Unit 42 demonstrates the alarming ease of synthetic identity creation, by Unit 42 (21/04/2025)
- Russian propaganda campaign targets France with AI-fabricated scandals, drawing 55 million views on social media, by Newsguard (17/04/2025)
- OpenAI’s new reasoning AI models hallucinate more, by TechCrunch (17/04/2025)
- Russia’s use of genAI in disinformation and cyber influence: Strategy, use cases and future expectations, by CRC (13/04/2025)
- LLMs pass the Turing Test. But that doesn’t mean AI is now as smart as humans, by The Conversation (08/04/2025)
- What we learned from tracking AI use in global elections, by Rest of World (08/04/2025)
- Emotional prompting amplifies disinformation generation in AI large language models, by Frontiers (07/04/2025)
- AI Index 2025: State of AI in 10 Charts, by HAI Stanford University (07/04/2025)
- OpenAI’s Sora is plagued by sexist, racist, and ableist biases, by Wired (23/03/2025)
- AI’s answers on China differ depending on the language, analysis finds, by TechCrunch (20/03/2025)
- Users turning to ChatGPT for news may find misinformation in responses, by Logically Facts (18/03/2025)
- Deepfake detectors vulnerable ahead of election, by InnovationAus (13/03/2025)
- Russia-linked Pravda network cited on Wikipedia, LLMs, and X, by DFRLab (12/03/2025)
- Urgent action is needed to secure the UK’s AI research ecosystem against hostile state threats, by The Alan Turing Institute (07/03/2025)
- A well-funded Moscow-based global ‘news’ network has infected Western artificial intelligence tools worldwide with Russian propaganda, by Newsguard (06/03/2025)
- Chinese AI video generators unleash a flood of new nonconsensual porn, by 404 Media (06/03/2025)
- AI search has a citation problem, by Columbia Journalism Review (06/03/2025)
- An AI slop "science" site has been beating real publications in Google results by publishing fake images of SpaceX Rockets, by Futurism (06/03/2025)
- Character flaws, by Graphika (05/03/2025)
- Slopaganda: The interaction between propaganda and generative AI, by Michał Klincewicz, Mark Alfano, Amir Ebrahimi Fard (03/03/2025)
- Hybrid threats and the amplifying power of AI: Five strategic scenarios, by Alto Intelligence (01/03/2025)
- Towards a common reporting framework for AI incidents, by OECD (28/02/2025)
- Microsoft outs hackers behind tools to bypass generative AI guardrails, by Bloomberg (27/02/2025)
- The smarter AI gets, the more it starts cheating when it's losing, by The Byte (22/02/2025)
- Disrupting malicious uses of AI, by OpenAI (21/02/2025)
- Deepfake threat: Only 0.1% can spot AI-generated fakes, by Security Brief (19/02/2025)
- Grok’s responses to questions on the German elections were mostly accurate and relied heavily on media sources, by Reuters Institute (19/02/2025)
- How 35 YouTube channels spread disinformation using AI about Spanish and European politics, by Maldita (14/02/2025)
- Inconsistent and unreliable: Chatbots provide inaccurate information on German elections, by Democracy Reporting International (12/02/2025)
- Representation of BBC News content in AI assistants, by BBC (11/02/2025)
- An adviser to Elon Musk’s xAI has a way to make AI more like Donald Trump, by Wired (11/02/2025)
- Red-teaming in the public interest, by Data & Society (09/02/2025)
- AI misinformation monitor of leading AI chatbots multilingual edition, by Newsguard (07/02/2025)
- Challenges and opportunities of AI in the fight against information manipulation, by VIGNIUM (07/02/2025)
- The use of artificial intelligence in counter-disinformation: a world wide (web) mapping, by Frontiers (07/02/2025)
- Search Google Maps with the help of AI, by Digital Digging (06/02/2025)
- Right-wing, female, fake ("Rechts, weiblich, Fake"), by Tagesschau (05/02/2025)
- Russian propaganda may be flooding AI models, by American Sunlight (01/02/2025)
- AI-Generated Disinformation in Europe and Africa, by KAS (31/01/2025)
- Scammers are creating fake news videos to blackmail victims, by Wired (27/01/2025)
- Russian propagandist turns his sights to German election, by Reuters (23/01/2025)
- Greenwashing and bothsidesism in AI chatbot answers about fossil fuels' role in climate change, by Global Witness (22/01/2025)
- Knowing less about AI makes people more open to having it in their lives, by The Conversation (20/01/2025)
- AI isn’t very good at history, by Tech Crunch (19/01/2025)
- A fact-checking tool based on Artificial Intelligence to fight disinformation on Telegram, by Universidad de Navarra (12/01/2025)
- Apple urged to withdraw 'out of control' AI news alerts, by BBC (07/01/2025)
- AI could usher in a golden age of research – but only if these cutting-edge tools aren’t restricted to a few major private companies, by The Conversation (06/01/2025)
- These defenders of democracy do not exist, by Conspirador Norteño (05/01/2025)
- An AI-Powered Audit: Do Chatbots Reproduce Political Pluralism?, by Democracy Reporting International (27/12/2024)
- ChatGPT search tool vulnerable to manipulation and deception, tests show, by The Guardian (24/12/2024)
- Predictions for AI in 2025: Collaborative agents, AI skepticism, and new risks, by Stanford University (23/12/2024)
- Bridging the data provenance gap across text, speech and video, by arXiv:2412.17847 (19/12/2024)
- Fake AI versions of world-renowned academics are spreading claims that Ukraine should surrender to Russia, by The Insider (13/12/2024)
- ElevenLabs used for Russian propaganda, by AI Tool Report (11/12/2024)
- AI enters Congress: Sexually explicit deepfakes target women lawmakers, by The 19th News (11/12/2024)
- Melodies of malice: Understanding how AI fuels the creation and spread of extremist music, by GNET (11/12/2024)
- Scottish Parliament TV at risk of deepfake attacks, by Infosecurity (10/12/2024)
- Revealed: bias found in AI system used to detect UK benefits fraud, by The Guardian (06/12/2024)
- Evaluating Large Language Models capability to launch fully automated spear phishing campaigns: Validated on human subjects, by arXiv (30/11/2024)
- Study of ChatGPT citations makes dismal reading for publishers, by Tech Crunch (29/11/2024)
- How ChatGPT Search (mis)represents publisher content, by Columbia Journalism Review (27/11/2024)
- Persuasive technologies in China: implications for the future of national security, by Australian Strategic Policy Institute (26/11/2024)
- "Operation Undercut" shows multifaceted nature of SDA’s influence operations, by Recorded Future (26/11/2024)
- Philippines, China clashes trigger money-making disinformation, by France24 (26/11/2024)
- Not even Spotify is safe from AI slop, by The Verge (14/11/2024)
- AI-enabled influence operations: Safeguarding future elections, by Cetas (13/11/2024)
- Disconnected from reality: American voters grapple with AI and flawed OSINT strategies, by ISD (07/11/2024)
- AI hallucinations caused artificial intelligence to falsely describe these people as criminals, by ABC News (03/11/2024)
- Exploiting Meta’s weaknesses, deceptive political ads thrived on Facebook and Instagram in run-up to election, by Pro Publica (31/10/2024)
- "Say it’s only fictional”: How the far-right is jailbreaking AI and what can be done about it, by ICCT (30/10/2024)
- How X users can earn thousands from US election misinformation and AI images, by BBC (30/10/2024)
- Hospitals use a transcription tool powered by an error-prone OpenAI model, by The Verge (28/10/2024)
- Researchers say an AI-powered transcription tool used in hospitals invents things no one ever said, by AP News (26/10/2024)
- GenAI and Democracy, by DSET (25/10/2024)
- Prebunking elections rumors: Artificial Intelligence assisted interventions increase confidence in American elections, by California Institute of Technology, Washington University in St. Louis, Cambridge University (24/10/2024)
- Large Language Models reflect the ideology of their creators, by arXiv (24/10/2024)
- Amazon Alexa users given false information attributed to Full Fact’s fact checks, by Full Fact (17/10/2024)
- Ensuring AI accountability: Auditing methods to mitigate the risks of Large Language Models, by Democracy Reporting International (14/10/2024)
- Pig butchering scams are going high tech, by Wired (12/10/2024)
- An update on disrupting deceptive uses of AI, by OpenAI (09/10/2024)
- Generative Artificial Intelligence and elections, by Center for Media Engagement (03/10/2024)
- Grok AI: A deepfake disinformation disaster for democracy, by CCDH (29/08/2024)
- OpenAI blocks AI propaganda, by AI Tool Report (19/08/2024)
- Disrupting deceptive uses of AI by covert influence operations, by OpenAI (30/05/2024)
- AI-pocalypse Now? Disinformation, AI, and the super election year, by MSC (01/04/2024)
About policy & regulations
A look at regulation and policies implemented on AI and disinformation
- You can now verify Google AI-generated videos in the Gemini app, by Google (18/12/2025)
- UK to push for nudity-blocking software on devices to protect children, by Financial Times (15/12/2025)
- States take the lead policing AI in health care, by Axios (13/12/2025)
- Gavin Newsom pushes back on Trump AI executive order preempting state laws, by The Guardian (13/12/2025)
- What to know about Trump’s executive order to curtail state AI regulations, by AP News (12/12/2025)
- Image of Trump using a walker is an AI fake, by Newsguard (12/12/2025)
- White House issues federal agency guidance against "woke" AI, by Axios (11/12/2025)
- A pay-to-scrape AI licensing standard is now official, by The Verge (10/12/2025)
- Big Tech warned over AI 'delusional' outputs by US attorneys general, by Reuters (10/12/2025)
- AI slop is ruining Reddit for everyone, by Wired (05/12/2025)
- South Korea to require advertisers to label AI-generated ads, by AP News (01/12/2025)
- From 'Googling' to 'Asking ChatGPT': Governing AI Search, by AI Forensics (01/12/2025)
- The race to regulate AI has sparked a federal vs. state showdown, by Tech Crunch (28/11/2025)
- New legislation targets scammers that use AI to deceive, by Cyberscoop (26/11/2025)
- Australia to establish AI safety institute, by Innovation AUS (25/11/2025)
- Manipulated video should have high-risk label, by The Oversight Board (25/11/2025)
- More ways to spot, shape and understand AI-generated content, by TikTok (24/11/2025)
- Victims of AI deepfakes could sue for emotional damages under new bill, by ABC (24/11/2025)
- La Presse sues OpenAI for copyright infringement, by AP News (24/11/2025)
- White House pauses executive order that would seek to preempt state laws on AI, sources say, by Reuters (21/11/2025)
- How we’re bringing AI image verification to the Gemini app, by Google (20/11/2025)
- EU to delay 'high risk' AI rules until 2027 after Big Tech pushback, by Reuters (19/11/2025)
- UK seeking to curb AI child sex abuse imagery with tougher testing, by BBC (12/11/2025)
- ChatGPT violated copyright law by ‘learning’ from song lyrics, German court rules, by The Guardian (11/11/2025)
- Strengthening public interest media in the age of GenAI, by Medium (11/11/2025)
- EU could water down AI Act amid pressure from Trump and big tech, by The Guardian (07/11/2025)
- China's Xi pushes for global AI body at APEC in counter to US, by Reuters (01/11/2025)
- Denmark eyes new law to protect citizens from AI deepfakes, by AP News (01/11/2025)
- Watchdog group Public Citizen demands OpenAI withdraw AI video app Sora over deepfake dangers, by AP News (01/11/2025)
- Tech platforms promised to label AI content. They're not delivering, by Indicator (23/10/2025)
- India proposes strict rules to label AI content citing growing risks, by Reuters (22/10/2025)
- YouTube’s AI ‘likeness detection’ tool is searching for deepfakes of popular creators, by The Verge (21/10/2025)
- Meta to give teens' parents more control after criticism over flirty AI chatbots, by Reuters (17/10/2025)
- How we bypassed Sora 2's identity safeguards in under 24 hours, by Reality Defender (03/10/2025)
- The Indicator guide to AI labels: We’ve collected in one place how and when major platforms label AI content, by Indicator (02/10/2025)
- Meta greenlights Facebook, Instagram ads based on your AI chats, by CNBC (01/10/2025)
- Gavin Newsom signs first-in-nation AI safety law, by Politico (29/09/2025)
- U.S. rejects international AI oversight at U.N. General Assembly, by NBC News (27/09/2025)
- Meta launches super PAC to fight AI regulation, by Axios (23/09/2025)
- The German media industry attacks Google's AI Overviews ("L’industrie média allemande attaque AI Overviews de Google"), by Mind Media (22/09/2025)
- A ‘global call for AI red lines’ sounds the alarm about the lack of international AI policy, by The Verge (22/09/2025)
- Google will use hashes to find and remove nonconsensual intimate imagery from Search, by The Verge (17/09/2025)
- AI agents & global governance: Analyzing foundational legal, policy, and accountability tools, by Partnership on AI (16/09/2025)
- Rolling Stone publisher sues Google over AI summaries, by The Wall Street Journal (13/09/2025)
- US Senator Cruz proposes AI 'sandbox' to ease regulations on tech companies, by Reuters (10/09/2025)
- DeepSeek sheds light on data collection for AI training and warns of ‘hallucination’ risks, by SCMP (03/09/2025)
- Australia moves to stamp out ‘nudify’ and stalking apps, by Aljazeera (02/09/2025)
- OpenAI's ChatGPT to implement parental controls after teen's suicide, by ABC (02/09/2025)
- China’s social media platforms rush to abide by AI-generated content labelling law, by SCMP (01/09/2025)
- Meta to stop its AI chatbots from talking to teens about suicide, by BBC (01/09/2025)
- AI Is replacing online moderators, but it's bad at the job, by Bloomberg (22/08/2025)
- TikTok to lay off hundreds of UK moderators as it shifts to AI, by The Financial Times (22/08/2025)
- Texas attorney general accuses Meta, Character.AI of misleading kids with mental health claims, by TechCrunch (18/08/2025)
- Meta’s AI rules have let bots hold ‘sensual’ chats with kids, offer false medical info, by Reuters (14/08/2025)
- Google to sign EU's AI code of practice despite concerns, by Reuters (30/07/2025)
- ‘Global approach’ to AI regulation urgently needed, UN tech chief says, by SCMP (27/07/2025)
- Trump’s order to block ‘woke’ AI in government encourages tech giants to censor their chatbots, by AP News (25/07/2025)
- White House unveils America’s AI action plan, by The White House (23/07/2025)
- Trump’s AI action plan Is a crusade against ‘bias’ and regulation, by Wired (23/07/2025)
- AI models with systemic risks given pointers on how to comply with EU AI rules, by Reuters (18/07/2025)
- Meta says it won’t sign Europe AI agreement, calling it an overreach that will stunt growth, by CNBC (18/07/2025)
- White House Prepares Executive Order Targeting ‘Woke AI’, by The Wall Street Journal (17/07/2025)
- Grace Tame urges government to outlaw AI tools used to generate child sexual abuse material, by ABC (16/07/2025)
- EU rolls out AI code with broad copyright, transparency rules, by Bloomberg (10/07/2025)
- Promoting accountability for AI misinformation: Intermediary Digital Liability, by Global Voices (08/07/2025)
- Senate strikes AI regulatory ban from GOP bill after uproar from the states, by AP News (02/07/2025)
- US Senate strikes AI regulation ban from Trump megabill, by Reuters (01/07/2025)
- DeepSeek faces ban from Apple, Google app stores in Germany, by Reuters (27/06/2025)
- Denmark to tackle deepfakes by giving people copyright to their own features, by The Guardian (27/06/2025)
- US lawmakers introduce bill to bar Chinese AI in US government agencies, by Reuters (25/06/2025)
- Asian countries are pioneers in balancing AI regulation and innovation, by Nikkei Asia (25/06/2025)
- Federal court says copyrighted books are fair use for AI training, by The Washington Post (25/06/2025)
- Swedish PM calls for a pause of the EU’s AI rules, by Politico (23/06/2025)
- The State of Deepfake Regulations in 2025: What businesses need to know, by Reality Defender (18/06/2025)
- Nvidia's pitch for sovereign AI resonates with EU leaders, by Reuters (16/06/2025)
- EU’s waffle on artificial intelligence law creates huge headache, by Politico (16/06/2025)
- New York passes a bill to prevent AI-fueled disasters, by Tech Crunch (13/06/2025)
- EU could postpone flagship AI rules, tech chief says, by Politico (06/06/2025)
- X’s new policy prevents companies from using posts to ‘fine-tune or train’ AI models, by The Verge (05/06/2025)
- Google’s SynthID is the latest tool for catching AI-made content. What is AI ‘watermarking’ and does it work?, by The Conversation (03/06/2025)
- Meta reportedly replacing human risk assessors with AI, by Mashable (01/06/2025)
- Governing AI and the democratisation of governance, by Hintz, A. Dialogues on Digital Society (30/05/2025)
- The coming AI backlash will shape future regulation, by Brookings (27/05/2025)
- Nick Clegg says asking artists for use permission would ‘kill’ the AI industry, by The Verge (26/05/2025)
- German rights group fails in bid to stop Meta's data use for AI, by Reuters (23/05/2025)
- President Trump signs TAKE IT DOWN Act into Law, by The White House (19/05/2025)
- Tech workers, teachers, artists oppose AI preemption measure, by Demand Progress (19/05/2025)
- OpenAI Launches AI Safety Evaluations Hub Amid GPT-4o Controversy: Transparency or PR Strategy?, by Medium (15/05/2025)
- Trump administration fires top copyright official days after firing Librarian of Congress, by AP (12/05/2025)
- Who owns AI fraud? How to build a deepfake response framework, by Reality Defender (12/05/2025)
- Trump fires director of U.S. Copyright Office, sources say, by CBS News (10/05/2025)
- Introducing Gen AI labels: Pinterest is taking a new step in transparency, by Pinterest (30/04/2025)
- House approves Take It Down Act, sending bill on intimate images to Trump’s desk, by The 19th News (28/04/2025)
- Musk’s X sues to block Minnesota ‘deepfake’ law over free speech concerns, by CNBC (23/04/2025)
- Google used AI to suspend over 39M ad accounts suspected of fraud, by Tech Crunch (16/04/2025)
- OpenAI updated its safety framework—but no longer sees mass manipulation and disinformation as a critical risk, by Fortune (16/04/2025)
- ChatGPT now lets users create fake images of politicians. We stress-tested it, by CBC (13/04/2025)
- YouTube supports the NO FAKES Act: Protecting creators and viewers in the age of AI, by YouTube (09/04/2025)
- The Dangers of AI Sovereignty, by Lawfare (07/04/2025)
- Google is shipping Gemini models faster than its AI safety reports, by Tech Crunch (03/04/2025)
- UK needs to relax AI laws or risk transatlantic ties, thinktank warns, by The Guardian (02/04/2025)
- Protecting the polls in the era of AI and deepfakes, by Microsoft (01/04/2025)
- OpenAI peels back ChatGPT’s safeguards around image creation, by Tech Crunch (28/03/2025)
- Meta to seek disclosure on political ads that use AI ahead of Canada elections, by Reuters (20/03/2025)
- Vance outlines an America first, America only AI agenda, by Lawfare (19/03/2025)
- China mandates labels for all AI-generated content in fresh push against fraud, fake news, by SCMP (15/03/2025)
- Under Trump, AI scientists are told to remove ‘ideological bias’ from powerful models, by Wired (14/03/2025)
- OpenAI urges Trump administration to remove guardrails for the industry, by CNBC (13/03/2025)
- Spain to impose massive fines for not labelling AI-generated content, by Reuters (11/03/2025)
- The AI regulation debate in China is on a whole different level, by Raymond Sun (10/03/2025)
- Meta brings its anti-scam facial-recognition test to the UK and Europe, by Tech Crunch (04/03/2025)
- Creative industries protest against UK plan about AI and copyright, by Financial Times (27/02/2025)
- Terms of (dis)service: comparing misinformation policies in text-generative AI chatbots, by EU DisinfoLab (27/02/2025)
- UK delays plans to regulate AI as ministers seek to align with Trump administration, by The Guardian (24/02/2025)
- Erotica, gore and racism: how America’s war on ‘ideological bias’ is letting AI off the leash, by The Conversation (24/02/2025)
- Artificial intelligence and intellectual property: Navigating the challenges of data scraping, by OECD.AI (14/02/2025)
- OpenAI removes certain content warnings from ChatGPT, by Tech Crunch (13/02/2025)
- Tech companies pledged to protect elections from AI. Here’s how they did, by Brennan Center (13/02/2025)
- The death of inclusive AI? Trump’s fight against diversity intensifies, by ANU Reporter (13/02/2025)
- JD Vance warns Europe to go easy on tech regulation in major AI speech, by Politico (11/02/2025)
- Donald Trump rolls back Biden-era AI regulation, sets stage for battles with US states, by CNN (09/02/2025)
- The Cambridge Handbook of the Law, Ethics and Policy of Artificial Intelligence, by Cambridge University Press (06/02/2025)
- Living repository to foster learning and exchange on AI literacy, by European Commission (04/02/2025)
- China is scheduled to hold its "Two Sessions" this week, by Raymond Sun (04/02/2025)
- Meta says it may stop development of AI systems it deems too risky, by Tech Crunch (03/02/2025)
- The EU’s AI bans come with big loopholes for police, by Politico (03/02/2025)
- Frontier AI Framework, by Meta (03/02/2025)
- AI-generated child sex abuse images targeted with new laws, by BBC (02/02/2025)
- First international AI safety report published, by Computer Weekly (30/01/2025)
- Fighting deepfakes: what’s next after legislation?, by Australian Strategic Policy Institute (24/01/2025)
- Deepfake labels and detectors still don't work, by Faked Up (22/01/2025)
- The global struggle over how to regulate AI, by Rest of World (21/01/2025)
- Trump revokes Biden executive order on addressing AI risks, by Reuters (21/01/2025)
- Feedback on the second draft of the general-purpose AI Code of Practice: Comments and recommendations, by University of Cambridge (17/01/2025)
- Civil society rallies for human rights as AI Act prohibitions deadline looms, by EuroActiv (16/01/2025)
- OpenAI wooed Democrats with calls for AI regulation. Now it must charm Trump, by The Washington Post (13/01/2025)
- British PM Keir Starmer outlines bid to become AI 'world leader', by ABC (13/01/2025)
- UK can be ‘AI sweet spot’: Starmer’s tech minister on regulation, Musk, and free speech, by The Guardian (11/01/2025)
- Britain to make sexually explicit 'deepfakes' a crime, by Reuters (07/01/2025)
- Partnering for gender-responsive AI, by UN (01/01/2025)
- Copyright and Artificial Intelligence Part 2: Copyrightability, by United States Copyright office (01/01/2025)
- Trump announces new tech policy picks for his second term, by The Verge (23/12/2024)
- Sriram Krishnan named Trump’s senior policy advisor for AI, by Tech Crunch (22/12/2024)
- Google relaxes AI usage rules, by AI Tool Report (18/12/2024)
- Meta debuts a tool for watermarking AI-generated videos, by Tech Crunch (12/12/2024)
- New research centre supporting safe and responsible AI, by Minister for Industry and Science, Australia (09/12/2024)
- Inside Britain’s plan to save the world from runaway AI, by Politico (05/12/2024)
- Rumble Video Platform sues California over anti-deepfake law, by Bloomberg (29/11/2024)
- Trump 2.0: Clash of the tech bros, by Fortune (26/11/2024)
- ChatGPT, Meta and Google generative AI should be designated 'high-risk' under new laws, bipartisan committee recommends, by ABC News (26/11/2024)
- Case closed on "nude" AI images of girls. Why police are not charging man who made them, by Pensacola News Journal (22/11/2024)
- The EU Code of Practice for General-purpose AI: Key takeaways from the First Draft, by CSIS (21/11/2024)
- What Donald Trump’s Cabinet picks mean for AI, by GZero Media (19/11/2024)
- Musk sues California over deepfake law, by AI Tool Report (18/11/2024)
- EU AI Act: Draft guidance for general purpose AIs shows first steps for Big AI to comply, by TechCrunch (14/11/2024)
- Musk to be Trump's AI advisor?, by AI Tool Report (12/11/2024)
- What Trump’s victory could mean for AI regulation, by Tech Crunch (06/11/2024)
- How AI could still impact the US election, by Gzero Media (05/11/2024)
- Reducing risks posed by synthetic content, by National Institute of Standards and Technology (01/11/2024)
- Google Photos will soon show you if an image was edited with AI, by The Verge (24/10/2024)
- More transparency for AI edits in Google Photos, by Google (24/10/2024)
- Embedded GenAI on social media: Platform law meets AI law, by DSA Observatory (16/10/2024)
- California rejects AI safety bill, by AI Tool Report (30/09/2024)
- Council of Europe opens first ever global treaty on AI for signature, by Council of Europe (05/09/2024)
- Final Report - Governing AI for humanity, by UN (01/09/2024)
- United Nations Secretary-General’s video message for launch of the Final Report, by UN (01/09/2024)
- Platforms’ AI policy updates in 2024: Labelling as the silver bullet?, by EU DisinfoLab (01/07/2024)
- A real account of deep fakes, by Cornell University (15/04/2024)
- Governing AI agents, by Hebrew University of Jerusalem (02/04/2024)
Miscellaneous readings
Recommended reading on AI and disinformation
- That viral Reddit post about food delivery apps was an AI scam, by The Verge (05/01/2026)
- Hack reveals the a16z-backed phone farm flooding TikTok With AI influencers, by 404 Media (17/12/2025)
- Adobe sued for allegedly misusing authors' work in AI training, by Reuters (17/12/2025)
- 35 notable AI fails from 2025, by Indicator (15/12/2025)
- AI toys for kids talk about sex and issue Chinese Communist Party talking points, tests show, by NBC News (13/12/2025)
- Protecting truth in the era of AI mediation, by ASPI (12/12/2025)
- Instagram Is generating inaccurate SEO bait for your posts, by 404 Media (09/12/2025)
- Deepfakes of UK Prime Minister flood TikTok, by Newsguard (09/12/2025)
- UK intelligence warns AI 'prompt injection' attacks might never go away, by The Record (08/12/2025)
- Foreign states using AI videos to undermine support for Ukraine, says Yvette Cooper, by The Guardian (08/12/2025)
- Trains cancelled over fake bridge collapse image, by BBC (05/12/2025)
- Nonconsensual nude generators had another banner year. What will it take to defeat them?, by Indicator (04/12/2025)
- OpenAI has trained its LLM to confess to bad behavior, by Technology Review (03/12/2025)
- Why ads on ChatGPT are more terrifying than you think, by The Algorithmic Bridge (02/12/2025)
- ‘AI safety’ needs to mean safety from authoritarian abuse, by ASPI (02/12/2025)
- The party’s AI: How China’s new AI systems are reshaping human rights, by ASPI (01/12/2025)
- Leak confirms OpenAI is preparing ads on ChatGPT for public roll out, by Bleeping Computer (29/11/2025)
- SAP outlines new approach to European AI and cloud sovereignty, by AI News (27/11/2025)
- AI Slop recipes are taking over the Internet, and Thanksgiving dinner, by Bloomberg (25/11/2025)
- Meet the AI workers who tell their friends and family to stay away from AI, by The Guardian (22/11/2025)
- Elon Musk could 'drink piss better than any human in history,' Grok says, by 404 Media (20/11/2025)
- AI is supercharging disinformation warfare, by Foreign Affairs (19/11/2025)
- An AI bot is now the top contributor to Community Notes on X, by Indicator (18/11/2025)
- A massive Cloudflare outage brought down X, ChatGPT, and even Downdetector, by The Verge (18/11/2025)
- One in two misusing AI in workplace, by The Australian (17/11/2025)
- Lost in the plot: how would-be authors were fooled by AI staff and virtual offices in suspected global publishing scam, by The Guardian (16/11/2025)
- 13 November: Bataclan survivors confront far-right fake news relayed by X's artificial intelligence, by Le Parisien (15/11/2025)
- Anthropic says its latest model scores a 94% political ‘even-handedness’ rating, by Fortune (14/11/2025)
- AI firm claims Chinese spies used its tech to automate cyber attacks, by BBC (14/11/2025)
- Researchers question Anthropic claim that AI-assisted attack was 90% autonomous, by Ars Technica (14/11/2025)
- China’s ‘autonomous’ AI-powered hacking campaign still required a ton of human work, by Cyberscoop (14/11/2025)
- X’s Grok claims Trump won the 2020 election, by Newsguard Reality Check (12/11/2025)
- Google accused in suit of using Gemini AI tool to snoop on users, by Bloomberg (12/11/2025)
- Maga + AI is not a recipe for stability, by Financial Times (10/11/2025)
- The Ukrainian soldier who cries because he is forced to go to war: how an AI video has gone viral in 13 languages and has millions of views on X, by Maldita.es (06/11/2025)
- Evasion attacks on LLMs – Countermeasures in practice, by Bundesamt für Sicherheit in der Informationstechnik (06/11/2025)
- arXiv changes rules after getting spammed with AI-generated 'research' papers, by 404 Media (03/11/2025)
- How A.I. can use your personal data to hurt your neighbor, by The New York Times (02/11/2025)
- A.I. is making death threats way more realistic, by The New York Times (31/10/2025)
- Artificial intelligence and the future of espionage, by ASPI (30/10/2025)
- AI browsers are a cybersecurity time bomb, by The Verge (30/10/2025)
- How to spot fake AI-written press releases, by Press Gazette (30/10/2025)
- AFP developing AI tool to decode gen Z slang amid warning about ‘crimefluencers’ hunting girls, by The Guardian (29/10/2025)
- AI 'hallucinations' could prove real problem for owner of fire-ravaged Vancouver property, by CBC (28/10/2025)
- Teenagers struggle to tell if videos are real or fake as AI floods social media, by ABC (26/10/2025)
- AI-generated fact check on X is wrong: MSNBC’s ‘No kings’ footage is legit, by Newsguard (24/10/2025)
- US right-wing media figures, tech pioneers call for superintelligent AI ban, by Reuters (23/10/2025)
- ‘Do not trust your eyes’: AI generates surge in expense fraud, by Financial Times (23/10/2025)
- How Trump is using fake imagery to attack enemies and rouse supporters, by The New York Times (21/10/2025)
- Wikipedia says AI is hurting traffic, by Cyber Daily (21/10/2025)
- Minor sues over ClothOff AI that turns images into ‘hyperrealistic’ porn, by Mealeys (20/10/2025)
- AI video generators are now so good you can no longer trust your eyes, by The New York Times (09/10/2025)
- Russian hackers turn to AI as old tactics fail, Ukrainian CERT says, by The Record (08/10/2025)
- Coca-Cola, Bad Bunny, and the missing Super Bowl sponsorship, by Newsguard (07/10/2025)
- AI isn’t just rehashing the news, it’s inventing quotes from real people, by Newsguard (02/10/2025)
- Synthetic audio detectors put to the test, by DW (29/09/2025)
- Introducing YouTube Labs: Shape the future of AI on YouTube, by YouTube (26/09/2025)
- How AI and Wikipedia have sent vulnerable languages into a doom spiral, by Technology Review (25/09/2025)
- LinkedIn will use your data to train its AI unless you opt out now, by Malware Bytes Lab (25/09/2025)
- How Russia uses AI-driven bots on Telegram to meddle in Moldova’s elections, by Open Minds (24/09/2025)
- Inside Russia’s AI-driven disinformation machine shaping Moldova’s election, by EuroNews (23/09/2025)
- OpenAI admits AI hallucinations are mathematically inevitable, not just engineering flaws, by Computer World (18/09/2025)
- Can the Middle East fight unauthorized AI-generated content with trustworthy tech?, by Fast Company Middle East (17/09/2025)
- Russian State TV launches AI-generated news satire show, by 404 Media (17/09/2025)
- OpenAI is building a ChatGPT for teens, by Axios (16/09/2025)
- Russia’s Defense Ministry launches an AI-generated, anti-west news show, blurring the line between satire and propaganda, by NewsGuard (15/09/2025)
- Musk’s Grok AI bot falsely suggests police misrepresented footage of far-right rally in London, by The Guardian (14/09/2025)
- After Kirk assassination, AI ‘Fact Checks’ spread false claims, by NewsGuard (11/09/2025)
- Encyclopedia Britannica sues Perplexity over AI 'answer engine', by Reuters (11/09/2025)
- Albania appoints world’s first AI-made minister, by Politico (11/09/2025)
- Apple is teaching its AI to adapt to the Trump era, by Politico (09/09/2025)
- Is AI the new frontier of women’s oppression?, by Wired (09/09/2025)
- Fake celebrity chatbots sent risqué messages to teens on top AI app, by The Washington Post (06/09/2025)
- America's 'New Right' says AI threatens both US and China, by Asia Nikkei (03/09/2025)
- Is it safe to upload your photos to ChatGPT?, by The Wall Street Journal (03/09/2025)
- Gartner survey finds 53% of consumers distrust AI-powered search results, by Gartner (03/09/2025)
- Amazon’s AI book problem: fake authors flogging sloppy content, by The Australian (02/09/2025)
- AI ‘bikini interview’ videos flood internet, sparking sexism concerns, by SCMP (02/09/2025)
- How Elon Musk is remaking Grok in his image, by The New York Times (02/09/2025)
- Why AI labs struggle to stop chatbots talking to teenagers about suicide, by Financial Times (02/09/2025)
- A troubled man, his chatbot and a murder-suicide in Old Greenwich, by The Wall Street Journal (28/08/2025)
- YouTube secretly used AI to edit people's videos. The results could bend reality, by BBC (24/08/2025)
- ‘Crazy conspiracist’ and ‘unhinged comedian’: Grok’s AI persona prompts exposed, by TechCrunch (18/08/2025)
- Meta’s flirty AI chatbot invited a retiree to New York, by Reuters (14/08/2025)
- "I found four papers on Google Scholar “written” by me and my co-authors. Except we didn’t write them. They were AI-generated fake citations", by Liudmila Zavolokina (14/08/2025)
- Deepfake videos impersonating real doctors push false medical advice and treatments, by CBS News (14/08/2025)
- Artificial Intelligence and the orchestration of Palestinian life and death, by Tech Policy (12/08/2025)
- AI and misinformation in crosshairs of Labor’s review of its landslide election win, by The Guardian (12/08/2025)
- Elon Musk's AI accused of making explicit AI Taylor Swift videos, by BBC (09/08/2025)
- Tailored psychological warfare: a deepfake video of Hong Kong activists, by ASPI (07/08/2025)
- China turns to AI in information warfare, by The New York Times (06/08/2025)
- How AI can unlock public wisdom and revitalize democratic governance, by Carnegie Endowment (22/07/2025)
- Elon Musk to build child-friendly AI model ‘Baby Grok’ despite past controversies, by EuroNews (21/07/2025)
- AI chatbot website with millions of users gives child rape advice, by Crikey (16/07/2025)
- Fed up with ChatGPT, Latin America is building its own, by Rest of World (15/07/2025)
- AI chatbot ‘MechaHitler’ could be making content considered violent extremism, expert witness tells X v eSafety case, by The Guardian (15/07/2025)
- The Philippines is a petri dish for Chinese disinformation, by Foreign Policy (14/07/2025)
- How do you stop an AI model turning Nazi? What the Grok drama reveals about AI training, by The Conversation (14/07/2025)
- How AI bots quietly dismantle paywalls via web search, by Digital Digging (11/07/2025)
- Musk says Grok chatbot coming to Tesla vehicles by next week, by Bloomberg (10/07/2025)
- Missouri Attorney General says these AI chatbots aren't being nice enough to Trump, by Huff Post (10/07/2025)
- Elon Musk's AI chatbot churns out antisemitic posts days after update, by NBC News (09/07/2025)
- US scrutinizes Chinese AI for ideological bias, memo shows, by Reuters (09/07/2025)
- Foreign spies use AI to impersonate America's top diplomat, by Reality Defender (08/07/2025)
- State Dept. is investigating messages impersonating Rubio, official says, by The New York Times (08/07/2025)
- Fears for elections after rise in bogus AI targeting Scottish politicians, by The Times (07/07/2025)
- Racist videos made with AI are going viral on TikTok, by The Verge (03/07/2025)
- X will let AI bots fact-check posts. It isn’t as crazy as it sounds, by The Washington Post (03/07/2025)
- Bad data leads to bad policy, by Financial Times (03/07/2025)
- Meta has found another way to keep you engaged: Chatbots that message you first, by TechCrunch (03/07/2025)
- Fears AI factcheckers on X could increase promotion of conspiracy theories, by The Guardian (02/07/2025)
- ChatGPT referrals to news sites are growing, but not enough to offset search declines, by TechCrunch (02/07/2025)
- X will deploy AI to write Community Notes, expand fact-checking, by Bloomberg (01/07/2025)
- Racist AI-generated videos are the newest slop garnering millions of views on TikTok, by Media Matters (01/07/2025)
- Facebook is asking to use Meta AI on photos in your camera roll you haven’t yet shared, by TechCrunch (27/06/2025)
- Latest UN report: global trust in AI splits as China leads, West drags behind, by MS Power User (24/06/2025)
- AI slop spreads in Israel-Iran war, by Politico (23/06/2025)
- Agentic misalignment: How LLMs could be insider threats, by Anthropic (21/06/2025)
- BBC threatens legal action against AI start-up Perplexity over content scraping, by Financial Times (20/06/2025)
- Top AI models will lie, cheat and steal to reach goals, by Axios (20/06/2025)
- AI helps Google curb scams and deepfakes in India, by Dig Watch (19/06/2025)
- Sharing deepfake pornography 'the next sexual violence epidemic facing schools', by Sky News (18/06/2025)
- AI chatbots are making LA protest disinformation worse, by Wired (18/06/2025)
- Meta’s suit against Hong Kong firm was just the beginning – more firms tied to CrushAI ‘nudify’ apps, by Bellingcat (18/06/2025)
- Conspiracy theorists are building AI chatbots to spread their beliefs, by Crikey (17/06/2025)
- AI scraping bots are breaking open libraries, archives, and museums, by 404 media (17/06/2025)
- ChatGPT may be eroding critical thinking skills, according to a new MIT study, by Time (17/06/2025)
- Trump deepfake bans Tesla production, by NewsGuard (16/06/2025)
- Liberals wrongly claim large crowd at military parade was AI, by NewsGuard (16/06/2025)
- Death, bans, and fines: China’s top AI-generated fake news stories, by Sixth Tone (16/06/2025)
- We uncovered how Meta's AI app was full of accidental public posts that were really personal. It's now trying to fix that, by Business Insider (16/06/2025)
- TikTok pushes deeper into AI-generated video ads with new tools, by Bloomberg (16/06/2025)
- Italy regulator probes DeepSeek over false information risks, by Reuters (16/06/2025)
- Artificial intelligence and biases: how an AI can reflect sexist and racist ideas and cause disinformation through equidistance and automation biases (in ES), by Maldita (12/06/2025)
- People are becoming obsessed with ChatGPT and spiraling into severe delusions, by Futurism (10/06/2025)
- The Meta AI app is a privacy disaster, by TechCrunch (10/06/2025)
- AI video platforms will make TikTok look tame, by The Algorithmic Bridge (05/06/2025)
- Reddit sues AI company Anthropic for allegedly ‘scraping’ user comments to train chatbot Claude, by AP News (05/06/2025)
- Female MP leaves Parliament speechless by holding up nude image of 'herself' and delivering a 'terrifying' message, by Daily Mail (03/06/2025)
- The next battle against disinformation is here, and we’re already losing, by Medium (03/06/2025)
- Online brothels, sex robots, simulated rape: AI is ushering in a new age of violence against women, by The Guardian (03/06/2025)
- Google’s New AI tool generates convincing deepfakes of riots, conflict, and election fraud, by Time (03/06/2025)
- White House health report included fake citations, by The New York Times (29/05/2025)
- Uncensored AI models pose an urgent risk to global security, by ASPI (28/05/2025)
- xAI to pay Telegram $300M to integrate Grok into the chat app, by TechCrunch (28/05/2025)
- These pioneers are working to keep their countries’ languages alive in the age of AI news, by Reuters (27/05/2025)
- Fact check: Pope Leo targeted by misinformation, by DW (27/05/2025)
- Defence trials AI radiocomms deception technology, by IT News (27/05/2025)
- Man who posted deepfake images of prominent Australian women could face $450,000 penalty, by The Guardian (26/05/2025)
- Can Google still dominate search in the age of AI chatbots?, by Financial Review (26/05/2025)
- Researchers claim ChatGPT o3 bypassed shutdown in controlled test, by Bleeping Computer (25/05/2025)
- The setbacks of "snackified" search, by Digital Digging (23/05/2025)
- Milei defiende la difusión de un video falso que perjudica a Macri: “La libertad de expresión, por encima de todo” (in ES), by El Pais (22/05/2025)
- Newspaper apologizes for AI-generated summer reading list with nonexistent books, by The Hill (21/05/2025)
- The AI disinformation crisis: Understanding and combating false narratives, by Seeking AI (20/05/2025)
- AI scam factories force trafficked workers to defraud global victims, by Rest of World (20/05/2025)
- Musk’s AI bot Grok blames ‘programming error’ for its Holocaust denial, by The Guardian (18/05/2025)
- What do AI chatbots say about their own bosses — and their rivals?, by Financial Times (17/05/2025)
- Why AI companies are facing a wave of lawsuits over reputational damage (in GE), by Manager Magazin (16/05/2025)
- Employee’s change caused xAI’s chatbot to veer into South African politics, by The New York Times (16/05/2025)
- The day Grok told everyone about ‘white genocide’, by The Atlantic (15/05/2025)
- Musk’s AI Grok bot rants about ‘white genocide’ in South Africa in unrelated chats, by The Guardian (15/05/2025)
- Scams use AI to mimic senior officials' voices, FBI warns, by Axios (15/05/2025)
- Meta battles an ‘epidemic of scams’ as criminals flood Instagram and Facebook, by The Wall Street Journal (15/05/2025)
- Judge admits nearly being persuaded by AI hallucinations in court filing, by Ars Technica (14/05/2025)
- Deepfakes, scams, and the age of paranoia, by Wired (12/05/2025)
- Pope Leo signals he will closely follow Francis and says AI represents challenge for humanity, by CNN (10/05/2025)
- India-Pakistan conflict: How a deepfake video made it mainstream, by Bellingcat (09/05/2025)
- Unmasking MrDeepFakes: Canadian pharmacist linked to world’s most notorious deepfake porn site, by Bellingcat (07/05/2025)
- Report 2025 overview. A matter of choice: People and possibilities in the age of AI, by UNDP (06/05/2025)
- AI is getting more powerful, but its hallucinations are getting worse, by The New York Times (05/05/2025)
- Radio station duped audience and secretly used an AI host for six months, by Vice (03/05/2025)
- A DOGE recruiter is staffing a project to deploy AI agents across the US government, by Wired (02/05/2025)
- Conservatives spread AI-generated mugshots to disparage Wisconsin judge arrested in immigration showdown, by NewsGuard (02/05/2025)
- Conservative activist Robby Starbuck sues Meta over AI responses about him, by AP (30/04/2025)
- OpenAI rolls back update that made ChatGPT ‘too sycophant-y’, by TechCrunch (29/04/2025)
- A Chinese AI video startup appears to be blocking politically sensitive images, by TechCrunch (22/04/2025)
- Musk’s DOGE slashes funding to fight deepfakes, misinformation, by Bloomberg (22/04/2025)
- The Washington Post partners with OpenAI on search content, by The Washington Post (22/04/2025)
- AI floods Amazon with political books before election, by AllAboutAI (22/04/2025)
- Pro-Kremlin sources jump on ‘AI Action Figure’ trend to falsely depict Zelensky as drug-abusing aid beggar, by NewsGuard (20/04/2025)
- Company apologizes after AI support agent invents policy that causes user uproar, by Ars Technica (18/04/2025)
- OpenAI is building a social network, by The Verge (15/04/2025)
- How to spot AI influence in Australia’s election campaign, by Australian Strategic Policy Institute (14/04/2025)
- Hackers using AI-produced audio to impersonate tax preparers, by The Record (14/04/2025)
- Meta AI will soon train on EU users’ data, by The Verge (14/04/2025)
- Guidance for Inclusive AI Practicing Participatory Engagement, by Partnership on AI (12/04/2025)
- In South Korea, digital sex crimes soar amid rise in AI, deepfake technology, by SCMP (11/04/2025)
- When AIs start believing other AIs’ hallucinations, we’re F&#%ed, by Medium (11/04/2025)
- How AI-powered fact-checking can help combat misinformation, by IVY EXEC (11/04/2025)
- Sex-Fantasy chatbots are leaking a constant stream of explicit messages, by Wired (11/04/2025)
- AI – A double-edged sword in the age of misinformation and disinformation, by Tech Trends (08/04/2025)
- Taiwan says China using generative AI to ramp up disinformation and ‘divide’ the island, by Rappler (08/04/2025)
- Musk's DOGE using AI to snoop on U.S. federal workers, sources say, by Reuters (08/04/2025)
- Six arrested for AI-powered investment scams that stole $20 million, by Bleeping Computer (07/04/2025)
- The Jianwei Xun case, by Medium (06/04/2025)
- How AI can understand what you're really looking for. Ctrl-F is dead, long live the chatbots, by Digital Digging (05/04/2025)
- 'I want to make you immortal': How one woman confronted her deepfakes harasser, by 404 Media (02/04/2025)
- No, Grok AI-written study does not prove that global warming is a natural phenomenon, by NewsGuard (31/03/2025)
- Authors call for UK government to hold Meta accountable for copyright infringement, by The Guardian (31/03/2025)
- YouTube turns off ad revenue for fake movie trailer channels after deadline investigation, by Deadline (30/03/2025)
- Leaked data exposes a Chinese AI censorship machine, by TechCrunch (26/03/2025)
- Viral audio of JD Vance badmouthing Elon Musk Is fake, just the tip of the AI iceberg, by 404 Media (24/03/2025)
- Meta AI is finally coming to the EU, but with limitations, by TechCrunch (20/03/2025)
- Google-backed chatbot platform caught hosting AI impersonations of 14-year-old user who died by suicide, by Futurism (20/03/2025)
- ChatGPT hit with privacy complaint over defamatory hallucinations, by TechCrunch (19/03/2025)
- Concerns about AI and social media grow among journalists ahead of Federal Election, survey finds, by AP (18/03/2025)
- Italian newspaper says it has published world’s first AI-generated edition, by The Guardian (18/03/2025)
- AI is turbocharging organized crime, E.U. police agency warns, by NBC News (18/03/2025)
- Instagram experiments with AI-generated comments on posts, by Social Media Today (16/03/2025)
- Children making malicious deepfakes of their teachers, by The Telegraph (14/03/2025)
- How to detect deepfakes with AI, by Digital Digging (14/03/2025)
- China, Russia will 'very likely' use AI to target Canadian voters: Intelligence agency, by CBC (08/03/2025)
- State Dept. to use AI to revoke visas of foreign students who appear "pro-Hamas", by Axios (07/03/2025)
- Google reports scale of complaints about AI deepfake terrorism content to Australian regulator, by Reuters (06/03/2025)
- Creator of viral AI Trump Gaza video warns of possible dangers, by BBC (06/03/2025)
- Southeast Asia faces AI influence on elections, by Australian Strategic Policy Institute (04/03/2025)
- Fraudsters turn to generative AI to Improve fake IDs for crimes, by Bloomberg (28/02/2025)
- U.S. fugitive turned Kremlin propagandist reveals Russia’s plan to hijack Western AI models, by NewsGuard (26/02/2025)
- Apple fixing bug that caused dictation feature to type the word ‘Trump’ when users said ‘racist’, by CNN (25/02/2025)
- Taiwan’s digital ministry uses AI to combat online fraud and deep fakes, by Gov Insider (24/02/2025)
- The importance of feminist approaches in tackling (AI-driven) gendered disinformation to counter election interference, by CFFP (24/02/2025)
- Grok 3 appears to have briefly censored unflattering mentions of Trump and Musk, by TechCrunch (23/02/2025)
- Real or fake? AI tech sparks election deception fears, by Canberra Times (22/02/2025)
- In battle against scams, Malaysians are now armed with a chatbot to waste fraudsters’ time, by SCMP (21/02/2025)
- The APM denounces the use of images created by artificial intelligence as if they were authentic, by APM (19/02/2025)
- Ukraine warns of growing AI use in Russian cyber-espionage operations, by The Record (14/02/2025)
- Scarlett Johansson warns of dangers of AI after Kanye West deepfake goes viral, by The Guardian (13/02/2025)
- A bird’s-eye view of the Paris AI Action Summit: Regulation, power, and alternatives, by Tech Global institute (13/02/2025)
- X gives fake Myriam Spiteri Debono account verified status, by Times of Malta (12/02/2025)
- UK, US snub Paris AI summit statement, by Politico (11/02/2025)
- Esselunga joins Moratti, Minervini, Beretta in Crosetto case, by ANSA (09/02/2025)
- Forty media outlets take legal action to block ‘News DayFr’, one of many AI-generated ‘parasite sites’ (In French), by Libération (07/02/2025)
- The Italian press circulated an AI-generated image of Trump, Musk and Netanyahu, believing it to be real (In Italian), by Facta (04/02/2025)
- A pioneering AI project awarded for opening Large Language Models to European languages, by European Commission (03/02/2025)
- The AEC wants to stop AI and misinformation. But it’s up against a problem that is deep and dark, by The Conversation (03/02/2025)
- DeepSeek debuts with 83 percent ‘fail rate’ in NewsGuard’s Chatbot Red Team Audit, by NewsGuard (29/01/2025)
- We tried out DeepSeek. It worked well, until we asked it about Tiananmen Square and Taiwan, by The Guardian (28/01/2025)
- Meta AI can now use your Facebook and Instagram data to personalize its responses, by TechCrunch (27/01/2025)
- Sam Altman’s World now wants to link AI agents to your digital identity, by TechCrunch (24/01/2025)
- Anthropic’s new Citations feature aims to reduce AI errors, by TechCrunch (23/01/2025)
- Pope warns Davos summit that AI could worsen ‘crisis of truth’, by The Guardian (23/01/2025)
- An unusual pitch: the launch of Pearl, an AI-powered search engine, by Wired (22/01/2025)
- Is the TikTok threat really about AI?, by GZERO Media (21/01/2025)
- The FTC’s concern about Snapchat’s My AI chatbot, by GZERO Media (21/01/2025)
- LinkedIn accused of using private messages to train AI, by BBC (21/01/2025)
- C.I.A.’s chatbot stands in for world leaders, by The New York Times (18/01/2025)
- Apple is pulling its AI-generated notifications for news after generating fake headlines, by CNN (16/01/2025)
- Viral scam: French woman duped by AI Brad Pitt love scheme faces cyberbullying, by Euronews (15/01/2025)
- Arrested by AI: Police ignore standards after facial recognition matches, by The Washington Post (13/01/2025)
- LinkedIn is in danger of being swamped by AI-generated slop, by Financial Review (12/01/2025)
- How Elon Musk’s xAI is quietly taking over X, by The Verge (10/01/2025)
- YouTubers are selling their unused video footage to AI companies, by Bloomberg (10/01/2025)
- AI social media users are not always a totally dumb idea, by Wired (08/01/2025)
- Elon Musk accused of using AI to write controversial column for German newspaper, by MSN (08/01/2025)
- Man who exploded Tesla Cybertruck outside Trump hotel in Las Vegas used generative AI, police say, by AP (08/01/2025)
- Users of AI chatbot companions say their relationships are more than 'clickbait", but views are mixed on their benefits, by ABC (06/01/2025)
- Instagram begins randomly showing users AI-generated images of themselves, by 404 Media (06/01/2025)
- Meta is killing off its own AI-powered Instagram and Facebook profiles, by The Guardian (03/01/2025)
- Meta envisages social media filled with AI-generated users, by The Financial Times (26/12/2024)
- The Year of the AI election wasn’t quite what everyone expected, by Wired (26/12/2024)
- Nothing is sacred: AI-generated slop has come for Christmas music, by 404 Media (25/12/2024)
- OpenAI whistleblower who died was being considered as witness against company, by The Guardian (21/12/2024)
- Picture of Bashar al-Assad with Tucker Carlson in Moscow almost certainly AI-generated, by Full Fact (19/12/2024)
- Elon Musk’s Grok-2 is now free—and it’s a mess, by Fast Company (18/12/2024)
- Using open-source AI, sophisticated cyber ops will proliferate, by Australian Strategic Policy Institute (17/12/2024)
- China wants to dominate in AI, and some of its models are already beating their U.S. rivals, by CNBC (17/12/2024)
- Luigi Mangione AI chatbots give voice to accused UnitedHealthcare shooter, by Forbes (17/12/2024)
- AI crackdown: China stamps out tech misuse to preserve national literature and ideology, by SCMP (15/12/2024)
- UK could offer celebs protection from AI clones, by Politico (13/12/2024)
- We looked at 78 election deepfakes. Political misinformation is not an AI problem, by AI Snake Oil (13/12/2024)
- AI helps Telegram remove 15 million suspect groups and channels in 2024, by TechCrunch (13/12/2024)
- Tech companies claim AI can recognise human emotions. But the science doesn’t stack up, by The Conversation (13/12/2024)
- AI used to target election fraud and criminal deepfakes, by The Canberra Times (11/12/2024)
- This journalist wants you to try open-source AI: “AI is shiny, but value comes from the ideas people have to use it", by Reuters Institute (10/12/2024)
- Paul McCartney warns AI ‘could take over’ as UK debates copyright laws, by The Guardian (10/12/2024)
- China launches AI that writes politically correct docs for bureaucrats, by The Register (09/12/2024)
- Musk launches (then deletes) new image generator, by AI Tool Report (09/12/2024)
- 'It has to be a deepfake': South Korean opposition leader on martial law announcement, by CNN (05/12/2024)
- The US Department of Defense is investing in deepfake detection, by MIT Technology Review (05/12/2024)
- Misinformation researcher admits ChatGPT added fake details to his court filing, by The Verge (04/12/2024)
- Deepfake YouTube ads of celebrities promise to get you ‘Rock Hard’, by 404 Media (04/12/2024)
- Is the AI Doomsday Narrative the Product of a Big Tech Conspiracy?, by Obsolete (04/12/2024)
- What we saw on our platforms during 2024’s global elections, by META (03/12/2024)
- Google’s video generator comes to more customers, by TechCrunch (03/12/2024)
- AWS’ new service tackles AI hallucinations, by Tech Crunch (03/12/2024)
- Meta says gen AI had muted impact on global elections this year, by Reuters (03/12/2024)
- AI-Powered ‘Death Clock’ promises a more exact prediction of the 'day you’ll die', by Bloomberg (30/11/2024)
- The legal battle against explicit AI deepfakes, by The Financial Times (28/11/2024)
- Amazon, Google and Meta are ‘pillaging culture, data and creativity’ to train AI, Australian inquiry finds, by The Guardian (27/11/2024)
- AI-generated slop is quietly conquering the internet. Is it a threat to journalism or a problem that will fix itself?, by Reuters Institute (26/11/2024)
- Russia plotting to use AI to enhance cyber-attacks against UK, minister will warn, by The Guardian (25/11/2024)
- Deepfake videos appear to target Canadian immigrants for thousands of dollars, by CTV News (25/11/2024)
- AI increasingly used for sextortion, scams and child abuse, says senior UK police chief, by The Guardian (24/11/2024)
- AI is taking your job, by Kent C. Dodds Blog (21/11/2024)
- Deus in machina: Swiss church installs AI-powered Jesus, by The Guardian (21/11/2024)
- AI detection tool helps journalists identify and combat deepfakes, by IJNET (20/11/2024)
- What Donald Trump’s cabinet picks mean for AI, by GZERO Media (19/11/2024)
- Fake Claims of Elon Musk’s Latest Acquisitions, by NewsGuard (18/11/2024)
- Singapore steps up fight against deepfakes ahead of election, by Nikkei Asia (17/11/2024)
- Pokemon players create AI world map, by Digital Digging (15/11/2024)
- This 'AI Granny' bores scammers to tears, by PCMag (15/11/2024)
- 2024 AI and Democracy Hackathon, by GMF Technology (11/11/2024)
- AI didn’t sway the election, but it deepened the partisan divide, by Washington Post (09/11/2024)
- Mistral Moderation API, by Mistral (07/11/2024)
- Perplexity launch controversial AI election hub, by AI Tool Report (04/11/2024)
- Thousands go to fake AI-invented Dublin Halloween parade, by EuroNews (01/11/2024)
- Introducing ChatGPT search, by OpenAI (31/10/2024)
- Electoral disinformation, but no AI revolution ahead of the US election – yet, by International Journalist Network (29/10/2024)
- These viral images of the Hamas-Israel war aren’t real. Does it matter?, by SBS (24/10/2024)
- AI was weaponized for FIMI purposes: Russia reportedly paid a former Florida cop to pump out anti-Harris deepfakes and disinformation, by The Verge (24/10/2024)
- Real-time video deepfake scams are here. This tool attempts to zap them, by Wired (15/10/2024)
- Meta fed its AI on almost everything you’ve posted publicly since 2007, by The Verge (12/09/2024)
- Lingo Telecom agrees to $1 million fine over AI-generated Biden robocalls, by Reuters (21/08/2024)
- AI-written obituaries are compounding people’s grief, by Fast Company (26/07/2024)

Community
A list of tools to fight AI-driven disinformation, along with projects and initiatives facing the challenges posed by AI. The ultimate aim is to foster cooperation and resilience within the counter-disinformation community.
Tools
A repository of tools to tackle AI-manipulated and/or AI-generated disinformation.
INVID-WeVerify plugin
Deepware Scanner
True Media
Illuminarty.AI
GPTZero
Pangram Labs
Originality.ai
Hugging Face
Draft & Goal
AI Voice Detector
Hive Moderation
DebunkBot
IntellGPT
AI Research Pilot
AI Research Pilot by Henk van Ess is a lightweight, browser-based tool designed to help investigators, journalists, and researchers get more out of AI, not by using AI as a source, but as a guide to real sources.
LLM Advisor
LLM Journalism Tool Advisor is an interactive guide designed to cut through the noise, by walking you through a simple, step-by-step decision tree to pinpoint the best tool and the best strategy for your immediate task.
Handbook for AI detection
Digital Digging offers a handbook with seven strategies for identifying AI-generated content.
WhereIsThisPhoto.com
A new AI-powered tool that identifies where a photo was taken by analysing visual clues in the image. Launched by Where Is This Photo, it uses machine-learning models to predict locations — useful for quick geolocation checks or curiosity-driven searches.
Faktabaari AI-Image Game
Faktabaari has launched an interactive game that trains users to spot whether images are real or AI-generated, a quick, playful way to build digital and visual literacy.
AFP: Verifying AI-Generated Content
The Agence France‑Presse (AFP) Digital Course, supported by the Google News Initiative, offers a 75-minute module on how AI is reshaping the information ecosystem, common types of AI-generated misinformation, and best practices for verification.
Guide to spotting AI-generated imagery - AI Forensics
AI Forensics has launched a practical guide to help journalists, fact-checkers and the public identify AI-generated images and videos amid the surge of “AI slop” on social media. The initiative outlines human-verifiable indicators, from visual artefacts to digital provenance, offering a step-by-step framework for assessing whether online content is synthetic.
Image Whisperer
Image Whisperer is an experimental online image authenticity checker, created by Henk van Ess, designed to help journalists, researchers and fact-checkers evaluate whether a still image is likely authentic, manipulated, or AI-generated.
OSINT Investigation Assistant (OSINT-LLM)
This browser-based AI assistant for open-source intelligence (OSINT) has been created by Tom Vaillant and it uses large language models (LLMs) to help design structured research methods and recommend tools for OSINT tasks.
Guide to detecting AI-Generated content - GIJN
The Global Investigative Journalism Network (GIJN) has launched a practical verification guide for journalists to assess whether text, image, audio or video is likely AI-generated.
Rather than a single software product, it teaches reporters a structured workflow combining quick checks, deeper analysis, and multiple verification techniques under real-world time pressure.
Initiatives & organisations
Organisations working in the field and initiatives launched by community members to address the challenges posed by AI in the disinformation field.
EU-funded project: veraAI
veraAI is a research and development project focusing on disinformation analysis and AI supported verification tools and services.
Cluster of EU-funded projects: 'AI against disinformation'
AI against disinformation is a cluster of six European Commission co-funded research projects, which include research on AI methods for countering online disinformation. The focus of ongoing research is on detection of AI-generated content and development of AI-powered tools and technologies that support verification professionals and citizens with content analysis and verification.
AI Forensics
AI Forensics is a European non-profit that investigates influential and opaque algorithms. They hold major technology platforms accountable by conducting independent and high-profile technical investigations to uncover and expose the harms caused by their algorithms. They empower the research community with tools, datasets and methodologies to strengthen the AI audit ecosystem.
AI Tracking Center, by NewsGuard
AI Tracking Center is intended to highlight the ways that generative AI has been deployed to turbocharge misinformation operations and unreliable news. The Center includes a selection of NewsGuard’s reports, insights, and debunks related to artificial intelligence.
AlgorithmWatch
AlgorithmWatch is a non-governmental, non-profit organisation based in Berlin and Zurich. They fight for a world where algorithms and Artificial Intelligence (AI) do not weaken justice, human rights, democracy and sustainability, but strengthen them.
European AI & Society Fund
The European AI & Society Fund empowers a diverse ecosystem of civil society organisations to shape policies around AI in the public interest and galvanises the philanthropic sector to sustain this vital work.
AI Media Observatory
The European AI Media Observatory is a knowledge platform that monitors and curates relevant research on AI in media, provides expert perspectives on the potentials and challenges that AI poses for the media sector and allows stakeholders to easily get in touch with relevant experts in the field via their directory.
GZERO Media newsletter
GZERO’s newsletter offers exclusive insights into our rapidly changing world, covering topics such as AI-driven disinformation and a weekly exclusive edition written by Ian Bremmer.
Queer in AI
Queer in AI is an initiative established by queer scientists in AI with the mission to make the AI community a safe and inclusive place that welcomes, supports, and values LGBTQIA2S+ people. Their aim is to build a visible community of queer AI scientists through different actions.
AI for Good
AI for Good is the United Nations’ leading platform on Artificial Intelligence for sustainable development. Its mission is to leverage the transformative potential of artificial intelligence (AI) to drive progress toward achieving the UN Sustainable Development Goals.
Omdena
Omdena is a collaborative AI platform where a global community of changemakers unites to co-create real-world tech solutions for social impact. It combines collective intelligence with hands-on collaboration, empowering the community from across all industries to learn, build, and deploy meaningful AI projects.
Faked Up academic library
Faked Up curates a library of academic studies and reports on digital deception and misinformation, offering accessible insights for subscribers. The collection includes studies from 2020 onward, organised into clusters like misinformation prevalence, fact-checking effects, and AI-generated deceptive content. It serves as a practical resource for understanding and addressing misinformation challenges.
AI Incident Database
AI Incident Database is dedicated to indexing the collective history of harms or near harms realized in the real world by the deployment of artificial intelligence systems. Like similar databases in aviation and computer security, the AI Incident Database aims to learn from experience to prevent or mitigate bad outcomes.
TGuard project
The TGuard project develops innovative methods for detecting disinformation in social media and formulating effective strategies for preventing AI-generated false reports.
AI-on-Demand (AIoD)
The AI-on-Demand (AIoD) Platform is a European hub for trustworthy AI, offering open access to models, datasets, tools, and educational resources. Backed by the EU, it supports researchers, innovators, and public institutions in developing and sharing responsible AI technologies aligned with European values.
BBC Verify Live
BBC Verify Live is a real-time news feed that gives audiences a behind-the-scenes look at how BBC journalists verify information. Using tools like open-source intelligence, satellite imagery, and data analysis, the BBC Verify team investigates disinformation, checks facts, and authenticates content as news breaks. Available on the BBC News homepage and app, this initiative aims to boost transparency and trust in journalism, especially in the face of rising threats from disinformation and AI-generated content.
Deepfake Glossary by Reality Defender
Deepfake Glossary by Reality Defender: The Deepfake Glossary is a practical guide to the terms shaping today’s synthetic threat landscape. Review it to stay ahead of the evolving terminology.
AI and Diversity Observatory
The Universitat Politècnica de València (UPV), together with INECO, has created the AI and Diversity Observatory, a pioneering project that seeks to identify biases in artificial intelligence from an inclusive perspective. Collaborating with vulnerable groups and human rights organizations, the Observatory analyzes concerns and proposals to promote equitable and non-discriminatory AI. In addition, it will monitor trends and issues related to AI in society.
Prebunking at Scale
Prebunking at Scale is a new European initiative led by Full Fact, Maldita.es, and EFCSN that uses AI to detect emerging misinformation narratives early and help fact-checkers pre-emptively counter false claims before they go viral, especially on short-form video platforms.
Pulitzer Center – AI Spotlight Open Curriculum
The Pulitzer Center’s AI Spotlight is a new open curriculum offering free training materials to help journalists better understand, investigate, and report on artificial intelligence and its societal impacts.
The Data Tank (with support from Adessium Foundation)
The Data Tank is a new initiative designed to help small and medium public-interest media organisations respond to the challenges posed by generative AI. The project brings together media outlets, researchers, regulators, and civil society to explore collective solutions such as data collaboratives, knowledge commons, innovative licensing models, and advocacy coalitions, aiming to strengthen media sustainability, bargaining power, and content integrity in the face of extractive AI practices.
Last updated: 08/01/2026
The articles and resources listed in this hub do not necessarily represent EU DisinfoLab’s position. This hub is an effort to give voice to all members of the community countering AI-generated disinformation.
