
AI Disinfo Hub
The development of artificial intelligence (AI) technologies has long posed challenges for the disinformation field, enabling the manipulation of content and accelerating its spread. Recent technical developments have exponentially increased these challenges. While AI offers opportunities for legitimate purposes, AI-generated content is also widely produced and disseminated across the internet, causing – intentionally or not – harm and deception.
Are you more into podcasts and video content? You will find a repository of podcasts and webinars in AI Disinfo Multimedia, while AI Disinfo in Depth will feature research reports from academia and civil society organisations. This section will also cover the burning questions related to the regulation of AI technologies and their use. In addition, the community working at the intersection of AI and disinformation will have a dedicated space where initiatives, resources, and useful tools will be listed.
In short, this hub is your go-to resource for understanding the impact of AI on disinformation and finding ways to combat it.
Here, researchers, policymakers, and the public can access reliable tools and insights to navigate this complex landscape. Together, we’re building a community to tackle these challenges head-on, promoting awareness and digital literacy.
Join us in the fight against AI-driven disinformation. Follow us and share with the community!

NEURAL NEWS & TRENDS
We've curated a selection of articles from external sources that delve into the topic from different perspectives. Keep exploring the latest news and publications on AI and disinformation!
News
Google’s SynthID is the latest tool for catching AI-made content. What is AI ‘watermarking’ and does it work? (The Conversation, 03/06/2025)
The Conversation: In May, Google introduced SynthID Detector, a tool designed to identify AI-generated content across text, images, video, and audio. However, there are important limitations. The tool is mainly effective for content produced using Google’s own AI systems, like Gemini (text), Veo (video), Imagen (images), or Lyria (audio). It won’t reliably detect content created with non-Google tools, such as ChatGPT. That’s because SynthID doesn’t actually detect AI-generated content in general; it can only recognize specific markers embedded by Google’s own AI models.
Online brothels, sex robots, simulated rape: AI is ushering in a new age of violence against women (The Guardian, 03/06/2025)
The Guardian: This article explores how artificial intelligence is fuelling a disturbing rise in digital misogyny, creating new forms of violence against women and girls. It reveals how AI is being used to build online “brothels”, generate simulated child abuse, and develop sex robots with features designed to mimic rape. The piece argues that the unchecked development of AI threatens to embed gender inequality more deeply into society, especially as men remain its dominant users and beneficiaries.
Google’s New AI tool generates convincing deepfakes of riots, conflict, and election fraud (Time, 03/06/2025)
Time: This investigation by TIME and several tech watchdogs reveals that Google’s AI tool Veo 3 can create realistic deepfake videos containing misleading or inflammatory depictions of news events. Despite some safeguards, the tool was able to generate clips such as a Pakistani crowd setting fire to a Hindu temple, Chinese researchers handling a bat in a wet lab, an election worker shredding ballots, and Palestinians gratefully accepting U.S. aid in Gaza. Experts warn that, if shared on social media in the heat of a breaking news event, these videos could conceivably fuel social unrest or violence.
Leaked files reveal how China is using AI to erase the history of the Tiananmen Square massacre (ABC, 02/06/2025)
ABC: Leaked documents reveal that China employs advanced AI technologies alongside human censors to systematically erase public memory of the 1989 Tiananmen Square massacre. The censorship system uses machine learning to detect not only direct references but also symbolic imagery, such as sequences resembling the iconic “Tank Man” photo, even if disguised with everyday objects like bananas and apples.
Hey chatbot, is this true? AI 'factchecks' sow misinformation (France 24, 02/06/2025)
France 24: AI chatbots, increasingly relied upon for instant fact-checking, have been shown to frequently spread misinformation rather than correct it. During India’s recent conflict with Pakistan, these tools wrongly identified unrelated video footage as military strikes, fueling confusion. Beyond this, investigations revealed that chatbots sometimes fabricate details, as when an AI-generated image of a woman was falsely confirmed as authentic by a chatbot in Uruguay. The decline in human fact-checkers at major tech platforms has exacerbated the problem, raising concerns about the reliability, political bias, and manipulation of AI-powered fact-checking tools.
Generative AI used to copy and clone French news media in French-speaking Africa (Reporters Without Borders, 02/06/2025)
Reporters Without Borders: RSF has raised the alarm over the growing use of generative AI to impersonate trusted French media outlets in French-speaking African countries. Recent deepfakes and synthetic audio clips mimicking journalists from Radio France Internationale (RFI) and France 24 have circulated widely on platforms like WhatsApp and TikTok, misleading the public with fabricated news.
TRIED: Truly Innovative and Effective AI Detection Benchmark, developed by WITNESS (Arxiv, 30/05/2025)
Arxiv: WITNESS has developed the Truly Innovative and Effective AI Detection (TRIED) Benchmark, which provides a comprehensive framework for assessing AI detection tools through a sociotechnical lens, emphasizing their effectiveness in real-world scenarios and their usefulness to critical information stakeholders. Shaped by input from communities, case studies of deceptive AI handled by the WITNESS Deepfakes Rapid Response Force, and international consultations, the benchmark delivers practical guidance and concrete recommendations to help develop, improve, and promote robust, forward-looking detection technologies with global relevance.
White House health report included fake citations (The New York Times, 29/05/2025)
The New York Times: The Trump administration’s Make America Healthy Again Commission unveiled a report last week that it claimed would provide an evidence-based approach to children’s health policy. However, the report referenced studies that don’t actually exist, covering topics like drug advertising, mental health, and asthma treatments. According to Dr. Ivan Oransky, a medical journalism professor at NYU, the inaccuracies are strikingly similar to the kinds of mistakes commonly seen in content generated by AI systems.
Weaponized storytelling: How AI is helping researchers sniff out disinformation campaigns (The Conversation & Florida International University, 29/05/2025)
The Conversation & Florida International University: As AI-generated disinformation grows more sophisticated, researchers at Florida International University are developing tools to fight back using the same technology. By teaching AI to analyse narratives, identifying storytellers, cultural cues, and timelines, the team is helping uncover how false stories spread and take root. From fake election videos to culturally tailored propaganda, the study highlights the power of storytelling in persuasion and the urgent need for culturally literate, narrative-aware AI systems to detect and counter digital influence campaigns.
A weaponized AI chatbot Is flooding Canadian City Councils with climate misinformation (DeSmog, 28/05/2025)
DeSmog: A group called KICLEI, mimicking the international environmental network ICLEI, has been sending thousands of AI-generated emails to over 500 Canadian municipalities, urging councils to abandon net-zero climate targets. Using a custom AI chatbot dubbed the “Canadian Civic Advisor,” KICLEI crafts tailored messages that downplay climate change, focus on “real pollution, not CO2,” and cast doubt on the scientific consensus. Several municipalities, including Thorold, Ontario and Lethbridge, Alberta, have already voted to weaken or withdraw from key climate initiatives after receiving KICLEI materials. Scientists have labelled many of KICLEI’s claims as misinformation, while the group denies spreading falsehoods.
The coming AI backlash will shape future regulation (Brookings, 27/05/2025)
Brookings: Technology firms and their executives have increasingly embedded themselves within the US federal government, gaining greater access to confidential data and benefiting from a loosening of earlier AI regulations. While some tech leaders argue that AI doesn’t require strict oversight, growing public concern and real-world issues, such as privacy violations, biased algorithms, and security vulnerabilities, highlight the urgent need for thoughtful governance. History shows that when new technologies spark public unease, pressure builds for government action, making openness and accountability crucial to sustaining trust and ensuring the industry’s future stability.
Defence trials AI radiocomms deception technology (IT News, 27/05/2025)
IT News: A new application for AI-driven deception has been identified and tested: the Australian Army is experimenting with the so-called TrapRadio, a system leveraging artificial intelligence to generate fake radio signals that imitate the behavior and patterns of important communications to confuse adversaries and safeguard frontline troops.
Can Google still dominate search in the age of AI chatbots? (Financial Review, 26/05/2025)
Financial Review: The development of artificial intelligence and AI chatbots has upended the dominance of search engines such as Google. For the first time in decades, the tech giant’s once-unshakable monopoly faces real competition, and the question at stake is whether Google will be able to maintain its dominance in online search at a time when AI chatbots are redefining how people access information.
Newspaper apologizes for AI-generated summer reading list with nonexistent books (The Hill, 21/05/2025)
The Hill: The Chicago Sun-Times published an AI-generated summer reading list containing entirely fictional book titles, which appeared in both the online and print editions without any editorial oversight. The list quickly drew criticism and ridicule from readers who noticed the fake entries. The newspaper has since acknowledged the error, admitting it failed to review the content. What seems like a minor anecdote calls into question the use of artificial intelligence technologies in journalism without human oversight.
Deepfakes just got even harder to detect: Now they have heartbeats (BBC, 30/04/2025)
BBC: Deepfakes have advanced in a critical area that could make them significantly harder to detect. A new study published in Frontiers in Imaging reveals that synthetic videos are now capable of replicating realistic pulse signals in human bodies—biological cues whose absence was previously used to identify fakes. This development may render many existing detection tools less effective. Experts warn that this breakthrough could further erode public trust in visual media, and emphasize the need for cryptographic authentication methods, not just more advanced detectors, as a long-term defense strategy.
Events, jobs & announcements
Event. 16-17 June 2025, in Paris: The Paris Conference on AI & Digital Ethics
The Paris Conference on AI & Digital Ethics (PCAIDE 2025) will take place on June 16-17 at Sorbonne University, Paris. This cross-disciplinary event brings together academics, industry leaders, civil society, and political stakeholders to discuss the ethical, societal, and political implications of AI and digital technologies. PCAIDE offers a unique platform for experts to engage in open dialogue and collaborate on addressing key issues in the development of sociotechnical systems.
Event. 8-11 July 2025 in Geneva: AI for Good Global Summit
The AI for Good Global Summit 2025 will be held from 8 to 11 July in Geneva. This leading UN event on AI brings together top names in AI, with a high-level lineup of global decision makers. Its goal is to identify practical applications of AI, accelerate progress towards the UN SDGs and scale solutions for global impact.
Event. 14-18 July 2025 in Thessaloniki and online: AIDA Symposium and Summer School on ‘AI/ML Cutting Edge Trends’
From July 14-18, 2025, the AIDA Symposium and Summer School will explore the latest in AI and ML. Co-organised by AIDA and Aristotle University of Thessaloniki, this hybrid event offers expert-led lectures, special sessions, and hands-on tutorials.
Job: Technical & non-technical roles
The UK’s AI Safety Institute is recruiting for multiple roles in research, engineering, strategy, and operations. As part of a high-impact initiative focused on AI governance, successful candidates will contribute to critical work in a fast-paced, interdisciplinary environment alongside leading experts.

AI & Disinfo Multimedia
A collection of webinars and podcasts from us and the wider community, dedicated to countering AI-generated disinformation.
Webinars
Our own and community webinar collection exploring the intersections of AI and disinformation
- This is what happens when you let Elon Musk build an AI, with Nolan Higdon and Sydney Sullivan. Hosted by The disinfo detox (20/05/2025)
- LLM grooming: a new strategy to weaponise AI for FIMI purposes, with Sophia Freuden (The American Sunlight Project). Hosted by EU DisinfoLab (10/04/2025)
- Melodies of malice: Understanding how AI fuels the creation and spread of extremist music, with Heron Lopes (UCDP). Hosted by EU DisinfoLab (06/03/2025)
- Safeguarding Australian elections: Addressing AI-enabled disinformation, with Kate Seward (Microsoft ANZ), Antonio Spinelli (International IDEA) and Sam Stockwell (CETaS). Hosted by ASPI (06/02/2025)
- Faking It – Information Integrity, AI and the Law (Global Game Changers Series), with Monica Attard and Michael Davis (UTS), Creina Chapman (ACMA), Cullen Jennings (Cisco Systems) and Jason M Schultz (Canva). Hosted by University of Technology Sydney (29/11/2024)
- AI and Disinformation: A legal perspective, with Noémie Krack (KU Leuven). Hosted by EU DisinfoLab (07/11/2024)
- Generative AI and Geopolitical Disruption, with Corneliu Bjola (Oxford Internet Institute), Antonio Estella and Maria Dolores Sanchez Galera (Carlos III University), Peter Pijpers (Netherlands Defence Academy), Michael Zinkanell (Austrian Institute for European and Security Policy), and Gregory Smith (RAND Corporation). Hosted by Solaris (25/10/2024)
- DisinfoCon 2024 - Taking stock of Information Integrity in the Age of AI, with Carl Miller (Center for Analysis of Social Media at Demos). Hosted by Democracy Reporting International (26/09/2024)
- Advancing synthetic media detection: introducing veraAI, with Akis (Symeon) Papadopoulos (Centre for Research and Technology Hellas – Information Technologies Institute). Hosted by EU DisinfoLab (29/08/2024)
- Using Generative AI for the production, spread, and detection of disinformation – latest insights and innovations, with Kalina Bontcheva (University of Sheffield). Hosted by EU DisinfoLab (27/06/2024)
- Beyond Deepfakes: AI-related risks for elections, with Sophie Murphy Byrne (Logically). Hosted by EU DisinfoLab (30/05/2024)
- The Top 9 AI Breakthroughs of 2024 (You Won’t Believe Are Real). By AI Uncovered (08/11/2024)
- Tools and techniques for using AI in digital investigations, with Craig Silverman (ProPublica). Hosted by EU DisinfoLab (25/04/2024)
- OSINT & AI: Advanced Analysis, with Ivan Kravtsov (Social Links) and Gary Ruddell (Independent Cyber Threat Intelligence Professional). Hosted by Social Links (16/11/2023)
Podcasts
Community podcasts exploring the intersections of AI and disinformation
- Is technological progress always good? Hosted by Responsible bytes (02/04/2025)
- AI Is transforming geopolitics. Hosted by New Lines Magazine (21/02/2025)
- The rise of DeepSeek, the Chinese AI chatbot making waves in tech. Hosted by Teka Teka (19/02/2025)
- Privacy, digital rights, AI and the law. Hosted by Technology & Security (17/02/2025)
- How DeepSeek controls the conversation. Hosted by Digital Digging (29/01/2025)
- AI regulation and risk management in 2024. Hosted by The AI in business Podcast (21/01/2025)
- The case for human-centered AI. Hosted by McKinsey Digital (20/12/2024)
- Destination Deception 2025. Hosted by Faked Up (18/12/2024)
- What is AI slop and did it lead to a Halloween parade hoax in Dublin? Hosted by The Explainer (05/11/2024)
- Beyond the ballot: Misinformation, trust and truth in elections. Hosted by The National Security Podcast (24/10/2024)
- Do not "summarize this"! Episode 4: improve prompts to get a better summary. Hosted by Digital Digging (28/09/2024)
- How to detect fake AI-texts, episode 1 of podcast series on AI & Research. Hosted by Digital Digging (17/09/2024)
- Moderating Global Voices. Hosted by Decoding Hate (10/02/2021)

AI Disinfo in Depth
A repository of research papers and reports from academia and civil society organisations, alongside articles addressing key questions related to the regulation of AI technologies and their use. It also features a collection of miscellaneous readings.
Research
A compact yet potent library of research exploring the intersection of AI and disinformation
- AI job recruitment tools could 'enable discrimination' against marginalised groups, research finds, by ABC News (07/05/2025)
- Synthetic propaganda, by Marcus Boesch (05/05/2025)
- How Russia is using Gaelic and AI to peddle disinformation in Scotland, by The Times (03/05/2025)
- Why does AI hinder democratization?, by PNAS (03/05/2025)
- Pro-Russian influence operation targeting Australia in lead-up to election with attempt to 'poison' AI chatbots, by ABC (02/05/2025)
- Disasters and disinformation: AI and the Myanmar 7.7 Magnitude Earthquake, by RSIS (01/05/2025)
- Americans largely foresee AI having negative effects on news and journalists, by Pew Research Center (28/04/2025)
- Operating multi-client influence networks across platforms, by Anthropic (23/04/2025)
- AI is inherently ageist. That’s not just unethical – it can be costly for workers and businesses, by The Conversation (22/04/2025)
- Values in the wild: Discovering and analyzing values in real-world language model interactions, by Anthropic (21/04/2025)
- False face: Unit 42 demonstrates the alarming ease of synthetic identity creation, by Unit 42 (21/04/2025)
- Russian propaganda campaign targets France with AI-fabricated scandals, drawing 55 million views on social media, by Newsguard (17/04/2025)
- OpenAI’s new reasoning AI models hallucinate more, by Tech Crunch (17/04/2025)
- Russia’s use of genAI in disinformation and cyber influence: Strategy, use cases and future expectations, by CRC (13/04/2025)
- LLMs pass the Turing Test. But that doesn’t mean AI is now as smart as humans, by The Conversation (08/04/2025)
- What we learned from tracking AI use in global elections, by Rest of World (08/04/2025)
- Emotional prompting amplifies disinformation generation in AI large language models, by Frontiers (07/04/2025)
- AI Index 2025: State of AI in 10 Charts, by HAI Stanford University (07/04/2025)
- OpenAI’s Sora is plagued by sexist, racist, and ableist biases, by Wired (23/03/2025)
- AI’s answers on China differ depending on the language, analysis finds, by Tech Crunch (20/03/2025)
- Users turning to ChatGPT for news may find misinformation in responses, by Logically Facts (18/03/2025)
- Deepfake detectors vulnerable ahead of election, by InnovationAus (13/03/2025)
- Russia-linked Pravda network cited on Wikipedia, LLMs, and X, by DFRLab (12/03/2025)
- Urgent action is needed to secure the UK’s AI research ecosystem against hostile state threats, by The Alan Turing Institute (07/03/2025)
- A well-funded Moscow-based global ‘news’ network has infected Western artificial intelligence tools worldwide with Russian propaganda, by Newsguard (06/03/2025)
- Chinese AI video generators unleash a flood of new nonconsensual porn, by 404 Media (06/03/2025)
- AI search has a citation problem, by Columbia Journalism Review (06/03/2025)
- An AI slop "science" site has been beating real publications in Google results by publishing fake images of SpaceX Rockets, by Futurism (06/03/2025)
- Character flaws, by Graphika (05/03/2025)
- Hybrid threats and the amplifying power of AI: Five strategic scenarios, by Alto Intelligence (01/03/2025)
- Towards a common reporting framework for AI incidents, by OECD (28/02/2025)
- Microsoft outs hackers behind tools to bypass generative AI guardrails, by Bloomberg (27/02/2025)
- The smarter AI gets, the more it starts cheating when it's losing, by The Byte (22/02/2025)
- Disrupting malicious uses of AI, by Open AI (21/02/2025)
- Deepfake threat: Only 0.1% can spot AI-generated fakes, by Security Brief (19/02/2025)
- Grok’s responses to questions on the German elections were mostly accurate and relied heavily on media sources, by Reuters Institute (19/02/2025)
- How 35 YouTube channels spread disinformation using AI about Spanish and European politics, by Maldita (14/02/2025)
- Inconsistent and unreliable: Chatbots provide inaccurate information on German elections, by Democracy Reporting International (12/02/2025)
- Representation of BBC News content in AI assistants, by BBC (11/02/2025)
- An adviser to Elon Musk’s xAI has a way to make AI more like Donald Trump, by Wired (11/02/2025)
- Red-teaming in the public interest, by Data & Society (09/02/2025)
- AI misinformation monitor of leading AI chatbots multilingual edition, by Newsguard (07/02/2025)
- Challenges and opportunities of AI in the fight against information manipulation, by VIGNIUM (07/02/2025)
- Search Google Maps with the help of AI, by Digital Digging (06/02/2025)
- Rechts, weiblich, Fake [Right-wing, female, fake], by Tagesschau (05/02/2025)
- Russian propaganda may be flooding AI models, by American Sunlight (01/02/2025)
- AI-Generated Disinformation in Europe and Africa, by KAS (31/01/2025)
- Scammers are creating fake news videos to blackmail victims, by Wired (27/01/2025)
- Russian propagandist turns his sights to German election, by Reuters (23/01/2025)
- Greenwashing and bothsidesism in AI chatbot answers about fossil fuels' role in climate change, by Global Witness (22/01/2025)
- Knowing less about AI makes people more open to having it in their lives, by The Conversation (20/01/2025)
- AI isn’t very good at history, by Tech Crunch (19/01/2025)
- A fact-checking tool based on Artificial Intelligence to fight disinformation on Telegram, by Universidad de Navarra (12/01/2025)
- Apple urged to withdraw 'out of control' AI news alerts, by BBC (07/01/2025)
- AI could usher in a golden age of research – but only if these cutting-edge tools aren’t restricted to a few major private companies, by The Conversation (06/01/2025)
- These defenders of democracy do not exist, by Conspirador Norteño (05/01/2025)
- An AI-Powered Audit: Do Chatbots Reproduce Political Pluralism?, by Democracy Reporting International (27/12/2024)
- ChatGPT search tool vulnerable to manipulation and deception, tests show, by The Guardian (24/12/2024)
- Predictions for AI in 2025: Collaborative agents, AI skepticism, and new risks, by Stanford University (23/12/2024)
- Bridging the data provenance gap across text, speech and video, by arXiv:2412.17847 (19/12/2024)
- Fake AI versions of world-renowned academics are spreading claims that Ukraine should surrender to Russia, by The Insider (13/12/2024)
- ElevenLabs used for Russian propaganda, by AI Tool Report (11/12/2024)
- AI enters Congress: Sexually explicit deepfakes target women lawmakers, by The 19th News (11/12/2024)
- Melodies of malice: Understanding how AI fuels the creation and spread of extremist music, by GNET (11/12/2024)
- Scottish Parliament TV at risk of deepfake attacks, by Infosecurity (10/12/2024)
- Revealed: bias found in AI system used to detect UK benefits fraud, by The Guardian (06/12/2024)
- Evaluating Large Language Models capability to launch fully automated spear phishing campaigns: Validated on human subjects, by arXiv (30/11/2024)
- Study of ChatGPT citations makes dismal reading for publishers, by Tech Crunch (29/11/2024)
- How ChatGPT Search (mis)represents publisher content, by Columbia Journalism Review (27/11/2024)
- Persuasive technologies in China: implications for the future of national security, by Australian Strategic Policy Institute (26/11/2024)
- "Operation Undercut" shows multifaceted nature of SDA’s influence operations, by Recorded Future (26/11/2024)
- Philippines, China clashes trigger money-making disinformation, by France24 (26/11/2024)
- Not even Spotify is safe from AI slop, by The Verge (14/11/2024)
- AI-enabled influence operations: Safeguarding future elections, by Cetas (13/11/2024)
- Disconnected from reality: American voters grapple with AI and flawed OSINT strategies, by ISD (07/11/2024)
- AI hallucinations caused artificial intelligence to falsely describe these people as criminals, by ABC News (03/11/2024)
- Exploiting Meta’s weaknesses, deceptive political ads thrived on Facebook and Instagram in run-up to election, by Pro Publica (31/10/2024)
- "Say it’s only fictional”: How the far-right is jailbreaking AI and what can be done about it, by ICCT (30/10/2024)
- How X users can earn thousands from US election misinformation and AI images, by BBC (30/10/2024)
- Hospitals use a transcription tool powered by an error-prone OpenAI model, by The Verge (28/10/2024)
- Researchers say an AI-powered transcription tool used in hospitals invents things no one ever said, by AP news (26/10/2024)
- GenAI and Democracy, by DSET (25/10/2024)
- Prebunking elections rumors: Artificial Intelligence assisted interventions increase confidence in American elections, by California Institute of Technology, Washington University in St. Louis, Cambridge University (24/10/2024)
- Large Language Models reflect the ideology of their creators, by arXiv (24/10/2024)
- Amazon Alexa users given false information attributed to Full Fact’s fact checks, by Full Fact (17/10/2024)
- Ensuring AI accountability: Auditing methods to mitigate the risks of Large Language Models, by Democracy Reporting International (14/10/2024)
- Pig butchering scams are going high tech, by Wired (12/10/2024)
- An update on disrupting deceptive uses of AI, by OpenAI (09/10/2024)
- Generative Artificial Intelligence and elections, by Center for Media Engagement (03/10/2024)
- Grok AI: A deepfake disinformation disaster for democracy, by CCDH (29/08/2024)
- OpenAI blocks AI propaganda, by AI Tool Report (19/08/2024)
- Disrupting deceptive uses of AI by covert influence operations, by OpenAI (30/05/2024)
- AI-pocalypse Now? Disinformation, AI, and the super election year, by MSC (01/04/2024)
About policy & regulations
A look at regulation and policies implemented on AI and disinformation
- Meta reportedly replacing human risk assessors with AI, by Mashable (01/06/2025)
- Governing AI and the democratisation of governance, by Hintz, A. Dialogues on Digital Society (30/05/2025)
- Nick Clegg says asking artists for use permission would ‘kill’ the AI industry, by The Verge (26/05/2025)
- German rights group fails in bid to stop Meta's data use for AI, by Reuters (23/05/2025)
- President Trump signs TAKE IT DOWN Act into Law, by The White House (19/05/2025)
- Tech workers, teachers, artists oppose AI preemption measure, by Demand Progress (19/05/2025)
- OpenAI Launches AI Safety Evaluations Hub Amid GPT-4o Controversy: Transparency or PR Strategy?, by Medium (15/05/2025)
- Trump administration fires top copyright official days after firing Librarian of Congress, by AP (12/05/2025)
- Trump fires director of U.S. Copyright Office, sources say, by CBS News (10/05/2025)
- Introducing Gen AI labels: Pinterest is taking a new step in transparency, by Pinterest (30/04/2025)
- House approves Take It Down Act, sending bill on intimate images to Trump’s desk, by The 19th News (28/04/2025)
- Musk’s X sues to block Minnesota ‘deepfake’ law over free speech concerns, by CNBC (23/04/2025)
- Google used AI to suspend over 39M ad accounts suspected of fraud, by Tech Crunch (16/04/2025)
- OpenAI updated its safety framework—but no longer sees mass manipulation and disinformation as a critical risk, by Fortune (16/04/2025)
- ChatGPT now lets users create fake images of politicians. We stress-tested it, by CBC (13/04/2025)
- YouTube supports the NO FAKES Act: Protecting creators and viewers in the age of AI, by YouTube (09/04/2025)
- The Dangers of AI Sovereignty, by Lawfare (07/04/2025)
- Google is shipping Gemini models faster than its AI safety reports, by Tech Crunch (03/04/2025)
- UK needs to relax AI laws or risk transatlantic ties, thinktank warns, by The Guardian (02/04/2025)
- Protecting the polls in the era of AI and deepfakes, by Microsoft (01/04/2025)
- OpenAI peels back ChatGPT’s safeguards around image creation, by Tech Crunch (28/03/2025)
- Meta to seek disclosure on political ads that use AI ahead of Canada elections, by Reuters (20/03/2025)
- Vance outlines an America first, America only AI agenda, by Lawfare (19/03/2025)
- China mandates labels for all AI-generated content in fresh push against fraud, fake news, by SCMP (15/03/2025)
- Under Trump, AI scientists are told to remove ‘ideological bias’ from powerful models, by Wired (14/03/2025)
- OpenAI urges Trump administration to remove guardrails for the industry, by CNBC (13/03/2025)
- Spain to impose massive fines for not labelling AI-generated content, by Reuters (11/03/2025)
- The AI regulation debate in China is on a whole different level, by Raymond Sun (10/03/2025)
- Meta brings its anti-scam facial-recognition test to the UK and Europe, by Tech Crunch (04/03/2025)
- Creative industries protest against UK plan about AI and copyright, by Financial Times (27/02/2025)
- Terms of (dis)service: comparing misinformation policies in text-generative AI chatbots, by EU DisinfoLab (27/02/2025)
- UK delays plans to regulate AI as ministers seek to align with Trump administration, by The Guardian (24/02/2025)
- Erotica, gore and racism: how America’s war on ‘ideological bias’ is letting AI off the leash, by The Conversation (24/02/2025)
- Artificial intelligence and intellectual property: Navigating the challenges of data scraping, by OECD.AI (14/02/2025)
- OpenAI removes certain content warnings from ChatGPT, by Tech Crunch (13/02/2025)
- Tech companies pledged to protect elections from AI. Here’s how they did, by Brennan Center (13/02/2025)
- The death of inclusive AI? Trump’s fight against diversity intensifies, by ANU Reporter (13/02/2025)
- JD Vance warns Europe to go easy on tech regulation in major AI speech, by Politico (11/02/2025)
- Donald Trump rolls back Biden-era AI regulation, sets stage for battles with US states, by CNN (09/02/2025)
- The Cambridge Handbook of the Law, Ethics and Policy of Artificial Intelligence, by Cambridge University Press (06/02/2025)
- Living repository to foster learning and exchange on AI literacy, by European Commission (04/02/2025)
- China is scheduled to hold its "Two Sessions" this week, by Raymond Sun (04/02/2025)
- Meta says it may stop development of AI systems it deems too risky, by Tech Crunch (03/02/2025)
- The EU’s AI bans come with big loopholes for police, by Politico (03/02/2025)
- Frontier AI Framework, by Meta (03/02/2025)
- AI-generated child sex abuse images targeted with new laws, by BBC (02/02/2025)
- First international AI safety report published, by Computer Weekly (30/01/2025)
- Fighting deepfakes: what’s next after legislation?, by Australian Strategic Policy Institute (24/01/2025)
- Deepfake labels and detectors still don't work, by Faked Up (22/01/2025)
- The global struggle over how to regulate AI, by Rest of World (21/01/2025)
- Trump revokes Biden executive order on addressing AI risks, by Reuters (21/01/2025)
- Feedback on the second draft of the general-purpose AI Code of Practice: Comments and recommendations, by University of Cambridge (17/01/2025)
- Civil society rallies for human rights as AI Act prohibitions deadline looms, by EuroActiv (16/01/2025)
- OpenAI wooed Democrats with calls for AI regulation. Now it must charm Trump, by The Washington Post (13/01/2025)
- British PM Keir Starmer outlines bid to become AI 'world leader', by ABC (13/01/2025)
- UK can be ‘AI sweet spot’: Starmer’s tech minister on regulation, Musk, and free speech, by The Guardian (11/01/2025)
- Britain to make sexually explicit 'deepfakes' a crime, by Reuters (07/01/2025)
- Partnering for gender-responsive AI, by UN (01/01/2025)
- Copyright and Artificial Intelligence Part 2: Copyrightability, by United States Copyright office (01/01/2025)
- Trump announces new tech policy picks for his second term, by The Verge (23/12/2024)
- Sriram Krishnan named Trump’s senior policy advisor for AI, by Tech Crunch (22/12/2024)
- Google relaxes AI usage rules, by AI Tool Report (18/12/2024)
- Meta debuts a tool for watermarking AI-generated videos, by Tech Crunch (12/12/2024)
- New research centre supporting safe and responsible AI, by Minister for Industry and Science, Australia (09/12/2024)
- Inside Britain’s plan to save the world from runaway AI, by Politico (05/12/2024)
- Rumble Video Platform sues California over anti-deepfake law, by Bloomberg (29/11/2024)
- Trump 2.0: Clash of the tech bros, by Fortune (26/11/2024)
- ChatGPT, Meta and Google generative AI should be designated 'high-risk' under new laws, bipartisan committee recommends, by ABC News (26/11/2024)
- Case closed on "nude" AI images of girls. Why police are not charging man who made them, by Pensacola News Journal (22/11/2024)
- The EU Code of Practice for General-purpose AI: Key takeaways from the First Draft, by CSIS (21/11/2024)
- What Donald Trump’s Cabinet picks mean for AI, by GZero Media (19/11/2024)
- Musk sues California over deepfake law, by AI Tool Report (18/11/2024)
- EU AI Act: Draft guidance for general purpose AIs shows first steps for Big AI to comply, by TechCrunch (14/11/2024)
- Musk to be Trump's AI advisor?, by AI Tool Report (12/11/2024)
- What Trump’s victory could mean for AI regulation, by Tech Crunch (06/11/2024)
- How AI could still impact the US election, by Gzero Media (05/11/2024)
- Reducing risks posed by synthetic content, by National Institute of Standards and Technology (01/11/2024)
- Google Photos will soon show you if an image was edited with AI, by The Verge (24/10/2024)
- More transparency for AI edits in Google Photos, by Google (24/10/2024)
- Embedded GenAI on social media: Platform law meets AI law, by DSA Observatory (16/10/2024)
- California rejects AI safety bill, by AI Tool Report (30/09/2024)
- Council of Europe opens first ever global treaty on AI for signature, by Council of Europe (05/09/2024)
- Final Report - Governing AI for humanity, by UN (01/09/2024)
- United Nations Secretary-General’s video message for launch of the Final Report, by UN (01/09/2024)
- Platforms’ AI policy updates in 2024: Labelling as the silver bullet?, by EU DisinfoLab (01/07/2024)
- A real account of deep fakes, by Cornell University (15/04/2024)
- Governing AI agents, by Hebrew University of Jerusalem (02/04/2024)
Miscellaneous readings
Recommended reading on AI and disinformation
- Female MP leaves Parliament speechless by holding up nude image of 'herself' and delivering a 'terrifying' message, by Daily Mail (03/06/2025)
- The next battle against disinformation is here, and we’re already losing, by Medium (03/06/2025)
- Uncensored AI models pose an urgent risk to global security, by ASPI (28/05/2025)
- xAI to pay Telegram $300M to integrate Grok into the chat app, by Techcrunch (28/05/2025)
- These pioneers are working to keep their countries’ languages alive in the age of AI news, by Reuters (27/05/2025)
- Fact check: Pope Leo targeted by misinformation, by DW (27/05/2025)
- Man who posted deepfake images of prominent Australian women could face $450,000 penalty, by The Guardian (26/05/2025)
- Researchers claim ChatGPT o3 bypassed shutdown in controlled test, by Bleeping Computer (25/05/2025)
- The setbacks of "snackified" search, by Digital Digging (23/05/2025)
- Milei defends the spread of a fake video damaging Macri: 'Freedom of expression, above all' (in ES), by El Pais (22/05/2025)
- The AI disinformation crisis: Understanding and combating false narratives, by Seeking AI (20/05/2025)
- AI scam factories force trafficked workers to defraud global victims, by Rest of World (20/05/2025)
- Musk’s AI bot Grok blames ‘programming error’ for its Holocaust denial, by The Guardian (18/05/2025)
- What do AI chatbots say about their own bosses — and their rivals?, by Financial Times (17/05/2025)
- Why AI companies face a wave of lawsuits over reputational damage (in GE), by Manager Magazine (16/05/2025)
- Employee’s change caused xAI’s chatbot to veer into South African politics, by The New York Times (16/05/2025)
- The day Grok told everyone about ‘white genocide’, by The Atlantic (15/05/2025)
- Musk’s AI Grok bot rants about ‘white genocide’ in South Africa in unrelated chats, by The Guardian (15/05/2025)
- Scams use AI to mimic senior officials' voices, FBI warns, by Axios (15/05/2025)
- Meta battles an ‘epidemic of scams’ as criminals flood Instagram and Facebook, by The Wall Street Journal (15/05/2025)
- Judge admits nearly being persuaded by AI hallucinations in court filing, by Ars Technica (14/05/2025)
- Deepfakes, scams, and the age of paranoia, by Wired (12/05/2025)
- Pope Leo signals he will closely follow Francis and says AI represents challenge for humanity, by CNN (10/05/2025)
- India-Pakistan conflict: How a deepfake video made it mainstream, by Bellingcat (09/05/2025)
- Unmasking MrDeepFakes: Canadian pharmacist linked to world’s most notorious deepfake porn site, by Bellingcat (07/05/2025)
- AI is getting more powerful, but its hallucinations are getting worse, by The New York Times (05/05/2025)
- Radio station duped audience and secretly used an AI host for six months, by Vice (03/05/2025)
- A DOGE recruiter is staffing a project to deploy AI agents across the US government, by Wired (02/05/2025)
- Conservatives spread AI-generated mugshots to disparage Wisconsin judge arrested in immigration showdown, by Newsguard (02/05/2025)
- Conservative activist Robby Starbuck sues Meta over AI responses about him, by AP (30/04/2025)
- OpenAI rolls back update that made ChatGPT ‘too sycophant-y’, by Techcrunch (29/04/2025)
- A Chinese AI video startup appears to be blocking politically sensitive images, by Tech Crunch (22/04/2025)
- Musk’s DOGE slashes funding to fight deepfakes, misinformation, by Bloomberg (22/04/2025)
- The Washington Post partners with OpenAI on search content, by The Washington Post (22/04/2025)
- AI floods Amazon with political books before election, by Allaboutai (22/04/2025)
- Pro-Kremlin sources jump on ‘AI Action Figure’ trend to falsely depict Zelensky as drug abusing aid beggar, by Newsguard (20/04/2025)
- Company apologizes after AI support agent invents policy that causes user uproar, by ARS Technica (18/04/2025)
- OpenAI is building a social network, by The Verge (15/04/2025)
- How to spot AI influence in Australia’s election campaign, by Australian Strategic Policy Institute (14/04/2025)
- Hackers using AI-produced audio to impersonate tax preparers, by The Record (14/04/2025)
- Meta AI will soon train on EU users’ data, by The Verge (14/04/2025)
- Guidance for Inclusive AI Practicing Participatory Engagement, by Partnership on AI (12/04/2025)
- In South Korea, digital sex crimes soar amid rise in AI, deepfake technology, by SCMP (11/04/2025)
- When AIs start believing other AIs’ hallucinations, we’re F&#%ed, by Medium (11/04/2025)
- How AI-powered fact-checking can help combat misinformation, by IVY EXEC (11/04/2025)
- Sex-Fantasy chatbots are leaking a constant stream of explicit messages, by Wired (11/04/2025)
- AI – A double-edged sword in the age of misinformation and disinformation, by Tech Trends (08/04/2025)
- Taiwan says China using generative AI to ramp up disinformation and ‘divide’ the island, by Rappler (08/04/2025)
- Musk's DOGE using AI to snoop on U.S. federal workers, sources say, by Reuters (08/04/2025)
- Six arrested for AI-powered investment scams that stole $20 million, by Bleeping Computer (07/04/2025)
- The Jianwei Xun case, by Medium (06/04/2025)
- How AI can understand what you're really looking for. Ctrl-F is dead, long live the chatbots, by Digital Digging (05/04/2025)
- 'I want to make you immortal': How one woman confronted her deepfakes harasser, by 404 Media (02/04/2025)
- No, Grok AI-written study does not prove that global warming is a natural phenomenon, by Newsguard (31/03/2025)
- Authors call for UK government to hold Meta accountable for copyright infringement, by The Guardian (31/03/2025)
- YouTube turns off ad revenue for fake movie trailer channels after deadline investigation, by Deadline (30/03/2025)
- Leaked data exposes a Chinese AI censorship machine, by Tech Crunch (26/03/2025)
- Viral audio of JD Vance badmouthing Elon Musk is fake, just the tip of the AI iceberg, by 404 Media (24/03/2025)
- Meta AI is finally coming to the EU, but with limitations, by Tech Crunch (20/03/2025)
- Google-backed chatbot platform caught hosting AI impersonations of 14-year-old user who died by suicide, by Futurism (20/03/2025)
- ChatGPT hit with privacy complaint over defamatory hallucinations, by Tech Crunch (19/03/2025)
- Concerns about AI and social media grow among journalists ahead of Federal Election, survey finds, by AP (18/03/2025)
- Italian newspaper says it has published world’s first AI-generated edition, by The Guardian (18/03/2025)
- AI is turbocharging organized crime, E.U. police agency warns, by NBC News (18/03/2025)
- Instagram experiments with AI-generated comments on posts, by Social Media Today (16/03/2025)
- Children making malicious deepfakes of their teachers, by The Telegraph (14/03/2025)
- How to detect deepfakes with AI, by Digital Digging (14/03/2025)
- China, Russia will 'very likely' use AI to target Canadian voters: Intelligence agency, by CBC (08/03/2025)
- State Dept. to use AI to revoke visas of foreign students who appear "pro-Hamas", by Axios (07/03/2025)
- Google reports scale of complaints about AI deepfake terrorism content to Australian regulator, by Reuters (06/03/2025)
- Creator of viral AI Trump Gaza video warns of possible dangers, by BBC (06/03/2025)
- Southeast Asia faces AI influence on elections, by Australian Strategic Policy Institute (04/03/2025)
- Fraudsters turn to generative AI to improve fake IDs for crimes, by Bloomberg (28/02/2025)
- U.S. fugitive turned Kremlin propagandist reveals Russia’s plan to hijack Western AI models, by NewsGuard (26/02/2025)
- Apple fixing bug that caused dictation feature to type the word ‘Trump’ when users said ‘racist’, by CNN (25/02/2025)
- Taiwan’s digital ministry uses AI to combat online fraud and deep fakes, by Gov Insider (24/02/2025)
- The importance of feminist approaches in tackling (AI-driven) gendered disinformation to counter election interference, by CFFP (24/02/2025)
- Grok 3 appears to have briefly censored unflattering mentions of Trump and Musk, by Tech Crunch (23/02/2025)
- Real or fake? AI tech sparks election deception fears, by Canberra Times (22/02/2025)
- In battle against scams, Malaysians are now armed with a chatbot to waste fraudsters’ time, by SCMP (21/02/2025)
- The APM denounces the use of images created by artificial intelligence as if they were authentic, by APM (19/02/2025)
- Ukraine warns of growing AI use in Russian cyber-espionage operations, by The Record (14/02/2025)
- Scarlett Johansson warns of dangers of AI after Kanye West deepfake goes viral, by The Guardian (13/02/2025)
- A bird’s-eye view of the Paris AI Action Summit: Regulation, power, and alternatives, by Tech Global institute (13/02/2025)
- X gives fake Myriam Spiteri Debono account verified status, by Times of Malta (12/02/2025)
- UK, US snub Paris AI summit statement, by Politico (11/02/2025)
- Esselunga joins Moratti, Minervini, Beretta in Crosetto case, by Ansa (09/02/2025)
- Forty media outlets take legal action to block 'News DayFr', one of many AI-generated 'parasite sites' (in French), by Libération (07/02/2025)
- The Italian press spread an AI image of Trump, Musk and Netanyahu, believing it was real (in Italian), by Facta (04/02/2025)
- A pioneering AI project awarded for opening Large Language Models to European languages, by European Commission (03/02/2025)
- The AEC wants to stop AI and misinformation. But it’s up against a problem that is deep and dark, by The Conversation (03/02/2025)
- DeepSeek debuts with 83 percent ‘fail rate’ in NewsGuard’s Chatbot Red Team Audit, by Newsguard (29/01/2025)
- We tried out DeepSeek. It worked well, until we asked it about Tiananmen Square and Taiwan, by The Guardian (28/01/2025)
- Meta AI can now use your Facebook and Instagram data to personalize its responses, by Tech Crunch (27/01/2025)
- Sam Altman’s World now wants to link AI agents to your digital identity, by Tech Crunch (24/01/2025)
- Anthropic’s new Citations feature aims to reduce AI errors, by Tech Crunch (23/01/2025)
- Pope warns Davos summit that AI could worsen ‘crisis of truth’, by The Guardian (23/01/2025)
- An unusual pitch (about the launch of Pearl, an AI-powered search engine), by Wired (22/01/2025)
- Is the TikTok threat really about AI?, by GZeromedia (21/01/2025)
- The FTC’s concern about Snapchat’s My AI chatbot, by GZeromedia (21/01/2025)
- LinkedIn accused of using private messages to train AI, by BBC (21/01/2025)
- C.I.A.’s chatbot stands in for world leaders, by The New York Times (18/01/2025)
- Apple is pulling its AI-generated notifications for news after generating fake headlines, by CNN (16/01/2025)
- Viral scam: French woman duped by AI Brad Pitt love scheme faces cyberbullying, by Euronews (15/01/2025)
- Arrested by AI: Police ignore standards after facial recognition matches, by The Washington Post (13/01/2025)
- LinkedIn is in danger of being swamped by AI-generated slop, by Financial Review (12/01/2025)
- How Elon Musk’s xAI is quietly taking over X, by The Verge (10/01/2025)
- YouTubers are selling their unused video footage to AI companies, by Bloomberg (10/01/2025)
- AI social media users are not always a totally dumb idea, by Wired (08/01/2025)
- Elon Musk accused of using AI to write controversial column for German newspaper, by MSN (08/01/2025)
- Man who exploded Tesla Cybertruck outside Trump hotel in Las Vegas used generative AI, police say, by AP (08/01/2025)
- Users of AI chatbot companions say their relationships are more than 'clickbait', but views are mixed on their benefits, by ABC (06/01/2025)
- Instagram begins randomly showing users AI-generated images of themselves, by 404 Media (06/01/2025)
- Meta is killing off its own AI-powered Instagram and Facebook profiles, by The Guardian (03/01/2025)
- Meta envisages social media filled with AI-generated users, by The Financial Times (26/12/2024)
- The Year of the AI election wasn’t quite what everyone expected, by Wired (26/12/2024)
- Nothing is sacred: AI-generated slop has come for Christmas music, by 404 Media (25/12/2024)
- OpenAI whistleblower who died was being considered as witness against company, by The Guardian (21/12/2024)
- Picture of Bashar al-Assad with Tucker Carlson in Moscow almost certainly AI-generated, by Full Fact (19/12/2024)
- Elon Musk’s Grok-2 is now free—and it’s a mess, by Fast Company (18/12/2024)
- Using open-source AI, sophisticated cyber ops will proliferate, by Australian Strategic Policy Institute (17/12/2024)
- China wants to dominate in AI, and some of its models are already beating their U.S. rivals, by CNBC (17/12/2024)
- Luigi Mangione AI chatbots give voice to accused UnitedHealthcare shooter, by Forbes (17/12/2024)
- AI crackdown: China stamps out tech misuse to preserve national literature and ideology, by SCMP (15/12/2024)
- UK could offer celebs protection from AI clones, by Politico (13/12/2024)
- We looked at 78 election deepfakes. Political misinformation is not an AI problem, by AI Snake Oil (13/12/2024)
- AI helps Telegram remove 15 million suspect groups and channels in 2024, by Tech Crunch (13/12/2024)
- Tech companies claim AI can recognise human emotions. But the science doesn’t stack up, by The Conversation (13/12/2024)
- AI used to target election fraud and criminal deepfakes, by The Canberra Times (11/12/2024)
- This journalist wants you to try open-source AI: "AI is shiny, but value comes from the ideas people have to use it", by Reuters Institute (10/12/2024)
- Paul McCartney warns AI ‘could take over’ as UK debates copyright laws, by The Guardian (10/12/2024)
- China launches AI that writes politically correct docs for bureaucrats, by The Register (09/12/2024)
- Musk launches (then deletes) new image generator, by AI Tool Report (09/12/2024)
- 'It has to be a deepfake': South Korean opposition leader on martial law announcement, by CNN (05/12/2024)
- The US Department of Defense is investing in deepfake detection, by MIT Technology Review (05/12/2024)
- Misinformation researcher admits ChatGPT added fake details to his court filing, by The Verge (04/12/2024)
- Deepfake YouTube ads of celebrities promise to get you ‘Rock Hard’, by 404 Media (04/12/2024)
- Is the AI Doomsday Narrative the Product of a Big Tech Conspiracy?, by Obsolete (04/12/2024)
- What we saw on our platforms during 2024’s global elections, by META (03/12/2024)
- Google’s video generator comes to more customers, by Tech Crunch (03/12/2024)
- AWS’ new service tackles AI hallucinations, by Tech Crunch (03/12/2024)
- Meta says gen AI had muted impact on global elections this year, by Reuters (03/12/2024)
- AI-Powered ‘Death Clock’ promises a more exact prediction of the 'day you’ll die', by Bloomberg (30/11/2024)
- The legal battle against explicit AI deepfakes, by The Financial Times (28/11/2024)
- Amazon, Google and Meta are ‘pillaging culture, data and creativity’ to train AI, Australian inquiry finds, by The Guardian (27/11/2024)
- AI-generated slop is quietly conquering the internet. Is it a threat to journalism or a problem that will fix itself?, by Reuters Institute (26/11/2024)
- Russia plotting to use AI to enhance cyber-attacks against UK, minister will warn, by The Guardian (25/11/2024)
- Deepfake videos appear to target Canadian immigrants for thousands of dollars, by CTV News (25/11/2024)
- AI increasingly used for sextortion, scams and child abuse, says senior UK police chief, by The Guardian (24/11/2024)
- AI is taking your job, by Kent C. Dodds Blog (21/11/2024)
- Deus in machina: Swiss church installs AI-powered Jesus, by The Guardian (21/11/2024)
- AI detection tool helps journalists identify and combat deepfakes, by IJNET (20/11/2024)
- What Donald Trump’s cabinet picks mean for AI, by Gzero Media (19/11/2024)
- Fake Claims of Elon Musk’s Latest Acquisitions, by NewsGuard (18/11/2024)
- Singapore steps up fight against deepfakes ahead of election, by Nikkei Asia (17/11/2024)
- Pokemon players create AI world map, by Digital Digging (15/11/2024)
- This 'AI Granny' bores scammers to tears, by PCMag (15/11/2024)
- 2024 AI and Democracy Hackathon, by GMF Technology (11/11/2024)
- AI didn’t sway the election, but it deepened the partisan divide, by Washington Post (09/11/2024)
- Mistral Moderation API, by Mistral (07/11/2024)
- Perplexity launches controversial AI election hub, by AI Tool Report (04/11/2024)
- Thousands go to fake AI-invented Dublin Halloween parade, by EuroNews (01/11/2024)
- Introducing ChatGPT search, by OpenAI (31/10/2024)
- Electoral disinformation, but no AI revolution ahead of the US election – yet, by International Journalist Network (29/10/2024)
- These viral images of the Hamas-Israel war aren’t real. Does it matter?, by SBS (24/10/2024)
- AI was weaponized for FIMI purposes: Russia reportedly paid a former Florida cop to pump out anti-Harris deepfakes and disinformation, by The Verge (24/10/2024)
- Real-time video deepfake scams are here. This tool attempts to zap them, by Wired (15/10/2024)
- Meta fed its AI on almost everything you’ve posted publicly since 2007, by The Verge (12/09/2024)
- Lingo Telecom agrees to $1 million fine over AI-generated Biden robocalls, by Reuters (21/08/2024)
- AI-written obituaries are compounding people’s grief, by Fast Company (26/07/2024)

Community
A list of tools to fight AI-driven disinformation, along with projects and initiatives facing the challenges posed by AI. The ultimate aim is to foster cooperation and resilience within the counter-disinformation community.
Tools
A repository of tools to tackle AI-manipulated and/or AI-generated disinformation.
INVID-WeVerify plugin
Deepware Scanner
True Media
Illuminarty.AI
GPTZero
Pangram Labs
Originality.ai
Hugging Face
Draft & Goal
AI Voice Detector
Hive Moderation
DebunkBot
IntellGPT
AI Research Pilot
AI Research Pilot by Henk van Ess is a lightweight, browser-based tool designed to help investigators, journalists, and researchers get more out of AI, not by using AI as a source, but as a guide to real sources.
Initiatives & organisations
Organisations working in the field and initiatives launched by community members to address the challenges posed by AI in the disinformation field.
EU-funded project: veraAI
veraAI is a research and development project focusing on disinformation analysis and AI supported verification tools and services.
Cluster of EU-funded projects: 'AI against disinformation'
AI against disinformation is a cluster of six European Commission co-funded research projects, which include research on AI methods for countering online disinformation. The focus of ongoing research is on detection of AI-generated content and development of AI-powered tools and technologies that support verification professionals and citizens with content analysis and verification.
AI Forensics
AI Forensics is a European non-profit that investigates influential and opaque algorithms. They hold major technology platforms accountable by conducting independent and high-profile technical investigations to uncover and expose the harms caused by their algorithms. They empower the research community with tools, datasets and methodologies to strengthen the AI audit ecosystem.
AI Tracking Center, by NewsGuard
AI Tracking Center is intended to highlight the ways that generative AI has been deployed to turbocharge misinformation operations and unreliable news. The Center includes a selection of NewsGuard’s reports, insights, and debunks related to artificial intelligence.
AlgorithmWatch
AlgorithmWatch is a non-governmental, non-profit organisation based in Berlin and Zurich. They fight for a world where algorithms and Artificial Intelligence (AI) do not weaken justice, human rights, democracy and sustainability, but strengthen them.
European AI & Society Fund
The European AI & Society Fund empowers a diverse ecosystem of civil society organisations to shape policies around AI in the public interest and galvanises the philanthropic sector to sustain this vital work.
AI Media Observatory
The European AI Media Observatory is a knowledge platform that monitors and curates relevant research on AI in media, provides expert perspectives on the potentials and challenges that AI poses for the media sector and allows stakeholders to easily get in touch with relevant experts in the field via their directory.
GZERO Media newsletter
GZERO’s newsletter offers exclusive insights into our rapidly changing world, covering topics such as AI-driven disinformation and a weekly exclusive edition written by Ian Bremmer.
Queer in AI
Queer in AI is an initiative established by queer scientists in AI with the mission to make the AI community a safe and inclusive place that welcomes, supports, and values LGBTQIA2S+ people. Their aim is to build a visible community of queer AI scientists through different actions.
AI for Good
AI for Good is the United Nations’ leading platform on Artificial Intelligence for sustainable development. Its mission is to leverage the transformative potential of artificial intelligence (AI) to drive progress toward achieving the UN Sustainable Development Goals.
Omdena
Omdena is a collaborative AI platform where a global community of changemakers unites to co-create real-world tech solutions for social impact. It combines collective intelligence with hands-on collaboration, empowering the community from across all industries to learn, build, and deploy meaningful AI projects.
Faked Up academic library
Faked Up curates a library of academic studies and reports on digital deception and misinformation, offering accessible insights for subscribers. The collection includes studies from 2020 onward, organised into clusters like misinformation prevalence, fact-checking effects, and AI-generated deceptive content. It serves as a practical resource for understanding and addressing misinformation challenges.
AI Incident Database
AI Incident Database is dedicated to indexing the collective history of harms or near harms realized in the real world by the deployment of artificial intelligence systems. Like similar databases in aviation and computer security, the AI Incident Database aims to learn from experience to prevent or mitigate bad outcomes.
TGuard project
The TGuard project develops innovative methods for detecting disinformation in social media and formulating effective strategies for preventing AI-generated false reports.
AI-on-Demand (AIoD)
The AI-on-Demand (AIoD) Platform is a European hub for trustworthy AI, offering open access to models, datasets, tools, and educational resources. Backed by the EU, it supports researchers, innovators, and public institutions in developing and sharing responsible AI technologies aligned with European values.
BBC Verify Live
BBC Verify Live is a real-time news feed that gives audiences a behind-the-scenes look at how BBC journalists verify information. Using tools like open-source intelligence, satellite imagery, and data analysis, the BBC Verify team investigates disinformation, checks facts, and authenticates content as news breaks. Available on the BBC News homepage and app, this initiative aims to boost transparency and trust in journalism, especially in the face of rising threats from disinformation and AI-generated content.
Last updated: 09/06/2025
The articles and resources listed in this hub do not necessarily represent EU DisinfoLab’s position. This hub is an effort to give voice to all members of the community countering AI-generated disinformation.