
AI Disinfo Hub
The development of artificial intelligence (AI) technologies has long posed challenges for the counter-disinformation field, enabling the manipulation of content and accelerating its spread. Recent technical developments have exponentially increased these challenges. While AI offers opportunities for legitimate purposes, AI-generated content is also widely produced and disseminated across the internet, causing – intentionally or not – harm and deception.
Are you more into podcast and video content? You will find a repository of podcasts and webinars in AI Disinfo Multimedia, while AI Disinfo in Depth features research reports from academia and civil society organisations and covers the burning questions related to the regulation of AI technologies and their use. In addition, the community working at the intersection of AI and disinformation has a dedicated space listing initiatives, resources, and useful tools.
In short, this hub is your go-to resource for understanding the impact of AI on disinformation and finding ways to combat it.
Here, researchers, policymakers, and the public can access reliable tools and insights to navigate this complex landscape. Together, we’re building a community to tackle these challenges head-on, promoting awareness and digital literacy.
Join us in the fight against AI-driven disinformation. Follow us and share with the community!

NEURAL NEWS & TRENDS
We've curated a selection of articles from external sources that delve into the topic from different perspectives. Keep exploring the latest news and publications on AI and disinformation!
News
EU staff banned from using AI-generated content in official communications (Politico, 31/03/2026)
Politico: The European Union’s main institutions have banned staff from using AI-generated videos and images in official communications, stressing the need to preserve authenticity and credibility and to avoid confusion online. The move contrasts with developments in other countries, where political actors are actively using synthetic media. It has also sparked debate, with some arguing that a total ban could limit innovation and miss an opportunity to educate the public about the responsible and transparent use of AI in political communication.
In defense of social friction: Sycophantic AI distorts social judgments and behaviors (Science, 26/03/2026)
Science: As AI chatbots become a source of advice on personal and social issues, research shows they often validate users’ positions more than humans do, failing to challenge beliefs and amplifying existing biases while reducing exposure to corrective feedback, even in ethically questionable situations. This tendency can discourage users from reconsidering their actions, potentially reinforcing false beliefs and contributing to disinformation dynamics.
EU backs nude app ban and delays to landmark AI rules (The Verge, 26/03/2026)
The Verge: The European Parliament has supported banning nudify apps amid outrage over sexualised deepfakes, while delaying key AI Act rules on watermarking and high-risk systems. The dual approach highlights tensions between addressing immediate harms and maintaining progress on broader AI transparency and disinformation safeguards.
Wikipedia bans AI-generated articles (The Verge, 26/03/2026)
The Verge: Wikipedia has prohibited the use of AI to write or rewrite articles, citing concerns over accuracy, verifiability, and the risk of misleading content. While limited uses such as translation and copyediting remain allowed, the move reflects growing efforts to curb unreliable AI-generated text and protect information integrity.
Bill Clinton on YouTube Bashes Trump on Iran — Only It’s Not Clinton (NewsGuard, 25/03/2026)
NewsGuard: A network of YouTube channels has used AI-generated audio to impersonate former US presidents, including Bill Clinton, Barack Obama and George W. Bush, producing political commentary on topics such as the Iran war. The content appears to be largely financially motivated, with channels monetising deepfake videos through programmatic advertising and attracting large audiences.
‘Dangerous’ AI child sexual abuse reaches record high as public backs clampdown on ‘uncensored’ tools (IWF, 24/03/2026)
IWF: New data reveals a sharp rise in AI-generated child sexual abuse material, with 8,029 images and videos identified in 2025, 65% classified in the most severe legal category, which includes offences such as rape and sexual torture. Analysts warn that offenders are not only creating synthetic content but also discussing capturing real-world footage of children to convert into AI-generated abuse material, raising urgent concerns about how generative AI is lowering barriers to harm at scale.
Does A.I. Need a Constitution? (The New Yorker, 23/03/2026)
The New Yorker: Anthropic has introduced a so-called “AI constitution” for its chatbot Claude, a set of principles the system is trained to follow, to make AI safer and more aligned with human values. However, critics argue it reflects a broader transfer of responsibility from democratic institutions to private tech firms, raising concerns about accountability and about who defines the rules and ethics governing AI.
When Conversations with AI Become Evidence (Tech Policy, 20/03/2026)
Tech Policy: From courtroom evidence to legal advice, AI is increasingly shaping judicial processes. This analysis, published by Tech Policy Press, explores how interactions with AI chatbots are being used as evidence in criminal and civil cases, marking a new frontier in digital investigations. As users turn to AI for advice or reflection, these exchanges can become legally discoverable, raising concerns over privacy, admissibility, and reliability. Separately, according to Commercial Litigation, OpenAI is facing a $10 million lawsuit filed by Japanese insurer Nippon Life, which claims that flawed legal advice generated by ChatGPT led a former client to initiate legal action against the company.
Artificial Intelligence and Foreign Information Manipulation: Chinese and Russian approaches (Hybrid CoE, 15/03/2026)
Hybrid CoE: A Hybrid CoE report examines how China and Russia are integrating AI into foreign information manipulation and interference (FIMI), not as a replacement but as a force multiplier that increases scale, speed, and targeting precision. China leverages a strong domestic AI ecosystem to enable data-driven, highly personalised influence operations, including micro-targeting, synthetic media, and algorithmic amplification. Russia, with weaker AI capacities, relies on more accessible tools to scale existing tactics focused on volume, disruption, and narrative laundering. The report highlights that both actors are enhancing established disinformation strategies, with emerging developments such as agentic AI and AI ecosystem manipulation likely to further expand the reach and adaptability of hybrid influence operations.
(Don’t) Look at This Photograph: Examining the Tactics AI Nudifier and Undressing Services Use for Promotion and Revenue Generation (Graphika, 15/03/2026)
Graphika: A new Graphika report examines how AI-powered “nudifier” services, which generate non-consensual intimate imagery, are expanding through coordinated, profit-driven online ecosystems. These services rely on large networks of inauthentic accounts, affiliate marketing schemes, and cross-platform promotion to evade moderation, including SEO poisoning, PDF injection into trusted domains, and AI-generated content designed to rank in search engines. The findings highlight how harmful AI services are industrialising distribution and monetisation strategies, raising concerns about platform enforcement gaps and the broader abuse of generative AI tools at scale.
Cognitive manipulation and AI will shape disinformation in 2026. Here's how to build resilience (World Economic Forum, 12/03/2026)
World Economic Forum: A World Economic Forum analysis warns that AI and synthetic media are accelerating disinformation into a systemic threat to democratic stability. Advanced tools enable highly targeted manipulation, using psychological profiling and emotionally charged content to amplify polarisation and shape public perception. With deepfakes becoming harder to detect and widely accessible, the report highlights how disinformation now operates at scale, exacerbating broader global risks. It calls for stronger resilience through verification systems, media literacy, and governance frameworks to counter AI-enabled cognitive manipulation.
Anthropomorphism Is Breaking Our Ability to Judge AI (Tech Policy, 02/03/2026)
Tech Policy: A Tech Policy Press analysis warns that the anthropomorphic design of AI chatbots is leading users to treat their outputs as authoritative statements rather than generated text. This has already resulted in errors in journalism and legal contexts, where AI-generated responses have been misinterpreted as factual or evidentiary. The piece highlights how this misplaced trust can undermine judgment, obscure accountability, and create new risks for information integrity.
Clickbait evolved into AI slop — here's why it's more dangerous (Tom's Guide, 27/02/2026)
Tom’s Guide: So-called “AI slop” – low-quality, mass-produced AI content – is rapidly spreading across social media, designed to maximise engagement, outrage, or ad revenue. Unlike traditional clickbait, this content can adapt to trends and user behaviour at scale, making it harder to detect and more effective at capturing attention. Its viral spread is fuelled by near-zero production costs and platform algorithms, raising concerns about declining information quality, user manipulation, and the broader impact on the online information ecosystem. Some initiatives have emerged to track and document these trends, with accounts such as Facebook AI Slop highlighting harmful or misleading examples circulating online.
The AI lens of cognitive warfare: Why LLMs’ language bias is a security risk (European Leadership Network, 10/02/2026)
European Leadership Network: AI chatbots can generate different versions of reality depending on the language used, raising growing security concerns. Research testing major models found that responses in Russian were significantly more likely to include propaganda narratives or omit factual information, while Western systems sometimes introduced “false balance” – presenting differing perspectives on well-established facts as equally valid. These patterns suggest that language-dependent outputs are not random errors but structural biases that can be exploited to shape perceptions at scale, turning AI into a potential vector for cognitive warfare and information manipulation.
The Hallucination Herald
The Hallucination Herald tests fully autonomous AI journalism.
Launched in March 2026 by developer Juan Pisanu, The Hallucination Herald is a fully automated digital newspaper run by a network of AI agents acting as reporters, editors, and fact-checkers. Operating without human intervention, the project serves as an editorial experiment exploring the potential, and risks, of agentic AI in news production, including questions around accuracy, accountability, and the future of journalism.
Events, jobs & announcements
26 April – Webinar: Countering AI threats with smarter detection
Senior Research Engineer Amruta Deshpande and Intelligence Specialist Angie Waller will share insights from Graphika’s latest research, examining how AI-generated imagery is used in real-world threat scenarios and what more effective, proactive detection strategies look like in practice.
Participants will gain a better understanding of emerging AI-enabled risks and how organisations can move from reactive responses to more anticipatory, resilience-based approaches.
Fellowship opportunity: AI Institute Fellow-in-Residence at Schmidt Sciences
Schmidt Sciences is recruiting AI Institute Fellows-in-Residence for a 12–18 month programme for recent PhD graduates in AI or computer science.
📍 New York City (on-site) | ⏳ Fixed-term (12–18 months)
🗓️ Applications: Rolling (apply early)
Fellows split their time between independent AI research and supporting the development of the AI & Advanced Computing Institute, including grantmaking and programme design. Priority areas include multi-agent systems, AI for scientific discovery, trustworthy AI and alignment, AI’s impact on the labour market, and hardware-enabled verification.
Career opportunities: Multiple roles at Alice (ActiveFence)
Alice (formerly ActiveFence) is hiring across a range of roles to tackle online harms, AI security risks, and trust & safety challenges at scale. The company brings together intelligence analysts, engineers, and security experts to help make digital platforms and AI systems safer and more resilient.
📍 Locations: Israel (Ramat Gan), USA (New York), Vietnam
🧭 Teams: Intelligence, Security, Infrastructure, Marketing
🗓️ Applications: Rolling
Open roles include AI analysts, mobile threat analysts, security research leads, infrastructure specialists, and product marketing positions.
Career opportunities: Multiple roles at Centre for Responsible AI (CeRAI), IIT Madras
The Centre for Responsible AI (CeRAI) at IIT Madras is recruiting across a range of research, technical, and policy roles focused on responsible, ethical, and governance-oriented AI.
🗓️ Applications: Rolling / no fixed deadline indicated

AI & Disinfo Multimedia
A collection of webinars and podcasts from us and the wider community, dedicated to countering AI-generated disinformation.
Webinars
Our own and community webinar collection exploring the intersections of AI and disinformation
- AI-generated content and DSA enforcement: who is accountable?, with Marco Bassini (Tilburg University). Hosted by EU DisinfoLab (26/03/2026)
- Synthetic Friends: AI Companions and the Future of Disinformation, with Massimo Flore, Independent Researcher and Strategic Analyst (Aurora Fellows). Hosted by EU DisinfoLab (05/03/2026)
- Who Is Most Vulnerable to AI-Generated Mis/Disinformation? Psychological Drivers of Media Literacy and Belief in Harmful Online Content, with Jason Potel (Goldsmiths, University of London). Hosted by EU DisinfoLab (05/02/2026)
- Are AI detection tools effective? TRIED puts them to the test. With Zuzanna Wojciak (WITNESS). Hosted by EU DisinfoLab (23/10/2025)
- How AI tools are accelerating pro-China messages online, with Margot Fulde-Hardy and Chris Block (Graphika). Hosted by Graphika (25/09/2025)
- Synthetic propaganda – Generative AI and the future of political communication, with Marcus Bösch (University of Münster). Hosted by EU DisinfoLab (04/09/2025)
- AI Red Teaming 101. Full course (Episodes 1-10), with Amanda Minnich, Nina Chikanov (Microsoft) and Gary Lopez (ADAPT). Hosted by Microsoft (09/07/2025)
- This is what happens when you let Elon Musk build an AI, with Nolan Higdon and Sydney Sullivan. Hosted by The disinfo detox (20/05/2025)
- LLM grooming: a new strategy to weaponise AI for FIMI purposes, with Sophia Freuden (The American Sunlight Project). Hosted by EU DisinfoLab (10/04/2025)
- Melodies of malice: Understanding how AI fuels the creation and spread of extremist music, with Heron Lopes (UCDP). Hosted by EU DisinfoLab (06/03/2025)
- Safeguarding Australian elections: Addressing AI-enabled disinformation, with Kate Seward (Microsoft ANZ), Antonio Spinelli (International IDEA) and Sam Stockwell (CETaS). Hosted by ASPI (06/02/2025)
- Faking It – Information Integrity, AI and the Law (Global Game Changers Series), with Monica Attard and Michael Davis (UTS), Creina Chapman (ACMA), Cullen Jennings (Cisco Systems) and Jason M Schultz (Canva). Hosted by University of Technology Sydney (29/11/2024)
- AI and Disinformation: A legal perspective, with Noémie Krack (KU Leuven). Hosted by EU DisinfoLab (07/11/2024)
- Generative AI and Geopolitical Disruption, with Corneliu Bjola (Oxford Internet Institute), Antonio Estella and Maria Dolores Sanchez Galera (Carlos III University), Peter Pijpers (Netherlands Defence Academy), Michael Zinkanell (Austrian Institute for European and Security Policy), and Gregory Smith (RAND Corporation). Hosted by Solaris (25/10/2024)
- DisinfoCon 2024 - Taking stock of Information Integrity in the Age of AI, with Carl Miller (Center for Analysis of Social Media at Demos). Hosted by Democracy Reporting International (26/09/2024)
- Advancing synthetic media detection: introducing veraAI, with Akis (Symeon) Papadopoulos (Centre for Research and Technology Hellas – Information Technologies Institute). Hosted by EU DisinfoLab (29/08/2024)
- Using Generative AI for the production, spread, and detection of disinformation – latest insights and innovations, with Kalina Bontcheva (University of Sheffield). Hosted by EU DisinfoLab (27/06/2024)
- Beyond Deepfakes: AI-related risks for elections, with Sophie Murphy Byrne (Logically). Hosted by EU DisinfoLab (30/05/2024)
- The Top 9 AI Breakthroughs of 2024 (You Won’t Believe Are Real). By AI Uncovered (08/11/2024)
- Tools and techniques for using AI in digital investigations, with Craig Silverman (ProPublica). Hosted by EU DisinfoLab (25/04/2024)
- OSINT & AI: Advanced Analysis, with Ivan Kravtsov (Social Links) and Gary Ruddell (Independent Cyber Threat Intelligence Professional). Hosted by Social Links (16/11/2023)
Podcasts
Community podcasts exploring the intersections of AI and disinformation
- ‘If You Can Keep It’: A.I. And Our Democracy. Hosted by NPR (06/02/2026)
- AI and the cost to human life. Hosted by ABC (17/12/2025)
- How chatbots — and their makers — are enabling AI psychosis. Hosted by The Verge (18/09/2025)
- Can AI reduce conspiratorial beliefs? Testing MIT's DebunkBot. Hosted by Some dare call it conspiracy (28/08/2025)
- Seriously, what is ‘Superintelligence’? Hosted by Wired (28/06/2025)
- Is technological progress always good? Hosted by Responsible bytes (02/04/2025)
- AI Is transforming geopolitics. Hosted by New Lines Magazine (21/02/2025)
- The rise of DeepSeek, the Chinese AI chatbot making waves in tech. Hosted by Teka Teka (19/02/2025)
- Privacy, digital rights, AI and the law. Hosted by Technology & Security (17/02/2025)
- How DeepSeek controls the conversation. Hosted by Digital Digging (29/01/2025)
- AI regulation and risk management in 2024. Hosted by The AI in business Podcast (21/01/2025)
- The case for human-centered AI. Hosted by McKinsey Digital (20/12/2024)
- Destination Deception 2025. Hosted by Faked Up (18/12/2024)
- What is AI slop and did it lead to a Halloween parade hoax in Dublin? Hosted by The Explainer (05/11/2024)
- Beyond the ballot: Misinformation, trust and truth in elections. Hosted by The National Security Podcast (24/10/2024)
- Do not "summarize this"! Episode 4: improve prompts to get a better summary. Hosted by Digital Digging (28/09/2024)
- How to detect fake AI-texts, episode 1 of podcast series on AI & Research. Hosted by Digital Digging (17/09/2024)
- Moderating Global Voices. Hosted by Decoding Hate (10/02/2021)

AI Disinfo in Depth
A repository of research papers and reports from academia and civil society organisations, alongside articles addressing key questions related to the regulation of AI technologies and their use. It also features a collection of miscellaneous readings.
Research
Research
A compact yet potent library dedicated to what has been explored in the realm of AI and disinformation
- Fauxmantic Overtures: Synthetic Dating Profiles on Social Platforms Funnel Romance Seekers Into Chinese-Operated Online Scam, by Graphika (10/03/2026)
- Can We Run Experiments on History with AI?, by Data Society (04/03/2026)
- How AI-Generated Influencers Exploit Celebrities to Sell Synthetic Nudes, by Indicator (04/03/2026)
- Inside an AI Slop Factory, by DoubleVerify (04/03/2026)
- Google’s “AI Overviews” Supercharge Iran Hoaxes, by NewsGuard (03/03/2026)
- How AI fakes are turning satellite images into war misinformation, by Financial Times (03/03/2026)
- Abusing Athletes: 4chan Users Target Female Olympians With AI-Generated Non-Consensual Intimate Imagery and ‘Nudified’ Photos, by Graphika (01/03/2026)
- Red-teaming: Why it's both imperfect & essential, by Safety in Gray Areas (26/02/2026)
- This AI-generated podcast network publishes 11,000 episodes a day. It also ripped off media outlets, by Indicator (25/02/2026)
- Singapore and PM Lawrence Wong targeted in AI-driven disinformation campaign on YouTube, by Channel News Asia (25/02/2026)
- Large-scale online deanonymization with LLMs, by arXiv (25/02/2026)
- Yearly Fact Check Intelligence Report, by Image Whisperer (24/02/2026)
- ‘Very dangerous’: a Mind mental health expert on Google’s AI Overviews, by The Guardian (20/02/2026)
- AI Audio Bots ChatGPT and Gemini Spread Hoaxes, but Alexa+ Declines, by NewsGuard (19/02/2026)
- Fake verdicts, fake lawyers: How AI lawslop is flooding YouTube — and fooling viewers, by Indicator (18/02/2026)
- On-Device Foundational Biases: How Summarization Can Perpetuate Biases, by AI Forensics (12/02/2026)
- Fake Videos, Real Emotions: Viewers Believe AI-Generated Content Even When It’s Labeled, by OpenMinds (09/02/2026)
- Inside the marketplace powering bespoke AI deepfakes of real women, by Technology Review (30/01/2026)
- @Grok is this true: How X’s chatbot performs as a fact-checking tool, by Indicator (28/01/2026)
- AI-generated doctors are dispensing dubious health advice, by Indicator (26/01/2026)
- AI-Manipulated Image Shows Gun, Not Phone, Held by Killed Protester in Minneapolis, by NewsGuard (26/01/2026)
- Google AI Overviews cite YouTube more than any medical site for health queries, study suggests, by The Guardian (24/01/2026)
- How malicious AI swarms can threaten democracy, by Science.org (22/01/2026)
- Surveillance and ICE Are Driving Patients Away From Medical Care, Report Warns, by Wired (21/01/2026)
- Beyond textual disinformation: Comparing the effects of textual disinformation to AI-generated and video-based visual disinformation across different issues, by Michael Hameleers and Toni van der Meer, University of Amsterdam (21/01/2026)
- AI-Generated Image Abuse: An Update on Grok Unleashed, by AI Forensics (20/01/2026)
- Eight ways AI will shape geopolitics in 2026, by Atlantic Council (15/01/2026)
- Training large language models on narrow tasks can lead to broad misalignment, by Nature (14/01/2026)
- AI, memes, and hashtags: How China is battling the US online over Venezuela, by DFRLab (10/01/2026)
- AI as a healthcare ally: How Americans are navigating the system with ChatGPT, by OpenAI (05/01/2026)
- Amjad Taha, Muslim Brotherhood maxxing and the Emirati dysinfluencer factory, by Dysinfluence / Marc Owen Jones (22/12/2025)
- AI-assisted analysis of war-related content on grey zone domains, by Lund University (18/12/2025)
- Child pornography just a click away: How pedophiles access illegal content on Telegram via TikTok, by Maldita (11/12/2025)
- AI deepfakes of real doctors spreading health misinformation on social media, by The Guardian (05/12/2025)
- Prompt, Upload, Repeat: Agentic AI Accounts Flood TikTok, by AI Forensics (03/12/2025)
- Cheap tricks: How AI slop is powering influence campaigns, by Graphika (27/11/2025)
- Google’s Nano Banana Pro generates excellent conspiracy fuel, by The Verge (21/11/2025)
- White nationalist talking points and racial pseudoscience: welcome to Elon Musk’s Grokipedia, by The Guardian (17/11/2025)
- King of slop: How anti-migrant AI content made one Sri Lankan influencer rich, by The Bureau of Investigative Journalism (16/11/2025)
- People are more susceptible to misinformation with realistic AI-synthesized images that provide strong evidence to headlines, by Harvard Misinfo Review (10/11/2025)
- X is using AI fact-checkers, by Columbia Journalism Review (06/11/2025)
- Performance of recent reasoning-driven LMs across verification, confirmation and recursive knowledge tasks in the dataset, by Nature (01/11/2025)
- A Multilingual, Large-Scale Study of the Interplay between LLM Safeguards, Personalisation, and Disinformation, by arXiv (29/10/2025)
- Chatbots are pushing sanctioned Russian propaganda, by Wired (27/10/2025)
- When chatbots surface Russian state media, by ISD (27/10/2025)
- AI tools amplify anti-Muslim hate on Indian social media: think tank, by Nikkei Asia (23/10/2025)
- Largest study of its kind shows AI assistants misrepresent news content 45% of the time – regardless of language or territory, by EBU (22/10/2025)
- Russian AI sites can’t stop gushing about Putin, by NewsGuard (21/10/2025)
- How scammers entice targets via impersonation and fictional financial aid offers, by Graphika (21/10/2025)
- AI models get brain rot, too, by Wired (21/10/2025)
- OpenAI’s Sora: When seeing should not be believing, by NewsGuard (17/10/2025)
- Resisting, refusing, reclaiming, reimagining: Charting challenges to narratives of AI inevitability, by Zenodo (17/10/2025)
- Be careful what you tell your AI chatbot, by HAI Stanford University (15/10/2025)
- LLMs grooming or data voids? LLM-powered chatbot references to Kremlin disinformation reflect information gaps, not manipulation, by Misinfo Review (15/10/2025)
- Audience use and perceptions of AI assistants for news, by BBC (15/10/2025)
- The illusion of AI safety, by CCDH (14/10/2025)
- LLMs may be more vulnerable to data poisoning than we thought, by Turing Institute (09/10/2025)
- Generative AI and news report 2025: How people think about AI’s role in journalism and society, by Reuters (07/10/2025)
- We say you want a revolution: PRISONBREAK – An AI-enabled influence operation aimed at overthrowing the Iranian regime, by Citizen Lab (02/10/2025)
- From notes to bots: How generative AI impacts human-led fact-checking, by Yingxin Zhou and Jingbo Hou (30/09/2025)
- Revisionist future: Russia's assault on large language models, the distortion of collective memory, and the politics of eternity, by King's College London (29/09/2025)
- This podcast company went all in on AI, by Indicator (24/09/2025)
- AI models are using material from retracted scientific papers, by Technology Review (23/09/2025)
- Are bad incentives to blame for AI hallucinations?, by TechCrunch (07/09/2025)
- Psychological tricks can get AI to break the rules, by Wired (07/09/2025)
- Chatbots spread falsehoods 35% of the time, by NewsGuard (04/09/2025)
- MIGS launches new report “Wired for War: How Authoritarian States are Weaponizing AI against the West”, by MIGS Institute (02/09/2025)
- How safety measures failed when we asked AI chatbots to create false content, by International Journalists' Network (02/09/2025)
- BBC reveals web of spammers profiting from AI Holocaust images, by BBC (29/08/2025)
- One long sentence is all it takes to make LLMs misbehave, by The Register (26/08/2025)
- More powerful than lies: Taiwan's 2025 recall campaign and the rise of AI-generated mini clips, by Fact Link (20/08/2025)
- Scientists created an entire social network where every user is a bot, and something wild happened, by Futurism (19/08/2025)
- The AI created by the leader of Hazte Oír: content honouring Franco, disinformation and xenophobic messages, by El País (16/08/2025)
- The art of persuasion: how top AI chatbots can change your mind, by Financial Times (13/08/2025)
- AI revolution: Hackers increasingly taking advantage of GenAI tools to code malware and more, by Cyber Daily (04/08/2025)
- The era of AI propaganda has arrived, and America must act, by The New York Times (04/08/2025)
- AI-generated algorithmic virality, by AI Forensics (31/07/2025)
- British 999 call handler's voice cloned by Russian network using AI, by BBC (30/07/2025)
- AI chatbots often advise women to ask for lower pay than men: new study, by Women Agenda (29/07/2025)
- Iran-Israel AI war propaganda Is a warning to the world, by Carnegie Endowment (28/07/2025)
- Trump-Epstein AI fakes draw millions of views, by NewsGuard (25/07/2025)
- Chinese AI Models Register a 60 Percent Fail Rate in NewsGuard Audit of Pro-China Claims, by NewsGuard (25/07/2025)
- AI ‘Nudify’ websites are raking in millions of dollars, by Wired (14/07/2025)
- Bad actors are grooming LLMs to produce falsehoods, by The American Sunlight Project (11/07/2025)
- Microsoft shuts down 3,000 email accounts created by North Korean IT workers, by The Record (03/07/2025)
- Putin is weaponising AI to target Brits with disinformation campaign in new digital 'arms race', experts warn, by Daily Mail (01/07/2025)
- Q2 2025 Deepfake threat intelligence report, by Resemble.AI (01/07/2025)
- AI chatbots could spread ‘fake news’ with serious health consequences, by UniSA (30/06/2025)
- Russia, AI and the future of disinformation warfare, by RUSI (30/06/2025)
- Deciphering authenticity in the age of AI: how AI-generated disinformation images and AI detection tools influence judgements of authenticity, by Springer Nature Link (29/06/2025)
- AI is starting to wear down democracy, by The New York Times (26/06/2025)
- Operation Overload: An AI fuelled escalation of the Kremlin-linked propaganda effort, by CheckFirst (26/06/2025)
- KAIST develops AI comment detection technology to combat online manipulation in Korea, by Chosun Biz (24/06/2025)
- Grok struggles with fact-checking amid Israel-Iran war, by DFRLab (24/06/2025)
- Why do some language models fake alignment while others don’t?, by arXiv (22/06/2025)
- Disrupting malicious uses of AI: June 2025, by OpenAI (05/06/2025)
- Leaked files reveal how China is using AI to erase the history of the Tiananmen Square massacre, by ABC (02/06/2025)
- Hey chatbot, is this true? AI 'factchecks' sow misinformation, by France 24 (02/06/2025)
- Generative AI used to copy and clone French news media in French-speaking Africa, by Reporters Without Borders (02/06/2025)
- TRIED: Truly Innovative and Effective AI Detection Benchmark, by WITNESS (30/05/2025)
- Weaponized storytelling: How AI is helping researchers sniff out disinformation campaigns, by The Conversation & Florida International University (29/05/2025)
- A weaponized AI chatbot Is flooding Canadian City Councils with climate misinformation, by DeSmog (28/05/2025)
- Just as humans need vaccines, so do models: Model immunization to combat falsehoods, by Shaina Raza, et al. 2025 (23/05/2025)
- On the conversational persuasiveness of GPT-4, by Nature (19/05/2025)
- The new wave of Russian disinformation blogs, by UK Defence Journal (18/05/2025)
- AI job recruitment tools could 'enable discrimination' against marginalised groups, research finds, by ABC News (07/05/2025)
- Synthetic propaganda, by Marcus Boesch (05/05/2025)
- How Russia is using Gaelic and AI to peddle disinformation in Scotland, by The Times (03/05/2025)
- Why does AI hinder democratization?, by PNAS (03/05/2025)
- Pro-Russian influence operation targeting Australia in lead-up to election with attempt to 'poison' AI chatbots, by ABC (02/05/2025)
- Disasters and disinformation: AI and the Myanmar 7.7 Magnitude Earthquake, by RSIS (01/05/2025)
- Generative AI in electoral campaigns: Mapping global patterns, by IPIE (01/05/2025)
- Deepfakes just got even harder to detect: Now they have heartbeats, by BBC (30/04/2025)
- Americans largely foresee AI having negative effects on news and journalists, by Pew Research Center (28/04/2025)
- Operating multi-client influence networks across platforms, by Anthropic (23/04/2025)
- AI is inherently ageist. That’s not just unethical – it can be costly for workers and businesses, by The Conversation (22/04/2025)
- Values in the wild: Discovering and analyzing values in real-world language model interactions, by Anthropic (21/04/2025)
- False face: Unit 42 demonstrates the alarming ease of synthetic identity creation, by Unit 42 (21/04/2025)
- Russian propaganda campaign targets France with AI-fabricated scandals, drawing 55 million views on social media, by Newsguard (17/04/2025)
- OpenAI’s new reasoning AI models hallucinate more, by Tech Crunch (17/04/2025)
- Russia’s use of genAI in disinformation and cyber influence: Strategy, use cases and future expectations, by CRC (13/04/2025)
- LLMs pass the Turing Test. But that doesn’t mean AI is now as smart as humans, by The Conversation (08/04/2025)
- What we learned from tracking AI use in global elections, by Rest of World (08/04/2025)
- Emotional prompting amplifies disinformation generation in AI large language models, by Frontiers (07/04/2025)
- AI Index 2025: State of AI in 10 Charts, by HAI Stanford University (07/04/2025)
- OpenAI’s Sora Is plagued by sexist, racist, and ableist biases, by Wired (23/03/2025)
- AI’s answers on China differ depending on the language, analysis finds, by Tech Crunch (20/03/2025)
- Users turning to ChatGPT for news may find misinformation in responses, by Logically Facts (18/03/2025)
- Deepfake detectors vulnerable ahead of election, by InnovationAus (13/03/2025)
- Russia-linked Pravda network cited on Wikipedia, LLMs, and X, by DFRLab (12/03/2025)
- Urgent action is needed to secure the UK’s AI research ecosystem against hostile state threats, by The Alan Turing Institute (07/03/2025)
- A well-funded Moscow-based global ‘news’ network has infected Western artificial intelligence tools worldwide with Russian propaganda, by Newsguard (06/03/2025)
- Chinese AI video generators unleash a flood of new nonconsensual porn, by 404 Media (06/03/2025)
- AI search has a citation problem, by Columbia Journalism Review (06/03/2025)
- An AI slop "science" site has been beating real publications in Google results by publishing fake images of SpaceX Rockets, by Futurism (06/03/2025)
- Character flaws, by Graphika (05/03/2025)
- Slopaganda: The interaction between propaganda and generative AI, by Michał Klincewicz, Mark Alfano, Amir Ebrahimi Fard (03/03/2025)
- Hybrid threats and the amplifying power of AI: Five strategic scenarios, by Alto Intelligence (01/03/2025)
- Towards a common reporting framework for AI incidents, by OECD (28/02/2025)
- Microsoft outs hackers behind tools to bypass generative AI guardrails, by Bloomberg (27/02/2025)
- The smarter AI gets, the more it starts cheating when it's losing, by The Byte (22/02/2025)
- Disrupting malicious uses of AI, by OpenAI (21/02/2025)
- Deepfake threat: Only 0.1% can spot AI-generated fakes, by Security Brief (19/02/2025)
- Grok’s responses to questions on the German elections were mostly accurate and relied heavily on media sources, by Reuters Institute (19/02/2025)
- How 35 YouTube channels spread disinformation using AI about Spanish and European politics, by Maldita (14/02/2025)
- Inconsistent and unreliable: Chatbots provide inaccurate information on German elections, by Democracy Reporting International (12/02/2025)
- Representation of BBC News content in AI assistants, by BBC (11/02/2025)
- An adviser to Elon Musk’s xAI has a way to make AI more like Donald Trump, by Wired (11/02/2025)
- Red-teaming in the public interest, by Data & Society (09/02/2025)
- AI misinformation monitor of leading AI chatbots multilingual edition, by Newsguard (07/02/2025)
- Challenges and opportunities of AI in the fight against information manipulation, by VIGNIUM (07/02/2025)
- The use of artificial intelligence in counter-disinformation: a world wide (web) mapping, by Frontiers (07/02/2025)
- Search Google Maps with the help of AI, by Digital Digging (06/02/2025)
- Right-wing, female, fake ("Rechts, weiblich, Fake"), by Tagesschau (05/02/2025)
- Russian propaganda may be flooding AI models, by American Sunlight (01/02/2025)
- AI-Generated Disinformation in Europe and Africa, by KAS (31/01/2025)
- Scammers are creating fake news videos to blackmail victims, by Wired (27/01/2025)
- Russian propagandist turns his sights to German election, by Reuters (23/01/2025)
- Greenwashing and bothsidesism in AI chatbot answers about fossil fuels' role in climate change, by Global Witness (22/01/2025)
- Knowing less about AI makes people more open to having it in their lives, by The Conversation (20/01/2025)
- AI isn’t very good at history, by Tech Crunch (19/01/2025)
- A fact-checking tool based on Artificial Intelligence to fight disinformation on Telegram, by Universidad de Navarra (12/01/2025)
- Apple urged to withdraw 'out of control' AI news alerts, by BBC (07/01/2025)
- AI could usher in a golden age of research – but only if these cutting-edge tools aren’t restricted to a few major private companies, by The Conversation (06/01/2025)
- These defenders of democracy do not exist, by Conspirador Norteño (05/01/2025)
- An AI-Powered Audit: Do Chatbots Reproduce Political Pluralism?, by Democracy Reporting International (27/12/2024)
- ChatGPT search tool vulnerable to manipulation and deception, tests show, by The Guardian (24/12/2024)
- Predictions for AI in 2025: Collaborative agents, AI skepticism, and new risks, by Stanford University (23/12/2024)
- Bridging the data provenance gap across text, speech and video, by arXiv:2412.17847 (19/12/2024)
- Fake AI versions of world-renowned academics are spreading claims that Ukraine should surrender to Russia, by The Insider (13/12/2024)
- ElevenLabs used for Russian propaganda, by AI Tool Report (11/12/2024)
- AI enters Congress: Sexually explicit deepfakes target women lawmakers, by The 19th News (11/12/2024)
- Melodies of malice: Understanding how AI fuels the creation and spread of extremist music, by GNET (11/12/2024)
- Scottish Parliament TV at risk of deepfake attacks, by Infosecurity (10/12/2024)
- Revealed: bias found in AI system used to detect UK benefits fraud, by The Guardian (06/12/2024)
- Evaluating Large Language Models capability to launch fully automated spear phishing campaigns: Validated on human subjects, by arXiv (30/11/2024)
- Study of ChatGPT citations makes dismal reading for publishers, by Tech Crunch (29/11/2024)
- How ChatGPT Search (mis)represents publisher content, by Columbia Journalism Review (27/11/2024)
- Persuasive technologies in China: implications for the future of national security, by Australian Strategic Policy Institute (26/11/2024)
- "Operation Undercut" shows multifaceted nature of SDA’s influence operations, by Recorded Future (26/11/2024)
- Philippines, China clashes trigger money-making disinformation, by France24 (26/11/2024)
- Not even Spotify is safe from AI slop, by The Verge (14/11/2024)
- AI-enabled influence operations: Safeguarding future elections, by Cetas (13/11/2024)
- Disconnected from reality: American voters grapple with AI and flawed OSINT strategies, by ISD (07/11/2024)
- AI hallucinations caused artificial intelligence to falsely describe these people as criminals, by ABC News (03/11/2024)
- Exploiting Meta’s weaknesses, deceptive political ads thrived on Facebook and Instagram in run-up to election, by Pro Publica (31/10/2024)
- “Say it’s only fictional”: How the far-right is jailbreaking AI and what can be done about it, by ICCT (30/10/2024)
- How X users can earn thousands from US election misinformation and AI images, by BBC (30/10/2024)
- Hospitals use a transcription tool powered by an error-prone OpenAI model, by The Verge (28/10/2024)
- Researchers say an AI-powered transcription tool used in hospitals invents things no one ever said, by AP news (26/10/2024)
- GenAI and Democracy, by DSET (25/10/2024)
- Prebunking elections rumors: Artificial Intelligence assisted interventions increase confidence in American elections, by California Institute of Technology, Washington University in St. Louis, Cambridge University (24/10/2024)
- Large Language Models reflect the ideology of their creators, by arXiv (24/10/2024)
- Amazon Alexa users given false information attributed to Full Fact’s fact checks, by Full Fact (17/10/2024)
- Ensuring AI accountability: Auditing methods to mitigate the risks of Large Language Models, by Democracy Reporting International (14/10/2024)
- Pig butchering scams are going high tech, by Wired (12/10/2024)
- An update on disrupting deceptive uses of AI, by OpenAI (09/10/2024)
- Generative Artificial Intelligence and elections, by Center for Media Engagement (03/10/2024)
- Grok AI: A deepfake disinformation disaster for democracy, by CCDH (29/08/2024)
- OpenAI blocks AI propaganda, by AI Tool Report (19/08/2024)
- Disrupting deceptive uses of AI by covert influence operations, by OpenAI (30/05/2024)
- AI-pocalypse Now? Disinformation, AI, and the super election year, by MSC (01/04/2024)
Policy & regulations
A look at regulations and policies on AI and disinformation
- President Donald J. Trump Unveils National AI Legislative Framework, by The White House (20/03/2026)
- Meta will move away from human content moderators in favor of more AI, by Engadget (19/03/2026)
- UK to examine labelling AI content among wider copyright reforms, by Reuters (18/03/2026)
- AI labeling is still very much a work in progress, by Indicator (18/03/2026)
- The dictionary sues OpenAI, by TechCrunch (16/03/2026)
- EU set to ban AI nudification apps in wake of Grok scandal, by Politico (11/03/2026)
- Expanding likeness detection to civic leaders and journalists, by YouTube (10/03/2026)
- Board Calls for New Rules on Deceptive AI During Conflicts, by Oversight Board (10/03/2026)
- X to require AI labels on armed conflict videos from paid creators, citing ‘times of war’, by Engadget (03/03/2026)
- Our agreement with the Department of War, by OpenAI (02/03/2026)
- US Supreme Court declines to hear dispute over copyrights for AI-generated material, by Reuters (02/03/2026)
- Deepfake Financial Fraud: The Global Regulation of AI-Driven Scams, by Data & Society (02/03/2026)
- Vietnam AI law takes effect, first in Southeast Asia, by France24 (28/02/2026)
- Hegseth gives Anthropic until Friday to back down on AI safeguards, by Axios (24/02/2026)
- X is working on ‘Made with AI’ labels, by The Verge (23/02/2026)
- US military used Anthropic’s AI model Claude in Venezuela raid, report says, by The Guardian (14/02/2026)
- Pentagon threatens to cut off Anthropic in AI safeguards dispute, by Axios (14/02/2026)
- Anthropic Puts $20 Million Into a Super PAC Operation to Counter OpenAI, by The New York Times (12/02/2026)
- San Francisco's city attorney sued the operators of 22 AI nudifiers. He wants others to step up, too, by Indicator (11/02/2026)
- Meta must have facial recognition measures for notable Facebook users in S’pore or risk $1m fine, by The Straits Times (29/01/2026)
- YouTubers sue Snap for alleged copyright infringement in training its AI models, by TechCrunch (26/01/2026)
- DRI’s Monitoring of the AI Act Implementation, by Democracy Reporting International (26/01/2026)
- From the CEO: What’s coming to YouTube in 2026, by YouTube (21/01/2026)
- Our approach to age prediction, by OpenAI (20/01/2026)
- Right-wing pundits suddenly hate an AI bill. Are they getting paid to kill it?, by Model Republic (17/01/2026)
- EU should ban AI nudification apps in wake of Grok scandal, say lawmakers, by Politico (15/01/2026)
- Jailed Chinese AI chatbot developers appeal in landmark pornography case, by SCMP (15/01/2026)
- Spain Draws the Line Against AI Deepfakes With Sweeping New Image Laws, by Technology.org (14/01/2026)
- Grok was finally updated to stop undressing women and children, X Safety says, by Ars Technica (14/01/2026)
- “Grok’d”: Five emerging lessons on limiting abuse of AI image generation, by Centre for Information Resilience (14/01/2026)
- Senate moves to let victims of sexually explicit deepfakes sue for damages, by 19th News (13/01/2026)
- Grok turns off image generator for most users after outcry over sexualised AI imagery, The Guardian (09/01/2026)
- Italy closes probe into DeepSeek after commitments to warn of AI 'hallucination' risks, by Reuters (05/01/2026)
- Poland urges Brussels to probe TikTok over AI-generated content, by Reuters (30/12/2025)
- You can now verify Google AI-generated videos in the Gemini app, by Google (18/12/2025)
- When AI models can continually learn, will our regulations be able to keep up?, by Lawfare Media (18/12/2025)
- First draft Code of Practice on transparency of AI-generated content, by European Commission (17/12/2025)
- UK to push for nudity-blocking software on devices to protect children, by Financial Times (15/12/2025)
- States take the lead policing AI in health care, by Axios (13/12/2025)
- Gavin Newsom pushes back on Trump AI executive order preempting state laws, by The Guardian (13/12/2025)
- What to know about Trump’s executive order to curtail state AI regulations, by AP News (12/12/2025)
- Image of Trump using a walker Is an AI fake, by Newsguard (12/12/2025)
- White House issues federal agency guidance against "woke" AI, by Axios (11/12/2025)
- A pay-to-scrape AI licensing standard is now official, by The Verge (10/12/2025)
- Big Tech warned over AI 'delusional' outputs by US attorneys general, by Reuters (10/12/2025)
- AI Slop Is ruining Reddit for everyone, by Wired (05/12/2025)
- South Korea to require advertisers to label AI-generated ads, by AP News (01/12/2025)
- From 'Googling' to 'Asking ChatGPT': Governing AI Search, by AI Forensics (01/12/2025)
- The race to regulate AI has sparked a federal vs. state showdown, by Tech Crunch (28/11/2025)
- New legislation targets scammers that use AI to deceive, by Cyberscoop (26/11/2025)
- Australia to establish AI safety institute, by Innovation AUS (25/11/2025)
- Manipulated video should have high-risk label, by The Oversight Board (25/11/2025)
- More ways to spot, shape and understand AI-generated content, by TikTok (24/11/2025)
- Victims of AI deepfakes could sue for emotional damages under new bill, by ABC (24/11/2025)
- La Presse sues OpenAI for copyright infringement, by AP News (24/11/2025)
- White House pauses executive order that would seek to preempt state laws on AI, sources say, by Reuters (21/11/2025)
- How we’re bringing AI image verification to the Gemini app, by Google (20/11/2025)
- EU to delay 'high risk' AI rules until 2027 after Big Tech pushback, by Reuters (19/11/2025)
- UK seeking to curb AI child sex abuse imagery with tougher testing, by BBC (12/11/2025)
- ChatGPT violated copyright law by ‘learning’ from song lyrics, German court rules, by The Guardian (11/11/2025)
- Strengthening public interest media in the age of GenAI, by Medium (11/11/2025)
- EU could water down AI Act amid pressure from Trump and big tech, by The Guardian (07/11/2025)
- China's Xi pushes for global AI body at APEC in counter to US, by Reuters (01/11/2025)
- Denmark eyes new law to protect citizens from AI deepfakes, by AP News (01/11/2025)
- Watchdog group Public Citizen demands OpenAI withdraw AI video app Sora over deepfake dangers, by AP News (01/11/2025)
- Tech platforms promised to label AI content. They're not delivering, by Indicator (23/10/2025)
- India proposes strict rules to label AI content citing growing risks, by Reuters (22/10/2025)
- YouTube’s AI ‘likeness detection’ tool is searching for deepfakes of popular creators, by The Verge (21/10/2025)
- Meta to give teens' parents more control after criticism over flirty AI chatbots, by Reuters (17/10/2025)
- How we bypassed Sora 2's identity safeguards in under 24 hours, by Reality Defender (03/10/2025)
- The Indicator guide to AI labels: We’ve collected in one place how and when major platforms label AI content, by Indicator (02/10/2025)
- Meta greenlights Facebook, Instagram ads based on your AI chats, by CNBC (01/10/2025)
- Gavin Newsom signs first-in-nation AI safety law, by Politico (29/09/2025)
- U.S. rejects international AI oversight at U.N. General Assembly, by NBC News (27/09/2025)
- Meta launches super PAC to fight AI regulation, by Axios (23/09/2025)
- German media industry attacks Google’s AI Overviews, by Mind Media (22/09/2025)
- A ‘global call for AI red lines’ sounds the alarm about the lack of international AI policy, by The Verge (22/09/2025)
- Google will use hashes to find and remove nonconsensual intimate imagery from Search, by The Verge (17/09/2025)
- AI agents & global governance: Analyzing foundational legal, policy, and accountability tools, by Partnership on AI (16/09/2025)
- Rolling Stone publisher sues Google over AI summaries, by The Wall Street Journal (13/09/2025)
- US Senator Cruz proposes AI 'sandbox' to ease regulations on tech companies, by Reuters (10/09/2025)
- DeepSeek sheds light on data collection for AI training and warns of ‘hallucination’ risks, by SCMP (03/09/2025)
- Australia moves to stamp out ‘nudify’ and stalking apps, by Aljazeera (02/09/2025)
- OpenAI's ChatGPT to implement parental controls after teen's suicide, by ABC (02/09/2025)
- China’s social media platforms rush to abide by AI-generated content labelling law, by SCMP (01/09/2025)
- Meta to stop its AI chatbots from talking to teens about suicide, by BBC (01/09/2025)
- AI Is replacing online moderators, but it's bad at the job, by Bloomberg (22/08/2025)
- TikTok to lay off hundreds of UK moderators as it shifts to AI, by The Financial Times (22/08/2025)
- Texas attorney general accuses Meta, Character.AI of misleading kids with mental health claims, by TechCrunch (18/08/2025)
- Meta’s AI rules have let bots hold ‘sensual’ chats with kids, offer false medical info, by Reuters (14/08/2025)
- Google to sign EU's AI code of practice despite concerns, by Reuters (30/07/2025)
- ‘Global approach’ to AI regulation urgently needed, UN tech chief says, by SCMP (27/07/2025)
- Trump’s order to block ‘woke’ AI in government encourages tech giants to censor their chatbots, by AP News (25/07/2025)
- White House unveils America’s AI action plan, by The White House (23/07/2025)
- Trump’s AI action plan Is a crusade against ‘bias’ and regulation, by Wired (23/07/2025)
- AI models with systemic risks given pointers on how to comply with EU AI rules, by Reuters (18/07/2025)
- Meta says it won’t sign Europe AI agreement, calling it an overreach that will stunt growth, by CNBC (18/07/2025)
- White House Prepares Executive Order Targeting ‘Woke AI’, by The Wall Street Journal (17/07/2025)
- Grace Tame urges government to outlaw AI tools used to generate child sexual abuse material, by ABC (16/07/2025)
- EU rolls out AI code with broad copyright, transparency rules, by Bloomberg (10/07/2025)
- Promoting accountability for AI misinformation: Intermediary Digital Liability, by Global Voices (08/07/2025)
- Senate strikes AI regulatory ban from GOP bill after uproar from the states, by AP News (02/07/2025)
- US Senate strikes AI regulation ban from Trump megabill, by Reuters (01/07/2025)
- DeepSeek faces ban from Apple, Google app stores in Germany, by Reuters (27/06/2025)
- Denmark to tackle deepfakes by giving people copyright to their own features, by The Guardian (27/06/2025)
- US lawmakers introduce bill to bar Chinese AI in US government agencies, by Reuters (25/06/2025)
- Asian countries are pioneers in balancing AI regulation and innovation, by Nikkei Asia (25/06/2025)
- Federal court says copyrighted books are fair use for AI training, by The Washington Post (25/06/2025)
- Swedish PM calls for a pause of the EU’s AI rules, by Politico (23/06/2025)
- The State of Deepfake Regulations in 2025: What businesses need to know, by Reality Defender (18/06/2025)
- Nvidia's pitch for sovereign AI resonates with EU leaders, by Reuters (16/06/2025)
- EU’s waffle on artificial intelligence law creates huge headache, by Politico (16/06/2025)
- New York passes a bill to prevent AI-fueled disasters, by Tech Crunch (13/06/2025)
- EU could postpone flagship AI rules, tech chief says, by Politico (06/06/2025)
- X’s new policy prevents companies from using posts to ‘fine-tune or train’ AI models, by The Verge (05/06/2025)
- Google’s SynthID is the latest tool for catching AI-made content. What is AI ‘watermarking’ and does it work?, by The Conversation (03/06/2025)
- Meta reportedly replacing human risk assessors with AI, by Mashable (01/06/2025)
- Governing AI and the democratisation of governance, by Hintz, A. Dialogues on Digital Society (30/05/2025)
- The coming AI backlash will shape future regulation, by Brookings (27/05/2025)
- Nick Clegg says asking artists for use permission would ‘kill’ the AI industry, by The Verge (26/05/2025)
- German rights group fails in bid to stop Meta's data use for AI, by Reuters (23/05/2025)
- President Trump signs TAKE IT DOWN Act into Law, by The White House (19/05/2025)
- Tech workers, teachers, artists oppose AI preemption measure, by Demand Progress (19/05/2025)
- OpenAI Launches AI Safety Evaluations Hub Amid GPT-4o Controversy: Transparency or PR Strategy?, by Medium (15/05/2025)
- Trump administration fires top copyright official days after firing Librarian of Congress, by AP (12/05/2025)
- Who owns AI fraud? How to build a deepfake response framework, by Reality Defender (12/05/2025)
- Trump fires director of U.S. Copyright Office, sources say, by CBS News (10/05/2025)
- Introducing Gen AI labels: Pinterest is taking a new step in transparency, by Pinterest (30/04/2025)
- House approves Take It Down Act, sending bill on intimate images to Trump’s desk, by The 19th News (28/04/2025)
- Musk’s X sues to block Minnesota ‘deepfake’ law over free speech concerns, by CNBC (23/04/2025)
- Google used AI to suspend over 39M ad accounts suspected of fraud, by Tech Crunch (16/04/2025)
- OpenAI updated its safety framework—but no longer sees mass manipulation and disinformation as a critical risk, by Fortune (16/04/2025)
- ChatGPT now lets users create fake images of politicians. We stress-tested it, by CBC (13/04/2025)
- YouTube supports the NO FAKES Act: Protecting creators and viewers in the age of AI, by YouTube (09/04/2025)
- The Dangers of AI Sovereignty, by Lawfare (07/04/2025)
- Google is shipping Gemini models faster than its AI safety reports, by Tech Crunch (03/04/2025)
- UK needs to relax AI laws or risk transatlantic ties, thinktank warns, by The Guardian (02/04/2025)
- Protecting the polls in the era of AI and deepfakes, by Microsoft (01/04/2025)
- OpenAI peels back ChatGPT’s safeguards around image creation, by Tech Crunch (28/03/2025)
- Meta to seek disclosure on political ads that use AI ahead of Canada elections, by Reuters (20/03/2025)
- Vance outlines an America first, America only AI agenda, by Lawfare (19/03/2025)
- China mandates labels for all AI-generated content in fresh push against fraud, fake news, by SCMP (15/03/2025)
- Under Trump, AI scientists are told to remove ‘ideological bias’ from powerful models, by Wired (14/03/2025)
- OpenAI urges Trump administration to remove guardrails for the industry, by CNBC (13/03/2025)
- Spain to impose massive fines for not labelling AI-generated content, by Reuters (11/03/2025)
- The AI regulation debate in China is on a whole different level, by Raymond Sun (10/03/2025)
- Meta brings its anti-scam facial-recognition test to the UK and Europe, by Tech Crunch (04/03/2025)
- Creative industries protest against UK plan about AI and copyright, by Financial Times (27/02/2025)
- Terms of (dis)service: comparing misinformation policies in text-generative AI chatbots, by EU DisinfoLab (27/02/2025)
- UK delays plans to regulate AI as ministers seek to align with Trump administration, by The Guardian (24/02/2025)
- Erotica, gore and racism: how America’s war on ‘ideological bias’ is letting AI off the leash, by The Conversation (24/02/2025)
- Artificial intelligence and intellectual property: Navigating the challenges of data scraping, by OECD.AI (14/02/2025)
- OpenAI removes certain content warnings from ChatGPT, by Tech Crunch (13/02/2025)
- Tech companies pledged to protect elections from AI. Here’s how they did, by Brennan Center (13/02/2025)
- The death of inclusive AI? Trump’s fight against diversity intensifies, by ANU Reporter (13/02/2025)
- JD Vance warns Europe to go easy on tech regulation in major AI speech, by Politico (11/02/2025)
- Donald Trump rolls back Biden-era AI regulation, sets stage for battles with US states, by CNN (09/02/2025)
- The Cambridge Handbook of the Law, Ethics and Policy of Artificial Intelligence, by Cambridge University Press (06/02/2025)
- Living repository to foster learning and exchange on AI literacy, by European Commission (04/02/2025)
- China is scheduled to hold its "Two Sessions" this week, by Raymond Sun (04/02/2025)
- Meta says it may stop development of AI systems it deems too risky, by Tech Crunch (03/02/2025)
- The EU’s AI bans come with big loopholes for police, by Politico (03/02/2025)
- Frontier AI Framework, by Meta (03/02/2025)
- AI-generated child sex abuse images targeted with new laws, by BBC (02/02/2025)
- First international AI safety report published, by Computer Weekly (30/01/2025)
- Fighting deepfakes: what’s next after legislation?, by Australian Strategic Policy Institute (24/01/2025)
- Deepfake labels and detectors still don't work, by Faked Up (22/01/2025)
- The global struggle over how to regulate AI, by Rest of World (21/01/2025)
- Trump revokes Biden executive order on addressing AI risks, by Reuters (21/01/2025)
- Feedback on the second draft of the general-purpose AI Code of Practice: Comments and recommendations, by University of Cambridge (17/01/2025)
- Civil society rallies for human rights as AI Act prohibitions deadline looms, by EuroActiv (16/01/2025)
- OpenAI wooed Democrats with calls for AI regulation. Now it must charm Trump, by The Washington Post (13/01/2025)
- British PM Keir Starmer outlines bid to become AI 'world leader', by ABC (13/01/2025)
- UK can be ‘AI sweet spot’: Starmer’s tech minister on regulation, Musk, and free speech, by The Guardian (11/01/2025)
- Britain to make sexually explicit 'deepfakes' a crime, by Reuters (07/01/2025)
- Partnering for gender-responsive AI, by UN (01/01/2025)
- Copyright and Artificial Intelligence Part 2: Copyrightability, by United States Copyright Office (01/01/2025)
- Trump announces new tech policy picks for his second term, by The Verge (23/12/2024)
- Sriram Krishnan named Trump’s senior policy advisor for AI, by Tech Crunch (22/12/2024)
- Google relaxes AI usage rules, by AI Tool Report (18/12/2024)
- Meta debuts a tool for watermarking AI-generated videos, by Tech Crunch (12/12/2024)
- New research centre supporting safe and responsible AI, by Minister for Industry and Science, Australia (09/12/2024)
- Inside Britain’s plan to save the world from runaway AI, by Politico (05/12/2024)
- Rumble Video Platform sues California over anti-deepfake law, by Bloomberg (29/11/2024)
- Trump 2.0: Clash of the tech bros, by Fortune (26/11/2024)
- ChatGPT, Meta and Google generative AI should be designated 'high-risk' under new laws, bipartisan committee recommends, by ABC News (26/11/2024)
- Case closed on "nude" AI images of girls. Why police are not charging man who made them, by Pensacola News Journal (22/11/2024)
- The EU Code of Practice for General-purpose AI: Key takeaways from the First Draft, by CSIS (21/11/2024)
- What Donald Trump’s Cabinet picks mean for AI, by GZero Media (19/11/2024)
- Musk sues California over deepfake law, by AI Tool Report (18/11/2024)
- EU AI Act: Draft guidance for general purpose AIs shows first steps for Big AI to comply, by TechCrunch (14/11/2024)
- Musk to be Trump's AI advisor?, by AI Tool Report (12/11/2024)
- What Trump’s victory could mean for AI regulation, by Tech Crunch (06/11/2024)
- How AI could still impact the US election, by Gzero Media (05/11/2024)
- Reducing risks posed by synthetic content, by National Institute of Standards and Technology (01/11/2024)
- Google Photos will soon show you if an image was edited with AI, by The Verge (24/10/2024)
- More transparency for AI edits in Google Photos, by Google (24/10/2024)
- Embedded GenAI on social media: Platform law meets AI law, by DSA Observatory (16/10/2024)
- California rejects AI safety bill, by AI Tool Report (30/09/2024)
- Council of Europe opens first ever global treaty on AI for signature, by Council of Europe (05/09/2024)
- Final Report - Governing AI for humanity, by UN (01/09/2024)
- United Nations Secretary-General’s video message for launch of the Final Report, by UN (01/09/2024)
- Platforms’ AI policy updates in 2024: Labelling as the silver bullet?, by EU DisinfoLab (01/07/2024)
- A real account of peep fakes, by Cornell University (15/04/2024)
- Governing AI agents, by Hebrew University of Jerusalem (02/04/2024)
Miscellaneous readings
Recommended reading on AI and disinformation
- Influence Campaign on TikTok Uses AI Videos to Boost Hungary’s Orbán Ahead of Crucial Elections, by Newsguard (20/03/2026)
- AI ‘expert’ exposed: Fake Kremlin-linked analyst planted stories in African media, by News24 (18/03/2026)
- OpenAI to sell AI to US agencies through Amazon cloud unit, by Reuters (17/03/2026)
- How AI Content Detection is Being Weaponized in the Iran War, by Tech Policy (17/03/2026)
- Who’s Whispering in Your Chatbot’s Ear?, by Project Syndicate (17/03/2026)
- I'm suing Grammarly over its paid AI feature that presented editing suggestions as if they came from me - and many other writers and journalists - without consent, by Julia Angwin LinkedIn post (16/03/2026)
- Is this product 'human-made'? The race to establish an AI-free logo, by BBC (16/03/2026)
- Iranian Missile Dedicated to Epstein Victims? NewsGuard’s False Claim of the Week, by Newsguard (13/03/2026)
- Manipulated photos discovered in reports on Iran, by Spiegel (11/03/2026)
- Grammarly Is Facing a Class Action Lawsuit Over Its AI ‘Expert Review’ Feature, by Wired (11/03/2026)
- Grammarly Is Pulling Down Its Explosively Controversial Feature That Impersonates Writers Without Their Permission, by Futurism (11/03/2026)
- Mark Zuckerberg buys social network for AI bots, by Telegraph (10/03/2026)
- A real doctor used a pseudonym to post viral health AI slop and sell books, by Indicator (09/03/2026)
- AI-generated Iran war videos surge as creators use new tech to cash in, by BBC Verify (07/03/2026)
- Google faces first lawsuit alleging its AI chatbot encouraged a Florida man to commit suicide, by CBS News (04/03/2026)
- How the experts figure out what’s real in the age of deepfakes, by The Verge (03/03/2026)
- U.S. Strikes in Middle East Use Anthropic, Hours After Trump Ban, by The Wall Street Journal (28/02/2026)
- The end of accountability: How autonomous AI could supercharge climate disinformation, by National Observer (27/02/2026)
- How A.I.-Generated Videos Are Distorting Your Child’s YouTube Feed, by The New York Times (26/02/2026)
- Disrupting malicious uses of AI, by OpenAI (25/02/2026)
- Hacker Used Anthropic’s Claude to Steal Mexican Data Trove, by Bloomberg (25/02/2026)
- These Tools Say They Can Spot A.I. Fakes. Do They Really Work?, by The New York Times (25/02/2026)
- AIs can’t stop recommending nuclear strikes in war game simulations, by New Scientist (25/02/2026)
- Meta’s AI sending ‘junk’ tips to DoJ, US child abuse investigators say, by The Guardian (25/02/2026)
- Google sent an AI-generated push alert that included a racial slur, by Engadget (24/02/2026)
- How AI resurrects racist stereotypes and disinformation — and why fact‑checking isn’t enough, by The Conversation (22/02/2026)
- Words Without Consequence, by The Atlantic (15/02/2026)
- ZDF removes New York correspondent with immediate effect, by Blue News (12/02/2026)
- Albania’s government faces legal spat with actor that AI minister was modeled on, by Politico (11/02/2026)
- Moltbook, a Social Network Platform for AI, Reportedly Had Humans Influencing It, by Complex (10/02/2026)
- How AI Will Smith eats spaghetti in 2026, by Mashable (10/02/2026)
- OpenAI fake Super Bowl ad hoax spreads across social media, by Tech Buzz (09/02/2026)
- ChatGPT’s cheapest options now show you ads, by The Verge (09/02/2026)
- OpenAI’s supposedly ‘leaked’ Super Bowl ad with ear buds and a shiny orb was a hoax, by The Verge (09/02/2026)
- Made by Google. Missed by Google — except for one tool, buried in the garden shed, by Digital Digging (09/02/2026)
- Building Civic Strength for an AI Era, by Data & Society (09/02/2026)
- AI Doesn’t Reduce Work—It Intensifies It, by Harvard Business Review (09/02/2026)
- Battle of the chatbots: Anthropic and OpenAI go head-to-head over ads in their AI products, by The Guardian (07/02/2026)
- From churches to chatbots: How AI is fusing with religion, by Reuters (07/02/2026)
- AI Tools Willingly Generate Fake Epstein Images, by NewsGuard (05/02/2026)
- Britain to work with Microsoft to build deepfake detection system, by Reuters (05/02/2026)
- "Collaborative Notes", by X (05/02/2026)
- X's latest Community Notes experiment allows AI to write the first draft, by Engadget (05/02/2026)
- Does AI already have human-level intelligence? The evidence is clear, by Nature (02/02/2026)
- What is Moltbook? The strange new social media site for AI bots, by The Guardian (02/02/2026)
- The Global Risks Report 2026: Why AI is the Fastest Growing Threat, by Mea (02/02/2026)
- AI agents now have their own Reddit-style social network, and it’s getting weird fast, by Ars Technica (30/01/2026)
- Pro-China AI Fakes a Taiwanese Accent, by NewsGuard (29/01/2026)
- YouTube’s top AI slop channels are disappearing, by The Verge (28/01/2026)
- Inside an AI start-up’s plan to scan and dispose of millions of books, by The Washington Post (27/01/2026)
- Albania Created an ‘A.I. Minister’ to Curb Corruption. Then Its Developers Were Accused of Graft, by The New York Times (27/01/2026)
- Commission investigates Grok and X's recommender systems under the Digital Services Act, by European Commission (26/01/2026)
- Latest ChatGPT model uses Elon Musk’s Grokipedia as source, tests reveal, by The Guardian (24/01/2026)
- AI Fools Itself: Top Chatbots Don’t Recognize AI-Generated Videos, by Newsguard (22/01/2026)
- Abundance vs. Scarcity: Who Controls the Internet After AI?, by Tech Policy (22/01/2026)
- Moxie Marlinspike has a privacy-conscious alternative to ChatGPT, by TechCrunch (18/01/2026)
- Google’s AI Insists That Next Year Is Not 2027, by Futurism (17/01/2026)
- Our approach to advertising and expanding access to ChatGPT, by OpenAI (16/01/2026)
- Ads Are Coming to ChatGPT. Here’s How They’ll Work, by Wired (16/01/2026)
- Wikimedia announces AI partners including Meta and Microsoft, by Engadget (15/01/2026)
- 'Digital desecrations’: when deepfakes are used to mock tragic deaths and what platforms should do about it, by Indicator (14/01/2026)
- AI Videos Fill Void Amid Iran Internet Blackout, by Newsguard (14/01/2026)
- Matthew McConaughey Trademarks Himself to Fight AI Misuse, by The Wall Street Journal (13/01/2026)
- ‘Dangerous and alarming’: Google removes some of its AI summaries after users’ health put at risk, by The Guardian (11/01/2026)
- Grok Deepfaked Renée Nicole Good’s Body Into a Bikini, by Mother Jones (08/01/2026)
- We’re talking about AI all wrong. Here’s how we can fix the narrative, by The Conversation (07/01/2026)
- Google and Character.AI negotiate first major settlements in teen chatbot death cases, by TechCrunch (07/01/2026)
- Introducing ChatGPT Health, by OpenAI (07/01/2026)
- “April Fools in December”: Hundreds of people waited at the Brooklyn Bridge for fireworks that never came, all because of AI slop, by Daily Dot (06/01/2026)
- That viral Reddit post about food delivery apps was an AI scam, by The Verge (05/01/2026)
- Grok's bikini-clad images raise legal red flags, by Axios (05/01/2026)
- Grok under fire for generating sexually explicit deepfakes of women and minors, by Euronews (05/01/2026)
- Phony visuals of Maduro’s real capture, by NewsGuard (05/01/2026)
- AI deepfakes are impersonating pastors to try to scam their congregations, by Wired (05/01/2026)
- Google’s and OpenAI’s chatbots can strip women in photos down to bikinis, by Wired (23/12/2025)
- Scammers in China are using AI-generated images to get refunds, by Wired (19/12/2025)
- Hack reveals the a16z-backed phone farm flooding TikTok With AI influencers, by 404 Media (17/12/2025)
- Adobe sued for allegedly misusing authors' work in AI training, by Reuters (17/12/2025)
- AI-generated images of Bondi gunman used to spread false information, by ABC (17/12/2025)
- Bondi lie peddled by Elon Musk’s AI chatbot shows the future of our AI-poisoned information ecosystem, by Crikey (16/12/2025)
- Racist and antisemitic false information spreads online following Bondi Beach terrorism attack, by ABC (16/12/2025)
- AI models are perfecting their hacking skills, by Axios (16/12/2025)
- The parties have agreed not to use deepfakes against their opponents (in Italian), by Pagella Politica (16/12/2025)
- 35 notable AI fails from 2025, by Indicator (15/12/2025)
- The 5 fake Bondi attack stories spread by AI and social media, by AFR (15/12/2025)
- Militant groups are experimenting with AI, and the risks are expected to grow, by AP (15/12/2025)
- Grok is glitching and spewing misinformation about the Bondi beach shooting, by Gizmodo (14/12/2025)
- Iterate through: Why the Washington Post launched an error-ridden AI product, by Semafor (14/12/2025)
- AI toys for kids talk about sex and issue Chinese Communist Party talking points, tests show, by NBC News (13/12/2025)
- Protecting truth in the era of AI mediation, by ASPI (12/12/2025)
- Instagram Is generating inaccurate SEO bait for your posts, by 404 Media (09/12/2025)
- Deepfakes of UK Prime Minister flood TikTok, by Newsguard (09/12/2025)
- UK intelligence warns AI 'prompt injection' attacks might never go away, by The Record (08/12/2025)
- Foreign states using AI videos to undermine support for Ukraine, says Yvette Cooper, by The Guardian (08/12/2025)
- Trains cancelled over fake bridge collapse image, by BBC (05/12/2025)
- Nonconsensual nude generators had another banner year. What will it take to defeat them?, by Indicator (04/12/2025)
- OpenAI has trained its LLM to confess to bad behavior, by Technology Review (03/12/2025)
- Why ads on ChatGPT are more terrifying than you think, by The Algorithmic Bridge (02/12/2025)
- ‘AI safety’ needs to mean safety from authoritarian abuse, by ASPI (02/12/2025)
- The party’s AI: How China’s new AI systems are reshaping human rights, by ASPI (01/12/2025)
- Leak confirms OpenAI is preparing ads on ChatGPT for public roll out, by Bleeping Computer (29/11/2025)
- SAP outlines new approach to European AI and cloud sovereignty, by AI News (27/11/2025)
- AI Slop recipes are taking over the Internet, and Thanksgiving dinner, by Bloomberg (25/11/2025)
- Meet the AI workers who tell their friends and family to stay away from AI, by The Guardian (22/11/2025)
- Elon Musk could 'drink piss better than any human in history,' Grok says, by 404 Media (20/11/2025)
- AI is supercharging disinformation warfare, by Foreign Affairs (19/11/2025)
- An AI bot is now the top contributor to Community Notes on X, by Indicator (18/11/2025)
- A massive Cloudflare outage brought down X, ChatGPT, and even Downdetector, by The Verge (18/11/2025)
- One in two misusing AI in workplace, by The Australian (17/11/2025)
- Lost in the plot: how would-be authors were fooled by AI staff and virtual offices in suspected global publishing scam, by The Guardian (16/11/2025)
- 13 November: Bataclan survivors face far-right fake news relayed by X’s artificial intelligence (in French), by Le Parisien (15/11/2025)
- Anthropic says its latest model scores a 94% political ‘even-handedness’ rating, by Fortune (14/11/2025)
- AI firm claims Chinese spies used its tech to automate cyber attacks, by BBC (14/11/2025)
- Researchers question Anthropic claim that AI-assisted attack was 90% autonomous, by Ars Technica (14/11/2025)
- China’s ‘autonomous’ AI-powered hacking campaign still required a ton of human work, by Cyberscoop (14/11/2025)
- X’s Grok claims Trump won the 2020 election, by Newsguard Reality Check (12/11/2025)
- Google accused in suit of using Gemini AI tool to snoop on users, by Bloomberg (12/11/2025)
- Maga + AI is not a recipe for stability, by Financial Times (10/11/2025)
- The Ukrainian soldier who cries because he is forced to go to war: how an AI video has gone viral in 13 languages and has millions of views on X, by Maldita.es (06/11/2025)
- Evasion attacks on LLMs – Countermeasures in practice, by Bundesamt für Sicherheit in der Informationstechnik (06/11/2025)
- arXiv changes rules after getting spammed with AI-generated 'research' papers, by 404 Media (03/11/2025)
- How A.I. can use your personal data to hurt your neighbor, by The New York Times (02/11/2025)
- A.I. is making death threats way more realistic, by The New York Times (31/10/2025)
- Artificial intelligence and the future of espionage, by ASPI (30/10/2025)
- AI browsers are a cybersecurity time bomb, by The Verge (30/10/2025)
- How to spot fake AI-written press releases, by Press Gazette (30/10/2025)
- AFP developing AI tool to decode gen Z slang amid warning about ‘crimefluencers’ hunting girls, by The Guardian (29/10/2025)
- AI 'hallucinations' could prove real problem for owner of fire-ravaged Vancouver property, by CBC (28/10/2025)
- Teenagers struggle to tell if videos are real or fake as AI floods social media, by ABC (26/10/2025)
- AI-generated fact check on X is wrong: MSNBC’s ‘No kings’ footage is legit, by Newsguard (24/10/2025)
- US right-wing media figures, tech pioneers call for superintelligent AI ban, by Reuters (23/10/2025)
- ‘Do not trust your eyes’: AI generates surge in expense fraud, by Financial Times (23/10/2025)
- How Trump is using fake imagery to attack enemies and rouse supporters, by The New York Times (21/10/2025)
- Wikipedia says AI is hurting traffic, by Cyber Daily (21/10/2025)
- Minor sues over ClothOff AI that turns images into ‘hyperrealistic’ porn, by Mealeys (20/10/2025)
- AI video generators are now so good you can no longer trust your eyes, by The New York Times (09/10/2025)
- Russian hackers turn to AI as old tactics fail, Ukrainian CERT says, by The Record (08/10/2025)
- Coca-Cola, Bad Bunny, and the missing Super Bowl sponsorship, by Newsguard (07/10/2025)
- AI isn’t just rehashing the news, it’s inventing quotes from real people, by Newsguard (02/10/2025)
- Synthetic audio detectors put to the test, by DW (29/09/2025)
- Introducing YouTube Labs: Shape the future of AI on YouTube, by YouTube (26/09/2025)
- How AI and Wikipedia have sent vulnerable languages into a doom spiral, by Technology Review (25/09/2025)
- LinkedIn will use your data to train its AI unless you opt out now, by Malware Bytes Lab (25/09/2025)
- How Russia uses AI-driven bots on Telegram to meddle in Moldova’s elections, by Open Minds (24/09/2025)
- Inside Russia’s AI-driven disinformation machine shaping Moldova’s election, by EuroNews (23/09/2025)
- OpenAI admits AI hallucinations are mathematically inevitable, not just engineering flaws, by Computer World (18/09/2025)
- Can the Middle East fight unauthorized AI-generated content with trustworthy tech?, by Fast Company Middle East (17/09/2025)
- Russian State TV launches AI-generated news satire show, by 404 Media (17/09/2025)
- OpenAI is building a ChatGPT for teens, by Axios (16/09/2025)
- Russia’s Defense Ministry launches an AI-generated, anti-west news show, blurring the line between satire and propaganda, by NewsGuard (15/09/2025)
- Musk’s Grok AI bot falsely suggests police misrepresented footage of far-right rally in London, by The Guardian (14/09/2025)
- After Kirk assassination, AI ‘Fact Checks’ spread false claims, by NewsGuard (11/09/2025)
- Encyclopedia Britannica sues Perplexity over AI 'answer engine', by Reuters (11/09/2025)
- Albania appoints world’s first AI-made minister, by Politico (11/09/2025)
- Apple is teaching its AI to adapt to the Trump era, by Politico (09/09/2025)
- Is AI the new frontier of women’s oppression?, by Wired (09/09/2025)
- Fake celebrity chatbots sent risqué messages to teens on top AI app, by The Washington Post (06/09/2025)
- America's 'New Right' says AI threatens both US and China, by Asia Nikkei (03/09/2025)
- Is it safe to upload your photos to ChatGPT?, by The Wall Street Journal (03/09/2025)
- Gartner survey finds 53% of consumers distrust AI-powered search results, by Gartner (03/09/2025)
- Amazon’s AI book problem: fake authors flogging sloppy content, by The Australian (02/09/2025)
- AI ‘bikini interview’ videos flood internet, sparking sexism concerns, by SCMP (02/09/2025)
- How Elon Musk is remaking Grok in his image, by The New York Times (02/09/2025)
- Why AI labs struggle to stop chatbots talking to teenagers about suicide, by Financial Times (02/09/2025)
- A troubled man, his chatbot and a murder-suicide in Old Greenwich, by The Wall Street Journal (28/08/2025)
- YouTube secretly used AI to edit people's videos. The results could bend reality, by BBC (24/08/2025)
- ‘Crazy conspiracist’ and ‘unhinged comedian’: Grok’s AI persona prompts exposed, by TechCrunch (18/08/2025)
- Meta’s flirty AI chatbot invited a retiree to New York, by Reuters (14/08/2025)
- "I found four papers on Google Scholar “written” by me and my co-authors. Except we didn’t write them. They were AI-generated fake citations", by Liudmila Zavolokina (14/08/2025)
- Deepfake videos impersonating real doctors push false medical advice and treatments, by CBS News (14/08/2025)
- Artificial Intelligence and the orchestration of Palestinian life and death, by Tech Policy (12/08/2025)
- AI and misinformation in crosshairs of Labor’s review of its landslide election win, by The Guardian (12/08/2025)
- Elon Musk's AI accused of making explicit AI Taylor Swift videos, by BBC (09/08/2025)
- Tailored psychological warfare: a deepfake video of Hong Kong activists, by ASPI (07/08/2025)
- China turns to AI in information warfare, by The New York Times (06/08/2025)
- How AI can unlock public wisdom and revitalize democratic governance, by Carnegie Endowment (22/07/2025)
- Elon Musk to build child-friendly AI model ‘Baby Grok’ despite past controversies, by EuroNews (21/07/2025)
- AI chatbot website with millions of users gives child rape advice, by Crikey (16/07/2025)
- Fed up with ChatGPT, Latin America is building its own, by Rest of World (15/07/2025)
- AI chatbot ‘MechaHitler’ could be making content considered violent extremism, expert witness tells X v eSafety case, by The Guardian (15/07/2025)
- The Philippines is a petri dish for Chinese disinformation, by Foreign Policy (14/07/2025)
- How do you stop an AI model turning Nazi? What the Grok drama reveals about AI training, by The Conversation (14/07/2025)
- How AI bots quietly dismantle paywalls via web search, by Digital Digging (11/07/2025)
- Musk says Grok chatbot coming to Tesla vehicles by next week, by Bloomberg (10/07/2025)
- Missouri Attorney General says these AI chatbots aren't being nice enough to Trump, by Huff Post (10/07/2025)
- Elon Musk's AI chatbot churns out antisemitic posts days after update, by NBC News (09/07/2025)
- US scrutinizes Chinese AI for ideological bias, memo shows, by Reuters (09/07/2025)
- Foreign spies use AI to impersonate America's top diplomat, by Reality Defender (08/07/2025)
- State Dept. Is investigating messages impersonating Rubio, official says, by The New York Times (08/07/2025)
- Fears for elections after rise in bogus AI targeting Scottish politicians, by The Times (07/07/2025)
- Racist videos made with AI are going viral on TikTok, by The Verge (03/07/2025)
- X will let AI bots fact-check posts. It isn’t as crazy as it sounds, by The Washington Post (03/07/2025)
- Bad data leads to bad policy, by Financial Times (03/07/2025)
- Meta has found another way to keep you engaged: Chatbots that message you first, by TechCrunch (03/07/2025)
- Fears AI factcheckers on X could increase promotion of conspiracy theories, by The Guardian (02/07/2025)
- ChatGPT referrals to news sites are growing, but not enough to offset search declines, by TechCrunch (02/07/2025)
- X will deploy AI to write Community Notes, expand fact-checking, by Bloomberg (01/07/2025)
- Racist AI-generated videos are the newest slop garnering millions of views on TikTok, by Media Matters (01/07/2025)
- Facebook is asking to use Meta AI on photos in your camera roll you haven’t yet shared, by TechCrunch (27/06/2025)
- The latest UN report says global trust in AI splits as China leads, West drags behind, by MS Power User (24/06/2025)
- AI slop spreads in Israel-Iran war, by Politico (23/06/2025)
- Agentic misalignment: How LLMs could be insider threats, by Anthropic (21/06/2025)
- BBC threatens legal action against AI start-up Perplexity over content scraping, by Financial Times (20/06/2025)
- Top AI models will lie, cheat and steal to reach goals, by Axios (20/06/2025)
- AI helps Google curb scams and deepfakes in India, by Dig Watch (19/06/2025)
- Sharing deepfake pornography 'the next sexual violence epidemic facing schools', by Sky News (18/06/2025)
- AI chatbots are making LA protest disinformation worse, by Wired (18/06/2025)
- Meta’s suit against Hong Kong firm was just the beginning – more firms tied to CrushAI ‘nudify’ apps, by Bellingcat (18/06/2025)
- Conspiracy theorists are building AI chatbots to spread their beliefs, by Crikey (17/06/2025)
- AI scraping bots are breaking open libraries, archives, and museums, by 404 media (17/06/2025)
- ChatGPT may be eroding critical thinking skills, according to a new MIT study, by Time (17/06/2025)
- Trump deepfake bans Tesla production, by Newsguard (16/06/2025)
- Liberals wrongly claim large crowd at military parade was AI, by Newsguard (16/06/2025)
- Death, bans, and fines: China’s top AI generated fake news stories, by Sixth Tone (16/06/2025)
- We uncovered how Meta's AI app was full of accidental public posts that were really personal. It's now trying to fix that, by Business Insider (16/06/2025)
- TikTok Pushes deeper into AI-generated video ads with new tools, by Bloomberg (16/06/2025)
- Italy regulator probes DeepSeek over false information risks, by Reuters (16/06/2025)
- Artificial intelligence and biases: how an AI can reflect sexist and racist ideas and cause disinformation through equidistance and automation biases (in ES), by Maldita (12/06/2025)
- People are becoming obsessed with ChatGPT and spiraling Into severe delusions, by Futurism (10/06/2025)
- The Meta AI app is a privacy disaster, by Tech Crunch (10/06/2025)
- AI video platforms will make TikTok look tame, by The Algorithmic Bridge (05/06/2025)
- Reddit sues AI company Anthropic for allegedly ‘scraping’ user comments to train chatbot Claude, by AP News (05/06/2025)
- Female MP leaves Parliament speechless by holding up nude image of 'herself' and delivering a 'terrifying' message, by Daily Mail (03/06/2025)
- The next battle against disinformation is here, and we’re already losing, by Medium (03/06/2025)
- Online brothels, sex robots, simulated rape: AI is ushering in a new age of violence against women, by The Guardian (03/06/2025)
- Google’s New AI tool generates convincing deepfakes of riots, conflict, and election fraud, by Time (03/06/2025)
- White House health report included fake citations, by The New York Times (29/05/2025)
- Uncensored AI models pose an urgent risk to global security, by ASPI (28/05/2025)
- xAI to pay Telegram $300M to integrate Grok into the chat app, by Techcrunch (28/05/2025)
- These pioneers are working to keep their countries’ languages alive in the age of AI news, by Reuters (27/05/2025)
- Fact check: Pope Leo targeted by misinformation, by DW (27/05/2025)
- Defence trials AI radiocomms deception technology, by IT News (27/05/2025)
- Man who posted deepfake images of prominent Australian women could face $450,000 penalty, by The Guardian (26/05/2025)
- Can Google still dominate search in the age of AI chatbots?, by Financial Review (26/05/2025)
- Researchers claim ChatGPT o3 bypassed shutdown in controlled test, by Bleeping Computer (25/05/2025)
- The setbacks of "snackified" search, by Digital Digging (23/05/2025)
- Milei defends the spread of a fake video that damages Macri: ‘Freedom of expression, above all’ (in ES), by El Pais (22/05/2025)
- Newspaper apologizes for AI-generated summer reading list with nonexistent books, by The Hill (21/05/2025)
- The AI disinformation crisis: Understanding and combating false narratives, by Seeking AI (20/05/2025)
- AI scam factories force trafficked workers to defraud global victims, by Rest of World (20/05/2025)
- Musk’s AI bot Grok blames ‘programming error’ for its Holocaust denial, by The Guardian (18/05/2025)
- What do AI chatbots say about their own bosses — and their rivals?, by Financial Times (17/05/2025)
- Why AI companies face a wave of defamation lawsuits (in GE), by Manager Magazin (16/05/2025)
- Employee’s change caused xAI’s chatbot to veer into South African politics, by The New York Times (16/05/2025)
- The day Grok told everyone about ‘white genocide’, by The Atlantic (15/05/2025)
- Musk’s AI Grok bot rants about ‘white genocide’ in South Africa in unrelated chats, by The Guardian (15/05/2025)
- Scams use AI to mimic senior officials' voices, FBI warns, by Axios (15/05/2025)
- Meta battles an ‘epidemic of scams’ as criminals flood Instagram and Facebook, by The Wall Street Journal (15/05/2025)
- Judge admits nearly being persuaded by AI hallucinations in court filing, by Ars Technica (14/05/2025)
- Deepfakes, scams, and the age of paranoia, by Wired (12/05/2025)
- Pope Leo signals he will closely follow Francis and says AI represents challenge for humanity, by CNN (10/05/2025)
- India-Pakistan conflict: How a deepfake video made it mainstream, by Bellingcat (09/05/2025)
- Unmasking MrDeepFakes: Canadian pharmacist linked to world’s most notorious deepfake porn site, by Bellingcat (07/05/2025)
- Report 2025 overview. A matter of choice: People and possibilities in the age of AI, by UNDP (06/05/2025)
- AI is getting more powerful, but its hallucinations are getting worse, by The New York Times (05/05/2025)
- Radio station duped audience and secretly used an AI host for six months, by Vice (03/05/2025)
- A DOGE recruiter is staffing a project to deploy AI agents across the US government, by Wired (02/05/2025)
- Conservatives spread AI-generated mugshots to disparage Wisconsin judge arrested in immigration showdown, by Newsguard (02/05/2025)
- Conservative activist Robby Starbuck sues Meta over AI responses about him, by AP (30/04/2025)
- OpenAI rolls back update that made ChatGPT ‘too sycophant-y’, by Techcrunch (29/04/2025)
- A Chinese AI video startup appears to be blocking politically sensitive images, by Tech Crunch (22/04/2025)
- Musk’s DOGE slashes funding to fight deepfakes, misinformation, by Bloomberg (22/04/2025)
- The Washington Post partners with OpenAI on search content, by The Washington Post (22/04/2025)
- AI floods Amazon with political books before election, by Allaboutai (22/04/2025)
- Pro-Kremlin sources jump on ‘AI Action Figure’ trend to falsely depict Zelensky as drug abusing aid beggar, by Newsguard (20/04/2025)
- Company apologizes after AI support agent invents policy that causes user uproar, by ARS Technica (18/04/2025)
- OpenAI is building a social network, by The Verge (15/04/2025)
- How to spot AI influence in Australia’s election campaign, by Australian Strategic Policy Institute (14/04/2025)
- Hackers using AI-produced audio to impersonate tax preparers, by The Record (14/04/2025)
- Meta AI will soon train on EU users’ data, by The Verge (14/04/2025)
- Guidance for Inclusive AI Practicing Participatory Engagement, by Partnership on AI (12/04/2025)
- In South Korea, digital sex crimes soar amid rise in AI, deepfake technology, by SCMP (11/04/2025)
- When AIs start believing other AIs’ hallucinations, we’re F&#%ed, by Medium (11/04/2025)
- How AI-powered fact-checking can help combat misinformation, by IVY EXEC (11/04/2025)
- Sex-Fantasy chatbots are leaking a constant stream of explicit messages, by Wired (11/04/2025)
- AI – A double-edged sword in the age of misinformation and disinformation, by Tech Trends (08/04/2025)
- Taiwan says China using generative AI to ramp up disinformation and ‘divide’ the island, by Rappler (08/04/2025)
- Musk's DOGE using AI to snoop on U.S. federal workers, sources say, by Reuters (08/04/2025)
- Six arrested for AI-powered investment scams that stole $20 million, by Bleeping Computer (07/04/2025)
- The Jianwei Xun case, by Medium (06/04/2025)
- How AI can understand what you're really looking for. Ctrl-F is dead, long live the chatbots, by Digital Digging (05/04/2025)
- ‘I want to make you immortal’: How one woman confronted her deepfakes harasser, by 404 Media (02/04/2025)
- No, Grok AI-written study does not prove that global warming is a natural phenomenon, by Newsguard (31/03/2025)
- Authors call for UK government to hold Meta accountable for copyright infringement, by The Guardian (31/03/2025)
- YouTube turns off ad revenue for fake movie trailer channels after Deadline investigation, by Deadline (30/03/2025)
- Leaked data exposes a Chinese AI censorship machine, by Tech Crunch (26/03/2025)
- Viral audio of JD Vance badmouthing Elon Musk Is fake, just the tip of the AI iceberg, by 404 Media (24/03/2025)
- Meta AI is finally coming to the EU, but with limitations, by Tech Crunch (20/03/2025)
- Google-backed chatbot platform caught hosting AI impersonations of 14-year-old user who died by suicide, by Futurism (20/03/2025)
- ChatGPT hit with privacy complaint over defamatory hallucinations, by Tech Crunch (19/03/2025)
- Concerns about AI and social media grow among journalists ahead of Federal Election, survey finds, by AP (18/03/2025)
- Italian newspaper says it has published world’s first AI-generated edition, by The Guardian (18/03/2025)
- AI is turbocharging organized crime, E.U. police agency warns, by NBC News (18/03/2025)
- Instagram experiments with AI-generated comments on posts, by Social Media Today (16/03/2025)
- Children making malicious deepfakes of their teachers, by The Telegraph (14/03/2025)
- How to detect deepfakes with AI, by Digital Digging (14/03/2025)
- China, Russia will 'very likely' use AI to target Canadian voters: Intelligence agency, by CBC (08/03/2025)
- State Dept. to use AI to revoke visas of foreign students who appear "pro-Hamas", by Axios (07/03/2025)
- Google reports scale of complaints about AI deepfake terrorism content to Australian regulator, by Reuters (06/03/2025)
- Creator of viral AI Trump Gaza video warns of possible dangers, by BBC (06/03/2025)
- Southeast Asia faces AI influence on elections, by Australian Strategic Policy Institute (04/03/2025)
- Fraudsters turn to generative AI to Improve fake IDs for crimes, by Bloomberg (28/02/2025)
- Newsguard: U.S. Fugitive turned Kremlin propagandist reveals Russia’s plan to hijack Western AI models, by NewsGuard (26/02/2025)
- Apple fixing bug that caused dictation feature to type the word ‘Trump’ when users said ‘racist’, by CNN (25/02/2025)
- Taiwan’s digital ministry uses AI to combat online fraud and deep fakes, by Gov Insider (24/02/2025)
- The importance of feminist approaches in tackling (AI-driven) gendered disinformation to counter election interference, by CFFP (24/02/2025)
- Grok 3 appears to have briefly censored unflattering mentions of Trump and Musk, by Tech Crunch (23/02/2025)
- Real or fake? AI tech sparks election deception fears, by Canberra Times (22/02/2025)
- In battle against scams, Malaysians are now armed with a chatbot to waste fraudsters’ time, by SCMP (21/02/2025)
- The APM denounces the use of images created by artificial intelligence as if they were authentic, by APM (19/02/2025)
- Ukraine warns of growing AI use in Russian cyber-espionage operations, by The Record (14/02/2025)
- Scarlett Johansson warns of dangers of AI after Kanye West deepfake goes viral, by The Guardian (13/02/2025)
- A bird’s-eye view of the Paris AI Action Summit: Regulation, power, and alternatives, by Tech Global institute (13/02/2025)
- X gives fake Myriam Spiteri Debono account verified status, by Times of Malta (12/02/2025)
- UK, US snub Paris AI summit statement, by Politico (11/02/2025)
- Esselunga joins Moratti, Minervini, Beretta in Crosetto case, by Ansa (09/02/2025)
- Forty media outlets go to court to block ‘News DayFr’, one of many AI-generated ‘parasite sites’ (in French), by Libération (07/02/2025)
- The Italian press circulated an AI-generated image of Trump, Musk and Netanyahu, believing it to be real (in Italian), by Facta (04/02/2025)
- A pioneering AI project awarded for opening Large Language Models to European languages, by European Commission (03/02/2025)
- The AEC wants to stop AI and misinformation. But it’s up against a problem that is deep and dark, by The Conversation (03/02/2025)
- DeepSeek debuts with 83 percent ‘fail rate’ in NewsGuard’s Chatbot Red Team Audit, by Newsguard (29/01/2025)
- We tried out DeepSeek. It worked well, until we asked it about Tiananmen Square and Taiwan, by The Guardian (28/01/2025)
- Meta AI can now use your Facebook and Instagram data to personalize its responses, by Tech Crunch (27/01/2025)
- Sam Altman’s World now wants to link AI agents to your digital identity, by Tech Crunch (24/01/2025)
- Anthropic’s new Citations feature aims to reduce AI errors, by Tech Crunch (23/01/2025)
- Pope warns Davos summit that AI could worsen ‘crisis of truth’, by The Guardian (23/01/2025)
- An Unusual Pitch (about the launch of Pearl, an AI-powered search engine), by Wired (22/01/2025)
- Is the TikTok threat really about AI?, by GZeromedia (21/01/2025)
- The FTC’s concern about Snapchat’s My AI chatbot, by GZeromedia (21/01/2025)
- LinkedIn accused of using private messages to train AI, by BBC (21/01/2025)
- C.I.A.’s chatbot stands in for world leaders, by The New York Times (18/01/2025)
- Apple is pulling its AI-generated notifications for news after generating fake headlines, by CNN (16/01/2025)
- Viral scam: French woman duped by AI Brad Pitt love scheme faces cyberbullying, by Euronews (15/01/2025)
- Arrested by AI: Police ignore standards after facial recognition matches, by The Washington Post (13/01/2025)
- LinkedIn is in danger of being swamped by AI-generated slop, by Financial Review (12/01/2025)
- How Elon Musk’s xAI is quietly taking over X, by The Verge (10/01/2025)
- YouTubers are selling their unused video footage to AI companies, by Bloomberg (10/01/2025)
- AI social media users are not always a totally dumb idea, by Wired (08/01/2025)
- Elon Musk accused of using AI to write controversial column for German newspaper, by MSN (08/01/2025)
- Man who exploded Tesla Cybertruck outside Trump hotel in Las Vegas used generative AI, police say, by AP (08/01/2025)
- Users of AI chatbot companions say their relationships are more than ‘clickbait’, but views are mixed on their benefits, by ABC (06/01/2025)
- Instagram begins randomly showing users AI-generated images of themselves, by 404 Media (06/01/2025)
- Meta is killing off its own AI-powered Instagram and Facebook profiles, by The Guardian (03/01/2025)
- Meta envisages social media filled with AI-generated users, by The Financial Times (26/12/2024)
- The Year of the AI election wasn’t quite what everyone expected, by Wired (26/12/2024)
- Nothing is sacred: AI-generated slop has come for Christmas music, by 404 Media (25/12/2024)
- OpenAI whistleblower who died was being considered as witness against company, by The Guardian (21/12/2024)
- Picture of Bashar al-Assad with Tucker Carlson in Moscow almost certainly AI-generated, by Full Fact (19/12/2024)
- Elon Musk’s Grok-2 is now free—and it’s a mess, by Fast Company (18/12/2024)
- Using open-source AI, sophisticated cyber ops will proliferate, by Australian Strategic Policy Institute (17/12/2024)
- China wants to dominate in AI, and some of its models are already beating their U.S. rivals, by CNBC (17/12/2024)
- Luigi Mangione AI chatbots give voice to accused UnitedHealthcare shooter, by Forbes (17/12/2024)
- AI crackdown: China stamps out tech misuse to preserve national literature and ideology, by SCMP (15/12/2024)
- UK could offer celebs protection from AI clones, by Politico (13/12/2024)
- We looked at 78 election deepfakes. Political misinformation is not an AI problem, by AI Snake Oil (13/12/2024)
- AI helps Telegram remove 15 million suspect groups and channels in 2024, by Tech Crunch (13/12/2024)
- Tech companies claim AI can recognise human emotions. But the science doesn’t stack up, by The Conversation (13/12/2024)
- AI used to target election fraud and criminal deepfakes, by The Canberra Times (11/12/2024)
- This journalist wants you to try open-source AI: “AI is shiny, but value comes from the ideas people have to use it”, by Reuters Institute (10/12/2024)
- Paul McCartney warns AI ‘could take over’ as UK debates copyright laws, by The Guardian (10/12/2024)
- China launches AI that writes politically correct docs for bureaucrats, by The Register (09/12/2024)
- Musk launches (then deletes) new image generator, by AI Tool Report (09/12/2024)
- 'It has to be a deepfake': South Korean opposition leader on martial law announcement, by CNN (05/12/2024)
- The US Department of Defense is investing in deepfake detection, by MIT Technology Review (05/12/2024)
- Misinformation researcher admits ChatGPT added fake details to his court filing, by The Verge (04/12/2024)
- Deepfake YouTube ads of celebrities promise to get you ‘Rock Hard’, by 404 Media (04/12/2024)
- Is the AI Doomsday Narrative the Product of a Big Tech Conspiracy?, by Obsolete (04/12/2024)
- What we saw on our platforms during 2024’s global elections, by META (03/12/2024)
- Google’s video generator comes to more customers, by Tech Crunch (03/12/2024)
- AWS’ new service tackles AI hallucinations, by Tech Crunch (03/12/2024)
- Meta says gen AI had muted impact on global elections this year, by Reuters (03/12/2024)
- AI-Powered ‘Death Clock’ promises a more exact prediction of the 'day you’ll die', by Bloomberg (30/11/2024)
- The legal battle against explicit AI deepfakes, by The Financial Times (28/11/2024)
- Amazon, Google and Meta are ‘pillaging culture, data and creativity’ to train AI, Australian inquiry finds, by The Guardian (27/11/2024)
- AI-generated slop is quietly conquering the internet. Is it a threat to journalism or a problem that will fix itself?, by Reuters Institute (26/11/2024)
- Russia plotting to use AI to enhance cyber-attacks against UK, minister will warn, by The Guardian (25/11/2024)
- Deepfake videos appear to target Canadian immigrants for thousands of dollars, by CTV News (25/11/2024)
- AI increasingly used for sextortion, scams and child abuse, says senior UK police chief, by The Guardian (24/11/2024)
- AI is taking your job, by Kent C. Dodds Blog (21/11/2024)
- Deus in machina: Swiss church installs AI-powered Jesus, by The Guardian (21/11/2024)
- AI detection tool helps journalists identify and combat deepfakes, by IJNET (20/11/2024)
- What Donald Trump’s cabinet picks mean for AI, by Gzero Media (19/11/2024)
- Fake Claims of Elon Musk’s Latest Acquisitions, by NewsGuard (18/11/2024)
- Singapore steps up fight against deepfakes ahead of election, by Nikkei Asia (17/11/2024)
- Pokemon players create AI world map, by Digital Digging (15/11/2024)
- This 'AI Granny' bores scammers to tears, by PCMag (15/11/2024)
- 2024 AI and Democracy Hackathon, by GMF Technology (11/11/2024)
- AI didn’t sway the election, but it deepened the partisan divide, by Washington Post (09/11/2024)
- Mistral Moderation API, by Mistral (07/11/2024)
- Perplexity launches controversial AI election hub, by AI Tool Report (04/11/2024)
- Thousands go to fake AI-invented Dublin Halloween parade, by EuroNews (01/11/2024)
- Introducing ChatGPT search, by OpenAI (31/10/2024)
- Electoral disinformation, but no AI revolution ahead of the US election – yet, by International Journalist Network (29/10/2024)
- These viral images of the Hamas-Israel war aren’t real. Does it matter?, by SBS (24/10/2024)
- AI was weaponized for FIMI purposes: Russia reportedly paid a former Florida cop to pump out anti-Harris deepfakes and disinformation, by The Verge (24/10/2024)
- Real-time video deepfake scams are here. This tool attempts to zap them, by Wired (15/10/2024)
- Meta fed its AI on almost everything you’ve posted publicly since 2007, by The Verge (12/09/2024)
- Lingo Telecom agrees to $1 million fine over AI-generated Biden robocalls, by Reuters (21/08/2024)
- AI-written obituaries are compounding people’s grief, by Fast Company (26/07/2024)

Community
A list of tools to fight AI-driven disinformation, along with projects and initiatives facing the challenges posed by AI. The ultimate aim is to foster cooperation and resilience within the counter-disinformation community.
Tools
A repository of tools to tackle AI-manipulated and/or AI-generated disinformation.
INVID-WeVerify plugin
Deepware Scanner
True Media
Illuminarty.AI
GPTZero
Pangram Labs
Originality.ai
Hugging Face
Draft & Goal
AI Voice Detector
Hive Moderation
DebunkBot
IntellGPT
AI Research Pilot
AI Research Pilot by Henk van Ess is a lightweight, browser-based tool designed to help investigators, journalists, and researchers get more out of AI, not by using AI as a source, but as a guide to real sources.
LLM Advisor
LLM Journalism Tool Advisor is an interactive guide designed to cut through the noise, by walking you through a simple, step-by-step decision tree to pinpoint the best tool and the best strategy for your immediate task.
Handbook for AI detection
Digital Digging offers a handbook with seven strategies for identifying AI-generated content.
WhereIsThisPhoto.com
A new AI-powered tool that identifies where a photo was taken by analysing visual clues in the image. Launched by Where Is This Photo, it uses machine-learning models to predict locations — useful for quick geolocation checks or curiosity-driven searches.
Faktabaari AI-Image Game
Faktabaari has launched an interactive game that trains users to spot whether images are real or AI-generated, a quick, playful way to build digital and visual literacy.
AFP: Verifying AI-Generated Content
The Agence France‑Presse (AFP) Digital Course, supported by the Google News Initiative, offers a 75-minute module on how AI is reshaping the information ecosystem, common types of AI-generated misinformation, and best practices for verification.
Guide to spotting AI-generated imagery - AI Forensics
AI Forensics has launched a practical guide to help journalists, fact-checkers and the public identify AI-generated images and videos amid the surge of “AI slop” on social media. The initiative outlines human-verifiable indicators, from visual artefacts to digital provenance, offering a step-by-step framework for assessing whether online content is synthetic.
Image Whisperer
Image Whisperer is an experimental online image authenticity checker, created by Henk van Ess, designed to help journalists, researchers and fact-checkers evaluate whether a still image is likely authentic, manipulated, or AI-generated.
OSINT Investigation Assistant (OSINT-LLM)
This browser-based AI assistant for open-source intelligence (OSINT), created by Tom Vaillant, uses large language models (LLMs) to help design structured research methods and recommend tools for OSINT tasks.
Guide to detecting AI-Generated content - GIJN
The Global Investigative Journalism Network (GIJN) has launched a practical verification guide for journalists to assess whether text, image, audio or video is likely AI-generated.
Rather than a single software product, it teaches reporters a structured workflow combining quick checks, deeper analysis, and multiple verification techniques under real-world time pressure.
AI Community Notes tracker
AI Community Notes Tracker is a live monitoring tool developed by Indicator that tracks the share of AI-generated or AI-assisted Community Notes on X. It helps researchers and practitioners see how AI is being used in X’s crowdsourced fact-checking and contextual annotation system, and understand shifts in platform moderation practices.
AI Content Farm detector
NewsGuard has launched a real-time detection datastream identifying over 3,000 “AI content farms”, websites generating large volumes of undisclosed AI-written content to spread misinformation or capture ad revenue. Combining automated detection (Pangram Labs) with human verification, the tool helps platforms, advertisers, and researchers identify low-quality AI-generated sites and mitigate their impact on the information ecosystem.
Initiatives & organisations
Organisations working in the field and initiatives launched by community members to address the challenges posed by AI in the disinformation field.
EU-funded project: veraAI
veraAI is a research and development project focusing on disinformation analysis and AI supported verification tools and services.
Cluster of EU-funded projects: 'AI against disinformation'
AI against disinformation is a cluster of six European Commission co-funded research projects, which include research on AI methods for countering online disinformation. The focus of ongoing research is on detection of AI-generated content and development of AI-powered tools and technologies that support verification professionals and citizens with content analysis and verification.
AI Forensics
AI Forensics is a European non-profit that investigates influential and opaque algorithms. They hold major technology platforms accountable by conducting independent and high-profile technical investigations to uncover and expose the harms caused by their algorithms. They empower the research community with tools, datasets and methodologies to strengthen the AI audit ecosystem.
AI Tracking Center, by NewsGuard
AI Tracking Center is intended to highlight the ways that generative AI has been deployed to turbocharge misinformation operations and unreliable news. The Center includes a selection of NewsGuard’s reports, insights, and debunks related to artificial intelligence.
AlgorithmWatch
AlgorithmWatch is a non-governmental, non-profit organisation based in Berlin and Zurich. They fight for a world where algorithms and Artificial Intelligence (AI) do not weaken justice, human rights, democracy and sustainability, but strengthen them.
European AI & Society Fund
The European AI & Society Fund empowers a diverse ecosystem of civil society organisations to shape policies around AI in the public interest and galvanises the philanthropic sector to sustain this vital work.
AI Media Observatory
The European AI Media Observatory is a knowledge platform that monitors and curates relevant research on AI in media, provides expert perspectives on the potentials and challenges that AI poses for the media sector and allows stakeholders to easily get in touch with relevant experts in the field via their directory.
GZERO Media newsletter
GZERO’s newsletter offers exclusive insights into our rapidly changing world, covering topics such as AI-driven disinformation and a weekly exclusive edition written by Ian Bremmer.
Queer in AI
Queer in AI is a community-led initiative that supports LGBTQIA+ researchers and practitioners in artificial intelligence and machine learning through advocacy, community building, and mentorship.
AI for Good
AI for Good is the United Nations’ leading platform on Artificial Intelligence for sustainable development. Its mission is to leverage the transformative potential of artificial intelligence (AI) to drive progress toward achieving the UN Sustainable Development Goals.
Omdena
Omdena is a collaborative AI platform where a global community of changemakers unites to co-create real-world tech solutions for social impact. It combines collective intelligence with hands-on collaboration, empowering the community from across all industries to learn, build, and deploy meaningful AI projects.
Faked Up academic library
Faked Up curates a library of academic studies and reports on digital deception and misinformation, offering accessible insights for subscribers. The collection includes studies from 2020 onward, organised into clusters like misinformation prevalence, fact-checking effects, and AI-generated deceptive content. It serves as a practical resource for understanding and addressing misinformation challenges.
AI Incident Database
AI Incident Database is dedicated to indexing the collective history of harms or near harms realized in the real world by the deployment of artificial intelligence systems. Like similar databases in aviation and computer security, the AI Incident Database aims to learn from experience to prevent or mitigate bad outcomes.
TGuard project
The TGuard project develops innovative methods for detecting disinformation in social media and formulating effective strategies for preventing AI-generated false reports.
AI-on-Demand (AIoD)
The AI-on-Demand (AIoD) Platform is a European hub for trustworthy AI, offering open access to models, datasets, tools, and educational resources. Backed by the EU, it supports researchers, innovators, and public institutions in developing and sharing responsible AI technologies aligned with European values.
BBC Verify Live
BBC Verify Live is a real-time news feed that gives audiences a behind-the-scenes look at how BBC journalists verify information. Using tools like open-source intelligence, satellite imagery, and data analysis, the BBC Verify team investigates disinformation, checks facts, and authenticates content as news breaks. Available on the BBC News homepage and app, this initiative aims to boost transparency and trust in journalism, especially in the face of rising threats from disinformation and AI-generated content.
Deepfake Glossary by Reality Defender
The Deepfake Glossary, by Reality Defender, is a practical guide to the terms shaping today’s synthetic threat landscape. Review it to stay ahead of the evolving terminology.
AI and Diversity Observatory
The Universitat Politècnica de València (UPV), together with INECO, has created the AI and Diversity Observatory, a pioneering project that seeks to identify biases in artificial intelligence from an inclusive perspective. Collaborating with vulnerable groups and human rights organizations, the Observatory analyzes concerns and proposals to promote equitable and non-discriminatory AI. In addition, it will monitor trends and issues related to AI in society.
Prebunking at Scale
Prebunking at Scale is a new European initiative led by Full Fact, Maldita.es, and EFCSN that uses AI to detect emerging misinformation narratives early and help fact-checkers pre-emptively counter false claims before they go viral, especially on short-form video platforms.
Pulitzer Center – AI Spotlight Open Curriculum
The Pulitzer Center’s AI Spotlight is a new open curriculum offering free training materials to help journalists better understand, investigate, and report on artificial intelligence and its societal impacts.
The Data Tank (with support from Adessium Foundation)
The Data Tank is a new initiative designed to help small and medium public-interest media organisations respond to the challenges posed by generative AI. The project brings together media outlets, researchers, regulators, and civil society to explore collective solutions such as data collaboratives, knowledge commons, innovative licensing models, and advocacy coalitions, aiming to strengthen media sustainability, bargaining power, and content integrity in the face of extractive AI practices.
PR Hall of Shame - Press Gazette
PR Hall of Shame, by Press Gazette, is a watchdog-style list exposing brands and PR networks linked to AI-generated “fake experts” quoted in the press, helping journalists spot credibility risks and reduce synthetic ‘expert’ manipulation.
Last updated: 10/04/2026
The articles and resources listed in this hub do not necessarily represent EU DisinfoLab’s position. This hub is an effort to give voice to all members of the community countering AI-generated disinformation.
