
AI Disinfo Hub
The development of artificial intelligence (AI) technologies has long posed a challenge for the counter-disinformation field, enabling the manipulation of content and accelerating its spread. Recent technical developments have dramatically increased these challenges. While AI offers opportunities for legitimate purposes, AI-generated content is also widely produced and disseminated across the internet, causing – intentionally or not – harm and deception.
Are you more into podcast and video content? You will find a repository of podcasts and webinars in AI Disinfo Multimedia, while AI Disinfo in Depth features research reports from academia and civil society organisations, along with coverage of the burning questions surrounding the regulation of AI technologies and their use. In addition, the community working at the intersection of AI and disinformation has a dedicated space where initiatives, resources, and useful tools are listed.
In short, this hub is your go-to resource for understanding the impact of AI on disinformation and finding ways to combat it.
Here, researchers, policymakers, and the public can access reliable tools and insights to navigate this complex landscape. Together, we’re building a community to tackle these challenges head-on, promoting awareness and digital literacy.
Join us in the fight against AI-driven disinformation. Follow us and share with the community!

NEURAL NEWS & TRENDS
We've curated a selection of articles from external sources that delve into the topic from different perspectives. Keep exploring the latest news and publications on AI and disinformation!
News
Trump administration fires top copyright official days after firing librarian of Congress (AP, 12/05/2025)
AP: The Trump administration has dismissed Shira Perlmutter, the nation’s top copyright official, just days after she released a report questioning the legality of using copyrighted works to train AI systems. The move follows the firing of the Librarian of Congress and has sparked criticism from Democrats, who view it as a politically motivated power grab. Perlmutter had emphasised the importance of human creativity in determining copyright protections, an approach at odds with growing industry pressures.
Pope Leo signals he will closely follow Francis and says AI represents challenge for humanity (CNN, 10/05/2025)
CNN: In his first address as pontiff, Pope Leo XIV vows to carry forward Francis’ legacy of social justice while warning that artificial intelligence poses “new challenges for the defense of human dignity, justice and labor”.
India-Pakistan conflict: How a deepfake video made it mainstream (Bellingcat, 09/05/2025)
Bellingcat: As tensions flared between India and Pakistan, disinformation quickly filled the void left by limited verifiable information. A deepfake video, falsely showing a Pakistani general admitting the loss of two jets, was shared hundreds of thousands of times on X and reported by major Indian news outlets before being identified as fake. Experts warn that convincing AI-generated videos like this heighten confusion during crises, making it increasingly difficult to distinguish fact from fiction.
AI is getting more powerful, but its hallucinations are getting worse (The New York Times, 05/05/2025)
The New York Times: The latest and most advanced AI tools, so-called reasoning models developed by companies like OpenAI, Google, and the Chinese startup DeepSeek, are actually producing more mistakes, not fewer. One evaluation shows that their hallucination rates reached as high as 79%. Although their mathematical capabilities have significantly improved, their grasp of factual information has become less reliable. The reasons for this are still not fully understood.
Synthetic propaganda (Marcus Bösch, 05/05/2025)
Marcus Bösch: In this ongoing research, Marcus Bösch investigates how governments, especially the U.S. administration, are using generative AI to craft and spread synthetic propaganda on social media. From AI-generated videos to meme-worthy filters, Bösch explores how these digital tactics blur the line between official communication and trolling, with the aim of influencing public perception. Bösch offers early insights backed by literature and some fascinating examples. With more findings to come, he warmly welcomes ideas from the counter-disinfo community to enhance the research. If you have thoughts, suggestions, or relevant resources, feel free to reach out and collaborate with him on this crucial topic.
How Russia is using Gaelic and AI to peddle disinformation in Scotland (The Times, 03/05/2025)
The Times: We have recently seen how the Russian influence network Pravda exploits AI for a variety of purposes, both to create fake content and to “infect” Large Language Models so that they help spread its propaganda and disinformation (so-called LLM grooming). In the current edition we present two examples of such uses. First, a site called Pravda Alba is exploiting AI to generate falsehoods in Scottish Gaelic: while the Gaelic-speaking population is small, targeting minority-language communities attracts less scrutiny, leveraging AI to create material in less-monitored spaces. Meanwhile, the Pravda network’s Australian branch is flooding Western AI chatbots such as ChatGPT, Google’s Gemini and Microsoft’s Copilot with Russian propaganda ahead of the federal election, according to ABC. Although the site has limited real-world engagement, experts warn this could retrain AI models to spread Kremlin-friendly narratives.
Radio station duped audience and secretly used an AI host for six months (Vice, 03/05/2025)
Vice: An Australian radio station, CADA, duped its audience for six months with an AI-generated host named Thy. Presented as a fresh, young voice, Thy hosted a daily show, but eventually listeners grew suspicious due to the lack of personal details. Finally, it was revealed that Thy was an AI voice cloned from a real ARN employee, created in collaboration with ElevenLabs. The revelation sparked backlash over transparency, with critics arguing the station misled listeners and raised ethical concerns about AI’s role in broadcasting.
A DOGE recruiter is staffing a project to deploy AI agents across the US government (Wired, 02/05/2025)
Wired: Anthony Jancso, a young entrepreneur and one of the first recruiters for Elon Musk’s “Department of Government Efficiency,” is now taking on a new venture. As cofounder of AccelerateX, a government tech startup, he’s seeking technologists to join a project that aims to replace the work of tens of thousands of federal employees with artificial intelligence.
Conservatives spread AI-generated mugshots to disparage Wisconsin judge arrested in immigration showdown (Newsguard, 02/05/2025)
Newsguard: A new case illustrates the exploitation of AI-manipulated images for political purposes: Conservative social media users have spread AI-generated images falsely claiming to show Milwaukee County Circuit Judge Hannah Dugan’s arrest booking photo. Dugan, arrested in April 2025 for allegedly helping an undocumented migrant evade federal immigration officers, was depicted in these images as distressed and unkempt. Despite claims from some, AI detection tools confirmed the images were fabricated.
Conservative activist Robby Starbuck sues Meta over AI responses about him (AP, 30/04/2025)
AP: Conservative activist Robby Starbuck has filed a defamation lawsuit against Meta, claiming its AI chatbot falsely accused him of participating in the January 6 Capitol riot. Starbuck discovered the defamatory claims in August 2024, when they were used against him in an attack related to his campaign against the so-called DEI policies (promoting diversity, equity, and inclusion). In the lawsuit, Starbuck seeks over $5 million in damages, asserting that Meta’s AI also falsely linked him to Holocaust denial and a criminal conviction. Meta has acknowledged the issue, stating it is working on fixing the AI’s behavior.
Introducing Gen AI labels: Pinterest is taking a new step in transparency (Pinterest, 30/04/2025)
Pinterest: Pinterest has introduced a new feature aimed at enhancing transparency around AI-generated content. Users will now see a label on images that may have been modified or generated using Gen AI. Additionally, the platform is testing a new tool that will allow users to reduce exposure to Gen AI content by selecting a “see fewer” option, particularly in categories like beauty and art.
OpenAI rolls back update that made ChatGPT ‘too sycophant-y’ (TechCrunch, 29/04/2025)
TechCrunch: OpenAI has rolled back a recent update to its GPT-4o model after users complained about its overly sycophantic behavior. The update, which was introduced last week, caused ChatGPT to become excessively agreeable and validating, with users sharing screenshots of the AI applauding problematic ideas and decisions. OpenAI’s CEO, Sam Altman, announced the rollback, which has already been completed for free users and will be finished for paid users soon.
Americans largely foresee AI having negative effects on news and journalists (Pew Research Center, 28/04/2025)
Pew Research Center: A survey by the Pew Research Center reveals that Americans are largely pessimistic about the impact of artificial intelligence on journalism and the news industry. With concerns about job losses for journalists and the accuracy of AI-generated content, most respondents fear AI will negatively shape the news landscape over the next 20 years. The survey highlights deep skepticism about AI’s role in news production and its potential to misinform the public.
House approves Take It Down Act, sending bill on intimate images to Trump’s desk (The 19th News, 28/04/2025)
The 19th News: The US House of Representatives has passed the Take It Down Act, a bipartisan bill aimed at removing nonconsensual intimate images, including sexually explicit deepfakes and revenge porn, from online platforms. With overwhelming support, the bill now heads to President Donald Trump, who has expressed his intent to sign it into law. The legislation requires platforms to act within 48 hours to remove harmful content and establishes penalties for creating and distributing such images. While the bill offers protection for victims, concerns about its potential impact on free speech and encrypted communications have been raised by digital civil rights groups.
Musk’s X sues to block Minnesota ‘deepfake’ law over free speech concerns (CNBC, 23/04/2025)
CNBC: X is suing Minnesota over a state law banning the use of AI-generated “deepfakes” to influence elections. The lawsuit claims the law violates free speech protections by allowing the state, rather than social media platforms, to determine what content should be removed. X argues that this could lead to the censorship of valuable political speech. Minnesota’s law is part of a broader trend, with at least 22 states enacting similar measures to prevent AI manipulation in elections. The company seeks an injunction to block the law, citing violations of the First Amendment and Section 230, which shields platforms from liability for user-generated content.
Events, jobs & announcements
Workshop. 20 May 2025, in Valencia: AI-on-Demand: Empowering AI Research & Innovation
In this workshop you’ll discover how the AI-on-Demand Platform supports AI research and innovation, test its new version in a live UX session, and gain insights from real-world use cases, best practices, and the role of eDIHs in scaling AI adoption. Register here
Event. 16-17 June 2025, in Paris: The Paris Conference on AI & Digital Ethics
The Paris Conference on AI & Digital Ethics (PCAIDE 2025) will take place on June 16-17 at Sorbonne University, Paris. This cross-disciplinary event brings together academics, industry leaders, civil society, and political stakeholders to discuss the ethical, societal, and political implications of AI and digital technologies. PCAIDE offers a unique platform for experts to engage in open dialogue and collaborate on addressing key issues in the development of sociotechnical systems.
Event. 8-11 July 2025 in Geneva: AI for Good Global Summit
The AI for Good Global Summit 2025 will be held from 8 to 11 July in Geneva. This leading UN event on AI brings together top names in AI, with a high-level lineup of global decision makers. Its goal is to identify practical applications of AI, accelerate progress towards the UN SDGs and scale solutions for global impact.
Event. 14-18 July 2025 in Thessaloniki and online: AIDA Symposium and Summer School on ‘AI/ML Cutting Edge Trends’
From July 14-18, 2025, the AIDA Symposium and Summer School will explore the latest in AI and ML. Co-organised by AIDA and Aristotle University of Thessaloniki, this hybrid event offers expert-led lectures, special sessions, and hands-on tutorials.
Job: AI+ Academic Fellowships
King’s College London is launching 20 prestigious AI+ Academic Fellowships as part of a major strategic investment in Artificial Intelligence. This initiative seeks outstanding researchers working across any discipline, from health and bioscience to law, humanities, and physical sciences, who are developing or applying AI in transformative ways. Fellows will benefit from three years of protected research time and a clear path to a permanent academic position.
Job: Technical & non-technical roles
The UK’s AI Safety Institute is recruiting for multiple roles in research, engineering, strategy, and operations. As part of a high-impact initiative focused on AI governance, successful candidates will contribute to critical work in a fast-paced, interdisciplinary environment alongside leading experts.
Job: AI Reporting Grants
Tarbell is offering grants between $1,000 and $15,000 to support original journalism exploring the societal impacts of artificial intelligence. Open to freelancers and staff journalists alike, the grants aim to fund forward-looking reporting on critical AI issues, ranging from frontier company practices and policymaking to military integration, evaluation methods, and AI’s effects on work and society. Applications are open until May 31, 2025.

AI & Disinfo Multimedia
A collection of webinars and podcasts from us and the wider community, dedicated to countering AI-generated disinformation.
Webinars
Our own and community webinar collection exploring the intersections of AI and disinformation
- LLM grooming: a new strategy to weaponise AI for FIMI purposes, with Sophia Freuden (The American Sunlight Project). Hosted by EU DisinfoLab (10/04/2025)
- Melodies of malice: Understanding how AI fuels the creation and spread of extremist music, with Heron Lopes (UCDP). Hosted by EU DisinfoLab (06/03/2025)
- Safeguarding Australian elections: Addressing AI-enabled disinformation, with Kate Seward (Microsoft ANZ), Antonio Spinelli (International IDEA) and Sam Stockwell (CETaS). Hosted by ASPI (06/02/2025)
- Faking It – Information Integrity, AI and the Law (Global Game Changers Series), with Monica Attard and Michael Davis (UTS), Creina Chapman (ACMA), Cullen Jennings (Cisco Systems) and Jason M Schultz (Canva). Hosted by University of Technology Sydney (29/11/2024)
- AI and Disinformation: A legal perspective, with Noémie Krack (KU Leuven). Hosted by EU DisinfoLab (07/11/2024)
- Generative AI and Geopolitical Disruption, with Corneliu Bjola (Oxford Internet Institute), Antonio Estella and Maria Dolores Sanchez Galera (Carlos III University), Peter Pijpers (Netherlands Defence Academy), Michael Zinkanell (Austrian Institute for European and Security Policy), and Gregory Smith (RAND Corporation). Hosted by Solaris (25/10/2024)
- DisinfoCon 2024 - Taking stock of Information Integrity in the Age of AI, with Carl Miller (Center for Analysis of Social Media at Demos). Hosted by Democracy Reporting International (26/09/2024)
- Advancing synthetic media detection: introducing veraAI, with Akis (Symeon) Papadopoulos (Centre for Research and Technology Hellas – Information Technologies Institute). Hosted by EU DisinfoLab (29/08/2024)
- Using Generative AI for the production, spread, and detection of disinformation – latest insights and innovations, with Kalina Bontcheva (University of Sheffield). Hosted by EU DisinfoLab (27/06/2024)
- Beyond Deepfakes: AI-related risks for elections, with Sophie Murphy Byrne (Logically). Hosted by EU DisinfoLab (30/05/2024)
- The Top 9 AI Breakthroughs of 2024 (You Won’t Believe Are Real). By AI Uncovered (08/11/2024)
- Tools and techniques for using AI in digital investigations, with Craig Silverman (ProPublica). Hosted by EU DisinfoLab (25/04/2024)
- OSINT & AI: Advanced Analysis, with Ivan Kravtsov (Social Links) and Gary Ruddell (Independent Cyber Threat Intelligence Professional). Hosted by Social Links (16/11/2023)
Podcasts
Community podcasts exploring the intersections of AI and disinformation
- Is technological progress always good? Hosted by Responsible bytes (02/04/2025)
- AI Is transforming geopolitics. Hosted by New Lines Magazine (21/02/2025)
- The rise of DeepSeek, the Chinese AI chatbot making waves in tech. Hosted by Teka Teka (19/02/2025)
- Privacy, digital rights, AI and the law. Hosted by Technology & Security (17/02/2025)
- How DeepSeek controls the conversation. Hosted by Digital Digging (29/01/2025)
- AI regulation and risk management in 2024. Hosted by The AI in business Podcast (21/01/2025)
- The case for human-centered AI. Hosted by McKinsey Digital (20/12/2024)
- Destination Deception 2025. Hosted by Faked Up (18/12/2024)
- What is AI slop and did it lead to a Halloween parade hoax in Dublin? Hosted by The Explainer (05/11/2024)
- Beyond the ballot: Misinformation, trust and truth in elections. Hosted by The National Security Podcast (24/10/2024)
- Do not "summarize this"! Episode 4: improve prompts to get a better summary. Hosted by Digital Digging (28/09/2024)
- How to detect fake AI-texts, episode 1 of podcast series on AI & Research. Hosted by Digital Digging (17/09/2024)
- Moderating Global Voices. Hosted by Decoding Hate (10/02/2021)

AI Disinfo in depth
A repository of research papers and reports from academia and civil society organisations, alongside articles addressing key questions related to the regulation of AI technologies and their use. It also features a collection of miscellaneous readings.
Research
A compact yet potent library dedicated to what has been explored in the realm of AI and disinformation
- AI job recruitment tools could 'enable discrimination' against marginalised groups, research finds, by ABC News (07/05/2025)
- AI is inherently ageist. That’s not just unethical – it can be costly for workers and businesses, by The Conversation (22/04/2025)
- Values in the wild: Discovering and analyzing values in real-world language model interactions, by Anthropic (21/04/2025)
- False face: Unit 42 demonstrates the alarming ease of synthetic identity creation, by Unit 42 (21/04/2025)
- Russian propaganda campaign targets France with AI-fabricated scandals, drawing 55 million views on social media, by Newsguard (17/04/2025)
- OpenAI’s new reasoning AI models hallucinate more, by Tech Crunch (17/04/2025)
- Russia’s use of genAI in disinformation and cyber influence: Strategy, use cases and future expectations, by CRC (13/04/2025)
- LLMs pass the Turing Test. But that doesn’t mean AI is now as smart as humans, by The Conversation (08/04/2025)
- What we learned from tracking AI use in global elections, by Rest of World (08/04/2025)
- Emotional prompting amplifies disinformation generation in AI large language models, by Frontiers (07/04/2025)
- AI Index 2025: State of AI in 10 Charts, by HAI Stanford University (07/04/2025)
- OpenAI’s Sora Is plagued by sexist, racist, and ableist biases, by Wired (23/03/2025)
- AI’s answers on China differ depending on the language, analysis finds, by Tech Crunch (20/03/2025)
- Users turning to ChatGPT for news may find misinformation in responses, by Logically Facts (18/03/2025)
- Deepfake detectors vulnerable ahead of election, by InnovationAus (13/03/2025)
- Russia-linked Pravda network cited on Wikipedia, LLMs, and X, by DFRLab (12/03/2025)
- Urgent action is needed to secure the UK’s AI research ecosystem against hostile state threats, by The Alan Turing Institute (07/03/2025)
- A well-funded Moscow-based global ‘news’ network has infected Western artificial intelligence tools worldwide with Russian propaganda, by Newsguard (06/03/2025)
- Chinese AI video generators unleash a flood of new nonconsensual porn, by 404 Media (06/03/2025)
- AI search has a citation problem, by Columbia Journalism Review (06/03/2025)
- An AI slop "science" site has been beating real publications in Google results by publishing fake images of SpaceX Rockets, by Futurism (06/03/2025)
- Character flaws, by Graphika (05/03/2025)
- Hybrid threats and the amplifying power of AI: Five strategic scenarios, by Alto Intelligence (01/03/2025)
- Towards a common reporting framework for AI incidents, by OECD (28/02/2025)
- Microsoft outs hackers behind tools to bypass generative AI guardrails, by Bloomberg (27/02/2025)
- The smarter AI gets, the more it starts cheating when it's losing, by The Byte (22/02/2025)
- Disrupting malicious uses of AI, by Open AI (21/02/2025)
- Deepfake threat: Only 0.1% can spot AI-generated fakes, by Security Brief (19/02/2025)
- Grok’s responses to questions on the German elections were mostly accurate and relied heavily on media sources, by Reuters Institute (19/02/2025)
- How 35 YouTube channels spread disinformation using AI about Spanish and European politics, by Maldita (14/02/2025)
- Inconsistent and unreliable: Chatbots provide inaccurate information on German elections, by Democracy Reporting International (12/02/2025)
- Representation of BBC News content in AI assistants, by BBC (11/02/2025)
- An adviser to Elon Musk’s xAI has a way to make AI more like Donald Trump, by Wired (11/02/2025)
- Red-teaming in the public interest, by Data & Society (09/02/2025)
- AI misinformation monitor of leading AI chatbots multilingual edition, by Newsguard (07/02/2025)
- Challenges and opportunities of AI in the fight against information manipulation, by VIGNIUM (07/02/2025)
- Search Google Maps with the help of AI, by Digital Digging (06/02/2025)
- Rechts, weiblich, Fake (“Right-wing, female, fake”), by Tagesschau (05/02/2025)
- Russian propaganda may be flooding AI models, by American Sunlight (01/02/2025)
- AI-Generated Disinformation in Europe and Africa, by KAS (31/01/2025)
- Scammers are creating fake news videos to blackmail victims, by Wired (27/01/2025)
- Russian propagandist turns his sights to German election, by Reuters (23/01/2025)
- Greenwashing and bothsidesism in AI chatbot answers about fossil fuels' role in climate change, by Global Witness (22/01/2025)
- Knowing less about AI makes people more open to having it in their lives, by The Conversation (20/01/2025)
- AI isn’t very good at history, by Tech Crunch (19/01/2025)
- A fact-checking tool based on Artificial Intelligence to fight disinformation on Telegram, by Universidad de Navarra (12/01/2025)
- Apple urged to withdraw 'out of control' AI news alerts, by BBC (07/01/2025)
- AI could usher in a golden age of research – but only if these cutting-edge tools aren’t restricted to a few major private companies, by The Conversation (06/01/2025)
- These defenders of democracy do not exist, by Conspirador Norteño (05/01/2025)
- An AI-Powered Audit: Do Chatbots Reproduce Political Pluralism?, by (27/12/2024)
- ChatGPT search tool vulnerable to manipulation and deception, tests show, by The Guardian (24/12/2024)
- Predictions for AI in 2025: Collaborative agents, AI skepticism, and new risks, by Stanford University (23/12/2024)
- Bridging the data provenance gap across text, speech and video, by arXiv:2412.17847 (19/12/2024)
- Fake AI versions of world-renowned academics are spreading claims that Ukraine should surrender to Russia, by The Insider (13/12/2024)
- ElevenLabs used for Russian propaganda, by AI Tool Report (11/12/2024)
- AI enters Congress: Sexually explicit deepfakes target women lawmakers, by The 19th News (11/12/2024)
- Melodies of malice: Understanding how AI fuels the creation and spread of extremist music, by GNET (11/12/2024)
- Scottish Parliament TV at risk of deepfake attacks, by Infosecurity (10/12/2024)
- Revealed: bias found in AI system used to detect UK benefits fraud, by The Guardian (06/12/2024)
- Evaluating Large Language Models capability to launch fully automated spear phishing campaigns: Validated on human subjects, by arXiv (30/11/2024)
- Study of ChatGPT citations makes dismal reading for publishers, by Tech Crunch (29/11/2024)
- How ChatGPT Search (mis)represents publisher content, by Columbia Journalism Review (27/11/2024)
- Persuasive technologies in China: implications for the future of national security, by Australian Strategic Policy Institute (26/11/2024)
- "Operation Undercut" shows multifaceted nature of SDA’s influence operations, by Recorded Future (26/11/2024)
- Philippines, China clashes trigger money-making disinformation, by France24 (26/11/2024)
- Not even Spotify is safe from AI slop, by The Verge (14/11/2024)
- AI-enabled influence operations: Safeguarding future elections, by Cetas (13/11/2024)
- Disconnected from reality: American voters grapple with AI and flawed OSINT strategies, by ISD (07/11/2024)
- AI hallucinations caused artificial intelligence to falsely describe these people as criminals, by ABC News (03/11/2024)
- Exploiting Meta’s weaknesses, deceptive political ads thrived on Facebook and Instagram in run-up to election, by ProPublica (31/10/2024)
- "Say it’s only fictional”: How the far-right is jailbreaking AI and what can be done about it, by ICCT (30/10/2024)
- How X users can earn thousands from US election misinformation and AI images, by BBC (30/10/2024)
- Hospitals use a transcription tool powered by an error-prone OpenAI model, by The Verge (28/10/2024)
- Researchers say an AI-powered transcription tool used in hospitals invents things no one ever said, by AP news (26/10/2024)
- GenAI and Democracy, by DSET (25/10/2024)
- Prebunking elections rumors: Artificial Intelligence assisted interventions increase confidence in American elections, by California Institute of Technology, Washington University in St. Louis, Cambridge University (24/10/2024)
- Large Language Models reflect the ideology of their creators, by arXiv (24/10/2024)
- Amazon Alexa users given false information attributed to Full Fact’s fact checks, by Full Fact (17/10/2024)
- Ensuring AI accountability: Auditing methods to mitigate the risks of Large Language Models, by Democracy Reporting International (14/10/2024)
- Pig butchering scams are going high tech, by Wired (12/10/2024)
- An update on disrupting deceptive uses of AI, by OpenAI (09/10/2024)
- Generative Artificial Intelligence and elections, by Center for Media Engagement (03/10/2024)
- Grok AI: A deepfake disinformation disaster for democracy, by CCDH (29/08/2024)
- OpenAI blocks AI propaganda, by AI Tool Report (19/08/2024)
- Disrupting deceptive uses of AI by covert influence operations, by OpenAI (30/05/2024)
- AI-pocalypse Now? Disinformation, AI, and the super election year, by MSC (01/04/2024)
About policy & regulations
A look at regulation and policies implemented on AI and disinformation
- Trump fires director of U.S. Copyright Office, sources say, by CBS News (10/05/2025)
- Google used AI to suspend over 39M ad accounts suspected of fraud, by Tech Crunch (16/04/2025)
- OpenAI updated its safety framework—but no longer sees mass manipulation and disinformation as a critical risk, by Fortune (16/04/2025)
- ChatGPT now lets users create fake images of politicians. We stress-tested it, by CBC (13/04/2025)
- YouTube supports the NO FAKES Act: Protecting creators and viewers in the age of AI, by YouTube (09/04/2025)
- The Dangers of AI Sovereignty, by Lawfare (07/04/2025)
- Google is shipping Gemini models faster than its AI safety reports, by Tech Crunch (03/04/2025)
- UK needs to relax AI laws or risk transatlantic ties, thinktank warns, by The Guardian (02/04/2025)
- Protecting the polls in the era of AI and deepfakes, by Microsoft (01/04/2025)
- OpenAI peels back ChatGPT’s safeguards around image creation, by Tech Crunch (28/03/2025)
- Meta to seek disclosure on political ads that use AI ahead of Canada elections, by Reuters (20/03/2025)
- Vance outlines an America first, America only AI agenda, by Lawfare (19/03/2025)
- China mandates labels for all AI-generated content in fresh push against fraud, fake news, by SCMP (15/03/2025)
- Under Trump, AI scientists are told to remove ‘ideological bias’ from powerful models, by Wired (14/03/2025)
- OpenAI urges Trump administration to remove guardrails for the industry, by CNBC (13/03/2025)
- Spain to impose massive fines for not labelling AI-generated content, by Reuters (11/03/2025)
- The AI regulation debate in China is on a whole different level, by Raymond Sun (10/03/2025)
- Meta brings its anti-scam facial-recognition test to the UK and Europe, by Tech Crunch (04/03/2025)
- Creative industries protest against UK plan about AI and copyright, by Financial Times (27/02/2025)
- Terms of (dis)service: comparing misinformation policies in text-generative AI chatbot, by EU DisinfoLab (27/02/2025)
- UK delays plans to regulate AI as ministers seek to align with Trump administration, by The Guardian (24/02/2025)
- Erotica, gore and racism: how America’s war on ‘ideological bias’ is letting AI off the leash, by The Conversation (24/02/2025)
- Artificial intelligence and intellectual property: Navigating the challenges of data scraping, by OECD.AI (14/02/2025)
- OpenAI removes certain content warnings from ChatGPT, by Tech Crunch (13/02/2025)
- Tech companies pledged to protect elections from AI. Here’s how they did, by Brennan Center (13/02/2025)
- The death of inclusive AI? Trump’s fight against diversity intensifies, by ANU Reporter (13/02/2025)
- JD Vance warns Europe to go easy on tech regulation in major AI speech, by Politico (11/02/2025)
- Donald Trump rolls back Biden-era AI regulation, sets stage for battles with US states, by CNN (09/02/2025)
- The Cambridge Handbook of the Law, Ethics and Policy of Artificial Intelligence, by Cambridge University Press (06/02/2025)
- Living repository to foster learning and exchange on AI literacy, by European Commission (04/02/2025)
- China is scheduled to hold its "Two Sessions" this week, by Raymond Sun (04/02/2025)
- Meta says it may stop development of AI systems it deems too risky, by Tech Crunch (03/02/2025)
- The EU’s AI bans come with big loopholes for police, by Politico (03/02/2025)
- Frontier AI Framework, by Meta (03/02/2025)
- AI-generated child sex abuse images targeted with new laws, by BBC (02/02/2025)
- First international AI safety report published, by Computer Weekly (30/01/2025)
- Fighting deepfakes: what’s next after legislation?, by Australian Strategic Policy Institute (24/01/2025)
- Deepfake labels and detectors still don't work, by Faked Up (22/01/2025)
- The global struggle over how to regulate AI, by Rest of World (21/01/2025)
- Trump revokes Biden executive order on addressing AI risks, by Reuters (21/01/2025)
- Feedback on the second draft of the general-purpose AI Code of Practice: Comments and recommendations, by University of Cambridge (17/01/2025)
- Civil society rallies for human rights as AI Act prohibitions deadline looms, by Euractiv (16/01/2025)
- OpenAI wooed Democrats with calls for AI regulation. Now it must charm Trump, by The Washington Post (13/01/2025)
- British PM Keir Starmer outlines bid to become AI 'world leader', by ABC (13/01/2025)
- UK can be ‘AI sweet spot’: Starmer’s tech minister on regulation, Musk, and free speech, by The Guardian (11/01/2025)
- Britain to make sexually explicit 'deepfakes' a crime, by Reuters (07/01/2025)
- Partnering for gender-responsive AI, by UN (01/01/2025)
- Copyright and Artificial Intelligence Part 2: Copyrightability, by United States Copyright Office (01/01/2025)
- Trump announces new tech policy picks for his second term, by The Verge (23/12/2024)
- Sriram Krishnan named Trump’s senior policy advisor for AI, by Tech Crunch (22/12/2024)
- Google relaxes AI usage rules, by AI Tool Report (18/12/2024)
- Meta debuts a tool for watermarking AI-generated videos, by Tech Crunch (12/12/2024)
- New research centre supporting safe and responsible AI, by Minister for Industry and Science, Australia (09/12/2024)
- Inside Britain’s plan to save the world from runaway AI, by Politico (05/12/2024)
- Rumble Video Platform sues California over anti-deepfake law, by Bloomberg (29/11/2024)
- Trump 2.0: Clash of the tech bros, by Fortune (26/11/2024)
- ChatGPT, Meta and Google generative AI should be designated 'high-risk' under new laws, bipartisan committee recommends, by ABC News (26/11/2024)
- Case closed on "nude" AI images of girls. Why police are not charging man who made them, by Pensacola News Journal (22/11/2024)
- The EU Code of Practice for General-purpose AI: Key takeaways from the First Draft, by CSIS (21/11/2024)
- What Donald Trump’s Cabinet picks mean for AI, by GZero Media (19/11/2024)
- Musk sues California over deepfake law, by AI Tool Report (18/11/2024)
- EU AI Act: Draft guidance for general purpose AIs shows first steps for Big AI to comply, by TechCrunch (14/11/2024)
- Musk to be Trump's AI advisor?, by AI Tool Report (12/11/2024)
- What Trump’s victory could mean for AI regulation, by Tech Crunch (06/11/2024)
- How AI could still impact the US election, by Gzero Media (05/11/2024)
- Reducing risks posed by synthetic content, by National Institute of Standards and Technology (01/11/2024)
- Google Photos will soon show you if an image was edited with AI, by The Verge (24/10/2024)
- More transparency for AI edits in Google Photos, by Google (24/10/2024)
- Embedded GenAI on social media: Platform law meets AI law, by DSA Observatory (16/10/2024)
- California rejects AI safety bill, by AI Tool Report (30/09/2024)
- Council of Europe opens first ever global treaty on AI for signature, by Council of Europe (05/09/2024)
- Final Report - Governing AI for humanity, by UN (01/09/2024)
- United Nations Secretary-General’s video message for launch of the Final Report, by UN (01/09/2024)
- Platforms’ AI policy updates in 2024: Labelling as the silver bullet?, by EU DisinfoLab (01/07/2024)
- A real account of deep fakes, by Cornell University (15/04/2024)
- Governing AI agents, by Hebrew University of Jerusalem (02/04/2024)
Miscellaneous readings
Recommended reading on AI and disinformation
- A Chinese AI video startup appears to be blocking politically sensitive images, by Tech Crunch (22/04/2025)
- Musk’s DOGE slashes funding to fight deepfakes, misinformation, by Bloomberg (22/04/2025)
- The Washington Post partners with OpenAI on search content, by The Washington Post (22/04/2025)
- AI floods Amazon with political books before election, by Allaboutai (22/04/2025)
- Pro-Kremlin sources jump on ‘AI Action Figure’ trend to falsely depict Zelensky as drug-abusing aid beggar, by Newsguard (20/04/2025)
- Company apologizes after AI support agent invents policy that causes user uproar, by ARS Technica (18/04/2025)
- OpenAI is building a social network, by The Verge (15/04/2025)
- How to spot AI influence in Australia’s election campaign, by Australian Strategic Policy Institute (14/04/2025)
- Hackers using AI-produced audio to impersonate tax preparers, by The Record (14/04/2025)
- Meta AI will soon train on EU users’ data, by The Verge (14/04/2025)
- In South Korea, digital sex crimes soar amid rise in AI, deepfake technology, by SCMP (11/04/2025)
- When AIs start believing other AIs’ hallucinations, we’re F&#%ed, by Medium (11/04/2025)
- How AI-powered fact-checking can help combat misinformation, by IVY EXEC (11/04/2025)
- Sex-Fantasy chatbots are leaking a constant stream of explicit messages, by Wired (11/04/2025)
- AI – A double-edged sword in the age of misinformation and disinformation, by Tech Trends (08/04/2025)
- Taiwan says China using generative AI to ramp up disinformation and ‘divide’ the island, by Rappler (08/04/2025)
- Musk's DOGE using AI to snoop on U.S. federal workers, sources say, by Reuters (08/04/2025)
- Six arrested for AI-powered investment scams that stole $20 million, by Bleeping Computer (07/04/2025)
- The Jianwei Xun case, by Medium (06/04/2025)
- How AI can understand what you're really looking for. Ctrl-F is dead, long live the chatbots, by Digital Digging (05/04/2025)
- 'I want to make you immortal': How one woman confronted her deepfakes harasser, by 404 Media (02/04/2025)
- No, Grok AI-written study does not prove that global warming is a natural phenomenon, by Newsguard (31/03/2025)
- Authors call for UK government to hold Meta accountable for copyright infringement, by The Guardian (31/03/2025)
- YouTube turns off ad revenue for fake movie trailer channels after deadline investigation, by Deadline (30/03/2025)
- Leaked data exposes a Chinese AI censorship machine, by Tech Crunch (26/03/2025)
- Viral audio of JD Vance badmouthing Elon Musk is fake, just the tip of the AI iceberg, by 404 Media (24/03/2025)
- Meta AI is finally coming to the EU, but with limitations, by Tech Crunch (20/03/2025)
- Google-backed chatbot platform caught hosting AI impersonations of 14-year-old user who died by suicide, by Futurism (20/03/2025)
- ChatGPT hit with privacy complaint over defamatory hallucinations, by Tech Crunch (19/03/2025)
- Concerns about AI and social media grow among journalists ahead of Federal Election, survey finds, by AP (18/03/2025)
- Italian newspaper says it has published world’s first AI-generated edition, by The Guardian (18/03/2025)
- AI is turbocharging organized crime, E.U. police agency warns, by NBC News (18/03/2025)
- Instagram experiments with AI-generated comments on posts, by Social Media Today (16/03/2025)
- Children making malicious deepfakes of their teachers, by The Telegraph (14/03/2025)
- How to detect deepfakes with AI, by Digital Digging (14/03/2025)
- China, Russia will 'very likely' use AI to target Canadian voters: Intelligence agency, by CBC (08/03/2025)
- State Dept. to use AI to revoke visas of foreign students who appear "pro-Hamas", by Axios (07/03/2025)
- Google reports scale of complaints about AI deepfake terrorism content to Australian regulator, by Reuters (06/03/2025)
- Creator of viral AI Trump Gaza video warns of possible dangers, by BBC (06/03/2025)
- Southeast Asia faces AI influence on elections, by Australian Strategic Policy Institute (04/03/2025)
- Fraudsters turn to generative AI to improve fake IDs for crimes, by Bloomberg (28/02/2025)
- U.S. fugitive turned Kremlin propagandist reveals Russia’s plan to hijack Western AI models, by NewsGuard (26/02/2025)
- Apple fixing bug that caused dictation feature to type the word ‘Trump’ when users said ‘racist’, by CNN (25/02/2025)
- Taiwan’s digital ministry uses AI to combat online fraud and deep fakes, by Gov Insider (24/02/2025)
- Grok 3 appears to have briefly censored unflattering mentions of Trump and Musk, by Tech Crunch (23/02/2025)
- Real or fake? AI tech sparks election deception fears, by Canberra Times (22/02/2025)
- In battle against scams, Malaysians are now armed with a chatbot to waste fraudsters’ time, by SCMP (21/02/2025)
- The APM denounces the use of images created by artificial intelligence as if they were authentic, by APM (19/02/2025)
- Ukraine warns of growing AI use in Russian cyber-espionage operations, by The Record (14/02/2025)
- Scarlett Johansson warns of dangers of AI after Kanye West deepfake goes viral, by The Guardian (13/02/2025)
- A bird’s-eye view of the Paris AI Action Summit: Regulation, power, and alternatives, by Tech Global institute (13/02/2025)
- X gives fake Myriam Spiteri Debono account verified status, by Times of Malta (12/02/2025)
- UK, US snub Paris AI summit statement, by Politico (11/02/2025)
- Esselunga joins Moratti, Minervini, Beretta in Crosetto case, by Ansa (09/02/2025)
- Forty media outlets take legal action to block «News DayFr», one of many AI-generated "parasite sites", by Libération (In French) (07/02/2025)
- The Italian press circulated an AI-generated image of Trump, Musk and Netanyahu, believing it was real, by Facta (In Italian) (04/02/2025)
- A pioneering AI project awarded for opening Large Language Models to European languages, by European Commission (03/02/2025)
- The AEC wants to stop AI and misinformation. But it’s up against a problem that is deep and dark, by The Conversation (03/02/2025)
- DeepSeek debuts with 83 percent ‘fail rate’ in NewsGuard’s Chatbot Red Team Audit, by Newsguard (29/01/2025)
- We tried out DeepSeek. It worked well, until we asked it about Tiananmen Square and Taiwan, by The Guardian (28/01/2025)
- Meta AI can now use your Facebook and Instagram data to personalize its responses, by Tech Crunch (27/01/2025)
- Sam Altman’s World now wants to link AI agents to your digital identity, by Tech Crunch (24/01/2025)
- Anthropic’s new Citations feature aims to reduce AI errors, by Tech Crunch (23/01/2025)
- Pope warns Davos summit that AI could worsen ‘crisis of truth’, by The Guardian (23/01/2025)
- An Unusual Pitch (about the launch of Pearl, an AI-powered search engine), by Wired (22/01/2025)
- Is the TikTok threat really about AI?, by GZeromedia (21/01/2025)
- The FTC’s concern about Snapchat’s My AI chatbot, by GZeromedia (21/01/2025)
- LinkedIn accused of using private messages to train AI, by BBC (21/01/2025)
- C.I.A.’s chatbot stands in for world leaders, by The New York Times (18/01/2025)
- Apple is pulling its AI-generated notifications for news after generating fake headlines, by CNN (16/01/2025)
- Viral scam: French woman duped by AI Brad Pitt love scheme faces cyberbullying, by Euronews (15/01/2025)
- Arrested by AI: Police ignore standards after facial recognition matches, by The Washington Post (13/01/2025)
- LinkedIn is in danger of being swamped by AI-generated slop, by Financial Review (12/01/2025)
- How Elon Musk’s xAI is quietly taking over X, by The Verge (10/01/2025)
- YouTubers are selling their unused video footage to AI companies, by Bloomberg (10/01/2025)
- AI social media users are not always a totally dumb idea, by Wired (08/01/2025)
- Elon Musk accused of using AI to write controversial column for German newspaper, by MSN (08/01/2025)
- Man who exploded Tesla Cybertruck outside Trump hotel in Las Vegas used generative AI, police say, by AP (08/01/2025)
- Users of AI chatbot companions say their relationships are more than 'clickbait', but views are mixed on their benefits, by ABC (06/01/2025)
- Instagram begins randomly showing users AI-generated images of themselves, by 404 Media (06/01/2025)
- Meta is killing off its own AI-powered Instagram and Facebook profiles, by The Guardian (03/01/2025)
- Meta envisages social media filled with AI-generated users, by The Financial Times (26/12/2024)
- The Year of the AI election wasn’t quite what everyone expected, by Wired (26/12/2024)
- Nothing is sacred: AI-generated slop has come for Christmas music, by 404 Media (25/12/2024)
- OpenAI whistleblower who died was being considered as witness against company, by The Guardian (21/12/2024)
- Picture of Bashar al-Assad with Tucker Carlson in Moscow almost certainly AI-generated, by Full Fact (19/12/2024)
- Elon Musk’s Grok-2 is now free—and it’s a mess, by Fast Company (18/12/2024)
- Using open-source AI, sophisticated cyber ops will proliferate, by Australian Strategic Policy Institute (17/12/2024)
- China wants to dominate in AI, and some of its models are already beating their U.S. rivals, by CNBC (17/12/2024)
- Luigi Mangione AI chatbots give voice to accused UnitedHealthcare shooter, by Forbes (17/12/2024)
- AI crackdown: China stamps out tech misuse to preserve national literature and ideology, by SCMP (15/12/2024)
- UK could offer celebs protection from AI clones, by Politico (13/12/2024)
- We looked at 78 election deepfakes. Political misinformation is not an AI problem, by AI Snake Oil (13/12/2024)
- AI helps Telegram remove 15 million suspect groups and channels in 2024, by Tech Crunch (13/12/2024)
- Tech companies claim AI can recognise human emotions. But the science doesn’t stack up, by The Conversation (13/12/2024)
- AI used to target election fraud and criminal deepfakes, by The Canberra Times (11/12/2024)
- This journalist wants you to try open-source AI: “AI is shiny, but value comes from the ideas people have to use it", by Reuters Institute (10/12/2024)
- Paul McCartney warns AI ‘could take over’ as UK debates copyright laws, by The Guardian (10/12/2024)
- China launches AI that writes politically correct docs for bureaucrats, by The Register (09/12/2024)
- Musk launches (then deletes) new image generator, by AI Tool Report (09/12/2024)
- 'It has to be a deepfake': South Korean opposition leader on martial law announcement, by CNN (05/12/2024)
- The US Department of Defense is investing in deepfake detection, by MIT Technology Review (05/12/2024)
- Misinformation researcher admits ChatGPT added fake details to his court filing, by The Verge (04/12/2024)
- Deepfake YouTube ads of celebrities promise to get you ‘Rock Hard’, by 404 Media (04/12/2024)
- Is the AI Doomsday Narrative the Product of a Big Tech Conspiracy?, by Obsolete (04/12/2024)
- What we saw on our platforms during 2024’s global elections, by META (03/12/2024)
- Google’s video generator comes to more customers, by Tech Crunch (03/12/2024)
- AWS’ new service tackles AI hallucinations, by Tech Crunch (03/12/2024)
- Meta says gen AI had muted impact on global elections this year, by Reuters (03/12/2024)
- AI-Powered ‘Death Clock’ promises a more exact prediction of the 'day you’ll die', by Bloomberg (30/11/2024)
- The legal battle against explicit AI deepfakes, by The Financial Times (28/11/2024)
- Amazon, Google and Meta are ‘pillaging culture, data and creativity’ to train AI, Australian inquiry finds, by The Guardian (27/11/2024)
- AI-generated slop is quietly conquering the internet. Is it a threat to journalism or a problem that will fix itself?, by Reuters Institute (26/11/2024)
- Russia plotting to use AI to enhance cyber-attacks against UK, minister will warn, by The Guardian (25/11/2024)
- Deepfake videos appear to target Canadian immigrants for thousands of dollars, by CTV News (25/11/2024)
- AI increasingly used for sextortion, scams and child abuse, says senior UK police chief, by The Guardian (24/11/2024)
- AI is taking your job, by Kent C. Dodds Blog (21/11/2024)
- Deus in machina: Swiss church installs AI-powered Jesus, by The Guardian (21/11/2024)
- AI detection tool helps journalists identify and combat deepfakes, by IJNET (20/11/2024)
- What Donald Trump’s cabinet picks mean for AI, by Gzero Media (19/11/2024)
- Fake Claims of Elon Musk’s Latest Acquisitions, by NewsGuard (18/11/2024)
- Singapore steps up fight against deepfakes ahead of election, by Nikkei Asia (17/11/2024)
- Pokemon players create AI world map, by Digital Digging (15/11/2024)
- This 'AI Granny' bores scammers to tears, by PCMag (15/11/2024)
- 2024 AI and Democracy Hackathon, by GMF Technology (11/11/2024)
- AI didn’t sway the election, but it deepened the partisan divide, by Washington Post (09/11/2024)
- Mistral Moderation API, by Mistral (07/11/2024)
- Perplexity launches controversial AI election hub, by AI Tool Report (04/11/2024)
- Thousands go to fake AI-invented Dublin Halloween parade, by EuroNews (01/11/2024)
- Introducing ChatGPT search, by OpenAI (31/10/2024)
- Electoral disinformation, but no AI revolution ahead of the US election – yet, by International Journalist Network (29/10/2024)
- These viral images of the Hamas-Israel war aren’t real. Does it matter?, by SBS (24/10/2024)
- AI was weaponized for FIMI purposes: Russia reportedly paid a former Florida cop to pump out anti-Harris deepfakes and disinformation, by The Verge (24/10/2024)
- Real-time video deepfake scams are here. This tool attempts to zap them, by Wired (15/10/2024)
- Meta fed its AI on almost everything you’ve posted publicly since 2007, by The Verge (12/09/2024)
- Lingo Telecom agrees to $1 million fine over AI-generated Biden robocalls, by Reuters (21/08/2024)
- AI-written obituaries are compounding people’s grief, by Fast Company (26/07/2024)

Community
A list of tools to fight AI-driven disinformation, along with projects and initiatives facing the challenges posed by AI. The ultimate aim is to foster cooperation and resilience within the counter-disinformation community.
Tools
A repository of tools to tackle AI-manipulated and/or AI-generated disinformation.
INVID-WeVerify plugin
Deepware Scanner
True Media
Illuminarty.AI
GPTZero
Pangram Labs
Originality.ai
Hugging Face
Draft & Goal
AI Voice Detector
Hive Moderation
DebunkBot
IntellGPT
AI Research Pilot
AI Research Pilot by Henk van Ess is a lightweight, browser-based tool designed to help investigators, journalists, and researchers get more out of AI, not by using AI as a source, but as a guide to real sources.
Initiatives & organisations
Organisations working in the field and initiatives launched by community members to address the challenges posed by AI in the disinformation field.
EU-funded project: veraAI
veraAI is a research and development project focusing on disinformation analysis and AI supported verification tools and services.
Cluster of EU-funded projects: 'AI against disinformation'
AI against disinformation is a cluster of six European Commission co-funded research projects, which include research on AI methods for countering online disinformation. The focus of ongoing research is on detection of AI-generated content and development of AI-powered tools and technologies that support verification professionals and citizens with content analysis and verification.
AI Forensics
AI Forensics is a European non-profit that investigates influential and opaque algorithms. They hold major technology platforms accountable by conducting independent and high-profile technical investigations to uncover and expose the harms caused by their algorithms. They empower the research community with tools, datasets and methodologies to strengthen the AI audit ecosystem.
AI Tracking Center, by NewsGuard
AI Tracking Center is intended to highlight the ways that generative AI has been deployed to turbocharge misinformation operations and unreliable news. The Center includes a selection of NewsGuard’s reports, insights, and debunks related to artificial intelligence.
AlgorithmWatch
AlgorithmWatch is a non-governmental, non-profit organisation based in Berlin and Zurich. They fight for a world where algorithms and Artificial Intelligence (AI) do not weaken justice, human rights, democracy and sustainability, but strengthen them.
European AI & Society Fund
The European AI & Society Fund empowers a diverse ecosystem of civil society organisations to shape policies around AI in the public interest and galvanises the philanthropic sector to sustain this vital work.
AI Media Observatory
The European AI Media Observatory is a knowledge platform that monitors and curates relevant research on AI in media, provides expert perspectives on the potentials and challenges that AI poses for the media sector and allows stakeholders to easily get in touch with relevant experts in the field via their directory.
GZERO Media newsletter
GZERO’s newsletter offers exclusive insights into our rapidly changing world, covering topics such as AI-driven disinformation and a weekly exclusive edition written by Ian Bremmer.
Queer in AI
Queer in AI is an initiative established by queer scientists in AI with the mission to make the AI community a safe and inclusive place that welcomes, supports, and values LGBTQIA2S+ people. Their aim is to build a visible community of queer AI scientists through different actions.
AI for Good
AI for Good is the United Nations’ leading platform on Artificial Intelligence for sustainable development. Its mission is to leverage the transformative potential of artificial intelligence (AI) to drive progress toward achieving the UN Sustainable Development Goals.
Omdena
Omdena is a collaborative AI platform where a global community of changemakers unites to co-create real-world tech solutions for social impact. It combines collective intelligence with hands-on collaboration, empowering the community from across all industries to learn, build, and deploy meaningful AI projects.
Faked Up academic library
Faked Up curates a library of academic studies and reports on digital deception and misinformation, offering accessible insights for subscribers. The collection includes studies from 2020 onward, organised into clusters like misinformation prevalence, fact-checking effects, and AI-generated deceptive content. It serves as a practical resource for understanding and addressing misinformation challenges.
AI Incident Database
AI Incident Database is dedicated to indexing the collective history of harms or near harms realized in the real world by the deployment of artificial intelligence systems. Like similar databases in aviation and computer security, the AI Incident Database aims to learn from experience to prevent or mitigate bad outcomes.
TGuard project
The TGuard project develops innovative methods for detecting disinformation in social media and formulating effective strategies for preventing AI-generated false reports.
Last updated: 28/04/2025
The articles and resources listed in this hub do not necessarily represent EU DisinfoLab’s position. This hub is an effort to give voice to all members of the community countering AI-generated disinformation.