
AI Disinfo Hub
The development of artificial intelligence (AI) technologies has long posed a challenge for the counter-disinformation field, enabling the manipulation of content and accelerating its spread. Recent technical developments have exponentially increased these challenges. While AI offers opportunities for legitimate purposes, AI-generated content is also widely produced and disseminated across the internet, causing – intentionally or not – harm and deception.
Are you more into podcast and video content? You will find a repository of podcasts and webinars in AI Disinfo Multimedia, while AI Disinfo in Depth will feature research reports from academia and civil society organisations, along with the burning questions related to the regulation of AI technologies and their use. In addition, the Community working at the intersection of AI and disinformation will have a dedicated space where initiatives and resources will be listed, as well as useful tools.
In short, this hub is your go-to resource for understanding the impact of AI on disinformation and finding ways to combat it.
Here, researchers, policymakers, and the public can access reliable tools and insights to navigate this complex landscape. Together, we’re building a community to tackle these challenges head-on, promoting awareness and digital literacy.
Join us in the fight against AI-driven disinformation. Follow us and share with the community!

NEURAL NEWS & TRENDS
We've curated a selection of articles from external sources that delve into the topic from different perspectives. Keep exploring the latest news and publications on AI and disinformation!
News
DeepSeek debuts with 83 percent ‘fail rate’ in NewsGuard’s Chatbot Red Team Audit (NewsGuard, 29/01/2025)
NewsGuard: DeepSeek, a new Chinese AI chatbot, has been ranked poorly in a NewsGuard audit, with an 83% “fail rate” in providing accurate news information. The tool ranked 10th out of 11 tested chatbots, behind leading Western competitors. DeepSeek often failed to debunk false claims and was found to perpetuate false narratives, particularly when responding to questions related to China, where it repeated the Chinese government’s position. Additionally, it provided outdated information, as it was only trained on data up until October 2023. Overall, according to another article by NewsGuard, the Chinese chatbot phenom is a “disinformation machine.”
Meta AI can now use your Facebook and Instagram data to personalise its responses (Tech Crunch, 27/01/2025)
Tech Crunch: Meta is enhancing its AI chatbot with a new memory feature that allows it to remember details from previous conversations, such as preferences or interests, across Facebook, Messenger, and WhatsApp. The bot can also use personal account data, like location or Instagram activity, to provide personalized recommendations. However, the memory feature won’t apply in group chats, and users can delete memories at any time. This upgrade is currently available in the U.S. and Canada, but there’s no opt-out option for the personalized recommendations.
Scammers are creating fake news videos to blackmail victims (Wired, 27/01/2025)
Wired: Scammers, known as Yahoo Boys, are using AI-generated news videos to blackmail victims. These videos impersonate reputable networks like CNN, claiming the victim is wanted for crimes, often including explicit images. The fraudsters use AI-generated news anchors to add credibility, pressuring victims for payment by creating distressing, fake reports.
Sam Altman’s World now wants to link AI agents to your digital identity (Tech Crunch, 24/01/2025)
Tech Crunch: Altman’s World project is evolving to verify AI agents, linking them to verified human identities. This aligns with OpenAI’s Operator, which allows AI agents to act autonomously on platforms. World’s proof-of-human tools could help businesses verify AI agents, ensuring they’re acting on behalf of real people, transforming online interactions.
Anthropic’s new Citations feature aims to reduce AI errors (Tech Crunch, 23/01/2025)
Tech Crunch: Anthropic has introduced Citations, a new feature for its Claude AI models that allows developers to ground responses in specific source documents, such as emails. Available via Anthropic’s API and Google’s Vertex AI, Citations helps reduce hallucinations by providing detailed references to the exact sentences and passages used. The feature currently supports Claude 3.5 Sonnet and Claude 3.5 Haiku but comes with additional costs based on document length.
Russia-linked AI websites aim to dupe German voters (Reuters, 23/01/2025)
Reuters: A joint investigation by NewsGuard, a rating system for news and information websites, and German outlet Correctiv uncovered a network of 102 Russia-linked websites spreading AI-generated disinformation ahead of Germany’s February election. The sites, allegedly tied to former U.S. police officer John Mark Dougan, push false narratives targeting pro-NATO politicians while favoring Russia-friendly parties like the far-right AfD. Dougan denies any connection, and Russia has consistently rejected claims of disinformation campaigns.
Pope warns Davos summit that AI could worsen ‘crisis of truth’ (The Guardian, 23/01/2025)
The Guardian: Pope Francis warned global leaders at the World Economic Forum in Davos that AI could worsen the “crisis of truth,” urging governments and businesses to exercise oversight. In a written address, he highlighted ethical concerns and AI’s potential to blur the line between fact and fiction. The Pope himself has been the subject of viral AI-generated deepfakes, underscoring his concerns about misinformation.
An unusual pitch: about the launch of Pearl (Wired, 22/01/2025)
Wired: Pearl, a new AI-powered search engine, combines AI responses with human fact-checking and expert consultations. Developed by Andy Kurtzig, the founder of JustAnswer, Pearl aims to reduce misinformation by offering a freemium model. Users get free AI answers, and for deeper insights, they can connect with experts for a subscription fee. Kurtzig argues that Pearl’s integration of human experts shields it from potential legal issues like Section 230 liability, unlike other AI search engines. However, when Wired tested the platform, the AI responses and fact-checks were often unclear or generic, and the human expert consultations didn’t always offer better insights, especially given the cost of the service.
LinkedIn accused of using private messages to train AI (BBC, 21/01/2025)
BBC: A US lawsuit accuses LinkedIn of secretly sharing Premium users’ private messages to train AI models, opting them into the program without clear consent. The Microsoft-owned company allegedly changed its privacy policy to conceal these actions, though it denies the claims. The lawsuit seeks damages for privacy violations and breach of contract.
Trump revokes Biden executive order on addressing AI risks (Reuters, 21/01/2025)
Reuters: President Donald Trump revoked a 2023 executive order by Joe Biden that aimed to mitigate AI risks to national security, workers, and consumers. The order required AI developers to share safety test results with the government, a move Republicans argued hindered innovation. However, Trump left intact a separate Biden order supporting AI data centers’ energy needs.
Knowing less about AI makes people more open to having it in their lives (The Conversation, 20/01/2025)
The Conversation: People with less AI knowledge are more open to using it, a phenomenon researchers call the “lower literacy-higher receptivity” link. This is driven by a sense of AI’s “magicalness,” especially in human-like tasks, while those with higher literacy see it as a functional tool. Policymakers face a challenge: increasing AI literacy without dampening the enthusiasm that drives adoption.
AI isn’t very good at history (Tech Crunch, 19/01/2025)
Tech Crunch: AI struggles with high-level history exams, with GPT-4 Turbo scoring only 46% accuracy on a new benchmark, Hist-LLM. Researchers found that LLMs often extrapolate from prominent historical data, leading to errors, and perform worse on underrepresented regions. Despite these flaws, experts see potential for AI to assist historians with improved training and benchmarks.
Apple is pulling its AI-generated notifications for news after generating fake headlines (CNN, 16/01/2025)
CNN: Apple is pausing its AI-generated news summaries after the feature produced false headlines, sparking backlash from media organizations. The company plans to improve the technology and reintroduce it with clearer AI disclaimers. Press freedom groups warn that inaccurate AI-generated news poses risks to public trust in reliable information.
Viral scam: French woman duped by AI Brad Pitt love scheme faces cyberbullying (Euronews, 15/01/2025)
Euronews: A French woman was scammed out of €830,000 by fraudsters using AI-generated images and fake social media to impersonate Brad Pitt. Believing she was in a relationship with the actor, she sent money over a year before realising the deception. After sharing her story, she faced widespread online harassment instead of sympathy.
LinkedIn is in danger of being swamped by AI-generated slop (Financial Review, 12/01/2025)
Financial Review: According to a study, the number of words posted on LinkedIn has increased by 107% since the introduction of AI writing tools. Moreover, it seems that posts generated with these tools are likely to receive half as much engagement.
Events, jobs & announcements
Explore upcoming AI-related events, jobs and announcements that may be of interest to members of the counter-disinformation community.
Event. 6 February 2025 in Canberra and online: Safeguarding Australian elections: Addressing AI-enabled disinformation
On the 6th of February 2025, the Australian Policy Institute will co-host the panel discussion titled ‘Safeguarding Australian Elections: Addressing AI-Enabled Disinformation,’ exploring the intersection of AI, electoral integrity, and democratic resilience. The panel will feature Kate Seward (Microsoft ANZ), Antonio Spinelli (International IDEA), and Sam Stockwell (CETaS).
Event. 27 February 2025 in Canberra: Foreign Interference, Elections, and AI
On Thursday, February 27, 2025, the ISACA Canberra Chapter will host the event ‘Foreign Interference, Elections, and AI,‘ exploring the evolving challenges to Australia’s democratic processes, including disinformation campaigns, AI-driven attacker trends, and election security. The event will feature Bevan Read and James Murphy, who will provide insights on proactive measures being taken to combat these threats and safeguard the integrity of Australia’s elections.
Event. 10-11 February 2025 in Paris: Artificial Intelligence Action Summit
On 10 and 11 February 2025, France will host the Artificial Intelligence (AI) Action Summit at the Grand Palais, gathering Heads of State and Government, leaders of international organizations, CEOs of small and large companies, representatives of academia, non-governmental organizations, artists and members of civil society.
Event. 10-13 March 2025 in Las Vegas: HumanX 2025
HumanX will take place in Las Vegas from March 10–13, 2025. Tailored for leaders, founders, policymakers, and investors shaping the future of artificial intelligence, it promises to be a defining event in the AI space.
Workshop. 8-9 May 2025. Online: What is work worth? Exploring what generative AI means for workers’ lives and labor
On May 8 and 9, 2025, Data & Society will host an online workshop on the intersection of generative AI technologies and work. This workshop aims to foster a collaborative environment to discuss how we investigate, think about, resist, and shape the emerging uses of generative AI technologies across a broad range of work contexts.
Event. 8-11 July 2025 in Geneva: AI for Good Global Summit
The AI for Good Global Summit 2025 will be held from 8 to 11 July in Geneva. This leading UN event on AI brings together top names in AI, with a high-level lineup of global decision makers. Its goal is to identify practical applications of AI, accelerate progress towards the UN SDGs and scale solutions for global impact.
Job. Writer - Robotics/Tech
The Rundown is looking for a Writer – Robotics/Tech to be responsible for researching and writing its bi-weekly robotics and tech-focused newsletters.

AI & Disinfo Multimedia
A collection of webinars and podcasts from us and the wider community, dedicated to countering AI-generated disinformation.
Webinars
A collection of our own and community webinars exploring the intersections of AI and disinformation
- Faking It - Information Integrity, AI and the Law (Global Game Changers Series), with Monica Attard and Michael Davis (UTS), Creina Chapman (ACMA), Cullen Jennings (Cisco Systems) and Jason M Schultz (Canva). Hosted by University of Technology Sydney (29/11/2024)
- AI and Disinformation: A legal perspective, with Noémie Krack (KU Leuven). Hosted by EU DisinfoLab (07/11/2024)
- Generative AI and Geopolitical Disruption, with Corneliu Bjola (Oxford Internet Institute), Antonio Estella and Maria Dolores Sanchez Galera (Carlos III University), Peter Pijpers (Netherlands Defence Academy), Michael Zinkanell (Austrian Institute for European and Security Policy), and Gregory Smith (RAND Corporation). Hosted by Solaris (25/10/2024)
- DisinfoCon 2024 - Taking stock of Information Integrity in the Age of AI, with Carl Miller (Center for Analysis of Social Media at Demos). Hosted by Democracy Reporting International (26/09/2024)
- Advancing synthetic media detection: introducing veraAI, with Akis (Symeon) Papadopoulos (Centre for Research and Technology Hellas – Information Technologies Institute). Hosted by EU DisinfoLab (29/08/2024)
- Using Generative AI for the production, spread, and detection of disinformation – latest insights and innovations, with Kalina Bontcheva (University of Sheffield). Hosted by EU DisinfoLab (27/06/2024)
- Beyond Deepfakes: AI-related risks for elections, with Sophie Murphy Byrne (Logically). Hosted by EU DisinfoLab (30/05/2024)
- The Top 9 AI Breakthroughs of 2024 (You Won’t Believe Are Real). By AI Uncovered (08/11/2024)
- Tools and techniques for using AI in digital investigations, with Craig Silverman (ProPublica). Hosted by EU DisinfoLab (25/04/2024)
- OSINT & AI: Advanced Analysis, with Ivan Kravtsov (Social Links) and Gary Ruddell (Independent Cyber Threat Intelligence Professional). Hosted by Social Links (16/11/2023)
Podcasts
Community podcasts exploring the intersections of AI and disinformation
- How DeepSeek controls the conversation. Hosted by Digital Digging (29/01/2025)
- AI regulation and risk management in 2024. Hosted by The AI in business Podcast (21/01/2025)
- The case for human-centered AI. Hosted by McKinsey Digital (20/12/2024)
- Destination Deception 2025. Hosted by Faked Up (18/12/2024)
- What is AI slop and did it lead to a Halloween parade hoax in Dublin? Hosted by The Explainer (05/11/2024)
- Beyond the ballot: Misinformation, trust and truth in elections. Hosted by The National Security Podcast (24/10/2024)
- Do not "summarize this"! Episode 4: improve prompts to get a better summary. Hosted by Digital Digging (28/09/2024)
- How to detect fake AI-texts, episode 1 of podcast series on AI & Research. Hosted by Digital Digging (17/09/2024)
- Moderating Global Voices. Hosted by Decoding Hate (10/02/2021)

AI Disinfo in Depth
A repository of research papers and reports from academia and civil society organisations, alongside articles addressing key questions related to the regulation of AI technologies and their use. It also features a collection of miscellaneous readings.
Research
A compact yet potent library of research exploring AI and disinformation
- Greenwashing and bothsidesism in AI chatbot answers about fossil fuels' role in climate change, by Global Witness (22/01/2025)
- Apple urged to withdraw 'out of control' AI news alerts, by BBC (07/01/2025)
- AI could usher in a golden age of research – but only if these cutting-edge tools aren’t restricted to a few major private companies, by The Conversation (06/01/2025)
- These defenders of democracy do not exist, by Conspirador Norteño (05/01/2025)
- ChatGPT search tool vulnerable to manipulation and deception, tests show, by The Guardian (24/12/2024)
- Predictions for AI in 2025: Collaborative agents, AI skepticism, and new risks, by Stanford University (23/12/2024)
- Fake AI versions of world-renowned academics are spreading claims that Ukraine should surrender to Russia, by The Insider (13/12/2024)
- ElevenLabs used for Russian propaganda, by AI Tool Report (11/12/2024)
- AI enters Congress: Sexually explicit deepfakes target women lawmakers, by The 19th News (11/12/2024)
- Melodies of malice: Understanding how AI fuels the creation and spread of extremist music, by GNET (11/12/2024)
- Scottish Parliament TV at risk of deepfake attacks, by Infosecurity (10/12/2024)
- Revealed: bias found in AI system used to detect UK benefits fraud, by The Guardian (06/12/2024)
- Evaluating Large Language Models capability to launch fully automated spear phishing campaigns: Validated on human subjects, by arXiv (30/11/2024)
- Study of ChatGPT citations makes dismal reading for publishers, by Tech Crunch (29/11/2024)
- How ChatGPT Search (mis)represents publisher content, by Columbia Journalism Review (27/11/2024)
- Persuasive technologies in China: implications for the future of national security, by Australian Strategic Policy Institute (26/11/2024)
- "Operation Undercut" shows multifaceted nature of SDA’s influence operations, by Recorded Future (26/11/2024)
- Philippines, China clashes trigger money-making disinformation, by France24 (26/11/2024)
- Not even Spotify is safe from AI slop, by The Verge (14/11/2024)
- AI-enabled influence operations: Safeguarding future elections, by Cetas (13/11/2024)
- Disconnected from reality: American voters grapple with AI and flawed OSINT strategies, by ISD (07/11/2024)
- AI hallucinations caused artificial intelligence to falsely describe these people as criminals, by ABC News (03/11/2024)
- Exploiting Meta’s weaknesses, deceptive political ads thrived on Facebook and Instagram in run-up to election, by Pro Publica (31/10/2024)
- "Say it’s only fictional”: How the far-right is jailbreaking AI and what can be done about it, by ICCT (30/10/2024)
- How X users can earn thousands from US election misinformation and AI images, by BBC (30/10/2024)
- Hospitals use a transcription tool powered by an error-prone OpenAI model, by The Verge (28/10/2024)
- Researchers say an AI-powered transcription tool used in hospitals invents things no one ever said, by AP news (26/10/2024)
- GenAI and Democracy, by DSET (25/10/2024)
- Prebunking elections rumors: Artificial Intelligence assisted interventions increase confidence in American elections, by California Institute of Technology, Washington University in St. Louis, Cambridge University (24/10/2024)
- Large Language Models reflect the ideology of their creators, by arXiv (24/10/2024)
- Amazon Alexa users given false information attributed to Full Fact’s fact checks, by Full Fact (17/10/2024)
- Ensuring AI accountability: Auditing methods to mitigate the risks of Large Language Models, by Democracy Reporting International (14/10/2024)
- Pig butchering scams are going high tech, by Wired (12/10/2024)
- An update on disrupting deceptive uses of AI, by OpenAI (09/10/2024)
- Generative Artificial Intelligence and elections, by Center for Media Engagement (03/10/2024)
- Grok AI: A deepfake disinformation disaster for democracy, by CCDH (29/08/2024)
- OpenAI blocks AI propaganda, by AI Tool Report (19/08/2024)
- Disrupting deceptive uses of AI by covert influence operations, by OpenAI (30/05/2024)
- AI-pocalypse Now? Disinformation, AI, and the super election year, by MSC (01/04/2024)
About policy & regulations
A look at regulation and policies implemented on AI and disinformation
- First international AI safety report published, by Computer Weekly (30/01/2025)
- Fighting deepfakes: what’s next after legislation?, by Australian Strategic Policy Institute (24/01/2025)
- The global struggle over how to regulate AI, by Rest of World (21/01/2025)
- Feedback on the second draft of the general-purpose AI Code of Practice: Comments and recommendations, by University of Cambridge (17/01/2025)
- Civil society rallies for human rights as AI Act prohibitions deadline looms, by EuroActiv (16/01/2025)
- OpenAI wooed Democrats with calls for AI regulation. Now it must charm Trump, by The Washington Post (13/01/2025)
- British PM Keir Starmer outlines bid to become AI 'world leader', by ABC (13/01/2025)
- UK can be ‘AI sweet spot’: Starmer’s tech minister on regulation, Musk, and free speech, by The Guardian (11/01/2025)
- Britain to make sexually explicit 'deepfakes' a crime, by Reuters (07/01/2025)
- Partnering for gender-responsive AI, by UN (01/01/2025)
- Copyright and Artificial Intelligence Part 2: Copyrightability, by United States Copyright Office (01/01/2025)
- Trump announces new tech policy picks for his second term, by The Verge (23/12/2024)
- Sriram Krishnan named Trump’s senior policy advisor for AI, by Tech Crunch (22/12/2024)
- Google relaxes AI usage rules, by AI Tool Report (18/12/2024)
- Meta debuts a tool for watermarking AI-generated videos, by Tech Crunch (12/12/2024)
- New research centre supporting safe and responsible AI, by Minister for Industry and Science, Australia (09/12/2024)
- Inside Britain’s plan to save the world from runaway AI, by Politico (05/12/2024)
- Rumble Video Platform sues California over anti-deepfake law, by Bloomberg (29/11/2024)
- Trump 2.0: Clash of the tech bros, by Fortune (26/11/2024)
- ChatGPT, Meta and Google generative AI should be designated 'high-risk' under new laws, bipartisan committee recommends, by ABC News (26/11/2024)
- Case closed on "nude" AI images of girls. Why police are not charging man who made them, by Pensacola News Journal (22/11/2024)
- The EU Code of Practice for General-purpose AI: Key takeaways from the First Draft, by CSIS (21/11/2024)
- What Donald Trump’s Cabinet picks mean for AI, by GZero Media (19/11/2024)
- Musk sues California over deepfake law, by AI Tool Report (18/11/2024)
- EU AI Act: Draft guidance for general purpose AIs shows first steps for Big AI to comply, by TechCrunch (14/11/2024)
- Musk to be Trump's AI advisor?, by AI Tool Report (12/11/2024)
- What Trump’s victory could mean for AI regulation, by Tech Crunch (06/11/2024)
- How AI could still impact the US election, by Gzero Media (05/11/2024)
- Reducing risks posed by synthetic content, by National Institute of Standards and Technology (01/11/2024)
- Google Photos will soon show you if an image was edited with AI, by The Verge (24/10/2024)
- More transparency for AI edits in Google Photos, by Google (24/10/2024)
- Embedded GenAI on social media: Platform law meets AI law, by DSA Observatory (16/10/2024)
- California rejects AI safety bill, by AI Tool Report (30/09/2024)
- Council of Europe opens first ever global treaty on AI for signature, by Council of Europe (05/09/2024)
- Final Report - Governing AI for humanity, by UN (01/09/2024)
- United Nations Secretary-General’s video message for launch of the Final Report, by UN (01/09/2024)
- Platforms’ AI policy updates in 2024: Labelling as the silver bullet?, by EU DisinfoLab (01/07/2024)
- A real account of deep fakes, by Cornell University (15/04/2024)
- Governing AI agents, by Hebrew University of Jerusalem (02/04/2024)
Miscellaneous readings
Recommended reading on AI and disinformation
- We tried out DeepSeek. It worked well, until we asked it about Tiananmen Square and Taiwan, by The Guardian (28/01/2025)
- Is the TikTok threat really about AI?, by GZERO Media (21/01/2025)
- The FTC’s concern about Snapchat’s My AI chatbot, by GZERO Media (21/01/2025)
- C.I.A.’s chatbot stands in for world leaders, by The New York Times (18/01/2025)
- Arrested by AI: Police ignore standards after facial recognition matches, by The Washington Post (13/01/2025)
- How Elon Musk’s xAI is quietly taking over X, by The Verge (10/01/2025)
- YouTubers are selling their unused video footage to AI companies, by Bloomberg (10/01/2025)
- AI social media users are not always a totally dumb idea, by Wired (08/01/2025)
- Elon Musk accused of using AI to write controversial column for German newspaper, by MSN (08/01/2025)
- Man who exploded Tesla Cybertruck outside Trump hotel in Las Vegas used generative AI, police say, by AP (08/01/2025)
- Users of AI chatbot companions say their relationships are more than 'clickbait', but views are mixed on their benefits, by ABC (06/01/2025)
- Instagram begins randomly showing users AI-generated images of themselves, by 404 Media (06/01/2025)
- Meta is killing off its own AI-powered Instagram and Facebook profiles, by The Guardian (03/01/2025)
- Meta envisages social media filled with AI-generated users, by The Financial Times (26/12/2024)
- The Year of the AI election wasn’t quite what everyone expected, by Wired (26/12/2024)
- Nothing is sacred: AI-generated slop has come for Christmas music, by 404 Media (25/12/2024)
- OpenAI whistleblower who died was being considered as witness against company, by The Guardian (21/12/2024)
- Picture of Bashar al-Assad with Tucker Carlson in Moscow almost certainly AI-generated, by Full Fact (19/12/2024)
- Elon Musk’s Grok-2 is now free—and it’s a mess, by Fast Company (18/12/2024)
- Using open-source AI, sophisticated cyber ops will proliferate, by Australian Strategic Policy Institute (17/12/2024)
- China wants to dominate in AI, and some of its models are already beating their U.S. rivals, by CNBC (17/12/2024)
- Luigi Mangione AI chatbots give voice to accused UnitedHealthcare shooter, by Forbes (17/12/2024)
- AI crackdown: China stamps out tech misuse to preserve national literature and ideology, by SCMP (15/12/2024)
- UK could offer celebs protection from AI clones, by Politico (13/12/2024)
- We looked at 78 election deepfakes. Political misinformation is not an AI problem, by AI Snake Oil (13/12/2024)
- AI helps Telegram remove 15 million suspect groups and channels in 2024, by Tech Crunch (13/12/2024)
- Tech companies claim AI can recognise human emotions. But the science doesn’t stack up, by The Conversation (13/12/2024)
- AI used to target election fraud and criminal deepfakes, by The Canberra Times (11/12/2024)
- This journalist wants you to try open-source AI: “AI is shiny, but value comes from the ideas people have to use it", by Reuters Institute (10/12/2024)
- Paul McCartney warns AI ‘could take over’ as UK debates copyright laws, by The Guardian (10/12/2024)
- China launches AI that writes politically correct docs for bureaucrats, by The Register (09/12/2024)
- Musk launches (then deletes) new image generator, by AI Tool Report (09/12/2024)
- 'It has to be a deepfake': South Korean opposition leader on martial law announcement, by CNN (05/12/2024)
- The US Department of Defense is investing in deepfake detection, by MIT Technology Review (05/12/2024)
- Misinformation researcher admits ChatGPT added fake details to his court filing, by The Verge (04/12/2024)
- Deepfake YouTube ads of celebrities promise to get you ‘Rock Hard’, by 404 Media (04/12/2024)
- What we saw on our platforms during 2024’s global elections, by Meta (03/12/2024)
- Google’s video generator comes to more customers, by Tech Crunch (03/12/2024)
- AWS’ new service tackles AI hallucinations, by Tech Crunch (03/12/2024)
- Meta says gen AI had muted impact on global elections this year, by Reuters (03/12/2024)
- AI-Powered ‘Death Clock’ promises a more exact prediction of the 'day you’ll die', by Bloomberg (30/11/2024)
- The legal battle against explicit AI deepfakes, by The Financial Times (28/11/2024)
- Amazon, Google and Meta are ‘pillaging culture, data and creativity’ to train AI, Australian inquiry finds, by The Guardian (27/11/2024)
- AI-generated slop is quietly conquering the internet. Is it a threat to journalism or a problem that will fix itself?, by Reuters Institute (26/11/2024)
- Russia plotting to use AI to enhance cyber-attacks against UK, minister will warn, by The Guardian (25/11/2024)
- Deepfake videos appear to target Canadian immigrants for thousands of dollars, by CTV News (25/11/2024)
- AI increasingly used for sextortion, scams and child abuse, says senior UK police chief, by The Guardian (24/11/2024)
- AI is taking your job, by Kent C. Dodds Blog (21/11/2024)
- Deus in machina: Swiss church installs AI-powered Jesus, by The Guardian (21/11/2024)
- AI detection tool helps journalists identify and combat deepfakes, by IJNET (20/11/2024)
- Fake Claims of Elon Musk’s Latest Acquisitions, by NewsGuard (18/11/2024)
- Singapore steps up fight against deepfakes ahead of election, by Nikkei Asia (17/11/2024)
- Pokemon players create AI world map, by Digital Digging (15/11/2024)
- This 'AI Granny' bores scammers to tears, by PCMag (15/11/2024)
- 2024 AI and Democracy Hackathon, by GMF Technology (11/11/2024)
- AI didn’t sway the election, but it deepened the partisan divide, by Washington Post (09/11/2024)
- Mistral Moderation API, by Mistral (07/11/2024)
- Perplexity launches controversial AI election hub, by AI Tool Report (04/11/2024)
- Thousands go to fake AI-invented Dublin Halloween parade, by EuroNews (01/11/2024)
- Introducing ChatGPT search, by OpenAI (31/10/2024)
- Electoral disinformation, but no AI revolution ahead of the US election – yet, by International Journalist Network (29/10/2024)
- These viral images of the Hamas-Israel war aren’t real. Does it matter?, by SBS (24/10/2024)
- AI was weaponized for FIMI purposes: Russia reportedly paid a former Florida cop to pump out anti-Harris deepfakes and disinformation, by The Verge (24/10/2024)
- Real-time video deepfake scams are here. This tool attempts to zap them, by Wired (15/10/2024)
- Meta fed its AI on almost everything you’ve posted publicly since 2007, by The Verge (12/09/2024)
- Lingo Telecom agrees to $1 million fine over AI-generated Biden robocalls, by Reuters (21/08/2024)
- AI-written obituaries are compounding people’s grief, by Fast Company (26/07/2024)

Community
A list of tools to fight AI-driven disinformation, along with projects and initiatives addressing the challenges posed by AI. The ultimate aim is to foster cooperation and resilience within the counter-disinformation community.
Tools
A repository of tools to tackle AI-manipulated and/or AI-generated disinformation.
InVID-WeVerify plugin
Deepware Scanner
True Media
Illuminarty.AI
GPTZero
Pangram Labs
Originality.ai
Hugging Face
Draft & Goal
AI Voice Detector
Hive Moderation
DebunkBot
IntellGPT
Initiatives & organisations
Organisations working in the field and initiatives launched by community members to address the challenges posed by AI in the disinformation field.
EU-funded project: veraAI
veraAI is a research and development project focusing on disinformation analysis and AI-supported verification tools and services.
Cluster of EU-funded projects: 'AI against disinformation'
AI against disinformation is a cluster of six European Commission co-funded research projects, which include research on AI methods for countering online disinformation. The focus of ongoing research is on detection of AI-generated content and development of AI-powered tools and technologies that support verification professionals and citizens with content analysis and verification.
AI Forensics
AI Forensics is a European non-profit that investigates influential and opaque algorithms. They hold major technology platforms accountable by conducting independent and high-profile technical investigations to uncover and expose the harms caused by their algorithms. They empower the research community with tools, datasets and methodologies to strengthen the AI audit ecosystem.
AI Tracking Center, by NewsGuard
AI Tracking Center is intended to highlight the ways that generative AI has been deployed to turbocharge misinformation operations and unreliable news. The Center includes a selection of NewsGuard’s reports, insights, and debunks related to artificial intelligence.
AlgorithmWatch
AlgorithmWatch is a non-governmental, non-profit organisation based in Berlin and Zurich. They fight for a world where algorithms and Artificial Intelligence (AI) do not weaken justice, human rights, democracy and sustainability, but strengthen them.
European AI & Society Fund
The European AI & Society Fund empowers a diverse ecosystem of civil society organisations to shape policies around AI in the public interest and galvanises the philanthropic sector to sustain this vital work.
AI Media Observatory
The European AI Media Observatory is a knowledge platform that monitors and curates relevant research on AI in media, provides expert perspectives on the potentials and challenges that AI poses for the media sector and allows stakeholders to easily get in touch with relevant experts in the field via their directory.
GZERO Media newsletter
Stay informed with GZERO Daily. Insights. News. Satire. Crosswords. The essential weekday morning read for anyone who wants real insight on the news of the day. Plus, a weekly exclusive edition written by Ian Bremmer.
Queer in AI
Queer in AI is an initiative established by queer scientists in AI with the mission to make the AI community a safe and inclusive place that welcomes, supports, and values LGBTQIA2S+ people. Their aim is to build a visible community of queer AI scientists through different actions.
AI for Good
AI for Good is the United Nations’ leading platform on Artificial Intelligence for sustainable development. Its mission is to leverage the transformative potential of artificial intelligence (AI) to drive progress toward achieving the UN Sustainable Development Goals.
Omdena
Omdena is a collaborative AI platform where a global community of changemakers unites to co-create real-world tech solutions for social impact. It combines collective intelligence with hands-on collaboration, empowering the community from across all industries to learn, build, and deploy meaningful AI projects.
Faked Up academic library
Faked Up curates a library of academic studies and reports on digital deception and misinformation, offering accessible insights for subscribers. The collection includes studies from 2020 onward, organised into clusters like misinformation prevalence, fact-checking effects, and AI-generated deceptive content. It serves as a practical resource for understanding and addressing misinformation challenges.
Last updated: 03/02/2025
The articles and resources listed in this hub do not necessarily represent EU DisinfoLab’s position. This hub is an effort to give voice to all members of the community countering AI-generated disinformation.