AI Disinfo Hub
The development of artificial intelligence (AI) technologies has long been a challenge for the disinformation field, enabling the manipulation of content and accelerating its spread. Recent technical developments have exponentially increased these challenges. While AI offers opportunities for legitimate purposes, AI-generated content is also widely produced and disseminated across the internet, causing – intentionally or not – harm and deception.
Are you more into podcast and video content? You will find a repository of podcasts and webinars in AI Disinfo Multimedia, while AI Disinfo in Depth features research reports from academia and civil society organisations, alongside coverage of the burning questions related to the regulation of AI technologies and their use. In addition, the community working at the intersection of AI and disinformation has a dedicated space listing initiatives, resources, and useful tools.
In short, this hub is your go-to resource for understanding the impact of AI on disinformation and finding ways to combat it.
Here, researchers, policymakers, and the public can access reliable tools and insights to navigate this complex landscape. Together, we’re building a community to tackle these challenges head-on, promoting awareness and digital literacy.
Join us in the fight against AI-driven disinformation. Follow us and share with the community!
NEURAL NEWS & TRENDS
We've curated a selection of articles from external sources that delve into the topic from different perspectives. Keep exploring the latest news and publications on AI and disinformation!
News
YouTubers are selling their unused video footage to AI companies (Bloomberg, 10/01/2025)
Bloomberg: YouTubers and digital creators are selling unused video footage to AI companies like OpenAI and Google, earning thousands per deal. These exclusive videos are valuable for training AI models as they provide unique, unpublished content. This trend offers creators a new income stream beyond traditional advertising partnerships.
Elon Musk accused of using AI to write controversial column for German newspaper (MSN, 08/01/2025)
MSN: Elon Musk faces allegations of using his AI chatbot, Grok, to author a controversial column for the German weekly Welt am Sonntag. The column, advocating for the far-right AfD party as “Germany’s last hope,” closely mirrors text generated by Grok when prompted with a similar topic. German newspaper Tagesspiegel and AI detection tools highlighted striking similarities, raising questions about the column’s authorship.
Man who exploded Tesla Cybertruck outside Trump hotel in Las Vegas used generative AI, police say (AP, 08/01/2025)
AP: Matthew Livelsberger, a decorated soldier who exploded a Tesla Cybertruck outside the Trump hotel in Las Vegas, reportedly used generative AI tools like ChatGPT to help plan the attack. Police found that Livelsberger had searched for information on explosives and firearms, though he did not intend to harm others. The incident marks the first known case of ChatGPT being used to assist in creating a device for a violent act, raising concerns about the potential misuse of AI.
Apple urged to withdraw 'out of control' AI news alerts (BBC, 07/01/2025)
BBC: Apple faces growing pressure to withdraw its AI news summarization feature on iPhones, criticized for generating false claims in news alerts. Organizations like the BBC, NUJ, and RSF argue the tool risks misinformation and undermines trust in journalism. Apple acknowledged the issue, pledging to clarify that summaries are AI-generated, but critics insist the feature is not ready and should be removed.
Britain to make sexually explicit 'deepfakes' a crime (Reuters, 07/01/2025)
Reuters: The UK government announced plans to criminalize the creation and sharing of sexually explicit “deepfakes,” targeting a growing form of abuse primarily affecting women and girls. Deepfakes, digitally altered images made with AI, have contributed to a 400% rise in image-based abuse since 2017, according to the Revenge Porn Helpline. The new law will allow prosecution of perpetrators, expanding protections beyond existing revenge porn legislation.
Instagram begins randomly showing users AI-generated images of themselves (404 Media, 06/01/2025)
404 Media: Instagram is testing a feature where Meta’s AI generates personalized images of users in various scenarios and integrates them into their feeds. A Reddit user reported seeing an AI-created slideshow of himself in a “mirror maze” after using Instagram’s “Imagine” feature to edit selfies. The AI-generated posts include tailored captions and appear to use uploaded selfies to create targeted content, raising questions about privacy and user consent.
These defenders of democracy do not exist (Conspirador Norteño, 05/01/2025)
Conspirador Norteño: Six now-suspended Bluesky accounts posed as liberal activists but were part of an AI-powered spam network generating unsolicited replies and following users en masse. The accounts, active since December 2024, relied on LLMs for their content, showed erratic posting patterns, and explicitly identified themselves as AI in some responses. Despite their suspension, similar networks could reemerge.
Meta envisages social media filled with AI-generated users (The Financial Times, 26/12/2024)
The Financial Times: Meta is betting that characters generated by artificial intelligence will fill its social media platforms in the next few years. Furthermore, Meta is planning to introduce AI-generated users on its platforms, featuring profiles with bios, pictures, and AI-powered content sharing (Wired, 08/01/2025). Despite this, Meta is removing its AI-generated Instagram and Facebook profiles, initially launched in 2023, after some went viral due to controversial user interactions (The Guardian, 03/01/2025).
The Year of the AI election wasn’t quite what everyone expected (Wired, 26/12/2024)
Wired: In 2024, fears of generative AI dominating elections through deepfakes proved exaggerated, as such content was rarely deceptive or impactful. Instead, AI’s influence was subtler, with campaigns using it to write emails, ads, and speeches. Concerns remain over gaps in AI-detection tools, especially in non-Western regions, and the “liar’s dividend,” where real media is falsely dismissed as fake.
ChatGPT search tool vulnerable to manipulation and deception, tests show (The Guardian, 24/12/2024)
The Guardian: An investigation has revealed vulnerabilities in OpenAI’s ChatGPT search tool, highlighting risks of manipulation and deceptive practices. Tests showed that hidden text on websites could influence the AI’s responses, overriding actual content with biased or malicious instructions—a technique known as “prompt injection.” This could lead ChatGPT to generate misleading product reviews or even provide harmful code.
Predictions for AI in 2025: Collaborative agents, AI skepticism, and new risks (Stanford University, 23/12/2024)
Stanford University: In 2025, AI is expected to advance through collaborative agents where specialized systems work together to solve complex problems, with human guidance. Experts predict skepticism around AI in education, along with increased risks of scams due to generative AI misuse. Additionally, AI agents will collaborate in multidisciplinary teams, and the focus will shift toward evaluating real-world benefits and human-AI collaboration.
Elon Musk’s Grok-2 is now free, and it’s a mess (Fast Company, 18/12/2024)
Fast Company: Elon Musk’s Grok-2, now freely accessible, has sparked viral moments and backlash, with users exploiting its flaws for memes and controversy. Despite its claimed improvements, Grok-2 has produced polarising statements and misleading or inaccurate responses. The chatbot’s ability to generate personalised content has raised privacy concerns, particularly after instances where users’ profiles were used to create images without their consent.
Luigi Mangione AI chatbots give voice to accused united healthcare shooter (Forbes, 17/12/2024)
Forbes: Despite being the chief suspect in the murder of UnitedHealthcare CEO Brian Thompson, Luigi Mangione has, to some, become a poster boy for the injustices of America’s healthcare system. Since his arrest, people have created a number of AI chatbots trained on his online posts and personal history, including as many as 13 on Character.ai, a site where users can create AI avatars.
Fake AI versions of world-renowned academics are spreading claims that Ukraine should surrender to Russia (The Insider, 13/12/2024)
The Insider: A Russian disinformation network, Matryoshka, is using AI to create fake videos of renowned academics, including professors from top universities, spreading false claims that Ukraine should surrender to Russia. These videos manipulate real footage and clone the voices of scholars to deliver political messages, such as condemning sanctions on Russia and portraying Ukrainian president Zelensky negatively. The campaign has been identified across multiple languages and social media platforms, aiming to deceive global audiences.
Meta debuts a tool for watermarking AI-generated videos (Tech Crunch, 12/12/2024)
Tech Crunch: Meta has launched a new tool called Video Seal to watermark AI-generated videos, helping to combat the rise of deepfakes. The tool, open source and integrated into existing software, aims to add imperceptible watermarks that withstand video compression and editing. Despite its robustness, Video Seal faces challenges such as limited adoption due to existing proprietary solutions, prompting Meta to promote its use through a public leaderboard and industry collaborations.
Events, jobs & announcements
Explore upcoming AI-related events, jobs and announcements that may be of interest to members of the counter-disinformation community.
Event. 30 January 2025, online: IIC Legal Counsel Forum – AI for Regulators: Problem solver, or problem creator?
IIC is organising an event on the implications of AI for regulators, focusing on how it transforms both the way we work and how its use is regulated for the benefit of consumers and businesses. The meeting is open to all IIC members and, by invitation only, to non-members.
Event. 10-11 February 2025 in Paris: Artificial Intelligence Action Summit
On 10 and 11 February 2025, France will host the Artificial Intelligence (AI) Action Summit at the Grand Palais, gathering Heads of State and Government, leaders of international organisations, CEOs of small and large companies, representatives of academia, non-governmental organisations, artists and members of civil society.
Event. 9-13 March 2025 in Las Vegas: HumanX 2025
HumanX will take place in Las Vegas in March 2025. Tailored for leaders, founders, policymakers, and investors shaping the future of artificial intelligence, it promises to be a defining event in the AI space.
Workshop. 8-9 May 2025. Online: What is work worth? Exploring what generative AI means for workers’ lives and labor
On May 8 and 9, 2025, Data & Society will host an online workshop on the intersection of generative AI technologies and work. This workshop aims to foster a collaborative environment to discuss how we investigate, think about, resist, and shape the emerging uses of generative AI technologies across a broad range of work contexts.
Event. 8-11 July 2025 in Geneva: AI for Good Global Summit
The AI for Good Global Summit 2025 will be held from 8 to 11 July in Geneva. This leading UN event on AI brings together top names in AI, with a high-level lineup of global decision makers. Its goal is to identify practical applications of AI, accelerate progress towards the UN SDGs and scale solutions for global impact.
Job offer: European AI Office - Legal Officer and Policy Officer
The Commission has opened two calls for expression of interest to recruit new members for the European AI Office. Apply for Legal Officer and Policy Officer.
Job offer: The Rundown – Content Writer & Social Media Manager
The Rundown, the world’s largest AI newsletter and media company, is looking for a Content Writer and a Social Media Manager.
AI & Disinfo Multimedia
A collection of webinars and podcasts from us and the wider community, dedicated to countering AI-generated disinformation.
Webinars
Our own and community webinar collection exploring the intersections of AI and disinformation
- Faking It - Information Integrity, AI and the Law (Global Game Changers Series), with Monica Attard and Michael Davis (UTS), Creina Chapman (ACMA), Cullen Jennings (Cisco Systems) and Jason M Schultz (Canva). Hosted by University of Technology Sydney (29/11/2024)
- The Top 9 AI Breakthroughs of 2024 (You Won’t Believe Are Real). By AI Uncovered (08/11/2024)
- AI and Disinformation: A legal perspective, with Noémie Krack (KU Leuven). Hosted by EU DisinfoLab (07/11/2024)
- Generative AI and Geopolitical Disruption, with Corneliu Bjola (Oxford Internet Institute), Antonio Estella and Maria Dolores Sanchez Galera (Carlos III University), Peter Pijpers (Netherlands Defence Academy), Michael Zinkanell (Austrian Institute for European and Security Policy), and Gregory Smith (RAND Corporation). Hosted by Solaris (25/10/2024)
- DisinfoCon 2024 - Taking stock of Information Integrity in the Age of AI, with Carl Miller (Center for Analysis of Social Media at Demos). Hosted by Democracy Reporting International (26/09/2024)
- Advancing synthetic media detection: introducing veraAI, with Akis (Symeon) Papadopoulos (Centre for Research and Technology Hellas – Information Technologies Institute). Hosted by EU DisinfoLab (29/08/2024)
- Using Generative AI for the production, spread, and detection of disinformation – latest insights and innovations, with Kalina Bontcheva (University of Sheffield). Hosted by EU DisinfoLab (27/06/2024)
- Beyond Deepfakes: AI-related risks for elections, with Sophie Murphy Byrne (Logically). Hosted by EU DisinfoLab (30/05/2024)
- Tools and techniques for using AI in digital investigations, with Craig Silverman (ProPublica). Hosted by EU DisinfoLab (25/04/2024)
- OSINT & AI: Advanced Analysis, with Ivan Kravtsov (Social Links) and Gary Ruddell (Independent Cyber Threat Intelligence Professional). Hosted by Social Links (16/11/2023)
Podcasts
Community podcasts exploring the intersections of AI and disinformation
- The case for human-centered AI. Hosted by McKinsey Digital (20/12/2024)
- Destination Deception 2025. Hosted by Faked Up (18/12/2024)
- What is AI slop and did it lead to a Halloween parade hoax in Dublin? Hosted by The Explainer (05/11/2024)
- Beyond the ballot: Misinformation, trust and truth in elections. Hosted by The National Security Podcast (24/10/2024)
- Do not "summarize this"! Episode 4: improve prompts to get a better summary. Hosted by Digital Digging (28/09/2024)
- How to detect fake AI-texts, episode 1 of podcast series on AI & Research. Hosted by Digital Digging (17/09/2024)
- Moderating Global Voices. Hosted by Decoding Hate (10/02/2021)
AI Disinfo in depth
A repository of research papers and reports from academia and civil society organisations, alongside articles addressing key questions related to the regulation of AI technologies and their use. It also features a collection of miscellaneous readings.
Research
A compact yet potent library of research exploring the realm of AI and disinformation
- AI could usher in a golden age of research – but only if these cutting-edge tools aren’t restricted to a few major private companies, by The Conversation (06/01/2025)
- Fake AI versions of world-renowned academics are spreading claims that Ukraine should surrender to Russia, by The Insider (13/12/2024)
- ElevenLabs used for Russian propaganda, by AI Tool Report (11/12/2024)
- AI enters Congress: Sexually explicit deepfakes target women lawmakers, by The 19th News (11/12/2024)
- Melodies of malice: Understanding how AI fuels the creation and spread of extremist music, by GNET (11/12/2024)
- Scottish Parliament TV at risk of deepfake attacks, by Infosecurity (10/12/2024)
- Revealed: bias found in AI system used to detect UK benefits fraud, by The Guardian (06/12/2024)
- Evaluating Large Language Models capability to launch fully automated spear phishing campaigns: Validated on human subjects, by arXiv (30/11/2024)
- Study of ChatGPT citations makes dismal reading for publishers, by Tech Crunch (29/11/2024)
- How ChatGPT Search (mis)represents publisher content, by Columbia Journalism Review (27/11/2024)
- Persuasive technologies in China: implications for the future of national security, by Australian Strategic Policy Institute (26/11/2024)
- "Operation Undercut" shows multifaceted nature of SDA’s influence operations, by Recorded Future (26/11/2024)
- Philippines, China clashes trigger money-making disinformation, by France24 (26/11/2024)
- Not even Spotify is safe from AI slop, by The Verge (14/11/2024)
- AI-enabled influence operations: Safeguarding future elections, by Cetas (13/11/2024)
- Disconnected from reality: American voters grapple with AI and flawed OSINT strategies, by ISD (07/11/2024)
- AI hallucinations caused artificial intelligence to falsely describe these people as criminals, by ABC News (03/11/2024)
- Exploiting Meta’s weaknesses, deceptive political ads thrived on Facebook and Instagram in run-up to election, by Pro Publica (31/10/2024)
- “Say it’s only fictional”: How the far-right is jailbreaking AI and what can be done about it, by ICCT (30/10/2024)
- How X users can earn thousands from US election misinformation and AI images, by BBC (30/10/2024)
- Hospitals use a transcription tool powered by an error-prone OpenAI model, by The Verge (28/10/2024)
- Researchers say an AI-powered transcription tool used in hospitals invents things no one ever said, by AP news (26/10/2024)
- GenAI and Democracy, by DSET (25/10/2024)
- Prebunking elections rumors: Artificial Intelligence assisted interventions increase confidence in American elections, by California Institute of Technology, Washington University in St. Louis, Cambridge University (24/10/2024)
- Large Language Models reflect the ideology of their creators, by arXiv (24/10/2024)
- Amazon Alexa users given false information attributed to Full Fact’s fact checks, by Full Fact (17/10/2024)
- Ensuring AI accountability: Auditing methods to mitigate the risks of Large Language Models, by Democracy Reporting International (14/10/2024)
- Pig butchering scams are going high tech, by Wired (12/10/2024)
- An update on disrupting deceptive uses of AI, by OpenAI (09/10/2024)
- Generative Artificial Intelligence and elections, by Center for Media Engagement (03/10/2024)
- Grok AI: A deepfake disinformation disaster for democracy, by CCDH (29/08/2024)
- OpenAI blocks AI propaganda, by AI Tool Report (19/08/2024)
- Disrupting deceptive uses of AI by covert influence operations, by OpenAI (30/05/2024)
- AI-pocalypse Now? Disinformation, AI, and the super election year, by MSC (01/04/2024)
About policy & regulations
A look at regulation and policies implemented on AI and disinformation
- Trump announces new tech policy picks for his second term, by The Verge (23/12/2024)
- Sriram Krishnan named Trump’s senior policy advisor for AI, by Tech Crunch (22/12/2024)
- Google relaxes AI usage rules, by AI Tool Report (18/12/2024)
- New research centre supporting safe and responsible AI, by Minister for Industry and Science, Australia (09/12/2024)
- Inside Britain’s plan to save the world from runaway AI, by Politico (05/12/2024)
- Rumble Video Platform sues California over anti-deepfake law, by Bloomberg (29/11/2024)
- Trump 2.0: Clash of the tech bros, by Fortune (26/11/2024)
- ChatGPT, Meta and Google generative AI should be designated 'high-risk' under new laws, bipartisan committee recommends, by ABC News (26/11/2024)
- Case closed on "nude" AI images of girls. Why police are not charging man who made them, by Pensacola News Journal (22/11/2024)
- The EU Code of Practice for General-purpose AI: Key takeaways from the First Draft, by CSIS (21/11/2024)
- What Donald Trump’s Cabinet picks mean for AI, by GZERO Media (19/11/2024)
- Musk sues California over deepfake law, by AI Tool Report (18/11/2024)
- EU AI Act: Draft guidance for general purpose AIs shows first steps for Big AI to comply, by TechCrunch (14/11/2024)
- Musk to be Trump's AI advisor?, by AI Tool Report (12/11/2024)
- What Trump’s victory could mean for AI regulation, by Tech Crunch (06/11/2024)
- How AI could still impact the US election, by GZERO Media (05/11/2024)
- Reducing risks posed by synthetic content, by National Institute of Standards and Technology (01/11/2024)
- Google Photos will soon show you if an image was edited with AI, by The Verge (24/10/2024)
- More transparency for AI edits in Google Photos, by Google (24/10/2024)
- Embedded GenAI on social media: Platform law meets AI law, by DSA Observatory (16/10/2024)
- California rejects AI safety bill, by AI Tool Report (30/09/2024)
- Council of Europe opens first ever global treaty on AI for signature, by Council of Europe (05/09/2024)
- Final Report - Governing AI for humanity, by UN (01/09/2024)
- United Nations Secretary-General’s video message for launch of the Final Report, by UN (01/09/2024)
- Platforms’ AI policy updates in 2024: Labelling as the silver bullet?, by EU DisinfoLab (01/07/2024)
- A real account of peep fakes, by Cornell University (15/04/2024)
- Governing AI agents, by Hebrew University of Jerusalem (02/04/2024)
Miscellaneous readings
Recommended reading on AI and disinformation
- How Elon Musk’s xAI is quietly taking over X, by The Verge (10/01/2025)
- Users of AI chatbot companions say their relationships are more than 'clickbait', but views are mixed on their benefits, by ABC (06/01/2025)
- Nothing is sacred: AI-generated slop has come for Christmas music, by 404 Media (25/12/2024)
- OpenAI whistleblower who died was being considered as witness against company, by The Guardian (21/12/2024)
- Picture of Bashar al-Assad with Tucker Carlson in Moscow almost certainly AI-generated, by Full Fact (19/12/2024)
- Using open-source AI, sophisticated cyber ops will proliferate, by Australian Strategic Policy Institute (17/12/2024)
- China wants to dominate in AI, and some of its models are already beating their U.S. rivals, by CNBC (17/12/2024)
- AI crackdown: China stamps out tech misuse to preserve national literature and ideology, by SCMP (15/12/2024)
- UK could offer celebs protection from AI clones, by Politico (13/12/2024)
- We looked at 78 election deepfakes. Political misinformation is not an AI problem, by AI Snake Oil (13/12/2024)
- AI helps Telegram remove 15 million suspect groups and channels in 2024, by Tech Crunch (13/12/2024)
- Tech companies claim AI can recognise human emotions. But the science doesn’t stack up, by The Conversation (13/12/2024)
- AI used to target election fraud and criminal deepfakes, by The Canberra Times (11/12/2024)
- This journalist wants you to try open-source AI: “AI is shiny, but value comes from the ideas people have to use it”, by Reuters Institute (10/12/2024)
- Paul McCartney warns AI ‘could take over’ as UK debates copyright laws, by The Guardian (10/12/2024)
- China launches AI that writes politically correct docs for bureaucrats, by The Register (09/12/2024)
- Musk launches (then deletes) new image generator, by AI Tool Report (09/12/2024)
- 'It has to be a deepfake': South Korean opposition leader on martial law announcement, by CNN (05/12/2024)
- The US Department of Defense is investing in deepfake detection, by MIT Technology Review (05/12/2024)
- Misinformation researcher admits ChatGPT added fake details to his court filing, by The Verge (04/12/2024)
- Deepfake YouTube ads of celebrities promise to get you ‘Rock Hard’, by 404 Media (04/12/2024)
- What we saw on our platforms during 2024’s global elections, by Meta (03/12/2024)
- Google’s video generator comes to more customers, by Tech Crunch (03/12/2024)
- AWS’ new service tackles AI hallucinations, by Tech Crunch (03/12/2024)
- Meta says gen AI had muted impact on global elections this year, by Reuters (03/12/2024)
- AI-Powered ‘Death Clock’ promises a more exact prediction of the 'day you’ll die', by Bloomberg (30/11/2024)
- The legal battle against explicit AI deepfakes, by The Financial Times (28/11/2024)
- Amazon, Google and Meta are ‘pillaging culture, data and creativity’ to train AI, Australian inquiry finds, by The Guardian (27/11/2024)
- AI-generated slop is quietly conquering the internet. Is it a threat to journalism or a problem that will fix itself?, by Reuters Institute (26/11/2024)
- Russia plotting to use AI to enhance cyber-attacks against UK, minister will warn, by The Guardian (25/11/2024)
- Deepfake videos appear to target Canadian immigrants for thousands of dollars, by CTV News (25/11/2024)
- AI increasingly used for sextortion, scams and child abuse, says senior UK police chief, by The Guardian (24/11/2024)
- AI is taking your job, by Kent C. Dodds Blog (21/11/2024)
- Deus in machina: Swiss church installs AI-powered Jesus, by The Guardian (21/11/2024)
- AI detection tool helps journalists identify and combat deepfakes, by IJNET (20/11/2024)
- Fake Claims of Elon Musk’s Latest Acquisitions, by NewsGuard (18/11/2024)
- Singapore steps up fight against deepfakes ahead of election, by Nikkei Asia (17/11/2024)
- Pokemon players create AI world map, by Digital Digging (15/11/2024)
- This 'AI Granny' bores scammers to tears, by PCMag (15/11/2024)
- AI didn’t sway the election, but it deepened the partisan divide, by Washington Post (09/11/2024)
- Mistral Moderation API, by Mistral (07/11/2024)
- Perplexity launch controversial AI election hub, by AI Tool Report (04/11/2024)
- Thousands go to fake AI-invented Dublin Halloween parade, by EuroNews (01/11/2024)
- Introducing ChatGPT search, by OpenAI (31/10/2024)
- Electoral disinformation, but no AI revolution ahead of the US election – yet, by International Journalist Network (29/10/2024)
- These viral images of the Hamas-Israel war aren’t real. Does it matter?, by SBS (24/10/2024)
- AI was weaponized for FIMI purposes: Russia reportedly paid a former Florida cop to pump out anti-Harris deepfakes and disinformation, by The Verge (24/10/2024)
- Real-time video deepfake scams are here. This tool attempts to zap them, by Wired (15/10/2024)
- Meta fed its AI on almost everything you’ve posted publicly since 2007, by The Verge (12/09/2024)
- Lingo Telecom agrees to $1 million fine over AI-generated Biden robocalls, by Reuters (21/08/2024)
- AI-written obituaries are compounding people’s grief, by Fast Company (26/07/2024)
Community
A list of tools to fight AI-driven disinformation, along with projects and initiatives facing the challenges posed by AI. The ultimate aim is to foster cooperation and resilience within the counter-disinformation community.
Tools
A repository of tools to tackle AI-manipulated and/or AI-generated disinformation.
INVID-WeVerify plugin
Deepware Scanner
True Media
Illuminarty.AI
GPTZero
Pangram Labs
Originality.ai
Hugging Face
Draft & Goal
AI Voice Detector
Hive Moderation
DebunkBot
IntellGPT
Initiatives & organisations
Organisations working in the field and initiatives launched by community members to address the challenges posed by AI in the disinformation field.
EU-funded project: veraAI
veraAI is a research and development project focusing on disinformation analysis and AI-supported verification tools and services.
Cluster of EU-funded projects: 'AI against disinformation'
AI against disinformation is a cluster of six European Commission co-funded research projects, which include research on AI methods for countering online disinformation. The focus of ongoing research is on detection of AI-generated content and development of AI-powered tools and technologies that support verification professionals and citizens with content analysis and verification.
AI Forensics
AI Forensics is a European non-profit that investigates influential and opaque algorithms. They hold major technology platforms accountable by conducting independent and high-profile technical investigations to uncover and expose the harms caused by their algorithms. They empower the research community with tools, datasets and methodologies to strengthen the AI audit ecosystem.
AI Tracking Center, by NewsGuard
AI Tracking Center is intended to highlight the ways that generative AI has been deployed to turbocharge misinformation operations and unreliable news. The Center includes a selection of NewsGuard’s reports, insights, and debunks related to artificial intelligence.
AlgorithmWatch
AlgorithmWatch is a non-governmental, non-profit organisation based in Berlin and Zurich. They fight for a world where algorithms and Artificial Intelligence (AI) do not weaken justice, human rights, democracy and sustainability, but strengthen them.
European AI & Society Fund
The European AI & Society Fund empowers a diverse ecosystem of civil society organisations to shape policies around AI in the public interest and galvanises the philanthropic sector to sustain this vital work.
AI Media Observatory
The European AI Media Observatory is a knowledge platform that monitors and curates relevant research on AI in media, provides expert perspectives on the potentials and challenges that AI poses for the media sector and allows stakeholders to easily get in touch with relevant experts in the field via their directory.
GZERO Media newsletter
Stay informed with GZERO Daily. Insights. News. Satire. Crosswords. The essential weekday morning read for anyone who wants real insight on the news of the day. Plus, a weekly exclusive edition written by Ian Bremmer.
Queer in AI
Queer in AI is an initiative established by queer scientists in AI with the mission to make the AI community a safe and inclusive place that welcomes, supports, and values LGBTQIA2S+ people. Their aim is to build a visible community of queer AI scientists through different actions.
AI for Good
AI for Good is the United Nations’ leading platform on Artificial Intelligence for sustainable development. Its mission is to leverage the transformative potential of artificial intelligence (AI) to drive progress toward achieving the UN Sustainable Development Goals.
Omdena
Omdena is a collaborative AI platform where a global community of changemakers unites to co-create real-world tech solutions for social impact. It combines collective intelligence with hands-on collaboration, empowering the community from across all industries to learn, build, and deploy meaningful AI projects.
Faked Up academic library
Faked Up curates a library of academic studies and reports on digital deception and misinformation, offering accessible insights for subscribers. The collection includes studies from 2020 onward, organised into clusters like misinformation prevalence, fact-checking effects, and AI-generated deceptive content. It serves as a practical resource for understanding and addressing misinformation challenges.
Last updated: 13/01/2025
The articles and resources listed in this hub do not necessarily represent EU DisinfoLab’s position. This hub is an effort to give voice to all members of the community countering AI-generated disinformation.