AI Disinfo Hub

The development of artificial intelligence (AI) technologies has long been a challenge for the disinformation field, enabling the manipulation of content and accelerating its spread. Recent technical developments have exponentially increased these challenges. While AI offers opportunities for legitimate purposes, AI-generated content is also widely produced and disseminated across the internet, causing – intentionally or not – harm and deception.

This hub is intended to help you better understand how AI is impacting the disinformation field. To keep you up to date on the latest developments, we will collect the latest Neural News and Trends and include upcoming events and job opportunities that you cannot miss.

Are you more into podcast and video content? You will find a repository of podcasts and webinars in AI Disinfo Multimedia, while AI Disinfo in Depth will feature research reports from academia and civil society organisations, covering the burning questions related to the regulation of AI technologies and their use. In addition, the Community working at the intersection of AI and disinformation will have a dedicated space where initiatives and resources will be listed, as well as useful tools.

In short, this hub is your go-to resource for understanding the impact of AI on disinformation and finding ways to combat it.

Here, researchers, policymakers, and the public can access reliable tools and insights to navigate this complex landscape. Together, we’re building a community to tackle these challenges head-on, promoting awareness and digital literacy.

Join us in the fight against AI-driven disinformation. Follow us and share with the community!

NEURAL NEWS & TRENDS

We've curated a selection of articles from external sources that delve into the topic from different perspectives. Keep exploring the latest news and publications on AI and disinformation!

News

AP: The Trump administration has dismissed Shira Perlmutter, the nation’s top copyright official, just days after she released a report questioning the legality of using copyrighted works to train AI systems. The move follows the firing of the Librarian of Congress and has sparked criticism from Democrats, who view it as a politically motivated power grab. Perlmutter had emphasised the importance of human creativity in determining copyright protections, an approach at odds with growing industry pressures.

CNN: In his first address as pontiff, Pope Leo XIV vows to carry forward Francis’ legacy of social justice while warning that artificial intelligence poses “new challenges for the defense of human dignity, justice and labor”.

Bellingcat: As tensions flared between India and Pakistan, disinformation quickly filled the void left by limited verifiable information. A deepfake video, falsely showing a Pakistani general admitting the loss of two jets, was shared hundreds of thousands of times on X and reported by major Indian news outlets before being identified as fake. Experts warn that convincing AI-generated videos like this heighten confusion during crises, making it increasingly difficult to distinguish fact from fiction.

The New York Times: The latest and most advanced AI tools – so-called reasoning models developed by companies like OpenAI, Google, and the Chinese startup DeepSeek – are actually producing more mistakes, not fewer. One evaluation shows that their hallucination rates reached as high as 79%. Although their mathematical capabilities have significantly improved, their grasp of factual information has become less reliable. The reasons for this are still not fully understood.

Marcus Bösch: In this ongoing research, Marcus Bösch investigates how governments, especially the U.S. administration, are using generative AI to craft and spread synthetic propaganda on social media. From AI-generated videos to meme-worthy filters, Bösch explores how these digital tactics blur the line between official communication and trolling, with the aim of influencing public perception. Bösch offers early insights backed by literature and some fascinating examples. With more findings to come, he warmly welcomes ideas from the counter-disinfo community to enhance the research. If you have thoughts, suggestions, or relevant resources, feel free to reach out and collaborate with him on this crucial topic.

The Times: We have recently seen how the Russian influence network Pravda exploits AI for a variety of purposes, both to create fake content and to “infect” Large Language Models so that they help spread its propaganda and disinformation (so-called LLM grooming). In the current edition we present two examples of such uses. On the one hand, a site called Pravda Alba is exploiting AI to generate falsehoods in Scottish Gaelic: while the Gaelic-speaking population is small, targeting minority-language communities attracts less scrutiny, leveraging AI to create material in less-monitored spaces. On the other hand, Pravda’s Australian branch is flooding Western AI chatbots such as ChatGPT, Google’s Gemini and Microsoft’s Copilot with Russian propaganda ahead of the federal election, according to ABC. Although the site has limited real-world engagement, experts warn this could retrain AI models to spread Kremlin-friendly narratives.

Vice: An Australian radio station, CADA, duped its audience for six months with an AI-generated host named Thy. Presented as a fresh, young voice, Thy hosted a daily show, but listeners eventually grew suspicious due to the lack of personal details about her. It was finally revealed that Thy was an AI voice cloned from a real ARN employee, created in collaboration with ElevenLabs. The revelation sparked a backlash, with critics arguing that the station had misled listeners, and raised ethical concerns about AI’s role in broadcasting.

Wired: Anthony Jancso, a young entrepreneur and one of the first recruiters for Elon Musk’s “Department of Government Efficiency,” is now taking on a new venture. As cofounder of AccelerateX, a government tech startup, he’s seeking technologists to join a project that aims to replace the work of tens of thousands of federal employees with artificial intelligence.

Newsguard: A new case illustrates the exploitation of AI-manipulated images for political purposes: Conservative social media users have spread AI-generated images falsely claiming to show Milwaukee County Circuit Judge Hannah Dugan’s arrest booking photo. Dugan, arrested in April 2025 for allegedly helping an undocumented migrant evade federal immigration officers, was depicted in these images as distressed and unkempt. Despite claims to the contrary, AI detection tools confirmed the images were fabricated.

AP: Conservative activist Robby Starbuck has filed a defamation lawsuit against Meta, claiming its AI chatbot falsely accused him of participating in the January 6 Capitol riot. Starbuck discovered the defamatory claims in August 2024, when they were used against him in an attack related to his campaign against DEI policies (those promoting diversity, equity, and inclusion). In the lawsuit, Starbuck seeks over $5 million in damages, asserting that Meta’s AI also falsely linked him to Holocaust denial and a criminal conviction. Meta has acknowledged the issue, stating it is working on fixing the AI’s behavior.

Pinterest: Pinterest has introduced a new feature aimed at enhancing transparency around AI-generated content. Users will now see a label on images that may have been modified or generated using Gen AI. Additionally, the platform is testing a new tool that will allow users to reduce exposure to Gen AI content by selecting a “see fewer” option, particularly in categories like beauty and art.

TechCrunch: OpenAI has rolled back a recent update to its GPT-4o model after users complained about its overly sycophantic behavior. The update, which was introduced last week, caused ChatGPT to become excessively agreeable and validating, with users sharing screenshots of the AI applauding problematic ideas and decisions. OpenAI’s CEO, Sam Altman, announced the rollback, which has already been completed for free users and will be finished for paid users soon.

Pew Research Center: A survey by the Pew Research Center reveals that Americans are largely pessimistic about the impact of artificial intelligence on journalism and the news industry. With concerns about job losses for journalists and the accuracy of AI-generated content, most respondents fear AI will negatively shape the news landscape over the next 20 years. The survey highlights deep skepticism about AI’s role in news production and its potential to misinform the public.

The 19th News: The US House of Representatives has passed the Take It Down Act, a bipartisan bill aimed at removing nonconsensual intimate images, including sexually explicit deepfakes and revenge porn, from online platforms. With overwhelming support, the bill now heads to President Donald Trump, who has expressed his intent to sign it into law. The legislation requires platforms to act within 48 hours to remove harmful content and establishes penalties for creating and distributing such images. While the bill offers protection for victims, concerns about its potential impact on free speech and encrypted communications have been raised by digital civil rights groups.

CNBC: X is suing Minnesota over a state law banning the use of AI-generated “deepfakes” to influence elections. The lawsuit claims the law violates free speech protections by allowing the state, rather than social media platforms, to determine what content should be removed. X argues that this could lead to the censorship of valuable political speech. Minnesota’s law is part of a broader trend, with at least 22 states enacting similar measures to prevent AI manipulation in elections. The company seeks an injunction to block the law, citing violations of the First Amendment and Section 230, which shields platforms from liability for user-generated content.

Events, jobs & announcements

In this workshop you’ll discover how the AI-on-Demand Platform supports AI research and innovation, test its new version in a live UX session, and gain insights from real-world use cases, best practices, and the role of eDIHs in scaling AI adoption. Register here.

The Paris Conference on AI & Digital Ethics (PCAIDE 2025) will take place on June 16-17 at Sorbonne University, Paris. This cross-disciplinary event brings together academics, industry leaders, civil society, and political stakeholders to discuss the ethical, societal, and political implications of AI and digital technologies. PCAIDE offers a unique platform for experts to engage in open dialogue and collaborate on addressing key issues in the development of sociotechnical systems.

The AI for Good Global Summit 2025 will be held from 8 to 11 July in Geneva. This leading UN event on AI brings together top names in AI, with a high-level lineup of global decision makers. Its goal is to identify practical applications of AI, accelerate progress towards the UN SDGs and scale solutions for global impact. 

From July 14-18, 2025, the AIDA Symposium and Summer School will explore the latest in AI and ML. Co-organised by AIDA and Aristotle University of Thessaloniki, this hybrid event offers expert-led lectures, special sessions, and hands-on tutorials.

King’s College London is launching 20 prestigious AI+ Academic Fellowships as part of a major strategic investment in Artificial Intelligence. This initiative seeks outstanding researchers working across any discipline, from health and bioscience to law, humanities, and physical sciences, who are developing or applying AI in transformative ways. Fellows will benefit from three years of protected research time and a clear path to a permanent academic position.

The UK’s AI Safety Institute is recruiting for multiple roles in research, engineering, strategy, and operations. As part of a high-impact initiative focused on AI governance, successful candidates will contribute to critical work in a fast-paced, interdisciplinary environment alongside leading experts.

Tarbell is offering grants between $1,000 and $15,000 to support original journalism exploring the societal impacts of artificial intelligence. Open to freelancers and staff journalists alike, the grants aim to fund forward-looking reporting on critical AI issues, ranging from frontier company practices and policymaking to military integration, evaluation methods, and AI’s effects on work and society. Applications are open until May 31, 2025.

AI Disinfo Multimedia

A collection of webinars and podcasts from us and the wider community, dedicated to countering AI-generated disinformation.

Webinars

Our own and community webinar collection exploring the intersection of AI and disinformation.

AI Disinfo in Depth

A repository of research papers and reports from academia and civil society organisations, alongside articles addressing key questions related to the regulation of AI technologies and their use. It also features a collection of miscellaneous readings.

Research

A compact yet potent library of research on AI and disinformation.

About policy & regulations

A look at regulations and policies governing AI and disinformation.

Miscellaneous readings

Recommended reading on AI and disinformation.

Community

A list of tools to fight AI-driven disinformation, along with projects and initiatives addressing the challenges posed by AI. The ultimate aim is to foster cooperation and resilience within the counter-disinformation community.

Tools

A repository of tools to tackle AI-manipulated and/or AI-generated disinformation.

AI Research Pilot by Henk van Ess is a lightweight, browser-based tool designed to help investigators, journalists, and researchers get more out of AI – not by using AI as a source, but by using it as a guide to real sources.

Initiatives & organisations

Organisations working in the field and initiatives launched by community members to address the challenges posed by AI in the disinformation field.

veraAI is a research and development project focusing on disinformation analysis and AI-supported verification tools and services.

AI against disinformation is a cluster of six European Commission co-funded research projects that includes research on AI methods for countering online disinformation. Ongoing research focuses on the detection of AI-generated content and the development of AI-powered tools and technologies that support verification professionals and citizens with content analysis and verification.

AI Forensics is a European non-profit that investigates influential and opaque algorithms. They hold major technology platforms accountable by conducting independent and high-profile technical investigations to uncover and expose the harms caused by their algorithms. They empower the research community with tools, datasets and methodologies to strengthen the AI audit ecosystem.

The AI Tracking Center is intended to highlight the ways that generative AI has been deployed to turbocharge misinformation operations and unreliable news. The Center includes a selection of NewsGuard’s reports, insights, and debunks related to artificial intelligence.

AlgorithmWatch is a non-governmental, non-profit organisation based in Berlin and Zurich. They fight for a world where algorithms and Artificial Intelligence (AI) do not weaken justice, human rights, democracy and sustainability, but strengthen them.

The European AI & Society Fund empowers a diverse ecosystem of civil society organisations to shape policies around AI in the public interest and galvanises the philanthropic sector to sustain this vital work.

The European AI Media Observatory is a knowledge platform that monitors and curates relevant research on AI in media, provides expert perspectives on the potential and challenges that AI poses for the media sector, and allows stakeholders to easily get in touch with relevant experts in the field via its directory.

GZERO’s newsletter offers exclusive insights into our rapidly changing world, covering topics such as AI-driven disinformation, and features a weekly exclusive edition written by Ian Bremmer.

Queer in AI is an initiative established by queer scientists in AI with the mission to make the AI community a safe and inclusive place that welcomes, supports, and values LGBTQIA2S+ people. Their aim is to build a visible community of queer AI scientists through different actions.

AI for Good is the United Nations’ leading platform on Artificial Intelligence for sustainable development. Its mission is to leverage the transformative potential of artificial intelligence (AI) to drive progress toward achieving the UN Sustainable Development Goals.

Omdena is a collaborative AI platform where a global community of changemakers unites to co-create real-world tech solutions for social impact. It combines collective intelligence with hands-on collaboration, empowering members from across all industries to learn, build, and deploy meaningful AI projects.

Faked Up curates a library of academic studies and reports on digital deception and misinformation, offering accessible insights for subscribers. The collection includes studies from 2020 onward, organised into clusters like misinformation prevalence, fact-checking effects, and AI-generated deceptive content. It serves as a practical resource for understanding and addressing misinformation challenges.

AI Incident Database is dedicated to indexing the collective history of harms or near harms realized in the real world by the deployment of artificial intelligence systems. Like similar databases in aviation and computer security, the AI Incident Database aims to learn from experience to prevent or mitigate bad outcomes.

The TGuard project develops innovative methods for detecting disinformation in social media and formulating effective strategies for preventing AI-generated false reports.

Last updated: 28/04/2025

The articles and resources listed in this hub do not necessarily represent EU DisinfoLab’s position. This hub is an effort to give voice to all members of the community countering AI-generated disinformation.