AI Disinfo Hub

The development of artificial intelligence (AI) technologies has long posed challenges in the disinformation field, enabling the manipulation of content and accelerating its spread. Recent technical developments have sharply increased these challenges. While AI offers opportunities for legitimate purposes, AI-generated content is also widely produced and disseminated across the internet, causing – intentionally or not – harm and deception.

This hub aims to help you better understand how AI is impacting the disinformation field. To keep you up to date on the latest developments, we collect the latest Neural News and Trends and list upcoming events and job opportunities that you cannot miss.

Are you more into podcast and video content? You will find a repository of podcasts and webinars in AI Disinfo Multimedia, while AI Disinfo in Depth features research reports from academia and civil society organisations and covers the burning questions around the regulation of AI technologies and their use. In addition, the community working at the intersection of AI and disinformation has a dedicated space where initiatives and resources are listed, along with useful tools.

In short, this hub is your go-to resource for understanding the impact of AI on disinformation and finding ways to combat it.

Here, researchers, policymakers, and the public can access reliable tools and insights to navigate this complex landscape. Together, we’re building a community to tackle these challenges head-on, promoting awareness and digital literacy.

Join us in the fight against AI-driven disinformation. Follow us and share with the community!

NEURAL NEWS & TRENDS

We've curated a selection of articles from external sources that delve into the topic from different perspectives. Keep exploring the latest news and publications on AI and disinformation!

News

NewsGuard: DeepSeek, a new Chinese AI chatbot, has been ranked poorly in a NewsGuard audit, with an 83% “fail rate” in providing accurate news information. The tool ranked 10th out of 11 tested chatbots, behind leading Western competitors. DeepSeek often failed to debunk false claims and was found to perpetuate false narratives, particularly when responding to questions related to China, where it repeated the Chinese government’s position. Additionally, it provided outdated information, as it was only trained on data up until October 2023. Overall, according to another article by NewsGuard, the Chinese chatbot phenom is a “disinformation machine.”

Tech Crunch: Meta is enhancing its AI chatbot with a new memory feature that allows it to remember details from previous conversations, such as preferences or interests, across Facebook, Messenger, and WhatsApp. The bot can also use personal account data, like location or Instagram activity, to provide personalized recommendations. However, the memory feature won’t apply in group chats, and users can delete memories at any time. This upgrade is currently available in the U.S. and Canada, but there’s no opt-out option for the personalized recommendations.

Wired: Scammers known as Yahoo Boys are using AI-generated news videos to blackmail victims. These videos impersonate reputable networks like CNN and claim the victim is wanted for crimes, often involving explicit images. The fraudsters use AI-generated news anchors to add credibility, pressuring victims to pay by creating distressing fake reports.

Tech Crunch: Altman’s World project is evolving to verify AI agents by linking them to verified human identities. This aligns with OpenAI’s Operator, which allows AI agents to act autonomously on platforms. World’s proof-of-human tools could help businesses verify that AI agents are acting on behalf of real people, a shift that could transform online interactions.

Tech Crunch: Anthropic has introduced Citations, a new feature for its Claude AI models that allows developers to ground responses in specific source documents, such as emails. Available via Anthropic’s API and Google’s Vertex AI, Citations helps reduce hallucinations by providing detailed references to the exact sentences and passages used. The feature currently supports Claude 3.5 Sonnet and Claude 3.5 Haiku but comes with additional costs based on document length.
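For developers curious what this grounding looks like in practice, here is a minimal sketch of a Citations request using Anthropic’s Python SDK. The model alias, document text, and question are illustrative placeholders, not details taken from the article.

```python
# Minimal sketch of Anthropic's Citations feature.
# Assumptions: the document text, title, and question below are placeholders.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # Citations supports Claude 3.5 Sonnet and Haiku
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": [
            {
                # The source document the model should ground its answer in.
                "type": "document",
                "source": {
                    "type": "text",
                    "media_type": "text/plain",
                    "data": "Our Q3 revenue grew 12% year over year.",
                },
                "title": "Quarterly update email",
                "citations": {"enabled": True},  # switches the feature on
            },
            {"type": "text", "text": "How did revenue change in Q3?"},
        ],
    }],
)

# Text blocks in the response carry citation metadata pointing at the exact
# passages used, which is what helps reduce unsupported claims.
for block in response.content:
    if block.type == "text":
        print(block.text)
        for cite in getattr(block, "citations", None) or []:
            print("  cited:", cite.cited_text)
```

As the article notes, citation-enabled documents add to input costs in proportion to their length.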

Reuters: A joint investigation by NewsGuard, a rating system for news and information websites, and the German outlet Correctiv uncovered a network of 102 Russia-linked websites spreading AI-generated disinformation ahead of Germany’s February election. The sites, allegedly tied to former U.S. police officer John Mark Dougan, push false narratives targeting pro-NATO politicians while favoring Russia-friendly parties like the far-right AfD. Dougan denies any connection, and Russia has consistently rejected claims of disinformation campaigns.

The Guardian: Pope Francis warned global leaders at the World Economic Forum in Davos that AI could worsen the “crisis of truth,” urging governments and businesses to exercise oversight. In a written address, he highlighted ethical concerns and AI’s potential to blur the line between fact and fiction. The Pope himself has been the subject of viral AI-generated deepfakes, underscoring his concerns about misinformation.

Wired: Pearl, a new AI-powered search engine, combines AI responses with human fact-checking and expert consultations. Developed by Andy Kurtzig, the founder of JustAnswer, Pearl aims to reduce misinformation and operates on a freemium model: users get free AI answers and, for deeper insights, can connect with experts for a subscription fee. Kurtzig argues that Pearl’s integration of human experts shields it from potential legal issues like Section 230 liability, unlike other AI search engines. However, tests of the platform found the AI responses and fact-checks often unclear or generic, and the human expert consultations did not always offer better insights, especially given the cost of the service.

BBC: A US lawsuit accuses LinkedIn of secretly sharing Premium users’ private messages to train AI models, opting them into the program without clear consent. The Microsoft-owned company allegedly changed its privacy policy to conceal these actions, though it denies the claims. The lawsuit seeks damages for privacy violations and breach of contract.

Reuters: President Donald Trump revoked a 2023 executive order by Joe Biden that aimed to mitigate AI risks to national security, workers, and consumers. The order required AI developers to share safety test results with the government, a move Republicans argued hindered innovation. However, Trump left intact a separate Biden order supporting AI data centers’ energy needs.

The Conversation: People with less AI knowledge are more open to using it, a phenomenon researchers call the “lower literacy-higher receptivity” link. This is driven by a sense of AI’s “magicalness,” especially in human-like tasks, while those with higher literacy see it as a functional tool. Policymakers face a challenge: increasing AI literacy without dampening the enthusiasm that drives adoption.

Tech Crunch: AI struggles with high-level history questions: GPT-4 Turbo scored only 46% accuracy on a new benchmark, Hist-LLM. Researchers found that LLMs often extrapolate from prominent historical data, leading to errors, and perform worse on underrepresented regions. Despite these flaws, experts see potential for AI to assist historians with improved training and benchmarks.

CNN: Apple is pausing its AI-generated news summaries after the feature produced false headlines, sparking backlash from media organizations. The company plans to improve the technology and reintroduce it with clearer AI disclaimers. Press freedom groups warn that inaccurate AI-generated news poses risks to public trust in reliable information.

Euronews: A French woman was scammed out of €830,000 by fraudsters using AI-generated images and fake social media accounts to impersonate Brad Pitt. Believing she was in a relationship with the actor, she sent money over the course of a year before realising the deception. After sharing her story, she faced widespread online harassment instead of sympathy.

Financial Review: According to a study, the volume of words posted on LinkedIn has increased by 107% since the introduction of AI writing tools. Posts apparently generated with these tools, however, tend to receive about half as much engagement.

Events, jobs & announcements

Explore upcoming AI-related events, jobs and announcements that may be of interest to members of the counter-disinformation community.

On the 6th of February 2025, the Australian Policy Institute will co-host the panel discussion titled ‘Safeguarding Australian Elections: Addressing AI-Enabled Disinformation,’ exploring the intersection of AI, electoral integrity, and democratic resilience. The panel will feature Kate Seward (Microsoft ANZ), Antonio Spinelli (International IDEA), and Sam Stockwell (CETaS).

On Thursday, February 27, 2025, the ISACA Canberra Chapter will host the event ‘Foreign Interference, Elections, and AI,’ exploring the evolving challenges to Australia’s democratic processes, including disinformation campaigns, AI-driven attacker trends, and election security. The event will feature Bevan Read and James Murphy, who will provide insights on proactive measures being taken to combat these threats and safeguard the integrity of Australia’s elections.

On 10 and 11 February 2025, France will host the Artificial Intelligence (AI) Action Summit at the Grand Palais, gathering Heads of State and Government, leaders of international organizations, CEOs of small and large companies, representatives of academia, non-governmental organizations, artists, and members of civil society.

HumanX will take place in Las Vegas from March 10 to 13. Tailored for leaders, founders, policymakers, and investors shaping the future of artificial intelligence, it promises to be a defining event in the AI space.

On May 6 and 9, 2025, Data & Society will host an online workshop on the intersection of generative AI technologies and work. This workshop aims to foster a collaborative environment to discuss how we investigate, think about, resist, and shape the emerging uses of generative AI technologies across a broad range of work contexts. 

The AI for Good Global Summit 2025 will be held from 8 to 11 July in Geneva. This leading UN event on AI brings together top names in AI, with a high-level lineup of global decision makers. Its goal is to identify practical applications of AI, accelerate progress towards the UN SDGs and scale solutions for global impact. 

The Rundown is looking for a Writer – Robotics/Tech, responsible for researching and writing its bi-weekly robotics and tech-focused newsletters.

AI Disinfo Multimedia

A collection of webinars and podcasts from us and the wider community, dedicated to countering AI-generated disinformation.

Webinars

A collection of our own and community webinars exploring the intersection of AI and disinformation

AI Disinfo in Depth

A repository of research papers and reports from academia and civil society organisations, alongside articles addressing key questions related to the regulation of AI technologies and their use. It also features a collection of miscellaneous readings.

Research

A compact yet potent library of research on AI and disinformation

About policy & regulations

A look at regulation and policies on AI and disinformation

Miscellaneous readings

Recommended reading on AI and disinformation

Community

A list of tools to fight AI-driven disinformation, along with projects and initiatives tackling the challenges posed by AI. The ultimate aim is to foster cooperation and resilience within the counter-disinformation community.

Tools

A repository of tools to tackle AI-manipulated and/or AI-generated disinformation.

Initiatives & organisations

Organisations working in the field and initiatives launched by community members to address the challenges posed by AI in the disinformation field.

veraAI is a research and development project focusing on disinformation analysis and AI-supported verification tools and services.

AI against disinformation is a cluster of six European Commission co-funded research projects that include research on AI methods for countering online disinformation. Ongoing research focuses on the detection of AI-generated content and the development of AI-powered tools and technologies that support verification professionals and citizens with content analysis and verification.

AI Forensics is a European non-profit that investigates influential and opaque algorithms. They hold major technology platforms accountable by conducting independent and high-profile technical investigations to uncover and expose the harms caused by their algorithms. They empower the research community with tools, datasets and methodologies to strengthen the AI audit ecosystem.

AI Tracking Center is intended to highlight the ways that generative AI has been deployed to turbocharge misinformation operations and unreliable news. The Center includes a selection of NewsGuard’s reports, insights, and debunks related to artificial intelligence.

AlgorithmWatch is a non-governmental, non-profit organisation based in Berlin and Zurich. They fight for a world where algorithms and Artificial Intelligence (AI) do not weaken justice, human rights, democracy and sustainability, but strengthen them.

The European AI & Society Fund empowers a diverse ecosystem of civil society organisations to shape policies around AI in the public interest and galvanises the philanthropic sector to sustain this vital work.

The European AI Media Observatory is a knowledge platform that monitors and curates relevant research on AI in media, provides expert perspectives on the potentials and challenges that AI poses for the media sector and allows stakeholders to easily get in touch with relevant experts in the field via their directory.

GZERO Daily is an essential weekday morning read offering insights, news, satire, and crosswords for anyone who wants real insight on the news of the day, plus a weekly exclusive edition written by Ian Bremmer.

Queer in AI is an initiative established by queer scientists in AI with the mission to make the AI community a safe and inclusive place that welcomes, supports, and values LGBTQIA2S+ people. Their aim is to build a visible community of queer AI scientists through different actions.

AI for Good is the United Nations’ leading platform on Artificial Intelligence for sustainable development. Its mission is to leverage the transformative potential of artificial intelligence (AI) to drive progress toward achieving the UN Sustainable Development Goals.

Omdena is a collaborative AI platform where a global community of changemakers unites to co-create real-world tech solutions for social impact. It combines collective intelligence with hands-on collaboration, empowering a community from across all industries to learn, build, and deploy meaningful AI projects.

Faked Up curates a library of academic studies and reports on digital deception and misinformation, offering accessible insights for subscribers. The collection includes studies from 2020 onward, organised into clusters like misinformation prevalence, fact-checking effects, and AI-generated deceptive content. It serves as a practical resource for understanding and addressing misinformation challenges.

Last updated: 03/02/2025

The articles and resources listed in this hub do not necessarily represent EU DisinfoLab’s position. This hub is an effort to give voice to all members of the community countering AI-generated disinformation.