AI Disinfo Hub

The development of artificial intelligence (AI) technologies has long been a challenge for the disinformation field, enabling the manipulation of content and accelerating its spread. Recent technical developments have exponentially increased these challenges. While AI offers opportunities for legitimate purposes, it is also widely generated and disseminated across the internet, causing – intentionally or not – harm and deception.

This hub intends to assist you to better understand how AI is impacting the disinformation field.  To be up-to-date on the latest developments, we will collect the latest Neural News and Trends and include upcoming events and job opportunities that you cannot miss.

Are you more into podcast and video content? You will find a repository of podcasts and webinars in AI Disinfo Multimedia, while AI Disinfo in Depth will feature research reports from academia and civil society organisations. This section will cover the burning questions related to the regulation of AI technologies and their use. In addition to this, the Community working in the intersections of AI and disinformation will have a dedicated space where initiatives and resources will be listed, as well as useful tools.

In short, this hub is your go-to resource for understanding the impact of AI on disinformation and finding ways to combat it.

Here, researchers, policymakers, and the public can access reliable tools and insights to navigate this complex landscape. Together, we’re building a community to tackle these challenges head-on, promoting awareness and digital literacy.

Join us in the fight against AI-driven disinformation. Follow us and share with the community!

NEURAL NEWS & TRENDS

We've curated a selection of articles from external sources that delve into the topic from different perspectives. Keep exploring the latest news and publications on AI and disinformation!

News
News

Bloomberg: YouTubers and digital creators are selling unused video footage to AI companies like OpenAI and Google, earning thousands per deal. These exclusive videos are valuable for training AI models as they provide unique, unpublished content. This trend offers creators a new income stream beyond traditional advertising partnerships.

MSN: Elon Musk faces allegations of using his AI chatbot, Grok, to author a controversial column for the German weekly Welt am Sonntag. The column, advocating for the far-right AfD party as “Germany’s last hope,” closely mirrors text generated by Grok when prompted with a similar topic. German newspaper Tagesspiegel and AI detection tools highlighted striking similarities, raising questions about the column’s authorship.

AP: Matthew Livelsberger, a decorated soldier who exploded a Tesla Cybertruck outside the Trump hotel in Las Vegas, reportedly used generative AI tools like ChatGPT to help plan the attack. Police found that Livelsberger had searched for information on explosives and firearms, though he did not intend to harm others. The incident marks the first known case of ChatGPT being used to assist in creating a device for a violent act, raising concerns about the potential misuse of AI.

BBC: Apple faces growing pressure to withdraw its AI news summarization feature on iPhones, criticized for generating false claims in news alerts. Organizations like the BBC, NUJ, and RSF argue the tool risks misinformation and undermines trust in journalism. Apple acknowledged the issue, pledging to clarify that summaries are AI-generated, but critics insist the feature is not ready and should be removed.

Reuters: The UK government announced plans to criminalize the creation and sharing of sexually explicit “deepfakes,” targeting a growing form of abuse primarily affecting women and girls. Deepfakes, digitally altered images made with AI, have contributed to a 400% rise in image-based abuse since 2017, according to the Revenge Porn Helpline. The new law will allow prosecution of perpetrators, expanding protections beyond existing revenge porn legislation.

404 Media: Instagram is testing a feature where Meta’s AI generates personalized images of users in various scenarios and integrates them into their feeds. A Reddit user reported seeing an AI-created slideshow of himself in a “mirror maze” after using Instagram’s “Imagine” feature to edit selfies. The AI-generated posts include tailored captions and appear to use uploaded selfies to create targeted content, raising questions about privacy and user consent.

Conspirador Norteño: Six now-suspended Bluesky accounts posed as liberal activists but were part of an AI-powered spam network generating unsolicited replies and following users en masse. The accounts, active since December 2024, relied on LLMs for their content, showed erratic posting patterns, and explicitly identified themselves as AI in some responses. Despite their suspension, similar networks could reemerge.

The Financial Times: Meta is betting that characters generated by artificial intelligence will fill its social media platforms in the next few years. Furthermore, Meta is planning to introduce AI-generated users on its platform, featuring profiles with bios, pictures, and AI-powered content sharing (Wired, 08/01/2025). Despite these news, Meta is removing its AI-generated Instagram and Facebook profiles, initially launched in 2023, after some went viral due to controversial user interactions (The Guardian, 03/01/2025)

Wired: In 2024, fears of generative AI dominating elections through deepfakes proved exaggerated, as such content was rarely deceptive or impactful. Instead, AI’s influence was subtler, with campaigns using it to write emails, ads, and speeches. Concerns remain over gaps in AI-detection tools, especially in non-Western regions, and the “liar’s dividend,” where real media is falsely dismissed as fake.

The Guardian: An investigation has revealed vulnerabilities in OpenAI’s ChatGPT search tool, highlighting risks of manipulation and deceptive practices. Tests showed that hidden text on websites could influence the AI’s responses, overriding actual content with biased or malicious instructions—a technique known as “prompt injection.” This could lead ChatGPT to generate misleading product reviews or even provide harmful code.

Stanford University: In 2025, AI is expected to advance through collaborative agents where specialized systems work together to solve complex problems, with human guidance. Experts predict skepticism around AI in education, along with increased risks of scams due to generative AI misuse. Additionally, AI agents will collaborate in multidisciplinary teams, and the focus will shift toward evaluating real-world benefits and human-AI collaboration.

Fast Company: Elon Musk’s Grok-2, now freely accessible, has sparked viral moments and backlash, with users exploiting its flaws for memes and controversy. Despite its claimed improvements, Grok-2 has produced polarising statements and making misleading or inaccurate responses. The chatbot’s ability to generate personalised content has raised privacy concerns, particularly after instances where users’ profiles were used to create images without their consent.

Forbes: Despite being the chief suspect in the shooting and murder of UnitedHealthcare CEO Brian Thompson, Luigi Mangione has to some become a poster boy for the injustices of America’s healthcare system. Since he was arrested, people have created a number of AI chatbots trained on his online posts and personal history, including as many as 13 on Character.ai, a site where users can create AI avatars.

The Insider: A Russian disinformation network, Matryoshka, is using AI to create fake videos of renowned academics, including professors from top universities, spreading false claims that Ukraine should surrender to Russia. These videos manipulate real footage and clone the voices of scholars to deliver political messages, such as condemning sanctions on Russia and portraying Ukrainian president Zelensky negatively. The campaign has been identified across multiple languages and social media platforms, aiming to deceive global audiences.

Tech Crunch: Meta has launched a new tool called Video Seal to watermark AI-generated videos, helping to combat the rise of deepfakes. The tool, open source and integrated into existing software, aims to add imperceptible watermarks that withstand video compression and editing. Despite its robustness, Video Seal faces challenges such as limited adoption due to existing proprietary solutions, prompting Meta to promote its use through a public leaderboard and industry collaborations.

Events, jobs & announcements

Explore upcoming AI-related events, jobs and announcements that may be of interest to members of the counter-disinformation community.

IIC organises an event about the implications of AI and what it means for regulators – with a focus on how it transforms both the way we work and how we regulate its use for the benefit of consumers and businesses. The meeting is open to all IIC members and by invitation only to non-members.

 

On 10 and 11 February 2025, France will host the Artificial Intelligence (AI) Action Summit, gathering at Grand Palais, Heads of State and Government, leaders of international organizations, CEOs of small and large companies, representatives of academia, non-governmental organizations, artists and members of civil society.

 

HumanX will take place over three days in Las Vegas from March 10–13. Tailored for leaders, founders, policymakers, and investors shaping the future of artificial intelligence, it promises to be a defining event in the AI space.

On May 6 and 9, 2025, Data & Society will host an online workshop on the intersection of generative AI technologies and work. This workshop aims to foster a collaborative environment to discuss how we investigate, think about, resist, and shape the emerging uses of generative AI technologies across a broad range of work contexts. 

The AI for Good Global Summit 2025 will be held from 8 to 11 July in Geneva. This leading UN event on AI brings together top names in AI, with a high-level lineup of global decision makers. Its goal is to identify practical applications of AI, accelerate progress towards the UN SDGs and scale solutions for global impact. 

 

The Commission has opened two calls for expression of interest to recruit new members for the European AI Office. Apply for Legal Officer and Policy Officer.

The Rundown, the world’s largest AI newsletter and media co, is looking for a Content Writer 

AI & Disinfo Multimedia

A collection of webinars and podcasts from us and the wider community, dedicated to countering AI-generated disinformation.

Webinars

Our own and community webinar collection exploring the intersections of AI and disinformation

AI Disinfo in depth

A repository of research papers and reports from academia and civil society organisations alongside articles addressing key questions related with the regulation of AI technologies and their use. It also features a collection of miscellaneous readings.

Research

A compact yet potent library dedicated to what has been explored in the realm of AI and disinformation

About policy & regulations

A look at regulation and policies implemented on AI and disinformation

Miscellaneous readings

Recommended reading on AI and disinformation

Community

A list of tools to fight AI-driven disinformation, along with projects and initiatives facing the challenges posed by AI. The ultimate aim is to foster cooperation and resilience within the counter-disinformation community.

Tools

A repository of tools to tackle AI-manipulated and/or AI-generated disinformation.

Initiatives & organisations

Organisations working in the field and initiatives launched by community members to address the challenges posed by AI in the disinformation field.

veraAI is a research and development project focusing on disinformation analysis and AI supported verification tools and services.

AI against disinformation is a cluster of six European Commission co-funded research projects, which include research on AI methods for countering online disinformation. The focus of ongoing research is on detection of AI-generated content and development of AI-powered tools and technologies that support verification professionals and citizens with content analysis and verification.

AI Forensics is a European non-profit that investigates influential and opaque algorithms. They hold major technology platforms accountable by conducting independent and high-profile technical investigations to uncover and expose the harms caused by their algorithms. They empower the research community with tools, datasets and methodologies to strengthen the AI audit ecosystem.

AI Tracking Center is intended to highlight the ways that generative AI has been deployed to turbocharge misinformation operations and unreliable news. The Center includes a selection of NewsGuard’s reports, insights, and debunks related to artificial intelligence

AlgorithmWatch is a non-governmental, non-profit organisation based in Berlin and Zurich. They fight for a world where algorithms and Artificial Intelligence (AI) do not weaken justice, human rights, democracy and sustainability, but strengthen them.

The European AI & Society Fund empowers a diverse ecosystem of civil society organisations to shape policies around AI in the public interest and galvanises the philanthropic sector to sustain this vital work.

The European AI Media Observatory is a knowledge platform that monitors and curates relevant research on AI in media, provides expert perspectives on the potentials and challenges that AI poses for the media sector and allows stakeholders to easily get in touch with relevant experts in the field via their directory.

Stay informed with GZERO Daily. Insights. News. Satire. Crosswords. The essential weekday morning read for anyone who wants real insight on the news of the day. Plus, a weekly exclusive edition written by Ian Bremmer

Queer in AI is an initiative established by queer scientists in AI with the mission to make the AI community a safe and inclusive place that welcomes, supports, and values LGBTQIA2S+ people. Their aim is to build a visible community of queer AI scientists through different actions.

AI for Good is the United Nations’ leading platform on Artificial Intelligence for sustainable development. Its mission is to leverage the transformative potential of artificial intelligence (AI) to drive progress toward achieving the UN Sustainable Development Goals.

Omdena is a collaborative AI platform where a global community of changemakers unites to co-create real-world tech solutions for social impact. It combines collective intelligence with hands-on collaboration, empowering the community from across all industries to learn, build, and deploy meaningful AI projects. 

Faked Up curates a library of academic studies and reports on digital deception and misinformation, offering accessible insights for subscribers. The collection includes studies from 2020 onward, organised into clusters like misinformation prevalence, fact-checking effects, and AI-generated deceptive content. It serves as a practical resource for understanding and addressing misinformation challenges.

Last updated: 13/01/2025

The articles and resources listed in this hub do not necessarily represent EU DisinfoLab’s position. This hub is an effort to give voice to all members of the community countering AI-generated disinformation.