
The veraAI project aims to develop and build trustworthy AI solutions in the fight against disinformation.
The project continues and enriches the work started in its forerunner project, WeVerify, which was completed in November 2021. The consortium brings together leading research teams, technology experts and end users from the domain of countering disinformation. The partners follow a multidisciplinary co-creation approach and deliver solutions that directly address the latest user needs and can be used by journalists, fact-checkers, investigators, researchers and anyone else verifying content. The veraAI solutions use AI methods to support online content verification activities and handle many content types (audio, video, images and text) across various languages.
veraAI is part of the “AI Against Disinformation” cluster of projects that focus on creating AI-based tools and methodologies for automated content verification, detecting manipulated media and deepfakes, and understanding the societal implications of AI in the information ecosystem.
Project name: VERification Assisted by Artificial Intelligence
Funding programme: Horizon Europe Framework Programme
Call: HORIZON-CL4-2021-HUMAN-01
Duration: 15 September 2022 – 14 September 2025

HIGHLIGHTS

'AI Against Disinformation' cluster
Disinformation has become one of the defining challenges of our time, eroding trust in institutions, polarising societies, and threatening democratic processes worldwide. To address these pressing issues, a series of cutting-edge research and innovation (R&I) initiatives has been launched, supported by the European Commission.

Visual assessment of CIB in disinformation campaigns
This report, developed under the veraAI project, uses a visual approach to analyse disinformation campaigns, scoring them on a 0–100% scale based on 50 indicators of Coordinated Inauthentic Behaviour (CIB).
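The scoring idea described above can be sketched in a few lines of Python. This is a hypothetical illustration of how a 0–100% score could be derived from 50 boolean CIB indicators; the indicator names and the equal-weight averaging are assumptions for the example, not the report's actual methodology.

```python
def cib_score(indicators: dict[str, bool]) -> float:
    """Return the share of triggered CIB indicators as a percentage (0-100)."""
    if not indicators:
        return 0.0
    # Each indicator contributes equally (an assumption for this sketch).
    return 100.0 * sum(indicators.values()) / len(indicators)

# Illustrative campaign with 50 indicators, 10 of which fire.
campaign = {f"indicator_{i}": (i % 5 == 0) for i in range(50)}
print(cib_score(campaign))  # → 20.0
```

A real assessment would likely weight indicators differently by severity; the uniform average here is only the simplest reading of "scoring on a 0–100% scale based on 50 indicators".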

EUDL webinar alert: join us on 12 Dec 2024 for a talk on verification tools and guidelines
The European External Action Service (EEAS) recently published the “OSINT Toolkit to Detect and Analyse IBD-focused FIMI”, written by EU DisinfoLab. Following this work and a recent in-person launch event in Brussels, EUDL now invites anyone interested to a webinar in which some of the report’s findings will be discussed and brought to an even wider audience.

veraAI has joined Bluesky: hope to see you there!
On Thursday 14 November 2024 we launched our presence on Bluesky, a decentralised social network. Things took off there rather quickly. Altering that old saying slightly, it could well be that “the sky is bluer on the other side”.

Webinar: Advancing synthetic media detection: introducing veraAI
This EU DisinfoLab webinar delved into the latest advancements in synthetic media detection, with a strong focus on the innovative work conducted within the veraAI project.

Coordinated Inauthentic Behaviour detection tree
The EU DisinfoLab has authored a document under the EU-funded veraAI project entitled ‘Revisit the Coordinated Inauthentic Behaviour detection tree’ (pdf). The document revisits the Coordinated Inauthentic Behaviour (CIB) detection tree developed in 2021, covering: 1) the Coordination Assessment; 2) the Source Assessment; 3) the Impact Assessment; and 4) the Authenticity Assessment.

Webinar: Using Generative AI for the production, spread, and detection of disinformation
In this EU DisinfoLab webinar, Kalina Bontcheva from the University of Sheffield talked about veraAI, and the challenges and opportunities presented by generative AI in the context of disinformation production, spread, and detection.

Platforms’ AI policy updates in 2024: labelling as the silver bullet?
In June 2024, we published a new version of our report “Platforms’ policies on AI-manipulated or generated misinformation”, compiling the steps platforms have taken recently. Spoiler alert: most of the actions taken in 2024 reaffirm the approach these platforms were already focusing on to tackle the problem, i.e. labelling.

Meet the Future of AI 2024 – Generative AI and Democracy
On 19 June 2024 a significant milestone was reached as six European-funded projects focused on AI and disinformation (AI4Media, Titan, veraAI, AI4Trust, AI4Debunk and AI-CODE), alongside the European Commission, hosted the event "Meet the Future of AI - Generative AI and Democracy" in Brussels.

White Paper on Generative AI and Disinformation
The EU co-funded projects TITAN, AI4Media, AI4Trust and veraAI have joined forces: participants of the four Horizon Europe projects co-wrote a White Paper entitled "Generative AI and Disinformation: Recent Advances, Challenges, and Opportunities." It takes a look at past and present developments and raises issues for the future in this highly relevant field.

Mitigation of systemic risks in the disinformation space
Following a call to action by the European Commission requesting feedback on systemic risks related to electoral processes, individuals from several veraAI consortium partners sketched out their views on the matter. This article recaps and shares some of their contributions.

Meet the Future of AI
On 29 June 2023, the day had finally come. Horizon Europe research projects AI4Media, AI4Trust, TITAN, and veraAI – together with the European Commission – welcomed about 90 participants on-site at VRT in Brussels for the event "Meet the Future of AI: Countering Sophisticated & Advanced Disinformation".

A year in review – vera.ai turns ONE
What happened within veraAI’s first year? What has the project accomplished and where is it heading? Take a look with us at 365 days of vera.ai and beyond.

WHO IS BEHIND VERA.AI? Meet: Alexandre Alaphilippe
With the interview format 'Who is behind vera.ai?', we present, one by one, the people who make up the project, their roles within vera.ai, as well as their views on challenges, trends and visions for the project's outcomes.

vera.ai Policy Relevant Evidence
Tailored to the needs emerging from the vera.ai project, this contribution presents a series of policy recommendations aimed at enhancing data access tools, streamlining application processes, and promoting transparency in the use of social media data for academic and investigative purposes.