The vera.ai project aims to build trustworthy AI solutions for the fight against disinformation.
The project continues and enriches the work started in its forerunner project, WeVerify, which was completed in November 2021. The consortium brings together leading research teams, technology experts and end users from the domain of countering disinformation. The consortium partners follow a multidisciplinary co-creation approach and deliver solutions that directly address the latest user needs and can be used by journalists, fact-checkers, investigators, researchers and all those verifying content. The vera.ai solutions will use AI methods to support online content verification activities and handle multiple content types (audio, video, images and text) across various languages.
Project name: VERification Assisted by Artificial Intelligence
Funding programme: Horizon Europe Framework Programme
Duration: 15 September 2022 – 14 September 2025
In an era dominated by online platforms, access to comprehensive data has become paramount for researchers seeking to analyse the dynamics of online content. Tailored to the needs emerging from the vera.ai project, this contribution presents a series of policy recommendations aimed at enhancing data access tools, streamlining application processes, and promoting transparency in the use of social media data for academic and investigative purposes.
On 29 June 2023, the day had finally come. The Horizon Europe research projects AI4Media, AI4Trust, TITAN, and vera.ai – together with the European Commission – welcomed about 90 participants on-site at VRT in Brussels for the event "Meet the Future of AI: Countering Sophisticated & Advanced Disinformation".
In our interview series WHO IS BEHIND VERA.AI?, we present the people who make up the project, their roles within vera.ai, and their views on challenges, trends and visions for the project's outcomes.