17 April, 14:30 – 15:30 CEST

Join us for a discussion with Sophia Freuden, a researcher from The American Sunlight Project, as she unpacks the emerging concept of LLM grooming – a novel strategy to manipulate large language models (LLMs) for foreign information manipulation and interference (FIMI) purposes.

We explore how the pro-Russia Pravda network appears to have mass-seeded LLMs with tailored inputs, potentially reshaping the way these systems learn, respond, and even influence the architecture of the internet itself. We examine what LLM grooming actually entails, what evidence points to coordinated activity by the Pravda network, and what this could mean for the future of AI and global information ecosystems. We also look at the wider consequences – political, social, and technological – and consider how both state and non-state actors might use similar tactics, and examine what can be done to mitigate these risks.

Watch the replay of the session here:

Speaker:

Sophia Freuden, Research Consultant, The American Sunlight Project

Sophia Freuden is a researcher at The American Sunlight Project (ASP) and the primary author of ASP’s recent report, “A Pro-Russia Content Network Foreshadows the Automated Future of Info Ops.” For over five years, she has conducted technology-driven research on information operations and cybersecurity issues at institutions in Europe and the United States. She holds a Master of Arts in Russian regional studies from Harvard University.

Moderator:

Raquel Miguel Serrano, EU DisinfoLab

Raquel Miguel Serrano is a senior researcher at EU DisinfoLab. She has a background in journalism and spent most of her professional career working for the German press agency DPA until 2019, when she shifted her focus to disinformation. After receiving a Master’s degree in Cyber Intelligence, she began working with EU DisinfoLab. She is the author of multiple articles, mainly focused on mis- and disinformation circulating in Spain and Germany, but also on broader topics such as the impact of online disinformation and pre-bunking as a tool to counter information disorders. More recently, she has been working on other areas, such as FIMI and the challenges posed by generative AI.

The opinions expressed are those of the speakers/authors and do not necessarily reflect the position of EU DisinfoLab. This webinar does not represent an endorsement by EU DisinfoLab of any organisation.