Dearest readers,
Bonjour! We’re kicking off February with cold weather and plenty of rain, but that won’t stop us from bringing you exciting updates and developments – some challenging, others very promising.
We’re thrilled to announce that the #Disinfo2026 conference call for proposals is now open until the end of February! Don’t miss the opportunity to share the topics and ideas you’d like to see covered at our annual conference, which will take place in Vilnius, Lithuania, on 6–8 October 2026.
In case you missed it, we’ve got four hubs, and they’ve just been updated! In this edition, we’re giving you access to the best each one has to offer. Check out our Climate Clarity Hub, AI Disinfo Hub, Conflict & Crisis Hub, and Doppelganger Hub.
Key investigations continue to unfold, from TikTok protests for profit to wikilaundering by PR firms, highlighting the need for platform accountability. Meanwhile, the EU’s #ClimateFactsMatter campaign is addressing climate disinformation and supporting public resilience.
And from Brussels, two significant updates you don’t want to miss: The European Democracy Shield draft report is out and the Commission’s decision on X’s fine has been published, but… not by the Commission itself.
This newsletter not only brings you the latest on disinformation: we also recommend an excellent book, offer a sneak peek into a community member’s LinkedIn 🙃, share where our EU DisinfoLab colleagues go when they’re not in the office, and highlight upcoming events, job opportunities, and more… including a little trick to bypass X restrictions without logging in 😎.
Stay informed and stay involved! Thanks for reading.
Our Webinars
UPCOMING – REGISTER NOW!

The session will explore the psychological factors that shape how people respond to false or misleading content, with a particular focus on AI-generated and online misinformation. Led by Dr Jason Potel, Visiting Researcher at Goldsmiths, it will draw on existing research to show how people may be more likely to believe or share misleading information when they feel a lack of control, confidence, or social connection.
12 February: Scroll, stop, verify. How the EU is bringing climate facts forward

Jeremy Herry from the European Commission (Directorate-General for Climate Action) will introduce the Commission’s newly launched campaign #ClimateFactsMatter. This will be followed by Stephanie Hankey from Tactical Tech, who will share insights on climate disinformation trends and solutions from her latest publication, The RePlaybook.
PAST – WATCH THE RECORDINGS!
- Climate deception ‘on air’ | Mainstream TV and radio remain among the most powerful information channels, and increasingly, vectors of climate disinformation. In this webinar, Louna Wemaere (QuotaClimat) presents comparative research from France and Brazil, showing how broadcasters amplify misleading and false claims about renewables, electric vehicles, energy prices, deforestation, and climate policy, shaping public opinion and undermining democratic debate.
- Pop fascism: memes, music, and the digital revival of historical extremism | Franco in glitter sunglasses. Mussolini dancing in a school corridor. Hitler celebrating like Cristiano Ronaldo. These are not “just jokes”, they are disinformation tactics. This webinar exposes how “pop fascism” operates by laundering extremist ideology through memes, music, football fandom, AI-generated videos, and nostalgic aesthetics. With Coral García (Maldita.es) and Francesca Capoccia (Facta).
Don’t miss out, watch the recordings and explore all our past EU DisinfoLab webinars.
| 🧡 A huge thank you to all our speakers, partners, and participants for making every conversation sharper, deeper, and more impactful. If your company or institution is interested in partnering with us and sponsoring our webinars, please reach out if you’d like to discuss how we can work together: info@disinfo.eu |
Disinfo news & updates
Major investigations
- TikTok protests for profit. A Maldita.es investigation found 550 TikTok accounts posting over 5,800 AI-generated “protest” videos linked to Iran, Venezuela, Palestine and more, amassing over 89 million views. Creators use polarising, emotional AI content to rapidly build follower counts, then generate profit via TikTok’s Creator Rewards Programme or by selling accounts – a practice the outlet argues breaches platform rules and raises Digital Services Act compliance concerns.
- China-linked spoofed media network. Graphika uncovered a network of 43 domains and 37 subdomains posing as reputable outlets like The New York Times, The Guardian and The Wall Street Journal to push pro-China messaging. The infrastructure shows technical links to Chinese companies tied to prior pro-China campaigns and overlaps with Spamouflage-linked amplification on Western platforms.
- Wikilaundering exposed. The Bureau of Investigative Journalism reports that London PR firm Portland Communications allegedly ran a covert “wikilaundering” operation, using subcontractors and sockpuppet accounts to edit Wikipedia pages for governments and billionaires in ways that buried critical reporting and amplified favourable narratives. The investigation warns the stakes are rising as Wikipedia content is increasingly reused by AI search summaries and chatbots, meaning subtle edits can shape public perception far beyond the platform itself.
Resilience & Security
- Germany, Italy back anti-disinfo hub. Both countries pledged to strengthen the European Centre of Excellence for Countering Hybrid Threats after the US, dismissing it as “wasteful”, withdrew under Trump. The move underlines Europe’s focus on information resilience amid Russian hybrid threats.
- Germany targets “disposable agents”. Germany plans to amend its Criminal Code to punish “disposable agents” acting on behalf of foreign powers, amid sabotage investigations that authorities suspect may be linked to Russia. The move aims to strengthen penalties for deniable interference and espionage-related activity.
- Portugal election disinfo. EDMO/IBERIFIER monitoring found 48 significant disinformation cases in Portugal’s 2026 presidential election, with anti-immigration narratives being the most prominent, often linked to “Great Replacement” framing. Researchers also flagged fraud claims and attacks spreading across X, TikTok, Facebook and Instagram.
Influence ops
- Greenland anxiety disinfo. Pro-Kremlin accounts circulated fabricated videos impersonating Western outlets to claim Europe’s support for Ukraine has left Greenland exposed to Trump’s annexation threats. The clips spread across X, Telegram and TikTok and were amplified by Russian state and pro-Kremlin media.
- Lavrov’s three-hour disinfo. Russia’s Foreign Minister Sergey Lavrov used his 2026 annual press conference, lasting nearly three hours, as a coordinated FIMI offensive against Europe, personally targeting EU leaders and portraying Europe’s security structures – NATO, the OSCE, and the EU – as hostile, weak and illegitimate.
AI disinfo watch
- AI becomes a health information gatekeeper. Recent developments show how AI is reshaping access to health information, and its risks: Google scaled back some AI Overviews after harmful inaccuracies, and a new study suggests the tool cites YouTube more than any medical site for health queries. This setback at Google has not deterred OpenAI from launching its own health-focused product: “ChatGPT Health”. Meanwhile, an investigation found synthetic “AI doctors” spreading dubious advice at scale across platforms, a reminder that AI can also amplify health misinformation.
- Grok’s image tool sparks DSA scrutiny. Grok’s AI image generator is under fire after being used to create non-consensual sexualised images, including of women and children. While X says safeguards have been strengthened, researchers found loopholes across other access points, prompting a formal EU DSA investigation into risks linked to Grok’s deployment on X. Meanwhile, China has taken a tougher approach, prosecuting AI chatbot developers over AI-generated sexual content.
- ChatGPT will introduce advertising. OpenAI is reportedly preparing to test ads inside ChatGPT for logged-in adult users in the US (Free/Go tiers), shown in clearly labelled boxes below answers. While OpenAI says ads won’t affect responses or access conversations, experts warn the model could shift incentives in ways that increase manipulation risks.
- “@Grok is this true?” becomes X’s new fact-check habit. New research suggests X users are increasingly treating Grok as a built-in fact-checker, generating hundreds of thousands of verification prompts in 2025. But early evidence also points to a decline in Community Notes activity, raising concerns that automated “fact-checking” may displace human-led accountability.
🔎 For more AI-related disinformation news and resources, see our regularly updated AI Disinfo Hub.
Conflict information disorder
- Iran: protests under an information blackout. During Iran’s January 2026 protests, authorities imposed a near-total internet and phone shutdown, creating ideal conditions for information manipulation.
- Blackout was used as a tactic: it helped silence media, conceal abuses, and restrict independent coverage.
- Information vacuum meant disinfo surge, with coordinated campaigns and bots pushing competing narratives.
- AI + recycled footage went viral: hyper-realistic synthetic protest videos and repurposed clips spread widely, blurring reality and making verification far more difficult.
- Analysis questions the rising profile of Reza Pahlavi, citing his polarising legacy, unclear support inside Iran, and alleged influence efforts boosting him online.
- Gaza: restricted access fuels information disorder. With international journalists largely barred from entering Gaza, UNRWA’s Philippe Lazzarini warns that the lack of independent reporting is enabling disinformation and polarised narratives to thrive.
- Open-source investigators fill the gap: In a CIR article (28 Jan 2026), co-founder Ross Burley explains how open-source investigators have documented the Israel–Gaza war while navigating an “unusually hostile” information environment.
- Ukraine: hybrid war alongside the military one. Moldova’s President Maia Sandu warns Europe is facing “two wars”: Russia’s aggression in Ukraine and a parallel campaign of disinformation, cyberattacks and election interference.
- New research on Russian-language discourse on X suggests coordinated behaviour on both pro- and anti-war sides, including bot activity and tactics like hashtag hijacking.
- Another investigation, mapping 3,600+ Telegram channels, finds a Kremlin-centred hub and pseudo-Ukrainian channels embedded in Russian propaganda networks.
Crisis information disorder
- New French health disinfo strategy. The government unveiled a national plan to combat health disinformation, creating an observatory and an “infovigilance” system to track viral falsehoods.
- Germany rebuts RFK Jr. The German health minister rejected Robert F. Kennedy Jr.’s claims that German doctors are being prosecuted for issuing COVID vaccine or mask exemptions, calling them “completely unfounded” and “factually incorrect”. She said prosecutions only related to fraud or forged documents.
- Nutrition myths go viral. A new review warns that social media is fuelling nutrition misinformation (detox ads, misleading “healthy” products), calling for stronger health literacy and evidence-based messaging.
- “Weather manipulation” conspiracy surge. A research report finds that “weather manipulation” conspiracy theories spiked in 2025 around disasters and political events, spreading heavily on right-wing platforms (especially Truth Social).
🔎 Explore more: Our Conflicts & Crisis Hub tracks key narratives, recurring tactics, and past cases, with archived items and resources to go deeper.

Policy & EU action
- #ClimateFactsMatter: EU campaign against climate disinformation. The European Commission (Directorate-General for Climate Action) has launched #ClimateFactsMatter, an EU‑wide campaign to strengthen public resilience against climate disinformation. Built around prebunking, it helps people recognise manipulation tactics early and stay anchored in credible information.
- BE AWARE, BE PREPARED, BE INFORMED. The campaign focuses on these three pillars and offers shareable content and a toolkit for stakeholders and amplifiers. EU DisinfoLab is partnering with DG CLIMA to support this effort with research‑based insights on how disinformation spreads.
- When climate disinformation clouds reality, you see clearly. Recent studies show that 71% of Europeans report frequent exposure to disinformation, and among 25–39-year-olds, nearly 9 out of 10 encountered it in just the past week. The European Commission has therefore made countering climate-related disinformation a priority. By making the issue clearer and more accessible, the campaign encourages citizens across the EU to stay alert, recognise misleading claims, and feel confident challenging them when they appear.
- Toolkit to tackle climate disinfo. The campaign puts forward tips on how to spot disinformation, videos, a dedicated toolkit for stakeholders and amplifiers, and practical examples that show how climate disinformation works. It is designed to reach people who care about climate change but don’t always have the time to cut through the noise, such as busy millennials balancing work and family life.
- EU backs climate information integrity. On 20 January 2026, the EU endorsed the Declaration on Information Integrity on Climate Change launched during COP30 in Brazil, reaffirming its commitment to evidence-based climate debate.
- Amsterdam bans fossil fuel ads. From 1 May 2026, Amsterdam will become the first capital city in the world to ban fossil fuel and meat ads in public spaces.
Media & narratives
- France update: climate scepticism vs trust in science. A Fondation Descartes survey tracks how French perceptions of climate information have shifted since 2022, showing slightly higher climate scepticism, but continued strong trust in scientists and growing demand for clearer media coverage orientated towards concrete solutions and more practical advice.
- Climate disinformation is reshaping geopolitics. NewClimate Institute argues that climate disinformation has become a geopolitical and security issue, increasingly targeting the politics of climate action (cost, trust, polarisation) rather than denying climate science.
- Ad industry whistleblowers: “lip service” on climate & safety. DeSmog reports on an anonymous memo from senior advertising executives warning that major agencies and brands continue funding harmful online content and legitimising polluting industries.
Influence & obstruction
- Coordinated campaign against EU green laws. DeSmog reports that the American Petroleum Institute is stepping up efforts to weaken key EU climate rules, including the EU Methane Regulation, using coordinated lobbying and influence tactics.
- A new influence front: carbon accounting. ExxonMobil is supporting “Carbon Measures,” a scheme that critics say could dilute responsibility for fossil fuel emissions by moving climate “liability” down the supply chain to buyers, according to DeSmog. This would reshape how accountability is communicated and enforced.
- Davos soundbite: Trump attacks UK climate policy. Speaking at the World Economic Forum in Davos, Donald Trump claimed UK environmental rules and taxation block North Sea oil and gas development, calling green energy policies a “scam”. Environmental groups pushed back sharply:
“The only people who benefit from backtracking on climate action are the polluters, billionaires and powerful vested interests profiting from fossil fuels.”
– Mike Childs, head of policy at Friends of the Earth.
- Secret “Climate Working Group”. Newly disclosed records suggest the Trump administration quietly steered a politicised and rushed report aimed at undermining the US Environmental Protection Agency’s (EPA) Endangerment Finding, the legal basis for regulating greenhouse gas pollution, as published by UCS.
- How Trump is impacting global climate action. Amnesty International outlines five ways President Trump is undermining global climate action, and how climate obstruction can be driven not only by online actors, but also by policy choices that reshape the information environment.
Guides & resources
- Spotting climate disinformation. Euronews breaks down common tactics behind climate disinformation and shares practical tips to verify what you see online.
- Pause and check. Yale Climate Connections shares simple “pause and check” prompts to avoid spreading misleading climate content.
- Training opportunity: Strategic Climate Change Communication (Yale Online) is a 14-week online certificate helping professionals turn climate science into effective messaging. Applications open on 9 February 2026 (and close on 9 March 2026).
🔎 This edition’s Climate Clarity Corner brings together a few selected items on climate disinformation, building on the recently updated Climate Clarity Hub.
Brussels Corner
The Commission’s decision on X’s fine is published, but not by the Commission
The United States House Judiciary Committee has released the European Commission’s full decision behind its €120 million fine on X, before the EU had made the document publicly available.
In early January, we submitted an official access-to-documents request to obtain the Commission’s full decision behind imposing a fine against X. The Commission rejected our request, explaining it was still preparing a redacted public version, pending consultations with the concerned third party.
Only a few days later, the House Judiciary GOP published the decision with confidential parts already redacted. In a thread on X, the House Judiciary Committee framed the decision as evidence that the Commission is punishing innovation and using the Digital Services Act as a censorship instrument to target the platform for allowing free speech. This interpretation is in line with a broader disinformation campaign against the Digital Services Act (DSA), which has included hearings and reports portraying Europe as a foreign censorship threat. Moreover, the US imposed travel bans affecting European disinformation researchers and civil society – Clare Melford (GDI), Imran Ahmed (CCDH), Josephine Ballon and Ana-Lena von Hodenberg (HateAid), as well as former European Commissioner Thierry Breton. The House Judiciary Committee did not specify whether the “censorship” they referred to was the Commission cracking down on deceptive use of the blue check mark, penalising the lack of transparency of X’s ads repository, or punishing the illegal blocking of researcher access to public data.
Against the backdrop of restrictions against civil society members working to defend digital rights, it is especially important to note that much of the evidence used by the Commission to establish X’s alleged non-compliance drew heavily on civil society contributions. This once again underlines the essential role European civil society plays in DSA enforcement, and the urgent need for stable, long-term and independent funding to support that work.
AI and copyright
On 28 January, the European Parliament’s Legal Affairs Committee (JURI) adopted a non-legislative report “Copyright and generative artificial intelligence – opportunities and challenges”, aimed at addressing the relationship between AI development and copyright law. The report is led by rapporteur Axel Voss (EPP) and ostensibly seeks to strike a balance between technological innovation and the protection of rights holders.
According to reporting by Contexte, key compromises on the report have been reached after prolonged negotiations. The report is closely watched by stakeholders, as it is currently the main parliamentary discussion on AI and copyright, ahead of the European Commission’s planned evaluation of the Copyright Directive, expected in mid-2026.
The final report appears to endorse an “opt-out” mechanism allowing copyright holders and news outlets to exclude their content from training data. The principle the report follows is that technological developments must respect existing law, while existing law should not slow down innovation. Nevertheless, this approach seems to place more of the burden on rightsholders than on AI providers.
While the report is non-binding, it is expected to influence future legislative initiatives. A final vote on the report in the European Parliament’s plenary is planned in March.
European Democracy Shield Draft Report
The European Parliament’s Special Committee on the European Democracy Shield (EUDS) has published its draft report setting out the Committee’s findings and recommendations.
The report’s rapporteur (the Parliamentarian in charge), Thomas Tobé, presented the draft at an EUDS Committee meeting on 29 January. Overall, the report is more ambitious than the Commission’s communication released in November last year.
On one of the key proposals, the Centre for democratic resilience, the report points out that the Commission’s proposal lacks operational detail and calls on the Commission to give the Centre a clear mandate. During the meeting, Mr Tobé also called on the Commission to consider a financing mechanism for the Centre that would involve the major online platforms. Moreover, the draft report emphasises that the AgoraEU programme under the next Multiannual Financial Framework (MFF) should provide for funding that meets the needs of civil society organisations.
The deadline for tabling amendments to this report is 11 February. The Committee is expected to have a vote on 23 June, with a plenary vote so far penciled in for July.
Reading & resources
- UN chief: unity vs disinfo: UN Secretary-General António Guterres has recently warned that the world is entering an era of “chaos” marked by conflict, inequality, climate breakdown and weakening respect for international law. In outlining his priorities for 2026, he highlighted the need for “unity in an age of division”, explicitly calling out disinformation as a driver of polarisation and exclusion.
- On DSA systemic risks. ISD published “Bridging Policy and Research under the DSA: A Structured Research Agenda for Identifying Systemic Risks”, outlining how researchers and regulators can better assess systemic risks under the DSA, including risks linked to asymmetric amplification of political content.
- UK democracy, trust & Ukraine disinfo. A new Resilience & Reconstruction report argues disinformation in the UK works less by persuading with single falsehoods than by exploiting deep distrust in politics, media and institutions, weakening public resilience and shaping attitudes to Ukraine and Russia.
- US authoritarian drift. In a New York Times Opinion column, journalist and author M. Gessen argues that one year into Trump’s second term the US is rapidly normalising authoritarian practices (detentions, intimidation, pressure on media and elections), urging civil society to act now while democratic space still remains.
- Illiberal playbook. An analysis by the AUTHLIB project links legal, political and discursive attacks on civil society to a wider illiberal strategy in Europe, urging stronger EU enforcement and more resilient funding to protect democratic checks and balances.
- Finland’s media literacy model. Finland teaches media literacy from the age of three as a national resilience tool against propaganda, and is now adding AI literacy to help students spot synthetic images and disinformation.
- Trump censorship tracker. A new public dashboard by the American Sunlight Project cataloguing alleged censorship actions linked to Trump’s second-term administration. Updated weekly, it tracks incidents by category, geography and actors, useful as a monitoring tool, though framed from an advocacy perspective.
- The Baltic Flank (new Substack). Baltic reporters have launched a new English-language investigative newsletter tracking Russia’s hybrid warfare, from espionage and sabotage to propaganda and sanctions evasion, with a frontline perspective from NATO’s eastern flank.
- M.A.T.T.E.R. – a newsroom meme checklist. A practical editorial checklist by Marcus Bösch (understanding-tiktok.com) to help journalists assess whether to cover or embed a meme, mapping its meaning, emotional “vibe”, format/platform dynamics, lifecycle, ethical risks (including manipulation/harassment), and real-world relevance before its amplification.
The latest from EU DisinfoLab
- Building a common operational picture of FIMI. Doppelganger, Overload, Storm-1516, Undercut… Lost in translation when it comes to Russian information manipulation campaigns targeting the EU, and what can actually be done about them? With a small group of partners, we felt it was time to structure our collective knowledge for the community. Building on the Information Manipulation Set (IMS) concept, we collectively applied this framework to four well-documented operations.
- A practical toolkit for detecting, assessing, and responding to Foreign Information Manipulation and Interference (FIMI). We have developed three monitoring templates to identify FIMI incidents, assess related systemic infringements, and document responses and their impact, providing a basis for follow-up actions where violations are identified. They are ready to be used: 1. Incident Qualification Template; 2. (Systemic) Violations Template; 3. Countermeasures Template. Have a look at the practical example we provide and a checklist on how to use the templates.
- Your report is important to us – please overcome some unnecessary barriers to submit it. This blog post builds on our previous work on malicious semi-compliance, taking a deeper dive into the lengthy and burdensome reporting mechanisms platforms have designed to discourage users from reporting illegal content, despite the DSA’s requirement for user-friendly notice mechanisms.
- Climate Clarity, AI Disinfo, Conflict & Crisis, and Doppelganger. For this newsletter edition, we have refreshed and updated our Hubs – four dedicated resource spaces designed to support our community with practical tools, research, and guidance. Explore them!
This week’s recommended read
This week’s recommended read is brought to you by Katrina Luize Asmane, Policy intern at EU DisinfoLab:
I have just finished reading “The Hour of the Predator” by Giuliano da Empoli, a striking and unsettling exploration of today’s global order. Through a series of encounters with autocrats and tech billionaires, da Empoli offers a reflection on how cynical scheming and force are increasingly determining international relations.
In what he calls “the hour of the predator,” the author suggests we are witnessing the collapse of the belief that rules and norms can restrain the struggle for power. As he writes, this moment is “essentially just a return to normality,” implying that the true anomaly was the short period when we imagined that the “bloody quest for power” could be curbed through systems of law and shared values.
“The Hour of the Predator” is a compelling and thought-provoking read for anyone trying to understand the forces reshaping international politics today.
👀 Spotted: EU DisinfoLab
- On 21 January, our Research Manager, Maria Giovanna Sessa gave a lecture at the Winter School on EU Policy-Making “Analysing Tech Policies in Today’s EU” at the VUB Brussels School of Governance. The session focused on the overlaps between identity-based disinformation and FIMI, with hands-on recommendations on policy enforcement to tackle the problem.
- On 26 January, our Executive Director Alexandre Alaphilippe was interviewed by Portuguese outlet, Gerador. He discussed several key issues, including the evolving disinformation ecosystem across the EU Member States, the role of platforms in amplifying and monetising false content, the limits of media literacy, the impact of foreign interference, and the growing risk posed by AI-generated content and deepfakes.
- On 2 February, Ana Romero delivered a lecture at the Universidad Autónoma de Madrid on “Climate Change in Southern Europe: Building Transversal Skills for Understanding and Action”. The session explored the complexities of environmental disinformation, emphasising the balance between freedom of expression and preventing harm, as well as strategies like prebunking and debunking to combat misleading narratives.
- Brussels community meetups. We regularly get together in Brussels for informal community meetups: good conversations, shared ideas, and the chance to put faces to names. If you’re interested in joining one of the next meetups, just reply to this email and let us know.
Events & announcements
- Ongoing until June: The Cyber for Good Media programme is running with the mission to protect and better equip journalists against interference and manipulation in the digital space, with a focus on OSINT and cybersecurity.
- 4 February: Spotlight Germany: FIMI Operations and Sabotage is a live session to learn how Russia targets Germany’s information environment, how FIMI operations work and are amplified and what resilience requires in practice.
- 5 February: ATHENA project will host a webinar “From Promise to Practice: How Online Advertising Funds Misinformation Before and After the EU Code of Practice”.
- 13 February: ReMeD Final Conference (Brussels) will present findings from the EU-funded project on media resilience and democracy in the digital age.
- 16 February: The next EDMO BELUX Lunch Lecture: “From self to co-regulation in the EU’s approach to disinformation: The framing power of Big Tech business lobbies” will take place online.
- 16-17 February: The DSA and Platform Regulation Conference at the Amsterdam Law School will reflect on the DSA and European platform regulation, providing an opportunity to discuss its broader legal and political context, through the overall theme of platform governance and democracy.
- 25 February: This year’s Digital Platforms Summit 2026 will examine how the Digital Markets Act (DMA) is reshaping online markets and enforcement, while looking ahead to the upcoming Digital Fairness Act (DFA). The event will explore platform governance, consumer and child protection, dark patterns, interoperability, and the future of EU digital regulation, alongside new research findings from CERRE.
- 8-10 April: The Cambridge Disinformation Summit is expected to gather the world’s leading scholars, professionals, and policy-makers to explore interventions on systemic risks from disinformation.
- 15–18 June: Disinformation Summer Institute 2026 (Bainbridge Island, near Seattle): A 4-day in-person institute will bring together early-career researchers and senior experts for lectures, panels and discussions on studying and countering disinformation.
- 7–8 September: EDMO BELUX 2.0 final conference ”Countering Disinformation, Raising Democratic Resilience” will be organised in Brussels.
- 6–8 October: EU DisinfoLab Annual Conference. #Disinfo2026 will happen in Vilnius, Lithuania. Save the date. The call for proposals is already open!
- Other initiatives:
- The Data Tank is inviting small and medium media and fact-checking organisations to join a new action research project aimed at building collective leverage over Big GenAI, to protect media sustainability and information integrity across Europe.
- Call for applications (deadline: 15 February 2026): Disinformation Summer Institute 2026 . Applications are open for the second annual DSI, inviting advanced PhDs and early-career researchers to join an interdisciplinary programme on disinformation research and response.
- Call for contributions (deadline: 8 March 2026): EDMO BELUX 2.0 Final Conference. EDMO BELUX 2.0 invites proposals for papers/studies, themed panels and hands-on workshops on fact-checking, media literacy and disinformation research, with a focus on Belgium/Luxembourg and EU policy implementation.
- Call for papers (deadline: 15 September 2026): The Journal of Marketing Management invites submissions examining how platform economies, ad tech, recommender systems and creator monetisation shape the spread of disinformation, and what interventions could strengthen societal resilience.
One thing we loved
After our daily scrolling through platforms, it’s refreshing to come across a post like this one.
Inês Narciso flagged a compelling video by TikTok news creator Dylan Page (NEWSDADDY), who explains why he took a break after his content was repeatedly demonetised and unfairly removed.

According to Inês, what makes the piece stand out isn’t outrage or conspiratorial framing, but clarity. Dylan offers a rational, well-researched account of how content moderation (and appeals) can fail at scale. Inês highlights three key takeaways:
- platforms often buckle under speed and volume rather than censoring with intent;
- AI still struggles to moderate contextually at scale (especially when appeal feedback loops don’t work);
- creators sharing experiences can help reveal systemic patterns that remain invisible without better transparency and accountability under the EU’s DSA.
Jobs
- OpenAI is looking for a Global safety response operations analyst. Open until filled.
- Alice (ActiveFence) is offering several positions; scroll their page to view all open roles.
- NewsGuard is looking for a full-time Staff Reporter, a Politics Reporter and an Editorial intern.
- Moonshot is looking for an OSINT Analyst.
- The Centre for Information Resilience has opened its talent pool for an OSINT investigator (contractor, Russian/Ukrainian speaker).
- Follow the Money (FTM) is looking for a reporter to investigate anti-democratic influences.
- The European Commission’s Joint Research Centre (JRC) is hiring a scientific officer on the topic of citizen participation.
Did you find a job thanks to the listing in this newsletter? We’d love to know – please drop us a message!
💡 Tip of the week
Bypass X restrictions? Are you part of the crowd that left X for good reasons, while colleagues, friends, and family still share posts from X that you can’t properly access because you’re not logged in to the site? Here is a tip that might be useful: xcancel.com, a bypass provided by the same people who created Nitter. By replacing x.com with xcancel.com in any URL, you can access a post and read the replies without needing an X account.
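For the tinkerers among you, the host swap described above is easy to automate. Here is a minimal sketch in Python that rewrites x.com (and old twitter.com) links to their xcancel.com equivalents while leaving other URLs untouched; the helper name and example link are our own illustration, not an official tool.

```python
# Minimal sketch: rewrite an X/Twitter link to its xcancel.com mirror,
# which serves the same post path without requiring a login.
from urllib.parse import urlsplit, urlunsplit

X_HOSTS = {"x.com", "www.x.com", "twitter.com", "www.twitter.com"}

def to_xcancel(url: str) -> str:
    """Swap the host of an X/Twitter URL for xcancel.com, keeping the path."""
    parts = urlsplit(url)
    if parts.netloc.lower() in X_HOSTS:
        # Only the host changes; path, query, and fragment are preserved.
        parts = parts._replace(netloc="xcancel.com")
    return urlunsplit(parts)

# Hypothetical example link, for illustration only:
print(to_xcancel("https://x.com/someaccount/status/123456789"))
# → https://xcancel.com/someaccount/status/123456789
```

Non-X URLs pass through unchanged, so the function is safe to run over a whole list of links.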
Have something to share – an event, job opening, publication? Send your suggestions via the “get in touch” form below, and we’ll consider them for the next edition of Disinfo Update.




