March 23, 2022

EU lawmakers call on platforms to do more in response to disinformation in Ukraine. However, current legislative discussions would have them do less.

By Diana Wallis, President of the Board of Directors of EU DisinfoLab. This column was originally published in Euractiv.

The war in Ukraine is playing out across digital services and social media platforms, with disinformation and propaganda at its core. In just a few weeks, the Spanish fact-checker Maldita.es has already listed more than 750 fact-checks produced to counter individual items of disinformation.

The EU institutions are officially calling on tech platforms to increase their efforts to tackle disinformation. On March 10, MEP Raphaël Glucksmann, chair of the Parliament's special committee on foreign interference, and nine other MEPs called for "clear rules" and "a structural approach to disinformation" in the Digital Services Act (DSA).

However, despite these declarations, EU lawmakers are failing to put forward a regulatory framework that matches their public positions. The current proposal on the table would actually give platforms an incentive to do less, not more.

A key element currently in jeopardy is access to the user redress mechanism foreseen in Article 17. This new mechanism allows users to challenge the content moderation decisions platforms take against them. Unfortunately, as now proposed in a compromise text, it would only grant access to users whose own content has been removed.

To illustrate the issue: many were recently horrified by the story of a pregnant woman in Mariupol being rushed to an ambulance after Russia bombed a maternity hospital. Russian officials and conspiracy theorists have infamously twisted this event, claiming it was staged and that the woman was an actress. While some of these claims have been removed, this disinformation around the incident continues to propagate.

In this case, the compromise means that only those perpetrating the disinformation, claiming the bombing was staged, would have the right to complain against a platform's moderation of their content. In effect, the more platforms moderate content, the more they expose themselves to challenges, which in turn gives them less incentive to moderate more content.

The compromise proposal would instead benefit the platforms and their business model, along with those abusing their services, while dangerously ignoring the needs of users who are exposed to this disinformation or would legitimately wish to counter it. The only way to counter disinformation is to allow more open access to challenge content moderation decisions under the platforms' own policies and rules, ensuring they are applied equally, whether or not the platforms take action to remove content.

If Facebook says it is labelling all Russian-state information, we need to be able to legally challenge why it failed to do so in 91% of cases. If Instagram prohibits advertisements containing claims debunked by fact-checkers, we need to understand on what basis it allowed advertisements about alleged US biolabs in Ukraine.

Since November, the Council has agreed in compromise texts to extend this possibility to challenge all platform decisions. This includes decisions not to act on content that infringes their own terms and conditions, a key request from many civil society actors who support this vision.

We do not want a regulation that relies on Commission President Von der Leyen calling Meta to take down disinformation. This is untenable, and it also gives full credence to the platforms' press releases trumpeting what they allege they are doing. What we need most of all is enhanced access to the user redress mechanism in Article 17, as proposed by the Council. Only this will grant public accountability over private decision-making.

Diana Wallis