More details, please

The European Commission has published reports by Facebook, Google and Twitter covering the progress made in January 2019 on their commitments to fight disinformation (most of these reports were expected, as explained in our previous newsletter of 25/02). Homework half-heartedly done: the platforms have not provided enough metrics to clearly measure the results of the activities undertaken, especially regarding the scrutiny of ad placements. The next reports will be published in March 2019, and the platforms already have a task assigned: they are expected to implement effective policies to ensure the integrity of the electoral process for the European elections due in May 2019.

Cybersecurity first

As part of the election integrity toolkit, strong cybersecurity is key to ensuring the transparent functioning of the election system. In this context, the EU cybersecurity agency, ENISA, has released an opinion on cybersecurity during electoral processes. Among its recommendations, the agency suggests that “Member States should consider introducing national legislation to tackle the challenges associated with online disinformation while protecting to the maximum extent possible the values set down in the Treaty of Lisbon and the Charter of Fundamental Rights of the EU”.
Last week, Microsoft announced it had detected attacks targeting think tanks working on democratic integrity in Europe. As part of its “Defending Democracy” programme, the company announced it will extend its “AccountGuard” programme to several EU countries in order to protect political candidates and parties, as well as think tanks and non-profits working on issues related to democracy and electoral integrity.

Moderation at all costs?

An article published in The Verge stirred emotion over the working conditions of content moderators. Major tech platforms outsource content moderation to “process executives” working under difficult management and mentally traumatizing conditions. From developing PTSD-like symptoms to even beginning to embrace the fringe viewpoints of the videos and memes they review, human moderation has serious consequences for the moderators themselves. This raises the question of how content moderation processes can combine AI and humans without producing dehumanizing outcomes. A shared framework and guidelines agreed between platforms and contractors could be a strong starting point for discussion.

Number of the week

77.8% of Europeans said they believed fake news and “politically motivated disinformation” posed a threat to the legitimacy of the European elections, according to a poll by YouGov for Avaaz.



Calendar and announcements

  • Pre-registration for the EU Disinfolab conference on May 28-29 in Brussels is open: Pre-register
  • March 4-8: the Atlantic Council is organising its Disinfoweek in Madrid, Brussels and Athens
  • March 6 @ European Parliament – David Alandete will present his new book “Fake news: La nueva arma de destrucción masiva”
  • Call for papers: Nordic network on the study of online disinformation
  • The EU project “EU algorithms” is seeking your input on emerging findings and knowledge gaps regarding the relationship between AI and disinformation.

See all past and upcoming events in our agenda