In this article we have gathered some of the most important regulations, actions and legislative documents covering disinformation in the EU Member States and G7 countries. This page will be updated regularly.
Last updated on 24/07/2019
The G7 Rapid Response Mechanism (RRM) is an initiative to strengthen coordination across the G7 in identifying, preventing, and responding to threats to G7 democracies.
The Charlevoix Commitment on Defending Democracy from Foreign Threats is the commitment made by G7 leaders at the 2018 Charlevoix summit to strengthen the defence of their democracies against foreign threats, including foreign interference and disinformation campaigns.
A complete summary of the regulations and actions taken against disinformation on the level of the European Union can be found here.
In May 2018, the Belgian Minister for the Digital Agenda, Alexander De Croo, launched an expert group and a citizen consultation platform to tackle misinformation in the country. The expert group, composed of academics, media representatives and NGOs, produced several recommendations on the subject that were adopted by the Belgian authorities (the Prime Minister's Cabinet). The Dutch version of the recommendations is available via this link.
In January 2019, the Canadian government announced a multi-pronged effort to combat misinformation by giving $7 million to projects fighting disinformation online ahead of elections in October 2019.
The Critical Election Incident Public Protocol lays out a simple, clear, and impartial process by which Canadians will be notified of a threat to the integrity of the 2019 General Election. The government also called on social media platforms to do more to combat disinformation ahead of the elections. The move accompanies Bill C-76, which aims to compel tech companies to be more transparent about their anti-disinformation and advertising policies.
The Bills on Hate Speech adopted in June 2018 concluded that hate speech, public incitement to violence, and the spread of fake news should all be addressed in one law (only the first two are currently covered by the criminal code).
In the context of the parliamentary elections on 20 May 2019, the Danish government adopted an Action Plan to strengthen safeguards against any interference with Danish democracy and society. Moreover, the Danish authorities bolstered their efforts to get ahead of misinformation problems by re-purposing media literacy material from Sweden. To help citizens avoid falling for misinformation, the government is distributing brochures with practical tips.
In 2006, Estonia was one of the first countries to create a Computer Emergency Response Team to manage security incidents. “Baltic elves” – volunteers who monitor the internet for Russian disinformation – became active in 2015, after the Maidan events in Ukraine. Moreover, the Baltic nations have fined or suspended several media channels displaying bias. The Baltic countries also rely on a European Union task force formed in 2015 (the East StratCom Task Force) to combat Russian disinformation campaigns directed against the EU. Ahead of the 2019 European Parliamentary elections, the EU additionally set up a Rapid Alert System to warn the public of potential disinformation campaigns.
The French law aiming to prevent the manipulation of information was introduced through two bills (the Proposed Organic Law Against the Manipulation of Information, No. 772, and the Proposed Bill on the Fight Against the Manipulation of Information, No. 799). The bills cover four main points:
- The introduction of a new interlocutory proceeding enabling judges to take proportionate and necessary measures against internet service providers and hosts to stop the spread of inaccurate or misleading allegations or imputations of fact;
- The granting of new powers to the French Audiovisual Council, to be able to prevent, suspend or terminate the broadcasting of television services controlled by a foreign state in the event of an infringement of the French state’s fundamental interests;
- The introduction of an obligation upon internet hosts and service providers to allow users to bring to their attention information they believe to be fake and to alert public authorities;
- An obligation to ensure transparency in the relationship between online platform operators and the advertisers for whom they act.
In July 2019, the Assemblée nationale followed in Germany’s footsteps by adopting a law against online hate speech, which requires digital platforms to delete messages that are “manifestly unlawful on grounds of race, religion, sex, sexual orientation or disability” within 24 hours. In the event of non-compliance, the regulator may impose an administrative penalty of up to 4% of the turnover of “content accelerators”. The law still requires final adoption by the Senate in September 2019.
The Act to Improve Enforcement of the Law in Social Networks (NetzDG) covers defamation, dissemination of propaganda material, incitement to commit a serious violent offence endangering the state, public incitement to crime, incitement to hatred, and the distribution of pornography. Pursuant to Article 14(3) of the E-Commerce Directive, Member States are allowed to establish “procedures governing the removal or disabling of access to information”. The law forces online platforms to remove “obviously illegal” posts, such as hate speech, within 24 hours or risk fines of up to €50 million. It is aimed at social networks with more than 2 million members, such as Facebook and YouTube.
According to the Bill on political bots and advertising, using a bot to create 25 or more personas on social media would be punishable by up to five years in prison or fines of up to €10,000.
The Law on the Provision of Information to the Public, under which the Commission functions, allows the Radio and Television Commission of Lithuania to block media that spread war propaganda, instigate war or hatred, or incite ridicule, humiliation, discrimination, violence, or violent physical treatment of a group of people, or of a person belonging to such a group, on grounds of age, sex, sexual orientation, ethnic origin, race, nationality, citizenship, language, origin, social status, belief, convictions, views or religion. The decision to temporarily block a channel is made through the courts following an application by the Commission.
In February 2019, the Dutch government launched a public awareness campaign aimed at informing people about the spread of misinformation online. The campaign, launched months ahead of the EU Parliamentary elections, was conducted predominantly on social media. Its aim is to make Dutch voters more aware of the possible presence of disinformation and to help people recognise it.
The ‘Cybersecurity Doctrine of the Republic of Poland’ was adopted in 2015 by the National Security Bureau as a response to the increase in hybrid threats, propaganda, disinformation, and psychological influence operations by foreign states and non-state actors. The doctrine maps out tasks for state institutions, notably security agencies and the armed forces, the private sector, and NGOs. The threats stemming from cyberspace identified in the doctrine include cybercrime, i.e. “cyberviolence, destructive cyberprotests and cyber demonstrations,” attacks against telecommunications systems important for national security, and data and ID theft.
In March 2019, ahead of the European parliamentary elections, the government adopted a Bill and established a joint cybersecurity group with Spain focusing on misinformation in general and on misinformation in the context of elections. The move came after Spanish ministers accused Russia of spreading misinformation about the Catalan referendum.
In December 2018, the Russian Duma introduced an additional package of bills that would impose fines of up to 1 million rubles ($15,000) for sharing false information online.
The media regulatory framework in Slovakia is based on Article 26 of the Constitution of the Slovak Republic, which guarantees freedom of expression and the right to information. The article prohibits censorship and specifies that freedom of expression and the right to seek and disseminate information may be restricted by law only by measures necessary in a democratic society to protect the rights and freedoms of others, national security, public order, and public health and morals.
A Committee recommendation and a joint cybersecurity group working on misinformation.
The British Government, under the direction of the Home Office and the Department for Digital, Culture, Media and Sport, published the Online Harms White Paper, which puts forward plans for a new system of accountability and oversight for tech companies, moving beyond self-regulation. The proposed regulatory framework for online safety outlines clear private-sector responsibilities to keep UK users, particularly children, safer online, with the most robust action reserved for illegal content and activity. Compliance will be overseen by an independent regulator that will set clear safety standards, backed up by reporting requirements and effective enforcement powers.
There is no anti-disinformation law at the federal level; nevertheless, California has adopted a law on media literacy.