26 March 2026

by Joe McNamee, EU DisinfoLab

Introduction

Recent US attacks on the EU’s approach to harmful and illegal content online have the same relationship with truth as a drunk has to a lamppost – they are used for support rather than for illumination.

This post will give some historical context and explain why the approaches in the EU and US have similar motivations, but also important differences that contradict much of the recent criticisms and commentary.

Here’s a crazy fact – the EU’s Digital Services Act (DSA) of 2022 has its legal roots in the EU’s E-Commerce Directive (ECD) of 2000 and that, in turn, found its inspiration in Section 230 of the United States’ Communications Decency Act (CDA) which was adopted in 1996 – just a few years after the invention of the concepts and technologies behind the World Wide Web. The primary objective in all this legislation was to ensure non-liability of internet companies, thereby giving them the legal certainty necessary to innovate, grow and evolve.

The approaches of the EU and US are historically deeply intertwined.

The US Approach

The US CDA says that the “interactive computer service” (online hosting or social media company) should not be considered to be the publisher of any content on its service, when it was uploaded by a user. By contrast, if it fails to fulfil the criteria to be covered by the provision, it can be liable as the publisher of the content. A briefing from the US Congress explains, very instructively, how the platform roommates.com fell, at different moments, on either side of this threshold. The service was found to be:

  • covered by the liability protections of the US CDA when its function for allowing people to say what they were looking for in a roommate was abused by users to set discriminatory criteria, but;
  • not covered by the CDA when it designed a search tool in such a way that it led to discrimination on the basis of protected attributes. 

In short, if the online company’s service is being used by a third party for illegal activities, particularly if the company has no knowledge of this, they cannot be held liable for the actions of others. However, when they are, by negligence or design, party to the offence (such as by causing an unmitigated systemic risk), they lose their protection.

The CDA suffers from an expansive but unclear reach in terms of the companies concerned: it covers “interactive computer services” broadly, it is not applicable to all types of content and, still today, it lacks adequate settled case law from the Supreme Court. To add to the confusion, differing rules apply in certain situations, such as copyright infringement.

There are some key imbalances in the US approach – most notably that the internet companies themselves have constitutional rights, in particular under the First Amendment. This means that internet companies have “free speech” rights to restrict the free speech of their users, and freedom to delete content they do not want to host.

Terms of service are generally written in a way that makes it legally safe for online companies to delete anything they want, if they feel it is in their corporate interest to do so.

The EU’s approach: boost individual rights and clarify the liability of companies

The EU’s ECD tried to do one simple thing, namely to support online companies by giving them protection from liability when they act as neutral intermediaries. It was drafted with a conscious aim of taking the best of the logic behind the US CDA, while also learning important lessons from the early experiences with implementation of the US legislation.  

The first lesson learned by the EU was the need to be more precise about the companies being protected, giving hosting companies, technical network storage services (“caching” providers) and internet network transit services (“mere conduit”) protection from liability for illegal content of which they were not aware, and to make the rules “horizontal”, covering all types of illegal content. These principles were carried across into the DSA.

The second improvement is boosting the rights of individuals. As mentioned above, the USA protects free speech through a constitutional provision, the First Amendment. Only people and entities under US jurisdiction are directly protected by the Constitution, and this “free speech” on the part of corporations includes the “right” of corporations to limit what is said on their platforms. On the other hand, the EU protects freedom of expression as a fundamental, human right. This was explained already in 2000 in the ECD – “the removal or disabling of access has to be undertaken in the observance of the principle of freedom of expression” (recital 46). Freedom of expression is mentioned no fewer than 18 times in the DSA.

The Digital Services Act maintains the provisions of the EU ECD for the three basic types of internet company. But it also recognises that with great power comes great responsibility. As a result, it adds various due diligence obligations for very large online services (VLOPs – platforms, and VLOSEs – search engines) including risk assessments, risk mitigation, and transparency. 

What does all of this mean for illegal content in the EU?

Broadly, there are three levels of responsibility/non-responsibility for content on an intermediary’s service:

  • If the intermediary had no knowledge of the illegal content, such as illegal FIMI (foreign information manipulation and interference) provided by a third party:
    • Here the intermediary has no responsibility provided it acts “expeditiously” upon receiving a valid notice from any source. 
    • The intermediary has to act with more urgency, namely “without undue delay” in cases where it receives the notice from an entity that has received the designation of “trusted flagger”. 
    • In addition to being liable for failing to act in relation to the illegal content, the intermediary can be held liable for any damage or loss suffered as a result of its inaction.
  • If the intermediary is a very large online platform or very large online search engine and failed to adequately mitigate recognised systemic risks:
    • In this case, they are liable for failing to respect their legal obligations to adequately assess and mitigate the risks they created (broadly similar to the way roommates.com was held liable in the US).
  • If a platform breaks the law, for example, by providing commercial services such as advertising to a sanctioned Russian entity, then it is liable just as any offline publication would be in equivalent circumstances.

Which regime protects free speech better?

Liability protections for online companies reduce their incentives to unfairly block or delete online content, so both regimes are strong from that perspective.

Laws that are predictable are generally better, all other things being equal. The EU’s horizontal approach (treating all illegal content in the same way) is more predictable than the US approach, which is broadly horizontal but partly vertical (content-specific). The EU’s DSA was also recently adopted and is not under any immediate threat. In the US, section 230 of the CDA is almost constantly under threat. So, the EU seems to be clearly more settled on this point.

The EU DSA has explicit protections against arbitrary restrictions being imposed by intermediaries; no such protections exist in the US. Here again, it seems that the EU approach is more predictable for all concerned parties. 

The EU’s highest court has made landmark rulings on the EU ECD, such as Netlog/Sabam (C-360/10), Scarlet/Sabam (C-70/10) and eBay/L’Oreal (C-324/09), won by the intermediary in each case, and Google/AEPD (C-131/12), lost by the intermediary. By contrast, there have been no definitive US Supreme Court rulings on section 230 of the US CDA. Here, again, the EU appears to be in a more settled position.

Conclusion

The EU framework creates incentives to remove illegal content. For very large online platforms and search engines, it also requires assessing and then mitigating risks caused by their design and functioning. It does this with explicit regard to protection of the fundamental right to freedom of expression.

However, it also creates disincentives to remove content illegitimately, by prohibiting arbitrary restrictions. Case law of the EU’s highest court, the Court of Justice of the EU, gives intermediaries a great deal of protection if they do not remove content in cases that are not clear cut.

By providing robust liability protections, the US framework removes incentives for online intermediaries to delete content. It does so using different rules for different types of content, in different contexts, and it does not meaningfully prevent internet companies from removing users’ content, and arguably even facilitates such removal.

The rules are implemented under a mix of legal bases that sometimes cover everyone in the jurisdiction, and sometimes only citizens. 

Which is better? It isn’t for us to judge; we just hope that this post can be illuminating.