by Katrīna Luīze Ašmane & Joe McNamee, EU DisinfoLab
The Digital Services Act (DSA), a flagship piece of EU legislation that created and extended transparency and accountability rules for online intermediaries such as platforms, promised people in the EU a safer and more trustworthy experience online. In reality, the DSA has so far fallen short of this promise, and saying that big tech platforms have become significantly more transparent over the past three years would be an overstatement. Instead, a consistent pattern can be observed: platforms aim for a legal grey zone, maintaining the appearance of compliance while creating overhead for regulators and NGOs, a behaviour that could be described as “malicious semi-compliance”. To examine this pattern systematically, this blog post analyses three key examples following a simple structure: what platforms should have done under the DSA, what they actually did, and what the consequences of their semi-compliance are.
Non-Profiling Requirements
Firstly, a recent case against Meta brought before the Dutch courts by the digital human rights organisation “Bits of Freedom” highlighted how Meta has not fulfilled its obligation to provide users with a version of their feed that is not based on profiling.
Under the DSA, platforms are required to design systems that enable genuine choice and transparency, allowing users to easily change their recommendation preferences at any time (Article 27), offering at least one option not based on profiling for each recommender system (Article 38), and avoiding deceptive interface designs or “dark patterns” that manipulate user decisions (Article 25).
Although an alternative feed technically exists, Meta has made it difficult to choose it. On Instagram for Android, users have to navigate through several menus to find it, and even then, features like Direct Messages are only accessible if they switch back to the profiling-based feed. The app also reverts to Meta’s personalised feed every time it is opened, meaning users must repeatedly select the alternative if they want to avoid profiling (click here for a video). This repetitive process, combined with the obscure placement of the non-profiled feed, inevitably discourages people from choosing it and subtly pushes them toward Meta’s default, personal data-hungry experience.
As a result, Bits of Freedom sued Meta, and the Dutch court found Meta to be in breach of the DSA. The ruling compels Meta to offer Dutch users a non-profiling, chronological feed — a milestone that could shape broader DSA enforcement across Europe. Yet the court granted Meta a postponement until December 31, 2025, so compliance, too, remains delayed. The case demonstrates that civil litigation can enforce the DSA, but at the cost of immense resources for civil society, adding hugely to the overhead outlined earlier. Ultimately, Meta’s approach illustrates how big tech not only designs addictive feeds but also arguably maliciously designs its semi-compliance to preserve control, in opposition to the spirit and letter of the law.
Interestingly, while the Dutch court found a violation of the aforementioned DSA articles, the independent audits of Meta’s 2024 risk assessment, which platforms are required to carry out under Article 34 of the DSA, did not flag the non-compliance of these same design features. The audit report for Facebook found that an option for a recommender system not based on profiling was missing only for Facebook Dating, and that option was later implemented. Bizarrely, however, the same report did not consider it problematic that the non-profiling choice must be re-submitted every time the app is opened.
Notice and Action Mechanisms
A similar pattern appears in how Meta has approached its obligation to provide users with simple mechanisms for reporting illegal content, as highlighted by the Commission’s recent preliminary findings.
Article 16(1) of the DSA clearly envisions that platforms should have an easy and user-friendly way for people or organisations to report illegal content. These reporting tools should be simple to find, straightforward to use, and available entirely online.
Instead, the Commission has found that both Facebook and Instagram have made their “notice-and-action” mechanisms manipulative by design, allegedly constituting dark patterns. According to the Commission, the current mechanisms include unnecessary extra steps to report illegal content, such as child sexual abuse or terrorist content. If users can be nudged towards reporting potentially illegal content as Terms of Service violations instead, the platforms can remove it on that basis, which could be seen as a way to avoid the consequences of knowing that a user is uploading (or re-uploading) illegal or even criminal content. This issue goes further than the mis-categorisation of content: it produces incorrect, and therefore misleading, statistics about content moderation. Even worse, if criminal content is misclassified as a Terms of Service violation, the crime is not recognised as such, the criminal is never prosecuted, and victims are not protected.
Political Advertising
Thirdly, Google’s and Meta’s decision to back out of political advertising reduces accountability and creates an imbalance. While the platforms have banned political ads, that does not mean there will be no ads that are political.
The Regulation on the Transparency and Targeting of Political Advertising (TTPA) became fully applicable on October 10, 2025. A faithful implementation would have meant that everyone involved in political advertising – the companies placing the ads, the platforms hosting them, and the groups funding them – openly shared key details about their campaigns: how audiences were targeted, what data was used, how much was spent, where the ads ran, and who exactly they were meant to reach.
In a reaction that could only be described as petulant, Google and Meta announced that they would ban political ads altogether. On top of that, Google deleted its pre-existing archive of political ads in the EU.
Meta argues that fully complying with these transparency and non-profiling requirements is technically complex and burdensome. At a scrutiny session on TTPA held at the European Parliament’s Internal Market and Consumer Protection Committee (IMCO) (the Committee that was in charge of the DSA when it was adopted), a representative from Meta claimed that complying with the TTPA is exceptionally difficult, because the regulation imposes extensive limits on how political ads can be targeted, preventing advertisers from effectively reaching their intended audiences. He also argued that the additional consent requirements risk confusing users, while what Meta describes as the ambiguous definition of what qualifies as a political ad makes consistent enforcement nearly impossible.
Essentially, what Meta seems to be saying is that it cannot separate sensitive personal data from the mix it uses to target ads, and it does not want to go through the task of asking every European user for clear permission to exploit their personal data when its advertising customers seek to influence their political choices.
Nevertheless, this ban has not eliminated ads containing political messages. The Hungarian National Resistance Movement, for example, is reportedly circumventing the ban by advertising AI-generated content spreading political messages through Minecraft-themed and other animated videos. Meta’s Ad Library shows that promotion of these videos was paid for, but the amounts are not available since, technically, those are not political or issue-based ads. Similarly, Google relies on self-declaration of ads, allowing easy circumvention. Consequently, the transparency of political ads is even more limited, creating an information gap for researchers.
Conclusion
Taken together, these three examples show that platforms avoid full compliance by default, whether through allegedly deceptive interface designs or by avoiding responsibility altogether, as in the political ads case.
The Dutch case against Meta proves that enforcement is possible, but at a high cost in financial and human resources for civil society. The same pattern holds for notice-and-action mechanisms and political advertising. When illegal content is hidden behind misleading categorisation, platforms can avoid acknowledging how much illegal content they actually host. When political ads are replaced by content containing political messages and removed from dedicated archives, researchers face unnecessary obstacles, and the transparency of political advertising is further undermined.
In the end, if the DSA is to live up to its promise, defaults should not be built on dark patterns that serve platform interests, but on a realistic understanding of users’ capacity to navigate them. If the DSA is to live up to its promise, malicious semi-compliance needs to be expected and factored into the work of the European Commission and national Digital Services Coordinators. If the DSA is to live up to its promise, the cost of non-compliance has to be greater than the cost of compliance.
