The EU’s Digital Services Act takes on “The Algorithm”



The governance of online platforms is at the forefront of policy regulation due to the rapid rise in illegal content, hate speech, and cyberbullying online. The European Union (EU), a leader in this space, kickstarted formal regulation of online content moderation via the Digital Services Act (DSA), approved by the European Council on Oct. 4, 2022. The act has implications for how online content is moderated, algorithmic transparency, the spread of misinformation, and the role of online intermediaries. Since more than 95% of Europe’s population has access to the internet, this policy has the potential to shape the future behavior of technology companies (and the internet) not only in the EU but around the world.

The DSA aims to update the intermediary liability provisions, reduce illegal content, limit hate speech, and improve algorithmic transparency around recommended content and targeted ads. The policy stems from the need to modernize Europe’s E-Commerce Directive (2000) as well as to harmonize the 27 national laws currently in place to moderate online content. This harmonization is critical to reducing compliance costs for the more than 10,000 online platforms operating across Europe, 90% of which are small and medium-sized enterprises.

Given this range of objectives, the DSA has rightly adopted a risk-based approach rather than a rules-based one, which would have been overly prescriptive, disproportionately affected smaller players, and caused implementation issues.

The risk-based approach guides policymakers to distribute their limited resources according to the risk a market actor poses. The DSA applies this approach to online platforms by tying compliance obligations proportionately to the role, type, and size of the risks such platforms generate. This makes the DSA ‘asymmetric by design,’ dividing online actors into four categories: intermediary services, hosting services, online platforms, and very large online platforms (VLOPs). Their cumulative obligations increase based on the category they fall into. Most affected companies had until Jan. 1, 2024 to follow the regulations, while VLOPs such as Google, Meta, and Amazon had to comply within four months of implementation. This follows the core assumption of risk-based regulation: the bigger an online platform is, the greater its impact, and therefore the faster it needs to comply to reduce the high risks it poses to the wider community. Companies that do not comply face penalties of up to 6% of their global annual revenue.
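To make the tiering concrete, here is a minimal sketch of how a platform might be bucketed into the DSA’s four categories and how the 6% cap on fines would be computed. The 45-million-user threshold reflects the DSA’s definition of a VLOP; the simplified classification logic, class names, and example figures are illustrative assumptions, not the regulation’s text.

```python
# Illustrative sketch: simplified bucketing into the DSA's four tiers and the
# 6% global-revenue cap on penalties. Only the 45M VLOP threshold is from the DSA;
# everything else here is a hypothetical simplification.
from dataclasses import dataclass

DSA_TIERS = ["intermediary service", "hosting service", "online platform",
             "very large online platform (VLOP)"]

VLOP_THRESHOLD = 45_000_000  # average monthly active EU users


@dataclass
class Platform:
    name: str
    hosts_content: bool            # stores information on behalf of users
    publishes_to_public: bool      # disseminates that information publicly
    monthly_eu_users: int
    global_annual_revenue_eur: float


def classify(p: Platform) -> str:
    """Return the DSA tier; obligations are cumulative up the tiers."""
    if not p.hosts_content:
        return DSA_TIERS[0]
    if not p.publishes_to_public:
        return DSA_TIERS[1]
    if p.monthly_eu_users < VLOP_THRESHOLD:
        return DSA_TIERS[2]
    return DSA_TIERS[3]


def max_fine(p: Platform) -> float:
    """Penalties for non-compliance are capped at 6% of global annual revenue."""
    return 0.06 * p.global_annual_revenue_eur


example = Platform("ExampleTube", True, True, 60_000_000, 10e9)
print(classify(example))                 # -> very large online platform (VLOP)
print(f"{max_fine(example):,.0f} EUR")   # -> 600,000,000 EUR
```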

The key aim of the DSA is to update the E-Commerce Directive, in place since 2000 (Chapter 2), with the ethos that “what is illegal offline should be illegal online.” It largely preserves the limited liability provision, i.e., providers cannot be held liable for content on their platform unless they know it is illegal. It updates this provision by requiring mechanisms to flag and remove illegal content expeditiously. Part of this is the role of “trusted flaggers”: experts at spotting illegal content who are independent of the company itself and whose notices are handled with priority. While the DSA does not set a timeline for how quickly content needs to be removed, companies need nimble processes in place to act the moment content is flagged (see the sketch below). Online marketplaces also need to provide mechanisms under the “know your business customer” principle to track down sellers of illegal goods, and they may not use “dark patterns,” i.e., deceptive interface designs that trick users into inadvertently sharing their personal data.
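As a rough illustration of such a notice-and-action process, the sketch below queues incoming flags and handles trusted-flagger notices first. The class and field names are hypothetical; the DSA prescribes the outcome (expeditious, prioritized handling), not this particular design.

```python
# Hypothetical notice-and-action queue: trusted-flagger notices are reviewed first.
import heapq
from dataclasses import dataclass, field
from itertools import count

_order = count()  # tie-breaker so equal-priority notices stay in arrival order


@dataclass(order=True)
class Notice:
    priority: int                                   # 0 = trusted flagger, 1 = ordinary user
    seq: int
    content_id: str = field(compare=False, default="")
    reason: str = field(compare=False, default="")


class NoticeQueue:
    def __init__(self) -> None:
        self._heap: list[Notice] = []

    def submit(self, content_id: str, reason: str, trusted_flagger: bool) -> None:
        priority = 0 if trusted_flagger else 1
        heapq.heappush(self._heap, Notice(priority, next(_order), content_id, reason))

    def next_for_review(self) -> Notice | None:
        return heapq.heappop(self._heap) if self._heap else None


q = NoticeQueue()
q.submit("post-1", "counterfeit goods", trusted_flagger=False)
q.submit("post-2", "illegal hate speech", trusted_flagger=True)
print(q.next_for_review().content_id)  # -> post-2 (trusted-flagger notice first)
```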

On the other hand, the DSA demands greater transparency around the content removal decisions platforms make. This includes building a dispute resolution mechanism through which people can contest decisions, and supplying relevant information such as whether the content was removed in response to a notice or voluntarily, which policy guidelines the content violated, and the complaint-handling mechanism available to the user. Critically, users must also be told whether automated or AI systems were used in flagging the content. With the industry trend of moving from human to AI moderators, this level of transparency is key to maintaining freedom of speech while addressing the issue of illegal content.
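A minimal sketch of what such a removal-decision disclosure might look like is below. The record fields mirror the items listed above (notice vs. voluntary removal, policy violated, appeal channel, use of automation), but the field names and structure are illustrative assumptions, not a schema prescribed by the DSA.

```python
# Hypothetical shape of a removal-decision notice shown to an affected user.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class RemovalNotice:
    content_id: str
    removed_at: datetime
    triggered_by_user_notice: bool   # removed in response to a flag, or voluntarily
    policy_violated: str             # the guideline or law the content breached
    automated_detection_used: bool   # whether AI/automation flagged the content
    appeal_channel: str              # the complaint/dispute mechanism available


notice = RemovalNotice(
    content_id="vid-42",
    removed_at=datetime.now(timezone.utc),
    triggered_by_user_notice=True,
    policy_violated="Illegal hate speech (platform policy 4.2)",
    automated_detection_used=True,
    appeal_channel="https://example.com/appeals",
)
print(notice)
```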

Moreover, shielding platforms from legal liability is intended to make them more proactive in moderating content on their portals. It should allow smaller platforms to adopt stricter moderation practices as their unique selling proposition in an overcrowded market without worrying about legal ramifications.

On top of moderating illegal content, the DSA aims to improve algorithmic transparency around how content is recommended and how advertisements are targeted to individuals. EU member states will need to be informed of the inner workings of platforms’ recommendation systems, while users must be given a brief explanation of why a particular ad is being shown to them, including an ad label and information about the ad buyer. Users should also be able to opt for non-targeted ads if they wish, while profiling based on sensitive user information such as ethnicity or sexual orientation is prohibited. Crucially, targeted advertisements aimed at children are banned. Another major requirement is a biannual report from each platform on its content moderation efforts. Given that these will be public documents, they should generate insights into how platforms operate not just in Europe but around the world, leading to wider public discourse on this urgent topic.
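A similar sketch for the ad-transparency requirements: each ad shown to a user would carry a label, the buyer’s identity, and the main targeting parameters, with checks that sensitive attributes are not used for profiling and that minors are not targeted. Again, the field names and checks are illustrative assumptions rather than the regulation’s wording.

```python
# Illustrative ad-transparency record with simple compliance checks.
from dataclasses import dataclass

SENSITIVE_ATTRIBUTES = {"ethnicity", "religion", "sexual_orientation", "health"}


@dataclass
class AdDisclosure:
    ad_id: str
    buyer: str                       # who paid for the ad
    is_labelled_as_ad: bool          # visible "ad" label shown to the user
    targeting_parameters: set[str]   # main reasons the user is seeing this ad
    targets_minors: bool


def compliance_issues(ad: AdDisclosure) -> list[str]:
    """Return a list of problems under this simplified reading of the rules."""
    issues = []
    if not ad.is_labelled_as_ad:
        issues.append("missing ad label")
    if ad.targeting_parameters & SENSITIVE_ATTRIBUTES:
        issues.append("profiling on sensitive attributes is prohibited")
    if ad.targets_minors:
        issues.append("targeted advertising to children is banned")
    return issues


ad = AdDisclosure("ad-7", "Acme GmbH", True, {"language", "approximate_location"}, False)
print(compliance_issues(ad))  # -> [] means no issues under this simplified check
```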

The DSA does not say what content social media platforms can and cannot publish. Rather, it forces platforms to create processes, make content moderation clear to users, and take users’ safety concerns and the protection of fundamental rights seriously. While this is sound in theory, implementing the act consistently across all member states remains a concern. Chapter 4 of the DSA does try to address this, but ensuring cooperation across different national governments is never guaranteed. Moreover, certain parts of the act, such as decisions on illegal content removal, can be too rigid and prescriptive, with a heavy focus on individual content decisions and vague risk assessments. Technology companies have criticized the burden the rules will place on their platforms and the general lack of clarity in the DSA. Other organizations have claimed that this kind of prescriptive regulation will stifle innovation and freedom of speech.

While such concerns exist, the DSA is a long-overdue step toward moderating content online and securing greater algorithmic and ad transparency from the companies that shape our digital lives. It has been called the “gold standard” of technology regulation and could influence the way the United States and other parts of the world approach such policies. All companies will have to follow the rules in Europe, which could affect how they operate in other jurisdictions.

The internet should no longer run like the Wild West. Major tech platforms need to be more transparent and properly regulated to prevent the spread of misinformation and illegal content. Ultimately, with higher levels of transparency on ad targeting and an inside look into the black box algorithms of online platforms, the DSA is a correct step for not just protecting individuals online but also building a safer and more inclusive internet community. 
