The Future of Digital Regulation: A Risk-Based Approach?
Authors: Giovanni De Gregorio and Pietro Dunn
Technologies, risks, and the law
Like many technological advances before them, digital technologies have heralded unprecedented opportunities while also giving rise to myriad challenges. The growing prevalence of a range of digital technologies over the first two decades of the 21st century has posed complex legal questions in both domestic and global contexts. As a result, the terms of certain individual fundamental rights and liberties, such as privacy and data protection, freedom of expression, and non-discrimination, have been reset vis-à-vis both public institutions and private actors.
Lawmakers and policymakers have found themselves in a predicament, having to carefully balance the prerogatives of technological innovation and market considerations with the need to mitigate all potential risks arising from the new and developing algorithmic society, especially when these threaten the protection of constitutional and fundamental values.
What is a risk society?
A “risk society,” as the term is used today, is one in which the parameter of risk plays a key role in regulation. In the context of technology, this means that the law intervenes only insofar as it is necessary to reduce the risks connected to technology, ultimately creating a framework that aspires to strike a viable and proportionate balance between the various scenarios. Risk, in this context, refers to the combination of the probability that a defined hazard will occur as a result of use of a particular technology, and the magnitude of the consequences said hazard may entail. In other words, regulatory decisions are made by lawmakers in response to the probabilistic forecasting of opportunities and/or threats, by assessing risk through a set of standardized methodologies, templates and processes.
Since regulation comes with a price tag, as compliance with the law necessarily entails increased costs for market actors, risk-based regulation aims to minimize overregulation while fostering more efficient, objective, and fair governance. To reduce such costs, risk-based regulation uses risk as a tool to determine enforcement strategies that are proportionate to a concrete hazard, calibrating law enforcement interventions on the basis of risk scores.
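To illustrate the underlying logic, and only as a rough sketch, the snippet below shows how a risk score combining probability and magnitude might be mapped to a proportionate enforcement response. The scoring function, thresholds, and tier labels are hypothetical assumptions for illustration; they are not drawn from any of the instruments discussed later in this article.

    # Illustrative sketch only: the scoring function, thresholds, and tier
    # names are hypothetical assumptions, not drawn from the GDPR, DSA, or AIA.
    def risk_score(probability: float, magnitude: float) -> float:
        """Combine the likelihood of a hazard (0-1) with the severity of its
        consequences (0-1), following the classic definition of risk as
        probability times magnitude."""
        return probability * magnitude

    def enforcement_tier(score: float) -> str:
        """Map a risk score to a proportionate regulatory response."""
        if score >= 0.75:
            return "prohibition or prior authorization"
        if score >= 0.40:
            return "binding obligations and audits"
        if score >= 0.10:
            return "transparency and reporting duties"
        return "self-regulation and codes of conduct"

    # Example: a moderately likely hazard with severe consequences.
    print(enforcement_tier(risk_score(probability=0.5, magnitude=0.9)))
    # -> "binding obligations and audits"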
The European risk-based approach
Risk-based regulation in Europe initially developed as a response to the risks posed to the environment and to human health and safety by new technologies and/or industries. Only subsequently did it extend its scope of action to encompass a wider range of fields. Most notably, since the launch of the Digital Single Market Strategy, the European Union has increasingly relied on this legislative approach to address digital technologies. This is evident, in particular, in recent legislative developments in the fields of data, online content, and artificial intelligence.
Three European legal instruments: GDPR, DSA and AIA
The General Data Protection Regulation (GDPR) features a regulatory system based on accountability, whereby data controllers are required to demonstrate that their data processing activities comply with the rules stipulated by the Regulation. The GDPR adopts a “bottom-up” approach, since the burden is placed on data controllers to assess the risks that their processing activities pose to data subjects’ privacy and data protection rights, and to adopt and enforce “appropriate technical and organizational measures to ensure and to be able to demonstrate that processing is performed in accordance with [the] Regulation.” In contrast to a traditional top-down legislative arrangement, the GDPR, predicated on placing responsibility on the regulated, shifts towards a more collaborative architecture in which the governed implement appropriate risk management strategies to avoid liability.
The recently adopted Digital Services Act (DSA) also features a risk-based approach, especially in the new rules it introduces with respect to content-moderation practices. Most notably, the DSA specifies four categories of intermediary service providers (all providers; hosting providers; online platforms; and very large online platforms, VLOPs, together with very large online search engines, VLOSEs), each subject to different due diligence obligations in order of ascending burden. Through this asymmetric approach, designed to ensure “a transparent and safe online environment,” the DSA is easily identifiable as a risk-based regulation. However, unlike the GDPR, which, following a bottom-up perspective, delegates the tasks of risk assessment and mitigation entirely to the data controller and/or processor, the DSA, by identifying four general risk layers, significantly narrows the margin of discretion left to the targets of regulation (in this case, the providers of intermediary services). At the same time, some provisions of the DSA still grant providers a fundamental role in the ongoing development of appropriate strategies to reduce and mitigate the negative side-effects of their services. This is exemplified foremost by the requirement that VLOPs and VLOSEs conduct periodic assessments of the “systemic risks in the [European] Union stemming from the design or functioning of their service and their related systems” and “put in place reasonable, proportionate and effective mitigation measures, tailored to the specific systemic risks identified.” The DSA may thus be defined as hybrid: although its overall structure follows a categorization of risk defined on a top-down basis directly by the European lawmakers, it is complemented by important bottom-up features.
Within the Artificial Intelligence Act (AIA), the shift from a bottom-up to a top-down model is even more evident. As in the case of the DSA, the proposal identifies four risk categories applicable to AI systems: unacceptable risk, high risk, limited risk, and minimal risk. Like the DSA, the AIA stipulates increasingly burdensome requirements depending on the risk level assigned to each AI system; these requirements fall not only on the systems themselves, but also on their providers and users, who are subject to certain duties and obligations. This is a fundamentally top-down proposal: for instance, an AI system that engenders an unacceptable risk to human rights will be prohibited altogether under the AIA, whereas minimal-risk systems are governed by a mechanism of self-regulation based on the adoption of codes of conduct (encouraged by the European Commission).
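The tiering described above can be summarized schematically. In the simplified sketch below, only the consequences attached to the two extremes (prohibition for unacceptable risk, voluntary codes of conduct for minimal risk) are taken from the text; the characterization of the intermediate tiers is a general assumption rather than a restatement of specific provisions of the proposal.

    # Simplified, illustrative mapping of the AIA's four risk tiers to their
    # regulatory consequences; the middle tiers are characterized in general
    # terms as an assumption, not as precise provisions of the proposal.
    AIA_TIERS = {
        "unacceptable risk": "prohibited outright",
        "high risk": "subject to the most demanding mandatory requirements",
        "limited risk": "subject to lighter, transparency-oriented duties",
        "minimal risk": "self-regulation via codes of conduct encouraged by the Commission",
    }

    def obligations_for(tier: str) -> str:
        """Look up the regulatory consequence attached to a given risk tier."""
        return AIA_TIERS[tier]

    print(obligations_for("unacceptable risk"))  # -> "prohibited outright"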
Notwithstanding their differences, all three of these legal instruments – the GDPR, DSA, and AIA – share common features. Together, they signal that the European Union has entered a new phase in the history of digital policies. Specifically, they showcase the evolving digital constitutionalism approach of the EU. All three, built around the organizing principle of risk, aim to strike a balance between the economics-oriented emphasis on innovation and the creation of an internationally competitive digital single market, on the one hand, and the interest in safeguarding democratic values, including individual rights and freedoms, on the other. This balancing of the various interests and values at stake through the lens of risk is intrinsically constitutional. It reveals the shift that has characterized the approach of the EU institutions (starting from the Court of Justice) with respect to digital technologies: from a purely liberal perspective, inspired by the US, to a more proactive one, driven by a more precautionary approach to the protection of human rights.
Governance of digital technologies in the US
When addressing the regulation of digital technologies and, specifically, the mitigation of digital risks, the US perspective is an essential viewpoint to take into account, both because of the inherent transnationality of the Internet and because of the worldwide dominance of US-based IT companies.
Over the past decades, the risk-based approach to public governance and regulation has increasingly gathered momentum not only within the EU but also in the US. However, while the US has adopted risk-based regulation to confront public health challenges and environmental protection, unlike the EU it has not applied risk-based regulation to digital technologies. This difference is relatively recent: at the turn of the millennium, liberal policies on digital technologies were adopted on both sides of the Atlantic, focusing primarily on promoting innovation and the technological market. It was only in the following decades that the approaches on the respective continents evolved in different directions. While European institutions and courts have become increasingly aware of and concerned with the risks arising in the developing digital landscape, consequently moving towards an interventionist approach that has set the basis for the currently evolving digital constitutionalism in Europe, the US approach has remained committed to the view of digital technologies as enablers of fundamental rights (notably, freedom of expression) rather than as sources of risk.
Nevertheless, US lawmakers and courts, including the US Supreme Court, have not been oblivious to the risks posed by digital technologies. In recent years, some of the Supreme Court justices have begun to highlight how a reframing of US Internet case-law may become necessary in the future. In particular, the cases Gonzalez v. Google LLC and Twitter, Inc. v. Taamneh will likely provide an opportunity to assess whether the constitutional protection of social media spaces finds its limits when it comes to terrorist content.
Despite the increased attention to the risks posed by digital technologies, a radical change in the US approach to digital regulation, especially a move towards a risk-based regulatory strategy such as that of the EU, is unlikely in the short term. The limited success of attempts to bridge the divide, as shown in various court cases, suggests that the EU’s developing risk-based digital constitutionalism will for now remain at odds with its counterpart on the other side of the Atlantic.
Conclusions
Digital technologies have introduced critical challenges for regulators across the world, because of the difficulty of mediating between the important economic and societal advantages they present and the threats they inherently entail. To strike this balance, different jurisdictions have adopted rather different legal strategies.
Within the EU, recent regulatory responses, such as the GDPR, the DSA, and the AIA proposal, have turned increasingly towards a strategy that seeks to create a balanced and proportionate regime by leveraging risk. By attaching a legal regime to specific risk conditions, the EU aims to minimize the collateral damage of digital technologies while limiting the market burdens caused by protective regulation. A common thread connects these three legislative initiatives: namely, striking a balance between the need to foster fundamental rights and freedoms, as well as constitutional values, and the need to protect economic freedom and innovation. From a constitutional standpoint, the parameter of risk quantitatively expresses the aspiration to reach an “optimal” and proportionate balance between the various interests at stake.
The European way is but one possible approach towards digital governance. The US model, still dominated by a liberal ideology, is indicative of a perspective very different from risk-based regulation. As risk regulation continues to evolve in the digital age, new models are likely to enter the stage. It remains to be seen how the innovative risk-based approach of the EU will be adopted by, or inspire, jurisdictions around the globe as they take on the challenges of the digital society.
The opinions expressed in this text are solely that of the author/s and do not necessarily reflect the views of the Israel Public Policy Institute (IPPI) and/or its partners.
"The basis for democratic discourse in the age of social media is skewed"
Disinfo Talks is an interview series with experts that tackle the challenge of disinformation through different prisms. Our talks…
"Deescalating polarization will contribute to diminishing the problem of misinformation"
Disinfo Talks is an interview series with experts that tackle the challenge of disinformation through different prisms. Our talks…
The Regulation of Artificial Intelligence in the EU
What does regulation of artificial intelligence mean? Technologies that are important for society due to the risks and…