Leashes, Not Guardrails: A Management-Based Approach to Artificial Intelligence Risk Regulation

Calls to regulate artificial intelligence (AI) have sought to establish “guardrails” to protect the public against AI going awry. Although physical guardrails can lower risks on roadways by serving as fixed, immovable protective barriers, the regulatory equivalent in the digital age of AI is unrealistic and even unwise. AI is too heterogeneous and dynamic to circumscribe fixed paths along which it must operate—and, in any event, the benefits of the technology proceeding along novel pathways would be limited if rigid, prescriptive regulatory barriers were imposed. But this does not mean that AI should be left unregulated, as the harms from irresponsible and ill-managed development and use of AI can be serious. Instead of “guardrails,” though, policymakers should impose “leashes.” Regulatory leashes imposed on digital technologies are flexible and adaptable—just as physical leashes used when walking a dog through a neighborhood allow for a range of movement and exploration. But just as a physical leash only protects others when a human retains a firm grip on the handle, the kind of leashes that should be deployed for AI will also demand human oversight. In the regulatory context, a flexible regulatory strategy known in other contexts as management-based regulation will be an appropriate model for AI risk governance. In this article, we explain why regulating AI by management-based regulation—a “leash” approach—will work better than a prescriptive or “guardrail” regulatory approach. We discuss how some early regulatory efforts are including management-based elements. We also elucidate some of the questions that lie ahead in implementing a management-based approach to AI risk regulation. Our aim is to facilitate future research and decision-making that can improve the efficacy of AI regulation by leashes, not guardrails.
Author: C. Coglianese; C. R. Crum

Mining EU consultations through AI

Consultations are key to gathering evidence that informs rulemaking. When analysing the feedback received, it is essential for the regulator to cluster stakeholders' opinions appropriately, as misclustering may distort the representativeness of the positions expressed, making some of them appear majoritarian when they might not be. The European Commission's (EC) approach to clustering opinions in consultations lacks a standardized methodology, which reduces procedural transparency, and it makes use of computational tools only sporadically. This paper explores how natural language processing (NLP) technologies may enhance the way opinion clustering is currently conducted by the EC. We examine 830 responses to three legislative proposals (the Artificial Intelligence Act, the Digital Markets Act and the Digital Services Act) using both a lexical and a semantic approach. We find that some groups (such as small and medium-sized companies) show low internal similarity across all datasets and methodologies despite being clustered into one opinion group by the EC. The same holds for citizens and consumer associations in the consultation on the DSA. These results suggest that computational tools can help reduce the misclustering of stakeholders' opinions and consequently allow greater representativeness of the different positions expressed in consultations. They further suggest that the EC could adopt a convergent methodology for all its consultations, in which such tools are employed in a consistent and replicable manner rather than only occasionally. Ideally, the EC should also explain when one methodology is preferred over another. This effort should find its way into the Better Regulation toolbox (EC 2023). Our analysis also paves the way for further research towards a transparent and consistent methodology for group clustering.

Author: F. Di Porto; P. Fantozzi; M. Naldi; N. Rangone
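
To make the paper's comparison concrete, the sketch below contrasts lexical and semantic within-group similarity for a handful of consultation responses. This is a minimal illustration, not the authors' actual pipeline: the response texts and group labels are invented, and TF-IDF vectors (lexical) and sentence-transformer embeddings (semantic) stand in for whatever representations the paper employs. A low mean similarity within an EC-assigned group would hint at possible misclustering.

```python
# Minimal sketch (not the paper's actual pipeline): compare lexical vs.
# semantic within-group similarity for consultation responses that the
# EC clustered into one opinion group. Low mean similarity suggests the
# group may be heterogeneous, i.e. potentially misclustered.
from itertools import combinations

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sentence_transformers import SentenceTransformer  # semantic embeddings

# Hypothetical data: (response text, EC-assigned stakeholder group).
responses = [
    ("SMEs face disproportionate compliance costs under the draft act.", "SME"),
    ("Small firms lack resources for conformity assessments.", "SME"),
    ("The act should ban biometric surveillance outright.", "SME"),
    ("Consumers need stronger redress mechanisms.", "consumer"),
    ("Users must be able to contest automated decisions.", "consumer"),
]
texts = [t for t, _ in responses]
groups = [g for _, g in responses]

def mean_within_group_similarity(vectors: np.ndarray) -> dict[str, float]:
    """Mean pairwise cosine similarity among responses sharing a group label."""
    sims = cosine_similarity(vectors)
    out = {}
    for g in set(groups):
        idx = [i for i, lab in enumerate(groups) if lab == g]
        pairs = list(combinations(idx, 2))
        out[g] = float(np.mean([sims[i, j] for i, j in pairs])) if pairs else 1.0
    return out

# Lexical approach: sparse TF-IDF bag-of-words vectors.
lexical = TfidfVectorizer().fit_transform(texts).toarray()

# Semantic approach: dense sentence embeddings.
model = SentenceTransformer("all-MiniLM-L6-v2")
semantic = model.encode(texts)

print("lexical :", mean_within_group_similarity(lexical))
print("semantic:", mean_within_group_similarity(semantic))
```

Whichever representation is used, mean within-group similarity gives a replicable, inspectable signal of whether stakeholders placed in one opinion group actually express similar positions.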

Risks Without Rights? The EU AI Act’s Approach to AI in Law and Rule-making

The EU AI Act seeks to balance the need for societal protection against the potential risks of AI systems with the goal of fostering innovation. However, the Act's ex-ante risk-based approach might lead to regulatory obsolescence (which already materialised in 2021 with the spread of LLMs and the consequent reopening of the regulatory process), as well as to over- or under-inclusion of AI applications in risk categories. The paper deals with the latter outcome by exploring how uses of AI in law and rulemaking hide risks not covered by the EU AI Act. It then analyses how the Act lacks flexibility for amending its provisions, and the way forward. The latter is tackled not through utopian and hardly feasible proposals for a new act and a new risk-based approach, but by focusing on codes of conduct and national interventions on AI uses by public authorities.
Author: N. Rangone; L. Megale

Regulating Multifunctionality

Foundation models and generative artificial intelligence (AI) exacerbate a core regulatory challenge associated with AI: its heterogeneity. By their very nature, foundation models and generative AI can perform multiple functions for their users, thus presenting a vast array of different risks. This multifunctionality means that prescriptive, one-size-fits-all regulation will not be a viable option. Even performance standards and ex post liability (regulatory approaches that usually afford flexibility) are unlikely to be strong candidates for responding to multifunctional AI's risks, given challenges in monitoring and enforcement. Regulators will do well instead to promote proactive risk management on the part of developers and users by using management-based regulation, an approach that has proven effective in other contexts of heterogeneity. Regulators will also need to maintain ongoing vigilance and agility. More than in other contexts, regulators of multifunctional AI will need sufficient resources, top human talent and leadership, and organizational cultures committed to regulatory excellence.
Author: C. Coglianese; C. R. Crum

Pubbliche amministrazioni e intelligenza artificiale. Strumenti, principi e garanzie [Public Administrations and Artificial Intelligence: Tools, Principles and Guarantees]

Public administrations have long made use of digital technologies, including artificial intelligence, in carrying out their functions. This phenomenon, combined with the regulatory framework of European origin aimed at governing digital transition processes, calls for a rethinking of the principles and guarantees identified so far by legal scholarship and case law and inspired by the traditional legal regimes of administrative law. Starting from an examination of the current uses of digital technologies by public authorities, the volume reconstructs the possible new rules governing them.
Author: M.B. Armiento