Literature
Artificial Intelligence and new technologies regulation
C. Coglianese; C. R. Crum (2025)
Leashes, Not Guardrails: A Management-Based Approach to Artificial Intelligence Risk Regulation
Calls to regulate artificial intelligence (AI) have sought to establish “guardrails” to protect the public against AI going awry. Although physical guardrails can lower risks on roadways by serving as fixed, immovable protective barriers, the regulatory equivalent in the digital age of AI is unrealistic and even unwise. AI is too heterogeneous and dynamic to circumscribe fixed paths along which it must operate—and, in any event, the benefits of the technology proceeding along novel pathways would be limited if rigid, prescriptive regulatory barriers were imposed. But this does not mean that AI should be left unregulated, as the harms from irresponsible and ill-managed development and use of AI can be serious. Instead of “guardrails,” though, policymakers should impose “leashes.” Regulatory leashes imposed on digital technologies are flexible and adaptable—just as physical leashes used when walking a dog through a neighborhood allow for a range of movement and exploration. But just as a physical leash only protects others when a human retains a firm grip on the handle, the kind of leashes that should be deployed for AI will also demand human oversight. A flexible regulatory strategy known in other contexts as management-based regulation offers an appropriate model for AI risk governance. In this article, we explain why regulating AI through management-based regulation—a “leash” approach—will work better than a prescriptive or “guardrail” regulatory approach. We discuss how some early regulatory efforts already include management-based elements. We also elucidate some of the questions that lie ahead in implementing a management-based approach to AI risk regulation. Our aim is to facilitate future research and decision-making that can improve the efficacy of AI regulation by leashes, not guardrails.