Liability for robots II: an economic analysis

This is the second of two companion papers that discuss accidents caused by robots. In the first paper (Guerra et al., 2021), we presented the novel problems posed by robot accidents and assessed the related legal approaches and institutional opportunities. In this paper, we build on that analysis to consider a novel liability regime, which we refer to as the ‘manufacturer residual liability’ rule. This rule holds operators and victims liable for accidents due to their negligence, thereby incentivizing them to act diligently, and holds manufacturers residually liable for non-negligent accidents, thereby incentivizing them to make optimal investments in R&D for robot safety. In turn, the rule will bring down the price of safer robots, driving unsafe technology out of the market. Thanks to the percolation effect of residual liability, operators will also be incentivized to adopt optimal activity levels in robot usage.
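
The incentive structure described above can be made concrete with a toy sketch. The Python snippet below is purely illustrative and is not the paper's formal model: the loss-allocation function follows the rule as summarized here, while the functional form, parameter names, and numbers for the manufacturer's safety problem are hypothetical assumptions chosen for clarity.

# Illustrative sketch of the 'manufacturer residual liability' rule.
# NOT the authors' formal model: the cost function below is an assumption.

def bears_loss(operator_negligent: bool, victim_negligent: bool) -> str:
    """Allocate an accident loss under manufacturer residual liability."""
    if operator_negligent:
        return "operator"      # negligence-based liability for the operator
    if victim_negligent:
        return "victim"        # contributory negligence leaves the loss on the victim
    return "manufacturer"      # residual liability for non-negligent accidents

def manufacturer_expected_cost(rd_investment: float,
                               harm: float = 100.0,
                               base_accident_prob: float = 0.10) -> float:
    """Manufacturer's expected cost as a function of safety R&D.

    Assumes (hypothetically) that accident probability falls with R&D and
    that all remaining, non-negligent accidents are paid by the manufacturer.
    """
    accident_prob = base_accident_prob / (1.0 + rd_investment)  # assumed form
    return rd_investment + accident_prob * harm

if __name__ == "__main__":
    print(bears_loss(operator_negligent=False, victim_negligent=False))
    # -> "manufacturer": residual liability is what creates the R&D incentive.
    for x in (0.0, 1.0, 2.0, 4.0):
        print(f"R&D = {x:.1f}, expected cost = {manufacturer_expected_cost(x):.2f}")

Run as-is, the loop shows an interior optimum: too little R&D leaves high residual liability, too much wastes investment. That trade-off is the incentive alignment the rule targets.
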
Author: A. Guerra; F. Parisi; D. Pi

Liability for robots I: legal challenges

In robot torts, robots carry out activities that are partially controlled by a human operator. Several legal and economic scholars across the world have argued for the need to rethink legal remedies as we apply them to robot torts. Yet, to date, there exists no general formulation of liability for robot accidents, and the proposed solutions differ across jurisdictions. We proceed in our research with a set of two companion papers. In this paper, we present the novel problems posed by robot accidents and assess the legal challenges and institutional prospects that policymakers face in the regulation of robot torts. In the companion paper, we build on the present analysis and use an economic model to propose a new liability regime that blends negligence-based rules with strict manufacturer liability rules to create optimal incentives for robot torts.
Author: A. Guerra; F. Parisi; D. Pi

Algorithmic disclosure rules

During the past decade, a small but rapidly growing number of Law & Tech scholars have been applying algorithmic methods in their legal research. This Article does so too, for the sake of rescuing disclosure regulation from failure: a normative strategy that has long been considered dead by legal scholars, but conspicuously abused by rule-makers. Existing proposals to revive disclosure duties, however, focus either on industry policies (e.g., seeking to reduce consumers’ costs of reading) or on rulemaking (e.g., simplifying linguistic intricacies). But failure may well depend on both. Therefore, this Article develops a ‘comprehensive approach’, suggesting the use of computational tools to cope with linguistic and behavioral failures at both the enactment and implementation phases of disclosure duties, thus filling a void in the Law & Tech scholarship. Specifically, it outlines how algorithmic tools can be used in a holistic manner to address the many failures of disclosures, from rulemaking in parliament to consumer screens. It suggests a multi-layered design in which lawmakers deploy three tools to produce optimal disclosure rules: machine learning, natural language processing, and behavioral experimentation through regulatory sandboxes. To clarify how and why these tasks should be performed, disclosures in the contexts of online contract terms and online privacy are taken as examples. Because algorithmic rulemaking is frequently met with well-justified skepticism, problems of its compatibility with legitimacy, efficacy, and proportionality are also discussed.
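
For a sense of what such computational screening could look like at its simplest, here is a minimal sketch under stated assumptions: the metric (average sentence length as a proxy for linguistic intricacy) and the threshold are illustrative stand-ins for the richer NLP tools the Article discusses, not its actual proposal.

# Minimal sketch of NLP screening of disclosure texts. The metric and
# threshold are illustrative assumptions, not the Article's method: it simply
# flags clauses with long sentences, a rough proxy for linguistic intricacy.

import re

def average_sentence_length(text: str) -> float:
    """Mean number of words per sentence in a disclosure clause."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    if not sentences:
        return 0.0
    word_counts = [len(s.split()) for s in sentences]
    return sum(word_counts) / len(word_counts)

def flag_intricate_clauses(clauses: list[str], max_words: float = 25.0) -> list[str]:
    """Return clauses whose average sentence length exceeds the threshold."""
    return [c for c in clauses if average_sentence_length(c) > max_words]

if __name__ == "__main__":
    terms = [
        "You may cancel at any time. Refunds arrive within 14 days.",
        ("Notwithstanding any provision herein to the contrary, the provider "
         "shall be entitled, at its sole and absolute discretion and without "
         "prior notice, to amend, suspend, or terminate the service, in whole "
         "or in part, for any reason whatsoever."),
    ]
    for clause in flag_intricate_clauses(terms):
        print("Flag for redrafting:", clause[:60], "...")

In the Article's multi-layered design, a screen like this would sit alongside machine learning and sandbox experimentation rather than replace them.
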
Author: Fabiana Di Porto

Regulating New Tech: Problems, Pathways, and People

New technologies bring with them many promises, but also a series of new problems. Even though these problems are new, they are not unlike the types of problems that regulators have long addressed in other contexts. The lessons of past regulation can thus guide regulatory efforts today. Regulators must focus on understanding the problems they seek to address and the causal pathways that lead to them. They must then work to shape the behavior of those in industry, so that private-sector managers attend to their technologies’ problems and take actions to interrupt the causal pathways. This means that regulatory organizations need to strengthen their own technological capacities; most of all, however, they need to build their human capital. Successful regulation of technological innovation rests with top-quality people who possess the background and skills needed to understand new technologies and their problems.
Author: Cary Coglianese

Antitrust by Algorithm

Technological innovation is changing private markets around the world. New advances in digital technology have created new opportunities for subtle and evasive forms of anticompetitive behavior by private firms. But some of these same technological advances could also help antitrust regulators improve their performance. We foresee that the growing digital complexity of the marketplace will necessitate that antitrust authorities increasingly rely on machine-learning algorithms to oversee market behavior. In making this transition, authorities will need to meet several key institutional challenges—building organizational capacity, avoiding legal pitfalls, and establishing public trust—to ensure successful implementation of antitrust by algorithm.
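
As an illustration of what "antitrust by algorithm" might involve at the simplest level, the sketch below flags pairs of firms whose prices move in near-lockstep, a classic first-pass screen for possible coordination. The data, threshold, and method are hypothetical assumptions for exposition; the paper does not prescribe a particular algorithm.

# Illustrative screening heuristic an antitrust authority might automate:
# flag firm pairs whose price series correlate almost perfectly. Data,
# threshold, and method are hypothetical, for exposition only.

from itertools import combinations
from statistics import correlation  # requires Python 3.10+

def parallel_pricing_pairs(price_series: dict[str, list[float]],
                           threshold: float = 0.98) -> list[tuple[str, str, float]]:
    """Return firm pairs whose price series correlate above the threshold."""
    flagged = []
    for a, b in combinations(price_series, 2):
        r = correlation(price_series[a], price_series[b])
        if r > threshold:
            flagged.append((a, b, r))
    return flagged

if __name__ == "__main__":
    prices = {  # hypothetical weekly prices
        "firm_a": [10.0, 10.2, 10.4, 10.3, 10.6, 10.8],
        "firm_b": [9.9, 10.1, 10.3, 10.2, 10.5, 10.7],
        "firm_c": [10.1, 9.8, 10.5, 9.9, 10.2, 10.0],
    }
    for a, b, r in parallel_pricing_pairs(prices):
        print(f"Screen hit: {a} and {b} correlate at r = {r:.3f}")

A real authority would of course need far richer data and methods, and parallel pricing alone proves nothing; the point is only that such screens are automatable, which is where the institutional challenges of capacity, legality, and trust arise.
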
Author: C. Coglianese; A. Lai