Publications

Literature
Artificial Intelligence and new technologies regulation
Fabiana Di Porto (2021)
'Algorithmic Disclosure Rules'
During the past decade, a small but rapidly growing number of Law & Tech scholars have been applying algorithmic methods in their legal research. This Article does so too, for the sake of saving disclosure regulation from failure: a normative strategy that has long been considered dead by legal scholars, but conspicuously abused by rule-makers. Existing proposals to revive disclosure duties, however, focus either on industry policies (e.g. seeking to reduce consumers' costs of reading) or on rulemaking (e.g. by simplifying linguistic intricacies). But failure may well depend on both. Therefore, this Article develops a 'comprehensive approach', suggesting the use of computational tools to cope with linguistic and behavioral failures at both the enactment and implementation phases of disclosure duties, thus filling a void in the Law & Tech scholarship. Specifically, it outlines how algorithmic tools can be used in a holistic manner to address the many failures of disclosures, from rulemaking in parliament to consumer screens. It suggests a multi-layered design in which lawmakers deploy three tools to produce optimal disclosure rules: machine learning, natural language processing, and behavioral experimentation through regulatory sandboxes. To clarify how and why these tasks should be performed, disclosures in the contexts of online contract terms and online privacy are taken as examples. Because algorithmic rulemaking is frequently met with well-justified skepticism, problems of its compatibility with legitimacy, efficacy, and proportionality are also discussed.
Literature
Experimental approach to law and regulation
Sofia Ranchordas (2021)
Experimental Regulations and Regulatory Sandboxes: Law without Order?
This article argues that the poor design and implementation of experimental regulations and regulatory sandboxes can have both methodological and legal implications. First, the internal validity of experimental legal regimes is limited because it is unclear whether the verified positive or negative results are the direct result of the experimental intervention or of other circumstances. The limited external validity of experimental legal regimes impedes the generalization of the experiment and thus the ability to draw broader conclusions for the regulatory process. Second, experimental legal regimes that are not scientifically sound make a limited contribution to the advancement of evidence-based lawmaking and the rationalization of regulation. Third, methodological deficiencies may result in the violation of legal principles (e.g., legality, legal certainty, equal treatment, proportionality) which require that experimental regulations follow objective, transparent, and predictable standards. This article contributes to the existing comparative public law and law-and-methods literature with an interdisciplinary framework which can help improve the design of experimental regulations and regulatory sandboxes. This article starts with an analysis of the central features, functions, and legal framework of these experimental legal regimes. It does so by focusing on legal scholarship, policy reports, and case law on experimental regulations and regulatory sandboxes from France, the United Kingdom, and the Netherlands. While this article is not strictly comparative in its methodology, the three selected jurisdictions illustrate well the different facets of experimental legal regimes. This article draws on social science literature on the methods of field experiments to offer novel methodological insights for a more transparent and objective design of experimental regulations and regulatory sandboxes.
Literature
Artificial Intelligence and new technologies regulation
S. Ranchordas (2021)
Empathy in the Digital Administrative State
It is human to make mistakes. It is indisputably human to make mistakes while filling in tax returns, benefit applications, and other government forms, which are often tainted with complex language, requirements, and short deadlines. However, the uniquely human capacity to forgive these mistakes is disappearing with the digitization of government services and the automation of government decision-making. While the role of empathy has long been controversial in law, empathic measures have helped public authorities balance administrative values with citizens' needs and deliver fair and legitimate decisions. The empathy of public servants has been particularly important for vulnerable citizens (e.g., disabled individuals, seniors, underrepresented minorities, low-income citizens). When empathy is threatened in the digital administrative state, vulnerable citizens are at risk of not being able to exercise their rights because they cannot engage with digital bureaucracy. This Article argues that empathy, the ability to relate to others and understand a legal situation from multiple perspectives, is a key value of administrative law which should be safeguarded in the digital administrative state. Empathy can contribute to the advancement of procedural due process, equal treatment, and the legitimacy of automation. The concept of administrative empathy does not aim to create arrays of exceptions or to imbue law with emotions and individualized justice. Instead, this concept suggests avenues for humanizing digital government and automated decision-making through a complete understanding of citizens' needs. This Article explores the role of empathy in the digital administrative state at two levels: First, it argues that empathy can be a partial response to some of the shortcomings of digital bureaucracy.
At this level, administrative empathy acknowledges that citizens have different skills and needs, and this requires the redesign of pre-filled application forms, government platforms, algorithms, as well as assistance. Second, empathy should also operate ex post as a humanizing measure which can help ensure that administrative decision-making remains human. Drawing on comparative examples of empathic measures employed in the United States, the Netherlands, Estonia, and France, the academic contribution of this Article is twofold: first, it offers an interdisciplinary reflection on the role of empathy in administrative law and public administration for the digital age that seeks to advance the position of vulnerable citizens; second, it operationalizes the concept of administrative empathy.
Documents
Public utilities
F. Molinari; C. Van Noordt; L. Vaccari (2021)
AI Watch. Beyond pilots: sustainable implementation of AI in public services
Artificial Intelligence (AI) is a peculiar case of General Purpose Technology that differs from other examples in history because it embeds specific uncertainties and an ambiguous character that may lead to a number of risks when used to support transformative solutions in the public sector. AI has extremely powerful and, in many cases, disruptive effects on the internal management, decision-making, and service provision processes of public administration. Over the past few years, the European Union and its Member States have designed regulatory policies and initiatives to mitigate the risks of AI and make its opportunities a reality for national, regional, and local government institutions. 'AI Watch' is one of these initiatives, which has, among its goals, the monitoring of the European Union's industrial, technological, and research capacity in AI and the development of an analytical framework of the impact potential of AI in the public sector. This report, in particular, follows a previous landscaping study and collection of European cases, which was delivered in 2020. This document first introduces the concept of AI appropriation in government, seen as a sequence of two logically distinct phases, respectively named adoption and implementation of the related technologies in public services and processes. It then analyses the situation of AI governance in the US and China and contrasts it with an emerging, truly European model, rooted in a systemic vision and with an emphasis on the revitalised role of the Member States in the EU integration process. Next, it points out some critical challenges to AI implementation in the EU public sector, including: the generation of a critical mass of public investments, the availability of widely shared and suitable datasets, the improvement of AI literacy and skills in the involved staff, and the threats associated with the legitimacy of decisions taken by AI algorithms alone.
Finally, it draws up a set of common actions for EU decision-makers willing to undertake a systemic approach to AI governance through a more advanced equilibrium between AI promotion and regulation. The three main recommendations of this work include a more robust integration of AI with data policies, addressing the issue of so-called "explainability of AI" (XAI), and broadening the current perspectives of both Pre-Commercial Procurement (PCP) and Public Procurement of Innovation (PPI) at the service of smart AI purchasing by EU public administrations. These recommendations will represent the baseline for a generic implementation roadmap for enhancing the use and impact of AI in the European public sector.