Algorithmic Accountability for the Public Sector

The Ada Lovelace Institute (Ada), AI Now Institute (AI Now), and Open Government Partnership (OGP) have partnered to launch the first global study to analyse the initial wave of algorithmic accountability policy for the public sector.
As governments increasingly turn to algorithms to support decision-making for public services, growing evidence suggests that these systems can cause harm and frequently lack transparency in their implementation. Reformers in and outside of government are turning to regulatory and policy tools, hoping to ensure algorithmic accountability across countries and contexts. These responses are emergent and shifting rapidly, and they vary widely in form and substance – from legally binding commitments to high-level principles and voluntary guidelines.
This report presents evidence on the use of algorithmic accountability policies in different contexts from the perspective of those implementing these tools, and explores the limits of legal and policy mechanisms in ensuring safe and accountable algorithmic systems.
Author: ADA, AI NOW, OGP

Report of the Social and Human Sciences Commission (SHS)

UNESCO member states adopt the first ever global agreement on the Ethics of Artificial Intelligence
Author: UNESCO

'Algorithmic Disclosure Rules'

During the past decade, a small but rapidly growing number of Law & Tech scholars have been applying algorithmic methods in their legal research. This Article does so too, for the sake of saving disclosure regulation from failure: a normative strategy that has long been considered dead by legal scholars, but conspicuously abused by rule-makers. Existing proposals to revive disclosure duties, however, focus either on industry policies (e.g. seeking to reduce consumers’ costs of reading) or on rulemaking (e.g. by simplifying linguistic intricacies). But failure may well depend on both. Therefore, this Article develops a ‘comprehensive approach’, suggesting the use of computational tools to cope with linguistic and behavioral failures at both the enactment and implementation phases of disclosure duties, thus filling a void in the Law & Tech scholarship. Specifically, it outlines how algorithmic tools can be used in a holistic manner to address the many failures of disclosures, from rulemaking in parliament to consumer screens. It suggests a multi-layered design in which lawmakers deploy three tools to produce optimal disclosure rules: machine learning, natural language processing, and behavioral experimentation through regulatory sandboxes. To clarify how and why these tasks should be performed, disclosures in the contexts of online contract terms and online privacy are taken as examples. Because algorithmic rulemaking is frequently met with well-justified skepticism, problems of its compatibility with legitimacy, efficacy, and proportionality are also discussed.
Author: Fabiana Di Porto

Governing by Algorithm? No Noise and (Potentially) Less Bias

As intuitive statisticians, human beings suffer from identifiable biases, cognitive and otherwise. Human beings can also be “noisy,” in the sense that their judgments show unwanted variability. As a result, public institutions, including those that consist of administrative prosecutors and adjudicators, can be biased, noisy, or both. Both bias and noise produce errors. Algorithms eliminate noise, and that is important; to the extent that they do so, they prevent unequal treatment and reduce errors. In addition, algorithms do not use mental shortcuts; they rely on statistical predictors, which means that they can counteract or even eliminate cognitive biases. At the same time, the use of algorithms by administrative agencies raises many legitimate questions and doubts. Among other things, algorithms can encode or perpetuate discrimination, perhaps because their inputs are based on discrimination, perhaps because what they are asked to predict is infected by discrimination. But if the goal is to eliminate discrimination, properly constructed algorithms nonetheless have a great deal of promise for administrative agencies.
Author: Cass R. Sunstein

Empathy in the Digital Administrative State

It is human to make mistakes. It is indisputably human to make mistakes while filling in tax returns, benefit applications, and other government forms, which are often burdened with complex language, intricate requirements, and short deadlines. However, the uniquely human capacity to forgive these mistakes is disappearing with the digitization of government services and the automation of government decision-making. While the role of empathy has long been controversial in law, empathic measures have helped public authorities balance administrative values with citizens’ needs and deliver fair and legitimate decisions. The empathy of public servants has been particularly important for vulnerable citizens (e.g., disabled individuals, seniors, underrepresented minorities, low-income individuals). When empathy is threatened in the digital administrative state, vulnerable citizens are at risk of being unable to exercise their rights because they cannot engage with digital bureaucracy.

This Article argues that empathy, the ability to relate to others and understand a legal situation from multiple perspectives, is a key value of administrative law which should be safeguarded in the digital administrative state. Empathy can contribute to the advancement of procedural due process, equal treatment, and the legitimacy of automation. The concept of administrative empathy does not aim to create arrays of exceptions or to imbue law with emotions and individualized justice. Instead, this concept suggests avenues for humanizing digital government and automated decision-making through a more complete understanding of citizens’ needs.

This Article explores the role of empathy in the digital administrative state at two levels: First, it argues that empathy can be a partial response to some of the shortcomings of digital bureaucracy. At this level, administrative empathy acknowledges that citizens have different skills and needs, and this requires the redesign of pre-filled application forms, government platforms, algorithms, as well as assistance. Second, empathy should also operate ex post as a humanizing measure which can help ensure that administrative decision-making remains human. Drawing on comparative examples of empathic measures employed in the United States, the Netherlands, Estonia, and France, the academic contribution of this Article is twofold: first, it offers an interdisciplinary reflection on the role of empathy in administrative law and public administration for the digital age that seeks to advance the position of vulnerable citizens; second, it operationalizes the concept of administrative empathy.
Author: S. Ranchordas

‘Neo-feudalism’ in the age of algorithms

Digital platforms have steadily strengthened their power by collecting and exploiting data processed through algorithms. Economic and social interactions are increasingly shaped by technology and software. Such reliance on digital technologies has brought about the tendency of private actors (but also of public institutions) to replace traditional norms and rules with regulation based on code (‘code is law’). From this perspective, digital giants are no longer mere market participants. Rather, they are market makers, exerting regulatory control over the terms that define the positions of commercial and end users. Moreover, they aspire to displace ever more governmental and public roles over time, performing functions and tasks normally vested in public authorities such as judicial bodies and courts. This shift in power towards private actors has led to an expanding privatization in the field of individuals’ rights as well. Algorithms are replacing the traditional functions of law, embedding private values and interests in the technology. They are the key to digital platforms’ power. Therefore, regulators should ‘capture’ the algorithm in order to steer its effects and, by means of the technology itself, introduce regulatory principles into the design of the digital code.
Author: Laura Ammannati

The Proposal for a Regulation of the European Union on Artificial Intelligence: Preliminary Notes (Italian)

Artificial Intelligence has great potential in all areas of our lives, but it also presents risks for fundamental rights and the rule of law. The European Union is trying to create a regulatory framework that balances the pros and cons of AI. On 21 April 2021, the EU published a comprehensive proposal for an AI regulation, which is intended to protect and promote European rights and values without impeding the technological, industrial, and commercial development of AI. This article aims to give a first analysis of the proposal, focusing on its main positive and negative aspects.
Author: Casonato, C.; Marchetti, B.