No AI Regulator: An Analysis of Artificial Intelligence and Public Standards Report (UK Government)

The Committee on Standards in Public Life published a review on ‘Artificial Intelligence and Public Standards’ in February 2020. The Committee, chaired by Lord Evans of Weardale KCB DL, takes a thorough look at the use of AI in public service through the framework of the Nolan Principles (Selflessness, Integrity, Objectivity, Accountability, Openness, Honesty, and Leadership). This paper briefly comments upon and analyses selections from the publication by surveying its recommendations.
Author: Kazim E.; Koshiyama A.

Procurement as Policy: Administrative Process for Machine Learning

At every level of government, officials contract for technical systems that employ machine learning—systems that perform tasks without using explicit instructions, relying on patterns and inference instead. These systems frequently displace discretion previously exercised by policymakers or individual front-end government employees with an opaque logic that bears no resemblance to the reasoning processes of agency personnel. However, because agencies acquire these systems through government procurement processes, they and the public have little input into—or even knowledge about—their design or how well that design aligns with public goals and values.
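To make that contrast concrete, here is a minimal Python sketch, entirely our own illustration (the eligibility scenario and every name in it are hypothetical), of the difference between an explicitly authored rule and a rule inferred from patterns in past decisions:

```python
# Illustrative only: an explicit, authored rule vs. a rule inferred from data.

def eligible_by_explicit_rule(income: float) -> bool:
    """A policymaker-authored rule: the logic is visible and auditable."""
    return income < 20_000

def fit_threshold(examples: list[tuple[float, bool]]) -> float:
    """'Learn' a cutoff from past decisions: here, the midpoint between the
    highest approved income and the lowest denied income."""
    approved = [inc for inc, ok in examples if ok]
    denied = [inc for inc, ok in examples if not ok]
    return (max(approved) + min(denied)) / 2

# Historical decisions (income, approved?) stand in for training data.
history = [(12_000, True), (18_500, True), (21_000, False), (30_000, False)]
learned_cutoff = fit_threshold(history)  # 19_750.0, implicit and data-dependent

print(eligible_by_explicit_rule(19_000))  # True, by stated policy
print(19_000 < learned_cutoff)            # True, by inferred pattern
```

The learned cutoff encodes a policy choice, yet it appears nowhere in any authored rule; change the historical data and the policy silently changes with it.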

This Article explains how the decisions about goals, values, risk, and certainty, along with the elimination of case-by-case discretion, inherent in machine-learning system design create policies, not just once when the systems are designed, but over time as they adapt and change. When the adoption of these systems is governed by procurement, the policies they embed receive little or no scrutiny from agency or outside experts beyond that provided by the vendor. Design decisions are left to private third-party developers; there is no public participation, no reasoned deliberation, and no factual record, abdicating government responsibility for policymaking.
This Article then argues for a move from a procurement mindset to a policymaking mindset. When policy decisions are made through system design, processes suitable for substantive administrative determinations should be used: processes that foster deliberation reflecting both technocratic demands for reason and rationality informed by expertise, and democratic demands for public participation and political accountability. Specifically, the Article proposes administrative law as the framework to guide the adoption of machine-learning governance, describing specific ways in which the policy choices embedded in machine-learning system design fail the prohibition against arbitrary and capricious agency action absent a reasoned decision-making process, one that both enlists the expertise necessary for reasoned deliberation about, and justification for, such choices, and makes visible the political choices being made.
Finally, this Article sketches models for machine-learning adoption processes that satisfy the prohibition against arbitrary and capricious agency actions. It explores processes by which agencies might garner technical expertise and overcome problems of system opacity, satisfying administrative law’s technocratic demand for reasoned expert deliberation. It further proposes both institutional and engineering design solutions to the challenge of policymaking opacity, offering process paradigms to ensure the “political visibility” required for public input and political oversight. In doing so, it also argues for the use of “contestable design”: design that exposes value-laden features and parameters and provides for iterative human involvement in system evolution and deployment. Together, these institutional and design approaches further both administrative law’s technocratic and democratic mandates.
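As a rough sketch of what ‘contestable design’ could look like in practice, assuming a risk-scoring setting of our own invention (the parameter names and the review hook below are illustrative, not drawn from the Article):

```python
from dataclasses import dataclass, field

@dataclass
class RiskScoringPolicy:
    """Value-laden parameters surfaced as explicit, documented settings
    rather than buried inside model internals."""
    alert_threshold: float = 0.8        # scores above this flag a case for review
    false_positive_weight: float = 1.0  # how costly a wrong flag is judged to be
    change_log: list = field(default_factory=list)

    def amend(self, name: str, value: float, approver: str, rationale: str) -> None:
        """Require a named human approver and a recorded rationale for every
        change, keeping system evolution visible and contestable."""
        self.change_log.append((name, getattr(self, name), value, approver, rationale))
        setattr(self, name, value)

policy = RiskScoringPolicy()
policy.amend("alert_threshold", 0.7, approver="agency_review_board",
             rationale="Public comment favoured earlier intervention.")
print(policy.alert_threshold, len(policy.change_log))  # 0.7 1
```

The design choice here is that the value-laden parameters are first-class, named objects, so they can be published, debated, and amended on the record rather than discovered after the fact.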
Author: Mulligan D. K.; Bamberger K. A.

Artificial Intelligence and Cybersecurity

The Centre for European Policy Studies launched a Task Force on Artificial Intelligence (AI) and Cybersecurity in September 2019. The goal of this Task Force is to bring attention to the market, technical, ethical and governance challenges posed by the intersection of AI and cybersecurity, focusing both on AI for cybersecurity and on cybersecurity for AI. The Task Force is multi-stakeholder by design and composed of academics, industry players from various sectors, policymakers and civil society.
The Task Force is currently discussing issues such as the state and evolution of the application of AI in cybersecurity and cybersecurity for AI; the debate on the role that AI could play in the dynamics between cyber attackers and defenders; the increasing need for sharing information on threats and how to deal with the vulnerabilities of AI-enabled systems; options for policy experimentation; and possible EU policy measures to ease the adoption of AI in cybersecurity in Europe.

As part of these activities, this report assesses the High-Level Expert Group on AI’s (HLEG) Ethics Guidelines for Trustworthy AI, presented on April 8, 2019. In particular, it analyses and makes suggestions on the Trustworthy AI Assessment List (pilot version), a non-exhaustive list aimed at helping the public and the private sector operationalise Trustworthy AI.
The report seeks to contribute to the planned revision of the Assessment List by addressing in particular the interplay between AI and cybersecurity. The evaluation has been made according to specific criteria: whether and how the items of the Assessment List refer to existing legislation (e.g. the GDPR, the EU Charter of Fundamental Rights); whether they refer to moral principles rather than laws; whether they recognise that AI attacks are fundamentally different from traditional cyberattacks; whether they are compatible with different risk levels; whether they are flexible enough in terms of clear and easy measurement and of implementation by AI developers and SMEs; and, overall, whether they are likely to create obstacles for the industry.
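Purely as an illustration, these criteria could be operationalised as a simple scoring rubric; the Python sketch below is our own assumption (the criteria paraphrase the report, but the scoring scheme does not come from it):

```python
# Hypothetical rubric for scoring items of the Trustworthy AI Assessment List.
CRITERIA = [
    "refers to existing legislation (e.g. GDPR, EU Charter of Fundamental Rights)",
    "refers to moral principles rather than law",
    "treats AI attacks as distinct from traditional cyberattacks",
    "is compatible with different risk levels",
    "is clearly measurable and implementable by AI developers and SMEs",
    "avoids creating obstacles for the industry",
]

def assess_item(verdicts: dict[str, bool]) -> float:
    """Score one Assessment List item as the share of criteria it satisfies."""
    return sum(verdicts.get(c, False) for c in CRITERIA) / len(CRITERIA)

# Example: an item that cites the GDPR and scales with risk, but fails the rest.
example = {CRITERIA[0]: True, CRITERIA[3]: True}
print(f"{assess_item(example):.2f}")  # 0.33
```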
Author: CEPS

Regulating a Revolution: From Regulatory Sandboxes to Smart Regulation

Prior to the global financial crisis, financial innovation was viewed very positively, resulting in a laissez-faire, deregulatory approach to financial regulation. Since the crisis, the regulatory pendulum has swung to the other extreme. Post-crisis regulation, together with rapid technological change, has spurred the development of financial technology (FinTech). FinTech firms and data-driven financial service providers profoundly challenge the current regulatory paradigm. Financial regulators increasingly seek to balance the traditional regulatory objectives of financial stability and consumer protection with promoting growth and innovation. The resulting regulatory innovations include RegTech, regulatory sandboxes, and special charters. This Article analyzes possible new regulatory approaches, ranging from doing nothing (which, depending on context, spans the permissive to the highly restrictive), through cautious permissiveness (on a case-by-case basis, or through special charters) and structured experimentalism (such as sandboxes or piloting), to the development of specific new regulatory frameworks. Building on this framework, we argue for a new regulatory approach that incorporates these rebalanced objectives, which we term ‘smart regulation.’ Our new automated and proportionate regime builds on shared principles from a range of jurisdictions and supports innovation in financial markets. The fragmentation of market participants and the increased use of technology require regulators to adopt a sequential reform process, starting with digitization, before building digitally smart regulation. This Article provides a roadmap for this process.
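As a toy illustration of what an ‘automated and proportionate’ regime might mean in code, with tiers, thresholds, and names that are entirely our own assumptions rather than the Article’s:

```python
def regulatory_tier(assets_under_management: float, customer_count: int) -> str:
    """Map a firm's scale to a proportionate supervision tier, so that
    reporting burdens grow with the risk a firm plausibly poses."""
    if assets_under_management < 1e6 and customer_count < 1_000:
        return "sandbox"          # structured experimentation, light reporting
    if assets_under_management < 1e8:
        return "special_charter"  # case-by-case permissions
    return "full_regime"          # complete prudential framework

print(regulatory_tier(5e5, 200))     # sandbox
print(regulatory_tier(5e7, 50_000))  # special_charter
```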
Author: Zetzsche D. A. and others

FinTech: Regulatory sandboxes and innovation hubs

In recent years, competent authorities in the EU have adopted various initiatives to facilitate financial innovation. These initiatives include the establishment of ‘innovation facilitators’, which typically take the form of ‘innovation hubs’ and ‘regulatory sandboxes’. Innovation hubs provide a dedicated point of contact for firms to raise enquiries with competent authorities on FinTech-related issues and to seek non-binding guidance on regulatory and supervisory expectations, including licensing requirements. Regulatory sandboxes, on the other hand, are schemes that enable firms to test innovative financial products, financial services or business models pursuant to a specific testing plan agreed and monitored by a dedicated function of the competent authority.
In this report the European Supervisory Authorities (the ESAs) set out a comparative analysis of the innovation facilitators established to date in the EU, further to the mandate specified in the European Commission’s March 2018 FinTech Action Plan. The ESAs also set out ‘best practices’ regarding the design and operation of innovation facilitators, informed by the results of the comparative analysis and the experiences of the national competent authorities in running the facilitators. The best practices are intended to provide indicative support for competent authorities when considering the establishment of, or reviewing the operation of, innovation facilitators. Accordingly, the best practices are intended to promote convergence in the design and operation of innovation facilitators and thereby protect the level playing field. The ESAs also set out options, to be considered in the context of future EU-level work on innovation facilitators, including in conjunction with the European Commission’s future work, to promote coordination and cooperation between innovation facilitators and to support the scaling-up of FinTech across the EU. These options comprise:
• the development of Joint ESA own-initiative guidance on cooperation and coordination between innovation facilitators;
• the creation of an EU network to bridge innovation facilitators established at the Member State level.
The ESAs will continue to monitor national developments regarding innovation facilitators and take such steps as are appropriate to promote an accommodative and common approach towards FinTech in the EU.
Author: ESMA

Decision-making in the Age of the Algorithm

Frontline practitioners in the public sector – from social workers to police to custody officers – make important decisions every day about people’s lives. In a sector grappling with rising demand and diminishing resources, they are being asked to make these decisions quickly and with limited information. To support decision-making, public sector organisations are turning to new technologies, in particular predictive analytics tools, which use machine learning algorithms to discover patterns in data and make predictions.
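For readers without a technical background, here is a minimal sketch of the kind of predictive tool the report describes, trained on synthetic data with scikit-learn (the features, labels, and scenario are hypothetical):

```python
# A model scores cases so practitioners can prioritise attention;
# it produces a probability, not a decision.
from sklearn.linear_model import LogisticRegression

# Each row: [prior_referrals, missed_appointments]; label: was the case escalated?
X = [[0, 1], [1, 0], [2, 3], [4, 2], [5, 5], [0, 0], [3, 4], [1, 1]]
y = [0, 0, 1, 1, 1, 0, 1, 0]

model = LogisticRegression().fit(X, y)      # learn patterns from past cases
risk = model.predict_proba([[2, 2]])[0][1]  # predicted probability of escalation
print(f"Predicted risk score: {risk:.2f}")  # a score to inform, not replace, judgment
```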

While many guides exist for ethical AI design, there is little guidance on how to support a productive human-machine interaction. This report aims to fill that gap. How people work with these tools matters because, simply put, for predictive analytics tools to be effective, frontline practitioners need to use them well. The report encourages public sector organisations to think about how people feel about predictive analytics tools – what they’re fearful of, what they’re excited about, what they don’t understand.
Based on insights drawn from an extensive literature review, interviews with frontline practitioners, and discussions with experts across a range of fields, the guide also identifies three key principles that play a significant role in supporting a constructive human-machine relationship: context, understanding, and agency.
Author: NESTA

Artificial Intelligence and Law: An Overview

Much has been written recently about artificial intelligence (AI) and law. But what is AI, and what is its relation to the practice and administration of law? This article addresses those questions by providing a high-level overview of AI and its use within law. The discussion aims to be nuanced but also understandable to those without a technical background. To that end, I first discuss AI generally. I then turn to AI and how it is being used by lawyers in the practice of law, people and companies who are governed by the law, and government officials who administer the law. A key motivation in writing this article is to provide a realistic, demystified view of AI that is rooted in the actual capabilities of the technology. This is meant to contrast with discussions about AI and law that are decidedly futurist in nature.
Author: Surden H.

The Perils and Promises of Artificial General Intelligence

Artificial General Intelligence (“AGI”), an artificial intelligence (“AI”) capable of achieving any goal, is the greatest existential threat humanity faces. Indeed, the questions surrounding the regulation of AGI are the most important the millennial generation will answer. The capabilities of current AI systems are evolving at accelerating rates. Yet legislators and scholars have yet to identify or address critical issues relating to AI regulation, focusing instead on short-term AI policy.
This paper takes a contrarian approach to analyzing AI regulation, with a specific emphasis on deep reinforcement learning systems, a relatively recent breakthrough in AI technology. It identifies three important regulatory issues legislators and scholars need to address in the context of AI development. AI and legal scholars have made clear the pressing need for an AI regulatory system; however, their arguments focus on the regulation of current AI systems and generally ignore or dismiss the possibility of AGI. Further, previous scholarship has yet to grapple specifically with the regulation of deep reinforcement learning systems, which many AI scholars argue provide a direct path to AGI. Ultimately, legislators must consider and address the perils and promises of AGI when developing and evolving AI regulatory frameworks.
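For context, here is a minimal tabular Q-learning sketch, the simpler ancestor of the deep reinforcement learning systems the paper discusses (deep RL replaces the lookup table with a neural network; the toy corridor environment is our own assumption):

```python
import random

# Toy corridor: states 0..4; reaching state 4 earns reward 1 and ends the episode.
GOAL, ACTIONS = 4, (-1, +1)
alpha, gamma, epsilon = 0.5, 0.9, 0.1
Q = {(s, a): 0.0 for s in range(GOAL + 1) for a in ACTIONS}

def step(state, action):
    nxt = max(0, min(GOAL, state + action))
    return nxt, float(nxt == GOAL), nxt == GOAL  # next state, reward, done

def choose(state):
    if random.random() < epsilon:               # explore occasionally
        return random.choice(ACTIONS)
    best = max(Q[(state, a)] for a in ACTIONS)  # otherwise exploit, ties at random
    return random.choice([a for a in ACTIONS if Q[(state, a)] == best])

for _ in range(500):  # training episodes
    s, done = 0, False
    while not done:
        a = choose(s)
        s2, r, done = step(s, a)
        target = r + (0.0 if done else gamma * max(Q[(s2, a2)] for a2 in ACTIONS))
        Q[(s, a)] += alpha * (target - Q[(s, a)])  # the Q-learning update
        s = s2

print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)})
# learned policy: every state prefers +1 (move toward the goal)
```

In deep RL the same loop runs with a neural network approximating Q; the system’s behaviour is whatever maximises the chosen reward, which is why reward specification figures prominently in debates over AGI risk.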
Author: Haney B. S.