Publications

Literature
Artificial Intelligence and new technologies regulation
Mulligan D. K., Bamberger K. A. (2019)
Procurement As Policy: Administrative Process for Machine Learning
At every level of government, officials contract for technical systems that employ machine learning—systems that perform tasks without using explicit instructions, relying on patterns and inference instead. These systems frequently displace discretion previously exercised by policymakers or individual front-end government employees with an opaque logic that bears no resemblance to the reasoning processes of agency personnel. However, because agencies acquire these systems through government procurement processes, they and the public have little input into—or even knowledge about—their design or how well that design aligns with public goals and values. This Article explains the ways that the decisions about goals, values, risk, and certainty, along with the elimination of case-by-case discretion, inherent in machine-learning system design create policies—not just once when they are designed, but over time as they adapt and change. When the adoption of these systems is governed by procurement, the policies they embed receive little or no agency or outside expertise beyond that provided by the vendor. Design decisions are left to private third-party developers. There is no public participation, no reasoned deliberation, and no factual record, which abdicates government responsibility for policymaking. This Article then argues for a move from a procurement mindset to a policymaking mindset. When policy decisions are made through system design, processes suitable for substantive administrative determinations should be used: processes that foster deliberation reflecting both technocratic demands for reason and rationality informed by expertise, and democratic demands for public participation and political accountability. Specifically, the Article proposes administrative law as the framework to guide the adoption of machine-learning governance, describing specific ways that the policy choices embedded in machine-learning system design fail the prohibition against arbitrary and capricious agency actions absent a reasoned decision-making process that both enlists the expertise necessary for reasoned deliberation about, and justification for, such choices, and makes visible the political choices being made. Finally, this Article sketches models for machine-learning adoption processes that satisfy the prohibition against arbitrary and capricious agency actions. It explores processes by which agencies might garner technical expertise and overcome problems of system opacity, satisfying administrative law’s technocratic demand for reasoned expert deliberation. It further proposes both institutional and engineering design solutions to the challenge of policymaking opacity, offering process paradigms to ensure the “political visibility” required for public input and political oversight. In doing so, it also emphasizes the importance of using “contestable design”—design that exposes value-laden features and parameters and provides for iterative human involvement in system evolution and deployment. Together, these institutional and design approaches further both administrative law’s technocratic and democratic mandates.
Literature
Better Regulation
Van Loo R. (2019)
The New Gatekeepers: Private Firms as Public Enforcers
The world’s largest businesses must routinely police other businesses. By public mandate, Facebook monitors app developers’ privacy safeguards, Citibank audits call centers for deceptive sales practices, and Exxon reviews offshore oil platforms’ environmental standards. Scholars have devoted significant attention to how policy makers deploy other private sector enforcers, such as certification bodies, accountants, lawyers, and other periphery “gatekeepers.” However, the literature has yet to explore the emerging regulatory conscription of large firms at the center of the economy. This Article examines the rise of the enforcer-firm through case studies of the industries that are home to the most valuable companies, in technology, banking, oil, and pharmaceuticals. Over the past two decades, administrative agencies have used legal rules, guidance documents, and court orders to mandate that private firms in these and other industries perform the duties of a public regulator. More specifically, firms must write rules in their contracts that reserve the right to inspect third parties. When they find violations, they must pressure or punish the wrongdoer. This form of governance has important intellectual and policy implications. It imposes more of a public duty on the firm, alters corporate governance, and may even reshape business organizations. It also gives resource-strapped regulators promising tools. If designed poorly, however, the enforcer-firm will create an expansive area of unaccountable authority. Any comprehensive account of the firm or regulation must give a prominent role to the administrative state’s newest gatekeepers.
Literature
Transparency
Coglianese C., Lehr D. (2019)
Transparency and Algorithmic Governance
Machine-learning algorithms are improving and automating important functions in medicine, transportation, and business. Government officials have also started to take notice of the accuracy and speed that such algorithms provide, increasingly relying on them to aid with consequential public-sector functions, including tax administration, regulatory oversight, and benefits administration. Despite machine-learning algorithms’ superior predictive power over conventional analytic tools, algorithmic forecasts are difficult to understand and explain. Machine learning’s “black-box” nature has thus raised concern: Can algorithmic governance be squared with legal principles of governmental transparency? We analyze this question and conclude that machine-learning algorithms’ relative inscrutability does not pose a legal barrier to their responsible use by governmental authorities. We distinguish between principles of “fishbowl transparency” and “reasoned transparency,” explaining how both are implicated by algorithmic governance but also showing that neither conception compels anything close to total transparency. Although machine learning’s black-box features distinctively implicate notions of reasoned transparency, legal demands for reason-giving can be satisfied by explaining an algorithm’s purpose, design, and basic functioning. Furthermore, new technical advances will only make machine-learning algorithms increasingly explainable. Algorithmic governance can meet both legal and public demands for transparency while still enhancing accuracy, efficiency, and even potentially legitimacy in government.
Literature
Artificial Intelligence and new technologies regulation
Haney B. S. (2019)
The Perils and Promises of Artificial General Intelligence
Artificial General Intelligence (“AGI”), an Artificial Intelligence (“AI”) capable of achieving any goal, is the greatest existential threat humanity faces. Indeed, the questions surrounding the regulation of AGI are the most important the millennial generation will answer. The capabilities of current AI systems are evolving at accelerating rates. Legislators and scholars, however, have yet to identify or address critical issues relating to AI regulation. Instead, they have focused narrowly on short-term AI policy. This paper takes a contrarian approach to analyzing AI regulation, with a specific emphasis on deep reinforcement learning systems, a relatively recent breakthrough in AI technology. Additionally, this paper identifies three important regulatory issues legislators and scholars need to address in the context of AI development. AI and legal scholars have made clear the pressing need for an AI regulatory system. However, those arguments focus on the regulation of current AI systems and generally ignore or dismiss the possibility of AGI. Further, previous scholarship has yet to grapple specifically with the regulation of deep reinforcement learning systems, which many AI scholars argue provide a direct path to AGI. Ultimately, legislators must consider and address the perils and promises of AGI when developing and evolving AI regulatory frameworks.
Literature
Artificial Intelligence and new technologies regulation
Finck M. (2019)
Automated Decision-Making and Administrative Law
Over the past few years, there has been much discussion regarding the potential of automated decision-making (‘ADM’) systems powered by mechanisms of computational intelligence such as machine learning or deep learning (commonly referred to as ‘Artificial Intelligence’ or ‘AI’). To date, such forms of (big) data analysis are most prominently relied on in the private sector, for example in the search algorithms used by online search engines or the recommendation algorithms used by e-commerce and entertainment platforms. These forms of data analysis in essence offer three main benefits, namely the speed and efficiency of decision-making as well as an ability to detect correlations that may be undetectable to the human brain. The efficiency, speed and correlations offered by these forms of data analytics are also appealing in the public sector. Indeed, various products of computational learning are already being used in administrative processes and will likely become much more prominent in future years. While these techniques offer important potential benefits, they have also been a cause for concern. Indeed, the use of ADM in administrative settings raises numerous important legal and ethical challenges. This paper introduces these new elements of the administrative toolbox and surveys related consequences, in particular possible implications for the principle of transparency.
Literature
Cost-benefit analysis
Cecot C. (2019)
Deregulatory Cost-Benefit Analysis and Regulatory Stability
Cost-benefit analysis (“CBA”) has faced significant opposition during most of its tenure as an influential agency decisionmaking tool. As advancements have been made in CBA practice, especially in more complete monetization of relevant effects, CBA has been gaining acceptance as an essential part of reasoned agency decisionmaking. When carefully conducted, CBA promotes transparency and accountability, efficient and predictable policies, and targeted retrospective review. This Article highlights an underappreciated additional effect of extensive use of CBA to support agency rulemaking: reasonable regulatory stability. In particular, a regulation based on a well-supported CBA is more difficult to modify for at least two reasons. The first reason relates to judicial review. Courts take a “hard look” at agency findings of fact, which are summarized in a CBA, and they require justifications when an agency changes course in ways that contradict its previous factfinding. A prior CBA provides a powerful reference point; any updated CBA supporting a new course of action will naturally be compared against the prior CBA, and the agency will need to explain any changes in CBA inputs, assumptions, and methodology. The second reason relates to the nature of CBA. By focusing on the incremental costs and benefits of a proposed change, CBA can make it difficult for an agency to justify changing course, especially when stakeholders have already relied on the prior policy. Together, these forces constrain the range of changes that agencies could rationally support. CBA thus promotes regulatory stability around transparent and increasingly efficient policies. But, admittedly, this CBA-based stabilizing influence gives rise to several objections. This Article responds to, among others, concerns about democratic accountability and, most importantly, the use of alternative methods of policy modification. Overall, the Article concludes that CBA and judicial review of CBA play a desirable role in stabilizing regulatory policy across presidential administrations.
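The stabilizing mechanism described above turns on CBA’s incremental framing. As a purely illustrative sketch (the notation is assumed here for exposition and is not drawn from the Article), a deregulatory action is evaluated against the status quo by discounting the stream of benefits and costs of the proposed change:

```latex
% Illustrative incremental net-present-value comparison (all symbols are assumptions, not the Article's):
%   B_t, C_t : annual benefits and costs of the proposed change relative to the existing rule
%   r        : discount rate;  T : analysis horizon
\[
  \Delta \mathrm{NPV} \;=\; \sum_{t=0}^{T} \frac{B_t - C_t}{(1+r)^{t}}
\]
% Because repeal forgoes the benefits the prior CBA already quantified, those forgone benefits
% enter as costs of the change, making a positive \Delta NPV harder to demonstrate.
```

On this framing, a prior well-supported CBA supplies the baseline figures that any later, deregulatory analysis must either adopt or expressly justify departing from, which is the source of the stability the Article describes.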
Literature
Better Regulation
Coglianese C., Walters D. (2019)
Whither the Regulatory 'War on Coal'? Scapegoats, Saviors, and Stock Market Reactions
Complaints about excessive economic burdens associated with regulation abound in contemporary political and legal rhetoric. In recent years, perhaps nowhere have these complaints been heard as loudly as in the context of regulations targeting the use of coal as an energy source, as production levels in the coal industry dropped by nearly half between 2008 and 2016. The coal industry and its political supporters, including the President of the United States, have argued that a suite of air pollution regulations imposed by the U.S. Environmental Protection Agency (EPA) during the Obama Administration seriously undermined coal companies’ bottom lines, presenting an existential threat to the industry. Under the Trump Administration, industry players have lobbied hard for (and sometimes received) financial subsidies and regulatory changes, with the President seemingly all too happy to play the role of the industry’s savior. Stepping back, we ask whether regulations have really led to the decline in demand for coal and how much the coal industry can actually expect to gain from the de-regulatory policies of the current Administration. To address these questions, we statistically analyze stock market reactions to important events in what critics called the regulatory “war on coal” during the Obama Administration. Using an event-study framework that measures abnormal market activity in the immediate wake of these events, we are able to isolate any potential impact of regulation above and beyond market factors, such as secular trends in natural gas prices and market performance as a whole. Surprisingly, we find no systematic evidence consistent with a regulatory “war on coal” based on investor assessments of the industry’s financial prospects, even though our methods do find evidence of stock market reactions to other events, such as bankruptcies of other companies. The very actors with financial stakes in understanding the impact of regulation on the coal industry never bought into the regulatory “war on coal” narrative. Our findings are consistent with other evidence about the effects of regulation and with an underlying political economy of regulatory scapegoating, according to which actors in a declining industry prefer to blame regulation rather than competitive factors for the decline. By recognizing the pervasive incentives for scapegoating and cheap talk by politicians seeking to be saviors, we explain the mismatch between the evidence and the rhetoric of the “war on coal,” and along the way we also show how important it is for courts, government officials, and the public to demand careful analysis and evidence before agencies make regulatory decisions.
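To make the event-study logic concrete, the following is a minimal sketch, not the authors’ specification: the function name, data layout, and window lengths are all assumptions. It fits a simple market model over a pre-event period and then sums abnormal returns (observed returns minus the model’s prediction) over a short window around a regulatory announcement; cumulative abnormal returns near zero across firms and events are the kind of null result consistent with the paper’s finding.

```python
# Illustrative event-study sketch (assumed data layout and windows; not the authors' code).
import numpy as np
import pandas as pd

def cumulative_abnormal_return(stock: pd.Series, market: pd.Series,
                               event_day: pd.Timestamp,
                               estimation_days: int = 120,
                               event_window_days: int = 3) -> float:
    """Cumulative abnormal return (CAR) around event_day for one firm,
    using the market model r_i = a + b * r_m."""
    returns = pd.concat([stock, market], axis=1, keys=["r_i", "r_m"]).dropna().sort_index()

    # Estimation period: days that end before the event window opens.
    window_start = event_day - pd.Timedelta(days=event_window_days)
    estimation = returns.loc[:window_start - pd.Timedelta(days=1)].tail(estimation_days)

    # Fit the market model by ordinary least squares (slope b, intercept a).
    b, a = np.polyfit(estimation["r_m"], estimation["r_i"], deg=1)

    # Abnormal return = observed return minus the market model's prediction.
    event = returns.loc[window_start:event_day + pd.Timedelta(days=event_window_days)]
    abnormal = event["r_i"] - (a + b * event["r_m"])
    return float(abnormal.sum())
```

Averaging such CARs across coal firms and testing whether they differ significantly from zero is the standard way to ask whether an announcement moved investor expectations beyond market-wide factors.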