Publications

Literature
Better Regulation
Van Loo R. (2019)
The New Gatekeepers: Private Firms as Public Enforcers
The world’s largest businesses must routinely police other businesses. By public mandate, Facebook monitors app developers’ privacy safeguards, Citibank audits call centers for deceptive sales practices, and Exxon reviews offshore oil platforms’ environmental standards. Scholars have devoted significant attention to how policy makers deploy other private sector enforcers, such as certification bodies, accountants, lawyers, and other peripheral “gatekeepers.” However, the literature has yet to explore the emerging regulatory conscription of large firms at the center of the economy. This Article examines the rise of the enforcer-firm through case studies of the industries that are home to the most valuable companies, in technology, banking, oil, and pharmaceuticals. Over the past two decades, administrative agencies have used legal rules, guidance documents, and court orders to mandate that private firms in these and other industries perform the duties of a public regulator. More specifically, firms must write rules into their contracts that reserve the right to inspect third parties. When they find violations, they must pressure or punish the wrongdoer. This form of governance has important intellectual and policy implications. It imposes more of a public duty on the firm, alters corporate governance, and may even reshape business organizations. It also gives resource-strapped regulators promising tools. If designed poorly, however, the enforcer-firm will create an expansive area of unaccountable authority. Any comprehensive account of the firm or regulation must give a prominent role to the administrative state’s newest gatekeepers.
Documents
Better Regulation
Finnish Government (2019)
Framework for innovation-friendly regulation
Radical innovations and breakthrough technologies are desperately needed to solve today’s difficult societal challenges, such as those created by climate change or ageing demographics. However, addressing complex societal challenges requires elaborate systemic planning, determined investments and, often, visionary and brave decisions by legislators and regulators. While radical innovations may bring much-needed economic benefits and solutions to pressing societal challenges, they can also generate new risks and ethical dilemmas. Hence, today’s legislators face difficult questions in trying to design an optimal legal framework: one that leaves sufficient space for, and encourages, new solutions, but at the same time ensures safe conditions and fair benefits for everyone. In light of the above, increased attention is being paid to developing innovation-friendly regulatory approaches and practices. The introduction of the European Commission’s Innovation Principle, as well as several national initiatives (such as regulatory sandboxes and regulation roadmaps), are good examples of this development. So far, there has been neither a common definition nor a comprehensive framework for grasping the different aspects of innovation-friendly regulatory approaches and practices. Developing such a framework has been one of the main objectives of the Finnish government-commissioned study “Impacts of regulation on innovation and new markets”. This Policy Brief presents first findings and introduces a draft framework for innovation-friendly regulation.
Documents
Artificial Intelligence and new technologies regulation
NESTA (2019)
Decision-making in the Age of the Algorithm
Frontline practitioners in the public sector – from social workers to police to custody officers – make important decisions every day about people’s lives. Operating in a sector grappling with rising demand and diminishing resources, frontline practitioners are being asked to make very important decisions quickly and with limited information. To do this, public sector organisations are turning to new technologies to support decision-making, in particular predictive analytics tools, which use machine learning algorithms to discover patterns in data and make predictions. While many guides exist on ethical AI design, there is little guidance on how to support productive human-machine interaction. This report aims to fill that gap by focusing on the issue of human-machine interaction. How people work with these tools is significant because, simply put, for predictive analytics tools to be effective, frontline practitioners need to use them well. It encourages public sector organisations to think about how people feel about predictive analytics tools – what they’re fearful of, what they’re excited about, what they don’t understand. Based on insights drawn from an extensive literature review, interviews with frontline practitioners, and discussions with experts across a range of fields, the guide also identifies three key principles that play a significant role in supporting a constructive human-machine relationship: context, understanding, and agency.
Literature
Transparency
Coglianese C., Lehr D. (2019)
Transparency and Algorithmic Governance
Machine-learning algorithms are improving and automating important functions in medicine, transportation, and business. Government officials have also started to take notice of the accuracy and speed that such algorithms provide, increasingly relying on them to aid with consequential public-sector functions, including tax administration, regulatory oversight, and benefits administration. Despite machine-learning algorithms’ superior predictive power over conventional analytic tools, algorithmic forecasts are difficult to understand and explain. Machine learning’s “black-box” nature has thus raised concern: Can algorithmic governance be squared with legal principles of governmental transparency? We analyze this question and conclude that machine-learning algorithms’ relative inscrutability does not pose a legal barrier to their responsible use by governmental authorities. We distinguish between principles of “fishbowl transparency” and “reasoned transparency,” explaining how both are implicated by algorithmic governance but also showing that neither conception compels anything close to total transparency. Although machine learning’s black-box features distinctively implicate notions of reasoned transparency, legal demands for reason-giving can be satisfied by explaining an algorithm’s purpose, design, and basic functioning. Furthermore, new technical advances will only make machine-learning algorithms increasingly explainable. Algorithmic governance can meet both legal and public demands for transparency while still enhancing accuracy, efficiency, and even potentially legitimacy in government.
Literature
Artificial Intelligence and new technologies regulation
Haney B.S. (2019)
The Perils and Promises of Artificial General Intelligence
Artificial General Intelligence (“AGI”) - an Artificial Intelligence ("AI") capable of achieving any goal - is the greatest existential threat humanity faces. Indeed, the questions surrounding the regulation of AGI are the most important the millennial generation will answer. The capabilities of current AI systems are evolving at accelerating rates. Nevertheless, legislators and scholars have yet to address or identify critical issues relating to AI regulation. Instead, legislators and scholars have focused narrowly on short-term AI policy. This paper takes a contrarian approach to analyzing AI regulation, with a specific emphasis on deep reinforcement learning systems, a relatively recent breakthrough in AI technology. Additionally, this paper identifies three important regulatory issues legislators and scholars need to address in the context of AI development. AI and legal scholars have made the pressing need for an AI regulatory system clear. However, those arguments focus on the regulation of current AI systems and generally ignore or dismiss the possibility of AGI. Further, previous scholarship has yet to grapple specifically with the regulation of deep reinforcement learning systems, which many AI scholars argue provide a direct path to AGI. Ultimately, legislators must consider and address the perils and promises of AGI when developing and evolving AI regulatory frameworks.
Literature
Artificial Intelligence and new technologies regulation
Finck M. (2019)
Automated Decision-Making and Administrative Law
Over the past few years, there has been much discussion regarding the potential of automated decision-making (‘ADM’) systems powered by mechanisms of computational intelligence such as machine learning or deep learning (commonly referred to as ‘Artificial Intelligence’ or ‘AI’). To date, such forms of (big) data analysis are most prominently relied on by the private sector, as in the search algorithms used by online search engines or the recommendation algorithms used by e-commerce and entertainment platforms. These forms of data analysis in essence offer three main benefits, namely the speed and efficiency of decision-making as well as an ability to detect correlations that may be undetectable to the human brain. The efficiency, speed and correlations offered by these forms of data analytics are also appealing in the public sector. Indeed, various products of computational learning are already being used in administrative processes and will likely become much more prominent in future years. Whereas these techniques offer important potential benefits, they have also been a cause of concern. Indeed, the use of ADM in administrative settings raises numerous important legal and ethical challenges. This paper introduces these new elements in the administrative toolbox and surveys related consequences, in particular possible implications for the principle of transparency.
Literature
Cost-benefit analysis
Cecott C. (2019)
Deregulatory Cost-Benefit Analysis and Regulatory Stability
Cost-benefit analysis (“CBA”) has faced significant opposition during most of its tenure as an influential agency decisionmaking tool. As advancements have been made in CBA practice, especially in more complete monetization of relevant effects, CBA has been gaining acceptance as an essential part of reasoned agency decisionmaking. When carefully conducted, CBA promotes transparency and accountability, efficient and predictable policies, and targeted retrospective review. This Article highlights an underappreciated additional effect of extensive use of CBA to support agency rulemaking: reasonable regulatory stability. In particular, a regulation based on a well-supported CBA is more difficult to modify for at least two reasons. The first reason relates to judicial review. Courts take a “hard look” at agency findings of fact, which are summarized in a CBA, and they require justifications when an agency changes course in ways that contradict its previous factfinding. A prior CBA provides a powerful reference point; any updated CBA supporting a new course of action will naturally be compared against the prior CBA, and the agency will need to explain any changes in CBA inputs, assumptions, and methodology. The second reason relates to the nature of CBA. By focusing on the incremental costs and benefits of a proposed change, CBA can make it difficult for an agency to justify changing course, especially when stakeholders have already relied on the prior policy. Together, these forces constrain the range of changes that agencies could rationally support. CBA thus promotes regulatory stability around transparent and increasingly efficient policies. But, admittedly, this CBA-based stabilizing influence gives rise to several objections. This Article responds to, among others, concerns about democratic accountability and, most importantly, the use of alternative methods of policy modification. Overall, the Article concludes that CBA and judicial review of CBA play a desirable role in stabilizing regulatory policy across presidential administrations.
Literature
Better Regulation
Coglianese C., Walters D. (2019)
Whither the Regulatory 'War on Coal'? Scapegoats, Saviors, and Stock Market Reactions
Complaints about excessive economic burdens associated with regulation abound in contemporary political and legal rhetoric. In recent years, perhaps nowhere have these complaints been heard as loudly as in the context of regulations targeting the use of coal as an energy source, as production levels in the coal industry dropped by nearly half between 2008 and 2016. The coal industry and its political supporters, including the President of the United States, have argued that a suite of air pollution regulations imposed by the U.S. Environmental Protection Agency (EPA) during the Obama Administration seriously undermined coal companies’ bottom lines, presenting an existential threat to the industry. Under the Trump Administration, industry players have lobbied hard for (and sometimes received) financial subsidies and regulatory changes, with the President seemingly all too happy to play the role of the industry’s savior. Stepping back, we ask whether regulations have really led to the decline in demand for coal and how much the coal industry can actually expect to gain from the de-regulatory policies of the current Administration. To address these questions, we statistically analyze stock market reactions to important events in what critics called the regulatory “war on coal” during the Obama Administration. Using an event-study framework that measures abnormal market activity in the immediate wake of these events, we are able to isolate any potential impact of regulation above and beyond market factors, such as secular trends in natural gas prices and market performance as a whole. Surprisingly, we find no systematic evidence consistent with a regulatory “war on coal” based on investor assessments of the industry’s financial prospects, even though our methods do find evidence of stock market reactions to other events, such as bankruptcies of other companies. The very actors with financial stakes in understanding the impact of regulation on the coal industry never bought into the regulatory “war on coal” narrative. Our findings are consistent with other evidence about the effects of regulation and with an underlying political economy of regulatory scapegoating, according to which actors in a declining industry prefer to blame regulation rather than competitive factors for the decline. By recognizing the pervasive incentives for scapegoating and cheap talk by politicians seeking to be saviors, we explain the mismatch between the evidence and the rhetoric of the “war on coal,” and along the way we also show how important it is for courts, government officials, and the public to demand careful analysis and evidence before agencies make regulatory decisions.