Publications

Literature
Regulatory governance
Yeung K. (2016)
‘Hypernudge’: Big Data as a mode of regulation by design
This paper draws on regulatory governance scholarship to argue that the analytic phenomenon currently known as ‘Big Data’ can be understood as a mode of ‘design-based’ regulation. Although Big Data decision-making technologies can take the form of automated decision-making systems, this paper focuses on algorithmic decision-guidance techniques. By highlighting correlations between data items that would not otherwise be observable, these techniques are being used to shape the informational choice context in which individual decision-making occurs, with the aim of channelling attention and decision-making in directions preferred by the ‘choice architect’. By relying upon ‘nudge’, a particular form of choice architecture that alters people’s behaviour in a predictable way without forbidding any options or significantly changing their economic incentives, these techniques constitute a ‘soft’ form of design-based control. But, unlike the static nudges popularised by Thaler and Sunstein [(2008). Nudge. London: Penguin Books], such as placing the salad in front of the lasagne to encourage healthy eating, Big Data analytic nudges are extremely powerful and potent due to their networked, continuously updated, dynamic and pervasive nature (hence ‘hypernudge’). I adopt a liberal, rights-based critique of these techniques, contrasting liberal theoretical accounts on the one hand with selective insights from science and technology studies (STS) and surveillance studies on the other. I argue that concerns about the legitimacy of these techniques are not satisfactorily resolved through reliance on individual notice and consent, and I touch upon the troubling implications for democracy and human flourishing if Big Data analytic techniques driven by commercial self-interest continue their onward march unchecked by effective and legitimate constraints.
Literature
Behavioural regulation
Grüne-Yanoff T., Hertwig R. (2016)
Nudge versus boost: How coherent are policy and theory?
If citizens’ behavior threatens to harm others or seems not to be in their own interest (e.g., risking severe head injuries by riding a motorcycle without a helmet), it is not uncommon for governments to attempt to change that behavior. Governmental policy makers can apply established tools from the governmental toolbox to this end (e.g., laws, regulations, incentives, and disincentives). Alternatively, they can employ new tools that capitalize on the wealth of knowledge about human behavior and behavior change that has been accumulated in the behavioral sciences (e.g., psychology and economics). Two contrasting approaches to behavior change are nudge policies and boost policies. These policies rest on fundamentally different research programs on bounded rationality, namely, the heuristics and biases program and the simple heuristics program, respectively. This article examines the policy–theory coherence of each approach. To this end, it identifies the necessary assumptions underlying each policy and analyzes to what extent these assumptions are implied by the theoretical commitments of the respective research program. Two key results of this analysis are that the two policy approaches rest on diverging assumptions and that both suffer from disconnects with the respective theoretical program, but to different degrees: Nudging appears to be more adversely affected than boosting does. The article concludes with a discussion of the limits of the chosen evaluative dimension, policy–theory coherence, and reviews some other benchmarks on which policy programs can be assessed.
Literature
Better Regulation
Rangone N. (2016)
Techniques for Improving the Quality of Procedural Rules
The quality of procedural rules can significantly affect the quality of public decisions, an aim strongly promoted at both the European and the international level. Indeed, a well-designed decision-making process helps to produce general public decisions that are lawful, necessary, proportionate, consistent, well written and accessible to stakeholders. The quality of procedural rules is the result of a balance between the guarantees that must be provided for stakeholders, on the one hand, and the efficiency and effectiveness of decision-making processes, on the other. Undoubtedly, there can be some tension between these aims: the first calls for more procedural constraints, while the second raises the question of how extensive the decision-making procedure should be if it is to assure both cost-effectiveness and well-reasoned decisions. This issue is tackled by procedure-simplifying measures introduced by policies of administrative simplification (analysed in chapter 1 of this section). However, such a balance can hardly be struck in general terms or imposed through a top-down approach alone. On the contrary, it often emerges from well-designed decision-making processes with predetermined, identified steps, whose depth of analysis (and the reasons for it) must be made transparent and justified in the final decision. The quality of procedural rules can also be enhanced by specific techniques aimed at increasing the empirical evidence that enables better-informed public decisions (e.g. environmental impact assessment, competition assessment and risk assessment). These tools can be considered techniques for improving procedural rules because they are intended to increase awareness of the problem at stake and to protect public decisions from unintended consequences. However, while such techniques should enable evidence-based general decisions, this is not always the case, for reasons both intrinsic and extrinsic to these tools.
Literature
Impact assessment
Van Golen T., Van Voorst S. (2016)
Towards a Regulatory Cycle? The Use of Evaluative Information in Impact Assessments and Ex-Post Evaluations in the European Union
As part of its Better Regulation agenda, the European Commission increasingly stresses the link between different types of regulatory evaluations. Predictions made in Impact Assessments (IAs) could be verified during ex-post legislative evaluations, while ex-post evaluations could in turn recommend amendments to be studied in future IAs. This article combines a dataset of 309 ex-post legislative evaluations (2000-2014) with a dataset of 225 IAs of legislative updates (2003-2014) to show how many of the Commission's ex-post evaluations use IAs and vice versa. In this way, it explores whether the Commission's rhetoric of a ‘regulatory cycle’ holds up in practice. Building on the literature on evaluation use, we formulate the hypotheses that the timeliness, quality and focus of the IAs and evaluations are key explanations for use. Our results show that so far only ten ex-post evaluations have used IAs of EU legislation, while thirty-three IAs have used ex-post legislative evaluations. Using fuzzy-set Qualitative Comparative Analysis, we find that timeliness is a necessary condition for the use of ex-post evaluations by IAs, suggesting that for the regulatory cycle to function properly it is crucial to complete an ex-post evaluation before an IA is launched. Future research could repeat our analysis for evaluations of non-regulatory activities or study the causal mechanisms behind our findings.