Publications

Literature
Cost-benefit analysis
Carrigan C.; Febrizio M; Shapiro S. (2020)
Regulating Agencies: Using Regulatory Instruments as a Pathway to Improve Benefit-Cost Analysis
Scholars of regulation generally view the procedures that agencies must follow when promulgating rules as instruments by which political principals control bureaucratic agents. Much as political principals attempt to use procedural checks to constrain regulatory agencies' actions, these same agencies employ various regulatory instruments to influence the decisions of private agents, especially firms. Despite the parallel nature of these principal-agent problems, few studies, if any, have examined whether lessons from one can be used to inform the other. In this paper, we draw analogies between benefit-cost analysis (BCA), a procedural control employed in the regulatory process, and three regulatory instruments that have similarities to BCA: performance standards, information disclosure requirements, and management-based regulation. We use lessons from research on the effectiveness of regulatory instruments to make predictions about the efficacy of BCA in various situations. Just as different regulatory instruments are appropriate for different regulatory contexts, the pathways by which BCA attempts to encourage better regulation may not all be applicable in every circumstance. We argue that such mutual exclusivity should inform how requirements for BCA are designed, and that BCA's emphasis on systematic analysis, the pathway most closely resembling management-based regulation, may offer the most promise for encouraging better rules.
Documents
Digital markets
AGCM, AGCOM, Garante Privacy (2020)
Survey on Big Data
Following an intense and productive collaboration, the final report of the fact-finding survey on Big Data, conducted jointly by the Autorità per le Garanzie nelle Comunicazioni (AGCOM), the Autorità Garante della Concorrenza e del Mercato (AGCM), and the Garante per la Protezione dei Dati Personali, was published today. From three distinct and complementary perspectives, and drawing on hearings and requests for information addressed to firms, trade associations, and subject-matter experts, the survey examined the changes that Big Data bring about for the users who supply the data, for the companies that use them, and thus for markets. It also sought to fully capture the possible synergies among the three Authorities and to identify the most appropriate tools for any future interventions. In recent years, data have become increasingly important in the organization of production and exchange, to the point that they can be regarded not only as the projection of the person into the digital world but also as a fully fledged economic resource, indeed by far the most important resource in many sectors. Thanks to advances in Information and Communication Technology (ICT), organizations tend to collect data of every kind, process them in real time to improve their decision-making, and store them permanently so that they can be reused in the future or mined for new knowledge. Data creation is following an exponential path: in 2018 the total volume of data created worldwide was 28 zettabytes (ZB), more than a tenfold increase over 2011, and the overall volume of data is expected to reach 163 ZB by 2025. This expansion, driven by the rise of online platforms, will accelerate further with the connection of objects and 5G applications.
This context raises new challenges: the centrality of data, both as an economic good and as an object whose protection is a fundamental right of the person; the impact of algorithmic profiling and online platforms on the degree of competition in old and new relevant markets; the effect of programmatic advertising on the quality of information and on how it is disseminated and acquired; the protection and promotion of online pluralism in an information environment exposed to disinformation and hate speech strategies; the need to guarantee consumers transparency and effective choices, with particular attention to the protection of minors, regarding consent to the use of their data; the protection of personal data in areas not currently covered by the GDPR; and the definition of education policies on the use of data. The survey is organized into five chapters plus a concluding chapter. Chapter 1 introduces the topics of the survey and provides a definition and description of the characteristics of Big Data. Chapter 2 reports the main issues that emerged from the hearings and from the contributions of the survey's participants, along with their implications for the operations of Italian firms. Chapter 3 presents AGCOM's considerations on how the Big Data phenomenon affects the electronic communications and media sector. Chapter 4 presents the considerations of the Garante per la Protezione dei Dati Personali on the possible impact of Big Data on the right to personal data protection and on the measures and safeguards to be adopted; Chapter 5 presents those of the AGCM on the use of Big Data and the related antitrust and consumer-protection implications. Finally, the concluding chapter sets out the policy guidelines and recommendations addressed to the legislator.
Among these is the commitment by the three Authorities to establish a permanent cooperation mechanism covering both interventions and the study of the impact of Big Data on businesses, consumers, and citizens.
Literature
Artificial Intelligence and new technologies regulation
Mulligan D. K.; Bamberger K. A. (2019)
Procurement As Policy: Administrative Process for Machine Learning
At every level of government, officials contract for technical systems that employ machine learning—systems that perform tasks without using explicit instructions, relying on patterns and inference instead. These systems frequently displace discretion previously exercised by policymakers or individual front-line government employees with an opaque logic that bears no resemblance to the reasoning processes of agency personnel. However, because agencies acquire these systems through government procurement processes, they and the public have little input into—or even knowledge about—their design or how well that design aligns with public goals and values. This Article explains the ways that the decisions about goals, values, risk, and certainty, along with the elimination of case-by-case discretion, inherent in machine-learning system design create policies—not just once when they are designed, but over time as they adapt and change. When the adoption of these systems is governed by procurement, the policies they embed receive little or no agency or outside expertise beyond that provided by the vendor. Design decisions are left to private third-party developers. There is no public participation, no reasoned deliberation, and no factual record, which abdicates government responsibility for policymaking. This Article then argues for a move from a procurement mindset to a policymaking mindset. When policy decisions are made through system design, processes suitable for substantive administrative determinations should be used: processes that foster deliberation reflecting both technocratic demands for reason and rationality informed by expertise, and democratic demands for public participation and political accountability. 
Specifically, the Article proposes administrative law as the framework to guide the adoption of machine learning governance, describing specific ways that the policy choices embedded in machine-learning system design fail the prohibition against arbitrary and capricious agency actions absent a reasoned decision-making process that both enlists the expertise necessary for reasoned deliberation about, and justification for, such choices, and makes visible the political choices being made. Finally, this Article sketches models for machine-learning adoption processes that satisfy the prohibition against arbitrary and capricious agency actions. It explores processes by which agencies might garner technical expertise and overcome problems of system opacity, satisfying administrative law’s technocratic demand for reasoned expert deliberation. It further proposes both institutional and engineering design solutions to the challenge of policymaking opacity, offering process paradigms to ensure the “political visibility” required for public input and political oversight. In doing so, it also proposes the importance of using “contestable design”—design that exposes value-laden features and parameters and provides for iterative human involvement in system evolution and deployment. Together, these institutional and design approaches further both administrative law’s technocratic and democratic mandates.
Documents
Competition advocacy
Autorité de la concurrence and Bundeskartellamt (2019)
Algorithms and Competition
Algorithms are among the most important technological drivers of the ongoing digitalization process. They are becoming more and more important, enabling firms to be more innovative and efficient. However, debate has arisen on whether and to what extent algorithms might also have detrimental effects on the competitive functioning of markets. In their joint conceptual project – Algorithms and Competition – the Autorité de la concurrence and the Bundeskartellamt studied potential competitive risks that might be associated with algorithms. They elaborated on the concept of algorithm as well as on different types and fields of application. In their study, the two authorities focused in particular on pricing algorithms and collusion, but also considered potential interdependencies between algorithms and the market power of the companies using them as well as practical challenges when investigating algorithms. Isabelle de Silva, President of the Autorité de la concurrence: “Algorithms are used constantly in the digital economy, and are at the very core of how some fast growing businesses operate: online travel agencies, e-commerce, online advertising, to name only a few. It is essential that we look into how these algorithms work. We need to determine if there is a risk that algorithms might facilitate or permit behaviours that are contrary to competition law. With this joint study with the Bundeskartellamt we aim at reaching a common view on these matters and at starting a debate with stakeholders.” Andreas Mundt, President of the Bundeskartellamt: “The joint study is another proof of the continuing cooperation between our agencies. As digital markets keep evolving, we expand our expertise on algorithms in an exchange with each other. 
This is in line with our efforts to devote more resources to the digital economy with the clear-cut aim to enforce competition law also in the era of platform economy and digital business models.” Algorithms and competition are also the topic of an accompanying conference hosted by the Autorité de la concurrence and the Bundeskartellamt that is taking place in Paris today. Several renowned speakers, including business representatives, researchers and competition enforcers, are discussing potential business applications for algorithms, pricing algorithms and the risk of horizontal collusion, as well as ways to address the challenges raised by algorithms.