Artificial Intelligence and Law Enforcement - Impact on Fundamental Rights

This study, commissioned by the European Parliament’s Policy Department for Citizens’ Rights and Constitutional Affairs at the request of the LIBE Committee, examines the impact on fundamental rights of Artificial Intelligence in the field of law enforcement and criminal justice, from a European Union perspective. It presents the applicable legal framework (notably in relation to data protection), and analyses major trends and key policy discussions. The study also considers developments following the Covid-19 outbreak. It argues that the seriousness and scale of challenges may require intervention at EU level, based on the acknowledgement of the area’s specificities.
Category
Year
Author
European Parliament Research Service

Getting the future right – Artificial intelligence and fundamental rights

Artificial intelligence (AI) already plays a role in deciding what unemployment benefits someone gets, where a burglary is likely to take place, whether someone is at risk of cancer, or who sees that catchy advertisement for low mortgage rates. Its use keeps growing, presenting seemingly endless possibilities. But we need to make sure to fully uphold fundamental rights standards when using AI. This report presents concrete examples of how companies and public administrations in the EU are using, or trying to use, AI. It focuses on four core areas – social benefits, predictive policing, health services and targeted advertising.
Category
Year
Author
European Union Agency for Fundamental Rights

Deploying Machine Learning for a Sustainable Future

To meet the environmental challenges of a warming planet and an increasingly complex, high-tech economy, government must become smarter about how it makes policies and deploys its limited resources. It specifically needs to build a robust capacity to analyze large volumes of environmental and economic data by using machine-learning algorithms to improve regulatory oversight, monitoring, and decision-making. Three challenges can be expected to drive the need for algorithmic environmental governance: more problems, less funding, and growing public demands. This paper explains why algorithmic governance will prove pivotal in meeting these challenges, but it also presents four likely obstacles that environmental agencies will need to surmount if they are to take full advantage of big data and predictive analytics. First, agencies must invest in upgrading their information technology infrastructure to take advantage of computational advances. Relatively modest technology investments, if made wisely, could support the use of algorithmic tools that could yield substantial savings in other administrative costs. Second, agencies will need to confront emerging concerns about privacy, fairness, and transparency associated with their reliance on big data and algorithmic analyses. Third, government agencies will need to strengthen their human capital so that they have personnel who understand how to use machine learning responsibly. Finally, to work well, algorithms need clearly defined objectives. Environmental officials will need to continue to engage with elected officials, members of the public, environmental groups, and industry representatives to forge clarity and consistency over how various risk and regulatory objectives should be specified in machine-learning tools. Overall, with thoughtful planning, adequate resources, and responsible management, governments should be able to overcome the obstacles that stand in the way of using artificial intelligence to improve environmental sustainability. If policy makers and the public recognize the need for smarter governance, they can start to tackle the obstacles that stand in its way and better position society for a more sustainable future.
Category
Year
Author
Coglianese C.

Ethical and societal implications of algorithms, data, and artificial intelligence: a roadmap for research

Category
Year
Author
Nuffield Foundation

Opportunities of Artificial Intelligence

This study presents the state of play of AI in the EU in terms of technology, impact and regulation, as compared with key competitor countries. It also highlights industrial areas in which AI will bring significant socioeconomic benefits, before presenting a methodology for scrutinising the fitness of the EU policy and regulatory framework in the context of AI. This document was provided by the Policy Department for Economic, Scientific and Quality of Life Policies at the request of the Committee on Industry, Research and Energy (ITRE).
Category
Year
Author
European Parliament

Algorithmic Accountability in the Administrative State

How will artificial intelligence (AI) transform government? Stemming from a major study commissioned by the Administrative Conference of the United States (ACUS), we highlight the promise and trajectory of algorithmic tools used by federal agencies to perform the work of governance. Moving past the abstract mappings of transparency measures and regulatory mechanisms that pervade the current algorithmic accountability literature, our analysis centers on a detailed technical account of a pair of current applications that exemplify AI's move to the center of the redistributive and coercive power of the state: the Social Security Administration's use of AI tools to adjudicate disability benefits cases and the Securities and Exchange Commission's use of AI tools to target enforcement efforts under federal securities law. We argue that the next generation of work will need to push past a narrow focus on constitutional law and instead engage with the broader terrain of administrative law, which is far more likely to modulate use of algorithmic governance tools going forward. We demonstrate the shortcomings of conventional ex ante and ex post review under current administrative law doctrines and then consider how administrative law might adapt in response. Finally, we ask how to build a sensible accountability structure around public sector use of algorithmic governance tools while maintaining incentives and opportunities for salutary innovation. Reviewing and rejecting commonly offered solutions, we propose a novel approach to oversight centered on prospective benchmarking. By requiring agencies to reserve a random set of cases for manual decision making, benchmarking offers a concrete and accessible test of the validity and legality of machine outputs, enabling agencies to compare algorithmic outputs against human decision making.
Category
Year
Author
Freeman Engstrom D. and Ho D. E.

Government by Algorithm: Artificial Intelligence in Federal Administrative Agencies

Artificial intelligence (AI) promises to transform how government agencies do their work. Rapid developments in AI have the potential to reduce the cost of core governance functions, improve the quality of decisions, and unleash the power of administrative data, thereby making government performance more efficient and effective. Agencies that use AI to realize these gains will also confront important questions about the proper design of algorithms and user interfaces, the respective scope of human and machine decision-making, the boundaries between public actions and private contracting, their own capacity to learn over time using AI, and whether the use of AI is even permitted.
These are important issues for public debate and academic inquiry. Yet little is known about how agencies are currently using AI systems beyond a few headline-grabbing examples or surface-level descriptions. Moreover, even amidst growing public and scholarly discussion about how society might regulate government use of AI, little attention has been devoted to how agencies acquire such tools in the first place or oversee their use. In an effort to fill these gaps, the Administrative Conference of the United States (ACUS) commissioned this report from researchers at Stanford University and New York University. The research team included a diverse set of lawyers, law students, computer scientists, and social scientists with the capacity to analyze these cutting-edge issues from technical, legal, and policy angles. The resulting report offers three cuts at federal agency use of AI:
(i) a rigorous canvass of AI use at the 142 most significant federal departments, agencies, and sub-agencies (Part I)
(ii) a series of in-depth but accessible case studies of specific AI applications at eight leading agencies (SEC, CBP, SSA, USPTO, FDA, FCC, CFPB, USPS) covering a range of governance tasks (Part II); and
(iii) a set of cross-cutting analyses of the institutional, legal, and policy challenges raised by agency use of AI (Part III).
Category
Year
Author
Freeman Engstrom D. et al.

Examining the Black Box: Tools for Assessing Algorithmic Systems

As algorithmic systems become more critical to decision making across many parts of society, there is increasing interest in how they can be scrutinised and assessed for societal impact, and regulatory and normative compliance.
This report is primarily aimed at policymakers, to inform more accurate and focused policy conversations. It may also be helpful to anyone who creates, commissions or interacts with an algorithmic system and wants to know what methods or approaches exist to assess and evaluate that system.
Category
Year
Author
Ada Lovelace Institute and DataKind UK