The recent actions taken by the Italian Data Protection Authority (Garante) against ChatGPT, a generative artificial intelligence system owned by OpenAI, prompt reflections on the role and ability of public authorities to support innovation while simultaneously protecting citizens. The Authority's interventions shed light on the impact of an outdated regulatory framework, the European General Data Protection Regulation, on regulatory delivery, which undermines the effectiveness of these measures in achieving their intended goals. Nor does the proposed European Regulation on Artificial Intelligence, with its rigid approach, offer a definitive solution; what is needed instead is a shift in the regulatory paradigm underlying public intervention.
These two volumes collect twenty-five articles and papers published within the “Governance of/through Data” research project financed by the Italian Ministry of Universities. The project, promoted by Roma Tre University as project lead, with the participation of professors and researchers from Bocconi University in Milan, LUMSA University in Rome, Salento University in Lecce, and Turin Polytechnic, covers multiple issues, presented here in five sections: Algorithms and artificial intelligence; Antitrust, artificial intelligence and data; Big Data; Data governance; Data protection and privacy.
The AI Index is an independent initiative at the Stanford Institute for Human-Centered Artificial Intelligence (HAI), led by the AI Index Steering Committee, an interdisciplinary group of experts from across academia and industry. The annual report tracks, collates, distills, and visualizes data relating to artificial intelligence, enabling decision-makers to take meaningful action to advance AI responsibly and ethically with humans in mind.
The AI Index collaborates with many different organizations to track progress in artificial intelligence. These organizations include: the Center for Security and Emerging Technology at Georgetown University, LinkedIn, NetBase Quid, Lightcast, and McKinsey. The 2023 report also features more self-collected data and original analysis than ever before. This year’s report included new analysis on foundation models, including their geopolitics and training costs, the environmental impact of AI systems, K-12 AI education, and public opinion trends in AI. The AI Index also broadened its tracking of global AI legislation from 25 countries in 2022 to 127 in 2023.
Machine learning, or artificial intelligence, refers to a vast array of different algorithms that are being put to highly varied uses, including in transportation, medicine, social media, marketing, and many other settings. Not only do machine-learning algorithms vary widely across their types and uses, but they are evolving constantly. Even the same algorithm can perform quite differently over time as it is fed new data. Due to the staggering heterogeneity of these algorithms, multiple regulatory agencies will be needed to regulate the use of machine learning, each within their own discrete area of specialization. Even these specialized expert agencies, though, will still face the challenge of heterogeneity and must approach their task of regulating machine learning with agility. They must build up their capacity in data sciences, deploy flexible strategies such as management-based regulation, and remain constantly vigilant. Regulators should also consider how they can use machine-learning tools themselves to enhance their ability to protect the public from the adverse effects of machine learning. Effective regulatory governance of machine learning should be possible, but it will depend on the constant pursuit of regulatory excellence.
In this report, the National Board of Trade Sweden highlights the challenges that digital innovation creates for technical regulation. The report examines whether artificial intelligence and cyber vulnerabilities are changing the way industrial goods with embedded digital technologies, such as smartphones, medical devices, and vehicles, are regulated.