In the midst of the intervention of the Italian Data Protection Authority, watched worldwide and with interest by other regulators, our series of essays on ChatGPT includes a contribution from Germana Lo Sapio, Judge at the Regional Administrative Court for Campania and expert on the simplification of administrative procedures for the Italian Department of Public Administration.
Lo Sapio provides us with important clarifications on the possible risks accompanying ChatGPT (its direct impact on 'knowledge and information workers' and the lack of clear reasoning behind the system's outputs), building on the risks traditionally linked to Artificial Intelligence systems (discriminatory biases, loss of human control, compromise of democratic principles).
The timing with respect to the investigation opened by the Italian DPA also allows us to hear her opinion on it, which she defines as out of time (since ChatGPT was launched in November 2022) and out of space (since issues like this cannot be resolved at the national level); the measure also exposes the inadequacy of existing regulations such as the GDPR.
In any case, the interview also allows us to focus on the opportunities, such as raising the general average of skills (including in the legal professions). It is important, however, to understand how ChatGPT could be used, how it is trained, by whom, and in which specific areas. Moreover, the current draft EU regulation might be too detailed compared to the flexibility that such an environment requires.
The potential way forward is: a) to address the lack of participation in the training process, including by ensuring multidisciplinarity; b) to accompany the rules, which alone have never changed the world, with concrete measures, based on human and material resources, to make them effective, including adequate investment in research; c) to seize ChatGPT as an opportunity to consciously take on the governance of the digital revolution that is before everyone's eyes and that is running at an unpredictable speed.
1. From your perspective and background, what are the risks (if any) that accompany ChatGPT and its possible applications?
ChatGPT reignited and possibly increased attention on the ethical and legal issues that have been affecting AI systems for years. I will focus first on the risks commonly associated with AI systems, and then on whether, and how, the risks associated with ChatGPT differ.
I will mention three widely accepted risks, but there are others.
A. The risk of discriminatory biases
Data used to train AI systems are collected from the past, and the past is laden with discriminatory biases, which are sedimented in our 'onlife' existence. Given the complexity of the value chain of AI systems (from producers, through developers, to professional users), it is also not easy to identify in which stage the deviation or error lurks.
The problem is not that discriminatory biases exist, but the multiplier effect triggered by computing power. Biases thus spread on a large scale, propagate from the individual to the collective, and become even more insidious since they hide under the veil of apparent objectivity resulting from the use of complex calculations.
Several tests have been done on ChatGPT to verify its resistance to discriminatory bias, and thousands of human reviewers are dedicated to preventing the answers given to prompts (the questions the user asks ChatGPT) from being ethically incorrect and discriminatory. Yet the risk of discriminatory bias remains common to other generative systems. For instance, if you ask DALL·E, OpenAI's image-generating system, to create an image of the CEO of a large multinational company, it will come up with a nice view of a modern office with a man at its centre: invariably male, white, in his forties, dressed in the style of an American businessman.
B. The risk of loss of human control.
At the current state of scientific research, it is impossible to understand how an input leads to an output. It is indeed impossible to understand why, to a given question, the system gives precisely that answer and not another, and what the reasoning behind it was. The difference between the human mind and the artificial 'mind' is ontological: AI systems are not designed and developed to say 'why', which is the basic question a learning child asks an adult, but to say 'what'. They do not give cause-and-effect answers, but look for correlations, statistical inferences, and probable sequences. They do not intuit the 'meaning' of the concepts used.
ChatGPT, based on the prompt posed by the user, generates texts that appear to be the result of reasoning, but are instead the result of the 'right sequence' of words. This lack of comprehension and reasoning is a serious handicap, especially in specific domains such as law, where language is based partly on common-sense words and partly on technical terms that encapsulate legal concepts, sometimes even using metaphors. In a chat between two humans, the context that illuminates the meaning of a word is immediately apparent. For ChatGPT, it has to be explained first, in the course of demanding and expensive training.
C. The risk of distorted uses of AI systems for the purposes of social surveillance or the undermining of the founding principles of democracy.
It is an issue that also emerged dramatically for social platforms using AI in the Facebook/Cambridge Analytica scandal, which erupted in 2018 and in which the data of 87 million Facebook users were used, via the Thisisyourdigitallife app.
ChatGPT's ability to generate texts of all kinds that appear to be produced by a human being entails a real risk that false news will circulate on the web, aimed at distorting reality, flattening public debate onto radical ideas, and lowering users' ability to distinguish what is false from what is true. This touches the heart of democratic systems, which are founded on the freedom of expression. In 2020, The Guardian published an editorial written using GPT-3. Had there not been the statement in the margin that the editorial had been written with the help of an AI, no reader would have noticed. At the time, however, GPT-3 could only be used by selected experts, and the discussion of such systems did not have the planetary dimension it has today.
The risk of deepfakes is now there for all to see, as happened a few days ago with the photo circulated on the web of the pope wearing a white puffer jacket. Since values and the democratic method are at stake, it is therefore necessary to find concrete measures to safeguard against such risks without blocking technological evolution. And such measures can only come from authorities that have a high level of legitimacy, which must also be democratic. They certainly cannot concern a single application or a single territory, as happened recently with the measure of the Italian Data Protection Authority.
2. What are the peculiarities of ChatGPT?
ChatGPT has at least three features that make it disruptive:
a) its very fast spread and user-friendliness. Despite an unattractive design, ChatGPT is the fastest-growing platform ever. Its diffusion was accelerated also thanks to social networks and an invasion of users from the most disparate 'knowledge' professions working on and with words, to such an extent that free access was (and often still is) clogged up (a paid version was also introduced, which supports GPT-4 and other functions, such as priority access);
b) for the first time, an AI model was intended to impact precisely the 'knowledge workers' and 'information workers' working with language, whether general or specific to certain 'domains'. Tests and experiments with ChatGPT have thus ranged from the Bar exam in the US to newspaper articles drafted in collaboration with ChatGPT, with the consequent problem of recognising authorship. For the first time, the binary 0-1 code was magically transformed into words and sentences: poems in the desired style, conference papers with an ironic or metaphorical tone, university and high-school research papers. These are written so well, across the different languages used, as to seem fully reliable, and at the same time they arouse emotions of hilarity, surprise, and fear for the future.
c) Yet ChatGPT, like all AI systems, has no understanding of what it writes: it uses an advanced linguistic model based on deep learning, known as the 'Transformer', and, above all, relies on enormous computing power (an unspecified number of supercomputers located in various states in the United States, owned partly by OpenAI and partly by Microsoft in partnership). ChatGPT thus vividly portrays the dichotomy of AI systems, underlined by Luciano Floridi, in which there is a 'divorce' between obtaining results and using intelligence (as we humans understand it) to do so.
It is clear that such a disruptive system in the 'knowledge' professions raises worrying questions: is it destined to replace or change and possibly increase professional skills? And which professions are most likely to change? How is up-skilling achieved so quickly? Basically, the same questions posed by the Digital Revolution 4.0., but seen from the perspective of those who, not working with numbers but with words, may have believed they were out of the woods.
3. Since you mentioned it, what is your opinion on the recent measure of the Italian Data Protection Authority?
I think it is out of time and out of space.
Out of time because ChatGPT was launched in November 2022. Universities have been wondering since then how to deal with this challenge, as discussed on this website by Prof. Michele Vincenti, and it cannot be solved by a ban upstream. Besides, the bans on unacceptable-risk systems in the draft EU Regulation have been the subject of intense negotiations between the Member States and the EU Commission.
Out of space, because the issues raised by ChatGPT cannot be resolved at the national level, as demonstrated precisely by the ongoing legislative process in the EU, where the questions of a) whether and how to regulate GPAI (General Purpose Artificial Intelligence), b) what to ban, and c) how and in what terms, directly affect the European single market. A national ban risks distorting competition without any benefit. While ChatGPT is being banned in Italy, there are hundreds of applications taking its place (Perplexity.AI, but also the Italian Pizzagpt.it, and several ways to continue using ChatGPT itself); Google is experimenting with Bard, based on the advanced LaMDA language model; Meta is investing in a similar natural language processing architecture, OPT-175B; and so is the Chinese giant Baidu. And that is considering only text-generative systems. Image-generative systems, which carry entirely similar risks, remain fully usable (Midjourney, Stable Diffusion, OpenAI's DALL·E).
Apart from the effect of stepping out of the global competition at full force, the measure brings with it another side effect: it shows that an apparatus of standards such as the GDPR, designed 'without AI in mind', is not adequate for the new technologies. This issue was also on the table when the European Commission and Parliament started down the path of AI regulation. Between 2016 and 2020, it was long debated whether it would be sufficient to intervene surgically in the already existing and proven disciplines that might appear suitable to face the new challenges, not only with regard to privacy, but also consumer protection, the prohibition of discrimination, and the protection of competition. The negative answer lies in the decision to proceed with the draft EU Regulation.
4. What, on the other hand, are the opportunities?
If used in the right way, ChatGPT can raise the general average of skills in performing basic tasks that concern all knowledge workers: synthesising texts, providing starting points for reports, classifying huge quantities of documents on the basis of the instructions given to it, integrating texts with numerical tables, and rendering specialised texts, in which, as mentioned, technical terms often incomprehensible to their addressees abound, into language accessible even to non-experts. In short, it can do what a very good assistant, properly 'trained' and available 24 hours a day, could do.
The real point is not what it can do, but how it is used and, upstream, how it is trained, by whom, and in which specific domains. Because, as it stands today, it is clear that its abilities also differ depending on the language in which it is used. More than 50% of the unstructured texts on the web are written in English, followed by Chinese, German, and French. This alone makes the application less efficient and reliable when used in a different language. But we are talking about a system that has only begun to take its first steps, and whose concrete direction, in terms of investment, research, and development, is in the hands not of the users, nor of the regulators, but of OpenAI.
5. Should systems such as ChatGPT be regulated or can they be considered already covered by the current draft regulation?
At the moment, the negotiation between the European institutions is at an apparent stalemate, precisely because the options for how to regulate GPAI differ: on the one hand, the Council, i.e. the Member States, proposed including a specific title in the draft, while delegating to the Commission the adoption of specific delegated acts identifying the detailed disciplines. On the other hand, the Parliament intends to specify already in the text of the regulation the very complex measures, obligations, and procedures to which such systems are to be subject. In my opinion, choosing the path of a super-detailed regulation for systems whose development is unpredictably fast, as demonstrated by the eruption of ChatGPT and the knock-on effect it has produced (the web is currently teeming with generative apps), risks complicating matters. The so-called Collingridge dilemma is there for all to see: regulate early, before it is too late? Or wait to learn the real impact, risks, and opportunities of the new technology? The game has just begun.
6. What opportunities does ChatGPT present specifically for the legal profession (lawyers as well as judges), and what steps should be taken to ensure that these opportunities are fully realized?
ChatGPT opened the discussion on the application of NLP models in the legal professions. In itself, this is already an opportunity. And as I said, it could be a valuable 'assistant', helping judges in the preparatory phase of the decision and lawyers in the drafting of court documents. But models designed, conceived, and developed with the Italian, or at least the European, legal system in mind would have to be put in place. Indeed, the so-called predictive justice systems in common law jurisdictions are based above all on judicial precedents, because of the rule of 'stare decisis', i.e. the binding force of precedent. In civil law systems, on the other hand, it is important that alongside judicial precedents, which alone risk letting the past condition the future, there is an elaboration of the rules in force, often articulated on several levels (European, national, regional, local), and of the scientific literature that marks the evolution of interpretation, so as to apply the rules to the new.
7. What risks are associated with the use of ChatGPT in legal decision-making, and what regulatory measures should be taken to mitigate these risks?
The most obvious risk, in my opinion, is the lack of participation in the training process: leaving it only to computer scientists, who are in turn directed by the business policies of private investors. Never before has the watchword been multidisciplinarity among different skills, backgrounds, and approaches. This way of working should cover all phases, from design to training, implementation, monitoring, and adjustment. There is a need, on the one hand, for strong leadership in those who drive the change (which is not only technical, but cultural), and on the other, for legal professionals to rethink the boundaries of their work and be open to other scientific knowledge, including the linguistic sciences, in pursuit of the same objective. Otherwise, we risk being out of time and out of space.
Then there is another risk, which has already been explored in high-tech environments, such as air transportation, where 'automatic pilots' have been used for decades, but the subject is also studied in the field of health: the risk of automation bias.
An experimental study, conducted by Israeli and Dutch researchers in 2022, investigated this issue precisely in the context of public administration. The results are rather puzzling because, on the one hand, they show that similar problems of 'confirmation bias' occur when public officials rely on the specialised expertise of experts. On the other hand, they show that, at least in the initial phase of using automated systems, officials' guard is up and the risk of automation bias is therefore low. The real problem is what happens when the systems, having passed the phase of 'pilot projects' and extraordinary funding, become everyday tools, as email or the internet are today. The draft AI Act, while establishing the general principle of human oversight for the use of AI systems, also shines a spotlight on the need to make the individual aware of the tendency to rely on the machine. But it offers no concrete solutions.
8. What role should regulators play in overseeing the development and deployment of ChatGPT, and what resources will be needed to carry out this oversight effectively?
I believe that beyond the rules, whether detailed and cross-sectoral (the EU model) or principle-based and then articulated sector by sector (the UK model), policy-makers have a fundamental role. It is a matter of accompanying the rules, which alone have never changed the world, with concrete measures, based on human and material resources, to make them effective. It would be necessary to bring research, not only theoretical but above all applied research, back under the public aegis, with adequate funding to direct technological development towards collective benefits rather than the maximisation of profits. Two significant historical examples:
- In 1958, in response to the USSR's launch of the Sputnik satellite, the US Advanced Research Projects Agency (later DARPA) began providing massive funding for research into the nascent field of AI, whose ambiguous name had been coined by American researchers only a couple of years earlier.
- In 1991, there was only one website in the world, and it had been invented by Tim Berners-Lee, working at CERN in Geneva, building on the Internet that connected computers, which had itself grown out of ARPANET, originally designed for US military security. CERN decided to make the innovative system public, based on the idea of drawers opening through links. Had it not been for that decision in the early 1990s, the World Wide Web would probably not be accessible as we know it today, with the impact on technological development that we know.
In my opinion, the same attention devoted to regulation should also be devoted to investments and to the direction given to research. Some steps have been taken, but they remain subdued: they do not arouse the same curiosity as a measure by the Italian DPA on ChatGPT.
For instance, at the end of November 2022, 'Leonardo', one of the world's four most powerful supercomputers, was inaugurated in Italy. It is located at the Tecnopolo di Bologna, occupies an area of hundreds of square metres, and was designed and built thanks to substantial Italian and European co-financing. Thirty-one lorries were needed to deliver its components. According to the financing agreements, 50% of the computing power generated by Leonardo is at the disposal of Italian research institutes and universities (the rest will be used by European researchers). A step-change that could also be the trump card for the professional ambitions of future generations.
9. What are your thoughts on ChatGPT's answer to our question about its risks and regulation?
I think it is very correct and equally superficial. ChatGPT is not designed to go deeper, to provide different and new perspectives, and to use the 'lateral thinking' that makes humanity evolve. It is only a starting point, but a very useful one at that. If nothing else, it has allowed us to reflect on it.
10. Any other considerations you feel are appropriate?
Disruptive events, such as ChatGPT's burst onto the scene at the end of 2022, and the discussions that have been stirring at various levels since then, among regulators, policymakers, enthusiasts, and practitioners of words, should be an opportunity to consciously take on the governance of the digital revolution that is before everyone's eyes and that runs at an unpredictable speed. No one, however, can do it alone; no single expertise can be sufficient to frame risks and opportunities and, above all, to envisage measures that mitigate the former and increase the latter. Are we on time? Let's hope the words written by Kevin Kelly in 2016 (practically the ice age, in digital terms) about the digital revolution still apply: 'today is truly borderless, in it we are all becoming. It is truly the best time in human history to begin. We are not late'.
Judge at the Regional Administrative Court for Campania and
expert on the simplification of administrative procedures
for the Italian Department of Public Administration