ChatGPT essay series | The Ethical Perspective

As part of our essay series on ChatGPT, we had the pleasure of interviewing Professor Fabio Macioce, a full professor of bioethics and philosophy of law at the University of Rome LUMSA.

Professor Macioce offers an interesting perspective on the risks associated with AI language models: users’ limited awareness of how ChatGPT processes information and their ability to critically evaluate its outputs; a loss of empowerment in writing, expression, and argumentation skills; and cultural and social changes in our communication habits. ChatGPT could also bring real advantages to society (and to the academic world) if effective transparency is ensured, making real people aware of risks and opportunities while avoiding information overload. Evidence and scholarship on “informed consent” in the health sector may offer regulators a way forward.


Below is the full interview:


1. From your perspective and background, what are the risks (if any) that accompany ChatGPT and its possible applications?

There are legal risks (e.g., copyright infringement) and ethical risks associated with the material that GPT uses to process its responses. As for the ethical aspects, it seems to me that the most sensitive issue is not so much the content of what GPT writes, which could, for example, reflect bias and stereotyping, or spread defamatory or offensive arguments: this can certainly happen, and it depends on the data GPT works on and reflects uncritically. The greater risk, it seems to me, is that the average user may not be sufficiently aware of this information processing, and may not have the tools to detect and critically evaluate the content of what GPT writes.

In short, what seems risky to me is not prejudice itself: it is human beings who spread prejudice and stereotypes, who discriminate and defame; GPT only reflects this. We are accustomed, however, to seeing human beings as unreliable, violent, ideological, biased, or simply as bearers of values and ideas different from our own: we are aware of this and ready (more or less) to detect it. By contrast, we are inclined to think of technology as ethically neutral, formal, and indifferent to moral issues (and so it is, in fact), but precisely for this reason we may be inclined not to apply moral criteria to what technology produces, and so risk failing to see discrimination and prejudice. Technology can deceive us because it presents itself as ethically neutral, even when it uncritically reflects prejudice and ideology.

A second risk is the loss of empowerment: I mean using GPT not as a tool to enhance one's research or writing skills, but as a substitute, whose products are accepted passively and uncritically. This is particularly risky for students, and for those whose training depends on their ability to write, express themselves, argue, and so on (lawyers!).

A third risk stems from the very presence of GPT in the world. The fact that a technology exists, in itself, changes the way we think about reality. The risk I see in GPT today is that people come to regard writing and language production as an unnecessary activity, something they no longer need to engage in. Just as Twitter and, earlier, text messaging changed our language, pushing it toward ever shorter and more simplified sentences, so GPT could affect the way we speak and write. And because language reflects thought, the risks associated with this technology do not seem trivial.


2. Should language models like ChatGPT be held responsible for any harm caused by their output?

No. An integrated liability model seems reasonable, in which liability for any harm caused by GPT (erroneous financial advice, reputational offenses, dissemination of personal data, etc.) is apportioned, case by case, among the user, the owner of the software, the authors of the data and material that GPT used for its response, and so on. It also seems reasonable to me to adopt a liability model that is not equivalent to the (civil) liability applied to human beings, but rather a mix of strict liability and damage-prevention or mitigation measures, activated according to the specific circumstances of the case.


3. How can we ensure that AI language models like ChatGPT are transparent and accountable in their decision-making processes?

I don't think it is possible. I don't even think it is entirely necessary, or at least not all the way through. It seems to me that the user needs to be made aware (through warnings and messages of various kinds) that GPT is not a person, and that it processes its outputs from a certain kind of data and through certain mechanisms. In short, I believe that GPT's users need to be informed, in a way that is understandable given the average user's skills, of roughly how GPT's language processing works. Beyond that, however, absolute transparency is not possible, is not within the reach of the average person's expertise, and is not what we normally ask of the human beings we talk to (we have no idea of their decision-making processes). In any case, the average user is unable to understand overly detailed explanations of how GPT works. As has happened in the medical field, over-explanation not only fails to genuinely inform, but also activates mechanisms for reducing responsibility (the end user, having received all possible information, can no longer complain about anything). It is better, instead, to give basic information, and to offer more personal and personalized information procedures at the user's request.

As for accountability for the results of GPT's decision-making, I would distinguish between logical and inferential correctness (which, it seems to me, can be asked of the software engineers and of those who design the algorithm that allows GPT to process the data from which it draws its answers) and substantive accountability for the content of what GPT says. As I have said, this kind of responsibility should be apportioned between the user and other parties, and should be fostered above all through an increasingly aware use of GPT by end users: through education and training.


4. How can we ensure that AI language models like ChatGPT do not perpetuate harmful biases or reinforce existing power structures?

We could put it this way: how can we prevent a person raised in a sexist, racist, or homophobic context from absorbing these thought patterns and reflecting them in what they say or do?

In part, the problem with GPT is the same problem we always face, even with human beings. In part it is different, because GPT cannot be educated: it does not go to school, and it does not improve morally unless we improve the database it draws on.

At the same time, the database might be expanded (the broader it is, the less statistical weight negative content will carry), and forms of ex-post control can be activated, similar to the moderation that some social media platforms exercise over published content, as well as forms of peer control, among GPT users or among those who make use of the material GPT produces.


5. What are your thoughts on ChatGPT's answer to our question about its risks and regulation?

Its answers are mostly correct, but superficial. Not only do they fail to take into account the complexity of the phenomena and the difficulties of regulation in this area, but they are mostly delivered in an apodictic, unsupported, and unreasoned manner. Thus, even when they are correct or agreeable, they are unable to stimulate debate or provide new knowledge.


6. Any other considerations you feel are appropriate?

We should not focus our attention on what GPT does, but on what it could do. The speed of this evolution, and the technology's capabilities at the present stage, are impressive when we consider future prospects. If GPT writes a sensible article but invents a fictitious bibliography, the interesting thing is to imagine what it could do if it had access to academic databases, if it could actually consult an appropriate reference bibliography, and if it had criteria for determining which sources to consider relevant and which not. For example, it could use criteria that are established in academia, albeit themselves contested and problematic, such as the number of citations an article has received, the impact factor of a journal, or the number of awards won by an author, and so on.


Fabio Macioce
Full professor of bioethics and philosophy of law at the University of Rome LUMSA