ChatGPT essay series | A Legislative Drafter's Perspective

Following the point of view of a computer scientist (Fantozzi), we return to the comparison with an end-user expert evaluating the application of ChatGPT to a specific context: legislative drafting. Laura Tafani, a former Senior Parliamentary Advisor of the Service for the Quality of Legislative Acts of the Italian Senate of the Republic, gives us valuable insights into the use of Artificial Intelligence in parliamentary activity and her personal experience of using ChatGPT (e.g., the system citing the article of the Italian Constitution on the repudiation of war when asked for a provision promoting the Italian language, even though the draft it produced appeared well structured and coherent).

Tafani's contribution allows us to learn more about the risks and opportunities of using artificial intelligence systems in legislative activity. These range from conditioning the dynamics of parliamentary democracy by affecting certain fundamental assumptions of representation (the 82 million amendments submitted in Italy to a constitutional reform and the problem of mass comments) to the Italian Parliament's experimentation with an amendment writer (an algorithmic tool that creates a specific opportunity for parliamentary activity).

Furthermore, agreeing with the other interviewees on the need for a common supranational strategy, Tafani also raises the need i) for Parliament to invest more resources both in recruiting staff skilled in artificial intelligence and robotics and in the professional training of existing staff, starting with those currently working in IT services; ii) to train systems on certified data sets that are the result of statistical research, surveys, studies and analyses by recognized experts (with a specific Italian example on the matter); iii) to potentially establish, also in the Italian Parliament, a bicameral parliamentary Committee that would coordinate the activity of promoting rules and guidelines and subsequently oversee their correct application.


1. What do you see as the most significant risks associated with the use of AI language models like ChatGPT in the legislative drafting process?

I believe that the main risks associated with the use of AI language models in the legislative drafting process are those already encountered for any other use: poor quality of answers in terms of reliability; lack of transparency and reliability of the sources used; difficulty of traceability and human control of processes; production of data potentially polluted by intentional or unconscious biases; dangers of profiling; lack of liability for privacy or copyright violations; outdated information and data.

These risks are compounded by others that are more specific and related to the inherent complexity of legislative drafting. These AI models are 50 per cent based on English-language texts; the other texts are in Chinese, German and French, and only a small percentage in Italian. This means that they are not based on a corpus of Italian legislation and have not been trained to take into account the peculiarities of our legal texts. Moreover, the fact that the answers generated by the model are the product of correlations, statistical inferences and probabilistic sequences of words, and not the result of logical-rational reasoning, has a particular impact on the production of law and conditions the results, as the examples below show.

Before OpenAI blocked access to ChatGPT for Italian users, following the measure adopted by the Italian Data Protection Authority on 30 March 2023, I used the model by asking it to draw up some bills and providing information on the object of the intervention, the minimum content of the regulation to be introduced or the legislation to be amended. From a formal point of view, the texts drafted appear well structured, in most cases with an appropriate title and divided into articles of homogeneous content, without headings and with the paragraphs numbered according to the Anglo-Saxon model and not according to Italian drafting rules. The most significant aspect is that the model is not able to use the 'novella' technique (textual amendment) to amend existing legislation. From a substantive point of view, in many cases the contents appear relevant, but repetitive. In some cases, extravagant provisions not expressly requested in the query or outright errors appear. To give a few examples, when asked to draw up a new state regulation establishing a ban on smoking in all outdoor places, ChatGPT responds by inserting, among other things, an article stating that sanctions are to be established by the 'school board' and a further article stating that the new regulation 'shall come into force as of the first school day of the school year following the approval of the law'. Asked to introduce a regulation for the promotion and enhancement of the Italian language in the Constitution, ChatGPT points to Article 11 of the Constitution (on the repudiation of war: sic!) as the reference source.

In order to test ChatGPT also on profiles other than legislative drafting, while still remaining in the legal sphere, a question on criminal law was formulated: the answer turned out to be wrong because the conduct was mistakenly classified as aggravated theft instead of theft of use.

The most striking positive aspect, on the other hand, is the remarkable speed of the answers provided and their institutional tone, which could, however, strongly influence an uninformed user with no experience of the subject, inducing them to trust the model without subjecting its answers to verification and scrutiny.

Beyond the profiles strictly related to the writing of legislative texts, the inclusion of AI models in parliamentary procedures can significantly condition the dynamics of parliamentary democracy by affecting certain fundamental assumptions of representation. In 2016, AI was tested during the 17th legislature of the Italian Senate (with disruptive effects) for the production of amendments to the so-called 'Renzi-Boschi' constitutional reform (A.S. no. 1429): two senators of the 'Northern League' presented approximately 82 million amendments produced on the basis of an algorithm capable of generating textual modification proposals to the bill almost endlessly. The algorithm, using Natural Language Generation, made simple substitutions of terms and punctuation, thus creating millions of further amendments from a single one, all different in terms of linguistic-textual formulation, with the real risk of making parliamentary debate impossible. Even if parliamentary structures are gearing up to implement techniques that allow this enormous volume of amendments to be managed, it is clear that self-generated texts put the traditional system of parliamentary scrutiny in difficulty, because they expose it to obstructionist techniques that cannot easily be countered, or at least contained, with traditional procedural tools.
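To make the mechanism concrete, here is a minimal sketch in Python of how simple term and punctuation substitutions can turn a single amendment into a combinatorially large number of formally distinct variants. The amendment template and the substitution lists are invented for illustration; this is not the actual tool used in the Senate.

```python
from itertools import product

# Toy illustration of the substitution technique described above: one amendment
# template plus small lists of near-synonymous terms and punctuation marks
# yields a combinatorial number of formally distinct amendments.
# Template and substitution lists are invented for illustration only.
TEMPLATE = "Al comma 1, la parola {old} è sostituita dalla seguente: {new}{punct}"

SUBSTITUTIONS = {
    "old": ["'promuove'", "'favorisce'", "'sostiene'", "'incentiva'"],
    "new": ["'tutela'", "'valorizza'", "'garantisce'", "'assicura'"],
    "punct": [".", ";"],
}

def generate_variants(template, subs):
    """Yield every combination of the substitution lists applied to the template."""
    keys = list(subs)
    for combo in product(*(subs[key] for key in keys)):
        yield template.format(**dict(zip(keys, combo)))

variants = list(generate_variants(TEMPLATE, SUBSTITUTIONS))
print(len(variants))   # 4 * 4 * 2 = 32 variants from a single template
print(variants[0])
```

Even this toy example already produces 32 variants from three substitution slots; enlarging the lists and the number of slots is what drives the figure into the tens of millions.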

A similar risk could arise if AI were to be used for citizen and stakeholders’ consultation procedures within the regulatory cycle.

AI already greatly facilitates the possibility of so-called 'mass' comments, i.e. comments that are apparently different but in substance identical, or 'misattributed' comments, i.e. comments coming from fictitious parties (the term 'misattributed' was coined by Professor Michael Herz with reference to the US Federal Communications Commission's (FCC) net neutrality regulation, in which millions of public comments allegedly came from people who did not exist or who had not actually submitted comments).

In the future, AI could use algorithms capable not only of synthesizing individual opinions and comments, but also of 'automatically' translating them into a text to be fed into parliamentary work; this, however, would entail evaluations that go beyond the mere processing of large quantities of data or the selection of words and propositions, and that presuppose choices that are anything but neutral from a political, even before a legal, point of view.

A further risk is that Parliament will be misled in its work by false or incorrect information coming from the use of these models. As already mentioned, ChatGPT has no knowledge of what it writes, but is an extremely large text prediction model that selects the next word, phrase or punctuation based on the sample texts it has been trained on; it is not a look-up table or an encyclopedia. ChatGPT is not connected to the Internet and its training data stops at 2021. Moreover, even internet-connected models such as Bing AI have a tendency to invent or 'hallucinate' information, especially in contexts without training data.
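To illustrate, in deliberately simplified form, what 'predicting the next word from sample texts' means, here is a toy sketch in Python: a bigram model trained on a few invented sentences, orders of magnitude simpler than a real LLM, that merely continues a prompt with the statistically most frequent successor word, with no understanding of what it writes.

```python
from collections import Counter, defaultdict

# Toy bigram "language model": it only counts which word follows which in the
# sample texts and then continues a prompt with the most frequent successor.
# The sample corpus is invented; a real LLM performs the same kind of
# statistical next-token prediction, but at vastly greater scale.
corpus = (
    "the law shall enter into force the day after its publication . "
    "the law shall apply to all citizens . "
    "the regulation shall enter into force immediately ."
).split()

successors = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    successors[current_word][next_word] += 1

def continue_text(prompt: str, n_words: int = 8) -> str:
    words = prompt.split()
    for _ in range(n_words):
        candidates = successors.get(words[-1])
        if not candidates:
            break
        # Pick the statistically most frequent continuation, nothing more.
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(continue_text("the law shall"))
```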

The question then is: are we willing to accept, in the near future, an artificial legislator, a Leviathan totally detached from the will of human beings?


2. What opportunities do you see in using AI language models like ChatGPT to enhance the efficiency and effectiveness of the legislative drafting process?

According to the World e-Parliament Report 2020 (Inter-Parliamentary Union 2021, 7, 16, 55), while approximately one in two Parliaments globally uses information technologies in the area of drafting, only 6% of Parliaments use artificial intelligence-based technologies in this area.

In my opinion, the specific working context of representative assemblies, characterized by the flexibility of the legislative process and the pluralism of the actors involved, requires, in the area of formal drafting, technologies specifically designed for that type of 'environment' or at least an ex-ante adaptation of the essential structure of the AI model and an intensive training effort for administrative staff, parliamentarians and their collaborators.

For this reason, it seems more appropriate to turn, rather than to models such as ChatGPT, to algorithmic tools created specifically for parliamentary activity.

i) A case of the 'institutionalization' of algorithmic tools in the legislative process can be found in the very recent experimentation developed by the Italian Parliament during the pandemic for so-called formal drafting. The Chamber of Deputies and the Senate have in fact carried out a joint investigation for the development of an 'amendment writer', which is soon to be adopted and is intended to support the amendment-writing activity of parliamentarians and groups. This system allows the user to edit the text of a provision directly and obtain the corresponding proposal structured in the form of an amendment, according to the rules of technical drafting of legislative texts. The 'electronic desk', which was developed as part of research in the field of legimatics, has also offered interesting prospects for development in drafting procedures. By 'electronic desk' we mean a computer workstation that provides the drafter with a series of applications (e.g., dictionaries, spelling and syntax checkers, databases, etc.) that can be accessed without leaving the legislative drafting environment.

ii) As far as so-called 'substantive' drafting is concerned (instrumental in particular to the phase of legislative inquiry and impact analysis, as well as to that of ex-post control of the implementation of laws), there is no doubt that AI makes it possible to analyze a large quantity of data very quickly and then extract structured information from it, which could supplement the data and information gathered through the traditional documentation and study activities conducted by parliamentary offices, as well as through government hearings and briefings. By combining AI techniques with technologies pertaining to the field of so-called Big Data, it would be possible to obtain information without knowing all the rules of access to the different databases and their interrogation methods, simply by formulating articulated queries in natural language. To achieve this, it would first be necessary to integrate the archives as far as possible, then to create a platform capable of processing them by defining the necessary algorithms, and finally to 'train' the platform on the available data. This effort, however, should necessarily concern certified data sets that are the result of statistical research, surveys, studies and analyses by recognized experts, whereas models such as ChatGPT do not guarantee these requirements or this 'expertise'.

An interesting example in this context concerns the initiative outlined by ISTAT at a conference held at the Senate on 15 October 2019, during which the Institute announced the launch of a platform enabling natural language-based searches on the datasets contained in all its archives, to be made available also to Parliament. If this undertaking is realized, it will be a first example of a certified dataset for AI use available to the Italian Parliament, to be used throughout the regulatory cycle, from ex ante impact analysis to ex post evaluation of legislation. This first experiment could enable the creation of further platforms for analyzing portions of (certified) Big Data in strategic sectors: e.g., data on public finance, the real economy, justice, etc.

iii) A further area for the use of artificial intelligence in the legislative process is participation based on so-called crowdsourcing policymaking, i.e., on open consultations through which citizens and stakeholders can express their opinions on digital platforms, which are then synthesized by computer systems that return the overall orientation resulting from the aggregation of the contributions individually given. AI could help participants structure their opinions and indications into comments that are more likely to influence decisions. One could, for example, ask a generative AI tool to summarize a person's position on a proposal and turn it into a comment that is organized and clear. It would seem, for instance, that ChatGPT can easily create convincing public contributions on the basis of such simple prompts.

iv) Generative AI could also help chamber staff to summarize comments on a proposal, classify feedback according to pre-defined categories and group information according to similarities in content, style or other characteristics. It could then provide an outline of a response to the comments. However, this workflow would require a more sophisticated set of tools than models such as ChatGPT and would call for the customization of an existing LLM, through additional training with relevant texts, to better apply it to this task.
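As a purely illustrative sketch of the grouping step just described (in Python with scikit-learn, on invented comments; a real parliamentary workflow would, as noted, require a customized LLM for summarization and response drafting), comments can be vectorized and clustered by textual similarity so that staff can review one theme at a time:

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Invented public comments on a hypothetical proposal, for illustration only.
comments = [
    "The smoking ban should apply to parks and beaches as well.",
    "Please extend the ban to all public parks and outdoor beaches.",
    "The sanctions are too harsh for small businesses.",
    "Fines of this size will hurt small shops and bars.",
    "The proposal does not say who will enforce the ban.",
    "Enforcement responsibilities are left completely unclear.",
]

# Represent each comment as a TF-IDF vector and group similar ones together.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(comments)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Print the clusters so that staff can review one theme at a time.
for cluster_id in range(kmeans.n_clusters):
    print(f"--- cluster {cluster_id} ---")
    for comment, label in zip(comments, kmeans.labels_):
        if label == cluster_id:
            print(" ", comment)
```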


3.  Do you believe that there is a need for specific regulations or guidelines around the use of AI language models in legislative drafting? If so, what might these regulations look like?

I think it is necessary and urgent to draw up, in coordination with supranational and international institutions, a common strategy with a shared vision, values, objectives and goals.

The world of AI still lacks a general regulatory framework, just as it lacks rules governing the use of AI software by public administrations.

As has been rightly emphasized by many, there are at least a few essential parameters to be ensured. The first is transparency/explainability, i.e., the interest in understanding how and why an artificial intelligence system has produced a certain answer; the other is subsidiarity, i.e., the need for artificial intelligence to be a useful support to human intelligence, without going so far as to replace it entirely.

The proposal for a regulation presented by the European Commission, which prefigures a comprehensive regulation of the matter that will also concern public administrations, is currently under discussion. The regulation should stipulate that those who supply an AI system must comply with strict requirements in terms of data governance, security and accuracy, information, and human oversight, as well as being subject to a compliance procedure as a condition for placing the system on the market or putting it into service.

In some Parliaments, the challenges posed by AI have already led to the creation of autonomous structures and institutions with advisory and analytical functions with respect to digital change and the assessment of scientific and technological progress in support of parliamentary activity.

One could hypothesize, on the basis of these best practices, the establishment in the Italian Parliament as well of a bicameral parliamentary Committee, which would coordinate the activity of promoting rules and guidelines and subsequently oversee their correct application and possible implementation, making use of parliamentary administrations and external experts.


4. How might the use of ChatGPT impact the role of legislative drafters, and what skills or training might be necessary for them to effectively integrate these tools into their work?

The application of digital tools to the processes of formal drafting and revision of legislative texts has already required major innovations and investments both in the field of available technologies and professional training.

If, on the one hand, these technologies have simplified and sped up the work of the drafter - which until a few years ago was still done on paper, with handwritten corrections sent to the printer for typesetting - by eliminating merely repetitive operations devoid of discretion, on the other hand they have made it necessary to supervise the final product more carefully, in order to avoid the proliferation of errors linked to 'copy-pasting' from erroneous precedents or to the use of automatic correctors.

The introduction of AI models into the legislative process will require Parliament to invest more resources both in hiring staff with expertise in artificial intelligence and robotics, and in the professional updating of existing staff, starting with those currently working in IT services. Further training should be inspired by an interdisciplinary approach covering various AI-related areas: machine learning, automated reasoning, natural language processing, user data protection, optimization and decision-support systems, with insights into ethical issues and the latest findings from cognitive neuroscience. Parliamentarians, their collaborators, and the employees of the administration and of parliamentary groups will have to be not only adequately trained in the use of these new models (these tools are often very intuitive and easy to interrogate), but also made aware of all the limits, constraints and risks underlying their use, so as to enable them to govern processes and make use of the results of research and practical applications without being passively subjected to them. In general, users will have to be more expert than in the past, so that the information produced by AI can be verified and supplemented by those who have specific and in-depth knowledge of the subject or topic being researched.


5.  Are there any legal or ethical concerns that arise when using AI language models like ChatGPT in the legislative drafting process? How might these concerns be addressed?

As I have already mentioned, there is no doubt that the introduction of AI models into the working methods of legislative assemblies has significant consequences for the very dynamics of parliamentary democracy. Already, the daily use of IT by parliamentarians has affected some of the fundamental democratic assumptions of representation, first and foremost transparency, the constant interaction between citizens and elected representatives, the mediation of questions and interests, and popular participation in decision-making processes. At the same time, digitization has also affected that invisible dimension of parliamentary work, carried out outside codified procedures and removed from publicity, which nevertheless proves to be strategic for the efficiency of the various functions.

If the legislative process were entrusted exclusively to a machine, we would witness the definitive emptying out of the 'political' dimension and the sacrifice of the tension that is typical of democratic legislative production. The law, as a legal instrument, is not only intended to regulate, but also performs the function of guiding the behavior of citizens. In this lies the 'political' nature of the law. Even the most advanced AI is based on 'data' and learns with reference to something 'given': it learns from the 'past' but is unable to predict future events in complex and uncertain situations. This retrospective nature clashes with the properly human tension towards the future and the teleological orientation that marks the legal dimension. The behavioral scientist Sendhil Mullainathan wrote that we should be afraid not of intelligent machines, but of machines that make decisions they do not have the intelligence to make: 'I am much more afraid of the stupidity of machines than of the intelligence of machines'.

It is certainly possible that all this will remain a hypothesis, but we cannot underestimate the pressure of orientations and experiments that would like to entrust to automation even human activities that today seem totally immune to automation processes.


6. What steps can be taken to ensure that AI language models like ChatGPT do not unintentionally perpetuate bias or discrimination in the legislative drafting process?

I think it is extremely difficult to eliminate these problems, considering that, although OpenAI has supplemented ChatGPT with a moderation tool to prevent it from producing forbidden content on particularly sensitive matters (e.g., sexual, hateful or violent content, or content promoting self-harm), such protection is not guaranteed. What can and must be done is to make all users of such models aware of their limitations and risks.


7.   What are your thoughts on the future of AI language models in the legislative drafting process? Do you see them becoming more widespread, or are there significant barriers that may prevent their widespread adoption?

As I pointed out earlier, everything will depend on the political choices that will be made to regulate the phenomenon and on the identification of AI models that are 'certified' from the point of view of both their knowledge base and those who manage them.


8.  What are your thoughts on ChatGPT's answer to our question about its risks and regulation?

The answer given by ChatGPT is quite similar, in form and argument, to those I received to my questions on how to write a bill. ChatGPT uses polished and institutional language, which lends an 'aura' of authority and credibility to its answers. However, in terms of content, there are significant shortcomings, as the answers are often repetitive, superficial and apodictic. Furthermore, the bibliography given in the text of the answer and in the appendix is often false (the authors cited do not exist) or incorrect.


Laura Tafani
Former Senior Parliamentary Advisor
of the Service for the Quality of Legislative Acts
of the Italian Senate of the Republic