In a surprising turn of events, city lawmakers in Porto Alegre, Brazil, recently discovered that the nation's first legislation written entirely by artificial intelligence (AI) was enacted in their city, unbeknownst to them. The experimental ordinance, passed in October, was crafted by OpenAI's ChatGPT, sparking objections and prompting a broader debate on the role of AI in shaping public policy.
Councilman Ramiro Rosário, who initiated the process, revealed that he had tasked the ChatGPT chatbot with creating a proposal to prevent the city from imposing charges on taxpayers for stolen water consumption meters. What makes this revelation particularly noteworthy is that Rosário presented the proposal to his 35 peers on the council without making a single alteration or disclosing its unprecedented AI origin.
The events in Porto Alegre bring to light the challenges and ethical considerations associated with integrating AI into legislative processes. While AI presents opportunities for efficiency and innovation, the lack of transparency in this case has raised concerns about the need for clear guidelines and disclosure when AI is involved in crafting legislation.
On a larger scale, Professor Nicoletta Rangone, full professor of Administrative Law at LUMSA University and Jean Monnet Professor of Better Regulation, recently published an article titled "Artificial Intelligence Challenging Core State Functions: A Focus on Law-making and Rule-making" in the Revista De Derecho Público (Vol. 8/2023). She emphasizes the far-reaching impact of AI on administrative, judicial, and legislative functions. As she notes, a comprehensive approach to AI across the life-cycle of rules (from the proposal of a new rule to its implementation, monitoring, and review) is still lacking in the rich panorama of studies from different disciplines. Her analysis shows that AI can play a crucial role in this life-cycle by performing time-consuming tasks, increasing access to knowledge bases, and enhancing the ability of institutions to draft effective rules and to declutter the regulatory stock. Yet it is not without risks, ranging from discrimination to challenges to democratic representation. For AI to contribute to the effectiveness of law while limiting these risks, she argues, complementarity between humans and AI should be achieved both at the level of the AI architecture and ex post. She further suggests an incremental and experimental approach, along with a general framework, to be tailored by each regulator to the specific features of its tasks, that sets out the rationale, the role, and adequate guardrails for AI in the life-cycle of rules. This agile approach would allow the AI revolution to deliver its benefits while preventing potential harms and side effects.
Meanwhile, on the legislative front in Europe, negotiators from the Council presidency and the European Parliament have reached a provisional agreement on the proposal for harmonized rules on artificial intelligence, known as the Artificial Intelligence Act. This legislation marks a significant step in regulating AI systems, including general-purpose AI (GPAI), a category that encompasses large language models (LLMs) such as ChatGPT.
The European AI Act introduces a risk-based categorization, dividing AI systems into three risk tiers with corresponding regulations. However, it is not a genuinely risk-based approach, as it lacks a thorough risk analysis. Notably, the act adds obligations for GPAI, covering aspects such as IT security, transparency about training processes, and the sharing of technical documentation before market entry. These rules extend to generative AI applications such as ChatGPT, Bard, and Midjourney. Several weeks before the agreement, France, Italy, and Germany had advocated for self-regulation, presenting a differing perspective on how to approach AI governance.
The obligations for foundation models outlined in the European AI Act operate on two levels. The first mandates the publication of materials used to train algorithms and the recognition of AI-generated content to combat fraud and disinformation, a positive step toward accountability. The second applies to AI systems posing systemic risks, which must undergo assessments, adopt mitigation strategies, and report incidents to a specialized AI Office within the Commission.
In conclusion, the case of Porto Alegre, Professor Rangone's insights, and the European AI Act collectively underscore the evolving landscape of AI in legislation. As nations grapple with integrating AI into governance, the Porto Alegre event serves as a cautionary tale, while the European AI Act marks a major step toward establishing a regulatory framework for responsible AI use. The global discourse on AI in public policy emphasizes the need for continuous dialogue, transparency, and a nuanced approach that harnesses the benefits of AI while mitigating potential risks.
Luca Megale
is a PhD student at LUMSA University of Rome
and tutor of the European Master in Law and Economics - EMLE (Rome term)