The Italian Privacy Guarantor vs. ChatGPT: what role for public authorities in balancing support for innovation and protection of rights?

Complementing the series on "ChatGPT, what risks, what opportunities and what needs for intervention and supervision?" is an article written by a member of the Chair, Luca Megale, entitled "The Italian Privacy Guarantor vs. ChatGPT: what role for public authorities in balancing support for innovation and protection of rights?", published by the Giornale di diritto amministrativo, no. 3/2023.

Following a brief overview of the possible uses of ChatGPT by public authorities (para. 2), the article examines the administrative enforcement of an outdated regulatory framework (para. 3), the effectiveness of user protection (para. 4), and the limits of the proposed European AI legislation (para. 5), before outlining the key role of public authorities, poised between support and guarantee.

Indeed, in markets characterized by very rapid evolution, such as digital markets, traditional better-regulation tools remain essential, but they need to be supplemented by a new phase situated between the design and the implementation of a rule.

For instance, the experiences of Banca d'Italia and Consob confirm the success of an informal, supportive approach to the interpretation of rules. Such examples of cooperation with, and support for, citizens and businesses should be translated into a renewed approach to new technologies, in which public authorities must assume a key role: support for businesses and a guarantee for citizens.

At first glance, the proposed European AI regulation would seem to be moving in the right direction, providing that national authorities may rely on regulatory experimentation through so-called sandboxes. In reality, such tools require the regulatory flexibility inherent in agile regulation, which the AI regulatory proposal lacks. The outcome risks being a "stiffening of public administration", with the authority unable to adapt to the variability of specific cases and tools.

With reference to generative AI, where the degree of risk depends on the use rather than on the tool itself, sandboxes would allow an analysis of the "real" risk, benefiting both the administrations and the firms involved. The governance of an ever-evolving digital environment, including generative AI, should therefore be inspired by an "agile" approach to administrative regulation and implementation. Examples include sandboxes, as well as collaboration with businesses and supporting tools that incentivize innovation while respecting users' rights, so as to ensure a more practical and less theoretical methodology for systems that systematically tend to outpace regulation.