[RESEARCH NOTE] Regulatory Sandboxes for AI – in Search of the Right Approach [Repost from Librestories.eu]



While the Parliament has invested significant effort in developing the text of the Regulation and ensuring that AI systems are safe, transparent, environmentally friendly, non-discriminatory, traceable and overseen by people, it has also recognized the importance of facilitating innovation. This is evident from a number of changes, especially regarding the formation and use of regulatory sandboxes.

In search of a definition

Probably one of the most convincing pieces of evidence in that regard is the proposed amendment to Art 1 on the scope of the Regulation, where the setting up of regulatory sandboxes is presented as a measure to support innovation, with a particular focus on SMEs. Unsurprisingly, the Parliament follows the Council’s approach and proposes that a definition of a regulatory sandbox be included in Art 3, rather than extrapolated from Art 53 as it was in the Commission’s proposal.

Commission: “a controlled environment that facilitates the development, testing and validation of innovative AI systems for a limited time before their placement on the market or putting into service pursuant to a specific plan.” (emphasis added)

Council: “a concrete framework set up by a national competent authority which offers providers or prospective providers of AI systems the possibility to develop, train, validate and test, where appropriate in real world conditions, an innovative AI system, pursuant to a specific plan for a limited time under regulatory supervision.” (emphasis added)

Parliament: “a controlled environment established by a public authority that facilitates the safe development, testing and validation of innovative AI systems for a limited time before their placement on the market or putting into service pursuant to a specific plan under regulatory supervision.” (emphasis added)

Table 1. Comparison of definitions

Nevertheless, Table 1 shows that there is an ongoing and currently unresolved discussion on the very nature of regulatory sandboxes. Interestingly, the definition of a regulatory sandbox in the UK, where the tool actually originates, is much more practical. Usually it is simply referred to as a ‘safe space’ designed for certain tasks, such as allowing firms to test innovative propositions in the market with real consumers. Sir Patrick Vallance’s Digital Report, which is often quoted in the recent UK White Paper on AI, defines regulatory sandboxes as “a live testing environment, with a well-defined relaxation of rules, to allow innovators and entrepreneurs to experiment with new products or services under enhanced regulatory supervision without the risk of fines or liability. They are typically operated by a regulator for a limited time period and seek to inform rule making.” This definition, rooted in the UK’s extensive experience with regulatory sandboxes in various fields, stands in stark contrast to the ‘controlled environment’ notion adopted by the EU.

The Council’s common position attempted to mitigate this by proposing testing under ‘real world conditions’, where appropriate, but the Parliament has reverted to the controlled-environment approach. Furthermore, the explicit requirement for the regulatory sandbox to be established by a public authority and the deliberate inclusion of the word ‘safe’ in the definition demonstrate very little genuine support for innovation, and even a fear of ‘playing in the sand.’ This is also supported by the addition of the word ‘controlled’ in Recital 71, as well as by the emphasis on, and great exaggeration of, the regulatory compliance function of the regulatory sandbox in Recital 72. The way this wall of text describes the objective of the regulatory sandbox disregards the well-established problem of scale and the large costs associated with even a traditional national sandbox involving just one regulator.

Moreover, the listing of the regulatory sandbox’s objectives in the new Art 53(1e), proposed by the Parliament, is rather self-centred, revolving around the needs of the regulators. Objective (b), claiming that regulatory sandboxes aim to “allow and facilitate the testing and development of innovative solutions related to AI systems”, is not supported by evidence. The claim that regulatory compliance guidance facilitates innovation does not correspond with the needs expressed by the industry on a number of occasions. While such guidance does play a certain role, European innovation in general, and in the field of AI in particular, faces bigger hurdles, including a lack of suitable investment opportunities, a lack of access to testing facilities, the need for specific datasets and regulatory overburden.

In essence, the Parliament’s draft is laser-focused on establishing regulatory sandboxes for AI as quickly as possible, making them mandatory for Member States in Art 53 and requiring at least one per Member State to be operational by the time the Regulation becomes applicable. Art 53 also provides for the possibility of such a sandbox being established by one Member State or jointly by several Member States (or by the Commission and the European Data Protection Supervisor, separately or together with Member States). This part of the provision is extremely puzzling because, if we interpret it systematically and grammatically, the involvement of several Member States in one sandbox should be considered a way to ease the mandatory establishment requirement, especially for Member States such as Bulgaria that have no experience with anticipatory regulation. However, cross-border and cross-jurisdictional regulatory sandboxes are much more challenging, and we currently lack any example of a successful model for achieving this. Even in the field of financial regulation, where regulatory sandboxes have been applied the longest and by a significant number of jurisdictions, initiatives for cross-border and multi-jurisdictional sandboxes have had very limited success. The Global Financial Innovation Network’s reports on its cross-border testing identify a number of issues not only with respect to the participants and their expectations and needs but also with respect to the coordination between the participating authorities. This once again raises the question of the ratio between costs and results in a regulatory sandbox.

The incentives

The Parliament proposes adding a new paragraph, Art 53(1g), which attempts to offer some incentive for providers of high-risk AI systems to participate in a regulatory sandbox. The provision stipulates that establishing authorities “shall provide sandbox prospective providers who develop high-risk AI systems with guidance and supervision on how to fulfil the requirements set out in this Regulation, so that the AI systems may exit the sandbox being in presumption of conformity with the specific requirements of this Regulation that were assessed within the sandbox. Insofar as the AI system complies with the requirements when exiting the sandbox, it shall be presumed to be in conformity with this regulation” (emphasis added).

This particular provision could pose several challenges. First and foremost, the presumption of conformity could be detrimental to legal certainty. To begin with, the testing of a high-risk AI system in a controlled environment within the scope of the regulatory sandbox can hardly be indicative of how the same system would behave in a real-life environment. Moreover, this approach runs directly against contemporary software development practices, where Continuous Integration and Continuous Delivery (CI/CD) is currently the preferred way of working. CI/CD enables an iterative process in which the system is constantly updated with feedback from end-users (among others), thereby improving error detection and system integration. In contrast to good industry practice, the Regulation attempts to ‘freeze’ the conformity of a given AI system at a single moment in time, which runs counter to contemporary software engineering and, in particular, AI development practices.
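To make this tension concrete, here is a minimal, deliberately simplified sketch of a CI/CD-style loop for an AI system. It is written in Python purely for illustration: the ‘model’ is just a number fitted to the data, and the functions train and collect_feedback are hypothetical stand-ins, not any real framework’s API.

```python
import random

# Minimal, self-contained sketch of a CI/CD-style loop for an AI system.
# Everything here is an illustrative stand-in: the "model" is a single
# number (the mean of the data), not a real AI system.

def train(dataset):
    """Stand-in for a training run: fit the 'model' to the current data."""
    return sum(dataset) / len(dataset)

def collect_feedback():
    """Stand-in for end-user feedback and monitoring data; the data drifts."""
    return [random.gauss(1.0, 0.5) for _ in range(20)]

dataset = [random.gauss(0.0, 0.5) for _ in range(200)]
assessed_model = train(dataset)   # the system as assessed at 'sandbox exit';
                                  # this is the point where conformity is 'frozen'

model = assessed_model
for cycle in range(5):             # each subsequent CI/CD delivery cycle
    dataset += collect_feedback()  # new, drifting data arrives continuously
    model = train(dataset)         # the system is retrained and redeployed
    print(f"cycle {cycle}: deployed model drifted by {abs(model - assessed_model):.3f}")

# With every cycle the deployed system moves further away from the artefact
# assessed in the sandbox, which is why a point-in-time presumption of
# conformity quickly loses its meaning.
```

Under these toy assumptions, the gap between the assessed and the deployed system grows with every delivery cycle; attaching a period of validity or a re-assessment trigger to the presumption, as suggested below, would at least acknowledge this dynamic.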

Furthermore, the Parliament vests too much confidence in the process and in the ability of the regulator. Cases such as the Volkswagen emissions scandal demonstrate that there are ways to ‘cheat’ the regulator and, while we should not presume that a company acts in bad faith, we should not completely dismiss the possibility. The establishment of a presumption of conformity could serve as effective camouflage for such violations, especially if it is not given at least some period of validity, for example six months to one year. Moreover, the vague formulation of the article with respect to the presumption does not indicate how this legal fact is to be interpreted, e.g. whether the presumption is procedural or substantive, rebuttable or irrebuttable, and to whom the burden of proof shifts.

Another interesting incentive, already offered in the Commission’s original proposal, is the possibility for a participant in a regulatory sandbox to process personal data that has been lawfully collected for other purposes. The subsequent processing in the sandbox must serve the development and testing of certain innovative AI systems under conditions that are explicitly listed and include: the specific purpose of the AI system; necessity; monitoring; guarantees for the processing of the personal data, including not affecting the data subjects and deleting the data once the retention period expires or the participation in the sandbox has been terminated; and strict transparency obligations (Art 54(1)).

The Parliament proposes some minor changes to Art 54, probably the most significant being in paragraph 1(i), which specifies the purposes for which AI systems need to be developed in order to benefit from the personal data protection exception. In the Commission’s draft, item (i) covers the fields of “prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security, under the control and responsibility of the competent authorities.” The Parliament proposes the complete removal of this item, in line with its general approach of limiting the ability of police and law enforcement to use AI systems disproportionately.

Castles made of sand

Besides being a great song by Jimi Hendrix, this metaphor illustrates the general approach of the AI Regulation towards regulatory sandboxes. The Commission’s initial proposal was vague, but at least it signalled a desire to move our old way of regulating towards tools that are agile and anticipatory rather than reactive. Unfortunately, this generated quite a buzz, which led to ‘regulatory sandboxes’ becoming the answer to every question, which they certainly are not. The real needs of business were swept under the carpet, and instead we received a heavily bureaucratic, complicated and extremely expensive sandbox-like process which threatens legal certainty and offers little to no incentive for participants. In my opinion, regulatory sandboxes need to offer more freedom and to operate on a smaller, local scale. Many Member States have little to no experience with regulatory sandboxes, and overambition often produces completely unusable instruments out of tools that could be of great advantage if tuned properly. To once again quote Jimi Hendrix:

And so castles made of sand

Fall in the sea eventually.


Katerina Yordanova
Lawyer and researcher specializing in ICT & IP Law