Foundation models and generative artificial intelligence (AI) exacerbate a core regulatory challenge associated with AI: its heterogeneity. By their very nature, foundation models and generative AI can perform multiple functions for their users, thus presenting a vast array of different risks. This multifunctionality means that prescriptive, one-size-fits-all regulation will not be a viable option. Even performance standards and ex post liability, regulatory approaches that usually afford flexibility, are unlikely to be strong candidates for responding to multifunctional AI's risks, given challenges in monitoring and enforcement. Regulators will do well instead to promote proactive risk management on the part of developers and users by using management-based regulation, an approach that has proven effective in other contexts of heterogeneity. Regulators will also need to maintain ongoing vigilance and agility.
More than in other contexts, regulators of multifunctional AI will need sufficient resources, top
human talent and leadership, and organizational cultures committed to regulatory excellence.
Authors: Coglianese C.; Crum C.R.