Noam Kolt of the University of Toronto has written “Governing AI Agents.” Here is the abstract:
While language models and generative AI have taken the world by storm, a more transformative technology is already being developed: “AI agents” — AI systems that can autonomously plan and execute complex tasks with only limited human oversight. Companies that pioneered the production of tools for generating synthetic content are now building AI agents that can independently navigate the internet, perform a wide range of online tasks, and increasingly serve as automated personal assistants. The opportunities presented by this new technology are tremendous, as are the associated risks. Fortunately, there exist robust analytic frameworks for confronting many of these challenges, namely the economic theory of principal-agent problems and the common law doctrine of agency relationships. Drawing on these frameworks, this Article makes three contributions. First, it uses agency law and theory to identify and characterize problems arising from AI agents, including issues of information asymmetry, discretionary authority, and loyalty. Second, it illustrates the limitations of conventional solutions to agency problems: incentive design, monitoring, and enforcement might not be effective for governing AI agents that make uninterpretable decisions and operate at unprecedented speed and scale. Third, the Article explores the implications of agency law and theory for designing and regulating AI agents, arguing that new technical and legal infrastructure is needed to support governance principles of inclusivity, visibility, and liability.