Noam Kolt of the University of Toronto has written Predicting Consumer Contracts, 37 Berkeley Technology Law Journal (forthcoming 2022). Here is the abstract:
This Article empirically examines whether a computational language model can read and understand consumer contracts. Language models are able to perform a wide range of complex tasks by predicting the next word in a sequence. In the legal domain, language models can summarize laws, draft case documents, and translate legalese into plain English. However, the ability of language models to inform consumers of their contractual rights and obligations has not been explored in detail.
To showcase the opportunities and challenges of using language models to read consumer contracts, this Article studies the performance of GPT-3, a powerful language model released in June 2020. The case study employs a novel dataset comprising questions about the terms of service of popular U.S. websites. Although the results are not definitive, they offer several important insights. First, owing to its immense training data, the model can exploit subtle informational cues embedded in questions. Second, the model performed poorly on contractual provisions that favor the rights and interests of consumers, suggesting that it may contain an anti-consumer bias. Third, the model is brittle in unexpected ways. Performance was highly sensitive to the wording of questions, but surprisingly indifferent to variations in contractual language.
While language models could potentially empower consumers, they could also provide misleading legal advice and entrench harmful biases. Leveraging the benefits of language models in reading consumer contracts and confronting the challenges they pose requires a combination of engineering and governance. Policymakers, together with developers and users of language models, should begin exploring technical and institutional safeguards to ensure that language models are used responsibly and align with broader social values.