David Horton article: Forced Robot Arbitration

David Horton of the University of California, Davis has written Forced Robot Arbitration, forthcoming in 109 Cornell Law Review (2023). Here’s the abstract:

Recently, advances in artificial intelligence (AI) have sparked interest in a topic that sounds like science fiction: robot judges. Researchers have harnessed AI to build programs that can predict the outcome of legal disputes. Some countries have even begun allowing AI systems to resolve small claims. These developments are fueling a fascinating debate over whether AI courts will increase access to justice or undermine the rule of law.

However, this Article argues that AI adjudication is more likely to set down roots in one of the most controversial areas of the American civil justice system: forced arbitration. For decades, corporations and arbitration providers have capitalized on the U.S. Supreme Court’s muscular interpretation of the Federal Arbitration Act (FAA) to create their own alternative procedural universes. These entities may soon take the next step and eliminate human decision-makers in some contexts. First, most objections to AI judges do not apply to AI arbitrators. For example, because AI systems suffer from the “black box problem”—they cannot explain the reasoning behind their conclusions—deploying them in the judicial system might violate procedural due process principles. But opacity is already the norm in arbitration, which is private, confidential, and often features awards that are unwritten. Second, although AI legal prediction tools are still embryonic, they work well in the simple debt collection and employment misclassification disputes that businesses routinely funnel into arbitration. Third, AI programs require little overhead and operate at lightning speed. The ability to streamline the process has become especially important in the last few years, as plaintiffs’ lawyers have begun filing “mass arbitrations”—overloading the system with scores of individual claims in an effort to saddle defendants with millions of dollars in fees. For these reasons, companies and arbitration providers have powerful financial incentives to experiment with automating decision-making in certain cases.

The Article then offers an insight that will have a profound impact on this futuristic form of dispute resolution. Drawing on the FAA’s text, structure, and legislative history, the Article contends that the statute only applies to adjudication conducted by a “person.” Thus, there is no federal mandate that courts enforce agreements to resolve disputes by AI. In turn, because state law fills gaps in the FAA, individual jurisdictions will be able to decide for themselves whether to permit or prohibit robot arbitration. Finally, the Article explains why this incremental approach is better than either barring AI dispute resolution or finding that it triggers the gale force of the FAA.
