14 March 2018

Legal Automation

'A Rule of Persons, Not Machines: The Limits of Legal Automation' by Frank A. Pasquale in (2018) George Washington Law Review (Forthcoming) comments
For many legal futurists, attorneys’ work is a prime target for automation. They view the legal practice of most businesses as algorithmic: data (such as facts) are transformed into outputs (agreements or litigation stances) via application of set rules. These technophiles promote substituting computer code for contracts and descriptions of facts now written by humans. They point to early successes in legal automation as proof of concept. TurboTax has helped millions of Americans file taxes, and algorithms have taken over certain aspects of stock trading. Corporate efforts to “formalize legal code” may bring new efficiencies in areas of practice characterized by both legal and factual clarity. 
However, legal automation can also elide or exclude important human values, necessary improvisations, and irreducibly deliberative governance. Due process, appeals, and narratively intelligible explanation from persons, for persons, depend on forms of communication that are not reducible to software. Language is constitutive of these aspects of law. To preserve accountability and a humane legal order, the reasons for legal decisions must be expressed in language by a responsible person. This basic requirement for legitimacy limits legal automation in several contexts, including corporate compliance, property recordation, and contracting. A robust and ethical legal profession respects the flexibility and subtlety of legal language as a prerequisite for a just and accountable social order. It ensures a rule of persons, not machines.
An earlier analysis by Pasquale is here.

'Artificial Intelligence: The Importance of Trust & Distrust' (UC Hastings Research Paper No. 268) by Robin Feldman comments
Artificial Intelligence (AI) is percolating through modern society. Just as technology made the leap from analog to digital, and communications catapulted forward from rotary-dial telephones to smartphones, FaceTime, and WhatsApp, AI has gone from needing hundreds of square feet of computers to play chess to identifying faces with a thumbnail of silicon. Most important, one cannot overemphasize the speed at which we are hurtling towards a world in which AI will be ubiquitous, seeping into every corner of what we do.
As AI becomes part of our everyday life, a key aspect will be the way in which society—and by extension, the legal system—manages both the integration of these systems and society’s expectations. In this context, this essay suggests that the concepts of trust and distrust will be critical for navigating the road ahead, particularly if we want to avoid societal unrest and upheaval. We will have to learn to trust the capacity of AI systems enough that we can soar to new heights without succumbing to the irrational exuberance that can send us crashing to the ground when our hopes are dashed by their inability to live up to our blind expectations. And we must learn to tolerate the ambiguity that lies between these two extremes. To accomplish these goals, the essay proposes three general principles that should form the basis of the legal regimes—both regulatory and property—that will be necessary for AI.