The Importance of Using Reasonable Care When Relying on Algorithms in Law: A Case Study
Given the habits of the "Google Generation" and the seeming ease of using algorithms in law, exercising reasonable care to comply with the newer Duty of Technology Competence is increasingly challenging. As noted in a previous post:
The Duty of Technology Competence requires lawyers to keep abreast of “changes in the law and its practice, including the benefits and risks associated with relevant technology.” To date, 28 states have adopted the duty.
Using reasonable care to understand the benefits and risks associated with relevant technologies is increasingly difficult as society moves beyond the abundance of information that defined the Information Age and comes to rely on algorithms that sort big data in the Algorithmic Society.
However, reasonable care is essential when using algorithms in law. Take the COMPAS risk-assessment tool, for example. Developed by a private company called Equivant (formerly Northpointe), COMPAS—or the Correctional Offender Management Profiling for Alternative Sanctions—purports to predict a defendant’s risk of committing another crime. It works through a proprietary algorithm that considers some of the answers to a 137-item questionnaire. COMPAS is one of several such risk-assessment algorithms being used around the country to predict hot spots of violent crime, determine the types of supervision that inmates might need, or provide information that might be useful in sentencing.
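Because the model is proprietary, its exact form is unknown. Questionnaire-based risk tools are often described as weighted combinations of answers, so a purely illustrative sketch might look like the following. To be clear, the features, weights, and logistic form here are hypothetical and assume nothing about how COMPAS actually works:

```python
# Hypothetical sketch of a questionnaire-based risk score.
# Nothing here reflects COMPAS's actual (proprietary) model;
# the features, weights, and logistic form are illustrative only.
import math

# Hypothetical answers drawn from a screening questionnaire
answers = {
    "prior_arrests": 3,          # count
    "age_at_first_arrest": 19,   # years
    "employed": 0,               # 1 = yes, 0 = no
}

# Hypothetical learned weights and intercept
weights = {"prior_arrests": 0.40, "age_at_first_arrest": -0.05, "employed": -0.60}
intercept = -0.25

def risk_probability(answers, weights, intercept):
    """Logistic model: a weighted sum of answers mapped to a 0-1 risk score."""
    z = intercept + sum(weights[k] * v for k, v in answers.items())
    return 1.0 / (1.0 + math.exp(-z))

p = risk_probability(answers, weights, intercept)
# Tools of this kind typically report a 1-10 decile score; here we just bin the probability.
decile = min(int(p * 10) + 1, 10)
print(f"risk probability: {p:.2f}, decile score: {decile}")
```

The point of the sketch is that the output a judge sees (a single score) compresses away every modeling choice behind it, which is exactly why the lack of transparency discussed below matters.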
The inner workings of the algorithm are kept secret, and a recent Wisconsin Supreme Court ruling urged caution and skepticism in its use. Caution is indeed warranted, according to Julia Dressel and Hany Farid of Dartmouth College. In a new study, they showed that COMPAS is no better at predicting an individual's risk of recidivism than random volunteers recruited from the internet.
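Their comparison comes down to measuring predictive accuracy on the same set of cases. A minimal sketch of that kind of accuracy comparison, using fabricated data rather than the study's actual cases, might look like this:

```python
# Minimal sketch of comparing two predictors' accuracy on the same cases.
# The data below is fabricated for illustration; see Dressel & Farid (2018)
# for the actual study design and results.

# 1 = predicted/observed recidivism, 0 = not
outcomes        = [1, 0, 1, 1, 0, 0, 1, 0]  # what actually happened
algo_predicted  = [1, 0, 0, 1, 1, 0, 1, 0]  # the algorithm's calls
human_predicted = [1, 1, 1, 1, 0, 0, 0, 0]  # pooled volunteer calls

def accuracy(predicted, actual):
    """Fraction of cases where the prediction matched the outcome."""
    return sum(p == a for p, a in zip(predicted, actual)) / len(actual)

print(f"algorithm accuracy: {accuracy(algo_predicted, outcomes):.0%}")
print(f"human accuracy:     {accuracy(human_predicted, outcomes):.0%}")
```

In this toy data both predictors land at the same accuracy, which mirrors the study's headline finding in miniature.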
A major issue is the lack of transparency: human users cannot see how the algorithm generated its results, and those results may be biased. In 2016, the technology reporter Julia Angwin and colleagues at ProPublica analyzed COMPAS assessments for more than 7,000 arrestees in Broward County, Florida, and published an investigation claiming that the algorithm was biased against African Americans. The problems, they said, lay in the algorithm's mistakes. "Blacks are almost twice as likely as whites to be labeled a higher risk but not actually re-offend," the team wrote. And COMPAS "makes the opposite mistake among whites: They are much more likely than blacks to be labeled lower-risk but go on to commit other crimes."
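The mistakes ProPublica described are differences in error rates between groups: the false positive rate (labeled higher risk but did not re-offend) and the false negative rate (labeled lower risk but did re-offend). A sketch of that calculation over hypothetical records, not the actual Broward County data, could look like this:

```python
# Sketch of the error-rate comparison ProPublica ran, computed over
# hypothetical records (label, re-offended, group), not the real Broward data.

records = [
    # (labeled_high_risk, re_offended, group)
    (True,  False, "black"), (True,  True,  "black"),
    (False, True,  "black"), (True,  False, "black"),
    (False, False, "white"), (False, True,  "white"),
    (True,  True,  "white"), (False, True,  "white"),
]

def error_rates(records, group):
    """False positive and false negative rates for one group."""
    rows = [r for r in records if r[2] == group]
    neg = [r for r in rows if not r[1]]  # did not re-offend
    pos = [r for r in rows if r[1]]      # did re-offend
    fpr = sum(r[0] for r in neg) / len(neg)      # labeled high risk, no re-offense
    fnr = sum(not r[0] for r in pos) / len(pos)  # labeled low risk, re-offended
    return fpr, fnr

for g in ("black", "white"):
    fpr, fnr = error_rates(records, g)
    print(f"{g}: false positive rate {fpr:.0%}, false negative rate {fnr:.0%}")
```

An algorithm can score well on overall accuracy while its false positive and false negative rates diverge sharply between groups, which is why this breakdown, and not accuracy alone, is the crux of the bias claim.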
Lawyers and judges who use algorithms in law should exercise reasonable care in assessing an algorithm's results. They must understand the pitfalls of relying on algorithms and try, although the lack of transparency makes this very difficult, to avoid those pitfalls. In fact, in many jurisdictions, members of the bar are required to use reasonable care to comply with the Duty of Technology Competence.
AI is going to have major consequences for jurisprudence. It's kind of worrisome.
That is so true. My fear is that we'll get comfortable and complacent and let it happen.