Error of the Day & Maintaining Integrity of Algorithmic Results

If you're into algorithms, you should absolutely subscribe to the MIT Technology Review newsletter called The Algorithm.

Earlier this week, the folks at The Algorithm asked "what is AI, exactly?" The answer is reproduced below.

The question may seem basic, but the answer is kind of complicated.

In the broadest sense, AI refers to machines that can learn, reason, and act for themselves. They can make their own decisions when faced with new situations, in the same way that humans and animals can.

As it currently stands, the vast majority of the AI advancements and applications you hear about refer to a category of algorithms known as machine learning. These algorithms use statistics to find patterns in massive amounts of data. They then use those patterns to make predictions on things like what shows you might like on Netflix, what you’re saying when you speak to Alexa, or whether you have cancer based on your MRI.

Machine learning, and its subset deep learning (basically machine learning on steroids), is incredibly powerful. It is the basis of many major breakthroughs, including facial recognition, hyper-realistic photo and voice synthesis, and AlphaGo, the program that beat the best human player in the complex game of Go. But it is also just a tiny fraction of what AI could be.

The grand idea is to develop something resembling human intelligence, which is often referred to as “artificial general intelligence,” or “AGI.” Some experts believe that machine learning and deep learning will eventually get us to AGI with enough data, but most would agree there are big missing pieces and it’s still a long way off. AI may have mastered Go, but in other ways it is still much dumber than a toddler.

In that sense, AI is also aspirational, and its definition is constantly evolving. What would have been considered AI in the past may not be considered AI today. 

Because of this, the boundaries of AI can get really confusing, and the term often gets mangled to include any kind of algorithm or computer program. We can thank Silicon Valley for constantly inflating the capabilities of AI for its own convenience.

It's good to be reminded of this definition as we contend with the latest releases of the legal research databases, which continuously tweak their underlying algorithms -- most recently Westlaw Edge.

With Westlaw Edge comes a revised "WestSearch Plus."

Introducing the next generation of legal search. Get superior predictive research suggestions as you start typing your legal query in the global search bar.

WestSearch Plus applies state-of-the-art AI technologies to help you quickly address legal questions for thousands of legal topics without needing to drill into a results list.

We're reaching a point where the Google Generation is already predisposed not to drill into a results list, and now the databases are actively encouraging users to rely blindly on the top result.

Along with the consequences of fake news for algorithmic results on Google, for example, we must also be aware of the errors within the legal research databases themselves. To that end, a fellow law librarian, Mary Matuszak, has been collecting the errors she finds in the various databases during the legal research process and distributing them via the Law-Lib listserv as "Error of the Day."

From October 30, 2018:
Error of the Day: A Lexis typo (possibly a scanning error) in Excessiveness of Bail in State Cases, 7 A.L.R.6th 487. The following group of letters is used six times throughout the document: CocainesepBail. A quick look at the Westlaw version shows that it should be Cocaine – Bail.

From November 5, 2018:
In the case People v Kindell, 148 AD3d 456 (1st Dept 2017), Susan Axelrod is listed as both the counsel for the Appellant and the Respondent. The official version, the print, does not list the attorneys.
I confirmed with ADA Axelrod that she did not represent the defendant and that opposing counsel was not someone with the same name. I also checked the defendant’s brief, and it lists Ms. Moser as counsel.

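The October 30 item shows how a single scanning artifact can quietly defeat an otherwise sound search. The sketch below is purely illustrative, with made-up document text and a toy phrase-matching function rather than any vendor's actual index, but it captures why a full-text query for the intended phrase never surfaces the garbled copy.

```python
# Illustrative only: a toy full-text lookup, not Lexis's or Westlaw's actual search engine.
documents = {
    "7 A.L.R.6th 487 (clean text)":   "Excessiveness of bail in state cases ... Cocaine - Bail ...",
    "7 A.L.R.6th 487 (garbled text)": "Excessiveness of bail in state cases ... CocainesepBail ...",
}

def phrase_search(docs, phrase):
    """Return the names of documents whose text contains the phrase, ignoring case."""
    return [name for name, text in docs.items() if phrase.lower() in text.lower()]

# The intended heading matches only the clean copy; the scanning artifact
# "CocainesepBail" hides the garbled copy from an exact phrase match.
print(phrase_search(documents, "Cocaine - Bail"))   # ['7 A.L.R.6th 487 (clean text)']
```
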
While these errors may seem minute individually, the consequences are greater in the aggregate.
My own mentor, a law librarian who had been in the profession for 40 years, kept a print file of the errors he found in the databases while performing legal research. The file was overflowing by the time I saw it, roughly three years before his retirement.

An algorithm's results are only as good as the underlying data. As we move toward an algorithmic society that relies heavily on automated decision making, these errors could have real consequences for the development of the law.
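
To make that concern concrete, here is a minimal sketch of a toy term-frequency ranker. The corpus, query, and scoring are all hypothetical and bear no relation to the vendors' proprietary relevance algorithms, but they show how a single garbled token can demote a document in the results, which matters most when researchers read only the top hit.

```python
from collections import Counter

# Hypothetical corpus: the same annotation with and without the scanning error.
corpus = {
    "A.L.R. annotation (clean)":   "cocaine bail excessive bail state cases cocaine bail",
    "A.L.R. annotation (garbled)": "cocainesepbail excessive bail state cases cocainesepbail",
    "Unrelated treatise section":  "bail bonds surety agreements forfeiture",
}

def score(text, query_terms):
    """Toy relevance score: total occurrences of the query terms in the text."""
    counts = Counter(text.split())
    return sum(counts[term] for term in query_terms)

query = ["cocaine", "bail"]
for name in sorted(corpus, key=lambda n: score(corpus[n], query), reverse=True):
    print(score(corpus[name], query), name)

# 5 A.L.R. annotation (clean)      <- the clean copy ranks first
# 2 A.L.R. annotation (garbled)    <- the garbled copy loses its "cocaine" hits and drops
# 1 Unrelated treatise section
```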
