Disruption in Law: Algorithms that Doubt Themselves
Among the predicted disruptions that AI systems will bring to law and legal research, one of the most compelling arguments against blindly relying on algorithms is their lack of transparency and the unknown probability of error.
This may be less of a concern going forward: the MIT Technology Review reports that Google and others are building AI systems that doubt themselves.
Researchers at Uber and Google are working on modifications to the two most popular deep-learning frameworks that will enable them to handle probability. This will provide a way for the smartest AI programs to measure their confidence in a prediction or a decision—essentially, to know when they should doubt themselves.
The work reflects the realization that uncertainty is a key aspect of human reasoning and intelligence. Adding it to AI programs could make them smarter and less prone to blunders, says Zoubin Ghahramani, a prominent AI researcher who is a professor at the University of Cambridge and chief scientist at Uber.
David Blei, a professor of statistics and computer science at Columbia University, says combining deep learning and probabilistic programming is a promising idea that needs more work. “In principle, it’s very powerful,” he says. “But there are many, many technical challenges.”
In recent years, the neural-network school has been so dominant that other ideas have been all but left behind. To move forward, the field may need to embrace these other ideas. "The interesting story here is that you don't have to think of these camps as separate," says Noah Goodman, a Stanford professor who works on probabilistic programming. "They can come together—in fact, they are coming together—in the tools that we are now building."
Combining conventional neural networks, which learn only from the data they are fed, with probabilistic programming has the potential to mimic human reasoning more closely. It also gives human users a way to gauge their confidence in an algorithm's results, because the algorithm supplies a confidence ranking alongside each prediction.
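To make the idea concrete, here is a minimal, illustrative sketch in Python. It is not real probabilistic programming or a deep-learning framework; a toy ensemble of one-parameter models stands in for the probabilistic machinery, and all function names here are hypothetical. The key point it demonstrates is the one above: the system returns not just an answer but a measure of how much it doubts that answer.

```python
import random
import statistics

def train_ensemble(data, n_models=50, noise=0.5):
    """Fit a tiny ensemble of one-parameter linear models (y = w * x).

    Each member is fit to a noisy resample of the data, so the members
    disagree with one another where the data are ambiguous. The ensemble
    is a crude stand-in for the probabilistic layers described above.
    """
    models = []
    for _ in range(n_models):
        # Least-squares slope on a jittered copy of the data.
        pts = [(x, y + random.gauss(0, noise)) for x, y in data]
        num = sum(x * y for x, y in pts)
        den = sum(x * x for x, _ in pts)
        models.append(num / den)
    return models

def predict_with_confidence(models, x):
    """Return (prediction, spread).

    The spread across ensemble members is the algorithm's self-doubt:
    a wide spread means the answer should not be trusted blindly.
    """
    preds = [w * x for w in models]
    return statistics.mean(preds), statistics.pstdev(preds)
```

A user would call `predict_with_confidence` and treat the second value as a warning flag: for inputs far from the training data the members diverge, the spread grows, and the system effectively says "I am not sure."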
While it is encouraging to hear that this work is being done, human users should remain cautiously optimistic.
For example, IBM Watson for Healthcare has already been touted as combining deep learning with probabilistic programming to assist with cancer prognosis and treatment. In practice, however, after three years in the field with leading cancer centers in the United States, it is still not living up to the hype and has been abandoned by some of those centers.
As David Blei noted in the MIT Technology Review article, "In principle, it's very powerful. But there are many, many technical challenges." For now, those of us in law should take care to understand the challenges associated with AI and comply with our ethical duty to understand the risks and benefits of the relevant technology.