Pitfalls of Artificial Intelligence Decisionmaking Highlighted In Idaho ACLU Case

One of the biggest civil liberties issues raised by technology today is whether, when, and how we allow computer algorithms to make decisions that affect people’s lives. We’re starting to see this in particular in the criminal justice system. For the past several years the ACLU of Idaho has been involved in a fascinating case that, so far as I can tell, has received very little if any national coverage, but which raises issues that are core to the new era of big data that we are entering.

The case, K.W. v. Armstrong, is a class action lawsuit brought by the ACLU representing about 4,000 Idahoans with developmental and intellectual disabilities who receive assistance from the state’s Medicaid program. I spoke recently with Richard Eppink, Legal Director of the ACLU of Idaho, and he told me about the case:

It originally started because a bunch of people were contacting me and saying that the amount of assistance they were being given each year by the state Medicaid program was being suddenly cut by 20 or 30 percent. I thought the case would be a simple matter of saying to the state, “Okay, tell us why these dollar figures dropped by so much.”

What happens in this particular program is that each year you go to an assessment interview with an assessor who is a contractor with the Medicaid program, and they ask you a bunch of questions. The assessor plugs these into an Excel spreadsheet, and it comes out with this dollar figure amount, which is how much you can spend on your services that year.

But when we asked them how the dollar amounts were arrived at, the Medicaid program came back and said, “we can’t tell you that, it’s a trade secret.”

And so that’s what led to the lawsuit. We said “you’ve got to release this, you can’t just be coming up with these numbers using a secret formula.” And then, within a couple of weeks of filing the case, the court agreed and told the state, “yeah, you have to disclose that.” In a ruling from the bench the judge said it’s just a blatant due process violation to tell people you’re going to reduce their health care services by $20,000 in a year for some secret reason. The judge also ruled on Medicaid Act grounds—there are requirements in the act that if you’re going to reduce somebody’s coverage, you have to explain why.

That was five years ago. And once we got their formula, we hired a couple of experts to dig into it and figure out what it was doing—how the whole process was working, both the assessment—the formula itself—and the data that was used to create it.
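Mechanically, a spreadsheet tool like the one Eppink describes often boils down to a weighted sum of assessment answers. The sketch below is purely illustrative; the weights, field names, and base amount are all invented, and Idaho’s actual formula was secret at the time. But it shows the basic shape such a calculation can take:

```python
# Hypothetical illustration of a spreadsheet-style budget formula:
# each assessment answer gets a weight, and the weighted sum plus a
# base amount becomes the annual dollar limit. All numbers invented.

ASSESSMENT_WEIGHTS = {
    "mobility_score": 1200.0,    # points from the mobility questions
    "self_care_score": 950.0,    # points from the self-care questions
    "behavioral_score": 800.0,   # points from the behavioral questions
}
BASE_AMOUNT = 5000.0

def annual_budget(answers: dict) -> float:
    """Turn assessment answers into a dollar limit, spreadsheet-style."""
    total = BASE_AMOUNT
    for item, weight in ASSESSMENT_WEIGHTS.items():
        total += weight * answers.get(item, 0)
    return round(total, 2)

print(annual_budget({"mobility_score": 4,
                     "self_care_score": 3,
                     "behavioral_score": 2}))  # → 14250.0
```

If the real process is anything like this, there is no technical obstacle to disclosure: every input, weight, and intermediate sum can be printed alongside the final figure.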

Eppink said the experts that they hired found big problems with what the state Medicaid program was doing:

There were a lot of things wrong with it. First of all, the data they used to come up with their formula for setting people’s assistance limits was corrupt. They were using historical data to predict what was going to happen in the future. But they had to throw out two-thirds of the records they had before they came up with the formula because of data entry errors and data that didn’t make sense. So they were supposedly predicting what this population was going to need, but the historical data they were using was flawed, and they were only able to use a small subset of it. And bad data produces bad results.

A second thing is that the state itself had found in its own testing that there were problems—disproportionate results for different parts of the state that couldn’t be explained.

And the third thing is that our experts found that there were fundamental statistical flaws in the way that the formula itself was structured.
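The second problem, unexplained regional disparities, is exactly the kind of thing a routine audit can surface. Here is a minimal sketch of such a check, with invented numbers, that flags any region whose average budget deviates sharply from the overall mean:

```python
from statistics import mean

# Invented example data: predicted annual budgets grouped by region.
budgets_by_region = {
    "north": [14000, 15200, 13800, 14600],
    "south": [14400, 13900, 15100, 14200],
    "east":  [9800, 10400, 9500, 10100],   # anomalously low
}

overall = mean(b for bs in budgets_by_region.values() for b in bs)

def flag_disparities(groups, threshold=0.20):
    """Return regions whose mean budget differs from the overall mean
    by more than `threshold` (as a fraction), i.e. the kind of
    unexplained disproportionate result the state itself found."""
    return [region for region, budgets in groups.items()
            if abs(mean(budgets) - overall) / overall > threshold]

print(flag_disparities(budgets_by_region))  # → ['east']
```

A check like this takes minutes to run once the data exists; the finding in the case was that disparities showed up in the state’s own testing but went unexplained.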

Idaho’s Medicaid bureaucracy was making arbitrary and irrational decisions with big impacts on people’s lives, and fighting efforts to make it explain how it was reaching those decisions. This lack of transparency is unconscionable. Algorithms are often highly complicated, and when you marry them to human social/legal/bureaucratic systems, the complexity only skyrockets. That means public transparency is vital. The experience in Idaho only confirms this.

I asked Eppink, if Idaho’s decisionmaking system was so irrational, why did the state rely on it?

I don’t actually get the sense they even knew how bad this was. It’s just this bias we all have for computerized results—we don’t question them. It’s a cultural, maybe even biological thing, but when a computer generates something—when you have a statistician, who looks at some data, and comes up with a formula—we just trust that formula, without asking “hey wait a second, how is this actually working?” So I think the state fell victim to this complacency that we have with computerized decisionmaking.

Secondly, I don’t think anybody at the Medicaid program really thought about how this was working. When we took the depositions in the case I asked each person we deposed from the program to explain to me how they got from these assessment figures to this number, and everybody pointed a finger at somebody else. “I don’t know that, but this other person does.” So I would take a deposition from that other person, and that person pointed at somebody else, and eventually everybody was pointing around in a circle.

And so, that machine bias or complacency, combined with this idea that nobody really fully understood this—it was a lack of understanding of the process on the part of everybody; everybody assumed somebody else knew how it worked.

This, of course, is one of the time-honored horrors of bureaucracies: the fragmentation of intelligence that (as I have discussed) allows hundreds or thousands of intelligent, ethical individuals to behave in ways that are collectively stupid and/or unethical. I have written before about a fascinating paper by Danielle Citron entitled “Technological Due Process,” which looks at the problems and solutions that arise when translating human rules and policies into computer code. This case shows those problems in action.

So what are the solutions in this case? Eppink:

A couple years ago after we’d done all that discovery and worked with the experts, we put it together in a summary judgment package for the judge. And last year the court held that the formula itself was so bad that it was unconstitutional—violated due process—because it was effectively producing arbitrary results for a large number of people. And the judge ordered that the Medicaid program basically overhaul the way it was doing this. That includes regular testing, regular updating, and the use of quality data. And that’s where we are now; they’re in the process of doing that.

My hunch is that this kind of thing is happening a lot across the United States and across the world as people move to these computerized systems. Nobody understands them, they think that somebody else does—but in the end we trust them. Even the people in charge of these programs have this trust that these things are working.

And the unfortunate part, as we learned in this case, is that it costs a lot of money to actually test these things and make sure they’re working right. It cost us probably $50,000, and I don’t think that a state Medicaid program is going to be motivated to spend the money that it takes to make sure these things are working right. Or even these private companies that are running credit predictions, housing predictions, recidivism predictions—unless the cost is internalized on them through litigation, and it’s understood that “hey, eventually somebody’s going to have the money to test this, so it better be working.”
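The remedies the court ordered (regular testing, regular updating, and quality data) translate into ordinary engineering practice. As one hedged illustration of the “quality data” piece, a validation gate that rejects records with entry errors before they feed a formula might look like the sketch below; the field names and valid ranges are invented, not Idaho’s actual schema:

```python
def validate_record(record,
                    required=("mobility_score", "self_care_score"),
                    valid_range=(0, 10)):
    """Return a list of problems with one assessment record; an empty
    list means the record is clean enough to use. Fields and ranges
    here are illustrative only."""
    problems = []
    for field in required:
        value = record.get(field)
        if value is None:
            problems.append(f"missing {field}")
        elif not (valid_range[0] <= value <= valid_range[1]):
            problems.append(f"{field} out of range: {value}")
    return problems

clean = {"mobility_score": 4, "self_care_score": 7}
dirty = {"mobility_score": 44}   # entry typo, plus a missing field

print(validate_record(clean))  # → []
print(validate_record(dirty))
```

Gating records this way up front is far cheaper than discarding two-thirds of the historical data after the fact, and it leaves an audit trail of why each record was excluded.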

As our technological train hurtles down the tracks, we need policymakers at the federal, state, and local level who have a good understanding of the pitfalls involved in using computers to make decisions that affect people’s lives.


