Pitfalls of Artificial Intelligence Decisionmaking Highlighted In Idaho ACLU Case

One of the biggest civil liberties issues raised by technology today is whether, when, and how we allow computer algorithms to make decisions that affect people’s lives. We’re starting to see this play out in particular in the criminal justice system. For the past several years the ACLU of Idaho has been involved in a fascinating case that, so far as I can tell, has received little if any national coverage, but which raises issues core to the new era of big data we are entering.

The case, K.W. v. Armstrong, is a class action lawsuit brought by the ACLU representing about 4,000 Idahoans with developmental and intellectual disabilities who receive assistance from the state’s Medicaid program. I spoke recently with Richard Eppink, Legal Director of the ACLU of Idaho, and he told me about the case:

It originally started because a bunch of people were contacting me and saying that the amount of assistance they were being given each year by the state Medicaid program was suddenly being cut by 20 or 30 percent. I thought the case would be a simple matter of saying to the state, “Okay, tell us why these dollar figures dropped by so much.”

What happens in this particular program is that each year you go to an assessment interview with an assessor who is a contractor with the Medicaid program, and they ask you a bunch of questions. The assessor plugs these into an Excel spreadsheet, and out comes a dollar figure, which is how much you can spend on your services that year.

But when we asked them how the dollar amounts were arrived at, the Medicaid program came back and said, “we can’t tell you that, it’s a trade secret.”

And so that’s what led to the lawsuit. We said “you’ve got to release this, you can’t just be coming up with these numbers using a secret formula.” And then, within a couple of weeks of filing the case, the court agreed and told the state, “yeah, you have to disclose that.” In a ruling from the bench the judge said it’s just a blatant due process violation to tell people you’re going to reduce their health care services by $20,000 in a year for some secret reason. The judge also ruled on Medicaid Act grounds—there are requirements in the act that if you’re going to reduce somebody’s coverage, you have to explain why.

That was five years ago. And once we got their formula, we hired a couple of experts to dig into it and figure out what it was doing—how the whole process was working, both the assessment—the formula itself—and the data that was used to create it.
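To make the mechanics concrete, here is a minimal sketch of what a score-to-dollars formula of this general kind can look like. Everything in it (the questions, the weights, the dollar amounts) is invented for illustration; Idaho’s actual formula was still secret at this point in the story.

```python
# Hypothetical sketch of the general kind of score-to-dollars formula
# Eppink describes. Every question, weight, and dollar figure here is
# invented for illustration; none of it is Idaho's actual formula.

# Assessor's answers from the interview, e.g. on 0-4 need scales.
assessment = {
    "mobility": 3,
    "self_care": 2,
    "communication": 4,
    "behavioral_support": 1,
}

# Regression-style weights: dollars of annual budget per point of need.
# In a real system these would be fit from historical spending data,
# which is exactly where corrupt records poison the result.
weights = {
    "mobility": 2500.0,
    "self_care": 3100.0,
    "communication": 1200.0,
    "behavioral_support": 4800.0,
}
BASE_BUDGET = 8000.0  # intercept: minimum annual allocation

def annual_budget(scores):
    """Linear formula: base amount plus weighted sum of need scores."""
    return BASE_BUDGET + sum(weights[k] * v for k, v in scores.items())

print(f"Annual services budget: ${annual_budget(assessment):,.2f}")
# Without seeing the weights, a participant cannot contest this number,
# which is the heart of the due process complaint.
```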

Eppink said the experts that they hired found big problems with what the state Medicaid program was doing:

There were a lot of things wrong with it. First of all, the data they used to come up with their formula for setting people’s assistance limits was corrupt. They were using historical data to predict what was going to happen in the future. But they had to throw out two-thirds of the records they had before they came up with the formula, because of data entry errors and data that didn’t make sense. So they were supposedly predicting what this population was going to need, but the historical data they were using was flawed, and they were only able to use a small subset of it. And bad data produces bad results.

A second thing is that the state itself had found in its own testing that there were problems—disproportionate results for different parts of the state that couldn’t be explained.

And the third thing is that our experts found that there were fundamental statistical flaws in the way that the formula itself was structured.
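Once the formula and its underlying data were disclosed, findings like these map onto checks an outside expert can actually run. Below is a toy version of the first two (a data-validity count and a regional disparity comparison) on invented records; the third finding, about the formula’s statistical structure, is the sort of thing the validation sketch later in this post gets at. None of this is the experts’ actual methodology, just the general shape of such an audit.

```python
# Toy audit on invented records -- not the experts' actual analysis,
# just the general shape of the first two findings above.
from statistics import mean

records = [
    # (region, assessed_need_score, historical_annual_cost)
    ("north", 12, 42000.0),
    ("north",  9, 35500.0),
    ("south", 12, 29000.0),  # same need score, far lower cost?
    ("south", 10, 24500.0),
    ("north", -3, 41000.0),  # impossible score: data entry error
    ("south", 11,     0.0),  # zero cost: likely missing data
]

# Finding 1: how much of the historical data is even usable?
valid = [r for r in records if r[1] > 0 and r[2] > 0]
print(f"usable records: {len(valid)}/{len(records)} "
      f"({len(valid) / len(records):.0%})")

# Finding 2: do similar needs get similar dollars across regions?
by_region = {}
for region, score, cost in valid:
    by_region.setdefault(region, []).append(cost / score)
for region, rates in sorted(by_region.items()):
    print(f"{region}: ${mean(rates):,.0f} per point of assessed need")
# A large unexplained gap between regions is exactly the kind of
# disproportionate result the state's own testing turned up.
```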

Idaho’s Medicaid bureaucracy was making arbitrary and irrational decisions with big impacts on people’s lives, and fighting efforts to make it explain how it was reaching those decisions. This lack of transparency is unconscionable. Algorithms are often highly complicated, and when you marry them to human social/legal/bureaucratic systems, the complexity only skyrockets. That means public transparency is vital. The experience in Idaho only confirms this.

I asked Eppink, if Idaho’s decisionmaking system was so irrational, why did the state rely on it?

I don’t actually get the sense they even knew how bad this was. It’s just this bias we all have for computerized results—we don’t question them. It’s a cultural, maybe even biological thing, but when a computer generates something—when you have a statistician, who looks at some data, and comes up with a formula—we just trust that formula, without asking “hey wait a second, how is this actually working?” So I think the state fell victim to this complacency that we have with computerized decisionmaking.

Secondly, I don’t think anybody at the Medicaid program really thought about how this was working. When we took the depositions in the case I asked each person we deposed from the program to explain to me how they got from these assessment figures to this number, and everybody pointed a finger at somebody else. “I don’t know that, but this other person does.” So I would take a deposition from that other person, and that person pointed at somebody else, and eventually everybody was pointing around in a circle.

And so, that machine bias or complacency, combined with this idea that nobody really fully understood this—it was a lack of understanding of the process on the part of everybody; everybody assumed somebody else knew how it worked.

This, of course, is one of the time-honored horrors of bureaucracies: the fragmentation of intelligence that (as I have discussed) allows hundreds or thousands of intelligent, ethical individuals to behave in ways that are collectively stupid and/or unethical. I have written before about a fascinating paper by Danielle Citron entitled “Technological Due Process,” which looks at the problems and solutions that arise when translating human rules and policies into computer code. This case shows those problems in action.

So what are the solutions in this case? Eppink:

A couple years ago after we’d done all that discovery and worked with the experts, we put it together in a summary judgment package for the judge. And last year the court held that the formula itself was so bad that it was unconstitutional—violated due process—because it was effectively producing arbitrary results for a large number of people. And the judge ordered that the Medicaid program basically overhaul the way it was doing this. That includes regular testing, regular updating, and the use of quality data. And that’s where we are now; they’re in the process of doing that.

My hunch is that this kind of thing is happening a lot across the United States and across the world as people move to these computerized systems. Nobody understands them, they think that somebody else does—but in the end we trust them. Even the people in charge of these programs have this trust that these things are working.

And the unfortunate part, as we learned in this case, is that it costs a lot of money to actually test these things and make sure they’re working right. It cost us probably $50,000, and I don’t think that a state Medicaid program is going to be motivated to spend the money that it takes to make sure these things are working right. Or even these private companies that are running credit predictions, housing predictions, recidivism predictions—unless the cost is internalized on them through litigation, and it’s understood that “hey, eventually somebody’s going to have the money to test this, so it better be working.”
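The “regular testing” the court ordered is, at bottom, routine model validation: hold back cases the formula was never fit on and check that its predictions stay within a tolerable error. Here is a toy version of such a check; the stand-in formula, the held-out cases, and the 15 percent tolerance are all invented for illustration.

```python
# Toy sketch of the recurring validation a court-ordered overhaul
# implies. The formula, data, and tolerance are all invented.

def predicted_budget(need_score: float) -> float:
    """Stand-in for the state's formula: need score to annual dollars."""
    return 8000.0 + 3000.0 * need_score

# Held-out cases the formula was NOT fit on: (need_score, actual_cost).
holdout = [(10, 39500.0), (12, 46000.0), (9, 28000.0), (11, 44500.0)]

TOLERANCE = 0.15  # flag any prediction more than 15% off actual cost

def validate(cases):
    """Print every flagged case; return True only if all pass."""
    ok = True
    for score, actual in cases:
        pred = predicted_budget(score)
        err = abs(pred - actual) / actual
        if err > TOLERANCE:
            print(f"FLAG: score {score}: predicted ${pred:,.0f}, "
                  f"actual ${actual:,.0f} ({err:.0%} off)")
            ok = False
    return ok

# Run on fresh data every cycle; a failing run means the formula
# should be refit before it sets anyone's budget again.
print("PASS" if validate(holdout) else "FAIL: formula needs review")
```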

As our technological train hurtles down the tracks, we need policymakers at the federal, state, and local level who have a good understanding of the pitfalls involved in using computers to make decisions that affect people’s lives.

Comments

Anonymous

Drones are not AI. They are commanded and controlled by a human. Yes, they have autopilot protocols, but this is a far cry from AI. Drones do not have AI kill capability, in the sense that they autonomously select targets and launch missiles on their own. If the drone is commanded to attack a target with a preloaded image and coordinates, this is still not AI. The human is making all the decisions. The machine is running a program.

The machine does not make predictions or assumptions. The human tells the machine what to predict and what to assume. Ones and zeros. The machine doesn't choose which one to pick; the carbon-based organic human does.

Anonymous

Why is it that no one ever asks the question: what percentage of the federal monies the state is receiving actually gets to the DD Waiver clients, and what percentage are the bureaucrats at the state agencies taking?

Anonymous

Medicaid uses 3% for all its operations.

Joanna Bryson

I want to confirm that this is a crude form of AI – it's a rudimentary rule-based system. Like most AI, it doesn't really "act" without humans, but it augments (well, replaces) something that was a part of human decision making.

It's fantastic that the ACLU is doing such a great job on this issue. Regular testing, regular updating, the use of quality data, and a transparent algorithm for converting that data into outcomes – that's exactly what we need. One great thing about AI is that it does make this kind of decision explicit, so there should be a good record. But only with vigilance like this.

Anonymous

With that argument an IBM calculator is AI. I think not.

Anonymous

Machine learning is not "AI". "AI" is actually a misapplied blanket term that *could* mean pretty much anything from anti-spam to Skynet to that Osment kidbot who believed in the blue fairy because... people.

Noel Sharkey

Great work, Jay. It is a strong example of the kind of automated decision making that is concerning a lot of us.

I am not overly keen on the term AI because it leads to public misunderstanding.

But, as it currently stands, the problem here is a combination of AI decision making and human automation (complacency) bias.

We need to ensure deliberative human control of decisions affecting human lives.

Anonymous

One of the key points to this story is the well-known concept of "groupthink." The Idaho Medicaid administration had heavily insulated itself and was convinced that the algorithm was a sound measure and predictor, even though it was developed by a non-expert. No one in house could explain, nor did they understand, the algorithm. As pressure built from clients, families, legislators, Medicaid agencies, and eventually attorneys, they battened down the hatches even more... watertight. Why? They were convinced that Medicaid agencies were reaping millions and delivering substandard care, so decreasing budgets was justified. Unfortunately, the clients bore the brunt of the political trauma.

Then came the wave of hundreds and hundreds of appeal hearings, which Medicaid won almost every time, officiated over by a Medicaid legal contractor. Vulnerable adults, not informed about appeal processes, had no idea how to prepare for those hearings... and over and over again they were not successful; almost every person lost their appeal. The only option left on the table was a federal lawsuit and a judge's order.

