How to Fight an Algorithm (ep. 7)

August 2, 2018

It seems like artificial intelligence is everywhere these days — in our homes, in our cars, in our offices, and of course online. Government decisions, too, are being outsourced to computer code. In one Pennsylvania county, for example, welfare services use digital tools to assess the likelihood that a child is at risk of abuse. Los Angeles contracts with the data giant Palantir to engage in predictive policing, in which algorithms identify residents who might commit future crimes. Local police departments are buying Amazon's facial recognition tool, which can automatically identify people as they go about their lives in public.

What does all this mean for our civil liberties? And how can the public exercise oversight of a secret algorithm? AI Now Co-founder Meredith Whittaker discusses this brave new world — and the ways we can keep it in check.


LEE ROWLAND
[00:00:05] I'm Lee Rowland. And from the ACLU, this is At Liberty, the show where we discuss today's biggest civil rights and civil liberties topics.

[00:00:23] Today, artificial intelligence. It's everywhere — in our homes, in our cars, our offices, and of course online. So maybe it should come as no surprise that government decisions are also being outsourced to computer code. In one Pennsylvania county, for example, child and family services uses digital tools to assess the likelihood that a child is at risk of abuse. Los Angeles contracts with the data giant Palantir to engage in predictive policing, in which algorithms identify residents who might commit future crimes.

[00:00:57] Local police departments are buying Amazon's facial recognition tool, which can automatically identify people as they go about their lives in public. What does all this mean for our civil liberties and how do we exercise oversight of an algorithm? Here to talk through this brave new world with us is Meredith Whittaker. She is co-founder and executive director of AI Now, a research institute that studies the social implications of artificial intelligence. Meredith is a distinguished research scientist at New York University, the founder of Google's Open Research Group, and an expert on digital privacy and security issues. Meredith, thank you so much for being with us today.

MEREDITH WHITTAKER
[00:01:36] It's my pleasure. Thank you so much for speaking with me.

LEE
[00:01:38] Of course. So let's start big. Maybe with just some definitional terms. What do we mean when we talk about AI, artificial intelligence?

MEREDITH
[00:01:48] This is a great and fundamental question, and I want to put everyone who's asking it at ease by letting you know that even the most technically adept people struggle with this. AI is not a clearly fixed term, and it often means slightly different things to different people. This is in part because this is a field that's evolved over many years, and because it's also a very hyped marketing term at this point. So the term AI is being used to sell everything, some of which resembles what we might think of as traditional AI, some of which certainly does not.

[00:02:23] So it might be helpful to go back to the history and begin at the beginning. It was in 1956 that a group of men met at Dartmouth. This small group of guys was determined to build intelligent machines over the course of the summer. They thought this was possible, and they didn't succeed. That's a spoiler alert there. But they did ignite a field of AI research, and this field has developed in fits and starts, with moments of great hope and moments of disappointment, over the last 62 years.

[00:02:59] And you know I think it's important to note in the context of some of the current debates going on that this was largely bankrolled and shaped by the U.S. military. So from this common root a lot of different sub-branches all clustered under the umbrella AI emerged. This produced kind of an ecology of AI techniques, ranging from machine vision to natural language processing to machine learning to deep neural nets and well beyond. But they all share a common characteristic that’s important in the context of today's conversation. They all learn about the world by being fed large amounts of data. And they all make predictions and determinations based on what's in such data.

[00:03:40] So if you saw the movie Her you might remember that the AI system built a model of the owner’s personality by reading his e-mail. That's basically how it works.
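The common characteristic Whittaker describes, fitting a model to historical data and then making predictions from it, can be sketched with a toy nearest-centroid classifier. This is a hypothetical illustration only; all names, features, and labels here are invented.

```python
def fit(examples):
    """examples: list of (feature_vector, label). Returns a centroid per label."""
    sums, counts = {}, {}
    for x, label in examples:
        counts[label] = counts.get(label, 0) + 1
        acc = sums.setdefault(label, [0.0] * len(x))
        for i, v in enumerate(x):
            acc[i] += v
    return {lbl: [v / counts[lbl] for v in acc] for lbl, acc in sums.items()}

def predict(centroids, x):
    """Predict the label whose centroid is closest to x."""
    def dist(a, b):
        return sum((u - w) ** 2 for u, w in zip(a, b))
    return min(centroids, key=lambda lbl: dist(centroids[lbl], x))

# Whatever patterns are in the training data -- including biased ones --
# are exactly what the model reproduces at prediction time.
model = fit([([1.0, 0.0], "low"), ([0.9, 0.1], "low"),
             ([0.0, 1.0], "high"), ([0.1, 0.9], "high")])
assert predict(model, [0.95, 0.05]) == "low"
assert predict(model, [0.05, 0.95]) == "high"
```

The point of the sketch is the pipeline shape, not the algorithm: the system has no notion of the world beyond the examples it was fed.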

LEE
[00:03:50] How does AI appear in our daily lives and who's using it?

MEREDITH
[00:03:54] So, AI appears in many, many different forms. Right? You may walk through an airport and be profiled by an AI risk assessment system at TSA. You may apply for a loan online, and there's an AI backend that is running your credit record and some other data to determine if you're creditworthy. You may apply for insurance, and AI is determining whether you are eligible for a certain plan or not. AI is being used in hiring. It's being used to process worker data to detect which workers are potentially dissatisfied, and to monitor their performance, and on and on and on. It would actually probably be easier to think of areas where AI isn't being used, and I can't think of any right now, than to name every sector in which AI and automated decision systems are being deployed.

LEE
[00:04:46] Could you give us a specific example that people might have heard of that demonstrates how the government is using AI?

MEREDITH
[00:04:53] Well, we have seen in the past the Baltimore Police Department scanning Instagram photos from a Freddie Gray protest, feeding them through facial recognition, using that to identify people with outstanding warrants or tickets. Right? So we do see the will on the part of a lot of these law enforcement officers and agencies to use the technology in ways that would be problematic and oppressive for civil liberties, to say the least.

LEE
[00:05:25] Are there particular groups of people who are more vulnerable to this kind of algorithmic decision making?

MEREDITH
[00:05:31] Absolutely. I mean, I think we go back to how AI is constructed, right? It requires a lot of data. It requires data that, given the nature of space-time, was created in the past. And this data necessarily reflects the patterns of discrimination, of oppression, the patterns of life as it is. So AI systems often not only replicate, but in some senses can amplify and mask, existing patterns of oppression and discrimination. For example, there is a system that has been proposed in Pennsylvania. This is a risk assessment system that will give a score as to whether a defendant is at risk of reoffending.

LEE
[00:06:17] This is a criminal defendant? This is in a criminal context?

MEREDITH
[00:06:20] Yeah, this is in a criminal context. Now, one of the inputs it uses to make this decision is arrests: has this person been arrested in the past? But if you look at Pennsylvania, Pennsylvania has the second-highest racial disparity in arrests in the U.S. One white man is arrested for every nine black men and every three Latino men. So your risk score, and with it your chances of being kept behind bars, will increase simply because you're a black man. And we should also note that the record of arrests is taken from a state in which stop-and-frisk has been deployed. So you're looking here at the methodology of data creation literally being practices of unconstitutional policing that are then fed into these systems, which claim to give objective results but are actually replicating those same patterns of arguably unconstitutional discrimination.
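The mechanism Whittaker describes can be made concrete with a toy sketch: a hypothetical linear risk score that takes prior arrests as an input. The function, weights, and numbers below are invented for illustration and are not the actual Pennsylvania model; the point is only that a policing disparity in the input data shows up directly in the output score.

```python
def toy_risk_score(prior_arrests: int, age: int) -> float:
    """Hypothetical linear risk score: more recorded arrests -> higher score.
    Weights are invented for illustration only."""
    return 0.5 * prior_arrests + 0.1 * max(0, 30 - age)

# Two people with identical underlying conduct. If one group is arrested
# nine times as often for the same conduct (the disparity cited above),
# its members enter the model with more recorded arrests.
arrests_group_a = 1   # arrested once for the conduct
arrests_group_b = 9   # same conduct, policed nine times as heavily

score_a = toy_risk_score(arrests_group_a, age=25)  # 1.0
score_b = toy_risk_score(arrests_group_b, age=25)  # 5.0

# The gap in scores is driven by the disparity in policing, not conduct.
assert score_b > score_a
```

The model never sees "race" as a column, yet the biased arrest record acts as a proxy for it; this is how a system can "launder" discriminatory inputs into seemingly objective outputs.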

LEE
[00:07:17] I think I've heard a phrase that describes this when talking to coders. It's "garbage in, garbage out." Is that the right phrase to use here?

MEREDITH
[00:07:25] That is exactly right. I would say it's garbage in, and often much harder to detect and much harder to contest garbage out. Right? These systems often resist due process.

[00:07:38] There are very few mechanisms for pushing back against an automated decision, and in a lot of cases the people who are actually tasked with using the system, the people on the ground, from a beat officer to a social worker who's given an iPad and told to run this algorithm to get a score, have no real understanding of how the system works. And in many cases they have no ability to override the determination of the system.

LEE
[00:08:05] That's fascinating.

MEREDITH
[00:08:05] So garbage in, complicated garbage out.

LEE
[00:08:10] And do you get the sense that because it's spit out by a computer, it sometimes comes with a veneer of neutrality or objectivity?

MEREDITH
[00:08:19] Absolutely.

LEE
[00:08:20] And yet it's a product of a very flawed human system. Right? In an algorithm or code that's been designed with the flaws baked into it.

MEREDITH
[00:08:29] Absolutely.

LEE
[00:08:30] But people I think have a natural inclination to believe that a computer code result is somehow objective.

MEREDITH
[00:08:35] Yeah. And this is so common that it actually has a term: we call it automation bias. It's just the tendency to be more credulous when a seemingly objective veneer of scientific authority, like a computer, gives you an answer. I think there's also... we need to look at the way our current trust in technical solutions adds to this tendency.

[00:08:58] I think it's telling that a lot of these systems have been created without documenting the data they used, and certainly without releasing this data to the public. There are few to no monitoring regimes that actually look at the impact of these systems, and it's often very hard to get records around this. So we can look at the way in which a kind of automation bias is baked into the way even the developers are thinking about these systems.

LEE
[00:09:23] You know, traditionally, if you went through a criminal justice case and you went to sentencing or a bail hearing, you would have real humans testifying against you. Now you're suggesting that some of those decisions are being replaced by algorithms that the people administering them barely understand. So can you explain to me what happens in a courtroom if somebody says, "Well, hey, this algorithm is junk because you fed in racially biased arrest statistics, and so it spits out that I'm likely to be dangerous because I'm a black guy, and that's flawed."

[00:09:56] Are there ways for people to make that argument, to challenge the underlying code in these algorithms, in the same way that you would, say, traditionally get to confront a human witness against you?

MEREDITH
[00:10:07] At this point, not many, and this is something we're working on at AI Now. We have developed a policy framework called the Algorithmic Impact Assessment framework that looks at simply giving people more access to information about where these systems are and when they may have made a decision that affected their lives, and at allowing some form of pushback to debate that decision. But at this point the ability to contest an automated decision is not part of most criminal trials.

[00:10:38] I think an example from Arkansas, in the healthcare space rather than the criminal justice space, would help illustrate this. In 2016, Arkansas implemented a healthcare algorithm that was being used to allocate health benefits to Medicaid patients. And not only did it make some really fundamental mistakes, it was implemented in a way that left no room for override by the Medicaid worker who was in charge of administering it.

LEE
[00:11:09] And how do we know that this system made mistakes?

MEREDITH
[00:11:13] We know that it made mistakes because legal aid in Arkansas took a case from somebody who was impacted by the system whose benefits had been cut. You know we're looking at kind of home care patients, right. These are people who often need help getting out of bed in the morning, need help eating, need help in getting put back into bed. Right.

[00:11:33] If you cut someone's home care benefits from 12 hours a week to something much smaller, you are really endangering that person's ability to live. So this is not trivial. And the reason we know this is that people raised complaints. Legal Aid began getting calls. They decided to take this case. At significant expense of money and time, they were able to get people to review the algorithm, and it was only during the course of litigation that the fundamental flaws in the algorithm and in the software implementation of the system were uncovered.

LEE
[00:12:04] And that was uncovered only after the system had been in use for a while?

MEREDITH
[00:12:09] Yeah. Exactly. And only through a sort of drawn out and expensive litigation process.

[00:12:17] So it's almost certain there are many more systems like this, maybe less egregiously harmful, that are in use that we don't know about because they haven't been disclosed or that some people may know about but that we don't have the resources or the time or the expertise to sort of push on a lawsuit or push for explanations.

LEE
[00:12:40] It sounds like it took a lot of resources just to uncover you know how that particular code was flawed. Do you think that in practice local governments have the expertise or the know-how to assess those systems before they're put in place?

MEREDITH
[00:12:56] You know, I think this will vary. I think we have seen a starvation of social services and government services in the U.S. over many years. I think that's a solvable problem if we look to fund those types of experts. Take New York City, which is the first local government in the nation to pass legislation looking to implement algorithmic transparency and accountability.

LEE
[00:13:21] What does that mean? What does algorithmic transparency and accountability mean in practice?

MEREDITH
[00:13:26] In this case it means that the algorithms that are used in government would be disclosed to the public, that there would be some form of deliberation around the use of such technologies, and that agencies that were using these technologies would be required to account for their use and hopefully to produce an impact assessment or some other analysis of what the effect of the use of such a system would be on the populations it was being deployed among.

[00:14:03] You know, I caveat all of this: this is not in the law right now. The law as passed constituted a task force, which I am a member of, that is going to write a report to be filed with the mayor's office in late 2019. This report will then feed into a lawmaking process that will specify what the substance of algorithmic accountability actually looks like in New York.

LEE
[00:14:27] So it sounds like these are the early days for a groundswell at least at the local government level. The systems you're describing have immense implications for people's daily lives, their freedom, their families. It sounds like it's hard to really exaggerate the degree to which AI decision-making could affect us all. Is this an inevitable march towards our Minority Report dystopian future, or are there ways that the public can have a meaningful say, can find out about these systems, can object to them before they're controlling our lives?

MEREDITH
[00:14:59] I mean, I don't believe in inevitability, because I would just stop working if I did. You know, a couple of years ago when these systems were being put in place, we didn't have this conversation. We now do. I think there is a lot of increased public awareness that has led to things like the New York City bill. It's a start, but it's a start that happened because council member Vacca at the time was getting a lot of calls asking questions about these systems in response to, you know, news articles and awareness-raising generally. And he couldn't answer these questions. And this sort of turned into this process.

[00:15:38] So I think there's a lot of opportunity for people simply to ask, “Hey where are these systems being used? How are they impacting me? Hey, tech companies who are selling these systems, who are you selling them to? What are they doing? Why is it that you know my due process rights may be perturbed by a system that identifies me as like someone else but doesn't actually reflect me as a person, my record, my history?” So these are all questions that don't require a technical degree, that don't require that you're well versed in the latest technical jargon, and they're all really fundamental questions that I think we have to simply be requiring both governments and tech companies to answer clearly.

LEE
[00:16:20] I'm so glad you mentioned the tech companies because I definitely want to talk about what we've been seeing in Silicon Valley. There's been some real meaningful pushback lately from tech workers concerned about their company's contracts with the government.

[00:16:34] It started at Google, in a protest that I believe you were at least somewhat involved in, you can tell us, where employees organized against a Pentagon contract that would use AI to analyze drone footage. Project Maven, I believe it's called. And since then we've now seen similar protests popping up at Amazon, Microsoft, and Salesforce, also against the use of AI in government contracts. So can you tell us more about what's happening in the tech sector right now, and why it's happening right now?

MEREDITH
[00:17:04] Yeah. Certainly, I was very involved in the Maven protests in my role as the leader of open research at Google, and I'm deeply heartened to see the rising concern and the willingness to kind of act on that across the industry. You know, I think part of what happened is that there was an increasing dissonance between the promise of tech as a great democratizer, and the reality of these kinds of contracts, right.

[00:17:33] There are many people who for many reasons are deeply concerned about the idea of autonomous weapons. There are many people who are watching what's happening with the Trump administration, and watching the human rights abuses on the border, and watching the rise of authoritarianism frankly and recognizing that this is the moment where you make a choice, right. This is the moment where you say, I’m either going to go with the flow or I'm going to answer the question: What would you have done in this situation?

[00:18:03] And one of the things that's heartening to me is that old fashioned worker organizing works. That a lot of people suddenly realize that oh you know what, we don't actually have to align completely with our employer’s interests and since we are the people who are building this technology, since our skills are necessary to do this, we should also be able to make a choice in what we're doing and what we're not doing.

[00:18:27] So I think there's a long way to go, but frankly, if you'd asked me six months ago, four months ago, whether a workers' movement in Silicon Valley was even possible, I would have been skeptical. So, you know, I'm hopeful simply seeing this quick emergence with such clear language and such clear demands, even if there is a long road ahead to ensure that the people building the tech actually have a meaningful ethical say in what they're building and where it gets applied.

LEE
[00:18:57] Are engineers our best hope in seeking an ethical framework for the use of A.I?

MEREDITH
[00:19:04] I think this is one lever among many. And I think we can't simply count on engineers holding the line against you know these massively powerful interests. We really have to think about what other organizing outside of tech companies would look like, what solidarity there would look like, what legal mechanisms do we have. You know I think this is a coalition effort if there ever were one, simply because this is technology that is not only affecting one group or another, right.
[00:19:33] Engineers certainly need to join the fight, but we need voices from all of the affected communities and beyond, making it clear that they too want a say in how this technology affects their lives, and saying you know it's not inevitable.

LEE
[00:19:48] So Meredith what would you say to a member of the public who might be listening who says I want to help make this not inevitable?

MEREDITH
[00:19:55] Given the breadth of AI's reach right now into almost every sector, you know, start at home. Where are you seeing it in your field? Where are you seeing it in your discipline? Where are you seeing it in your neighborhood? Just begin asking those questions, and be confident that whoever you are, you don't need a CS degree, you don't need fancy training in AI, to be able to ask straightforward questions: Who is this being used by? Who owns it? Who owns my data? Who made the decision to deploy this?

[00:20:24] Certainly call your local representatives, show up at city council meetings, ask questions if that's something you have time to do. If not, support people who do. And, you know, again, given the way that AI is being deployed, it's not going to be hard to look around you and, if you do some digging, figure out one place where it's probably already operationalized in a way that you weren't aware of.

LEE
[00:20:46] Meredith, thank you so much for talking to us today and for helping to make a dystopian future less inevitable.

MEREDITH
[00:20:52] Yeah. My pleasure. Thanks for doing this.

LEE
[00:20:59] From the ACLU, this has been At Liberty. Thanks for listening and be sure to subscribe anywhere you get your podcasts.
