The Good Wife Tackles Algorithmic Discrimination. Meanwhile, in Real Life…

This week’s episode of "The Good Wife" raised some important questions: Will Jason and Alicia sleep together? Will Cary and Lucca sleep together? Will Courtney and Eli sleep together? And one that is perhaps a little more important: Will the algorithms that increasingly govern our economic and personal lives exacerbate racial inequality in America?

The setup is this: A Black restaurateur, whose restaurant is in a predominantly Black neighborhood, wants to sue the show’s longtime tech behemoth, ChumHum (a Google-type entity), over its newly launched mapping app, ChummyMaps. The app directs users away from “unsafe” neighborhoods and hides businesses, like hers, located in those neighborhoods. But these “unsafe” neighborhoods seem to be all the places where people of color live. She enlists Diane and Cary to make the case.

Over the course of the episode, Alicia and Lucca, who are defending ChumHum, get an education in algorithmic discrimination, or “digital redlining,” as it’s sometimes known. Redlining is the practice of excluding neighborhoods of color from mortgage credit, and it used to be the formal policy of banks and the federal government. Official maps outlined these neighborhoods in red. The episode’s opening sequence makes the comparison explicit with an image of one of the old redlined maps of New York City.

Home Owners' Loan Corporation (HOLC) redlined map of Manhattan from 1938. (LaDale Winling / urbanoasis.org)

Alicia and Lucca learn that ChumHum’s algorithms can produce other upsetting racialized results. Its automatic photo-tagging algorithm misidentified Black women as “animals,” much like Google’s real-life photo-tagging software tagged Black people as “gorillas.” Another attorney of color sees different ads (like one for a soul food restaurant) in her ChumHum account than does Cary, who is white. And a ChumHum user named Jamal complained that ChumHum’s auto-complete function suggested queries that associated him with criminal behavior. These last two problems echo the real-life findings of Harvard professor Latanya Sweeney, who discovered that Google served ads related to arrest records in response to searches for Black-identified names, like Travon — but not for white-identified names, like Brad. (Google ultimately applied some fixes to address these particular findings.)

“The Good Wife” has become known for its ripped-from-the-headlines premises and plots, and this episode was no different. The potentially discriminatory reach of algorithms is becoming increasingly apparent. In response, the ACLU has urged government regulators to enforce civil rights laws — like the Equal Credit Opportunity Act — online to make sure that algorithms don’t inflict real harms on people of color and others (like women and people with disabilities) whom those laws are designed to protect. We’ve also worked with a coalition of civil rights groups on a set of principles that corporations and the government should keep in mind in their use of big data.

True to form, the episode wraps up its legal case neatly, with ChumHum agreeing to fix certain aspects of ChummyMaps. But the real-world problem of algorithmic bias is more complicated. As long as residential segregation and the racial wealth gap persist, and the algorithms dictating our online experience don’t comport with civil rights principles, machines analyzing patterns in big data risk reinforcing existing societal discrimination.

The ongoing question of how we tackle this issue is even more crucial than what happens to Alicia and Jason.

Correction: This post initially suggested it was Lucca who saw different ads than those Cary saw. This was incorrect; it was actually another attorney named Monica. 

Comments

Anonymous

Who is looking into the various ramifications of this in real (online) life? Is the ACLU doing it?

Anonymous

It's worth pointing out that the gorilla error in Google's image recognition has little to do with how we conceive of race. It's most likely an artifact of the way the images are processed: they go through dimensionality reduction, i.e., the images fed to the convolutional neural network have far fewer pixels than the source image, and to us they would look like a blocky mess. One could perhaps argue there is a bias of omission in not testing enough faces of particular shades, since this is all supervised learning, i.e., training the neural network through manual labeling, but the algorithm looking at novel input isn't "looking" at an image in anything like the way we see things.

Anonymous

The episode makes that point. Watch it.

Anonymous

Having now watched the episode, what's misleading about the image case is that it's presented as a systematic error in labeling people of a particular skin color, when the actual error in Google's labeling of some people as gorillas was the result of much more specific feature extraction. They didn't remove the gorilla label because the image labeling was systematically biased toward thinking many people were gorillas. The gorilla training data was likely causing overfitting on certain features of a scene, since nearly all images of gorillas have, e.g., vegetation surrounding them, and because pictures of gorillas are rare it just made sense to remove the label: not being able to identify gorillas is obviously more socially desirable than a rare but offensive error. The show made it seem like the error was driven by a face on its own, when the people Google mislabeled likely wouldn't have been mislabeled in a different environment. It exposes a deficiency in the training data set, but the fix is much more specific than adding more pictures of Black people, because pictures of people standing in, e.g., cities are irrelevant to the error.
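And the fix being described is just post-processing on the classifier's output, not a change to the model itself. A minimal sketch of what that kind of filter could look like (the label names, confidence scores, and blocklist here are made up for illustration; this is not anyone's production code):

# Withhold labels whose rare misfires are too harmful to show,
# rather than retraining the model. Purely illustrative.
BLOCKED_LABELS = {"gorilla"}  # labels suppressed in user-facing output

def filter_labels(predictions, blocked=BLOCKED_LABELS):
    """Drop any (label, confidence) pair whose label is on the blocklist."""
    return [(label, conf) for label, conf in predictions if label.lower() not in blocked]

# Hypothetical raw classifier output for a single photo.
raw_predictions = [("person", 0.62), ("vegetation", 0.23), ("gorilla", 0.09)]
print(filter_labels(raw_predictions))
# [('person', 0.62), ('vegetation', 0.23)]

It's a blunt fix, which is the point: it trades away a rarely needed capability instead of correcting the underlying training data.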

Anonymous

For a more detailed analysis of the themes alluded to in the episode and how they connect to the research on the topic, see also

blog.geomblog.org/2015/12/fairness-and-good-wife.html

(shameless self-promotion warning)

Anonymous

"Correction: This post initially suggested it was Lucca who saw different ads than those Cary saw. This was incorrect; it was actually another attorney named Monica."

Ironic that the original post confused the character Lucca Quinn (played by Cush Jumbo) with the character Monica Timmons (played by Nikki M. James).

Trouble telling the difference between two African American women?

Anonymous

If the image classifier was trained on pre-labeled data (which is likely, since I doubt Google would have people manually label millions of images), it would pick up the prejudice of the people who originally labeled the images. This actually happens, as demonstrated by a neural network that detected gender imbalance in language use (colah.github.io/posts/2014-07-NLP-RNNs-Representations/).
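The mechanism is easy to simulate. In this toy example (entirely synthetic data with made-up feature names, not drawn from any real system), the simulated annotators systematically give one group more negative labels, and a model fit to those labels reproduces the gap even for otherwise identical inputs:

# Toy simulation: bias that lives only in the human-supplied labels
# is learned and reproduced by the model. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

qualification = rng.normal(size=n)        # a legitimate feature
group = rng.integers(0, 2, size=n)        # a protected attribute (0 or 1)

# Simulated annotators mostly label on qualification, but systematically
# rate group 1 lower -- the prejudice is injected into the labels only.
labels = (qualification - 1.0 * group + rng.normal(scale=0.5, size=n) > 0).astype(int)

model = LogisticRegression().fit(np.column_stack([qualification, group]), labels)

# Two candidates identical except for group membership get different scores.
for g in (0, 1):
    prob = model.predict_proba([[0.5, g]])[0, 1]
    print(f"group={g}: predicted positive-label probability = {prob:.2f}")

In real systems the protected attribute usually isn't an explicit column, but proxies like ZIP code or name can carry the same signal, which is exactly the redlining concern in the post above.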

