
New York City Takes on Algorithmic Discrimination

New York at Night
Rashida Richardson,
New York Civil Liberties Union
December 12, 2017

Invisible algorithms increasingly shape the world we live in, and not always for the better. Unfortunately, few mechanisms are in place to ensure they’re not causing more harm than good.

That might finally be changing: A first-in-the-nation bill, passed yesterday in New York City, offers a way to help ensure the computer codes that governments use to make decisions are serving justice rather than inequality.

Computer algorithms are a series of steps or instructions designed to perform a specific task or solve a particular problem. Algorithms inform decisions that affect many aspects of society. These days, they can determine which school a child can attend, whether a person will be offered credit from a bank, what products are advertised to consumers, and whether someone will receive an interview for a job. Government officials also use them to predict where crimes will take place, who is likely to commit a crime, and whether someone should be allowed out of jail on bail.
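To make that concrete, here is a minimal, hypothetical sketch of the kind of rule-based decision an algorithm might encode. The credit_decision function, its thresholds, and its weights are invented for illustration only and do not reflect any real lender's or agency's system.

```python
# A toy illustration (not any real system): a rule-based "algorithm"
# that turns an applicant's data into a yes/no credit decision.

def credit_decision(income: float, debt: float, late_payments: int) -> bool:
    """Return True if the applicant is offered credit under this toy rule."""
    # Hypothetical scoring formula and cutoff, chosen only for illustration.
    debt_ratio = debt / income if income > 0 else float("inf")
    score = 100 - 40 * debt_ratio - 15 * late_payments
    return score >= 50


if __name__ == "__main__":
    print(credit_decision(income=45_000, debt=9_000, late_payments=1))   # True
    print(credit_decision(income=30_000, debt=24_000, late_payments=3))  # False
```

Even in this toy example, every choice (which inputs count, how they are weighted, where the cutoff sits) is a human judgment baked into code, which is where bias can enter.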

Algorithms are often presumed to be objective, infallible, and unbiased. In fact, they are highly vulnerable to human bias. And when algorithms are flawed, they can have serious consequences.

Just recently, a highly controversial DNA testing technique used by New York City’s medical examiner put thousands of criminal cases in jeopardy. Flawed code can also further entrench systemic inequalities. The algorithms used in facial recognition technology, for example, have been shown to be less accurate on Black people, women, and juveniles, putting innocent people at risk of being labeled crime suspects. And a ProPublica study found that tools designed to predict the likelihood of future criminal activity made incorrect predictions that were biased against Black people. These tools are used to make bail and sentencing decisions, replicating the racism in the criminal justice system under a guise of technological neutrality.
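The kind of audit ProPublica performed can be sketched briefly. The example below uses entirely invented records and group labels; it simply shows the shape of the comparison, checking whether a tool's false positive rate (the share of people wrongly flagged as high risk) differs across groups.

```python
# A minimal sketch of an error-rate audit across groups.
# The records and group labels below are invented for illustration.

from collections import defaultdict

# Each record: (group, predicted_high_risk, actually_reoffended)
records = [
    ("A", True, False), ("A", True, True), ("A", False, False), ("A", True, False),
    ("B", True, True), ("B", False, False), ("B", False, False), ("B", True, True),
]

false_pos = defaultdict(int)   # flagged high risk but did not reoffend
negatives = defaultdict(int)   # everyone who did not reoffend

for group, predicted, reoffended in records:
    if not reoffended:
        negatives[group] += 1
        if predicted:
            false_pos[group] += 1

for group in sorted(negatives):
    rate = false_pos[group] / negatives[group]
    print(f"Group {group}: false positive rate = {rate:.0%}")
```

A large gap between the two printed rates is the sort of disparity ProPublica documented: the tool's mistakes fall more heavily on one group than another.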

But even when we know an algorithm is racist, it’s not so easy to understand why. That’s in part because algorithms are usually kept secret. In some cases, they are deemed proprietary by the companies that created them, who often fight tooth and nail to prevent the public from accessing the source code behind them. That secrecy makes it impossible to fix broken algorithms.

The New York City Council yesterday passed legislation that we are hopeful will move us toward addressing these problems. New York City already uses algorithms to help with a broad range of tasks: deciding who stays in and who gets out of jail, teacher evaluations, firefighting, identifying serious pregnancy complications, and much more. The NYPD also previously used an algorithm-fueled software program developed by Palantir Technologies that takes arrest records, license-plate scans, and other data, and then graphs that data to supposedly help reveal connections between people and even crimes. The department has since developed its own software to perform a similar task.

The bill, which is expected to be signed by Mayor Bill de Blasio, will provide a greater understanding of how the city’s agencies use algorithms to deliver services while increasing transparency around them. This bill is the first in the nation to acknowledge the need for transparency when governments use algorithms and to consider how to assess whether their use results in biased outcomes and how negative impacts can be remedied.

The legislation will create a task force to review New York City agencies’ use of algorithms and the policy issues they implicate. The task force will be made up of experts on transparency and fairness, as well as staff from non-profits that work with the people most likely to be harmed by flawed algorithms. It will develop a set of recommendations addressing when and how algorithms should be made public, how to assess whether they are biased, and the impact of such bias.

These are extremely thorny questions, and as a result, some things are left unanswered in the bill. It doesn’t spell out, for example, whether the task force will require all source code underlying algorithms to be made public or whether disclosing source code will depend on the algorithm and its context. While we believe strongly that allowing outside researchers to examine and test algorithms is key to strengthening these systems, the task force is charged with recommending the right approach.

Similarly, the bill leaves it to the task force to determine when an algorithm disproportionately harms a particular group of New Yorkers — based upon race, religion, gender, or a number of other factors. Because experts continue to debate this difficult issue, rigorous and thoughtful work by the task force will be crucial to protecting New Yorkers’ rights.
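As one illustration of the kind of measure the task force could weigh (and experts disagree about which measure is right), here is a minimal sketch of the "four-fifths rule" borrowed from employment law. The rates and the 0.8 threshold are assumptions for illustration, not anything the bill prescribes.

```python
# One possible (and contested) test for disproportionate harm: the
# "four-fifths" (80%) rule. All numbers below are invented.

def selection_rate_ratio(rate_group: float, rate_reference: float) -> float:
    """Ratio of favorable-outcome rates between a group and a reference group."""
    return rate_group / rate_reference

# Hypothetical: 54% of one group receives the favorable outcome
# versus 72% of the reference group.
ratio = selection_rate_ratio(0.54, 0.72)
print(f"Ratio: {ratio:.2f}")  # 0.75
print("Flags possible disparate impact" if ratio < 0.8 else "Within the benchmark")
```

Whether a benchmark like this is the right one, and what remedy should follow when an algorithm fails it, are exactly the questions the task force will have to work through.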

The New York Civil Liberties Union testified in support of an earlier version of the bill in October, but we will be watching to see the exact makeup of the task force, what recommendations are advanced, and whether de Blasio acts on them. We will also be monitoring to make sure the task force gets all of the information it needs from the agencies.

Algorithms are not inherently evil. They have the potential to greatly benefit us, and they are only likely to become more ubiquitous. But without transparency and a clear plan to address their flaws, they might do more harm than good.
