We Must Get Racism Out of Automated Decision-Making

A 3D Robot staring at an industrial network chain link.
Artificial Intelligence systems are developed in ways that don't adequately take into account existing racism, sexism and other inequities. This results in invisible, but very real discrimination.
ReNika Moore,
Director, ACLU's Racial Justice Program
November 18, 2021

In 1956, the Eisenhower administration launched the multibillion-dollar Interstate Highway System, creating a transportation network that indisputably paved the way for immense economic growth. But it also exacted a devastating cost: The new highways were often routed through older, thriving communities, displacing more than 1 million Americans – the vast majority of whom were Black and low-income. In some cities, they cut off Black neighborhoods from quality jobs, schools and housing, solidifying racial and economic segregation. The impact of this disruption is still felt today.

Now, the Biden administration is involved in a similarly game-changing investment – the development of artificial intelligence. The National Artificial Intelligence Research Resource Task Force, launched in June, is President Joe Biden’s first contribution to a growing number of federally authorized advisory committees guiding development of AI systems across public and private sectors from housing, employment and credit to the legal system and national security.

But despite Biden’s announced commitment to advancing racial justice, not a single appointee to the task force has focused expertise in civil rights and liberties as they relate to the development and use of AI. That has to change. Artificial intelligence, invisible but pervasive, affects vast swaths of American society and will affect many more. Biden must ensure that racial equity is prioritized in AI development.

The artificial intelligence at issue refers to computer models, or algorithms, that mimic the cognitive functions of the human mind, such as learning and problem-solving. AI is widely used for automated decision-making — analyzing massive amounts of data, finding correlations and then making predictions about future outcomes.

The impact on the daily lives of Americans is unprecedented. Banks and other lenders use AI systems to determine who is eligible for a mortgage or student loan. Housing providers use AI to screen potential tenants. AI decides who’s helped and who’s harmed with influential predictions about who should be jailed pretrial, admitted to college or hired.

So when AI systems are developed in ways that do not adequately take into account existing racism, sexism and other inequities, built-in algorithmic bias can undermine predictive decisions and result in invisible but very real discrimination. As these systems are deployed, they exacerbate existing disparities and create new roadblocks for already-marginalized groups.

For example, the “Educational Redlining” report by the Student Borrower Protection Center found in 2020 that Upstart, a fast-growing AI lending platform, charged higher interest rates and loan fees to borrowers who attended historically Black Howard University or majority Latinx New Mexico State University than it charged those who went to New York University, where Black and Latinx students combined make up only about 30% of the student population.

Another example shows how hard such discrimination will be to overcome. In 2018, the ACLU sued Facebook for using algorithms that excluded women from the audience for traditionally male job opportunities such as truck driver or technician, and the social media giant announced sweeping reforms to fix the problem. But an audit three months ago by researchers at the University of Southern California found that Facebook’s ad-delivery system still showed different job ads to women and men.

Despite the reforms, the researchers noted, “Facebook’s algorithms are somehow picking up on the current demographic distribution of these jobs” — which is exactly the kind of historically based bias that needs to be monitored and corrected.

That’s why the Biden administration must act. As the new task force, like other AI advisory committees, helps guide federal policy for AI uses, it’s crucial that its members include civil rights experts who can identify and root out sources of algorithmic bias and push for appropriate oversight mechanisms.

At the same time, the Biden administration must correct President Donald Trump’s willingness to put civil rights on the back burner in pursuit of rapid AI development. In executive orders and a memorandum issued by the Office of Management and Budget, the Trump administration authorized the development and expansion of AI in the name of national security without requiring transparency, external oversight, or accountability for biased and discriminatory outcomes. None of these directives has been rescinded.

Back in the 1960s and ’70s, as the collateral damage of interstate highway construction became apparent, “freeway protests” across the country slowed and sometimes stopped the wholesale demolition. But it was easier to mobilize opposition in that case, because the destruction of neighborhoods was plain for all to see.

In contrast, the dangers of AI’s algorithmic bias are invisible, complex and hard to describe. But AI is far more pervasive than a highway system, and far more consequential in the long run. We need to build fair, equitable AI systems so that the United States of the 21st century is equally accessible to everyone. Let’s learn from the mistakes of the past.

The Biden administration must challenge AI’s power to preserve and exacerbate systemic racism. If the president is to carry out his promise of a more just society, voices representing civil rights groups, scholars and impacted communities must have a seat at the table.

This piece was originally published in the Washington Post on August 9, 2021.