A judge, face not visible, holding a gavel above a glowing “AI” button on a desk, with scales of justice in the background.
From protecting your privacy to ensuring new technology accounts for inclusivity, ACLU experts explain what’s at stake in the AI policy sphere and the steps advocates and lawmakers can take to regulate AI
Amelia Quezada,
Communications Associate,
ACLU
Ricardo Mimbela,
Communications Strategist
January 16, 2026

Whether you encounter it in your daily life or never think about it at all, artificial intelligence (AI) affects us all. From applying for a loan to sitting at the doctor’s office, AI systems are often used behind the scenes to make real-world decisions — and impact us in ways that aren’t disclosed upfront.


Yet despite the growing reach of AI and the diversity of tools and systems it encompasses, regulations governing how it is developed and deployed and how impacted people are informed remain worryingly sparse. Left unregulated, these systems can infringe on your ability to control your data or reinforce discrimination in hiring and employment practices. As the civil rights implications become more serious, strengthening protections is no longer optional.

While policymakers and advocates must do more, existing local, state, and federal laws already offer some protection against discrimination, including digital discrimination. As part of our “Your Questions Answered” series, we asked four ACLU experts to break down what you need to know about your digital rights today, the current state of AI policy, and where regulation may be headed next.

Why is there a need for more regulation in how AI is used?

AI is often used to make decisions about our lives without transparent disclosure. For example, when you apply for a loan or submit a job application, banks or employers might use AI to analyze your materials before a real person ever does. At the doctor’s office, your provider may use an AI scribe to take notes on your conversation. And government agencies are using AI and other automated systems to make crucial decisions about who gets benefits and what those benefits are. AI should be held to strict standards when dealing with people’s lives.

— Olga Akselrod, senior counsel, ACLU Racial Justice Program

What specific harms to our civil liberties might the unregulated use of AI worsen?

Without careful oversight, AI systems used for decision-making have been shown to perpetuate existing systemic inequalities. We’ve seen that when AI tools are used to screen job applications or assess prospective employees, they can unfairly discriminate against people of color, people with disabilities, neurodiverse people, and people from low-income backgrounds. The use of AI in areas like hiring, housing, and policing means that you can be denied a job or an apartment, or even wrongfully arrested, when AI-based facial recognition systems, which suffer from serious racial bias and are often used without appropriate safeguards, misidentify suspects in criminal investigations.

None of this is accidental, nor is it unavoidable. An incredibly diverse set of tools and systems is often categorized as “AI,” and the civil rights implications of these systems depend on the context in which they are used. While some of these systems may be used in relatively benign ways, in other instances biased AI systems create serious risks of discriminating against real people in life-altering situations. The people, companies, and institutions developing and deploying AI systems are responsible for enabling these biases, but stricter policies and regulation can hold them accountable for their impact and ensure that these practices do not continue.

— Marissa Gerchick, data science manager & algorithmic justice specialist

How can policymakers and advocates address the real-world challenges emerging from the use of AI?

In our new report with researchers from Brown University's Center for Technological Responsibility, we highlight the wide range of AI regulations proposed by policymakers across the country. These include bills that regulate the use of AI in specific areas like education or elections, as well as broader proposals that further expand the civil rights protections that already apply to AI uses in high-stakes areas.

Our report also shows how advocates and policymakers can carefully apply computational tools to spot trends and track similarities across the growing AI policy landscape.

Our research also identified two key challenges that emerge when conducting computational AI policy analysis, and we propose recommendations to address them:

  1. We urge researchers and policy staff to work together to create standardized formats and structures for legislative texts across jurisdictions to facilitate computational analysis of data.
  2. We encourage researchers and advocates to incorporate a multilingual perspective when analyzing AI legislation introduced in regions under U.S. jurisdiction. Leveraging language technologies tailored to specific languages and legal contexts, while engaging with native speakers and regional AI policy experts, would provide insights into the diverse approaches to AI policy.

While our report focuses on AI legislation, our findings and recommendations can be applied to other policy areas seeing a growth of bills across jurisdictions, and can thus help to understand and strengthen emerging legislation.

— Evani Radiya-Dixit, algorithmic justice fellow

What digital rights do I have when automated tools are used to make decisions about me?

Whether decisions are made by a human or AI, longstanding federal anti-discrimination laws continue to prohibit discrimination in hiring and employment based on race or ethnicity, sex, sexual orientation or gender identity, disability, and other protected characteristics. In addition to federal protections, a growing number of states have passed laws regulating how employers and third-party vendors collect, use, and share your personal data during hiring. These laws give you greater control over your information and more transparency about whether automated systems are evaluating you — and how those systems may influence employment decisions.

— Cody Venzke, senior policy counsel, National Political Advocacy

You can learn more about digital discrimination and your digital rights when searching or applying for jobs on our Know Your Rights page.
