ACLU and CDT Urge Court to Stop Government from Punishing Anthropic for Important Advocacy on AI Guardrails
In Friend-of-the-Court Brief, Groups Lay Out Dangers of Forcing AI Companies to Deploy Their Tools for Mass Domestic Surveillance
WASHINGTON — Today, the American Civil Liberties Union (ACLU) and the Center for Democracy & Technology (CDT) filed an amicus brief in Anthropic’s lawsuit in the D.C. Circuit Court of Appeals challenging the company’s designation as a “supply-chain risk” by the Department of Defense. The designation purports to prohibit anyone from using Anthropic’s tools in connection with Defense Department work. Anthropic’s lawsuit argues that this designation was in retaliation for the company’s First Amendment-protected advocacy related to AI safety, including the urgent need for artificial intelligence (AI) guardrails that prohibit the U.S. military from using these powerful new tools for fully autonomous weapons and mass domestic surveillance.
“AI-powered surveillance poses immense dangers to our democracy. Anthropic’s public advocacy for AI guardrails is laudable and protected by the First Amendment — not something the Pentagon should be punishing,” said Patrick Toomey, deputy director of the ACLU’s National Security Project. “Our privacy laws are lagging decades behind the government’s ability to capture and exploit our data using AI tools, which can easily reveal the most intimate details of our lives. Anthropic has been right to speak out, but Congress also must step up to protect us from mass spying.”
The brief explains why Anthropic’s advocacy for AI guardrails is vitally important and describes the dangers posed by AI tools when applied to immense datasets containing sensitive information — including how these tools can invade privacy, chill speech, and facilitate discriminatory profiling. It also explains how existing U.S. privacy laws are inadequate to protect people in the United States, especially in light of loopholes the government has long exploited to justify sweeping surveillance. And it emphasizes why, as a result, Anthropic’s advocacy for strict limitations on the government’s use of AI is critical to protecting the public’s privacy interests.
“AI can enable surveillance that is unprecedented in its detail, scope, and scale, and that poses a profound threat to the freedoms our democracy depends on. The right to privacy, freedom of speech, and freedom of association are fundamental to our country and to our Constitution,” said Samir Jain, vice president of policy at CDT. “By exploiting the data broker loophole, the government has access to unprecedented data on our lives, our routines, and our relationships. Combining that data with the power of AI would expand the Pentagon’s surveillance powers exponentially, and companies like Anthropic are well within their rights to push back on that outcome.”
As the amicus brief shows, while Anthropic has rightly advocated for AI guardrails, people in the U.S. deserve a lasting legislative solution to protect their privacy. The ACLU and CDT have been vocal supporters of the bipartisan Fourth Amendment Is Not For Sale Act, a commonsense reform bill that would ban the government from buying data it would otherwise need a warrant to obtain.