Applying tools trained on high-resource languages to text in other languages can divorce words from meaning entirely. Consider the Palestinian man who posted “good morning” to his Facebook account. When Facebook’s automated translation tool mistranslated his caption as “attack them,” he was arrested by Israeli authorities and questioned for hours.
Social media information, along with biometric data and other details collected from a wide variety of government and commercial databases, may ultimately get fed into DHS’s Automated Targeting System (ATS) — an overarching system for evaluating international travelers to generate an “automated threat rating,” supposedly correlated to the likelihood that a person is engaged in illegal activity. Aspects of the system are exempt from ordinary disclosure requirements — individuals can’t see their rating, know which data are used to create it, or challenge the assessment — and it’s applied to every person who crosses the border, regardless of citizenship status. We do know that ATS pulls information from state and federal law enforcement databases as well as airline-travel database entries (called Passenger Name Records, or PNR).
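To make the transparency problem concrete, here is a minimal sketch, in Python, of what an opaque aggregate risk score can look like. Every field name and weight below is invented for illustration; the actual inputs and formula behind ATS are not public, which is exactly the point.

```python
# Hypothetical sketch (not DHS code): an opaque "threat rating" aggregating
# records from multiple databases into a single number the traveler never
# sees and cannot contest. All fields and weights are invented.
from dataclasses import dataclass, field

@dataclass
class TravelerRecord:
    law_enforcement_hits: int = 0
    pnr_flags: list[str] = field(default_factory=list)  # e.g., meal requests, itinerary quirks
    watchlist_partial_matches: int = 0

def threat_rating(record: TravelerRecord) -> float:
    # Arbitrary placeholder weights; the real formula is secret.
    score = 0.0
    score += 2.0 * record.law_enforcement_hits
    score += 0.5 * len(record.pnr_flags)
    score += 1.5 * record.watchlist_partial_matches
    return score

# A traveler flagged purely on circumstantial PNR details still receives a
# nonzero score, with no way to see or challenge how it was computed.
print(threat_rating(TravelerRecord(pnr_flags=["special meal request", "one-way itinerary"])))
```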
One civil liberties advocate who managed to get access to some of the data DHS retained on him found that it identified the book he was reading on one trip across the border. Other PNR data can reveal even more sensitive information, such as special meal requests, which can indicate an individual’s religion, and even the type of bed requested in a hotel, which could speak to sensitive relationship details. Any broader efforts to collect additional biographic or social media information may feed into the system as well.
At present, ATS is used to flag individuals for further human scrutiny when they travel into or out of the country or when their visas expire. But recent news reports suggest that automated tools may soon be used to make final decisions about people’s lives.
An Immigration and Customs Enforcement call for software companies to bid on the creation of an automated vetting system — to scrutinize people abroad seeking U.S. visas as well as foreigners already in the country — came to light in August 2017. In keeping with the goals outlined in President Trump’s Muslim ban, the request for proposals called for software capable of evaluating an individual’s propensity to commit crime or terrorism or to “contribute to the national interest.” The algorithm is meant to make automated predictions regarding who should get in and who should get to stay in the country, by evaluating open source data of dubious quality using a hidden formula insulated from public review.
For all of our advances in data science and technology, there is still no way for any algorithm to accurately predict an individual’s terrorism risk or future criminality. Instead, these tools are likely to rely on “proxies,” ultimately making value-based judgments using criteria unrelated to terrorism or criminality.
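As a rough illustration of how proxy-driven scoring works, consider the following sketch. The feature names and weights are hypothetical, not drawn from any actual vetting system; they simply show how a score built from stand-ins for identity and circumstance says nothing about conduct.

```python
# Hypothetical sketch of the "proxy" problem: because nothing directly
# measures future criminality, a vetting model ends up scoring stand-ins
# that track identity, religion, and circumstance. Names and weights are
# invented for illustration.
PROXY_WEIGHTS = {
    "speaks_arabic": 1.2,              # a language, not a threat indicator
    "halal_meal_request": 0.8,         # correlates with religion, not risk
    "birth_country_on_banned_list": 1.5,
    "receives_public_assistance": 0.6, # a "national interest" proxy
}

def propensity_score(features: dict) -> float:
    # The score rises with proxies for who a person is, not with anything
    # that actually predicts crime or terrorism.
    return sum(weight for name, weight in PROXY_WEIGHTS.items() if features.get(name))

applicant = {"speaks_arabic": True, "halal_meal_request": True}
print(propensity_score(applicant))  # 2.0, driven entirely by identity proxies
```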
We’ve seen examples of this in policing contexts. Social media monitoring software companies marketed their products to law enforcement by touting their ability to monitor protesters online by tracking people using terms like “#blacklivesmatter,” “#ImUnarmed,” and “#PoliceBrutality.” An investigation by the ACLU of Massachusetts into the Boston Police Department’s use of Geofeedia software found that the tool was used to monitor the entire Boston Muslim community by tracking common Arabic words and treating them as suspicious.
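The mechanics are simple enough to sketch. The following hypothetical example shows how term-list matching of the kind described in the Geofeedia reporting sweeps up ordinary speech; the term list and the sample posts are invented for illustration.

```python
# Hypothetical sketch of keyword-based social media monitoring: a naive
# term list has no notion of context or intent, so protest speech and
# everyday religious vocabulary both get flagged as "suspicious."
SUSPICIOUS_TERMS = {"#blacklivesmatter", "#imunarmed", "#policebrutality", "ummah"}

def flag_post(text: str) -> bool:
    # Lowercase the post, strip basic punctuation, and check for any
    # overlap with the monitored term list.
    tokens = {token.lower().strip(".,!?") for token in text.split()}
    return bool(tokens & SUSPICIOUS_TERMS)

posts = [
    "Headed to the vigil tonight #BlackLivesMatter",
    "Praying for the whole ummah this evening",
    "Good morning, everyone!",
]
for post in posts:
    print(flag_post(post), post)  # the first two, entirely benign, are flagged
```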
In the context of an administration that has repeatedly announced intentions to restrict immigration based on religion, race, and need for public assistance, it is not hard to imagine what proxies might be used for propensity to commit crime or for one’s ability to contribute to the national welfare. In the context of America’s history of cloaking nativist and racist policies in pseudo-scientific language, we need to be vigilant in ensuring that the use of new technology doesn’t subvert our highest values.
Erica Posey is a research and program associate for the Liberty and National Security Program at the Brennan Center for Justice. Rachel Levinson-Waldman is senior counsel to the Liberty and National Security Program at the Brennan Center for Justice. The authors benefited from the knowledge and views of their colleague Andrew Lindsay on this topic.
This piece is part of a series exploring the impacts of artificial intelligence on civil liberties. The views expressed here do not necessarily reflect the views or positions of the ACLU.