Does Artificial Intelligence Make Us Less Free?
Artificial intelligence is playing a growing role in our lives, in private and public spheres, in ways large and small. Machine learning tools help determine the ads you see on Facebook and the routes you take to get to work. They might also be making decisions about your health care and immigration status.
Government agencies at the local and federal levels are exploring, and in many cases already using, automated tools to allocate resources and monitor people. This raises significant civil rights and civil liberties concerns.
In some cities, police are using artificial intelligence to predict where crimes might occur and to deploy officers and surveillance technologies accordingly. The Trump administration wants to collaborate with Silicon Valley to build a system to mine the social media accounts of foreigners to determine who might be a "threat." In one Pennsylvania county, an algorithm determines which children are at risk of abuse.
Worse, such algorithms tend to be shrouded in secrecy, closed off to external auditors who might be able to test them for bias and make needed corrections. Because of that secrecy, often imposed by private companies that refuse to reveal their source code, people can’t effectively contest the decisions made by these tools. The data and algorithms used to make fateful decisions about people’s lives are simply out of public reach.
In many contexts, the government’s use of automated tools to increase efficiency and assist in decision-making is appropriate. However, safeguards are necessary to give human beings a role in the creation, oversight, and deployment of these tools. Without democratic participation, we have no way to ensure that artificial intelligence isn’t exacerbating inequality and obstructing human agency, possibly without our ever knowing.