Beyond such disclosure, agencies would also be required to provide an accounting of a system's workings and impact, including any biases or discriminatory behavior the system might perpetuate. Given the wide variety of contexts and types of systems, this could not be accomplished through a one-size-fits-all audit protocol. Instead, agencies would engage with external researchers and stakeholders, and ensure that they have meaningful access to an automated decision system.
These external researchers must include people from a broad array of disciplines and backgrounds. Take, for example, the Allegheny Family Screening Tool, used in Allegheny County, Pennsylvania, to help assess the risk that a child might face abuse or neglect. Researchers with different toolsets have produced insights into how the tool makes predictions, how employees of the Allegheny County Department of Human Services use the tool to make decisions, and how it affects the people subject to those decisions.
Finally, agencies would need to honor the public's right to due process. This means integrating meaningful public engagement into all stages of the AIA process, before, during, and after the assessment, through a "notice and comment" process in which agencies solicit public feedback on their assessments. This would be a chance for the public to raise their concerns and, in some cases, even challenge whether an agency should adopt a particular automated decision system at all. Additionally, if an agency fails to adequately complete an AIA, or if harms go unaddressed, the public should have some method of recourse.
In developing AIA legislation, lawmakers will need to resolve several open questions. For example, how should external researchers be funded for their efforts? And what should agencies do when private vendors of automated decision systems resist transparency? Our position is that vendors should be required to waive their trade secrecy claims on any information required to exercise oversight.
The rise of automated decision systems has already had, and will continue to have, an impact on the most vulnerable people. That's why communities across the country need far more insight into government's use of these systems, and far more control over which systems are used to shape their lives.
Dillon Reisman is a Technology Fellow at the AI Now Institute.
Meredith Whittaker is a co-founder of the AI Now Institute, a Distinguished Research Scientist at New York University, and the founder of Google’s Open Research group.
Kate Crawford is a co-founder of the AI Now Institute, Distinguished Research Professor at NYU, a Principal Researcher at Microsoft Research, and a leading scholar of the social implications of data systems, machine learning, and artificial intelligence.
This piece is part of a series exploring the impacts of artificial intelligence on civil liberties. The views expressed here do not necessarily reflect the views or positions of the ACLU.