FBI and Industry Failing to Provide Needed Protections For Face Recognition

Statue of Lady Justice with green box around her face
Jay Stanley,
Senior Policy Analyst,
ACLU Speech, Privacy, and Technology Project
June 15, 2016

Among our nation’s founding principles is the belief that great power must be accompanied by commensurately strong checks and balances to prevent abuse and protect the innocent. We’re living at a time when enormously powerful new surveillance technologies are being adopted by the government and private industry—yet our government and corporate institutions are not rising to the challenge. This has been starkly highlighted by two developments today.

First, a new GAO report on the FBI’s use of face recognition technology includes a scathing assessment of the agency’s handling of this sensitive new technology. The FBI, the GAO found, has built an internal unit called Facial Analysis, Comparison, and Evaluation (“FACE”), which can access not only the 30 million photos held by the Next Generation Identification (NGI) biometrics storehouse, but also the driver’s license photos of the populations of 16 states (and is currently negotiating access with 18 more). These photos may be scanned to help identify individuals in connection with state or federal investigations.

Even while constructing this surveillance infrastructure of unprecedented power, the FBI failed to engage in numerous oversight functions to ensure that that power was used well. The bureau “has not assessed how often errors occur” in making face recognition matches, the GAO found, leaving it in a poor position to even know whether the system “provides leads that help enhance, rather than hinder, criminal investigations.” Similarly, the FBI has “never assessed whether the face recognition systems used by external partners, such as states and federal agencies, are sufficiently accurate.” And the bureau has not even been carrying out audits of its databases to assess compliance with privacy protections and other rules and policies — a shocking failure in basic standards of oversight. The FBI also did not update the privacy impact assessment for NGI “in a timely manner,” and for five years did not publish a legally required System of Records Notice (SORN) informing the public about its use of face recognition.

Face recognition is a relatively new technology, and it’s important that not only the FBI but the public be aware of its limitations. Errors mean random people could be falsely identified as potential criminals and find themselves coming under the FBI’s powerful investigatory microscope. That not only invades people’s privacy but also exposes them to accusations of wrongdoing; with the increasing amount of data available to federal agents about all of us, curious agents are more likely than ever to find something wrong in people’s lives if that’s what they’re looking for. It could also waste a lot of agents’ time.
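The scale of the false-match problem is easy to underestimate. As a purely illustrative sketch (the numbers below are hypothetical assumptions, not figures from the GAO report), even a system with a seemingly tiny per-comparison error rate can surface hundreds of innocent people every time it searches a database of NGI’s size:

```python
# Illustrative base-rate arithmetic for database-scale face searches.
# All numbers here are hypothetical assumptions, not GAO findings.

database_size = 30_000_000    # photos compared per query (NGI-scale)
one_false_match_per = 100_000  # assume 1 false match per 100,000 comparisons

# Expected number of innocent people surfaced as "matches" per search:
expected_false_matches = database_size // one_false_match_per
print(expected_false_matches)  # → 300 innocent candidates per search
```

The point of the sketch is not any particular error rate, which the GAO found the FBI has never assessed, but that the number of photos searched multiplies whatever error rate exists.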

Combined with the FBI’s current attempt to exempt itself from complying with the protections to individuals provided for in the Privacy Act of 1974, these failures point to an agency that is hurtling toward the use of advanced new surveillance technologies without taking care to ensure that innocent people are not trampled on the way.

The second development is the finalization today of a set of supposed “best practices” for commercial use of face recognition that have emerged out of a multistakeholder process convened by the National Telecommunications and Information Administration (NTIA). Unfortunately, as with the privacy “best practices” for drones that emerged out of a similar process, this document reflects industry’s desire to avoid making any firm commitments that would in any way curb its freedom of action, rather than an expression of the practices industry would actually need to follow if people’s privacy were to be protected. Industry’s unwillingness to make any such commitments is what prompted the ACLU and other civil society groups to walk out of the multistakeholder process a year ago.

By allowing automated machine recognition of individuals from a distance, without their knowledge, let alone their participation, face recognition has the potential to alter fundamentals of life in urban public spaces that have held since cities first emerged. Despite these potentially far-reaching effects, leading institutions are clearly salivating over the new powers the technology may bring them, and resisting any imposition of oversight over those powers. As this technology is absorbed into American life, we cannot allow that to happen.
