What’s Wrong With Airport Face Recognition?

U.S. Customs and Border Protection (CBP) has launched a “Traveler Verification Service” (TVS) that envisions applying face recognition to all airline passengers, including U.S. citizens, boarding flights exiting the United States. This system raises very serious privacy issues.

What we know about this program comes from a privacy impact assessment DHS issued on the program, and a briefing CBP Deputy Executive Assistant Commissioner John Wagner gave to privacy advocates in Washington this week. CBP’s plan is to install cameras at boarding gates to photograph, and apply face recognition to, all cross-border passengers as they board their aircraft. Currently operating at six airports around the country (Boston Logan, New York JFK, Washington Dulles, Hartsfield-Jackson in Atlanta, Chicago O’Hare, and Bush Intercontinental in Houston), the TVS program is part of a larger program called “Biometric Entry/Exit.” That program is DHS’s attempt to comply with a congressional requirement that the agency use biometrics to keep track of visitors entering and exiting the United States, in order to identify individuals who overstay their visas.

The way the system works is that before departure, CBP obtains the passenger manifest for each flight, and then reaches through the government’s extensive, interconnected set of databases to assemble photographs of each passenger. Those include passport and visa photos as well as photos “captured by CBP during the entry inspection” and “from other DHS encounters.” The agency then compares face recognition templates (essentially, mathematical patterns) derived from those database photos to templates derived from live photographs taken by a camera at the boarding gate.
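
CBP has not published the details of its matching algorithm, but systems of this kind typically reduce each photo to a fixed-length numeric vector (the template) and score candidate pairs by vector similarity. Here is a minimal sketch of that general approach; the embedding function, vector size, and threshold below are illustrative stand-ins, not CBP’s actual system.

```python
# A rough sketch of template-based face matching, assuming (as is typical
# for such systems, though CBP's pipeline is not public) that templates are
# unit-length embedding vectors compared by cosine similarity.
import numpy as np

EMBEDDING_DIM = 512      # a common embedding size; CBP's template format is unknown
MATCH_THRESHOLD = 0.65   # illustrative; real systems tune this on labeled data

def extract_template(photo: bytes) -> np.ndarray:
    """Stand-in for a real face-embedding model (normally a trained neural
    network); returns a unit vector so a dot product is cosine similarity."""
    rng = np.random.default_rng(abs(hash(photo)) % (2**32))
    vec = rng.standard_normal(EMBEDDING_DIM)
    return vec / np.linalg.norm(vec)

def verify_passenger(gate_photo: bytes,
                     gallery: dict[str, np.ndarray]) -> str | None:
    """Compare a live gate photo against the flight's pre-built gallery of
    database-photo templates; return the best match above the threshold, or
    None, which would send the traveler to a manual document check."""
    probe = extract_template(gate_photo)
    best_id, best_score = None, MATCH_THRESHOLD
    for passenger_id, template in gallery.items():
        score = float(probe @ template)  # cosine similarity of unit vectors
        if score > best_score:
            best_id, best_score = passenger_id, score
    return best_id
```

In a sketch like this, a “false negative” is a real traveler whose live photo scores below the threshold against their own database photo; that error rate is discussed below.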

There are a number of very serious problems with this program from a privacy standpoint:

  • It uses the most dangerous biometric: face recognition. While Congress has directed CBP to collect biometrics from noncitizens as part of the entry/exit program, Congress did not specify which biometric the agency should use, and from a privacy perspective, face recognition is (along with iris recognition) the most dangerous biometric available. That’s because it has the greatest potential for expansion and misuse: for example, you can subject thousands of people an hour to face recognition as they walk down the sidewalk, without their knowledge, let alone their permission or participation. You can’t do that with fingerprints. Face recognition databases could be plugged into every surveillance camera in America, creating a giant infrastructure for government tracking and control. Wagner told me that the agency opted for face recognition instead of fingerprints because of the greater ease and practicality of the technology as well as the “optics of us taking fingerprints from people.” Of course, fingerprints do have a negative association in the public mind, but that’s because of their use in tracking and identifying accused criminals. And tracking and identifying is exactly what the photos are being used for here. If, as Wagner suggests, taking a photo seems more benign to the public, that’s only because the public’s intuitions about privacy have not caught up with what the technology can do. And fingerprints work fine in the context of international travel; they are already used for the Global Entry frequent traveler program.
  • It normalizes face recognition as a checkpoint technology. Security technologies that are applied only at airports because of heightened government concerns about the security of air travel tend, over time, to expand outward into society. Magnetometers, for example, spread from airports to a wide variety of venues, including sports stadiums, government buildings, and even some high schools. That dynamic takes place partly because airport deployment socializes people to accept such technologies as normal and acceptable, and partly because government agencies and others push them outward in a futile quest for perfect security everywhere. Wagner said of face recognition, “I think this is where the technology is headed.” But “the technology” is not an autonomous, inevitable force; we as a society are in control, and can choose what to deploy and not to deploy. And we should not want to turn into a checkpoint society, subject to ceaseless status and identity checks at every turn that constantly monitor, evaluate, and sort citizens into “go” and “no-go” categories. The ease of implementing face recognition makes that threat all too real.
  • It will inevitably be subject to mission creep. Once CBP begins collecting biometrics from every person traveling across the border, including Americans, there is a significant likelihood that the practice will expand not only to new places but also to new purposes. For the moment, CBP says it will delete the live photos captured at the gate within 14 days for citizens, and that it uses them only to verify identity by comparing them with the database photos. But customs officials have already talked about dropping that restriction, saying only that “for now, we’re discarding that information.” How long before CBP begins holding the photos for longer periods and using them for new purposes? Got a group of photos of wanted bank robbers, drug dealers, or, for that matter, reckless drivers? Why not run passengers’ photos against those databases and maybe catch a few?
  • Face recognition has a major reliability problem. People’s faces don’t stay the same, and people look like each other: not only identical twins but also complete strangers. Studies also show that face recognition suffers from higher error rates when trying to match the faces of African Americans, raising the prospect of yet another racial injustice in our society, and error rates are also higher for women and children. Even changes in expression can render the technology inaccurate, which is why Americans are not allowed to smile in their passport photos. Wagner said that tests so far have found a 4% false negative rate, meaning one in 25 travelers will be told by the machine, “sorry, you’re not who you claim to be.” Those people will then be sent to a CBP officer at the gate for visual comparison with their passport photo (American citizens) or for fingerprints or other checks (visitors). But if this program scales up, the logistics of making a CBP officer available at every gate to examine that 4% of rejected passengers would be a major obstacle (a rough sense of what a 4% rejection rate means at scale is sketched just after this list). Alternatively, Wagner suggested the agency might let airline workers do the check, which would mean zero change from how things are already done. Overall, it seems very strange for the government to go all-in on a technology with such a high inaccuracy rate. And Wagner’s 4% figure was provided without evidence; the government should make its face-matching algorithms public so that independent studies can assess their reliability, including ethnic and other differences in that reliability.
  • It is fundamentally unnecessary and wastes taxpayer money. In the United States, all arriving passengers pass through a CBP checkpoint, but there is no infrastructure of CBP checkpoints through which departing passengers must pass, as there is in Europe and a number of other countries. That means creating a system to collect biometrics from exiting travelers is a hugely complex and expensive new enterprise. That is one reason why DHS has in the past resisted Congress’s naive obsession with using sexy biometric technology and argued that a full biometric exit-tracking system is unnecessary: the biographic data (name, date of birth, etc.) in the information systems DHS already has in place to track exactly who is on an aircraft is enough to satisfy Congress’s goal of tracking visa overstays. As with any technology, boosters can always cite scenarios where biometrics would prevent problems, such as imposters flying under another person’s name. But (as with any technology) the proper questions are how widespread and how harmful those scenarios are, and what the costs and downsides of measures to prevent them would be. DHS has already spent billions on programs such as Real ID supposedly to prevent imposters, and in light of the downsides discussed above, the cost-benefit calculus here makes no sense. Since face recognition is so unreliable, the only thing the billions spent on this program will buy is whatever marginal improvement there may be between machine and human effectiveness in matching faces: a highly uncertain benefit, especially since for the 4% of travelers the machine rejects, matching will probably be done the way it already is, manually by a human being.
  • This system is being built in the context of an agency with a troubling record. This is not a technology being deployed by an agency with a history of behaving well. CBP has a terrible track record of use-of-force and other incidents of abuse, and external reports have found a continuing pattern of poor oversight and training of agents as well as a “culture of impunity and violence.” It is an agency that lacks oversight, due process, and transparency, and, though it is our nation’s largest police force, it refuses to be held accountable to basic 21st century police best practices. You would think that the agency charged with guarding our borders would not harass people who are leaving the country, but you would be wrong. The creation of an institutional CBP presence where there has never been one before (at the gates of departing aircraft) raises the prospect that the kinds of abuses we have seen at other borders will spread to this new context, and that people will be unfairly sanctioned without the kind of due process they would normally receive, such as the right to go before a judge. For example, someone wrongly suspected of a visa overstay may be pressured by an agent at the gate into signing papers that result in a 10-year ban from returning to the United States. The system, Wagner said, will include the ability for CBP to tag someone, for various reasons, for official intervention at the gate.
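
To make Wagner’s 4% false-negative figure concrete, here is the quick back-of-the-envelope arithmetic referenced in the reliability item above; the flight size and annual passenger volume are round-number assumptions for illustration, not CBP data.

```python
# Rough arithmetic on the reported 4% false-negative rate. The flight size
# and annual departure volume below are illustrative assumptions, not CBP data.
false_negative_rate = 0.04

passengers_per_flight = 180      # assumed full narrow-body international flight
per_flight = passengers_per_flight * false_negative_rate
print(f"~{per_flight:.0f} wrongly rejected travelers per flight")  # ~7

annual_departures = 120_000_000  # assumed order of magnitude at a full rollout
per_year = annual_departures * false_negative_rate
print(f"~{per_year:,.0f} manual gate checks per year")             # ~4,800,000
```

Even under these modest assumptions, every departing flight would generate a handful of wrongly rejected travelers needing manual checks, which is the staffing obstacle described above.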

Given all of these serious social implications, one of the biggest problems with the program is that it hasn’t been authorized by Congress. Considering the significance of applying face recognition to the entire American cross-border traveling population, we should expect CBP to subject the program to the full democratic process. Yet, as Harrison Rudolph of the Georgetown Center on Privacy & Technology put it, “Congress has passed Biometric Exit bills at least nine times. In each, it has been clear: This is a program meant for foreign nationals. In fact, when President Trump issued an executive order in January on Biometric Exit, it was actually reissued to clarify that it didn’t apply to American citizens.” Unfortunately, based on the briefing provided to advocates, CBP seems to believe that even a nationwide rollout of its pilot wouldn’t require separate congressional authorization. But this program is clearly not what Congress contemplated, and it should be subject to a vigorous public debate that allows members of Congress to weigh the privacy impact and other costs of the new proposal against its purported benefits.

It should be noted that the narrow data-privacy impact of this system is different from that of some other face recognition deployments. When the technology is deployed, say, on the street, it may collect several kinds of information that its operator does not already possess: (1) photographs of subjects’ faces; (2) their identity, if those photographs can be matched to others; and (3) the time and place where they were seen. In the case of this program focused on international travelers, however, CBP already has all of that information. The agency has access to the passenger manifests, already knows exactly who is departing the country on each flight, and already has access to a photograph of each traveler. CBP does run each traveler against an opaque computerized risk-assessment engine called the Automated Targeting System, which we at the ACLU have been criticizing for years, but it does that irrespective of whether a passenger’s photo is taken at the gate.

That said, as we have seen, this program carries plenty of significant and problematic societal implications. The biggest, from a privacy perspective, is that it represents a major step, probably the biggest yet, in placing the United States on the road toward widespread use of face recognition as a technology for tracking and control, and for very little gain. Congress and CBP should end it.

Comments

Anonymous

We've had a flawed policy for over 15 years. The best long-term protection against terrorism is to create MORE goodwill and allies around the world. Since 2001 we have done the opposite: we have antagonized millions of citizens worldwide and turned allies against us. The response to 9/11 has blacklisted and harmed more Americans than it has protected.

We no longer have intelligence agencies; our national security agencies have transformed into a domestic Stasi or secret police, turning against their own fellow citizens.

No amount of screening or unconstitutional searches will protect us long-term if we continue creating more enemies than allies around the globe.

Anonymous

When the original 20th-century Stasi in East Germany was finally disbanded, all of the agents' and informants' real names and deeds were opened to the public. A great nation like the United States will probably go even further in exposing all the details of what happened, so the officials responsible can face those they abused. Rule of thumb: based on world history, most war criminals are exposed 20-30 years after they commit their war crimes. So if history is any guide, starting in about 2021 we will begin unmasking the officials who were disloyal to their oath of office after 2001 (spies, drone operators, prison guards, etc.).

EggMcMullah

I do believe that the best defense against terrorism is this.
If you don't want people coming into your community and detonating explosive devices, don't do it in theirs.

I thank you.

Dot Asc

Thank you for this brilliant piece, Jay. We are seeing automated facial recognition suddenly trend all over Europe, including Berlin (!) and London. Seeing the ACLU stand against this in the difficult graveyard of civil rights that is border control is emboldening for us privacy campaigners in the EU. We have to resist this dangerous biometric surveillance, and we will. Solidarity.

Ailo

Thank you for this great piece. I believe that the following quote nicely captures the essence of a lot of these cases of technological solutionism and tech determinism that are pushed by tech companies and intelligence agencies:

Wagner said of face recognition, “I think this is where the technology is headed.” But “the technology” is not an autonomous, inevitable force; we as a society are in control, and can choose what to deploy and not to deploy.
