
What to Make of the TrapWire Story

Jay Stanley,
Senior Policy Analyst,
ACLU Speech, Privacy, and Technology Project
August 14, 2012

Some of the Wikileaks-fueled swirl of stories about the TrapWire program appear to have been overhyped, as my colleague Kade Crockford of the ACLU of Massachusetts noted in her excellent roundup of the story yesterday. Others writing about the program have followed suit.

But let’s not overcompensate for the hype and get too world-weary and cynical here; while many questions remain about this program, it does raise some very significant issues, and it deserves a high level of attention and concern.

We do know the program combines several elements that are each deeply problematic:

• Suspicious activity reports (SARs). The rage for SARs, as we have long noted, is a misguided and dangerous attempt to spot terrorist plots by gathering and sifting through an ocean of data on ordinary everyday behavior. A George Washington University study recently concluded that the suspicious activity reporting system “has flooded fusion centers, law enforcement, and other security entities with white noise” that “complicates the intelligence process” and prevents fusion centers from being “true homeland security assets.” Given that, increasing the flood of SARs is not going to make anyone safer; it will only raise the level of white noise and further hobble anti-terrorism efforts.
• Data mining. Big data certainly has its uses, but spotting terrorists is not one of them, as experts have repeatedly explained.
• The surveillance-industrial complex. Revolving doors, private-sector takeovers of public functions, and corporate-government cooperation against dissenters: when you marry the furious energy of capitalism with the security establishment’s bottomless appetite for tracking, it’s a recipe for trouble.

But I think the most novel issue raised by TrapWire is the program’s video surveillance component. It’s not entirely clear how the program is using video. A 2010 Stratfor email leaked by Wikileaks states the following:

This week, 500 surveillance cameras were activated on the NYC subway system to focus on pre-operational terrorist surveillance. The surveillance technology is also operational on high-value targets (HVTs) in DC, Las Vegas, Los Angeles and London and is called TrapWire. . . . Operationally, the ability to identify hostile surveillance at one target set — in multiple cities — can be used to neutralize terror threats by interrupting the attack cycle. Meaning, a suspect conducting surveillance of the NYC subway can also be spotted by TrapWire conducting similar activity at the DC subway, connecting the infamous dots. An additional benefit of TrapWire is that the system can also be used to help “walk back the cat” after an attack to identify terrorist suspects and modus operandi. I can also see the tool being very effective in identifying general street crime.

That certainly makes it sound as though video feeds from a wide variety of locations are being centrally scrutinized for “hostile” behavior, cross-referenced, and stored (to allow post-attack investigation). If it’s claimed that the system can recognize that the same individual has appeared in different cities, it is reasonable to assume that face recognition is employed, despite that technology’s generally abysmal performance in uncontrolled settings such as public spaces.

In addition, a TrapWire trademark document says the system provides a visual monitor that shows “the threat level at each facility” and highlights those where the threat level has risen “over the preceding 24 hours.” What kind of data would be used to indicate a higher threat at a given facility on an hour-by-hour basis? Such up-to-date evaluations seem like the kind of thing that would be based on video feeds.

But it is still not clear what is going on. The Stratfor descriptions aren’t necessarily accurate. As this piece by independent journalist Ben Doernberg points out, TrapWire’s CEO denied using face recognition in 2006. Also, the New York Times quotes a police spokesperson as denying that the NYPD uses TrapWire. (That is strange because, as Kade points out today, TrapWire explicitly states on its web site that it services New York’s SARs. Overall, the company itself has been strangely quiet during the uproar; if so many of the stories about this program are false, and it is not something we should worry about, it’s natural to wonder why the company hasn’t put out a statement saying so.)

What seems most likely is that “behavioral recognition” software monitors customers’ video feeds (such as subway cameras) and, when it detects suspicious activity, alerts human operators, who then create a suspicious activity report.
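To make that hypothesis a little more concrete, here is a minimal sketch of what such a pipeline could look like, written in ordinary Python and not drawn from TrapWire itself: an automated scorer watches events from a camera feed, anything above a threshold is queued for a human operator, and the operator may then file a SAR. Every name, field, and threshold here is hypothetical.

    # Purely illustrative sketch of the hypothesized workflow described above;
    # none of this reflects TrapWire's actual design. All names and thresholds
    # are made up for illustration.
    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass
    class FrameEvent:
        camera_id: str         # e.g., a subway platform camera
        timestamp: datetime
        behavior_score: float  # 0.0 (routine) to 1.0 (highly "anomalous")

    ALERT_THRESHOLD = 0.8      # hypothetical cutoff for sending to a human

    def flag_for_review(events):
        """Yield only the events the automated scorer flags for human review."""
        for event in events:
            if event.behavior_score >= ALERT_THRESHOLD:
                yield event

    def file_sar(event, operator_notes):
        """Build a SAR-like record once a human operator confirms the alert."""
        return {
            "camera_id": event.camera_id,
            "observed_at": event.timestamp.isoformat(),
            "score": event.behavior_score,
            "notes": operator_notes,
        }

    if __name__ == "__main__":
        now = datetime.now(timezone.utc)
        feed = [
            FrameEvent("subway-platform-cam-01", now, 0.12),  # routine
            FrameEvent("subway-platform-cam-01", now, 0.91),  # flagged
        ]
        for alert in flag_for_review(feed):
            # In the hypothesized workflow, a person reviews the video here
            # before any report is filed.
            print(file_sar(alert, "operator review pending"))

The sketch simply mirrors the division of labor hypothesized above: the software surfaces candidates, and a human operator decides whether a report gets filed.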

Ultimately, we need to look beyond the details of what TrapWire does and does not do at this moment. We know that the following is true:

• Video surveillance is expanding rapidly, as cameras become cheaper and easier to install and as local police departments increasingly build out their own networks of cameras.
• New IP video camera feeds are easy to network, centralize, and store.
• Behavioral recognition technology will increasingly be employed to try to manage the ever-growing volume of video data.
• Portions of our security establishment are fixated on mass surveillance as an approach to stopping terrorism (as we’ve written about often, but with special focus in this 2007 report).
• The private sector is bursting with entrepreneurs looking to sell new surveillance solutions to the government, TrapWire being just one of many in that regard. If something can be done, someone somewhere will make a pitch to do it.
• Government agencies frequently launch new surveillance efforts with minimal public notice or debate, and often (though not in this case) cloak them in official secrecy. (It’s always surprising when security agencies feel they can activate far-reaching surveillance tools without any public knowledge or debate. We’re supposed to be living in a democracy; that is what these security agencies are supposed to be protecting. They shouldn’t be helping themselves to dramatic new powers over citizens whenever the latest technology makes that possible.)

Together, these elements are a recipe for a new kind of total surveillance that people are rightly worried about. Beyond the details, all the online “hype” over the TrapWire story reflects an implicit recognition that such a system is now technologically possible and that we are barreling full speed toward a surveillance society. Whatever the details of TrapWire’s current operation, we need to grapple with that fact. That’s the biggest takeaway from the TrapWire story.
