
Cars That Talk to Each Other: What Are The Privacy Implications?

Jay Stanley,
Senior Policy Analyst,
ACLU Speech, Privacy, and Technology Project
February 4, 2014

The U.S. Department of Transportation announced on Monday that it is proceeding with an effort to reduce traffic accidents by creating a “Vehicle to Vehicle” wireless infrastructure (known as V2V) through which cars can communicate with each other while on the roads and automatically avert certain accidents. Details can be found in this story by the Associated Press, and in this in-depth look by Ars Technica.

What about the privacy implications? This is a technology I’ve been following since 2006, when some other privacy activists and I attended a briefing and feedback session sponsored by the DOT. I recall saying some pretty severe things about some of the ideas that were presented. Since that time, the agency running this program, the National Highway Traffic Safety Administration, has shifted focus away from the idea of creating a network of sensors in the roadway (known as V2I, for “Vehicle to Infrastructure”) in favor of V2V, apparently because V2I would simply be too expensive.

Still, the vision here is that every vehicle would be required to include a transponder that would transmit data about the vehicle—unencrypted—such as its GPS location, heading, yaw, and how high above the ground it is (to deal with multi-story roadways). The system uses a specialized wireless protocol (DSRC, for “Dedicated Short Range Communications”)—a variant of WiFi with modifications to allow for low-latency connections between fast-moving vehicles. It makes use of spectrum (5.9 GHz) allocated by the FCC for public safety applications.
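To make the concern concrete, here is a rough sketch in Python of the kind of per-vehicle payload being described. The field names, units, the JSON encoding, and the roughly ten-messages-per-second rate are my own illustrative assumptions, not the actual DSRC message definition; the point is simply that a car would be continuously announcing its position and motion in the clear.

```python
# Illustrative sketch only: field names, units, JSON encoding, and the 10 Hz
# broadcast rate are assumptions for explanation, not the real DSRC format.
from dataclasses import dataclass, asdict
import json
import time


@dataclass
class SafetyBroadcast:
    """Roughly the kind of state data each vehicle would transmit in the clear."""
    latitude: float        # GPS position, decimal degrees
    longitude: float
    elevation_m: float     # height above ground, to distinguish stacked roadways
    heading_deg: float     # direction of travel, 0-360
    yaw_rate_dps: float    # rotation rate, degrees per second
    timestamp: float       # when the reading was taken


def broadcast_loop(read_sensors, send):
    """Send the current vehicle state several times per second (hypothetical loop)."""
    while True:
        message = asdict(read_sensors())
        send(json.dumps(message).encode())  # real DSRC uses a compact binary encoding
        time.sleep(0.1)                     # ~10 messages per second (assumed rate)
```

A stream like this, received by any roadside antenna, is all it would take to reconstruct where a particular car has been, which is why the identifier question below matters so much.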

So what are we to make of this from a privacy perspective? I guess I would make three related points:

  • We don’t want this to become yet another vector for location tracking. It certainly has that potential. We’ve already got our hands full trying to protect our privacy in the face of mobile phone tracking, license-plate readers, aerial surveillance, and GPS tracking.
  • Cars under this system must not broadcast a unique identifier. My understanding is that the system will need to include rigorous authentication protocols to ensure that hackers can’t subvert the system and use it to cause accidents instead of preventing them. But the authentication headers need to be designed so they do not function as unique identifiers; with careful design, this should be entirely doable (a simplified sketch of one such approach follows this list). (The security issues around automobiles are already becoming significant.)
  • We also don’t want it used as an automated enforcement mechanism. Dorothy Glancy is a professor at Santa Clara Law who has worked on this issue for a long time (including as a consultant to the DOT) and has kept me updated over the years. She tells me that “the automated enforcement people just love the idea of this. They’re asking, ‘Why can’t I just read people’s speed and automatically just send them tickets?’” I have written before about the problematic nature of automated law enforcement, as well as Americans’ notably ambivalent relationship to traffic enforcement.
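To illustrate what “authentication without a unique identifier” might look like in practice, here is a simplified sketch of a rotating-pseudonym scheme. The batch size, rotation interval, and HMAC-based signing are assumptions made for illustration, not the actual V2V security design; the idea is that receivers can verify a message came from an authorized device while the credential attached to it changes too often to serve as a persistent tracking handle.

```python
# A simplified sketch of one possible approach: each car holds a batch of
# short-lived pseudonym credentials and rotates through them, so messages can
# be authenticated without any single identifier following the car all day.
# Batch size, rotation interval, and HMAC signing are illustrative assumptions.
import hmac
import hashlib
import os
import time


class PseudonymSigner:
    def __init__(self, batch_size=20, rotation_seconds=300):
        # In a real system these would be certificates issued by a trusted
        # authority; random local keys stand in for them here.
        self.credentials = [(os.urandom(8).hex(), os.urandom(32))
                            for _ in range(batch_size)]
        self.rotation_seconds = rotation_seconds

    def current_credential(self):
        # The active pseudonym depends only on the current time slot, so the
        # identifier changes every few minutes instead of staying fixed.
        slot = int(time.time() // self.rotation_seconds) % len(self.credentials)
        return self.credentials[slot]

    def sign(self, payload: bytes) -> dict:
        pseudonym_id, key = self.current_credential()
        tag = hmac.new(key, payload, hashlib.sha256).hexdigest()
        return {"pseudonym": pseudonym_id, "payload": payload, "tag": tag}
```

Whether the deployed system actually works this way is exactly the kind of detail the DOT has not yet spelled out, which is the point of the next two paragraphs.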

As Glancy pointed out to me, the DOT’s Monday statement was vague on the privacy questions. It boasts that “several layers of security and privacy protection” will ensure that V2V “does not involve exchanging or recording personal information or tracking vehicle movements.” At the same time, it says that “a vehicle or group of vehicles would be identifiable” if “there is a need to fix a safety problem.”

Glancy was also quite surprised that the statement didn’t actually say anything about when this would become required. “They say they’re waiting for a research report, and then will work on a regulatory proposal,” she pointed out. “But they’ve been working on a regulatory proposal for years. The big news will be when they start to move to require this, and when they say exactly how the privacy and security side of this is going to work.”

One final note: many privacy-invasive government projects and proposals are aimed at stopping terrorism—built around preventing what are basically very rare freak events, where both the chance that such a program will even be effective and its cost-benefit calculus are highly questionable. This, however, is different. Unfortunately, automobile accidents are NOT freak events. The number of annual fatalities on U.S. roadways is around 42,000. Can you imagine Americans tolerating 42,000 deaths a year from terrorism? There are roughly 6 million traffic accidents a year, at an estimated total cost of $230 billion annually.

We can’t be certain how effective these techniques will be in the end—I would expect a certain degree of risk compensation to kick in, for example—but Glancy tells me that “the technology’s really very good at preventing right-angle collisions, which are very hard to see, especially at night, or because of shrubbery or what have you.” And overall, the results of a large test in Michigan last year showed that the technology did work largely as hoped—that DSRC was a good way to exchange data to avoid accidents, and that it worked not just in the lab but also in the real world.

So, unlike many anti-terrorism schemes, there seems to be plenty of reason to believe this system will prove effective. Considering the very real carnage and suffering out there on our roads each day, this is not a program where I would want to “just say no,” privacy threat or not. Everyone knows of someone killed in a car crash. But there is no reason we can’t get the safety benefits here while staying consistent with our values, by making sure that any system that’s rolled out has privacy hard-coded in, in ways that cannot easily be altered.
