When I stick a movie into my DVD player and try to fast-forward through some of the annoying preliminaries that are at the front of the disc, I get an error message:
“This operation prohibited by this disc”
This is just an annoyance, but it is a potentially very portentous one.
First of all, it’s not “the disc” that is prohibiting my attempt to skip past trumped-up FBI warnings and Hollywood anti-piracy propaganda. It’s the Hollywood studios that have programmed my technology to prohibit my “operation.” This is a sterling example of using technology to accomplish what the Marxists called “commodity fetishism”—the hiding of human power relationships behind a pseudo-objective reality. Marx was talking about money and economics, but here technology is called on to play the same role. The message is: “There’s no power play going on here, it’s just how the objective technology works!” What’s actually happening is that the movie studios have decided to program technology (or pressure hardware manufacturers to do so) to take away your control over your time and what you watch, and to force you to view content they control, in order to advance their own interests.
More broadly, this annoying little example highlights the power struggles we could face as computer chips come to saturate the world around us—the trend often called “the Internet of Things.”
The question is how computers will be used to impose rules on us. Computers are machines for processing data, which they can do with great consistency and regularity. With these qualities, computers actually bear a strong resemblance to another, far older human rule-processing creation: bureaucracies. Like a computer, the core operation of a bureaucracy is to take input in standardized form, process that data according to a set of rules, and spit out a result or decision. Have the applicant fill out a form, distribute the form internally, perhaps cross-check its claims against other bureaucracies, evaluate its merits according to a defined set of criteria, and then let the applicant know the results. Data in, decision out.
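To make the analogy concrete, here is a minimal sketch of that “data in, decision out” pattern in code. The form fields, criteria, and revocation list are all invented for illustration—this is just the bureaucratic pipeline reduced to its skeleton:

```python
# A hypothetical permit application, processed "data in, decision out."
# Every field name and rule here is an invented illustration.

def process_application(form: dict) -> str:
    """Take standardized input, apply fixed rules, emit a decision."""
    # Rule 1: the standardized form must be complete.
    required = {"name", "age", "license_number"}
    if not required <= form.keys():
        return "rejected: incomplete form"
    # Rule 2: evaluate merits against a defined criterion.
    if form["age"] < 18:
        return "rejected: under minimum age"
    # Rule 3: cross-check against another (hypothetical) registry.
    if form["license_number"] in {"REVOKED-001"}:
        return "rejected: license revoked"
    return "approved"

print(process_application({"name": "A. Guest", "age": 30,
                           "license_number": "OK-123"}))  # approved
```

The point is not the particular rules but their character: fixed in advance, applied uniformly, and indifferent to anything the applicant might say that doesn’t fit a field on the form.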
Computers are thus a natural fit for bureaucracies, which have seized upon them from the very beginning, but it’s interesting to think about how those bureaucracies will make use of technology as it worms its way into everything that surrounds us. As the DVD player example suggests, one danger is that the Internet of Things will allow the tendrils of corporate and government bureaucratic power to reach ever-deeper into the crevices of our lives, enforcing all manner of petty rules and restrictions. As computers saturate our lives, large organizations will constantly be tempted to use them in this way. A good way to think about it is that many computers (often no more than single chips) may come to serve as little mini-bureaucracy “pods” cast off from their agency or company mother ships, allowing those bureaucracies to encode their rules and distribute their power in ways they never could before.
The danger is that as “computing power” grows and spreads, that term may take on a far more ominous meaning.
And of course the fundamental operations of those computers will remain the same: data in, decisions out. The first half of that equation means that the raw material for the decisions made by these enforcers will be information about us, a reminder of how tightly this power struggle is bound up with privacy—that information and control are two sides of the same coin. One of the most crucial battlefields will be how our devices are programmed to answer bureaucratic questions such as “What information do we need? What information are we entitled to? What information is relevant and irrelevant to our rule-enforcement processes? What information should be passed along to other agencies?”
What are possible examples of this other than my DVD player? One may be RFID chips, which provide the perfect infrastructure for systems of individualized, algorithmically determined access control. Already today, think about the use of contactless key cards by hotels, and how much more control a hotel has over such keys. It can now remotely revoke or reprogram keys, track their use, and program in rules (you can access these floors at these times, but not those facilities at those times, though that guy over there can). None of that could be done with a metal key.
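The kind of individualized, revocable rules a key-card system can enforce might be sketched like this—the card IDs, floors, and facility hours are all invented for illustration, not drawn from any real hotel system:

```python
# A speculative sketch of per-card access rules. Every ID, floor,
# and time window here is a made-up illustration.
from datetime import time

POLICIES = {
    "card-1021": {
        "floors": {1, 2, 7},                     # lobby levels plus the guest's floor
        "gym_hours": (time(6, 0), time(22, 0)),  # facility open window for this guest
        "revoked": False,                        # can be flipped remotely at any moment
    },
}

def may_open(card_id: str, floor: int) -> bool:
    """Door-by-door check: unknown or revoked cards open nothing."""
    policy = POLICIES.get(card_id)
    if policy is None or policy["revoked"]:
        return False
    return floor in policy["floors"]

def may_use_gym(card_id: str, now: time) -> bool:
    """Facility access is further restricted to a per-guest time window."""
    policy = POLICIES.get(card_id)
    if policy is None or policy["revoked"]:
        return False
    start, end = policy["gym_hours"]
    return start <= now <= end
```

Notice that “revoking” a key is just flipping one flag in a table the hotel controls—no locksmith, no confrontation, no physical act at all. That asymmetry, not the plastic card itself, is what separates this from a metal key.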
Another example of the kind of thing I’m talking about is starter interrupt devices on cars, which remotely disable them when their owners fall behind on payments. Nissan has reportedly tested various systems for preventing operation of a car when the driver appears to be drunk—not using breathalyzers (which are already in use in some states for repeat DUI offenders), but based on cameras, sensors, and algorithmic analysis of driving patterns in an attempt to detect inebriation. Ford already has a system that allows parents to hard-wire speed limits into a car when it is driven by a family teenager. The system can also send alerts when seatbelts are not worn, and even limit the volume of the car stereo.
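The logic of a starter interrupt device is bureaucratically simple, which is part of what makes it so potent. A minimal sketch, assuming a made-up grace period and payment record (real devices and their terms vary):

```python
# A hypothetical starter-interrupt rule: the grace period and dates
# are invented for illustration, not taken from any real product.
from datetime import date

def starter_enabled(last_payment_due: date, today: date,
                    grace_days: int = 10) -> bool:
    """Allow the engine to start only while the account is within grace."""
    return (today - last_payment_due).days <= grace_days

print(starter_enabled(date(2024, 1, 1), date(2024, 1, 5)))  # True: within grace
```

One comparison against one date, and the car becomes an enforcement arm of the lender’s accounts-receivable department—the “mini-bureaucracy pod” in its purest form.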
Of course a parent exerting control (however petty) over a teenager in a car they own is not the same as a company or agency exerting control over a customer or citizen. But children are often the first to be subject to rules and controls that later spread more broadly (along with prisoners and immigrants, followed by low-income minorities). And the prevention of drunk driving is certainly a good thing—but like many technologies, it’s the easy cases that lead the way, with the more controversial iterations coming later. Would we want every car to have a functioning breathalyzer? How about a test for the influence of increasingly legal marijuana while we’re at it? Why not other frequently abused illegal and prescription drugs? And what if the car is programmed to report back the results of those tests to [fill in the blank]? Each step could arguably save lives, but where does that end, and do we want to live there?
And think about self-driving cars. Why not hardwire each car to be incapable of violating any speed limit or traffic rule, no matter how minor? And while we’re at it, should we prohibit excessively quick acceleration, which wastes fuel and is bad for the environment? The possibilities are endless. (On the other hand, if the car is self-driving, maybe it won’t matter any more if the owner is intoxicated.)
Such will be the controversies of the future, replicated across a wide variety of areas, as minute control becomes increasingly possible and we are confronted with the possibility of some institution exerting such control—over us.
And of course the drunk driving example is a reminder that one of the things that will make this process tricky is that computing power can be used for all kinds of purposes—some good, some bad, some ambiguous. It’s not as if anyone is going to oppose the spread of all computer technology the way some people might oppose all nuclear power or all GMOs; there are simply too many good applications. So what it seems we’re going to be left with is case-by-case trench-warfare power struggles over how deeply we want to let state and corporate bureaucratic power soak into our lives.
This post is the first in a series; the second part is now here.