We Want Internet Providers to Respond to Internet Demand, Not Shape It

The debate over network neutrality is misguided, Robert McMillan argues in Wired, because amid dismay over the FCC’s proposal to allow ISPs to sell “fast lanes” to companies, people don’t understand that giant internet companies like Google, Facebook, and Netflix already enjoy preferential delivery of their bits to end-users. This takes place, he points out, through “peering connections,” in which giant web companies pipe data directly to ISPs on their own private connections rather than through the internet backbone, and through “content delivery networks,” or CDNs, which are servers run by web companies deep inside the bowels of the ISPs.

“We shouldn’t waste so much breath on the idea of keeping the network completely neutral. It isn’t neutral now,” McMillan writes. He approvingly quotes network developer Dave Taht as saying, “most of the points of the debate are artificial, distracting, and based on an incorrect mental model on how the internet works.”

McMillan is narrowly correct that many people don’t talk about or are not aware of these complexities, but he is wrong in his broader implication that the concerns behind network neutrality—including those around internet “fast lanes”—are thereby misplaced. His explanation of peering and content delivery servers is helpful, and a Wired graphic comparing “What you think the Internet looks like” and “What the Internet really looks like” is a useful visual summary of McMillan’s point.

What McMillan’s argument does not make clear, however, is the difference between following internet users and leading or manipulating them. He explains that

Because these [big internet] companies are moving so much traffic on their own, they’ve been forced to make special arrangements with the country’s internet service providers that can facilitate the delivery of their sites and applications. Basically, they’re bypassing the internet backbone, plugging straight into the ISPs….

Although Google does have an edge over others, not every company needs that edge. Most companies don’t generate enough traffic to warrant a dedicated peering connection or CDN. And if the next internet startup does get big enough, it too can arrange for a Google-like setup….

Traditionally, ISPs have not charged for interconnection points. They’re happy to have Google or Netflix or Akamai or Level 3 servers or routers in their data centers because they speed up service for their customers and reduce the amount of traffic that has to flow out of their network.

McMillan is right that these arrangements do constitute a “fast lane” of a kind. But what he is describing are peering arrangements that reflect the flow of internet traffic. If vast numbers of people are voting with their clicks and seeking content from Google, Facebook, and Netflix, then we want the ISPs to give their customers what they want, and take whatever technological measures are necessary to ensure that the huge amount of traffic coming from these sources is delivered smoothly.

What we don’t want to see are ISPs that deploy these technological measures to shape instead of reflect the flow of traffic online. The fear around the specter of ISPs selling “fast lanes” is that it would open up the gates for just such manipulation—allowing them to extort tolls, giving them an incentive not to ensure sufficient capacity for “normal” (non-tariff-paying) traffic, and enabling them to compete unfairly with content companies in such areas as the provision of streaming movies.

Despite the name, advocates are not concerned with some abstract and pure notion of “neutrality”; the goal is always to defend the principle that the company that connects us to the internet does not get to manipulate or control what we do on the internet.

So how do we ensure that they don’t? McMillan puts all his hopes in the basket of competition. “What we should really be doing is looking for ways we can increase competition among ISPs—ways we can prevent the Comcasts and the AT&Ts from gaining so much power that they can completely control the market for internet bandwidth,” he writes.

I’m certainly for robust competition and end-user choice. And I agree that ultimately the issue comes down to limiting telecom power. That said, I’m skeptical that competition in this area will ever be sufficient to remove the need for other measures to prevent the telecoms from accumulating too much power—such as strong regulations. Competition is an excellent way of distributing Chinese restaurants across a city—and keeping customer service good within them. But there are many problems with trying to impose a market-competition paradigm on something like the gigantic, capital-intensive, publicly vital, utility-like service that ISPs provide. Currently the best most Americans can probably hope for is an oligopoly of two or three ISPs to choose from, and that is not the kind of rich competition that reliably prevents power from being abused. (And remember, even where there were multiple railroad lines competing with each other—and today, airlines—the government has always required each of those companies to honor the rules of common carriers.)

The other alternative is to resurrect the “open access” regulatory regime created by the Telecommunications Act of 1996, which required incumbent telecoms to open their networks to competing ISPs in the hope of curbing telecom power by creating rich competition for their services. One problem with that regime is that it required telecoms to open their wires to, and internally host servers owned by, companies with which they competed and toward which they were naturally hostile. Unsurprisingly, there were many reports of telecoms not treating their competitors fairly. If that could be made to work, the resulting competition would be great—but making it truly work would have required much messy and invasive oversight and regulation of its own. (In any case this approach was ended by the FCC, which declared broadband an “information service” not subject to the regulations, a determination that the Supreme Court declined to overturn.)

Competition is great where it works to restrain the power of big companies and protect the freedom of the individual. But I’m skeptical that we can refrain from directly regulating companies that are, at the end of the day, utilities by trying to induce them into the same dynamics that govern Chinese restaurants. Not only is the capital-intensive, high-barriers-to-entry nature of the telecom sector far different, but society relies upon its services far more—not only to sustain the many other marketplaces that depend on our telecom infrastructure, but also to sustain our free speech.


Dave Taht

As the guy quoted here in partial support of both of your arguments, I need to try to set the record a little straighter.

I made a bunch of comments on the main mailing list I operate on here:

The things that bug me most about the net neutrality debate are:

0) The whole slow lane/fast lane conception is just wrong. Internet traffic looks nothing like vehicle traffic. On roads, you have only a few lanes to put cars in. On the internet, it's more like you break up the cars and trucks into atoms (packets), mix them all together, pour them through various choke points and reassemble them at their destination no matter in what order they arrive.
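That "break up the cars into atoms" model can be sketched in a few lines of Python. This is a toy illustration of fragmentation and reassembly (the function names and the tiny MTU are invented for the example, not real networking code):

```python
import random

def fragment(message: bytes, mtu: int):
    """Split a message into numbered fragments ("packets")."""
    return [(seq, message[i:i + mtu])
            for seq, i in enumerate(range(0, len(message), mtu))]

def reassemble(packets):
    """Rebuild the message regardless of the order packets arrived in."""
    return b"".join(data for _, data in sorted(packets))

msg = b"cars and trucks broken into atoms"
packets = fragment(msg, mtu=5)
random.shuffle(packets)          # packets may arrive in any order
assert reassemble(packets) == msg
```

Because delivery order doesn't matter, there is no single "lane" for a flow to sit in; every choke point sees an interleaved mix of everyone's packets.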

Traffic management at these levels IS needed. It is handled at an e2e level by (generally) TCP-friendly protocols, and at the router level by queue management schemes like "drop tail". Massive improvements over drop tail—fixing what is known as "bufferbloat" with better active queue management (AQM) and packet scheduling (FQ) schemes such as CoDel, fq_codel, RED, and PIE—are being considered by the IETF to better manage congestion. The net result of these techniques is vastly reduced latency across the chokepoints and vastly improved service for latency-sensitive applications (such as voice, gaming, and videoconferencing), with only the fattest flows losing some packets and thus slowing down, regardless of who is sending them. Politics doesn't enter into it. Any individual can make their own links better, as can any ISP or vendor.
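The flow-queueing idea above can be sketched as a toy simulation. This is a deliberate simplification (the class and its policy are invented for illustration; real fq_codel also tracks per-packet sojourn times and uses the CoDel drop law), but it shows the two behaviors that matter: flows are served round-robin, and when the buffer overflows only the fattest flow loses packets:

```python
from collections import defaultdict, deque

class FlowQueues:
    """Toy flow-queueing sketch: per-flow FIFOs, round-robin service,
    drop-from-the-fattest-flow when the shared buffer is exceeded."""

    def __init__(self, limit):
        self.limit = limit                 # max packets buffered in total
        self.queues = defaultdict(deque)   # flow id -> packet FIFO
        self.active = deque()              # flows awaiting service, in turn order
        self.backlog = 0

    def enqueue(self, flow, packet):
        if not self.queues[flow]:
            self.active.append(flow)
        self.queues[flow].append(packet)
        self.backlog += 1
        if self.backlog > self.limit:
            # Over the buffer limit: drop from the flow with the longest
            # queue (the "fattest" flow), so heavy senders slow down while
            # sparse flows (voice, gaming) keep their packets.
            fattest = max(self.active, key=lambda f: len(self.queues[f]))
            self.queues[fattest].popleft()
            self.backlog -= 1
            if not self.queues[fattest]:
                self.active.remove(fattest)

    def dequeue(self):
        # Serve flows one packet at a time, round-robin, so a sparse flow
        # is never stuck behind a bulk download's whole queue.
        if not self.active:
            return None
        flow = self.active.popleft()
        packet = self.queues[flow].popleft()
        self.backlog -= 1
        if self.queues[flow]:
            self.active.append(flow)
        return flow, packet

fq = FlowQueues(limit=4)
for i in range(10):
    fq.enqueue("bulk-download", i)   # a fat flow overruns the buffer
fq.enqueue("voip", "voice-1")        # a sparse flow arrives last
```

In this run only the bulk flow has lost packets, and the voice packet is dequeued on the second turn rather than waiting behind the entire download.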

Furthermore individual packets can be marked by the endpoints to indicate their relative needs. This is called QoS, and the primary technique is "diffserv".


There are plenty of problems with diffserv in general, but they are very different from thinking about "fast or slow" lanes, which are rather difficult to implement compared to any of the techniques noted above. To implement such lanes, you would have to maintain a database of every IP address you wish to manipulate and consult it in real time, on every packet.
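Diffserv marking is something any endpoint can do today with an ordinary socket option. A minimal sketch, assuming a Linux-like system where `IP_TOS` is settable without privileges (the EF code point value is standard; everything else here is just an example):

```python
import socket

# DSCP "Expedited Forwarding" (EF, decimal 46) occupies the upper six
# bits of the IP TOS byte, so the byte value is 46 << 2 == 0xB8.
EF_TOS = 46 << 2

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_TOS)

# Every packet this socket sends now carries the EF code point. Routers
# that honor diffserv can prioritize it, but they are also free to
# ignore or rewrite the marking, which is one of diffserv's problems.
assert sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS) == EF_TOS
sock.close()
```

Note the contrast with a "fast lane": the endpoint declares the packet's needs in the packet itself, so no per-IP-address database is consulted on the forwarding path.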

IF ONLY I could see in the typical network neutrality debater a sane understanding and discussion of simple AQM, packet scheduling, and QoS techniques, I would be extremely comforted by the idea that sane legislation would emerge. But I've been waiting 10 years for that to happen.

We have tested and deployed these algorithms, achieving dramatic reductions in latency and increased throughput on consumer-grade hardware, and various ISPs and manufacturers have standardized on various versions (DOCSIS 3.1 uses PIE, free.fr uses fq_codel, as do Streamboost and nearly all the open source routing projects such as OpenWrt).

I really wish those debating net neutrality would actually try, or at least be aware of, these technical solutions to the congestion problems they seek to solve with legislation. I wouldn't mind at all a legal mandate to have AQM on by default. :)

It makes a huge difference, on all technologies available today:


See also the bufferbloat mailing lists.

1) If we want true neutrality, restrictive rules by the ISPs regarding their customers hosting services of their own have to go, and nobody's been making THAT point, which irks me significantly. In an age where you have, say, gbit fiber to your business, it makes quite a lot of sense from a security and maintenance perspective to be hosting your own data and servers on your own darn premises.

2) I didn't make any points about competitiveness either; that was Robert's piece. I didn't like the original 1996 policy, nor do I think Title II is the answer.

For the record:

I oppose the Time Warner merger, and also oppose rules and regulations that prevent municipalities from running their own fiber and allowing providers to compete on top of it. In fact I strongly, strongly favor commonly owned infrastructure with services allowed to compete on top of it, a model that works well in Europe and elsewhere.

I came very close to writing a letter to the FCC on that, but didn't.

I LIKED the world we had in the 90s, with tens of thousands of ISPs competing on top of universally agreed-upon link technologies. I ran one of those ISPs. That world predated both of those regulations; the then-monopoly was required to provide access that anyone could buy for a fair price.

I am glad gfiber exists to put a scare into certain monopolists, but even then I'd be tons happier if municipalities treated basic wired connectivity as we do roads and not as we do telephone poles.

It is one of my hopes that one day wireless technologies will become sufficiently robust to break the last wire monopolies once and for all.


I really hope my last comment made it.


Indeed, Dave, I think the point you get to at the end is the one that is most accessible to legislators and end users. Imagine if we had competition for roads. Suppose you could get road service from Transcast or from Horizon. Transcast and Horizon both would have to pave a road to your door and talk you into using their road and not their competitor's road. Aside from a few rabid objectivists, nobody would seriously propose such a system, because in order to have any competition at all you have to at a minimum double the cost of delivering road service. All of the costs double--snow clearing, repair, original paving. And that's just for a duopoly: to get real competition you'd need more road service providers. Pretty soon your neighborhood would be mostly asphalt, and your cost to get road service would skyrocket.

The analogy isn't perfect: it is easier to deliver internet to the home than pavement, in the sense that two competing services can in theory share the same conduit or pole. But in every other sense the cost is similar: the cost to deliver internet service increases linearly with the length of the roads leading to the homes served, and delivering fiber is by no means cheap.

So in practice in most locations you wind up with a duopoly at best, and a monopoly at worst. Why not cut to the chase and just have the same people who do the roads do the last-mile internet service, and do it right: fiber to every home? Move the competition upstream, into the "points of presence" at the other end of the last-mile service. Delivering service to these points of presence is much easier and cheaper, and consequently meaningful competition between service providers is possible.

The system we have now lends itself to the gatekeeper who owns the last-mile connection using their monopoly to extract concessions from users of the network that would never be possible in an environment where there was competition. Get rid of this problem, and net neutrality becomes much less of a problem.


Probably the best description of how fiber can be rolled out I've read is here:


I certainly wouldn't mind an update on that network, today.

Dave Taht

Putting fiber in the ground is a very expensive process. Even with gfiber asking whole neighborhoods to commit before rolling it out, it will be years before the real costs of putting it in the ground are paid off. Then, even if you create neutral parties to just manage the fiber, the ongoing costs of getting it working, keeping it working, and fixing it when it breaks have to be covered. Even if that organization is limited to just physical interconnectivity, experts need to be online, people have to answer phones, and trucks gotta roll to fix things.

One of the hardest things is to agree on standards so that equipment and protocols can interoperate and to build hardware and software that does.

It is not an easy task, and yet, I do agree: allowing a vertically integrated content provider and ISP to own it all is proving to be the wrong thing.

The best piece I've seen on the complexities and problems involved in wiring a major city for fiber is here:


But once you are done, it seems good.

I have a friend in Sweden who has his choice of nine different ISPs, 100 Mbit symmetrical, with fiber to the wall.
