Thursday, March 29, 2007
“Scotty, Mi devas deform rapido en tri minutoj aŭ ni ĉiuj estas mortinta!” (“Scotty, I need warp speed in three minutes or we're all dead!”)
William Shatner is William Shatner, even in Esperanto.
“Remember, we're all in this together.”
Brazil, 1985, directed by Terry Gilliam, written by Terry Gilliam, Charles McKeown, and Tom Stoppard. In Brazil, Terry Gilliam asks the audience to imagine a world where the government wages a never-ending war with shadowy terrorists, a world where civil liberties are being destroyed in the name of security, a world where torture becomes official state policy in order to conduct more efficient interrogations of suspected terrorists. What's more, in Gilliam's fictional world, the central government is not just secretive but incompetent. Mistakes are made, leading to the imprisonment and torture of innocents. Most offensive of all, Gilliam implies that such a government could exist without its citizens staging an armed revolt. I'm usually willing to suspend disbelief, but this goes entirely too far.
Via Jason Kottke, #51: Brazil
Yeah, he's right—a government like that could never happen.
I'm ambivalent about network neutrality
I've been wanting to write this for a few months now (the original intent was to write it for the other site, but I'm not sure Smirk would agree with me), but since Jason Kottke linked to Craig “Craigslist.org” Newmark's article on the subject, I figured it was time.
Net neutrality.
Should the network be neutral in the traffic it transports, or can it prioritize the data depending upon content, destination, or both? It's an argument with proponents on both sides of the issue and it pretty much breaks down into individuals and small companies wanting neutrality, while the large multinational corporations want to shape the traffic.
Me?
I personally would like to see a neutral Internet, but I also see no problem with companies doing what they want to network traffic they carry. What I don't want to see is government regulation of what can and can't be done on the Internet. And generally, I feel the whole movement is moot anyway, because the net will always be neutral.
How?
Economics, my dear Watson.
Data is transferred across the Internet using the Internet Protocol, IP. To transfer a file, it's broken up into small bundles of data called packets, each wrapped with a source address (where it's coming from) and a destination address (where it's going to), and pumped out through some piece of hardware (an Ethernet card, USB, serial line, T-1, OC-3; heck, IP packets can even be transported using avian carriers) to another device that's closer to the destination. That device then takes the packet, looks at the destination address, and pumps the packet out some hardware port to another device that's closer still, and the process repeats until the destination is reached, on average in fewer than 20 such hops [1].
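For the curious, here's roughly what that fixed layout looks like as a C struct. It's a sketch of the IPv4 header from RFC 791; the field names are mine, and real code would use the system's own struct ip or struct iphdr:

```c
#include <stdint.h>

/* A sketch of the IPv4 header (RFC 791).  Field names are
   illustrative.  The key point: the destination address always sits
   at byte offset 16, which is what makes hardware lookups feasible. */
struct ipv4_header
{
  uint8_t  version_ihl;    /* version (high nibble) + header length in 32-bit words (low nibble) */
  uint8_t  tos;            /* the rarely used Type of Service field */
  uint16_t total_length;
  uint16_t identification;
  uint16_t flags_fragment;
  uint8_t  ttl;
  uint8_t  protocol;       /* 6 = TCP, 17 = UDP, ... */
  uint16_t checksum;
  uint32_t source;         /* byte offset 12 */
  uint32_t destination;    /* byte offset 16, always the same place */
};
```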
Now, the closer to the “core” you get, the more packets a router (a dedicated device to sling packets around) has to handle. To this end, most routers have hardware dedicated to slinging the packets around, and since the destination address is at a fixed offset in the IP packet, it's easy (relatively speaking) to build dedicated hardware for this function. And it's important this be fast, because a core router may be handling 50,000 packets a second (the router The Company's traffic flows through handles about 4,000 packets a second) and comparing each destination address against a routing table with perhaps 200,000 possible destinations. Assuming a binary search on the routing table, that's 17 comparisons on average per packet (18 max), times 50,000 packets per second, or 850,000 comparisons per second, or about one comparison per µsecond [2].
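And a back-of-the-envelope sketch of that lookup, using the plain binary search the numbers above assume. (Real core routers do longest-prefix matching in specialized hardware, so treat this as a cost model, not a blueprint.)

```c
#include <stddef.h>
#include <stdint.h>

struct route
{
  uint32_t destination;   /* kept sorted so we can binary search */
  int      interface;     /* which port to send the packet out   */
};

/* Binary search over a sorted routing table: about log2(n)
   comparisons per packet, i.e. 17-18 for n = 200,000 entries. */
int lookup(const struct route *table, size_t n, uint32_t dest)
{
  size_t lo = 0;
  size_t hi = n;

  while (lo < hi)
  {
    size_t mid = lo + (hi - lo) / 2;

    if (table[mid].destination == dest)
      return table[mid].interface;
    else if (table[mid].destination < dest)
      lo = mid + 1;
    else
      hi = mid;
  }
  return -1;  /* no route */
}
```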
But the second other criteria are added to the routing entries, fewer packets can be processed on a given router. Say you want certain source addresses to be routed over faster links. Now, not only do you have to scan the destination address, but the source address as well. But the dedicated hardware in routers is only there for scanning destination addresses; scanning other parts of the packet requires the CPU in the router to handle the packet.
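To make that cost concrete, here's a sketch of what per-packet rule matching looks like once it falls to software; the rule structure is my own invention for illustration:

```c
#include <stddef.h>
#include <stdint.h>

struct policy_rule
{
  uint32_t src_net,  src_mask;    /* match on the source address ... */
  uint32_t dest_net, dest_mask;   /* ... and on the destination      */
  int      interface;             /* the "fast link" to route over   */
};

/* A linear scan over the policy rules: O(rules) work per packet, on
   the router's CPU, instead of one hardware-assisted destination
   lookup. */
int classify(const struct policy_rule *rules, size_t n,
             uint32_t src, uint32_t dest)
{
  for (size_t i = 0; i < n; i++)
    if (((src  & rules[i].src_mask)  == rules[i].src_net)
     && ((dest & rules[i].dest_mask) == rules[i].dest_net))
      return rules[i].interface;
  return -1;  /* no match; fall back to the normal routing table */
}
```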
Ouch.
And scanning deeper into the IP packet, say, for a TCP port (like TCP port 80, which is the default port HTTP runs over) requires some additional processing to locate the TCP header, since the IP header is variable in length.
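For example, digging out the TCP destination port means first decoding the IHL field to find where the IP header ends, something like this:

```c
#include <stddef.h>
#include <stdint.h>

/* Given a raw IPv4 packet, return the TCP destination port, or -1.
   The IP header is 4 * IHL bytes long (IHL is the low nibble of the
   first byte), so the TCP header's location isn't known until part
   of the packet has already been read and decoded. */
int tcp_dest_port(const uint8_t *packet, size_t len)
{
  size_t ihl;

  if (len < 20)                  return -1; /* too short for an IP header */
  ihl = (packet[0] & 0x0F) * 4;             /* IP header length in bytes  */
  if (ihl < 20 || len < ihl + 4) return -1; /* bad IHL or truncated       */
  if (packet[9] != 6)            return -1; /* protocol 6 = TCP           */

  /* TCP destination port: bytes 2-3 of the TCP header, big-endian */
  return (packet[ihl + 2] << 8) | packet[ihl + 3];
}
```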
Ouch.
And then there's the administrative overhead.
Oh yes, maintaining a list of “fast addresses” (say, to partnering networks) isn't free. There's overhead in maintaining the list. There's overhead in distributing the list to all the routers across a network. There's overhead in troubleshooting: “Why did the packets go there?”—a few hours later—“So, which is more important, streaming video from Viacom to AT&T or VoIP from AT&T to Verizon?”—a few days later—“But if we do that, that impacts the data from Sprint, who just signed a major deal with us.”
Ouch.
And my example there assumes that AT&T hasn't already bought Verizon. Heck, in these days of multinational mergers it's hard to know who owns what; just the politics of running a non-neutral network is mind-boggling to me, never mind the technical aspects of it.
A simple network is cheaper to run. A simple network is faster too.
Cheaper to run. Faster. Higher profit margin. I would think this would be a no-brainer.
Oh, and did I mention that the current version of IP, which was designed over thirty years ago, has a “Type of Service” field meant for exactly this sort of quality-of-service marking, and that it's rarely, if ever, actually used? Why? A few reasons:
- The most commonly used network API, the BSD sockets API, originally had no way for applications to set this field (the IP_TOS socket option came later, and support for it has been spotty; see the sketch after this list).
- It slows down a router (see above).
- It's an administrative nightmare. Heck, BGP, the routing protocol used between networks, is an administrative nightmare, and that's just dealing with destination addresses; it doesn't deal with quality of service at all.
- There's no guarantee that the network you send traffic to even supports “Quality of Service.”
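For what it's worth, on systems that do support the IP_TOS option, setting the field looks something like the sketch below. Whether any router along the path honors the marking is another matter entirely:

```c
#include <netinet/in.h>
#include <netinet/ip.h>
#include <sys/socket.h>

/* Ask the local stack to mark outgoing packets on this socket with
   the "low delay" Type of Service bits.  Even if the call succeeds,
   routers along the path are free to ignore the field. */
int set_low_delay(int sock)
{
  int tos = IPTOS_LOWDELAY;
  return setsockopt(sock, IPPROTO_IP, IP_TOS, &tos, sizeof(tos));
}
```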
It's been found over and over again that it's cheaper and easier to make faster routers than to implement quality of service across the network.
So I'm of the opinion—let the big boys waste their money on smart networks. Let them learn the hard way that a stupid network is, in the long term, better.
And that's why I'm not really concerned about network neutrality. The market will sort it out.
[1] I tried finding an actual number for the average number of hops across the Internet but the only numbers I could find (about 15) were for 1999–2000 with nothing more recent. Given the growth since then, I'll play it safe and say the average is no more than 20. [back]
[2] And it wouldn't surprise me if I'm underestimating the number of packets per second by an order of magnitude. [back]
Update on Friday, March 30th, 2007
Added a link to Wikipedia, and a small addendum to this entry.