Thursday, May 22, 2003
This is a new one …
Ring.
Ring.
I pick up the phone. “Hello? … Hello?”
“Hello. This is the Obnoxico Corporation. Please hold and one of our representatives will get to you as soon as possible. We are sorry for the delay. Please hold,” said the recorded voice.
Wait a second … I thought. They called me! I didn't call them! Why am I on hold?
“If you wish, you can hang up and we'll t—” Click.
Idiots.
The Google Cluster
Few Web services require as much computation per request as search engines. On average, a single query on Google reads hundreds of megabytes of data and consumes tens of billions of CPU cycles. Supporting a peak request stream of thousands of queries per second requires an infrastructure comparable in size to that of the largest supercomputer installations. Combining more than 15,000 commodity-class PCs with fault-tolerant software creates a solution that is more cost-effective than a comparable system built out of a smaller number of high-end servers.
Via the Google Weblog, Web Search for a Planet: The Google Cluster Architecture
This is a good introduction to the Google Cluster, the 15,000-plus machines (as of the paper's writing, I'm sure) that make up the Google website and give it its incredible performance.
One of the ways they do this is by having a series of clusters (of a few thousand machines each) located around the world to handle queries more or less locally; I did a DNS query from the Facility in the Middle of Nowhere for www.google.com and got 216.239.51.99, while the same query from a machine in Boston returned 216.239.37.99. Some other interesting aspects: they forgo hardware reliability in favor of software reliability, they don't use the fastest hardware available but the components that give the best price/performance ratio, and they rely on lots of commodity hardware.
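If you want to repeat the lookup yourself, a few lines of Python will do it (the addresses you get back today will of course differ from the 2003 ones above, since Google's DNS answers change over time and by location):

```python
import socket

# Resolve www.google.com through the local resolver.  Google's
# DNS-based load balancing hands back an address for a nearby
# cluster, so the answer depends on where you ask from.
try:
    print(socket.gethostbyname("www.google.com"))
except socket.gaierror:
    print("no DNS available from here")
```

Run it from two machines in different parts of the world and you should see different addresses, which is exactly the cluster-selection trick the paper describes.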
The paper doesn't go into deep technical details, but it does give a nice overview of how their system is set up.