20040819

$$ per CPU cycle and better, faster, lighter applications

Reading about Gmail prompted me to do a rough cost-per-CPU-cycle calculation for some of the systems I have been using.

Take Google as one extreme example. Say it has 100,000 servers running a 1GHz clock on average, that networking overhead eats around 10% of the CPU cycles, and that each server costs $100 a year to run (estimated: mostly power consumption, support and investment cost; the cost of building the system in the first place is far higher). That works out to 9MHz per dollar per year, or 0.00001111 cents per cycle per year. The figure can be that low because of the scale of the system, the low initial cost of the units and the low support cost per unit.
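
For the record, here is that arithmetic as a minimal Python sketch; the clock, overhead and running cost per server are just the guesses above, not real Google numbers.

    def usable_hz_per_dollar(clock_hz, cost_per_year, overhead=0.0):
        """Usable clock (after overhead) bought by one dollar per year."""
        return clock_hz * (1.0 - overhead) / cost_per_year

    def cents_per_cycle_per_year(clock_hz, cost_per_year, overhead=0.0):
        """Annual cost, in cents, of one Hz of usable clock."""
        return 100.0 * cost_per_year / (clock_hz * (1.0 - overhead))

    # Google-style guess: 1 GHz servers, ~10% networking overhead, ~$100/year each.
    print(usable_hz_per_dollar(1e9, 100, overhead=0.10) / 1e6)  # ~9 MHz per dollar per year
    print(cents_per_cycle_per_year(1e9, 100, overhead=0.10))    # ~0.0000111 cents per cycle per year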

Next let's take my home PC: one CPU with a 333MHz clock. It cost $3000 about 8 years ago, which equates to roughly $500 a year (assuming a 10-year life plus the 6% I could have got in the bank). Doing the same calculation you get 666kHz per dollar per year, or 0.00015 cents per cycle per year. I assume my support is free in this case.
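
A quick sketch of where that $500 a year comes from, assuming straight-line depreciation over the 10-year life plus the roughly 6% a year the purchase price could have earned in the bank:

    # Rough annualised cost of the old PC (assumptions as stated above).
    purchase_price = 3000.0   # dollars, about 8 years ago
    life_years = 10
    bank_rate = 0.06          # what the money could have earned instead

    annual_cost = purchase_price / life_years + purchase_price * bank_rate
    print(annual_cost)            # ~480, call it $500 a year
    print(333e6 / 500 / 1e3)      # ~666 kHz per dollar per year
    print(100 * 500 / 333e6)      # ~0.00015 cents per cycle per year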

How about an upgrade? One CPU with a 2.4GHz clock at $1000 works out to around $200 a year. That gives you 12MHz per dollar per year, or 0.0000083 cents per cycle per year. Even my wife might be convinced by that one.

Now an example of a lightweight dot com (circa 2000): ten twin-processor servers with 1.8GHz CPUs. From memory this was costed at $2,300 per server per year, mostly support and infrastructure charges. Tap, tap, tap... 1.5MHz per dollar per year, or 0.000064 cents per cycle per year.

A development server: one 8-CPU 800MHz box from a major brand with a double-gold SLA. This typically comes out at $30,000 a year (support is a big earner). That equates to 213kHz per dollar per year, or 0.00047 cents per cycle per year.
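
Running all five platforms through the same sum gives the summary list below. A minimal Python sketch, using the rough clock totals and yearly costs already stated (the dot-com figure actually lands nearer 1.57MHz before rounding):

    # kHz per dollar per year for each platform (rough figures from the text).
    platforms = {
        "New PC":     (2.4e9,          200),    # 2.4 GHz, ~$200/year
        "Google":     (1e9 * 0.9,      100),    # 1 GHz less ~10% overhead, ~$100/year per server
        "Dot Com":    (10 * 2 * 1.8e9, 23000),  # 10 twin 1.8 GHz servers at ~$2,300/year each
        "Old PC":     (333e6,          500),    # 333 MHz, ~$500/year
        "Dev Server": (8 * 800e6,      30000),  # 8 x 800 MHz, ~$30,000/year
    }

    for name, (total_hz, dollars_per_year) in platforms.items():
        print(f"{name:<11}{total_hz / dollars_per_year / 1e3:>8.0f} kHz per dollar per year")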

Which means that for every $1 you spend per year you get (in kHz):
12000 New PC
9000 Google
1500 Dot Com
666 Old PC
213 Dev Server

Now take a set of programs you might run. Each one consumes a certain percentage of the CPU when in operation, which you can roughly translate into an 'in use' figure in MHz of cycles required. For example, an application that uses 100MHz continuously would cost a new PC user $8 per year to run, Google $11, the dot com $67, me $150 and a dev server $470.
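
Those running costs are just the 100,000kHz the application needs divided by each platform's kHz-per-dollar rate; a small Python sketch using the figures above:

    # Annual cost of an application that continuously uses 100 MHz of cycles.
    khz_per_dollar = {"New PC": 12000, "Google": 9000, "Dot Com": 1500,
                      "Old PC": 666, "Dev Server": 213}
    app_khz = 100e3  # 100 MHz expressed in kHz

    for name, rate in khz_per_dollar.items():
        print(f"{name:<11} ${app_khz / rate:,.0f} per year")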

Then take your in-house application running on your business systems server (not unlike a dev server). In production, using classic elephant techniques, you might have a 5GHz application. That's $23k per year on your monolithic server, or $555 at Google rates. Using better, faster, lighter techniques you might have a 3GHz application instead, with a proportional saving in cost.
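
The same division, sketched in Python for the heavy and light versions at the dev-server and Google rates (the 5GHz and 3GHz workloads are the assumptions above):

    # A 5 GHz 'elephant' versus a 3 GHz lighter application,
    # priced at the dev-server and Google rates (kHz per dollar) worked out earlier.
    rates = {"Dev Server": 213, "Google": 9000}

    for label, app_khz in [("heavy (5 GHz)", 5e6), ("lighter (3 GHz)", 3e6)]:
        for platform, rate in rates.items():
            print(f"{label:<14} on {platform:<10} ${app_khz / rate:>7,.0f} per year")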

When you scale up your platform the unit costs obviously come down and the cycles per dollar rise, because the fixed costs are spread over more hardware. For example, having two monolithic servers pushes the 100MHz cost down to around $350 per year. So if the 5GHz application became a 10GHz application and you added another server, the delta cost is around $12k. If you were running the lighter version you could stick with one server, and the delta cost for doubling requirements would be $14k, or around $6k if you went for two anyway (but don't forget the $100k capital investment).
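
Working those deltas through with the round rates above (roughly $470 per 100MHz per year on one server, $350 on two, both rough assumptions), a Python sketch:

    # Delta cost when the workload doubles, heavy versus light.
    per_hz_one = 470 / 100e6   # dollars per Hz per year on one server
    per_hz_two = 350 / 100e6   # dollars per Hz per year on two servers

    heavy_now, heavy_later = 5e9 * per_hz_one, 10e9 * per_hz_two
    light_now = 3e9 * per_hz_one
    light_later_one, light_later_two = 6e9 * per_hz_one, 6e9 * per_hz_two

    print(f"heavy, add a server:  delta ~${heavy_later - heavy_now:,.0f}")      # ~$12k
    print(f"light, stay on one:   delta ~${light_later_one - light_now:,.0f}")  # ~$14k
    print(f"light, go to two:     delta ~${light_later_two - light_now:,.0f}")  # just under $7k

With rates this rough the last delta lands a little above the $6k quoted, but the shape of the argument is the same.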

How about an email spam-filter server, something an ISP might consider putting in? Assuming a fast pipe, we are talking about a 20GHz application, which costed on the dot-com model comes to $13k per year. But what if you used a peer-to-peer architecture for the application? Costed at the new-PC rate it is about $1,670 per year. So even if you compensated the PC users for their lost cycles, and even their bandwidth, it is still cheaper.
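
The same comparison in Python (the 20GHz workload and both rates are the rough figures above):

    # A 20 GHz spam-filtering workload priced two ways:
    # centrally at the dot-com rate, or peer-to-peer across modern home PCs.
    app_khz = 20e6          # 20 GHz in kHz
    dot_com_rate = 1500     # kHz per dollar per year
    new_pc_rate = 12000     # kHz per dollar per year

    print(f"central (dot com):      ${app_khz / dot_com_rate:,.0f} per year")  # ~$13k
    print(f"peer-to-peer (new PCs): ${app_khz / new_pc_rate:,.0f} per year")   # ~$1,700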

To summarize:

  • Unless you are Google, server-side processing has to be lightweight.

  • Client-side processing is cheap, therefore peer networks are cheap.

  • Moore's law will soon make even the time spent to write this blog seem costly.

  • I must try not to write rambling blogs.



1 comment:

straun said...

Yes, Jim Gray does a much better job of explaining the wider picture. Thanks for the link.

It was never my intention to cover bandwidth, mainly because it is quite easy to price: there is an established market for it and a going rate, and the more you buy, the lower the unit cost.

One last thought. The Windows operating system and the MS Office suite get heavier with every release (thoughts of South Park come drifting back). That creates pressure to buy more hardware, which creates the demand that allows companies like Intel to charge more for their products and recoup their R&D costs, which in turn means they can spend more on R&D to stay ahead of the competition. So if we had all moved to a lighter operating system in 1999 (RH5?), would we now have 4GHz processors?