
As a buyer of IaaS (Infrastructure as a Service) you are renting computing capability. You pay by the hour, or from some vendors by the minute, for the services you use. You get a low-cost, highly flexible service that lets you precisely match the capacity you purchase to your actual needs. And when you are done, you shut it off and stop paying until the next time you need it. Sounds like the proverbial utility (water, electricity and so on), doesn't it?
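To make the pay-per-use point concrete, here is a back-of-the-envelope sketch. The hourly rate and usage hours are made-up illustrative numbers, not any vendor's actual pricing:

```python
# Back-of-the-envelope comparison of pay-per-use vs. always-on capacity.
# The rate and hours below are illustrative assumptions, not vendor pricing.

HOURLY_RATE = 0.10          # assumed $/hour for one rented server
HOURS_PER_MONTH = 730       # ~24 h/day for a month

def monthly_cost(hours_used, rate=HOURLY_RATE):
    """Pay only for the hours the capacity is actually on."""
    return hours_used * rate

always_on = monthly_cost(HOURS_PER_MONTH)     # never shut off
business_hours = monthly_cost(8 * 22)         # ~8 h/day, 22 workdays

print(f"Always on:      ${always_on:.2f}/month")
print(f"Business hours: ${business_hours:.2f}/month")
print(f"Savings:        {1 - business_hours / always_on:.0%}")
```

Shutting capacity off outside business hours cuts the bill by roughly three quarters in this toy scenario, which is exactly the "turn it off and stop paying" utility behavior described above.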

It’s some utility! Proponents claim overall information technology (IT) performance improvements of 30% to 40%, in addition to cost savings of 40%, all from using cloud. Quite frankly, my clients just don’t believe it until they understand that these new cloud services come from an approach that differs not just in degree (the same way, only better) but in kind (a whole new way). Let’s look under the hood to grasp how different and powerful cloud economics are compared with the way most organizations provide computing infrastructure for their IT solutions today.

Cloud infrastructure services are produced by a different kind of animal than the one you probably picture when people talk about computers. They come from data centers (DCs) called Hyper-scale (or Web-scale), as opposed to an Enterprise DC like the one probably in your company. There is a stark difference in absolute scale, design and operations between the two models. Let’s take the tour.

The first thing you can’t help but notice is that these babies are big: hundreds of thousands of square feet, literally football fields in size. Plus, they house tens of thousands of servers, while your company’s DC might have only hundreds to maybe a few thousand. You’ll see that this massive scale plays a big part in how they can do things differently. (By the way, a “server” is really just a small computer, like your laptop. Today’s data centers link many of them to multiply their processing power. Cloud takes this a step further.)

When we think of computing, we think of the machines, so let’s start with the different approach there. A Hyper-scale DC assumes equipment will fail. It is designed from the foundation up to anticipate this and use it as leverage. Operators buy equipment at very low prices, often custom made for them. When machines fail, they are left in place until enough are kaput that it is economically sensible to remove them. Then they are trashed. No one is dispatched to repair them, as in a typical Enterprise DC. The result is low-cost equipment to start with, plus labor savings. The equipment part is straightforward: buy low and don’t worry, since with such a large volume of units, plenty are still running. Now let’s talk more about the labor.

Labor accounts for 30% to 40% of a traditional data center’s operating cost. After the machines themselves, it is the biggest cost. Surprised that such advanced technology is so labor intensive? Until cloud, it had been that way since electronic computing got rolling in the 1950s. Not doing dispatch and repair cuts down some of it, but how do you really get at the rest? When a machine goes down, doesn’t the application stop? Ah, good question: that’s what happens in your company’s DC, right?

Here is more of the secret sauce that makes cloud different: automated management, or machines running the machines. In a cloud DC, applications are “decoupled” from the underlying hardware. This means that when, as expected, a machine fails, the application automatically floats through the huge cluster of machines over to another healthy machine and continues running. (The jargon is “Automated Failover.”)
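The failover idea can be sketched in a few lines. This is a toy illustration, not any real cloud scheduler; the class and function names are made up for the example:

```python
# Minimal sketch of automated failover: software detects a failed
# machine and drains its applications onto a healthy one, with no
# technician dispatched. Names here are illustrative assumptions.

class Machine:
    def __init__(self, name):
        self.name = name
        self.healthy = True
        self.apps = []

def failover(machines):
    """Move apps off any unhealthy machine onto the least-loaded healthy one."""
    healthy = [m for m in machines if m.healthy]
    if not healthy:
        return  # nothing to fail over to
    for m in machines:
        if not m.healthy and m.apps:
            target = min(healthy, key=lambda h: len(h.apps))
            target.apps.extend(m.apps)
            m.apps = []  # failed box is left in place, simply drained

a, b = Machine("a"), Machine("b")
a.apps = ["billing", "web"]
a.healthy = False      # the expected failure happens
failover([a, b])
print(b.apps)          # -> ['billing', 'web']
```

The economics follow from the loop running automatically across tens of thousands of machines: the failed box stays racked and dark, and the work simply lives somewhere else.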

Another feature of this “machines running the machines, not people” approach is that applications can ramp the amount of compute capability they need up or down. You get that amazing turn-the-spigot flexibility, and again it is the application that does it, not people. (Two buzzwords apply here: “Load Balancing” and “Automatic Scaling.”) You can only pull off both of these features economically when you have lots of machines available. You need scale, hence the massive server counts in these cloud DCs.
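Automatic scaling boils down to a simple control rule the application applies to itself. A minimal sketch, with an assumed (made-up) per-server capacity threshold:

```python
# Toy autoscaler: the application ramps its server count up or down to
# track load, with no humans in the loop. The per-server capacity is an
# illustrative assumption.
import math

TARGET_PER_SERVER = 100  # requests/sec one server comfortably handles

def desired_servers(load_rps, minimum=1):
    """Scale out when load rises, scale back in when it falls."""
    return max(minimum, math.ceil(load_rps / TARGET_PER_SERVER))

for load in (50, 450, 1200, 80):
    print(load, "rps ->", desired_servers(load), "servers")
```

Note why scale matters here: the rule only works if, when the answer jumps from 5 servers to 12, seven spare machines actually exist to claim, which is exactly what a Hyper-scale DC guarantees.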

You now see how the big items of hardware (~35% of annual DC costs) and labor (~30% to ~40%) are reduced while performance and service features improve. Power and cooling are next (~5% to ~10%). Data centers are energy hogs: U.S. data centers used 91 billion kilowatt-hours of electricity last year, “enough to power all of New York City’s households twice over and growing.” But Hyper-scale DCs use just half the energy an ordinary DC does, and despite their size they account for only 5% of that 91 billion kWh. They get there by letting the computer rooms run hotter than tradition allows and by cooling with much lower-power approaches. The hardware is also upgraded (or “refreshed”) more often, to take advantage of steadily increasing computing power at lower cost and lower power draw.
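Those power figures are easy to sanity-check with the numbers quoted above (the 2x factor is the efficiency claim from the text, not an independent measurement):

```python
# Sanity-checking the power figures quoted in the text: hyper-scale DCs
# account for ~5% of the 91 billion kWh U.S. total, at roughly half the
# energy an ordinary DC would use for the same work.

US_TOTAL_KWH = 91e9
HYPERSCALE_SHARE = 0.05
EFFICIENCY_FACTOR = 2       # "half the energy an ordinary DC does"

hyperscale_kwh = US_TOTAL_KWH * HYPERSCALE_SHARE
ordinary_equivalent_kwh = hyperscale_kwh * EFFICIENCY_FACTOR

print(f"Hyper-scale usage:          {hyperscale_kwh / 1e9:.2f} billion kWh")
print(f"Same work in ordinary DCs:  {ordinary_equivalent_kwh / 1e9:.2f} billion kWh")
```

In other words, the entire hyper-scale fleet draws only about 4.6 billion kWh, and doing that same work the traditional way would roughly double it.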

Cloud services come from data centers where hardware, labor (operations), and power and cooling are all radically cheaper, enabled by massive scale and automation. Here is a little summary for your reference:

 

| Data Center Feature | Enterprise | Hyper-scale (Cloud) |
| --- | --- | --- |
| Scale: number of servers; floor space | Up to a few thousand servers; hundreds to thousands of square feet | More than 15,000 servers; hundreds of thousands of square feet |
| Operating approach | Apps run on specific servers; lots of manual work; when a server stops, a tech is dispatched | Work is spread over many machines with automated load balancing, scaling and failover; highly automated; plan on machines failing |
| Labor: ratio of staff to servers; labor as % of total cost | 1 to 100–200; 30% to 40% | 1 to 20,000; minor |
| Power and cooling | One of the top expenses; air conditioning; hot and cold aisles | Evaporative cooling; 2x the efficiency of traditional DCs |
| Hardware approach | Standard offerings from traditional vendors (HP, IBM, Dell, etc.); heterogeneous (“one of everything”); relatively expensive; repair and return to service | No-frills custom designs from Taiwan and China; highly standardized; very low cost; dispose (not repair) when a unit fails |

This is not your old information technology anymore. This innovative and disruptive approach yields the information utility. Speaking of utilities: in the early days of electricity, every company had its own power plant, and there were VPs of Electricity responsible for ensuring the juice was there to support the enterprise. I am not saying that cloud is setting the same stage for information processing, but it does make you think, doesn’t it?
