Utilization

One of my biggest pet peeves over the years has been utilization or capacity reporting. I firmly believe that in order to figure out how to transform an environment into a more efficient one, you first have to know what you’ve got. Over the years I’ve walked into customer after customer, or dealt with admins or peers when I was on the other side of the table, who couldn’t tell me how much storage they had on the floor, how it was allocated, or what the utilization of their servers was. Part of the problem is that calculating utilization is one of those problems where perspective is reality: a DBA will have a much different idea of storage utilization than a sysadmin or a storage administrator. And depending on how these various stakeholders are incented to manage the environment, you will see a great disparity in the numbers you get back. It may sound like the most “no duh” advice ever given, but defining utilization metrics for each part of the infrastructure is a necessary first step. The second step is publishing those definitions to anyone and everyone and incorporating them into your resource management tools.
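To make that concrete, here’s a minimal sketch (in Python, with made-up numbers) of how the same capacity can report very different “utilization” depending on whose definition you measure from. Every figure below is an illustrative assumption, not data from a real environment.

```python
# Illustrative only: hypothetical numbers for one volume, showing how
# "utilization" changes depending on which layer you measure from.

raw_tb       = 10.0   # raw capacity behind the volume (before RAID)
usable_tb    = 7.5    # left after RAID and spares
allocated_tb = 6.0    # LUNs actually presented to the host
fs_used_tb   = 3.0    # filesystem blocks consumed
db_used_tb   = 1.8    # rows and indexes actually held in the database files

print(f"Storage admin view: {allocated_tb / usable_tb:.0%} of usable capacity allocated")
print(f"Sysadmin view:      {fs_used_tb / allocated_tb:.0%} of allocated capacity in use")
print(f"DBA view:           {db_used_tb / fs_used_tb:.0%} of the datafiles holding real data")
print(f"End to end:         {db_used_tb / raw_tb:.0%} of raw capacity doing real work")
```

Until those definitions are written down and published, each stakeholder will report the number that makes their layer look best.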

Stephen Foskett has a great breakdown of the problem in his post “Storage Utilization Remains at 2001 Levels: Low!”, but I’d like to expand on his breakdown to include database utilization at the bottom of his storage waterfall. I often use the “waterfall” to explain utilization to our customers. In this case knowledge truly is power, and as Chris Evans mentions in his post on “Beating the Credit Crunch”, there is free money to be had in reclaiming storage in your environment.
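Here’s one way to picture the waterfall as simple arithmetic. The layer names and percentages below are my own illustrative assumptions, not Foskett’s figures, but they show how quickly raw capacity erodes on the way down to the database.

```python
# A minimal sketch of the utilization "waterfall": each layer keeps only a
# fraction of the capacity handed down from the layer above it.

waterfall = [
    ("Raw capacity",         1.00),
    ("After RAID and spares", 0.75),
    ("Allocated to hosts",   0.80),
    ("Filesystem in use",    0.50),
    ("Database data held",   0.60),
]

remaining_tb = 1000.0  # start with roughly 1 PB of raw disk, purely for illustration
for layer, fraction in waterfall:
    remaining_tb *= fraction
    print(f"{layer:<22} {remaining_tb:8.1f} TB effectively in use")
```

Run the numbers for your own environment and the “free money” Chris talks about tends to show up in the bottom two rows.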

It’s not just about knowing which stale snapshots are sitting out on the SAN; knowing how many copies of the data exist is just as imperative. One customer had a multi-terabyte database that was replicated to a second site, with two full exports on disk (also replicated), a BCV at each location, and backups to tape at each site. That’s 8 copies of the data on their most expensive disk. Now I’m all for safety, but that’s belt, suspenders and a flying buttress holding up those trousers. A full analysis of utilization needs to take these sorts of outdated/outmoded management practices into account for a full understanding of what is really on the floor.
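For the curious, a quick tally of that example. The grouping below is my own reading of the copy sprawl described above, so treat it as a sketch rather than the customer’s actual inventory.

```python
# Tally of the copy sprawl in the customer example; "on_disk" marks copies
# that live on array storage rather than tape.

copies = [
    ("Primary database",           1, True),
    ("Replica at the second site", 1, True),
    ("Full exports on disk",       2, True),
    ("Replicated exports",         2, True),
    ("BCV at each location",       2, True),
    ("Tape backups at each site",  2, False),
]

on_disk = sum(count for _, count, disk in copies if disk)
total   = sum(count for _, count, _ in copies)
print(f"{on_disk} copies on expensive disk, {total} copies overall")
```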

Old paradigms regarding the amount of overhead at each layer of the utilization cake need to be updated. The concept of 15% – 20% overhead for the environment is a great concept, until that environment gets to be multi-petabyte; then you’re talking about hundreds of terabytes of storage sucking up your power and cooling. Of course storage virtualization is supposed to solve problems like this, but proper capacity planning and a transparent method of moving data between arrays and/or systems, with realistic service levels in place, can address it just as effectively.
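A quick back-of-the-envelope makes the point; the 2 PB environment below is an assumed example, not a specific customer.

```python
# Back-of-the-envelope: what a flat 15-20% overhead reservation costs once the
# environment reaches multi-petabyte scale.

environment_pb = 2.0  # assumed environment size for illustration
for overhead in (0.15, 0.20):
    reserved_tb = environment_pb * 1000 * overhead  # decimal TB per PB
    print(f"{overhead:.0%} overhead on {environment_pb:.0f} PB = {reserved_tb:.0f} TB sitting idle")
```

That’s 300 – 400 TB of spinning disk reserved “just in case,” all of it drawing power and cooling.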
