The Storage Economist: 2012 Archive

    Back to 1999 CAPEX Ratio

    by David Merrill on Dec 20, 2012


    I like looking at trends. Let’s take a look at one storage economics trend—the ratio of CAPEX to OPEX in the total cost of storage ownership.


    In 1999, we saw a pattern that the purchase cost was about 50% of the storage cost of ownership. Disk was relatively expensive (about one million dollars per TB), so when we analyzed the total cost of a GB back then, we saw that CAPEX was the major contributor. This time period was before the shift to SAN and enterprise-pooled storage; disk was local to the server or mainframe.

    When we look at TCO circa 2005, we notice that this ratio had changed radically. The CAPEX portion went from 50% to about 20% of the TCO. The total cost was reduced by 80% over this 6-year period, and the resulting change in cost ratios was driven by:

    • A lower cost of disk infrastructure
    • Higher costs of labor, power, floor space needed to operate these environments
    • Levels of waste that created operational and financial problems; poor utilization increased overall OPEX costs
    • Post 9/11 storage architectures placing more emphasis on data protection and disaster protection
    • Migrations becoming much more complex and expensive due to a high rate of growth in storage assets

    Now, let’s take a look at another recent trend. After seeing very high OPEX costs (as a percent of TCO), IT organizations started investing and seeing results from:

    • Virtualization of the storage systems, volumes and servers
    • Tiering of data, thin provisioned volumes
    • Virtualization-assisted migration
    • Auto-archive and backup improvements


    Figure 1

    The result we see is that with effective and efficient investments (technology, best practices, behavior changes), IT departments tend to see their storage economics return to CAPEX being 50% of TCO. Again, the total cost has been reduced by over 90% since 1999, but the CAPEX-to-OPEX ratio has returned to the 13-year-old ratio. This is both good and bad news.
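    As a back-of-the-envelope check on these ratios, the sketch below derives the implied absolute CAPEX and OPEX per TB for the three periods described above. It treats the roughly $1M/TB purchase price of 1999 as half of a ~$2M/TB total cost; all dollar figures are illustrative assumptions for the arithmetic, not measured data.

```python
# Back-of-the-envelope model of the CAPEX/OPEX split across the three periods
# discussed above. Dollar figures are illustrative assumptions, not measurements.

def split(tco_per_tb, capex_share):
    """Return (CAPEX, OPEX) per TB given a total cost and a CAPEX share."""
    capex = tco_per_tb * capex_share
    return capex, tco_per_tb - capex

tco_1999 = 2_000_000                  # ~$1M/TB purchase at a ~50% CAPEX share
tco_2005 = tco_1999 * (1 - 0.80)      # total cost down ~80% from 1999
tco_2012 = tco_1999 * (1 - 0.90)      # total cost down ~90% from 1999

for year, tco, share in [(1999, tco_1999, 0.50),
                         (2005, tco_2005, 0.20),
                         (2012, tco_2012, 0.50)]:
    capex, opex = split(tco, share)
    print(f"{year}: TCO ${tco:>11,.0f}/TB   CAPEX ${capex:>10,.0f}   OPEX ${opex:>10,.0f}")
```

    Run this way, the model makes the point that the 2005 ratio shift came less from OPEX growing than from CAPEX collapsing faster than OPEX, and the return to 50/50 reflects OPEX finally catching up.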


    Figure 2

    The good news is that OPEX costs can be, and have been, reduced over the years. Virtual, thin and tiered storage has a lower cost of ownership. Migration costs are dramatically reduced with virtualization. Environmental costs are much lower on a per-TB basis. Data protection (backup and DR) processes have improved significantly, especially in regards to reducing the cost of risk. Economies of scale have reduced local and long-distance circuit costs. People, process and automation have also matured in most organizations. Clearly, having a higher rate of data in the TCO denominator helps to drive down unit costs, but we cannot altogether ignore the many improvements achieved over the last 10-15 years.

    The bad news is that we don’t know where to go from here to continue reducing costs. We have to ignore for the moment the results in Figure 2 and focus on the cost allocation percentages in Figure 1. We have to assume that OPEX costs will continue to improve, but the biggest hurdle for total cost reduction is now the cost of capitalization. Looking at Figure 1, the CAPEX share stays about the same due in large part to:

    • Buying more than we need
    • Buying ahead for a “rainy day”
    • Overall poor utilization
    • Long depreciation terms
    • Project-based (not shared) infrastructure procurement
    • Throwing more storage capacity at a problem rather than spending labor (headcount) to properly manage the storage already in place, which adds to excess capacity
    • The explosion of copies for backup, snapshots, DR, data mining, development test, etc

    The way to continue reducing the TCO of storage is to challenge our current CAPEX traditions and ownership models. We need to take a more radical view of depreciation, and ask why we allow a tax-accounting, book-value methodology to dictate how and when we upgrade storage infrastructures in order to deliver cost reduction. I believe that in the near future, those IT centers that have exhausted most (if not all) technical, operational, architectural and behavioral changes to reduce costs will start to look at alternatives to CAPEX to acquire and store data. I have covered these methods previously in these blogs:

    Challenging the Tradition of Depreciation for IT Ownership

    An 11-Step Program

    Alternative Acquisition Methods

    Two Economically Superior Customers – In One Day!

    by David Merrill on Dec 12, 2012


    I am working in Sweden this week, meeting with various customers and speaking at the Hitachi Information Forum in Stockholm. I was fortunate to visit 2 customers, and was very surprised to find that they embrace some key enablers that reduce unit costs of storage:

    1. Consolidation of frames, data centers
    2. A virtualized storage environment
    3. Multi-tiered storage
    4. Virtual volumes
    5. Both employ a Storage Services Catalog
    6. Unit cost ratios for the tiers are very close to the 7:3:1 ratio (a short sketch of what this implies follows this list)
    7. And they do chargeback!
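    For readers unfamiliar with the 7:3:1 figure, it describes the relative unit cost of tier 1, tier 2 and tier 3 capacity. The sketch below shows how that ratio turns into a blended cost per TB for an assumed capacity mix; the tier 1 unit cost and the 20/30/50 split are hypothetical values used only to illustrate the arithmetic.

```python
# Blended cost per TB under a 7:3:1 tier cost ratio.
# The tier 1 unit cost and the capacity mix are hypothetical.

tier_ratio = {"tier1": 7, "tier2": 3, "tier3": 1}
tier1_cost_per_tb = 7_000                                     # assumed $/TB-year for tier 1
capacity_mix = {"tier1": 0.20, "tier2": 0.30, "tier3": 0.50}  # fraction of capacity per tier

# Scale the ratio so tier 1 lands on the assumed tier 1 unit cost.
unit_cost = {t: tier1_cost_per_tb * r / tier_ratio["tier1"] for t, r in tier_ratio.items()}

blended = sum(unit_cost[t] * capacity_mix[t] for t in tier_ratio)
print({t: f"${c:,.0f}/TB" for t, c in unit_cost.items()})
print(f"Blended cost: ${blended:,.0f} per TB")
```

    The practical point is that moving capacity down-tier changes the blended number far faster than negotiating the tier 1 price, which is exactly what a catalog plus chargeback encourages consumers to do.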

    I posted a blog a few months ago that referenced 11 steps in a 4-phased approach to reduce unit costs. Interestingly, these 2 customers are doing many of these 11 steps, distributed fairly evenly across all 4 phases that I outlined:

    • Basics have been done, and are constantly being implemented (consolidation, tech refresh, leasing)
    • Age of virtualization (storage, some servers) with thin volumes and tiered storage definitions. Interestingly, both had homogeneous storage (each had a different primary vendor), so it seems to me that they may be missing out on an opportunity to see price reductions through competitive pricing
    • Behavior change (chargeback, catalog)
    • Challenge ownership – one is leasing, the other is a step lease that is trying to look like CoD (but it really isn’t because they cannot flex down)

    Very few customers have the combination of catalog and chargeback. My estimate is that less than 3-5% of IT operations use both of these methods to control configurations and costs. When you add the combination of catalogs, chargeback and a virtualized storage environment, there are some amazing results in efficiency, total cost and IT flexibility. To meet with 2 customers in one day that are doing the better part of all 4 phases is remarkable.

    There are cost-reducing investments and changes that can still be made in these organizations:

    • Archive tiers can be inserted to reduce the backup volume and provide searchable content in an object store.
    • Backup is a problem for both, with many ways to address their costs.
    • A migration factory is needed (one client’s virtualization choice does not help with array-based migration from older arrays from the same vendor). One company is virtualizing with HDS and the other is not; the limits of 1st-generation virtualization can be pretty clear vs. 5th-generation virtualization (in the controller) from HDS.
    • Both need more flexibility to match performance and availability needs with costs within their catalog. Their customers or constituents are constantly comparing and contrasting prices and services with cloud providers.
    • Both are looking at external storage managed services (their labor rates are relatively high compared to other unit costs).

    The good news is there are companies that implement what I have preached and witnessed over the years. Even with these capabilities, there are still options available to help reduce costs even more.

    To end on a lighthearted note as I reflect on my experience in Sweden this week: one thing that always makes me smile when clearing baggage claim at the ARN airport in Stockholm is the series of large, poster-sized photos of Sweden’s most famous personalities. I take the order somewhat seriously (the closer you get to the exit, the more important the person seems to be). At the exit is a picture of the King and Queen of Sweden, but they are not the most famous. Can you guess the last picture in the famous-Swedes series?


    Alternative Acquisition Methods

    by David Merrill on Nov 30, 2012


    I received a comment on a recent posting—Comparing all the costs in a 1-cent-a-GB cloud offering—and chose to make this the topic of a new blog. Our reader, Javed, asks: “How will HDS compete financially with latest EMC storage array offerings which are based on the lease model instead of total ownership model?”


    As I mentioned in a previous entry, I believe that capitalization of storage assets will become less common over the next 4-5 years. The reasons for moving away from the purchase/depreciate model that we have been using in the IT industry for 50 years include:

    • Continued pressure on CAPEX budgets
    • Strategically taking some assets off the balance sheet
    • Providing greater flexibility to adopt new technology options without an artificial length of time that we have to keep older assets
    • Better total cost predictability
    • Faster and cheaper migration and transformations into new architectures
    • The flexibility to scale up and down without having to hold reserve capacity on-hand
    • Changes in the behaviors of IT consumers in the organization to pay for what is used (or allocated) and better managed storage and data protection costs

    So if traditional CAPEX is diminishing, does that mean we have to move to a cloud?


    The cloud is a type of infrastructure that can be local, remote or a hybrid mix. The replacement of CAPEX with pay-as-you-go models will introduce new varieties of options to use and consume storage resources. The move away from CAPEX does not always mean that we go to a lease instead. My next blog entry will include a simple table of some options that exist now (from most vendors) that allow you to consider alternative ownership models when you want the data and storage to be local, secured and under physical control.

    HDS does offer pay-as-you-go and managed service offerings; in fact, there was a good write-up a few weeks ago on this subject. We have to remember that the underlying technical and operational architecture will play a significant role in the economic value of these new types of acquisition models.

    Residency Services

    Remote Operational Services Portfolio

    More on this topic and the comparison table coming soon.

    Economic Cliff Diving

    by David Merrill on Nov 12, 2012


    There’s a lot in the news about countries around the world heading towards some type of fiscal cliff. As we approach the end of the calendar year, many governments are struggling with debt, spending limits, tax increases, austerity programs, etc. Fiscal desperation may not be the same in the private sector, but we are approaching a very real storage economics cliff. Let me explain.


    Around the world, IT spending is projected to be flat or down. I see Gartner and other reports that predict a slight Y/Y rise in 2013, but many of the customers that I meet face to face indicate that storage and infrastructure budgets will be down next year.

    Next, we have a data tsunami coming, or already building, of unstructured and semi-structured data (sometimes referred to as big data). Lots of new projects are on the drawing board to extract and refine this data for various commercial benefits. I have seen estimates ranging from 30-80% growth in this type of data, but in face-to-face conversations, people tend to describe growth at around 40-50%. This is still huge growth.

    So how can you reconcile flat or declining budgets with 40% growth in data? The answer is to attack unit costs. I posted a blog a few weeks ago about a 4-phase (11-step) view of unit cost reduction. In that blog there are 4 phases, or families of ideas, to reduce unit costs. Depending on the rate at which you are running toward the edge of your storage economic cliff, there are some short-term areas that you should focus on. Two areas really stand out:

    1. Virtualization
    2. Pay-as-you-grow consumption

    Storage virtualization (enterprise-class, heterogeneous, and controller-based) can have positive near-term impact with major cost reductions in migration, re-mastering and capacity reclamation. If you could reclaim 30-50% of usable capacity from your existing infrastructure, you could obtain a temporary pause on the way to the cliff’s edge.
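    To put rough numbers on that temporary pause, the sketch below estimates how long a one-time reclamation of 30-50% of usable capacity can absorb demand growing at the 40-50% rates discussed above before new purchases resume. It is simple compound-growth arithmetic, not a sizing tool.

```python
import math

# If a fraction f of installed usable capacity is reclaimed, the remaining
# (1 - f) that is genuinely used grows at g per year; new purchases resume
# when it fills the installed capacity again.

def runway_years(reclaimed_fraction, annual_growth):
    return math.log(1.0 / (1.0 - reclaimed_fraction)) / math.log(1.0 + annual_growth)

for reclaimed in (0.30, 0.40, 0.50):
    for growth in (0.40, 0.50):
        print(f"Reclaim {reclaimed:.0%} at {growth:.0%} growth -> "
              f"~{runway_years(reclaimed, growth):.1f} years before the next purchase")
```

    Under these assumptions the pause is on the order of one to two years, which is usually enough time to put the longer-term phases in place.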

    The pay-as-you-grow approach is very compelling, especially for your lower tiers of storage capacity (archive, backup, lower performance disk, etc.). I strongly suggest that you understand all of your current costs and calculate a baseline cost per TB before you start shopping for these offerings. Sometimes we get hoodwinked by a low price, and will be surprised to learn that our costs go higher. Here are some articles to this effect:

    A Penny for your Gig

    Comparing All the Costs in a 1-cent Cloud Offering

    TCO Baselines

    So Let’s Do a TCO

    Here is an interesting book on these new consumption models that are coming to IT.

    With conflicting futures in data growth and diminishing IT budgets, an economic cliff is most likely in your future. Simply hoping to avoid the cliff is not a strategy. Tactical plans, architecture alternatives, new behavior models and challenging some accounting or ownership traditions will be required. Otherwise, you may want to start practicing some cliff diving moves of your own.

    Transformation and the Impact from Converged Infrastructures—Part 5

    by David Merrill on Oct 30, 2012


    This is the 5th and final blog installment on the topic of transformation projects and their impact on IT teams. So far I have touched on the staff impact of virtualization, chargeback, and MSU/Capacity on Demand, and now I would like to wrap up the series with a few words about converged infrastructures. I refer you to Hu’s blog on the “do it yourself” pattern of IT systems integration and bundling, and what is now available in pre-tested and certified systems. Notice the stereo system picture in that blog, and the neatness and order of everything on that desk. This would have been a geek’s dream in the 70’s, and that picture epitomizes a lot of Hu’s qualities and interests.


    I will not talk here about the benefits and cost savings from unified or converged infrastructures, but the impact to your organization by adopting and moving to converged systems has to be factored into this transformation. Aside from hardware and software, converged infrastructures also imply converged:

    • IT support teams
    • Procurement, vendor certification
    • Operational processes (provisioning, change control, CMDB)
    • Converged architectures
    • Converged service catalogs
    • Troubleshooting
    • Disaster protection, testing, recovery
    • Management metrics, KPI (uptime, utilization, etc.)
    • Operational management that is complemented by the orchestration software
    • De-commissioning assets and lifecycle management

    There is plenty of proof around the benefits and cost savings of converged systems, but a fair amount of organizational re-engineering (or at least reconsiderations) will be required as part of this change. Teams and processes have grown up over decades in silos and specializations (server, network, applications, storage, desktop, etc.) and now that vendors have delivered effective converged hardware systems one cannot assume that the human element is easily transformed. There is a lot of pride and specialization in skills and training within these (silo) teams, and you should not underestimate the transformation needed with the team members. A single converged environment may appear as a job or best-practices threat. Some may believe that off-the-shelf converged infrastructures will no longer require the skilled engineering of the past.

    My recommendation is to put as much time and thought into your staff transformation going forward as you put into transformation with virtualization, converged infrastructures, chargeback and new consumption models. The skills and in-house knowledge are invaluable to the team, and these skills and work products need just as much transformation effort as the underlying technologies.

    Transformation and the Impact to your IT Staff—Part 4 “Utility-based Consumption”

    by David Merrill on Oct 25, 2012


    This is the fourth segment in a series of blogs related to the potential impact to your staff with various transformational elements. My previous topics covered how virtualization and chargeback can and will impact the dynamics and work flow of your IT team. This entry discusses alternate consumption and acquisition methods, and the effect on IT teams who undertake this transformation.


    First – a prediction. I believe that the traditional method of RFI-Purchase-Depreciate-Asset Retirement will go away. Gartner predicts this will happen within 8 years; I think it will be closer to 5. The notion that we need to own and depreciate IT assets is based on accounting principles (for taxation and balance sheet reporting) and does not reflect the flexibility and agility that companies need from their IT department. Furthermore, having to hold on to assets for 4 or 5 years (in some cases 7 years) is an artificial impediment to cost reduction and IT improvements. Holding on to assets beyond their ideal useful life increases operational costs (like power and cooling per TB), increases outage risk, and limits overall performance. Imagine if someone told you that you could not purchase a new iPad 3 until you had owned your iPad 1 for at least 4 years (forget about even getting an iPad 2). These practices of IT asset ownership are fading. Not all assets will move to different provisioning models, but we are seeing this effect now with buy-as-you-grow consumption models, both in cloud environments and in the local data center.

    There are many advantages to consumption-based IT, and this ingredient is usually an important one for a data center transformation project. So how can this impact your staff? Here are some general ideas to consider as you may have to transform how things are done, and how people work and interact with IT.

    There are a couple of things that they will have to start doing (if they are not already):

    1. A services catalog will need to be created and updated at least annually (a minimal sketch of a catalog entry follows this list)
    2. Within the service catalog, the IT team will have to broker SLA and OLA requirements between the end user and the utility provider
    3. IT teams will become more involved with brokering services and solutions, without necessarily having their fingers in every aspect of the engineering and design of the solution
    4. A referential architecture will be required to map the service catalog to the delivery capability of the utility provider
    5. Chargeback and internal billing, and journal entry processes will be required if internal cost allocations are needed
    6. Utility models can accelerate self-provisioning, or auto-provisioning. These steps will require a new procurement and approval flow (initially) until the consumption reporting and behavior trends can be analyzed. You cannot have kids roaming around the candy store…
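    As an illustration of the first two items, here is a minimal sketch of what a storage service catalog entry might capture. The tier names, SLA fields and prices are hypothetical placeholders, not any vendor's schema.

```python
from dataclasses import dataclass

# Hypothetical, minimal storage service catalog entries.
# Tiers, rates and SLA values are placeholders for illustration only.

@dataclass
class CatalogEntry:
    tier: str                  # service tier offered to the business
    price_per_tb_month: float  # internal chargeback rate
    availability_sla: float    # committed availability brokered with the provider
    rto_hours: float           # recovery time objective
    protection: str            # backup / replication commitment

catalog = [
    CatalogEntry("Gold",   180.0, 0.9999, 1.0,  "sync replication + daily backup"),
    CatalogEntry("Silver",  90.0, 0.999,  4.0,  "async replication + daily backup"),
    CatalogEntry("Bronze",  35.0, 0.99,   24.0, "weekly backup only"),
]

for e in catalog:
    print(f"{e.tier:6s} ${e.price_per_tb_month:>6.0f}/TB-mo  "
          f"SLA {e.availability_sla:.2%}  RTO {e.rto_hours:g}h  {e.protection}")
```

    The value of even a simple structure like this is that it gives the IT team something concrete to broker against when end users and utility providers negotiate SLA and OLA terms.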

    There are also a few things that they may stop doing, or at least reduce the time and effort in performing:

    1. RFI, RFP, ITT and other such instruments will be greatly diminished over time. Partnership decisions will be made based on price and service-level capability
    2. Some level of architecture and engineering control will have to be shifted to the utility provider
    3. Testing, certifications and integration will largely be shifted to the utility provider. IDC estimates that these tasks take up 23% of the IT staff resource

    Finally, there are fundamental changes to some work elements:

    1. Troubleshooting and second-third level support will be fundamentally different, depending on the type of utility service that you have. If managed services are part of the package, then these tasks will be largely removed from your team. Otherwise, it is business as usual
    2. CMDB and asset registries will have to be re-visited, and the impact of change control will need to be reviewed in light of having different owners for some of the assets
    3. Cross-team coordination will have to improve between server, storage, network and application groups, and the utility companies
    4. End-to-end transparency and accountability flows will need to be refined, or re-done altogether

    Utility and capacity-on-demand computing is here, and will continue to take over from traditional purchase and ownership models. These decisions will impact your staff’s tasks, the division of labor and the skills required to broker these new services. As with technical transformations, there will also be organizational transformations required.

    Transformation and the Impact to your IT Staff–Part 3

    by David Merrill on Oct 23, 2012


    This is the third segment in a series of blogs related to the potential impact to your staff with various transformational elements. My last topic was the impact of virtualization to your staff.


    One of the key requirements for data center transformation is the idea of addressing consumer and consumption behavior. Understanding, designing and deploying technology, policies and processes that help change consumption behavior is a necessary step to transformation. It may not be popular, or even politically insulated, but it is a necessary evil.

    Let’s face it, we all hate getting bills. They used to come in the mailbox; now they are on my computer with alarming regularity. Most of my bills I pay without a deep investigation. Some other bills (like my credit card with all my travel expenses) I comb through line by line to check for accuracy. My electric bill has always been frustrating, because I see the kilowatt-hours consumed and all the taxes, but I do not get any visibility into:

    • Time of day consumptions
    • Power usage by devices in my home
    • Variable rates that may exist at different times of the day, year, etc.
    • Recommendations to reduce my power consumption and therefore reduce my bill

    Now let’s make the correlation to data center assets, consumption, metering and billing.

    First, most IT users in the enterprise have been getting their IT resources for free (hey, I would like free electricity too), so any change to the “free-ness” is not going to be received well. Your IT staff may take the brunt of the negative feedback, and they cannot be put in the position of bill collector or bill generator. You need to create an entirely new staff or team (perhaps in the accounting dept.) to handle the billing process.

    Second, what your staff can and should do will have an impact on this consumption behavior change, so some of these ideas need to be carefully designed and implemented.

    • Your staff needs a storage service catalog to refer to when questions of billing and service levels are debated
    • If a customer wants to lower their data center utility bill, then they need to be presented with options to down-grade service or protection or performance to meet their cost limits
    • The data center staff needs to bring intelligence to the consumer. This intelligence may not yet be built into your systems (so you have to fix that too), but when it is there your team can:
      • Provide insight into which servers and applications are generating the highest workload or the highest costs
      • Identify which time-of-day or time-of-month processing schedules are causing spikes in costs and resources
      • Present options to virtualize and move workloads so that during non-peak times the resources are down-graded (re-tiered) to consume lower cost resources until the next peak time
      • Work to understand, align and fine-tune the catalog and service offerings to be price competitive and drive customer satisfaction

    Implementing a chargeback or show-back system will put your team in a defensive position with cloud and other providers in the marketplace. Do not let them be frightened by seemingly low-priced cloud offers from external sources (see a 2-part series on this here and here). Arm your team with a solid service catalog. Be able to show how your costs come down every year, and how your service delivery gets better every year. It sort of sounds like you are turning into a vendor… and you are.
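    A show-back or chargeback report does not need to start out sophisticated. The sketch below simply multiplies allocated capacity by catalog rates to produce a monthly statement per business unit; the rates and allocations are hypothetical and only the shape of the calculation matters.

```python
# Hypothetical show-back roll-up: allocated TB per business unit and tier,
# multiplied by catalog rates, summed into a monthly statement.

rate_per_tb_month = {"tier1": 180.0, "tier2": 90.0, "tier3": 35.0}

allocations_tb = {
    "Finance":   {"tier1": 20, "tier2": 40, "tier3": 10},
    "Marketing": {"tier1": 5,  "tier2": 15, "tier3": 60},
    "DevTest":   {"tier1": 0,  "tier2": 10, "tier3": 120},
}

for unit, alloc in allocations_tb.items():
    lines = {tier: tb * rate_per_tb_month[tier] for tier, tb in alloc.items()}
    detail = ", ".join(f"{tier} ${amount:,.0f}" for tier, amount in lines.items())
    print(f"{unit:10s} monthly bill ${sum(lines.values()):>8,.0f}  ({detail})")
```

    The value is less in collecting the money than in the conversation the line items start: a consumer who sees 20 TB of tier 1 billed at tier 1 rates usually starts asking about tier 3.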

    I have written about service catalogs and chargeback in the past, here are a few of those entries:

    Actions Plans in a Crisis

    Multi-Dimensional Metrics

    Triangulating Budget Appetite and Unit Cost

    Transformation and the Impact to your IT Staff–Part 2

    by David Merrill on Oct 17, 2012


    This is a multi-blog series on IT transformations and the impact on your IT staff that must be considered. In this installment I will cover virtualization, with an emphasis on storage virtualization, and summarize my thoughts and ideas on how virtualization technology impacts organizations and storage teams.


    What is it?

    • SearchStorage says that virtualization is the pooling of physical storage from multiple network storage devices into what appears to be a single storage device that is managed from a central console. Storage virtualization is commonly used in a storage area network (SAN). The management of storage devices can be tedious and time-consuming. Storage virtualization helps the storage administrator perform the tasks of backup, archiving and recovery more easily, and in less time, by disguising the actual complexity of the SAN.
    • Users can implement virtualization with software applications or by using hardware and software hybrid appliances. The technology can be placed on different levels of a SAN.
    • HDS employs virtualization from the storage computer/controller, and provides enterprise-class, heterogeneous virtualization.
    • Virtualization can have a significant impact on data migration. An organization with more than 200-300 TB is probably in a constant state of migration, and virtualization can practically eliminate migration as a standing organizational and technical function.


    Read more about storage virtualization here.

    • Storage administrators may be fragmented by managing storage from different vendors, or different tiers of storage, as part of their duties.
    • Some storage is presented and managed by different groups or teams. Different storage pools can exist for:
      • Backup
      • DR
      • Archive
      • Content management
      • Server storage
    • Teams that are (or were) managing disparate pools of storage need to be re-organized to administer one virtual pool of disk. Some of these teams may also be aligned with certain network protocols (FC, IP, FICON), and these alignments will also need to be reconsidered for a multi-protocol, single pool of virtual storage.
    • Cross-team cooperation, span and control will change as islands of storage are brought together to be managed under a single framework.
    • With various, heterogeneous storage systems come various and incompatible management tools or consoles dispersed in the enterprise. The unification of storage also requires unification of the console and management tools.
    • New training will be needed for the unified management platform, to incorporate best practices and processes with this new level of automation.
    • Some people may feel threatened if their expertise is tied to one vendor’s platform and management tool set. Perhaps custom scripts or tools have been written that will be left behind in the transformation to a single virtual storage pool and single-pane-of-glass management.
    • Full-time migration teams may be, or can be, disbanded. This can be a sensitive problem if an incumbent vendor has a (long-term) contract to provide on-going migration services; these contracts need to be severely cut or cancelled.

    What operational changes are needed?

    • Training on virtual storage management.
    • Working with vendors will change as some of the arrays will be subordinate to the virtualization platform, so some of the vendor/customer relationships will change.
    • Moving from multiple tools to a common toolset for management. Operational processes may need adjustment to a single toolkit.
    • With the significant impact virtualization has on migrations (and their scheduled outages), several migration policies and procedures will need to be revisited and re-written.
    • Service levels will improve, so SLA and OLA requirements will need to be updated.
    • Change control, to handle virtualization and subordinate hosts, may need to be updated; along with referential architectures, schematics, etc.
    • If you use chargeback, there may be different rates or SLAs for virtual and non-virtual storage capacity.

    What career enablers are available?

    Besides the considerable cost savings of virtualization, there are new career options and growth potential for individuals and teams. Virtualization is a key enabler to cloud services, whether you are developing private or public offerings. Storage engineers become “cloud infrastructure engineers” and have the option to develop new skills and capabilities within virtual environments. Data center virtualization is required right now, so teams and individuals can capitalize on advanced skills, training and technical/operational leadership positions to make these transformations successful. Managers have to consider breaking down old or archaic organizational structures to enable collaborative and multi-vendor/multi-platform teams. With virtualization, technical leadership will float to the top with those that embrace and exploit the benefits of virtualization within the organization. Organizations need to help these leaders measure and track the improvement paths that often begin with these types of transformations.

    Now I know this is not a complete list, but a composite of observations and details shared by (somewhat larger) clients over the last year or so. I would be interested in your comments and feedback, as this blog series continues. I am thinking about updating an old paper on storage team optimization, especially considering the transformations that have happened the past few years with architecture, process, economics and data growth.

    My next blog will cover chargeback, and the impact it may have on your staff and operational behavior.

    Transformation and the Impact to your IT Staff – Part 1

    by David Merrill on Oct 15, 2012


    We talk a lot about transformation in IT—transforming architectures with virtualization, transforming asset ownership with cloud and utility services, or transforming work tasks with (remote) managed services. I write and blog a lot about the economic impact of these transformations and how they can reduce unit costs. See this recent blog about 11 steps and 4 phases that can produce real unit cost reductions.


    I have been meeting and working with customers in Germany and London these past 9 days, and have been reminded a few times that these transformations are somewhat dependent on the acceptance and adoption of the storage team or IT staff. Some of these transformations are technical, and are a little easier for organizations to adopt. There are also behavioral transformations that are not as readily accepted and implemented by the team (for a variety of reasons). Political changes (IT politics, not national politics) and accounting or financial changes also require a great degree of acceptance by the IT staff to become reality.

    In short, your people and staff can accommodate or stonewall transformations. You cannot put in place a transformation project (for example to reduce costs) without considering the human aspect of these changes.

    I will spend the next few blog postings discussing a few of these points further, and how the following transformation ingredients (below) have to consider and include human and organizational acceptance / adoption, in order to be successful:

    • Storage Virtualization
    • Charge-back
    • Utility Computing and Consumption on Demand
    • Converged Infrastructure

    There are probably a few other areas that could be discussed in this context, but my work this week relates directly to these 4 areas, where we have some proven (and anecdotal) recommendations of how transformation impacts people.

    An 11-Step Program

    by David Merrill on Sep 28, 2012


    I know there is a 12-step program for addiction recovery, but I am not writing about that. Instead, I am writing about a structured, methodical approach to reduce the unit cost of IT. These methods and steps have been proven over the last 10-11 years with storage economics, but I am finding applicability to VM cost reduction, big data, unified computing, etc. My premise is that there are phases that have predecessor relationships, and these phases or steps can reduce costs.


    Over the past 6-8 weeks, I have personally conducted 20 economic workshops for clients in Asia. Some of these were with recurring-assessment customers, so I can trend their results and correlate their success to other customers that have had great success in reducing costs. I built a composite example of a total cost reduction roadmap from these recent customers, and summarized roughly 11 steps within 4 phases that have produced unit cost reduction of IT. Again, this composite is a storage example, but many of my VM TCO examples are close to this 11-step approach.


    In this example we show TCO per TB/year, but this could also be the unit cost of a VM, VDI, transaction, etc. Notice the categories of costs. We track 34 costs, but there tend to be popular costs that most organizations are interested in measuring and reducing: depreciation, maintenance, labor, environmental costs, outage risks and migration. In achieving these unit costs in the data center, there are (and have been) 4 phases that organizations tend to pass through along this journey.
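    To make the unit-cost metric concrete, the sketch below rolls a handful of the popular cost categories up into a TCO per TB/year. The dollar amounts and the capacity are placeholder assumptions, not figures from the workshops.

```python
# Illustrative TCO-per-TB/year roll-up over the "popular" cost categories.
# All dollar amounts and the capacity are placeholder assumptions.

usable_tb = 500

annual_costs = {
    "depreciation":  600_000,
    "maintenance":   150_000,
    "labor":         220_000,
    "environmental":  80_000,   # power, cooling, floor space
    "outage risk":    60_000,   # expected cost of downtime
    "migration":      90_000,
}

tco = sum(annual_costs.values())
print(f"Total annual cost: ${tco:,.0f}  ->  ${tco / usable_tb:,.0f} per TB/year")
for name, cost in annual_costs.items():
    print(f"  {name:14s} {cost / tco:6.1%}")
```

    Tracking a number like this over time is what lets you trend progress through the 4 phases below.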

    Phase 1 – Doing the Basics

    • Consolidation and centralization of data center assets
    • Implement disaster recovery capabilities
    • Tech updates to improve performance and replace aging and expensive assets
    • Moderate unit cost reduction in this phase (10%)

    Phase 2 – The Golden Age of Virtualization

    • Virtualization applies to servers, desktops, storage (LUNs, arrays, file systems) and network
    • Now the ability to over-provision or over-subscribe capacity
    • Tiering and policy-based asset re-purposing (v-motion, policy-based management)
    • Improved management with new layer of abstraction
    • De-duplication and compression
    • Significant unit cost reductions (30-40%)

    Phase 3 – Behavior Modification

    • Basics of best practices, operational changes
    • Chargeback to pay for what is used
    • Service catalogs and referential architecture to limit variability
    • Rewards and punishment systems to drive economic behavior
    • Self-provisioning
    • Moderate unit cost impact (20%) with these actions, but some of these activities are political and can impact organizations

    Phase 4 – Challenging and Changing the Norms of Ownership and Locale

    • Utility computing
    • Scale up and scale down
    • Capacity on demand
    • Private, public and hybrid cloud
    • Remote services and sourcing
    • Cost reductions will depend on many parameters, and some reductions are masked by cost shifting to cloud providers
    • Unit cost impact in the 10-30% range (a compounding sketch follows these phase lists)
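    The phase-level reductions compound rather than add. A quick sketch, using the midpoint of each range quoted above, shows roughly where the unit cost ends up relative to the starting baseline.

```python
# Compounding the unit-cost reductions quoted for each phase
# (midpoints of the quoted ranges; the baseline index of 100 is arbitrary).

cost = baseline = 100.0
phase_reduction = {
    "Phase 1 - basics":           0.10,
    "Phase 2 - virtualization":   0.35,   # midpoint of 30-40%
    "Phase 3 - behavior":         0.20,
    "Phase 4 - ownership models": 0.20,   # midpoint of 10-30%
}

for phase, reduction in phase_reduction.items():
    cost *= (1.0 - reduction)
    print(f"{phase:27s} -{reduction:.0%} -> cost index {cost:5.1f}")

print(f"Cumulative unit cost reduction: {1 - cost / baseline:.0%}")
```

    With these midpoints the compounded effect is a unit cost a bit above one-third of where it started; no single phase delivers that on its own.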

    In the short term, there are many technical and operational investments that can have a dramatic impact on storage, VM and data center economics. Over time, technical and operational investments will have to give way to behavioral, consumption and remote options to keep driving data center costs down. My next few blogs will dive deeper into these 4 phases and the 11-step program to reduce IT costs.

    The Economics behind the new HUS VM

    by David Merrill on Sep 25, 2012


    This week HDS announced HUS VM, filling the void for enterprise-class virtualization at a much lower entry price. See the details of the announcement here.


    In addition to the powerful features and functions inherited from the VSP platform, HUS VM maintains most, if not all, the economic features of this flagship virtualization-enterprise array. HUS VM brings key cost reduction capabilities to small-to-medium enterprise data centers such as:

    • Multi-vendor, external storage array virtualization, which brings a single-pane of management capability to smaller enterprise accounts. Tools and labor skills required to manage heterogeneous systems can be significantly reduced.
    • Heterogeneous array-based migrations are done 90% faster and at less cost than traditional methods at the array’s end-of-life.
    • Virtualization produces space reclamation from a single (or from multiple) external arrays. This reclaimed space reduces short-term CAPEX spend.
    • When combined with over-provisioning, additional space reclamation is achieved. Additionally, faster provisioning will put capacity into hosts much more efficiently.
    • When combined with dynamic tiering, data can be re-distributed to the most cost-efficient tier. This is especially true for the re-tiering of copies.

    The combination of virtualization, tiering and over-provisioning has been well documented and proven with customers around the world; now these cost-reducing features are available at a much lower price point. With the VSP architecture, we would tend to see the economic sweet spot for these features at around 120-140 TB. In other words, these capacity levels provided the economies of scale to justify the added hardware and software costs associated with virtualization. With HUS VM, the sweet spot is now as low as 30 TB.

    Over the past few months I have been analyzing and creating economic use cases for HUS VM, and although each case is different (especially by country), our results show that HUS VM will produce TCO reductions of 30%. This rate reduction is based on comparison to modular and unified storage systems that do not provide external storage virtualization. The 30% TCO reductions are derived from:

    • Reclamation of white space from externalized virtualized arrays.
    • Lower power and cooling requirements for internal HUS VM capacity.
    • Cost of migration.
    • Simplified and unified storage management presentation through a single console.
    • Commoditization of storage purchase as new capacity can be presented behind HUS VM.
    • Reclamation of capacity from zero page reclaim (ZPR) and over-provisioning.

    Storage cost reductions are not requirements for just the larger data centers. All IT operations are under CAPEX and OPEX pressure, so the ability to bring these proven cost reduction elements to small and medium enterprises is great news in tough economic times.

    Comparing all the costs in a 1-cent-a-GB cloud offering

    by David Merrill on Sep 20, 2012


    Last week I wrote about Amazon’s new Glacier offering, with cloud storage set at $0.01/GB/month. Let’s take a closer look behind the headlines to compare and contrast the total cost and not get hung-up on this very attractive price point. Before I could continue, I had some colleagues run a couple of comparative configurations using average street pricing for HDS (HCP) and competitive solutions for a small (50 TB) and large (500 TB) environment. With an ASP and a total cost calculation, we can better see how owning the storage asset yourself might compare with the new price offering from Amazon.


    A quick summary of the offers can be found on the Glacier pricing web site or below:

                                     Amazon Glacier      Owning a Small      Owning a Large
    Usable Capacity                  both 50 & 500 TB    50 TB               500 TB
    Price* per GB/month              $0.01               –                   –
    OPEX** & CAPEX per GB/month      –                   –                   –
    On-boarding Cost                 $0.055/GB           –                   –
    Retrieval Cost                   $0.10/GB            –                   –
    To create a more balanced view of all costs, you would need to add to the Amazon run rate of $0.01 per GB/month the on-boarding cost (an up-front, one-time charge) and the retrieval costs (when requested). There is also a cost for the business to wait for data retrieval (in these examples I have set a low rate of $200/hour).

    In this approach, the on-boarding cost is a tariff that has to be factored in at the start. In other words, if the retrieval rate is zero, meaning you put the data there and never access it, the total costs for 1 year would certainly favor Glacier. Data at rest that is never retrieved would indeed have a very attractive total cost curve, as shown below.


    Predicting a zero access rate and zero business impact from data retrieval is, however, not very likely.

    Now if I change the retrieval rate to just 5% of the capacity per month, with a total monthly wait-time of 20 hours, we start to see some total cost convergence or parity in a one-year timeframe. The slope of the curve for Glacier is steeper due to Amazon’s charges to retrieve data, and the business impact waiting for that data to be ready and presented. In the model below, these 2 factors will change the slope of the Glacier total cost curves:
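    A minimal sketch of this kind of month-by-month model is below. The Glacier-side figures are the ones quoted in this post (the $0.01/GB run rate, $0.055/GB on-boarding, $0.10/GB retrieval, 5% monthly retrieval, 20 hours of wait at $200/hour); the owning-side CAPEX and OPEX numbers are placeholders rather than the HDS street pricing used for the charts.

```python
# Minimal month-by-month total cost sketch for the 50 TB scenario.
# Glacier figures are from this post; the owning-side figures are placeholders.

CAPACITY_GB = 50 * 1000

# Glacier side
GLACIER_RATE = 0.01             # $ per GB per month
ONBOARD_COST = 0.055            # $ per GB, one-time
RETRIEVAL_COST = 0.10           # $ per GB retrieved
RETRIEVAL_FRACTION = 0.05       # 5% of capacity retrieved per month
WAIT_COST_PER_MONTH = 20 * 200  # 20 hours/month at $200/hour business impact

# Owning side (placeholder assumptions, not street pricing)
OWN_CAPEX = 60_000              # up-front purchase
OWN_OPEX_RATE = 0.004           # $ per GB per month for power, space, labor

def glacier_total(months):
    return (ONBOARD_COST * CAPACITY_GB
            + (GLACIER_RATE + RETRIEVAL_COST * RETRIEVAL_FRACTION) * CAPACITY_GB * months
            + WAIT_COST_PER_MONTH * months)

def owning_total(months):
    return OWN_CAPEX + OWN_OPEX_RATE * CAPACITY_GB * months

for m in (3, 6, 12, 24):
    print(f"{m:2d} months: Glacier ${glacier_total(m):>9,.0f}   Owning ${owning_total(m):>9,.0f}")
```

    With these placeholder numbers the curves cross at roughly the one-year mark; the exact crossover is not the point, but rather that the retrieval and wait-time charges, not the penny-per-GB run rate, dominate the Glacier slope.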


    The conclusions are fairly simple with this modeling approach:

    • If data is not to be accessed frequently, and has low RTO (recovery time objective), then cloud offerings such as Glacier will have a good total cost story
    • We have to look beyond a steady-state price to really compare and contrast other offerings
    • Your mileage may vary with regard to power, labor, retrieval rates, network costs and data protection (which I did not include in these cost models), so you have to understand the operational costs and data access requirements before being too impressed with an initial price offering
    • Legal and compliance requirements can turn these kinds of calculations upside-down; make sure your inactive data can be really off-site
    • Read the fine print; make sure you understand all life-cycle costs
    • There can be economic surprises over time, like migration costs or maintenance bubbles. Make sure that your time horizon is appropriate.
    • There will be real performance and availability differences with some of these kinds of services. Make sure the data classification and catalog definitions are commensurate with the service you are purchasing
    • Total cost curves will behave very differently for small, medium and large workloads. These 2 samples of 50 and 500 TB may not present the best overall cost performance in these examples. I did find it interesting that the smaller system had a faster cross-over point; this again would be due to the retrieval costs being large compared to the monthly run rate

    I have been asked some questions about the economics of these cloud offerings (based on my initial blog post):

    What do you see this type of offering used for?

    • Low cost, inactive data possibly for long-term storage like compliance

    Is this a play for archive data or backup?

    • Archive data usually requires an index or lookup feature. I do not think that 4-5 hours to restore data would meet many current or active archive requirements
    • Backup could also be a target, as well as tape replacement
    • See the Amazon use cases put forward on their web site

    I don’t want this price analysis exercise to imply that I am anti-cloud. I am a firm believer in cloud architectures and options. We cannot, though, be seduced by low prices offered by a cloud vendor or storage vendor. Do some basic math and understand all the CAPEX and OPEX costs that have to be factored into a total cost comparison. That way you are better positioned to deliver options that reduce both total cost and CAPEX price.


    * For this OPEX calculation I used $0.16 per kilowatt-hour for electricity, $65 per month for rack/raised floor, $100K fully burdened labor cost, and 200 labor hours per year of minimal administration labor for the large capacity (100 hours for the small capacity). The growth rate, once in the archive, is zero percent for this total cost comparison. The on-boarding cost is not a recurring cost, therefore the slope or vector of the total cost curve would be roughly the same (given the modeling conditions) for each of the next few years.

    **For the unit price we took several kits from HDS and competitive modular offerings. This price includes the tax impact of a 4-year depreciation @ 32% marginal tax rate.

    A Penny for your Gig

    by David Merrill on Sep 10, 2012


    You may have seen this recent announcement from Amazon on their new Amazon Glacier Storage Service. This is a red-letter day in the annals of cloud storage service offerings in that a very low cost offer can be priced at $.01 per GB month. That’s 12 cents for 1 GB for an entire year or $120 USD per TB for a year.


    Achieving the 1¢ price point is much like breaking the 4-minute mile in running: it sets a new precedent and new pricing standards for the future.

    I recently came across an article trying to defuse this offering as one that will not destroy tape. You can bet that this new price point will generate a lot of comparisons and contrasts. A new low-price gauntlet has been thrown down, and there will be a fair amount of effort to mimic or defuse the offering.

    But a singular or myopic view on low-price can lead to confusion and oversight on the total cost of a low-cost option. There are limitations and caveats to be sure–the on-boarding and retrieval costs, for example. Performance delivered with this kind of service has to be aligned with the price as well. I have often talked about price not being equal to cost, and that we have to remember that the price of disk is approaching zero. The Glacier offering is proving that prediction to be true.

    New levels of confusion about price and cost are bound to crop up with this new service-price offer from Amazon.

    I will take my own stab at normalizing this price to a longer term cost perspective in the next few blogs. By creating a few different scenarios (SM, MED, LG), we can start to see the total cost implication of a low-price solution.

    I am very excited to see the market acceptance of this offering, and for competitive solutions coming to the market over time. Once this 1¢ barrier has been broken, others will soon follow. With a clear understanding and construction of the total costs, consumers won’t be as easily seduced or confounded by an incredible new price.

    The Rolls Royce of Storage

    by David Merrill on Sep 5, 2012


    A customer recently explained to me that our storage solutions were great, but were perceived as a “Rolls Royce solution,” and that they had to look at other options because of new budget pressures. As I met with the customer, it became clear to me that they were blurring the difference between price and cost. The price they paid for the solution may have been higher than other options, but in actuality it delivered a lower cost of ownership.


    I often see that an HDS solution (controller-based virtualization, automation software) may have a higher price to purchase when compared to a low-cost simple disk architecture, but we are most often a lower cost to own compared to these cheaper-to-buy solutions. When it comes to budget cuts, most people are supremely interested in lower cost of operational-ownership.

    This customer was recently interviewed in ComputerWorld about their architecture and the results of an economically superior storage solution.

    The perception of high-price or high-end storage solutions often gets confused or overlooked in the context of lowest cost to own. Such was the case with this Australian client, who had not fully appreciated the flexibility, rapid growth, low migration cost and ease of management that the advanced HDS storage architecture brought them. Some of the benefits they gained with HDS solutions included a very high utilization rate, virtually no migration cost and the ability to manage a lot of infrastructure with very little staff. The power, cooling and floor space requirements of HDS were small compared to products from other vendors with similar capacity.

    This is why econometrics is so important—the right architecture might not always be the lowest price, but it can very often be the lowest cost.

    Curbing our Appetite for Storage Growth – Part 2

    by David Merrill on Aug 31, 2012


    In my last post, I discussed the difficulty in accurately projecting new storage demand and the options available to the storage team to present new growth slopes with new/different storage architectures.


    I tend to see organizations approach these new architectures (virtualization, thin provisioning, tiering, etc.) with an incremental-stepped approach. This ensures that the operational nature of these changes is put into place correctly, and that storage managers and storage consumers are all on the same page. Over time, the incremental approach to taking on new technologies will allow for a phased approach to reducing the overall storage appetite or demand. This can be depicted in the following chart using the previous growth slope options.


    I also observe that some organizations (in my opinion, the best in class) set up intermediate metrics to track their progress. One of these metrics can be a written-to-raw ratio: the actual written capacity compared to raw (non-RAID) storage capacity. The raw capacity includes copy data, while the written TB may not. This can be an effective measurement for non-technical people to understand how much storage needs to be purchased to hold the written amount of data. RAID overhead, white space and un-used capacity all contribute to the differences between written and raw.

    The options to reduce or change the growth slope all present different written-to-raw ratios. If the graphing exercise is too complicated an explanation, you might prefer a simpler math exercise (a short sketch follows the list below). Written-to-raw ratios can help with technology improvement programs:

    Current baseline rate is (for example) 1:10

    • Thin provisioning can give us 1:4
    • Compression can give us 1:5
    • Virtualization and consolidation can get us 1:8
    • Compounding some or all of these (they become additive, and somewhat mutually exclusive) can yield a 1:3 ratio
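    The arithmetic behind these ratios is simple enough to script. The sketch below converts each written-to-raw ratio quoted above into the raw TB that would have to be purchased for a given amount of written data; the 100 TB of written data is an arbitrary example.

```python
# Raw capacity implied by a written-to-raw ratio, for an arbitrary example
# of 100 TB of written data. Ratios are the ones quoted above (written : raw).

written_tb = 100

ratios = {
    "current baseline":               10,
    "thin provisioning":               4,
    "compression":                     5,
    "virtualization + consolidation":  8,
    "compounded program":              3,
}

for name, raw_per_written in ratios.items():
    raw_needed = written_tb * raw_per_written
    print(f"{name:32s} 1:{raw_per_written:<2d} -> purchase {raw_needed:5d} TB raw")
```

    Going from the 1:10 baseline to a compounded 1:3 means buying roughly 70% less raw capacity for the same written data, which is the kind of number non-technical stakeholders grasp immediately.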

    Metrics, whether graphical or numerical, are needed to demonstrate to management and users how the IT team is doing everything in its power to change the growth slope without introducing disruptive or complex new solutions. In tough situations some of these difficult decisions might be necessary, so the IT team needs to take care of all the low-hanging-fruit options like tiering and de-dupe before taking on tougher actions like rationing or charge-back.

    In my next and final post on this topic next week, I will explore modeling methods and worst-case planning to try to determine where your current infrastructure may break (at a certain growth level).

    Curbing Our Appetite for IT Capacity

    by David Merrill on Aug 28, 2012


    Quite often while discussing cost-reduction, we draw a simple x-y graph that depicts:

    • Flat, or nearly flat IT budgets, year over year
    • Increasing demand for storage (and VM apps) capacity, usually at a 30-50% growth-rate year-to-year
    • The requirement to drive down unit costs to allow that growth-rate within a flat budget (a quick calculation of what this implies follows the list)
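    That third bullet is just arithmetic, but it is worth doing once: to absorb a given growth rate on a flat budget, the unit cost must fall by the same proportional amount every year, as the sketch below shows.

```python
# If the budget is flat and capacity grows by g per year, the unit cost must
# fall by g / (1 + g) per year just to break even.

def required_unit_cost_drop(annual_growth):
    return annual_growth / (1.0 + annual_growth)

for growth in (0.30, 0.40, 0.50):
    drop = required_unit_cost_drop(growth)
    print(f"{growth:.0%} capacity growth -> unit cost must fall {drop:.0%} per year")
```

    Sustaining a 23-33% annual unit-cost reduction is not something street-price declines deliver on their own, which is why the architectural and behavioral levers discussed here matter.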

    We tend not to question the growth-rate, right? It is what it is. IT does not seem to have control over how many new apps, systems, or user data requirements emerge each year, and we must simply react to the demand. I see a change required in some of these assumptions about IT being able to curb or challenge the real growth-rate. At a minimum, we can begin to distinguish physical from logical growth.

    For many IT shops, capacity growth in the storage space has a come-and-ask-for-it mentality. We don’t throw data away anymore, and the price of disk is cheap enough to keep all data, and to keep it online all the time. This kid-in-a-candy-store mentality is unsustainable for the near future. Global recession fears, local austerity initiatives, capital preservation and cash protection are all real requirements in today’s IT environments. The growth-rate of structured data was noted to be 30% per year for the next few years (see IDC: State of File-Based Storage Use in Organizations: Results from IDC’s 2011 Trends in File-Based Storage Survey, doc #228824, June 2011), and unstructured data tends to grow around 60% y/y.

    Part of the problem with curbing storage appetites is that IT planners and storage architects don’t always have good visibility from the business or apps regarding the growth-rate needed to support the business. There are new projects on the horizon for:

    • Big data project
    • New apps and systems, like VDI
    • Change in direction relative to market trends
    • Planned and unplanned spikes (opportunities or risks) to be satisfied
    • New legislation or consumption patterns that radically change demand
    • Cloud, and all things related to the cloud

    Most planners tend to look back at the past few years to understand history in order to predict the future. This is always safe, but often insufficient.

    Another tactic is to employ storage architecture elements that are more accommodating of unknown demands. Having some of these technologies in place may not change the unpredictable nature of the growth, but it can soften the impact on the bare-metal requirements needed to support logical/virtual growth:

    • Thin provisioned volumes
    • Virtualization of arrays, volumes, machines
    • Compression
    • De-duplication
    • Archiving of stale data

    These above examples can help deliver a different physical capacity growth, while at the same time, presenting to the consumer a logical view of capacity that can meet the business or data demands.


    In my next post, I will review some of these roadmap or step-down options available within the storage architecture. These tactics may not impact the outward demand for high-growth storage, but they can reduce the on-the-floor infrastructure and investments required to present a logical presentation of high-growth capacity.

    Mainframe Storage Economics: Part 2

    by David Merrill on Aug 9, 2012


    My last blog presented some of the same operational and capital pressures for the mainframe that we most often see on the open systems side of IT. This entry will present some of the key storage architecture technologies (again, emphasized and somewhat perfected on the open systems side) as applied to the mainframe storage environment.


    By way of disclosure, I am not a mainframe, ZOS or S/390 expert. I grew up in the VAX VMS world in the early 80’s, using these systems for CAD, thermal simulations, printed circuit board design and threat simulations (I was in the military electronics world back then). The correlation of new storage architecture options, such as dynamic provisioning and virtualization, comes from my mainframe-expert colleagues and fellow bloggers Claus and Hu. The principles of storage economics apply to all storage, regardless of the host. The mainframe does have different reactions to some of these technologies, but the result is still a reduction of cost per terabyte, and that is important.

    The impact of these key, and relatively new, storage technologies is as follows. I will start with controller-based virtualization for mainframe storage:

    1. Virtualization can present a lower cost tier to mainframe applications. With typical FICON storage systems, only FC drive types were able to be presented. When controller-based virtualization is connected to the host, subordinate or virtualized arrays can now be seen by the MF host, and this can include low cost modular arrays and NL SAS storage systems. This new lower cost tier of disk can be advantageous to ZOS applications with a big appetite for growth, and where lower cost and lower performance drives can make economic sense
    2. Virtualized, NL SAS disk has a higher density, so moving significant capacity to this new virtualized mainframe tier can reduce floor space and power costs for larger environments. Depending on several factors, this new tier may even replace some tape capacity for the VTL environment
    3. With heterogeneous storage assets virtualized and presented to the mainframe environment, an IT department has many more options to flex and share storage arrays between open systems and mainframe. If an asset is retired early from one environment, it can be re-purposed for another, extending the useful life of the asset (sweating an asset)
    4. Virtualization can enable more variations in terms of cost, performance and availability, with virtually any storage array being a candidate for connectivity. Your procurement department will appreciate the expanded options to purchase mainframe capacity from a range of vendors and products that, up until now, have not been available for the mainframe

    More to come on HDP and tiering in the mainframe in my next blog.

    Mainframe Storage Economics: Part 1

    by David Merrill on Aug 6, 2012


    I am speaking at SHARE this week in Anaheim, and my session will be on the economic impact of storage virtualization for the mainframe. My session details are here if you happen to be in attendance.


    As I was pulling this presentation together, I realized that most of the cost issues (waste, reclamation, optimization) have been on the open systems side of the IT department over the past few years. That is not to say that the z/OS environment cannot benefit or make improvements to unit costs of storage, but most of our consulting work has been with SAN, block, file systems, NAS and open systems applications and servers. After meeting with HDS colleagues to put virtualization impact for the mainframe in the right context, there are new twists to be covered in my next few blogs.

    To start, it is important to understand that the mainframe operating systems, skills, backup solutions, and archive solutions have a decades-long head-start compared to Linux, Hadoop, UNIX and Windows. Many of these mature processes can be reflected in the unit costs of storage, but there are areas for improvement. Controller-based virtualization can have an important impact on mainframe apps, and on holding down server and storage-related costs. A summary (included in my presentation) of the cost sensitivities that exist in the mainframe domain is as follows:

    IT Common Pressure Points (mainframe as well as open) have to be measured and tracked. We have to do more than talk about Economic Pain-Points. There are plenty of solutions in the MF space, just as there are for other server platforms in the data center

    • Capital expense is under pressure – not as much as OPEX, but still present
      • Cost of growth can be altered with new architectures and presentation of SATA drives through a virtualization platform
      • Poor utilization of tier 1 disk due to short-stroking can be remedied
        • Resulting in poor utilization, wasted capacity
        • Have to buy more (6-10x more) capacity than what is actually needed
      • MIPS Preservation, and therefore cost avoidance
        • Deferring any and all MIPS upgrade costs whenever possible
        • Look to off-load MIPS that are busy with storage tasks
        • MIPS overhead reduction to improve performance
    • OPEX
      • Power, cooling, and floor space are everyone’s business these days
      • Performance costs: if IO and throughput can be increased, then many times revenue improvement follows performance improvement
      • Backup is a good part of mainframe storage costs. How can virtualization, archive and tiering help with mainframe backup costs?

    I will spend more time later this week and next week diving into some of these key topics, primarily around virtualization impact, MIPS offloading, dynamic provisioning and tiering. The compounded impact of combining multiple options also presents a faster rate of savings in the mainframe space, just as we see in the open systems space.

    If you are at SHARE this week, stop by to say hi. Otherwise stay tuned for my upcoming blog posts.

    Economic Best Practices

    by David Merrill on Aug 2, 2012


    I had a colleague call and talk about an economics analysis (cost reduction strategy) that he was starting with a large client in Asia next week. The customer wanted to know our “best practices” around cost, cost measurement and cost reduction. After doing this type of work for the better part of 13 years I had to pause, since our entire methodology and framework is a collection of best practices. Let me summarize these best practice areas:


    Best Practice #1, one of our key principles is that the price of disk is not the same as the total cost to own storage. The price of the initial storage investment is only 12-15% (and declining) of the total cost of storage ownership. Therefore we have to look beyond the price of disk to measure all the costs of ownership. This paper outlines the 34 costs (best practice) that make up storage TCO.

    Best Practice #2, after identifying the costs that are important, a simple storage TCO, baseline measurement is essential to start a cost reduction road map. I have posted several blogs on this topic. You can see them here and here.

    Best Practice #3, there are correlations or mapping that can be done between costs and investment options (to reduce these costs). HDS has built a mapping tool to assist in this correlation of costs and solutions. Cost reduction plans can be deterministic.

    Best Practice #4, don’t confuse price and cost. Don’t be seduced by a low price and expect to see similarly low costs of ownership. As we embark on new technologies like hypervisors, cloud and big data, we seem to keep making the same low-price, DAS-storage mistakes each time.

    Best practice #5, the right storage architecture can have an impact on the total costs of adjacent IT infrastructure. For example, there are key storage architectures that can reduce (impact) the total cost of a VM or VDI. We have a good whitepaper on Hypervisor Economics that describes some of these best practices in terms of technology, operations, and metrics. A best practice is to seek after and implement a storage architecture that is economically superior based on the costs that are important to you.

    Best Practice #6, try not to get too comparative with other IT organizations and their cost structures. Since there are 34 different kinds of costs, another company in your same vertical market may not choose the same costs as you do. Therefore, any comparative effort will be irrelevant. Rather, use economic metrics, baselines and annual reviews to measure yourself. Take a look at this recorded session with the CTO of Adobe, and how they performed annual review and measurement of their storage unit costs.

    Best Practice #7, you need to find the economic hero (or heroes) in your organization. Rarely is there just 1 person who cares about all the costs. Many departments and organizations own fractional elements of storage TCO. Get them involved in your economics baseline. Some costs will be direct, and some will be indirect. There is no right or wrong way to include direct and indirect costs in a TCO baseline.

    These are some of the high level, or foundational, best practices. There are many more as you dive deeper in our content.

    • Cost of migration can range from $7-15K per TB, and storage virtualization is a key technology to drop this cost to near-zero

    • In a 4-tiered storage environment, the total cost ratio between tiers should be around 11:7:3:1 (a quick worked example follows this list)

    • It is no longer cheaper to waste a disk than it is to manage it

    • Asset utilization (waste reduction) improvement through reclamation can be a powerful alternative to purchasing new assets, especially in tight capital markets





    • Backup costs continue to be a large % of the storage TCO. Many are looking to snap copies, replication and an intelligent archive to reduce or eliminate backup altogether

    • For countries with high power costs, new architectures and processes are needed to reduce frame count, optimize capacity and fundamentally reduce power, cooling and space requirements
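    As promised above, here is a minimal sketch of what the 11:7:3:1 tier cost ratio can mean for a blended unit cost. The tier-4 dollar figure and the two capacity mixes are hypothetical examples, not customer data.

```python
# Illustration of the 11:7:3:1 tier cost ratio from the list above.
# The tier-4 unit cost and the capacity mixes are hypothetical examples.

TIER_RATIO = {"tier1": 11, "tier2": 7, "tier3": 3, "tier4": 1}
TIER4_COST_PER_TB_YEAR = 500.0  # assumed $/TB/year for the lowest tier

def blended_cost(mix_tb):
    """Weighted-average cost per TB per year for a given capacity mix (TB per tier)."""
    total_tb = sum(mix_tb.values())
    total_cost = sum(TIER_RATIO[t] * TIER4_COST_PER_TB_YEAR * tb for t, tb in mix_tb.items())
    return total_cost / total_tb

top_heavy = {"tier1": 600, "tier2": 300, "tier3": 80, "tier4": 20}    # mostly tier 1
tiered    = {"tier1": 150, "tier2": 250, "tier3": 350, "tier4": 250}  # data demoted by age/use

print(f"Top-heavy mix: ${blended_cost(top_heavy):,.0f}/TB/year")
print(f"Tiered mix:    ${blended_cost(tiered):,.0f}/TB/year")
```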



    Our list could go on and on. Luckily we have published a book on storage economics (for dummies) and all of these ideas or best practices are well documented in this book. You can download a PDF version here.

    The HDS cost efficiency page is also rich in content on best practices around reducing the costs of migration, waste, management labor, power/cooling etc.

    HDS has done thousands of economic workshops and assessments over the past 11 years. We are moving closer to having roughly 100 of these case studies cleansed and indexed so that you can see by industry, location, size and growth rates the kinds of savings available by applying these best practices.

    Concluding my short series on data center environmental costs and issues

    by David Merrill on Jul 16, 2012


    I am heading to Park City, Utah for a week to escape the heat in Texas, and wanted to quickly conclude some thoughts on environmental costs and the impact of the right server and storage architecture.


    This Oracle article is a little dated, but is a good summary of the importance of good, clean power when more emphasis is placed on virtualized IT resources. I believe most of the virtualization references are aimed at servers and applications, but the arguments for storage virtualization are also valid. Inside the report are some good (but outdated) Gartner quotes around power costs and impact to the data center over the next few years.

    While working in Asia last month, I saw that the rising cost of power in many countries has most IT architects looking at new designs for sustainability. There are also some new government programs to incent large corporate investments in energy efficiency. Listed below are samples of some of these programs:

    Lastly, around 3-5 years ago I heard some buzz about the new CEO, the Chief Environmental Officer. This position has been discussed in several articles in trade press, and even at some large conferences, but I have not yet seen this role develop within the companies that I visit and work with. If you know of some examples, or if your own company has a similar position, please leave me some feedback. I am interested to see how this role is impacting the data center and current IT decisions.

    Environmental Focus on Storage Environmentals

    by David Merrill on Jul 11, 2012


    Last month I posted a couple of blogs on power efficiencies and tactics to reduce storage power consumption. There is plenty of evidence that data and storage growth will continue to put pressure on power consumption and power costs.

    • Annual growth rate of 30% for structured and over 80% for unstructured  is forecast for the next few years
    • Without a fundamental change in storage architectures, an unsustainable rate of growth in power and cooling will put pressure on current data center facilities
    • Countries with carbon limits or tariffs will be the first to enact new systems and controls to limit power growth
    • Countries like Australia and the Philippines are seeing a 17-20% annual increase in the wholesale cost of power
    • Power rates by 2020 will be 120% higher than they are today, and by 2030 that number could be 215%
    • Estimates differ across the board, but conservative estimates have storage responsible for 40-60% of data center power and cooling consumption
    • Storage virtualization can and does have the same impact to power as has been seen in server virtualization, but adoption rates are lower than servers

    Clearly, the current rate of data growth is unsustainable if it requires comparable increases in power/cooling consumption. New architectures such as virtualization, over-provisioning, data tiering, data archive and new retention policies have to be considered to combat power consumption growth.

    I suggest creating econometrics to track the unit cost of storage environmentals as a percent of the storage TCO. Further metrics can be introduced to measure kW per TB or BTU per TB for the current and future storage infrastructures. With these kinds of metrics in place, the environmental impact of storage architectures can be tracked and held accountable. Environmental sustainability needs to be a concern not just for a few, but for the architects and planners who can influence and recommend new courses for growth.
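    For those who want to start tracking these numbers, here is a minimal sketch of the suggested econometrics. Every input figure in it is a hypothetical placeholder to be replaced with measured values from your own environment.

```python
# Minimal sketch of the environmental econometrics suggested above.
# All input figures are hypothetical examples; substitute measured values.

usable_tb          = 2000.0       # usable capacity in the storage estate
storage_kw         = 180.0        # measured kW drawn by storage frames
kwh_rate           = 0.15         # local $ per kWh (assumed)
cooling_overhead   = 0.8          # assumed: cooling adds 80% on top of IT power
annual_storage_tco = 4_000_000.0  # assumed total storage TCO per year ($)

kw_per_tb  = storage_kw / usable_tb
btu_per_hr = storage_kw * 1000 * 3.412   # 1 W is roughly 3.412 BTU/hr
btu_per_tb = btu_per_hr / usable_tb

annual_power_cost = storage_kw * (1 + cooling_overhead) * 24 * 365 * kwh_rate
env_pct_of_tco    = 100 * annual_power_cost / annual_storage_tco

print(f"kW per TB:            {kw_per_tb:.3f}")
print(f"BTU/hr per TB:        {btu_per_tb:.1f}")
print(f"Annual power/cooling: ${annual_power_cost:,.0f}  ({env_pct_of_tco:.1f}% of storage TCO)")
```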

    Data Center Environmentals – Extreme Makeover Part 2

    by David Merrill on Jun 27, 2012


    In my previous blog, I started a discussion on power consumption, control and the impact storage has in the data center. This is a front-and-center issue in Australia, and requires new thinking and new investments to sustain government goals and industry trends.


    From a pure storage architecture perspective, there are 3 general areas that can be considered to reduce power cost and usage on a per-TB basis. The focus has to be on optimizing what you already have installed, introducing new and better architectures for organic growth, and continuing consolidation activities whenever possible. The following outline of proposals and actionable plans presents some basic ideas that have worked for customers (around the world) to reduce environmental resource consumption and costs on a per-TB basis.

    1. Improve storage capacity utilization and designs for the systems that are already plugged in and running
      1. Virtualization to simplify and optimize older arrays (and in the process probably retire some of the older ones)
      2. Space reclamation with over provisioning, thin volume, ZPR
      3. Squeeze more existing TB out of the kilowatts already in use
      4. Hot/cold rows to improve air flow, consumption
    2. Introduce technologies and new architectures for net-new growth
      1. Tiered storage, data movement based on age and use to lower tiers
      2. High density, low power drives for lower tiers
      3. Spin-down disk for less frequent tiers
      4. Hybrid cloud, off-site migration (where alternate site has better power rates) for lower tiers
      5. SSD as well as page-based tiering as much as practical
      6. Virtualization for faster migration to reduce the overlapping space and power costs common in most migration projects
      7. Move some workload to hosting centers that have already optimized architectures and power solutions
    3. Consolidation, tech refresh (fewer moving parts, fewer assets)
      1. Virtualization to collapse and migrate many smaller systems into a single, more efficient (virtual) pool of storage
      2. Consolidation of SAN, NAS filers, older arrays

    Data Center Environmentals – Extreme Makeover Part 1

    by David Merrill on Jun 25, 2012


    I am speaking at 2 IT events this week in Australia, both focused on power reduction in the data center. Australia has some fascinating and steep challenges ahead as power rates rise at high rates (30% pa) and carbon taxes are now in place. Recent legislation, among other things, has put reducing of power and cooling costs clearly in the cross-hairs of IT managers and CIOs in this part of the world.


    While watching part of a rugby match on the TV, there were several commercials for home-based solar panel/inverters from the local home improvement stores. These initiatives are being applied (and felt) in the data centers and by homeowners. Below are some links to the energy efficiency initiatives here in Australia:

    My contribution to these panel discussions will involve the economics and options available to reduce power and data center infrastructure costs. I won’t be announcing any new power technologies available within storage arrays, but proven methods and tactics to reduce the cost of power as a unit of storage TCO. My approach is based on the fact that data is growing (30-80% pa) and these growth rates cannot be replicated in electricity consumption, so new architectures and methods have to be deployed in order to deliver significant unit cost reductions related to environmental costs. The following quote comes from a US Dept of Energy paper on energy efficiency in the data center. The storage section is not rich with ideas, but does convey common sense information on how storage correlates to power growth and ideas for reduction.

    “Storage Devices: Power consumption is roughly linear to the number of storage modules used. Storage redundancy needs to be rationalized and right-sized to avoid rapid scale up in size and power consumption. Consolidating storage drives into a Network Attached Storage or Storage Area Network are two options that take the data that does not need to be readily accessed and transports it offline. Taking superfluous data offline reduces the amount of data in the production environment, as well as all the copies. Consequently, less storage and CPU requirements on the servers are needed, which directly corresponds to lower cooling and power needs in the data center. For data that cannot be taken offline, it is recommended to upgrade from traditional storage methods to thin provisioning. In traditional storage systems an application is allotted a fixed amount of anticipated storage capacity, which often results in poor utilization rates and wasted energy. Thin provisioning technology, in contrast, is a method of maximizing storage capacity utilization by drawing from a common pool of purchased shared storage on an as-need basis, under the assumption that not all users of the storage pool will need the entire space simultaneously. This also allows for extra physical capacity to be installed at a later date as the data approaches the capacity threshold.”

    I hope to cover a few more practical and deep-dive topics beyond some of the obvious options. My framework for this type of storage/power consumption discussion is based on 3 key action areas:

    1. Optimize utilization
    2. New architectures
    3. Consolidation

    This outline is centered on, and isolated to, storage infrastructure. If we open the aperture to include converged solutions, virtual machines and integrated stacks, the environmental options can expand to include a much wider array of topics. I will explain more about these 3 pillars in my next blog post later in the week.

    Evolution of VM Total Cost Calculations and Methods

    by David Merrill on Jun 20, 2012


    A little over a year ago, a colleague of mine here at HDS discussed the lack of information on VM total costs, and how to measure and reduce total costs. Michael Heffernan (also known as Heff) and I eventually created this white paper on an approach to measure and reduce VM total cost of ownership: Hypervisor Economics: A Framework to Identify, Measure and Reduce the Cost of Virtual Machines.

    It is good to see that there is new material coming out on this topic. Initially, most of the economic and business case materials revolved around the rationale to move to a VM environment. Now that these have become much more established, people are interested in optimizing and reducing the costs of virtual machines.

    A couple of interesting papers and models that you may want to look into:

    1. VMware TCO methodology
    2. VMware TCO calculator
    3. Dell announcement with HW configs set at $1,000 per VM (CAPEX costs only)
    4. Microsoft paper on VM total costs
    5. Blog Comparisons and Contrasts

    I am sure there are more papers and models out there, so please feel free to send me what you read and find useful. To me, many of these cost calculations revolve around CAPEX and the total purchase cost. In my work, the price to purchase a VM is about 25% of the TCO, so I am more intrigued with (and doing my own) research on VM total costs. The other 75% is very important to most IT budget owners over the long haul.

    Recalibrating Indirect Costs

    by David Merrill on Jun 18, 2012


    As our industry starts to move to consumption-based pricing for storage, there will be an increased need to understand and track current direct and indirect costs, especially at the point that we compare current cost structures to consumption or cloud offerings.


    This past week I spent a half-day with a large client in SE Asia, and our primary topic of conversation was chargeback and cost recovery calculations. This is a topic that I do not seem to address very often, since most IT organizations do not account for, or recover, consumption costs of storage. With this customer, we got into a deep discussion on the topic and a review of their direct and indirect costs. The purpose of the meeting was to strategize on reducing their direct costs and what impact would be possible. During our talk, I learned that indirect costs were a larger portion of the total cost, even though they had not been focusing on those areas.

    One of the solutions that came up was MSU or capacity-on-demand from 3rd party providers, and I warned that they would get ‘sticker shock’ if they did not consider their direct and indirect costs when comparing and contrasting 3rd party consumption options. The reason is that when you purchase from a provider, they will not discriminate between direct and indirect, but will pass along all their cost of goods, including their margin, to the consumer in the monthly rate.

    When organizations focus on direct, and are not sensitive to indirect costs, then the move to a cloud or consumption model may not look economically attractive. A more complete total cost picture has to be presented to contrast to other options, especially when such a move is done with the goal of reducing costs.
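    To make the comparison concrete, here is a minimal sketch that folds the indirect line items into the in-house unit rate before comparing it with a provider’s monthly rate. Every figure in it is a hypothetical placeholder.

```python
# Sketch of the "complete picture" comparison described above: fold indirect costs
# into the in-house unit rate before comparing with a consumption/cloud offering.
# Every figure below is a hypothetical placeholder.

direct_costs = {            # annual $, typically in the storage team's budget
    "depreciation_or_lease": 1_200_000,
    "hw_sw_maintenance":       350_000,
    "onboarding_services":     100_000,
}
indirect_costs = {          # annual $, often carried in other budgets
    "power_cooling_space":     300_000,
    "network":                 150_000,
    "monitoring_provisioning": 200_000,
    "outage_and_risk":         250_000,
}

usable_tb = 1500.0
provider_rate_per_tb_month = 140.0   # hypothetical 3rd-party consumption price

direct_rate   = sum(direct_costs.values()) / usable_tb / 12
complete_rate = (sum(direct_costs.values()) + sum(indirect_costs.values())) / usable_tb / 12

print(f"In-house, direct only:      ${direct_rate:7.2f} per TB/month")
print(f"In-house, direct+indirect:  ${complete_rate:7.2f} per TB/month")
print(f"Provider consumption rate:  ${provider_rate_per_tb_month:7.2f} per TB/month")
```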

    For many people, the definitions of direct and indirect can be summarized as:

    Direct Costs

    • Capital costs, depreciation or lease
    • HW maintenance
    • SW maintenance
    • On-boarding, transformation services
    • Storage management labor (can be either type)

    Indirect Costs

    • Power, cooling and often DC floor space
    • Storage management labor (can be either type)
    • Monitoring, tuning
    • Provisioning time and effort
    • SAN, LAN, WAN
    • Backups (again, can be either type)
    • Outage time, cost and risk
    • Performance cost and risk
    • Migration and remastering time and effort
    • Compliance risk


    The reasons why people segment costs in these buckets can vary from customer to customer, but it usually comes down to local and individual budgets assigned to the IT infrastructure group for storage. For example, the network and environmental costs are handled by another organization, so to the IT storage team these become indirect costs.

    There is nothing wrong with this segmentation process, but when looking for macro or micro optimization, many more cost areas need to be considered than what is simply under the budget control of the storage team. And when consumption or cloud offerings are considered, an even broader view of direct and indirect costs needs to be included in any comparative work or business case development. See this older blog link on the topic of not shifting or ignoring certain costs when looking at cloud storage offerings: Don’t Just Transfer the Costs.

    TCO Baselines

    by David Merrill on Jun 8, 2012


    Here in the US, we are almost ready to celebrate the NBA (basketball) championship. Even though my 2011 World-Champion Mavericks are not in contention again, this annual playoff series is still exciting to watch. Between several customer and sales team meetings this week, and watching some of the games in the evening, I have heard one common phrase – the baseline. In basketball, it is the end-line, and stepping on the line causes a turnover. There is not much correlation between basketball baselines and TCO baselines, so I won’t take any time trying to draw one.



    TCO baselines are one of the first things that we do as we engage with customers who want to reduce their costs (storage, VM, servers, etc.). I always tell people that we cannot improve what we cannot (or do not) measure. Our economic methods include some 34 different costs, so once these costs have been isolated, we can measure and report on the baseline costs. With storage we typically look at TCO/TB/year. With converged solutions we tend to track TCO per VM (at 100GB), or some such metric. We have also done $/IO and the unit cost of a transaction or map-reduce job. I have some old blogs on this topic that go into much more detail.

    TCO Reduction: A Customer Perspective

    So Let’s Do A TCO

    Identify, Measure, Reduce

    Measure Twice, Cut Once
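    Back to the baseline itself: for readers who want to see the arithmetic, here is a minimal sketch of a TCO/TB/year calculation. Only a handful of the 34 cost categories are shown, and every dollar figure is a hypothetical placeholder.

```python
# Minimal sketch of a storage TCO baseline (TCO/TB/year). Only a handful of the
# 34 cost categories are shown, and all dollar figures are hypothetical.

annual_costs = {
    "depreciation":        1_100_000,
    "hw_maintenance":        280_000,
    "sw_maintenance":        190_000,
    "management_labor":      420_000,
    "power_cooling_space":   260_000,
    "migration":             150_000,
    "backup_dr":             330_000,
}

usable_tb = 1800.0

tco_per_tb_year = sum(annual_costs.values()) / usable_tb
print(f"Baseline: ${tco_per_tb_year:,.0f} per TB per year")

# Repeat the same measurement annually; the trend, not the single number, shows
# whether the cost-reduction roadmap is working.
```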

    Baselines are just the start of the process though. Once the baseline is done, we can start to review the results to determine what investments and actions can be taken to actually reduce the costs. Annual cost measurements can be repeated to measure progress and adjust the strategies. But the primary purpose of the baseline and subsequent measurements is to provide feedback that improvements are being made. The baseline is not the end-goal, but the measurement system.

    One important step after the baseline is to map or lay out a plan to provide the necessary improvements. HDS and the storage economics methods have developed a tool that can help you correlate your cost interests to investments and strategies that are proven to reduce your costs. If you have not seen this tool, you need to take a look at it; it only takes a few minutes to click off the cost areas that are important to your organization, and then to see what steps can be taken to reduce these costs. The tool can be accessed here.

    Just as in basketball, the baseline is where a new offensive drive starts. But it is just the starting point. A specific plan and roadmap are necessary to provide real, predictable savings over a longer period of time. Use a TCO baseline to get the momentum to start, but periodically measure yourself again, adjust the roadmap, and make tactical and strategic investments that are proven to reduce IT costs.

    Observations from a Decade’s Worth of Econ Assessments

    by David Merrill on May 30, 2012


    We recently brought on a summer intern to compile, clean, index and organize assessment reports/models from economic work done at HDS since 2002. I hope to have an internal library of around 200 case studies, with the focus on more recent (last 2-3 years) assessments. As I have compiled and reviewed my own work during these past 10 years, I have noticed that a large majority of these cases relate to business cases around virtualization with enterprise arrays (starting with the HDS 9900, USP, USPV, and now VSP), but there were many that dealt with modular systems, file/content, NAS, backup, disaster recovery, Hadoop (big data) and outsourcing/MSS/capacity-on-demand.


    Although the cases that we are compiling are the more recent projects, I got sucked into reviewing some cases that were 10+ years old. Some of the cost parameters and TCO results from 2001/2002 are quite a bit different from what we see with storage economics today. Perhaps I can summarize some of these findings in bullet form:

    • CAPEX or acquisition cost, as a % of TCO has dropped from around 30-40% to less than 15% today
    • Labor was a very large part of all storage TCO 8-10 years ago. Some clients’ data suggested that labor was 25% (and more) of the total costs. We see much smaller cost ratios and higher TB-per-person rates today
    • Risk costs related to outage (planned and unplanned) have really dropped. Years ago we always sold solutions on five-nines availability (99.999%). Storage array outages and failures were much more common then compared to what we see today
    • Migration costs were not well understood or measured back then. Today, these costs are front-and-center and usually the cornerstone to justify a new architecture
    • Tiering came on its own, but usually resulted in tiered islands. Tiers were presented as a cost reduction tactic, but many implementations resulted in stove-pipe solutions that did not decrease costs
    • Managers 10 years ago used the same line they often do today, “I can go down to XYZ store and buy a 36GB disk that costs a whole lot less than your big storage array proposal…”
    • Backups were problematic years ago, and I tend to see less and less emphasis in more recent engagements. Perhaps the problems were fixed, or we fundamentally shifted from traditional tape backup to local snaps, replications and data movers
    • For most parts of the world, power/cooling and floorspace have become very high priorities in the datacenters. Emphasis has been directed to reducing power costs and avoiding datacenter storage sprawl
    • Many assessments from the early 2000s involved compliance issues, and the data management emphasis around compliance risk. Back then there was news and real effort to improve processes in the wake of accounting scandals at Enron, WorldCom, etc.
    • Naturally there was a spike in DR protection in the post 9/11 era. Several medium to large environments analyzed their risk exposure due to catastrophic events, and made the investments to reduce these risks
    • Data explosion has been incredible during this time. Back then I ran workshops for 35-50TB environments, and that was common. Now we see 15-30PB analysis without being too alarmed at the size of these environments.
    • In that expansion explosion, I noticed a lot of utilization decline, due in part to holding capacity in reserve, but also to the widening gap between allocated and actually written physical capacity

    In a follow-up blog, I will review the evolution of recommendations that I have seen during the past decade. Not surprisingly, these cost reduction recommendations have been as evolutionary as the environments that we assessed.

    I cannot claim the analysis and compilation rigor that one could get from Gartner or Compass, but our sample size is reasonably large (>1,000 case studies), from around the globe, in most every market segment that has medium to large scale IT. So not quite scientific and complete enough for a thesis, but the results are interesting nonetheless. I hope you find some value in these as well.

    JBOD in Clustered Computing

    by David Merrill on May 29, 2012


    I enjoyed reading Hu’s blog on JBOD and the tight integration needed for today’s applications. In the spirit of adding on to this discussion, I tend to see patterns around JBOD usage that deserve some additional (econ) attention:

    1. When a new technology (that involves high rates of storage) evolves, I am surprised to find that JBOD is often the go-to storage platform due to the relatively low cost of the media. For example, many hypervisor architectures from 4-6 years ago started with JBOD for the initial systems. While these work fine in a very small, low growth and low capacity environment, this architecture choice becomes unsustainable when moved into production.
    2. Some of us were surprised a few years ago to see Microsoft recommend JBOD storage for Exchange 2010. The following article from Dave Vellante put a sharper focus on some of these options –
    3. We often hear comparisons of JBOD pricing to enterprise storage. “I can go down to xxx store and buy ten 2TB drives for a lot less money than this 20TB enterprise solution” is something that you may hear from time to time. Check out this old blog entry from a customer in Australia that went and bought off-the-shelf JBOD and found that the total cost was much higher than an intelligent array solution –
    4. Early rounds of Hadoop or Azure clustered-compute systems have also had a genesis in the JBOD world. Again, for starter systems with low node-count, this is usually just fine. It is when these systems grow to a larger scale with more nodes and more TB that the JBOD architecture decision starts to negatively impact floor space, power consumption and total cost. I wrote several blogs about some of these big-data environments a few months ago; they are listed again here.

    To concur with Hu, stacks of compute and storage need to rely on intelligence and integration that goes beyond commodity disk technology.  JBOD seems to be a popular starting point, but over time (and growth) will often become economically unsustainable.

    Correlate Your Costs to Solutions

    by David Merrill on May 25, 2012


    Many of our customers want to reduce storage costs, and they know that just focusing on price is only part of the effort. Once people understand that there are many costs associated with storage TCO, reducing these costs first requires a basic understanding of which costs are important to them. I have done 4-5 assessment engagements this past month, and see that people quickly adopt this multi-cost approach; I am seeing clients choose almost 20 cost categories as they move forward.


    Defining and calculating total storage costs are the first few steps needed before you start to plan to reduce the costs. But many get stuck after they have a good baseline or understanding of their costs. Just knowing their cost totals does not necessarily help them to reduce the costs. Measurements are essential, but not enough.

    Our work at HDS in storage economics for the past 10+ years has taught us a lot about what investments and strategies can reduce these costs. About a year ago, we put this knowledge of solutions correlated to cost categories into a tool. And this tool is available for you to use.


    The premise of the tool is rather simple: choose the kinds of money (on the left-hand side) that are important for your organization to measure and reduce. Based on years of experience and over a thousand customer case studies, the tool can direct you to the kinds of investments, changes, technology and process improvements that are likely to reduce your costs. The tool will not tell you how much money you can save, but it can lead you through a list of options that might be considered for your cost reduction journey. If you have already accomplished some of these recommendations, then you can look to see “what’s next” that might yield cost reduction results.

    On the top right of the model, there is a sensitivity bar that can help “pull back the curtain” on cost reduction ideas. If you are fairly mature and proficient, the bar can be set rather high. If not, you can lower the bar to see what options might have a high or moderate impact on your infrastructure costs.

    We hope to build this same kind of model to correlate costs with other IT infrastructure areas besides just storage, since VM, VDI, cloud, converged and big data initiatives are also very popular and need to be placed under an economics microscope. Stay tuned for more of these tools and guides from HDS in the months ahead.

    Storage Virtualization to Reduce Cost of Copies

    by David Merrill on May 16, 2012


    I am in the middle of a multi-part blog series on some of the secondary cost benefits of storage virtualization. I posted an entry last week on virtualization’s impact on maintenance, and now will talk about the cost of copies.


    Copies are an interesting topic. Copied data can be different from data that can be de-duplicated, or data that is thinned. I have read analyst research that suggests a structured DB can often have 7-12 copies of that database within the enterprise. Copies can be strategic for testing, development, protection, data mining, etc. The costs of copies tend to range from 2-5% of (total, blended) storage TCO, and upwards of 20% of the depreciation expense portion of TCO. Most of the time, as consultants, we are not in a position to argue or challenge the number of copies that a customer may create, but rather tend to help customers reduce the cost of these copies. At the core of the cost issue is that copies tend to be made and kept within the same tier as the master file or database.

    Virtualization, either at the LUN or array level, can have an impact on the cost of these copies. The key is not necessarily to reduce the number of copies, but the cost of copies. Copies can be relegated to lower tiers of storage – storage that has a lower cost of ownership and price. Lower cost tiers that may be appropriate for copied data may not need full or frequent backup protection, probably will not need disaster protection, and carry lower management overhead and the lower cost of an older tier of storage. This is where virtualization can help; by virtualizing older tiers that still have some useful life in them, these arrays can be a good cost-target for copies.

    I had a customer a few years ago that was running on HDS 3rd generation virtualization arrays (we are now on our 5th). As part of sweating the assets, when an array came off of lease, they would purchase a T&M maintenance contract and demote the array to tier 3 or tier 4 (depending on the drive type, cache, ports, etc.). They would try to keep these older arrays 18-24 months beyond the depreciation or warranty period, and saw this as a perfect location for most (but not all) of the copy capacity needed for development and production. They employed this enterprise-class technology with heterogeneous arrays from IBM, EMC and Sun. The virtualization architecture allowed these subordinate arrays to inherit some of the qualities and capabilities of the host virtualization platform, but the approach was a minimalist style to hold down costs. These arrays turned into tier 3 and 4 landing zones, and took considerable capacity off of the higher tiered arrays. Dynamic tiering or auto-tiering can also be bundled with virtualization for a compounded benefit, but on its own the virtualized assets provided a lower cost tier, and therefore a lower cost of copies.

    Storage Economics for Dummies

    by David Merrill on May 14, 2012


    Good news for all who have been afraid or reluctant to jump into financial or economic discussions around storage and competitive storage architectures. Thirteen years’ worth of experience and insight around costs and cost reduction options is now available in a simpler-to-use format. My colleague Justin Augat has abridged hundreds of documents, blogs, white papers and case studies into a book that you may find interesting. Follow the link below to read (online) Storage Economics for Dummies.



    Storage Virtualization Can Reduce the Cost of Maintenance

    by David Merrill on May 3, 2012


    This is the first in a series of blogs that discusses the impact of enterprise, controller-based virtualization on secondary costs. By secondary costs, I mean costs that are not usually or obviously measured with virtualization initiatives. In this entry, I will outline the impact of storage virtualization on array hardware maintenance.


    Hardware maintenance is the recurring monthly cost that emerges after the warranty period is over. HW maintenance ensures that vendors will continue to support and repair equipment according to services and response levels. Hardware maintenance is not absolutely required, but without it an IT shop will run the risk of outage or scheduled downtime to resolve (inevitable) machine faults and failure (see 2nd law of thermodynamics – the laws of entropy).

    Storage virtualization can have an impact on the total cost of storage HW maintenance in 4 different veins:

    1. Virtual assets can now be demoted (or promoted) based on age, performance levels and related maintenance costs associated with the array. Other arrays that are still required for capacity, but have substantial maintenance costs, can be demoted to a lower tier of service. In that demotion of tiers, there should also be a demotion of service levels; therefore the maintenance rate or service level can be significantly lowered.

    I know of customers that keep assets a little longer than usual (since they are fully depreciated) but assign them to lower tiers of service. They will significantly lower the maintenance levels from the vendor or in some cases drop maintenance entirely and go to a T&M arrangement for break/fix. I have also seen customers purchase some spare trays and perform their own replacement in the case of failure.

    Remember that with virtualization, the subordinate arrays inherit the attributes and capabilities of the parent (controller array), so the software and management functions performed with a virtual pool of capacity are borne by the virtualization controller.

    2. Virtualization usually produces large rates of consolidation and reclamation of capacity (especially when virtualization is bundled with over-provisioning and ZPR). In these cases large rates of reclamation are often seen, so the overall capacity can be rearranged to retire the older arrays that have high maintenance costs. The total virtual capacity is the same, but the frame count is reduced. We always get excited about the environmental impact of frame reductions, but there is an important maintenance cost that can go down with these actions as well.

    3. When heterogeneous, controller-based storage virtualization is employed, the procurement department often has the leverage to pit vendor against vendor to secure the most favorable maintenance costs. Many of the older arrays (that require month to month maintenance) can be commoditized within the virtual environment. As a commodity, procurement now can leverage the vendors to negotiate the most favorable rates. The winner stays on board in the virtual pool; the loser gets shown the door.

    4. Controller-based virtualization can have a significant impact on the time and cost of array-based migration. This migration is expensive ($7-10,000/TB). A good portion of the migration cost is related to overlapping HW (and SW) maintenance costs. By shortening the migration timeframe, HW maintenance costs will drop. See this paper for more details on the costs of migration, including maintenance.

    Maintenance can be a significant part of the storage TCO, especially if the assets are older or are intended to be used for a longer period of time. Maintenance, as a percent of TCO, can often be 20-25% of storage TCO. Virtualization can be a powerful ally to reduce and tame these costs, while helping to sweat the assets and improve ROA over time.

    Storage Virtualization Can Reduce THAT Cost Too???

    by David Merrill on Apr 30, 2012


    Storage virtualization has been proven to reduce many different types of storage related costs. Most notably the costs of migration, asset purchase, cost of waste, floor space, electricity and many others are reduced with controller-based, heterogeneous storage virtualization. When building a business case to move to a virtual, flexible, content-and-information-cloud architecture, these cost areas are most noticeable and demonstrable.


    For more specifics on some of these cost areas, here are a few of my earlier blogs for reference:




    But perhaps you did not know about other cost categories where storage virtualization can have a positive impact. My next few posts will present (one at a time) the direct impact of storage virtualization on each of these costs.

    • Part 1 – HW maintenance
    • Part 2 – Cost of copies
    • Part 3 – Long distance circuits
    • Part 4 – Provisioning time
    • Part 5 – Cost of procurement

    If these 5 blogs are popular enough (I will never know unless you grade these or provide me some comments), then I will commit to another 5 less-thought-about costs that can be reduced with storage virtualization.

    Identify, Measure, Reduce

    by David Merrill on Apr 27, 2012


    My one key message (for the past 12 years) is that cost reduction is not accidental; it takes a determined and structured approach. I have simplified the process into 3 steps: identify, measure and reduce. Some previous blog entries on these topics are here:


    You cannot improve (or reduce) what you cannot measure. So the first step is to identify your costs, and this is much more than the price of the IT assets, or storage. HDS has documented 34 different kinds of costs, so this will help you to characterize your costs. These 34 costs are outlined in the following artifacts for your reference


    Once you have selected your costs, you can start to compile all the sub-costs associated with your selections. Power cost is a function of the kilowatts drawn (and the BTUs of cooling required) multiplied by the local kWh rate. Maintenance costs and depreciation costs can be obtained from your finance department. Circuit costs and migration costs can all be estimated. If you have been following my blog for the past 6 years (or you can scan the archives now) you will know that we have many data points and white papers on these cost elements. With all the costs summarized, you can divide the costs by the TB (at a macro level or by tier), by site or by application type. Tracking and measuring unit costs of storage is a simple way to start a dashboard for storage economics. The dummies book (also linked above) has more content that may be of help in the measurement process.


    So once you know unit costs, you need to start to determine what options you have to achieve these cost reductions. HDS has created a mapping tool that will assist you in correlating costs with specific actions and investments that are shown to reduce costs. The mapping tool is located here for your access and pleasure…

    I always suggest a closed-loop approach to storage economics that involves periodic measurements to ensure that your investments and actions really do reduce costs. If you do not measure and report on progress, you may not achieve the results needed. I like this quote from Thomas Monson who said:

    “When performance is measured, performance improves. When performance is measured and reported back, the rate of improvement accelerates.”

    There you have it, storage econ in a nutshell. This is a simple process that can be adopted in any organization of any size or complexity. Today’s business climate requires cost improvements and cost reductions, and this simple methodology can go a long way to helping you and your organization achieve new levels of optimization.

    Reclaim vs. Buy Part 4 (of 4): Looking at the Total Cost

    by David Merrill on Apr 25, 2012


    The past 3 blogs have covered the economics behind reclamation (of disk) compared to buying new disk. My material has focused on reclamation due to storage virtualization, over-provisioning (thin provisioning) and zero page reclaim. The same methods can be used for other capacity efficiency techniques, to be sure. The earlier entries covered the setup, the calculations and cost awareness, which leaves us with the final segment: looking beyond price. Reclaiming disk essentially extends the useful life of the asset. And while on the surface this sounds great, there are cost implications that you need to understand and measure/compare to make an informed decision.


    One of the key elements of storage economics is that price is just a fraction of the total cost, and HDS has documented 34 types of costs that constitute TCO. Let’s take a look at how your choices of reclaim vs. buy impact some of these more common costs. This analysis will allow you to not only consider the price differences of reclaim and buying new capacity, but also the total cost of each of these options. I have summarized some of the cost considerations for these 2 options in the table below.

    Necessary HW
      • These costs should have already been considered as part of step 2, when the HW upgrade costs are compared to the HW purchase costs

    Necessary SW
      • Already covered in blog entry #2, where the cost of software needed for the reclamation process has to be considered in the original price comparison

    Frame-based SW for other functions
      • If reclaiming: With frame-based licenses, any additional capacity that is reclaimed would be covered under the present contract. No increase in this category
      • If buying: Since you will likely add a new frame for capacity, this cost will go up for software that is applied to the frame capacity (irrespective of TB). This could be for sync or async replication, backup, management console, etc.

    Capacity-based SW license
      • If reclaiming: When reclaiming or increasing allocated TB with these new techniques, some software license costs may rise. You will have to check with your SW vendors
      • If buying: Net new capacity will certainly raise the cost of these types of SW licenses applied to the new TB

    Hardware maintenance
      • If reclaiming: Depending on the age of the array where you are reclaiming TB, you may be (or will shortly be) in a period where you are paying for HW maintenance outside the warranty period. This tends to be about 15% of the original purchase price (paid annually). This may be the single biggest deterrent to reclamation, since you are now committing this capacity for an extended period of time. You may need to re-negotiate the support levels you have in place, especially if the reclaimed TB can be used for a lower tier (and lower SLA)
      • If buying: Usually will not go up, since 3 years (or more) of HW warranty is included with new disk purchases

    Software maintenance
      • If reclaiming: Similar to the HW maintenance costs above. There is a high likelihood that your existing software maintenance commitments will be extended as you use these assets for a longer period. Overall, SW maintenance costs will rise
      • If buying: SW maintenance usually has 1 year of warranty/maintenance built into the price, so these costs will not emerge until year 2

    Power/cooling and floor space
      • If reclaiming: No change here, since the disk that is reclaimed is already installed, electrified, cooled and consuming floor space. These categories are strong, compelling reasons to reclaim disk and avoid these costs. Even though the environmental costs (per TB) for old/existing arrays are not as attractive as brand-new systems, the marginal cost usually will not be as much as adding net-new TB
      • If buying: These costs will always increase. New disk requires new environmental resources and floor space

    SAN ports
      • If reclaiming: Usually, no new SAN ports are needed
      • If buying: SAN ports will be needed for the array, edge, core and ISL ports. This will be a definite increase in costs. You can use USD $2K per port as a total cost of acquisition to cover all SAN costs

    Network
      • If reclaiming: Usually, no increase or impact
      • If buying: If the new capacity uses iSCSI or NAS, new WAN/LAN capacity will need to be added, increasing the total cost

    Management labor
      • If reclaiming: Little or no impact after the transformation to virtual, thin and tiered is complete. These transformational costs need to be calculated in the pricing comparison in step 2
      • If buying: Many organizations assign storage management by array (as opposed to pure capacity), so adding a new array adds more workload and management points for the team. This will add new direct and indirect costs

    DR capacity and capability
      • If reclaiming: Usually no impact, especially if the DR site goes through a similar reclamation process and frees up new TB
      • If buying: This cost will increase for the tiers of disk that need DR protection. New storage, network, provisioning and test procedures would need to be added

    Risk
      • If reclaiming: If your arrays have been problematic (outages, etc.), then reclaiming TB from them and extending their useful life will also extend the risk you have carried in the past. I would not recommend reclamation from problematic arrays
      • If buying: Most of the time adding new disk does not add significant risk, unless there are space, power and management issues that would come under increased pressure from adding more infrastructure

    Provisioning time
      • If reclaiming: Will be much faster and cheaper with virtual, thin tiers
      • If buying: Will have the same cost profile as before, in terms of mean time to provision, CMDB tracking, version control and acquisition

    Scheduled outage time
      • If reclaiming: There may be host outage time involved with moving servers to the thin, virtual volumes. This needs to be planned during a scheduled outage period; otherwise the cost of an unscheduled outage has to be factored into the cost of reclamation
      • If buying: Not usually a problem, since most new capacity is allocated to new hosts

    Time and cost of procurement
      • If reclaiming: Goes to near-zero with this method
      • If buying: Adding new TB can often take many weeks or a few months to complete the RFI, contract, procurement, delivery, installation and end-user provisioning. This can, in some cases, have a huge impact on project schedules that need capacity quickly (and is usually not forecast)


    So there you have it, a short primer on the cost calculations related to capacity reclamation and purchase options. If your manager or CFO wants to increase utilization, do more with less or improve ROA, there are many methods and technologies to deliver improved capacity efficiency. These changes are not usually free, so an economic analysis has to be done to show if and where your efficiencies make the best sense. Don’t spend a dollar to save a few cents.

    Know your costs.

    Measure your costs.

    Take direct action to improve your costs.

    Then measure again

    Reclaim vs. Buy (Part 3): Don’t get upside down on your costs

    by David Merrill on Apr 23, 2012


    This is my 3rd installment of a 4-part series on when/how it is cheaper to reclaim disk as opposed to buying new. The previous 2 entries covered the setup and calculations; now we will look at the conditions under which each tends to be better.


    Just because you can virtualize and over-provision, doesn’t mean it is always an economically better approach. Sometimes you feel like a nut, sometimes you don’t. Sometimes it is cheaper to buy, sometimes it’s not.

    There may be operational, non-cost and political reasons to do one thing or another, and in this case I will limit my comments to the economic differentiation of these 2 methods to present new capacity.

    When is reclamation MOST LIKELY BETTER than buying new

    • When the reclaim capacity is high enough to clear the bar relative to the reclamation overhead/tariff; this tends to be around 20-25TB, but can vary depending on many conditions
    • When the reclaimed capacity is relatively new, and has good, useful life ahead
    • If your reclamation approach can ingest heterogeneous storage
    • The tier of storage to reclaim has some portion (the more the better) of tier 1 and 2
    • Purchase prices are high, or supply is limited
    • CAPEX budgets are under scrutiny
    • You cannot afford the additional power, cooling and floor space for more arrays; since you are reclaiming capacity that is already being cooled and electrified, this is a good option
    • Your CFO is measuring ROA and pressure to do more with less is a constant IT message
    • You have confidence and some relative experience in storage virtualization, over-provisioning
    • The data that you place in reclaimed capacity can, for the most part, be thinned

    When is buying MOST LIKELY BETTER than reclaiming

    • There is a SW and (some) HW overhead for virtualization and thin provisioning. Overhead might not be the best word; perhaps tariff is better. The savings from reclamation have to overcome the tariff related to the advanced storage architecture that can produce reclaimed space and better ROA
      • Software licenses
      • Some appliance or controller upgrades
      • Services to perform the migration
      • Some architectures also require a host outage, some do not
    • If the reclamation tariff is too high (prices and recurring costs of labor, maintenance), then the benefits may not be realized
    • If the tier of storage is relatively low, then the value of the reclaimed disk (say tier 3 or 4) will not be worth the effort
    • If the reclaim potential capacity is very low, and/or if the growth rate is very low
    • If the reclaim-capacity disk array is very old, then the resulting power, cooling and potential maintenance cost per TB may not be worth reclaiming
    • If the capacity that is desirable to reclaim is not centralized, then the network and latency may prohibit the effort to repatriate
    • Your own management maturity and processes may preclude this kind of moderate operational and architectural capability. If you do not have a good CMDB, services catalog or SLA, then some of the steps necessary may be temporarily out of your reach

    I have probably missed a couple of pro and con bullets above; your comments and additions would be most welcome.

    My next entry will wrap this topic up, and provide some guidance on using secondary metrics around TCO to provide another view of comparing reclaim vs. buy.

    Reclaim vs. Buy Part 2 (of 4): Basic Calculations and Modeling Methods

    by David Merrill on Apr 20, 2012


    My previous blog entry discussed the opportunity of deciding whether it is cheaper to reclaim (poorly utilized) disk compared to purchasing net new disk. This blog entry will set up how to calculate a simple reclaim vs. buy analysis.


    Step 1 – Determine how much capacity may be a candidate for reclamation

    • In these examples, reclamation can come by virtualization and/or thin provisioning
    • Look at your non-virtualized, thick volume capacity, and look at this capacity by tiers
      • Usable, allocated and written-to TB
      • If you don’t know allocated and written-to, you can apply the general industry average to start:
        • Allocated tends to be about 85% of usable
        • Written-to tends to be about 45% of allocated
      • Your actual numbers are better if you know them, but the ratios above can get you started

    Step 2 – Compute the reclaimed capacity number from the candidate capacities

    • For each tier of non-virtual, thick volumes, determine the capacity that can be reclaimed through transformation
      • Reclamation from virtualization (A) = take 66% of the difference between usable and allocated
      • Reclamation from zero page reclaim (B) = use 33% of the difference between allocated and written
    • Adding A + B gives a rather conservative estimate of the space that can be reclaimed using virtualization and over-provisioning (a small sketch of this arithmetic follows the list below)
    • Do this step for all tiers of storage. If a certain type of data (MPEG for example) is prevalent in one of your tiers, you may reduce the capacity in that tier since it may not be able to be thinned or compressed
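
    Here is a minimal sketch of the step 1 and step 2 arithmetic, using the industry-average ratios above as defaults when actuals are unknown. The tier names and candidate capacities in the example are hypothetical.

        def reclaimable_tb(usable_tb, allocated_tb=None, written_tb=None):
            """Estimate reclaimable TB for one tier of non-virtual, thick volumes."""
            # Step 1 fallbacks: allocated ~85% of usable, written-to ~45% of allocated.
            if allocated_tb is None:
                allocated_tb = 0.85 * usable_tb
            if written_tb is None:
                written_tb = 0.45 * allocated_tb
            # Step 2: A = reclamation from virtualization, B = reclamation from zero page reclaim.
            a = 0.66 * (usable_tb - allocated_tb)
            b = 0.33 * (allocated_tb - written_tb)
            return a + b

        # Hypothetical candidate capacities (TB of non-virtualized, thick volumes) by tier
        candidates = {"tier 1": 40, "tier 2": 60, "tier 3": 70, "tier 4": 30}
        for tier, usable in candidates.items():
            print(f"{tier}: ~{reclaimable_tb(usable):.1f} TB potentially reclaimable")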

    Step 3 – Compute or convert the reclaimed capacity into a purchase price

    • For each tier, multiply the reclaimed capacity by an average selling price of disk (including SW and maintenance) per tier

    The following table shows how 200TB of candidate capacity converts into a potentially reclaimable 113TB with a purchase value of $492K. The 200TB is broken into 4 tiers; each tier is processed (step 2) into potential thin TB and then converted at the current purchase rate for that tier. In this example, $492K would be needed to purchase the 113TB that could otherwise be harvested from the current environment.


    The final step is to compute the cost to reclaim the 113TB (in the above example).

    Step 4 – Calculate the reclamation investment costs

    • Reclamation is not free; there are several investments in HW, SW, services, best practices, etc. that need to be put in place
    • The first time that thin provisioning or virtualization is used to reclaim, these up-front costs will be there. Subsequent efforts will already have the sunk costs in place so the transformation rate will not be as high
    • Software includes array-based licenses for virtualization, thin provisioning, zero page reclaim, etc.
    • Hardware may include controllers or appliances to perform the virtualization. If a system (like VSP) already exists, there may be memory or port upgrades necessary
    • Services involve migrating servers to the new virtual environment and transforming the host volumes
    • Operational best practices and formal processes may be necessary for a first-time transformation. Setting thin policies and SLA/OLA with the end user would be a necessary investment

    So after the reclamation investment is calculated (or quoted by a vendor), you can compare this to the purchase price compiled in step 3.
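
    Here is a small sketch of that step 3 and step 4 comparison. The tier split and unit prices are hypothetical, picked only so the totals land near the 113TB / ~$492K worked example above; the reclamation investment line items should come from your own quotes.

        # Step 3: convert reclaimable TB per tier into a buy-new purchase price (hypothetical figures).
        reclaimable_by_tier = {"tier 1": 10, "tier 2": 25, "tier 3": 45, "tier 4": 33}   # 113 TB total
        price_per_tb = {"tier 1": 12_000, "tier 2": 7_000, "tier 3": 3_200, "tier 4": 1_600}

        purchase_equivalent = sum(tb * price_per_tb[t] for t, tb in reclaimable_by_tier.items())

        # Step 4: one-time reclamation investment (licenses, controller/appliance upgrades,
        # migration services, process and best-practice work) -- hypothetical line items.
        reclamation_investment = 60_000 + 45_000 + 80_000 + 25_000

        print(f"Buy-new price for the same capacity: ${purchase_equivalent:,.0f}")
        print(f"Reclamation investment:              ${reclamation_investment:,.0f}")
        better = "Reclaiming" if reclamation_investment < purchase_equivalent else "Buying"
        print(f"{better} looks cheaper on a one-time price basis")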

    My next blog entry will cover conditions and situations where reclamation tends to be much better than buying, and vice-versa. The 4th and final entry in this series will show how to look beyond the one-time price comparison of buy vs. reclaim to take a multi-year TCO perspective on these options.

    Reclaim vs. Buy: Part 1 (of 4)

    by David Merrill on Apr 18, 2012


    Disk prices are rising. I hope I am not the bearer of bad news, but you have probably heard, or seen for yourself, that recent natural disasters and supply problems have caused some disk prices to rise. This is not good at a time when (for most) there is still pressure on CAPEX. Unless you can get capital budget relief, you will have to spend more to deliver the capacity levels needed for growth in the short term.


    What is interesting is that at the same time as we are experiencing price uncertainty and CAPEX pressure, we continue to see historically high under-utilization of available and allocated disk capacity. We are observing the ‘perfect storm’ conditions that make today a great time to consider a deeper understanding of capacity efficiency and comparative costs of reclamation vs. buying storage.  Another way to state this is ‘any capacity efficiency initiative should include a reclamation proposal’. Your IT budget and financial managers will appreciate some simple calculations or verifications to determine if it is cheaper for you to reclaim disk that you already have, as opposed to buying net-new.

    This 4-part blog series will walk you through the logic, approach and rationalization of reclaim vs. buy.

    Your CFO will love to know that you are considering reclamation as opposed to capital spending. Reclaiming existing capacity always sounds good, as it improves the return on assets. But there are points where reclamation yields diminishing results, and we need to bring a storage economics approach to measure those results. Even in times of limited capital spending (and/or higher disk prices), you have to look beyond the obvious price and cost factors to know what is best for your organization, IT budget and end-users at a particular point in time.

    Virtualization and over-provisioning of existing capacity are effective, and very often cost-effective, approaches that may present a lower price than buying net-new capacity. As in any economic analysis, your mileage may vary, so I will present the conditions and parameters that are essential to include in a ‘reclaim vs. buy’ analysis that you should do on your own. No matter what a vendor tells you, the effort to reclaim requires investment; it is not free. There may be HW, SW, operational changes, processes to be established, etc. The task is to determine if the investments to reclaim capacity from existing assets can be made at a lower price (and also a lower long-term cost) than buying new.

    If you want to kick-start the process, you can contact a local HDS sales team or re-seller to get access to web and iPad apps that can help demonstrate these options and variables for you given your own infrastructure.

    Now that I have set the stage, the next 3 blogs will hopefully set up a framework to help you with a reclaim vs. buy comparative analysis.

    My second blog entry will give you a simple calculation approach to determine buying vs. reclaiming costs, and where you may stand relative to the crossover point. You can get out a paper and pencil to write down the methods, but I will also provide some simple spreadsheet math to help you develop your own pricing models.

    The third entry in this series will show the sweet and sour spots, so that you can quickly determine whether you are “in the zone” or not. Factors that need to be considered will be outlined (i.e. age of current assets), as well as the tiers and data types. You will also find the minimum levels of reclaimable capacity that determine your own crossover point. There are some natural factors related to average selling prices and virtualization/reclamation investments that you also need to consider.

    Finally, the fourth and final installment will outline longer-term benefits, from a total cost perspective. One option may be better in terms of current-day price, but it may not yield the best TCO-per-TB-per-month for the longer term. The final entry will show you how to calculate the TCO variations of reclaiming vs. buying new disk. For example, through reclamation you can present more capacity to your users, without adding a single kilowatt of electricity or square meter of data center floor space. This will impact TCO measurements, as well as qualitative environmental benefits.

    I hope this 4-part series can provide some insight and help with your capacity efficiency initiatives. These 4 posts will be completed by the end of April 2012.

    Storage Efficiency with Consolidation; Consolidation with Unification

    by David Merrill on Apr 9, 2012


    Ever since we started decentralizing IT resources in the golden era of client/server (mid-90s), we have been working to rein in the sprawl with consolidation recommendations. As soon as we consolidate, there will be another reason to sprawl (perhaps next time in someone else’s cloud infrastructure). Consolidation has been an effective technique for many years to reduce costs, and it implies several activities and resulting cost reductions:

    • Fewer ‘things’ to touch and manage—so more effective labor costs
    • Reduced environmental costs when distributed assets are brought together
    • Wasted IT assets are identified and reduced, producing better ROA
    • Reduced and optimized network or connection costs
    • In some situations consolidation can also reduce the license costs and related maintenance costs. If licenses are frame-based or host-based, this can be very effective
    • Improvements in version control, configuration management, upgrades and logistics come with managing fewer things in a more centralized or federated manner
    • Consolidation can have compounding, down-stream impact to data protection and disaster protection systems, as well as infrastructures

    Virtualization has also been effective with storage and server consolidation over the years, enabling loosely distributed resources to appear and be presented as fewer logical units. Virtualization can tolerate some asset sprawl, but presents a logical or virtual view of the assets for management, control and sometimes licensing.

    Within the storage arena, most consolidations are within classes of assets: SAN consolidation, NAS consolidation, backup consolidation, etc. These within-class consolidation efforts should be done first to achieve micro optimization. If and when the opportunity emerges where macro consolidation is possible (SAN and NAS and iSCSI together), then we have a much larger denominator of capacity to optimize, and this can turn into significant savings. Consolidation and optimization across different families of IT/storage assets is now possible with recent capabilities and functionality of unified storage. The economic and operational benefits have been covered in the press and by analysts for over a year. Below are some samples that I have assembled as a quick reference (note: some of these links are marketing documents, but you can see common messages of efficiency, cost reduction and operational impact with unified storage architectures).

    Besides unified storage, there are consolidation/unification capabilities with converged (storage, server, SAN, VM) environments. There is also consolidation, unification and convergence by way of cloud platforms, both private and public.

    In short, unification does not necessarily exist with products, but as part of larger, coordinated architectures and infrastructures. Hu has some good blog entries on these topics here and here.

    Micro optimization and consolidation are best done within like environments (SAN, email); when those efficiencies are achieved, larger scopes for consolidation and unification (storage, network, hypervisors) can be scheduled to extend operational and cost efficiencies.

    Hidden Costs

    by David Merrill on Apr 6, 2012


    It is common to blame vendors and IT service providers for hidden costs. It is true that maintenance fees, transformation services, new training or adjacent system upgrades are required when new equipment is installed.




    But I also see internal hidden costs within some IT organizations. There are many variations on what might be hidden:

    1. Some costs that are hidden from upper management, such as a stealthy, non-standard system that exists in a part of the company for a specific purpose.
    2. Costs that are known, but not reported.
    3. Costs that are not within the span-and-control of a particular organization. A good example might be power consumption or the circuit cost, where those funds are accounted-for by another group.
    4. Don’t-ask-don’t-tell costs – such as contract workers or vendor staff who provide a certain function, where there is a lack of transparency about how they are paid.
    5. Future costs (see cartoon above) that may or may not exist, and where some people may not be around at that point in time to take responsibility for the expense.

    There are several more categories or examples of hidden costs. When I hear people tell me that they have to reduce costs, I always inquire as to the nature and quantity of these costs. We have to look beyond the obvious—and quite often dig deeper—to understand hidden risks and expenses so that the proper remedies can be applied to ultimately produce real cost reductions.

    Big Data Scientists – An Alternate Career Path?

    by David Merrill on Mar 29, 2012


    I am a proud father of 5 kids. My 4 oldest are college graduates with meaningful careers in recreation, magazine editing, teaching/coaching and law practice. My youngest is a freshman in high school, and I am always looking to strike-up a discussion around IT (since none of my other kids chose the field) and options for a future career. I find that high school course selection can have an impact with college and eventual career choices.


    Anyway, during a recent discussion I mentioned the idea of a data scientist, and how interesting that might be for work. I could sense his confusion on this as a career, and he asked if this work involved testing iPads on a Bunsen burner. I assured him it would not; like most 15-year-olds, he is fascinated with any and all things Apple.

    What started this discussion was a recent article in ITworld on big data and the skills shortfall that the US is facing. Since my son graduates high school in 2015, this opportunity/shortage appears to be much more relevant in terms of supply and demand. Part of the article expressed these skills, and where we may be short:

    […] finding the right talent to analyze the data will be the biggest hurdle, according to Forrester Research analyst James Kobielus.

    Organizations will “have to focus on data science,” Kobielus said. “They have to hire statistical modelers, text mining professionals, people who specialize in sentiment analysis.”

    Big data relies on solid data modeling, Kobielus said. “Statistical predictive models and test analytic models will be the core applications you will need to do big data,” he said.

    Many are predicting that big data will bring about an entirely new sort of professional, the data scientist. This would be someone with a deep understanding of mathematics and statistics who also knows how to work with big data technologies. [my emphasis]

    These people may be in short supply. By 2018, the United States alone could face a shortage of 140,000 to 190,000 people with deep analytical skills as well as 1.5 million managers and analysts with the know-how to use the analysis of big data to make effective decisions, McKinsey and Company estimated.

    We are hiring a summer intern, and he was pleasantly surprised to discover employer interest in his big data references on his resume; it is no surprise that he had several job offers for this summer.

    There is buzz in this technology and career area, not just around the scientists and the work they will do, but also around the new breed of IT professionals we will need to architect, manage, deploy and optimize big data IT environments (even if they are all in the cloud). Sounds to me like IT is still a safe haven for meaningful employment in the years to come.

    This report goes into some great detail on these areas, and the pending skill shortage. I found it to be a great read on the topic:

    The following is also a cool interactive/infographic tool on the talent gap for these people, by industry:

    And if you want to turn data science into a competitive sport, check out this website that hosts analytic competitions:

    Finally, show your 15-yr old kid this YouTube video to get them excited about math again (I’ve learned that my communication with my 15-yr old is enhanced with YouTube videos):

    I sometimes think about retired life after Hitachi, and that sweet job waiting for me as a Walmart greeter. But perhaps I should take some night classes, brush up on statistics and catch this next big wave….

    Storage Clouds: Sweet and Sour Spots

    by David Merrill on Mar 26, 2012


    I have had several blog entries on cloud services and the risk of just shifting costs to the cloud. There are some other entries on identifying your current costs of a class of storage (say tier 3) to accurately compare and contrast the exact same costs from a cloud vendor.


    I have been helping a few partners develop business cases for cloud storage offerings, primarily for tier 4 or archive space—a popular starting place to test the feasibility and economics of public cloud infrastructure solutions.

    Defining and comparing your current costs to a proposed pay-as-you-go model has to include all the same cost categories. But within the cost categories there are many different rates, which are a function of location, age, capacity and growth:

    • Location is important because each country has different labor rates, electricity rates, marginal tax rates, etc. Even in the US there is a wide range of labor rates and environmental costs between large metro areas and remote data center locations.
    • Age is important since older systems have maintenance costs, higher power costs, higher rate of failures and are on the verge of a migration or re-mastering cost. If cloud vendors compare against new customer installations, they have to compete with pretty good prices and environmentals (without a lot of the baggage that comes with older environments).
    • Size matters in terms of cost. Larger users demand better discounts and ELA terms. Large environments tend to have better management processes and procedures to reduce labor impact.
    • Growth rate can be a telling indicator of cost, since very high growth environments tend to have spiking problems with labor efficiency, availability, utilization rates, etc.

    If you are shopping for a single tier 4 cloud price (say $0.15–$0.20/GB/month, but don’t get hung up here on what that may or may not include), then you need comparatives to your own situation—you will find obvious sweet spots and the eventual sour spots. Let me show this in a graph:


    Clearly the older assets, regardless of location, will be a good target for a cloud solution. A client’s newer systems, or brand-new options in some countries (say SE Asia or India), may be at a lower price point than what a cloud provider can offer. You clearly want to purchase and invest in areas where the unit cost result is a slam dunk. As you can imagine, there are caveats everywhere. Older systems that have to be migrated or transitioned to the cloud will require an onboarding cost. This may be a one-time cost, but you may want to amortize the cost over several years to capture the true cost of this new architecture. Net new systems purchases (that go to the cloud) would not have this onboarding cost.
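
    As a simple illustration of that caveat, here is a hypothetical comparison of an internal tier 4 unit cost against a cloud quote, with the one-time onboarding cost amortized over the contract term. The internal cost, capacity, onboarding fee and term are all assumptions, not benchmarks.

        # Hypothetical internal-vs-cloud comparison, $/GB/month.
        internal_cost_per_gb_month = 0.12     # your measured internal tier 4 unit cost (assumed)
        cloud_price_per_gb_month = 0.18       # quoted cloud price, within the $0.15-$0.20 range above

        capacity_gb = 500 * 1000              # 500 TB of tier 4 data, expressed in GB
        onboarding_cost = 150_000             # one-time migration/onboarding fee (assumed)
        amortization_months = 36              # spread onboarding over the contract term

        effective_cloud = cloud_price_per_gb_month + onboarding_cost / (capacity_gb * amortization_months)
        print(f"Internal:                     ${internal_cost_per_gb_month:.3f}/GB/month")
        print(f"Cloud (onboarding amortized): ${effective_cloud:.3f}/GB/month")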

    So before your cloud shopping spree, make sure that you know your own costs. And these costs change as a function of several variables. There are more variables not even listed here, so the burden is on the consumer to prepare for smart comparison shopping. Don’t be beguiled by low cloud prices until you understand your own internal/unit costs.

    Big Data Storage Economics – Case Study #2

    by David Merrill on Mar 20, 2012


    I have been developing a small mini-series on the economics of big data, with a focus on the storage approach used in Hadoop and Azure architectures. The intro blog, case study #1, and a review of bare-metal analysis have been posted to this blog over the past few weeks. I will wrap up this series with another case study, this time with a Hadoop environment.


    Like so many organizations that embark on a new IT infrastructure/architecture, the start-up investments are such that planners and procurement are price sensitive. Getting a Hadoop environment genned-up can be very easy, and relatively low in cost. This is true for our case study #2 client, who is a large online retailer.

    The analytics in this node-compute environment drew from the online systems, and data was extracted for purchase trend analysis, patterns in global buying, etc. The starter systems quickly grew into several Hadoop pods of 600 compute nodes each. Compared to traditional storage/compute environments, most of this data was seen as disposable; there were no DR or data protection capabilities provided, since the data would be reconstructed from scratch if corrupted. The analytics were not mission critical, so performance (<8 IOPS), backup and network costs were minimized at the beginning as they grew the node and pod architecture.

    The customer used (in 2010) 2TB drives and was always awaiting the next lower-cost, denser drive type. There were 12 drives per Hadoop node, and the access pattern was read-heavy, with comparatively little write activity. Hadoop compresses the data as it is ingested, and part of the CPU overhead is to uncompress the data when it is read; compression ratios ranged from 2:1 to 8:1. In our final cost comparisons we documented that a TB of Hadoop disk was 1/4 the total unit cost of the traditional SAN environment (with large arrays, FC SAN, DR, backup, tight management, etc.).
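
    A quick back-of-the-envelope on the pod capacity, using the drive and node counts above; the compression range is the 2:1 to 8:1 cited in the post, and the math ignores file-system replication and utilization overheads.

        # Raw capacity per node and per pod, from the figures in the case study.
        drives_per_node = 12
        drive_tb = 2                    # 2TB drives (2010)
        nodes_per_pod = 600

        raw_tb_per_node = drives_per_node * drive_tb        # 24 TB raw per node
        raw_tb_per_pod = raw_tb_per_node * nodes_per_pod    # 14,400 TB (~14.4 PB) raw per pod
        print(f"Raw capacity: {raw_tb_per_node} TB/node, {raw_tb_per_pod:,} TB/pod")

        # Ingest compression of 2:1 to 8:1 stretches the effective source data a pod can hold.
        for ratio in (2, 8):
            print(f"{ratio}:1 compression -> up to ~{raw_tb_per_pod * ratio:,} TB of source data per pod")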

    From an HDS perspective, we were eager to show the cost superiority of larger-scale, traditional SAN storage architectures. But with the workload, performance, data protection and transient nature of the data and applications, it was hard to show a strong economic value proposition. In fact, we went and built a series of total cost graphs for the Hadoop storage vs. traditional storage—which is the chart shown below.


    If you look closely, the crossover point for hard-cost TCO was around 100 nodes (given twelve 2TB drives per node) and around 1,200 nodes for total cost of data ownership. This customer, at 600-node pods, was right in the middle of that range. Their total cost for a Hadoop processing element made the local disk/DAS architecture effective and convenient. At the time we could show a lower drive count, a better power and cooling rate, and reduced floor space, but the costs for many of those elements were not in their line-item budgets. That is why the hard cost had such a low crossover point.
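
    For a sense of scale, those crossover node counts translate into raw capacity as follows (again using twelve 2TB drives per node).

        # Converting the crossover node counts into raw TB, at 12 x 2TB drives per node.
        tb_per_node = 12 * 2    # 24 TB raw per node

        for label, nodes in (("hard-cost TCO crossover", 100),
                             ("total cost of data ownership crossover", 1200),
                             ("this customer's pod size", 600)):
            print(f"{label}: {nodes} nodes ~ {nodes * tb_per_node:,} TB raw")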

    Moral of the story: cost elements and operational practices have to be understood along with the economics of a particular cloud or big data architecture. There are crossover points, and when we can effectively model and predict these points, architecture and design decisions can be influenced. For an 8-IOPS data-warehousing Hadoop architecture like this one, DAS was a better choice than SAN for a 600-node Hadoop pod.

    I hope you have learned a bit from these 2 old case studies for big data storage economics. I will be addressing this much more in-depth in the coming months, with a planned white paper for economic considerations to understand as new cloud and big data architectures are designed and built.

    In the meantime, I would love to hear your thoughts.

    Videos from on the Road…

    by David Merrill on Mar 14, 2012


    I’m out of the office this week—and I plan on continuing my big data case study series as soon as I return—but quickly wanted to reiterate something Claus posted last week.


    Recently, we were able to sit down and cut through the misconceptions to discuss what capacity efficiency really means to us at HDS. And look, someone recorded it!


    As Claus mentions, when it comes to capacity efficiency, we are really focusing on data center efficiencies, improving performance and consolidation—which gives us a pretty unique positioning in the marketplace.

    So if you have a few minutes, check it out, and let us know what you think.

    And for the record, I don’t think there’s anything wrong with the haircuts in the video…

    Don’t miss the opportunity to win a free capacity efficiency assessment from an HDS economic expert (like me):

    For other posts on maximizing storage and capacity efficiencies, check these out:

    Big Data, Bare Metal

    by David Merrill on Mar 9, 2012


    I have put together a couple of blog entries reviewing some cost analysis that I did 2-3 years ago around Hadoop and Azure storage/server architectures, specifically how we worked with customers to reduce the costs of these environments (in part) with enterprise-class storage. It goes without saying—but I will anyway—the focus of these economic models and case studies was on the deployment and costs of the storage infrastructure. Some of these new cloud/big data environments do not use RAID overhead or distribute data across hundreds of nodes and disk clusters to perform the work. As I did this work, we took a myopic view of just the storage hardware aspect of these environments. I guess you would expect that from an HDS employee.


    Earlier this week I had an interesting call with Ramon Chen of Rainstor, and we compared notes on how they reduce database costs, and therefore storage costs, with their product offering. After our conversation, it was clear to me that big data cost reductions can happen on at least 2 levels:

    1. Bare metal infrastructure optimization
    2. Software and database optimization

    Take a look at Ramon’s article, which is a very compelling story on how their compression technology had a massive impact on the total Hadoop storage and server infrastructure cost. His blog can be read here:

    We hope to collaborate on a joint effort in the near future to show the compounded impact of SAN/storage optimization and database/compression optimization with large and very large Hadoop environments.

    I am not apologizing for my views and recommendations to reduce the infrastructure costs of large analytic or cluster architectures. There are certainly many ways to reduce the cost of the hardware on the floor. But a wider view that looks beyond the bare-metal costs can be just as valuable.

    I will resume my blog series next week with a few more (bare metal) big data/cloud cost reduction case studies. If you missed case study #1, you can read it here.

    Big Data Storage Economics – Case Study #1

    by David Merrill on Mar 2, 2012


    Last week I posted an introductory blog about big data and some work I had done a few years back in this space (before it was called big data). I have a number of these large TCO assessments in my library, but will just share 2 or 3 that have the easiest story to tell and make the point around the price and cost of big data storage infrastructures.


    This first case study (circa 2009) is a large retailer with most of its revenue coming from web transactions. They used the Azure cloud platform, and had (at the time) 1,500 hosts and about 4PB of JBOD and rack-mount storage. They were convinced that the seemingly low-priced disk architecture connected to the server/node architecture was meeting their price and cost objectives. In fact, the sprawl of the cloud and big data (analytics) systems was on pace to overrun their data center, and they were on track to triple the data center size to meet the growing storage/rack-space sprawl.

    As you may know, normal capacity efficiencies (written-TB per disk) and overheads (most run bare-metal, no RAID) did not apply, so a new set of metrics had to be developed to not only show the unit cost of a usable TB, but also the unit cost of a written-to TB. Before we could make quantitative recommendations about reducing the cost of this big data cloud, we had to pause and measure unit costs of their environment. A blog post from 2009 outlines these simple concepts.

    Our challenge was to show (let alone prove) that enterprise disk on a SAN was more cost effective than the JBOD rack approach. With shared volumes and virtual LUNs, we were able to technically show a solution that would work, but the price was much higher than the current disk solutions. We ended up with 9 cost categories for the TCO, and as you can guess, the TCO per usable TB was certainly in favor of the JBOD/rack disk (labeled here as direct attached or DAS).


    As mentioned, the problem was disk sprawl, and our solutions (FC SAN or iSCSI) had a significant impact on the drive count. Note that the SAN was configured with 400GB drives, and the rackable DAS was 1TB drives.


    In developing an economic story, some new metrics had to be applied. Looking at a price per TB usable was incomplete, since the disk sprawl was hurting the environmental and management cost. We adopted a unit cost per written-TB, and the resulting metrics turned upside-down.
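
    To illustrate why the metric flips, here is a hypothetical side-by-side of cost per usable TB versus cost per written-to TB. The TCO figures and utilization rates are made up for illustration; they are not the case-study numbers.

        # Hypothetical illustration: the same environments ranked by usable-TB vs. written-TB unit cost.
        scenarios = {
            #                  (annual TCO $, usable TB, written-to ratio)
            "DAS / JBOD rack": (2_000_000, 4000, 0.10),   # cheap per usable TB, little of it written to
            "Enterprise SAN":  (3_000_000, 2000, 0.60),   # pricier per usable TB, far better written-to rate
        }

        for name, (tco, usable_tb, written_ratio) in scenarios.items():
            written_tb = usable_tb * written_ratio
            print(f"{name:16s} ${tco / usable_tb:,.0f}/usable TB   ${tco / written_tb:,.0f}/written TB")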


    When measuring written-TB, we were able to get closer to metrics around total transaction cost, or the analytics query cost, within this IT environment. Backup, network and migration on and off this big data environment also had quantifiable costs to the customer. At the end, we took a non-economic view of the problem, and captured a metric around carbon emissions for these 3 options. You can see these results here too.


    Upon further analysis, we found that big data processing required the local CPU to do many mundane storage tasks, and extra processors had to be employed. We forget how much work RAID, controllers and intelligent array-based software do to offload the host. These extra server costs were added to the unit cost model to serve up a TB of capacity.

    I cannot go into the final actions and results of this case study, except to say that we changed a lot of minds around measuring and identifying different metrics to use when building big data cloud infrastructures. Don’t confuse price and cost, and look at a longer time horizon when planning and building big data storage infrastructures.

    Some related readings on big data and cloud cost/economic concepts:

    For other posts on maximizing storage and capacity efficiencies, check these out:

    Big Data – Optimal Storage Infrastructure

    by David Merrill on Feb 23, 2012


    There is plenty of talk in the press today about big data, analytics and our next new wave for IT. I would like to present 2-3 blogs on a small but important subset of the big data world: storage infrastructure (and more importantly, optimal storage architectures). I will use our storage economics approach for the definition of “optimal,” though you can also address optimized storage from other dimensions (resiliency, scale, performance, etc.) as you develop big data strategies.


    I had a period of time—2-3 years ago—when I was measuring and costing large Hadoop and Azure environments. I became very excited about these new distributed architectures, but at that time was not able to dedicate the effort and resources for further research on their cost behaviors. Now that these systems have a new moniker (big data), the demand for big-data-economics conversations is here again. Good thing I have the models and methodology all sorted out.

    My observation with these large (5-8PB) Hadoop and Azure systems (S3 would not be any different, in my opinion) is that local JBOD or rack-mount DAS disks were common for the deployments. You can imagine the rack space, power, cooling and floor space needed for these large systems. The Hadoop file system was not very efficient (by design) in terms of disk written-to utilization—about 8-12%—so these large distributed file systems needed 6-10x the written capacity delivered as raw capacity (there was no RAID overhead). I would encourage big data infrastructure architects to apply best practices and measurement systems to ensure that optimal designs are brought to big data projects. Even with the large revenue impact of data scientists and analytics, they are not “above the law” when it comes to being good stewards of a limited IT budget and capital.
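
    As a quick sizing illustration of that inefficiency, the raw capacity implied by the figures above looks like this; the 500TB written-to requirement is hypothetical.

        # Raw capacity implied by low written-to utilization; the written-to target is hypothetical.
        written_tb_needed = 500

        # Using the 6-10x raw-to-written multiplier cited above:
        for multiplier in (6, 10):
            print(f"{multiplier}x multiplier -> ~{written_tb_needed * multiplier:,} TB raw")

        # Or, dividing by the observed 8-12% written-to utilization:
        for utilization in (0.12, 0.08):
            print(f"{utilization:.0%} written-to utilization -> ~{written_tb_needed / utilization:,.0f} TB raw")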

    In my next few blogs, I will present distinct total cost case studies from these big data environments. One was a large online retailer, another was video surveillance, and the third was a large gaming and social system (cloud) infrastructure. In this early big data economics work, we were able to model and demonstrate total unit costs with different (non-traditional) metrics and lay down a plan for new storage infrastructures that reduced the cost of big data environments. You might be surprised at other non-cost measurements and metrics that fundamentally changed management’s mind in regard to their design (hint: it had to do with carbon emissions).

    These specialized analytic architectures can provide a new opportunity for company revenue, but this revenue opportunity should not overshadow practical cost-v.-benefits ratios and IT optimization. Look for more on this topic in upcoming entries.

    Getting Your Budget Consumed

    by David Merrill on Feb 17, 2012


    I had a conversation with a colleague yesterday on budget consumption, and then saw this shark-eating-shark photo in National Geographic:



    When you see ‘consumption’ in nature, it puts the problems we often have in IT and business into perspective.

    Anyway, back to budget consumption: our dialog centered on how end-of-month (or year) budget cuts can put spending or investments on hold. We discussed an option that I had seen many years ago when HDS had a full line of mainframes: a service around auditing software licenses and the maintenance that tends to linger, even if the software is no longer in use.

    I believe this audit offering would be just as valuable today. With the advent of server, desktop and storage virtualization, there has been a large-scale reduction in software licenses, which leads me to believe there are orphaned maintenance contracts out there from the pre-virtualization era. If the CMDB is a little lacking, there could be large quantities of maintenance fees dutifully paid each year for software that no longer serves its primary function.

    The purpose of an audit would be to isolate and map maintenance fees to actual (active) licenses. Where an orphaned maintenance contract is found, the contract would be cancelled and the resulting operational dollars re-directed for short-term investments in growth, modernization (more virtualization) or consolidations. In other words, convincing your finance group to let you find and re-invest budgeted OPEX dollars, albeit a one-time exercise.

    I know this approach was a common process with HDS (and other mainframe vendors) years ago, when a mainframe had many software licenses. With the recent sprawl of servers, OS, desktops, storage, and data protection capabilities, this same potential could easily be put back into practice today.

    As a footnote: I had to include this old picture of an HDS mainframe (circa 1996, when I joined HDS). Perhaps the photo quality is poor, but does this system look like a vending machine to anyone else? Chips on the top rack, and beverages down below….I hope that these 2 individuals pictured are not in my management chain-of-command.

    It’s About Time (and Money)

    by David Merrill on Feb 13, 2012


    There has been great news about nondisruptive migration capabilities, and Hu has a great post that you can read here on the options now available from HDS. Hu quoted me on a rate of $15K per TB for traditional migration, and I would like to address this rate and the research we have done on costs associated with various migration options.


    First, I would refer you to our foundation paper, written 2 years ago, on the costs of array migration. Certainly many cost factors have changed in 2 years, but the core principles are still the same.

    In my original research on migration costs 6 years ago, I found the rate of migration averaging between $7-10K per TB. Our joint paper with TechValidate raised this cost to $15K, but for conservative situations, I tend to start the dialog around $7K. These costs are broken into the following categories:

    1. Change control and remediation
    2. Server and SAN outage cost
    3. Application outage
    4. Labor for data migration (internal and external staff)
    5. Tools for migration – specialized HW and SW
    6. The added environmental costs (double the floor space and cooling for migration time period)
    7. Reduction of the useful life (on-boarding and off-boarding shrinks this time)
    8. Maintenance costs

    Before this new nondisruptive migration announcement, 40-80% reductions in migration costs were typical, relying primarily on virtualization to reduce the time, effort and labor associated with migrations. With the nondisruptive migration service, we can truly impact the outage costs to the SAN, servers and applications. Based on data from the TechValidate surveys, outage costs and risk constitute 30-50% of the migration cost. We can expect another step-drop in migration costs when the impact of outages is minimized.
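
    Here is a rough sketch of those ranges, for a hypothetical 200TB migration; the per-TB rates and reduction percentages are the ones discussed above.

        # Migration cost ranges; the 200 TB capacity is hypothetical.
        capacity_tb = 200
        traditional_cost_per_tb = (7_000, 15_000)     # $/TB range cited for traditional migration

        low, high = (capacity_tb * rate for rate in traditional_cost_per_tb)
        print(f"Traditional migration of {capacity_tb} TB: ${low:,.0f} - ${high:,.0f}")

        # Virtualization-assisted migration has typically cut these costs by 40-80%.
        for reduction in (0.40, 0.80):
            print(f"With a {reduction:.0%} reduction: ${low * (1 - reduction):,.0f} - ${high * (1 - reduction):,.0f}")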

    Again, I refer you to the paper—although I have to admit, it may be time to revisit and revise in light of new capabilities and cost reduction options now available.

    Measure Twice, Cut Once

    by David Merrill on Feb 10, 2012


    We live in a world of measurements, metrics and comparative standards. IT may have to consider more measurable justifications of gains in order to compete for limited CAPEX and OPEX dollars. I like this quote from Thomas Monson, who said:

    “When performance is measured, performance improves. When performance is measured and reported back, the rate of improvement accelerates.”

    When I am at a customer site, and walk through the IT cubicle area, I always look for metrics, charts or infographics on bulletin boards. I am interested to know what that particular group measures and tracks. Most of the time I see:

    • Network bandwidth graphics
    • Mean time to resolve problem tickets
    • Desktop systems under support
    • Customer satisfaction (or an attempt to measure QoS)
    • Perhaps some utilization charts for servers, storage

    You can search Google for samples of what these presentations can look like. Sometimes the dashboards are so complex (like in first-generation hybrid cars) that you are mesmerized to the point of distraction and not watching where you are going (figuratively speaking, in IT terms). ZDNet did a great article on effective IT dashboards and recommended 8 steps for a business-oriented approach to IT dashboards:

    1. Define the key performance indicators (KPIs) that need to be measured in your dashboard
    2. Map KPIs to specific data requirements. Determine if the data exists in systems or needs to be collected
    3. If data-collection gaps exist, explore improvements to fill holes. Develop a plan and timeline to implement those systems
    4. Investigate business service management, project and portfolio management, and BI tools based on your KPI requirements. Pay attention to how tools integrate with your existing infrastructure
    5. Budget for the initial cost of the CIO dashboard, annual maintenance, and fees to implement the system. Take into account the complexity and cost of changes and updates
    6. Develop an implementation plan that provides dashboard visibility into key systems one at a time
    7. After systems are integrated, focus on correlating data across those systems to provide meaningful visual information and alerting capabilities should a metric violate a threshold
    8. When new components are considered, evaluate how they’ll be integrated into the dashboard

    What I don’t see are economic metrics in our halls and cubicles. In order to bridge the gulf of technical/economic misunderstanding, a storage team can be proactive and publish periodic metrics on how well the systems perform from a cost point of view. Some metrics can be economic in nature, others efficiency-oriented, or some other measurement that has local importance.


    The approach is fairly straight-forward, and you may even score some points with the finance, accounting or procurement departments. In the end, the goal is to accelerate the rate of improvement, and put a few “atta boys” in your inbox.

    For other posts on maximizing storage and capacity efficiencies, check these out:

    Action Plans in a Crisis

    by David Merrill on Feb 7, 2012


    This is part three of a three-part blog series on strategies and tactics during an economic crisis. You can read part one here and two here.

    We have established the terms and parameters for an (IT) economic crisis, the general behaviors and reactions to a crisis, and how to grow to meet IT demand when cash-hoarding is prevalent. Now let’s talk about technology and organizational options to consider before or during a local spending reduction. I will do this in an outline format to summarize and not turn this into a 10-page thesis paper.

    Technology Options

    • Reclaim current capacity (allocated or usable).
      • Virtualization can help consolidate and reclaim usable capacity
      • Zero Page Reclaim, thin provisioning will reclaim allocated and written-to capacity
    • Tier your storage.
      • Vacate higher tier capacities by moving stale data to lower cost tiers or archives
      • If you have to purchase, you can typically buy lower cost tiers
    • Migration costs can be significantly impacted with storage virtualization.
    • Consider RAID level implementations; many new RAID 5 solutions can perform well enough to replace RAID1/0. Also, look to move some from RAID 5 to 6.
    • If power and cooling costs are high in your location, doing a tech refresh of older assets may go a long way to recover OPEX costs and use these savings for new investment.
    • Again, if you have a lot of older, sprawling disk, consolidating it can shrink floor space and SW contract costs, and the wasted capacity recovered can be used to justify or offset new systems.
    • Challenge your current data protection mechanisms and calculate how many copies exist to protect your data. You may be surprised how much capacity is consumed with copies and data protection.

    Business Options

    • Perform a simple storage software audit, and determine if there are licenses no longer in use that you are still paying maintenance for.
    • Implement some simple metering systems to track allocation and utilization by project, department or user.
    • Move from simple metering to a show-back system, to demonstrate the consumption patterns and relative costs of each system’s storage allocation (a simple show-back sketch follows this list).
    • If show-back goes well, then move to a full charge-back system. If you want to change bad consumption behaviors, you need measurements and penalty systems in place.
    • Implement a common storage services catalog system to standardize storage deployments.
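
    Here is the kind of bare-bones show-back report that gets this started; the departments, allocations and unit rate are hypothetical, and the rate should be your own measured (or modeled) fully loaded cost.

        # A bare-bones show-back report: allocated TB per department times a unit rate (all hypothetical).
        unit_cost_per_tb_month = 250.0

        allocated_tb = {"Finance": 40, "Engineering": 120, "Marketing": 25}

        for dept, tb in allocated_tb.items():
            print(f"{dept:12s} {tb:4d} TB  ->  ${tb * unit_cost_per_tb_month:>10,.2f}/month")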

    There are other options—both technical and operational—that can be put into place to reduce costs. Your level of budget crisis will determine how fast you have to react, and reduce costs.

    Some of the above actions will have an immediate impact (weeks and months), while others may take up to a year for some organizations. Your mileage will vary.

    I would like to hear your stories and experiences with business and technical changes implemented in times of financial pressures, and how those actions made a positive impact for your situation.

    For other posts on maximizing storage and capacity efficiencies, check these out:

    Economic Crisis Part 2: Hoarding Cash

    by David Merrill on Feb 3, 2012


    This is part two of a three-part blog series on strategies and tactics during an economic crisis. You can read part one here.


    We have already covered what makes a crisis (since your mileage and conditions will vary), so this entry will focus on CAPEX pressures, and how traditional capital will (probably) be replaced with contract/consumption pricing over the next half-decade. The raw cost of procurement alone will not drive our industry to this new acquisition model, but when combined with current and future economic conditions, it will set the stage for new consumption models (i.e. cloud computing).

    During an economic crisis, there are some behaviors that can be observed. How many of these apply to your IT department?

    • Most of the world is in the middle of a recession, and recoveries tend to be a long, slow process. CEOs and CFOs respond to an economic crisis by building up their cash reserves. See this article on what Apple (for example) could do with their cash reserves.
    • When hoarding cash, companies are reluctant to commit money to long-term IT investments, and are more open to operating leases and consumption models to finance new growth and expansion.
    • Company leadership and the CFO will slow down or severely restrict capital purchases.
    • Operational costs will be reviewed with great scrutiny, and labor costs can be one of the first areas targeted.
    • When purchasing is allowed, procurement groups tend to go for the lowest price to maximize the limited funding, and often this can be without understanding the downstream impacts to cost, quality, reliability etc.

    Re-stating this strategy in poetic or chiasmus prose, the approach looks something like this….


    My next and final entry in this series will discuss technology and attitudinal changes that can be adopted to reclaim what you have, and improve IT efficiencies.

    For other posts on maximizing storage and capacity efficiencies, check these out:

    The Sky is Falling (Part 1)

    by David Merrill on Jan 31, 2012


    I hope you survived the solar flare-up the other day. I did not get my usual number of cell phone calls on Tuesday, so I am blaming the sun. I am convinced that if Chicken Little is ever proven right, it will be because of space junk falling into the earth’s atmosphere (after being melted by solar flares). I did notice some space debris over the weekend, but that could have been spent bottle-rockets from the Chinese New Year (btw happy year of the dragon!).


    There is a lot of strange news, frightening news, and pretty funny (given economist humor) stuff out there. Let’s take a look at some of the global economic trends that are happening now:

    • Currency de-valuation in India, Vietnam (and forecast for some in Europe), and the impact to purchasing power
    • Government debt downgrade
    • Austerity programs and all the trickle-down results to government and private sector spending
    • Rising HDD prices – due in part to natural disasters and other calamities
    • Bankruptcy from notable companies like Kodak and American Airlines

    One could certainly make the case that the sky is falling. Capital funding is being preserved, even by companies that are flush with cash. What impact on capital spending do you see locally in 2012? Gartner has revised its IT outlook down to 4% year-on-year growth in IT purchases. Capital may be tighter, but OPEX is still (and probably always will be) under pressure. Our methods around storage, cloud, VM and VDI economics are very popular with IT planners and executives to identify, measure and reduce the cost of IT infrastructure.

    The strange part of any economic crisis is that data and workload growth does not take a holiday. Upward trends continue, putting more pressure on CAPEX and OPEX. Storage, servers (VM) and mobile computing (VDI) seem to continue a steep growth curve. New CAPEX, OPEX and procurement ideas need to be factored in to accommodate growth while in the middle of an economic crisis.

    I am developing several internal papers on how to position growth options while in the middle of current (or future unknown) economic uncertainties. I will be sharing some of this content in my next 2 blog entries:

    • Part 2 will talk about CAPEX pressures, and how CAPEX will (probably) be replaced with contract/consumption pricing over the next half-decade.
    • Part 3 will outline storage and server architecture strategies/technologies that can help stretch your limited capital budget, outline work-arounds when your local currency does not buy what it used to, and how to really measure, track and improve real costs. This is often referred to as capacity efficiencies, but goes way beyond reclaiming a few GB of storage.

    I do not want to be accused of being an ambulance chaser, or even a Chicken Little, but we have to have a sense of our surroundings (government, social, economic, political) as a backdrop to IT spending and IT architecture decisions. The next few years may cause some hardship if traditional methods and architectures continue without challenge. IT planners need to step back and have a strong strategy to meet IT business demands, while working within (possibly) new financial constraints.

    Oh, and I joined Twitter. Follow @StoragEcon for all of your storage economist needs!

    For other posts on maximizing storage and capacity efficiencies, check these out:

    Storage Efficiencies: You Say Tomato, I say Potato

    by David Merrill on Jan 27, 2012


    Hu and others have recently been blogging about storage efficiencies, in terms of capacity, management, allocation, data protection and energy efficiency. I will add another dimension to this discussion, and that is the methods in which efficiencies can be measured.


    Quite often we talk about an efficiency theme, and jump right into the solution or options to improve that efficiency. In my work, I tend to recommend that organizations:

    • Identify the measurement system clearly
    • Measure the area that is to be optimized
    • Improve with specific actions and plans

    The recent blogs talk a lot about the steps to identify and improve. For example: you can impact storage utilization efficiency with virtualization, thin provisioning, reclamation activities and de-duplication. One can improve energy efficiency with virtual machines, thin volumes, SSD, storage tiering and an aggressive archive program. As I mentioned, the identify and improve aspects go hand in hand, but we often forget to take the intermediate step to measure (before, during and after) our efficiency program.

    A wise colleague shared this idiom with me 15 years ago: “you cannot improve what you cannot measure.” This expression has been core to our storage economics framework from the beginning.

    Measuring efficiencies can sometimes be difficult and subjective. That is why financial or economic measurements are most effective: turn the efficiency or inefficiency into money and measure the costs (or unit costs). The simple table below outlines a series of efficiency options, potential measurements, and the options that can improve the results. If measurements are taken before and after the effort, we can be sure that we have actually achieved a measurable improvement.


    This is a simple outline, but you can see how the intermediate step of measurement is key to ensuring that the right actions are taken to produce the efficiencies required. Many differences may exist as to the best approach to achieve improvement, but an organization should have a singular focus on the measurement systems, aligning them with local goals and objectives.

    For more posts on maximizing storage and capacity efficiencies, check these out:

    The Economics of Storage Virtualization Webinar

    by David Merrill on Jan 25, 2012



    Virtualization in the data center is a stable and proven approach to make IT more efficient from desktops to servers and from networks to storage. Whether storage virtualization is host-based, controller-based or through an appliance, it is a core ingredient in economical IT architectures. As in most new technology investments, you need a clear understanding of the benefits versus the costs. When you can project a positive ROI and fast payback, your projects gain more traction with IT management. Storage virtualization is a critical element for any organization that wants to significantly reduce unit costs in 2012.


    Does storage virtualization make sense for you and your storage environment now? Are there economic benefits for you with this technology? What benefits and cost metrics do you need to build your own business case for Storage Economics? Are you aware of the compound advantages that storage and server virtualization make on your data center operations and costs?

    Join me, Hitachi Data Systems’ chief economist and global business consultant, to understand:

    • Types of costs that virtualization can directly address (and reduce)
    • Qualitative benefits of virtualization and how to convert them to cost savings
    • Quantitative methods to measure and predict cost savings of virtualization in data migration, space reclamation, storage management tools, storage management effort and consolidations
    • Qualitative impact of combining server, desktop and storage virtualization
    • Cost savings by large and small IT organizations around the world through virtualization of their storage infrastructures

    Speakers: David Merrill, Chief Economist

    Feb 1, 9am, 12pm ET

    To register for this webinar, click here, or visit:

    For other posts on maximizing storage and capacity efficiencies, check these out:

    1/25 Webinar: Manage Rising Disk Prices with Storage Virtualization

    by David Merrill on Jan 24, 2012


    Hu and I have blogged about rising disk prices, and how virtualization can be a key instrument to hold down costs:

    To learn more, please see below for details about a webinar on this topic tomorrow, Wednesday, January 25, 2012.

    Webinar: Manage Rising Disk Prices with Storage Virtualization

    Wednesday, January 25, 2012, 9:00 am Pacific Standard Time (San Francisco)

    Technical Session: Learn how storage virtualization can reclaim existing storage on the floor. Extend thin provisioning to existing storage to increase disk utilization and defer capital purchases. Take advantage of zero page reclaim and write same to reclaim storage. Use Hitachi Dynamic Provisioning’s over-provisioning capability for automatic, optimal storage utilization. We’ll also cover dramatic enhancements to the Hitachi Switch IT On III program that make this extremely attractive and affordable. Register for this WebTech and learn from HDS experts how to use these technologies to increase your customer satisfaction and sales despite impending increases in disk prices.

    This webinar will also cover:

    • Thin provisioning options for storage reclamation and expectations that depend on customer environment
    • A “how to” on reclaiming storage from existing systems using Storage Virtualization
    • Details of Hitachi Switch IT On III enhanced program

    Target Audience: Hitachi Data Systems Customers, TrueNorth Partners, and Employees

    Register by either clicking or pasting the link into your Web browser:

    For other posts on maximizing storage and capacity efficiencies, check these out:

    How Much Does it Cost to Spend Money?

    by David Merrill on Jan 19, 2012


    One of the expenses included in storage and IT economics is the cost of procurement. I discovered and documented this cost element about a year ago while working in Australia.


    Anyway, I believe that traditional capitalization for IT will diminish over time, and we will move to contract- or consumption-based contracts to secure IT resources in the future. I can imagine an advanced version of an app store and Facebook where I can ‘like’ a storage consumption model, then download it to my infrastructure to use with my internal apps and systems.

    In the meantime, we are left with more traditional procurement models that require a set of methods and rigor to get assets approved, justified, certified and priced correctly. The entire procurement process can take months and significant effort, so IT planners have become experts in forward planning to meet capacity demands. There are many options to reduce the time and cost of procurement, but I wanted to start with some simple metrics that may help you get a jump start on those efforts.

    The first step is to dollarize the cost of procurement. Easier said than done.

    Last March, Forrester released a paper covering eProcurement software and the impact it can have on the entire process. I found this to be an insightful read on the software-automation side of the equation. Then a colleague sent me some information on the efficiency of procurement personnel: the average procurement employee manages about US $25M in spending per year, and that employee’s burdened cost, on average in the US, is about $112K.

    Simple math then helps us create a simple metric: to spend a million dollars on IT assets costs (in labor) about $4,500. There are other aspects of procurement that will not be in this number (such as RFI and ITT generation and review, treasury funds, certifications), but assigning a rough cost of $5K to acquire $1M is an interesting start. This procurement cost can be seen as a tariff of roughly 0.5% (about 0.0045 as a fraction) of all IT spending. Perhaps this is the kind of metric that can be used to start measuring the cost of spending money. A quick sketch of this arithmetic follows.
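    For readers who want to plug in their own numbers, here is a minimal sketch of that back-of-the-envelope arithmetic. The $112K burdened cost and $25M of managed spend are the averages quoted above; the function names and the $1M example are purely illustrative.

    ```python
    # Back-of-the-envelope procurement "tariff": labor cost per dollar of IT spend.
    # The two constants are the US averages quoted in the post; swap in your own figures.

    BURDENED_COST_PER_BUYER = 112_000      # fully burdened cost of one procurement employee (USD/year)
    SPEND_MANAGED_PER_BUYER = 25_000_000   # IT spend handled by that employee (USD/year)


    def procurement_tariff() -> float:
        """Labor cost as a fraction of every dollar spent."""
        return BURDENED_COST_PER_BUYER / SPEND_MANAGED_PER_BUYER


    def labor_cost_to_spend(amount: float) -> float:
        """Approximate labor cost of pushing `amount` dollars through procurement."""
        return amount * procurement_tariff()


    if __name__ == "__main__":
        print(f"Tariff: {procurement_tariff():.2%}")                          # ~0.45%
        print(f"Labor cost to spend $1M: ${labor_cost_to_spend(1e6):,.0f}")   # ~$4,480
    ```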


    Look Beyond the Price of HDD

    by David Merrill on Jan 12, 2012


    You may have heard news about the devastating floods in Thailand, and now the ripple effect of increased HDD prices due to manufacturing and supply shortages. HDS, EMC, NetApp, and IBM have all announced temporary price increases of 5-15% for the next few months. It is at these times that we need to step back, differentiate price from cost, and look at storage architectures (not arrays or HDD) that produce the lowest total cost of ownership.


    For years, HDS economic consultants have modeled total costs for thousands of customers, across vertical markets in all parts of the world. We know that the prices of hardware and software are just two of the 34 elements that can constitute the total cost. And although most organizations are sensitive to the purchase price, we have seen empirical data suggesting that price (lease or depreciation expense) is just 17-20% of the multi-year total cost of storage ownership. Many other costs (such as labor, power/cooling or migration) can often eclipse the purchase price when looking at four-year costs.


    So a 5-15% increase in the price of arrays/HDD translates to a potential 1-3% net increase in total cost (a quick sketch of the arithmetic follows the two bullets below):

    • At the low end, a 5% rise in HDD price impacting a TCO model where price is 17% of the TCO produces <1% increase in Total Cost (.05 x 17%)
    • At the higher end, a 15% rise in price for an environment where HW is 20% of the TCO creates a 3% rise in TCO (.15 x 20%)
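    As a rough sketch, the same sensitivity arithmetic can be parameterized so you can substitute your own CAPEX share and price increase. The 17-20% share of TCO and the 5-15% price rise are the figures from this post; everything else is illustrative.

    ```python
    # Rough TCO sensitivity: how a hardware price increase flows through to total cost.
    # The CAPEX share of TCO (17-20%) and the price-rise range (5-15%) come from the post.

    def tco_increase(price_rise: float, capex_share_of_tco: float) -> float:
        """Fractional increase in TCO when only the purchase price rises."""
        return price_rise * capex_share_of_tco


    for price_rise, capex_share in [(0.05, 0.17), (0.15, 0.20)]:
        print(f"{price_rise:.0%} price rise at a {capex_share:.0%} CAPEX share "
              f"-> ~{tco_increase(price_rise, capex_share):.2%} TCO increase")
    # Prints 0.85% and 3.00%, i.e. the <1% to 3% range described above.
    ```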

    This 1-3% increase in cost is not nearly as devastating as the headline bad news of a catastrophe-driven price increase. With the right storage architectures, though, total costs can be driven down by reducing the storage footprint, total power consumption, migration time and effort (or eliminating migration altogether), and wasted capacity. Economically superior storage architectures tend to have some common ingredients that produce these kinds of results (a rough, illustrative cost model follows the list below):

    • Storage Virtualization can significantly reduce migration time, reclaim white space, provide better/unified management and reduce license and maintenance costs for the subordinate (heterogeneous) storage arrays
    • Thin provisioning can reduce wasted, allocated capacity and provide improved performance with wide striping
    • Data de-duplication can reduce total disk space requirements
    • Dynamic tiering and auto-tiering can put data in the right cost tier, based on rules set up around access, retention or QoS
    • Intelligent archive solutions can cut down backup costs, and move stale data to a lower cost tier for retention or immutability requirements
    • Service catalogs, charge-back/show-back reports and basic consumption metrics can control user behavior and appetites
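    To make the “architecture over price” argument concrete, here is a purely illustrative four-year TCO sketch. The only figure taken from this post is that hardware price is roughly 17-20% of TCO; every other category weight and reduction factor below is a placeholder you would replace with your own measured data.

    ```python
    # Purely illustrative four-year TCO sketch: hardware price is only one slice of total cost,
    # so architectural savings in the other categories can outweigh a hardware price rise.
    # All category weights and reduction factors are placeholders, not measured values.

    baseline_tco = {            # share of a 4-year TCO (hardware ~20% per the post; the rest assumed)
        "hardware_price":  0.20,
        "labor":           0.25,
        "power_cooling":   0.15,
        "migration":       0.20,
        "wasted_capacity": 0.20,
    }

    architecture_savings = {    # assumed reductions from virtualization, thin, dedup, tiering, archive
        "labor":           0.20,
        "power_cooling":   0.25,
        "migration":       0.50,
        "wasted_capacity": 0.40,
    }

    hdd_price_rise = 0.15       # worst-case price increase cited in the post

    new_tco = 0.0
    for category, share in baseline_tco.items():
        if category == "hardware_price":
            share *= (1 + hdd_price_rise)                       # hardware price goes up...
        share *= (1 - architecture_savings.get(category, 0.0))  # ...other costs come down
        new_tco += share

    print(f"Relative 4-year TCO after price rise and architecture changes: {new_tco:.0%}")
    # With these placeholder numbers, TCO still lands around 76% of the baseline.
    ```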

    So the price of HDD and arrays may be out of your control due to natural or unnatural events, but you can take better control of your total costs by considering and implementing the right storage architecture for your cost sensitivities. 2012-13 may produce other problems that impact price and some of the cost factors, and not all of these events can be predicted. Therefore, implementing economically superior (and price-fluctuation-tolerant) architectures should be high on your 2012 to-do list.

    Also, take a look at Hu Yoshida’s recent post on storage efficiencies.


    When Your Storage Array is a Cathedral

    by David Merrill on Jan 6, 2012


    Happy New Year to all.

    I came across this news article from Sweden, where web and data sharing has been recognized as an official state religion.


    So, some quick thoughts come to mind:

    1. Can your storage investment now be considered a charitable tax-exempt contribution?
    2. Do you need to purchase some priestly robes for your storage administrator?
    3. Will we have to limit our time and attention to data management to just an hour on Sundays?
    4. How will governments’ IT departments now effectively separate church and state?
    5. Will this new religion require some type of sacrifice? (that is a loaded one…)
    6. Will the data center now require gold leafing and vaulted ceilings?
    7. Will a religious connotation change (raise) your perception of database or storage salespeople?
    8. Can you now use the freedom of religion argument to increase your mailbox quota?
    9. Does that 20-year-old 9-track tape now become a sacred religious relic?
    10. Will you have to canonize your storage technical architecture?

    Feel free to add to this list.

    I do not condone this definition of religion, but it may put a new light on the value and importance of data and information.