The Storage Economist: 2013 Archive

    IT Economic Predictions 2014

    by David Merrill on Dec 20, 2013


    Hu Yoshida has recently completed a blog series on 2014 trends and predictions, and I’d like to add a few of my own from an economic or IT finance perspective. As I look back on the 15 years of developing and observing economic trends in IT, I believe there is a new shift ahead of us in how we justify, grow, finance and pay for IT services. I don’t think that the traditions and financial practices of the past will be as relevant in the future.


    I have been reading a series of trend and policy documents over the last few months, including “The new digital ecosystem reality: Nine trends rewriting the rules of business” and the theories from Clayton Christensen on Disruptive Innovations. Looking through my lens of IT and Storage Economics, I am working through some theories that are a bit more specific to my particular scope. Since this blog post comes late in the year, and blog reading is not at the top of anyone’s list right now, I will outline 4 IT economic trends for the next few years, and then spend more time after the first of the year developing these concepts.

    1. Depreciation of IT assets will diminish in favor of new ownership and consumption methods
    2. We will not be able to afford to protect, certify, encrypt all the data (that we want to) with our current practices
    3. Moore’s Law will not be able to keep up with the rate of growth while holding CAPEX rates flat
    4. Consumer trends will impact the data center

    I hope this holiday season is a restful and happy time for all. I will be back with more IT economic insights in 2014.

    Spy Agencies Rely on Storage Economics

    by David Merrill  on Nov 19, 2013


    A Canadian colleague sent me this article last week, and referenced the fact that low cost (I think they meant to say low price) is the reason spy agencies around the world are able to effectively keep all the data that they do.


    Now part of me believes that to protect individual security, all storage vendors should immediately raise the price of storage to counteract this movement.

    In all seriousness, we have to remember that the price of disk is only 12-15% of the total cost. Power, cooling, migration, maintenance, management (and 28 other costs) have to be considered in understanding and measuring the total cost of storage. This article confuses price and cost, but then so do most IT organizations that I talk with. This paper explains all the kinds of costs that make up TCO for storage.

    The price of disk will continue to enjoy price erosion, and the price of storage will continue to approach zero. But the costs of storage are not dropping in the same manner as price. Due to over-protection, encryption (from those spy agencies), redundancy etc., the cost of disk can stay at the same rate year after year, even with the price dropping. Here are some older entries that review these tried-and-true principles of storage economics:

    So Let’s Do A TCO

    TCO Baselines

    Think Like an Economist, Talk Like an Accountant, Act Like a Technologist

    Storage Economics Talking Points

    Next time you see the price of a 1TB or 2TB drive dropping at your local electronics store, just know that it means more storage for your spying-dollar…

    Cost of Very Long Term Data Retention–The Final Installment

    by David Merrill on Oct 30, 2013


    The past 3 blogs (part 1, part 2 and part 3) presented the basic framework to identify and measure long-term data retention options. In this final entry, I will summarize other cost areas that need to be considered when one compares and contrasts options for 100-year (or more) data retention plans. In my first blog, I stated that I would ignore the media cost and the service delivery option. I will break that rule now, as these last 3 cost areas are highly dependent on the type of delivery service used.


    1. Finding a needle in a haystack–the time it takes to find data that you need to retrieve

    This has to do with finding the data, not retrieving the data. If the long-term service does not provide an index or searchable offering, then keeping data for a very long time has marginal business value. The calculation here is straightforward:


    Searches done per year (A)

    Time each search takes (B hours)

    Cost per hour of labor—data clerk, paralegal ($C/hour)

    100-Year cost is A x B x C x 100 years
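
    To make this concrete, here is a minimal Python sketch of that formula. The search volume, search time and labor rate in the example are hypothetical placeholders, not figures from any customer engagement.

        def century_search_cost(searches_per_year, hours_per_search, labor_rate_per_hour, years=100):
            # A x B x C x number of years, per the formula above
            return searches_per_year * hours_per_search * labor_rate_per_hour * years

        # Example: 50 searches/year, 4 hours each, $45/hour for a data clerk
        print(f"${century_search_cost(50, 4, 45):,} over 100 years")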

    2. Now that you have found the data, how long does it take, and what does it cost to move the data into a usable, local data store?

    Finding data is one aspect; moving it to a local location for further action is another cost. Time is money, and if the service takes hours or days to retrieve, then the business cost (or opportunity cost) has to be calculated. If service levels or QoS rates are set, then there may be penalties associated with overly lengthy retrieval times. Customer satisfaction may also be affected if the retrieval time is too lengthy. Looking at some of the cloud service offerings available today, it is not uncommon to see SLAs for retrieval of 6, 10 or 18 hours.

    Some cloud providers also allow for a fixed percentage of data to be accessed (retrieved) on a monthly basis, and if you exceed that limit there are additional fees that come into play. Over time, the business impact of waiting and the actual retrieval costs have to be added to the TCO of the several options that can be used for long-term retention.
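
    As a rough sketch of how these retrieval costs might be dollarized, here is a small example; the SLA window, allowance and fee rates below are assumptions for illustration only, not any particular provider’s terms.

        def annual_retrieval_cost(retrievals_per_year, sla_hours, wait_cost_per_hour,
                                  gb_retrieved_per_month, store_gb, free_pct_of_store,
                                  overage_fee_per_gb):
            # Business (opportunity) cost of waiting out the SLA window on each retrieval
            wait_cost = retrievals_per_year * sla_hours * wait_cost_per_hour
            # Monthly retrieval beyond the contracted free allowance incurs overage fees
            free_gb = store_gb * free_pct_of_store
            overage_gb = max(0.0, gb_retrieved_per_month - free_gb)
            return wait_cost + overage_gb * overage_fee_per_gb * 12

        # Example: 24 retrievals/year against a 10-hour SLA, $500/hour of business impact,
        # 6 TB pulled back per month from a 100 TB store with a 5% free allowance
        print(f"${annual_retrieval_cost(24, 10, 500, 6000, 100_000, 0.05, 0.01):,.0f} per year")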

    3. Finally, we live in an imperfect world, and data can be lost

    We have not seen any media withstand decades or centuries of infallible protection. Even the Dead Sea scrolls are fragmented and partially lost. Data loss can be a function of the media, natural disasters, human error or even the loss of the company that makes the media or provides the services. We have seen some recent bankruptcies in the cloud marketplace, and this will be a business fact going into the long-term future as well. Secondary copies can be kept, but at a cost. I wrote a recent blog on annual loss expectancy; those methods should be used to add the risk of data loss into comparative costs for long-term retention.

    There you have it. A short primer on economic concepts to use as our industry has increasing needs to keep data for 50, 100 or even 500 years. The focus on the media (disk, tape, optical) is almost irrelevant, as other costs will dominate an economic view over the time period. Consider all costs associated with migration, re-mastering, on-boarding, off-boarding, discovery, retrieval and the risk of loss. The results of such a broader analysis will definitely change the strategy and plans that you make for long-term data storage.

    Understanding the cost of migration & re-mastering in very long-term data retention

    by David Merrill on Oct 22, 2013


    In my third installment on the economics behind long-term data storage (read the first two here and here), I will discuss the single largest cost component of keeping data for a very long period of time: the costs of migration and/or re-mastering. In a 100-year perspective, the costs of migration and re-mastering represent 60-90% of the total PV cost, depending on the method you choose for long-term retention.


    These migration costs can be best described in the context of how you may choose to store your data: option 1 is do-it-yourself (DIY) with tape and disk media; option 2 is through a service provider. There are migration and re-mastering costs with both of these options, although they may not be obvious to everyone involved in these architectural decisions.

    Before I get started, I can refer the reader to a paper written several years ago (the concepts of which are still very valid) that explains and quantifies the cost of migrations in a data center environment. This paper has been validated by hundreds of clients over the years, although our methods tend to produce very conservative numbers as to the total cost. The paper can be found here.

    First, let’s understand the costs in a DIY environment. When you own the infrastructure, processes and media for long-term retention, you will incur cost events every time you need to change the media or change the infrastructure.

    • If you use storage as your long-term infrastructure, you can expect to migrate off one platform and on to another about every 5-7 years. The storage arrays are usually depreciated over a 4-5 year term, but many people keep the assets for a few additional years. With RAID protection, there are smaller risks for data loss on the media. The time and effort to migrate can be long if you use old-fashioned methods, and rather fast with storage virtualization. In the paper referenced above, we tend to use US $7K/TB for these kinds of migrations. In a 100-year model, you would plan to conduct a storage migration event about 15 times.
    • If tape is the primary media, the retention time can be longer. The media can outlast the library and other infrastructure components. You do need to factor in the infrastructure replacement cost (every 5-7 years), but tape media is typically re-mastered about every 8-12 years. This time can be protracted depending on environmental and handling factors. I don’t argue that tape has a longer lifespan, but when the first tapes begin to show wear-and-tear, then the time and effort (and cost) of re-mastering have to begin. The cost of tape re-mastering is lower than for disk, and over a 100-year term the number of events is probably between 10-12.
    • New DVD/BluRay-based media types now offer 50 or 100-year lifespans, so it is possible that the media would never be re-mastered in our 100-year time frame. Even so, the infrastructure (libraries, media managers) would still be replaced every 5-7 years. Today, the total re-mastering cost would be the lowest with this approach.

    Taking a migration/re-mastering look at the tape, disk and BluRay options, you can see that drastically changing the number of migration events has a dramatic impact on the total cost of migration over a long period of time.
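
    Here is a back-of-envelope sketch of those 100-year event totals. The roughly 15 disk migrations and the $7K/TB figure come from the discussion above; the per-event costs for tape and optical, and the archive size, are assumptions for illustration only.

        def century_migration_cost(events_per_century, cost_per_tb_per_event, capacity_tb):
            return events_per_century * cost_per_tb_per_event * capacity_tb

        capacity_tb = 500  # hypothetical archive size

        options = {
            "disk (migrate every ~6-7 years)":       century_migration_cost(15, 7_000, capacity_tb),
            "tape (re-master every ~8-12 years)":    century_migration_cost(11, 3_000, capacity_tb),
            "optical (infrastructure refresh only)": century_migration_cost(15, 500, capacity_tb),
        }
        for name, cost in options.items():
            print(f"{name}: ${cost:,.0f} over 100 years")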


    Secondly, let’s look at a service provider model. Many are attracted to cloud providers offering steady-state rates of $0.01 – $0.04 per GB per month, and while this is attractive, it is not representative of all the costs. These run-rate costs are seductively low to be sure, but we have to consider some relatively high migration and access costs with this option. If in your career you have gotten into (or out of) an outsourcing arrangement, then you know about these on-boarding and off-boarding costs.

    • First there is the cost to get on the new environment (on-boarding). The service provider may or may not charge you for the on-boarding effort, but there is an internal (to you) cost for the transformation planning, internal testing and certification, and project management and staffing for such an event. I have seen several examples of 2-4 week time frames needed to get onto cloud hosting, and this time and effort generates an internal cost. Getting data externalized is not a trivial exercise, especially the first time.
    • Once you are on the service provider infrastructure, you may incur additional costs beyond the run-rate cost for housing your data.
      • The vendor may charge you when they re-master your media, or upgrade your infrastructure.
      • There will be a cost if you access (read or write) your data above and beyond a contracted rate. These penalties for over-use are not really a migration cost, but since you may want access to more of your data in a given month, the cost has to be factored in somewhere. We might as well put it in the migration bucket.
      • If you add more data to a services plan (organic growth) there will be new fees to load your data there. Some vendors charge a registration fee, independent of the amount of data being added. Others charge a per GB rate. Either way, adding new capacity to the cloud infrastructure for long-term storage is not free.
      • Again, be aware of penalties or surcharges for over-use or over-access to your data. I hear about customers who do not read the fine print and get a surprise in the months where the plan hits overage charges. This reminds me of the first time I added data to my cell phone plan. Most of us have probably over-used our minutes, text messages or MB and had a surprise on our next bill!
    • Getting off the service provider environment will be another cost-generating event that you have to plan for. Some providers may charge a high off-boarding fee (to discourage you from leaving) while others may charge the normal data extraction rates. In the latter case, if you’re allowed 5% per month for access, then you either have to plan for a 20-month off-boarding exercise, or be ready to pay some very high overage rates.
    • Finally, if we assume that you may keep a service provider for 10 years (two five-year terms), then the costs of on-boarding and off-boarding will be a cost event that happens about 10 times in a 100-year time frame, as the sketch below illustrates.
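
    As a minimal sketch of these service-provider cost events, here is one way to total them up; every rate and fee below is a placeholder assumption, not a quote from any provider.

        def century_provider_cost(capacity_gb, rate_per_gb_month, onboard_cost,
                                  offboard_cost, cycles=10, overage_per_year=0):
            run_rate = capacity_gb * rate_per_gb_month * 12 * 100   # 100 years of monthly fees
            boarding = (onboard_cost + offboard_cost) * cycles      # ~10 provider changes
            return run_rate + boarding + overage_per_year * 100

        # Example: 500 TB at $0.02/GB-month, $75K to get on and $150K to get off each cycle,
        # plus $5K/year in occasional over-access charges
        print(f"${century_provider_cost(500_000, 0.02, 75_000, 150_000, overage_per_year=5_000):,.0f}")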

    So there you have it. The 100-year perspective will have 10-20 event periods where data has to be moved, migrated or re-mastered to keep up with current media and technologies. These events, in total, represent (by far) the largest cost component of long-term data retention. Cloud services may appear to be the cheapest with low-cost run-rates, but many times they generate hidden fees and costs that make them economically less optimal. Some of the new 100-year DVD/etched disk technologies show promising options for the media to last a century or more, but these still require infrastructure upgrades to be able to read the disks in a hundred years. There are no free options when it comes to migration and re-mastering. It needs to be a cost function that is understood and calculated into a long-term plan.

    The Economics of Very Long-Term Data Retention – Part 2

    by David Merrill on Oct 9, 2013


    In part one of my blog on this topic, I constructed a scenario to understand and determine operational cost factors of preserving and retaining data for very long periods of time (100 years or more). One response was that this time horizon is still too short, and that perhaps hundreds of years should be considered. I hope that the ideas presented here can work for 50-year, 100-year and even millennial periods of time.


    In my previous post, I stated that for this analysis, I will:

    • Ignore the cost of the media (this will not be a disk vs. tape battle again)
    • And ignore the delivery service or local ownership costs. We all know that these paradigms will change over a long period of time

    Media types and ownership costs are real, and by ignoring them, I can focus on other direct and indirect costs that can, and should, be defined when making architectural and business decisions for very long-term data. Without media and the service delivery costs, what remains to consider? Let’s start by understanding the environmental differences of our current options.

    Some IT cost components have benefited from price or cost erosion over the last 50 years. The costs of electricity, cooling and floor space are not in that category, however. The real estate and electrification costs have to be factored in, even though we may not be able to predict pricing rates and efficiencies in the future. Even so, these costs will exist.

    Tapes, optical disks (DVD, BluRay, Quartz), and disk drives will continue to require a physical location in which to reside. The location may have protections against cold, heat, theft, natural disasters etc., so these housing costs will not be free. If copies are kept at a different location, these secondary costs have to be included in any total cost model. PB per square meter of space will (and should) continue to improve, and this will help offset the total space required. But given the data growth rates that we face, the density of data will not offset the rapid growth and need to store data. We will need computer rooms, vaults, granite mountains etc. A great example of this is the granite vaults outside of Salt Lake City, Utah (USA) that hold (and have held) genealogical data for years. This is extreme vaulting, and I would suppose very expensive space, but it is commensurate with the value of the records and data held there.


    The cost of holding data at rest also has to include environmental protection and electrification: cooling (or heating) and humidity control for the media and the libraries. There are hundreds of studies and reports available (from vendors) to calculate and contrast the passive and active costs for retention. You can get comparative controller, library and media-at-rest kilowatt and BTU rates from your vendor source. If the media is removed from the library, it may not require power to be preserved, but eventually when the media is retrieved there will be power required for that action. I tried to do some simple searches on the future cost of electricity in the years 2050 or 2100 but did not have any luck. If you know of a good source, leave me a comment on the blog and a reference or link if possible.

    We will discuss the labor needed to manage, retrieve and queue/mount/find the media in another part of this series. Most of the rented physical locations will have various labor costs related to monitoring and management, but we will separate labor from the environmental costs for now. As a category, the environmental and space costs tend to be 2-20% of the total cost of long-term retention; but this rate can change radically between disk, tape, optical and cloud providers, so your mileage may vary. If power costs spike in the future, this would have a corresponding impact on the ratio and overall costs.
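
    For readers who want to rough out their own numbers, here is a simple sketch of an annual environmental cost per PB at rest. The power draw, utility rate, PUE and space figures are all assumptions to be replaced with your own vendor and facility data.

        HOURS_PER_YEAR = 8_760

        def annual_environmental_cost(kw_at_rest, kwh_rate, pue, sq_meters, rate_per_sq_meter_year):
            power_cost = kw_at_rest * HOURS_PER_YEAR * kwh_rate * pue   # draw plus cooling overhead
            space_cost = sq_meters * rate_per_sq_meter_year             # floor/vault space rental
            return power_cost + space_cost

        # Example: 4 kW at rest per PB, $0.12/kWh, PUE of 1.8, 10 square meters at $600/m2/year
        print(f"${annual_environmental_cost(4, 0.12, 1.8, 10, 600):,.0f} per PB per year")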

    As different media and archival methods are considered (even paper or film/fiche), a long-term calculation or projection of environmental and space costs has to be factored into any comparative cost analysis. Part 3 of this blog will cover the multi-year costs of migration, on-boarding, off-boarding and re-mastering of data.

    The Economics of Very Long-term Data Retention

    by David Merrill on Oct 1, 2013


    The past couple of weeks I have met with clients in western Canada, New York, New Zealand and Australia. There has been a long-standing comparison of disk and tape over the years. Some of the basic arguments have not changed, but now we are seeing customers face very long-term retention requirements. Some of these customers are in the oil/gas, media and government sectors, and some in university research. I noticed that the perspective on what counts as long-term differs by customer and by vertical, but each of these discussions has been around a serious and deliberate plan to archive and access data for a minimum of 100 years.


    There has been some cross-over with the high growth rate of big data and the long-term, large volume of archive; but for this discussion I will limit my observations and economic concepts to long-term retention. 100 years may seem to be a long time, but I can look back at my own 30+ year IT career and see quite a range of technology, media, formats and protocols that have changed in that time. We can only assume that the next 30-100 years will bring at least as many new types of improvement.

    The economic analysis that I will present in my next few blogs will try to show a different approach to compare and contrast options for long-term archive. For this analysis, I will do so by:

    1. Ignoring the cost of media. We all know that a tape cartridge is cheaper than a disk drive, but all media types will continue to price-erode over time. We have to do some of this analysis by ignoring the cost of the media. This gets away from the religious differences that some have with tape and disk. By ignoring media for the time being, we can also assume that other media types (flash, optical, Blu-ray, etc.) will rise and fall over the next 100 years. The controllers, libraries, media managers, networks and other infrastructure have to be ignored as well.
    2. For this comparison, I will also ignore the delivery service model used for long-term retention. Cloud providers and service offerings will change over time. Those that dominate now may not exist in the future. Local ownership (capitalization) or utilization (from an ISP) can be ignored, since the depreciation or utility rate can be relatively uniform (between the media types) over time.

    By ignoring the media and delivery method, we can now have more clarity in other long-term costs of owning and accessing data. I have isolated 6 costs that can be reasonably modeled for comparative purposes:

    1. The cost of environment (power, cooling, floor space, vaulting)
    2. Media and data management – labor costs
    3. Re-mastering costs. This is the time and effort it takes to move from one generation of media to another, or to change media types altogether
    4. The time it takes to discover or find the data you want
    5. The time it takes to move the data from the long-term state to an active, usable state
    6. And finally the risk of losing data, either from media failure, disaster, human error, service provider bankruptcy, etc.

    In my next few blogs, we will create the economic or financial framework to build a 100-year model for each of these 6 areas. Present-day technologies and methods (no longer ignoring the media or delivery method) are very different in their short and long-term total cost benefits. In total, we will be able to see that a short-term TCO winner may not be very good over the long term. Present value (PV) calculations can simplify the 100-year cost into today’s dollars for a clear comparison. You might be surprised at how a long-term view may change some old perceptions we have about archive.
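
    To preview what that framework might look like, here is a minimal present-value sketch that discounts a 100-year stream of annual costs plus periodic migration events. The discount rate and the cost figures are illustrative assumptions only, not results from the models to come.

        def present_value(yearly_costs, discount_rate):
            # Discount a stream of year-1..year-N costs back to today's dollars
            return sum(cost / (1 + discount_rate) ** year
                       for year, cost in enumerate(yearly_costs, start=1))

        def cash_flows(annual_cost, event_cost, event_every_n_years, years=100):
            return [annual_cost + (event_cost if year % event_every_n_years == 0 else 0)
                    for year in range(1, years + 1)]

        rate = 0.05  # assumed discount rate
        diy_disk = cash_flows(annual_cost=120_000, event_cost=350_000, event_every_n_years=6)
        provider = cash_flows(annual_cost=140_000, event_cost=200_000, event_every_n_years=10)

        print(f"DIY disk, 100-year PV: ${present_value(diy_disk, rate):,.0f}")
        print(f"Provider, 100-year PV: ${present_value(provider, rate):,.0f}")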

    Another important intangible in the analysis is the vendor or partner that will be used as part of a long-term approach. As an employee of Hitachi, it is great to be part of an organization that is over 100 years old. I think this gives us some staying power, both in providing technology leadership options and in meeting long-term requirements around support and service over this same long period of time.

    With such an established parent, we, as an IT vendor, have a unique position in the market to talk about defining, supporting and planning for long-term technology trends. One hundred years of data storage requirements seems to pale compared to the 100-million-year technology that Hitachi will have available in 2015, but for now, let’s stick with lifetime-length data storage.

    What To Do With All That Reclaimed Capacity

    by David Merrill  on Sep 25, 2013


    I am supporting a large customer transformation to virtual, thin and tiered storage. The projections that we made months ago about improving utilization have come true; we are forecasting a net reclamation of about 1.5 PB of storage through these transformation investments. The older/existing arrays were simply virtualized, and then afterward the volumes were re-presented as thinned volumes. The good news is they have 1.5 PB of reclaimed space. The bad news… they have 1.5 PB of capacity that is still too new to de-commission.


    One point of view is to keep the capacity in-house, and use it for organic growth over the next (estimated 15-18) months. This will create a CAPEX holiday for them, but since the price of disk is only about 1/5 of the TCO of disk, this strategy does not immediately help reduce the unit cost of disk. As some of the systems begin to be used/allocated (they will be 3-4 years old by then), they will not have the same environmental efficiencies as arrays purchased in the future, when the capacity is actually needed. These assets will be powered, cooled, and burning maintenance or warranty time while they sit, idly waiting to be used.

    The other option is to write off the assets, sell them in the after-market space, and consume capacity and IT infrastructure that is “closer to the bone” in terms of efficiency and effectiveness. This client may also have another division that could use the assets, and in that case there would be a cost to transport and re-purpose the systems.

    We typically see clients of this size with a range of older and newer storage assets, so any reclamation activity results in decommissioning the older systems first to save on power, cooling, maintenance and migration. This situation was unique, since they had made a very large purchase (from another vendor) less than 2 years ago, and then made the transition to Hitachi virtualized storage. You cannot go on a witch hunt to find those at fault. It is water under the bridge, but it does highlight the requirement to have a multi-year strategy for storage and other high-growth infrastructure. Many of our customers tend to believe that the price erosion of disk will satisfy budget constraints, but as mentioned earlier, the price of the array is becoming a smaller fraction of the total cost of the array. Therefore, multi-year plans are needed to budget and schedule key investments to achieve continuous improvement:

    • Storage virtualization, with unified management
    • Over-provisioning
    • Virtualization-assisted migration
    • Tiering, both in the frame and external to the virtualized subordinate arrays
    • SSD inclusion to the tiering mix
    • Compression, de-dupe
    • Unified block and file storage architectures

    For the customer situation above, we are starting a TCO baseline and 12- and 24-month TCO projections to determine which path will provide the lowest cost now and in the next few years. From the TCO models we can then calculate the present value of each option. Economic methods and some simple finance models will help set the right plan to move forward.
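
    As a simplified illustration of the kind of comparison we are building, the two paths can be roughed out like this; all figures below are placeholder assumptions, not the client’s actual numbers.

        MONTHS = 24

        def keep_reclaimed(idle_power_per_month, idle_maintenance_per_month):
            # No new CAPEX, but the idle arrays keep burning power and maintenance
            return (idle_power_per_month + idle_maintenance_per_month) * MONTHS

        def sell_and_rebuy(resale_credit, new_capex, new_power_per_month, new_maintenance_per_month):
            # Resale offsets part of the later purchase; newer arrays run cheaper
            return new_capex - resale_credit + (new_power_per_month + new_maintenance_per_month) * MONTHS

        print(f"Keep the reclaimed capacity: ${keep_reclaimed(18_000, 25_000):,.0f}")
        print(f"Sell now, buy new later:     ${sell_and_rebuy(300_000, 1_200_000, 9_000, 12_000):,.0f}")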

    Intermediate Steps to Cloud, and Still Reduce Costs

    by David Merrill on Aug 30, 2013


    In my previous blog I discussed some of the investments and steps people can take to be cloud-ready, or cloud-enabled, without necessarily moving everything to an off-site or consumption based delivery model. There are key ingredients that can help to get cloud-ready. And by cloud-ready I mean the same technology and processes that cloud providers use to deliver superior price and cost models for their customers. Some of these key ingredients include:

    • Virtualized storage
    • Virtualized servers and networks
    • Unified or orchestrated software to manage virtual systems
    • Billing and metering
    • Self-service portals for provisioning, change requests
    • Very strong link from service catalogs (menu items) to the service delivery, SLA, OLA and eventual chargeback

    From the point of view of reducing unit costs, these steps can be done internally or organically, within your current organization, and within the current capitalization processes. At HDS we have demonstrated cost reduction with enterprise-class storage virtualization for many years. We have thousands of customers with many stories of improvement in utilization, tiered distribution, and faster migrations that all add up to a lower unit cost.

    Extending these options to a pay-as-you-go model can extend savings even further. And if your situation (security, compliance, latency) allows, off-site location of IT resources can move the unit cost needle down even more. This graphic shows some of the OPEX, CAPEX, and other cost saving results that were achieved from a recent analysis I did for a large European client.


    The top bar shows business as usual, measured in cost per SAPS. The bottom bar shows the move to a private cloud offering with a pay-as-you-consume OPEX model. The customer was interested to know what they could achieve on their own (the middle bar) by implementing these advanced architecture elements, but without going to a vendor-assisted consumption/private cloud model. There was certainly room in the middle for unit cost reduction, but in the end they decided to go to the private cloud due to its superior unit cost reduction.

    Private, Hybrid and Public storage cloud offerings are becoming very popular these days. We see more interest in the pay-as-you-grow OPEX model for storage. Even if you are not completely ready for this extended reach, you need to understand that you too can use the same ingredients as cloud providers to produce superior unit cost reductions, right now in your current storage environment.

    Cloud Illusions

    by David Merrill on Aug 26, 2013


    I am working in South Africa this week, speaking at some great customer events in Johannesburg and Cape Town. Beautiful cities with beautiful people.


    In some after-meeting discussions with a large government customer, the topic migrated to clouds, and the reality of offerings, capability and cost reductions. We know that clouds are here to stay, but I heard some real concern about illusions and misunderstandings of clouds, and how far down a certain path (public…) we have to go to achieve all the benefits that are espoused. This customer has also been a storage vendor representative, so the views have been from both sides of the aisle.

    After this discussion, my mind went back to an old 1960s pop tune “Both Sides Now” by Judy Collins:


    I’ve looked at clouds from both sides now

    From up and down and still somehow

    It’s cloud’s illusions I recall

    I really don’t know clouds at all

    There is no need to get too analytical about this song, or any of the confusion around cloud computing. I can (but won’t) quote several analysts on the true definition of cloud computing or cloud storage – but the point that we at HDS have been making is that there are key processes and technologies that can be used in traditional data center operations to improve effectiveness and reduce costs. Cloud service providers use (and accelerate) these capabilities to develop cost-effective offerings. These techniques can be planned for and implemented in-house, with some remarkable impact. Let me show this in another way.

    This table depicts how some of these key ingredients or processes can be used for current systems, private clouds, or as enablers for hybrid or public options.


    My next blog post will show how this move to the right (implementing these systems as a do-it-yourself-approach) can reduce costs right now. You don’t have to have the cloud or consumption portion enabled to see clear and measurable cost reduction in the short term.

    Race to the Bottom: Can Moore’s Law Keep up with Big Data?

    by David Merrill on Jul 26, 2013


    Moore’s law has been a stable predictor of density and price in the IT world for many decades. Initially used to describe transistor density as a function of time, it has been loosely applied to the price of IT, and for our purposes today the price of storage. Except for 2012 (with tsunamis and flooding) we have enjoyed storage price erosion in the range of 20-25% per year for many, many years. Storage price erosion is a function of areal density and technology improvements, not necessarily transistor density. The chart below (from IEEE Transactions on Magnetics, Vol. 48, May 2012) can give you a rough idea of the future for areal density of NAND, HDD and tape.



    There are several technologies that will contribute to this improved areal density.

    The good news is that media density will be improving by 20%+ year on year, and we could correlate that to a continued price erosion for data storage. See below for projections on price per GB for different media types.


    If you have followed my blog over the years, you know that I put more focus on total cost as opposed to price (price is only 12-15% of storage total cost), but in this analysis I want to focus on future prices. With price erosion of roughly 20% per year matched with data growth at 20% per year, one could argue that CAPEX budgets would be roughly flat year on year, in that the increased capacity cost would be covered by price erosion. This simplified view is not really accurate, but for this discussion it demonstrates the parity of price erosion and organic capital growth.

    Now enter hyper growth with big data. A customer that I have worked with recently talked about rapid capacity increases that exceed 100% per year, even 400% in a year or 18 months. With this kind of growth rate, there will be a significant increase in CAPEX needed that cannot be covered by a 20% price erosion. Dr. Howard Rubin has written on this topic, and I refer you to his concepts around the tipping point, where “the geometric growth rate of computing demand — technology intensity in the context of business and our personal lives — will drive computing costs past the point at which Moore’s Law will keep the costs manageable.”

    This tipping point could not have come at a worse time, economically speaking. Companies do not have the stomach for large IT investments, even with all the analytic benefits of harvesting revenue and business value from big data. Seeing the requirement for significant IT investments in servers, and particularly in storage for moderate-to-long-term retention of data, there will be a pull-back due to sticker shock. Hyper growth due to big data, without the accompanying price erosion, in the midst of economic uncertainty does not make for a good recipe.

    One of the solutions to this conundrum will be to get away from the CAPEX (purchase, lease, depreciate) tradition and consider consumption-based ownership models. Too many people confuse consumption options with a cloud delivery model; they are two different things. Many cloud delivery options do come with consumption or utility pricing, but my recommendation is to simply replace traditional ownership models with a consumption model. The characteristics of a consumption model are:

    1. You pay for what you use, when and if you actually use it
    2. Monthly rates can rise and fall based on utilization
    3. The storage (or other IT) assets are in your facility, behind your firewall
    4. You can still manage the assets if you want, but purchasing a remote management service gets into other parts of the total cost model
    5. The control of the assets and local management can be tight, secure and structured as if you owned the assets
    6. You no longer look at or care about end-of-asset-life events. The service provider should be responsible for upgrades, transformation and migrations
    7. The monthly bill can be passed directly to the organization consuming the resource.
    8. Even if you do not have a formal storage service catalog or chargeback system in place, this consumption approach brings that wrapped into the offering
    9. Since the utility rate is an operational expense, there can be benefits (and drawbacks) to not having these assets on the books, and to not burdening the procurement group with rapid quote-negotiate-purchase cycles in a hyper-growth period of time

    All in all, the good news of price erosion mixed with the bad news that Moore’s law cannot keep up with hyper-growth scenarios leads to consideration of options that can meet the business and technical needs of the organization. Consumption methods are one option to consider that will help provide balance and alternatives in an era where business uncertainty should not derail technological advancements.
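
    To make the parity argument from earlier in this post concrete, here is a small sketch comparing annual CAPEX under 20% growth versus 100% growth, both with 20% annual price erosion. The starting capacity and unit price are illustrative assumptions.

        def capex_by_year(start_tb, growth_rate, price_per_tb, erosion_rate, years=5):
            capex, capacity, price = [], start_tb, price_per_tb
            for _ in range(years):
                added_tb = capacity * growth_rate        # capacity purchased this year
                capex.append(added_tb * price)
                capacity += added_tb
                price *= (1 - erosion_rate)              # next year's unit price after erosion
            return capex

        print("20% growth: ", [f"${c:,.0f}" for c in capex_by_year(1_000, 0.20, 2_000, 0.20)])
        print("100% growth:", [f"${c:,.0f}" for c in capex_by_year(1_000, 1.00, 2_000, 0.20)])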

    Cloud Economics from the IaaS Perspective

    by David Merrill  on Jul 22, 2013


    I came across an interesting article on how IaaS cloud provider economics work. While it is a simple article with basic economic concepts, it sheds some light on how commoditization and differentiation in the market will have to change in the near future.


    A couple of observations from reading this article, depending on your point of view:

    From a Cloud Provider:

    • There are many assumptions about the underlying storage and service architecture that providers are using to build and deploy cloud services. The article implies that all hardware and architectures are the same
    • HDS provides some highly-differentiating and economically-superior storage solutions to many IaaS service providers (Telco, Global SI, traditional cloud providers) that present a virtualized architecture that can scale-up and out with the growth requirements
    • Some of the tactics to “race to the bottom” in price tend to leave a sour taste in the mouth of consumers, as so many features and functions are additive and get shifted to the variable rate cost area

    From a Cloud Consumer:

    • This article was written from the perspective of the IaaS company economics, and how such companies are having to respond to commoditization and competitive offerings in their business. It was not created to help the consumer side of cloud economics find, evaluate and select the right cloud options that actually can reduce your costs.
    • As I have posted earlier, some cloud offerings and decisions may actually increase costs, so you need to understand all the options and pricing rates from the IaaS provider (as outlined in this article) and add in other fixed and variable costs that will make up your new total cost of ownership
    • What I liked about the article was the insight into pricing differences of various cloud vendors. These vendors are not non-profit organizations, so they have to be creative in the engineering and marketing of their solutions. This kind of transparency in the article is refreshing, but also insightful for those that want to know how the price options really work with IaaS vendors
    • So beyond the economics of vendors as outlined in this article, you, the consumer, have to be very aware of several factors and examine all costs (some of them hidden) when comparing and contrasting cloud offerings to a DIY approach. For example:
      • Network transmission between your site and the provider
      • Cost of changes, adds, or deletes
      • Transformation cost, or re-hosting, moving to a different provider (even though it may be a future cost)
      • Cost of latency, performance
      • Risk, in terms of off-site premise, protection, country/legal/compliance areas

    It will be fun to see how different providers adjust to new, global competition in this area, but we cannot ignore the consumer impact: the total cost requirements and capabilities that need to be assembled to meet local user requirements.

    Big Data Cost Tier/Ratio

    by David Merrill  on Jul 18, 2013


    As a follow-on to my July 6th blog post, here are some cost points to consider for the tiers of storage to be used for big data.


    First, in a four-tiered storage cost model (with big data being tier 4), the total cost (not price) ratios of these tiers tend to look like 11:7:3:1. I wrote a more detailed blog on this, which defines the different types of architectures and Quality of Service (QoS) needed to attain these types of cost ratios. In my experience so far, it is practical to insert the big data cost point at the same cost tier as tier 4 archive, as outlined in this blog post. In my work with customers building extensive big data storage pools, their costs tend to be 1/10th to 1/20th the price of traditional tier 1 storage assets.

    Also interesting is that I am seeing tiers within the big data storage pools. The blended rate of the big data pool may be 1/10th or 1/15th, but there are even lower cost requirements for less active tiers of big data. Let’s walk through an example of a customer where there are traditional tiers 1-3 for the primary work (about 1 PB) and then 90 PB of analytic data.

    The cost ratios for their 4-tiered environment look like this:


    And as we break down tier 4 (the big data tier), you can see even more separation of tiers within this tier.


    In this example, the big data tier had a blended rate of $425/TB/year, but some of the sub-tiers used for analysis have SSD and other high-performance features to meet the QoS requirements. Even so, the blended rate of the big data tier had a total cost of ownership rate that was just a fraction of traditional storage (a simple blended-rate sketch follows the list below). Again, going back to the April blog, this difference in cost cannot be achieved with hardware alone (or price alone). It takes new functionality and QoS choices that can do without:

    • Data protection
    • Much of the traditional management labor
    • Disaster protection
    • Replication of copy content
    • Environmental costs
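
    As promised above, here is a minimal blended-rate sketch. The sub-tier capacities and per-TB rates are assumed for illustration; they simply reproduce a blended figure close to the $425/TB/year mentioned earlier.

        sub_tiers = {
            "SSD-accelerated analytics": (5_000, 1_200),   # (TB, $/TB/year) -- assumed
            "bulk analytic store":       (85_000, 380),    # assumed
        }
        total_tb = sum(tb for tb, _ in sub_tiers.values())
        blended = sum(tb * rate for tb, rate in sub_tiers.values()) / total_tb
        print(f"Blended tier 4 rate: ${blended:,.0f}/TB/year across {total_tb:,} TB")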

    If you have some experience with costing tiers of storage with and without big data pools, I would like to hear about your experience and the types of cost ratios per tier that you are achieving.

    Big Data Economics, Putting New Wine in Old Bottles

    by David Merrill on Jun 18, 2013


    In the past 2 months, I have been involved with several big data conferences and speaking opportunities with customers and analysts from all over the world. I see some clear and actionable economic conditions around big data projects that are consistent with actual customer work that I have been involved with over the last 2-3 years. In short, the costs required to build out new big data infrastructures are:

    1. Not always aligned to the business value of the analytic results
    2. Often started on low-cost systems that become economically unsustainable as the growth accelerates
    3. Can be too high compared to the (undefined) business value

    HDS sponsored a Big Data Research survey in the UK recently, and a couple of conclusions that came from this work were very interesting to me. Two questions in this survey had to do with big data costs and value.

    1. What are the barriers preventing you from adopting big data solutions? (In a sample of 200 respondents, 58% said cost and 41% said unclear ROI.)
    2. What are the major challenges to big data analytics adoption? (In a sample of 200 respondents, 51% said aligning IT costs to business budgets and growth.)

    This survey supports my observation that we have an alignment issue around the costs and economic value for big data infrastructure.

    There is a well known parable about the risks of putting new wine into old bottles (and also old wine in new bottles), and I think this parable also applies to big data infrastructures, especially the storage infrastructure. Big data is the new wine. It has new characteristics in that the data can come from a variety of sources, and be in several different formats. Its rate of growth in volume can be unlike anything we have traditionally built in the past. The data in many cases has a short shelf-life, and only the resulting information may be kept for a long period of time. This new wine infrastructure needs a fundamentally new cost structure and technical architecture.

    I am seeing people attempt to put a big data project inside a traditional storage and server infrastructure. This is like putting new wine in old bottles. This older infrastructure architecture is not built for the cost and operational characteristics of big data. Unit costs for big data need to be at a much lower total point (compared to an existing tier 3 or tier 4), and the inability to deliver this new low cost rate is driving some CMOs to put big data projects in cloud or subscription service offerings. This is not altogether a bad thing, but if IT wants to build and own the big data infrastructure in the short term, new tiers and costs aligned to the big data requirements will be required.

    In my next blog, I will review a total cost framework that may help with the planning and alignment of the costs for this “new wine” in “new bottles”.

    An Economic View for Big Data Planning

    by David Merrill on Jun 6, 2013


    Two weeks ago I spoke at a CIO forum in Australia, sharing the stage with IDC on the topic of trends and directions with big data. Last week, same topic but in Thailand. Next week, same topic but in London.


    I will probably take this topic of ‘big data economics’ and break it down into some bite-size (or blog-size) messages that I have seen over the years. I have observed and measured Hadoop and Azure storage infrastructure costs for about 3 years now. Back in early 2010, I don’t even know if it was called big data, since we have had this type of analytics and data warehousing function for years. What has changed, and what analysts and surveys keep showing, is a rapid acceleration of the amount of data, the variety of data and the impact of machine-to-machine generated data. So this is where we can start on some economic concepts, as well as some simple points. Here is a summary of my observations:

    1. Data for analytic purposes is growing rapidly
    2. The retention of this data varies, but it tends to have a short shelf-life, and many times can be disposed of after a period of time (the data is thrown out, not the information it generates)
    3. The variety of data makes the task of storing, cataloging and referencing the data interesting
    4. New storage architectures, file systems and infrastructures are popping up in data centers (Hadoop, Azure) and on the web (S3, many others)
    5. I have seen new big data IT initiatives start-up over the past 12-18 months, and many of these start-ups are done on a storage and server infrastructure that is not (economically) optimized for the demands of big data (volume, variety, velocity and value)
    6. When it comes to designing a cost-efficient, in-house big data infrastructure, IT planners tend to be unable to determine (or to be told) the business value and criticality of the analytic work that these large data stores will provide to the business. Without the business value understood, it is impossible to align or optimize the cost structures to be commensurate with the ‘business value’
    7. Some ambitious projects have a hard time getting the funding necessary, since the current storage and server infrastructure cannot support the rate of growth and volume at a price that is right for the job
    8. 1st generation big data infrastructures are good to start, but when the scale-out begins they fail at a technical, operational, and economic level
    9. Many people associate or lump big data projects with cloud projects. I think this is a mistake. Even though cloud offerings may support the end-game, these can be viewed and managed as separate initiatives

    My quick summary of the situation is this – we need fundamentally new IT infrastructures and architectures to (cost) effectively support these new initiatives. Your current storage, server and operational processes are probably not suited for the new unit-cost structures needed in the future.

    There are several older blogs on this topic in the HDS blog library:

    Big Data Origins

    Big Data Volume Requirements

    Big Data Velocity Requirements

    Big Data Variety

    Hitachi’s Integrated Vision Around Big Data

    JBOD in Clustered Computing

    Big Data Storage Economics Case Study 1

    Big Data Storage Economics Case Study 2

    Big Data Bare Metal

    Big Data Optimal Storage Infrastructure

    In my next blog entry, I will share some ideas around a “total cost ratio” that may be helpful as you start technical and operational plans to build and deploy big data infrastructures.

    Residual Value of Storage Systems

    by David Merrill on May 15, 2013


    Most of us have bought a car before and sometimes we have an older vehicle to trade in. I have always been disappointed in the trade-in value of my older car, but realize that cars depreciate rather quickly. If the car is running and reasonably driveable, you will get something for the trade-in. I tend to keep cars a long time (7-8 years), so my residual value (RV) or “trade-in value” tends to be low. I like to sweat my assets.


    Storage has an RV as well. If you subscribe to IDC you may want to check the annual reports on the RV for storage systems, and what you can expect to get at the end of the term. Storage is often sold on the secondary market, even on eBay, so there is always an after-market value for the arrays that you cherish right now.

    RV can and should be a factor in determining or computing your storage TCO, as a higher RV can bring in more for a trade-in or a sale in the secondary market. Since RV is a future trade-in amount, you may need to convert the RV to present value (PV) and then apply this value (it is not a cost, but a future credit that offsets cost) to the current-year TCO. See here for a good overview or tutorial on defining and calculating present value.

    So how do you start? First, you may need to subscribe to IDC reports that list the RV of various storage systems. Your leasing partner may be able to provide you some of these details as well. When you have that RV %, you can then calculate the present value of the RV, and use it in TCO comparisons or calculations. Let’s take an example of this process.

    1. You take inventory of what you have, then get the IDC report, and determine that your storage array will have a RV of 5% after 4 years
    2. You know that you paid 400K for the array, so the 5% RV would be 20K in a few years (when it is fully depreciated)
    3. Say for example you have 3 more years of use from the asset, so you can use the PV formula to determine that the present value of the (future) RV is worth about 15K right now

    Here is an example of how RV might impact your TCO today. In this example, we will compute the TCO of 100 TB that was purchased last year and has 3 years left of useful life.

    Current year depreciation expense:     $100,000
    Power, cooling, floor space:           $40,000
    Labor:                                 $150,000
    Maintenance:                           $50,000
    Migration:                             $50,000
    Present value of RV:                   -$15,000
    Current year total costs:              $375,000
    TCO per TB for current year:           $3,750/TB/yr
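
    Here is the same worked example in a few lines of Python. The 10% discount rate is an assumption chosen to match the "about 15K" present value quoted above for the 20K residual.

        def present_value(future_value, rate, years):
            return future_value / (1 + rate) ** years

        rv_pv = present_value(20_000, 0.10, 3)   # roughly 15,000 at an assumed 10% rate

        costs = {
            "depreciation": 100_000,
            "power, cooling, floor space": 40_000,
            "labor": 150_000,
            "maintenance": 50_000,
            "migration": 50_000,
        }
        total = sum(costs.values()) - 15_000     # subtract the (rounded) PV of the residual value
        print(f"PV of residual value:  ${rv_pv:,.0f}")
        print(f"Current year TCO:      ${total:,} (${total // 100:,}/TB/yr for 100 TB)")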

    IDC RV links that may be of interest:

    IBM Enterprise and Midrange Storage Residual Value Forecast, Feb 2013

    HDS Enterprise and Midrange Storage Residual Value Analysis, March 2013

    EMC Enterprise Storage Residual Value Forecast, November 2012

    In summary, don’t overlook RV as a function of the costs and benefits that your current storage has. There is future value to the equipment that you have on the floor right now. Some have higher RV than others, and that can be factored into comparisons and plans to deploy economically superior storage solutions.

    Threat Probabilities

    by David Merrill on May 8, 2013


    A few months ago I wrote a 4-part blog series on the cost of data protection. One of those entries discussed an approach to calculate the annual loss expectancy (ALE) related to risk, and the cost of risks. I had a program several years ago that helped calculate the probability of 19 of the most common threats, but I failed to include the probability rates when listing those 19 elements.


    Well here is a more complete view of these risks, and the rate of probability that they may happen.


    This table and information are probably 12-14 years old. Risk analysis was a very popular service at HDS prior to the Y2K event, and we were busy consulting with companies to determine the cost of having an outage due to Y2K. Many companies back then (prior to 9/11) did not have formal disaster recovery (DR) capabilities, so the concept of the service was to help them determine the cost of not having a DR or protection plan, and that cost was the sum of each of the risks they faced.

    Since this list is so old, some of the threat ratios or profiles may be out-of-date. Certainly the category called “External Sabotage” could be re-named terrorist attack or bombing. The world has become a more dangerous place in the last dozen years. Natural disasters are also on the rise, so you can take this old material and still see how threats that are common for you can be dollarized.

    The costing exercise is simple.

    1. Pick a threat (for example: lightning)
    2. Based on the typical occurrence of that threat for you, pick a number between the low and high end of the range of numbers. For me here in north Texas, I would pick a number closer to 2 or 3 due to the violent storms that happen every spring
    3. Determine the outage time that would result from each occurrence with your current state of preparedness or recovery
    4. Determine the cost per hour of an outage

    So, let’s say that lightning may hit my data center in north Texas up to 2 times per year. Each hit would cause a power outage or brown-out that would last 1 hour. Each hour of outage is valued at $40,000. The cost of risk for this one category would then be 2 times/year x 1 hour x $40,000, for a total ALE of $80,000 from lightning. I would add other cost/threat categories together to get my total ALE.
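
    Here is the same four-step calculation as a small sketch. The lightning line matches the worked example above; the other threats and their rates are assumed for illustration and should be replaced with your own threat profile.

        def ale(occurrences_per_year, outage_hours, cost_per_outage_hour):
            return occurrences_per_year * outage_hours * cost_per_outage_hour

        threats = {
            "lightning":     ale(2, 1, 40_000),     # the worked example above: $80,000
            "power failure": ale(1, 4, 40_000),     # assumed occurrence and outage figures
            "human error":   ale(0.5, 8, 40_000),   # assumed occurrence and outage figures
        }
        for name, cost in threats.items():
            print(f"{name}: ${cost:,.0f}/year")
        print(f"Total ALE: ${sum(threats.values()):,.0f}/year")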

    I am not an actuary, but this simple approach allows us to dollarize threat potentials and build the proper business case to recover in a cost-effective time. The cost effectiveness of the recovery has to be balanced against the cost of risk.

    Better go, I hear thunder outside. Time for the Texas spring storms to hit..

    Price Does Not Equal Cost

    by David Merrill on Apr 26, 2013


    I came across this article on the total cost of storage (and VM) this past week.


    The author states “Capital costs are actually more of a distraction than a real representation of what storage costs.” And I agree with this 100 percent. Working on the total cost of storage over the past 12 years, we have found that the price (or CAPEX) is only a fraction of the total cost. In many of my presentations and papers I state that the price of disk is about 15-20% of total cost.

    Over the last few years, this percentage has continued to drop. Working with an HDS customer last week (they operate with over 70 PB of storage), I found their depreciation expense (or CAPEX) is less than 10% of the total cost. Depending on what costs are included in a TCO, I have seen the CAPEX rate as low as 6% of costs.

    There are several cost categories that, over time, are higher than the purchase price of a storage system:

    • Power and cooling
    • Migration
    • Management
    • Data protection

    When addressing a cost reduction initiative within your organization, you have to look beyond price negotiations to produce real cost savings. You need to ensure that the products and architecture you consider for storage can reduce other costs. Simple questions to ask your vendor:

    • Does this solution provide an easy on-ramp and off-ramp to other storage systems at the beginning and end of its life?
    • How do the total power, cooling and floor space requirements compare to other solutions?
    • Can I effectively manage 200, 500, 1000 TB with one storage engineer for the total solution set?
    • Can you provide me optimal data protection techniques (backup, snaps, dedupe, copies, replication, disaster recovery solutions) to reduce the total hardware spend on this solution?

    Sometimes to get to an optimal cost, you may need to be prepared to pay a higher price. These solutions are not always cheap or free. There is a simple test – ask your CFO if they are willing to pay a higher price to get a lower total cost. The answer will always be “it depends on how much higher of a price and how much lower of a cost”. That is where you need to start your own storage economics effort. Here are some links to some of my relevant blogs that may help provide further insights:

    So Let’s Do a TCO

    Think Like an Economist, Talk Like an Accountant, Act Like a Technologist

    Storage Economics Talking Points

    Defining Costs for Storage Tiers

    by David Merrill on Apr 18, 2013


    Over the last few years, it has become increasingly important to create storage service catalogs in order to align business requirements to technical storage architectures. Many organizations shy away from developing catalogs for a variety of reasons, one of them being the perceived complexity of creating them. Many also tend to think that defining different tiers of storage is difficult, assuming that predicting exact or perfect classes of service has to be an exact science. People often ask me if there are best practices or published standards for these storage tiers, so that they can be used in a formal or informal catalog.


    As far as I am aware, there are no published industry best practices on setting up storage tiers. The definition of tiers is different for each customer and industry around the world. One person’s tier 1 can be someone else’s tier 2.

    Most storage architects do see some general patterns of these tiers, and I have outlined a highly simplified version here.


    As I mentioned, every organization will have different tier definitions based on their applications and requirements. The definitions above are average or illustrative in nature. The key point is that the last row (total cost ratio) is how we provide real differentiation or separation of costs. This is not the price of the media for the tier, but the total cost of the tier over a several-year period (annualized).

    The costs that are used can vary from company to company, but tend to include the following popular costs out of the possible 34 costs that we use in Storage Economics at HDS:

    1. Hardware and software depreciation
    2. Hardware and software maintenance
    3. Labor for management
    4. Power and cooling
    5. Floor space
    6. All data protection costs (backup, Disaster Recovery)
    7. SAN and WAN costs

    Since you want to move data to lower-cost-to-own tiers over time, this 11:7:3:1 ratio is important. If 3 of the 4 tiers are all roughly the same cost, then there are no options to reduce the cost of storage. Having multiple tiers at differentiated costs is a necessary incentive as we move to chargeback and pay-as-you-go (cloud) models. At the very least, departments and IT consumers need to know what their storage tier selection really costs the company over time. This might influence different selections or assignments of IT resources.

    The types of services and capabilities have to be designed in order to achieve this differentiating 11:7:3:1 cost ratio. If not, subsidies and tariffs may have to be introduced to artificially set the costs so that these ratios can be instituted. I don’t think there is anything wrong with artificial cost setting, since it is a tactic to drive the right behavior of storing data in the right tier for the life of the data asset.
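
    To illustrate why the ratio matters, here is a minimal sketch. The per-TB figures and the capacity mixes are assumptions chosen only to follow an 11:7:3:1 ratio; they are not published rates.

        # Annualized total cost per TB for four tiers, following an 11:7:3:1
        # ratio. All figures and capacity mixes are illustrative assumptions.
        cost_per_tb_year = {"tier 1": 1100, "tier 2": 700, "tier 3": 300, "tier 4": 100}

        def blended_cost(mix_tb):
            """Total annual cost of a capacity mix {tier: TB}."""
            return sum(cost_per_tb_year[tier] * tb for tier, tb in mix_tb.items())

        before = {"tier 1": 600, "tier 2": 300, "tier 3": 100, "tier 4": 0}    # 1,000 TB
        after  = {"tier 1": 200, "tier 2": 300, "tier 3": 350, "tier 4": 150}  # same 1,000 TB

        for label, mix in (("before", before), ("after", after)):
            total = blended_cost(mix)
            print(f"{label}: ${total:>9,.0f}/year  (${total / sum(mix.values()):,.0f}/TB/year)")

    If three of the four tiers carried roughly the same cost, the “after” mix would save almost nothing, which is the point of engineering (or, if necessary, artificially setting) a differentiated ratio.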

    Don’t be discouraged if you do not have a comprehensive storage catalog in place. I see many customers start with a simple matrix like the one above (perhaps put into a colorful and informative 1-page brochure format) to communicate differences in cost, performance, protection and availability. Communicating the differences and capabilities can start with some simple tables, and over time this format can evolve into a style that works best for your organization. Just don’t forget to include the costs for each tier.

    Recipe for Storage Chargeback

    by David Merrill on Apr 10, 2013


    I have been asked to develop a paper (more like a book) on the topic of chargeback. Since chargeback involves money, colleagues automatically assume that this is a job for the “Chief Economist” at HDS. I have had a lot of experience with chargeback schemes, though this is not a core skill that I can add to my LinkedIn page.


    I probably have a slightly different perspective, since I think that catalogs and chargeback are key initiatives to reduce costs (this is a phase 3 event as outlined in this cost reduction time-frame). At a time when customers are considering utility models, the idea of developing or understanding chargeback is also relevant. So even though I am not ready to write a formal paper or a book, there is enough for a blog entry. So here I go…

    What are the goals or objectives expected to be achieved with a chargeback? You should document and track these goals:

    • Change behavior – what new utilization behaviors are needed?
    • Recover IT costs and charge them to the departments or groups that use them
    • Is IT turning into a profit center?
    • Drive down unit cost – what is your cost now? What should it be? How will utility and chargeback help this?
    • Accountability – people pay for what they consume
    • Reward good behavior and usage, and penalize the bad
    • Review and revisit goals every year; compare with what the utility brings

    A storage total cost baseline is needed. You cannot set a chargeback rate at random unless you can measure your current cost of goods sold (COGS). HDS has outlined 34 different types of costs. Your plan should be able to define:

    • Current unit costs
    • What costs (out of the 34) are relevant or important?
    • Total cost difference between file and block, tier 1, 2 and 3, disk and tape, etc.
    • Differentiation between services and service levels: Cost of SAN, Cost of backup, Cost of disk only, Cost of DR protection

    Before you start selling disk, you should create a “Storage Service Catalog”. This is your menu of offerings, by service type and by costs. Not all storage and storage services are created equal. The Storage Service Catalog allows you to present information in the language the ‘customer’ will understand:

    • Different tiers, different services, different prices
    • Drive the right behavior
    • Tied to referential architecture
    • Tied to Business Impact Analysis (BIA)

    A Storage Service Catalog is a business-level abstraction of the technical or referential architecture. The two need to be linked to each other, but consumers or users should not be selecting drive type, RAID level, cache size etc. The catalog has to abstract the technology and define the services in the context of applications or the business.

    Now, with goals defined, total cost and COGS understood, and basic service catalogs developed, you are ready to start creating the chargeback system for storage. Some things to consider (this is a partial list):

    • Unit of measure
    • Frequency of metering
    • Automation for metering and costs
    • Collection, reconciliation and billing system
    • Consider a tiered price ratio that drives the right behavior (my experience is that for a 3-tiered storage service, a total cost ratio of 7:3:1 is optimal; see the metering sketch after this list)
    • Annual true-up
    • Annual price reduction
    • Billing and journal entry accounting systems
    • Benchmark of commercial systems in order to be competitive
    • Are there SOX or other compliance groups that need to be involved in setting your rate?
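
    Here is the metering sketch referred to above: a hedged illustration of a monthly billing run, where the tier rates (following a 7:3:1 ratio) and the metered consumption figures are assumptions, not recommended prices.

        # Hypothetical monthly chargeback run. Tier rates follow a 7:3:1 total
        # cost ratio; all rates and metered figures are illustrative assumptions.
        rate_per_tb_month = {"tier 1": 700, "tier 2": 300, "tier 3": 100}

        # Metered allocation per department, in TB (assumed sample data)
        metered = {
            "finance":   {"tier 1": 40, "tier 2": 20, "tier 3": 100},
            "marketing": {"tier 1": 5,  "tier 2": 60, "tier 3": 200},
        }

        for dept, usage in metered.items():
            bill = sum(rate_per_tb_month[tier] * tb for tier, tb in usage.items())
            print(f"{dept:10s} ${bill:>7,.0f}/month")

    A real system would layer annual true-ups, price reductions and journal-entry feeds on top of this, but the core calculation is not much more complicated than this.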

    I have written in the recent past about how utility computing and chargeback will conflict with many traditional project-based procurement processes. I suggest reading these blog entries as you create your chargeback scheme. Read them here and here.

    So perhaps a book on this topic is in order. Perhaps I can use the Betty Crocker template and really simplify this process in the context of a cookbook…

    7th Anniversary

    by David Merrill on Apr 4, 2013


    Today marks my seventh anniversary of blogging for HDS, with most of the emphasis on the economics of storage, hypervisors, converged infrastructures and cloud. My very first entry, on April 4, 2006, summarizes the genesis of the material, research and learning journey that has been storage economics. As I look back on the macro- and micro-economic changes that have occurred over this period, there are a number of principles and concepts that have not changed much:

    1. The price of IT is not the same as the cost of IT
    2. Just as all are “so easily seduced” by the power of the ring in Lord of the Rings, so too are IT professionals drawn to low prices. We are often hoodwinked into architectures that cost more over time.
    3. Whether we are looking to improve costs per VM, cost of an IOP, or cost of a TB – we have to start with a baseline. We cannot improve what we cannot measure
    4. Micro and macro economic factors in the world do affect IT economics. IT professionals have to learn to speak the business language of money and (unit) cost reduction
    5. Since IT is expanding (data, systems, apps, users, etc.) we have to focus on unit cost reductions
    6. ROI is good for business case support, but in the end TCO (or an equivalent) is much more meaningful
    7. The list of levers or options to reduce costs keeps getting longer and more intriguing over time
    8. We have to learn to map or correlate cost reduction areas to cost reduction levers

    So what will the IT economic landscape look like in the coming years? No one can predict future events, but what the heck—here I go…

    • Depreciation of IT assets will diminish
    • We will turn to consumption models
    • IT elements will keep getting cheaper (disk, core, memory) while enablement resources (labor, power, risk) will keep rising
    • Rate of change, demand for business agility, standing-up and tearing-down of IT resources will accelerate to levels not yet seen
    • Cloud, web, and hosting options will adapt and change to meet security and protection requirements for larger percentages of IT systems
    • Social innovation/systems will transform traditional IT systems, and shift some costs more to the consumer of IT
    • The need for cost efficient solutions will not go away, as consumer/social innovation trends will demand lower variable rates, and lower unit costs

    I look forward to continuing to journal this IT economic journey, and will have to come back every year or so to see if my predictions are close.

    Options to Reduce the Total Cost of Data Protection

    by David Merrill on Apr 3, 2013


    This is the fourth and final entry in this series about data protection (DP) costs. I have discussed some of the reasons why these DP costs are high, how to measure them, and how to correlate these costs to risk. Finally, I’d like to discuss some ideas around DP cost reduction, as well as the correlation of these ideas to the different types of methods used for protection. In a cost reduction approach, it is very important that we don’t jump to this problem-solving step first. Instead, I strongly recommend developing a Data Protection TCO as the first step. Make sure you have accounted for all types and styles of DP that may exist for the lifecycle of your data. This might be a good time to conduct a business impact analysis (BIA) to confirm the protection and risks. Then compare/contrast these costs to the risks. If you are out of balance, then start this final stage of selecting cost-reduction solutions.


    The following is a simple table that lists (in the left-hand column) popular methods to target and reduce costs. These are some of the levers you have available to reduce the cost of DP. Across the columns are impacts to the different DP methods that you may be using. Not all solutions or tactics impact all DP methods the same way.


    Now the cycle is complete. You have levers or solutions to implement that can impact your cost of DP. After a period of time, you can start the process again by measuring the TCO of DP and aligning it to your new risks. Hopefully, with improvements and changes the ratio and cost factors are more favorable.

    As a postscript to this 4-part series on data protection, I would like to reference a 451 Research report entitled “Storage protection strategies shift under server virtualization” which states: “Data protection architectures are where we see differences between physical and virtual servers. While a majority still uses the same protection schemes for both, those taking the opportunity for change are experimenting with relying on hardware functions of snapshot, replication and de-duplication, while leaving backup software out of the picture.”

    The report identifies server virtualization (Virtual Machine) related capacity growth as having the largest single impact on storage, especially on data protection. I will write some more about these DP methods that are specific to the VM world.

    A Short Primer on Annualized Loss Expectancy (ALE)

    by David Merrill on Mar 28, 2013


    My last 2 blogs, “Are We Over-protected/Over-insured?” and “Calculating the Total Cost of your Data Protection”, have been on the cost of data protection and the categories that make up data protection costs. In a blog format, I do not have the time/space to present a complete calculation methodology, but rather to set up a framework from which you can start to determine the total cost of data protection for your environment. If you go through this exercise, you will have a number: the total (annual) cost of data protection for your environment.


    What to do with that number? By itself, it is impossible to know if it is too high or too low for your environment. You do not want to be defending or challenging the number if you cannot do so in a proper context. I suggest that the best way to evaluate this number is to compare it to your cost or level of risk. Balancing the cost of protection against the cost of (potential) risk is a key method to understand if you are over- or under-insured relative to your data. If your cost of data protection is $1M (all the infrastructure, copies, backup etc.) and your cost of risk is $5M, then you will have to work through that 1:5 ratio.

    • Are your protection methods highly optimized (in order to allow the 1:5 ratio)?
    • Is the risk level so high that you may not be protecting your environment adequately?

    This approach gives you some financial methods to compare and contrast your protection and risk. Conversely, if your data protection costs are $5M and your risk is only $1M, then there may be opportunities to reduce the protection costs to be commensurate with the risk level you have calculated.

    Dollarizing risk and comparing/contrasting it to protection costs allows us to get away from subjective statements (“this is just too expensive”) and knee-jerk reactions. Balancing risk and protection is not a perfect science, but putting a dollar amount on both sides of the equation can help future IT projects optimize or reinforce protection methods.

    Now, dollarizing risk may not be as easy as it sounds. Sometimes external (3rd party) help is required to bring some credibility to the problem/solution. Many years ago we had a formal risk-calculation framework based on CORA. You can Google CORA to find products and services that are available today. 15 years ago our CORA services were essential to Y2K planning; we had to dollarize the risks of Y2K outages to balance against the Y2K development efforts.

    A similar method to CORA is ALE, or annualized loss expectancy. This can be a faster way to identify the annual probability of certain events, apply a probability rating, and then derive a cost. Adding up all the threat types and their costs yields the ALE, which can serve as a shortcut for determining the total cost of data risk.

    In the ALE approach, there are hundreds of events that can cause an outage, data loss or disruption of service. Years ago I built some simple tools to calculate ALE by narrowing this list to the 19 most common or popular event types. Here is the list I came up with (in no particular order):

    1. Fire (Minor)
    2. Fire (Major)
    3. Fire (Catastrophic)
    4. Flood
    5. Hurricane
    6. Tornado
    7. Lightning
    8. Earthquake
    9. Power Outage
    10. Power Surge/Transients
    11. Network Outage
    12. A/C Failure
    13. Input Error
    14. Software Error
    15. Hardware Error
    16. External Sabotage
    17. Employee Sabotage
    18. Fraud/Embezzlement
    19. Strike/Riot

    So, for example, if a data center is located in central Florida, then several of these 19 would be included and some would be excluded. If catastrophic hurricane is chosen, and the pattern is that one occurs every 20 years, with an average outage period of 2 days and a business impact of $10K per hour, then the ALE for this one category would be:

    0.05 (annual probability) × 2 days × 24 hours × $10,000/hour

    This works out to $24,000 per year. If the other applicable categories from the 19 are similarly calculated and added together, the sum constitutes the ALE.
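
    A minimal sketch of that calculation follows, assuming illustrative probabilities, outage durations and hourly impacts. Only the hurricane entry matches the example above; substitute your own ratings for the full list of 19 event types.

        # Minimal ALE sketch. All probabilities, outage durations and hourly
        # impacts below are illustrative assumptions, not published ratings.

        # (event, annual probability, outage hours, business impact $/hour)
        threats = [
            ("Hurricane (catastrophic)", 0.05, 2 * 24, 10_000),  # once every 20 years
            ("Power outage",             0.50, 4,      10_000),  # assumed figures
            ("Hardware error",           1.00, 2,      10_000),  # assumed figures
        ]

        def ale(events):
            """Annualized Loss Expectancy: sum of probability x duration x impact."""
            return sum(p * hours * impact for _, p, hours, impact in events)

        for name, p, hours, impact in threats:
            print(f"{name:28s} ${p * hours * impact:>10,.0f}/year")

        print(f"{'Total ALE':28s} ${ale(threats):>10,.0f}/year")

    The ALE total then sits on the risk side of the ledger, to be compared against your annual cost of data protection.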

    In any kind of data protection assessment, you have to compare the cost of risk to the cost of protection. ALE can be a simple approach to dollarize outage or loss risks, which can then be compared to the protection rate. I can publish some standard probability ratings for these 19 event types; drop me a note or leave a comment on the blog and I can get this information to you to help you create your own ALE.

    Calculating the Total Cost of your Data Protection

    by David Merrill on Mar 19, 2013


    In my last blog post, I introduced a 4-part blog series on the economics or costs of data protection. My colleagues Ros and Claus are also writing a parallel set of blogs on the technology behind data protection. My boss Hu Yoshida also posted compelling points in a blog two months ago, describing an experience where only 12% of a customer’s data storage was 1st-instance data.


    As fate would have it, IDC published a recent report from Laura DuBois, Richard L. Villars, Brad Nisbet entitled “The Copy Data Problem: An Order of Magnitude Analysis.” It is an excellent read, with high points being:

    • IDC estimates that in 2012, more than 60% of enterprise disk storage systems (DSS) capacity may have been made up of copy data
    • Similarly, in 2012, copy data made up nearly 85% of hardware purchases and 65% of storage infrastructure software revenue
    • IDC projects that data protection infrastructure will cost businesses $44 Billion in 2013
    • They go on to say that the growth of data is bordering on paranoia

    So with this as a backdrop, let’s walk through a process to actually calculate the costs of data protection. First, we have to determine what cost elements we should use in a TCO analysis. Luckily, HDS has a robust methodology called Storage Economics that provides 34 different cost areas to use in this approach. See these papers and books for more on the 34 costs.

    Next, we need to isolate the costs and align them to the different data protection techniques outlined in my first blog. These costs tend to be hard costs, or direct costs. You can add soft costs as well, but for this exercise hard-cost calculations will be more impactful, helping management see what this insurance is really costing. We will handle the risk costs in another part of this data protection blog series, as these risks need to be quantified and dollarized as part of the annualized loss expectancy.

    Here is a simple correlation of some of the costs (in the far left column) to how those costs may appear in the several data protection methods.


    With this framework you are now able to do the math on your own, to calculate per-technique unit costs and the total cost of data protection. HDS has tools and services to help if you want to do this a little faster, with people who do this several times a month instead of once in a career.

    I also recommend not only measuring data protection cost per usable TB, but also dividing these costs by the 1st-instance capacity that you are protecting, or in other words the total cost of data ownership (TCDO). This gives a better metric going forward for improvement and cost reduction plans. With this framework in place, you are now able to start calculating the total cost of your data protection strategy.
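
    As a rough sketch of the two metrics, consider the following; all cost figures and capacities are made-up inputs, and only a subset of the 34 cost categories is shown.

        # Hypothetical annual data protection costs (subset of the 34 cost types)
        dp_costs = {
            "backup infrastructure":   400_000,
            "replication/DR capacity": 650_000,
            "snapshot/copy capacity":  250_000,
            "backup software":         150_000,
            "labor":                   200_000,
        }

        usable_tb         = 2_000   # total usable capacity being protected (assumed)
        first_instance_tb =   500   # 1st-instance (primary, non-copy) data (assumed)

        total_dp = sum(dp_costs.values())

        print(f"Total DP cost:              ${total_dp:,.0f}/year")
        print(f"DP cost per usable TB:      ${total_dp / usable_tb:,.0f}/TB/year")
        print(f"TCDO (per 1st-instance TB): ${total_dp / first_instance_tb:,.0f}/TB/year")

    Because 1st-instance capacity is only a fraction of usable capacity, the TCDO figure comes out several times larger, and it gives a cleaner view of what protecting each terabyte of primary data really costs.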

    Are We Over-protected/Over-insured?

    by David Merrill on Mar 7, 2013


    Recently I experienced an insurance audit – to correlate and review all my insurance between home, life, car etc. The audit was helpful in identifying some gaps in my coverage, and to look at ways to reduce my total insurance bill. In the end, I found that I had better coverage at a lower cost.


    I believe this same problem exists with data protection: the many available protection methods (and there are many of them) lead me to believe that data protection may be over-engineered and that IT departments may be paying too much for it. There are many reasons and functions for data protection:
    • Business reasons – monetary loss, customer attrition, brand erosion, compliance, vendor viability, operating costs, changing business needs, operational efficiency, business disruption
    • Technical reasons – data loss, data corruption, physical disaster, disparate systems, storage and apps, in-house expertise, technical flexibility, solution mix, business resumption delays, recovery point and time, management challenges, recovery complexities, application protection

    Looking over recent results of global storage economic assessments, we see that data protection can easily and often be 40-60% of storage TCO (TCO/TB/year). So when management is looking to reduce costs, can we stand up and state (with a straight face) that we should look at data protection costs, or insurance costs, to see if there are cost reduction opportunities?

    I will dedicate the next few blogs to this topic. But first we have to outline the insurance options being used today to protect data:
    • RAID
    • Snap copies, CDP
    • Volume copies (local, remote)
    • Backup to tape
    • Backup disk, VTL (then tape)
    • 2DC (passive or active)
    • 3DC

    These techniques protect against different types of risk, but there is often overlap. Therefore, with overlapping coverage, there are extra costs. And sometimes there are gaps in the data protection coverage that can increase risks. As we take a larger look at the problem, there is compounded complexity when you include application and system-level protection techniques (clustering, OS, application level). So developing a data protection economic framework may not be an easy task.

    In this short blog series, I will focus on infrastructure methods and costs. High availability and clustering in the server does protect data, but is targeted at the application layer. I will focus on the storage infrastructure layer. In the next blog, I will cover the methods to calculate the TCO of data protection, then look at the various ‘levers’ available to reduce the costs. These cost reduction efforts have to be aligned with a cost-to-risk-equilibrium (RTO, RPO, cost of outages, Annualized Loss Expectancy (ALE), total spend, service levels of protection, risks) to make sure that we are not creating more risk costs.

    Speaking of risk – the TCO of data protection has to be balanced with ALE. More on ALE in a later blog.

    I hope this can be an interesting dialogue to identify, measure and eventually reduce the total cost of data protection.

    Economic Advantages of Pay-as-You-Grow or Utility Storage Services

    by David Merrill on Feb 28, 2013


    I am working in Australia for several weeks, and find that many sourcing companies (including HDS) have been in the Storage as a Service business for several years. Most companies are aware of these offerings, and general acceptance seems to be higher here than in other parts of the world. Part of that may be that these national resources are in-country, so the threat of data or systems moving off-continent seems less likely. The distinctions between utility services and traditional outsourcing are mostly well understood. Recently I met with a few customers who still have bad recollections of old-fashioned outsourcing and are skeptical, suspecting that these new consumption methods are really a disguise for bulky, inflexible outsourcing deals. They also do not see how these options can reduce real costs.


    In this blog, I will outline the theory of storage cost savings with a utility (scale up and down) pay-as-you-go storage service. Let’s just call this “storage utility” for now. And for this blog let’s focus on the CAPEX impact of savings/differences.

    I will start by describing an overly simplistic, multi-year storage growth model. First, let’s look at the written-to data requirements of a company.


    In the above graph, we see several points of interest in the demand curve:

    -       Point A is steady-state growth with new projects and new infrastructure.

    -       Point B represents a new project (perhaps an analytics event) where a lot of new data is introduced (machine-generated data) to be analyzed. This might be data that can reside in a lower-tier of storage, but will be on-line for several months.

    -       At point C, the burst mode data goes away. Perhaps it is deleted; perhaps it is put back to tape or an archive. But the total capacity demand for written-to data drops.

    -       At point D, there is a merger or acquisition, and the storage/data demand grows rapidly for a sustained period of time.

    Next, let’s look at a traditional purchase model that would be required to meet the above demand.


    The top line represents a usable capacity rate needed to support the written-to levels in the first graph. In this chart let’s also assume:

    -       Thin volumes have limited adoption in the environment.

    -       A 5-year depreciation for assets.

    -       Once assets are purchased, they stay around until the end of the 5-year depreciation term

    -       There is a lag between demand and delivery. This is due to the time it takes to scope, engineer, bid, purchase and install the assets.

    -       Engineering with reserve capacity (20%) is common for the storage management team.

    -       Utilization of data (written to) compared to allocated is an industry average of 40%. Therefore, the white space or wasted capacity within what is allocated has to be added to the reserve capacity defined by the storage team.

    As you can see, overall utilization is very poor. The spikes at the end of event B create pools of unutilized storage. As new projects come online, they want to have their own resources and not hand-me-down disk. Utilization rates that start at 30% in this model can easily drop to 15% in a short period of time. And finally, the written-to-raw ratio hovers around 1:6 (which would be very, very good).

    Now let’s look at a storage utility approach to the same demand scenario. In this service:

    -       Only thin provisioned volumes are delivered to the customer. In this example I have a conservative rate of 110% average oversubscription.

    -       Capacity can scale up and down.

    -       The lag between requirement and delivery is hours or days, not months.

    -       There is no need for reserve capacity. The service provider keeps all the reserve so that the customer pays for only what they need.

    -       White space within the allocated volumes may still exist, but oversubscription will reduce most of this waste.


    As you can see, the differences are tremendous. Not only is the total storage footprint different…

    -       The written-to-raw ratio turns from 1:6 to around 3:5.

    -       Very fast mean time to deliver provides positive impact to the projects.

    -       Floor space, power and cooling costs are reduced by 35-50%.

    -       With less equipment on the floor

    • Management costs are held in check
    • HW maintenance rates (even as part of the utility rate) are reduced

    -       Agility in acquiring and de-commissioning IT assets can bring better business value, with just-in-time OPEX spend in place of long-term CAPEX commitments.

    If you subtract the capacity of the Storage Utility line (green) from the BAU line (brown), you get a sense of the difference in total capacity needed at any point in time to meet the business’s data storage needs.
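
    The sketch below is one way to model that difference. The quarterly demand profile, utilization, reserve, lag and billing-buffer figures are assumptions chosen to mirror the narrative above, not actual customer data.

        # Compare raw capacity carried under a traditional purchase model with a
        # pay-as-you-grow utility, for the same written-to demand (TB per quarter).
        # All parameters are illustrative assumptions.
        written_to_tb = [100, 120, 140, 300, 320, 180, 200, 400, 450, 500]

        def traditional(demand, written_to_allocated=0.40, reserve=0.20, lag=1):
            """Buy ahead of demand (lag), add reserve, divide by poor utilization,
            and never give capacity back until the depreciation term ends."""
            owned, series = 0.0, []
            for i in range(len(demand)):
                target = demand[min(i + lag, len(demand) - 1)]
                needed = target / written_to_allocated * (1 + reserve)
                owned = max(owned, needed)   # assets stay once purchased
                series.append(owned)
            return series

        def utility(demand, billing_buffer=1.10):
            """Thin-provisioned utility: billed capacity tracks written-to demand
            with a small buffer, and can scale down as well as up."""
            return [d * billing_buffer for d in demand]

        for q, (t, u) in enumerate(zip(traditional(written_to_tb), utility(written_to_tb)), 1):
            print(f"Q{q}: traditional {t:6.0f} TB   utility {u:5.0f} TB")

    Even with these rough assumptions, the utility line stays far below the purchase line at every point, which is where the floor space, power, cooling and maintenance savings come from.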

    Moving from a CAPEX spend to this OPEX storage utility may also present some internal finance and accounting challenges, which we can discuss in the next blog. But for the present view, reducing infrastructure, having the agility to consume what you need when you need it, and having a variable-rate cost aligned to business needs are some of the key benefits of this type of IT service delivery. Other aspects, benefits and detriments will be covered in my next few blogs.

    ROI vs. TCO

    by David Merrill on Feb 13, 2013


    A colleague sent me this article about using ROI to justify IT investment requirements to business managers/leaders. Some follow-on discussions have opened up an old debate about TCO versus ROI to justify IT spending in the face of potential savings. Since the inception of our Storage Economics work at HDS in 1999, I have always been a big proponent of ROI. For all the reasons defined in the article, ROI is a valuable tool when competing with other internal spending/funding initiatives. ROI can be a single measurement that management can use to contrast investment options from IT, marketing, new properties, R&D or whatever else companies spend money on. So when you have to compare and compete, ROI is an excellent method.


    For the first 5-7 years of doing storage economics (and even hypervisor economics), our tools and methods centered around ROI methods and calculations. Our earliest work was designed to help IT planners build business case justifications for new investments and architectures. It seems to me that around the time of the global economic crisis (also concurrent with an acceleration of data growth and storage demand), there was a subtle shift from ROI to TCO work as our preferred method. Perhaps this is/was a personal preference, but in my work around the world, reducing unit costs of things (TB, VM, VDI, Mailboxes, Oracle instances etc.) has been paramount. And here is why:

    1. IT budgets are relatively flat, or increasing only slightly (if you follow IDC check out report #12 “Research Report – 2013 IT Spending Intentions Survey”)
    2. Storage and VM demand is anything but flat. Most of my customers indicate a 40% year on year growth in storage, VM or VDI

    So, with a nearly flat budget and 40-50% growth in IT resources, what is to be done? The answer is to track and actively reduce unit costs. This older blog post has a diagram to show the concept.

    I tend to see that a 30% year-on-year reduction in TOTAL unit cost can allow most organizations to hold a relatively flat OPEX and CAPEX budget. This is why TCO is such a critical management message. So in most conditions, TCO and ROI methods can co-exist to tell a compelling story: TCO-driven investments to drive down unit cost (preserving budgets), and ROI to compete head-to-head with other company initiatives. So instead of a versus arrangement, the best approach is a combination of both.
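
    A quick sanity check on that claim, using assumed round numbers: if demand grows 40% year on year and total unit cost falls 30%, next year's spend is roughly 1.4 × 0.7 ≈ 0.98 of this year's, i.e. essentially flat.

        growth = 1.40         # 40% year-on-year growth in demand (assumed)
        unit_cost_cut = 0.30  # 30% reduction in total unit cost (assumed)

        budget_factor = growth * (1 - unit_cost_cut)
        print(f"Next year's spend vs. this year: {budget_factor:.0%}")  # ~98%
        # At 50% growth the factor is 1.5 * 0.7 = 1.05, still close to flat.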

    Project-based Funding Tends to Produce Poor Asset Utilization

    by David Merrill on Feb 1, 2013


    In my last entry, I introduced the concept of project-based funding for IT resources. I want to take a deeper dive into some of the problems of project-based IT funding, and how these methods may have to change as the procurement and ownership of IT evolve over the next few years.


    One set of traditions in project-based funding goes like this:

    1. The project gets approved and receives funding to get started.
    • The budget includes some IT funds, some of which are for IT infrastructure such as storage.
    • The storage budget is reviewed and presented to the storage team. The project managers do not want to come back later for more resources, so they tend to request and buy all the capacity up front.

    2. The storage team uses the project funding to purchase new storage.
    • The project personnel often assume that since the assets were bought with their money, the storage assets belong to the project and they have some say in how the assets are used, shared and managed.
    • Some sharing is allowed, but there is still a feeling of ownership rights to XX TB of disk space. The project can get reports and “see” their total storage estate, and the project DBAs start allocating and writing to capacity within that estate.

    3. Project growth rates always start slow, so usually only a small fraction (<10%) is used in the early months or years of the project.
    • Excess or reserve capacity is not shared with another project or a shared IT infrastructure, since the project owns it.
    • Utilization rates start slow, and even if they grow at a moderate level, overall utilization remains relatively low. Projects don’t have to account for poor utilization since it was a project-sponsored purchase.

    4. The central IT storage team may not get to share in the project’s excess or reserve capacity. The project may produce low utilization of the assets over time, but the IT department will be paying for that poor utilization for much longer:
    • More cabinets and drive trays installed than what is needed
    • Higher power and cooling
    • More floor space
    • Higher SW and HW maintenance costs in years 2-n
    • More monitoring and management effort

    5. At the end of the project, the IT department usually has the chance to archive or decommission the data for long-term retention. At this time, compression, de-dupe and thin volumes can be used to store the data in a tier that is appropriate for the data’s value.

    6. This process is repeated for virtually every project that comes into the IT department.

    Compare this internal approach with that of a hosted service provider (cloud, IaaS, PaaS), where a pay-as-you-grow, pay-for-consumption approach is taken. Monthly bills are sent for allocated capacity. The service provider holds the reserve capacity, so the project does not need to purchase reserve resources on day 1. These types of cloud services don’t necessarily have to take your storage and data off-site; many offerings can provide this kind of pay-as-you-go utility with the assets on your own data center floor.

    Project-based procurement and ownership tends to lead to poor storage asset utilization. Poor utilization increases the total cost of ownership, and since our CFOs are always looking for better ROA, we may have to challenge some of these project-ownership traditions. New procurement models will help reduce costs, but project-based funding protocols will have to be re-engineered to accommodate these new trends.

    Considerations for Project-Based IT Funding

    by David Merrill on Jan 25, 2013


    There are probably a dozen ways IT departments get funding (primarily CAPEX) to handle new growth and daily operations. I am going to present a few of these traditional funding models, along with some of the risks these funding types face as non-traditional IT ownership models take hold. I have been writing about some of these non-traditional methods for the past few months, and you can see them here:


    Back to 1999 CAPEX Ratio
    Challenging the Tradition of Depreciation for IT Ownership
    Alternative Acquisition Methods
    An 11-Step Program

    What these new modes have in common is a pay-as-you-go approach. Even if these methods reduce cost, they will cause a disruption (in the force) in how project-based funding works today. Let’s review the approach and the possible conflicts going forward. Again, there are dozens of funding models for IT, and this post will cover just one type of project funding.

    Some IT departments depend on new projects to bring company-approved funding to the table to acquire new storage, servers, apps etc. that the project needs. Characteristics include:
    • Project brings CAPEX dollars to have IT purchase the needed hardware and some software
    • This is a one-time infusion of cash; the project does not pay for on-going or recurring costs
    • IT may be forced to provide 3, 4 or 5 years’ worth of assets for the project, even though it will take years for the project to grow into the capacity (therefore there is a lot of waste)
    • Some IT departments use this cash to add capacity to the shared-services pool of IT resources, leading new projects to provide all the new cash for internal organic growth
    • The project usually does not pay on-going costs that will exist over the life of the project, such as:
    o Labor
    o Backup services
    o HW and SW maintenance
    o Migration services at the end of asset life
    o Monitoring, tuning, optimization
    • Sometimes the project does not even pay for organic growth required during the life of the project
    • In this mode, the IT department has to maintain a separate budget to cover the above costs. Sometimes new projects fund the on-going costs of older projects (sounds like a modified Ponzi scheme…)

    Now, if an IT department wants to consider a pay-as-you-grow or consumption-based model, there will need to be some pretty hard changes to project-funding methods:
    • A charge-back method will usually be needed as the service provider will charge on a per TB or per-volume basis
    • Project funds that come into the front door will have to be placed into some type of escrow account to fund the OPEX model over several years
    • Projects may have to change from a one-time CAPEX budget to an annual OPEX spending budget
    • New incoming project funds may not be allowed to pay for older project OPEX costs (from the utility OPEX model) due to accounting and SOX-type legislation
    • IT central budgets will have to undergo a transformation around run-rate costs and utility pricing that they provide for projects

    Some of these accounting practices will require significant re-design, and in some cases may delay a move to utility or consumption-based IT resourcing. My next few blogs will review other funding methods and the impact to traditional accounting and funding practices.

    Constructive View of IT Finances

    by David Merrill on Jan 3, 2013


    So, did we go over a cliff this week? I was sort of expecting a more grandiose situation, like the scene when Denethor runs (while on fire) over the edge of the promenade of Gondor (in Return of the King).



    Our U.S. national tax situation was not quite as spectacular. But the recent changes in my local tax laws (including the annual change of insurance and withholdings) caused me to spend some time this holiday break looking at my financial plan for 2013. It is good to challenge older strategies and tactics and to take action to develop the best financial plan for my family and me.

    IT financial reviews would be just as useful, given the uncertain times and conditions of our business economies. I recommend that technologists and architects sit down and have this dialog with the financial teams as well. Just as in legislative jostling, all sides of the problem need to be open for review and compromise so that a balanced approach can emerge. Perhaps some open-ended questions for such a collaborative meeting:

    • Data growth is not slowing down, but can this capacity be presented in a virtual, thin or different way?
    • Are metrics and measurement systems in place to help projects know what they consume?
    • Copies consume a lot of the current capacity; are we enabling projects to see and understand their consumption?
    • How do we plan effectively knowing that budgets will continue to be tight for the foreseeable future?
    • Do the right risk/reward systems exist to help drive the preferred behavior?
    • Understanding the trade-offs of CAPEX spend vs. OPEX spend (is one better than the other this year?)
    • Why do IT assets have to be purchased and depreciated?
    • In order to reduce costs, can management allow some risk areas to rise in order to avoid costs (i.e. challenge backup, disaster protection, compliance)?
    • Are all costs equal? If not, which are the most important this year?
    • Does all data or all IT resources have to physically be in the company-owned data centers?

    The start of a new year is a great time to sit and evaluate budgets and spending. It is a good time to challenge past traditions in light of future certainties. It is a good time to have agreement and develop a plan to make the coming year profitable and exciting for all.