The Storage Economist: 2014 Archive

    Focus on Reducing Unit Costs of IT

    by David Merrill on Nov 11, 2014

     

Over the years I have written a lot of material on cost reduction tactics. As I look at the unit costs of storage, VM, cloud, big data and long-term digital preservation, I see patterns of success across IT organizations in different vertical markets. These patterns are significant, since the investments behind them involve much more than individual elements or products.

     

In the world of IT economics, there are many distractions as well. There are new standards, protocols, and delivery models that promise savings. As I advise clients on cost reduction strategies, it is important to first understand and define what your own costs are. There are dozens of IT costs, and the choice of which ones to measure will ultimately determine the actions and investments needed to reduce them.

After cost identification is done, I see a pattern of four distinct phases to reduce unit costs. These phases do not need to be linear or done in any particular order (though the order below tends to be common), but each area is important to understand for yourself, and to measure effectiveness in that area or phase. The principles of econometrics are essential to track progress and improvement as you enter and exit each of the phases. The undertaking of unit cost reduction requires you to review, understand and challenge current ways of providing IT services:

• Operational – Each organization tends to know where there are operational problems to improve upon. Whether it is adding more rigor to the CMDB or securing a second vendor to leverage better pricing, there are many operational options to consider. Software audits are popular this time of year, to ensure that the license fees you are paying really do support applications in use. DR tests, and service-level alignment for DR recovery, can require a check-up now and then. End-of-year systems freezes are often a good time to evaluate best practices, processes, and overall efficiency.
• Architectural – Many new technology options are emerging (OpenStack, erasure coding, software-defined everything, IoT, hyperconverged systems), and these new elements can drive exciting and lower-cost architectures. Remember to differentiate the ingredients from the architecture. The architecture you implement is unique to your environment, and these elements are the key ingredients to deliver the results you need.
• Behavioral – Although not popular to implement, end-user behavior change can produce significant cost savings. As more applications and systems move to some type of cloud, end users are becoming accustomed to buying services from a standard offering, having their workload metered, and eventually paying for what is used. These same procedural changes can be made for internal IT without going to a cloud, and they can help reduce waste and improve overall asset utilization.
• Ownership – Again referring to the cloud, the idea of IT on demand for a per-use fee is gaining wider acceptance. The challenge to ownership does not require that you move IT to an off-premise location to get these benefits, since on-premise pay-as-you-go services are readily available for storage, backup, servers, VM, DR, and application hosting. For IT infrastructure, many are seeing the elasticity cost benefits of these new models. You want and need to own your data, but not necessarily all of the attending infrastructure.

    The old adage to measure twice and cut once applies here too. As organizations make strategic and tactical plans (around this time of year), make sure the actions and investments are aligned to the costs that you want to reduce. Measure, check, and measure again to ensure that the investments that you are planning for 2015-17 will produce the desired outcomes.


    Hybrid Cloud Economics–Part 3

    by David Merrill on Jul 17, 2014

     

    In conclusion to my 3-part series on hybrid cloud cost, let’s review a few key points:

    • Hybrid clouds can bring the best part of public and private ownership to the local storage team
    • Crossover points are important, and there is no one-size-fits-all solution
• The time dimension is important; be sure to have a time horizon in your comparisons that includes multiple events like migrations or changes of vendors
• Tightly integrated solutions that allow multi-vendor, seamless cloud integration are the most important architecture consideration
    • Costs are still being shifted and masked in any and all cloud economic comparisons. Make sure you consider more than just the subscription cost or the purchase price

    A few of the most common costs for hybrid cloud total cost comparisons include:

    • HW depreciation, subscription fees
    • SW licenses
    • HW and SW maintenance
    • Labor (storage management, monitoring, engineering)
    • Environmentals (power, cooling, floor space)
    • On-boarding
    • Off-boarding
    • Migration, remastering
    • Long distance circuits
    • Provisioning time, procurement time
    • Off-premise risk
    • Usage penalty
    • Latency risk

Now let's consider a modeling approach that can be used to find the sweet spot for your own situation. I have built a simple XLS-based model that has been useful to find these sweet spots and sour spots for different deployment methods. In my model, I compare doing it yourself (DIY) with simple and advanced storage architectures. Advanced architectures involve object store, virtualization, dynamic provisioning, tiers, etc. Step one is to define some parameters that are variable. Here are some of the variables that my model uses:

[Image: Screen-Shot-2014-07-16-at-6.52.45-PM.png]

    Next, you have to determine what costs are to be used in the TCO model. See the list above for some of the most common costs. After that, I think there are some knobs or levers that change the results for each simulation. These knobs have to provide the variable costs for the comparison, and I think they are:

• Growth rate – if you see hyper-growth or unpredictable growth, the public and hybrid options will have an advantage
    • Daily access – some CSPs charge you access fees to get to your data. If the frequency of access is high, it might be better to have the data local
    • Refresh frequency – how often will you change vendors, CSP or refresh the local equipment. On-boarding, off-boarding and migration costs creep in with this variable
• What service levels or expectations are there for data access? Can your users tolerate waiting minutes, hours or days for data to be available?
    • Are there risks for having data off-premise (this may include out of your country)? If these risks or compliance factors can be quantified, they belong in the model

    Some of the costs are hard or direct costs. Some of them are soft costs. You choose how to define and characterize each. Here is how I depict these knobs in my model (using slider bars for relative impact):

[Image: Screen-Shot-2014-07-16-at-6.53.34-PM.png]

And finally, there needs to be a graphic representation of the results. In my model, I use TCO per TB-month as the metric.

[Image: Screen-Shot-2014-07-16-at-6.56.53-PM.png]

In the above example, the relative positions of the knobs suggested that DIY with an advanced storage architecture is a little better in total cost than hybrid. If you change the risk rate, the growth rate or the access rates, the numbers and results (or the winner) would change. This is an effective technique to demonstrate how and when some public or hybrid cloud arrangements can actually be more costly than DIY. Models like this need constant tweaking and updating, since subscription rates, storage prices and network costs seem to come down all the time, while some costs like compliance and risk go up each year.
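For readers who want to tinker without the spreadsheet, below is a minimal Python sketch of the same kind of comparison. Every rate, price and knob value in it is a hypothetical placeholder (not a figure from my XLS model), so treat it as a starting point for your own numbers.

```python
# Minimal sketch of a TCO-per-TB-month comparison between DIY (advanced
# architecture) and a hybrid cloud tier. All rates, prices and knob values
# below are hypothetical placeholders, not figures from the XLS model.

MONTHS = 60                       # time horizon: 5 years
start_tb = 500                    # starting usable capacity
annual_growth = 0.40              # "growth rate" knob
monthly_growth = (1 + annual_growth) ** (1 / 12) - 1

def diy_advanced_cost(tb):
    """DIY with an advanced architecture: owned capacity plus labor and environmentals."""
    capex_per_tb_month = 17.0     # depreciation of purchased capacity, $/TB-month
    opex_per_tb_month = 7.0       # labor, power, cooling, maintenance, $/TB-month
    return tb * (capex_per_tb_month + opex_per_tb_month)

def hybrid_cost(tb, local_fraction=0.3):
    """Hybrid: a local tier for hot data plus subscribed capacity for the rest."""
    local_tb, remote_tb = tb * local_fraction, tb * (1 - local_fraction)
    subscription_per_tb_month = 22.0   # CSP fee, $/TB-month
    access_fee_per_tb_month = 3.0      # "daily access" knob, $/TB-month
    return diy_advanced_cost(local_tb) + remote_tb * (
        subscription_per_tb_month + access_fee_per_tb_month)

tb = start_tb
tb_months = 0.0
totals = {"DIY advanced": 0.0, "hybrid": 0.0}
for _ in range(MONTHS):
    totals["DIY advanced"] += diy_advanced_cost(tb)
    totals["hybrid"] += hybrid_cost(tb)
    tb_months += tb
    tb *= 1 + monthly_growth

for name, total in totals.items():
    print(f"{name:>12}: ${total / tb_months:,.2f} per TB-month")
```

Moving the knobs (growth rate, access fee, local fraction) moves the winner, which is exactly the point of the exercise.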

The conclusion is that modeling hybrid vs. public vs. private vs. DIY is a deterministic exercise for costs. There are multiple variables and conditions that can impact the outcome. Defining and characterizing the multiple costs that are important to you, and then modeling the result under various conditions, can help determine trends and sweet spots. Generalized claims made by vendors, CSPs and others have to be carefully framed in the scope and context where they are defined. Your mileage will vary. Your conditions will vary. But the good news is that new delivery methods for data storage continue to evolve and will put pressure on traditional storage architectures. Be sure to find your sweet spot.


    Hybrid Cloud Economics–Part 2

    by David Merrill on Jul 11, 2014

     

This is a continuation of a 3-part series on the economic factors for hybrid clouds. Let’s take a quick look at the pros and cons of cloud storage vs. DIY storage (in this case a low-cost tier of storage for archive or object store). If we take the best of each and build a hybrid cloud tier, the results are impressive, since we can select the strongest benefits of the DIY and public cloud offerings in the design of the hybrid.

     

[Image: Screen-Shot-2014-07-10-at-6.47.37-PM.png]

    What is the definition of a hybrid cloud? Here are some summary points:

    • A hybrid cloud is a cloud computing environment in which an organization provides and manages some resources in-house and has others provided externally.
    • The hybrid approach allows a business to take advantage of the scalability and cost-effectiveness that a public cloud computing environment offers without exposing mission-critical applications and data to third-party vulnerabilities.
    • To be effective, a management strategy for hybrid cloud deployment should address configuration management, change control, security, fault management and budgeting. Because a hybrid cloud combines public cloud and private data center principles, it’s possible to plan a hybrid cloud deployment from either of these starting points.
    • Some key qualities:
      • Single, seamless integration between local assets and remote
      • Cloud-provider independent
      • Easy to move up and down
      • Enables end-user data mobility
      • Local security, management, protection schemes are resident with local IT

    What do the analysts have to say about hybrid cloud for storage?

    • Hybrid clouds will be in most large enterprises by 2017 (Gartner)
    • Most private clouds will have a hybrid component (Forrester)
• The public cloud isn’t right for everything, but organizations still want some sort of cloud, so it usually ends up being a hybrid deployment (NetworkWorld)
    • The hybrid cloud model offers the best balance of cloud benefits and risks (TechTarget)

In summary, the economics of the hybrid model can be extracted from the best qualities of both. Risks can be minimized since the IT department determines the line of demarcation between local and remote storage. See the summary below for hybrid:

[Image: Screen-Shot-2014-07-10-at-6.45.09-PM.png]

The next and final entry in this series will cover some comparative cost models and case studies on the economics from customers that have used hybrid clouds.


    Hybrid Cloud Economics

    by David Merrill on Jun 24, 2014

     

Over the last 5-6 years, I have done extensive modeling and writing about the economics of storage (and VM) in the cloud. Like most vendors, we have worked to find defensive positions against public cloud offerings. These public offerings come with an extremely low consumption price, and that has been attractive to many, despite the variety of technical and business concerns with public clouds. Most of my own economic work has been to define the total costs of cloud options, as compared to hosting infrastructure in your own data center. Not surprisingly, there is always a cross-over point where the sweet spot begins or ends for any technology.

     

[Image: daveblog.png]

Over the years, I have built several public and internal HDS models to compare cloud options for:

    • Tier 4 very low cost storage
    • Archive and backup storage
    • Very long term (50-100 years) data retention
    • Cross-over points for private and public cloud architectures
    • And a few others that I have probably forgotten…

In the early days of cloud adoption, there was a significant shift in costs by moving to the cloud, with questionable results as to whether hard costs really went down. Cloud computing introduced net-new costs that have to be considered in TCO calculations (on-boarding, over-use penalties, latency risk, etc.). I have used economic methods and simulations to show where the cross-over point exists for a given technology. Some of these cloud economic cross-over points can be measured in terms of:

    • Time
    • Growth rate
    • Access rates, frequency
    • Overall capacity
    • Elasticity
    • Risk

The graph above is a simulation to show total cumulative costs of owning an object store solution (HDS HCP) vs. consuming object storage capacity in a cloud offering. You can see the cloud option shows economic superiority after 4 years, and this is primarily due to the lack of migration or remastering needed with the cloud offering.
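To show the kind of arithmetic that sits behind a chart like this, here is a minimal sketch with invented figures (not the numbers from my HCP simulation); the owned option is cheaper per year until the migration and remastering event at refresh pushes its cumulative curve past the cloud subscription.

```python
# Illustrative crossover calculation: cumulative cost of owning an object store
# vs. subscribing to cloud object storage. All figures are hypothetical
# assumptions, not the numbers behind the chart above.

YEARS = 8
depreciation_per_year = 60_000     # purchase amortized over its useful life
own_opex_per_year = 40_000         # maintenance, labor, environmentals
refresh_year, migration_cost = 5, 200_000   # remastering/migration at tech refresh

cloud_per_year = 120_000           # subscription plus access and network fees

own_cum = cloud_cum = 0
prev_leader = None
for year in range(1, YEARS + 1):
    own_cum += depreciation_per_year + own_opex_per_year
    if year == refresh_year:
        own_cum += migration_cost   # the event the cloud option avoids
    cloud_cum += cloud_per_year
    leader = "own" if own_cum < cloud_cum else "cloud"
    note = "  <-- crossover" if prev_leader and leader != prev_leader else ""
    print(f"year {year}: own ${own_cum:>9,}   cloud ${cloud_cum:>9,}   cheaper: {leader}{note}")
    prev_leader = leader
```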

Even given cloud shortcomings, situations have always existed where storage in the cloud made sense from operational and economic perspectives. There are also many situations where it does not make sense.

    Recently, the discussion around cloud and DIY has been less adversarial and more cooperative. More customers want the best of both worlds in having fast access to some data, with low cost and elasticity for other kinds of data. The management needs to be unified, using open standards and avoiding vendor lock-in. Simply put, the time is right for highly integrated hybrid clouds. My next few blogs will outline the economic efficiencies of hybrids, and how the economics change with these new options.


    Business Defined IT–Part 2

    by David Merrill on Jun 23, 2014

     

A few trips to Japan and more than a few domestic trips have gotten me behind on my blog postings. This is a follow-up to my last post, over a month ago, on Business Defined IT. As previously covered, business really boils down to two key dimensions: time and money. Let’s get into the money aspect now.

     

I am coming across many more IT organizations looking to become profit centers. This is an interesting trend (IT as a Profit Center) that is helping to turn once-expensive IT organizations into profit-making elements of the business. I see IT-as-a-profit-center most often in the health care vertical, but it is emerging elsewhere. One of the key reasons for this move is to reverse the perception that IT services, data and systems only generate cost, when in fact the data that is generated can be turned into digital gold with analytics. There is gold in all those transactions, tweets, logs and machine-to-machine messages. The work to harvest the gold from within IT is only really beginning, and the trend to see that IT has financial-gain value will accelerate in the upcoming months and years.

IT can still be seen as a drain on CAPEX and OPEX, with so many sprawling systems, software, people and data centers. IT planners have to understand where their money is going in support of systems, applications, lines of business, etc. When effective chargeback systems are used, the link from spending to business functions can be established. With chargeback and metering, the business can see where the investments are being made, measurable gains can be recorded, and proper investment planning can occur. IT needs to move toward becoming a profit center, even if that means changing the outward perspective. Tracking costs and profits requires a level of recording, transacting, measuring and reporting on results. IT cannot hide behind large cabinets of servers and storage any more. The costs need to be transparent, and eventually linked to the lines of business.
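As a simple illustration of the metering-to-chargeback link, here is a hypothetical sketch in Python: metered consumption per line of business multiplied by published unit rates from a service catalog. The rates and usage figures are invented for the example.

```python
# Hypothetical chargeback/showback sketch: metered consumption per line of
# business multiplied by a published unit rate. All rates and usage are invented.

unit_rates = {            # $/unit-month from a service catalog
    "tier1_tb": 120.0,
    "tier3_tb": 35.0,
    "vm_instance": 55.0,
    "backup_tb": 20.0,
}

metered_usage = {         # monthly metering feed, by line of business
    "claims_processing": {"tier1_tb": 40, "vm_instance": 120, "backup_tb": 60},
    "analytics":         {"tier3_tb": 300, "vm_instance": 45,  "backup_tb": 150},
    "hr":                {"tier3_tb": 10,  "vm_instance": 8,   "backup_tb": 12},
}

for lob, usage in metered_usage.items():
    charge = sum(qty * unit_rates[item] for item, qty in usage.items())
    print(f"{lob:<20} ${charge:>10,.2f} / month")
```

Even a simple monthly report like this makes the spending-to-business-function link visible, which is the first step toward treating IT as something other than an undifferentiated cost.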

    This transformation and transparency is happening all over the cloud space now, so one can argue that the business value of IT (private or public) is becoming more visible all the time.

    These are exciting times to be in IT. There are many more options to deliver efficient and cost-effective solutions for business groups. Understanding, measuring and reporting IT results in the context of time and money can help bridge some of the old gaps that used to exist between the business group and the IT group.


    Business Defined IT

    by David Merrill on May 5, 2014

     

Last week HDS made several key solution and product announcements, specifically around the VSP G1000 and Continuous Cloud Infrastructure, which is enabled by new products and services. One of the key themes embedded in the announcement materials is “Business Defined IT” (BD-IT), a business-view corollary to the software-defined world that IT is moving toward. I would like to expand on some of the key concepts around business defined IT, in addition to the materials included in last week’s announcement.

     

    Read How You Can Enable the First Business-Defined IT Architecture

    Watch the Continuous Cloud Infrastructure Slidecast

The phrase business-defined anything (IT, customer service, logistics) refers to the focus on business. We are giving business managers the leverage to control the trajectory and speed of business. Two dimensions are essential in business improvements, and certainly with the IT focus for which HDS and others provide solutions. These two dimensions, which need to be isolated and used for reference, are time and money. Time is money, money takes time, etc. As IT makes fundamental changes in architecture, best practices, solutions, perspectives and behaviors, it will be important to understand these two business dimensions and for IT to be able to speak in the context of these two terms. IT has to develop the practice of discussing time and money in the business context, not just in computer and software terms.

Let’s take a look at the business perspective of time, and how IT services have to deliver better time outcomes for the business.

    • Time to market (TTM) is a common business term. How long do products and services take to get to the customers? What part does IT play in helping to improve time to market? If it takes weeks or months to get IT resources, then the downstream effect for the business TTM will be at risk
• Time to acquire assets and solutions is a subset of TTM, but some of the new cloud offerings (with same-day service or self-provisioning) entice projects to go outside of traditional IT to get things faster
    • Time to find data, time to restore/retrieve data has direct business impacts. Lawsuits, contracts, compliance, and customer satisfaction all link to getting data re-hydrated and presentable for use
    • Time to learn new trends is one of the key points for big data analytics. We have a lot of content and information seemingly locked up with data volumes, old formats and old media. If this data could be re-presented in a usable way for trending and other analytics, we could make new and faster business decisions
• Performance and throughput are a direct function of time. IT tends to be sensitive to highly critical transactions, and performance latency can always have a negative business impact. Sometimes we fail to quantify or take credit for improving overall performance and I/O, or to link these improvements to business opportunities
    • There is IT setup and engineering time associated with scaling up and out. Businesses want to be agile, so that means we have to be equally efficient in scaling down and in. Cloud computing offers this capability very well. Internal IT needs to take this page from the cloud playbook to offer similar services that allow for on-demand commissioning and de-commissioning of IT resources
    • A new element of time for business is the effort and time spent waiting to ingest and assimilate new data and new data sources. Social media is a good example. IT could take a proactive leadership role to bring to business planners (near) real-time data to analyze and use for business differentiation
    • Some industries (entertainment for one) rely on the ability to remaster content. These events have historically taken a very long time to accomplish, and thus impact business agility. IT needs to address long-term (decades +) data preservation and media/format independence.
    • Archive terms – business planners have been conditioned to expect that data can be kept, secured or preserved for a few years. What if preservation could be practical for hundreds of years? Can the business adapt to new time horizons of data that can be used for future harvesting?

I will finish this discussion in my next blog posting. It will cover the money aspects that IT can do a better job of describing in business terms, and also the enablers that IT can exploit to provide business-sensitive results for time and money.


    IT as a Profit Center

    by David Merrill on Apr 3, 2014

     

    For decades, IT departments have been a cost center. Information technology is a cost to the company. The services and capabilities are undoubtedly valuable, but like legal, accounting, or human resources departments they are a cost to the company. In my humble opinion, in many cases this should change.

     

Most of my work as the Chief Economist at HDS has focused on ways and means to help IT departments reduce their costs. For a cost center, a dollar (or ringgit) saved is a dollar to the bottom line. Almost all of my efforts have been to identify, measure and recommend options to reduce costs. For IT service providers, or cloud service providers, the IT department is the profit center, and their cost reductions result in a lower cost of goods (COGS) that in turn improves profitability. These IT departments look for any and all ideas that can open new channels of business and revenue. Many of these operational and planning perspectives should become the norm in traditional cost-center IT departments.

Now let’s suppose for a minute that IT (or some percentage of IT) can be converted to a profit center. Cost optimization is still important, but the emphasis is now on how to turn higher profits. The mind-set, traditions and practices of IT would be challenged, and innovations would be embraced when they improve profits. Here are two simple areas where I have seen traditional IT departments consider for-profit expansion:

1. Private Cloud Service Provider – This has been most common in the government sector. One department becomes very good at delivering IT services and systems at a quality price, and then turns to other, smaller agencies to vend its offering. Government may not be a for-profit enterprise, but the IT groups that assume this type of leadership do transform themselves to be customer-sensitive, price-sensitive and service-conscious. By moving to a for-profit model, they ensure that their operations and standards contribute to the quality and value of the offering. They discourage and attack waste, long turn-around times, and ancillary resources and licenses. Some hospital and higher-education IT organizations are also making this transition by offering very customized and specific IT services to like-minded organizations with which they have a semi-formal or cooperative arrangement.

2. Next is the idea of Big Data. Here is a new opportunity where the IT department can take years of data (or metadata) and work it into a product offering to sell within or outside their own organization. The creation, management and ownership of the data is within IT, so why not turn around and drive revenue with assets within your control? This Bloomberg TV interview of an HDS VP in APAC shows one perspective on how this can be done.

These two examples may not be enough to convince someone to turn IT into a profit center. There are certainly several risks and issues, but the value of historical resources, or data, can be thought of as a commodity. Best practices and operational efficiencies could be sold to others that do not possess the same levels of cost effectiveness. Moving to a profit-center mindset may also change some of the architectural, operational and attitudinal traditions that exist within IT, and at the least may lead to a secondary path of OPEX and unit cost efficiencies. So even if you don’t change to a profit center, the planning and process might well result in better cost-center results.


    The Storage Costs of Protection

    by David Merrill on Mar 3, 2014

     

I have presented several blog entries on the topic of data protection costs. In a previous blog I suggested that we cannot afford to protect data at the rate we are now accustomed to.
In another entry, I attempted to outline how data protection trends have given rise to over-protected or over-insured data.

     

    Next, I outlined ways to help calculate the costs of protection, followed by a post on options to reduce data protection rates, and therefore data protection costs.
And finally, I wrote a post about how to address or quantify the risks associated with not protecting data. This is key if we are to weigh the risks against the costs of protection.

A colleague sent me this article from The Register, and it supports some of these same concepts. Data growth is exploding. Data protection is expensive. This is not just an indictment of backup costs, but of all forms of data protection that we currently use:

    • Clones
    • Copies
    • Snaps
    • Sync and Async replication
    • Tape backup, VTL, disk backups

    Here is another good article with an IDC reference around the cost of protection.

Our first reaction to these kinds of statements is that we are afraid or unwilling to introduce more risk. Disk is cheap, right? Protection is cheap and easy, right? I do not want to be the storage manager or IT manager that has an unrecoverable issue on my watch.

The truth is that the over-protection rate is unsustainable. Budgets are being cut, and efficiencies in cloud and utility services are being explored to (in part) reduce overall IT costs. As it is still early in 2014, this is a good time to stop and measure the total capacity used for data protection. This capacity can then be put into terms of total cost (roughly 4x the price you paid for it), which helps shed some light on what the total protection cost in storage really looks like. These costs can be compared to the cost of risk, and the annualized loss expectancy (ALE) that your organization may have. Now is the best time to stop and look, and to make sure that your insurance and protection costs are aligned with the business needs and IT economic requirements.
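As a hedged illustration of that comparison, here is a simple sketch that puts an annual protection cost next to an annualized loss expectancy (ALE = single loss expectancy times annual rate of occurrence); every input is hypothetical.

```python
# Hypothetical comparison of annual data-protection spend vs. annualized loss
# expectancy. All inputs are invented for illustration.

protection_tb = 800                 # capacity consumed by copies, clones, snaps, backups
price_per_tb = 500                  # purchase price, $/TB
tco_multiple = 4                    # rough total-cost multiple on the purchase price
useful_life_years = 4

annual_protection_cost = protection_tb * price_per_tb * tco_multiple / useful_life_years

single_loss_expectancy = 2_000_000  # business impact of one unrecoverable loss event
annual_rate_of_occurrence = 0.05    # expected events per year
ale = single_loss_expectancy * annual_rate_of_occurrence

print(f"annual protection cost:     ${annual_protection_cost:,.0f}")
print(f"annualized loss expectancy: ${ale:,.0f}")
print("over-insured" if annual_protection_cost > ale else "protection roughly justified")
```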


    SSD Opens the Way for More Tiers of Storage

    by David Merrill on Feb 4, 2014

     

I came across this graphic from the HDS Competitive Marketing Group, and it depicts very well how SSD/flash innovation fits into traditional storage architectures.

[Image: server.jpg]

What used to be simple tiers 1, 2 and 3 has become a range of storage tiers from –2 to 4, plus cloud. I like how the graphic helps segment how the different technology options for data storage and memory should be planned. My interest now is to see if it is easy to define the cost ratios for these new kinds of (negative) tiers. I previously blogged on the optimal cost ratios of a 3-tiered or 4-tiered architecture. Read these here:
    Defining Costs for Storage Tiers
    Big Data Cost Tier/Ratio
    Recipe for Storage Chargeback
    Economic Best Practices

    I would be interested if other work has looked at the total cost (not just acquisition cost) of the memory and local SSD architecture options. Know of anything? Please share in the comments section.
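In the meantime, here is a hedged sketch of how tier cost ratios and a capacity mix might be laid out to compute a blended cost; the ratios and mix are assumptions for illustration, not published figures.

```python
# Hypothetical tier cost ratios (relative to tier 1 = 1.0) and a blended
# $/TB calculation; the ratios and capacity mix are illustrative only.

tier1_cost_per_tb = 3000.0        # fully loaded, $/TB (assumed)

tiers = {                         # tier: (cost ratio vs. tier 1, share of capacity)
    "tier -2 (in-memory)":  (8.0, 0.01),
    "tier -1 (server SSD)": (4.0, 0.04),
    "tier 1 (array flash)": (1.0, 0.15),
    "tier 2 (SAS)":         (0.6, 0.30),
    "tier 3 (NL-SAS)":      (0.3, 0.35),
    "tier 4 / cloud":       (0.1, 0.15),
}

blended = sum(ratio * share for ratio, share in tiers.values()) * tier1_cost_per_tb
for name, (ratio, share) in tiers.items():
    print(f"{name:<22} ratio {ratio:>4}  share {share:>5.0%}")
print(f"blended cost: ${blended:,.0f} per TB")
```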


    2014 Prediction #3 – Price Erosion Will Not Keep Up with Growth Rates

    by David Merrill on Jan 30, 2014

     

This post is the third in my series of predictions on the economics of IT. Be sure to read predictions 1 and 2 if you missed them. My third prediction has to do with spending, price erosion and growth. From a macro view, my prediction is that Moore’s law cannot keep up with the rate of growth while holding CAPEX flat.

     

What exactly do I mean by this statement? First, let’s take a look at storage economic trends for the past 10-12 years:

• We have enjoyed relative price-per-GB erosion for the past several years. Some attribute storage prices to Moore’s law (which has to do with transistor density), but in any event, price erosion has been a friend to CAPEX and purchasing for several years. Rather than Moore’s law, storage price points have been better described by Kryder’s law. For more, here is another good article on price erosion.
    • We started to see price erosion wane in 2012, in part due to drive supply problems resulting from Thailand floods and the tsunami in Japan
    • Many in our industry are stating that Moore’s law is dead, and that any real density that leads to increased capacity (with price erosion) is slowing down
• The growth rate of data storage is really changing (exploding); several factors are at work here
• With price erosion slowing down and capacity demands ramping up, year-on-year CAPEX spend is going to have to increase. This is not a popular message while the economy is still fragile and CAPEX spending is still highly scrutinized
• My suggestion is to spend more time putting in place architectures, best practices and new procurement systems to offset the rise in CAPEX. These activities are well-documented and include:
      • Storage virtualization to reclaim stranded capacity
      • Dynamic tiering to push data to the right tier, at the right time and for the right cost
      • Reduce the amount of capacity that is used for intentional and unintentional copies
      • Archive more capacity, and with single-instancing, reduce the total long-term capacity footprint

In the past, an annual price erosion of 25% combined with a growth rate of 25% would translate into a relatively flat year-on-year CAPEX spend. In other words, the capacity increase is offset (somewhat) by the price decrease, and the net result is nearly flat CAPEX spend over a period of time. The graphic below depicts the green flat line of CAPEX spend that is a function of price erosion and capacity growth.
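The arithmetic behind that flat (and then not-so-flat) line is easy to sketch before looking at the chart; the growth and erosion rates in this snippet are illustrative assumptions, not forecasts.

```python
# Illustrative CAPEX projection: annual spend = capacity purchased x unit price.
# The growth and erosion rates below are assumptions for the sketch, not forecasts.

def capex_series(years, growth, erosion, start_tb=1000, start_price=500.0):
    tb, price, series = start_tb, start_price, []
    for _ in range(years):
        series.append(tb * price)
        tb *= 1 + growth       # capacity demand grows
        price *= 1 - erosion   # unit price erodes
    return series

flat_era = capex_series(5, growth=0.25, erosion=0.25)   # growth offset by erosion
new_era = capex_series(5, growth=0.40, erosion=0.10)    # erosion slows, growth accelerates

for year, (a, b) in enumerate(zip(flat_era, new_era), start=1):
    print(f"year {year}:  25%/25% -> ${a:>10,.0f}    40%/10% -> ${b:>10,.0f}")
```

With matched rates the spend stays roughly flat; once erosion slows and growth accelerates, the second column climbs quickly, which is the whole point of this prediction.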

[Image: chart.jpg]

The Y-axis is logarithmic, so you cannot take the graphic in absolute terms. But the message is the relatively flat line that we had enjoyed until 2012. Like I said, things started changing around that time. So back to my prediction: if capacity rates are accelerating and price erosion is ‘eroding’, then the result will be growth in CAPEX spend for storage. The CAPEX difference may be hard to distinguish in the graphic above, but the numbers indicate that CAPEX spending may increase 30-300% over what we have become accustomed to.

Some of these cost curves and price erosion rates apply well to the SSD and flash markets. I will cover these in another blog segment. For now, my predictions and observations tend to track well to spinning disk media.

Not many procurement executives will be happy to hear that storage spending may have to increase by these amounts. In response, storage planners and architects need to implement solutions and operational changes to offset a spike in CAPEX. Some of the options to hold down and reduce the costs include:

    • Cloud computing for elastic consumption
    • Private cloud, with utility pricing for consumption to pay for what you need, when you need it (removing all the reserve)
    • Virtualization, over-subscription and capacity reclamation. I am seeing customers reclaim current capacity (live off the body fat) and deferring capacity upgrades. Simple measurements of utilization and written-to-raw ratios will help you determine how much more ‘dieting’ is required
    • Reduction of copies and protection capacity (see my blog series on being over-insured and over-protected)
    • Take on an aggressive archive or object-store approach
    • Put in place consumption-behavior changes to discourage wasteful practices. Charge users (departments) for what they consume, and market your storage offerings clearly and concisely with a services catalog

So that is my prediction, although it is not much of a prediction since these trends have been observable for the last 18-24 months. Aggressive plans and alternative architectures and ownership models need to be part of your 2014 plans in order to get ahead of these CAPEX spikes that may be coming your way.


    We Cannot Afford to Protect, Certify, and Encrypt All of the Data That Current Traditions Expect

    by David Merrill on Jan 21, 2014

     

    This is the 2nd installment of my series on 2014 IT economic trends (read the previous posts here and here). This second prediction may not ring true for all IT planners and architects.

     

I started taking note of the costs of data protection (traditional backups, clones, copies and snaps) earlier in 2013. These two older blogs present my observations and recommendations from last year:
    Are We Over-protected/Over-insured?
    Calculating the Total Cost of your Data Protection

During this time of observing data protection cost ratios (in storage baseline work that I was doing), I repeatedly saw situations where 30-50% of storage TCO was attributed to data protection cost areas. The reason for the high TCO percentage was the pattern of double, triple or quadruple data protection. Call it paranoia, but I often found that production databases were intentionally copied or replicated 5-6 times. It was also common to find unintentional copy instances that were applied 6-8 times. When it came time to present recommendations and final results to these customers, they were often surprised by the amount of copies or protection that was employed. Even knowing the copy frequency and its addition to total costs, it tends to be hard to figure out how best to address the replication and protection problem. No one ever wants to put systems or data at risk, but it became clear that we do (sometimes) over-protect our data.
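To see how copy counts translate into TCO percentages like those above, here is a rough sketch; the copy counts and unit costs are assumptions for illustration, not baseline data from any customer.

```python
# Hypothetical illustration of how copy counts drive the protection share of
# storage TCO. Copy counts and unit costs are assumptions, not baseline data.

primary_tb = 100
copies_per_primary_tb = 2.5       # clones, snaps, replicas, backup retention combined

primary_cost_per_tb = 3000.0      # fully loaded annual cost, production tier (assumed)
protection_cost_per_tb = 800.0    # fully loaded annual cost, protection tier (assumed)

primary_cost = primary_tb * primary_cost_per_tb
protection_cost = primary_tb * copies_per_primary_tb * protection_cost_per_tb
share = protection_cost / (primary_cost + protection_cost)

print(f"protection capacity: {primary_tb * copies_per_primary_tb:.0f} TB "
      f"on {primary_tb} TB of primary data")
print(f"protection share of storage TCO: {share:.0%}")
```

Even with protection copies landing on a cheaper tier, a 2-3x copy multiple puts the protection share of TCO squarely in the 30-50% band described above.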

    Most every IT organization would love to crack the code to reduce the cost, complexity and effort involved with traditional backups. There is no holy grail, but lots of smart people have practical ideas on how to do this.

    With the growth of data, some of the current trends or traditions of protection, clones, copies, encryption, etc. are just not economically feasible. The rate of growth, with the current protection methods, is unsustainable, given flat or slightly declining OPEX spending within IT.

The real problem tends to be that we find it hard to identify and isolate critical apps, log files, user files, etc. Very few IT organizations have a structured classification system, catalog or metering system applied to data and systems, so we are running a little blind relative to critical data protection. To be safe, the default tends to be to protect, or even triple-protect, everything. This is also unsustainable.

    I have heard some say that virtual machines are compounding some of these protection issues, and perhaps that is true. However, I think the lack of a classification or catalog system does not give planners the laser-focus on how to protect, encrypt and manage systems with different tiers of effort (and cost). We need this type of insight so as to protect the high-value assets with the right level of insurance, commensurate with the business value or risk.

    Encryption of data may see a different evolution in the short-term, where systems have embedded encryption chips and key management that makes it viable to encrypt everything. Our increased awareness of hackers, spies and human error may well require that we start to, or continue to, encrypt everything. There will be a cost inflection point for decisions like this, as nothing is without cost (human effort, licenses, processes and risk). See Hu’s blog on the need and ability to encrypt so much more.

    So my prediction of not being able to afford all these data protection and encryption luxuries (if you can call them that) still has to be balanced with the cost of business risk when things fail. We do not want to put our enterprises at risk, but we also face a mountain of data growth with decreasing budgets. This conundrum forces architects and planners to challenge everything, even the policies and practices that protect our infrastructure.


    Depreciation of IT Assets Will Diminish

    by David Merrill on Jan 8, 2014

     

    In my last blog of 2013, I set up a premise for IT and storage predictions in 2014, from an economic or financial perspective.

     

    This first prediction is not really radical, or even new. I, along with many others, have been talking about moving away from traditional depreciation of IT to more flexible, pay-as-you-consume methods. For more information see these links to my previous posts below:

    Economic Advantages of Pay-as-You-Grow or Utility Storage Services

    Considerations for Project-Based IT Funding

    Back to 1999 CAPEX Ratio

Challenging the Tradition of Depreciation for IT Ownership: An 11-Step Program

This movement away from depreciation does not imply a move to off-premise or cloud models. This projection has to do with how we own or use IT assets, who owns the assets, and the financial flexibility needed in the near term. We can still have the assets on our local premises, behind our secure firewall, and managed the same way as with depreciation, so the prediction here is not about off-site consumption, but local access with alternative ownership instruments.

Three-, four- or five-year depreciation methods for owning IT may not make as much sense now as they did 20 or 40 years ago, when the unit cost of IT was very high. With price erosion and new technology replacements happening at a very rapid rate, the depreciation ownership model does not fit for enterprises that need elastic scale (up and down) and the best technology of the moment to meet business challenges. Depreciation locks us into a platform or architecture for a long time (in IT terms, 4 or 5 years is a very long time).
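To illustrate the lock-in point, here is a hedged sketch comparing straight-line depreciation of an up-front purchase (sized for projected peak demand) with a pay-as-you-consume rate applied to actual usage; the capacities and rates are assumptions.

```python
# Hypothetical comparison: straight-line depreciation of an up-front purchase
# (sized for projected end-of-life demand) vs. paying a utility rate on actual
# consumption. All capacities and rates are illustrative assumptions.

YEARS = 4
purchased_tb = 2000                        # bought up front to cover projected peak demand
price_per_tb = 400.0
annual_depreciation = purchased_tb * price_per_tb / YEARS

utility_rate_per_tb_year = 220.0           # pay-as-you-consume rate (assumed)
consumed_tb_by_year = [600, 800, 700, 900] # demand turns out lower and lumpier than the peak

for year, consumed in enumerate(consumed_tb_by_year, start=1):
    utility_cost = consumed * utility_rate_per_tb_year
    print(f"year {year}: depreciation ${annual_depreciation:>9,.0f}   "
          f"utility ${utility_cost:>9,.0f}   ({consumed} TB consumed)")
```

In this invented scenario the consumption model wins because you never pay for the unused reserve; if actual usage tracked the purchased peak, depreciation could still come out ahead. The real knob is how predictable demand is.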

    With big data growth, analytics, and keeping-data-forever concepts floating around IT, we need to consider alternative methods to owning IT. Many IT companies need scale-up and scale-down capability, especially for lower-tiered IT assets (VM and storage). In fact, I am seeing that the alternative ownership model is becoming quite popular with the lower-tiered assets, and this will eventually move further up the IT chain. I can concede that mission critical IT systems will always be depreciated to insulate the business from uncertainties with a consumption model, but the trend to consume IT assets rather than owning them will continue to accelerate in the next 2-4 years.

    Many of our customers now are using creative or rolling-leases to move the depreciation expense to an operational cost. Some customers use pay-as-you-consume or utility models for some archive and backup-tiers of storage, and test/dev virtual machines. Some are using a consumption model where some of the assets are local, and some of them are remote (shared, cloud-like environment) to achieve even greater unit-cost benefits.

    A key ingredient to making these new ownership models a reality is the adoption and exploitation of virtualization. The growth or addition/subtraction of IT assets has to be abstracted away from the core applications. Vendor lock-in will be a problem, so virtualizing the storage volumes or virtual machines is becoming a prerequisite for highly adaptive IT architectures that want to move into these new methods.

    There are some industries, admittedly, that are highly leveraged with large assets (mining as an example) that will always use depreciation, irrespective of the company asset type. Therefore, not all will see this trend develop in the same way. Prepare for this now, as it may take time to convince the CFO or finance department that this move can reduce unit costs of IT. Prepare now by ensuring that your infrastructure, virtual architecture, best practices, catalogs, and charge-back systems are ready for these alternative ownership mechanisms.