If you google the acronym "IT," you will see definitions that look like this one from Wikipedia, a definition that has remained essentially unchanged for the last 30 to 40 years:


“Information technology (IT) is the application of computers and telecommunications equipment to store, retrieve, transmit and manipulate data, often in the context of a business or other enterprise.”




This generally accepted definition of IT is more about infrastructure technology than it is about information technology. Information technology certainly depends on infrastructure, but it also includes application technology, the technology that transforms data into useful information. Today we are seeing a major expansion in data and applications as we shift from a structured world of databases and data warehouses to an unstructured world of SMAC (social, mobile, analytics, and cloud). This unstructured world demands more agility and flexibility from applications as well as infrastructure.


In the structured world, applications were fairly stable, release cycles were two to three years or more, and as a result the majority of the IT budget was spent on scaling up the infrastructure. Here is a typical breakdown of IT budgets, which I found on the web and which is attributed to Microsoft.




Today, while we need to continue supporting our legacy core applications, we need to invest more of our IT budget in new applications around social, mobile, analytics, and cloud. In order to do this, the infrastructure must be more flexible and adaptable to the changing application environment. This requires a software-defined approach to infrastructure.


Paula Phipps has been following all the hype around software-defined everything, and has concluded that “there is something fundamentally right about an infrastructure that uses software as its secret sauce to change faster and keep up with the staggering pace of business.” In her recent blog post she points out that infrastructure requirements differ depending on the application, whether it is a legacy application like OLTP, ERP, or a data warehouse (DWH), or a next-generation application built around NoSQL, analytics, mobile, open source, and the like. This means that not only must the infrastructure be more agile and efficient; it must also be able to leverage its core strengths for different types of application workloads.


Software-Defined Infrastructure will enable us to alter IT spend so that it is more focused on applications and innovation and less on the infrastructure. Software-defined does not mean that everything is done in software and the infrastructure hardware is commoditized. Software-Defined Infrastructure is a combination of smart hardware and smart software. Tesla offers a good analogy: it builds a software-defined car. With a change in software, Tesla was able to boost the performance and driving range of its Model S sedan. This is possible because the hardware architecture of the Model S is radically different from that of other automobiles.


When you look for software-defined infrastructure, look for infrastructure hardware that can not only add new functionality but can also communicate that functionality to the application through software interfaces like APIs and client/provider interfaces. In this way the applications can leverage the advanced capabilities of new infrastructure hardware.

The Hitachi VSP G1000 enterprise storage array that I blogged about last week provides a software-defined infrastructure. The rich functionality provided by the Hitachi Storage Virtualization Operating System (SVOS) is not kept bottled up within the system. Through vStorage APIs for Storage Awareness (VASA), the VSP G1000 will be able to publish its capabilities to VMware, so that software in vSphere will be able to define a virtual volume that uses the unique capabilities of SVOS and the G1000. The G1000 can also virtualize other storage systems and enable them to participate in the software-defined infrastructure, since virtualization is embedded in SVOS rather than delivered as a proxy or pass-through virtualization appliance. This means all the rich virtualization functionality is equally accessible whether it is consumed as part of a converged infrastructure solution like the Hitachi Unified Compute Platform or as a standalone component, because extra devices and appliances are not required to extract the richness that storage virtualization provides.
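
To make the idea of capability publishing more concrete, here is a rough sketch in Python. It is purely illustrative: it does not use the actual VASA or SVOS interfaces, and the StorageArray, Capability, and provision_virtual_volume names are hypothetical. It simply shows the pattern of an array advertising its capabilities through a software interface, and a policy-driven consumer checking those capabilities before defining a virtual volume on it.

```python
# Illustrative sketch only. These classes and names are hypothetical and do not
# represent the real VASA or SVOS interfaces.

from dataclasses import dataclass, field


@dataclass
class Capability:
    """A single capability the array advertises, e.g. thin provisioning or tiering."""
    name: str
    value: str


@dataclass
class StorageArray:
    """Stands in for an array that publishes its capabilities through a software interface."""
    model: str
    capabilities: list = field(default_factory=list)

    def publish_capabilities(self):
        # In a real provider this would be exposed to the hypervisor's management layer;
        # here we simply return the advertised capabilities for the consumer to inspect.
        return {c.name: c.value for c in self.capabilities}

    def provision_virtual_volume(self, name, size_gb, required):
        # Only provision if every capability the policy asks for is advertised by the array.
        published = self.publish_capabilities()
        missing = [k for k, v in required.items() if published.get(k) != v]
        if missing:
            raise ValueError(f"Array {self.model} cannot satisfy: {missing}")
        return {"volume": name, "size_gb": size_gb, "placed_on": self.model}


# Hypothetical usage: an array advertises thin provisioning and automated tiering,
# and a policy-driven consumer asks for a volume that needs both.
array = StorageArray(
    model="hypothetical-enterprise-array",
    capabilities=[
        Capability("thin_provisioning", "enabled"),
        Capability("auto_tiering", "enabled"),
    ],
)

policy = {"thin_provisioning": "enabled", "auto_tiering": "enabled"}
volume = array.provision_virtual_volume("app-data-01", size_gb=512, required=policy)
print(volume)
```

The point of the pattern is that the application (or the hypervisor acting on its behalf) does not need to know how the array implements a capability; it only needs the array to publish what it can do and honor requests expressed against those published capabilities.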


In a recent blog post, Mike Nalls from Hitachi Data Systems Enterprise Product Marketing talks about the cost savings and other benefits that storage virtualization is bringing to midrange customers through reclaiming space, simplifying migration, and easing management.


With tools like virtualization of storage and servers, convergence, and software-defined infrastructure, we will be able to shift the focus of IT spending and allocate more for innovation.




Mike ends his post with the following questions:


“But Wait, Isn’t This Only For Fortune500 Accounts? Um… No.

Now imagine these abilities to reclaim space, simplify migration and ease management were available to you at your price point?”


The first question is raised because virtualization of external storage is done today by enterprise storage systems like the HUS/VM and the VSP, which are not generally considered to be at a midrange price point. However, the TCO of virtualization with the HUS/VM and VSP still provides compelling benefits in the midrange market. The latest VSP G1000 introduced another dimension to virtualization through global-active devices that span physically dispersed G1000s. The G1000 also changed our virtualization approach by encapsulating it in software, so that the virtualization of internal and external storage becomes software-defined through the implementation of a storage hypervisor, the Storage Virtualization Operating System (SVOS).


Since storage virtualization is now encapsulated in software, Mike poses the second question to stimulate our imagination. What if we could enjoy the benefits of storage virtualization at a lower price point, meaning in the midrange price range? That would be great, but how do you overcome the other limitations of midrange storage hardware, like static allocation of LUN ownership? SVOS is one of the key elements in the support for software-defined infrastructure, but this will require a new hardware architecture for the midrange. Will we be able to extend the software-defined capabilities of SVOS to an expanded family of VSP storage products, from entry to enterprise?


Since I mentioned Tesla’s software-defined car, I would like to add that Tesla is beginning to roll out a Range Assurance application, which will run constantly in the background, communicating in real time with Tesla's network of Superchargers and destination chargers to relieve users of range anxiety. This is an example of the connected car: applications running in the car connect with the infrastructure around them to improve the driving experience. Hitachi is very active in connected-car development through its Clarion subsidiary and Clarion's partnership with Hitachi Data Systems. Stay tuned for further announcements from Hitachi Data Systems in this area.