Software-defined infrastructure provides flexibility and business agility, but it can only be as good as the underlying hardware it uses. One of the key tenets of software-defined infrastructure is abstraction, which eliminates complexity and provides functionality that is dynamic and scalable. Storage is only one part of the infrastructure, but it is the foundation for a software-defined infrastructure. How well the upper layers of the infrastructure can automate and efficiently access data depends on how we virtualize the storage layer.

 

Last week I talked to a director of storage infrastructure who had inherited an enterprise storage system from another vendor. It had been sold as a high-performance system and contained nothing but very expensive 15K RPM drives; he described it as a very expensive 15K JBOD (just a bunch of disks). The system did not scale and did not provide any external virtualization, which he could have used to tier less active data off to a legacy storage system that was sitting unused.

 

He was experiencing performance problems because some ports were overloaded while others sat idle. Although the vendor claimed this storage system had a global cache, the cache was in fact configured in a controller node along with processors and port directors, and the only sharing of data was across an external RapidIO switch to another controller node’s cache. If he wanted to add more cache, he had to buy another controller node, which also meant paying for the additional processors and port directors that came with it. Even after adding the new controller node, he would need to take an outage while the data was redistributed across the nodes.

 

Hitachi storage virtualization software makes all of this so much easier.

 

Hitachi’s Virtual Storage Platform (VSP) family of storage systems solves these problems through virtualization with a true global cache. This global cache enables any front-end port to talk to any back-end port through any cache module across a high-performance internal switch. With an internal switch you can add front-end or back-end ports, cache modules, or processor modules non-disruptively, without having to add whole controller nodes with embedded processors, cache, and port directors. The secret to our global cache is the internal switch plus a separate control store that lets us remap cache configurations by changing bits in the control store. There is no need for “BIN” files that statically map cache to ports, or for mapping tables in an appliance that map external LUNs to virtual LUNs.
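To make the idea concrete, here is a minimal, purely conceptual sketch of the difference between a static port-to-cache mapping and a control-store-style table that can be updated on the fly. This is an illustration of the concept only; the names and structures are mine, not VSP microcode.

```python
# Conceptual illustration only -- not actual VSP microcode or APIs.
# A static mapping (the "BIN file" style) pins each front-end port to one
# cache region; changing it means regenerating and reloading the whole map.
STATIC_MAP = {"port_1A": "cache_0", "port_2A": "cache_1"}  # fixed at configuration time

class ControlStore:
    """A tiny stand-in for a control store: a table of port-to-cache entries
    that can be updated in place while I/O continues."""
    def __init__(self):
        self.entries = {}

    def map_port(self, port, cache_segment):
        # Remapping is just a table update, so adding a cache module or a
        # port does not require adding a whole controller node.
        self.entries[port] = cache_segment

    def route(self, port):
        return self.entries[port]

cs = ControlStore()
cs.map_port("port_1A", "cache_0")
cs.map_port("port_2A", "cache_0")   # two ports sharing one cache segment
cs.map_port("port_2A", "cache_2")   # later rebalanced onto a newly added module
print(cs.route("port_2A"))          # -> cache_2
```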

 

The VSP family can scale from an entry 2U configuration to the largest enterprise configuration, using the same storage hypervisor in all five models of the VSP G series. The Storage Virtualization Operating System (SVOS) creates and manages virtual storage machines that can span internal and externally virtualized storage as well as active/active clusters of VSP systems. Using the power of Intel multi-core technology and SVOS, Hitachi is able to provide a global cache and deliver the same functionality from high-end enterprise systems down to midrange systems, which were previously limited to dual active/passive controllers. You no longer have to sacrifice enterprise capabilities due to budget or floor-space restrictions. All the VSP models support high-performance flash modules, which are managed with the same Hitachi Command Suite of tools. The only function missing from the smaller models of the VSP family is mainframe support; FICON channels and mainframe support are available in the G1000.

 

[Image: hu-051315-1.png]

 

If you have a legacy storage system or a dual-controller midrange system, you can virtualize it behind a VSP G, upgrade it with all the latest enterprise functionality of the VSP G, and participate in software-defined infrastructure through the interfaces the VSP G provides. If you don’t need additional storage capacity, you only need to add a diskless VSP G to virtualize and upgrade your existing storage. Other storage virtualization approaches rely on an appliance that acts as a proxy or pass-through to pool the external storage, but they do not provide any enhancements to the existing storage. SVOS can also partition storage resources for safe multi-tenancy, so that virtual storage machines sharing the same physical VSP can operate together without data leakage, performance impact, or escalation of management privileges from other virtual storage machines. Since SVOS is embedded in the VSP controllers, this partitioning is assisted by microcode to ensure it is enforced in hardware. Other approaches to virtualization, implemented in external appliances, cannot provide this level of security.
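As a purely conceptual sketch of the partitioning idea (illustrative names only, not SVOS code): each virtual storage machine gets its own slice of cache and ports, and any request that references another tenant’s resources is rejected at the boundary.

```python
# Conceptual illustration only -- not SVOS or VSP microcode.
class VirtualStorageMachine:
    def __init__(self, name, cache_gb, ports):
        self.name = name
        self.cache_gb = cache_gb     # cache slice reserved for this tenant
        self.ports = set(ports)      # front-end ports assigned to this tenant
        self.volumes = set()

    def add_volume(self, vol_id):
        self.volumes.add(vol_id)

def handle_io(vsm, port, vol_id):
    """Reject any request that crosses a partition boundary."""
    if port not in vsm.ports or vol_id not in vsm.volumes:
        raise PermissionError(f"{vsm.name}: request outside its partition")
    return f"{vsm.name}: I/O to {vol_id} via {port} allowed"

tenant_a = VirtualStorageMachine("tenant_a", cache_gb=64, ports={"1A", "2A"})
tenant_b = VirtualStorageMachine("tenant_b", cache_gb=32, ports={"3B"})
tenant_a.add_volume("vol_001")

print(handle_io(tenant_a, "1A", "vol_001"))   # allowed
# handle_io(tenant_b, "1A", "vol_001")        # would raise: another tenant's port and volume
```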

 

In the past, when storage bit densities were doubling every 18 months, storage systems would be replaced every 4 to 5 years, since it was more cost-effective to buy the latest disk technology than to maintain the old storage footprint. Those days are over. With higher-density 4 TB disks, it makes more sense to amortize the storage capacity over 7 to 10 years, as long as you can keep up with the latest storage enhancements through virtualization and can non-disruptively migrate large capacities when the economics make sense. I expect to see an increasing use of storage virtualization to extend the life of storage assets without sacrificing the latest advancements in storage management or software-defined infrastructure.
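A rough back-of-the-envelope calculation shows why the economics have shifted. The 18-month doubling period comes from the paragraph above; the slower 4-year doubling period is my own illustrative assumption, not a Hitachi figure.

```python
# Back-of-the-envelope only; the doubling periods and refresh interval are assumptions.
def cost_per_tb_improvement(years, doubling_period_years):
    """How much cheaper capacity gets over `years` if areal density
    (and hence $/TB) roughly halves every `doubling_period_years`."""
    return 2 ** (years / doubling_period_years)

# Old regime: density doubling every 18 months -> ~6x cheaper after 4 years,
# so replacing the frame beat maintaining the old footprint.
print(round(cost_per_tb_improvement(4, 1.5), 1))   # ~6.3

# Slower regime (assumed ~4-year doubling) -> only ~2x cheaper after 4 years,
# so amortizing over 7-10 years behind virtualization makes more sense.
print(round(cost_per_tb_improvement(4, 4.0), 1))   # 2.0
```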

 

To support software definition by upper layers of management, the VSP provides REST APIs and providers for interfaces like VMware APIs for Storage Awareness (VASA). Through VASA we can publish the capabilities of the VSP G to vSphere, and vSphere can define a virtual volume based on those capabilities. Here is a list of the software systems that are supported by the VSP:

 

[Image: hu-051315-2.png]
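To give a flavor of what consuming the array from upper layers of management looks like, here is a minimal sketch of a script driving storage provisioning through a REST API. The host name, endpoint path, and payload fields are hypothetical placeholders for illustration, not the actual VSP REST API.

```python
# Hypothetical sketch: the host, endpoint path, and fields below are
# illustrative placeholders, not the actual VSP REST API.
import requests

BASE = "https://storage-mgmt.example.com/api/v1"   # assumed management endpoint

def create_virtual_volume(pool_id, size_gb, tier_policy, session_token):
    """Ask the management layer to carve out a virtual volume with a
    given capacity and tiering policy."""
    payload = {"poolId": pool_id, "sizeGB": size_gb, "tierPolicy": tier_policy}
    resp = requests.post(
        f"{BASE}/virtual-volumes",
        json=payload,
        headers={"Authorization": f"Bearer {session_token}"},
    )
    resp.raise_for_status()
    return resp.json()   # e.g. the new volume's ID and properties

# A higher layer -- an orchestrator, or vSphere through a VASA provider --
# would drive calls like this instead of an administrator clicking a GUI.
```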

 

When I first started blogging over 10 years ago, I had many exchanges with other bloggers about whether Hitachi’s embedded virtualization was better than the appliance approach to virtualizing and pooling heterogeneous storage systems. When we announced the VSP with our third generation of virtualization, which included thin provisioning and automated tiering, those exchanges died away. Now, with our hypervisor approach to virtualization, which extends it horizontally across storage systems and scales it down to entry and midrange environments, I hope to spark some new debates on storage virtualization and extend them to software-defined infrastructures.