This post is a continuation of my blog series on top trends for 2015. It covers three trends under Continuous Infrastructure that support Business Defined IT.

 

5. Global Virtualization adds a new dimension to storage virtualization


Storage virtualization has been vertically oriented up to now: a storage virtualization engine virtualizes other storage systems that are attached behind it. Global virtualization extends virtualization horizontally across multiple storage systems, which can be separated by campus or metro distances for block storage and by global distances for file and content storage, creating virtual storage machines that present one pool of virtual storage resources spanning multiple physical storage systems. The separate storage systems are connected through software so that a virtual storage image can be read or written through either system. For block systems, Hitachi Data Systems provides the Storage Virtualization Operating System (SVOS), which acts like a storage hypervisor, providing synchronization across G1000 block storage systems so that applications continue to run through system failures with zero RTO and zero RPO. SVOS also provides configuration flexibility across campus and metro distances and enables non-disruptive migration for technology refresh. HNAS provides this capability with a global namespace for files, and HCP provides it with Global Access Topology for object stores.
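To make the active/active write path concrete, here is a minimal sketch in Python of a virtual volume that synchronously mirrors every write to two physical arrays so that either side can serve reads. This illustrates the concept only; the class and method names are hypothetical, not SVOS interfaces.

```python
class PhysicalArray:
    """Stand-in for one physical block storage system."""
    def __init__(self, name):
        self.name = name
        self.blocks = {}          # logical block address -> data

    def write(self, lba, data):
        self.blocks[lba] = data

    def read(self, lba):
        return self.blocks[lba]


class VirtualVolume:
    """One logical volume image spanning two arrays (active/active).

    A write is acknowledged only after BOTH arrays have it, which is
    what makes zero-RPO failover possible: either copy is current.
    """
    def __init__(self, primary, secondary):
        self.arrays = [primary, secondary]

    def write(self, lba, data):
        for array in self.arrays:          # synchronous mirror
            array.write(lba, data)
        return "ack"                       # ack only after both complete

    def read(self, lba, preferred=0):
        # Reads can be served by either side, e.g. whichever is local.
        return self.arrays[preferred].read(lba)


vol = VirtualVolume(PhysicalArray("site-A"), PhysicalArray("site-B"))
vol.write(lba=100, data=b"payload")
assert vol.read(100, preferred=1) == b"payload"   # survives loss of site A
```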


The advantage of having multiple storage systems actively supporting the same logical image of data is that the application can continue to run if one storage system fails. Up to now, only one storage system could be "active" in servicing an application; the data was protected by replication to another storage system in "passive" standby mode. If the active storage system failed, the application could be restarted on the replica of the data in the passive storage system. Restarting the application requires an outage, and even if the replication was synchronous, so that the data in the passive system exactly mirrored the active system at the time of the failure, recovery must still check for transaction consistency: logs have to be scanned to determine which transactions completed and which must be restarted before processing can resume. This process lengthens the application's recovery time and recovery point. In a configuration where both storage systems are active (active/active), the application maintains transaction consistency through the failure of either storage system. This will be particularly important for core applications that need to reduce their recovery time and recovery point objectives.
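The transaction-consistency check described above is essentially a log scan at restart. Here is a simplified, generic sketch (not tied to any particular database) of finding the in-flight transactions that force recovery work after a failover:

```python
def find_incomplete_transactions(log):
    """Scan a transaction log after failover and report which
    transactions must be rolled back or restarted.

    Each log record is (txn_id, action) with action in
    {"begin", "commit"}; this scan is part of why active/passive
    recovery adds to recovery time and recovery point objectives.
    """
    begun, committed = set(), set()
    for txn_id, action in log:
        if action == "begin":
            begun.add(txn_id)
        elif action == "commit":
            committed.add(txn_id)
    return begun - committed      # in flight at the time of failure


log = [(1, "begin"), (1, "commit"), (2, "begin")]   # txn 2 was in flight
print(find_incomplete_transactions(log))            # {2}
```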


 

These virtual storage machines should provide all the enterprise capabilities of previous enterprise storage virtualization systems, including virtualization of external storage, dynamic provisioning, replication of consistency groups, high performance, and high scalability. The magic is in the storage virtualization operating system that resides in these virtual storage machines. Think of it as a hypervisor for virtual storage machines, similar to hypervisors for virtual server machines like VMware or Hyper-V. Virtual storage machines that support active/active processing across physically separate storage systems can provide high availability with zero recovery time and zero recovery point objectives, configuration flexibility across campus or metro distances, and non-disruptive migration across technology refreshes.
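As a rough illustration of what a virtual storage machine definition might capture (a hypothetical configuration shape for this discussion, not an actual SVOS or HDS interface):

```python
from dataclasses import dataclass, field

@dataclass
class VirtualStorageMachine:
    """One virtual storage machine spanning physical systems.

    Hypothetical shape for illustration; the attributes mirror the
    enterprise capabilities listed above.
    """
    name: str
    physical_systems: list                  # arrays the machine spans
    external_storage: list = field(default_factory=list)  # virtualized arrays
    dynamic_provisioning: bool = True       # thin-provisioned pools
    consistency_groups: list = field(default_factory=list)

vsm = VirtualStorageMachine(
    name="payroll-vsm",
    physical_systems=["array-campus-east", "array-campus-west"],
    consistency_groups=["payroll-db", "payroll-logs"],
)
```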

 


6. A Greater Focus on Data Recovery and Management of Data Protection Copies


Surveys show that data protection continues to be the top concern for data center managers. The amount of backup data continues to explode, driving up recovery time and recovery point objectives. Much of the focus up to now has been on the backup side: converting to disk-based backup and using deduplication to reduce the cost of backup storage. In 2015 more of the focus will shift to driving down recovery time and recovery point objectives while continuing to reduce the cost of data protection.

 

Several techniques will be used to reduce recovery times:

- Aggressive use of active archives to shrink the working set required for recovery
- Increasing use of snaps and clones enabled by thin copy technologies (a minimal sketch of the thin-copy idea follows this list)
- Object stores for unstructured data that only needs to be replicated, not backed up
- File sync and share from a central repository
- Edge ingesters that replicate content to a central repository for immediate recovery
- Faster virtual tape libraries that support multi-streaming and multiplexing to reduce recovery time
- Active/active storage controllers, where virtual volumes span separate storage systems so that applications continue to run, with no recovery step, when one storage system fails
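Thin copy is one of the cheapest levers in that list: a snapshot shares all blocks with its source at creation and consumes space only for blocks that change afterward. A minimal copy-on-write sketch of the idea:

```python
class ThinSnapshotVolume:
    """Copy-on-write snapshots: a snapshot stores only the blocks
    that change after it is taken, so it is near-instant to create
    and cheap to keep (illustrative sketch only)."""
    def __init__(self):
        self.blocks = {}      # current volume: lba -> data
        self.snapshots = []   # each: dict of preserved old blocks

    def take_snapshot(self):
        self.snapshots.append({})          # no data copied at creation
        return len(self.snapshots) - 1     # snapshot id

    def write(self, lba, data):
        for snap in self.snapshots:
            if lba not in snap:            # preserve old data once per snap
                snap[lba] = self.blocks.get(lba)
        self.blocks[lba] = data

    def read_snapshot(self, snap_id, lba):
        # A block the snapshot never preserved is unchanged: read current.
        snap = self.snapshots[snap_id]
        return snap[lba] if lba in snap else self.blocks.get(lba)


vol = ThinSnapshotVolume()
vol.write(0, b"v1")
sid = vol.take_snapshot()         # near-instant, no data moved
vol.write(0, b"v2")               # old b"v1" preserved on first overwrite
assert vol.read_snapshot(sid, 0) == b"v1"
assert vol.blocks[0] == b"v2"
```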


 

The cost of protecting primary data has been exploding, not only because of data growth but also because of the growing number of test and development, data protection, and replication copies. A database may have 50 to 60 copies that are administered by different users for different purposes. Many copies become orphaned, with no owner or purpose except to consume storage resources, and when a recovery is required it is not clear which snap or replica should be used. IT administrators will need tools to discover, track, and manage copies, clones, snaps, and replicas of data stores across their environment in order to reduce the waste associated with all these copies and streamline the recovery process.
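The core of such a copy data management tool can be sketched as an inventory that ties every snap, clone, and replica back to an owner and a purpose, and flags the orphans. The record format below is hypothetical; a real tool would discover these records by scanning arrays and backup catalogs:

```python
from datetime import datetime, timedelta

# Hypothetical copy inventory records for illustration.
copies = [
    {"id": "snap-001", "source": "erp-db", "owner": "dba-team",
     "purpose": "backup", "created": datetime(2014, 11, 1)},
    {"id": "clone-042", "source": "erp-db", "owner": None,
     "purpose": None, "created": datetime(2014, 3, 15)},
]

def find_orphans(copies, max_age_days=90, now=None):
    """Flag copies with no owner or purpose, or older than policy
    allows -- candidates for reclamation before they waste storage."""
    now = now or datetime.now()
    cutoff = now - timedelta(days=max_age_days)
    return [c for c in copies
            if c["owner"] is None
            or c["purpose"] is None
            or c["created"] < cutoff]

for orphan in find_orphans(copies, now=datetime(2014, 12, 1)):
    print(f"reclaim candidate: {orphan['id']} (source {orphan['source']})")
```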


 

7. Increasing Intelligence in Enterprise Flash Modules

Flash technology has inherent limitations: writes can only go to freshly formatted blocks, blocks must be reformatted (erased) before reuse, invalidated pages must be reclaimed, write amplification must be contained, wear must be leveled across cells, and spare capacity must be managed. As a result, an enterprise flash storage module requires a considerable amount of processing power to maintain performance, extend durability, and increase flash capacity. In 2015 we will see Solid State Devices (SSDs) that were designed for the commodity PC market displaced by enterprise flash modules that carry the processing power required to address enterprise storage requirements for performance, durability, and capacity. New flash technologies like TLC and 3D NAND will be introduced to increase the capacity of flash drives, but they will further increase the demand for intelligence in the flash module to manage the additional complexity and lower durability of these technologies. Hitachi currently uses a quad-core processor in its flash module device (FMD), which supports 3.2 TB with higher sustained performance and durability than off-the-shelf SSDs. There is an opportunity to use the processing power in the FMD to provide further capacity and workload optimization for certain applications.
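To see why that processing power is needed, here is a toy flash translation layer sketch: flash pages cannot be overwritten in place, so every logical overwrite lands on a fresh page and invalidates the old one, and garbage collection must copy still-valid pages out of a block before erasing it, which is where write amplification comes from. This is an illustration only, not FMD firmware:

```python
class ToyFTL:
    """Toy flash translation layer: out-of-place writes plus a
    write-amplification counter (illustration, not FMD firmware)."""
    def __init__(self, pages=1024):
        self.mapping = {}          # logical page -> physical page
        self.free = list(range(pages))
        self.invalid = set()       # physical pages awaiting erase
        self.host_writes = 0
        self.flash_writes = 0

    def write(self, logical_page):
        self.host_writes += 1
        old = self.mapping.get(logical_page)
        if old is not None:
            self.invalid.add(old)  # cannot overwrite in place
        self.mapping[logical_page] = self.free.pop()
        self.flash_writes += 1

    def garbage_collect(self, victim_pages):
        """Erase victim pages: still-valid pages among them must be
        copied elsewhere first -- those extra flash writes are the
        write amplification the controller works to minimize."""
        for logical, phys in list(self.mapping.items()):
            if phys in victim_pages and phys not in self.invalid:
                self.mapping[logical] = self.free.pop()
                self.flash_writes += 1     # relocation write
        self.free.extend(victim_pages)     # erased, reusable again
        self.invalid -= set(victim_pages)

    def write_amplification(self):
        # > 1.0 once garbage collection relocates valid pages too.
        return self.flash_writes / max(self.host_writes, 1)


ftl = ToyFTL(pages=8)
for _ in range(3):
    ftl.write(logical_page=0)          # three overwrites of one page
ftl.garbage_collect(victim_pages={ftl.mapping[0]})  # relocates the valid page
print(ftl.write_amplification())       # 4 flash writes / 3 host writes
```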

 


 

 

Here again for reference are my ten IT trends for 2015. I will cover trends 8, 9, and 10 in my next post. I invite you to join me, along with George Crump, analyst with Storage Switzerland, on December 10 for a web discussion of these trends and their impact. Registration and details are available here.

 

1. Business Defined IT


Convergence, Automation, and Integration

2. New Capabilities Accelerate Adoption of Converged and Hyper-converged Platforms

3. Management Automation


Continuous Infrastructure

4. Software Defined

5. Global Virtualization adds a new dimension to storage virtualization

6. A Greater Focus on Data Recovery and Management of Data Protection Copies

7. Increasing Intelligence in Enterprise Flash Modules


2015 Trends: Big Data, Internet of Things, Data Lakes, and Hybrid Cloud

8. Big Data And Internet of Things

9. Data Lake for Big Data Analytics

10. Hybrid Cloud Gains Traction