
Available to registered customers and authorized partners, this free tool determines how much storage you will save using Hitachi FMD DC2 flash compression.  To download it, your company must be registered as a customer or partner with HDS, and you must register individually on the support portal (where the tool is located).  An eligible registered customer is a user whose email address belongs to a known HDS customer.  If you get an unauthorized message, contact your HDS sales or partner sales representative to be added to the list.

 

https://support.hds.com/en_us/user/downloads/detail.html?ppId=2242&d=flash-compression-estimator

   

One criticism we hear about the VSP family is that it isn’t ‘designed from the ground up for flash’.  That is an odd claim about a system that, at the high end, is capable of over 4M IOPS of sustained performance.  Over time, everything in the current-generation VSP hardware and software has been optimized for the utmost in performance, including for flash, while at the same time providing ultimate reliability, protection, and online serviceability.

With the VSP family, the answer to flash optimization is not just in the hardware; it is also in the Hitachi Storage Virtualization Operating System (SVOS).  Hitachi storage controllers are complex clusters of processors sharing various interconnect, cache, I/O, and media devices, and SVOS orchestrates connections and data transfers throughout the system.  The SVOS clustered operating system design provides massive parallelism in a non-blocking architecture for extreme throughput.  SVOS also includes significant patented, flash-focused engineering, with over 30 fundamental changes to enable higher thread-count processing and faster movement of data.  Throughout, SVOS is architected and optimized for maximum flash throughput.

With flash storage, high performance design starts with high performance data paths between flash devices and servers.  Why solve the I/O bottleneck in the device only to push the bottleneck up into the storage controller?  The most significant determinant of performance is the speed of the interconnects and the number of moves/copies of the data along the path.  Within Hitachi storage systems, interconnects and RAID calculations are handled by a mix of custom silicon logic and multiple parallel PCIe buses, achieving over 830GB/s of internal bandwidth and sub-millisecond response times.  So hardware bandwidth is high and latency is low for extreme performance.

But extra internal data copying steps can still kill performance, so SVOS is optimized to minimize them for flash: data is not copied into a buffer or cache and out again when it can be copied directly.  One example of these optimizations is unique cache bypass technology.  With HDS storage systems and traditional media, writes are copied through cache so that subsequent reads are fast and data stays consistent.  With (slower) disk systems this extra copy on the write does not affect performance, but with faster flash devices it does.  With FMD or FMD2 media, SVOS avoids this extra copy into cache on flash reads and writes, using patented “express” I/O processing along with caching algorithms and cache bypass optimized for flash.
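The idea behind this kind of media-aware write path can be sketched in a few lines of Python. This is purely illustrative of the concept, not SVOS internals; the Media and WritePath names and the decision rule are assumptions for the sketch.

```python
# Illustrative sketch only -- not the SVOS implementation. The enums and
# the decision rule are hypothetical, chosen to show the concept.
from enum import Enum

class Media(Enum):
    HDD = "hdd"
    FMD = "fmd"   # Hitachi Flash Module Drive (flash media)

class WritePath(Enum):
    CACHE_COPY = "copy into cache, destage later"
    CACHE_BYPASS = "transfer directly to the device"

def choose_write_path(media: Media) -> WritePath:
    """Pick the write path for an incoming host write.

    For slow rotating media, the extra copy into cache is negligible
    relative to seek time and makes subsequent reads fast.  For flash,
    the copy itself becomes a measurable overhead, so it is skipped.
    """
    if media is Media.FMD:
        return WritePath.CACHE_BYPASS
    return WritePath.CACHE_COPY
```

The point of the sketch is simply that the copy-elimination decision is made per media type, which is why the same operating system can serve both disk and flash well.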

Also within SVOS there is media-specific tuning of cache and I/O handling for specific devices.  The logic for random read miss/write miss throughput, usually the major part of back-end I/O processing, is fully optimized for flash.

Within the system, Dynamic Provisioning provides wide striping that automates performance tuning, self-optimizes performance and capacity allocations, eliminates hotspots, and improves wear leveling.  Dynamic Tiering has flash-specific read/write I/O profiling, modified tiering algorithms and buffer handling, and the new Active Flash feature, which enables real-time elevation to flash of suddenly active data on lower Dynamic Tiering tiers.
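To make the Active Flash idea concrete, here is a hypothetical sketch of real-time promotion of suddenly hot pages. The TierMonitor class, the single-counter heuristic, and the PROMOTE_THRESHOLD value are all assumptions for illustration, not the shipping Dynamic Tiering algorithm.

```python
# Hypothetical sketch in the spirit of the Active Flash feature described
# above.  Names, the threshold value, and the heuristic are assumptions.
from collections import Counter

PROMOTE_THRESHOLD = 50  # I/Os within one monitoring window (assumed value)

class TierMonitor:
    def __init__(self):
        self.io_counts = Counter()   # page_id -> I/Os in the current window
        self.flash_resident = set()  # pages already on the flash tier

    def record_io(self, page_id):
        """Count an I/O; promote the page as soon as it turns hot."""
        self.io_counts[page_id] += 1
        if (page_id not in self.flash_resident
                and self.io_counts[page_id] >= PROMOTE_THRESHOLD):
            self.flash_resident.add(page_id)  # elevate to flash in real time

    def end_window(self):
        """Reset the counters at the end of each monitoring window."""
        self.io_counts.clear()
```

The key contrast with classic tiering is in record_io: promotion happens inside the I/O path as soon as a page crosses the threshold, rather than waiting for the next scheduled relocation cycle.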

So at the end of the day, compared to an evolved SVOS and Hitachi storage, systems designed only for flash offer no performance advantage.  And a mature, fully featured system like the VSP family offers lots of advantages.

 

Our customers tell us that when they virtualize existing storage it often improves performance.  Recently we set out to do what we thought would be a useful measured comparison, using already fast external storage – Hitachi Virtual Storage Platforms.


The purpose of the testing was to measure the performance capabilities of Hitachi Universal Volume Manager (UVM) software on a Hitachi Virtual Storage Platform G1000 system for a variety of configurations and parameter settings. UVM is the standard SVOS component that enables virtualization of external storage.  This testing focused on some commonly asked questions about external storage performance and best practices, including the following:


1.    For Hitachi Virtual Storage Platform (VSP), how does its maximum performance compare when virtualized behind VSP G1000 to its direct-connect (or “native”) maximum performance? 

2.    What are the effects of, and recommendations for setting, Hitachi Universal Volume Manager (UVM) tunables such as cache mode and system option modes (SOMs)?

3.    On average, how much latency is added to or subtracted from each I/O by Universal Volume Manager?


The arrays and drives used in this testing were as follows:


Virtualization engines

·        VSP G1000 with 16 virtual storage directors (VSDs, the processor boards) and 1 TB cache, connected to virtualized array #1

·        VSP G1000 with 4 VSDs and 256 GB cache, connected to virtualized array #1

·        VSP G1000 with 8 VSDs and 512 GB cache, connected to virtualized array #2


Virtualized Arrays

·        VSP with 2048 x 146GB 15K rpm SAS drives using RAID-10 (2D+2D)

·        VSP G1000 with 96 FMDs in RAID-5 (7D+1P), 8 VSDs and 512 GB cache


The performance results achieved are still in the process of being published, but here are a few summary conclusions that can be drawn after measuring the performance of external storage virtualized with UVM on the VSP G1000:


The maximum IOPS the small configuration VSP G1000 delivered from external storage was 538,892 8K random reads from the external VSP, with an average response time of 15.3 ms. This was only 2.2% less than the maximum measured internal IOPS obtained from the same VSP during earlier scalability testing with 2,048 146GB 15K SAS HDDs. On average, the 32 VSD board cores of the small VSP G1000 were about 72% busy at the maximum IOPS rate, so VSP G1000 processing power was not the first constraint on performance. The first limit on IOPS was the external VSP processors, which were 87% busy at maximum IOPS.


The maximum performance of a virtualized VSP was 7.5% higher than its native performance for sequential read workloads, and the same as its native throughput for sequential write workloads. However, 100% random non-HDP workloads reached only 92-98% of native IOPS rates. According to performance monitor, the first constraint on performance during 100% random non-HDP testing was VSP processor busy, with added UVM latency accounting for the reduced IOPS.
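The link between the reduced IOPS rate and the added UVM latency can be sanity-checked with Little's Law (IOPS = outstanding I/Os / response time): at a fixed number of outstanding I/Os, delivering 92% of native IOPS implies roughly 1/0.92 - 1, or about 8.7%, higher response time. The sketch below works that out; the 0.25 ms native latency is an assumed example figure, not a measured result from this testing.

```python
# Back-of-the-envelope check using Little's Law: IOPS = concurrency / latency.
# At fixed concurrency, latency therefore scales as 1 / IOPS.

def implied_added_latency_ms(native_latency_ms, iops_fraction):
    """Latency increase implied by delivering a fraction of native IOPS."""
    virtualized_latency = native_latency_ms / iops_fraction
    return virtualized_latency - native_latency_ms

# Assumed example: 0.25 ms native latency, 92% of native IOPS through UVM.
added = implied_added_latency_ms(0.25, 0.92)   # ~0.022 ms added per I/O
pct_increase = added / 0.25 * 100              # ~8.7% higher response time
```

This is consistent with the performance monitor observation above: a small fixed latency per I/O translates into a single-digit percentage IOPS reduction at high load.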


When HDP was configured on the VSP G1000 using external pool volumes from the virtualized VSP, the performance obtained was up to 10% higher than native VSP HDP performance. It was possible to deliver “better than native” performance in this configuration, because the HDP overhead was shifted to VSP G1000.


So, since customers often virtualize storage that starts out much slower than the external VSP systems used in this evaluation, it is easy to see why they tell us that virtualizing existing storage often improves performance.

 


The dual concerns of cost savings and heat and power problems in the datacenter are on many people’s minds, and they are good reasons to reduce system power and cooling costs wherever possible.  One way to do this for storage is to spend less money keeping idle drives spinning.

 

That’s why Hitachi has now made MAID technology a standard feature of the HUS 100 family.  The term MAID stands for Massive Array of Idle Disks.  For those interested in reducing power and cooling costs in their data centers, the latest release of BOS M software now includes the new power savings plus feature with MAID capabilities.  The power savings plus enhancements enable automatic spin-down of disks when not in use and automatic spin-up upon application I/O.  This enables energy savings of up to 50% or more in normal production environments and is a cost savings idea for infrequently accessed data.  The new power savings plus feature offers:

 

  • Automated spin-down and spin-up linked to I/O frequency of selected RAID groups
  • Operates transparently to the application host servers
  • Customer installable, simple online management
  • No application scripting required
  • Can be used for all disk drive types
  • Now included in BOS-M for the HUS 100 with no extra licensing charges

 

In operation, power savings plus reduces the power consumption of storage systems by automatically spinning down idle disk drives when a lack of I/O is detected, based on a user-configurable timeout period.  When data on a spun-down drive receives an access request (I/O), the drive is automatically spun up.  While spun down, a drive fully stops the rotation of its platters.  The reduction in power consumption on spun-down drives is between 35% and 58%, depending on the type and configuration of the drives.  Of course, this operation introduces a delay on the first I/O access to a spun-down drive, so it is recommended to only spin down drives where occasional slower access time is acceptable.  There is no impact on performance once a drive is spun up, and the system performs automatic ongoing health checks on spun-down drives to ensure reliability.
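The spin-down/spin-up behavior described above amounts to a simple idle-timeout state machine, sketched below. This is a hypothetical model for illustration; the class, its fields, and the default timeout value are assumptions, not the BOS-M implementation.

```python
# Illustrative idle-timeout spin-down controller, modeled on the behavior
# described above.  Names and the default timeout are assumptions.
import time

class MaidGroup:
    """A RAID group whose drives spin down after an idle timeout."""

    def __init__(self, idle_timeout_s=600):
        self.idle_timeout_s = idle_timeout_s  # user-configurable period
        self.last_io = time.monotonic()
        self.spun_down = False

    def on_io(self):
        """Host I/O arrives: spin up if needed (first I/O sees a delay)."""
        if self.spun_down:
            self.spun_down = False  # real drives take seconds to spin up
        self.last_io = time.monotonic()

    def tick(self):
        """Periodic check: spin down once the group has been idle."""
        if (not self.spun_down
                and time.monotonic() - self.last_io >= self.idle_timeout_s):
            self.spun_down = True
```

The model makes the trade-off visible: the only cost is the spin-up delay paid by the first I/O after an idle period, which is why the feature suits infrequently accessed RAID groups rather than an entire subsystem.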

 

Power savings plus is most useful for backup or archival storage, or hierarchical life cycle operations, as Masaryk University (http://www.hds.com/assets/pdf/hitachi-white-paper-masaryk-university-and-hitachi-power-savings.pdf) did using Symantec Storage Foundation.  While disk spin-down power savings is not generally something customers want for an entire subsystem, it is a valuable option for further cost savings over time, and the power savings plus feature can be enabled for some or all of the disk drives as desired.

 

Another reason to take a look at the Hitachi Unified Storage 100 family as a consolidation and cost savings tool!