Since the SPC-1 numbers for the G1000 came in at 2 million IOPS and blew away every other all-flash array vendor, there have been a number of posts in the blogosphere discussing the relevance of IOPS as a measure of performance. Let's look at some of the myths about IOPS.

 

MYTH #1:  IOPS ARE NOT RELEVANT

I agree that IOPS alone are not relevant. However, IOPS are relevant when they are measured within the context of response time, workload, and cost. The G1000's 2 million IOPS grabbed a lot of attention, but the real story was that the G1000 was able to sustain an average response time of 0.96 ms at 100% load, with a workload designed by an independent third party, the Storage Performance Council (SPC), to demonstrate the performance of a storage subsystem while processing the typical functions of business-critical applications. The cost of these IOPS also came in at $1 per SPC-1 IOPS. The same workload was used by all participating vendors, and the test results and configurations were certified by SPC.
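To make the cost metric concrete, here is a minimal sketch of how a price-performance figure like $1 per SPC-1 IOPS is derived. The system price below is a hypothetical placeholder, not an actual list price; SPC-1 reports use the audited total system price from the full disclosure report.

```python
# Illustrative only: deriving $ per SPC-1 IOPS from a total system price.
# Both inputs below are placeholder values, not figures from any SPC-1 report.

def price_per_iops(total_system_price_usd: float, spc1_iops: float) -> float:
    """Return the cost per SPC-1 IOPS in dollars."""
    return total_system_price_usd / spc1_iops

# A system priced around $2M delivering ~2,000,000 SPC-1 IOPS works out to ~$1 per SPC-1 IOPS.
print(f"${price_per_iops(2_000_000, 2_000_000):.2f} per SPC-1 IOPS")
```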

[Figure 320-1: SPC-1 results — IOPS vs. response time for the IBM Power 780 and Hitachi HUS VM]

 

In the SPC-1 results shown above, the IBM Power 780 has more IOPS than the HUS VM, but the response times are quite different, especially as you approach 100% load. In this case, comparing IOPS alone would not give you a true picture of the overall performance difference.

 

MYTH #2:  LOCAL STORAGE IS ALWAYS FASTER THAN SHARED STORAGE. 

Compute systems with local storage are assumed to have less overhead than storage that is externally attached and shared by multiple compute platforms. These overheads include host bus adapters, networking (SAN), and the storage controller. Eliminating them is expected to make local storage faster and lower in cost than shared storage. In reality, the added time to talk over Fibre Channel versus a local bus attach is a few microseconds, whereas access to the SSDs takes hundreds of microseconds to several milliseconds, depending on how well the software and hardware are designed to manage the programming requirements of flash media. These requirements include wear leveling, block/page mapping, block reclamation and page formatting, extended ECC, endurance management, write amplification, and so on. If you don't offload this work to an external controller that is shared by multiple compute platforms, then each compute platform must perform these functions itself. External storage also adds functions for data availability, including RAID, remote replication, and active/active configurations, as well as much higher scalability. With locally attached storage, these functions would have to be done by the host processors communicating with each other over a network. So it matters less WHERE the SSD is than whether your software and hardware are designed for this, and what your requirements are for performance, scalability, and availability. The following SPC-1 chart shows the cost performance of an IBM Power 780 server with 240 locally attached SSDs versus a VSP G1000 with 64 flash modules. Not only is the G1000's response time much lower at 100% load, but its cost per SPC-1 IOPS is also much lower.
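A rough back-of-the-envelope model illustrates where the time actually goes. The latency numbers below are assumptions chosen for illustration, not measurements of any particular product.

```python
# Toy latency model: the SAN transport hop is small relative to the time spent
# in the flash media and the software that manages it (wear leveling, mapping,
# ECC, reclamation, ...). All numbers are illustrative assumptions.

fc_transport_us = 10      # assumed round-trip overhead of HBA + SAN + controller front end
flash_access_us = 300     # assumed flash media access, including flash-management work

local_us = flash_access_us
shared_us = fc_transport_us + flash_access_us

print(f"local SSD:  {local_us} us")
print(f"shared SSD: {shared_us} us (transport adds ~{100 * fc_transport_us / shared_us:.0f}% of the total)")
```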

[Figure 320-2: SPC-1 price-performance and response time — IBM Power 780 with 240 locally attached SSDs vs. Hitachi VSP G1000 with 64 flash modules]

 

MYTH #3: HYPER-CONVERGED SCALE-OUT SYSTEMS CAN MATCH THE PERFORMANCE OF SCALE-UP SYSTEMS.

When VMware launched Virtual SAN 5.5 (hyper-converged, no-SAN storage), they claimed to be able to scale linearly to 2 million IOPS. Some marketing folks have been using this to claim that you no longer get value out of hardware offload: why buy a G1000 to get 2 million IOPS when you can use VMware Virtual SAN on commodity servers with internal SSDs and HDDs to get the same result? This claim was supported by an Iometer workload of 100% reads and is documented in a blog post.

 

VMware's standard methodology for determining the performance of a VMware cluster is VMmark, which runs a mixed simulated workload that includes a 'DVD store' application and a mail server.

Instead of using this well-understood test workload, they constructed an artificial 'marketing' workload with Iometer, which only reads from local SSDs on each node. They then turned off mirroring to make sure that no data traverses nodes, but with mirroring turned off there is no margin for node failures. Each VSAN node in the benchmark acts like a standalone storage unit. With this configuration it is no surprise that they achieve apparently linear scale. But the purpose of VSAN, a virtual SAN, is to create a single shared datastore that all VMs can access without the need for an FC SAN. By carefully ensuring that all I/O is local and using an artificial workload, they have certainly not demonstrated this in any meaningful way.
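A toy model makes the "linear scale" effect easy to see. The node count, per-node IOPS, and write penalties below are assumptions chosen only to illustrate the shape of the result, not figures from VMware's benchmark.

```python
# With mirroring off and 100% local reads, each node is independent, so aggregate
# IOPS is simply per-node IOPS times node count. Once writes must be mirrored to a
# second node over the network, the useful aggregate drops well below "linear".
# All parameters are illustrative assumptions.

def aggregate_iops(nodes, per_node_iops, write_fraction=0.0, copies=1, network_write_penalty=1.0):
    """Crude estimate of cluster IOPS for a given read/write mix."""
    read_cost = 1.0                                # reads served from the local SSD
    write_cost = copies * network_write_penalty    # each copy of a write costs device + network time
    avg_cost = (1 - write_fraction) * read_cost + write_fraction * write_cost
    return nodes * per_node_iops / avg_cost

# 100% local reads, single copy: perfectly "linear" scaling
print(aggregate_iops(nodes=32, per_node_iops=62_500))
# 70/30 read/write mix with one mirror copy and a network penalty: far less than linear
print(aggregate_iops(nodes=32, per_node_iops=62_500, write_fraction=0.3, copies=2, network_write_penalty=2.0))
```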

[Figure 320-3: VMware Virtual SAN benchmark results]

 

If the purpose of Virtual SAN is to scale workloads across nodes, this is not what they tested. I am sure that the response time for each I/O was very good, since they did not have to go outside the node to read from an internal SSD. No response time was given for the 100% read case. When they went to a 70/30 read/write mix, the IOPS dropped to 640K with an aggregate latency of 2.98 ms.

 

Unlike this simple workload, the SPC-1 workload generator creates a variety of I/O access patterns found in real production environments. The access patterns include random, sequential, and random walk with hierarchical re-use, and they use real-world read/write ratios, arrival distributions, data locality, and transfer sizes. With this demanding workload defined by the Storage Performance Council, the G1000 delivered 2 million IOPS with less than 1 ms response time at 100% load.
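For readers unfamiliar with those pattern families, here is a minimal sketch of what they look like as block-address traces. This is not the SPC-1 generator itself, just an illustration of how such patterns differ from a pure uniform-random, 100%-read run.

```python
# Illustrative access-pattern generators, not the SPC-1 workload generator.
import random

def sequential(start_block, count):
    """Sequential stream: consecutive block addresses."""
    return [start_block + i for i in range(count)]

def uniform_random(num_blocks, count):
    """Uniformly random accesses across the whole address space."""
    return [random.randrange(num_blocks) for _ in range(count)]

def random_walk_with_reuse(num_blocks, count, locality=0.8, window=64):
    """Random walk that tends to revisit a small neighborhood of recently
    touched blocks, approximating data locality and hierarchical re-use."""
    pos = random.randrange(num_blocks)
    trace = []
    for _ in range(count):
        if random.random() < locality:
            pos = (pos + random.randint(-window, window)) % num_blocks  # stay near recent data
        else:
            pos = random.randrange(num_blocks)                          # occasional long jump
        trace.append(pos)
    return trace
```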

 

VSAN is a great solution for certain use cases. Hitachi has partnered with VMware to provide an EVO:RAIL solution, which uses VSAN in a hyper-converged, VMware-powered appliance. It is designed to be cost-optimized and allows for faster deployment on a simple, scale-out architecture. In its first release, the Hitachi EVO:RAIL appliance will be a 2U, 4-node configuration with up to 16 TB of internal storage.

 

CONCLUSION

IOPS alone are not relevant as a measure of performance unless they are measured with a defined workload and correlated with response time and cost. The best way to measure relative performance is with a comparison between products where the configurations are fully disclosed, the products run a common workload that represents a real production environment, and the results are certified by a third party. That is what we have with the Storage Performance Council's SPC-1 tests. Buyers should require SPC-1 results from their vendors to verify their performance claims.