
Storage Systems


Per IDC’s Copy Data Management Challenge, “65% of storage system capacity is used to store non-primary, inactive data.” In fact, inactive data residing on flash is the single biggest threat to the ROI of data center modernization initiatives. As my friends and I get older, we often find ourselves talking about “right-sizing,” or moving to a smaller, less expensive piece of real estate. In many cases life has evolved or needs have changed, and we simply don’t want the cost of maintaining a larger residence. The benefits are obvious: savings on the mortgage, utilities, and general upkeep. Well, your data has a lifecycle as well. It starts out very active, with high IO and the need for high quality of service and low latency. However, as data ages it’s just not economical or necessary for it to reside in the “high rent” district, which in most cases is an active flash tier.

 

 

 

Data tiering through the lifecycle is not a new concept, but the options available to you have never been greater. Destinations such as lower-cost, lower-performance tiers of spinning disk are one option. If your organization has deployed a private cloud, that can be an excellent destination for tiering inactive data. For those who have adopted public cloud IaaS, that is certainly a low-cost destination as well. Let’s explore some of these options and solutions for managing the data lifecycle. More importantly, let’s look at the issues that should be considered before creating a data lifecycle residency plan, with the goal of maximizing your current investments in on-premises all-flash arrays and in both private and public clouds.

 

Automated Storage Tiering

Deploying automated storage tiering is a good place to start because the concept is familiar to most storage managers. For example, Hitachi Dynamic Tiering is software that lets you create three-tier pools within the array and enact policies that automatically move data to a specified type of media pool once predefined criteria are met.
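To make the policy idea concrete, here is a minimal sketch of how an age- and activity-based tiering rule might be expressed. The pool names, thresholds, and function are illustrative assumptions, not Hitachi Dynamic Tiering's actual interface:

```python
from datetime import datetime, timedelta

# Illustrative pool names and thresholds -- not actual Hitachi Dynamic Tiering settings.
TIER_POLICY = [
    # (pool, max_days_since_access, min_iops)
    ("tier1_ssd_fmd", 30, 500),   # hot: touched in the last 30 days, or still busy
    ("tier2_sas_hdd", 180, 50),   # warm
    ("tier3_nl_hdd", None, 0),    # cold: everything else
]

def choose_pool(last_access: datetime, avg_iops: float) -> str:
    """Return the pool a page or file should live in under this sample policy."""
    age = datetime.utcnow() - last_access
    for pool, max_days, min_iops in TIER_POLICY:
        if max_days is None:
            return pool
        if age <= timedelta(days=max_days) or avg_iops >= min_iops:
            return pool
    return TIER_POLICY[-1][0]

# A page untouched for 200 days with almost no IO lands on the cold tier.
print(choose_pool(datetime.utcnow() - timedelta(days=200), avg_iops=5))  # tier3_nl_hdd
```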

 

 

In a modern hybrid flash array like the Hitachi Vantara VSP G series, your pools can be defined based on the type of storage media in the array. This is especially economical in the VSP G series because the array can be configured with SSDs, Hitachi Flash Modules (FMD), or hard disk drives. Essentially, high-IO data starts its residency on high-performance SSD or FMD, and dynamic tiering automatically moves it to low-cost hard disk drives as it ages and becomes less active. The savings in storage real estate in terms of cost per GB can be well over 60%. But wait, there are more benefits to be had.
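As a rough illustration of where that kind of figure comes from, the back-of-envelope calculation below uses assumed (not Hitachi) $/GB prices, plus the 65% inactive share quoted earlier:

```python
# Back-of-envelope cost comparison. The $/GB figures are illustrative assumptions,
# not Hitachi pricing; the 65% inactive share comes from the IDC figure quoted earlier.
flash_cost_per_gb = 0.50   # SSD/FMD tier (assumed)
hdd_cost_per_gb = 0.15     # HDD tier (assumed)
inactive_fraction = 0.65

per_gb_savings = 1 - hdd_cost_per_gb / flash_cost_per_gb
blended_cost = (1 - inactive_fraction) * flash_cost_per_gb + inactive_fraction * hdd_cost_per_gb
blended_savings = 1 - blended_cost / flash_cost_per_gb

print(f"Savings per GB moved to HDD: {per_gb_savings:.0%}")            # 70%
print(f"Blended savings across all capacity: {blended_savings:.0%}")   # about 45-46%
```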

 

Integrated Cloud Tiering – Private Clouds and Object Storage

It’s no secret that migrating inactive data to the cloud can lead to storage savings well over 70%. The benefit doesn’t stop there: a well-managed data lifecycle frees up top-tier flash for higher-priority active data. Many top financial institutions choose to tier inactive data off flash and onto lower-cost private cloud object storage. In this way, they get the savings of moving this data into a low-cost tier, and there are no questions about data security and control because the data stays behind the corporate firewall. In addition, if the data ever needs to be moved back to an active tier, it can be done quickly and inexpensively without the data egress fees incurred by public cloud providers. Private cloud object storage like the Hitachi Content Platform (HCP) gives your enterprise a “rent-controlled” residence with all the benefits of a public cloud and without security concerns, because you remain in control of your data.

 

Cloud Tiering – Public Clouds

Public clouds like Amazon and Azure have changed the data residency landscape forever.  They are an excellent “low-cost” neighborhood for inactive data.  Companies of all sizes, from small to the largest enterprise, leverage the low cost and ‘limitless’ storage of public clouds as a target for inactive and archive data.

 

Potential Issues - Tiering Data to the Cloud

The concept of tiering to either public or private clouds is simple but executing a solution may not be as straightforward.  Many storage vendors claim the ability to tier to the cloud, but when you look at their solution, you’ll often find that they are not transparent to applications and end-users, often requiring significant rework of your IT architecture and a lower / delayed ROI in data center modernization. These solutions add complexity to your environment and often add siloed management requirements. The bottom line is that very few vendors understand and offer an integrated solution of high performance flash, automated tiering, and a choice of migration to multiple cloud types. Regarding public clouds, it should be noted that a downside is that if the data is not quite inactive, let’s say “warm,” it can be very costly to pull it back from the public cloud due to the previously mentioned data egress fees.  Not to mention it can take a very long time based on the type of service level agreement. For this reason, many tenants choose to only migrate backups and cold archive data to public clouds.

 

Hitachi Vantara Cloud-Connected Flash – One Recipe Integrates All the Right Ingredients

Cloud-Connected Flash is a Hitachi Vantara solution that delivers complete file data lifecycle residency. Hitachi is a leading vendor in hybrid flash arrays, all-flash arrays, object storage, and cloud.

 

 

The solution is an easy concept to understand: “Data should reside on its most economical space,” as illustrated in the graphic above. Active NAS data is created and maintained in a Hitachi Vantara VSP G or F series unified array. Within the array, data is dynamically tiered based on the migration policy between pools of SSD, FMD (Hitachi Flash Modules), and disk drives. As the file data ages, Hitachi data migration to cloud software moves the data, based on policy, to your choice of clouds (public Amazon, Azure, or IBM Cloud, or a private Hitachi Content Platform, HCP). When data is migrated to HCP, it can also be pulled back into the top flash tiers if needed, creating a highly flexible and dynamic data residency environment. But the value doesn’t stop at the savings in storage and maintenance costs. The Hitachi cloud-connected flash solution can also include an analytics component to glean insights from your data lake, which is composed of both on-premises and cloud data.
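For illustration only, here is a minimal sketch of the kind of age-based migration such a policy performs, written against a generic S3-compatible object endpoint. The endpoint, paths, and bucket are hypothetical, and this is not the actual Hitachi data migration to cloud software:

```python
import os
import time

import boto3  # assumes an S3-compatible target (public cloud or private object store)

# Illustrative policy: files untouched for more than a year get tiered off to an object bucket.
AGE_THRESHOLD_SECONDS = 365 * 24 * 3600
s3 = boto3.client("s3", endpoint_url="https://objects.example.com")  # hypothetical endpoint

def tier_cold_files(root: str, bucket: str) -> None:
    now = time.time()
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if now - os.path.getatime(path) > AGE_THRESHOLD_SECONDS:
                key = os.path.relpath(path, root)
                s3.upload_file(path, bucket, key)  # copy to the cold tier
                # A real migrator would leave a stub or link behind and verify the
                # copy before reclaiming flash capacity; omitted here for brevity.

tier_cold_files("/mnt/nas_share", "cold-archive")  # hypothetical share and bucket
```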

 

Cloud to Data Insights with Pentaho Analytics

Pentaho Analytics enables the analysis of your data both on premises and in public and private clouds. As an expert in IoT and analytics, Hitachi Vantara offers the customizable Pentaho platform to analyze data and create dashboards and reports in your cloud-connected flash environment. The goal is to achieve better business outcomes by leveraging your Hitachi Vantara cloud-connected flash solution. Pentaho offers data integration: a highly flexible platform for blending, orchestrating, and analyzing data from virtually any source, effectively reaching across system, application, and organizational boundaries.

  • Run Pentaho on a variety of public or private cloud providers, including Amazon Web Services (AWS) and Microsoft Azure
  • Leverage scale-out architecture to spread data integration and analytics workloads across many cloud servers
  • Connect on premise systems with data in the cloud and blend a variety of cloud-based data sources together
  • Seamlessly integrate and analyze leading Big Data sources in the Cloud, including Amazon Redshift and hosted Hadoop distributions
  • Access and transform data from web services and enterprise SaaS applications

 

 

Learn More, Hitachi Vantara Can Help

Hitachi Vantara Cloud-Connected Flash solutions are highly flexible, and individual components can be deployed based on business needs. This enables customers to start with dynamic array tiering, then add cloud tiering and ultimately analytics. Please contact your Hitachi representative or Hitachi partner to learn how your organization can benefit from cloud-connected flash.

 

NVMe (non-volatile memory express) is a standard designed to fully leverage the performance benefits of non-volatile memory in all types of computing and data storage. NVMe’s key benefits include direct access to the CPU, lightweight commands, and highly scalable parallelism, all of which lead to lower application latency and insanely fast IO. There has been a lot of hype in the press about how NVMe can put your IT performance into the equivalent of Tesla’s “Ludicrous mode,” but I would like to discuss some real-world considerations where NVMe can offer great benefits, and perhaps shine a caution light on areas you might not have considered. As NVMe is in its infancy as far as production deployment, its acceptance and adoption are being driven by a need for speed. In fact, Hitachi Vantara recently introduced NVMe caching in its hyperconverged Unified Compute Platform (UCP). This is a first step toward mobilizing the advantages of NVMe to accelerate workloads by using NVMe in the caching layer.

 

Parallel vs Serial IO execution - The NVMe Super Highway

What about storage? Where are the bottlenecks, and why can’t SAS and SATA keep up with today’s high-performance flash media? The answer is that both SAS and SATA were designed for rotating media, long before flash was developed; consequently, these command sets have become the traffic jam on the IO highway. NVMe is a standard based on peripheral component interconnect express (PCIe), and it’s built to take advantage of today’s massively parallel architectures. Think of NVMe as a Tesla Model S with a 155 mph top speed, stifled by old infrastructure (SAS/SATA) and a 55 mph speed limit. All that capability is wasted. So what is driving the need for this type of high performance in IT modernization? For one, Software-Defined Storage (SDS) is a rapidly growing technology that allows for the policy-based provisioning and management of data storage independent of the underlying physical storage. With data center modernization at the core of IT planning these days, new technologies such as Software-Defined Storage are offering tremendous benefits in data consolidation and agility. As far as ROI and economic benefits, SDS’s ability to be hardware agnostic, scale seamlessly, and deliver simplified management is a total victory for IT. So what is the Achilles heel for SDS and its promise to consume all traditional and modern workloads? Quite frankly, SDS has been limited by the performance constraints of traditional architectures. Consequently, many SDS deployments are limited to applications that can tolerate the latency caused by the aforementioned bottlenecks.

 

Traditional Workload: High-Performance OLTP and Database Applications

Traditional OLTP and database workloads are the heartbeat of the enterprise. I have witnessed customers’ SDS deployments fail because of latency between storage and the application, even when flash media was used. The SDS platform, network, and compute were blazing fast, but the weak link was the SAS storage interface. Another problem is that any virtualization or abstraction layer used to host the SDS instance on a server consumes more performance than running that service on bare metal. In an SDS environment, highly transactional applications require additional IOPS to keep latency from the virtualization layer in check and deliver the best quality of service. At the underlying storage level, traditional SAS and SATA constrain flash performance. The bottom line is that NVMe inherently provides much greater bandwidth than traditional SAS or SATA. In addition, NVMe at the media level supports up to 64,000 queues, compared with a single queue of 254 commands for SAS and 32 for SATA. This type of performance and parallelism can enable high-performance OLTP and deliver optimized performance with the latest flash media. So the prospect is that more traditional high-performance OLTP workloads can be migrated to an SDS environment enabled by NVMe.
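A quick calculation shows why those queue figures matter. Assuming the NVMe specification's per-queue depth of up to 64K commands (an added assumption; the SAS and SATA figures are the ones cited above), the ceilings on commands in flight look like this:

```python
# Outstanding-command ceilings implied by the figures above.
# NVMe: up to 64K queues, each up to 64K commands deep (per the NVMe specification).
# SAS / SATA: effectively a single queue of 254 / 32 commands.
protocols = {
    "NVMe": 64_000 * 64_000,
    "SAS": 254,
    "SATA": 32,
}
for name, outstanding in protocols.items():
    print(f"{name:>4}: up to {outstanding:,} commands in flight")
```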

 

 

Caveat Emptor – The Data Services Tax

The new world of rack-scale flash, SDS, and hyperconverged infrastructure offerings promises loftier performance levels, but there are speed bumps to be considered. This is especially true when considering the migration of mission-critical OLTP applications to a software-defined environment. The fact of the matter is that data services (compression, encryption, RAID, etc.) and data protection (snaps, replication, and deduplication) reduce IOPS. So be cautious with a vendor’s IOPS specification, because in most cases the numbers are for unprotected and unreduced data. In fact, data services can impact IOPS and response times to the extent that AFAs with NVMe will not perform much better than SCSI-based AFAs.

The good news is that NVMe performance and parallelism should provide plenty of horsepower (IOPS) to let you move high-performance workloads into an SDS environment. The bad news is that your hardware architecture needs to be more powerful and correctly designed to perform data services and protection faster than ever before (e.g., more IO received per second means more deduplication processes that must occur every second). You also need to consider whether your SDS application, compute, network, and storage are designed to take full advantage of NVMe’s parallelism. Remember that a solution is only as fast as its weakest link, and for practical purposes that could be your traditional network infrastructure. If you opt for NVMe on the back end (between storage controllers and media) but do not consider how to implement NVMe on the front end (between storage and host/application), you may just be pushing your performance bottleneck to another point of IO contention, and you won't get any significant improvement.
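To see how quickly this "tax" compounds, here is a rough model with purely illustrative per-service penalties, not vendor-measured numbers:

```python
# Rough model of the "data services tax": each enabled service shaves a fraction
# of raw IOPS. The percentages are illustrative assumptions, not vendor specs.
raw_iops = 1_000_000  # vendor "hero number" for unprotected, unreduced data
service_overhead = {
    "compression": 0.15,
    "deduplication": 0.20,
    "encryption": 0.10,
    "snapshots_and_replication": 0.15,
}

effective = raw_iops
for service, penalty in service_overhead.items():
    effective *= (1 - penalty)

print(f"Effective IOPS with data services enabled: {effective:,.0f}")  # roughly 520,000
```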

 

Modern Workload: Analytics at the Edge

It seems as though “analytics” has replaced “cloud” as the IT modernization initiative du jour. This is not just hype, as the ability to leverage data to understand customers and processes is leading to profitable business outcomes never before possible. I remember just a few years ago the hype surrounding Hadoop and batch analytics in the core data center and in the cloud. It was only a matter of time before we decided that the best place to produce timely results and actions from analytics is at the edge. The ability to deploy powerful compute in small packages makes analytics at the edge (close to where the data is collected) a reality. The fundamental benefit is the network latency saved by having the compute function at the edge. A few years ago, data would travel via a network (or telemetry to network) to the cloud, be analyzed there, and have the outcome delivered back the same way it arrived. Edge analytics cuts out the data traversing the network and saves a significant chunk of time. This is the key to enabling time-sensitive decisions, such as an autonomous vehicle avoiding a collision in near real time. Using NVMe/PCIe, data can be sent directly to a processor at the edge to deliver the fastest possible outcomes. NVMe enables processing latency to be reduced to microseconds and possibly nanoseconds. This might make you a little more comfortable about taking your hands off the wheel and letting the autonomous car do the driving…
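A simple latency budget, using assumed round-trip and processing times rather than measured figures, shows why the edge wins for time-sensitive decisions:

```python
# Rough latency budget for one decision, edge vs. a round trip to a cloud region.
# All numbers are illustrative assumptions.
cloud_rtt_ms = 40.0         # WAN round trip to a cloud region
cloud_processing_ms = 5.0   # analytics in the cloud
edge_processing_ms = 0.5    # local compute over NVMe/PCIe at the edge

cloud_total = cloud_rtt_ms + cloud_processing_ms
edge_total = edge_processing_ms
print(f"Cloud path: ~{cloud_total:.1f} ms, edge path: ~{edge_total:.1f} ms "
      f"({cloud_total / edge_total:.0f}x faster at the edge)")
```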

 

The Takeaway

My advice to IT consumers is to approach any new technology with an open mind and a teaspoon of doubt. Don’t get caught up in hype and specs. Move at a modernization pace that is comfortable and within the timelines of your organization. Your business outcomes should map to solutions, not the other way around. When in doubt, proof it out: make sure your modernization vendor is truly a partner. They should be willing to demonstrate a working proof of concept, especially when it comes to mission-critical application support. Enjoy the new technology; it’s a great time to be in IT!

 

More on NVMe

NVMe 101: What’s Coming To The World of Flash

How NVMe is Changing Networking Part 1

How NVMe is Changing Networking Part 2

Redesigning for NVMe: Is Synchronous Replication Dead?

NVMe and Data Protection: Time To Put on Those Thinking Caps

The Brocade view on how NVMe affects network design

NVMe and Me – How To Get There

An In Depth Conversation with Cisco.

 

Welcome back! (Yes, I was waiting for this opportunity to show my age with a Welcome Back Kotter image).

 

For those of you that haven’t been following along in real time – c’mon man! – we’re in the midst of a multi-blog series around NVMe and data center design (list of blogs below).

 

That’s right, data center design. Because NVMe affects more than just storage design. It influences every aspect of how you design an application data path. At least it should if you want maximum return on NVMe investments.

 

In our last blog we got the Brocade view on how NVMe affects network design. As you might imagine, that conversation was very Fibre Channel-centric. Today we’re looking at the same concept – network design – but bringing in a powerhouse from Cisco.

 

If you've been in the industry for a while you've probably heard of him: J Michel Metz. Dr. Metz is an R&D Engineer for Advanced Storage at Cisco and sits on the Board of Directors for SNIA, FCIA and NVM Express. So… yeah. He knows a little something about the industry. In fact, check out a blog on his site called storage forces for some background on our discussion today. And if you think you know what he’s going to say, think again.

 

Ok. Let’s dig in.

 

Nathan: Does NVMe have a big impact on data center network design?

 

J: Absolutely. In fact I could argue the networking guys have some of the heaviest intellectual lift when it comes to NVMe. With hard disks, tweaking the network design wasn't nearly as critical. Storage arrays – as fast as they are – were sufficiently slow so you just sent data across the wire and things were fine.

 

Flash changed things, reducing latency and making network design more critical, but NVMe takes it to another level. As we’re able to put more storage bits on the wire, it increases the importance of network design. You need to treat it almost like weather forecasting; monitoring and adjusting as patterns change. You can’t just treat the storage as “data on a stick;” just some repository of data at the end of the wire, where you only have to worry about accessing it.

 

Nathan: So how does that influence the way companies design networks and implement storage?

 

J: To explain I need to start with a discussion of how NVMe communications work. This may sound like a bizarre metaphor, but bear with me.

 

Think of it like how food is ordered in a ‘50s diner. A waitress takes an order, puts the order ticket on the kitchen counter and rings a bell. The cook grabs the ticket, cooks the food, puts the order back on the counter and rings the bell. The waitress then grabs the order and takes it to the customer. It’s a process that is efficient and allows for parallel work queues (multiple wait staff and cooks).

 

Now imagine the customers, in this case our applications, are a mile away from the kitchen, our storage. You can absolutely have the waitress or the cook cross that distance, but it isn't very efficient. You can reduce the time to cross the distance by using a pneumatic tube to pass orders to the kitchen, but someone ultimately has to walk the food over. That adds delays. Again, the same is true with NVMe. You can optimize NVMe to be transferred over a network, but you’re still dealing with the physics of moving across the network.

 

At this stage you might stop and say ‘hey, at least our process is a lot more efficient and allows for parallelism.’ That could leave you with a solid NVMe over Fabrics design. But for maximum speed what you really want is to co-locate the customers and the kitchen. You want your hosts as close to the storage as possible. It’s the trade-offs that matter at that point. Sometimes you want the customers in the kitchen; that’s what hyper-convergence is, but it obviously can only grow so large. Sometimes you want a centralized kitchen and many dining rooms; that’s what you can achieve with rack-scale solutions that put an NVMe capacity layer sufficiently close to the applications, at the ‘top of rack.’ And so on.

 

Nathan: It sounds like you’re advocating a move away from traditional storage array architectures.

 

J: I want to be careful because this isn’t an ‘or’ discussion, it’s an ‘and’ discussion. HCIS is solving a management problem. It’s for customers that want a compute solution with a pretty interface and freedom from storage administration. HCIS may not have nearly the same scalability as an external array, but it does allow application administrators to easily and quickly spin up VMs.

 

As we know though, there are customers that need scale. Scale in capacity; scale in performance and scale in the number of workloads they need to host. For these customers, HCIS isn’t going to fit the bill. Customers that need scale – scale across any vector – will want to make a trade-off in management simplicity for the enterprise growth that you get from an external array.

 

This also applies to networking protocols. The reason why we choose protocols like iWARP is for simplicity and addressability. You choose the address and then let the network determine the best way to get data from point A to point B. But, there is a performance trade-off.

 

Nathan: That’s an excellent point. At no point have we ever seen IT coalesce into a single architecture or protocol. If a customer needs storage scale with a high-speed network what would you recommend?

 

J: Haven’t you heard that every storage question is answered with, “It depends?”

 

Joking aside, it’s never as simple as figuring out the best connectivity options. All storage networks can be examined “horizontally.” That is the phrase I use to describe the connectivity and topology designs from a host through a network to the storage device. Any storage network can be described this way, so it’s easy to throw metrics and hero numbers at the problem: what are the IOPS, what is the latency, what is the maximum number of nodes, and so on.

 

What we miss in the question, however, is whether or not there is a mismatch between the overall storage needs (e.g., general purpose network, dedicated storage network, ultra-high performance, massive scale, ultra-low latency, etc.) and the “sweet spot” of what a storage system can provide.

 

There is a reason why Fibre Channel is the gold standard for dedicated storage networks. Not only is it a well-understood technology, it’s very, very good, not just in performance but in reliability. But for some people there are other considerations to pay attention to. Perhaps the workloads don’t lend themselves to a dedicated storage network. Perhaps “good enough” is, well, good enough. They are perfectly fine with really great performance with Ethernet to the top of rack, and don’t need the kind of high availability and resiliency that a Fibre Channel network, for instance, is designed to provide.

 

Still others are looking more for accessibility and management, and for them the administrative user interface is the most important. They can deal with performance hits because the management is more important. They only have a limited number of virtual machines, perhaps, so HCIS using high-speed Ethernet interconnects is perfect.

As a general rule, “all things being equal” are never actually equal. There’s no shortcut for good storage network design.

 

Nathan: Let’s look forward now. How does NVMe affect long term network and data center design?

 

J: <Pause> Ok, for this one I’m going to be very pointedly giving my own personal opinion. I think that the aspect of time is something we’ve been able to ignore for quite a while because storage was slow. With NVMe and flash though, time IS a factor and it is forcing us to reconsider overall storage design, which ultimately affects network design.

 

Here is what I mean. Every IO is processed by a CPU. The CPU receives a request – a write, for example – passes it on, and then goes off to do something else. That process was fine when IO was sufficiently slow; CPUs could go off and do any number of additional tasks. But now it’s possible for IO to happen so fast that the CPU cannot switch between tasks before the IO response is received. The end result is that a CPU can be completely saturated by a few NVMe drives.
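A back-of-envelope check of this point, using assumed per-IO CPU costs and drive IOPS rather than measured values, looks like this:

```python
# Back-of-envelope check of how much CPU a few NVMe drives can demand.
# Cycle cost per IO, drive IOPS, and clock speed are illustrative assumptions.
cycles_per_io = 10_000        # interrupt handling, stack traversal, context switching
drive_iops = 700_000          # one fast NVMe drive
drives = 4
core_hz = 3_000_000_000       # a 3 GHz core

cycles_needed = cycles_per_io * drive_iops * drives
cores_needed = cycles_needed / core_hz
print(f"~{cores_needed:.1f} cores just to field the IO completions")  # ~9.3 cores
```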

 

Now, this is a worst-case scenario, and should be taken with a grain of salt. Obviously, there are more processes going on that affect IO as well as CPU utilization. But the basic premise is that we now have technologies that are emerging that threaten to overwhelm both the CPU and the network. The caveat here, the key take-away, is that we cannot simply swap out traditional spinning disk, or flash drives, with NVMe and expect all boats to rise.

In my mind this results in needing more intelligence in the storage layer. Storage systems, either external arrays or hyperconverged infrastructures, will ultimately be able to say no to requests and ask other storage systems for help. They’ll work together to coordinate and decide who handles tasks like an organic being.

 

Yes, some of this happens as a result of general machine learning advancements, but it will be accelerated because of technologies like NVMe that force us to rethink our notion of time. This may take a number of years to happen, but it will happen.

 

Nathan: If storage moves down this path, what happens to the network?

 

J: Well, you still have a network connecting storage and compute but it, too, is more intelligent. The network understands what its primary objectives are and how to prioritize traffic. It also knows how to negotiate with storage and the application to determine the best path for moving data back and forth. In effect, they can act as equal peers to decide on the best route.

 

You can also see a future where storage might communicate to the network details about what it can and can’t do at any given time. The network could then use this information to determine the best possible storage device to leverage based on SLA considerations. To be fair, this model puts the network in a ‘service broker’ position that some vendors may not be comfortable with. But since the network is a common factor that brings storage and servers together it creates opportunity for us to establish the best end-to-end route.

 

In a lot of ways, I see end-to-end systems coming together in a similar fashion to what was outlined in Conway’s Game of Life. What you’ll see is data itself self-organizing based on priorities that are important for the whole system – the application, the server, the network and the storage. In effect, you’ll have autopoiesis, a self-adaptive system.

 

I should note that what I’m referring to here are really, really large systems of storage, not necessarily smaller host-to-storage-array products. There are a lot of stars that need to align before we can see something like this as a reality. Again, this is my personal view.  

 

Nathan: I can definitely see why you called this out as your opinion. You’re looking pretty far into the future. What if we pull back to the next 18 – 24 months, how do NVMe fabrics play out?

 

Nathan: I know. I’m constraining you. Sorry about that.

 

J: <Laughs> In the near term we’re going to see a lot of battles. That’s to be expected because the standards for NVMe over Fabrics (NVMe-oF) are still relatively new.

 

Some vendors are taking shortcuts and building easy-to-use proprietary solutions. That gives them a head start and improves traction with customers and mind share, but it doesn't guarantee a long-term advantage. DSSD proved that.

 

The upside is that these solutions can help the rest of the industry identify interesting ways to implement NVMe-oF and improve the NVMe-oF standard. That will help make standards-based solutions stronger in the long run. The downside is that companies implementing early standards may feel some pain.

 

Nathan: So to close this out, and maybe lead the witness a bit: is the safest way to implement NVMe – today – to implement it in an HCI solution and wait for the NVMe-oF standards to mature?

 

J: Yeah. I think that is fair to say, especially if there is a need to address manageability challenges. HCIS absolutely helps there. For customers that do need to implement NVMe over Fabrics today, Fibre Channel is probably the easiest way to do that. But don’t expect FC to be the only team on the ball field, long term.

 

If I go back to my earlier point, different technologies are optimized for different needs. FC is a deterministic storage network and it’s great for that. Ethernet-based approaches can be good for simplicity of management, though it’s never a strict “either-or” when looking at the different options.

 

I expect Ethernet-based NVMe-oF to be used for smaller deployment styles to begin with, single switch environments, rack-scale architectures, or standalone servers with wicked fast NVMe drives connected across the network via a Software Defined Storage abstraction layer. We are already seeing some hyperconvergence vendors flirt with NVMe and NVMe-oF as well. So, small deployments will likely be the first forays into NVMe-oF using Ethernet, and larger deployments will probably gravitate towards Fibre Channel, at least in the foreseeable time frame.

 

 

CLOSING THOUGHTS <NATHAN’S THOUGHTS>

 

As we closed out our conversation J made a comment about NVMe expanding our opportunity to address customer problems in new ways.

 

I can’t agree more. In my mind, NVMe can and should serve as a tipping point that forces us, vendors, to rethink our approach to storage and how devices in the data path interoperate.

 

This applies to everything from the hardware architecture of storage arrays; to how / when / where data services are implemented; even to the way devices communicate. I have some thoughts around digital force feedback, where an IT infrastructure resists a proposed change and responds with a more optimal configuration in real time (imagine pushing a capacity allocation to an array from your mobile phone, feeling the pressure of it resisting, then seeing it respond with green lights over more optimal locations and details on why the change is proposed), but that is a blog for a day when I have time to draw pictures.

 

The net is that as architects, administrators and vendors we should view NVMe as an opportunity for change and consider what we keep vs. what we change – over time. As J points out NVMe-oF is still maturing and so are the solutions that leverage it. So to you dear reader:

 

  1. NVMe on HCI (hyper-converged infrastructure) is a great place to start today.
  2. External storage with NVMe can be implemented, but beware anyone who says their architecture is future proof or optimized to take full advantage of NVMe (J’s comment on overloading CPUs is a perfect example of why).
  3. Think beyond the box. Invest in an analytics package that looks at the entire data path and lets you understand where bottlenecks exist.

 

Good hunting.

 

NVMe 101 – What’s Coming to the World of Flash?

Is NVMe Killing Shared Storage?

NVMe and Me: NVMe Adoption Strategies

NVMe and Data Protection: Time to Rethink Strategies

NVMe and Data Protection: Is Synchronous Replication Dead?

How NVMe is Changing Networking (with Brocade)

Hitachi Vantara Storage Roadmap Thoughts

An In Depth Conversation with Brocade

 

As we've discussed over the last several blogs, NVMe is much more than a communication protocol. It’s a catalyst for change. A catalyst that touches every aspect of the data path.

 

At Hitachi we understand that customers have to consider each of these areas, and so today we’re bringing in a heavy hitter from Brocade to cover their view of how data center network design changes – and doesn't change – with the introduction of NVMe.

 

The heavy hitter in this case is Curt Beckmann, principal architect for storage networking. A guy who makes me, someone who used to teach SEs how to build and debug FC SANs, feel like a total FC newbie. He’s also a humanitarian, on the board of Village Hope, Inc. Check it out.

 

Let’s dig in.

 

Nathan: Does NVMe have a big impact on data center network design?

 

Curt: Before I answer, we should probably be precise. NVMe is used to communicate over a local PCIe bus to a piece of flash media (see Mark’s NVMe overview blog for more). What we want to focus on is NVMe over Fabric, NVMe-oF. It’s the version of NVMe used when communicating beyond the local PCIe bus.

 

Nathan: Touché. With that in mind, does NVMe-oF have a big impact on network design?

 

Curt: It really depends on how you implement NVMe-oF. If you use a new protocol that changes how a host interacts with a target NVMe device, you may need to make changes to your network environment. If you’re encapsulating NVMe in an existing storage protocol like FC, though, you may not need to change your network design at all.

 

Nathan: New protocols. You’re referring to RDMA based NVMe-oF protocols, right?

 

Curt: Yes. NVMe over Fabrics protocols that use RDMA (iWARP or RoCE) reduce IP network latency by talking directly to memory. For NVMe devices that can expose raw media, RDMA can bypass CPU processing on the storage controller. This allows faster, more ‘direct’ access between host and media. It does, however, require changes to the way networks are designed.

 

Nathan: Can you expand on this? Why would network design need to change?

 

Curt: Both iWARP and RoCE are based on Ethernet and IP. Ethernet was designed around the idea that data may not always reach its target, or at least not in order, so it relies on higher layer functions, traditionally TCP, to retry communications and reorder data. That’s useful over the WAN, but sub-optimal in the data center. For storage operations, it’s also the wrong strategy.

 

For a storage network, you need to make sure data is always flowing in order and is ‘lossless’ to avoid retries that add latency. To enable this, you have to turn on point-to-point flow control functions. Both iWARP and RoCE v2 use Explicit Congestion Notification (ECN) for this purpose. iWARP uses it natively. RoCE v2 added Congestion Notification Packets (CNP) to enable ECN to work over UDP. But:

 

      1. They aren't always ‘automatic.’ ECN has to be configured on a host. If it isn't, any unconfigured host will not play nice and can interfere with other hosts’ performance.
      2. They aren't always running. Flow control turns on when the network is under load. Admins need to configure exactly WHEN it turns on. If ECN kicks in too late and traffic is still increasing, you get a ‘pause’ on the network and latency goes up for all hosts.
      3. They aren't precise. I could spend pages on flow control, but to keep things short, you should be aware that Fibre Channel enables a sender to know precisely how much buffer space remains before it needs to stop. Ethernet struggles here.

 

There are protocol specific considerations too. For instance, TCP-based protocols like iWARP start slow when communication paths are new or have been idle, and build to max performance. That adds latency any time communication is bursty.

 

Nathan: So if I net it out, is it fair to say that NVMe over Ethernet is pretty complex today?

 

Curt: (Smiles). There’s definitely a level of expertise needed. This isn't as simple as just hooking up some cables to existing switches. And since we have multiple RDMA standards which are still evolving (Azure is using a custom RoCE build, call it RoCE v3), admins will need to stay sharp. Which raises a point I forgot to mention. These new protocols require custom equipment.

 

Nathan: You can’t deploy iWARP or RoCE protocols on to installed systems?

 

Curt: Not without a NIC upgrade. You need something called an R-NIC. There are a few vendors that have them, but they aren’t fully qualified with every switch in production.

 

That’s why you are starting to hear about NVMe over TCP. It’s a separate NVMe protocol similar to iSCSI that runs on existing NICs so you don’t need new hardware. It isn't as fast, but it is interoperable with everything. You just need to worry about the network design complexities. You may see it ultimately eclipse RDMA protocols and be the NVMe Ethernet protocol of choice.

 

Nathan: But what if I don’t care Curt? What if I have the expertise to configure flow control, plan hops / buffer management so I don’t hit a network pause? What if R-NICs are fine by me? If I have a top notch networking team, is NVMe over Fabric with RDMA faster?

 

Curt: What you can say is that for Ethernet / IP networks, RDMA is faster than no RDMA. In a data center, most of your latency comes from the host stack (virtualization can change the amount of latency here) and a bit from the target storage stack (See Figure 1). That is why application vendors are designing the applications to use a local cache for data that needs the lowest latency. No wire, lower latency. With hard disks, network latency was tiny compared to the disk, and array caching and spindle count could mask the latency of software features.  This meant that you could use an array instead of internal drives. Flash is a game changer in this dynamic, because now the performance difference between internal and external flash is significant.  Most latency continues to be from software features, which has prompted the move from the sluggish SCSI stack to faster NVMe.

  

Figure 1: Where Latency Comes From

 

I've seen claims that RoCE can do small IOs, like 512 bytes, at maybe 1 or 2 microseconds less latency than NVMe over Fibre Channel when the queue depth is set to 1 or some other configuration not used in normal storage implementations.  We have not been able to repeat these benchmarks, but this is the nature of comparing benchmarks.  We were able to come very close to quoted RoCE numbers for larger IO, like 4K. At those sizes and larger, the winner is the one with faster wire speed. This is where buyers have to be very careful. A comparison of 25G Ethernet to 16G FC is inappropriate. Ditto for 32G FC versus 40G Ethernet. A better comparison is 25G Ethernet to 32G FC, but even here check the numbers and the costs.     
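To see why the pairing matters, here is a quick serialization-time calculation for a 4 KiB payload at the nominal throughput figures usually quoted for each link (protocol overheads ignored; the rates are the common marketing numbers, not measured results):

```python
# Time to serialize a 4 KiB payload at nominal line rates; protocol overheads ignored.
rates_mb_per_s = {
    "16G FC": 1_600,
    "25G Ethernet": 3_125,
    "32G FC": 3_200,
    "40G Ethernet": 5_000,
}
payload_bytes = 4096
for link, mb_s in rates_mb_per_s.items():
    microseconds = payload_bytes / (mb_s * 1_000_000) * 1_000_000
    print(f"{link:>13}: {microseconds:.2f} us on the wire")
# 25G Ethernet (~1.31 us) and 32G FC (~1.28 us) are nearly identical,
# which is why that is the fair comparison.
```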

 

Nathan: Any closing thoughts?

Curt: One thing we didn't really cover is ease of deployment alongside existing systems. For instance, what if you want to use a single storage infrastructure to support NVMe-oF enabled hosts and ‘classic’ hosts that are using existing, SCSI-based protocols? With FC you can do that. You can use existing Gen 5 and Gen 6 switches and have servers that support multiple storage interface types. With Ethernet? Not so much. You need new NICs and quite possibly new switches too. Depending on who you speak with, DCB switches are either recommended, if you want decent performance, or required. I recommend you investigate.

 

CLOSING THOUGHTS <NATHAN’S THOUGHTS>

Every vendor has their own take on things, but I think Curt’s commentary brings to light some very interesting considerations when it comes to NVMe.

 

 

 

  1. Ecosystem readiness – With FC (and maybe future Ethernet protocols), NVMe may require minimal to no changes in your network resources (granted, a network speed upgrade may be advised). But with RDMA, components change, so check on implementation details and interop. Make sure the equipment cost of changing to a new protocol isn't higher than you expect.
  2. Standard readiness – Much like any new technology, standards are evolving. FC is looking to make the upgrade transparent and there may even be similar Ethernet protocols coming. If you use RDMA, great. Just be aware you may not be future proofed. That can increase operational costs and result in upgrades sooner than you think.
  3. Networking expertise – With Ethernet, you may need to be more thoughtful about congestion and flow control design. This may mean reducing the maximum load on components of the network to prevent latency spikes. It can absolutely be done, you just need to be aware that NVMe over Fabric with RDMA may increase operational complexity that could result in lower than expected performance / ROI. To be clear though, my Ethernet friends may have a different view. We’ll have to discuss that with them.

 

Other than that, I’ll tell you what I told myself when I was a systems administrator. Do your homework. Examine what is out there and ask vendors for details on implementations. If you buy a storage solution that is built around what is available today, you may be designing for a future upgrade versus designing for the future. Beware vendors that say ‘future proof.’ That’s 100% pure marketing spin.

An Interview with Bob and Bob

Cat herding by Nathan Moffitt

 

 

So here we are, just a few weeks into the life of Hitachi Vantara. Since the formation of Hitachi Vantara there has been a lot of press and activity around everything from infrastructure offerings, to IoT solutions, to the vision of the new company. For those interested in an overview of Vantara, there are some great blogs like Mary Ann Gallo’s up on our community, and a detailed press release.

 

These announcements provide a number of insights into who we are and where we are headed, but we recognize there is – and always will be – an insatiable desire to know more. Especially around core development areas like IT infrastructure.

 

To help satisfy that desire and provide a ‘blunt hammer’ forum to educate certain vendors that can’t read a press release, I sat down to talk storage with two IT infrastructure leaders at Hitachi Vantara. Bob O’Heir, VP of product management and Bob Madaio, VP of marketing.

 

<Insert Office Space joke here – I do regularly.>

 

Moffitt: To start, you’re both responsible for storage and broader IT infrastructure offerings, right?

 

O’Heir: Is this even a question?

 

Moffitt: You know why I’m asking.

 

O’Heir: <Sighs> Yes. We’re not a ‘storage only’ company. We’re a data company. That means we think about the entire infrastructure. Our teams design storage solutions, but we also design integrated systems and management software that optimizes overall IT operations.

 

Madaio: Agreed. I won’t belabor the point.

 

Moffitt: Ok. With that in mind, what changes for you with the formation of Hitachi Vantara?

 

O’Heir: I see a huge opportunity to integrate our infrastructure offerings more tightly with our analytics technologies. Some of this is already in progress, but with the formation of Vantara, our teams are better aligned to co-develop solutions that help customers gain deeper insights from their data.

 

Madaio: I see the Hitachi Vantara structure enabling us to more easily share ideas and deliver data-driven solutions. Look, each of the entities that makes up Vantara was already working with our customers and each other. Now though, we’re more integrated. It's easier for customers to engage with externally and easier for developers to work together internally. That lets us be faster to market, more customer-friendly and customer-centric, which is a big win for us and customers.

 

Moffitt: Is storage going to be a key investment area for us as we design new data-driven solutions?

 

O’Heir: Of course. Storage, and more precisely infrastructure, is a key strength for Hitachi Vantara. Continuing to design products and services in this space makes us a better company to work with because we can bring experience around storing, monitoring, protecting and delivering data to IoT solutions. It also gives us the ability to provide collection points – storage – for helping customers analyze and decide what to save. We can also make storage behave more like an IoT device, which is critical for all IT components moving forward. <Pauses> Think about it like this, does GE quit making equipment now that they do IoT software? No.

 

Madaio: <Chuckles> Our corporate focus is on helping people activate and leverage their data. Not leveraging our heritage in information storage and data management would be absurd. When comparing us to other industrial powerhouses, having a strong IT business is a pretty unique differentiator. It lets us create differentiated solutions for deriving value from data based on knowing how customers ACTUALLY deploy IT infrastructures. It also provides launch points so customers can start with storage and add OT capabilities as they become more data-driven. Pigeonholing ourselves into one market segment would be a fast path to oblivion.

 

Moffitt: Let’s continue down this path. How does IT + OT change the evolution of our infrastructure portfolio?

 

Madaio: It changes our design focus. With the combination of IT, IoT and analytics we’re better able to deliver on customer outcomes versus focusing solely on a new ‘box’ or application. With Vantara we move beyond thinking about form factor and dropping data into a fixed location where it just. sits. Instead we focus on data services that provide a consistent way to access and leverage data anywhere. This is key as the number of locations where data is born and needs to be analyzed expands.

 

O’Heir: Exactly. That has a huge impact on how we approach storage in particular. Software-defined storage (SDS) is and will be big for us moving forward. We want customers to consume data services in a very flexible fashion. It might be on a 1U server at the edge or a high-end server / custom built controller at the core which is optimized for maximum performance and uptime. Edge to core analytics will also be a big consideration.

 

Moffitt: Talk more about that. How do analytics and storage fit together in the new Vantara?

 

O’Heir: There are 2 aspects to consider. First, how do you view a storage system as an IoT device? Every storage system is collecting all kinds of telemetry data. By pulling analytics information from it into our Smart Data Center software we can better optimize storage behavior and enable the array to work with other parts of the infrastructure to optimize the entire data path. Of course this also means that the ‘language’ arrays speak changes too.

 

Moffitt: MQTT (note: a protocol that can be used by IoT devices) for instance?

 

O’Heir: Yep. That might be for passing information to another infrastructure component or it might be for receiving data from an IoT device. Protocols aren't static. They are always changing, and with Vantara we have the ability to be forward thinking about how to transmit data. Hitachi Universal Replicator is an example of that thought process. When released, it was revolutionary: it used a pull rather than push method to better tolerate outages and reduce bandwidth consumption. With IoT, protocols have to change.

 

The second aspect is looking at what we can do while we hold the data. If you have the data, why not perform some level of analytics on it? I equate this to the old argument about where you run functions like replication. Yes, you can run them on the application server, but why pull cycles from the host for that? Offload it to an array. The same thing is true of analytics. If data is resident, storage could pull metadata and make predictions about whether to retain the data or just the metadata.

 

Madaio: Of course, exploiting this data gravity is much easier if we still develop storage. To be sure, we could simply produce the analytics software, but if we provide a fully integrated system and broader infrastructure offerings, we reduce complexity of deployment and acquisition. And we add accretive value. Oh, and I recommend folks check out a recent blog I did on storage as an IoT device. It ties right into this conversation.

 

Moffitt: Accretive. Good word. It seems like this means there’s a blend of our corporate technologies.

 

O’Heir: It certainly enforces and helps drive where we want to go with simplification of operational processes. When you blend data services and analytic services you reduce the number of resources you deploy and the complexity of optimizing resources. You also open up a broad range of opportunities to deliver value.

 

Moffitt: Talk about that. What opportunities does this open up?

 

O’Heir: Well, I don’t want to say much here.

 

Madaio: Really? You seem like a sharing kind of guy. And a huge Michael Bolton fan.

 

O’Heir: For my money, I don't know if it gets any better than when he sings "When a Man Loves a Woman". <Pause> Ok, one example. Data ownership and privacy is a growing concern. You need to think about a whole new host of things when you store data. Can analytics be allowed on a data set, is data within proper ‘borders,’ things like that. Having the ability to do some level of base security analytics in storage lets you make decisions about where / how to replicate it, etc. Yes, users can set their own controls, but accidents can happen. If storage can help prevent missteps in data handling, everyone wins.

 

Madaio: The key thing for me in all this is that these are things every storage vendor will need to consider unless they want to become a ‘just a bunch of disk’ provider. Storage design must change if vendors want to add value.

 

Moffitt: Let’s close out. Talk to me about what storage looks like in 5 years.

 

O’Heir: Timelines are tough, but directionally I think storage providers will need to think about traditional items like performance, capacity and resiliency as well as analytics facilitation. With all the data being produced from edge to core you’re going to need every system that retains data to be mining it. I talked to a financial institution recently that is very concerned about this. They see an explosive growth coming in the number of data points they have to gather every few microseconds. Microseconds. How do you process all of that? Yes, you can do edge or cloud processing, but why not in the storage? For some architectures that may be an imperative.

 

That leads to fundamental architecture design changes that I think, and hope, all infrastructure vendors are considering: microservice architectures. If you have the ability to insert an analytic function into an array, then data scientists can develop and run analytics from wherever the data repository is.

 

And Of Course There Was More.

 

In every blog I write – personal or interview – a lot of detail ultimately ends up on the cutting room floor. In the case of this interview we had to scrap some conversational elements around NVMe, Microservice architectures, data versus control plane functionality and even what competitors are still standing in 5 years.

 

If you’re interested in that detail, let me know. We might be able to create a part 2 for this blog. We could even pull in another smart VP of product management, Bob Primmer. Why does Hitachi Vantara have so many Bobs? Well, that is a blog in itself.

 

To close, for now, here are the takeaways I’d point out.

 

  1. Hitachi Vantara is still developing storage (shocking I know), but long term it may not look like the storage you know today. We see opportunities for massive innovation.
  2. Innovating across the entire infrastructure – not only storage – is critical for vendors to stay relevant and deliver maximum value to customers.
  3. Analytics and infrastructure are blending together; having expertise in one allows you to develop more impactful solutions in the other.

 

Hopefully you found this blog enjoyable. If so, let me know! Until next time. Bob and Bob, thank you.

 

As we've been discussing in recent blogs, NVMe holds the promise of lightning fast data transfers where business operations complete in microseconds, enabling faster decision making, richer business insights and better customer experiences.

 

To achieve that promise, though, you have to be mindful of how your data path is designed. Value-added services that consume resources or need to be processed as part of a data transfer can affect the benefits of NVMe. One value-added service that seems like a stoplight in front of your NVMe sports car is synchronous replication, a key tool for preventing data loss if a system or site goes offline.

 

Which leads us to an important question. In a world of microsecond latency NVMe transactions, is it time for synchronous replication to get sent to the rubbish heap?

 

 

Figure 1: Common Synchronous Replication Workflow

Synchronous Replication. Safe But ‘Slow.’

 

Quick review: synchronous replication can be defined as a business continuity process that mirrors IO between two systems to prevent data loss and downtime. As shown in Figure 1, the traditional practice is as follows:

 

  1. Server sends an IO to a primary storage array
  2. Primary array mirrors IO to a target array
  3. Target array acknowledges IO is received
  4. Primary array responds to server

 

I’ve truncated things, but that should cover the basics.

 

The challenge with synchronous replication comes from how long it takes to send IO from the primary array to a target array, get an acknowledgement back, and then tell the server the IO has been successfully committed. The amount of time depends on several factors, including distance, routing, and line quality. For this article, let’s assume that 100 km of distance adds roughly 1 millisecond of latency.

 

WHY YOU CARE: 1 millisecond might not seem slow, but with NVMe it could be considered a snail’s pace. Since certain NVMe implementations can theoretically transfer data in the sub-100 microsecond range, 1 or more milliseconds can easily translate to a 10x slowdown, crushing the value of NVMe. Distance does not make the heart grow fonder with NVMe.
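A quick sketch of that math, using the assumptions above:

```python
# Impact of synchronous replication on an NVMe-class write, using the roughly
# 1 ms per 100 km figure assumed above and a sub-100 microsecond local write.
local_write_us = 100                              # assumed NVMe-class write latency
distance_km = 100
replication_latency_us = distance_km / 100 * 1000  # ~1 ms added per 100 km

total_us = local_write_us + replication_latency_us
print(f"Local write: {local_write_us} us; with sync replication: {total_us:.0f} us "
      f"(~{total_us / local_write_us:.0f}x slower)")  # ~11x
```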

 

Note: Read IOs are serviced by the primary storage array, and no data needs to be sent to the target. For read-intensive applications the impact will be lower, but unless your workload is read-only, you need to consider the effects.

 

Value Add or Speed Impediment?

 

At this point you might be thinking, yes synchronous replication is a speed impediment and not worth using. For certain use cases, you might be right. Speed demons (IT teams pulling data from drones; data scientists crunching numbers) may see no need for synchronous replication. Of course they’re also likely to get rid of almost any feature that sits in the data path as we discussed in previous blogs.

 

But for business operations where loss of any data could impact financial viability or result in lawsuits, it is better to have a little slowdown than to risk losing data. For that reason I find it hard to believe synchronous replication will go away.

 

The question will instead be how we redesign our approach to synchronous replication so that latency is minimized (eliminating latency is hard… for now). There are several ways this could play out:

 

  • Predictive replication. Synchronous replication occurs as usual, but the source system analyzes previous IO data (RTT, etc.) to determine whether it should wait for an acknowledgement from the target system before responding to the host. If IO has been consistently stable for x amount of time, this semi-synchronous method might be acceptable for a limited number of IO transfers (a minimal sketch of this idea follows this list).
  • Host-side replication. While this doesn't resolve the overall latency of transferring data to a remote site, it does ‘cut out the middle man.’ Having a host manage replication would eliminate the latency of having the source array broker IO and, if paired with predictive replication, could improve IO response times while improving data protection.
  • Staging or tiering. In this instance, synchronous replication is not performed on initial data capture (hold on, it makes sense). Instead it is performed after initial processing. For workloads where raw data is captured, processed, and only the ‘output’ is saved, synchronous replication can and should wait until the final product is created. That lets you get both speed and security. And to be fair, this strategy aligns very well with workloads where NVMe adds the most value.
  • Faster networking. It is feasible that we could have faster connectivity across metropolitan sites, enabling at least semi-synchronous replication to make it into strategies. Physics will continue to be an issue (insert PM joke about requesting a change to the speed of light), but with higher-quality links and even caching stations, latency could be beaten into some level of submission - without having to fold space and time.
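Here is a minimal sketch of the predictive-replication idea from the first bullet, with purely illustrative thresholds rather than any product's algorithm:

```python
from collections import deque

# Sketch of "predictive replication": acknowledge the host before the remote ack
# only while recent replication round trips have been stable. The window and
# thresholds are illustrative assumptions.
class PredictiveAck:
    def __init__(self, window=1000, max_rtt_us=1500, max_jitter_us=200):
        self.rtts = deque(maxlen=window)
        self.max_rtt_us = max_rtt_us
        self.max_jitter_us = max_jitter_us

    def record(self, rtt_us: float) -> None:
        """Record the round-trip time of a completed replication IO."""
        self.rtts.append(rtt_us)

    def ack_before_remote(self) -> bool:
        """True if the link has been stable enough to respond to the host early."""
        if len(self.rtts) < self.rtts.maxlen:
            return False  # not enough history: stay fully synchronous
        worst, best = max(self.rtts), min(self.rtts)
        return worst <= self.max_rtt_us and (worst - best) <= self.max_jitter_us

link = PredictiveAck()
for sample in range(1000):
    link.record(1000 + (sample % 50))  # simulated stable RTTs around 1 ms
print(link.ack_before_remote())        # True: safe to ack early for a limited window
```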

 

WHY YOU CARE: While some of this is theoretical, it does demonstrate that synchronous replication can / could be combined with NVMe to deliver performance and business continuance if strategies are adjusted. It also demonstrates that storage solutions can change to better leverage NVMe. Coupling host-side data services with traditional AFAs or software-defined storage would be a perfect example of this. Similarly, software-defined networking with NFV service insertion could provide a model to follow.

 

FINAL THOUGHTS AND NEXT STEPS

 

When all-flash arrays were introduced a number of folks said synchronous replication was dead because it impacted the overall speed of data transfers. And yet, synchronous replication continues to be deployed. Startup AFA vendors have even adopted it to compete in the enterprise.

 

With NVMe the same thing is likely to be true. Yes, some types of solutions (rack-scale flash) may eschew synchronous replication because it doesn't fit their target workloads. But for vendors that serve enterprise workloads, expect synchronous replication to remain – just optimized for NVMe.

 

From a Hitachi perspective, we are embracing both paths: high performance solutions that don’t necessarily need synchronous replication because of the workload type as well as enterprise solutions that include synchronous replication because of the criticality of business continuance.

 

For this second area, we could slap together some communication protocols, PCIe and high-speed interfaces on an existing system, but that isn’t in our DNA. Instead, we’re taking the time to examine the best approach to delivering NVMe so that we don’t sell our customers one thing and then immediately refresh it with something else because we’ve optimized software and hardware for critical functions like synchronous replication. That may mean we aren’t first to market, but it does mean we’ll have the industry’s most resilient offering in market. One that supports the next generation of IoT, analytic-driven smart data centers.

 

Other blogs to read:

 

NVMe and Me (The Journey to NVMe)

NVMe 101 – What’s Coming to the World of Flash?

Is NVMe Killing Shared Storage?

If there is one thing that is certain in life, it’s that nothing is static. Change is inevitable. And with change comes the need to rethink the way we approach… everything.

 

Case in point: NVMe. As previous blogs covered, NVMe and other flash advancements are forcing us to re-examine the way we architect IT solutions. Application IO stacks, networks, storage software and a host of other items can – and probably will – be tweaked or totally redesigned to get maximum value from NVMe.

 

This includes data protection. Initial press for NVMe has been around increased IOPS and lower latency, but if you read between the lines you’ll see it changes how data is safeguarded.

 

Why? Because data protection isn’t a zero-overhead process. It requires your storage array to do ‘stuff,’ and that ‘stuff’ either takes resources – which NVMe desperately wants to consume – or time – for which NVMe has no patience. In fact, NVMe is already tapping its foot and looking at its watch!

 

The end result is that as vendors and IT leaders optimize for NVMe, data protection implementations will change. And that not only affects your IT best practices, it affects how vendors design storage solutions. So buyer beware: it’s time to put on that thinking cap.

 

DATA PROTECTION GROUND ZERO: FLASH MEDIA RESILIENCY

A good place to start looking at this topic is with the flash media where data lives. Luckily, NVMe offerings tend to have similar uptime metrics when compared to current SSDs, including MTBF (mean time between failures) and DWPD (drive writes per day). Still, NVMe media isn’t fully mainstream yet and you should consider:

 

  • Dual Porting: Provides 2 connections from media to backplane. If one fails, the other allows IO to continue. Dual porting doesn’t impact speed, but it does affect price. Depending on who you ask, dual port NVMe drives are 20%, 50% or more expensive than current generation SSDs.

 

  • Hot Swap: Allows you to pull a drive and replace it while your storage system is running. Again, this doesn’t impact speed, but it does affect uptime. Check this carefully. Hot swap testing at plug-fests is revealing that hot swap of PCIe NVMe doesn’t always work.

 

WHY YOU CARE: NVMe drives are still coming of age and this can affect future-proofing. If your vendor uses single port drives to reduce costs or does not support hot swap, a refresh may be in your near future.

 

WHERE THINGS REALLY GET GOING: RAID AND ERASURE CODING

The next step up the data protection stack is RAID or more advanced forms of erasure coding (EC). Both enable storage solutions to continue serving data if one or more SSDs fail, but they are ‘expensive.’

 

RAID 1 mirroring is fast but cuts capacity in half. RAID 6 and advanced EC minimize the capacity tax but add processing overhead that slows down IO. In fact, maximum throughput can easily be cut in half and latency doubled. As a result, expect NVMe vendors to recommend you switch to RAID 1 / 10 to lower overhead.
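
To put that capacity tax in perspective, here is a small sketch comparing a mirrored layout with a 6+2 parity layout. The drive count and drive size are hypothetical, chosen only to make the percentages visible.

```python
# Quick capacity-tax comparison for a hypothetical shelf of 24 x 7.68 TB NVMe drives.
# RAID 10 mirrors everything (50% usable); a 6+2 RAID 6 / erasure-coded layout
# spends 2 drives per 8 on parity (75% usable) at the cost of extra CPU per write.

DRIVES, DRIVE_TB = 24, 7.68
raw_tb = DRIVES * DRIVE_TB

raid10_usable = raw_tb * 0.5        # half the capacity is mirror copies
raid6_usable = raw_tb * (6 / 8)     # 6 data + 2 parity per stripe group

print(f"Raw capacity:      {raw_tb:.1f} TB")
print(f"RAID 10 usable:    {raid10_usable:.1f} TB ({raid10_usable / raw_tb:.0%})")
print(f"RAID 6 6+2 usable: {raid6_usable:.1f} TB ({raid6_usable / raw_tb:.0%})")
```

In other words, moving from 6+2 to mirroring buys back IO overhead by giving up a quarter of the shelf – which is exactly the cost trade-off described above.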

 

WHY YOU CARE: For many organizations, this will change IT best practices. It may also increase overall solution costs and signal that your vendor’s RAID / EC strategy is about to change. Read on.

 

Don’t expect vendors to throw out RAID 6 and EC just yet though. Instead, expect to see new ways of implementing these technologies including:

 

  • Offload. Accelerating tasks by offloading them to an add-on card or separate set of resources can reduce overhead, but it is likely to necessitate a new (or additional) hardware controller.
  • Change the location. This is what some startup NVMe players in the analytics space are doing. Rather than put the burden on the array, they move it to the host. Hosts that need advanced resiliency will carry that burden, but hosts that don’t will see improved speeds.
  • Rewrite the algorithm. In theory, you can streamline code and release it via a software update. The downside is that you would have to create new RAID volumes and move existing data to the new volumes. That can be transparent, but it takes extra ‘swap capacity.’

 

 

 

WHY YOU CARE: Each approach has its merit depending on workload, but all have impact. Adding an offload function will require a tech refresh (impacts future-proofing). Moving tasks to the host requires a rework in strategy and a new set of products (more on this in future blogs). Rewrite of code forces rebuilding data volumes (impact to IT operations and the need for more storage).

 

TURN ON, TURN OFF: ADVANCED DATA SERVICES NEED TOGGLES

If we continue on to advanced data protection services like snapshotting and replication, a similar line of questioning occurs: how much latency will the data service introduce? If the impact on latency and IO is high, do you turn off a service?

 

For most data it is unlikely you’ll turn off data protection. But what about other data services like deduplication and compression, which eat up resources and can sit in the data path adding latency to IO? Hmm… that is a story for another day. So what is an administrator to do? Let’s look at two common data protection functions.

 

Snapshots: For important data that drives business operations, snapshots for backup are key. You should be aware, though, that they do consume resources and can add latency to IO. Keep in mind:

 

    • The snapshot methodology will influence overhead. The more writes, the more overhead.
    • If you have to quiesce a data set (e.g. with a consistency group) latency will go up.

 

To minimize the impact snapshots have on performance you can change the method, avoid quiescing or change the frequency so potential impact is infrequent.

 

Asynchronous Replication: Asynchronous replication creates a remote copy of a data set for recovery in the event of a site failure. It does not usually sit ‘in the data path,’ meaning it can happen after an IO is processed. This avoids potential impact to NVMe latency. That said, replication does require CPU cycles and could steal resources, impacting maximum performance.

 

Similar to snapshots, you can minimize any impact by changing the amount of resources used and the frequency of transfers.

 

Note: Synchronous replication is a different beast altogether and will be reserved for a future blog.

 

WHY YOU CARE: Understanding how many resources are required for data protection tasks and if they sit in the data path may change a number of things including the number of workloads / size of data sets you host per array. This is why it is critical for solutions to allow ANY data management service to be toggled on or off. It is also why you should expect future NVMe-optimized solutions to have new forms of QoS for fencing resources – not just by host, but by service.

 

YOUR NEXT STEP: CHECK YOUR BUYING QUESTIONS

At a high level the big takeaway here is that NVMe isn’t something you just plug into your system and say ‘go!’ To get the promised increases in throughput and low latency, it is imperative we consider how IO is influenced by data services, especially data protection.

 

When buying NVMe solutions you’ll want to consider:

 

 

  1. What influence do my data protection services have on throughput and latency?
  2. Do I need to change my approach to RAID, snapshots and replication to improve IO?
  3. Do I need to reduce the number of workloads I host per array or ‘beef up’ my array?
  4. How is my vendor responding to NVMe and will it require a new controller or software?
  5. Should I consider a totally new approach to where data services run?

 

These questions are key to determining how you adjust your strategy, how you size an NVMe storage purchase AND if you are buying a solution that may be out of date soon.

 

As we discussed, be sure to look for an offering that uses dual ported, hot pluggable media; a RAID / EC strategy that is upgradeable (doesn’t require a new controller); selectable data services; etc. There are other considerations (networking for instance), but that is a topic for another blog.

 

You’ll also want to be wary of marketing numbers that claim ‘100 microsecond latency’ but don’t tell you what data services (RAID, replication, snapshots or others) are running. Because, as we all saw with the initial wave of flash offerings, there was the speed you saw on day one and the speed you saw on day 30. And boy, were they different.

 

 

You've heard it before, I know. Companies need to take advantage of digital transformation in order to achieve next-level growth. But digital transformation is *not* a light switch – you can't just flip it on and voila, you're done.

 

Digital transformation uses technology to enable efficiency, differentiation and innovation across all aspects of an organization’s activities and processes.

 

But before an organization can realize the full benefits of digital transformation, they still need to “take care of business” from an IT services delivery perspective.

 

IDC recently studied IT staff teams, and the results are interesting.

 

IDC survey data indicates that 45% of IT staff time is taken up by routine operations like provisioning, configuration, maintenance, monitoring, troubleshooting, and remediation whereas only 21% is allocated to innovation and new projects.

 

Many routine tasks are automatable and others may be dramatically simplified and streamlined.

 

First Step – The very first step in simplification and streamlining is the ability to meet business demands by setting up additional resources to serve clients – and in today’s topic, that resource is storage: the Hitachi Virtual Storage Platform (VSP).

 

With the industry’s leading 100% data availability guarantee, one could understandably believe that configuring such a robust system would be very involved and complex – and, dare I say it, may even involve the rudimentary but much-loved CLI (command line interface). CLI provides control and power, yes. But simple and intuitive it is not.

 

 

6 Steps to Configure a VSP with Hitachi Storage Advisor

 

Some background info for readers new to Hitachi: we developed Hitachi Storage Advisor (HSA) in response to customer requests for an intuitive graphical user interface to easily manage our VSP platforms. HSA accomplishes this by delivering a simplified, unified approach to managing storage:

 

  1. Quick and simplified VSP storage deployment – HSA accomplishes this by abstracting complex unified management tasks for both block and file requirements into fewer and less complex steps
  2. It saves time by using wizard-driven workflows that enable storage management operations to be completed faster
  3. And yes… fewer than six steps to configure and provision a VSP for SAN & NAS deployments

 

 

 

Realizing our customers may want to deploy VSP in both SAN & NAS environments, we designed HSA to manage both of these…easily.

Hitachi Storage Advisor can configure, provision and (locally) protect a VSP in fewer than six steps for block and/or file workloads. Here’s how (a hypothetical pseudocode sketch of the same flow follows the list):

  1. First, discover the storage array
  2. Create and initialize parity groups – HSA automates this based on best practices
  3. Create either block or file pools
  4. Create hosts
  5. For block: engage the create volume, attach volume and protect volume multi-operation workflow
  6. For file: create file systems, shares and exports
  7. And that’s it… done
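
To make the flow concrete, here is that same sequence as pseudocode. Every function and argument name below is hypothetical – this is not the HSA interface or API, just an illustration of how coarse-grained the steps are compared with traditional element-by-element provisioning.

```python
# Pseudocode only: these helper names are hypothetical and do NOT reflect the
# actual Hitachi Storage Advisor API. The point is the shape of the flow -
# a handful of coarse steps instead of dozens of low-level operations.

def configure_vsp(hsa, array_address, credentials):
    array = hsa.discover_storage_system(array_address, credentials)        # step 1
    parity_groups = hsa.create_parity_groups(array, best_practices=True)   # step 2
    pool = hsa.create_pool(array, parity_groups, kind="block")             # step 3
    host = hsa.create_host(array, name="app-server-01",                    # step 4
                           wwns=["10:00:00:10:9b:aa:bb:cc"])               # hypothetical WWN
    # Step 5 (block): one multi-operation workflow creates, attaches and
    # (optionally) protects the volumes in a single pass.
    hsa.create_attach_protect_volumes(pool, host, count=4, size_gb=512,
                                      protect=True)
    # Step 6 (file) would instead create file systems, shares and exports.
```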

 

**Notice that the multi-operation workflow in step 5 not only creates volumes and provisions them but can also (optionally) protect them.**

 

In addition, this single “federated” workflow can provision from ANY of the discovered/on-boarded storage systems across the data center.

 

From a single console, HSA can manage VSPs deployed in the data center and in remote locations.

 

A single HSA instance can manage up to 50 storage systems, be they local, remote or a combination of the two. If you want remote staff to manage your systems, that’s also OK: HSA was designed for non-storage experts, so almost anyone can configure and provision a storage system, monitor capacity utilization and handle other storage management issues without having to go back to the experts.

 

Hitachi Storage Advisor also simplifies complex tasks by abstracting and hiding their complexity behind intuitive, easy-to-use workflows based on best practices. For example, HSA auto-categorizes drive capacities into tiers, which enables one-click parity group creation and initialization. These tiers are then leveraged to further simplify the pool creation operation.

 

Note: when creating pools, HSA (in the background) makes sure that pool creation best practices are adhered to. This protects customers from creating pools with too few parity groups, for example.

 

Hitachi Storage Advisor is a key first step in helping customers realize the full benefits of digital transformation. By greatly simplifying initial VSP deployment and ongoing management, customers can redirect these IT resources toward more impactful and strategic projects that will help them achieve the full benefits of digital transformation for their business.

 

Stay tuned for more data center management related blogs to come.

 

Thanks for reading.

 

Tony

 


When we have seen what flash has done to change our businesses in such a short space of time, it is no wonder that the talk around NVMe-based solutions is becoming such an exciting, high-performance proposition. Just one tiny detail… how on earth do you move your business planning to make use of this technology? Oh… and why should you care?!

 

Now, I used to be a customer, an IT leader, much like many of you who are reading this blog. I used to be someone who loved to live on the bleeding edge of solutions, although this was more often than not dictated by my industry.

 

I worked in motorsport for 10 years, but the most challenging period had to be my time in Formula 1. I arrived at the dawn of downsizing and consolidation, the death of the big 3.0Ltr V10 engine and the introduction of the 2.4Ltr V8 (19,000 RPM and 800HP). One slight issue did arise: the move from a V10 to a V8 changed the vibration resonance in the garage when the cars were started… in fact it was just enough to make every spinning HDD in our garage wobble... ahhh, the Blue Screen of Death and no telemetry… I wasn’t a very popular chap and I didn’t get much sleep that week in Bahrain, I can tell you!

 

jenson.png

 

So, in 2006 (goodness I feel old) I went all flash… in EVERYTHING. This had another effect too… WOW, our application performance really had been supercharged! On the other hand, my CIO nearly had a heart attack when we needed more than the predicted budget for the year. But this is for the business; we are there to win races and championships and it’s no good if you cannot even start the car!

 

As we look at the dawn of the next generation of flash storage coming with NVMe, I am getting excited all over again about how this can really take businesses to the next level with the performance on demand from NVMe over PCIe. I just have this little niggle – the same notion I had the first time someone said I should try “HD movies” (sorry, I was too young for the Betamax vs VHS saga): do you want HD DVD or Blu-ray? I certainly do not want to be stuck with HD DVD!

 

blueray.png

 

 

Let’s take a look at what we need to take into account before we jump straight into NVMe flash:

 

1. Multiple form factors – Right now a standard hasn’t really been decided; of course NAND will be custom-built by vendors, but we also have the flash manufacturers wading in. Just look at Samsung’s NGSFF SSD vs. Intel’s Ruler (SFF-TA-1002). Does anyone want to be locked into a single vendor without a certified standard?

 

2. Architecture choices – The main change from SAS to NVMe is a direct path to the processor, either for the compute host or the storage controller. Which makes the most sense? Hyper-converged or SAN? Either way these are point solutions, so how can I use both? You can read more on this topic here.

 

3. Impact of data services – To get the best performance, latency is going to be the enemy at every point. So data reduction and replication services are almost out of the question for the time being. Long live tiering!

 

4. Evolving connectivity standards – Eventually we are going to want to extend performance outside of the array or between hosts using fabric connectivity… we don’t want a bottleneck in the network, so you are going to need a 100Gb/s network… but there is a mix of options here: OmniPath, RoCEv2, NVMe over FC… which protocol do you choose? To make matters more interesting, some vendors are using proprietary networking protocols. Whichever way you go, you are going to have to replace your current network.

 

5. Limited scalability – The limit today is tens of devices on a PCIe bus, not the many hundreds that can be found in SAS-based AFAs. Scale-out can help resolve this challenge, but it is expensive to add controllers when you only need more capacity, and the communication latency between nodes will significantly reduce the performance advantage that NVMe was designed to deliver. Future PCIe switching designs may help address the scalability issue, but they aren’t ready yet.

 

Ummm – so all is not as it seems out of the box… we need to make a few decisions. It seems like the unknowns at present could leave us stuck with a solution that may not be upgradable in the future, or even with a dead form factor. So how can I future-proof myself?

 

 

What options do we have?

 

In the first instance, you can take a standard SAN array that makes use of NVMe back-end technology to accelerate your workloads. This would allow you to supercharge your storage solutions while using tiering technology to combine multiple types of flash in one solution. This would then enable you to consolidate your storage performance classes onto one solution… who said tiering was dead? What is cool at this point is that you can then focus on automating workloads across storage classes, hypervisors (because we all need more than one) and compute, while at the same time making use of enterprise replication and data reduction services.

 

The second option is to look at scale-out hyperconverged solutions using Skylake processors from Intel. Having the workload able to access storage and compute on the same silicon makes for incredible response times. Also, as workloads are localized on a per-node basis, we can really make use of PCIe flash as a caching layer before committing our writes to enterprise SAS flash. The cost of growth is granular – it’s just another server appliance – so you can pay as you grow rather than make a high upfront investment.

 

Both options have one major issue, as highlighted earlier… you are going to have to replace your network to really make use of the technology. In both instances we are moving the bottleneck around… you are going to need a 100Gb network! This is a real cost in unlocking the very high performance on offer from NVMe. Again, there is not a clear direction in which the market is going to go… I mean, how many of you have FCoE deployed?

 

rick.png

 

The right Software Defined strategy could be the answer

 

As we already know from SDS and SDN, decoupling the software from the hardware gives us the flexibility to use whatever hardware platform is right for each workload. The main outcome, especially while there are technology and topology unknowns, is that we can keep delivering consistent SLAs to the business by automating the data center with software.

 

Do you think Amazon or Google give a bugger about what hardware they use? NOPE!

 

One thing I do want is enterprise-level quality from my software. I do not want another point solution – that would defeat the object! The software must span the data center, from storage arrays to hyper-converged, and must be hypervisor and cloud agnostic… every customer I am speaking to is exploring every option for reducing cost and consolidating their tools.

 

The ideal solution is one where I can still buy my storage platform with an NVMe back end for my key business applications… but also have the same storage software running on a scale-out x86 platform using NVMe caching for my virtualized workloads. In fact, what is stopping me from also having the same storage software running in the public cloud to offer me instant DR at any point? V.v.v.cool!

 

The same can also be said for my fabric connectivity: I could use NVMe over FC for my storage arrays and RoCEv2 for my hyper-converged stack, but if these are flat topologies then the intelligence sits in software and you are not tied to any vendor.

 

Opening our eyes to the right software-defined strategy will ultimately give your business the flexibility to make use of whatever technology is available, without locking you into any hardware solution, form factor or standard.

 

 

In Summary

 

We can see how a strong software defined strategy can prevent pain further down the line, but does this stop us making a technology decision today? What do you do right NOW…?

 

While standards are being defined for the new technology, we can make use of NVMe caching today to accelerate our virtualized workloads without any uplift. The perfect option would be Hitachi Vantara UCP solutions, as this allows you to scale your application performance and agility without locking you into one technology. You can get more information on these solutions here.

 

This gives your business the breathing room to deliver a killer software defined strategy that will enable real digital transformation. Best part… the UCP platform is already software defined, so you have already started your journey!

 

Keep moving forward!

 

Cheers,

Bear

 

 

Read more in this series:

NVMe 101 – What’s Coming to the World of Flash?

Is NVMe Killing Shared Storage?

keepmoving.jpg

If you've been investigating NVMe solutions lately you may have noticed some interesting comments around shared storage and NVMe. Namely, NVMe and NVMe over Fabrics (NVMeF or NVMe-oF, depending on who you talk to; I like fewer characters) introduce some... challenges for current shared storage architectures.

 

What? You've been told that NVMe slots right into current designs? Well sure, you can support NVMe and NVMeF by 'tweaking' an existing array design but that doesn't mean it’s going to give you the ROI you expect. So buyer beware lest you become a grumpy cat.

 

Note: For an overview of NVMe, NVMe over Fabrics and different approaches to implementing NVMe, check out this blog by Mark Adams.

 

BACKGROUND: SHARING IS FUN AND SAVES MONEY

The notion of shared storage has been around for a long time. Implementations vary, but the basic idea is the same: a system owns a pool of storage and shares it out for use by multiple hosts. This enables superior economies of scale compared to direct attached storage because capacity is:

 

  • Managed and serviced from a consolidated location (operational simplicity)
  • Not stranded on / under-utilized by individual hosts (reduced budgetary waste)
  • Scaled independently of the host (operational efficiency)
  • Able to be virtualized and over-subscribed to minimize idle resources (storage efficiency)

 

Combined, these benefits significantly improve IT economics, reducing system, management and environmental costs (power & footprint).

 

NVMe: HIGH SPEED BUT AT A COST TO ENTERPRISE SHARING

From a ‘raw’ performance standpoint a single NVMe device has the ability to completely saturate a 40GbE network link (some think 3D XPoint will saturate a 100GbE link; I’d be very skeptical). A single NVMe device also has the potential to soak up storage controller CPU time faster than a dry sponge in a swimming pool.

 

Note for the experts: I agree NVMe is more CPU efficient than AHCI, but even an improvement in IO processing from 10 µs to 3 µs of CPU time still means an NVMe device can saturate a CPU with 100% read workloads. At 100% writes you've got only slightly more scale.
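
To put numbers on that note, here is a rough sketch. The 3 µs per IO figure comes from the note above; the 700K random read IOPS per device is an assumed, illustrative figure rather than a spec for any particular drive.

```python
# Rough math behind the note above. Assumed numbers, for illustration only:
# ~3 us of controller CPU per IO (post-NVMe efficiency gains) and a single
# NVMe device capable of ~700K random read IOPS.

CPU_US_PER_IO = 3            # assumed controller CPU cost per IO, microseconds
DEVICE_READ_IOPS = 700_000   # assumed per-device random read capability

iops_per_core = 1_000_000 / CPU_US_PER_IO          # ~333K IOPS per core
cores_per_device = DEVICE_READ_IOPS / iops_per_core

print(f"One core sustains ~{iops_per_core:,.0f} IOPS")
print(f"One device can keep ~{cores_per_device:.1f} cores busy on reads alone")
```

Under those assumptions a single device keeps more than two controller cores fully busy before any data services run at all.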

 

The implication is that even a small set of NVMe devices is going to consume a lot of network ports and CPU resources – even with all data services turned off (more on that in a minute). You’ll be spending a lot on a high-end storage controller and expensive NVMe media to share… a few TB? Sure, you can add NVMe capacity, but to what end? You aren't accessing its value because the controller is tapped out.

 

The other challenge is that the storage services used to abstract physical media into logical resources take processing time, adding latency. And if you plan on using deduplication to keep costs down? Bad news, deduplication adds a lot of overhead.

 

Even scale-out AFAs are not immune. Scale-out gets expensive if you only have a few NVMe devices per node (and nodes with more CPUs & 40GbE ports = more cost). Plus, cluster communications across multiple nodes will increase latency and reduce value.

 

RACK SCALE FLASH: THE NEW COOL WAY TO GET A SUPERIOR ROI FROM NVMe

This is why software-defined vendors are saying you should get rid of external shared storage and use NVMe only in direct attached storage (e.g. hyperconverged) or NVMe over Fabric RDMA solutions (e.g. rack-scale flash). Rack-scale flash vendors get more out of NVMe storage by breaking the mainstream storage design:

 

  • Stripping out storage services that can add latency (e.g. thin provisioning)
  • Moving core storage services to the host (e.g. very basic RAID)
  • Implementing NVMe over Fabric with RDMA (e.g. RoCE or iWarp)

 

Yes. Very very basic diagram, but hopefully it illustrates the point. Using NVMe over Fabrics with RDMA can be particularly helpful in increasing performance because it sends IO requests directly to the NVMe media, bypassing storage controller processing and its capabilities. The complication is that you lose shared storage capabilities unless you add a ‘manager’ that owns carving up the media and telling each host what block ranges they own (note: a few vendors have this).

 

Net, this architecture can really tap into the potential of NVMe but at the cost of enterprise sharing and data services. Do you want to give up replication? What about thin provisioning to over-commit and reduce costs? Data reduction to make flash more affordable? All are lost – at least for now.

 

So rack-scale flash gets us past the ROI challenges with current shared storage architectures, but it struggles in the shared storage department.

 

 

SHARING 2.0: NVMe SPEED AND ENTERPRISE DATA SERVICES

So how do we get the performance of rack scale flash (which really unlocks the ROI from NVMe) coupled with the enterprise shared storage functionality (which delivers the best TCO and, ultimately, resiliency) found in today’s arrays?

 

This is the million dollar question storage vendors are looking to solve. It’s also the question that may drive you to hold tight and focus NVMe investment to hyperconverged workloads that benefit from a local NVMe footprint (see the Hitachi UCP HC) or rack scale flash workloads where IT teams may value performance over the cost efficiencies of shared storage.

 

OPTION 1: Improve shared storage in rack-scale flash. It’s absolutely possible to do this; just be aware that it may not be done completely in the storage array. Some services may be added to the array while others are done host-side and managed via a hive intelligence or master control server. Where they are done will depend on the frequency of updates (lots of status updates across hosts? Do it in the array.) and technical feasibility.

 

My 2 cents. This is already happening, but it’s going to take a while before these solutions deliver the level of sharing that you expect from an enterprise shared storage array. Even when they do, you can argue whether you want that functionality for the workloads rack-scale flash serves best. In the near term I’d use rack-scale flash for what it does better than anything else: running analytic workloads at high speed.

 

OPTION 2: Use a hyper-converged infrastructure with robust data services and NVMe support. The benefit of this strategy is that it adds back in our shared storage data services (virtualize and abstract!). By having a software-defined storage element and virtual server hypervisor on the same system you can access NVMe at high speed. There are considerations though:

 

  1. Data service overhead. The same thing that makes this a better solution also makes it slower. Every data service adds overhead so make sure you can toggle services off and on.
  2. Data set size / Distance impacts. Some day we’ll be able to fold space and instantly transmit data to nodes in different galaxies / pocket universes / points in time. Until then, we have to deal with wires. If a VM has to access another node to find data, you lose time. For smaller data sets this isn't an issue, but as the capacity used by a workload increases, it becomes a reality. You can optimize the code, use SR-IOV to bypass virtual NICs and use bigger pipes (100 GbE) to get around this, but it won’t have the streamlined stack of rack-scale flash.

 

My 2 cents. For ‘smaller’ data sets, HCIS (hyperconverged infrastructure solution) is a great option, but as data sets grow you need to consider a solution with a lighter stack (rack-scale) or the ability to consolidate larger amounts of NVMe devices.

 

OPTION 3: Use an NVMe over Fabrics optimized AFA. This doesn't exist yet. Yes, I hear vendors whining, but it doesn't. Here’s what we need, in my ever so insane opinion:

 

  • More, faster ports. For you to have a real NVMe AFA you need to have a much faster network interconnect.
  • NVMe Optimized Scale-out. Scale-out can be a good way to tap into NVMe power, but you have to consider cost & latency issues. To resolve this an AFA will need controller modules that can support more than a few NVMe devices and new QoS protocols that optimize for locality (including use of Directives) plus dynamically provision resources. Scaling will also likely include tiering. Wait… I said that before didn’t I?
  • Resource Fencing. To get the most out of NVMe devices you need to minimize the impact of data services. Use of offload cards or fixed / dynamic resource fencing is one way of shunting tasks to ‘dedicated’ resources so performance impacts are limited. There is a cost associated with this, but to reach maximum performance, it could be worth it.

 

My 2 cents. Getting to a fully optimized NVMe AFA is going to take time. Along the way we will see partial implementations that treat NVMe as ‘bulk capacity,’ only tapping into the performance of a few devices. For those with deep pockets and the desire to upgrade over time – current solutions are not future-proof – this may be fine. For those who want the best ROI, you may want to wait until an architecture has many of these elements. And if you have archive data, or even near-line data, there are very few reasons to look at NVMe yet.

 

CLOSING THOUGHTS

What shocks me is that this is a trimmed-down blog. There is just too much to say. NVMe and NVMe over Fabrics change the parameters of how we share storage and necessitate changes to our storage architectures. Those changes are evolving rapidly as vendors consider how to adjust architectures to support new and emerging workloads.

 

In the near term many of our notions of shared storage will have to adjust if we want to get every ounce of ‘juice’ out of NVMe, but there are options like Hitachi’s UCP HC – our hyperconverged offering that has NVMe as direct attached storage for high performance access. Like other offerings, it doesn’t hit the current scale of enterprise storage, but it is also a lot easier to upgrade and scale. Longer term solutions will evolve but you’ll likely deploy a mix of the solutions I called out (starts to feel a little like the rock band blog…).

 

Cheers,

 

Nathan Moffitt

There were two hot topics at the annual Flash Memory Summit held in Santa Clara, CA earlier this month.  The first was the unfortunate fire that kept the Exhibit Hall closed for the entire show. And the second was the coming wave of NVMe adoption in flash storage products.  Discussion about NVMe was included in nearly every keynote and session presentation that I attended. This blog is the first in a multi-part series that will address the ways NVMe is expected to change storage, how customers will benefit, things that must be understood by potential buyers and Hitachi’s plans to bring products to market that will utilize NVMe.

What is NVMe?

NVMe is an open-standards protocol for digital communications between servers and non-volatile memory storage.  For many of us, non-volatile memory storage means solid state drives or Hitachi Accelerated Flash FMDs.  The NVMe protocol is designed to transport signals over a PCIe bus or a storage fabric.  Much more on the emerging fabrics for NVMe will come in a later blog.

 

NVMe is expected to be the successor to SCSI based protocols (SAS and SATA) for flash media.  That’s because it was designed for flash while SCSI was developed for hard drives.  The command set is leaner and it supports a nearly unlimited queue depth that takes advantage of the parallel nature of flash drives (a max 64K queue depth for up to 64K separate queues).  Within queues, submission and completion commands are paired for greater efficiency. See figure 1.

Command queueing.png

Figure 1 – Source: NVM Express organization
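
For a sense of scale, here is the command-slot math behind that comparison – a small sketch assuming the standard AHCI limits (one queue, 32 commands) against the NVMe spec limits quoted above (up to 64K queues, each up to 64K deep).

```python
# Command-slot comparison behind Figure 1. AHCI allows a single queue of 32
# commands; NVMe allows up to 64K queues, each up to 64K commands deep.

ahci_queues, ahci_depth = 1, 32
nvme_queues, nvme_depth = 64 * 1024, 64 * 1024

print(f"AHCI outstanding commands: {ahci_queues * ahci_depth:,}")
print(f"NVMe outstanding commands: {nvme_queues * nvme_depth:,}")   # ~4.3 billion
```

That difference in parallelism is what lets the protocol keep many flash dies busy at once instead of funneling everything through a single shallow queue.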

 

The protocol streamlines communications by removing the need for an I/O controller to sit between the server CPU and the flash drives.  Storage controllers are freed up to do tasks that don’t involve managing I/Os, and the onboard DRAM doesn’t need to store application data.  It’s also ideal for the next generation of storage known as Storage Class Memory (SCM).  SCM, which promises to be many times faster than today’s flash drives, is not likely to use SCSI protocols, as NVMe is much better suited.

 

While storage has seen new protocols developed with great promise and hype (FCoE anyone?), NVMe is not brand new.  It’s been deployed in what’s known as “server-side flash” for several years now. Basically, many servers support the installation of an NVMe flash card option on a PCIe bus within the server.  This has been used for applications that require extremely fast response times.  One example of this is flash trading, a type of high-frequency trading where even a few microseconds of delay can mean the difference between a great trade and a mediocre one.  However, server-side flash has a limited set of use cases due to its relatively small capacity and inability to share storage resources across a network for greater efficiencies and better data protection.

 

NVMe is widely supported in nearly all operating systems, and vendors across the entire ecosystem of external storage are now working to make end-to-end NVMe adoption possible.  This includes the makers of drivers, host bus adapters, network adapters, flash drives and storage controllers.  There’s a maturation process underway in developing all the pieces and making them work in a manner that customers expect for their enterprise applications.  Many of these development challenges will be solved in the short term.  But some fundamental differences will remain between how NVMe storage will operate and the operations of the traditional SAN systems that we’ve become very accustomed to.  These differences will be felt in the way data services such as snapshots, replication, data reduction and RAID protection are delivered. Also, PCIe bus limitations will impact how scalability will work. Careful planning on how and where to use NVMe is going to be crucial because there will be trade-offs between NVMe and SCSI-based storage.  This topic of planning for NVMe will also be covered in detail in a later blog.

 

What are the advantages of NVMe?

NVMe is all about performance.  Depending on which set of benchmark comparisons you look at, all-flash arrays (AFAs) that have NVMe can achieve many times more random read and write IOPS (2 to 3x) and far more bandwidth capacity for sequential reads and writes (2 to 2.5x) than current generation AFAs.  NVMe also has far less latency than SAS (up to 6x), so the wait time to complete an operation is much lower.  The latency advantage is something that nearly every user will benefit from. Not many users will ever need millions of IOPS, but most will enjoy blazing fast response times that stay consistent even when their system capacity fills up.

 

In the right architecture, NVMe performance has greater linear scalability.  If you need more performance, then add a drive.  It’s also capable of supporting more VM density. And more applications can be consolidated on a single system due to its higher performance ceiling.  But the right architecture and design are required to take advantage of these performance scalability and consolidation advantages.  Today's AFAs with dual controllers or limited scale-out will quickly saturate with just a few NVMe drives installed.

 

Finally, NVMe drives will consume less power per device than SAS drives due to advanced power management, which has more states than are available to SAS flash drives.

 

Which use cases will benefit the most?

Any application that requires high levels of performance will benefit from NVMe storage. The primary use cases for the initial deployments of NVMe are expected to be:

  1. High performance computing – Examples include projects like climate science research, drug interaction simulations and Bitcoin mining which all require demanding levels of computational resources with access to data at very low latencies. 
  2. Real-time analytics – An increasing number of environments are requiring real-time analysis of data. Fraud detection, for example, works by looking for irregularities that might be coming from a group of transactions.  Real-time analytics is also the basis of future artificial intelligence and machine learning applications that are coming from IoT deployments.  These applications will benefit from the high read IOPS that NVMe storage can deliver.
  3. Large-scale transactional processing – Online retailers, large financial institutions and governmental service agencies may need to process thousands of transactions per second.  In these write-intensive environments, NVMe storage will become the preferred solution.

 

What will be the deployment architectures for external storage?

  • Software defined storage (SDS) – Some customers, looking to reduce their costs, will want to use a proven storage operating system and management interface to run their commodity hardware.  The commodity hardware can include NVMe as long as the storage OS supports it.
  • All-flash arrays (AFA) –  Other customers may need to improve their performance to levels that are better than what they are getting from their AFA today.  We hear from users of competitive AFAs that their performance isn’t consistent and latency may spike during periods of heavy workloads or when capacity starts to fill up. NVMe will resolve these issues and still allow customers to utilize many of the data services and management workflows that they rely on.  In AFAs, NVMe might be deployed in the “back-end” (over PCIe between the storage controller and the flash drives), “front-end” (over a storage fabric) or end-to-end (from the server to the flash drives).  However, data services, SAN overhead and data protection that are common in AFAs will take away from their ability to realize the full potential of the performance improvements that NVMe is capable of.

AFA.png

  • Hyperconverged infrastructure (HCI) – For virtualized and cloud-scale applications that need more performance, it makes sense to redesign the nodes to utilize NVMe over PCIe for communications between the servers and the flash drives. This is the fastest way for customers to deploy and make use of NVMe to accelerate their virtualized workloads in a production environment. The speed and nature of setting up HCI in a customer environment allows organizations to accelerate the deployment of their projects from weeks to hours.  And scaling nodes to add more capacity and compute resources can be accomplished without impacting ongoing operations.

 

HCI.png

  • Rack-scale flash – This is very dense external flash storage that can be shared by multiple application servers.  Servers and storage are connected via very high-speed Ethernet which can run at 100Gbps speeds today and with even faster transport speeds coming soon.  Unlike AFAs, rack-scale flash deployments move most of the data services (snapshots, replication, compression, etc.) to the host so the storage performs as fast as possible. Rack-scale flash will have the highest possible performance and the best price/performance ratio.

 

RS.png

 

What are Hitachi’s plans?

Hitachi views NVMe as an important technology for delivering maximum performance and highest value to our customers. Our intent is to take a multi-prong approach to implementing NVMe technologies that covers the broadest set of workloads and use cases to give our customers the best solution choice for their needs.

 

Hitachi’s first NVMe product to be offered is our hyperconverged, Unified Compute Platform, the UCP HC. Recently, we announced support for NVMe caching on the UCP HC. This allows application workloads to be accelerated without a significant investment by customers. Using NVMe as a caching layer within scale-out HCI environments allows customers to make use of the highest performance media distributed across the cluster. This is beneficial for workloads that have ultra-low latency requirements. NVMe persistent storage will follow shortly. Our expectation is that growth here will occur steadily through 2018 as pricing on NVMe starts to reach the level of current generation SSDs. 

 

We believe that starting our introduction of NVMe with UCP HC allows our customers to leverage NVMe for improved application performance and do so by starting small with the ability to quickly and easily expand as needed.  This avoids the need for a high CAPEX investment in the technology right up front.

 

The next phase of our NVMe evolution will be to deliver offerings aimed at the applications best suited for NVMe – extremely high performance, large-scale analytics and transactional operations. We’ll cover why these applications are well suited to leverage larger NVMe data stores in a future blog, but suffice to say this is where we see tremendous opportunity and alignment with the larger Hitachi analytics portfolio.

 

Support for NVMe in our AFAs will come at a later date when price, performance and maturity of NVMe stabilize. In our view, other AFA vendors that have recently announced NVMe products have not fully considered how their designs must change to achieve low latency and high performance while still delivering enterprise data services and non-stop availability. This is particularly important as many customers rely on their AFAs to store their business records. More information about our future direction can be shared under NDA by our Hitachi sales reps.

 

Read more in this series:

Is NVMe Killing Shared Storage?

NVMe and Me: NVMe Adoption Strategies

NVMe and Data Protection: Time to Rethink Strategies

NVMe and Data Protection: Is Synchronous Replication Dead?

How NVMe is Changing Networking (with Brocade)

How NVMe is Changing Data Center Networking: Part 2

Hitachi Vantara Storage Roadmap Thoughts

In February I shared my blog Lies, Damned Lies and Uptime Statistics, which covered some justifications for higher IT resiliency. Just a few days later, California floods were front and center in the news as many populated neighborhoods were devastated by overflowing rainwater. A few miles away from the HDS Santa Clara headquarters, San Jose experienced some of the worst flooding in a century; an ironic turnaround, since California had been in a drought for the last 4 years.

San Jose flood.png

 

It reminded me how vulnerable we are to Mother Nature’s mood swings and led me to catch up on business continuity reading, such as the 2017 Forrester Research study on the State of Disaster Recovery Preparedness published in the spring 2017 Disaster Recovery Journal.

 

I needed to create some sales and partner communication in support of Hitachi Storage Virtualization Operating System (SVOS) 7.1 and the Global-Active Device (GAD) enhancements released in March 2017.  If you are not familiar with the Hitachi VSP family active-active storage solution, here is a short description: GAD enables you to achieve a strict zero RTO and RPO; it ensures continuous operations for key applications and nonstop, uninterrupted data access for critical SAN and NAS deployments.

 

The Hitachi GAD solution’s distance limitation was recently extended from 100km to 500km.  Why 500km? Who cares about this distance improvement? Why does it matter?  This blog post provides the insights I gathered from HDS solution engineering testing, product management (Tom Attanese) and market data.

 

One of the most effective ways to improve the resilience of IT infrastructure is to increase the geographical separation between primary and recovery data center sites. By keeping data protection solutions and failover servers in a different region from primary IT infrastructure, companies reduce the chances that a single natural disaster or power grid failure will take out their backup along with the primary systems that the backup is supposed to protect. However, determining the appropriate level of geographical separation remains the subject of confusion and debate.

 

US Power Grid.png

The basic rule is that the datacenter sites should be far enough apart that they are not subject to the majority of the same risks.

 

Whether it's winter storms, power outages, or terror threats, you need to make sure that it's highly unlikely that a single event could take down both sites since the two most common causes of declared disasters are power failures and floods.  Tactically, your backup datacenter needs to be connected to an alternate power grid.

 

From a United States perspective, here is a short guide for distances per threats facing your business:

  • Hurricanes: 105 miles of distance (170 kilometers)
  • Volcanoes: more than 70 miles (112 kilometers)
  • Floods: more than 40 miles (64 kilometers)
  • Power grid failure: more than 20 miles (32 kilometers)
  • Tornadoes: more than 10 miles (16 kilometers)

 

Sterling Research.png

The Sperling research illustrates the risk and the area of each threat.  While a datacenter separation of a hundred miles is never a problem in the U.S., for some European or Asian countries with a smaller geographical footprint the easiest solution would be to position a backup site in a neighboring country with compatible laws and regulations.  To mitigate most of the risks, industry consensus suggests you place a disaster recovery location somewhere between 30 miles (50 kilometers) and 100 miles (160 kilometers) away from your primary location. But again, please do your risk assessment first.  By performing a risk assessment on the business that includes threats to the physical location of operations, labor force availability and customer locations, the business will be able to make informed decisions on “how far is far enough”.

 

So, is GAD distance support of 500km overkill? Quite the opposite: many customer requests drove this enhancement. The demand came from our customers who already own their datacenters or operate in co-locations, with many of these customers located outside the U.S.

 

The IT industry already recognizes Hitachi’s resiliency as the gold standard, but did you ever wonder why our engineering is so good at it? High availability requirements from our Japanese customers are some of the most demanding due to the risk associated with doing business in one of the most densely populated areas in the world, which is exposed to nearly every possible natural disaster risk (see the extract below from Lloyd’s City Risk Index report).

WW Cities Risk.png
The Lloyd’s City Risk Index quantifies exposure potential in 301 cities from both man-made and natural threats. Tokyo, which is vulnerable to a much wider combination of risks, both natural (e.g. tsunami, windstorm, earthquake, flooding) and man-made, sits near the top of Lloyd’s list (after Taipei) with US$183 billion in gross domestic product (GDP) at risk. New York City is the highest-ranked U.S. city, with US$91 billion in GDP at risk.  Tokyo businesses need their active-active data center requirements extended to 500km to support their business continuity plans.

WW Cities Risk Ranking.png

Another Hitachi customer, from Switzerland, assisted us in the validation as they needed to go beyond the 70 miles (120km) distance to accommodate the location of their secondary data center; it was cost prohibitive to deploy a high-availability solution outside their existing datacenter locations.

 

What made an impression on me after interviewing product manager Tom Attanese was the Hitachi VSP F series test results from our solution engineering that demonstrated the low and consistent latency of the GAD 500km extension.  Keep in mind that read I/Os are executed locally.  It is only the write I/Os that incur a response time increase of 1 millisecond per 100km round trip. See the following chart of the Hitachi VSP F series test results at local, 100km, 200km, 300km and 500km distances. Response time is pretty much flat until the workload saturates the system.

GAD long distance test results.png
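
As a rough illustration of that write-latency math: the 1 ms per 100 km round trip comes from the testing described above, while the 0.5 ms local write response time is an assumed figure used only to make the arithmetic concrete.

```python
# Illustration of the GAD write-latency math described above: reads stay local,
# while each write pays roughly 1 ms of extra response time per 100 km of
# round trip. The 0.5 ms local write response time is an assumed figure.

LOCAL_WRITE_MS = 0.5        # assumed local write response time
EXTRA_MS_PER_100KM = 1.0    # from the testing described above

for distance_km in (0, 100, 200, 300, 500):
    write_ms = LOCAL_WRITE_MS + EXTRA_MS_PER_100KM * distance_km / 100
    print(f"{distance_km:>3} km: write response ~{write_ms:.1f} ms (reads stay local)")
```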

If you feel only partially prepared or totally unprepared to meet your business continuity goals, need to raise your SLAs to 24x7 to stay online and be competitive, or thought that a synchronous active-active storage solution was out of reach due to the distance between your datacenters, we are here to help with enterprise-wide high availability and disaster recovery solutions.

 

I am looking forward to your comments.

 

 

 

Patrick Allaire

Like many Silicon Valley professionals, I am juggling my time between work-related activities and being a parent; I have little time for myself until everybody is asleep.  At that point, I am so exhausted that I will last 30 minutes or so before I fall asleep. As the dad of a 3-year-old son in pre-school, I’ve experienced a pattern where every winter brings a new strain of a nasty cold virus.  This winter being no different, one day my son came back from school with a runny nose, sneezing and being so needy that avoiding these critters was like fighting gravity.1

 

After a week of extra-careful hand washing, using hand sanitizer everywhere and dodging every sneeze, I still caught something nasty.  I felt it in the middle of the night with a scratchy throat, and within 12 hours it moved up to my sinuses and my head wanted to explode. While you can call in a sick day at work, there is no sick day as a dad – even less so when mom catches the same thing…

 

This experience reminded me of the value of health; there is no good time to be out sick when your family needs you.  While we take for granted that we have the strength and energy to get through our daily tasks, a minuscule bug can bring down the healthiest parent…

 

Software-Bugsv2.jpg

I see a similar parallel with the digital world IT professionals are building; I am not talking about the same bugs or critters your kids bring home but something even worse: quality defects found in hardware and software that, once awakened, can bring down any production application (see the February 28th, 2017 Amazon cloud service outage as an example).

 

Early in my career, I thought some of my peers working in government and the financial industry were overzealous with IT changes and processes.  My position changed 180 degrees over the course of 20+ years as I saw many IT careers shattered by blind trust in a new technology.

 

 

broken_promises.jpg

The most sophisticated IT buyers remind me that every vendor claims the same thing in terms of uptime, promising upward of six 9s (99.9999%) availability, but only a few deliver on their promise.  So, what is the value of quality for your organization? These same customers will be quick to say that there is no monetary value2 in trust, but like a parent looking for a sitter for one night out, relying on a trusted family member to watch over your child gives you peace of mind.

 

If the all-flash array (AFA) market uptime standard is 32 seconds of unplanned downtime annually, how can any vendor deliver on this promise when one incident can take hours to resolve?

Table of Nines.png
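
For reference, here is where figures like “32 seconds a year” come from – a quick sketch of the downtime allowed at each availability level, using a 365-day year.

```python
# Allowed annual downtime is simply (1 - availability) x seconds in a year.

SECONDS_PER_YEAR = 365 * 24 * 3600

for label, availability in (("three 9s", 0.999), ("four 9s", 0.9999),
                            ("five 9s", 0.99999), ("six 9s", 0.999999)):
    downtime_s = (1 - availability) * SECONDS_PER_YEAR
    print(f"{label:>9}: ~{downtime_s:,.0f} seconds of downtime per year")
```

Six 9s works out to roughly 31.5 seconds of unplanned downtime per year, which is why a single multi-hour incident blows the budget for years to come.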

 

I reached out to our Hitachi Virtual Storage Platform (VSP) family and Storage Virtualization Operating System (SVOS) engineers and consulted the HDS customer services and support (CS&S) organization to find the answer.  While Hitachi quality and resilience have always been considered the gold standard in the industry, I never understood why others failed to emulate the same methodology or practices.  Non-initiated buyers wrongly assume that this trusted status is achievable by any vendor with the financial resources to commit to quality; the reality is quite the opposite – see my prior blog “Lies, Damned Lies and Uptime Statistics” for how vendors get away with their inflated uptime specifications.  No new AFA vendor can re-create Hitachi storage quality and resilience overnight, as their engineers can’t predict the interaction of all edge conditions across the stack from the host down to the array.

 

With full access to the Hitachi Data Systems support database, a random sample representing 150 million operating hours of Hitachi Virtual Storage Platform systems was gathered to better understand how, and if, we were delivering the quality our customers expect.

 

Here are the facts I gathered on Hitachi VSP and SVOS quality and resilience from my research:

 

In the same time period, over 500,000 Storage Virtualization Operating System (SVOS) predictive monitoring system information message (SIM) alerts were collected by HDS annually.  And more than 50% of HDS support calls were initiated by these SVOS predictive SIMs, where HDS customer support informed the system administrator ahead of time that service was needed, before the customer noticed any issues.

 

I followed up with Hitachi engineering to better understand what these SIM alerts were.  In short, SVOS predictive monitoring was built over time in partnership with our support organization.  It is the fruit of more than 28 years of experience in system engineering and support embedded in every Hitachi storage platform.  SVOS’s ongoing data collection and “phone home” reporting capability (aka SIM communication) maintain quality over time by monitoring over 450 hardware, software, environmental and data path metrics to proactively ensure the system is operating under ideal conditions.

 

Each day, SVOS predictive monitoring collects more than 6,000 system performance data points to track quality of service, enabling quick root cause analysis of any response time abnormality.  Annually, SVOS predictive monitoring analyzes over 100 billion drive information events to ensure data integrity and availability and to identify quality issues before they impact customers.
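As a rough illustration of how threshold-based predictive monitoring can raise an alert before users notice a problem, here is a minimal sketch; the metric names, thresholds and alert format are hypothetical and are not taken from SVOS.

```python
# Hypothetical sketch of threshold-based predictive monitoring.
# Metric names, thresholds and the alert format are illustrative only.
from dataclasses import dataclass

@dataclass
class Alert:
    metric: str
    value: float
    threshold: float

# Example warning thresholds (hypothetical values).
THRESHOLDS = {
    "read_latency_ms": 5.0,         # warn before latency becomes user-visible
    "cache_write_pending_pct": 70,  # high write-pending often precedes slowdowns
    "drive_media_errors": 50,       # rising media errors can precede a drive failure
}

def evaluate(sample: dict) -> list[Alert]:
    """Compare a sample of monitored metrics against warning thresholds."""
    return [Alert(m, sample[m], t) for m, t in THRESHOLDS.items()
            if sample.get(m, 0) > t]

sample = {"read_latency_ms": 2.1, "cache_write_pending_pct": 82, "drive_media_errors": 12}
for alert in evaluate(sample):
    # In a real system, this is where a "phone home" message would go to support.
    print(f"predictive alert: {alert.metric}={alert.value} exceeds {alert.threshold}")
```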

 

The simplest way to describe what all this predictive capability means in the data center is to look at a drive failure use case.  Over 9 out of 10 hard disk drive, solid-state drive (SSD) or flash module drive (FMD) failure incidents are triggered by SVOS predictive health insights, which elect to spare a device before it hard-crashes, avoiding a long rebuild and the accompanying performance degradation.
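To show the kind of decision being described, here is a minimal, hypothetical sketch of proactive sparing: copy a suspect drive's data to a spare while it is still readable rather than waiting for a hard failure and a long rebuild. The counters and thresholds are illustrative and are not SVOS internals.

```python
# Hypothetical sketch: decide whether to proactively spare a drive based on error trends.
# Counters and thresholds are illustrative, not actual SVOS logic.

def should_spare(reallocated_sectors: int, read_errors_last_24h: int) -> bool:
    """Flag a drive for proactive copy-to-spare before it fails outright."""
    return reallocated_sectors > 100 or read_errors_last_24h > 20

drives = {
    "drive-03": {"reallocated_sectors": 12, "read_errors_last_24h": 1},
    "drive-17": {"reallocated_sectors": 140, "read_errors_last_24h": 35},
}

for name, counters in drives.items():
    if should_spare(**counters):
        # Copying a still-readable drive avoids a full rebuild after a hard crash.
        print(f"{name}: start proactive copy to spare")
    else:
        print(f"{name}: healthy")
```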

 

This resilience, enabled by Hitachi SVOS, identifies quality abnormalities at their source before a bug affects a production application.  Note that SVOS cognitive insight extends beyond system components, storage devices and software/firmware: it also monitors interoperability between software functions across the logical and physical data path, out to the network and the host.

 

Drilling down on high-priority service requests helped me understand why application resilience can only be achieved with visibility of the entire stack from the host down.  Classic hardware failures represented about 12% of HDS service requests in that sample, while software/firmware bugs accounted for less than 5% of service calls.  To reduce data availability and quality of service risks, a predictive monitoring engine also needs visibility into configuration-related issues, which represented 19% of HDS service calls, and non-Hitachi data path issues, which represented 13%.

 

In this 150-million-operating-hour system population, Hitachi delivered 100% data availability for greater than 99.9% of Hitachi Virtual Storage Platform customers.  Average system uptime in the sample was greater than six 9s (99.9999%), and 100% of acute system incidents were caused by software, configuration, user or change management issues.  These acute incidents involved partial data access or quality of service problems; thanks to the Virtual Storage Platform's fault-tolerant architecture, zero hardware incidents caused a system-down outage.
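For scale, a back-of-the-envelope calculation (mine, not from the support database) shows how little cumulative downtime six 9s allows across a 150-million-operating-hour sample.

```python
# Back-of-the-envelope: cumulative downtime allowed by six 9s across the whole sample.
sample_hours = 150_000_000          # operating-hours in the quoted sample
availability = 0.999999             # six 9s

allowed_downtime_hours = sample_hours * (1 - availability)
print(f"{allowed_downtime_hours:.0f} hours of total downtime across the fleet")
# -> 150 hours spread over the entire 150M operating-hour population
```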

 

Which AFA vendor can you rely on to compete in a digital world and have peace of mind when you get back to your family at night?

 

Sincerely,

 

 

Patrick Allaire

 

1I had no chance; according to the UK's National Health Service, it's possible for a cold virus to survive outside the body for more than one week. Viruses last longer in indoor environments.

 

2Analyst research on data center outage costs, such as that from the Ponemon Institute, points to a median outage cost of $648,174.

Digital Transformation Reflections

 

The holidays were a welcome break from Silicon Valley’s hectic pace; I stepped away from technology marketing and enjoyed family time. I want to share some reflections drawn from publications, customer deployments and inquiries I received over the last few months.

 

I stumbled on some good reading in the CIO “Guide to Digital Transformation” article from Forbes, which HDS sponsored. The takeaway: IT executives pursuing closer alignment between IT and lines of business will appreciate its lessons from pioneers and its strawman plan.

 

Many IT executives recognized early that in a digital world, service level objectives and agreements need to be revised upward to the level of hyperscale data centers (e.g., Google, Facebook, AWS). The simplest justification I found for reframing the customer experience is the research below on how increased webpage load times affect sales, traffic and user satisfaction. An outage incident is simply a far more extreme version of the same impact on revenue, brand, liabilities and user satisfaction (see below).

 

[Figure: Milliseconds are money — impact of webpage load time on sales, traffic and user satisfaction]

While the traditional justification for a business continuity solution requires sizing the business impact against the risk and investment profile needed, I have found that IT professionals who are new to business continuity practice are often confused about risk and investment profiles.

 

For customers already engaged in their own digital transformation, a review of the latest 2017 Society for Information Management (SIM) IT Trends Comprehensive report was a good reminder that these initiatives are often more about data management and legacy system integration than about infrastructure. The survey supports this generalization: the most prevalent software development investments reported are in systems integration, legacy system improvements or customizations (see below).

 

[Figure: SIM 2017 survey — most prevalent software development investments]

So, does infrastructure still matter in digital transformation projects? Newer generations of mobile or internet of things (IoT) applications may be resilient to infrastructure failures, but many web services still require integration with the system of record when payment, authentication or database access is needed.

Judging from the 2017 SIM survey, business continuity rose to the #5 spot among the most important or worrisome IT management issues in 2016. It appears that these availability expectations are still top of mind…

 

[Figure: SIM 2017 survey — most worrisome IT management issues]

Even though many IT leaders report that the “availability/uptime” metric is still the #1 measurement of internal or outsourced IT performance, I suspect this issue's movement up and down the list is partly influenced by the amount of bad press coverage.

 

 

I am surprised that industry analysts are not pushing back harder on vendors’ uptime claims. Many AFA vendors claim six 9s (99.9999%) uptime, yet The Register keeps reporting “snafus” such as “HPE 3PAR storage SNAFU takes Australian Tax Office offline” and “XtremIO outages bork US hospital patient records system”.

 

Auditors should be wary of infrastructure deployments that hinge on a single access point after so many infamous airline outages in 2016, such as “Delta computer outage costs $100m”, “British Airways check-in system checks out: Staff flung back to cruel '90s world of paper” and “JetBlue blames Verizon after data center outage cripples flights”.

 

Bottom line: if a single system’s uptime specification is central to your project, you should be concerned, because this post provides evidence that the specification is often meaningless. Worse, if your architecture or vendor justification is based on single-system uptime, update your resume quickly, as you are about to make a career-limiting move…

 

An Intel presentation at the 2016 Storage Developer Conference broke down the root causes of outages in different environments.

 

[Figure: Intel RAS presentation — outage root-cause breakdown]

In traditional enterprise data centers, hardware faults are responsible for about 20% of outages, software errors cause 43%, and operator (configuration) faults cause the remaining 37% of service disruptions.

 

Compare these statistics with the fact that many AFA vendors’ uptime claims are tied solely to the hardware, cover only the last 12 months between two software releases, exclude any planned outage needed for an upgrade or a technology-refresh migration, ignore the surrounding network and server environment, and carry no commercial guarantee with financial penalties. Can you really rely on self-reported vendor uptime specifications?

 

[Figure: Lies, Damned Lies and Uptime Statistics]

Hitachi Data Systems does not publish a narrowly defined uptime specification; instead, we offer a 100% data availability guarantee. We also recognize that no vendor’s financial penalty can compensate for the business impact associated with a service disruption, which is why Hitachi Virtual Storage Platform (VSP) F series all-flash storage systems were designed with unified active-active capabilities.
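To illustrate the general active-active idea, here is a highly simplified sketch: a write is acknowledged only after every online site holds the data, so either site can continue serving I/O if the other fails. This is conceptual only and is not a description of Global-Active Device internals; real deployments also add a quorum to prevent split-brain, which this sketch omits.

```python
# Conceptual sketch of synchronous active-active writes: acknowledge only after
# every online site has accepted the data, so either site can serve I/O alone.
# This illustrates the general idea only, not Global-Active Device internals.

class Site:
    def __init__(self, name: str):
        self.name = name
        self.blocks: dict[int, bytes] = {}
        self.online = True

    def write(self, lba: int, data: bytes) -> bool:
        if not self.online:
            return False
        self.blocks[lba] = data
        return True

def mirrored_write(lba: int, data: bytes, site_a: Site, site_b: Site) -> bool:
    """Return True (ack to host) only when every online site holds the block."""
    results = [s.write(lba, data) for s in (site_a, site_b) if s.online]
    return bool(results) and all(results)

a, b = Site("datacenter-A"), Site("datacenter-B")
print(mirrored_write(0, b"payload", a, b))   # True: both copies exist
b.online = False                             # simulate a site outage
print(mirrored_write(1, b"payload", a, b))   # True: surviving site keeps serving writes
print(0 in a.blocks and 0 in b.blocks)       # earlier block is on both sites
```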

 

The growth in deployments of Hitachi’s high availability solutions confirms that real-world expectations for business continuity are on the rise. Our customers rely more and more on multi-data-center deployments to support their critical business. A few facts about our unified Global-Active Device solution, available for both block and file data: over the last two years, in a shrinking market, these active-active deployments have grown more than 40% quarter after quarter, across thousands of deployments supporting hundreds of petabytes. In any given quarter, up to one out of three systems is targeted for deployment in an active-active cluster…

 

As AFA deployment capacity increases, the risk to the business keeps growing, because more users and applications will be impacted by an outage. If IT agility matters to you, make sure your next AFA supports active-active deployment and that clustering can be enabled nondisruptively if it is needed in the future. In a digital world, the question is not IF but WHEN you will be asked to deliver the same class of service as Google, Facebook or AWS on your flash platform.

 

Happy new year.

 

Patrick Allaire

The TechTarget article “Storage managers seek storage answers with a storage purchase,” by Rich Castagna, got me thinking about the additional storage answers you might be looking for after making that purchase.  Without key storage insights, how do you know exactly how much new storage is required, or where best to apply new storage capacity and bandwidth?  By applying storage analytics, you can gain new insights into your operating environment to answer these questions and more, such as:

 

  • Do we have an inventory of all available storage resources?
  • What storage performance and capacity are critical business applications receiving?
  • How can we ensure our storage system performance is optimized?
  • How much capacity is needed to meet next year's storage requirements?

 


 

Storage analytics has come a long way toward efficiently analyzing environments to answer these common storage questions.  It starts with collecting key storage performance and health indicators from applications, host servers, SAN switches and storage systems.  Comprehensive storage analytics is not just about analyzing the networked storage system's performance; because these resources are interconnected, it needs to take a holistic view that includes every infrastructure resource on the application’s data path.
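One way to picture this holistic, data-path-wide collection is a single sample record that ties host, switch and array metrics to the application they serve; the field names below are illustrative and are not the schema of any particular analytics product.

```python
# Illustrative record tying metrics from each hop of an application's data path together.
# Field names are hypothetical, not the schema of any specific analytics product.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class DataPathSample:
    timestamp: datetime
    application: str
    host_cpu_pct: float           # from the host server
    switch_port_util_pct: float   # from the SAN switch
    array_read_latency_ms: float  # from the storage system

sample = DataPathSample(
    timestamp=datetime.now(timezone.utc),
    application="order-processing",
    host_cpu_pct=61.0,
    switch_port_util_pct=88.0,    # a congested switch port can masquerade as an array problem
    array_read_latency_ms=1.4,
)
print(sample)
```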

 

Depending on the size and complexity of the environment, scalability needs to be a central consideration as well.  Millions of statistical data points may need to be collected, stored and analyzed to properly monitor a large storage network, and the platform needs the built-in flexibility to expand as your data storage grows over time.

 

Storage analytics enables the monitoring of a wide range of metrics with key storage performance metrics including:

 

  • Response times or latencies for reads and writes
  • I/O rates (I/O per second) and I/O sizes
  • I/O types (random, sequential)
  • Data transfer rates or throughput
  • Capacity (used, allocated, free)
  • Queue depth
  • Cache hit rates

 

When analyzing application performance, certain storage performance metrics become more significant depending on the application’s specific workload.  For example, I/O rate is the most important metric for an online transaction processing application, where the goal is to increase the overall number of transactions processed.  In contrast, data transfer rate and cache hit rate are the most important metrics for a rich media application, where the goal is to maximize overall storage throughput.
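As a toy example of how the highest-priority metrics shift with workload (OLTP caring first about I/O rate, rich media about throughput), here is a small sketch; the mapping simply restates the examples above and is not a product feature.

```python
# Toy mapping from workload type to the metrics to examine first.
# A simplification of the examples in the text, not a product feature.
PRIORITY_METRICS = {
    "oltp":       ["iops", "read_write_latency_ms"],
    "rich_media": ["throughput_mbps", "cache_hit_rate_pct"],
}

def key_metrics(workload: str) -> list[str]:
    """Return the metrics to focus on first for a given workload type."""
    return PRIORITY_METRICS.get(workload, ["iops", "throughput_mbps", "read_write_latency_ms"])

print(key_metrics("oltp"))        # ['iops', 'read_write_latency_ms']
print(key_metrics("rich_media"))  # ['throughput_mbps', 'cache_hit_rate_pct']
```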

 

Storage analytics facilitates monitoring not just the current state of performance but, more importantly, key storage performance and health indicators for trends that develop over time.  Identifying historical trends can quickly highlight applications that would benefit from an all-flash performance boost, and it establishes performance baselines so you can easily compare current versus past performance.  By analyzing these trends, you can pinpoint likely bottlenecks, find applications not meeting their service level agreements, and alleviate resource conflicts that may be impeding storage system performance.
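To illustrate baselining and trend detection in the spirit of this paragraph, here is a minimal sketch that compares recent latency against a historical baseline and flags a sustained drift; the window sizes and the 25% tolerance are arbitrary illustrative choices, not product defaults.

```python
# Minimal sketch: compare recent latency against a historical baseline to spot drift.
# Window sizes and the 25% tolerance are arbitrary illustrative choices.
from statistics import mean

def latency_drift(history_ms: list[float], baseline_days: int = 30,
                  recent_days: int = 7, tolerance: float = 0.25) -> bool:
    """Return True if the recent average latency exceeds the baseline by the tolerance."""
    baseline = mean(history_ms[-(baseline_days + recent_days):-recent_days])
    recent = mean(history_ms[-recent_days:])
    return recent > baseline * (1 + tolerance)

# 30 days around 2 ms, then a week trending toward 3 ms.
daily_latency = [2.0] * 30 + [2.6, 2.7, 2.9, 3.0, 3.1, 3.1, 3.2]
print(latency_drift(daily_latency))  # True: a candidate for flash tiering or deeper diagnosis
```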

 

Hitachi Data Systems offers a complete storage analytics solution to help you answer these common storage questions for your environment.  Hitachi Infrastructure Analytics Advisor with Hitachi Data Center Analytics provides comprehensive storage analytics with integrated performance management and troubleshooting.  The foundation of the solution is a highly scalable performance data warehouse that analyzes and reports on large storage environments, even those spanning multiple data centers.  With new performance insights and trends, you can optimize end-to-end storage performance from application to shared storage resources while accurately planning future storage requirements.  When bottlenecks are identified, integrated diagnostic tools streamline troubleshooting and root cause analysis so performance issues can be resolved quickly.

 

The storage analytics solution covers the entire Hitachi Virtual Storage Platform family with Hitachi Storage Virtualization Operating System, including the latest VSP G1500 and VSP F1500 enterprise models.  A recent software release also adds new reporting capabilities for data reduction technologies, including deduplication and compression.  Beyond Hitachi storage systems, the analytics capabilities include multivendor storage system support for EMC, IBM and NetApp.

 

Your quest for storage answers is just beginning.  Take the next step with storage analytics to gain the complete performance picture so you can take full advantage of your new storage purchase.

 


 

Richard Jew's Blog