Paul Morrissey

Hitachi and VMware Virtual Volumes - Part 1

Blog Post created by Paul Morrissey on Mar 16, 2015

This is the first part in a series of blog posts where I'll address the many questions that have arisen with respect to Virtual Volumes and the Hitachi implementation delivering VVol and the SPBM framework. I'll run through a series of questions and provide answers over the blog series, so let's take it away.


  • What is Virtual Volumes (VVol)?

Virtual Volumes is based on a new integration and management framework between vSphere and the storage array. Virtual Volumes virtualizes SAN and NAS devices by abstracting physical hardware resources into logical pools of capacity (called a Virtual or VVol Datastore) that can be consumed and configured more flexibly, spanning a portion of one array or several storage arrays. It implements an out-of-band, bidirectional control path through the vSphere APIs for Storage Awareness (VASA) and leverages unmodified standard data transport protocols for the data path (e.g. NFS, iSCSI, Fibre Channel). On the array side, two new components are added to the storage environment: the "VASA Provider" for integration with the VASA APIs and the "Protocol Endpoint" (PE) that allows the storage controller to direct IOs to the right Virtual Volumes. On vSphere, there are three dependent features: VASA, Virtual Volumes and SPBM. In order to create policies at the vSphere level, a set of published capabilities must first be defined in the storage array. Once these capabilities are defined, they are surfaced up to vSphere through the storage vendor's VASA provider.
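
To make the control-path/data-path split concrete, here is a minimal, purely illustrative Python sketch; the class and method names below are hypothetical and are not the actual VASA API:

    # Illustrative only: hypothetical names, not the real VASA API.
    class VasaProvider:
        """Out-of-band control path: vSphere asks the array to act."""
        def __init__(self, array):
            self.array = array

        def query_capabilities(self):
            # Capabilities defined on the array are surfaced to vSphere
            # and become the building blocks for SPBM policies.
            return self.array.published_capabilities

        def create_virtual_volume(self, size_gb, policy):
            # Provisioning happens natively on the array; no data-path
            # traffic is involved in this control operation.
            return self.array.provision(size_gb, policy)

    class ProtocolEndpoint:
        """In-band data path: IO flows over standard protocols."""
        def route_io(self, vvol_id, io_request):
            # The PE directs each IO to the right Virtual Volume.
            pass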


The Virtual Datastore (sometimes referred to as a VVol Datastore) defines capacity boundaries and access logic, and exposes a set of data services accessible to the VMs provisioned in the pool. Virtual Datastores are purely logical constructs that can be configured on the fly, when needed, without disruption, and don't need to be formatted with a file system.
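
As a mental model, a Virtual Datastore is little more than a capacity boundary plus a set of advertised capabilities. A hypothetical Python sketch (all names invented for illustration):

    from dataclasses import dataclass, field

    @dataclass
    class VirtualDatastore:
        # Hypothetical model of a storage container: a logical capacity
        # boundary with advertised capabilities, not a formatted file system.
        name: str
        capacity_gb: int
        used_gb: int = 0
        capabilities: set = field(default_factory=set)  # e.g. {"snapshot-offload"}

        def can_provision(self, size_gb: int) -> bool:
            # The only hard limit is the logical capacity boundary.
            return self.used_gb + size_gb <= self.capacity_gb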


Virtual Volumes defines a new virtual disk container (the Virtual Volume) that is independent of the underlying physical storage representation, allowing for finer control. In other terms, with Virtual Volumes the virtual disk becomes the primary unit of data management at the array level. This turns the Virtual Datastore into a VM-centric pool of capacity. It becomes possible to execute storage operations with VM granularity and to provision native array-based data services to individual VMs. This allows admins to provide the right storage service levels to each individual VM.
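
The practical payoff of VM granularity: array-side operations can target just the VVols belonging to one VM instead of an entire LUN shared by many VMs. A hypothetical sketch:

    # Hypothetical illustration of a VM-granular array offload: snapshot
    # only the VVols belonging to one VM, not a whole shared LUN.
    def snapshot_vm(array, vm):
        for vvol in vm.virtual_volumes:   # typically one VVol per virtual disk
            array.snapshot(vvol.id)       # native array-side snapshot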

 

  • What has been HDS's involvement with Virtual Volumes (VVol)?

Virtual Volumes (VVol) is a storage management framework devised by VMware. HDS has worked with VMware as a consultative partner over the last 2.5 years to deliver the VVol implementation on both block and file storage. Here is a link to the VMware PR on the general availability of VVol with vSphere 6 referencing Hitachi Data Systems.


  • What is the HDS value proposition for the Virtual Volumes management framework?

Below is the general value proposition for Virtual Volumes:

    1. Simplifying Storage Operations
      1. Separation of consumption and provisioning
      2. End-to-end visibility
    2. Simplifying Delivery of Storage Service Levels
      1. Finer control, dynamic adjustment
      2. Improved resource utilization
      3. Maximized storage offload potential

 

HDS's focus is to provide an enterprise-level, trusted, reliable, zero-worry implementation while empowering the storage administration team with rich SPBM storage capabilities control. We are leveraging the VVol/SPBM framework to further elevate the rich data services that we bring to a VMware environment (such as Global-Active Device (GAD) and Hitachi V2I), plus the efficient data movement and snapshot/cloning offload technologies.

[Figure: VVol architecture and connectivity]

Running Virtual Volumes on Hitachi will bring a robust, reliable, enterprise-grade journey to the software-defined, policy-controlled data center.


  • What are the key components of VVol enablement with HDS?

VASA Provider: The Hitachi VASA Provider sets up a management communication path between vCenter (VC) and the storage platform(s). It operates as one or more virtual appliances in the environment. It translates VC management operations such as Create VVol and Snapshot VVol into HDS-specific calls or offload operations. It also provides the interface to share storage capabilities for storage containers between the storage platforms and VC.
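
Conceptually, the VASA Provider acts as a translator. A hypothetical dispatch sketch (operation and method names are invented for illustration; the point is that a generic vCenter operation maps to a platform-specific call or offload):

    # Hypothetical: maps generic operations arriving from vCenter to
    # platform-specific HDS calls or offload operations.
    HDS_DISPATCH = {
        "createVirtualVolume":   lambda array, req: array.create_volume(req),
        "snapshotVirtualVolume": lambda array, req: array.offload_snapshot(req),
        "cloneVirtualVolume":    lambda array, req: array.offload_clone(req),
    }

    def handle_request(array, op, req):
        # The VASA Provider's core job: translate, then delegate to the array.
        return HDS_DISPATCH[op](array, req)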

Protocol Endpoints (PE): A protocol endpoint provides the I/O data path connectivity between ESXi hosts and Hitachi storage arrays. Protocol Endpoints are compliant with both FC (Virtual LUN) and NFS (mount point). Multi-pathing occurs against the PEs. In the initial release, HDS VVol will support FC and NFS PEs, with support for iSCSI and FCoE following in subsequent releases.
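
On a vSphere 6 ESXi host you can verify which PEs are visible through the esxcli "storage vvol" namespace; a small sketch using ESXi's bundled Python (the namespace exists in vSphere 6, though output fields may vary by release):

    import subprocess

    # List the protocol endpoints this ESXi host has discovered.
    output = subprocess.check_output(
        ["esxcli", "storage", "vvol", "protocolendpoint", "list"],
        universal_newlines=True,
    )
    print(output)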

WebUI: Tailored interfaces for the VM admin to manage the VP virtual appliances and entry-level SPBM.

Hitachi Command Suite (HCS): Enterprise-level interface extension for storage admins to manage Storage Containers and Storage Capabilities.


  • How does a Protocol Endpoint (PE) function?

A PE represents the data path or IO access point for a Virtual Volume. All I/O flows through one or more PEs. Hosts discover SCSI PEs just as they discover today's LUNs; NFS mount points are automatically configured. When a Virtual Volume is created, it is not immediately accessible for IO. To access a Virtual Volume, vSphere needs to issue a "Bind" operation to the VASA Provider (VP), which creates an IO access point for the Virtual Volume on a PE chosen by the VP. A single PE can be an IO access point for multiple Virtual Volumes, and PEs can be visible to all ESXi hosts. An "Unbind" operation removes this IO access point for a given Virtual Volume. vCenter is informed about PEs automatically through the VASA Provider.
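
A hypothetical sketch of the bind/unbind flow described above (function names invented; in reality vSphere drives this through the VASA API):

    # Hypothetical sketch of the bind flow; names are invented.
    def bind(vasa_provider, vvol_id):
        # vSphere asks the VASA Provider for an IO access point; the VP
        # picks the PE and returns an identifier for the VVol behind it.
        pe, secondary_id = vasa_provider.bind_virtual_volume(vvol_id)
        return pe, secondary_id

    def unbind(vasa_provider, vvol_id):
        # Removes the IO access point; the VVol itself remains on the array.
        vasa_provider.unbind_virtual_volume(vvol_id)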


  • What are the performance and scalability limits of the protocol endpoint? How will this affect VM density?

Protocol endpoints are the access points for I/O. The VVol architecture is designed so that the PE does not become a bottleneck. Some concern has been raised regarding queue depths and VVols: traditional LUNs and volumes typically do not have very large queue depths, so if there are a lot of VVols bound to a PE, doesn't this impact performance? This is addressed in a number of ways. First, customers are free to choose any number of PEs to bind their VVols to (i.e. they have full control over the number of PEs deployed, which could be very many). Secondly, VMware allows a greater queue depth for PE LUNs to accommodate a possibly greater I/O density. However, considering that we already provide a choice regarding the number of PEs per storage container, and the storage container size, this increased queue depth may not be relevant in many situations. We don't expect more than a single-digit number of PEs to be deployed.
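
Some back-of-the-envelope arithmetic shows why PE count and queue depth interact (the numbers below are illustrative assumptions, not Hitachi or VMware specifications):

    # Illustrative numbers only, not Hitachi or VMware specifications.
    pe_queue_depth = 128          # assumed per-PE device queue depth
    vvols_bound_to_pe = 400       # VVols sharing that one PE
    print(pe_queue_depth / vvols_bound_to_pe)   # 0.32 outstanding IOs per VVol

    # Deploying more PEs (or a deeper PE queue) multiplies the aggregate depth:
    num_pes = 4
    print(pe_queue_depth * num_pes)             # 512 aggregate queue depth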

[Update] The Hitachi implementation with the G1000 will allow environments to configure one PE without incurring any processor or queue-depth bottleneck. We take advantage of a protocol ASIC chip to distinguish I/Os and distribute them to the VVols directly (and not to the PE LUN), in conjunction with PE multi-port allocation across some or all ports. Pretty clever.


  • Are VVols a replacement for the VMDK, or are there separate VVols inside a VMDK? And since the LUN is specific to the guest, does this create more work for customers?

From the VI admin perspective, VVols can be considered to have a 1-to-1 relationship with VMDKs. The VI admin still manages VMs, and the storage admin manages storage containers with increased visibility into the VVols consuming the storage container capacity. Internally, we use a File or DP-Vol association to a VVol/VMDK depending on the Hitachi storage platform. It actually creates less work: one-time multi-pathing for the PE, and efficient provisioning with the SPBM framework to take away the guesswork.
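
A tiny hypothetical illustration of that mapping: each VMDK has a 1-to-1 VVol behind it, and the VVol's internal backing object differs by platform (the entries below are invented examples):

    # Invented example entries illustrating the 1:1 VMDK-to-VVol mapping
    # and the platform-specific backing object.
    vmdk_to_vvol = {
        "vm01.vmdk":   {"vvol": "vvol-0017", "backing": "DP-Vol (block platform)"},
        "vm01_1.vmdk": {"vvol": "vvol-0018", "backing": "File (NFS platform)"},
    }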

[Figure: VVol granularity]


 

  • Can I use legacy/traditional datastores along with Virtual Volumes datastores?

        Yes, you can provision legacy/traditional datastores (VMFS, NFS) alongside Virtual Volumes datastores on the same array with vSphere 6.
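
One way to see both kinds side by side is to list datastore types through pyVmomi (the hostname and credentials below are placeholders; in vSphere 6 the datastore summary.type reports VMFS, NFS, or VVOL):

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Placeholder connection details; skip cert verification in a lab only.
    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="secret", sslContext=ctx)
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        print(ds.name, ds.summary.type)   # e.g. "VMFS", "NFS", "VVOL"
    view.Destroy()
    Disconnect(si)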

 

 

More Q&A-style posts will come shortly in parts 2 and 3, where we will zero in on the important aspects of SPBM, storage containers, storage capabilities and other interesting questions that have been raised.
