Sam Walker

Experience with VMware VVols

Blog Post created by Sam Walker on Jul 27, 2015

Overview

 

I have been involved with a customer who has expressed an interest in VMware VVols – a new vSphere 6 storage technology.  We have put together a POC environment for them, which I thought I would share…

 

First of all: what are VVols?  In my opinion, VVols address two main areas:

 

Storage Policy Based Management (SPBM) – the ability for the storage administrator to present out storage with specific characteristics (such as performance, disk speed, RAID type, recoverability, latency, etc.).  These can in turn be consumed by the VMware administrator based on the application requirements, at a VM or vDisk level.  An example I used: an Oracle or SQL DB server.  The OS may have one kind of requirement – tier 2 IOPS, RAID 5 or 6.  The DB and logs may have another kind of requirement – tier 1 (SSD or FMD disks), RAID 10.  There may be a third drive for backups; these need to be on tier 3 IOPS (NL-SAS), not be replicated, have as wide a RAID stripe as possible, etc.  Although this was possible before with separate VMFS datastores on different pools on the array, it was difficult to manage, required a strong naming convention and was error prone.  Now, you define performance requirements on a policy basis (see the PowerCLI sketch after these two points).

No more LUNs, no more VMFS datastores!  Although the marketing material I have read doesn't focus too heavily on this, it’s very important.  The old model of presenting out a number of uniform LUNs (2TB, for example) to an ESXi cluster, on which VMs are positioned (with their page files, config files, VMDK files, etc., all in the same datastore), was based on old SCSI architecture.  With the new model, each of these objects has direct access to the storage, and the storage has its own LDEV (or volume, depending on terminology) for each disk, config partition, swap file and snapshot.  These are called VVols.  They are not accessed by the cluster through a LUN in the traditional sense, but are presented through a Protocol Endpoint.
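To make the SPBM example above concrete, here is a minimal PowerCLI sketch of assigning different policies to the OS and DB disks of such a server.  PowerCLI 6.0 introduced the Spbm* cmdlets; the vCenter address, VM name, disk names and policy names below are all hypothetical and would need to match your environment:

    Connect-VIServer vcenter.example.local

    # Hypothetical DB server with an OS disk and a DB/log disk
    $vm     = Get-VM -Name "ORA-DB01"
    $osDisk = Get-HardDisk -VM $vm -Name "Hard disk 1"
    $dbDisk = Get-HardDisk -VM $vm -Name "Hard disk 2"

    # OS disk gets a tier 2 policy; the DB/log disk gets a tier 1 policy
    Get-SpbmEntityConfiguration -HardDisk $osDisk |
        Set-SpbmEntityConfiguration -StoragePolicy (Get-SpbmStoragePolicy -Name "Tier2-RAID6")
    Get-SpbmEntityConfiguration -HardDisk $dbDisk |
        Set-SpbmEntityConfiguration -StoragePolicy (Get-SpbmStoragePolicy -Name "Tier1-RAID10")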

 

The Protocol Endpoint is an LDEV presented out from the array to each host on LUN ID 256 (remember, traditional LUNs are numbered from 0-255).  The array needs to be on a specific code level (and with Hitachi storage, the version of Hitachi Command Suite (HCS) needs to be new enough too).  The LUN type presented from the Hitachi storage array is named an ALU (Access Logical Unit).  How this looks in VMware:

[Image: 1.png]

 

ALL access to VVols is through the Protocol Endpoint; there is no direct access to the VVol LDEVs.
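You can verify that a host has discovered the Protocol Endpoint from PowerCLI via its esxcli wrapper.  A minimal sketch – the host name is hypothetical:

    $esxcli = Get-EsxCli -VMHost "esx01.example.local"

    # Lists the Protocol Endpoints this host has discovered;
    # the ALU should appear here on LUN ID 256
    $esxcli.storage.vvol.protocolendpoint.list()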


How is this done?

In addition to the Protocol Endpoint, there is a requirement for a VASA Provider (also called a Storage Provider, or VP-b / VP-f for block or file, depending on who you’re talking to).  This is the technology piece that sits between VMware and the storage.  In previous versions of the VASA Provider, its function was to pull storage capability and performance information from the array, which the VMware administrator could then use when consuming the storage’s resources.  With this latest version, the VASA Provider also acts on the VMware cluster’s behalf to perform storage-related tasks (such as provisioning LDEVs, taking Thin Image (TI) snapshots, etc.).

 

Also, in a Hitachi context, the VASA Provider has changed from a Windows appliance configured through PowerShell to a Debian VM deployed through an OVA.  How this looks:

[Image: 2.png]

Once the appliance has been given an IP address, log on through https://vasaIP:50001/VasaProviderWebUI and add the array:

[Image: 3.png]

From the VMware perspective, this is added into vCenter in the same way as before (using the address https://vasaIP:50001/VasaProvider/version.xml).

[Image: 4.png]
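The registration can also be scripted.  A minimal sketch using the New-VasaProvider cmdlet from PowerCLI 6.0’s storage module – the provider name and credentials are placeholders:

    # Register the Hitachi VASA Provider with vCenter
    New-VasaProvider -Name "HitachiVP" -Url "https://vasaIP:50001/VasaProvider/version.xml" -Username "admin" -Password "password"

    # Confirm it has registered and is online
    Get-VasaProvider -Name "HitachiVP"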

This allows VMware admins to:

 

• Create a VVol Datastore (not a datastore per se, but a reference to the storage container defined at the storage array; a scripted sketch follows these two points):

[Image: 5.png]

 

• Define Storage Profiles based on what the application requires (also covered in the sketch below):

[Image: 6.png]
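As a scripted counterpart to the two steps above, here is a hedged PowerCLI sketch.  The datastore name, host name and storage container ID are placeholders, and the capability name is purely illustrative – the real names come from whatever your array’s VASA Provider advertises (Get-SpbmCapability lists them):

    # 1. Mount the VVol datastore (a pointer to the array's storage container)
    $scId = "vvol:xxxxxxxx-xxxxxxxx"   # container UUID from the array / VASA Provider
    New-Datastore -Vvol -Name "HDS-VVol-DS" -VMHost (Get-VMHost "esx01.example.local") -ScId $scId

    # 2. Build a storage policy from a capability the array advertises
    $rule    = New-SpbmRule -Capability (Get-SpbmCapability -Name "com.example.tier") -Value "Tier1"
    $ruleSet = New-SpbmRuleSet -AllOfRules $rule
    New-SpbmStoragePolicy -Name "Tier1-RAID10" -Description "DB and log disks" -AnyOfRuleSets $ruleSet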

 

How does this translate into the real world?

The crux of the VVol conversation is this: “why do we care?” (or “why should we care?”).  Simply put, the traditional VMFS model wasn't built for today’s virtualised data centres: LUN sizes have grown beyond easily administrable levels, it’s hard to give granular performance characteristics to individual disks or VMs, provisioning cycles are long and over-complex, and so on.  The introduction of SPBM, decoupling storage consumption from those arduous provisioning cycles, should be very interesting to both storage admins and VMware admins.  Also, being able to offload even more to the array (such as snapshot capability) can’t go unnoticed as a feature.

 

Caveats & requirements

Something that cannot be forgotten…  This is a new technology – not all functionality is available (yet).  There is a VMware article which lists some fundamental VMware technologies as “not interoperable”: features such as vCloud Air, NSX 6.*, SRM 5.* to 6.0.*, vROps 6.0.*, FT, MS Failover Clustering and, most restrictive of all, array-based replication.

See http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2112039 for more information.

 

First of all, the key word in that list is ‘yet’ – these features are ‘not interoperable yet’.  In my opinion, support for the necessary functionality will come with development in the VASA Provider.  Customers looking into VVols at the minute are the early adopters, testers, and those developing a deployment roadmap for the next 12-18 months.

 

One gap that HDS might be able to plug is array-based replication.  Although HDS cannot offer VVol replication yet (that is a VMware limitation), with Global Active Device (GAD), two arrays can serve a single LDEV which is read-writable from both arrays / sites.  A GAD-protected VVol device is something that I believe is in the pipeline: https://www.hds.com/products/storage-software/global-active-device.html?WT.ac=us_mg_pro_gad. OK, HDS plug over!

 

On to what you’ll need to get a VVol demo running:

• A supported storage array (such as the Hitachi HNAS – the VSP G1000 I did this demo on was running test code) and management software (a suitable version of Hitachi Command Suite if using Hitachi storage).

• VMware 6 (vCenter and ESXi both on version 6 – no interoperability with previous versions of either).

• A VASA Provider / Storage Provider that offers VVol support.

• Supported hosts that will run ESXi 6.

• HBAs with the Secondary LUN ID feature – this is important; older HBAs will not have it.  To check, visit www.vmware.com/go/hcl and search for your HBA:

[Image: 7.png]

• From the above, you can see a particular version of firmware and driver – make sure you are on both (and as a hint, the default driver shipped with ESXi won’t be recent enough!).  A quick way to check what a host is actually running is shown in the sketch below.
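A hedged PowerCLI sketch for checking which driver each HBA is bound to and which driver version is installed – the host name is hypothetical, and lpfc / qlnativefc are just two common FC driver names to filter on:

    $esxcli = Get-EsxCli -VMHost "esx01.example.local"

    # Which driver each storage adapter is bound to
    $esxcli.storage.core.adapter.list()

    # Installed driver VIB versions, to compare against the HCL entry
    $esxcli.software.vib.list() | Where-Object { $_.Name -match "lpfc|qlnativefc" }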

Outcomes