
Hitachi Storage Connector for VMware vCenter Orchestrator in Action

Blog Post created by Dang Luong on Apr 23, 2015

This blog is a follow-up to Discovering Hitachi Storage Connector for VMware vCenter Orchestrator, which introduced vCO Storage Connector. Now I will dig into the software and explain how it works. We will start by examining vCO Storage Connector’s prerequisites for enterprise arrays. Then I will demonstrate three basic workflows and one complex workflow.

 

Prerequisites for Enterprise Arrays

These are the prerequisites for vCO Storage Connector to manage enterprise arrays (e.g. HUS VM, VSP, VSP G1000).

 

  • Array administrator account
  • Command Device (CMD)
  • Command Control Interface (CCI) server
  • HORCM instance

 

A Command Device is a volume with the Command Device attribute enabled. It is mapped to a server that has the Command Control Interface (CCI) software installed. This software is available on TISC (for employees) and portal.hds.com (for customers and partners). Version 01-32-03/03 or higher is required. A HORCM instance is a daemon that listens on a UDP socket and allows applications, such as vCO Storage Connector, to interact with the array through the CMD.
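For example, once HORCM instance 1 is running on the CCI server, every CCI command selects it with the -IH1 switch and reaches the array through the CMD (the LDEV ID here is just an illustration):

    raidcom get ldev -ldev_id 0 -IH1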

 

In the example below, I will show how to fulfill these prerequisites for a HUS VM array. I am using Windows for the CCI server, but other OS types are supported.

 

  1. Create a CMD using Storage Navigator:
    1. Create a small volume (47MB for example).
    2. Go to Logical Devices > select the volume.
    3. Click More Actions > Edit Command Devices.
    4. Enable these attributes: Command Device, Command Device Security, and User Authentication.
      1.png 
    5. Map the CMD to the CCI server.
  2. Create an administrator account for vCO Storage Connector in Storage Navigator:
    1. Go to Administration > User Groups > Administrator User Group.
    2. Click Create User. Note: be careful with special characters in the password. I ran into a problem where vCO Storage Connector failed to register an array because the administrator’s password contained an exclamation mark.
      2.png 
  3. Download and install CCI. It is quite simple to install, and the user guides cover the process in detail, so I won’t provide additional information here.
  4. Configure HORCM instance on CCI server:
    1. Match the CMD volume to its Windows disk, using the inqraid tool that is part of the CCI kit. For example: “inqraid.exe -fxg -CLI $Phys” (a sample session appears after this list).
    2. Format the Windows disk:
      1. Server Manager > Disk Management > right-click the disk > Online.
      2. Right-click the disk again > Initialize Disk > select MBR > OK.
      3. Right-click the Unallocated bar > New Simple Volume.
      4. Create a new volume that spans the entire disk. Choose no drive letter or drive path, and do not format.
    3. Create the HORCM file (ex: C:\Windows\horcm1.conf). Change the serial number to match your array. See the screenshot below for an example; a minimal sample file also appears after this list.
      3.png 
    4. Confirm the HORCM instance is valid by starting it (“horcmstart 1”).
      4.png
    5. Try logging in with the administrator account created earlier.
      5.png
    6. Log out: “raidcom -logout -IH1”. The HORCM instance needs to stay active for vCO Storage Connector to work. The next step explains how to automatically start it as a Windows service.
  5. Configure the HORCM instance to start automatically as a Windows service:
    1. Copy the file C:\HORCM\Tool\HORCM0_run.txt to HORCM1_run.txt in the same directory (change the instance number to match yours).
    2. Make the following edits in the new file:
      1. Change all occurrences of “HORCMINST=0” to “HORCMINST=1” (or whatever instance number you are using).
      2. Under the START section, add “HORCM_EVERYCLI=1”.
      3. Delete all commented lines (those that start with #).
      4. The result should look like the screenshot below.
        6.png
    3. Register the HORCM instance as a Windows service: C:\HORCM\Tool\svcexe.exe /S=HORCM1 “/A=C:\HORCM\Tool\svcexe.exe”.
    4. Confirm the service shows up in Services. Change its Startup Type to “Automatic (Delayed Start)”.
      7.png
    5. Confirm the service works by starting it (or restarting it if it is already running).
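To tie steps 4 and 5 together, here is a minimal sketch, assuming instance number 1, UDP port 11001, array serial number 210609, and an administrator account named vcoadmin; substitute your own values. The HORCM file from step 4.3 can be as small as this:

    HORCM_MON
    #ip_address   service   poll(10ms)   timeout(10ms)
    localhost     11001     1000         3000

    HORCM_CMD
    #dev_name
    \\.\CMD-210609

And the corresponding console session for steps 4.1 and 4.4 through 5.4 looks roughly as follows (inqraid output trimmed; the -CM suffix in PRODUCT_ID marks the command device):

    C:\HORCM\etc>inqraid.exe -fxg -CLI $Phys
    DEVICE_FILE  PORT   SERIAL  LDEV  CTG  H/M/12  SSID  R:Group  PRODUCT_ID
    Harddisk1    CL1-A  210609    28    -       -  0004        -  OPEN-V-CM

    C:\HORCM\etc>horcmstart 1
    C:\HORCM\etc>raidcom -login vcoadmin mypassword -IH1
    C:\HORCM\etc>raidcom -logout -IH1

    C:\HORCM\Tool>svcexe.exe /S=HORCM1 “/A=C:\HORCM\Tool\svcexe.exe”
    C:\HORCM\Tool>sc config HORCM1 start= delayed-auto

The last line is an optional command-line alternative to setting “Automatic (Delayed Start)” in the Services console by hand.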

First Workflow - Add Enterprise Array

Now that we have a working CMD, we need to register the array in vCO. This is done with the Add Enterprise Array workflow, located under Hitachi > Storage > Block > Storage Configuration. Right-click the workflow > Start workflow.

 

The fields are pretty self-explanatory. The only one worth clarifying is the IP address field: this is the IP address of the array’s SVP, the same address you use to access Storage Navigator. A port number is not needed.
8.png 

If the registration completes successfully, you will see a green checkmark. If it fails, a red X is displayed; in that case, the Variables tab contains information to help you troubleshoot.

Basic Workflow - Create LU

For our first real workflow, let’s create a new volume. There are three workflows available to accomplish this. You can find them under Hitachi > Storage > Block > Provisioning. They are:

 

  1. Create LU in Dynamic Pool
  2. Create LU in Parity Group
  3. Create LU in RAID Group

 

The difference between workflows #2 and #3 is that the former is for enterprise arrays and the latter is for modular arrays (essentially the HUS family, as it is the only modular platform supported at this time). We will use workflow #1 to create a new volume in a pool.

 

I selected the HUS VM (serial number grayed out) as the target array, entered pool 1 as the target pool, and specified 77GB as the new volume’s size. The stripe size has to be 512KB for enterprise arrays.
9.png 
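As a point of reference, the rough CCI equivalent of this workflow is a single raidcom call. This is a sketch, not the plug-in’s actual implementation; pool 1 and 77g match the example above, and -ldev_id auto requires a reasonably recent CCI version:

    raidcom add ldev -pool 1 -capacity 77g -ldev_id auto -IH1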

If the workflow executes successfully, it will return the ID of the new volume. You can locate it under the Variables tab > LunNumber. Note that this value is in decimal: all LUN number input and output fields in vCO Storage Connector workflows are in decimal, whereas other tools, such as Storage Navigator and Hitachi Command Suite, display LUN numbers in hexadecimal.
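If you need to convert between the two on the CCI server, cmd.exe’s set /a accepts 0x literals, so hex 012C, for example, comes back as decimal 300:

    C:\>set /a 0x12C
    300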

Basic Workflow - Present LU

Next, we will use the Present LU workflow to map the new volume to a host. This workflow adds one volume to one host group at a time, so if a host is connected to multiple storage ports, you will have to run it once for each port. Also note the expected format of the port: CL#-# (for example, CL1-A).
10.png 
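For readers who know CCI, the mapping this workflow performs corresponds roughly to one raidcom call per port; the port, host group number, and LDEV ID below are made-up values:

    raidcom add lun -port CL1-A-5 -ldev_id 300 -IH1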

Complex Workflow - Back up ESXi Datastore

This last example demonstrates a complex workflow, which is actually composed of smaller, more basic workflows. As its name implies, Back up ESXi Datastore (under Hitachi > Sample Workflows) creates a backup snapshot of a datastore. The following diagram is a schema of this workflow. It illustrates how smaller workflows can be used to orchestrate a larger, more complex task.

11.png

At runtime, the workflow prompts the user for a datastore name. The first step converts the datastore name to a volume ID. vCO then connects to the target array, creates a virtual volume, maps the virtual volume to the same host group as the datastore, creates a snapshot of the datastore, and suspends the snapshot.
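The plug-in drives these array operations itself, but the snapshot portion corresponds roughly to the following Thin Image calls in CCI terms; the LDEV IDs, pool, and group name below are hypothetical:

    raidcom add snapshot -ldev_id 300 1024 -pool 2 -snapshotgroup VCO_BKP -IH1
    raidcom modify snapshot -ldev_id 300 -snapshot_data create -IH1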

 

While simple on the surface, this workflow does require some additional information from the vCO administrator ahead of time: specifically, the array credentials, IP address, a HORCM instance, and a snapshot pool.

12.png

Once the workflow completes, it will tell you whether the task was successful. If it was able to complete the entire sequence, a volume ID for the snapshot will be provided (SVOL field below).
13.png 

Now if you go to vCenter and run Rescan All, this snapshot will be listed as a device. Then it is just a matter of executing Add Storage: vCenter will detect the duplicated VMFS label and ask whether you want to assign a new signature.
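If you prefer the command line for this part, the same rescan and resignature steps can be run from the ESXi shell; the volume label below stands in for whatever your original datastore is named:

    esxcli storage core adapter rescan --all
    esxcli storage vmfs snapshot list
    esxcli storage vmfs snapshot resignature -l "datastore1"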

Final Thoughts

The four workflows covered here are just a portion of the many workflows available. There is a whole set of workflows for the modular series and another for the HNAS platform. Once you combine these workflows with the existing capabilities of vCenter Orchestrator, there isn’t much in the VMware environment that cannot be automated. And of course we can expect even more functionality in future releases of vCO Storage Connector.
