Jeff Chen

Hitachi VSP F/G meets Docker

Blog Post created by Jeff Chen on Nov 14, 2017

Docker containers are gaining popularity and Docker adoption is on the rise. As demand for the Docker platform increases, Hitachi is helping customers build better and more reliable Docker infrastructure. In this article, I would like to introduce the Hitachi Storage Plug-in for Containers (HSPC). An overview of HSPC can also be found at the link below.

Hitachi Storage Plugin For Containers

 

The HSPC Quick Reference Guide can be found below.

Containers - Hitachi Vantara Knowledge

 

Stateful Persistent Storage for Containers

HSPC enables Docker container applications to access persistent storage backed by the industry-leading Hitachi Virtual Storage Platform (VSP) series. Some of the benefits that HSPC provides are:

 

  • Automatic deployment of reliable persistent container volumes presented by Hitachi VSP G/F series storage
  • Storage-based container volume cloning
  • Enhanced container data protection through storage-based volume snapshot and restore

 

The figure below shows how the HSPC volumes are presented to the containers.

 

hspc1.png

  • In the example above, two servers are in a Docker cluster (swarm or Kubernetes).
  • When a Docker volume is created by HSPC, a virtual volume (LUN) is created in the storage and presented to all the hosts in the cluster (see the example after this list).
    • In this case, 3 HSPC volumes are presented to each server.
  • The container orchestration layer (swarm or Kubernetes) selects which container runs on which server.
    • In this case, one container on Server1 is using HSPC volume #2, and the other two HSPC volumes are attached to two containers on Server2.
  • Presenting the HSPC volumes to all servers in the cluster is what enables the container failover scenario.
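
For reference, an HSPC volume can also be created directly through the standard Docker volume API rather than only at service creation time. A minimal sketch, assuming the same hspc driver and size option shown later in this article (the volume name demoVol1 is just an example):

# docker volume create --driver hspc --opt size=130m demoVol1
# docker volume inspect demoVol1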

 

Planned Failover + Snapshot Validation

In this article, I would like to show you an example of snapshot and restore of an HSPC Docker volume, and a planned/maintenance container failover in a Docker swarm environment. This is just to show some of the useful capabilities of HSPC, and might not reflect a real use case. The following steps show the high-level procedure.

 

  1. Create a 3-node Docker swarm cluster. Node names: manager1, worker1, and worker2
  2. Create a Docker swarm container with an HSPC volume on the worker1 node (Docker swarm selected this node automatically in this particular case)
  3. From the container, write some data to the attached HSPC volume
  4. Take a storage-based snapshot of the HSPC volume
  5. Perform a Docker swarm maintenance failover
  6. Docker swarm brings up the container with the HSPC volume on the worker2 node
  7. Perform a storage-based restore from the snapshot taken on the worker1 node

 

The table below lists the main hardware components used.

Hardware Component | Quantity | Description
Hitachi VSP G600 | 1 | Micro-code: 83-04-64-40; 2 x FC ports used; 1 x HDP pool for volume creation; 1 x HTI pool for snapshots
Server | 3 | Intel Xeon CPU E5-2680 v4, 2 sockets; 256 GB RAM; 1 x Emulex LPe12000 HBA, 2 ports

 

The table below shows the main software components used.

Software Component | Version
Hitachi Storage Plug-in for Containers (HSPC) | 1.0.0
Red Hat Enterprise Linux | 7.4
Docker | 17.05.0-ce

 

The figure below shows the lab setup used for this validation.

 

Validation Details

1. Create a 3-node Docker swarm cluster. Node names: manager1, worker1, and worker2

Follow the link here to create a 3-node Docker swarm cluster.
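
If the linked guide is unavailable, the cluster can be built with the standard swarm commands. A minimal sketch (the IP address and join token are placeholders): on manager1, initialize the swarm with

# docker swarm init --advertise-addr <manager1-IP>

then run the join command it prints on worker1 and worker2:

# docker swarm join --token <worker-token> <manager1-IP>:2377

Back on manager1, docker node ls lists all three nodes.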

The following shows the status of the swarm nodes:

cmd1-1.png

 

2. Create a Docker swarm container with an HSPC volume

Use the following command to create a Docker service with an HSPC volume. In this example, the new Docker service service3 with HSPC volume serviceVol3 was deployed to the worker1 node.

 

# docker service create --name service3 --mount type=volume,source=serviceVol3,destination=/mnt,volume-driver=hspc,volume-opt=size=130m -d nginx

 

Use the docker service ps service3 command to find out to which node it was deployed:
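
# docker service ps service3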

cmd2-2.png

 

3. From the container, write some data to the attached HSPC volume

For validation purposes only, we manually go to the worker1 node, enter the container service3.1, and write some data to the HSPC volume with the following command. (Normally, a custom container application would write its data into the HSPC persistent volume.)

 

# docker exec -it <containerID> /bin/bash -c 'echo before-snapshot-worker1 > /mnt/test.txt'

cmd3-1.png

As shown above, the text "before-snapshot-worker1" was written into a text file on the HSPC volume.
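
For reference, the container ID can be looked up and the write verified with the standard Docker commands below (a sketch; the actual container ID is environment-specific):

# docker ps --filter name=service3
# docker exec -it <containerID> cat /mnt/test.txt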

 

4. Take a storage-based snapshot of the HSPC volume

Use the following command to take a storage-based snapshot of the HSPC volume. In a Docker swarm environment, the snapshot can be taken from any swarm node.

 

# /opt/hitachi/hspc/hctl snapshot create -sourceVolName serviceVol3

 

You can validate the snapshot creation with the docker volume inspect command. Each snapshot has a unique ID called the MU (Mirror Unit) number, which will be used for the restore operation later.
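
# docker volume inspect serviceVol3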

cmd4-2.png

 

5. Perform a Docker swarm maintenance failover

For server maintenance, use the following command to drain a node, failing over its running container(s) to other nodes in the cluster.

 

# docker node update --availability drain worker1

 

In this example, the worker1 node was put into maintenance (drain) mode and the container service3 failed over to the worker2 node.

 

cmd5-2.png
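
Once the maintenance on worker1 is complete, the node can be returned to the active pool with the command below. Note that Docker swarm does not automatically move running services back to a reactivated node.

# docker node update --availability active worker1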

 

6. Docker swarm brings up the container with the HSPC volume on the worker2 node

Let's validate the data consistency after the failover. For this, we manually go to the worker2 node and read the content of the text file that we wrote on the worker1 node. As shown below, the text "before-snapshot-worker1" appears as expected.

cmd6-2-1.png

 

7. Perform a storage-based restore from the snapshot taken on the worker1 node

Before restoring the previous data from the snapshot, let's update some data on the HSPC volume. The following shows that we updated the content of the text file from "before-snapshot-worker1" to "after-snapshot-by-worker2".
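
The update uses the same container-exec pattern as in step 3; a sketch (the container ID on worker2 is environment-specific):

# docker exec -it <containerID> /bin/bash -c 'echo after-snapshot-by-worker2 > /mnt/test.txt'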

cmd6-2-2.png

Use the command below to restore an HSPC volume from a snapshot, identified by its MU number. Restoring a previous snapshot requires that no server host is accessing the volume, so we need to bring down the container for a short time.
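
A sketch of taking the service down before the restore (the service is recreated right after the restore below):

# docker service rm service3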

 

# /opt/hitachi/hspc/hctl snapshot restore -sourceVolName serviceVol3 -mu 3

 

We can validate the status of the snapshot restore by checking that the SplitTime was updated, as shown below.

cmd7-2.png

Let's bring up the container with the HSPC volume after the restore, using the command below.

 

# docker service create --name service3 --mount type=volume,source=serviceVol3,destination=/mnt -d nginx

 

The container service3 is up again on the worker2 node.

cmd8-2.png

Let's check the contents of the HSPC volume after the restore.

cmd9-2.png

The content of the text file was successfully restored from "after-snapshot-by-worker2" back to the snapshot contents, "before-snapshot-worker1".

 

 

Helpful Tips and Configurations

In this section, I would like to share some tips and configurations that might be helpful when setting up an HSPC environment.

 

HSPC Configuration File

During the HSPC installation, you will need to enter the storage information into the configuration file below. If you are using multiple storage ports to set up multipathing, you can enter multiple storage port entries as shown here.

 

/opt/hitachi/hspc/config.json

{
    "base": {
        "serialNumber": 440138,
        "ip": "172.17.42.186",
        "user": "administrator",
        "password": "xxxxxxxx"
    },
    "options": {
        "poolId": 13,
        "snapshotPoolID": 11,
        "scsiTarget": [
            {
                "portId": "CL1-E",
                "scsiTargetNumber": 6
            },
            {
                "portId": "CL4-E",
                "scsiTargetNumber": 5
            }
        ]
    },
    "dataBaseIp": "10.76.47.44:2379"
}

 

You can find the "scsiTargetNumber" in Device Manager - Storage Navigator, as shown in the picture below.

 

Storage Host Group Configuration

To present the same HSPC volumes to all the server hosts in the cluster, configure the Host Group as shown below.

hg1.png

  1. On each port that you are using for HSPC, create one Host Group
    • In this case, HSPC-1E-Docker is created
    • In this case, the "scsiTargetNumber" is 6 in the HSPC configuration file /opt/hitachi/hspc/config.json
  2. Use the Add Hosts button to add the WWNs of all the hosts

 

Find Matching LDEV ID of HSPC Volume

Normally, you will probably not need to know the corresponding LDEV information for your HSPC volume, but you might need it for troubleshooting and validation. First, the LDEV ID can be found on the Docker host by using the docker volume inspect command. Note that the LDEV ID here is shown in decimal, while on Device Manager - Storage Navigator the LDEV ID is shown in hex.

ldev1.png

To convert decimal to hex, you can simply use the Calculator tool on Windows, as shown below.

ldev2-2.png
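
Alternatively, a shell one-liner does the same conversion (assuming the decimal value from the inspect output above was 109, which corresponds to the 00:6D below):

# printf '%02X\n' 109
6D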

Now we know to look for LDEV ID 00:6D to find the corresponding HSPC volume, shown below.

ldev4.png

 

Conclusion

Hitachi released the first version of Hitachi Storage Plug-in for Containers (HSPC) in early November 2017. In this article, we showed just some of the capabilities that HSPC offers. Stay tuned for upcoming releases with more features and integrations with container ecosystems.
