
The SMI-S User Manuals for all supported Hitachi storage arrays are now available on the Developer Network. Two kinds of SMI-S providers are available: the legacy provider from Hitachi Command Suite, which supports AMS, HUS, USP, USP V, and VSP; and the embedded provider, which runs on the array service processor (SVP) and supports VSP, HUS VM, VSP G1000, and VSP Gx00 and Fx00.

 

These manuals describe the classes and properties of the Storage Management Initiative Specification (SMI-S) that are implemented by the SMI-S provider. This allows our partners, and even our customers, to implement support for Hitachi storage in their products that use SMI-S.

In my last blog post I described how to install the VMware Plugin for Docker.  Now we will deploy an environment and show how we can use a persistent storage volume across containers.

Build a new volume for a container

docker volume create --driver=vmdk --name BusyBox1 -o size=2gb


You can check the volume with the docker volume ls or docker volume inspect commands.
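Putting the create and verify steps together as one sketch (the volume name and size match the example above; this assumes the vSphere plugin's vmdk driver is installed, and only runs the docker commands if docker is actually on the PATH):

```shell
# Create a 2 GB volume with the vmdk driver, then list and inspect it.
VOL=BusyBox1
CREATE_CMD="docker volume create --driver=vmdk --name=$VOL -o size=2gb"
echo "$CREATE_CMD"
# Guard so the sketch is safe to paste on a machine without docker:
if command -v docker >/dev/null 2>&1; then
  $CREATE_CMD
  docker volume ls
  docker volume inspect "$VOL"
fi
```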

Create a container with persistent storage

docker run --rm -it -v volumename:/mnt/mymount containerimage

docker run --rm -it -v BusyBox1:/mnt/myvol busybox


Create a file in the persistent storage

In your container, go to the directory you specified in the previous command. In our case it is /mnt/myvol.

Create a quick file (for example, touch file1) and then add some content to it.
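Inside the container, the step looks like this (a sketch: the mount point is the one used above, and the fallback to a temp directory is only so the snippet also runs outside a container for testing; the file name and content are just examples):

```shell
# Write a file on the mounted volume so it outlives the container.
MNT=/mnt/myvol
# Fall back to a temp dir if the mount point isn't available/writable:
[ -d "$MNT" ] && [ -w "$MNT" ] || MNT=$(mktemp -d)
cd "$MNT"
touch file1
echo "hello from container 1" > file1
cat file1
```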


We have created a file in the persistent storage, not in the container file system.

Now you can exit your container with exit.

Create a different container and attach the persistent volume

I will download a new image for an Ubuntu:latest container and mount the persistent storage volume.

docker run --rm -it -v BusyBox1:/mnt/myvol ubuntu


Change to the same directory, /mnt/myvol, and see that you can browse the file that was created before.

Why is this important?

This allows developers and administrators to provide persistent storage that can be used across container restarts to save data outside the container. This is a great use case for databases or for shared data used by different containers.

Drop me a comment if you like this type of post.

You may have heard that VMware released a Docker plugin. This plugin allows developers to use persistent storage and store it in a VMware datastore, while giving IT administrators the ability to manage their environment their way.

In this post I will go through the steps to add all the components to the ESXi server and to the VMs.

Download Binaries

To download the binaries I used an Ubuntu machine.

1. Create a folder for the downloaded files, e.g. mkdir vmware

2. Change to the directory: cd vmware

3. Download the files:

Ubuntu package: https://github.com/vmware/docker-volume-vsphere/releases/download/1.0.beta/docker-volume-vsphere_1.0.beta_amd64.deb

ESXi vib package: https://github.com/vmware/docker-volume-vsphere/releases/download/1.0.beta/vmware-esx-vmdkops-1.0.beta.vib
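Both packages can be fetched with curl (a sketch: the URLs are the two release links above; the actual download is gated behind a DO_DOWNLOAD flag so pasting the snippet doesn't pull anything by accident):

```shell
# Compose the two download URLs from the 1.0.beta release and fetch them.
BASE=https://github.com/vmware/docker-volume-vsphere/releases/download/1.0.beta
DEB=docker-volume-vsphere_1.0.beta_amd64.deb
VIB=vmware-esx-vmdkops-1.0.beta.vib
for f in "$DEB" "$VIB"; do
  echo "would download: $BASE/$f"
  # Set DO_DOWNLOAD=1 to actually retrieve the files:
  if [ "${DO_DOWNLOAD:-0}" = "1" ]; then curl -fsSLO "$BASE/$f"; fi
done
```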

Optional: I created a script that downloads all the files directly to your Linux machine. Download the script from GitLab.

Example: curl https://gitlab.com/carlosvargas/devops/raw/master/ubuntu/downvmdockplugin | sh


Copy the VIB to your ESXi server

To copy the VIB file you can follow these steps:

scp vmware-esx-vmdkops-1.0.beta.vib root@1.2.3.4:/vmfs/volumes/datastorename/foldername/vmware-esx-vmdkops-1.0.beta.vib


 

Enable Community Support for VIB

Connect to your ESXi host: ssh root@1.2.3.4

Enable community support for VIB files: esxcli software acceptance set --level communitysupported

Install the VMware Plugin VIB: esxcli software vib install --no-sig-check -v /vmfs/..../vmware-esx-vmdkops-1.0.beta.vib
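For reference, here are the two ESXi-side commands in one place (run these on the host over SSH, not locally; datastorename and foldername are placeholders matching the copy step above):

```shell
# Commands to run on the ESXi host after copying the VIB over.
VIB_PATH=/vmfs/volumes/datastorename/foldername/vmware-esx-vmdkops-1.0.beta.vib
STEP1="esxcli software acceptance set --level communitysupported"
STEP2="esxcli software vib install --no-sig-check -v $VIB_PATH"
printf '%s\n' "$STEP1" "$STEP2"
```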


 

Copy VMware Docker Plugin to your Docker VM

scp docker-volume-vsphere_1.0.beta_amd64.deb user@1.2.3.4:/home/user/docker-volume-vsphere_1.0.beta_amd64.deb


 

Install the VMware Plugin in your Ubuntu VM

Log in to your VM and install the plugin: sudo dpkg -i docker-volume-vsphere_1.0.beta_amd64.deb


Reboot your node: sudo reboot

Create Docker Volume with the VMware Plugin

To create the volume you need to use the --driver=vmdk parameter. Here is an example: docker volume create --driver=vmdk --name=MyVolume -o size=10gb

To check your volumes execute: docker volume ls

Where are these volumes stored?

Go to your vsanDatastore and you will see a new folder called dockvols. There you will find your newly created volume.


I thought this was useful for the wider audience to read. Some really good insights on new players and startups vying to enter the various categories within Industrial IoT.

 

I think CB Insights is a great tool for market intelligence and insights, and for seeing how this and other adjacent markets of interest are developing.

 

Eyeing IIoT

 

We examined investments into Industrial IoT (IIoT) companies targeting enterprise and heavy industry — including deals and funding — since 2012. At the current run-rate, deal volume is on track to surpass 2015's total by 30%.

 

https://www.cbinsights.com/blog/industrial-iot-startup-funding-trends-q2-2016/?utm_source=CB+Insights+Newsletter&utm_campaign=1badb099f3-WedNL_7_13_2016&utm_medium=email&utm_term=0_9dc0513989-1badb099f3-87473429

 

Industrial IoT Industry Map – a great chart showing all the new players:

 

https://www.cbinsights.com/blog/top-startups-iiot/

 


Jeff Maaks

Getting Started with Docker

Posted by Jeff Maaks May 17, 2016


I've been testing some new products we have coming out soon, running through the installation process and creating some basic developer how-to guides.  I also wanted to learn more about DevOps and containers in general, and dust off my Linux and other technical skills, so this seems like a perfect time to explore Docker.


So what's special about Docker?  Well for me specifically:

  • It was easy to create an isolated environment for testing.  I didn't have to track down a Linux server for testing, take time to install a fresh image, or worry about mucking up my laptop.  By taking the time to create an install recipe ("Dockerfile"), I was able to pretty much automate the build process.
  • Docker made it much easier to clean up after test runs and get back to a known good state.  Make a mistake and screw something up?  Accidentally delete some files?  Things aren't working and you're not sure what happened?  Who cares?!  Just type exit and start a new container!
  • I've been hearing a lot about Docker lately and wanted an excuse to try it out!

 

Install Docker

Installing Docker is ridiculously easy.  From your server/desktop/laptop, simply go to https://www.docker.com, click on "Get Started with Docker", and follow the instructions to install Docker Toolbox (on Mac or Windows) or Docker Engine (on Linux).  Docker will install a number of helper applications, including Oracle VirtualBox.

 

The good news is there's a faster and more reliable installation of Docker coming soon: Docker for Mac and Windows!  This version runs an Alpine Linux distribution on top of the xhyve hypervisor on Mac OS X, or on Hyper-V on Windows.  It's in beta as of this writing, but you can learn more and apply for access here: Docker for Mac and Windows Beta: the simplest way to use Docker on your laptop | Docker Blog.

 

Luckily I've been accepted into the Docker for Mac/Windows beta program and have been playing around with this new version.  It's quite an improvement!  I'll try to write up my experiences soon.

Example Docker Commands

The Docker Getting Started materials are very good, and after installing Docker I'd suggest working through the whalesay image, building your own image, and other included exercises.

 

A few other useful command examples:

 

Get the current client & server Docker versions:

docker version

 

Show all running containers:

docker ps

 

Run Ubuntu interactively in a container:

docker run -it ubuntu /bin/bash

 

See all the images in your local Docker instance:

docker images

 

Remove all stopped containers:

docker rm -v $(docker ps -aq -f status=exited)
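The one-liner above works by command substitution: $(docker ps -aq -f status=exited) expands to the stopped container IDs, which then become arguments to docker rm -v. A docker-free sketch of the same shell idiom, using stand-in functions:

```shell
# Stand-in for: docker ps -aq -f status=exited (prints one ID per line)
list_exited() { printf '%s\n' c1 c2 c3; }
# Stand-in for: docker rm -v (just echoes what it would remove)
remove_all() { echo "removing: $*"; }
# Command substitution word-splits the IDs into arguments:
result=$(remove_all $(list_exited))
echo "$result"   # -> removing: c1 c2 c3
```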

 

Delete a local Docker image (note you'll need to run the above command first or otherwise remove any stopped containers based on the image to be deleted):

docker rmi ubuntu


DockerCon 2016

DockerCon 2016 will be held in Seattle, WA, June 19-21, 2016.  If you're going, drop me a note and we can connect there!

 

 

What's Next?

I'm hoping to make this part of a blog series, and you may have already seen my blog post on Kubernetes and Docker.  In the next few days we'll post a how-to on installing Docker on the Hitachi Hyper Scale-Out Platform, and as I get experience with them I'll post other how-to guides on using various other Hitachi products with Docker.

 

Learn More

If you'd like to learn more about Docker, the Docker Docs are great and quite detailed.  There's a Docker Channel on YouTube, a /r/docker sub-Reddit, and these two O'Reilly books are highly recommended:

 

Using Docker and Docker Cookbook

 

 

Feedback Welcome!

 

Let me know what you think: Are you using Docker or another container technology?  Which Hitachi products would you like to see as container-friendly or for us to document how to deploy using containers?

 

 

Jeff Maaks

Kubernetes and Docker

Posted by Jeff Maaks May 6, 2016


Earlier this week I had the opportunity to attend a Cloud Native PDX Meetup with a presentation on Kubernetes 1.2 given by Kelsey Hightower of Google.  I've been playing around with Docker and have heard of Kubernetes, but I really didn't have a good understanding of how these technologies fit together, and I wanted to learn more about how all this supports the concept of DevOps.  Ultimately, I want to understand how Hitachi's products can fit into this ecosystem.  Here are a few highlights from Kelsey's talk.

 


Docker Overview

I'll blog separately about Docker soon, but in a nutshell it's a standard for running processes/applications in containers.  Think of containers as lightweight, portable virtual machines. Well-designed "containerized" applications separate the components of the application into separate containers (ideally so that each container only runs one process).

 

Isn't Docker Enough?

Not really.  The challenge is that when you have a complex application that's been properly containerized (for example, with separate containers for the back-end database, the business logic, the front-end web server, etc.) you now have to manage all these parts.  Yes, you can use Docker to start each of the components, but how do you guarantee containers with a high affinity are running in the same node?  How do you manage all of the parts as a whole?

 

Introducing Kubernetes

This is where Kubernetes comes in.  As Kelsey describes, Kubernetes is a framework for building distributed systems.  It's basically the plumbing to build a distributed platform by taking a lot of physical machines and making them look like a single large machine.  As pertains to containers, Kubernetes introduces the concept of a "pod", which is a way to tightly couple containers that make up an application.  Pods are logical applications, which have:

  • One or more containers and volumes
  • Shared namespaces
  • One IP address per pod

 

You can think of pods as being virtual machines that are constructed at run-time.
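To make the pod idea concrete, here's a minimal manifest sketch of a pod with two containers sharing one volume and one network namespace. This is a hypothetical example for illustration, not something from the talk; all the names and images are made up:

```yaml
# Minimal pod: two tightly-coupled containers, one shared volume, one IP.
apiVersion: v1
kind: Pod
metadata:
  name: example-app          # hypothetical name
spec:
  volumes:
    - name: shared-data      # volume visible to both containers
      emptyDir: {}
  containers:
    - name: web              # front-end process
      image: nginx
      volumeMounts:
        - name: shared-data
          mountPath: /usr/share/nginx/html
    - name: helper           # side-car that writes into the shared volume
      image: busybox
      command: ["sh", "-c", "echo hello > /data/index.html && sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
```

Because both containers live in one pod, they share the pod's IP and can coordinate through the shared volume.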

 

Imperative vs Declarative

Isn't Kubernetes just a container orchestration platform?  That's what I thought coming into this Meetup, based on what I'd read about Kubernetes.  But according to Kelsey, Kubernetes is not an orchestration tool at all, though people want to use it that way.

 

As I mentioned earlier, Kelsey describes Kubernetes as a framework for building distributed systems.  And yes, you can use it in an imperative fashion, issuing commands to Kubernetes which it's happy to execute.  But that's missing the true power of the framework.

 

Rather, Kubernetes is meant to be used in a declarative fashion: You define your applications (say, via Dockerfile definitions), then you describe to Kubernetes what you require for that application to meet your business needs (configuration files, number of instances, load balancing, etc.) and Kubernetes just makes it happen!

 

Learn More

This meetup wasn't recorded, but here's a video of an almost identical talk Kelsey gave last year, which also contains a great Tetris analogy.  It provides a lot more detail about Kubernetes and includes a live demo.

 

 

 

What is Hitachi doing with Kubernetes and Docker?

We'll provide more details about what we're doing with Kubernetes and Docker in upcoming blog posts, but here are a few resources available now:

 

 

What are you doing with containers?

I'd love to hear about what you or your customers are doing with DevOps and containerized applications.  Which technologies are you exploring?  How would you like to use Hitachi products in a containerized model?

I know I’ve been a bit silent since the holiday break, but I’ve been wanting to pivot from talking about what we’re planning to what we’re doing for you, our developers.  We have a few things in the pipeline that we’ll be announcing soon (stay tuned!), but I’m really excited about what we have in store for you this fall!

 

The biggest thing this fall is NEXT 2016, our conference for customers and partners. The main track has general sessions and a killer showcase of the latest – and future – solutions and services that will help businesses get the most from their data. We’ve seen that it’s the data that allows the digital transformation of businesses and industries. Analysts are talking about it too. The important thing for our customers is to get ahead of these changes and their competitors. We’re helping our customers find the value in the Internet of Things and big data. Our advantage is that as a company, Hitachi has direct experience in so many vertical industries – literally planes, trains and automobiles. This perspective and our strengths in data management and analysis make an awesome combination for solutions that work well. We’ll demonstrate that at NEXT 2016.

 

Developer Track

As part of NEXT 2016, we’re excited to share that we’re planning on a Developer track focused on your needs!  This would include:

  • 2-hour training sessions on Day 1
  • Developer-oriented main conference sessions on Days 2-3
  • A “genius bar” type approach, where we ensure our engineers are available in the solution showcase to answer technical API and automation questions

 

Did You Say “Hackathon”?

Why, yes I did!  We’re planning on hosting a hackathon one evening where you can get hands-on with our APIs, while enjoying a beer and some tasty munchies.  The hackathon is still in the early planning phases, but we’re thinking about keeping it light and fun, with something for everyone whether you’re new to programming or a hard-core hacker.

 

What Would You Like to See?

We’d love to hear from you — what are your thoughts on:

  • Which product APIs are you most interested in learning more about?
  • How about over-arching topics such as:
    • Implementing DevOps with Hitachi products?
    • Integrating Hitachi products into an OpenStack environment?
    • Hitachi and Open Source
  • What is your preference for the hackathon: something fun and whimsical to help you get more familiar with integrating or scripting against our products, or help building something tangible you can use when you get back to the office?
  • Any other fun ideas to share that you’ve seen as successful approaches at events you’ve attended before?

 

I’ll be sharing more information as we firm up our plans, but in the meantime check out http://www.hitachinext.com/ for more information, and to sign up for updates from the NEXT team.

 

Open Polls

We'd also love to get your feedback in our open polls, including: What's your top interest area in 2016 for automation or integration?


I thought I'd kick off a new tradition of a year-end review of HDS' developer resources posted throughout 2015.  Here are some links to content related to APIs, DevOps, Open Source, and anything else that might be relevant to scripting or development.

 

 

 

Pentaho

Probably the biggest news for developers (especially those working with Big Data and analytics) is our acquisition of Pentaho.  As a company founded from an existing Open Source community, Pentaho is at the opposite end of the developer-enablement spectrum from where we are at HDS.  We have a lot to learn from Pentaho's developer community, and I'm looking forward to continuing to pick the brains of our new colleagues.

 

Hitachi Content Platform

One of the best content areas for developers is the Developer Network for Hitachi Content Platform, which has been around since we launched the HDS Community in 2013. In addition to the HCP code mentioned below in the Open Source section, a number of great contributions for HCP were posted in 2015:

 

Open Source

My colleague Manju Ramanathpura is driving HDS' Open Source initiative, and in 2015 we released a few projects:

  • librsyncWrapper - A thin Java JNI wrapper around the C librsync library, making it easy to use the librsync library from Java code.
  • comet - Custom Object Metadata Enhancement Toolkit
  • HCP SDK for Python 3 - HCPsdk provides a simple SDK to access a Hitachi Content Platform (HCP) from Python 3.  It handles name resolution, multiple sessions spread across all available HCP nodes, persistent connections, and recovery from failing connections.
  • HCP Metadata Query Tool - Another HCP tool from Thorsten Simons that enables you to query a Hitachi Content Platform (HCP) based on operations on objects that happened in the past.  It uses the Metadata Query Engine integrated in HCP to list object metadata for all or a subset of the data stored in HCP.

 

Stay tuned, as we'll be releasing more software as Open Source in 2016, and building out HDS' presence on GitHub.

 

Adapters for Microsoft and VMware

We published a number of 3rd-party adapters in 2015 for our storage and compute products, including Microsoft applications and VMware vCenter Operations Manager.  And yes, I know you shouldn't have to go to a sales rep or a partner for these adapters.    I'm working on making these easier to find and obtain...stay tuned.  In the meantime, check out these videos:

 

Hitachi Adapters for Microsoft Applications

 

Hitachi Storage Adapter for VMware vCenter Operations Manager (vCOps)

 

Hitachi Unified Compute Platform Management pack for VMware vCenter

 

OpenStack

 

Tools for Hitachi Enterprise Storage and Tuning Manager

Over in the Scripting area, ERIEK REGANDONO released several interesting tools:

  • Enterprise Offline Viewer - A tool that can view configurations (offline) for Hitachi Enterprise Storage subsystems, extract all data to HTML/CSV/Excel, build raidcom scripts, horcm files, etc.
  • rEplication List MakEr - Simple tools to get a list of Hitachi replication pairs from Hitachi Enterprise Storage subsystems
  • HTnM Report Generator - An automatic report generator for Hitachi Tuning Manager software.

 

 

Hitachi Virtual Infrastructure Integrator

Paul Morrissey posted a detailed blog series about the Hitachi Virtual Infrastructure Integrator:

 

 

Hitachi Universal Compute Platform

 

Kubernetes

In June 2015, HDS announced: Hitachi Data Systems Announces Availability Of Kubernetes On The Hitachi Unified Compute Platform.  A few blog posts on Kubernetes:

 

Unfortunately, the link to the setup guide in the white paper is broken.  Another one I'm working to get fixed...

 

UCP Director API

Anil Erduran posted a very informative series on the UCP Director API:

 

Hitachi Cloud Services

Quite a bit of good information about Hitachi Cloud Services was posted:

 

Hitachi Cloud Service for Content Archiving Documentation

 

Hitachi Cloud Service for Content Archiving On-Ramps Guides

 

Using s3fs with Hitachi Cloud Services - Content Archiving

Also, Leland Sindt posted a how-to about using Hitachi Cloud Services - Content Archiving and s3fs.

 

...and one more thing

And of course, a shameless plug for my recent blog posts in the Developer Network Blog:

 

...and a couple on LinkedIn:

 

Did you learn something new or see anything I missed?  If so, drop a comment below and let me know.  Oh, and have a safe and Happy New Year!  I look forward to providing you more and better developer-oriented resources in 2016!


It's that time of year when I start planning for which conferences are coming up in the next year that I'd like to attend.  As part of our revamp of the Developer Network, I'm looking forward to attending the Evans Data Developer Relations Conference in Palo Alto, March 20-22.


A few other conferences on my radar include:


 

Are you already confirmed for any developer conferences, or what's on your wishlist?  Any others you'd recommend?

We developers share a passion for possibility — we see the future and it is built on technical innovation.  I remember back when IBM released OS/2 Warp.  I was blown away by this revolutionary operating system: I can easily virtualize practically any version of DOS, Windows, and OS/2 simultaneously!?!  And I can simulate a LAN for development and testing of multi-user applications?  And all this for about $99!?!  Yes, I know that in today’s world where products like VMware and XenServer are pervasive you’re probably thinking, “Well, duh!”  But as a start-up software developer in the early 90’s (one who had been struggling with developing and testing a large LAN-based application), I found OS/2 Warp to be an incredibly powerful tool that enabled me to do amazing things I couldn’t do before.  What many of us thought back then was, “If only IBM could get out of their own way, OS/2 will kill the competition!”

 

Sometimes I feel we’re the same way at HDS.  We’re consistently rated as a great place to work and as one of the most ethical companies.  Plus we have some really cool solutions and have made a couple of great acquisitions recently.  But when it comes to developer enablement, it seems that sometimes we just can’t get out of our own way.

 

I’ve been told, “Dealing with HDS from a development perspective is nothing short of challenging.  Not only is documentation lacking, but existing interfaces are difficult to work with or incomplete, not to mention almost a complete lack of SDKs.  This is not purely around application integration, but also operational automation.”  It’s clear that our developer community is hungry for more, more, more APIs and developer-focused materials.

 

We can do better.  We WILL do better.

 

It’s not that we aren’t making progress.  According to The Register, “Hitachi Data Systems has impressed with what they can offer.  If you need OpenStack done right, HDS will compete with HP, IBM or Ericsson head-on and deliver.”  We’ve released Kubernetes on the Hitachi Unified Compute Platform.  We’ve released several Open Source projects on GitHub.  But can you easily find this information?  Is it done well with good documentation, examples, etc.?  Sadly, no.  We have more work to do to provide easily-findable and usable developer materials.

 

With that said, I’m now working to build a Developer Advocacy team at HDS and I need your help to learn about what matters to you and what you want from HDS.  Armed with your needs, my team and I will advocate on behalf of you, our customers, partners, and prospective customers that are building automation and integration against our product and solution APIs.  We’ll make sure our engineering teams know what you need.  And we’ll make sure you get as much timely and accurate information about our APIs and developer resources as we can make available.  The first step is to overhaul the Developer Network on the HDS Community and post more developer content for products beyond Hitachi Content Platform.  We’re also creating more developer training and are planning to host a hackathon or two in 2016.  We have released some software to Open Source and will be releasing more soon.

 

What do you think?  Share your stories and perspective — what are we doing well, where can we improve, and how can we best help developers like you succeed?  Do you want to know more about specific product/solution APIs?  How about OpenStack and DevOps?  Feel free to reply here, or contact me directly at jeff.maaks at hds.com or @JeffMaaksHDS.  I’d love to hear from you.  And if you’d like to be involved in helping build our developer community, please let me know.  We want to ensure what we’re building meets your needs.

 

In exchange for your feedback and/or suggestions, two commenters selected from a random drawing** will get their choice of an HDS Ogio backpack or Energi 5K+ battery pack.  The drawing will close out on Friday, January 15 at 5pm PST, but feel free to leave comments afterwards.

 



Stay tuned.  I’ll share a bit more about myself and what we’re working on in the coming weeks.  You’ll see a different side of HDS as we start getting out of our own way. 


** Void where prohibited.  Winners will be selected from a random drawing, with one entry per user that provides a substantive (for example, more detailed than "I agree") comment with their experience or a suggestion.