Cris Danci

Containers: why and probably why not (at least for now for enterprise)

Blog Post created by Cris Danci on May 17, 2015

Over the past few weeks, there has been a lot of messaging around the software-defined infrastructure story coming from the HDS camp, as well as from everyone else in the infrastructure market.  I'm not going to launch into a discussion of software-defined, as plenty of people have already addressed it from multiple angles.  Instead, I'm going to talk about containers, a technology that gets included in the software-defined family.  For those of you who haven't heard of containers, I suggest some background reading here, since my objective for this blog is not to provide a primer.


To be clear, I'm actually pro-containers as a technology class in general. I'm currently a Docker consumer, but I was also an LXC consumer, and before that used Solaris Zones, BSD jails and an assortment of other technologies that provide similar levels of isolation on top of the host operating system without creating a VM.  I don't have any specific problems with any container technology; in fact, I think that as we've matured in our use of containers, they've become easier and more practical to consume.  Docker substantially simplified my own container experience, moving away from a straight LXC implementation, and my life is all the better for it... but just because I personally have largely benefited from it doesn't mean everyone will.  Aside from my day job, which happens to cover consulting, pre-sales and delivery (not much of the latter these days), I'm also a full-time technology fiend with my fingers in lots of pies.  Three of those pies just happen to be programming, security and all things *NIX, which is what the container space is really all about.  As a consultant working with enterprise, I can't say containers are of much interest - lots of noise and hype, sure, but interest, nope.  Sure, I have customers that are musing about using them, and some apps have been test-ported to them, but by and large the perception is that they're not enterprise ready, and to a large extent I tend to agree, despite how much I love my containers.


Realistically, I think containers are more aligned with developer-defined infrastructure than with software-defined infrastructure.  If you don't know what that is, do some reading here.  Also, realistically, we are some time away from developer-defined infrastructure becoming a reality.  This is largely because developers don't think about technology in the same way systems people do, and this can have real implications (years of working with Java developers should have already proved this).  There are two obvious solutions to this problem: DevOps (which I'm 100% in favour of), to merge the systems and developer skill sets; and PaaS (again, 100% in favour), which is about providing the required abstraction and allowing the system to manage itself.  Both technology classes are in their infancy despite some recent leaps forward.  The collective psyche required to successfully utilise these technologies at enterprise scale is even less mature, and could probably be considered embryonic.


The story is somewhat different for start-ups that operate almost exclusively in the 3rd platform space, since the underlying developer culture is already ingrained in the company culture, and PaaS provides better automated management of 3rd platform style applications (particularly if they are constructed using microservices). However, as organisations grow, they naturally start to develop the attributes that traditional enterprises have.  Web-scale companies like AWS, Yahoo, Google, Apple etc. run bucket loads of 2nd platform infrastructure and software, and have non-agile business processes. This is just a by-product of how organisations grow and evolve (there is probably a Bimodal IT undertone here as well).  The story is also somewhat different for technology manufacturers already exploiting the Linux stack to provide low-level services.  Containers provide a way to create multiple isolated services (as separate stacks).  This really helps modularise how systems are built and makes services incredibly portable between different hardware platforms.  Think traditional types of technology like routers, modems, TVs, toasters and storage arrays, but also emerging technologies like sensor networks and other IoT devices.
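As a sketch of that "isolated service as a separate stack" idea, here is a minimal, hypothetical Dockerfile for one self-contained service.  The base image, package and config path are illustrative assumptions on my part, not anything from a specific product - the point is that everything the service needs travels with the image:

```dockerfile
# Hypothetical example: a single, self-contained service stack.
# The same image runs unchanged on a laptop, an appliance controller
# or any other host with a matching kernel and architecture.
FROM debian:jessie

# Install only what this one service needs - the isolation boundary
# is the container, not a full guest operating system.
RUN apt-get update && apt-get install -y --no-install-recommends \
        lighttpd \
    && rm -rf /var/lib/apt/lists/*

# Ship the service's configuration inside the image itself.
COPY lighttpd.conf /etc/lighttpd/lighttpd.conf
EXPOSE 80

# One process per container keeps the service modular and portable.
CMD ["lighttpd", "-D", "-f", "/etc/lighttpd/lighttpd.conf"]
```

Each box, array or IoT device then just composes a handful of images like this, rather than maintaining one monolithic root filesystem.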


This affinity to developers is also directly observable.  The “so what” or “doesn't virtualisation already provide this?” attitude seems to be everywhere outside of developer circles.  There is a good reason for this: the reality is that virtualisation and containerisation share many of the same benefits - it just so happens that virtualisation as a concept is much easier for people to consume than containers, which is probably why virtualisation caused the paradigm shift, and containerisation didn't.  There are a few other factors that need to be considered here too:


1. Virtualisation did not really require much modification; operating systems just went from running in a physical context to a virtual one.  Non-experts could easily understand “we run 1000 physical servers, we'll now run 1000 virtual ones, it will cost a fraction of the price, and nothing will need to change”. Containers definitely offer higher levels of consolidation and reliability (fewer moving parts), but they also require more ground work, as their tighter coupling to the underlying operating system makes them more difficult to understand and consume for non-experts working in IT.


Sidebar: I need to commend Docker here.  More than any other container technology, Docker has created enough abstraction to dramatically simplify the story for non-experts.
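To illustrate that abstraction, compare a rough raw-LXC workflow with the Docker equivalent.  The commands below are an indicative sketch of era-typical tooling rather than a verbatim recipe, and names like web01 are made up:

```shell
# Roughly what standing up an isolated service looked like with raw LXC:
# build a container from a template, start it, attach, then install and
# wire up the application by hand.
sudo lxc-create -n web01 -t download -- -d ubuntu -r trusty -a amd64
sudo lxc-start -n web01
sudo lxc-attach -n web01 -- apt-get update
sudo lxc-attach -n web01 -- apt-get install -y nginx

# The equivalent Docker experience: one command pulls a pre-built,
# self-describing image and runs the service isolated and port-mapped.
docker run -d -p 8080:80 --name web01 nginx
```

The image does the work the template, attach and install steps used to do, which is exactly the abstraction that makes Docker digestible for non-experts.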


2. Containers are still largely a *NIX story, and a lot of developers (if you want to call people that only know .NET actual developers) are Windows-centric.  This will change somewhat going forward with Microsoft's recent announcement of Nano Server and container support (read more here), but as usual this will take time to diffuse into the market.


For those of you paying close attention, or who already consume Docker, you'll know that Docker on Windows actually runs inside a Linux VM, and so we are back to the circular debate of VMs versus containers.  Of course, there is also the concept of nesting containers on VMs and all the benefits that can bring, but that's perhaps a topic for another post.


3. Like any new technology class, there are still ancillary factors that prevent diffusion. Skills and security are the frontrunners; security in particular is a problem for Docker when pulling images from the Hub or other public registries (repos).


So I'm not saying it's never going to happen, or that containers are crap (because they're not, they're awesome); there are just too many factors in the way for them to take the world by storm like virtualisation did, and to create the next paradigm shift. For most enterprises, the shift will probably have to wait for the operating system vendors.