
Software-Defined For The Real World

Blog Post created by Bob Madaio on Apr 28, 2015

Today Hitachi Data Systems announces our largest portfolio expansion in company history, and we do so with a focus and technology set that belies some people's antiquated notions of who we are.



First, there is our suite of Social Innovation solutions, which ties the power of HDS information technology to the vertical-market expertise of Hitachi, Ltd., our $90B+ parent company.  This is a massive move toward fulfilling our vision of driving positive societal impact through technology, and toward taking the learnings and experience from those endeavors to help power our customers’ businesses.


You can find more about those solutions here; I’ll be focusing on an entirely different, equally exciting part of our announcements: our new IT infrastructure technologies and solutions.




And there were many. They, too, were not what you might expect from HDS, the “great hardware guys” (not that there’s any doubt about the excellence of our hardware).


But instead, our launch was centered on helping our customers build a Software-Defined Infrastructure.


Much debate has existed about what is or is not “software-defined” – a great argument if you are into esoteric discussions that don’t end anywhere particularly productive. (I tend not to be.)  We instead decided to focus on outcomes we think customers want to attain. Simplicity, Insight, Agility.  And how to achieve them. Automation. Expansive data access. Infrastructure abstraction.

 

There are many great HDS bloggers who have covered these themes already…

  • Hu Yoshida explains how such an infrastructure can lead to greater IT innovation, here.
  • If IT tea is, umm, your cup of tea, check out this blog by Paula Phipps.
  • If you love a good software-defined Star Wars reference, check out Paul Meehan’s blog, here.

 

But to sum up the basics: the flexibility of software is fundamentally changing IT infrastructure. HOW it should change YOUR infrastructure is something we vendors cannot know until we work with you to understand your application types, business needs, internal skill sets and current IT architecture.



This is why we talk about the path to IT as a Service being software-defined and application-led. And that is why this new suite of technology from HDS is so exciting, if I do say so myself: it brings software enhancements for IT infrastructure that support both traditional applications and new, scale-out-style workloads.

 

But some of you may have heard that “HDS revamped all their mid-tier storage” or that “HDS launched EVO:RAIL” or maybe “HDS launched a new hyper-converged big-data platform”… or even that we “added new automation tools.”

 

All true.  But even the hardware introductions (looking at you, VSP family) are really software stories.  So here is a quick list of some of what we launched, and how it fits within the software-defined context. Unfortunately, for reasons of space and time, I’ve had to pick some “favorites” and let a few of the HDS family of bloggers cover the rest.


Expanded Hitachi Virtual Storage Platform (VSP) family



New VSP models (the G200, G400 and G600 are available today, with the G800 coming soon) join the market-leading VSP G1000 as part of a cohesive family spanning from 2U to blow-you-away.


What’s the “software-defined” angle here? These systems all run the same Hitachi Storage Virtualization Operating System (SVOS): a suite of tools that includes native storage virtualization (support for most third-party arrays!), built-in active/active capability (appliances need not apply!) and all the data management and replication our VSP customers are accustomed to. All of this is done on cost-effective storage systems that leverage Intel multi-core technology without the need for Hitachi-specific ASICs.


What's OUT: the compromises of “midrange” storage functionality and the lack of interoperability with high-end systems. What's IN: the reliability and performance that Hitachi VSP technology is known for – whether you deploy them in converged infrastructure solutions or as standalone best-of-breed components.


Multiple hardware architectures, one software suite, many happy customers…


Introducing The Hitachi Hyper Scale-Out Platform

While the VSP family is considered best in class for supporting traditional applications, some environments would benefit from something different.  New environments are being architected to take advantage of different styles of applications, massive pools of data and cost-effective scale-out infrastructure.  This is one of the areas where we are seeing rising interest in hyper-converged infrastructure platforms.


Hitachi has made TWO entries into the hyper-converged space with our recent announcements: the Hitachi Unified Compute Platform 1000 for VMware EVO:RAIL, targeted at smaller, general-purpose workloads and VDI implementations; and the Hitachi Hyper Scale-Out Platform, targeted at big-data analytics and building an active data lake. Learn more about our UCP family updates over in Hanoch Eiron’s post.

 

I’ve spoken to a number of people about our Hyper Scale-Out Platform, and I tend to find one area that creates confusion: is it really hyper-converged?  You see, in most people’s minds, hyper-converged solutions tend to be a) a start-up’s answer to every application need, or b) a repackaging (with some value-add) of VMware technology.  The Hitachi Hyper Scale-Out Platform is neither.



At HDS we know that one architecture is typically NOT the answer for everything, and this platform is actually based on a combination of Hitachi intellectual property (a massively scalable, POSIX-compliant file system) and open-source technologies (Linux for the OS, KVM for virtualization) to drive virtualized, scale-out compute and storage together.
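
To make that combination concrete, here is a minimal sketch (Python with libvirt, the standard open-source KVM management library) of what “KVM compute on top of a shared POSIX file system” looks like in practice. The mount point, disk image path and guest definition are all hypothetical illustrations of the pattern; the platform’s own management layer handles this orchestration for you.

```python
# Hypothetical sketch: boot a KVM guest whose disk image lives on the
# platform's shared POSIX-compliant file system, so virtualized compute
# runs right next to scale-out storage. The mount point and image path
# are invented for illustration only.
import libvirt

DISK_IMAGE = "/mnt/hsp/images/analytics-node-01.qcow2"  # hypothetical path

DOMAIN_XML = f"""
<domain type='kvm'>
  <name>analytics-node-01</name>
  <memory unit='GiB'>4</memory>
  <vcpu>2</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='{DISK_IMAGE}'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
"""

conn = libvirt.open("qemu:///system")   # connect to the local KVM hypervisor
domain = conn.createXML(DOMAIN_XML, 0)  # define and boot a transient guest
print(f"Started {domain.name()}: compute co-located with its data")
conn.close()
```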


The focus on analytics takes advantage of the platform’s massive ingest capabilities (quickly taking in new types of information) as well as the ability to run Hadoop and analytics applications directly on the platform, because compute and storage are co-located. This helps eliminate the “data swamp” problem, where enterprises collect massive amounts of information but struggle to move it, analyze it and get insight from it.
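
That co-location is easier to picture with a small example. Below is a toy sketch of an “analytics job” that counts event types by reading ingested files in place on a hypothetical HSP mount point, rather than first copying them into a separate analytics cluster. The path and log layout are invented for illustration.

```python
# Toy sketch: analyze ingested data where it lives. Because compute and
# storage are co-located, there is no "move the data to the analytics
# cluster" step. The mount point and log format are hypothetical.
from collections import Counter
from pathlib import Path

DATA_DIR = Path("/mnt/hsp/sensor-feed")  # hypothetical HSP mount point

def top_event_types(n=10):
    """Count event types across all ingested log files, in place."""
    counts = Counter()
    for log_file in sorted(DATA_DIR.glob("*.log")):
        with log_file.open() as f:
            for line in f:
                # assume a "timestamp event_type payload" layout per line
                fields = line.split(maxsplit=2)
                if len(fields) >= 2:
                    counts[fields[1]] += 1
    return counts.most_common(n)

if __name__ == "__main__":
    for event_type, count in top_event_types():
        print(event_type, count, sep="\t")
```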

 

Now, could this platform support other applications?  Sure; when you leverage open-source software and add value on top, it opens up plenty of options.  But for now, expect to see this platform underpin our customers’ analytics environments and support Hitachi’s Social Innovation direction, with solutions leveraging native data analytics applications right on the platform.

 

Software-Defined Software

So far, I’ve described two hardware platforms whose value is derived from software: the Virtual Storage Platform, which is powered by SVOS, and the Hitachi Hyper Scale-Out Platform, which is powered by both Hitachi and open-source software.



But sometimes the conversation involves no hardware at all.  And no, I’m not talking about the demo of SVOS running in a VM on a PC server that we have at our sales-conference/influencer event HDS Connect this week, which is super-cool, but not orderable today as we work through the right model for that capability. And no, I’m not talking about the software-only options of Hitachi Content Platform, Hitachi Content Platform Anywhere or Hitachi Data Ingestor, which are also super-cool and already deployed in customer environments.  Those are, mostly, storage systems in software.

 

Instead, I’m now talking about automation applications that simplify the lives of administrators, and live “above” the storage systems. And we launched some significant products to help there.

  • Hitachi Automation Director takes Hitachi best practices for provisioning infrastructure for common environments and applications, and turns them into templates that are easy to use, easy to update and easy to expose, with role-based access, to your internal consumers. (A minimal sketch of this template-and-API pattern follows this list.)
  • Hitachi Data Instance Director simplifies and orchestrates array-based local and remote data protection, with a simple whiteboard-style interface for building data protection workflows.
  • Hitachi Infrastructure Director brings the new members of our VSP family a simple, lightweight, API-based management interface for customers who don’t need the breadth and depth of Hitachi Command Suite (which, of course, also supports the new systems).
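
To give a flavor of what “automation through templates and APIs” means day to day, here is a minimal sketch of template-driven provisioning over REST. To be clear, the base URL, endpoint shape, template name and payload fields below are hypothetical stand-ins, not the actual Automation Director or Infrastructure Director APIs; the point is the pattern: a best-practice template, exposed with role-based access, invoked programmatically.

```python
# Hypothetical sketch of template-driven provisioning over a REST API.
# The base URL, endpoint shape, template name and fields are invented to
# illustrate the pattern; consult the product documentation for the real
# Automation Director / Infrastructure Director interfaces.
import requests

BASE_URL = "https://automation.example.com/api/v1"  # hypothetical endpoint
TOKEN = "REPLACE_WITH_API_TOKEN"  # credentials for a role allowed this template

def provision_from_template(template_name, params):
    """Submit a provisioning request built from a best-practice template."""
    resp = requests.post(
        f"{BASE_URL}/templates/{template_name}/submit",
        json={"parameters": params},
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["taskId"]  # poll this task ID for completion

# Example: an application owner provisions capacity without knowing
# anything about the underlying arrays; the template encodes the
# best practices.
task_id = provision_from_template("oracle-db-volume",
                                  {"capacityGB": 500, "tier": "flash"})
print(f"Submitted provisioning task {task_id}")
```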


And So Much More

There’s much more to cover, and luckily there are many HDS blogs out there covering lots of it.  From an expanded family of Unified Compute Platform converged-infrastructure solutions that now covers core-to-edge use cases and much smaller customer environments than before, to the Hitachi Protection Platform based on Sepaton technology, to our early and deep support of VMware VVol… the list is long.

 

But the list of updates isn’t the point.  The point is that we are taking a broader view of delivering on software-defined infrastructure, and we’re not pretending that there is only one way to make it real.  Because we live in the real world.  And most real-world customers are not going to run mission-critical workloads on commodity scale-out infrastructure today. Nor do real customers need a VSP G1000 for every office or application. But they do want simplification through automation, insight through advanced data analytics and agility through infrastructure abstraction, for all apps and environments.

 

And that’s what we are delivering in a way that's software-defined, application-led and ready to go.

