
Enabling The Business Defined Data Center

Blog Post created by Yong Kim on Sep 29, 2014

by Yong Kim, Director – File, Mobility and Cloud, HDS

 

As your IT team is acutely aware, the pace of business is dramatically increasing, and IT requirements are evolving just as quickly. There are several key market dynamics driving significant changes for IT. Organizations need insight into how to build a true partnership between business and IT to remain competitive.

 

That’s why HDS is sponsoring the upcoming ‘A New Framework For Enabling The Business Defined Data Center’ vendor tech session at Interop NY on Thursday, October 2nd, to provide IT professionals with valuable insight into this critical issue.

 

Many in the IT industry are moving toward a new framework to enable the business defined data center. Customers are pressed to gain business insights from their various data systems and content “silos”. Bringing increasing volumes of disparate data under management in one place serves as the foundation for that insight.

 

That foundation is what HDS delivers with the Information Cloud.

 

Here is a brief excerpt from our speaker, Lee Abrahamson, Vice President of Solutions and Products at HDS:

 

Can you share with us HDS’ strategy and core competencies around IT efficiencies for customers?

Our technology vision is really about two core things. Everything we do in our technology is about virtualization or abstraction. The other core thing is dynamic data movement: everything is about moving data. And the key thing about our technology strategy is that it is the same whether it is for block or for file.

 

And our strategy really emphasizes scale; we deliver these efficiencies at scale. It's really easy to do something at 10 or 20 terabytes. It's not so easy to do it at 300 or 400 terabytes, or at 1.2 petabytes.

 

How does HDS define the solutions in Software Defined Data Centers and Software Defined Storage?

HDS sees two approaches to software defined:

1) Differentiation in both software and hardware

2) Intelligence that lies wholly in the software

In both cases, the infrastructure is programmable: users can customize and define infrastructure behavior through software.

 

In case 1, where differentiation exists in both software and hardware, features such as performance acceleration and cache management, along with data services such as snapshots, replication, deduplication and encryption, receive hardware assist when workload criticality warrants it, as with transactional workloads that have strict timing requirements.

 

In case 2, intelligence sits entirely in the software, with no hardware differentiation: the platform runs on general-purpose hardware such as COTS servers and JBOD enclosures.

 

HDS plans to go to market with both approaches: programmable infrastructure today with UCP, VSP and HUS-VM, and platforms running intelligent software on general-purpose hardware in the future.
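
 

To make “programmable infrastructure” concrete, here is a minimal sketch of what defining infrastructure behavior through software can look like. The management endpoint, payload fields and provision_volume helper are hypothetical illustrations written in Python against a generic REST interface, not an actual HDS API:

    import requests

    # Hypothetical management endpoint for a software defined storage
    # control plane; a real deployment would use the vendor's documented API.
    API = "https://storage-mgmt.example.com/api/v1"

    def provision_volume(name, size_gb, service_level):
        """Request a volume that meets a service level; the control plane,
        not the administrator, decides placement, tiering and data services
        (snapshots, replication, deduplication) from the requested policy."""
        payload = {
            "name": name,
            "size_gb": size_gb,
            "service_level": service_level,
        }
        resp = requests.post(API + "/volumes", json=payload, timeout=30)
        resp.raise_for_status()  # surface provisioning failures
        return resp.json()

    # Example: a transactional workload with strict timing asks for the top tier.
    volume = provision_volume("oltp-logs", size_gb=512, service_level="platinum")

The point of the sketch is that the same software request can be served by either approach: in case 1 the requested service level may be delivered with hardware assist, while in case 2 it is honored purely in software.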

 

Any comments on Cloud and changing IT consumption models?

A focus on business value around mobility, economics and insight, and the desire for a more Business Defined IT environment, requires a different type of IT infrastructure: one that is always available, always automated and always agile. Organizations need a Continuous Cloud Infrastructure.

 

This means infrastructure choices that can last a long time while remaining adaptable and relevant.  It means continuous availability of services deployed, no matter what.  It means that as your needs change, so does the value you get out of the IT infrastructure you deploy.  And yes, it means having the agility so closely associated with cloud deployment.

 

So, how would an enterprise start down this path of increased efficiency?

The first step is understanding some key requirements of a long-lasting, high-value infrastructure.  This starts with looking at how to solve some key current and future requirements -- for instance, eliminating service disruptions, managing to service levels and accelerating time to deployment.

 

To ensure workload mobility and eliminate service disruptions, global storage virtualization, like that within the Hitachi Storage Virtualization Operating System, can create multi-datacenter, active/active computing environments. That capability is a “game-changer”: it keeps the environment always on despite system or site issues.

 

To better manage to service levels, infrastructure management tools need a deeper understanding of application environments, like that provided by UCP Director and Hitachi Command Suite, so that top-tier applications get top-tier performance.
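
 

As a rough illustration of what “managing to service levels” means in practice, the sketch below maps applications to invented service levels. The tier names, thresholds and assignments are made up for the example; they are not values from UCP Director or Hitachi Command Suite:

    # Hypothetical service-level catalog: each tier names the storage
    # behavior the management layer should enforce for applications in it.
    SERVICE_LEVELS = {
        "platinum": {"max_latency_ms": 1,  "replication": "sync",  "snapshots_per_day": 24},
        "gold":     {"max_latency_ms": 5,  "replication": "async", "snapshots_per_day": 8},
        "silver":   {"max_latency_ms": 20, "replication": "none",  "snapshots_per_day": 1},
    }

    # Top-tier applications are assigned the top tier.
    APP_ASSIGNMENTS = {
        "erp-database": "platinum",
        "web-frontend": "gold",
        "dev-sandbox":  "silver",
    }

    def policy_for(app):
        """Return the storage policy the infrastructure should apply to an app."""
        return SERVICE_LEVELS[APP_ASSIGNMENTS[app]]

    print(policy_for("erp-database"))
    # {'max_latency_ms': 1, 'replication': 'sync', 'snapshots_per_day': 24}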

 

And to speed time to value, look to intelligent deployment with pre-tested and pre-validated solutions, like what Hitachi offers within our Unified Compute Platform. Of course, all of this is useless unless applications have unfettered access to information, which is why the environment needs a foundation of scalable, virtualized and highly available storage like what HDS offers with the new Virtual Storage Platform G1000.

 

Together, these solutions provide an environment that is always available, always automated and always agile, and that serves as the foundation for your Continuous Cloud Infrastructure.

 

To learn more about converged infrastructure solutions, read our white paper, “The Data Center of 2015”, available at http://www.hds.com/assets/pdf/create-the-data-center-of-2015-whitepaper.pdf?WT.ac=us_cal3_dcwp.
