
Traditional agent-based backup and recovery solutions can dramatically impact the security, performance and total cost of ownership of virtualized environments. As organizations expand their use of virtualization with hyper-converged infrastructure such as VMware vSAN, they need to closely examine whether their data protection strategy supports efficient, fast, secure backups that won’t tax storage, network, budget, or computing resources. As data grows, the need for more frequent data protection and a variety of other challenges have forced administrators to look for alternatives to traditional backups.

 

Backup Challenges

Initially, most backup administrators chose to back up virtual machines by deploying backup agents to each individual virtual machine. Ultimately, however, this approach proved to be inefficient at best. As virtual machines proliferated, managing large numbers of backup agents became challenging. Never mind the fact that, at the time, many backup products were licensed on a per-agent basis. Resource contention also became a huge issue since running multiple, parallel virtual machine backups can exert a significant load on a host server and the underlying storage. Traditional backup and recovery strategies are not adequate to deliver the kind of granular recovery demanded by today’s businesses. Point solutions only further complicate matters, by not safeguarding against local or site failures, while increasing licensing, training and management costs.

 

Business benefit and Solution Value Propositions

Hitachi Data Instance Director (HDID) is the solution used to protect Hitachi Unified Compute Platform HC V240 (UCP HC V240) in a hyper-converged infrastructure. The solution focuses on the VMware vStorage API for Data Protection (VMware VADP) backup option for software-defined storage. Data Instance Director protects a VMware vSphere environment as a 4-node chassis data solution, with options for replicating data outside the chassis.

Hitachi Data Instance Director provides business-defined data protection so you can modernize, simplify and unify your operational recovery, disaster recovery, and long-term retention operations. HDID provides storage-based protection of the VMware vSphere environment.

 

Data Instance Director with VMware vStorage API for Data Protection provides the following:

 

  • Agentless backup using the VMware native API
  • Incremental backup that provides backup window reduction
  • Easy to implement and maintain for a virtualization environment
  • Easy to replicate backup data to other destinations or outside of the chassis

 

Logical Design

The figure shows the high-level infrastructure for this solution.

 

Below are the use cases and results:

 

Use Case 1 — Measure the backup window and storage usage for the VMware VADP backup using Hitachi Data Instance Director on a VMware vSAN datastore.

Objective: Deploy the eight virtual machines' DB VMDKs evenly across two VMware ESXi hosts with VMware vSAN datastores. The workload runs for 36 hours during the backup test. Take the measurements with the quiesce option both enabled and disabled. The test covers an initial full backup and a later incremental backup.

Test Results (a quick check of these figures follows the table):

  • Initial full backup: backup time 52 min, storage used 1920 GB
  • Incremental backup with quiesce ON: backup time 4 min 15 sec, storage used 35.02 GB
  • Incremental backup with quiesce OFF: backup time 2 min 25 sec, storage used 34.9 GB

Use Case 2 — Create a cloned virtual machine from the Hitachi Data Instance Director backup.

Objective: Restore a virtual machine after taking a Hitachi Data Instance Director backup. Measure the duration of the restore operation.

Test Result:

  • Restore backup with HDID: restore time 22 min 15 sec, storage used 213 GB
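To put the incremental results in context against the initial full backup, here is a quick check using only the figures reported above (a minimal sketch, nothing more):

```python
full_gb, full_min = 1920, 52        # initial full backup: 1920 GB in 52 minutes
incr_gb, incr_min = 35.02, 4.25     # incremental, quiesce ON: 35.02 GB in 4 min 15 sec

print(f"Incremental stored {incr_gb / full_gb:.1%} of the full backup footprint")
print(f"Backup window shrank from {full_min} min to {incr_min:g} min "
      f"(roughly {full_min / incr_min:.0f}x shorter)")
```

That works out to roughly 1.8% of the full backup footprint and a backup window around 12 times shorter.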

 

Conclusion

With Hitachi Data Instance Director, you can achieve broader data protection options in a VMware virtualized environment. With VMware VADP Changed Block Tracking (CBT), the backup window for incremental backups was relatively short and optimized.

 

  • Eliminate multi-hour backups without affecting performance
  • Simplify complex workflows and reduce operational and capital costs with automated copy data management
  • Consolidate your data protection and storage tasks
  • Get one-stop data protection and management

 

Sincere thanks to Jose Perez, Jeff Chen, Hossein Heidarian and Michael Nakamura for their vital contributions in making this tech note possible.

 

Please click here to get the tech note, Protect Hitachi Unified Compute Platform HC V240 with VMware vSphere and Hitachi Data Instance Director.


You’ll recall from my last blog that I volunteered to give a presentation to an organisation in London and found myself having signed up to deliver an evening lecture to the Institution of Engineering and Technology on the subject of cloud. I had managed to pull some material together and coerce a colleague into sharing some of the load by applying an equal degree of vagueness in the description!

 

The Event….

So we had a story, I had a willing partner to help share the challenge, and we had overcome the anticipation and were as ready as we could be! The presentation was polished during the day in between client meetings and we headed to the venue for the evening event.

 

The building where the lecture was to be given wasn’t intimidating at all, nor was all the signage hanging in the entrance hall in anticipation.

 

Picture1.png Picture2.png

 

 

As if the pressure couldn’t have been any greater the venue we were to be using to give our talk was none other than the Alan Turing Lecture Theatre, named after arguably the founding father of modern computing. The registered attendees numbered 100-150, there was to be tea and coffee on arrival followed by a drinks and nibbles reception afterwards with the night concluding around 9PM.

 

Picture3.png

 

We quickly set up, dumped our bags and then headed to the nearest watering hole for a sherbet and lemonade as a steadier in preparation for the event! On our return we kicked off and were introduced on stage by the event organiser. Surprisingly (for me), the audience seemed to be very aware of Cloud technologies and the Cloud field in general, so I was sincerely hoping they would be able to get something out of the event.

 

Sylvain and I delivered our presentation, which was well received. The audience listened intently and made notes. We covered the HEC value proposition, the key differences between Public and Private Cloud, and the fact that our HEC solution offers the public cloud consumption experience of self-service and pay-per-use with the security and latency benefits of retaining IT on premise in a client's data centre. We covered our SLA-driven approach to selling, our pricing being more competitive than a Public Cloud alternative, and having a holistic solution to address a changing market.

 

Following the presentation, we took some fantastic questions from the audience which were very balanced and somewhat different to what we had heard before due to the diversity of the audience; people were very keen to understand our IoT story as well as our approach to things like machine learning algorithms. The questions would have continued beyond the allowed time, however they were stopped by the organisers to allow us to retire to the drinks reception.

 

Picture5.png Picture6.png

 

 

The aftermath……

Picture8.png

Now that the event was over we could relax, and we managed to meet many of the members and people from the audience. The feedback was good and they enjoyed the lively debate; some areas of particular interest were our views on edge-based data analytics and machine learning integration with Cloud IT. I found these discussions very enlightening, hearing opinions on the industry from outsiders who have a different (and often very well informed) perspective on what we are doing.

 

I managed to team up with a small group including a Dutchman involved in 3D printing of industrial wind turbine blades (who kindly liberated a bottle of wine for us from the main table) and a retired gentleman who was very well read on the subject of cloud computing following a 60-year career in IT. I avoided mentioning the fact that I was born half way through his career, but I think I got away with it.

 

In conclusion…..

Although I started this with "never volunteer for anything", that's not how I look back on the experience. Often we choose to do things squarely inside our comfort zone, but it's very fulfilling to step outside this now and again. We also tend to stick to the social and professional circles of our peers or of customers looking to buy what we have to offer. I found it particularly enlightening to hear the opinions of people with a really diverse set of backgrounds whom I would never ordinarily come into contact with. So I'd say in conclusion: take the time to do things you wouldn't ordinarily do and hear from people you wouldn't ordinarily expect to speak to – you'll be pleased you did.

 

 

Material…

IET Blog of the event

Presentation Slides

 

Neil Lewis

With memories of Sapper Featherstone, British Army - Royal Engineers circa 1946

Alright, this is the technical part, describing how to build the blueprint and what to configure in NSX to make it work as described in the overview. Let's get started, shall we?

 

Getting started

First things first: I have to create a list of requirements in order to master all the challenges such a micro DMZ concept brings. Let's see what we need:

  • NSX installed and ready to be used
    • Integrated with HEC
    • Security groups for Web and DB
    • Virtual wire for DB
    • Edge configured and ready for external traffic
    • DLR (Distributed Logical Router) configured and ready (OSPF, etc...)
    • Security Tags for DB and WEB server
  • Hitachi Enterprise Cloud
    • Linux blueprint / image to use for WEB and DB server
    • Software components to install such as Apache, MySQL, PHP5, etc...
    • Network reservation for on-demand DMZ (routed-on-demand-network) and the DB network (static)

OK - that should be it. In this part I will focus on the NSX config in the blueprint and the designer, assuming everything else is just fine and has been pre-configured and installed by our fine consulting folks. Just like a customer, I am eager to use it - not to install it.

Set up the NSX Tags and security policies

OK, I decided to start with the very important and yet super complex NSX integration...

Alright, you got me there, it is actually not that complex to integrate

 

First I created some NSX Security Tags. These can be used to identify VMs and run actions based on the found tags. They are also a smart way of dynamically adding VMs to security groups in NSX. In order to use them in the HEC blueprint canvas, the Tags need to pre-exist in NSX.

OK, got it, but where do you create these Tags in the first place?

 

Well, this is done in the NSX management in vCenter. To create custom security tags, follow these steps:

  1. Go to the home screen in vCenter and click on Networking and Security
  2. In the left hand side menu click on NSX Managers
  3. In the left hand side select your NSX Manager by clicking on it
  4. Click on the Manage tab
  5. Select the Security Tags button in the headline of the Manage tab
  6. Click on the New Security Tag symbol on the top left of the table to add a tag.

 

OK, I created the tags "HEC_DB" and "HEC_Web" and am ready for action. These tags are now usable on VMs for advanced processing.
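If you would rather script this than click through vCenter, the same tags can also be created over the NSX REST API. The snippet below is only a rough sketch: the endpoint path and XML payload are my recollection of the NSX-v 6.x security tag API, and the manager address and credentials are placeholders, so verify everything against the NSX API guide for your version before relying on it.

```python
import requests

NSX_MANAGER = "https://nsx-manager.example.local"   # placeholder address
TAG_XML = """<securityTag>
  <objectTypeName>SecurityTag</objectTypeName>
  <name>HEC_DB</name>
  <description>Tag for HEC database servers</description>
</securityTag>"""

# Assumed NSX-v endpoint for creating a security tag; check your API guide.
response = requests.post(
    f"{NSX_MANAGER}/api/2.0/services/securitytags/tag",
    auth=("admin", "secret"),                        # placeholder credentials
    headers={"Content-Type": "application/xml"},
    data=TAG_XML,
    verify=False,                                    # lab only: self-signed certificate
)
print(response.status_code, response.text)           # the new tag ID is returned on success
```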

Also, I created two security groups:

  • DbServer
  • WebServer

To create those, go to Networking and Security and click on Service Composer in the left-hand menu.
These security groups are later used to apply the firewall rules to. The Tags will be used to assign the VMs to their respective security groups (DB VM to DbServer, WEB VM to WebServer) after the VM deployment.

 

Screen Shot 2017-04-04 at 15.55.20.png

This means you are now able to enforce firewall rules on VMs whose IP address or subnet mask you might not even know, just by putting the VMs in NSX security groups.

Welcome to the power of the Service Composer in NSX!

 

After the creation of Tags and Groups in NSX

After the security groups have been created we have to set up the rules of engagement, ahem, I mean, the rules for communication between the WEB server and the DB server. Since the WEB server is exposed to the internet, we do not want it chatting to the DB server as it wishes. Therefore the communication between these two servers (WEB to DB) has to be limited as much as possible in order to keep security high! These sophisticated firewall rules are set in so-called Security Policies.
We can create a new Security Policy by just clicking on the Security Policies tab and selecting the Create Security Policy icon.
Now you can specify rules for interaction between Security Groups in NSX, or even from external sources (like the internet) to Security Groups.
In our case, we want the following rules to apply for a secure configuration:

  • The WEB server can access the DB server only to issue MySQL queries using specific MySQL ports
  • The internet can access the WEB server only by HTTP or HTTPS
  • All other actions from DB to WEB server are blocked
  • All other actions from WEB to DB server are blocked

Screen Shot 2017-04-04 at 16.07.15.png

 

Voilà: that should be it. Now VMs in the DB security group will only allow VMs in the WEB security group access via the MySQL port. All other access is blocked. For the WEB servers we are even stricter: from the perimeter firewall (aka the internet), only HTTP and HTTPS will be let through to the WEB server. The only server outside of the DMZ the WEB server can reach is the DB server, and that communication is only possible via the MySQL ports to initiate DB queries.
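To make that policy easy to eyeball, here is the intended rule set written out as plain data (purely illustrative, not an NSX export; the group names match the Service Composer groups above, and the ports are the usual defaults of 3306 for MySQL and 80/443 for HTTP/HTTPS):

```python
# Desired east-west and perimeter rules for the micro DMZ service, as data.
SECURITY_POLICY = [
    {"source": "WebServer", "destination": "DbServer",  "ports": [3306],      "action": "allow"},  # MySQL only
    {"source": "internet",  "destination": "WebServer", "ports": [80, 443],   "action": "allow"},  # HTTP/HTTPS only
    {"source": "DbServer",  "destination": "WebServer", "ports": "any",       "action": "block"},
    {"source": "WebServer", "destination": "DbServer",  "ports": "any other", "action": "block"},
]

for rule in SECURITY_POLICY:
    print(f'{rule["action"].upper():5} {rule["source"]} -> {rule["destination"]} ({rule["ports"]})')
```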

 

You might wonder how to enforce all of this without specifying a single subnet or IP address. Well, that is solved by the Security Tags. As soon as the VMs are assigned to the right policies in the Service Composer, the rules will be enforced on them, automagically!

 

Create the blueprint

Assuming everything else is just fine and has been configured correctly, we can now start building the actual application. So let's get started with the design. Given that I have already created some installable components, so-called Application Blueprints, I can start dragging and dropping my way to a versatile multi-tier web application.

 

Screen Shot 2017-04-04 at 15.12.59.png

 

I decided to have a DB server and a WEB server (shocking - isn't it?). In the design canvas I dragged the DB components, such as the MySQL installation as well as the FST_Industries_DB component, onto the DB server.

To do this, simply drag and drop the packages onto the VMs. The FST_Industries_DB component customises the DB to set up a tablespace and makes some other minor edits to prepare the DB server for use by the WEB server.
After doing that, I dragged Apache, PHP and the FST_Industries_Web component onto the WEB server.

Besides installing all the software assets, the FST_Industries_Web component then creates an on-demand web site which accesses the DB server via its fully qualified domain name (FQDN). HEC will now install these packages on the specified VMs. It is important to know that all this data is passed on as dynamic variables during the install (IP addresses, domain names, DB names, etc.); otherwise it would be fairly complex to install anything on demand.
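Purely to illustrate what "dynamic variables" means in practice, here is a minimal sketch of what a component install script could do with injected values. The variable names and file path below are hypothetical, not the actual HEC or FST_Industries properties:

```python
import os

# Hypothetical variable names; the platform would inject real values at deploy time.
db_fqdn = os.environ.get("DB_FQDN", "db01.example.local")
db_name = os.environ.get("DB_NAME", "fst_industries")
db_user = os.environ.get("DB_USER", "webapp")

# The web component can render its DB connection settings from the injected
# values, so nothing is hard-coded and the install works for any deployment.
with open("/tmp/fst_web_db.properties", "w") as cfg:
    cfg.write(f"db.url=mysql://{db_fqdn}:3306/{db_name}\n")
    cfg.write(f"db.user={db_user}\n")
```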

 

After the actual service design is done, we need to ensure that the VMs are tagged so they are auto-assigned to the respective security groups in NSX. To do this, you can drag the Tags directly into the canvas.

The Tags are shown in the picture right above each VM; a thin line represents their assignment to each of the VMs.

Just drop a tag somewhere; for the sake of a clean graphic I put it on top of each of the VMs. By clicking on the dragged-in security tag, the actual tag value can be assigned. You will see a list of possible NSX security tags; pick HEC_DB for one and HEC_Web for the other - done.

 

If you have just finished creating the Security Tags in NSX, give HEC a moment to pick them up. If they are not showing up after 15 minutes, it might be necessary to re-run the network and security inventory data collection task. You can find it under "Infrastructure -> Compute Resources -> Mouse over vSphere resources -> Data Collection". The Network and Security inventory is the second-to-last entry in the list. Select "Request Now" after creating the tags and wait for its completion. After this they will show up in the design canvas.

Now the tags need to be formally assigned to each of the VMs. This is done by clicking on the VM in the canvas and selecting the Security tab. In there you will see both tags available; just tick the one which applies:

  • HEC_DB if you selected the DB Server
  • HEC_Web if you selected the WEB server
  • Done!

 

You might wonder why both tags are always displayed in the security settings for the VM. This is because a VM can have multiple security tags - all tags dragged into the canvas will be shown. In our case it is important to make sure we do not select both tags on one VM, as this might shake up our well-thought-through security concept (however, it is easy to spot and fix).

Last but not least, both VMs need to be placed in an NSX network. For the DB VM, this network ("virtual wire" in NSX slang) needs to be set as an internal and protected network, since other DB servers might possibly run in there as well.

 

Defining the networks to use

For the WEB server, we want to create the DMZ on demand. That means this network is not pre-existent at the time of deployment.

To accomplish this, we need to define two different types of networks in HEC: an "External" network and a "Routed" network.

 

Do not get over-excited by the term "External"; in this case it refers to all networks that exist before the time of deploying a service. The "Routed" network is different: it is a purely logical construct which only comes to life at the time of deployment. It will be configured to form smaller networks and then place the newly created VMs into them.

Therefore its configuration might be a bit confusing at first. To configure the network profiles in HEC, go to Infrastructure -> Reservations -> Network Profiles and click on New to select either External or Routed.

The External one has to be pre-existing, which means it has to be defined in NSX before it can be added to HEC.

 

This means you have to create a new virtual wire in NSX prior to the selection in HEC.

The Routed one is more difficult, which is why I think it is worth going over its options quickly. In the form you will see the following fields:

 

Provide a valid name: DMZ_OnDemand

Description: DMZ network, created on demand each time for every deployment

External Network profile: Transport*

Subnet mask: 255.255.192.0**

Range subnet mask: 255.255.255.240***

Base IP: 172.30.50.1

 

OK, here we are in networking nirvana. What does all this mean? Just let me explain the asterisks real quick:

*: The transport network for your DLR. This is configured during NSX setup for external network access. Describing how to do this would be too much detail for this blog post. In our case it is named "Transport", but you can also name it Bob, Jon, or Fritzifratzi if that works better for your use case.

 

 

**: This is the subnet mask defining how many devices we want to put into the micro DMZs. In this case it is a /18 subnet mask, which gives us "only" 16,382 addresses. You could also go for a /16, which would give you 65,534, or a /14 for a whopping 262,142 addresses. But be careful: all these addresses are pre-calculated by HEC, which can be quite CPU intensive if you choose big ranges.

 

***: The subnet mask for the different small network areas. Basically it creates the "micro" networks based on the given subnet mask (255.255.192.0) and uses the /28 subnet mask (255.255.255.240) to create a net with 14 usable addresses.

This means HEC will now go ahead and create as many small subnets as possible within the provided big /18 (255.255.192.0) range. In my case it will create network chunks looking like this:

  • 172.30.50.1 - 172.30.50.14 (useable addresses)
  • 172.30.50.17 - 172.30.50.30
  • ...
  • 172.30.63.225 - 172.30.63.238
  • 172.30.63.241 - 172.30.63.254

 

Now you might wonder why there are small gaps between these address spaces. That is because only the 14 usable addresses are shown. For example, the first address is 172.30.50.1, the network address would be 172.30.50.0 and the broadcast address would be 172.30.50.15. So the entire network is actually 172.30.50.0 - 172.30.50.15. But given how networks work, the network address and the broadcast address can't be used for servers, leaving a total of 14 usable addresses. It is important to understand that principle in order to make the network chunks big enough for the number of servers to be in them.
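The same slicing can be reproduced with Python's standard ipaddress module. This is just a sketch to check the numbers: I am assuming the parent network implied by the base IP 172.30.50.1 and the /18 mask is 172.30.0.0/18, with allocation starting at the base IP.

```python
import ipaddress

big_net = ipaddress.ip_network("172.30.0.0/18")          # 255.255.192.0
base_ip = ipaddress.ip_address("172.30.50.0")            # base IP with host bits dropped

# Carve the /18 into /28 chunks (255.255.255.240) starting at the base IP.
chunks = [s for s in big_net.subnets(new_prefix=28) if s.network_address >= base_ip]
print(f"{len(chunks)} micro-DMZ chunks, 14 usable addresses each")

for subnet in chunks[:2] + chunks[-2:]:
    hosts = list(subnet.hosts())                         # excludes network + broadcast
    print(f"{subnet}: usable {hosts[0]} - {hosts[-1]}")
```

The output matches the chunks listed above, with the "gaps" being exactly the network and broadcast addresses of each /28.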

 

If all these network calculations, slicing and subnetting are creating the father of all headaches, don't give up! There are quite a few nice websites which do all the calculations mentioned here for you. One of these sites can be found here:

IP Calculator / IP Subnetting

 

What have we achieved so far?

Good - after all this hard work of clicking and brain-twisting network mask calculations, the setup is finally done.

We configured security tags and automatically assigned them to the right VMs. Firewall rules will ensure that only allowed protocols can be used for communication from one security group to another.

The VMs and their software get installed by HEC; once the tags are assigned and the VMs are installed, one is placed in a static network and the other in a routed network. The routed network will be sliced by a subnet algorithm to only allow 14 devices, so each WEB server will have its own DMZ.
After all that has been configured by HEC, the NSX security kicks in and our freshly deployed application will work as intended, only letting MySQL queries reach the DB server. Also, HTTP/HTTPS queries from the internet can only reach our WEB server running in its very own "private" DMZ. All of this is created for each and every new application being deployed.

 

To Summarize

Wow - after all this clicking and configuring and calculating we have a quite comprehensive blueprint, not only setting up a full service with a single mouse click, but also providing enterprise-grade IT security for each and every deployment.

Not only through the firewall and security capabilities of NSX, but also through the flexible, purpose-ready design of a micro DMZ per WEB server per service. This is an achievement which would be fairly difficult to reach without the capable technologies introduced by HEC.

 

If you want to see all this running, stay tuned for the next article in this series showing all of this working in our HEC Solution Centre environment which is located in the Netherlands in a wonderful small town called Zaltbommel...

Right, if Francois Zimmermann is in no mood for sharing his Heinz baked beans with the imminent threat of Doomsday, then fine, I will get my own tin of beans and get on with my survival strategy. This, you may recall from my previous blog, is about creating a multi-tier application blueprint using NSX with Hitachi Enterprise Cloud (HEC), which plays an important role in securing your data from potential everyday hackers, let alone those in a Doomsday threat. This is where I get into the technical detail, especially around micro-segmentation. More on that later.

 

Technical Alert!! If you are interested in a detailed explanation go to the “techies” part of this blog. If not, read on for the high level summary...

The “Micro” in Micro Segmentation

To create a more secure environment than a traditional DMZ, we would have to change the DMZ from a traditional, monolithic and predefined structure into one that is more flexible, agile and dynamic.

Micro-segmentation is a widely used term when it comes to network virtualization, but what does it actually mean? It stands for a way to control traffic from one instance to another, even if they are on the same network in the same IP subnet.

 

You can think of micro-segmentation as isolation of workloads on the same network. As an example, typical networks are like trains: you can easily move from one carriage to another within the same train. Micro-segmentation is more like cars on a highway. All are driving on the same highway in the same direction (more or less), but changing from one car to another while moving is almost impossible.

 

If you think of a network with a given IP subnet as a segment, typically every server within that segment can talk to every other server. So traditionally, you had to separate entire segments in order to prevent one server in segment A from talking to another server in segment B.

MicroSegmentation_Off.png

Servers in the same segment, like Server A1 and Server A2, can directly “talk” to each other.

 

Now, in the software-defined world this is all "snow from yesterday" (sorry – a famous Austrian saying which means: old news). With the new capabilities of dynamic firewalling and policy-based VM security profiles, we can achieve a similar outcome without putting the servers in two different networks.

In this case, the firewall acts as if it sits right in between the two servers, allowing only specific protocols to connect to the peer server. In some cases, communication to any peer server can be blocked entirely, which is often used in desktop environments.

A micro segmented network might look like this:

MicroSegmentation_On.png

Now in this case, Server A is only allowed to talk to Server B through a firewall, using specific ports to communicate. All other direct communication is prohibited. This makes management easier, since you can add a security layer even if servers are running in the same network. The big benefit is that this security layer can be managed centrally and applied on demand to any group of servers.

 

So what does all of this mean for our “Micro DMZ” project? Everything!

 

The first step is to set up one DMZ per service. A service might be any WEB server and DB server pair or similar. In a traditional datacentre you might use a static DMZ, place the web server there and then place the DB server in an internal network. But as described in part 1 of my blog series, there might be a more secure way of doing that.

And this is where "micro" comes into play. Instead of creating a big DMZ housing everything exposed to the internet, we are creating many small DMZs, one for each service. The service itself does not need to know anything about that, since the software-defined infrastructure takes care of setting all the rules and routes in order to work properly.

Tip: If you want to see all the techy details and get a crash course in subnet calculations (what was /24 again?), visit the technical part of this blog.

Now, when a new service is rolled out, it gets its very own DMZ and firewall rules. With the use of micro-segmentation within the DMZ, web servers cannot talk to each other, but they can talk to their DB server peers. This makes the DMZ itself more secure. Also, since each service has its own DMZ, a security breach will never affect other services; indeed, it might very well only affect the one web server experiencing the security flaw.

 

With this technology, you can limit the impact of a security breach from being catastrophic to being, at worst, slightly annoying.

 

So are we now in Lock Down?

In a Doomsday scenario, instead of the rebels rushing into my shelter and stealing and breaking my stuff, they just get a glimpse of my security fence. If they manage to break through that, they see…

 

…wait for it…

 

Another security fence

 

The use of multiple DMZs and micro-segmentation within those DMZs enhances the security layer significantly. Everything is managed from a central instance, so no micro-management (pun intended) for the micro-segmentation is needed. If we run through the technical part of this configuration and finish all of the step-by-step configuration items, we are nearly done reaching the final solution. If the configuration of the blueprint is completed successfully, everything should automatically unfold in our Hitachi Enterprise Cloud solution, again saving us a ton of time and effort for every new deployment of a service. Also, with every additional new service deployment the security is enhanced, not diminished!

 

Meanwhile, I’ve worked up an appetite so I need to crack into my stock of baked beans whilst the tests run. I’ll be back later with the results and take you through some seriously deep-dive technical actions which make the magic unfold and finally get us into secure lockdown.

 

I wonder how my buddy, ole Mr "Get your own beans", is getting on with his NSX shelter? Is it secure enough to protect his services from the latest ransomware madness?

Which leaves me to ask, "What are you doing to enhance your security for your new or existing services?"


Not a week goes by without an article that compares "web native applications" with "legacy applications".  The implication is that all innovation and competitive edge will come from well-behaved scale-out containerized apps.  But how true is this?

 

In the next 1-2 years companies will invest in the following scale-up, big-iron, 'fragile', non-cloud-ready technologies for specific use cases where they believe they can get a substantial business advantage:

  • Storage Class Memory to support in-memory computing - We already have customers who deploy technologies like SAP HANA to reduce the time it takes to roll up their forecasts and stock positions from days to minutes.  With the introduction of Storage Class Memory in the Skylake timeframe, ever-larger data sets will be able to take advantage of the extreme flexibility of in-memory computing, and businesses will leverage this to be able to interrogate and model market data and improve business instrumentation.
  • Specialized hardware for Artificial Intelligence - Many companies will start to look at deep learning technologies as a way of optimizing complex business problems and automating processes to drive competitive advantage. Machine learning algorithms can run on general purpose infrastructure but the learning speed for large data sets is typically constrained by the bandwidth between processing nodes.  Rather than trying to overcome these by changing the algorithms it will be faster to just deploy specialist hardware that is optimized for running Neural Networks (e.g. Intel Lake Crest).
  • NVMe and alternatives to Ethernet for low latency apps - Algorithmic trading and low latency transactional workloads will look to alternatives to commodity interconnects to provide the sort of marginal gains that they need to maintain advantage.
  • FPGA acceleration - When Intel bought Altera we started to talk about ways to move 'beyond Moore's law' for certain types of workload that needed to crunch a large amount of parallel data streams at wire speed.  For example, we believe this will be particularly relevant when looking at use cases like Stream Analytics - How do I efficiently aggregate and sort data from a bunch of continuous data streams from IoT, market or web sources?  How can I sort through all that data in-flight so that I can only retain what is useful and quickly identify items of interest in all the noise?  How can I raise events and actions against these in real time?

 

In my last post I spoke about the need to integrate the management of Mode One and Mode Two environments - this is required in order to be able to run existing core workloads AND also enable the delivery organization to make the  transition to DevOps practices.  Now we have another dimension: I can deliver innovation to my business by enabling rapid software development AND by enabling rapid adoption of specific "hardware assist" technologies that deliver a compelling competitive edge.

 

In order to solve both of these problem sets we have started to speak about the need to move beyond the Software Defined Data Center to a Programmable Data Center. This new paradigm aims to solve the problem of how you can consume both specialized hardware acceleration (for cutting-edge or scale-up workloads) and commodity infrastructure services (for well-behaved scale-out cloud-native apps).  When physical infrastructure services can be programmed as easily as virtual services then you are able to provide a real innovation platform – one that enables you to rapidly adopt these difficult, cutting edge technologies ahead of your competitors and get a real market advantage.

If, like most of us, you are time-poor, then get the "skinny" from this Infographic on how HDS has delivered tangible outcomes to SPAR via a Private Cloud Solution based on Hitachi Unified Compute Platform and VMware technology.

 

Short and to the point....

 

Enjoy!

Well, here’s your chance to get a quick view of our offering, which comes directly from HDS’ Centre of Excellence based in the Netherlands. Dylan Lange takes 5 minutes to show us around the V240F VMware environment, touching on data reduction, erasure coding, storage and space efficiency policies.

 

Simple and so easy.  Plus "No More SAN".

 

Take a look -  then share with those who should be in the know! ;-)

 


Many businesses are constrained by legacy IT infrastructure that is not well suited for VDI initiatives. Siloed data centers, composed of independent compute, storage, and networks with distinct administrative interfaces, are inherently inefficient, cumbersome, and costly. Each platform requires support, maintenance, licensing, power, and cooling — not to mention a set of dedicated resources capable of administrating and maintaining these elements. Rolling out a new application like VDI becomes a manually intensive, time-consuming proposition involving a number of different technology platforms, management interfaces, and operations teams. Expanding system capacity can take days or even weeks, and requires complex provisioning and administration. Troubleshooting problems and performing routine data backup, replication, and recovery tasks can be just as inefficient. While grappling with this complexity, organizations also need to address challenges that are unique to VDI.

VDI Challenges

1.  Difficulty in sizing VDI workloads upfront, due to the randomness and unpredictability of user behavior.

2.  Periodic spikes in demand, such as “login storms” and “boot storms”, that may significantly degrade performance if not properly handled.

3.  High cost of downtime in the event of an outage.

Business benefit and Solution Value Propositions

Hitachi UCP HC addresses each of these challenges by providing a scalable, building-block-style approach to deploying an infrastructure for VDI, offering the enterprise predictable costs, and delivering a high-performing desktop experience to end users. For VDI load generation in this solution, VDI performance has been captured for Task, Knowledge and Power users.

 

The reference architecture guide "VMware Horizon View 7 with UCP HC" is used to design a hyper-converged solution for VMware Horizon View 7 with Instant Clone on Hitachi Unified Compute Platform HC (UCP HC) for a VDI environment. It describes the performance of Microsoft® Windows 10 virtual desktops and Microsoft RDSH remote sessions on a 4-node UCP HC compute vSAN cluster with a mixture of Power workers, Knowledge workers and Task workers using the Instant Clone feature. The environment uses integrated servers, storage systems, and networking with storage software in a unified converged compute solution for a VDI environment. The 4-node UCP HC provides better performance and throughput with low latency.

 

The dedicated UCP HC nodes run ESXi 6.0 U2 with VMware vSAN 6.0 clusters using VMware Horizon View 7. This VDI environment solution uses Microsoft Windows 10 virtual desktops and Microsoft® Windows Server® 2012 R2 RDSH remote sessions.

 

This document is for the following audiences:

  • Corporate Desktop administrators
  • Storage administrators
  • IT help desk
  • IT professionals such as a Pre-sale solution team
  • Customer CIO

 

Logical Design

The figure shows the high-level infrastructure for this solution.

 

 

Two scenarios were taken into consideration to capture the performance results separately. First, a Windows 10 VDI pool of 250 VMs was deployed and the results were captured. Later this pool was erased and the UCP HC vSAN cluster was recreated to deploy a new RDS pool of 250 VMs and capture its performance results.

 

Use Case: Deduplication and Compression of the UCP HC vSAN Datastore

Objective: Use a deduplicated and compressed vSAN datastore for creating 250 virtual desktops.

Results: The dedup and compression ratio for 250 VMs is 4.19x and the saved space is 2.46 TB (before: 3.23 TB, after: 789.16 GB). A quick arithmetic check follows the table.

Use Case: Boot Storm

Objective: Scenario 1: boot storm for 250 Windows 10 virtual desktops. Scenario 2: boot storm for 250 Windows RDS machines.

Results:

Scenario 1: took 5 minutes to boot up
Boot IOPS peak: 10,000
Boot IOPS average: 6,000
Average read latency: 35 ms
Average write latency: 30 ms

Scenario 2: took 6 minutes to boot up
Boot IOPS peak: 13,000
Boot IOPS average: 5,678
Average read latency: 35 ms
Average write latency: 30 ms

Use Case: Login Storm

Objective: Log in to 250 virtual desktops using the Login VSI tool during a logon period of 1-48 minutes.

Results: The login storm duration was from 1-48 minutes, with percent utilization peaking at approximately 72% and 66% for the two scenarios respectively.

Use Case: Steady State

Objective: Generate workload using Login VSI for the Power, Knowledge and Task workload profiles in both scenarios, with various Microsoft applications.

Results: The steady state duration was from 48-50 minutes, with percent utilization peaking at approximately 63% and 75% for the two scenarios respectively.

Use Case: Login VSI Workload

Objective: The Login VSI workload profile is used to verify whether VSImax is reached for Power, Knowledge and Task users.

Results: VSImax is not reached for Power, Knowledge and Task users in either scenario.
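As a quick sanity check of the deduplication and compression row above (assuming the ratio was computed in binary units, 1 TB = 1024 GB):

```python
GB_PER_TB = 1024                         # binary units match the reported ratio
before_gb = 3.23 * GB_PER_TB             # capacity before dedup and compression
after_gb = 789.16                        # capacity after

print(f"Ratio: {before_gb / after_gb:.2f}x")                      # -> 4.19x
print(f"Saved: {(before_gb - after_gb) / GB_PER_TB:.2f} TB")      # -> 2.46 TB
```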

 

Conclusion

This architecture provides guidance to organizations implementing VMware Horizon 7 on UCP HC infrastructure, and describes tests performed by Hitachi Data Systems to validate and measure the operation and performance of the recommended solution, including third-party validated performance testing from Login VSI, the industry-standard benchmarking tool for virtualized workloads. Organizations are looking to VDI solutions like VMware Horizon to reduce software licensing, distribution and administration expenses, and to improve security and compliance. The market-leading hyper-converged infrastructure platform from Hitachi Data Systems helps to deliver the promised benefits of VDI, while overcoming many common challenges.

 

Hitachi UCP HC for VDI provides:

  • Simplified deployment for a hyper-converged Infrastructure.
  • Ability to start small and scale out in affordable increments—from pilot to production.
  • Highest density of desktops per node in the hyper-converged infrastructure category.
  • Independently validated, unmatched VDI performance for a superb end user experience.
  • Deployment of full-clone desktops with the same data efficiency as linked clones.
  • Enterprise-class data protection and resiliency

 

Sincere thanks to Jose Perez, Jeff Chen, Hossein Heidarian and Michael Nakamura for their vital contributions in making this paper possible.

 

Please click here to get the reference architecture guide, VMware Horizon View 7 with UCP HC.


Those of you who went to VMworld 2016 may have heard SPAR ICS' Michael Gstach, Enterprise Architect, in a breakout session with Valentin Hamburger, presenting on how SPAR is digitally transforming using HDS' private/hybrid cloud built on VMware technology. With this change SPAR has also transitioned to a managed services OPEX model provided by Hitachi True North Partner Axians in Austria.

 

Now we have the case study all wrapped up for you in a two-page document. Find out about the challenges, the solution and the outcome.

Well - I was just testing a new application blueprint in our solution center in the Netherlands when Francois Zimmermann threw me a blog about the Doomsday clock, saying that I shouldn't bother getting it to work, since there might not be a next quarter to enjoy it...

 

Well - while he was shopping for his favourite Heinz baked beans, I was thinking about some automation possibilities using our Hitachi Enterprise Cloud with VMware's NSX. If you now wonder how in the world I can think of automation if Doomsday is imminent, let me explain my slightly geeky line of thought...

 

If Doomsday ever comes, the chaos outside will be massive. There are likely to be de-militarized zones set up sooner or later to mark the boundaries between a secure community and the chaotic, dangerous outside world where gangs may roam and rule the empty lands.

 

Ok - this might be a bit of an over-dramatic and depressing thought, but in the digital world this is already happening.

 

The digital Doomsday world

Today it is quite dangerous to expose any of your servers straight to the internet. This is why it is not only best practice but actually crucial to have a so-called DMZ (Demilitarized Zone) where the internet-exposed servers of any datacenter live. This DMZ is typically secured by an external firewall to the internet and an internal one to the inner datacenter, which is the secure community. So if you want to avoid being in the news as the "world's biggest botnet provider", it is essential that web servers are secured and only accessible for their intended use case.

 

However, there is one problem with this DMZ concept: the more servers there are accessing the internet, the bigger the DMZ needs to be. One of the difficulties with this concept is that if somebody (a gang) is able to break into the DMZ and compromise one of the servers, they basically have access to the entire DMZ. From there it might be only one step to entering the datacenter core network. Or they simply enslave all the DMZ servers to run DoS attacks for the highest bidder against any given target.

 

Now, this is not so much a sci-fi scenario. We hear about hacks, data leaks and security breaches nearly every week in the digital world. Since this world is getting more and more complex, managing a DMZ has grown from an occasional task to a highly critical full-time job. If something goes wrong in the DMZ security, the Doomsday clock for any company might instantly strike 5 minutes PAST midnight.

 

Ok, so what could the Programmable Data Center do for me to change that? Everything!

 

The Programmable Data Center to the rescue

Let's imagine that each service which needs to be directly exposed to the internet has its own DMZ. In fact, it is a micro DMZ that is only valid for the web server (i.e. internet-exposed) components of the service, while the database (DB) server or the data components will still live in the secure internal network.

MicroDMZs.png

 

Let's further assume that these micro DMZs are created only when a web application is deployed. Plus, they are not able to access other DMZs. In fact, they do not even know of other DMZs (no routing tables, etc.).

 

That would elevate the security of a DMZ tremendously, since the so-called attack surface is much smaller. If one of the micro DMZs is compromised, only the few servers in there can be taken over, and not all the servers as in a traditional DMZ.

 

Ok, got it but how would someone do that? The effort in creating all these micro DMZs on demand, managing and controlling them would be monumental compared to a single DMZ. Who should take care of all of that? Well, you might already have guessed the answer:

Hitachi Enterprise Cloud (HEC) with NSX can do that for you.

 

This got me thinking, so I started to create a blueprint in HEC which does exactly that. It includes a Web Server component which will be exposed to the internet and therefore lives in a micro DMZ. Additionally, it has a DB server component which will live in the internal datacenter DB network. The Web Server can access the DB server only through a firewall to get valid data for its queries, while the DB server can't access the DMZ at all. Also, the Web Server will get its own external IP address in its own external subnet, which constitutes its micro DMZ.

 

I was keen to get started modelling this in NSX and then creating a blueprint using this functionality. However, I was still concerned that Doomsday was rapidly approaching and I needed to check in with Francois Zimmermann and see if he might share some of those favourite Heinz baked beans with me. Should I survive Doomsday, you will catch my next part on what techniques I use in HEC with NSX in my blog. Hang in there....


An old saying goes that when joining the army there are two golden rules which you should always stick to (I should say at this stage I have never been in the forces - but I won’t let that stop me). So in every situation you find yourself in, these rules, if applied, should keep you out of unnecessary trouble - I think with a view of staying alive and trying to steal any sense of enjoyment possible in the field.

 

Rule 1 : Never Volunteer for Anything – things are rarely as good as they are portrayed,

Rule 2 : Never go back – anywhere you have been will never be as good the second time.

 

I recently experienced the consequences of failing to adhere to rule #1 first hand back in January this year and I thought I’d share my experience with you (doing my bit in sharing these golden rules for the good of mankind).

 

The volunteering…..

Before Christmas, in the hectic run-up to the end of the year where we were all busy with closing out Q3 and Christmas parties etc., I received an email asking if I could help out with a "quick Cloud presentation to a few people in London - usual stuff". This was to be done at some point in early January. I thought, "Well… easy enough, the usual Hitachi Enterprise Cloud sales story using our most recent slide deck, which can be altered a bit to suit the audience". Plus January seemed so far away. So I accepted - volunteered almost, then thought no more about it.

Picture1.png

Some weeks passed, then I started getting e-mails from the event organisers asking for a title and description for the “January event” as well as a bio, photographs and marketing material. At first I thought they were perhaps just a bit keen so I responded with the standard sales material titles and descriptions with some additional marketing material to match their enthusiasm.

 

The realisation…..

Almost immediately, the organisers came back to me with an alternative title and some guidance as to what they were expecting. This would be a 2-3 hour event for the Institution of Engineering and Technology (IET) on the Embankment in London, where my Lecture (?) on the subject "A Cloud Enabled World: Growth opportunities for businesses and individuals" would be the sole event, attended by 100+ professional members of the Institution for the purpose of general interest and towards a professional qualification. The lecture couldn't be a sales presentation, as this would breach their professional standards, and therefore needed to be impartial and informative.

 

So it looked like I had fallen into giving a lecture at an esteemed professional technical institution on the future opportunities that Cloud computing offers, to support people's learning and the achievement of professional accreditation.

 

Naturally, at this stage and only 2 weeks out, I was in too deep and there was no backing out now. I had to make the best of it. So another trick came to mind: attract further volunteers to spread the load. On the day of the lecture I was scheduled to be at a client with my colleague Sylvain Gaugry. Being a French romantic, he was bound to fall for a well-told story. This cunning plan worked out well as he agreed to come along and co-present this lecture sales presentation!!

 

The Preparation….

Now, although we have a great story in Hitachi Enterprise Cloud, it was obvious that we needed some material, as clearly the standard Hitachi sales-only story wouldn't have worked since any sales material was against the rules. On researching the IET, I found it was formed as the Institution of Electrical Engineers and therefore was historically more engineering/electrical-focused than computing or technology focused. Given the event was intended to be informative, almost educational (if I was capable of delivering such a feat), I thought I would try and draw parallels between Cloud computing and the world of mass-produced electricity, as essentially the utilisation of IT as a utility is what Cloud and the Hitachi Enterprise Cloud offering is actually all about.

 

A little digging online led me to discover that the building next door to where we were going to be presenting was the first public building in the world to be lit by electric light (the Savoy Theatre). As it happens, I lived just about 20 miles from the first house in the world to be lit by electric light (Cragside). I had our story…..

 

The mainstay is essentially that Cloud in IT is the same revolution as we saw in the 20th century with electricity production. In the early days, electricity generation was dedicated to the property or business, with limited capacity, limited scalability and high costs of operation. This is the same as enterprise IT, where capacity and capability are often tied to the owned equipment in the business's datacentre;

 

IET A Cloud Enabled World - Growth opportunities for businesses and individuals FINAL.jpg Slide06.jpg

 

These limitations were overcome with a revolution in electricity delivery, production and sales, moving from dedicated small-scale generation to commoditised mass production, supported by a national delivery network. This is exactly the same as Cloud in IT – powered by the low-cost delivery network – the Internet;

 

Slide08.jpg

So the parallels are simple: the 20th century electricity revolution is the same as we are seeing today in the world of IT, with IT services being moved from the datacentre into the Cloud, enabled by large-scale Public Cloud services underpinned by the Internet as a reliable remote delivery mechanism, comparable to the National Grid in electricity speak. The driving force behind this in the IT world is the growth of data and the expectation of real-time analytics and just-in-time decision support. This requires data sources to be centralised (or at least linked) and very powerful processing resources to be made available to support this data growth. Hitachi has some great credibility in this space, namely in the world of IoT and real-time analytics capability underpinned by Pentaho, so it wasn't too far a departure from what we are known for.

 

The second part of my story was essentially about the Cloud offering smaller businesses/enterprises the same quality of IT which was once reserved for multinationals. This offers unique opportunities for start-ups to threaten the bread and butter of the major players in various verticals. We used the example of Kodak to show that businesses are not immune from threats to their markets. Organisations need to embrace these new technologies if they are to survive the tidal wave of change approaching them;

 

Slide09.jpg Slide12.jpg

 

So, it felt like we had a plan……. look out for part 2 to find out if the preparation paid off!!!

Whoop! Whoop! See our winner below: Steve Lewis, CTO of UK and Ireland, who has been taking an active role in blogging and commenting in the VMware space on the HDS Community.

 

HDS Community WinnerPhoto2.jpg

 

We set out a competition for our community members to simply comment on 10 blogs with the chance of winning a $200 Amazon Voucher. The winning comment needed to be insightful, inspiring and eye-opening. Our team chose Steve’s winning comment on Paul Meehan’s blog “Timing is Everything….Hitachi Enterprise Cloud brings Public Cloud to YOUR back yard”. Why not take a quick look yourself.

 

We asked our winner why he spends time contributing to the HDS Community space and this is what he had to say:

 

“The great thing about HDS Community is that it takes contributions from the collective experiences of a lot of really bright people around HDS, the people who can really tell our story because of their own personal knowledge and experiences.”

 

Congratulations Steve!!

 

Apparently he is a drummer? True or false – what do you think?


 

Here are two companies that both make flying machines.  The first one can develop and launch a new generation of products every six months.  The second one takes more than ten years to design and build a new generation of products.   So what?

  • The startup operates in an industry that is very lightly regulated.  They can "fail fast, fail often" and cater to a small group of early adopters and enthusiasts who are OK living with variable service levels, as they typically only expect to get a few months' life out of the product before they move on to the next upgrade.
  • The enterprise operates in one of the most highly regulated industries in the world.  Product failures lead to loss of life and massive legal liabilities.  Their customers expect to get good return on the massive investment they make for decades and loss of service has a significant financial impact and destroys brand value.

You wouldn't expect the business model for the enterprise to look like the startup's - so why would we expect their infrastructure and IT services to look the same?  Why do we keep seeing opinion pieces that compare the needs of Digital Enterprises to those of Netflix / Uber / AirBnb?

 

Let's take a look at what makes a digital Enterprise fundamentally different to a Startup:

1. Market Opportunity: "Be First To Market" VS "Build on Brand"

  • Being first to market is key for the digital startup - they need to define a new market segment and are often most successful while governments haven't quite worked out how to regulate them.  They start with a small customer base of early adopters who are happy to experiment while they get the offering right and get ready for explosive growth.
  • By contrast, the digital enterprise needs to look out for new market segments but they are also trying to unlock new value out of their existing customer base.  They often have a lot of intellectual property and assets that they need to optimize in order to improve ROI.  So a lot of the market opportunity will focus on Return on Data - how can they derive new business insights from their data assets?  Enterprises recognize that data is their key asset and understand that they need to manage it appropriately in order to meet their regulatory requirements.  And when a digital enterprise launches a new channel they will often have a very large number of loyal customers using it immediately - so they need to get the service right immediately in order to protect their brand.

 

2. Application Landscape: One Big App VS Multi-Channel

  • The digital startup typically has just one or two big web-native apps that are built for 100% webscale IT.  Their IT workforce has grown up around software design skills.
  • By contrast, the digital enterprise needs to run thousands of diverse applications (scale up and scale out) and their IT platform needs to be able to support both traditional workloads and web-native apps.  If they spend all their time modernizing those legacy apps to fit webscale IT then they will never be able to focus on launching additional channels and delivering new customer experiences.  When they launch a new channel it will often have interfaces to existing systems, so they need to make sure that they can deliver legacy and new workloads efficiently in order to deliver a successful platform for innovation.  The enterprise IT workforce often needs to undergo organizational change in order to support digital transformation, as they were not built around strong software design and devops skills.

 

The net of all of this is that the CIO in the digital enterprise is right at the middle of this digital transformation and needs to put in place effective programs that deliver:

  1. Infrastructure Modernization - improve the efficiency of delivery for both traditional and cloud-ready workloads and build a platform for innovation that can rapidly launch new channels.
  2. Digital Workplace - enable new ways of working to empower the digital workforce to access any data set in a compliant way from any application and any device.  Centralize discontinuous data (e.g. data that can only be accessed by one application, organizational unit or device) to eliminate siloes and enable business process change.
  3. Business Insights - empower the business to add new value to their existing customer data and get better end to end visibility over the business and their market in order to discover new opportunities.

 

In my next blog I will look at the first of these practices (Infrastructure Modernization) and focus on the platform building blocks that are required to support the diverse range of needs of the Digital Enterprise:

Application_Data_Services.jpg

 

Some related content:


Yesterday I saw that the Bulletin of the Atomic Scientists have updated their assessment of global threats here.

 

My initial thoughts:

  • Why am I not at the shops right now panic buying essentials?
  • Why am I bothering to put money into a pension when I should be spanking it all away on holidays?
  • Then I dashed off a mail to Valentin Hamburger about a tricky cloud platform problem that we've been working on: 'I wouldn't worry about that design too much!  Next quarter?  What next quarter?'.  What a great way to build team morale!

 

In a roundabout way this thinking is relevant for how CIOs need to think about IT disruption.  There is a tension between investing in the sort of platform that can deliver the current generation of applications and the recognition that the next paradigm shift can kill off large parts of today's application landscape.  And occasionally you end up in a counterproductive spiral where you only 'panic buy' for the short-term needs of Mode 1 Systems of Record because you are so focused on getting ready for cloud-ready Mode 2 Systems of Innovation.  This means that you don't focus enough on driving cost and complexity out of Mode 1 (which still accounts for the majority of IT spend) in order to release investment and headcount to focus more on systems of innovation.

 

The Digital Enterprises that we work with have a complex mix of traditional scale-up workloads and modern, cloud-ready scale-out workloads.  Their application life-cycles are typically 5-7 years and so there is a very long tail of core systems that will continue to provide backend services to new digital channels "after the paradigm shift" and this leads to two big trends in terms of buying behaviour for hybrid cloud:

  1. They look to integrate the management of Mode One and Mode Two environments.  A modern cloud strategy will enable the delivery organization to rapidly transition to new DevOps paradigms (Mode 2) and also enable autonomic management and efficient delivery for existing core workloads.  There is no cliff edge between the legacy and new world.
  2. Large-scale, full-fat outsourcing arrangements are getting less and less common - if the next paradigm shift is just around the corner then you don't want to get locked into a multi-year organization change and be stuck in last year's inflexible operating model.  Instead, enterprises are looking to offload risk for delivery of much more specific outcomes - so a vendor or SI may own infrastructure assets, provide on-premise burst and take charge of delivery of platform services against strong SLAs, but they won't outsource the IT organization.  These more flexible contracts are the future of on-premise cloud.

 

Now I have to get off and finish stuffing my supermarket trolley with Heinz beans(*) and bottled water.

(* Other brands of beans are available, but it is the end of the world so I figure I will spoil myself a bit)

In search of some expert advice on virtualization and the Software Defined Data Center, TechTarget recently caught up with HDS’ specialist for VMware, Valentin Hamburger, for some sound recommendations. Find out more here.

 
