
When is bigger better?

Blog Post created by Nick Winkworth on Jan 20, 2015

If you’ve read even a few of the same articles I have, you will know there’s a lot of buzz today about hyperscale and hyperconverged computing - the idea that many small, atomic building blocks working together can do the work of a single large system. This approach has long been the domain of web giants like Google and Facebook, but it is now seen as an emerging trend for the rest of us (and one that Hitachi Data Systems is working on too - watch for announcements through the year). Like any “hype”, though, we must be careful not to assume that it’s the answer to every problem.


Hyperscale is all about “scaling out” …so is there still a place for “scaling up”? IBM certainly seems to think so, having just announced the most powerful system ever built. But if you are not among the handful who can afford that (and if you are, check this out), why would you need anything bigger than a typical two-socket server (…or rather, a room full of them)?


The answer becomes clear when you look below the headlines. Away from the glitz and glamor is a world of practical “brown bag” computing – the stuff that makes the wheels of industry and commerce turn. It may not be sexy to process payrolls, store email, manage inventory or analyze millions of sales records, but without these functions most companies couldn’t do their jobs. These “mission-critical” or “business-critical” applications are the backbone of our economy.


Many of these applications have long legacies - some were originally designed for mainframes or large proprietary UNIX machines, while in other cases the code may be hard to maintain, impossible to virtualize or too expensive to rewrite. They may have huge databases or support thousands of users …and on top of all this, they absolutely cannot fail.


Under pressure to lower operational costs and become part of the new world of connected computing, these applications are starting to move to x86-based hardware – but they need hardware that can scale UP, to many cores of CPU and terabytes of main memory, and that offers very high reliability and availability (characteristics that typical commodity x86 hardware is not best known for). That’s why the release, today, of the new CB2500 blade server from Hitachi Data Systems is such great news!


Hitachi CB2500


The CB2500 is the latest in a long heritage of enterprise-class blade servers from Hitachi. It offers not only Hitachi’s legendary reliability, but also a host of high-availability features, such as redundant power, fans, management processors, IO and more.


In addition, it supports the CB520X blade. With dual Intel Xeon E7 processors supplying up to 30 cores and 1.5TB of RAM, it’s a powerful beast by itself – but two of these blades can be connected to create a single four-socket machine with 3TB of RAM, or four can be connected to create an eight-socket machine with 120 cores and 6TB of RAM. (That’s with 32GB DIMMs, by the way; 64GB DIMMs can be supported later when they become available…) The CB2500 chassis can accommodate up to two of these eight-socket configurations, offering the ultimate in in-chassis redundancy for high availability.
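For the numerically inclined, here is a quick back-of-the-envelope sketch (in Python, purely illustrative) of how those figures scale as blades are joined. The per-socket values are inferred from the numbers quoted above - 15 cores and 24 DIMM slots per socket - rather than taken from official Hitachi documentation.

# Back-of-the-envelope scaling for CB520X blade configurations.
# Assumptions inferred from the figures above (not official specs):
#   - 15 cores per Xeon E7 socket (30 cores per dual-socket blade)
#   - 24 DIMM slots per socket (1.5TB per blade with 32GB DIMMs)
CORES_PER_SOCKET = 15
DIMMS_PER_SOCKET = 24

def config(blades, dimm_gb=32):
    sockets = blades * 2
    cores = sockets * CORES_PER_SOCKET
    ram_tb = sockets * DIMMS_PER_SOCKET * dimm_gb / 1024
    return f"{blades} blade(s): {sockets} sockets, {cores} cores, {ram_tb:g}TB RAM"

for blades in (1, 2, 4):
    print(config(blades))        # 30/60/120 cores, 1.5/3/6TB with 32GB DIMMs
print(config(4, dimm_gb=64))     # 12TB RAM once 64GB DIMMs become available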



CB520X Blade


With in-place capacity scaling from two to eight sockets, massive IO through a combination of shared fabric switches and 28 direct-connect PCIe cards, plus that very large memory capacity, the CB2500 makes an ideal foundation for those mission-critical applications. (That’s why, over the coming months, you’ll see several Unified Compute Platform (UCP) solutions announced based on this system.)


So while new technologies and new paradigms will eventually allow us to do things we have as yet barely dreamed of, unglamorous core business needs will continue to demand the basic values of reliability and scale. Here, at least, the ability to grow big still matters.
