Record Breaker! Hitachi servers power the largest TPC-H benchmark ever published!

Blog Post created by Nick Winkworth on Oct 31, 2013

The TPC-H benchmark is designed to simulate a real-world business intelligence scenario for a typical Big Data application. It measures the performance of a complete system under the stress of intensive, complex queries (far more complex than typical OLTP workloads) run against large volumes of data. This growing application area lets companies scour vast repositories of customer and market information to uncover new markets, opportunities and cost-saving measures, a capability that is becoming essential in our data-intensive world.
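
For a concrete sense of what these queries look like, here is a minimal sketch modeled on the well-known TPC-H Q1 "pricing summary report": a single statement that scans a fact table, filters by ship date, and computes several aggregates per group. The one-row SQLite table below is a toy stand-in for illustration only; the real benchmark runs against the official schema and data generator.

```python
# Illustrative sketch in the style of TPC-H Q1 -- not the official kit.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE lineitem (
        l_returnflag    TEXT,
        l_linestatus    TEXT,
        l_quantity      REAL,
        l_extendedprice REAL,
        l_discount      REAL,
        l_tax           REAL,
        l_shipdate      TEXT
    )
""")
# A single sample row; the benchmark's lineitem table holds billions.
conn.execute(
    "INSERT INTO lineitem VALUES ('N', 'O', 17, 21168.23, 0.04, 0.02, '1996-03-13')"
)

# Q1-style aggregation: scan, filter, group, and compute several
# aggregates in one pass -- far heavier than an OLTP point lookup.
rows = conn.execute("""
    SELECT l_returnflag,
           l_linestatus,
           SUM(l_quantity)                                       AS sum_qty,
           SUM(l_extendedprice)                                  AS sum_base_price,
           SUM(l_extendedprice * (1 - l_discount))               AS sum_disc_price,
           SUM(l_extendedprice * (1 - l_discount) * (1 + l_tax)) AS sum_charge,
           AVG(l_quantity)                                       AS avg_qty,
           COUNT(*)                                              AS count_order
    FROM lineitem
    WHERE l_shipdate <= date('1998-12-01', '-90 days')
    GROUP BY l_returnflag, l_linestatus
    ORDER BY l_returnflag, l_linestatus
""").fetchall()
print(rows)
```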

TPC-H results are grouped by database size, referred to as “Scale Factor”, and the Transaction Processing Performance Council (TPC) stresses that only results within the same Scale Factor category can be compared.
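
In code form, that comparability rule might look like the sketch below; the systems and QphH figures are invented placeholders, not published results.

```python
from collections import defaultdict

# Placeholder results: (system, scale factor in GB, QphH metric).
# Invented numbers for illustration -- not published TPC-H results.
results = [
    ("System A", 1_000, 120_000),
    ("System B", 1_000, 95_000),
    ("System C", 100_000, 82_000),
]

# Bucket by Scale Factor first; rankings are only meaningful
# within a single bucket, per the TPC's comparability rule.
by_scale = defaultdict(list)
for system, sf, qph in results:
    by_scale[sf].append((system, qph))

for sf, entries in sorted(by_scale.items()):
    leader = max(entries, key=lambda e: e[1])
    print(f"SF {sf:,} GB leader: {leader[0]} at {leader[1]:,} QphH")
```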

Now, thanks to Hitachi in Japan, there is a new Scale Factor category, based on the largest database size ever benchmarked for TPC-H: 100,000 GB (100 TB).

So what’s the powerhouse behind this Big Data behemoth? It’s Hitachi’s Compute Blade 2000 server (known as “BladeSymphony” inside Japan), and just four high-capacity (8-socket) Xeon E7-8800 based blades were needed, running Red Hat Enterprise Linux 6.2. The CB2000 blades used for this benchmark have the unique ability to be combined, in SMP mode, into a single large system image with huge memory. They can also support up to 16 PCIe slots per blade via an expansion chassis (again, a feature unique to Hitachi), with the bandwidth to make effective use of them. It was this combination of capabilities that the Hitachi team leveraged to obtain this impressive result.
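
A quick back-of-envelope tally of the figures above (my arithmetic from the numbers in this post, not from the official disclosure report):

```python
# Aggregate totals implied by the configuration described above.
blades = 4
sockets_per_blade = 8        # 8-socket Xeon E7-8800 blades
pcie_slots_per_blade = 16    # via Hitachi's expansion chassis

print("Total CPU sockets:", blades * sockets_per_blade)            # 32
print("Total PCIe slots: up to", blades * pcie_slots_per_blade)    # 64
```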

The data itself was handled by sixteen Hitachi Unified Storage (HUS) 150 arrays.

Here’s a great article about this accomplishment, published on betanews.com.

All the gory details of the benchmark can be found on the TPC site, of course.
