Hitachi and the Massive Storage and Systems Technology Conference

Blog Post created by Matthew O'Keefe on Feb 20, 2015

Together with a great program committee, I've been working on the agenda for the Massive Storage and Systems Technology conference, co-sponsored by Santa Clara University and the IEEE along with many prominent vendors, including Hitachi.

 

Since the mid-1970s, when the Massive Storage and Systems Technology conference was founded by the USA's leading national labs in defense, intelligence, space, and weather research, the conference has been a venue for massive-scale storage architects, researchers, and vendors in HPC, web, and enterprise computing to discuss how to build and secure the largest (top 5%) storage systems in the world.

 

The conference theme this year is "Security in Large-Scale Storage Systems". Henry Newman, a well-known advisor to large data centers (see, for example, his blog post on last year's conference: 2014 IEEE Mass Storage Conference Highlights - Henry Newman's Blog), is leading a panel on how to secure very large-scale storage systems with techniques that go beyond standard network security.

 

Another panel will discuss using disks (versus tape) as media for long-term archives, looking at the following questions:

 

(1) Power consumption: spinning disks eat up a lot of power relative to tape. Can disk spin-down reduce power enough to make disks effective as a long-term archive medium? What are the costs and challenges associated with disk spin-up and spin-down?

 

(2) Space: disk volumetric efficiency is lower than tape's. Can techniques such as compression and very high disks-per-unit-volume ratios (à la Copan and Backblaze) make up some of the difference? Does this really matter in today's large data centers?

 

(3) Migration: how will data be migrated from older-generation, lower-density drives to newer, higher-density drives? As the disk bandwidth-to-capacity ratio continues to decline, will disks run up against the same low-bandwidth-to-capacity issues tape faced? (See the back-of-envelope sketch after this list.)

 

(4) Tape challenges: what tape issues are causing users to reconsider it as the long-term archive medium of choice? Management complexity? Too few vendors building the technology? Migration challenges for large archives due to low bandwidth-to-capacity ratios? Or is tape fine for another decade or more?
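
To put rough numbers on questions (1) and (3), here is a minimal back-of-envelope sketch in Python. Every constant in it (drive capacity, idle and standby power, sustained bandwidth, archive size) is an illustrative assumption of mine, not a measurement or a vendor specification:

    # Back-of-envelope arithmetic for the disk-vs-tape archive questions.
    # All constants below are illustrative assumptions (roughly 2015-era
    # nearline hardware), not measurements or vendor specifications.

    DISK_CAPACITY_TB = 6.0       # assumed nearline drive capacity
    DISK_IDLE_POWER_W = 7.0      # assumed power while spinning idle
    DISK_STANDBY_POWER_W = 0.8   # assumed power when spun down
    DISK_BANDWIDTH_MBS = 180.0   # assumed sustained sequential bandwidth
    ARCHIVE_PB = 100.0           # archive size to reason about

    drives = ARCHIVE_PB * 1000.0 / DISK_CAPACITY_TB

    # Question (1): a shelved tape draws essentially zero watts, so the
    # comparison that matters is spinning versus spun-down disk.
    spinning_kw = drives * DISK_IDLE_POWER_W / 1000.0
    standby_kw = drives * DISK_STANDBY_POWER_W / 1000.0
    print("%.0f drives for a %.0f PB archive" % (drives, ARCHIVE_PB))
    print("always spinning: %.0f kW; spun down: %.0f kW" % (spinning_kw, standby_kw))

    # Question (3): the bandwidth-to-capacity problem. The time to read one
    # full drive at its sustained rate bounds how fast a migration can go.
    hours_per_drive = DISK_CAPACITY_TB * 1e6 / DISK_BANDWIDTH_MBS / 3600.0
    print("full read of one drive: %.1f hours" % hours_per_drive)

Under these assumed numbers, a 100 PB archive needs roughly 17,000 drives; spinning them continuously draws on the order of 115 kW versus about 13 kW spun down, and reading a single drive end to end takes over nine hours, which is why a declining bandwidth-to-capacity ratio makes whole-archive migration so painful.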

 

In addition to sponsoring the conference, Hitachi has presented its research in optical and especially holographic storage at the conference over the past two years, where it excited a lot of interest. The conference's Advisory Board includes prominent Hitachi customers and currently consists of these leading system architects:

 

  • Randy Olinger, UnitedHealth Group (designed, maintains, and grows UHG's several-hundred-petabyte storage infrastructure)

 

  • Gary Grider, Los Alamos National Laboratory (lead designer for LANL's computing and storage infrastructure and for the USA's exascale project)

 

  • Justin Stottlemyer, Intuit (leading web storage architect; designed Shutterfly's erasure-coding-based object store, scalable to 100+ exabytes, before moving to Intuit)

 

  • Keith Gray, BP (runs the largest oil and gas data center in North America)

 

  • Michael Declerck, Amazon (DynamoDB engineering)

 

We also expect more panels, talks, and tutorials to be added as the conference agenda is completed. You can find information about the conference at the following web page:

 

http://storageconference.us/

 

If you want to participate in the conference in any way (e.g., give a talk, serve on or organize a panel, bring a customer, submit a research paper, or attend a tutorial), contact me or another program committee member. The conference is scheduled for June 1-5 and will be hosted by Santa Clara University on its beautiful campus.
