Dec 10, 2013


Trend 7: Reduction of Backup through Archive, Copies, and Snaps

One of the trends I identified last year was the explosion of data replication, citing an IDC study which claimed that the creation and management of multiple copies exceeded the cost of primary storage in many environments. IDC has posted a running clock that tracks the worldwide cost of data copies since January 2013. As of the date I wrote this post, it stood at over $44,128,957,900.


I identified backup as one of the biggest creators of copies, along with all the snapshots, remote copies, and versioning associated with business continuity. I also predicted that the volume of replicated data would blast to new levels in 2013 unless IT focused on the reduction of copies through tools like active archives. Since then, many analysts have begun to talk about “backup data sprawl”. These analysts provide many useful tips on how to reduce this sprawl. One of the most common recommendations is deduplication.

While deduplication helps to reduce the volume of data that is consumed by backup data sprawl, it does not address the reduction of backup itself. Most of the data that we back up today is data that does not change, yet we back it up over and over again, and if we use dedupe, we dedupe the same data over and over again. Backup can be reduced if we archive the inactive portion of data to Hitachi Content Platform, where it is single-instanced, compressed, encrypted, and copied once for protection.
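The single-instancing idea above can be sketched in a few lines. This is a generic, content-addressed illustration, not a description of how Hitachi Content Platform is implemented internally: each unique blob is stored once, however many logical files reference it.

```python
import hashlib

class SingleInstanceStore:
    """Toy single-instance store: identical content is kept once,
    no matter how many logical paths point at it."""

    def __init__(self):
        self._blobs = {}   # content hash -> bytes (one physical copy)
        self._refs = {}    # logical path  -> content hash

    def put(self, path: str, data: bytes) -> None:
        digest = hashlib.sha256(data).hexdigest()
        # Store the bytes only if this content has never been seen.
        self._blobs.setdefault(digest, data)
        self._refs[path] = digest

    def get(self, path: str) -> bytes:
        return self._blobs[self._refs[path]]

    def physical_bytes(self) -> int:
        return sum(len(b) for b in self._blobs.values())

store = SingleInstanceStore()
report = b"Q3 results ..." * 1000
store.put("/finance/q3.doc", report)
store.put("/backup/q3.doc", report)   # a duplicate copy of the same file
# Two logical files, but only one physical copy is consumed.
assert store.physical_bytes() == len(report)
assert store.get("/backup/q3.doc") == report
```

This is also why archiving beats repeated dedupe: the archive stores and protects the unchanged data once, instead of re-reading and re-hashing it in every backup cycle.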

Backup is important, but recovery is critical. As the number of clients and virtual servers increases, the cost of an outage increases, and the need for lower recovery time and recovery point objectives becomes even more important. This is driving a need for technologies like continuous data protection (CDP) and application-aware snaps and clones for faster point-in-time recovery. While backups are still needed for system outages, most recoveries can be done via in-system snaps and clones. These technologies also make it possible to do backups without stopping the application when integrated with management tools, and help to reduce the recovery point as well as the recovery time objectives. Hitachi Thin Image can rapidly create up to 1,024 snapshots per LUN and 32,000 per array for backup and application testing purposes.
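The reason a snapshot can be created near-instantly, while a full backup cannot, is copy-on-write: taking the snapshot copies no data, and an old block is preserved only when it is later overwritten. A minimal sketch of the mechanism (an assumption-laden toy model, not how Hitachi Thin Image is implemented):

```python
class CowVolume:
    """Toy copy-on-write volume: a snapshot starts empty and only
    accumulates the pre-write contents of blocks that change."""

    def __init__(self, blocks):
        self.blocks = list(blocks)
        self.snapshots = []          # each snapshot: {block index: old data}

    def snapshot(self):
        # Creating a snapshot is near-instant: no data is copied here.
        snap = {}
        self.snapshots.append(snap)
        return snap

    def write(self, index, data):
        # Copy-on-write: preserve the old block in any snapshot that
        # has not yet recorded it, then overwrite the live block.
        for snap in self.snapshots:
            snap.setdefault(index, self.blocks[index])
        self.blocks[index] = data

    def read_snapshot(self, snap, index):
        # A snapshot reads its preserved copy if the block changed,
        # otherwise the still-unmodified live block.
        return snap.get(index, self.blocks[index])

vol = CowVolume(["A", "B", "C"])
snap = vol.snapshot()                 # instant point-in-time marker
vol.write(1, "B2")                    # old "B" is copied aside first
assert vol.blocks == ["A", "B2", "C"]
assert vol.read_snapshot(snap, 1) == "B"   # point-in-time view intact
```

Because each snapshot consumes space only for changed blocks, an array can afford thousands of them per LUN, which is what makes frequent point-in-time recovery points practical.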

In addition to tools like archive, snaps and clones to reduce backup sprawl, we also need tools to manage their lifecycle. Hitachi offers HAPRO, an application protector for Microsoft applications, and Storage Viewer for backup environments that include Symantec (Veritas) NetBackup and Backup Exec, IBM® Tivoli® Storage Manager and EMC NetWorker. Hitachi Data Protection Suite, powered by CommVault®, is a tightly integrated blend of snapshot, replication and persistent copies that are application-aware, secure, deduplicated, managed and accessible through a single, unified platform. Additionally, Hitachi Data Instance Manager is a unified, easy-to-use, policy- and workflow-based solution that centralizes multiple data protection capabilities for file, SQL and Microsoft® Exchange in Microsoft and Linux mid-tier environments.

You can expect to see more focus on archive, snaps and clones and data lifecycle management tools in 2014, to address the explosion of backup data.

See the full list of my top ten trends for 2014 here.