Eliminating Legacy Backup with HDIM (1)

Document created by Tony Cerqueira on Aug 14, 2013

Backups hit systems hard, and they hit VMs and networks even harder. File system scans, trawls, and client polling are not pretty: they impose a heavy I/O, CPU, and memory cost on the backup client. Once the backup set is finally created, all that data hits the network, where it plays havoc with precious bandwidth. Eventually, the backup sets from multiple client systems and VMs land on a target disk system, where all that data must be written to disk and then deduplicated. That means lots of crunching, churning, and rewriting of data, and a huge I/O, CPU, and memory cost on the target systems. Is it any wonder host-based backups often don't finish before they are scheduled to start again?

  • The obvious question is: Why do people do it this way?
  • The common answer is: That’s the way it’s done.


Fortunately, not anymore. Hitachi Data Instance Manager (HDIM) uses a number of new host-based data protection technologies and techniques that deliver huge efficiency gains over the "legacy backup" methods most of our competitors use. Compared to legacy backup, HDIM can typically cut bandwidth load and client I/O processing by 95% or more.


Restore times can be drastically improved because every restore point is essentially a "synthetic full" that requires no rehydration from deduplication. Simply select, point, and restore.
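To make the "no rehydration" idea concrete, here is a minimal sketch, assuming a hypothetical repository layout in which every restore point already holds a complete, browsable view of the protected data; the paths and names are illustrative only, not HDIM's actual on-disk format.

```python
# Minimal sketch of the "synthetic full" idea, not HDIM's implementation:
# each restore point keeps a complete view of the data it covers, so a
# restore is a direct copy with no incremental chain to replay and no
# deduplicated blocks to rehydrate. Paths and names are hypothetical.
import shutil
from pathlib import Path

# Hypothetical repository: one directory per restore point, each a full view.
REPOSITORY = Path("/hdim-repo/restore-points")

def restore(point: str, source_file: str, destination: Path) -> None:
    """Select a restore point, point at the file, restore it."""
    full_view = REPOSITORY / point / source_file
    destination.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(full_view, destination)  # single copy, no reassembly step

# Example: pull one spreadsheet back from last night's restore point.
# restore("2013-08-13T22:00", "accounting/q2-forecast.xlsx",
#         Path("/restore/q2-forecast.xlsx"))
```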


[Image: Legacy.GIF]

Data Classification can be the difference between restoring just the filet and restoring the whole cow.


Unlike conventional backup systems, HDIM's Changed Byte Transfer technology monitors data changes in real time, at the byte level, and transfers them immediately (or caches them if the network is down). More importantly, HDIM has a unique ability to select very specific data sets. You may want the whole disk. You may only want your SQL data. Or you may only want Excel files that belong to your Accounting group and are more than 7 days old. Whatever you want, you simply drag and drop in the HDIM Data Classification tool and adjust your policy.
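As a rough illustration of the kind of selection a Data Classification policy expresses (in HDIM the selection is done by drag and drop, not in code), here is a minimal Python sketch, assuming a Unix-like host; the share path, group name, and age threshold are hypothetical.

```python
# Illustrative sketch only: selects Excel files owned by the accounting group
# that have not been modified for 7+ days. Paths and group name are hypothetical.
import grp
import time
from pathlib import Path

SOURCE_ROOT = Path("/data/shares")   # hypothetical share to protect
GROUP_NAME = "accounting"            # hypothetical owning group
MIN_AGE_DAYS = 7
EXCEL_SUFFIXES = {".xls", ".xlsx"}

def classify(root: Path):
    """Yield files matching the example policy."""
    cutoff = time.time() - MIN_AGE_DAYS * 86400
    try:
        gid = grp.getgrnam(GROUP_NAME).gr_gid
    except KeyError:
        return  # group does not exist on this host
    for path in root.rglob("*"):
        if not path.is_file() or path.suffix.lower() not in EXCEL_SUFFIXES:
            continue
        st = path.stat()
        if st.st_gid == gid and st.st_mtime < cutoff:
            yield path

if __name__ == "__main__":
    for match in classify(SOURCE_ROOT):
        print(match)
```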


So instead of a 95% load reduction on client, target, and bandwidth, HDIM with Data Classification can deliver something more like a 99.999% reduction, depending on what you want in your specific recovery points. And if you want to run 10 different protection policies, scheduled (batch) or real-time (continuous), you can mix them to your heart's content.
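The sketch below illustrates how a handful of mixed batch and continuous policies might be described; the policy names, fields, and schedules are hypothetical stand-ins for what an administrator would define in the HDIM console.

```python
# Illustrative sketch only: hypothetical policy definitions mixing scheduled
# (batch) and real-time (continuous) protection over classified data sets.
from dataclasses import dataclass

@dataclass
class ProtectionPolicy:
    name: str
    scope: str          # which classified data set the policy covers
    mode: str           # "batch" (scheduled) or "continuous" (real-time)
    schedule: str = ""  # cron-style window, only meaningful for batch mode

policies = [
    ProtectionPolicy("sql-cdp", scope="SQL Server data files", mode="continuous"),
    ProtectionPolicy("accounting-weekly",
                     scope="Accounting Excel files older than 7 days",
                     mode="batch", schedule="0 1 * * SUN"),
    ProtectionPolicy("whole-disk", scope="All volumes",
                     mode="batch", schedule="0 22 * * *"),
]

for p in policies:
    window = f" ({p.schedule})" if p.mode == "batch" else ""
    print(f"{p.name}: {p.mode}{window} -> {p.scope}")
```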


Data reduction on systems and networks is only one benefit of Data Classification, but it is a good example of how to intelligently tune protection strategies to requirements such as Backup, Continuous Data Protection (CDP), Replication (to a live file system), Email Archive (direct to HCP), File Versioning, and now File Archive to HCP as well. All of these are unified, interchangeable functions within HDIM.
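For instance, a classification-driven plan might map different data classes to the functions listed above; the data classes and targets in this sketch are hypothetical examples, not a prescribed configuration.

```python
# Illustrative sketch only: hypothetical mapping of classified data sets to
# the unified HDIM protection functions named in the article.
protection_map = {
    "SQL Server databases":     {"function": "CDP",             "target": "local repository"},
    "Exchange mailboxes":       {"function": "Email Archive",   "target": "HCP"},
    "Departmental file shares": {"function": "Backup",          "target": "local repository"},
    "Engineering documents":    {"function": "File Versioning", "target": "local repository"},
    "Inactive project files":   {"function": "File Archive",    "target": "HCP"},
    "Branch office data":       {"function": "Replication",     "target": "live file system at DR site"},
}

for data_class, policy in protection_map.items():
    print(f"{data_class}: {policy['function']} -> {policy['target']}")
```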


If you’ve decided you want the filet instead of the whole side of beef, try downloading HDIM and setting up a 45-minute install and tutorial with one of our experts at HDIM-sales@hds.com.
