Will the Upcoming Version of HDI Support Multipart Uploads (MPU)?

Document created by Amy Townsend on Jun 26, 2017. Last modified by STIWARI on Dec 6, 2017.
Version 6

Yes.  With the launch of HDI v6.4, HDI gains the ability to use the multipart upload (MPU) API offered with HCP v8.0 software.  Historically, Hitachi Data Ingestor (HDI) has used only HCP REST API calls.


MPU is supported only for non-content-sharing use cases. MPU does not apply to roaming home directory, ROCS, or RWCS configurations.


The MPU API is an extension to the HCP HS3 protocol.  A namespace can ingest (PUT) using one protocol and retrieve (GET) with another, so HDI can use HS3/MPU for very large files while requiring no changes to the way it retrieves stubbed files. MPU allows client applications to break objects into chunks, but there are caveats; for example, uploads are limited to a maximum of 10,000 parts. For this reason, HDI engineers devised a simple algorithm that grows the chunk size as a function of the original file size:


Original File Size                          Chunk Size
≦ 50GB                                      5MB
≦ 100GB                                     10MB
≦ 200GB                                     20MB
2^n × 25GB < file size < 2^n × 50GB         2^n × 5MB


So suppose you have a 5TB file (the largest single object size permitted by S3).  In this case we choose n = 7 to satisfy the inequality: 2^7 × 25GB = 3200GB < 5TB < 2^7 × 50GB = 6400GB. Since 2^7 = 128, each part will be 5MB × 128 = 640MB, and the 5TB file uploads in 8192 parts, comfortably under the 10,000-part limit.
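The chunk-size rule above can be sketched as a small function. This is an illustrative reconstruction, not HDI's actual implementation; the function name `chunk_size` and the use of binary (1024-based) units are assumptions.

```python
# Illustrative sketch of the chunk-size selection described above.
# NOT HDI's actual code; names and 1024-based units are assumptions.
import math

MB = 1024 ** 2
GB = 1024 ** 3

def chunk_size(file_size: int) -> int:
    """Return the MPU part size (bytes) for a file of file_size bytes."""
    if file_size <= 50 * GB:
        return 5 * MB
    if file_size <= 100 * GB:
        return 10 * MB
    if file_size <= 200 * GB:
        return 20 * MB
    # General rule: 2^n * 25GB < file size < 2^n * 50GB  ->  2^n * 5MB.
    # Starting from n = 3 (2^3 * 25GB = 200GB), grow n until the file fits.
    n = 3
    while file_size > (2 ** n) * 50 * GB:
        n += 1
    return (2 ** n) * 5 * MB

# Worked example from the text: a 5TB file selects n = 7.
size = 5 * 1024 ** 4               # 5TB in bytes
part = chunk_size(size)            # 2^7 * 5MB = 640MB
parts = math.ceil(size / part)     # 8192 parts, under the 10,000 cap
```

Note that the part count grows with file size only until the next size bracket doubles the chunk size, which keeps every upload below the 10,000-part ceiling.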
