Hitachi Cloud Services - Content Archiving and s3fs

Document created by Leland Sindt on Dec 18, 2015. Last modified by Leland Sindt on Mar 23, 2016.

Using s3fs with Hitachi Cloud Services - Content Archiving

 

Hitachi Cloud Services - Content Archiving offers an Amazon S3 compatible REST interface that can be used with s3fs.


s3fs Installation

Download, compile, and install s3fs following the s3fs-fuse documentation.

 

s3fs Usage

 

s3fs <namespace> /mount/point -o nocopyapi -o use_path_request_style -o nomultipart -o sigv2 -o url=https://<tenant>.content.<site>.cloud.hds.com -o passwd_file=/path/to/passwordfile

<namespace>

The Namespace within your <tenant> to be mounted via s3fs.

 

/mount/point

The local mount point.

 

<tenant>

The Tenant containing the <namespace> to be mounted via s3fs.

<site>

The site hosting the <tenant> to be mounted via s3fs.

Examples: us-az1, us-nj1

 

/path/to/passwordfile

The path to the password file.
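
Putting the parameters together, the command line can be assembled from its parts as shown below. The tenant (acme), site (us-az1), namespace (archive01), mount point, and password-file path are hypothetical example values, not defaults; substitute your own.

```shell
# All values below are hypothetical examples -- substitute your own.
namespace="archive01"                               # Namespace to mount
tenant="acme"                                       # Tenant containing the namespace
site="us-az1"                                       # Site hosting the tenant
url="https://$tenant.content.$site.cloud.hds.com"   # Resulting endpoint URL
# Print the assembled command for review before actually mounting.
echo "s3fs $namespace /mnt/$namespace -o nocopyapi -o use_path_request_style -o nomultipart -o sigv2 -o url=$url -o passwd_file=/etc/passwd-s3fs"
```

Echoing the assembled command first is a convenient way to double-check the endpoint URL before running it for real.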

 

Troubleshooting

 

Adding the `-d -d -f` options causes s3fs to run in the foreground instead of as a daemon and to print helpful debug information.
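
For example, taking the command from the Usage section above and adding the flags in front of the options gives a foreground debug run (press Ctrl-C to stop it):

```shell
s3fs <namespace> /mount/point -d -d -f -o nocopyapi -o use_path_request_style -o nomultipart -o sigv2 -o url=https://<tenant>.content.<site>.cloud.hds.com -o passwd_file=/path/to/passwordfile
```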

Creating a password file

The password file must be readable and writable only by its owner (permissions 600) and must contain the base64-encoded username and the MD5 sum of the password, separated by a colon (':').

 

The following commands can be used to generate a password file:

 

# Prompt for the output path and credentials.
read -p "Password File Path: " pathtopass
read -p "Username: " username
read -s -p "Password: " password; echo
# Create the file with owner-only permissions before writing the secret.
touch "$pathtopass"
chmod 600 "$pathtopass"
# Write base64(username):md5(password).
echo "$(echo -n "$username" | base64):$(echo -n "$password" | md5sum | awk '{print $1}')" > "$pathtopass"
 

Notes

 

As of this writing, there is a bug that will cause s3fs to fail if the <namespace> and <tenant> are the same.

 

s3fs has been tested with basic cp, mv, chmod, chown, and rsync operations. Everything works, but some operations can be very expensive in terms of the total number of HEAD, PUT, GET, and DELETE requests issued.

 

s3fs should be used thoughtfully; it is best suited to low-activity, archival data sets where traditional file-access operations are a requirement.

 

Additional helpful information:

 

s3fs limitations

s3fs FAQ

