
Hu's Place


There is a good probability that you or someone you know has diabetes. The World Health Organization estimates that 8.8 percent of the adult population worldwide has diabetes, a figure projected to rise to 9.9 percent by 2045. Type-2 diabetes is the most prevalent form of the disease and affects more people as the population ages. Today, one in every four Americans 65 years or older has Type-2 diabetes. The spread of Western lifestyles and diets to developing countries has also resulted in a substantial increase.


Word Wide Diabetes.jpg

Diabetes is a chronic, incurable disease that occurs when the body doesn't produce any or enough insulin, leading to an excess of sugar in the blood. Diabetes can cause serious health complications including heart disease, blindness, kidney failure, and lower-extremity amputations and is the seventh leading cause of death in the United States. Diabetes can be controlled by medication, a healthy diet, and exercise.


The problem with medication is that there are many ways to treat the disease with different combinations of drugs. Some medications slow the breakdown of starches and sugars, others decrease the amount of sugar your liver makes, some affect rhythms in your body and prevent insulin resistance, others help the body make insulin, still others control how much insulin your body uses, some prevent the kidneys from holding on to glucose, and others help fat cells use insulin better. New medications are being developed continuously as the population of diabetics increases. Diabetics often need to take additional medications for conditions that commonly accompany diabetes, such as heart disease, high cholesterol, retinopathy, and high blood pressure. The efficacy of the drugs changes with age and other physical factors, side effects differ depending on the individual’s situation, and the drugs can be expensive. The effectiveness of treatment is measured every three months by a blood test called A1C, which reflects the average blood glucose level over the previous three months. An A1C reading of 7.0% indicates that the blood glucose level, and therefore the diabetes, is under control; however, 7.0% is an ideal reading, and higher readings may be acceptable depending on the individual. Up to now, prescribing medication has usually been a trial-and-error approach, and according to the World Journal of Diabetes, more than half of diabetes patients fail to achieve their treatment targets. Selecting and monitoring the most effective medication, or combination of medications, that is also safe, economical, and well tolerated by the patient is often hit or miss.
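
Since A1C anchors the whole treatment loop, it helps to know that an A1C percentage maps to an estimated average glucose level through a widely cited linear approximation (from the ADAG study). A minimal sketch of that conversion, for illustration only:

```python
def a1c_to_avg_glucose_mg_dl(a1c_percent: float) -> float:
    """Estimated average glucose (eAG) in mg/dL from an A1C percentage.

    Uses the commonly cited linear approximation eAG = 28.7 * A1C - 46.7
    (ADAG study). These are estimates, not a substitute for lab values.
    """
    return 28.7 * a1c_percent - 46.7


if __name__ == "__main__":
    for a1c in (6.0, 7.0, 8.0):
        print(f"A1C {a1c:.1f}% ~ {a1c_to_avg_glucose_mg_dl(a1c):.0f} mg/dL average glucose")
```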


On March 12, Hitachi and University of Utah Health, a leading institution in research on electronic health records and interoperable clinical information systems, announced the joint development of a decision support system that helps clinicians and patients choose pharmaceutical options for treating type-2 diabetes. The system uses machine learning to predict the probability that a given medication regimen will achieve targeted results, and by integrating with electronic health records it can personalize that guidance to an individual's characteristics.
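
The announcement does not disclose the underlying models, but conceptually the prediction step resembles a supervised classifier trained on historical outcomes. A minimal sketch under that assumption, where the feature columns, the `ehr.csv` extract, and the model choice are all hypothetical illustrations rather than details of the actual system:

```python
# Sketch: predicting whether a medication regimen reaches an A1C target.
# Feature names, the input file, and the model choice are illustrative
# assumptions, not details of the Hitachi / University of Utah Health system.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("ehr.csv")  # hypothetical de-identified EHR extract
features = ["age", "baseline_a1c", "bmi", "egfr", "regimen_code"]
X, y = df[features], df["reached_a1c_target"]  # 1 = A1C goal met at 3 months

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Probability that each patient/regimen combination achieves the target.
print(model.predict_proba(X_test[:5])[:, 1])
```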


Diabetes ML Model.png

The system compares medication regimens side by side, predicting efficacy, risk of side effects, and cost in a way that is easy for clinicians and patients to understand.


Diabetes Dashboard.png

Using machine learning combined with an individual's health records will increase the probability of selecting the right combination of medications to help that individual reach their targeted goals for controlling diabetes. Think how this could be applied to other treatments, like chemotherapy for cancer. If you know anyone with diabetes, please forward this post to them so they can understand what is possible when machine learning is applied to the control of this disease.

Last week Hitachi Vantara Labs announced Machine Learning Model Management to accelerate model deployment and reduce business risk. This innovation provides machine learning orchestration to help data scientists monitor, test, retrain and redeploy supervised models in production. These new tools can be used in a data pipeline built in Pentaho to help improve business outcomes and reduce risk by making it easier to update models in response to continual change. Improved transparency gives people inside organizations better insights and confidence in their algorithms. Hitachi Vantara Labs is making machine learning model management available as a plug-in through the Pentaho Marketplace.


Machine learning is the study and construction of algorithms that can “learn” from and make predictions on data by building a model from sample inputs, without being explicitly programmed. These algorithms and models become a key competitive advantage, and potentially a risk. Once a model is in production, it must be monitored, tested and retrained continually in response to changing conditions, then redeployed. Today this work involves considerable manual effort and is often done infrequently. When this happens, prediction accuracy deteriorates and impacts the profitability of data-driven businesses.


David Menninger, SVP & Research Director, Hitachi Vantara Research, said, “According to our research, two-thirds of organizations do not have an automated process to seamlessly update their predictive analytics models. As a result, less than one-quarter of machine learning models are updated daily, approximately one-third are updated weekly and just over half are updated monthly. Out-of-date models can create significant risk to organizations.”


So, what is Machine Learning Model Management and where does it fit in the analytic process?

Machine Learning Model Management.png

Machine Learning Model Management recognizes that machine learning models need to be updated periodically as the underlying distribution of data changes and model predictions become less accurate over time. The four steps to Machine Learning Model Management are Monitor, Evaluate, Compare, and Rebuild, as shown in the diagram above. Each step implements a concept called “Champion/Challenger”: two or more models are compared against each other and the model that performs best is promoted. Each model may be trained differently or use different algorithms, but all run against the same data. These four steps form a continuous process that can be run on a scheduled basis to reduce the manual effort of rebuilding models.
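
A champion/challenger promotion step can be expressed in a few lines. The sketch below is my own illustration of the concept, not Pentaho's implementation: every candidate is scored on the same holdout data, and the best performer is promoted.

```python
# Champion/challenger sketch: evaluate all candidate models on the same holdout
# data and promote whichever scores best. Illustrative only, not Pentaho's code.
from sklearn.metrics import roc_auc_score

def promote_champion(models: dict, X_holdout, y_holdout):
    """Return the name of the best-scoring model plus all scores."""
    scores = {
        name: roc_auc_score(y_holdout, m.predict_proba(X_holdout)[:, 1])
        for name, m in models.items()
    }
    champion = max(scores, key=scores.get)
    return champion, scores

# Example usage (models trained elsewhere):
# models = {"champion_rf": rf_model, "challenger_xgb": xgb_model}
# champion, scores = promote_champion(models, X_holdout, y_holdout)
```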


Hitachi Vantara’s implementation of Machine Learning Model Management is part of the Pentaho data flow, which makes machine learning easier by combining it with Pentaho’s data integration tool. In the diagram above, the preparation of data may take 80% of the time needed to implement a model when preparation relies on coding or scripting by a developer. Pentaho Data Integration (PDI) empowers data analysts to prepare the data they need in a self-service fashion without waiting on IT. An easy-to-use graphical interface simplifies the transformation, blending, and cleansing of any data for data analysts, business analysts, data scientists, and other users. PDI also has a new capability that provides direct access to various supervised machine learning algorithms as full PDI steps that can be designed directly into your PDI data flow transformations.


For more information on PDI and how it integrates with Machine Learning Model Management see the following blog posts by Ken Wood.

            Machine Intelligence Made Easy

            4-Steps to Machine Learning Model Management

Back in November 2014 I posted a blog on how “Controlling your explosion of copies may be your biggest opportunity to reduce costs”. I quoted a study by Laura DuBois of IDC which reported that 65% of external storage system capacity was used to store non-primary data such as snapshots, clones, replicas, archives, and backup data, up from 60% just a year earlier. At that rate, it was estimated that by 2016 the spend on storage for copy data would approach $50 billion and copy data capacities would exceed 315 million TB. I could not find a more recent study, but I would estimate that the percentage has increased due to more online operations, ETL for analytics, DevOps, and the larger number of shorter-lived applications, which tend to leave dark data behind that never gets cleaned up. Copies serve a very useful purpose in an agile IT environment, just like the massive underwater bulk of an iceberg provides the displacement that keeps the iceberg afloat. However, copies need to be monitored, managed, and automated to reduce costly waste and inefficiency.




At that time in 2014, our answer for copy data management was a product called Hitachi Data Instance Manager, which came from the acquisition of the Cofio Aimstor product. Most users at that time were using this product as a lower-cost backup solution. A key feature was a workflow manager with settable policies for scheduling the operations it controlled. Since then, Cofio and Hitachi engineers have worked to bring the latest enterprise features into this product and renamed it Hitachi Data Instance Director, or HDID (which sounds better than HDIM). HDID provides continuous data protection, backup and archiving with storage-based snapshots, clones, and remote replication, in addition to application server host-based protection.


In October of last year, with the announcement of the new Hitachi Vantara company, we announced Hitachi Data Instance Director v6, which was re-architected with MongoDB as the underlying database. The more robust database enables HDID to scale to hundreds of thousands of nodes, compared to previous versions which scaled to thousands of nodes. Now you can set up as many users as you want with access rights. Another improvement was an upgrade from single-login access control to granular role-based access controls, aligning user access capabilities to the business’ organizational structure and responsibilities.


Another major enhancement was a RESTful API layer which enables the delivery of recovery, DR and copy data management as a private cloud service. Rich Vining, our Senior World Wide Product Manager for Data Protection and Governance explains this in his recent blog post Expand Data Protection into Enterprise Copy Data Management:


“Hitachi Vantara defines copy data management as a method for creating, controlling and reducing the number of copies of data that an organization stores. It includes provisioning copies, or virtual copies, for several use cases, including backup, business continuity and disaster recovery (BC/DR), and data repurposing for other secondary functions such as DevOps, financial reporting, e-discovery and much more.”


Read Rich’s blog to see how HDID can solve the copy explosion that I described above by automating the provisioning of snapshots, clones, backups and other copy mechanisms, mounting virtual copies to virtual machines, automatically refreshing them and, more importantly, expiring them when they are no longer needed.


Think of HDID as a way to automate the copy data process and reduce the estimated $50 Billion spend on storage for copy data.


Over the past year there has been a surge in reports of data breaches from Amazon S3 buckets that were left accessible online, exposing private information from all sorts of companies and their customers. The list of high-profile victims includes Accenture, Booz Allen Hamilton, Dow Jones, Time Warner, Tesla, TSA, Verizon, and the U.S. National Geospatial-Intelligence Agency, and the breaches resulted in hundreds of millions of private records being exposed to public access.


How did this come about? When you create an Amazon S3 (Simple Storage Service) cloud storage bucket, you can store and retrieve data from anywhere on the web. In most of the breaches, the companies left their Amazon S3 buckets configured to allow public access, which means that anyone on the internet who knows the name of the bucket can access and download its contents.
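
Auditing for this misconfiguration is straightforward with the AWS SDK. Here is a hedged sketch using boto3 that flags buckets whose ACL grants access to the AllUsers group; it assumes AWS credentials are already configured in your environment:

```python
# Sketch: flag S3 buckets whose ACL grants access to the AllUsers group.
# Assumes AWS credentials are already configured for boto3.
import boto3

ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"
s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    acl = s3.get_bucket_acl(Bucket=name)
    public = any(
        grant.get("Grantee", {}).get("URI") == ALL_USERS
        for grant in acl["Grants"]
    )
    if public:
        print(f"WARNING: bucket '{name}' grants access to everyone")
```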


According to statistics from security firm Skyhigh Networks, 7% of all S3 buckets have unrestricted public access and 35% are unencrypted, which makes this a major problem for the entire Amazon S3 ecosystem. To compound the problem, there are many tools available on the web to comb through leaky AWS datasets, so finding exposed S3 buckets is relatively easy. BuckHacker, for example, is a search engine that makes exposed S3 buckets easy to find.


While encryption is one way to limit the consequences of an S3 data breach, many people who think they are protected by encryption actually aren't. People believe that if the data is encrypted, then its content cannot be readily compromised, but it depends on where the encryption is done. The most easily adopted approach, server-side encryption (SSE), doesn't solve the open-bucket problem. With SSE, Amazon S3 encrypts your data before saving it to disks in its datacenter, but the data is automatically decrypted when it is downloaded, so data in an encrypted open bucket is accessible in clear text despite being stored in an encrypted form.


Protection is provided by client-side encryption, where a client such as Hitachi Content Platform (HCP) encrypts the data and then uploads the encrypted data to Amazon S3. In this case the client manages the encryption process, the encryption keys, and related tools, and maintains ownership of the keys. Even if the bucket is publicly readable, the content is still encrypted when it is accessed.
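
The difference is easy to see in code. The following is a minimal client-side-encryption sketch; it is illustrative only, not how HCP implements encryption internally, and the bucket name is a placeholder. Because the data is encrypted before it ever leaves the client, a leaked bucket exposes only ciphertext.

```python
# Sketch: client-side encryption before upload. The key never leaves the client,
# so even a publicly readable bucket only exposes ciphertext.
# Illustrative only -- not how HCP implements its encryption internally.
import boto3
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # in practice, manage this key in your own KMS/HSM
cipher = Fernet(key)

plaintext = b"confidential customer record"
ciphertext = cipher.encrypt(plaintext)

s3 = boto3.client("s3")
s3.put_object(Bucket="example-bucket", Key="record.bin", Body=ciphertext)

# Only a holder of `key` can recover the data after download:
obj = s3.get_object(Bucket="example-bucket", Key="record.bin")
assert cipher.decrypt(obj["Body"].read()) == plaintext
```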


Another solution for S3 data leaks is to take an on-premises or hybrid approach with HCP as a client to S3. HCP is designed to run in the datacenter or as a hybrid cloud, where layers of existing enterprise security processes and protocols already keep hackers out. HCP avoids risk thanks to the enterprise's custom security, which is well understood and trusted, whereas systems run as a public cloud have a much broader attack surface with a one-size-fits-all security approach. Running the system on-premises minimizes the risk of accidental public exposure of confidential data. In this way you can take advantage of the scalability and low cost of Amazon S3 storage while enjoying the protection provided by your enterprise firewall and access controls, along with HCP's data protection services like data-at-rest and data-in-flight encryption. See my previous post on HCP security features. All these HCP security features remain in force when data is tiered to S3 buckets.


An on-premises or hybrid approach with HCP as a client to S3 can provide another layer of protection if hackers penetrate AWS and gain access to the underlying computing infrastructure, which may enable direct access to the physical media that Amazon uses for S3 or to any backups or replicas. Since the data would have been encrypted by HCP, the hackers would be unable to read it. Also, because Amazon is a public cloud service, there is always the possibility of a National Security Letter or other legal hold being placed on the data, which would require users to retain their files in another location that could also be attacked.


On premises or hybrid HCP running as a client to Amazon S3 services is the best way to plug leaky Amazon S3 buckets.

HCP Security image.jpg


A November 2017 report by Scott Sinclair, senior analyst at Enterprise Storage Group, showed that the top factor leading organizations to deploy or consider deploying on-premises object storage technology is a higher level of data security.


451 data chart.png


Object storage offers tremendous advantages over a hierarchical file system when it comes to security. Object storage is designed with a single, massively flat address space that enables files or objects to be accessed by a unique identifier, accompanied by customizable metadata. The metadata not only enables object storage to scale to higher capacities than traditional file systems, it can also help meet regulatory requirements for content and records retention by designating specific content as immutable while providing the necessary auditing and reporting to verify immutability. Object storage can find, move, manipulate, and analyze metadata and data content for data security and protection.


Security is not something that you can wrap around the outside of a product; it must be designed in from the beginning. Hitachi Vantara's Hitachi Content Platform (HCP) was designed and developed with security at its very core. A recent white paper, Overview of Server Security and Protection for HCP, focuses on the security features built into HCP and HCP cloud storage software to protect data access and secure communications. The white paper is written for systems and network administrators and sets out best practices for an HCP deployment that minimizes vulnerability and threat exposure.


Security highlights include:


On-Premises or Hybrid Deployment

HCP is designed to run in the datacenter or as a hybrid cloud where layers of existing enterprise security processes and protocols already keep hackers out. HCP avoids risk thanks to your custom security that you understand and trust, whereas systems that are run as a public cloud have a much broader attack surface with a one-size-fits-all security approach. Running the system on-prem minimizes risk of accidental public exposure of confidential data.


Multitenancy and Namespace Isolation

A single HCP system is an overall structure for managing one or more tenants, enforcing the boundaries that keep the applications, users, and data of each tenant isolated. Each tenant is a virtual object storage system with independent management and data access, bounded by the overall policies of the HCP system. Each tenant in turn has one or more namespaces, which follow policies set by the tenant and provide a mechanism for separating the data stored by different applications, business units, or customers. Namespaces provide segregation of data, while a tenant provides segregation of management. This segregation of system, tenant, and namespace provides multiple levels of security access to data and isolates a namespace should a hack occur.


Role-Based Access Control for Management

HCP provides role-based access controls (RBAC) for administration accounts at both the system and tenant levels. The roles are system administration, compliance, security, monitoring, search and service. An HCP administrator may fulfill one or more roles at the system and tenant levels. There is no single super user account in HCP. The boundaries between various administrative and data access domains limit the scope of damage that can be done by a malicious user through a compromised account.


Network Security Considerations

Networks are avenues for malicious attacks, so the referenced white paper goes into detail about segregating and managing network access to HCP. HCP is typically deployed behind a corporate firewall, and limiting access to the HCP front-end network remains an important part of the security strategy. Network engineers may elect to restrict port utilization to the minimum set required by the HCP software; the white paper lists the ports that HCP might need for operations. HCP uses the Transport Layer Security protocol (TLS 1.2) to ensure privacy and data integrity between HCP and the other systems with which it communicates. TLS provides data-in-flight encryption for HCP services, including HCP system management, tenant management, RESTful API gateways, replication, and cloud tiering. HCP also operates its own internal firewall, and many ports can be enabled or disabled via HCP management. Examples are port 123 for NTP services and port 514 for remote syslog; syslog can stream HCP event messages to one or more servers performing security audit functions.


Data Access Methods

HCP supports industry-standard data access methods including Amazon S3, OpenStack Swift, WebDAV, SMB/CIFS, NFS, and SMTP, as well as a proprietary REST API. When an application writes a file, HCP puts it in a bucket (namespace) along with its metadata. HCP is designed for write once, read many (WORM) access to information, but namespaces can be enabled with versioning to permit write and rewrite operations. Tenant-level administrators can restrict access originating from a specific IP address using an allow (whitelist) or deny (blacklist) list. When HCP namespaces are cloud optimized through RESTful APIs, HCP blocks all ports associated with SMTP, WebDAV, CIFS and NFS to reduce the attack surface.
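
Because HCP exposes an S3-compatible gateway, standard S3 tooling can target a tenant namespace directly. Below is a sketch with boto3 in which the endpoint URL, namespace (bucket) name, and credentials are hypothetical placeholders:

```python
# Sketch: writing an object with custom metadata to an S3-compatible HCP namespace.
# The endpoint, namespace name, and credentials below are hypothetical placeholders.
import boto3

hcp = boto3.client(
    "s3",
    endpoint_url="https://finance.tenant1.hcp.example.com",  # tenant namespace gateway
    aws_access_key_id="HCP_ACCESS_KEY",
    aws_secret_access_key="HCP_SECRET_KEY",
)

with open("inv-0001.pdf", "rb") as f:
    hcp.put_object(
        Bucket="invoices",                       # HCP namespace acting as a bucket
        Key="2018/inv-0001.pdf",
        Body=f,
        Metadata={"department": "finance", "retention-class": "7-years"},
    )
```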


User Authentication

HCP uses system level user and group accounts to control access to the data, management consoles, APIs and search console. HCP validates users with any of the following authentication methods:

            Local Authentication

            Remote Active Directory

            Remote Radius

            Remote Keystone (OpenStack)


Virus Scanners

HCP Anywhere, Hitachi’s file sync-and-share for mobile devices, can be configured to communicate with a corporate virus scanning engine. The HCP repository itself does not incorporate a virus scanner because it does not provide an execution environment for the objects that are uploaded; since a file or object is never opened or executed on HCP servers, it is immune to viruses.


Ransomware and Data Protection Strategies

HCP offers several capabilities for protection against data loss, including preventing and reversing a Ransomware attack (a malware attack that encrypts data and demands a ransom for the decryption key, also known as a crypto-locker).


All information that is stored in the HCP is WORM (write once read many), making it immune to Ransomware attacks.


HCP supports the storage of multiple versions of an object to protect data from accidental deletions or roll back accidental changes. Versioning can be enabled at the tenant and namespace level. The tenant administrator can configure how long a prior version of an object is kept.

Retention + Legal Hold

HCP provides flexible retention capabilities to prevent accidental or malicious deletion of objects before a designated retention period expires or while they are under a legal hold.

File Integrity

A hash is computed for every object at ingest time to ensure data integrity. The hash, or “digital fingerprint,” is stored as metadata and is used to validate integrity upon retrieval. If there is any discrepancy, HCP detects it via the hash and can repair the object by restoring it from a replica copy.
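
The same idea can be illustrated in a few lines: compute a digest at ingest, store it as metadata, and recompute it on retrieval to detect tampering. This is a generic sketch, not HCP's internal code.

```python
# Generic sketch of hash-based integrity checking (not HCP's internal code).
import hashlib

def fingerprint(data: bytes) -> str:
    """Compute a SHA-256 digest at ingest time; store it as metadata."""
    return hashlib.sha256(data).hexdigest()

def verify(data: bytes, stored_digest: str) -> bool:
    """On retrieval, recompute the digest; a mismatch signals corruption or tampering."""
    return fingerprint(data) == stored_digest

ingested = b"object content"
digest = fingerprint(ingested)
assert verify(ingested, digest)           # intact
assert not verify(b"tampered!", digest)   # detected
```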


Auditing and Monitoring

The system management console and the tenant management consoles provide displays of critical system events to authorized role-based administrators.


Limiting Command Line Interface Risks

System administrators do not have command line access to HCP systems so that organizations can credibly prove regulatory compliance, auditing, and non-tampering. Everyday administrative capabilities are GUI or API driven. Making system changes that require command line access requires the cooperation of both the organizations’ administrators, and authorized Hitachi Vantara customer support. This approach increases security by preventing clandestine manipulation.



This post is just an overview of the Hitachi Vantara white paper that I referenced at the beginning. Please download the white paper for more information on the data security features of the HCP object store, and compare it with other vendors' data security capabilities when evaluating object storage options.

Object storage, or content-addressable storage, which was once an afterthought for archiving data, has now become mission critical as the explosion of unstructured data drives more of our business decisions. While core database applications with structured data still drive much of the business today, integration with unstructured data from mobile devices, the internet, and other connected devices is driving digital transformation through the cloud, big data, analytics, governance, and IoT.

Mission Critical.png


All major public cloud storage providers, including Amazon Web Services (AWS), Microsoft, IBM and Google, have adopted object storage as their primary platform for unstructured data, which makes it the primary storage for hybrid cloud applications. Service providers see immediate benefits from object storage's flexibility and scalability over file-based approaches. As more enterprises adopt public and hybrid cloud applications, object storage with RESTful cloud interfaces and APIs provides easy access to cloud applications and management of unstructured data. Hitachi's object storage, Hitachi Content Platform (HCP), provides object storage flexibility and scalability from edge mobile devices, to the on-premises core, to the cloud.


Big Data

Unstructured data growth is far outpacing the growth of structured data, and more enterprises are struggling to store and manage multiple petabytes of unstructured data. File systems, with their hierarchical data structures, cannot scale to meet the growth of this data without creating multiple silos of isolated data. Backup, which multiplies the storage requirements, has also become untenable. The only way to manage this big data growth is to implement a metadata-based, scale-out platform that is not dependent on infrastructure or location. The data will outlive the application that created it and the infrastructure where it initially resides. Object storage metadata will preserve the data's content, and RESTful interfaces will keep it accessible in a cloud environment. Backup can be eliminated by keeping two or more replicas of the data store.



Analytics will drive more critical business decisions, but analytics is only as good as the data it analyzes. Analysts and data scientists spend 80% of their time gathering, cleansing, and curating the data that goes into their analytic models. This is where the metadata in object storage is valuable. Metadata is attached to data when it is ingested and stays with the data until it is deleted and scrubbed. The content of metadata is customizable and offers flexibility in identifying and managing the stored data. A key differentiator among object storage systems is the vendor's metadata framework and how well it addresses the enterprise's long-term needs. Another differentiator is the APIs available for access by analytic tools.



The metadata in object storage also facilitates the governance of data, especially where content awareness is needed for regulatory compliance. For instance, European Union privacy regulations give individuals the right to be forgotten, which means that all records with their private information must be found and deleted unless they are under legal hold. That would be difficult to do without metadata. Object storage can also provide WORM (write once, read many) technology to prevent data from being modified, and Hitachi's object storage solution provides a hash to prove immutability.



IoT is driving even more unstructured data to improve business operations. Machine-generated data has very little metadata. In order to integrate operational data into the business process, we need to address the growing issues around data management, data governance, data sovereignty, identity protection and security breaches. Object storage metadata can help with all of these.


Hitachi Content Platform's Strengths

Hitachi Vantara’s Hitachi Content Platform (HCP) object storage solution has significant market traction in mission-critical applications. With over 2,000 global customers, we are installed in 4 of the 5 largest banks in the world, 4 of the 5 largest insurance organizations, and 5 of the 10 largest telecom companies. We have over 14 years of experience deploying into highly sophisticated environments and satisfying the most stringent governance requirements.


Here are some analyst reports that evaluate our object storage capabilities:


Gartner Critical Capabilities for Object Storage report


GigaOm Sector Roadmap: Object Storage  report


IDC MarketScape: Worldwide Object-Based Storage 2016 Vendor Assessment report

IDC Marketscape.png

Enterprise Storage Group: Hands-On Evaluation of Hitachi Content Platform Portfolio report



If you have not yet considered Object Storage, review these reports, call our HCP representatives, and talk to our customers to see what HCP can do for your critical business needs.


Problem Statement

IBM mainframe environments are strategic to many of our mission-critical business systems. Whether or not your environment uses mainframes, you may be more dependent on the resilience of mainframe systems in the outside world than you might imagine. For instance, financial systems around the world are interconnected, and some core financial systems must be able to recover from a system outage within 2 hours in order for other systems to recover in 4 hours, and still others in 24 hours, like ripples in a pond. You can be sure that most of these core financial systems run their mission-critical applications on mainframes due to their long history of resiliency, manageability, and standardization in recovering mainframe-based applications and data. Add to that IBM's conscientious effort to add new functions and features as new technologies, deployment models, and threats have evolved. The gold standard for IT resilience is IBM's Geographically Dispersed Parallel Sysplex (GDPS) family of offerings for the mainframe.


As a result, GDPS has grown in complexity as the family of offerings has expanded to cover nearly every possible threat to resiliency. Currently the GDPS family includes the following products:

  • GDPS/PPRC, based on IBM PPRC synchronous disk mirroring technology
  • GDPS/PPRC HyperSwap Manager
  • GDPS/MTMM, based on IBM Multi-Target Metro Mirror synchronous disk mirroring
  • IBM GDPS Virtual Appliance, a near-CA or DR solution across two sites
  • GDPS/XRC, based on IBM Extended Remote Copy (XRC) asynchronous disk mirroring
  • GDPS/Global Mirror, based on IBM System Storage® Global Mirror technology
  • GDPS/Metro-Global Mirror, a 3-site solution with two Metro and one asynchronous mirror
  • GDPS/MGM 4-site, a symmetrical 4-site solution providing continuous availability within a region and DR across regions
  • GDPS/Active-Active, asynchronous mirroring between two active production sysplexes
  • GDPS automation code, which relies on the runtime capabilities of IBM Tivoli NetView® and IBM Tivoli System Automation (SA)


GDPS also includes an extensive list of services. Each GDPS implementation has a unique requirement or attribute that makes it different from every other implementation, so the services aspect of each offering requires experienced GDPS practitioners. The amount of service included depends on the scope of the offering. For example, function-rich offerings such as GDPS/PPRC include a larger services component than GDPS/PPRC HyperSwap Manager.


Competitive Offerings

IBM GDPS is designed to provide near-continuous data and systems availability across sites separated by virtually unlimited distances. Its roadmap and features include something for everyone, including future additional configurations that can lead to full active-active function. GDPS comprises a myriad of complex products and architectures to address various customer requirements.


Dell EMC’s Geographically Dispersed Disaster Restart (GDDR) is focused on automating disaster restart of applications and systems within mainframe environments in the event of a planned or unplanned outage. Also intricate in design, GDDR leverages multiple architectures and replication suites to eliminate any single point of failure for disaster restart plans in mainframe environments.


Solutions like GDPS and GDDR attempt to take an "everything to everyone" approach, which can quickly add greater complexity and risk to the very environments you wish to simplify. It can be phenomenally difficult and stressful to configure and implement all the intricacies, map to existing tools, and add software. Then you must ensure you have the resources to manage a solution of this magnitude. With the effort expended to implement these large solutions, you can end up with a semi-permanent level of vendor lock-in.



Enter Hitachi Mainframe Recovery Manager

Hitachi Vantara has a long, proven history of delivering mainframe resilience. At Hitachi Vantara, we use mainframe services across many of our own businesses and support an ever-expanding ecosystem of mainframe tiering, analytics and functionality for customers. We have learned from decades of mainframe experience that the simpler the solution, the better. Our mainframe resiliency solutions are straightforward, scalable and streamlined. Hitachi Mainframe Recovery Manager (HMRM) is a simpler, more focused, lower-cost, streamlined mainframe failover and recovery solution that provides the functionality you actually care about and nothing you don't.


We listened and collaborated with our mainframe customers to understand what resonates most for them to achieve modern business resiliency. Hitachi Mainframe Recovery Manager delivers mainframe processor, host and replication orchestration. Mainframe Recovery Manager is practical and easy to deploy, so you get results faster, without the typical complexity, risk or cost. Mainframe Recovery Manager is delivered as a service and is based on one cohesive, tightly integrated architecture for singular simplicity.


Hitachi Mainframe Recovery Manager delivers unique processor orchestration capabilities via IBM's Base Control Program internal interface (BCPii). HMRM executes IBM BCPii commands to maintain the attributes of the production images, also known as logical partitions (LPARs), being managed and to orchestrate the Reset, Deactivate, Activate and Load of processor images or LPARs. These BCPii commands are identical to the commands issued manually by any IBM Z processor customer from the hardware management console (HMC). The beauty of Mainframe Recovery Manager's minimalistic architecture is that you can manually intervene, using familiar HMC commands, to address automation issues. Additionally, Mainframe Recovery Manager allows:

  • Authorized z/OS applications to have control over systems in the HMC network
  • Direct communication with management console Support Elements rather than going over an IP network
  • A z/OS address space to manage authorized interaction with the interconnected hardware


The HMRM Managed Service Offering

Unlike DR solutions that require you to mastermind and manage many moving parts, the Hitachi Mainframe Recovery Manager solution is delivered as a managed service. We deliver greater orchestration, so you can recover more easily, faster and more completely.


Mainframe Recovery Manager Services begin with an assessment of your unique mainframe ecosystem. We work with you to tailor Mainframe Recovery Manager to your specific goals. You may prefer a single, push-the-button environment or customized automation levels for various stages of failover and testing. Full implementation and ongoing support are included in Mainframe Recovery Manager Services. If you are new to Hitachi Vantara replication, our services experts can also manage the complete design and implementation, along with any migrations you require.


Where GDPS and GDDR may typically take seven months to implement, Hitachi Mainframe Recovery Manager Services' simpler approach takes a fraction of that time.


Summary of HMRM Benefits

In summary, Hitachi Mainframe Recovery Manager removes the burden of juggling all those replication activities so you can improve mainframe resiliency and do more with less. Mainframe Recovery Manager is a sleek, simplified and cost-efficient approach that is instrumental in orchestrating the "restart" of IBM Z processors, operating systems and custom applications, in half the time of competitive GDPS and GDDR solutions, in the following circumstances:


  • Unplanned disaster event from the primary site to a secondary site.
  • Planned production workload switch from primary site to secondary site.
  • Planned production workload switch back from secondary site to primary site.
  • Planned disaster recovery test in parallel to production running “business-as-usual” at the primary site.


For More Information

To learn more about how Hitachi Mainframe Recovery Manager can help you achieve your mainframe resiliency goals, please visit

ostcuki, brian, scott.png

Yesterday, Hitachi Vantara announced key promotions to its executive team: Brian Householder has been promoted to Chief Executive Officer of Hitachi Vantara and Scott Kelly has been promoted to Chief Operating Officer. Both promotions are effective April 1, 2018. Prior to being named Chief Executive Officer, Householder served as President and Chief Operating Officer of Hitachi Vantara. Kelly will continue to serve as Hitachi Vantara's Chief Transformation Officer, leading the company's enterprise-wide initiatives to drive its global growth strategy. Prior to his promotion, Kelly also served as Chief Human Resources Officer, where he was responsible for all aspects of human resources at Hitachi Vantara.


Ryuichi Otsuki, the current CEO of Hitachi Vantara, will leave his role to take an expanded role at Hitachi, Ltd. as Deputy General Manager of Hitachi's Corporate Sales & Marketing Group and Chairman of Hitachi Global Digital Holdings, the holding company for Hitachi Vantara, oXya and Hitachi Consulting. Otsuki will also serve as Chairman of Hitachi Americas. Under the leadership of Otsuki as CEO, Householder as President and COO, and Kelly as Chief Transformation Officer, Hitachi Vantara has demonstrated strong performance as the premier partner for delivering enterprises value through data. In their new, expanded roles, we expect to see even more growth in delivering on our double bottom line: growth in business and in social innovation.


Leadership is an important function that helps to maximize efficiency and achieve organizational goals. It has the potential to influence and drive group efforts toward the accomplishment of those goals. However, in order for leadership to be effective, it must be built on a solid foundation consisting of a clear mission, a vision for the future, a specific strategy, and a strong company culture. I believe that Hitachi Vantara has all of these ingredients to help this leadership team be successful.


We have a clear mission as the data arm of Hitachi, Ltd., a vision for Social Innovation, a specific strategy for driving results for customers from edge to outcomes, and a recognized culture based on the guiding principles of Harmony, Sincerity, and Pioneering Spirit.


For more information about these executive changes please see our press release.

Several years ago, I posted a blog on how Alan Turing was able to break the Enigma code, the secret language used by the Nazis in WWII that was originally thought to be unbreakable. With a polyalphabetic substitution cipher that was many times the length of the longest message, a 4-rotor scrambler, and 10 leads on a plug-board that set the number of ways pairs of letters could be interchanged, it had roughly 151 trillion possible permutations and combinations to decipher.


Alan Turing started work on the Enigma code in 1939 and was able to break some of it as early as 1940, with the help of a team of mathematicians and logicians at Bletchley Park in the UK during World War II and with prior information from Polish intelligence, who had been able to break earlier Enigma codes. Each branch of the German armed services managed and maintained its own Enigma systems, so some were easier to decode than others. The Naval Enigma systems were the hardest to break consistently, since the plug-board setting was changed on a daily basis and the Navy was the most disciplined in its use. It usually took two days to decipher a new key setting, which was useless when the keys were changed every 24 hours. Solving the Naval Enigma code was critical for the D-Day invasion in June 1944. Breaking the Enigma code is credited with shortening World War II by at least two years.


Turing developed a machine that could do a brute-force decryption that took several days, but this could be shortened if they could uncover certain phrases that might be repeated with different encryption keys. This is where the mathematicians and logicians helped the most. Turing looked for people with a creative imagination, a well-developed critical faculty, and a habit of meticulousness. Skill at solving crossword puzzles was famously tested in recruiting some cryptanalysts. These are personal traits that are still highly desired as we look for analysts today.


An AI company recently demonstrated how the use of AI and cloud servers could break the Enigma code in just over 10 minutes. Unlike the traditional method of cracking the code, the AI was trained to look for German language, and then work out the statistical probability that a decrypted sentence was the accurate original based on how 'German' it was, using 2,000 cloud servers to do the calculations. AI took a different approach to solving an encryption challenge, an approach that can be used to solve many other challenges, like finding cures for different cancers.
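
The "how German is it" scoring idea can be illustrated with a toy letter-frequency model: candidate decryptions whose letter statistics look more like German score higher. This is a greatly simplified sketch, nothing like the actual system:

```python
# Toy sketch of scoring candidate decryptions by how "German" their letter
# statistics look -- a greatly simplified stand-in for the actual AI approach.
from math import log

# Approximate relative frequencies of the most common letters in German text;
# letters not listed get a small default probability.
GERMAN_FREQ = {"e": 0.174, "n": 0.098, "i": 0.076, "s": 0.073, "r": 0.070,
               "a": 0.065, "t": 0.061, "d": 0.051, "h": 0.048, "u": 0.044}

def german_score(text: str) -> float:
    """Higher score = letter distribution closer to typical German."""
    letters = [c for c in text.lower() if c.isalpha()]
    return sum(log(GERMAN_FREQ.get(c, 0.001)) for c in letters) / max(len(letters), 1)

candidates = ["wetterbericht sechs uhr", "xqzjv kpwgh ymbtr lcnfd"]
best = max(candidates, key=german_score)
print(best)  # the plausible German plaintext wins
```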


What is disturbing is that an AI system like this could crack a 20-character password in less than 20 minutes.

That’s why we need to look beyond encryption to biometrics for systems of authentication. See my previous blog on Biometrics.

What do we mean by smart spaces? Justin Bean, Director of Smart City Solutions Marketing at Hitachi Vantara, defines smart spaces as urban and industrial areas that use video, IoT, analytics, and AI technologies to deliver insights to people, buildings and machines, to make organizations more effective and improve our quality of life.


Smart spaces keep people and property safe by providing better situational awareness, generating automated alerts for threats and incidents, managing incident records and evidence, and coordinating across silos. They enhance experiences by moving people efficiently through roads, stations, and vehicles, and by understanding people flows throughout spaces and buildings for enhanced commerce and planning. Smart spaces also improve operations and efficiency through cross-organizational data sharing and analysis, automating key tasks and infrastructure, identifying potential improvements, and eliminating waste.


Smart spaces are the building blocks for smart cities, smart economies and smart societies, as this slide illustrates:

Smart Spaces.png

A city can only be as smart as the sum of its parts. Cities are expansive ecosystems, composed of entities ranging from government and law enforcement agencies, businesses, healthcare systems, airports and transportation, to utilities, schools, universities and more. Each of these organizations has its own unique set of challenges and opportunities, but they all share common needs: They must increase operational efficiencies and cost savings, enhance the enjoyment and experience of the people they serve, as well as employees, and keep people and assets safe. The internet of things (IoT) is helping to connect our digital and physical worlds like never before. It drives new opportunities to innovate and make your business or organization, and the city it is located in, smarter, safer and more efficient. Sounds great, right? But where and how do you get started?


Join Justin Bean and our special guest, Dr. Alison Brooks, research director, Smart Cities Strategies and Public Safety at IDC, on a live Brighttalk Webcast tomorrow, January 24 at 9am Pacific time where they will discuss: What Makes a Smart City Smart?


Click here to attend:


If you cannot attend the live webcast, click the link anyway to receive access to the webcast recording.

Mars Venus.png

In the 1990s, a best-selling pop-culture book was entitled “Men are from Mars, Women are from Venus”. The author, John Gray, stated that the most common relationship problems between men and women arise because they are from two distinct planets, and that each sex is acclimated to its own planet's society and customs but not those of the other. Men are more focused on doing things and women are more focused on feelings. This idea spawned a lot of talk shows, comedians, plays and even a movie. While clearly a pop-culture idea, it attracted a lot of conversation since it appeared to elicit a response in many people.


At first glance, the concept of IoT (Internet of Things), the integration or networking of IT (information technology) and OT (operational technology), seemed pretty straightforward. It was about combining business information with operational information to create smarter business and operational systems. As enterprises take a closer look at these systems, they are finding that information and operations have very different technology perspectives and requirements. OT is about how things work, and IT is about how the information about those things gets interpreted. The challenge of IoT is to bring them together to “change the way we live and work and create a significant social and economic impact on society”.


IT is a relatively young technology, born into a very structured environment thanks to the early dominance of vendors like IBM, Intel, Microsoft and Cisco. The world of OT is much older, supporting thousands of vendors, dozens of standards for networking and messaging, and high-volume, low-power systems. Industrial OT systems can be very capital intensive, making it necessary to integrate new technology onto existing OT systems. If the legacy systems don't have the sensors to measure things like vibration, heat, and climate, these must be added so that we have a software model, or avatar, of the operational system for not only monitoring but also prediction and prescriptive operations.


Another challenge is the plethora of machine-to-machine protocols: MQTT (MQ Telemetry Transport), BLE (Bluetooth Low Energy), ZigBee, Z-Wave, Thread, and WeMo. Some are radio frequency and others are Wi-Fi, depending on processing capability and low-energy requirements. A message broker will be required to process the different types of messaging.
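
As an example of how lightweight these device protocols are, publishing a sensor reading over MQTT takes only a few lines with the paho-mqtt client. The broker address, port, and topic below are hypothetical placeholders:

```python
# Sketch: publishing a sensor reading over MQTT with the paho-mqtt client.
# Broker address, port, and topic are hypothetical placeholders.
import json
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect("broker.example.com", 1883)  # typically 8883 with TLS in production

reading = {"sensor_id": "vibration-07", "value_mm_s": 2.3, "ts": "2018-03-20T10:15:00Z"}
client.publish("factory/line1/vibration", json.dumps(reading), qos=1)
client.disconnect()
```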


Building all of this from scratch is a daunting task. That is where an IoT platform can shortcut a lot of new learning trials. It is easier to build on the experience of vendors who have been working in the field with customers to co-create IoT solutions.


Hitachi Vantara offers Lumada, an IoT platform that can be preconfigured and installed as an appliance, on a VM, or in the cloud, to begin your IoT journey. This platform has been recognized with the IoT Breakthrough Award for 2018 Enterprise Solution of the Year! The IoT Breakthrough Awards program recognizes innovative technologies and exemplary companies that are driving innovation in the internet of things (IoT) market.

HItachi Street.png


IoT Breakthrough, an independent organization that recognizes the top companies, technologies and products in the global Internet of Things (IoT) market, announced the winners of the organization's 2018 awards program, showcasing technologies and companies that drive innovation and exemplify the best in IoT technology solutions across the globe. The 2018 IoT Breakthrough Award categories included: Connected Home, Consumer IoT, Enterprise IoT, Industrial IoT, IoT Partner & Ecosystem, Connected Car, and IoT Leadership.


“2017 was a banner year for the entire IoT industry as we continue to move in the direction of real deployment and monetization of IoT systems, with the true value of IoT becoming abundantly clear,” said James Johnson, managing director at IoT Breakthrough. “As consumers and businesses continue to embrace IoT products and solutions, the IoT Breakthrough Awards provides critical recognition for the break-through companies and solutions in the crowded IoT market. We are absolutely thrilled to present the 2018 IoT Breakthrough Award winners.”


Hitachi Vantara's Lumada IoT Platform was recognized as the Enterprise Solution of the Year.


In response to this award, Brad Surak, Hitachi Vantara’s chief product and strategy officer, said,

“Hitachi is honored to be recognized by IoT Breakthrough with the IoT Innovation Award for Enterprise Solution of the Year and we congratulate our fellow winners. The many breakthrough capabilities of the Lumada platform wouldn’t be possible without the learnings and insights gained in co-creating IoT solutions with our customers. They are the true trailblazers of the IoT era and we are privileged to be partnering with them to develop data-driven solutions that are changing the way their businesses – and the world – works.”


This award helps to validate two of the trends that I identified for 2018. The adoption of IoT platforms to facilitate the application of IoT solutions and the co-creation of value through the process of collaborating with customers and ecosystem players in order to innovate and create new value for business stakeholders, customers and society at large. Many of the companies that were recognized on the IoT Breakthrough Awards list are our partners and part of the co-creation ecosystem along with our many customers.


For Hitachi Vantara’s press release, see this link.

Containers are a lightweight, executable package for deploying and running distributed applications. They are the next generation up from virtual machines (VMs): VMs manage hardware resources, while containers only manage software resources. Where a traditional virtual machine abstracts an entire device including the OS, a container consists only of the application and all the dependencies the application needs. This makes it very easy to develop and deploy applications. Monolithic applications can be modernized and written as microservices that run in containers, for greater agility, scale, and reliability.

Container cranes.png

Up to now, a limitation of containers has been the lack of persistent storage. When a container expires or moves to another server, it loses access to its storage and data. If containers are to run stateful applications, they require an underlying storage layer that provides enterprise features just like those available to apps deployed in a VM. That problem is now being addressed with storage plugins, which provide a persistent storage link in coordination with container platforms and orchestrators. Container platforms like Docker automate the packing and loading of containers and provide governance for the app development process, to build, host, deploy, manage, and scale multi-container applications. Orchestrators like Docker Swarm and Kubernetes have the ability to manage containers across clusters; they provide arrangement, coordination and management of the container ecosystem. Using storage plugins which leverage APIs, clusters can provide persistent storage with automation, high availability, seamless replication and analytics. With these new developments, the increased agility, scalability, and aggregation services will make IT more efficient and cost effective in application development and deployment.


Traditional storage can be exposed to a container or group of containers from an external mount point over the network, like SAN or NAS using standard interfaces. However, the storage may not have external APIs that can be leveraged by the orchestrator for additional services like dynamic provisioning. These functions would have to be done manually if APIs aren’t available.


Containerizing storage services enables storage resources to be managed under a single management plane such as Kubernetes. In addition, the orchestrator does not need to administer the storage services provided by the plugin. Hitachi Storage Plug-in for Containers (HSPC) is available for our VSP G and F series with the latest version of our Storage Virtualization Operating System (SVOS). The communication between the HSPC plug-in container software and the VSP is through a RESTful interface on the Hitachi VSP service processor or directly with the VSP operating system, SVOS, via Hitachi's CM REST APIs. In both cases SVOS manages storage services like Hitachi's provisioning, snapshotting and replication. Hitachi Storage Plug-in for Containers is available for Docker Swarm and Kubernetes.
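
From the application side, a stateful workload simply requests a PersistentVolumeClaim against a StorageClass that the storage plug-in exposes. Here is a sketch using the official Kubernetes Python client, where the StorageClass name "hitachi-vsp" is a hypothetical placeholder rather than the plug-in's actual registered name:

```python
# Sketch: requesting persistent storage for a container through a StorageClass
# exposed by a storage plug-in. The StorageClass name "hitachi-vsp" is a
# hypothetical placeholder, not the plug-in's actual registered name.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="app-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="hitachi-vsp",
        resources=client.V1ResourceRequirements(requests={"storage": "20Gi"}),
    ),
)
core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
```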


In order for organizations to take full advantage of the benefits of containers, they must address the persistent storage challenge. For more information on Hitachi Vantara's Storage Plug-in for Containers, see this Quick Reference Guide.

This post is part 3 of my Top IT Trends for 2018. Here we will cover three data types that IT will need to store and manage as these applications become more prevalent in 2018. The first is video analytics, the second is Blockchain, and the third is the use of biometrics for authentication.


6.  Wider adoption of Video analytics

Video content analytics will be a "third eye" for greater insight, productivity, and efficiency in a number of domains beyond public safety. Algorithms that automatically detect and determine temporal, spatial, and relational patterns can apply to a wide range of businesses in retail, healthcare, automotive, manufacturing, education and entertainment. Video, when combined with other IoT information like cell phone GPS and social media feeds, can provide behavior analysis and other forms of situational awareness. Hitachi has used video analytics at Daicel, a manufacturer of automotive airbag injectors, in its quality management system to increase product quality, reduce the cost of rework, and eradicate root causes. Retailers are using video to analyze customer navigation patterns and dwell time in order to position products and sales assistance to maximize sales. Video analytics relies on good video input, so it requires video enhancement technologies like denoising, image stabilization, masking, and super resolution. Video analytics may be the sleeper in terms of ease of use, ROI, and generating actionable analytics.



7.  Blockchain projects mature

Blockchain will be in the news for two reasons. The first will be the use of cryptocurrencies, namely Bitcoin. In 2017, Bitcoin accelerated in value from about $1,000 USD to over $11,000 USD by the time of this posting! One of the drivers for Bitcoin is its growing acceptance in countries that are plagued by hyperinflation, like Venezuela and Zimbabwe, where Bitcoin provides a "stable" currency. Japan and Singapore have indicated that they will create fiat-denominated cryptocurrencies by 2018. These systems will be run by banks and managed by regulators. Consumers will use them for P2P payments, ecommerce and funds transfers. This means banks will have to build the IT capacity to manage accounts in cryptocurrencies. Russia, South Korea and China may also move in this direction.


The other reason is the growing use of blockchain in the financial sector beyond cryptocurrencies. Financial institutions will begin routine use of blockchain systems for internal regulatory functions such as KYC (Know Your Customer), CIP (Customer Identification Program, which is KYC plus checks against various blacklists or other government watch lists), customer documentation, regulatory filings and more. Interbank funds transfer via abstract cryptocurrencies and blockchain ledgers will expand beyond the test transactions of 2017. A recent breakthrough in cryptography, the zero-knowledge proof, may solve one of the biggest obstacles to using blockchain technology on Wall Street, which is keeping transaction data private. Previously, users were able to remain anonymous, but transactions were verified by allowing everyone on the network to see the transaction data. This exposed client and bank positions to competitors who could profit from the knowledge of existing trades. Zero-knowledge proofs were implemented in blockchain systems like Zcash (ZEC) and Ethereum in 2017 and are expected to be widely adopted by FSI in 2018. This could have a major impact on IT in the financial sector.


Other sectors will begin to see prototypes with smart contracts, for provenance and identity services in health care, government, food safety, counterfeit goods, and more. Blockchain provides provenance by building a traceability system for materials and products. You can use it to determine where a product originated, or to trace the origin of contaminated food or illegal products like blood diamonds. Provenance may soon be added to the list of regulatory requirements that I mentioned in my Data Governance 2.0 trend.
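
The traceability property comes from chaining each record to the hash of the record before it, so any alteration of history is detectable. A toy sketch of that linkage, not a production blockchain:

```python
# Toy sketch of hash-chained provenance records -- the linkage that makes
# tampering detectable. Not a production blockchain implementation.
import hashlib, json, time

def add_record(chain: list, payload: dict) -> None:
    """Append a record whose hash covers its payload and the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"payload": payload, "prev_hash": prev_hash, "ts": time.time()}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    chain.append(record)

chain = []
add_record(chain, {"item": "diamond-123", "event": "mined", "site": "Mine A"})
add_record(chain, {"item": "diamond-123", "event": "cut", "site": "Facility B"})

# Altering an earlier record no longer matches its stored hash.
chain[0]["payload"]["site"] = "Forged Mine"
recomputed = hashlib.sha256(json.dumps(
    {k: chain[0][k] for k in ("payload", "prev_hash", "ts")}, sort_keys=True
).encode()).hexdigest()
print("tamper detected:", recomputed != chain[0]["hash"])
```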



8.  The time is right for Biometric Authentication

A survey in 2016 showed that the average English-speaking internet user had 27 accounts. By 2020, ITPro predicts that the average number of unique accounts will be 200! If every account had a unique password, this would be a nightmare to manage. Imagine updating your passwords every 90 days: that would be 800 passwords a year to generate and keep track of. In reality, most of us reuse the same password for many accounts that we don't think are important. Unfortunately, hackers know this, so once they discover a password they will use it to hack our other accounts. AI has been shown to crack a 20-character password in 20 minutes. Even if we adhere to the best password practices, a password may be disclosed through hacks against third parties, as happened at Equifax. Businesses are coming to the realization that proxies that represent our identity, like passwords, ATM cards, and PINs, are hackable even with two-factor authentication. In the United States the most common identification is the Social Security number, which was never intended to be used as a national identity token.


Smartphone vendors and some financial companies like Barclays are moving to solve this problem by using biometrics, which represent the real you. India has implemented a national identification program which includes iris scans and fingerprints. However, choosing the right biometric is important. If a biometric like a fingerprint is hacked, there is no way to reset it as you would a PIN or password. Since we leave our fingerprints on everything we touch, it is conceivable that someone could lift our prints and reuse them. Hitachi recommends the use of finger vein biometrics, where the vein pattern can only be captured when infrared light is passed through a live finger, making it the most resistant to forgery.



Next week I will conclude these trends with Agile methodologies and Co-Creation and how they will contribute to Digital Transformation in 2018.

Gartner differentiates object storage from distributed file systems and has published separate Critical Capabilities reports for each. The last Critical Capabilities for Object Storage was published March 31, 2016. In this report, written by Gartner analysts Arun Chandrasekaran, Raj Bala and Garth Landers, Gartner recommends that readers "Choose object storage products as alternatives to block and file storage when you need huge scalable capacity, reduced management overhead and lower cost of ownership." We believe the use cases for object storage and for block and file systems are quite different.


This report clearly showed Hitachi Vantara's HCP in a leadership position for object storage.


Quadrant Object store.png

Then in October 2016, Gartner combined object storage and distributed file systems into one Magic Quadrant (MQ) report. As stated in the 2017 report, Gartner defines distributed file systems and object storage as software and hardware solutions that are based on a "shared nothing architecture" and support object and/or scale-out file technology to address requirements for unstructured data growth. However, Gartner still recognized the difference between these two technologies in its research:


“Distributed file system storage uses a single parallel file system to cluster multiple storage nodes together, presenting a single namespace and storage pool to provide high bandwidth for multiple hosts in parallel. Data is distributed over multiple nodes in the cluster to deliver data availability and resilience in a self-healing manner, and to provide high throughput and capacity linearly.”


“Object storage refers to devices and software that house data in structures called "objects," and serve clients via RESTful HTTP APIs.”



On October 17th, Gartner published its second annual "Magic Quadrant for Distributed File Systems and Object Storage." For the second year in a row, we are the only vendor in the Challengers quadrant, since we were evaluated only on our HCP object storage. It is important to note that the Magic Quadrant report is NOT a stand-alone assessment of object storage. As the title states, it is a vendor-level analysis based on the COMBINATION of an object storage and a distributed file systems offering.


A new Critical Capabilities for Object Storage report is expected to be published in early 2018. We believe that report will be a more accurate way to evaluate object storage systems. We would expect to score much higher due to the addition of geo-distributed erasure coding and other functionality in HCP, as well as the addition of Hitachi Content Intelligence to the HCP portfolio.


Gartner, Critical Capabilities for Distributed File Systems, 09 November 2017

Gartner, Critical Capabilities for Object Storage, 31 March 2016

Gartner, Magic Quadrant for Distributed File Systems and Object Storage, 17 October 2017


Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.