
Hu's Place


VMware Partner Leadership Summit 2017, which took place last month, concluded with an awards ceremony recognizing exemplary achievements in the VMware partner ecosystem. Hitachi was singled out as the Innovation OEM Partner, both globally and in EMEA. The two awards recognize Hitachi's accomplishments in 2016, including unprecedented year-over-year revenue growth and continued integration between Hitachi Unified Compute Platform (UCP) offerings and VMware vSphere, vSAN, vRealize and NSX solutions.

VMware partnerr of the year.jpg

Hitachi’s unique integration with VMware through its Unified Compute Platform orchestration software layer has been a key innovation from the very beginning of our OEM relationship. In 2016 Hitachi Data Systems announced enhanced integration with VMware solutions, with new support for VMware NSX, Docker containers and enhanced VMware vSphere Virtual Volumes, which strengthens its converged and hyperconverged portfolios to automate IT, improve security and enable application continuity. Hitachi also debuted all-flash versions of UCP 4000, UCP 4000E, UCP 2000 and UCP HC V240F, and now offers all-flash configurations across its entire portfolio of converged and hyperconverged infrastructure platforms to deliver powerful performance for enterprise applications and help customers move to an all-flash data center. Hitachi also extended its OEM agreement to include VMware NSX for use with Hitachi Unified Compute Platform.

 

The justification for the Innovation OEM Partner award from VMware can be summarized in the data sheet “Top 10 reasons to select Hitachi Unified Compute Platform for VMware vSphere.”

 

The following are the key points. For more detail and customer use cases, please take the time to review the referenced data sheet.

1. Hitachi’s UCP Director and UCP Advisor orchestration software plug into VMware vCenter to extend cloud management capabilities with mouse-click simplicity.

2. UCP for VMware vSphere is a pre-engineered, validated, turnkey solution that is preconfigured at the factory and delivered to your site racked, cabled, and configured for fast deployment and rapid transition to the cloud.


3. Accelerate time to market by leveraging all the elements of a software-defined data center, including VMware’s software-defined networking solution, NSX. This integration helps automate SDN capabilities and improve security with VM-level firewalls.


4. Infrastructure flexibility satisfies the requirements of any VMware data center infrastructure from high-end block or file-based highly available, cost-effective storage products to converged and hyperconverged systems, to enterprise cloud solutions.


5. Industry-leading performance dramatically increases the pace of innovation for the most demanding online transaction processing or data analytics applications. In addition to all-flash support, UCP for VMware vSphere is SAP certified. Hitachi enterprise-level solutions for SAP HANA enable 24/7, mission-critical, in-memory HANA environments.


6. Significant economic benefits result from UCP’s product family architecture, which consists of an economical open system that leverages modular building blocks for converged and hyperconverged infrastructures. A third-party report by the Edison Group shows significant TCA and TCO savings over comparable DIY and VCE configurations.


7. Reduce TCO for virtualization and cloud management through industry-leading infrastructure orchestration technology that is fully integrated into the overall system. When used with cloud automation software such as vRealize Automation, UCP solutions enable cloud self-service and role-based access functionality.


8. Protect current IT investment and modernize for future growth. UCP for VMware vSphere integrates and protects your current infrastructure investment while enabling you to expand as needed.


9. Create a software-defined data center (SDDC) with enterprise-grade storage, compute, network and software from Hitachi Data Systems and leading virtualization capabilities from VMware, including VMware Virtual SAN and VMware NSX.


10. Optimize Business Continuity with UCP Director and VMware vCenter Site Recovery Manager for automated disaster recovery and fast replication to move secondary copies to off-site recovery locations.


Digital transformation requires reliable infrastructure that can deliver cloud-like economics and agility with improved efficiency and simplified manageability. IT transformation requires a single solution that can modernize core legacy applications while enabling organizations to bridge to innovative new cloud-based solutions. The innovations in the partnership between VMware and Hitachi Data Systems will help businesses drive change and stay ahead of the competition. Unlike DIY or joint efforts by multiple compute, storage and networking vendors, Hitachi’s OEM relationship with VMware provides a major benefit in streamlining the acquisition, maintenance and ongoing support of VMware vSphere solutions.

Twenty five years ago when I returned to California after living in Japan, I was able to realize my dream of building my own custom home. At that time home builders in California were buying up farmland and throwing up houses as fast as they could. I wanted something different. Something with the quality and attention to detail that I was used to in Japan but on a larger scale. I was fortunate to find an architect and contractor who shared my vision for quality. Building a house is a very manual process with opportunities for human error at every step. While other contractors hired low cost itinerant subcontractors who moved from site to site to do the framing or shingle the roof, our contractor kept the same subcontractors for each job to ensure quality execution and follow through.

 

House plans.png

 

My colleague in Infrastructure Solutions, Tony Huynh, believes that architecting and building an effective data center is akin to how a proven home builder creates a blueprint for a rock-solid home that customers buy with confidence in its long-term value. Both the IT architect and the home builder have similar qualities: solid engineering prowess, efficient execution, and a predictable, high-quality result. Conversely, a shoddy home builder can put something together quickly (and even more cheaply), but the results will reflect that effort. Tony contributed the following post to explain the benefits of using Hitachi Automation Director in building an effective data center.

 

The approach to implementing a modern data center should be looked at in its totality. This is especially true when you’re considering deploying flash technology for your critical tier 0 and tier 1 applications. In addition to the core hardware “bits,” an assessment should be made of the robustness of the underlying OS, as well as the complementary software that makes the solution complete and whole: a solid brick house.

 

Intellectual property and engineering prowess have always mattered, now more than ever.

For our Hitachi VSP F all-flash and VSP G hybrid arrays, we offer Hitachi Automation Director, our data center automation software that reduces manual processes, in some cases by 90%, by using templates for automatic provisioning and other resource-intensive tasks. Reducing manual processes significantly lowers the probability of human error, and we all know the financial, brand and customer impact of a single keystroke error.

 

Today, we have a large Hitachi Automation Director customer that previously spent more than 23 hours manually provisioning storage for their AIX servers, and they did this more than 100 times a month (that’s not a typo). With Automation Director, they have now reduced the same provisioning process to less than 50 minutes.

 

BEFORE HITACHI AUTOMATION DIRECTOR

23 Hours x ~100 times a month = ~2300 manual hours. HIGH PROBABILITY FOR HUMAN ERRORS

 

WITH HITACHI AUTOMATION DIRECTOR

<50 minutes x ~100 times a month = 83 hours. SIGNIFICANT REDUCTION IN HUMAN ERRORS

Imagine what can be accomplished with the additional ~2,200 hours of IT resources freed up per month. For more detail, see the Hitachi Automation Director data sheet: https://www.hds.com/en-us/pdf/datasheet/hitachi-datasheet-automation-director.pdf
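For those who like to see the arithmetic spelled out, here is a minimal sketch of the back-of-the-envelope math behind the numbers above; the figures are the illustrative ones from this customer scenario, not guarantees.

```python
# Back-of-the-envelope math for the provisioning example above.
# Figures are illustrative, taken from the customer scenario in this post.

runs_per_month = 100                 # provisioning requests per month
manual_hours_per_run = 23            # hours per manual provisioning run
automated_minutes_per_run = 50       # upper bound with Automation Director

manual_hours = runs_per_month * manual_hours_per_run                # ~2300 hours
automated_hours = runs_per_month * automated_minutes_per_run / 60   # ~83 hours

print(f"Manual:    ~{manual_hours:.0f} hours/month")
print(f"Automated: ~{automated_hours:.0f} hours/month")
print(f"Freed up:  ~{manual_hours - automated_hours:.0f} hours/month")
```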

 

NEW HITACHI AUTOMATION DIRECTOR 8.5.1

We are constantly investing in features that help customers optimize their VSP F and VSP G flash deployments. Introducing Hitachi Automation Director 8.5.1, with automatic flash pool optimization.

 

This new feature reduces the manual processes associated with increasing flash pool sizes, which previously took seven steps.

 

With HAD 8.5.1, this can now be collapsed into two steps (see figure below). That’s roughly a 70% reduction in manual steps, and the benefits compound because HAD will automatically increase the pool size without further storage admin or user intervention.

 

HAD.png

 

When it comes to choosing a solution for critical flash workloads, it’s important to look at the entirety of the solution. Your VSP flash deployments can reach their maximum effectiveness with Hitachi Automation Director, which reduces manual provisioning processes by up to 90%. This significantly lowers the risk of human error and redirects those IT resources to other strategic projects.

Indy.jpg

 

This week we announced new versions of several products in our Hitachi Content Platform portfolio. These included HCP v8.0, our object storage platform; HDI v6.1, our cloud gateway; HCP Anywhere v3 for file sync and share; and general availability of Hitachi Content Intelligence for big data exploration and analytics, which we announced in 4Q 2016. In my last post I talked about some of the many new features and capabilities that are integrated in this portfolio, which Gartner and IDC recognize as the only offering that allows organizations to bring together object storage, file sync and share, and cloud storage gateways to create a tightly integrated, truly secure, simple and smart cloud storage solution.

 

One benefit that I failed to mention is what this portfolio provides for the DevOps process. The agility and quality of the products in this portfolio are a great example of the DevOps process used by the HCP development and QA teams. Hitachi Content Platform, which is recognized by the industry for cloud computing excellence, is also one of the tools in our DevOps tool chain in Waltham, where we develop the Hitachi Content Platform portfolio.

 

Recently there have been some articles about difficulties in orchestrating the DevOps tool chain. A DevOps tool chain is a set or combination of tools that aid in the development, delivery and management of applications throughout the software development lifecycle. While DevOps has streamlined the application development process compared with the old "waterfall" approach, DevOps tool chains are often built from discrete and sometimes disconnected tools, making it difficult to understand where the bottlenecks are in the application delivery pipeline. Many of these tools are great at performing their intended function but may not apply all the disciplines needed for enterprise data management.

 

HCP's main benefit for DevOps is its high availability, which helps insulate downstream test automation tools from software upgrades, hardware maintenance and failures, as well as from availability issues in upstream tools. We use Jenkins for continuous integration, and if it goes down or is being upgraded, the downstream test tools don't notice or care, since they are fetching builds from the always-online HCP.

 

HCP’s Metadata Query Engine (MQE) helps abstract away where the artifacts are located and how they are named in a namespace. As long as the objects are indexed, MQE will find them and present them to the client, regardless of the object name and path. Even further downstream, after the automated tests are run, we can again take advantage of HCP by storing the test results and logs on the HCP (preferably in a separate namespace from the build artifacts). HCP’s security and encryption features ensure a secure enterprise environment, which is not always available with DevOps tools. DevOps is about automation, and HCP can automate managing space consumption by using retention and disposition to "age out" and delete old logs or old builds, or tier them off elsewhere for long-term storage (such as an HCP S storage node or public cloud). HCP also provides an automated backup solution, using its replication feature to get copies of the backups off-site for DR. HCP Anywhere and HDI are also valuable for ensuring a secure and available distributed environment.
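To make the pattern concrete, here is a minimal sketch of a downstream test job pulling a build artifact from an object store namespace and pushing its results to a separate namespace over an S3-compatible interface. The endpoint, credentials, bucket/namespace and key names are hypothetical, and this is not the HCP team's actual pipeline code, just an illustration of the idea.

```python
# Sketch: downstream test job using an S3-compatible object store as the
# always-online artifact repository. All names and endpoints are hypothetical.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://tenant.hcp.example.com",   # hypothetical S3-compatible endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Fetch the latest build produced by CI (e.g., Jenkins) from the "builds" namespace.
s3.download_file("builds", "myapp/1234/myapp-1.2.3.tar.gz", "/tmp/myapp-1.2.3.tar.gz")

# ... run the automated tests against the downloaded artifact ...

# Store logs and results in a separate namespace so retention and disposition
# policies can age them out independently of the build artifacts.
s3.upload_file("/tmp/results.xml", "test-results", "myapp/1234/results.xml")
```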

 

There is no doubt that DevOps has contributed to the speed and agility of HCP development. In return, the use of HCP in the DevOps tool chain has made development more secure and available, and has facilitated the quality integration of features and products in the HCP portfolio.

 

Enrico Signoretti, Head of Product Strategy at OpenIO, in a March 2017 Gigaom report called Sector Roadmap: Object storage for enterprise capacity-driven workloads, wrote the following: “The HCP (Hitachi Content Platform) is one of the most successful enterprise object storage platforms in the market. It has more than 1700 customers, with an average cluster capacity between 200 and 300TB. … Alongside the hardware ecosystem, HDI (remote NAS gateway) and HCP Anywhere (Sync & Share) stand out for the quality of their integration and feature set.”

 

If you already have or are planning to implement an HCP, consider including DevOps as another tenant in your multitenant HCP. For reference you can download this white paper written by our HCP developers on how they use HCP as a Continuous Integration Build Artifact storage system for DevOps.

Digital transformation is the transformation of business activities and processes to fully leverage the capabilities and opportunities of new digital technologies. There are many cases of companies that have adopted new technologies but have not transformed their business processes or business models to fully leverage the technology. An example is a bank that adopts mobile technology so that a loan request can be entered on a mobile app. While the mobile app is easy to use, it can still take weeks to approve and process the loan if the back office processes are not changed. This puts the bank at a disadvantage when competing with fintech companies that can process a loan request in two days.

 

Fintech companies have an advantage in that they are technology companies that were born in the cloud and do not have the legacy that traditional financial companies have. In order to compete, traditional companies must take a bimodal approach to digital transformation. They must continue to enhance and modernize their core applications while they transition to new systems of innovation. In the case of the bank processing loans, it needs access to more information in order to evaluate the creditworthiness of the applicant. If that information is locked up in different silos, it will be difficult to provide the agility required to shorten the loan process.

 

Traditional companies may have an advantage over fintechs if they can unlock the wealth of data that is already in their legacy systems. The key to success will be the ability to free the data from legacy silos and use it to augment new sources of data that can be analyzed to create new business opportunities. Object storage, with its rich metadata capabilities, open interfaces and scalability, can eliminate these silos. However, no single object storage product can do it all, from mobile to edge to cloud to core. There must be an integrated portfolio of object management products that can optimize and modernize the traditional core systems while activating and innovating new business processes with technologies like cloud, mobile, analytics and big data. The lack of an integrated approach will create increased complexity and costs.

 

Today Hitachi Data Systems announced enhancements to our Hitachi Content Platform portfolio that will further enhance its value as a digital transformation platform.

 

HCP Porfolio.jpg

 

Hitachi Content Platform (HCP) is an object storage solution that enables IT organizations and cloud service providers to store, share, sync, protect, preserve, analyze and retrieve file data from a single system. Version 8 is the latest version of the object storage repository. The HCP repository is usually mirrored across two sites to provide availability and eliminate the need for backup. Geo-protection services have been added, which essentially erasure-code objects across three to six sites so that data is protected if any site fails. Erasure coding also uses less storage capacity than full replication of data: across three sites you save 25% capacity compared to mirroring, and with six sites you can save up to 40%. Erasure coding can affect retrieval time, so provisions are offered to retain whole copies locally for a specified time. Support for KVM, the use of 10TB drives in the storage nodes, a 55% increase in objects per node and simplified licensing have also been added to improve the economics of private versus public cloud.
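As a rough sketch of where those capacity savings come from, the arithmetic below compares full mirroring with two illustrative erasure-coding layouts. The data/parity splits are assumptions chosen to reproduce the approximate figures quoted above; HCP's actual geo-protection schemes may differ.

```python
# Illustrative capacity math: geo-distributed erasure coding vs. two-site mirroring.
# The data/parity splits are assumptions, not HCP's documented schemes.

def overhead(data_chunks: int, parity_chunks: int) -> float:
    """Raw capacity stored per unit of user data."""
    return (data_chunks + parity_chunks) / data_chunks

mirroring = 2.0                    # two full copies across two sites
ec_3_sites = overhead(2, 1)        # e.g., 2 data + 1 parity chunk -> 1.5x raw
ec_6_sites = overhead(5, 1)        # e.g., 5 data + 1 parity chunk -> 1.2x raw

print(f"3-site erasure coding saves {(1 - ec_3_sites / mirroring):.0%} vs. mirroring")  # ~25%
print(f"6-site erasure coding saves {(1 - ec_6_sites / mirroring):.0%} vs. mirroring")  # ~40%
```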

 

Hitachi Data Ingestor (HDI) is an elastic-scale, backup-free cloud file server with advanced storage and data management capabilities. It appears as a standard NFS or CIFS server to a user, but every file is replicated over HTTP to an HCP, so it does not require backup. When the local storage begins to reach a threshold, the older files are stubbed out to HCP, so it appears to be a bottomless filer. HDI v6.1 has been enhanced with read/write content sharing and universal file migration improvements.

 

HCP Anywhere provides file synchronization and sharing on mobile devices, with end-user data protection and mobile device management, to increase productivity and reduce risk in a secure, simple and smart way. Version 3 provides a next-generation Windows client with VDI support and a thin desktop client (selective caching and syncing). There are also enhancements for data protection, collaboration and usability, such as Android app improvements. You can use “My Documents” as your HCP Anywhere directory and protect your files from ransomware: any alteration of your file is stored as a new version, so recovery is simply a matter of accessing the previous version.

 

Hitachi Content Intelligence was announced in 4Q 2016 and is now generally available. It rounds out the HCP portfolio with new data analytics and recommendation capabilities. With Content Intelligence, organizations can connect to and aggregate data from siloed repositories, transforming and enriching data as it’s processed and centralizing the results for authorized users to access. Combined with our existing HCP portfolio, Hitachi is now the only object storage vendor in the market with a seamlessly integrated cloud-file gateway, enterprise file synchronization and sharing, and big data exploration and analytics. Hitachi Content Intelligence is capable of connecting to and indexing data that resides on HCP, HDI, HCP Anywhere and cloud repositories. Once data repositories are connected, Hitachi Content Intelligence supports multiple processing workflows, with analytics, extraction, transformation and enrichment stages that can be applied as your data is processed.

 

Last year Gartner and IDC gave the Hitachi Content Platform portfolio their highest marks, recognizing it as the only offering that allows organizations to bring together object storage, file sync and share, and cloud storage gateways to create a tightly integrated, truly secure, simple and smart cloud storage solution. Hitachi is further solidifying that lead with today's announcements.

 

For more details, see the announcement letter.

Hu Yoshida

HCP Stands the Test of Time

Posted by Hu Yoshida May 26, 2017

Archive Storage.png

 

Recently FINRA fined an investment firm $17 million for inadequate investigation of anti-money-laundering “red flags” during the period from 2006 to 2014, when the firm saw significant growth in its business. That successful growth was not accompanied by growth in the systems needed to assure compliance. The firm had patchwork solutions for large volumes of data, resulting in data silos that made it difficult to combine the data for investigations.

 

The period from 2006 to 2014 is a very long time in terms of the evolution of technology and changes in regulations and financial instruments, so I can see how this situation could easily happen. If you do not have a data management system that can scale, consolidate silos of data, leverage technology advances and respond to regulatory and business changes, compliance can easily get out of hand.

 

Ten years ago, data management for compliance was mainly dependent on CAS (content-addressable storage), which was successfully marketed by EMC as Centera. CAS is based on hashing the object (file) and using that hash as the address into the CAS repository. The hash could also be checked on retrieval to show immutability, which was a plus for compliance. Another plus was that it had a flat structure and could grow to large capacities of low-cost storage. Access to Centera required an API, which made it proprietary, but that did not deter users who saw it as a solution for retention of compliance data. Many ISVs were happy to jump in and provide application-specific solutions based on the Centera API, since it provided them with proprietary lock-in.
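For readers unfamiliar with the CAS idea, here is a toy sketch of the concept, not Centera's actual implementation: the hash of the content serves as the object's address, and re-hashing on retrieval demonstrates that the content has not changed.

```python
# Toy sketch of content-addressable storage (CAS): the object's hash is its
# address, and re-hashing on retrieval demonstrates immutability.
import hashlib

store = {}  # toy in-memory repository

def cas_put(content: bytes) -> str:
    address = hashlib.sha256(content).hexdigest()
    store[address] = content
    return address                      # the hash IS the address

def cas_get(address: str) -> bytes:
    content = store[address]
    # Integrity check: the content must still hash to its own address.
    assert hashlib.sha256(content).hexdigest() == address
    return content

addr = cas_put(b"compliance record, 2006-02-14")
print(addr, cas_get(addr))
```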

 

Hitachi Data Systems' offering at that time was a product called HCAP (Hitachi Content Archive Platform), which was developed in partnership with Archivas. Hitachi and Archivas took the approach of indexing the metadata and content as the files were written to the archive, so that HCAP had awareness of the content and could provide event-based updating of the full-text and metadata index as retention status changed or as files were deleted. Hashing of the object was also provided, but for immutability, not for addressing. While proprietary CAS solutions focused on storing content, HCAP focused on accessing it. The interfaces to HCAP were non-proprietary, supporting NFS, CIFS, SMTP, WebDAV and HTTP, and supported policy-based integration from many distributed or centralized repositories such as e-mail, file systems, databases, applications and content or document management systems. The elimination of silos enabled users to leverage a set of common and unified archive services such as centralized search, policy-based retention, authentication and protection.

 

In 2007 Hitachi Data Systems acquired Archivas and shortly thereafter changed the product's name to Hitachi Content Platform, since it offered more capabilities than just archiving. HCP is a secure, distributed, object-based storage system that was designed to deliver smart, web-scale solutions. HCP obviated the need for a siloed approach to storing an assortment of unstructured content. With massive scale, multiple storage tiers, multi-tenancy and configurable attributes for each tenant, HCP could support a range of applications on a single HCP instance and combine the data for in-depth investigations. As technology evolved, HCP added support for additional interfaces like Amazon S3 and OpenStack Swift, the latest advances in server and storage technology like VMware and erasure coding, and numerous enhancements in security, clustering and data management.

 

HCP is designed for the long term, with open interfaces and non-disruptive hardware and software upgrades that take advantage of the latest technology solutions and business trends. A customer who purchased HCAP in 2006 could have non-disruptively upgraded through multiple generations of hardware and seven versions of HCP. More importantly, they could adopt new technologies and information management practices as new initiatives like cloud, big data, mobile and social evolved. While HCP remains up to date and positioned for future growth, many analysts, including Gartner, are claiming that Centera is obsolete and are recommending compliance archiving alternatives to Centera after the Dell acquisition of EMC. With an estimated 600 PB of data on Centera, migration will be a major problem.

 

HCP is at the core of Hitachi's object storage strategy, and Hitachi Data Systems is unique in the way that it has expanded its object storage portfolio around HCP.

  • Hitachi Data Ingestor (HDI) is a cloud storage gateway that enables remote and branch offices to be up and running in minutes with a low-cost, easy-to-implement file serving solution for both enterprises and service providers.

  • Hitachi Content Platform Anywhere (HCP Anywhere) is a secure enterprise file sync-and-share solution that enables a more productive workforce with secure access and sharing across mobile devices, tablets and browsers.

  • Hitachi Content Intelligence (HCI) connects and indexes data that resides on HCP, HCP Anywhere, HDI and cloud repositories to automate the extraction, classification and categorization of data.

 

Initially, in 2006, HCP used DAS for small and intermediate configurations and SAN-attached storage for large enterprise configurations. Today, HCP configurations include low-cost, erasure-coded HCP S10 and S30 network-attached storage nodes as well as public cloud, enabling hundreds of PB of object storage under one control without the need for a SAN. HCP server nodes have been consolidated to one model, the HCP G10, and HCP can also run on a VM. By separating logical services from physical infrastructure, HCP allows both to scale independently while continuing to utilize existing assets.

 

HCP’s track record has proven that it can support your long-term and changing requirements for archive, compliance and analytics. You can be sure that there will be a version 8 of HCP as it evolves to leverage new technologies and information management practices. You can also be sure that version 8 and the integrated portfolio of HDI, HCP Anywhere and HCI will continue with non-disruptive upgrades.

Ransomware.jpg

Today, May 12, there has been a massive ransomware attack that began with the NHS system in the UK, where it affected dozens of hospitals, and spread to six continents, affecting an estimated 75,000 machines!

 

According to gizmodo.com, “Unknown attackers deployed a virus targeting Microsoft servers running the file sharing protocol Server Message Block (SMB). Only servers that weren’t updated after March 14 with the MS17-010 patch were affected; this patch resolved an exploit known as EternalBlue, once a closely guarded secret of the National Security Agency, which was leaked last month by ShadowBrokers, a hacker group that first revealed itself last summer.

The ransomware, aptly named WannaCry, did not spread because of people clicking on bad links. The only way to prevent this attack was to have already installed the update.”

 

Attached is a screen shot shown on Kaspersky Lab's blog on WannaCry. The ransom started at $300 worth of bitcoin but has since been raised, according to the Kaspersky post.

 

wannacry_05-1024x774.png

 

The scope of this attack is unprecedented and underscores the need to keep current with security patches. While this attack may not have come from clicking on bad links, as a reminder, many of these attacks start from a link or attachment inside an email. Do not click on links or open attachments in emails that you are not expecting. It is also recommended that you reboot your computer on a regular basis so that any pending security patches can complete.

 

It also underscores the need to have a recovery plan. Recovering from ransomware attacks may be possible if backups have been taken and you have a point-in-time copy from before the attack. Scott Sinclair of Enterprise Strategy Group recommends the use of (some vendor's) object storage in a recent report: Object storage helps with ransomware protection.

 

Scott notes that some object storage systems, like Hitachi’s HCP, support a feature called object versioning. Object storage systems are designed for write once, read many (WORM). With object versioning, any change or update to the object is written as a new version, while the previous version is retained as well. When malware encrypts the data to prevent its use, the encrypted data is written as a new version and the original object is not changed. In other block or file systems, the original data is locked up with encryption and not available until a ransom is paid.

 

With HCP the storage admin simply sends out a command to roll affected objects back to their previous versions. This restoration is much faster, simpler and less costly than restoring data from a backup copy, if one is available and if it is current.
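As a rough sketch of the underlying idea, the snippet below rolls an object back to its pre-attack version over a generic S3-compatible, versioning-enabled namespace. The endpoint, credentials, bucket and key names are hypothetical, and HCP's actual administrative rollback command may differ from this illustration.

```python
# Sketch: rolling an object back to its previous (clean) version using an
# S3-compatible, versioning-enabled namespace. All names are hypothetical.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://tenant.hcp.example.com",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)
bucket, key = "file-shares", "finance/budget.xlsx"

# List all versions of the object (newest first). Assumes at least two exist.
versions = s3.list_object_versions(Bucket=bucket, Prefix=key)["Versions"]
current, previous = versions[0], versions[1]

# Removing the encrypted (newest) version exposes the clean previous version,
# which was never modified because every update is written as a new version.
s3.delete_object(Bucket=bucket, Key=key, VersionId=current["VersionId"])

print("Restored version written at", previous["LastModified"])
```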

 

For the systems affected by this attack, if such object storage is not in place and recovery from backups is too costly, they might just have to pay the ransom.

Laptop ban.png

Airlines told to "be prepared" for an expanded ban on carry-on electronic devices allowed on airplanes.

 

On February 2, 2016, an explosive that was hidden in a laptop detonated 20 minutes into the flight of a Somali airliner. It blew a hole in the airplane, but since the plane had not yet reached cruising altitude, it was able to return to the airport and land safely. Fortunately, the plane’s departure had been delayed for an hour, which prevented the explosion from occurring at cruising altitude, where it would have destroyed the plane and everyone on board. The explosives in the laptop were not detected during the x-ray screening prior to boarding.

 

As a result of this incident and a similar airliner downing in 2015, the United States Transportation Security Administration and United Kingdom transport security authorities placed a ban on electronic devices larger than a cell phone/smartphone in carry-on luggage on certain direct flights to the United States and the United Kingdom. As of March 24, 2017, the ban applies to U.S.- and U.K.-bound flights from eight countries in the Middle East.

 

On May 9, 2017, Homeland Security spokesman David Lapan confirmed to reporters that the U.S. administration is considering expanding the ban on laptops, to potentially include "more than a couple" other regions, including flights from Western Europe.

 

What does this mean for those of us who travel frequently? It means we can’t take our laptops on board to finish that last minute presentation before we arrive at our next meeting. While we might welcome the excuse to put off our work, it creates a bigger problem since it means we will have to pack our laptop in our checked luggage where it is subject to damage, loss and possible theft.

 

Hitachi Data Systems’ mobile computing policy prohibits company-issued laptops, tablets and mini tablets from being placed in checked baggage for just that reason. Packing our laptops in our checked luggage is not an option for us. However, that does not create a problem, since we use our HCP Anywhere file sync and share solution. All the files we need for a trip can be loaded into our HCP Anywhere folder, and we can retrieve them from our smartphones or a loaner PC at our destination.

 

It just so happens that I will be in Istanbul next week to participate in several conferences. My presentations are loaded in my HCP Anywhere folder, and I am leaving my laptop at home. I plan to pack one of my personal iPads in my checked baggage to use when I get there, since I prefer a larger screen than my iPhone, and I can FaceTime with the family. I have several iPads, since I seem to get one free every time I upgrade my iPhone. These iPads are cheap to replace, and they don't contain company data. I will spend my time on the flight sleeping, watching movies or doing email. This ban is likely to be extended to other countries, and possibly domestic airports, and that is fine with me, since it means another layer of security. Not having a laptop in my carry-on means a safer, more restful flight.

 

Since airlines are not the only places where I can lose a laptop while I am travelling, HCP Anywhere eliminates the liabilities of travelling with a company laptop altogether.

Hu Yoshida

Provenance and Blockchain

Posted by Hu Yoshida May 3, 2017

Declaration of independence.jpg

Last month two Harvard researchers stunned the experts with the discovery of a second parchment copy of the United States Declaration of Independence in the UK’s West Sussex Records Office. They tracked down the manuscript and confirmed its provenance.

 

Provenance is the history of the ownership and transmission of an object. In the world of art and antiquities, provenance includes the auction houses, dealers or galleries that have sold an item, the private or institutional collections in which the item has been held, and exhibitions where the item has been displayed. Provenance is defined by Merriam-Webster as “the history of ownership of a valued object or work of art or literature.” Today provenance can be extended to anything of value through the implementation of blockchain technology.

 

Blockchain technology is the technology behind Bitcoin. According to Wikipedia, a blockchain facilitates secure online transactions: it is a decentralized and distributed digital ledger that records transactions across many computers in such a way that the registered transactions cannot be altered retroactively. (I blogged about blockchain over a year ago and recently posted about the use of blockchain in Hitachi’s 150-million-member PointInfinity awards system.) Blockchain can provide a secure, immutable record of when an object was created, its history of use, and where it is now. Blockchain is all about provenance.
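To see why a blockchain ledger cannot be altered retroactively, consider this toy hash chain; it is a minimal illustration of the chaining idea only, not a real distributed ledger with consensus among many computers.

```python
# Toy hash chain: each block commits to the hash of its predecessor, so
# changing any historical record breaks verification of the chain.
import hashlib, json, time

def block_hash(block: dict) -> str:
    body = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def make_block(record: dict, prev_hash: str) -> dict:
    block = {"record": record, "prev_hash": prev_hash, "timestamp": time.time()}
    block["hash"] = block_hash(block)
    return block

def verify(chain: list) -> bool:
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):                    # contents altered
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:  # link broken
            return False
    return True

genesis = make_block({"event": "object created"}, prev_hash="0" * 64)
chain = [genesis, make_block({"event": "ownership transferred"}, genesis["hash"])]
print(verify(chain))                       # True
chain[0]["record"]["event"] = "tampered"   # try to rewrite history...
print(verify(chain))                       # False: the record no longer matches its hash
```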

 

While blockchain was originally about cryptocurrencies and financial transactions, I am now hearing about new applications of this technology in different fields almost daily, from startups as well as established companies.

diamond.gif

Everledger is a London startup that is using blockchain technology as a platform for provenance and for combating insurance fraud in the selling and trading of diamonds. Instead of relying on paper receipts or certificates that can be easily lost or tampered with, blockchain provides an electronic distributed ledger that is immutable. The chain can also be traced back to the creation of a particular diamond to certify that it is not a blood diamond: a diamond mined in a war-torn area and illegally traded to finance conflict there. Everledger plans to extend this to other luxury items.

 

odometer.jpg

Bosch, a 130-year-old German company, has created a system that uses an in-car connector to regularly send a vehicle’s mileage to its “digital logbook,” a blockchain-based system that stores the mileage in an unalterable way. If the odometer on the car is suspected to have been tampered with, its mileage can be checked against the mileage recorded in Bosch’s system via a smartphone app. A car owner could log their mileage on the blockchain and, when they attempt to sell their vehicle, receive a certificate of accuracy from Bosch that confirms the veracity of their car’s mileage. Carfax reports that in Germany, police have estimated that approximately every third car has been subject to odometer fraud, causing over €6 billion in damage per year. In addition to defrauding used car buyers and insurance companies, underreported mileage can result in unexpected failures due to inadequate maintenance. Bosch can prove the provenance of your car's mileage.

 

The provenance provided by blockchain can give us an immutable record of when something was created, its history, and its current state. Perhaps we can even use blockchain to eliminate “fake news.”

This past week I visited several customers in different business verticals. One was an energy and power company, another was an insurance company, and the others were in media and entertainment. A common theme in all these meetings was digital transformation and the disruption it is causing in all these sectors. The disruption was not so much about the technology, but more about the disruption to their business models by new startups in their industries. Each meeting also provided insight into the value of IoT.

 

Energy and Power

Energy DX.png

In the energy and power vertical, the old centralized model of a huge, capital-intensive energy generation plant with a huge transmission grid located far from consumers is giving way to a distributed model where energy is generated from renewable sources close to the consumer in microgrids. Many of these large energy generation plants are being used at a fraction of their capacity, making it difficult for established power companies to generate revenue. While renewable sources of energy like solar and wind are clean, they are subject to the vagaries of weather and need new technologies around IoT, big data and predictive analytics to provide reliable power. With deregulation, new competitors are entering the utility business. Some municipalities are setting up their own mini-grids, sharing power generated from solar panels on residences and businesses. While power companies may have installed smart meters to reduce the cost of meter readers, they have not yet tapped the real potential of that connection into the home to manage home devices. See how Hitachi is working with the island of Maui in the state of Hawaii on a smart grid demonstration project to solve energy problems by connecting residential solar panels with wind farms and electric vehicles.

 

Insurance

In the insurance business there are many new opportunities, which have given rise to a new class of fintechs classified as insurtechs.

Insurtech.png

Like their banking counterparts, many insurance companies are partnering with these insurtech companies to drive innovation. The particular company that I visited had just created a unit to drive digital innovation across its businesses, and its CIO was promoted to head up this unit as the chief technology innovation officer. The group is tasked with identifying emerging customer segments, leveraging data and analytics, and helping the company’s distribution partners innovate and strategically grow their businesses. They are already partnering with several insurtech companies. They are also subject to many regulations, including GDPR, which I covered in a previous post. Insurance companies are also connecting with IoT. Real-time data from embedded sensors in a wide range of internet-connected devices like automobiles and Fitbits, along with advanced analytics, will allow insurers to offer many new and enhanced products and services. Read this link to see how Hitachi is creating new IoT-driven insurance services that can transform insurance from being reactive, responding to accidents and other incidents after the fact, to proactive, providing services in advance.

 

Media and Entertainment

The media and entertainment business is also undergoing massive change. Twenty years ago the definition of broadcast and entertainment was very simple: content came from the establishment and was sent in one direction. Today’s social and video networks are changing the entire business model of content, publishers, broadcasting, entertainment, news, advertising and digital rights. The online and mobile ecosystem also changes how content reaches viewers, and on-demand viewing has made the fixed, mediated schedule of linear programming seem obsolete. For example, to deliver contextual advertising you need to truly understand your audience. This starts with combining a lot of different data: information about your viewers and their demographics, their viewing patterns, and metadata about the content they are viewing. This data comes from many sources, in different formats, with different delivery mechanisms. A data integration tool like Pentaho will help to provide market insights, operational insights and risk insights.

Hitachi TV.png

The infrastructure is also undergoing major changes. Streaming video completely bypasses the traditional video-aggregation and distribution models built around broadcast networks, cable and satellite, disrupting long-standing value chains and dedicated infrastructure (for example, broadcast towers, cable lines and satellites) that have historically been critical to the television industry. Hitachi also has vertical expertise that can help with solutions on the operational side. MediaCorp Pte Ltd, Singapore's leading media company, overcomes transmission challenges using Digital Microwave Link from Hitachi.

 

An IoT Approach to Digital Transformation

In each of these businesses there was a very heavy IT focus on data integration and analytics. But there was also a need for an OT focus when it comes to transforming the business model. This is where Hitachi has an advantage, with its expertise in IT and OT and the capability to integrate the two into IoT. While I met with just three different verticals, I expect that the same would be applicable in other verticals.

In the past I have blogged about fintech companies that are disrupting the financial markets by offering banking services with more consumer efficiency, lower costs and greater speed than traditional financial companies. Fintechs have been a catalyst for traditional banks to search for solutions to automate their services in order to compete. In fact, financial companies are embracing the fintechs, and the real competition is with technology companies!

 

Fintech city.png

In an interview with Digital News Asia, Sopnendu Mohanty, chief fintech officer at the Monetary Authority of Singapore (MAS), is quoted:

 

“When people talk about fintech, the natural understanding is a technology company doing banking. That’s the classic definition of fintech, but the reality is something different – 80% of fintech [startups] … are actually helping banks to digitize, challenging current processes and technology. They are actually disrupting the large tech players in a way that the banks’ technology expenses are getting smaller, while they are getting better customer services.”

 

“What is visible to consumers is the disruption to financial services, but the real disruption is happening to the IBMs of the world,” said MAS’ Sopnendu…. “Fintech companies have not exactly created new financial products, they are still moving money from A to B, they are still lending money, a classic banking product,” he said.

 

“What has changed is the distribution process and customer experience through technology and architecture.”

 

A CIO of a bank in Asia showed me a mobile app that they developed for loan applications. It was very easy to use. However, processing the loan still took over a week because the back-end systems were still the same. There are a number of fintech companies that could process that loan application in a few days at lower cost, and make loans available in smaller amounts to a farmer or a pedi-cab driver whom the larger banks could not afford to service.

 

Banks have one major problem to overcome: how do they disengage from the legacy technology upon which they have built all their core processes in order to adopt the agile fintech technologies and architectures? Many remember the painful, drawn-out process of converting their core financial systems from monolithic mainframes to open systems. There are many banks that still use mainframes for legacy apps.

 

The way that the banks transitioned from mainframes to open systems required a bimodal approach, modernizing their mode 1 core systems while transitioning to the new mode 2 architectures. Some outsourced that transition. The same approaches are needed today, and fintechs can help the banks retain their customers during this transition.

 

Another element in this transition is the regulators, and there is a class of fintechs, called regtechs, that are applying some of the same technologies to automate compliance with regulations. Machine learning, biometrics and the interpretation of social media and other unstructured data are some of the technologies being applied by these startups.

 

Technology vendors must become proficient in these technologies and move beyond them to AI, blockchain and IoT. In addition, they must provide technology that can bridge and integrate mode 1 and mode 2 infrastructure, data and information: tools like converged and hyperconverged platforms, object stores with content intelligence, and ETL (extract, transform and load) automation.

 

Technology companies should not be looking at each other for competition; they need to be looking at the fintechs and understanding how they have innovated in bringing business and technology together.

 

Hitachi established a fintech lab in Santa Clara last year to work with customers and partners, and was a founding member of the open source Hyperledger project, started in December 2015 to support open source blockchain distributed ledgers and related tools. Our work on the Lumada IoT platform will also help us to compete in new technology areas like AI.

The Hitachi Data Systems object-based storage (OBS) solution, HCP (Hitachi Content Platform), has earned another analyst assessment as a leader in the OBS market. In addition to being named a leader in IDC’s MarketScape: Worldwide Object-Based Storage 2016 Vendor Assessment and Gartner’s 2016 Critical Capabilities for Object Storage report, HCP was ranked number 1 in the March 2017 GigaOm Sector Roadmap: Object storage for enterprise capacity-driven workloads. This latest report has HDS widening the gap with the other vendors in the OBS market, with a score of 4.5 while the eight closest competing vendors are clustered together in a range from 3.3 to 3.8.

GigaOm HCP.png

 

Interpreting the Chart

The GigaOm chart is:

  • An indication of a company’s relative strength across all vectors (based on the score/number in the chart)
  • An indication of a company’s relative strength along an individual vector (based on the size of the ball in the chart)
  • A visualization of the relative importance of each of the key Disruption Vectors that GigaOm has identified for the enterprise object storage marketplace. GigaOm has weighted the Disruption Vectors in terms of their relative importance to one another.

 

GigaOm is a technology research and analysis firm. They describe themselves as being “forward-leaning, with a futures-oriented take on the trends and tools that are shaping the economy of the 21st century: Cloud, Data, Mobile, Work Futures, and the Internet of Things.”  GigaOm reaches over 6.5 million monthly unique readers, with a mobile reach of over 2 million monthly visitors. This report was authored by Enrico Signoretti, a thought leader in the storage industry, a trusted advisor and founder of Juku Consulting.

 

This report is a stand-alone assessment on object store and does not review other products included in the HCP Portfolio, such as HDI, HCP Anywhere, and Hitachi Content Intelligence.  The report is a vendor-level analysis that examines an expanding segment of the market—object storage for secondary and capacity-driven workloads in the enterprise—by reviewing major vendors, forward-looking solutions, and outsiders along with the primary use cases.  Vendors covered in the report include: Scality, SwiftStack, EMC ECS, RedHat Ceph, HDS HCP, NetApp StorageGRID Webscale, Cloudian, Caringo, DDN, and HGST.

 

The heaviest weighted Disruption Vector, at 30%, was the core, referring to the core architecture. According to GigaOm: “Most of the basic features are common to all object storage systems, but the back-end architecture is fundamental when it comes to overall performance and scalability. Some products available in the market have a better scalability record and show more flexibility than others when it comes to configuration topology, flexibility of data protection schemes, tiering capabilities, multi-tenancy, metadata handling, and resource management. Some of these characteristics are not very relevant to enterprise use cases, especially when the size of the infrastructure is less than 1PB in capacity or built out of a few nodes. However, in the long term, every infrastructure is expected to grow and serve more applications and workload types.”

 

Core architecture is what distinguishes HCP from all the rest. The core architecture has enabled Hitachi Data Systems to expand the HCP portfolio with Hitachi Data Ingestor (HDI), an elastic-scale, backup-free cloud file server with advanced storage and data management capabilities; HCP Anywhere, a simple, secure and smart file-sync-and-share solution; and Hitachi Content Intelligence (HCI), software that automates the extraction, classification, enrichment and categorization of data residing on both HDS and third-party repositories.

 

These are valuable capabilities that Hitachi Data Systems has built on its HCP object-based storage system, and they are not included in the OBS evaluations by the different analysts. Last week I blogged about the value of a centralized data hub and the importance of cleansing and correcting the data as you ingest it: moving data quality upstream and embedding it into the business process, rather than trying to catch flawed data downstream and then attempting to resolve the flaw in all the different applications used by other people. The Hitachi Content Intelligence software can help you cleanse and correct data that you are ingesting into the HCP object-based storage. This is a capability that is not available in other OBS offerings.

 

If you consider the breadth and depth of the HCP portfolio rather than just the object storage traits that are common to all OBS products, you will realize the full power of this portfolio. While GigaOm does not consider HDI, HCP Anywhere or Hitachi Content Intelligence directly in its Disruption Vectors, they do point out that:

 

“Core architecture choices are not only important for scalability or performance. With better overall design, it is easier for the vendor to implement additional features aimed at improving the platform and thus the user experience as well.”

 

This is what Hitachi Data Systems has been able to do with HCP. Stay tuned for additional features and functions in the HCP portfolio as we continue to evolve to meet new market challenges.

In my trends for 2017, I called out the movement to a centralized data hub for better management, protection and governance of an organization’s data.

 

Rooster.png

“2017 The year of the Rooster teaches the lessons of order, scrutiny and strategic planning.”

 

Data is exploding and coming in from different sources as we integrate IT and OT, and data is becoming more valuable as we find ways to correlate data from different sources to gain more insight, or as we repurpose old data for new revenue opportunities. Data can also be a liability if it is flawed, accessed by the wrong people, exposed or lost, especially if we are holding that data in trust for our customers or partners. Data is our crown jewels, but how can we be good stewards of our data if we don’t know where it is: on someone’s mobile device, in an application silo, in an orphan copy, or somewhere in the cloud? How can we provide governance for that data without a way to prove immutability and show the auditors who accessed it and when? And how can we show that the data was destroyed?

 

For these reasons, we see more organizations creating a centralized data hub for better management, protection and governance of their data. This centralized data hub will need to be an object store that can scale beyond the limitations of file systems, ingest data from different sources, cleanse that data, and provide secure multi-tenancy with extensible metadata that supports search and governance across public and private clouds and mobile devices. Scalability, security, data protection and long-term retention will be major considerations. Backups will be impractical and will need to be eliminated through replication and versioning of updates. An additional layer of content intelligence can connect and aggregate data, transforming and enriching data as it’s processed, and centralize the results for authorized users to access. Hitachi Content Platform (HCP) with Hitachi Content Intelligence (HCI) can provide a centralized object data hub with a seamlessly integrated cloud-file gateway, enterprise file synchronization and sharing, and big data exploration and analytics.

 

Creating a centralized data hub starts with the ingestion of data which includes the elimination of digital debris and the cleansing of flawed data. Studies have shown that 69% of information being retained by companies was, in effect, “data debris,” information having no current business or legal value. Other studies have shown that 76% of flaws in organizational data are due to poor data entry by employees. It is much better to move data quality upstream and embed it into the business process, rather than trying to catch flawed data downstream and then attempting to resolve the flaw in all the different applications that are used by other people. The Hitachi Content Intelligence software can help you cleanse and correct data that you are ingesting and apply it to the aggregate index (leaving the source data in its original state), or apply the cleansing permanently, when the intent of ingest and processing is to centralize the data on an HCP via write operations.

 

When data is written to the Hitachi Content Platform, it is encrypted, single-instance stored with safe multitenancy, enriched with system and custom metadata, and replicated for availability. The data is now centralized for ease of management and governance. RESTful interfaces enable connection to private and public clouds. HCP Anywhere and Hitachi Data Ingestor provide control of mobility and portability for mobile and edge devices. Hitachi Content Intelligence can explore, detect and respond to data queries.
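As a minimal sketch of what "ingesting with metadata attached" can look like, the snippet below lands a record in a centralized object hub with custom metadata over a generic S3-compatible interface. The endpoint, credentials, namespace and metadata field names are illustrative assumptions, not a prescribed HCP configuration.

```python
# Sketch: writing a record to a centralized object hub with custom metadata
# attached at ingest, so it can later be searched and governed. All names
# and endpoints are hypothetical.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://tenant.hcp.example.com",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

with open("loan-application-8842.pdf", "rb") as f:
    s3.put_object(
        Bucket="data-hub",
        Key="ingest/2017/05/loan-application-8842.pdf",
        Body=f,
        Metadata={                      # custom metadata travels with the object
            "source-system": "mobile-loan-app",
            "record-class": "customer-application",
            "retention-until": "2027-05-31",
        },
    )
```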

 

HCO suite.png

 

Scott Baker, our Senior Director for Emerging Business Portfolio, recently did a webcast about the use of this HCP suite of products in support of GDPR (General Data Protection Regulation), which is due to be implemented by May 25, 2018, and will have a major impact on organizations that do business with EU countries. The transparency and privacy requirements of GDPR cannot be managed when data is spread across silos of technology and workflows. (You can see this webcast at this link on BrightTalk.)

 

In this webcast he gave a use case of how Rabo Bank used this approach to consolidate multiple data sources to monitor communications for regulatory compliance.

Rabo architecture.png

Rabo Bank is subject to a wide range of strict government regulations, with penalties for non-compliance, across various jurisdictions, and it had too many independently managed data silos, including email, voice and instant messaging, with some data stored on tape. The compliance team was reliant on IT for investigations, which limited their ability to respond and make iterative queries. Regulatory costs were soaring due to the resources required to carry out data investigations across silos. The results of implementing the HCP suite of products are shown in the slide below.

Rabo Results.png

For more information on this use case for a centralized data hub, you can link to this PDF.

fake news.jpg

 


 

Hitachi has taken a very different approach to flash storage devices. Unlike other flash array vendors that use standard SSDs, Hitachi has built its own flash module (FMD) from scratch to integrate the best available flash controller technology into its storage portfolio. While Hitachi does support SSDs, we recognized the opportunity to deliver higher-capacity flash drives with advanced performance, resiliency and offload capabilities beyond what SSDs provide by developing our own flash module with a custom flash controller.

 

Industry analyst George Crump stated in one of his blogs: "With flash memory becoming the standard in enterprise SSD systems, users need to look more closely at the flash controller architectures of these products as a way to evaluate them. The flash controller manages a number of functions specific to this technology that are central to data integrity and overall read/write operations. Aside from system reliability, poor controller design can impact throughput, latency and IOPS more than any other system component. Given the importance of performance to SSD systems, flash controller functionality should be a primary focus when comparing different manufacturers’ systems."

 

Gartner’s August 2016 report (ID G00299673) on Critical Capabilities for Solid-State Arrays recognized this as a trend: “To gain increased density and performance, an increasing number of vendors (e.g., Hitachi Data Systems, IBM, Pure Storage and Violin Memory) have created their own flash NAND boards, instead of using industry-standard solid-state drives (SSDs). Dedicated hardware engineering has reappeared as a differentiator to industry-standard components, moving away from the previous decade's trend of compressed differentiation.” Hitachi Data Systems started this trend in 2012 with the announcement of our first FMD.

 

Competitors who lack the engineering capability to design their own flash devices use standard SSDs that were designed for the high-volume commodity PC and server markets. These competitors are trying to fight this trend by creating FUD around our FMD, claiming that the offload process causes significant backplane issues and that the loss of an FMD would impact performance and cause other issues due to the design of our offering. This is utter nonsense and comes under the category of “fake news.”

 

In the first place, the only function that is offloaded from the storage controller to the FMD is data compression, which is handled by two coprocessors in the FMD and has no impact on performance. Compare this to the software overhead of doing compression/decompression for selected SSDs in the storage array controller versus doing it in hardware in the FMD. Because of the performance impact of doing compression in the storage array controllers, storage administrators have the added task of managing the use of compression. With the FMD, you can turn on compression and forget it. Aside from the reporting of audit log and normal usage information, there is no significant sideband communication between the Hitachi VSP storage controller and the FMDs, so the claim that the offload process causes significant backplane issues is completely false.
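
To see why inline software compression competes with everything else the array controller has to do, consider the toy measurement below. It is plain Python with zlib, purely illustrative and not a model of any array's firmware; the point is simply that compression is real CPU work, which in the FMD is absorbed by its two dedicated coprocessors instead of the storage controller.

```python
# Toy illustration: the CPU time that inline software compression consumes.
# This is not array firmware; it only shows that compression is real work
# that either the controller CPU or a dedicated coprocessor must perform.
import os
import time
import zlib

# An 8 MiB buffer: half random (incompressible), half zeros (compressible).
block = os.urandom(4 * 1024 * 1024) + bytes(4 * 1024 * 1024)

start = time.perf_counter()
compressed = zlib.compress(block, 6)
elapsed = time.perf_counter() - start

print(f"compressed {len(block)} bytes to {len(compressed)} "
      f"({len(compressed) / len(block):.1%}) in {elapsed * 1000:.1f} ms")
```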

 

The management of a flash device is very compute and bandwidth intensive. SSDs rely on a controller to manage the performance, resilience, and durability of the flash device. As I/O activity increases, these functions cause controller bottlenecks, sporadic response times, and a poor customer experience, leaving IT organizations with more work managing the tradeoffs between workloads and data placement on different SSDs. We have published a white paper that explains the technology and the pitfalls of trying to do this with the limited architecture and compute power of standard SSDs. You can download this white paper via this link.

 

FMD Controller.png

 

The brain of the FMD is a custom-designed ASIC featuring a quad-core processor, 32 parallel paths to the flash memory, and 8 lanes of PCIe v2.0 connection to the external SAS target mode controllers. This ASIC is composed of more than 60 million gates, two coprocessors for compression and decompression, and direct memory access (DMA) assist. The ASIC compute power drives up to eight times more channels and 16 times more NAND packages than typical 2.5-inch SSDs. This powerful ASIC enables the FMD to avoid the limited capabilities of standard SSD controllers, which restrict the amount of flash that an SSD can manage and force the flash array controller and storage administrator to do more work in placing data on SSD drives. The FMD ASIC enables Hitachi Data Systems to deliver capacities of up to 14 TB in a single FMD today.

 

Unlike standard SSDs, the FMD was purpose-built for enterprise storage workloads. It was specifically designed to address intensive large-block random write I/O and streams of sequential write requests from applications such as software as a service, large-scale transaction processing, online transaction processing (OLTP) databases, and online analytic processing (OLAP). It also understands that storage array controllers format drives by writing zeros, so the FMD avoids writing the zeros, improving performance and durability. It can also erase all the cells, even the spare cells that the array controller cannot see, and report back the status of the erased cells through a SCSI read long command for auditing purposes. SSD storage arrays have no way to securely erase all the flash cells in an SSD, since they cannot see the spare cells and overwrites are always done to a new cell.
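
The zero-avoidance behavior can be pictured as a simple check in the write path. The sketch below is my own simplified model, not FMD firmware: blocks that arrive entirely zero-filled (as in a controller format pass) are recorded in a map rather than programmed into flash, which preserves both performance and cell endurance.

```python
# Simplified model of zero-write elimination; not actual FMD firmware.
# All-zero blocks are noted in a map rather than programmed into flash,
# saving program/erase cycles and bandwidth.
BLOCK_SIZE = 4096

flash_cells = {}     # logical block address -> data actually programmed
zero_map = set()     # logical block addresses known to be all zeros

def write_block(lba: int, data: bytes) -> None:
    assert len(data) == BLOCK_SIZE
    if data == bytes(BLOCK_SIZE):      # an all-zero block, e.g. from a format pass
        zero_map.add(lba)              # remember it; skip the flash program
        flash_cells.pop(lba, None)
    else:
        zero_map.discard(lba)
        flash_cells[lba] = data        # only real data reaches the flash

def read_block(lba: int) -> bytes:
    if lba in zero_map or lba not in flash_cells:
        return bytes(BLOCK_SIZE)       # synthesize zeros on read
    return flash_cells[lba]

# A "format" of 1,000 blocks programs nothing into flash in this model.
for lba in range(1000):
    write_block(lba, bytes(BLOCK_SIZE))
print(len(flash_cells), "blocks programmed after format")   # -> 0
```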

 

As for the loss of an FMD having an impact on performance, there are two cases: a planned threshold copy-replacement and an unplanned RAID reconstruction. With copy-replacement, the simple copy places a minor load on the FMD, but there is no impact on host I/O. There is an impact during a standard RAID reconstruction, but that is the same for any storage array, except that the higher performance of the FMD could shorten the reconstruction time, depending on what else is happening in the array.

 

To get the true news on Hitachi Data Systems’ FMD, please read our white paper, which explains it all.

It has been about a year since I blogged about the distributed ledger technology called blockchain and the establishment of the Hitachi Financial Innovation Lab in Santa Clara.

 

This year Hitachi has partnered with Tech Bureau to use the NEM-based Mijin blockchain platform for Hitachi’s point management solution, “PointInfinity,” which serves 150 million members and users. PointInfinity is a rewards system that allows you to earn points in one place and use them in another, similar to Plenti in the United States. PointInfinity makes use of data-driven results and application know-how, which allows merchants to deploy point-based and electronic money management systems. Members can design their own membership programs and point-of-sale (PoS) software for loyalty programs and special offers. It is a low-cost solution that can be implemented in a short period of time while offering a high level of security. Hitachi’s PointInfinity system has gained immense popularity over the past few years with service providers and dealers that utilize loyalty-based programs for frequent customers. It is the most popular points reward system in Japan, with the largest number of users.

 

The test began on February 9 and will progress with the goal of determining whether a private blockchain can meet the demands of a high-volume transaction system. In statements, Tech Bureau CEO Takao Asayama said he believes the trial could boost the perception of what could be a powerful use case in Japan, one that appeals to both consumers and corporations. Besides speeding the exchange of points between members, blockchain technology is projected to reduce running costs by over 90%, according to Tech Bureau executives.

 

Ramen shop.png

Mijin is software for building a private blockchain for use within a business or between businesses, deployed either in the cloud or in a company’s own data center. A product of Tech Bureau, the Mijin blockchain has proven its functionality as a banking ledger, a microfinance ledger, and an electronic money core banking system. Its related NEM cryptocurrency has proven itself in Japan as a counter currency, and more than 300 companies already deal with it. Now this technology will be used with Hitachi’s PointInfinity points program.

 

The successful completion of this test will see one of the largest operational deployments of blockchain technology with the 150 million members of PointInfinity.

 

While Hitachi is well known for bullet trains and enterprise IT systems, Hitachi innovation is also leading the way in other business areas. This project with PointInfinity and Tech Bureau helps open new ventures built around point-based membership and provides merchants with more services and rewards by engaging a larger audience. The goal of this experiment is to expand business from buyers, because the points are used as currency, and several new services are made possible by this development.

 

While this project uses blockchain technology from Tech Bureau, Hitachi has been investing in its own blockchain development and implementation. Hitachi is a founding member of the Hyperledger Consortium, an open source collaborative effort created to advance cross-industry blockchain technologies. It is a global collaboration, hosted by The Linux Foundation, including leaders in finance, banking, IoT, supply chain, manufacturing, and technology. Hitachi is also working with several companies on blockchain proofs of concept. One of these is a POC with the largest bank in Japan, Bank of Tokyo-Mitsubishi, testing a blockchain-based infrastructure that they developed to issue, transfer, collect, and clear electronic checks. The POC is being run in the fintech-friendly regulatory sandbox established by the Monetary Authority of Singapore.

 

As the year progresses I will bring you updates on Hitachi's progress with blockchain. Some of the links provided in this post are in Japanese and will require Google translate.

IEP stands for the Intercity Express Programme, which will replace the UK’s current 60+ year-old diesel intercity trains with state-of-the-art, high-speed electric trains. The new trains are from Hitachi and are packed with the latest technologies based on Hitachi AI Technology/H to create a cleaner, safer, smarter transport system for passengers, workers, and residents along the rail corridors.

 

Hitachi IEP.jpg

 

This train is being delivered as a service, and Hitachi is taking on the billions of dollars of up-front risk in the rolling stock and train control system, as well as building a factory and maintenance depots. This is a 27-year project, and Hitachi is confident that it can return a profit to its investors through the use of big data, IoT, and AI.

 

The challenge of converting a running system from old diesel trains to electric trains without disruption was answered in the same way that we handle IT conversions: through virtualization. Hitachi built a virtual train, an electric train with a diesel engine in the undercarriage that can run on diesel over the older tracks and on electricity over the newer tracks. When all the tracks are converted, the diesel engines can be removed so that the lighter electric train causes less wear on the tracks.

 

The trains will be built in County Durham at a new factory, which marks the return of train manufacturing to the north-east UK, supporting thousands of jobs and developing a strong engineering skills base in the region. The plant will employ 900 people by this spring, with more than double that number as new maintenance facilities are opened for this fleet. AI will play an important part in the efficiency of maintenance workers and rolling stock utilization. KPIs and sensors will be incorporated in measures to improve and enhance work efficiency based on the maintenance workers’ daily activities and levels of well-being (happiness). While many believe that AI will eliminate jobs, Hitachi AI is being applied to create jobs and enhance the work experience. By anticipating the relationships among time-related deterioration, the operating conditions of rolling stock, and worker well-being, Hitachi AI can be applied to improve the rolling stock utilization rate and detect the warning signs of system failure.

 

Hitachi AI is also used to analyze energy-saving performance in traction power consumption: the energy consumed by the traction power supply system, which is influenced by parameters such as carriage mass and speed, as well as track infrastructure data such as gradient and curve information. By managing acceleration and deceleration, AI showed that it can reduce power consumption and carbon emissions by 14% while maintaining the same carriage speed.

 

In addition to passenger amenities like Wi-Fi and power outlets, AI improves passenger comfort by reducing noise and vibration, and increases happiness with an onboard environment that manages air conditioning and airflow, sensing when doors are opened and closed. Residents along the rail corridors enjoy less noise and cleaner air. Commuters to London can enjoy a shorter commute, greater productivity during the journey, and the ability to locate their families outside the congestion of the city.

 

On June 30, 2016, the Great Western Railway unveiled its first Intercity Express Programme (IEP) Class 800 train, carrying invited passengers from Reading to London Paddington Station. This commemorated 175 years since the opening of the Great Western Main Line. The service is scheduled to go online in the summer of 2017 with a fleet of 57 trains and will run between London and Reading, Oxford, Swindon, Bath, Bristol, and South Wales, as well as on the north and south Cotswold lines.

 

 

For more information about the Hitachi AI Technology/H that is being applied in this train service, please link to this Hitachi Research paper. Hitachi AI Technology/H is a core part of our Lumada IoT platform to deliver social innovation.

 

Hitachi's strategy is focused on a double bottom line: one bottom line for our business and investors, and another for social innovation.