This post is part 3 of my Top IT Trends for 2018. Here we will cover three types of data that IT will need to store and manage as the applications that generate them become more prevalent in 2018. The first is video analytics, the second is blockchain, and the third is the use of biometrics for authentication.

 

6.  Wider adoption of Video analytics

Video content analytics will be a “third eye” for greater insight, productivity, and efficiency in a number of domains beyond public safety. Algorithms that automatically detect and characterize temporal, spatial, and relational events can apply to a wide range of businesses such as retail, healthcare, automotive, manufacturing, education and entertainment. Video, when combined with other IoT information like cell phone GPS and social media feeds, can provide behavior analysis and other forms of situational awareness. Hitachi has used video analytics at Daicel, a manufacturer of automotive airbag inflators, in its quality management system to increase product quality, reduce the cost of rework, and eradicate root causes of defects. Retailers are using video to analyze customer navigation patterns and dwell time so they can position products and sales assistance to maximize sales. Video analytics relies on good video input, so it requires video enhancement technologies like denoising, image stabilization, masking, and super resolution. Video analytics may be the sleeper among analytics applications in terms of ease of use, ROI, and generating actionable insight.
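To make the enhancement and analysis steps above concrete, here is a minimal sketch in Python with OpenCV, not Hitachi’s implementation: it denoises each frame and uses background subtraction to estimate dwell time in a single retail zone. The video file, zone coordinates, frame rate and activity threshold are all illustrative assumptions.

```python
# Minimal sketch: frame denoising plus a crude "dwell time" estimate for one zone.
# File name, zone coordinates, fps and threshold are hypothetical.
import cv2

cap = cv2.VideoCapture("store_camera.mp4")          # hypothetical input video
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)
zone = (200, 100, 400, 300)                          # x, y, width, height of a shelf area
frames_with_activity = 0
total_frames = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    total_frames += 1
    # Denoising: smooth sensor noise before analysis
    clean = cv2.fastNlMeansDenoisingColored(frame, None, 10, 10, 7, 21)
    # Foreground mask: pixels that differ from the learned background model
    mask = subtractor.apply(clean)
    x, y, w, h = zone
    roi = mask[y:y + h, x:x + w]
    # Count the zone as "occupied" if enough foreground pixels are present
    if cv2.countNonZero(roi) > 0.05 * roi.size:
        frames_with_activity += 1

cap.release()
fps = 25.0  # assumed frame rate
print(f"Approximate dwell time in zone: {frames_with_activity / fps:.1f} s "
      f"over {total_frames / fps:.1f} s of video")
```

A production pipeline would add stabilization, masking of privacy-sensitive areas, and person detection rather than raw foreground counting, but the structure is the same: clean the frames, then extract events from them.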

 

 

7.  Blockchain projects mature

Blockchain will be in the news for two reasons. The first is the use of cryptocurrencies, namely Bitcoin. In 2017, Bitcoin accelerated in value from about $1,000 to over $11,000 USD by the time of this posting! One of the drivers for Bitcoin is its growing acceptance in countries plagued by hyperinflation, like Venezuela and Zimbabwe, where Bitcoin provides a “stable” currency. Japan and Singapore have indicated that they will create fiat-denominated cryptocurrencies by 2018. These systems will be run by banks and managed by regulators. Consumers will use them for P2P payments, ecommerce and funds transfers. This means banks will have to build the IT capacity to manage accounts in cryptocurrencies. Russia, South Korea and China may also move in this direction.

 

The other reason is the growing use of blockchain in the financial sector beyond cryptocurrencies. Financial institutions will begin routine use of blockchain systems for internal regulatory functions such as KYC (Know Your Customer), CIP (Customer Identification Program, which is KYC plus checks against various blacklists and government watch lists), customer documentation, regulatory filings and more. Interbank funds transfer via abstract cryptocurrencies and blockchain ledgers will expand beyond the test transactions of 2017. A recent advance in cryptography, the zero-knowledge proof, may solve one of the biggest obstacles to using blockchain technology on Wall Street: keeping transaction data private. Previously, users were able to remain anonymous, but transactions were verified by allowing everyone on the network to see the transaction data. This exposed client and bank positions to competitors who could profit from the knowledge of existing trades. Zero-knowledge proofs were implemented in several blockchain systems such as Zcash (ZEC) and Ethereum in 2017 and are expected to be widely adopted by financial services institutions in 2018. This could have a major impact on IT in the financial sector.

 

Other sectors will begin to see prototypes with smart contracts, and provenance and identity services for health care, governments, food safety, counterfeit goods, and more. Blockchain provides provenance by building a traceability system for materials and products. You can use it to determine where a product originated, or to trace the origin of contaminated food or illegal products like blood diamonds. Provenance may soon be added to the list of regulatory requirements that I mentioned in my Data Governance 2.0 trend.
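To illustrate the provenance idea, here is a toy, hedged sketch of a hash-chained ledger in Python. It is not any particular blockchain product, and the supply-chain events and field names are invented; it only shows why tampering with one step breaks the traceability chain.

```python
# Toy, single-node illustration of provenance: each record embeds the hash of the
# previous one, so altering any step in a product's history breaks the chain.
import hashlib
import json
import time

def make_block(event: dict, prev_hash: str) -> dict:
    body = {"event": event, "timestamp": time.time(), "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def verify(chain: list) -> bool:
    for i, block in enumerate(chain):
        body = {k: block[k] for k in ("event", "timestamp", "prev_hash")}
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != block["hash"]:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
prev = "0" * 64  # genesis
for event in [{"step": "harvested", "farm": "A-17"},
              {"step": "processed", "plant": "P-3"},
              {"step": "shipped", "retailer": "R-9"}]:
    block = make_block(event, prev)
    chain.append(block)
    prev = block["hash"]

print("chain valid:", verify(chain))
chain[1]["event"]["plant"] = "P-99"   # tamper with the middle record
print("after tampering:", verify(chain))
```

A real permissioned blockchain adds distributed consensus and access control on top of this basic idea, so no single party can quietly rewrite the history.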

 

 

8.  The time is right for Biometric Authentication

A survey in 2016 showed that the average English-speaking internet user had 27 accounts. By 2020, ITPro predicts that the average number of unique accounts will be 200! If every account had a unique password, this would be a nightmare to manage. Imagine updating your passwords every 90 days: that would be 800 passwords to generate and keep track of each year. In reality, most of us reuse the same password across many accounts that we don’t think are important. Unfortunately, hackers know this, so once they discover a password they will use it to break into our other accounts. AI-assisted tools have been shown to crack a 20-character password in 20 minutes. Even if we adhere to the best password practices, a password may be disclosed through attacks against third parties, as happened at Equifax. Businesses are coming to the realization that proxies that represent our identity, like passwords, ATM cards and PINs, are hackable even with two-factor authentication. In the United States, the most common identifier is the Social Security number, which was never intended to be used as a national identity token.

 

Smartphone vendors and some financial companies like Barclays are moving to solve this problem by using biometrics, which represent the real you. India has implemented a national identification program that includes iris scans and fingerprints. However, choosing the right biometric is important. If a biometric like a fingerprint is compromised, there is no way to reset it the way you would a PIN or password. Since we leave our fingerprints on everything we touch, it is conceivable that someone could lift our prints and reuse them. Hitachi recommends the use of finger vein authentication: the vein pattern can only be captured when infrared light is passed through a live finger, which makes it the most resistant to forgery.

 

 

Next week I will conclude these trends with Agile methodologies and Co-Creation and how they will contribute to Digital Transformation in 2018.

Gartner differentiates object storage from distributed file systems, and has published separate Critical Capabilities reports for each. The most recent Critical Capabilities for Object Storage was published March 31, 2016. In this report, Gartner analysts Arun Chandrasekaran, Raj Bala and Garth Landers recommend that readers “Choose object storage products as alternatives to block and file storage when you need huge scalable capacity, reduced management overhead and lower cost of ownership.” We believe the use cases for object storage and for block and file systems are quite different.

 

This report clearly showed Hitachi Vantara’s HCP in a leadership position for object storage.

 

[Image: Object storage quadrant]

Then in October of 2016, Gartner combined object storage and distributed file systems into one Magic Quadrant (MQ) report. As stated in the 2017 report, Gartner defines distributed file systems and object storage as software and hardware solutions that are based on a “shared nothing” architecture and support object and/or scale-out file technology to address requirements for unstructured data growth. However, they still recognized the difference between these two technologies in their research:

 

“Distributed file system storage uses a single parallel file system to cluster multiple storage nodes together, presenting a single namespace and storage pool to provide high bandwidth for multiple hosts in parallel. Data is distributed over multiple nodes in the cluster to deliver data availability and resilience in a self-healing manner, and to provide high throughput and capacity linearly.”

 

“Object storage refers to devices and software that house data in structures called ‘objects,’ and serve clients via RESTful HTTP APIs.”

 

 

On October 17th, Gartner published their second annual “Magic Quadrant for Distributed File Systems and Object Storage.” For the second year in a row, we are the only vendor in the Challengers quadrant, since we were evaluated only on our HCP object storage. It is important to note that the Magic Quadrant report is NOT a stand-alone assessment of object storage. As the title states, this is a vendor-level analysis based on the COMBINATION of object storage and distributed file system offerings.

 

A new Critical Capabilities for object storage is expected to be published in early 2018.  We believe that report will be a more accurate way to evaluate object storage systems. We would expect to score much higher due to the addition of geo-distributed erasure coding and other functionalities in HCP, as well as the addition of Hitachi Content Intelligence to the HCP portfolio.

 

Gartner, Critical Capabilities for Distributed File Systems, 09 November 2017

Gartner, Critical Capabilities for Object Storage, 31 March 2016

Gartner, Magic Quadrant for Distributed File Systems and Object Storage, 17 October 2017

 

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.


 

 

On October 31st, Gartner published its 2017 version of the Magic Quadrant for General Purpose Disk Arrays. Gartner defines general-purpose disk arrays as those designed to satisfy the storage needs of applications running on physical or virtual servers. This Magic Quadrant excludes all-flash systems, also called solid state arrays (SSAs), because they have their own Magic Quadrant.

 

We have been positioned in the “Leaders” quadrant for five years in a row. For this report, Gartner’s analysis is centered on the hybrid VSP G series line, not our fast-growing, all-flash VSP F series or our UCP converged and hyper-converged offerings.

[Image: VSP family]

Gartner is never completely positive on any vendor and lists both strengths and cautions for each. Gartner’s commentary about our storage capabilities remained strong, recognizing the VSP’s common architecture, administrative tools, interoperability, ecosystem and scalability, which simplify the sales cycle and align well with channel capabilities. The Hitachi VSP is the only storage system that shares a common architecture and management tools from the smallest VSP G200 to the flagship VSP G1500. GAD, Hitachi Automation Director, Hitachi Infrastructure Analytics Advisor, and tiering to Amazon Web Services (AWS), Microsoft Azure and Hitachi Content Platform (HCP) improve usable availability and staff productivity. This also includes the ability to virtualize other vendors’ storage arrays, enhance them with VSP functions, and facilitate migration for technology refreshes. This preserves customer investments in policies and procedures, and leverages ecosystem-related investments.

 

I would argue that the VSP is the only general purpose disk array among the Leaders quadrant vendors, since the other vendors have to offer a number of different arrays to come close to the capabilities of the VSP. Dell/EMC has four different arrays, HPE has six, IBM has three, and NetApp has two.

 

Gartner cautioned users about our ability to influence the storage market as we go through our corporate transformation effort. We strongly disagree with this, since the transformation has already been underway for several years, and in the past year we have made over 14 major product introductions and enhancements to our VSP all-flash and hybrid storage lines. While other vendors are focused on storage, we are focused on data, with a coordinated family of hybrid, all-flash, converged, hyper-converged, and erasure-coded storage nodes (HCP S3 and S4). We were also one of the first to deliver NVMe in our hyper-converged infrastructure system.

 

One of the capabilities of Hitachi Vantara storage that was added after the cut-off for Gartner’s evaluation is our use of containers and microservices. Our Hitachi Storage Advisor, formerly known as Hitachi Infrastructure Director, uses containers in the form of installation support using Docker. The Hitachi Storage Plug-in for Containers (HSPC) provides connectivity between container runtimes such as Docker and the Hitachi Virtual Storage Platform (VSP) G and VSP F series storage platforms (requires Hitachi Storage Virtualization Operating System [SVOS] 7.1 or later). HSPC enables stateful applications, such as databases, to persist and maintain data after the life cycle of the container has ended. Built-in high availability enables orchestrators such as Docker Swarm and Kubernetes to automate and orchestrate storage tasks between hosts in a cluster. These storage capabilities are essential for enabling DevOps and agile IT workflows. In addition, the storage plug-in supports the advanced capabilities of Hitachi VSP G and F series arrays, which provide features such as automation, high availability, seamless replication, remote management and analytics.
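As a rough illustration of what a container storage plug-in enables, the hedged sketch below uses the open-source Kubernetes Python client to request a persistent volume for a stateful container. The storage class name is a placeholder, not HSPC’s actual class or provisioner name; the real values come from the plug-in’s documentation.

```python
# Hedged sketch (not the official HSPC procedure): requesting persistent storage
# for a stateful container via the Kubernetes Python client.
from kubernetes import client, config

config.load_kube_config()  # assumes a kubeconfig pointing at the cluster

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="db-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="vsp-hspc",          # placeholder storage class name
        resources=client.V1ResourceRequirements(requests={"storage": "50Gi"}),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
# A database pod can then mount the claim "db-data" and keep its data
# after the container itself is rescheduled or replaced.
```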

 

Containers and micro-services are the base technologies that enable agility and scalability for public and private clouds, and IoT platforms. These innovations will influence the storage market going forward. The category for General Purpose Storage Arrays will need to be expanded beyond physical and virtual servers to containers.

 

So, we believe in our ability to continue to influence the storage market and remain in a leadership position during our ongoing corporate transformation.

 

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.


 

 

This is part 2 of my Top IT trends for 2018. In my first post, I covered Preparing IT for IoT. This post will look at some new requirements for managing storage that will make IT more effective.

 

[Image: Part 2 top trends]

 

4.     Data Governance 2.0

2018 will bring new challenges in data governance that will require a new data governance framework. Previous data governance was based on the processing of data and metadata. The new data governance must also consider data context, and it must be flexible enough to adapt quickly as regulators unleash new rules on new processes and data types like cryptocurrencies. Surely one of 2018’s biggest challenges will arrive on May 25, 2018, when the EU’s General Data Protection Regulation (GDPR) goes live and affects every country where the personal data of EU citizens is processed. GDPR gives EU residents more control of their personal data. Individual controls include the ability to prohibit data processing beyond its specified purpose for collection, the right to access, the right to rectification, the right to be forgotten, the right to data portability, the ability to withdraw consent to the collection and use of personal data, and many more. Consider this: if an EU citizen invokes their right to be forgotten, a company must be able to find the individual’s data throughout its technology and application stacks (many of which are logically, if not physically, separated), evaluate the intent of each data element (as some regulations, such as financial reporting responsibilities, will likely supersede GDPR), eradicate the data, and provide proof that the data has been eradicated to the EU citizen along with an audit log to demonstrate compliance to regulators. Responding to individual actions and enforcing individual rights can drive up costs and increase the risks of collecting and storing personal data. Those costs are not limited to the working hours required to complete the requests; there are also penalties to consider. GDPR violations can cost up to €20m ($21.75m) in fines, or up to 4% of total annual worldwide turnover for the preceding financial year.
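To make the right-to-be-forgotten workflow concrete, here is a schematic Python sketch, not a compliance tool: it walks hypothetical silo connectors, checks a legal-hold rule, erases what it can, and returns an audit log. All connector, field and rule names are invented for illustration.

```python
# Schematic sketch of the erasure workflow described above. Connectors and
# legal-hold rules are placeholders for each system's real search/delete APIs.
import json
import time

def handle_erasure_request(subject_id: str, silos: dict, legal_hold) -> list:
    audit_log = []
    for silo_name, connector in silos.items():
        for record in connector.find_records(subject_id):      # locate personal data
            if legal_hold(record):                              # e.g. financial reporting
                action = "retained (superseding regulation)"
            else:
                connector.erase(record)                         # eradicate the data
                action = "erased"
            audit_log.append({
                "silo": silo_name,
                "record_id": record["id"],
                "action": action,
                "timestamp": time.time(),
            })
    return audit_log   # evidence for the data subject and the regulator

# Example with an in-memory stand-in for one silo:
class FakeCRM:
    def __init__(self):
        self.rows = [{"id": 1, "subject": "eu-123", "type": "marketing"}]

    def find_records(self, subject_id):
        return [r for r in self.rows if r["subject"] == subject_id]

    def erase(self, record):
        self.rows.remove(record)

log = handle_erasure_request("eu-123", {"crm": FakeCRM()},
                             legal_hold=lambda r: r["type"] == "invoice")
print(json.dumps(log, indent=2))
```

The hard part in practice is not the loop; it is knowing where all the silos are, which is exactly the data awareness problem discussed below.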

 

GDPR also requires mandatory breach notification to your customers within 72 hours. What is interesting here is the ambiguity of the term “breach”. In IT, this word often conjures up images of clandestine or rogue groups executing various forms of network intrusion attacks to unlawfully gain access to organizational data. However, in the eyes of GDPR, a data breach is a breach of security leading to the accidental or unlawful destruction, loss, alteration, unauthorized disclosure of, or access to, personal data transmitted, stored, or otherwise processed. Consider how broadly defined that is: those “hackers” certainly fit the definition, but so does your database administrator who accidentally executes a “DROP TABLE table_name” command against your CRM system. With mandatory breach notifications due to your customers within 72 hours, such a short window of time quickly becomes unmanageable without comprehensive data processing models and the appropriate checks and balances regarding data use. It has taken months to discover and notify breaches in high-profile cases like the Yahoo breach. Meeting the 72-hour requirement is impossible for most organizations simply because of a lack of data awareness: data is scattered across different application and technology silos throughout the organization, especially as more data is created at the edge, on mobile devices and in the cloud.

 

To be sure, GDPR is seen as complicated by some, confusing by others, and normal by those already in highly regulated markets. Regardless, as citizens across the globe become just as ‘digitized’ as data, it makes sense that general requirements for how data is accessed, managed, used, and governed would be formalized. The impact on this audience is as unique as the organizations you represent, but it underscores the simple need for a progressive and more intelligent data governance framework: one that allows you to oversee the data no matter where it resides, that can be updated with content intelligence tools that detect and notify when breaches occur, and that is based on a “smarter” technology stack that can quickly adapt and respond to regulatory and market requirements.

 

5.     Object Storage Gets Smart

By now most IT shops have started on their digital transformation journey, and the first problem that most run into is the ability to access usable data. Application and technology decisions often lock data into isolated islands where it is costly to extract and put to other uses. These islands were built for a specific purpose, or use case, and not necessarily in the spirit of sharing the data. Many of these islands contain data that is duplicated, obsolete, or has gone dark: data that is valid but no longer used because of changes in business process or ownership. Data scientists tell us that 80% of the effort involved in gaining analytical insight from data is the tedious work of acquiring and preparing the data. The concept of a data lake is alluring, but you can’t just pour your data into one system. Unless that data is properly cleansed, formatted and indexed or tagged with metadata so that the data lake is content aware, you end up with a data swamp.

 

While object storage can hold massive amounts of unstructured data and provide customized and extensible metadata management and search capabilities, what has been missing is the ability for it to be contextually aware. Object storage now has the ability to be “smart,” with software that can search for and read content in multiple structured and unstructured data silos and analyze it for cleansing, formatting, and indexing. Hitachi Content Intelligence software can extract data from the silos and pump it into workflows that process it in various ways. Content Intelligence users can be authorized so that sensitive content is viewed only by relevant people and document security controls are not breached. It can create a standard and consistent enterprise search process across the entire IT environment by connecting to and aggregating multi-structured data across heterogeneous data silos and different locations. Additionally, it provides automated extraction, classification, enrichment and categorization of an organization’s data.
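The general connect-enrich-index pattern described above can be sketched in a few lines of Python. This is not Hitachi Content Intelligence itself; the stages, the toy PII check and the inverted index are stand-ins for the product’s configurable processing stages.

```python
# Generic illustration of the connect -> enrich -> index pattern.
import re

def extract(document: dict) -> dict:
    # Stage 1: normalize raw content pulled from a silo
    return {"id": document["id"], "text": document["body"].strip()}

def enrich(record: dict) -> dict:
    # Stage 2: add metadata, e.g. flag records that appear to hold personal data
    has_email = bool(re.search(r"[\w.+-]+@[\w-]+\.[\w.]+", record["text"]))
    return {**record, "contains_pii": has_email, "length": len(record["text"])}

def index(records) -> dict:
    # Stage 3: build a simple inverted index so the content becomes searchable
    inverted = {}
    for rec in records:
        for term in set(rec["text"].lower().split()):
            inverted.setdefault(term, []).append(rec["id"])
    return inverted

silo = [{"id": "doc-1", "body": "Contract renewal for jane@example.com "},
        {"id": "doc-2", "body": "Quarterly maintenance report, turbine 7"}]

records = [enrich(extract(d)) for d in silo]
search_index = index(records)
print(records)
print(search_index.get("turbine"))   # -> ['doc-2']
```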

 

Combine my comments in section 4 with this topic, and you can quickly see the foundational components that you can deploy to address regulatory and compliance obligations with a highly scalable, performant, adaptable, and intelligent object storage foundation from Hitachi Vantara. Of course, this platform is not limited to how data is managed and governed.

 

In a recent evaluation of a complex stroke CT case study, a custom MATLAB DICOM parsing script was written to perform the filtering and extraction of DICOM tag data, a process that took 50 hours. Using a Hitachi Content Intelligence DICOM processing stage on the same medical image data, the query time was reduced to 5 minutes, a 99.8% reduction in the time needed to analyze the CT cases.
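For readers curious what DICOM tag filtering looks like in code, here is a hedged sketch using the open-source pydicom library; it is neither the original MATLAB script nor the Content Intelligence DICOM stage, and the folder name and tag choices are illustrative.

```python
# Hedged sketch of DICOM tag filtering with pydicom (illustrative paths and tags).
from pathlib import Path
import pydicom

def extract_tags(directory: str):
    rows = []
    for path in Path(directory).rglob("*.dcm"):
        # stop_before_pixels skips the bulky image data, keeping only header tags
        ds = pydicom.dcmread(path, stop_before_pixels=True)
        rows.append({
            "file": str(path),
            "modality": getattr(ds, "Modality", None),
            "study_date": getattr(ds, "StudyDate", None),
            "body_part": getattr(ds, "BodyPartExamined", None),
        })
    return rows

ct_head_cases = [r for r in extract_tags("./stroke_study")   # hypothetical folder
                 if r["modality"] == "CT" and r["body_part"] == "HEAD"]
print(f"Found {len(ct_head_cases)} head CT series")
```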

 

My next post on IT Trends will talk about new data types that IT will start addressing in 2018.

2017 was a watershed year for digital transformation. It wasn’t a year of major technology breakthroughs, but it was a year in which many of us began to change the way we use technology. Cloud adoption increased and more applications were being developed for it. Increasingly, corporate executives were more committed to and investing in digital transformation projects; early indications are that we have stopped the decline in productivity and are on an upturn.

For my 2018 IT trend predictions, I’ve decided to focus more on the operational changes I believe will affect IT, rather than changes in technologies like flash drives. Over the next four weeks, I will be posting my predictions under the following groupings:

  • Preparing IT for IoT
  • IT must do more than store and protect data
  • Get ready for new Data types
  • Methodologies for IT Innovation

 

These are my own prognostications and should not be considered as representing Hitachi’s opinion.

 

Preparing IT for IoT

 

[Image: Top trends]

 

 

Prediction 1: IT will adopt IoT platforms to facilitate the application of IoT solutions. 

The application of IoT (Internet of Things) solutions can deliver valuable insights to support digital transformation and is rapidly becoming a strategic imperative in almost every industry and market sector. To achieve this, IT must work closely with the operations side of the business to focus on specific business needs and define the scope of an IoT project. IoT is an opportunity that can benefit all industries, whether a highly automated manufacturer or a more manually oriented business like agriculture, which can benefit from timely, connected information about weather, soil conditions, equipment maintenance, and more.

 

Building IoT solutions that provide real value can be difficult without the right underlying architecture and a deep understanding of the business to properly simulate and digitalize operational entities and processes. This is where the selection of an IoT platform and the choice of an experienced services provider are important.

 

IT will be challenged to acquire the skills to build this platform if it has to be developed from scratch. Utilizing a purpose-built IoT platform like Hitachi’s Lumada will speed up time to value and free up IT teams to focus more on the final business outcomes. Depending on the complexity of the project, it might be implemented as an appliance, or it could be a distributed platform from edge to gateway to core to cloud. Evaluate the available IoT platforms from experienced vendors before you commit the time and resources to build your own.

 

Prediction 2: Movement to the next level of virtualization with containers.

IoT applications are designed to run in a cloud-like environment for scalability and agility. Container-based virtualization is designed for the cloud and will gain wide acceptance in 2018.

Containerization is an OS-level virtualization method for deploying and running distributed applications on a single bare-metal or VM operating system host. It is the next step up from virtual machines (VMs): where a traditional virtual machine abstracts an entire device including the OS, a container consists only of the application and all the dependencies that the application needs. This makes it very easy to develop and deploy applications. Monolithic applications can be rewritten as microservices and run in containers for greater agility, scale, and reliability.

Everything in Google runs in containers from Gmail to YouTube. Each week they spin up over two billion containers. Almost every public cloud platform is using containers. The level of agility and scalability we see in the public cloud will be required in all enterprise applications, if we hope to meet the challenges of exploding data and information in the age of IoT.

Hitachi has adopted containers in all of its new software platforms and is rapidly converting key legacy platforms over to containers, not only to realize the benefits of containers in our own operations, but also to facilitate the use of containers by our customers. Our Lumada IoT platform is built on containers and microservices to ensure that it can scale and stay open to new developments. We also provide a VSP/SVOS plug-in to provision persistent VSP storage for containers. (For more information on our use of containers, see my previous blog post.)

 

 

Prediction 3: Analytics and Artificial Intelligence

One of the primary objectives of IoT platforms is to gather data for analysis, analysis that can increasingly be learned and automated through AI. In 2018, we will see more investment in analytics and AI across the board as companies see real returns on their investments.

According to IDC, revenue growth from information-based products will double that of the rest of the product/services portfolio for a third of Fortune 500 companies by the end of 2017. Data monetization will become a major source of revenue, as the world will create 163 zettabytes of data in 2025, up from 10 zettabytes in 2015. IDC also forecasts that more than a quarter of that data will be real-time in nature, with IoT data making up more than 95 percent of it.

 

Preparing a wide range of data for analytics is a struggle that affects both business analysts and the IT teams that support them. Studies show that data scientists spend 20% of their time collecting data sets and 60% of their time cleansing and organizing the data, leaving only 20% of their time for the actual analysis. Organizations are increasingly focused on incorporating self-service data preparation tools in conjunction with data integration, business analytics, and data science capabilities. The old adage “GIGO” (garbage in, garbage out) applies to analytics. This starts with data engineering (cleansing, shaping, transforming and ingesting data) and data preparation (refining, blending and enriching) before analytics can build, score, model, visualize, and analyze.
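As a small illustration of the cleansing and shaping steps listed above, the following Python/pandas sketch drops unusable rows, coerces types and aggregates readings. The column names and rules are hypothetical, not a Pentaho job.

```python
# Illustrative cleansing/shaping steps before analytics; data and rules are made up.
import pandas as pd

raw = pd.DataFrame({
    "sensor_id": ["A1", "A1", "B2", None],
    "reading":   ["21.5", "21.7", "bad", "19.0"],
    "ts":        ["2017-12-01 10:00", "2017-12-01 10:05",
                  "2017-12-01 10:00", "2017-12-01 10:10"],
})

clean = (
    raw.dropna(subset=["sensor_id"])                         # drop rows with no source
       .assign(reading=lambda d: pd.to_numeric(d["reading"], errors="coerce"),
               ts=lambda d: pd.to_datetime(d["ts"]))         # fix types ("shaping")
       .dropna(subset=["reading"])                           # discard unparseable values
)

# A simple aggregation step before modeling or visualization
summary = clean.groupby("sensor_id")["reading"].agg(["mean", "count"])
print(summary)
```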

 

AI has become mainstream with consumer products like Alexa and Siri. Hitachi believes that it is the collaboration of AI and humans that will bring real benefits to society. Through tools like Pentaho Data Integration, our aim is to democratize the data engineering and data science process and make machine intelligence (AI + ML) more accessible to a wider variety of developers and engineers. Tooling like the Pentaho Data Science Pack, with R and Python connectivity, is a step in that direction. Hitachi’s Lumada IoT platform enables scalable IoT machine learning with flexible inputs (Hadoop, Spark, NoSQL, Kafka streams) and standardized connections that can automatically configure and manage resources, with flexible outputs (databases, dashboards, alert emails, and text messages), and, in addition to Pentaho analytics, it is compatible with Python, R, and Java for machine learning.

 

This is an area where IT departments will need to learn new languages and toolsets for data preparation, analytics and AI. Integrated tools, like Hitachi’s Pentaho, will greatly reduce the learning curve and the effort.

 

In my next post, I will look at how data requirements are expected to shift in 2018, and the tools needed to address the coming changes.

This week I was in Cape Town, South Africa, to attend AfricaCom and present what Hitachi Vantara is doing in IoT. It was very eye-opening for me to see the localization that would be required for IoT in this part of the world. While the Sub-Saharan region of Africa is composed of many countries with varying rates of employment, the average unemployment rate for the region is 12.4 percent. This doesn’t sound bad, but the African region has the world’s highest rate of working poverty: people who are employed but earning less than $2 a day.

 

How do you deliver IoT solutions like Hitachi’s Train as a Service in the UK when most people here cannot afford a ticket and only 20% of the fares are collected?

[Image: Africa train]

An example of how one company has adapted to this market is M-Kopa in East Africa. M-Kopa, launched in 2012, has connected over 500,000 homes in East Africa to solar power. This is in an area where many houses are off the grid and lighting is provided by kerosene lamps. How can people afford these solar systems? Unlike the solar system on my home in California, which requires several square meters of solar panels, the systems in this market only need enough solar generation and battery capacity to power three LED lights and charge a mobile phone, so the requirements are much lower.

[Image: M-Kopa]

Customers only have to pay a deposit of $35 to take the system home and then pay $0.50 per day for one year to own the solar system. Fifty cents a day is less than the household would pay for kerosene for their lamps and for mobile phone charging at a kiosk. It also provides a healthier environment and better-quality light than kerosene lamps. This makes it affordable for low-income families and opens up the opportunity for adults to work at night, producing more goods and services, and for children to study.
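The arithmetic behind that pay-as-you-go model is simple; the short sketch below uses the figures quoted above and an assumed daily kerosene-plus-charging spend for comparison (the kerosene figure is an assumption, not M-Kopa data).

```python
# Total cost of ownership under the quoted terms (365-day payment period assumed).
deposit = 35.00
daily_payment = 0.50
days = 365

total_cost = deposit + daily_payment * days
print(f"Total paid over one year: ${total_cost:.2f}")        # $217.50

kerosene_and_charging_per_day = 0.60   # assumed household spend being replaced
savings = (kerosene_and_charging_per_day - daily_payment) * days
print(f"Approximate saving vs. kerosene: ${savings:.2f} per year")
```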

 

[Image: M-Kopa users]

This also introduces many people to their first experience with financial services. For M-Kopa this means deferred revenue, but this works in East Africa due to the availability of M-Pesa, a mobile phone-based money transfer, financing and microfinancing service, launched in 2007 by Vodafone for Safaricom and Vodacom, the largest mobile network operators in East Africa. If the customer does not pay his 50 cents a day, the battery is shut down remotely until he pays up. The customer is usually “unbanked” so M-Pesa fills this void for deferred payments. Towards the end of the one year payment period, M-Kopa sales people take the opportunity to upsell the customer to larger systems that can support a TV or even a refrigerator or a stove.

 

It turns out that the founders of M-Kopa also worked for Vodafone on the development of M-Pesa. M-Kopa has drawn investment funds from the likes of Richard Branson and Al Gore and is expected to become a billion-dollar business by selling to the long tail of small households.

 

Similar approaches must be taken to develop IoT in this type of market and make it profitable while creating social innovation. The solutions must be affordable relative to local wage levels. They must be simple and not over-engineered for the needs of the market. The normal ways of financing must change, and there must be a motivating factor, such as the need for lighting and mobile phone charging.

 

Applying this to the “train as a service” concept, one of the ideas my colleagues came up with was to give everyone a free mobile phone. With a mobile phone, we could tell who is using the train and be able to collect fares. With a payment method like M-Pesa, mobile phones would make it easier for people who would like to pay the fare but have no means to do so. This would increase fare collection and make fares more affordable. On the train, riders could have access to Wi-Fi and mobile services that are only available on the train, making the ride more enjoyable or more productive.

 

I guess one first has to understand why the people above are risking life and limb to ride on top of the train. Are they just train surfing for fun, or do they need transportation to reach employment that supports their families? Maybe this market does not need a sleek 200 km/h train like the one serving the UK commuter market. It may need a train that is less about speed and comfort and more about affordability, slower speeds and more frequent stops, much like the difference in solar systems between my home in California and M-Kopa’s customers in East Africa.

 

IoT can benefit both environments, but it must be tailored to the market requirements.

Hu Yoshida

Lumada Asset Avatar

Posted by Hu Yoshida Nov 8, 2017

[Image: Avatar]

Last week I was presenting our IoT platform, Lumada, to a delegation from one of the northern states of India. Hitachi’s Lumada IoT platform provides a complete software platform based on Hitachi’s expertise in building both the IT and OT systems that power many businesses. A core component of the Lumada IoT platform is the asset avatar, a digital representation of a physical asset. With asset avatars you can drive automation, predict and prevent costly outages, and move toward a fully automated business. With the asset avatar, you can get a complete 360-degree view of a single asset or a view of all your assets at once. In this way, the Lumada IoT platform connects to OT and IT systems to access, analyze, and automate machine, human and business data to produce business outcomes.
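For readers unfamiliar with the concept, here is a toy Python sketch of what a digital representation of a physical asset might look like. It is not Lumada’s Asset Avatar API; the attributes and the alert threshold are invented for illustration.

```python
# Toy "digital representation of a physical asset"; names and threshold are illustrative.
from dataclasses import dataclass, field

@dataclass
class AssetAvatar:
    asset_id: str
    asset_type: str
    properties: dict = field(default_factory=dict)   # static attributes (model, location)
    telemetry: dict = field(default_factory=dict)    # latest sensor readings

    def update(self, readings: dict) -> list:
        """Ingest new sensor data and return any alerts the avatar raises."""
        self.telemetry.update(readings)
        alerts = []
        if self.telemetry.get("vibration_mm_s", 0) > 7.1:   # illustrative threshold
            alerts.append(f"{self.asset_id}: vibration above limit, schedule inspection")
        return alerts

pump = AssetAvatar("pump-42", "coolant_pump", {"site": "plant-3", "model": "HX-200"})
print(pump.update({"vibration_mm_s": 8.3, "temperature_c": 61}))
```

A real asset avatar would be populated continuously from OT data streams and queried by analytics and automation services, but the core idea is the same: a software object that mirrors the state of the machine.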

[Image: Lumada]

 

When I mentioned avatar, there were some puzzled looks and questions from the delegation. They explained that an avatar is a concept in Hinduism that means "descent", and refers to the material appearance or incarnation of a deity on earth. This is somewhat confusing from our view of an avatar as a digital representation of a physical asset.

 

In addition, avatars are used widely on websites and in gaming as a personalized graphical illustration that represents a computer user. Avatars in a virtual world are interactive characters, which may be customized by the user. They interact within a computerized landscape where they can be acted on by using a keyboard and mouse or today’s virtual reality props. This concept is somewhat opposite from our concept of an asset avatar.

 

The 2009 film AVATAR is probably the closest example to our asset avatar. In the film, humans in the year 2154 have depleted earth’s natural resources and are looking to mine a valuable mineral, unobtanium, on a moon called Pandora in the Alpha Centauri star system. Pandora is inhabited by an indigenous humanoid species called the Na’vi. The atmosphere is poisonous to humans, so in order to explore Pandora’s biosphere, scientists create genetically engineered Na’vi bodies, each controlled by the mind of a remotely located human, to interact with the natives of Pandora. If you think of the Na’vi body as a physical machine and our asset avatar as a software-engineered machine with a remotely located brain, that describes our asset avatar. Lumada’s core asset avatar is a computerized version of a real machine: it can monitor and effect changes on the physical machine.

 

To help you understand the benefits of asset avatar, Hitachi Vantara has published a short white paper on this concept and how it is used in Lumada. Here is an excerpt from that white paper.

 

[Image: Asset Avatar white paper excerpt]

In the Avatar movie, it gets a little messy when the male Avatar starts to interact with a female Na’vi. Not a problem with our asset avatar.

 

I invite you to read the white paper.

[Image: Singapore port]

 

Internet companies like Google and Twitter have unprecedented agility and scale, and the secret to this is the use of containers. Everything at Google runs in containers, from Gmail to YouTube to Search. Each week they spin up over two billion containers. Actually, this isn’t a secret, since most of this technology is open source. While the concept of containers had been around for a decade in Linux, there was little uptake until 2014, when dotCloud went out of business and spun out the Docker project, simplifying how containers could be used by developers to package their applications. At about the same time, Google decided to open source the Kubernetes project, and the Mesos project spun out of work at Twitter and UC Berkeley. Now almost every public cloud platform supports Kubernetes or Mesos as the scheduler/orchestrator and Docker as the format for container images and runtime.

 

Introduction

Containerization is an OS-level virtualization method for deploying and running distributed applications on a single bare-metal or VM operating system host. Container-based virtualization is the latest virtualization technology to gain wide acceptance. It is the next step up from virtual machines (VMs): where a traditional virtual machine abstracts an entire device including the OS, a container consists only of the application and all the dependencies that the application needs, as illustrated here.

[Image: Docker containers compared with virtual machines]

The main difference between containers and virtual machines is that a virtual machine includes an operating system as well as the application. A physical machine running three VMs needs a hypervisor with three operating systems running on top of it. A machine running three containerized applications with Docker runs a single operating system, and the containers share parts of that operating system in read-only mode, with each container having its own mount for writing. This means containers consume far fewer resources than virtual machines.

 

The main benefit of a container is that, since it consists of an entire runtime environment (an application plus all its dependencies, libraries, other binaries, and the configuration files needed to run it, bundled into one package), it runs reliably when moved from one computing environment to another. A container can be moved from a developer’s laptop to a test environment, from a staging environment into production, and from a physical machine in a data center to a virtual machine in a private or public cloud. Containers are also lightweight, meaning a single server can host far more containers than traditional virtual machines. Containers can be started immediately, just in time, when they are needed, and can be closed to free up resources when they are not. Developers can use them as a sandbox to test code without disrupting other applications. These attributes are ideal for agile development, where the elasticity of resources enables a modern DevOps environment. Monolithic applications can be rewritten as microservices that run in containers for greater agility, scale, and reliability.
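As a small, hedged illustration of these properties, the following Python sketch uses the open-source Docker SDK for Python to start and discard containers. It assumes a local Docker daemon and uses arbitrary public images; it is not a Hitachi-specific deployment.

```python
# Minimal sketch with the Docker SDK for Python: containers start in seconds from a
# self-contained image and are discarded when no longer needed.
import docker

client = docker.from_env()

# Run a short-lived container: the image bundles the app plus its dependencies,
# while the host OS kernel is shared.
output = client.containers.run(
    "python:3.11-slim",                       # self-contained runtime image
    ["python", "-c", "print('hello from an isolated container')"],
    remove=True,                              # free the resources as soon as it exits
)
print(output.decode().strip())

# Long-running services are started detached and torn down on demand,
# which is what schedulers like Kubernetes or Mesos automate at scale.
web = client.containers.run("nginx:alpine", detach=True)
print(web.status, web.short_id)
web.stop()
web.remove()
```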

 

Containers Help Address The Data Deluge

Casey O’Mara, in a recent Hitachi Vantara blog post, says that financial services firms must move to containers in order to keep up with the deluge of data they are facing. He gives the example of FINRA, which has to go through more than 37 billion transactions per day, with over 900 cases of insider trading a year and over 1,500 companies receiving disciplinary actions. “The amount of data sources you need to traverse in a small amount of time is huge. Going through that many transactions a day (with around 2.5 cases happening each day) means that the queries can take hours unless you factor in containers and the ability to blend data sources quickly, and return results fast… Today’s dated VM systems aren’t ready to tackle the current problems, and they certainly don’t scale as well.”

 

Hitachi Vantara, An Early Adopter of Containers

Hitachi Vantara has been using containers for some time and is expanding their use in more of our software products, not only to enhance our agility but also to enable customers to easily deploy containers. In July 2015, our Unified Compute Platform was part of the Kubernetes 1.0 launch and was the first enterprise infrastructure platform to adopt and community-validate Kubernetes orchestration. Last month we announced the Hitachi Enterprise Cloud Container Platform to assist organizations in reducing their reliance on public cloud resources for their born-in-the-cloud applications. It provides hybrid cloud resources for DevOps that utilize a microservices architecture. This architecture combines Hitachi infrastructure with Mesosphere DC/OS to provide PaaS element integration and user interface, container engine, scheduling, orchestration and workflow. The pre-engineered environment enables developers to focus on app development and provides operations with the robust, industrialized service-level choices that the business demands once those applications reach production and global scale. We also announced managed services around Hitachi Enterprise Cloud support for DevOps and containers. Hitachi Enterprise Cloud partners with VMware and Mesosphere to provide hybrid cloud management tools that offer a pay-as-you-go cloud utility alternative to public clouds.

 

On the storage side, our Hitachi Storage Advisor, formerly known as Hitachi Infrastructure Director, uses containers in the form of installation support using Docker. The Hitachi Storage Plug-in for Containers (HSPC) is now available. HSPC enables stateful applications, such as databases, to persist and maintain their data after the life cycle of the container has ended. Built-in high availability enables orchestrators such as Docker Swarm and Kubernetes to automatically orchestrate storage tasks between hosts in a cluster. In addition, the storage plug-in enables the advanced storage capabilities of the Hitachi VSP G and F series. These storage platforms provide features such as automation, high availability, seamless replication, remote management and analytics. There are plans for additional infrastructure software to add container support in the future, either by running in containers or by monitoring and provisioning containers.

 

Last year Pentaho announced support for Docker. Through this collaboration, developers only need to build a big data app once and can then deploy it via a Docker software container and Pentaho Server. No matter the operating environment, the app will run as designed, freeing up more time to develop new applications and less time to debug or port them. The Pentaho versions of the Docker utilities can be downloaded from GitHub.

 

Containers Are A Foundation for Hitachi Vantara Products

Recent Hitachi Vantara products make extensive use of containers. Hitachi Content Intelligence, Pentaho 8 Worker Nodes and Lumada all use the same platform for rapid development of service-based applications, delivering the management of containers, communications, security, and monitoring for building enterprise products and applications, leveraging technology like Docker, Mesos and Marathon.

 

With Hitachi Content Intelligence, organizations can connect to and aggregate data from siloed repositories, transforming and enriching data as it is processed, and centralizing the results. Hitachi is now the only object storage vendor in the market with seamlessly integrated cloud-file gateway, enterprise file synchronization and sharing, and big data exploration and analytics.

[Image: Hitachi Content Intelligence]

Pentaho 8, which was announced last month, was a major revision, introducing Worker Nodes that use the same container platform for better monitoring, improved lifecycle management, and active-active HA. When scaling PDI in the past, the ETL designer was responsible for manually defining the slave servers in advance and controlling the flow of each execution. Now, once you have a Pentaho Server installed, you can configure it to connect to a cluster of Pentaho Worker Nodes. From that point on, things just work: it deploys consistently in physical, virtual and cloud environments, scales and load-balances services to deal with peaks and limited time windows, and allocates the resources that are needed.

[Image: Pentaho Worker Nodes]

Lumada is an intelligent, composable, secure and flexible IoT platform built on four key pillars: edge, core, analytics and studio. Lumada Foundry is the foundational layer that supports these pillars with deployment, repair, upgrade and scaling services for industrial-grade software, on premises or in the cloud. Lumada Foundry is based on a microservices architecture underpinned by container ecosystem technology (Docker, Mesos, Marathon, etc.).

[Image: Lumada]

Conclusion

Containers, with their ecosystem of open source scheduler and orchestration tools, are the next level of virtualization and will provide greater agility and scalability as we face the deluge of data that will come from IoT. Hitachi Vantara is heavily invested in this technology, not only to increase the agility of our own operations but also to enable our customers to adopt containers in theirs. Based on the IDC forecast below (excluding web/SaaS providers), container instances will continue to grow rapidly through 2020 as IT modernization efforts expand.

[Image: IDC container instance forecast through 2020]

 

The “V” in Hitachi Vantara is a nod to our heritage in virtualization. That heritage continues with OS-level virtualization. The last example shows the development synergy of the former Hitachi Data Systems’ Hitachi Content Intelligence, Pentaho 8’s Worker Nodes, and Hitachi Insight’s Lumada Foundry working together on an agile and scalable platform for object storage, analytics, and IoT. You can expect to see more of this as we move forward as one Hitachi Vantara.

Brian Householder, Hitachi Vantara’s President and COO, sat down with Dave Vellante of the CUBE at Pentaho World this week to answer questions about Vantara and our operating model. Prior to his current role, Brian held a similar role at Hitachi Data Systems (HDS), and he helped integrate Hitachi Data Systems with the Pentaho organization and with Hitachi Insight and its IoT assets to form Hitachi Vantara. Brian joined Hitachi Data Systems in 2003 and has held multiple positions of responsibility in areas such as strategic planning, worldwide marketing, business development, partner enablement, acquisitions, and as COO. He helped lead many key initiatives at HDS, including the company strategy and direction; the BlueArc, Shoden, Archivas and Pentaho acquisitions; and improvements to our marketing and partner enablement capabilities. Brian is the executive who has been most influential in defining Hitachi Vantara, providing a bridge from where we were to what we are now and what we hope to be.

 

Brian opened the interview with a simple explanation of what Hitachi Vantara is: we are the data arm of Hitachi. The mission of Vantara is to help our customers deliver edge-to-outcomes; wherever our customers’ data happens to be created, and whatever environment it happens to be in, we can help customers deliver the outcomes they need for their business.

 

Next, he explained how Hitachi Vantara was chosen as the name of this new company. Hitachi is always front and center. Vantara is a suggestive name: it suggests the advantage we can provide customers with their data and a vantage point that helps customers see across their environment, while the V gives a nod to our virtualization heritage. Vantara gives us the opportunity to teach customers what we really can do, which is much broader than infrastructure.

 

Dave Vellante asked about our focus on Pentaho, open source and software, and how we plan to go to market. Brian said that our strategy is all about being open; letting customers use whatever technology they want is very critical for us. We know that customers want to go to open source, and we want them to embrace us. We want to foster the open source developer community and add value beyond what is happening in that community. While the open source contributions are amazing, when you talk about the ability to scale, that is where we can add value and where the commercial piece comes into play.

 

Brian emphasized that we don’t want to own the customer’s data. Our content and analytics platforms give customers the ability to control their data, no matter where it resides. There are other companies out there who would like to own your data, even though they may not say so. We will partner and work with customers to build models to solve problems. The knowledge of how to solve problems is shared, but the algorithms that the customers develop are their data.

 

Dave asked about our edge-to-outcomes strategy: how will the edge evolve, and how can Hitachi Vantara add value? Brian recognizes that there is a pendulum swing from centralized mainframes to distributed open systems to centralized cloud and now to the distributed edge with IoT. That is the direction we are on now, and need to be. The biggest thing we look for is to follow the data. Wherever the data is created is where some of the processing will have to occur. Processing and analytics will need to be done at the edge, depending on the real-time requirements or on what you are asking the edge to do. Then you begin matching that by bringing those data points into the broader ecosystem of what is happening. Instead of bringing the data to the analytics, we believe we need to bring the analytics to the data, where it is created. There needs to be a multi-tier model, where you have an edge, then one or more aggregation points, and then an overall aggregation and analytics platform, be it private or public cloud, depending on how much data is out there, what you are looking to do, and how quickly you need to get the data into a deep learning model.

 

The last question Dave asked was how Brian is spending his time in the near to mid term. Brian is spending a lot of time with customers and partners at events like Hitachi NEXT last month and Pentaho World this week, meeting a number of customers in a short period of time. He is also focused on his leadership team, making sure that we are all aligned on the next phase of executing our strategy for Hitachi Vantara.

 

Attached is a 15-minute YouTube video of Brian’s interview with Dave Vellante. It is well worth watching to hear it from the source.

Two of the questions I get from Hitachi Data Systems customers that I have known for many years are “Why the name change?” and “What does Vantara mean?” I had that discussion again this morning.

[Image: Asim Vantara]

There is a very good 7-minute video interview by Teri McClure and Mark Peters of ESG that answers these two questions through conversations with Asim Zaheer, our Chief Marketing Officer, on why the change, and with Mary Ann Gallo, our Chief Communications Officer, on why the name Vantara was chosen.

 

Asim explains that the change was driven by the merger of three Hitachi companies: Hitachi Data Systems, known for IT infrastructure solutions; Pentaho, a leader in data integration and analytics; and Hitachi Insight, responsible for the Lumada IoT platform. The combination of these three related companies into one entity, with one focus on data from edge to business outcomes, creates a synergy that is more than the sum of the parts. For Hitachi Data Systems customers this does not mean a move away from our core IT business, but an expansion of that business through the integrated portfolio of Hitachi Vantara. We are as committed as ever to our Hitachi Data Systems customers and are better positioned to help them with their digital transformation goals.

 

Mary Ann explained that the name Vantara is a portmanteau, a linguistic blend of words. The V is a reference to virtualization, which has played a key role in our success as an infrastructure virtualization company. Virtualization could also refer to Pentaho’s data integration tools, or to the asset avatar in Lumada, which is a digital representation of a physical asset like an IoT machine. The other parts could refer to a vantage point or to an advantage.

 

I personally like the name because it is not tied to a temporal technology or trend, like “network” or “business machine,” or to a person’s name. While Hitachi Data Systems was an established brand, it was closely identified with storage. Vantara can represent more than storage, or any of today’s technologies. While it has no dictionary meaning, it has a linguistic link to what we do without confining us to a particular technology. I also like the fact that we have kept Hitachi in the name to show our link to Hitachi’s core values: Wa (harmony), Makoto (sincerity) and Kaitakusha-Seishin (pioneering spirit).

 

Hitachi Vantara

[Image: Agile]

Agile is a process methodology in which a cross-functional team uses iteration and continuous feedback to successfully refine and deliver a process outcome. The Agile methodology has been developed in the world of software development over the past few decades, where it has been paired with DevOps. Agile is mostly about process, while DevOps is about automated builds, test automation, continuous integration and continuous delivery.

 

Digital transformation is all about efficiency and working together to drive faster and more relevant business outcomes, and that is why more IT organizations are adopting the Agile methodology. IT organizations have a legacy of siloed operations, with server, network, storage, database, virtualization, and now cloud administrators passing change notices back and forth to deliver a business outcome. Many would argue that IT was more focused on IT outcomes than on business outcomes. Even when data centers used technology to create shared data repositories to break down the data silos, the different functions were still focused on their own objectives and not on the overall business objectives. Now, with cross-functional teams that include business as well as IT functions working together in iterative Agile sprints of two to four weeks, IT can focus on relevant business outcomes and deliver them more efficiently.

 

Hitachi Vantara"s IT under the leadership of our CIO, Renee Mckaskle, has been using Agile methodologies over the past two years to drive Digital Transformation. And the results have been very impactful. I described this in a previous blog on how IT used Agile to change the branding of three companies, Hitachi Data Systems, Hitachi Insight, and Pentaho into Hitachi Vantara on our electronic systems and service desks worldwide in less than 30 Hours! IT was also part of an Agile cross functional team lead by our Chief Communications Officer, Mary Ann Gallo, including other key stakeholders, like sales, supply chain, HR, legal, DevOPs, etc. which completed the process of rebranding in less than 6 months, a process which would otherwise take over a year.

 

Hitachi Vantara has expanded the use of Agile from its beginnings in software development across the entire organization. Agile provides us with a nimble approach, where small cross-functional teams with a clear direction and strategic milestones can iterate through short sprints to ensure alignment across the board, communicate effectively, and focus on problem solving and achieving our common business goals.

[Image: Co-creation]

Digital transformation is impacting business and society on a worldwide basis. In the past, most enterprises focused on new technologies that were supposed to make us more productive. In reality, according to the Organisation for Economic Co-operation and Development, productivity in most countries, even the most technologically advanced countries like the U.S. and Japan, has declined. In the past few years we have come to realize that technology alone cannot make a positive change in business and society if we do not change the way in which we use technology. This is what digital transformation is about.

 

In order to drive greater efficiencies and transform business models, we must progressively change in response to the wave of innovation created by digital technology. Today it is more about business outcomes than about products. Companies like Uber and Airbnb have shown us that transformation is more about “usership” than ownership, with everything as a service. Closed, proprietary systems have given way to open interfaces in order to exist in a sharing ecosystem. Enterprises are beginning to realize that individual optimization may not contribute to overall optimization. While IT departments have become very efficient with advances in data center technologies, that has not always resulted in increased revenue and growth for the enterprise. Now enterprises are integrating IT with the lines of business in Agile teams to drive overall efficiencies.

 

Digital transformation is also redefining the relationship between producers and consumers in the creation of value. We are witnessing a shift away from producer-centric value creation toward a co-creation paradigm. Professors Prahalad and Ramaswamy, who wrote the book on co-creation, defined it as “the joint creation of value by the company and the customer; allowing the customer to co-construct the service experience to suit their context.”

 

Traditional business thinking starts with the premise that the producer autonomously determines value through its choice of products and services. Consumers were consulted through market research but were only passively involved in the process of creating solutions and value. Producers and consumers can no longer survive in the digital world with this traditional approach to value creation. In the digital world the pace of change is relentless and problems span multiple domains, with a blurring of industry boundaries. Producers cannot take years to develop a solution, and consumers cannot plan their business on multiyear roadmaps that may not deliver what they need. If consumers and producers are to innovate, they must be active participants in the value creation process as co-creators.

 

Hitachi sees co-creation as the process of collaborating with customers and ecosystem players in order to innovate and create new value for business stakeholders, customers and society at large. Several years ago, Hitachi restructured its research division. For years Hitachi research had ranked among the top ten research organizations in terms of the number of patents granted. So why reorganize? Hitachi realized that it needed to focus on business outcomes rather than patents, and to organize around co-creation with customers in order to drive innovation. Hitachi research was reorganized into three parts: a vision-driven Center for Exploratory Research (CER) to pioneer new frontiers through exploratory research, a technology-driven Center for Technology Innovation (CTI) to generate innovative products by focusing on strong technology platforms and their deployment, and a customer-driven Center for Social Innovation (CSI) designed to co-create services and solutions with customers. The Center for Social Innovation has behind it the full research capabilities of the Center for Technology Innovation and the Center for Exploratory Research while working with customers and ecosystem players.

 

Centers for Social Innovation are located around the world. One is in Sunnyvale, California, only a few miles from Hitachi Vantara headquarters in Santa Clara. In this center there is a Financial Innovation Laboratory that is working with FSI companies, Fintech companies with original ideas, other non-finance innovation labs that share the same site, and other research institutions like Stanford University. CSI labs are also located in Brazil, Europe, China, Singapore, and Tokyo.

 

Hitachi Vantara, with its Lumada platform, will facilitate co-creation by providing a platform for agile, collaborative creation. Lumada is designed for ease of operations and management, enabling faster deployment, configuration, and modification. Lumada uses open source software for its individual components to enable quick adaptation to new fields of IT that may be developed by the community, more frequent interaction with other industries and business categories, and ease of connectivity and sharing with other companies.

 

Colaboratice creation.png

Hitachi Vantara services and support are ready to work with customers who are willing to invest in co-creation. Not all customers are ready for co-creation. Co-creation requires an investment of time and resources, and transparency and dialogue with your co-creation partner. The first step is to engage each other to define a use case. Then we develop a model, put it through a proof of concept, and finalize a solution through a proof of value. After that we operationalize the solution by deploying it at scale and integrating it with the business. The customer can then operate and manage the solution, or Hitachi can provide this as a service to deliver business value. While this process sounds simple, there will probably be many iterations. Along the way there may be POCs and POVs that fail, but even the failures will help us both gain more skills and insight into the outcomes that we are targeting.

 

Digital transformation will drive co-creation, and Hitachi would like to be your co-creation partner.

Hu Yoshida

Octopus, Fog, and Vantara

Posted by Hu Yoshida Oct 5, 2017

One of the breakout presentations at Hitachi’s NEXT 2017 event in Las Vegas last month was on fog computing.

 

Octopus.png

Sudhanshu Gaur, Director of the Digital Solution Platform Lab at Hitachi America, started the presentation by describing how an octopus has a distributed brain. Apparently an octopus has a separate brain in each tentacle besides the brain in its head. If the head brain wants to eat something, it has to send a message to the appropriate tentacle to seize the food and bring it to its mouth. If a tentacle is severed, it continues to react to stimuli for some time afterward. In addition, the suckers on the tentacles have some ability to recognize the octopus's own skin so that it does not get stuck on itself. This was used as an example of fog computing, where intelligence is distributed between the cloud and the edge device.

 

Techtarget.com defines fog computing, also known as fog networking or fogging, as a decentralized computing infrastructure in which data, compute, storage and applications are distributed in the most logical, efficient place between the data source and the cloud. Fog computing essentially extends cloud computing and services to the edge of the network, bringing the advantages and power of the cloud closer to where data is created and acted upon.

 

A good example of fog computing is an autonomous car. Some compute and storage reside in the edge device, the car, where it needs to make instant decisions about steering, braking and accelerating. The car may be connected to a traffic controller that manages the traffic on a roadway system, and it may be connected to a system in the cloud that monitors usage, predicts remaining driving range, and prescribes maintenance.
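
As a simplified sketch of that edge/cloud split, the following Python example shows an edge node that acts on sensor readings immediately and only ships summarized telemetry to the cloud on a slower cadence. The class name, thresholds and field names are hypothetical illustrations, not part of any Hitachi product.

```python
# Simplified sketch of a fog/edge pattern: time-critical decisions are made
# locally at the edge, while summarized telemetry is batched for the cloud,
# which handles the heavy analytics (range prediction, maintenance, etc.).
# Thresholds, field names and methods here are illustrative assumptions.
import statistics
import time


class EdgeController:
    def __init__(self, brake_distance_m=5.0, upload_interval_s=60.0):
        self.brake_distance_m = brake_distance_m
        self.upload_interval_s = upload_interval_s
        self.telemetry = []              # readings buffered between uploads
        self.last_upload = time.time()

    def on_sensor_reading(self, obstacle_distance_m, speed_kmh):
        # Time-critical decision made at the edge, with no cloud round trip.
        if obstacle_distance_m < self.brake_distance_m:
            self.apply_brakes()

        # Non-urgent data is buffered and periodically summarized for the cloud.
        self.telemetry.append({"distance": obstacle_distance_m,
                               "speed": speed_kmh})
        if time.time() - self.last_upload >= self.upload_interval_s:
            self.upload_summary()

    def apply_brakes(self):
        print("Edge decision: braking now")

    def upload_summary(self):
        speeds = [t["speed"] for t in self.telemetry]
        summary = {"samples": len(speeds),
                   "avg_speed_kmh": statistics.mean(speeds) if speeds else 0}
        print(f"Uploading summary to cloud: {summary}")
        self.telemetry.clear()
        self.last_upload = time.time()
```

The design choice the sketch tries to capture is simply placement: the decision with a hard latency budget stays on the device, and the data whose value lies in aggregation flows up to the cloud.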

 

Peter Levine, a general partner at the venture capital firm Andreessen Horowitz, would take exception to the definition that fog computing extends cloud computing and services to the edge of the network. He is being widely quoted in the business media as saying that edge computing will displace the cloud. He points to edge devices like self-driving cars and drones, which are really data centers that need to make immediate decisions using AI and machine learning and cannot wait for cloud computing decisions. Peter believes that his job as an investor is to recognize where the industry is headed before it happens, and he sees this as a major new trend. You can see his 25-minute video at this site.

 

I have been in this business since the mainframe days, and I have seen the pendulum swing from centralized mainframes to distributed open systems and then back to a centralized cloud, so I am not surprised to see it swing once more to a distributed edge or fog. IoT will drive compute to the edge and to points in between, wherever it is most logical and efficient to drive business outcomes. Whenever we see these swings, they are accompanied by major disruption in the industry, and many technology companies that were leaders in their day are no longer in existence - Digital, Amdahl, and yes, even EMC.

 

Cloud, like mainframes and distributed open systems, will still have a place in the computing ecosystem, but its role will change. IoT devices on the edge will act and react with other devices in real time, aided by AI and machine learning. Cloud will do the heavy analytics work, facilitate communication between systems, and provide common repositories and data governance, work that would be too heavy for the edge at this time.

 

IoT is about edge computing, and Hitachi is pivoting to address this with the formation of a new company, Hitachi Vantara, which integrates Hitachi Data Systems' infrastructure and cloud, the Hitachi Insight Group's Lumada IoT platform, and Pentaho analytics. Hitachi Vantara is a new company, purpose built to address the IoT market and all points in between.

Last week we announced that Hitachi Data Systems was integrated with two other companies, Hitachi Insight Group and Pentaho, to form a new legal entity called Hitachi Vantara. Nothing was lost in this integration; in fact, the benefits of the integration were greater than the sum of the parts. All the resources, intellectual property, experience and solutions capabilities of Hitachi Data Systems are now augmented by the IoT capabilities of Hitachi Insight Group and the analytics of Pentaho.

Hitachi-Vantara-next-2017.jpg

The response has been very positive, as the industry is focusing on digital transformation and business outcomes. However, one of our competitors in the storage space published a blog speculating that this announcement indicated a move away from enterprise storage. This is nonsense, and it would have been clear to them if they had read our announcement letter or any of the media coverage, such as The Register's, which followed the announcement. In our announcement letter we stated:

“Hitachi Vantara will continue to develop the trusted data management and analytics technologies Hitachi is known for, including Hitachi’s popular data infrastructure, storage and compute solutions, and Pentaho software. It will also be driving the development of strategic software and services solutions, including Hitachi Smart Data Center software and services, Lumada, Hitachi’s IoT platform, now available as a standalone, commercial software offering, and Hitachi co-creation services”.

 

This storage competitor made the misleading claim that we didn't release any new flash storage offerings and that we do not have NVMe support. This is obviously false, based on our concurrent release of new converged infrastructure systems, which include the UCP CI (built on Hitachi VSP flash systems) and the UCP HC, which includes next-generation NVMe flash. Perhaps they missed this since they are only able to talk about storage, and not about hyperconverged solutions with NVMe acceleration and end-to-end software that provides automated provisioning, simplified data management, orchestration, and improved operational efficiencies for a converged infrastructure spanning storage, compute, and networking. While we have delivered NVMe support, this competitor's NVMe is still months away.

 

Today a storage vendor that is tied to the hardware will have a hard time pivoting to software-defined offerings, moving up the stack to provide automation, analytics and higher-value solutions, and delivering value in cloud environments. Customers prefer dealing with vendors that can be outcome and business focused.

 

Hitachi Vantara is about solutions and business outcomes and not about boxes.

Industrial robots are estimated to be a $14 billion market in 2017, led by the automotive industry. Smaller and cheaper robots that are designed to work alongside humans, called cobots, are expected to open the doors for more use of robotics in assembly lines where traditional industrial robots are too big, too dangerous or too expensive to be worth the investment. IDC is predicting that the robot market will be worth $135 billion by 2019. IDC notes that Asia is retooling its manufacturing sector and already accounts for 69% of robot spending.

 

The history of Hitachi Group's robotics dates back to the 1960s, with a remote-controlled device for nuclear power plant operations. Since then Hitachi has introduced a wide variety of robots, from mechatronic products such as semiconductor testing equipment to robots for extreme environments like nuclear power plants. Now, as we move the focus to IoT, robotics will play a key role in bridging the gap between the real and virtual worlds. Data processing must go beyond cyberspace. IoT will require new and innovative services in the real world, which will depend on our ability to combine data with physical operations. Hitachi is implementing social innovation by integrating robotics with IoT platforms and using that technology in social infrastructures.

 

In 2005 Hitachi's R&D developed one of the first human symbiotic robots and named it "Excellent Mobility and Interactive Existence as Workmate," or EMIEW for short. Last year they introduced the third version, EMIEW 3, which we displayed at our NEXT 2017 event in Las Vegas, September 18-19. Here is a photo of EMIEW 3 and me in Vegas. EMIEW 3 is on the right.

Emiew and Me.jpeg

The purpose of EMIEW is to assist our daily lives and to live with humans. It is designed with a round shape, and its voice is programmed to be child-like. Its size was designed to be at eye level with someone who is seated, and it weighs only 15 kg so that it will not do any harm in the event that it bumps into a person. It rolls along on wheels at a normal walking speed of 6 km/hr, can step over a 5.9-inch height, avoid obstacles, and right itself if it happens to fall over. It has a remote brain in the cloud that can provide information from video cameras and other sensors around the area and communicate with other EMIEWs. It can isolate human voices amid noise, communicate in English, Japanese, and Chinese, make appropriate responses to questions regardless of phrasing, and initiate conversations based on visual analysis of what it sees through its own cameras or remote cameras in the cloud. EMIEW incorporates AI, natural language processing, visual analytics, autonomous mobility, and edge and cloud computing all in one package.

 

EMIEW 3 is a human symbiotic robot designed to provide user assistance and information services. It is intended for practical service use at customer sites and was developed for safer and more stable operations. It was recently tested at Haneda International Airport to assist travelers with directions and questions.

 

emiew Use case.jpg

 

EMIEW has some very Japanese characteristics. When I lived in Japan, I was always impressed by how helpful and polite people were. When I was lost in Tokyo, I often had to ask for directions, especially in the large subway stations. People would stop and try to understand my poor Japanese, then walk with me to the right subway platform. This would never happen in the U.S. If I were able to stop someone, they would at best point me in the right direction and then hurry off on their own business. That is why it is important for EMIEW to be able to move as well as converse. EMIEW will also bow to you and kneel. EMIEW 3 in the picture above is kneeling. I chose not to kneel, since I wasn't too sure that I could stand up again.

 

EMIEW is still in a development phase, with co-creation under way with several customers around the world. In Japan it may be able to provide caregiving services for the aging population. Each project will provide valuable experience with customers in the field, and we will be able to harvest many of the learnings for other IoT projects.