We have all heard the ‘data is the new oil’ adage, usually delivered by someone who honestly believes you’re hearing it for the first time. Indeed, the caveat that data needs to be refined before it is useful – as oil is only thick goop before refinement – is never far away either.
Thankfully Jonathan Bowl, vice president and general manager of big data analytics and IoT at Hitachi Vantara, has a much more refreshing take on this aphorism. Data is not the new oil, he explains. Data does not run out; it does not deplete; and you can use it an infinite number of times. What’s more, the more you use it, the better it gets.
Bowl was speaking at the recent AI & Big Data Expo event in Amsterdam on the ‘big data challenge’. The ultimate goal is not to go the same way as Kodak and Blockbuster. Yet in both cases the shoots of success were there, if only they had been acted upon. Kodak sold its digital imaging patents to a big tech consortium – Apple, Google, Facebook and more – after it had filed for Chapter 11 bankruptcy in 2012, while the data Blockbuster held on its patrons had far more potential than was ever realised.
Alongside those who fell by the wayside, Bowl noted those who were doing it right. Domino’s Pizza is a clear example of a company which significantly changed its fortunes. The firm realised its mission wasn’t to make pizzas but to deliver them, thereby thrusting it into the technology business. As per HBR’s 2016 analysis, fully half of the company’s employees at its HQ work in data and analytics.
It all fits in with Microsoft CEO Satya Nadella’s vision that every company needs to become a technology company. “Just get a taxi in London – you see all the apps that are appearing,” explains Bowl. “Organisations have always had the data – it’s less about collecting the data now, it’s using what you’ve already got.”
The pace of business is changing for those in the industry too. Cloudera and Hortonworks butted heads for years before coming together in a $5.2 billion merger at the end of last year. Salesforce shelled out $15.7 billion – its biggest ever acquisition – for Tableau last month.
Bowl notes that Hitachi has had to make its own big moves – the acquisition of business intelligence (BI) software provider Pentaho in 2015 being one such example. “Hitachi is a good example of a company that recognised it needs to change,” he says. “Hitachi Ltd sells ‘trains as a service’. If you look at some of the solutions we’re trying to build, the applications we’re trying to build… Hitachi Vantara, which is just one company of Hitachi, its heritage is in infrastructure and storage solutions. If I look at how much it’s changed in the three years that I’ve been there, it’s almost night and day.”
It is never the easiest of journeys, however. Last month, Bowl wrote a blog which argued that only 5% of corporate data had been successfully analysed to drive business value. The rest is left untapped on unconnected single devices or trapped in organisational silos. “Businesses are swimming in data but it’s only when you have it all under management that you can start to get a return on the data,” Bowl wrote.
One way of attempting to right this course is to hire data scientists, but again, companies have to know what to do with them. Jonny Brooks-Bartlett, data scientist at Deliveroo, wrote about the travails of being hired by certain companies last year: the data scientist is employed in the hope that they will write smart machine learning algorithms, but there is a ton of admin to plough through first – while the company simply wants ‘a chart that they could present in their board meeting each day’.
Hitachi employs many data scientists across its businesses, and Bowl’s team has a number of PhD data scientists where this topic crops up ‘a lot’, he explains. “It’s almost ticking a box – we’ve got some data scientists, we’re well on our way to transform. Ultimately, you’ve got to put them to work; you’ve got to empower them and make their job easier so they can spend the majority of their time doing the high-level work you hired them for in the first place – actually deriving value from data,” he explains.
“They can only work with the tools that they’ve got – if they can’t get to the data, if they can’t find the data, then that’s a challenge. It’s the old question – I don’t know what I don’t know. Until I get that, I can’t prove that one way or the other.”
The answer, Hitachi Vantara argues, is DataOps. It’s a relatively new methodology, but one the company believes should be considered standard business practice. It automates many of the processes that eat up data scientists’ time, ensuring data is in the right place, at the right time, and accessible to the right people. It’s a more agile, more deliberate approach to data science that ultimately enables better insights and, as a result, better business decisions.
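The article doesn’t spell out what that automation looks like in practice. As a rough illustration only – the record fields, validation rules, and bucket names below are hypothetical, not anything Hitachi has described – one automated DataOps step might validate incoming records and route them, so that data scientists receive clean, catalogued data instead of doing the plumbing themselves:

```python
# Hypothetical sketch of a single automated DataOps step: validate
# incoming records and route them, so clean data reaches analysts
# and bad data is quarantined for review. All names are illustrative.

def validate(record):
    """Treat a record as usable if it has a source, a timestamp, and a value."""
    required = ("source", "timestamp", "value")
    return all(record.get(field) is not None for field in required)

def route(records):
    """Split records into 'clean' (ready for analysts) and
    'quarantine' (held for repair or review) buckets."""
    buckets = {"clean": [], "quarantine": []}
    for record in records:
        buckets["clean" if validate(record) else "quarantine"].append(record)
    return buckets

incoming = [
    {"source": "sensor-a", "timestamp": "2019-07-01T10:00", "value": 21.5},
    {"source": "sensor-b", "timestamp": None, "value": 19.0},  # missing field
]
result = route(incoming)
print(len(result["clean"]), len(result["quarantine"]))  # prints: 1 1
```

The point of the sketch is the division of labour: the pipeline, not the data scientist, decides where each record belongs – which is the “right place, right time, right people” promise in miniature.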
“I can see that being a real focus now, making sure you can unlock all the data in an enterprise,” Bowl adds. “There was this notion once of data lakes – but data’s got too many characteristics, storage requirements, performance characteristics. You’ve got to put data where it needs to go, on the platforms where it’s best to reside.”