In the software industry, we have been talking about Big Data for years. Lots of specialized technologies have been built to capture all types of data in a repository for downstream usage. You have probably heard some experts saying: “don’t worry, if you don’t use it today, you will need it tomorrow!”. It’s like keeping your old stuff, furniture and all, and storing everything in your attic. And you never use it again until, on some weekend, you go up and throw it all away. So, what is going on here?
Big Data solutions still focus on data capture and less on downstream usage. Even today, many vendor briefings dwell predominantly on data ingestion, data management, catalogs and all the cool and sophisticated technologies that come with that: connectors here and adapters there, a “spaghetti” of cloud services that only a few understand, “just” to move all your data to some place. It is like using all kinds of fancy (expensive) tools to move all your furniture to your attic, then closing the door and forgetting about it.
Many software vendors did not start as “big data” solutions: they came from different backgrounds and along the way rebranded themselves as big data vendors. For example, most IoT vendors today offer a Big Data solution, since they figured out that getting data from sensors without one doesn’t work. But many of these vendors are now figuring out that they need an Artificial Intelligence (AI) platform to get insights out of the data. They are now transitioning to being AI vendors, not Big Data vendors anymore. So big data was just a phase, or a trend, initiated but not completed. And while software vendors keep redefining themselves, enterprises are left with big data “attics” they don’t do anything with. By the way, AI is becoming a phase too, as AI vendors are now figuring out that, in the end, they need enterprise apps for the data.
Our human brain isn’t built for dealing with big data: we only engage when there are tangible advantages and outcomes today, not maybe tomorrow (old furniture in your attic). What happens is that at first you feel like you have a secret weapon in your pocket, but in reality, over time, people forget about it, and when the next storage bill arrives they question why they should keep all that data and pay for it.
In order to deal with big data, we should stream and store only data that we know we are going to use immediately for downstream use cases. And, yes, you can start with the classical reporting and analytics use cases, but … that is not enough and does not have a tangible business justification. Why would you move all your data through the cloud “spaghetti” services just to do what we have already been able to do for the last ten years? We have to invent more downstream use cases than these.
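To make that a bit more concrete, here is a minimal sketch in Python of what “store only what has a consumer” could look like. Everything here is made up for illustration (the `DOWNSTREAM_CONSUMERS` registry, the event shapes); it is not any vendor’s API, just the idea that an event only lands in the repository if a named use case is already waiting for it.

```python
# Hypothetical sketch: persist an event only if a downstream use case
# has already registered interest in it. All names are illustrative.

from typing import Callable

# Registry mapping event types to the use cases that consume them.
DOWNSTREAM_CONSUMERS: dict[str, list[str]] = {
    "payment": ["fraud_detection", "budgeting"],
    "sensor_reading": ["predictive_maintenance"],
}

def should_store(event_type: str) -> bool:
    """Store an event only if at least one use case will consume it."""
    return bool(DOWNSTREAM_CONSUMERS.get(event_type))

def ingest(event: dict, store: Callable[[dict], None]) -> None:
    if should_store(event["type"]):
        store(event)  # goes to the repository with a purpose attached
    # else: dropped -- no consumer, no attic

# Example:
ingest({"type": "payment", "amount": 42}, store=print)
ingest({"type": "clickstream", "page": "/"}, store=print)  # silently dropped
```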
And yes, you will say … “well, you can apply AI and you can build AI apps!”. True, but the reality is that AI is not yet mainstream, as many software executives cannot see its value, and independent AI apps that are not integrated with your core enterprise apps will again be a new architecture nightmare. The opportunity lies more with enterprise apps changing their architecture and embedding AI into their future architecture (closed loop). AI is not the next UI; it is the next generation of business logic: not hard-coded, no rules engines, but a trained model.
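As a toy illustration of that shift (not anyone’s actual product), compare a hard-coded rule with a trained model making the same approval decision. The sketch assumes scikit-learn is available; the dummy training data stands in for the historical decisions a real enterprise app would learn from.

```python
# Toy illustration of business logic as a trained model instead of
# hard-coded rules. The model is trained inline on dummy data here;
# in a real app it would be trained, versioned and monitored elsewhere.

from sklearn.linear_model import LogisticRegression

# Old style: hard-coded rule / rules engine.
def approve_rule_based(amount: float, prior_orders: int) -> bool:
    return amount < 1000 or prior_orders > 5

# New style: the decision boundary is learned from data, not written by hand.
X = [[50, 0], [2000, 1], [300, 10], [5000, 0], [120, 3], [4000, 12]]
y = [1, 0, 1, 0, 1, 1]  # 1 = approved in the historical data
model = LogisticRegression().fit(X, y)

def approve_model_based(amount: float, prior_orders: int) -> bool:
    return bool(model.predict([[amount, prior_orders]])[0])

print(approve_rule_based(2500, 2), approve_model_based(2500, 2))
```

The point of the closed loop is that the second function improves as the enterprise app feeds new outcomes back into training, while the first only changes when someone rewrites the code.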
But we need more downstream use cases. The sweet spot is apps that are data “hungry”: fraud detection, risk and compliance, budgeting and planning, augmented search and recommendation apps (best vendor, pricing, etc.). Or the next-gen MRP driven by an AI engine: Artificial Resource Planning (ARP).
As you bring in more of these use cases, you can start bringing additional data into your big data repository and take advantage of it immediately, today. Maybe your attic becomes more of a strategic asset than a dump.
Stay safe.
Massimo