I recently attended the Kafka Summit in London, and it sparked my interest in the current state of the art of big data topologies/architectures and where the differentiating value lies.
In the last decade we have seen the birth and growth of a huge range of big data solutions: on-premises, cloud, hybrid, even scalable relational DBs that resemble big data platforms. Powered by these big data technologies and the enormous amount of information they store, we have been able to develop powerful and accurate solutions based on AI, ML, DP and NLP. The companies implementing these solutions in the last few years have enjoyed a competitive advantage.
I am afraid to say that this is no longer enough, as these solutions were developed on the basis of batch processing, under the assumption that processing in real time was not an option. Unfortunately, at lightning speed, 'batch big data' is becoming a commodity: it is no longer disruptive and no longer represents a differentiating value against the competition.
Kafka Summit put the spotlight on the 'real-time' solutions, tools, infrastructure and components available on the market that companies can use to act on any 'event' immediately, instead of waiting until tomorrow.
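To make that contrast concrete, here is a minimal sketch in plain Python (no Kafka involved; the function names and the 24-hour nightly batch window are illustrative assumptions). It compares how long an event waits before anyone reacts to it in a batch pipeline versus an event-driven one:

```python
def batch_latencies(arrival_times, window_close):
    """Batch: every event waits until the batch window closes (e.g. a nightly job)."""
    return [window_close - t for t in arrival_times]

def streaming_latencies(arrival_times):
    """Event-driven: each event is handled the moment it arrives, so reaction latency is ~0."""
    return [0.0 for _ in arrival_times]

# Events arriving during the day, in hours since midnight (illustrative data).
arrivals = [1.0, 5.0, 9.0]

print(batch_latencies(arrivals, window_close=24.0))  # [23.0, 19.0, 15.0] hours of waiting
print(streaming_latencies(arrivals))                 # [0.0, 0.0, 0.0] — immediate action
```

The point of the sketch: with a nightly batch, an event that happens at 1 a.m. sits unactioned for 23 hours; with an event-driven architecture, the reaction happens as the event arrives.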
Our partner Confluent has developed a set of enterprise components that facilitate the implementation of real-time solutions. These solutions can now be adopted on any kind of infrastructure (on-premises, cloud or hybrid), so there are no excuses not to start the journey!