Thanks to Gartner, we have a new buzzword: bimodal IT. It’s nothing special, actually, just a new way to describe common sense and the fact that the world – the IT world in this case – is not black or white.
In practice, in modern IT organisations it is better to find a way to integrate different environments instead of trying to square the circle all the time. This means that you can’t apply DevOps methodology to everything, nor can you deny its benefits if you want to deploy cloud-based applications efficiently. (Gartner discovers great truths sometimes, doesn’t it?)
But here is my question: “Does bimodal IT need separate infrastructures?”
Bimodal IT doesn’t mean two different infrastructures
In the past few weeks I have published quite a few articles about Network, Storage, Scale-out, and Big Data infrastructures. Most of them address a common problem: how to build flexible, simple infrastructures that can serve legacy and cloud-like workloads at the same time.
From the storage standpoint, for example, I would say that a unified storage system is no longer synonymous with multi-protocol per se; what matters much more is its ability to serve as many workloads as possible at the same time: a bunch of Oracle databases, hundreds of VMs and thousands of containers accessing shared volumes concurrently. The protocol used is just a consequence.
To pull it off, you absolutely need the right back-end architecture and, at the same time, APIs, configurability and tons of flexibility. Integration is another key part: the storage system has to be integrated with all the different hypervisors, cloud platforms and, now, orchestration tools.