This syndicated post originally appeared at Zeus Kerravala's blog.
When I look back at the past decade, it’s remarkable to see the changes that have taken place in corporate IT. For those of you who don’t know my past, I was in corporate IT prior to being an analyst. I, like many other IT professionals, built an IT philosophy based on tight control. IT had control over everything – the networks were all private lines or frame relay, each application had its own dedicated servers and storage and, of course, IT owned the endpoints. The whole model of IT was built on a premise of IT control, even if there were a lot of inefficiencies in the architecture.
Today, the IT model has been flipped on its head. Things that were the exception are now the norm. Remember when people had to ask permission to work from home? Now it’s done all the time. When you saw a Mac in the workplace it seemed unusual; now it’s weird to not see several. Virtualization was a tool for the labs; today there are more virtualized workloads than physical ones. Because we live in this consumerized, virtualized, mobilized, cloud-driven IT environment, we can do so much more with technology than ever before. People now blend work and life so smoothly there’s very little transition time between the two. Life is great, right?
Well, not so fast. The cost to IT managers is a technology environment that is much more difficult to manage. As an example, consider the basic application of enterprise voice. This used to be simple. You had a PBX, phone and cable. If there was a problem, it was one of those elements. Simple, but highly inefficient. Today, enterprise voice is made up of virtual servers, cloud servers, desk phones, soft phones, Wi-Fi clients, application integration and other things. How does one even begin to troubleshoot an environment like this? Not with legacy management tools, that’s for sure.
Legacy management tools operate “bottom-up,” with each IT element having its own management application. All the feeds from these are rolled up into some kind of centralized management console with some sort of rules-based correlation to find errors. This worked OK in the era of static IT, but IT is hardly static now. Everything is in motion or virtual or consumer, so this bottom-up approach is just too hard to scale. This is one of the reasons the management dashboard can be all “green,” and yet stuff isn’t working.
Solving this requires flipping network management around and looking at the environment through the lens of the service and not the IT elements. Management tools need to have an understanding of the IT elements that make up a service – both virtual and physical – and then what the interactions look like between them. If this can be baselined, then any deviation from that baseline indicates a problem could be occurring and IT should have a deeper look.
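To make the baselining idea concrete, here is a minimal sketch of how a service-centric tool might flag deviations. This is illustrative only: it assumes a simple statistical baseline (mean plus standard deviation) over one metric, and the metric name and numbers are hypothetical, not from any particular product.

```python
import statistics

def build_baseline(samples):
    """Compute the mean and standard deviation of a metric's history."""
    return statistics.mean(samples), statistics.stdev(samples)

def deviates(value, baseline, threshold=3.0):
    """Flag a reading more than `threshold` standard deviations
    from the baseline mean."""
    mean, stdev = baseline
    return abs(value - mean) > threshold * stdev

# Hypothetical latency history (ms) for one service interaction,
# e.g. a soft phone talking to a call-control server.
history = [42, 40, 45, 41, 43, 44, 40, 42]
baseline = build_baseline(history)

print(deviates(41, baseline))   # within the normal range -> False
print(deviates(95, baseline))   # well outside the baseline -> True
```

A real tool would baseline many metrics per interaction and learn seasonality, but the principle is the same: the alert comes from deviation against learned normal behavior, not from a static, element-level threshold.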
A robust visual front end to a performance-based solution allows IT to be much more predictive and respond faster to user problems. One of the data points that I like to point out is that 73% of IT issues are reported by the end user, not the IT department. So, companies have spent millions of dollars on legacy network management tools to catch 27% of the problems?
This is why I think there’s a sea change in network management coming. I feel so strongly about this that I’m doing a webinar on this topic next week. IT has evolved by leaps and bounds over the past decade, and now it’s time for the management tools to evolve and keep up.