This syndicated post originally appeared at Zeus Kerravala's blog.

Without sounding sarcastic, the primary benefit of a virtual application delivery controller (ADC) is that, well, it’s virtual. It requires no hardware to deploy, making it low cost. It’s mobile, so the ADC can be moved from one location to another in real time, and it can be self-provisioned by anyone, including an application developer. But virtual ADCs have their drawbacks, too.

Historically, ADCs have been physical appliances located between the network and application tiers in a data center, or deployed at the edge of a network to help optimize service delivery. The primary role of ADCs has been load balancing, but a number of advanced features, such as encryption, security, video optimization and some application-specific features, have been added over the past half decade. This shift in functionality has broadened the need for ADCs across different verticals and company sizes.
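To make the load-balancing role concrete, here is a minimal sketch of round-robin scheduling, one of the simplest policies an ADC can apply when spreading requests across an application tier. The server names are purely illustrative, and a real ADC layers health checks, session persistence and the advanced features above on top of this basic rotation.

```python
from itertools import cycle

# Hypothetical back-end pool; names are illustrative only.
SERVERS = ["app-1", "app-2", "app-3"]

def round_robin(servers):
    """Yield servers in rotation -- the simplest load-balancing policy."""
    pool = cycle(servers)
    while True:
        yield next(pool)

scheduler = round_robin(SERVERS)
# Six incoming requests land evenly across the three servers.
assignments = [next(scheduler) for _ in range(6)]
print(assignments)
```

The same rotation logic runs whether the ADC is a physical appliance or a virtual edition; the debate below is about how fast the surrounding packet processing can go, not the policy itself.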

This increased demand is why so many more versions of the ADC have launched recently, including virtual editions. That raises the question, though: can virtual ADCs replace physical ones in production environments? There’s no doubt that virtual ADCs can be used as a developer tool, but the big question concerns production environments, which brings us to “to virtualize or not to virtualize?”

In my opinion, the application developer use case is one of the primary use cases for a virtual ADC. It allows application developers to test how applications respond to the various functions in an ADC before the application moves into a production environment. Otherwise the developer may write the application one way, then have to modify it after it moves into production.

The other obvious use case is for small businesses or companies that haven’t used ADCs before. It’s a low-cost way to try them out, understand the benefits and then decide whether staying with a virtual ADC is the right thing to do.

For high-performance areas, though, I believe that the scalability offered by dedicated, optimized hardware platforms can’t be matched by virtual appliances. I know many will argue with me on this point, but look at pretty much any network function in a high-performance area: routers, security devices, session border controllers, WAN optimization appliances and ADCs, to name a few. There are virtual versions of all these devices, but there is very little use of the virtual version in high-performance areas. Networking is hard and requires specialized hardware to be optimized.

Despite the advancements in virtualization technology, a physical device deployed on hardware optimized for its function will outperform a virtual version on general-purpose hardware. So if the ADC needs to perform well enough not to slow down the application or service, the small amount of CapEx saved by deploying a virtual ADC isn’t worth the risk of performance degradation. The heart of a data center or the edge of a network is definitely not the place to skimp, as the service problems that could result will eventually cost more money.

I do think virtual ADCs have their place. They can be used to augment a company’s overall ADC strategy. But before making the proclamation that the physical ADC is dead, think about the role the device plays and where it sits in the network.

Zeus Kerravala

Zeus Kerravala is the founder and principal analyst of ZK Research. Kerravala provides his clients with a mix of tactical advice for the current business climate and long-term strategic advice.
