Archive for April 2013

The data center is where all the action has been in networking over the past few years. We've seen the introduction of the network fabric, the rise of software-defined networking (SDN), the emergence of a number of startups, and a fair bit of M&A activity as well. Because of this rapid evolution, almost every major network vendor – Cisco, Brocade, Juniper, Extreme, Avaya, Alcatel-Lucent and others – has revamped its data center portfolio.

The one vendor that I thought was noticeably absent from the data center networking wars was HP. The company outlined its FlexFabric vision last year, but the only products it had to support the related architecture were the 10K, which is a campus switch, and the 12500, which was great when H3C first released it but was getting a bit old even when HP acquired H3C. Now, it's clearly past its prime. The company has positioned the 12500 as a data center switch and has beefed up the feature set accordingly. The 12500 now supports Ethernet Virtual Interconnect (EVI), SPB and other data center features. As of now, it's limited to 10 Gig-E, but HP has stated that 40/100 Gig-E will be available later this year. Despite the added features, though, HP is the vendor I get the fewest inquiries about regarding data center networking.

All eyes were on Silicon Valley and the Open Networking Summit this week. One of the big topics of conversation was Intel's push into the already highly competitive software-defined networking (SDN) space.

In theory, this move by Intel makes sense. SDN transforms the data center and creates an opportunity for low-cost switch manufacturers to become a more important part of it. However, theory and reality are two different things, and I don't believe a pure white-box switch really works in this market.

Over the past decade, chip companies such as Broadcom, Marvell, Mellanox and now Intel have tried to create software to complement their switch chips, in a bid to sell more of those chips. Intel has Wind River, Broadcom acquired LVL7, Marvell has Radlan and even Mellanox uses open source software. Software plus chips equals success, right? Again, it makes sense in theory, but in practice it hasn't worked.

The general availability of CloudAXIS is a significant milestone, as it lets the company take advantage of several market transitions and evolve Polycom into more than a vendor that sells expensive room-based video systems.

Last October at its Strategy Day, Polycom announced that its RealPresence CloudAXIS suite was in beta. The software suite is an extension of the company’s RealPresence platform and enables browser-based UC applications such as chat, presence and, of course, video. After what appears to be a successful six-month beta program, Polycom this week announced the general availability of CloudAXIS.

The GA version of CloudAXIS is a significant milestone for Polycom, as it gives the company a solid platform to take advantage of several market transitions and continue evolving into more than a vendor that sells expensive room-based video systems. CEO Andy Miller regularly talks about the evolution of Polycom, and CloudAXIS is a good proof point. These transitions are:

Last week, Alcatel-Lucent (ALU) held its annual industry analyst conference in Annapolis, Maryland. Unified communications has historically been the primary focus of ALU's go-to-market strategy, but the company has spent the last few years beefing up its OmniSwitch data networking portfolio as well. In fact, if you recall, ALU was the focal point of a Network World article in which the company beat out Cisco for a network project in Cisco's own home state.

Like every other network vendor, ALU has been trying to capitalize on the market opportunity created by the rise and complexity of server virtualization. I recently did some research showing that a small amount of server virtualization saves both capex and opex. However, highly virtualized environments – those that are more than 50% virtualized – have actually seen operational costs rise by as much as 20%. High levels of server virtualization create unpredictable traffic flows that can wreak havoc on the network.

The company is willing to share the risk with its channel partners to move this market forward and this will be a competitive differentiator.

This week was Alcatel-Lucent's (ALU) industry analyst conference in Annapolis, MD. Much of the first day was dominated by presentations and discussions around the communications portfolio, which had a distinct cloud flavor to it. The company had announced some of this at Enterprise Connect last month but filled in some gaps here, and I thought the offerings were worth reviewing given how hot cloud-based UC is today.

ALU will go to market with three cloud packages: OpenTouch Cloud Enterprise, OpenTouch Cloud Office and OpenTouch Cloud Personal. ALU's strategy, like that of the other equipment vendors, is to be a cloud enabler, not a cloud provider; it will sell infrastructure to its resellers and channel partners. There are many similarities between ALU's go-to-market approach and those of the other UC solution providers, but there are also a number of differences.

Talk to anyone in IT today about anything and it’s hard not to transition to a discussion on BYOD. Almost every IT leader I speak to is struggling with the pressure of having to allow workers to use personal devices in the workplace while still maintaining security. This is one of the reasons the mobile device management (MDM) market has been growing.

However, it’s been my belief that MDM alone isn’t enough to establish a BYOD strategy. Most MDM solutions are based on client software being deployed and maintained on the device. But devices change so frequently in the workplace that trying to manage security by managing the device does not scale. What’s needed is a solution that’s delivered from the network so devices can be brought onto the corporate network and then used to access information without putting the organization at risk.

This week, data center specialist Brocade announced its “HyperEdge” architecture for campus networks. The concept of HyperEdge is similar to the value proposition the company put forth with its data center fabric architecture – networking is becoming increasingly complex, so a simpler, flatter network is required to support companies moving forward.

Over the past few years, the concept of the network fabric has been aligned with the data center, since this is where the most significant changes have been on the compute side. Virtualization, cloud computing, growth in storage and other trends have driven more east-west traffic, creating the need to move away from the traditional multi-tier network built on Spanning Tree Protocol (STP). The solution offered by almost every mainstream network vendor today is to implement a two-tier network (or single-tier in the case of Juniper's QFabric) based on TRILL, Shortest Path Bridging or some sort of proprietary protocol to replace STP.
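To see why east-west traffic pushes vendors toward flatter designs, here is a minimal sketch (illustrative only, not any vendor's implementation) comparing worst-case switch hop counts for server-to-server traffic in a classic three-tier access/aggregation/core tree versus a two-tier leaf-spine fabric. The hop counts assume a textbook topology; real deployments vary.

```python
def three_tier_hops(same_access: bool, same_agg: bool) -> int:
    """Worst-case switch hops in a three-tier access/aggregation/core tree."""
    if same_access:
        return 1          # both servers on the same access switch
    if same_agg:
        return 3          # access -> aggregation -> access
    return 5              # access -> agg -> core -> agg -> access

def leaf_spine_hops(same_leaf: bool) -> int:
    """Switch hops in a two-tier leaf-spine fabric."""
    # Any two servers on different leaves are exactly leaf -> spine -> leaf.
    return 1 if same_leaf else 3

# East-west traffic between servers under different aggregation pairs:
print(three_tier_hops(same_access=False, same_agg=False))  # 5 hops; STP also
                                                           # blocks redundant links
print(leaf_spine_hops(same_leaf=False))                    # 3 hops; all spine
                                                           # paths stay active
```

The shorter, uniform path length (and the ability of TRILL/SPB to keep every link forwarding instead of blocking loops the way STP does) is the core argument behind the two-tier fabric pitch.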
