ZK Research: Home

Posts Tagged ‘network management’

Over the years, a number of companies have become synonymous with certain technology markets. These are companies that have been the primary evangelists for a market and are typically the technology and/or share leaders. Examples of this are F5 with Application Delivery Controllers and Aruba Networks in wireless LAN. In WAN optimization, Riverbed is that vendor. The company has had its way with WAN optimization for over a decade, holds over 50% market share and has been a leader in Gartner’s Magic Quadrant for WAN optimization for seven years running.

However, while being a de facto standard has many advantages, it often makes it a challenge to move into adjacent markets. Blue Coat is a great example of a company that became known as a great security specialist but struggled to establish itself in the WAN optimization market. This has been Riverbed’s struggle over the past few years. The company has acquired companies like Mazu, OPNET and, my favorite, Zeus, to move into new markets, but has struggled to grow its share in these areas.

[keep reading…]

The role of the CIO has changed more in the past five years than any other position in the business world. Success for the CIO used to be based on bits and bytes, and is now measured by business metrics. Today’s CIO needs to think of IT more strategically and focus on projects that lower cost, improve productivity or, ideally, both.

However, many IT projects seem to be a waste of time and money. It’s certainly not intentional, but a number of projects that seem like they should add value rarely do. Here are what I consider the top IT projects that waste budget dollars.

Over-provisioning, or adding more bandwidth

Managing the performance of applications that are highly network-dependent has always been a challenge. If applications are performing poorly, the easy thing to do is just add more bandwidth. Seems logical. However, bandwidth is rarely actually the problem, and the net result is usually a more expensive network with the same performance problems. Instead of adding bandwidth, network managers should analyze the traffic and optimize the network for the bandwidth-intensive applications.
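The advice above — analyze first, then optimize — can be sketched in a few lines. This is a minimal, hypothetical example (the flow records and application names are invented, not from any particular tool): rank applications by bytes consumed so the real bandwidth hogs are identified before anyone signs off on a circuit upgrade.

```python
from collections import Counter

def top_talkers(flows, n=3):
    """Rank applications by total bytes transferred. The heaviest
    consumers are the candidates for optimization (QoS policy,
    scheduling, caching) before more bandwidth is purchased."""
    totals = Counter()
    for flow in flows:
        totals[flow["app"]] += flow["bytes"]
    return totals.most_common(n)

# Hypothetical flow records, e.g. as exported by a flow collector
flows = [
    {"app": "backup", "bytes": 9_000_000},
    {"app": "voip",   "bytes":   120_000},
    {"app": "crm",    "bytes":   800_000},
    {"app": "backup", "bytes": 7_500_000},
]

print(top_talkers(flows))
```

In a case like this, rescheduling the backup job would likely do more for application performance than a bigger pipe.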

[keep reading…]

Big data and analytics are hot topics of conversation for almost anyone in IT today, including network operations. This is one of the reasons Gigamon has been on a tear over the past couple of years, especially since its IPO last year.

Last week, Gigamon announced an upcoming application to generate and export NetFlow records from its visibility fabric. The NetFlow Generation application will create NetFlow records and then send that information to one of the many NetFlow collectors and analyzers available on the market today.

Historically, Gigamon has focused on developing features and applications to help optimize the performance of network tools. This application, though, will help optimize the performance of network infrastructure, such as routers and switches. Generating NetFlow traffic can be very processor-intensive, and offloading this to the visibility fabric can reduce the burden on network hardware.
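To see why NetFlow generation is processor-intensive, consider what a generator has to do at its core: inspect every packet and aggregate packets into flow records keyed by the classic 5-tuple. The sketch below is illustrative only — it is not Gigamon's implementation, and the packet-header dictionaries are hypothetical stand-ins for live traffic.

```python
from collections import defaultdict

def build_flow_records(packets):
    """Aggregate packets into flow records keyed by the 5-tuple
    (src IP, dst IP, src port, dst port, protocol). Touching every
    packet this way is why generation burdens router/switch CPUs."""
    flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
    for pkt in packets:
        key = (pkt["src_ip"], pkt["dst_ip"],
               pkt["src_port"], pkt["dst_port"], pkt["proto"])
        flows[key]["packets"] += 1
        flows[key]["bytes"] += pkt["length"]
    return flows

# Hypothetical packet headers; a real generator taps live traffic.
packets = [
    {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2",
     "src_port": 50000, "dst_port": 443, "proto": 6, "length": 1500},
    {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2",
     "src_port": 50000, "dst_port": 443, "proto": 6, "length": 600},
]

records = build_flow_records(packets)
```

Both packets above belong to the same flow, so they collapse into a single record — the records are then exported to whatever collector the organization runs.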

[keep reading…]

This week, traffic visibility solution provider Gigamon announced its Unified Visibility Fabric, which provides Traffic Intelligence to help enterprises and service providers get a better handle on what traffic is flowing across the network. Gigamon has beefed up the application and services layer of its visibility fabric with new applications and features that offer advanced filtering capabilities, such as stateful correlation, user-level awareness and deep packet visibility. The Traffic Intelligence provides more granular filtering and forwarding to make sure the tools and applications network managers use to manage and secure the network receive only the data they need to operate.

Gigamon’s focus is to provide fabric-wide, integrated applications that send the correct data to the correct tools so organizations can optimize the performance of the tools, including network and application performance.
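Conceptually, this "right data to the right tool" model is a set of match-and-forward rules applied to every packet. The sketch below is a simplified illustration of the idea, not Gigamon's product logic; the tool names, rule structure and packet fields are all hypothetical.

```python
def forward(packets, rules):
    """Deliver each packet to every tool whose filter matches it,
    so each tool sees only the traffic it needs to operate."""
    streams = {tool: [] for tool, _ in rules}
    for pkt in packets:
        for tool, match in rules:
            if match(pkt):
                streams[tool].append(pkt)
    return streams

# Hypothetical rules: a security tool needs everything,
# a VoIP monitor only needs SIP signaling traffic.
rules = [
    ("ids",          lambda p: True),
    ("voip_monitor", lambda p: p["dst_port"] == 5060),
]

packets = [
    {"dst_port": 5060, "length": 200},   # SIP
    {"dst_port": 443,  "length": 1500},  # HTTPS
]

streams = forward(packets, rules)
```

The VoIP monitor receives one packet instead of two; at scale, that kind of filtering is what keeps an oversubscribed tool from dropping the data it actually cares about.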

There’s no question that the trends of video, virtualization, software-defined networking, BYOD, 40 Gig and 100 Gig have all added significantly more traffic to networks today. The challenge created by the increased volume of traffic, combined with increased network speeds, is that the management, performance and security tools customers use can’t capture the volume of data being pushed to them. Think of network traffic as having to pass through a tollbooth; when it gets through, it’s directed to the right tool(s). If there’s too much traffic, the cars get backed up and the things on the other side of the toll plaza won’t operate as well.

[keep reading…]

Keeping the company network up and running is, by far, the most important task that a network manager has today. However, the largest cause of downtime is actually self-inflicted. ZK Research recently ran a survey that asked what the primary cause of downtime with networks is today, and the No. 1 response was “human error,” with 29% of the 1,320 respondents citing this as the top issue. This is down from the 37% that my research showed a couple of years ago, but it’s still top dog.

There are a number of reasons why human error causes downtime, and they all tend to revolve around the fact that network managers typically have very poor visibility holistically across the network. Additionally, change management, process documentation and auditing tend to be done on an ad hoc basis. Some do it well, but most don’t. Now, in many ways, this really isn’t the fault of the IT department, as the tools to manage network changes and to see what’s going on with the network also tend to be pretty poor.

Last week, ActionPacked Networks announced the 3.1 version of its LiveAction network management product to address some of these issues in Cisco environments. ActionPacked Networks is a Cisco Developer Network partner and has added a number of new features to improve the visibility and manageability of Cisco networks.

[keep reading…]

It’s certainly been an exciting month for Extreme Networks. Earlier this month, the company closed the acquisition of Enterasys and announced earnings that Wall Street liked so much that the stock shot up 20% to a five-year high.

And this week the company announced its new Summit X770 top-of-rack (ToR) switch. The X770 is a 1RU switch but has a whopping 104 10 Gig-E ports on it, which makes it the highest-density 1RU switch that I know of. Alternatively, customers can configure the switch with 32 40 Gig-E ports.

Why might anyone need this many ports and that much bandwidth in a single RU switch? Well, the answer is bandwidth, and there’s certainly no shortage of new bandwidth-generating applications in the data center today. Extreme is focusing this particular switch on “Big Data” environments, which is a sound strategy given the momentum behind big data today and the reliance on the network.

[keep reading…]
