
ZK Research Blog


Veeam used its annual user event, VeeamON, in Miami this week to release ransomware research highlighting some alarming statistics that should concern businesses of all sizes. Veeam is a backup and recovery company, so one might wonder why it's releasing cybersecurity research. The reality is that ransomware recovery has become a top use case for backup and recovery. Companies will continue to spend on security tools to keep the bad guys out, but when they are breached and their data is locked up, the ability to restore that data quickly can make the difference between maintaining business operations with minimal disruption and shelling out bitcoin and hoping for the best.

The Veeam 2023 Ransomware Trends Report provides a reality check on the increasing threat of ransomware and how businesses are coping. An independent research firm surveyed 1,200 IT leaders across 14 countries whose organizations experienced at least one ransomware attack in 2022. It's important to note that Veeam ran this as a blind survey across a wide base of companies rather than focusing only on its customers, which gives a truer indication of the state of ransomware. The respondent breakdown was as follows: security professionals (37%), chief information security officers or other IT executive stakeholders (21%), IT operations generalists (21%) and backup administrators (21%). They explained how ransomware affected their organizations, IT strategies and future data protection initiatives.

One of the most glaring findings is that one in seven organizations could see over 80% of their data compromised in a ransomware attack, reflecting a major deficiency in the protection measures many businesses have in place. Even worse, 93% of attacks target backups, and in three out of four cases the attackers succeed in crippling an organization's ability to recover.
On average, it takes at least three weeks per attack to recover after triage. In 2022, most organizations (80%) paid the ransom to recover their data, a 4% increase from the previous year. That is surprising, considering 41% of organizations have a policy against such payments. Yet paying the ransom doesn't guarantee data recovery: 21% of organizations that paid failed to regain access to their data. This data point might shock people, but it's a story I've heard many times. Once the threat actors have the money, they have little incentive to help the business. Only 16% of organizations avoided paying the ransom by restoring data from their backups.

The report stresses the importance of data backup as a defense against ransomware, especially because cyber criminals often target backup repositories. Almost all (93%) attacks attempted to compromise backups, resulting in 75% of organizations losing some backup data and 39% losing all of it. Given the risks, it's imperative that businesses ensure their backups are "immutable," meaning incapable of being changed or deleted. The good news is that 82% of organizations already use immutable clouds, while 64% use immutable disks. Only 2% don't employ any form of immutability in their backup solution. Veeam is optimistic that more organizations will achieve immutable data backup across their entire data protection lifecycle this year.

Another promising statistic: 87% of organizations have a risk management program, a plan designed to protect against cyberattacks. But only 35% of those organizations believe their plan is working well, while more than half (52%) are looking for ways to improve it. That is why organizations need a playbook, a set of steps to follow when a cyberattack occurs. At a minimum, the playbook should include two steps. First, keep clean extra copies of data stored somewhere safe.
The backup copies should be protected from attacks and free of any harmful or malicious code. Second, use the data in those backup copies to get the organization up and running if the main systems are attacked. Additionally, there should be a cohesive approach to dealing with ransomware across the organization, since a separation often exists between backup and cyber teams.

Another worrying trend uncovered in the report is the increasing cost and declining coverage of cyber insurance. A fifth of IT leaders reported that ransomware is now excluded from their company policies, while most experienced increased premiums and deductibles, as well as reduced coverage benefits. The vast majority (96%) of cyberattack victims had insurance that could have covered the ransom in 2022. Half used insurance specifically designed for cyber incidents, 28% used insurance that wasn't cyber-specific, and 18% didn't use their insurance at all even though they had it. Getting insurance to cover cyberattacks is becoming more difficult and expensive, much as flood insurance is getting harder to obtain because of more frequent storms. In fact, 21% of organizations said their insurance policies no longer cover ransomware attacks.
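Keeping clean copies is only half the job; verifying that a copy is intact and restorable can start with something as simple as a checksum comparison. Below is a minimal, illustrative Python sketch of that idea — not Veeam's implementation, and the file names are hypothetical:

```python
import hashlib
from pathlib import Path

def checksum(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in 1 MB chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_backup(source_snapshot: Path, backup_copy: Path) -> bool:
    """A clean backup should hash identically to the snapshot it was taken from."""
    return checksum(source_snapshot) == checksum(backup_copy)
```

In practice, the stored digests would be kept alongside the immutable copies so a tampered backup is detected before a restore is attempted, not during one.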

What to do

The report strongly recommends that businesses take a more proactive approach to ransomware. Given the high probability of cyberattacks and the significant data loss each attack can cause, organizations should place a high emphasis on both preventing attacks and preparing effective recovery strategies. Veeam advises businesses to maintain clean backup copies and regularly verify their recoverability as part of their risk management strategy. Other recommendations include using "staged restorations" to bring data back gradually and prevent system re-infection during recovery; this matters because restoring infected data will likely trigger a second ransom event. Lastly, implementing hybrid IT architectures can strengthen an organization's overall disaster recovery strategy by allowing servers to be recovered to different platforms.

One recommendation I would add is that backup and recovery funding and policy should be set in conjunction with the security team. Historically, backup and recovery has been poorly funded because no one cares about it until it's a problem. Security, on the other hand, is a top area of focus, as everyone, including business leaders, worries about a breach. All of the money put into cyber protection aims to prevent a breach. Zero trust, security information and event management, security orchestration, automation and response, extended detection and response, next-generation firewalls and other tools each protect the company in a different way. None of that accounts for the worst-case scenario: a breach occurs, data is encrypted and a ransom is demanded. At that moment, backup and recovery will be put to the test. If it has been well-funded, tested and retested, data can be recovered quickly and the ransom ignored. If not, well, the data points in the Veeam report show what happens.
CISOs and chief information officers must work together to ensure that data protection, backup and recovery are all on the same page.

Ransomware, Kubernetes, and security were just some of the key themes at the Veeam 2023 conference.

This week Veeam is holding its annual user event, VeeamON, in Miami, FL. This is the ninth year Veeam has held the event, and attendance has grown steadily as Veeam's customer base has expanded. Headcount this year topped 16,000 people, an impressive number for a company not even two decades old. This is also the first year Veeam is hosting the event as the backup and recovery market leader: at VeeamON 2022, IDC had Dell fractions of a percentage point ahead of Veeam, but in the second half of 2022 Veeam grew 8.1% year over year while Dell shrank 2%, putting Veeam in the pole position.

Backup and recovery certainly doesn't have the sex appeal and sizzle of other technology categories, such as AI or collaboration. Still, it remains important, as data remains the lifeblood of companies. Digital transformation, AI, customer experience, collaboration, and other trends have one thing in common: they rely on an effective data backup and recovery strategy, a critical process that most companies do not do well. Given the struggles organizations have with backup and recovery, there is plenty of room for innovation, so I was looking forward to what Veeam had in store for the show. Below are my top five takeaways.

Simplicity Wins

As with many vendors, if you ask Veeam about its differentiators, you'll likely get a laundry list of technical advantages. While Veeam does have a leg up in several areas, such as Kubernetes, its big differentiator is that the product is easy to use, particularly for data recovery. I've often said this industry is filled with vendors that do a great job of backing up data but whose recovery processes are slow and error-prone. Veeam CTO Danny Allan echoed this during his keynote when he stated that backup is pointless without the ability to recover, and Veeam does that better than anyone. At the event, Kim LaGrue, CIO of the City of New Orleans, talked about her experience with Veeam. I asked her post-keynote why she chose Veeam, and she said the operator console was easy to use and intuitive, making file recovery fast and easy. During his keynote, Veeam CEO Anand Eswaran cited an IDC study that found Veeam recovers data from AWS, Azure, and GCP five times faster than any other solution, which translates into significant operational savings as well as better business continuity.

Backup and Recovery Combats Ransomware

How to handle ransomware? That is certainly the question for many organizations today. Some companies have a policy to pay it, particularly if they have good insurance. Others may keep Bitcoin on hand to pay the ransom when their organization is hit. Other organizations may choose not to pay and deal with the consequences when it happens. At the event, Veeam released its ransomware report, which showed that insurance companies paid 77% of ransoms, but 74% of companies saw an increase in premiums, and 43% saw their deductible go up. Relying on insurance is becoming increasingly expensive, which may only be a viable route for a short while longer. The best approach is to have a proven and tested backup and recovery strategy that can quickly restore the organization’s information and get things back to normal operations. During her time on stage, New Orleans’ LaGrue talked about how, by using Veeam, the city can now recover its full data set in a day or two, removing any advantage a fraudster may have when seeking a ransom payment. One important point is that when data is recovered, it must also be analyzed and cleaned so as not to reinstate the initial cause of the breach, which can then cause another ransom event. In sum, backup and recovery preparedness is the best, fastest, and most cost-effective way to combat ransomware.

Security Operations Should Focus on Backup and Recovery

Cybersecurity is going through its own modernization process. Companies are implementing zero trust, SSE, multi-factor authentication, XDR, SOAR, and other technologies to prevent breaches. Yet no matter how good the technology is and how smart the engineers are, breaches happen, and the question is, what happens then? The security team should ensure that if a threat does slip through all the cyber protection and the company is breached, the right backup and recovery solution is in place so operations can be restored quickly. Typically, backup falls under the CIO and security under the CISO, but these organizations must work together. Closer alignment between the two would open a new set of technology partners for Veeam and the backup and recovery industry, so I'm hoping to see more from Veeam here.

Kubernetes is the Next Frontier for Veeam

At VeeamON, the company announced version 6.0 of its Kasten K10 product, which is used for Kubernetes data protection. New features in the latest version include:
  • Enterprise-grade ransomware protection for Kubernetes via suspicious activity detection, alongside immutable backups that enable instant recovery. The release also extends threat detection by logging all events natively into Kubernetes Audit.
  • Scale and efficiency improvements. The new version includes an application fingerprinting feature that automatically maps newly deployed stateful applications to specific blueprints to achieve proper data consistency. This reduces risk and minimizes complexity, allowing the environment to scale faster.
  • Cloud-native expansion. Kasten K10 now supports Kubernetes 1.26, Red Hat OpenShift 4.12, and Amazon RDS, allowing for better interoperability. Veeam also added hybrid support on Google Cloud and cross-platform restore targets for VMware Tanzu and Cisco Hybrid Cloud.
Kasten is currently a small part of Veeam’s overall business, but as companies move to cloud-native application design, this should be its fastest-growing area.
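For context on what Kubernetes data protection operates on, products in this category generally build atop the standard Kubernetes VolumeSnapshot API. The fragment below is a generic illustration of that primitive, not a Kasten K10 configuration; the snapshot class and claim names are placeholders:

```yaml
# Generic Kubernetes VolumeSnapshot (snapshot.storage.k8s.io/v1) — the
# building block Kubernetes-native backup tools work with.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: app-db-snapshot
spec:
  volumeSnapshotClassName: csi-snapclass   # placeholder snapshot class
  source:
    persistentVolumeClaimName: app-db-pvc  # placeholder PVC to snapshot
```

Tools like K10 add the policy, scheduling, immutability, and application-consistency layers on top of primitives like this.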

Veeam Helps with Cloud Repatriation

During his keynote, CTO Allan talked about Veeam's core tenets, one of which is data freedom. The Veeam platform was designed to let customers back up data in one place and restore it elsewhere, which is necessary in disaster situations. For example, after a natural disaster, a company may choose to restore its private cloud to a public cloud temporarily while the physical facility is unavailable. Over the past year or so, I've talked to many companies that moved data and workloads to the cloud, only to see the cost of the service grow to the point where they want to bring them back on-prem. Veeam's ability to move data from one environment to another can give customers a fast and cost-effective way of repatriating workloads and data in-house. On an earlier call with analysts, Danny Allan stated that Veeam is, and will remain, unapologetically a backup and recovery vendor, and that seems to be serving the company well. Too often, tech vendors jump on the latest bandwagon, which derails them from their core competency. Even in ransomware recovery, Veeam is steadfast that it's not a security company, although it can play a key role in recovering from a breach. As I stated earlier, backup and recovery are hard to do well. Veeam's focus on being the best at what it does, regardless of compute model or data type, has served its customers well, and will continue to, as distributed organizations put more data and workloads in more places.

By integrating Anthropic's Claude into its platform, Zoom plans to boost contact center efficiency and deepen AI integration in its contact center.

Zoom this week announced a partnership with artificial intelligence system developer Anthropic. As part of the partnership, Zoom will invest in Anthropic and collaborate with the company to improve Zoom's use of AI. The exact amount Zoom Ventures invested was not disclosed. Zoom's venture arm has historically been very active in investing in companies it partners with; other investments turned technology partners include Neat, Mio, observe.ai, DTEN, and Theta Lake. This provides a strong pool of companies that can add to the Zoom ecosystem, with the possibility of a future financial return.

AI in the Zoom Contact Center

Zoom plans to integrate Anthropic's AI, specifically its Claude virtual assistant, into the Zoom Contact Center platform, which includes Virtual Agent and Workforce Management. The integration's strategic goal is to enable Zoom customers to improve service to their customers by making contact center agents more efficient, while also advancing Zoom's approach to using AI to enhance the customer experience. Claude is based on Anthropic's Constitutional AI model. In the context of the Zoom Contact Center, Claude will serve two primary roles: it will assist customer service agents by helping them find answers to customer inquiries or problems, and it will act as a self-service tool so customers don't have to wait for an agent.
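To make the agent-assist role concrete, here is a hypothetical Python sketch of what such a call to Claude could look like. This is not Zoom's actual integration; the prompt shape is invented and the model name is a placeholder:

```python
# Hypothetical agent-assist sketch — NOT Zoom's implementation.

def build_assist_prompt(transcript: list[str], question: str) -> str:
    """Pack the live call transcript and the agent's question into one prompt."""
    history = "\n".join(transcript)
    return (
        "You are assisting a contact center agent on a live call.\n"
        f"Transcript so far:\n{history}\n\n"
        f"The agent needs help with: {question}\n"
        "Suggest a concise, accurate reply the agent can use."
    )

def ask_claude(prompt: str) -> str:
    """Send the prompt to Claude via Anthropic's SDK (requires an API key)."""
    from anthropic import Anthropic  # pip install anthropic
    client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    response = client.messages.create(
        model="claude-model-name",  # placeholder — substitute a current Claude model
        max_tokens=300,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text
```

The same prompt-building step could feed the self-service role as well, with the customer's own question in place of the agent's.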

Zoom Using Multiple AI Models

Zoom's approach to AI is interesting in that it combines in-house technology with partnerships. Most UCaaS and CCaaS providers have chosen one path or the other, but Zoom is doing both. The company uses AI models from its own research and development group, from other AI companies like OpenAI and Anthropic, and from some of its customers. By incorporating different types of AI models, Zoom can give customers the best service for their specific needs. For instance, a model that excels in speech recognition might be used for transcribing meetings, while one that is good at understanding context might be used for customer service.

Zoom was a late entrant into the contact center, but AI is causing a major disruption in the industry, which will likely reset the leaderboard. By taking a broad and open approach to AI, Zoom should be able to bring in features faster than competitors, as it can pick and choose from a wider pool. This does create extra complexity, as Zoom has to worry about integration, user interfaces, data sets, training consistency, and other factors. I've talked to Zoom leadership about this, and they seem confident they can handle it technically. Their laser focus on keeping the product simple from a user perspective should ensure that complexity is not passed on to the customer.
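The mix-and-match idea above amounts to routing each task to the model best suited for it. Here is a deliberately simple, hypothetical Python sketch of that routing pattern; the model names are invented, not Zoom's:

```python
# Hypothetical task-based model routing, in the spirit of a
# multi-model approach. All model names are placeholders.

ROUTES = {
    "transcription": "speech-model",      # excels at speech recognition
    "meeting_summary": "in-house-model",  # tuned on meeting data
    "customer_service": "context-model",  # strong contextual understanding
}

def pick_model(task: str) -> str:
    """Route a task to a specialist model, defaulting to a general-purpose one."""
    return ROUTES.get(task, "general-model")
```

The hard part in production is everything around this table: consistent data handling, evaluation, and a uniform user experience across the models chosen.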

Zoom in the Months Ahead

For Zoom as a company, the contact center is an important "attach" to its existing install base of meeting customers. Feedback from channel partners is that Zoom will see a significant number of "COVID contracts" (customers who signed three-year deals when the pandemic began) come up for renewal in the next several months. Ideally, Zoom would like to ensure those customers buy the Zoom One suite and then add on contact center. Given the tough competitive landscape in CCaaS, Zoom's ability to showcase differentiated AI is one of the keys to success.

Zoom’s Stellar CMO Retires

One final note: Zoom's Chief Marketing Officer, Janine Pelosi, recently posted on LinkedIn that she is set to retire after an eight-year run at Zoom that can only be described as stellar. If there were a CMO Hall of Fame, Pelosi would be a first-ballot inductee. When she arrived at Zoom in 2015, the company was trying to sell video meetings in a market with many competitors. Her "Meet Happy" campaign, which blitzed airports, arenas, and other venues with the Zoom logo, combined with the pandemic to put Zoom on the map. As a result, the company has achieved something few brands ever do: it is now a verb. During the pandemic, it was common to say, "Let's Zoom later on this" (or some variant), and that has carried over post-COVID. Today, of all the collaboration vendors, Zoom has the most end-user demand (versus corporate IT) because people generally like the product. Meet Happy has become Call Happy, Webinar Happy, Event Happy, and more. In many ways, Zoom changed the world, and Janine Pelosi was a big part of that.

At Think, IBM highlighted how companies could use AI to solve some of their biggest challenges.

Last week IBM held its annual Think event in Orlando, FL. The venue was near “Islands of Adventure,” which I felt was an interesting backdrop as corporate IT has become just that: a bunch of islands of adventure. Between generative AI, ChatGPT, quantum computing and security, businesses are certainly due for a fair share of adventure over the next few years. IBM positions Think as an event that looks into current and future technology and how companies can use this tech to transform their businesses. I’ve now had time to aggregate the content and develop key takeaways.

Key Takeaways from IBM Think 2023

The business use of generative AI will be narrow compared to ChatGPT

Ever since ChatGPT sprang into action, industry watchers and business leaders have wondered what impact it will have on businesses. With AI the focal point of Think 2023, this was a big topic of conversation. The reality is that ChatGPT is a broad tool used by consumers, and that model does not translate directly to businesses. Consider search: billions of consumers use Google for general search every day, but it is not how businesses look for information. Financial services firms use applications like Bloomberg, whereas legal uses LexisNexis. Search is valuable, but only in the narrow context of each industry. Similarly, generative AI and large language models have value, but they need to be applied narrowly for the business world to realize it. To help companies accelerate their use of AI, IBM announced watsonx, which includes foundation models, generative AI, a governance toolkit, and more.

Hybrid cloud is the way forward for most organizations

Public or private? That has become the proverbial question regarding enterprise use of cloud computing. The real answer is both. During his keynote, IBM Chairman and CEO Arvind Krishna cited an IBM Institute for Business Value (IBV) study that found over 75% of IBM's customers plan to leverage a hybrid model. This is consistent with my research, although my data points to about 90% of enterprise-class companies using a mix of public and private. One of the challenges with hybrid cloud, particularly when multiple providers are used, is creating consistency across the different clouds. This is where Red Hat OpenShift can add value, as it creates a logical container layer that makes data and workloads portable across clouds. In public cloud, IBM has been a distant number four to the "big three," but the shift to hybrid should act as a catalyst for IBM to gain ground.

Security needs AI to keep up with threat actors

As long as we have had cybersecurity, the bad guys have had a leg up on the good guys. One of the challenges facing security pros today is the massive amount of telemetry data that needs to be aggregated and analyzed. There is too much for anyone to process manually, so malware often takes months to be detected. At the recent RSA event, IBM Security announced its new QRadar Suite, which includes EDR and XDR, SOAR, and an advanced SIEM, all powered by AI. In the Think Forum, I participated in a demo where the tools were used together to identify a breach, and the SOC engineer was given a recommendation on how to correct it. While people can't make sense of all the data being generated today, machines can, and that's the future of security.

AI and data can close the ESG gap

During the Day 2 keynotes, John Granger, Senior Vice President of IBM Consulting, showed a data point that 95% of executives say their company has an ESG plan, but only 10% of companies have made progress against it. That is a Grand Canyon-size gap. Granger said the biggest barrier to executing on ESG plans is that organizations do not know where to start because of a lack of data. What's needed is accurate data that can be measured, managed, and then acted on to gauge performance and drive accountability. He noted that many consumer services show people the carbon impact of things like delivery options, but we do not have that capability in the business world. That said, companies do have data across a wide range of systems. Hundreds of ESG frameworks are available to companies, but silos of data make things more difficult. Christina Shim, Vice President and Global Head of Product Management & Strategy for IBM Sustainability Software, talked about how AI can ingest, interpret and automate insights from the data in these frameworks. AI will also reduce manual processing by automating the classification, extraction, and validation of thousands of invoices, documents, and other data spanning multiple businesses. Consumers are watching how their brands progress against ESG goals, and companies need to use AI and data to help close the gap.

Contact centers are low-hanging fruit for AI

Where to start with AI? I get asked that a lot by business and IT executives. I always recommend an area that already has accurate KPIs and where a small improvement can have a big payback. That points to the contact center. Today, businesses compete on customer experience, which often starts in the contact center. In the AI, Automation and Data pod inside the Think Forum, IBM had set up a demo of a contact center with Watson Assistant infused into it. It was a game where the player acted as an agent and saw how many inbound inquiries they could solve without AI. The same sequence was then run with AI turned on, letting the AI-powered virtual agent take care of simple tasks like password resets and account balances while the agent handled more complex ones. At the end of the game, the results showed the improvement: without Watson Assistant, only 3% of requests were solved, compared to 65% with it. Businesses looking for a quick win with AI should look to the contact center as a starting point.

Bottom Line: IBM Think 2023

In conclusion, IBM Think 2023 provided valuable insights into the future of technology and its impact on businesses. First, while ChatGPT is a powerful consumer tool, enterprises need to apply generative AI in narrower, purpose-built ways. Second, the hybrid cloud approach is gaining momentum, with organizations recognizing the value of leveraging public and private clouds; Red Hat OpenShift is crucial in achieving consistency across multiple clouds and driving IBM's growth in the cloud market. Third, the symbiotic relationship between AI and security is imperative to combat the ever-evolving threats businesses face, and IBM Security's QRadar Suite uses AI to transform threat detection and response. Fourth, AI and data can bridge the ESG gap, empowering companies to measure, manage, and act on sustainability goals. Finally, the contact center is the low-hanging fruit for AI implementation, improving customer experiences and enabling significant returns on investment.

ZDX measures user experience metrics providing IT teams with the data they need to identify and troubleshoot performance issues.

Security service edge (SSE) integrates security and networking functions into a single cloud-based platform, an approach designed to provide consistent, reliable security across all of an organization's locations and devices. Zscaler is a pioneer in this space and has been recognized as a leader in the Gartner Magic Quadrant for as long as Gartner has published one for the category.

The challenges of digital experience monitoring

While security was the main focus of Zscaler's SSE offering, a few years ago the company added a visibility product called Zscaler Digital Experience (ZDX), which helps customers measure and analyze the performance of business-critical apps, identify issues, get actionable insights, and more. The thesis is sound: security companies must capture and analyze traffic to find threats, and the same data can be mined for performance information. Other network management companies, such as NetScout and Riverbed, have tried this in the past, but both flopped in security, partly because of product shortcomings but more because networking and security had yet to seriously converge. Since Zscaler launched ZDX, other SSE providers have launched visibility products. I recently caught up in person with Javier Rodriguez, Zscaler's director of product management, at the RSA Conference, which took place April 24-27 in San Francisco. Rodriguez discussed the latest release of ZDX and how it brings security and network teams together.

May 9 ZKast with Javier Rodriguez of Zscaler

Here are the highlights of the ZKast interview.

End-to-end visibility with Zero Trust Exchange

ZDX leverages Zscaler’s security platform, Zero Trust Exchange, to provide end-to-end visibility into an organization’s digital environment—from the user’s device to the application server. ZDX measures user experience metrics like response time, page load times, and availability, providing IT teams with the data they need to identify and troubleshoot performance issues. ZDX also provides a comprehensive view of an organization’s internet traffic and usage patterns.
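To illustrate the kind of metrics involved, the sketch below aggregates raw response-time samples into a mean and a 95th-percentile figure, two staples of user-experience reporting. This is purely illustrative and not how ZDX computes its scores:

```python
# Summarize raw latency samples the way a digital experience
# monitoring dashboard typically does: mean and nearest-rank p95.
import math

def latency_summary(samples_ms: list[float]) -> dict[str, float]:
    """Summarize response-time samples (milliseconds) as mean and p95."""
    ordered = sorted(samples_ms)
    mean = sum(ordered) / len(ordered)
    rank = max(0, math.ceil(0.95 * len(ordered)) - 1)  # nearest-rank index
    return {"mean_ms": round(mean, 1), "p95_ms": ordered[rank]}
```

Percentiles matter more than averages here: a healthy mean can hide the tail of slow page loads that actually generates user complaints.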

New security monitoring features

Zscaler uses zero trust not only as an authentication mechanism but also as a modern security monitoring tool. With millions of data points, visibility and metrics are necessary to identify issues. One key feature, called tracing, allows an app to be configured to send additional telemetry and to check policy based on responses.

A growing data set

With over 30 million agents, Zscaler has access to a massive data set that helps identify issues affecting certain apps. This is important because it gives organizations a comprehensive view of their digital experience so they can detect problems quickly. ZDX is Zscaler's fastest-growing product category due to a combination of factors: the need for visibility in the context of security, the challenges of troubleshooting in zero trust environments, the impact of cloud computing and COVID-19, and the importance of quantitative data in positioning the product.
  • Initially, ZDX was primarily used by security operations, but IT operations have also found it useful. Since ZDX provides a detailed view of an organization’s network, apps, endpoints, and Wi-Fi, it’s a helpful tool for both security and IT operations for pinpointing issues. ZDX has different use cases in the enterprise. Examples include monitoring hybrid workplaces and collaboration tools like Zoom and Microsoft Teams, as well as zero trust monitoring, which identifies problems with Wi-Fi or internet connections.
  • ZDX comes in standard and advanced versions. The standard version provides minimal features, enough to understand the performance of apps. The advanced version includes additional features like faster resolution, more alerts, and new artificial intelligence (AI) capabilities that Zscaler recently rolled out to provide IT teams with advanced insights and help them resolve issues faster. The advanced version also collects more data at a higher frequency, making it suitable for organizations with more complex environments.
  • The newly released AI capabilities show how users are experiencing apps, offer solutions for performance issues, and measure the quality of meetings in apps like WebEx. ZDX already works with Microsoft Teams and Zoom. With AI analysis and alerts, IT teams can address user complaints faster and compare good and bad experiences. ZDX also helps troubleshoot devices for remote workers and supports network problem-solving, privacy rules, and tracing of protected apps.
  • Last year, Zscaler announced a partnership with Zoom for unified communications as a service (UCaaS) monitoring. The feature is included in the advanced version and works by integrating with Zoom’s application programming interface (API) to gather additional signals like data latency and loss.
  • Zscaler is launching several other new features: third-party proxy support for complex deployments; AI to make it easier for analysts to detect incidents and troubleshoot; a single-screen dashboard for customers; and support for Webex integration. Additionally, Zscaler is leveraging internet service provider (ISP) insights to understand “last mile” performance. These developments are important for organizations because they provide more visibility and monitoring capabilities to troubleshoot issues and make informed decisions based on performance data.
Watch the entire ZKast interview.
Networking giant Cisco Systems Inc. showed strong execution in most areas in its fiscal third quarter as it posted record revenue of $14.6 billion, up 14% year-over-year and topping estimates, but there’s a lot more behind the headline numbers. Cisco is now creeping up on the $60 billion annual revenue mark, a number that seemed out of reach just a few years ago when revenue appeared stuck just under $50 billion. Looking ahead, Cisco expects to earn between $1.05 and $1.07 per share in the fourth quarter, with revenue expected to grow in the 14% to 16% range, the midpoint slightly ahead of the expectation of $1.03 per share and 14.1% growth. The company also raised its full-year outlook, expecting earnings of $3.80 to $3.82 a share and revenue growth of 10% to 10.5%. The numbers are the numbers, but as I always do, I like to look between them to understand current and future trends. Here are some of the notable points from the quarter and what they mean for Cisco’s future:

Backlog returning to ‘normal’ over time

In the earnings release, Cisco noted that remaining performance obligations are at $32.1 billion, up 6%, with 53% to be recognized as revenue over the next 12 months. This indicates that the supply chain issues are easing, which is something Chief Executive Chuck Robbins (pictured) has repeatedly mentioned on previous calls. One concern I had was that I wasn’t sure if the backlog would fully translate to revenue, as many customers had told me they ordered products from multiple vendors and would cancel once one vendor could meet their demand. Cisco did tell me that order cancellation rates are now below historic levels, and channel partners have confirmed this. The one question that remains is what is a “normal” backlog for a company like Cisco. It won’t be $30 billion, but it likely won’t ever return to pre-COVID levels.

Declining orders do not mean declining demand

Although Cisco put up good numbers, the stock traded down slightly after hours, with the focus seemingly on the metric that orders were down 23%, and this is a good example of looking between the numbers to fully understand what’s going on. In an interview with Barron’s, Chief Financial Officer Scott Herren confirmed several things I had heard in the field. The most notable point is that the relief in the supply chain has shortened product availability times, meaning customers do not need to order 18 months or two years out any longer. Also, the reduction in backlog means Cisco is shipping previously ordered products, and now customers need to take the product in and deploy it before they order more.

Cisco continues to transition to a software company

As has been the trend, all software metrics were up this quarter. Total software revenue is now $4.3 billion, which annualizes to about $17 billion. This was up 18% year-over-year. Annualized recurring revenue is now $23.8 billion, up 6% from a year ago, with product ARR up 10%. Total subscription revenue rose 17%. All of these numbers are important in the continued transformation of Cisco. Although many of the software numbers are tied to hardware, the shift creates revenue predictability and more consistent product upgrades. Cisco also has the opportunity to leverage some best-of-breed assets as Trojan horses that can eventually pull other products through. One reseller I talked to this week told me, “ThousandEyes is an excellent product, and I’d like to see it deployed everywhere.”

Networking is booming

Of all the product areas, networking, or “Secure, Agile Networks” in Cisco-speak, rose a whopping 29%. This is, by far, Cisco’s largest product area and currently comprises 52% of total revenue. This growth is driven by the fact that the network is the foundation for digital transformation. Companies are modernizing compute by moving to the cloud, changing the way people work, which relies on mobility, and connecting everything as part of “smart” initiatives. This mandates network evolution, and Cisco still has the broadest portfolio with the largest share. Looking ahead, I expect to see an acceleration of the network business as the company cleans up its portfolio. In some ways, Cisco has remained successful despite itself. If one looks at the portfolio, Meraki, Catalyst, Nexus, DNA Center, Viptela, and the like, it looks like a hodgepodge of products. At Cisco Live 2022, the company announced the long-awaited Meraki-Catalyst integration, and I expect to see more in this area come Cisco Live 2023 in June.

Security is still flailing

The product area known as “End to End Security” posted revenue of $958 million, representing 2% growth. While growth is good, that pales compared with the growth numbers of companies such as Fortinet Inc. (30%) and Palo Alto Networks Inc. (23%). As I pointed out in my security platform post, Cisco has good products, but it’s a collection of best-of-breed components without a larger end game. This seems to be changing. At the RSA Conference, the company announced its extended detection and response or XDR offering, and I’m hearing rumors that Cisco has another major security announcement set for Cisco Live 2023. If it can grow just a few percentage points in the massive security industry, that will move the revenue needle like no other product category. More coming here, I’m sure.

Collaboration falls in double digits

The collaboration revenue bucket fell 13%, posting revenue of $985 million. I’m sure this disappoints the collaboration business unit, which has loaded Webex with innovation. I use both Microsoft Teams and Cisco Webex regularly and can definitively say Webex has a significant technology and performance advantage over its Redmond-based competitor. Unfortunately, customers haven’t been looking for the best product but have adopted Teams because of the Microsoft license bundles. Over the past year, Cisco has changed its competitive approach with Microsoft and adopted the attitude, “If you can’t beat them, join them.” At the Enterprise Connect event, Cisco demonstrated that its device portfolio now natively integrates with Microsoft Teams. Also, Microsoft’s voice plans are expensive and not enterprise-grade, and many of the unified-communications-as-a-service providers, Cisco included, can sell voice services into Teams, which can help “backdoor” Cisco into accounts. Once it gets a foothold in an account, Cisco can bring the benefits of Webex to other departments. For example, Webex offers better webinar and hybrid-event capabilities, which might appeal to a sales team. The reality is that, although many companies use Teams, it’s often not the only collaboration tool. One of Cisco’s bigger Webex reseller partners told me, “The strategy of winning voice and selling the suite after is starting to work. We are hopeful that this continues to generate new opportunities for us.”

Overall, it was a solid quarter for a company still transitioning into the next wave of itself. The secular trends of artificial intelligence, hybrid work, cloud, the internet of things, and more work in Cisco’s favor, but there is still plenty of work to be done on the product front. As Cisco Live 2023 approaches in less than a month, we should get a good glimpse of what the next year has in store for Cisco customers.

Cisco integrates ThousandEyes and AppD information to provide visibility into user experience.

Cisco recently rolled out a new service called Customer Digital Experience Monitoring, which integrates its application monitoring tool AppDynamics and ThousandEyes network intelligence tools. The integration is bi-directional so that data can be shared between both systems in real-time. This improves the user experience of digital apps, and it also allows different teams within an organization to work together and make faster decisions.

Understanding User Experience is Critical

A good user experience is crucial since organizations rely heavily on digital apps for many business interactions. Therefore, identifying issues in both the apps and the network enables organizations to be more proactive and fix problems before they affect the end user. Cisco’s approach is to gain insights into the app and network performance by leveraging application observability from Cisco AppDynamics and network intelligence from ThousandEyes via this new Customer Digital Experience Monitoring service. The offering utilizes OpenTelemetry—an open-source set of tools for collecting telemetry data from applications—to provide digital experience monitoring by combining application and network vantage points. The bi-directional integration pulls together data from multiple sources, analyzes it in real time, and reduces the time it takes to resolve issues.

Collaboration Between IT Silos

The solution also helps break down silos and reduce friction among different teams within an organization, said Carlos Pereira, Cisco fellow and chief architect, during a recent news briefing on Customer Digital Experience Monitoring. A complete picture of an application’s health and user journeys reduces tool sprawl. It fosters collaboration between infrastructure and operations (I&O), security operations (SecOps), development/security/operations (DevSecOps) teams, and app developers. A challenge most organizations face is ensuring a smooth digital experience for users accessing apps from various devices and locations. Internet connectivity can have a significant impact on the user experience, especially if there is poor connectivity and users cannot access services. That’s where full-stack observability (FSO) adds value. It can help organizations understand and manage the complex connections between users, apps, and the internet.

Customer Digital Experience Monitoring

Customer Digital Experience Monitoring brings together observability and network intelligence in two ways:
  • ThousandEyes sends network metrics in the open telemetry format to AppDynamics, contextualizing the metrics for specific apps. AppDynamics then correlates internet performance with app performance and the user experience, so IT teams can understand if the problem is on the end-user side, networking, or in application performance.
  • AppDynamics shares real-time application dependency mapping information with ThousandEyes, which helps network operators know which networks are being used for which app and if any issues are affecting them. It also significantly reduces mean time to resolution (MTTR) by providing actionable recommendations and prioritizing network remediation based on business impact and criticality.
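The cross-domain correlation described above — deciding whether a degraded session is an end-user, network, or application problem — can be illustrated with a toy triage routine. The thresholds and metric names below are my own illustrative assumptions, not the actual logic or schema of ThousandEyes or AppDynamics.

```python
# Toy sketch of cross-domain triage: given network metrics (ThousandEyes-style)
# and application metrics (AppDynamics-style), decide where a slow user
# experience most likely originates. Thresholds and field names are
# illustrative assumptions, not the products' real data model.

def triage(net: dict, app: dict) -> str:
    """Classify a degraded session as a network, application, or end-user issue."""
    if net["packet_loss_pct"] > 2 or net["latency_ms"] > 150:
        return "network"      # the path to the app is impaired
    if app["server_response_ms"] > 1000 or app["error_rate_pct"] > 1:
        return "application"  # the backend itself is slow or erroring
    return "end-user"         # path and backend look healthy; suspect device/Wi-Fi

# Example: healthy network, slow backend -> an application problem
verdict = triage(
    net={"packet_loss_pct": 0.1, "latency_ms": 35},
    app={"server_response_ms": 2400, "error_rate_pct": 0.2},
)
print(verdict)  # -> application
```

The value of the real integration is that this correlation happens automatically across both tools, rather than an operator eyeballing two dashboards.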
Pereira shared a scenario where Customer Digital Experience Monitoring helped identify an issue with an e-commerce service that didn’t stem from a network problem. In just minutes, the root cause was identified as an application problem, which would otherwise have taken hours to determine without the integrated monitoring capabilities. “From a triage perspective, the problem was detected in less than five minutes. We can perform root cause analysis by just correlating all the domains, and we make this seamless as a workflow that goes across all the tools,” said Pereira. Cisco is offering Customer Digital Experience Monitoring as part of its FSO Advantage package. A second package, FSO Essentials, comes with hybrid app monitoring, modern app monitoring, app security, and Business Risk Observability—a service Cisco added to the package back in February. However, FSO Essentials doesn’t include the real-time network intelligence metrics and application dependency mapping that FSO Advantage provides.

Smartlook and Digital Experience Monitoring

In April, Cisco shared plans to acquire Smartlook, a company specializing in analyzing and contextualizing end-user digital behavior. Through the acquisition, which will be completed in the fourth quarter of FY23, Cisco aims to further enhance its FSO solution with new insights, analytics, and other capabilities related to application and user experiences. Pereira said customers could take advantage of additional digital experience monitoring capabilities by implementing Customer Digital Experience Monitoring in combination with Smartlook’s Real User Monitoring (RUM).

Bottom Line: Leveraging Data

As a Cisco watcher, it’s good to see the company leveraging the data in its AppDynamics solution. The ability to understand how network changes and anomalies impact application behavior enables IT pros to translate technical information into business metrics. Earlier this year, at the Cisco Live EMEA user event, the company announced Business Risk Observability, which uses AppDynamics information to help prioritize security risks. This is part of Cisco’s bigger commitment to creating better integration and interoperability across all its products. Cisco Live US is right around the corner, and I’m fully expecting to see more offerings, like Customer Digital Experience Monitoring, that highlight the Cisco platform advantage.
Veeam Software Inc.’s user event, VeeamON, running May 22-25 in Miami, is an important one because Chief Executive Anand Eswaran (pictured) has been at the helm for almost a year and a half and has now had time to put his fingerprints on the company. Looking back, it has been a successful year for Veeam: the company was named a leader in the 2022 Gartner Magic Quadrant for Enterprise Backup and Recovery Software Solutions for the sixth year running and was positioned highest on the ability-to-execute axis for the third consecutive time. Also, the company achieved the No. 1 share position in IDC’s tracker for Data Replication and Protection. According to IDC, in the second half of 2022, Veeam grew 8.1% year-over-year and reported revenue of just under $700 million, which equates to a 12% share of the overall market. At the same time, Dell shrank about 2%, which puts it at 11.2% share, or $652 million. This is a topic that Dave Vellante, Dave Nicholson and I discussed on TheCUBE at VeeamON 2022, when Dell and Veeam were in a virtual tie for market share. What’s fascinating about the overall market is that the largest share line in IDC’s tracker is “others,” which currently comprises 53% of the overall market. This means there is a tremendous amount of opportunity for any vendor that can drive innovation into the market, with Dell and Veeam moving in opposite directions. Innovation in this area includes both traditional backup and recovery and emerging use cases. Based on that, here are some topics I expect to see at VeeamON 2023:

Innovation in ‘the basics’

There are a lot of adjacent areas for backup and recovery, but it has been a struggle for this industry to make the process of recovering files easy. I’ve often joked that the backup and recovery industry is filled with vendors that are excellent at backing up data but can’t make the recovery process easy, but this is where Veeam shines. Earlier this month, I talked to Eswaran. He told me that the No. 1 thing Veeam customers like about the company is “reliable recovery,” where getting data back is fast and easy, which has been a “rallying cry” for him and the company. The event content will likely reflect this.

Ransomware recovery

There has been no bigger driver of advancing backup and recovery than its role in helping companies recover from ransomware. The Veeam Ransomware Trends report found that in 2022, 85% of organizations had at least one ransomware event or attempted attack. One option is to pay the ransom, and I have talked to plenty of organizations that have done this, but paying doesn’t guarantee the return of data. The only way to ensure one can recover quickly is to employ a “3-2-1-1-0” backup strategy. I believe the 2023 study from Veeam will be released at VeeamON, so we should see where the gaps are.
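For readers unfamiliar with the rule, “3-2-1-1-0” means at least 3 copies of the data, on 2 different media types, with 1 copy offsite, 1 copy offline, air-gapped or immutable, and 0 errors after backup verification. It can be expressed as a simple checklist; the copy records below are a hypothetical data model for illustration, not anything from Veeam’s actual products.

```python
# Minimal sketch of a "3-2-1-1-0" backup-strategy check. The copy records
# are a hypothetical data model, not Veeam's real API or schema.

def meets_3_2_1_1_0(copies: list) -> bool:
    """3 copies, 2 media types, 1 offsite, 1 offline/immutable, 0 verify errors."""
    return (
        len(copies) >= 3
        and len({c["media"] for c in copies}) >= 2
        and any(c["offsite"] for c in copies)
        and any(c["offline_or_immutable"] for c in copies)
        and all(c["verify_errors"] == 0 for c in copies)
    )

copies = [
    {"media": "disk",   "offsite": False, "offline_or_immutable": False, "verify_errors": 0},  # production data
    {"media": "disk",   "offsite": False, "offline_or_immutable": False, "verify_errors": 0},  # local backup
    {"media": "object", "offsite": True,  "offline_or_immutable": True,  "verify_errors": 0},  # immutable cloud copy
]
print(meets_3_2_1_1_0(copies))  # -> True
```

The immutable or air-gapped copy is the one that matters most against ransomware, since the Veeam research shows attackers explicitly target backups.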

The Veeam platform approach

Over the years, Veeam has built a broad set of capabilities that enhance its core offering. In addition to backup and recovery, the company offers monitoring, analytics, and orchestration capabilities. Also, it can back up data from any computing model, including cloud, virtual, on-premises data, local applications, and SaaS. If this were “Lord of the Rings,” Veeam might call this the “one backup platform to rule them all.” At VeeamON ‘22, the company touched on the platform advantage, but this is an area I’m hoping to see more of.

Kubernetes and Kasten

In late 2020, Veeam acquired Kasten to target Kubernetes-native workloads. At the time, Kubernetes was highly hyped but lightly used. Today, the broad use of containers has accelerated the adoption of Kubernetes. One of the challenges of containers is that they are ephemeral in nature, making them difficult to back up. Kasten was a big part of VeeamON 2022, and I’m expecting to see more in this area to help customers safely ramp up the use of Kubernetes.

Cloud backup

Backing up Amazon Web Services, Microsoft Azure and Google Cloud Platform has driven growth for Veeam. One might think cloud services do not need to be backed up since it’s something the cloud provider should be doing, but that’s not the case. Although the cloud providers do offer backup solutions, they tend to be basic in capabilities when compared with those from Veeam. Also, one can’t use AWS to back up Azure or GCP to back up AWS, meaning companies need to manage multiple systems. With Veeam, it’s one platform, one set of policies and a single management console. This is an area of constant innovation, and I expect to see how Veeam has adapted its platform as cloud companies have evolved.
For this year’s Formula 1 Crypto.com Miami Grand Prix, Verizon made a significant investment to boost connectivity speed and improve reliability. The service provider deployed a dedicated private wireless network with high-speed, high-capacity and low-latency connections, as well as reduced interference and improved privacy and security. The event occurred May 5-7 at the Miami International Autodrome (pictured), located at Hard Rock Stadium. To get the network ready, Verizon engineers added more 4G and 5G coverage in different areas of the Autodrome, including the bowl, the back office and the parking areas. The network was able to support various applications at the event, such as ticket scanning, digital sign management, multiple point-of-sale terminals used by merchants, and content upload. For upgraded 5G services, Verizon used two types of radio waves: C-band (mid-band spectrum of cellular broadband network frequencies between 3.7 and 4 GHz) and mmWave (millimeter waves or frequencies starting at 24 GHz and above). C-band provided attendees with extensive wireless coverage, while mmWave handled large amounts of data simultaneously, ensuring that the network didn’t get too slow or crowded. Verizon also worked to increase the capacity of the fiber connections in the area by adding more fiber strands in different parts of Hard Rock Stadium and the west lot. Additionally, Verizon expanded fiber connections outdoors for its distributed antenna system, equipment that provides cellular connectivity in large facilities and arenas. Finally, engineers added more fiber connections for cell sites along the track course to guarantee optimal connectivity for fans watching the race. To handle wireless traffic in the parking area, Verizon introduced network slicing, which divides the network serving the crowd into separate virtual slices, like pieces of a pie.
Each slice can be adjusted individually to better manage wireless traffic by fine-tuning network performance based on where people are and how they use their devices. This approach ensured a smoother and more reliable experience for race fans during the event. Verizon recently made major upgrades to its network in Miami, which included the addition of new cell sites for better coverage, increased capacity of fiber optic cables, and more bandwidth for new services such as wireless internet. According to Verizon, 94% of its customers in Miami now have access to 5G Ultra Wideband, the operator’s highest-performing 5G. Post-race, Verizon provided data on usage. The Miami Grand Prix brought a whopping 270,000 fans to the three-day event, with about one-third using the Verizon wireless network to share their experience. Verizon customers alone generated 42.9 terabytes of data over the three days, up 26% from a year ago. To gain some perspective, that is the equivalent of streaming a full-length movie over 12,000 times. The last Super Bowl generated in the range of 30 terabytes of traffic. That was in a single day, but it’s a good indicator of the volume of traffic. These enhancements are part of Verizon’s broader business strategy, dubbed Business Connected Venue, to invest in 5G at more than 95 large public places — such as sports stadiums and concert halls — across the U.S. With a combination of public/private networks and strong technology partnerships, Verizon wants to help venues enhance the overall experience for everyone attending events. There has been tremendous hype around 5G over the past few years, but service providers have had difficulty finding those “killer” use cases. That has left many to question whether 5G is really transformative or just another solution looking for a problem.
Although I don’t believe there is a single use case that will make 5G a must-have for all companies, I do think 5G, both public and private, enables companies to use a number of digital technologies that were not practical without high-speed wireless. Wireless point-of-sale is an excellent example of this, as I’ve been to venues where the wireless mobile credit card reader was experiencing problems because of a slow network. That can cause a 15- to 30-second delay in purchasing an item. Extrapolate this over a three-hour football game, and it’s easy to understand how a slow network can cost the venue big money. More examples like this are needed to show “what’s possible.” This helps other businesses visualize how they can transform their organizations using 5G.
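The movie-streaming comparison above is easy to sanity-check: 42.9 TB spread across roughly 12,000 streams works out to a per-movie size consistent with a compressed HD film.

```python
# Sanity check of the traffic comparison: 42.9 TB over ~12,000 movie streams.
total_bytes = 42.9e12          # 42.9 terabytes (decimal TB)
streams = 12_000
gb_per_movie = total_bytes / streams / 1e9
print(round(gb_per_movie, 1))  # roughly 3.6 GB per film, a plausible HD movie size
```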

Secure access service edge (SASE) deployments have seen strong momentum thanks to increased complexity in managing networks and dealing with security threats.

Digital businesses are more reliant on their networks than ever before. Technologies that enable digital transformation, such as IoT, mobility, and cloud, are all network-centric, and that has raised the bar on the value of the network. My research shows that almost two-thirds of business leaders believe the network is more valuable today than it was five years ago. That said, today’s organizations face increased complexity in managing networks and dealing with security threats. We live in a world where everything is connected, and it’s up to network operations to manage traditional connectivity and connections to people’s homes and a growing number of things such as kiosks, autonomous machines, and others. In the current business environment, organizations are increasingly adopting cloud services while maintaining some on-premises applications. The rise of remote work due to the COVID-19 pandemic has accelerated this trend, leading to more security threats and network management challenges. To tackle these challenges, the industry has recognized the need for a unified approach to networking and security, which is why secure access service edge (SASE) deployments have seen such strong momentum. In fact, SASE was one of the hot topics at the RSA 2023 show, as security vendors are trying to align themselves with networking and vice versa. In concept, SASE management addresses these challenges by combining software-defined wide area networks (SD-WAN) with security. Delivered as a single-vendor, cloud-based service, SASE management allows businesses to protect their networks and digital resources while simplifying the management process. I used the term “in concept” because there are many options for customers, but customers typically need to pick and choose networking and security components from different providers, bring them together, and correlate the information and policies manually.
The complexity of doing this opposes the simplicity SASE is supposed to bring. Managing SASE requires two disciplines that have historically not been tied together: making wide area networks more efficient (SD-WAN) and ensuring strong network security. This combination needs to be delivered as a single service via the cloud. Additionally, SASE management should make it easy to apply rules and policies across the entire network for all users and devices. This can be difficult for companies that want to take a “best of breed” approach, as the correlation of data and integration of services and policies are typically done manually. Verizon’s SASE solution combines its managed network and security services to create a closed-loop system that involves zero-trust networking, visibility and reporting on security threats, and improved latency and performance for accessing applications. The provider has partnered with Versa Networks and Cisco for the SD-WAN, plus Zscaler and Palo Alto Networks for security, to create an integrated offering that includes a single portal view for SD-WAN and SASE deployments. The deployments are managed by one network operations center (NOC) and security operations center (SOC), potentially lowering costs and simplifying operations. “There isn’t yet a single vendor or partner with a technology stack that does both SD-WAN and security service edge (SSE)-like services to meet the SASE vision. As such, Verizon is focused on a couple of combinations for our customers using top-tier SD-WAN solutions and top-tier SSE solutions,” said Vinny Lee, Verizon’s product development director, during a recent webinar hosted by the provider. A networking element that plays a key role in SASE management is network as a service (NaaS). In fact, Verizon views SASE as part of the evolving NaaS narrative.
NaaS provides organizations with a flexible, programmable, and scalable way to manage networks, combining different levels of service, access types, and service-level agreements (SLAs). Alternatively, traditional SD-WAN solutions often require a combination of internet-based and private-based access technologies in order to achieve better performance and reliability. Verizon aims to build a “single pane of glass for the SASE digital experience,” which will eventually expand into a broader NaaS offering, said Lee. This would allow customers to see all the services they have procured from Verizon in one location and understand how they work together, creating a holistic experience for customers. Verizon's NaaS strategy involves securing, connecting, and managing all aspects of the network, including incident management and monitoring connectivity tunnels. Verizon is also addressing the increasing use of wireless local area networks (LANs) and 5G networks. The provider is offering customers the option to access its network with any type of last-mile access, including broadband, fixed wireless access, and 5G using what Verizon calls a “secure hybrid network” service. This allows customers to get into Verizon’s core private internet protocol (PIP) network even if they’re using other providers, including global providers in areas like the Asia-Pacific (APAC). Lee shared two use cases that demonstrate how customers are utilizing SASE management. The first customer, a security solutions company based in Europe, acquired Verizon’s SASE management package as part of a total NaaS solution. The customer embarked on a complete network transformation, which included implementing various security components. The second customer, in healthcare, focused on transitioning to a traditional SD-WAN, integrating Palo Alto Prisma and leveraging Verizon’s NOC incident ticket handling and policy management services. 
This customer sought a single vendor to deploy the complex solution and achieve overall cost reduction. SASE management from Verizon is available in three packages, with the price based on the number of users. Each package provides integrated security management support for a defined feature set.
  • The first package is multi-vendor SASE management, which includes change management and incident management for a set of basic security features.
  • The second one is multi-vendor SASE management plus, which includes features available under multi-vendor SASE management, change management and incident management with enhanced security options.
  • Package three is multi-vendor SASE management preferred, a complete service package that includes all the features in the other packages in addition to managed detection and response—Verizon’s security as a service (SECaaS) offering.
Verizon launched SASE management and its advanced SASE offering in early 2023, which included the combination of Cisco and Versa for SD-WAN and Zscaler and Palo Alto Prisma for security. This bundled solution serves as a standard offering for customers. According to Lee, Verizon plans to introduce a single-vendor solution through Versa later this year, combining the security and SD-WAN components to provide a reliable, end-to-end solution for customers seeking a unified experience.

Managing the network from the edge offers performance and data sovereignty benefits.

Networking vendor Extreme Networks is holding its user event, Connect, this week in Berlin, Germany. At the conference, the company announced ExtremeCloud Edge, which brings Extreme’s network management capabilities to the network edge. This includes network operations as well as analytics, AI-infused functions, and networking applications. Like most network vendors today, Extreme offers cloud management via its ExtremeCloud IQ portal. For customers that prefer to keep the management functions on-premises, Extreme has a private cloud version of ExtremeCloud IQ as well. Now the company has added an edge option, which I believe makes it the first to offer this type of solution.

Edge Addresses Latency and Data Sovereignty

The use case for management at the edge is for organizations with latency-sensitive requirements, the most obvious of which is artificial intelligence. AI is becoming a bigger part of network operations, and customers can run Extreme’s AI application, CoPilot, from the edge. Not all customers would need to do this, but customers such as retailers who need to make real-time decisions about the network could benefit. The other advantage of running from the edge is data sovereignty, which has become a more significant issue in Europe since the war in Ukraine began. Running network operations from the edge allows customers to benefit from a cloud operating model while keeping data in the country. ExtremeCloud Edge will be made available in the summer of 2023 for select partners and includes ExtremeCloud SD-WAN, Extreme Intuitive Insights, and the previously mentioned ExtremeCloud IQ. The rest of the company’s application portfolio will be made generally available (GA) in early 2024. The company also plans to make the platform available to certified partners for ecosystem solutions. The edge would be ideally suited for applications such as IoT management, video analytics, and retail operations. Extreme has a large footprint with sports teams through partnerships with the NFL, NHL, MLB, and other organizations, and stadium analytics would be another good use case for analyzing network data at the edge.

New Hardware Platforms Announced at Extreme Connect

In addition to ExtremeCloud Edge, the company announced several new network products, including the following:

AP3000

The AP3000 is a low-power, small-form-factor Wi-Fi 6E access point designed for environments where power consumption is an issue. The AP draws only 13.9W, significantly less than the 25-30W many APs draw today, which means companies can power it with PoE (15W) instead of upgrading to PoE+ (30W). The device has an option for external antennas and an extended temperature range, making it suitable for freezers or hot climates.

7520 and 7720 Universal Switches

These new products are for a high-performance network core or aggregation point. The 7520 is designed for 1/10/25Gb server and top-of-rack (ToR) deployments within data centers and wiring closets. The 7720 lets customers address higher-speed core switching needs with up to 32 x 100Gb ports and can consolidate up to eight different aggregation and core switch lines from previous generations into a single family.

Extreme 8820 Switch

The 8820 is a high-density, fabric-enabled switch for large-scale environments. The new switch brings Extreme's Universal Platforms to large enterprises and service providers and can be used in a data center as a border leaf or spine switch. The 8820 will be available in 40 x 100Gb or 80 x 100Gb (QSFP28) configurations, with ports that can split to 4 x 25/10Gb, yielding 80 x 40Gb, 144 x 25Gb, or 144 x 10Gb configurations; the 8820-40C splits to 40 x 40Gb, 72 x 25Gb, or 72 x 10Gb configurations.
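To make the AP3000's power math concrete, here is a small illustrative sketch (my own, not an Extreme tool) that checks an access point's draw against the per-port PoE budgets cited above:

```python
# Illustrative arithmetic: which PoE standards can power a given access
# point, using the approximate per-port budgets cited in the article.
POE_BUDGETS_W = {
    "PoE (802.3af)": 15.0,   # ~15W per port
    "PoE+ (802.3at)": 30.0,  # ~30W per port
}

def poe_options(ap_draw_w):
    """Return the PoE standards whose budget covers an AP drawing ap_draw_w watts."""
    return [std for std, budget in POE_BUDGETS_W.items() if ap_draw_w <= budget]

print(poe_options(13.9))  # AP3000: fits standard PoE, no PoE+ upgrade needed
print(poe_options(27.0))  # a typical 25-30W Wi-Fi 6E AP requires PoE+
```

The point of the low 13.9W draw is the second case: most Wi-Fi 6E APs force a switch upgrade to PoE+, while the AP3000 does not.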

Universal Hardware Gives Customers Choice and Minimizes Risk

Extreme's Universal Hardware architecture enables customers to purchase one set of hardware while retaining flexibility in how the switches are configured and managed. For example, a customer could initially run the network in a traditional networking mode with on-premises management to minimize disruption to the business. Later, once it has tested fabric operations, it can switch to that model and even migrate to a cloud-managed solution without having to replace hardware. This lets customers evolve the network at a pace they are comfortable with.

Bottom Line: Catching Its Stride

Extreme has caught its stride over the past year. The company pursued an aggressive acquisition strategy in which it rolled up network assets from Avaya Networking, Brocade, Aerohive, and others. This created many challenges as the company looked to rationalize the portfolio and consolidate software platforms while shifting to a cloud management model. The pandemic also posed a problem, as supply chain issues created a large sales backlog, but over the past 12 months the company has executed consistently, leading to an uplift in its stock price. With much of the messy work behind it, Extreme can focus more on innovation, much of which is on display at Connect Berlin this week.
One of the major themes from last week's RSA Security conference was the rise of converged network and security platforms, as organizations look to consolidate the number of vendors they have and leverage the ubiquity of the network. During my interview on theCUBE from RSA, I mentioned how chief information security officers are starting to understand that perceived best-of-breed does not lead to best-in-class threat protection. In fact, most security professionals I talk to describe the difficulty of maintaining policies across dozens of security vendors, which has finally led organizations to rationalize down to a few platforms.

That said, it's important to define what a platform is and is not. Within the context of security, a platform has multiple products tied together with common telemetry. That latter point enables vendors to accomplish two things: first, to configure or change a policy once and push it out across the environment; second, faster threat detection and response, because the platform can see more security events and identify where a breach occurred. Regarding product capabilities, I believe a vendor needs at least cloud, network, and endpoint security to be a viable converged platform, as that gives an end-to-end view of the threat landscape. Given that definition, here is how I rank the security platform companies; vendors are listed alphabetically within each tier rather than ranked against one another:

The A-list

Fortinet

Because FortiGate is so widely deployed, the company is best known as a firewall vendor. What many don't know is that it offers an incredibly broad portfolio that spans everything from Wi-Fi to endpoint security to zero trust, plus a software-defined wide-area network solution that seemingly came out of nowhere. Fortinet's "secret sauce" is its homegrown silicon, which gives the company a consistent set of features with the industry's best price/performance. More importantly, the silicon provides the common telemetry for rapid threat identification and resolution. The downside of this approach is that it can make acquisitions difficult, since the capabilities of the purchased company need to be ported to the silicon. That can prevent Fortinet from being first to market, but that has never been its game.

Palo Alto Networks

The biggest security vendor by revenue is also best known as a firewall vendor. The company has done a great job of assembling a broad product line through acquisition, complemented by organic development. Of all the security vendors, Palo Alto has been the loudest in articulating the value of platforms. For a while, it was fair to say its marketing was well ahead of its product capabilities; today, its Cortex XDR SOC tool is built on its end-to-end platform and provides customers with automated detection and response.

The watch list

Cisco Systems

On paper, Cisco should be one of the most potent security platforms, particularly now that networking, Cisco's area of dominance, is converging with security. The company has a number of excellent security products, including Kenna, Talos, Duo, Umbrella and AnyConnect, the most widely deployed endpoint client. This gives Cisco tremendous potential, but it remains just potential until Cisco can tie those products together; in reality, Cisco Security has never been what it could be. However, I do think the tide is changing. Over the past month, I've met with Cisco Security leadership on a number of occasions, and they are laser-focused on bringing the security capabilities together and creating a much better experience for customers. At RSA, the company launched its XDR solution and now has a unified policy engine across all its firewall form factors. The company is well aware that security presents its most significant "needle-moving" growth opportunity, and I believe this will be Cisco's top focus area for innovation for the foreseeable future.

Microsoft

Microsoft is approaching security like it does other markets – with bundles. The E5 license is loaded with many "good enough" products, the exception being Defender, which is best in class when running on Windows. What holds Microsoft back is that it doesn't do cross-platform very well. For example, Defender has far more instrumentation on Windows than on Macs. Also, maintaining Microsoft security is highly complex and requires a number of consoles, with manual correlation across them.

Zscaler

This might be the most interesting security company today. The company came out of nowhere and has been the most vocal evangelist for shifting security to the cloud. Technically, Zscaler wouldn't fall into the converged network and security platform category, as it doesn't do network security – at least not the traditional way. Instead, it has taken a fundamentally different approach by routing traffic directly to the internet and the Zscaler cloud, eliminating the potential for lateral threats. Initially, I was skeptical of this approach, but I've talked to enough of its customers to know the model works, and the company should close the year at more than $1.5 billion in revenue, so I've included it on my watch list.

Could sneak up on people

Arista Networks

The high-performance network vendor has maintained its mission of building products on a single operating system with common telemetry across its portfolio. Initially, the data was held individually on each switch, but when Arista rolled out its CloudVision management portal, it introduced a data lake that aggregates the information. In 2020, it acquired Awake Security, which brought network detection and response, and since then it has quietly been rolling out more security capabilities. Given the size of the security market, I expect Arista to continue building security capabilities with an eye toward eventually becoming a platform. The company never bites off more than it can chew, but when it targets a market, it typically succeeds.

CrowdStrike Holdings

CrowdStrike is arguably the leading endpoint protection company, with a growing cloud security presence. Its lack of network security would exclude it from being a converged platform by my definition above. However, its large installed base and nearly $30B market cap make it a formidable security company with the resources to acquire or build the missing capabilities. Most of the vendors on this list have historical strength in networking; CrowdStrike could take the opposite approach.

Juniper Networks

Security has been a core offering for Juniper since it acquired NetScreen in 2004. However, the company spent many years integrating ScreenOS with Junos, Juniper's operating system, and the security business has lost momentum since then. Since the arrival of Chief Executive Rami Rahim and the acquisition of Mist, the company has doubled down on the enterprise. It has an opportunity to couple the advanced AI capabilities from Mist with security, which could act as a strong differentiator for Juniper.

VMware

The virtualization leader has built a strong network portfolio but currently has limited security capabilities, choosing instead to partner with many security companies. At a recent cloud analyst day, I asked VMware leadership about its security aspirations, and the company currently does not want to compete with the mainstream security platform vendors. Instead, it is adopting a "better together" strategy in which its technology is used to secure VMware deployments.

What about…


The Google security story is similar to how the company approaches everything: it's a little bit Amazon Web Services and a little bit Microsoft but, in reality, just confusing. Microsoft has thrown down the gauntlet and wants to be a platform. Conversely, AWS has provided a platform for security vendors to run on and would rather partner with them. When Google acquired Mandiant, I thought the company was heading down the platform path, but it has chosen to be part platform and part enabler, which in reality means it's neither.
