What to do
The report strongly recommends that businesses take a more proactive approach to ransomware. Given the high probability of cyberattacks and the significant data loss that can occur with each attack, organizations should place a high emphasis on both preventing cyberattacks and preparing effective recovery strategies. Specifically, Veeam advises businesses to maintain clean backup copies and regularly verify their recoverability as part of their risk management strategy. Other recommendations include the use of “staged restorations” to gradually bring back data and prevent system re-infection during recovery. This is important because if infected data is restored, a second ransom event will likely occur. Lastly, implementing hybrid IT architectures can help organizations with their overall disaster recovery strategy by recovering servers to different platforms.
One recommendation I would like to add is that backup and recovery funding and policy should be set in conjunction with the security team. Historically, backup and recovery has been poorly funded because no one cares about it until it’s an issue. Security, on the other hand, is a top area of focus, as everyone, including business leaders, is concerned about a breach. All of the money put into cyber protection is aimed at preventing a breach. Zero trust, security information and event management, security orchestration, automation and response, extended detection and response, next-generation firewalls and other tools each protect the company differently. None of this accounts for the worst-case scenario: a breach occurs, data is encrypted and a ransom is demanded. At that moment, backup and recovery will be put to the test. If it has been well-funded, tested and retested, data can be recovered quickly and the ransom ignored. If not, well, the data points in the Veeam report highlight what happens. CISOs and chief information officers must work together to ensure that data protection, backup and recovery are all on the same page.
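To make the staged-restoration recommendation concrete, here is a minimal Python sketch of the control flow: restore each backup into an isolated staging area, scan it, and promote it to production only if it comes back clean. The helper functions are hypothetical stand-ins for whatever restore and scanning tooling an organization uses, not Veeam APIs.

```python
# Illustrative staged-restoration workflow (not a Veeam API); the helpers
# below are hypothetical stand-ins for real restore and scanning tooling.

def restore_to_staging(backup_id: str) -> str:
    """Restore a backup into an isolated staging environment; return its path."""
    return f"/staging/{backup_id}"

def scan_for_malware(path: str) -> bool:
    """Run an AV/EDR scan over the restored data; True means clean."""
    return True  # replace with a real scanner integration

def promote_to_production(path: str) -> None:
    """Copy verified data from staging into production."""
    print(f"promoting {path} to production")

def staged_restore(backup_ids: list[str]) -> None:
    for backup_id in backup_ids:
        staged_path = restore_to_staging(backup_id)
        if scan_for_malware(staged_path):
            promote_to_production(staged_path)
        else:
            # Never promote infected data; it could trigger a second ransom event.
            print(f"backup {backup_id} failed the scan and was quarantined")

if __name__ == "__main__":
    staged_restore(["db-2023-06-01", "files-2023-06-01"])
```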
Ransomware, Kubernetes, and security were just some of the key themes at the Veeam 2023 conference.

Simplicity Wins
Like many vendors, Veeam will give you a laundry list of technical advantages if you ask about its differentiators. While Veeam does have a leg up in several areas, such as Kubernetes, its big differentiator is that the product is easy to use, particularly in data recovery. I’ve often said that this industry is filled with vendors that do a great job of backing up data, but the recovery process is slow and error-prone. Veeam CTO Danny Allan echoed this during his keynote when he stated that backup was pointless without the ability to recover, and Veeam does that better than anyone. At the event, Kim LaGrue, CIO of the City of New Orleans, talked about her experience with Veeam. I asked her post-keynote why she chose Veeam, and she said the operator console was easy to use and intuitive, making recovering files fast and easy. During his keynote, Veeam CEO Anand Eswaran cited an IDC study that found that Veeam recovers data from AWS, Azure, and GCP 5x faster than any other solution, which translates into significant operational savings as well as better business continuity.
Backup and Recovery Combats Ransomware
How to handle ransomware? That is certainly the question for many organizations today. Some companies have a policy to pay the ransom, particularly if they have good insurance. Others may keep Bitcoin on hand to pay when their organization is hit. Still others choose not to pay and deal with the consequences. At the event, Veeam released its ransomware report, which showed that insurance companies paid 77% of ransoms, but 74% of companies saw an increase in premiums, and 43% saw their deductible go up. Relying on insurance is becoming increasingly expensive and may only be a viable route for a short while longer. The best approach is to have a proven and tested backup and recovery strategy that can quickly restore the organization’s information and get things back to normal operations. During her time on stage, New Orleans’ LaGrue talked about how, by using Veeam, the city can now recover its full data set in a day or two, removing any advantage a fraudster may have when seeking a ransom payment. One important point is that when data is recovered, it must also be analyzed and cleaned so as not to reinstate the initial cause of the breach, which could cause another ransom event. In sum, backup and recovery preparedness is the best, fastest, and most cost-effective way to combat ransomware.
Security Operations Should Focus on Backup and Recovery
Cybersecurity is going through its own modernization process. Companies are implementing zero trust, SSE, multi-factor authentication, XDR, SOAR, and other technologies to prevent breaches. Yet no matter how good the technology is and how smart the engineers are, breaches happen, and the question is, what happens then? The security team should ensure that if a threat does slip through all the cyber protection and the company is breached, the right backup and recovery solution is in place to ensure operations can be restored quickly. Typically, backup would fall under the CIO, and security would be the responsibility of the CISO, but these organizations must work together. Closer alignment between those teams would also open a new set of technology partners for Veeam and the backup and recovery industry, so I’m hoping to see more from Veeam here.
Kubernetes is the Next Frontier for Veeam
At VeeamON, the company announced version 6.0 of its Kasten K10 product, which is used for Kubernetes data protection. New features in the latest version include the following (an illustrative policy sketch follows the list):
- Enterprise-grade ransomware protection for Kubernetes via suspicious activity detection capabilities, plus immutable backups that enable instant recovery. The new release also extends threat detection capabilities by logging all events into Kubernetes Audit natively.
- Scale and efficiency improvements. The new version includes an application fingerprinting feature that enables newly deployed stateful applications to be automatically mapped to specific blueprints to achieve proper data consistency. This can reduce risk and minimize complexity, allowing for faster scaling of the environment.
- Cloud-native expansion. Kasten K10 now supports Kubernetes 1.26, Red Hat OpenShift 4.12, and Amazon RDS, allowing for better interoperability. Veeam also added hybrid support on Google Cloud and cross-platform restore targets for VMware Tanzu and Cisco Hybrid Cloud.
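For readers less familiar with Kubernetes-native data protection, the sketch below shows, in the roughest terms, how a declarative backup policy for a labeled set of workloads might be expressed. It is a hypothetical, Kubernetes-style structure built as a plain Python dictionary; the field names follow common custom-resource conventions and are not taken from the Kasten K10 schema.

```python
# Illustrative only: a Kubernetes-style custom resource for a backup policy,
# expressed as a Python dict. Field names approximate common conventions and
# are NOT the actual Kasten K10 schema.
import json

backup_policy = {
    "apiVersion": "config.example.io/v1alpha1",   # hypothetical group/version
    "kind": "BackupPolicy",
    "metadata": {"name": "daily-app-backup", "namespace": "payments"},
    "spec": {
        "frequency": "@daily",
        "retention": {"daily": 7, "weekly": 4, "monthly": 12},
        "selector": {"matchLabels": {"app": "payments-api"}},
        "immutable": True,   # write-once copies support ransomware recovery
    },
}

print(json.dumps(backup_policy, indent=2))
```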
Veeam Helps with Cloud Repatriation
During his keynote, CTO Allan talked about Veeam’s core tenets, one of which is data freedom. The Veeam platform was designed to enable customers to back up data in one place and restore it elsewhere, which is necessary for disaster situations. As an example, in the event of a natural disaster, a company may choose to restore its private cloud to a public cloud temporarily while the physical facility is not available. Over the past year or so, I’ve talked to many companies that have moved data and workloads to the cloud, but the cost of the service has grown to the point where they want to bring them back on-prem. Veeam’s ability to move data from one environment to the other can provide customers with a fast and cost-effective way of repatriating workloads and data back in-house. On an earlier call with analysts, Danny Allan stated that Veeam is and will remain, unapologetically, a backup and recovery vendor, and that seems to be serving the company well. Too often, tech vendors want to jump on the latest bandwagon, and that derails them from their core competency. Even in ransomware recovery, Veeam is steadfast in its statement that it’s not a security company, although it can play a key role in recovering from a breach. As I stated earlier, backup and recovery are hard to do well. Veeam’s focus on being the best at what it does, regardless of compute model or data type, has served its customers well – and will continue to – as distributed organizations put more data and workloads in more places.
By integrating Anthropic's Claude into its platform, Zoom plans to boost efficiency and deepen its use of AI in the contact center.

Zoom this week announced a partnership with artificial intelligence system developer Anthropic. As part of the partnership, Zoom will invest in Anthropic and collaborate with the company to improve Zoom’s use of AI. The exact amount Zoom Ventures invested in Anthropic was not disclosed. The venture arm of Zoom has historically been very active in investing in companies it partners with. Other investments that turned into technology partnerships include Neat, Mio, Observe.ai, DTEN, and Theta Lake. This provides a strong pool of companies that can add to the Zoom ecosystem, with the possibility of a future financial return.
AI in the Zoom Contact Center
Zoom plans to integrate Anthropic’s AI, specifically its Claude virtual assistant, into the Zoom Contact Center platform, which includes Virtual Agent and Workforce Management. The strategic goal of the integration is to enable Zoom customers to improve service to their own customers by making contact center agents more efficient, while also advancing Zoom’s approach to using AI to enhance the customer experience. Claude is based on Anthropic’s Constitutional AI model. In the context of the Zoom Contact Center, Claude will serve two primary roles: it will assist customer service agents by helping them find answers to customer inquiries or problems, and it will act as a self-service tool so customers don’t have to wait for an agent.
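In the most schematic terms, the two roles look like the toy Python sketch below: the same assistant either answers the customer directly (self-service) or drafts a reply for a human agent to review (agent assist). The `ask_assistant` function is a hypothetical placeholder, not Zoom’s or Anthropic’s API.

```python
# Hypothetical sketch of the two assistant roles in a contact center:
# self-service for customers, and assist mode for human agents.

def ask_assistant(prompt: str) -> str:
    """Placeholder for a call to an AI assistant; returns a canned answer here."""
    return f"Suggested answer for: {prompt}"

def handle_inquiry(question: str, agent_available: bool) -> str:
    if not agent_available:
        # Self-service: the assistant answers the customer directly.
        return ask_assistant(f"Answer this customer question: {question}")
    # Agent assist: the assistant drafts an answer the human agent can review.
    draft = ask_assistant(f"Draft a response an agent can edit: {question}")
    return f"[for agent review] {draft}"

print(handle_inquiry("How do I reset my password?", agent_available=False))
print(handle_inquiry("Why was I billed twice?", agent_available=True))
```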
Zoom Using Multiple AI Models
Zoom’s approach to AI is interesting, as it combines in-house developed technology and partnerships. Most UCaaS and CCaaS providers have chosen to go down one path or the other, but Zoom is doing both. The company uses AI models from its own research and development group, from other AI companies like OpenAI and Anthropic, and from some of its customers. By incorporating different types of AI models, Zoom can give customers the best service based on their specific needs. For instance, a model that excels in speech recognition might be used for transcribing meetings, while another that is good at understanding context might be used for customer service. Zoom was a late entrant into the contact center, but since AI is causing a major disruption in the industry, that will likely reset the leaderboard. By taking a broad and open approach to AI, Zoom should be able to bring in features faster than competitors, as it can pick and choose from a wider pool. This does create an extra level of complexity, as Zoom has to worry about integration, user interfaces, data sets, training consistency, and other factors. I’ve talked to Zoom leadership about this, and they seem confident they can do this technically. Their laser focus on keeping the product simple from a user perspective should ensure that complexity is not passed on to the customer.
Zoom in the Months Ahead
For Zoom as a company, the contact center is an important “attach” to its existing install base of meeting customers. Feedback from channel partners is that Zoom will see a significant number of “COVID contracts” (that is, customers who signed three-year deals when the pandemic began) come up for renewal in the next several months. Ideally, Zoom would like to ensure those customers buy the Zoom One suite and then add on the contact center. Given the tough competitive landscape in CCaaS, Zoom’s ability to showcase differentiated AI is one of the keys to success.
Zoom’s Stellar CMO Retires
One final note: Zoom’s Chief Marketing Officer, Janine Pelosi, recently posted on LinkedIn that she is set to retire after an eight-year career at Zoom that can only be described as stellar. If there were a CMO Hall of Fame, Pelosi would be a first-ballot inductee. When she arrived at Zoom in 2015, the company was trying to sell video meetings in a market with many competitors. Her “Meet Happy” campaign, which blitzed airports, arenas, and other venues with the Zoom logo, combined with the pandemic to put Zoom on the map. The result is that the company has achieved something few brands ever do: it is now a verb. During the pandemic, it was common to say, “Let’s Zoom later on this” (or some variant), and that has carried over post-COVID. Today, of all the collaboration vendors, Zoom has the most end-user demand (versus corporate IT demand), as people generally like the product. Meet Happy has become Call Happy, Webinar Happy, Event Happy, and more. In many ways, Zoom changed the world, and Janine Pelosi was a big part of that.
At Think, IBM highlighted how companies could use AI to solve some of their biggest challenges.

Key Takeaways from IBM Think 2023
The business use of generative AI will be narrow compared to ChatGPT
Ever since ChatGPT sprang into action, industry watchers and business leaders have wondered what impact it will have on businesses. With AI being the focal point of Think 2023, this was a big topic of conversation. The reality is that ChatGPT is a broad tool used by consumers, and that model does not apply directly to businesses. Consider search. Google is used for general search by billions of consumers every day. However, it is not the way businesses look for information. Financial services firms use applications like Bloomberg, whereas legal uses LexisNexis. Search is valuable, but only in the narrow context of that industry. Similarly, generative AI and large language models have value, but they need to be applied narrowly for the business world to realize value from them. To help companies accelerate their use of AI, IBM announced WatsonX, which includes foundation models, generative AI, a governance toolkit, and more.
Hybrid cloud is the way forward for most organizations
Public or private? That has become the proverbial question regarding the enterprise use of cloud computing. The real answer is both. During his keynote, IBM Chairman and Chief Executive Officer Arvind Krishna mentioned an IBM Institute for Business Value (IBV) study that found that over 75% of IBM’s customers plan to leverage a hybrid model. This is consistent with my research, although my data pointed to about 90% of enterprise-class companies using a mix of public and private. One of the challenges with hybrid cloud, particularly when multiple providers are used, is creating consistency across the different clouds. This is where Red Hat OpenShift can add value, as it creates a logical container layer that makes data and workloads portable across clouds. With public cloud, IBM has been a distant number four to the “big three,” but the shift to hybrid should act as a catalyst for IBM to gain ground.
Security needs AI to keep up with threat actors
As long as we have had cybersecurity, the bad guys have had a leg up on the good guys. One of the challenges facing security pros today is the massive amount of telemetry data that needs to be aggregated and analyzed. There is too much for anyone to process manually, so malware often takes months to be detected. At the recent RSA event, IBM Security announced its new QRadar Suite, which includes EDR and XDR, SOAR, and an advanced SIEM, all powered by AI. In the Think Forum, I participated in a demo where the tools were used together to identify a breach, and a recommendation was given to the SOC engineer on how to correct it. While people can’t make sense of all the data being generated today, machines can, and that’s the future of security.
AI and data can close the ESG gap
During the Day 2 keynotes, John Granger, Senior Vice President of IBM Consulting, showed a data point where 95% of executives say their company has an ESG plan, but only 10% of companies have made progress against it. That is a Grand Canyon-sized gap. Granger discussed how the biggest barrier to executing on ESG plans is that organizations do not know where to start because of a lack of data. What’s needed is accurate data that can be measured, managed, and then acted on to gauge performance and drive accountability. He mentioned how, with many consumer services, people are often shown the carbon impact of things like delivery options, but we do not have that capability in the business world.
With that being said, companies do have data across a wide range of systems. There are currently hundreds of ESG frameworks available to companies, but silos of data make things more difficult. Christina Shim, Vice President and Global Head of Product Management & Strategy for IBM Sustainability Software, talked about how AI can ingest, interpret, and automate insights from the data in these frameworks. Artificial intelligence can also reduce manual processing by automating the classification, extraction, and validation of thousands of invoices, documents, and other data spanning multiple businesses. Consumers are watching how their brands progress against ESG goals, and companies need to use AI and data to help close the gap.
Contact centers are low-hanging fruit for AI
Where to start with AI? I get asked that a lot by business and IT executives. I always recommend an area that currently has accurate KPIs and where a small improvement can have a big payback. This points to the contact center. Today, businesses compete based on customer experience, which often starts in the contact center. In the AI, Automation and Data pod inside the Think Forum, IBM had set up a demo of a contact center with Watson Assistant infused into it. It created a game where the player would act as an agent and see how many inbound inquiries they could solve without AI. It then ran the same sequence with AI turned on, enabling the AI-powered virtual agent to take care of simple tasks like password resets and account balances and allowing the agent to handle more complex ones. At the end of the game, the results were shown so you could see the improvement. Without Watson Assistant, only 3% of requests were solved compared to 65% with Watson Assistant. Businesses looking to win quickly with AI should look to the contact center as a starting point. Here is a how-to resource for building a chatbot platform quickly and easily.
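As a back-of-the-envelope way to see why deflecting simple tasks moves the resolution number so dramatically, here is a small, hypothetical Python calculation. The task mix and agent capacity are invented inputs chosen only so the toy model lands near the demo’s 3% and 65% figures; they are not IBM’s data or model.

```python
# Toy model: a virtual agent deflects simple requests (password resets,
# balance checks), while human agents handle what their capacity allows.
# All inputs are invented for illustration.

inquiries = {"simple": 620, "complex": 380}   # hypothetical hourly volume
agent_capacity = 30                            # requests humans can resolve per hour

def resolution_rate(ai_enabled: bool) -> float:
    total = sum(inquiries.values())
    solved_by_ai = inquiries["simple"] if ai_enabled else 0
    remaining = total - solved_by_ai
    solved_by_agents = min(agent_capacity, remaining)
    return (solved_by_ai + solved_by_agents) / total

print(f"without AI: {resolution_rate(False):.0%}")  # agents alone -> ~3%
print(f"with AI:    {resolution_rate(True):.0%}")   # AI deflects simple tasks -> ~65%
```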
Bottom Line: IBM Think 2023
IBM Think 2023 provided valuable insights into the future of technology and its impact on businesses. First, generative AI has its merits, but ChatGPT is a tool for consumers; enterprises need to apply AI in different, narrower ways. Second, the hybrid cloud approach is gaining momentum, with organizations recognizing the value of leveraging both public and private clouds. Red Hat OpenShift is crucial in achieving consistency across multiple clouds and driving IBM’s growth in the cloud market. Third, the symbiotic relationship between AI and security is imperative to combat the ever-evolving threats businesses face. IBM Security’s QRadar Suite uses AI to transform threat detection and response. Fourth, AI and data have the potential to bridge the ESG gap, empowering companies to measure, manage, and act on sustainability goals. Finally, the contact center emerges as the low-hanging fruit for AI implementation, revolutionizing customer experiences and enabling significant returns on investment.
ZDX measures user experience metrics providing IT teams with the data they need to identify and troubleshoot performance issues.

The challenges of digital experience monitoring
While security has been the main focus of Zscaler’s SSE offering, a few years ago the company added a visibility product called Zscaler Digital Experience (ZDX), which helps its customers measure and analyze the performance of business-critical apps, identify issues, get actionable insights, and more. The thesis is sound: security companies must capture and analyze traffic to find threats, and the same data can be examined for performance information. Other network management companies, such as NetScout and Riverbed, have tried this in the past, but both flopped in security, partially because of poor product but more because networking and security had yet to converge seriously. Since Zscaler launched ZDX, other SSE providers have launched visibility products. I recently caught up in person with Javier Rodriguez, Zscaler’s director of product management, at the RSA Conference, which took place April 24-27 in San Francisco. Rodriguez discussed the latest release of ZDX and how it brings security and network teams together.
May 9 ZKast with Javier Rodriguez of Zscaler
Here are the highlights of the ZKast interview.
End-to-end visibility with Zero Trust Exchange
ZDX leverages Zscaler’s security platform, Zero Trust Exchange, to provide end-to-end visibility into an organization’s digital environment—from the user’s device to the application server. ZDX measures user experience metrics like response time, page load times, and availability, providing IT teams with the data they need to identify and troubleshoot performance issues. ZDX also provides a comprehensive view of an organization’s internet traffic and usage patterns.
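Digital experience monitoring ultimately comes down to measuring what the user actually sees. Below is a minimal, standard-library Python probe that records availability and response time for an application URL from a single vantage point. It is purely illustrative of the kind of metric described above; it is not Zscaler code, and the output format is invented.

```python
# Minimal, illustrative end-user experience probe (not ZDX itself): measure
# availability and response time for a business app from one vantage point.
import time
import urllib.request

def probe(url: str, timeout: float = 5.0) -> dict:
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            status = resp.status
        available = 200 <= status < 400
    except OSError:  # covers URLError, timeouts, DNS failures, etc.
        status, available = None, False
    elapsed_ms = (time.monotonic() - start) * 1000
    return {"url": url, "available": available, "status": status,
            "response_time_ms": round(elapsed_ms, 1)}

if __name__ == "__main__":
    print(probe("https://example.com"))
```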
New security monitoring features
Zscaler uses zero trust not only as an authentication mechanism but also as a modern security monitoring tool. With millions of data points, it’s necessary to have visibility and metrics to identify issues. One key feature, called tracing, allows an app to be configured to send additional telemetry and check policy based on responses.
A growing data set
With over 30 million agents, Zscaler has access to a massive data set that helps identify issues affecting certain apps. This is important because it allows organizations to have a comprehensive view of their digital experience and detect problems quickly. ZDX is Zscaler’s fastest-growing product category due to a combination of factors, including the need for visibility in the context of security, the challenges of troubleshooting in zero trust environments, the impact of cloud computing and COVID-19, and the importance of quantitative data in positioning the product.
- Initially, ZDX was primarily used by security operations, but IT operations have also found it useful. Since ZDX provides a detailed view of an organization’s network, apps, endpoints, and Wi-Fi, it’s a helpful tool for both security and IT operations for pinpointing issues. ZDX has different use cases in the enterprise. Examples include monitoring hybrid workplaces and collaboration tools like Zoom and Microsoft Teams, as well as zero trust monitoring, which identifies problems with Wi-Fi or internet connections.
- ZDX comes in standard and advanced versions. The standard version provides minimal features, enough to understand the performance of apps. The advanced version includes additional features like faster resolution, more alerts, and new artificial intelligence (AI) capabilities that Zscaler recently rolled out to provide IT teams with advanced insights and help them resolve issues faster. The advanced version also collects more data at a higher frequency, making it suitable for organizations with more complex environments.
- The newly released AI capabilities show how users are experiencing apps, offer solutions for performance issues, and measure the quality of meetings in apps like WebEx. ZDX already works with Microsoft Teams and Zoom. With AI analysis and alerts, IT teams can address user complaints faster and compare good and bad experiences. ZDX also helps troubleshoot devices for remote workers and supports network problem-solving, privacy rules, and tracing of protected apps.
- Last year, Zscaler announced a partnership with Zoom for unified communications as a service (UCaaS) monitoring. The feature is included in the advanced version and works by integrating with Zoom’s application programming interface (API) to gather additional signals like data latency and loss.
- Zscaler is launching several other new features: third-party proxy support for complex deployments; AI to make it easier for analysts to detect incidents and troubleshoot; a single-screen dashboard for customers; and support for Webex integration. Additionally, Zscaler is leveraging internet service provider (ISP) insights to understand “last mile” performance. These developments are important for organizations because they provide more visibility and monitoring capabilities to troubleshoot issues and make informed decisions based on performance data.


Backlog returning to ‘normal’ over time
In the earnings release, Cisco noted that remaining performance obligations are at $32.1 billion, up 6%, with 53% to be recognized as revenue over the next 12 months. This indicates that the supply chain issues are easing, which is something Chief Executive Chuck Robbins has repeatedly mentioned on previous calls. One concern I had was that I wasn’t sure if the backlog would fully translate to revenue, as many customers had told me they ordered products from multiple vendors and would cancel once one vendor could meet their demand. Cisco did tell me that order cancellation rates are now below historic levels, and channel partners have confirmed this. The one question that remains is what a “normal” backlog is for a company like Cisco. It won’t be $30 billion, but it likely won’t ever return to pre-COVID levels.
Declining orders does not mean declining demand
Although Cisco put up good numbers, the stock traded down slightly after hours, with the focus seemingly on the metric that orders were down 23%. This is a good example of needing to look between the numbers to fully understand what’s going on. In an interview with Barron’s, Chief Financial Officer Scott Herren confirmed several things I had heard in the field. The most notable point is that the relief in the supply chain has shortened product availability times, meaning customers no longer need to order 18 months or two years out. Also, the reduction in backlog means Cisco is shipping previously ordered products, and now customers need to take the product in and deploy it before they order more.
Cisco continues to transition to a software company
As has been the trend, all software metrics were up this quarter. Total software revenue is now $4.3 billion, which annualizes to about $17 billion. This was up 18% year-over-year. Annualized revenue run rate is now $23.8 billion, up 6% from a year ago, with product ARR up 10%. Total subscription revenue rose 17%. All of these numbers are important in the continued transformation of Cisco. Although many software numbers are tied to hardware, software creates revenue predictability and more consistent product upgrades. Cisco also has the opportunity to leverage some best-of-breed assets as Trojan horses that can eventually pull other products through. One reseller I talked to this week told me, “ThousandEyes is an excellent product, and I’d like to see it deployed everywhere.”
Networking is booming
Of all the product areas, networking, or “Secure, Agile Networks” in Cisco-speak, rose a whopping 29%. This is, by far, Cisco’s largest product area and currently comprises 52% of total revenue. This growth is driven by the fact that the network is the foundation for digital transformation. Companies are modernizing compute by moving to the cloud, changing the way people work, which relies on mobility, and connecting everything as part of “smart” initiatives. This mandates network evolution, and Cisco still has the broadest portfolio with the largest share. Looking ahead, I expect to see an acceleration of the network business as the company cleans up its portfolio. In some ways, Cisco has remained successful despite itself. If one looks at the portfolio (Meraki, Catalyst, Nexus, DNA Center, Viptela, and the like), it looks like a hodgepodge of products. At Cisco Live 2022, the company announced the long-awaited Meraki-Catalyst integration, and I expect to see more in this area come Cisco Live 2023 in June.
Security is still flailing
The product area known as “End-to-End Security” posted revenue of $958 million, representing 2% growth. While growth is good, that pales compared with the growth numbers of companies such as Fortinet Inc. (30%) and Palo Alto Networks Inc. (23%). As I pointed out in my security platform post, Cisco has good products, but it’s a collection of best-of-breed components without a larger end game. This seems to be changing. At the RSA Conference, the company announced its extended detection and response, or XDR, offering, and I’m hearing rumors that Cisco has another major security announcement set for Cisco Live 2023. If it can grow just a few percentage points in the massive security industry, that will move the revenue needle like no other product category. More coming here, I’m sure.
Collaboration falls in double digits
The collaboration revenue bucket fell 13%, posting revenue of $985 million. I’m sure this disappoints the collaboration business unit, which has loaded Webex with innovation. I use both Microsoft Teams and Cisco Webex regularly and can definitively say Webex has a significant technology and performance advantage over its Redmond-based competitor. Unfortunately, customers haven’t been looking for the best product but have adopted Teams because of the Microsoft license bundles. Over the past year, Cisco has changed its competitive approach with Microsoft and adopted the attitude, “If you can’t beat them, join them.” At the Enterprise Connect event, Cisco demonstrated that its device portfolio now natively integrates with Microsoft Teams. Also, Microsoft’s voice plans are expensive and not enterprise-grade, and many of the unified-communications-as-a-service providers, Cisco included, can sell voice services into Teams, which can help “backdoor” Cisco into accounts. Once it gets a foothold into an account, it can bring the benefits of Webex to other departments. For example, Webex offers better webinar and hybrid event capabilities, which might appeal to a sales team. The reality is that, although many companies use Teams, it’s often not the only collaboration tool. One of Cisco’s bigger Webex reseller partners told me, “The strategy of winning voice and selling the suite after is starting to work. We are hopeful that this continues to generate new opportunities for us.” Overall, it was a solid quarter for a company still transitioning into the next wave of itself. The secular trends of artificial intelligence, hybrid work, cloud, the internet of things, and more work in Cisco’s favor, but there is still plenty of work to be done on the product portfolio. As Cisco Live 2023 approaches in less than a month, we should get a good glimpse of what the next year has in store for Cisco customers.
Cisco integrates ThousandEyes and AppD information to provide visibility into user experience.

Understanding User Experience is Critical
A good user experience is crucial since organizations rely heavily on digital apps for many business interactions. Therefore, identifying issues in both the apps and the network enables organizations to be more proactive and fix problems before they affect the end user. Cisco’s approach is to gain insights into app and network performance by leveraging application observability from Cisco AppDynamics and network intelligence from ThousandEyes via this new Customer Digital Experience Monitoring service. The offering utilizes OpenTelemetry—an open-source set of tools for collecting telemetry data from applications—to provide digital experience monitoring by combining application and network vantage points. The bi-directional integration pulls together data from multiple sources, analyzes it in real time, and reduces the time it takes to resolve issues.
Collaboration Between IT Silos
The solution also helps break down silos and reduce friction among different teams within an organization, said Carlos Pereira, Cisco fellow and chief architect, during a recent news briefing on Customer Digital Experience Monitoring. A complete picture of an application’s health and user journeys reduces tool sprawl and fosters collaboration between infrastructure and operations (I&O), security operations (SecOps), and development/security/operations (DevSecOps) teams, as well as app developers. A challenge most organizations face is ensuring a smooth digital experience for users accessing apps from various devices and locations. Internet connectivity can have a significant impact on the user experience, especially if there is poor connectivity and users cannot access services. That’s where full-stack observability (FSO) adds value. It can help organizations understand and manage the complex connections between users, apps, and the internet.
Customer Digital Experience Monitoring
Customer Digital Experience Monitoring brings together observability and network intelligence in two ways (a simplified sketch of a correlated data point follows the list):
- ThousandEyes sends network metrics in the OpenTelemetry format to AppDynamics, contextualizing the metrics for specific apps. AppDynamics then correlates internet performance with app performance and the user experience, so IT teams can understand whether the problem is on the end-user side, in the network, or in application performance.
- AppDynamics shares real-time application dependency mapping information with ThousandEyes, which helps network operators know which networks are being used for which app and if any issues are affecting them. It also significantly reduces mean time to resolution (MTTR) by providing actionable recommendations and prioritizing network remediation based on business impact and criticality.
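To make the correlation idea concrete, here is a hand-rolled sketch of what an OpenTelemetry-style data point might look like when an application identity and a network path are carried as attributes on the same record, which is what lets the two views be lined up. The field names and structure are simplified assumptions for illustration; this is not the actual payload ThousandEyes or AppDynamics exchange.

```python
# Hand-rolled approximation of an OpenTelemetry-style metric data point that
# carries network path latency tagged with the application it serves.
# Illustrative only; not the real ThousandEyes/AppDynamics payload.
import json
import time

metric_point = {
    "resource": {
        "service.name": "checkout-frontend",                  # app identity (hypothetical)
        "network.path": "branch-42 -> ISP -> AWS us-east-1",  # network path (hypothetical)
    },
    "name": "network.round_trip_time",
    "unit": "ms",
    "value": 87.4,
    "timestamp_unix_nano": time.time_ns(),
}

# Shared attributes (service name + network path) are what allow app-side and
# network-side tooling to correlate the same user transaction.
print(json.dumps(metric_point, indent=2))
```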
Smartlook and Digital Experience Monitoring
In April, Cisco shared plans to acquire Smartlook, a company specializing in analyzing and contextualizing end-user digital behavior. Through the acquisition, which will be completed in the fourth quarter of FY23, Cisco aims to further enhance its FSO solution with new insights, analytics, and other capabilities related to application and user experiences. Pereira said customers could take advantage of additional digital experience monitoring capabilities by implementing Customer Digital Experience Monitoring in combination with Smartlook’s Real User Monitoring (RUM).
Bottom Line: Leveraging Data
As a Cisco watcher, it’s good to see the company leveraging the data in its AppDynamics solution. The ability to understand how network changes and anomalies impact application behavior enables IT pros to translate technical information into business metrics. Earlier this year, at the Cisco Live EMEA user event, the company announced Business Risk Observability, which uses AppDynamics information to help prioritize security risks. This is part of Cisco’s bigger commitment to creating better integration and interoperability across all its products. Cisco Live US is right around the corner, and I’m fully expecting to see more offerings, like Customer Digital Experience Monitoring, that highlight the Cisco platform advantage.

Innovation in ‘the basics’
There are a lot of adjacent areas for backup and recovery, but it has been a struggle for this industry to make the process of recovering files easy. I’ve often joked that the backup and recovery industry is filled with vendors that are excellent at backing up data but can’t make the recovery process easy, but this is where Veeam shines. Earlier this month, I talked to Eswaran. He told me that the No. 1 thing Veeam customers like about the company is “reliable recovery,” where getting data back is fast and easy, which has been a “rallying cry” for him and the company. The event content will likely reflect this.
Ransomware recovery
There has been no bigger driver of advancing backup and recovery than its role in helping companies recover from ransomware. The Veeam Ransomware Trends report found that in 2022, 85% of organizations had at least one ransomware event or attempted attack. One option is to pay the ransom, and I have talked to plenty of organizations that have done this, but that doesn’t guarantee the return of data. The only way to ensure one can recover quickly is to employ a “3-2-1-1-0” backup strategy. I believe the 2023 study from Veeam will be released at VeeamON, so we should see where the gaps are.
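For readers who have not seen the shorthand before, 3-2-1-1-0 is commonly read as: at least three copies of the data, on two different media, with one copy offsite, one copy offline or immutable, and zero errors when backups are verified. The small Python sketch below checks a made-up backup inventory against that rule; the inventory format is invented for illustration and is not a Veeam data structure.

```python
# Illustrative 3-2-1-1-0 check: 3 copies, 2 media, 1 offsite, 1 immutable/offline,
# 0 verification errors. The inventory format is invented for this example.

copies = [
    {"media": "disk",   "offsite": False, "immutable": False, "verify_errors": 0},
    {"media": "object", "offsite": True,  "immutable": True,  "verify_errors": 0},
    {"media": "tape",   "offsite": True,  "immutable": True,  "verify_errors": 0},
]

def meets_3_2_1_1_0(copies: list[dict]) -> bool:
    return (
        len(copies) >= 3                                  # 3 copies of the data
        and len({c["media"] for c in copies}) >= 2        # on 2 different media
        and any(c["offsite"] for c in copies)             # 1 copy offsite
        and any(c["immutable"] for c in copies)           # 1 copy immutable/offline
        and all(c["verify_errors"] == 0 for c in copies)  # 0 errors after verification
    )

print("3-2-1-1-0 satisfied:", meets_3_2_1_1_0(copies))
```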
The Veeam platform approach
Over the years, Veeam has built a broad set of capabilities that enhance its core offering. In addition to backup and recovery, the company offers monitoring, analytics, and orchestration capabilities. Also, it can back up data from any computing model, including cloud, virtual, on-premises, local applications, and SaaS. If this were “Lord of the Rings,” Veeam might call this the “one backup platform to rule them all.” At VeeamON ‘22, the company touched on the platform advantage, but this is an area I’m hoping to see more of.
Kubernetes
In late 2020, Veeam acquired Kasten to target Kubernetes-native workloads. At the time, Kubernetes was highly hyped but lightly used. Today, the broad use of containers has accelerated the adoption of Kubernetes. One of the challenges of containers is that they are ephemeral in nature, making them difficult to back up. Kasten was a big part of VeeamON 2022, and I’m expecting to see more in this area to help customers safely ramp up the use of Kubernetes.
Cloud-a-palooza
Backing up Amazon Web Services, Microsoft Azure and Google Cloud Platform has driven growth for Veeam. One might think cloud services do not need to be backed up since it’s something the cloud provider should be doing, but that’s not the case. Although the cloud providers do offer backup solutions, they tend to be basic in capabilities when compared with those from Veeam. Also, one can’t use AWS to back up Azure or GCP to back up AWS, meaning companies need to manage multiple systems. With Veeam, it’s one platform, one set of policies and a single management console. This is an area of constant innovation, and I expect to see how Veeam has adapted its platform as cloud companies have evolved.


Secure access service edge (SASE) deployments have seen strong momentum thanks to increased complexity in managing networks and dealing with security threats.

Digital businesses are more reliant on their networks than ever before. Technologies that enable digital transformation, such as IoT, mobility, and cloud, are all network-centric, and that has raised the bar on the value of the network. My research shows that almost two-thirds of business leaders believe the network is more valuable today than it was five years ago. That said, today’s organizations face increased complexity in managing networks and dealing with security threats. We live in a world where everything is connected, and it’s up to network operations to manage traditional connectivity as well as connections to people’s homes and a growing number of things such as kiosks, autonomous machines, and other devices. In the current business environment, organizations are increasingly adopting cloud services while maintaining some on-premises applications. The rise of remote work due to the COVID-19 pandemic has accelerated this trend, leading to more security threats and network management challenges. To tackle these challenges, the industry has recognized the need for a unified approach to networking and security, which is why secure access service edge (SASE) deployments have seen such strong momentum. In fact, SASE was one of the hot topics at the RSA 2023 show, as security vendors are trying to align themselves with networking and vice versa.
In concept, SASE management addresses these challenges by combining software-defined wide area networks (SD-WAN) with security. Delivered as a single-vendor, cloud-based service, SASE management allows businesses to protect their networks and digital resources while simplifying the management process. I used the term “in concept” because there are many options for customers, but customers typically need to pick and choose networking and security components from different providers, bring them together, and correlate the information and policies manually. The complexity of doing this opposes the simplicity SASE is supposed to bring. Managing SASE requires two disciplines that have historically not been tied together: making wide area networks more efficient (SD-WAN) and ensuring strong network security. This combination needs to be delivered as a single service via the cloud. Additionally, SASE management should make it easy to apply rules and policies across the entire network for all users and devices. This can be difficult for companies that want to take a “best of breed” approach, as the correlation of data and integration of services and policies are typically done manually.
Verizon’s SASE solution combines its managed network and security services to create a closed-loop system that involves zero-trust networking, visibility and reporting on security threats, and improved latency and performance for accessing applications. The provider has partnered with Versa Networks and Cisco for the SD-WAN, plus Zscaler and Palo Alto Networks for security, to create an integrated offering that includes a single portal view for SD-WAN and SASE deployments. The deployments are managed by one network operations center (NOC) and security operations center (SOC), potentially lowering costs and simplifying operations. “There isn’t yet a single vendor or partner with a technology stack that does both SD-WAN and security service edge (SSE)-like services to meet the SASE vision.
As such, Verizon is focused on a couple of combinations for our customers using top-tier SD-WAN solutions and top-tier SSE solutions,” said Vinny Lee, Verizon’s product development director, during a recent webinar hosted by the provider. A networking element that plays a key role in SASE management is network as a service (NaaS). In fact, Verizon views SASE as part of its evolving NaaS narrative. NaaS provides organizations with a flexible, programmable, and scalable way to manage networks, combining different levels of service, access types, and service-level agreements (SLAs). By contrast, traditional SD-WAN solutions often require a combination of internet-based and private access technologies to achieve better performance and reliability. Verizon aims to build a “single pane of glass for the SASE digital experience,” which will eventually expand into a broader NaaS offering, said Lee. This would allow customers to see all the services they have procured from Verizon in one location and understand how they work together, creating a holistic experience for customers. Verizon’s NaaS strategy involves securing, connecting, and managing all aspects of the network, including incident management and monitoring connectivity tunnels. Verizon is also addressing the increasing use of wireless local area networks (LANs) and 5G networks. The provider is offering customers the option to access its network with any type of last-mile access, including broadband, fixed wireless access, and 5G, using what Verizon calls a “secure hybrid network” service. This allows customers to get into Verizon’s core private internet protocol (PIP) network even if they’re using other providers, including global providers in regions like Asia-Pacific (APAC).
Lee shared two use cases that demonstrate how customers are utilizing SASE management. The first customer, a security solutions company based in Europe, acquired Verizon’s SASE management package as part of a total NaaS solution. The customer embarked on a complete network transformation, which included implementing various security components. The second customer, in healthcare, focused on transitioning to a traditional SD-WAN, integrating Palo Alto Prisma and leveraging Verizon’s NOC incident ticket handling and policy management services. This customer sought a single vendor to deploy the complex solution and achieve overall cost reduction. SASE management from Verizon is available in three packages, with the price based on the number of users. Each package provides integrated security management support for a defined feature set.
- The first package is multi-vendor SASE management, which includes change management and incident management for a set of basic security features.
- The second is multi-vendor SASE management plus, which includes the features available under multi-vendor SASE management, plus change management and incident management with enhanced security options.
- Package three is multi-vendor SASE management preferred, a complete service package that includes all the features in the other packages in addition to managed detection and response—Verizon’s security as a service (SECaaS) offering.

Managing the network from the edge offers performance and data sovereignty benefits.

Edge Addresses Latency and Data Sovereignty
The use case for management at the edge is for organizations with latency-sensitive requirements, the most obvious of which is artificial intelligence. AI is becoming a bigger part of network operations, and customers can run Extreme’s AI application, CoPilot, from the edge. Not all customers would need to do this, but customers such as retailers who need to make real-time decisions about the network could benefit. The other advantage of running from the edge is data sovereignty, which has become a more significant issue in Europe since the war in Ukraine began. Running network operations from the edge allows customers to benefit from a cloud operating model while keeping data in the country. ExtremeCloud Edge will be made available in the summer of 2023 for select partners and includes ExtremeCloud SD-WAN, Extreme Intuitive Insights, and the previously mentioned ExtremeCloud IQ. The rest of the company’s application portfolio will be made generally available (GA) in early 2024. The company also plans to make the platform available to certified partners for ecosystem solutions. The edge would be ideally suited for applications such as IoT management, video analytics, and retail operations. Extreme has a large footprint with sports teams through partnerships with the NFL, NHL, MLB, and other organizations, and stadium analytics would be another good use case for analyzing network data at the edge.
New Hardware Platforms Announced at Extreme Connect
In addition to ExtremeCloud Edge, the company announced several new network products, including the following:
- AP3000: This is a low-power, small form factor Wi-Fi 6E access point designed for environments where power consumption is an issue. The AP draws only 13.9W of power, significantly lower than the 25-30W many APs take today. This means companies can power the APs with PoE (15W) instead of upgrading to PoE+ (30W). The device has the option for external antennas with an extended temperature range, making it suitable for freezers or hot climates.
- 7520 and 7720 Universal Switches: These new products are for a high-performance network core or aggregation point. The former is designed for 1/10/25Gb server and top-of-rack (ToR) deployments within data centers and wiring closets. The latter lets customers address higher-speed core switching needs with up to 32 x 100Gb ports. It can consolidate up to eight different aggregation and core switch lines from previous generations into a single family.
- Extreme 8820 Switch: This is a high-density, fabric-enabled switch for large-scale environments. The new switch brings Extreme’s Universal Platforms to large enterprises and service providers and can be used in a data center as a border leaf or spine switch. The 8820 will be available in 40 x 100Gb or 80 x 100Gb (QSFP28) configurations, with the ability to split ports to 4 x 25/10Gb, resulting in either 80 x 40Gb, 144 x 25Gb, or 144 x 10Gb. Or, with the 8820-40C, it splits to 40 x 40Gb, 72 x 25Gb, or 72 x 10Gb configurations.
Universal Hardware Gives Customers Choice and Minimizes Risk
Extreme’s Universal Hardware architecture enables customers to purchase one set of hardware but then have flexibility in how the switches are configured and managed. For example, a customer could choose initially to run the network in a traditional networking mode with on-premises management to minimize disruption to the business. At a later date, once the company has tested the fabric operations, they can switch to that model and even migrate to a cloud-managed solution without having to replace hardware. This lets customers evolve the network at a pace they are comfortable with.
Bottom Line: Catching Its Stride
Extreme has caught its stride over the past year. The company went through an aggressive acquisition strategy where it rolled up network assets from Avaya Networking, Brocade, Aerohive, and others. This created many challenges as the company looked to rationalize the portfolio and consolidate software platforms while shifting to a cloud management model. The pandemic also created a problem as supply chain issues created a large backlog in sales, but the past 12 months have seen the company execute consistently, leading to an uplift in stock price. With much of the messy work behind it, Extreme can focus more on innovation, much of which is on display at Connect Berlin this week.
