Featured
Reports

Scott Gutterman from the PGA TOUR discusses the new Studios and the impact on fan experience

Zeus Kerravala and Scott Gutterman, SVP of Digital and Broadcast Technologies discuss the expansion of the PGA TOUR Studios from […]

Continue Reading

Phillipe Dore, CMO of BNP Paribas Tennis Tournament talks innovation

April 2025 // Zeus Kerravala from ZK Research interviews Philippe Dore, CMO of the BNP Paribas tennis tournament. Philippe discusses […]

Continue Reading

Nathan Howe, VP of Global Innovation at Zscaler talks mobile security

March 2025 // Zeus Kerravala from ZK Research interviews Nathan Howe, VP of Global Innovation at Zscaler, about their new […]

Continue Reading

Check out
OUR NEWEST VIDEOS

2025 ZKast #95 with Jeffrey Russell, CEO of C1 at Cisco Live 2025

1.1K views 8 hours ago

0 0

2025 ZKast #94 with Fabrix.ai from Cisco Live 2025

4.7K views June 20, 2025 4:11 pm

4 0

2025 ZKast #93 with Josh Barney, CEO of SEAT with a conference preview

10K views June 17, 2025 3:26 pm

2 0

Recent
ZK Research Blog

News

As zero-trust security vendor Zscaler Inc. held its user event, Zenith Live, this week in Las Vegas, Chief Executive Jay Chaudhry sought to shift the company’s traditional narrative.

In his Tuesday keynote, rather than focus on Zscaler as a replacement for virtual private networks and firewalls — though that was clearly articulated as well — Chaudhry (pictured) emphasized how zero trust everywhere could unlock the potential of artificial intelligence.

Although the product specifics came later, Chaudhry appealed to the audience to embrace a fundamental shift in their security posture, evolve with modern trends and thrive in a hyperconnected, AI enabled world. These were the top of themes from Chaudhry’s keynote:

AI requires zero trust everywhere

The concept of “zero trust everywhere” is to apply least privilege access across the business. Network protocols were designed to allow “trusted” devices to talk to any other device, regardless of whether it needed to or not.

The problem with this is that if the trusted endpoint is breached, the threat actor now has unfettered access to any system and all data. Zero trust dictates that any device is unable to communicate with any other unless explicitly allowed. If there is a breach, the blast radius is contained to a very small area.

The central theme of the keynote was the expansion of zero trust from initially protecting users that were connecting to private applications and the internet. Now Zscaler’s scope has now expanded to cover workloads, internet of things devices and AI agents.

The inclusion of AI agents as zero-trust entities is a pivotal step forward. As AI agents are increasingly become autonomous, accessing most applications and data sources, their identity and activity need to be rigidly determined and regulated. Zscaler is presently working with companies such as Microsoft Corp. to set the identity of AI agents and extend their “exchange” to safeguard the new participants. This proactive approach ensures that when organizations roll out AI-enabled co-pilots and apps, they will do so with confidence, with the agents functioning within policy boundaries.

During the keynote, T-Mobile USA Inc. came on stage to talk about its use of zero trust, describing how securing 100,000 employees across 2,000 care sites, including iPads used in-store across 5G networks, was achieved by moving perimeter defense to an efficient, scalable zero-trust solution.

As AI expands, the need for zero trust continues to expand. In every keynote Nvidia Corp. CEO Jensen Huang has done this year, he has talked about the next wave of AI being physical AI, which brings in a world of autonomous machines. These also need to be secured, and that can’t be done with firewalls. As AI becomes ubiquitous, the world needs to move away from perimeter-based security and the answer is AI everywhere.

The café-like connectivity model is the right one for many companies

Chaudhry brought up the topic of network evolution and explained the internet is a vast network that already connects everything and questioned why we need to build overlay networks that require firewalls to protect them. When Zscaler customers are working from home or a café, they’re secured by the proxy-based zero-trust service. Their connection is secured back to the Zscaler cloud and then connected to the software-as-a-service applications they work with.

This raises the question: When one is in the office, is there a need for a firewall? If the user can be secured at a café, simply extend that to the corporate office.

At the event, I had a chance to talk with Zuora Chief Information Officer Karthik Chakkarapani. Zuora had moved to an all-SaaS model and along with that, moved away from the traditional castle and moat to using Zscaler. Chakkarapani explained the deployment went incredibly smooth, users were much happier as they no longer had to fiddle with VPNs, the security posture improved, and the company saved enough money that the Zscaler deployment paid for itself in only four months.

I’m not saying the café-like connectivity model is right for all companies, but it should be considered by organizations that rely heavily on cloud applications. With SaaS, there isn’t any data that goes between locations, so why build a wide-area network? Instead, treat users as if they were working remotely and they’ll have the same experience regardless of where they are working.

Comprehensive data protection and LLM proxies are the keys to AI security and data governance

The keynote highlighted that with the onset of the AI era, data security takes center stage, going beyond traditional data loss prevention to a more comprehensive approach to data security. Chaudhry emphasized that “it’s all about data security” these days, with data dispersed across SaaS applications, endpoints, cloud infrastructure as a service, and even the AI applications themselves.

Having multiple vendors and having to manage data protection policies across them is a formidable challenge, so that is why Zscaler has invested in a unified data protection framework. This allows one set of policies to be universally applied, regardless of where the data resides or how it’s being accessed, including through AI services.

A critical piece of innovation mentioned was adding the LLM proxy. Chief Innovation Officer Patrick Foxhoven explained how AI, and LLMs in particular, can’t be secured based on traditional threat signatures or sandboxing. Instead, it must ascertain the intent of what is happening, both in the prompts customers are sending and the output that AI generates. The LLM proxy employs 15 small language models to identify numerous injects of prompts, toxicity, and off-topic questions to enable the AI chatbots and apps to operate within established parameters.

Zscaler ran a demo that illustrated how this prevents unwanted or malicious applications, such as a car chatbot offering a car at $1 or leaking sensitive competitive information. This capability is crucial to preventing risk from public-facing AI apps and maintaining data privacy, even with internal AI tools such as human resources chatbots. This takeaway highlights Zscaler’s focus on building intelligent security products that understand the nuances of AI interactions and data flow, making secure and compliant AI adoption possible.

AI-driven security operations and exposure management streamline risk mitigation

Zscaler is best known as the firewall and VPN replacement company, and it’s not turned its sights on modernizing security operations. Chaudhry explained that IT pros struggle with massive data lakes, slow queries and trying to keep pace with security incidents. In 2024, Zscaler acquired Avalor to accelerate security operations. This gives Zscaler the ability to consume, combine and apply context to data to cut times for detection and investigation by orders of magnitude. During his keynote, Chaudhry explained that an investigation that typically took 30 to 40 minutes could now be done in about three minutes, with most of that time being used for human verification.

Zscaler’s security operations center journey extends beyond data gathering and remediation and into preemptive avoidance of danger. Zscaler’s platform holds billions of telemetry driven data points and the company is using AI to deliver exposure management, which is an end-to-end view of an organizations attack surface.

Attack management is another part of the Zscaler operations suite, which uses its massive data fabric combined with AI to speed up threat response. The SOC segmet is filled with legacy vendors today, many of which are embedded into security workflows. Though the market is ripe for disruption, Zscaler’s success will be based on its ability to work with legacy vendors and chip away at their share, much the way it did with its access products.

Security professionals need to jump on the AI train or get left behind

There’s an expression that states, “Some people make things happen, others watch things happen and the rest wonder what happened.” In the AI era, the last two are the same as IT evolving at a pace never seen before.

I understand the hesitancy of using AI. Can I trust it? What does this mean for my job? What happens if a mistake is made? These and others are viable questions, but the reality is that AI is coming, and it will redefine the way security is done.

Today, threat actors use AI and can pivot quickly. The only way to fight AI-driven threats is by embracing AI. At the end of his keynote, Chaudhry showed a slide of Charles Darwin with his famous quote citing that it’s those most adaptable to change that survive and that has always been the case is IT.

Think back to other IT evolutions – mainframes to PCs, time division multiplexing voice to voice over IP, physical servers to virtualizations, on-premises computing to cloud. Each of these enabled IT to do more. Those that embraced the change moved into the new world, and those that did not were left behind.

The best quote for this came from a customer at Zenith Live. A chief information security officer for a well-known insurance company told me, “The established security model does not work, has not worked and is never going to work, which is why we shifted away from firewalls and VPNs to zero trust.” I asked him, when he removed the firewalls from the branch offices, did that scare him, and he responded, “At first it scared the crap out of me,” but he quickly realized that it was a superior security model that was simpler to run.

This need to change isn’t just for security operations. Network engineers need to heed this warning as well, particularly those that run the WAN. The café-like model I alluded to will change the job function, moving it away from being connectivity-based to one that requires deeper security skills. From a resume perspective, network pros should embrace this, as it gives them more options as the world continues to evolve because of AI.

Final thoughts

Overall, this was a different kind of Zenith Live than ones I had been to in the past. Chaudhry’s narrative was a bit more “in your face” and had the necessary level of urgency to it: AI is coming and it’s coming fast. It’s disrupting computing, networking, storage ad the way we build apps, and it will do the same to security. The time for change is now and Zscaler wants to be the company that helps customers adopt AI securely.

Fabrix.ai Inc., previously known as CloudFabrix, has evolved from focusing on providing a data-driven artificial intelligence operations platform to now offering an agentic platform for information technology operations.

The Fabrix.ai platform delivers a purpose-built agentic AI operational intelligence platform that enables enterprise users to streamline IT operations use cases, make better decisions more quickly and successfully accelerate digital transformation.

Fabrix.ai’s intelligent agents take over repetitive, time-consuming operational workloads for its enterprise customers, delivering increased agility and cost efficiency. With its broader focus, Fabrix.ai continues to drive innovation in robotic data automation and data fabric, while continuously integrating additional AI capabilities.

There are three components to the Fabrix.ai operational platform:

  • Agentic AI – Agent-driven automation with an enterprise grade agentic platform
  • Generative AI copilot – Incident management, asset intelligence and storyboards
  • Cisco-specific solutions – Fabrix.ai integration with Splunk, Fabrix.ai integration with OutShift by Cisco Systems Inc., its innovation incubator

Growing demand in telecommunications

As Fabrix.ai has evolved, it has become an operations platform optimized for telecommunications and service providers. The solution takes advantage of the demand for interoperable agents, which OutShift calls the internet of agents: Autonomous agents work to discover, collaborate and exchange information the way the internet itself enables people to acquire useful information and work together.

The company views its platform as having a unique capability to focus on automation, particularly in network observability. Running a network tends to be more stochastic than deterministic, so providing enterprises and service providers a solution requires additional building blocks, including guardrails, Model Context Protocol or agent-to-agent interfaces, and Fabrix.ai has built those.

While Fabrix.ai continues to work closely with Cisco and telcos, the company is also branching out to serve customers in other areas, including AI security. There are multiple checks and balances to ensure no erroneous processing of data. The multi-agent orchestrator uses all inference time compute and storyboards to manage lifecycles and provide full visibility.

Charting a differentiated course

One of the biggest differentiators for Fabrix.ai is the ability to work with real-time data. That’s something that not all automation vendors can do today. What makes automating operations challenging is alerts and events coming in real time. IT professions struggle to take immediate action on these as the analytics platforms can’t process the underlying telemetry to provide a solution in real time.

Fabrix.ai leverages many of the common building blocks, but the platform is purpose-built for IT ops use cases rather than trying to modify a generic AI model. Its focus on handling real-time information has enabled it to get traction in key verticals, especially telco.

The company counts Indian giant Tata Communications among its customers, as well as other large service providers in the U.S. and globally that it can’t yet publicly name. The company also sees growth from OEM relationships with partners that wrap services around the Fabrix.ai platform.

Enterprises typically are forced to run multiple tools — AIOps, campus analytics, capacity planning, network automation and the like, with a requirement to restrict access to authorized users or groups. Fabrix.ai has created tools that are divided by specific domains. Each persona is assigned to a user group, so each authorized user will only have access to the specific tools or data sources based on their assigned persona. This is how checks and balances are handled to ensure that no erroneous event happen.

During a call, the Fabrix.ai provided an example of its storyboard.

“We start by providing a conversational phrase, for example, ‘monitor all change requests for ACL, and if a specific ACL is blocking access to complete the endpoint, opening access to known risky assets, which are dangerous,’” said Shailesh Manjrekar, chief AI and marketing officer at Fabrix.ai. “The platform creates what we call a task graph. Think of it as a graph of thoughts, or a chain of thoughts. Each node has a particular purpose, and we can test this individually with positive and negative tests. There are all kinds of explainability, reasoning and observability built into the platform to ensure that no errors occur.”

Fabrix.ai also leverages its growing partner ecosystem to bring its capabilities to more enterprise customers. The company can use whatever data platform a customer has, including Splunk, Elastic, OpenSearch, MinIO, HP or others. Or it could be a data lake, since it has partnerships with many of the data platforms and its data abstraction layer can read directly from the platforms.

The company’s evolution from observability into a fully capable agentic platform addresses automation without customers having to rip and replace the tools they’re currently using. The new platform is well-designed to meet the needs of telcos and enterprises today and into the future. Given the interest in using AI today to simplify operations and reduce operational expenses, Fabrix.ai should be well-positioned to take advantage of the growing AI wave.

Extreme Networks Inc. is holding its user event, Extreme Connect 2025 this week in Paris — a fitting city for the event as it’s home to many customers of Extreme, including the Musee d’Orsay and Charles De Gaulle Airport. The cloud networking provider used the event to announce several new products that focused on making the network simpler to deploy and easier to manage.

Extreme Networks is moving closer to a full rollout of Platform ONE, which combines artificial intelligence, networking and security in a single system. The cloud networking provider launched several new features in limited availability and showcased the platform at the event. Platform ONE will now be available to a broader set of customers, with general availability expected in the third calendar quarter of 2025.

Getting Platform ONE to where it is today has been a long journey for Extreme. A decade ago, Ed Meyercord and Norman Rice assumed the positions of chief executive and chief operating officer, respectively, and transformed the company through a series of acquisitions. Those included the Wi-Fi division of Motorola, campus networking from Avaya, data center from Brocade, cloud networking Aerohive and others.

Since then, the company has been focused on creating on simplification of the portfolio, which included universal hardware and cloud management. Platform ONE is the culmination of that as artificial intelligence required all the data to be brought together.

Platform ONE leverages three types of AI: conversational, multimodal and agentic. Together, these technologies provide automation and more visibility into how networks perform. The underlying architecture that powers all AI interactions across Platform ONE is the AI core and data hub, which aggregates both structured and unstructured data across Extreme’s portfolio. This includes switching, routing, wireless and software-defined wide-area networking.

The platform comes with several key tools that help organizations visualize, monitor and control their environments. AI Expert, for example, draws from more than 30,000 support documents, troubleshooting guides and articles to answer technical questions and assist users with common issues. AI Canvas lets teams create customizable dashboards and reports using real-time data.

Service AI Agent is another useful tool that diagnoses network issues. It supports advanced tasks like live troubleshooting. For instance, if a device fails to connect, the AI agent will walk through a step-by-step diagnostic process before identifying the actual problem. Though it might not fix the problem automatically, it gives the user a clear explanation and recommended next steps, all from within the same interface.

During the keynotes, Extreme Chief Technology Officer Nabil Bukhari (pictured) and Vice President of Product Management Hardik Ajmera did a demo of Service Agent troubleshooting Wi-Fi problems by automatically going through multiple steps to solve the problem. Bukhari said customers expectations need to change as, with AI, things that took days could now be done in hours and things that took hours will be instant.

Initially launched in December 2024, Platform ONE has since been tested in real-world environments through an extensive early access program involving more than 100 customers. As part of the program, the platform has been deployed across 19 remote data centers and 9,500 onboarded devices. Managed service providers were the first to get early access, and more than 50 have already started using the platform.

“We’ve listened to customer feedback. We’ve done about three years of research, speaking with our customers, doing in-depth interviews, looking at how users are behaving within Extreme Cloud IQ and our other applications,” Carla Guzzetti, senior vice president of product, experience and innovation at Extreme, explained in a conversation prior to Connect. “We wanted to approach the design for Platform ONE in a completely different way. It’s an integration of our portfolio-wide capabilities and reimagining how we can make things simple and easy.”

What’s new

AI Canvas now has a live update feature for building dashboards, eliminating the need to repeatedly generate manual reports. These dashboards can be saved, customized and shared instantly — something customers specifically asked for when preparing regular updates for other departments or executives.

Extreme also enhanced how users visualize and interact with their networks in Platform ONE. New features such as geo maps, interactive overlays and detailed path tracing make it easier to see how physical infrastructure connects to services. These updates are part of a broader effort to make the platform easier to use and more relevant to each user’s role. Teams can now save custom layouts and apply filters, so they’re only working with the information that’s relevant to their responsibilities.

The visualization aspect is critically important to the broader platform story for Extreme, particularly for Fabric customers. Most vendors offer an IP fabric, but Extreme has a shortest-path bridging-based solution that operates at layer two, which has some significant advantages, such as the ability to set up virtual networks with just a few mouse clicks.

One of the challenges has been that the fabric acts like a “black box,” making troubleshooting difficult. With Platform ONE, Extreme is bringing visibility to Fabric, enabling customers to observe performance better.

Extreme also built new data connectors enabling Platform ONE to pull telemetry from third-party devices and platforms, giving information technology teams a single view into their broader network infrastructure. This means tools like AI Expert, AI Canvas and Service AI Agent will now have access to more insights, not just from Extreme. Bukhari told the audience that while Extreme would like to own the end-to-end network, the reality is customers run multivendor environments, so the company has extended Platform ONE to work with its competition, since that’s best for the customer.

Also, Extreme overhauled its licensing model, offering all-in-one subscriptions. The updated licensing model simplifies the process for upgrades, renewals and hardware, so organizations don’t have to manage dozens of separate timelines.

Finally, no tech show is complete without new hardware. At Connect, Extreme rolled out new network gear that integrates with Platform ONE. In wireless, the company added new Wi-Fi 7 access points, including the indoor AP4020 and the outdoor-ready AP4060.

On the wired side, Extreme introduced the high-capacity 7830 switch for core networks and the compact 5320 for smaller spaces. Extreme also refreshed its Universal Compute Platform portfolio with new models that support Kubernetes-based orchestration and broader ExtremeCloud deployments.

Why it matters

Extreme sees this as the start of a bigger shift. By connecting its newest hardware with Platform ONE, the company wants to make it easier for organizations to manage networks across campuses, data centers and remote sites — all from a single platform, with AI and automation doing more of the heavy lifting.

It also makes Extreme’s Fabric technology more accessible. Fabric, though powerful, can seem intimidating or too good to be true for smaller IT teams. Extreme’s “North Star” is being able to deliver secure, always-on experiences across its product portfolio, explained Bukhari. Fabric and Platform ONE are two sides of the same coin. Fabric unifies the data flow across the network, while Platform ONE unifies how that network is managed, monitored and optimized.

Bukhari summed it up this way: “The combination democratizes our fabric. Now, anybody with one person in their IT team or 100 people in their IT team can all deploy it and have absolute visibility. That’s exactly what we’re trying to do: expand the applicability of the Fabric technology with Platform ONE.”

For Extreme, this launch can help it differentiate itself versus some much bigger networking vendors. Though the company has great technology, it’s using its ability to help its customers solve business outcomes as its advantage.

As an example, at Connect I talked with Farid Farouq, vice president of innovation for the Dubai World Trade Center, home of more than 500 events a year. He explained his team needs to create unique virtual networks for each show at the facility.

Doing this with traditional networking equipment is nearly impossible with the short turnaround times it has. With the Fabric, it’s easy. This is an example of where Extreme could try to explain the technical benefits of SPB but instead is letting its customer success do the talking. Platform ONE brings more automation and visibility to the portfolio, which is what more and more customers are asking for today.

As IBM Corp.‘s client and partner event, Think 2025, wrapped up last week in Boston, to no one’s surprise the primary theme of the event to no one’s surprise was artificial intelligence — but there were several other related topics, such as quantum and hybrid cloud.

I thought the keynotes and Q&A with IBM Chief Executive Arvind Krishna were excellent as he provided his vision for those areas, which were differentiated from what I’ve heard before. It’s important to note that IBM serves the needs of enterprises, and those companies will adopt technology dramatically differently than consumers and small businesses.

Below are my top five takeaways from IBM Think:

  • The future of enterprise AI is small models. Since the rise of ChatGPT, large language models have been all the rage. At Think, IBM talked extensively about the value of small language models or SLMs. Both are neural networks designed to understand and generate human language, but they are markedly different. As the name suggests, LLMs use a “large” number of parameters, often in the billions, and are trained on massive data sets enabling them to perform a wide range of tasks. Conversely, SLMs are more narrowly focused using fewer parameters. SLMs excel at domain-specific tasks and use significantly less power. For consumers, LLM powers are ideal as they can answer everything from Civil War history to how to learn to play guitar. Businesses do not need models this broad – but rather domain-specific models to do a very specific task. At Think, IBM rolled out dozens of SLMs to enable its customers to bring AI that won’t break the bank in terms of computing power.
  • The mainframe is alive and kicking. If the mainframe were a person, it would be Mark Twain as reports of its death are greatly exaggerated. At the event, IBM announced its latest flagship product, LinuxONE Emperor 5. During the Q&A with analysts, Krishna discussed how the mainframe business is growing at IBM. Industries such as financial services, which have relied on mainframes in the past continue to do so. He provided a data point that about 90% of credit card transactions are processed on mainframes. IBM has done an excellent job of making mainframes significantly more open than they have been in the past and that’s enabled organizations to take mainframe data and now use it as part of their AI strategy. For companies that rely on mainframes, that will likely continue into the foreseeable future. IBM is, by far, the leader in this space, and I expect the company to continue to drive innovations into mainframes.
  • IBM innovation focuses on “high value.” In the information technology industry, innovation comes in many shapes and sizes. As an example, hyperscalers have driven new ways of consuming IT resources that are far more cost-effective and convenient than ever before. Krishna talked about this and discussed how IBM is focused on “being able to invent things that do not exist anywhere else.” Current examples include quantum and the previously mentioned domain-specific small language models. These are focus areas Krishna believes IBM is well positioned. That’s not to say M&A isn’t part of IBM’s strategy. It obviously is as the company has made some big moves under Krishna, including the recent HashiCorp acquisition. However, he said, “I believe for the absolute game-changers, where we get a step function up in value and in the brand of the company, we must invent things that are very, very hard. And one goes through IBM’s history, whenever we have done that, we have changed both the market and the world’s perception of IBM.” He cited Java as an example where the technology was once used primarily for consumer applications, but IBM has brought the necessary tools, security, and ecosystem to make it mainstream in enterprises. Looking ahead, expect IBM to continue to focus on those hard problems that few other companies can solve.
  • AI will replace jobs but also create new ones. One question asked of every executive about AI is the impact on jobs. Many CEOs shy away from this question with generic responses. To Krishna’s credit, he did not and talked about it openly and honestly. To underscore his thoughts, he looked back at industrial automation, the steam engine and other “big shifts,” which displaced jobs but then created new ones because of the change. Krishna does expect that, over time, AI will displace some jobs at IBM, but the company is currently offering reskilling to several employees to help with the transition. One of the more interesting questions is what new jobs will be created. This has yet to be determined but if one uses the Internet as an analogy, that technology shift democratized access to information and created an upswell of data workers. AI democratizes access to expertise so it stands to reason that there will be jobs working with the output of AI. Job replacement is coming, but so is job creation, and it’s important for workers to keep looking ahead and be willing to evolve as the business requires.
  • Partners play a key role in IBM’s growth. Day 1 of IBM Think was Partner Plus Day which focused on IBM’s massive ecosystem of resellers, ISVs, systems integrators and other companies that partner with IBM. While IBM is well-positioned in areas like AI and quantum, it can’t win alone. The partners, including startups and smaller independent software vendors play an important role in helping create use cases, ideation and scaling the business technology. There’s no question that IBM does enterprise well and because of that, its partner ecosystem is filled with other companies that target large companies. However, startups are often more innovative and agile and can help create new use cases for emerging technology. As AI matures and quantum becomes real, I’m expecting to see IBM continue to diversify its partner ecosystem to bring new use cases to companies large and small.

IBM has been perceived by many to be an old, stodgy company, but at Think, Krishna was crystal-clear: IBM is driving innovation into established areas of strength such as Java and the mainframe, as well as AI and quantum. Time will tell if the company can lead these industries, but the activity at Think 2025 was certainly a good start.

Networking giant Cisco Systems Inc. announced a solid third quarter Wednesday, posting a beat and raise.

Here are my top five thoughts from this quarter.

  • Artificial intelligence revenue is gaining momentum. On the call Cisco noted $600 million in AI product orders, blowing by the $1 billion target set for fiscal year 2025. Notable was that about two-thirds of this was for Cisco networking gear, with the other third being optics. This is a stark shift from historical numbers that were closer to 50/50, highlighting the acceptance of Cisco networking gear with web-scale customers. On the call, Chief Executive Chuck Robbins talked the three legs of the AI opportunity for Cisco being AI training infrastructure for web-scale customers, AI inferencing and enterprise clouds and AI network connectivity. We are at the very beginnings of the AI revolution, and it should create a “rising tide” for networking vendors, like the internet, and given the reach Cisco has with large companies and governments, I expect it to benefit disproportionately. A good example of Cisco’s uniqueness is the partnership with Humain in Saudi Arabia, where the company worked with government leaders on the initiative.
  • Tariffs and macro creating caution but not delaying business. One of the big questions on investor minds is the impact of tariffs. Is this causing customers to push orders out or to pull orders in? In fact, the first question during the analyst Q&A was from Meta Marshall from Morgan Stanley and she asked whether there was a pull forward, which could push numbers up and cause an air pocket in future quarters. Robbins answered the question, “We looked at a ton of data points to see if we saw any signs of broad-based pull ahead business, and we did not.” Chief Financial Officer Scott Herren added, “We didn’t motivate a lot of pull-ahead by talking about price increases.” This morning on CNBC, Robbins mentioned that though the macro is on customers minds, the pressure to keep up with AI and modernize far outweighs any concerns and that’s keeping customers spending.
  • The business is firing on most cylinders. Taking a granular look at the numbers, product revenue was up 15% and services up 3%. Networking was up 8% with growth coming in routing and switching, offset by a decline in servers. Security jumped 54% primarily driven by Splunk and secure access service edge. Collaboration grew 4% and observability rose 24%. I’ve often described Cisco as an eight-cylinder car that typically fires on six as we have seen great performance in one area offset by a sharp decline in another. Other than servers, which is a small part of Cisco’s business, all product areas saw growth. To me, this indicates Cisco is finally capitalizing on its platform strategy as it’s better able to create a “1+1 = 3” value proposition. Another proof point of platform execution is that both gross and operation margins were up slightly indicating they aren’t getting squeezed on price as they did when specific products drove sales.
  • Nvidia partnership in the early innings. During the call, Robbins talked about the systems Nvidia and Cisco are partnering on. During the quarter, Cisco announced its intent to create a cross portfolio unified architecture where Silicon One will be the only third-party chip to be part of Spectrum-X and Cisco will build systems that include Spectrum silicon. Partnerships work best when there is an equal amount of value exchanged and this one seems like a slam dunk. Nvidia has more AI domain knowledge at a system level than any other company in the world and Cisco has an equal amount in networking. Add in Cisco’s massive channel, which is licking its chops to jump onto the Nvidia train, and it’s easy to see how Nvidia can pull Cisco into AI and how Cisco can help Nvidia complete the infrastructure stack and go to market. The two companies are highly complementary, and I expect to see a significant amount of co-creation between the two.
  • Cisco is looking to lead in quantum networking. Cisco has certainly been bitten by the “Innovators Dilemma” in the past where emerging trends that look negative to the current business are ignored until it was too late, letting new vendors jump into the market. Software-defined wide-area network, software-as-a-service-based collaboration and the cloud come to mind. Cisco is not going to be late with quantum. Last week Cisco introduced its Quantum Network Entanglement Chip and opened its Quantum Lab. The company mentioned this on the earnings call, and I was a little surprised analysts did not ask about the timing of this. We are certainly early in the cycle, but at IBM Think just last week, IBM CEO Arvind Krisha was fairly bullish and expected to see broader adoption in the three- to five-year time frame. This and AI are two market transitions Cisco will not miss.

The other news from the quarter was the retirement of CFO Scott Herren, who has been in the position for five years. Herren has had an incredibly busy tenure at Cisco having gone through the COVID phase, supply chain issues, the biggest acquisition in Cisco’s history (Splunk) and more. Current Chief Strategy Officer Mark Patterson,will assume the role of CFO, a title he held before joining Cisco. Patterson is one of the sharpest minds at Cisco; if Chuck is the James T. Kirk of the Starship Cisco, Mark Patterson would be Spock. He hasn’t had as much public-facing time as many of the Cisco executive leadership team, but he’s the person in the background calculating how to execute on the plans in place.

Also, Chief Product Officer Jeetu Patel was named president, assuming the position Gary Steele had held. When looking at all the hires Robbins has made in the last decade, it’s fair to say that Patel has been the most important one. Prior to Patel, Cisco had a lot of great technology, but few great products and Jeetu is an interesting combination of technologist, businessperson and product evangelist and has shaken things up at Cisco. In the conversations I have had with Patel, he is obsessed with building great products that people love and that obsession has been pushed down through the product teams.

I wasn’t surprised Robbins promoted from within. After Steele left, I talked to him about who he might put in the global sales role, and he mentioned he was hesitant to go external as he feels the executive team is finally at the point where everyone has a great deal of trust in each other, and he didn’t want to disrupt this. Hence the appointment of Oliver Tuszik into that role. Patterson and Patel certainly fit that strategy.

Overall, given the AI momentum combined with the platform focus for products, Cisco should be able to continue to see steady growth in its core business complemented by accelerated growth in security and other emerging areas.

Riverbed Technology LLC Tuesday kicked off its 23rd anniversary with the biggest product launch in almost a decade.

The company that pioneered wide-area network optimization introduced major enhancements to its acceleration technology to help enterprises manage the explosive growth of data and artificial intelligence workloads.

For Riverbed, this launch brings the company full circle. At one time, the company dominated acceleration, also known as WAN optimization, resulting in a peak market cap of more than $6 billion in 2011.

Companies such as Cisco Systems Inc., Juniper Networks Inc. and even Oracle Corp. aimed to take a chunk out of Riverbed, but the company managed to stay ahead of the pack. I recall an investor call when an equity analyst asked then-Chief Executive Jerry Kennelly if he would consider selling Riverbed and the brash executive answered by asking something to the effect of “If you invented fire, would you sell it?”

However, being the de facto standard is both a curse and a blessing. As the world moved away from branch office computing to the cloud, the need for WAN optimization appliances faded and the company did not adjust. In came software-defined WAN and despite an acquisition of Ocedo, the company completely whiffed on “the next big thing” in WANs and let competitors such as Silver Peak quickly gain share and pass Riverbed. Since then, the company pivoted hard toward observability, leaving many industry watchers, myself included, to wonder if the company could ever regain its WAN mojo.

This week, Riverbed launched the new SteelHead 90 series powered by RiOS 10, an upgraded version of Riverbed’s operating system. The new high-end SteelHead 8090 appliance delivers double the performance of the previous top-tier model, supporting up to 60 gigabits per second of data and 6 Gbps of optimized traffic. It’s also more efficient, using about a third of the rack space, which helps reduce power and cooling costs.

Though SD-WAN may have put a big dent in the acceleration market, the current trend toward AI everywhere has the opportunity to move the pendulum back to Riverbed. One of the biggest challenges facing companies with AI is moving data between location and accelerated links significantly speed up the transfer while reducing the amount of data transferred.

During the peak of the branch computing industry, one Riverbed customer referred to SteelHeads as “network crack,” meaning once you got a bit of it, you wanted more and more. Several customers told me they would never run a non-accelerated link again. AI can make acceleration as important or even more so than it was over a decade ago when Riverbed was at its peak.

Riverbed cuts down on how much data must move across the network by avoiding duplicate transfers. Typically, customers see up to 90% less data being transferred, which lowers cloud egress costs by about 50% to 75%. In an interview with ZK Research, CEO Dave Donatelli (pictured) shared an example of one high-tech customer that used Riverbed’s optimization tools to reduce a petabyte of data transfer from 11 days to less than two days.

SteelHead 8090 is part of the new SteelHead 90 series of next-gen appliances. The 6090 model serves midsized data centers with up to 20 Gbps, while the 4090 and 2090 are tailored for edge and branch locations. Riverbed also rolled out SteelHead Virtual, a software-only version designed for private cloud deployments.

These improvements extend across Riverbed’s full product line, including physical, virtual and cloud, with performance doubling or tripling depending on the form factor, according to Donatelli. “The whole idea is that the dollars spent per gigabit of data moved are much more cost-effective,” he explained. “They’re more efficient, consuming about a third of the rack units. So, you save on power and cooling as you rack and stack these in your data center.”

In addition to speed and efficiency, Riverbed is placing a strong focus on security. RiOS 10 features post-quantum cryptography and confidential computing developed in collaboration with Intel. These capabilities help protect sensitive data, even if systems are breached. This is especially useful when moving workloads across hybrid environments.

When it comes to supporting AI at the edge, Riverbed has a new software solution that keeps data synchronized between distributed locations and central systems. SteelHead RS allows enterprises to run AI tasks where the data is being created, such as remote sites or branch offices. Therefore, the data stays aligned with their central systems.

Customers now also have more access to different cloud platforms with the expansion of SteelHead Cloud. It’s available through Amazon Web Services, Microsoft Azure, Oracle Cloud and Google Cloud marketplaces. Using SteelHead Cloud, enterprises can move data and applications at speeds up to 20 Gbps, which improves performance and reduces data transfer time and cost.

Lastly, Riverbed overhauled its licensing with a new subscription model called Flex. This addresses a legacy pain point for Riverbed customers where they often felt stuck with current products and had to forward roll licenses on upgrades and the like.

Customers can now buy a pool of licenses and use them wherever they’re needed — on hardware, on virtual machines or in the cloud. They can scale up or down, shift between environments, or bring workloads back onsite. There are no extra costs involved, which is a major improvement over the old model, where licenses were tied to specific appliances and fixed capacities.

“What we’ve done is make it much more customer-friendly,” said Donatelli. “We’re at a time of transition in the marketplace. Customers don’t want to get tied down to software, to an appliance. They want the ability to take that software and then use it in different ways as their architectures and needs change.”

Since being acquired by Vector Capital in July 2023, Riverbed has shifted much of its business to observability. At one time, WAN optimization dominated, but now it’s much more balanced around the two areas. Its observability offerings include the Aternity software-as-a-service platform and a suite of network performance management tools. According to Donatelli, observability grew 102% year-over-year, while acceleration grew 59%, contributing to a 90% overall business growth.

“We collect a lot more data than anybody else to understand the blind spots, and we can impact performance in a positive way with our acceleration,” Donatelli said. “So we know what’s happening — observe — and we can then act on it — accelerate.”

These two business areas will now be given equal emphasis on Riverbed’s redesigned homepage. The new and enhanced Riverbed products are generally available starting this month, with further updates expected later this year.

Last week I was at the industry event, RSAC and this week I’m at IBM Think, a vendor event and this got me thinking about the difference between the two types of conferences and how much they’ve changed over the years.

Flashback to 15 years ago: If you wanted to experience a blockbuster tech event, you were probably booking your ticket to something like Networld + Interop. Back then, tech expos such as Interop and SUPERCOMM were the “it” events, allowing vendor carnival barkers to yell over each other to grab your attention. Sure, you could catch glimpses of exciting innovations, but it felt more like tech industry speed-dating: all surface, no depth.

Fast forward to today, and the rise of dedicated tech vendor conferences has turned vendor events into specific tech-centric hubs where you can dive deep into select technologies without the distraction of brightly colored booth swag or gimmicky sales pitches.

There aren’t as many industry events as there were in the past. The ones that remain, such as MWC, EnterpriseConnect and the recently held RSAC, have become a place to meet and for people to build relationships rather than for the practitioner to learn. An interesting byproduct of this is that industry events seem to have a much lower percentage of actual information technology buyers compared with years past.

Juxtapose this with vendor events, which are geared more toward specific job functions, specifically the practitioner and increasingly C-level. Complementing that are ecosystem partners, resellers, investors and industry disruptors, all there to talk shop. Because of this, most attendees at vendor events are IT pros looking to plan out their purchases for the next 12 months.

It wasn’t until the pandemic forced these events into digital limbo — or placed them in suspended animation — that I realized just how valuable they are. After years of virtual meetings and webinars trying (and failing) to replicate the unique aspects of these gatherings, I can confidently say virtual events can’t replace the effectiveness of a tech vendor event.

The reality is, when you’re at a physical event, there’s power in the palpable energy of brainstorming with like-minded “birds of a feather,” the unfiltered honesty of hallway discussions, and the serendipity of bumping into someone at the coffee station who just might inspire your next career move. Dedicated vendor conferences deliver all that plus deep-dive roadmaps, hands-on demos and a peek behind the curtain at what’s coming next. Industry events can replicate some element of networking but not the engineering-specific content.

Over the next few years, as IT continues to grow in complexity, I’m expecting these events will only grow in importance. Whether it’s about cybersecurity, AI-powered tools or cloud everything, these conferences are where innovation is born and celebrated. Attendees will leave with more than just a bag full of SWAG — they will exit with the connections, ideas and inspiration to fuel the next big breakthrough.

Many of my peer analysts prefer dedicated analyst-only events. Though these certainly add value, my preference has been vendor events because I can interact with people that use the company’s products as part of their day-to-day job and that’s something that can’t be shown on a PowerPoint.

Here are 10 excellent examples of some recent vendor events and what I thought was the most meaningful knowledge gained:

  • AWS Re:Invent. The Amazon Web Services portfolio continues to grow in breadth and depth. Re:Invent provided many sessions on how Amazon services can be used together to create broader, portfolio value and how to derive short-term value from AI.
  • Zscaler Zenith Live. The concept of zero-trust security is no longer a foreign one but getting from today’s security environment to zero trust everywhere can be complicated. At Zenith Live, Zscaler focused on the Zero Trust Exchange and how to implement the technology to dramatically reduce their attack surface. Over the past few years, Zscaler has done a nice job of keeping the practioner content while adding in tracks specific to the C-level.
  • Canva Create. Canva is the hot new thing and has turned creative upside down. It’s a relatively young company but Canva Create had many sessions on how to get started with it. Canva is the first vendor I have seen in a long time that has a realistic shot at disrupting Microsoft and Adobe.
  • Nvidia GPU Technology Conference. This event was a good mix of vision and hands on learning. Engineers at GTC learned about the latest advancements in AI, accelerated computing, and related software, along with their applications across various industries. Developers also love GTC because the event helps them fast-track accelerated computing apps.
  • Cisco Live. Networking is a critical part of AI and Cisco is the biggest network vendor. Attendees of Cisco Live learned about the network and security roadmap but how the integration of the two offers a unique way to secure artificial intelligence. As AI grows in importance, so does the value of Cisco Live.
  • Extreme Connect: The current Extreme Networks has been put together through a series of acquisitions, for which most of the integration work is done. The last connect focused on its strength in Wi-Fi and its Fabric and how those fit into the broader portfolio strategy.
  • Zoomtopia. The very nature of work has changed, and few vendors have as broad a set of tools to reach every type of worker – remote or hybrid. Zoomtopia has provided prescriptive guidance on how to ensure workers of any type – from back-office to front-office — can use Zoom technology to be more efficient.
  • NICE Interactions. NICE is the king of contact center, and its Interactions show has been filled with guidance on how to bring AI into customer service, securely and with minimal risk. NICE always does a great job of parading customer after customer on stage to help prove the value.
  • VeeamON. Over the past few years, the boring industry of backup and recovery has evolved into business resiliency and no vendor has been more important in that shift than Veeam. Veeam technology is a critical component of recovering from all kinds of outages, including ransomware. Backup and recovery have shifted from something no one cared about to a board level discussion and that’s on display at VeeamON.
  • IBM Think. What I like most about Think is how it front-ends of the user event with a Partner Plus Day. This lets it take its AI, cloud and other content and deliver how to information to its customers but also drive its portfolio value throughout its massive partner ecosystem. Many companies split these events, but I like the two being part of one event, as they are two sides of the same coin.

Palo Alto Networks Inc. kicked off this week’s RSA Conference in San Francisco by introducing new capabilities for its ever-expanding security portfolio.

The announcements were focused on its two major platforms: network security and Cortex.

Prisma Access Browser 2.0

Palo Alto Networks has introduced Prisma Access Browser 2.0 into its secure access service edge offering. In late 2023, Palo Alto acquired Talon to jump into the secure enterprise browser market and now it has made the offering part of its SASE stack. Capabilities include:

  • Safely enabling generative artificial intelligence use and protecting data with real-time visibility, access control and user coaching to accurately secure sensitive data with LLM-powered context-based classification to prevent leaks or breaches.
  • Real-time defense against sophisticated web attacks to detect evasive and targeted attacks, such as AI-generated cloaking and SaaS-hosted phishing.
  • A reimagined unified user experience that provides maximum performance for modern web and SaaS applications while enabling users to easily launch legacy infrastructure from the same browser.

Other new Prisma SASE capabilities include endpoint data loss prevention; integration into Prisma SD-WAN to support new productivity apps and extend enhanced user-to-app performance to the branch; simplifying the information technology experience with a next-generation unified SASE agent; and the addition of Oracle Cloud Infrastructure to extend the global reach of Prisma SASE and deliver cloud resiliency and greater uptime.

Secure enterprise browsers aren’t new, but the market has been in a bit of a renaissance. Older solutions required users to install a separate browser, where current solutions, such as Prisma Access Browser run in Chrome and Edge making it invisible to the user. Also, with the permanency of remote work, for many organizations, the browser has become the primary workspace. Securing the user and data at the browser brings consistent security to the first line of defense.

Cortex XSIAM 3.0

Palo Alto unveiled the 3.0 version of Cortex XSIAM, the next version of its SecOps platform. Among the new features are proactive exposure management and advanced email security, which enable customers to consolidate more functions onto Cortex with better and faster results, providing a proof point as to the value of platforming their security operations center.

Cortex Exposure Management prevent attacks by using AI to analyze the massive amounts of data to prioritize and remediate actions across the attack surface. What’s interesting about the release is it changes the role of XSIAM. Palo Alto Chief Executive Nikesh Arora often talks about security tools being designed for “peacetime” or “wartime,” with XSIAM being the former. Exposure management adds in element of war time and playing a dual role.

Other new capabilities include:

  • Providing a unified solution to uncover risks across native network, endpoint, and cloud scanners that can integrate with third-party sources.
  • Reducing alert noise based on actual risk by using AI to prioritize high-risk, exploitable vulnerabilities — without the need for compensating controls — and eliminating false alarms.
  • Preventing future attacks by creating new projections for critical risks in native network, endpoint and cloud security solutions, and automating remediation with playbooks across first- and third-party tools.
  • Stopping sophisticated email-based attacks with Cortex Advanced Email Security by:
    • Detecting advanced fishing and email-based threats with large language model-powered analytics that continuously learn from emerging threats.
    • Using built-in automation to stop attacks in real time, automatically remove malicious emails, disable compromised accounts and isolate affected endpoints.
    • Extending detection and response with email context that correlates email, identity, endpoint and cloud data to show the full attack path to facilitate incident response.

Palo Alto expects the new SASE features, Exposure Management, and Advanced Email Security to be generally available in Q4 of fiscal 2025, which ends July 31.

Prisma AIRS

Palo Alto also introduced Prisma AIRS (pictured), which is an AI security platform designed to protect the entire enterprise AI ecosystem, including applications, agents, models and data. It addresses both traditional and AI-specific threats, enabling organizations to deploy AI more confidently.

Built on the Secure AI by Design portfolio that Palo Alto launched in 2024, Prisma AIRS capabilities include:

  • AI Model Scanning checks AI models for vulnerabilities to enable enterprises to secure AI ecosystems against a wide range of risks, including model tampering, malicious scripts and deserialization attacks.
  • Posture Management delivers insight into AI ecosystem security posture risks due to a number of issues, including excessive permissions, sensitive data exposure to platform misconfigurations, and access misconfigurations.
  • AI Red Teaming enables security teams to identify potential exposure and risks proactively. They can use a Red Teaming agent to stress-test AI deployments by performing automated penetration tests on AI apps that adapt how an attacker would.
  • Runtime Security protects LLM-powered AI apps, models and data against runtime threats, including prompt injection, malicious code, toxic content, resource overload, and more.
  • AI Agent Security enables enterprises to secure agents, including those built on no-code/low-code platforms, against various new agentic threats.

During RSAC, Palo Alto announced the intent to acquire Protect AI. As the name suggests, the vendor focuses on security AI and machine learning systems. At the event I talked with Anand Oswal, senior vice president and general manager of network security at Palo Alto, about the acquisition. “When the deal does close, Protect AI will become part of the Prisma AIRS team, accelerating our journey to comprehensively secure every app, agent, data set and model,” he said, so there’s more to come.

For Palo Alto, this was a strong set of announcements as it expands the definition and capabilities of its platforms. Almost every security professional I talk to has bought into the concept of the platform but struggle with how to get from where they are today to the future state of a platform.

The challenge for Palo Alto, and the other platform vendors, will be to help companies migrate from multivendor, multitool environment and consolidate down to a few platforms. It’s what customers want. Now the vendors need to help them get there.

It’s nearing the end of April and for the security industry, this means it’s time for the RSAC Conference. Over the next week or so, about 50,000 people will flock to the Moscone Center to take in the latest and greatest in cyber.

One company that has retooled much of its portfolio over the past year is Cisco Systems Inc. and it’s used RSAC to launch many new products. At the 2023 conference, the company introduced its extended detection and response solution and last year it added enhancements to Identity Intelligence with Duo, more capabilities with Cisco Hypershield and Splunk-Cisco integrations.

At RSAC 2025, Cisco continued its security binging with the following:

  • Additional enhancements to Cisco XDR
  • Cyber Vision industrial internet of things and operational technology integration with Hybrid Mesh Firewall.
  • AI supply chain risk management — visibility into and control of the AI supply chain.
  • A partnership to integrate Cisco’s AI defense capabilities into ServiceNow.

Here are details on each:

XDR 2.0 AI enhancements

Under Chief Product Officer Jeetu Patel (pictured), Cisco has been focused on bringing together the historically siloed security and infrastructure domains with a goal of providing better security outcomes while lower operating and capital expenses. It’s an ambitious goal, but the company is looking for AI to help achieve this. Adding to the complexity is that AI workloads introduce a whole new set of security challenges, particularly for the mid-market.

The value of XDR is that it can look across the entire attack surface – from network to endpoint to email and web but then identify the lateral movement of an attack, in near real time. The Cisco enhancements give it the ability to bring XDR to the mid-market, which hasn’t had access to it before.

XDR 2.0 uses AI is to close the gap between the bazillion alerts companies get to find the truly malicious ones and then take advantage of the automated response capabilities. Security teams can use agentic artificial intelligence to build a tailored investigation plan and then execute it. There is so much data being generated today that people can no longer analyze in manually, but AI can. In fact, one could look at AI as the missing piece to realize the promise of XDR.

Cyber Vision with Hybrid Mesh Firewall

Security for AI is also a component of Cisco’s Cyber Vision IoT and operational technology capabilities enhancements. Cyber Vision takes the asset inventory of the IoT endpoints, checks against any of the vulnerabilities that might be in place, organizes all these assets into groups and then the secure firewall is able to communicate seamlessly with this collection of IoT devises to enforce segmentation as well as firewall policies.

As Cisco brings more security into the fabric of the network with a Hybrid Mesh Firewall, it can read and understand feedback from Cyber Vision to make sure that it is automating least privileged controls to devices on a factory floor, as well as to users, whether they’re remote or in a branch office.

For Cisco customers, Security Cloud Control is the interface used to define policies and the intent behind them, and then enforce everywhere, where the application and workloads may be running — on firewall variants, on secure workload, on Hypershield, as well as secure access. Historically, Cisco has had good security tools, but the management was scattered across different systems. Under Patel, Tom Gillis and Raj Chopra, Cisco has done a much better job of simplify workflows to complement the products.

AI supply chain and risk management

AI supply chain and risk management are other areas in which Cisco is enhancing security. AI is being infused into every corporate application and business process and that creates new risks. As an example, downloading a model from a source such as Hugging Face creates risk, as the exposed models can be infected with malware.

Cisco has worked on all the artifacts around AI, not just the usage of AI, but how they are being built and modeled. Cisco has visibility into the entire supply chain around AI and can enforce the right kind of controls, be it on the endpoint of the developer or the usage of that particular application.

Partnership with ServiceNow

Cisco also announced a product and go-to-market partnership with ServiceNow to bring together Cisco’s AI risk and governance portfolio within ServiceNow in a hybrid model. Joint customers will realize the value of the partnership as they start to adopt AI more holistically. This spans a wide range of the use cases, whether it is the visibility of the application being used, the models, and the kinds of attacks they may be vulnerable to, including real-time protection.

Final thoughts

For Cisco, success in security is critical to accelerating growth. Security is a massive, highly fragmented market and even moderate success will move the needle on Cisco growth and stock price. Tying security to Cisco’s dominate share position in networking gives it a unique approach that is highly defensible.

Historically, Cisco has treated the domains as individual silos and created complexity for its customers. Much of the innovation in security for AI and AI for security has come by bringing these two worlds together – something long overdue for Cisco and its customers.

As Veeam Software Group GmbH, the market leader in data resiliency, holds its annual user event VeeamON in San Diego this week, it’s on a roll, continuing to stretch its lead over the legacy vendors.

Recently IDC released its Data Protection Software Tracker and Veeam grew 12%, outpacing IBM Corp.’s 5% growth and well ahead of Dell Technologies Inc. and Veritas Technologies LLC, which shrank 10% and 15%, respectively, allowing Veeam to stretch its share lead.

At VeeamON, the company made several announcements, including a partnership with CrowdStrike Holdings Inc., protection for Entra ID, ransomware updates, a new Linux-based appliance and more. One of the more interesting announcements, to help companies understand where they are with their data protection was the unveiling of the Data Resilience Maturity Model, or DRMM.

This era of information technology is driven by data, and that has raised the stakes on cyberattacks and IT outages. During his keynote, Chief Executive Anand Eswaran (pictured) laid out the truth when he said, “Cybercrime is a business, and that business is booming.”

The DRMM is a framework which enables organizations to objectively assess their true resilience posture and take decisive, strategic action to close the gap between perception and reality to improve data resiliency against proliferating cyberattacks.

Veeam led the creation of a consortium of industry experts, which includes MIT and McKinsey, tasked with identifying the state of enterprise data resilience and providing recommendations for improving it. During his presentation, Eswaran showed some data that highlighted a significant disconnect between how chief information officers see their organizations’ data resilience and the reality.

The study found that 70% of organizations believe they have the necessary levels of data resilience to protect their company. In fact, 30% consider themselves “above average” in this area. However, once customers went through the DRMM exercise, the model found that fewer than 10% had an acceptable level of data resiliency to react to an event without disrupting the business.

The fact there is a gap isn’t a surprise shouldn’t be a shock. In every example of a maturity model I have seen, businesses overestimate their capabilities. Rarely have I seen a gap this large and it’s something business and IT leaders need to take seriously. It has been well-documented that the artificial intelligence era is fueled by data and if there’s one thing events such as CrowdStrike’s have taught, it’s that most companies can’t recover quickly from a disruptive event.

The report quantified the impact of downtime as it highlighted that IT outages costs Global 2000 companies more than $400 billion annually, with $200 million in losses per company from outages, reputational damage and operational disruption.

Other key findings include:

  • 74% of organizations fail to meet best practices, operate at the two lowest maturity levels, and have data recovery risk exposure.
  • Organizations at the highest maturity level recover from outages seven times faster, experience three times less downtime, and suffer four times less data loss than their less mature peers.

Data resiliency has always been important but is now critical to survival. Eswaran expressed some urgency for companies to get a handle on their data when he stated, “Most companies are operating in the dark. The Veeam DRMM is more than just a model; it’s a wakeup call that equips leaders with the tools and insights necessary to transform wishful thinking into actionable, radical resilience, enabling them to start protecting their data with the same urgency as they protect their revenue, employees, customers and brand.”

Through a series of questions, the DRMM provides an empirical framework for organizations to assess their current resilience posture, identify gaps, and implement targeted improvements. Insights MIT and cybersecurity companies Palo Alto Networks Inc. and Splunk Inc. contributed to the research. The DRMM categorizes organizations across four data resilience maturity horizons:

  • Basic: Reactive and manual, highly exposed
  • Intermediate: Reliable but fragmented, lacking automation
  • Advanced: Strategic and proactive, yet missing full integration
  • Best-in-class: Autonomous, AI-optimized, fully resilient

Resilience goes beyond data

“Data resilience isn’t just about protecting data, it’s about protecting the entire business,” said Eswaran. I agree with this sentiment, as improving in this area can be the difference between shutting down operations during an outage or keeping the company running.

For many organizations, it can be the difference between paying a ransom or being able to quickly recover. Also, with the world moving into the AI era, a best-in-class data resiliency model should be considered a foundational component of companies’ AI strategy.

A rigorous, vendor-agnostic framework

To develop the DRMM, Veeam worked with experts in operational efficiency. McKinsey and Dr. George Westerman, principal research scientist at the MIT Sloan School of Management, led the effort to create a rigorous, vendor-agnostic framework. The model was designed to assess an organization’s ability to ensure data resilience across three core dimensions:

  • Aligning business and risk: A data strategy that integrates business goals with resilience planning, ensuring organizations can anticipate threats, enforce governance, and maintain compliance.
  • Empowering resilience through alignment and action: True resilience is driven by empowered teams and standardized execution. Organizations can respond decisively during a disruption by investing in skilled talent, aligned leadership and clear cross-functional protocols. Defined workflows and governance ensure continuity, while collaboration and training enable teams to adapt and recover quickly and confidently.
  • Technology supports resilience across six key areas:
  • Backup: Secure protection of data that’s located on-premises or across clouds.
  • Recovery: Agile restoration of critical systems, even at scale.
  • Architecture and portability: Scalability across heterogeneous and often hybrid environments.
  • Security: Prevention and anticipated remediation against cyberthreats.
  • Reporting and intelligence: Real-time visibility and insights to support compliance, improve recovery, and optimize operations.

Key benefits of DRMM

During his presentation, Eswaran highlighted several benefits:

  • Revenue protection
  • Cost optimization
  • Compliance and risk management
  • Brand integrity and consumer trust

Veeam cited the example of a global bank that followed the maturity model and improved revenue protection by improving reliance. By reducing mean time to resolution for vital IT systems, the bank achieved 99.99% uptime for critical applications, no cybersecurity-related outages for the past three years, and $300,000 in savings per outage.

Closing the gap between the boardroom and budgets

At VeeamON, I talked with several IT professionals about the DRMM. Though the technical folks found the information interesting, CIOs and chief information security officers look at it as a way of closing the gap between the boardroom and deployments. One CISO I had talked to discussed how she tried to get budget for Veeam as she were worried about ransomware, but the budget kept getting denied. And then the organization got hit, leaving it unable to transact business for about a week while everything was getting restored.

The CISO told me if she had access to this, it would have armed her better to more accurately quantify the risks at a board level, which would have given her a better opportunity to have the budget approved before they were hit with ransomware.

Copies of the report are available at https://go.veeam.com/wp-data-resilience-maturity-model.

This is the time of year when professional football teams are hoping to get the next Tom Brady, Joe Montana or Deion Sanders. Every now and then there are a couple of “can’t miss” prospects such as Peyton Manning or Saquon Barkley, but there are often far more draft busts than hits. It raises the question of why this is given the amount of money and resources teams pour into scouting, the NFL Combine, workout days and more.

The reason is that scouting is an imperfect science. A player at a top-ranked school, such as Alabama or Georgia, rarely faces competition on par with his team, skewing his statistics to the positive. Conversely, players from smaller schools do not get the airtime to show their skills. NFL Hall of Fame quarterback Kurt Warner is a great example of this, as he went undrafted after playing for the University of Northern Iowa but had a stellar NFL career.

At the recent MIT Sloan Sports Analytics Conference, I met a company, SumerSports, that is trying to change this by using AI-based video analytics to study every player on every play to understand whether they are making the right decisions that lead to success.

At the event I spoke with Chief Executive Lorrissa Horton, and she explained the mission of the company is to use AI to evaluate “every single player at the frame level, which leads to the snap level and then game level.” However, despite the rich set of data, Horton clarified it was important to bring in their knowledge of seasoned football professionals to ensure the AI is interpreting the data correctly.

She mentioned the company currently has 20 NFL veterans that work alongside the data scientists to bring these two worlds together. Horton admitted these are currently two worlds that have not overlapped, but the “two sides” have done a good job of learning from each other to understand problems and how data and analytics can solve it.

SumerSports is currently using its AI platform for the following three specific use cases.

Optimized roster construction: By analyzing player performance data in detail, SumerSports helps teams identify undervalued players, make strategic trades and build rosters that maximize on-field performance within salary cap constraints.

Enhance player evaluation: The platform provides advanced tools for evaluating player skills, potential and fit within a team’s system. This goes beyond traditional scouting methods, incorporating a wider range of data points to provide a more comprehensive assessment.

Improve decision-making: SumerSports offers insights that can inform various strategic decisions, from game-day play calling to long-term roster planning.

There are currently several other data sources for football, and I asked Horton as to how SumerSports was different. “The combination of humans and AI allow us to combine subjective and objective information into a single point of view,” she said. “Also, our frame level information is very important as we look the different roles for every player on a given snap. This lets us see how receivers create space, how the ball is being caught and then grade each player on every play. This gives us a better understanding of how each player contributes and how they can improve.”

Over time, I fully expect all teams in every league at all levels to use AI. In the near term, it will be interesting to see how fast NFL teams will adopt AI-based tools such as SumerSports. The company currently has several NFL and NCAA teams using the product and is working on more, but resistance from “old school” thinking does exist.

At the conference, Horton was on a panel with Scott Pioli, who has had front-office jobs with the Patriots, the Chiefs, the Jets, the Ravens and others. Pioli admitted that SumerSports represents a threat to many people in football and can “scare” them, as he put it, but did state that attitude must change.

He cited his own epiphany moment was when he met Washington Commands QB Jayden Daniels when he was at LSU. Daniels came out of a practice session with a virtual reality headset and Pioli explained his first thought was that Daniels needs to stop playing games and take football seriously.

Daniels explained the VR headset allowed him to experience a stadium he had never played at before to better prepare. He knew where the clocks were, he could hear fan noise and understand the distractions. Pioli used this as his “aha” moment for the advancement of technologies such as VR in football.

Long-term, there are endless possibilities for SumerSports. It can go deeper into football and expand to high school football; it can create training tools or expand to other sports. Horton said all are on the table, but right now the company is laser-focused on building the best collegiate and professional football tool.

For those following the draft, the company has created a free draft guide giving its own rankings of players. For those following the draft, it’s worth the download to get the additional insights to see what kind of player your favorite team has drafted.

The New England Patriots are among the most successful teams in NFL history. As impressive as their 11 Super Bowl appearances and six championships are, the team is just one piece of a diverse, successful organization — Kraft Group — that also owns the New England Revolution of MLS soccer, forest products, construction, real estate development, and private equity and venture investing companies. The organization also owns and operates Gillette Stadium in Foxborough, Massachusetts home of the Patriots and Revolution and, next year, one of the U.S. venues for the FIFA World Cup soccer tournament.

To support activities throughout Kraft Group, the organization is constantly upgrading and expanding its network and telecommunications systems to accommodate more than 60,000 fans at the stadium and other business needs.

The World Cup will be the ultimate litmus test for the network and technology foundation since it will bring a massive number of people to the venue and will require temporary fan activation zones outside the stadium. Michael Israel, chief information officer of the Kraft Group described hosting the World Cup as having to put on seven Super Bowls in a four-week period. The infrastructure needs to hold up to these demands and provide the necessary levels of security to protect the games, which are watched globally.

To handle that critical work, the Kraft Group this week entered into a five-year strategic partnership with Boston-based NWN, one of the largest technology solution providers in the U.S., to transform the technology framework underpinning the entire Kraft Group business portfolio.

Jim Sullivan, president and chief executive of NWN, said the company is “excited to engineer and deliver an innovative and secure IT framework that scales with the organization and supports their long-term business objectives.” Key projects will include network connectivity upgrades to support new applications, modernized cloud-based collaboration solutions, and AI-enabled applications that improve the stadium experience for fans and players.

“Gillette Stadium is used throughout the year for a variety of events, and it is key for this venue to be as accommodating for our guests as possible,” said Jim Nolan, chief operating officer of Kraft Sports + Entertainment. “Partnering with NWN ensures we have the newest technological capabilities to exceed fan expectations.” He said NWN’s experience and “ability to bring us new technologies” while supporting the current infrastructure, “is key to making our facilities a place where guests can always stay connected.”

New initiatives to enrich the fan experience

Gillette Stadium, like most modern sports venues, is a microcosm of society with technology powering every element of fan experience and back-office operations. Ticketing is mobile and soon to be done using facial recognition, there are “grab and go” food services, mobile payment options and other fan facing technologies that rely on the network, which includes more than 1,800 Extreme Networks Inc. access points on a wired Cisco Systems Inc. network. Because tailgating has become so popular at NFL games, the Wi-Fi needs to be extended to the parking lots, adding another layer of complexity.

The enhanced connectivity and AI-driven solutions will support new initiatives, such as wayfinding applications and the expansion of Gillette Stadium’s internet protocol television or IPTV network. The goal is to enable people attending games, concerts or other events in the stadium’s convention spaces to connect with the Gillette Stadium wayfinding application to find the most direct route to their seat, locate stadium amenities and services, and acquire tickets quickly and easily.

That becomes critically important for the World Cup because many of the visitors are from other countries and have never visited Gillette Stadium before. The IPTV network expansion includes an enhanced digital viewing experience and improved content delivery with interactive features designed to increase fan engagement during live events.

Israel told me the investment in new technologies and connectivity is dedicated to meeting the needs of fans who are “expecting more in their experience here. Years ago, Wi-Fi was a nice-to-have, but now their experience is one in which they’re truly locked into us. They’re engaging more and more from their device itself.”

The team offers even more engaging services to fans, from “autonomous purchasing in our food or retail locations to day-of-game activities, it just becomes one in which it’s just becoming more and more a direct relationship with the guests as opposed to a relationship with the masses,” he said.

Though Israel and team have a lot on their plate right now, there is more coming. The organization is currently looking to modernize its contact center to leverage AI to personalize customer experience. After that, World Cup mayhem starts and will consume most of the IT organization’s time. Post-FIFA, the Kraft Group will look at a holistic Wi-Fi upgrade from the existing Wi-Fi 6 network. The team is currently building a new training facility for the players that will offer a modernized experience.

After that, all the firewalls and security will be looked at and likely get and upgrade. If that weren’t enough, the Kraft Group is looking to build a new soccer stadium in Everett, which will be managed remotely from Patriot Place. Israel mentioned it’s critical to work with a partner, like NWN, what will learn their environment and invest with them.

Network-agnostic fan connection

Israel said having a reliable, high-performance Wi-Fi network throughout the stadium from ticket gates to the concession stands and souvenir shops is critical. “All of our gates are on Wi-Fi. 500 point-of-sale terminals are on Wi-Fi. We have a variety of use cases. We’re looking at installing more monitoring-type devices that can tell where our employees are on the grounds, integrating our geographic mapping software into the Wi-Fi and other devices so we can monitor parking, deliveries, and everything happening simultaneously.”

NWN CEO Sullivan said the goal is to partner with the Patriots and Kraft Group to meet the needs of fans at the stadium and the organization’s other business activities. “We’re looking holistically at what Michael is doing over the next five years, and we’re going to work with him to bring best-of-breed solutions by leveraging our experience management platform and service delivery capabilities. Whatever Michael and his team need, we can deliver a holistic solution versus swapping tech out component by component.”

Innovation-led growth at NWN

Sullivan said NWN has grown from a regional solution provider with “a couple of hundred million in sales in 2019” to “more than a billion in sales” last year. As technologies have evolved, including the fast growth of artificial intelligence, NWN has grown and expanded with them.

“We’re thinking about everything from delivering the network and infrastructure services and solutions Michael needs for the next five years, and then things that connect to that infrastructure to enable new fan experiences, new player experiences, such as an IPTV system,” explained Sullivan. On the drawing board are other new applications, such as wayfinding digital tools that help people navigate through the stadium and digital walls with entertainment and information that connect to that infrastructure.

“We’re focusing on delivering the capabilities of a next-generation infrastructure, and we’re going to partner with the Patriots for continued innovations, including agentic AI-powered solutions to do some of these other solutions Michael and his team dream up,” said Sullivan. “One of the things that really attracted us to this partnership was just the alignment of the innovation and the openness to explore, experiment, and drive new ways of creating an incredible experience for everyone.”

A recent survey from ZK Research found that 93% of IT leaders believe the network is more important to business operations than it was just two years ago. In that same time, 80% find managing the network to be more complex.

That creates an interesting juxtaposition as organizations, like Kraft Group, need more technology to move the business forward but complexity creates risk. By partnering with NWN, Israel and team can ensure they are using the latest and greatest best-of-breed technology and, as its services partner, NWN can help minimize the impact of the complexity.

digital concept art in gold