Featured
Reports
Scott Gutterman from the PGA TOUR discusses the new Studios and the impact on fan experience
Zeus Kerravala and Scott Gutterman, SVP of Digital and Broadcast Technologies, discuss the expansion of the PGA TOUR Studios from […]
Philippe Dore, CMO of BNP Paribas Tennis Tournament, talks innovation
April 2025 // Zeus Kerravala from ZK Research interviews Philippe Dore, CMO of the BNP Paribas tennis tournament. Philippe discusses […]
Nathan Howe, VP of Global Innovation at Zscaler talks mobile security
March 2025 // Zeus Kerravala from ZK Research interviews Nathan Howe, VP of Global Innovation at Zscaler, about their new […]
Check Out Our Newest Videos
2026 ZKast #43 - How Salesforce Agentforce is Reinvigorating CX with AI | Enterprise Connect 2026
2026 ZKast #42 - Tech vs. Humanity: Savannah Peterson on Her New Venture "Savvy Millennial" at CES
2026 ZKast #41 - Equinix at GTC 2026: Powering the Shift from AI Training to Edge Inferencing
March 18, 2026
Recent
ZK Research Blog
News
We’ve seen this movie before: massive capital expenditure, the promise of “revolutionary” services and the eventual, quiet realization that we’ve built a faster highway, only to struggle to persuade anyone to pay a higher toll.
The core question facing the industry at MWC26 isn’t just “What is 6G?” It’s whether we are finally moving past the era of infrastructure-for-the-sake-of-infrastructure and into an era of intelligence-for-the-sake-of-value.
The 6G evolution: More than just a speed bump
If 5G was defined by raw performance and massive connectivity, the path to 6G is fundamentally different. It is not a call to “rip and replace” the legacy we’ve spent billions building. The consensus — or at least the pragmatic view shared by industry leaders — is that 6G must be an evolution, not a reboot.
The pivot lies in moving away from viewing the network as a static pipe. Instead, the vision for 6G is an AI-native infrastructure. In this model, intelligence is not an overlay or a secondary software feature; it is woven from day one into the silicon, the Radio Access Network, or RAN, and the core.
The chronic monetization struggle
It’s no secret that the telecom industry has a “value capture” problem. When we look back at the 5G rollout, while the network performance improved significantly, the revenue models remained stubbornly tied to legacy consumption-based billing. Operators have spent years optimizing for internal efficiency — making the network “faster” and “denser” — but they have largely failed to identify and sell new, high-margin, revenue-generating services that consumers and enterprises actually recognize.
We have spent twenty years talking about “AI-powered services,” yet the examples that move the needle remain frustratingly scarce. We see occasional flashes of brilliance, such as T-Mobile’s live translation feature — an example of intelligence in the pipes — but these remain the exception. The rest of the effort has been focused on internal efficiencies, such as optimizing power usage or automating maintenance. Though these are excellent for the bottom line, they don’t generate new top-line growth.
Intel and the ecosystem: A ‘no moats’ strategy
A major factor in the industry’s slow pace of innovation has been the verticalization of solutions, which often traps operators in proprietary, walled-garden architectures. Intel Corp. is currently trying to break this cycle through a “no moats” strategy, providing an open, common platform — specifically the Xeon 6 family — that spans the entire network.
This open approach is gaining significant traction across the global telecom landscape. The list of operators actively leveraging Intel’s silicon for this transition is telling:
- One of the major U.S. operators (still under NDA) is deploying vRAN (virtualized RAN) on Intel’s platforms to drive operational efficiency.
- NTT and NTT DoCoMo are collaborating with Intel and Ericsson to modernize their networks, focusing on the integration of AI-ready infrastructure.
- Vodafone is utilizing Xeon 6 processors for ORAN and vRAN deployments, signaling a move toward more flexible, software-defined radio networks.
- Rakuten Mobile continues to be a bellwether for cloud-native, virtualization-based network deployments.
- SK Telecom is deep in the core network modernization effort, proving that even the most complex, high-traffic nodes can thrive on a virtualized Intel backbone.
These partners are performing massive, carrier-grade due diligence on total cost of ownership and power efficiency, which is becoming increasingly critical as they introduce more compute-intensive AI workloads into the network.
Why AI changes the calculus (finally)
Is this time different? It might be, provided the industry shifts its focus from peak model science to operational scalability. The breakthrough isn’t going to come from a massive, monolithic AI model that solves everything. Instead, it’s coming from the rise of small language models and specialized inferencing tasks that run at the network edge.
Carriers are now looking at models with hundreds of millions or single-digit billions of parameters — models that are small, fast and cost-effective enough to run on standard server hardware without needing a specialized, power-hungry AI farm. This is where the “right compute for the right workload” philosophy comes into play. By leveraging existing, open-platform silicon, telcos can move inferencing closer to the data.
This allows for:
- Real-time optimization: Using AI to improve channel estimation and link adaptation on the fly.
- Predictive maintenance: Moving from reactive, “break-fix” models to proactive network health management.
- Hardened security: Employing silicon-level features such as Crypto Acceleration and Trusted Domain Extensions to handle the increased threat landscape of AI-driven cyberattacks.
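To put the earlier sizing claim in perspective, here is a back-of-envelope sketch (my own illustration, not a carrier sizing tool): the memory a model's weights need is simply parameter count times bytes per parameter, which is what makes single-digit-billion-parameter models viable on standard server hardware.

```python
def model_memory_gb(params_billions: float, bytes_per_param: float) -> float:
    """Memory for a model's weights alone, in GiB.

    Ignores KV cache, activations and runtime overhead, so real
    deployments need headroom beyond this number.
    """
    return params_billions * 1e9 * bytes_per_param / 1024**3

# A 3B-parameter model -- a typical "single-digit billions" edge candidate:
print(round(model_memory_gb(3, 2), 1))  # fp16 weights: 5.6 GiB
print(round(model_memory_gb(3, 1), 1))  # int8-quantized weights: 2.8 GiB
```

Both figures fit comfortably in the RAM of a commodity Xeon-class server, which is the economic point the "right compute for the right workload" argument rests on.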
The verdict: A programmable future
6G will succeed or fail based on whether the industry can bridge the gap between “network efficiency” and “service innovation.” We are finally moving into a phase where the silicon is capable, the software frameworks — such as OpenVINO — are mature, and the infrastructure is ready to host intelligence natively.
The technology is no longer the bottleneck; the bottleneck is the business model. The operators that win in the 6G era won’t necessarily be the ones with the fastest peak speeds; they will be the ones who treat their network as a programmable, AI-native platform capable of launching new, high-value services in weeks, not years.
My message to the infrastructure providers and telcos coming out of MWC26 is to stop talking about the “future of 6G” and start proving it with the hardware and software we have today.
Enterprises are currently fighting a two-front war. On one side, there is an aggressive push toward AI adoption; on the other, an infrastructure landscape so fractured across edge, cloud and on-premises sites that scaling becomes nearly impossible.
This “complexity tax” is stalling innovation. For the modern operations team, the dream of lightning-fast artificial intelligence is being deferred by the manual labor of managing a dozen disconnected tools that provide plenty of alerts but almost no actual signal.
This week at F5 AppWorld in Las Vegas, the conversation shifted from the “what” of AI to the “how.” The message from F5? Organizations cannot secure or scale the AI era using a “Frankenstein” architecture of disconnected point products. To move forward, the industry is eyeing a massive consolidation of the networking and security stacks — a move toward what F5 calls the Application Delivery and Security Platform or ADSP.
This is something F5 has been moving toward for years. The company has long been the undisputed leader in application delivery controllers or ADCs, despite many companies both big and small taking runs at that business. Along the way, F5 has built a strong security portfolio, and the coming together of the two products, XOps and F5 Insight, resulted in the ADSP.
The three friction points holding back AI
Before organizations can move forward with autonomous AI agents, they must resolve three fundamental conflicts currently stalling adoption:
1. The signal-to-noise ratio: Modern information technology environments are saturated with data but starved for information. “They have a dozen tools, a thousand alerts and not enough signal,” Kunal Anand, chief product officer at F5, noted in a briefing with analysts. Without unified observability, identifying a bottleneck in an AI training pipeline or a security flaw in a large language model becomes a forensic exercise rather than a real-time fix.
2. The agentic security gap: As we shift from chatbots to agentic AI — where AI agents autonomously interact with APIs to execute tasks — the attack surface gets exponentially bigger. Traditional web application firewalls or WAFs were built for human-to-app interactions. They are often blind to the Model Context Protocol traffic that defines the AI-to-AI economy.
3. The looming shadow of “Q-Day”: While AI is the immediate priority, the “store now, decrypt later” threat of quantum computing is forcing a rethink of encryption. Organizations are hesitant to overhaul their entire delivery stack for AI if it isn’t also “crypto-agile” enough to survive the transition to post-quantum cryptography or PQC.
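On the second point, it helps to see what agentic traffic actually looks like on the wire. MCP messages are JSON-RPC 2.0 payloads; the sketch below builds a hypothetical `tools/call` request (the tool name and arguments are invented for illustration) to show the security-relevant detail a human-to-app WAF never parses:

```python
import json

# A hypothetical MCP "tools/call" request: JSON-RPC 2.0 carried between
# an AI agent and a tool server. A classic WAF sees only a generic POST
# body; the detail that matters for security (which tool is invoked,
# with which arguments) lives inside the JSON-RPC payload.
mcp_request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "tools/call",
    "params": {
        "name": "query_crm",                      # hypothetical tool name
        "arguments": {"account_id": "ACME-001"},  # hypothetical arguments
    },
}

wire_bytes = json.dumps(mcp_request).encode()

# An MCP-aware inspector parses the envelope back out to police it:
parsed = json.loads(wire_bytes)
assert parsed["method"] == "tools/call"
```

The policy decision has to be made on the method, the tool name and the argument values, not just on URLs and headers, which is why WAFs built for human-to-app traffic are blind here.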
F5’s strategy: Collapsing the ‘mess’ into a platform
F5’s announcements this week center on the idea that application delivery and security are no longer separate domains. During the briefing, Chief Marketing Officer John Maddison emphasized that the goal is to be a “control point” regardless of where the app lives — whether it’s on-premises, in the cloud or sitting on Nvidia DPUs.
“F5 ADSP collapses that mess into a platform,” he explained. “With F5 Insight, we turn scattered telemetry into a clear story and the next best action. Then we extend that foundation for agentic AI workloads and future-focused cryptography, because the infrastructure is changing, ready or not.”
Key evolutions in the ADSP stack
To address these hurdles, F5 unveiled several major enhancements to its platform:
- F5 Insight for ADSP: This is the “brain” of the operation. It leverages OpenTelemetry to provide unified visibility across hybrid and multicloud settings. Crucially, it uses AI-driven proactive guidance to generate “operational narratives,” allowing teams to prioritize vulnerabilities through natural language rather than digging through logs.
- BIG-IP v21.1: A significant update for F5’s flagship software, introducing NIST-compliant PQC ciphers to protect against future quantum threats. It also adds Dynamic Client Registration to empower agentic AI with secure, automated resource access. The AI-WAF has been deeply integrated to secure the specialized traffic used by LLMs.
- AI Remediate: Bridging the gap between “red teaming” (finding holes) and “guardrails” (blocking them), this new tool automates the creation of security policies to protect AI models in production.
- NGINX Agentic Observability: By inspecting MCP metadata directly in the traffic path, NGINX now provides visibility into “shadow AI” activity — AI agents interacting with services without explicit IT oversight.
The transformative value: Beyond the perimeter
The shift toward a unified platform isn’t just about technical elegance; it’s about business velocity. Research from IDC suggests that by integrating these layers, organizations can “optimize operations, strengthen security and scale across their environments” more effectively than with siloed tools.
For the C-suite, the value proposition is clear: Convergence equals lower risk. By replacing dozens of disparate SKUs with simplified, value-driven bundles — as seen in F5’s new Distributed Cloud Services packaging — companies can reduce tool sprawl and proxy overload.
As Maddison noted during the briefing, the market for cybersecurity is expanding by roughly $3 billion thanks to AI-enabled applications. However, capturing that value requires an infrastructure that is “AI-aware.” Whether it’s ensuring session persistence for AI workloads or providing air-gapped API security for highly regulated industries, the platform approach is becoming the only viable way to manage the “geo-repatriation” of data and the rise of the agentic economy.
The bottom line
The “Age of AI” is quickly becoming the “Age of Complexity.” The winners won’t just be the companies with the best models, but those with the most resilient, observable and converged delivery platforms. As F5 moves to make its entire stack — from BIG-IP to NGINX to Distributed Cloud — into a singular, intelligent fabric, it is allowing its customers to simplify an increasingly complex environment and build security for the AI-first future.
For years, the industry conversation around stadium technology has been stuck on a single, albeit important, metric: How many thousands of fans can simultaneously post a selfie to Instagram?
Though the “connected stadium” was once a differentiator, it has rapidly become a baseline requirement. I recently talked to the leadership at Ruckus Networks and the Los Angeles Football Club about the recent deployment of Wi-Fi 7 at BMO Stadium, and one of the big takeaways is that the narrative around high-density Wi-Fi has shifted.
The measuring stick is no longer “more bars” or faster social media uploads. Instead, the Wi-Fi network is shifting from an on-ramp for a connected spectator experience to a highly deterministic, operationally intelligent digital ecosystem.
The catalyst for this shift is the arrival of Wi-Fi 7, which is more than just a speed upgrade from older generations. While the industry has been busy debating the merits of private cellular versus Wi-Fi, the reality is that Wi-Fi 7, with its ultra-wide channels, multi-link operation and improved reliability, is turning the stadium into a high-performance lab for innovation.
The determinism factor: Moving beyond ‘best effort’
The biggest limitation of previous Wi-Fi generations in high-density environments was the “best effort” nature of the connection. In a stadium filled with 22,000 shouting fans — all armed with mobile devices — the sheer noise floor could lead to latency spikes and dropped packets. For a fan trying to check a score, this is an annoyance. For a stadium operator relying on that network to process a payment, verify a ticket or scan a biometric identity, it is a business risk.
Wi-Fi 7 changes the equation. By introducing features like preamble puncturing, which allows the network to “ignore” or “carve out” interference in a channel rather than abandoning the entire spectrum, stadiums can now achieve a level of determinism that previously required expensive, dedicated private cellular infrastructure.
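A simplified sketch of that idea (my own illustration; the actual 802.11be puncturing rules are more constrained) shows the arithmetic: instead of abandoning a wide channel, the radio subtracts only the interfered 20 MHz slices.

```python
def usable_bandwidth_mhz(channel_mhz: int, punctured_slices: int,
                         slice_mhz: int = 20) -> int:
    """Bandwidth left after puncturing interfered 20 MHz slices
    out of a wide channel, rather than abandoning the channel."""
    return channel_mhz - punctured_slices * slice_mhz

# A 320 MHz Wi-Fi 7 channel with one noisy 20 MHz slice carved out:
print(usable_bandwidth_mhz(320, 1))  # 300 -- most of the channel survives
```

Without puncturing, the fallback is closer to the widest clean contiguous channel, which could mean dropping from 320 MHz to 160 MHz or less because of a single noisy slice.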
As LAFC Chief Technology Officer Christian Lau noted in our recent discussion, the network has evolved from a utility to a mission-critical asset. “Selecting Ruckus to build the first Wi-Fi 7 network in MLS was a strategic decision to extend our leadership on and off the pitch,” he said. “This network is the backbone for our entire digital ecosystem, ensuring seamless experiences from mobile ticketing and concessions to immersive fan engagement for every one of our 22,000 guests.” With a mission-critical system, such as biometric access control or automated retail, you can’t afford “best effort.” Guaranteed latency is mandatory.
New use cases: The ‘store-in-a-box’ reality
So, what does this new level of connectivity look like in practice? It is enabling a new generation of operational flexibility that wasn’t possible even three years ago.
- Autonomous retail: There has been a sharp rise in grab-and-go retail environments, such as Amazon’s “Just Walk Out.” Previously, these setups required permanent, hard-wired infrastructure. With the throughput of Wi-Fi 7, venues can deploy “store-in-a-box” concepts. A stadium can spin up a temporary merchandise stand in an ancillary concourse, connect it to the network wirelessly and be fully operational in hours rather than weeks.
- Enhanced biometric security: As stadiums move toward biometric-led entry, the stakes for connectivity rise. Systems such as Zonar, which utilize radar to detect potential threats, require robust, low-latency connectivity to function in real time. Moving these safety systems onto a secure, enterprise-grade Wi-Fi 7 network provides the agility to reconfigure entry points and security perimeters based on the specific needs of an event.
- The content factory: Sports organizations are increasingly media companies. They need to ingest, edit and distribute high-definition content from the pitch to their servers in real time. By employing Wi-Fi 7, broadcast crews can dump massive RAW files directly from the sideline without physical cabling, enabling a faster turnaround for social media engagement and marketing.
The future-proof foundation
A common point of contention is whether private cellular will displace Wi-Fi in sports venues. Though there was serious debate among venue operators a few years ago, reality has set in, and the future of the stadium is a converged, multi-access environment.
Private cellular remains ideally suited for broad coverage and high-mobility use cases — such as tracking assets moving across a vast parking lot. However, for the high-density environment of the seating bowl, Wi-Fi 7 is the superior economic and technical choice. Furthermore, the ability to offload data traffic from expensive cellular networks onto the stadium’s Wi-Fi is a major revenue opportunity that venue operators are only just beginning to monetize.
As Bart Giordano, senior vice president and president of Ruckus Networks, emphasized, “This installation isn’t just about faster Wi-Fi; it’s about providing a reliable, enterprise-grade digital foundation that LAFC can build upon for years to come — powering new applications and revenue opportunities that engage a new generation of fans.”
The bottom line
The most forward-thinking chief information officers in the industry, such as those at LAFC, have stopped thinking of the network as a “utility” and started thinking of it as an “asset.” When a stadium can dynamically reconfigure its connectivity based on the event, it isn’t just saving on cabling costs; it is opening up new revenue streams and operational efficiencies.
The transition to Wi-Fi 7 is about much more than speed tests or bragging rights. It’s about building a digital foundation that is flexible, reliable and intelligent enough to support the next decade of fan engagement. The stadiums that embrace this shift won’t just be “connected” — they will be future-proofed.
The communications industry is filled with vendors that once dominated one aspect of the “stack” but have since focused on building a unified platform that includes voice, video, meetings, contact center and much more. But the most interesting vendor in the market is Zoom Communications Inc.
Zoom has not only broadened the scope of what one would expect from a communications provider but expanded outside the footprint of the traditional unified-communications-as-a-service/contact-center-as-a-service market. In addition to the core communication capabilities, it has added e-mail, document sharing, front-line worker tools, small business apps and more.
This makes Zoom the company with the broadest set of work functions integrated into one back-end stack. Microsoft Corp. and Google LLC have similar products, but they were built when siloed applications were the norm. Historically, silos of apps were never ideal, as workers wound up being the integration point between them, but people managed to do their jobs, albeit with a heavy “toggle tax.” That tax is why users are constantly having to copy and paste information between the core work apps they use.
It’s through this lens that Zoom has been building its platform. Today at Enterprise Connect, Zoom made a series of announcements that show the fruits of these efforts as it transitions from “video company” to work orchestration platform. It is positioning itself as an artificial intelligence-first, agentic platform designed to move conversations to completion by leveraging its back-end platform.
The shift to agentic intelligence
In its analyst pre-briefing, the company focused heavily on its evolution into a system of action. This isn’t just about summarizing a meeting, but rather about “knowledge creation” — taking the output of a conversation and other forms of work and feeding it into downstream workflows.
The highlights of the announcements center on:
- My Notes: Work isn’t limited to meetings. People are on the go, run into co-workers at lunch, in the hallway and other places. When we see them, we take notes, but these are often disconnected bullet points. With My Notes, workers can type quick bullets and then AI Companion will expand on these using transcripts and build summaries, action items and next steps.
- Zoom Virtual Agent (ZVA) 3.0: This is a significant step forward in customer experience. By employing a new execution architecture, ZVA can handle complex, multi-step customer interactions that previously required a human agent to intervene. One big milestone for ZVA 3.0 is the integration with Workvivo, Zoom’s tool for front-line workers, who make up about 80% of the overall workforce. This is an audience that has had to rely on consumer tools to do their jobs, but Zoom is bringing ZVA to them through their mobile apps.
- Intelligent Retrieval and Third-Party Integration: Zoom is making it easier for AI Companion to “know” your data. By connecting to 10-plus major enterprise apps — such as Salesforce, Slack, ServiceNow and Jira — the AI has access to the data needed to surface answers without the user needing to switch between tabs or hunt for files. Toggle tax kills productivity, but almost all workflows require data from multiple applications. Zoom is removing the typical human integration required to share data.
- AI Docs, AI Slides and AI Sheets: Zoom launched Zoom Docs as a collaborative document tool, but customers that use Zoom aren’t just writing; they’re having conversations with the goal of creating finished deliverables, and they want to use AI to do it. To meet this new way of working, Zoom announced AI Docs, AI Sheets and AI Slides. These aren’t legacy productivity apps with AI bolted on, but rather AI canvases that understand the context of work, meetings, email and chats, as well as the outcomes the team is looking to achieve, and then leverage AI to create the rest of the deliverables, such as a spreadsheet or presentation.
The wrapper for these applications is obviously agentic AI. By automating end-to-end tasks, Zoom is expanding its agentic capabilities to reduce all the busy work that gets in the way of doing and finishing work.
Why this matters for customers
The biggest friction point in the modern enterprise isn’t the lack of tools; it’s the fragmentation of those tools. Customers are suffering from “context switching” fatigue. In fact, my research shows that workers spend 40% of their time managing work instead of doing their jobs.
Zoom’s focus on “Conversation to Completion” directly addresses this. By embedding AI Companion into Workvivo, and allowing it to pull from third-party tools, Zoom is essentially trying to become the “connective tissue” of work, solving the problems I highlighted.
For a business, this means a meeting isn’t just an hour spent talking, only to have 90% of the information lost once attendees get back to their other tasks. The meeting and any other interaction become the launchpad for automated follow-ups, document creation and customer relationship management updates.
I’ve used the analogy that high-level executives have chiefs of staff to connect the dots between the work they do, but the other 95% of the workforce doesn’t. With the scope of work that Zoom has access to, it should be able to use agentic AI to deliver that kind of capability better than its peers. The proof will come through customer wins and use cases, but the strategy is sound. As Zoom has entered some of these nontraditional communications markets, many industry watchers have criticized it for trying to compete in areas such as e-mail and docs, but doing so gives Zoom access to data it would not otherwise have had.
Furthermore, its focus on “verticalization” — with specialized integrations for healthcare (Epic), financial services and retail — shows that Zoom understands that a “one-size-fits-all” AI isn’t good enough for enterprise-grade deployments.
Long-term, products such as Zoom’s AI agents will need to interface with agents from other companies. Though Zoom can address a wide range of end-to-end processes, it can’t do them all, and that’s where Agent-to-Agent or A2A and Model Context Protocol or MCP will become important.
Zoom mentioned on the call something I have been hearing elsewhere: Though MCP and A2A make great media headlines, the reality is that demand for multi-agent systems is still nascent. Though not part of the Enterprise Connect payload, Zoom did confirm it’s experimenting with them. This shows a grounded, realistic approach to product development: It’s building the capability, but it isn’t forcing the market before it’s ready, which is typical Zoom.
Licensing the next frontier
As Zoom continues to broaden its scope of work, it will also need to evolve how customers pay for the product. One of the great things about Zoom is that buying the product has been simple, with just a few bundles. However, with AI coming, Zoom is looking at a mesh of possibilities, with a core license and then several add-ons. Zoom came to prominence during the pandemic because “Meet Happy” included “purchase simply.”
It’s worth noting that this problem of licensing complexity is not unique to Zoom but something the industry is grappling with. Customers don’t want to manage a sprawling inventory of bundles for workplace, contact center and various AI builders. There’s a valid concern about whether Zoom’s packaging can keep pace with its rapid innovation and that was brought up in the analyst Q&A.
Zoom’s response was promising but a work in progress. It’s moving toward:
- Segmentation-based packaging: Tailoring toolsets for small and medium-sized businesses versus enterprise needs.
- Standalone SKUs: By launching a standalone “Custom AI Companion” SKU at the end of this month, Zoom is providing flexibility for users who want the power of their agentic AI without necessarily being locked into the full suite.
The bottom line
The “meeting app” label is officially dead, or at least it should be. Zoom is now an AI-orchestration platform, and arguably the broadest one in the industry. The next phase for the company will be execution — simplifying licensing and proving that its agentic workflows can deliver consistent return on investment across different verticals.
The move into productivity apps is the one I find the most compelling. When Zoom first launched Docs, people questioned why we needed another document platform and gave Zoom zero chance of success. The fundamental thesis of my research has always been that share shifts happen when markets transition and AI is causing people to work differently, which gives Zoom an opening.
Canva has disrupted Adobe in the creative market, and generative AI has moved eyeballs away from the once untouchable Google Search business. Decades ago, people said, “Who needs Microsoft Office?” because we had Lotus 1-2-3. The Office productivity suite is a de facto standard, but the user experience has always been subpar, so Zoom does have an opportunity to extend its expertise in “ease of use” to this space.
To be clear, Zoom’s ability to “win” is not tied to building a better document, e-mail or chat tool, but rather to the value users get when they go “all in” on Zoom. The company needs to demonstrate that, as workers add Zoom components, their jobs become exponentially simpler. That will get more users exclaiming, “I use Zoom!” as they do with communications today.
Zoom continues to march to the beat of its own drum by focusing on areas that are not typical for communications. Zoom came to market by focusing on ease of use, which, for whatever reason, was not a focus for this industry. The company isn’t just trying to enable us to communicate better; it’s addressing the larger scope of work, which has been broken for a long time. A bigger dose of the nontypical might be exactly what’s needed to move us into the agentic era.
Hewlett Packard Enterprise Co. announced new networking, compute hardware, cloud operations software and financing updates for service providers at this past week’s MWC26 in Barcelona. The updates center on meeting the new demands created as artificial intelligence reshapes every aspect of network design — from centralized data centers to distributed edge environments.
AI adoption is driving more traffic into AI data centers, accelerating investment by hyperscalers and neocloud providers. In those environments, ultra-low latency and high reliability are no longer optional. They are basic requirements for delivering AI as a service.
Traditional network traffic is still growing, but it’s quickly being dwarfed by AI traffic. That traffic behaves differently, and it’s far more sensitive to delays. The existing networking oversubscription model cannot handle today’s AI requirements.
AI creates new networking demands
To address these changes, HPE is expanding its Juniper-based PTX routing portfolio. The updates include new PTX12000 modular routers, which support dense 800G connectivity initially and can scale to 1.6 terabits per second without major redesigns. Additionally, HPE introduced a new line of PTX10002 fixed-form routers — a smaller, more efficient option for building AI networks and connecting data centers. The PTX Series routers run on Juniper Express 5 silicon, with an emphasis on throughput, deep buffering and power efficiency.
Routing is what Juniper has always done best, and at MWC I had a chance to meet with HPE Executive Vice President AE Natarajan about the new products. He told me customers are building out larger and larger graphics processing unit clusters, which are geographically distributed and need to be connected, making the network central to the growth of AI. “The appetite for the PTX12000 platform is very strong right now,” he said. “Some telcos are making big leaps into building out AI networking fabrics, inference edges or sovereign clouds.”
Custom silicon creates HPE differentiation
Though there are many scale-up, scale-across and scale-out products available, these are all powered by the Express 5 silicon, which came to HPE via its acquisition of Juniper Networks. The new application-specific integrated circuit delivers roughly 49% more power efficiency than the previous generation, with PTX10002 systems achieving up to a 54% improvement over earlier platforms. The silicon was built with AI in mind and includes features such as inline MACsec (Media Access Control Security) for integrated security.
Another aspect of Express 5 is its load balancing and quality-of-service algorithms. With AI networking, it’s not enough just to be fast; the network needs to know how to handle congestion, and that’s something merchant silicon doesn’t handle well. “We built load balancing capabilities with least amount of switch over drops making it ideal for AI,” Natarajan explained. “By not dropping packets, the GPUs are never having to wait for the network to catch up.”
HPE is also updating Juniper Routing Director to be agentic AI-ready. The software platform provides end-to-end transport and wide-area network automation for service providers and large enterprises. Many large operators are building their own AI copilots and customized models rather than relying on vendor-provided assistants. With this update, Juniper Routing Director can integrate with customer-built AI copilots, allowing operators to automate network operations and speed up troubleshooting.
ProLiant gets an upgrade
On the compute side, HPE is introducing new ProLiant platforms aimed at service providers, including the Compute EL9000 chassis and EL140 Gen12 servers. These systems are designed to handle higher network traffic density for AI and 5G workloads. HPE is also integrating Juniper’s cloud-native routing software directly into select ProLiant servers, combining routing and compute into a single system for radio access network deployments. This includes the 1U HPE ProLiant Compute DL110 and the new 2U HPE ProLiant EL140 Gen12 servers.
Telco RANs are now software-based and do not need to have a separate compute server and routing platform. These layers have collapsed, giving HPE the opportunity to bring its strength in compute and networking together.
Additionally, HPE is expanding its CloudOps Software as a unified control plane for managing virtualization, containers, observability, automation and operations across multicloud and multivendor environments. The idea is to make complex cloud environments easier and cheaper for service providers to run by managing everything through a single platform instead of multiple disconnected tools.
To support adoption, HPE Financial Services is launching a new 90/9 Advantage financing program, which offers deferred payments followed by low monthly lease options. The program covers HPE’s portfolio across networking, compute, storage and software.
Final thoughts
As an industry watcher, I was curious as to how quickly or slowly HPE and Juniper would come together. When HPE acquired Aruba Networks years ago, it left that business unit alone so as not to disrupt it, and I thought HPE might take a similar approach with Juniper. That does not seem to be the case: the joint company has built an aggressive roadmap of products that brings the best of HPE and Juniper together, with former Juniper CEO Rami Rahim running the entire networking business.
In my discussions with HPE management, though the company is moving fast, it is also acutely aware of the product loyalty that HPE and Juniper have earned, and that’s part of the design principles: any new products do not require a “rip and replace.” Both sets of customers can benefit from cross-engineering but should never have to do anything that disrupts their businesses.
We are squarely in the artificial intelligence event season with MWC just wrapping up and Nvidia GTC and RSAC on deck. The talk of every show this year has been about moving AI from vision to reality. However, it’s often the case that the transition from AI experimentation to production-grade, value-generating systems hits a wall because of infrastructure availability and readiness.
While the industry has been fixated on the AI model arms race, enterprise AI teams are finding that their biggest constraint isn’t the quality of their algorithms — it’s the bottleneck of accessing graphics processing unit capacity. A new report from neocloud provider QumulusAI and HyperFRAME Research, titled “The Hyperspeed Compute Era: Reclaiming AI Velocity for Enterprise Teams,” confirms a sentiment many chief information officers have told me: Legacy cloud infrastructure was designed for information scale — transactions, web traffic, storage — not the intelligence scale required by modern AI.
For organizations trying to move beyond the pilot phase, this infrastructure gap is becoming a chasm that seemingly continues to grow.
The velocity gap: Why ‘good enough’ isn’t working
Most of today’s enterprise cloud platforms were built with rigid capacity models and long-lead-time procurement cycles. The world of generative AI is far more fluid and less predictable, and agility matters far more. With traditional compute provisioning, development teams face a “stop-and-start” lifecycle: they request compute, wait for allocation, run a workload and then repeat. When provisioning takes weeks rather than hours, the agility required for iterative AI development — fine-tuning, testing and rapid refinement — is lost.
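To make the cost of that stop-and-start lifecycle concrete, here is a small illustrative calculation; the wait times, run length and working-hours assumptions are mine, not the report's:

```python
# Illustrative only: how provisioning lead time caps the number of
# fine-tune/test/refine cycles a team can complete in a quarter.

def iterations_per_quarter(provision_wait_hrs: float, run_hrs: float = 8.0,
                           quarter_hrs: float = 13 * 5 * 8) -> int:
    """Each cycle = wait for GPU allocation + run the workload."""
    return int(quarter_hrs // (provision_wait_hrs + run_hrs))

# Weeks-long provisioning (~2 working weeks, 80 hrs) vs. same-day capacity
slow = iterations_per_quarter(provision_wait_hrs=80)  # legacy cloud queue
fast = iterations_per_quarter(provision_wait_hrs=2)   # on-demand GPU capacity
```

Under these assumptions the team gets 5 iterations a quarter instead of 52 — the "velocity gap" is an order of magnitude, not a rounding error.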
The QumulusAI report highlights that we are entering a “flight to efficiency” phase. Enterprises are moving away from monolithic, “bet-the-company” model builds and toward smaller, domain-specific models that require faster iteration cycles. If your infrastructure forces you into a “wait-and-see” approach, you are effectively handicapping your ability to ship.
The FACTS framework: An AI measuring stick
To help organizations evaluate their AI readiness, QumulusAI has introduced the FACTS framework — a set of principles designed to address the specific friction points of modern AI infrastructure:
- Flexibility: Moving beyond one-size-fits-all cloud instances. The modern stack must allow teams to scale seamlessly from fractional GPUs for rapid prototyping to dedicated bare-metal clusters for production training.
- Access: Distributed GPU capacity is critical. Teams should not have to wait for availability in a specific region or compete for scarce resources in a monolithic cloud provider.
- Cost: “Cloud sprawl” in AI often hides behind egress, storage and premium support fees. Predictable, transparent pricing is no longer a luxury; it’s a prerequisite for long-term capacity planning.
- Trust: In an era of AI volatility, enterprises need a partner, not a transaction. This means focusing on long-term capacity assurance and security-first, distributed architecture.
- Speed: The defining metric of the current era. Provisioning must happen in hours, not weeks. Without it, the “fail-fast” development methodology that powers AI innovation is impossible to sustain.
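As a rough illustration of how a team might apply the framework, here is a hypothetical self-assessment sketch; the five criteria come from the report, but the 1-to-5 scoring scale and the below-3 "gap" threshold are my own invention:

```python
# Hypothetical FACTS self-assessment; scoring scale and threshold are
# illustrative, not defined by QumulusAI.

FACTS = ("Flexibility", "Access", "Cost", "Trust", "Speed")

def facts_readiness(scores: dict[str, int]) -> tuple[float, list[str]]:
    """Return the average score and any criteria scoring below 3 (friction points)."""
    missing = [c for c in FACTS if c not in scores]
    if missing:
        raise ValueError(f"score every criterion: {missing}")
    avg = sum(scores[c] for c in FACTS) / len(FACTS)
    gaps = [c for c in FACTS if scores[c] < 3]
    return avg, gaps

avg, gaps = facts_readiness(
    {"Flexibility": 4, "Access": 2, "Cost": 3, "Trust": 4, "Speed": 2}
)
# In this made-up example, Access and Speed surface as the friction points.
```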
The emergence of ‘hyperspeed compute’
QumulusAI’s response to these challenges is what it defines as “hyperspeed compute” — a model that acknowledges that the future of enterprise AI will be a hybrid one. Hyperscalers remain vital for global reach and integrated software ecosystems. However, the most successful enterprises are learning to augment those platforms with specialized AI infrastructure providers. By offloading specific, high-velocity AI workloads — such as training and model fine-tuning — to infrastructure purpose-built for them, organizations can bypass the latency of traditional cloud provisioning.
Why this matters for the industry
The “infrastructure velocity gap” is real. If the barrier to entry for a new AI feature is a three-week wait for GPU cycles, the cost of innovation becomes too high. For the industry, this signals a shift in how customers choose and procure AI resources. The “AI-mature” organizations will be those that view infrastructure as a strategic asset rather than a commodity.
AI isn’t the same as general computing, and “good enough” is no longer good enough. Companies leading the way with AI will be the ones that refuse to let their developers sit idle while procurement catches up. By embracing distributed, purpose-built AI infrastructure, they are ensuring that their time to insight is as fast as the algorithms themselves.
The bottom line
The era of “AI experimentation” is rapidly drawing to a close, replaced by the demand for AI outcomes with measurable return on investment. If your infrastructure is optimized for 2015-era web traffic, it will struggle to support 2026-era intelligence.
The QumulusAI report highlighted something that all information technology leaders should keep in mind: Infrastructure choice is now a strategic differentiator. Companies that continue to treat AI compute as a standard cloud service will find themselves outpaced by those that can scale, iterate and deploy at speed.
As the industry continues to navigate this shift, CIOs and chief technology officers should look to the FACTS framework to ensure their infrastructure is built for the velocity of the intelligence era, not just the information age. In my experience, maturity models and frameworks such as FACTS do a good job of giving organizations a reality check. Generally, organizations overestimate their capabilities, and a third-party tool can help level-set where a company is today and provide a roadmap for moving up the maturity curve.
Over the past 18 months, the enterprise technology narrative has been dominated by a singular, persistent theme: artificial intelligence, more specifically agentic AI. From CES to NRF to the World Economic Forum, every vendor, service provider and analyst firm has been preaching the gospel of AI.
Yet if we pull back the curtain on the actual state of AI agent deployments, a different story emerges. Though the vision and ambition are there, the execution is lagging.
The data supports this observation. Despite nearly 80% of organizations experimenting with agentic AI in the last year, a significant portion of these projects remains indefinitely stalled in the pilot stage. Companies are pouring capital into AI, but they are struggling to bridge the “AI execution gap” — that is, moving from a successful proof-of-concept to a production environment that results in a positive return on investment.
This is the goal of Dialpad Inc. On Tuesday, ahead of next week’s Enterprise Connect event, it announced the next iteration of its Agentic AI Platform. What’s notable about the announcement is that, rather than just adding “more AI” to its stack, Dialpad is focusing on outcomes: identifying the right use cases for AI agents, validating ROI and enforcing the kind of governance that makes enterprise-wide adoption possible.
The evolution from generative to agentic AI
To understand why this announcement is meaningful, we must first recognize the shift in the market. The industry has moved beyond the “wow factor” of generative AI — the chatbots that take notes and summarize meeting transcripts. Enterprises today are looking for agentic AI: systems that don’t just talk but act. They want machines that can resolve customer issues, update CRM records and navigate complex workflows without human intervention.
Doing this can remove many of the mundane and tedious tasks that prevent human workers from being more productive. One of the interesting data points from my research is that workers currently spend over 40% of their time managing work rather than doing it. AI agents can remove most or all of the time spent toggling among apps, taking notes and sending reminder emails.
However, moving from a passive AI assistant to an autonomous AI agent is a massive leap in complexity. If a chatbot makes a mistake, a customer gets incorrect information; if an agent makes a mistake, an entire process can run incorrectly, harming both the customer and the business.
This is where many of the current “pilot-stuck” projects fall apart. They lack the guardrails and the clear business logic required for a mission-critical environment such as a contact center or a customer-facing workflow.
Dialpad’s new AI Agent platform, by focusing on “from insight to production,” tackles these friction points.
The four pillars of production-ready AI
The business value of this update lies in four distinct functional areas that address the specific roadblocks enterprises face when trying to adopt agentic AI:
- Skill Mining (the strategy): One of the biggest mistakes companies make is trying to automate everything. By analyzing historical conversation data, Dialpad’s Skill Mining allows enterprises to identify specific friction points — customer issues that happen repeatedly — and prioritize those for AI agent intervention. It replaces the guesswork with data-driven strategy.
- Proving Ground (the validation): This is perhaps the most critical addition. Before deploying an AI agent, how do you know it will work as intended? Proving Ground allows organizations to test AI agent performance and model ROI before going live. This is the “de-risking” that chief financial officers and business leaders have been asking for. It allows businesses to see if an AI agent will drive down average handle time or improve customer satisfaction scores before it ever interacts with a real customer.
- Agent Studio (the empowerment): The bottleneck for most agentic AI projects is the reliance on highly specialized, expensive developer resources. With the no-code Agent Studio, Dialpad is democratizing the creation of AI agents. By providing a conversational interface with an ecosystem of connectors, it is allowing subject matter experts — the people who understand the business processes — to build the AI agents themselves.
- Guardian (the governance): In the enterprise, compliance is non-negotiable. Guardian acts as a real-time safety supervisor, monitoring AI agent interactions to reduce data exposure risk. This isn’t just a “nice-to-have” feature; it is an essential piece of infrastructure that allows information technology and security teams to sleep at night while the business moves forward with innovation.
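To illustrate the kind of upfront ROI modeling described above (in the spirit of Proving Ground, though not Dialpad's actual methodology), a back-of-the-envelope containment model might look like this; all volumes, rates and costs below are invented for illustration:

```python
# Hypothetical pre-deployment ROI model for an AI agent; none of these
# figures come from Dialpad.

def projected_monthly_savings(calls_per_month: int,
                              avg_handle_min: float,
                              containment_rate: float,
                              cost_per_agent_min: float) -> float:
    """Savings from calls the AI agent resolves without a human ("contained")."""
    automated_minutes = calls_per_month * containment_rate * avg_handle_min
    return automated_minutes * cost_per_agent_min

savings = projected_monthly_savings(
    calls_per_month=50_000,
    avg_handle_min=6.0,
    containment_rate=0.30,      # share of calls fully handled by the agent
    cost_per_agent_min=0.85,    # loaded labor cost per handled minute
)
```

Running the same model against historical conversation data, before the agent ever touches a live customer, is exactly the "de-risking" step the article describes.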
A win for Dialpad customers
For Dialpad, this move is a logical extension of its history as an AI-first company. Last week I spoke to Chief Executive Craig Walker about this, and he said the company is not trying to bolt AI onto a legacy system but rather to build a solution with AI as the foundation. The goal is to build on its strengths in unified communications as a service and contact center as a service and become a core layer of the modern enterprise AI stack for customer interaction.
The company is a smaller player in the world of customer experience, but these market transitions always create opportunities to disrupt the incumbents. The agentic AI pivot should open the door to new buyers that don’t have a historical allegiance to some of the bigger vendors.
For Dialpad customers, the benefits are more tangible. First, it shortens the time-to-value. By providing the tools to identify the right use cases for AI agents and prove ROI upfront, Dialpad helps companies avoid the “pilot purgatory” that kills so many digital transformation efforts.
Second, it solves the trust gap. Many enterprises are terrified of AI hallucinations or unpredictable AI agent behavior. By embedding governance into the lifecycle via Guardian, Dialpad is providing a framework where speed and confidence can coexist. You don’t have to sacrifice safety to innovate quickly.
Post announcement, I asked Joe Rittenhouse, co-CEO of Converged Technology Professionals, one of the communications industry’s premier services companies and one of four partners that evaluated the beta, for his thoughts. “There are a lot of agentic solutions available today, but this was one of the more complete ones and addresses end to end CX workflows,” he said. “It has an intuitive interface that addresses everything from scheduling to analytics as well as a broad selection of marketplace apps to connect to.”
The path forward
The AI agent race is no longer about who can generate the most text or who has the flashiest demo. It is about who can deliver actual business impact. The era of agentic AI experimentation is drawing to a close, and the era of agentic AI operationalization is beginning.
With these new capabilities, Dialpad is effectively telling its customers: “Stop experimenting and start executing.” By providing the tools for planning, testing, building and governing AI agents, they are providing a roadmap to move from vision to production.
One topic that comes up every year at MWC, the telecommunications industry’s largest and de facto standard event, is modernizing telco networks to enable them to create new, revenue-generating services. However, again at this year’s conference in Barcelona, revenue growth for service providers remains elusive, despite the industry having spent billions on building out its 5G infrastructure and introducing many digital experience products. Despite those investments, annual revenue growth is expected to diminish to less than 3% by 2029, according to a PwC forecast.
Last week, ahead of the MWC mayhem, Salesforce Inc., which grew into an industry giant by providing technology to help companies grow their revenue, launched a new product — Agentforce for Communications — that aims to do the same for telcos. It features five new prebuilt agents to help telco teams reclaim their time and capitalize on opportunities to create new revenue streams that didn’t exist before.
“We’re embedding agentic AI or Agentforce into almost all of our products. This is the big change that we’ve seen in the past year and a half,” Meredith Alexander, who leads industry marketing for the company, said in a briefing for industry analysts. “We’ve fully pivoted the company and are embracing Agentforce across every part of our business. By integrating Agentforce into every element of our platform, employees gain access to the benefits of agentic AI in their workflow. Agentforce also uses our industry data model and metadata layer, so it has the context to understand our customers’ businesses and industry.”
Focus on telecommunications companies
Alexander said Salesforce designed Agentforce for Communications to help telecom customers reduce their operating costs while still accelerating growth. “We believe a key to breaking through the headwinds and achieving unconstrained growth is agentic AI, because it empowers our customers to supercharge their workforce,” Alexander said. “Instead of workers slogging through manual tasks like writing case reports or searching for customer information or responding to repetitive questions, agents can tackle these tasks for them, freeing the human workers to focus on higher-value activities like building customer relationships or resolving complex issues.”
Benefits for Salesforce’s telecommunications customers include accelerating revenue and reducing costs, so they can “optimize marketing with hyper-personalized engagement. They can increase sales by shortening deal cycles and improve order delivery by reducing fallout. And this is all while they’re able to reduce their cost to serve, but also boost their customer satisfaction with consistent and seamless experiences across every point of interaction,” she said.
Natively built on the Agentforce 360 platform, Agentforce for Communications pulls live data from customer relationship management, operations support systems and business support systems, enabling agents to “take trusted action instantly, respond in natural language, and leverage deep customer context to find immediate solutions and opportunities to drive growth.”
The company says Agentforce for Communications will enable “a self-healing network where issues are resolved before they’re even noticed, billing is transparent, service is efficient, and human representatives are free to go above and beyond to deliver world-class service.”
Agentforce replaces manual data retrieval with “real-time, actionable intelligence, enabling them to focus on complex, empathetic problem solving and win-win scenarios that drive revenue while building lasting brand loyalty.”
Five agents, five strategic levers
Salesforce has introduced five prebuilt agents that directly address the most painful areas of the telco lifecycle:
- Billing Resolution Agent: By harmonizing data across fragmented third-party systems, this agent provides deep bill analysis and autonomous resolution. Of all the capabilities AI can bring, this could be one of the most significant. Billing disputes are a primary driver of churn. Removing the “I need to talk to a supervisor” bottleneck can shift a moment of frustration into a moment of trust.
- Service Level Objective Insights Agent: This shifts the mindset from reactive support to proactive assurance. By comparing real-time network usage against service-level agreements, it allows account owners to get ahead of outages or performance dips. In the business-to-business world, this is the difference between a renewed contract and a lost client.
- Quoting Agent and Site Grouping Agent: These two are the “velocity multipliers.” Configuring quotes for multisite enterprise deals is notoriously complex, prone to technical incompatibilities and human error. Automating these with natural language inputs reduces the “fall-out rate” of orders and slashes the time spent in the middle office.
- Guided Selling Agent: This is perhaps the most intriguing from a revenue perspective. It puts the power of a sales engineer in the pocket of a field technician. When techs are onsite for a repair, they are in the ultimate “high-touch” position. Giving them the ability to generate technically valid upsell quotes in the moment turns a cost-center into a revenue-generating opportunity.
Industry leaders are already seeing the ‘AI dividend’
We are already seeing early signs of this working. Lumen Technologies is reclaiming more than 300 hours of productivity every week. One NZ has seen a fourfold increase in engagement.
When you strip away the manual overhead of data retrieval and reconciliation, telcos aren’t just saving money; they are freeing their most expensive resource, their human employees, to do what they do best: solve complex, high-value problems that require empathy and nuanced decision-making.
The bottom line: Can telcos rebrand as AI-first?
The telco industry has been notoriously slow to modernize its internal tech stack due to the sheer complexity of legacy infrastructure. Salesforce is positioning Agentforce as the “glue” that allows these disparate systems to function as a modern, agentic enterprise.
If telcos can effectively deploy these agents, they move from being “dumb pipes” providers to sophisticated digital service providers. They can resolve issues before they are noticed, offer pricing that makes sense for the customer’s usage, and — most importantly — keep their sales teams focused on the customer rather than the spreadsheet.
The revenue growth challenge isn’t insurmountable, but it requires a change in strategy. It requires moving from a world where AI is a customer-facing mask to a world where AI is an internal, operational engine. With Agentforce for Communications, Salesforce has given the industry the roadmap to get there. The question for telco chief information officers and chief marketing officers is no longer “Should we adopt agentic AI?” It is “How fast can we integrate it into our core operations before the competition makes the move for us?”
AI presents the latest and best opportunity for telcos to add new services that can change how they are perceived. They own the network, and that’s a great asset. Now they need to build on top of it. Agentic AI can change almost every aspect of telco operations, which is important for an industry that continues to see capital investments grow well ahead of revenue.
Over the past year there have been plenty of media reports discussing artificial intelligence failures and highlighting the negative aspects of it. I’m of the belief that AI will eventually be infused into every aspect of our lives and change the way we work, live and learn.
This is similar to the impact the internet had, although the scope and impact of AI will be much bigger than that technology transition. Like the internet, there will be stops and starts, failures and successes but make no mistake: AI is here to stay.
AI agents have arrived
A recent report from RingCentral Inc., 2026 Agentic AI Trends, found AI agents are indeed showing up in the workplace – not as features buried inside applications but rather as coordinated systems that help work move from one step to the next. According to the report, spoken and written interactions are becoming a key input for organizations as they implement AI agents.
A lot gets lost when conversations are boiled down to dashboards. Voice, video and chat carry context that can be missed, especially in live voice conversations where tone and intent matter. In fact, in the past I have referred to voice as “dark data,” and agentic voice AI can capture that input. By listening to live conversations, AI agents can ask questions and pass information along to other systems, so work doesn’t stall or lose context as it moves from one step to the next.
Generative AI has pushed these capabilities directly into everyday work, but most deployments are isolated. This is a good focus for RingCentral as gaining AI adoption is less about introducing new tools and more about connecting what’s already in place. This is an important industry shift as we move from AI vision to AI reality.
Customer sentiment for AI is high today
One of the more interesting data points from the survey was around adoption. Ninety-seven percent of those who participated in the report said they’re using at least one form of AI today. Unsurprisingly, generative AI is the most widely used at 77%, followed by predictive analytics (54%) and process automation (53%).
Early AI deployments tend to focus on tools that are easy to roll out. Sixty-nine percent of business decision-makers said their first initiative went live within a year. During that timeframe, 77% saw a return on investment, and 92% said they were satisfied with their AI initiatives overall. It’s interesting that this data counters the HBR report that came out last year that stated most AI projects fail. I do think the data points in the report were taken out of context, but the RingCentral study shows that AI is becoming more mature and value is being realized.
In January, I talked with Liesl Perez, co-founder and chief growth officer of Denver-based Axis Integrated Mental Health, which uses RingCentral’s AIR (AI Receptionist) product. The organization struggled to answer all of its inbound calls and was leaving money on the table. In the time Axis has been running AIR (under a year), Perez told me, it has generated $1.7 million in additional revenue, which for a small organization is very meaningful.
AI can remove day-to-day tedium
Organizations described AI’s impact as largely operational. Fifty-two percent said they’re using it to improve productivity, and 90% stated it works best when applied to specific workflows. Among organizations using or testing agentic AI, 61% reported productivity gains and 58% said workflows move faster, with additional benefits cited around customer experience, operating costs, and customer satisfaction. Perez described the operational use cases for AI as being able to “remove the tedium” from work.
Many organizations are past the early stages with AI agents. Fifty-seven percent reported moving beyond exploration, 93% said they’re familiar with AI agents, and nearly everyone (96%) agreed that AI agents will be essential to staying competitive.
Execution challenges remain
However, there are plenty of organizations that experience execution challenges, and AI projects stall once they’re in motion. Forty percent of organizations have paused or canceled at least one AI project or initiative. Integration complexity is the most common reason (46%), followed by internal resistance or misalignment (33%), unclear or inconsistent ROI (3%), and poor employee user experience (26%).
Similar challenges show up when companies try to scale AI agents. Trust in outcomes (38%) is a major barrier for organizations already using agents. Other barriers include employee resistance, data integration issues and worries about cost and compliance.
This is why orchestration is necessary. RingCentral described it as the layer that connects AI agents, people and systems across an entire workflow. Orchestration allows data to move from one step to the next. Orchestrated systems rely less on rigid inputs like forms or tickets and more on conversational input. Agents interpret conversational input, handle incomplete information and respond to exceptions. They also work together, passing context instead of duplicating effort.
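A minimal sketch of that orchestration pattern, with hypothetical agent names and fields, might look like this; each agent hands accumulated context to the next instead of re-collecting it:

```python
# Illustrative orchestration sketch; agent names and context fields are
# hypothetical, not RingCentral's implementation.

from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Context:
    """Accumulated state that travels with the workflow."""
    data: dict = field(default_factory=dict)

def intake_agent(ctx: Context) -> Context:
    # Interprets conversational input rather than a rigid form or ticket.
    ctx.data["issue"] = "billing dispute"
    ctx.data["customer_id"] = "C-1042"
    return ctx

def resolution_agent(ctx: Context) -> Context:
    # Reuses the passed-in context instead of asking the customer again.
    ctx.data["action"] = f"opened case for {ctx.data['issue']}"
    return ctx

def orchestrate(steps: list[Callable[[Context], Context]]) -> Context:
    ctx = Context()
    for step in steps:  # each agent passes context to the next
        ctx = step(ctx)
    return ctx

result = orchestrate([intake_agent, resolution_agent])
```

The point of the pattern is the handoff: no agent duplicates the intake work, and exceptions can be handled wherever they surface because the full context travels with the workflow.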
Voice rises as a preferred channel for agentic AI
Interaction preferences suggest where this is heading. RingCentral asked respondents to imagine interacting with an AI agent across customer-facing and employee contexts, both now and two years out. Forty-two percent said they prefer to interact with AI agents through chat today; in two years, that drops to 25%. Meanwhile, preference for voice rises from 14% to 23%, and video from 10% to 22%.
Respondents were also asked to give an AI agent some human traits. They prioritized correctness and clarity over emotion by ranking reliability first (28%). Creativity (24%) and common sense (16%) followed after that. Empathy, accountability, humor and patience were not as desirable.
RingCentral concluded the report with the prediction that the next phase of AI will be system-level, not tool-level. Most organizations already have the necessary building blocks, and the early value is clear. What continues to hold them back is fragmentation. According to RingCentral, the focus must shift to agents that can work together and use real conversations to understand what needs to happen across people and systems.
From a company perspective, the pivot to AI appears to be working for RingCentral. On its most recent earnings call, the company called out annual revenue run rate from customers using at least one monetized AI product as having more than doubled year over year. That’s approaching 10% of overall ARR, with new AI-led products alone reaching $100 million in ARR.
Final thoughts
Just as the internet once transitioned from an experiment to the foundational fabric of global commerce, AI is currently undergoing its own “connectivity” phase. We are moving past the era of the standalone chatbot and into the era of the orchestrated agent. The data in the report shows where the industry is today: Businesses aren’t looking for AI that can tell a joke or show empathy; they want reliability, clarity and the removal of “tedium.”
For organizations still on the sidelines, the key takeaway from the 2026 Agentic AI Trends report is urgency. AI isn’t just a feature to be added — it’s the new system-level architecture required to stay competitive.
Cisco Systems Inc. has long been regarded as the market leader in networking, but over the past few years, the company has strived to position itself as “critical infrastructure for the artificial intelligence era.” It now seems to be making headway with that as the stock hits an all-time high.
This week at Cisco Live EMEA in Amsterdam, Cisco delivered another payload of innovation aimed at helping customers move beyond the “chatbot phase” of AI and into the agentic era. Agentic AI will create a marked improvement in productivity because it goes far beyond humans asking AI questions: it enables autonomous agents to perform complex tasks, reason through workflows and interact with enterprise data at scale.
Though the vision of agentic AI paints a rosy future, it’s important to note that traditional infrastructure wasn’t built for the rigors of AI, and most companies will be facing the most significant technology refresh since the early days of the internet. Cisco has been methodically upgrading its portfolio to meet these new demands. Here are the five most significant announcements from Cisco Live EMEA:
1. Silicon One G300: Terabit switching
The lead product was the debut of the Cisco Silicon One G300, switching silicon capable of a massive 102.4 Tbps of bandwidth and optimized for scale-out networks. As AI clusters grow toward “gigawatt scale,” the network often becomes the bottleneck. The G300 tackles this with Intelligent Collective Networking, which provides 2.5 times better burst absorption than alternatives.
For AI, the ability to handle bursts of traffic is critical to ensure data is delivered to the AI systems consistently and reliably, even over long distances. Intelligent Collective Networking uses a combination of network features, including shared packet buffering, path-based load balancing and network telemetry, to improve performance.
In real-world terms, Cisco claims the new silicon can deliver 33% increased network utilization and a 28% reduction in job completion time when compared to non-optimized path selection, which would lead to more tokens generated at a lower cost. Based on my familiarity with off-the-shelf Ethernet, these claims seem reasonable, if not a bit conservative.
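Those percentages can be sanity-checked with simple arithmetic: if the same cluster finishes jobs 28% sooner, it completes roughly 1/(1 - 0.28), or about 1.39 times as many jobs in the same GPU-hours, which is where the "more tokens at lower cost" framing comes from. The numbers below are derived from Cisco's stated figures, not independently measured:

```python
# Back-of-the-envelope math on Cisco's stated 28% job-completion-time
# reduction; illustrative, not a benchmark.

jct_reduction = 0.28                              # jobs finish 28% sooner
throughput_gain = 1 / (1 - jct_reduction) - 1     # extra jobs per GPU-hour (~39%)
cost_per_token_change = (1 - jct_reduction) - 1   # same spend, more tokens (~28% lower)
```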
2. ‘AgenticOps’: The new IT operating model
Perhaps the most ambitious shift for Cisco is the expansion of AgenticOps. Cisco is moving from AI that merely observes to AI that reasons, decides, and acts. This isn’t just one tool; it’s a suite of autonomous capabilities integrated across networking, security and observability.
Key innovations include:
- Autonomous troubleshooting: End-to-end investigations that can cut Mean Time to Resolution from hours to minutes by validating multiple hypotheses simultaneously.
- Continuous optimization: Agents that autonomously tune RF, QoS and pathing to maintain user experience before a human even notices a degradation.
- Trusted validation: Risk-aware agents that assess network changes against live topology to identify potential “blast radius” issues before they cause an outage.
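The “validating multiple hypotheses simultaneously” pattern behind autonomous troubleshooting can be sketched in a few lines. This is a toy illustration, not Cisco’s implementation: the hypotheses, their results and the probe function are all invented.

```python
"""Toy sketch of parallel hypothesis validation for network troubleshooting.
All hypotheses and outcomes are invented stand-ins."""
from concurrent.futures import ThreadPoolExecutor

# Hypothetical fault hypotheses an agent might test against live telemetry.
HYPOTHESES = {
    "dns_failure": False,
    "bgp_flap": False,
    "expired_cert": True,   # the simulated root cause
    "wifi_interference": False,
}

def validate(name: str) -> tuple:
    """Stand-in for a diagnostic probe; a real agent would query telemetry."""
    return name, HYPOTHESES[name]

def find_root_causes() -> list:
    # Checking every hypothesis concurrently, instead of one by one, is
    # what collapses investigation time from hours to minutes.
    with ThreadPoolExecutor() as pool:
        results = pool.map(validate, HYPOTHESES)
    return [name for name, confirmed in results if confirmed]

print(find_root_causes())  # ['expired_cert']
```

The design choice worth noting is the fan-out/fan-in shape: independent checks run in parallel and only confirmed hypotheses survive, which is why parallel validation scales with the number of hypotheses rather than their sum.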
The concept of a “self-driving network” is something the industry has bandied about for years, but historically IT pros have been cool to the idea. Over the past year, I’ve noticed a significant change in attitude: engineers are starting to understand that AI is a tool that lets them work faster and smarter. I expect Cisco to keep adding agentic capabilities, with a roadmap to fully autonomous operations in the next 24 to 36 months.
3. AI defense: Guarding the agentic supply chain
As agents become more autonomous, the security risks become more “semantically complex.” To address this, Cisco launched the biggest update to its AI Defense solution since it was initially announced. The highlight of this release is the AI Bill of Materials, which provides visibility into AI software assets and third-party dependencies. This is significant because it shifts security from tracking code to tracking “intent,” providing the visibility needed to manage the unique risks of autonomous agents. By documenting models and data dependencies, it allows enterprises to secure the AI supply chain against semantic threats that traditional firewalls simply can’t see.
Furthermore, Cisco is introducing Advanced Algorithmic Red Teaming. Unlike traditional security that looks for a single “bad” prompt, this uses adaptive multi-turn testing to see how an agent behaves over a long conversation. It’s designed to stop “poisoned tools” or prompts that try to hijack an agent’s authority.
At Cisco’s AI Summit, Amazon Web Services Inc. Chief Executive Matt Garman offered an analogy that highlighted the importance of guardrails. He explained that if you lay a board across a canyon, you will crawl or walk very slowly across it; that same board with handrails lets you run. AI Defense gives companies confidence that their AI is doing what it should do, which enables them to adopt it much faster.
4. Full-stack post-quantum cryptography
In an industry-first move, Cisco announced full-stack PQC protections within its new IOS XE 26 operating system. This is a defense against “harvest now, decrypt later” attacks, in which adversaries capture encrypted traffic today so they can decrypt it once quantum computers mature.
As AI workflows increasingly involve long-lived, sensitive data, the threat of future quantum computers cracking today’s encryption is real. Cisco is embedding PQC across its new 8000 Series Secure Routers and C9000 Smart Switches, aligning with evolving global regulatory guidance and ensuring that data remains encrypted even in the quantum age.
This should appeal to regulated industries, governments and any organization whose data must remain confidential for years. The timeline for practical quantum computing is still uncertain, but it’s good that Cisco offers protection today, ahead of when it arrives.
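The “time value of data” argument can be made precise with Mosca’s inequality from the post-quantum literature: if data must stay secret for x years and migrating to PQC takes y years, you are exposed whenever x + y exceeds the estimated time z until a cryptographically relevant quantum computer. The numbers below are invented for illustration.

```python
def at_risk(shelf_life_years: float, migration_years: float,
            quantum_eta_years: float) -> bool:
    """Mosca's inequality: exposed if the data's required secrecy lifetime
    plus the PQC migration time exceeds the quantum-computer ETA."""
    return shelf_life_years + migration_years > quantum_eta_years

# Illustrative inputs only: health records confidential for 10 years,
# a 5-year migration, a 12-year quantum ETA.
print(at_risk(10, 5, 12))  # True -- migration needs to start now
print(at_risk(1, 1, 12))   # False -- short-lived data is not exposed
```

This is why organizations whose data has a multi-year shelf life should care about PQC long before a quantum computer exists: the clock starts at capture time, not at decryption time.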
5. Nexus One: The unified AI fabric
To simplify the sheer complexity of these new technologies, Cisco is unifying its data center strategy under Nexus One. This is an integrated solution that brings together silicon, systems (such as the new N9000), optics and software under a single operating model.
A notable feature is the Native Splunk Platform Integration, expected in March, which allows customers to analyze network telemetry directly where it resides. This is critical for sovereign cloud deployments where data locality and compliance are paramount. Essentially, Cisco is giving enterprises a “single pane of glass” to manage everything from traditional workloads to massive AI training clusters.
During his Q&A with analysts, President and Chief Product Officer Jeetu Patel (pictured) talked about Cisco’s evolution into a platform, or systems, company. This is a good example: Cisco historically had good technology, but much of it was deployed in silos. Since Patel took over product, Cisco has been much more focused on delivering value at the “Cisco” level rather than through individual products.
The bottom line
Coming out of Cisco Live EMEA 2026 this week and AI Summit last week, it’s easy to see that the era of AI as a feature is coming to an end. We have entered the era of AI as the infrastructure.
By combining massive 100T-class silicon with autonomous “AgenticOps” and post-quantum security, Cisco is betting that the winner of the AI race won’t just be the company with the best model, but the company with the most resilient, secure and automated network to run it on. When ChatGPT burst onto the scene, few thought of Cisco as an AI company, but it has consistently delivered products that help its customers move from AI vision to reality.
In the world of professional sports, “data-driven” is often a term tossed around to describe basic box scores. But for the National Football League, the last 10 years have represented a fundamental shift in how the game is measured, analyzed and even played.
This week, as the league reflects on a decade of its Next Gen Stats or NGS platform, the story isn’t just about football — it’s an excellent example of how cloud-native infrastructure and machine learning can transform an industry in real time.
What began in 2015 as a tentative experiment with radio-frequency identification, or RFID, tags has flourished into an AI-driven analytics platform powered by Amazon Web Services Inc. Today, the partnership between the NFL and AWS serves as a model for the “intelligent enterprise,” processing millions of data points per game to deliver insights that were once considered impossible to quantify.
The origin: From tracking to intelligence
A decade ago, the NFL’s “Next Gen” journey started with hardware. The league embedded RFID chips into every player’s shoulder pads and within the football itself. Twenty ultra-wideband receivers were mounted in every stadium to capture the X/Y coordinates of all 22 players 10 times per second, and the ball 25 times per second.
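A quick back-of-envelope from those capture rates shows where the “millions of data points per game” figure comes from. The three-hour tracking window is my assumption; the sampling rates are from the league’s published specs above.

```python
# Rough math on the tracking feed described above (assumed ~3-hour window;
# real capture windows and per-sample payloads differ).
players_on_field = 22
player_rate_hz = 10   # each player's X/Y captured 10x per second
ball_rate_hz = 25     # the ball captured 25x per second

samples_per_second = players_on_field * player_rate_hz + ball_rate_hz  # 245
game_seconds = 3 * 60 * 60
samples_per_game = samples_per_second * game_seconds

print(samples_per_second, samples_per_game)  # 245 samples/s, 2,646,000 per game
```

Even before any derived metrics, the raw positional feed alone lands in the millions of coordinates per game, which is why the league needed cloud-scale storage and compute rather than a stadium-local database.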
“Football, for 100-plus years, has been a box score game,” Mike Band, NFL’s senior manager of research and analytics, noted in a recent retrospective. “You had yards, touchdowns and tackles. But those numbers only captured a sliver of what unfolded on the field.”
The early years focused on low-hanging fruit, metrics such as top speed and player separation. However, the raw data was just the substrate. The real breakthrough came in 2017 when the NFL formalized its partnership with AWS, moving the project from a tracking experiment to critical league infrastructure. By 2018, the league opened its tracking data to all 32 teams, putting every franchise on a common analytical footing.
Scaling the stack: The SageMaker era
The complexity of the questions grew: How difficult was that catch? What is the probability of a sack? The NFL needed more than just data storage; it needed advanced machine learning capabilities. The league turned to Amazon SageMaker to build, train and deploy models that could handle the high-velocity data streaming from the field.
In addition to SageMaker, the NFL has adopted many AWS tools, including Amazon Quick, which is an agentic AI-enabled workspace that acts as a set of “teammates” for business users. The NFL is using Quick to deliver real-time, interactive visualizations and answers to different stakeholders, including fans, broadcasters and analysts.
The first major milestone of this partnership was “Completion Probability,” launched in 2018. Built using an XGBoost machine learning model, it factored in 10 variables, including receiver separation and quarterback pressure, to assign a percentage to the likelihood of a catch.
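To illustrate the shape of such a model without reproducing it, here is a minimal logistic sketch that maps two of the features mentioned above to a catch probability. The real system is an XGBoost model over 10 variables; the weights, bias and feature scaling here are entirely invented.

```python
import math

# Invented weights for two of the ten features the real model uses.
W_SEPARATION = 0.9   # yards between receiver and nearest defender (more = easier)
W_PRESSURE = -0.7    # pass-rush pressure on the QB (more = harder)
BIAS = -0.5

def completion_probability(separation_yds: float, pressure: float) -> float:
    """Logistic stand-in for the XGBoost model: squash a weighted
    feature sum into a probability between 0 and 1."""
    z = BIAS + W_SEPARATION * separation_yds + W_PRESSURE * pressure
    return 1 / (1 + math.exp(-z))

# A wide-open receiver vs. a contested throw under heavy pressure.
p_open = completion_probability(4.0, 0.0)
p_contested = completion_probability(0.5, 3.0)
print(round(p_open, 2), round(p_contested, 2))
```

The value of the approach is the same at any scale: instead of a binary “catch/no catch” box score, every target gets a calibrated difficulty, so a 30% throw that is completed is credited differently from a 95% one.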
Today, that single model has evolved into a library of more than 75 machine learning models running simultaneously. What’s equally impressive is the sheer scale of the data being generated and analyzed:
- Data ingestion: Every snap triggers the creation of a massive amount of positional tracking data.
- Latency requirements: Models must return results in under 100 milliseconds to be relevant for live broadcasts.
- Volume: The system now produces between 500 and 1,000 unique stats per play.
It’s important to note that though NGS was initially created for broadcasters and fans, the data backbone now underpins everything from officiating and schedule creation to the “Digital Athlete” — an AWS-powered injury prediction tool that helps teams identify when players are at increased risk of injury.
During a media panel in San Francisco this week, Julie Souza (pictured, right), global head of sports for AWS, and Mackenzie Herzog (left), vice president of player health and safety for the NFL, discussed the impact AI has had on injuries. They explained that it was the combination of the Digital Athlete and tens of thousands of simulated games that led to the dynamic kickoff rule, the banning of the hip-drop tackle and a redesign of helmets, all of which contributed to the lowest injury rate the NFL has seen in decades.
Decoding the ‘game within the game’
One of the most recent and complex innovations to come out of the AWS-NFL lab is “Coverage Responsibility.” For decades, defensive performance was a statistical “black box.” If a quarterback didn’t throw at a cornerback, that corner’s effectiveness was invisible in the box score.
Using spatio-temporal transformer architectures, the same type of technology behind modern large language models, NGS can now identify defensive assignments in real time. The system can tell if a safety was disguising a coverage pre-snap or if a cornerback was “left on an island” in man coverage. This transforms the eye test of scouts into hard, verifiable data.
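To see the problem shape, a greedy nearest-defender heuristic is the simplest possible stand-in for coverage assignment. The real system uses spatio-temporal transformers over full tracking sequences and handles zone coverage, disguises and switches; the coordinates and names below are invented.

```python
"""Toy coverage assignment: pair each defender with the nearest receiver
at the snap. A crude stand-in for the transformer-based system."""

def assign_coverage(defenders: dict, receivers: dict) -> dict:
    """Map each defender to the closest receiver by squared Euclidean
    distance on the field's X/Y plane."""
    assignments = {}
    for d_name, (dx, dy) in defenders.items():
        closest = min(
            receivers,
            key=lambda r: (receivers[r][0] - dx) ** 2 + (receivers[r][1] - dy) ** 2,
        )
        assignments[d_name] = closest
    return assignments

# Invented pre-snap coordinates (yards).
defenders = {"CB1": (0.0, 1.0), "S1": (10.0, 8.0)}
receivers = {"WR1": (0.0, 0.0), "TE1": (9.0, 6.0)}
print(assign_coverage(defenders, receivers))  # {'CB1': 'WR1', 'S1': 'TE1'}
```

The gap between this sketch and the production system is exactly why transformers are needed: proximity at one instant can’t distinguish man from zone, or a disguise from an assignment, which requires reasoning over the full time series of all 22 players.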
The league has also democratized this innovation through the “Big Data Bowl,” an annual competition where data scientists from outside the NFL are invited to solve league problems using tracking data. Many of the metrics seen on Amazon Prime’s Thursday Night Football today, such as “Pressure Probability,” originated as submissions from this open-source community.
The next frontier: Optical tracking and skeletal data
As the league looks toward the next decade, the NFL is already moving beyond the X/Y coordinate. The next evolution of Next Gen Stats involves “optical tracking,” using 4K camera arrays to capture the full 3D pose of a player.
Instead of seeing a player as a single dot on a screen, the system will soon track 30-plus points on a player’s body, including joints such as elbows, knees and hips. This skeletal data will unlock a new dimension of biomechanical analysis, allowing teams to analyze a quarterback’s throwing motion or a lineman’s leverage with millimeter precision.
Lessons for IT leaders from Next Gen Stats
For IT leaders and enterprise architects, the NFL’s decade with AWS offers three key takeaways:
- Context is king: Raw data is a liability until it is contextualized by machine learning models.
- Infrastructure dictates innovation: You cannot run real-time AI on legacy, siloed systems. The NFL’s AWS cloud stack is what makes subsecond inferencing possible.
- The ecosystem approach: By combining internal expertise with external talent, such as the Big Data Bowl community, and vendor partnerships such as the one with AWS, the NFL accelerated its R&D cycle by years.
A decade ago, Next Gen Stats was a novelty. Today, it is one of the NFL’s most critical components. As the league moves into an AI-first future, the “Next Gen” label seems almost modest. Business leaders should follow the NFL’s model of continuous innovation: deliver immediate value from the data being generated, then build on that success.
Cisco Systems Inc. held its second annual AI Summit this week, with a star-studded lineup of artificial intelligence celebrities. Unlike most vendor events, the Cisco AI Summit was designed to be a “meeting of the minds,” bringing together the “builders of the AI economy” to help the industry move past the hype and address the practical realities of a world being reshaped by AI. From the shift toward agentic workflows to the demographic necessity of automation, here are five key thoughts that defined the summit:
1. 2026: The year agentic AI goes into production
Though 2025 was defined by widespread experimentation, the consensus among summit leaders is that 2026 marks the official turning point for agentic AI — autonomous systems capable of reasoning, planning and executing complex tasks.
Leading off, Cisco Chief Executive Chuck Robbins noted, “For all the enterprise customers who are here this week, we all believe 2026 is going to be a turning point for AI — this will be the year of agentic applications.” OpenAI Group PBC CEO Sam Altman echoed this sentiment in his session with Cisco President and Chief Product Officer Jeetu Patel, describing the current convergence of model capability and interface as another “ChatGPT moment.”
Altman observed that “this is the first time I felt another ChatGPT moment — a clear glimpse into the future of knowledge work.” The movement from “chatbots to agents” changes the fundamental architecture of work. As Patel explained, we are moving from intelligent assistants to systems that can proactively remediate infrastructure issues or even build full pieces of software with minimal human intervention.
Though use cases aren’t always easy to find, they are out there. Last week at RingCentral’s Revenue Kick Off, I met Liesel Perez, co-founder of Axis Integrated Mental Health, who explained how her therapists run AI agents in the background to capture notes and update systems for insurance purposes. This lets clinicians pay better attention to patients while agentic AI does the heavy lifting, an excellent example of the value the technology can bring. It’s a simple use case, but one that can have a big impact on productivity and patient care.
2. Solving the trust deficit and the security prerequisite
A recurring theme throughout the summit was the significant trust deficit currently hindering AI adoption. I recently attended the World Economic Forum in Davos, and while AI was the key theme there, this concept of AI trust was pervasive in every session I attended and every attendee I talked to.
In previous technology shifts, security was often treated as an optional trade-off for productivity. In the AI era, security has become a non-negotiable prerequisite.
“If people don’t trust these systems, they’ll never use them,” Patel stated bluntly. This trust must extend across every layer of the stack: the data, the models, the infrastructure and the agents themselves. Cisco’s response has been the launch of AI Defense, a platform designed not just to use AI for cyber defense, but to secure AI itself against misuse and data leakage.
However, trust goes far beyond the technology, and this was the main theme of the panel with Robbins and Anne Neuberger, strategic advisor to Cisco, and Brett McGurk, special advisor for international affairs for Cisco. Neuberger emphasized that AI is the only way to counter modern cyberthreats effectively. Because software-defined networks are constantly changing, identifying “normal” vs. “anomalous” behavior requires AI’s speed to assist human defenders who can no longer keep up manually.
Both experts noted a significant disconnect in Washington D.C. Policymakers often regulate tools they do not use daily given security restrictions in high-level government offices. McGurk warned that imprecise regulation could allow competitors such as China to leapfrog the U.S.
Amazon Web Services Inc. CEO Matt Garman provided an easy-to-understand analogy that highlighted the importance of trust. He explained that if one tries to cross a canyon on a board, one will crawl across the board. “Put up handrails as guardrails and we run.” Trust gives us confidence and that leads to utilization which, in turn, creates the rising tide that benefits everyone.
As has been noted by so many people, AI is a team game, and I thought this quote from Robbins was a call to the entire industry: “None of us can do it alone, therefore trust is really imperative.” This is true, as it will let us run, not crawl, toward AI.
3. The demographic imperative: AI as a necessity
Perhaps the most interesting macroeconomic take came from Microsoft Corp. Chief Technology Officer Kevin Scott, who argued that AI is no longer a luxury but a “biological necessity” for global society. Pointing to countries such as Japan, where high school graduation numbers have already peaked, Scott highlighted a looming labor crisis caused by aging populations and declining birth rates.
“Demographic data is clear that Japan is in population decline — China, Korea as well,” Scott noted. He believes AI is the only technological intervention capable of maintaining our quality of life as the labor pool shrinks. This shifts the narrative from AI “taking jobs” to AI “filling gaps” that human demographics can no longer sustain.
This aligns with the Silver Tsunami economic theory. For example, in rural America (where Scott’s own mother lives), the brain drain and aging demographics create “zero-sum” environments where one person’s gain is another’s loss. Scott views AI as the tool to turn these back into “non-zero-sum” problems by increasing individual productivity to a level that compensates for the missing workforce.
Scott’s session was a great thought exercise that presented two contrasting futures:
- The optimistic case: Humans use AI to solve “super important problems with urgency” — curing diseases, managing the energy transition and supporting an aging society.
- The pessimistic case: We fall into a “superficial mode,” using massive compute resources for distraction. He humorously noted that his own kids use AI for biomedical engineering half the time, and the other half to create “pictures of green llamas with big butts.”
Which becomes true? The internet has shown we can do both, but solving problems and transforming the way we work, live, learn and play led the way, with the fun stuff coming much later.
4. Re-engineering work for ‘abundance’
Nvidia Corp. CEO Jensen Huang has assumed the role of the Nostradamus of AI. In his panel, he challenged leaders to adopt an abundance mindset. He argued that AI reduces the cost of intelligence by such an order of magnitude that we must stop thinking about how to save time on small tasks and start thinking about solving impossible problems.
“The definition of abundance is you look at a problem so big, you say, you know what, I’ll do it all,” Huang explained. He encouraged leaders to “let 1,000 flowers bloom” through experimentation rather than demanding immediate, line-item ROI spreadsheets. For Huang, the real risk is not being the first to adopt AI but being the last. “You’re not going to lose your job to AI,” he said. “You’re going to lose your job to someone who uses AI.”
This concept of augmenting labor instead of replacing it is a bit like Schrödinger’s cat: it’s true and not true at the same time. One stat shared by the WEF in Davos was that AI will displace 92 million jobs but also create 170 million new ones. The internet offers an analogy: we no longer buy airline tickets from a booth downtown; we purchase them on a website. Yet the internet democratized access to flying, and the airline industry now employs more people than ever.
Though Huang is correct that work needs to be re-engineered, it’s important for business leaders to reskill current employees so they can be part of the 170 million new jobs instead of being on the outside looking in.
5. Bridging the data gap with synthetic and physical data
The summit highlighted a looming bottleneck: We are running out of high-quality, human-generated data on the public internet. To continue the exponential curve of model improvement, the industry is pivoting toward synthetic data and machine-generated data.
World Labs Inc. CEO Dr. Fei-Fei Li pointed toward the next frontier: spatial intelligence. Whereas language models have been trained on clean text, the physical world of pixels and voxels is far messier. Li believes that for AI to reach true general intelligence or AGI, it must develop world models that understand 3D space, causality and gravity. “The ability to understand… the real 3D, 4D physical world is the foundation,” Li explained. This physical AI will unlock the next wave of value in robotics, healthcare and urban planning.
Li’s session raised some good points for information technology leaders as to why they need to look at AI as more than chatbots. The first is that language is a relatively new form of intelligence, only about 500,000 years old, whereas perception (seeing and touching) has been evolving for more than 1.5 billion years. AGI requires mastering both words and perception, and that’s the challenge World Labs is taking on.
Also, the path to generalized robots, or physical AI, is much harder than self-driving cars. A car just has to avoid touching things; a robot has to manipulate them without breaking them. The scarcity of 3D data is real, but the emergence of high-fidelity synthetic data is creating a flywheel that will accelerate physical AI faster than we think.
If a company’s AI strategy is 100% focused on text and data, it’s missing the 3D world where many businesses live. From the warehouse floor to the surgical suite, spatial intelligence is the horizontal layer that will define the next decade.
Comment on leadership: It’s the multiplier
One of the last but most important sessions at the summit came from Cisco Chief People, Policy and Purpose Officer Francine Katsoudas. She and I have had several conversations, most recently in Davos, about how AI success is driven as much by leadership as by technology, if not more. She shared data showing that AI adoption is neither a bottom-up grassroots movement nor a top-down mandate delivered via email; it is a direct reflection of active leadership. Her research at Cisco indicates that the “lions” of the modern era (an analogy to the ancient maps that marked unexplored or dangerous territory with lions), namely ambiguity, ethical uncertainty and the gap between evolving work and static skills, can be tamed only through a transformation in leadership behavior.
Katsoudas challenged the common C-suite narrative that blames the workforce for slow transitions or skill gaps. Instead, she presented the leader as the primary engine of momentum. According to Cisco’s internal data:
- Adoption is personal: AI adoption does not follow a “corporate email surprise”; it follows the visible behavior of the leader.
- The 2x effect: When a leader actively integrates AI into their own workflow, the adoption rate of their team doesn’t just grow — it doubles.
- The new talent profile: Leaders must pivot from valuing only stability and past performance toward seeking curiosity, agency and tech enthusiasm across the entire enterprise, including finance, legal and people departments.
Katsoudas concluded with a call for leaders to move away from fear-based narratives and toward a stance of radical confidence in their people: “The future does not belong to those that wait for the map to be finished,” she said. “It belongs to those who fearlessly walk with the lion.”
Business leaders, you’re on deck to lead the way with AI.
Conclusion: Connecting the dots
This was an interesting summit for Cisco to host: it wasn’t about technology or the latest GPU, but about thought leadership. At the event, I spoke with Jim Kavanaugh, CEO of World Wide Technology, Cisco’s largest and best transformation partner. The reality is that AI success requires a massive ecosystem of players, and Cisco touches all of them in some form, as it provides the network that ties these AI systems together. Cisco has been trying not just to catch the AI wave but to drive thought leadership across the industry.
“They have more momentum around AI today regarding their core capabilities and infrastructure, but this event demonstrates their commitment to thinking even bigger about how AI is going to play into the broader Cisco portfolio,” he said. “More importantly, this event shows how Cisco is looking at AI beyond the company, how all the players here can be brought together to benefit customers and that’s a great pivot for Cisco and something the industry needs.”
Overall, it was a great event, which was clear from the number of Fortune 500 chief information officers in attendance. Cisco Live EMEA is next week in Amsterdam, where we should get a dose of innovation showing how Cisco can help move AI from vision to reality.

