Featured Reports

Scott Gutterman from the PGA TOUR discusses the new Studios and the impact on fan experience

Zeus Kerravala and Scott Gutterman, SVP of Digital and Broadcast Technologies, discuss the expansion of the PGA TOUR Studios from […]


Philippe Dore, CMO of the BNP Paribas Tennis Tournament, talks innovation

April 2025 // Zeus Kerravala from ZK Research interviews Philippe Dore, CMO of the BNP Paribas tennis tournament. Philippe discusses […]


Nathan Howe, VP of Global Innovation at Zscaler, talks mobile security

March 2025 // Zeus Kerravala from ZK Research interviews Nathan Howe, VP of Global Innovation at Zscaler, about their new […]


Check Out Our Newest Videos

2026 ZKast #81- Why Your AI Strategy Will Fail Without Choice | The Equinix Distributed AI Advantage


2026 ZKast #80 - Zoom’s Product Roadmap: Customer Obsession, AI, and the Future of Work


Recent ZK Research Blog

News

I had been waiting for the 2026 edition of Zoom Communications Inc.’s Perspectives, its recently held annual get-together for industry analysts, because I find Zoom to be the most interesting vendor in the communications business today.

Though it has many competitors, its product roadmap has been markedly different. Other unified-communications-as-a-service and contact-center-as-a-service providers have been focused on going deeper in those stacks, but Zoom has complemented those efforts by expanding into many adjacent areas, such as e-mail, docs, sheets, notes and other productivity tools. In addition, the company offers frontline worker tools with Workvivo, small-business apps through Bonsai, and much more.

This has many industry watchers raising their eyebrows and wondering why Zoom would want to go head-to-head with Google LLC and Microsoft Corp. in their core areas of strength. At Perspectives, I was looking forward to gaining a better understanding of Zoom’s master plan for its next act.

The timing for the event was ideal, as business and information technology leaders have moved beyond thinking about video, chat and calls and are focusing instead on artificial intelligence companions, digital workers and business outcomes. Here are my key takeaways from Zoom Perspectives 2026:

Zoom is disrupting work, not communications

Zoom is repositioning itself as a “system of action” that moves beyond simple communications to fundamentally disrupt how work is done. By integrating its broad suite of tools — including meetings, phone, team chat, docs, sheets and the new personal note-taker, Zoom My Notes — into a unified platform, Zoom looks to bridge the gap from “conversation to completion.”

This strategy focuses on eliminating the “friction tax” of switching between disparate applications by enabling AI to reason over unstructured conversation data and trigger automated workflows. Rather than merely competing on feature sets with legacy giants like Microsoft and Google, Zoom is betting on a federated AI approach that delivers a seamless, “disposable” user interface tailored to the specific task at hand.

As Chief Executive Eric Yuan (pictured) explained during the event: “Over the past many years, we’ve focused only on the first step: rich communication. Now, we are aiming to embrace completion as well. Essentially, offer the customer a seamless experience to get a task done. It used to be two steps; now it’s just one step with AI.”

From passive information to active orchestration

While many platforms focus on organizing data, Zoom is repositioning its interface as a reasoning engine that actively guides the next steps of a project. By serving as a “semantic layer” between the user and their tools, the platform uses AI to identify commitments made during a conversation and translate them into coordinated actions across third-party systems such as Jira or Salesforce. This shifts Zoom’s role from a simple information display to an active orchestrator of business processes.
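To make the idea concrete, here is a minimal, hypothetical sketch of what a “conversation to completion” pipeline could look like: scan a meeting transcript for commitments, then hand each one to a task system. The extract_commitments heuristic and the create_task stub are illustrative assumptions, not Zoom’s actual implementation or any vendor’s API.

```python
import re
from dataclasses import dataclass

@dataclass
class Commitment:
    owner: str
    action: str

def extract_commitments(transcript: list[str]) -> list[Commitment]:
    """Naive stand-in for the AI 'semantic layer': flag lines where a
    speaker commits to doing something ("I'll ..." / "I will ...")."""
    found = []
    for line in transcript:
        match = re.match(r"(?P<owner>[A-Za-z ]+): I(?:'ll| will) (?P<action>.+)", line)
        if match:
            found.append(Commitment(match["owner"].strip(), match["action"].strip()))
    return found

def create_task(commitment: Commitment) -> None:
    """Stub for the 'system of action' step, e.g. opening a Jira issue or a
    Salesforce task through a real connector."""
    print(f"TODO for {commitment.owner}: {commitment.action}")

if __name__ == "__main__":
    meeting = [
        "Priya: I'll send the revised pricing deck by Friday.",
        "Sam: Sounds good, thanks.",
        "Sam: I will update the Q3 forecast after that.",
    ]
    for c in extract_commitments(meeting):
        create_task(c)
```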

As Chief Marketing Officer Kim Storen noted, “We recognize that there’s an opportunity to add value across the lifecycle of conversations, from preparation to meetings and all the way through to completion of work, with the understanding and context that come with embedded AI.”

From synchronous utility to persistent platform

A recurring theme among large-scale enterprise customers is the move toward platform consolidation — not merely a cost-saving exercise, but a strategy for information persistence.

The “utility phase” of the pandemic era was about keeping the lights on. The current phase is about eliminating the human latency caused by handoffs between global teams. My research shows that workers spend 40% of their time managing their work rather than doing their jobs.

For today’s chief information officers, the value proposition has shifted from video stream quality to capturing value from the data being generated. By leveraging persistent whiteboards, documents, and AI-generated context, organizations can seamlessly bridge time zones. When a development team in one hemisphere wraps up, the team in the next can resume work immediately, guided by an AI-curated “state of play” rather than a grueling four-hour status meeting.

AI as the new accountability partner

Enterprise AI discussions have shifted from “What can AI do?” to “How does AI change our social contract of work?” For many IT leaders, the Zoom AI Companion is much more than a meeting summary; it is becoming an accountability mechanism. When transcription and action-item tracking are enabled by default, the “social contract” of a meeting shifts.

The typical post-meeting ambiguity gives way to a digital record of commitments. If a stakeholder agrees to a deliverable, it is captured, assigned and tracked. This shift transforms AI from a passive assistant into a persistent partner that drives project velocity and ensures that “administrative drift” no longer stalls mission-critical workflows. These handoffs are what cause large enterprises to lose their nimbleness and fall behind smaller competitors.

What this means for the CIO

For the CIO, Zoom’s evolution marks a shift in the “center of gravity” of enterprise data.

  • Reducing tool fatigue: By consolidating fragmented tools into a single AI-enabled ecosystem, CIOs can reduce employees’ cognitive load and IT’s integration burden.
  • Data liquidity: The agentic approach means data no longer lives in silos. If the AI can track a project from a phone call to a whiteboard session to a final document, the CIO is no longer managing disparate apps; they are managing a continuous flow of corporate intelligence.
  • ROI of “found time”: The primary metric is shifting to “time to context.” The faster a worker can understand a project’s current state without human intervention, the higher the organizational throughput.

Implications for Zoom

As Zoom pivots toward this agentic future, the company faces several strategic imperatives:

  1. The identity challenge: Zoom must continue to combat the “video-only” perception. Its success depends on the market viewing it as a “work operating system” rather than a meeting tool.
  2. Platform interoperability: To be truly “agentic,” Zoom’s AI will need to integrate seamlessly with third-party ecosystems (Salesforce, ServiceNow, Jira). The more “open” its agentic framework is, the more indispensable Zoom becomes.
  3. Security and trust: As AI evolves from summarizing meetings to “holding people accountable” and managing workflows, the privacy and security of that data will face intense scrutiny. Zoom’s “AI-first” architecture must remain “Trust-first” to maintain its foothold in the regulated enterprise.

Ultimately, Perspectives signaled that Zoom is no longer content to be merely a venue for discussing work; it intends to be the engine through which work is done. It’s certainly a departure from where the industry has been, but Zoom has never been afraid to do things differently. The platform is ready, and products are rolling out at a torrid pace.

Can Zoom change the way people work? Only time will tell, but with companies making heavy AI investments, Zoom will certainly have an opportunity.

For years, the National Football League offseason was a period defined by information asymmetry. While front offices sat behind “glass walls” in war rooms, armed with proprietary Next Gen Stats and sophisticated modeling tools, the average fan was left to navigate a fragmented landscape of mock drafts, cap calculators, PDF guides and Twitter rumors.

Last month, the NFL and Amazon Web Services Inc. officially took down that wall with the launch of NFL IQ. Built on Amazon Quick, this is more than just another sports dashboard on a stats website; it’s the democratization of complex data. By transforming billions of data points into a self-service, interactive hub, the NFL is moving away from “data delivery” and toward “contextual reasoning.”

There have been many parallels drawn between artificial intelligence and the internet, but it’s this last point that is the most meaningful. The internet changed the way we work, live and learn by democratizing access to information. AI will have a similar but greater impact by democratizing access to expertise.

I recently spoke with Ari Entin, head of sports marketing for AWS, to better understand how this changes the game — not just for the Sunday ticket holder, but for the enterprise chief information officer.

From research analyst to ‘regular fan’

The core challenge of the modern NFL offseason is the “onus of effort.” As Entin noted during our conversation, the sheer volume of data available to fans has exploded, but the responsibility still lies with the individual to “troll and find different sources of information” to see whether it is even reliable. Popular fan resources include Pro Football Focus, The Athletic and Mel Kiper Jr. Though all these data sources are great, it’s up to the fan to connect the dots, and that’s the problem AWS is trying to solve.

“The impetus here for building NFL IQ with the Next Gen Stats team was: How do we take these, which were previously disparate moments — the Combine, free agency, pro days, the draft — and provide a holistic view?” Entin explained. By leveraging Amazon Quick, the NFL is providing fans with the exact same day-to-day research tool used by league insiders. This isn’t a “lite” version of the software; it’s the same engine, scaled for millions.

“We’re essentially bringing this next level of AI and analytics insights to fans exactly how folks in the league see it, too,” Entin told me. “We’re turning the experience from a research analyst role into just a regular fan where they can just ask a question and get an answer back in seconds.”

The power of ‘reasoning’ over retrieval

This week, a major update was announced with the debut of the NFL IQ AI Assistant. This is where the technical architecture becomes especially interesting for the enterprise. Most “sports chatbots” are basically advanced search engines that scrape public data. The NFL IQ AI Assistant, however, uses Amazon Quick’s orchestration capabilities to analyze more than 20 proprietary NFL data sources. It doesn’t just tell you who a team might select; it combines GM tendencies, cap space constraints and positional needs to explain why.

During our interview, Entin highlighted how this handles complex, multivariable queries. If a Seahawks fan asks who can replace a specific departing starter, the system doesn’t just look at a list of prospects. It evaluates:

  • Scheme fit: Does the prospect match the new offensive coordinator’s system?
  • Historical links: Does the player have connections to the new coaching staff?
  • Consensus value: Where is the player trending on “Grinding the Mocks” (a real-time consensus of thousands of experts)?
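As a thought experiment, the kind of multivariable weighing described above can be reduced to a simple scoring function. The factors, weights and prospect data below are invented for illustration; they are not the NFL IQ model or Amazon Quick’s API.

```python
def prospect_fit_score(prospect: dict[str, float]) -> float:
    """Combine the three illustrative factors (all scored 0-1) into one number.
    The weights are assumptions, not the NFL's."""
    weights = {"scheme_fit": 0.5, "historical_link": 0.2, "consensus_value": 0.3}
    return sum(weight * prospect[factor] for factor, weight in weights.items())

# Hypothetical prospects for a team replacing a departing starter.
prospects = {
    "Prospect A": {"scheme_fit": 0.9, "historical_link": 1.0, "consensus_value": 0.6},
    "Prospect B": {"scheme_fit": 0.7, "historical_link": 0.0, "consensus_value": 0.9},
}

for name, features in sorted(prospects.items(),
                             key=lambda kv: prospect_fit_score(kv[1]),
                             reverse=True):
    print(f"{name}: fit score {prospect_fit_score(features):.2f}")
```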

This “reasoning” layer is updated multiple times daily to ensure that if a trade breaks on social media, the AI Assistant’s logic stays grounded in reality. Entin was clear on the guardrails: “These are all from verified NFL sources — full reported and verified paperwork-related things going through the league.”

Why the CIO should care: The ‘blank page’ problem

For the enterprise CIO, the NFL’s experiment is an excellent example of how to address the “blank page” problem. One of the main challenges in business intelligence adoption is that users often don’t know what questions to ask when faced with a blinking cursor. The NFL IQ tackles this by embedding the AI Assistant directly alongside the visual dashboard. The conversation doesn’t replace navigation; it enhances it.

This hybrid approach — where a user can view a trend in a chart and then immediately ask the AI to “explain that spike” — represents the future of software as a service. It addresses the “signal-to-noise” ratio that hampers most corporate data lakes.

The contrast with traditional BI is instructive:

  • Interface: Static dashboards and filters give way to an interactive, conversational experience.
  • Access: Where only specialized analysts could access and understand the data, Quick lets users self-serve.
  • Freshness: Data that was refreshed weekly or even monthly is now near-real time, with intraday updates.

Final thoughts: The new architectural blueprint

The NFL IQ launch shows that AI-ready infrastructure isn’t just about having more GPUs; it’s about how data is managed. By using Quick to connect technical engineers and non-technical fans, the NFL has created a model for any industry — from healthcare to retail — that deals with complex, scattered datasets. They have successfully shifted from asking “What happened?” to asking “What should I do next?”

As Entin explained, the goal was to make the NFL’s most advanced insights accessible beyond technical barriers. For the rest of the business world, the message is clear: If a fan can use AI to “reason” through a seven-round mock draft, your regional managers should be able to do the same with their supply chains.

The front office is no longer a closed room. Thanks to the NFL and AWS, it’s now a conversation.

While security eyes are on the RSAC conference in San Francisco this week, the compute world is focused on KubeCon EU in Amsterdam. But the theme of artificial intelligence is pervasive across both, as enterprise information technology has reached a point where “AI curiosity” has officially been replaced by “AI urgency.”

Every chief information officer I talk to is under immense pressure to move from those neat little research-and-development experiments to actual production-grade deployment. But as they scale, they’re hitting a wall that isn’t about the models or the data — it’s about the plumbing. Specifically, it’s the graphics processing unit infrastructure bottleneck.

For years, we’ve treated Kubernetes as the panacea for infrastructure woes. Need to scale? Throw it in a container. Need to orchestrate? K8s is your friend. But when you’re dealing with Nvidia Corp. Blackwell B300s and massive training clusters, the standard way of doing things is sharing overprovisioned environments or waiting weeks for dedicated hardware. These are recipes for project failure, only adding to the narrative that the majority of AI projects fail.

Today at KubeCon, neocloud provider QumulusAI and vCluster, creators of virtual Kubernetes cluster technology, announced a partnership to address much of the friction between infrastructure agility and the rigid demands of high-performance GPUs.

The real cost of infrastructure friction

Today’s reality is that enterprise development teams are currently stuck in a “pick your poison” scenario.

  1. The wait-and-see approach: A dedicated GPU environment is requested, but the IT team needs time to provision and tells the requester to check back in three weeks. In the past, this has been an annoyance, but in the AI race, three weeks is an eternity and could be the difference between being an industry leader and a laggard.
  2. The Wild West approach: Business units share a massively overprovisioned environment. It’s faster to get into, but it’s a security nightmare, and resource contention makes training runs highly unpredictable and capacity planning ever harder.

This inefficiency is more than just an inconvenience; it’s a massive drain on return on investment, since time is money. When companies deal with hyperscalers or neocloud providers, they expect the kind of speed that Nvidia Blackwell B300s and RTXPRO 6000s promise. Having those chips sit idle while a developer fumbles a namespace configuration is the compute version of malpractice.

QumulusAI and vCluster: Partitioning power

The partnership between QumulusAI and vCluster brings customers a way to “slice and dice” high-end GPU power without the overhead of traditional virtualization. This gives customers more options and, more importantly, the exact amount of GPU power they need to run their accelerated computing workloads, the primary one being AI.

QumulusAI came to market with a value proposition of building a turnkey, vertically integrated AI cloud. Think of QumulusAI as a company that didn’t just build a fast car, but designed the engine, the fuel and the highway it runs on. This “hyperspeed compute” setup provides massive power, but QumulusAI also provides the dashboard to keep all the horsepower under control. In fact, the company will let customers only use a piece of the engine if that’s all that’s required for the journey.

By integrating vCluster’s virtual Kubernetes technology, QumulusAI is essentially giving enterprises faster and more granular control of isolated environments. Instead of spinning up an entire physical cluster for every project, which is slow and expensive, teams can now spin up isolated virtual clusters on shared GPU hardware.

This gives developers the “feel” of a dedicated environment — complete with their own application programming interface server and full control — while the platform team gets to maximize the utilization of those incredibly expensive GPUs.

The vCluster AI Lab: Innovation at the edge

Perhaps the most interesting part of this news is the launch of the vCluster AI Lab. The lab should provide QumulusAI customers assurance they can continue to use the platform for the long term.

As the physical chips that are used for AI, such as GPUs, rapidly improve, the software managing them must stay ahead of the curve. This lab ensures that no matter how advanced the hardware becomes, the systems can handle the workload. It allows vCluster engineers to prototype how Kubernetes should handle emerging AI workloads in real time.

Accelerating the move to AI factories

As I’ve noted in my previous posts, in 2026 the goal for companies should be to move AI factories from being projects to production infrastructure. To get there, organizations need three things:

  • Access: Getting the latest silicon (such as the B300) without a two-year lead time.
  • Isolation: Ensuring that Team A’s training run doesn’t crash Team B’s inference model.
  • Speed: Moving from idea to environment in minutes, not months.

This partnership addresses all three points and allows a midsized enterprise to act like a large company and a large enterprise to act like a hyperscaler. They get the security of an isolated environment and the performance of bare-metal GPUs, all managed through a unified Kubernetes stack.

Final thoughts

The AI race is going to be won by the companies that solve the operational headaches of GPU management. The technology is there, but can organizations deploy it in a way where it meets their needs now, doesn’t break the bank and can scale with them?

The partnership between QumulusAI and vCluster lowers the barrier to entry for secure, high-performance environments and makes it possible for AI teams to move as fast as their ideas. And in today’s market, speed isn’t just an advantage — it’s the only thing that matters.

The RSAC cybersecurity conference is this week, and for the last two years the conversation at the event has revolved around generative artificial intelligence — that is, models we talk to that talk back and act as copilots.

At RSAC 2026, there has been a definite change in topic as the world has been shifting from conversational AI to agentic AI. The world is moving from AI that answers questions to AI that takes actions — software that can browse the web, execute code, manage your calendar and interface with corporate databases.

The poster child for this movement is OpenClaw, the open-source agent framework that has taken the developer world by storm. But as Jeetu Patel, Cisco Systems Inc.’s chief product officer, noted during his RSAC keynote, “in the enterprise, power without governance isn’t innovation; it’s unmanaged risk.”

To bridge this gap, Cisco on Monday unveiled DefenseClaw, an open-source security framework designed to wrap these “Claws” in a layer of enterprise-grade protection. For anyone following the “agentic” trend, this announcement should give companies the kind of security friction that actually enables speed. That might seem counterintuitive, but I’ll explain.

What exactly are ‘Claws’?

Before discussing how to secure Claws, one must understand what they are. In the current AI vernacular, a “Claw” (referring to agents built on frameworks such as OpenClaw or Nvidia Corp.’s NemoClaw) is an autonomous AI agent capable of reasoning and using tools. Unlike a standard large language model, which is a closed loop, a Claw uses the Model Context Protocol, or MCP, to reach out into the world.

Think of a Claw as a digital co-worker. You don’t just ask it to “summarize this email;” you tell it to “summarize this email, find the mentioned project in Jira, update the status to ‘in progress,’ and Slack the team the update.” To do this, the agent uses “Skills” — modular plugins that give it specific capabilities, such as running shell commands or accessing a specific application programming interface. Once the Claw learns this behavior, it will do this without being asked and continue to refine its skills, theoretically providing more value.
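A rough way to picture a “Skill” is as a small, self-describing plugin the agent can load and invoke as one step in a plan. The sketch below is a generic illustration of that pattern in Python; the registry, decorator and skill names are assumptions, not the OpenClaw API.

```python
from typing import Callable

# Registry of "Skills": named capabilities the agent is allowed to call.
SKILLS: dict[str, Callable[..., str]] = {}

def skill(name: str):
    """Register a function as a skill the agent may invoke by name."""
    def register(func: Callable[..., str]) -> Callable[..., str]:
        SKILLS[name] = func
        return func
    return register

@skill("summarize_email")
def summarize_email(body: str) -> str:
    # Placeholder for the LLM call that would do the real summarization.
    return body.splitlines()[0][:80]

@skill("update_jira_status")
def update_jira_status(issue: str, status: str) -> str:
    # Placeholder for a real Jira API call.
    return f"{issue} moved to '{status}'"

# An agent "plan" is just an ordered list of skill invocations.
plan = [
    ("summarize_email", {"body": "Re: Apollo launch\nSlipping one week..."}),
    ("update_jira_status", {"issue": "APOLLO-42", "status": "in progress"}),
]

for skill_name, kwargs in plan:
    print(SKILLS[skill_name](**kwargs))
```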

The nightmare scenario: Why agents are different

The very thing that makes Claws powerful makes them a security professional’s worst nightmare. Traditional security is built on the idea of a human user making a request. Agents break that model in several ways:

  • The Skills supply chain: Much like the early days of browser extensions, “Skills” are often community-contributed. A skill that claims to “Format your Excel sheets” might secretly contain a curl command that exfiltrates your local credentials to a rogue server.
  • Prompt injection 2.0: In a chatbot, prompt injection might make the AI say something rude. In an agent, a “malicious” email read by the agent could contain instructions that force the agent to delete files or change database permissions.
  • Self-evolving risks: Agents are dynamic, meaning their behavior changes based on the data they consume. For Claws, this could mean a skill that is clean today evolves to start exfiltrating data later. Unless every transaction is watched, the user would have no knowledge of this.

Enter DefenseClaw: The governance layer

DefenseClaw shouldn’t be thought of as an inhibitor to OpenClaw but rather its bodyguard. Built to integrate with Nvidia OpenShell, DefenseClaw acts as an automated security and inventory framework that can be deployed in under five minutes.

It functions through four primary technical pillars:

1. The pre-flight scan (admission control)

Before a “Skill” or an MCP server is allowed to run, DefenseClaw puts it through a gauntlet of scanners. This includes:

  • Skill Scanner: Analyzing the underlying code for malicious intent or hidden network calls.
  • CodeGuard: Static analysis of any code the agent itself generates to ensure it hasn’t “hallucinated” a security vulnerability into a script it’s about to run.
  • AI BOM (Bill of Materials): Automatically generating a manifest of every model, tool and plugin the agent touches.

2. Strict runtime sandboxing

In partnership with Nvidia, DefenseClaw leverages OpenShell to create a “deny-by-default” environment. If an agent tries to call an API that isn’t on the approved list, the network request is killed at the kernel level. The agent lives in a box; DefenseClaw decides what is allowed to enter or leave that box.
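Conceptually, deny-by-default egress is an allowlist check in front of every outbound call. The toy, user-space sketch below illustrates the idea only; DefenseClaw and OpenShell enforce this at the network and kernel level, and the hosts and exception type here are assumptions.

```python
# Approved destinations for this agent; anything not listed is rejected.
APPROVED_HOSTS = {"api.internal.example.com", "jira.example.com"}

class EgressDenied(Exception):
    pass

def guarded_request(host: str, path: str) -> str:
    """Toy stand-in for a sandboxed network call: deny unless explicitly allowed."""
    if host not in APPROVED_HOSTS:
        raise EgressDenied(f"blocked outbound call to {host}")
    # A real implementation would perform the HTTP request here.
    return f"OK: fetched https://{host}{path}"

print(guarded_request("jira.example.com", "/rest/api/2/issue/APOLLO-42"))
try:
    guarded_request("attacker.example.net", "/exfil")
except EgressDenied as err:
    print(err)
```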

3. Intent-aware monitoring

This is where the Cisco network and observability DNA adds value. DefenseClaw doesn’t just look at code; it looks at telemetry. It streams every tool call, every prompt-response pair and every policy decision directly into Splunk. By analyzing the intent of a sequence of actions, the system can detect “abnormal behavior” — such as an agent suddenly trying to access sensitive financial data it has never touched before.

4. Agentic identity (Duo and zero trust)

Cisco is extending Duo to the agentic world. Every Claw is assigned a unique identity and mapped to a human “sponsor.” This ensures that if an agent goes rogue, there is a clear audit trail showing who deployed it and what permissions it was granted.

The goal: Moving from pilot to production

As part of its RSAC activities, Cisco released its Cyber Threat Trends Report, which found that 85% of enterprises are testing AI agents but only 5% have moved them into production, highlighting that the primary bottleneck to adoption is a wide trust gap.

DefenseClaw aims to close that gap by making Claw security provable instead of probable. It transforms the agent from a “black box” into a governed corporate asset. By open-sourcing the framework, Cisco is betting that a standardized security layer will do for AI agents what SSL/TLS did for the web: Make it safe enough for everyone to use.

Final thoughts

Many industry watchers look at the agentic AI era as the Wild West, with new frontiers being discovered seemingly daily. Though this drives innovation and productivity to unprecedented levels, it also raises risks to equally high levels. By providing a framework that automates the “boring” parts of security, such as inventory, scanning and sandboxing, Cisco is positioning itself as a network-centric guardian on the road to the agentic workforce.

Claws are coming and they’re coming fast. Security needs to be in place before threats against them overwhelm information technology and cyber teams.

For years, the relationship between cybersecurity and business innovation has been a zero-sum game. Security teams were the “Department of No,” tasked with slowing down adoption to ensure safety. Given the business pressure to get artificial intelligence deployed, the security industry has been trying to flip this script by rethinking security along platform lines.

With the launch of Prisma AIRS 3.0, a redefined Prisma Browser and Next-Generation Trust Security, or NGTS, as part of its RSAC payload, Palo Alto Networks Inc. is not just trying to provide agentic guardrails but to help companies move faster, making security a catalyst for innovation.

Typically, with new technology, companies crawl, walk and run. By integrating security into agentic systems and workflows, businesses will have the confidence to move forward, reaching the “run” phase significantly faster.

From ‘AI that talks’ to ‘AI that acts’

The industry is currently shifting from generative AI, meaning chatbots, to agentic AI — autonomous entities that don’t just answer questions but execute multistep workflows, from coding assistance to automated customer support. As Ian Swanson, vice president of product for AI security at Palo Alto Networks, noted during an analyst briefing, this shift from “AI that talks” to “AI that acts” introduces systemic risks. Organizations are currently “blind to what AI does,” even if they monitor what it says.

Prisma AIRS 3.0 closes this visibility gap. It provides a comprehensive platform to discover agents wherever they live — in the cloud, in software-as-a-service applications or on local endpoints via the pending Koi acquisition. By employing “AI red teaming” to simulate context-aware attacks and scanning agent artifacts for excessive permissions, Palo Alto is bringing a “shift-left” mentality to the AI supply chain.

The browser as the new command center

Perhaps the most pragmatic move is the evolution of the Prisma Browser. Since employees spend roughly 85% of their day in a browser, Palo Alto is turning it into the primary “Secure AI Workspace.”

The secure browser can now distinguish between human and non-human (agent) identities in real time. This solves a major compliance hurdle: accountability. If an agent issues an unauthorized $5,000 invoice, the system doesn’t just block it; it identifies whether the error was a human prompt or an autonomous agent going rogue. As Yonatan Gotlib, product manager for Prisma Browser, explained, embedding large language models directly into the browser with inline Data Loss Prevention or DLP ensures that “unintended actions” don’t lead to catastrophic data exposure.

This greatly improves threat protection as users often fall victim to browser-based phishing that can lead to stolen credentials and ransomware. No matter how much training a company does, expecting every user to catch every scam is unrealistic. The secure browser can “see” things users can’t and prevent users from doing things that will ultimately cause harm.

Solving the ‘cryptographic reset’

While AI grabs the headlines, Palo Alto is also addressing a looming operational nightmare: the “cryptographic reset.” By 2029, certificate lifespans will shrink from years to just 47 days. For an enterprise with 5,000 certificates, that means 106 renewals every single day — a manual impossibility.
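The arithmetic behind that figure is straightforward, as the back-of-the-envelope calculation below shows; it simply divides the certificate estate by the projected lifespan.

```python
certificates = 5_000   # certificates in the hypothetical enterprise estate
lifespan_days = 47     # projected maximum certificate lifetime by 2029

renewals_per_day = certificates / lifespan_days
print(f"~{renewals_per_day:.0f} renewals every day")  # ~106
```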

The new NGTS platform, integrated with CyberArk’s machine identity intelligence, automates this lifecycle. Richu Channakeshava, product manager for quantum-safe security, emphasized that this isn’t only about better threat protection; it’s a resiliency play. By turning the network into a sensor that discovers “shadow” certificates and automatically refreshes them, Palo Alto is preventing the very outages that take business-critical applications offline.

The valuation perspective: Platformization and precision

From a company standpoint, Palo Alto’s strategy furthers its “platformization” push. By integrating these disparate technologies (AI security, browser-based work and quantum-safe cryptography) into the Strata Cloud Manager, it is creating an ecosystem with immense gravity.

The company is leveraging what it calls “Precision AI” to keep latency low, a critical factor for agentic workflows that might involve 50 different “handoffs” between agents. By using smaller, deterministic language models rather than bulky LLMs for detection, Palo is keeping security fast (measuring latency in milliseconds) and cost-effective. When looking at valuation, this shift from a defensive cost center to an essential “secure growth engine” justifies a premium as the foundational infrastructure of the AI enterprise.

Over the past half-year, security companies seem to have fallen out of favor with investors, despite putting up strong numbers. As customers make security a core part of their AI deployments, this will create a “rising tide” for all security platform vendors, with Palo Alto benefiting disproportionately, given that it has the largest installed base and the broadest platform.

What this means for the customer

The ultimate takeaway for the chief information security officer is a change in perception. Security is no longer the bottleneck.

  • Business enabler: By securing the “agentic workspace,” leaders can now “greenlight strategic AI initiatives that were previously stalled” by risk concerns.
  • Operational ROI: Automating certificate management via NGTS moves organizations toward “zero-touch automation,” significantly reducing the manual labor of information technology teams.
  • Future-proofing: With its work on quantum-safe security, Palo is helping customers inventory their “cryptographic debt” and prepare for the post-quantum era before it becomes a crisis.

Final thoughts

Palo Alto Networks is betting that the winner of the AI race won’t be the company with the fastest agents or the newest models, but the one with the most trusted agents. By embedding security directly into the browser and the network layer, it’s ensuring that when a business moves at machine speed, it doesn’t fly off the proverbial guardrails.

It’s imperative that CISOs rethink their approach to security. The cobbling together of best-of-breed technologies has always been the norm but has never worked. In the agentic era, any delay or latency in agentic communications will cost companies real money, and security is as important to success as graphics processing units or the network.

For years, the “wall” between storage and networking administrators has been a fixture of the enterprise data center. I spent the early part of my career as a network engineer, and storage was a bit of a black box to me; for most companies, that’s still the case.

This is because these networks operated differently. Storage teams obsessed over IOPS and durability, while networking teams lived and breathed latency and throughput. But at Nvidia GTC 2026, Chief Executive Jensen Huang just introduced a platform that effectively tears that wall down: the Nvidia BlueField-4 STX storage architecture.

Nvidia announced a modular reference architecture that delivers up to five times more token throughput and four times more energy efficiency compared with traditional central processing unit-based storage designs. However, it’s important to look past the numbers. This isn’t an innovation that just increases speed; it’s a rethink of how we define “storage” for the era of agentic artificial intelligence.

The rise of the ‘context layer’

We are moving past the era of simple “chatbots” into the era of agentic AI — systems that don’t just answer questions but execute multistep tasks across sessions. These agents require contextual working memory.

Traditional storage (think high-capacity, general-purpose arrays) is too slow for this. When an AI agent needs to recall a specific detail from a 10-hour conversation or a massive technical manual to take its next step, waiting for a traditional data path creates a bottleneck that leaves expensive graphics processing units sitting idle, and there is no bigger waste of money than GPUs that aren’t being used.

The BlueField-4 STX introduces the Nvidia CMX (Context Memory Storage) platform. This isn’t just “more disk,” but rather a high-performance context layer that expands GPU memory across the rack. It allows AI factories to ingest data twice as fast and maintain the responsiveness required for long-context reasoning.

Hardware synergy: The Vera Rubin platform

The technical differentiation behind STX lies in its integration with the Nvidia Vera Rubin platform. The architecture employs a storage-optimized BlueField-4 processor that combines:

  • The Nvidia Vera CPU: Handling the heavy lifting of complex logic.
  • Nvidia ConnectX-9 SuperNIC: Providing the ultra-low-latency pipe.
  • Nvidia Spectrum-X Ethernet: Ensuring the fabric can handle the scale of an AI factory.

By offloading storage tasks from the general-purpose CPU to this specialized STX architecture, Nvidia is claiming a fourfold jump in energy efficiency. In an era where power availability is the single biggest constraint on data center expansion, that’s not just a “nice-to-have” — it’s the difference between scaling or stalling.

Why the network and storage admin must come together

This announcement serves as a final notice: The silos must end. If you are a network administrator, you are now in the storage business. If you are a storage administrator, you are now in the networking business.

  1. The network is the storage bus: With BlueField-4 STX and Spectrum-X, the “storage” is no longer a box at the end of a wire; it is a distributed layer of the network fabric itself. Performance tuning now requires a deep understanding of RDMA, RoCE, and how data moves between the CMX layer and the GPU.
  2. Latency is the only metric that matters: In traditional enterprise apps, a few milliseconds of storage latency was annoying. In agentic AI, it’s catastrophic to “token flow.” Admins must work together to eliminate every microsecond of friction in the data path.
  3. Unified management: The STX architecture relies on Nvidia DOCA and Nvidia AI Enterprise software. This means the software stack for managing your network interface is the same stack managing your storage acceleration.

Built with extreme co-design

The Nvidia BlueField-4 STX architecture is a product of what Huang calls extreme co-design. At GTC, Nvidia held an analyst session on this topic and how the approach was used to create the new solution. Extreme co-design is a multidisciplinary engineering approach that treats the entire data center as a single, integrated unit to eliminate the traditional “wall” between networking and storage.

By tightly coupling the Vera CPU, ConnectX-9 SuperNIC and Spectrum-X Ethernet, Nvidia has created a distributed context layer that allows AI agents to access working memory with four times the energy efficiency and five times the token throughput of CPU-based designs. This synergy ensures that the network effectively becomes the storage bus, providing the ultra-low latency required for the multistep reasoning tasks of agentic AI.

Regarding the role of storage within this co-designed ecosystem, Senior Vice President of Networking Kevin Deierling noted: “Thinking takes planning. You write a to-do list. You need to store that somewhere, and so when Jensen was talking about STX and CMX, CMX is the cache optimized version of that. All of this needs to be optimized, because thinking requires memory, and that memory ultimately is part of this co-design optimization across the entire data center.”

This is just the latest product Nvidia has created using this methodology; others include Vera Rubin, Groq 3 LPX, Spectrum-X and IGX Thor. It’s this ability to think at a system level that has created the moat Nvidia seems to have around its business.

Broad industry momentum

The industry isn’t waiting around to see if this works. The list of partners is a “who’s who” of the infrastructure world.

  • Early adopters: Cloud and AI leaders such as CoreWeave, Oracle Cloud and Mistral AI are already moving toward STX for context memory.
  • Infrastructure partners: Heavyweights such as Dell Technologies, Hewlett Packard Enterprise, NetApp and Pure Storage (now Everpure) are co-designing systems based on this architecture.
  • Manufacturing: Supermicro and QCT are already building the physical STX-based racks.

The bottom line

It’s easy to look at BlueField-4 STX as a storage-optimized hardware refresh, but it’s bringing storage into the AI factory as an integrated component. It recognizes that storage for AI isn’t about long-term archiving — it’s about active reasoning.

For the information technology professional, the message from GTC is about staying ahead of the curve. Get out of your comfort zone and start learning the other side of the aisle. Storage and networking are coming together, and those engineers who work in silos will be on the outside looking in. The most successful data center architects of 2026 will be those who can speak “Spectrum-X” and “context memory” in the same breath.

Platforms based on STX are expected to hit the market in the second half of 2026. The clock is ticking.

As an industry, healthcare tends to be slow-moving and significantly behind others. There are many reasons for this, including budgets, availability of technology and the fact that any errors in healthcare can result in lost lives.

Healthcare transformation has been a big part of past Nvidia GTC conferences, and it was again this year. In fact, during his keynotes, Chief Executive Jensen Huang always calls out healthcare as being the industry where artificial intelligence can have the biggest impact on society.

At GTC26, the narrative around AI in healthcare changed. For years, we’ve talked about AI as a tool that lives on a screen, something that helps a radiologist spot a tumor or helps a researcher sort through data. But as Kimberly Powell, Nvidia’s vice president of healthcare and life sciences, made clear during an analyst-only session, the era of “screen AI” is over. We have entered the era of agentic and physical AI.

In the world of healthcare, a $10 trillion global industry currently facing an existential labor shortage, this shift isn’t just “cool tech.” It’s the only way the system survives. Powell put it bluntly: “AI is now hiring.” You aren’t just buying software anymore; you are hiring a digital or physical workforce to extend the reach of your clinicians.

Moving beyond the ‘AI scribe’ to agentic workflows

The last two years were defined by “AI scribes” — tools that turn spoken language into clinical notes. That was the opening act. Now, Nvidia is providing the “mosaic of agentic digital health platforms” that actually do the work.

Powell highlighted a company called Abridge, which illustrates this. It’s not just transcribing; it’s using generative AI to traverse and understand the patient journey. If a doctor mentions an MRI, the agent identifies that it doesn’t have the pre-authorization data. Instead of waiting six months for a manual back-and-forth, the agent handles it right there during the visit. As Powell noted, “these agentic systems can essentially have an agent call upon another agent, call upon another agent to traverse the otherwise workflow and journey of patients.”

The physical manifestation: healthcare robotics

While digital agents handle the paperwork, physical AI is moving into the operating room and the hospital hallways. Nvidia unveiled a massive suite of open tools at GTC designed specifically for healthcare robotics. This includes:

  • Open-H: The world’s largest healthcare robotics dataset with 700-plus hours of surgical video.
  • Cosmos-H: A model family that generates physically accurate synthetic surgical data so robots can “practice” in a digital twin before they ever touch a patient.
  • GR00T-H: A vision language action model that allows robots to understand text commands (like “pass the scalpel”) and translate them into precise physical motion.

Industry leaders like Johnson & Johnson, MedTec and CMR Surgical are already using these tools. The goal isn’t just a “robot arm” driven by a human; it’s a system with situational awareness that can manage instruments and sterile coordination in real time.

Solving the digital divide and the ROI problem

One of the biggest concerns I hear from chief information officers is the “digital divide.” Will only the elite, high-budget health systems in Boston or San Francisco get these robots? Powell’s answer followed the classic Nvidia playbook: Accelerated computing shrinks costs. She pointed out that while a consultation with an AI agent might have cost a dollar a few years ago, Nvidia’s latest hardware and software optimizations have driven that cost down to less than a cent.

By moving from capex (buying a multimillion-dollar robot) to opex (hiring AI as a service), rural and underfunded hospitals can finally compete. “We have to change this idea of capturing every user experience and feeding it back into the intelligence of the system to improve,” she explained.

The final bit: The ‘in silico’ revolution

It’s not just patient care that is being affected; AI can also revolutionize the lab. Traditionally, drug discovery was 90% “wet lab” (expensive, slow, manual) and 10% computer simulation. Nvidia is flipping that ratio.

With Nvidia BioNeMo, researchers can now model biology, DNA and chemical structures as if they were a language. Powell referenced a company at GTC that built an “AI scientist” capable of compressing six months of research into just 16 hours by spawning 200 agents to run analyses and write code.

Final thoughts

Nvidia is no longer just a “chip company” or even just a “platform company.” In healthcare, it has become a catalyst for modernization, enabling the evolution of the physical workforce and biological research.

Whether it’s an AI agent handling insurance claims, a humanoid robot delivering linens to a burnt-out nurse, or a generative model designing a new protein, the primary theme for healthcare from GTC was that the “AI factory” has arrived in medicine. If you’re a healthcare CIO and you aren’t looking at how to “hire” this technology to solve your staffing crisis, you’re already behind.

The intersection of professional sports and cloud computing has enabled leagues and organizations to accelerate innovation. However, the partnership between the PGA Tour and Amazon Web Services Inc. is currently entering a new phase: the hyper-personalized era.

This week, the golf world descended upon TPC Sawgrass for THE PLAYERS Championship to watch Cam Young take the title. AWS and the PGA Tour are using the event to debut a suite of technologies that doesn’t just track the ball but interprets the game.

At THE PLAYERS, the PGA Tour introduced TOURCAST Range and tested agentic production, an AI-driven service to enable broadcasters to solve the “unsolvable” problem of golf: how to cover 123 players spread across 200 acres simultaneously.

The TOURCAST Range: Visualizing the ‘work’ before the ‘play’

For decades, the practice range was a black box where players disappeared for hours, with fans having little insight into who was hitting well at the range and who wasn’t. At THE PLAYERS, that box is being pried open with the launch of TOURCAST Range.

Utilizing AWS’ and the Tour’s proprietary ShotLink powered by CDW platform, TOURCAST Range is a 3D interactive experience that brings the practice session to the fan’s screen. For the first time, fans aren’t just watching a video of a swing; they are seeing a digital twin of the range session. The system visualizes every shot with 3D traces and full scatterplots, providing granular metrics including:

  • Ball Flight Dynamics: Carry distance, ball speed, apex and curve.
  • Launch Mechanics: Precise launch and landing angles.
  • Practice Architecture: A breakdown of total balls hit and longest drives to show how a pro structures their warmup.

By incorporating year-to-date ShotLink stats into the range view, AWS enables fans to see whether players’ morning sessions match their seasonal form. This isn’t just data for data’s sake; it’s a narrative tool that allows fans to see a player struggling with a fade on the range before they ever step onto the first tee.

Agentic production: The rise of the AI director

Perhaps the most significant technical leap being tested at THE PLAYERS is the limited debut of Agentic Production on par-3 holes. In traditional broadcasting, selecting the right camera angle is a manual, labor-intensive process. A director sits in a truck, looking at dozens of monitors, and makes a split-second decision. Because of the cost and staffing required for this, roughly 70% of shots during a typical tournament never make it to air.

The Tour is using AWS AI, specifically Amazon Nova, to change the cost and speed equation. By evaluating camera data in real time, the AI ranks feeds based on framing, visibility and shot context. It essentially acts as an “AI Director,” identifying the most compelling angle and pairing it with real-time stats from ShotLink.

This is a “small-scale test” with big implications. The long-term goal is “Every Shot Live” across the entire season, not just at flagship events. By automating the selection process, the Tour can scale its content production exponentially without a linear increase in costs.

The favorite players hub: Hyper-personalization at scale

Though the new tools at THE PLAYERS focus on the “now,” the foundation for this season was laid earlier this year with the Favorite Players Hub. Golf fans are notoriously loyal to specific players, particularly in Europe, where a country may have only a single golfer on the Tour, but tracking a “niche” favorite through a standard leaderboard is difficult, especially if that player isn’t near the top.

The AWS-powered hub uses generative AI to curate personalized “storylines” for a fan’s chosen golfers. Instead of a generic highlights reel, users receive a feed of real-time stats, AI-generated summaries of their player’s round, and specific highlights — all updated automatically. This moves the PGA Tour app from a “pull” experience (where fans search for info) to a “push” experience (where the info finds the fan).

The technical engine: Agentic AI and the AWS partnership

The shift we are seeing is the transition from simple data collection to agentic AI. As highlighted in recent AWS technical insights, the Tour is moving toward systems that can take independent action — such as an AI “agent” that knows a player is approaching a milestone and automatically triggers a highlight package or a specialized data visualization for the broadcast.

The expanded partnership announced earlier this year cements AWS as the Tour’s Global Official Cloud Provider. It’s a move that transcends infrastructure; AWS is the architect of the Tour’s digital future, enhancing the World Feed and integrating artificial intelligence, machine learning and deep learning across the entire content lifecycle.

A legacy of innovation: AWS and PGA Tour (2021-present)

The current innovations at THE PLAYERS are the latest chapters in a partnership that began in 2021. To understand where they are going, it’s worth looking at what they’ve already built:

  • ShotLink powered by CDW Migration (2021): The partnership began by migrating the Tour’s massive library of historical data and real-time ShotLink data to the AWS cloud. This reduced latency and allowed for more complex calculations (like “Strokes Gained”) to be calculated in milliseconds.
  • Every Shot Live (2021-2022): AWS enabled the first iteration of “Every Shot Live” at THE PLAYERS, which required managing more than 30,000 shots across four days, providing a dedicated stream for every single player in the field.
  • TOURCast and 3D Realism (2023): The launch of the new TOURCast experience transformed the leaderboard into a video-game-like interface. Using AWS, the Tour began rendering 3D hole images and “Putt Path” technology, showing the exact break of a ball on the green. This turned static stats into visual stories.
  • Predictive analytics: Win Probability (2024): Leveraging AWS machine learning, the Tour introduced “Win Probability” and “Make/Miss Cut” metrics. By running 10,000 simulations every 15 seconds, the system gives fans real-time insight into how a single birdie might change a player’s entire season trajectory (a simplified simulation sketch follows this list).
  • AI commentary in TOURCast (2025): Before the agentic system, the Tour debuted automated play-by-play commentary. Powered by Amazon Bedrock, this feature provides written and (increasingly) audio context for shots, explaining not just what happened, but why it mattered — for example, “This par save keeps him inside the Top 10 projected FedExCup standings.”
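A stripped-down Monte Carlo version of that win-probability idea is sketched below: simulate the remainder of the tournament many times and count how often each player finishes on top. The field, scoring model and volatility figure are invented for illustration and are far simpler than the Tour’s actual models.

```python
import random

# Hypothetical scores to par with one round left; lower is better in golf.
field = {"Player A": -8, "Player B": -7, "Player C": -5}
ROUNDS_LEFT = 1
SIMULATIONS = 10_000

wins = {player: 0 for player in field}
for _ in range(SIMULATIONS):
    finals = {
        player: score + sum(random.gauss(0, 2.5) for _ in range(ROUNDS_LEFT))
        for player, score in field.items()
    }
    wins[min(finals, key=finals.get)] += 1  # ties ignored for simplicity

for player, count in sorted(wins.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{player}: {count / SIMULATIONS:.1%} win probability")
```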

The bottom line

The collaboration between AWS and the PGA Tour is an excellent example of how an organization can avoid “digital rot” by embracing the edge of the possible. By turning the practice range into a 3D data environment and testing AI-driven directing, they are ensuring that golf — a game played at a deliberate pace — is delivered at the speed of modern digital consumption.

Every information technology and business leader out there should use this as an example of continually asking “what’s possible.” The PGA Tour, like many businesses, has a loyal following, and one might think change isn’t necessary, but fans, customers, students and others will go where their experience is best. AI is changing experiences faster than at any time in history, and constantly rethinking the status quo is critical for long-term leadership.

The initial phase of the artificial intelligence gold rush was defined by “The Build.” Hyperscalers and model builders raced to secure every available Nvidia Corp. H100 GPU, constructing massive, centralized cathedrals of compute.

But as the industry descends from the peak of inflated expectations toward real-world utility, the conversation is shifting. AI is moving from the lab to the factory floor, the retail aisle and the telco edge.

At Nvidia’s annual GTC today in San Jose, Cisco Systems Inc. laid out its blueprint for this transition. Cisco’s message is that for AI to work in the enterprise, it requires more than just raw GPU power. It needs a “Secure AI Factory” — a full-stack, validated architecture that treats AI not as a science project, but as a high-value production line.

The shift from plumbing to intelligence

For decades, Cisco’s role in the data center was to provide the “plumbing” — the reliable, invisible pipes that moved data from point A to point B. But in an analyst briefing, Kevin Wollenweber, Cisco’s senior vice president and general manager of data center and internet infrastructure, explained Cisco’s role has fundamentally changed.

“The network has gone from just plumbing and infrastructure to really a critical component to what enables these models to learn and think,” he said. “Whether it’s connecting GPUs in a massive network efficiently to allow training workloads to run across tens of thousands of GPUs, or as we pivot more into inference, it’s about how we actually get low latency and high bandwidth access to storage.”

This shift is critical for Cisco and Nvidia customers alike. As workloads move from training (learning) to inference (doing), the bottleneck isn’t just the processor; it’s the ability to feed that processor data at the speed of thought. By integrating Nvidia’s Spectrum-X Ethernet platform with Cisco’s UCS compute and Nexus management, the two companies are attempting to standardize and simplify the AI stack. This is similar to the approach Cisco took with private cloud when it entered into a joint venture with VMware and EMC, and “VCE” created a turnkey, engineered solution for cloud.

The new KPI: Efficient token generation

Perhaps the most significant point mentioned in the briefing was the focus on “tokenomics.” In the enterprise, the value of AI is increasingly measured by the cost and speed of the output — the tokens. Wollenweber argued that the competitive moat for modern businesses will be built on how efficiently they can generate these tokens.

“The competitiveness for a lot of our customers is going to be around: how do we drive efficient token generation?” Wollenweber explained. “You’re going to have OpEx and engineering resources, but you have to look at actually how you can either leverage tokens efficiently or generate tokens efficiently to be able to grow in this ecosystem.”

This is why Cisco is pushing the “AI factory” concept. If an enterprise tries to “DIY” its AI infrastructure, it faces a “complexity tax” that drains token efficiency. By providing a validated “Secure AI Factory” stack, Cisco and Nvidia are offering a way to bypass the architectural heavy lifting, allowing customers to focus on the workloads that drive return on investment.

The rise of agentic AI and the security gap

The briefing also touched on a massive looming shift in AI architecture: the move from human-led prompts to agentic AI. We are moving into an era where autonomous agents communicate with other agents to execute complex workflows. Wollenweber shared how this is already changing his own work habits: “I think the agentic era that we’re in is going to drive a lot more of that [on-premises demand] than people probably realized. I go into a meeting, a closed-laptop type of meeting with my executive team, and I make sure I kick off six agents before I leave to go generate work and do work for me while I’m sitting in a meeting.”

This “agentic” workflow creates a massive security headache. How do you secure a conversation between two autonomous agents? Cisco’s answer is to fuse security into the fabric itself. By extending their Hybrid Mesh Firewall into the Nvidia BlueField Data Processing Unit ecosystem, Cisco is placing a security guard at every single GPU entrance.

The implication for customers is greatly simplified threat protection: security is no longer a “bolt-on” that adds latency; it is an offloaded process that happens on the DPU, ensuring that the “security tax” doesn’t slow down the “token generation.”

From the core to the ‘deep edge’

One of the most ambitious parts of Cisco’s GTC announcement is the expansion into the telco edge. Through a partnership with AT&T, Cisco is taking these AI factory concepts and pushing them into the mobility network.

The goal is to solve the “Mobile Edge Compute Hangover.” For years, telcos built edge compute sites that struggled to find a clear revenue stream. Wollenweber believes distributed inferencing — running AI tasks such as video analytics or real-time sensor processing close to the source — is the “killer app” the edge has been waiting for.

By bringing Nvidia RTX Pro GPUs into the Cisco UCS edge portfolio, they are enabling what Wollenweber calls “distributed intelligence.” This isn’t just about big H100 clusters; it’s about putting the right amount of compute in the right place to make a decision in milliseconds.

This could solve the age-old problem of how telcos make more money from new technology. Historically, each upgrade cycle has meant spending more, and the new technology reduces costs but rarely generates new revenue. With distributed inferencing, telcos have an opportunity to sell both the network and the token generation that runs over it, reversing the declining revenue curve that has plagued them for years.

The ‘half-life’ of AI hardware

Finally, the briefing addressed the elephant in the room: the staggering cost and rapid obsolescence of AI hardware. For a chief financial officer, spending tens of millions on GPUs is terrifying when the next generation is always six months away. Cisco is countering this fear with a focus on “Time to First Intelligence.” Through new service offerings, Cisco is aiming to get massive clusters up and running in days rather than months.

“We all know that this equipment has a very, very short half-life,” Wollenweber noted. “The longer it sits on the shelf, the less value you get out of it before next generations are released. The faster we can get things up and running and generating tokens, the better it is for customers.”

In one Asia-Pacific deployment, Cisco managed to get a 1,000-GPU cluster fully validated and running workloads in less than a week. This operational speed is the true value proposition of the Cisco-Nvidia partnership. It’s not just about the silicon; it’s about the “velocity of AI.”

Conclusion: The industrialization of AI

For information technology leaders, the takeaway from Cisco’s GTC announcements is that the era of AI experimentation is closing, and the era of AI industrialization is beginning. Cisco is no longer content to be the plumber. By integrating Nvidia’s accelerated computing with its own security, networking and observability tools, including Splunk, Cisco is positioning itself as the operating system for the AI factory.

As Wollenweber concluded, the goal is simple: “Enable our customers to build everything end-to-end required: to manage, monitor and react to anything that we see.” For the enterprise, the “Secure AI Factory” isn’t just a new product — it’s the infrastructure required to capitalize on the token-driven economy.

As one would expect, artificial intelligence was a top theme at the recent MWC conference in Barcelona, but 6G was certainly prominent as well. This year, the discussion pivoted from the maturation of 5G wireless networks to the “seamless path” toward 6G. But for those of us who have spent the better part of two decades watching G-cycles come and go, there was a healthy dose of skepticism at the show.

We’ve seen this movie before: massive capital expenditure, the promise of “revolutionary” services and the eventual, quiet realization that we’ve built a faster highway, only to struggle to persuade anyone to pay a higher toll.

The core question facing the industry at MWC26 isn’t just “What is 6G?” It’s whether we are finally moving past the era of infrastructure-for-the-sake-of-infrastructure and into an era of intelligence-for-the-sake-of-value.

The 6G evolution: More than just a speed bump

If 5G was defined by raw performance and massive connectivity, the path to 6G is fundamentally different. It is not a call to “rip and replace” the legacy we’ve spent billions building. The consensus — or at least the pragmatic view shared by industry leaders — is that 6G must be an evolution, not a reboot.

The pivot lies in moving away from viewing the network as a static pipe. Instead, the vision for 6G is an AI-native infrastructure. In this model, intelligence is not an overlay or a secondary software feature; it is woven from day one into the silicon, the Radio Access Network, or RAN, and the core.

The chronic monetization struggle

It’s no secret that the telecom industry has a “value capture” problem. When we look back at the 5G rollout, while the network performance improved significantly, the revenue models remained stubbornly tied to legacy consumption-based billing. Operators have spent years optimizing for internal efficiency — making the network “faster” and “denser” — but they have largely failed to identify and sell new, high-margin, revenue-generating services that consumers and enterprises actually recognize.

We have spent twenty years talking about “AI-powered services,” yet the examples that move the needle remain frustratingly scarce. We see occasional flashes of brilliance, such as T-Mobile’s live translation feature — an example of intelligence in the pipes — but these remain the exception. The rest of the effort has been focused on internal efficiencies, such as optimizing power usage or automating maintenance. Though these are excellent for the bottom line, they don’t generate new top-line growth.

Intel and the ecosystem: A ‘no moats’ strategy

A major factor in the industry’s slow pace of innovation has been the verticalization of solutions, which often traps operators in proprietary, walled-garden architectures. Intel Corp. is currently trying to break this cycle through a “no moats” strategy, providing an open, common platform — specifically the Xeon 6 family — that spans the entire network.

This open approach is gaining significant traction across the global telecom landscape. The list of operators actively leveraging Intel’s silicon for this transition is telling:

  • One of the major U.S. operators (still under NDA) is deploying vRAN (virtualized RAN) on Intel’s platforms to drive operational efficiency.
  • NTT and NTT DoCoMo are collaborating with Intel and Ericsson to modernize their networks, focusing on the integration of AI-ready infrastructure.
  • Vodafone is utilizing Xeon 6 processors for ORAN and vRAN deployments, signaling a move toward more flexible, software-defined radio networks.
  • Rakuten Mobile continues to be a bellwether for cloud-native, virtualization-based network deployments.
  • SK Telecom is deep in the core network modernization effort, proving that even the most complex, high-traffic nodes can thrive on a virtualized Intel backbone.

These partners are performing massive, carrier-grade due diligence on total cost of ownership and power efficiency, which is becoming increasingly critical as they introduce more compute-intensive AI workloads into the network.

Why AI changes the calculus (finally)

Is this time different? It might be, provided the industry shifts its focus from peak model science to operational scalability. The breakthrough isn’t going to come from a massive, monolithic AI model that solves everything. Instead, it’s coming from the rise of small language models and specialized inferencing tasks that run at the network edge.

Carriers are now looking at models with hundreds of millions or single-digit billions of parameters — models that are small, fast and cost-effective enough to run on standard server hardware without needing a specialized, power-hungry AI farm. This is where the “right compute for the right workload” philosophy comes into play. By leveraging existing, open-platform silicon, telcos can move inferencing closer to the data.

This allows for:

  • Real-time optimization: Using AI to improve channel estimation and link adaptation on the fly.
  • Predictive maintenance: Moving from reactive, “break-fix” models to proactive network health management.
  • Hardened security: Employing silicon-level features such as Crypto Acceleration and Trusted Domain Extensions to handle the increased threat landscape of AI-driven cyberattacks.
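
As a rough illustration of the “right compute for the right workload” point above, the sketch below runs a compact language model on a plain CPU server using the open-source Hugging Face transformers library. The model name and the prompt are placeholder assumptions and are not tied to any operator deployment mentioned here.

```python
# Minimal CPU-only inference sketch for a small language model.
# The model name and the prompt are placeholders; any compact
# instruction-tuned model would serve the same illustrative role.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-0.5B-Instruct",  # assumed example of a sub-1B-parameter model
    device=-1,                           # -1 = run on CPU; no dedicated AI farm needed
)

# A network-operations style prompt, purely illustrative.
result = generator(
    "Summarize the likely cause of rising packet loss at cell site A12:",
    max_new_tokens=64,
)
print(result[0]["generated_text"])
```

Models of this size fit comfortably in the memory and power envelope of the general-purpose servers telcos already deploy at the edge, which is why the edge-inferencing argument no longer hinges on specialized accelerators.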

The verdict: A programmable future

6G will succeed or fail based on whether the industry can bridge the gap between “network efficiency” and “service innovation.” We are finally moving into a phase where the silicon is capable, the software frameworks — such as OpenVINO — are mature, and the infrastructure is ready to host intelligence natively.

The technology is no longer the bottleneck; the bottleneck is the business model. The operators that win in the 6G era won’t necessarily be the ones with the fastest peak speeds; they will be the ones who treat their network as a programmable, AI-native platform capable of launching new, high-value services in weeks, not years.

My message to the infrastructure providers and telcos coming out of MWC26 is to stop talking about the “future of 6G” and start proving it with the hardware and software we have today.

Enterprises are currently fighting a two-front war. On one side, there is an aggressive push toward AI adoption; on the other, an infrastructure landscape so fractured across edge, cloud and on-premises sites that scaling becomes nearly impossible.

This “complexity tax” is stalling innovation. For the modern operations team, the dream of lightning-fast artificial intelligence is being deferred by the manual labor of managing a dozen disconnected tools that provide plenty of alerts but almost no actual signal.

This week at F5 AppWorld in Las Vegas, the conversation shifted from the “what” of AI to the “how.” The message from F5? Organizations cannot secure or scale the AI era using a “Frankenstein” architecture of disconnected point products. To move forward, the industry is eyeing a massive consolidation of the networking and security stacks — a move toward what F5 calls the Application Delivery and Security Platform or ADSP.

This is something F5 has been moving toward for years. The company has long been the undisputed leader in application delivery controllers, or ADCs, despite many companies both big and small taking runs at that business. Along the way, F5 has built a strong security portfolio, and the coming together of the two products, XOps and F5 Insight, resulted in the ADSP.

The three friction points holding back AI

Before organizations can move forward with autonomous AI agents, they must resolve three fundamental conflicts currently stalling adoption:

1. The signal-to-noise ratio. Modern information technology environments are saturated with data but starved for information. “They have a dozen tools, a thousand alerts and not enough signal,” Kunal Anand, chief product officer at F5, noted in a briefing with analysts. Without unified observability, identifying a bottleneck in an AI training pipeline or a security flaw in a large language model becomes a forensic exercise rather than a real-time fix.

2. The agentic security gap. As we shift from chatbots to agentic AI — where AI agents autonomously interact with APIs to execute tasks — the attack surface expands dramatically. Traditional web application firewalls or WAFs were built for human-to-app interactions. They are often blind to the Model Context Protocol traffic that defines the AI-to-AI economy (a rough sketch of that traffic follows this list).

3. The looming shadow of “Q-Day.” While AI is the immediate priority, the “store now, decrypt later” threat of quantum computing is forcing a rethink of encryption. Organizations are hesitant to overhaul their entire delivery stack for AI if it isn’t also “crypto-agile” enough to survive the transition to post-quantum cryptography or PQC.
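
For readers unfamiliar with MCP, the hedged sketch below shows roughly what a single tool invocation looks like on the wire. MCP frames its requests as JSON-RPC 2.0; the tool name and arguments here are invented for illustration and are not drawn from any real deployment.

```python
# Rough sketch of an MCP-style tool call. MCP frames requests as JSON-RPC 2.0;
# the tool name and arguments below are invented for illustration only.
import json

tool_call = {
    "jsonrpc": "2.0",
    "id": 42,
    "method": "tools/call",
    "params": {
        "name": "lookup_customer",              # hypothetical tool exposed by an MCP server
        "arguments": {"account_id": "A-1001"},
    },
}

# A WAF tuned for browser traffic sees only an opaque POST body; an AI-aware
# control point has to parse this structure to know which tool an agent is
# invoking, and with what arguments, before deciding whether to allow it.
print(json.dumps(tool_call, indent=2))
```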

F5’s strategy: Collapsing the ‘mess’ into a platform

F5’s announcements this week center on the idea that application delivery and security are no longer separate domains. During the briefing, Chief Marketing Officer John Maddison emphasized that the goal is to be a “control point” regardless of where the app lives — whether it’s on-premises, in the cloud or sitting on Nvidia DPUs.

“F5 ADSP collapses that mess into a platform,” he explained. “With F5 Insight, we turn scattered telemetry into a clear story and the next best action. Then we extend that foundation for agentic AI workloads and future-focused cryptography, because the infrastructure is changing, ready or not.”

Key evolutions in the ADSP stack

To address these hurdles, F5 unveiled several major enhancements to its platform:

  • F5 Insight for ADSP: This is the “brain” of the operation. It leverages OpenTelemetry to provide unified visibility across hybrid and multicloud environments (a minimal telemetry sketch follows this list). Crucially, it uses AI-driven proactive guidance to generate “operational narratives,” allowing teams to prioritize vulnerabilities through natural language rather than digging through logs.
  • BIG-IP v21.1: A significant update for F5’s flagship software, introducing NIST-compliant PQC ciphers to protect against future quantum threats. It also adds Dynamic Client Registration to empower agentic AI with secure, automated resource access. The AI-WAF has been deeply integrated to secure the specialized traffic used by LLMs.
  • AI Remediate: Bridging the gap between “red teaming” (finding holes) and “guardrails” (blocking them), this new tool automates the creation of security policies to protect AI models in production.
  • NGINX Agentic Observability: By inspecting MCP metadata directly in the traffic path, NGINX now provides visibility into “shadow AI” activity — AI agents interacting with services without explicit IT oversight.
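
F5 has not published the internals of Insight, but the OpenTelemetry plumbing it builds on is an open standard. A minimal sketch of emitting a trace span to a generic OTLP collector looks like the following; the endpoint, service name and attribute keys are assumptions for illustration, not F5-specific values.

```python
# Minimal OpenTelemetry tracing sketch. The collector endpoint, service name
# and attribute keys are assumptions for illustration; nothing here is F5-specific.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

provider = TracerProvider(resource=Resource.create({"service.name": "checkout-api"}))
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4318/v1/traces"))
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("observability.demo")

# Each request the application handles emits a span that a unified platform
# can correlate with network, security and model-level telemetry.
with tracer.start_as_current_span("inference-request") as span:
    span.set_attribute("llm.model", "example-model")  # assumed attribute key
    span.set_attribute("http.status_code", 200)
```

Because the data leaves the application in a vendor-neutral format, the same spans can feed F5 Insight, Splunk or any other backend, which is the kind of feed an “operational narrative” layer can sit on top of without yet another proprietary agent.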

The transformative value: Beyond the perimeter

The shift toward a unified platform isn’t just about technical elegance; it’s about business velocity. Research from IDC suggests that by integrating these layers, organizations can “optimize operations, strengthen security and scale across their environments” more effectively than with siloed tools.

For the C-suite, the value proposition is clear: Convergence equals lower risk. By replacing dozens of disparate SKUs with simplified, value-driven bundles — as seen in F5’s new Distributed Cloud Services packaging — companies can reduce tool sprawl and proxy overload.

As Maddison noted during the briefing, the market for cybersecurity is expanding by roughly $3 billion thanks to AI-enabled applications. However, capturing that value requires an infrastructure that is “AI-aware.” Whether it’s ensuring session persistence for AI workloads or providing air-gapped API security for highly regulated industries, the platform approach is becoming the only viable way to manage the “geo-repatriation” of data and the rise of the agentic economy.

The bottom line

The “Age of AI” is quickly becoming the “Age of Complexity.” The winners won’t just be the companies with the best models, but those with the most resilient, observable and converged delivery platforms. As F5 works to unify its entire stack, from BIG-IP to NGINX to Distributed Cloud, into a singular, intelligent fabric, it is helping customers simplify an increasingly complex environment and secure themselves for the AI-first future.

For years, the industry conversation around stadium technology has been stuck on a single, albeit important, metric: How many thousands of fans can simultaneously post a selfie to Instagram?

Though the “connected stadium” was once a differentiator, it has rapidly become a baseline requirement. I recently talked to the leadership at Ruckus Networks and the Los Angeles Football Club about the recent deployment of Wi-Fi 7 at BMO Stadium (pictured), and one of the big takeaways is that the narrative around high-density Wi-Fi has shifted.

The measuring stick is no longer “more bars” or faster social media uploads. Instead, the Wi-Fi network is shifting from an on-ramp for a connected spectator experience to a highly deterministic, operationally intelligent digital ecosystem.

The catalyst for this shift is the arrival of Wi-Fi 7, which is more than just a speed upgrade from older generations. While the industry has been busy debating the merits of private cellular versus Wi-Fi, the reality is that Wi-Fi 7, with its ultra-wide channels, multi-link operation and improved reliability, is turning the stadium into a high-performance lab for innovation.

The determinism factor: Moving beyond ‘best effort’

The biggest limitation of previous Wi-Fi generations in high-density environments was the “best effort” nature of the connection. In a stadium filled with 22,000 shouting fans — all armed with mobile devices — the sheer noise floor could lead to latency spikes and dropped packets. For a fan trying to check a score, this is an annoyance. For a stadium operator relying on that network to process a payment, verify a ticket or scan a biometric identity, it is a business risk.

Wi-Fi 7 changes the equation. By introducing features like preamble puncturing, which allows the network to “ignore” or “carve out” interference in a channel rather than abandoning the entire spectrum, stadiums can now achieve a level of determinism that previously required expensive, dedicated private cellular infrastructure.

As LAFC Chief Technology Officer Christian Lau noted in our recent discussion, the network has evolved from a utility to a mission-critical asset. “Selecting Ruckus to build the first Wi-Fi 7 network in MLS was a strategic decision to extend our leadership on and off the pitch,” he said. “This network is the backbone for our entire digital ecosystem, ensuring seamless experiences from mobile ticketing and concessions to immersive fan engagement for every one of our 22,000 guests.” When you have a mission-critical system, like biometric access control or automated retail, you can’t afford “best effort.” Guaranteed latency is mandatory.

New use cases: The ‘store-in-a-box’ reality

So, what does this new level of connectivity look like in practice? It is enabling a new generation of operational flexibility that wasn’t possible even three years ago.

  • Autonomous retail: There has been a sharp rise in grab-and-go retail environments, such as Amazon’s “Just Walk Out.” Previously, these setups required permanent, hard-wired infrastructure. With the throughput of Wi-Fi 7, venues can deploy “store-in-a-box” concepts. A stadium can spin up a temporary merchandise stand in an ancillary concourse, connect it to the network wirelessly and be fully operational in hours rather than weeks.
  • Enhanced biometric security: As stadiums move toward biometric-led entry, the stakes for connectivity rise. Systems such as Zonar, which utilize radar to detect potential threats, require robust, low-latency connectivity to function in real time. Moving these safety systems onto a secure, enterprise-grade Wi-Fi 7 network provides the agility to reconfigure entry points and security perimeters based on the specific needs of an event.
  • The content factory: Sports organizations are increasingly media companies. They need to ingest, edit and distribute high-definition content from the pitch to their servers in real time. By employing Wi-Fi 7, broadcast crews can dump massive RAW files directly from the sideline without physical cabling, enabling a faster turnaround for social media engagement and marketing.

The future-proof foundation

A common point of contention is whether private cellular will displace Wi-Fi in sports venues. Though this was a serious debate among venue operators a few years ago, reality has set in, and the future of the stadium is a converged, multi-access environment.

Private cellular remains ideally suited for broad coverage and high-mobility use cases — such as tracking assets moving across a vast parking lot. However, for the high-density environment of the seating bowl, Wi-Fi 7 is the superior economic and technical choice. Furthermore, the ability to offload data traffic from expensive cellular networks onto the stadium’s Wi-Fi is a major revenue opportunity that venue operators are only just beginning to monetize.

As Bart Giordano, senior vice president and president of Ruckus Networks, emphasized, “this installation isn’t just about faster Wi-Fi; it’s about providing a reliable, enterprise-grade digital foundation that LAFC can build upon for years to come — powering new applications and revenue opportunities that engage a new generation of fans.”

The bottom line

The most forward-thinking chief information officers in the industry, such as those at LAFC, have stopped thinking of the network as a “utility” and started thinking of it as an “asset.” When a stadium can dynamically reconfigure its connectivity based on the event, it isn’t just saving on cabling costs; it is opening up new revenue streams and operational efficiencies.

The transition to Wi-Fi 7 is about much more than speed tests or bragging rights. It’s about building a digital foundation that is flexible, reliable and intelligent enough to support the next decade of fan engagement. The stadiums that embrace this shift won’t just be “connected” — they will be future-proofed.
