Cisco Systems Inc. has long been regarded as the market leader in networking, but over the past few years, the company has strived to position itself as “critical infrastructure for the artificial intelligence era.” It now seems to be making headway with that as the stock hits an all-time high.
This week at Cisco Live EMEA in Amsterdam, Cisco delivered another payload of innovation aimed at helping customers move their AI beyond the “chatbot phase” and into the agentic era. Agentic AI promises a marked improvement in productivity because it goes far beyond humans asking AI questions: it enables autonomous agents to perform complex tasks, reason through workflows, and interact with enterprise data at scale.
Though the vision of agentic AI paints a rosy future, it’s important to note that traditional infrastructure wasn’t cut out for the rigors of AI and most companies will be looking to do the most significant technology refresh since the early days of the internet. Cisco has been methodically upgrading its portfolio to meet these new demands. Here are the five most significant announcements from Cisco Live EMEA:
1. Silicon One G300: Terabit switching
The lead product item was the debut of the Cisco Silicon One G300, switching silicon capable of a massive 102.4 Tbps of bandwidth and optimized for scale-out networks. As AI clusters grow toward “gigawatt scale,” the network often becomes the bottleneck. The G300 tackles this with Intelligent Collective Networking, which provides 2.5 times better burst absorption than alternatives.
For AI, the ability to handle bursts of traffic is critical to ensure data is delivered to AI systems consistently and reliably, even over long distances. Intelligent Collective Networking uses a combination of network features, including shared packet buffering, path-based load balancing and network telemetry, to improve performance.
In real-world terms, Cisco claims the new silicon can deliver 33% higher network utilization and a 28% reduction in job completion time compared with non-optimized path selection, which would lead to more tokens generated at lower cost. Based on my familiarity with off-the-shelf Ethernet, these claims seem reasonable, if not a bit conservative.
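As a rough sanity check on what that job-completion-time figure implies for output, assuming throughput scales inversely with job time (my simplification, not Cisco's math):

```python
# If each training job finishes in 72% of the time (a 28% JCT reduction),
# the same cluster completes 1/0.72 jobs in the time it used to complete one.
jct_reduction = 0.28
speedup = 1 / (1 - jct_reduction)
extra_throughput_pct = (speedup - 1) * 100
print(f"Jobs per unit time: {speedup:.2f}x "
      f"(~{extra_throughput_pct:.0f}% more tokens for the same cluster-hours)")
```

In other words, a 28% JCT reduction is closer to a 39% throughput gain, which is why job completion time is the metric AI infrastructure vendors fight over.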
2. ‘AgenticOps’: The new IT operating model
Perhaps the most ambitious shift for Cisco is the expansion of AgenticOps. Cisco is moving from AI that merely observes to AI that reasons, decides, and acts. This isn’t just one tool; it’s a suite of autonomous capabilities integrated across networking, security and observability.
Key innovations include:
- Autonomous troubleshooting: End-to-end investigations that can cut Mean Time to Resolution from hours to minutes by validating multiple hypotheses simultaneously.
- Continuous optimization: Agents that autonomously tune RF, QoS and pathing to maintain user experience before a human even notices a degradation.
- Trusted validation: Risk-aware agents that assess network changes against live topology to identify potential “blast radius” issues before they cause an outage.
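The “validate multiple hypotheses simultaneously” idea behind autonomous troubleshooting can be sketched in a few lines. The check functions and their results below are hypothetical stand-ins for illustration, not Cisco's implementation:

```python
from concurrent.futures import ThreadPoolExecutor

# Each function probes one candidate root cause; running them in parallel,
# rather than one engineer checking them serially, is what collapses
# mean time to resolution from hours to minutes.
def check_dns_failure():
    return ("DNS resolution failing", False)   # stand-in result

def check_link_saturation():
    return ("Uplink saturated", True)          # stand-in result

def check_cert_expiry():
    return ("TLS cert expired", False)         # stand-in result

hypotheses = [check_dns_failure, check_link_saturation, check_cert_expiry]

with ThreadPoolExecutor() as pool:
    futures = [pool.submit(h) for h in hypotheses]
    results = [f.result() for f in futures]

confirmed = [name for name, hit in results if hit]
print("Confirmed root causes:", confirmed)
```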
The concept of a “self-driving network” is something the industry has bandied about for years, but historically IT pros have been cool to the idea. Over the past year, I’ve noticed a significant change in attitude: Engineers are starting to understand that AI is here as a tool that lets them work faster and smarter. I expect Cisco to keep adding agentic capabilities, with a roadmap to fully autonomous operations somewhere in the next 24 to 36 months.
3. AI defense: Guarding the agentic supply chain
As agents become more autonomous, the security risks become more “semantically complex.” To address this, Cisco launched the biggest update to its AI Defense solution since it was initially announced. The highlight of this release is the AI Bill of Materials, which provides visibility into AI software assets and third-party dependencies. This is significant because it shifts security from tracking code to tracking “intent,” providing the visibility needed to manage the unique risks of autonomous agents. By documenting models and data dependencies, it allows enterprises to secure the AI supply chain against semantic threats that traditional firewalls simply can’t see.
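To make the AI Bill of Materials idea concrete, here is a hypothetical sketch of what such a record might track. The field names and values are my illustration of the concept, not Cisco's schema:

```python
import json

# An AI BOM tracks what an agent is built from and what it is allowed to
# touch -- models, data dependencies and tools -- so risk reviews cover
# "intent," not just code packages.
ai_bom = {
    "application": "support-triage-agent",  # hypothetical example app
    "models": [
        {"name": "llm-base", "version": "2026.1", "provider": "third-party"},
    ],
    "data_dependencies": [
        {"source": "ticket-history-db", "contains_pii": True},
    ],
    "tools": ["crm-api", "email-sender"],   # actions the agent may invoke
}
print(json.dumps(ai_bom, indent=2))
```

The security value comes from auditing entries like `contains_pii` and the tool list before an agent ships, the same way a software BOM is audited for vulnerable packages.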
Furthermore, Cisco is introducing Advanced Algorithmic Red Teaming. Unlike traditional security that looks for a single “bad” prompt, this uses adaptive multi-turn testing to see how an agent behaves over a long conversation. It’s designed to stop “poisoned tools” or prompts that try to hijack an agent’s authority.
At Cisco’s AI Summit, Amazon Web Services Inc. Chief Executive Matt Garman offered an analogy which highlighted the importance of guardrails. He explained that if one puts a board across a canyon, that person would crawl or walk very slowly across. That same board with handrails allows you to run. AI Defense gives companies confidence that their AI is doing what it should do and enables them to adopt it much faster.
4. Full-stack post-quantum cryptography
In an industry-first move, Cisco announced full-stack PQC protections within its new IOS XE 26 operating system. This defends against the “harvest now, decrypt later” strategy, in which adversaries capture encrypted traffic today in hopes of decrypting it once quantum computers mature.
As AI workflows increasingly involve long-lived, sensitive data, the threat of future quantum computers cracking today’s encryption is real. Cisco is embedding PQC across its new 8000 Series Secure Routers and C9000 Smart Switches, aligning with evolving global regulatory guidance and ensuring that data remains encrypted even in the quantum age.
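Conceptually, PQC rollouts typically hedge with hybrid key establishment: the session key is derived from both a classical shared secret and a post-quantum KEM secret, so it stays safe unless both schemes are broken. The sketch below is a toy illustration of that idea only; real stacks use standardized KEMs (e.g., ML-KEM) and KDFs (e.g., HKDF), and I'm not describing Cisco's actual implementation:

```python
import hashlib

# Placeholder byte strings stand in for real negotiated secrets.
classical_secret = b"ecdhe-shared-secret-placeholder"   # classical key exchange
pqc_secret = b"ml-kem-shared-secret-placeholder"        # post-quantum KEM

# Combining both inputs means a future quantum break of the classical
# exchange alone does not expose the session key.
session_key = hashlib.sha256(classical_secret + pqc_secret).digest()
print("Derived session key:", session_key.hex()[:16], "...")
```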
This should appeal to regulated industries, governments or any organization where the time value of its data spans several years. The timeline around quantum is still uncertain, but it’s good that Cisco offers some protection today for when it arrives.
5. Nexus One: The unified AI fabric
To simplify the sheer complexity of these new technologies, Cisco is unifying its data center strategy under Nexus One. This is an integrated solution that brings together silicon, systems (such as the new N9000), optics and software under a single operating model.
A notable feature is the Native Splunk Platform Integration, expected in March, which allows customers to analyze network telemetry directly where it resides. This is critical for sovereign cloud deployments where data locality and compliance are paramount. Essentially, Cisco is giving enterprises a “single pane of glass” to manage everything from traditional workloads to massive AI training clusters.
During his Q&A with analysts, President and Chief Product Officer Jeetu Patel talked about Cisco’s evolution into a platform, or systems, company. This is a good example, as Cisco historically had good technology but much of it was deployed in silos. Since Patel took over product, Cisco has been much more focused on delivering value at the “Cisco” level rather than through individual products.
The bottom line
Coming out of Cisco Live EMEA 2026 this week and AI Summit last week, it’s easy to see that the era of AI as a feature is coming to an end. We have entered the era of AI as the infrastructure.
By combining massive 100T silicon with autonomous “AgenticOps” and post-quantum security, Cisco is betting that the winner of the AI race won’t just be the company with the best model, but the company with the most resilient, secure and automated network to run it on. When ChatGPT burst on the scene, few thought of Cisco as an AI company, but it has delivered products consistently to help its customers move from AI vision to reality.
In the world of professional sports, “data-driven” is often a term tossed around to describe basic box scores. But for the National Football League, the last 10 years have represented a fundamental shift in how the game is measured, analyzed and even played.
This week, as the league reflects on a decade of its Next Gen Stats or NGS platform, the story isn’t just about football — it’s an excellent example of how cloud-native infrastructure and machine learning can transform an industry in real time.
What began in 2015 as a tentative experiment with radio-frequency identification, or RFID, tags has flourished into an AI-driven platform for experimentation and decision-making, fueled by Amazon Web Services Inc. Today, the partnership between the NFL and AWS serves as a model for the “intelligent enterprise,” processing millions of data points per game to deliver insights that were once considered impossible to quantify.
The origin: From tracking to intelligence
A decade ago, the NFL’s “Next Gen” journey started with hardware. The league embedded RFID chips into every player’s shoulder pads and within the football itself. Twenty ultra-wideband receivers were mounted in every stadium to capture the X/Y coordinates of all 22 players 10 times per second, and the ball 25 times per second.
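Those sampling rates compound quickly. A back-of-the-envelope calculation, assuming a roughly three-hour broadcast window (the game length is my assumption):

```python
# 22 players sampled 10x/sec, plus the ball at 25x/sec.
players, player_hz = 22, 10
ball_hz = 25
samples_per_sec = players * player_hz + ball_hz   # positional samples per second

game_seconds = 3 * 3600 + 11 * 60                 # assumed ~3h11m broadcast window
total_samples = samples_per_sec * game_seconds
print(f"{samples_per_sec} samples/sec -> ~{total_samples / 1e6:.1f}M "
      f"raw position samples per game")
```

That raw positional stream is before any derived metrics, which is consistent with the “millions of data points per game” figure above.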
“Football, for 100-plus years, has been a box score game,” Mike Band, NFL’s senior manager of research and analytics, noted in a recent retrospective. “You had yards, touchdowns and tackles. But those numbers only captured a sliver of what unfolded on the field.”
The early years focused on low-hanging fruit, metrics such as top speed and player separation. However, the raw data was just the substrate. The real breakthrough came in 2017 when the NFL formalized its partnership with AWS, moving the project from a tracking experiment to critical league infrastructure. By 2018, the league opened its tracking data to all 32 teams, putting every franchise on a common analytical footing.
Scaling the stack: The SageMaker era
The complexity of the questions grew: How difficult was that catch? What is the probability of a sack? The NFL needed more than just data storage; it needed advanced machine learning capabilities. The league turned to Amazon SageMaker to build, train and deploy models that could handle the high-velocity data streaming from the field.
In addition to SageMaker, the NFL has adopted many AWS tools, including Amazon Quick, which is an agentic AI-enabled workspace that acts as a set of “teammates” for business users. The NFL is using Quick to deliver real-time, interactive visualizations and answers to different stakeholders, including fans, broadcasters and analysts.
The first major milestone of this partnership was “Completion Probability,” launched in 2018. Built using an XGBoost machine learning model, it factored in 10 variables, including receiver separation and quarterback pressure, to assign a percentage to the likelihood of a catch.
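The production NGS model is a trained XGBoost ensemble over ten tracking features, but the shape of the computation can be illustrated with a toy logistic function. The weights and feature choices below are made up for illustration, not the NFL's actual model:

```python
import math

def completion_probability(separation_yds, qb_pressure, air_yards):
    """Toy stand-in for a catch-probability model over tracking features."""
    # Hypothetical weights: more separation helps, pressure and depth hurt.
    z = -0.5 + 0.8 * separation_yds - 1.2 * qb_pressure - 0.05 * air_yards
    return 1 / (1 + math.exp(-z))   # squash score into a probability

# A receiver with 3 yards of separation, no pressure, 12 air yards:
p = completion_probability(separation_yds=3.0, qb_pressure=0.0, air_yards=12.0)
print(f"Catch probability: {p:.1%}")
```

The real model replaces these hand-set weights with thousands of learned decision trees, but the input/output contract is the same: tracking features in, a percentage out in under 100 milliseconds.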
Today, that single model has evolved into a library of more than 75 machine learning models running simultaneously. What’s equally impressive is the sheer scale of the data being generated and analyzed:
- Data ingestion: Every snap triggers the creation of a massive amount of physical data.
- Latency requirements: Models must return results in under 100 milliseconds to be relevant for live broadcasts.
- Volume: The system now produces between 500 and 1,000 unique stats per play.
It’s important to note that though NGS was initially created for broadcasters and fans, the data backbone now underpins everything from officiating and schedule creation to the “Digital Athlete” — an AWS-powered injury prediction tool that helps teams identify when players are at increased risk of injury.
During a media panel in San Francisco this week, Julie Souza, global head of sports for AWS, and Mackenzie Herzog, vice president of player health and safety for the NFL, discussed the impact AI has had on injuries. They explained that the combination of the Digital Athlete and tens of thousands of simulated games led to the dynamic kickoff rule, the banning of the hip-drop tackle and a redesign of helmets, all of which contributed to the lowest injury rate the NFL has seen in decades.
Decoding the ‘game within the game’
One of the most recent and complex innovations to come out of the AWS-NFL lab is “Coverage Responsibility.” For decades, defensive performance was a statistical “black box.” If a quarterback didn’t throw at a cornerback, that corner’s effectiveness was invisible in the box score.
Using spatio-temporal transformer architectures, the same type of technology behind modern large language models, NGS can now identify defensive assignments in real time. The system can tell if a safety was disguising a coverage pre-snap or if a cornerback was “left on an island” in man coverage. This transforms the eye test of scouts into hard, verifiable data.
The league has also democratized this innovation through the “Big Data Bowl,” an annual competition where data scientists from outside the NFL are invited to solve league problems using tracking data. Many of the metrics seen on Amazon Prime’s Thursday Night Football today, such as “Pressure Probability,” originated as submissions from this open-source community.
The next frontier: Optical tracking and skeletal data
As the league looks toward the next decade, the NFL is already moving beyond the X/Y coordinate. The next evolution of Next Gen Stats involves “optical tracking,” using 4K camera arrays to capture the full 3D pose of a player.
Instead of seeing a player as a single dot on a screen, the system will soon track 30-plus points on a player’s body, including joints such as elbows, knees and hips. This skeletal data will unlock a new dimension of biomechanical analysis, allowing teams to analyze a quarterback’s throwing motion or a lineman’s leverage with millimeter precision.
Lessons for IT leaders from Next Gen Stats
For IT leaders and enterprise architects, the NFL’s decade with AWS offers three key takeaways:
- Context is king: Raw data is a liability until it is contextualized by machine learning models.
- Infrastructure dictates innovation: You cannot run real-time AI on legacy, siloed systems. The NFL’s AWS cloud stack is what makes subsecond inferencing possible.
- The ecosystem approach: By combining internal expertise with external talent (such as through the Big Data Bowl) and vendor partnerships (such as with AWS scientists), the NFL accelerated its R&D cycle by years.
A decade ago, Next Gen Stats was a novelty. Today, it has become one of the most critical components of the NFL. As the league moves into an AI-first future, the “Next Gen” label seems almost modest. Business leaders should follow the continuous innovation model the NFL went through to deliver immediate value on the data being generated and then build on the success.
Cisco Systems Inc. held its second annual AI Summit this week, with a star-studded lineup of artificial intelligence celebrities. Unlike most vendor events, the Cisco AI Summit was designed to be a “meeting of the minds,” bringing together the “builders of the AI economy” to help the industry move past the hype and address the practical realities of a world being reshaped by AI. From the shift toward agentic workflows to the demographic necessity of automation, here are five key thoughts that defined the summit:
1. 2026: The year agentic AI goes into production
Though 2025 was defined by widespread experimentation, the consensus among summit leaders is that 2026 marks the official turning point for agentic AI — autonomous systems capable of reasoning, planning and executing complex tasks.
Leading off, Cisco Chief Executive Chuck Robbins noted, “For all the enterprise customers who are here this week, we all believe 2026 is going to be a turning point for AI — this will be the year of agentic applications.” OpenAI Group PBC CEO Sam Altman echoed this sentiment in his session with Cisco President and Chief Product Officer Jeetu Patel, describing the current convergence of model capability and interface as another “ChatGPT moment.”
Altman observed that “this is the first time I felt another ChatGPT moment — a clear glimpse into the future of knowledge work.” The movement from “chatbots to agents” changes the fundamental architecture of work. As Patel explained, we are moving from intelligent assistants to systems that can proactively remediate infrastructure issues or even build full pieces of software with minimal human intervention.
Though use cases aren’t easy to find, they are out there. Last week at RingCentral’s Revenue Kick Off, I met with Liesel Perez, co-founder of Axis Integrated Mental Health, who explained how her therapists run AI agents in the background to capture notes and update systems for insurance purposes. This lets clinicians pay better attention to patients while agentic AI does the heavy lifting. It’s a simple use case, but one that can have a big impact on productivity and patient care.
2. Solving the trust deficit and the security prerequisite
A recurring theme throughout the summit was the significant trust deficit currently hindering AI adoption. I recently attended the World Economic Forum in Davos, and while AI was the key theme there, this concept of AI trust was pervasive in every session I attended and every attendee I talked to.
In previous technology shifts, security was often treated as an optional trade-off for productivity. In the AI era, security has become a non-negotiable prerequisite.
“If people don’t trust these systems, they’ll never use them,” Patel stated bluntly. This trust must extend across every layer of the stack: the data, the models, the infrastructure and the agents themselves. Cisco’s response has been the launch of AI Defense, a platform designed not just to use AI for cyber defense, but to secure AI itself against misuse and data leakage.
However, trust goes far beyond the technology, and this was the main theme of the panel with Robbins and Anne Neuberger, strategic advisor to Cisco, and Brett McGurk, special advisor for international affairs for Cisco. Neuberger emphasized that AI is the only way to counter modern cyberthreats effectively. Because software-defined networks are constantly changing, identifying “normal” vs. “anomalous” behavior requires AI’s speed to assist human defenders who can no longer keep up manually.
Both experts noted a significant disconnect in Washington D.C. Policymakers often regulate tools they do not use daily given security restrictions in high-level government offices. McGurk warned that imprecise regulation could allow competitors such as China to leapfrog the U.S.
Amazon Web Services Inc. CEO Matt Garman provided an easy-to-understand analogy that highlighted the importance of trust. He explained that if one tries to cross a canyon on a board, one will crawl across the board. “Put up handrails as guardrails and we run.” Trust gives us confidence and that leads to utilization which, in turn, creates the rising tide that benefits everyone.
As has been noted by so many people, AI is a team game, and I thought this quote from Robbins was a call to the entire industry: “None of us can do it alone, therefore trust is really imperative.” This is true, as it will let us run, not crawl, toward AI.
3. The demographic imperative: AI as a necessity
Perhaps the most interesting macroeconomic take came from Microsoft Corp. Chief Technology Officer Kevin Scott, who argued that AI is no longer a luxury but a “biological necessity” for global society. Pointing to peaking high school graduation numbers in countries such as Japan, Scott highlighted a looming labor crisis caused by aging populations and declining birth rates.
“Demographic data is clear that Japan is in population decline — China, Korea as well,” Scott noted. He believes AI is the only technological intervention capable of maintaining our quality of life as the labor pool shrinks. This shifts the narrative from AI “taking jobs” to AI “filling gaps” that human demographics can no longer sustain.
This aligns with the Silver Tsunami economic theory. For example, in rural America (where Scott’s own mother lives), the brain drain and aging demographics create “zero-sum” environments where one person’s gain is another’s loss. Scott views AI as the tool to turn these back into “non-zero-sum” problems by increasing individual productivity to a level that compensates for the missing workforce.
Scott’s session was a great thought exercise and laid out two contrasting futures:
- The optimistic case: Humans use AI to solve “super important problems with urgency” — curing diseases, managing the energy transition and supporting an aging society.
- The pessimistic case: We fall into a “superficial mode,” using massive compute resources for distraction. He humorously noted that his own kids use AI for biomedical engineering half the time, and the other half to create “pictures of green llamas with big butts.”
Which becomes true? The internet has shown we can do both, but solving problems and transforming the way we work, live, learn and play led the way, with the fun stuff coming much later.
4. Re-engineering work for ‘abundance’
Nvidia Corp. CEO Jensen Huang has assumed the role of the Nostradamus of AI. In his panel, he challenged leaders to adopt an abundance mindset. He argued that AI reduces the cost of intelligence by such an order of magnitude that we must stop thinking about how to save time on small tasks and start thinking about solving impossible problems.
“The definition of abundance is you look at a problem so big, you say, you know what, I’ll do it all,” Huang explained. He encouraged leaders to “let 1,000 flowers bloom” through experimentation rather than demanding immediate, line-item ROI spreadsheets. For Huang, the real risk is not being the first to adopt AI but being the last. “You’re not going to lose your job to AI,” he said. “You’re going to lose your job to someone who uses AI.”
This concept of augmenting labor instead of replacing it is a bit like Schrödinger’s cat in that it’s true and not true at the same time. One stat the WEF provided in Davos was that AI will displace 92 million jobs but also create 170 million new ones. If one uses the internet as an analogy, the same thing happened: We no longer buy airline tickets from a booth downtown but purchase them on a website. Yet the internet democratized access to flying, and the airline industry now employs more people than ever.
Though Huang is correct in that work needs to be re-engineered, it’s important for business leaders to reskill current employees so they can be part of the 170 million new jobs instead of being on the outside looking in.
5. Bridging the data gap with synthetic and physical data
The summit highlighted a looming bottleneck: We are running out of high-quality, human-generated data on the public internet. To continue the exponential curve of model improvement, the industry is pivoting toward synthetic data and machine-generated data.
World Labs Inc. CEO Dr. Fei-Fei Li pointed toward the next frontier: spatial intelligence. Whereas language models have been trained on clean text, the physical world of pixels and voxels is far messier. Li believes that for AI to reach true general intelligence or AGI, it must develop world models that understand 3D space, causality and gravity. “The ability to understand… the real 3D, 4D physical world is the foundation,” Li explained. This physical AI will unlock the next wave of value in robotics, healthcare and urban planning.
Li’s session raised some good points for information technology leaders as to why they need to look at AI as being more than chatbots. The first is that language is a relatively new intelligence, only about 500,000 years old, whereas perception, seeing and touching, has been evolving for more than 1.5 billion years. AGI requires mastering words and perception, and that’s the challenge World Labs is taking on.
Also, the path to generalized robots, or physical AI, is much harder than self-driving cars. A car just has to avoid touching things; a robot has to manipulate them without breaking them. The scarcity of 3D data is real, but the emergence of high-fidelity synthetic data is creating a flywheel that will accelerate physical AI faster than we think.
If a company’s AI strategy is 100% focused on text and data, it’s missing the 3D world where many businesses live. From the warehouse floor to the surgical suite, spatial intelligence is the horizontal layer that will define the next decade.
Comment on leadership: It’s the multiplier
One of the last but most important sessions at the summit was from Cisco Chief People, Policy and Purpose Officer Francine Katsoudas. She and I have had several conversations, most recently in Davos, about how AI success is driven as much or more by leadership as by technology. She shared interesting data showing that AI adoption is not a bottom-up grassroots movement, nor a top-down mandate delivered via email; it is a direct reflection of active leadership. Her research at Cisco indicates that the “lions” of the modern era (an analogy to ancient maps, which used the phrase to mark unexplored or dangerous territory) — ambiguity, ethical uncertainty and the gap between evolving work and static skills — can be tamed only through a transformation in leadership behavior.
Katsoudas challenged the common C-suite narrative that blames the workforce for slow transitions or skill gaps. Instead, she presented the leader as the primary engine of momentum. According to Cisco’s internal data:
- Adoption is personal: AI adoption does not follow a “corporate email surprise;” it follows the visible behavior of the leader.
- The 2x effect: When a leader actively integrates AI into their own workflow, the adoption rate of their team doesn’t just grow — it doubles.
- The new talent profile: Leaders must pivot from valuing only stability and past performance toward seeking curiosity, agency and tech enthusiasm across the entire enterprise, including finance, legal and people departments.
Katsoudas concluded with a call for leaders to move away from fear-based narratives and toward a stance of radical confidence in their people: “The future does not belong to those that wait for the map to be finished,” she said. “It belongs to those who fearlessly walk with the lion.”
Business leaders, you’re on deck to lead the way with AI.
Conclusion: Connecting the dots
This was an interesting summit for Cisco to host. It wasn’t about technology or the latest GPU, but rather thought leadership. At the event, I spoke with Jim Kavanaugh, CEO of World Wide Technology, Cisco’s largest and best transformation partner. The reality is that AI success requires a massive ecosystem of players, and Cisco touches all of them in some form, as it is the network that ties these AI pieces together. Cisco has been trying not just to catch the AI wave but to drive thought leadership across the industry.
“They have more momentum around AI today regarding their core capabilities and infrastructure, but this event demonstrates their commitment to thinking even bigger about how AI is going to play into the broader Cisco portfolio,” he said. “More importantly, this event shows how Cisco is looking at AI beyond the company, how all the players here can be brought together to benefit customers and that’s a great pivot for Cisco and something the industry needs.”
Overall, it was a great event hosted by Cisco and that was clear based on the number of Fortune 500 chief information officers in attendance. Cisco Live EMEA is next week in Amsterdam, where we should get a dose of innovation as to how Cisco can help get AI from vision to reality.
For decades, the pinnacle of sports broadcasting was defined by how many satellite trucks one could park outside a stadium. But as we head toward the 2026 Milan Cortina Winter Olympics starting this week, that era is officially in the rearview mirror.
NBCUniversal Media LLC has chosen Cisco Systems Inc. to deliver the AI networking technology for the Peacock Network’s “all-IP production” of the 2026 Milan Cortina Winter Olympics. The games take place in Italy Feb. 6-22, followed by the Paralympics March 6-15. Though Cisco and NBC have a longstanding relationship, this deployment represents a significant shift in how “Big Iron” events are produced. This is something information technology leaders should be watching, as the headline is about more than the Olympics; it’s about transitioning mission-critical, high-bandwidth operations from a legacy closed network to an all-IP, AI-managed framework.
The Cisco-driven broadcast and online content will include many of the same features the partnership delivered during the 2024 Summer Olympics in Paris, including:
- Live coverage of events during the day on the NBC broadcast network
- Coverage of the entire Olympiad on the Peacock streaming service
- A nightly program, “Primetime in Milan,” showcasing top performances and events
- Coverage across multiple cable and digital platforms
VXLAN to make Olympics debut
NBC Sports will use Cisco’s Virtual Extensible LAN or VXLAN technology to prioritize operational efficiency and flexibility, effectively dissolving the geographic barriers that restrict live production, per the Cisco announcement. In past Games, production was often hamstrung by physical distance — certain tasks had to happen on-site because the network couldn’t handle the latency or the “layer 2” requirements of broadcast equipment.
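For readers less familiar with VXLAN: it wraps the original layer-2 frame in outer IP/UDP headers, which is what lets layer-2 broadcast workflows stretch across a routed WAN. The cost is a fixed per-frame overhead, which is why such underlays typically run jumbo frames. A quick calculation of the standard encapsulation cost:

```python
# Bytes added around the inner Ethernet frame, counted against the
# underlay's IP MTU (per the VXLAN frame format in RFC 7348).
outer_ipv4 = 20
outer_udp = 8
vxlan_header = 8
inner_ethernet = 14

overhead = outer_ipv4 + outer_udp + vxlan_header + inner_ethernet
inner_mtu = 1500 - overhead   # payload room left on a standard 1500-byte underlay
print(f"Encapsulation overhead: {overhead} bytes")
print(f"Inner IP MTU on a 1500-byte underlay: {inner_mtu} bytes")
```

For uncompressed video flows, that overhead is negligible per frame, but the MTU arithmetic is exactly why production networks like this one are engineered end to end rather than bolted onto a general-purpose WAN.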
Cliff Ryan, NBCUniversal’s vice president of network engineering, said the network has expanded its ASR-based MPLS segment routing WAN to accommodate expanding traffic, ensuring NBC Sports can support on-premises and hybrid cloud workflows. To achieve this, the network employs Cisco Crosswork Network Controller, or CNC, and WAN Automation Engine, or WAE. Ryan said this tooling gives NBC network engineers the ability to analyze how traffic moves across the critical backbone and provides best-in-class failure analysis and capacity-planning insight.
What I like about the intersection of sports and tech is that it often provides a proving ground for new technology. For enterprise IT leaders, this is a great proof of concept for the “borderless” data center. If one can produce a live, 4K global broadcast across an ocean with zero packet loss, the hybrid cloud latency issues most organizations face suddenly look much more solvable.
By the numbers: The IP media explosion
The shift we are seeing at the Olympics mirrors the pivot happening in the networking industry. The Business Research Co. found the global Live IP Broadcast Equipment market is expected to reach $1.9 billion in 2026, growing at a compound annual growth rate of over 15%.
The acceleration is driven by three pillars:
- Bandwidth demands: A single uncompressed 8K video stream now consumes roughly 48 gigabits per second. Traditional broadcast cables or SDI cannot scale to meet that demand.
- Operational efficiency: Moving to IP allows broadcasters to use commercial off-the-shelf hardware, reducing proprietary hardware costs by an estimated 30% to 50% over long-term cycles.
- The AI inflection point: AI-driven analytics and automated “Gold Zone” highlights can’t run on legacy systems since they don’t have the capacity or the agility. IP is the only viable option for scaling AI.
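The ~48 Gb/s figure in the first pillar is easy to sanity-check. A back-of-the-envelope calculation (my arithmetic, assuming 8K UHD at 60 fps with 12-bit 4:2:2 sampling, i.e. 24 bits per pixel on average; other format choices shift the total):

```python
# Uncompressed video bitrate = pixels/frame * frames/sec * bits/pixel
width, height = 7680, 4320   # 8K UHD resolution
fps = 60
bits_per_pixel = 24          # 12-bit 4:2:2: 12 luma + 12 chroma bits per pixel

bitrate_bps = width * height * fps * bits_per_pixel
print(f"{bitrate_bps / 1e9:.1f} Gb/s")  # ~47.8 Gb/s, in line with "roughly 48"
```

No single SDI link carries that; an IP fabric of 100G/400G Ethernet ports does so routinely, which is the scaling argument in a nutshell.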
AI networking: From ‘nice to have’ to ‘table stakes’
Though “AI Networking” is still on the roadmap of many companies, for the Olympics the future is now, as NBC’s use of Cisco’s CNC and WAE to manage its backbone demonstrates.
The entire Olympics will be streamed live on Peacock while simultaneously feeding 4K linear broadcasts to millions of homes. Human engineers can’t troubleshoot congested links fast enough to deliver a consistent experience. The network must be self-healing and predictive. NBC’s engineering team is using these tools for improved failure analysis — essentially simulating outages before they happen to ensure the “five nines” of reliability that a $7.75 billion rights deal demands.
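It is worth pausing on what “five nines” actually permits. The arithmetic (my own) shows why human-speed troubleshooting cannot deliver it:

```python
# 99.999% availability leaves only 0.001% of the year for downtime.
minutes_per_year = 365 * 24 * 60          # 525,600 minutes
availability = 0.99999
allowed_downtime_min = minutes_per_year * (1 - availability)
print(f"{allowed_downtime_min:.2f} minutes/year")  # ~5.26 minutes
```

A budget of about five minutes of downtime per year is less time than it typically takes a human engineer to open a ticket, which is why predictive, self-healing tooling is the only way to hit that bar.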
Why this matters to IT leaders
The takeaway for IT leaders outside of the media space is that the network is no longer just “plumbing” – for many companies, it is the business. Whether the network supports a hospital system, a global retail chain or a financial services firm, the requirements are converging with those of NBC Sports. Modern network requirements are:
- Zero-loss determinism: In a world of real-time data, a dropped packet is a lost customer or a failed transaction.
- Simplified orchestration: As NBC integrates on-prem and hybrid cloud workflows, the ability to manage both via a single dashboard (such as Cisco’s Nexus Dashboard) is the only way to correlate network issues with business metrics.
- Security at scale: With the eyes of the world on Milan, the threat surface is massive. Moving to an IP fabric allows for granular, zero-trust policy enforcement at the flow level — something legacy networks never could achieve.
Final thought
As we watch the opening ceremony from San Siro Stadium Friday Feb. 6, remember that the triumph of athletics you’re watching is enabled by modernized, software-defined infrastructure. With the right AI networking foundation, NBC Sports will be able to move mountains of data as easily as a downhill skier moves through powder.
For those in the IT world, there is a lesson here: If the network isn’t ready for the “AI era,” you’re at the starting gate while the competition is halfway down the mountain. AI is coming, and it requires a different type of network to enable it.
My year started off with a cornucopia of events – CES, the National Retail Federation show and the World Economic Forum in Davos — and though they’re three completely different events, there was one thread that cut across all of them: artificial intelligence. Like the internet did 30 years ago, AI will change the way we work, live, learn and play. However, also like the internet, AI will bring several new security threats, prompting organizations to rethink their cyber strategies.
At Davos, I discussed this with Mike Rich, chief revenue officer for Zscaler Inc. “Companies could move much faster if they had the right governance in place,” he said. “The pressure to deploy AI is coming from the CEO and being pushed down, and because the right governance and trust models aren’t in place, that’s slowing things down.”
This week Zscaler announced it’s expanding its AI Security Suite with new features that give enterprises more visibility and control over how AI is being used across their environments. The idea is to make security less of a hurdle as companies move from experimenting with AI to systems that rely on agents and automation.
Most companies don’t know where AI is running inside their organizations. Generative AI tools are everywhere, AI is being baked into software-as-a-service platforms, and companies are building their own models and agents internally as well. Couple this “shadow AI” problem with rapid infrastructure changes, and security teams are left trying to figure out where the risks are.
Basic access control is also difficult to manage. AI traffic doesn’t look like normal user activity. Much of it comes from automated systems talking to each other, and traditional security tools weren’t built for that. Then there are the AI-specific threats. Attacks like prompt injection or model tampering go straight after the AI itself. Those don’t show up in traditional red teaming exercises or standard posture checks, which creates additional management challenges for security teams.
The reality is most existing security products weren’t built for AI because they were built in an era when the number of users, the locations they worked in, and the nature of traffic were very predictable. AI agents can be invoked, run for 30 seconds and be deprecated, and that creates a level of randomness bordering on chaotic for security teams.
Adding to the challenge is that security teams don’t want to stitch together multiple point solutions. Zscaler’s security platform gives enterprises a clearer view of their entire AI environment. It shows where AI is running, who can access it, and what data it touches, so companies don’t lose visibility as AI becomes more embedded in day-to-day operations.
More specifically, the updated platform addresses enterprise AI security in three ways.
- AI Asset Management provides chief information security officers, the information technology department and governance teams with a comprehensive inventory of AI apps, models, infrastructure, agents and usage. This brings visibility to shadow AI, helping teams understand what data AI touches and prioritize risk.
- Secure Access to AI helps security teams safely enable sanctioned AI services, such as developer tools and AI models, with zero-trust controls, inline inspection and prompt classification to reduce data loss and misuse while preserving productivity. Zscaler data shows a massive 91% surge in AI activity, with nearly 40% of it blocked due to security concerns. Secure access helps businesses use AI and gives organizations the confidence to move forward.
- Secure AI Infrastructure and Apps enables application teams to protect AI development across the lifecycle with automated AI red teaming, prompt hardening, runtime guardrails and continuous risk posture assessment from build to runtime.
Zscaler says its approach is about managing AI security continuously instead of doing one-time assessments. The company is rolling out additional protections, including a new Model Context Protocol gateway for secure automation and AI deception tools that defend against attacks targeting AI models.
A comprehensive AI security suite like the one Zscaler offers can help change the way businesses view security. Historically, most organizational leaders look at security as something that gets in the way of the business, since that has been the reality. Security baked into the AI processes allows companies to move forward with confidence, turning security into a business enabler.
A key theme of Davos was “trust,” taking many forms. Within the context of AI usage, for companies to get the full value of AI, they must trust that the actions AI is taking are what they need to be, using the data AI is allowed to touch, and isn’t putting the company at risk.
As one would expect, artificial intelligence was a big theme at last week’s National Retail Federation’s annual event in New York City, as it was last year, but there was one subtle difference.
The 2025 edition was focused more on AI education, whereas I felt this year’s NRF focused more on use cases. In fact, one of the speakers I saw said something to the effect that NRF is no longer a technology show but rather a business outcomes event.
Concurrent with the event, Nvidia Corp. released its latest trends report, State of AI in Retail and Consumer Packaged Goods, which highlights how far retailers and consumer packaged goods companies have come in implementing AI. Nvidia surveyed global industry leaders and found that 58% of companies are “actively deploying” AI solutions, up 16 points from 42% the previous year and pointing to a growing level of maturity.
In retail, AI is moving from vision to reality
This indicates retail and CPG companies are starting to move past experimenting with AI and into more practical use cases. Tools that were once limited to pilots are now showing up in regular operations.
Overall, 91% of the retail industry is engaged with AI either by actively using it or assessing it. AI has contributed to revenue growth, according to 89% of the industry leaders surveyed by Nvidia. Most (95%) said AI has helped reduce annual costs. Ninety-two percent of executives plan to increase their AI budgets in the next year.
Customer-facing and back-office use cases are both on the rise. Digital commerce is at the top of the list. AI use across e-commerce, marketing and advertising rose from 57% to 61%. A key development in this space is agentic commerce, where AI agents respond to customer intent rather than offering recommendations. One example is a shopping assistant that provides personalized help to customers.
Agentic AI is gaining traction in retail. Nearly half of respondents (47%) said they are using or evaluating agentic AI. They named three key strategic goals that AI agents can address, which traditional automation can’t. Boosting speed and efficiency came up most often (57%), followed by enhancing customer experiences and personalization (40%), and improving decision-making using real-time data (40%).
Nvidia is focused on openness to accelerate AI adoption
To help accelerate AI adoption, Nvidia has been aggressively pushing an open mandate to democratize access to AI tools. At NRF I met with Azita Martin, vice president and general manager for AI for retail, CPG and QSR. “There are many technology companies at NRF,” she said. “Nvidia is focused on openness and interoperability, so customers have the flexibility to run their AI applications on any cloud or data center of their choice. As retailers start deploying AI agents at scale, the cost of inference can become very high. Nvidia is focused on optimizing inferencing speeds and reducing inferencing costs of open-source models.”
Martin then added, “What we are seeing is that retailers typically start their AI projects in the cloud with frontier models such as OpenAI and Gemini, but as the number of agents increase, the cost of inferencing grows significantly as these models generate a lot of tokens because of reasoning. We believe a combination of open-source and frontier models can bring that cost down as retailers move to build more agentic applications.”
The survey indicates that Nvidia’s message of open is being embraced as 79% said it’s important to integrate open-source models into their technology stack, largely because it gives them more control over how AI systems are trained using their own data.
In addition to the study, the company introduced two open-source AI blueprints designed to modernize different parts of retail without replacing existing systems. The first blueprint, Multi-Agent Intelligent Warehouse, targets supply chain operations and uses AI agents that run on top of existing systems. In essence, it acts as an AI command layer that mirrors real warehouse roles and turns fragmented telemetry into real‑time, evidence‑backed recommendations on issues like bottlenecks, staffing, equipment health and safety.
The second blueprint, Retail Catalog Enrichment, addresses what Nvidia calls a “sparse data” problem. This blueprint taps into generative AI to turn basic product images and data into detailed descriptions, which can be used in marketing. A typical scenario might involve a home goods retailer working with a set of product photos. Using the Nemotron vision language model, which is part of Retail Catalog Enrichment, retailers can create product metadata like color, material, capacity, style and use cases, along with localized titles and categories that feed search, recommendations and marketing.
The blueprints are part of Nvidia’s broader effort to help retailers deploy agentic AI across their retail operations. Nvidia said the next phase involves applying AI more directly inside warehouses and stores, so retailers can interpret what’s happening on the floor and optimize inventory and supply chain issues with less manual intervention.
Physical AI looks to ease supply chain pain
Supply chain challenges have only increased over the past year, according to 64% of industry leaders surveyed by Nvidia. Many companies are turning to AI to deal with that pressure, most often to improve operational efficiency and throughput (51%). Meeting customer expectations (45%) and improving traceability and transparency (38%) also came up frequently.
Physical AI is likely to address some of those supply chain challenges. It makes warehouses and distribution centers smarter by connecting AI to cameras, sensors and robotics and enabling physics-based simulation to create a digital twin of a warehouse or a store. The digital twin can predict the throughput of the facility by evaluating new layouts and operations before implementing them in the physical warehouse.
Though adoption is still early — with 17% of companies using or evaluating it — physical AI can potentially provide more than just task automation. Among the early adopters, physical AI is being used for intralogistics simulation and optimization (33%), robotic pick-and-place operations (23%) and smart forklifts and autonomous mobile robots (18%).
Some of the initial hurdles around AI are starting to ease. Data readiness, which was a major issue at the start of the generative AI wave, is less of a concern now. Only 13% of companies cited training data as a top challenge, a drop from 27% the year prior. Concerns over privacy and data sovereignty also declined from 22% to 18%.
Final thoughts
Retail, once thought of as a slow-moving industry with regard to technology adoption, now appears to be one of the fastest-moving on AI. My research has found that 90% of companies now compete on customer experience and 86% of consumers admitted they would leave a brand after one or two bad transactions. For retailers, the stakes are higher than ever, and every brand is looking for a way to serve its customers faster and in a more personalized way.
However, budgets are still a reality, and this is where the work Nvidia is doing is so important. The company often gets called out for high prices, but retail decision-makers should look beyond the cost of chips and servers to the total cost of token generation, which is where Nvidia and its ecosystem have focused.
As CES wound down, industry watchers are now turning their attention to the National Retail Federation show across the country in New York.
Retail, once a slow-moving industry, is now ripe with change, and in-store environments have become operationally more complex. It’s now common to find more digital services and connected systems, with networks supporting point-of-sale terminals, kiosks, cameras, sensors and guest Wi-Fi, all of which are expected to function without interruption. At the same time, many retailers are doing this with fewer staff and resources. There is also little room for error: when the technology fails, retailers lose money.
To address these challenges, Hewlett-Packard Enterprise Co. today announced a number of product updates across its networking, analytics, and compute portfolios that are aimed specifically at retailers.
Continuous operations are the core tenet of HPE’s announcements. Since acquiring Juniper Networks, HPE has been positioning itself as “network first.” In a prebriefing, Gayle Levin, head of marketing for wireless products at HPE Aruba Networking, told me the company has a goal of being “the best networking business on the planet,” and will lean into its self-driving network narrative to accomplish this.
On my call, Levin noted that HPE is placing major emphasis on AI enabled operations to help retailers cope with staff shortages. “Retailers are struggling so much with staffing, and they’re managing stores remotely,” she said. “Having that kind of self-driving capability is a huge win for them.”
So much of the industry chatter regarding artificial intelligence is that the technology is taking jobs, but in industries such as retail, businesses can’t hire people fast enough. AI can help remove many of the complex and repetitive tasks, enabling information technology operations to focus on strategic initiatives.
Self-driving networks for remote store operations
HPE ties much of its retail strategy to its broader push toward self-driving networks. The idea comes up repeatedly in retail conversations because sending IT support staff to stores is expensive and time-consuming and retailers are trying to avoid it whenever possible.
By combining telemetry, AI for IT operations, and automation, retailers can detect and resolve problems remotely. Levin pointed to customers that have significantly reduced on-site IT visits by relying more heavily on self-driving capabilities. For example, one major retail customer was able to cut store visits by 85% thanks to more self‑driving operations.
HPE is careful not to frame self-driving as an all-or-nothing proposition. Retailers can decide which actions are automated and which require approval from the IT team first. Levin explained: “This allows customers to move to self-driving at their own pace, and get to that comfort level.”
Smaller switches designed for the retail floor
On the hardware side, HPE is expanding the Aruba Networking CX 6000 switch family with fanless, compact eight-port models to address space and noise issues in retail. The switches are designed for checkout lanes, kiosks and other customer-facing areas, not wiring closets. They can be installed under counters or in tight overhead spaces.
These are one-gigabit switches that HPE is positioning as an entry point in the CX 6000 family. Retailers can move up to higher-capacity models if needed. Levin said the goal is to provide a consistent switching platform that can scale from a small convenience store to a large retailer, all managed through the same Aruba Central platform.
The new models support both Power over Ethernet and non-PoE configurations. With more PoE capacity than older models, retailers will be able to add newer devices without having to redesign their networks. This is especially beneficial for those moving to Wi-Fi 7.
Proactive monitoring for Wi-Fi 7 networks
Wi-Fi 7 is a recurring theme in HPE’s retail messaging. HPE updated the Aruba User Experience Insight, part of Aruba Central, with support for Wi-Fi 7 and new sensors. The sensors can identify issues related to network upgrades or configuration changes, helping IT teams establish performance baselines and test network health.
“It’s one of the first sensors that’s out there specifically designed for Wi-Fi 7 to look at not just the six-gigahertz channel utilization, but also some of the capabilities that are unique to Wi-Fi 7,” said Levin.
Levin explained that the sensors continuously run synthetic user tests — about a thousand per day — to proactively detect issues. The tests simulate common network interactions, such as user sign-on activity, to prevent problems before they affect real users.
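The core idea behind synthetic testing is simple enough to sketch. The following is my own minimal illustration of the pattern, not HPE's implementation: a sensor repeatedly times a scripted interaction, builds a statistical baseline, and flags deviations before real users notice. The names and thresholds here are hypothetical.

```python
import statistics

def evaluate(samples_ms: list[float], new_sample_ms: float,
             sigma_threshold: float = 3.0) -> bool:
    """Return True if a new probe time deviates from the learned baseline."""
    mean = statistics.mean(samples_ms)
    stdev = statistics.pstdev(samples_ms) or 1.0  # guard against zero variance
    return abs(new_sample_ms - mean) > sigma_threshold * stdev

# Baseline of simulated sign-on probe times in milliseconds, then two probes:
baseline = [42.0, 40.5, 41.2, 43.1, 39.8, 42.6, 41.9, 40.7]
print(evaluate(baseline, 41.5))   # False -- within the normal band
print(evaluate(baseline, 95.0))   # True  -- anomaly, flag for investigation
```

Run a thousand probes a day against a per-site baseline like this and a degrading access point shows up in the data hours before the first user complaint.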
Analytics to plan for seasonal retail demand
Another focus area for HPE is analytics. The company has integrated the Marvis virtual network assistant with Premium Analytics from its Juniper networking portfolio, giving retailers access to up to 13 months of data. This allows retailers to see how stores performed during the same period last year and track seasonal changes. All Mist customers have access to 30 days of data, but the Premium subscription extends that window to 13 months for deeper analysis.
The analytics component is handled by HPE’s Mist AIOps platform, which collects and analyzes performance and location data, while Marvis is the interface where users interact with the data. Levin compared the experience to asking a question in a chat interface, rather than navigating traditional dashboards or reports.
“We’re able to use our natural-language interface in Marvis (from Juniper) and AIOps, then take all the rich data and get not only IT insights, but also business insights,” said Levin. “And because retail is so seasonal, it helps with those seasonality components as well. For example, how should I staff at Christmas this year based on what we saw last year?”
Levin said retailers are doing more than just network troubleshooting. They’re using the data for occupancy planning, to make staffing decisions, and to gauge how product placement affects traffic flow inside stores.
Keeping payment systems running at the core
All of the work HPE is doing at the store edge ultimately depends on what happens behind the scenes. The company is updating its fault-tolerant NonStop Compute platform, which is still widely used for payment processing. For instance, a large share of credit-card transactions pass through NonStop systems at some point.
The latest NonStop release can scale across thousands of nodes and delivers about 15% more performance than the previous generation. HPE also introduced Transparent Data Encryption to strengthen protection for payment and customer data.
Another notable change is how NonStop can be deployed. HPE now offers a software-based version that can run on standard infrastructure or in cloud environments. For retailers, that opens up more options. They can also access NonStop through HPE’s GreenLake platform, adding capacity during peak periods as needed.
The NonStop Compute platform is ideally suited when uptime is everything. On our call, HPE said 90% of retail transactions flow through the product. While there are many high-availability compute products available, there is a big difference between them and fault-tolerant solutions, which offer 100% redundancy in hardware and never go down. Prior to being an analyst, I worked as an IT pro at a financial firm, and we used similar systems to power our trading desk as any downtime meant lost revenue.
Looking ahead at NRF 2026 and beyond
HPE’s announcements at NRF 2026 give retailers a closer look at how these pieces fit together across stores, networks and core systems. Levin said the common thread is reducing operational friction for retailers: “This is really about the enhancements that we’re making so that we can deliver the backbone for the best retail experiences and transactions.”
For HPE, the set of announcements was a combination of Aruba, Juniper and its compute products. Historically, the various groups within HPE haven’t always come to market together, so it was good to see a set of innovations for retailers from “HPE.” Customers are now looking for business outcomes and that requires vendors to eliminate the historical silos that may live within their companies. Moving into 2026 it would be good to see HPE and its peers continue to announce products structured along the lines of customer challenges.
One of the highlights of the CES event last week was the Nvidia Corp. keynote delivered by Chief Executive Jensen Huang. Though there are many focal points to an event such as CES, the one pervasive theme is artificial intelligence, and no company has become more synonymous with AI than Nvidia.
That’s why thousands of people queued up for hours in advance to hear the latest and greatest vision and product news from the company’s leader. While there was lots of great product news, there were some important takeaways worth calling out above and beyond the product news.
Agentic will be the new interface for applications
During his keynote, Huang painted a picture of a world where the primary interface for many of the work tasks we do will shift to agentic agents. He cited several examples where this is happening today, including ServiceNow Inc., Palantir Technologies Inc. and Snowflake Inc. Historical interfaces, such as filling out spreadsheets, command lines and even graphical user interfaces, are manually intensive and generally require the human to be the integration point between products.
Agentic agents, such as Nvidia Nemotron, are not only simpler but can reason, use tools, plan and search, removing much of the heavy lifting involved in work today. My research has found that workers spend up to 40% of their time managing work instead of doing the actual work, and agentic agents can take that time to zero. There is so much fear around AI taking jobs, but much of the value is in allowing us to grow productivity by an order of magnitude because we no longer have to do low-value tasks.
Agentic agents are multi-everything
One of the more interesting parts of Huang’s talk track was when he discussed his “a-ha” moment around multimodel AI. He talked about how Perplexity uses multiple large language models to get the most accurate results. “I thought it was completely genius,” Huang said. “Of course, in AI, we would call upon all the world’s greatest AI models to answer different questions at various points in the reasoning chain. This is why AI needs to be multimodel in nature.” He added that this allows the agentic agent to use the best model for the specific task.
Huang went on to explain that in addition to multimodel, agentic AI will be multimodal to understand speech, images, text, videos, 3D graphics and other forms of communication. The “multi” continues with deployment models, as AI needs to be multicloud to enable models to reside in the optimal location. It’s important to understand that, in this case, multicloud is inclusive of hybrid cloud. This becomes increasingly important with physical AI, as robots, edge servers and other connected devices require access to the data and the models in real time, and that requires localized services.
There have been several comparisons made between AI and the internet, and I think the biggest similarity is that AI, like the internet, will eventually be embedded into everything we do, and that requires AI to be multimodal, multimodel and multicloud.
Nvidia continues to redefine the network
Big companies make acquisitions all the time but there has been perhaps no more important acquisition to Nvidia than Mellanox. The company paid just under $7 billion to gain networking capabilities and that business now generates more than $7 billion every quarter. While the primary infrastructure announcement at CES for Nvidia was the Vera Rubin platform, the network is what enables the various components to work together. Vera is the CPU and Rubin the GPU with Vera Rubin NVL72 being the AI supercomputer where the 72 Rubin GPUs and 36 Vera CPUs are connected using NVLink, one of Nvidia’s networking products.
In fact, of the six Vera Rubin platform announcements in the CES payload, four were networking:
- ConnectX-9 SuperNIC – Nvidia’s next-generation NIC, designed to handle the massive throughput from Rubin GPUs. The 200G SerDes enables a total bandwidth of 1.6 Tb/sec per GPU, double the previous version. This is designed for the rigors of scale-out networking.
- NVLink 6 Switch – The interconnect that allows multiple GPUs within a single rack to act as one processor. This is optimized for scale-up networking and provides 260 TB/s of bandwidth per rack.
- Spectrum-X Ethernet Photonics – This integrates silicon photonics to reduce power consumption and increase data center resilience compared with traditional optical cabling. The co-packaged optics provide significant power reduction and increased uptime. The flagship SN6800 offers a whopping 409.6 Tb/sec of aggregate bandwidth, supporting 512 800G Ethernet ports.
- BlueField-4 DPUs – This offloads many networking functions from the server. The new DPU features 64 Arm Neoverse V2 cores and has six times the processing capacity of BlueField-3.
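The SN6800 numbers above are internally consistent, which is a quick check worth doing on any switch spec sheet (my arithmetic):

```python
# Aggregate switch bandwidth = port count * per-port speed
ports = 512
port_speed_gbps = 800
aggregate_tbps = ports * port_speed_gbps / 1000  # Gb/s -> Tb/s
print(aggregate_tbps)  # 409.6, matching the quoted aggregate bandwidth
```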
Nvidia is making storage AI-native
Nvidia announced something called “Context Memory Storage,” which the company is positioning as the right storage architecture for the AI era. With the rise of AI, Nvidia has rethought processing, the network and is now doing the same to storage.
At CES, I sat down with Senior Vice President Gilad Shainer, who came to the company via the Mellanox acquisition. During our conversation we discussed how important it is that compute, network and storage be in lockstep with one another to provide the best possible performance. Shainer made an interesting point that the traditional tiers of storage are not necessarily optimized for inference, and that’s what Context Memory Storage is bringing.
It fundamentally redefines the storage industry by transitioning it from a general-purpose utility to a purpose-built, “AI-native” infrastructure layer designed specifically for the era of agentic reasoning. By introducing a new tier of storage, this platform bridges the critical gap between the capacity-limited GPU server storage and the traditional, general-purpose shared storage, effectively turning key-value cache into a first-class, shareable platform resource.
Instead of forcing GPUs to recompute expensive context for every turn in a conversation or multistep reasoning task, the platform leverages the BlueField-4 DPU to offload metadata management and orchestrate the high-speed sharing of context across entire compute pods. This architectural shift eliminates the “context wall” that previously stalled GPU performance, delivering up to five times higher token throughput and five times better power efficiency compared to traditional storage methods.
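A toy illustration of why a shareable key-value cache pays off (this is my own sketch of the general idea, not Nvidia's design): encoding context is the expensive prefill step, so if a conversation's prefix has been processed once, later turns can reuse the cached result instead of recomputing it.

```python
encode_calls = 0  # count how many times we pay the expensive encode step

def encode_context(prefix: str) -> str:
    """Stand-in for the costly prefill/encode work a GPU would do."""
    global encode_calls
    encode_calls += 1
    return f"kv[{len(prefix)} chars]"  # pretend this is the KV-cache entry

kv_cache: dict[str, str] = {}  # shared tier keyed by context prefix

def get_context(prefix: str) -> str:
    if prefix not in kv_cache:        # miss: pay the compute cost once
        kv_cache[prefix] = encode_context(prefix)
    return kv_cache[prefix]           # hit: reuse across turns and agents

history = "user: summarize the Q3 report..."
get_context(history)   # first turn: computes and stores the context
get_context(history)   # second turn: served from the cache
print(encode_calls)    # 1 -- the context was encoded only once
```

The platform’s contribution is making that cache a fast, shared, pod-wide resource rather than something trapped in a single GPU’s memory, with the BlueField-4 DPU handling the bookkeeping.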
Auto innovation continues to accelerate
Many casual observers of the auto industry believe innovation has slowed down. This perception comes from the fact that about a half-decade ago, the industry started talking about Level 5 self-driving and we still seem to be far from it. However, along the way the rise of digital twins, cloud-to-car development, camera vision and AI has allowed cars to be much safer and smarter than ever before. During his keynote, Huang highlighted that the new Mercedes-Benz CLA, built on the newly announced Alpamayo open model, achieved a five-star European New Car Assessment Program (EuroNCAP) safety rating.
Alpamayo will be a boon to the auto industry as it introduces the world’s first thinking and reasoning-based AV model. Unlike traditional self-driving stacks that rely on pattern recognition and often struggle with unpredictable “long-tail” scenarios, Alpamayo employs a Vision-Language-Action architecture to perform step-by-step reasoning, allowing a vehicle to solve complex problems — such as navigating a traffic light outage or an unusual construction zone — by explaining its logic through “reasoning traces.”
By open-sourcing the Alpamayo 1 model along with the AlpaSim simulation framework and 1,700 hours of real-world driving data, Nvidia provides automakers like Mercedes-Benz, Jaguar Land Rover, and Lucid with a powerful “teacher model” that can be distilled into smaller, production-ready stacks. This ecosystem significantly lowers the barrier to achieving Level 4 autonomy by replacing “black-box” decision-making with transparent, human-like judgment, ultimately accelerating safety certification and building the public trust necessary for mass-market autonomous deployment.
Final thoughts
Nvidia’s CES keynote was certainly packed with AI-based innovation. My only nitpick with the presentation was that I’d like to see the company lead with the impact and then roll into the tech. As an example, Huang went into great detail on how Nvidia has contributed more models than anyone, then introduced Alpamayo and finally talked about the Mercedes safety rating. He should have led with the Mercedes data point, since that’s the societal impact, and brought the tech in later.
Across CES, one could see the impact that AI is having on changing the way we work and live, and no vendor has done more to bring that to life than Nvidia. Its keynote has become the marquee event within a show filled with high-profile keynotes, and I don’t expect the momentum it has to slow down anytime soon.
I’m a big fan of any technology that makes our lives easier. One example of this is Amazon’s Just Walk Out technology, which I consider to be the easiest checkout experience available today. Customers tap their credit card on a reader, walk into a store, pick up whatever they want and then, as the name suggests, just walk out of the store, with everything charged to their account.
One of my goals at AWS re:Invent was to find out what’s new with Just Walk Out and what to look forward to. At the event, I met with both Rajiv Chopra, vice president of JWO for AWS, and Sarah Yacoub, senior manager of product marketing, to get an update.
Here are some of the key updates to Just Walk Out:
Technology deployment and infrastructure improvements
- Shift to “lane approach”: Instead of a full store retrofit with cameras covering the entire space, the current stadium deployments are using a “lane of cameras” effectively placed outside the concession area. This significantly reduces the infrastructure size, camera count, and build-out complexity, making it easier to attach to existing structures. It also obviates the need to do large-scale construction to deploy a Just Walk Out store.
- Reduced infrastructure footprint: Improvements have been made in the size of the additional MDF and backroom area required, leading to a smaller physical footprint.
- Cost reductions through technology optimization: Over the past few years, Just Walk Out has reduced deployment costs by approximately 50% through a combination of technology improvements and operational efficiencies. The AI algorithms have become significantly more efficient, now handling variable ceiling heights (as low as six to seven feet), sloped floors, and inconsistent ceilings without requiring expensive general contracting changes. Installation has been simplified through retrofitting capabilities that reuse existing fixtures, gate plates that eliminate the need to core into cement (reducing permitting requirements) and streamlined camera plans requiring less low-voltage wiring. These improvements reduce not just Just Walk Out costs, but total deployment costs including general contractor, electrician and designer expenses — making the technology more accessible across verticals.
Operational models and experience
- Just Walk Out becomes “just walk in”: In some travel locations, like Hudson Nonstop, the requirement to tap a credit card to enter was viewed by some consumers as a barrier to entry. This has now been removed: shoppers can enter freely, browse and select items, with checkout and payment happening at the exit instead of the traditional tap-to-enter model. This aims to reduce customer apprehension at the entrance while still delivering the same frictionless experience.
- Shrink and loss prevention: Loss prevention is a huge metric for Just Walk Out. During my conversation, Yacoub mentioned that retailers using the technology have seen double-digit percentage decreases in loss, making it a significantly better solution compared to manned self-checkout, which is often subject to being tricked. The cameras see every consumer activity but also act as a psychological deterrent. Stores with high levels of theft, such as CVS and Target, could benefit greatly from Just Walk Out, which presents a much better alternative than locking up merchandise. Cost has been used as an excuse by retailers, but the cost of loss would far outweigh the cost of Just Walk Out.
Market expansion and adoption
- Global presence: Just Walk Out, initially launched in the U.S., is now available in Canada, Australia, the U.K. and France, with more countries to come.
- Store count: The company is currently quoting over 300 locations and expects multi-store deals to continue increasing this number significantly in 2026.
- New verticals and value propositions:
- Stadiums: This continues to be a focus for Just Walk Out with many new stadiums being turned up in the U.S. and internationally, including Allianz Stadium in the U.K., Marvel Stadium in Australia — the first Just Walk Out store in the Southern Hemisphere — as well as Rod Laver Arena and Melbourne Cricket Ground in Australia, and venues in Canada such as Scotiabank Arena in Toronto and Scotiabank Saddledome in Calgary.
- Fulfillment centers/business and industry: Deploying Just Walk Out inside fulfillment centers, offices, and large factories to provide a 24/7 amenity to employees who have limited break times and no options nearby (food deserts).
- EV charging stations: Used at EV charging stations like Gridserve’s Electric Forecourts in the U.K. and IONNA’s Rechargery locations for convenience stores within rest areas, offering an unmanned space and a differentiator for the charging network.
- Healthcare: Deployments in hospitals for gift stores and convenience stores, often with badge pay integration for night shift staff, such as the University of California at San Diego Health’s McGrath Outpatient Pavilion.
- Universities: More than 60 locations are deployed, using meal dollars integration and offering specialized late-night selections (ice cream, snacks) in dorm residence halls, including UC San Diego with five campus stores.
Data and integration
- Loyalty integration: One of the biggest inhibitors to adoption historically was that Just Walk Out did not integrate with loyalty programs such as season ticket holder apps. A couple of years ago, an NHL team’s chief information officer told me that if he couldn’t let his best customers use the most convenient way to buy products, he didn’t want it. Chopra mentioned that this is no longer an issue and Just Walk Out can integrate with almost all loyalty and payment programs.
- Real-time access data: The other significant issue with Just Walk Out was the lack of access to real-time information. An NFL CIO explained to me that the closest he could get to “real time” was the data being stored in an Amazon S3 bucket the next day. Many retailers, including stadium operators, want a real-time count of exactly what has been sold, and Just Walk Out could not accommodate that. This issue has now been solved, and the system fully integrates with inventory systems.
The biggest remaining issue with Just Walk Out is just consumer education and awareness. In some cases, the Just Walk Out brand isn’t front and center and takes a back seat to the store or a sponsor. This is common in stadiums and airports. When one flies into Harry Reid Airport in Las Vegas, the Hudson News location at the bottom of the escalator is a Just Walk Out-enabled store. Another example is at the Golden 1 Center, home of the Sacramento Kings, where the store is branded “PATH Grab and Go,” after the sponsor.
The issue with this is it’s very common for stadiums to have multiple grab-and-go systems and, though the experience is similar, there is a difference between Just Walk Out and competitors such as Ai-Fi and Zippin. The third-party branding leaves it up to the consumer to understand which system is in place and what the experience is.
The other challenge is just general awareness. I’ll often go to a stadium and see a line of people waiting at a regular checkout with only a handful of fans at a Just Walk Out store. Once consumers use it, they’ll generally use it again, as the experience is so easy. This is where retailers, stadium owners and others should invest in some kind of “concierge” who can help educate consumers.
Just Walk Out has grown in terms of both capabilities and deployment models and will be the future of retail. With advancements in AI and camera vision, self-service models can be fast and accurate — a “win” for customers, which means a win for the retailer.
Artificial intelligence leader Nvidia Corp. Monday announced the Nemotron-3 family of models, data and tools, and the release is further evidence of the company’s commitment to the open ecosystem, focusing on delivering highly efficient, accurate and transparent models essential for building sophisticated agentic AI applications.
Nvidia executives, including Chief Executive Jensen Huang, have talked about the importance open source plays in democratizing access to AI models, tools and software to create that “rising tide,” and bringing AI to everyone. The announcement underscores Nvidia’s belief that open source is the foundation of AI innovation, driving global collaboration and lowering the barrier to entry for diverse developers.
Addressing the new challenges in enterprise AI
As large language models achieve reasoning accuracy suitable for enterprise applications, Nvidia highlighted three critical challenges facing businesses today during an analyst prebriefing:
- The need for a system of models: There has not been and will not ever be a single model to rule them all and organizations need a choice of models to build performant AI applications. What’s required is a system of models that work together – different sizes, modalities and orchestrators to deliver a multi-model approach.
- Specialization for the “last mile”: AI applications often “hit a ceiling” and must be specialized for specific domains such as healthcare, financial services or cybersecurity. This requires training models with large volumes of proprietary and expert-encoded knowledge.
- The cost of “long thinking”: More intelligent answers require extended reasoning, self-reflection and deeper deliberation — a process Nvidia calls “long thinking” or test-time compute. This significantly increases token usage and compute cost, demanding more token-efficient architectures and inference strategies.
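As a rough illustration of why token efficiency matters, the sketch below models per-request inference cost when hidden reasoning tokens are counted alongside the prompt and the visible answer. All numbers here (token counts, per-token price) are hypothetical assumptions for illustration, not Nvidia figures or real pricing.

```python
# Illustrative sketch: how "long thinking" inflates inference cost.
# All numbers are hypothetical assumptions, not vendor pricing.

def inference_cost(prompt_tokens: int, answer_tokens: int,
                   reasoning_tokens: int, price_per_1k: float) -> float:
    """Total cost of one request, counting hidden reasoning tokens."""
    total = prompt_tokens + reasoning_tokens + answer_tokens
    return total / 1000 * price_per_1k

# The same answer, produced directly vs. preceded by extended reasoning.
direct = inference_cost(500, 200, 0, price_per_1k=0.002)
long_thinking = inference_cost(500, 200, 6000, price_per_1k=0.002)

print(f"direct: ${direct:.4f}, long thinking: ${long_thinking:.4f}")
# The reasoning traces dominate the bill, which is why token-efficient
# architectures matter as test-time compute grows.
```

With these toy numbers, the reasoning-heavy request costs nearly ten times the direct one even though the visible answer is identical, which is the cost pressure token-efficient architectures aim to relieve.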
Nemotron-3: The most efficient open model family
Nvidia’s answer to the above challenges is the Nemotron-3 family, characterized by its focus on being open, accurate and efficient. The new models use a hybrid Mamba-Transformer mixture-of-experts, or MoE, architecture. This design dramatically improves efficiency, as it runs several times faster with reduced memory requirements.
The Nemotron-3 family will be rolled out in three sizes, catering to different compute needs and performance requirements:
- Nemotron-3 Nano (available now): A highly efficient and accurate model. Though it’s a 30 billion-parameter model, only 3 billion parameters are active at any time, allowing it to fit onto smaller form-factor GPUs, such as the L40S.
- Nemotron-3 Super (Q1 2026): Optimized to fit within two H100 GPUs, it will incorporate Latent MoE for even greater accuracy with the same compute footprint.
- Nemotron-3 Ultra (1H 2026): Designed to offer maximum performance and scale.
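To illustrate how a 30 billion-parameter model can activate only 3 billion parameters per token, here is a toy mixture-of-experts routing sketch in plain Python. The expert count, dimensions and gating scheme are illustrative assumptions, not Nemotron-3’s actual architecture.

```python
# Toy mixture-of-experts (MoE) routing: most parameters live in the
# experts, but only the top-k experts chosen by a gating network run
# for each token. Sizes here are illustrative assumptions.
import math
import random

random.seed(0)
NUM_EXPERTS = 10   # total experts (hold most of the parameters)
TOP_K = 1          # experts actually executed per token
DIM = 8            # toy hidden dimension

def rand_matrix(rows, cols):
    return [[random.gauss(0, 1) for _ in range(cols)] for _ in range(rows)]

def matvec(m, v):
    return [sum(w * x for w, x in zip(row, v)) for row in m]

experts = [rand_matrix(DIM, DIM) for _ in range(NUM_EXPERTS)]
router = rand_matrix(NUM_EXPERTS, DIM)   # learned gating weights

def moe_forward(x):
    """Route token x to its top-k experts; the rest stay idle."""
    logits = matvec(router, x)
    top = sorted(range(NUM_EXPERTS), key=lambda i: logits[i])[-TOP_K:]
    z = sum(math.exp(logits[i]) for i in top)
    gates = {i: math.exp(logits[i]) / z for i in top}   # softmax over top-k
    out = [0.0] * DIM
    for i in top:
        y = matvec(experts[i], x)
        out = [o + gates[i] * yi for o, yi in zip(out, y)]
    return out

token = [random.gauss(0, 1) for _ in range(DIM)]
out = moe_forward(token)
print(f"{TOP_K} of {NUM_EXPERTS} experts active per token")
```

The point of the sketch is the ratio: with one of ten experts active, roughly a tenth of the expert parameters do work per token, which is how a large total parameter count can fit a modest per-token compute and memory budget.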
Improved performance and context length
Nemotron-3 offers leading accuracy within its class, as evidenced by independent benchmarks from testing firm Artificial Analysis. In one test, Nemotron-3 Nano was shown to be the most intelligent open model in its small reasoning class.
Furthermore, the model’s competitive advantage comes from its focus on token efficiency and speed. On the call, Nvidia highlighted Nemotron-3’s tokens-to-intelligence ratio, which is crucial as the demand for tokens from cooperating agents increases. A significant feature of this family is the 1 million-token context length. This massive context window allows the models to perform dense, long-range reasoning at lower cost, enabling them to process full code bases, long technical specifications and multiday conversations within a single pass.
Reinforcement learning gyms: The key to specialization
A core component of the Nemotron-3 release is the use of NeMo Gym environments and data sets for reinforcement learning, or RL. This provides the exact tools and infrastructure Nvidia used to train Nemotron-3. The company is the first to release open, state-of-the-art, full reinforcement learning environments, alongside the open models, libraries and data to help developers build more accurate and capable, specialized agents.
The RL framework allows developers to pick up the environment and start generating specialized training data in hours.
The process involves:
- Training a base model (starting from the NeMo framework).
- Practicing/simulating in “gym” environments to generate answers or follow instructions.
- Scoring/verifying the answers against a reward system (human or automated).
- Updating/retraining the model with the high-quality, verified data, systematically shifting it toward higher-graded answers.
This systematic loop enables models to get better at choosing actions that earn higher rewards, like a student improving their skills through repeated, guided practice. Nvidia released 12 Gym environments targeting high-impact tasks like competitive coding, math and practical calendar scheduling.
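The loop described above can be sketched as a toy training cycle, where a single “skill” number stands in for model weights and an automated verifier supplies the reward. The model, verifier and update rule below are stand-ins for illustration, not NeMo Gym APIs.

```python
# Hedged sketch of the RL gym loop: practice in an environment, score
# answers with a verifier, and shift the model toward higher-reward
# behavior. "Model" and "environment" here are toy stand-ins.
import random

random.seed(0)

def model(task, skill):
    """Toy policy: higher skill means a better chance of a correct answer."""
    return task * 2 if random.random() < skill else task * 2 + 1

def verifier(task, answer):
    """Automated reward: 1.0 if the answer is correct, else 0.0."""
    return 1.0 if answer == task * 2 else 0.0

skill = 0.3
for round_num in range(5):
    # 1. Practice: generate answers for a batch of tasks.
    batch = [(t, model(t, skill)) for t in range(100)]
    # 2. Score: verify each answer against the reward function.
    scored = [(t, a, verifier(t, a)) for t, a in batch]
    # 3. Update: nudge the policy toward higher-graded answers
    #    (a toy stand-in for retraining on verified data).
    accuracy = sum(r for _, _, r in scored) / len(scored)
    skill = min(1.0, skill + 0.1 * accuracy)
    print(f"round {round_num}: accuracy {accuracy:.2f}, skill {skill:.2f}")
```

Each pass through the loop raises the chance of a correct answer, mirroring how repeated, verified practice in a gym environment systematically improves a specialized agent.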
Nvidia’s expanded commitment to open source
The Nemotron release is backed by a substantial commitment across three areas:
Open libraries and research
Nvidia is releasing the actual code used to train Nemotron-3, ensuring full transparency. This includes the Nemotron-3 research paper detailing techniques like synthetic data generation and RL.
Nvidia researchers continue to push the boundaries of AI, with notable research including:
- Nemotron Cascade: A student model that outperformed its teacher (DeepSeek, a 500 billion- to 600 billion-parameter model) in coding, demonstrating that the scaling laws of AI continue to extend.
- RLP (Reinforcement Learning in Pretraining): A technique to train reasoning models to think for themselves earlier in the process.
High-quality data sets
Nvidia is shifting the data narrative from big data to smart and improved data curation and quality. To accomplish this, the company is releasing several new data sets:
- Pre-training data: More than 3 trillion new tokens of premium pre-training data, synthetically generated and filtered for “all signal, no noise” quality, using more than 1 million H100 hours of compute.
- Post-training data (Safe Instruction): A 13 million-sample data set using only permissively licensed model outputs, making it safe for enterprise use.
- RL data sets: 12 new reinforcement learning environments and a corpus of data sets covering 900,000 sample tasks and prompts in math, coding, games, reasoning and tool use, making Nvidia one of the few open model providers releasing both the RL data and the environments.
- Nemotron-agent safety: This provides 10,800 labeled OpenTelemetry traces from realistic, multistep, tool-using agent workflows to help evaluate and mitigate safety and security risks in agentic systems.
Enterprise blueprints and ecosystem
Nvidia is providing reference blueprints to accelerate adoption, integrating Nemotron-3 models and acceleration libraries:
- IQ Deep Researcher: For building on-premises AI research assistants for multi-step investigations.
- Video search and summarization: Turning hours of footage into seconds of insight.
- Enterprise RAG: The most optimized, enterprise-ready retrieval-augmented generation blueprint, accelerating every step of the retrieval pipeline.
The Nemotron ecosystem is broad, with day-zero support for Nemotron-3 on platforms such as Amazon Bedrock. Key partners such as CrowdStrike Holdings Inc. and ServiceNow Inc. are actively using Nemotron data and tools, with ServiceNow noting that 15% of the pretraining data for their Apriel 1.6 Thinker model came from an Nvidia Nemotron data set.
The industry is winding down the hype phase of AI, and we should start to see more production use cases. The Nemotron-3 family is well-suited for this era, as it provides a performant and efficient open-source foundation for the development of the next generation of agentic AI, reinforcing Nvidia’s deep commitment to democratizing AI innovation.
Zoom Communications Inc. is a fascinating company in that it’s one of the few corporate technology brands that resonates with end users as well as information technology pros.
I’m aware of many instances where the IT organization was considering an alternate communications product but the demand from the user community was so strong that Zoom was purchased. Zoom’s ease of use made it the product of choice during the stay-at-home period of the pandemic and user loyalty grew from there.
Since then, Zoom has targeted much of its marketing at IT pros, but it’s going back to what has made it so unique with a new brand campaign, “Zoom Ahead.” Instead of targeting IT decision-makers, the company is putting its attention on the people who actually use the work platform. The concept for the campaign came out of customer research showing that everyday users still feel a strong connection to the platform, and Zoom sees that as something worth building on.
The first ad for the new campaign was developed with Colin Jost’s creative agency. “Saturday Night Live”‘s Bowen Yang anchors the humorous ad called “I Use Zoom!” and the ad itself feels like an SNL skit. Yang represents an IT-like figure asking people to download a complicated tool. But then the ad takes a turn, where Zoom frames the platform as something people choose because it’s simple and dependable, not just because IT picked it. It plays up how widely Zoom is used by office workers, business owners, and frontline staff.
“We’re not targeting IT buyers with this campaign. We’re actually looking to reach and engage the users,” Kimberly Storin, Zoom’s chief marketing officer, said in a briefing with industry analysts. “These are the people that are making things happen every single day on Zoom. They are champions. They are change makers. It’s not about just speaking to them. We want to inspire them and empower them to speak up for the tools that help them work better and get more done.”
The ad also makes references to some of Zoom’s newer products, including AI Companion and its contact center tools. Those additions are intentional. Many still associate Zoom with meetings, particularly video. Yet the company wants this campaign to help broaden that view.
“This campaign is our reintroduction,” said Storin. “It’s a movement. What we’re trying to do is capture some iconic moments, tap into that cultural relevance. We gave it a comedic modern twist… like “Severance.” Ultimately, we feel it’s not only unexpected but also a little bit ridiculous, and it’s a reminder that Zoom is defined by the people who use it.”
The ad will debut on Dec. 31 during the College Football Playoffs and will appear again during the NFL Playoffs, the Golden Globes and the Super Bowl pre-show. It will continue to roll out across digital, social and out-of-home channels through 2026. Storin noted that more creative is planned for the spring, which shows this isn’t meant to be a short-term push.
Zoom sees the campaign as an opportunity to reset how people think about the company. It wants to bring the brand back into everyday conversation and highlight products many users don’t know about by using humor and references to pop culture. After all, users are the ones who historically have driven much of Zoom’s momentum.
It will be interesting to see where the company takes the campaign from here. While most communications vendors have stayed in their swim lanes, Zoom has ventured well outside them with homegrown tools and acquisitions. To the surprise of many, me included, Zoom has added products such as e-mail and docs, two markets where industry watchers feel Microsoft’s stranglehold is far too hard to break.
It has also added frontline worker capabilities with the acquisition of Workvivo, as well as BrightHire, an AI-powered hiring platform, and Bonsai, a small business management platform. This is in addition to more traditional capabilities such as Zoom Phone and Zoom Contact Center.
With these moves, Zoom is attempting to disrupt not just communications but the way we work, and I believe this is the most misunderstood aspect of the company’s strategy. It didn’t build e-mail to be yet another e-mail client, just like it didn’t build Docs to try to be a better word processor.
What Zoom wants is the data from e-mail, documents, hiring tools and the like, which can power Zoom AI Companion. In data sciences, there’s an axiom, “Good data leads to good insights,” and though that’s true, silos of data lead to fragmented insights and we have plenty of those. Zoom wants to be the hub of work, and the place users work in most of the day.
This is certainly an ambitious goal, but it’s nice to see a vendor try to achieve something big. I would look at Zoom Ahead as a starting point to get users to think about the product as the one they love for video, but long-term, one that can do so much more than they thought.
High-performance network provider Arista Networks Inc. today announced the next wave of innovations for its campus network solutions.
The new products include expansion of its Virtual ES with Path Aliasing, or VESPA, offering, which will make it easier for businesses to deploy large-scale mobility domains. The Santa Clara networking company also announced it is expanding its Autonomous Virtual Assistant, or AVA, its agentic artificial intelligence solution, to help organizations streamline AI operations use cases.
Arista is well-known in high-performance networking environments where mass scale is critical. It has been knocking on the enterprise campus door for some time, including wireless. This release presents an excellent opportunity for Arista to bring its strengths in reliability, operational simplicity and mass scaling to wireless domains, including outdoor domains.
The company’s rapid growth has been driven by delivering a single, consistent experience across the network. For its enterprise customers, this spans AI and data center, cloud, campus, branch, wireless and wide-area networks. The operational simplicity and scale have been achieved with EOS, its single operating system, a unified data lake of streamed telemetry (NetDL) and AVA.
Arista looks to now bring its strengths to the scaling limits enterprises experience thanks to rapid growth in the number of clients and internet of things devices they deploy. VESPA brings campus networks the consistent, large-scale principles typically used in the data center by enabling customers to design massive Wi-Fi roaming domain networks that support more than a half-million clients and 30,000 access points.
“This also allows our customers to completely simplify the network design, because previously they had to worry about deploying a large campus,” Sriram Venkiteswaran, Arista’s senior director of product line management, said in a prebriefing. He added “They would worry about splitting the campus into multiple domains, each having to set up its own IP address, VLAN and sub-routing. So there’s a lot of design complexity involved in the traditional way. With this approach, having a single mobility domain, we’ve taken away all the complexity from designing the network.”
The second benefit of this solution, he explained, “goes back to us building a CNC [centralized network controller] across all layers of the network. Again, in the traditional controller world, when you have controller failures you typically have downtime of a minute or two, and that can be disastrous for some applications, especially in healthcare, where a doctor is on a call and then the controller fails and just drops the entire connection. It takes minutes to recover. And this is becoming more urgent, especially in native mission-critical environments such as manufacturing and healthcare. Customers want this seamless connectivity across their network. VESPA is designed to solve these two problems.”
One of Arista’s VESPA customers, Arizona State University, said the campus is transitioning to Arista’s controllerless Wi-Fi to “help shape and validate the development of Arista’s VESPA architecture — a standards-based approach designed to provide a seamless wireless roaming domain that improves connectivity across the university,” said Jorge De Cossio, senior director of digital infrastructure and enterprise technology for ASU.
This emphasis on campus mobility and agentic AI comes at a key time for Arista. Though many potential customers may think of the company as primarily a hyperscaler provider, that background plays well today, as the characteristics of campus and hyperscaler networks in the AI era are not significantly different. To serve both categories, a vendor needs to deliver reliable, always-on bandwidth and zero-trust operations, both of which Arista provides. As campus networks deploy more AI-driven solutions, its expertise should be appealing.
Focus on AIOps with AVA
On the pre-briefing, Jeff Raymond, Arista’s vice president of EOS software and services, told me that when the company talks to customers about what it can provide in the area of AIOps, some say they just “want an easy button,” while others say they barely trust anything but their command-line interface and question whether Arista is going to “automatically start self-driving my network.” Raymond said the company isn’t focusing on replacing jobs but rather using AVA’s AI capabilities to provide assistance to the network operator so that they can do their job better, focus on higher-order priorities, and get answers more quickly or prevent issues from happening.
Raymond said network teams are “typically a more cautious group” when it comes to deploying automation technologies such as AI. “Getting them to move to automation is still a little bit of a human change agent, and this is just one step.” AVA’s expanded capabilities include:
- Multi-domain event correlation across wired, wireless, data center, and security to pinpoint a single root cause;
- Agentic conversational and troubleshooting capabilities in Ask AVA for sophisticated, multi-turn dialogue that follows the user’s train of thought; and
- Continuous monitoring and automated root cause analysis for proactive issue identification.
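The multi-domain correlation idea in the first capability above can be illustrated with a simple time-window clustering sketch: events from different domains that land close together in time are grouped under one candidate root cause. The event data and the 30-second window below are assumptions for illustration; AVA’s actual correlation engine is far more sophisticated.

```python
# Illustrative sketch of multi-domain event correlation: cluster
# time-sorted events whose gaps fall within a window, then treat the
# earliest event in each cluster as the candidate root cause.
# Event data and the 30-second window are illustrative assumptions.

events = [
    {"t": 100, "domain": "wired",    "msg": "uplink flap on switch-7"},
    {"t": 102, "domain": "wireless", "msg": "AP heartbeat lost"},
    {"t": 104, "domain": "security", "msg": "auth failures spiking"},
    {"t": 900, "domain": "wireless", "msg": "client roaming storm"},
]

def correlate(events, window=30):
    """Group time-sorted events whose gaps are within `window` seconds."""
    groups, current = [], []
    for e in sorted(events, key=lambda e: e["t"]):
        if current and e["t"] - current[-1]["t"] > window:
            groups.append(current)
            current = []
        current.append(e)
    if current:
        groups.append(current)
    return groups

for g in correlate(events):
    # The earliest event in each cluster is the root-cause candidate.
    print(f"root cause candidate: {g[0]['msg']} ({len(g)} related events)")
```

Here the switch uplink flap, the lost AP heartbeat and the auth-failure spike collapse into one incident, while the later roaming storm stays separate — the same reduction from many alerts to one root cause that AVA aims to deliver across domains.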
Over the past year, I’ve noticed a marked change in attitudes toward AI within the networking community. Coming into 2025, there was a tremendous amount of fear of AI taking one’s job. Now that AI has worked its way into our day-to-day lives, that opinion has shifted from, “It’s going to take my job,” to “How did I ever do my job without it?” What’s become clear is that AI tools, such as AVA, aren’t the enemy; they’re engineers’ best friend because they let them work faster and smarter.
Ruggedized platforms for industrial environments
Arista will also debut two new ruggedized platforms for deployment in industrial or outdoor environments across a variety of sectors. The platforms are a 20-port DIN Rail switch with an IP50 rating, and a 1RU 24-port switch with an IP30 rating. The IP ratings indicate the devices are suitable for use in industrial environments, since they can withstand extreme temperatures, vibrations and shocks.
The entry into the ruggedized area was a bit of a surprise to me because these products typically carry lower margins than traditional networking products, and Arista is extremely margin-focused, as its financial results reflect. Raymond explained that Arista isn’t moving into the ruggedized market as a new product category to lead with. Rather, this is for Arista’s manufacturing, warehouse and other customers who buy other products from Arista but must go to a competitor for these switches. This rounds out the portfolio and lets the company extend the “end-to-end” Arista value proposition.
Arista says it expects the new software capabilities and switch platforms to be generally available in the first quarter of 2026.

