Featured
Reports

Nathan Howe, VP of Global Innovation at Zscaler talks mobile security

March 2025 // Zeus Kerravala from ZK Research interviews Nathan Howe, VP of Global Innovation at Zscaler, about their new […]

Continue Reading
Supporting AI Workloads is a Top Challenge for Organizations

Networking Decision-Makers Face Increasing Network Complexity

June 2024 // In the age of artificial intelligence (AI), networks have become increasingly important — as a way to […]

Continue Reading

Verizon Mobile Partners with Microsoft So Teams Can Energize the Mobile Workforce

December 2023 // For years, mobile employees have constituted a significant portion of the workforce. Since the start of the […]

Continue Reading

Check out
OUR NEWEST VIDEOS

2025 ZKast #32 with Rick Sacchetti, Area VP for Cisco at The Players Championship

7.3K views March 20, 2025 10:42 am

2 0

2025 ZKast #31 with Nathan Howe of Zscaler from MWC25

3.1K views March 20, 2025 10:26 am

1 0

2025 ZKast #31 with Craig Durr and Melody Brue from the RingCentral Analyst Summit

4.1K views March 20, 2025 10:20 am

0 0

Recent
ZK Research Blog

News

Palo Alto Networks Inc. last week unveiled its newest cloud security offering, Cortex Cloud. The latest iteration of the company’s Prisma Cloud, it’s natively built on Palo Alto’s Cortex AI-enabled security operations platform.

In its announcement, Palo Alto described Cortex Cloud as combining Cortex’s “best-in-class cloud detection and response (CDR) with industry-leading cloud native application protection platform (CNAPP) from Prisma Cloud for real-time cloud security.”

Cloud attack surfaces are a favorite target of cyberattacks, reflecting the continuing growth of enterprise cloud adoption and artificial intelligence usage. Cortex Cloud brings together multiple sources of data, automates workflows, and applies AI to deliver insights to reduce risk and prevent threats. The company designed Cortex Cloud to ingest and analyze data from third-party tools enabling to operate across the cloud ecosystem.

In a briefing with analysts, Scott Simkin, Palo Alto’s vice president of marketing, said Cortex Cloud gives security teams greater insight into what’s happening within their infrastructure, enabling them to act quickly and decisively. “One of the primary things we wanted to make better with Cortex Cloud is time to value, ease the workflow, ease of onboarding, and ease of reporting and dashboarding,” he said.

Cortex Cloud also consistently delivers capabilities such as role-based access control (RBAC) in one place for all cloud modules. “Now they’ve got it for all cloud modules and the SOC together,” Simkin said.

Key features

Built on Cortex, Cortex Cloud is designed to prevent cloud threats in real time. It leverages runtime protection so customers can achieve protection at a lower total cost of ownership that buying point products. Cortex Cloud includes:

Application security: Organizations can build secure apps and prevent issues during development from becoming production vulnerabilities that attackers can exploit. Cortex Cloud identifies and prioritizes issues across the development pipeline, providing end-to-end context across code, runtime, cloud, and third-party scanners.

Cloud posture: Cortex Cloud builds on Prisma Cloud’s cloud posture capabilities, combining cloud security posture management (CSPM), cloud infrastructure entitlement management (CIEM), data security posture management (DSPM), AI security posture management (AI-SPM), compliance, and vulnerability management (CWP) in one natively integrated platform.

Cloud runtime: Cortex Cloud natively integrates the unified Cortex XDR agent, including additional cloud data sources, to stop attacks in real time.

SOC: The transformation of SOC operations is core tenet of Palo Alto’s platform value proposition. To enabled this, Cortex Cloud works with Cortex XSIAM to extend detection and response capabilities from enterprise to the cloud for comprehensive, AI-driven security operations. Cortex Cloud natively integrates cloud data, context, and workflows within Cortex XSIAM to significantly reduce the mean time to respond to modern threats with a single, unified SecOps solution.

Improving time to value

Simkins said that the enhancements delivered by Cortex Cloud deliver value quickly to enterprises. “When you onboard a cloud account, you onboard it once, and every single posture control and runtime is now activated at the same moment with the click of a button. So time to value has been dramatically improved,” he said. “Unifying cloud and SOC within a broader security operations umbrella is the right decision to help enterprises stay ahead.

“Customers have told us over and over again they’re not looking to adopt individual posture controls,” Simkins said. “They’re looking to adopt cloud posture, runtime, or end-to-end security operations. So we listened to that feedback to get to a much simpler and easier to understand price and model.”

My perspective

With Cortex Cloud, Palo Alto is demonstrating the continuing platformization of security. As security functions become more standardized, it’s easier to roll them into enterprise platforms.

That transition has been occurring for a while. Next-generation firewalls and other security capabilities have been rolled into a single system. Enterprises no longer need to buy these components separately. I also see cloud-native application protection platforms having reached that point, so they can be rolled in as a SOC tool.

This evolution makes security platforms more comprehensive, responsive, and capable than ever before. The era of the standalone security app is rapidly coming to an end.

Availability

General availability for Cortex Cloud is Feb. 18. Simkins said upgrades for existing customers, through PAN’s partner ecosystem, will begin in April.

Cisco Systems Inc. managed to put up a strong “beat and raise” in its fiscal second-quarter earnings this week, and investors took the news positively as the stock is trading at an all-time high, excluding the overvaluation during the dot-com bubble.

Beyond the strong quarter, the results also highlighted several broader themes. Here are my five takeaways from Cisco’s most recent quarter:

Security is moving the needle

For the past several years, I have referred to security as the biggest opportunity Cisco had to grow its revenue and its stock price. Last May, I mentioned in a post how Jeetu Patel (pictured), then head of security and collaboration and now chief product officer, had retooled security.

Since then, Cisco has released a flurry of security innovations, including extended detection and response or XDR, AI Defense and Hypershield, and the recently announced Smart Switch, which uses data processing units to embed security into the network. Although growth was only 4%, the company is seeing good momentum in new products.

On the earnings call, Chief Executive Chuck Robbins talked about security order growth and the impact of new products.“Our security orders more than doubled again this quarter,” he noted. “In just 12 months, both Cisco Secure Access and XDR have gained more than 1,000 customers combined, and approximately 1 million enterprise users each.”

Moreover, he added, “Even before it’s in full production, Hypershield is also seeing solid momentum. In Q2, we booked major platform deals with two Fortune 100 enterprise customers who are leveraging Hypershield to deploy security into the network in a fundamentally new way.”

Right now, order growth is more important than actual revenue as most of it is sold with a subscription model, which leads to “revenue stacking,” and that takes a while to see meaningful results. Security growth is a key indicator for sustained growth because the industry is massive and the competitive landscape is highly fragmented. Capturing even a moderate amount of share will change how Cisco is perceived by investors and continue to move the stock up.

The platform effect is taking hold

Since becoming the company’s first CPO, Patel has been emphatic about creating a Cisco “platform” and while Cisco has used the term before, it was more a euphemism for product bundles. To describe his vision, Patel often refers to Apple Inc., which is the best example of a company that delivers great experiences through its platform. I’m “all in” on Apple because I can do many things I could not with a collection of best of breed products. I can iMessage on my laptop, I can cut text from my phone and paste it on my tablet, I can push a webpage from one device to another.

We are starting to see the fruits of the platform effect at Cisco with ThousandEyes integration across all its devices making troubleshooting easier. Some of the new security products leverage network telemetry to “see” things cyber only tools can’t. The new Webex codec uses network intelligence to deliver a high-quality experience over a low-bandwidth connection and it has a single AI agent that spans all its products.

During a conversation with Patel, he told me his goal is to build a platform that delivers “magical experiences which people love and tell others about.” He has been CPO less than a year, but so far, so good. Cisco Live US is coming up in June and we should see more evidence of it there.

AI will be a catalyst for growth

The hype around artificial intelligence is at an all-time high, but most of the focus has been on graphics processing units and servers. The reality is the network plays a critical role in AI performance and that has yet to be reflected by the investor community — but that should change soon. During the earnings call, management stated it expects to exceed $1 billion in AI product orders for fiscal year 2025, comprised of a broad set of products, including network infrastructure, optics and Unified Computing System servers.

To help customers accelerate deployments, Cisco recently rolled out its AI PODs, which are turnkey, end-to-end Cisco solutions that can be deployed and used immediately for AI training and inferencing. This is a good example of the platform effect cited above.

Splunk is about more than adding revenue

When Cisco acquired Splunk Inc., many investors I talked to made comments such as, “It paid $28 billion for $4 billion a year in revenue.” Given Splunk’s margins, that’s a decent return. However, the value of Splunk is about more than dollars contributed. Since closing the deal 11 months ago, I have seen Splunk integrated across multiple Cisco products.

As Robbins said on the earnings call, “Since Splunk became a part of Cisco almost 11 months ago, we continue to integrate our businesses and fuel synergies without disrupting momentum. During the quarter we also integrated Talos into Splunk’s newly released Enterprise Security 8.0 solution and AppDynamics into Splunk’s on-prem log observer.”

At the National Retail Federatio show, Splunk observability was on display as part of its retail solutions, and I’ve seen many Cisco-Splunk cross-selling deals in the field. One interesting trend to watch is how Cisco brings Splunk and its other products together to address the growing interest in digital resilience, which is being fueled by AI. More to come on that.

Cisco’s not trading as a software company yet

From a stock perspective, Cisco still looks like a hardware company despite some strong software metrics. On the call, Chief Financial Officer Scott Herren highlighted many metrics, such as annual recurring revenue, that point to it having made the shift from hardware to software. As he explained, “Total ARR ended the quarter at $30.1 billion, an increase of 22%, with product ARR growth of 41%. Total subscription revenue increased 23% to $7.9 billion, and now represents 56% of Cisco’s total revenue. Total software revenue was up 33% at $5.5 billion, with software subscription revenue up 39%.”

However, even at the higher stock price, Cisco’s price-to-earnings ratio is only about 17X, below the peer group average of 22X. It’s a company that has traded as a value stock for some time, since that’s what it has been. Security and AI give the company the chance to break into growth mode, but platform is the key to differentiation over the dozens of point products, many of which have significantly higher valuations than Cisco.

One last note on this quarter: The company announced that Gary Steele, former Splunk CEO and current president of Cisco go-to-market, will resign as of April 25. Although I have no official word from Cisco on why Steele is leaving, he has been a CEO since 1998, and post-Cisco, he will pursue another CEO gig. Assuming Robbins is at Cisco for the foreseeable future, I take the company’s statement of his wanting to be a CEO at face value.

The Kansas City Chiefs and Philadelphia Eagles have almost two weeks to develop a game plan for Super Bowl LIX in New Orleans Sunday, Feb. 9, but the technology team starts well before that. In fact, the planning and strategy for the next championship game — Super Bowl LX, which will be held in Levi’s Stadium in Santa Clara in February 2026 — are well underway.

Too early, you say? Well, it’s premature for the teams that hope to play in the game to make travel plans, but it’s a very different story for the information technology and cybersecurity professionals from the NFL and the San Francisco 49ers, the team that plays its home games at Levi’s Stadium (pictured).

A recent LinkedIn event by Cisco Systems Inc., one of the key providers of the Levi’s Stadium network, featured an in-depth discussion of how much planning, effort and technology goes into providing fast, secure connectivity for the teams, broadcasters, vendors and, of course, the fans who will pack the stadium along with their tens of thousands of mobile devices.

NFL and 49ers team up on tech

Aaron Amendolia, the NFL’s deputy chief information officer, has worked for the league for 21 seasons. He leads the NFL’s innovation team and oversees event technology and infrastructure. This year’s Super Bowl in New Orleans will be Amendolia’s 18th, more than even the GOAT himself, Tom Brady. He and his team will be busy with the 2025 game, and they’re already immersed in work for next year.

Costa Kladianos is the 49ers’ executive vice president and head of technology. He and his team handle tech for all home games, any postseason games the Niners host and numerous other events at the 68,500-seat stadium. After Super Bowl LX, another big job on his plate will be a different type of football: FIFA World Cup soccer games, some of which will be held at Levi’s Stadium.

Connectivity and much more

“You start to think about all the connectivity needed for the Super Bowl,” Amendolia said. “All the devices that come into a stadium on game day and all the buildout around that. We met with Costa’s team to talk about preparation for LX.”

And on game day, he added, “we’re planning for over 150,000 to 200,000 devices entering this building. But it’s not just about game days, but all the preparation around it. We have many partners, broadcasters, vendors, a diverse group of technology showing up, connecting to the network, and doing everything you need to deliver the games.”

I’ve interviewed many stadium CIOs, and Amendolia’s comments echo those of others, saying that the network is critical to every aspect of holding a game. Last year, I talked with a sports CIO who mentioned how a situation took the network down. He explained how he wasn’t sure it would be up by the game and had to explain to the owners that a game could not occur without a network.

Security systems, ticketing, point-of-sale, medical services and other critical services run on the network. The good news is the network did come back up in time, and the scare enabled the team to build a redundant data center. But this is the challenge that all stadium’s CIOs face and it’s magnified exponentially in a high-profile game, like the Super Bowl.

Wi-Fi plays a massive role in overall stadium connectivity, according to Kladianos. It’s about much more than fans logging in with their cell phones. “Wi-Fi is table stakes right now,” he explained. “Everybody’s bringing their device, everybody’s sharing the great time they’re having at the event, but it’s also what all our backend technology, including point-of-sale systems, runs on. We love to run on Wi-Fi because it just makes us flexible. We can quickly move a sales system, our point of sale outwards. We can get into the lines and go to the in-seat service. It gives us that flexibility to what we want, especially around the gates, getting people through the gates quicker, checking their tickets.”

AI increases the burden on stadium Wi-Fi

“AI requires a lot of bandwidth and processing power, and that has to go through the Wi-Fi in the stadium,” Kladianos said. “That becomes super-important as we go there because we want fans not to realize the experience they’re having in the Wi-Fi. We want them to know that it works. We currently have 1,200 access points throughout the stadium, and we’re looking to expand that as we head to 2026 to ensure that everyone has the same great experience they have everywhere else.”

Managing all the devices that require Wi-Fi access is extremely challenging, according to Kladianos. “Even with your best analyst, you need technology and tools to correlate those events,” he said. “AI is really where we’re looking.

Indeed, he added, “we’re going to validate which AI solution is going to return the best results. It’s exciting because you must correlate against something unique to sports. The sensors we have on the field with the players, the cameras we have doing optical tracking, our broadcast cameras capturing and getting that live event out to the points of sale, and the fan devices create a unique environment.”

With all that data, security is critical

“We look at AI as an opportunity, and we know with opportunities, there’s also the other side of the coin, which is threats,” Kladianos explained. “You want to be ahead of the game. So, with our partner Cisco, we’re putting in the latest and greatest monitoring solutions and everything they offer on the security side, on our firewalls, using threat intelligence.”

Moreover, he added, the team can take all its data, all its logs on the back end, and quickly use AI to summarize threats, because AI can do it a lot faster. “I have analysts in the group, so that’s really going to help us. In terms of other innovations in the stadium, our strategy for AI is the intelligent stadium,” he said. “We want to see how AI can enable everything we do to engage our fans.”

Few events are as closely watched as the Super Bowl. The 2024 game had more than 123 million viewers in the United States, and the NFL continues attracting new fans worldwide. That growing focus makes each Super Bowl a top-level Homeland Security concern on par with a presidential inauguration.

“Obviously, Super Bowl is a high-profile event, but also a high-value target for adversaries,” said Amendolia. “Our cybersecurity team, our CISO, they’re making sure that we implement AI responsibly, so we’re not causing any vulnerabilities ourselves, and we understand what’s going on in the outside world. It’s a lot of education and putting the right tools in place, but also communication with our partners. You think of all the different organizations from across the world, international broadcasters, domestic broadcasters, and digital experiences that come to the Super Bowl; you’re now bringing a whole ecosystem trying to get out their content around this live event with all the tools they bring in.”

Added Kladianos: “We have a full security operation center. We work closely with the NFL, local security agencies, the FBI and local police. We run different technology in terms of our high-definition camera systems using IP on the back end running through that network, making it super-important to have that low latency. These cameras are not just cameras; now, they’re analyzing super HD and super zoom. Using some of the AIs and the cameras, you can spot potential threats before they happen.”

Amendolia said his cyber team is using logging tools such as Splunk’s to bring everything to one place, as well as Cisco’s suite of security tools. He cited some stats: “350,000 connections blocked to malicious and blacklisted sites. 39,000 intelligence services detected and dealt with. 1,600 intrusion attempts foiled. Those are just the years we’ve worked with Cisco at the Super Bowl. These distinct things keep incrementally increasing. The target is there.”

Final thoughts

Though this is a sports-related story, the lessons learned can be applied to companies in all industries. A recent ZK Research/theCUBE Research survey found that 93% of respondents believe the network to be more critical to business operations than it was two years ago.

However, I find that with most companies, the network does not get the same level of C-level interest as the cloud or compute platforms, but the reality is that the network is the business for most companies. Ensuring a secure, rock-solid network is crucial to business operations in all industries.

Cisco Systems Inc. this week held its first AI Summit, a thought leadership event on the pivotal topics shaping the future of artificial intelligence — this one focused on the security of AI systems.

The summit was small and intimate, with about 150 attendees, including executives from about 40 Fortune 100 companies. I understand why the interest from top companies was so high, as the speaker list was impressive and included AI luminaries such as Alexandr Wang, founder and chief executive of Scale AI Inc.; Jonathan Ross, founder ad CEO of Groq Inc.; Aaron Levie, co-founder and CEO of Box Inc.; Brad Lightcap, chief operating officer of OpenAI; David Solomon, CEO of Goldman Sachs; and many others.

From a product perspective, Cisco leveraged AI Summit to announce a new tool called Cisco AI Defense, which, as the name suggests, safeguards AI systems. According to Cisco’s 2024 AI Readiness Index, only 29% of organizations feel equipped to stop hackers or unauthorized users from accessing their AI systems. AI Defense aims to change that statistic.

The product’s release is well-timed, as AI security is now at the top of business and information technology professionals’ minds. This week, I also attended the National Retail Federation show in New York. There, I attended three chief information officer events, with a combined attendance of about 50 IT executives.

Every IT executive at the three events was highly interested in AI. The primary thing holding most of them back was security, particularly for regulated industries such as healthcare, retail and financial services.

Cisco’s AI Defense is designed to give security teams a clear overview of all the AI apps employees use and whether they are authorized. For example, the tool offers a comprehensive view of shadow AI and sanctioned AI apps. It implements policies restricting employee access to unauthorized apps while ensuring compliance with privacy and security regulations.

One common theme from my IT discussions is that no one wants to be the “department of no,” but they also understand that without the proper controls, the use of AI can put businesses at risk. Also, it has been shown over time that when IT departments say no, users find a way around it. It’s better to provide options for users, and Cisco AI Defense offers the visibility and controls required for workers to be safe.

The tool is also helpful for developers because applications can be secured at every stage of the application lifecycle. During development, it pinpoints weaknesses in AI models so potential issues can be fixed early. This helps developers create secure apps immediately without worrying about hidden risks.

When it’s time to deploy those apps, AI Defense ensures they run safely in the real world. It continuously monitors unauthorized access, data leaks and cyberthreats. The tool provides ongoing security even after deploying an app by identifying new risks.

One of the tool’s unique attributes is its continuous validation at scale. One of the challenges of security AI is that while a company could use traditional tools to secure the environment at any point, guardrails will have to be adapted if the model changes. Cisco AI Defense uses threat intelligence from Cisco Talos and machine learning to continually validate the environment and automate the tool’s updates.

This also builds on Cisco’s security portfolio, which is taking shape nicely as a platform. In the analyst Q&A, I asked Cisco Chief Product Officer Jeetu Patel (pictured, left, with Cisco CEO Chuck Robbins), about the “1+1=3” effect if you use AI Defense with Hypershield. He corrected me and said four technologies created a “1+1+1+1=20.” These include Cisco Secure Access, Hypershield, Multi-Cloud Defense, and AI Defense.

“These four work in concert with each other, Patel said. “If you want visibility into the public cloud or what applications are running, Multi-Cloud Defense ties in with AI Defense and gives you the data needed to secure the environment. If you want to ensure enforcement on a top-of-rack switch or a server with an EBPF agent, that can happen as AI Defense is embedded into Hypershield.”

What’s more, he added, “we will partner with third parties and are willing to tie this together with competitor products. We understand the true enemy is the adversary, not another security company, and we want to ensure we have the ecosystem effect across the industry.”

DJ Sampath, Cisco’s vice president of product, AI software and platform, added, “AI Defense data would be integrated into Splunk, so all the demonstrated things will find their way into Splunk through the Cisco Add-On to enrich the alerts you see in Splunk.” Given the price Cisco paid for Splunk Inc., integrating more Cisco products and data into it will create a multiplier effect on revenue.

I firmly believe that share shifts happen when markets transition, and AI security provides a needle-moving opportunity for Cisco and its peers. AI will create a rising tide for the security industry, but the company that nails doing it easily will benefit disproportionately. The vision of what Cisco laid out is impressive, but the proof will come when the product is available. We shouldn’t have to wait long, since it’s expected to be available this March.

For those who missed it, the event will be rebroadcast next Wednesday, Jan. 22.

It’s NRF week in New York, which allows technology vendors to showcase innovation for the retail industry, and at the National Retail Federation show, HPE Aruba Networking rolled out several new products to help retailers tackle industry-specific challenges.

They included providing backup connectivity for mission-critical apps, supporting pop-up stores and simplifying information technology infrastructure deployment in retail environments.

Retail has been a core industry for the Hewlett Packard Enterprise Co. unit, which designed the new products to address the networking needs of large and small retail locations. The HPE Aruba Networking 100 Series Cellular Bridge is a key addition to the portfolio. It provides “always-on” connectivity if the primary network experiences a disruption, allowing retailers to stay up and running, even when setting up temporary pop-up locations and kiosks. The Cellular Bridge defaults to 5G but automatically switches to 4G LTE when needed.

“It’s about making sure that there is business continuity, especially for critical transactions like credit cards, and ensuring that it is always on whether anything else in the network fails,” Gayle Levin, senior product marketing manager for wireless at HPE Aruba, said in a briefing.

HPE Aruba is also expanding its retail offerings by combining networking and compute capabilities with the launch of the CX 8325H switch. The energy-efficient 18-port switch integrates with HPE ProLiant DL145 Gen 11, a compact, quiet server for edge computing. Together, these devices provide efficient computing and storage, while their space-saving design makes them ideal for small retail environments.

What I like about this product is that it combines technology from HPE’s computing side with networking from Aruba to create a solution for retail challenges. Most brick-and-mortar stores are space-constrained and do not have room for separate devices.

Moreover, HPE Aruba is expanding its Wi-Fi 7 lineup with 750 Series access points (APs). Like the 730 Series, the new APs can securely process internet of things data and handle a larger number of IoT devices. One of the compelling features of the 50 Series is its ability to run containerized IoT applications directly on the device without sending data to the cloud. Instead, it processes data at the edge, right where it’s collected.

IoT has exploded in retail and organizations in this industry, creating massive amounts of data, which means they also face extra security risks. IoT devices are easy targets for hackers because many still use default or weak passwords and outdated software, and connect to larger networks. In addition, they collect sensitive data like location or usage patterns. With so many devices in use, the number of potential attack points increases.

“In retail, brand reputation is critical,” Levin said. “We’re ensuring that the door lock is not being hacked to avoid exposure or added risk. IoT is supposed to help, but it’s doing the opposite.”

HPE Aruba addresses IoT security by integrating zero-trust into its products. For example, its access points prioritize securing IoT devices like cameras, sensors, and radio frequency identification or RFID labels, which are common entry points for hackers. The vendor also provides AI-powered tools like client insights and micro-segmentation to detect potential breaches proactively.

Central AI Insights is a new product created for retail curbside operations. It uses AI to automatically adjust Wi-Fi settings, reducing interference from things like people passing by outside, so customers and staff always have a reliable connection. If something goes wrong — whether it’s a network issue, an internet problem or a glitch in an app — Central AI Insights helps diagnose the issue. It also monitors IoT devices and can spot suspicious activity.

“It’s not just about using the network to support AI but also making the network work better using AI,” Levin said. “We’ve created specific insights that help retail. The idea is to make supporting these very large, distributed store ecosystems easier with a centralized IT department. So, they’re getting everything they need and use AI insights to understand where the problem is.”

HPE Aruba has a broad ecosystem of retail partners like Hanshow and SOLUM, which offer electronic shelf labels, or ESLs, and digital signage. Another partner, Simbe, has developed an autonomous item-scanning robot that tracks products, stock levels and pricing. VusionGroup uses computer vision AI and IoT asset management with ESLs and digital displays to help retailers track their inventory. Zebra Technologies provides RFID scanners, wearable devices and intelligent cabinets for omnichannel retailing.

HPE Aruba has upgraded its Central IoT Operations dashboard to simplify retailers’ management of IoT devices. The improved dashboard has a single interface, connects Wi-Fi APs to devices such as cameras and sensors, and integrates with third-party applications. I stopped by the HPE booth at NRF, where attendees could check out the hardware, see it in action with some retail demos, and experience the new software.

AI, digitization, omnichannel communications and IoT are creating massive changes in retail. Though these technologies may seem distinct, they share one commonality: They are network-centric. These new products from HPE Aruba enable retailers to deploy a modernized network that can act as a platform to enable companies to adapt to whatever trend is next.

Amazon Web Services Inc. made several announcements at the CES consumer electronics show last week regarding partnerships in the automotive industry that are aimed at furthering the rise of software-defined vehicles.

Building and delivering cars is increasingly becoming a software game that requires automotive manufacturers to take an ecosystem approach. The rise of software-defined vehicles, or SDVs, enables auto companies to work on parts or cars that have yet to be built. Also, updates can be made to finished products using over-the-air connectivity, something they could never do before.

AWS is partnering with several companies to make SDVs smarter and easier to develop. By using cloud computing, artificial intelligence and scalable tools, AWS is helping automakers build better cars that can be updated and improved over time.

Honda Motor Co. Ltd. is among the companies working with AWS to turn its cars into SDVs. The car company has created a “Digital Proving Ground,” or DPG, an AWS-enabled cloud simulation platform for digitally designing and testing vehicles. Using DPG, Honda can collect and analyze data such as electric vehicle driving range, energy consumption and performance. The platform reduces reliance on physical prototypes, speeding up development and lowering costs.

Historically, auto companies have had to build cars first and then test them. Though this seems reasonable, the cost and time taken can be very high as accidents happen, which creates delays, and niche use cases can be complex to test. For example, at dawn and dusk, sensors can malfunction because of the brightness. This can only be tested for a few minutes daily in the physical world. In a simulated environment such as the DPG, the sun can be held at the horizon, and millions of hours of simulation run.

Moreover, Honda uses AWS’ video streaming and machine learning tools to develop video analytics applications. Amazon Kinesis Video Streams processes and stores car camera footage to detect unusual movement around a car. If implemented in the real world, it could potentially alert drivers to nearby hazards and help prevent collisions.

Honda is also tapping into AWS generative AI services, specifically Amazon Bedrock. For example, it’s developing a new system that guides drivers to the best charging stations based on location, battery level, charging speed and proximity to shopping centers. The system provides secure communication between vehicles and the cloud while gathering driver preferences to offer personalized recommendations. It’s set to launch in Honda’s 0 Series EVs (pictured).

Honda’s partnership is notable, as it’s among the highest-volume manufacturers. Specialty EV companies were early interested in leveraging platforms such as AWS. A Honda partnership legitimizes that SDVs are the way forward for this industry.

Building on this momentum, AWS has also teamed up with HERE Technologies to enhance location-based services for SDVs. HERE provides advanced mapping technology, while AWS supplies the cloud tools to process large amounts of data. The companies are helping automakers build driver assistance systems, hands-free driving, EV routing and more.

HERE’s HD Live Map processes real-time sensor data to provide granular navigation and improve EV battery usage. The company just launched a new tool called SceneXtract, which simplifies testing by creating virtual simulations. Using a combination of HERE’s mapping technology and services like Amazon Bedrock, automotive developers run detailed simulations to test advanced driver assistance systems and automated driving. For instance, they can locate and export map data into test scenes, reducing the time, effort and cost involved in preparing simulations.

Additionally, AWS has partnered with automotive supplier Valeo to simplify the development and testing of vehicle software. Valeo announced the first three solutions during CES 2025: Virtualized Hardware Lab, Cloud Hardware Lab and Assist XR.

Virtualized Hardware Lab allows carmakers to test software on virtualized components, potentially speeding up development by up to 40%, according to Valeo. This cloud-based solution, hosted on AWS, will be available on AWS Marketplace yearly this year.

Valeo offers the Cloud Hardware Lab, a Hardware-in-the-loop-as-a-service solution for those who want access to large-scale testing systems. HIL combines hardware components with software simulations so companies can test how their software interacts with hardware systems. HILaaS allows companies to access Valeo’s advanced testing systems remotely through an AWS-hosted platform.

Lastly, Assist XR will provide roadside assistance, vehicle maintenance and other remote services. It will use AWS cloud infrastructure and AI tools to process real-time data from vehicles and their surroundings. This is one of many examples of the technologies needed to build safer, smarter and more efficient cars.

Going into CES, I was chatting with some media, and there is a perception that the automotive industry has seen little innovation over the past several years. Though I believe this statement is incorrect, I understand the source. Five or more years ago, fully autonomous vehicles were all the rage and were supposed to be here by now. This set an expectation that was not realistic. If the benchmark for innovation is level five AVs, then we aren’t there yet.

However, every year, incremental innovation has been made in the journey to fully autonomous, and we now have many features that make us better, smarter and safer drivers. 2025 won’t be the year of level five, but it will be another year in which we see more steps taken toward it.

Although the holiday gift-giving season may be over, Nvidia Corp. co-founder and Chief Executive Jensen Huang was in a very generous mood during his Monday keynote address at the CES consumer electronics show in Las Vegas. The leader in accelerated computing, which invented the graphics processing unit more than 25 years ago, still has an insatiable appetite for innovation.

Huang (pictured), dressed in a more Vegas version of his customary black leather jacket, kicked off this keynote with a history lesson on how Nvidia went from a company that made video games better to the AI powerhouse it is today. He then shifted into product mode and showcased his company’s continuing leadership in the AI revolution by announcing several new and enhanced products for AI-based robotics, autonomous vehicles, agentic AI and more. Here are the five I felt were most meaningful:

Cosmos for world-building

Nvidia’s Cosmos platform consists of what the company calls “state-of-the-art generative world foundation models, advanced tokenizers, guardrails and an accelerated video processing pipeline” for advancing the development of physical AI capabilities, including autonomous vehicles and robots.

Using Nvidia’s world foundation models or WFMs, Cosmos makes it easy for organizations to produce vast amounts of “photoreal, physics-based synthetic data” for training and evaluating their existing models. Developers can also fine-tune Cosmos WFMs to build custom models.

Physical AI can be very expensive to implement, requiring robots, cars and other systems to be built and trained in real-life scenarios. Cars crash and robots fall, adding cost and time to the process. With Cosmos, everything can simulated virtually, and when the training is complete, the information is uploaded into the physical device.

Nvidia is providing Cosmos models under an open model license to help the robotics and AV community work faster and more effectively. Many of the world’s leading physical AI companies use Cosmos to accelerate their work.

The Omniverse is expanding

Huang also announced new generative AI models and blueprints that expand and further integrate Nvidia Omniverse into physical AI applications. The company said leading software development and professional services firms are leveraging Omniverse to drive the growth of new products and services designed to “accelerate the next era of industrial AI.”

Companies such as Accenture, Microsoft and Siemens are integrating Omniverse into their next-generation software products and professional services. Siemens announced at CES the availability of Teamcenter Digital Reality Viewer, its first Xcelerator application powered by Nvidia’s Omniverse libraries.

New blueprints for developers

Nvidia debuted four new blueprints for developers to use in building Universal Scene Description (OpenUSD)-based Omniverse digital twins for physical AI. The new blueprints are:

Raising the bar for consumer GPUs

Nvidia announced the GeForce RTX 50 series of desktop and laptop graphics processing units. The RTX 50 series is powered by Nvidia’s Blackwell architecture and the latest Tensor Cores and RT Cores. Huang said it delivers breakthroughs in AI-driven rendering. “Blackwell, the engine of AI, has arrived for PC gamers, developers and creatives,” he said. “Fusing AI-driven neural rendering and ray tracing, Blackwell is the most significant computer graphics innovation since we introduced programmable shading 25 years ago.”

The pricing of the new systems gave rise to a loud cheer from the crowd. The previous generation GPU, RTX 4090, retailed for $1,599. The low end of the 50 series, the RTX 5070, which offers comparable performance (1,000 trillion AI operations per second) to the RTX 4090, is available for the low price of $549. The RTX 5070 Ti, 1,400 AI TOPS is $749, the RTX 5080 (1,800 AI TOPS) sells for $999, and the RTX 5090, which offers a whopping 3,400 AI TOPS, is $1,999.

The company also announced a family of laptops where the massive RTX processor has been shrunk down and put into a small form factor. Huang explained that Nvidia used AI to accomplish this, as it generates most of the pixels using Tensor Cores. This means only the required pixels are raytraced, and AI is used to develop all the other pixels, creating a significantly more energy-efficient system. “The future of computer graphics is neural rendering, which fuses AI with traditional graphics,” Huang explained. Laptop pricing ranges from $1,299 for the RTX 5070 model to $2,899 for the RTX 5090.

Project DIGITS

Huang introduced a small desktop computer system called Project DIGITS powered by Nvidia’s new GB10 Grace Blackwell Superchip. The system is small but powerful. It will provide a petaflop of AI performance with 120 gigabytes of coherent, unified memory. The company said it will enable developers to work with AI models of up to 200 billion parameters at their desks. The system is designed for AI developers, researchers, data scientists and students working with AI workloads. Nvidia envisions key workloads for the new computer, including AI model experimentation and prototyping.

Enabling agentic AI

Rev Labaredian, vice president of Omniverse and simulation technology at Nvidia, told analysts in a briefing before Huang’s keynote that the massive shift in computing now occurring represents software 2.0, which is machine learning AI that is “basically software writing software.” To meet this need, Nvidia is introducing new products to enable agentic AI, including the Llama Nemotron family of open large language models. The models can help developers create and deploy AI agents across various applications — including customer support, fraud detection, and product supply chain and inventory management optimization.

Huang explained that the Llama models could be “better fine-tuned for enterprise use,” so Nvidia used its expertise to create the Llama Nemotron suite of open models. There are currently three models: Nano is small and low latency with fast response times for PCs and edge devices, Super is balanced for accuracy and computer efficiency, and Ultra is the highest-accuracy model for data center-scale applications.

Final thoughts

If it’s not clear by now, the AI era has arrived. Many industry watchers believe AI is currently overhyped, but I think the opposite. AI will eventually be embedded into every application, device and system we use. The internet has changed how we work, live and learn, and AI will have the same impact. Huang did an excellent job of explaining the relevance of AI to all of us today and what an AI-infused world will look like. It was a great way to kick off CES 2025.

As agents become connected, the value of every connected application will rise – provided vendors can work together to let their AI agents work together.

Last week, I attended AWS re:Invent 2024 with 60,000 of my closest friends. We were there to catch up on the latest and greatest in the cloud, particularly AI. One of the more interesting sessions was on the topic of the “Internet of Agents” by Vijoy Pandey, SVP/GM of Outshift by Cisco. For those unfamiliar with Outshift, it’s an internal incubator at Cisco focused on emerging technology in agentic AI, quantum networking, and next-gen infrastructure. As a separate group, Outshift can move at the speed of a startup while retaining access to the resources of a large company like Cisco. The concept of the “Internet of agents” is a simple one. The AI-based agents found in applications can communicate with each other bidirectionally. Pandey’s definition was “an open, internet scale platform for quantum-safe agent-agent and agent-human communication.” (More on why the term “quantum-safe” was included is at the end of this piece.)

Agent Sprawl

One might wonder what problem the AI-based agents are trying to solve here; it has not manifested itself yet. Yet I believe generative AI is one of those “game-changing” technologies that will alter almost every aspect of our lives. I predict, over time, that every application we use will have a generative AI interface built into it, like how every app has a search box today. Over time, these agents will go from reactive, where we ask them questions, to proactive, where specific agents push us the contextually important information we need to know. Consider the implications of this. Today, most workers use several applications – anywhere from half a dozen to over 50. As these apps evolve and add agents, we will face “agent sprawl,” where users will have as many virtual agents as they have apps. At re:Invent, I attended a session that had the CIO of a major bank participating in it, and he brought up how they’re building virtual assistants for their own apps but also are using Teams Copilot and Salesforce’s agent. Post-session, I asked him what he thinks the future looks like, and he told me he foresees a day when users have a “tapestry” of agents they need to pick and choose from. I followed up my question by asking him what he thinks working in that kind of environment would be like, and he said, “likely chaos.”

Fragmented Knowledge

The numerous agents cause several problems. The first is that the agent or assistant is only as knowledgeable as the data in the application. This can create fragmented insights. As an example, consider the case where a company has a great website that does a best-in-class job of showcasing a poorly built product. The web analytics and sales tools that are used before purchase might show high customer satisfaction scores as they measure pre-purchase satisfaction. Once the customer uses the product, the mood will turn from happy to upset, and the contact center will field calls regarding refunds and repairs. Using the generative AI interface to understand customer sentiment will yield different results. Also, as the agents shift from reactive to proactive, users will be bombarded with messages from these systems as they look to keep you updated and informed. I expect the apps to have controls, much like they do today, so we can control the interactions with the apps, but most users will keep critical apps on. It would be like a CEO having a team of advisors across every business unit in a company, all whispering in his ear at once.

Interconnecting Agents

This is where the Internet of agents brings value. By interconnecting, these assistants can share information, leading to less but more relevant information. In the scenario outlined above, a product owner or sales leader could be alerted when customer sentiment changes, as these pre-purchase agents communicate with call center agents to provide a holistic picture. This would enable the company to better understand what happened and take corrective action. Also, this will enable users to work in the applications they prefer but still access information from others. Today, a sales leader can pull data from CRM, contact center tools, sales automation applications, and other systems. The data must be brought together manually and likely correlated by people to find the insights. With the Internet of Agents, AI could perform analytics across multiple systems. The value can be described using Metcalf’s Law, which states that the value of any network is proportional to the square of the number of connected nodes. A network of two nodes has a value of four, whereas a network with 16 nodes has a value of 256, etc. As agents become connected, the value of every connected application will rise. To accomplish this, the vendors will need to agree to a set of standards and follow them – this is something Pandey and the team at Outshift are working on. This is where I hope the application providers learn from the sins of the past, as many of them have historically preferred walled gardens. One example is the UC messaging industry. Slack, Teams, Webex, Zoom, etc., all operate in silos, so a worker can’t send a message in Slack to a Teams user. Imagine how useless text messaging would be if one could only send messages to phones of the same manufacturer. The reality is that when systems are open and standards-based, it creates a rising tide, and everyone wins. A small piece of a big pie is worth far more than most of a small pie.

The Agents Are Coming

One final point: Pandey’s definition did include the term “quantum-safe.” I asked why that was included and was told that if one is building the next generation of secure connectivity, security should be future-proofed. Infusing quantum-safe protocols ensures that quantum nodes are added to the infrastructure and communications are secured even against “store now, decrypt later” attacks. This is consistent with conversations I’ve had with other security companies where their primary concern around quantum computing is bad actors stealing data today, then using quantum to decrypt it at a later date. To paraphrase Paul Revere, “The agents are coming, the agents are coming,” and I implore the vendor community to get together and standardize the communications between systems and to ensure they are secured. Adoption will be faster, users will be happier, and the value will be greater. Seems like a no-brainer to me.

‘Tis the season to ponder what to get that someone in your life who has everything. If you haven’t finished your Christmas shopping and have $249 to spend for a piece of technology about four inches wide, three-and-a-half inches high, and 1.3 inches thick, then Nvidia Corp. has the perfect gift.

The tech giant introduced the Jetson Orin Nano Super Developer Kit this week. Though that’s a big name for such a small product, don’t be fooled. The latest innovation from Nvidia packs a big wallop in its little package.

The Jetson Orin Nano Super Developer Kit is a small but mighty artificial intelligence computer that the company says “redefines AI for small edge devices.” And by mighty, the new product delivers up to 67 tera operations per second of AI performance. That’s a 1.7-times increase over its predecessor, the Jetson Orin Nano.

But if you already bought the original model, which sold for $499 and debuted just 18 months ago, don’t worry. The team at Nvidia isn’t pulling a Grinch move. A free software upgrade for all original Jetson Orin owners turns those devices into the new Super version.

What’s in the box?

The developer kit comprises an 8-gigabyte Jetson Orin Nano module and a reference carrier that accommodates all Orin Nano and Nvidia Orin NX modules. The company says this kit is “the ideal platform for prototyping your next-gen edge-AI product.”

The 8GB module boasts an Ampere architecture graphics processing unit and a six-core Arm central processing unit, which enables multiple concurrent AI application pipelines. The platform runs Nvidia AI software stack and includes application frameworks for multiple use cases, including robotics, vision AI and sensor processing.

Built for agentic AI

Deepu Talla, Nvidia’s vice president and general manager of robotics and edge computing, briefed industry analysts before the Dec. 17 announcement. He called the new Jetson Orin Nano Super Developer Kit “the most affordable and powerful supercomputer we build.” Talla said the past two years saw generative AI “take the world by storm.” Now, he said, we’re witnessing the birth of agentic AI.

“With agentic AI, most agents are in the digital world. And the same technology now can be applied to the physical world, and that’s what robotics is about,” he said. “We’re taking the Orin Nano Developer Kit and putting a cape on it to make it a superhero.”

And what superpowers will the Jetson Orin Nano Super Developer Kit have? In addition to increasing performance from 40 to 67 TOPS, the new kit will have much more memory bandwidth — from 68 to 102 gigabytes per second, a 70% increase.

“This is the moment we’ve been waiting for,” said Talla. Nvidia is increasing performance significantly on the same hardware platform by supercharging the software. “We designed [the original Orin Nano system] to be field upgradeable. As generative AI became popular and we did all the different testing, we can support all the old systems in the field without changing the hardware, just through software updates.”

On the call, Talla mentioned that the total available market for robots, also known as physical AI, is about half the world’s gross domestic product, or about $50 trillion. Is it that big? It’s hard to quantify, but I do believe the opportunity is massive. Robots represent the next frontier in agentic AI because they combine a physical form factor with advanced decision-making capabilities, bridging the gap between virtual intelligence and the real world.

Unlike purely virtual AI systems, robots can interact with their environment, perform tasks and adapt to dynamic situations, making them critical for solving complex real-world problems. Their ability to act autonomously while continuously learning from their surroundings allows them to tackle challenges that are difficult for traditional software and sometimes people.

In fields such as healthcare, logistics, retail and manufacturing, robots are already demonstrating their potential by automating repetitive tasks, improving precision and enhancing efficiency. As advancements in machine learning, computer vision, and natural language processing continue, robots will become more capable of understanding and responding to human needs with nuance. They can assist the elderly, manage warehouses or even conduct surgeries accurately and consistently, surpassing human capabilities.

Additionally, as robots gain greater autonomy, they will increasingly function as agentic AI — intelligent agents capable of making decisions, setting goals and pursuing actions without constant human oversight. This shift will unlock new possibilities in sectors such as exploration, disaster response and personal assistance, transforming robots into valuable partners for human endeavors. The convergence of AI, robotics, and automation is poised to redefine industries and everyday life.

One of the biggest challenges and expenses with robots is to train them. Creating all the possible scenarios to test a physical robot can take years. For example, teaching a robot to walk requires stairs, gravel roads, side hills and other scenarios. They can fall, get damaged, overheat or experience other events slowing training. Nvidia takes a “full stack” approach to physical AI, where training can be done virtually using synthetic data. When the training is complete, upload the information so the robot can do the tasks.

Planned rejuvenation

Many products that hit the market have been designed with planned obsolescence in mind, whether by design or just due to rapidly evolving technologies and components. Nvidia is doing the opposite. Call it “planned rejuvenation.”

Talla said this is possible because Nvidia designed the Jetson architecture to support faster performance. “We are increasing the frequency of the memory,” he said. “We are increasing the frequency of the GPU. In fact, we are also slightly increasing the frequency of the CPUs. And the power consumption will go up to 25 watts. But the hardware has been designed to support that already.” Jetson runs Nvidia AI software, including Nvidia Isaac for robotics, Nvidia Metropolis for vision AI, and Nvidia Holoscan for sensor processing.

These preconfigured kits are also part of why Nvidia has become the runaway leader in AI. Packaging up all the hardware and software required for a developer to get started significantly reduces development time. Nvidia’s peers offer many of the same building blocks, but the developer must put them together.

The new Jetson Orin Nano Super Developer Kit and software upgrades for owners of the original Jetson Orin Nano Developer Kit are available at nvidia.com.

This partnership dispels the myth that the only companies using Amazon Connect are newer brands looking to shake up the status quo.

No industry has more variables impacting customer service than airlines. Weather issues, mechanical problems, staffing, and other factors can delay or cancel flights. Bags can also get lost, prices can change, and planes can be oversold. This high level of unpredictability creates tension and customer service issues. Additionally, outdated systems, disconnected workflows, and limited self-service options have contributed to long wait times, customer frustration, and high operational costs. Recognizing these challenges, Air Canada, the country’s largest airline, has been deploying new technologies to improve customer service across its passenger and cargo divisions. Using the Amazon Connect cloud-based contact center platform, Air Canada has addressed existing customer service gaps to ensure that the airline’s systems meet practical, real-world demands. Amazon Web Services (AWS) recently hosted a session with Air Canada at re:Invent 2024, where Sebastian Cosgrove, director of global customer service at Air Canada, shared how Amazon Connect is helping the airline innovate customer service across its business. The most common issue people experience with contact centers is a lack of continuity, especially when moving from self-service to speaking with an agent. For example, when a person is transferred from self-service to a live agent, details about their issue are not always shared, so they’re forced to repeat themselves. Moreover, supervisors and managers often have limited data access, which hinders their ability to improve service quality. Anything less than stellar service will escalate the stress level of an already anxious passenger, so those disconnected moments only make situations worse. One of the biggest factors holding companies back from modernizing is cost, and Sheila Smith, the principal Amazon Connect specialist for AWS, spoke about this on the panel. She stated, “The single biggest cost in the contact center is not the cost of technology or network services; it’s the cost of agent resources. If we can find a way to increase containment through self-service experiences and deflect those (self-service) interactions from even coming into the contact center, that will drive a huge cost out of the business. It’s going to impact customer service as well positively.” Staffing remains the lion’s share of the costs of running a contact center. According to AWS data, 75 percent of contact center costs is agent staffing, which drives organizations toward more economical options like self-service when possible. Most people (70 percent) prefer self-service over speaking to agents. However, when an issue needs to be escalated to an agent, poor handoffs typically lead to frustration. AI has brought in a high level of fear of agent job replacement, but the reality is that companies like Air Canada can’t modernize until they cut people’s costs. The shift from people-heavy processes to digital methods aligns with current market trends. Gen AI enhances self-service across communication channels by guiding agents through complex issues and workflows. By simplifying workflows and reducing the need to switch between systems, Amazon Connect allows agents to pay more attention to complex customer needs. In 2022, Air Canada launched a comprehensive modernization initiative, beginning with its Aeroplan loyalty program and extending to reservations and specialty services, enabled by its transition to Amazon Connect. This move integrated self-service capabilities into a unified customer experience, introducing a new interactive voice response (IVR) system with automated call recording, transcription, and sentiment analysis. Agents gained real-time transcription tools and the ability to flag issues easily. For instance, Air Canada revamped the IVR system in its passenger division to improve the customer experience. When customers call, the system uses loyalty numbers to identify them. It then provides booking details and offers self-service options for easy tasks like selecting seats or checking baggage. For more complex issues, the system gathers context before connecting the customer to an agent, lowering the average handle time. These changes have led to a 15 percent drop in call volumes, an 8 percent decrease in abandonment rates, and $4.7 million CAD in savings by reducing full-time positions. The updated IVR system has also handled 78 percent of informational queries without requiring agent involvement. According to Cosgrove, the shift made a noticeable difference in Air Canada’s customer service efficiency, allowing agents to focus on higher-value tasks. “In November 2022, we were shy of 30,000 phone calls. Now, we’re consistently handling over 140,000 phone calls within the IVR. That’s the value Amazon Connect brought us and the benefits of self-service embedded in the IVR. We’ve started to realize these benefits,” said Cosgrove. The airline’s cargo division faced similar challenges but required a slightly different approach. Air Canada replaced manual, email-based processes with Salesforce Service Cloud and integrated them with Amazon Connect. This created a centralized system that manages workflows automatically, giving agents a unified view of customer details and case history. The airline later added telephony with Service Cloud Voice, reducing average handle times by two minutes per phone call and increasing agent productivity by over 20 percent. “Part of the reason we had such a huge gain was because before our integration with Service Cloud Voice, we had generic statuses such as away, ready, and training. Now we could mine the data to understand what the agents were doing and how we get them ready to always help our customers,” said Cosgrove. Integrating Service Cloud has also significantly improved Air Canada’s quality assurance (QA) process. Previously paper-based, QA is fully automated, with results displayed on dashboards. This transformation has saved the airline 89 hours per month that were spent on manual QA processing in the past. Air Canada is now focusing on the next phase of its customer service transformation. To help customers before issues are escalated to agents, virtual assistants powered by gen AI are being developed. These assistants will handle common queries and help streamline workflows. Guided workflows and issue summaries for live human agents are also in development, with feedback loops designed to refine the system based on real-world use. Additionally, the airline plans to introduce AI-supported chat capabilities, further enhancing its omnichannel approach to customer interactions. These advancements aim to make the customer journey more seamless while equipping agents with the necessary tools. Air Canada is taking a careful and phased approach to modernization, which allows it to adjust without disrupting operations. “When you’re talking about people, legacy systems, and multi-branch products, it takes time to make these changes,” said Cosgrove. “Anyone can quickly pull something out of the box, but if you want meaningful results, you must do it right first.” While the case study at re:Invent was specific to the airline industry, the lessons learned can be applied to other verticals. By automating mundane, repetitive tasks, agents can spend more time and deliver more value on complicated tasks or with higher-value customers. This enables the brand to handle more interactions with fewer people—better service, lower costs, and happier customers—a win-win-win. For AWS, showcasing a brand like Air Canada highlights how far the business unit has come. In the past, re:Invent’s Amazon Connect customers were brands like CapitalOne and Rocket Mortgage – the brands working to disrupt their industries. Air Canada is a blue-chip company looking to serve its customers better. At re:Invent, I met with banks, retailers, and other companies one would not put in the “disruptive” category. This dispels the myth that the only companies using Amazon Connect are newer brands looking to shake up the status quo. Since the product launched, I’ve positioned Connect as a dark horse to watch, as I believe, given its part of the broader Amazon and has access to AI, cloud features, and more, it was on track to be a market leader. Last year, it moved into the Leaders quadrant on the Gartner MQ; this year, it rolled a bevy of mainstream brands. Amazon Connect is a dark horse no more.

The National Hockey League and Amazon Web Services Inc. are working together to change how hockey is experienced, leveraging cloud technologies and data-driven insights to enhance production workflows and fan engagement.

At AWS re:Invent last week, representatives from both organizations joined a panel titled “NHL Unlocked: Live cloud production, sports data, and alternate feeds.” The panelists were Julie Souza, global head of sports for AWS; Grant Nodine, senior vice president of technology for the NHL; Brant Berglund, senior director of coaching and GM applications for the NHL; and Andrew Reich, senior industry specialist, BD, for AWS. They discussed their progress across a range of issues.

Souza opened the discussion by emphasizing the importance of collaboration in the partnership. “AWS isn’t just a tech vendor,” she said. “We’re working alongside the NHL to explore what’s possible and deliver real value to fans and the league.”

Souza said their shared commitment to innovation has been central to their progress, including advancements in live cloud production and analytics-driven storytelling.

This sentiment of “partner versus vendor” has been a consistent theme in my discussions with other sports entities. The PGA TOUR, Swimming Australia, the NFL and others have told me the AWS team actively gets involved in helping them consider what’s possible and bringing new ideas to the table.

Building a foundation for innovation

Nodine traced the journey to the league’s initial efforts to transition its video content to the cloud. This foundational step enabled automating processes such as encoding and scheduling, which are now critical to their operations. “You can’t do the exciting stuff,” Nodine noted, “until you’ve built the basics.”

Reich elaborated on the architecture supporting this transformation. Using AWS Elemental MediaConnect, the NHL created a streamlined pipeline for video ingest, storage and distribution. This setup makes nightly game broadcasts efficient and positions the league to experiment with new forms of content delivery.

Making sense of the data

The NHL’s adoption of player- and puck-tracking systems has unlocked unprecedented insights into the game. These systems collect billions of data points during a single night of games.

Berglund emphasized how this data helps deepen understanding for fans. “It’s not just about collecting stats,” he said. “It’s about turning that data into meaningful stories.”

One example is Ice Tilt, a period of play in which one team dominates possession and offensive pressure, pinning its opponents in the defensive zone and generating sustained momentum.

Berglund said he once asked player Jack Hughes how he recognizes momentum shifts during games. Hughes described it as “tilting the ice.” This once informal concept is now quantified and aligns with the NHL’s use of player positioning data to measure territorial momentum, turning the metaphor into a precise, trackable metric.

Reaching new audiences

Alternate broadcasts, such as the NHL Edge DataCast, showcase the league’s ability to tailor content to different audiences. The Big City Greens Classic, which adapted NHL games for a younger, animation-loving demographic, demonstrated the potential for these efforts. Souza noted that these initiatives are helping the NHL reach audiences who might not traditionally watch hockey. “By meeting fans where they are, we can make the game accessible to more people in ways that resonate with them,” Nodine added.

The league also creatively uses analytics, such as face-off probability, which calculates matchups and success rates in real time. This feature not only enriches broadcasts but also enables commentators to explore the nuances of gameplay more deeply.

A shift to live cloud production

In March 2023, the NHL reached a major milestone: It became the first North American league to produce and distribute a game entirely in the cloud.

Nodine recounted how this effort involved routing all camera feeds to the cloud for production by remote teams. The approach also promoted sustainable production practices by significantly reducing carbon emissions. “We’re not turning back,” Nodine said, citing both operational flexibility and environmental benefits as reasons to continue this path.

Reich highlighted how cloud-based workflows enable the league to experiment in ways traditional setups cannot. For example, by centralizing video feeds in the cloud, the NHL can produce alternate broadcasts or deliver content directly to fans in the arena through mobile devices.

The NHL deserves significant credit for using the cloud to produce games, as it was the first league to make the shift. For sports entities, production quality is obviously critical, as that’s how most fans engage with the brand. Before the NHL, most sports organizations were skeptical that the cloud could deliver comparable quality to producing the game on-premises. The NHL’s success leads to other leagues, such as the PGA TOUR and NFL, producing events in the cloud, but the NHL has the distinction of being first.

What’s next

As the NHL and AWS reflect on their progress, they are also exploring what’s next. Nodine pointed to opportunities in using artificial intelligence to streamline highlight generation and provide real-time insights for broadcasters. By automating some workflows, broadcasters could focus on storytelling, while fans could gain deeper insights into the game’s dynamics.

Alternate broadcasts remain a fertile ground for experimentation. Projects such as Big City Greens and NHL Edge DataCast have shown how targeted content can reach new audiences, and the technology behind these initiatives could inform traditional broadcasts in the future. For example, integrating metrics such as time on ice or Ice Tilt directly into standard broadcasts could provide fans with richer narratives without disrupting the viewing experience.

Souza summarized the approach as follows: “This is about thoughtful progress — identifying what works, refining it and integrating it in ways that enhance the game for everyone.” As the partnership evolves, the focus remains on making hockey more engaging, accessible, and dynamic for a global audience.

Some final thoughts

If you love hockey like I do (I’m Canadian, so I’m mandated to love hockey), you support any efforts to improve the fan experience. What I like most about the collaboration between the NHL and AWS is it helps casual fans better understand the game. It’s been said that AI lets the untrained eye see what the trained eye does, and features that highlight specific nuances can accelerate the learning of a game that can be confusing to non-hard-core fans.

Now, if only the Canucks can hold on until Stanley Cup playoff time.

Networking and complexity go hand in hand, like chocolate and peanut butter. Though this has been the norm, it’s playing havoc with business operations.

A recent ZK Research/Cube Research study found that 93% of organizations state the network is more critical to business operations than two years ago. In the same period, 80% said the network was more complex. Increasing complexity leads to blind spots, unplanned downtime, security breaches and other issues that affect businesses.

Extreme Networks Inc. today announced its Extreme Platform ONE connectivity platform to combat this. The back-end data lake combines data from networks, security tools and third parties such as Intel Corp., Microsoft Security and ServiceNow Inc. The platform is built on an artificial intelligence core to deliver conversational AI and autonomous networking. The goal is to automate wholly or at least partially many of the complex tasks associated with operating and securing a network.

The platform is flexible enough to serve multiple audiences. It includes a composable workspace that enables cross-team workflows. Although network engineers will most likely work with Extreme, the company has added security functionality and capabilities for that audience. Extreme also offers workflows, services and data for procurement and financing teams.

The latter audience often needs to be reminded about network infrastructure. As a former information technology executive, I am all too familiar with the pains of managing subscriptions, service contracts and licenses. This is often done on spreadsheets, which is time-consuming and error-prone and can frequently lead to overspending.

Extreme has built a dashboard that shows all relevant financial information, including contracts and renewal dates. This can help the customer better understand current and future trends and plan for upgrades.

For the network practitioner, the AI capabilities are targeted at troubleshooting complicated problems, which networks are filled with. Wi-Fi problems are the hardest to solve as there are so many variables. With a wired network, virtual local-area networks, duplex mismatches and other settings can often cause unexpected performance issues.

Finding these can take days, weeks, or even months, as replicating them can be challenging. AI sees all data across the network and can connect the dots that people can’t.

There is also an AI Policy Assistant that administrators can use to create, view, update and remove application policies. Policy administration is necessary but time-consuming and error-prone. Setting up policies initially is straightforward but keeping them up to date as people and devices move around the network or as applications change can be difficult, particularly in dynamic environments, which most companies are today because of the internet of things, cloud and work-from-home.

The rollout of Extreme Platform ONE is the culmination of many acquisitions and years of work. Today’s Extreme is a rollup of many network vendors, including Enterasys, Brocade, Avaya Networking and Motorola/Zebra. The purchase of Aerohive brought the company the cloud back end that is being leveraged in the current platform launch. Along the way, the company rationalized its product set and implemented “Universal Hardware,” which lets customers choose between different operating systems.

Extreme Platform ONE is well-timed with the current AI wave. The concept of the network platform has been bandied about for years but has yet to catch on.

Last week, I talked to Extreme Chief Technology Officer Nabil Bukhari (pictured), about the platform and why now. He told me this is the direction the company has been moving in since he took the role in 2020. AI makes a platform’s value proposition compelling today, as it requires a single set of data to deliver the best insights.

Companies that run one vendor for the WAN, another for Wi-Fi, and another for the campus network will have three sets of data, likely siloed, three AI engines leading to fragmented insights. For most companies, AI for operations is the way forward, and that will push more companies toward a platform approach.

Other vendors have followed the platform path. What I like about Extreme’s approach is that it uses AI as more than a troubleshooting tool. Though that’s a core function of the platform, it addresses issues at every step of the network lifecycle: planning, deployment, operations, optimization, security and renewals.

It as taken Extreme years to combine multiple products and unify the data set, but that’s done, and customers should see the benefits with the new Platform ONE.

digital concept art in gold