Featured
Reports

Nathan Howe, VP of Global Innovation at Zscaler, talks mobile security

March 2025 // Zeus Kerravala from ZK Research interviews Nathan Howe, VP of Global Innovation at Zscaler, about their new […]

Supporting AI Workloads is a Top Challenge for Organizations

Networking Decision-Makers Face Increasing Network Complexity

June 2024 // In the age of artificial intelligence (AI), networks have become increasingly important — as a way to […]


Verizon Mobile Partners with Microsoft So Teams Can Energize the Mobile Workforce

December 2023 // For years, mobile employees have constituted a significant portion of the workforce. Since the start of the […]


Check out
OUR NEWEST VIDEOS

2025 ZKast #41 with Elaine Chiasson from Amazon about the PGA TOUR relationship


2025 ZKast #40 with Philippe Dore, CMO of the BNP Paribas Open


2025 ZKast #39 with Sarbjeet Johal at NVIDIA GTC 2025


Recent
ZK Research Blog

News

Last week all eyes were on Nvidia Corp.’s GTC, also known as the artificial intelligence show. But another event was taking place across the country: Enterprise Connect, the communications industry’s largest event, in Orlando, Florida.

AI was the theme at EC, as the technology is changing the way we interact with each other and with customers, and that’s where Zoom Communications Inc. unloaded a salvo of 45 agentic AI skills and agent enhancements for its Zoom AI Companion.

Zoom’s goal is to elevate its AI Companion offering across the Zoom platform by leveraging AI agentic skills, agents and models to deliver high-quality results, help users improve productivity, and strengthen relationships. Zoom AI Companion helps users get more done by executing routine and sometimes complex tasks. Customers can also use AI Companion’s task action and orchestration to execute and complete end-to-end processes.

Generative AI gave rise to a wave of “co-pilots” that would assist workers with their jobs. Agentic AI creates “co-workers” that can complete entire tasks on behalf of a person. Long-term, workers will manage a series of agentic AI “workers” that can pass tasks to each other until completion. A good example is a mortgage process, which is filled with discrete but routine tasks. This would be ideal for a series of agentic AI agents, assuming there is interoperability between them.
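The hand-off pattern described above can be sketched as a simple pipeline. The agent names and steps below are hypothetical, illustrating the mortgage example, not Zoom's actual implementation:

```python
# A minimal sketch of agentic hand-off: each "co-worker" agent completes
# its task, then passes the result to the next agent until the end-to-end
# process is done. Agent names and steps are hypothetical.

def intake_agent(application):
    # Verify the applicant's documents are complete.
    application["documents_verified"] = True
    return application

def credit_agent(application):
    # Run a simple credit check (threshold is illustrative).
    application["credit_approved"] = application["credit_score"] >= 650
    return application

def underwriting_agent(application):
    # Final decision depends on the upstream agents' outputs.
    application["approved"] = (
        application["documents_verified"] and application["credit_approved"]
    )
    return application

PIPELINE = [intake_agent, credit_agent, underwriting_agent]

def run_pipeline(application):
    for agent in PIPELINE:
        application = agent(application)  # hand off to the next agent
    return application

result = run_pipeline({"applicant": "Jane", "credit_score": 720})
print(result["approved"])  # True
```

The interoperability caveat in the text maps to the hand-off boundary here: each agent must agree on the shape of the data it receives and returns.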

Summarizing the announcements:

Leveraging AI to focus on human interactions

Since the pandemic, work has become very transactional, and Zoom’s goal is to try and restore many of the human connections we had in the office. I’ve talked with Zoom executives about this, and the company believes AI is key to creating more human connections. I’ve described this as using AI to create digital proximity, even when we are all physically distant.

A good way to think about the role of AI is to consider how many executives have an assistant with a book of information on people they have met. Prior to a meeting, the executive will get an update on the last discussion, key talking points and so on. Not all of us have the benefit of a personal assistant, but we can all have an agentic AI agent enabling us to bring more humanity to an increasingly digital world.

AI Companion for specialized skills and agents

The company said AI Companion will augment specialized agents that power Zoom Business Services, including:

  • Customer self-service: Zoom Virtual Agent leverages memory and reasoning skills to deliver empathetic and contextual conversations and task action to resolve complex issues from start to finish.
  • Virtual agents: AI Studio will enable users to create and deploy customizable virtual agents.
  • Expanded agentic skills platform-wide that leverage reasoning and memory to act and orchestrate task execution, conversational self-service, and agent creation.
  • In the future, customers will be able to leverage Zoom’s open platform to interact with third-party agents, including from ServiceNow, and create their own custom agents.
  • Revenue growth: Zoom Revenue Accelerator users will be able to benefit from a specialized agent for sales to help increase revenue through automated insights, personalized outreach, and enhanced prospecting.

The last bullet I find most intriguing as the return on investment should be relatively easy to measure. Early in my career, then Cisco Systems Inc. Chief Executive John Chambers talked about how to drive adoption of new technology, and he mentioned you need to “follow the money.” He explained how companies spend massive amounts of money on improving sales.

If a new technology can have even a small impact on sales, it will become a no-brainer. Zoom Revenue Accelerator can give the company some quick wins to highlight the value of agentic AI.

Custom AI Companion add-on

Organizations will be able to use the Custom AI Companion add-on (expected in April) to:

  • Create custom meeting templates and dictionaries with unique vocabularies to meet business needs.
  • Use AI Studio to expand AI Companion’s knowledge and skills to help drive decisions and actions and complete tasks.
  • Access a digital personal AI coach (expected in June) and custom meeting summary templates to meet the needs of industry verticals or use cases, including one-on-one meetings, customer intake, or brainstorming meetings.
  • Use custom Avatars for Zoom Clips to help scale video clip creation and avoid multiple takes by using a personalized AI-generated avatar to create clips with a user-provided script.

As part of its federated approach to AI, the Custom AI Companion add-on will incorporate small language models alongside Zoom’s third-party LLMs. Zoom has trained its SLMs with extensive multilingual data optimized for specific tasks to perform complex actions, which should help it facilitate multi-agent collaboration.

Improving efficiency for better results

Zoom Docs enables workers to create high-quality content more efficiently and will have enhanced AI Companion capabilities with advanced references and queries to help users create writing plans based on context, search internal and external information for references, and use that information to create a business document based on user instructions.

Users will also be able to prompt AI Companion to automatically create data tables to enhance the usability and organization of content (expected in July).

Zoom Drive (expected in May), a central repository for Zoom Docs and other meeting and productivity assets, will “make it easier to find and access assets across Zoom Workplace.”

New features are critical to long-term growth

Zoom is best known as a video meetings company, which stems from the success it had selling its core product when the entire world was working from home. We are now five years removed from the pandemic and Zoom has many “COVID contracts” that are up for renewal. The challenge for Zoom is that video meetings have become a standard feature across all communications platforms, with Teams having the lion’s share despite an inferior product. Microsoft’s ability to bundle Teams with Office has led to massive adoption.

Zoom has one thing that Microsoft and the rest of the communications field do not, and that’s high end-user pull-through, because people who use Zoom tend to love it. Typically, information technology pros make the decision for software like Zoom, but I’ve talked to many IT decision-makers who have brought Zoom in because of the demand from employees.

It must now leverage this “user love” to sell the Zoom platform, which is built on a single data set. Zoom’s AI capabilities can create unique experiences as it can pull together insights from employee and customer communications across calling, e-mail, chat, contact center, docs and more.

At its 18th annual GTC conference last week, Nvidia Corp. not surprisingly aimed to get audience members’ hearts pumping for what’s next — which obviously is the rapid evolution of artificial intelligence.

Nvidia co-founder and Chief Executive Jensen Huang once again played his traditional keynote speaker role. As usual, he was dressed in all black, including a leather jacket. On Tuesday, Huang held court without a script for more than two hours, introducing Nvidia’s upcoming products.

It’s all about AI

AI adoption is occurring rapidly across many industries. Businesses invest money and effort to show customers, partners and shareholders how innovative they are by leveraging AI. The success of this AI explosion depends on fast, reliable, innovative technologies. So, it makes sense that’s where Nvidia is focusing. Given the shockwaves created by January’s introduction of the DeepSeek-R1 LLM from the Chinese AI company of the same name, Huang eagerly shared all that Nvidia and its increasingly powerful, but expensive, graphics processing units, chip systems and AI-powered products can do.

Working in his customary rapid-fire presentation mode, Huang walked through a broad overview of industry trends and highlighted several recent and upcoming innovations from Nvidia. Here are some of the key announcements:

  • New chips for building and deploying AI models. The Blackwell Ultra family of chips is expected to ship later this year, and Vera Rubin, the company’s next-gen GPUs named for the astronomer whose observations provided key evidence for dark matter, are scheduled to ship next year. Huang said Nvidia’s follow-on chip architecture will be named after physicist Richard Feynman and is expected to ship in 2028. Nvidia is on a regular cadence of delivering the “next big thing” in GPUs, which is great for hyperscalers, but as the use of AI broadens to enterprises, it will be interesting to see if they can keep up with Nvidia. I’ve talked to many chief information officers who aren’t sure when to pull the trigger on AI projects going into production, as models and infrastructure keep evolving at a pace never seen in computing before. Go now and start reaping the rewards, or wait six months and perhaps get exponential benefits? It’s a tough call, but my advice is to go now, as waiting just puts companies further behind. However, as a former CIO, I get the concern of moving now and risking obsolescence in a year.
  • Nvidia Dynamo, which Huang called “essentially the operating system of an AI factory,” is AI inference software for serving reasoning models at large scale. Dynamo is fully open-source “insanely complicated” software built specifically for reasoning inference and accelerating across an entire data center. “The application is not enterprise IT; it’s agents. And the operating system is not something like VMware — it’s something like Dynamo. And this operating system is running on top of not a data center but on top of an AI factory.” Dynamo is a great example of Nvidia’s “full stack” approach to AI. Though the company makes great GPUs, so do other companies. What has set Nvidia apart is its focus on the rest of the stack, including software.
  • DGX Spark is touted as the world’s smallest AI supercomputer, and DGX Station, which he called “the computer of the age of AI,” will bring data-center-level performance to desktops for AI development. Both DGX computers will run on Blackwell chips. Reservations for DGX Spark systems opened on March 18. DGX Station is expected to be available from Nvidia manufacturing partners such as ASUS, BOXX, Dell, HP, Lambda and Supermicro later this year. It’s important to note that DGX Spark isn’t designed for gamers but for AI practitioners. Typically, this audience would use a DGX Station as a desktop, which can run at $100,000 or so. DGX Spark is being offered starting at $3,999, a great option for heavy AI workers.
  • On the robotics front, which is part of the physical AI wave that’s coming, Huang announced partnerships with Google DeepMind and Disney Research. The partners will work to “create a physics engine designed for very fine-grained rigid and soft robotic bodies, designed for being able to train tactile feedback and fine motor skills and actuator controls.” Huang stated that the engine must also be GPU-accelerated so virtual worlds can live in super real time and train these AI models incredibly fast. “And we need it to be integrated harmoniously into a framework that is used by these roboticists all over the world.” A Star Wars-like walking robot called Blue, which has two Nvidia computers inside, joined Huang onstage to provide a taste of what is to come. He also said the Nvidia Isaac GROOT N1 Humanoid Foundation Model is now open source. Robots in the workplace, or “co-bots,” are coming and will perform many of the dangerous or repetitive tasks people do today. From a technology perspective, many of these will be connected over 5G, leading to an excellent opportunity for mobile operators to leverage the AI wave. The societal impact of this will be interesting to watch. Much of the fear around AI is the technology being used to replace people. During his keynote, Huang predicted, “By the end of this decade, the world is going to be at least 50 million workers short,” which is counter to traditional thinking. Since robots can do many of the dangerous and menial jobs people do today, will we really be 50 million people short? Hard to tell, but robots will be ready to fill the gap if required.
  • Shifting gears to automotive, General Motors has partnered with Nvidia to build its future self-driving car fleet. “The time for autonomous vehicles has arrived, and we’re looking forward to building, with GM, AI in all three areas — AI for manufacturing so they can revolutionize the way they manufacture,” he said. “AI for enterprise, so they can revolutionize the way they design and simulate cars, and AI for in the car.” He also introduced Nvidia Halos, a chip-to-deployment AV Safety System. He said he believes Nvidia is the first company in the world to have every line of code — 7 million lines of code — safety-assessed. He added that the company’s “chip, system, our system software and our algorithms are safety-assessed by third parties that crawl through every line of code” to ensure it’s designed for “diversity, transparency and explainability.” At CES, the innovation around self-driving was everywhere. If one rolls back the clock about a decade, many industry watchers predicted we would have fully autonomous vehicles by now, but they are still few and far between. AI in cars has come a long way, and they are safer and smarter, but the barrier to full autonomy proved higher than many expected. Still, I believe we are right around the corner.
  • Quantum day was interesting but left big questions unanswered. The Thursday of GTC featured the first-ever quantum day, where Huang interacted with 18 executives from quantum companies over three panels. The event was certainly interesting as it introduced the audience to companies such as D-Wave, IonQ and Alice & Bob, but it did not answer the two questions on everyone’s mind: What are the use cases for quantum, and when will it arrive? During the session, Huang did announce Nvidia plans to open a quantum research facility, scheduled to open later in 2025. He also suggested that the 2026 quantum day would feature more use cases. When I’ve asked industry peers about quantum, I hear timelines anywhere from five years to 30 years. I believe it’s closer to five than 30, as once we see some use cases, that will “prime the pump,” and we should see a “rising tide,” much like we did with AI.

GTC 2025 is now in the rear-view mirror and while there was no “big bang” type of announcement, there was steady progress across the board to a world where AI is as common as the internet. This should be thought of as a GTC that lets companies digest how to use AI instead of trying to understand what the next big thing is. The breadth and depth of AI today shows it’s becoming democratized, which will lead to greater adoption — good for Nvidia but also the massive ecosystem of companies that now play in AI.

Artificial intelligence continues to be a focal point for companies in all areas of technology and communications as demand from enterprise customers continues to soar, but one of the underappreciated aspects of AI is that a network plays a critical role in the success of AI initiatives.

Though the network vendors haven’t received the same type of “AI bump” the capital markets have given the chip companies, they have been aggressive in evolving their products to meet the demands of AI.

Arista Networks Inc., the network vendor that has done the most effective job of tying its growth to AI, on Wednesday announced new capabilities for its EOS Smart AI Suite designed to improve AI cluster performance and efficiency.

The Santa Clara company introduced a feature called Arista Cluster Load Balancing, or CLB, in its Arista EOS Smart AI Suite to maximize AI workload performance with consistent, low-latency network flows. It also announced that its Arista CloudVision Universal Network Observability, or CV UNO, now offers AI observability for enhanced troubleshooting and issue inferencing to ensure reliable job completion at scale.

Cluster Load Balancing benefits

Based on RDMA queue pairs, Cluster Load Balancing enables high bandwidth utilization between spines and leaves. One characteristic of AI clusters is that they typically carry low quantities of large-bandwidth flows, unlike typical network traffic such as e-mail and internet browsing. Traditional network infrastructure was never designed for AI, so it lacks the necessary throughput for AI workloads.

That can lead to uneven traffic distribution and increased tail latency. CLB solves this issue with RDMA-aware flow placement to deliver uniform high performance for all flows while maintaining low tail latency. CLB optimizes bidirectional traffic flow — leaf-to-spine and spine-to-leaf — to provide enterprises with balanced utilization and consistent low latency.
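Why hash-based load balancing struggles with a handful of large flows can be shown with a toy simulation: hashing can pile several flows onto one uplink, while explicitly placing each flow on the least-loaded link keeps utilization even. This is an illustrative sketch, not Arista's actual CLB algorithm, and the queue-pair names are made up:

```python
# Toy comparison: hash-based ECMP vs. explicit flow placement for a
# small number of large flows (typical of AI clusters). Illustrative
# only -- not Arista's CLB algorithm; flow names are hypothetical.
from collections import Counter

UPLINKS = 4
FLOWS = ["qp-0", "qp-1", "qp-2", "qp-3", "qp-4", "qp-5", "qp-6", "qp-8"]

def ecmp_hash(flow_id):
    # A deterministic toy "hash" -- real switches hash packet headers.
    return sum(ord(c) for c in flow_id) % UPLINKS

hash_load = Counter(ecmp_hash(f) for f in FLOWS)

# Flow-aware placement: put each flow on the least-loaded uplink.
placed_load = Counter()
for f in FLOWS:
    link = min(range(UPLINKS), key=lambda u: placed_load[u])
    placed_load[link] += 1

print("hash-based max link load:", max(hash_load.values()))   # 3 flows collide
print("placement max link load:", max(placed_load.values()))  # 2 = even spread
```

With many small flows, hash collisions average out; with eight elephant flows, one overloaded link drags down the whole job, which is the tail-latency problem CLB targets.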

With CV UNO, Arista is enabling the network to directly impact AI performance at an application level. “With CLB we look at network performance, but we will also integrate application-level performance, VM performance, all into one screen so that network engineers can figure out where performance issues are and quickly find the root cause,” Praful Bhaidasna, head of observability products for Arista, told me.

Quantifying CLB benefits

I asked Brendan Gibbs, vice president of AI, routing and switching platforms for Arista, to quantify the benefits that CLB delivers. He said that though all organizations are different, the performance improvements are significant. “With clusters, a general rule of thumb is about 30% of time is spent in networking,” he said. “If we can provide an extra 8% or 10% of throughput on the links customers have already deployed, it means an Arista network is going to be higher-throughput, with a lower job completion time than the next nearest competitive platform.”

The performance boost is notable. With traditional networks, which use dynamic load balancing, or DLB, to optimize traffic, the best-performing networks operate at about 90% efficiency. I asked Gibbs about CLB versus DLB and he told me it can achieve 98.3% efficiency. Given the cost of GPUs, every information technology pro I’ve talked to about AI wants more network throughput to keep the processors busy, since inefficiency leads to dollars being wasted.
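Gibbs' rule of thumb can be turned into back-of-envelope arithmetic: if roughly 30% of a job's wall-clock time is networking, lifting link efficiency from 90% to 98.3% shortens only that slice. The scaling assumption (networking time inversely proportional to efficiency) is mine, not Arista's:

```python
# Back-of-envelope estimate of job-completion-time improvement.
# Assumes ~30% of job time is networking (Gibbs' rule of thumb) and
# that networking time scales inversely with link efficiency -- a
# simplifying assumption for illustration.
network_share = 0.30
dlb_efficiency = 0.90   # typical dynamic load balancing
clb_efficiency = 0.983  # Arista's stated CLB figure

# Only the networking portion shrinks, by the ratio of efficiencies.
new_job_time = (1 - network_share) + network_share * (dlb_efficiency / clb_efficiency)
print(f"relative job time: {new_job_time:.3f}")     # 0.975
print(f"overall speedup: {(1 - new_job_time):.1%}")  # ~2.5%
```

A ~2.5% reduction in job completion time sounds modest, but at GPU-cluster prices it keeps expensive processors busy rather than idle, which is exactly the waste the text describes.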

One of those customers is Oracle Corp., which is using Arista switches as it continues to grow its AI infrastructure. “We see a need for advanced load balancing techniques to help avoid flow contentions and increase throughput in ML networks,” Jag Brar, vice president and distinguished engineer for Oracle’s cloud infrastructure, said in Arista’s news release. “Arista’s Cluster Load Balancing feature helps do that.” I don’t normally pull quotes from press releases, but I did in this case as Oracle is usually tight-lipped about who its suppliers are. The fact it provided a quote is meaningful, as it’s out of the norm for Oracle.

AI job visibility

Arista said that CV UNO provides end-to-end AI job visibility by unifying network, system and AI job data within the Arista Network Data Lake, or NetDL. A real-time telemetry framework streams granular network data from Arista switches into NetDL, unlike traditional SNMP polling, which relies on periodic queries and can miss critical updates.

Although Arista makes great hardware, it’s the data that gives it operational and performance consistency across products. When Arista launched, each network device had its own network database, NetDB, but a few years ago the company evolved to a single data lake across its products, and NetDL was born.

EOS NetDL delivers low-latency, high-frequency, event-driven insights into network performance. This is a key element for providing connectivity in large-scale AI training and inferencing infrastructure.
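The difference between periodic polling and event-driven streaming can be sketched with a toy timeline: a 30-second poll misses a three-second congestion microburst entirely, while a change-driven streamer catches it. The interval, threshold, and data are made up for illustration; this is not the NetDL wire protocol:

```python
# Toy timeline: buffer utilization (%) sampled once per second. A
# 30-second SNMP-style poll misses a 3-second microburst; event-driven
# streaming, which emits on significant change, catches it.
# Illustrative only -- not the actual NetDL streaming protocol.

utilization = [10] * 60
utilization[41:44] = [95, 97, 96]  # microburst at t = 41..43 seconds

POLL_INTERVAL = 30
polled = [utilization[t] for t in range(0, 60, POLL_INTERVAL)]  # t = 0, 30

streamed = []
last = None
for t, value in enumerate(utilization):
    if last is None or abs(value - last) > 5:  # emit only on a big change
        streamed.append((t, value))
        last = value

print("polled samples:", polled)                              # [10, 10]
print("burst seen by streamer:", any(v > 90 for _, v in streamed))  # True
```

The streamer also emits far fewer records than per-second polling would, which is why event-driven telemetry scales to the microsecond-level granularity the AI use case demands.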

Benefits of EOS NetDL Streamer

  • AI job monitoring: A view of AI job health metrics, such as job completion times, congestion indicators and real-time insights from buffer/link utilization.
  • Deep-dive analysis: Provides job-specific insights by analyzing network devices, server NICs and related flows to pinpoint performance bottlenecks precisely.
  • Flow visualization: Uses the power of CV topology mapping to provide real-time, intuitive visibility into AI job flows at microsecond granularity to accelerate issue inference and resolution.
  • Proactive resolution: Finds anomalies quickly and correlates network and compute performance within NetDL to ensure uninterrupted, high-efficiency AI workload execution.

Availability

Arista said CLB is available today on its 7260X3, 7280R3, 7500R3, and 7800R3 platforms. It will be supported in Q2 2025 on the 7060X6 and 7060X5 platforms. Support for the 7800R4 platform is scheduled for the second half of this year.

CV UNO is available today, and the AI observability enhancements, currently in customer trials, are expected to be available in the second half of 2025.

After Bloomberg reported last week that Mitel Networks Corp. was preparing to file for Chapter 11, the telecommunications company today announced it has done so, entering into an agreement with its senior lenders to optimize its capital structure and recapitalize its debt.

In 2018, Searchlight Capital Partners took Mitel private and has been the majority shareholder since then. With this week’s financial transaction, Mitel has entered into an agreement with an Ad Hoc Group of its senior lenders, junior lenders and other stakeholders to put a new ownership structure in place, ending Searchlight’s ownership of Mitel.

To execute the restructuring, Mitel will employ a prepackaged plan and file voluntary petitions seeking debt relief under Chapter 11 of the Bankruptcy Code. Historically, Chapter 11 restructuring was a long, drawn-out process that could put a lengthy pause on company operations until the proceeding was complete.

Over the past few years, both Avaya and C1 (formerly ConvergeOne) have used this process to enter and exit Chapter 11 with virtually no interruption to their business operations. In Mitel’s case, there is no impact to customers, partners or employees, and the company expects to complete the process in 60 to 90 days.

Once the financial transaction is complete, Mitel’s debt will be reduced significantly. The company has received a commitment for $60 million of new-money debtor-in-possession or DIP financing from some of the lenders to support the business through the restructuring process. Once approved by the court, the DIP financing and Mitel’s existing working capital will fund the day-to-day operations during the Chapter 11 process. Mitel has also received a commitment of $64.5 million of new exit financing when the plan is consummated. The new debt will be used to support its continued go-forward operations.

Success post-restructuring depends on several factors. This is akin to an individual filing for bankruptcy. If the person has a high-paying job but is riddled with credit card debt, clearing the payment obligations sets the person up well for the future. If the person has no job and a high debt load, corrective action, such as going to college, is necessary before filing for bankruptcy to ensure success.

With the previously mentioned examples, Avaya went through Chapter 11 under Chief Executive Alan Masarek, but the company structurally still had its challenges. Since then, it replaced him as CEO and new head honcho, Patrick Dennis, seems to have a cogent strategy in place focusing on the Global 1500.

C1 was in a similar position to where Mitel is today and is a great example of how this process can work. In April 2024, the company went through a similar restructuring and cut about $1.4 billion in debt. The systems integrator had gone on a buying spree and acquired several smaller value-added resellers and system integrators to create the large company we know today. Despite the high cash flow, debt was holding it back and since it went through its Chapter 11 process, the company has been much stronger and able to serve its customers better.

Mitel’s business operations are currently strong and have seen a resurgence as its hybrid cloud strategy takes hold. The challenge has been servicing the debt, particularly at today’s high interest rates, which left the company little capital to work with. This process will result in Mitel’s balance sheet being deleveraged by approximately $1.15 billion and its annual cash interest payments being reduced by $135 million.
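The two figures above imply a rough average interest rate on the retired debt. This is my back-of-envelope arithmetic, not a figure from Mitel:

```python
# Rough implied interest rate on the retired debt: annual cash interest
# saved divided by the debt removed. Back-of-envelope only -- the actual
# blended rate across Mitel's tranches is not disclosed.
debt_reduced = 1.15e9    # ~$1.15B deleveraged from the balance sheet
interest_saved = 135e6   # ~$135M less annual cash interest

implied_rate = interest_saved / debt_reduced
print(f"implied average rate: {implied_rate:.1%}")  # ~11.7%
```

A rate near 12% illustrates why "servicing the debt at these high interest rates" left so little capital for the business.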

For Mitel, the timing of this is right. The company has a strong product roadmap in place. It has built a hybrid cloud portfolio and has partnerships with the likes of Zoom and Genesys for customers that want a SaaS-delivered offering. Though the trend for unified communications and contact center has been to move to the cloud, there is still strong demand for on-premises solutions.

And with Avaya bailing on the mid-market and focusing only on large customers, it leaves Mitel as the only provider of on-premises/hybrid cloud solutions for the midmarket, giving it a huge base of customers to go after. It’s worth noting that Mitel has evolved its portfolio to serve the needs of global companies, but given the lack of competition in the midmarket, that is the company’s low-hanging fruit.

Also, because of many global macro issues, such as security and outages, along with the potential of artificial intelligence, there has been a rebirth of interest in hybrid cloud solutions, validating the strategy the company put in place two years ago. If Mitel wants to create channel incentives, offer customers buybacks or pursue other strategies to go after these customers, it needed capital, and now it has access to more, as much of its debt has been eliminated.

I had a chance to talk with Mitel CEO Tarun Loomba about the restructuring. “This is something we’ve been working on for a while,” he said. “We knew we had to address our capital structure to set ourselves up for long-term success. This is a proactive step that allows us to invest in the business, continue innovating and support our customers’ and partners’ evolving needs for secure, reliable communications solutions without missing a beat.”

Loomba further shared that Mitel’s strategy is to lead the hybrid communications market by leveraging its significant customer base and incumbency advantage, attract new customers with innovative hybrid solutions and services, and, as this announcement demonstrates, strengthen its core business to drive profitable and predictable growth.

For Mitel customers, this should be viewed as good news. Companies that continue to use Mitel do so because they have embraced the private, hybrid cloud model for communications. With a bigger focus on AI, security and compliance, there may have been some question as to whether Mitel had the resources to invest in its platform to ensure customers were getting the latest AI capabilities with the necessary security and guardrails. This also gives Avaya customers that wish to stay on-premises or use a hybrid model a viable option to migrate to as they continue to shift all resources to the G1500.

There have been many rumors regarding Mitel’s future, and the restructuring indicates it’s here to stay for the foreseeable future. After the acquisition of Unify (formerly Siemens Enterprise), the company has about 20% share of UC seats globally, and with this financial reset, it can be more aggressive in growing its share.

These decisions are never easy, but for Mitel, this was the right decision to make. AI is acting as an accelerant to innovation and change in UC/CC, and being hampered by debt was only going to hold it back. This is the right move to take control of its future and come out stronger on the other side.

As MWC 2025, the premier telecommunications service provider show still known to many as Mobile World Congress, gets underway this week in Barcelona, Cisco Systems Inc. made an early splash.

The networking company issued several product announcements that offer deeper network insights using artificial intelligence and automation. From connected devices to connectivity assurance, the solutions are designed to help service providers deliver reliable AI-powered experiences on a larger scale.

Here’s a roundup of the key announcements:

Expanding network visibility

To keep up with growing network complexity, service providers need better tools for managing network performance rather than just focusing on speed. Cisco is rolling out ThousandEyes Connected Devices, an extension of its network observability platform that provides deeper visibility into the last mile, including home networks.

ThousandEyes Connected Devices gives service providers a more complete view of the network so they can spot and fix problems early on. It integrates device agents into home network equipment, mobile devices, and laptops. The agents monitor both speed and latency (quality of service) and application performance (quality of experience). Data is processed and analyzed in the ThousandEyes cloud, which allows service providers to control network performance, predict issues and stay in compliance with regulations.
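The QoS/QoE distinction above maps to two kinds of measurement an agent can take: network-level round-trip latency and application-level response time, and comparing them isolates where a problem lives. A minimal sketch follows; the thresholds and structure are hypothetical, not the ThousandEyes agent:

```python
# Minimal sketch of a device agent separating quality of service
# (network latency) from quality of experience (app response time).
# Thresholds and logic are hypothetical, not ThousandEyes' agent.
import time
import urllib.request

def measure_app_response(url, timeout=5.0):
    # One way to sample QoE: time a full application-level fetch.
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        resp.read()
    return (time.monotonic() - start) * 1000  # milliseconds

def classify(latency_ms, app_response_ms):
    qos_ok = latency_ms < 100        # is the network path healthy?
    qoe_ok = app_response_ms < 1000  # is the application responsive?
    if qos_ok and not qoe_ok:
        return "application problem"  # network fine, app slow
    if not qos_ok:
        return "network problem"
    return "healthy"

# Low network latency but a slow app points past the provider's network.
print(classify(latency_ms=20, app_response_ms=2500))  # application problem
```

This is the vantage-point shift in practice: measured from inside the home network, the agent can tell the provider whether the last mile or the application is at fault.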

Typically, service providers can only “see” up to the equipment they own, such as a cable modem or a network interface device. With this release, the device agent sits in the customer network and allows Cisco to shift the visibility vantage point to perform active monitoring on a variety of different devices.

In a prebriefing, Joe Vaccaro, vice president and general manager of ThousandEyes, told me the agent has a very small footprint, enabling it to be deployed over the air. In a short amount of time, service providers can have visibility into millions more devices.

Cisco is taking an open-source approach to ensure compatibility across different frameworks and reference design kits. In addition to service providers, ThousandEyes is working with OEMs to embed the device agent capability into their products out of the box.

Cisco purchased ThousandEyes in 2020 and for the first couple of years, I thought Cisco had greatly underleveraged what most industry watchers felt was the best internet monitoring tool. However, over the past couple of years, Cisco has started to embed ThousandEyes into a wide range of its own and third-party products.

Enhancing service provider assurance

Cisco is introducing a new feature in its Provider Connectivity Assurance or PCA suite, which builds upon the Agile Services Networking architecture unveiled at Cisco Live Amsterdam 2025. PCA works with ThousandEyes to provide deeper insights into service provider networks. While ThousandEyes monitors the end-user experience to isolate network issues, PCA focuses on service provider infrastructure, offering visibility into what’s happening inside the network.

PCA now has an AI-enabled set of capabilities that offers real-time analysis of data flows for Cisco routers. This helps service providers optimize network traffic to improve service quality without having to expand the infrastructure. It also lets them identify network congestion and performance issues at both the subscriber and cell tower levels.

For example, it can identify which subscribers might be experiencing poor quality and assess the capacity and status of cell towers. Service providers can use these insights to explore revenue opportunities.

There are obvious operational savings here, but the capability can also lead to new streams of revenue. As an example, if a mobile operator can see it has excess capacity on its towers, it could offer local fixed wireless access services for backhaul in that area, creating a new, monetizable service.

The hunt for new revenue streams has been a key topic of conversation at MWC for years but has remained somewhat of a myth. AI can help service providers better manage their environments, and that improved understanding can help them determine what services to offer and in which regions.

Strengthening mobile networks

Cisco is expanding its Mobility Services Platform by adding new features to help service providers better monetize their mobile infrastructure investments. One of the key updates is the platform’s 5G Advanced Services readiness, which means service providers can now launch new network services that support both low-latency and high-density applications.

“We continue to see good momentum, not only in our Americas market, but the international market as well,” said Masum Mir, senior vice president and general manager of provider mobility at Cisco. “We will see more customers adopting this platform approach and using it not only for their internet of things services but also for business services, and getting revenue from their mobile infrastructure faster.”

Additionally, Cisco launched a programmable core for deploying network functions globally or in specific regions. It can be managed through programmable APIs and implemented with existing voice, data, and messaging services. Cisco is also offering an API that simplifies how third-party developers integrate their services into Cisco’s platform.

Alongside this, Cisco introduced a new Mobility Services Platform ecosystem aimed at helping industry-specific companies integrate their solutions into service provider networks worldwide. Several companies have already joined the ecosystem, including Ubicquia, Upstream, Kajeet, Linker Vision, Youtello, OnRelay and Productive.ai.

Final thoughts

Cisco and service providers have been off and on as many times as J-Lo and Ben Affleck. However, there is a noticeable shift happening.

At MWC I caught up with the chief technology officer of a large European service provider, and he told me this is the most innovation he has seen from Cisco in the better part of a decade. He specifically called out the appointment of Jeetu Patel as chief product officer as a seminal moment in the revival of the SP business inside Cisco.

One of the most notable changes is that Cisco is now engaging telcos much earlier in the product development phase, which can help that audience keep up with modern technologies. The telco environment is much different from the enterprise, as the former tends to move more slowly because the capital investments are much larger. By bringing service providers new technology earlier, the gap between product releases and deployment can be closed.

AI is the “next big thing” for service providers, and they need to be ready when customers are or they risk losing that opportunity, as happened with cloud. Cisco understands enterprise demands much better than telcos do and can play a key role in helping them not just lower costs but monetize their networks in new and differentiated ways.

Patel spoke with theCUBE, SiliconANGLE’s livestreaming studio, today in Barcelona about the future of AI and automation in business:

Cisco Systems Inc. today expanded its partnership with Nvidia Corp. to help enterprises accelerate AI projects, making it easier to deploy a combined solution that modernizes their data centers in preparation for artificial intelligence.

The two companies have been partners for the better part of a decade. This five-year expansion is aimed at bringing flexibility to customers to meet the constantly changing demands that AI brings.

Cisco said it will work with Nvidia to create a cross-portfolio unified architecture to make building AI-ready data center networks easier. Cisco Silicon One will couple with Nvidia’s Spectrum-X Ethernet networking platform. This partnership is notable because Cisco’s Silicon One becomes the only third-party silicon included in Spectrum-X.

As part of the deal, Cisco will also build systems using Nvidia Spectrum silicon and Cisco operating system software. Customers will be able to standardize on Cisco networking and Nvidia technology in their data centers simultaneously. The collaboration will create new market opportunities for Cisco by unifying the architectural model between front- and back-end networks and simplifying the management of a range of enterprise and cloud provider networks.

In a pre-announcement briefing, Kevin Wollenweber, Cisco’s senior vice president and general manager of data center and provider connectivity, told me the expanded partnership has two main objectives.

“We’re going to take our silicon technologies, our orchestration software, and a lot of the security infrastructure that Cisco brings to this space, and we’re going to embed it into the Spectrum-X platform,” he said. “We can now take our networking technologies — Silicon One, our network operating systems and our orchestration — and have them be part of that Spectrum-X architecture. So when the customer deploys the Spectrum-X platform, they’ll have the benefit of being able to choose between Nvidia silicon if they want to leverage that for the back end and Cisco if they want to leverage that for back-end or for the front-end networks that are being built around the GPU interconnects.”

Wollenweber added that other Nvidia network innovations, including adaptive routing, will run across the Spectrum-X platform, including Cisco silicon.

Customer benefits

The extended collaboration will open new opportunities for Cisco by unifying front-end and back-end architectural models. This will make it easier for organizations to manage disparate enterprise and cloud provider networks.

“The Spectrum X platform is the reference architecture,” Wollenweber said. “It’s the GPU, the smart NIC, the switching and all of the features and functionality, like adaptive routing, that run across that. We expect customers to be able to deploy that reference architecture, fully vetted and validated by Nvidia, but with the Cisco components inside.”

On the operational side, he added, “if they want to use their Nexus dashboard to manage their clusters, or they want to use the same management that they have on the back end as they already deployed on the front end, they can get consistency with what they’re already deploying, allowing them to more easily pull these AI infrastructure stacks into their ecosystem.”

Joint solutions

Cisco said it will develop data center switches for the Nvidia Spectrum Ethernet platform. The open ecosystem will give customers greater choice and flexibility. Organizations can standardize on the Nvidia Spectrum-X platform with both companies’ switch silicon-based architectures.

This will combine technologies from Cisco and Nvidia into a single management fabric. Cisco also will work with Nvidia to create and validate Nvidia Cloud Partner and Enterprise Reference Architectures based on Nvidia Spectrum-X with Cisco Silicon One, Hyperfabric and several other Cisco technologies. The companies also plan future collaborations and joint development of high-performance Ethernet solutions to enable customers to scale and secure AI deployments.

Wollenweber said the overarching goal is to provide customers with what they want. “Commonality and consistency is something we’ve been asked for by a lot of new customers deploying tons of system networking today and bringing some of these AI networking stacks into their ecosystem,” he said. “They’re asking for commonality and consistency with the networking they’re already deploying but with all the bells and whistles, and adaptive routing and efficiency gains that Nvidia has been driving into that Spectrum-X platform.”

For Nvidia, the association with Cisco brings an added level of credibility. Cisco is one of the most trusted data center companies, nearly ubiquitously deployed across the Fortune 500. Most customers prefer Ethernet rather than standing up a parallel technology such as InfiniBand. Recently, ZK Research and theCUBE Research conducted an AI networking study and found that 59% of respondents prefer Ethernet, since it’s a tried-and-true technology.

Though Nvidia does support its own Ethernet in Spectrum-X, it doesn’t have the same track record for reliability and performance that Cisco solutions have. The two companies partnering brings validated solutions to market, which speeds up adoption as customers do not have to go through the tweaking and tuning of the components.

I discussed the solution with Neil Anderson, vice president of cloud, infrastructure and AI solutions for World Wide Technology Holding Co., a global systems integrator. Given the volume of business WWT does with both companies, I wanted to get his perspective.

“Combining Cisco’s deep experience and track record in networking with Nvidia’s advanced AI technology will only strengthen the outcome for our customers,” he told me. “By partnering together on AI clusters, customers get the best of both worlds in a simplified and trusted architecture they can depend on for the Enterprise Data Center. We see this as a huge win for customers.”

Availability

Cisco will enhance its Silicon One switches to be compatible with Spectrum-X and Nvidia’s reference architecture. Updated products are expected to be available in the middle of this year. The updates will include a range of existing and new products, including Cisco Nexus, Nexus Hyperfabric and Cisco’s UCS products. Cisco will announce the availability of new Spectrum switches at a later date.

Palo Alto Networks Inc. last week unveiled its newest cloud security offering, Cortex Cloud. The latest iteration of the company’s Prisma Cloud, it’s natively built on Palo Alto’s Cortex AI-enabled security operations platform.

In its announcement, Palo Alto described Cortex Cloud as combining Cortex’s “best-in-class cloud detection and response (CDR) with industry-leading cloud native application protection platform (CNAPP) from Prisma Cloud for real-time cloud security.”

Cloud attack surfaces are a favorite target of cyberattackers, reflecting the continuing growth of enterprise cloud adoption and artificial intelligence usage. Cortex Cloud brings together multiple sources of data, automates workflows, and applies AI to deliver insights that reduce risk and prevent threats. The company designed Cortex Cloud to ingest and analyze data from third-party tools, enabling it to operate across the cloud ecosystem.

In a briefing with analysts, Scott Simkin, Palo Alto’s vice president of marketing, said Cortex Cloud gives security teams greater insight into what’s happening within their infrastructure, enabling them to act quickly and decisively. “One of the primary things we wanted to make better with Cortex Cloud is time to value, ease the workflow, ease of onboarding, and ease of reporting and dashboarding,” he said.

Cortex Cloud also consistently delivers capabilities such as role-based access control (RBAC) in one place for all cloud modules. “Now they’ve got it for all cloud modules and the SOC together,” Simkin said.

Key features

Built on Cortex, Cortex Cloud is designed to prevent cloud threats in real time. It leverages runtime protection so customers can achieve protection at a lower total cost of ownership than buying point products. Cortex Cloud includes:

Application security: Organizations can build secure apps and prevent issues during development from becoming production vulnerabilities that attackers can exploit. Cortex Cloud identifies and prioritizes issues across the development pipeline, providing end-to-end context across code, runtime, cloud, and third-party scanners.

Cloud posture: Cortex Cloud builds on Prisma Cloud’s cloud posture capabilities, combining cloud security posture management (CSPM), cloud infrastructure entitlement management (CIEM), data security posture management (DSPM), AI security posture management (AI-SPM), compliance, and vulnerability management in one natively integrated platform.

Cloud runtime: Cortex Cloud natively integrates the unified Cortex XDR agent, including additional cloud data sources, to stop attacks in real time.

SOC: The transformation of SOC operations is a core tenet of Palo Alto’s platform value proposition. To enable this, Cortex Cloud works with Cortex XSIAM to extend detection and response capabilities from the enterprise to the cloud for comprehensive, AI-driven security operations. Cortex Cloud natively integrates cloud data, context, and workflows within Cortex XSIAM to significantly reduce the mean time to respond to modern threats with a single, unified SecOps solution.

Improving time to value

Simkin said the enhancements in Cortex Cloud bring value to enterprises quickly. “When you onboard a cloud account, you onboard it once, and every single posture control and runtime is now activated at the same moment with the click of a button. So time to value has been dramatically improved,” he said. “Unifying cloud and SOC within a broader security operations umbrella is the right decision to help enterprises stay ahead.”

“Customers have told us over and over again they’re not looking to adopt individual posture controls,” Simkin said. “They’re looking to adopt cloud posture, runtime, or end-to-end security operations. So we listened to that feedback to get to a much simpler and easier to understand price and model.”

My perspective

With Cortex Cloud, Palo Alto is demonstrating the continuing platformization of security. As security functions become more standardized, it’s easier to roll them into enterprise platforms.

That transition has been occurring for a while. Next-generation firewalls and other security capabilities have been rolled into a single system. Enterprises no longer need to buy these components separately. I also see cloud-native application protection platforms having reached that point, so they can be rolled in as a SOC tool.

This evolution makes security platforms more comprehensive, responsive, and capable than ever before. The era of the standalone security app is rapidly coming to an end.

Availability

General availability for Cortex Cloud is Feb. 18. Simkin said upgrades for existing customers, through PAN’s partner ecosystem, will begin in April.

Cisco Systems Inc. managed to put up a strong “beat and raise” in its fiscal second-quarter earnings this week, and investors took the news positively as the stock is trading at an all-time high, excluding the overvaluation during the dot-com bubble.

Beyond the strong quarter, the results also highlighted several broader themes. Here are my five takeaways from Cisco’s most recent quarter:

Security is moving the needle

For the past several years, I have referred to security as the biggest opportunity Cisco had to grow its revenue and its stock price. Last May, I mentioned in a post how Jeetu Patel (pictured), then head of security and collaboration and now chief product officer, had retooled security.

Since then, Cisco has released a flurry of security innovations, including extended detection and response or XDR, AI Defense and Hypershield, and the recently announced Smart Switch, which uses data processing units to embed security into the network. Although growth was only 4%, the company is seeing good momentum in new products.

On the earnings call, Chief Executive Chuck Robbins talked about security order growth and the impact of new products. “Our security orders more than doubled again this quarter,” he noted. “In just 12 months, both Cisco Secure Access and XDR have gained more than 1,000 customers combined, and approximately 1 million enterprise users each.”

Moreover, he added, “Even before it’s in full production, Hypershield is also seeing solid momentum. In Q2, we booked major platform deals with two Fortune 100 enterprise customers who are leveraging Hypershield to deploy security into the network in a fundamentally new way.”

Right now, order growth is more important than actual revenue as most of it is sold with a subscription model, which leads to “revenue stacking,” and that takes a while to see meaningful results. Security growth is a key indicator for sustained growth because the industry is massive and the competitive landscape is highly fragmented. Capturing even a moderate amount of share will change how Cisco is perceived by investors and continue to move the stock up.

The platform effect is taking hold

Since becoming the company’s first CPO, Patel has been emphatic about creating a Cisco “platform,” and while Cisco has used the term before, it was more a euphemism for product bundles. To describe his vision, Patel often refers to Apple Inc., which is the best example of a company that delivers great experiences through its platform. I’m “all in” on Apple because I can do many things I could not with a collection of best-of-breed products. I can iMessage on my laptop, I can cut text from my phone and paste it on my tablet, and I can push a webpage from one device to another.

We are starting to see the fruits of the platform effect at Cisco, with ThousandEyes integration across all its devices making troubleshooting easier. Some of the new security products leverage network telemetry to “see” things cyber-only tools can’t. The new Webex codec uses network intelligence to deliver a high-quality experience over a low-bandwidth connection, and a single AI agent now spans all of Cisco’s products.

During a conversation with Patel, he told me his goal is to build a platform that delivers “magical experiences which people love and tell others about.” He has been CPO less than a year, but so far, so good. Cisco Live US is coming up in June and we should see more evidence of it there.

AI will be a catalyst for growth

The hype around artificial intelligence is at an all-time high, but most of the focus has been on graphics processing units and servers. The reality is the network plays a critical role in AI performance and that has yet to be reflected by the investor community — but that should change soon. During the earnings call, management stated it expects to exceed $1 billion in AI product orders for fiscal year 2025, comprising a broad set of products, including network infrastructure, optics and Unified Computing System servers.

To help customers accelerate deployments, Cisco recently rolled out its AI PODs, which are turnkey, end-to-end Cisco solutions that can be deployed and used immediately for AI training and inferencing. This is a good example of the platform effect cited above.

Splunk is about more than adding revenue

When Cisco acquired Splunk Inc., many investors I talked to made comments such as, “It paid $28 billion for $4 billion a year in revenue.” Given Splunk’s margins, that’s a decent return. However, the value of Splunk is about more than dollars contributed. Since closing the deal 11 months ago, I have seen Splunk integrated across multiple Cisco products.

As Robbins said on the earnings call, “Since Splunk became a part of Cisco almost 11 months ago, we continue to integrate our businesses and fuel synergies without disrupting momentum. During the quarter we also integrated Talos into Splunk’s newly released Enterprise Security 8.0 solution and AppDynamics into Splunk’s on-prem log observer.”

At the National Retail Federation show, Splunk observability was on display as part of Cisco’s retail solutions, and I’ve seen many Cisco-Splunk cross-selling deals in the field. One interesting trend to watch is how Cisco brings Splunk and its other products together to address the growing interest in digital resilience, which is being fueled by AI. More to come on that.

Cisco’s not trading as a software company yet

From a stock perspective, Cisco still looks like a hardware company despite some strong software metrics. On the call, Chief Financial Officer Scott Herren highlighted many metrics, such as annual recurring revenue, that point to it having made the shift from hardware to software. As he explained, “Total ARR ended the quarter at $30.1 billion, an increase of 22%, with product ARR growth of 41%. Total subscription revenue increased 23% to $7.9 billion, and now represents 56% of Cisco’s total revenue. Total software revenue was up 33% at $5.5 billion, with software subscription revenue up 39%.”
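As a quick sanity check on those figures (my arithmetic, not Cisco's): if $7.9 billion of subscription revenue is 56% of the total, implied total quarterly revenue is roughly $14.1 billion, which puts the $5.5 billion of software at about 39% of revenue.

```python
# Figures quoted on Cisco's fiscal Q2 call, in billions of dollars.
subscription = 7.9           # total subscription revenue, 56% of total revenue
software = 5.5               # total software revenue

total = subscription / 0.56  # implied total quarterly revenue
print(f"implied total revenue: ${total:.1f}B")             # about $14.1B
print(f"software share of total: {software / total:.0%}")  # about 39%
```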

However, even at the higher stock price, Cisco’s price-to-earnings ratio is only about 17X, below the peer group average of 22X. It’s a company that has traded as a value stock for some time, since that’s what it has been. Security and AI give the company the chance to break into growth mode, but platform is the key to differentiation over the dozens of point products, many of which have significantly higher valuations than Cisco.

One last note on this quarter: The company announced that Gary Steele, former Splunk CEO and current president of Cisco go-to-market, will resign as of April 25. Although I have no official word from Cisco on why Steele is leaving, he has been a CEO since 1998, and post-Cisco, he will pursue another CEO gig. Assuming Robbins is at Cisco for the foreseeable future, I take the company’s statement of his wanting to be a CEO at face value.

The Kansas City Chiefs and Philadelphia Eagles have almost two weeks to develop a game plan for Super Bowl LIX in New Orleans Sunday, Feb. 9, but the technology team starts well before that. In fact, the planning and strategy for the next championship game — Super Bowl LX, which will be held in Levi’s Stadium in Santa Clara in February 2026 — are well underway.

Too early, you say? Well, it’s premature for the teams that hope to play in the game to make travel plans, but it’s a very different story for the information technology and cybersecurity professionals from the NFL and the San Francisco 49ers, the team that plays its home games at Levi’s Stadium (pictured).

A recent LinkedIn event by Cisco Systems Inc., one of the key providers of the Levi’s Stadium network, featured an in-depth discussion of how much planning, effort and technology goes into providing fast, secure connectivity for the teams, broadcasters, vendors and, of course, the fans who will pack the stadium along with their tens of thousands of mobile devices.

NFL and 49ers team up on tech

Aaron Amendolia, the NFL’s deputy chief information officer, has worked for the league for 21 seasons. He leads the NFL’s innovation team and oversees event technology and infrastructure. This year’s Super Bowl in New Orleans will be Amendolia’s 18th, more than even the GOAT himself, Tom Brady. He and his team will be busy with the 2025 game, and they’re already immersed in work for next year.

Costa Kladianos is the 49ers’ executive vice president and head of technology. He and his team handle tech for all home games, any postseason games the Niners host and numerous other events at the 68,500-seat stadium. After Super Bowl LX, another big job on his plate will be a different type of football: FIFA World Cup soccer games, some of which will be held at Levi’s Stadium.

Connectivity and much more

“You start to think about all the connectivity needed for the Super Bowl,” Amendolia said. “All the devices that come into a stadium on game day and all the buildout around that. We met with Costa’s team to talk about preparation for LX.”

And on game day, he added, “we’re planning for over 150,000 to 200,000 devices entering this building. But it’s not just about game days, but all the preparation around it. We have many partners, broadcasters, vendors, a diverse group of technology showing up, connecting to the network, and doing everything you need to deliver the games.”

I’ve interviewed many stadium CIOs, and Amendolia’s comments echo those of others: The network is critical to every aspect of holding a game. Last year, I talked with a sports CIO who described an incident that took the network down. He wasn’t sure it would be restored by game time and had to explain to the owners that a game could not occur without a network.

Security systems, ticketing, point-of-sale, medical services and other critical services run on the network. The good news is the network did come back up in time, and the scare prompted the team to build a redundant data center. But this is the challenge that all stadium CIOs face, and it’s magnified exponentially in a high-profile game like the Super Bowl.

Wi-Fi plays a massive role in overall stadium connectivity, according to Kladianos. It’s about much more than fans logging in with their cell phones. “Wi-Fi is table stakes right now,” he explained. “Everybody’s bringing their device, everybody’s sharing the great time they’re having at the event, but it’s also what all our backend technology, including point-of-sale systems, runs on. We love to run on Wi-Fi because it just makes us flexible. We can quickly move a sales system, our point of sale outwards. We can get into the lines and go to the in-seat service. It gives us that flexibility to what we want, especially around the gates, getting people through the gates quicker, checking their tickets.”

AI increases the burden on stadium Wi-Fi

“AI requires a lot of bandwidth and processing power, and that has to go through the Wi-Fi in the stadium,” Kladianos said. “That becomes super-important as we go there because we want fans not to realize the experience they’re having in the Wi-Fi. We want them to know that it works. We currently have 1,200 access points throughout the stadium, and we’re looking to expand that as we head to 2026 to ensure that everyone has the same great experience they have everywhere else.”

Managing all the devices that require Wi-Fi access is extremely challenging, according to Kladianos. “Even with your best analyst, you need technology and tools to correlate those events,” he said. “AI is really where we’re looking.”

Indeed, he added, “we’re going to validate which AI solution is going to return the best results. It’s exciting because you must correlate against something unique to sports. The sensors we have on the field with the players, the cameras we have doing optical tracking, our broadcast cameras capturing and getting that live event out to the points of sale, and the fan devices create a unique environment.”

With all that data, security is critical

“We look at AI as an opportunity, and we know with opportunities, there’s also the other side of the coin, which is threats,” Kladianos explained. “You want to be ahead of the game. So, with our partner Cisco, we’re putting in the latest and greatest monitoring solutions and everything they offer on the security side, on our firewalls, using threat intelligence.”

Moreover, he added, the team can take all its data, all its logs on the back end, and quickly use AI to summarize threats, because AI can do it a lot faster. “I have analysts in the group, so that’s really going to help us. In terms of other innovations in the stadium, our strategy for AI is the intelligent stadium,” he said. “We want to see how AI can enable everything we do to engage our fans.”

Few events are as closely watched as the Super Bowl. The 2024 game had more than 123 million viewers in the United States, and the NFL continues attracting new fans worldwide. That growing focus makes each Super Bowl a top-level Homeland Security concern on par with a presidential inauguration.

“Obviously, Super Bowl is a high-profile event, but also a high-value target for adversaries,” said Amendolia. “Our cybersecurity team, our CISO, they’re making sure that we implement AI responsibly, so we’re not causing any vulnerabilities ourselves, and we understand what’s going on in the outside world. It’s a lot of education and putting the right tools in place, but also communication with our partners. You think of all the different organizations from across the world, international broadcasters, domestic broadcasters, and digital experiences that come to the Super Bowl; you’re now bringing a whole ecosystem trying to get out their content around this live event with all the tools they bring in.”

Added Kladianos: “We have a full security operation center. We work closely with the NFL, local security agencies, the FBI and local police. We run different technology in terms of our high-definition camera systems using IP on the back end running through that network, making it super-important to have that low latency. These cameras are not just cameras; now, they’re analyzing super HD and super zoom. Using some of the AIs and the cameras, you can spot potential threats before they happen.”

Amendolia said his cyber team is using logging tools such as Splunk’s to bring everything to one place, as well as Cisco’s suite of security tools. He cited some stats: “350,000 connections blocked to malicious and blacklisted sites. 39,000 intelligence services detected and dealt with. 1,600 intrusion attempts foiled. Those are just the years we’ve worked with Cisco at the Super Bowl. These distinct things keep incrementally increasing. The target is there.”

Final thoughts

Though this is a sports-related story, the lessons learned can be applied to companies in all industries. A recent ZK Research/theCUBE Research survey found that 93% of respondents believe the network to be more critical to business operations than it was two years ago.

However, I find that with most companies, the network does not get the same level of C-level interest as the cloud or compute platforms, but the reality is that the network is the business for most companies. Ensuring a secure, rock-solid network is crucial to business operations in all industries.

Cisco Systems Inc. this week held its first AI Summit, a thought leadership event on the pivotal topics shaping the future of artificial intelligence — this one focused on the security of AI systems.

The summit was small and intimate, with about 150 attendees, including executives from about 40 Fortune 100 companies. I understand why the interest from top companies was so high, as the speaker list was impressive and included AI luminaries such as Alexandr Wang, founder and chief executive of Scale AI Inc.; Jonathan Ross, founder and CEO of Groq Inc.; Aaron Levie, co-founder and CEO of Box Inc.; Brad Lightcap, chief operating officer of OpenAI; David Solomon, CEO of Goldman Sachs; and many others.

From a product perspective, Cisco leveraged AI Summit to announce a new tool called Cisco AI Defense, which, as the name suggests, safeguards AI systems. According to Cisco’s 2024 AI Readiness Index, only 29% of organizations feel equipped to stop hackers or unauthorized users from accessing their AI systems. AI Defense aims to change that statistic.

The product’s release is well-timed, as AI security is now at the top of business and information technology professionals’ minds. This week, I also attended the National Retail Federation show in New York. There, I attended three chief information officer events, with a combined attendance of about 50 IT executives.

Every IT executive at the three events was highly interested in AI. The primary thing holding most of them back was security, particularly for regulated industries such as healthcare, retail and financial services.

Cisco’s AI Defense is designed to give security teams a clear overview of all the AI apps employees use and whether they are authorized. For example, the tool offers a comprehensive view of shadow AI and sanctioned AI apps. It implements policies restricting employee access to unauthorized apps while ensuring compliance with privacy and security regulations.
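To make the policy idea concrete, here is a minimal sketch of the kind of allow/block/review decision described above. The app names, categories and policy tables are invented for illustration; they are not Cisco AI Defense's actual data model or API.

```python
# Hypothetical sketch of a shadow-AI policy check. The sanctioned-app list,
# blocked categories and return values are illustrative assumptions only.

SANCTIONED_APPS = {"corp-copilot", "approved-translator"}
BLOCKED_CATEGORIES = {"unvetted-llm", "consumer-chatbot"}

def evaluate_app(app_name: str, category: str) -> str:
    """Classify an observed AI app as 'allow', 'block' or 'review'."""
    if app_name in SANCTIONED_APPS:
        return "allow"    # explicitly sanctioned apps always pass
    if category in BLOCKED_CATEGORIES:
        return "block"    # policy forbids the whole category
    return "review"       # unknown shadow AI: flag for the security team

print(evaluate_app("corp-copilot", "unvetted-llm"))   # prints "allow"
print(evaluate_app("random-gpt", "consumer-chatbot")) # prints "block"
print(evaluate_app("new-tool", "note-taking"))        # prints "review"
```

The point of the sketch is the precedence: an explicit sanction beats a category block, and anything unclassified is surfaced for review rather than silently allowed.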

One common theme from my IT discussions is that no one wants to be the “department of no,” but they also understand that without the proper controls, the use of AI can put businesses at risk. Also, it has been shown over time that when IT departments say no, users find a way around it. It’s better to provide options for users, and Cisco AI Defense offers the visibility and controls required for workers to be safe.

The tool is also helpful for developers because it secures applications at every stage of the lifecycle. During development, it pinpoints weaknesses in AI models so potential issues can be fixed early. This helps developers create secure apps from the start without worrying about hidden risks.

When it’s time to deploy those apps, AI Defense ensures they run safely in the real world. It continuously monitors for unauthorized access, data leaks and cyberthreats, and it provides ongoing security even after an app is deployed by identifying new risks.

One of the tool’s unique attributes is its continuous validation at scale. One of the challenges of securing AI is that though a company could use traditional tools to lock down the environment at any point in time, the guardrails must adapt whenever the model changes. Cisco AI Defense uses threat intelligence from Cisco Talos and machine learning to continually validate the environment and automate updates to the tool.

This also builds on Cisco’s security portfolio, which is taking shape nicely as a platform. In the analyst Q&A, I asked Cisco Chief Product Officer Jeetu Patel (pictured, left, with Cisco CEO Chuck Robbins) about the “1+1=3” effect of using AI Defense with Hypershield. He corrected me, saying four technologies create a “1+1+1+1=20” effect: Cisco Secure Access, Hypershield, Multi-Cloud Defense and AI Defense.

“These four work in concert with each other,” Patel said. “If you want visibility into the public cloud or what applications are running, Multi-Cloud Defense ties in with AI Defense and gives you the data needed to secure the environment. If you want to ensure enforcement on a top-of-rack switch or a server with an eBPF agent, that can happen as AI Defense is embedded into Hypershield.”

What’s more, he added, “we will partner with third parties and are willing to tie this together with competitor products. We understand the true enemy is the adversary, not another security company, and we want to ensure we have the ecosystem effect across the industry.”

DJ Sampath, Cisco’s vice president of product, AI software and platform, added, “AI Defense data would be integrated into Splunk, so all the demonstrated things will find their way into Splunk through the Cisco Add-On to enrich the alerts you see in Splunk.” Given the price Cisco paid for Splunk Inc., integrating more Cisco products and data into it will create a multiplier effect on revenue.

I firmly believe that share shifts happen when markets transition, and AI security is a needle-moving opportunity for Cisco and its peers. AI will create a rising tide for the security industry, but the company that makes AI security easy will benefit disproportionately. The vision Cisco laid out is impressive, but the proof will come when the product ships. We shouldn’t have to wait long, since it’s expected to be available this March.

For those who missed it, the event will be rebroadcast next Wednesday, Jan. 22.

It’s NRF week in New York, when technology vendors showcase innovation for the retail industry. At the National Retail Federation show, HPE Aruba Networking rolled out several new products to help retailers tackle industry-specific challenges.

They included providing backup connectivity for mission-critical apps, supporting pop-up stores and simplifying information technology infrastructure deployment in retail environments.

Retail has been a core industry for the Hewlett Packard Enterprise Co. unit, which designed the new products to address the networking needs of large and small retail locations. The HPE Aruba Networking 100 Series Cellular Bridge is a key addition to the portfolio. It provides “always-on” connectivity if the primary network experiences a disruption, allowing retailers to stay up and running, even when setting up temporary pop-up locations and kiosks. The Cellular Bridge defaults to 5G but automatically switches to 4G LTE when needed.
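The failover behavior described above can be sketched as a simple preference-ordered link selection. The link names and health-check representation are assumptions made for illustration, not HPE Aruba's implementation.

```python
# Illustrative failover logic in the spirit of the Cellular Bridge's behavior:
# prefer the primary WAN, fall back to 5G, then to 4G LTE. The link names and
# the link_up dict are invented for this sketch.
from typing import Optional

PREFERENCE = ["primary-wan", "5g", "4g-lte"]

def select_link(link_up: dict) -> Optional[str]:
    """Return the highest-preference link that is currently healthy."""
    for link in PREFERENCE:
        if link_up.get(link):
            return link
    return None  # no connectivity at all

print(select_link({"primary-wan": True, "5g": True, "4g-lte": True}))   # primary-wan
print(select_link({"primary-wan": False, "5g": True, "4g-lte": True}))  # 5g
print(select_link({"primary-wan": False, "5g": False, "4g-lte": True})) # 4g-lte
```

In a real bridge this selection would be re-evaluated continuously as link health changes, which is what keeps transactions flowing when the primary connection drops.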

“It’s about making sure that there is business continuity, especially for critical transactions like credit cards, and ensuring that it is always on whether anything else in the network fails,” Gayle Levin, senior product marketing manager for wireless at HPE Aruba, said in a briefing.

HPE Aruba is also expanding its retail offerings by combining networking and compute capabilities with the launch of the CX 8325H switch. The energy-efficient 18-port switch integrates with HPE ProLiant DL145 Gen 11, a compact, quiet server for edge computing. Together, these devices provide efficient computing and storage, while their space-saving design makes them ideal for small retail environments.

What I like about this product is that it combines technology from HPE’s computing side with networking from Aruba to create a solution for retail challenges. Most brick-and-mortar stores are space-constrained and do not have room for separate devices.

Moreover, HPE Aruba is expanding its Wi-Fi 7 lineup with the 750 Series access points, or APs. Like the 730 Series, the new APs can securely process internet of things data and handle a larger number of IoT devices. One of the compelling features of the 750 Series is its ability to run containerized IoT applications directly on the device without sending data to the cloud. Instead, it processes data at the edge, right where it’s collected.

IoT has exploded in retail, and organizations in the industry are generating massive amounts of data, which also exposes them to added security risks. IoT devices are easy targets for hackers because many still use default or weak passwords, run outdated software, and connect to larger networks. In addition, they collect sensitive data such as location and usage patterns. With so many devices in use, the number of potential attack points grows.

“In retail, brand reputation is critical,” Levin said. “We’re ensuring that the door lock is not being hacked to avoid exposure or added risk. IoT is supposed to help, but it’s doing the opposite.”

HPE Aruba addresses IoT security by integrating zero-trust into its products. For example, its access points prioritize securing IoT devices like cameras, sensors, and radio frequency identification or RFID labels, which are common entry points for hackers. The vendor also provides AI-powered tools like client insights and micro-segmentation to detect potential breaches proactively.

Central AI Insights is a new product created for retail curbside operations. It uses AI to automatically adjust Wi-Fi settings, reducing interference from things like people passing by outside, so customers and staff always have a reliable connection. If something goes wrong — whether it’s a network issue, an internet problem or a glitch in an app — Central AI Insights helps diagnose the issue. It also monitors IoT devices and can spot suspicious activity.

“It’s not just about using the network to support AI but also making the network work better using AI,” Levin said. “We’ve created specific insights that help retail. The idea is to make supporting these very large, distributed store ecosystems easier with a centralized IT department. So, they’re getting everything they need and use AI insights to understand where the problem is.”

HPE Aruba has a broad ecosystem of retail partners like Hanshow and SOLUM, which offer electronic shelf labels, or ESLs, and digital signage. Another partner, Simbe, has developed an autonomous item-scanning robot that tracks products, stock levels and pricing. VusionGroup uses computer vision AI and IoT asset management with ESLs and digital displays to help retailers track their inventory. Zebra Technologies provides RFID scanners, wearable devices and intelligent cabinets for omnichannel retailing.

HPE Aruba has upgraded its Central IoT Operations dashboard to simplify retailers’ management of IoT devices. The improved dashboard has a single interface, connects Wi-Fi APs to devices such as cameras and sensors, and integrates with third-party applications. I stopped by the HPE booth at NRF, where attendees could check out the hardware, see it in action with some retail demos, and experience the new software.

AI, digitization, omnichannel communications and IoT are creating massive changes in retail. Though these technologies may seem distinct, they share one commonality: They are network-centric. These new products from HPE Aruba enable retailers to deploy a modernized network that can act as a platform to enable companies to adapt to whatever trend is next.

Amazon Web Services Inc. made several announcements at the CES consumer electronics show last week regarding partnerships in the automotive industry that are aimed at furthering the rise of software-defined vehicles.

Building and delivering cars is increasingly a software game that requires automotive manufacturers to take an ecosystem approach. The rise of software-defined vehicles, or SDVs, enables auto companies to work on software for vehicles that have yet to be built. Also, updates can be made to finished products using over-the-air connectivity, something they could never do before.

AWS is partnering with several companies to make SDVs smarter and easier to develop. By using cloud computing, artificial intelligence and scalable tools, AWS is helping automakers build better cars that can be updated and improved over time.

Honda Motor Co. Ltd. is among the companies working with AWS to turn its cars into SDVs. The car company has created a “Digital Proving Ground,” or DPG, an AWS-enabled cloud simulation platform for digitally designing and testing vehicles. Using DPG, Honda can collect and analyze data such as electric vehicle driving range, energy consumption and performance. The platform reduces reliance on physical prototypes, speeding up development and lowering costs.

Historically, auto companies have had to build cars first and then test them. Though this seems reasonable, the cost and time can be very high: accidents happen, creating delays, and niche use cases are complex to test. For example, at dawn and dusk, sensors can malfunction because of the brightness, which can be tested for only a few minutes daily in the physical world. In a simulated environment such as the DPG, the sun can be held at the horizon and millions of hours of simulation run.

Moreover, Honda uses AWS’ video streaming and machine learning tools to develop video analytics applications. Amazon Kinesis Video Streams processes and stores car camera footage to detect unusual movement around a car. If implemented in the real world, it could potentially alert drivers to nearby hazards and help prevent collisions.
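As a rough illustration of what "detect unusual movement" analytics over camera footage involve, here is a minimal frame-differencing sketch. This is a generic technique shown with invented data, not Honda's or Kinesis Video Streams' actual algorithm.

```python
# Generic frame-differencing motion detection: flag frames whose pixel
# values changed sharply from the previous frame. Frames are flattened
# grayscale pixel lists; the threshold is an invented tuning parameter.

def motion_score(prev_frame, curr_frame):
    """Mean absolute pixel difference between two same-sized grayscale frames."""
    diffs = [abs(a - b) for a, b in zip(prev_frame, curr_frame)]
    return sum(diffs) / len(diffs)

def detect_motion(frames, threshold=10.0):
    """Return indices of frames whose change from the previous frame exceeds the threshold."""
    return [i for i in range(1, len(frames))
            if motion_score(frames[i - 1], frames[i]) > threshold]

still = [50] * 16                # a 4x4 grayscale frame, flattened
moved = [50] * 8 + [200] * 8     # half the pixels changed sharply
frames = [still, still, moved, moved]
print(detect_motion(frames))     # prints [2]: only the still-to-moved transition fires
```

Production systems use far more robust models, but the core idea is the same: a per-frame change score compared against a threshold, with alerts raised on the frames that cross it.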

Honda is also tapping into AWS generative AI services, specifically Amazon Bedrock. For example, it’s developing a new system that guides drivers to the best charging stations based on location, battery level, charging speed and proximity to shopping centers. The system provides secure communication between vehicles and the cloud while gathering driver preferences to offer personalized recommendations. It’s set to launch in Honda’s 0 Series EVs (pictured).
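The kind of ranking such a recommender might perform can be sketched as a weighted scoring function over the factors mentioned above. All fields and weights here are invented for illustration; the internals of Honda's Bedrock-based system are not public.

```python
# Hypothetical charging-station scoring: penalize distance, reward fast
# chargers and nearby shopping, and let distance dominate when the battery
# is low. Every field name and weight is an assumption for this sketch.

def score(station, battery_pct, w_dist=1.0, w_speed=0.5, w_amenity=0.2):
    """Lower is better."""
    s = w_dist * station["distance_km"]
    s -= w_speed * station["kw"] / 10           # faster chargers score better
    s -= w_amenity * station["near_shopping"]   # 1 if near a shopping center
    if battery_pct < 15:                        # low battery: distance dominates
        s += station["distance_km"] * 2
    return s

def recommend(stations, battery_pct):
    """Pick the best-scoring (lowest) station for the current battery level."""
    return min(stations, key=lambda st: score(st, battery_pct))

stations = [
    {"name": "A", "distance_km": 2.0, "kw": 50, "near_shopping": 1},
    {"name": "B", "distance_km": 9.0, "kw": 150, "near_shopping": 0},
]
print(recommend(stations, battery_pct=10)["name"])  # prints "A": low battery favors proximity
```

A generative AI layer on top of such a ranking would mainly handle the conversational side, gathering preferences and explaining the recommendation, rather than the scoring itself.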

Honda’s partnership is notable, as it’s among the highest-volume manufacturers. Specialty EV companies were early adopters of platforms such as AWS; a Honda partnership legitimizes SDVs as the way forward for the industry.

Building on this momentum, AWS has also teamed up with HERE Technologies to enhance location-based services for SDVs. HERE provides advanced mapping technology, while AWS supplies the cloud tools to process large amounts of data. The companies are helping automakers build driver assistance systems, hands-free driving, EV routing and more.

HERE’s HD Live Map processes real-time sensor data to provide granular navigation and improve EV battery usage. The company just launched a new tool called SceneXtract, which simplifies testing by creating virtual simulations. Using a combination of HERE’s mapping technology and services like Amazon Bedrock, automotive developers run detailed simulations to test advanced driver assistance systems and automated driving. For instance, they can locate and export map data into test scenes, reducing the time, effort and cost involved in preparing simulations.

Additionally, AWS has partnered with automotive supplier Valeo to simplify the development and testing of vehicle software. Valeo announced the first three solutions during CES 2025: Virtualized Hardware Lab, Cloud Hardware Lab and Assist XR.

Virtualized Hardware Lab allows carmakers to test software on virtualized components, potentially speeding up development by up to 40%, according to Valeo. This cloud-based solution, hosted on AWS, will be available on AWS Marketplace early this year.

Valeo offers the Cloud Hardware Lab, a hardware-in-the-loop-as-a-service, or HILaaS, solution for those who want access to large-scale testing systems. Hardware-in-the-loop, or HIL, combines hardware components with software simulations so companies can test how their software interacts with hardware systems. HILaaS lets companies access Valeo’s advanced testing systems remotely through an AWS-hosted platform.

Lastly, Assist XR will provide roadside assistance, vehicle maintenance and other remote services. It will use AWS cloud infrastructure and AI tools to process real-time data from vehicles and their surroundings. This is one of many examples of the technologies needed to build safer, smarter and more efficient cars.

Going into CES, I was chatting with some media, and there is a perception that the automotive industry has seen little innovation over the past several years. Though I believe this is incorrect, I understand the source. Five or more years ago, fully autonomous vehicles were all the rage and were supposed to be here by now, setting an expectation that was not realistic. If the benchmark for innovation is Level 5 autonomous vehicles, then we aren’t there yet.

However, every year brings incremental innovation on the journey to full autonomy, and we now have many features that make us better, smarter and safer drivers. 2025 won’t be the year of Level 5, but it will be another year in which we see more steps taken toward it.
