ZK Research Blog


Cisco Systems Inc. this week held its first AI Summit, a thought leadership event on the pivotal topics shaping the future of artificial intelligence — this one focused on the security of AI systems.
The summit was small and intimate, with about 150 attendees, including executives from about 40 Fortune 100 companies. I understand why the interest from top companies was so high, as the speaker list was impressive and included AI luminaries such as Alexandr Wang, founder and chief executive of Scale AI Inc.; Jonathan Ross, founder and CEO of Groq Inc.; Aaron Levie, co-founder and CEO of Box Inc.; Brad Lightcap, chief operating officer of OpenAI; David Solomon, CEO of Goldman Sachs; and many others.
From a product perspective, Cisco leveraged AI Summit to announce a new tool called Cisco AI Defense, which, as the name suggests, safeguards AI systems. According to Cisco’s 2024 AI Readiness Index, only 29% of organizations feel equipped to stop hackers or unauthorized users from accessing their AI systems. AI Defense aims to change that statistic.
The product’s release is well-timed, as AI security is now at the top of business and information technology professionals’ minds. This week, I also attended the National Retail Federation show in New York. There, I attended three chief information officer events, with a combined attendance of about 50 IT executives.
Every IT executive at the three events was highly interested in AI. The primary thing holding most of them back was security, particularly for regulated industries such as healthcare, retail and financial services.
Cisco’s AI Defense is designed to give security teams a clear overview of all the AI apps employees use and whether they are authorized. For example, the tool offers a comprehensive view of shadow AI and sanctioned AI apps. It implements policies restricting employee access to unauthorized apps while ensuring compliance with privacy and security regulations.
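To make the shadow-AI idea concrete, here is a minimal sketch of the kind of allowlist check such a tool performs behind the scenes. This is purely illustrative: the app names and the classification logic are my own invention, not Cisco AI Defense's actual API or policy engine.

```python
# Hypothetical illustration of sanctioned-vs-shadow AI classification.
# The app names and rules are invented for this example; they are not
# part of Cisco AI Defense.
SANCTIONED_AI_APPS = {"ChatGPT Enterprise", "Internal-LLM"}

def classify_ai_app(app_name: str, sanctioned: set) -> str:
    """Label an observed AI app as 'sanctioned' or 'shadow'."""
    return "sanctioned" if app_name in sanctioned else "shadow"

# In a real product, the observed list would come from network telemetry.
observed = ["ChatGPT Enterprise", "RandomSummarizerBot", "Internal-LLM"]
report = {app: classify_ai_app(app, SANCTIONED_AI_APPS) for app in observed}
print(report)
```

The point of the sketch is simply the ordering such products rely on: visibility (the inventory of AI apps in use) has to come before enforcement (the policy decisions).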
One common theme from my IT discussions is that no one wants to be the “department of no,” but they also understand that without the proper controls, the use of AI can put businesses at risk. Also, it has been shown over time that when IT departments say no, users find a way around it. It’s better to provide options for users, and Cisco AI Defense offers the visibility and controls required for workers to be safe.
The tool is also helpful for developers because applications can be secured at every stage of the application lifecycle. During development, it pinpoints weaknesses in AI models so potential issues can be fixed early. This helps developers create secure apps from the start without worrying about hidden risks.
When it’s time to deploy those apps, AI Defense ensures they run safely in the real world. It continuously monitors for unauthorized access, data leaks and cyberthreats, providing ongoing security even after an app is deployed by identifying new risks.
One of the tool’s unique attributes is its continuous validation at scale. One of the challenges of securing AI is that while a company can use traditional tools to secure the environment at any point in time, guardrails must be adapted whenever the model changes. Cisco AI Defense uses threat intelligence from Cisco Talos and machine learning to continually validate the environment and automate the tool’s updates.
This also builds on Cisco’s security portfolio, which is taking shape nicely as a platform. In the analyst Q&A, I asked Cisco Chief Product Officer Jeetu Patel (pictured, left, with Cisco CEO Chuck Robbins) about the “1+1=3” effect of using AI Defense with Hypershield. He corrected me and said four technologies create a “1+1+1+1=20” effect: Cisco Secure Access, Hypershield, Multi-Cloud Defense and AI Defense.
“These four work in concert with each other,” Patel said. “If you want visibility into the public cloud or what applications are running, Multi-Cloud Defense ties in with AI Defense and gives you the data needed to secure the environment. If you want to ensure enforcement on a top-of-rack switch or a server with an EBPF agent, that can happen as AI Defense is embedded into Hypershield.”
What’s more, he added, “we will partner with third parties and are willing to tie this together with competitor products. We understand the true enemy is the adversary, not another security company, and we want to ensure we have the ecosystem effect across the industry.”
DJ Sampath, Cisco’s vice president of product, AI software and platform, added, “AI Defense data would be integrated into Splunk, so all the demonstrated things will find their way into Splunk through the Cisco Add-On to enrich the alerts you see in Splunk.” Given the price Cisco paid for Splunk Inc., integrating more Cisco products and data into it will create a multiplier effect on revenue.
I firmly believe that share shifts happen when markets transition, and AI security provides a needle-moving opportunity for Cisco and its peers. AI will create a rising tide for the security industry, but the company that makes AI security easy will benefit disproportionately. The vision Cisco laid out is impressive, but the proof will come when the product is available. We shouldn’t have to wait long, since it’s expected to be available this March.
For those who missed it, the event will be rebroadcast next Wednesday, Jan. 22.


It’s NRF week in New York, which allows technology vendors to showcase innovation for the retail industry, and at the National Retail Federation show, HPE Aruba Networking rolled out several new products to help retailers tackle industry-specific challenges.
They included providing backup connectivity for mission-critical apps, supporting pop-up stores and simplifying information technology infrastructure deployment in retail environments.
Retail has been a core industry for the Hewlett Packard Enterprise Co. unit, which designed the new products to address the networking needs of large and small retail locations. The HPE Aruba Networking 100 Series Cellular Bridge is a key addition to the portfolio. It provides “always-on” connectivity if the primary network experiences a disruption, allowing retailers to stay up and running, even when setting up temporary pop-up locations and kiosks. The Cellular Bridge defaults to 5G but automatically switches to 4G LTE when needed.
“It’s about making sure that there is business continuity, especially for critical transactions like credit cards, and ensuring that it is always on whether anything else in the network fails,” Gayle Levin, senior product marketing manager for wireless at HPE Aruba, said in a briefing.
HPE Aruba is also expanding its retail offerings by combining networking and compute capabilities with the launch of the CX 8325H switch. The energy-efficient 18-port switch integrates with HPE ProLiant DL145 Gen 11, a compact, quiet server for edge computing. Together, these devices provide efficient computing and storage, while their space-saving design makes them ideal for small retail environments.
What I like about this product is that it combines technology from HPE’s computing side with networking from Aruba to create a solution for retail challenges. Most brick-and-mortar stores are space-constrained and do not have room for separate devices.
Moreover, HPE Aruba is expanding its Wi-Fi 7 lineup with 750 Series access points (APs). Like the 730 Series, the new APs can securely process internet of things data and handle a larger number of IoT devices. One of the compelling features of the 750 Series is its ability to run containerized IoT applications directly on the device without sending data to the cloud. Instead, it processes data at the edge, right where it’s collected.
IoT has exploded in retail, and organizations in this industry are creating massive amounts of data, which means they also face extra security risks. IoT devices are easy targets for hackers because many still use default or weak passwords and outdated software, and they connect to larger networks. In addition, they collect sensitive data such as location and usage patterns. With so many devices in use, the number of potential attack points increases.
“In retail, brand reputation is critical,” Levin said. “We’re ensuring that the door lock is not being hacked to avoid exposure or added risk. IoT is supposed to help, but it’s doing the opposite.”
HPE Aruba addresses IoT security by integrating zero-trust into its products. For example, its access points prioritize securing IoT devices like cameras, sensors, and radio frequency identification or RFID labels, which are common entry points for hackers. The vendor also provides AI-powered tools like client insights and micro-segmentation to detect potential breaches proactively.
Central AI Insights is a new product created for retail curbside operations. It uses AI to automatically adjust Wi-Fi settings, reducing interference from things like people passing by outside, so customers and staff always have a reliable connection. If something goes wrong — whether it’s a network issue, an internet problem or a glitch in an app — Central AI Insights helps diagnose the issue. It also monitors IoT devices and can spot suspicious activity.
“It’s not just about using the network to support AI but also making the network work better using AI,” Levin said. “We’ve created specific insights that help retail. The idea is to make supporting these very large, distributed store ecosystems easier with a centralized IT department. So, they’re getting everything they need and use AI insights to understand where the problem is.”
HPE Aruba has a broad ecosystem of retail partners like Hanshow and SOLUM, which offer electronic shelf labels, or ESLs, and digital signage. Another partner, Simbe, has developed an autonomous item-scanning robot that tracks products, stock levels and pricing. VusionGroup uses computer vision AI and IoT asset management with ESLs and digital displays to help retailers track their inventory. Zebra Technologies provides RFID scanners, wearable devices and intelligent cabinets for omnichannel retailing.
HPE Aruba has upgraded its Central IoT Operations dashboard to simplify retailers’ management of IoT devices. The improved dashboard has a single interface, connects Wi-Fi APs to devices such as cameras and sensors, and integrates with third-party applications. I stopped by the HPE booth at NRF, where attendees could check out the hardware, see it in action with some retail demos, and experience the new software.
AI, digitization, omnichannel communications and IoT are creating massive changes in retail. Though these technologies may seem distinct, they share one commonality: They are network-centric. These new products from HPE Aruba enable retailers to deploy a modernized network that can act as a platform to enable companies to adapt to whatever trend is next.


Amazon Web Services Inc. made several announcements at the CES consumer electronics show last week regarding partnerships in the automotive industry that are aimed at furthering the rise of software-defined vehicles.
Building and delivering cars is increasingly becoming a software game that requires automotive manufacturers to take an ecosystem approach. The rise of software-defined vehicles, or SDVs, enables auto companies to work on parts or cars that have yet to be built. Also, updates can be made to finished products using over-the-air connectivity, something they could never do before.
AWS is partnering with several companies to make SDVs smarter and easier to develop. By using cloud computing, artificial intelligence and scalable tools, AWS is helping automakers build better cars that can be updated and improved over time.
Honda Motor Co. Ltd. is among the companies working with AWS to turn its cars into SDVs. The car company has created a “Digital Proving Ground,” or DPG, an AWS-enabled cloud simulation platform for digitally designing and testing vehicles. Using DPG, Honda can collect and analyze data such as electric vehicle driving range, energy consumption and performance. The platform reduces reliance on physical prototypes, speeding up development and lowering costs.
Historically, auto companies have had to build cars first and then test them. Though this seems reasonable, the cost and time taken can be very high as accidents happen, which creates delays, and niche use cases can be complex to test. For example, at dawn and dusk, sensors can malfunction because of the brightness. This can only be tested for a few minutes daily in the physical world. In a simulated environment such as the DPG, the sun can be held at the horizon, and millions of hours of simulation run.
Moreover, Honda uses AWS’ video streaming and machine learning tools to develop video analytics applications. Amazon Kinesis Video Streams processes and stores car camera footage to detect unusual movement around a car. If implemented in the real world, it could potentially alert drivers to nearby hazards and help prevent collisions.
Honda is also tapping into AWS generative AI services, specifically Amazon Bedrock. For example, it’s developing a new system that guides drivers to the best charging stations based on location, battery level, charging speed and proximity to shopping centers. The system provides secure communication between vehicles and the cloud while gathering driver preferences to offer personalized recommendations. It’s set to launch in Honda’s 0 Series EVs (pictured).
Honda’s partnership is notable, as it’s among the highest-volume manufacturers. Specialty EV companies were early adopters of platforms such as AWS; a partnership with Honda legitimizes SDVs as the way forward for the entire industry.
Building on this momentum, AWS has also teamed up with HERE Technologies to enhance location-based services for SDVs. HERE provides advanced mapping technology, while AWS supplies the cloud tools to process large amounts of data. The companies are helping automakers build driver assistance systems, hands-free driving, EV routing and more.
HERE’s HD Live Map processes real-time sensor data to provide granular navigation and improve EV battery usage. The company just launched a new tool called SceneXtract, which simplifies testing by creating virtual simulations. Using a combination of HERE’s mapping technology and services like Amazon Bedrock, automotive developers run detailed simulations to test advanced driver assistance systems and automated driving. For instance, they can locate and export map data into test scenes, reducing the time, effort and cost involved in preparing simulations.
Additionally, AWS has partnered with automotive supplier Valeo to simplify the development and testing of vehicle software. Valeo announced the first three solutions during CES 2025: Virtualized Hardware Lab, Cloud Hardware Lab and Assist XR.
Virtualized Hardware Lab allows carmakers to test software on virtualized components, potentially speeding up development by up to 40%, according to Valeo. This cloud-based solution, hosted on AWS, will be available on AWS Marketplace later this year.
Valeo offers the Cloud Hardware Lab, a hardware-in-the-loop-as-a-service, or HILaaS, solution for those who want access to large-scale testing systems. HIL combines hardware components with software simulations so companies can test how their software interacts with hardware systems. HILaaS allows companies to access Valeo’s advanced testing systems remotely through an AWS-hosted platform.
Lastly, Assist XR will provide roadside assistance, vehicle maintenance and other remote services. It will use AWS cloud infrastructure and AI tools to process real-time data from vehicles and their surroundings. This is one of many examples of the technologies needed to build safer, smarter and more efficient cars.
Going into CES, I was chatting with some media, and there is a perception that the automotive industry has seen little innovation over the past several years. Though I believe this statement is incorrect, I understand the source. Five or more years ago, fully autonomous vehicles were all the rage and were supposed to be here by now. This set an expectation that was not realistic. If the benchmark for innovation is level five AVs, then we aren’t there yet.
However, every year, incremental innovation has been made in the journey to fully autonomous, and we now have many features that make us better, smarter and safer drivers. 2025 won’t be the year of level five, but it will be another year in which we see more steps taken toward it.


Although the holiday gift-giving season may be over, Nvidia Corp. co-founder and Chief Executive Jensen Huang was in a very generous mood during his Monday keynote address at the CES consumer electronics show in Las Vegas. The leader in accelerated computing, which invented the graphics processing unit more than 25 years ago, still has an insatiable appetite for innovation.
Huang (pictured), dressed in a more Vegas version of his customary black leather jacket, kicked off this keynote with a history lesson on how Nvidia went from a company that made video games better to the AI powerhouse it is today. He then shifted into product mode and showcased his company’s continuing leadership in the AI revolution by announcing several new and enhanced products for AI-based robotics, autonomous vehicles, agentic AI and more. Here are the five I felt were most meaningful:
Cosmos for world-building
Nvidia’s Cosmos platform consists of what the company calls “state-of-the-art generative world foundation models, advanced tokenizers, guardrails and an accelerated video processing pipeline” for advancing the development of physical AI capabilities, including autonomous vehicles and robots.
Using Nvidia’s world foundation models or WFMs, Cosmos makes it easy for organizations to produce vast amounts of “photoreal, physics-based synthetic data” for training and evaluating their existing models. Developers can also fine-tune Cosmos WFMs to build custom models.
Physical AI can be very expensive to implement, requiring robots, cars and other systems to be built and trained in real-life scenarios. Cars crash and robots fall, adding cost and time to the process. With Cosmos, everything can be simulated virtually, and when the training is complete, the information is uploaded into the physical device.
Nvidia is providing Cosmos models under an open model license to help the robotics and AV community work faster and more effectively. Many of the world’s leading physical AI companies use Cosmos to accelerate their work.
The Omniverse is expanding
Huang also announced new generative AI models and blueprints that expand and further integrate Nvidia Omniverse into physical AI applications. The company said leading software development and professional services firms are leveraging Omniverse to drive the growth of new products and services designed to “accelerate the next era of industrial AI.”
Companies such as Accenture, Microsoft and Siemens are integrating Omniverse into their next-generation software products and professional services. Siemens announced at CES the availability of Teamcenter Digital Reality Viewer, its first Xcelerator application powered by Nvidia’s Omniverse libraries.
New blueprints for developers
Nvidia debuted four new blueprints for developers to use in building Universal Scene Description (OpenUSD)-based Omniverse digital twins for physical AI. The new blueprints are:
- Mega, for creating digital twins of factories or warehouses to test robots before they are used in real-world facilities.
- Autonomous vehicle simulation, so AV developers can perform closed-loop tests using driving data to accelerate their development pipelines.
- Omniverse Spatial Streaming to Apple Vision Pro, which lets developers create applications for immersive streaming of large-scale industrial digital twins to Apple Vision Pro.
- Real-Time Digital Twins for Computer Aided Engineering, a reference workflow built on Nvidia CUDA-X acceleration, physics AI and Omniverse libraries that enables real-time physics visualization.
Raising the bar for consumer GPUs
Nvidia announced the GeForce RTX 50 series of desktop and laptop graphics processing units. The RTX 50 series is powered by Nvidia’s Blackwell architecture and the latest Tensor Cores and RT Cores. Huang said it delivers breakthroughs in AI-driven rendering. “Blackwell, the engine of AI, has arrived for PC gamers, developers and creatives,” he said. “Fusing AI-driven neural rendering and ray tracing, Blackwell is the most significant computer graphics innovation since we introduced programmable shading 25 years ago.”
The pricing of the new systems gave rise to a loud cheer from the crowd. The previous-generation GPU, the RTX 4090, retailed for $1,599. The low end of the 50 series, the RTX 5070, which offers performance comparable to the RTX 4090 (1,000 trillion AI operations per second, or TOPS), is available for the low price of $549. The RTX 5070 Ti (1,400 AI TOPS) is $749, the RTX 5080 (1,800 AI TOPS) sells for $999, and the RTX 5090, which offers a whopping 3,400 AI TOPS, is $1,999.
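Using only the prices and TOPS figures quoted above, a quick price/performance comparison shows why the crowd cheered. The arithmetic is mine, not Nvidia's marketing:

```python
# AI TOPS per dollar for the RTX 50 series desktop GPUs, using the
# figures quoted in the article (RTX 4090 at its $1,599 list price).
gpus = {
    "RTX 4090": (1000, 1599),
    "RTX 5070": (1000, 549),
    "RTX 5070 Ti": (1400, 749),
    "RTX 5080": (1800, 999),
    "RTX 5090": (3400, 1999),
}

for name, (tops, price) in gpus.items():
    print(f"{name}: {tops / price:.2f} AI TOPS per dollar")
```

By this crude measure, the RTX 5070 delivers nearly three times the AI throughput per dollar of the RTX 4090.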
The company also announced a family of laptops where the massive RTX processor has been shrunk down and put into a small form factor. Huang explained that Nvidia used AI to accomplish this, as it generates most of the pixels using Tensor Cores. This means only the required pixels are ray-traced, and AI is used to generate all the other pixels, creating a significantly more energy-efficient system. “The future of computer graphics is neural rendering, which fuses AI with traditional graphics,” Huang explained. Laptop pricing ranges from $1,299 for the RTX 5070 model to $2,899 for the RTX 5090.
Project DIGITS
Huang introduced a small desktop computer system called Project DIGITS powered by Nvidia’s new GB10 Grace Blackwell Superchip. The system is small but powerful. It will provide a petaflop of AI performance with 128 gigabytes of coherent, unified memory. The company said it will enable developers to work with AI models of up to 200 billion parameters at their desks. The system is designed for AI developers, researchers, data scientists and students working with AI workloads. Nvidia envisions key workloads for the new computer, including AI model experimentation and prototyping.
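A back-of-envelope calculation suggests why 200 billion parameters is a plausible ceiling. Assuming 4-bit quantized weights (my assumption; Nvidia did not publish this math), the weights alone occupy about 100 GB, leaving headroom in the unified memory for activations and other runtime state:

```python
# Rough memory footprint of a 200B-parameter model at 4-bit precision.
# The 4-bit assumption is mine, used only to show the order of magnitude.
params = 200e9
bytes_per_param = 0.5  # 4 bits = half a byte per weight
weight_gb = params * bytes_per_param / 1e9
print(f"~{weight_gb:.0f} GB of weights")
```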
Enabling agentic AI
Rev Labaredian, vice president of Omniverse and simulation technology at Nvidia, told analysts in a briefing before Huang’s keynote that the massive shift in computing now occurring represents software 2.0, which is machine learning AI that is “basically software writing software.” To meet this need, Nvidia is introducing new products to enable agentic AI, including the Llama Nemotron family of open large language models. The models can help developers create and deploy AI agents across various applications — including customer support, fraud detection, and product supply chain and inventory management optimization.
Huang explained that the Llama models could be “better fine-tuned for enterprise use,” so Nvidia used its expertise to create the Llama Nemotron suite of open models. There are currently three models: Nano is small and low latency with fast response times for PCs and edge devices, Super is balanced for accuracy and computer efficiency, and Ultra is the highest-accuracy model for data center-scale applications.
Final thoughts
If it’s not clear by now, the AI era has arrived. Many industry watchers believe AI is currently overhyped, but I think the opposite. AI will eventually be embedded into every application, device and system we use. The internet has changed how we work, live and learn, and AI will have the same impact. Huang did an excellent job of explaining the relevance of AI to all of us today and what an AI-infused world will look like. It was a great way to kick off CES 2025.


As agents become connected, the value of every connected application will rise – provided vendors can cooperate to let their AI agents work together.
Agent Sprawl
One might wonder what problem these AI-based agents are trying to solve, since it has not fully manifested yet. I believe generative AI is one of those “game-changing” technologies that will alter almost every aspect of our lives. I predict that, over time, every application we use will have a generative AI interface built into it, much as every app has a search box today. These agents will go from reactive, where we ask them questions, to proactive, where specific agents push us the contextually important information we need to know.

Consider the implications of this. Today, most workers use several applications – anywhere from half a dozen to over 50. As these apps evolve and add agents, we will face “agent sprawl,” where users will have as many virtual agents as they have apps. At re:Invent, I attended a session with the CIO of a major bank, who described how his team is building virtual assistants for its own apps while also using Teams Copilot and Salesforce’s agent. After the session, I asked him what he thinks the future looks like, and he told me he foresees a day when users have a “tapestry” of agents they need to pick and choose from. When I asked what working in that kind of environment would be like, he said, “likely chaos.”

Fragmented Knowledge
The numerous agents cause several problems. The first is that an agent or assistant is only as knowledgeable as the data in its application, which can create fragmented insights. As an example, consider a company with a great website that does a best-in-class job of showcasing a poorly built product. The web analytics and sales tools used before purchase might show high customer satisfaction scores, since they measure pre-purchase satisfaction. Once the customer uses the product, the mood will turn from happy to upset, and the contact center will field calls about refunds and repairs. Asking each app’s generative AI interface about customer sentiment will yield different results.

Also, as the agents shift from reactive to proactive, users will be bombarded with messages from these systems as they try to keep us updated and informed. I expect the apps to have controls, much as they do today, so we can manage the interactions, but most users will keep critical apps on. It would be like a CEO having a team of advisors across every business unit in a company, all whispering in his ear at once.

Interconnecting Agents
This is where the Internet of Agents brings value. By interconnecting, these assistants can share information, leading to less, but more relevant, information. In the scenario outlined above, a product owner or sales leader could be alerted when customer sentiment changes, as the pre-purchase agents communicate with contact center agents to provide a holistic picture. This would enable the company to better understand what happened and take corrective action. It would also enable users to work in the applications they prefer but still access information from others. Today, a sales leader can pull data from CRM, contact center tools, sales automation applications and other systems, but the data must be brought together manually and likely correlated by people to find the insights. With the Internet of Agents, AI could perform analytics across multiple systems.

The value can be described using Metcalfe’s Law, which states that the value of any network is proportional to the square of the number of connected nodes. A network of two nodes has a value of four, whereas a network with 16 nodes has a value of 256, and so on. As agents become connected, the value of every connected application will rise. To accomplish this, vendors will need to agree to a set of standards and follow them – something Pandey and the team at Outshift are working on.

This is where I hope the application providers learn from the sins of the past, as many of them have historically preferred walled gardens. One example is the UC messaging industry: Slack, Teams, Webex, Zoom and the rest all operate in silos, so a worker can’t send a message from Slack to a Teams user. Imagine how useless text messaging would be if one could only send messages to phones from the same manufacturer. The reality is that when systems are open and standards-based, it creates a rising tide, and everyone wins. A small piece of a big pie is worth far more than most of a small pie.

The Agents Are Coming
One final point: Pandey’s definition did include the term “quantum-safe.” I asked why that was included and was told that if one is building the next generation of secure connectivity, security should be future-proofed. Infusing quantum-safe protocols ensures that quantum nodes are added to the infrastructure and communications are secured even against “store now, decrypt later” attacks. This is consistent with conversations I’ve had with other security companies where their primary concern around quantum computing is bad actors stealing data today, then using quantum to decrypt it at a later date. To paraphrase Paul Revere, “The agents are coming, the agents are coming,” and I implore the vendor community to get together and standardize the communications between systems and to ensure they are secured. Adoption will be faster, users will be happier, and the value will be greater. Seems like a no-brainer to me.
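The Metcalfe's Law arithmetic used above to argue for interconnecting agents fits in a couple of lines (taking the constant of proportionality as 1 for simplicity):

```python
# Metcalfe's Law: network value grows with the square of the number
# of connected nodes (proportionality constant set to 1 here).
def network_value(nodes: int) -> int:
    return nodes ** 2

print(network_value(2))   # two connected agents -> 4
print(network_value(16))  # sixteen connected agents -> 256
```

The squaring is the whole argument: connecting 16 agents is not eight times better than connecting two, it is 64 times better.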

‘Tis the season to ponder what to get that someone in your life who has everything. If you haven’t finished your Christmas shopping and have $249 to spend for a piece of technology about four inches wide, three-and-a-half inches high, and 1.3 inches thick, then Nvidia Corp. has the perfect gift.
The tech giant introduced the Jetson Orin Nano Super Developer Kit this week. Though that’s a big name for such a small product, don’t be fooled. The latest innovation from Nvidia packs a big wallop in its little package.
The Jetson Orin Nano Super Developer Kit is a small but mighty artificial intelligence computer that the company says “redefines AI for small edge devices.” And by mighty, I mean the new product delivers up to 67 tera-operations per second (TOPS) of AI performance. That’s a 1.7-times increase over its predecessor, the Jetson Orin Nano.
But if you already bought the original model, which sold for $499 and debuted just 18 months ago, don’t worry. The team at Nvidia isn’t pulling a Grinch move. A free software upgrade for all original Jetson Orin owners turns those devices into the new Super version.
What’s in the box?
The developer kit comprises an 8-gigabyte Jetson Orin Nano module and a reference carrier that accommodates all Orin Nano and Nvidia Orin NX modules. The company says this kit is “the ideal platform for prototyping your next-gen edge-AI product.”
The 8GB module boasts an Ampere architecture graphics processing unit and a six-core Arm central processing unit, which enables multiple concurrent AI application pipelines. The platform runs the Nvidia AI software stack and includes application frameworks for multiple use cases, including robotics, vision AI and sensor processing.
Built for agentic AI
Deepu Talla, Nvidia’s vice president and general manager of robotics and edge computing, briefed industry analysts before the Dec. 17 announcement. He called the new Jetson Orin Nano Super Developer Kit “the most affordable and powerful supercomputer we build.” Talla said the past two years saw generative AI “take the world by storm.” Now, he said, we’re witnessing the birth of agentic AI.
“With agentic AI, most agents are in the digital world. And the same technology now can be applied to the physical world, and that’s what robotics is about,” he said. “We’re taking the Orin Nano Developer Kit and putting a cape on it to make it a superhero.”
And what superpowers will the Jetson Orin Nano Super Developer Kit have? In addition to increasing performance from 40 to 67 TOPS, the new kit will have much more memory bandwidth — from 68 to 102 gigabytes per second, a 50% increase.
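Those published figures are easy to sanity-check with quick arithmetic (mine, not Nvidia's): the TOPS jump is 67/40, or roughly 1.7x, and the move from 68 to 102 GB/s of memory bandwidth works out to a 1.5x, or 50%, increase.

```python
# Back-of-the-envelope check on the announced Jetson Orin Nano numbers.
old_tops, new_tops = 40, 67   # AI performance, TOPS
old_bw, new_bw = 68, 102      # memory bandwidth, GB/s

tops_gain = new_tops / old_tops       # 1.675, i.e. the "1.7x" claim
bw_gain = (new_bw - old_bw) / old_bw  # 0.5, a 50% increase

print(f"TOPS: {tops_gain:.2f}x  bandwidth: +{bw_gain:.0%}")
```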
“This is the moment we’ve been waiting for,” said Talla. Nvidia is increasing performance significantly on the same hardware platform by supercharging the software. “We designed [the original Orin Nano system] to be field upgradeable. As generative AI became popular and we did all the different testing, we can support all the old systems in the field without changing the hardware, just through software updates.”
On the call, Talla mentioned that the total available market for robots, also known as physical AI, is about half the world’s gross domestic product, or about $50 trillion. Is it that big? It’s hard to quantify, but I do believe the opportunity is massive. Robots represent the next frontier in agentic AI because they combine a physical form factor with advanced decision-making capabilities, bridging the gap between virtual intelligence and the real world.
Unlike purely virtual AI systems, robots can interact with their environment, perform tasks and adapt to dynamic situations, making them critical for solving complex real-world problems. Their ability to act autonomously while continuously learning from their surroundings allows them to tackle challenges that are difficult for traditional software and sometimes people.
In fields such as healthcare, logistics, retail and manufacturing, robots are already demonstrating their potential by automating repetitive tasks, improving precision and enhancing efficiency. As advancements in machine learning, computer vision, and natural language processing continue, robots will become more capable of understanding and responding to human needs with nuance. They can assist the elderly, manage warehouses or even conduct surgeries accurately and consistently, surpassing human capabilities.
Additionally, as robots gain greater autonomy, they will increasingly function as agentic AI — intelligent agents capable of making decisions, setting goals and pursuing actions without constant human oversight. This shift will unlock new possibilities in sectors such as exploration, disaster response and personal assistance, transforming robots into valuable partners for human endeavors. The convergence of AI, robotics, and automation is poised to redefine industries and everyday life.
One of the biggest challenges and expenses with robots is training them. Creating all the possible scenarios to test a physical robot can take years. For example, teaching a robot to walk requires stairs, gravel roads, side hills and other terrain. Robots can fall, get damaged, overheat or experience other events that slow training. Nvidia takes a “full stack” approach to physical AI, in which training can be done virtually using synthetic data. When the training is complete, the model is uploaded so the physical robot can perform the tasks.
Planned rejuvenation
Many products that hit the market have been designed with planned obsolescence in mind, whether by design or just due to rapidly evolving technologies and components. Nvidia is doing the opposite. Call it “planned rejuvenation.”
Talla said this is possible because Nvidia designed the Jetson architecture to support faster performance. “We are increasing the frequency of the memory,” he said. “We are increasing the frequency of the GPU. In fact, we are also slightly increasing the frequency of the CPUs. And the power consumption will go up to 25 watts. But the hardware has been designed to support that already.” Jetson runs Nvidia AI software, including Nvidia Isaac for robotics, Nvidia Metropolis for vision AI, and Nvidia Holoscan for sensor processing.
These preconfigured kits are also part of why Nvidia has become the runaway leader in AI. Packaging up all the hardware and software required for a developer to get started significantly reduces development time. Nvidia’s peers offer many of the same building blocks, but the developer must put them together.
The new Jetson Orin Nano Super Developer Kit and software upgrades for owners of the original Jetson Orin Nano Developer Kit are available at nvidia.com.




The National Hockey League and Amazon Web Services Inc. are working together to change how hockey is experienced, leveraging cloud technologies and data-driven insights to enhance production workflows and fan engagement.
At AWS re:Invent last week, representatives from both organizations joined a panel titled “NHL Unlocked: Live cloud production, sports data, and alternate feeds.” The panelists were Julie Souza, global head of sports for AWS; Grant Nodine, senior vice president of technology for the NHL; Brant Berglund, senior director of coaching and GM applications for the NHL; and Andrew Reich, senior industry specialist, BD, for AWS. They discussed their progress across a range of issues.
Souza opened the discussion by emphasizing the importance of collaboration in the partnership. “AWS isn’t just a tech vendor,” she said. “We’re working alongside the NHL to explore what’s possible and deliver real value to fans and the league.”
Souza said their shared commitment to innovation has been central to their progress, including advancements in live cloud production and analytics-driven storytelling.
This sentiment of “partner versus vendor” has been a consistent theme in my discussions with other sports entities. The PGA TOUR, Swimming Australia, the NFL and others have told me the AWS team actively gets involved in helping them consider what’s possible and bringing new ideas to the table.
Building a foundation for innovation
Nodine traced the journey to the league’s initial efforts to transition its video content to the cloud. This foundational step enabled automating processes such as encoding and scheduling, which are now critical to their operations. “You can’t do the exciting stuff,” Nodine noted, “until you’ve built the basics.”
Reich elaborated on the architecture supporting this transformation. Using AWS Elemental MediaConnect, the NHL created a streamlined pipeline for video ingest, storage and distribution. This setup makes nightly game broadcasts efficient and positions the league to experiment with new forms of content delivery.
Making sense of the data
The NHL’s adoption of player- and puck-tracking systems has unlocked unprecedented insights into the game. These systems collect billions of data points during a single night of games.
Berglund emphasized how this data helps deepen understanding for fans. “It’s not just about collecting stats,” he said. “It’s about turning that data into meaningful stories.”
One example is Ice Tilt, a period of play in which one team dominates possession and offensive pressure, pinning its opponents in the defensive zone and generating sustained momentum.
Berglund said he once asked player Jack Hughes how he recognizes momentum shifts during games. Hughes described it as “tilting the ice.” This once informal concept is now quantified and aligns with the NHL’s use of player positioning data to measure territorial momentum, turning the metaphor into a precise, trackable metric.
Reaching new audiences
Alternate broadcasts, such as the NHL Edge DataCast, showcase the league’s ability to tailor content to different audiences. The Big City Greens Classic, which adapted NHL games for a younger, animation-loving demographic, demonstrated the potential for these efforts. Souza noted that these initiatives are helping the NHL reach audiences who might not traditionally watch hockey. “By meeting fans where they are, we can make the game accessible to more people in ways that resonate with them,” Nodine added.
The league also creatively uses analytics, such as face-off probability, which calculates matchups and success rates in real time. This feature not only enriches broadcasts but also enables commentators to explore the nuances of gameplay more deeply.
A shift to live cloud production
In March 2023, the NHL reached a major milestone: It became the first North American league to produce and distribute a game entirely in the cloud.
Nodine recounted how this effort involved routing all camera feeds to the cloud for production by remote teams. The approach also promoted sustainable production practices by significantly reducing carbon emissions. “We’re not turning back,” Nodine said, citing both operational flexibility and environmental benefits as reasons to continue this path.
Reich highlighted how cloud-based workflows enable the league to experiment in ways traditional setups cannot. For example, by centralizing video feeds in the cloud, the NHL can produce alternate broadcasts or deliver content directly to fans in the arena through mobile devices.
The NHL deserves significant credit for using the cloud to produce games, as it was the first league to make the shift. For sports entities, production quality is critical, as that’s how most fans engage with the brand. Before the NHL, most sports organizations were skeptical that the cloud could deliver quality comparable to producing a game on-premises. The NHL’s success has led other leagues, such as the PGA TOUR and the NFL, to produce events in the cloud, but the NHL has the distinction of being first.
What’s next
As the NHL and AWS reflect on their progress, they are also exploring what’s next. Nodine pointed to opportunities in using artificial intelligence to streamline highlight generation and provide real-time insights for broadcasters. By automating some workflows, broadcasters could focus on storytelling, while fans could gain deeper insights into the game’s dynamics.
Alternate broadcasts remain a fertile ground for experimentation. Projects such as Big City Greens and NHL Edge DataCast have shown how targeted content can reach new audiences, and the technology behind these initiatives could inform traditional broadcasts in the future. For example, integrating metrics such as time on ice or Ice Tilt directly into standard broadcasts could provide fans with richer narratives without disrupting the viewing experience.
Souza summarized the approach as follows: “This is about thoughtful progress — identifying what works, refining it and integrating it in ways that enhance the game for everyone.” As the partnership evolves, the focus remains on making hockey more engaging, accessible, and dynamic for a global audience.
Some final thoughts
If you love hockey like I do (I’m Canadian, so I’m mandated to love hockey), you support any efforts to improve the fan experience. What I like most about the collaboration between the NHL and AWS is it helps casual fans better understand the game. It’s been said that AI lets the untrained eye see what the trained eye does, and features that highlight specific nuances can accelerate the learning of a game that can be confusing to non-hard-core fans.
Now, if only the Canucks can hold on until Stanley Cup playoff time.


Networking and complexity go hand in hand, like chocolate and peanut butter. Though this has been the norm, it’s playing havoc with business operations.
A recent ZK Research/Cube Research study found that 93% of organizations state the network is more critical to business operations than two years ago. In the same period, 80% said the network was more complex. Increasing complexity leads to blind spots, unplanned downtime, security breaches and other issues that affect businesses.
Extreme Networks Inc. today announced its Extreme Platform ONE connectivity platform to combat this. The back-end data lake combines data from networks, security tools and third parties such as Intel Corp., Microsoft Security and ServiceNow Inc. The platform is built on an artificial intelligence core to deliver conversational AI and autonomous networking. The goal is to automate wholly or at least partially many of the complex tasks associated with operating and securing a network.
The platform is flexible enough to serve multiple audiences. It includes a composable workspace that enables cross-team workflows. Although network engineers will most likely work with Extreme, the company has added security functionality and capabilities for that audience. Extreme also offers workflows, services and data for procurement and financing teams.
The latter audience is often an afterthought when it comes to network infrastructure. As a former information technology executive, I am all too familiar with the pains of managing subscriptions, service contracts and licenses. This is often done on spreadsheets, which is time-consuming, error-prone and can frequently lead to overspending.
Extreme has built a dashboard that shows all relevant financial information, including contracts and renewal dates. This can help the customer better understand current and future trends and plan for upgrades.
For the network practitioner, the AI capabilities are targeted at troubleshooting the complicated problems networks are filled with. Wi-Fi problems are the hardest to solve, as there are so many variables. On a wired network, virtual LAN misconfigurations, duplex mismatches and other settings can often cause unexpected performance issues.
Finding these can take days, weeks, or even months, as replicating them can be challenging. AI sees all data across the network and can connect the dots that people can’t.
There is also an AI Policy Assistant that administrators can use to create, view, update and remove application policies. Policy administration is necessary but time-consuming and error-prone. Setting up policies initially is straightforward but keeping them up to date as people and devices move around the network or as applications change can be difficult, particularly in dynamic environments, which most companies are today because of the internet of things, cloud and work-from-home.
The rollout of Extreme Platform ONE is the culmination of many acquisitions and years of work. Today’s Extreme is a rollup of many network vendors, including Enterasys, Brocade, Avaya Networking and Motorola/Zebra. The purchase of Aerohive brought the company the cloud back end that is being leveraged in the current platform launch. Along the way, the company rationalized its product set and implemented “Universal Hardware,” which lets customers choose between different operating systems.
Extreme Platform ONE is well-timed with the current AI wave. The concept of the network platform has been bandied about for years but has yet to catch on.
Last week, I talked to Extreme Chief Technology Officer Nabil Bukhari (pictured) about the platform and why now. He told me this is the direction the company has been moving in since he took the role in 2020. AI makes a platform’s value proposition compelling today, as it requires a single set of data to deliver the best insights.
Companies that run one vendor for the WAN, another for Wi-Fi and another for the campus network will have three sets of data, likely siloed, and three AI engines, leading to fragmented insights. For most companies, AI for operations is the way forward, and that will push more companies toward a platform approach.
Other vendors have followed the platform path. What I like about Extreme’s approach is that it uses AI as more than a troubleshooting tool. Though that’s a core function of the platform, it addresses issues at every step of the network lifecycle: planning, deployment, operations, optimization, security and renewals.
It has taken Extreme years to combine multiple products and unify the data set, but that work is done, and customers should see the benefits with the new Platform ONE.


Amazon Web Services Inc. Chief Executive Matt Garman delivered a three-hour keynote at the company’s annual re:Invent conference to an audience of 60,000 attendees in Las Vegas and another 400,000 watching online, and they heard a lot of news from the new leader, who became CEO earlier this year after joining the company in 2006.
The conference, dedicated to builders and developers, offered 1,900 in-person sessions and featured 3,500 speakers. Many of the sessions were led by customers, partners and AWS experts. In his keynote, Garman (pictured) announced a litany of advancements designed to make developers’ work easier and more productive.
Here are nine key innovations he shared:
AWS will play a big role in AI
Garman kicked off his presentation by announcing the general availability of the company’s latest Trainium chip — Trainium2 — along with EC2 Trn2 instances. He described these as the most powerful instances for generative artificial intelligence, thanks to custom processors built in-house by AWS.
He said Trainium2 delivers 30% to 40% better price performance than current graphics processing unit-powered instances. “These are purpose-built for the demanding workloads of cutting-edge gen AI training and inference,” Garman said. Trainium2 gives customers “more choices as they think about the perfect instance for the workload they’re working on.”
Beta tests showed “impressive early results,” according to Garman. He said the organizations that did the testing — Adobe Inc., Databricks Inc. and Qualcomm Inc. — all expect the new chips and instances will deliver better results and a lower total cost of ownership. He said some customers expect to save 30% to 40% over the cost of alternatives. “Qualcomm will use the new chips to deliver AI systems that can train in the cloud and then deploy at the edge,” he said.
When the announcement was made, many media outlets painted Trn2 as Amazon going to war with Nvidia Corp. I asked Garman about this in the analyst Q&A, and he emphatically said that was not the case. The goal with its own silicon is to make the overall AI silicon pie bigger, so everyone wins. This is how Amazon approaches the processor industry, and there is no reason to assume it will change how it handles partners; those headlines were clickbait. More Nvidia workloads are run in the AWS cloud, and I don’t see that changing.
New servers to accommodate huge models
Today’s models have become very big and very fast, with hundreds of billions to trillions of parameters. That makes them too big to fit on a single server. To address that, AWS announced EC2 Trainium2 UltraServers. These connect four Trainium2 instances — 64 Trainium2 chips — all interconnected by high-speed, low-latency Neuronlink connectivity.
This gives customers a single ultranode with over 83 petaflops of compute power from a single compute node. Garman said this will have a “massive impact on latency and performance.” It enables very large models to be loaded into a single node to deliver much better latency and performance without having to break it up across multiple nodes. Garman said Trainium3 chips will be available in 2025 to keep up with gen AI’s evolving needs and provide the landscape customers need for their inferences.
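The per-unit math behind those numbers (a rough check using only the figures quoted above) looks like this:

```python
# Rough check on the UltraServer figures: four Trn2 instances with 64
# chips total implies 16 chips per instance, and "over 83 petaflops"
# spread across 64 chips is roughly 1.3 petaflops per chip.
total_chips, instances = 64, 4
chips_per_instance = total_chips // instances   # 16
petaflops_per_chip = 83 / total_chips           # ~1.3
print(chips_per_instance, round(petaflops_per_chip, 2))  # prints 16 1.3
```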
Leveraging Nvidia’s Blackwell architecture
Garman said AWS is the easiest, most cost-effective way for customers to use Nvidia’s Blackwell architecture. AWS announced a new P6 family of instances based on Blackwell. Coming in early 2025, the new instances featuring Nvidia’s latest GPUs will deliver up to 2.5 times faster compute than the current generation of GPUs.
AWS’s collaboration with Nvidia has led to significant advancements in running generative AI workloads. Bedrock gives customers model choice: It’s not one model to rule them all but a single source for a wide range of models, including AWS’ newly announced Nova models. There won’t be a divide between applications and gen AI applications. Gen AI will be part of every application, using inference to enhance, build or change an application.
Garman said Bedrock resonates with customers because it provides everything they need to integrate gen AI into production applications, not just proofs of concept. He said customers are starting to see real impact from this. Genentech Inc., a leading biotech and pharmaceutical company, wanted to accelerate drug discovery and development by using scientific data and AI to rapidly identify and target new medicines and biomarkers for their trials. Finding all this data required scientists to scour many external and internal sources.
Using Bedrock, Genentech devised a gen AI system so scientists can ask detailed questions about the data. The system can identify the appropriate databases and papers from a huge library and synthesize the insights and data sources.
It summarizes where it gets the information and cites the sources, which is incredibly important so scientists can do their work. It used to take Genentech scientists many weeks to do one of these lookups. Now, it can be done in minutes.
According to Garman, Genentech expects to automate five years of manual efforts and deliver new medications more quickly. “Leading ISVs, like Salesforce, SAP, and Workday, are integrating Bedrock deep into their customer experiences to deliver GenAI applications,” he said.
Bedrock model distillation simplifies a complex process
Garman said AWS is making it easier for companies to take a large, highly capable frontier model and send it all their prompts for the questions they want to ask. “Then you take all of the data and the answers that come out of that, and you use that output and your questions to train a smaller model to be an expert at one particular thing,” he explained. “So, you get a smaller, faster model that knows the right way to answer one particular set of questions. This works quite well to deliver an expert model but requires machine learning involvement. You have to manage all of the data workflows and training data. You have to tune model parameters and think about model weights. It’s pretty challenging. That’s where model distillation in Bedrock comes into play.”
“Distilled models can run 500% faster and 75% more cheaply than the model from which they were distilled. This is a massive difference, and Bedrock does it for you,” he said. This difference in cost can swing the ROI of a gen AI application from too expensive to roll out in production to very valuable. You send Bedrock sample prompts from your application, and it does all of the work.
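The workflow Garman describes, stripped to its essence, is: capture a large model's answers to your application's actual prompts, then train a small model on only those pairs. A toy sketch, purely illustrative (plain functions stand in for the models; this is not the Bedrock API):

```python
# Toy illustration of model distillation: a large "teacher" answers the
# application's real prompts, and a small "student" is trained only on
# those (prompt, answer) pairs, becoming a fast expert on one task.

def teacher(prompt: str) -> str:
    # Stand-in for a large frontier model: broad, slow, expensive.
    knowledge = {
        "renew policy": "Visit your account page and select Renew.",
        "file claim": "Submit the claim form with photos within 30 days.",
    }
    return knowledge.get(prompt, "I can answer almost anything, slowly.")

def distill(prompts: list[str]) -> dict[str, str]:
    # "Training" here is just capturing teacher outputs for the narrow
    # set of questions the application actually sends.
    return {p: teacher(p) for p in prompts}

student = distill(["renew policy", "file claim"])

def student_answer(prompt: str) -> str:
    # Small, fast and cheap, but an expert only on what it was distilled on.
    return student.get(prompt, "Out of scope for this distilled model.")

print(student_answer("file claim"))
```

In the real service, the student is a smaller foundation model fine-tuned on the teacher's outputs rather than a lookup table, but the data flow is the same.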
But getting the right model is just the first step. “The real value in Generative AI applications is when you bring your enterprise data together with the smart model. That’s when you get really differentiated and interesting results that matter to your customers. Your data and your IP really make the difference,” Garman said.
AWS has expanded Bedrock’s support for a wide range of formats and added new vector databases, such as OpenSearch and Pinecone. Bedrock enables users to get the right model, accommodates an organization’s enterprise data, and sets boundaries for what applications can do and what the responses look like.
Enabling customers to deploy responsible AI — with guardrails
Bedrock Guardrails make it easy to define the safety of applications and implement responsible AI checks. “These are guides to your models,” said Garman. “You only want your gen AI applications to talk about the relevant topics. Let’s say, for instance, you have an insurance application, and customers come and ask about various insurance products you have. You’re happy to have it answer questions about policy, but you don’t want it to answer questions about politics or give healthcare advice, right? You want these guardrails saying, ‘I only want you to answer questions in this area.’”
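Garman's insurance example boils down to an allow/deny topic check in front of the model. The sketch below is purely illustrative: it uses keyword matching where Bedrock Guardrails would use model-based classifiers, and the topic lists are my own invention.

```python
import re

# Hypothetical topic lists for an insurance assistant (illustrative only).
ALLOWED = {"policy", "premium", "deductible", "coverage", "claim"}
DENIED = {"election", "politics", "diagnosis", "medication"}

def guardrail(question: str) -> str:
    # Tokenize on letters only, so punctuation doesn't hide a match.
    words = set(re.findall(r"[a-z]+", question.lower()))
    if words & DENIED:
        return "REFUSE"   # explicitly off-limits topic
    if words & ALLOWED:
        return "ALLOW"    # on-topic for the insurance assistant
    return "REFUSE"       # default-deny anything unrecognized

assert guardrail("What does my policy coverage include?") == "ALLOW"
assert guardrail("Which medication should I take?") == "REFUSE"
```

The default-deny fallback reflects the point Garman makes: in production, a guardrail should only answer questions "in this area" rather than trying to handle everything.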
This is a huge capability for developing production applications, Garman said. “This is why Bedrock is so popular,” he explained. “Last year, lots of companies were building POCs for gen AI applications, and capabilities like Guardrails were less critical. It was OK to have models ‘do cool things.’ But when you integrate gen AI deeply into your enterprise applications, you must have many of these capabilities as you move to production applications.”
Making it easier for developers to develop
Garman said AWS wants to help developers innovate and free them from undifferentiated heavy lifting so they can focus on the creative things that “make what you’re building unique.” Gen AI is a huge accelerator of this capability. It allows developers to focus on those pieces and push off some of that undifferentiated heavy lifting. Q Developer, which debuted in 2023, is the developers’ “AWS expert.” It’s the “most capable gen AI assistant for software development,” he said.
Q Developer helped Datapel Systems “achieve up to 70% efficiency improvements. They reduced the time needed to deploy new features, completed tasks faster, and minimized repetitive actions,” Garman said.
But it’s about more than efficiency. The Financial Industry Regulatory Authority, or FINRA, has seen a 20% improvement in code quality and integrity by using Q Developer to help it create better-performing and more secure software. Amazon Q has the “highest reported acceptance rate of any multi-line coding assistant in the market,” said Garman.
However, a coding assistant is just a tiny part of what most developers need. AWS research shows that developers spend just one hour a day coding. They spend the rest of the time on other end-to-end development tasks.
Three new autonomous agents for Amazon Q
According to Garman, autonomous agents for generating user tests, documentation and code reviews are now generally available. The first enables Amazon Q to generate end-to-end user tests automatically. It leverages advanced agents and knowledge of the entire project to provide developers with full test coverage.
The second can automatically create accurate documentation. “It doesn’t just do this for new code,” Garman said. “The Q agent can apply to legacy code as well. So, if a code base wasn’t perfectly documented, Q can understand what that code is doing.”
The third new Q agent can perform automatic code reviews. It will “scan for vulnerabilities, flag suspicious coding patterns, and even identify potential open-source package risks” that might be present, said Garman. It will identify where it views a deployment risk and suggest mitigations to make deployment safer.
“We think these agents can materially reduce a lot of the time spent on really important, but maybe undifferentiated tasks and allow developers to spend more time on value-added activities,” he said.
Garman also announced a new “deep integration between Q Developer and GitLab.” Q Developer functionality is now deeply embedded in GitLab’s platform. “This will help power many of the popular aspects of their Duo Assistant,” he said. Teams can access Q Developer capabilities, which will be natively available in the GitLab workflows. Garman said more will be added over time.
Mainframe modernization
Another new Q Developer capability is performing mainframe modernization, which Garman called “by far the most difficult to migrate to the cloud.” Q Transformation for Mainframe offers several agents that can help organizations streamline this complex and often overwhelming workflow. “It can do code analysis, planning, and refactor applications,” he said. “Most mainframe code is not very well-documented. People have millions of lines of COBOL code, and they have no idea what it does. Q can take that legacy code and build real-time documentation that lets you know what it does. It helps let you know which applications you want to modernize.”
Garman said it’s not yet possible to make mainframe migration a “one-click process,” but with Q, instead of a multiyear effort, it can be a “multiquarter process.”
Integrated analytics
Garman introduced the next generation of Amazon SageMaker, which he called “the center for all your data, analytics and AI needs.” He said AWS is expanding SageMaker by adding “the most comprehensive set of data, analytics, and AI tools.” SageMaker scales up analytics and now provides “everything you need for fast analytics, data processing, search, data prep, AI model development and generative AI” for a single view of your enterprise data.
He also introduced SageMaker Unified Studio, “a single data and AI development environment that allows you to access all the data in your organization and act on it with the best tool for the job.” Garman said SageMaker Unified Studio, which is currently in preview, “consolidates the functionality that analysts and data scientists use across a wide range of standalone studios in AWS today.” It offers standalone query editors and a variety of visual tools, such as EMR, Glue, Redshift, Bedrock and all the existing SageMaker Studio capabilities.
Even with all these new and upgraded products, solutions and capabilities, Garman promised more to come.


Veeam Software Group GmbH, the market share leader in data resilience, today announced a new $2 billion investment from several top investment firms.
The Seattle-based company said its valuation now stands at $15 billion, about the same as the valuations of Commvault Systems Inc. and Rubrik Inc. combined. The oversubscribed round was led by TPG, with participation from Temasek, Neuberger Berman Capital Solutions and others. Morgan Stanley managed the round.
Recently, I had an in-depth conversation with Veeam Chief Executive Officer Anand Eswaran and Chief Financial Officer Dustin Driggs about what enabled Veeam to reach this point in its evolution and, more importantly, where the company is going from here.
What the funding will enable
“We have huge ambitions of growth and profitability,” said Eswaran. “Having extremely well-capitalized investors will help us if we want to make some big moves. We can already make small, medium, and large moves ourselves because of the balance sheet and how profitable we are. But if we want to do something Earth-shattering, we have the investors who will be a key part of this process going forward with us.”
Previously, company insiders owned 100% of Veeam. This round brings in diversified investors that “will be with us for the duration of the journey,” according to Eswaran.
He called the level of investment “great validation” because the firms conducted a “massive independent analysis” of the company before investing. Veeam’s financial results and market share growth, which have been steadily upward, demonstrate why the investors were eager to get on board. Eswaran cited four key reasons for Veeam’s growth and its attractiveness to outside investors:
- “It starts with our best-in-class product, the foundation of our No. 1 market share. No. 1 in scale, growth and profit.”
- “The strength of our ecosystem. 34,000-plus partners and the global scale and reach of more than 550,000 customers, including 77% of the Fortune 500, in 150-plus countries.”
- “Our balance of scale, growth and profitability is unique, not just in our category but across the software industry.”
- “We have a huge TAM [total addressable market]. But at the end of the day, I bet my life on the people I work with. The experience they bring to the table makes a huge difference.”
These points concur with conversations I’ve had with customers, partners, resellers and the investment community. Backup and recovery is a well-established market that has historically been dominated by legacy vendors, such as Dell Technologies Inc. and Veritas Technologies LLC, that brought little innovation, leaving the door open for a company such as Veeam to step in and take share.
The company was founded in 2006 and experienced slow and steady growth. It was wholly acquired by Insight Partners for $5 billion in 2020. The next year, Eswaran joined Veeam as its CEO after successful tenures at RingCentral Inc. and Microsoft Corp. That coincided with Veeam adding several new products, including support for Office 365, AWS, Azure and Kubernetes, the last through the acquisition of Kasten. Since then, the company has not looked back, and about a year ago it passed Dell to become top dog in backup and recovery, according to IDC, leading to this massive round of funding and high valuation.
Focus on ARR growth — and the enterprise
The Veeam leaders said they expect to finish 2024 with more than $1.7 billion in annualized recurring revenue, 29% EBITDA, rapidly expanding enterprise sales and 129% enterprise subscription net dollar retention.
“Historically, we’ve focused on the mid-market, but over the last several years, the enterprise focus has been paying off, with more than half of our revenues coming in from the enterprise,” Eswaran said. “We have 2,200-plus customers spending more than $100,000 in ARR with us. We have over 80 customers spending a million dollars or more in ARR.”
CFO Driggs said the company's rapid growth has been done economically. "We're not incurring additional debt to fuel this growth," he said. "We also generate significant free cash flow for the business. We're funding the innovation that we need to continue to grow organically, off of our balance sheet, off of the free cash flow we're generating. We have a super-healthy business model relative to the comps we see."
Eswaran said it takes a different approach to succeed with enterprise customers than in the midmarket. “Companies fail because they try the same approach across their go-to-market for both ends, all customer segments, and that’s a failing proposition,” he said. “We’ve been very deliberate about preserving the strength and solidifying SMB and mid-market, as well as expanding and capturing more share now in enterprise and larger enterprise.”
Veeam has added more than 8,000 new customers in the last two years, according to Eswaran. The trend has been for installed-base customers to purchase multiple products from Veeam. “This multi-product go-to-market portion will be a very key part of how we land and expand. A large part of our revenues will come from expansion” with existing customers.
The importance of data resilience
As Veeam has evolved and expanded, the company has focused on providing its growing and diverse customer base with solutions that enable data resilience. “That’s what we stand for,” said Eswaran.
“For effective data resilience for every company, you need to think about it across this entire lifecycle, starting with ensuring you back up data correctly,” he said. “Then, you can recover instantly. Data is portable across technologies, platforms, everything you need to do on security, well beyond multifactor authentication and end-to-end encryption, and then the very specific use cases for AI, for data intelligence, which is critical. So, all this coming together will create the ultimate resilience posture for companies. And that’s why the entire company is grounded on our purpose, safeguarding the digital world with exceptional resilience and intelligence.”
BaaS is a major growth area
Historically, customers have used Veeam as a service offered by managed and cloud service providers. Now, Veeam is focusing on delivering its own backup-as-a-service offering. "This is going to be the first full year of a first-party BaaS service with the new Veeam Data Cloud," Eswaran said. "It will create the next wave of growth for us."
Eswaran said the company is bullish on the capabilities of the Veeam Data Cloud. “With just one new workload — Azure, and the entire momentum around Microsoft 365 — we’re going to finish 2024 at $50 million in ARR from Veeam Data Cloud and BaaS and have set ambitious goals for 2025,” he said. “You can expect that every one of the workloads we protect will be offered on Veeam Data Cloud. In 2024, it was just two workloads, but we expect to exceed 10 workloads by the end of 2025, and then it will snowball and amplify and accelerate even more.”
With all Veeam has accomplished — and its potential for future growth — Eswaran’s pride in the organization and its people is crystal-clear. “When we can work with cities such as New Orleans and Fort Lauderdale that have been breached and get service back to the citizens quickly, those are the things that make this feel like a purpose, which our employees have really rallied around, of creating resilience in a digital-first world,” he said.
Financially, next on tap for Veeam is an initial public offering of stock. Although no timetable has been set, the company would do well to flesh out its artificial intelligence story before it goes public.
I’ve asked Veeam leadership, Eswaran included, about this in the past, and they’ve all echoed the same sentiment. Veeam holds massive amounts of customer data, and the company should be able to use AI to see what the naked eye can’t. This could be particularly valuable in the world of cyber, where, through the use of AI, Veeam could find malware that has yet to be discovered, or flag anomalous data patterns that could indicate unauthorized access or even malicious insider activity.
I’ve heard it said that data is the new gold in the AI era. If that’s true, and most industry watchers would agree, then the ability to protect, back up and recover that gold is equally valuable. Proof of that is the massive infusion of funding Veeam has received from tier-one investment shops.

