There is a growing tension between opposing forces that must be resolved if customers are to succeed with artificial intelligence.
My research shows that more than 90% of organizations believe the network is more important to business operations than it was two years ago. At the same time, almost as many believe it has become more complex. These opposing forces of importance and complexity need to be resolved if companies are to attain the return on investment they seek with AI.
This week at Cisco Partner Summit, the company’s annual reseller event, Cisco Systems Inc. unveiled a new digital platform that gives information technology teams a tool, built on the unification of the company’s data, to monitor technology, run system checks and fix issues before they escalate. Built with AI, Cisco IQ combines automation, analytics and Cisco’s own technical insights into a single dashboard.
The reinvention of Cisco Customer Experience, which is Cisco’s support and services organization, is something Liz Centoni, a Cisco executive vice president and chief customer experience officer, has been working on since she became leader of the group about 18 months ago. What’s interesting about Centoni is that she has a product background rather than a services one, but that helped in transforming the team.
At Partner Summit I asked Centoni why having come from product was an advantage. “Cisco is a product company and CX is here to support the technology,” she said. “The goal of Cisco IQ is to fundamentally change the nature of supporting and servicing customers by proactively addressing problems before they emerge.” Like much of the industry, Cisco’s support model has traditionally focused on fixing problems after they occur. Though this reactive motion has been the norm, it keeps engineers in firefighting mode.
Because of its massive footprint, Cisco has a tremendous amount of infrastructure data – perhaps more than any other vendor. During her keynote, Centoni explained how agentic AI is used to change the service model.
“CX is the sweet spot for agentic because it gives us the opportunity to change the nature of how we interact with our customers,” she said. “We become trusted advisers, not just service requestors or case processors as our teams have complete context.”
Historically, she added, “we solved problems by throwing more people into the mix, but this is exactly what an agentic system was built for. It’s continuously learning, predicting and understanding the whole stack.” It’s important to note the “we” Centoni referred to was inclusive of the more than a half-million partners Cisco has, as many of them rely on Cisco CX as part of their services.
Cisco IQ combines several key capabilities. It allows IT teams to run on-demand assessments for security, configurations and compliance, but also emerging areas such as quantum readiness and regulatory checks. The assessments present potential risks or misconfigurations, along with clear guidance on how to address these issues. Beyond assessments, Cisco IQ provides visibility into an organization’s entire asset inventory. For example, it shows device health, software versions and lifecycle timelines.
All of this is enabled by AI agents that analyze, diagnose and resolve problems. Cisco’s research found that 93% of its customers believe agentic AI will create more personalized, proactive and predictive experiences. That expectation aligns with Cisco’s own vision, where every interaction feels tailored to the customer’s unique needs.
Cisco IQ is built on a series of purpose-built agents that work together to improve service. One reads documents and creates a knowledge base; others diagnose devices, retrieve information and handle remediation. These agents work together to provide solutions through the Cisco IQ interface. Cisco’s goal is to build hundreds of these agents that talk to one another and orchestrate the work for its customers.
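Cisco hasn’t published Cisco IQ’s internals or an API, so the following is a purely hypothetical Python sketch of the multi-agent division of labor described above; every class, method and device name is invented for illustration and reflects nothing about the actual product.

```python
# Purely hypothetical sketch -- none of these names come from Cisco IQ. It only
# illustrates the division of labor described above: a knowledge agent, a
# diagnostics agent and a remediation agent feeding one orchestrator that
# returns a single answer through the platform's interface.
from dataclasses import dataclass


@dataclass
class Finding:
    device: str
    issue: str
    recommended_fix: str


class KnowledgeAgent:
    """Builds and queries a knowledge base derived from technical documents (invented)."""

    def lookup(self, issue: str) -> str:
        return f"KB guidance for: {issue}"


class DiagnosticsAgent:
    """Runs health checks against a device inventory (invented)."""

    def diagnose(self, devices: list[str]) -> list[Finding]:
        return [
            Finding(d, "outdated software image", "upgrade to the recommended release")
            for d in devices
            if d.endswith("-edge")  # toy rule standing in for real telemetry checks
        ]


class RemediationAgent:
    """Turns a finding plus knowledge-base context into an action plan (invented)."""

    def plan(self, finding: Finding, kb_context: str) -> str:
        return f"{finding.device}: {finding.recommended_fix} ({kb_context})"


def orchestrate(devices: list[str]) -> list[str]:
    """One pass: diagnose, enrich with knowledge, propose remediation."""
    kb, diag, fix = KnowledgeAgent(), DiagnosticsAgent(), RemediationAgent()
    return [fix.plan(f, kb.lookup(f.issue)) for f in diag.diagnose(devices)]


if __name__ == "__main__":
    for action in orchestrate(["nyc-core", "sjc-edge", "lon-edge"]):
        print(action)
```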
Centoni shared an example of how Cisco IQ can read and interpret complex technical documents, of which Cisco has many, and turn that information into automated system checks. During a demo, Cisco IQ performed a security assessment that showed how many devices were affected and where the issues were. From there, IT teams could click to see more details, including AI-generated summaries that explained the problem in plain language. The same assessment could be repeated to confirm that all the issues were resolved. A process that once required people to read long documents and cross-check configurations was mostly automated.
Organizations have several flexible options when it comes to deploying Cisco IQ, which will roll out in the second half of FY2026. It can be deployed as a software as a service platform, hosted and maintained by Cisco. It can be installed on-premises inside a company’s own data center but still tethered to Cisco’s cloud. In highly secure environments, Cisco IQ can run offline (air-gapped), without external network connections.
Centoni noted that Cisco IQ is part of a broader effort to simplify and unify CX across all of Cisco’s service models. As part of the rollout, Cisco is consolidating its services into two offerings: Cisco Support, with standard, enhanced and signature tiers, and Cisco Professional Services, available as either a subscription or one-time engagements.
During my discussion, I asked Centoni why this was announced at Partner Summit versus Cisco Live, which is targeted at users. She explained that partners are key to how Cisco plans to deliver Cisco IQ. Partners can support their customers no matter how their systems are set up and at every stage, from planning and deployment to ongoing management. The platform gives partners access to the same automation and intelligence tools Cisco uses internally.
In the next few quarters, Cisco will trial Cisco IQ with a select set of partners and then roll it out broadly. An interesting part of the process, and a test for Cisco IQ, is that the company is not asking its partners, or even its own teams, to follow a fixed set of steps to get it up and running. Those steps will be dynamic, with the goal of meeting customers where they are and aligning with their intent. Cisco IQ uses generative AI and agentic AI to provide the right instructions and the right information to customers and partners.
Centoni wrapped up her keynote by talking about the evolution of Cisco CX. “This is not just repackaging of what we already have,” she said. “We’re delivering real-time, passive insights, comprehensive infrastructure assessments and proximity troubleshooting powered by AI, which enables us to deliver what customers want: resiliency, simplicity and faster time to value.”
Cisco IQ represents a new approach in how IT delivers value in an AI-driven era. By reducing day-to-day friction and giving organizations the tools to act sooner, they can spend more time focusing on innovation and resilience rather than firefighting.
Even in auto racing, it’s all about the data today. Amazon Web Services Inc. is a global partner of Formula One and has been the official cloud and machine learning provider for the league since 2018. The dynamic, longstanding partnership comes down to three core pillars: transforming data into racing intelligence, fan experience enhancement and technical transformation.
These three pillars are built on a foundation of data. F1 is, by far, the most data-intensive sports environment in the world. Each car is outfitted with more than 300 sensors and generates more than 1 million data points per second during a race. Each of the 10 race teams receives the full data only for its own two cars, while Formula One can see a narrow band of data across all 20 cars.
F1 analytics built on a foundation of data
Simultaneously, environmental data is being collected as well. More than 20 trackside weather stations continuously monitor temperature, humidity, wind speed and direction. Tire degradation is monitored through surface temperature sensors, pressure monitoring systems and track telemetry. Additionally, high-precision GPS tracks cars to within millimeters to provide real-time position updates, speed through corners, racing line analysis and overtaking probability.
During a typical race weekend, AWS processes more than 5 billion data points across all the cars and other systems. This equates to about 500 terabytes of data, which is transmitted over dual 10-gigabit-per-second fiber lines from the Event Technical Center at the track to the Media & Technical Center at Biggin Hill in the south of England. From that location, live video and data is sent over two dedicated AWS Direct Connect links into the AWS cloud.
It’s this massive scale of data that enables AWS to deliver insights and innovation to F1, transforming how teams compete and enabling fans to experience the sport in different ways. One might think that, given the criticality of F1 data, it would be faster and lower-cost to stand up a private cloud. At the recent F1 U.S. Grand Prix in Austin, I met with Ceileidh Siegel, global head of digital innovation for AWS Industries, and she explained that the latency is mere milliseconds and that leveraging the cloud removes much of the complexity of continually standing up and tearing down private clouds at each race.
Machine learning models are trained before every race and season on Amazon SageMaker and stored in an S3 bucket. Live streaming data is ingested via Amazon API Gateway and orchestrated using AWS Lambda, which stores results in Amazon DynamoDB, writes logs to Amazon CloudWatch and sends results back to API Gateway. After processing on Amazon Graviton, all this data is sent back to F1’s Media & Technology Centre and made available to the race teams and worldwide broadcast in under a second.
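AWS and F1 haven’t published this code, but the flow described above, with API Gateway triggering Lambda, a SageMaker-hosted model scoring the telemetry and results landing in DynamoDB while logs flow to CloudWatch, maps to a familiar serverless pattern. The sketch below is a minimal, hypothetical version of such a handler; the endpoint and table names are placeholders.

```python
# Hypothetical Lambda handler for the pipeline described above: telemetry arrives via
# API Gateway, is scored against a SageMaker-hosted model, and the result is written
# to DynamoDB. Anything printed lands in CloudWatch Logs automatically. The endpoint
# and table names are placeholders, not F1's real resources.
import json

import boto3

sagemaker = boto3.client("sagemaker-runtime")
table = boto3.resource("dynamodb").Table("f1-insights-results")  # placeholder


def handler(event, context):
    telemetry = json.loads(event["body"])  # e.g. {"car": 44, "lap": 12, "speed": 301}

    # Score live telemetry against the pre-trained model (placeholder endpoint name).
    response = sagemaker.invoke_endpoint(
        EndpointName="f1-insight-model",
        ContentType="application/json",
        Body=json.dumps(telemetry),
    )
    insight = response["Body"].read().decode("utf-8")

    # Persist the result keyed by car and lap; stored as a JSON string for simplicity.
    table.put_item(
        Item={"car": telemetry["car"], "lap": telemetry["lap"], "insight": insight}
    )
    print(f"scored car {telemetry['car']} lap {telemetry['lap']}")  # -> CloudWatch Logs

    # Hand the insight back to API Gateway for the broadcast feed and race teams.
    return {"statusCode": 200, "body": insight}
```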
F1 Insights turn data into actions
For the race teams, AWS and F1 developed a series of data points called “F1 Insights” that can be used to plan race strategy but then also adapt during the race, which is why the millisecond latency matters. F1 Insights was launched in 2018 with three data points but now has 20. Siegel described F1 Insights as a “portfolio of digital products that are built on data to serve different needs.”
As an example, Close to the Wall can measure how close a driver comes to a wall on a turn within a millimeter of accuracy. Pit Strategy Battle can help teams understand if they may get passed on a pit stop. Others include Car Analysis, Braking Performance, Exit Speed and Projected Knockout Time. Some of these data points are used to enhance the broadcast, while others are only race team- or league-facing.
Track Pulse improves storytelling for the broadcast
For the broadcasters, AWS developed “Track Pulse,” which offers data to enhance storytelling and support commentators. “Track Pulse literally and figuratively uses data to look around corners and build a story that helps commentators do their jobs better with higher accuracy,” Siegel explained. “It also provides options for stories they think will be unfolding as the race goes on. It also quickly creates the graphics that will pop up on screen to tell the story.”
Track Pulse represents a fundamental shift in how the story of Formula 1 is told by turning complex data into compelling narratives that resonate with existing fans as well as new ones. The results bear that out, as F1 has seen a 47% increase in viewer understanding with real-time engagement across 200-plus markets.
Insights are also for the IT pro
So much of the focus on AI in sports has been on fans and teams, but AWS is also addressing the needs of the information technology pro. F1 has implemented a generative AI-based solution using Amazon Bedrock, with Agents and Knowledge Bases, where IT staff can submit queries through a chat interface to an AI-powered virtual assistant. The AI assistant provides instant, relevant responses, speeding up the decision-making process. For particularly complex issues, there’s still the option to escalate to human experts.
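F1’s exact implementation isn’t public, but the pattern described, a chat front end calling an Amazon Bedrock agent that can consult its knowledge bases, typically boils down to a call like the boto3 sketch below; the agent ID, alias ID and sample question are placeholders.

```python
# Minimal sketch of the chat-assistant pattern described above: a question from an
# IT engineer is sent to an Amazon Bedrock agent (which can consult its knowledge
# bases), and the streamed answer is assembled into plain text. The agent ID, alias
# ID and sample question are placeholders.
import uuid

import boto3

client = boto3.client("bedrock-agent-runtime")


def ask_assistant(question: str, session_id: str | None = None) -> str:
    response = client.invoke_agent(
        agentId="AGENT_ID_PLACEHOLDER",
        agentAliasId="ALIAS_ID_PLACEHOLDER",
        sessionId=session_id or str(uuid.uuid4()),
        inputText=question,
    )
    # The agent's reply comes back as a stream of completion chunks.
    answer = ""
    for event in response["completion"]:
        chunk = event.get("chunk")
        if chunk:
            answer += chunk["bytes"].decode("utf-8")
    return answer


if __name__ == "__main__":
    print(ask_assistant("Which regulation covers the tire pressure check procedure?"))
```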
The impact has been dramatic, resulting in an 86% reduction in issue resolution time. This means faster, more informed decision-making for race control and more consistent application of technical regulations. F1 is exploring ways to expand this technology to other areas of operations, enhance fan engagement with AI-driven insights, and further improve race strategy and performance.
F1 cars redesigned in the AWS cloud for more competitive racing
Every couple of years, F1 releases new standards to improve the race cars with a goal of increasing competitiveness. In 2022, F1 issued new standards to better understand how the cars interact with each other by modeling computational fluid dynamics, which can be thought of as a virtualized wind tunnel. Historically, F1 would build a physical wind tunnel that could house only one car.
Using AI and the cloud, a digital twin was built that could simulate two cars and understand the aerodynamic wake of the trailing car. With the old cars, the wake would push straight back, resulting in a loss of downforce while racing closely. The new design pushes the wake up and lets drivers get even closer, leading to more passing opportunities.
The result is 30% more overtakes since 2022. The car will be going through another redesign in 2026 so we shall see what that brings.
F1 car redesign built on Graviton silicon
One of the critical components of compute is the processor it runs on. Amazon offers a wide range of processors, including its own Graviton processors, which results in up to a 40% savings over other chips. The previously mentioned redesign was moved from an on-premises workload to the AWS cloud running on Graviton.
At the track, I spoke to Ali Saidi, vice president and distinguished engineer for AWS, about this. “With on prem, F1 could only run one simulation every three days,” he said. “By moving to AWS, the increase in compute capacity combined with simulation enabled them to run several simulations per day, significantly speeding up development time. The ability to iterate several times per day advanced knowledge so much faster than could be done before.”
IT and business leaders should use F1’s journey as a lesson learned
Though the F1 and AWS partnership will draw a lot of eyeballs because of the immense popularity of racing today, the lessons learned here are applicable to all businesses. We are entering the AI era, where data becomes an organization’s most valuable asset. Siegel highlighted this when she said, “The caliber and velocity of your data foundation equals the caliber and velocity of your products,” meaning you’re only as good as your data allows you to be.
Not every organization will generate F1-level volumes of data, but every company is generating more and more every year. It’s critical to understand what data the business has, be able to bring it together and then use AI to find those critical insights.
Scaling artificial intelligence is not just a compute problem but increasingly a network issue, a reality that Arista Networks addressed Wednesday by unveiling its latest switch family targeted toward AI data centers.
Its new generation, the R4 Series platform, is based on the Broadcom Jericho3 Qumran3D silicon and is designed for AI, cloud data centers and routed backbone deployments. Last week I talked with Arista about this, and Brendan Gibbs, its vice president of AI, routing and switching platforms, told me the goal was to “reduce total cost of ownership, while ensuring high performance, low AI job completion time, low power consumption and integrated security,” which is certainly a lofty goal.
From a performance perspective, the 800-gigabit-per-second R4 system supports high-capacity data center/AI clusters and sets a new high-water mark for speed with the introduction of 3.2-terabit-per-second HyperPorts. Anyone tracking the explosion in AI spending knows that networking is now as strategic as compute.
Though 3.2-Tbps ports might seem like a lot, in AI environments where all the graphics processing units are connected at 400G, aggregate traffic can quickly max out a network running at that speed. This creates the need for a higher-capacity aggregation spine. The capacity Arista offers addresses today’s needs while also leaving a bit of headroom for growth.
Under the hood
The new R4 routers, which feature efficient two-tier leaf-and-spine designs, deliver a range of fixed and modular options for scalable deployments across multiple use cases. All R4 products deliver the full range of EOS (Arista’s operating system)-based L3 features for modern architectures using EVPN, VXLAN, MPLS and SR/SRv6. Each R4 system ensures predictable latency via hierarchical deep packet buffering, coupled with scalable protection against packet loss during transient congestion.
Gibbs said AI growth continues to stoke significant demand for next-generation AI spines. “That drives the need for a dense 800 Gig in the backbone,” he said. “We’re also seeing strong demand for traditional data centers. AI gets all the press these days, but a lot of our business is traditional data center networking with enterprises. And we’re seeing speed changes there as well. We’re seeing repatriation of data from public cloud back to on-prem, or just workload expansion with workload complexity down at the circle level.”
800G for AI centers
The sweet spot for the new router series is high-scale data center backbones, data center interconnections, AI training and inferencing and scale-across routing. To meet the demand for high-speed transport in the very high port densities required by AI centers, Arista provides a range of density options, including petabit scale across cloud/AI titans, neoclouds, service provider and enterprise customer segments, according to the company’s announcement.
Security is a foundational element of the new 800GbE-based offerings in the 7800R4 and 7280R4 families. Each platform supports wire-speed encryption on every port simultaneously with TunnelSec, including MACsec, IPsec and VXLANsec options. The multilayer encryption technologies protect customer data in transit from malicious interception.
To buffer or not to buffer
Though there are several network solutions for scale-across and AI data centers, there is some industry debate as to whether to use shallow or deep buffers. Using shallow or deep buffers in AI networks, particularly in deep learning models, involves a tradeoff between memory usage/latency and throughput/training stability. The argument against buffering is that, when the network is oversubscribed, the buffers get filled and then drained, creating latency.
On the call, I discussed this with Martin Hull, vice president and general manager of cloud and AI. “The buffers are there as a protection mechanism,” he explained. “When a packet comes in, it is sent along and only buffered if the destination is not ready to receive it. In this case, there are two choices – drop it or buffer it – and the former creates far more latency and longer AI job completion times.”
Gibbs added, “While these are called deep buffers, technically they are hierarchical hybrid buffers, with on-chip shallow buffers and on-package deep buffers. If you’re managing and tuning the workloads, packets go in and out with no problem and the network will be ultra-low latency. However, if something crops up with congestion, either in the box or at the far end, shallow buffers will drop the packets, causing retransmissions that skyrocket latency and job completion time.”
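To make that tradeoff concrete, here is a toy model of my own (not Arista’s math): a burst arrives faster than the link can drain for a few ticks, and packets a shallow buffer can’t hold must sit out a retransmission timeout, while a deep buffer simply queues them.

```python
# Toy model (author's sketch, not Arista's) of the deep- vs. shallow-buffer tradeoff.
# A burst of packets arrives faster than the link drains for a short period. Packets a
# shallow buffer cannot hold are dropped and retransmitted after a timeout; a deep
# buffer queues them and drains the backlog.

def completion_time(burst: int, drain_per_tick: int, buffer_slots: int,
                    retransmit_delay: int) -> int:
    """Ticks until every packet in the burst has been delivered."""
    queue = min(burst, buffer_slots)   # packets the buffer absorbs immediately
    dropped = burst - queue            # packets lost up front, retransmitted later
    pending_retx = {}                  # tick at which retransmissions re-enter the queue
    delivered = 0
    tick = 0
    while delivered < burst:
        tick += 1
        queue += pending_retx.pop(tick, 0)
        sent = min(queue, drain_per_tick)
        queue -= sent
        delivered += sent
        if dropped:                    # schedule the retransmission burst once
            pending_retx[tick + retransmit_delay] = dropped
            dropped = 0
    return tick


if __name__ == "__main__":
    # 1,000-packet burst, link drains 100 packets/tick, retransmit timeout of 50 ticks.
    print("deep buffer   :", completion_time(1000, 100, buffer_slots=1000, retransmit_delay=50))
    print("shallow buffer:", completion_time(1000, 100, buffer_slots=200, retransmit_delay=50))
```

With these made-up numbers, the deep-buffer run finishes in 10 ticks versus 58 for the shallow-buffer run, which is the kind of job-completion-time penalty Gibbs warns about.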
Improved efficiency
Gibbs said the new platforms leverage the same EOS used across the company’s portfolio, expanded to address new use cases. He said the new R4 portfolio delivers significant advantages, including lower job completion time and lower power per gigabit per second, improving efficiency on a watt-per-gig basis. The rise in AI data centers has put a microscope on power utilization, and the new systems from Arista are more efficient while introducing the 3.2-Tbps links.
The 7280R4, which features a compact fixed form factor, complements the 7800R4. Both lines feature the same data plane forwarding capabilities, enabling customers to right-size their solutions to match needed port speeds, densities, space and network architectures.
Key 7280R4 family enhancements include:
- 32-port 800 GbE system, ideal as an AI/DC spine or backbone router
- 64-port 100 GbE with 10-port 800 GbE system, ideal for an AI/DC leaf
New data center leaf switches
Arista is also introducing new 7020R4 Ethernet leaf switches for high-speed direct server connectivity as an AI or DC leaf. The switches are designed for organizations with complex workloads, heterogeneous environments, and high-end workstations.
The 7020R4 family’s capabilities include 10GbE copper or 25GbE SFP port options, as well as 100GbE uplinks with wire-speed TunnelSec encryption per port for cybersecurity protection.
Availability
The 7800R4 modular systems, a pair of new linecards, and the two new 7280R4 platforms are already shipping. The new 7020R4 Chassis platforms and a new 7800R4 with HyperPort are scheduled to ship in Q1 of next year.
Final thoughts
Unlike the speculative “dot-com” overbuilds, today’s data and AI centers are responding to seemingly endless business demand. GPU and accelerator utilization is at record highs, and the incremental business value delivered is tangible, not theoretical. Arista’s leadership in the 800GbE switching market and its aggressive portfolio expansion are well-timed to benefit from a 90% average annual growth rate in this segment over the next five years.
As artificial intelligence continues to steamroll its way into every part of our lives, Nvidia Corp.’s GPU Technology Conference events have grown in importance, and this week’s fall installment in Washington, D.C., provided the customary showcase of what is possible and how to achieve it.
Nvidia has been at the heart of the AI revolution since it began, as the company has ascended from a graphics processing unit and gaming company to the hub of the world’s AI infrastructure. From foundational hardware to open-source software, spanning vertical industries from pharmaceuticals to manufacturing and government, every announcement at GTC points toward scaling AI deployments through a full-stack AI factory. Given the event was in our nation’s capital, a subtheme of U.S. leadership was pervasive throughout, particularly in the keynote by Nvidia Chief Executive Jensen Huang.
Here are the major announcements from GTC DC 25:
The pharmaceutical AI factory: Lilly’s Blackwell SuperPOD
There is no greater proof point of the value of technology than a customer deployment. At GTC DC, Nvidia announced Eli Lilly’s unveiling of pharma’s largest AI factory, comprising a DGX SuperPOD powered by more than 1,000 Nvidia Blackwell Ultra GPUs. This serves as an example of how proprietary data, foundation models and federated learning can be used to accelerate drug discovery and personalized medicine. AI will have an impact on many industries, but it’s poised to revolutionize healthcare and have it leap forward by decades.
Lilly’s platform, TuneLab, opens its massive data trove to biotech partners while maintaining strict privacy, and supports imaging, antibody generation, gene therapy R&D and supply chain optimization via digital twins and agentic robotics. The scale of the deployment is staggering – the factory has the compute equivalent of 7 million Cray supercomputers, able to solve 9 quintillion math problems every second. While much of the chatter around AI is job elimination, this deployment will create 13,000 high-wage manufacturing jobs as well as 500 jobs for engineers, scientists and technicians.
AI factories for government and secure operations
As one would expect, much of Huang’s narrative during his keynote was regarding government policies and deployments. Nvidia’s AI Factory for Government reference design equips federal agencies to fully operationalize AI while adhering to the most stringent security standards — FedRAMP clouds, high-assurance environments and integrated cybersecurity with partners such as CrowdStrike Holdings Inc. and Palantir Technologies Inc.
This blueprint brings mission-critical AI to regulated industries, supported by Nvidia Enterprise software and secure, composable infrastructure. Major defense primes, including Lockheed’s Astris AI and Northrop Grumman, are adopting these models for secure, efficient deployment at scale, reinforcing U.S. leadership in both innovation and national security. Huang was quite clear in his message: The U.S. government is not standing still and it needs to keep up the pace to ensure the U.S. wins the AI race.
The BlueField-4 platform: The operating system for gigascale AI
As demand for trillion-token workloads explodes and creates new challenges, Nvidia answers with BlueField-4, its newest DPU, delivering 800Gbps, 6x compute and native support for secure, multitenant clouds. BlueField-4 enhances every aspect of the AI data pipeline: storage, networking, security and zero-trust isolation.
Supported by ConnectX-9 SuperNICs, it enables efficient scalability for both on-premises and cloud-based AI factories. Major server OEMs and cybersecurity vendors (from Armis Inc. to Palo Alto Networks Inc.) are already integrating BlueField-4 into next-gen platforms, underlining Nvidia’s blueprint-as-product vision.
Manufacturing and robotics: The rise of physical AI
Nvidia unveiled an expanded Mega Omniverse Blueprint for factory-scale digital twins, with Siemens and FANUC first to align. Industry giants such as Foxconn, Toyota, TSMC and Lucid are simulating entire production lines, plants and logistics in virtual environments, enabling rapid optimization, predictive maintenance and resilient supply chains.
Amazon Robotics shortens development cycles from years to months by combining Omniverse AI and simulation; humanoid robots such as Figure’s Digit are trained on millions of reinforcement scenarios, making them capable assistants for both dangerous and dexterous work. Every Nvidia keynote I have seen this year has advanced the vision of physical AI becoming a reality.
While some look at autonomous machines and robots as taking jobs, Huang discussed using physical AI as a way of closing America’s 50 million-worker deficit, while bringing greater productivity and safety in manufacturing. Physical AI can push productivity up and Omniverse is the way to train machines cost effectively.
Telecom’s quantum leap: AI-native stacks, open source and 6G
On the telco front, Nvidia and Nokia Corp. announced the Aerial RAN Computer Pro, bringing an AI-native wireless stack to 6G. With partners such as T-Mobile USA Inc., Dell Technologies Inc. and Cisco Systems Inc., this collaboration promises to give America’s telecom industry a shot in the arm by shifting from proprietary systems to programmable ones. The Nvidia reference architecture supports multimodal sensing, edge AI and spectral agility. Cisco and MITRE’s early 6G applications use vision and radio data to offer real-time situational awareness for public safety and industrial monitoring, and MITRE’s AI-driven spectrum management offers game-changing efficiency by targeting interference without mass shutdowns.
Innovation is accelerated even further by Nvidia’s open-sourcing of Aerial software; now, researchers can instantly prototype full-stack 5G/6G networks and AI-RAN on desktop supercomputers like DGX Spark. This democratizes access for telecom researchers, startups and hyperscalers alike, as AI transforms wireless networking.
Autonomous mobility: Hyperion Drive and Uber’s robotaxi fleet
One staple of GTCs is autonomous car news. At this year’s event, Nvidia announced it is partnering with Uber Technologies Inc. to deploy the world’s largest Level 4 autonomous fleet, scaling to 100,000 vehicles, all ride-hailing-ready with AGX Hyperion 10 and DRIVE Thor. Nvidia provides the compute and sensor backbone, Uber the global network, with OEMs such as Lucid, Mercedes-Benz and Stellantis joining in.
The DRIVE platform supports foundation models and reasoning VLA (Vision-Language-Action) models, with the Halos Certified Program debuting as the first third-party safety certification lab for AI vehicles. This integrated approach means more than just autonomous driving as it addresses industry standards for vehicle safety, continuous machine learning and scaling across continents.
Quantum computing and AI physics: Redefining engineering
Nvidia’s vision for accelerated quantum and physics-driven simulation was front and center at this GTC. Its announcement of NVQ Link, the world’s first quantum-GPU interconnect, enables scalable quantum error correction and hybrid supercomputing, laying the technical foundation for next-gen science and cryptography. In engineering and design, the integration of AI PhysicsNeMo and Domino NIM microservices is pushing aerospace and automotive modeling workflows up to 500 times faster, letting designers iterate and optimize in real time, multiplying engineering throughput and unlocking rapid innovation. It’s still not clear what the timing for quantum is, but Nvidia is creating the underlying infrastructure to ensure quantum can move fast.
Conclusion: The factory, not the feature
Nvidia’s GTC DC announcements indicate a broader but necessary shift in the industry. AI, quantum, agentic and physical intelligence must be built into the fabric of national infrastructure, supply chains and business operations. The era of the “AI factory” — full-stack, reference-architected, open, secure and programmable — is here. The company is not just providing parts, but orchestrating a resilient, scalable platform for America’s industrial renaissance, global competitiveness and future technological leadership.
Huang pushed the national message hard, as AI leadership is up for grabs and many other nations are pushing forward aggressively. He mentioned that the U.S. has often led technology trends but is way behind in wireless technology, ceding leadership to overseas companies. GTC DC wasn’t just an event but rather an opportunity to create an inflection point where hardware, software and partnerships come together to move AI forward with the U.S. leading the charge.
Contact-center-as-a-service leader NiCE Ltd. is holding its analyst event in Vienna, Austria, this week, and it’s the first one with Scott Russell as captain of the Starship NiCE.
The chief executive isn’t the only new leader: Michelle Cooper is now running marketing and Jeff Comstock has taken over as president of products and technology. Russell, Cooper and Comstock succeeded some of the longest-tenured and most successful executives in communications, as Barak Eilam, Einat Weiss and Barry Cooper, respectively, all stepped down within the last year. This is also the first analyst event with Cognigy in the NiCE portfolio.
With all these changes, all industry analyst eyes were on the content, with Russell leading off. Below are some interesting takeaways from his keynote.
Speed is a choice – but won’t be for long
Russell opened the session with a comment that “Speed is a choice,” talking about the need to operate fast in the artificial intelligence era. “That doesn’t mean everything has to be fast and reckless, but it means that when you’re going to do something be purposeful and be precise,” he said. “Move quickly to get things rolling but be willing to adjust direction.”
And that includes NiCE: “We are obliged to our customers, to our partners, to our shareholders, to all the stakeholders, to be able to seize the moment and move quickly,” he said.
I do believe we are early enough in the AI cycle that speed can still be a choice, but, as John Chambers often said during his time as Cisco Systems Inc.’s CEO, “Market transitions wait for no one.” For customers, this means there is still time to be thoughtful about AI, but the ball needs to get rolling soon, and Chambers’ line should be taken as a warning.
I was at an AI event a couple of months ago where one of the speakers said, “There are no fast followers in AI: you lead or you fall behind.” So right now, speed is a choice, but it won’t be for very long. Get moving with AI now.
Customer experience can’t be ‘Frankensteined’
Customer experience is much more than just contact center. In addition to the core contact center capabilities that enable brands to talk to customers, there are several adjacent functions, such as workforce management or WFM, scheduling and quality management, that are required to operate a contact center. Most of the CCaaS providers partner with companies such as Verint Systems Inc. and Calabrio to fill those gaps. Though this has been effective in the past, Frankensteining a CX solution by bolting several products together won’t work in the AI era, since the data from each product will live in silos.
I’ve said many times that, with AI, good data leads to good insights, but silos of data lead to fragmented insights, and that can result in inconsistent actions. A decade ago, NiCE was a WFM provider but showed great vision when it acquired CCaaS vendor inContact and brought the two products together. Since then, it has built many of the other capabilities required to service customers, and this single, unified stack gives it a unique platform advantage. Evidence of this is that when NiCE wins a competitive deal, it often replaces a dozen or more other vendors.
Cognigy and NiCE create a 1+1=5 value proposition
A unified data set and platform give NiCE an advantage. Now the question is, what will NiCE do with this data? Enter agentic AI provider Cognigy GmbH, which boasts many well-known brands as customers. Lufthansa, Bosch, Bayer, Toyota and many others have used Cognigy to transform the way they interact with customers. NiCE has the data and now can apply Cognigy’s agentic AI capabilities to it.
Prior to being acquired by NiCE, Cognigy was already a partner, but it also works with many of NiCE’s competitors. One question raised at acquisition time was whether NiCE would keep Cognigy open. At the time, NiCE said it would, and Russell reiterated that at the analyst summit, as it’s what’s best for the customer.
However, by owning the asset, NiCE can better control the roadmap and keep the development of Cognigy in lockstep with the parent company. If executed on correctly, Cognigy plus any CCaaS vendor should be 1+1=3, but when NiCE is involved that should be more like 1+1=5.
Voice is far from dead
With all the digital channels available to customers today, many have called for the death of voice but, in the words of Mark Twain, “The reports of my death are greatly exaggerated.” “Everyone is prophesizing the doom of voice, but voice interactions continue to grow,” Russell said, and he backed that up with a chart showing voice interactions grew 27% last year. That’s certainly well behind the 42% growth in digital channels and the 65% boost in AI interactions, but it’s healthy nonetheless.
I believe that as AI agents get better at understanding natural language, the use of AI voice will create a voice renaissance, as people will prefer to talk to brands versus email or chat. Voice is the default interface we all have and the easiest to use, if done correctly. AI agents can understand what we say and find information much more quickly than people.
There will come a day in the not-too-distant future where, when you call a brand, you’ll prefer the AI agent over the human for simple, repetitive tasks. Human voice is far from dead, and the use of AI voice will eventually dwarf other forms of communications.
AI shifts CX decision-making
Historically, NiCE and its peers have sold through the contact center business unit, but AI will change that. During Russell’s presentation he mentioned how “AI will melt the org chart by orchestrating workflows across the front, middle and back office.”
This will have a profound impact on contact center decisions, as they will move more to the C-suite. At the event, I caught up with Cognigy’s vice president of AI transformation, Thys Waanders, and he told me that many wins initially went through the chief information officer, and now more and more go through chief digital officers. This is something I had discussed previously with Joe Rittenhouse, CEO of Converged Technology Professionals, a leading services firm, and he echoed that sentiment when he told me, “We see AI being driven at a C-suite level more than at the contact center business unit.”
The reason for this is that, at the business unit level, change can be scary because it threatens jobs, changes roles and redefines workflows. I’m not saying there aren’t a handful of forward-thinking contact center leaders that push change and disruption, but in general, disruption rarely happens at the business unit.
Server admins didn’t bring in virtualization, telecom managers weren’t on board with voice over IP and don’t expect all contact center managers to jump for joy with AI. Cognigy’s ability to bridge the gap between NiCE and the C-Suite is one of the underappreciated aspects of this acquisition, but change is coming, and Cognigy’s experience here will pay big dividends for NiCE.
Final thoughts
Agentic AI is going to have a significant impact on the way brands deal with customers. NiCE has built a strong platform and is widely recognized as the CCaaS leader as evidenced by its top position in both Gartner’s Magic Quadrant and Forrester’s Wave.
Now the industry is changing, which typically threatens incumbents, because market leaders rarely want to upset the apple cart in a market they lead. Cognigy gives NiCE the AI suite, and it needs to use this to evolve its marketing, sales motion and partner programs to address new buyers that will have a different mandate than its traditional customers.
Russell has things pointed in the right direction, but 2026 will be a year of proving the vision he laid out can turn into execution.
Veeam Software Group GmbH today announced a definitive agreement to acquire Securiti Inc. for $1.725 billion, by far the largest purchase made by the company to date.
Though Veeam has made many acquisitions under the tenure of Chief Executive Anand Eswaran, this is the first in the security market. As a data protection company, Veeam can be thought of as “security-adjacent,” but it now squarely enters the artificial intelligence cybersecurity race.
Securiti had raised $156 million from three rounds of funding with major investors, including Mayfield, General Catalyst, Workday Ventures and Capital One Ventures. The $1.725 billion is about three times its last valuation of $575 million. Though this may seem steep, valuation in the software-as-a-service and cybersecurity markets is often judged by revenue multiples.
Securiti has not reported revenue publicly, but the private equity tracking sites estimate that in the last 12 months, revenue was in the $300 million to $400 million range, which would suggest an acquisition multiple of about five times revenue. This might seem aggressive, but it’s defensible for a high-growth, category-defining asset in a hot market segment.
Also, it has always been my belief that if an acquisition transforms a company, which this should for Veeam, then a couple of hundred million here and there won’t make a difference a year from now.
AI supremacy will be built on data, and this purchase enables Veeam to bring together data protection, security, governance, privacy and AI trust into a unified platform, enabling its customers to accelerate safe AI at large scale. Post-acquisition, Securiti will be a separate division under the broader Veeam company, much as Kasten is run today.
The combination will benefit Veeam customers by bringing new capabilities to access all their structured and unstructured data, enabling them to realize four big benefits:
- Understand their data everywhere with a unified, real-time data command graph of their entire data estate.
- Secure AI and data by unifying Securiti’s data security posture management, or DSPM, capabilities, including compliance, data governance, access controls and data privacy, with Veeam’s security and data capabilities.
- Recover and roll back AI and data with precision, restoring clean data instantly across cloud, on-premises, SaaS, apps, hybrid or multicloud environments, and addressing rogue AI behavior by tracing, auditing and rolling back without disrupting unaffected systems.
- Realize the value of data for AI by rapidly creating trusted AI agents, based on secured data, and creating enterprise-wide search in minutes that respects entitlements with privacy controls and runtime guardrails built in. This will enable customers to deploy the best-of-breed AI pipelines with confidence.
Strengths of the combined company
Before the announcement, I spoke with Eswaran and Securiti CEO Rehan Jalil to discuss why they made the deal and what benefits they expect the combination of Veeam and Securiti will deliver to customers.
Eswaran said the union of Seattle-based Veeam and Securiti, which is headquartered in San Jose, will provide organizations with “one trusted data graph scanning primary and secondary data and one command center, which is the key to understand and secure your data.”
Jalil, who will become president of security and AI for Veeam once the transaction closes, said his company provides capabilities that fit well with Veeam’s data protection, backup and disaster recovery solutions.
“Securiti brings complementary technology, unified controls, which are security controls, access controls and privacy and AI controls,” he said. “Some of these categories are really on fire, like DSPM. Some of the largest organizations, including four airlines in North America, big banks, major telcos and some of the largest retailers, use Securiti for these use cases.”
He summed up Securiti’s philosophy succinctly: “We provide customers with all the security for their data to make sure bad things don’t happen to the data. But if something goes wrong, we provide resilience.”
The power of the Securiti knowledge graph
Much of Securiti’s value comes from its Data Command Graph, which constructs a knowledge graph that maps a network of all data assets. This graph provides a contextual relationship between every object tied to data, including:
- Files, tables, and columns (sensitive data).
- Users and user roles (identity).
- AI models and agents that can access the data.
- Regulatory and compliance requirements.
This single-pane-of-glass Data Command Center allows users to flip the view instantly:
- A chief information security officer can ask: “Which AI agents are touching files with sensitive PII, and where is that data backed up?”
- A data officer can ask: “Show me the full lineage and context of the data fueling this specific AI model.”
This level of comprehensive, real-time context enables actionability, including labeling data for AI agents, controlling access and ensuring cross-border policy enforcement.
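Securiti’s graph engine is proprietary, so purely as an illustration of the kind of pivot described above, the sketch below builds a tiny graph with networkx and answers the CISO’s question; every node, attribute and relationship is invented.

```python
# Illustrative sketch only -- not Securiti's implementation. It models a tiny data
# command graph with networkx and answers the CISO question above: which AI agents
# read files labeled PII, and where is that data backed up? Every node, attribute
# and relationship here is invented.
import networkx as nx

G = nx.DiGraph()

# Nodes carry a 'kind' attribute: file, agent or backup (regulations, user roles,
# AI models and so on would be added the same way).
G.add_node("claims.csv", kind="file", sensitivity="PII")
G.add_node("notes.txt", kind="file", sensitivity="public")
G.add_node("support-bot", kind="agent")
G.add_node("pricing-bot", kind="agent")
G.add_node("s3://backups/claims", kind="backup")

# Edges carry the relationship between objects.
G.add_edge("support-bot", "claims.csv", rel="reads")
G.add_edge("pricing-bot", "notes.txt", rel="reads")
G.add_edge("claims.csv", "s3://backups/claims", rel="backed_up_to")


def agents_touching_pii(graph: nx.DiGraph):
    """Yield (agent, file, backup locations) for every agent reading a PII file."""
    for src, dst, data in graph.edges(data=True):
        if (data["rel"] == "reads"
                and graph.nodes[src]["kind"] == "agent"
                and graph.nodes[dst].get("sensitivity") == "PII"):
            backups = [b for b in graph.successors(dst)
                       if graph.nodes[b]["kind"] == "backup"]
            yield src, dst, backups


for agent, file, backups in agents_touching_pii(G):
    print(f"{agent} reads {file}; backed up to {backups}")
```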
The growing role of AI
As with most industries, AI is playing an ever-bigger role in how companies operate, particularly in the technology sector. The acquisition of Securiti is expected to inject a significant growth vector into Veeam’s strong, fast-growing business.
“This is where we basically come together to create the first platform which represents the entire data estate in a meaningful way,” said Eswaran. “And we combine everything Veeam has stood for with Securiti’s data command graph, which will then be taken up one layer to look across this entire data estate, and look across the entire agentic AI capabilities. And that’s how you start to get to accelerating safe AI at scale.
“Veeam and Securiti coming together secures the future,” he said. “What we’re trying to do here is what no company has done before—provide a unified view of data to accelerate safe AI at scale.”
From a competitive position, this helps Veeam continue to pull away from legacy backup and recovery vendors such as Dell Technologies Inc. and Cohesity Inc.’s Veritas. It also creates an interesting competitive dynamic with Rubrik Inc., whose go-to-market has been built on the strength of its DSPM. Rubrik has a strong DSPM solution but doesn’t have Veeam’s data recovery chops to act on incidents. I brought this up on my call with Eswaran, and he agreed, telling me that old-school snapshots and recovery, which is how data protection works today, need to give way to autonomous AI.
Acquisition bolsters Veeam’s IPO positioning
This move is a textbook example of a late-stage private company making a strategic, high-impact acquisition designed to maximize its public market debut. The biggest benefit is the company can re-rate the business from “backup” to “AI security,” which should have greater investor appeal and a higher valuation multiple.
It also enhances stickiness and value for Veeam’s existing customer base, creating a strong platform advantage. Eswaran has yet to give a timeline for an IPO, but this creates a direct AI play, which squarely moves Veeam to where the puck is going.
At IBM Corp.‘s TechXchange 2025 event last week in Orlando, Florida, artificial intelligence was the primary theme, as it is at every event today. But the messaging and announcements from this conference were about getting customers over the hump and moving AI from vision to adoption.
The pace of technological change is faster than I’ve ever seen it in my almost three decades as an analyst and two decades as an information technology pro before that. The rapid evolution of technology is being driven by AI and more specifically, generative AI, which lets us interact with systems and data in an entirely new way.
It’s important to note that this isn’t just an incremental change to businesses; it’s a fundamental overhaul of the tools we use, the data we rely on, the security we build and the systems we deploy. It’s akin to the massive change we saw with the internet, although AI will dwarf that transition. AI is causing unprecedented data growth, and AI agents are proliferating quickly.
The central question for every organization is: How do you keep up? To help customers with this, the TechXchange keynote addressed the following issues.
The infusion challenge: Building usability and trust
Despite the hype, only a small fraction of businesses are seeing value from generative AI. A recent MIT study found that only 5% of organizations are claiming success with their generative AI pilots. The massive 95% is held back not by a lack of availability, but by a challenge of usability and fidelity. The millions of models on platforms like Hugging Face may be available, but how many meet a bank’s or pharmaceutical company’s stringent requirements for security, data governance, explainability, and adherence to regulatory standards?
Deployments require making AI fit the enterprise, not the other way around. This involves deep integration into existing infrastructure and a commitment to the high standards enterprises have maintained for decades. This enterprise-first approach is the core of the recently announced strategic partnership between IBM and Anthropic. As Anthropic CEO Dario Amodei noted on stage at TechXchange, this collaboration is focused on driving adoption faster by combining Anthropic’s models with IBM’s deep understanding of enterprise tech stacks, infrastructure, and the complexity of change management in regulated industries.
IBM does not have the sizzle of an AI startup, but its decades of enterprise experience, coupled with its massive consulting practice, provide the essential trust, scale and domain-specific knowledge required to move from theoretical AI potential to practical, secure business execution. No one does “big” better than IBM, and with AI, that’s a critical component of success for enterprises.
Given the large numbers of AI failures, it’s important to understand what the steps to success are. At the event, IBM laid out the following foundational pillars.
Ecosystem: Power in partnership
Enterprise-grade AI will not come from a single vendor. It demands a broad ecosystem that brings together model providers, cloud providers and hardware vendors. This collaboration ensures that the models and technologies delivered are not only powerful but also optimized, scalable and secure across diverse enterprise environments. At TechXchange, IBM announced a partnership with Anthropic, which will integrate Claude LLMs into IBM products. Beyond Anthropic, IBM highlighted ecosystem work with Qualcomm, Salesforce, SAP, Dell, Box and others.
Developer tools: Introducing ‘Project Bob’
The second pillar focuses on maximizing developer productivity, moving beyond simple code creation to task completion. The goal is to boost efficiency by turning tasks that once took days or weeks into processes that take minutes or hours. One example IBM gave was upgrading an application from Java 7 to Java 17, which can now be done in mere minutes versus the hours and hours it took before.
This is the vision behind Project Bob, IBM’s internal tool that has already seen strong results, delivering over 45% productivity gains for more than 6,000 internal developers. Bob is designed for the entire software lifecycle, supporting both the inner loop (coding, debugging, testing) and the outer loop (deployment, resilience, CI/CD, compliance). A core feature is the shift towards literate programming, allowing developers to express their intent in natural language, which the system then translates into code, automating the mundane and augmenting the complex.
The success of Project Bob hinges on its simple user experience, which should foster high engagement and utilization. IBM positioned Bob not as a simple code generator, but as a knowledgeable partner — a “distinguished engineer” for junior staff, or a capable assistant for seasoned experts.
This supports the thesis that AI won’t take people’s jobs; rather, people who use AI will. Project Bob can be a programmer’s best friend if used correctly.
Infrastructure simplification: The knowledge graph with ‘Project Infragraph’
The final pillar tackles the growing complexity of the infrastructure required to deploy AI. Unlike sandbox development, managing production infrastructure involves live customer data, real-time user load, and critical security implications. Current infrastructure is highly fragmented across public clouds, private clouds and various management tools (Terraform, Ansible and others), making it hard for both human and AI operators to achieve full context.
IBM developed Project Infragraph, for the HashiCorp Cloud Platform, as a solution to the complexity. It is a real-time, graph-oriented database of all infrastructure assets and provides the following:
- Unifying silos: Infragraph aggregates data from public and private clouds, supply chain tools such as Artifactory and GitLab, security tooling like Wiz and Snyk, and IBM’s own systems like Instana and Turbonomic.
- Operational intelligence: This unified knowledge graph makes infrastructure easy to understand by mapping the relationships between all components, which is critical for rapid remediation. For example, during a vulnerability such as an OpenSSL flaw, teams can instantly query all impacted web instances and then initiate an automated, linked remediation workflow (for example, revoking a golden image in Packer and redeploying via Terraform), eliminating the need for manual spreadsheets and email chains; a sketch of this flow follows the list.
- Foundation for agents: Infragraph provides a common data layer, a knowledge graph that can be queried by AI agents, enabling automated operations through natural language interfaces and systems like IBM Concert. This allows for intelligent, proactive actions on the infrastructure data.
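IBM hasn’t published Infragraph’s query interface, so the following is a hypothetical sketch of the remediation flow referenced in the list above: filter a unified asset view for instances running a vulnerable OpenSSL version, group them by golden image and emit one linked remediation plan. All instance names, image names, versions and fields are invented.

```python
# Hypothetical sketch -- not IBM's Infragraph -- of the remediation flow described in
# the list above: filter a unified asset view for instances running a vulnerable
# OpenSSL version, group them by golden image and emit one linked remediation plan.
# All instance names, image names, versions and fields are invented.
INVENTORY = [
    {"instance": "web-01", "image": "golden-2024.10", "openssl": "3.0.7"},
    {"instance": "web-02", "image": "golden-2024.10", "openssl": "3.0.7"},
    {"instance": "api-01", "image": "golden-2025.01", "openssl": "3.0.13"},
]

VULNERABLE_VERSIONS = {"3.0.7"}  # placeholder for the advisory's affected versions


def remediation_plan(inventory):
    """Group impacted instances by golden image and describe the linked fix."""
    impacted = {}
    for asset in inventory:
        if asset["openssl"] in VULNERABLE_VERSIONS:
            impacted.setdefault(asset["image"], []).append(asset["instance"])
    return [
        f"revoke image {image} in Packer, rebuild it, then redeploy "
        f"{', '.join(instances)} via Terraform"
        for image, instances in impacted.items()
    ]


for step in remediation_plan(INVENTORY):
    print(step)
```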
IBM is banking on its ecosystem, combined with productivity-enhancing developer tools like Project Bob and the contextual operational intelligence of Infragraph, with enterprise security layered on top, as a roadmap to successful, secure and scalable generative AI adoption.
I attend more than my fair share of events every year, but my favorite is the Global Citizen Festival in New York City. For those not familiar with it, Global Citizen is an international education and advocacy organization dedicated to ending extreme poverty and its systemic causes. The goal of Global Citizen is to end extreme poverty by 2030.
The Global Citizen Festival is an annual music festival and advocacy event put together by the organization to drive policies and financial commitments from world leaders, corporations and philanthropists toward that overall goal. The event draws more than 60,000 people to address poverty, sustainability and education around the world. This year the event drove a record-breaking 4.3 million actions to protect the Amazon, secure clean energy for homes in Africa and support children’s education worldwide.
Each year, in partnership with Global Citizen, Cisco Systems Inc. recognizes young people making a positive impact with the Cisco Youth Leadership Award. The winners receive $250,000 toward their efforts to end extreme poverty. I had the privilege of interviewing this year’s award recipient, Esther Kimani, founder of Farmer Lifeline Technologies.
Kimani grew up in a Kenyan farming village. Her family, like many others, would lose a third or nearly half of their crops to pests and disease. Kimani was the only girl from her village to attend a university, where she studied math and computer science and wanted to apply her knowledge to help farmers back home. She came up with the idea of using AI-enabled cameras to monitor crops and alert farmers to pest and disease outbreaks. That’s how Farmer Lifeline Technologies was created.
The tool uses cameras to track fields and sends farmers a text message when it detects signs of pests or disease. The alerts go out in the farmer’s local language and suggest what action to take. Compared with existing options like drones or lab testing, Farmer Lifeline Technologies costs only a few dollars per month and can cover several farms at once. Kimani explained that drones can cost up to $100 an hour and private labs charge about $60 per test, which is unaffordable for small farmers, many of whom struggle to make that much in a day.
If things don’t change, local farmers in Africa face devastating crop losses of up to 70% of the food produced, according to Kimani. That wasted food could feed more than a billion people. Kimani believes that cutting these losses is just as important as trying to grow more food. It could create new opportunities for young people in agriculture, in addition to making a major difference for women in farming.
“Women carry the weight of both farming and family because they provide more than 50% of the labor force in these rural farming communities,” she said. “We thought about a technology that would serve them, so that when they’re still working in their homes, they can get a notification. They don’t have to be on the farm. An SMS is enough for them to take action and save their crops while there’s still time.”
Developing Farmer Lifeline Technologies required a lot of time and sacrifice. Kimani had to tap into her personal savings to support her team in the beginning. Training the AI models was also a long process that involved sorting data and improving accuracy for different crops. Refining the tool took several years.
The tool has matured since its inception in 2020. It’s now integrated with a global database application programming interface, which helps identify pests and diseases across thousands of crop species in different regions. Although Farmer Lifeline Technologies currently operates in Kenya, with Cisco’s backing, Kimani hopes to expand into the East African region as well.
“Whether a camera is mounted in a New York apple farm, in the Philippines, in East Africa or in Kenya, it will still work the same because of the global database API,” said Kimani. “And we are taking tremendous steps towards our intellectual property. We’ve filed for an African one and are pursuing a global patent as well.”
Today, Kimani and her team are no longer limited to pretrained models. They’re building their own, which means better accuracy and more opportunities to scale. The team is also working to extend the reach of each camera unit, so it can scan more farmland. At the moment, one camera can cover about four to five farmers. Kimani plans to expand that to nearly 30 farmers per unit in the future.
The goal is to reach 1 million farmers by 2030. Compared with the 7 million small farmers in Kenya, 33 million across Africa and 500 million across the global south, “1 million is a drop in the ocean,” according to Kimani. Still, she believes this milestone is just the beginning.
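To put that goal in perspective, here is a quick back-of-envelope sketch in Python (my arithmetic, not the company’s) using the coverage figures Kimani cited. It shows why stretching each camera from a handful of farmers to roughly 30 matters so much on the road to 1 million farmers.

```python
# Back-of-envelope sketch: camera units needed to reach 1 million farmers
# at the per-unit coverage levels Kimani mentioned. These are my own
# illustrative calculations, not Farmer Lifeline Technologies' figures.

GOAL_FARMERS = 1_000_000

def units_needed(farmers_per_unit: int, goal: int = GOAL_FARMERS) -> int:
    """Ceiling division: a partially used camera is still a physical unit."""
    return -(-goal // farmers_per_unit)

for coverage in (4, 5, 30):
    print(f"{coverage:>2} farmers per camera -> ~{units_needed(coverage):,} units")

# ~250,000 units at 4 farmers per camera
# ~200,000 units at 5 farmers per camera
#  ~33,334 units at 30 farmers per camera
```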
She said winning the award has been “mind-blowing” and gives her company access not only to funding, but also to mentorship and technical support from Cisco engineers. As an example, the models currently run on a laptop and can take weeks to complete. Kimani is working with Cisco to better understand compute options that can speed up processing time.
Solving this problem will have a greater impact than just producing more food. “In Kenya, farmers are putting in the work and with the right support, they can produce enough food to feed themselves and grow the economy,” Kimani said. “Farming has the potential to absorb more youth, many of which are unemployed.”
When asked what advice Kimani has for other young entrepreneurs, she said: “To these young people who have problem-solving minds, let’s not be afraid to step out there. It starts as an idea, but you need to make the first move. It will be bold. Some of us are called crazy, and we get hard slaps on our entrepreneurship journey. But at the end of the day, the impact that we make will fuel us to continue doing what we do.”
On a related note, at the event I ran into the first Cisco Youth Leadership Award winner, Wawira Njiru, founder and CEO of Food4Education. Her company uses IoT and mobile technology to make it easier for kids to purchase subsidized lunches at school. When I first met her, in 2019, Food4Education was feeding about 3,000 kids per day. Since winning the award, she has scaled the company up and is now feeding more than 600,000 kids per day.
Kimani’s success and recognition through the award should be a reminder to all that one person can indeed make a difference. Hopefully her story, along with those of previous award winners such as Njiru, will inspire other young people with an idea to be bold and put that idea into action.
It seems every day brings a new announcement of a major economic deal involving artificial intelligence.
Recently, Nvidia Corp. made an investment in OpenAI and then OpenAI turned around and took a stake in Advanced Micro Devices Inc. Then there was the $6.3 billion deal between CoreWeave Inc. and Nvidia.
Why so much activity? The reason is that while the world craves AI, the infrastructure to support the demand isn’t there. In fact, earlier this year, McKinsey authored a report projecting that the world’s hunger for AI compute power is creating a capital market likely to grow beyond $6.7 trillion by 2030. The demand for AI is creating a flood of funding for compute, and that’s not likely to slow down any time soon.
Much of the investment has come from a few hyperscalers such as Google LLC, Meta Platforms Inc. and OpenAI. This week, QumulusAI, a Georgia-based neocloud, announced a financing model that could open a new financial spigot. The company closed a $500 million nonrecourse financing facility, a deal structured and implemented by Permian Labs and distributed through the USD.AI Protocol, the world’s first blockchain-native credit market specifically designed to finance the physical infrastructure of the AI sector.
On the surface, this might look like a line of credit, but it’s a funding template that can democratize access to AI infrastructure capital, creating an alternative to legacy financing methods and accelerating the journey to the neocloud era. One of the notable points of the financing is that it couples compute infrastructure to blockchain credit markets, such as decentralized finance or DeFi. This is important because it solves problems in funding large-scale, high-growth AI projects while enhancing the efficiency and accessibility of capital.
This innovative model essentially uses tokenization to turn traditionally illiquid assets (such as hardware) into tradable, transparent and instantly collateralizable digital assets on a global scale.
Leadership and strategy for the age of neoclouds
Neoclouds offer bare-metal access to the graphics processing units of high-performance computing systems for the training and inference of AI models. QumulusAI recently tapped Michael Maniscalco as its new chief executive to help scale the company, and the financing is obviously a critical component of being able to grow at AI speed.
The veteran executive has witnessed the huge supply-demand imbalance in the current landscape, noting that the bulk of world-class compute is tied up in large AI research labs. The net effect is a gigantic, underserved population of global businesses, research centers and AI startups that require reliable, economical compute.
The key differentiator for QumulusAI is vertical integration. The company owns the entire stack, from managed power supplied by plants it controls and proprietary data centers to the GPU-accelerated cloud products built on top of them. That vertical integration allows for the cost management, reliability and flexibility that modern AI workloads require.
This strategy is quicker, cheaper and less bound by the obstacles of power sourcing at scale. It enables QumulusAI to address customers anywhere in the world without the limitations of typical infrastructure developments.
The DeFi blueprint: GPUs as tokenized assets
The $500 million facility is the engine behind QumulusAI’s ability to execute on its fast-paced roadmap, and the model creates a potential new playbook for AI infrastructure capital. The facility is a combination of decentralized finance and real-world assets such as U.S. Treasury bills, corporate debt or other tangible assets. Conventional financing of such mass deployments of hardware is typically slow, highly dilutive of equity and creates burdensome debt. QumulusAI’s nonrecourse facility avoids these problems, enabling the company to borrow stablecoins worth up to 70% of the value of approved GPU deployments in real time.
Permian Labs’ strategy is the key, and it relies on the following:
Tokenization: Permian Labs issues GPU Warehouse Receipt Tokens, or GWRTs, which are legally attached to the underlying GPU hardware and its projected revenue streams, in effect tokenizing the hardware as a financeable commodity.
Decentralized collateral: The USD.AI Protocol is a blockchain-based credit market that accepts such GWRTs as collateral. It gives QumulusAI direct access to liquidity instantly.
Liquidity provision: Stablecoin liquidity is provided by on-chain depositors in search of yield from real-world, income-generating underlying assets. The protocol’s dual-token design guarantees a transparent, scalable conduit for institutional capital to directly flow into real-world infrastructure build-out.
This architecture establishes transparent, immediately liquid credit for compute infrastructure, enabling QumulusAI to finance on a nondilutive basis. By demonstrating that tokenized physical assets can fund AI infrastructure through decentralized credit markets, this template unlocks new avenues for neocloud operators around the world.
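The mechanics above can be reduced to simple math. Below is a minimal sketch in Python, assuming the 70% advance rate against approved GPU deployments described earlier; the class and field names (GPUWarehouseReceiptToken, appraised_value_usd) and the dollar figures are illustrative only, not the USD.AI Protocol’s actual data model.

```python
# Minimal sketch of the GWRT collateral model, assuming a 70% advance rate
# against approved GPU deployments. Names and figures are illustrative only.

from dataclasses import dataclass

@dataclass
class GPUWarehouseReceiptToken:
    """A tokenized claim on deployed GPU hardware (a 'GWRT')."""
    deployment_id: str
    appraised_value_usd: float  # approved value of the underlying hardware

ADVANCE_RATE = 0.70  # borrow stablecoins worth up to 70% of approved deployments

def borrowing_capacity(receipts: list[GPUWarehouseReceiptToken]) -> float:
    """Stablecoin credit available when the receipts are posted as collateral."""
    return ADVANCE_RATE * sum(r.appraised_value_usd for r in receipts)

# Hypothetical example: two approved deployments worth $40M and $60M
collateral = [
    GPUWarehouseReceiptToken("cluster-a", 40_000_000),
    GPUWarehouseReceiptToken("cluster-b", 60_000_000),
]
print(f"Borrowing capacity: ${borrowing_capacity(collateral):,.0f}")  # $70,000,000
```

The point of the structure is that credit scales with deployed hardware rather than with equity raised, which is what makes the facility nondilutive.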
Market impact and outlook
The combined financial and strategic muscle of DeFi funding and neocloud distribution positions QumulusAI as a potential disruptor of the status quo in cloud. The company is prioritizing customers in fast-growing segments, such as AI coding workloads and heavy research computing, that need highly reliable, highly scalable and economical resources.
In a conversation I had with Maniscalco, he pointed to usability as a second layer of differentiation. Because developers value reliability and rapid turnaround time, the company is adding improved user experiences and developer-friendly interfaces on top of its rugged, vertically integrated silicon-to-systems compute platform.
Because the facility is both nonrecourse and nondilutive, QumulusAI can maintain maximum equity and agility as it scales rapidly, enabling it to respond quickly to the hyper-speed evolution of AI hardware, such as new generations of Nvidia GPUs. In essence, QumulusAI is leveraging the most advanced technology in finance (DeFi and RWA tokenization) to accelerate the most demanding technology in infrastructure (AI supercompute).
This $500 million facility should be looked at as a blueprint that could be replicated by other operators, paving the way for a more distributed, competitive and capital-efficient future for the entire AI industry. By formalizing a link between decentralized finance and critical physical infrastructure, QumulusAI is defining how trillions of dollars in compute capital will flow into the global economy for decades to come, accelerating the democratization of AI. That’s good for everyone.
Cisco Systems Inc. today announced its 8223 routing system, powered by its new Silicon One P200 chip — a new network system designed to unlock artificial intelligence’s potential through massive scale.
Earlier this year, Nvidia Corp. introduced the concept of “scale-across” architectures as AI hits the limits of a single data center. A “unit of compute” was once a server, then evolved into a rack, and then the entire data center. Scale-across enables multiple data centers to act as a single unit of compute, and Cisco designed the 8223 and P200 silicon for the specific rigors of this task.
The new chip sets a new high-water mark for networking with a whopping 51.2 terabits per second of full-duplex throughput. There are two models of the new router; both are 3RU systems with 64 800G ports.
The 8223-64EF uses OSFP optics, while the 8223-64E uses QSFP. Both achieve the same total data rate, but they differ in size, thermal management and backward compatibility, distinctions that influence their suitability for different environments, such as high-density data centers versus telecom applications. Also, OSFP supports both Ethernet and InfiniBand standards, while QSFP is used primarily in Ethernet networks.
Single data center AI infrastructure has reached its limits
Though the capacity of these routers may seem off the charts, AI is generating network traffic at an unprecedented rate. As AI models double in size every year, the infrastructure required to train them has ballooned. This has pushed hyperscalers past the point of being able to scale up and scale out, leaving scale-across as the only path forward. The scale-across migration is driving a massive increase in long-haul traffic.
In a pre-briefing, Cisco Senior Vice President Rakesh Chopra talked about this. He mentioned that a scale-across network needs approximately 14 times more bandwidth than a traditional wide-area network interconnect and could require up to 16,000 ports to deliver 13 petabits per second of bandwidth for a massive AI cluster.
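Those figures are easy to sanity-check. Here is a quick back-of-envelope calculation, which is my own arithmetic under the assumption of 800G ports and the 8223’s 64 ports per box, not numbers Cisco provided beyond the bandwidth and port counts above:

```python
# Sanity check of the scale-across figures cited above, assuming 800G ports
# and 64 ports per 8223 system. This is my arithmetic, not Cisco's.

TARGET_BANDWIDTH_BPS = 13e15  # 13 petabits per second for a massive AI cluster
PORT_SPEED_BPS = 800e9        # 800 Gbps per port
PORTS_PER_8223 = 64

ports = TARGET_BANDWIDTH_BPS / PORT_SPEED_BPS
systems = ports / PORTS_PER_8223

print(f"~{ports:,.0f} x 800G ports")    # ~16,250 ports, in line with the ~16,000 cited
print(f"~{systems:,.0f} 8223 systems")  # roughly 254 fixed 3RU boxes to supply them
```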
Trying to achieve this with older modular chassis would require thousands of them, making the buildout prohibitively expensive, power-hungry and complex to manage. With scale-across, this can be done with only about 2,000 ports, a fraction of the previously estimated 16,000.
Deep buffer differentiator for Cisco
A key part of Cisco’s strategy is its use of deep buffers, a feature typically associated with traditional routers rather than the shallow-buffered switches favored in internal AI clusters. This is arguably the most significant architectural point of divergence from competing approaches such as Nvidia Spectrum-XGS Ethernet. What’s interesting is that deep buffers have not been used for AI infrastructure because they are perceived to slow down AI workloads.
Deep buffers are thought to be detrimental to AI networking, particularly for distributed training workloads, because they cause high latency and jitter, which severely degrade the performance of AI models. The concern comes from the notion that deep buffers repeatedly fill and drain, and that causes jitter in the transmission of data between GPUs.
Though deep buffers prevent packet loss during congestion (microbursts), which is good for throughput, the tradeoff is a phenomenon called bufferbloat. AI workloads, especially distributed training involving multiple GPUs, are highly sensitive to latency and synchronization issues.
To Cisco’s credit, it addressed this proactively on the analyst call and explained how it can overcome the perceived limitations of deep buffers. Cisco’s argument is that it’s not the existence of deep buffers that causes the problem, but rather the congestion that causes them to fill in the first place. As Chopra put it, “The problem is the fact that you’ve done a bad job of load balancing and avoiding congestion control.”
The other thing to realize is that even if the buffers are filling and draining, that doesn’t affect job completion time because AI workloads are synchronous in nature. “AI workloads wait for the longest path through the network to complete, which affects mean transmission time, not the maximum transmission time,” Chopra explained.
The introduction of deep buffers creates better reliability for long-distance, scale-across networks supporting AI workloads. Losing a single packet forces a massive rollback to a checkpoint, a process that is very expensive when training runs last for months. The P200’s deep buffering capabilities are designed to absorb massive traffic surges from training, ensuring performance is maintained and power is not wasted on reprocessing. Through good congestion management, Cisco can deliver the benefits of deep buffering without the historical downsides.
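Chopra’s point about synchronous workloads can be illustrated with a toy calculation. The sketch below is my own illustration, not Cisco’s model of its network behavior: it simulates per-path transfer times for one collective operation, where every GPU waits at the barrier until the last transfer finishes, so step time tracks the maximum path time rather than the mean that buffer fill-and-drain nudges around.

```python
# Toy illustration of why synchronous AI workloads care about the slowest path.
# This is my own sketch, not Cisco's model of its network behavior.

import random

random.seed(7)

def training_step(num_paths: int = 64, base_ms: float = 10.0, jitter_ms: float = 2.0):
    """Simulate per-path transfer times for one synchronous collective."""
    times = [base_ms + random.uniform(0, jitter_ms) for _ in range(num_paths)]
    return sum(times) / len(times), max(times)

mean_ms, step_ms = training_step()
print(f"mean transfer: {mean_ms:.2f} ms, step completes in: {step_ms:.2f} ms")

# Buffering that nudges the mean up or down barely moves job completion time;
# a single congested or lossy path that stretches the maximum (for example, a
# retransmit after packet loss) is what delays the whole step, which is the
# failure mode deep buffers are meant to prevent.
```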
Security and flexibility future-proof the AI fabric
Recognizing the criticality of the data moving across data centers, security is baked deep into the 8223. The system offers line-rate encryption using post-quantum resilient algorithms for key management, a level of future-proofing essential for multiyear AI training jobs. Furthermore, a Root of Trust is embedded in the silicon, guaranteeing integrity from manufacturing to deployment, safeguarding against physical tampering.
Also, Cisco is embracing operational flexibility. The 8223 is initially available for open-source SONiC deployments, targeting hyperscalers and large AI data center builders who often prefer open options. Support for IOS XR is coming shortly after, which will allow the platform to serve traditional Data Center Interconnect (DCI), core, and backbone WAN use cases, expanding the total addressable market significantly beyond the core AI-cloud customers.
The P200 chip will also be available in modular platforms and disaggregated chassis and will power the Cisco Nexus portfolio (running NX-OS) for the enterprise data center, ensuring the same foundational technology and architectural consistency across the entire AI ecosystem. This multifaceted deployment strategy positions Cisco to capture a significant portion of the networking equipment TAM in the AI cloud sector, which is well north of $10 billion.
It’s important to note that both Cisco and Nvidia now offer scale-across networking products, with Cisco leveraging deep buffers and Nvidia shallow buffers. Though industry watchers will want to pit one against the other, the reality is that demand for AI networking is so great that both can succeed. Cisco’s approach is ideally suited for distributed AI interconnects where network resiliency is critical, while Nvidia’s is better aligned with scenarios where predictable, minimal latency is an absolute priority for fast training cycles.
AI has created a rising tide, and options are good for customers.
Nvidia Corp. announced new robotics innovations at last week’s Conference on Robot Learning, or CoRL, in South Korea, as the company continues to extend its product line with new capabilities and enhancements.
Nvidia announcements at CoRL included:
- The Isaac GR00T N1.6 open foundation reasoning vision language action model that provides robots with humanlike reasoning to break down complex instructions and execute tasks using prior knowledge and common sense.
- Availability of the open-source Newton Physics Engine in the company’s Isaac Lab. Co-developed with Google DeepMind and Disney Research, it enables the creation of more capable and adaptable robots.
- New Cosmos World Foundation models that give developers the ability to generate diverse data for accelerating the training of physical artificial intelligence models at scale.
“Humanoids are the next frontier of physical AI, requiring the ability to reason, adapt and act safely in an unpredictable world,” Rev Lebaredian, vice president of Omniverse and simulation technology at Nvidia, said in an analyst briefing before the conference. “With these latest updates, developers now have the three computers to bring robots from research into everyday life — with Isaac GR00T serving as the robot’s brains, Newton simulating their body, and Nvidia Omniverse as their training ground.”
Lebaredian said nearly half of the papers accepted at CoRL cited the use of Nvidia technology, including accelerated computing platforms and software libraries.
Nvidia’s long reach in the robotics world
Nvidia has taken an architectural approach to physical AI with a three-computer system for building robots: Blackwell DGX or HGX AI systems to train the physical AI model, RTX PRO servers and Omniverse with Cosmos to simulate and test physical AI models, and Jetson AGX Thor, powered by Blackwell, for on-robot inference. “We build the three computers, plus the open models, open-source simulation frameworks and data pipelines that run on these computers for physical AI developers,” explained Lebaredian.
Robots that reason
The updates to GR00T N1.6 include using Cosmos Reason as its long-thinking brain. Cosmos Reason is an open, customizable reasoning VLM, or vision language model, for physical AI. It will bring human-like reasoning to humanoids, allowing them to break down complex instructions and execute tasks using prior knowledge and common sense. It will also let humanoids move and handle objects simultaneously with more torso and arm freedom to complete tough tasks like opening heavy doors while carrying items.
“Reasoning is the best tool we have currently for extending from the set of things that we fed into the AIs for their training into novel new paradigms and environments,” Lebaredian said.
Unlike large language models that can be trained using human knowledge from the internet, such data doesn’t exist for training physical AI models according to Lebaredian. “Real-world data is costly and potentially dangerous to capture, and pre-training only goes so far,” he explained. “To train models like GR00T, we need a scalable and cost-effective way to generate large, diverse and physically accurate data.”
To provide that training, Nvidia announced that new versions of the Cosmos-Predict and Cosmos-Transfer World Foundation models will be available soon.
Cosmos-Predict is a breakthrough in robot training because it generates future states from an initial state, providing the data that’s lacking today. This new release unifies three separate models into one, cutting post-training time, which reduces complexity and lowers compute cost. It delivers higher quality than both previous versions and open-source models of similar size. It now supports multiview outputs for multisensor robots and autonomous vehicles, and can produce videos of up to 30 seconds.
“Cosmos-Transfer performs world-to-world style transfer, and the latest version is 3.5 times smaller than before,” Lebaredian continued. “This smaller footprint lowers compute cost and makes it easier for developers to augment and scale training data. Together, these models enable the generation of hundreds of virtual sensor-rich environments for robot training, reducing reliance on real-world data collection.”
The role of supercomputers
As “the era of robot reasoning” begins, Lebaredian said “robots need a supercomputer that can power the entire system from brain to body.” Nvidia’s announcement last month of Jetson Thor, which is designed for physical AI and robotics and is powered by a Nvidia Blackwell GPU and 128 gigabytes of memory, delivers “the AI performance to run the latest models, including the Isaac GR00T and Cosmos World Foundation models,” he said.
Leveraging open source
In his briefing, Lebaredian addressed how Nvidia’s open-source approach to Newton and Nvidia Isaac GR00T benefits the robotics community, particularly researchers and startups.
“The very nature of how research happens, how we advance the frontier of human knowledge, is about openness,” he said. “It’s about sharing information between all of the researchers so they can advance together. The only way to really do that well in the computing world is by also sharing the same software, algorithms and techniques that are both developed and the tools and pipelines behind them used to actually conduct the research.”
With robotics and physical AI, Lebaredian added, “we’re still on the frontiers. It’s critical that we advance these frontiers together. To do that, we have to contribute to open source and do this all in the open so everyone can move together. Nvidia, being in the position that it’s in, can disproportionately help by putting our vast resources behind the software development and the technology necessary to power all of this, and we’re doing so.”
As Cisco Systems Inc. held its WebexOne conference this week in San Diego, to no one’s shock the theme of the 2025 event was artificial intelligence — and, of all the markets Cisco plays in, the ones Webex addresses have the most direct end user impact.
“We are squarely in the next era of AI, where we are moving from chat bots that answer questions to agents that are going to conduct tasks and jobs almost fully autonomously on our behalf,” said Cisco Chief Product Officer Jeetu Patel. This shift has profound implications for enterprises, the most notable being that productivity growth will be based on team productivity and workflow automation rather than individual productivity, which is where we are today.
Cisco has shifted its vision to be in line with AI trends. At the last couple of WebexOne events, the company has worked toward a vision of “Distance Zero,” which uses AI to enable us all to work seamlessly regardless of where we are. Over the last couple of years Cisco has delivered innovation to fulfill that mission, including new hardware such as room bars, ceiling mics and smarter cameras, as well as AI features in Webex such as meeting catch-up, translation and transcription.
During his keynote this year, Patel laid out where AI is going by talking about the evolution of work. Historically, collaboration has been about people interacting with other people. Increasingly, people are interacting with AI, and soon we will have autonomous AI-to-AI communication and task coordination.
Cisco has evolved Distance Zero into what it calls “Connected Intelligence,” its vision for the future of the workplace. Patel described this as “Build a workplace today for the workforce of tomorrow – a workforce that includes both humans and agents communicating with each other.”
All the announcements at WebexOne 2025 were in support of this. Here are the notable ones:
New AI agents for Webex Suite
Cisco is integrating specialized AI agents directly into the Webex platform to tackle specific tasks:
- Task Agent: Automatically generates action items from meeting transcripts and can perform tasks in third-party applications such as creating a ticket in Jira.
- Notetaker Agent: Captures real-time transcriptions and summaries of meetings, including impromptu in-person sessions.
- Polling Agent: Proactively listens to a meeting’s discussion and generates live polls to boost audience engagement and gain instant participant input.
- Meeting Scheduler: Autonomously identifies the need for follow-up meetings, finds open times and automatically schedules meetings.
- AI Receptionist for Webex Calling: An always-on virtual assistant that handles routine inquiries, transfers calls, and schedules appointments.
RoomOS 26 for Cisco devices
Cisco’s biggest competitive advantage in collaboration is its devices. A little under a decade ago, the company made the strategic decision to load these devices with Nvidia Corp. chips, which allows them to perform much of the AI processing, such as background noise removal, on the device instead of burning PC processing capacity. The newest version of its device operating system includes the following new features:
- Audio Zones: This feature allows information technology teams to digitally define boundaries within a physical meeting space. The AI-powered Ceiling Mic Pro then uses this information to focus its pickup, effectively blocking distracting noises and background sounds from predefined exclusion areas (a conceptual sketch of the idea follows this list). It is ideally suited for large meeting rooms and shared spaces.
- Director: This AI agent provides cinematic camera control. It autonomously uses embedded camera intelligence to anticipate and adapt to the flow of the meeting, creating engaging views by automatically framing, zooming and switching between speakers and presenters.
- Notetaker Agent Integration: This enables the above-mentioned Notetaker Agent to be used in in-person meetings and impromptu huddles by transcribing and summarizing the discussion in real time. It obviates the need for a worker to turn on a third-party note-taking agent on a laptop or mobile device.
- Workspace Advisor Agent: This agent uses the sensors and cameras in Cisco devices to create a 3D “digital twin” of the physical meeting room, giving IT teams the data to optimize room design and ensure everyone in the room has a great experience.
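To make the Audio Zones concept concrete, here is a conceptual sketch in Python. It is my illustration of the basic idea, not Cisco’s RoomOS implementation: IT defines zones in room coordinates, and sound sources located inside an exclusion zone are suppressed.

```python
# Conceptual sketch of the Audio Zones idea: suppress sound sources whose
# estimated position falls inside an exclusion zone. Illustrative only;
# this is not Cisco's RoomOS implementation.

from dataclasses import dataclass

@dataclass
class Zone:
    name: str
    x_min: float
    x_max: float
    y_min: float
    y_max: float
    exclude: bool  # True = block audio originating here

    def contains(self, x: float, y: float) -> bool:
        return self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max

zones = [
    Zone("meeting table", 0.0, 4.0, 0.0, 3.0, exclude=False),
    Zone("coffee corner", 4.0, 6.0, 0.0, 2.0, exclude=True),
]

def keep_source(x: float, y: float) -> bool:
    """Keep a sound source unless it sits in any exclusion zone."""
    return not any(z.exclude and z.contains(x, y) for z in zones)

print(keep_source(1.5, 1.0))  # True: a speaker at the table is picked up
print(keep_source(5.0, 1.0))  # False: chatter in the coffee corner is blocked
```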
Devices present an interesting opportunity for Cisco. While there are many device manufacturers, Cisco’s are unique in that they are loaded with features beyond a typical device. The devices have ThousandEyes integration, which makes troubleshooting much easier.
They are also loaded with sensors to understand environmental conditions and, as mentioned earlier, have Nvidia graphics processing units for AI capabilities. One could argue that collaboration endpoints are now critical branch infrastructure on par with Wi-Fi access points and security devices. This could create significant upside for the collaboration business.
AI Canvas integration with Control Hub
At Cisco Live in June, the company rolled out AI Canvas, a generative AI workspace designed to transform how IT teams manage, troubleshoot and automate operations across the infrastructure stack. The product works with other Cisco management tools and is designed to enable multiple teams to work together. At launch, AI Canvas was designed to work with Cisco network infrastructure.
Since then, the company has integrated security and Splunk into it. At WebexOne, Cisco announced integration with Webex Control Hub, which will make it simpler to troubleshoot collaboration problems. Control Hub is widely used by the Cisco customer base, and the AI Canvas integration takes what are typically solitary processes and turns them into collaborative ones.
AI quality management
This feature is for Webex Contact Center and oversees and optimizes the performance of both human and AI agents. While there are many quality management tools, they focus on either human or virtual agents. Cisco provides contact center supervisors with a single view of quality, with AI-assisted scoring, real-time insights, coaching for human agents and performance recommendations for AI agents. As contact centers start to blend humans and AI agents, quality management that spans people and machines will become a requirement. I believe Cisco is the first vendor to have brought these two areas together.
Final thoughts
Overall, the tone of WebexOne 2025 shifted from last year. The 2024 edition was largely still evangelizing AI. This year it was about putting AI into action. I talked to several customers that are now feeling the urgency to get AI deployed into their workforce and contact center. In his keynote, Patel talked about how AI will enable a 10-fold improvement in productivity and that companies that do not embrace AI will quickly become irrelevant.
The announcements from Cisco are well-aligned to helping organizations boost productivity and improving customer experience. The work from home period created a shift in unified-communications-as-a-service and contact-center-as-a-service buying, and Cisco was on the outside looking in. The rise of AI will create another shuffling in share and this time, Cisco appears ready.
In addition to all the new features, Cisco has made good progress with its platform strategy, in which it has brought security, collaboration, observability and networking together. One of its three focus areas is “Future-Proof Workplaces,” which, in reality, means an AI-enabled workplace. Judging from the product announcements, Cisco is now able to use its platform to deliver a unique offering and take much of that share back.
One last note: The keynote began with a shout-out to Jay Patel, who passed away in September. Jay was senior vice president and general manager of contact center solutions, but, more than that, he was one of the nicest people I’ve met in the industry.
He joined Cisco about four years ago in the acquisition of IMImobile and brought a new attitude to Cisco. The company has always been a strong engineering company but often did things that might be considered good for the company in the short term but not the best thing for customers. “He was obsessed with what’s right for the customer and working backwards from there,” Jeetu Patel said, and that’s why he was so popular with customers.
Rest in peace, Jay Patel.

