Nvidia Corp. recently held an industry analyst briefing on physical artificial intelligence. Chief Executive Jensen Huang has been consistent in every keynote he has done this year: physical AI is the next wave of AI. In fact, he has often stated that eventually anything that moves, from lawnmowers to forklifts to cars, will be autonomous, giving rise to the physical AI era.
Though most people think of physical AI, and the world of robots, as the stuff of science fiction and a niche technology, the benefits will be widespread. I recently talked with a chief information officer at a healthcare organization in the Mid-Atlantic, and he explained that autonomous wheelchairs would enable patients to be taken curbside without tying up a clinician, enabling that clinician to spend more time at a patient's bedside. Retailers can use robots to scan shelves for better inventory control, and anyone who flies United Airlines has likely seen the robot that moves around the lounge collecting used dishes.
Presenter Rev Lebaredian, Nvidia’s vice president of Omniverse and simulation technology, dove deep into this fascinating — and fast-growing — segment of the AI boom.
The physical AI era has arrived
Making AI useful and productive for the real world is the realm of physical AI. But what defines the boundaries between the different types of AI?
“Generative AI involves models we’ve all been using, such as large language models and maybe some image models,” said Lebaredian. “Essentially, you give it some input, and output comes up. With LLMs, the input tokens are text, and the outputs are also text.” But with physical AI, the model is different. “We bring in input that would be the equivalent of what the sensors on a robot would experience,” he explained.
When it comes to AI, the terms “robot” and “physical AI” are general terms for a broader category. This includes humanoid robots, manipulator arms, self-driving cars or anything else that moves. However, physical AI also extends to things such as radio towers — anything that could sense the physical world and then go operate in it.
These robotic devices input sensor data, including the equivalent of what language models also input, such as text and other modes of input. “We can combine our understanding of abstract knowledge in the LLMs along with an understanding of the physics of the world to then output action tokens,” he said. “These are actions that end up controlling an embodiment of the robot. On a manipulator arm, that would be the torques and forces that are created by the motors to change the angles of the points on the robot arm. It could also be the steering, braking, and acceleration of a self-driving car. It could be anything that’s a control signal for the actual body. The application of this is endless.”
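The sensor-in, action-tokens-out loop Lebaredian describes can be sketched in a few lines. This is a deliberately toy illustration under stated assumptions: the class names, the trivial "policy," and the torque-equals-angle-change physics are all hypothetical stand-ins, not an actual Nvidia API or a real learned model.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    camera_frame: list   # pixels from a camera sensor (unused in this toy)
    joint_angles: list   # current pose of a manipulator arm

@dataclass
class ActionTokens:
    joint_torques: list  # control signals for the arm's motors

def policy(obs: Observation) -> ActionTokens:
    # Stand-in for a trained physical AI model: nudge each joint toward zero.
    return ActionTokens(joint_torques=[-0.1 * a for a in obs.joint_angles])

def control_loop(obs: Observation, steps: int = 3) -> Observation:
    # Each cycle: sense -> infer action tokens -> actuate.
    for _ in range(steps):
        action = policy(obs)
        # Toy actuation: applying a torque shifts the joint angle by that amount.
        obs.joint_angles = [a + t for a, t in
                            zip(obs.joint_angles, action.joint_torques)]
    return obs
```

The point of the sketch is the shape of the loop, not the physics: a real system would replace `policy` with a learned network and the actuation line with motor commands.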
Real-world applications
Lebaredian said while there have been breakthroughs and steady progress in physical AI development and refinement, there’s plenty of work to be done in this segment of the industry. “The things we need to do in the real world are obviously extremely valuable. Once we crack this problem of physical AI, we can enhance everything from factories and warehouses to all of transportation, and humanoid and other robots can do the equivalent of human physical labor.”
Why is this important? Labor shortages present a real challenge to industries and companies that are seeking to grow, and in many industries those shortages are massive. Businesses are having a hard time hiring enough skilled factory and warehouse workers and people to stock retail shelves. An article on meteorspace.com states that the U.S. warehousing industry is facing a shortage of more than 35,000 workers, with companies such as Amazon reporting turnover rates of over 150%. So while hiring people is difficult, keeping them is even harder.
Looking ahead, many countries are facing aging and declining populations, creating a situation where producing the same volume of goods year over year becomes increasingly difficult. There are also global supply chain issues, driven by geopolitics, that are causing a great deal of manufacturing once farmed out to hubs in Asia and elsewhere to be reshored, especially in the U.S.
But is physical AI technology ready to take advantage of this opportunity?
The era of general-purpose robots
Are the mechanical and, more importantly, the software and AI technologies needed to build and operate sophisticated general-purpose robots ready for the job? Lebaredian believes so.
“For the first time, we have a line of sight to building the algorithms, to building the brains of a general-purpose, robust robot. The industry had the capability to build physical robots for quite a while. We’ve been introducing mechatronics and robotics into the industrial space for decades now, but we didn’t have the capability of making them intelligent enough so that those robots can see and act autonomously in a general way. We had to program them specifically to do one task repeatedly,” he said.
The invention and evolution of AI have accelerated robot and physical AI development. The massive amount of innovation from Nvidia and the rest of the AI industry has created the ability to build the "brain" Lebaredian referred to and to democratize it across all domains and physical spaces. That wasn't possible five years ago, but today it is.
Nvidia’s role in AI-driven robotics
Lebaredian made it clear that Nvidia isn’t in the business of building robots or other AI devices, such as autonomous vehicles. But the company plays a critical role in making those devices possible and capable of accomplishing vital activities.
The company has had tremendous success building reference architectures for systems across almost every industry it serves, and physical AI is no different. Nvidia comes up with the blueprints for physical AI and then enables others to leverage them. Nvidia has three computing platforms for physical AI: Omniverse and Cosmos on RTX PRO for simulation, DGX and HGX for training, and Jetson AGX Thor for deployment and operation. All three run on Nvidia's popular Blackwell GPUs.
The Jetson AGX Thor provides the brains for the automated vehicles that Nvidia is helping to develop. "That's a very important computer," said Lebaredian. "It needs to be power-efficient while being extremely powerful. It's a specialized kind of computer. It has to be able to deal with lots of sensor data and execute advanced AI models with a lot of compute, but doing that efficiently in a specific power envelope requires lots of specific software to run it.
“All three computers, because they’re built on the same architecture and run the same algorithms, and all of your software is portable between all three, they’re all backwards and future compatible as well, and future proof as well, with all the NVIDIA architectures,” Lebaredian said.
The rise of AI factories
Lebaredian says AI factories powered by Nvidia's DGX and HGX unified AI development platforms play a crucial role in the development of AI-driven robotics. The AI factories take in raw data from the physical world and the tasks required to be executed in it, and they output models, effectively the brain of the robot, which are then uploaded to the Jetson computer.
But even with such cutting-edge development tools and platforms, building physical AI systems is a challenging task. "To create a brain, to create any AI, you need massive amounts of data, and you need massive amounts of the right data," Lebaredian explained. "You need accurate data, and you need well-labeled data for the knowledge space, and that's hard. It's already hard getting all the data: we scrape the internet, we find all this information that's readily available, and it's still not enough."
He says the data just doesn’t exist anymore and collecting it by capturing it through sensors in the physical world “is just too expensive, too time-consuming, and in many cases, too dangerous or even impossible to get with the accuracy we need. You just cannot have enough of the right sensors in the physical world with the accuracy that we need.
“The only way to really generate all of the data we need to collect it is by first taking the rules of the physical world — physics — and replicating it inside a computing system, building a simulator of that physical world, and that simulator becomes a generator for the kind of data that we need to feed into the AI factory, which could then produce the AI algorithms we then deploy,” Lebaredian said.
But the work doesn’t end once the simulators are created. “We also need these simulations, not only as a data generator, but to test them before we deploy those AI brains that we train onto the real robot,” he said. “We need to test them for millions and millions of hours, drive our AI-driven AV vehicles for millions and millions of miles before unleashing them onto the world. And the best place to do that, and the fastest place, the least expensive place to do it, and the least dangerous place, is in simulation.”
The simulation computer fulfills two functions: it's the data generator that feeds the AI factory, and it's also where testing and validation are done before these physical AI systems are deployed in the real world. This is a critical step, as without it, companies would have to build physical environments to test robots. Robots can get damaged when they fall, tests can be incomplete, and it takes a long time to create new scenarios.
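The two roles of the simulation computer can be sketched as a toy program: a trivial physics step serves both as a generator of labeled training triples for the "AI factory" and as the environment for validating a policy before deployment. Every name and the one-line physics here are illustrative assumptions, not Nvidia's Omniverse or Cosmos APIs.

```python
import random

def simulate_step(position: float, velocity: float, dt: float = 0.1) -> float:
    """Trivial stand-in physics: new position under constant velocity."""
    return position + velocity * dt

def generate_training_data(samples: int) -> list:
    """Role 1: the simulator emits (position, velocity, next_position)
    triples, i.e. perfectly labeled data for model training."""
    rng = random.Random(0)  # seeded for reproducibility
    data = []
    for _ in range(samples):
        p, v = rng.uniform(-1, 1), rng.uniform(-1, 1)
        data.append((p, v, simulate_step(p, v)))
    return data

def validate_policy(policy, trials: int) -> bool:
    """Role 2: run the trained policy in simulation and pass it only if
    every trial stays inside a safe envelope, before touching hardware."""
    rng = random.Random(1)
    for _ in range(trials):
        p = rng.uniform(-1, 1)
        for _ in range(100):
            p = simulate_step(p, policy(p))
        if abs(p) > 1.5:  # left the safe envelope: fail validation
            return False
    return True
```

A stabilizing policy (output a velocity opposing the position) passes this gate; a policy that always drives forward fails it, which is the "test in sim before the real world" idea in miniature.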
Success requires cooperation
Though Nvidia is rightfully proud of all the company has contributed to the growth of AI, it will require continued collaboration from all technology sectors to make the full potential of physical AI a reality.
“No one company can solve all of these problems,” Lebaredian said. “It’s just way too large and way too big. We’re building the core computational part of this. We’re building the three computers with all the operating systems for that, but we need many in the ecosystem to build the layers of software on top, to build the physical hardware, and every combination in between. The only way we are going to build physical AI that’s robust, that really addresses all of the needs of the industries I mentioned in that $100 trillion market, is by doing it together.”
Most of the media attention and hype around AI centers on generative AI, but physical AI is right around the corner and will change all our lives. Very soon, vacuums, lawn mowers, golf carts and other consumer devices will be autonomous, as will industrial equipment.


Palo Alto Networks Inc. kicked off the annual Black Hat USA security conference in Las Vegas this week with today’s announcement of its Cortex Cloud Application Security Posture Management solution.
The ASPM offering is designed to fix security issues before cloud and AI applications are deployed. The traditional method of securing apps is a highly fragmented set of manual processes: instead of a single, unified platform, developers rely on a collection of point products and manual processes that are disconnected from each other. This method is often characterized as "tool sprawl" and has no single source of truth.
Cortex Cloud ASPM operates on the concept of moving security to the earliest stages of development, also known as shifting left. Instead of waiting until an application is deployed to find vulnerabilities, the platform integrates directly into the developer’s workflow and continuous integration and continuous delivery or CI/CD pipelines. This allows it to scan code for misconfigurations, compliance violations and other vulnerabilities in the source code, open-source libraries and infrastructure as code templates as well as identify hardcoded API keys and passwords in the code.
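The kind of shift-left check described above, catching hardcoded credentials before code reaches production, can be illustrated with a small scanner that a CI/CD gate might run. To be clear, the patterns and function names below are hypothetical illustrations of the technique, not Cortex Cloud's actual implementation or API.

```python
import re

# Illustrative patterns: generic key/password assignments, plus the
# well-known AKIA prefix shape of AWS access key IDs.
SECRET_PATTERNS = [
    re.compile(r'(?i)(api[_-]?key|password|secret)\s*[:=]\s*["\'][^"\']+["\']'),
    re.compile(r'AKIA[0-9A-Z]{16}'),
]

def find_hardcoded_secrets(source: str) -> list:
    """Return offending lines so a pipeline gate can fail the build."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            findings.append("line %d: %s" % (lineno, line.strip()))
    return findings
```

In a real pipeline the scan runs on every commit and a non-empty result blocks the merge, which is what moves the fix from post-deployment firefighting to the developer's workflow.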
This release extends Cortex Cloud, introduced earlier this year, which combined the company's cloud-native application protection platform, or CNAPP, and its cloud detection and response, or CDR, technologies to deliver real-time security. Palo Alto has been the most active security vendor in evangelizing the value of a security platform, and this is another example of the value of bringing a set of tools together.
In a prebriefing for industry analysts, Cameron Hyde, product marketing manager for application security, said that as Palo Alto moves from Prisma Cloud to Cortex Cloud, the company wants to more tightly align three pillars — data integration, AI-driven intelligence and automation — as it extends these capabilities to the SOC for tight synergies on the underlying data.
One of the discussion points on the call was the impact of AI on coding. While it is certainly true that organizations can write code at a pace never seen before, it’s also true that the accelerated use of AI can push insecure code into production at an equally unprecedented rate. As this happens, traditional application security approaches struggle to prevent risks, only alerting security teams after they’ve already slipped into production.
Customer benefits: Context is king
Palo Alto says Cortex Cloud ASPM fully integrates with and enhances the application security offerings already available in Cortex Cloud to deliver benefits including:
- Risk prevention: Using full application and business context to proactively stop security issues from reaching production by enforcing guardrails without slowing development.
- Prioritization: Avoiding false alarms by pinpointing critical, exploitable risks without requiring developers to use different tools. Leveraging an open ecosystem of native and third-party scanners to correlate findings with full code, cloud, runtime and business context.
- Eliminating manual remediation: Security and development teams can avoid backlogs by applying automation throughout the entire application lifecycle.
“When we talk with customers about prevention, they mostly say they cannot really prevent,” Sarit Tager, vice president of product management, said in the analyst briefing. “They say, ‘It’s too much, the developers will suffer.’ And we point out that without prevention, it may cost more when you go to production, since you’ll need to figure out who actually wrote the code and how to go back and rebuild it. All of that is really expensive in terms of developer time.”
Leveraging AppSec partners
Cortex Cloud features an open AppSec partner ecosystem to enable customer organizations to consolidate data from third-party code scanners into a centralized platform for comprehensive visibility. The goal is to combine native ASPM data with third-party vendor insights to provide organizations with a stronger security posture that doesn’t require them to change tools.
Palo Alto’s AppSec partners include Checkmarx, Snyk and Veracode. The integration with third parties has been a core component of Palo Alto’s platform strategy for the past several years. No security vendor can do everything and by partnering, Palo Alto can fill in the gaps in its platform.
Cortex Cloud ASPM early access is underway, with general availability expected in October.
AI is having a massive impact on coding, and companies of all sizes are now using the technology to spin up thousands of lines of code daily, versus the few hundred that could be accomplished by people. Along with this, organizations need to rethink how that code is secured, using AI-enabled automated systems.


Palo Alto Networks Inc.'s announcement Tuesday of its intent to acquire CyberArk for $25 billion carries a heavy price tag, and its shares fell on the news. But I believe it to be a good long-term strategic move for Palo Alto and a logical extension of its platformization strategy.
Valuation is interesting to look at but highly overrated long-term. If an acquisition is a good one and helps transform a company, then the purchase price won’t matter over time. Consider the purchase of Mellanox Technologies Ltd. by Nvidia Corp., which was almost $7 billion in 2019. Given that it moved Nvidia into networking and was the foundation for innovations like NVLink and NVSwitch, the company could have paid twice what it did, and we still would have looked at it today as a good deal.
CyberArk enables Palo Alto to go after the identity market, which should flourish in the agentic and physical artificial intelligence era. Following the acquisition news, Palo Alto CEO Nikesh Arora (pictured) went on CNBC and discussed the deal with Jim Cramer. "I've always paid attention to markets when they inflect, because inflection points create the opportunity for us to enter markets," he said. "I believe with the AI wave we're seeing, with 88% of all ransomware attacks driven by credential theft, identity is an unsolved problem."
This topic of conversation came up with Arora at an analyst roundtable at the recent RSA Conference. He discussed the concept of allowing agents to complete tasks on our behalf and the security challenges associated with this. A simple example would be to ask an airline’s agentic agent to rebook a flight for you. One would need to give permission for the agent to do that.
The logical extension of this is then to have the airline agent rebook your hotel, car rental, dinner reservations and so on. The challenge with this is do you give your usernames and passwords for the various services to the airline? A third-party agent? A digital twin of yourself? There are many possibilities, all of which will be used to some degree.
Also, with the rise of physical AI, each of those devices needs an “identity” to operate securely within their environments. I recently spoke with a chief information officer of a healthcare organization, and we were discussing using autonomous wheelchairs to take patients curbside, obviating the requirement to have a clinician take the person. That would allow for the clinicians to spend more time bedside rather than doing a task that could be automated. However, in healthcare, security is paramount, creating the need for a holistic identity solution.
CyberArk will plug in nicely with Palo Alto on several fronts. The first is the convergence of privileged access management, or PAM, and identity and access management, or IAM. The two are similar but operate at different levels. The latter is broad in scope and manages identities and permissions for all people, devices and apps. PAM can be considered a specialized subset of IAM where it focuses specifically on securing and managing high-level “privileged” users.
Historically, PAM was more expensive to deploy than IAM, so its use was limited. By rolling it into its platform, Palo Alto can offer PAM at the same cost as IAM, enabling it to be used for every device, user, machine and AI agent. The concept of "proliferation of privilege" has been bandied about for a while, but with standalone platforms it's hard to scale.
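The PAM-as-a-subset-of-IAM relationship described above can be made concrete with a toy model: every identity gets baseline IAM controls, and privileged identities get an additional PAM layer on top. All names and control labels here are hypothetical illustrations, not any vendor's schema.

```python
from dataclasses import dataclass, field

@dataclass
class Identity:
    name: str
    roles: set = field(default_factory=set)

def is_privileged(identity: Identity) -> bool:
    # PAM scopes to high-level "privileged" users such as admins and DBAs.
    return bool(identity.roles & {"admin", "root", "dba"})

def access_controls(identity: Identity) -> list:
    # IAM baseline applies to every person, device and agent...
    controls = ["mfa", "least_privilege"]
    # ...while the PAM layer adds extra safeguards only for privileged ones.
    if is_privileged(identity):
        controls += ["credential_vaulting", "session_recording"]
    return controls
```

The economic point in the paragraph above maps onto this model: if the PAM layer costs no more than the baseline, there's no reason not to apply it to every identity, human or machine.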
Also, this expands Palo Alto's platform capabilities. The identity industry is like every other submarket of security in that it's highly fragmented, with the various vendors each solving a piece of the security problem. Palo Alto has done an excellent job of acquiring point products into its platforms and then using the data to "see" across the attack surface with more breadth and depth. With threat actors continually focusing on identity for breaches, bringing CyberArk into its platform makes sense and is overdue.
The concept of platformization is simple to understand and has been happening for over a decade: new vendors pop up to solve a problem, and as the features get standardized, they get rolled into a larger platform. The best example of this is the next-generation firewall. At one time customers purchased firewalls, intrusion prevention systems, virtual private networks and more as point products. Today, no one does that, as the features were standardized and rolled into the firewall. Similarly, secure web gateways, cloud access security brokers, zero trust and so on were all separate products, and now they've been rolled into a security service edge stack.
In the interview with Cramer, Arora talked about this. "Long term, a billion-dollar revenue company should not be public," he said. "They should be part of a bigger entity which allows for the leverage and scale required to create large amounts of cash flow and high market cap." He was addressing a financial audience here, but the piece Arora omitted is what the larger entity gains: if the technology is integrated correctly, companies can find and react to breaches faster and more accurately.
I'm not sure I agree that there should be no publicly traded security companies with a billion dollars in revenue, but his thesis is correct, particularly in the AI era. Security is now an AI game, which requires data, and lots of it. Point products are limited to the data within their silos, while the platform vendors have a much broader set of data to work with. The platform vendors need to have the technical chops to know what to do with the data, but that's something Palo Alto has shown it is excellent at, as evidenced by its success with the large number of acquisitions it has done.
Agentic agents, robots and AI are coming, and that requires security teams to rethink their approach to identity. Palo Alto scooped up CyberArk to address this, but I'm sure the other identity players will be in the crosshairs of other security companies. Okta, you're on the clock.


Recently Zoom Video Communications Inc. held its annual industry analyst event, Perspectives, at its headquarters in San Jose, and it revealed much about the ongoing evolution of the videoconferencing company.
The company's first act was built on video and given a massive steroid shot during the COVID era, which turned a company few had heard of into a household name. Since then, the company has added a boatload of new features, signed enterprise clients, reduced the churn in its online business, and moved into adjacent markets, most notably contact center.
Despite this, the stock is about the same price it was pre-pandemic, which is partially because the communications industry is out of favor with investors. A bigger factor is that Zoom’s strategy is somewhat misunderstood, and I went to Perspectives to clarify in my mind what its next act will look like. Here are my top five takeaways from Zoom Perspectives.
Zoom is attempting to disrupt work, not communications
Zoom's rise from startup to market leader was accomplished by disrupting the status quo. In a crowded market, Zoom created a product that disrupted on ease of use (the fact that ease of use was a differentiator is another story). Many industry watchers believe Zoom is trying to disrupt communications with an integrated unified communications/contact center offering, but that's not the focus. During his keynote, Chief Executive Eric Yuan (pictured) talked about how "work is broken," and he's right. My research has found that 40% of a worker's time is spent managing work instead of doing the job. This comes from having to flip constantly among documents, e-mail, chat and other applications. Communications is part of this, but Zoom is aiming to use AI and its suite of products to fix all the problems created by more and more applications.
Zoom apps are about the data, not the apps
When Zoom launched Docs and Mail, there was a large amount of skepticism, since it's tough to out-Doc and out-Mail Microsoft when its incremental cost of adding the products is zero because of the way it licenses its products. However, Zoom didn't set out to build a better mail client or Word document. In fact, in both cases, the user interface is OK but nothing that will wow a user. What is valuable, though, is having all the data in a single location. As an example, prior to the event, the analyst relations team at Zoom sent me a document. If I couldn't remember whether it was sent through Zoom Chat or Mail, I would need to search both. Because the data is unified, one search looks across both. Now extend this to all forms of collaboration and then apply AI. Zoom will be unique in its ability to leverage its AI Companion across back-office and front-office workflows. Unseating Microsoft is a significant challenge as those workloads are "free," but Microsoft's apps are siloed, and that could be an Achilles' heel in the AI era.
Industry specialization is a differentiator for Zoom
At the event, one of the more compelling sessions was when Randy Maestre, head of industry marketing, walked me through some of the vertical-specific solutions Zoom has. I found the solutions for front-line workers particularly compelling, as they enable this class of user to easily tap into Zoom Chat, Video, Calling and the like through the apps they are already using. It's easy to give clinicians access to Zoom, but it's difficult to get them to use it if they must flip between Epic and the Zoom client. Zoom has integrations with more than 1,000 apps, many of them focused on users other than the knowledge worker. According to AI4SP, the number of front-line workers is four times the total of knowledge workers, making this a massively untapped market for the UC industry.
The channel is now Zoom's friend
At one time Zoom and its channel partners were heading down a divergent path. Zoom had a reputation for paying partners late, stealing deals out from under them and other activities that caused partners to look elsewhere. Two critical hires for Zoom were Mike Conlon as head of Americas channel and Nick Tidd as head of global channel, both longtime channel vets with experience at companies such as HP, Poly, Mitel and Cisco Systems, whose programs remain the gold standard for the channel. Tidd walked the analysts through a bunch of data, such as quote-to-cash times being reduced, channel volumes going up and other data points that indicate a reversal in channel sentiment. For me, the truth lies in channel feedback, where partners large and small have unanimously told me Zoom's interactions with them have significantly improved and they're bringing the company into more deals. It has been a long, winding road for Zoom's channel program, but it looks like it's heading in the right direction.
Zoom's 2.0 story has yet to be told
Under former Chief Marketing Officer Janine Pelosi, Zoom came to market with a simple story, "Meet Happy," which resonated with millions of people who were forced to work from home and then extended the happiness into their personal lives. Zoom is more than a videoconferencing company now, and it can't rely on that for its mission. In fact, shifting the focus off video is the right thing to do, as video has now become a feature rather than a product. At Perspectives, I asked newly appointed CMO Kim Storin what we should expect from Zoom marketing in the future. Will we see a continuation of Meet Happy or perhaps a pivot to something new? The question might have been a bit unfair given she's only been in the role a couple of months. She said she's currently working on that, but there would be a bridge to the past, as that's what's always made Zoom successful. If Zoom believes work is fundamentally broken, I'd like to see it lean into that premise and be significantly more aggressive in calling out the companies that have broken work, most notably Microsoft and, to a lesser extent, Google. The artificial intelligence era is not for the bashful, and Zoom can use this market transition to get people to think of it in an entirely new way.
The points I've laid out certainly aren't without their challenges. Though I agree with Yuan's premise that work needs a rethink, taking share from Microsoft in its core areas of documents and e-mail won't be easy. However, Microsoft itself did this years ago when its Windows-based products and bundled licenses took share from the likes of WordPerfect and cc:Mail. Others have tried, most notably Google, but Google continues to fumble around with its apps suite.
For Zoom, AI cracked the door open. Time will tell if it has the aggressiveness to step through and be the work disruptor. One final note: The company is sitting on almost $8 billion in cash, giving it a massive war chest to acquire companies to accelerate its journey.


Since launching its Generative AI Innovation Center in 2023, Amazon Web Services Inc. has had one primary goal: help customers turn the potential of artificial intelligence into real business value. Now, the company has invested an additional $100 million in the center to enable customers to pioneer the new wave of autonomous agentic AI systems.
Post-announcement, I talked with Taimur Rashid, managing director of generative AI innovation and delivery, who oversees the center. He told me that education about AI continues to be a big part of the Center’s mission. “As new as generative AI is as a technology, one of the things that we can do to help our customers along that journey is educating them, showing them the art of the possible.”
To make that goal a reality, Rashid said AWS has been steadily expanding its gen AI capabilities. “We’ve added machine learning capabilities, gen AI capabilities and Bedrock, which is a foundational platform for building gen AI applications,” he said. “By also bringing human expertise, we can really help customers with that overall journey.”
As you would expect, the AWS Generative AI Innovation Center isn't a building or campus. It's a global organization of AWS experts who work closely with customers worldwide to help them navigate the technology, learn what AI can offer, and build AI capabilities at scale. Working with the center, customers can launch deployment-ready solutions in as little as 45 days. It's this combination of collaboration, curated content and expert support that makes the center unique.
The human factor is key
AWS believes that there is an important role for people to enable gen AI to deliver on its promise to benefit a wide range of customers. “We are a multidisciplinary team of AI strategists, and forward-deployed engineers,” Rashid said. “We can really be very intentional about helping customers with how to look at gen AI, and then from there, productionizing systems so that they can ultimately get the business value out of it.”
He also noted that customers want to educate their teams. "They want to ensure that they can utilize the technology in the best way. What are the learnings? What are the best practices and approaches?" he added. "That's where we help bridge that gap. Our most experienced customers in the enterprise space, all the way to medium-size companies and even emerging startups, have reached out to us saying, 'We need some unique help with how we look at model customization.'"
One example he pointed out: "RobinAI, with its AI platform for the legal industry, is a great example of that. They specifically wanted to fine-tune models to help lawyers and paralegals process hundreds of pages, and they got our expertise around that too."
Another customer that's working closely with the AWS team to ensure it gains the full benefits of gen AI is Jabil, a large manufacturing company. Rashid explained that in just three weeks, it deployed an intelligent shop-floor assistant using Amazon Q with more than 1,700 policies and specifications across multiple languages, reducing the average troubleshooting time while improving diagnostic accuracy. AWS offers technical help, but as Jabil started to adopt the assistant, it required some guidance to optimize the cost and make it more efficient.
The center can help organizations kickstart their AI plans. Almost every business and information technology leader I have talked with has dozens, even hundreds of proposed AI projects. The technology is so new that most customer teams are not yet fully equipped with gen AI skills.
They have literacy around data and experience with classical machine-learning models, but when you look at gen AI, they are dealing with a plethora of large language models. Customers want help to determine which model to use. The AWS Generative AI Innovation Center helps customers better understand how gen AI can be used most effectively.
Not surprisingly, Rashid said the gen AI choices available to the typical company can be overwhelming. “A senior executive from a travel and hospitality company told me they had identified 300 use cases and needed help prioritizing them,” he said. “There’s a whole rubric of things that we help customers with, because either the technology is too new, or their teams have not been upskilled on it. We do it for them, which not only helps the customer navigate the space, but we teach them as we go, so they can be more self-sufficient over time.”
Past is prologue
When AWS opened the center in 2023, customers looked at chatbots as their best AI entry point. “As they gained experience and saw all the things they could accomplish with AI, we saw more use cases around content summarization or generation,” Rashid recalled. “It’s like how things quickly progressed at the advent of cloud computing.”
Like gen AI, he added, “cloud was a new emerging technology; a paradigm shift for many people. So, we invested quite heavily in teaching customers, enabling coursework through training and certification. We’re making very similar efforts with AI, too. In fact, I think with AI we must be a lot more intentional, because it’s not only a technical competency that we have to educate customers on. We have to show it in a more immersive way.”
Leveraging partners
Partners are a key part of the Innovation Center’s work. Last year AWS started a Partner Innovation Alliance that brings a subset of its gen AI competency partners closer to the center and teaches them the center’s methodologies and approaches. As a way of scaling, AWS is taking the best practices it has learned along the way and educating its partners. It currently has 19 partners in the Innovation Alliance, including Deloitte, Booz Allen Hamilton and Capgemini. There are also several boutique partners, born-in-the-cloud or digital-native consulting firms, as well as partners providing regional coverage in markets such as Korea and Latin America.
AWS also has Innovation Center teams in various geographies around the world. “There’s a broad set of things that every region looks at from a gen AI perspective,” Rashid said. “In the Middle East and Africa — and even in Europe — we see a huge emphasis around sovereign AI. Customers are asking how they could use AI to advance many aspects of their society and their nations from health care and government services to education. What’s nice about how we’re structured is we have resources within those regions that can respond very quickly and in alignment with our regional sales teams to meet some of the unique needs that we see in different geos.”
Embracing startups
The AWS Generative AI Innovation Center team is also prioritizing working with startups. Though AWS has a long history of working with startups, it has become more methodical about it of late.
Startups bring unique technology. By bringing this audience into the Innovation Center, AWS can help startups get enterprise-ready so they can jointly service customers. This is an obvious win for the startup, but also for AWS, as it creates consistency in the customer experience.
Avoiding agent overload
As in most areas of life, there can be too much of a good thing in the world of agentic AI. Specifically, as agentic AI continues its explosive growth, how can organizations avoid having 100 applications that come with 100 agents all trying to chat at users and give advice on what to do?
That’s one of the goals of AWS’ recently announced preview release of Amazon Bedrock AgentCore, which enables customers to securely deploy and manage a large number of agents.
“During a recent trip to New York, every agent conversation I had was about ‘how should we think about this world of integration and permissions when it comes to agentic AI?’” said Rashid. “That’s why the launch of AgentCore is so timely. The primitives [foundational, reusable building blocks that enable AI systems to act autonomously and achieve complex goals] that are offered through AgentCore help establish not only integration, which is one aspect, but then the data permissions that must go with it.”
Ultimately, he added, as companies get their agents to learn, reason and then act, permissions become very important. “Right now, we have building blocks which are important — such as MCP [Model Context Protocol] and AgentCore,” he said. “It’s about how you put them together to integrate them into the existing fabric of the application without having to do a massive overhaul. Over time, companies and teams will get data better integrated. They’ll get a more specific application strategy, but I do think you’ll see a lot of agents. We’re early in that cycle right now, but it’s very important for us to guide customers to avoid the problem.”
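The article doesn’t show AgentCore’s actual interfaces, but the integration-and-permissions problem Rashid describes can be illustrated with a minimal, purely hypothetical sketch: a central gateway that denies tool calls by default and only executes what an agent has been explicitly granted. Every name below (AgentGateway, lookup_order, support_agent) is invented for illustration; this is not the AgentCore API.

```python
# Hypothetical sketch of deny-by-default agent tool permissions.
# None of these names come from AgentCore; they illustrate the concept only.
class AgentGateway:
    def __init__(self):
        self._grants = {}  # agent name -> set of tool names it may call
        self._tools = {}   # tool name -> callable that performs the action

    def register_tool(self, name, fn):
        self._tools[name] = fn

    def grant(self, agent, tool):
        # Record that this agent is allowed to call this tool.
        self._grants.setdefault(agent, set()).add(tool)

    def invoke(self, agent, tool, *args):
        # Deny by default: only explicitly granted tools may be called.
        if tool not in self._grants.get(agent, set()):
            raise PermissionError(f"{agent} may not call {tool}")
        return self._tools[tool](*args)

gateway = AgentGateway()
gateway.register_tool("lookup_order", lambda order_id: {"id": order_id, "status": "shipped"})
gateway.grant("support_agent", "lookup_order")

print(gateway.invoke("support_agent", "lookup_order", "A-123")["status"])  # prints: shipped
```

The design choice matters: because permission checks live in one gateway rather than in each of the "100 agents," adding or revoking access is a single-line policy change instead of an application overhaul.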
There isn’t a company I talk to that isn’t interested in gen AI, but new landscapes can be confusing and hold customers back. The AWS Generative AI Innovation Center is an excellent resource for AWS customers to understand the technology, learn how to deploy it and ensure that as they scale up gen AI, they are maximizing benefits while reducing risk.


The artificial intelligence sprint is on, and not just within companies: This race is being held at a geographic level as well.
The Middle East has been very active with AI, as has India and, of course, the U.S. This week the Indonesian government is taking a major step toward establishing itself as an AI thought leader and achieving its sovereign AI goals by supporting the efforts of Nvidia Corp., Cisco Systems Inc. and the Indonesian telecommunications leader Indosat to establish an AI Center of Excellence in the country.
The project will build on sovereign AI initiatives announced last year by Indonesian tech leaders and Nvidia. The CoE will support Indonesian AI research, develop local AI talent, and help startup companies deliver innovations to build out the nation’s AI infrastructure.
With the support of Indonesia’s Ministry of Communications and Digital Affairs (Komdigi), the CoE will include a new Nvidia AI Technology Center that will leverage Nvidia’s Inception program, which provides technical expertise and co-marketing support to startups. It also will offer training and certification from the Deep Learning Institute to help nurture local AI talent.
Vikram Sinha, head of Indosat, said the company believes “AI must be a force for inclusion — not just in access, but in opportunity. With the support of global partners, we’re accelerating Indonesia’s path to economic growth by ensuring Indonesians are not just users of AI, but creators and innovators.”
Golden 2045 vision
The CoE will include an AI factory, which is specialized infrastructure to create value from data by managing the entire AI lifecycle. It also will feature a full-stack Nvidia AI infrastructure ranging from the company’s Blackwell graphics processing units and Cloud Partner reference architectures to its AI Enterprise software.
The center’s Sovereign Security Operations Center Cloud Platform is a Cisco-powered system that combines AI-based threat detection, localized data control and managed services for the AI factory.
The CoE is part of an Indonesian initiative called Golden Vision 2045. The project is focused on using digital technologies to bring together government, enterprises, startups, and higher education to drive cross-industry productivity, efficiency, and innovation. The target date is significant as 2045 will mark 100 years of Indonesian independence.
Core AI pillars
The CoE has four key goals for driving Indonesia’s AI strategy:
Sovereign infrastructure: To bolster Indonesia’s digital future and cultivate domestic innovation, Indosat and Nvidia are collaborating on the expansion of the nation’s premier sovereign AI infrastructure. This new platform is engineered for scale and high performance, emphasizing national self-sufficiency in AI. It will provide a secure, high-performance environment for AI operations, specifically tailored to help Indonesia achieve its digital aspirations. A key component of this effort is Indosat’s AI Factory, Lintasarta, which will be the first entity in Southeast Asia to integrate the Nvidia GB200 NVL72, specifically designed to enhance generative AI and high-performance computing capabilities.
Secure AI workloads: To protect Indonesia’s digital assets and intellectual property, Cisco will provide the infrastructure to connect and secure the country’s information and assets. This infrastructure will have security features embedded within the network, forming a resilient backbone for the nation’s AI Center of Excellence. Central to this effort is a Sovereign Security Operations Center Cloud Platform, which marks the first time Splunk and Cisco’s Managed Security Services Solutions have been used together in Indonesia. This SOC will combine AI-driven threat detection with local data controls and effortless integration with national systems, empowering Indonesian organizations to effectively secure their digital holdings and meet regulatory requirements.
AI for all: The AI Center of Excellence is on a mission to ensure hundreds of millions of Indonesians gain access to AI by 2027, made possible by Indosat’s widespread mobile network infrastructure. This push is fundamentally about democratizing AI, removing geographical divides and fostering a new generation of empowered developers throughout the country. The ultimate vision is a future where AI’s benefits are universally shared among all citizens.
Talent and ecosystem development: The center is making a substantial investment in Indonesia’s human capital, aiming to train 1 million individuals in critical digital areas like networking, security and AI by 2027. This ambitious target is supported by both Nvidia and Cisco. Nvidia will facilitate this through its AI Technology Center for research, its Inception program for startups, and its Deep Learning Institute for professional development. For its part, Cisco will leverage its Networking Academy to deliver training, as part of its commitment to upskill 500,000 Indonesians by 2030. These combined efforts are designed to create a future-ready workforce that can drive Indonesia’s digital economy.
Already, more than two dozen independent software vendors and startups are using Indosat’s AI infrastructure to develop technologies for accelerating and improving workflows in areas such as higher education and research, food security, bureaucratic reform, smart cities, mobility and healthcare.
Plans include Indosat and Nvidia developing and deploying AI-RAN (artificial intelligence radio access network) technologies capable of reaching larger audiences by using AI over wireless networks. And the government is developing trustworthy AI frameworks consistent with Indonesian values for the safe, responsible development of AI and related policies.
This type of public-private partnership could serve as a blueprint for development and deployment of AI-driven initiatives in other countries, which would help level the playing field to help ensure smaller and developing nations can take advantage of the AI revolution.
The investment Indonesia is making could pay big dividends in a short period of time, as AI promises to reshape the global economy much as the internet did. By investing in its citizens, Indonesia is ensuring that as AI evolves and gets embedded into the fabric of the way we live, the country will be ready to capitalize on the opportunity.


With all the hype around artificial intelligence, work should be easier and more efficient these days. But that’s far from the truth.
Employees are still spending hours searching and sifting through information and second-guessing AI responses. Coveo Solutions Inc.’s latest Employee Experience Relevance Report takes a closer look at how workplace tools are falling short on employee experience and why more companies need a better approach to knowledge discovery.
The report, based on a survey of 4,000 people who work for companies with 5,000 or more employees across the U.S. and U.K., uncovered growing frustration. On any given day, employees navigate through about four different systems just to find the information they need to get their work done. That translates to spending nearly three hours a day just searching for the right information. Many believe the information they find is irrelevant, and roughly half have run into issues with AI hallucinations.
Nearly half of the employees surveyed report feeling frustrated when they don’t have access to the right tools or information. Frustration related to unhelpful tools has only increased over the past few years, from 28% in 2022 to 40% in 2025. Meanwhile, confidence has dropped. Skilled employees are second-guessing the quality and speed of their work because their company’s systems aren’t keeping up.
These frustrations are part of a broader problem: Information is scattered across too many systems. Even as companies adopt more AI tools, a quarter of employees don’t know where to look when they need answers. The same tools organizations have put in place to fix disconnected systems are still ineffective in many cases, according to employee feedback from the survey: intranets (31%), enterprise search tools (24%) and generative AI (15%).
When using gen AI, trust is a challenge. A whopping 42% of employees fact-checked answers provided by AI tools in 2025, up from 36% in 2024. Trust is only slightly higher in enterprise-approved tools than in public tools such as ChatGPT, with just 17% of employees fully trusting responses from internal systems and 14% trusting public tools. Gen Z and Millennials are the most likely to double-check AI-generated responses, with fact-checking rates of 47% and 44%, respectively.
There’s a good reason for the skepticism. Nearly half of the respondents have experienced an AI hallucination, with 22% saying it happened during work. These aren’t limited to minor issues, either. They’re happening in core business functions such as software development, information technology and executive leadership, with weekly hallucinations reported by 60% or more of respondents in each group. Hallucinations are also prevalent in industries that demand quick decision-making, such as software, IT, finance and accounting.
AI hasn’t delivered on the promise of reducing the time it takes to find information. Employees are still moving among multiple systems and sources to search for information needed to do their job. Those in tech roles navigate five or six systems on average.
And yet, the top reported use case for generative AI isn’t employee productivity, it’s customer self-service (34%), followed by knowledge management (28%) and data analytics (26%). Employee productivity is lower on the list at 26%. The findings suggest that many AI deployments are still more customer-focused than employee-facing.
Across large organizations, looking for information amounts to millions of hours wasted each year. Gen Z employees tend to spend the most time searching for information, compared with other age groups such as Baby Boomers. Technical and customer-facing roles are most affected when information isn’t easily accessible. Many (27%) employees can’t find urgently needed information, such as when helping a customer or closing a deal.
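The "millions of hours" claim is easy to sanity-check with the report's own numbers. The sketch below assumes a 5,000-person firm (the survey's minimum company size) and roughly 250 working days a year; both the headcount and the working-day figure are my assumptions, not numbers from the report.

```python
# Back-of-the-envelope check of the time-wasted claim.
employees = 5_000    # assumed: the survey's minimum company size
hours_per_day = 3    # from the report: "nearly three hours a day just searching"
working_days = 250   # assumed: typical working days per year

wasted_hours = employees * hours_per_day * working_days
print(f"{wasted_hours:,} hours/year")  # prints: 3,750,000 hours/year
```

Even at the survey's smallest qualifying company size, the arithmetic lands in the millions of hours, which is consistent with the report's framing.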
Adding to the challenge is the sheer volume of irrelevant content employees have to sift through. According to the report, 42% of the information employees encounter doesn’t even apply to their role. That figure hasn’t changed much over the past three years. In some sectors such as electronics and hardware, it’s worse, with 51% of content deemed irrelevant. In field or technical support roles, it’s about 47%.
It’s clear from the report findings that current systems aren’t helping people focus as they should. The hope is that gen AI could turn things around. According to 42% of the employees surveyed, their organizations have already invested in gen AI tools and training aimed at improving information search in the workplace.
However, for gen AI to be beneficial, a few things must happen next. Organizations should shift their focus from giving employees access to information to making it relevant. That means grounding AI in reliable data, cutting down on system sprawl, and making sure employees can trust the answers they get. Otherwise, organizations risk having technology that creates more complexity than clarity.
I had a chance to talk to Coveo Chief Operating Officer John Grosshans about the results and how Coveo helps customers with the challenges cited in the study. “Coveo’s job is to find the best answer to the most complex questions,” he said. “The agentic AI era is transforming employee and customer experiences. Our AI-powered search and product discovery enables organizations to remove the friction found at work today, which reduces cognitive overload and delivers the right insight at the right moment.”
The results of the study are consistent with my research, which found that employees spend up to 40% of their time managing work instead of doing work. This comes from having information scattered across multiple applications and systems; it’s impossible to remember where to look for certain information. When the answer to a question lies in multiple systems, the user becomes the integration point for the data, which leads to errors and more frustration. Coveo’s AI-enabled search works across systems, taking something that was a huge point of frustration and making it easy.


Arista Networks Inc. today announced it will acquire VeloCloud SD-WAN from Broadcom Inc., a deal that puts to bed ongoing reports that surfaced about six weeks ago.
The purchase, for which a price wasn’t given but was reportedly about $1 billion, gives Arista a best-in-class software-defined wide-area network offering to complement its current high-end 7000 series routers, which run the Cloud EOS operating system. VeloCloud has a wide range of cloud-managed SD-WAN offerings with integrated security, enabling Arista to reach another tier of customer.
VeloCloud was one of the pioneers in SD-WAN and has found a wide range of use cases for its products, which support not only traditional connectivity but also Wi-Fi and 5G for WAN access. At previous MWC events, VeloCloud demonstrated how its SD-WAN was being used to connect emergency vehicles.
For Arista, VeloCloud fills a gap in its portfolio, enabling it to offer end-to-end networking solutions to a wider range of customers. The company was founded on offering high-end switching for data centers and high-frequency trading. From there it expanded, adding Wi-Fi with the acquisition of Mojo Networks and then a line of campus switches.
Its high-end routers are used by cloud providers, service providers and large enterprises but branch office connectivity was a gap. Now it adds one of the leading SD-WAN firms and can better compete when an end-to-end solution is required. Gartner Inc.’s last SD-WAN Magic Quadrant had VeloCloud as one of six leaders.
Also, VeloCloud brings integrated firewalls and other security features to add to Arista’s threat management, network detection and response, and network access control capabilities. Arista isn’t known as a security vendor but has quietly built a strong portfolio. Though it’s not likely to lead with security, it has a broad enough set of capabilities to address the emerging trend of integrating security into the network.
With respect to artificial intelligence, VeloCloud currently offers VeloRAIN (Robust AI Networking) and VeloBrain. The former uses AI to improve the security and performance of distributed AI workloads (VeloCloud for AI). The latter, announced earlier this year, is an AI operator that helps network engineers better manage their environments (AI for VeloCloud), addressing both sides of the AI coin. Arista was early out of the gate with networking for AI, as its high-performance products are ideally suited to the rigors of AI. It also offers an autonomous virtual assistant, AVA, for AI operations.
The most obvious question with this acquisition is how Arista integrates VeloCloud with its products. One of the key differentiators for Arista has been that EOS is the single operating system that powers all its products. This has enabled everything to be managed through CloudVision and for the Arista DataLake to be the single source of data for AI operations.
Though I have not talked to the company about this, history shows that when it does make acquisitions, it eventually ports the products to EOS. However, VeloCloud is much bigger than Mojo, Awake, Big Switch and other purchases and has a massive installed base, so the company will need to tread carefully here and ensure customer disruption is minimized. Arista tends to err on the side of caution when it comes to its customers, so my assumption is Arista will keep VeloCloud on its own operating systems and gracefully migrate over time.
For VeloCloud, this ends the long quixotic journey for a company that never quite fit inside the company that owned it. When VeloCloud became part of VMware, it was known as one of the leaders in SD-WAN.
However, then-CEO Pat Gelsinger was very aggressive in his tone about how VeloCloud and Nicira would disrupt network engineers, which obviously wasn’t popular with the audience of people buying the products. VMware dropped the VeloCloud branding and, despite having some of the best technology in SD-WAN, that opened the doors for competitors to start taking share. Eventually, VMware brought back the branding as “VeloCloud by VMware,” which had better appeal to networkers, removing a big barrier in the sales process.
When the Broadcom acquisition took place and the strategy was to go all-in on VMware Cloud Foundation, it became obvious VeloCloud would need to find a new home. At MWC25, the booth was branded VeloCloud by Broadcom, but in my conversations with executives it seemed that was temporary, although removing the VMware association did seem to help with the brand.
With Arista, VeloCloud finds itself with a company that has one motivation, and that’s to make the best-performing networking products. That’s a win for VeloCloud but more importantly its customers.
This adds to a busy week for Arista. The Gartner Magic Quadrant for Wired/Wireless LAN dropped, and the company was rated the highest in ability to execute for companies in the Visionaries Quadrant. Adding to its current portfolio of products, Arista this week announced the following new products:
- The 710XP, a compact, fanless 12-port PoE switch that is also 60-watt capable designed for remote office and branch office deployments.
- The O-435, a ruggedized outdoor Wi-Fi 7 access point built for industrial and outdoor environments.
- The C-400, an entry-level 2×2 tri-radio Wi-Fi 7 indoor access point aimed at cost-effective, high-volume service provider-managed branch environments, including small and medium-sized businesses, multi-dwelling units and small remote offices.
One final piece of news: Todd Nightingale has joined Arista as president and chief operating officer. Nightingale most recently served as chief executive of Fastly Inc., but prior to that he was executive vice president and general manager of Cisco Systems Inc.’s Enterprise Networking and Cloud division.
Nightingale joins a stacked Arista leadership team that includes Ken Duda, Mark Foss, John McCool and, of course, CEO Jayshree Ullal. Nightingale’s experience at Meraki, one of the early cloud networking vendors focused on ease of use, will be invaluable to Arista as it looks to broaden its appeal to a wider range of customers.


Now that Hewlett Packard Enterprise Co. and Juniper Networks Inc. have this weekend settled with the U.S. Department of Justice, the $14 billion deal that will bring Juniper’s networking and security assets into the HPE portfolio can move forward, enabling HPE to continue its transformation into a networking-first company.
Many industry watchers, me included, had started to wonder if any agreement could be reached before the upcoming July 9 court date. Last week HPE held its annual “Discover” user event, and it was clear that, while the company was optimistic the deal would close, it was still pushing forward with driving new artificial intelligence features into the Aruba networking portfolio. The DOJ’s assertion was that Cisco plus a combined HPE-Juniper would control over 70% of the U.S. enterprise wireless local-area networking market, creating a competitive imbalance.
In my opinion, this has always been a weak argument, as Cisco’s share is so large that it plus almost anyone would have dominant share. Also, history has shown that dominant share can be eroded quickly when markets transition. At one time, Cisco owned more than 90% of the branch router market, but the rise of software-defined wide-area networking created a transition that enabled VMware VeloCloud, Fortinet, HPE and others to grab big chunks of share. The smartphone market, once dominated by Palm, was usurped by BlackBerry and then upended again by the rise of Apple and Android.
AI is coming to networking, and it will create its own transition for both wired and wireless networks. Last year, I conducted an “AI in networking” study with Bob Laliberte from theCube Research in which we asked how likely companies would be to switch vendors if the AI was better, and more than 90% said they would. There’s nothing unique about North American Wi-Fi, and change will come with better products, so Justice chief Pam Bondi and the rest of the DOJ could have just let things play out. But alas, they didn’t, and that tossed a bit of a monkey wrench into HPE’s acquisition of Juniper.
Settling didn’t come without a price. HPE had to make a couple of concessions. The first is that the company has agreed to divest its “Instant On” campus and branch Wi-Fi business, which is targeted at companies with little to no information technology staff. Though this is an excellent product, it’s not a core part of the business and the impact should not be significant. HPE has 180 days to find a buyer for the business, which includes all assets, R&D, intellectual property and customers related to Instant On.
The other concession is that HPE has agreed to license the Mist AI Ops source code to up to two winning bidders, determined through an auction process. Upon closing (no date was given), HPE will have 180 days to auction the source code to bidders acceptable to the United States, as determined by the DOJ. The government has allowed for three or more companies to bid, but if multiple bids exceed $8 million, only two bidders will be granted rights to the source code.
Also, at the option of the winning licensee, HPE-Juniper will facilitate the transfer of up to 30 Juniper engineers familiar with the source code and up to 25 sales personnel experienced in selling Mist. HPE-Juniper will also provide financial incentives to encourage employees to transfer to the acquiring company. The license will include a one-year non-solicit provision preventing other employees from being poached. Lastly, 12 months of transition services will be included for the winning bidders.
Licensing the Mist code is a curious provision. I understand how divesting Instant On reduces the combined market share. But forcing HPE-Juniper to license Mist doesn’t do anything but put best-in-class AI Ops networking software in the hands of another company. That’s perplexing and, in my opinion, shows the DOJ didn’t really understand the intricacies of this.
Regardless, other than requiring a judge’s signature, all the hurdles are now out of the way and both companies can move on. At Discover, Neri led off with networking and was emphatic that the network will play a key role in not just AI, but modern IT. It’s the foundation for everything today, as we live in a world where everything is connected. The HPE Aruba Networking portfolio is excellent, and complementing it with Mist AI can act as an accelerant for HPE to establish itself as a leader not just in compute but in networking.
For Juniper, which has been one of the most innovative AI networking companies, the deal brings a much bigger R&D budget, more customers and the ability to tie into a compute stack for integrated solutions.
Over the past 18 months I’ve talked to several Juniper and HPE customers and they were a combination of curious, nervous and excited. Most customers understand the importance of AI to networking but have been concerned about what happens to their products moving forward. Will HPE keep one over the other? Will some be end-of-lifed? Will customers be forced to transition? These are all excellent questions I’ve asked HPE and Juniper management.
Both companies have been consistent in their responses that there are no plans to disrupt customers: Both sets of products will live on and eventually be brought together, but no one will be forced to migrate. If one looks back at how HPE handled Aruba, the commentary is consistent with that. This creates more work for HPE, but it’s the right thing to do, and the company has generally deferred to customer satisfaction, even if it’s a hit to margins.
Though I was never a fan of the DOJ’s basis for the lawsuit, it is good to see the settlement as all parties can now move forward. Will Juniper be the elixir that transforms HPE? Only time will tell, but at least now the two companies can move forward and that’s important not just to them but to their customers, channel partners and the broader networking industry.


One of the major announcements at contact-center-as-a-service leader NiCE Ltd.‘s Interactions user event in Las Vegas last week was a partnership with Snowflake Inc., the cloud-based data warehousing company.
This might seem like a strange partnership, as typical partners for contact center vendors include customer relationship management companies, service firms and the like, but it addresses a huge customer pain point: data management.
At the event, NiCE positioned the collaboration as being able to “unlock the full value of customer interaction data” across the front, middle and back office. The collaboration combines NiCE CXone Mpower’s AI for customer service automation with Snowflake’s Secure Data Sharing to enable joint customers to access and update data to automate customer service at scale. By leveraging their complementary capabilities, the companies said, they will make accessing the wealth of customer data significantly easier to unlock the insights stored in traditional silos.
When I talk to customers about what holds them back from moving forward with artificial intelligence in the contact center, the answer is data management. Customer-related data is scattered everywhere, which creates an AI problem. There’s an axiom in data science that states, “Good data leads to good insights,” and that’s true. But silos of data lead to fragmented insights, and that’s what most companies have today.
The partnership with Snowflake creates a single source of truth for the data that NiCE will use for its AI features and functions.
The collaboration between NiCE and Snowflake offers several significant benefits, primarily around data consolidation, automation and improved customer experience. Here are details on three of them:
Breaking data silos and merging customer interaction data
With the partnership, organizations can securely and easily share front-office-generated customer data with middle- and back-office systems using Snowflake Secure Data Sharing. Data previously siloed across different departments or applications can be merged into a single, trusted data lake on Snowflake. This unification enables AI to be applied end to end across the customer journey and prevents data fragmentation.
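The unification idea can be illustrated with a minimal sketch. This is not NiCE’s or Snowflake’s actual implementation; the function, field names and records are hypothetical, showing only how per-department slices of a customer keyed by a common ID combine into one unified profile:

```python
# Hypothetical sketch: merging siloed customer records into one unified view,
# keyed by customer ID. Illustrative only -- not vendor code.

def merge_customer_silos(*silos: dict) -> dict:
    """Combine per-department records ({customer_id: {field: value}})
    into one unified record per customer."""
    unified: dict = {}
    for silo in silos:
        for customer_id, record in silo.items():
            # Each silo contributes its fields to the same customer profile.
            unified.setdefault(customer_id, {}).update(record)
    return unified

# Front-, middle- and back-office silos each hold a slice of the customer.
front_office = {"c1": {"last_contact": "2025-06-01", "sentiment": "negative"}}
middle_office = {"c1": {"open_claim": "CLM-204"}}
back_office = {"c1": {"billing_status": "past_due"}}

profile = merge_customer_silos(front_office, middle_office, back_office)
```

Once the slices live in one place, an AI workflow can reason over the whole journey (a frustrated caller with an open claim and a past-due bill) instead of one fragment at a time.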
Enabling automation across the enterprise
By connecting customer interaction data with the underlying operational systems, organizations can automate cross-departmental business processes, which historically have been difficult, if not impossible, to automate. Examples include service fulfillment, billing, claims processing and account updates. This speeds up processes and improves accuracy and efficiency across the customer lifecycle.
Accelerating AI adoption and enabling smarter business choices
With a secure, centralized data lake that can be shared across the company, organizations can get the most out of their customer experience data for AI use cases. This includes better reporting, richer analytics and the ability to enable AI-driven workflows and more personalized customer interactions. By providing a single source of truth for interaction data, the partnership enables better decision-making and facilitates the creation of AI-driven CX. In short, accurate AI begets more AI.
NiCE now available on the AWS Marketplace, making purchasing easier for customers
Another partnership announced at Interactions was that NiCE’s CXone Mpower platform is now available in the Amazon Web Services Inc. Marketplace. This raised eyebrows among some industry watchers because AWS has its own product, Amazon Connect, available on the Marketplace, and it’s easy to assume that Amazon would steer customers to it.
However, that assumption isn’t true. AWS often has competing vendors, including Talkdesk and Genesys, in its marketplace, as the goal is to give customers choice. I’ve talked to the Marketplace team about its philosophy, and they’ve told me there is no preferential treatment given to Amazon products over competitors’. This means the product teams need to ensure the homegrown product is competitive; if it’s not, customers will pass on it in favor of another. Customers win, as they get choice and the competition drives product innovation. The other benefit is that customers can spend down any unused credits by purchasing through the Marketplace, which simplifies procurement.
Also, as part of this agreement, NiCE and AWS are actively co-innovating, enabling organizations to leverage their own data across the front, middle and back office in a way that combines NiCE’s CX-specific AI models with new AWS technology, such as Amazon Q and Bedrock.
I’ve seen a fair amount of rhetoric since the announcement about NiCE not having a history of partnering, and this isn’t exactly true. Though the company does choose to build its own technology in adjacent areas, such as workforce and quality management, which has given it a platform advantage, it has a long history of partnering with companies to create bilateral value.
It has a wide range of system integrator, service provider and technology partners that add to NiCE’s offering. Snowflake and AWS are an extension of that.


Jeetu Patel, Cisco Systems Inc.‘s president and chief product officer, told me before last week’s annual user event that it would be “the most consequential Cisco Live of the past decade and perhaps longer.”
There were a few reasons for Patel’s bullishness. The first is artificial intelligence. The core tenet of my research is that share shifts happen when markets transition, and Cisco’s ability to articulate its strategy now will allow it to rise when the AI tide does. If it’s crisp with its execution, it should benefit disproportionately. Cisco missed on cloud computing, but the company is laser-focused on AI, and it’s not going to be left on the outside looking in.
Also, it has been just under a year since Patel became chief product officer, and it was time to unveil what the company has been working on. In my conversation with Patel, he indicated the payload of products would be the largest from Cisco in recent history. Last year Patel boldly stated that “Cisco would be unrecognizable a year from now, in a positive way” with respect to product, and it was critical that the company demonstrate that this year.
True to his word, the company did roll out a massive number of new products across the board, from network equipment to Wi-Fi to security, but the product updates are a small part of the Cisco transformation story. Here are my top five takeaways from Cisco Live:
- The network is a core component of AI. Every event – vendor or industry – has had a strong focus on AI. However, much of the attention goes to the graphics processing unit and the role of processing. Equally important is the network, as something needs to connect the GPU systems and be the fabric that connects data to the AI systems. Early adopters leaned into InfiniBand, as it has been a tried-and-true technology, but as the market has broadened, so has the interest in Ethernet-based solutions. InfiniBand is here to stay, but its growth is capped as Ethernet takes share. Now that Cisco has revamped its switching portfolio, which includes the data processing unit-enabled Smart Switches, it should be primed to catch the AI networking wave.
- Cisco growth is uncapped. Historically, one of Cisco’s growth challenges was the limits of its addressable market. It had the lion’s share of the wide-area network, Wi-Fi, campus networking, VoIP and other markets. In markets where it was a challenger, such as unified communications and contact center, it either didn’t have a quality product or there was an existing dominant vendor – or both. Things have changed, and in a Q&A with analysts, Patel stated that Cisco is “no longer TAM-constrained.” For years, I’ve said security was the biggest needle-moving opportunity, but its products weren’t in a position to compete. With the release of XDR, AI Defense, Hypershield and a completely rebuilt line of firewalls, Cisco can more effectively target security buyers as well as its historical strength of network professionals interested in security. AI networking is an emerging market that likely has a bigger TAM than any market Cisco currently plays in. Then there is Splunk, where Cisco has been fast to create many “Cisco + Splunk” stories, enabling better cross-selling.
- Silicon One creates a competitive advantage. Most of the networking industry uses merchant silicon, primarily from Broadcom. Cisco has bucked this trend in favor of building its own network processor. This enables the company to build silicon tailored to its needs, which becomes critical in the AI era. At the event, I talked with Cameron Ferdinands, director of networking and data center engineering for Groq, and asked him why it chose Silicon One. “The 51.2Tbps chip gives them 512 100-Gig interfaces,” he said. “When we looked at who should be our switching and ASIC vendor, this and the speed at which Cisco is moving won out as unique differentiators for Cisco Silicon One.”
Another proof point for Silicon One’s high-performance capabilities is the recent announcement that it is the only third-party processor to be used in Nvidia’s Spectrum-X switches. Looking ahead, it will be interesting to see whether customers continue to embrace vendors that use Broadcom. It’s no secret that Broadcom has irked many VMware customers with massive price increases. Some businesses have begrudgingly accepted the price hikes, while others are actively moving off VMware. Some of these customers may choose not to purchase network infrastructure that uses Broadcom silicon because of their dissatisfaction with the company. This could enable Cisco to take back share, particularly in the data center.
- CX transformation is aligned with product. One of the more notable executive appointments made by Chief Executive Chuck Robbins was naming Liz Centoni chief customer experience officer, moving the CX (services) organization under her. Centoni is a longtime Cisco vet and has run many product areas, including compute, the internet of things, and applications. Putting services under her was a curious move for Robbins since she’s best known as a product person, but she is proving to be the right leader for the group at this moment in time. Many of Cisco’s information technology peers, such as IBM and Hewlett Packard Enterprise, have services groups for services’ sake.
With Cisco, the CX group is in place to ensure customers are successful deploying Cisco technology. Because of this, having a product person run services makes sense, as it creates better alignment between the two areas. Over the past year, Centoni has used AI to transform the group and solve what she calls “boring problems,” such as configuration errors and closing Technical Assistance Center calls. Historically, CX and product have each worked to do what’s best for the customer, but each marched to its own beat, which created conflicting interests. With Centoni in charge of services and Patel heading product, the two can work together to ensure they move forward in lockstep.
With all the new products, the biggest risk to Cisco is that customers and partners can’t absorb all the new technology and move at the speed products are being released. This is where services-product alignment becomes important, as Centoni’s organization can help customers understand how to use the large amount of innovation.
- Cisco has embraced multivendor. Over the years, Cisco has been accused of building closed, proprietary systems to “lock in” customers. This was true a decade ago, but recently the company has embraced multivendor. One proof point is that collaboration rivals Microsoft Teams and Zoom can run natively on Cisco collaboration endpoints. Also, at Cisco Live, the company talked about its support for Palo Alto Networks, CrowdStrike and Microsoft in its XDR solution. Supporting competitors is something Patel has hammered home, even though it initially made people at Cisco uncomfortable. I recall a conversation with him a couple of years ago when he mentioned, “If there is a vendor that was acting as a headwind, we need to figure out how to make it a tailwind. That’s good for the customer and good for us.” At Cisco Live, I talked with Joe Berger, vice president of digital experiences for World Wide Technology, one of Cisco’s biggest partners. “We appreciate Cisco’s openness to supporting multi-vendor environments — it’s a win for companies like ours,” he said. “Our customers increasingly expect integrated, best-of-breed solutions, and Cisco’s support makes it easier for us to deliver on those expectations.”
All this shows that the company delivered on Patel’s promise that it would be “unrecognizable.” A massive set of new products, an embrace of openness and a deep focus on AI have the company in the best position it has been in for decades. Cisco’s stock is at a 25-year high, and the company seems poised to use AI to accelerate its growth. The mood coming out of Cisco Live was positive, and for good reason, but for Patel, Centoni and the rest of the executive team, this is where the work starts.


Customer Contact Week, held in Las Vegas, is where customer experience vendors gather to show off their latest and greatest innovations. This year’s show is particularly interesting, as artificial intelligence is now in full swing across all vendors in the contact center ecosystem.
At the show, cloud contact center provider Talkdesk Inc. rolled out a new platform called Customer Experience Automation, or CXA, which involves specialized AI tools, or “agents,” that work together to manage everything from front-end interactions to back-office tasks. The goal of CXA is to help organizations automate complex customer service workflows using AI.
In a traditional contact center, organizations either had to invest in highly trained teams and niche tools to handle complicated issues or settle for basic automation to complete simple tasks. CXA is Talkdesk’s solution to the problem. The new platform assigns specific jobs to AI agents that share context and coordinate with each other. This makes it easier to automate more intricate, cross-functional processes.
“We bring all the complex tasks together under the guise of multi-agent orchestration — tasks that require specialized knowledge of specialized tools can be brought together under a central brain that orchestrates and runs all those things with the customer experience as the primary objective,” Crystal Miceli, senior vice president of product and industry marketing at Talkdesk, said during a pre-briefing.
CXA follows a “discover, build, orchestrate and measure” process, which starts by identifying issues in the current customer experience, forecasting the gains from automating them and suggesting fixes. Instead of relying on code, the platform uses a prompt-based setup for creating AI agents tailored to those issues. With just a few clicks, information technology teams can test and launch the automations they created.
Once the automations are live, CXA monitors their performance to see what’s working, what’s not, and what needs to be adjusted to improve the CX. The entire process — discovering opportunities, building workflows, orchestrating AI agents and measuring results — is driven by agentic AI. It actively analyzes data, acts and continues to learn and improve over time by pulling in context from customer relationship management and other industry-specific systems.
“We’ve got the understanding of what shifts the mood from highly negative to highly positive or from frustrated to gratified. We’ve got the ability to build new knowledge based on this incredible store of interaction data combined with connections to CRM and systems of record, and we’re applying our own models on top of those very industry-specific ones,” said Miceli. “This allows us to hit the most complex use cases in healthcare, financial services, travel, and so on, because we know what they are.”
Miceli shared an example of a use case in the travel industry where a customer calls in after missing a connecting flight. Behind the scenes, there’s a lot going on, such as rebooking the flight, checking the customer’s loyalty status, updating gate information and much more. Today, all of that is handled by separate systems and customer service teams, leaving the customer feeling detached and frustrated. CXA connects all those moving parts, so the customer is taken care of from start to finish rather than bounced around between agents and automated systems.
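The missed-flight scenario can be sketched in miniature. This is a conceptual illustration, not Talkdesk’s implementation: the agent names, fields and logic are all hypothetical, showing only the pattern of a central orchestrator running specialized agents against a shared context so each step builds on the last:

```python
# Hypothetical sketch of multi-agent orchestration for a missed-flight call:
# a central "brain" runs specialized agents in order, each reading and
# writing a shared context dict. Illustrative only -- not vendor code.

def rebooking_agent(ctx):
    # Stand-in for a lookup of the next available flight.
    ctx["new_flight"] = "UA112"

def loyalty_agent(ctx):
    # Upgrade rebooked premier members as a goodwill gesture.
    if ctx.get("loyalty_tier") == "premier" and "new_flight" in ctx:
        ctx["upgrade"] = True

def notify_agent(ctx):
    # Compose one message from the work of the earlier agents.
    ctx["message"] = f"Rebooked on {ctx['new_flight']}" + (
        " with upgrade" if ctx.get("upgrade") else "")

def orchestrate(agents, ctx):
    """The 'central brain': run each agent against the shared context."""
    for agent in agents:
        agent(ctx)
    return ctx

result = orchestrate(
    [rebooking_agent, loyalty_agent, notify_agent],
    {"customer": "c1", "loyalty_tier": "premier", "missed": "UA87"},
)
```

Because every agent sees the same context, the loyalty check can react to the rebooking and the notification can reflect both, which is the coordination that separate systems and teams lack today.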
CXA is already live in Talkdesk’s Industry Experience Clouds, which focus on the most challenging sectors, such as healthcare, financial services and utilities. These sectors were the initial focus for Talkdesk because they present major opportunities for automation. Examples include managing healthcare workflows, coordinating home mortgages and handling utility outages, all of which require synchronization across systems and teams.
One early adopter, Las Vegas Valley Water District, is already using components of Customer Experience Automation. The utility, which provides water to the Las Vegas Valley in Southern Nevada, is leveraging AI agents to improve how it manages service outages, coordinates field crews, and issues customer credits.
“The CXA platform essentially replaces the Ascend AI [contact center platform],” said Miceli. “It’s still embedded inside CX Cloud. So, if you’re a CX Cloud customer or contact center customer, you get all of this and it’s immediately accessible. We shifted to interaction-based pricing, so customers can access all our innovations right away.”
CXA addresses an AI problem that has yet to fully emerge but is coming. Every application, service provider and vendor will create AI agents to complete specific tasks. While each agent might be useful on its own, the result could be more complexity than before deployment, as each system focuses on its own silo.
Talkdesk CXA orchestrates communication between agents to complete not just a single task but a complete process. Without this orchestration, the proliferation of agents could create an environment more complex than the one that existed before AI.