Featured
Reports

Scott Gutterman from the PGA TOUR discusses the new Studios and the impact on fan experience
Zeus Kerravala and Scott Gutterman, SVP of Digital and Broadcast Technologies, discuss the expansion of the PGA TOUR Studios from […]

Philippe Dore, CMO of BNP Paribas Tennis Tournament, talks innovation
April 2025 // Zeus Kerravala from ZK Research interviews Philippe Dore, CMO of the BNP Paribas tennis tournament. Philippe discusses […]

Nathan Howe, VP of Global Innovation at Zscaler talks mobile security
March 2025 // Zeus Kerravala from ZK Research interviews Nathan Howe, VP of Global Innovation at Zscaler, about their new […]
Check out
OUR NEWEST VIDEOS
2025 ZKast #142 with Philippe Laulheret from Cisco Talos at Black Hat 2025
2025 ZKast #141 with Joe Marshall of Talos, a Cisco Company, at Black Hat 2025
2025 ZKast #140 - What's new in CX #5 with Juanita Coley - Verint Edition
Recent
ZK Research Blog
News


Cloud communications provider RingCentral Inc. announced today that it’s acquiring CommunityWFM, a workforce management provider.
CommunityWFM will be integrated into RingCentral’s homegrown contact center solution, RingCX. No purchase price was given and CommunityWFM took in no funding, but its revenue is believed to be somewhere in the $5 million to $10 million range. WFM provider Verint Systems Inc. was acquired late last month by Thoma Bravo for a little over two times revenue. Applying the same multiple, I would expect this acquisition to be under $20 million.
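For readers who want to sanity-check that estimate, the arithmetic is simple. Here is the back-of-envelope calculation in Python, using this article's assumed revenue range and the roughly two-times multiple from the Verint deal:

```python
# Back-of-envelope estimate of the CommunityWFM price, applying the ~2x
# revenue multiple from the Verint/Thoma Bravo deal. The revenue range is
# this article's estimate, not a disclosed figure.
revenue_low, revenue_high = 5e6, 10e6   # estimated annual revenue ($)
multiple = 2.0                          # approximate multiple paid for Verint

low, high = revenue_low * multiple, revenue_high * multiple
print(f"Implied price: ${low/1e6:.0f}M-${high/1e6:.0f}M")  # $10M-$20M, i.e., under $20M
```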
The motivation behind this is for RingCentral to have its own WFM solution. For those not familiar with the category, WFM is a critical component of contact center operations and includes functions to, as the name suggests, manage the workforce. This includes forecasting interactions, scheduling, agent tracking, time-off management and more.
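To make the forecasting and scheduling functions concrete, below is a minimal sketch of the classic Erlang C staffing calculation that underpins most WFM forecasting tools. This is the generic queueing math, not CommunityWFM's actual implementation:

```python
from math import exp, factorial

def erlang_c(agents: int, traffic: float) -> float:
    """Probability a contact must wait, given offered traffic in Erlangs."""
    if agents <= traffic:
        return 1.0  # understaffed: everyone waits
    top = (traffic ** agents / factorial(agents)) * (agents / (agents - traffic))
    return top / (sum(traffic ** k / factorial(k) for k in range(agents)) + top)

def agents_needed(calls_per_hour: float, aht_sec: float,
                  target_sl: float = 0.8, answer_sec: float = 20.0) -> int:
    """Smallest headcount hitting, e.g., '80% of calls answered in 20 seconds'."""
    traffic = calls_per_hour * aht_sec / 3600.0  # offered load in Erlangs
    n = int(traffic) + 1
    while True:
        sl = 1 - erlang_c(n, traffic) * exp(-(n - traffic) * answer_sec / aht_sec)
        if sl >= target_sl:
            return n
        n += 1

# Example: 300 calls per hour at a 4-minute average handle time
print(agents_needed(300, 240))  # agents required for this interval
```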
Historically, contact center providers offered core communications capabilities, such as calling, messaging and email, and partnered with WFM providers. In 2016, WFM provider NiCE Ltd. showed tremendous vision when it acquired inContact to be the first provider to offer a fully integrated WFM-contact center offering.
Since then, other providers, such as Amazon.com Inc. and Zoom Communications Inc. have followed this trend by building their own WFM. One could make the argument that a decade ago, having a unified solution was a “nice to have,” but in the artificial intelligence era, where data drives the quality of AI, it will be critical to have an all-in-one offering.
I asked Juanita Coley from Solid Rock Consulting, whom I consider the industry’s foremost authority on WFM, for her reaction. “I’m not surprised by this move as the industry is starting to understand how important the management of the workforce is,” she told me. “The problem with the CX space is that vendors have been trying to separate work from the actual customer experience but they’re two sides of the same coin.”
She added that “if you look out in the future, most interactions will be partially handled by an AI and by a human and businesses need to understand how to plan for that. People say AI will automate work, and it will, but organizations need to know how to orchestrate that with people, and that’s what WFM does.”
RingCentral came to market as a unified-communications-as-a-service provider and has maintained a leadership position in Gartner’s Magic Quadrant for over a decade, but growing RingCX has been a top priority for the company. On a prebriefing, Jim Dvorkin, senior vice president of CX products at RingCentral, talked about RingCX momentum: “As of the end of Q2 we have over 1,200 RingCX customers. Looking at our million-dollar-plus total contract value deals, RingCX is attached to over 50% of them.” Adding CommunityWFM will certainly help with this.
There are several interesting implications of this deal:
Impact to existing CommunityWFM partners
Like most WFM providers, CommunityWFM works with many contact-center-as-a-service providers, including many RingCentral competitors such as Five9 Inc. and Talkdesk Inc. On the analyst call, RingCentral was proactive in addressing this and said CommunityWFM will continue to provide a standalone solution and that support for partners will remain unchanged.
Not all providers would take this approach, but it’s my belief that if you do what’s in the best interest of the customer, that’s generally the right thing. And enabling CommunityWFM to continue to work with other companies, even competitors, is the best thing for customers. Over time, I expect RingCentral will create a “1+1=3” value proposition with CommunityWFM, but there will be no impact to existing partners.
Implications for other RingCentral partners
In the area of contact center, RingCentral has a long list of partners. The company partners with NiCE for advanced and large contact center use cases. It also has partnerships with WFM providers Verint and Calabrio Inc., the latter of which was acquired by Thoma Bravo in 2021. On the call, the company discussed this and said it will continue these partnerships. With NiCE, which is sold under the RingCentral Contact Center brand, understanding the differences is straightforward. NiCE is sold into large enterprises or where customers have a complex environment. RingCX is for small to midmarket customers with more basic use cases.
Verint and Calabrio both have a broader set of features, so if a customer wants RingCX with an enterprise-grade WEM offering, it should use them. If the customer requires an integrated offering without the large number of bells and whistles, CommunityWFM would fit the bill. Again, over time, I would expect RingCentral’s integration to create some unique capabilities, lessening the reliance on Verint and Calabrio.
Also, this provides an excellent hedge against Thoma Bravo’s recent takeover of Verint and planned integration with Calabrio. Though Thoma Bravo has an excellent track record, PE firms are motivated by profitability, and there’s more than a puncher’s chance that innovation at Verint slows down. By owning CommunityWFM, RingCentral can better control its own destiny.
The complex mesh of offerings and partners will require RingCentral to ensure its sales force and reseller community understand how to position each offering. The company has done a good job of this historically, but it can’t take this for granted.
Will RingCentral continue to make acquisitions in markets adjacent to CCaaS?
As mentioned before, growing RingCX is a top priority for the company, and it’s normally a lead talking point on investor calls. Though CommunityWFM adds to RingCX, it certainly isn’t as broad as a company such as Verint. It offers all the core WFM capabilities, such as forecasting, scheduling and intraday management, but does not offer many of the workforce engagement management, or WEM, adjacencies such as quality monitoring, performance management and speech analytics.
Obviously, RingCentral could not address this on the call, but it’s my expectation this is an area we should expect to see the company and its competitors focus on. There’s a handful of WEM pure plays such as Centrical that come to mind.
Final thoughts
This is an excellent acquisition for RingCentral because it gives it more data to apply its AI to and enables it to offer customers a more complete customer experience solution. In the area of customer experience, the contact center industry has way too many silos, and silos of data invariably lead to fragmented insights, since the AI engines do not have an end-to-end view of the customer journey. Though this acquisition doesn’t completely eliminate those silos for RingCentral, it certainly moves the ball forward.
I asked Joe Rittenhouse, chief executive of Converged Technology Professionals, one of RingCentral’s largest resellers, about this deal and he was extremely bullish. “This acquisition adds strength to the RingCX platform, creating a unified solution that unlocks critical data and paves the way for future AI ambitions,” he said. “It also now levels up offerings available to the middle market CC giving RingCentral a significant competitive advantage.”
It’s a strong move for RingCentral, but I’m sure it won’t be its last.


Palo Alto Networks Inc. today added more capabilities to its fast-growing Prisma SASE (Secure Access Service Edge) platform by leveraging AI to create what the company calls “a blueprint for the AI-ready enterprise.”
The Secure Access Service Edge service delivers protection against AI-powered threats, data security that adapts to how information flows, and unified operations capable of intelligent scaling. These new features break the mold of “legacy SASE,” which focused on replacing traditional wide-area network technology with cloud-first offerings. All the new features in Prisma SASE 4.0 are geared toward enabling companies to protect against AI-driven threats and to safeguard data wherever it resides or moves.
The innovations of Prisma SASE 4.0 focus on three key areas:
- Deploying SaaS agent security to “safeguard the AI frontier”: Prisma SASE 4.0 provides direct oversight of AI agents. As employees connect tools like Microsoft Copilot to sensitive corporate data, these agents can act autonomously, creating new pathways for data leaks through unvetted prompts or risky plugins. The new SaaS Agent Security gives security teams the visibility they need to see which agents are in use, control data access and block risky activities (see the sketch after this list). AI-based innovation is important, but it can’t come at the cost of putting the business at risk.
- Defending against modern web threats: Prisma Access Browser Advanced Web Security finds and neutralizes malware in real time before it causes any damage. This provides an important layer of defense that most solutions miss. For many organizations, the browser is the desktop and the ability to neutralize attacks that assemble inside the browser can thwart attacks that often bypass traditional security tools.
- Protecting high-value private applications from cyberattacks: Private applications are difficult to secure but are often a company’s crown jewels. This makes them an ideal target for threat actors. The new Private App Security offering automates the protection of these important applications and continually updates security policies.
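To illustrate the kind of guardrail the SaaS agent security pillar describes, here is a minimal sketch of an agent-access policy check. The registry fields, scopes and rules are hypothetical; this is not Palo Alto's actual SaaS Agent Security schema:

```python
# Hypothetical agent-access policy: allow vetted, least-privilege agents,
# send over-permissioned ones to review, and block unvetted ones outright.
RISKY_SCOPES = {"mailbox.read_all", "files.write_all", "admin.directory"}

def evaluate(agent: dict) -> str:
    if not agent["vetted"]:
        return "block"              # unvetted agents never touch corporate data
    if agent["scopes"] & RISKY_SCOPES:
        return "review"             # over-permissioned: requires human sign-off
    return "allow"

agents = [
    {"name": "copilot-sales",  "vetted": True,  "scopes": {"files.read_team"}},
    {"name": "unknown-plugin", "vetted": False, "scopes": {"mailbox.read_all"}},
]
for a in agents:
    print(a["name"], "->", evaluate(a))  # copilot-sales -> allow, unknown-plugin -> block
```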
For Palo Alto, SASE has been a strong growth engine. On its recent earnings call, Chief Executive Nikesh Arora (pictured) discussed the importance of SASE and its contribution to total contract value and annualized recurring revenue.
“This quarter, we won our largest SASE deal ever, a $60 million contract with a global professional services firm covering nearly 200,000 seats,” he noted. “This was in addition to a record number of eight-figure SASE deals. We’re gaining share. For the last year, we displaced incumbent SASE vendors in over 70 accounts exceeding $200 million in TCV. Our SASE ARR grew 35% year-over-year more than twice as fast as the overall market. We now have over 6,300 SASE customers and account for one-third of the Fortune 500.”
Adding to the portfolio of features in its SASE platform, particularly AI-focused features, will strengthen Palo Alto’s offering and make it significantly stickier, making it harder for competitors to do to Palo Alto what it’s currently doing to others.
Using AI to fight AI
Since threat actors have access to the same AI tools as enterprises, it’s an ever-escalating arms race, with the good guys doing all they can to stay ahead of the bad guys. With browser-centric attacks capable of bypassing network controls as attackers weaponize the domain name system and use malware designed to exploit interactive sessions, Palo Alto is providing enterprises with more powerful tools for the battle.
With Prisma SASE 4.0, Palo Alto is bringing security directly into the user’s experience to stop threats as they appear. The company said the AI-powered Advanced Web Protection in the Prisma Access Browser inspects fully rendered web pages in real time, catching threats that only trigger after page load or user interaction, without requiring transport-layer decryption. This provides markedly better security versus trying to train users to figure out whether a page is legitimate or not.
Other new capabilities include Private Application Security, which consolidates application firewall layers, automatically generating application fingerprints. This enables enterprises to detect anomalies and block botnets, API abuse and unpatched “day zero” exploits without relying on constant manual updates.
In a pre-announcement briefing, Carmine Clementelli, director of SASE product marketing for Palo Alto Networks, highlighted the value of the company’s new AI Agent Security capabilities. “It allows us to provide visibility to all AI agents that are involved,” he said. “Right now, we support Microsoft Copilot Studio and ServiceNow platforms, where all the agents can be deployed. And we provide visibility into all the agents that connect to corporate SaaS applications and their risks and over-permissions. Customers can see all the agents, all their risks, and can help stop unauthorized data access for these agents.”
Modernized data loss prevention
Traditional data loss prevention, or DLP, approaches, which were designed for structured fields and keyword-based rules, aren’t built to stop unstructured content such as images, source code or AI-generated text. This creates many blind spots and leads to a flood of false positives.
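A tiny example shows why pattern-based DLP struggles here. A credit card regex catches a structured field but misses the same kind of secret expressed as source code or paraphrased prose (the samples below are invented; the AWS key is Amazon's documented example value):

```python
import re

CARD = re.compile(r"\b(?:\d[ -]?){13,16}\b")  # classic structured-field rule

samples = [
    "Customer card: 4111 1111 1111 1111",              # caught: structured field
    "aws_key = 'AKIAIOSFODNN7EXAMPLE'",                # missed: secret in source code
    "the card ending one-one-one-one was declined",    # missed: unstructured prose
]
for s in samples:
    print("FLAGGED" if CARD.search(s) else "missed ", "|", s)
```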
Prisma SASE 4.0 addresses this with the inclusion of SaaS Security Posture Management, which provides continuous, real-time visibility into the behavior of software-as-a-service-based AI agents, copilots and plugins that now connect directly to corporate data, accelerating productivity but also expanding risk.
The company says the latest version of Prisma SASE continuously discovers and monitors SaaS-based AI agents, giving administrators visibility into which agents are accessing sensitive data, how they are being used and where risks emerge. The guardrails govern user interactions and block unauthorized access, adding to its AI protection. This ensures organizations can adopt AI responsibly without stalling innovation.
Palo Alto says the new capabilities and other key SASE features will be available later this year.


Inter is a large digital financial services corporation based in Brazil with its U.S. headquarters in Miami. It began operating in 1994 as Banco Inter, a traditional bank, and became Brazil’s first fully digital, cloud-based bank in 2015. Inter provides online banking and other financial services, ranging from investments to travel, shopping and rewards, to companies and individuals throughout the Americas via its super app.
As Inter expanded and evolved, the company needed to ensure it was doing everything possible to protect the transactions and financial information of its 40 million customers against relentless threat actors. That led the Inter cybersecurity team to form a close working relationship with Zscaler Inc. and its cloud-based Zero Trust Exchange Platform.
At this year’s Zscaler Zenith Live 2025 event in Las Vegas, I sat down with three key members of Inter’s cybersecurity team — Orlandino Neves, executive cybersecurity manager; Paulo Calvo, data protection manager; and Rickson Martins, cybersecurity analyst — to discuss the company’s unique needs and its successful relationship with Zscaler.
Overcoming the limitations of traditional cybersecurity solutions
Neves said that as Inter began its rapid expansion in Brazil and beyond, the cybersecurity team faced limitations with its existing security infrastructure. “It was a big problem for us because of the firewall-based VPNs,” he said. “We have 4,000 users on the firewall, and as we began expanding globally, our executives who traveled the world needed to connect from Europe to Brazil. The latency was so high it was a big problem.”
Another challenge the team faced was how to protect uploads in the cloud. “It was impossible to use traditional firewall connectivity because we had so much traffic, it was hard to have reliable throughput,” Neves recalled.
Enter Zscaler
It was clear the firewall-based VPN had reached its limit, and the bank needed a different type of solution to meet the growing demands. “As we moved to the cloud, we knew we had to rethink security and the legacy firewall approach would not scale so we looked at moving all of Inter to a zero-trust environment,” stated Neves. “We started a POC with Zscaler that turned into a major project. We deployed ZIA [Zscaler Internet Access] for our security service edge and workloads, and ZPA [Zscaler Private Access] for users. As the first digital bank in Brazil, Inter was also the first bank in Latin America to move our load 100% to the cloud. It was a big challenge, but it’s working very well.”
Neves said Inter has long been a disruptive, forward-thinking company, especially around its vision for the cloud. Inter worked with the Brazilian government to deliver on the promise of digital transformation and led the effort to change policies that had required financial services organizations to maintain an on-premises presence.
From that beginning in 2022, Inter has deployed essentially the entire Zscaler platform and its broad capabilities. Neves said Inter was Zscaler’s first-ever workload protection use case to secure its AWS workloads. Inter is also using Zscaler’s zero-trust based solutions for better data protection. This includes the data security and posture management or DSPM solution, for which Inter worked with Zscaler as design partner in the development of the product.
Expanding the relationship
Since 2022, Inter has been continuously evaluating new ways of working with Zscaler to support its aggressive growth and international expansion strategies. One of the more advanced use cases is to use Zscaler to secure its generative AI initiatives. Inter’s use of its own generative AI solution, InterGPT, and public AI tools such as ChatGPT is secured by Zscaler’s Generative AI Security solution, which provides Inter with full visibility into what users are sharing and alerts the security team so they can investigate any risky chats.
Martins discussed with me the importance Zscaler will play in Inter’s ability to roll out gen AI securely. “We use the gen AI report that Zscaler offers so we know what kind of LLMs are being used and what kind of prompts they are sending as well as what data is going into those prompts,” he said. “Zscaler is crucial for us to know what kind of data is going to AI, and what kind of models the users are using, and what kind of information they are sent. This gives us the confidence to know the company data is protected.”
Paulo Calvo, Inter’s data protection manager, talked about the quality of collaboration between Inter and its Zscaler counterparts. “When you have a good team, you can do anything,” he said. “They let us know everything new that is happening,” Martins added. “We have QBR meetings that are very productive. They come to our headquarters and we chat about everything. We expose the problems we are currently having, the things that we like, and anything we don’t like. They haven’t changed, despite Zscaler acquiring many companies and adding a lot of new features and converting a lot of features in its products. They remain easy to work with.”
Martins added that “we always get at least a due date for something they are researching for us. And we are heavy users. So when we open tickets, they are not always easy. Usually, when we come to them with a bug, they have to open internal tickets to solve it, and they always provide feedback and update us about everything. Sometimes we can’t even keep up with them because we have other things to do, and I sometimes forget to answer the support team, but they are awesome. They play a big role in our environment. They always help us with our issues.”
Maturing Inter’s security risk posture
As with many large organizations, the Inter cybersecurity team is always interested in improving its security posture. For that, it used Zscaler’s Risk360 solution to understand the issues they face and the current state. “We have a big and complicated environment with a lot of APIs and the like. Risk360 allowed us to understand where the critical issues were and what actions we need to take,” explained Martins. He added, “With the ability to measure where we are and then take action, we have improved our score by 25%.”
Inter’s commitment to the highest level of security throughout its operations is tied to the company achieving its growth objectives.
“Inter has about 40 million customers and wants to add a million more each month through 2026,” said Calvo. “We are an aggressive digital bank. We don’t use security for protection. We use security to support expansion. And the numbers tell the story.”
Final thoughts
Cloud, AI and other digital technologies have changed the way businesses use and operate technology faster than ever before. These changes must be accompanied by a complete rethink of security, as protecting an ever-eroding perimeter no longer works. Placing security devices at strategic points in the network is expensive, manually intensive and does not scale.
Zero trust reimagines security by applying the concept of least-privilege access to every device, user and application. Zero trust was initially used to replace edge firewalls and VPNs for remote users but it’s a concept that can be applied everywhere.
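Conceptually, the shift is from location-based trust to a per-request, deny-by-default decision. Below is a minimal sketch of that evaluation logic; the fields are illustrative, not Zscaler's actual policy model:

```python
def authorize(request: dict) -> bool:
    """Least-privilege check: every request must pass every test."""
    return all([
        request["user_verified"],                    # strong identity, e.g., MFA
        request["device_posture"] == "healthy",      # managed, patched device
        request["app"] in request["entitlements"],   # entitled to this app only
    ])                                               # anything else: deny by default

print(authorize({"user_verified": True, "device_posture": "healthy",
                 "app": "payments-api", "entitlements": {"payments-api"}}))  # True
# Flip any field above and the request is denied: no implicit network trust.
```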
Inter’s security evolution should be viewed as a lesson for other companies: The cloud forced businesses to modernize the network, compute and applications, and now it’s time to modernize security.


Artificial intelligence is at the heart of almost every part of an organization’s strategy, and that includes information technology operations. However, many organizations aren’t seeing clear business results. In fact, a recent MIT study found that 95% of generative AI projects are failing. One of the big reasons for failure is that AI is being deployed in silos, resulting in partial insights into the broader ecosystem.
In IT, technology teams have dealt with a patchwork of tools and dashboards and relied on “swivel chair” management. This obviously won’t work with AIOps, as the AI engine needs to understand the end-to-end environment.
Further, the holy grail of AIOps is autonomous, self-driving operations, which relieve IT of mundane tasks that take time and money. Juniper Networks Inc.‘s Mist platform, introduced over a decade ago, was purpose-built with AI in mind, leveraging automation and insight to optimize user experiences.
Built into Mist is the Marvis AI engine and Assistant, which uses high-quality data, advanced AI and machine learning data science, and a conversational interface to simplify deployment and troubleshooting. Now under Hewlett Packard Enterprise Co., Mist today has been brought together with Aruba Networks to form what the companies call the “secure AI-native network,” a blend of leading AIOps, product breadth and security to solve real customer and partner needs. Ultimately, the company has a vision of using the platform to bring all HPE Networking products under a common cloud management platform and AI engine with centralized operations.
Self-driving networks with agentic AI
HPE Networking is framing agentic AI as a catalyst for self-driving networks, complementing the journey the company has been on for some time. In addition to previous agentic capabilities, which include reinforcement learning, open APIs and autonomous tools that proactively monitor and fix issues across multiple domains, the company has made additional enhancements to the Mist platform. These enhancements further shift networking from a reactive role, where issues are fixed after they happen, to a proactive role, where issues are anticipated and fixed automatically.
“One thing that we added is the ability to choose specific areas for self-driving mode that don’t require human intervention,” said Jeff Aaron, vice president of product and solution marketing at HPE. “If a switch port is stuck or an AP is running non-compliant software, for example, you can tell Marvis to go fix it on its own. We provide reporting to show which features were fixed autonomously, how they were fixed, and why the decision was made so IT still has complete visibility into what is happening.”
In addition, Marvis got a back-end upgrade, leveraging more generative AI capabilities and agentic workflows for even better real-time troubleshooting. The assistant has always used natural language processing and understanding to interpret plain-language queries and provide insightful answers on par with human experts.
Recently, gen AI has been introduced to Marvis’ robust data science toolbox for even more human-like interactions. Agentic workflows enable better correlation across domains for faster and more accurate troubleshooting. For example, an office outage can easily be pinpointed to a wide-area network capacity issue with recommended fixes based on feedback from wired, wireless, WAN and other agents.
Furthermore, Marvis’ AIOps capabilities have been expanded further into the data center through tighter integration with Juniper Apstra’s contextual graph database. This allows Marvis to analyze infrastructure configurations and provide answers to data center-related inquiries using the same Marvis conversational interface employed elsewhere in the network.
Aaron noted that in the past, Marvis had to launch Apstra via application programming interfaces to make data center changes, but now more can be done right from within the Mist cloud. This upgrade brings the data center closer to parity with wireless networking when it comes to self-driving, where Marvis has had more mature capabilities.
Finally, HPE Networking also expanded its ability to proactively predict and prevent video issues using what it calls a large experience model, or LEM. This pulls in billions of data points from Zoom and Microsoft Teams clients and correlates them with networking data to identify the root cause of video issues. The LEM framework has now been augmented with data from Marvis digital experience twins, or Minis, which probe the wired, wireless, WAN and data center networks autonomously, even when users aren’t present, to provide even richer data for predictive and proactive troubleshooting.
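At its simplest, that kind of correlation amounts to joining client-side experience scores with network telemetry and attributing poor calls to whichever layer degraded. A toy sketch of the idea, with invented data and thresholds rather than HPE's actual LEM pipeline:

```python
calls = [{"t": 10, "mos": 2.1}, {"t": 20, "mos": 4.3}]   # client-reported call quality
network = {10: {"wan_util": 0.97, "wifi_retry": 0.05},   # telemetry per time window
           20: {"wan_util": 0.40, "wifi_retry": 0.04}}

for call in calls:
    if call["mos"] >= 3.5:
        continue                                  # good experience, nothing to do
    net = network[call["t"]]
    cause = ("WAN saturation" if net["wan_util"] > 0.9 else
             "Wi-Fi retries" if net["wifi_retry"] > 0.15 else "unknown")
    print(f"t={call['t']}: poor call (MOS {call['mos']}) -> likely {cause}")
```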
Business impact and competitive landscape
The secure, AI-native network with the latest Marvis updates builds on the benefits customers are already seeing with Mist. The impact shows up in different ways across industries. ServiceNow reported a 90% reduction in network trouble tickets, while Blue Diamond Growers cut the time spent managing networks by 80%. Gap achieved 85% fewer truck rolls, and Bethesda Medical reported 85% faster upgrades.
HPE Juniper is further along in real-world use cases than most of its competitors. While many competitors are showing conceptual use cases, HPE Juniper already has many of its AI-native features available, supported by a long history with Mist and deeper AI maturity. In fact, Mist came to market using AI to troubleshoot Wi-Fi, which I consider one of the toughest network technologies to support.
Mist’s value was always measured by its ability to cut down on network trouble tickets, which resonated with IT teams. That message still matters, but now the emphasis is on broader business outcomes. The unified HPE Juniper platform is relevant to both IT and business leaders who want to see measurable results from their investments.
“The benefits of the platform are better operational experiences,” Aaron said. “That leads to better end-user benefits across our joint customers. In theory, the end user shouldn’t even know the network exists — it just does what it needs to do. Ultimately, it’s about better business outcomes. You can drive more agility with less business risk, and you can get greater productivity.”
On a grander scale, the race to AI-native networking is heating up. Most vendors will have solid AI and automation stories within a year. The differentiator will be whether they can deliver true end-to-end, cross-network capabilities. Historically, network teams treated the campus, WAN, wireless network and data center as separate entities, but in reality, it’s one network, and a vendor’s AI engine needs to span from the data center out to the cloud to automate IT operations.
Though there are always challenges, Aaron noted that both HPE and Juniper have a strong track record of integrating solutions, with Aruba and Mist being two respective examples. This latest announcement shows that innovation isn’t slowing down post-acquisition, with more development promised on both Aruba and Mist products as they collectively journey to a common goal of AI-native self-driving.


Artificial intelligence is generating wide-ranging effects across every industrial sector through generative models, large language models, autonomous systems and scientific discovery applications. But true AI capability has been hard to achieve because the underlying infrastructure needs a complete redesign.
AI computational engines have earned their rightful position in the spotlight through the continued and rapid evolution of GPUs, but the network infrastructure that connects these processors is equally, if not more, important. Though there have been incremental improvements, the network has yet to see a major overhaul the way compute has.
Off-the-shelf networks can deliver the connectivity required within a rack or a data center for traditional compute but are challenged by the demands of AI. This is why Nvidia created Spectrum-X. The problem is then exacerbated when the scope of the network exceeds a single location.
AI deployment at global scale faces increasing limitations from conventional networking technologies. At the recent Hot Chips 2025 event, Nvidia announced Spectrum-XGS Ethernet (pictured) to support the concept of an AI super-factory that turns multiple physical locations into a single, logical AI factory. Through its pioneering “scale-across” networking approach, Spectrum-XGS Ethernet serves as a foundational technology that makes previously unimaginable giga-scale AI possible.
High-performance compute clusters use InfiniBand as the networking protocol because it delivers both low latency and high throughput capabilities. These solutions worked well for tightly coupled systems but struggled to address AI requirements at geographic scale. The rise of LLMs and generative AI brought forth new problems because trillion-parameter models need enormous data movement across thousands of GPUs to maintain their synchronized operation.
Off-the-shelf Ethernet functions as a common solution, but it lacks AI-native capabilities for this specific application requirement. The combination of unpredictable performance, extended latency and network congestion creates major delays, which result in the underutilization of expensive GPU resources.
Equal-Cost Multi-Path or ECMP routing as a standard approach generates “elephant flows,” which refer to sustained large data transfers that cause specific network paths to become overloaded while other paths stay inactive. The entire training process experiences delays because of this bottleneck, which restricts scalability. The large dimensions of current AI systems require an advanced networking solution that adjusts to changing workloads while delivering consistent and reliable performance.
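The mechanics are easy to see in miniature: static ECMP hashes a flow's 5-tuple to pick a path, so every packet of a long-lived elephant flow lands on the same link regardless of how loaded that link is. A sketch with invented flows:

```python
import hashlib

PATHS = 4
def ecmp_path(src, dst, sport, dport, proto="tcp") -> int:
    """Static ECMP: the 5-tuple hash alone picks the path, ignoring load."""
    key = f"{src}|{dst}|{sport}|{dport}|{proto}".encode()
    return int(hashlib.md5(key).hexdigest(), 16) % PATHS

flows = [  # (5-tuple, gigabytes moved): two elephants, two mice
    (("10.0.0.1", "10.1.0.1", 40001, 4791), 500),
    (("10.0.0.2", "10.1.0.2", 40007, 4791), 500),
    (("10.0.0.3", "10.1.0.3", 40010, 443), 1),
    (("10.0.0.4", "10.1.0.4", 40015, 443), 1),
]

load = [0] * PATHS
for tup, gb in flows:
    load[ecmp_path(*tup)] += gb
print(load)  # if both elephants hash to one path, that link saturates while others idle
```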
Nvidia Spectrum-XGS Ethernet, an extension of the Spectrum-X Ethernet platform, is the company’s solution to this challenge. The “GS” in Spectrum-XGS Ethernet stands for giga-scale functionality, enabling a new networking approach that unifies “scale-up” and “scale-out” capabilities with “scale-across” functions. AI developers can connect data centers spread across cities, nations or continents through this new capability to create a unified giga-scale AI system.
The Spectrum-XGS Ethernet platform achieves this by integrating several key components closely. The Spectrum-4 Ethernet switch operates at an industry-leading rate of 51.2 Tbps. The ConnectX-8 and BlueField-3 SuperNICs operate alongside the switch to provide dedicated acceleration for AI workloads.
The SuperNICs act as data processing accelerators that offload work from the CPU to facilitate fast, lossless GPU-to-GPU data transfers. The hardware operates under the control of Nvidia software together with custom algorithms that provide end-to-end telemetry alongside automatic congestion control mechanisms. The system’s algorithms dynamically adjust data packet routing to prevent congestion while maintaining consistent performance throughout extended network distances.
Spectrum-XGS Ethernet is important to the long-term growth of AI because it solves the fundamental problem of scale across distance. Data centers face physical boundaries in space, power and cooling when they reach maximum capacity, so the only growth strategy involves facility expansion or interconnection.
The distributed architecture enabled by Spectrum-XGS Ethernet becomes a functional and efficient way to accomplish this. Through its capability to connect separate data centers into a single, unified system, early adopters can establish a single AI factory that spans multiple locations. The unified AI factory capability eliminates the requirement for a massive, expensive single facility by allowing companies to deploy their AI infrastructure through flexible, modular building blocks.
The critical need for Spectrum-XGS Ethernet goes beyond scalability because it also optimizes the operation of AI workloads. Training a large LLM demands enormous resources, fast. The communication among thousands of GPUs must be orchestrated to ensure both low latency and minimal jitter.
Spectrum-XGS Ethernet contains features that adapt routes while precisely managing latency to fulfill specific requirements. Testing with the Nvidia Collective Communications Library, or NCCL, framework shows that this technology nearly doubles performance in cross-data-center environments. That gain is more than an incremental improvement: It directly reduces training durations, enabling faster model development for researchers and companies, and in a market where a few hours of training time can decide success or failure, it translates into competitive advantage.
Spectrum-XGS Ethernet delivers substantial financial benefits together with operational advantages to organizations. The Spectrum-X Ethernet platform with Spectrum-XGS Ethernet integrates easily into current data center architectures because it uses standards-based Ethernet while providing a more adaptable and affordable solution than proprietary systems. The combination of high performance along with enhanced efficiency results in reduced total cost of ownership (TCO) for the platform.
The solution enables a better return on investment for AI hardware because it eliminates network bottlenecks that would otherwise cause GPU idle time. The ability to handle distributed networks as a unified system streamlines operational management and reduces the complexity of optimizing connectivity over geographic distances. Real-time telemetry in combination with advanced management tools enables both proactive fault diagnosis and predictive maintenance, which boosts operational efficiency and system uptime.
Nvidia’s Spectrum-XGS Ethernet is an important innovation that will enable AI developments that may not have been achievable before. The solution addresses historical bottlenecks that limit AI infrastructure effectiveness at scale by connecting compute across geographic distances. Through its ability to build giga-scale AI super-factories, Spectrum-XGS Ethernet will speed up the development and implementation of next-generation AI models and applications.


As generative artificial intelligence tools are becoming more prevalent in the workplace, employees are accessing these tools via personal accounts on company devices, pasting in sensitive data, and downloading content — all of which creates potential security risks. Meanwhile, cybercriminals are capitalizing on this trend by weaponizing AI and impersonating trusted tools.
Menlo Security Inc. recently released a new report that takes a closer look at how gen AI is shaping today’s workplace. The data was collected over 30 days (May-June 2025) using Menlo’s telemetry. During this period, web traffic and gen AI interactions were analyzed from hundreds of global organizations. Since most gen AI tools are accessed via a browser, Menlo was able to observe browser traffic to gen AI sites and regional adoption trends.
To frame its findings in a broader context, Menlo also cites Similarweb data showing that between February 2024 and January 2025, traffic to gen AI sites jumped from 7 billion visits to more than 10.5 billion visits. That’s a 50% increase in less than a year.
About 80% of gen AI use still happens in the browser, a convenient option for most users because it works across virtually all devices and operating systems. ChatGPT, unsurprisingly, tops the list. It now has about 400 million weekly users. Yet the vast majority, 95%, are on the free tier.
The benefit of the free tier is that it’s free, but as the saying goes, you don’t get what you don’t pay for. The advanced tier uses better models and gives more accurate responses, which is important in a business context. Also, OpenAI’s privacy policy states it may use the data provided to train its models. Users can opt out, but many shadow AI users may not be aware of this. For business or sensitive data, using a paid tier such as ChatGPT Enterprise or the API ensures data is not used for training models by default.
There’s no doubt that gen AI adoption has skyrocketed globally. While the Americas saw the most total traffic, gen AI use is growing fastest in the Asia-Pacific. In China, 75% of organizations are implementing gen AI in some way. Nearly as many, 73%, are doing the same in India. However, Europe and the Middle East are adopting gen AI more slowly, which is attributed to stricter data protection laws and regulatory frameworks.
Given the popularity of gen AI tools, organizations are increasingly seeing them in the workplace. According to a TELUS Digital survey cited in Menlo’s report, 68% of employees are using public tools such as ChatGPT through personal accounts. What’s even more concerning: Fifty-seven percent admitted to pasting sensitive company information in these tools. In just one month, Menlo observed more than 155,000 copy attempts and more than 313,000 paste attempts involving gen AI.
Many organizations flagged this content as sensitive or restricted, including personal information, financial data, login credentials and intellectual property. Employees may unintentionally leak data while using gen AI to summarize a report or write an email, according to Menlo. But sharing information isn’t the only problem. Employees download PDFs and text files from gen AI tools, which may have embedded malware or phishing links.
It’s also becoming more difficult to distinguish between legitimate and fake AI tools, with malicious browser extensions on the rise. Menlo tracked nearly 600 phishing sites pretending to be legitimate gen AI tools, often masking themselves as ChatGPT or Copilot in their domain names. Between December 2024 and February 2025, researchers tracked more than 2,600 lookalike domain names and impersonation websites.
Cybercriminals are jumping on the bandwagon like everyone else, using gen AI to make their phishing attacks more convincing and tailored to specific individuals. For example, they’re combining AI-written phishing emails with other tactics that exploit browser flaws. This has resulted in a 130% year-over-year increase in zero-hour phishing attacks, which hit before security systems know they exist.
The use of “shadow” tools by workers is nothing new and should not be a surprise with gen AI. Since users have had computers, consumer-grade tools have been the norm: mobile devices, internet accounts, e-mail and cloud services are just a few examples. When workers have a way of making their lives easier, they will use whatever tools they have at their disposal.
If the company does not give them a viable option, that’s when the use of “shadow” apps and tools booms. Right now with AI, many companies are reviewing policies and trying to determine the best path forward, while the report clearly shows users are charging ahead.
Going forward, organizations need to take control of how gen AI is used. Menlo stresses the importance of eliminating shadow AI by limiting access to consumer-facing gen AI tools via personal accounts in the workplace. Organizations should make approved AI tools the only ones employees are allowed to use. On top of that, they should enforce data loss prevention policies to restrict actions such as copy/paste, file uploads and downloads. DLP is necessary to apply the right level of protection.
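In practice, a browser-level DLP guardrail for gen AI reduces to a per-event policy decision. A hypothetical sketch of a paste-event check follows; the approved-tool list, sensitivity check and verdicts are all invented for illustration, not Menlo's product logic:

```python
import re

APPROVED_AI_TOOLS = {"chatgpt-enterprise.example.com"}   # hypothetical allow-list
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")               # stand-in sensitivity check

def paste_policy(destination: str, text: str) -> str:
    if destination not in APPROVED_AI_TOOLS:
        return "block"       # shadow AI destination: no paste at all
    if SSN.search(text):
        return "redact"      # approved tool, but content is sensitive
    return "allow"

print(paste_policy("chat.openai.com", "meeting notes"))               # block
print(paste_policy("chatgpt-enterprise.example.com", "123-45-6789"))  # redact
```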
Menlo also recommends inspecting gen AI browser traffic and focusing closely on high-risk file types such as PDFs and DOCX. The files may appear harmless, but they often hide malware or phishing links. Adopting zero-trust security, particularly on unmanaged devices used by contractors and third parties, is another important safeguard. With zero-trust security, organizations can verify every user and device before granting them access to the corporate network.
Finally, Menlo emphasizes educating users about the risks of public gen AI tools. Once employees turn to tools outside the information technology department’s control, it becomes easy for sensitive company data to end up in the hands of cybercriminals. It’s impossible to ban gen AI use completely in the workplace due to its popularity. However, if employees understand the risks and use only company-approved tools, organizations can create a work environment where gen AI is helpful instead of harmful.
Although the use of alternate tools is not new, it has come to AI faster than other technologies I have seen. IT leaders need to get out in front of this and ensure the proper controls and safeguards are in place before employees unknowingly put company data at risk.


Nvidia Corp. recently held an industry analyst briefing on the topic of physical artificial intelligence, and Chief Executive Jensen Huang has been consistent in every keynote he has delivered this year: Physical AI is the next wave of AI. In fact, he has often stated that eventually anything that moves, from lawnmowers to forklifts to cars, will be autonomous, giving rise to the physical AI era.
Though most people think of physical AI, or the world of robots, as the stuff of science fiction and a niche technology, the benefits will be widespread. I recently talked with a chief information officer from a healthcare organization in the Mid-Atlantic, and he explained that autonomous wheelchairs would enable patients to be taken curbside without tying up a staff member, enabling that clinician to spend more time at a patient’s bedside. Retailers can use robots to scan shelves for better inventory control, and anyone who flies United Airlines has likely seen the robot that moves around the lounge collecting used dishes.
Presenter Rev Lebaredian, Nvidia’s vice president of Omniverse and simulation technology, dove deep into this fascinating — and fast-growing — segment of the AI boom.
The physical AI era has arrived
Making AI useful and productive for the real world is the realm of physical AI. But what defines the boundaries between the different types of AI?
“Generative AI involves models we’ve all been using, such as large language models and maybe some image models,” said Lebaredian. “Essentially, you give it some input, and output comes up. With LLMs, the input tokens are text, and the outputs are also text.” But with physical AI, the model is different. “We bring in input that would be the equivalent of what the sensors on a robot would experience,” he explained.
When it comes to AI, the terms “robot” and “physical AI” are general terms for a broader category. This includes humanoid robots, manipulator arms, self-driving cars or anything else that moves. However, physical AI also extends to things such as radio towers — anything that could sense the physical world and then go operate in it.
These robotic devices input sensor data, including the equivalent of what language models also input, such as text and other modes of input. “We can combine our understanding of abstract knowledge in the LLMs along with an understanding of the physics of the world to then output action tokens,” he said. “These are actions that end up controlling an embodiment of the robot. On a manipulator arm, that would be the torques and forces that are created by the motors to change the angles of the points on the robot arm. It could also be the steering, braking, and acceleration of a self-driving car. It could be anything that’s a control signal for the actual body. The application of this is endless.”
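A schematic of the loop Lebaredian describes, with sensor input going in and action tokens coming out. The policy below is a trivial stub standing in for a real vision-language-action model; it is not Nvidia code:

```python
def policy(sensor_frame: dict, instruction: str) -> dict:
    """Stub: a real model maps camera/lidar tokens plus text to control outputs."""
    obstacle = sensor_frame["nearest_obstacle_m"] < 1.0
    return {                       # "action tokens" for the embodiment
        "steering_rad": 0.3 if obstacle else 0.0,
        "accel_mps2": 0.0 if obstacle else 1.5,
        "brake": obstacle,
    }

step = policy({"nearest_obstacle_m": 0.8}, "drive to the loading dock")
print(step)  # becomes motor torques / steering / braking on the actual robot
```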
Real-world applications
Lebaredian said while there have been breakthroughs and steady progress in physical AI development and refinement, there’s plenty of work to be done in this segment of the industry. “The things we need to do in the real world are obviously extremely valuable. Once we crack this problem of physical AI, we can enhance everything from factories and warehouses to all of transportation, and humanoid and other robots can do the equivalent of human physical labor.”
Why is this important? Labor shortages present a real challenge to industries and companies that are seeking to grow. In many industries, labor shortages are massive. Businesses are having a hard time hiring enough skilled factory and warehouse workers and people to stock retail shelves. An article on meteorspace.com states that the U.S. warehousing industry is facing a shortage of more than 35,000 workers, with companies such as Amazon reporting turnover rates of over 150%, so while hiring people is difficult, keeping them is even harder.
Looking ahead, many countries face an aging and declining population, creating a situation where producing the same volume of goods year over year becomes increasingly difficult. There are also global supply chain issues due to geopolitics, resulting in a great deal of manufacturing that was once farmed out to hubs in Asia and elsewhere being reshored, especially in the U.S.
But is physical AI technology ready to take advantage of this opportunity?
The era of general-purpose robots
Are the mechanical and, more importantly, the software and AI technologies needed to build and operate sophisticated general-purpose robots ready for the job? Lebaredian believes so.
“For the first time, we have a line of sight to building the algorithms, to building the brains of a general-purpose, robust robot. The industry had the capability to build physical robots for quite a while. We’ve been introducing mechatronics and robotics into the industrial space for decades now, but we didn’t have the capability of making them intelligent enough so that those robots can see and act autonomously in a general way. We had to program them specifically to do one task repeatedly,” he said.
The invention and evolution of AI have accelerated robot and physical AI development. The massive amount of innovation from Nvidia and the rest of the AI industry has created the ability to build the “brain” Lebaredian referred to and to democratize it across all domains and physical spaces. That wasn’t possible five years ago, but today it is.
Nvidia’s role in AI-driven robotics
Lebaredian made it clear that Nvidia isn’t in the business of building robots or other AI devices, such as autonomous vehicles. But the company plays a critical role in making those devices possible and capable of accomplishing vital activities.
The company has had tremendous success building reference architectures for systems across almost every industry it serves, and physical AI is no different. Nvidia comes up with the blueprints for physical AI and then enables others to leverage it. Nvidia has three computer platforms for physical AI: Omniverse & Cosmos on RTX PRO for simulation, the DGX and HGX for training, and the Jetson Thor AGX for deployment and operation. The three computers run on Nvidia’s popular Blackwell GPUs.
The Jetson Thor AGX provides the brains for the automated vehicles that Nvidia is helping to develop. “That’s a very important computer,” said Lebaredian. “It needs to be power-efficient while being extremely powerful. It’s a specialized kind of computer. It has to be able to deal with lots of sensor data and execute advanced AI models with a lot of compute but doing that efficiently in a specific power envelope requires lots of specific software to run it.
“All three computers, because they’re built on the same architecture and run the same algorithms, and all of your software is portable between all three, they’re all backwards and future compatible as well, and future proof as well, with all the NVIDIA architectures,” Lebaredian said.
The rise of AI factories
Lebaredian says AI factories powered by Nvidia’s DGX and HGX unified AI development platforms play a crucial role in the development of AI-driven robotics. The AI factories take in raw data from the physical world and the tasks required to be executed in it, and they output models (effectively the brain of the robot), which are then uploaded to Jetson.
But even with such cutting-edge development tools and platforms, building physical AI systems is a challenging task. “To create a brain, to create any AI, you need massive amounts of data, and you need massive amounts of the right data,” Lebaredian explained. “You need accurate data, and you need well-labeled data for the knowledge space, and that’s hard. It’s already hard getting all the data we scrape the internet; we find all this information that’s readily available, and it’s still not enough.”
He says the data just doesn’t exist anymore and collecting it by capturing it through sensors in the physical world “is just too expensive, too time-consuming, and in many cases, too dangerous or even impossible to get with the accuracy we need. You just cannot have enough of the right sensors in the physical world with the accuracy that we need.
“The only way to really generate all of the data we need to collect it is by first taking the rules of the physical world — physics — and replicating it inside a computing system, building a simulator of that physical world, and that simulator becomes a generator for the kind of data that we need to feed into the AI factory, which could then produce the AI algorithms we then deploy,” Lebaredian said.
But the work doesn’t end once the simulators are created. “We also need these simulations, not only as a data generator, but to test them before we deploy those AI brains that we train onto the real robot,” he said. “We need to test them for millions and millions of hours, drive our AI-driven AV vehicles for millions and millions of miles before unleashing them onto the world. And the best place to do that, and the fastest place, the least expensive place to do it, and the least dangerous place, is in simulation.”
The simulation computer fulfills two functions. It’s the data generator that feeds the AI factory, and it’s also where testing and validation are done before deploying these physical AI systems in the real world. This is a critical step: Without it, companies would have to build physical environments to test robots. Robots can get damaged when they fall, tests can be incomplete, and it takes a long time to create new scenarios.
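That dual role is easy to caricature in a few lines: The same simulator both emits perfectly labeled training data for free and scores a candidate policy over thousands of episodes before any hardware is risked. The physics below is deliberately trivial, a one-dimensional stand-in for a real dynamics engine:

```python
import random

def simulate_episode(policy) -> tuple[list, bool]:
    log, x, goal = [], 0.0, 10.0
    for _ in range(50):
        action = policy(x, goal)
        x += action + random.gauss(0, 0.1)   # noisy actuation
        log.append((x, action))              # labeled training data, generated for free
    return log, abs(x - goal) < 0.5          # validation: did we reach the goal?

naive = lambda x, goal: 0.3 if x < goal else -0.3
successes = sum(simulate_episode(naive)[1] for _ in range(1000))
print(f"validated over 1,000 simulated episodes: {successes} successes")
```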
Success requires cooperation
Though Nvidia is rightfully proud of all the company has contributed to the growth of AI, it will require continued collaboration from all technology sectors to make the full potential of physical AI a reality.
“No one company can solve all of these problems,” Lebaredian said. “It’s just way too large and way too big. We’re building the core computational part of this. We’re building the three computers with all the operating systems for that, but we need many in the ecosystem to build the layers of software on top, to build the physical hardware, and every combination in between. The only way we are going to build physical AI that’s robust, that really addresses all of the needs of the industries I mentioned in that $100 trillion market, is by doing it together.”
Most of the media focus and hype around AI centers on generative AI, but physical AI is right around the corner and will change all our lives. Very soon, vacuums, lawn mowers, golf carts and other consumer devices will be autonomous, as will industrial equipment.


Palo Alto Networks Inc. kicked off the annual Black Hat USA security conference in Las Vegas this week with today’s announcement of its Cortex Cloud Application Security Posture Management solution.
The ASPM offering is designed to fix security issues before cloud and AI applications are deployed. The traditional method of securing apps is highly fragmented: Instead of a single, unified platform, developers rely on a collection of disconnected point products and manual processes. This method is often characterized as “tool sprawl” and has no single source of truth.
Cortex Cloud ASPM operates on the concept of moving security to the earliest stages of development, also known as shifting left. Instead of waiting until an application is deployed to find vulnerabilities, the platform integrates directly into the developer’s workflow and continuous integration and continuous delivery or CI/CD pipelines. This allows it to scan code for misconfigurations, compliance violations and other vulnerabilities in the source code, open-source libraries and infrastructure as code templates as well as identify hardcoded API keys and passwords in the code.
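One of the simplest shift-left checks, scanning source for hardcoded credentials before merge, looks something like the sketch below. The patterns are illustrative and far cruder than a commercial scanner's:

```python
import re

SECRET_PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic secret": re.compile(r"(?i)(api[_-]?key|secret)\s*=\s*['\"][^'\"]{8,}['\"]"),
}

def scan(path: str) -> list[str]:
    """Return findings as 'file:line: description'; non-empty should fail CI."""
    findings = []
    with open(path) as f:
        for lineno, line in enumerate(f, 1):
            for name, pat in SECRET_PATTERNS.items():
                if pat.search(line):
                    findings.append(f"{path}:{lineno}: possible {name}")
    return findings

# Wired into a CI/CD step, a non-empty result blocks the merge:
# assert not scan("app/config.py"), "hardcoded secrets found"
```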
This release extends Cortex Cloud, introduced earlier this year, which combined the company’s cloud-native application protection platform, or CNAPP, and its cloud detection and response, or CDR, technologies to deliver real-time security. Palo Alto has been the most active security vendor in evangelizing the value of security platforms, and this is another example of the value of bringing a set of tools together.
In a prebriefing for industry analysts, Cameron Hyde, product marketing manager for application security, said that as Palo Alto moves from Prisma Cloud to Cortex Cloud, the company wants to more tightly align three pillars — data integration, AI-driven intelligence and automation — as it extends these capabilities to the SOC for tight synergies on the underlying data.
One of the discussion points on the call was the impact of AI on coding. While it is certainly true that organizations can write code at a pace never seen before, it’s also true that the accelerated use of AI can push insecure code into production at an equally unprecedented rate. As this happens, traditional application security approaches struggle to prevent risks, only alerting security teams after they’ve already slipped into production.
Customer benefits: Context is king
Palo Alto says Cortex Cloud ASPM fully integrates with and enhances the application security offerings already available in Cortex Cloud to deliver benefits including:
- Risk prevention: Using full application and business context to proactively stop security issues from reaching production by enforcing guardrails without slowing development.
- Prioritization: Avoiding false alarms by pinpointing critical, exploitable risks without requiring developers to use different tools. Leveraging an open ecosystem of native and third-party scanners to correlate findings with full code, cloud, runtime and business context.
- Eliminating manual remediation: Security and development teams can avoid backlogs by applying automation throughout the entire application lifecycle.
“When we talk with customers about prevention, they mostly say they cannot really prevent,” Sarit Tager, vice president of product management, said in the analyst briefing. “They say, ‘It’s too much, the developers will suffer.’ And we point out that without prevention, it may cost more when you go to production, since you’ll need to figure out who actually wrote the code and how to go back and rebuild it. All of that is really expensive in terms of developer time.”
Leveraging AppSec partners
Cortex Cloud features an open AppSec partner ecosystem to enable customer organizations to consolidate data from third-party code scanners into a centralized platform for comprehensive visibility. The goal is to combine native ASPM data with third-party vendor insights to provide organizations with a stronger security posture that doesn’t require them to change tools.
Palo Alto’s AppSec partners include Checkmarx, Snyk and Veracode. The integration with third parties has been a core component of Palo Alto’s platform strategy for the past several years. No security vendor can do everything and by partnering, Palo Alto can fill in the gaps in its platform.
Cortex Cloud ASPM early access is underway, with general availability expected in October.
AI is having a massive impact on coding: companies of all sizes are now using the technology to spin up thousands of lines of code daily, versus the few hundred that people could produce on their own. Along with this, organizations need to rethink how that code is secured, using AI-enabled, automated systems.


Palo Alto Networks Inc.’s announcement Tuesday of its intent to acquire CyberArk for $25 billion carries a heavy price tag, and its shares fell on the news. But I believe it to be a good, long-term strategic move for Palo Alto and a logical extension of its platformization strategy.
Valuation is interesting to look at but highly overrated long-term. If an acquisition is a good one and helps transform a company, then the purchase price won’t matter over time. Consider the purchase of Mellanox Technologies Ltd. by Nvidia Corp., which was almost $7 billion in 2019. Given that it moved Nvidia into networking and was the foundation for innovations like NVLink and NVSwitch, the company could have paid twice what it did, and we still would have looked at it today as a good deal.
CyberArk enables Palo Alto to go after the identity market, which should flourish in the agentic and physical artificial intelligence era. Following the acquisition news, Palo Alto CEO Nikesh Arora (pictured) was on CNBC and discussed this with Jim Cramer. “I’ve always paid attention to markets when they inflect, because inflection points create the opportunity for us to enter markets,” he said. “I believe with the AI wave we’re seeing, with 88% of all ransomware attacks driven by credential theft, identity is an unsolved problem.”
This topic of conversation came up with Arora at an analyst roundtable at the recent RSA Conference. He discussed the concept of allowing agents to complete tasks on our behalf and the security challenges associated with this. A simple example would be to ask an airline’s agentic agent to rebook a flight for you. One would need to give permission for the agent to do that.
The logical extension of this is then to have the airline agent rebook your hotel, car rental, dinner reservations and so on. The challenge with this is do you give your usernames and passwords for the various services to the airline? A third-party agent? A digital twin of yourself? There are many possibilities, all of which will be used to some degree.
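One likely answer, sketched below under my own assumptions rather than anything Palo Alto has announced, is short-lived, narrowly scoped delegation: the user mints a token that lets an agent perform exactly one action on one service, instead of handing over a password.

```python
# Sketch of scoped agent delegation: rather than sharing a password,
# the user mints a short-lived token limited to one action on one service.
# Purely illustrative; real systems would use OAuth-style flows and signing.
import secrets
import time

def mint_delegation(user, service, scope, ttl_seconds=900):
    return {
        "token": secrets.token_urlsafe(32),
        "user": user,
        "service": service,        # e.g. "hotel-booking"
        "scope": scope,            # e.g. "rebook:reservation-123"
        "expires_at": time.time() + ttl_seconds,
    }

def authorize(delegation, service, action):
    """The downstream service checks scope and expiry, never a password."""
    return (delegation["service"] == service
            and delegation["scope"] == action
            and time.time() < delegation["expires_at"])

grant = mint_delegation("traveler-42", "hotel-booking", "rebook:reservation-123")
print(authorize(grant, "hotel-booking", "rebook:reservation-123"))  # True
print(authorize(grant, "hotel-booking", "cancel:reservation-123"))  # False
```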
Also, with the rise of physical AI, each of those devices needs an “identity” to operate securely within its environment. I recently spoke with the chief information officer of a healthcare organization, and we discussed using autonomous wheelchairs to take patients curbside, obviating the requirement to have a clinician take the person. That would allow clinicians to spend more time bedside rather than doing a task that could be automated. However, in healthcare, security is paramount, creating the need for a holistic identity solution.
CyberArk will plug in nicely with Palo Alto on several fronts. The first is the convergence of privileged access management, or PAM, and identity and access management, or IAM. The two are similar but operate at different levels. The latter is broad in scope and manages identities and permissions for all people, devices and apps. PAM can be considered a specialized subset of IAM where it focuses specifically on securing and managing high-level “privileged” users.
Historically, PAM was more expensive to deploy than IAM, so its use was limited. By rolling it into its platform, Palo Alto can offer PAM at the same cost as IAM, enabling it to be used on every device, user, machine and AI agent. The concept of “proliferation of privilege” has been bandied about for a while, but with standalone platforms it’s hard to scale.
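The IAM/PAM relationship is easy to express in code: IAM answers “what can this identity do?” for everyone, while PAM adds a second, time-boxed gate in front of privileged actions. A toy sketch follows; the roles, actions and elevation window are invented for illustration.

```python
# Toy model of IAM vs. PAM: IAM grants broad, standing permissions;
# PAM wraps privileged actions in just-in-time, time-boxed elevation.
# Roles and actions are invented for illustration.
import time

IAM_POLICY = {
    "analyst":  {"read:dashboard"},
    "operator": {"read:dashboard", "restart:service"},
}

PRIVILEGED_ACTIONS = {"restart:service"}   # the PAM-governed subset
_active_elevations = {}                    # identity -> expiry timestamp

def grant_elevation(identity, minutes=15):
    _active_elevations[identity] = time.time() + minutes * 60

def allowed(identity, role, action):
    if action not in IAM_POLICY.get(role, set()):
        return False                       # IAM check applies to everyone
    if action in PRIVILEGED_ACTIONS:       # PAM adds a second gate
        return time.time() < _active_elevations.get(identity, 0)
    return True

print(allowed("kim", "operator", "restart:service"))  # False: no elevation yet
grant_elevation("kim")
print(allowed("kim", "operator", "restart:service"))  # True: within the window
```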
Also, this expands Palo Alto’s platform capabilities. The identity industry is like every other submarket of security, in that it’s highly fragmented, with the various vendors each solving a piece of the security problem. Palo Alto has done an excellent job of acquiring point products into its platforms and then using the data to “see” across the attack surface with more breadth and depth. With threat actors continually focusing on identity for breaches, bringing CyberArk into its platform makes sense and is overdue.
The concept of platformization is simple to understand and has been happening for over a decade: new vendors pop up to solve a problem, and as the features get standardized, they get rolled into a larger platform. The best example of this is the next-generation firewall. At one time customers purchased firewalls, intrusion prevention systems, virtual private networks and more as point products. Today, no one does that, as the features were standardized and rolled into the firewall. Similarly, secure web gateways, cloud access security brokers, zero trust and so on were all separate products, and now they’ve been rolled into a security service edge stack.
In the interview with Cramer, Arora talked about this. “Long term, a billion-dollar revenue company should not be public,” he said. “They should be part of a bigger entity which allows for the leverage and scale required to create large amounts of cash flow and high market cap.” He was addressing a financial audience, so the piece Arora omitted was the security benefit of the larger entity: if the technology is integrated correctly, companies can find and react to breaches faster and more accurately.
I’m not sure I agree that there should be no publicly traded security companies with a billion in revenue, but his thesis is correct, particularly in the AI era. Security is now an AI game, which requires data, and lots of it. Point products are limited to the data within their silos, while the platform vendors have a much broader set of data to work with. The platform vendors need the technical chops to know what to do with the data, but that’s something Palo Alto has shown it is excellent at, as evidenced by its success with the large number of acquisitions it has done.
Agentic agents, robots and AI are coming, and that requires security teams to rethink their approach to identity. Palo Alto scooped up CyberArk to address this, but I’m sure the other identity players will be in the crosshairs of other security companies. Okta, you’re on the clock.


Recently Zoom Video Communications Inc. held its annual industry analyst event, Perspectives, at its headquarters in San Jose, and it revealed much about the ongoing evolution of the videoconferencing company.
The company’s first act was built on video and given a massive steroid shot during the COVID era, which turned a company few had heard of into a household name. Since then, the company has added a boatload of new features, signed enterprise clients, reduced the churn in its online business and moved into adjacent markets, most notably contact center.
Despite this, the stock is about the same price it was pre-pandemic, which is partially because the communications industry is out of favor with investors. A bigger factor is that Zoom’s strategy is somewhat misunderstood, and I went to Perspectives to clarify in my mind what its next act will look like. Here are my top five takeaways from Zoom Perspectives.
Zoom is attempting to disrupt work, not communications
Zoom’s rise from startup to market leader was accomplished by disrupting the status quo. In a crowded market, Zoom created a product that disrupted on ease of use (the fact that ease of use was a differentiator is another story). Many industry watchers believe Zoom is trying to disrupt communications with an integrated unified communications/contact center offering, but that’s not the focus. During his keynote, Chief Executive Eric Yuan (pictured) talked about how “work is broken,” and he’s right. My research has found that 40% of a worker’s time is spent managing work instead of doing the job. This comes from having to flip constantly among documents, e-mail, chat and other applications. Communications is part of this, but Zoom is aiming to use AI and its suite of products to fix all the problems created by more and more applications.
Zoom apps are about the data, not the apps
When Zoom launched Docs and Mail, there was a large amount of skepticism, since it’s tough to out-Doc and out-Mail Microsoft when its incremental cost of adding the products is zero because of the way it licenses its products. However, Zoom didn’t set out to build a better mail client or Word document. In fact, in both cases, the user interface is OK but nothing that will wow a user. What is valuable, though, is having all the data in a single location. As an example, prior to the event, the analyst relations team at Zoom sent me a document. If I couldn’t remember whether it was sent through Zoom Chat or Mail, I would need to search both. Because the data is unified, one search looks across both. Now extend this to all forms of collaboration and then apply AI. Zoom will be unique in its ability to leverage its AI Companion across back-office and front-office workflows. Unseating Microsoft is a significant challenge as those workloads are “free,” but Microsoft’s apps are siloed, and that could be an Achilles’ heel in the AI era.
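The data advantage is easy to picture: when chat, mail and docs land in one index, a single query covers them all. Below is a hedged sketch of that kind of unified lookup; the in-memory store, sources and field names are my own illustration, not Zoom’s architecture.

```python
# Sketch of why a unified data layer matters: one query spans every
# collaboration source instead of one search per silo.
# Sources and records are invented for illustration.
UNIFIED_INDEX = [
    {"source": "chat", "from": "AR team", "text": "Agenda attached for Perspectives"},
    {"source": "mail", "from": "AR team", "text": "Travel details for analyst event"},
    {"source": "docs", "from": "AR team", "text": "Perspectives briefing deck"},
]

def search(query):
    """One lookup across chat, mail and docs -- no per-app searching."""
    q = query.lower()
    return [r for r in UNIFIED_INDEX if q in r["text"].lower()]

for record in search("perspectives"):
    print(record["source"], "->", record["text"])
# chat -> Agenda attached for Perspectives
# docs -> Perspectives briefing deck
```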
Industry specialization is a differentiator for Zoom
At the event, one of the more compelling sessions was when Randy Maestre, head of industry marketing, walked me through some of the vertical-specific solutions Zoom has. I found the solutions for front-line workers particularly compelling, as they enable this class of user to easily tap into Zoom Chat, Video, Calling and the like through the apps they are already using. It’s easy to give clinicians access to Zoom, but it’s difficult to get them to use it if they must flip between Epic and the Zoom client. Zoom has integrations with more than 1,000 apps, many of them focused on users other than the knowledge worker. According to AI4SP, the number of front-line workers is four times the total of knowledge workers, making this a massively untapped market for the UC industry.
The channel is now Zoom’s friend
At one time Zoom and its channel partners were heading down a divergent path. Zoom had a reputation for paying partners late, stealing deals out from under them and other activities that caused partners to look elsewhere. Two critical hires for Zoom were Mike Conlon as head of Americas channel and Nick Tidd as head of global channel, both longtime channel vets with experience at companies such as HP, Poly, Mitel and Cisco Systems, companies that remain the gold standard for channel programs. Tidd walked the analysts through a bunch of data, such as quote-to-cash times being reduced, channel volumes going up and other data points that indicate a reversal in channel sentiment. For me, the truth lies in channel feedback, where partners large and small have unanimously told me Zoom’s interactions with them have significantly improved and they’re bringing the company into more deals. It has been a long, winding road for Zoom’s channel program, but it looks like it’s heading in the right direction.
Zoom’s 2.0 story has yet to be told
Under former Chief Marketing Officer Janine Pelosi, Zoom came to market with a simple story, “Meet Happy,” which resonated with millions of people who were forced to work from home and then extended the happiness into their personal lives. Zoom is more than a videoconferencing company now, and it can’t rely on that story for its mission. In fact, shifting the focus off video is the right thing to do, as video has become a feature rather than a product. At Perspectives, I asked newly appointed CMO Kim Storin what we should expect from Zoom marketing in the future. Will we see a continuation of Meet Happy or perhaps a pivot to something new? The question might have been a bit unfair given she’s only been in the role a couple of months. She said she’s currently working on that, but there would be a bridge to the past, as that’s what’s always made Zoom successful. If Zoom believes work is fundamentally broken, I’d like to see it be significantly more aggressive in calling out the companies that have broken work, most notably Microsoft and, to a lesser extent, Google. The artificial intelligence era is not for the bashful, and Zoom can use this market transition to get people to think of it in an entirely new way.

The points I’ve laid out certainly aren’t without their challenges. Though I agree with Yuan’s premise that work needs a rethink, taking share from Microsoft in its core areas of documents and e-mail won’t be easy. However, Microsoft itself did this years ago when its Windows-based products and bundled licenses took share from the likes of WordPerfect and cc:Mail. Others have tried, most notably Google, but Google continues to fumble around with its apps suite.
For Zoom, AI cracked the door open. Time will tell if it has the aggressiveness to step through and be the work disruptor. One final note: The company is sitting on almost $8 billion in cash, giving it a massive war chest to acquire companies to accelerate its journey.


Since launching its Generative AI Innovation Center in 2023, Amazon Web Services Inc. has had one primary goal: help customers turn the potential of artificial intelligence into real business value. Now, the company has invested an additional $100 million in the center to enable customers to pioneer the new wave of autonomous agentic AI systems.
Post-announcement, I talked with Taimur Rashid, managing director of generative AI innovation and delivery, who oversees the center. He told me that education about AI continues to be a big part of the Center’s mission. “As new as generative AI is as a technology, one of the things that we can do to help our customers along that journey is educating them, showing them the art of the possible.”
To make that goal a reality, Rashid said AWS has been steadily expanding its gen AI capabilities. “We’ve added machine learning capabilities, gen AI capabilities and Bedrock, which is a foundational platform for building gen AI applications,” he said. “By also bringing human expertise, we can really help customers with that overall journey.”
As you would expect, the AWS Generative AI Innovation Center isn’t a building or campus. It’s a global organization of AWS experts who work closely with customers worldwide to help them navigate the technology landscape, learn what AI can offer and build AI capabilities at scale. Working with the center, customers can launch deployment-ready solutions in as little as 45 days. It’s this combination of collaboration, curated content and expert support that makes the center unique.
The human factor is key
AWS believes there is an important role for people in enabling gen AI to deliver on its promise to benefit a wide range of customers. “We are a multidisciplinary team of AI strategists and forward-deployed engineers,” Rashid said. “We can really be very intentional about helping customers with how to look at gen AI, and then from there, productionizing systems so that they can ultimately get the business value out of it.”
He also noted that customers want to educate their teams. “They want to ensure that they can utilize the technology in the best way. What are the learnings? What are the best practices and approaches?” he added. “That’s where we help bridge that gap. Our most experienced customers in the enterprise space all the way to medium-size, even emerging startups have reached out to us saying, ‘We need some unique help with how we look at model customization.’”
One example he pointed out: “RobinAI, with its AI platform for the legal industry, is a great example of that. They specifically wanted to fine-tune models to help lawyers and paralegals process hundreds of pages, and they got our expertise around that too.”

Another customer that’s working closely with the AWS team to ensure it gains the full benefits of gen AI is Jabil, a large manufacturing company. Rashid explained that in just three weeks, it deployed an intelligent shop-floor assistant using Amazon Q with more than 1,700 policies and specifications across multiple languages, reducing average troubleshooting time while improving diagnostic accuracy. AWS offers technical help, but as Jabil started to adopt the technology, it needed some guidance to optimize costs and make the deployment more efficient.
The center can help organizations kickstart their AI plans. Almost every business and information technology leader I have talked with has dozens, even hundreds of proposed AI projects. The technology is so new that most customer teams are not yet fully equipped with gen AI skills.
They have literacy around data and experience with classical machine-learning models, but when you look at gen AI, they are dealing with a plethora of large language models. Customers want help to determine which model to use. The AWS Generative AI Innovation Center helps customers better understand how gen AI can be used most effectively.
Not surprisingly, Rashid said the gen AI choices available to the typical company can be overwhelming. “A senior executive from a travel and hospitality company told me they had identified 300 use cases and needed help prioritizing them,” he said. “There’s a whole rubric of things that we help customers with, because either the technology is too new, or their teams have not been upskilled on it. We do it for them, which not only helps the customer navigate the space, but we teach them as we go, so they can be more self-sufficient over time.”
Past is prologue
When AWS opened the center in 2023, customers looked at chatbots as their best AI entry point. “As they gained experience and saw all the things they could accomplish with AI, we saw more use cases around content summarization or generation,” Rashid recalled. “It’s like how things quickly progressed at the advent of cloud computing.”
Like gen AI, he added, “cloud was a new emerging technology; a paradigm shift for many people. So, we invested quite heavily in teaching customers, enabling coursework through training and certification. We’re making very similar efforts with AI, too. In fact, I think with AI we must be a lot more intentional, because it’s not only a technical competency that we have to educate customers on. We have to show it in a more immersive way.”

Leveraging partners
Partners are a key part of the Innovation Center’s work. Last year AWS started a Partner Innovation Alliance that brings a subset of its gen AI competency partners closer to the center and teaches them the center’s methodologies and approaches. As a way of scaling, AWS is taking the best practices it has learned along the way and educating its partners. It currently has 19 partners in the alliance, including Deloitte, Booz Allen Hamilton and Capgemini. There are also several boutique partners, born-in-the-cloud and digital-native consulting firms, as well as partners providing regional coverage in markets such as Korea and Latin America.
AWS also has Innovation Center teams in various geographies around the world. “There’s a broad set of things that every region looks at from a gen AI perspective,” Rashid said. “In the Middle East and Africa — and even in Europe — we see a huge emphasis around sovereign AI. Customers are asking how they could use AI to advance many aspects of their society and their nations from health care and government services to education. What’s nice about how we’re structured is we have resources within those regions that can respond very quickly and in alignment with our regional sales teams to meet some of the unique needs that we see in different geos.”
Embracing startups
The AWS Generative AI Innovation Center team is also prioritizing working with startups. Though AWS has a long history of working with startups, it has been more methodical about it of late.
Startups bring unique technology. By bringing this audience into the Innovation Center, AWS can help startups get enterprise-ready so the two can jointly service customers. This is an obvious win for the startup, but also for AWS, as it creates consistency of experience.
Avoiding agent overload
As in most areas of life, there can be too much of a good thing in the world of agentic AI. Specifically, as agentic AI continues its explosive growth, how can organizations avoid having 100 applications that come with 100 agents all trying to chat at users and give advice on what to do?
That’s one of the goals of AWS’ recently announced preview release of Amazon Bedrock AgentCore, which enables customers to securely deploy and manage a large number of agents.
“During a recent trip to New York, every agent conversation I had was about ‘how should we think about this world of integration and permissions when it comes to agentic AI?’” said Rashid. “That’s why the launch of AgentCore is so timely. The primitives [foundational, reusable building blocks that enable AI systems to act autonomously and achieve complex goals] that are offered through AgentCore help establish not only integration, which is one aspect, but then the data permissions that must go with it.”
Ultimately, he added, as companies get their agents to learn, reason and then act, permissions become very important. “Right now, we have building blocks which are important — such as MCP [Model Context Protocol] and AgentCore,” he said. “It’s about how you put them together to integrate them into the existing fabric of the application without having to do a massive overhaul. Over time, companies and teams will get data better integrated. They’ll get a more specific application strategy, but I do think you’ll see a lot of agents. We’re early in that cycle right now, but it’s very important for us to guide customers to avoid the problem.”
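To illustrate the permissions point, here is a minimal sketch of permission-gated agent tooling: every tool an agent can call is checked against the agent’s granted scopes before it runs. The registry, scopes and tool names are invented for illustration and are not the AgentCore API.

```python
# Minimal sketch of permission-gated agent tooling: every tool call is
# checked against the agent's granted scopes before it executes.
# Invented illustration, not the AgentCore API.
TOOL_REGISTRY = {}

def tool(required_scope):
    """Register a tool along with the scope an agent needs to call it."""
    def wrap(fn):
        TOOL_REGISTRY[fn.__name__] = (required_scope, fn)
        return fn
    return wrap

@tool("orders:read")
def lookup_order(order_id):
    return {"order_id": order_id, "status": "shipped"}

@tool("orders:write")
def cancel_order(order_id):
    return {"order_id": order_id, "status": "cancelled"}

def invoke(agent_scopes, tool_name, **kwargs):
    required, fn = TOOL_REGISTRY[tool_name]
    if required not in agent_scopes:
        raise PermissionError(f"agent lacks scope {required!r} for {tool_name}")
    return fn(**kwargs)

support_agent = {"orders:read"}                      # a read-only agent
print(invoke(support_agent, "lookup_order", order_id="A17"))
# invoke(support_agent, "cancel_order", order_id="A17")  # would raise
```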
There isn’t a company I talk to that isn’t interested in gen AI, but new landscapes can be confusing and hold customers back. The AWS Generative AI Innovation Center is an excellent resource for AWS customers to understand the technology, how to deploy it, and how to ensure that as they scale up gen AI, they maximize benefits while reducing risk.


The artificial intelligence sprint is on, and not just within companies: This race is being held at a geographic level as well.
The Middle East has been very active with AI, as has India and, of course, the U.S. This week the Indonesian government is taking a major step toward establishing itself as an AI thought leader and achieving its sovereign AI goals by supporting the efforts of Nvidia Corp., Cisco Systems Inc. and the Indonesian telecommunications leader Indosat to establish an AI Center of Excellence in the country.
The project will build on sovereign AI initiatives announced last year by Indonesian tech leaders and Nvidia. The CoE will support Indonesian AI research, develop local AI talent, and help startup companies deliver innovations to build out the nation’s AI infrastructure.
With the support of Indonesia’s Ministry of Communications and Digital Affairs (Komdigi), the CoE will include a new Nvidia AI Technology Center that will leverage Nvidia’s Inception program, which provides technical expertise and co-marketing support to startups. It also will offer training and certification from the Deep Learning Institute to help nurture local AI talent.
Vikram Sinha, head of Indosat, said the company believes “AI must be a force for inclusion — not just in access, but in opportunity. With the support of global partners, we’re accelerating Indonesia’s path to economic growth by ensuring Indonesians are not just users of AI, but creators and innovators.”
Golden 2045 vision
The CoE will include an AI factory, which is specialized infrastructure to create value from data by managing the entire AI lifecycle. It also will feature a full-stack Nvidia AI infrastructure ranging from the company’s Blackwell graphics processing units and Cloud Partner reference architectures to its AI Enterprise software.
The center’s Sovereign Security Operations Center Cloud Platform is a Cisco-powered system that combines AI-based threat detection, localized data control and managed services for the AI factory.
The CoE is part of an Indonesian initiative called Golden Vision 2045. The project is focused on using digital technologies to bring together government, enterprises, startups, and higher education to drive cross-industry productivity, efficiency, and innovation. The target date is significant as 2045 will mark 100 years of Indonesian independence.
Core AI pillars
The CoE has four key goals for driving Indonesia’s AI strategy:
Sovereign infrastructure: To bolster Indonesia’s digital future and cultivate domestic innovation, Indosat and Nvidia are collaborating on the expansion of the nation’s premier sovereign AI infrastructure. This new platform is engineered for scale and high performance, emphasizing national self-sufficiency in AI. It will provide a secure, high-performance environment for AI operations, specifically tailored to help Indonesia achieve its digital aspirations. A key component of this effort is Indosat’s AI Factory, Lintasarta, which will be the first entity in Southeast Asia to integrate the Nvidia GB200 NVL72, specifically designed to enhance generative AI and high-performance computing capabilities.
Secure AI workloads: To protect Indonesia’s digital assets and intellectual property, Cisco will provide the infrastructure to connect and secure the country’s information and assets. This infrastructure will have security features embedded within the network, forming a resilient backbone for the nation’s AI Center of Excellence. Central to this effort is a Sovereign Security Operations Center Cloud Platform, which marks the first time Splunk and Cisco’s Managed Security Services Solutions have been used together in Indonesia. This SOC will combine AI-driven threat detection with local data controls and effortless integration with national systems, empowering Indonesian organizations to effectively secure their digital holdings and meet regulatory requirements.
AI for all: The AI Center of Excellence is on a mission to ensure hundreds of millions of Indonesians gain access to AI by 2027. This will be made possible by Indosat’s widespread mobile network infrastructure. The push is fundamentally about democratizing AI, removing geographical divides and fostering a new generation of empowered developers throughout the country. The ultimate vision is a future where AI’s benefits are universally shared among all citizens.
Talent and ecosystem development: The center is making a substantial investment in Indonesia’s human capital, aiming to train 1 million individuals in critical digital areas like networking, security and AI by 2027. This ambitious target is supported by both Nvidia and Cisco. Nvidia will facilitate this through its AI Technology Center for research, its Inception program for startups, and its Deep Learning Institute for professional development. For its part, Cisco will leverage its Networking Academy to deliver training, as part of its commitment to upskill 500,000 Indonesians by 2030. These combined efforts are designed to create a future-ready workforce that can drive Indonesia’s digital economy.
Already, more than two dozen independent software vendors and startups are using Indosat’s AI infrastructure to develop technologies for accelerating and improving workflows in areas such as higher education and research, food security, bureaucratic reform, smart cities, mobility and healthcare.
Plans include Indosat and Nvidia developing and deploying AI-RAN (artificial intelligence radio access network) technologies capable of reaching larger audiences by using AI over wireless networks. And the government is developing trustworthy AI frameworks consistent with Indonesian values for the safe, responsible development of AI and related policies.
This type of public-private partnership could serve as a blueprint for development and deployment of AI-driven initiatives in other countries, which would help level the playing field to help ensure smaller and developing nations can take advantage of the AI revolution.
The investment Indonesia is making could pay big dividends in a short period of time, as AI promises to reshape the global economy much the way the Internet did. By investing in its citizens, Indonesia is ensuring that as AI evolves and gets embedded into the fabric of the way we live, the country will be ready to capitalize on the opportunity.