Featured Reports

Verizon Mobile Partners with Microsoft So Teams Can Energize the Mobile Workforce

December 2023 // For years, mobile employees have constituted a significant portion of the workforce. Since the start of the […]

“Private Cellular or Wi-Fi?” Isn’t an Either/Or Question: You Can Have Both

December 2023 // The world used to rely on wired connections. The phones we used back then plugged into the […]

Enterprises Have Big Plans for Wireless but Lack Unified Management

October 2023 // Siloed management, security and QoS leads to complexity and downtime. A converged multi-access wireless network* is the […]

Check Out Our Newest Videos

2025 ZKast #6 With Bob O'Donnell from TECHnalysis on Cisco AI Summit

2025 ZKast #5 with Vijoy Pandey, SVP, Outshift by Cisco on the topic of Internet of Agents

2025 ZKast #4 with Juanita Coley of Solid Rock Consulting on the CX Landscape in 2025

Recent ZK Research Blog

News

As agents become connected, the value of every connected application will rise – provided vendors can work together to let their AI agents work together.

Last week, I attended AWS re:Invent 2024 with 60,000 of my closest friends. We were there to catch up on the latest and greatest in the cloud, particularly AI. One of the more interesting sessions was on the topic of the “Internet of Agents” by Vijoy Pandey, SVP/GM of Outshift by Cisco. For those unfamiliar with Outshift, it’s an internal incubator at Cisco focused on emerging technology in agentic AI, quantum networking, and next-gen infrastructure. As a separate group, Outshift can move at the speed of a startup while retaining access to the resources of a large company like Cisco.

The concept of the “Internet of Agents” is a simple one: the AI-based agents found in applications can communicate with each other bidirectionally. Pandey’s definition was “an open, internet scale platform for quantum-safe agent-agent and agent-human communication.” (More on why the term “quantum-safe” was included appears at the end of this piece.)

Agent Sprawl

One might wonder what problem an Internet of Agents solves, since the problem has not fully manifested itself yet. I believe generative AI is one of those “game-changing” technologies that will alter almost every aspect of our lives. I predict that, over time, every application we use will have a generative AI interface built into it, much as every app has a search box today. These agents will go from reactive, where we ask them questions, to proactive, where specific agents push us the contextually important information we need to know.

Consider the implications of this. Today, most workers use several applications – anywhere from half a dozen to over 50. As these apps evolve and add agents, we will face “agent sprawl,” where users will have as many virtual agents as they have apps. At re:Invent, I attended a session in which the CIO of a major bank participated, and he brought up how his company is building virtual assistants for its own apps while also using Teams Copilot and Salesforce’s agent. Post-session, I asked him what he thinks the future looks like, and he told me he foresees a day when users have a “tapestry” of agents they need to pick and choose from. I followed up by asking what he thinks working in that kind of environment would be like, and he said, “likely chaos.”

Fragmented Knowledge

This proliferation of agents causes several problems. The first is that each agent or assistant is only as knowledgeable as the data in its application, which creates fragmented insights. As an example, consider a company with a great website that does a best-in-class job of showcasing a poorly built product. The web analytics and sales tools used before purchase might show high customer satisfaction scores, as they measure pre-purchase satisfaction. Once the customer uses the product, the mood will turn from happy to upset, and the contact center will field calls regarding refunds and repairs. Asking each application’s generative AI interface about customer sentiment will therefore yield very different answers.

Also, as the agents shift from reactive to proactive, users will be bombarded with messages from these systems as each one looks to keep them updated and informed. I expect the apps to have controls, much like they do today, so users can tune these interactions, but most users will keep critical apps on. It would be like a CEO having a team of advisors across every business unit in the company, all whispering in his ear at once.
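
To make the fragmentation concrete, here is a minimal Python sketch of the website scenario above; the satisfaction scores and data sources are invented for illustration and are not drawn from any real product.

```python
# Hypothetical illustration of the fragmented-insight problem described above.
# The scores and data sources are invented for the example.

def average(scores):
    return sum(scores) / len(scores)

# Pre-purchase tools (web analytics, sales) only see pre-purchase satisfaction.
pre_purchase_scores = [4.6, 4.8, 4.7, 4.5]   # happy shoppers on a great website

# The contact center only sees post-purchase refund and repair calls.
post_purchase_scores = [1.8, 2.1, 1.5, 2.4]  # upset owners of a poorly built product

print("Sales assistant says sentiment is:", round(average(pre_purchase_scores), 2))
print("Contact center assistant says sentiment is:", round(average(post_purchase_scores), 2))

# Only an interconnected view sees the full customer journey.
combined = pre_purchase_scores + post_purchase_scores
print("Holistic view of sentiment:", round(average(combined), 2))
```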

Interconnecting Agents

This is where the Internet of Agents brings value. By interconnecting, these assistants can share information, leading to less, but more relevant, information. In the scenario outlined above, a product owner or sales leader could be alerted when customer sentiment changes, as the pre-purchase agents communicate with the contact center agents to provide a holistic picture. This would enable the company to better understand what happened and take corrective action. It would also enable users to work in the applications they prefer while still accessing information from others. Today, a sales leader can pull data from CRM, contact center tools, sales automation applications, and other systems, but the data must be brought together manually and likely correlated by people to find the insights. With the Internet of Agents, AI could perform analytics across multiple systems.

The value can be described using Metcalfe’s Law, which states that the value of any network is proportional to the square of the number of connected nodes. A network of two nodes has a value of four, whereas a network of 16 nodes has a value of 256, and so on. As agents become connected, the value of every connected application will rise.

To accomplish this, the vendors will need to agree on a set of standards and follow them – something Pandey and the team at Outshift are working on. This is where I hope the application providers learn from the sins of the past, as many of them have historically preferred walled gardens. One example is the UC messaging industry: Slack, Teams, Webex, Zoom, etc., all operate in silos, so a worker can’t send a message from Slack to a Teams user. Imagine how useless text messaging would be if one could only send messages to phones from the same manufacturer. The reality is that when systems are open and standards-based, it creates a rising tide, and everyone wins. A small piece of a big pie is worth far more than most of a small pie.
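
As a quick illustration of the Metcalfe’s Law arithmetic above, here is a small Python sketch using the simple n-squared approximation; the 50-agent figure at the end is a hypothetical extension of the article’s example.

```python
# Metcalfe's Law: the value of a network is proportional to the square of the
# number of connected nodes (using the simple n**2 approximation cited above).

def network_value(nodes: int) -> int:
    return nodes ** 2

for n in (2, 16, 50):
    print(f"{n} connected agents -> relative value {network_value(n)}")

# 2 nodes -> 4 and 16 nodes -> 256, matching the figures in the article.
# A worker with 50 interconnected application agents sits on a network with a
# relative value of 2,500, versus 50 isolated agents worth 1 each.
```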

The Agents Are Coming

One final point: Pandey’s definition did include the term “quantum-safe.” I asked why that was included and was told that if one is building the next generation of secure connectivity, security should be future-proofed. Infusing quantum-safe protocols ensures that, as quantum nodes are added to the infrastructure, communications remain secure even against “store now, decrypt later” attacks. This is consistent with conversations I’ve had with other security companies, whose primary concern around quantum computing is bad actors stealing data today and then using quantum to decrypt it at a later date. To paraphrase Paul Revere, “The agents are coming, the agents are coming,” and I implore the vendor community to get together, standardize the communications between systems, and ensure they are secured. Adoption will be faster, users will be happier, and the value will be greater. Seems like a no-brainer to me.

‘Tis the season to ponder what to get that someone in your life who has everything. If you haven’t finished your Christmas shopping and have $249 to spend for a piece of technology about four inches wide, three-and-a-half inches high, and 1.3 inches thick, then Nvidia Corp. has the perfect gift.

The tech giant introduced the Jetson Orin Nano Super Developer Kit this week. Though that’s a big name for such a small product, don’t be fooled. The latest innovation from Nvidia packs a big wallop in its little package.

The Jetson Orin Nano Super Developer Kit is a small but mighty artificial intelligence computer that the company says “redefines AI for small edge devices.” And mighty it is: the new product delivers up to 67 trillion operations per second (TOPS) of AI performance, a 1.7-times increase over its predecessor, the Jetson Orin Nano.

But if you already bought the original model, which sold for $499 and debuted just 18 months ago, don’t worry. The team at Nvidia isn’t pulling a Grinch move. A free software upgrade for all original Jetson Orin owners turns those devices into the new Super version.

What’s in the box?

The developer kit comprises an 8-gigabyte Jetson Orin Nano module and a reference carrier that accommodates all Orin Nano and Nvidia Orin NX modules. The company says this kit is “the ideal platform for prototyping your next-gen edge-AI product.”

The 8GB module boasts an Ampere architecture graphics processing unit and a six-core Arm central processing unit, which enables multiple concurrent AI application pipelines. The platform runs the Nvidia AI software stack and includes application frameworks for multiple use cases, including robotics, vision AI and sensor processing.

Built for agentic AI

Deepu Talla, Nvidia’s vice president and general manager of robotics and edge computing, briefed industry analysts before the Dec. 17 announcement. He called the new Jetson Orin Nano Super Developer Kit “the most affordable and powerful supercomputer we build.” Talla said the past two years saw generative AI “take the world by storm.” Now, he said, we’re witnessing the birth of agentic AI.

“With agentic AI, most agents are in the digital world. And the same technology now can be applied to the physical world, and that’s what robotics is about,” he said. “We’re taking the Orin Nano Developer Kit and putting a cape on it to make it a superhero.”

And what superpowers will the Jetson Orin Nano Super Developer Kit have? In addition to increasing AI performance from 40 to 67 TOPS, a roughly 70% gain, the new kit has much more memory bandwidth — from 68 to 102 gigabytes per second, a 50% increase.
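
For those who want to check the math, here is a quick back-of-the-envelope calculation of the gains cited above.

```python
# Back-of-the-envelope check of the spec changes cited above.

def pct_increase(old: float, new: float) -> float:
    return (new - old) / old * 100

print(f"AI performance: 40 -> 67 TOPS = {pct_increase(40, 67):.0f}% increase")      # ~68%, i.e. the ~1.7x gain
print(f"Memory bandwidth: 68 -> 102 GB/s = {pct_increase(68, 102):.0f}% increase")  # 50%
```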

“This is the moment we’ve been waiting for,” said Talla. Nvidia is increasing performance significantly on the same hardware platform by supercharging the software. “We designed [the original Orin Nano system] to be field upgradeable. As generative AI became popular and we did all the different testing, we can support all the old systems in the field without changing the hardware, just through software updates.”

On the call, Talla mentioned that the total available market for robots, also known as physical AI, is about half the world’s gross domestic product, or about $50 trillion. Is it that big? It’s hard to quantify, but I do believe the opportunity is massive. Robots represent the next frontier in agentic AI because they combine a physical form factor with advanced decision-making capabilities, bridging the gap between virtual intelligence and the real world.

Unlike purely virtual AI systems, robots can interact with their environment, perform tasks and adapt to dynamic situations, making them critical for solving complex real-world problems. Their ability to act autonomously while continuously learning from their surroundings allows them to tackle challenges that are difficult for traditional software and sometimes people.

In fields such as healthcare, logistics, retail and manufacturing, robots are already demonstrating their potential by automating repetitive tasks, improving precision and enhancing efficiency. As advancements in machine learning, computer vision, and natural language processing continue, robots will become more capable of understanding and responding to human needs with nuance. They can assist the elderly, manage warehouses or even conduct surgeries accurately and consistently, surpassing human capabilities.

Additionally, as robots gain greater autonomy, they will increasingly function as agentic AI — intelligent agents capable of making decisions, setting goals and pursuing actions without constant human oversight. This shift will unlock new possibilities in sectors such as exploration, disaster response and personal assistance, transforming robots into valuable partners for human endeavors. The convergence of AI, robotics, and automation is poised to redefine industries and everyday life.

One of the biggest challenges and expenses with robots is training them. Creating all the possible scenarios to test a physical robot can take years. For example, teaching a robot to walk requires stairs, gravel roads, side hills and other scenarios, and the robot can fall, get damaged, overheat or experience other events that slow training. Nvidia takes a “full stack” approach to physical AI, where training can be done virtually using synthetic data. When the training is complete, the results are uploaded to the robot so it can perform the tasks in the real world.

Planned rejuvenation

Many products that hit the market have been designed with planned obsolescence in mind, whether by design or just due to rapidly evolving technologies and components. Nvidia is doing the opposite. Call it “planned rejuvenation.”

Talla said this is possible because Nvidia designed the Jetson architecture to support faster performance. “We are increasing the frequency of the memory,” he said. “We are increasing the frequency of the GPU. In fact, we are also slightly increasing the frequency of the CPUs. And the power consumption will go up to 25 watts. But the hardware has been designed to support that already.” Jetson runs Nvidia AI software, including Nvidia Isaac for robotics, Nvidia Metropolis for vision AI, and Nvidia Holoscan for sensor processing.

These preconfigured kits are also part of why Nvidia has become the runaway leader in AI. Packaging up all the hardware and software required for a developer to get started significantly reduces development time. Nvidia’s peers offer many of the same building blocks, but the developer must put them together.

The new Jetson Orin Nano Super Developer Kit and software upgrades for owners of the original Jetson Orin Nano Developer Kit are available at nvidia.com.

This partnership dispels the myth that the only companies using Amazon Connect are newer brands looking to shake up the status quo.

No industry has more variables impacting customer service than airlines. Weather issues, mechanical problems, staffing, and other factors can delay or cancel flights. Bags can also get lost, prices can change, and planes can be oversold. This high level of unpredictability creates tension and customer service issues. Additionally, outdated systems, disconnected workflows, and limited self-service options have contributed to long wait times, customer frustration, and high operational costs.

Recognizing these challenges, Air Canada, the country’s largest airline, has been deploying new technologies to improve customer service across its passenger and cargo divisions. Using the Amazon Connect cloud-based contact center platform, Air Canada has addressed existing customer service gaps to ensure that the airline’s systems meet practical, real-world demands. Amazon Web Services (AWS) recently hosted a session with Air Canada at re:Invent 2024, where Sebastian Cosgrove, director of global customer service at Air Canada, shared how Amazon Connect is helping the airline innovate customer service across its business.

The most common issue people experience with contact centers is a lack of continuity, especially when moving from self-service to speaking with an agent. For example, when a person is transferred from self-service to a live agent, details about their issue are not always shared, so they’re forced to repeat themselves. Moreover, supervisors and managers often have limited data access, which hinders their ability to improve service quality. Anything less than stellar service will escalate the stress level of an already anxious passenger, so those disconnected moments only make situations worse.

One of the biggest factors holding companies back from modernizing is cost, and Sheila Smith, the principal Amazon Connect specialist for AWS, spoke about this on the panel. She stated, “The single biggest cost in the contact center is not the cost of technology or network services; it’s the cost of agent resources. If we can find a way to increase containment through self-service experiences and deflect those (self-service) interactions from even coming into the contact center, that will drive a huge cost out of the business. It’s going to impact customer service as well positively.”

Staffing remains the lion’s share of the costs of running a contact center. According to AWS data, 75 percent of contact center costs come from agent staffing, which drives organizations toward more economical options like self-service when possible. Most people (70 percent) prefer self-service over speaking to agents. However, when an issue needs to be escalated to an agent, poor handoffs typically lead to frustration. AI has raised fears of agent job replacement, but the reality is that companies like Air Canada can’t modernize without reducing labor costs. The shift from people-heavy processes to digital methods aligns with current market trends.

Gen AI enhances self-service across communication channels and guides agents through complex issues and workflows. By simplifying workflows and reducing the need to switch between systems, Amazon Connect allows agents to pay more attention to complex customer needs.

In 2022, Air Canada launched a comprehensive modernization initiative, beginning with its Aeroplan loyalty program and extending to reservations and specialty services, enabled by its transition to Amazon Connect. This move integrated self-service capabilities into a unified customer experience, introducing a new interactive voice response (IVR) system with automated call recording, transcription, and sentiment analysis. Agents gained real-time transcription tools and the ability to flag issues easily.

For instance, Air Canada revamped the IVR system in its passenger division to improve the customer experience. When customers call, the system uses loyalty numbers to identify them. It then provides booking details and offers self-service options for easy tasks like selecting seats or checking baggage. For more complex issues, the system gathers context before connecting the customer to an agent, lowering the average handle time. These changes have led to a 15 percent drop in call volumes, an 8 percent decrease in abandonment rates, and $4.7 million CAD in savings by reducing full-time positions. The updated IVR system has also handled 78 percent of informational queries without requiring agent involvement.

According to Cosgrove, the shift made a noticeable difference in Air Canada’s customer service efficiency, allowing agents to focus on higher-value tasks. “In November 2022, we were shy of 30,000 phone calls. Now, we’re consistently handling over 140,000 phone calls within the IVR. That’s the value Amazon Connect brought us and the benefits of self-service embedded in the IVR. We’ve started to realize these benefits,” said Cosgrove.

The airline’s cargo division faced similar challenges but required a slightly different approach. Air Canada replaced manual, email-based processes with Salesforce Service Cloud and integrated them with Amazon Connect. This created a centralized system that manages workflows automatically, giving agents a unified view of customer details and case history. The airline later added telephony with Service Cloud Voice, reducing average handle times by two minutes per phone call and increasing agent productivity by over 20 percent.

“Part of the reason we had such a huge gain was because before our integration with Service Cloud Voice, we had generic statuses such as away, ready, and training. Now we could mine the data to understand what the agents were doing and how we get them ready to always help our customers,” said Cosgrove.

Integrating Service Cloud has also significantly improved Air Canada’s quality assurance (QA) process. Previously paper-based, QA is now fully automated, with results displayed on dashboards. This transformation has saved the airline 89 hours per month that were previously spent on manual QA processing.

Air Canada is now focusing on the next phase of its customer service transformation. To help customers before issues are escalated to agents, virtual assistants powered by gen AI are being developed. These assistants will handle common queries and help streamline workflows. Guided workflows and issue summaries for live human agents are also in development, with feedback loops designed to refine the system based on real-world use. Additionally, the airline plans to introduce AI-supported chat capabilities, further enhancing its omnichannel approach to customer interactions. These advancements aim to make the customer journey more seamless while equipping agents with the necessary tools.

Air Canada is taking a careful and phased approach to modernization, which allows it to adjust without disrupting operations. “When you’re talking about people, legacy systems, and multi-branch products, it takes time to make these changes,” said Cosgrove. “Anyone can quickly pull something out of the box, but if you want meaningful results, you must do it right first.”

While the case study at re:Invent was specific to the airline industry, the lessons learned can be applied to other verticals. By automating mundane, repetitive tasks, companies free agents to spend more time and deliver more value on complicated tasks or with higher-value customers. This enables the brand to handle more interactions with fewer people—better service, lower costs, and happier customers—a win-win-win.

For AWS, showcasing a brand like Air Canada highlights how far the business unit has come. In the past, re:Invent’s Amazon Connect customers were brands like Capital One and Rocket Mortgage – the brands working to disrupt their industries. Air Canada is a blue-chip company looking to serve its customers better. At re:Invent, I met with banks, retailers, and other companies one would not put in the “disruptive” category. This dispels the myth that the only companies using Amazon Connect are newer brands looking to shake up the status quo.

Since the product launched, I’ve positioned Connect as a dark horse to watch, as I believed that, given it’s part of the broader Amazon business and has access to AI, cloud features, and more, it was on track to be a market leader. Last year, it moved into the Leaders quadrant of the Gartner Magic Quadrant; this year, it showcased a bevy of mainstream brands. Amazon Connect is a dark horse no more.
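
To illustrate the containment economics Smith described earlier, here is a rough, hypothetical cost model in Python. Only the 140,000 monthly IVR interactions and the 78 percent containment figure come from the article; the per-interaction costs and the 20 percent baseline are invented assumptions, not Air Canada’s actual numbers.

```python
# A rough, hypothetical model of the containment economics described above.
# Per-interaction costs and the 20% baseline are invented for illustration.

monthly_interactions = 140_000      # monthly interactions handled in the IVR (from the article)
cost_per_agent_interaction = 6.00   # fully loaded cost of an agent-handled call (assumed)
cost_per_self_service = 0.50        # cost of an IVR/self-service interaction (assumed)

def monthly_cost(containment_rate: float) -> float:
    """Total cost when `containment_rate` of interactions stay in self-service."""
    contained = monthly_interactions * containment_rate
    escalated = monthly_interactions - contained
    return contained * cost_per_self_service + escalated * cost_per_agent_interaction

before = monthly_cost(0.20)   # low containment with a legacy IVR (assumed baseline)
after = monthly_cost(0.78)    # the 78% informational-query containment cited above

print(f"Monthly cost at 20% containment: ${before:,.0f}")
print(f"Monthly cost at 78% containment: ${after:,.0f}")
print(f"Estimated monthly savings: ${before - after:,.0f}")
```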

The National Hockey League and Amazon Web Services Inc. are working together to change how hockey is experienced, leveraging cloud technologies and data-driven insights to enhance production workflows and fan engagement.

At AWS re:Invent last week, representatives from both organizations joined a panel titled “NHL Unlocked: Live cloud production, sports data, and alternate feeds.” The panelists were Julie Souza, global head of sports for AWS; Grant Nodine, senior vice president of technology for the NHL; Brant Berglund, senior director of coaching and GM applications for the NHL; and Andrew Reich, senior industry specialist, BD, for AWS. They discussed their progress across a range of issues.

Souza opened the discussion by emphasizing the importance of collaboration in the partnership. “AWS isn’t just a tech vendor,” she said. “We’re working alongside the NHL to explore what’s possible and deliver real value to fans and the league.”

Souza said their shared commitment to innovation has been central to their progress, including advancements in live cloud production and analytics-driven storytelling.

This sentiment of “partner versus vendor” has been a consistent theme in my discussions with other sports entities. The PGA TOUR, Swimming Australia, the NFL and others have told me the AWS team actively gets involved in helping them consider what’s possible and bringing new ideas to the table.

Building a foundation for innovation

Nodine traced the journey to the league’s initial efforts to transition its video content to the cloud. This foundational step enabled automating processes such as encoding and scheduling, which are now critical to their operations. “You can’t do the exciting stuff,” Nodine noted, “until you’ve built the basics.”

Reich elaborated on the architecture supporting this transformation. Using AWS Elemental MediaConnect, the NHL created a streamlined pipeline for video ingest, storage and distribution. This setup makes nightly game broadcasts efficient and positions the league to experiment with new forms of content delivery.

Making sense of the data

The NHL’s adoption of player- and puck-tracking systems has unlocked unprecedented insights into the game. These systems collect billions of data points during a single night of games.

Berglund emphasized how this data helps deepen understanding for fans. “It’s not just about collecting stats,” he said. “It’s about turning that data into meaningful stories.”

One example is Ice Tilt, a period of play in which one team dominates possession and offensive pressure, pinning its opponents in the defensive zone and generating sustained momentum.

Berglund said he once asked player Jack Hughes how he recognizes momentum shifts during games. Hughes described it as “tilting the ice.” This once informal concept is now quantified and aligns with the NHL’s use of player positioning data to measure territorial momentum, turning the metaphor into a precise, trackable metric.
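
To show how positioning data could be turned into such a metric, here is a purely hypothetical Python sketch; it is not the NHL’s actual Ice Tilt calculation, and the sample puck positions, simplified rink coordinates and zone boundary are invented for the example.

```python
# Hypothetical sketch: derive a territorial "tilt" share from tracking data.
# Each sample: (seconds_elapsed, puck_x_position_in_feet). Assume a simplified
# 200-foot rink centered at x = 0, with the offensive zone beyond x = 25.
puck_samples = [
    (0, 10), (2, 30), (4, 45), (6, 60), (8, 55),
    (10, 40), (12, 5), (14, -30), (16, 35), (18, 50),
]

OFFENSIVE_ZONE_BOUNDARY = 25

def ice_tilt(samples) -> float:
    """Share of sampled time the puck spends in the offensive zone."""
    in_zone = sum(1 for _, x in samples if x > OFFENSIVE_ZONE_BOUNDARY)
    return in_zone / len(samples)

print(f"Offensive-zone share of samples: {ice_tilt(puck_samples):.0%}")  # 70%
```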

Reaching new audiences

Alternate broadcasts, such as the NHL Edge DataCast, showcase the league’s ability to tailor content to different audiences. The Big City Greens Classic, which adapted NHL games for a younger, animation-loving demographic, demonstrated the potential for these efforts. Souza noted that these initiatives are helping the NHL reach audiences who might not traditionally watch hockey. “By meeting fans where they are, we can make the game accessible to more people in ways that resonate with them,” Nodine added.

The league also creatively uses analytics, such as face-off probability, which calculates matchups and success rates in real time. This feature not only enriches broadcasts but also enables commentators to explore the nuances of gameplay more deeply.

A shift to live cloud production

In March 2023, the NHL reached a major milestone: It became the first North American league to produce and distribute a game entirely in the cloud.

Nodine recounted how this effort involved routing all camera feeds to the cloud for production by remote teams. The approach also promoted sustainable production practices by significantly reducing carbon emissions. “We’re not turning back,” Nodine said, citing both operational flexibility and environmental benefits as reasons to continue this path.

Reich highlighted how cloud-based workflows enable the league to experiment in ways traditional setups cannot. For example, by centralizing video feeds in the cloud, the NHL can produce alternate broadcasts or deliver content directly to fans in the arena through mobile devices.

The NHL deserves significant credit for using the cloud to produce games, as it was the first league to make the shift. For sports entities, production quality is obviously critical, as that’s how most fans engage with the brand. Before the NHL, most sports organizations were skeptical that the cloud could deliver quality comparable to producing the game on-premises. The NHL’s success has led other leagues, such as the PGA TOUR and the NFL, to produce events in the cloud, but the NHL has the distinction of being first.

What’s next

As the NHL and AWS reflect on their progress, they are also exploring what’s next. Nodine pointed to opportunities in using artificial intelligence to streamline highlight generation and provide real-time insights for broadcasters. By automating some workflows, broadcasters could focus on storytelling, while fans could gain deeper insights into the game’s dynamics.

Alternate broadcasts remain a fertile ground for experimentation. Projects such as Big City Greens and NHL Edge DataCast have shown how targeted content can reach new audiences, and the technology behind these initiatives could inform traditional broadcasts in the future. For example, integrating metrics such as time on ice or Ice Tilt directly into standard broadcasts could provide fans with richer narratives without disrupting the viewing experience.

Souza summarized the approach as follows: “This is about thoughtful progress — identifying what works, refining it and integrating it in ways that enhance the game for everyone.” As the partnership evolves, the focus remains on making hockey more engaging, accessible, and dynamic for a global audience.

Some final thoughts

If you love hockey like I do (I’m Canadian, so I’m mandated to love hockey), you support any efforts to improve the fan experience. What I like most about the collaboration between the NHL and AWS is it helps casual fans better understand the game. It’s been said that AI lets the untrained eye see what the trained eye does, and features that highlight specific nuances can accelerate the learning of a game that can be confusing to non-hard-core fans.

Now, if only the Canucks can hold on until Stanley Cup playoff time.

Networking and complexity go hand in hand, like chocolate and peanut butter. Though this has been the norm, it’s playing havoc with business operations.

A recent ZK Research/Cube Research study found that 93% of organizations state the network is more critical to business operations than two years ago. In the same period, 80% said the network was more complex. Increasing complexity leads to blind spots, unplanned downtime, security breaches and other issues that affect businesses.

Extreme Networks Inc. today announced its Extreme Platform ONE connectivity platform to combat this. The back-end data lake combines data from networks, security tools and third parties such as Intel Corp., Microsoft Security and ServiceNow Inc. The platform is built on an artificial intelligence core to deliver conversational AI and autonomous networking. The goal is to automate wholly or at least partially many of the complex tasks associated with operating and securing a network.

The platform is flexible enough to serve multiple audiences. It includes a composable workspace that enables cross-team workflows. Although network engineers will most likely work with Extreme, the company has added security functionality and capabilities for that audience. Extreme also offers workflows, services and data for procurement and financing teams.

The latter audience is often overlooked when it comes to network infrastructure. As a former information technology executive, I am all too familiar with the pains of managing subscriptions, service contracts and licenses. This is often done in spreadsheets, which is time-consuming and error-prone and can frequently lead to overspending.

Extreme has built a dashboard that shows all relevant financial information, including contracts and renewal dates. This can help the customer better understand current and future trends and plan for upgrades.

For the network practitioner, the AI capabilities are targeted at troubleshooting complicated problems, which networks are filled with. Wi-Fi problems are the hardest to solve as there are so many variables. With a wired network, virtual local-area networks, duplex mismatches and other settings can often cause unexpected performance issues.

Finding these can take days, weeks, or even months, as replicating them can be challenging. AI sees all data across the network and can connect the dots that people can’t.

There is also an AI Policy Assistant that administrators can use to create, view, update and remove application policies. Policy administration is necessary but time-consuming and error-prone. Setting up policies initially is straightforward but keeping them up to date as people and devices move around the network or as applications change can be difficult, particularly in dynamic environments, which most companies are today because of the internet of things, cloud and work-from-home.

The rollout of Extreme Platform ONE is the culmination of many acquisitions and years of work. Today’s Extreme is a rollup of many network vendors, including Enterasys, Brocade, Avaya Networking and Motorola/Zebra. The purchase of Aerohive brought the company the cloud back end that is being leveraged in the current platform launch. Along the way, the company rationalized its product set and implemented “Universal Hardware,” which lets customers choose between different operating systems.

Extreme Platform ONE is well-timed with the current AI wave. The concept of the network platform has been bandied about for years but has yet to catch on.

Last week, I talked to Extreme Chief Technology Officer Nabil Bukhari (pictured) about the platform and why now. He told me this is the direction the company has been moving in since he took the role in 2020. AI makes a platform’s value proposition compelling today, as it requires a single set of data to deliver the best insights.

Companies that run one vendor for the WAN, another for Wi-Fi, and another for the campus network will have three sets of data, likely siloed, and three AI engines, leading to fragmented insights. For most companies, AI for operations is the way forward, and that will push more companies toward a platform approach.

Other vendors have followed the platform path. What I like about Extreme’s approach is that it uses AI as more than a troubleshooting tool. Though that’s a core function of the platform, it addresses issues at every step of the network lifecycle: planning, deployment, operations, optimization, security and renewals.

It has taken Extreme years to combine multiple products and unify the data set, but that work is done, and customers should see the benefits with the new Platform ONE.

Amazon Web Services Inc. Chief Executive Matt Garman delivered a three-hour keynote at the company’s annual re:Invent conference to an audience of 60,000 attendees in Las Vegas and another 400,000 watching online, and they heard a lot of news from the new leader, who became CEO earlier this year after joining the company in 2006.

The conference, dedicated to builders and developers, offered 1,900 in-person sessions and featured 3,500 speakers. Many of the sessions were led by customers, partners and AWS experts. In his keynote, Garman (pictured) announced a litany of advancements designed to make developers’ work easier and more productive.

Here are nine key innovations he shared:

AWS will play a big role in AI

Garman kicked off his presentation by announcing the general availability of the company’s latest Trainium chip — Trainium2 — along with EC2 Trn2 instances. He described these as the most powerful instances for generative artificial intelligence thanks to custom processors built in-house by AWS.

He said Trainium2 delivers 30% to 40% better price performance than current graphics processing unit-powered instances. “These are purpose-built for the demanding workloads of cutting-edge gen AI training and inference,” Garman said. Trainium2 gives customers “more choices as they think about the perfect instance for the workload they’re working on.”

Beta tests showed “impressive early results,” according to Garman. He said the organizations that did the testing — Adobe Inc., Databricks Inc. and Qualcomm Inc. — all expect the new chips and instances will deliver better results and a lower total cost of ownership. He said some customers expect to save 30% to 40% over the cost of alternatives. “Qualcomm will use the new chips to deliver AI systems that can train in the cloud and then deploy at the edge,” he said.

When the announcement was made, many media outlets painted Trn2 as Amazon looking to go to war with Nvidia Corp. I asked Garman about this in the analyst Q&A, and he emphatically said that was not the case. The goal with its own silicon is to make the overall AI silicon pie bigger so that everyone wins. This is how Amazon approaches the processor industry, and there is no reason to assume it will change how it handles partners; the headlines were simply clickbait. More Nvidia workloads are run in the AWS cloud, and I don’t see that changing.

New servers to accommodate huge models

Today’s models have become very big, very fast, with hundreds of billions to trillions of parameters. That makes them too big to fit on a single server. To address that, AWS announced EC2 Trainium2 UltraServers. These connect four Trainium2 instances — 64 Trainium2 chips in total — all interconnected by high-speed, low-latency NeuronLink connectivity.

This gives customers a single ultranode with over 83 petaflops of compute power. Garman said this will have a “massive impact on latency and performance.” It enables very large models to be loaded into a single node, delivering much better latency and performance without having to break the model up across multiple nodes. Garman said Trainium3 chips will be available in 2025 to keep up with gen AI’s evolving needs and provide the performance customers need for inference.
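
A quick bit of arithmetic on those UltraServer figures (four Trn2 instances, 64 chips, more than 83 petaflops in aggregate) shows what each chip contributes.

```python
# Quick arithmetic on the UltraServer figures cited above.

instances_per_ultraserver = 4
chips_total = 64
aggregate_petaflops = 83

chips_per_instance = chips_total // instances_per_ultraserver
per_chip_petaflops = aggregate_petaflops / chips_total

print(f"Chips per Trn2 instance: {chips_per_instance}")                  # 16
print(f"Implied compute per chip: ~{per_chip_petaflops:.1f} petaflops")  # ~1.3
```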

Leveraging Nvidia’s Blackwell architecture

Garman said AWS is the easiest, most cost-effective way for customers to use Nvidia’s Blackwell architecture. AWS announced a new P6 family of instances based on Blackwell. Coming in early 2025, the new instances featuring Nvidia’s latest GPUs will deliver up to 2.5 times faster compute than the current generation of GPUs.

AWS’s collaboration with Nvidia has led to significant advancements in running generative AI workloads. Bedrock gives customers model choice: It’s not one model to rule them all but a single source for a wide range of models, including AWS’ newly announced Nova models. There won’t be a divide between applications and gen AI applications. Gen AI will be part of every application, using inference to enhance, build or change an application.

Garman said Bedrock resonates with customers because it provides everything they need to integrate gen AI into production applications, not just proofs of concept. He said customers are starting to see real impact from this. Genentech Inc., a leading biotech and pharmaceutical company, wanted to accelerate drug discovery and development by using scientific data and AI to rapidly identify and target new medicines and biomarkers for their trials. Finding all this data required scientists to scour many external and internal sources.

Using Bedrock, Genentech devised a gen AI system so scientists can ask detailed questions about the data. The system can identify the appropriate databases and papers from a huge library and synthesize the insights and data sources.

It summarizes where it gets the information and cites the sources, which is incredibly important so scientists can do their work. It used to take Genentech scientists many weeks to do one of these lookups. Now, it can be done in minutes.

According to Garman, Genentech expects to automate five years of manual efforts and deliver new medications more quickly. “Leading ISVs, like Salesforce, SAP, and Workday, are integrating Bedrock deep into their customer experiences to deliver GenAI applications,” he said.

Bedrock model distillation simplifies a complex process

Garman said AWS is making it easier for companies to take a large, highly capable frontier model and send it all their prompts for the questions they want to ask. “Then you take all of the data and the answers that come out of that, and you use that output and your questions to train a smaller model to be an expert at one particular thing,” he explained. “So, you get a smaller, faster model that knows the right way to answer one particular set of questions. This works quite well to deliver an expert model but requires machine learning involvement. You have to manage all of the data workflows and training data. You have to tune model parameters and think about model weights. It’s pretty challenging. That’s where model distillation in Bedrock comes into play.”

“Distilled models can run 500% faster and 75% more cheaply than the model from which they were distilled. This is a massive difference, and Bedrock does it for you,” he said. This difference in cost can change a gen AI application’s ROI from too expensive to roll out in production to very valuable. You send Bedrock sample prompts from your application, and it does all of the work.
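
Conceptually, the distillation workflow Garman describes looks something like the sketch below. This is not the Bedrock API; query_teacher() and fine_tune_student() are hypothetical placeholders for whatever model-invocation and training calls an implementation would actually use.

```python
# Conceptual sketch of distillation: prompt a large "teacher" model, collect
# its answers, and train a smaller "student" model on those pairs. The two
# callables are hypothetical stand-ins, not real Bedrock calls.

from typing import Callable

def distill(prompts: list[str],
            query_teacher: Callable[[str], str],
            fine_tune_student: Callable[[list[tuple[str, str]]], object]):
    # 1. Send the application's sample prompts to the large frontier model.
    training_pairs = [(p, query_teacher(p)) for p in prompts]
    # 2. Train the smaller model on the question/answer pairs so it becomes
    #    an expert at this one narrow task.
    return fine_tune_student(training_pairs)

# Example with stand-in functions:
student = distill(
    prompts=["What is my claim status?", "How do I update my policy?"],
    query_teacher=lambda p: f"teacher answer to: {p}",
    fine_tune_student=lambda pairs: {"model": "small-expert", "examples": len(pairs)},
)
print(student)
```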

But getting the right model is just the first step. “The real value in Generative AI applications is when you bring your enterprise data together with the smart model. That’s when you get really differentiated and interesting results that matter to your customers. Your data and your IP really make the difference,” Garman said.

AWS has expanded Bedrock’s support for a wide range of formats and added new vector databases, such as OpenSearch and Pinecone. Bedrock enables users to get the right model, accommodates an organization’s enterprise data, and sets boundaries for what applications can do and what the responses look like.

Enabling customers to deploy responsible AI — with guardrails

Bedrock Guardrails make it easy to define the safety of applications and implement responsible AI checks. “These are guides to your models,” said Garman. “You only want your gen AI applications to talk about the relevant topics. Let’s say, for instance, you have an insurance application, and customers come and ask about various insurance products you have. You’re happy to have it answer questions about policy, but you don’t want it to answer questions about politics or give healthcare advice, right? You want these guardrails saying, ‘I only want you to answer questions in this area.’”
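
The insurance example maps to a simple topic filter. The sketch below is only an illustration of the idea, not the Bedrock Guardrails API; the denied-topic keyword lists are invented.

```python
# Conceptual sketch of a topic guardrail for an insurance assistant: allow
# policy questions, refuse politics and healthcare advice. Illustration only;
# keyword lists are invented and this is not the Bedrock Guardrails API.

DENIED_TOPICS = {
    "politics": ["election", "senator", "political party"],
    "healthcare advice": ["diagnosis", "prescription", "treatment plan"],
}

def apply_guardrail(user_message: str) -> str:
    text = user_message.lower()
    for topic, keywords in DENIED_TOPICS.items():
        if any(k in text for k in keywords):
            return f"Sorry, I can only help with insurance questions (blocked topic: {topic})."
    return "PASS"  # hand the message through to the model

print(apply_guardrail("What does my flood policy cover?"))          # PASS
print(apply_guardrail("Which political party should I vote for?"))  # blocked
```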

This is a huge capability for developing production applications, Garman said. “This is why Bedrock is so popular,” he explained. “Last year, lots of companies were building POCs for gen AI applications, and capabilities like Guardrails were less critical. It was OK to have models ‘do cool things.’ But when you integrate gen AI deeply into your enterprise applications, you must have many of these capabilities as you move to production applications.”

Making it easier for developers to develop

Garman said AWS wants to help developers innovate and free them from undifferentiated heavy lifting so they can focus on the creative things that “make what you’re building unique.” Gen AI is a huge accelerator of this capability. It allows developers to focus on those pieces and push off some of that undifferentiated heavy lifting. Q Developer, which debuted in 2023, is the developers’ “AWS expert.” It’s the “most capable gen AI assistant for software development,” he said.

Q Developer helped Datapel Systems “achieve up to 70% efficiency improvements. They reduced the time needed to deploy new features, completed tasks faster, and minimized repetitive actions,” Garman said.

But it’s about more than efficiency. The Financial Industry Regulatory Authority, or FINRA, has seen a 20% improvement in code quality and integrity by using Q Developer to help it create better-performing and more secure software. Amazon Q has the “highest reported acceptance rate of any multi-line coding assistant in the market,” said Garman.

However, a coding assistant is just a tiny part of what most developers need. AWS research shows that developers spend just one hour a day coding. They spend the rest of the time on other end-to-end development tasks.

Three new autonomous agents for Amazon Q

According to Garman, autonomous agents for generating user tests, documentation and code reviews are now generally available. The first enables Amazon Q to generate end-to-end user tests automatically. It leverages advanced agents and knowledge of the entire project to provide developers with full test coverage.

The second can automatically create accurate documentation. “It doesn’t just do this for new code,” Garman said. “The Q agent can apply to legacy code as well. So, if a code base wasn’t perfectly documented, Q can understand what that code is doing.”

The third new Q agent can perform automatic code reviews. It will “scan for vulnerabilities, flag suspicious coding patterns, and even identify potential open-source package risks” that might be present, said Garman. It will identify where it views a deployment risk and suggest mitigations to make deployment safer.

“We think these agents can materially reduce a lot of the time spent on really important, but maybe undifferentiated tasks and allow developers to spend more time on value-added activities,” he said.

Garman also announced a new “deep integration between Q Developer and GitLab.” Q Developer functionality is now deeply embedded in GitLab’s platform. “This will help power many of the popular aspects of their Duo Assistant,” he said. Teams can access Q Developer capabilities, which will be natively available in the GitLab workflows. Garman said more will be added over time.

Mainframe modernization

Another new Q Developer capability is performing mainframe modernization, which Garman called “by far the most difficult to migrate to the cloud.” Q Transformation for Mainframe offers several agents that can help organizations streamline this complex and often overwhelming workflow. “It can do code analysis, planning, and refactor applications,” he said. “Most mainframe code is not very well-documented. People have millions of lines of COBOL code, and they have no idea what it does. Q can take that legacy code and build real-time documentation that lets you know what it does. It helps let you know which applications you want to modernize.”

Garman said it’s not yet possible to make mainframe migration a “one-click process,” but with Q, instead of a multiyear effort, it can be a “multiquarter process.”

Integrated analytics

Garman introduced the next generation of Amazon SageMaker, which he called “the center for all your data, analytics and AI needs.” He said AWS is expanding SageMaker by adding “the most comprehensive set of data, analytics, and AI tools.” SageMaker scales up analytics and now provides “everything you need for fast analytics, data processing, search, data prep, AI model development and generative AI” for a single view of your enterprise data.

He also introduced SageMaker Unified Studio, “a single data and AI development environment that allows you to access all the data in your organization and act on it with the best tool for the job.” Garman said SageMaker Unified Studio, which is currently in preview, “consolidates the functionality that analysts and data scientists use across a wide range of standalone studios in AWS today.” It offers standalone query editors and a variety of visual tools, such as EMR, Glue, Redshift, Bedrock and all the existing SageMaker Studio capabilities.

Even with all these new and upgraded products, solutions and capabilities, Garman promised more to come.

Veeam Software Group GmbH, the market share leader in data resilience, today announced a new $2 billion investment from several top investment firms.

The Seattle-based company said its valuation now stands at $15 billion, which is about the same as the valuation of Commvault Systems Inc. and Rubrik Inc. combined. Investors in what the company calls an oversubscribed round are led by TPG, with participation from Temasek, Neuberger Berman Capital Solutions and others. Morgan Stanley managed the round.

Recently, I had an in-depth conversation with Veeam Chief Executive Officer Anand Eswaran and Chief Financial Officer Dustin Driggs about what enabled Veeam to reach this point in its evolution and, more importantly, where the company is going from here.

What the funding will enable

“We have huge ambitions of growth and profitability,” said Eswaran. “Having extremely well-capitalized investors will help us if we want to make some big moves. We can already make small, medium, and large moves ourselves because of the balance sheet and how profitable we are. But if we want to do something Earth-shattering, we have the investors who will be a key part of this process going forward with us.”

Previously, company insiders owned 100% of Veeam. This round brings in diversified investors that “will be with us for the duration of the journey,” according to Eswaran.

He called the level of investment “great validation” because the firms conducted a “massive independent analysis” of the company before investing. Veeam’s financial results and market share growth, which have been steadily upward, demonstrate why the investors were eager to get on board. Eswaran cited four key reasons for Veeam’s growth and its attractiveness to outside investors:

  1. “It starts with our best-in-class product, the foundation of our No. 1 market share. No. 1 in scale, growth and profit.”
  2. “The strength of our ecosystem. 34,000-plus partners and the global scale and reach of more than 550,000 customers, including 77% of the Fortune 500, in 150-plus countries.”
  3. “Our balance of scale, growth and profitability is unique, not just in our category but across the software industry.”
  4. “We have a huge TAM [total addressable market]. But at the end of the day, I bet my life on the people I work with. The experience they bring to the table makes a huge difference.”

These points concur with conversations I’ve had with customers, partners, resellers and the investment community. Backup and recovery is a well-established market that has historically been dominated by legacy vendors such as Dell Technologies Inc. and Veritas Technologies LLC, which brought little innovation, leaving the door open for a company such as Veeam to step in and take share.

The company was founded in 2006 and experienced slow and steady growth. It was wholly acquired by Insight Partners for $5 billion in 2020. The next year, Eswaran joined Veeam as its CEO after successful tenures at RingCentral Inc. and Microsoft Corp. That coincided with Veeam adding several new products, including support for Office 365, AWS and Azure, as well as Kubernetes through the acquisition of Kasten. Since then, the company has not looked back, and about a year ago it passed Dell to become top dog in backup and recovery, according to IDC, leading to this massive round of funding and high valuation.

Focus on ARR growth — and the enterprise

The Veeam leaders said they expect to finish 2024 with more than $1.7 billion in annualized recurring revenue, 29% EBITDA, rapidly expanding enterprise sales and 129% subscription net dollar retention from enterprise sales.

“Historically, we’ve focused on the mid-market, but over the last several years, the enterprise focus has been paying off, with more than half of our revenues coming in from the enterprise,” Eswaran said. “We have 2,200-plus customers spending more than $100,000 in ARR with us. We have over 80 customers spending a million dollars or more in ARR.”

CFO Driggs said the company’s rapid growth has been done economically. “We’re not incurring additional debt to fuel this growth,” he said. “We also generate significant free cash flow for the business. We’re funding the innovation that we need to continue to grow organically, off of our balance sheet, off of the free cash flow we’re generating. We have a super-healthy business model about the comps we see.”

Eswaran said it takes a different approach to succeed with enterprise customers than in the midmarket. “Companies fail because they try the same approach across their go-to-market for both ends, all customer segments, and that’s a failing proposition,” he said. “We’ve been very deliberate about preserving the strength and solidifying SMB and mid-market, as well as expanding and capturing more share now in enterprise and larger enterprise.”

Veeam has added more than 8,000 new customers in the last two years, according to Eswaran. The trend has been for installed-base customers to purchase multiple products from Veeam. “This multi-product go-to-market portion will be a very key part of how we land and expand. A large part of our revenues will come from expansion” with existing customers.

The importance of data resilience

As Veeam has evolved and expanded, the company has focused on providing its growing and diverse customer base with solutions that enable data resilience. “That’s what we stand for,” said Eswaran.

“For effective data resilience for every company, you need to think about it across this entire lifecycle, starting with ensuring you back up data correctly,” he said. “Then, you can recover instantly. Data is portable across technologies, platforms, everything you need to do on security, well beyond multifactor authentication and end-to-end encryption, and then the very specific use cases for AI, for data intelligence, which is critical. So, all this coming together will create the ultimate resilience posture for companies. And that’s why the entire company is grounded on our purpose, safeguarding the digital world with exceptional resilience and intelligence.”

BaaS is a major growth area

Historically, customers have consumed Veeam as a service offered by managed and cloud service providers. Now, Veeam is focusing on delivering its own backup-as-a-service (BaaS) offering. “This is going to be the first full year of a first-party BaaS service with the new Veeam Data Cloud,” Eswaran said. “It will create the next wave of growth for us.”

Eswaran said the company is bullish on the capabilities of the Veeam Data Cloud. “With just one new workload — Azure, and the entire momentum around Microsoft 365 — we’re going to finish 2024 at $50 million in ARR from Veeam Data Cloud and BaaS and have set ambitious goals for 2025,” he said. “You can expect that every one of the workloads we protect will be offered on Veeam Data Cloud. In 2024, it was just two workloads, but we expect to exceed 10 workloads by the end of 2025, and then it will snowball and amplify and accelerate even more.”

With all Veeam has accomplished — and its potential for future growth — Eswaran’s pride in the organization and its people is crystal-clear. “When we can work with cities such as New Orleans and Fort Lauderdale that have been breached and get service back to the citizens quickly, those are the things that make this feel like a purpose, which our employees have really rallied around, of creating resilience in a digital-first world,” he said.

Financially, next on tap for Veeam is an initial public offering of stock. Although no timetable has been set, the company should have fleshed out its artificial intelligence story more by the time it goes public.

I’ve asked Veeam leadership, Eswaran included, about this in the past, and they’ve all echoed the same sentiment. Veeam holds massive amounts of customer data, and the company should be able to use AI to see what the naked eye can’t. This could be particularly valuable in the world of cyber, where, through the use of AI, Veeam could find malware that has yet to be discovered, or spot anomalous data patterns that could indicate unauthorized access or even malicious insider activity.

I’ve heard that data is the new gold in the AI era. If that’s true, and most industry watchers would agree, then the ability to protect, back up and recover that gold is equally valuable. Proof of that is a massive infusion of funding from many tier-one investment shops.

When it comes to building campus networks, there is a religion around stacking versus chassis-based systems. In my network engineer days, I lived on both sides of that holy war. Initially, it was chassis or nothing, but I worked for a big financial firm with large budgets and didn’t give much credence to other options. As time went on, I began to appreciate a stack’s flexibility and budget flexibility, as one could start with a small network stack and add to it when required. However, neither solution is perfect. Stacking, developed decades ago, hasn’t evolved much. One of the big benefits of a stack is that it simplifies switch management by treating multiple devices as a single entity. However, these solutions rely on proprietary tools and protocols, limit flexibility with fixed topologies like rings or chains, and can often lock a company into a specific vendor. The network has continued to grow in importance but has also grown more complex. A recent ZK Research/Cube Research study found that 93% of organizations believe the network is more critical to business operations than two years ago, but in that same time frame, 80% believe the network has become more complex. Given the network’s role, where it connects everything in a company, simplicity must be part of the innovation cycle in modernized network infrastructure. This week, Arista Networks Inc., best known for its high-performance networks, announced new features that bring the benefits of stacking without the associated problems. Its latest “SWitch Aggregation Group,” or SWAG, is a feature in EOS that allows multiple Arista switches to be managed with one IP address and from a single command-line or CLI interface. The second one is leaf-spine stack or LSS management, a feature in Arista’s CloudVision platform that organizes and collectively manages switches, regardless of their physical arrangement. These capabilities address issues such as conserving IP address pools, a growing concern for organizations managing large, distributed networks. “We are modernizing stacking by solving the problem to benefit customers so that they can get more IP address space efficiency localization on campus — not just with managing a problem they have, conserving IP addresses, but, more importantly, helping them with another major challenge, reducing their total cost of ownership on tools,” said Kumar Srikantan, vice president and general manager of Zero Trust campus networking at Arista. Organizations usually stick to familiar workflows and tools rather than adopt entirely new approaches, even if they are open to transitioning over time. To address this, Srikantan said Arista provides stacking to replicate the behavior of legacy systems, allowing customers to “lift and shift” their existing setups while gradually migrating to modern architectures. Arista’s approach extends the widely used leaf-spine architecture to campus networks. Customers already using Arista’s technology in data centers can more easily transition to campus networks. SWAG supports standard ethernet cabling, diverse topologies, and modern designs like leaf-spine. Customers can manage up to 40 switches in a cluster, eliminating the need for proprietary cables and making the system more adaptable to different network setups. Leaf-spine architectures have seen rapid adoption in data centers, as they require scaling up quickly and cost-effectively with minimal disruption. 
Campus networks, which are connecting more endpoints because of mobility and IoT, have the same requirements. “We have this feature called smart system upgrade, which operates on an individual switch level,” said Sriram Venkiteswaran, director of product management at Arista. “But now, with LSS management, customers can organize the switches however they want.”

SWAG and LSS management expand on multi-chassis link aggregation group (MLAG) capabilities to meet different customer needs. Though MLAG provides redundancy and load balancing across switches, it doesn’t offer single IP address management or a unified CLI, which some customers prefer for simplicity. SWAG fills this gap by grouping switches into a single cluster with one IP address and CLI, making it easier to manage distributed networks and transition from legacy stacking systems.

CloudVision LSS complements SWAG by decoupling the management layer from the physical network architecture. It allows switches to be logically grouped and managed across different physical setups, whether leaf-spine or traditional stacking. It also provides advanced management features like telemetry, provisioning, and artificial intelligence-driven insights.

“When we launch, we don’t expect customers to move away from MLAG to a SWAG model,” Venkiteswaran concluded. “I think SWAG will see more adoption with customers migrating away from an existing legacy competitive solution that uses stacking or similar setups. That’s where we would see most of our SWAG deployments.”

Though the campus network arena is crowded, Arista offers compatibility between legacy and modern systems, enabling customers to migrate gracefully from one generation to another. Its mission of providing a single operating system across all its products has made it a de facto standard in high-performance computing, cloud, and data centers, and it’s now targeting the campus in a bigger way. Together, SWAG and LSS simplify operations, while MLAG remains a strong option for those who don’t need single-IP management but want robust network performance. These technologies give customers the flexibility to choose what fits their needs.
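To make the IP-conservation point concrete, here’s a rough back-of-the-envelope sketch. It is not Arista’s implementation; the 40-switch cluster size comes from the announcement, but the campus size and the arithmetic are illustrative assumptions showing how many management addresses single-IP clusters save versus per-switch management.

```python
import math

def management_ips_needed(total_switches: int, switches_per_group: int = 1) -> int:
    """How many management IP addresses a campus needs if switches are
    managed individually (group size 1) or in single-IP clusters."""
    return math.ceil(total_switches / switches_per_group)

# Illustrative example: a campus with 400 access switches.
campus_switches = 400
per_switch = management_ips_needed(campus_switches)      # 400 addresses
clustered = management_ips_needed(campus_switches, 40)   # 10 addresses, 40-switch clusters

print(f"Per-switch management: {per_switch} IPs")
print(f"Single-IP clusters of 40: {clustered} IPs")
```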

The new feature uses AI to process tasks, provides insights, and makes changes using natural language interactions.

SIPPIO, a voice enablement provider for resellers, recently announced an AI-based business intelligence tool built into the SIPPIO Portal. It is the first use of AI I have seen to improve the process of producing and quoting orders. The tool processes tasks, provides insights, and makes changes using natural language interactions. The new offering, SIPPIO Beacon, is in preview with select partners and will be available early next year. In its announcement, the Annapolis, Maryland-based company called Beacon a significant leap in its mission to “make communications fast, easy, and flexible.” SIPPIO Beacon will use generative AI to “transform clicks into conversations.” When a partner needs an answer for a customer proposal, insights for upsell opportunities, or to make changes to customer accounts, SIPPIO says they can ask Beacon for the information. George Tarzi, director of development and innovation for SIPPIO, calls the product a breakthrough. “Instead of doing activities with clicks, remembering where to go, training on a new platform, etc., the goal with SIPPIO Beacon is to make all of those activities just as easy as having a conversation.”

Leveraging GenAI to Simplify Processes

SIPPIO’s business is to make unified communications more accessible and efficient while enhancing the value resellers and service providers offer their customers. Generative AI's power lies in its ability to simplify complex, manually intensive processes using natural language processing, analytics, and automation. SIPPIO Beacon is an excellent example of AI in action. It’s designed to deliver faster results and significantly reduce human errors. “Our goal with Beacon is for it to be your assistant in the portal for placing orders and things like that, but it can be your own personal business analyst and sales analyst,” said Tarzi. “It can go through, for example, your customers, look at the subscriptions and licenses they have, and you can ask it to recommend customers that could take advantage of other add-on services.”

Designed to Drive Efficiency and Improve Customer Service

SIPPIO created Beacon so service providers and resellers can use the new tool to find the appropriate information to respond to customer inquiries, and Beacon can help them make changes using the SIPPIO Portal, Microsoft Teams Admin Center, or Zoom. SIPPIO Beacon users can create quotes, activate services, or change configuration via voice prompts. The tool can deliver business insights based on customer data, such as suggesting which accounts to contact for upsell opportunities. It will also pull from public and private data to provide business analysis. “Because we’re giving it access not only to the internet but also to internal documents and data, it already has the understanding and the context of what you're asking it,” Tarzi said. “So if you're having trouble getting started on a project, you can simply tell Beacon what you want to do and the end goal and ask for recommendations on how to get started. It will go through not only information on the internet but also all the internal documentation we've given it access to and give you the most complete and best start on any telecommunication project.”
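SIPPIO hasn’t published Beacon’s internals, but the general pattern of turning a conversational request into a portal action looks something like the sketch below. Everything here is hypothetical: the intents, the handler functions, and the keyword-based parser stand in for whatever language model and portal APIs Beacon actually uses.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class PortalRequest:
    intent: str
    customer: str

# Hypothetical handlers standing in for real portal operations.
def create_quote(customer: str) -> str:
    return f"Draft quote created for {customer}"

def activate_service(customer: str) -> str:
    return f"Voice service activation queued for {customer}"

HANDLERS: Dict[str, Callable[[str], str]] = {
    "quote": create_quote,
    "activate": activate_service,
}

def parse_request(text: str) -> PortalRequest:
    """Crude keyword-based intent detection; a production assistant would
    use an LLM or NLU service instead."""
    lowered = text.lower()
    intent = "quote" if "quote" in lowered else "activate"
    customer = text.split("for")[-1].strip() if " for " in lowered else "unknown"
    return PortalRequest(intent=intent, customer=customer)

def handle(text: str) -> str:
    request = parse_request(text)
    return HANDLERS[request.intent](request.customer)

print(handle("Create a quote for Contoso Clinics"))
print(handle("Activate calling for Fabrikam Legal"))
```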

Company Promises Ongoing Innovation

Once the product goes live, SIPPIO is committed to continued iteration. “As 2025 progresses, we're going to be adding feature after feature to make SIPPIO Beacon the most robust AI on the market,” Tarzi said. “As it answers more questions and understands more context, it will learn based on those questions. If it gives you a bad response, tell it that wasn't the answer you sought, and it will remember. So the next time you ask, it will know how you want it to answer.”

Final Thoughts

I look at generative AI as the most transformative technology since the Internet. Early in the Internet cycle, Cisco CEO John Chambers repeatedly stated that the Internet would change how we work, live, learn and play, and I believe generative AI will do the same. To date, the use cases for the technology have revolved around copilots and assistants, but generative AI can be much more – an actual co-worker. SIPPIO Beacon is an excellent example of this, where generative AI is not just consolidating and summarizing information but taking action on a series of complex tasks, freeing people to focus on less routine, less replicable work.

The recent event demonstrated the company’s successful strategy in targeting industry verticals, smoothing the way for virtual agents and navigating the business challenge of CX.

Recently, Talkdesk held its first analyst summit in a few years. During the event, the analysts got a deep dive into the company’s strategy, opportunities, and roadmap. Overall, it was a positive experience; it solidified some of my thoughts about the company and made me reconsider others. Below are six of the key thoughts I had coming out of the event.

AI Is Everywhere in CX, So Vendors Need to Differentiate.

As expected, the analysts got a heavy dose of AI at the event. In fact, when CMO Neville Letzerich presented to the group, he showed a slide that stated Talkdesk wants to be perceived as the “Most innovative company in the space with AI at the heart of everything we do.” While that’s a bold statement, every CX vendor would try to make that claim, and when one looks at the list of “AI services” offered by the contact center vendors, most have the same core offerings – virtual agents, agent assist, AI-generated notes, CSAT scoring and the like. When asked about differentiation, most vendors will claim, “Ours is better,” which is likely true for specific use cases, but highlighting those is essential. The best way to show those differences is through customer stories, as the benefits of those capabilities can be quantified.

On a related note, Talkdesk has a few unique AI features. The first is Talkdesk Navigator, which uses AI to route calls to the best virtual or human agent. No training is required, and the setup and decision trees are automated. The second is Mood Insights, which uses AI to capture the “mood” of the customer by analyzing sentiment, tone, and several other factors. Differentiated features only stay unique for a short time, so it would be good to see Talkdesk highlight these use cases while they still set it apart.
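To illustrate what mood-aware routing means in practice, here’s a minimal sketch. This is not Talkdesk’s algorithm; the sentiment scores, intents and thresholds are assumptions that simply show how upstream AI signals could decide whether a contact lands with a virtual or human agent.

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    sentiment: float     # -1.0 (angry) to 1.0 (happy), from an upstream model
    intent: str          # e.g., "billing", "cancellation", "password_reset"
    prior_contacts: int  # how many times this customer has already reached out

def route(interaction: Interaction) -> str:
    """Toy routing policy: simple, calm requests go to a virtual agent;
    frustrated or repeat callers escalate to a human."""
    if interaction.sentiment < -0.3 or interaction.prior_contacts >= 2:
        return "human_agent"
    if interaction.intent in {"password_reset", "billing"}:
        return "virtual_agent"
    return "human_agent"

print(route(Interaction(sentiment=0.4, intent="billing", prior_contacts=0)))   # virtual_agent
print(route(Interaction(sentiment=-0.6, intent="billing", prior_contacts=0)))  # human_agent
```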

The Vertical Industry Focus Enables Talkdesk to Punch Above Its Weight.

Talkdesk is a relatively small CCaaS provider in a crowded marketplace. Because it is privately held, it receives a different level of media attention than a publicly traded company. This could cause Talkdesk some go-to-market challenges, as larger vendors can drown it out. To combat this, the company is not trying to be all things to all people. Instead, it focuses on the following verticals: financial services, healthcare, retail, government, and transportation/hospitality. Former CMO Kathie Johnson initiated this go-to-market motion; Johnson came from Salesforce, which has long used industry specialization to differentiate itself in a crowded CRM market. During his presentation to analysts, Letzerich outlined how Talkdesk is executing on its vertical targeting: the company hires people from those industries to run the practice areas, partners with the leading vendors in those verticals, and attends trade shows to showcase innovative solutions.

Talkdesk Has Found New Ways to Deliver Its Product.

There wasn’t much “news” from the event, although new product announcements generally aren’t the focus of analyst events. The two press releases issued over the three days had the same theme: making it easier for Talkdesk to deliver its capabilities to a broader audience. Talkdesk Embedded is a set of low-code/no-code tools that enable companies, ISVs, and others to “embed” Talkdesk capabilities into applications. For example, a financial services company could embed Talkdesk Conversations into ServiceNow or Oracle. This initiative will help the company with its industry focus and make inserting Talkdesk capabilities into workflows easier. The other news item was Windstream Enterprise’s announcement that it will utilize the newly released Talkdesk Express as its contact center solution for SMBs. Talkdesk Express is a purpose-built CCaaS product for the small-to-midsize market that includes omnichannel communications, IVR, analytics, self-service tools, and more. Telcos, such as Windstream, are often the vendor of choice for SMBs’ communications needs, as the sale of the product can be tied to connectivity.

Talkdesk Is Betting Customers Will Eventually Prefer Virtual Agents.

The rise of agentic AI is creating virtual agent experiences that are virtually indistinguishable from humans. The only difference is that one never needs to wait on hold to talk to the virtual agent. At the analyst summit, Talkdesk showed a demo similar to what Webex did at WebexOne: a person could speak to a virtual agent, interrupt it, or change the flow of questions, and the virtual agent would not miss a beat. I’ve asked some contact center managers if they foresee a day when the virtual agent will be preferred over a human, and most have said no. However, the same was said in the early days of online banking and mobile check deposits. I recall many industry experts proclaiming that, because of trust, people would always prefer to have money matters handled by a person. We know this is not true, as online banking now dominates interactions that used to require a teller. Virtual agents are always available and will have faster access to more accurate information for most simple use cases. I predict that within two years, the industry will start to see a shift where not having virtual agents becomes a competitive disadvantage.

Nobody Owns CX End-to-End And This Is a Business Challenge.

The contact center industry has made a difficult pivot to CX. The problem is that so have sales and marketing vendors, providers of web analytics tools, and anyone else involved in the customer journey. In reality, CX encompasses everything from the first click on a website to product retirement, but each step along this journey requires a different set of vendors and tools. The challenge is that everyone from Talkdesk to Sprinklr to Adobe to Contentsquare is in the CX space, each with its own set of analytics and insights. The reality is that no one owns end-to-end CX, and that’s a problem for customers, as the insights provided from a sales and marketing perspective may be at odds with what the service data is showing. This was brought up as a point of conversation in one of the analyst roundtables. While every vendor can hold end-to-end CX as a vision for what it would like to deliver, no such solution exists today. Given the complexity of gathering data across the customer journey, one isn’t likely soon. Vendors in the CX ecosystem should provide clarity as to where they can deliver value.

AI Will Displace Agents But Give Rise to New Revenue Opportunities.

There are few tech markets on which Wall Street analysts are more bearish than CCaaS. The reason is that when experts are asked about the impact of AI, one of the top use cases cited is replacing agents. Typically, when I ask business leaders at the CCaaS companies whether AI will cannibalize agent seats, I get non-committal answers where the speakers try to explain how this might not happen. I’ll give Talkdesk CEO Tiago Paiva props for addressing my question head-on. He believes the number of agents will decline, but the growth in revenue from AI and digital tools should be greater than the revenue loss from the reduced number of agents. I believe the AI opportunity will significantly overtake the revenue loss from agent seat compression, as a great virtual agent experience will make people more inclined to reach out to brands. An interesting byproduct of this is that the CCaaS revenue model will shift from a per-seat model to a consumption-based model, bringing in factors like seasonality. For example, one would expect a retailer to see a spike in usage between Thanksgiving and Christmas, or an accounting firm to see a usage surge at tax time. The CCaaS vendors have not had to deal with these factors before, but they will now come into play.

Overall, it was a good event for Talkdesk. The company ended the event by calling itself, as every vendor does, the most innovative AI company in CX. It’s too early to call an AI leader in this industry, and it will be interesting to watch which vendors can deliver capabilities that companies can use and see a positive outcome with. It was a solid event, and there is more to come.

The hype around artificial intelligence is at an all-time high. Sometimes, in tech, the reality never matches the hype. With AI, though, I do believe it’s warranted.

I view the adoption of AI as similar to the adoption of internet technologies. We talked about the internet, hyped it, created economic reports, and then we stopped talking about it because it became embedded into everything we do. From an adoption perspective, we aren’t quite there yet with AI, but a recent report shows that many organizations already use AI, and it’s on the verge of widespread adoption across most business sectors.

C1, a global technology solution provider focused on “elevating connected human experiences,” published a report this month showing how quickly and pervasively generative AI is being used by businesses. The report, “The Era of AI-Powered Connected Human Experience is Underway,” examines how organizations use AI to improve automation, develop new products and services, create software and more.

C1 surveyed more than 500 decision-makers from several sectors, including education, finance, healthcare and manufacturing. The findings show the strategic role AI is already playing. Here are some highlights:

  • 100% of the organizations surveyed are creating new key performance indicators for monitoring AI applications in their businesses. The chief focus is on improving the quality of interactions.
  • 99% have stepped up their AI adoption, and 51% of the leaders surveyed said they are “significantly expediting” AI integration across their operations.
  • 80% of respondents said gen AI is “essential for enhancing employee collaboration and the quality of work.”
  • On the customer front, 76% of organizations feel AI will be “integral to elevating customer interaction quality and experience.”

Of the data points above, the first is the most notable. The creation of new KPIs will help companies understand how to measure processes in the AI era. For example, in contact centers, a legacy, tried-and-true KPI is “average handle time,” or AHT, which measures how long a customer call takes.

With virtual agents handling mundane tasks, AI-enabled contact centers see AHT increase as live agents take on more complex tasks. Early adopters of AI are turning to new metrics such as upsell opportunities or AI-generated CSAT. This is something all businesses will have to do, and the data shows it’s happening now.
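A simple worked example shows why rising AHT isn’t a bad sign in an AI-enabled contact center. The call durations below are made up; the point is that once virtual agents absorb the short, simple calls, the average for live agents necessarily climbs even though nothing has gotten worse.

```python
# Made-up call durations, in minutes.
simple_calls = [3, 4, 3, 5, 4]      # handled by humans pre-AI, by virtual agents post-AI
complex_calls = [12, 15, 11, 14]    # always handled by humans

def aht(calls):
    """Average handle time across a set of calls."""
    return sum(calls) / len(calls)

before_ai = aht(simple_calls + complex_calls)   # humans handle everything
after_ai = aht(complex_calls)                   # virtual agents take the simple calls

print(f"Live-agent AHT before AI: {before_ai:.1f} minutes")
print(f"Live-agent AHT after AI:  {after_ai:.1f} minutes")
```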

Not a one-trick pony

One of the more interesting aspects of the research is the variety of ways organizations rely on gen AI for critical activities.

  • 85% are using gen AI to enhance automation.
  • 84% use it to co-develop products and services.
  • 84% say gen AI helps them develop code.
  • 68% are using it for virtual assistants or copilots.

Given how rapidly gen AI has taken hold in so many businesses, it’s interesting to note that when it comes to its effects on workers, 90% of the company leaders who participated in the survey say AI “enhances the human experience in the workplace.” Examples cited include adopting virtual assistants, automation solutions, and tools to support dynamic workflows.

One interesting dynamic of this is how AI changes work. If AI saves workers time and enables tasks to be completed sooner, should companies change people’s goals? For example, if salespeople are no longer required to put information in customer relationship management services because AI can automate that, should more meetings and closed business be expected? I’ve asked CEOs, heads of human resources and line-of-business managers this, and there is no consensus opinion, but it does appear workers’ goals will need to be adjusted.

Executives not oblivious to gen AI risks

As with any new technology, it can take time to determine with a high degree of certainty whether the innovation is benign and beneficial or potentially dangerous. That process may be moving faster on gen AI than with almost any other technology that has hit the market in decades.

Even as organizations rapidly innovate with gen AI and deploy it widely, the report acknowledges that business leaders “must recognize the potential risks and challenges of adopting generative AI.” The areas where the report states “organizations should exercise caution” include data privacy, intellectual property concerns, and cybersecurity risks. According to survey respondents, the “shortage of skilled professionals” in AI development, deployment, and maintenance is tied to those concerns.

Some 65% of respondents said their organization has “revised its cybersecurity protocols” regarding potential AI-related risks to address these concerns. C1 notes in the report that the caution among early adopters “underscores the importance of strategic AI planning and implementation.”

This is a case where security and compliance must be built into the design of the new AI-infused processes. Too often, companies adopt new technologies and only look at the security issues after they are in production. Given that the risks with AI are so high, and failures will be public, making security part of the rollout is prudent.

Early AI adoption tied to first-mover advantages

Survey respondents who reported “high to very high use” of gen AI come from a broad group of industries:

  • Utilities: 86%
  • Healthcare: 81%
  • Manufacturing: 75%
  • Finance and insurance: 68%
  • Hospitality: 68%
  • Education: 65%

What’s notable about the above industries is that the highest adopting ones are typically slow-moving, with processes filled with “human latency.” AI will have a massive payback in these verticals. I recently talked to a hospital administrator who told me every missed appointment costs the organization thousands of dollars because of staffing and equipment costs. AI is being used at that facility to automate patient reach-out and appointment confirmation, leading to a 90% reduction in missed appointments.

C1 cites the “potentially significant” implications of being a first mover in gen AI. “Organizations learning from their generative AI implementations have an advantage over those without generative AI-based capabilities in use. Those gaining experience are doing so at an accelerating rate while those cautiously approaching implementation of generative AI are at risk of falling too far behind.” We saw this play out in the Internet era as companies we had never heard of before disrupted the tried-and-true vendors. Expect to see the same with AI.

The AI era is here and will change every aspect of our lives. I’ve asked business and information technology leaders for recommendations on how their peers should get started with AI if they have not yet. Their advice is to jump in and start trying things. There’s an expression that some people make things happen, some people watch things happen, and the rest wonder what happened. You don’t want to be in that last category with AI.

One of the highlights of the most recent stop on Palo Alto Networks Inc.’s Ignite on Tour event series, this one at its Santa Clara headquarters, was founder and Chief Technology Officer Nir Zuk’s presentation of the cybersecurity company’s 2025 predictions for the security industry.

In the security world, Zuk has a broad following and is never afraid to express his opinions. He has called endpoint detection and response a dead technology and labeled one of his larger competitors “the place acquisitions go to die.” Because of his outspoken style, I was looking forward to hearing what he had to say as security is going through a major transition.

Given the complex cybersecurity landscape, organizations must address longstanding inefficiencies in their security operations centers, or SOCs, and prepare for emerging threats. According to Zuk, 2025 is set to be a pivotal year for transformation. He envisions a future where modernized strategies and advanced technologies redefine operations. Here are some of his key predictions for cybersecurity in the upcoming year:

Measuring and reducing detection and response times

Zuk predicts a significant shift will happen with the widespread adoption of metrics such as mean time to detect, or MTTD, and mean time to respond, or MTTR, as benchmarks for security performance. Many organizations today lack processes to measure these metrics, and the results are alarming. Detection often takes weeks, and response times are measured in days. Cybercriminals have enough time to exploit vulnerabilities, exfiltrate sensitive data, disrupt operations and launch further attacks.

“Adversaries are much more automated nowadays, and it’s effortless for them to try thousands of times to break into your infrastructure in different ways,” Zuk said. “All they need to do is succeed once. If you miss them, they’re in, and the odds are suddenly against you. 2025 will be the year where you will measure MTTD and MTTR.”

This is an interesting prediction, as the security industry has never had any metrics to measure effectiveness. However, metrics are needed to help focus investments. Companies are spending record amounts on cybersecurity, and breaches continue to happen at an accelerated rate. Without metrics, it’s hard to understand where to focus continued investment, so Zuk’s concept has merit. Companies should track MTTD and MTTR.

If these metrics become a standard measure of cybersecurity effectiveness, the SOCs will be able to focus on reducing inefficiencies responsible for slow detection and response times. In addition to measuring the metrics, organizations must invest in tools, processes and strategies to improve them.
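For teams that don’t yet track these numbers, the definitions are straightforward. The sketch below computes MTTD and MTTR from incident timestamps; the two incident records are hypothetical, and a real program would pull this data from its ticketing or SOC platform.

```python
from datetime import datetime
from statistics import mean

# Hypothetical incident records: when malicious activity began, when it was
# detected, and when it was contained/resolved.
incidents = [
    {"start": datetime(2024, 11, 1, 2, 0),   "detected": datetime(2024, 11, 8, 9, 0),
     "resolved": datetime(2024, 11, 10, 17, 0)},
    {"start": datetime(2024, 11, 15, 4, 30), "detected": datetime(2024, 11, 20, 11, 0),
     "resolved": datetime(2024, 11, 21, 8, 0)},
]

def hours(delta):
    return delta.total_seconds() / 3600

mttd = mean(hours(i["detected"] - i["start"]) for i in incidents)     # mean time to detect
mttr = mean(hours(i["resolved"] - i["detected"]) for i in incidents)  # mean time to respond

print(f"MTTD: {mttd:.1f} hours")
print(f"MTTR: {mttr:.1f} hours")
```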

The rise of AI-driven SOC architectures

Zuk’s second prediction is that SOCs will need a complete overhaul to lower MTTD and MTTR to acceptable levels — ideally, minutes. The traditional approach to cybersecurity operations, where human analysts are central to detecting and responding to threats using various tools, will no longer be sufficient. Instead, future SOCs will rely on artificial intelligence to handle routine detection and response, with people stepping in for more complex cases.

To make this transition, organizations will need to phase out legacy tools such as security information and event management, or SIEM, endpoint detection and response, or EDR, and security orchestration, automation and response, or SOAR. However, the transition won’t happen overnight. Though 2025 may not see the full implementation of AI-driven SOCs, organizations will adopt these technologies to modernize their cybersecurity operations.

“What’s required is a complete re-architecture,” Zuk said. “We need to move from a SOC where everything is centered around the analysts, and the analyst is being assisted by technology to a SOC that’s being run by machine learning or AI-assisted by humans.”

I’ve long been a critic of SIEM and felt that technology had run its course. The concept of having a single dashboard to collect alerts and help security professionals find vulnerabilities is reasonable, but the reality is that SIEMs push too much data with too many false positives to be useful. Instead, businesses should be looking to AI-driven security tools, such as Palo Alto Networks’ Cortex XSIAM, that can automate the heavy lifting and let security teams focus on remediation.
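The shift Zuk describes boils down to a triage loop where the machine makes the first decision and humans only see what it cannot resolve. The sketch below is not Cortex XSIAM or any vendor’s product; the risk scores, thresholds and playbook names are assumptions used to illustrate the machine-first, human-assisted model.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Alert:
    source: str
    score: float                  # 0.0-1.0 risk score from an upstream ML model
    auto_playbook: Optional[str]  # remediation the platform can run on its own

def triage(alert: Alert) -> str:
    """Machine-first triage: low-risk noise is closed automatically, known
    patterns are auto-remediated, and only ambiguous or high-impact alerts
    reach a human analyst."""
    if alert.score < 0.2:
        return "auto-close"
    if alert.auto_playbook and alert.score < 0.8:
        return f"auto-remediate via {alert.auto_playbook}"
    return "escalate to analyst"

print(triage(Alert("endpoint", 0.1, None)))               # auto-close
print(triage(Alert("email", 0.5, "quarantine-message")))  # auto-remediate
print(triage(Alert("identity", 0.9, "disable-account")))  # escalate to analyst
```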

Consolidating data with unified data lakes

The third prediction is that organizations will move toward a single, consolidated data lake as the backbone for cybersecurity operations. Data is often siloed across multiple systems, creating inefficiencies and increasing costs. A unified data lake will collect information from across the infrastructure — networks, endpoints, cloud services, applications and more — providing a comprehensive dataset for AI-driven SOCs to analyze.

This will have implications beyond SOCs. The same data lake can be leveraged by other cybersecurity functions, such as domain name system security, internet of things security and even cloud security. Furthermore, managing one large dataset is far more cost-effective than maintaining multiple systems requiring separate storage and processing resources. Organizations can reduce redundancies and energy consumption by ingesting, processing and analyzing data in a single instance.

“These are all the good reasons why cybersecurity and all the different cybersecurity functions that need a good amount of data will be migrating towards using a single data lake — starting with SOC and cloud security, and in the future, moving into more and more cybersecurity functions,” said Zuk.

In my opinion, the rise of a unified data lake is critical to the success of AI in security. In data science, there’s an expression, “Good data leads to good insights,” and that’s undoubtedly true. What’s not talked about is that silos of data lead to fragmented insights, so if a company is using dozens of security vendors, each with its own data set, that will significantly limit the effectiveness of AI. Platformization is the right strategy, as it leads to a unified data lake, which will result in better AI.
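Much of the practical work behind a unified data lake is normalization: mapping each tool’s fields into one shared schema so a single AI layer can reason over all of it. The sketch below is illustrative only; the field names and sources are assumptions, not any vendor’s schema.

```python
# Hypothetical raw events from three siloed tools.
RAW_EVENTS = [
    {"tool": "edr",   "host": "laptop-42",  "ts": "2025-01-10T09:14:00Z", "action": "process_block"},
    {"tool": "dns",   "client": "10.1.4.7", "ts": "2025-01-10T09:14:03Z", "query": "bad.example.net"},
    {"tool": "cloud", "account": "prod",    "ts": "2025-01-10T09:15:10Z", "event": "role_escalation"},
]

def normalize(event: dict) -> dict:
    """Map tool-specific fields into the common record the data lake stores."""
    return {
        "timestamp": event["ts"],
        "source": event["tool"],
        "entity": event.get("host") or event.get("client") or event.get("account"),
        "detail": event.get("action") or event.get("query") or event.get("event"),
    }

lake = [normalize(e) for e in RAW_EVENTS]
for record in lake:
    print(record)
```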

Preparing for the quantum computing threat

As a bonus, Zuk made an additional prediction about quantum computers. He said that though they are not expected to pose an immediate threat to encryption in 2025, organizations must consider long-term risks. Cybercriminals could potentially record encrypted data today and decrypt it years later using advanced quantum technology. This raises the need for forward-looking strategies, including the adoption of post-quantum encryption.

“If your organization cares about it, then maybe it’s time to start doing the math of when to deploy post-quantum encryption, which means it resists the current known attacks against cryptography,” he said. “So it would be best if you started thinking: when is the right time to deploy it such that your data will remain secret in the future.”

The timeline for adopting post-quantum encryption will vary by organization. Those handling highly sensitive data may choose to act immediately, while others may delay until quantum computers become a clearer threat.

There’s still some uncertainty about the effectiveness of post-quantum algorithms. Although these algorithms are designed to withstand quantum attacks, there’s no guarantee that they will remain secure against emerging decryption methods. Organizations should evaluate risks and update their encryption strategies as new technologies emerge.
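One way to “do the math” Zuk suggests is the widely cited Mosca inequality: if the years your data must stay secret plus the years a migration will take exceed the years until a cryptographically relevant quantum computer exists, the exposure has already begun. A minimal sketch, with all three horizons as assumptions to be replaced by an organization’s own estimates:

```python
def quantum_exposure(shelf_life_years: float,
                     migration_years: float,
                     years_until_quantum_threat: float) -> float:
    """Mosca-style check: a positive result means data encrypted today could
    still need protection after quantum decryption becomes practical."""
    return (shelf_life_years + migration_years) - years_until_quantum_threat

# Purely illustrative horizons.
exposure = quantum_exposure(shelf_life_years=10, migration_years=5,
                            years_until_quantum_threat=12)

if exposure > 0:
    print(f"Exposed by ~{exposure:.0f} years: start post-quantum migration planning now.")
else:
    print("Within the safety margin, but revisit the estimates regularly.")
```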

Only time will tell if Palo Alto Networks’ predictions will come true, but I’ve talked to more than enough security professionals to say that security needs to evolve and modernize. As I said earlier, businesses continue to fall behind despite spending record amounts on security tools. Staying with the status quo has not worked and will not work. Every organization should strive for a security platform as the foundation for an AI-driven SOC with metrics to help guide the team.
