Featured Reports

Verizon Mobile Partners with Microsoft So Teams Can Energize the Mobile Workforce

December 2023 // For years, mobile employees have constituted a significant portion of the workforce. Since the start of the […]


“Private Cellular or Wi-Fi?” Isn’t an Either/Or Question: You Can Have Both

December 2023 // The world used to rely on wired connections. The phones we used back then plugged into the […]


Enterprises Have Big Plans for Wireless but Lack Unified Management

October 2023 // Siloed management, security and QoS lead to complexity and downtime. A converged multi-access wireless network is the […]


Check Out Our Newest Videos

Revolutionizing Sports Commentary with Generative AI Tools #golf #sports

2024 ZKast #136 with Craig Durr from the Collab Collective at WebexOne

Will we see Ultimate Fan Experience Immersion in Baseball? #sports #baseball

Recent ZK Research Blog


At Reverb24, Bandwidth shared several new developments aimed at improving enterprise communications.

Bandwidth recently held its first-ever customer and analyst event, Reverb24, at its headquarters in Raleigh, NC. Bandwidth is an interesting company because its platform powers much of the UCaaS/CCaaS industry. Most mainstream providers use its Communications Cloud, including Zoom, Webex, Five9, NICE, and others. About 50% of Bandwidth’s $700M+ revenue comes from this business, which historically has been both a blessing and a curse: when its partners were flying high during the pandemic, so was Bandwidth’s business. But last year (2023), the communications industry cooled off, driving a decline in Bandwidth’s Global Communications Platform (GCP) business. The solution was to double down on its bets in selling directly to enterprises and to the largest text messaging platforms.

Over the past few years, Bandwidth has diversified the business with communications platform as a service (CPaaS): messaging now accounts for 20% of revenue and is growing at 30%+, while direct-to-enterprise (5% of revenue) is growing at 25%. GCP will continue to be “steady as she goes” and help stabilize the company, but the enterprise and messaging businesses have become the growth engine. Here’s a roundup of the top five announcements from the event, which include solutions and services that deliver a more consistent global experience, simplified messaging management, and enhanced emergency services.

1. Universal Platform for Global Communication

Bandwidth launched a new Universal Platform, providing a consistent global experience for real-time communications. The platform is built on Bandwidth's global network, which recently underwent significant Internet protocol (IP) upgrades in Europe and the U.S. to improve overall reliability. Bandwidth has also expanded its infrastructure with two new data centers in Toronto and Vancouver. “By adding new features, capabilities, and global network services to our platform, we’ve made it so much easier to work with a single trusted platform to enter into many new markets,” said David Morken, Bandwidth’s co-founder, chairman, and CEO, during a news briefing. The platform adds features such as voice authentication and an expanded API suite, along with the enhancements listed below. The breadth of features is one reason why most UCaaS/CCaaS providers rely on Bandwidth. Think of it as the engine that powers cloud communications, enabling the vendors that build on top of it to focus on value-added features.

To further improve reliability, Bandwidth uses artificial intelligence (AI) and machine learning (ML) to monitor the network and detect issues. The technology helps reduce downtime and keeps the network running smoothly. It also ensures that communication services delivered via the Universal Platform are secure and meet standards like STIR/SHAKEN, which is designed to combat illegal robocalls and spoofed numbers by verifying call legitimacy.
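To make the STIR/SHAKEN piece concrete: a verified call carries a signed PASSporT token (a JWT in the SIP Identity header) whose “attest” claim records how strongly the originating carrier vouches for the caller’s right to the number. Below is a simplified, illustrative Python sketch of reading that claim; a production verifier must also validate the token’s signature against the signing carrier’s certificate, which is omitted here.

```python
# Simplified sketch: read the attestation level from a STIR/SHAKEN PASSporT
# (a JWT carried in the SIP Identity header). Signature validation against
# the signing carrier's certificate is deliberately omitted.
import base64
import json

def attestation_level(passport_jwt: str) -> str:
    """Return 'A' (full), 'B' (partial), or 'C' (gateway) attestation."""
    payload_b64 = passport_jwt.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    return claims.get("attest", "C")

# Demo with a hand-built token payload (header and signature are dummies)
demo_payload = base64.urlsafe_b64encode(
    json.dumps({"attest": "A", "orig": {"tn": "14045551234"}}).encode()
).rstrip(b"=").decode()
print(attestation_level(f"header.{demo_payload}.signature"))  # prints: A
```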

2. Registration Center for Campaign Management

Bandwidth introduced a Registration Center, simplifying the management of text messaging campaigns through a single interface. The initial beta release focuses on short code registration, providing users with an easy-to-use interface and webhook notifications to guide them through the registration process. The goal of the Registration Center is to streamline the experience by eliminating repetitive tasks. For example, once a user has submitted a short code brief, Bandwidth automatically applies the information to toll-free campaigns. Therefore, users don’t have to repeat the same details multiple times. This feature is intended to save time and reduce the hassle of managing messaging campaigns across different channels. (Editor’s note: A ‘short code brief’ is a document that details a mobile messaging program and is required for obtaining a short code.)
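To show how a developer might consume those webhook notifications, here is a minimal sketch of a receiver written in Python with Flask. The endpoint path and event fields are hypothetical placeholders, not Bandwidth’s actual webhook schema.

```python
# Hypothetical sketch of a Registration Center webhook receiver. The route
# and the event fields (campaignId, status, channel) are invented for
# illustration -- they are not Bandwidth's documented schema.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/webhooks/registration-status", methods=["POST"])
def registration_status():
    event = request.get_json(force=True)
    # e.g., {"campaignId": "abc123", "status": "APPROVED", "channel": "SHORT_CODE"}
    campaign, status = event.get("campaignId"), event.get("status")
    if status == "APPROVED":
        print(f"Campaign {campaign} approved for {event.get('channel')}")
    else:
        print(f"Campaign {campaign} moved to status {status}")
    return jsonify({"received": True}), 200

if __name__ == "__main__":
    app.run(port=8080)
```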

3. Rich Messaging Capabilities with Google

Bandwidth has registered as a rich business messaging (RBM) partner with Google. RBM allows companies to send rich communication services (RCS) messages to customers with enhanced features like high-quality images, videos, emojis, and interactive elements. While RCS has been around since 2007, adoption has been slow due to a lack of interoperability between Apple and Android devices. Things changed when Apple announced its support for RCS, prompting mobile carriers to embrace the technology as well. As a result, messaging providers, including Bandwidth, have been working to offer business-grade RCS. (Editor’s note: RCS is a global standard supported by cellular carriers, though it can be used across different types of devices.)

Bandwidth is optimistic about the potential of RCS, supporting it in all major markets. That said, as more companies accelerate their rich messaging strategies, WhatsApp (used by 2.5 billion people globally) has become a leading platform for adoption and has largely functioned as a replacement for SMS/MMS because RCS took so long to be adopted. Bandwidth is prioritizing WhatsApp as part of its rich messaging strategy going forward. “We’ve hit a critical inflection point in the messaging market and are bullish about the opportunity that RCS represents. 2025 is the year that our industry takes a major step forward into rich messaging. That’s why Bandwidth has registered as an RBM partner with Google, setting ourselves up to enable RCS across key markets, just like we do with SMS,” said Caitlin Long, director of product management for programmable services at Bandwidth.

RCS has been on the horizon for over a decade as the heir apparent to SMS. The failure of RCS to become mainstream isn’t due to technology, as it has far more features than SMS. The problem has been agreement among the mobile operators and, notably, Apple stymied efforts by creating a walled garden around iMessage. Apple’s support of RCS removes a huge hurdle and will bring interoperability not only between Apple and Android devices but also with all the messaging apps.

4. Number Reputation Management for Spam Calls

Bandwidth developed a Number Reputation Management service, ensuring that important calls are appropriately displayed and more likely to be answered. Number Reputation Management helps companies monitor and correct how their phone calls are labeled, particularly when mistakenly marked as “spam.” Bandwidth’s solution has a dashboard for insights and alerts, giving companies a complete view of their phone number reputations. Notifications are sent when a number is potentially flagged as spam or scam so companies can act quickly. Bandwidth also helps carriers remove any incorrect labels.

The solution has a five-part system. First, it registers outbound phone numbers with major U.S. mobile carriers to reduce the chances of calls being mislabeled. Next, it allows companies to monitor their number reputations across the industry, including with major mobile carriers and consumer apps. It also includes a display testing feature that checks how calls appear on different U.S. mobile carriers and operating systems. “Number Reputation Management is designed to be simple and easy to use. As a carrier, we have exclusive access to data about your calling patterns. And the cherry on top is threshold alerting, so you can ensure you have all the resources you need to protect your number reputation. These tools provide actionable insights to help your enterprise regain control over your number reputation and maintain high-quality communications,” said Lauren Brockman, senior director of product management at Bandwidth.

5. Alternate Location Routing for Emergency Services

Bandwidth is set to launch Alternate Location Routing (ALR) in early 2025. ALR is an emergency service designed for mobile users outside the U.S. It routes emergency calls to the appropriate public safety organization when a user’s location changes. All companies have to do is set up and update pre-validated emergency addresses as users move through the mobile network. ALR tackles the limitations of conventional emergency services tied to fixed locations like office phones by providing the flexibility needed for today’s mobile workforce. “ALR isn’t just another emergency solution—it’s a unique innovation that bridges the gap between traditional and next-gen global emergency capabilities. It solves what has traditionally been a very siloed, country-specific emergency services ecosystem by delivering a seamless, nomadic, compliant emergency solution,” said Morken.

ALR, accessible via the Bandwidth App, has built-in data center redundancies, which help prevent outages and service interruptions. One key feature of ALR is its emergency services application programming interface (API), which automates tasks like setting up devices (endpoints) and managing emergency addresses, as sketched below. Automation simplifies the process, so companies don’t have to deal with the complexity of managing emergency services across different regions.
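To illustrate the kind of two-step workflow such an emergency services API automates, here is a hedged Python sketch: validate an address, then attach it to a device. The base URL, endpoints, field names, and credentials are all placeholders invented for illustration; this is not Bandwidth’s actual API.

```python
# Illustrative sketch only: the URL, endpoints, fields, and auth below are
# placeholders, not Bandwidth's real emergency services API.
import requests

API_BASE = "https://api.example.com/v1"  # placeholder base URL
AUTH = ("account_id", "api_token")       # placeholder credentials

def update_emergency_address(endpoint_id: str, address: dict) -> str:
    """Pre-validate an address, then associate it with a device (endpoint)."""
    # Step 1: submit the address for pre-validation
    resp = requests.post(f"{API_BASE}/addresses/validate", json=address, auth=AUTH)
    resp.raise_for_status()
    address_id = resp.json()["addressId"]

    # Step 2: attach the validated address to the endpoint so emergency calls
    # route to the public safety organization serving the new location
    resp = requests.put(
        f"{API_BASE}/endpoints/{endpoint_id}/emergency-address",
        json={"addressId": address_id},
        auth=AUTH,
    )
    resp.raise_for_status()
    return address_id

# Example: a user has moved, so the company updates the device's address
update_emergency_address(
    "device-42", {"street": "1 Main St", "city": "London", "country": "GB"}
)
```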

Bandwidth’s Return to Office

One final note: this week, Amazon CEO Andy Jassy issued a mandate that workers would need to return to the office five days a week. Bandwidth had already made this move, and part of the visit to the campus was for investors, customers, and analysts to see what Bandwidth’s version of return to office looks like. Bandwidth CEO David Morken told me the company wanted to remove any possible barrier to having people return to the office and “make it awesome.” The interior of the headquarters was built with collaboration in mind. While there are the obvious cubes and offices for people to call home, there are also several open collaboration spaces, meeting rooms, work pods, phone booths, and more. Any way one wants to work can be accommodated.

What’s more interesting is the supporting infrastructure. The Bandwidth campus has a full-size gym, cycling tracks, jogging trails, a soccer pitch, frisbee golf, a basketball court, and a Montessori school for daycare. The company provides 90-minute lunch breaks so employees can work out, shower, and return to their desks. This type of campus is easier for Bandwidth to build in Raleigh, NC, than it would be for a company in Manhattan, Seattle, or Silicon Valley, as there is much more available land. But there is still a good lesson to be learned: if you want employees to be happy returning to the office – and Bandwidth employees seem to genuinely like it – make the workplace more than cubes and offices. I do believe having people in the office adds to culture, and 4-5 days a week in the office will (again) be the norm, but it needs to be done thoughtfully.

Just in time for the 2024 season, the National Football League and Amazon Web Services Inc. continue to push the boundaries of artificial intelligence and machine learning in football. Through a partnership that began in 2017, the NFL has used AWS technology to improve player performance analysis, game strategies and fan experiences.

Upcoming NFL games will showcase a new AI-powered tool, Tackle Probability, that analyzes and predicts the likelihood of a defender making a tackle in real time. The tool identifies the most dependable defenders and the hardest-to-catch ball carriers. It also provides data on key performance metrics, such as missed tackles and successful attempts, offering teams valuable insights for offensive and defensive strategies.

More specifically, Tackle Probability looks at 20 different factors, including the position and speed of each defender, every tenth of a second. Using these data points, an AI model trained on five years of past game data calculates the likelihood of a tackle happening at any given moment in a play. From this data, the model creates new stats, such as how often defenders attempt tackles without missing or how frequently running backs force missed tackles. This helps coaches see which players are the most reliable at tackling or avoiding tackles.
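To make those mechanics concrete, here is a toy Python sketch of the pipeline’s shape: per-frame features go into a trained model that emits a probability, and new stats are aggregated from those probabilities. The features, weights, and the 0.5 “attempt” threshold are invented stand-ins, not the NFL/AWS model.

```python
# Toy sketch of a tackle-probability pipeline; the feature set, weights, and
# thresholds are illustrative stand-ins, not the NFL/AWS implementation.
import math
from dataclasses import dataclass
from typing import List

@dataclass
class Frame:
    """One tracking sample; the NFL captures one every tenth of a second."""
    defender_speed: float     # yards per second
    defender_distance: float  # yards from defender to ball carrier
    # ...the real system considers ~20 factors per frame

def tackle_probability(frame: Frame) -> float:
    """Stand-in for a model trained on five years of game data."""
    # Real weights would be learned; this logistic-style toy score just shows
    # the shape of the computation: features in, probability out.
    score = 2.0 - 0.8 * frame.defender_distance + 0.3 * frame.defender_speed
    return 1.0 / (1.0 + math.exp(-score))

def missed_tackle_rate(frames: List[Frame], tackled: List[bool]) -> float:
    """Derived stat: share of high-probability moments that ended in a miss."""
    probs = [tackle_probability(f) for f in frames]
    attempts = [(p, t) for p, t in zip(probs, tackled) if p > 0.5]
    if not attempts:
        return 0.0
    return sum(1 for _, t in attempts if not t) / len(attempts)

# Example: three frames of one play where both genuine attempts were missed
frames = [Frame(6.0, 5.0), Frame(7.5, 2.0), Frame(8.0, 0.8)]
print(missed_tackle_rate(frames, tackled=[False, False, False]))  # 1.0
```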

Tackle Probability is a feature within Next Gen Stats, the NFL’s player and ball tracking platform, which relies heavily on AWS to process an enormous amount of data collected from games. The platform gathers over 500 million data points each season, providing the NFL with advanced statistics that improve the viewing experience and aid gameplay decisions. The data also informs rule changes such as the new Dynamic Kickoff, which aims to minimize high-speed collisions and injuries during kickoffs by adjusting player positioning and movement.

Digital Athlete is another tool developed using AWS to improve player safety. The tool simulates game and practice scenarios to help coaches and medical staff assess injury risks so they can develop prevention and recovery plans for each player. One could think of this as building a digital twin of players and then running them through various scenarios to better understand when and how injuries occur. Teams can use that data to avoid those situations and keep the players on the field longer.

In addition to improving player safety, AWS is working with NFL Media to implement Amazon Q Business, an AI assistant that answers business and production-related questions. It acts like an automated helpdesk for common technical or operational inquiries. NFL Media also introduced an Amazon Bedrock-powered research tool that allows production teams to gather insights and footage from specific plays in the NFL’s Next Gen Stats database using simple language prompts.
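For readers curious what a natural-language query against a Bedrock-backed tool might look like, here is a minimal sketch using the public boto3 bedrock-runtime API. The model ID, region, and prompt are illustrative; the NFL Media tool itself is internal, and its interface is not public.

```python
# Minimal sketch of a natural-language query to a model hosted on Amazon
# Bedrock. The model ID, region, and prompt are examples only; this is not
# the NFL Media tool, whose interface is not public.
import json

import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

prompt = "List plays from Week 3 where a running back forced two or more missed tackles."

response = client.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # example model
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 512,
        "messages": [{"role": "user", "content": prompt}],
    }),
)

# The response body is a stream; parse it and print the model's text answer
print(json.loads(response["body"].read())["content"][0]["text"])
```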

The NFL believes these developments will boost productivity by automating routine tasks and speeding up research. Rather than spending time on repetitive activities, teams can focus on creating high-quality content across NFL Media’s properties, such as the NFL Network, NFL Films, NFL.com, NFL+, the NFL app and social media channels.

Many innovations, including Tackle Probability, are inspired by submissions from the AWS Big Data Bowl, created five years ago to develop new ways of using the NFL’s Next Gen Stats. The program creates an open platform for engineers, data scientists, students, and others without sports experience to get involved in football analytics.

The competition has grown fourfold since its inception in 2019, with more than 230 submissions from 400 participants last year and entrants from more than 75 countries in the half-decade that it has run. By democratizing access to data, the NFL and AWS can accelerate creativity and innovation. Since the inception of the Big Data Bowl, more than 50 participants have landed jobs in professional sports analytics, with 30+ hired by NFL teams or player-tracking vendors.

Sports leagues have a wealth of data, much of which has gone untapped for decades. One of the challenges is finding experts who understand data and the respective sport to derive value. Generative AI enables anyone, with or without a data sciences or sports background, to find new ways of leveraging that data to deliver operational value and create new fan experiences.

Workplace strategists see video calling as a way to keep employees engaged and healthy during the pandemic, and usage is skyrocketing. Will this forever tip the scales on video use?

From one vertical to the next and across the globe, corporate decision makers are huddling up over how best to protect their people from the coronavirus disease, COVID-19. Many have curtailed or restricted travel, and others have broadened the scope of their work-at-home programs as fear over the pandemic spreads. Use of video meeting services, which had already been on an upward trend at businesses looking to create more collaborative and engaged workforces, has skyrocketed. In a blog post earlier this week, for example, Sri Srinivasan, SVP and GM of Cisco's Team Collaboration Group, reported on the Webex usage spike the company has seen since the outbreak began. He shared a number of stats, including that free Webex sign-ups in impacted countries are up sevenfold over pre-outbreak rates. Other cloud-based videoconferencing services providers, such as Zoom and Highfive, have seen similar jumps, and many are taking proactive steps aimed at helping businesses ratchet up their use of video meetings.

We've Seen This Before

An increase in video usage due to a global event is certainly nothing new. I recall, for example, that the 9/11 attacks created a surge in the stock prices of the video vendors of the day, Polycom and Tandberg, as videoconferencing became a go-to alternative to in-person meetings. But is it different this time? Will the increased use of video stick around even after the health threat dissipates? You might be skeptical, but I do think it's different this time... not because people will permanently stop traveling, but because the technology is so much better. In 2001, if a business wanted to use videoconferencing, it had to buy tens of thousands of dollars' worth of hardware. In many cases, this limited implementation to executive-level conference rooms. On top of the expense, the user interfaces were too complicated for the average executive, so once travel bans lifted, many went back to traveling or having audio-only conference calls. Moreover, businesses today are much more in tune with the idea of working from home or other remote locations and better appreciate the need to create positive employee experiences. Video meetings have become part of the modern corporate mindset. But travel is a business essential that's not going to disappear, and once the hysteria around COVID-19 calms, companies will ease the restrictions and people will be hopping on planes and getting back to regular travel in short order. Where I think we'll see a lasting impact is on audio-only conferencing, with video calls sticking as a replacement.

Conversion Trends

As all the video vendors are reporting, the pause in travel has caused a surge in the use of video services, which means more and more people are seeing not only how easy these are to use but also that the quality is great. This could very well change the minds of anybody who ever had to deal with the complicated and clunky video systems of old... and carried a bias against the use of video technology ever since. This coronavirus gives workplace technology planners another shot at converting everybody to the video cause. As we're seeing in response to the coronavirus, video usage is about so much more than saving a few shekels on plane tickets or reducing exposure to potential health issues. The reason to use video in the first place, and the reason it will stick this time, is that video calls are superior to audio calls for employee engagement, and the systems are much easier to use now than in the past. With the mandate to improve employee experience, video is really the only option. In fact, your employees may even find video meetings much easier to launch than audio conferences: a screen tap or mouse click vs. dialing a number and inputting long PIN codes and conference IDs.

This is the message I heard from an SVP at a San Francisco-based PR agency who told me her firm has seen a significant boost in the number of video calls over the past couple of years. She attributed the uptrend to video's ability to drive engagement when in-person meetings aren't feasible. At a previous firm, which "met" with clients via audioconferencing, meaningful engagement between participants was fairly low, she said. People on the other end of a call would be on e-mail or tuning out for other reasons. Now with video-first solutions, everyone is actively engaged; a video meeting is almost as good as being there.

Another trend for video is that meeting participants are now more relaxed over video calls. When I asked the PR SVP about this, she told me that when her firm first started using video, people didn't want to join via video if they weren't dressed in office attire. Now people attend video calls wherever they are, often dressed casually, as work and life blend. "People are more relaxed about being 'seen,' and they realize they communicate more effectively with video on," she told me. And age doesn't matter. Millennials may be the first employee group to think video-first, but workers of all ages tend to use video once they've tried today's technology.

To answer my original question: Yes, it is different this time, but not because the reaction to the coronavirus brings something unique to the table. It's different because the solutions are so easy to use, the quality so high, and employee engagement so important that almost all of the historical reasons not to use video are now gone.

As IT complexity continues to rise, businesses are facing an increasingly challenging cybersecurity environment. Ransomware attacks have increased nearly 18 percent over the past year, according to a new report released by Zscaler’s security research arm, ThreatLabz. This surge in activity has significantly disrupted business operations, causing prolonged downtime, data loss, and costly recovery efforts. Here’s what you need to know to keep your business safe and secure.

Increasing Attacks and Payments

The 2024 Ransomware Report is based on data collected from Zscaler’s cloud security platform, Zero Trust Exchange (ZTE), which processes more than 500 trillion signals daily. ThreatLabz’s analysis of ransomware samples, using reverse engineering and malware automation, complements that data to provide a comprehensive view of ransomware trends. Brett Stone-Gross, Zscaler’s director of threat intelligence, said ransomware is one of the most significant threats companies face in the current cybersecurity environment. “We’re seeing increases in ransom demands, we’re seeing increases in attacks, and we’re also seeing increases in actual payment numbers,” he told ZK Research in a recent interview. One of the key findings is the growing focus on high-value targets by groups like Dark Angels. The group has been effective by seeking out a few multibillion-dollar companies and extracting large ransoms while avoiding attention from law enforcement, resulting in a record ransom payment of $75 million by a Fortune 50 company, nearly double the previous highest known amount. ThreatLabz believes the Dark Angels strategy may influence other ransomware groups in 2025, leading to more focused attacks on big companies.

New Industries Being Targeted

There is also a shift happening in terms of which industries are targeted. Manufacturing, healthcare, and technology sectors remain top targets due to the critical nature of their operations. The energy sector, in particular, saw a 500 percent increase in attacks in the last year. These sectors are attractive to cybercriminals because disruptions can have severe consequences, making companies more likely to pay ransom quickly. Another factor in these verticals is the rise of IT/OT integration. In my discussions with IT leaders, particularly in healthcare and manufacturing, organizations are connecting non-IT devices to their networks at an unprecedented rate. Most of these devices have no inherent security capabilities, leaving the door wide open for a threat actor to come in and hijack the company’s data, leading to a ransom demand. My research shows that the number of IoT devices will nearly double in the next five years, growing from 16 billion today to 30 billion.

The Most Active Ransomware Groups

Despite efforts by law enforcement, ransomware attacks continue to rise. The report found a 58 percent increase in companies exposed to data leak sites compared to the previous year. The U.S. accounted for nearly 50 percent of all attacks, followed by the UK, Germany, Canada, and France. However, these statistics don’t fully represent the total number of ransomware incidents, as many go unreported or are settled privately. “In terms of the number of attacks,” Stone-Gross said, “the U.S. increased more than 100 percent, so it’s a prime target. U.S. businesses are falling victim to these attacks more than any other country by far.” The most active ransomware groups between 2023 and 2024 were LockBit, BlackCat, and 8Base. ThreatLabz identified five ransomware groups with different approaches that will likely be dominant in 2024 and 2025:
  • Dark Angels: Targets a select few companies and steals large amounts of data before encrypting systems.
  • LockBit: Targets many victims through a large affiliate network using various ransomware variants.
  • BlackCat: Known for targeting multiple platforms until it shut down in March 2024, its evolving techniques will likely influence future operations.
  • Akira: This newer group has gained attention with its aggressive affiliate-driven model and a ransomware variant that’s hard to detect.
  • Black Basta: This group has adapted to disruptions in its access networks by using social engineering tactics.

Ransomware Forecast

Looking ahead, the report predicts more attacks on high-value targets, an increase in voice-based social engineering (“vishing”) attacks, and an increased use of generative artificial intelligence (AI) to create more convincing campaigns. AI-generated voices with local accents are expected to make these attacks more effective and harder to detect. Ransomware attacks that focus on data theft rather than just encryption are also expected to rise. This approach allows criminals to operate more quickly and effectively, using the threat of data leaks to pressure victims into paying ransom. The healthcare sector will likely remain a prime target due to its valuable data and reliance on outdated systems. “Previously, ransomware groups would steal a few hundred gigabytes to maybe a terabyte of data,” said Stone-Gross. “Now, we’re seeing tens of terabytes, up to a hundred terabytes of data. This is causing more pressure on companies to pay these large ransoms. We think that trend is going to continue.”

Combating Ransomware

Stone-Gross said companies can take preventive measures to strengthen their cybersecurity strategies and stay informed on emerging threats. For example, multifactor authentication (MFA) can add an extra layer of security, making it harder for unauthorized users to gain access. Meanwhile, keeping software up to date and applying the latest security patches as soon as they are available helps address existing weaknesses. “Make sure you have network monitoring, endpoint monitoring, and an end-to-end layered approach,” he said. “In addition to that, we recommend a zero trust architecture. Many companies that are falling victim to these attacks have flat networks. Someone authenticates with a VPN and has free range to access from there. With zero trust, you minimize your exposure. You can’t attack what you can’t see.” Additionally, by enforcing least-privileged access, organizations can ensure that users only have access to resources for their specific roles. AI-powered network monitoring tools can examine user behavior and adjust access privileges. Together, these tools can prevent cybercriminals from escalating their access and moving deeper into the network. There is a rule of thumb that security pros should keep in mind: complexity is the enemy of good security. Hybrid work, cloud computing, mobile phones, and AI have all made the environment exponentially more complex and impossible to secure using old-school methodologies. Ransomware isn’t going away, so security leaders need to ensure that company data is protected as well as possible with up-to-date security technologies.
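As a generic illustration of the least-privileged access principle Stone-Gross describes (not a sketch of any Zscaler product), the following Python fragment denies by default and grants each role only the operations it explicitly needs:

```python
# Generic deny-by-default, least-privilege check; roles and permissions are
# invented for illustration and do not represent any specific product.
from functools import wraps

ROLE_PERMISSIONS = {
    "finance-analyst": {"read:invoices"},
    "it-admin": {"read:invoices", "write:servers"},
}

class AccessDenied(Exception):
    pass

def require_permission(permission: str):
    def decorator(func):
        @wraps(func)
        def wrapper(user_role: str, *args, **kwargs):
            # Deny by default: only roles explicitly granted the permission pass
            if permission not in ROLE_PERMISSIONS.get(user_role, set()):
                raise AccessDenied(f"{user_role} lacks {permission}")
            return func(user_role, *args, **kwargs)
        return wrapper
    return decorator

@require_permission("write:servers")
def patch_server(user_role: str, server_id: str) -> None:
    print(f"{user_role} patched {server_id}")

patch_server("it-admin", "srv-101")           # allowed
# patch_server("finance-analyst", "srv-101")  # raises AccessDenied
```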

VMware by Broadcom's recent annual user event Explore in Las Vegas was like no other I can remember in recent history.

The most notable change was the crowd size. Since acquiring VMware Inc., Broadcom Inc. has made clear that its focus is on the top 5,000 customers, which limited the audience.

That had a ripple effect on the event as the lack of audience created an Expo Hall that was a virtual ghost town, particularly for the sponsors. The VMware booth had a moderate number of people, but many of the sponsor booths were empty. The same can be said for the learning labs, which are typically packed, given the primary audience is the information technology practitioner.

Despite the smaller audience, the show’s content and discussions gave me many insights. Here are my top five thoughts from VMware Explore:

New management brings new priorities

At Explore 2023, Broadcom Chief Executive Hock Tan stood up and told the audience that “not much would change.” That turned out to be Tan pandering to a nervous audience. Since then, everything has changed, with Broadcom having revamped the channel program, the way customers buy products, customer focus and much more. One of the most notable points is that the portfolio has been reduced from 8,000 SKUs to four. This is a double-edged sword for customers: the four SKUs make it easier to buy products, but they also force organizations to purchase things they may not need or where they prefer to use a third party, eliminating choice. During his keynote and Q&A with analysts, Tan had no qualms about blaming previous management for the portfolio’s previous state.

I understand why Tan was so critical of previous management, but I felt he wasn’t being fair. Under former CEO Pat Gelsinger, VMware was an engineering-led company that constantly acquired and built new technology. It didn’t always fit together, but the company strived to be on the leading edge. Under Broadcom, financial strength will be the driver, and that means VMware will shift to being a fast follower.

Customers are paying more, and they aren’t happy about it

Broadcom has been open about the fact that pricing would change. Perpetual licensing is gone in favor of subscription. What does this mean for the customer? No one knew for sure except that customers would likely pay more. Explore allowed me to talk to many customers of all sizes. Every customer I spoke to expressed that staying with VMware would cost them more, with an increase ranging from 30% to over 300%. Needless to say, no one was happy about it. Many customers, particularly enterprise-class, feel stuck as their infrastructure is built around VMware, and decoupling it is next to impossible. Many customers are looking for alternatives, and Nutanix Inc. is coming up most often. This is a calculated decision by Broadcom; it knows many customers will leave, but those that stay will use Broadcom more and, in turn, pay the company more.

It’s all about VCF

The big news from the show was VMware Cloud Foundation (VCF) 9.0, the company’s private cloud platform. Tan described the product as “AWS on-prem,” which is directionally correct but an exaggeration of the capabilities, since VCF’s handful of features doesn’t come close to the thousands that Amazon Web Services Inc. offers. That said, many customers are returning data and workloads to private clouds, and the VMware stack is excellent.

From a product perspective, if it’s not aligned with VCF, it will likely go by the wayside. At the event, I talked to a customer about the ESXi Embedded product, which many original equipment manufacturers use to build VMware functions into their products. It is used in many mission-critical environments, such as public safety and energy. The product has been shut down, leaving the OEM partners holding the bag.

“Forcing everything to VCF throws away a lot of innovation done at the edges with partners like us,” one partner told me. “Police cars, control systems and factory robots all used Embedded ESXi, and now those customers and partners are left scrambling.”

Broadcom is driven by its financial metrics, particularly its operating margin, and the decision to consolidate its products to a handful of SKUs will keep profits high. It will also cause an erosion of the customer base, but I’m sure Broadcom considers that collateral damage.

The private cloud opportunity is real and significant

The public-private cloud debate has raged on for the better part of the past five years. Currently, several macro issues are causing companies to look at private clouds. These include the war in Ukraine, which created concerns around data sovereignty; security issues; AI, which requires control over data; and the recent CrowdStrike outage, which put a lens on business resiliency.

I expect private cloud growth to outpace public cloud, although off a much smaller base. VMware’s turnkey approach with a fully engineered solution should enable it to grow the VCF installed base inside the top 5,000 customers, where the company is focused.

The broader opportunity is hybrid multicloud, including edge and public cloud. VMware Edge Compute is one of the leading solutions, and its hypervisor is portable across all hyperscalers. I would have liked to have seen Tan’s keynote talk track extend to hybrid multicloud instead of solely focusing on private cloud, but that opportunity is certainly there. The company has an excellent story, led by its software-defined wide-area network and secure access service edge portfolio, which I called out in this post.

The end of VMware Explore is coming

I suspect if VMware had not committed to Explore this year, there would not have been an event. If one looks at other Broadcom companies, none has a user event, and, in reality, the company doesn’t need one. As has been discussed, the focus is on the top 5,000 customers, which are the biggest of the big, and typically, vendors will go to those customers to talk roadmap, pricing or anything else relevant to them, making an event such as Explore less necessary.

Also, historically, Explore was used to roll out new products, and with the keynotes being so full of updates, it took a lot of work to keep up. The limited set of products means fewer announcements and less of a reason to have a marquee keynote. In fact, the one this year was only an hour, with no product-led discussion as in years past.

From my conversations with VMware employees, the company is committed to 2025 and maybe 2026, but after that, I’m expecting VMware Explore to follow CA World into being nothing but memories.

VMware Explore 2024 is in the books, and as one would have expected, it was markedly different than in years past. The rationalized portfolio and focus on simplicity are good for many customers but not for all. I’m sure Broadcom has done its due diligence and is expecting collateral damage from lost customers, but that’s to be expected when the focus shifts from products to profits.

Unlike previous VMware Explore/VMworld events, which are typically filled with product announcements, this year’s VMware by Broadcom user event, Explore 2024, was very light on news. The most notable announcement at the event in Las Vegas was the unveiling of VMware Cloud Foundation 9 and the company’s mission to be the leader in helping companies build out private clouds.

One of the product announcements that flew under the radar, however, was the updates to the VMware VeloCloud portfolio, the company’s software-defined edge solution, which includes software-defined wide-area network, secure access service edge, security service edge and edge computing. Although the connection between private cloud and SASE may not be obvious, they’re both critical components of the broader vision of hybrid multicloud.

Since the birth of the cloud, “public” and “private” clouds have been positioned as an either/or. The fact is, customers want both, and they also want edge computing. The ideal deployment model would be a single, logical cloud that spans public, private and edge systems. This is the vision of hybrid multicloud.

With hybrid multicloud, the network plays a crucial role in its success, providing connectivity among edges, private and public clouds. SASE is the right WAN architecture as it can optimize traffic between locations, even when the internet is used for transport. Also, SASE brings security into the fold, which becomes more important as data is scattered across the environment.

VMware VeloCloud launched several significant enhancements to its software-defined edge portfolio at the event. VMware has upgraded its VeloCloud Edge appliances to support a mix of broadband, fixed wireless access and satellite. The enhancements in VeloCloud Edge 710 and the new 720 and 740 models will ensure that these devices provide constant and reliable connectivity. This will allow organizations to improve the reliability and speed of their voice, video and application data at the edge.

Also, VMware has integrated VeloCloud SD-WAN points of presence with Symantec’s PoPs. Organizations can use this combined solution to manage their network and security needs in one place. The benefits include faster and more reliable connections, improved security and the ability to reach cloud services worldwide. This update builds on an earlier release of VeloCloud SASE, which combines network and security functions into a single, cloud-based service.

Historically, VMware has tapped partners to provide the SSE for SASE, but with the Broadcom acquisition, the company has been able to tightly couple VeloCloud SD-WAN with Symantec for a true, single-vendor SASE solution. At the event, I asked Sam Ragosti, Director of Product Marketing for VMware, whether VMware would continue supporting SSE partners, and he told me, “Our strategy is to give customers a choice. We can support those who want a single-vendor solution. We also recognize that some customers may already have a preferred SSE vendor and for those, we will continue to partner if that’s required.”

Additionally, VMware has updated its Edge Compute Stack to simplify the deployment and management of edge artificial intelligence workloads. One key feature is zero-touch orchestration, which automates deployment and application lifecycle management across multiple sites. Organizations can deploy and update systems with minimal human intervention instead of manually configuring each device or application in different locations. This is particularly beneficial for companies with limited information technology resources.

Another feature is the pull-based architecture, where edge devices initiate communication with the central management system only when necessary, such as to check for updates or confirm configurations. This reduces the constant load on the central management system, so it can handle a larger number of devices and sites without overburdening the IT infrastructure.
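A conceptual Python sketch of that pull model is below. The check-in URL, payload, and interval are invented for illustration; this is not VeloCloud’s actual protocol.

```python
# Conceptual pull-based check-in loop: the edge device initiates contact, so
# the central manager never has to track or push to thousands of sites. All
# endpoint and payload details are placeholders, not the VeloCloud protocol.
import time

import requests

MANAGER_URL = "https://manager.example.com/api/checkin"  # placeholder
DEVICE_ID = "edge-site-017"
CHECK_INTERVAL = 300  # seconds between check-ins

def check_in() -> dict:
    resp = requests.post(MANAGER_URL, json={"deviceId": DEVICE_ID}, timeout=10)
    resp.raise_for_status()
    return resp.json()  # e.g., {"configVersion": 42, "pendingUpdate": true}

while True:
    try:
        instructions = check_in()
        if instructions.get("pendingUpdate"):
            print(f"{DEVICE_ID}: applying config v{instructions['configVersion']}")
            # ...download and apply the update here
    except requests.RequestException as err:
        # Manager unreachable? The device keeps running and simply retries
        # later -- a key resilience property of the pull model.
        print(f"check-in failed: {err}")
    time.sleep(CHECK_INTERVAL)
```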

The inclusion of edge computing in SASE is unique to VMware. At Explore, I talked to Sanjay Uppal, senior vice president and general manager of the Software Defined Edge Division at Broadcom, about why edge computing is part of his division, and he explained, “As more workloads move to the edge, compute, networking and security will need to come together to deliver great experiences. We have already seen strong demand in industries such as oil and gas and manufacturing, but we expect AI at the edge to further drive the convergence of computing with SASE.”

Lastly, the edge infrastructure and application monitoring feature tracks the performance and health of edge devices and applications. Organizations can assess their systems’ performance across locations using pre-built dashboards and monitoring tools. Together, these features simplify the management of edge computing environments, helping organizations maintain consistency even with dispersed and resource-limited operations.

Edge computing is set to play an important role in deploying AI applications, with global spending expected to reach $232 billion in 2024. VMware’s enhancements will allow organizations to operate more efficiently in a distributed environment, addressing the growing needs of both AI and non-AI workloads.

Changes to the sales organization and a headcount reduction are signs of a maturing company ready to build in proactive service and revenue management.

Recently, Five9 reported its second-quarter FY24 results. While the quarter was strong, the forward-looking guidance was light, which sent the stock tumbling over 25%. I asked the company about the light outlook, and a spokesperson stated, “We reduced our 2024 revenue guidance by 3.8%, primarily driven by macro headwinds.” While the company cited macro as an issue, its guidance contradicted Five9’s publicly traded peers, which all echoed a more consistent outlook. I do believe factors that could slow the growth of the economy played a part in the reduced guidance: global issues, talent shortages, AI uncertainty, and the election are causing customers to rethink IT investments. The other factor was sales execution, and the company took action to address this. That said, Five9 may see some looming weakness that the other CCaaS providers will not, but we won’t know that until next quarter.

Post results, the company has taken some action to right the ship, which should cause the stock to rebound over time. The first step was to promote Matt Tuckness, VP of Global Customer Success, to EVP of Sales and Customer Success. On the earnings call, leadership described the move as “promoting an accomplished 10-year Five9 veteran, giving us a single 100% dedicated sales leader.” Tuckness will be focused on $1M to $10M TCV deals, which makes sense given that Five9 has been concentrating on growing its enterprise base. On the earnings call, Scott Berg from Needham questioned the timing, commenting that appointing Tuckness to this role seemed like “a knee jerk reaction” to a single quarter and noting that bookings for the past five quarters had been strong. Dan Burkland, the Five9 president who had previously run sales as part of his job, commented, “This was absolutely not a knee-jerk reaction. It’s a situation where we had an EVP of sales several years ago. I’ve been stretched thin with different responsibilities across the company, and we wanted to ensure we had somebody every day who is 100% dedicated to sales execution.” Given that Five9 is trying to win more enterprise accounts with longer sales cycles and more complicated deals, putting Tuckness in that role is the right thing to do.

The bigger question was why they didn’t put a dedicated person in the EVP of sales role earlier. On a follow-up call with the company, I asked Burkland why he hadn’t made the change earlier, and he said that the team had previously executed very well, so there was no need to. To Burkland’s point, Five9 has been the model of revenue consistency, and this quarter will likely prove to be a hiccup, but the decision to have a resource laser-focused on day-to-day sales execution is the right one.

Five9 also announced it was laying off about 7% of the workforce, which equates to about 185 workers. This layoff was the first in the company’s history, which is a surprise given that Five9 has used acquisitions as a growth engine, most recently with its intent to acquire Acqueon, a leading real-time revenue execution platform. Typically, when a company makes an acquisition, it accrues excess people from overlapping areas first, then has to rationalize headcount later. The fact that Five9 has never had to do this is a testament to the strong growth the company has seen, as the demand to add people has outweighed any people brought on board from an acquisition.
When asked on the earnings call why it needed to reduce headcount, the company stated, “This change allows us to continue to focus on profitable growth and long-term business resilience. We remain focused on serving the needs of our global customers and partners while making strategic investments to continue innovating.” Layoffs are never viewed positively, as they affect people’s lives and livelihoods. However, layoffs are a part of doing business. Most UCaaS and CCaaS providers staffed up during the pandemic and then had to cut bloated headcounts when the post-pandemic market softened. Five9 did not go through that, but the market has changed, and, unfortunately, it was time to trim a bit of the staff.

Five9 said the acquisition of Acqueon will help accelerate the Five9 vision by merging expertise in inbound and outbound communications to deliver personalized, proactive customer experiences across marketing, sales, and service. In addition, Acqueon unlocks access to additional market opportunities for sales, proactive service, and revenue management. The structure of the Five9 teams has remained the same, and Acqueon will continue to operate as a business unit within Five9. The longer-term plan is to absorb the Acqueon brand under the larger Five9 brand.

Overall, Five9 had a more than solid quarter. Q2 was a record-breaking quarter, and the company exceeded the $1B ARR run rate for the first time. Total subscription revenue grew by 17%, and the company has a strong balance sheet with over $1B in cash. All of the activity over the past year indicates a Five9 maturing as it grows into a bigger company. Late last year, Niki Hall joined Five9 as its CMO and revamped marketing. Over the past few quarters, the company has appointed regional sales leads, and now there’s a dedicated sales leader. The headcount reduction adjusts Five9’s cost structure to where the business is. This activity should enable Five9 to return to putting up the “beat and raises” that we have all been accustomed to.

A new report from Cognigy looks at how AI agents could address modern customer service issues while improving the efficiency and effectiveness of customer experience.

To AI or not to AI? That’s the question for many contact center leaders today. AI agents are a double-edged sword: deployed with the proper use cases, businesses can benefit significantly from improved brand loyalty, increased sales, and lower operational expenses. Done incorrectly, AI will create a negative experience and drive customers away. The key is understanding the capabilities, where to deploy, and where not to. AI agents should be viewed as a critical technology that can significantly improve customer experience and operational efficiency. Most technology deployments strive to accomplish one of those; AI agents can do both. Businesses should adopt AI as a long-term investment, starting with targeted use cases to achieve quick wins and scaling up as they realize the benefits.

Business leaders, particularly CX professionals, must treat AI as a core component of their customer service capabilities to achieve and sustain a leadership position. Market leadership has always ebbed and flowed, but it’s happening much faster today. Businesses need AI to adapt quickly to market trends. People can no longer analyze information fast enough to move at digital speeds, but machines can. The key is understanding where and how to apply AI agents to maximize the benefits and minimize risk.

A recent report from Cognigy, titled "AI Agents for Your Business," provides an in-depth analysis of how AI agents transform the customer service landscape. In the report, Cognigy, a company that provides AI-driven customer service solutions, focuses on how AI agents can address critical issues such as labor shortages, high turnover rates, and increasing customer demands while enhancing overall service efficiency and effectiveness.

AI Agent Capabilities

The report notes that AI agents combine two primary technologies: conversational AI and generative AI. Conversational AI enables agents to engage in dialogues that mimic human conversations, handling multiple turns and contexts across different channels. This technology integrates with backend systems, which allows AI agents to execute tasks effectively. Generative AI complements this by generating contextually appropriate responses and content on the fly, which enhances the interaction's relevance and personalization without the risks of generating inaccurate information. This was a critical point in the report. While generative AI has stolen most of the AI headlines since the launch of ChatGPT, it’s not a panacea for all contact center woes. The combination of generative AI and conversational AI drives the value, as it creates a more human response. Contact center leaders should not over-rotate to generative AI because of today's hype; they should focus on both areas.

The Benefits of AI Agents

The report emphasizes several key benefits, among them 24/7 operation, multilingual capability, automation of routine tasks, and rapid replies that are accurate and context-aware. Contact centers are often challenged to deliver services around the clock in multiple languages, particularly lightly used ones, and AI can meet that challenge.

Success Key: Start Small

For businesses considering AI agents, Cognigy suggests starting with specific use cases where AI can have an immediate impact, such as automating simple customer queries or providing real-time transactional support. This phased approach helps organizations see quick returns while building confidence in AI capabilities. Start simple is a point I’ve made to many contact center leaders. If one could plot all contact center interactions on a 2x2 grid, with the axes being the complexity of the interaction and its frequency, AI agents should be used in the high-frequency, low-complexity quadrant. Anything complicated should immediately flip to a human, as it likely requires the human touch. Most contact center interactions fall into the low-complexity, high-frequency category, so offloading things like password resets and account balances lets human agents focus on higher-value interactions.
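Here is a toy Python version of that 2x2 routing rule; the thresholds are arbitrary placeholders that a CX team would tune against its own interaction data.

```python
# Toy routing rule for the complexity/frequency 2x2 grid; thresholds are
# arbitrary placeholders a team would tune against real interaction data.
from dataclasses import dataclass

@dataclass
class Interaction:
    intent: str
    complexity: float  # 0.0 (trivial) to 1.0 (highly complex)
    frequency: float   # 0.0 (rare) to 1.0 (constant)

def route(interaction: Interaction) -> str:
    # High-frequency, low-complexity quadrant: ideal for an AI agent
    if interaction.complexity < 0.3 and interaction.frequency > 0.6:
        return "ai_agent"
    # Anything complicated flips straight to a human
    return "human_agent"

print(route(Interaction("password_reset", complexity=0.1, frequency=0.9)))   # ai_agent
print(route(Interaction("billing_dispute", complexity=0.8, frequency=0.2)))  # human_agent
```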

Understanding Potential Challenges

Although AI agents offer numerous benefits, implementation presents challenges, including integration with existing IT infrastructure, initial training requirements, and ongoing management of AI systems. The report suggests two ways to address these challenges: choose an AI solution that is easy to integrate and support, or engage AI vendors that offer comprehensive after-sales support and training. Both are important, but I suggest a third recommendation: get the organization's data house in order. In my conversations with contact center leaders, the top barrier is a lack of data readiness. CX-related data is scattered everywhere, and the company needs to ensure it can use most of it to improve contact center operations. Support and training are critical recommendations. AI success requires ongoing training, data integrations, model tweaking, and other tasks that most companies have never done. Ensure your vendor can guide you through initial deployment and will be there when the system needs tuning or market conditions change.

Labor Shortages and Agent Turnover

The Cognigy report highlights a significant challenge: the labor shortage in contact centers, compounded by high turnover, which research estimates is currently around 31%. Finding and retaining skilled customer service representatives is increasingly difficult, leading to operational inefficiencies. Cognigy says AI agents are positioned to solve these challenges because they automate high-volume, low-complexity tasks. They operate continuously across various channels and languages, significantly reducing the burden on human agents. This automation helps stabilize operations by mitigating the impact of labor shortages and high turnover rates. By handling routine tasks, AI agents enable human agents to focus on more complex interactions, improving efficiency and service quality. This point is prescient, as it dispels the myth of AI killing the contact center industry. Many investors believe that AI will replace so many contact center jobs that the overall TAM for contact center software will be a fraction of what it is today. In reality, contact centers have a massive job shortage, and AI agents can close that gap. AI agents will enable human agents to provide higher-value service, creating more valuable agents doing rewarding work. This should help lower the high churn rate contact centers see today.

Enhancing Customer Experience and Supporting Human Agents

AI agents can help enhance the overall customer experience. They can manage inquiries in multiple languages and provide real-time support, including knowledge lookups and sentiment analysis. This functionality enables AI agents to handle Tier 1 cases efficiently, including straightforward queries and routine transactions. As a result, freed from these repetitive tasks, human agents can concentrate on more nuanced and complex customer interactions that require empathy and advanced problem-solving skills. Companies should integrate AI into the customer service infrastructure as a core component, like CRM systems and case management tools. This integration ensures that AI agents contribute effectively to improving customer service and operational efficiency.

The communications artificial intelligence wars continue unabated as every unified communications-as-a-service and contact center-as-a-service vendor loads its products with new capabilities to one-up the competition.

Today RingCentral Inc. added several new capabilities to RingCX, its AI-powered contact center solution. The California-based contact center solutions provider has added about 300 features to RingCX in the past quarter, bringing the total to more than 1,300.

It has been under a year since RingCentral introduced the product. At the time, it offered basic contact center capabilities and was a good add-on to its UCaaS base, but it posed no threat to the incumbents. Nine months later, the product has been beefed up with all the core and several advanced capabilities, enabling RingCentral to win contact center-only deals. To date, RingCentral has added more than 350 new RingCX customers.

The new AI-centric capabilities are designed to enable RingCentral to expand its market penetration. They will be generally available in the U.S. in the coming months, and international availability is expected in early 2025. But in what it’s calling an early-access preview, RingCentral will let customers use the latest offerings so it can collect real-world user feedback. The company said it would finalize pricing and other details before general availability.

Why expand AI? It’s what customers want

The fundamental thesis of my research is that market share changes only happen when markets transition. This means jumping into a market late never works.

Before the launch of RingCX, the company offered a contact center as a resale of NICE, and many industry watchers had been expecting RingCentral to roll its own for years. Instead, the company waited for the right moment and used the AI inflection point to come to market.

RingCX isn’t an older product with AI bolted on – rather, it was built with AI as a foundational component. Now, RingCentral is expanding those capabilities to provide greater value to its customers. The new RingCX capabilities announced today include:

  • Native AI Agent Assist: As contact center agents engage with customers, AI will listen to the calls and give agents real-time contextual suggestions for responding effectively to questions, objections and other issues. The goal is to ensure more accurate, timely and personalized resolutions so agents can resolve calls more quickly and effectively. The company said AI Agent Assist can incorporate existing company content — web pages, user guides, troubleshooting manuals and the like — into real-time “native-AI” agent suggestions. The new capabilities were designed to make contact center technology easy for businesses to set up and use. RingCentral will continue to offer more sophisticated and customized enterprise capabilities through its relationship with Balto.
  • AI Supervisor Assist: Another upcoming RingCX innovation, which I expect will be well-received, will use that same real-time listening capability to provide immediate notification of any issues that arise during calls to the contact center. The company says supervisors will get one-click access to detailed transcripts and concise conversation summaries. The idea is for supervisors to be able to assess situations rapidly so they can take action that will benefit both customers and agents.
  • AI Coaching Insights: Though the instant information sent to supervisors will undoubtedly have value, the bigger picture of refining the skills and performance of contact center agents has even greater long-term potential. Coaching Insights builds on RingCentral’s existing RingSense AI Quality Management offering. The new AI-based solution will automatically analyze each agent’s customer interactions and produce “coaching” suggestions. The customized feedback will identify any knowledge gaps an agent needs to work on. The goal is to ensure agents have consistent, high-quality calls with every customer. “AI gives a single dashboard for each agent’s performance,” Andy Watson, senior product marketing manager for RingCentral’s CX portfolio, said during a product briefing. “It will tell us what they’re good at and opportunities for improvement.”
  • Bring-your-own IVA: Customers already using an intelligent virtual agent to deliver customer service over voice and digital channels can use open application programming interfaces to integrate their preferred IVA with RingCX. Watson said RingCX can connect to any IVA that supports RESTful APIs, removing entry barriers for many organizations; a rough sketch of that integration pattern follows this list.
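RingCentral didn't share code with this announcement, so the sketch below is only a rough illustration of what a REST-based bring-your-own-IVA bridge typically looks like: a handler receives an inbound customer message, forwards it to the existing IVA, and posts the reply back to the contact center session. Every endpoint, field and URL here is hypothetical; the real contract would come from RingCentral's published RingCX APIs.

```python
# Hypothetical bring-your-own-IVA bridge over REST.
# Endpoint paths, payload fields and URLs are illustrative only.
import requests

IVA_URL = "https://iva.example.com/v1/converse"       # your existing IVA
CC_URL = "https://contact-center.example.com/replies" # platform callback

def on_inbound_message(session_id: str, text: str) -> None:
    # 1. Forward the customer's utterance to the preferred IVA.
    iva_resp = requests.post(
        IVA_URL,
        json={"session": session_id, "utterance": text},
        timeout=5,
    )
    iva_resp.raise_for_status()
    answer = iva_resp.json().get("reply", "")

    # 2. Post the IVA's answer back into the contact center session.
    requests.post(
        f"{CC_URL}/{session_id}",
        json={"text": answer},
        timeout=5,
    ).raise_for_status()

if __name__ == "__main__":
    # Would run against real endpoints; the hosts above are placeholders.
    on_inbound_message("abc-123", "What is my account balance?")
```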

After being briefed, I contacted Joe Rittenhouse, co-chief executive of Converged Technology Pros, one of RingCentral’s top resellers, and asked for his opinion. “RingCX, albeit late, has been a game changer for us,” he told me. “We have been impressed by the pace of innovation, which has quickly closed the feature gap. The focus on ease of use has been a differentiator with customers.”

Despite years of innovation and a shift to the cloud, the average contact center interaction still leaves something to be desired. AI is perhaps the most significant change agent this industry has ever seen, enabling brands to deliver accurate, personalized information faster, whether or not a human agent is involved.

The new RingCX capabilities are the latest example of how pervasive AI is becoming in countless customer-facing business engagements. There is much more to come from all corners of the AI marketplace. I can’t wait to see what’s next, as the contact center industry stands to benefit from AI disproportionately.

Juniper Networks Inc. is focusing on helping customers move artificial intelligence in their networks from vision to reality, and today it laid out its plans at its AI-Native NOW customer event at the New York Stock Exchange.

Over the last several years, Juniper has developed strong AI networking products, and interest in AI for networking is extremely high, as engineers need help keeping up with modern networks’ complexity. A recent ZK Research/Cube Research study found that 93% of organizations consider the network to be more important to business operations than it was two years ago. In the same time frame, 80% of respondents also stated the network is more complex.

If everything stays the same, the growth in importance and complexity will eventually create an untenable situation. Enter AI as the solution. However, my rule for information technology projects is that the solution should never be more complicated than the problem, and deploying AI in the network is very complex today.

Juniper aims to simplify the process of deploying AI with what it’s calling a “Blueprint for AI-Native Acceleration,” which addresses the challenges of each step of the AI-networking journey. This includes the following phases: Learn, Try, Buy, Deploy and Optimize. Here are the specifics of each:

Learn

Most network professionals will be new to AI, and nothing scares an engineer more than unfamiliar territory, which can lead to errors, downtime and risk. Juniper is rolling out targeted training courses for different stakeholders.

For business leaders, Juniper has developed a free course on how AIOps can be used strategically to optimize network operations, leading to a better-performing network at a lower cost. Directing education at business leaders can help drive change faster: engineers often resist change, which can hold a company back, so change needs to be driven from the top down.

For engineers, Juniper offers a range of hands-on classes and certifications that can help them kickstart their AI knowledge. Topics include AIOps, wireless, switching, routing, security, cloud and automation. These classes are free as part of Juniper’s blueprint initiative.

Try

Juniper is putting its money where its mouth is. In my conversations with Juniper executives, all the way up to Chief Executive Rami Rahim, the consistent message was confidence that if Juniper can get customers to try the product, they'll become customers. For a limited time, Juniper will offer qualified prospects the ability to kick the tires on a set of products for free or at a heavily discounted price.

This initiative is well-timed, as AI superiority could cause customers to switch from their incumbent vendor. In the previously mentioned survey, more than 90% of respondents said they would be willing to switch network vendors if another vendor's AI is better. A sample of the “try before you buy” products includes:

  • Wi-Fi: Free access points and 90-day trial of wireless assurance software.
  • Data center: Advanced license for Juniper Apstra data center assurance software for the price of a standard license. Also, customers have access to the industry’s first Ops4AI Lab, hosted by Juniper to validate performance and functionality of their AI models.
  • WAN routing: Free flexible service credits to apply towards migration services and training, free first year of support including Juniper Support Insights, plus free three-year trial of Juniper Paragon automation platform.
  • Security: Buy one year of security subscription, get two years free.
  • Juniper Support Insights: Free support offer for enhanced customer support.

Buy

Historically, there has been one way to buy network products: write a big check at the time of purchase, then pay maintenance annually. Increasingly, customers are looking for options that smooth out the lumpiness associated with upgrades, so Juniper is offering flexible purchasing options. These include:

  • Enterprise agreements: Custom and packaged Enterprise Agreement options to simplify managing many licenses with a staggered deployment.
  • Network-as-a-service: Juniper offers the ability to procure networks via a periodic payment plan or fixed-term subscription, via its managed network partners.

Deploy

Deploying infrastructure is never an easy task, as there are many things to consider. The company has created several “Juniper Validated Designs” that, as the name suggests, comprise validated, tested and proven guides to implementing Juniper technology. Each accounts for the specific platform, software version and other factors, giving customers confidence that when they deploy, the technology will work as advertised. Juniper also has its own deployment services to support customers as the products are installed.

Optimize

In many ways, the hard work starts once the technology is deployed. Environments are far from static, and there is always a large amount of tweaking and tuning to do. Juniper has AI-powered support that the company claims will resolve 80% of routing service inquiries autonomously.

It’s also introducing AI Care Services, which give customers access to personalized support, including ongoing validation and optimization as well as quarterly reviews.

As a proof point, Juniper customer Bryan Ward, lead network engineer at New Hampshire-based Dartmouth College, talked about his experience with Juniper. Higher education is notoriously light on IT staff relative to the population it must support, making automation a “must have.” Dartmouth deployed more than 2,000 access points across the campus in just two days, which is remarkable considering the breadth of area a school covers. The school’s network supports more than 30,000 Wi-Fi devices and 25,000 wired endpoints.

By Ward’s calculations, Dartmouth realized nine-times-faster deployment and a 10-fold reduction in IT ticket escalations. Students are among the most demanding users, and rock-solid Wi-Fi is mandatory.

As an analyst, my thesis has been that share change happens when markets transition, and AI is creating the biggest shift in networking since the early days of the internet. Since its Mist acquisition, Juniper has been in the AI game longer than most other vendors and has an excellent set of products. The Blueprint for AI-Native Acceleration is a good complement to the products, as it addresses the “how” as opposed to the “what.”

The big elephant in the Juniper room is obviously the pending acquisition by Hewlett Packard Enterprise Co. The Blueprint can easily translate to HPE’s products and could help the combined company deliver a unified solution and be prescriptive as to how the two product lines can come together to solve business problems.

There hasn’t been a tech tailwind as strong as artificial intelligence since the early days of the internet. Many companies are vying to be the kingpin in the AI battleground, with Nvidia Corp. taking the early lead.

The company has kept that position by taking a systems approach to AI. Key differentiators for Nvidia have been NVLink and NVSwitch, which enable better and faster connectivity between graphics processing units to accelerate inferencing.

Large language models (LLMs) continue to grow in size and complexity, so demand for efficient, high-performance computing systems has also grown. In a recent blog post, Nvidia examined the role of NVLink and NVSwitch technologies in enabling the scalability and performance required for LLM inference, particularly in multi-GPU environments.

After reading the post, I was intrigued, so I interviewed Nvidia’s Dave Salvator, director of accelerated computing products, Nick Comly, product manager for AI platform inference, and Taylor Allison, senior product marketing manager for networking for AI, to understand better how NVLink and NVSwitch can significantly speed up the inferencing process.

The NVLink and NVSwitch architecture

Salvator told me that the architecture of NVLink and NVSwitch is critical. “It’s already helping us today and will help us even more going forward, delivering generative AI inference to the market,” he said.

In reality, the points made are fundamental networking principles that have simply never been applied at the silicon layer before. For example, if we connected several computers with point-to-point links, performance would be terrible, but it would improve dramatically if we connected them through a switch.

“That’s a good way of thinking about it,” he told me. “I mean, point-to-point has a lot of limitations, as you correctly point out. The blog gets into this notion of talking about computing versus communication time. And the more communication becomes part of your performance equation, the more benefits you’ll ultimately see from NVSwitch and NVLink.”

The challenge of multi-GPU inference

In the blog, Nvidia notes that LLMs are computationally intense, often requiring the power of multiple GPUs to handle the workload efficiently. In a multi-GPU setup, the processing of each model layer is distributed across different GPUs.

However, after each GPU processes its portion, it must share the results with other GPUs before proceeding to the next layer. This step is crucial and demands extremely fast communication between GPUs to avoid bottlenecks that could slow down the entire inference process.
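Nvidia's post doesn't include code, but the per-layer exchange is easy to picture with a toy collective operation. The sketch below is my own illustration, not Nvidia's implementation: each rank computes a partial result for a layer, then an all-reduce combines the partials before the next layer can start. It runs on PyTorch's CPU-only "gloo" backend; a real deployment would use the NCCL backend over NVLink/NVSwitch.

```python
# Toy illustration of per-layer result sharing in multi-GPU inference:
# every rank computes a partial output, then all-reduce sums the partials
# before the next layer proceeds. Ranks stand in for GPUs; uses CPU/gloo.
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp

def worker(rank: int, world_size: int) -> None:
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29500"
    dist.init_process_group("gloo", rank=rank, world_size=world_size)

    x = torch.ones(4)                    # activations entering this layer
    partial = x * (rank + 1)             # this rank's shard of the layer math
    dist.all_reduce(partial, op=dist.ReduceOp.SUM)  # the communication step
    if rank == 0:
        print("combined layer output:", partial)

    dist.destroy_process_group()

if __name__ == "__main__":
    world = 4  # four processes standing in for four GPUs
    mp.spawn(worker, args=(world,), nprocs=world)
```

The faster that all-reduce completes, the less time GPUs sit idle between layers, which is exactly the gap NVLink and NVSwitch target.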

Traditional methods of GPU communication, such as point-to-point connections, face limitations as they distribute available bandwidth among multiple GPUs. As the number of GPUs in a system increases, these connections can become a bottleneck, leading to increased latency and reduced overall performance.
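To put rough numbers on that, here is a simplified model of my own (not Nvidia's math): with a fixed aggregate link budget per GPU, a point-to-point mesh divides that budget across peers, while a nonblocking switch does not. The 900 GB/s figure is Hopper's NVLink bandwidth; the sharing model is a deliberate simplification.

```python
# Simplified model: per-peer bandwidth in a point-to-point mesh versus
# a nonblocking switch, given a fixed aggregate link budget per GPU.
def per_peer_bandwidth(total_gb_s: float, num_gpus: int, switched: bool) -> float:
    if switched:
        return total_gb_s               # full bandwidth to any single peer
    return total_gb_s / (num_gpus - 1)  # mesh splits links across peers

for n in (2, 4, 8):
    mesh = per_peer_bandwidth(900, n, switched=False)
    sw = per_peer_bandwidth(900, n, switched=True)
    print(f"{n} GPUs: mesh {mesh:.0f} GB/s per peer, switched {sw:.0f} GB/s")
```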

Nvidia NVLink: solving GPU-to-GPU communication

NVLink is Nvidia’s solution to the challenges of GPU-to-GPU communication in large-scale models. In the Hopper platform generation, it offers a communication bandwidth of 900 gigabytes per second between GPUs, far surpassing the capabilities of traditional connections. NVLink ensures that data can be transferred quickly and efficiently between GPUs while minimizing latency and keeping the GPUs fully utilized. The Blackwell platform will increase the bandwidth to 1.8 terabytes per second, and the NVIDIA NVLink Switch Chip will enable 130 TB/s of GPU bandwidth in one 72-GPU NVLink domain (NVL72).
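For intuition on what those numbers mean in practice, here is a quick worked example; the payload size is hypothetical and chosen only for illustration:

```python
# Worked example: time to move a per-layer partial result between GPUs.
payload_gb = 0.5      # illustrative activation payload, in gigabytes
for name, bw_gb_s in (("Hopper NVLink", 900), ("Blackwell NVLink", 1800)):
    microseconds = payload_gb / bw_gb_s * 1e6
    print(f"{name}: {microseconds:.0f} microseconds per exchange")
```

At these speeds, the communication step shrinks from a bottleneck toward a rounding error for modest payloads, which is why Nvidia emphasizes keeping the GPUs fully utilized.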

Taylor Allison shared some further details about NVLink. “NVLink is a different technology from InfiniBand,” he told me. “We’re able to leverage some of the knowledge and best practices that we have from the InfiniBand side of the house with the design of this architecture — in particular, things like in-network computing that we’ve been doing for a long time in InfiniBand. We’ve been able to port those to NVLink, but they’re different.”

He quickly compared InfiniBand and Ethernet and then described how NVLink fits in. “InfiniBand, like Ethernet, is using a traditional switching/routing protocol — an OSI model you don’t have in NVLink,” he said. “NVLink is a compute fabric and uses different semantics.”

He told me that NVLink is a high-speed interconnect technology that enables a shared memory pool, whereas Ethernet and InfiniBand follow different paradigms. Nvidia designed NVLink’s architecture to scale with the number of GPUs, ensuring communication speed remains consistent even as GPUs are added to the system. This scalability is crucial for LLMs, where computational demands continuously increase.

NVSwitch: enabling nonblocking communication

To further enhance multi-GPU communication, Nvidia introduced NVSwitch, a network switch that lets all GPUs in a system communicate simultaneously at full NVLink bandwidth. Unlike point-to-point connections, where multiple GPUs must split bandwidth, NVSwitch ensures that each GPU can transfer data at maximum speed without interference from other GPUs.

“Blackwell has our fourth generation of NVSwitch,” Salvator said. “This is a technology we’ve been evolving. And this is not the first time we’ve done a switching chip on our platform. The first NVSwitch was in the Volta architecture.” He added that NVSwitch delivers benefits on both the inference and training sides.

Training and inferencing

“Training is where you invest in AI,” Salvator told me. “And when you go to inference and deploy, an organization starts seeing the return on that investment. And so if you can have performance benefits on both sides, the presence of the NVSwitch and the NVLink fabric is delivering value.”

NVSwitch’s nonblocking architecture enables faster data sharing between GPUs, critical for maintaining high throughput during model inference. This especially benefits models such as Llama 3.1 70B, which has substantial communication demands. Using NVSwitch in these scenarios can lead to up to 1.5 times greater throughput, enhancing the overall efficiency and performance of the system.

Case study: impact on Llama 3.1 70B model

The blog post examined NVLink and NVSwitch’s impact on inference with the Llama 3.1 70B model. In Nvidia’s test, systems equipped with NVSwitch outperformed those using traditional point-to-point connections, particularly when handling larger batch sizes.

According to Nvidia, NVSwitch reduced the time required for GPU-to-GPU communication and improved overall inference throughput. This improvement translates to faster response times in real-world applications, crucial for maintaining a seamless user experience in AI-driven products and services.

Looking ahead: the Blackwell architecture

Nvidia’s Blackwell architecture introduces the fifth generation of NVLink and new NVSwitch chips. These advancements double per-GPU bandwidth to 1,800 GB/s and improve the efficiency of GPU-to-GPU communication, enabling the processing of even larger and more complex models in real time. Only time will tell, though.

Some final thoughts

Nvidia’s NVLink and NVSwitch technologies are critical components in the ongoing development of LLMs. In thinking about these technologies and the rapid pace of development, there are three key points to keep in mind:

  • Enhanced GPU communication is on the way: Nvidia’s NVLink and NVSwitch will improve GPU-to-GPU data transfer and reduce latency in LLM inference.
  • Scalability for larger models is achievable: These technologies enable efficient scaling in multi-GPU systems while maintaining high performance even as model sizes increase.
  • Nvidia has Blackwell in the wings: The upcoming Blackwell architecture will introduce further advancements, boosting performance for even more complex AI models.

These developments are exciting, and it will be interesting to see how the industry and customers respond. Nvidia continues to push the AI envelope, and that has kept it in the lead, but the race is far from over.

Cisco Systems Inc. provided positive numbers in its fiscal fourth-quarter results Wednesday, and there’s a story behind those numbers.

The networking giant posted a modest revenue beat of $13.64 billion, about $100 million more than consensus estimates. Gross margin, boosted by the acquisition of Splunk Inc., came in at a whopping 67.5%, the highest for Cisco in 20 years. Product orders grew 14% year over year, or 6% excluding Splunk.

Looking ahead, first-quarter revenue guidance came in at $13.65 billion to $13.85 billion, in line with the expected $13.76 billion. The full-year fiscal 2025 number is expected to be $55 billion to $56.2 billion, with the midpoint slightly ahead of the $55.6 billion Wall Street was expecting. Investors bid up the stock almost 7% today as a result.

But there’s more to the story than the numbers. Here are my top five takeaways from the quarter:

The digestion period is coming to an end

Some previous quarters’ results disappointed investors, as growth had slowed. The company explained that after the pandemic, customers had ordered more products than they could implement, creating what Cisco described as “digestion” issues. Other infrastructure vendors echoed this sentiment, which was corroborated by many chief information officers I have spoken to.

On the earnings call, Chief Executive Chuck Robbins specifically mentioned this when discussing order growth: “We saw steady demand as we closed the year with total product order growth of 14% and growth of 6% excluding Splunk, indicating that the period of inventory digestion by our customers is now largely behind us, as we expected.”

Is the digestion period over? I’m not ready to call that yet, but I think it’s nearing its end. This is also a word of caution to infrastructure vendors: customers are currently buying artificial intelligence-related infrastructure ahead of their ability to use it. Vendors should ensure customers know how to deploy it, have best practices and can turn purchases into business outcomes, or we will see AI-related indigestion in six months.

Cisco innovation is driving sales

One myth about Cisco is that it only acquires and does not innovate. That’s far from true. Though Cisco has been an acquisition machine, many of its leading-edge products are homegrown.

In security, XDR and Secure Access were built in-house, and Robbins called out both as gaining traction. Hypershield and Hyperfabric are both on the horizon, and the company has big expectations for those products.

Though the networking business was down this quarter, orders have returned to growth, and Cisco still holds most of the market share. Most of its networking products run on the homegrown Silicon One network processor.

Acquisitions can also lead to homegrown innovation. In collaboration, for example, Webex is now loaded with AI features. Its background noise removal, which Webex does better than others, was built on technology from Babble Labs, acquired in 2020. Though one could argue that’s not Cisco innovation, technology, particularly from tuck-ins, stops being “acquired” and becomes homegrown after that many years.

Cisco’s innovation is something the company should highlight more regularly to all stakeholders, including investors.

Making the shift to a platform company

Roll the clock back 20-plus years; I was at a value-added reseller and was selling a bunch of “Cisco on Cisco.” Customers and partners inherently believed Cisco voice over internet protocol on a Cisco network had better quality and that Cisco security on a Cisco network was more secure.

Sometime in the last two decades, leadership changes, reorganizations and other factors led to many internal silos. Though Cisco may have had a strong “network,” “security” or “collaboration” story, it has been a long time since it has had a unified Cisco value proposition.

Today, having a platform strategy is crucial for success in AI, as so many moving parts need to work together. Also, in AI, data is a differentiator. At Cisco Live, Executive Vice President Jeetu Patel referred to data as the “new gold” for AI. With Splunk combined with its network telemetry, security intelligence, collaboration insights and observability, Cisco has arguably more AI-related data than any infrastructure vendor. The key is bringing the silos together to create a 1+1=3 scenario.

On the call, Cisco mentioned “several $100 million-plus transactions.” These included a global logistics company that will use several Cisco products, including switching, routing, Splunk, collaboration and services, to enable automation and new innovations such as AI-powered robots.

Another example is a North American airline that’s using Cisco switching, routing, wireless, security, collaboration and services to improve operational efficiency and enable future AI and machine learning applications.

During an analyst Q&A with Chief Financial Officer Scott Herren, I asked about the margin implications of these large deals. “The platform strategy will allow us to take advantage of better integration and a better experience for our customers because the products are tightly integrated,” he explained. “This isn’t about changing margins on a deal-by-deal basis but more about accelerating revenue growth, which gives us better margin leverage across the company.”

Channel partners have been waiting for this pivot. After the earnings call, I talked with Amrit Chaudhuri, C1’s chief growth officer. “C1 is one of Cisco’s leading partners, and we are excited about Cisco’s shift to a platform strategy,” he told me. “Our customers want unified offerings [networking, security, AI and collaboration], allowing us to deliver broader, better-performing solutions.”

On a related note, Cisco announced on the earnings call that Jeetu Patel is now the company’s chief product officer and will have security, collaboration and networking under him. This is an excellent move, and Patel is the right person for the job, as he has never been afraid to make bold moves to shake things up.

AI is on the cusp of being a tailwind for Cisco

Investors have been waiting to see if AI would act as a headwind or tailwind for Cisco. The company has alluded to pending deals with statements such as “line of sight” and references to pipeline. This was the first quarter with meaningful sales. On the call, Robbins stated, “We continue to capitalize on the multibillion-dollar AI infrastructure opportunity. We have now crossed $1 billion in AI orders with webscale customers, with three of the top hyperscalers deploying our Ethernet AI fabric, leveraging Cisco-validated designs. We expect an additional $1 billion of AI product orders in fiscal year ’25.”

Cisco was referring specifically to deals that involve infrastructure to support AI initiatives. In reality, the AI tailwind will be much more significant. Hypershield, XDR, networking and other capabilities are all being powered by AI. Cisco should be able to make the case that a refresh will deliver better outcomes, driving more AI-related sales.

Layoffs are about cost reduction and reallocation

The cost-reduction initiatives overshadowed the positive news from the quarter. The company announced a 7% reduction in force, which equates to about 4,000 employees across its global workforce. A few points here are worth examining. The first is that Cisco is a very active acquirer, and acquisitions create overlapping job functions when the new company is rolled in. That typically results in a headcount reduction at the start of the fiscal year, so I’m not sure why industry watchers are surprised.

In this case, there is an element of cost reduction but also reallocation of talent. On the call, Herren addressed this: “This is a continuation of what you have seen us do. At Investor Day, we talked about having already pivoted on the R&D front, little more than 50% of our R&D spend into those three areas, into AI, cloud and security. Obviously, networking continues to be incredibly important to us and we’ll continue to support that space as well. But it’s looking for efficiencies as we look across the company really in every way so that we can take those resources and allocate them into the fastest-growing spaces.”

For as long as I’ve followed Cisco, it has been a financially disciplined company, and it continues to do what it needs to do to remain a market leader. The company reported the highest operating margin in its history, and I would expect more of the same in the future.

Final thoughts

This was a good quarter for Cisco, but in many ways the company is still transforming into a software company. The stock trades like a hardware company’s, well below software multiples, despite subscriptions making up 51% of revenue. The company needs to grow its core networking business, which will help shine a light on the other aspects of its operations. Good start. More to come.
