
ZK Research Blog: Recent News

Last week I was at RSAC, an industry event, and this week I'm at IBM Think, a vendor event. The contrast got me thinking about the differences between the two types of conferences and how much they've changed over the years.

Flashback to 15 years ago: If you wanted to experience a blockbuster tech event, you were probably booking your ticket to something like Networld + Interop. Back then, tech expos such as Interop and SUPERCOMM were the “it” events, allowing vendor carnival barkers to yell over each other to grab your attention. Sure, you could catch glimpses of exciting innovations, but it felt more like tech industry speed-dating: all surface, no depth.

Fast forward to today, and the rise of dedicated tech vendor conferences has turned vendor events into specific tech-centric hubs where you can dive deep into select technologies without the distraction of brightly colored booth swag or gimmicky sales pitches.

There aren't as many industry events as there were in the past. The ones that remain, such as MWC, Enterprise Connect and the recently held RSAC, have become places to meet and build relationships rather than venues where practitioners learn. An interesting byproduct is that industry events seem to draw a much lower percentage of actual information technology buyers than in years past.

Juxtapose this with vendor events, which are geared more toward specific job functions, specifically the practitioner and increasingly C-level. Complementing that are ecosystem partners, resellers, investors and industry disruptors, all there to talk shop. Because of this, most attendees at vendor events are IT pros looking to plan out their purchases for the next 12 months.

It wasn’t until the pandemic forced these events into digital limbo — or placed them in suspended animation — that I realized just how valuable they are. After years of virtual meetings and webinars trying (and failing) to replicate the unique aspects of these gatherings, I can confidently say virtual events can’t replace the effectiveness of a tech vendor event.

The reality is, when you’re at a physical event, there’s power in the palpable energy of brainstorming with like-minded “birds of a feather,” the unfiltered honesty of hallway discussions, and the serendipity of bumping into someone at the coffee station who just might inspire your next career move. Dedicated vendor conferences deliver all that plus deep-dive roadmaps, hands-on demos and a peek behind the curtain at what’s coming next. Industry events can replicate some element of networking but not the engineering-specific content.

Over the next few years, as IT continues to grow in complexity, I expect these events will only grow in importance. Whether the topic is cybersecurity, AI-powered tools or cloud everything, these conferences are where innovation is born and celebrated. Attendees will leave with more than a bag full of swag; they will exit with the connections, ideas and inspiration to fuel the next big breakthrough.

Many of my peer analysts prefer dedicated analyst-only events. Though these certainly add value, my preference has been vendor events because I can interact with people who use the company's products as part of their day-to-day jobs, and that's something that can't be shown on a PowerPoint.

Here are 10 excellent examples of some recent vendor events and what I thought was the most meaningful knowledge gained:

  • AWS Re:Invent. The Amazon Web Services portfolio continues to grow in breadth and depth. Re:Invent provided many sessions on how Amazon services can be used together to create broader portfolio value and how to derive short-term value from AI.
  • Zscaler Zenith Live. The concept of zero-trust security is no longer a foreign one, but getting from today's security environment to zero trust everywhere can be complicated. At Zenith Live, Zscaler focused on the Zero Trust Exchange and how customers can implement the technology to dramatically reduce their attack surface. Over the past few years, Zscaler has done a nice job of keeping the practitioner content while adding tracks specific to the C-level.
  • Canva Create. Canva is the hot new thing and has turned the creative tools market upside down. It's a relatively young company, but Canva Create had many sessions on how to get started with the product. Canva is the first vendor I have seen in a long time that has a realistic shot at disrupting Microsoft and Adobe.
  • Nvidia GPU Technology Conference. This event was a good mix of vision and hands on learning. Engineers at GTC learned about the latest advancements in AI, accelerated computing, and related software, along with their applications across various industries. Developers also love GTC because the event helps them fast-track accelerated computing apps.
  • Cisco Live. Networking is a critical part of AI, and Cisco is the biggest network vendor. Attendees of Cisco Live learned about the network and security roadmap, and how the integration of the two offers a unique way to secure artificial intelligence. As AI grows in importance, so does the value of Cisco Live.
  • Extreme Connect: The current Extreme Networks has been put together through a series of acquisitions, and most of the integration work is done. The last Connect focused on the company's strength in Wi-Fi and its Fabric, and how those fit into the broader portfolio strategy.
  • Zoomtopia. The very nature of work has changed, and few vendors have as broad a set of tools to reach every type of worker – remote or hybrid. Zoomtopia has provided prescriptive guidance on how to ensure workers of any type – from back-office to front-office — can use Zoom technology to be more efficient.
  • NICE Interactions. NICE is the king of contact center, and its Interactions show has been filled with guidance on how to bring AI into customer service, securely and with minimal risk. NICE always does a great job of parading customer after customer on stage to help prove the value.
  • VeeamON. Over the past few years, the boring industry of backup and recovery has evolved into business resiliency and no vendor has been more important in that shift than Veeam. Veeam technology is a critical component of recovering from all kinds of outages, including ransomware. Backup and recovery have shifted from something no one cared about to a board level discussion and that’s on display at VeeamON.
  • IBM Think. What I like most about Think is how it front-ends the user event with a Partner Plus Day. This lets IBM take its AI, cloud and other content and deliver how-to information to its customers while also driving its portfolio value throughout its massive partner ecosystem. Many companies split these events, but I like the two being part of one event, as they are two sides of the same coin.

Palo Alto Networks Inc. kicked off this week’s RSA Conference in San Francisco by introducing new capabilities for its ever-expanding security portfolio.

The announcements were focused on its two major platforms: network security and Cortex.

Prisma Access Browser 2.0

Palo Alto Networks has introduced Prisma Access Browser 2.0 into its secure access service edge offering. In late 2023, Palo Alto acquired Talon to jump into the secure enterprise browser market and now it has made the offering part of its SASE stack. Capabilities include:

  • Safely enabling generative artificial intelligence use and protecting data with real-time visibility, access control and user coaching, using LLM-powered context-based classification to accurately secure sensitive data and prevent leaks or breaches.
  • Real-time defense against sophisticated web attacks to detect evasive and targeted attacks, such as AI-generated cloaking and SaaS-hosted phishing.
  • A reimagined unified user experience that provides maximum performance for modern web and SaaS applications while enabling users to easily launch legacy infrastructure from the same browser.

Other new Prisma SASE capabilities include endpoint data loss prevention; integration into Prisma SD-WAN to support new productivity apps and extend enhanced user-to-app performance to the branch; simplifying the information technology experience with a next-generation unified SASE agent; and the addition of Oracle Cloud Infrastructure to extend the global reach of Prisma SASE and deliver cloud resiliency and greater uptime.

Secure enterprise browsers aren't new, but the market has been in a bit of a renaissance. Older solutions required users to install a separate browser, whereas current solutions, such as Prisma Access Browser, run in Chrome and Edge, making them invisible to the user. Also, with the permanency of remote work, the browser has become the primary workspace for many organizations. Securing the user and data at the browser brings consistent security to the first line of defense.

Cortex XSIAM 3.0

Palo Alto unveiled the 3.0 version of Cortex XSIAM, the next version of its SecOps platform. Among the new features are proactive exposure management and advanced email security, which enable customers to consolidate more functions onto Cortex with better and faster results, providing a proof point as to the value of platforming their security operations center.

Cortex Exposure Management prevents attacks by using AI to analyze massive amounts of data to prioritize and remediate issues across the attack surface. What's interesting about the release is that it changes the role of XSIAM. Palo Alto Chief Executive Nikesh Arora often talks about security tools being designed for "peacetime" or "wartime," with XSIAM being the former. Exposure management adds an element of wartime, giving XSIAM a dual role.

Other new capabilities include:

  • Providing a unified solution to uncover risks across native network, endpoint, and cloud scanners that can integrate with third-party sources.
  • Reducing alert noise based on actual risk by using AI to prioritize high-risk, exploitable vulnerabilities — without the need for compensating controls — and eliminating false alarms.
  • Preventing future attacks by creating new projections for critical risks in native network, endpoint and cloud security solutions, and automating remediation with playbooks across first- and third-party tools.
  • Stopping sophisticated email-based attacks with Cortex Advanced Email Security by:
    • Detecting advanced phishing and email-based threats with large language model-powered analytics that continuously learn from emerging threats.
    • Using built-in automation to stop attacks in real time, automatically remove malicious emails, disable compromised accounts and isolate affected endpoints.
    • Extending detection and response with email context that correlates email, identity, endpoint and cloud data to show the full attack path to facilitate incident response.

Palo Alto expects the new SASE features, Exposure Management, and Advanced Email Security to be generally available in Q4 of fiscal 2025, which ends July 31.

Prisma AIRS

Palo Alto also introduced Prisma AIRS, an AI security platform designed to protect the entire enterprise AI ecosystem, including applications, agents, models and data. It addresses both traditional and AI-specific threats, enabling organizations to deploy AI more confidently.

Built on the Secure AI by Design portfolio that Palo Alto launched in 2024, Prisma AIRS capabilities include:

  • AI Model Scanning checks AI models for vulnerabilities to enable enterprises to secure AI ecosystems against a wide range of risks, including model tampering, malicious scripts and deserialization attacks.
  • Posture Management delivers insight into AI ecosystem security posture risks stemming from a number of issues, including excessive permissions, sensitive data exposure, platform misconfigurations and access misconfigurations.
  • AI Red Teaming enables security teams to identify potential exposure and risks proactively. They can use a Red Teaming agent to stress-test AI deployments by performing automated penetration tests on AI apps that adapt the way an attacker would.
  • Runtime Security protects LLM-powered AI apps, models and data against runtime threats, including prompt injection, malicious code, toxic content, resource overload, and more.
  • AI Agent Security enables enterprises to secure agents, including those built on no-code/low-code platforms, against various new agentic threats.

During RSAC, Palo Alto announced its intent to acquire Protect AI. As the name suggests, the vendor focuses on securing AI and machine learning systems. At the event I talked with Anand Oswal, senior vice president and general manager of network security at Palo Alto, about the acquisition. "When the deal does close, Protect AI will become part of the Prisma AIRS team, accelerating our journey to comprehensively secure every app, agent, data set and model," he said, so there's more to come.

For Palo Alto, this was a strong set of announcements as it expands the definition and capabilities of its platforms. Almost every security professional I talk to has bought into the concept of the platform but struggles with how to get from where they are today to that future state.

The challenge for Palo Alto, and the other platform vendors, will be to help companies migrate from multivendor, multitool environments and consolidate down to a few platforms. It's what customers want. Now the vendors need to help them get there.

It's nearing the end of April, and for the security industry that means it's time for RSAC. Over the next week or so, about 50,000 people will flock to the Moscone Center to take in the latest and greatest in cyber.

One company that has retooled much of its portfolio over the past year is Cisco Systems Inc., and it has used RSAC to launch many new products. At the 2023 conference, the company introduced its extended detection and response solution, and last year it added enhancements to Identity Intelligence with Duo, more capabilities with Cisco Hypershield, and Splunk-Cisco integrations.

At RSAC 2025, Cisco continued its security push with the following:

  • Additional enhancements to Cisco XDR
  • Cyber Vision industrial internet of things and operational technology integration with Hybrid Mesh Firewall.
  • AI supply chain risk management — visibility into and control of the AI supply chain.
  • A partnership to integrate Cisco’s AI defense capabilities into ServiceNow.

Here are details on each:

XDR 2.0 AI enhancements

Under Chief Product Officer Jeetu Patel, Cisco has been focused on bringing together the historically siloed security and infrastructure domains, with a goal of providing better security outcomes while lowering operating and capital expenses. It's an ambitious goal, but the company is looking to AI to help achieve it. Adding to the complexity is that AI workloads introduce a whole new set of security challenges, particularly for the mid-market.

The value of XDR is that it can look across the entire attack surface, from network to endpoint to email and web, and then identify the lateral movement of an attack in near real time. The Cisco enhancements give it the ability to bring XDR to the mid-market, which hasn't had access to it before.

XDR 2.0 uses AI to close the gap between the bazillion alerts companies receive and the truly malicious ones, and then to take advantage of automated response capabilities. Security teams can use agentic artificial intelligence to build a tailored investigation plan and then execute it. There is so much data being generated today that people can no longer analyze it manually, but AI can. In fact, one could look at AI as the missing piece to realize the promise of XDR.
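As an illustration of the kind of triage described above, here is a toy prioritization pass. The alert fields, severity weights and scoring rule are invented for this sketch; this is not Cisco's implementation, just the general idea of surfacing the few high-risk, cross-domain alerts from a large pool:

```python
# Toy alert triage: score alerts by severity and cross-domain correlation,
# then surface only the highest-risk ones for a response workflow.
# All field names and weights are illustrative, not from any product.

SEVERITY_WEIGHT = {"low": 1, "medium": 3, "high": 7, "critical": 10}

def score(alert):
    # Alerts correlated across more domains (network, endpoint, email, web)
    # are more likely to represent real lateral movement.
    return SEVERITY_WEIGHT[alert["severity"]] * len(alert["domains"])

def triage(alerts, top_n=2):
    # Rank all alerts by score and keep only the top few for investigation.
    return sorted(alerts, key=score, reverse=True)[:top_n]

alerts = [
    {"id": 1, "severity": "high", "domains": ["network"]},
    {"id": 2, "severity": "medium", "domains": ["endpoint", "email", "network"]},
    {"id": 3, "severity": "critical", "domains": ["endpoint", "network"]},
]

for a in triage(alerts):
    print(a["id"], score(a))  # alert 3 (score 20), then alert 2 (score 9)
```

The point of the sketch is the ranking step: a critical alert seen in two domains outranks a high-severity alert seen in one.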

Cyber Vision with Hybrid Mesh Firewall

Security for AI is also a component of Cisco's Cyber Vision IoT and operational technology enhancements. Cyber Vision takes an asset inventory of the IoT endpoints, checks it against any vulnerabilities that might be in place and organizes the assets into groups. The secure firewall can then communicate seamlessly with this collection of IoT devices to enforce segmentation as well as firewall policies.

As Cisco brings more security into the fabric of the network with a Hybrid Mesh Firewall, it can read and understand feedback from Cyber Vision to make sure that it is automating least privileged controls to devices on a factory floor, as well as to users, whether they’re remote or in a branch office.

For Cisco customers, Security Cloud Control is the interface used to define policies and the intent behind them, and then enforce them everywhere applications and workloads may be running: on firewall variants, on secure workload, on Hypershield, as well as on secure access. Historically, Cisco has had good security tools, but the management was scattered across different systems. Under Patel, Tom Gillis and Raj Chopra, Cisco has done a much better job of simplifying workflows to complement the products.

AI supply chain and risk management

AI supply chain and risk management are other areas in which Cisco is enhancing security. AI is being infused into every corporate application and business process and that creates new risks. As an example, downloading a model from a source such as Hugging Face creates risk, as the exposed models can be infected with malware.

Cisco has worked on all the artifacts around AI, not just the usage of AI, but how they are being built and modeled. Cisco has visibility into the entire supply chain around AI and can enforce the right kind of controls, be it on the endpoint of the developer or the usage of that particular application.

Partnership with ServiceNow

Cisco also announced a product and go-to-market partnership with ServiceNow to bring Cisco's AI risk and governance portfolio into ServiceNow in a hybrid model. Joint customers will realize the value of the partnership as they start to adopt AI more holistically. This spans a wide range of use cases: visibility into the applications being used, the models behind them, the kinds of attacks they may be vulnerable to, and real-time protection.

Final thoughts

For Cisco, success in security is critical to accelerating growth. Security is a massive, highly fragmented market, and even moderate success will move the needle on Cisco's growth and stock price. Tying security to Cisco's dominant share position in networking gives it a unique approach that is highly defensible.

Historically, Cisco has treated the domains as individual silos and created complexity for its customers. Much of the innovation in security for AI and AI for security has come by bringing these two worlds together – something long overdue for Cisco and its customers.

As Veeam Software Group GmbH, the market leader in data resiliency, holds its annual user event VeeamON in San Diego this week, it’s on a roll, continuing to stretch its lead over the legacy vendors.

Recently IDC released its Data Protection Software Tracker and Veeam grew 12%, outpacing IBM Corp.’s 5% growth and well ahead of Dell Technologies Inc. and Veritas Technologies LLC, which shrank 10% and 15%, respectively, allowing Veeam to stretch its share lead.

At VeeamON, the company made several announcements, including a partnership with CrowdStrike Holdings Inc., protection for Entra ID, ransomware updates, a new Linux-based appliance and more. One of the more interesting announcements, aimed at helping companies understand where they are with their data protection, was the unveiling of the Data Resilience Maturity Model, or DRMM.

This era of information technology is driven by data, and that has raised the stakes on cyberattacks and IT outages. During his keynote, Chief Executive Anand Eswaran laid out the truth when he said, "Cybercrime is a business, and that business is booming."

The DRMM is a framework that enables organizations to objectively assess their true resilience posture and take decisive, strategic action to close the gap between perception and reality, improving data resiliency against proliferating cyberattacks.

Veeam led the creation of a consortium of industry experts, which includes MIT and McKinsey, tasked with identifying the state of enterprise data resilience and providing recommendations for improving it. During his presentation, Eswaran showed some data that highlighted a significant disconnect between how chief information officers see their organizations’ data resilience and the reality.

The study found that 70% of organizations believe they have the necessary levels of data resilience to protect their company. In fact, 30% consider themselves “above average” in this area. However, once customers went through the DRMM exercise, the model found that fewer than 10% had an acceptable level of data resiliency to react to an event without disrupting the business.

The fact that there is a gap shouldn't be a shock. In every example of a maturity model I have seen, businesses overestimate their capabilities. Rarely, though, have I seen a gap this large, and it's something business and IT leaders need to take seriously. It has been well-documented that the artificial intelligence era is fueled by data, and if there's one thing events such as the CrowdStrike outage have taught us, it's that most companies can't recover quickly from a disruptive event.

The report quantified the impact of downtime, highlighting that IT outages cost Global 2000 companies more than $400 billion annually, with $200 million in losses per company from outages, reputational damage and operational disruption.
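The two figures are consistent with each other: the Global 2000 comprises roughly 2,000 companies, and 2,000 times $200 million is $400 billion. A quick sanity check:

```python
# Sanity-check the downtime figures cited in the report:
# ~2,000 Global 2000 companies x ~$200M average annual loss each.
companies = 2000
loss_per_company = 200_000_000  # dollars
total = companies * loss_per_company
print(total)  # 400000000000, i.e. $400 billion
```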

Other key findings include:

  • 74% of organizations fail to meet best practices, operate at the two lowest maturity levels, and have data recovery risk exposure.
  • Organizations at the highest maturity level recover from outages seven times faster, experience three times less downtime, and suffer four times less data loss than their less mature peers.

Data resiliency has always been important but is now critical to survival. Eswaran expressed some urgency for companies to get a handle on their data when he stated, “Most companies are operating in the dark. The Veeam DRMM is more than just a model; it’s a wakeup call that equips leaders with the tools and insights necessary to transform wishful thinking into actionable, radical resilience, enabling them to start protecting their data with the same urgency as they protect their revenue, employees, customers and brand.”

Through a series of questions, the DRMM provides an empirical framework for organizations to assess their current resilience posture, identify gaps and implement targeted improvements. MIT and cybersecurity companies Palo Alto Networks Inc. and Splunk Inc. contributed insights to the research. The DRMM categorizes organizations across four data resilience maturity horizons:

  • Basic: Reactive and manual, highly exposed
  • Intermediate: Reliable but fragmented, lacking automation
  • Advanced: Strategic and proactive, yet missing full integration
  • Best-in-class: Autonomous, AI-optimized, fully resilient
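As a rough illustration of how a questionnaire-driven assessment could map answers onto these four horizons, here is a minimal sketch. The dimension names, thresholds and scoring are hypothetical, invented for the example; they are not Veeam's actual DRMM methodology:

```python
# Hypothetical DRMM-style self-assessment: average the fraction of best
# practices met across assessment dimensions, then map the score to one
# of the four horizons. Thresholds are invented for illustration.

HORIZONS = ["Basic", "Intermediate", "Advanced", "Best-in-class"]

def maturity(answers):
    """answers: dict of dimension -> fraction of best practices met (0..1)."""
    avg = sum(answers.values()) / len(answers)
    if avg < 0.25:
        return HORIZONS[0]
    if avg < 0.5:
        return HORIZONS[1]
    if avg < 0.75:
        return HORIZONS[2]
    return HORIZONS[3]

# Hypothetical organization: strong on strategy, weaker on process.
example = {"business_and_risk": 0.6, "people_and_process": 0.4, "technology": 0.5}
print(maturity(example))  # average 0.5 -> "Advanced"
```

The value of any such model is less in the arithmetic than in forcing an honest, question-by-question inventory, which is where the perception-versus-reality gap shows up.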

Resilience goes beyond data

“Data resilience isn’t just about protecting data, it’s about protecting the entire business,” said Eswaran. I agree with this sentiment, as improving in this area can be the difference between shutting down operations during an outage or keeping the company running.

For many organizations, it can be the difference between paying a ransom or being able to quickly recover. Also, with the world moving into the AI era, a best-in-class data resiliency model should be considered a foundational component of companies’ AI strategy.

A rigorous, vendor-agnostic framework

To develop the DRMM, Veeam worked with experts in operational efficiency. McKinsey and Dr. George Westerman, principal research scientist at the MIT Sloan School of Management, led the effort to create a rigorous, vendor-agnostic framework. The model was designed to assess an organization’s ability to ensure data resilience across three core dimensions:

  • Aligning business and risk: A data strategy that integrates business goals with resilience planning, ensuring organizations can anticipate threats, enforce governance, and maintain compliance.
  • Empowering resilience through alignment and action: True resilience is driven by empowered teams and standardized execution. Organizations can respond decisively during a disruption by investing in skilled talent, aligned leadership and clear cross-functional protocols. Defined workflows and governance ensure continuity, while collaboration and training enable teams to adapt and recover quickly and confidently.
  • Technology supports resilience across six key areas:
    • Backup: Secure protection of data that's located on-premises or across clouds.
    • Recovery: Agile restoration of critical systems, even at scale.
    • Architecture and portability: Scalability across heterogeneous and often hybrid environments.
    • Security: Prevention and anticipated remediation against cyberthreats.
    • Reporting and intelligence: Real-time visibility and insights to support compliance, improve recovery, and optimize operations.

Key benefits of DRMM

During his presentation, Eswaran highlighted several benefits:

  • Revenue protection
  • Cost optimization
  • Compliance and risk management
  • Brand integrity and consumer trust

Veeam cited the example of a global bank that followed the maturity model and improved revenue protection by improving resilience. By reducing mean time to resolution for vital IT systems, the bank achieved 99.99% uptime for critical applications, no cybersecurity-related outages for the past three years, and $300,000 in savings per outage.

Closing the gap between the boardroom and budgets

At VeeamON, I talked with several IT professionals about the DRMM. Though the technical folks found the information interesting, CIOs and chief information security officers look at it as a way of closing the gap between the boardroom and deployments. One CISO I talked to discussed how she tried to get budget for Veeam because she was worried about ransomware, but the budget kept getting denied. Then the organization got hit, leaving it unable to transact business for about a week while everything was restored.

The CISO told me that if she had access to this model, it would have armed her to more accurately quantify the risks at the board level, giving her a better opportunity to get the budget approved before the organization was hit with ransomware.

Copies of the report are available at https://go.veeam.com/wp-data-resilience-maturity-model.

This is the time of year when professional football teams are hoping to get the next Tom Brady, Joe Montana or Deion Sanders. Every now and then there are a couple of “can’t miss” prospects such as Peyton Manning or Saquon Barkley, but there are often far more draft busts than hits. It raises the question of why this is given the amount of money and resources teams pour into scouting, the NFL Combine, workout days and more.

The reason is that scouting is an imperfect science. A player at a top-ranked school, such as Alabama or Georgia, rarely faces competition on par with his team, skewing his statistics to the positive. Conversely, players from smaller schools do not get the airtime to show their skills. NFL Hall of Fame quarterback Kurt Warner is a great example of this, as he went undrafted after playing for the University of Northern Iowa but had a stellar NFL career.

At the recent MIT Sloan Sports Analytics Conference, I met a company, SumerSports, that is trying to change this by using AI-based video analytics to study every player on every play to understand whether they are making the right decisions that lead to success.

At the event I spoke with Chief Executive Lorrissa Horton, who explained that the mission of the company is to use AI to evaluate "every single player at the frame level, which leads to the snap level and then game level." Despite the rich set of data, Horton clarified that it was important to bring in the knowledge of seasoned football professionals to ensure the AI is interpreting the data correctly.
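The frame-to-snap-to-game rollup Horton describes can be sketched as a simple aggregation. The data shapes and the plain averaging here are invented for illustration and are not SumerSports' actual grading model:

```python
# Illustrative grade rollup: per-frame grades average into a snap grade,
# and snap grades average into a game grade for one player.

def snap_grade(frame_grades):
    # One grade per video frame of the snap, averaged into a snap grade.
    return sum(frame_grades) / len(frame_grades)

def game_grade(snaps):
    """snaps: list of per-frame grade lists, one list per snap played."""
    return sum(snap_grade(s) for s in snaps) / len(snaps)

# Two snaps for a hypothetical player: one well-played, one poorly played.
snaps = [[0.9, 0.8, 1.0], [0.4, 0.5, 0.6]]
print(round(game_grade(snaps), 2))  # 0.7
```

A real system would weight frames by role and situation rather than averaging them equally, but the hierarchy (frame to snap to game) is the same.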

She mentioned the company currently has 20 NFL veterans who work alongside the data scientists to bring these two worlds together. Horton admitted these are two worlds that have not historically overlapped, but the "two sides" have done a good job of learning from each other to understand problems and how data and analytics can solve them.

SumerSports is currently using its AI platform for the following three specific use cases.

Optimize roster construction: By analyzing player performance data in detail, SumerSports helps teams identify undervalued players, make strategic trades and build rosters that maximize on-field performance within salary cap constraints.

Enhance player evaluation: The platform provides advanced tools for evaluating player skills, potential and fit within a team’s system. This goes beyond traditional scouting methods, incorporating a wider range of data points to provide a more comprehensive assessment.

Improve decision-making: SumerSports offers insights that can inform various strategic decisions, from game-day play calling to long-term roster planning.

There are currently several other data sources for football, so I asked Horton how SumerSports is different. "The combination of humans and AI allows us to combine subjective and objective information into a single point of view," she said. "Also, our frame-level information is very important as we look at the different roles for every player on a given snap. This lets us see how receivers create space, how the ball is being caught, and then grade each player on every play. This gives us a better understanding of how each player contributes and how they can improve."

Over time, I fully expect all teams in every league at all levels to use AI. In the near term, it will be interesting to see how fast NFL teams will adopt AI-based tools such as SumerSports. The company currently has several NFL and NCAA teams using the product and is working on more, but resistance from “old school” thinking does exist.

At the conference, Horton was on a panel with Scott Pioli, who has had front-office jobs with the Patriots, the Chiefs, the Jets, the Ravens and others. Pioli admitted that SumerSports represents a threat to many people in football and can “scare” them, as he put it, but did state that attitude must change.

He cited his own epiphany moment: meeting Washington Commanders quarterback Jayden Daniels when Daniels was at LSU. Daniels came out of a practice session with a virtual reality headset, and Pioli explained his first thought was that Daniels needed to stop playing games and take football seriously.

Daniels explained that the VR headset allowed him to better prepare by experiencing a stadium he had never played in before. He knew where the clocks were, and he could hear the fan noise and understand the distractions. Pioli used this as his “aha” moment about the advancement of technologies such as VR in football.

Long-term, there are endless possibilities for SumerSports. It can go deeper into football and expand to high school football; it can create training tools or expand to other sports. Horton said all are on the table, but right now the company is laser-focused on building the best collegiate and professional football tool.

For those following the draft, the company has created a free draft guide with its own player rankings. It’s worth the download for the additional insight into what kind of player your favorite team has drafted.

The New England Patriots are among the most successful teams in NFL history. As impressive as their 11 Super Bowl appearances and six championships are, the team is just one piece of a diverse, successful organization — Kraft Group — that also owns the New England Revolution of MLS soccer, forest products, construction, real estate development, and private equity and venture investing companies. The organization also owns and operates Gillette Stadium in Foxborough, Massachusetts, home of the Patriots and Revolution and, next year, one of the U.S. venues for the FIFA World Cup soccer tournament.

To support activities throughout Kraft Group, the organization is constantly upgrading and expanding its network and telecommunications systems to accommodate more than 60,000 fans at the stadium and other business needs.

The World Cup will be the ultimate litmus test for the network and technology foundation, since it will bring a massive number of people to the venue and will require temporary fan activation zones outside the stadium. Michael Israel, chief information officer of the Kraft Group, described hosting the World Cup as having to put on seven Super Bowls in a four-week period. The infrastructure needs to hold up to these demands and provide the necessary levels of security to protect the games, which are watched globally.

To handle that critical work, the Kraft Group this week entered into a five-year strategic partnership with Boston-based NWN, one of the largest technology solution providers in the U.S., to transform the technology framework underpinning the entire Kraft Group business portfolio.

Jim Sullivan, president and chief executive of NWN, said the company is “excited to engineer and deliver an innovative and secure IT framework that scales with the organization and supports their long-term business objectives.” Key projects will include network connectivity upgrades to support new applications, modernized cloud-based collaboration solutions, and AI-enabled applications that improve the stadium experience for fans and players.

“Gillette Stadium is used throughout the year for a variety of events, and it is key for this venue to be as accommodating for our guests as possible,” said Jim Nolan, chief operating officer of Kraft Sports + Entertainment. “Partnering with NWN ensures we have the newest technological capabilities to exceed fan expectations.” He said NWN’s experience and “ability to bring us new technologies” while supporting the current infrastructure, “is key to making our facilities a place where guests can always stay connected.”

New initiatives to enrich the fan experience

Gillette Stadium, like most modern sports venues, is a microcosm of society, with technology powering every element of the fan experience and back-office operations. Ticketing is mobile and soon to be done using facial recognition, and there are “grab and go” food services, mobile payment options and other fan-facing technologies that rely on the network, which includes more than 1,800 Extreme Networks Inc. access points on a wired Cisco Systems Inc. network. Because tailgating has become so popular at NFL games, the Wi-Fi needs to be extended to the parking lots, adding another layer of complexity.

The enhanced connectivity and AI-driven solutions will support new initiatives, such as wayfinding applications and the expansion of Gillette Stadium’s internet protocol television or IPTV network. The goal is to enable people attending games, concerts or other events in the stadium’s convention spaces to connect with the Gillette Stadium wayfinding application to find the most direct route to their seat, locate stadium amenities and services, and acquire tickets quickly and easily.
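Under the hood, a wayfinding feature like the one described is essentially shortest-path search over a graph of stadium waypoints. Here is a minimal sketch, with a made-up map and breadth-first search standing in for whatever routing engine the stadium app actually uses:

```python
from collections import deque

def shortest_route(graph, start, goal):
    """Breadth-first search over an unweighted map of stadium waypoints."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path  # first path to reach the goal is the shortest
        for nxt in graph.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None  # no route exists

# Hypothetical, simplified map of walkable connections inside a venue
stadium = {
    "Gate A": ["Concourse"],
    "Concourse": ["Gate A", "Section 101", "Food Court"],
    "Food Court": ["Concourse", "Section 101"],
    "Section 101": ["Concourse", "Food Court"],
}
route = shortest_route(stadium, "Gate A", "Section 101")
```

A production system would weight edges by walking distance and crowd density, but the core lookup is the same graph search.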

That becomes critically important for the World Cup because many of the visitors are from other countries and have never visited Gillette Stadium before. The IPTV network expansion includes an enhanced digital viewing experience and improved content delivery with interactive features designed to increase fan engagement during live events.

Israel told me the investment in new technologies and connectivity is dedicated to meeting the needs of fans who are “expecting more in their experience here. Years ago, Wi-Fi was a nice-to-have, but now their experience is one in which they’re truly locked into us. They’re engaging more and more from their device itself.”

The team offers even more engaging services to fans, from “autonomous purchasing in our food or retail locations to day-of-game activities, it just becomes one in which it’s just becoming more and more a direct relationship with the guests as opposed to a relationship with the masses,” he said.

Though Israel and team have a lot on their plate right now, there is more coming. The organization is currently looking to modernize its contact center to leverage AI to personalize customer experience. After that, World Cup mayhem starts and will consume most of the IT organization’s time. Post-FIFA, the Kraft Group will look at a holistic Wi-Fi upgrade from the existing Wi-Fi 6 network. The team is currently building a new training facility for the players that will offer a modernized experience.

After that, all the firewalls and security will be reviewed and likely upgraded. If that weren’t enough, the Kraft Group is looking to build a new soccer stadium in Everett, which will be managed remotely from Patriot Place. Israel mentioned it’s critical to work with a partner, like NWN, that will learn their environment and invest with them.

Network-agnostic fan connection

Israel said having a reliable, high-performance Wi-Fi network throughout the stadium from ticket gates to the concession stands and souvenir shops is critical. “All of our gates are on Wi-Fi. 500 point-of-sale terminals are on Wi-Fi. We have a variety of use cases. We’re looking at installing more monitoring-type devices that can tell where our employees are on the grounds, integrating our geographic mapping software into the Wi-Fi and other devices so we can monitor parking, deliveries, and everything happening simultaneously.”

NWN CEO Sullivan said the goal is to partner with the Patriots and Kraft Group to meet the needs of fans at the stadium and the organization’s other business activities. “We’re looking holistically at what Michael is doing over the next five years, and we’re going to work with him to bring best-of-breed solutions by leveraging our experience management platform and service delivery capabilities. Whatever Michael and his team need, we can deliver a holistic solution versus swapping tech out component by component.”

Innovation-led growth at NWN

Sullivan said NWN has grown from a regional solution provider with “a couple of hundred million in sales in 2019” to “more than a billion in sales” last year. As technologies have evolved, including the fast growth of artificial intelligence, NWN has grown and expanded with them.

“We’re thinking about everything from delivering the network and infrastructure services and solutions Michael needs for the next five years, and then things that connect to that infrastructure to enable new fan experiences, new player experiences, such as an IPTV system,” explained Sullivan. On the drawing board are other new applications, such as wayfinding digital tools that help people navigate through the stadium and digital walls with entertainment and information that connect to that infrastructure.

“We’re focusing on delivering the capabilities of a next-generation infrastructure, and we’re going to partner with the Patriots for continued innovations, including agentic AI-powered solutions to do some of these other solutions Michael and his team dream up,” said Sullivan. “One of the things that really attracted us to this partnership was just the alignment of the innovation and the openness to explore, experiment, and drive new ways of creating an incredible experience for everyone.”

A recent survey from ZK Research found that 93% of IT leaders believe the network is more important to business operations than it was just two years ago. In that same time, 80% find managing the network to be more complex.

That creates an interesting juxtaposition as organizations, like Kraft Group, need more technology to move the business forward but complexity creates risk. By partnering with NWN, Israel and team can ensure they are using the latest and greatest best-of-breed technology and, as its services partner, NWN can help minimize the impact of the complexity.

Unless you’ve been sleeping under a rock, it should be obvious that artificial intelligence is all the rage. Though there are many use cases and industries that AI could affect, customer experience, CX for short, is considered by many to be the low-hanging fruit of AI.

However, the potential for AI to enable CX solutions carries both high risk and high reward. Do it right and you increase loyalty, gain customers, improve mindshare and enjoy a cornucopia of benefits. Do it wrong and you become a punchline in AI-gone-wrong stories and perhaps cause irreparable damage.

This has left some customers skittish about moving forward with AI in CX even though the value proposition is compelling. Recently, TeKnowledge, a global tech services provider specializing in artificial intelligence, CX and cybersecurity, partnered with Genesys, a provider of AI CX tools such as contact center, conversational AI and virtual assistants. Together, they aim to simplify and de-risk the deployment of AI while equipping teams with the right skills, turning AI adoption into real business value.

TeKnowledge was founded in 2010 as a support services provider in Europe, the Middle East and Africa, handling everything from basic to advanced support for tech companies. It then expanded to Latin America and Asia-Pacific in the following years. Today, it has grown into a global company with 19 hubs and 6,000 experts who guide organizations through every stage of the technology lifecycle: design, implementation, adoption and support.

The company is one of Microsoft Corp.’s global and longstanding partners and is recognized as a Microsoft Solutions Partner. Approximately 70% of its experts specialize in Microsoft Productivity, Business Process and Intelligent Cloud solutions such as Azure and SQL Server. The company told me it’s committed to digital skilling by providing comprehensive training programs and certifications in Microsoft solutions. The TeKnowledge ecosystem is built around the platforms its customers rely on most, and Microsoft is a de facto standard for most organizations.

TeKnowledge also offers advisory and skilling services, not just for tech vendors but also for governments. This includes managing global support operations, training government employees in Qatar, and handling complex, large-scale projects. TeKnowledge, Microsoft and the Qatar Ministry of Communications and Information Technology entered a partnership and jointly set up the Digital Center of Excellence in Doha, a digital skilling initiative that supports the digital transformation ambition of the country.

Security has always been a focus area for TeKnowledge, particularly in digital transformation projects. The company’s managed security services are based on Microsoft’s Sentinel SIEM and Defender endpoint security platforms. In a conversation with Mahmood Lockhat, chief technology officer of TeKnowledge, he told me the company designs security into every project it does. This should prove to be a differentiator for TeKnowledge in the CX market, as many communications-centric partners do not have a deep understanding of security.

Historically, I’ve found awareness of security in the unified communications and contact center space to be extremely low, with security and compliance treated as an afterthought outside highly regulated verticals. Very few of the unified communications and contact center vendors or channel partners ever talk about security. With data-centric AI coming, it’s critical to bake security into the project.

The timing of the Genesys partnership is interesting because it coincides with TeKnowledge going on a hiring spree: The company has brought on board many of Avaya’s experts and leaders in CX, AI and professional services, promoted talent from within, and expanded leadership roles in its Americas hub.

TeKnowledge is set to become a global reseller for Genesys Cloud, one of the most widely deployed CX solutions. Beyond sales, TeKnowledge will provide managed services and implementation support for Genesys Cloud.

The addition of Genesys to the TeKnowledge portfolio, combined with the Avaya team, could create significant share shift in the EMEA region. Avaya has a stated focus on the global 1500, most of which are in the U.S. The leadership team at TeKnowledge has many relationships with companies both in and out of the G1500 and now has a product in Genesys that’s available globally.

TeKnowledge is well-versed in Microsoft AI, which addresses employee productivity. The combination of Microsoft and Genesys gives TeKnowledge a “best of breed” suite for communications. Lockhat also emphasized the unique advantage of Genesys’s open architecture and presence in both regulated and open markets:

“We chose Genesys because we feel that from a global perspective and the verticals that they serve, they’ve got the most global reach,” he said. “One of the other aspects is around regulated markets like the Middle East, Africa and India. They have a solution that caters to regulated and open markets. Overall, these were the main drivers.”

In recent months, experts observed a rising demand for AI and automation, with organizations aiming to cut their reliance on human agents by approximately 25% each year. Many plan to automate up to 80% of their processes within the next one to four years using virtual assistants, robotic process automation, and other tools. The partnership with Genesys gives TeKnowledge the technology to meet this demand.

From an enterprise perspective, though the promise of AI is to boost productivity, early use cases focus on reducing costs and increasing efficiency through automation. TeKnowledge can help by automating many mundane tasks and then bringing in AI to handle more complex interactions. TeKnowledge’s model of consulting-led engagements enables it to better meet customer demands by identifying the low-hanging fruit that helps companies get started with AI, something that is currently a struggle for many of them.

Although the AI messaging from the UC and CX providers looks very similar, the functions and capabilities can vary widely. As the demand for AI and automation grows, it’s critical for customers to pick the right platform and ensure it has the right capabilities and security. Partnerships between a services company and a vendor, such as TeKnowledge and Genesys, can greatly simplify and de-risk deployments for customers. As AI in CX matures, I expect to see more partnerships like this emerge.

Today is the 25th opening day at Oracle Park, home of the San Francisco Giants baseball team, and the latest Oracle Park renovations showcase new technologies aimed at enhancing the fan experience.

In mid-March, the Giants held a media event to discuss the latest and greatest with the team. Though much of the discussion was with the new players as well as fun information, such as which players would have bobbleheads this season, I was there to learn about the technology upgrades coming to the park.

The most notable changes this season are to the concessions, where the team is bringing in three different providers to simplify the experience for fans. These include:

AiFi frictionless ExtraMile beverage marketplace

This is a “grab-and-go” store where a fan can scan a credit card, walk in, pick items and walk out. AiFi is like Zippin and Amazon Just Walk Out in its ease of use for the consumer.

These solutions, which use camera vision and AI to track what consumers grab from the shelves, are extremely accurate, but they require a dedicated amount of space and typically some construction, which is why most venues have only a limited number of them. Because these systems use cameras, they must be able to “see” the items, so they are ideal for packaged goods but struggle with items such as hot foods in generic packaging.
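To make the mechanics concrete, here is a toy sketch, not AiFi’s actual system, of how camera-detected “pick” and “put back” shelf events could be tallied into a charge on exit:

```python
from collections import Counter

def build_receipt(events, prices):
    """Tally camera-detected shelf events into what the shopper owes.

    events: (action, sku) tuples, where action is "pick" or "putback".
    prices: dict mapping sku -> unit price.
    """
    basket = Counter()
    for action, sku in events:
        if action == "pick":
            basket[sku] += 1
        elif action == "putback" and basket[sku] > 0:
            basket[sku] -= 1  # shopper changed their mind
    # Charge only for items still held when the shopper walks out.
    return {sku: (qty, round(qty * prices[sku], 2))
            for sku, qty in basket.items() if qty > 0}

events = [("pick", "soda"), ("pick", "chips"), ("putback", "chips"), ("pick", "soda")]
prices = {"soda": 5.50, "chips": 4.25}
receipt = build_receipt(events, prices)  # {"soda": (2, 11.0)}
```

The hard part in a real deployment is the computer-vision step that produces reliable events in the first place; the settlement logic itself is simple bookkeeping.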

After media day, Giants Chief Information Officer Bill Schlough, widely regarded as one of the top sports CIOs, gave me a tour of the stores and we talked about why the club is using AiFi. He told me all three grab-and-go vendors are similar in experience, but it was AiFi’s ability to integrate with season ticket holder programs and other back-end systems that helped it win Oracle Park.

Mashgin Doggie Diner self-check-out systems

While AiFi is grab and go, Mashgin is grab, place on a scanner, pay and go. This makes it an ideal solution for hot foods as it can scan barcodes and understand what’s in a closed package.

One of the big advantages Mashgin brings is that no construction is required, although Oracle Park is building a diner-like environment for it. The Mashgin technology is a standalone scanner and can be placed anywhere. Sporting venues typically put them in a closed environment to prevent people from grabbing food and walking off, but the tech itself is just a network-connected scanner.

Tapin2 self-order kiosk

For those fans who want a partial self-service experience, the Tapin2 stations will allow patrons to order from a screen, pay for the item and pick up the order assembled by a person.

While the combination of AiFi, Mashgin and Tapin2 will make eating and drinking easier at Oracle Park, there are a handful of other upgrades that will improve game day experience.

For one, Oracle Park will be expanding its Go-Ahead Entry systems. Last season the park became the second venue in Major League Baseball to deploy the facial recognition entry systems.

I’ve used the system and was impressed with the ease of setup and how fast it works. A patron uploads a photo into the MLB Ballpark App, and when passing through a Go-Ahead Entry gate, the system greets the fan, posts how many are in the party, and everyone simply walks through. In 2024, Oracle Park had four Go-Ahead stations across two gates – Second and King, and Lefty O’Doul. This year a fifth will be added at the Marina Gate. It’s a great experience, and I encourage everyone going to a game to try it.

The 6-gigahertz spectrum for Wi-Fi 6E will be turned on. In 2023, the Giants deployed Extreme Networks Wi-Fi 6E access points but could not enable the 6 GHz spectrum because it was not yet approved for outdoor use. In December 2024, the FCC finally gave the OK, and fans with 6 GHz-capable devices will see a noticeable bump in speeds. In stadium environments, the best speeds I have seen on 5 GHz are about 300 megabits per second, but on 6 GHz, fans can expect to get close to a gigabit per second under the right circumstances, such as proximity to an access point.

Technology has certainly become a critical part of the fan experience at all sporting events. The upgrades to Oracle Park will allow fans to enter the stadium faster and reduce the friction in ordering food and beverages, allowing more time to be spent watching and less time standing in line. That’s a win for the people paying to watch the games.

This was a big week for Palo Alto Networks Inc., as the cybersecurity market leader turned 20 and celebrated with multiple activities, including its “Ignite on Tour” event in New York and ringing the closing bell at the New York Stock Exchange. At Ignite on Wednesday, Palo Alto announced a multiyear partnership with the National Hockey League to strengthen cybersecurity across the league — from behind-the-scenes operations to what fans experience online and in arenas.

Of all the sports leagues, the NHL has been at the forefront of innovation. The league was the first to offer proprietary camera angles through its app, has been using AI to better understand competitive dynamics and has developed digitally enhanced dashboards. With so much of the fan experience being digital, the importance of security grows exponentially.

The NHL has been working with Palo Alto’s cybersecurity tools since 2009, including next-generation firewalls, cloud security services, and AI solutions. One of those tools, AI Access Security, lets the League use AI applications without exposing sensitive data. NHL employees also get secure internet access through Palo Alto’s Prisma Access Browser.

This multiyear partnership goes beyond just protecting systems and people. It supports the NHL’s larger mission to run its digital operations securely while offering fans a safer online experience. As part of the agreement, Palo Alto will also get exclusive marketing rights, tying its brand more closely to the NHL through social media, digital content and other promotional channels.

During the Ignite event, David Munroe, senior vice president of information technology and security for the NHL, talked about the importance of security. “We have become a very data-oriented organization,” he said. “A while ago, people would watch the game, enjoy it, see who the winner is and go home.”

Now, he said, “there are so many more aspects to the game – the stats, metrics such as how fast players skate, shot speed and other data coming in that we utilize in different ways.” He went on to explain that “data is used for broadcast, it’s fan-facing and used for marketing. The league is always trying to grow its fan base and showing data in different ways can help with that.”

The shift to a data-driven environment has raised the stakes in security. Sports leagues have shifted to digital tickets, facial entry, cashless payments and other technologies that require customer data that, if stolen, could have huge repercussions. One aspect Munroe did not bring up, but I know is on the minds of all sports chief information security officers, is the impact online gambling has had on the game.

Though it’s fun for the average fan to build a $10 parlay involving Alex Ovechkin chasing down Gretzky, sports betting is a huge business, and that creates equally large risks. These range from frustrated gamblers harassing players who didn’t meet an expected goal, to nation-states targeting players from certain countries, to medical information that could indicate whether a player will return. Any breach of data undermines the integrity of the sport.

To help protect all stakeholders, the NHL turned to Palo Alto Networks. Munroe explained why the platform approach was so important. “When you have a lot of discrete solutions, it becomes difficult to manage and while we like best of breed, we want the best end-to-end solution in place,” he explained onstage. “A platform creates a consistent experience across all our business needs.”

In my discussions with security leaders, the value of platform is becoming better understood. At Black Hat last year, a CISO from a major bank told me that best of breed everywhere does not lead to best-in-class threat protection. In fact, it works against the goal as it creates too many blind spots between the different vendors. While he had not consolidated down to a single security vendor, he had reduced the number of security providers from over a hundred to under five.

As artificial intelligence continues to make the information technology environment more complex, it will be interesting to see if more businesses follow the lead of NHL and other organizations that have embraced security platformization. During the event, the analysts had a roundtable with Palo Alto Chief Executive Nikesh Arora, who was extremely bullish regarding the opportunity that lies ahead as he declared the era of “best of breed” to be over.

I’ve been a believer in the platform concept for years but, for most businesses, there is no easy button in making the shift to a platform. Most companies have a massive amount of technical debt, processes built around certain vendors, and security teams trained in certain tools. AI will make protecting an organization increasingly difficult as the volumes of data and the speed at which companies operate will accelerate, and that could cause Arora’s prediction to come true sooner than later.

Two words are practically ubiquitous in technology discussions these days: artificial intelligence and cloud. At last week’s Nvidia GTC conference in San Jose, many emerging technology companies discussed and demonstrated how they leverage AI and the cloud to deliver innovative products and solutions to their customers.

One interesting case study came from Nebius, a Netherlands-based AI full-stack infrastructure company, which is one of only a handful of Reference Architecture Nvidia Cloud Partners. Gleb Kholodov, head of foundational services, and Oleg Federov, head of hardware R&D, delivered an interesting presentation titled, “From zero to scale: How to build an efficient AI cloud from scratch.”

As the AI era moves from the domain of the hyperscalers to other organizations, having best practices from a company that built an AI cloud will be useful. They walked through the company’s process of building an AI cloud business from the ground up. Nebius went from concept to a fully operational system running tens of thousands of Nvidia GPUs connected via a 400-gigabits-per-second InfiniBand network in just one year.

Here are some notable points from the session:

Getting started — and a fast setback

Nebius was formed from the break-up of Russian company Yandex. Building AI clouds wasn’t the company’s original intention. “At first, we thought we’d be building sovereign clouds, but then ChatGPT really took off, and we decided to pivot and power this emerging AI gold rush instead,” explained Kholodov. An immediate challenge – the limited license granted under the Yandex break-up deal to the cloud stack the Nebius team had helped build in their previous lives – became a blessing in disguise for Kholodov, Federov and their colleagues.

“We had exactly one year to rebuild the entire platform — in high quality — or shut down,” recalled Kholodov. “It was a chance to change our mindset, rethink our priorities, modernize our tech stack and decide on our values and what we’re optimizing for and really reflect that in our design. The time pressure — pretty immense, I would say — kept us focused and helped us cut down the non-essentials.”

Smaller regions — and more of them

Nebius pivoted from its original plan to “deploy a few fully independent bigger regions complete with three availability zones and tons of services,” said Kholodov, because though that approach worked reasonably well for sovereign clouds, “for AI, it just did not cut it.” In a 180-degree move, Nebius changed its focus to building many smaller regions that would be fully independent in terms of fault tolerance, data residency and the like, but interconnected from a management perspective so clients could manage all resources from a single web console.

Adapting to its new model required Nebius to deploy one region every quarter. “To achieve that, we had to architect our cloud for speed of deployment and operational efficiency,” Kholodov recalled. “Deploying new regions fast is not just about software. The hardware needs to get there first, be installed quickly and serviced efficiently.”

In the AI era, innovation moves faster than ever, and the lessons learned from Nebius are something other organizations can leverage as they look to scale their AI infrastructure plans.

Four key goals

Federov, the hardware lead, said the team focused on four “really important things”:

  1. Sustainability of server specification: Thermal and power efficiency; fast deployment not of a single server but of several racks, modules and even data centers; and easy-to-change firmware for any of the components. Federov said the team needed quick fixes for problems such as security threats. “If we needed a new functionality, we implemented it quickly.” He said the team’s love of F1 auto racing inspired them: “We thought of maintaining our servers as F1 pit stops, so it should be quick, safe and easy.”
  2. Efficient design: Its server design enabled Nebius to maintain machines “with one hand, because the other one, in the data center, is always occupied by a laptop so you can see the task you are doing.” The statement is obviously tongue-in-cheek, but the point about efficiency is one that has been overlooked in data center design in the past.
  3. Optimal flow control: To model airflow inside its servers, Nebius uses software similar to what F1 teams use to understand how air flows around a car. This is how Nebius developed optimizations such as separate air plenums for CPU and GPU power, dual- and single-rotor fans for different parts of the server and its own implementation of PID algorithms.
  4. Design efficiency: Nebius designed its servers to be highly efficient. “Our servers require up to 23% less energy on a full load,” said Federov. An added bonus of this approach was less noise pollution, which made it easier for engineers to communicate, reducing errors and delivering better service-level agreements.
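The PID fan control mentioned in the third point can be sketched with a textbook controller. This is a generic illustration with made-up gains and temperatures, not Nebius’s implementation:

```python
class PID:
    """Textbook PID controller driving a fan duty cycle from a temperature."""

    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint      # target temperature in Celsius
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measured, dt=1.0):
        error = measured - self.setpoint          # positive when running hot
        self.integral += error * dt               # accumulated error over time
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        out = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(0.0, min(100.0, out))          # clamp to 0..100% fan duty

pid = PID(kp=4.0, ki=0.5, kd=1.0, setpoint=65.0)
duties = [pid.update(t) for t in (70.0, 68.0, 66.0)]  # fan eases off as it cools
```

In practice the gains would be tuned against the airflow models described above; the point is that a few lines of control logic let fan speed track thermal load instead of running flat out.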

Building servers like LEGOs

Federov and his team leveraged the concept of LEGOs to assemble large, modern AI server racks based on the Open Compute Project. “We were happy to know that Nvidia GB200 servers use OCP,” he said. “That’s how our vision, even on hardware, is aligned. We build not only servers and racks; we build data centers. This is the only way we can make the most of our hardware optimization, reach the desired efficiency, and incorporate our sustainability principles.”

Adjusting on the fly to succeed in AI

Even though Nebius had to rewrite its cloud plan, the company’s mission remained constant: “Give high-quality AI infrastructure and services to customers of all types and sizes, at an affordable price, and on terms that fit their clients’ needs the best, be it reserve, on-demand or spot,” said Kholodov. He said the company’s original stack concept, aimed at the average cloud user, included a lot of services, managed databases and multiple types of VPNs. But AI required a different approach. “To succeed in this AI market, we needed to focus and slow down the offering,” he explained. The team had to rethink what it was — and what it wasn’t — and “shake off our megalomania of trying to be the only cloud that you ever need and instead aspire to become the best cloud for all your AI needs,” Kholodov said.

Building the cloud

After sorting out the hardware, Nebius needed to build the cloud on top of it, in just a year. To meet that aggressive goal, the company had to reduce complexity. This meant simplifying design choices and infrastructure, avoiding circular dependencies and being ready to use whatever tools were available in the market to meet its timeline.

“We knew we would be learning as we go,” he said. “We knew that some choices that we make in the beginning, while they’ll definitely be helpful to lift us off of the ground, may not be the right choices that will help us scale. We needed to retain the utmost flexibility by being able to change anything we needed under the hood without impacting the higher levels that customers are exposed to.”

Choosing Kubernetes

While some of its services could operate on top of a hardware-as-a-service design, for the bulk of them, Nebius needed a higher-level platform, so the company decided to go with Kubernetes. “Kubernetes is not exactly your typical go-to choice for building public clouds,” Kholodov said. “It’s primarily for containers. It has some scalability to it, and it’s convenient.” For the data plane, Nebius deployed a virtualization stack with three pillars: virtualization of compute, network and storage.

“Within three months, we were able to hit the ground running,” said Kholodov, “launching our first VM on the freshly installed Kubernetes. And that unlocked all the development on the higher levels. We still had to customize it pretty heavily, but we’re not afraid to do that for any of the components in the stack; we have engineers who can touch every single layer.”
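Nebius hasn't published its internal APIs, but the pattern it describes — masking Kubernetes underneath and exposing VMs to customers — is commonly done by modeling a VM as a Kubernetes custom resource that a controller reconciles (KubeVirt is the best-known open-source example). A minimal sketch of that idea, with an entirely illustrative API group and kind:

```python
# Hypothetical sketch of a VM modeled as a Kubernetes custom resource.
# The group "vm.example.com" and the field names are illustrative
# assumptions, not Nebius's actual internal API. A controller running in
# the cluster would watch these objects and reconcile them into running
# VMs on the compute/network/storage virtualization layers.

def make_vm_manifest(name: str, cpus: int, memory_gib: int, image: str) -> dict:
    """Build a custom-resource manifest describing the desired VM state."""
    return {
        "apiVersion": "vm.example.com/v1",
        "kind": "VirtualMachine",
        "metadata": {"name": name},
        "spec": {
            "cpus": cpus,
            "memoryGiB": memory_gib,
            "bootImage": image,  # served by the storage virtualization pillar
        },
    }

manifest = make_vm_manifest("demo-vm", cpus=4, memory_gib=16, image="ubuntu-22.04")
print(manifest["kind"], manifest["spec"]["cpus"])
```

The point of the pattern is the one Kholodov makes: customers see only the declarative VM spec, so everything under the hood can change without touching the higher levels they are exposed to.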

Not finding anything it liked on the market, the Nebius team wrote its own container network interfaces to give its customers the utmost control over their virtual networks. For Nebius customers, compute is just virtual machines, as the complexity of the underlying Kubernetes is masked by the software.
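For readers unfamiliar with what "writing a CNI" entails: under the CNI specification, a plugin is simply an executable the container runtime invokes with the operation in the `CNI_COMMAND` environment variable and the network configuration as JSON on stdin, replying with a JSON result on stdout. A minimal sketch of that contract (the IP allocation below is a placeholder, not Nebius's implementation):

```python
import json
import os
import sys

# Minimal sketch of the CNI plugin contract. Real plugins also handle
# DEL and CHECK, and allocate addresses from an IPAM backend; the static
# address here is a placeholder for illustration only.

def handle_add(conf: dict) -> dict:
    """Attach the container to the network described by `conf` and
    return the resulting interface/IP assignment."""
    return {
        "cniVersion": conf.get("cniVersion", "1.0.0"),
        "interfaces": [{"name": os.environ.get("CNI_IFNAME", "eth0")}],
        "ips": [{"address": "10.0.0.2/24"}],  # placeholder allocation
    }

def main() -> None:
    conf = json.load(sys.stdin)
    command = os.environ.get("CNI_COMMAND", "VERSION")
    if command == "ADD":
        json.dump(handle_add(conf), sys.stdout)
    elif command == "VERSION":
        json.dump({"cniVersion": "1.0.0",
                   "supportedVersions": ["0.4.0", "1.0.0"]}, sys.stdout)
    # DEL and CHECK are omitted in this sketch.

# A real plugin binary would call main() when executed by the runtime.
```

Writing a custom plugin against this small contract is what lets a provider expose full control over virtual networks while keeping Kubernetes hidden from the customer.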

Close working relationship with Nvidia

With just a year to bring its AI cloud to market, Federov said the Nebius team had to be creative and stay focused. “We had to cut the nonessentials, but we did not cut corners,” he said. “Our cloud adheres to Nvidia’s reference architecture, and that was recently acknowledged by Nvidia, which granted us the status of reference platform Nvidia cloud partner” for Nebius’s competencies, including compute, Nvidia AI, networking, visualization and Nvidia virtual desktops.

What the Nebius team learned along the way

“We came from the world of VMs, of selling compute power,” Kholodov said. “In the AI world, especially with AI training, people don’t want to buy just compute. They want to buy compute that directly contributes to the progress of their model training, with the clusters continuing to get bigger. In fact, some of our customers run clusters as big as 4,000 GPUs.”

Federov added that in addition to designing hardware, how it is produced and tested is critical. “It all starts in the factories,” he said. “It’s important to make a small data center right in front of the assembly lines so you can apply special environmental conditions [temperature, humidity, and more] and the latest firmware for all components. We add specialized testing toolsets like Nvidia 3DMark, for example, for GPUs. It’s important to try to mimic client workloads in the factory — how they specifically use the hardware at this particular stage.”

Lessons learned: Change is constant in the AI era

The most important takeaway from the Nebius session is that change does not stop, and it’s important to embrace it. The Nebius team faced many changes over the past year, and its success came from being adaptable and resilient. Kholodov discussed how Nebius initially tried to avoid unpredictability, as we all do, but quickly realized that change and unpredictability need to be baked into its plans and into anything it brings to market.

Information technology executives in charge of AI projects need the same mindset. Everything in the AI ecosystem, from hardware and software to policies and people, is unpredictable. Embrace this, adapt with it, and be ready for whatever comes; that’s the only path to AI success.

Last week all eyes were on Nvidia Corp.’s GTC, also known as the artificial intelligence show. But another event was taking place across the country: Enterprise Connect, the communications industry’s largest event, in Orlando, Florida.

AI was the theme at EC, as the technology is changing the way we interact with each other and with customers. That’s where Zoom Communications Inc. unloaded a salvo of 45 agentic AI skills and agent enhancements for its Zoom AI Companion.

Zoom’s goal is to elevate its AI Companion offering across the Zoom platform by leveraging AI agentic skills, agents and models to deliver high-quality results, help users improve productivity, and strengthen relationships. Zoom AI Companion helps users get more done by executing routine and sometimes complex tasks. Customers can also use AI Companion’s task action and orchestration to execute and complete end-to-end processes.

Generative AI gave rise to a wave of “co-pilots” that would assist workers with their jobs. Agentic AI creates “co-workers” that can complete entire tasks on behalf of a person. Long-term, workers will manage a series of agentic AI “workers” that can pass tasks to each other until completion. A good example is a mortgage process, which is filled with discrete but routine tasks. This would be ideal for a series of agentic AI agents, assuming there is interoperability between them.
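The mortgage example above can be sketched as a simple hand-off pipeline. Everything here is hypothetical: a production agentic system would wrap an LLM call and tool use inside each agent, and interoperability would require a shared hand-off format; plain functions and a dataclass stand in for that below.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the agentic "co-worker" hand-off pattern: each
# agent completes one discrete step of a mortgage application and passes
# the task to the next agent until the end-to-end process is complete.

@dataclass
class Task:
    applicant: str
    steps_done: list = field(default_factory=list)

def intake_agent(task: Task) -> Task:
    task.steps_done.append("documents collected")
    return task

def credit_agent(task: Task) -> Task:
    task.steps_done.append("credit checked")
    return task

def underwriting_agent(task: Task) -> Task:
    task.steps_done.append("loan underwritten")
    return task

# Interoperability boils down to every agent agreeing on the Task format.
PIPELINE = [intake_agent, credit_agent, underwriting_agent]

def run(task: Task) -> Task:
    for agent in PIPELINE:
        task = agent(task)
    return task

result = run(Task(applicant="A. Borrower"))
print(result.steps_done)
```

The hard part in practice is the interoperability caveat the text raises: agents from different vendors only chain like this if they share a hand-off contract.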

Summarizing the announcements:

Leveraging AI to focus on human interactions

Since the pandemic, work has become very transactional, and Zoom’s goal is to try to restore many of the human connections we had in the office. I’ve talked with Zoom executives about this, and the company believes AI is key to creating more human connections. I’ve described this as using AI to create digital proximity, even when we are all physically distant.

A good way to think about the role of AI is to consider how many executives have an assistant with a book of information on people they have met. Prior to a meeting, the executive gets an update on the last discussion, key talking points and so on. Not all of us have the benefit of a personal assistant, but we can all have an agentic AI agent enabling us to bring more humanity to an increasingly digital world.

AI Companion for specialized skills and agents

The company said AI Companion will augment specialized agents that power Zoom Business Services, including:

  • Customer self-service: Zoom Virtual Agent leverages memory and reasoning skills to deliver empathetic and contextual conversations and task action to resolve complex issues from start to finish.
  • Virtual agents: AI Studio will enable users to create and deploy customizable virtual agents.
  • Expanded agentic skills platform-wide that leverage reasoning and memory to act and orchestrate task execution, conversational self-service, and agent creation.
  • In the future, customers will be able to leverage Zoom’s open platform to interact with third-party agents, including from ServiceNow, and create their own custom agents.
  • Revenue growth: Zoom Revenue Accelerator users will be able to benefit from a specialized agent for sales to help increase revenue through automated insights, personalized outreach, and enhanced prospecting.

The last bullet is the one I find most intriguing, as the return on investment should be relatively easy to measure. Early in my career, then-Cisco Systems Inc. Chief Executive John Chambers talked about how to drive adoption of new technology, and he mentioned you need to “follow the money.” He explained how companies spend massive amounts of money on improving sales.

If a new technology can have even a small impact on sales, it will become a no-brainer. Zoom Revenue Accelerator can give the company some quick wins to highlight the value of agentic AI.

Custom AI Companion add-on

Organizations will be able to use the Custom AI Companion add-on (expected in April) to:

  • Create custom meeting templates and dictionaries with unique vocabularies to meet business needs.
  • Use AI Studio to expand AI Companion’s knowledge and skills to help drive decisions and actions and complete tasks.
  • Access a digital personal AI coach (expected in June) and custom meeting summary templates to meet the needs of industry verticals or use cases, including one-on-one meetings, customer intake, or brainstorming meetings.
  • Use custom Avatars for Zoom Clips to help scale video clip creation and avoid multiple takes by using a personalized AI-generated avatar to create clips with a user-provided script.

As part of its federated approach to AI, the Custom AI Companion add-on will incorporate small language models alongside Zoom’s third-party LLMs. Zoom has trained its SLMs with extensive multilingual data optimized for specific tasks to perform complex actions, which should help it facilitate multi-agent collaboration.

Improving efficiency for better results

Zoom Docs, which enables workers to create high-quality content more efficiently, will gain enhanced AI Companion capabilities with advanced references and queries. These help users create writing plans based on context, search internal and external information for references, and use that information to create a business document based on user instructions.

Users will also be able to prompt AI Companion to automatically create data tables to enhance the usability and organization of content (expected in July).

Zoom Drive (expected in May), a central repository for Zoom Docs and other meeting and productivity assets, will “make it easier to find and access assets across Zoom Workplace.”

New features are critical to long-term growth

Zoom is best known as a video meetings company, which stems from the success it had selling its core product when the entire world was working from home. We are now five years removed from the pandemic, and Zoom has many “COVID contracts” that are up for renewal. The challenge for Zoom is that video meetings have become a standard feature across all communications platforms, with Teams having the lion’s share despite an inferior product. Microsoft’s ability to bundle Teams with Office has led to massive adoption.

Zoom has one thing that Microsoft and the rest of the communications field do not, and that’s high end-user pull-through, because people who use Zoom tend to love it. Typically, information technology pros make the decision on software like Zoom, but I’ve talked to many IT decision makers who have brought Zoom in because of demand from employees.

It must now leverage this “user love” to sell the Zoom platform, which is built on a single data set. Zoom’s AI capabilities can create unique experiences as it can pull together insights from employee and customer communications across calling, e-mail, chat, contact center, docs and more.

At its 18th annual GTC conference last week, Nvidia Corp. not surprisingly aimed to get audience members’ hearts pumping for what’s next — which obviously is the rapid evolution of artificial intelligence.

Nvidia co-founder and Chief Executive Jensen Huang once again played his traditional keynote speaker role. As usual, he was dressed in all black, including a leather jacket. On Tuesday, Huang held court without a script for more than two hours, introducing Nvidia’s upcoming products.

It’s all about AI

AI adoption is occurring rapidly across many industries. Businesses invest money and effort to show customers, partners and shareholders how innovative they are by leveraging AI. The success of this AI explosion depends on fast, reliable, innovative technologies. So, it makes sense that’s where Nvidia is focusing. Given the shockwaves created by January’s introduction of the DeepSeek-R1 LLM from the Chinese AI company of the same name, Huang eagerly shared all that Nvidia and its increasingly powerful, but expensive, graphics processing units, chip systems and AI-powered products can do.

Working in his customary rapid-fire presentation mode, Huang walked through a broad overview of industry trends and highlighted several recent and upcoming innovations from Nvidia. Here are some of the key announcements:

  • New chips for building and deploying AI models. The Blackwell Ultra family of chips is expected to ship later this year, and Vera Rubin, the company’s next-gen GPUs named for the astronomer who discovered dark matter, are scheduled to ship next year. Huang said Nvidia’s follow-on chip architecture will be named after physicist Richard Feynman and is expected to ship in 2028. Nvidia is on a regular cadence of delivering the “next big thing” in GPUs, which is great for hyperscalers, but as the use of AI broadens to enterprises, it will be interesting to see if they can keep up with Nvidia’s pace. I’ve talked to many chief information officers who aren’t sure when to pull the trigger on AI projects going into production as models and infrastructure keep evolving at a pace never seen in computing before. Go now and start reaping the rewards, or wait six months and perhaps get exponential benefits. It’s a tough call, but my advice is to go now, as waiting just puts companies further behind. However, as a former CIO, I get the concern of moving now and risking being obsolete in a year.
  • Nvidia Dynamo, which Huang called “essentially the operating system of an AI factory,” is AI inference software for serving reasoning models at large scale. Dynamo is fully open-source “insanely complicated” software built specifically for reasoning inference and accelerating across an entire data center. “The application is not enterprise IT; it’s agents. And the operating system is not something like VMware — it’s something like Dynamo. And this operating system is running on top of not a data center but on top of an AI factory.” Dynamo is a great example of Nvidia’s “full stack” approach to AI. Though the company makes great GPUs, so do other companies. What has set Nvidia apart is its focus on the rest of the stack, including software.
  • DGX Spark is touted as the world’s smallest AI supercomputer, and DGX Station, which he called “the computer of the age of AI,” will bring data-center-level performance to desktops for AI development. Both DGX computers will run on Blackwell chips. Reservations for DGX Spark systems opened on March 18. DGX Station is expected to be available from Nvidia manufacturing partners such as ASUS, BOXX, Dell, HP, Lambda and Supermicro later this year. It’s important to note that DGX Spark isn’t designed for gamers but for AI practitioners. Typically, this audience would use a DGX Station as a desktop, which can run at $100,000 or so. DGX Spark is being offered starting at $3,999, a great option for heavy AI workers.
  • On the robotics front, which is part of the physical AI wave that’s coming, Huang announced partnerships with Google DeepMind and Disney Research. The partners will work to “create a physics engine designed for very fine-grained rigid and soft robotic bodies, designed for being able to train tactile feedback and fine motor skills and actuator controls.” Huang stated that the engine must also be GPU-accelerated so virtual worlds can live in super real time and train these AI models incredibly fast. “And we need it to be integrated harmoniously into a framework that is used by these roboticists all over the world.” A Star Wars-like walking robot called Blue, which has two Nvidia computers inside, joined Huang onstage to provide a taste of what is to come. He also said the Nvidia Isaac GROOT N1 Humanoid Foundation Model is now open source. Robots in the workplace, or “co-bots,” are coming and will perform many of the dangerous or repetitive tasks people do today. From a technology perspective, many of these will be connected over 5G, leading to an excellent opportunity for mobile operators to leverage the AI wave. The societal impact of this will be interesting to watch. Much of the fear around AI is the technology being used to replace people. During his keynote, Huang predicted, “By the end of this decade, the world is going to be at least 50 million workers short,” which is counter to traditional thinking. Since robots can do many of the dangerous and menial jobs people do today, will we really be 50 million people short? Hard to tell, but robots will be ready to fill the gap if required.
  • Shifting gears to automotive, General Motors has partnered with Nvidia to build its future self-driving car fleet. “The time for autonomous vehicles has arrived, and we’re looking forward to building, with GM, AI in all three areas — AI for manufacturing so they can revolutionize the way they manufacture,” he said. “AI for enterprise, so they can revolutionize the way they design and simulate cars, and AI for in the car.” He also introduced Nvidia Halos, a chip-to-deployment AV safety system. He said he believes Nvidia is the first company in the world to have every line of code — 7 million lines of code — safety-assessed. He added that the company’s “chip, system, our system software and our algorithms are safety-assessed by third parties that crawl through every line of code” to ensure it’s designed for “diversity, transparency and explainability.” At CES, the innovation around self-driving was everywhere. If one rolls back the clock about a decade, many industry watchers predicted we would have fully autonomous vehicles by now, but they are still few and far between. AI in cars has come a long way, making them safer and smarter, but the barrier to full autonomy was higher than many expected. That said, I believe we are right around the corner.
  • Quantum day was interesting but left big questions unanswered. The Thursday of GTC featured the first-ever quantum day, where Huang interacted with 18 executives from quantum companies over three panels. The event was certainly interesting, as it introduced the audience to companies such as D-Wave, IonQ and Alice & Bob, but it did not answer the two questions on everyone’s mind: what are the use cases for quantum, and when will it arrive? During the session, Huang did announce Nvidia plans to open a quantum research facility, scheduled to open later in 2025. He also suggested that the 2026 quantum day would feature more use cases. When I’ve asked industry peers about quantum, I hear timelines anywhere from five years to 30 years. I believe it’s closer to five than 30, as once we see some use cases, that will “prime the pump,” and we should see a “rising tide,” much like we did with AI.

GTC 2025 is now in the rear-view mirror, and while there was no “big bang” type of announcement, there was steady progress across the board toward a world where AI is as common as the internet. This should be thought of as a GTC that lets companies digest how to use AI instead of trying to understand what the next big thing is. The breadth and depth of AI today shows it’s becoming democratized, which will lead to greater adoption — good for Nvidia but also for the massive ecosystem of companies that now play in AI.
