Featured
Reports

Verizon Mobile Partners with Microsoft So Teams Can Energize the Mobile Workforce

December 2023 // For years, mobile employees have constituted a significant portion of the workforce. Since the start of the […]

“Private Cellular or Wi-Fi?” Isn’t an Either/Or Question: You Can Have Both

December 2023 // The world used to rely on wired connections. The phones we used back then plugged into the […]

Enterprises Have Big Plans for Wireless but Lack Unified Management

October 2023 // Siloed management, security and QoS lead to complexity and downtime. A converged multi-access wireless network is the […]

Check out
OUR NEWEST VIDEOS

2024 ZKast #26 With Tom Brannen from OnConvergence

2024 ZKast #25 with Greg Schoeny of World Wide Technology from Mobile World Congress 2024

2024 ZKast #24 with Markus Nispel of Extreme Networks at MWC

Recent
ZK Research Blog

News

Vlad Shmunis’s return to the CEO office marks the beginning of a cautious plan to grow RingCentral’s customer base beyond its usual constituencies.

This week, RingCentral announced its Q4 fiscal year 2023 results, the first since Vlad Shmunis retook the captain’s chair of the Starship RingCentral. On Tuesday, February 20, the company delivered results in line with Street expectations. RingCentral posted revenue of $571 million, $1 million above analysts’ consensus, and operating income of $117 million, $3 million ahead of the expected number.

Looking ahead, the company put up light guidance for the upcoming quarter and full year, guiding to 8% growth for Q1 and for the full fiscal year, while the Street had a 9% estimate. The light guide shouldn’t be a surprise: RingCentral has historically been conservative, and given all the macro uncertainties, this is consistent with the past. Shmunis has often told me he would rather under-promise and over-deliver.

One notable point from the call was the strength of the enterprise business. Shmunis stated, “I am particularly proud to report that this segment (enterprise defined as $100,000+ in annual recurring revenue) has just achieved $1 billion of annual recurring revenue (ARR).” He also highlighted that 20% of the Fortune 1000 are RingCentral customers, among them two of the world's three largest hotel brands, one of the largest car rental companies, and many financial services companies. Shmunis summed up the enterprise momentum with a couple of customer wins: a Fortune 100 insurance company purchased 20,000 UCaaS seats, and a Global 500 retailer bought 18,000. These data points should help shed the image that RingCentral is only for SMBs.

While steady as she goes is the theme for RingCentral right now, Shmunis noted a concerted “land and expand” approach on the call. New products like RingCX, RingSense for Sales, and RingCentral Events are expected to deliver $100 million in combined ARR by the end of 2025. Cross-selling additional products creates greater stickiness, which improves net retention, a critical metric for investors, and leads to bigger deals with higher margins.

On the investor call, Shmunis explained the go-forward strategy: “I returned as CEO to deliver on our strategy of: one, delivering durable growth and value from our core; two, expanding our TAM by turning RingCentral into a multi-product, AI-first communications leader. To that end, we have recently added three new products to our portfolio: RingCX, our native, AI-first contact center; RingSense for Sales, our conversation intelligence platform for sales professionals; and RingCentral Events, our new virtual, hybrid, and onsite events platform.”

It’s worth noting that the $100 million target is getting a running start, as the company has seen early adoption. According to Shmunis, RingCentral currently has one hundred paying RingCX customers, up from 50 when the product launched in November. He cited two Fortune 1000 companies that each purchased over 1,000 seats. This is good to see because, when RingCX launched, many industry watchers positioned the product as targeted at smaller businesses. The company also claims “hundreds” of paying RingSense for Sales customers, which is impressive given it launched in the back half of 2023. Regarding RingCentral Events, recall that the company purchased Hopin’s events business about six months ago for this capability. Since then, hundreds of customers have hosted large events on it, including Spotify, Reddit, and HubSpot, which were specifically called out.
This “expand” strategy creates interesting competitive dynamics for the UCaaS and CCaaS providers. Currently, businesses use multiple cloud communication companies; I’ve seen data suggesting the average enterprise has five or more providers. It’s common to see Teams, Zoom, RingCentral, Webex, and NICE all deployed in the same organization. I don’t think anyone would argue against the financial benefit of consolidating, as organizations are likely paying for redundant capabilities. The bigger questions are: do you go to a single vendor or keep multiple ones? And what are the anchor services that determine which ones are “must-haves”?

Answering those questions isn’t easy. I believe few companies of significant size will go to one vendor. The rest need to consider which application is the anchor and decide from there. For example, chat has become one of the most important applications, and it’s one of the things Teams does well, so companies continued to deploy Teams despite its deficiencies in other areas, such as calling. For the other vendors, their perceived core capability is what they came to market with: for RingCentral, it’s calling, and for Zoom, it’s video, even though the former has made great strides in video and vice versa. This happens when a vendor has so much success doing one thing that no one looks to it for anything else.

As RingCentral and its peers embark on their “expand” strategy, they must showcase their strength in these new areas. The best way to do this is through customer examples. I liked how the company specifically called out some RingCentral Events wins, but I would like to see more of this, as people generally will be skeptical about a capability until you prove you can do it. I recall a conversation with Shmunis when the company started putting up bigger wins a few years ago. He told me something to the effect of, “No one thought we could do 5,000 seats - until we did. Then people said we couldn’t do 10,000, and then we did.” Coming out of this quarter, the conservative guide was prudent, and now it comes down to executing and growing ARR outside of the core business.

For businesses, decisions about investing in AI are complex and challenging.

Artificial Intelligence (AI) is reshaping business operations, from network troubleshooting and cybersecurity to customer service and communications. As investment in AI reaches new heights, organizations must weigh its benefits against cost, environmental impact, ethical concerns, and implementation challenges. The global system integrator World Wide Technology (WWT) recently hosted a tech talk with leaders from Cisco, Intel, and NetApp to discuss key considerations for adopting AI in business. They examined various AI investment options and outlined an effective AI investment strategy. This included ways to address the skills gap in AI, and tactics for incorporating security and sustainability into an AI strategy. Here are the key takeaways from that discussion.

Transforming Operations & Enhancing Security/Privacy With AI Investment

The impact of artificial intelligence on operational efficiency and security is significant, and its applications are diverse. Cisco leverages AI in security through predictive analytics and pattern recognition, allowing it to proactively identify potential cyber threats before they can cause harm. By analyzing data patterns and detecting anomalies, Cisco’s AI-driven security approach enables faster response times and improved threat mitigation in networking.

NetApp focuses on AI’s ethical use and deployment to enhance security, particularly in protecting intellectual property (IP) and sensitive data. The company prohibits using public generative AI services within the internal network, having developed its own secure version. This ensures NetApp data, as well as that of its clients, remains protected. “Looking at Twitter, Facebook, and Instagram, I fear that AI can be weaponized,” said Paras Kikani, senior director of solutions engineering at NetApp. “So, we have to be responsible and ensure that we’re not only implementing AI the right way but also protecting ourselves simultaneously. The IP that you all hold just can’t go into the public domain.”

Intel works with partners to develop large domain-specific language models tailored to industries like finance, healthcare, and manufacturing. The goal is to make AI intuitive and practical, focusing on real-world problems and areas where customers genuinely need solutions. For example, Boston Consulting Group (BCG) and Intel have teamed up to create an AI model trained on BCG’s confidential data, spanning over 50 years. Using a chatbot powered by Intel AI hardware and software, BCG employees can now retrieve and summarize information that was previously difficult to find, all while keeping the data private.

Cisco is also cautious, advising against the use of public generative AI services; it has its own internal AI platform leveraging Microsoft’s Azure AI capabilities. This reflects a broader trend Cisco has observed among its customers: financial services firms tend to be wary of AI, whereas manufacturing companies are more open to it. Cisco believes providing employees with viable, secure alternatives to public AI tools is essential. “You can’t say no,” said Eric Knipp, Cisco’s vice president of systems engineering, Americas. “Just like with security, they will find ways around it if you make it hard. Versus giving them a tool that they can work with, backed up by your own internal policies.”

Sustainability & Cost Considerations for Deploying AI

Industry leaders are deeply aware of the sustainability challenges, notably the high energy consumption associated with generative AI. AI infrastructure requires extensive floor space and cooling in an era when the trend is toward reducing power usage and creating smaller, hybrid environments. Organizations risk being so enthusiastic about AI investment that they adopt it without fully understanding its purpose or impact on the environment. An AI investment strategy must therefore consider sustainability and cost-efficiency.

“GenAI is not exactly a green technology. A ChatGPT search takes about 100 times more power than a typical Google search. As we think about driving these types of solutions into our customers’ environments or into our enterprise environments, we need to be cognizant of the potential impact that’s going to make,” said Knipp.

AI implementation is costly due to the need for high-end graphics processing units (GPUs), high-performance storage, and extensive datasets. Balancing these expenses puts more strain on already tight IT budgets. According to Kikani, AI must prove its value in “helping the core business” by generating revenue, fueling growth, reducing risk, cutting costs, or optimizing resource use. It’s important to thoroughly understand AI, including its intended purpose and how to use it most efficiently, effectively, responsibly, and securely. “Everybody is so enamored with AI that we’re getting ahead of ourselves without really understanding what AI is. It’s a tool. It’s a hammer, it’s a nail. It’s not going to replace everything,” said Travis Palena, Intel’s global channel sales director for data center and AI.

Future Workforce: Bridging the Skills Gap

People who have spent years in specific roles must now adapt to new demands and technologies, as evidenced by the recent layoffs at major tech companies. Cisco, Intel, and NetApp acknowledge an industry-wide need for apprenticeship programs, internships, and educational initiatives to help foster the next generation of tech-savvy professionals.

NetApp, for example, has a program called the Sales, Support and Services (S3) Academy, which provides training to recent college graduates and those with a few years of experience. However, NetApp also recognizes the importance of continuous learning for midcareer professionals to succeed in today’s fast-paced tech world. “As our corporate responsibility, we need to build programs to not only help our early career individuals but also people who have been in industry for five to 10 years and haven’t had a chance to learn something new,” said Kikani.

Cisco is approaching the skills gap by leveraging its existing Networking Academy program to recruit people who may not have a four-year degree but can learn relevant tech skills. This initiative reflects a broader perspective on talent acquisition and the value of looking beyond conventional four-year degree holders. The Department of Defense’s SkillBridge is an example of a program that helps veterans transition to corporate jobs; Cisco has recruited more than 120 veterans through it so far.

Bottom Line: AI Investment

The decisions about investing in AI are complex and challenging. But by focusing on early-career individuals, nontraditional talent pools, and veterans, organizations have the opportunity to broaden their recruitment strategies and invest in the current workforce to meet the challenges of rapidly evolving technologies like AI.

CloudFabrix’s Data Fabric works with Cisco's Observability Platform to automate data ingestion pipelines and provide insights into inventory and analytics.

We’ve been talking about the autonomous enterprise for many years now. Especially in the wake of all the AI hype, the idea that an enterprise can be put on autopilot and run itself is still bandied about. But what is happening to enable that?

I recently sat down with Shailesh Manjrekar, CMO at CloudFabrix, about his company’s partnership with Cisco. Manjrekar described CloudFabrix as “the company that can enable your autonomous enterprise journey.” The partnership with Cisco focuses on a few core elements of that journey: automated data integration, enrichment, contextualization, and composability with its “Observability pipelines.”

CloudFabrix is certainly an interesting company. The management team has had several successful startups in the past, three of which were acquired by Cisco: Jahi Networks (2004), Pari Networks (2011), and Cloupia (2012).

CloudFabrix’s Data Fabric works with Cisco's Observability Platform, automating data ingestion pipelines and providing insights into inventory and analytics through its Observability Pipelines. Manjrekar gave me some background on the partnership, saying the pipelines complement Cisco’s Observability Platform.

“There are three elements to this data fabric,” he told me. “First, it allows us to connect with any data source. Second, we bring in all that data, then normalize it and enrich it with new real-time topology information automatically—all of this happens in the pipeline. Then, you can run correlation and clustering and all the insights. So, we process that data, convert it into OTEL, and then ingest it into the Cisco platform.”
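
To make that three-step flow concrete, here is a minimal Python sketch of the general pattern: connect to a source, normalize, enrich with topology, and emit an OTLP-shaped record. It is purely illustrative; the names (RawEvent, enrich_with_topology, to_otel_log) are hypothetical, and it only mimics the shape of an OpenTelemetry log record rather than using CloudFabrix’s or Cisco’s actual APIs.

```python
# Illustrative sketch only: the class and function names here are hypothetical,
# and the record merely mimics the shape of an OpenTelemetry (OTEL) log entry.
# It is not CloudFabrix's or Cisco's actual API.
from dataclasses import dataclass, field
from typing import Any

@dataclass
class RawEvent:
    source: str                       # e.g., "vsphere", "sap", "syslog"
    payload: dict[str, Any]
    attributes: dict[str, Any] = field(default_factory=dict)

def normalize(event: RawEvent) -> RawEvent:
    # Step 2a: map source-specific field names onto a common schema.
    payload = {k.lower().replace("-", "_"): v for k, v in event.payload.items()}
    return RawEvent(event.source, payload, event.attributes)

def enrich_with_topology(event: RawEvent, topology: dict[str, str]) -> RawEvent:
    # Step 2b: attach real-time topology context (here, a host -> cluster lookup).
    host = event.payload.get("host", "")
    event.attributes["cluster"] = topology.get(host, "unknown")
    return event

def to_otel_log(event: RawEvent) -> dict[str, Any]:
    # Step 3: shape the record like an OTLP log entry before downstream ingestion.
    return {
        "resource": {"service.name": event.source, **event.attributes},
        "body": event.payload,
    }

topology = {"esx-01": "prod-cluster-a"}
raw = RawEvent("vsphere", {"Host": "esx-01", "CPU-Ready": 4.2})
print(to_otel_log(enrich_with_topology(normalize(raw), topology)))
```

The value of doing the normalization and enrichment inside the pipeline, as Manjrekar describes, is that correlation and clustering downstream operate on one consistent, topology-aware schema regardless of the original source.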

The CloudFabrix approach

There are a number of key elements to the CloudFabrix approach. The company has several modules that work with Cisco’s observability platform, including:

  • CloudFabrix vSphere Observability
  • CloudFabrix SAP Observability
  • CloudFabrix Campus Analytics
  • CloudFabrix Asset/Intelligence
  • CloudFabrix Operational Intelligence
  • CloudFabrix Infrastructure Observability

These modules allow CloudFabrix to “plug in” to various applications and services to simplify deployment. All six have been released in the past six months, five of them in December and January. The speed at which they have been rolled out shows the openness and flexibility of the Cisco platform.

“Our time to market is accelerating, and that's because of all the automation we’ve been able to develop,” Manjrekar told me. “And it's not just us benefiting. We’ve become the trusted advisor for other partners who want to leverage our automation.”

Up close on a few of the modules

CloudFabrix sees itself as one of the building blocks for Cisco’s Observability Platform—not just another platform module developer. Looking at a few of the modules, it’s easy to see why.

  • The vSphere Observability Module works with Cisco’s FSO (full-stack observability), enabling visibility up and down the IT stack at a very granular level, including VMs, clusters, networks, and storage environments. One point worth noting: OTEL compliance helps avoid vendor lock-in.
  • The Asset Intelligence Module ingests telemetry from IT assets into FSO. Companies then get full-stack visibility into assets that can help them understand impacts on the network and infrastructure.
  • The SAP Observability module removes silos across the entire SAP landscape, including business context, applications, and infrastructure. This enables companies to contextualize data and gain real-time visibility to improve their SAP resilience.

A few final thoughts

Cisco’s technology has long been the de facto standard in networking and is the connective tissue for the business world. CloudFabrix helps customers get more out of the Cisco dollars already spent by enabling greater automation.

The experience of the CloudFabrix leadership team working with Cisco—including CEO Raju Datla, CPO Bhaskar Krishnamsetty, and CTO Raju Penmetsa—should offer partners and customers a level of confidence that the company has the know-how to take its plan from the drawing board to reality.

Over the past two years, I have seen a dramatic about-face regarding IT pros and their attitudes towards automation. Pre-pandemic, the notion of IT that runs itself was viewed skeptically as threatening people’s jobs. Today, IT pros are overwhelmed with complexity. Recently, I ran a survey asking 500 IT pros what they need from their infrastructure providers to support digital initiatives, and the top response was automation, supporting the vision of the “autonomous enterprise.” For those not yet on board: your company is only as agile as its infrastructure allows, and manual, CLI-driven operations will hold it back.

Historically, threat actors would work diligently to hack through next-generation firewalls, endpoint detection systems and other traditional security tools — something that takes significant work and is often for nothing, since perimeter security is excellent today.

For the bad guys, a better approach is to go through the users. Once credentials are stolen, the threat actor typically has access to all the systems the worker does, which is sometimes everything. I recently talked to a penetration tester who said he can typically compromise the company that hired him within an hour, and it’s always through the user channel.

Companies spend billions securing the different parts of the environment, from the network to the cloud to the endpoint, yet a critical area has been ignored: the browser. Today, the browser is the desktop, with people spending most of their day working in software-as-a-service-based applications. I’ve talked to many chief information officers who have made a concerted effort to move all their apps to browser-based ones because it makes hybrid work easier. Browsers provide a consistent experience regardless of where a worker is.

One of the challenges of securing a business where a large percentage of employees work from home, which describes most companies today, is that the user often winds up being the decision maker: should they click on a link, use a certain app, or respond to an email that may or may not come from the person it claims to? Security will not work if the user is the integration point for the technology.

That’s the reason behind Menlo Security Inc.’s Secure Enterprise Browser, introduced today. Menlo’s new solution takes the decision process out of the user’s hands, securing the browser directly and bringing enterprise-class security to it.

The cloud-delivered solution is powered by Menlo’s Secure Cloud Browser, which is currently used by millions of enterprise workers. The product offers end-to-end visibility and dynamic policy enforcement directly in browser sessions. This approach blocks phishing, malware and ransomware in real time. Among the new features:

Security Browser Posture Manager enables security professionals to perform browser configuration assessments and instant attack surface analyses. For some reason, browser security is often ignored, partly because there are so many updates from the browser providers. In its press release, Menlo cited that in 2023, 175 common vulnerabilities and exposures (CVEs) were deemed high or critical, and more than 125 new features were added to Chromium. The technology supports Google Chrome and Microsoft Edge, two of the most commonly used enterprise browsers.

The effort it would take a security team to track all changes across all corporate browsers manually would be overwhelming, which is why it’s often overlooked. The new feature from Menlo completely automates this process.

Browser Extension and Security Client bring zero-trust access to various devices, users and applications. The Menlo Browser Extension brings self-service capabilities and supports unmanaged devices. The Menlo Security Client provides cloud-based access to legacy applications for users who need support for Secure Shell Protocol and Remote Desktop Protocol, including apps such as Windows Terminal Server and Remote Desktop software. With this capability, workers can run virtually any needed application with Menlo.

Last-mile data protection is like data loss prevention on steroids. Data protection can be applied through the cloud. Menlo supports cut, copy and paste control, user input limits, watermarking and data masking. This capability helps companies combat data loss to apps such as ChatGPT, as it prevents unprotected data from being leaked through the browser.
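
As a rough illustration of the concept (and not how Menlo actually implements it), a last-mile control can be thought of as a filter that sits between user actions and the network, masking anything sensitive before it leaves the browser. The patterns and function below are invented for this sketch:

```python
# Toy illustration of "last-mile" masking; not Menlo's implementation.
# The patterns are deliberately simplistic and invented for this sketch.
import re

PATTERNS = {
    "credit_card": re.compile(r"\b\d(?:[ -]?\d){12,15}\b"),  # 13-16 digits
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def mask_sensitive(text: str) -> str:
    # Redact anything matching a sensitive pattern before it leaves the browser.
    for name, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {name}]", text)
    return text

pasted = "Card 4111 1111 1111 1111 and SSN 123-45-6789 in a support chat"
print(mask_sensitive(pasted))
# -> Card [REDACTED credit_card] and SSN [REDACTED ssn] in a support chat
```

The point of doing this at the browser, rather than at the network edge, is that the data is intercepted before it is ever typed or pasted into a ChatGPT prompt or web form.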

Many information technology organizations rely on virtual desktop infrastructure systems to enable users to securely work from anywhere, but VDI clients typically offer a poor user experience, and they do not have the same level of control as a secure browser. VDI offers basic security, but Menlo’s Secure Enterprise Browser adds exploit protection, zero trust and isolated cloud browsing in a way that’s nearly invisible to the user. With security, the less intrusive the better, as it limits user frustration.

Founded in 2012, Menlo Security has been around for a little over a decade, and many, myself included, had considered browser security a solution looking for a problem. Fast-forward to today, and a “perfect storm” has formed that should kickstart Menlo into another wave of growth.

SaaS applications have become the norm, hybrid work almost mandates a location-independent way of working, and the generative AI providers, all browser-based, have created a wave of users pushing unsanctioned company data through the web browser. Add in advancements in phishing and spam, and it’s easy to make the case that browser security should be a critical component of every organization’s cyber strategy.

When Tarun Loomba became chief executive of Mitel Networks Inc. in 2021, the company made an interesting pivot.

While the entire communications industry has been shifting to a software-as-a-service-based model, leveraging the public cloud, Mitel took a step back, looked at its strengths and the market, and decided to dedicate itself to serving customers via private or hybrid clouds.

Given the momentum around the public cloud, it was easy to view Mitel’s decision as head-scratching, but the reality is that customers want a conversation about business outcomes rather than delivery models, and the public cloud isn’t always the right answer. Making the right choice of cloud architecture depends on understanding the outcomes the business desires, which in turn requires asking the right questions.

After the big push during the pandemic to digitize everything, the first technical question derived from the business outcomes is whether rapid provisioning, or the ability to scale up and down quickly, is required. According to public sources, about 500 million office workers are thought to have this need. In addition, the addressable market for digital solutions among deskless workers is approximately 1.5 billion workers globally.

Businesses must choose the architecture that best meets their needs, and here one size does not fit all. Organizations with a mix of use cases, subject to a variety of privacy and security laws and requirements, will be better served by a mix of connected solutions and a hybrid implementation.

To help decision-makers, I have created a “cloud decision tree,” shown below. Following the flow maps business requirements to technical requirements, which guides you to the best cloud model. For example, one question asks, “Do you need your infrastructure to be private and not on shared infrastructure?” Many businesses in regulated industries, such as healthcare and finance, fall into this category.
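
As a rough illustration of how such a tree turns answers into a recommendation, here is a toy Python encoding. The questions and outcomes are paraphrased from the discussion above; this is not the actual decision tree:

```python
# Hypothetical encoding of a "cloud decision tree"; the questions and outcomes
# are paraphrased from the discussion above, not the actual published tree.
def recommend_cloud_model(
    needs_rapid_provisioning: bool,
    needs_dedicated_infrastructure: bool,
    has_data_sovereignty_rules: bool,
) -> str:
    if needs_dedicated_infrastructure or has_data_sovereignty_rules:
        # Regulated industries such as healthcare and finance typically land here.
        return "private or hybrid cloud"
    if needs_rapid_provisioning:
        return "public cloud (UCaaS/CCaaS)"
    return "on-premises or private cloud"

# A regulated business that wants fast provisioning but cannot share infrastructure:
print(recommend_cloud_model(True, True, True))  # -> private or hybrid cloud
```

The value of writing the tree down, even informally, is that it forces the business requirement (sovereignty, control, speed) to be stated before the delivery model is chosen, rather than the other way around.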

If one follows the line of questions, many businesses will end up with a hybrid environment that leverages the strengths of public and private clouds. For those looking for proof of this, look no further than the hyperscalers and large SaaS vendors, as they all have their private cloud stacks to complement their public offerings.

I recall being at AWS re:Invent when the company rolled out its private cloud solution. In the analyst Q&A, I asked then AWS CEO Andy Jassy why the company chose to do that after being the poster child for public clouds for decades, and he explained, “It’s what our customers have been asking for.”

Most of the tech industry has embraced hybrid cloud, but communications has moved more slowly. This shouldn’t be a huge surprise, as communications has never been an early adopter. It was late to shift to internet protocol, to the cloud, to cloud-native and now to hybrid cloud.

I’m certainly not criticizing the communications industry. There’s a reason unified communications and the contact center have been late adopters of the cloud, let alone hybrid multicloud. Phone systems, collaboration tools and contact centers are the most mission-critical of applications, and a down system means lost money. This is why communications teams often operate with an “If it ain’t broke, don’t fix it” mentality.

To date, cloud adoption in communications has followed a public cloud model, but that’s because organizations with more complex requirements stood pat until they could chart a path that could solve requirements such as data sovereignty, customization and control over security. This is why most of the early adopters of unified communications as a service and contact center as a service were small to midsized businesses.

Looking ahead, I do expect UCaaS and CCaaS growth to continue to be strong, but I also expect to see a rise in private and hybrid cloud deployments as the cloud delivery model, regardless of where it is located, delivers greater agility and feature velocity and meets the needs of hybrid work better. For Mitel, this puts it in a unique competitive position. Given the erosion and shift in focus by the other traditional UC vendors, combined with Mitel’s shift in strategy to deliver innovation to UC, it is arguable that Mitel has never been in a better position to capture UC share, versus UCaaS.

I want to be clear: I’m not saying on-prem UC and private cloud will outpace UCaaS/CCaaS growth, but I believe the decline will be slower than most people expect, as these technology transitions take a long time. Someone once asked me if I foresee a day when all communications are in the cloud, and my response was: maybe, but I’ll be long dead when that happens. Until then, Mitel can grow through innovation combined with the ability to take share from all the others that have exited the space and moved to SaaS only.

Mitel’s focus on business outcomes supported by a mix of solutions for its customers’ high-value use cases will be the key to driving momentum and differentiating from the pure cloud-only providers. This is part of why Mitel’s strategy is so interesting. It puts customer outcomes in front of the debate over the type of deployment method. Mitel has committed to its strategy of focusing on UC and has evolved it by acquiring the Unify assets from Atos.

I’m also interested in seeing what Mitel’s new Chief Marketing Officer Eric Hanson brings to the table. Hanson comes to Mitel from cyber vendor OneSpan Inc., and before that, communications provider Fuze Inc., which was acquired by 8×8 Inc. He is an experienced marketing leader with deep knowledge of the communications market and seems to be experienced in driving go-to-market and brand awareness in the technology sector. This is what Mitel needs at this point in its history.

The bet Mitel made almost five years ago was certainly contrarian to the rest of the industry. But Mitel would not have survived being another “me too” UCaaS/CCaaS provider because its products were too far behind the massive number of competitors. Instead, it focused on its strengths. Now the industry pendulum is swinging back, and Mitel has found itself in a good position to use this as a tailwind.

For the better part of the last decade, customer experience has ranked near or at the top of every information technology and business leader’s priority list.

That’s because 90% of companies compete on CX, up significantly from 26% five years ago. Another interesting supporting data point from my research: in 2023, 71% of millennials admitted to switching brands after a single bad experience.

With that in mind, it’s unsurprising that businesses have spent heavily on improving their online presence, as this is where most customer interactions start. However, a recent study found that the investment isn’t getting the payback companies seek.

The Contentsquare 2024 Digital Experience Benchmark Report, based on analyzing data from over 3,500 websites, is now out. It includes some interesting data nuggets on the challenges that digital leaders face. With the proliferation of online services and a nearly universal focus on user experiences, traffic is up everywhere and user experiences should be better than ever. Contentsquare’s new report shows that’s not the case.

Ad spend is up, traffic is down

According to the report, total web visits have fallen despite increased ad spend, leading to a higher cost per visit. And when it comes to user experiences, frustration remains a significant issue, affecting 40% of user sessions.

The report emphasizes the importance of optimizing digital experiences to combat frustration and maximize the value of each visit. Customers have so many choices today that if they experience any frustration during an online interaction, it’s fast and easy to jump to the next brand. Significant factors causing frustration include slow page loads and JavaScript errors.

Contentsquare says that understanding visitor intentions and tailoring the online journey is crucial for improving conversion rates. The report shows that experimenting with traffic sources, optimizing mobile experiences, and reducing friction in digital journeys all help. It’s important to note that just because something works today doesn’t mean it will in a day, a month, or a year. Consumers are fickle, so the online journey must be constantly revisited.

The report also provides insights on app traffic, engagement, and conversion rates, along with tips for boosting in-app conversions, including:

  • Get new users onboarded and ready to convert (fast)
  • Make app navigation a (flexible) pathway to conversion
  • Make your product detail pages your return-on-investment machines
  • Make your checkout a knockout
  • Maximize engagement, minimize frustration

The company fleshed out those points in a previous blog post.

As traffic shifts to mobile, conversions lag

Contentsquare’s report underscores the shift to mobile traffic, the importance of engagement, and the impact of traffic sources on outcomes. Although paid traffic is increasing, mobile conversion rates still trail desktop, and although bounce rates have stabilized, converting social traffic remains a struggle.

Understanding visitor intentions, tailoring online journeys, and experimenting with traffic sources are crucial for improving conversion rates. Focusing on optimizing mobile experiences, increasing engagement and improving conversion rates can drive better outcomes.

Key takeaways

As I looked at the report, I was left with several actions companies can take to ensure their digital efforts are successful, including:

  • Optimizing the digital experience: With rising marketing spend and declining traffic, it’s critical to optimize the digital experience to combat frustration and maximize the value of each visit.
  • Focusing on the mobile journey: As traffic shifts toward mobile, companies must experiment with predictive mobile journeys to better align with the shorter mobile visits.
  • Reducing friction: User frustration from slow-loading pages and poor responses to visitor interactions can significantly impact engagement and the number of visits. Addressing friction in digital journeys is key to improving the experience.
  • Experimenting with traffic sources and entry pages: Relying on the same channels and landing pages won’t lead to success. Rather, experimenting with the traffic mix and testing entry pages to start visitor journeys in the best possible spot is the way to go. Tailoring the journey to visitor intentions will almost certainly improve conversion rates.

Nearly every sports team plays in a facility carrying a corporate brand, such as Lumen Field, home of the Seattle Seahawks, or T-Mobile Park, the Mariners’ home field. Not the Seattle Kraken. The most recent team added to the National Hockey League plays in Climate Pledge Arena.

It’s the only facility named after a cause. The other differentiator for the former Key Arena is that it’s the only carbon-neutral sports facility in the world.

To get a better understanding of the naming of the arena and how it achieved net-zero status, I recently took a trip to the Emerald City and met with Rob Johnson, senior vice president of sustainability for the Seattle Kraken, as well as Kaan Yalkin, partnerships and engagement lead for The Climate Pledge at Amazon.com Inc.

Yalkin told me that Amazon co-founded the Climate Pledge in September 2019 and set aggressive goals. “We set out to be net zero by 2040, 10 years ahead of the Paris Agreement,” he said. “We decided we’d create a global community and a vision of other companies willing to take on that commitment with us.”

Amazon’s commitment soon translated into the naming of the home for the NHL expansion team, the Seattle Kraken, which started play in 2021. “Shortly after we co-founded the Climate Pledge, we announced that we’d purchased the naming rights to this building, and we would name it after the pledge to serve as a regular and long-lasting reminder of the urgency around the climate crisis,” he said.

Yalkin added that the Climate Pledge has grown into a community of more than 450 companies across 55 countries and 35 industries, all committed to being net zero by 2040.

The arena is already climate-neutral

As befits its name, The Climate Pledge Arena is ahead of the net zero schedule. It’s climate-neutral now. “It’s a phenomenal achievement,” Johnson said. “I’m proud of our ownership group for setting us on this course. Even in a city like Seattle, where sustainability is a key part of our values in the Pacific Northwest, you still need ownership who want to step up and take first-of-its-kind commitments in the sports and live entertainment space.”

Climate Pledge Arena is an all-electric facility that harvests rainwater from the roof in a 15,000-gallon cistern buried under the arena. The setup — the first “rain-to-ring Zamboni system” in the NHL — collects rain from a quarter of the arena’s roof, which it uses to resurface the ice for games and practices.

In addition, fans can get free public transit for every Climate Pledge Arena event, which makes getting there easier and also reduces the carbon footprint. “We’re a functionally zero-waste facility, which means that we’re diverting more than 90% of our waste away from landfills to recycling and composting,” he said. “There’s all kinds of really incredible activations that are happening here — from eliminating single-use plastics to sourcing our food from within 300 miles — that we’re proud of and think differentiates us from many other arenas in the world.”

Using tech to remove friction

In addition to the commitment to net zero waste, as you would expect from Amazon, the arena has some cutting-edge tech, including the “walkout technology” the company uses in its stores.

“This has removed much friction in the fan experience,” Yalkin said. “We all know how stressful it is to get something to eat or drink at a sporting event or a concert. Being able to use your palm or quickly insert your credit card to walk in and grab something and walk right out has completely changed the fan experience.”

Still more to do

Even after the initial success, they’re not resting on their laurels. Johnson told me that a couple of elements are crucial to the future of the arena.

“The first is implementing fully renewable resources to power the building,” he said. “Right now, we’re buying renewable energy credits to effectively net zero energy. But we’re working hard with our local utility, Seattle City Light, to stand up new solar panels on the eastern side of Washington, where it doesn’t rain much. Secondly, we still have a lot of work to do with our food and beverage partners to think about ways to be even more sustainable in the building. Right now, we’re sourcing 60% or more of our food ingredients from within 300 miles. While that’s a huge number, we’d love to get that number up to a gold standard in the industry of about 75%.”

Some final thoughts

I love hockey. It has always been a working-class sport where hard work was the only way to success. So the collaboration and hard work that led to a new hockey arena on the cutting edge of sustainability and technology has been great to witness.

Sports can play a role in popularizing sustainability and caring for our environment. I applaud the efforts of Amazon, the Kraken and the Climate Pledge Arena.

Cisco Systems Inc.’s fiscal second-quarter earnings report today was certainly a mix of good and bad.

The networking giant put up a solid quarter, with earnings per share of 87 cents, slightly ahead of its guide, and revenue of $12.8 billion, which was at the high end of its estimate from last quarter. Although these numbers were strong relative to the expectations set a quarter ago, it’s important to remember that Cisco issued a cautionary outlook, given a high degree of uncertainty from its customers.

Looking ahead to the third quarter, it appears the level of customer uncertainty has increased as Cisco applied a greater deal of conservatism to its numbers. The company guided to revenue of $12.1 billion to $12.3 billion, well behind the consensus estimate of $13.1 billion. The company predicts earnings of 84 to 86 cents, whereas the Street expected 92 cents.

To help manage costs, Cisco announced it is laying off about 5% of its workforce. Coming into the quarter, industry chatter suggested Cisco might cut as much as 15%, so 5% was significantly smaller than I expected. The layoffs will create a pretax charge of approximately $800 million and enable Cisco to rightsize its costs to the business.

On the earnings call, Cisco management explained (as they did last quarter) that customers had purchased products but not yet implemented them, creating the “air pocket” the company is experiencing. During the call, Chief Executive Chuck Robbins made this comment: “As we discussed last quarter and subsequently saw in other technology provider results, customers have been taking time since the start of our fiscal 2024 to deploy the elevated levels of products shipped to them in recent quarters, and this is taking longer than our initial expectations.”

This is consistent with the feedback I’ve heard from customers and channel partners. During the pandemic, the supply chain was very constrained, and it caused this effect where customers bought as much as they could when products became available. It’s like when you go to the grocery store hungry, you buy more food than you need, then you have a surplus of it at home, and then you buy much less in subsequent visits.

If one believes the commentary from Robbins, and again, it’s consistent with customer feedback I’ve heard, it’s just a matter of time before businesses spend with Cisco again. Is it one to two quarters, as the company predicts? I’m not sure of that, but Cisco has been using Meraki activations as a proxy for the broader company, and that business unit is tracking to the time frame the company guided to.

Regarding product categories, networking is the business unit that’s taken the biggest hit. Networking has been and continues to be Cisco’s largest product category, accounting for 55% of the company’s revenue. It came in at a shade over $7 billion, a 12% year-over-year decline. The company cited slowness in the enterprise, service provider, and cloud segments. This leads me to believe Cisco’s commercial (SMB) business stood strong in networking, which makes sense, as smaller companies can digest technology much faster because their environments are simpler.

Regarding other products, security was up 3% year-over-year, with Cisco calling out zero trust as a key growth driver. Over the past year, Cisco has completely retooled its security portfolio, and it’s starting to bear fruit. The RSA Conference is right around the corner, and I’m expecting to see more security innovation from Cisco. If it gets security right, it can move the needle more than any other product category, given the highly fragmented nature of that business.

Collaboration was also up 3%, with Cisco highlighting devices and calling. This is another area Cisco has heavily invested in. At last year’s WebexOne event, the company rolled out its revamped cloud contact center solution, and it has had its foot on the gas getting customer trials going.

Also, Cisco has a “seeding” program to get more devices in customers’ hands, particularly Microsoft customers, as Cisco devices can run Teams natively. Last year, the program slowed down as device availability was limited, but channel partners have told me the program is now flourishing, so look for more device growth in future quarters.

Both security and collaboration have had generative AI-based agents introduced in the past six months. This should act as a driver for continued sales as the products get easier to use. One challenge for Cisco is trying to differentiate its AI from competitors.

For example, while every UC vendor has background noise removal, only Cisco can remove everything but voices or the active speaker’s voice. I’ve tried many products in Starbucks, airports, and other noisy places, and Cisco has an edge. However, many customers seem unaware of the differences.

Observability revenue rose 16%, driven by growth in ThousandEyes. In my Cisco Live EMEA post, I discussed how Cisco has been integrating ThousandEyes into more Cisco products. For those unfamiliar with ThousandEyes, it is the industry’s best Internet monitoring tool, and by integrating it into other Cisco products, Cisco is giving its customers better visibility across the “stack” than its competitors. With digital experience management becoming critical for hybrid work, Observability should be a key growth driver and eventually pull through other Cisco products.

Looking ahead, on the call, the company reiterated its recent partnership with Nvidia Corp. Robbins mentioned, “We continue to capitalize on the multibillion-dollar AI infrastructure opportunity. This quarter, we announced the next phase in our partnership with Nvidia to offer enterprises simplified, cloud-based, and on-prem AI infrastructure.” He added, “We are clear beneficiaries of AI adoption.” Cisco then quantified the opportunity when it stated that “The majority of that $1 billion in orders (mentioned previously) will turn into revenue in our FY25.”

Cisco also gave an update on Splunk Inc. The company stated, based on the positive progress to date made on regulatory approvals, that Cisco expects the deal to close in the first or early second quarter of calendar 2024. I believe the initial timeline was October of 2024, meaning Cisco is well ahead of the plan.

It will be interesting to see how investors view Cisco post-Splunk, as the deal adds $4 billion in annual recurring revenue to Cisco. Right now, Cisco is viewed as a hardware company. Its EV/LTM multiple is just under three times, which is consistent with hardware vendors. Splunk, a software company, is trading in the eights, the low end for software companies.

Given Cisco has transitioned much of its business to software and Splunk adds a much bigger chunk, one would think valuation would go up. However, it must get through this air pocket first. More to come, I’m sure.

It’s an understatement to say that artificial intelligence has been on top of every information technology and business leader’s priority list since the release of ChatGPT. The easy-to-use generative AI engine gave everyone a glimpse of what’s possible when it comes to the infusion of AI into our lives, and one of the areas in which it has the most promise is cybersecurity.

Protecting an enterprise, particularly in this mobile, cloud and hybrid work era, has become nearly impossible without AI as corporate assets are scattered everywhere, and finding threats and breaches has become akin to finding a needle in a stack of needles. People can’t wade through the massive amounts of data available to security operations and make sense of it. Machines can — quickly and accurately.

Today, Infoblox Inc., the de facto standard in DDI (DNS, DHCP and IPAM), announced its AI-driven security operations solution, SOC Insights. Although there are many AI-infused security solutions today, the one from Infoblox is unique in that it uses DNS intelligence as part of its data set.

For those unfamiliar with DNS, it’s a network’s first touchpoint with the internet. When a user types www.siliconangle.com, a DNS system, such as Infoblox, converts that into an IP address, directing the computer to a particular internet-based server.

This can be particularly useful for cybersecurity because DNS systems have a good understanding of which systems are valid or malicious. A user might get a spam message directing them to a “lookalike domain” in which a character has been replaced with something similar: “siIiconangle.com” looks like the real URL, but the lowercase “l” has been replaced with a capital “I,” making it indistinguishable to the user. Infoblox, however, would know.
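
To see why a DNS-layer vantage point catches this, consider a toy “skeleton” check that canonicalizes confusable characters before comparing a domain against a trusted list. This is a simplification for illustration; real lookalike detection, Infoblox’s included, draws on threat feeds, registration data and far richer analysis:

```python
# Toy lookalike-domain check; real detection is far more sophisticated.
# The homoglyph table here is deliberately tiny and illustrative.
HOMOGLYPHS = {"I": "l", "0": "o", "1": "l", "rn": "m"}

def skeleton(domain: str) -> str:
    # Replace visually confusable characters before case-folding, so
    # "siIiconangle.com" collapses to "siliconangle.com".
    for lookalike, canonical in HOMOGLYPHS.items():
        domain = domain.replace(lookalike, canonical)
    return domain.lower()

def is_lookalike(candidate: str, trusted: set[str]) -> bool:
    # Suspicious if the domain isn't trusted itself but its skeleton
    # collides with a trusted domain.
    return candidate.lower() not in trusted and skeleton(candidate) in trusted

trusted = {"siliconangle.com"}
print(is_lookalike("siIiconangle.com", trusted))  # True: capital I posing as l
print(is_lookalike("siliconangle.com", trusted))  # False: the genuine domain
```

Because every connection begins with a DNS query, a check like this can run before the user’s browser ever reaches the fraudulent server, which is exactly the vantage point Infoblox is exploiting.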

This is why I’ve always felt DNS security is the biggest “no-brainer” in security. Executivegov.com found that 92% of malicious activity can be blocked by using DNS. This may seem like a high number, but every network connection starts with a DNS query, allowing Infoblox to see traffic well before firewalls, IPS, EDR and other core security tools.

The new AI-driven SOC Insights takes Infoblox’s massive amount of DNS information, analyzes and correlates it, and provides actionable insights and responses for SOC engineers to eliminate threats before they hit the enterprise network.

This can be of great value to the SOC engineer because it makes existing SOC tools better. One of the challenges with tools such as security orchestration, automation and response (SOAR) and security information and event management (SIEM) is that they are fed enormous amounts of raw, uncontextualized data, leading to many false positives.

SOC Insights can eliminate much of the “noise” before it hits the SOC tools, making them more efficient. This reduces what SOC engineers call “alert fatigue” and enables them to focus on the remaining alerts.
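
A sketch of that pre-filtering idea: suppress alerts whose domains a DNS-layer service has already classified, so fewer of them ever reach the SIEM. The verdicts and rules below are invented for illustration and are not Infoblox’s actual logic:

```python
# Sketch of DNS-informed alert pre-filtering; the reputation data and rules
# are invented for illustration, not Infoblox's actual logic.
from dataclasses import dataclass

@dataclass
class Alert:
    domain: str
    severity: str

DNS_REPUTATION = {                  # verdicts a DNS security layer might supply
    "known-bad.example": "block",   # already stopped at the DNS layer
    "payroll.example": "benign",
}

def prefilter(alerts: list[Alert]) -> list[Alert]:
    surviving = []
    for alert in alerts:
        verdict = DNS_REPUTATION.get(alert.domain)
        if verdict == "block":
            continue  # threat never reached the network; no analyst action needed
        if verdict == "benign" and alert.severity == "low":
            continue  # routine noise; drop before it hits the SIEM
        surviving.append(alert)
    return surviving

queue = [
    Alert("known-bad.example", "high"),
    Alert("payroll.example", "low"),
    Alert("unknown-host.example", "medium"),
]
print([a.domain for a in prefilter(queue)])  # only the unknown host remains
```

In this toy run, two of three alerts are suppressed before they ever reach an analyst, which is the mechanism behind the reduction in alert fatigue described above.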

The Voice of the SOC Analyst report, from security automation provider Tines, found that 60% of SOC analysts said their workloads are growing. I’m a little surprised this number isn’t higher, but I’m guessing the other 40% have hit a maximum level of work, are perpetually behind and can’t measure their growing workloads.

Another interesting data point comes from the Verizon Data Breach Investigations Report, which found that 55% of survey respondents said critical alerts are being missed weekly and even daily. Again, I believe this number to be higher, but most teams can’t quantify what’s being missed, so it just seems like noise upon more noise. In both cases, leveraging the DNS-based AI Security from Infoblox can bring those numbers down.

Infoblox has included several core capabilities in its BloxOne Threat Defense offering, including DNS failover checks, security policy optimization, DNS threat feed monitoring and high-risk web content detection. Customers can purchase the SOC Insights add-on for advanced capabilities such as phishing and malware detection, botnet discovery and lookalike domain monitoring. Given the importance of DNS, I was glad to see Infoblox embed core capabilities into its products, giving all customers the benefits of AI.

Looking ahead, it will be interesting to see how Infoblox leverages the rest of its suite of products. DNS data is powerful and provides an “early warning indicator,” but the company also has insights into DHCP, which hands out IP addresses to corporate devices, and IP address management, which helps companies manage their IP addresses. The correlation of this information can help find infected devices, quarantine endpoints and conduct other activities to minimize the “blast radius” of a breach.

Also, in a conversation, Craig Sanderson, vice president of product management for Infoblox, told me the company is investigating what it can do with generative AI. A natural language interface into Infoblox could let SOC engineers interact with DNS data in an entirely new way. One could ask which lookalike domains are likely and block them in advance, or bring other capabilities to bear so the SOC engineer can get in front of threat actors.

Hackers are using AI more than ever, and it’s critical that security teams fight fire with fire. I know many security teams are nervous about handing control over to machines. But AI is the only way to combat a world where AI is used for nefarious activities.

With the rise of AI and its massive impact on networking, Cisco has pivoted swiftly to enable customers to utilize its converged networks to run taxing AI apps.

Recently, Cisco held a Tech Talk focused on Cisco Silicon One and how the company believes you can converge a network without compromising security, performance, or manageability. The discussion also shed light on how Cisco is dealing with the rise of AI and the role of the network. Pierre Ferragu, who heads the global technology infrastructure research team at New Street Research, hosted the session. He was joined by Eyal Dagan, EVP of Cisco’s Common Hardware Group, and Rakesh Chopra, Cisco Fellow in the Common Hardware Group.

The need to do something different

Rakesh led the discussion by looking back to the start of Cisco Silicon One in late 2019. “We realized that we at Cisco—and everybody else in the industry—had been making the same mistakes over and over and over again,” he said. “If you approach a problem with the same organizational structure and technology, you will get the same outcome.”

This caused the company to shake things up. “We first created a new organization at Cisco that Eyal Dagan runs,” he said. “This new organization is focused on building one architecture in silicon that you can use across your network and also across different business models.”

This is a marked difference from the approach that caused Cisco many problems in the past. Much of Cisco’s portfolio came to the company through acquisitions, and to minimize customer disruption, Cisco left the acquired organizational structures in place. That created several challenges, many of them customer-facing: the company had too many product lines with different operating systems and management tools. Cisco now has a single architecture, which starts with Silicon One. This brings simplified management and feature consistency to customers while reducing Cisco’s R&D costs, as redundant development is no longer done.

Covering the full network and solution space

Cisco set its focus broadly so it could cover the full network and solution space, Rakesh added. The company invested many years and more than a billion dollars in Silicon One to enable the convergence of routing and switching, and it sees this as a fundamental industry shift. “We are the industry’s first truly scalable networking silicon architecture,” he said. “And that becomes very important when you start thinking about the role of silicon in AI networking.”

Helping customers build AI networks on Ethernet

Rakesh said he likes to think about AI in two buckets. He sees the first bucket—using artificial intelligence to improve Cisco products and services—as a large part of Cisco’s revenue. But, as important as that is, the company’s main focus is selling Cisco products that enable its customers to build AI networks.

Regarding web scalers, Eyal noted that two kinds of data center networks are critical to running AI apps. In addition to the front-end network we’re all familiar with, there’s a back-end network, typically InfiniBand, that has historically been used to connect storage clusters and the like. Cisco sees Ethernet as a solution here too—especially in the web-scale world. “We have customers who are deployed at scale with Ethernet-based AI networks,” he said. “And all of the others are actively trialing Ethernet AI.” Cisco also says it can provide significant efficiency boosts for power-hungry deployments, saving a megawatt of power for a single AI/ML cluster.

Choosing the right silicon model

Eyal came on to discuss costs and operating in the web-scale world. Three silicon models exist: merchant, ASIC, and fabless COT (customer-owned tooling). “Cisco used to be an ASIC house, and we still use, in some cases, merchant silicon,” he said. “But we moved in a big way in the last five or six years to a fabless COT.” At the same time, there has been a move from branded boxes to white boxes in the back end, for a simple reason: the bill of materials. White boxes, in the right environment, can be much more affordable.

Final thoughts

Since launching Silicon One, Cisco has worked hard to develop it into a viable alternative to the typical ASIC approach. With the rise of AI and its massive impact on networking, the company has pivoted swiftly to enable customers to run taxing AI apps on its converged networks. The presentation from Eyal and Rakesh was a helpful peek into their strategic thinking and a welcome respite from the AI vaporware we’re treated to daily.

Super Bowl LVIII on Sunday, with the Kansas City Chiefs beating the San Francisco 49ers, was a nail-biter. The two teams were the best the NFL had to offer, with good players and sound game plans, but the one thing they most had in common is that they were well-prepared.

Careful preparation was just as crucial behind the scenes. Unlike other events, the Super Bowl presents a logistical challenge for the hosting city. Even for Las Vegas, which is used to hosting big events, the Super Bowl is unique. For the NFL, it’s an ongoing race without a finish line.

Aaron Amendolia, deputy chief information officer for the NFL, and Rishma Khimji, CIO for Harry Reid International Airport, discussed their preparation for Super Bowl LVIII in a recent webinar hosted by Norman Rice, chief commercial officer at Extreme Networks.

The discussion was an intriguing view into how the organizations grappled with the technical challenges presented by the influx of roughly 400,000 fans into Las Vegas for the game. While Taylor Swift’s challenging journey from Japan to Allegiant Stadium grabbed the headlines, the NFL and the team at Harry Reid faced many more hurdles.

Khimji said her organization preps for the Super Bowl every year because people flock to Las Vegas whether or not the city is hosting the game. And her goal remains the same. “We want to make sure that there’s a seamless journey,” she said. “You get off the plane, you might have to go get some baggage, you get your baggage, you’ve got to get transportation to your hotel, casino resort or wherever you’re going.”

Las Vegas had a good dry run for the Super Bowl when F1 came to the Strip on Nov. 18, 2023. “F1 was a great experience for us,” she said. “It allowed us to really hone in on our planning to support larger numbers of passengers coming through the airport and the increased traffic.”

But even with F1 under its belt, Las Vegas was a new Super Bowl city, which brings unique, if familiar, challenges to the NFL, according to Amendolia.

“Every time we go to a new city, we have to work with what that city brings as far as talent, technology, and the landscape, the environment,” he said. “Vegas is different — everything is unique. And we have to try to make this feel cohesive and connect that fan experience with our OnePass app and how we engage fans digitally.”

But it doesn’t end there. Many other groups, including the teams themselves as well as the media and international and domestic broadcasters, have very specific tech needs. And the planning for the weekend starts well ahead — two years ahead.

“We used to have shorter planning cycles,” he said. “They became much longer because of all the technology and the interconnected nature that we need onsite — and the birth of cloud technology becoming the mainstay and making sure that our risk and resiliency plans are that strong throughout the event.”

For Extreme Networks Inc., Taylor Swift’s presence in the stadium was a familiar sight. Rice said the Eras Tour concerts, many of which took place in NFL venues, rivaled the Super Bowl for engagement and data consumption. So he asked Amendolia what happens when those two data-hungry worlds collide.

“I thought: This is my team’s opportunity to show Taylor Swift something, show her: ‘Look how much data the Super Bowl moves,’” Amendolia said. “So I feel like Taylor is going to get great WiFi service in the stadium, she’s going to have great cell service around the perimeter, and we’re going to show her how it’s done.”

For Khimji, preparing for the Super Bowl was all about resiliency. “We’ve got great backups, we’re utilizing the cloud where we need to, we’re utilizing segmented systems where we need to,” she said. “We’ve got a great segmented network with this beautiful mesh that Extreme has helped us develop here at the airport. That really allows us to have that resiliency and redundancy built in so that we are not down.”

Even with the Super Bowl only a few days away (the webinar was recorded before the game), Amendolia was already thinking about New Orleans, San Francisco and Los Angeles, the cities that will host the big game in the coming years.

“So, now you’re thinking — as a CIO, as a CTO — the technology is all going to change,” he said. “Four years out is a long time in technology, so we can’t specify specific technology metrics or goals. What we have to do is look at projecting capacity forward.”
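“Projecting capacity forward” is, at its core, compound-growth math: size the network for projected demand rather than for any specific technology available today. A minimal sketch, with made-up numbers:

```python
# Toy capacity projection: compound growth of per-event data demand.
# The baseline and growth rate are assumptions for illustration only.

baseline_tb = 30.0      # assumed data moved at this year's game, in TB
annual_growth = 0.25    # assumed 25% year-over-year growth in demand
years_out = 4           # the planning horizon Amendolia describes

projected_tb = baseline_tb * (1 + annual_growth) ** years_out
print(f"Plan for roughly {projected_tb:.0f} TB in {years_out} years")
# -> roughly 73 TB: the capacity target drives the eventual
#    technology choices, not the other way around.
```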

He added that the partnership with Extreme has been key to building for the future. From a network perspective, planning and great technology, backed by tight partnerships with the NFL and Harry Reid International Airport, made Extreme Networks a Super Bowl winner in its own right.

Fortinet Inc. this week announced its FortiGate Rugged 70G with 5G Dual Modem, the latest appliance from the company, built on its fifth-generation security processor.

The product has been hardened and is specifically designed to meet the demands of industrial environments. It simplifies the complex and costly infrastructure needed for high-performance networking in remote locations, while its rugged design ensures that it can withstand the harshest conditions.

FortiGate Rugged 70G combines several critical functions into a single compact device that can be deployed anywhere. The product has advanced security features, including secure boot, biometric verification and a next-generation firewall with artificial intelligence-powered FortiGuard services. It also supports local area network and wide area network security, including software-defined WAN, or SD-WAN, and zero-trust network access with 5G connectivity.

“What differentiates this in the market is the convergence of all these functions into a small form factor,” said Rami Rammaha, secure SD-WAN director of product marketing at Fortinet. “That will simplify things for customers when operating their networks. And with the dual active-active connections, customers can take advantage of the high-performance 5G and have high availability and redundancy.”

The appliance employs Fortinet’s fifth-generation security processing unit, or SP5, which outperforms standard off-the-shelf processors in encryption, firewall performance, SSL inspection and energy efficiency. According to Rammaha, the SP5 consumes 88% less power than comparable off-the-shelf processors, aligning with the growing demand for energy-efficient and sustainable solutions in industrial environments.
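To make the 88% figure concrete, here is some illustrative energy math. The 20 W baseline, the $0.12/kWh rate and the fleet size are assumptions, not Fortinet figures:

```python
# Illustrative energy math for the 88% power reduction Rammaha cites.
# Baseline wattage, electricity rate and fleet size are assumptions.

baseline_watts = 20.0   # assumed draw of a comparable off-the-shelf processor
reduction = 0.88        # figure cited by Fortinet
sp5_watts = baseline_watts * (1 - reduction)

sites = 28_000          # e.g., an ATM fleet like the one described below
kwh_per_year = (baseline_watts - sp5_watts) * 24 * 365 / 1_000
fleet_savings = kwh_per_year * sites * 0.12   # assumed $/kWh

print(f"SP5 draw: {sp5_watts:.1f} W vs {baseline_watts:.0f} W baseline")
print(f"Fleet-wide savings: ~${fleet_savings:,.0f}/year")
# Small per-device savings compound quickly across thousands of sites.
```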

Fortinet’s willingness to spin its own silicon has been a key differentiator since its inception. Cybersecurity is extremely processor-intensive, and by making its own chips, Fortinet can design silicon optimized specifically for the task at hand.

This is the same value proposition that a company such as Nvidia Corp. delivers with a graphics processing unit. Central processing units cannot meet the demands of gaming graphics, but a GPU is a chip designed specifically for graphics, hence the name. Similarly, Fortinet’s SP5 is custom designed for security. In addition to price/performance benefits, the SP5 allows Fortinet to deliver consistent features across all its products, so an engineer using any Fortinet product will have a similar experience.

Beyond industrial environments, Fortinet discovered an unexpected use case in banking, specifically for remote automated teller machines. One large U.S. bank needed a more efficient way to handle its extensive network of 28,000 ATMs in stadiums, malls and outdoor locations. The bank sought a solution to manage traffic across multiple data centers and provide reliable connectivity, even in places without wired internet access.

The bank switched from using multiple products from a competitor to a single FortiGate Rugged 70G, simplifying deployment and management. The inclusion of SD-WAN capabilities allowed the bank to precisely steer traffic and apply quality-of-service measures, improving the user experience. Another key requirement was dual 5G connectivity with active-active support, allowing seamless failover between two carriers, AT&T and Verizon, to ensure availability and minimize service interruptions.
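The active-active pattern is worth unpacking: both carrier links carry traffic simultaneously, and if one fails, flows converge on the survivor. The toy sketch below illustrates the general SD-WAN steering idea, not Fortinet's implementation; the link names and hashing scheme are assumptions:

```python
# Toy sketch of active-active dual-carrier steering with failover.
# Illustrates the general SD-WAN pattern, not Fortinet's implementation.

from dataclasses import dataclass

@dataclass
class Link:
    name: str
    healthy: bool = True

def pick_link(links: list[Link], flow_id: int) -> Link:
    """Hash flows across all healthy links (active-active);
    if one carrier fails, traffic converges on the survivor."""
    up = [link for link in links if link.healthy]
    if not up:
        raise RuntimeError("both carriers down")
    return up[flow_id % len(up)]

carriers = [Link("att_5g"), Link("verizon_5g")]
print(pick_link(carriers, 6).name)   # att_5g: flows split across both modems

carriers[0].healthy = False          # simulate an outage on one carrier
print(pick_link(carriers, 6).name)   # verizon_5g: traffic fails over seamlessly
```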

Finally, centralized, software-defined management provided easy orchestration of services across multiple sites and gave the bank complete visibility of its network. This comprehensive feature set, packed into a compact platform, not only met the bank’s immediate requirements but also “attracted interest from other banks,” said Rammaha.

Despite its long name, the FortiGate Rugged 70G with 5G Dual Modem is a versatile appliance for secure and reliable network connectivity in challenging environments. Its energy efficiency, durability and centralized control make it ideal for operational technology environments.
