Gigamon CTO Discusses Observability vs. Deep Observability

This syndicated post originally appeared at Zeus Kerravala – eWEEK.

Gigamon CTO explains the difference between observability and deep observability, and the impact on cybersecurity.

Today’s organizations typically run some combination of on-premises, private cloud, public cloud, and multi-cloud environments. The one common denominator across all of these environments is the network. Because the network ties everything together, it’s becoming increasingly important to use observability technology to understand its performance and behavior, based on the data it generates.

I interviewed Shehzad Merchant, chief technology officer at Gigamon, a provider of network visibility solutions that can monitor usage all the way up the infrastructure stack. We discussed the importance of observability for understanding the internal state of networks.

We also discussed the extension of observability using network intelligence, called deep observability. Merchant explained the difference between observability and deep observability, and how the two technologies complement each other. Highlights of the ZKast interview, done in conjunction with eWEEK eSPEAKS, are below.

  • Observability refers to monitoring workloads in the cloud using metrics, events, logs, and traces (MELT). Software development and IT operations (DevOps) teams rely on observability to deal with large workloads, using warehousing and querying techniques to extract intelligence from the data. However, observability looks at things from the inside out, which means it can’t sufficiently protect hybrid and multi-cloud environments from bad actors.
  • Bad actors use a range of tactics, techniques, and procedures (TTPs) in cyberattacks such as phishing and spear phishing (attacks targeted at specific individuals). Once they establish a presence and escalate privileges, they use other techniques to spread laterally across the infrastructure. For example, they can turn off endpoint logging for short periods of time, then turn it back on.
  • In the context of observability and TTPs, this creates gaps in the telemetry data precisely at the points where it’s needed most. Logs generated by endpoints and other systems also produce massive, often irrelevant, amounts of data. This significantly increases the cost of security infrastructure and the time to detection and triage, because queries take longer. Therefore, observability on its own isn’t reliable for security.
  • Deep observability, on the other hand, provides the outside-in, network-based perspective that observability lacks. It extracts network intelligence from traffic and fills in the gaps in security telemetry (a minimal sketch of this idea appears after this list). When it comes to incident response, the network-based approach delivers a much faster time to detection and time to resolution.
  • Traditional security tools can detect a breach on a specific device or endpoint, but they don’t have end-to-end knowledge. Endpoint detection and response (EDR) tools can identify a breach on an endpoint but not its root cause, so these tools aren’t effective for cyberattack response. Without the right telemetry data to triage, both the extent and the intent of a breach remain unknown. Deep observability uses network traffic data to find the breadcrumbs that lead to suspicious activity.
  • Zero trust is an architecture that addresses the security needs of data-driven cloud environments, which are growing increasingly complex. The foundation of zero trust is understanding the dependencies between applications and workloads, then applying policies to control access (see the second sketch after this list). That can’t be done without the network-based visibility provided by deep observability.
  • Observability and deep observability are complementary technologies. One is an extension of the other and both are essential. Securing the network requires knowledge of how its elements are performing. It’s important to have multiple layers as part of an observability strategy. This is driving more organizations toward gathering granular network data and tying it into their observability solutions.
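To make the telemetry-gap point from the interview more concrete, below is a minimal Python sketch of the concept. The data shapes, field names, and the 30-minute threshold are invented for illustration and do not reflect Gigamon’s products or APIs; the sketch only shows how flow records observed on the wire could cover a window in which an attacker has switched endpoint logging off.

```python
from datetime import datetime, timedelta

# Hypothetical, simplified telemetry for a single host. None of this reflects
# a real Gigamon API; it only illustrates the idea that network-derived
# records can cover the window where endpoint logging went dark.

endpoint_logs = [  # (timestamp, event) pairs reported by the host's own agent
    (datetime(2023, 5, 1, 10, 0), "process_start"),
    (datetime(2023, 5, 1, 10, 5), "login"),
    # endpoint logging silently disabled by the attacker here...
    (datetime(2023, 5, 1, 11, 40), "process_start"),
]

network_flows = [  # flows observed on the wire, independent of the host
    (datetime(2023, 5, 1, 10, 50), "10.0.0.5", "10.0.0.9", 445),    # SMB to a peer
    (datetime(2023, 5, 1, 11, 10), "10.0.0.5", "203.0.113.7", 443), # outbound HTTPS
]

GAP_THRESHOLD = timedelta(minutes=30)

def find_log_gaps(events, threshold=GAP_THRESHOLD):
    """Return (start, end) windows where the endpoint went quiet."""
    gaps = []
    for (t1, _), (t2, _) in zip(events, events[1:]):
        if t2 - t1 > threshold:
            gaps.append((t1, t2))
    return gaps

def flows_in_gap(flows, gap):
    """Network activity that occurred while endpoint logging was dark."""
    start, end = gap
    return [f for f in flows if start <= f[0] <= end]

for gap in find_log_gaps(endpoint_logs):
    suspicious = flows_in_gap(network_flows, gap)
    if suspicious:
        print(f"Endpoint quiet from {gap[0]} to {gap[1]}, "
              f"but {len(suspicious)} network flows were observed:")
        for ts, src, dst, port in suspicious:
            print(f"  {ts}  {src} -> {dst}:{port}")
```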
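Similarly, here is a minimal, hypothetical sketch of the zero-trust idea Merchant describes: a default-deny access check built on a dependency map between workloads, of the kind that could be learned from network traffic. The service names and allow-list structure are assumptions for illustration, not an actual implementation.

```python
# Hypothetical dependency map, e.g. learned from traffic between workloads.
# The service names and the allow-list policy are invented for illustration;
# this is not Gigamon's implementation of zero trust.

observed_dependencies = {
    "web-frontend": {"orders-api", "auth-service"},
    "orders-api": {"orders-db", "auth-service"},
}

def is_allowed(src: str, dst: str) -> bool:
    """Default-deny: a connection is allowed only if the dependency
    between src and dst has been explicitly established."""
    return dst in observed_dependencies.get(src, set())

# A request that matches a known dependency passes; anything else is denied
# and flagged for review.
print(is_allowed("web-frontend", "orders-api"))  # True
print(is_allowed("web-frontend", "orders-db"))   # False: possible lateral movement
```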

Author: Zeus Kerravala

Zeus Kerravala is the founder and principal analyst with ZK Research. Kerravala provides his clients with a mix of tactical advice for the current business climate and long-term strategic advice.