Observability is a market that has grown exponentially over the past few years. It started as a buzzword in late 2015 and early 2016, and since then we’ve seen dozens of insurgent companies rush in, eager to stake their claim in a new (albeit loosely defined) market. Incumbent companies also rebranded their wares, hoping to capture some of the momentum that comes from the new term.
Venture capital saw the market opportunity, too. By my conservative estimate, observability companies raised nearly $2 billion over roughly the last two years alone (I’m including some cybersecurity companies, which I’ll explain later).
In the seven or so years since its inception, the observability market has fragmented and specialized into categories such as application, data, network and edge observability. On the artificial intelligence and machine learning side, there is model observability and AIOps. Connecting it all are observability pipelines.
While more businesses than ever know about observability and have built it into their tech strategy, category awareness still has a long way to go. And though the dollars were flying fast and furious a couple of years ago, the economic cycle has turned against risk, and growth funding rounds are harder to come by. It’s time to step back and examine the key trends defining today’s observability market.
Staffing Shortages: A Blessing and a Curse for Observability
Many companies in the observability space, regardless of their subcategory, represent net-new work for IT and security teams. A new application performance monitoring (APM) platform may offer a marginal improvement around APM’s golden signals, but that marginal improvement is a substantial lift for the end users tasked with implementing and deploying the new platform. While software may get sold, overworked ITOps and SecOps teams may decide the effort isn’t worth the perceived ROI, leaving the software on the shelf.
On the other hand, observability products that reduce toil for operators will continue to outpace market growth rates. IT departments face acute shortages in skilled staff, leaving companies with little choice but to seek out software that automates high-effort, low-value tasks or otherwise offers a force multiplier to existing teams. Subcategories like AIOps and observability pipelines can address those needs.
Consolidation is Needed to Simplify the Market
Another looming hurdle for the nascent observability market is feature similarity between companies in a subcategory. Two subcategories where this is apparent are application and data observability. In the application observability space, several companies were founded in the wake of the availability of extended Berkeley Packet Filter (eBPF) technology. eBPF allows you to extend the capabilities of the Linux kernel at runtime, and a common use case is to trace applications running in user space. While eBPF is a great technology with many uses, there are simply too many companies with the same idea, the same go-to-market strategy and the same limited feature set.
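To make the eBPF use case concrete, here is a minimal sketch of the kind of user-space tracing these products build on, written as a bpftrace one-liner. This is illustrative only: it requires root and a Linux kernel with eBPF support, and the libc path shown is an assumption (common on Debian/Ubuntu x86_64) that varies by distribution.

```
# Sketch: attach a uprobe to libc's malloc and count calls per process
# name. Adjust the library path for your system.
sudo bpftrace -e 'uprobe:/lib/x86_64-linux-gnu/libc.so.6:malloc { @calls[comm] = count(); }'
```

Commercial application observability tools wrap this same primitive — kernel-attached probes on user-space code — in automated instrumentation and dashboards, which is why so many of their feature sets look alike.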
Data observability is another subcategory riddled with clones. Data observability started as a reaction to operational brittleness in the modern data stack, and there are only a few companies with marked differentiation here. I agree with Matt Turck’s assessment that data observability products will drift into the well-defined (but arguably less sexy) data quality category.
CFOs are Looking to Reduce Tool Sprawl and Cut Costs
The challenges of raising growth rounds have been well documented. VC funding represents the supply side of the financing puzzle, while IT budgets are the demand side. The demand side is getting squeezed as CFOs clamp down on budget growth. Deals are taking longer to close and often involve more parties, each eager to reject another team's budget request because it means more money for their own.
A common refrain from CFOs today is that any new spending must be more than offset by corresponding savings somewhere else. This puts products that represent net-new work with little marginal utility at risk. To survive in a difficult budget climate, observability vendors need a strong consolidation story (reducing tool sprawl, serving multiple teams) paired with a clear cost-savings story.
Cybersecurity remains a budget bright spot. According to a recent survey from IDC, 27% of respondents stated cybersecurity budgets were immune from budget cuts. With changing SEC rules highlighting the need for board-level cybersecurity expertise and the reporting of breaches, along with the new National Cybersecurity Strategy, the priority placed on preserving these budgets is clear.
Cybersecurity is Pure Greenfield for Observability
A few short years ago, SecOps teams, threat hunters and incident response teams didn't have enough data to create an accurate picture of their environment. Endpoint data was sparse and incomplete, cloud-based application deployments were still uncommon, the network was still confined to the office and the data center, and so on.
Today's SecOps teams, threat hunters and incident response teams have too much data to understand their environment. There are too many alerts and too little context, making separating the signal from the noise a difficult, if not impossible, proposition. From a compliance perspective, much of this data must also be retained for longer periods, driving up storage costs.
This is where the principles of observability benefit cybersecurity teams. Much like site reliability engineers need to trace an error well after it occurred, cybersecurity professionals need to trace a breach or hunt for threats across huge volumes of data. They need robust exploration tools, rational data storage and routing options, and, most of all, the ability to do all of this while staying within budget. This is where observability and cybersecurity converge.
Seven years young, observability remains a nascent market despite its rapid growth. Though the market will face growing pains, its opportunities and upside are bright.