At its Perform 2024 event today, Dynatrace unveiled Dynatrace OpenPipeline, a capability that makes it possible to apply analytics to multiple types of data sources in real time.
The company also unveiled a Data Observability offering that can be used to vet the quality and lineage of data exposed to the Davis artificial intelligence (AI) engine at the core of the Dynatrace observability platform. That vetting reduces the false positives that might otherwise generate an alert, and it helps reduce the volume of data that needs to be stored.
Finally, Dynatrace announced that, starting today, it is extending the reach of its observability platform to the large language models (LLMs) used to create generative AI platforms. That capability will, for example, make it simpler to monitor the consumption of the tokens used to provide access to these models.
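To illustrate the kind of signal involved, the sketch below records the token counts an LLM API typically reports as an observability metric using the OpenTelemetry Python SDK. It is a generic illustration, not Dynatrace's actual instrumentation; the metric name, attribute names and the call_llm() helper are hypothetical.

```python
# Minimal sketch: recording LLM token consumption as a telemetry metric.
# Uses the OpenTelemetry Python SDK; the metric/attribute names and the
# call_llm() helper are hypothetical, not Dynatrace's API.
from opentelemetry import metrics
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import (
    ConsoleMetricExporter,
    PeriodicExportingMetricReader,
)

# Export metrics to the console for illustration; a real deployment would
# point an OTLP exporter at an observability backend instead.
reader = PeriodicExportingMetricReader(ConsoleMetricExporter())
metrics.set_meter_provider(MeterProvider(metric_readers=[reader]))
meter = metrics.get_meter("llm.monitoring")

token_counter = meter.create_counter(
    "llm.tokens.consumed",
    unit="{token}",
    description="Tokens consumed per LLM request",
)

def call_llm(prompt: str) -> dict:
    # Stand-in for a real model call; most LLM APIs return usage counts
    # alongside the generated text.
    return {"text": "...", "usage": {"prompt_tokens": 42, "completion_tokens": 128}}

response = call_llm("Summarize today's incidents")
usage = response["usage"]
token_counter.add(
    usage["prompt_tokens"] + usage["completion_tokens"],
    attributes={"llm.model": "example-model", "llm.operation": "completion"},
)
```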
Steve Tack, senior vice president of product for Dynatrace, said Dynatrace OpenPipeline will streamline the collection of data in a way that enables organizations to apply observability more broadly by running stream processing algorithms against petabytes of data.
Scheduled to be available in 90 days, the Dynatrace OpenPipeline capability enables IT teams to ingest and route observability, security and business event data from any source and in any format, including unstructured data that is automatically converted into a usable format at the point of ingestion. That data can then be enriched to enable deeper analytics.
It also provides IT teams with more control over which data they analyze, store or exclude from analytics, which in turn helps reduce the total cost of observability, noted Tack.
Finally, it gives IT teams the ability to apply more customizable security and privacy controls to how that data is employed, he added.
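To make those ingest-time concepts concrete, the following sketch shows a pipeline step that parses an unstructured log line, enriches it with context, redacts a sensitive value and applies an exclusion rule before anything is stored. It is a generic illustration of the ideas described above, not Dynatrace OpenPipeline's actual configuration or API; the field names and rules are assumptions.

```python
# Generic sketch of ingest-time processing: parse an unstructured log line,
# enrich it with context, redact sensitive values and apply an exclusion
# rule before storage. Illustrative only; not Dynatrace OpenPipeline's API.
import re
from datetime import datetime, timezone

LOG_PATTERN = re.compile(r"(?P<level>\w+)\s+(?P<service>[\w-]+)\s+(?P<message>.+)")
EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def parse(raw_line: str) -> dict | None:
    """Convert an unstructured log line into a structured record."""
    match = LOG_PATTERN.match(raw_line)
    return match.groupdict() if match else None

def enrich(record: dict) -> dict:
    """Attach context that makes the record more useful for analytics."""
    record["ingested_at"] = datetime.now(timezone.utc).isoformat()
    record["environment"] = "production"  # assumed lookup, e.g. deployment metadata
    return record

def redact(record: dict) -> dict:
    """Privacy control: mask email addresses before the record leaves the pipeline."""
    record["message"] = EMAIL_PATTERN.sub("<redacted>", record["message"])
    return record

def should_store(record: dict) -> bool:
    """Exclusion rule: drop routine DEBUG noise to control storage cost."""
    return record["level"] != "DEBUG"

raw_lines = [
    "ERROR checkout-service payment gateway timeout for user jane@example.com",
    "DEBUG checkout-service cache warm-up complete",
]

for line in raw_lines:
    record = parse(line)
    if record and should_store(record):
        # A real pipeline would route the record to storage and analytics.
        print(redact(enrich(record)))
```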
Dynatrace is pursuing a multimodal approach to AI that spans predictive, causal and generative models. Collectively, that approach makes it simpler to surface the root cause of issues, flag anomalies that might disrupt services and streamline incident management by, for example, providing natural language summaries of events.
It’s not clear how broadly observability will be applied beyond the data collected to manage DevOps workflows and other IT processes, but it’s clear there is a high correlation between IT events and business outcomes as organizations become more dependent on software. As AI becomes more pervasively employed, it should become simpler to apply analytics to a much wider range of data types to surface the relationship between IT events and business processes. In effect, Dynatrace is making it simpler to apply data engineering best practices to collect, manage and analyze that data, noted Tack.
In the meantime, DevOps teams should revisit how their pipelines are currently constructed to make it simpler to capture all relevant data. As critical as telemetry data collected from the DevOps platform is to ensuring application development and delivery occur as fast as possible, that data is only one factor in a larger equation. The challenge and the opportunity now is determining how best to apply AI to correlate it all.