The Five Shades of Observability: Business, Operations, Pipelines, Models, and Data Quality

It’s tempting to dismiss observability as another overused buzzword. But this emerging discipline offers substantive, multi-faceted methods for enterprises to compete in a turbulent global economy.

As enterprises expand their businesses, digitize their operations, and embrace analytics, they create a world of complexity. Business expansion brings new products, customers, suppliers, and geographies. Digital transformation brings new users, applications, devices, and IT infrastructure. And analytics depends on accurate, timely data that describes the health of the business. Enterprises must somehow understand and orchestrate all these moving pieces.

Observability seeks to help. This emerging discipline offers processes and tools that observe the health of the business, IT systems, and data in real time. It enables enterprises to monitor many indicators, correlate them, generate alerts, identify issues, assess root causes, and remediate them. This means listening to lots of noise and extracting just the right signals.
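That noise-to-signal step can be illustrated with a minimal sketch: flag any metric reading that deviates sharply from its recent history. This is a generic rolling z-score check, not any vendor's algorithm; the function name and thresholds are illustrative.

```python
from statistics import mean, stdev

def detect_anomalies(values, window=10, threshold=3.0):
    """Return indices of points that deviate sharply from the trailing window.

    Hypothetical helper for illustration: compares each point to the mean
    and standard deviation of the preceding `window` points.
    """
    anomalies = []
    for i in range(window, len(values)):
        baseline = values[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(values[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# A steady metric with one spike: the spike is flagged, ordinary noise is not.
metrics = [100, 101, 99, 100, 102, 98, 100, 101, 99, 100, 500, 100, 101]
print(detect_anomalies(metrics))  # [10] — the spike at index 10
```

Commercial observability products replace this fixed threshold with learned baselines, seasonality models, and cross-metric correlation, but the underlying idea is the same.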

Observability comprises five sub-disciplines, each of which has its own product category: business, operations, data pipeline, model, and data quality. For another take, also check out "What Is Data Observability?" by my colleague Sanjeev Mohan.


  • Business observability monitors hundreds of thousands of business metrics or more—such as revenue, cost, and transactions—and identifies trends, correlations, and anomalies. Analysts and managers use business observability products to monitor the health of the business and identify issues that need immediate remediation. Business observability vendors include Outlier, Anodot, and Sisu. Business intelligence (BI) vendors such as Yellowfin, ThoughtSpot, and Qlik also address business observability.

As an example, Samsung uses business observability to monitor metrics for new releases of its Galaxy smartphone and adjust customer offers based on what it learns.

  • Operations observability studies the performance, availability, and utilization of applications and their underlying infrastructure, as well as the experience of application users. ITOps, DevOps, and CloudOps engineers use operations observability products to optimize the performance of applications and containers, as well as storage, compute, and network resources. Vendors include AppDynamics, Dynatrace, New Relic, Chronosphere, and Splunk. You can view operations observability as the new name for application performance management (APM).

The Japanese credit card payment processor Vesca uses operations observability to monitor and optimize the performance of its hybrid IT infrastructure.

  • Data pipeline observability studies the performance, availability, and utilization of the data pipelines that support analytics. Data engineers use data pipeline observability products to optimize the performance of data delivery, as well as the containers, storage, compute, and network resources that support pipelines. Vendors include Acceldata, Unravel, Databand, and Pepperdata.

The advertising technology firm PubMatic uses data pipeline observability to remove bottlenecks in data delivery.
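Finding such bottlenecks starts with instrumenting each pipeline stage and comparing its runtime against a latency budget. The sketch below shows the idea in generic Python; the stage names and budgets are hypothetical, not PubMatic's or any vendor's implementation.

```python
import time
from contextlib import contextmanager

class PipelineMonitor:
    """Illustrative stage timer: records stages that exceed a latency budget."""

    def __init__(self, budgets):
        self.budgets = budgets       # stage name -> max allowed seconds
        self.slow_stages = []        # (stage name, elapsed seconds)

    @contextmanager
    def stage(self, name):
        start = time.monotonic()
        try:
            yield
        finally:
            elapsed = time.monotonic() - start
            if elapsed > self.budgets.get(name, float("inf")):
                self.slow_stages.append((name, elapsed))

# Hypothetical budgets: extract may take 1s, transform must finish in 1ms.
monitor = PipelineMonitor({"extract": 1.0, "transform": 0.001})
with monitor.stage("extract"):
    pass                     # fast stage: stays within budget
with monitor.stage("transform"):
    time.sleep(0.01)         # slow stage: exceeds its 1ms budget
print([name for name, _ in monitor.slow_stages])  # ['transform']
```

Production tools add distributed tracing, resource metrics, and historical baselines on top of this basic pattern, so engineers can see not just that a stage is slow but why.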

  • Model observability assesses the performance, accuracy, bias, explainability, and legality of artificial intelligence/machine learning (AI/ML) models, both in production and during the model training phase. Data scientists, ML engineers, and governance officers use model observability products to ensure their AI/ML models deliver business results and comply with regulations such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). Model observability vendors include Acceldata. In addition, data science platform vendors such as DataRobot, Domino Data Lab, and Iguazio address model observability.

Steward Health Care, a hospital operator, uses model observability to track the accuracy of model outputs.

  • Data quality observability assesses the accuracy, completeness, consistency, and timeliness of data that feeds both operational and analytical applications. Data engineers and governance officers use data quality observability products to catalog, profile, validate, and track the lineage of various data assets. Vendors include Monte Carlo, BigEye, Great Expectations, Databand, and Anomalo. You can view data quality observability as the monitoring component of data operations (DataOps).

Auto Trader uses data quality observability to monitor and remediate data errors and other quality issues in its online car marketplace.
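At its simplest, data quality validation means applying rules for completeness, consistency, and timeliness to incoming records. A minimal sketch, assuming records arrive as Python dicts; the field names and the 24-hour freshness rule are illustrative, not any vendor's schema.

```python
from datetime import datetime, timedelta, timezone

def check_quality(records, required_fields, max_age=timedelta(hours=24)):
    """Return (record index, issue) pairs for completeness and timeliness failures."""
    now = datetime.now(timezone.utc)
    issues = []
    for i, rec in enumerate(records):
        # Completeness: every required field must be present and non-null.
        for field in required_fields:
            if rec.get(field) is None:
                issues.append((i, f"missing {field}"))
        # Timeliness: records older than max_age are considered stale.
        ts = rec.get("updated_at")
        if ts is not None and now - ts > max_age:
            issues.append((i, "stale record"))
    return issues

records = [
    {"id": 1, "price": 9.99, "updated_at": datetime.now(timezone.utc)},
    {"id": 2, "price": None,
     "updated_at": datetime.now(timezone.utc) - timedelta(days=2)},
]
issues = check_quality(records, ["id", "price"])
print(issues)  # [(1, 'missing price'), (1, 'stale record')]
```

Dedicated data quality observability products layer profiling, learned expectations, and lineage tracking on top of rule checks like these, so teams can trace a failed check back to the pipeline that produced the bad data.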

Should we expect these observability product categories to converge as they observe business metrics, operations, data pipelines, models, and data quality? To some degree, yes. Operations observability and data pipeline observability both study similar resources and metrics, so it makes sense for those vendors to extend into one another’s product segments. It also makes sense for enterprises to have one product that observes data pipelines, models, and data quality. Acceldata offers a product like this. 

But in the meantime, enterprises should distinguish these five flavors and map them to their specific requirements. Only then can they use observability to optimize their business, systems, and data—and compete in the 2020s.

Kevin Petrie

Kevin is the VP of Research at Eckerson Group, where he manages the research agenda and writes about topics such as data integration, data observability, machine learning, and cloud data...
