I share a lot of this sentiment, although I struggle more with the setup and maintenance than the diagnosis.
It's baffling to me that it can still take _so_much_work_ to set up a good baseline of observability (not to mention the time we spend on tweaking alerting). I recently spent an inordinate amount of time trying to make sense of our telemetry setup and fill in the gaps. It took weeks. We had data in many systems, many different instrumentation frameworks (all stepping on each other), noisy alerts, etc.
Part of my problem is that the ecosystem is big. There's too much to learn:
OpenTelemetry, OpenTracing, Zipkin, Micrometer, eBPF, auto-instrumentation, OTel SDK vs Datadog Agent, and on and on. I don't know, maybe I'm biased by the JVM-heavy systems I've been working in.
I worked for New Relic for years, and even inside an observability company, our own observability setup was still a lot of work to maintain, and even then traces were not heavily used.
I can definitely imagine having Claude debug an issue faster than I can type and click around dashboards and query UIs. That sounds fun.
I completely agree w/ your points about why observability sucks:
- Too much setup
- Too much maintenance
- Too steep of a learning curve
This isn't the whole picture, but it's a huge part of the picture. IMO, observability shouldn't be so complex that it warrants specialized experience; it should be something that any junior product engineer can do on their own.
> I can definitely imagine having Claude debug an issue faster than I can type and click around dashboards and query UIs. That sounds fun.
> Part of my problem is that the ecosystem is big. There's too much to learn: OpenTelemetry, OpenTracing, Zipkin, Micrometer, eBPF, auto-instrumentation, OTel SDK vs Datadog Agent, and on and on. I don't know, maybe I'm biased by the JVM-heavy systems I've been working in.
We've had success keeping things simple with VictoriaMetrics stack, and avoiding what we perceive as unnecessary complexity in some of the fancier tools/standards.
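For context, the simple path here is mostly just exposing Prometheus-style metrics and letting vmagent / VictoriaMetrics (or anything Prometheus-compatible) scrape them. A minimal sketch, assuming the standard prometheus_client package; the metric names and port are invented for illustration:

```python
# Expose a /metrics endpoint; vmagent or VictoriaMetrics scrapes it like
# any other Prometheus target. Metric names here are made up.
import random
import time
from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("checkout_requests_total", "Checkout requests handled")
LATENCY = Histogram("checkout_latency_seconds", "Checkout request latency")

def handle_request():
    REQUESTS.inc()
    with LATENCY.time():                          # records the duration of the block
        time.sleep(random.uniform(0.01, 0.05))    # stand-in for real work

if __name__ == "__main__":
    start_http_server(8000)                       # plaintext metrics on :8000/metrics
    while True:
        handle_request()
```

No collector pipeline, no agent, no SDK zoo; the scraper config decides where the data lands.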
First: love that more tools like Honeycomb (amazing) are popping up in the space. I agree with the post.
But IMO, statistics and probability can’t be replaced with tooling, just as software engineering can’t be replaced with no-code services for building applications.
If you need to profile a bug or troubleshoot complex systems (distributed systems, databases), you must do your math homework consistently as part of the job.
If you don’t comprehend the distribution of your data, the seasonality, the noise vs. the signal, how can you measure anything valuable? How can you ask the right questions?
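To make that concrete, here is a toy illustration (all numbers synthetic, invented purely for this comment) of why the shape of the distribution matters more than any single summary statistic:

```python
# Toy latency series: the mean looks acceptable while the tail is awful.
import random
import statistics

random.seed(1)
latencies = [random.gauss(50, 5) for _ in range(990)]          # healthy bulk
latencies += [random.uniform(2000, 4000) for _ in range(10)]   # a 1% tail

cuts = statistics.quantiles(latencies, n=100)
print(f"mean ~ {statistics.mean(latencies):.0f} ms")   # ~80 ms, looks fine
print(f"p50  ~ {cuts[49]:.0f} ms")                     # ~50 ms, looks fine
print(f"p99  ~ {cuts[98]:.0f} ms")                     # seconds: the requests that page you
```

Seasonality and noise are the same story: without knowing the baseline shape, any threshold is a guess.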
We need more automation. Less data, more insight. We're at the firehose stage, and nobody's got time for that. ML-based anomaly detection is not widespread and automated RCA barely exists. We'll have solved the problem when AI detects the problem and submits the bug fix before the engineers wake up.
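As a sketch of the kind of automation I mean: even a crude self-baselining check beats a hand-tuned static threshold. This is a toy (the window size, threshold, and metric values are all made up, and real systems need seasonality handling), but it shows the shape of it:

```python
# Flag a metric sample as anomalous relative to its own recent history,
# instead of relying on a static, hand-tuned alert threshold.
from collections import deque
from statistics import mean, stdev

class RollingAnomalyDetector:
    def __init__(self, window: int = 60, threshold: float = 4.0):
        self.history = deque(maxlen=window)   # recent samples only
        self.threshold = threshold            # how many std-devs counts as "weird"

    def observe(self, value: float) -> bool:
        """Return True if `value` deviates sharply from recent history."""
        anomalous = False
        if len(self.history) >= 10:
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                anomalous = True
        self.history.append(value)
        return anomalous

detector = RollingAnomalyDetector()
samples = [120, 118, 125, 119, 122] * 10 + [900]      # synthetic p99 latencies
print([v for v in samples if detector.observe(v)])    # -> [900]
```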
Mostly agree, although I think ease of instrumentation is getting pretty good. At least in the Python ecosystem, you set some env vars and run opentelemetry-bootstrap and it spits out a list of packages to add. Then you run your code with the OTel CLI wrapper and it just works.
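If you'd rather wire it up in code than rely on the wrapper, the manual SDK setup is also pretty small. A sketch (the service name, span names, and endpoint are placeholders; packages assumed are opentelemetry-sdk plus the OTLP gRPC exporter):

```python
# Hand-rolled equivalent of what the auto-instrumentation wrapper configures.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

provider = TracerProvider(resource=Resource.create({"service.name": "checkout"}))
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4317", insecure=True))
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout")
with tracer.start_as_current_span("charge-card") as span:
    span.set_attribute("order.id", "o_123")   # attach whatever context you have
```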
Datadog is equally as easy.
That alone gets you pretty good canned dashboards on vendors that have built in APM views.
The rest definitely rings true and I suspect some of it has come with the ease of software development. You need to know less about computer fundamentals and debugging with the proliferation of high level frameworks, codegen, and AI.
I also have noticed a trend that brings observability closer to development with IDE integration which I think is a good direction. Having the info "silo'd" in an opaque mega data store isn't useful.
Of course that sucks. Just enable full time-travel recording in production and then you can use a standard multi-program trace visualizer and time travel debugger to identify the exact execution down to the instruction and precisely identify root causes in the code.
Everything is then instrumented automatically and exhaustively analyzable using standard tools. At most you might need to add in some manual instrumentation to indicate semantic layers, but even that can frequently be done after the fact with automated search and annotation on the full (instruction-level) recording.
You're not the first person I've met that has articulated an idea like this. It sounds amazing. Do you have an idea about why this approach isn't broadly popular?
Cost and compliance are non-trivial for non-trivial applications. Universal instrumentation and recording creates a meaningful fixed cost for every transaction, and you must record ~every transaction; you can't sample & retain post-hoc. If you're processing many thousands of TPS on many thousands of nodes, that quickly adds up to a very significant aggregate cost even if the individual cost is small.
For compliance (or contractual agreements) there are limitations on data collection, retention, transfer, and access. I certainly don't want private keys, credentials, or payment instruments inadvertently retained. I don't want confidential material distributed out of band or in an uncontrolled manner (like your dev laptop). I probably don't even want employees to be able to _see_ "customer data." Which runs headlong into a bunch of challenges, because low-level trace/sampling/profiling tools have more or less open access to record and retain arbitrary bytes.
Edit: I'm a big fan of continuous and pervasive observability and tracing data. Enable and retain that at ~debug level and filter + join post-hoc as needed.
My skepticism above is about continuous profiling and recording (à la VTune/perf/eBPF), which is where "you" need to be cognizant of risks & costs.
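To put rough numbers on "quickly adds up": every figure below is invented purely for illustration, not from any real deployment, and it only counts storage.

```python
# Back-of-envelope: cost of recording ~every transaction, fleet-wide.
bytes_per_txn = 10 * 1024        # assume 10 KiB of recording per transaction
tps_per_node = 2_000             # transactions/second on one node
nodes = 5_000                    # fleet size
retention_days = 30
usd_per_gib_month = 0.02         # generic object-storage price point

daily_bytes = bytes_per_txn * tps_per_node * nodes * 86_400
retained_gib = daily_bytes * retention_days / 1024**3
monthly_storage_usd = retained_gib * usd_per_gib_month

print(f"ingested per day : {daily_bytes / 1024**5:.1f} PiB")
print(f"retained (30d)   : {retained_gib / 1024**2:.1f} PiB")
print(f"storage cost/mo  : ${monthly_storage_usd:,.0f}")
```

Even at a tiny per-transaction cost, the aggregate lands in the millions per month before you've paid for compute, egress, or query.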
> with the rise of [...] microservices, apps were becoming [...] too complex for any individual to fully understand.
But wasn't the idea of microservices that these services would be developed and deployed independently and owned by different teams? Building a single app out of multiple microservices and expecting a single individual to debug it sounds like holding it wrong, which is what then requires a distributed tracing solution to fix.
This doesn't mention a possible future. Honeycomb is a columnar database that ingests wide spans. Add richer context and pass it to an AI SRE tool. This seems like a data engineering issue.
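For anyone who hasn't seen the "wide events" style: each unit of work becomes one fat, structured record carrying as much context as you can attach, which is exactly the shape a columnar store (or an AI tool) wants to consume. A sketch, with every field name invented:

```python
# Roughly what a "wide span" looks like: one record per unit of work,
# with all the context you have at emit time. Field names are made up.
import json
import time

wide_span = {
    "timestamp": time.time(),
    "trace.trace_id": "4bf92f3577b34da6",
    "service.name": "checkout",
    "name": "POST /charge",
    "duration_ms": 187.4,
    "http.status_code": 502,
    "user.id": "u_8123",
    "user.plan": "enterprise",
    "cart.items": 7,
    "payment.provider": "stripe",
    "db.rows_examined": 14302,
    "feature_flags": ["new_tax_engine"],
    "deploy.sha": "a1b2c3d",
    "error": "upstream timeout",
}
print(json.dumps(wide_span, indent=2))   # ship this to the columnar store
```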
Hello! Yes, you are right - observability and APM have both been around for many decades, but the incarnations that most people are familiar with are the ones that emerged in the 2010s.
My intention wasn't for this post to be a comprehensive historical record. That would have taken many more words & would have put everyone to sleep. My goal was to unpack and analyze _modern observability_ - the version that we are all dealing w/ today. Good point though!
> Observability made us very good at producing signals, but only slightly better at what comes after: interpreting them, generating insights, and translating those insights into reliability.
I'm a data professional who's kind of SRE adjacent for a big corpo's infra arm and wow does this post ring true for me. I'm tempted to just say "well duh, producing telemetry was always the low hanging fruit, it's the 'generating insights' part that's truly hard", but I think that's too pithy. My more reflective take is that generating reliability from data lives in a weird hybrid space of domain knowledge and data management, and most orgs' headcount strategies don't account for this. SWEs pretend that data scientists are just SQL jockeys minutes from being replaced by an LLM agent; data scientists pretend that stats is the only "hard" thing and all domain knowledge can be learned with sufficient motivation and documentation. In reality I think both are equally hard, it's rare that you find someone who can do both, and that doing both is really what's required for true "observability".
At a high level I'd say there are three big areas where orgs (or at least my org) tend to fall short:
* extremely sound data engineering and org-wide normalization (to support correlating diverse signals with highly disparate sources during root-cause)
* telemetry that's truly capable of capturing the problem (i.e., it's not helpful to monitor disk usage if CPU is the bottleneck)
* true 'sleuths' who understand how to leverage the first two things to produce insights, and have the org-wide clout to get those insights turned into action
I think most orgs tend to pick two of these, and cheap out on the third, and the result is what you describe in your post. Maybe they have some rockstar engineers who understand how to overcome the data ecosystem shortcomings to produce a root-cause analysis, or maybe they pay through the nose for some telemetry/dashboard platform that they then hand over to contract workers who brute-force reliability through tons of work hours. Even when they do create dedicated reliability teams, it seems like they are more often than not hamstrung by not having any leverage with the people who actually build the product. And when everything is a distributed system it might actually be 5 or 6 teams who you have no leverage with, so even if you win over 1 or 2 critical POCs you're left with an incomplete patchwork of telemetry systems which meet the owning team's (teams') needs and nothing else.
All this to say that I think reliability is still ultimately an incentive problem. You can have the best observability tooling in the world, but if you don't have folks at every level of the org who (a) understand what 'reliable' concretely looks like for your product and (b) have the power to effect necessary changes, then you're going to get a lot of churn with little benefit.
This is a super insightful comment & there is a bunch that I want to respond to but I can't do it all neatly in one comment. Hahaha
I'll choose this point:
> reliability is still ultimately an incentive problem
This is a fascinating argument and it feels true.
Think about it. Why do companies give a shit about reliability at all? They only care b/c it impacts the bottom line. If the app is "reliable enough" such that customers aren't complaining and churning, it makes sense that the company would not make further investments in reliability.
This same logic is true at all levels of the organization, but the signal gets weaker as you go down the chain. A department cares about reliability b/c it impacts the bottom line of the org, but that signal (revenue) is not directly attributable to the department. This is even more true for a team, or an individual.
I think SLOs are, to some extent, a mechanism that is designed to mitigate this problem; they serve as stronger incentive signals for departments and teams.
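A concrete way to see how an SLO turns "reliability" into a signal a team can actually manage against; all numbers below are invented for illustration:

```python
# Error-budget arithmetic for a request-based SLO over a rolling window.
slo_target = 0.999          # 99.9% of requests succeed over the window
window_days = 30
total_reqs = 120_000_000    # requests served in the window (made up)
failed_reqs = 95_000        # observed failures (made up)

error_budget = (1 - slo_target) * total_reqs            # failures you're allowed
budget_spent = failed_reqs / error_budget               # fraction of budget burned
allowed_downtime_min = (1 - slo_target) * window_days * 24 * 60

print(f"error budget    : {error_budget:,.0f} failed requests")
print(f"budget consumed : {budget_spent:.0%}")
print(f"downtime budget : {allowed_downtime_min:.0f} minutes / {window_days} days")
```

Once the budget is a number, "can we ship this risky change?" becomes a conversation the team can have without anyone upstairs yelling about revenue.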
I'd +1 incentives, primarily P&L/revenue/customer acquisition/retention, with a small carve-out for "culture." I've worked places, and for people, where the culture was to "do the right thing" or to treat user experience as the objective, which influenced decisions like paying more (time and money) for better support. For the SDEs and line teams it wasn't about revenue or someone yelling at them; they just emulated the behavior they saw around them, which led to better observability/introspection/reliability/support. Which, of course, we'd like to believe leads to long-term success and $$$$.
I also like the call-out of SLOs (or OKRs or SMART goals or whatever) as a mechanism to broadcast your priorities and improve visibility. BUT I've also worked places where they didn't work, because the ultimate owner with a VP title didn't care about or understand them enough to buy in.
And of course there's the hazard of principal-agent problems: those selling, buying, building, and running are probably different teams and may not have any meaningful overlap in directly responsible individuals.
It's a long-running topic in a lot of areas. I remember back when data warehousing was the hot thing: collecting and cleaning all this data was supposed to be the key to insights that would unlock juicy profits. Basically didn't happen.
I would add that "extremely sound data engineering" is also necessary to make observability cost-effective. Some of these otel platforms can burn 10%-25% of your cloud budget to show you your logs. That is insane.
> I can definitely imagine having Claude debug an issue faster than I can type and click around dashboards and query UIs. That sounds fun.
Working on it :)
> IMO, statistics and probability can’t be replaced with tooling.
> If you don’t comprehend the distribution of your data, the seasonality, the noise vs. the signal, how can you measure anything valuable? How can you ask the right questions?
I don't see why the same isn't true for "vibe-fixers" and their data (telemetry).
I believe the author is in the former camp.
> We'll have solved the problem when AI detects the problem and submits the bug fix before the engineers wake up.
Working on it :)
2015 - Ben Sigelman (one of the Dapper folks) cofounds Lightstep
Huge fan of historical artifacts like Cantrill's ACM paper
We need better primitives and protocols for event-based communication. Logging configuration should be mostly about routing and storage.
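One way to read that: code should only emit structured events, and where they go (file, collector, vendor) should be routing configuration owned elsewhere. A minimal sketch of the emit side, with the logger name and fields invented for illustration:

```python
# Emit structured events to stdout; routing and storage are someone
# else's (deployment-time) configuration problem.
import json
import logging
import sys
import time

class JsonFormatter(logging.Formatter):
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "ts": time.time(),
            "level": record.levelname,
            "event": record.getMessage(),
            **getattr(record, "fields", {}),   # structured context, if provided
        })

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
log = logging.getLogger("checkout")
log.addHandler(handler)
log.setLevel(logging.INFO)

log.info("payment.charged", extra={"fields": {"order_id": "o_123", "amount_usd": 42.5}})
```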