The numbers from the latest CNCF survey are hard to ignore: OpenTelemetry has reached a 95% adoption rate for new cloud-native projects in 2026. Not “awareness” — actual adoption. And 89% of production users now say OTel compliance is “very important” or “critical” when evaluating observability vendors. We’ve crossed a tipping point, and the implications for how we think about observability architecture are significant.
But as someone who just completed a major observability migration using OTel as the abstraction layer, I want to give you the full picture — the genuine wins, the hidden traps, and the places where the promise still falls short of reality.
The Promise: Vendor-Agnostic Telemetry
The core value proposition of OpenTelemetry is elegant: standardize how applications generate telemetry data (traces, metrics, logs), and decouple that from where the data goes. Instrument your code once using OTel SDKs, and you can send that data to Datadog, Grafana Cloud, Elastic, Honeycomb, Lightstep, or any other backend. If your vendor doubles their pricing — and let’s be honest, Datadog’s pricing has made this a very real concern — you can switch backends without touching your application code.
In theory, this eliminates the observability vendor lock-in that has been a pain point for infrastructure teams for the better part of a decade. In practice? It’s complicated.
The Reality: Soft Lock-In Is the New Hard Lock-In
Every major observability vendor now advertises “full OTel support.” But here’s what they don’t highlight: they’re all adding proprietary extensions on top of OTel that create soft lock-in. Datadog’s “Enhanced OTel” adds custom attributes and semantic conventions that only render properly in the Datadog UI. Grafana’s OTel integration works best with their custom resource detectors. Elastic’s APM agent wraps OTel with proprietary correlation logic.
None of this violates the OTel spec — it extends it. But the practical effect is that teams who use these “enhanced” features find themselves just as locked in as they were before, because their telemetry only makes full sense in one vendor’s UI.
The lesson: OTel gives you portability of the base telemetry layer, but you have to be disciplined about staying within the standard spec and avoiding vendor-specific extensions if you actually want to maintain the ability to switch.
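One practical way to enforce that discipline is to strip vendor-specific attributes in the Collector before export, so nothing proprietary ever reaches the backend. Here's a minimal sketch using the Collector's attributes processor — the attribute keys are hypothetical stand-ins, not real vendor keys; substitute whatever your "enhanced" integrations actually inject:

```yaml
processors:
  attributes/strip-vendor:
    actions:
      # Hypothetical examples of vendor-injected keys to drop
      - key: vendor.custom_correlation_id
        action: delete
      - key: vendor.ui_hint
        action: delete

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [attributes/strip-vendor, batch]
      exporters: [otlp]
```

Running this in the pipeline keeps your stored telemetry within the standard spec, which is exactly what preserves the option to switch later.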
Our Migration Story
My team migrated from Datadog to Grafana Cloud over the past quarter, using OTel as the abstraction layer. Here’s what that looked like:
The good: Our services were already instrumented with OTel SDKs (we made that investment 18 months ago specifically for this flexibility). Switching the backend meant changing the OTel Collector configuration: swapping the Datadog exporter for an OTLP exporter pointed at Grafana Cloud. For 80% of our telemetry pipeline, this was a configuration change. The migration took 3 weeks instead of the 6 months we estimated it would take without OTel.
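Concretely, the Collector change looked roughly like this — endpoints and credential names below are illustrative placeholders, not our production values:

```yaml
exporters:
  # Before: the Datadog exporter from opentelemetry-collector-contrib
  datadog:
    api:
      key: ${env:DD_API_KEY}

  # After: plain OTLP over HTTP to Grafana Cloud
  otlphttp/grafana:
    endpoint: https://otlp-gateway-prod-us-central-0.grafana.net/otlp
    headers:
      Authorization: Basic ${env:GRAFANA_CLOUD_TOKEN}

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp/grafana]  # was [datadog]
```

Because the services themselves only ever spoke OTLP to the Collector, nothing in application code changed.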
The painful: The remaining 20% was brutal. We had Datadog-specific custom metrics with semantics that didn’t translate cleanly. Dashboard queries written in Datadog’s proprietary query language had to be rewritten for Grafana’s PromQL/LogQL. Alert definitions couldn’t be ported. All the operational knowledge embedded in “how to investigate X in Datadog” had to be rebuilt for Grafana.
The cost savings: Moving from Datadog to Grafana Cloud reduced our observability bill by approximately 60%. At our scale, that’s a substantial annual savings. I’ll be honest — cost was the primary driver for this migration, not philosophical commitment to open standards. OTel made the migration feasible; cost made it necessary.
Where OTel Still Falls Short
Despite the impressive adoption numbers, there are real gaps:
Logs: OTel’s logging support is still the weakest leg of the observability triad. The log data model was only stabilized recently, and SDK support varies significantly across languages. If you’re a Python or Java shop, you’re in reasonable shape. If you’re running Go or Rust services, the logging SDK maturity is still catching up. Most teams I talk to still use a separate logging pipeline (Fluentd, Vector) alongside OTel for traces and metrics.
Profiling: Continuous profiling support in OTel is in early stages. The profiling signal was accepted as an OTel signal type last year, but production-grade SDK support is limited. If profiling is core to your observability strategy, you’re still largely dependent on vendor-specific agents.
Spec velocity: The OTel spec moves slowly by design — stability is a feature, not a bug. But this means emerging observability patterns (eBPF-based instrumentation, AI workload telemetry, LLM token tracking) aren’t covered by the spec yet, and vendors are filling the gap with proprietary solutions.
AI-Powered Observability: The Next Frontier
One area where things are moving fast is AI-driven root cause analysis layered on top of OTel data. Grafana launched an AI assistant that correlates traces, metrics, and logs to suggest root causes. Elastic’s AI-powered anomaly detection works directly with OTel-formatted data. Datadog’s Watchdog has been doing this for a while but now accepts OTel-native inputs.
The interesting dynamic: OTel standardization is making AI-powered observability more viable because the data is structured consistently regardless of source. AI models trained on OTel-format traces can generalize across different services and even different organizations in ways that vendor-specific data formats couldn’t support.
The Real Question
So here’s what I want to discuss: has anyone successfully gone multi-vendor with OTel? I mean actually sending the same telemetry to multiple backends simultaneously — using Grafana for dashboarding, Honeycomb for trace exploration, and a data lake for long-term analytics?
The OTel Collector makes this technically possible — a pipeline can list multiple exporters, and the Collector fans each batch out to all of them — but I’m curious about the operational reality. Does anyone actually run multi-vendor, or does everyone end up picking one backend and sticking with it, with OTel just serving as insurance against future vendor changes?
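For concreteness, the multi-backend setup I'm describing would look something like this in Collector config — again, endpoints and key names are placeholders, and the file exporter standing in for "data lake ingestion" is just one option:

```yaml
exporters:
  otlphttp/grafana:
    endpoint: https://otlp-gateway-prod-us-central-0.grafana.net/otlp
  otlp/honeycomb:
    endpoint: api.honeycomb.io:443
    headers:
      x-honeycomb-team: ${env:HONEYCOMB_API_KEY}
  file/datalake:
    path: /var/otel/traces.json  # picked up by a batch job into the lake

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      # Every exporter listed here receives a copy of each batch
      exporters: [otlphttp/grafana, otlp/honeycomb, file/datalake]
```

The config is the easy part; what I'm asking about is the day-two cost of running it: per-backend egress, keeping sampling consistent, and debugging when one exporter backs up.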
I’d also love to hear from anyone who’s migrated to Datadog using OTel. The migration stories I hear are almost always away from Datadog — curious if the traffic goes both directions.