If you’re still using vendor-specific instrumentation SDKs in 2026, you’re building technical debt. Here’s why OpenTelemetry should be your default choice and how it enables the migrations we’ve been discussing.
What OpenTelemetry Actually Is
OTel is a vendor-neutral instrumentation standard. It provides:
- APIs for generating traces, metrics, and logs
- SDKs for major languages (Python, Java, Go, Node.js, .NET, etc.)
- Collectors for processing and routing telemetry data
- Semantic conventions for consistent attribute naming
Why This Matters for Datadog Migration
With OTel instrumentation:
Application → OTel SDK → OTel Collector → [Any Backend]
↓
Datadog | SigNoz | Grafana | OpenObserve
Switching backends is a collector config change, not a code change.
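To make that concrete, here is a minimal sketch of a Collector config with two exporters defined (the endpoint and the alternative backend name are placeholders, not a drop-in config):

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

exporters:
  datadog:
    api:
      key: ${env:DD_API_KEY}
  otlphttp:
    endpoint: https://otel-backend.example.internal:4318  # hypothetical alternative

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [datadog]  # swap to [otlphttp] to change backends; no app redeploy
```

Your applications keep speaking OTLP to the Collector either way; only the exporter list changes.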
The Migration Path
Phase 1: Instrument with OTel (2-4 weeks)
- Replace Datadog SDK calls with OTel equivalents
- Configure OTel Collector to export to Datadog
- Validate parity with existing dashboards
Phase 2: Run Parallel (4-8 weeks)
- Add second exporter to OTel Collector
- Send same data to Datadog AND alternative
- Compare visualization and alerting capabilities
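Phase 2's parallel export is typically just a second entry in the pipeline's exporter list. A sketch, assuming a Datadog exporter and a generic OTLP exporter are already defined in the config (names illustrative):

```yaml
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [datadog, otlphttp]  # fan out the same spans to both backends
```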
Phase 3: Cut Over (1-2 weeks)
- Disable Datadog exporter
- Update dashboards and alerts
- Decommission Datadog agents
The Hidden Benefit: Future-Proofing
OTel is now backed by every major cloud provider and observability vendor. Even Datadog supports OTel ingestion (though they charge a premium for it). By standardizing on OTel:
- No more vendor lock-in negotiations
- Best-of-breed backend selection
- Community-driven instrumentation libraries
- Consistent instrumentation across your entire stack
Anyone else using OTel as their migration strategy? What challenges have you encountered?
We’re about 6 weeks into the Phase 1 you described, Rachel. Here’s what we’ve learned:
The Good
- Auto-instrumentation is magic - For Java and Python services, the OTel auto-instrumentation agents captured 80% of what we needed without code changes.
- Collector is the right abstraction - Having a central point for routing, sampling, and transformation is cleaner than per-service configuration.
- Community libraries are solid - We found OTel instrumentation for every framework we use (Spring Boot, FastAPI, gRPC, Redis, Postgres).
The Challenges
- Metric naming translation - Datadog's custom metrics don't map 1:1 to OTel conventions. We spent a week on naming strategy.
- Dashboard recreation - Our Datadog dashboards use DD-specific query syntax. Grafana equivalents required manual recreation.
- Alert migration - This is the hardest part. Datadog monitors have complex conditions that don't directly translate.
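The metric-naming work mentioned above mostly boils down to mechanical normalization plus a review pass. A toy sketch of the mechanical part (the rules here are illustrative examples, not the official OTel semantic conventions):

```python
# Illustrative normalization of Datadog-style custom metric names toward
# OTel-style names: lowercase, dot-namespaced, no dangling separators.
# These rules are examples only; real mappings need a per-metric review.

def dd_to_otel_name(dd_name: str) -> str:
    """Normalize a Datadog custom metric name toward OTel conventions."""
    name = dd_name.lower().replace("-", "_")
    # Keep '.' for namespacing; '_' only within a single segment.
    segments = [seg.strip("_") for seg in name.split(".") if seg]
    return ".".join(segments)

print(dd_to_otel_name("MyApp.Checkout-Latency.p95_"))  # myapp.checkout_latency.p95
```

Running every existing metric name through a script like this surfaces the collisions and ambiguities worth a human decision, rather than debating each name from scratch.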
Team Impact
We created an “OTel Guild” - 2 engineers from each team who became the instrumentation experts. They handle:
- Reviewing instrumentation PRs
- Maintaining shared libraries
- Troubleshooting collector issues
My Advice
Start with new services. Migrating existing Datadog instrumentation is more work than instrumenting greenfield. We’re doing existing services incrementally as we touch them for other work.
From a compliance perspective, OTel adoption addresses several data governance concerns.
Data Portability Rights
GDPR Article 20 establishes data portability as a right. When your observability data is locked in a proprietary format:
- Audit trails are vendor-dependent
- Historical data migration becomes complex
- You’re dependent on vendor retention policies
OTel’s standardized format (OTLP) means your telemetry data is genuinely portable.
The Collector as a Security Control Point
I love that Rachel highlighted the Collector architecture. From a security standpoint, it’s a natural place to:
- Scrub PII - Remove sensitive attributes before data leaves your network
- Enforce sampling - Ensure you’re not over-collecting in regulated environments
- Audit data flows - Log what telemetry goes where
- Implement encryption - TLS termination and re-encryption for different backends
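The PII scrubbing point can be sketched with the Collector's attributes processor; the attribute keys below are hypothetical examples, not a vetted denylist:

```yaml
processors:
  attributes/scrub-pii:
    actions:
      - key: user.email                          # hypothetical attribute keys
        action: delete
      - key: http.request.header.authorization
        action: delete
      - key: user.id
        action: hash                             # keep correlation, drop the raw value

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [attributes/scrub-pii]
      exporters: [otlphttp]
```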
Compliance Documentation
When auditors ask “how do you ensure observability data doesn’t contain PII?” your answer is much stronger with:
- A documented scrubbing pipeline in the Collector
- Consistent attribute naming (OTel semantic conventions)
- Clear data flow diagrams
One Caution
OTel Collector configuration is YAML-based and can get complex. Treat it like infrastructure code:
- Version control all configs
- Automated testing for pipeline changes
- Staged rollouts to production
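One cheap automated test, sketched here over an already-parsed config dict (real configs are YAML; loading them is out of scope), is a pre-deploy check that every pipeline only references components that are actually defined:

```python
# Minimal sanity check for an OTel Collector config: every pipeline must
# reference receivers/processors/exporters that are defined at the top level.
# Operates on the parsed config as a dict; in CI you would load the YAML first.

def undefined_components(config: dict) -> list[str]:
    problems = []
    for name, pipeline in config.get("service", {}).get("pipelines", {}).items():
        for kind in ("receivers", "processors", "exporters"):
            defined = set(config.get(kind, {}))
            for ref in pipeline.get(kind, []):
                if ref not in defined:
                    problems.append(f"pipeline {name!r}: {kind[:-1]} {ref!r} not defined")
    return problems

cfg = {
    "receivers": {"otlp": {}},
    "exporters": {"datadog": {}},
    "service": {"pipelines": {
        "traces": {"receivers": ["otlp"], "exporters": ["datadog", "otlphttp"]},
    }},
}
print(undefined_components(cfg))  # flags the exporter that was never defined
```

Catching a dangling exporter reference before rollout is exactly the "silently dropping traces" failure mode this guards against.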
The last thing you want is a misconfigured collector silently dropping security-relevant traces.
This is exactly the strategic framing I use when discussing observability with the board.
The Vendor Independence Argument
When we signed our Datadog contract 3 years ago, we had limited leverage. They knew switching costs were high because:
- All instrumentation was DD-specific
- Dashboards couldn’t be exported
- Team knowledge was platform-specific
Now with OTel, our next vendor negotiation looks very different. “We can switch backends in a week” is a powerful statement.
The Total Cost Picture
| Approach | Year 1 | Year 2 | Year 3 | Flexibility |
|---|---|---|---|---|
| DD Native | $150K | $180K | $220K | Low |
| OTel + DD | $165K | $140K | $160K | Medium |
| OTel + OSS | $180K | $80K | $60K | High |
The OTel migration has upfront costs (Luis mentioned the guild, training, migration work), but the long-term economics are compelling.
What I Tell Other CTOs
- Start now, even if you stay with Datadog - OTel instrumentation gives you options
- Budget for the transition - Don’t treat it as “free” migration
- Measure switching cost reduction - Track how portable your observability actually is
The Linux Foundation Factor
OTel is governed by the CNCF, which itself operates under the Linux Foundation. That signals the long-term stability and vendor neutrality that matters for enterprise adoption. We’re not betting on a startup’s open-source project - this is industry-standard infrastructure now.