DevOps metrics and DORA benchmarks for 2025
The DevOps Research and Assessment (DORA) metrics have become the industry standard for measuring software delivery performance. According to the 2024 State of DevOps Report, elite performers now deploy 973x more frequently than low performers, demonstrating that world-class delivery is achievable and measurable.
The four key DORA metrics
The four key metrics are deployment frequency, lead time for changes, time to restore service, and change failure rate. According to DORA's research, these four metrics are predictive of both organizational performance and employee wellbeing.
Understanding performance levels
DORA Performance Levels (2024 Benchmarks)
| Metric | Elite | High | Medium | Low |
|---|---|---|---|---|
| Deployment frequency | On demand (multiple deploys per day) | Daily to weekly | Weekly to monthly | Monthly to quarterly |
| Lead time for changes | Less than one day | One day to one week | One week to one month | More than one month |
| Time to restore service | Less than one hour | Less than one day | One day to one week | More than one week |
| Change failure rate | Lowest | Low | Moderate | Highest |
Key Insight: High performers excel across ALL four metrics—speed and stability are complementary, not competing goals. Teams that move fast also have lower failure rates.
Deployment frequency benchmarks
- Elite (multiple deploys per day): on-demand deployments whenever code is ready, with continuous delivery to production.
- High (daily to weekly): a regular cadence of production deployments, typically once per day.
- Medium (weekly to monthly): sprint-based releases or bi-weekly deployment cycles.
- Low (monthly to quarterly): infrequent releases with large batches of changes.
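To place your own team on these tiers, count production deployments over a recent window and normalize to deploys per week. The Python sketch below assumes you can export a list of deployment timestamps from your CI/CD tooling; the tier cut-offs simply mirror the bands above and are not an official DORA formula.

```python
from datetime import datetime, timedelta

def deploys_per_week(deploy_times: list[datetime]) -> float:
    """Average production deployments per week over the observed window."""
    if len(deploy_times) < 2:
        return float(len(deploy_times))
    span = max(deploy_times) - min(deploy_times)
    weeks = max(span / timedelta(weeks=1), 1 / 7)  # guard against a very short window
    return len(deploy_times) / weeks

def frequency_tier(per_week: float) -> str:
    """Map deploys/week onto the tiers described above (approximate cut-offs)."""
    if per_week > 7:
        return "Elite"   # multiple deploys per day
    if per_week >= 1:
        return "High"    # daily to weekly
    if per_week >= 0.25:
        return "Medium"  # weekly to monthly
    return "Low"         # monthly or less often

# Example with made-up timestamps: one deploy every 8 hours for 10 days.
deploys = [datetime(2025, 1, 1, 8, 0) + timedelta(hours=8 * i) for i in range(30)]
rate = deploys_per_week(deploys)
print(f"{rate:.1f} deploys/week -> {frequency_tier(rate)}")
```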
Lead time for changes
[Chart: Lead time for changes by performance level, in hours]
Lead time runs from commit to code successfully running in production, passing through every stage of the delivery pipeline:
- Commit: developer pushes code to version control.
- Build: automated CI builds and tests the change.
- Test: automated testing validates functionality.
- Review: code review and approval process.
- Deploy: automated deployment to production.
- Monitor: observability confirms the deployment succeeded.
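Because lead time is the elapsed time from commit to running in production, the most useful view is how long a change spends between each of these stages. The sketch below assumes your pipeline can export one timestamp per stage per change; the stage names and data shape are illustrative rather than any particular tool's API.

```python
from datetime import datetime

# Hypothetical export: one timestamp per pipeline stage for a single change.
STAGES = ["commit", "build", "test", "review", "deploy", "monitor"]

change = {
    "commit":  datetime(2025, 3, 3, 9, 0),
    "build":   datetime(2025, 3, 3, 9, 12),
    "test":    datetime(2025, 3, 3, 9, 40),
    "review":  datetime(2025, 3, 3, 15, 5),
    "deploy":  datetime(2025, 3, 3, 15, 30),
    "monitor": datetime(2025, 3, 3, 15, 45),
}

def stage_durations(timestamps: dict[str, datetime]) -> dict[str, float]:
    """Hours spent between consecutive stages; shows where changes get stuck."""
    durations = {}
    for earlier, later in zip(STAGES, STAGES[1:]):
        delta = timestamps[later] - timestamps[earlier]
        durations[f"{earlier} -> {later}"] = round(delta.total_seconds() / 3600, 2)
    return durations

lead_time = (change["deploy"] - change["commit"]).total_seconds() / 3600
print(stage_durations(change))
print(f"Lead time for this change: {lead_time:.1f} hours")
```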
Time to restore service (MTTR)
[Chart: Time to restore service distribution across the industry]
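Time to restore measures how quickly service is recovered once a user-impacting failure begins. If your incident tracker can export start and resolution times (the field names below are hypothetical), the calculation itself is small:

```python
from datetime import datetime
from statistics import median

# Hypothetical incident log: when user impact started and when service was restored.
incidents = [
    {"started": datetime(2025, 2, 1, 10, 0),  "restored": datetime(2025, 2, 1, 10, 40)},
    {"started": datetime(2025, 2, 9, 22, 15), "restored": datetime(2025, 2, 10, 1, 0)},
    {"started": datetime(2025, 2, 20, 14, 5), "restored": datetime(2025, 2, 20, 14, 35)},
]

restore_hours = [
    (i["restored"] - i["started"]).total_seconds() / 3600 for i in incidents
]

# Median is often more representative than the mean: one long outage skews the average.
print(f"Median time to restore: {median(restore_hours):.2f} h")
print(f"Mean time to restore:   {sum(restore_hours) / len(restore_hours):.2f} h")
```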
Change failure rate
[Chart: Change failure rate trends by performance level]
Counter-intuitive Truth: Teams that deploy more frequently have LOWER change failure rates. Smaller, more frequent changes are easier to test, review, and roll back.
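Change failure rate is the share of production deployments that degrade service and require remediation such as a rollback or hotfix. A minimal sketch, assuming each deployment record carries a flag for whether remediation was needed (the schema is illustrative):

```python
# Hypothetical deployment records exported from a CI/CD or incident tool.
deployments = [
    {"id": "d-101", "caused_failure": False},
    {"id": "d-102", "caused_failure": True},   # needed a rollback
    {"id": "d-103", "caused_failure": False},
    {"id": "d-104", "caused_failure": False},
]

failures = sum(1 for d in deployments if d["caused_failure"])
change_failure_rate = 100 * failures / len(deployments)
print(f"Change failure rate: {change_failure_rate:.1f}%")  # 25.0% in this toy data
```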
Capabilities that drive performance
- Trunk-Based Development: short-lived branches (less than one day) merged frequently to main; enables continuous integration.
- Continuous Integration: automated builds and tests run on every commit, providing fast feedback loops.
- Continuous Delivery: code is always in a deployable state, with one-click deployment to any environment.
- Automated Testing: comprehensive test suites run in CI, giving high confidence in changes.
- Infrastructure as Code: version-controlled infrastructure and reproducible environments.
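Some of these capabilities can be backed by lightweight guardrails. For example, a trunk-based development check might flag remote branches whose last commit is more than a day old; the sketch below shells out to git and uses the one-day guidance above as its threshold (the excluded branch names and repo path are assumptions).

```python
import subprocess
import time

MAX_AGE_HOURS = 24  # trunk-based guidance: branches should live less than a day

def stale_branches(repo_path: str = ".") -> list[tuple[str, float]]:
    """Return remote branches whose last commit is older than the threshold."""
    out = subprocess.run(
        ["git", "for-each-ref",
         "--format=%(refname:short) %(committerdate:unix)",
         "refs/remotes/origin"],
        cwd=repo_path, capture_output=True, text=True, check=True,
    ).stdout
    now = time.time()
    stale = []
    for line in out.splitlines():
        name, ts = line.rsplit(" ", 1)
        age_hours = (now - int(ts)) / 3600
        # Skip the default branch and the symbolic HEAD ref (assumed names).
        if age_hours > MAX_AGE_HOURS and not name.endswith(("/main", "/HEAD")):
            stale.append((name, age_hours))
    return stale

for branch, hours in stale_branches():
    print(f"{branch}: last commit {hours:.0f}h ago - consider merging or deleting")
```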
Building a metrics dashboard
- Define Metrics: align on DORA definitions for your context.
- Instrument Pipeline: capture timestamps at each pipeline stage.
- Track Incidents: log all production incidents and resolutions.
- Visualize Trends: build a dashboard showing the metrics over time.
- Set Targets: establish improvement goals.
- Review Regularly: hold weekly team reviews of the metrics.
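The "Instrument Pipeline" step is often as simple as appending one small event at each stage boundary to a store you already operate. The sketch below writes JSON lines to a local file as a stand-in for whatever metrics backend you actually use; the field names are placeholders.

```python
import json
import sys
from datetime import datetime, timezone
from pathlib import Path

EVENTS_FILE = Path("dora_events.jsonl")  # placeholder for a real metrics store

def record_stage(pipeline_id: str, stage: str) -> None:
    """Append one timestamped event; a downstream job aggregates these into DORA metrics."""
    event = {
        "pipeline_id": pipeline_id,
        "stage": stage,  # e.g. commit, build, test, review, deploy
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    with EVENTS_FILE.open("a") as f:
        f.write(json.dumps(event) + "\n")

if __name__ == "__main__":
    # Called from a CI step: python record_stage.py <pipeline_id> <stage>
    record_stage(sys.argv[1], sys.argv[2])
```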
Beyond DORA: additional metrics
Complementary Engineering Metrics
| Metric | Tracked by Elite Teams | Tracked by Most Teams |
|---|---|---|
| Developer Experience (DX) | ✓ | ✗ |
| Code Review Time | ✓ | ✓ |
| Build Time | ✓ | ✓ |
| Test Coverage | ✓ | ✓ |
| Technical Debt | ✓ | ✗ |
| On-Call Burden | ✓ | ✗ |
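Most of these complementary metrics can be computed from data you already export. As one example, code review time can be approximated from pull-request timestamps; the record layout below is hypothetical, not a specific provider's API.

```python
from datetime import datetime, timedelta

# Hypothetical pull-request export: when review was requested and when it was approved.
pull_requests = [
    {"opened": datetime(2025, 4, 1, 9, 0),  "approved": datetime(2025, 4, 1, 13, 30)},
    {"opened": datetime(2025, 4, 2, 11, 0), "approved": datetime(2025, 4, 3, 10, 0)},
    {"opened": datetime(2025, 4, 4, 15, 0), "approved": datetime(2025, 4, 7, 16, 15)},
]

review_times = [pr["approved"] - pr["opened"] for pr in pull_requests]
within_a_day = sum(1 for t in review_times if t <= timedelta(hours=24))

print(f"Average review time: {sum(review_times, timedelta()) / len(review_times)}")
print(f"Reviewed within 24h: {within_a_day} of {len(review_times)} PRs")
```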
Common measurement mistakes
[Chart: Common DORA metrics measurement mistakes (%)]
Metrics for Improvement: DORA metrics should drive improvement conversations, not blame. Elite teams use metrics to identify systemic issues and celebrate progress, not to judge individual developers.
Implementation roadmap
- Establish Baseline: start measuring your current state across all four metrics.
- Quick Wins: address obvious bottlenecks in the deployment pipeline.
- Automate Testing: build a comprehensive automated test suite.
- Continuous Delivery: enable automated deployments to production.
- Optimize and Scale: fine-tune processes and extend the practices to all teams.
FAQ
Q: Which DORA metric should we focus on first? A: Start with lead time for changes—it's often the biggest bottleneck and improvements here typically cascade to other metrics. Identify where in your pipeline changes get stuck.
Q: How do we calculate deployment frequency for microservices? A: Track deployments per service and aggregate. Elite teams deploy individual services multiple times per day. Consider weighted averages for services of different criticality.
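As a concrete illustration of that weighted aggregation, the sketch below combines per-service deployment rates using team-assigned criticality weights (all names, rates, and weights are made up).

```python
# Hypothetical per-service deployment rates and team-assigned criticality weights.
services = {
    "checkout":  {"deploys_per_week": 21, "weight": 3},  # business-critical path
    "catalog":   {"deploys_per_week": 9,  "weight": 2},
    "reporting": {"deploys_per_week": 1,  "weight": 1},
}

total_weight = sum(s["weight"] for s in services.values())
weighted_frequency = sum(
    s["deploys_per_week"] * s["weight"] for s in services.values()
) / total_weight

print(f"Weighted deployment frequency: {weighted_frequency:.1f} deploys/week")  # ~13.7
```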
Q: What's a realistic improvement timeline? A: Teams typically move one performance level in 6-12 months with focused effort. Moving from medium to high is often easier than high to elite, which requires cultural and architectural changes.
Q: How do we avoid gaming the metrics? A: Focus on outcomes, not just numbers. Pair quantitative metrics with qualitative measures like developer satisfaction. Celebrate improvements, but investigate if metrics improve without corresponding quality improvements.
Sources and further reading
- DORA State of DevOps Report
- Accelerate by Forsgren, Humble & Kim
- Google Cloud DORA
- The DevOps Handbook by Kim, Humble, Debois & Willis
- Continuous Delivery by Humble & Farley
Improve Your DevOps Performance: Understanding and improving DORA metrics requires expertise in engineering practices, tooling, and culture. Our team helps organizations measure and improve their delivery performance. Contact us to discuss your DevOps transformation.
Ready to improve your engineering metrics? Connect with our DevOps experts to develop a tailored improvement plan.



