
Applying the DORA metrics

The DevOps Research and Assessment (DORA) metrics revolutionized how the software industry measures organizational performance and delivery capability. Developed through rigorous research drawing on data from thousands of companies, they quickly became a standard for benchmarking software delivery performance.

What are the four DORA metrics?

The four DORA metrics, each of which can be derived from deployment and incident records (see the sketch after this list), are:

  • Lead time for changes: the time from code commit to deployment
  • Change failure rate: the percentage of deployments that cause a failure in production
  • Deployment frequency: how often code is deployed to production
  • Mean time to recover (MTTR): how fast teams can recover from a failure in production. This metric is now called “Failed deployment recovery time.”
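To make the definitions concrete, here is a minimal sketch of how the four metrics can be computed from deployment records. The Deployment structure and the sample data are hypothetical; in practice these fields would come from your CI/CD and incident tooling.

```python
# A minimal sketch of the four DORA metrics, computed from hypothetical
# deployment records. Real pipelines would source this data from CI/CD
# and incident tooling.
from dataclasses import dataclass
from datetime import datetime
from statistics import median

@dataclass
class Deployment:
    committed_at: datetime                # when the change was committed
    deployed_at: datetime                 # when it reached production
    failed: bool                          # did it cause a production failure?
    recovered_at: datetime | None = None  # when service was restored, if failed

deployments = [
    Deployment(datetime(2025, 6, 1, 9), datetime(2025, 6, 1, 14), False),
    Deployment(datetime(2025, 6, 2, 10), datetime(2025, 6, 3, 11), True,
               recovered_at=datetime(2025, 6, 3, 13)),
    Deployment(datetime(2025, 6, 5, 8), datetime(2025, 6, 5, 9), False),
]

days_observed = 30

# Deployment frequency: deployments per day over the observation window
frequency = len(deployments) / days_observed

# Lead time for changes: median commit-to-deploy time
lead_time = median(d.deployed_at - d.committed_at for d in deployments)

# Change failure rate: share of deployments that caused a failure
failure_rate = sum(d.failed for d in deployments) / len(deployments)

# Failed deployment recovery time: median time to restore service
recoveries = [d.recovered_at - d.deployed_at
              for d in deployments if d.failed and d.recovered_at]
recovery_time = median(recoveries) if recoveries else None

print(frequency, lead_time, failure_rate, recovery_time)
```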

How and when to use DORA

Use DORA metrics when you want to improve your software delivery practices. Track them consistently to spot where your teams are excelling and where they need to get better. When gaps emerge, DORA provides a clear compass for guiding change.

DORA metrics not only answer “how are we doing?” but also tap into the equally pressing question: “how are we doing compared to everyone else?” By benchmarking your capabilities, DORA shows how your organization stacks up against thousands of peers. This comparative lens is part of what has made DORA so sticky with engineering leaders. Based on your results, you’ll be classified as an Elite, High, Medium, or Low performer. You can see where you land by taking the DevOps QuickCheck.

It’s important to understand what DORA metrics measure, but just as critical is understanding when they were created. That context reveals their goals and design, and it helps leaders decide how much utility they’ll provide in today’s environment.

DORA metrics were popularized in 2018 with the release of Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations by Dr. Nicole Forsgren, Jez Humble, and Gene Kim. At that moment, many enterprises were still deep in digital transformation. They needed ways to measure the success of new investments in DevOps, automation, and agile practices. Against that backdrop, DORA focused squarely on delivery performance. The framework became a breakthrough because it gave leaders shared language and benchmarks for evaluating progress across large organizations.

Today, the landscape is shifting again. AI, platform engineering, and developer experience have expanded the conversation beyond delivery speed to questions of quality, sustainability, and impact. DORA remains a useful foundation, but it’s no longer sufficient on its own. Forward-looking teams are now pairing DORA with frameworks like SPACE and the DX Core 4 to get a more complete picture of productivity, experience, and business outcomes.

Who should use DORA metrics?

DORA metrics are best suited for organizations in the middle of digital transformation, those looking for consistent benchmarks for software delivery, or teams establishing new processes from the ground up.

At their core, DORA metrics are standardized measures of software delivery performance. They are especially useful for companies that:

  • Are modernizing their development practices, such as adopting DevOps at scale
  • Want a reliable benchmark to understand their delivery capabilities against industry peers
  • Are building processes from scratch and need validation that their design aligns with proven practices

DORA is most valuable when organizations commit to acting on what the metrics reveal. Because these measures describe not only how you are performing but also how you should perform, they provide a roadmap for improvement. For teams in the Low or Medium performer categories, DORA highlights the gaps that must be closed to reach Elite status and offers a foundation for planning high-leverage interventions. Those interventions not only improve delivery outcomes but also translate into stronger developer productivity.

Take deployment recovery as an example. If your team takes more than a month to recover from a failed release, you will be classified as a low performer. DORA is prescriptive here: to achieve Elite performance, recovery time must drop to less than an hour. That clear line in the sand makes it easier for leaders to prioritize the right changes and measure progress over time.
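For illustration, here is a hedged sketch of bucketing recovery time into performance tiers. The Elite and Low cutoffs mirror the thresholds described above; the middle boundaries are illustrative, since DORA's published thresholds have shifted somewhat across annual reports.

```python
from datetime import timedelta

# A sketch of mapping recovery time to a DORA performance tier.
# Elite (< 1 hour) and Low (> 1 month) follow the thresholds cited
# above; the High/Medium boundaries here are illustrative examples,
# not an authoritative reproduction of any single DORA report.
def recovery_tier(recovery_time: timedelta) -> str:
    if recovery_time < timedelta(hours=1):
        return "Elite"
    if recovery_time < timedelta(days=1):
        return "High"
    if recovery_time < timedelta(days=30):  # roughly "less than a month"
        return "Medium"
    return "Low"

print(recovery_tier(timedelta(minutes=30)))  # Elite
print(recovery_tier(timedelta(days=45)))     # Low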

Who shouldn’t use DORA metrics?

Not every team will find value in DORA metrics. For some, the effort of instrumenting, collecting, and analyzing these measures may outweigh the benefits. Because DORA is narrowly focused on software delivery performance—and because many teams can reach the Elite cluster relatively quickly—its utility can be limited in certain contexts.

DORA tends to have less impact for teams that:

  • Have practiced DevOps and continuous delivery from the very beginning
  • Have already achieved Elite status and can maintain it without significant effort
  • Do not deploy software directly to customers
  • Are not web application or IT teams, since DORA benchmarks are primarily based on data from those environments

For these groups, the prescriptive thresholds that make DORA so powerful for others may add little new insight. In such cases, leaders often turn to broader frameworks like SPACE or the DX Core 4 to capture dimensions of developer productivity and experience that go beyond delivery speed alone.

How do you collect and implement DORA metrics?

Many developer tools and productivity dashboards now include DORA metrics out of the box. Platforms like GitHub, GitLab, Jira, and Linear can automatically surface the four DORA measures using workflow data. For some teams, this instrumentation is truly plug-and-play, providing metrics with minimal additional effort.

For others, collecting these metrics can be costly. While the metrics themselves are standardized, the way teams work rarely is. Variations in tooling, processes, and definitions—for example, what counts as a “production deployment”—can make it challenging to collect clean, consistent data.
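As one illustration, the sketch below pulls deployments from the GitHub REST API and filters them to an agreed-upon “production” environment. The repository name and environment label are placeholders; how your team defines a production deployment is exactly the definitional question raised above.

```python
# A sketch of counting production deployments via GitHub's REST API.
# OWNER/REPO and the "production" label are placeholders; adjust them
# to match your own repositories and environment naming. Private repos
# also require an Authorization: Bearer <token> header.
import requests

OWNER, REPO = "your-org", "your-repo"  # hypothetical
url = f"https://api.github.com/repos/{OWNER}/{REPO}/deployments"
resp = requests.get(
    url,
    headers={"Accept": "application/vnd.github+json"},
    params={"environment": "production", "per_page": 100},
)
resp.raise_for_status()

deployments = resp.json()
print(f"{len(deployments)} production deployments on the first page")
for d in deployments:
    print(d["environment"], d["created_at"])
```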

Using surveys and self-reported data

You don’t have to rely on workflow instrumentation to measure DORA. Surveys and self-reported data are equally valid, and in fact, much of the original research behind DORA was based on survey responses. While these measurements may be less precise or frequent than automated collection, they provide enough fidelity to assess capabilities and benchmark performance.

The tradeoff is administration. Running surveys and tracking responses requires deliberate effort, particularly in larger organizations with many developers and applications. But for many teams, the lighter overhead of surveys outweighs the complexity of building and maintaining instrumentation.
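A lightweight way to run such a survey is a multiple-choice question modeled on DORA's own instruments. The sketch below aggregates hypothetical responses for deployment frequency; the answer options and sample data are illustrative, not DORA's exact wording.

```python
from collections import Counter

# A sketch of aggregating self-reported deployment frequency.
# The options mirror the multiple-choice style of DORA's surveys;
# both the wording and the sample responses are illustrative.
OPTIONS = [
    "On demand (multiple deploys per day)",
    "Between once per day and once per week",
    "Between once per week and once per month",
    "Between once per month and once every six months",
    "Fewer than once every six months",
]

responses = [OPTIONS[0], OPTIONS[1], OPTIONS[1], OPTIONS[2], OPTIONS[1]]

counts = Counter(responses)
answer, n = counts.most_common(1)[0]
print(f"Modal answer ({n}/{len(responses)} respondents): {answer}")
```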

Choosing the right approach

Whether you lean on workflow tools or surveys, the key is consistency. DORA metrics are only valuable if you collect them regularly and interpret them against a shared standard. For organizations looking beyond delivery performance alone, leaders are increasingly pairing DORA with broader frameworks like SPACE and the DX Core 4 to capture a more complete view of developer productivity and experience.

How to understand DORA results

While each DORA metric can be measured in isolation, they are meant to be analyzed together. The four metrics are intentionally designed in tension with each other, creating balance and guardrails as teams adopt more automation. For example, to move into the higher-performance clusters, teams must both increase deployment frequency and reduce the number of defects reaching customers. This ensures that speed never comes at the expense of quality.
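A toy calculation makes the tension concrete. Change failure rate is simply failed deployments divided by total deployments, so a team shipping small changes frequently can deploy ten times as often while cutting its failure rate, provided the extra deployments don't introduce proportionally more failures. The numbers below are invented for illustration.

```python
# Invented numbers illustrating how frequency and failure rate interact.
def change_failure_rate(failed: int, total: int) -> float:
    return failed / total

# Big, infrequent releases: 4 deploys a month, 1 causes an incident.
print(change_failure_rate(1, 4))   # 0.25 -> 25% CFR

# Small, frequent releases: 40 deploys a month, 2 cause incidents.
print(change_failure_rate(2, 40))  # 0.05 -> 5% CFR
```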

Strategy beyond measurement

Once you have measurements in place, the work shifts to strategy. DORA is prescriptive about what to measure and the thresholds you must hit to qualify as Elite. But it does not tell you how to improve. That part is up to your organization: deciding what type of work needs to be done and prioritizing interventions that will move the needle.

The Continuous Delivery Capabilities outlined in Accelerate provide a helpful starting point for choosing where to focus. They highlight practices most likely to improve delivery performance. But the real value comes when organizations go beyond chasing benchmarks and use DORA alongside frameworks like DX Core 4 to understand not just how fast they can deliver, but also how improvements connect to developer productivity and business outcomes.

Misconceptions about DORA metrics

DORA is not developer productivity

DORA metrics are not a measure of developer productivity. They measure software delivery capabilities. Yet in practice, DORA has often been conflated with productivity and is frequently discussed as if it were a proxy for it. This misunderstanding can be dangerous. Without clarity on why DORA metrics exist and what contexts they are suited for, organizations risk chasing the wrong signals about productivity and developer experience.

Elite status does not guarantee success

Another common misconception is that achieving Elite performance means an organization is highly productive or will succeed as a business. DORA metrics are valuable because developers cannot be productive in environments where they cannot iterate and deploy rapidly. But hitting Elite thresholds does not guarantee outcomes. A team can be excellent at deploying software yet still deliver the wrong product—or fail to deliver real value to the business.

Using DORA in the right context

DORA metrics remain a powerful way to benchmark delivery performance. But they should be seen as a foundation, not an end state. Forward-looking organizations use DORA alongside frameworks like the DX Core 4 to connect delivery capabilities with broader measures of developer productivity, experience, and business impact. That context ensures leaders don’t just move faster, but also move in the right direction.

How DX can help give you insight into your DORA metrics

DORA metrics are a strong starting point for measuring software delivery performance, but on their own they rarely explain why performance looks the way it does. That’s where DX comes in.

DX helps leaders move beyond delivery benchmarks by pairing DORA with the Core 4—the industry standard for measuring developer productivity and experience. With DX’s platform, you can combine workflow data with survey insights to see both the outcomes and the underlying drivers of performance.

The result: a complete, actionable view of engineering health that shows not just how you’re performing against DORA, but where to invest to improve productivity, experience, and ROI.

Published July 1, 2025