Understanding and implementing DORA metrics

This article is an excerpt from our guide, DORA, SPACE, and DevEx: Which Framework Should You Use?

The DevOps Research and Assessment (DORA) metrics revolutionized the way the software industry measures organizational performance and delivery capabilities. Developed through rigorous research drawing on data from thousands of companies, these metrics quickly became a standard for measuring the performance of software organizations.

The four DORA metrics are:

  • Lead Time for Changes: the time from code commit to production deployment
  • Change Failure Rate: the percentage of changes that result in failures in production
  • Deployment Frequency: how often code is deployed to production
  • Mean Time to Recover (MTTR): how quickly teams can recover from a failure in production. This metric is now called “Failed Deployment Recovery Time”
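
To make these definitions concrete, here is a minimal sketch in Python of how each metric could be computed from a simplified deployment log. The Deployment record and its field names are illustrative assumptions, not a standard schema; real pipelines rarely map this cleanly.

```python
# Illustrative sketch: deriving the four DORA metrics from raw delivery
# events. The event shape below is an assumption for demonstration.
from dataclasses import dataclass
from datetime import datetime, timedelta
from statistics import median

@dataclass
class Deployment:
    committed_at: datetime   # when the underlying change was committed
    deployed_at: datetime    # when it reached production
    failed: bool             # did it cause a failure in production?
    recovered_at: datetime | None = None  # when service was restored, if failed

def lead_time_for_changes(deploys: list[Deployment]) -> timedelta:
    """Median time from commit to production deployment."""
    return median([d.deployed_at - d.committed_at for d in deploys])

def change_failure_rate(deploys: list[Deployment]) -> float:
    """Share of production deployments that caused a failure."""
    return sum(d.failed for d in deploys) / len(deploys)

def deployment_frequency(deploys: list[Deployment], days: int) -> float:
    """Average production deployments per day over the window."""
    return len(deploys) / days

def failed_deployment_recovery_time(deploys: list[Deployment]) -> timedelta:
    """Median time to restore service after a failed deployment."""
    failures = [d.recovered_at - d.deployed_at
                for d in deploys if d.failed and d.recovered_at]
    return median(failures)
```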

DORA metrics answer the question “how are we doing?” but also scratch the insatiable itch of “how are we doing compared to everyone else?” When you assess your capabilities using DORA metrics, you will see how your company compares to other respondents, and this benchmarking data is a huge attractor for users of DORA metrics. Based on your organization’s measurements, you will fall into one of four categories: Elite, High, Medium, or Low Performer. You can see your own results by taking the DevOps QuickCheck at https://dora.dev/quickcheck/

It’s important to understand what DORA metrics are, but equally important is understanding when they were created, as this helps contextualize their goals and design, and will help you make a decision about their utility in your own organization. DORA metrics were popularized in 2018 by the book Accelerate: Building and Scaling High Performing Technology Organizations by Dr. Nicole Forsgren, Gene Kim, and Jez Humble. Thinking back to 2018, many large enterprises were in the middle of, or completing, large digital transformation projects, and they were in search of metrics that would help them quantify their progress. It’s this landscape that DORA metrics came out of, which is why they focus so heavily on software delivery capabilities.

Who should use DORA metrics?

DORA metrics are standardized measures of software delivery capabilities. These metrics are a great fit for companies that:

  • Are going through a digital transformation and modernizing their software development practices, such as by adopting DevOps practices
  • Want a consistent benchmark to understand their software delivery capabilities
  • Are building processes from scratch and need to validate their process design and delivery capabilities against industry benchmarks

If your organization is committed to addressing the weaknesses highlighted by DORA metrics, they are more likely to be helpful to you. This is because the metrics are not just measures, but also guidance as to how your organization should be performing. Especially if your team falls within the Low or Medium Performer clusters, DORA metrics will spell out what your teams need to achieve to qualify as Elite, and from there, you can make a plan of high-leverage interventions. These interventions will improve the capabilities of your organization and, in turn, developer productivity.

For example, if it takes your team more than one month to recover from a failed deployment, you will fall into the Low Performer cluster, and DORA is prescriptive in telling you that you need to improve that measure to less than one hour in order to qualify as Elite.
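
As a rough illustration of this kind of classification, the sketch below buckets a recovery-time measurement into a performance cluster. The Elite and Low cutoffs follow the example above; the intermediate thresholds are assumptions for illustration, since the published cutoffs have shifted between DORA report years.

```python
# Sketch: bucketing recovery time into DORA performance clusters.
# Elite (<1 hour) and Low (>1 month) follow the example in the text;
# the High and Medium cutoffs are illustrative assumptions.
from datetime import timedelta

def recovery_cluster(recovery_time: timedelta) -> str:
    if recovery_time < timedelta(hours=1):
        return "Elite"
    if recovery_time < timedelta(days=1):
        return "High"    # illustrative cutoff
    if recovery_time < timedelta(days=30):
        return "Medium"  # illustrative cutoff
    return "Low"         # more than a month, per the example above

print(recovery_cluster(timedelta(minutes=45)))  # -> Elite
print(recovery_cluster(timedelta(days=45)))     # -> Low
```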

However, there are some teams that may not see a big benefit from DORA metrics, and the cost of instrumenting, collecting, and analyzing DORA metrics may be higher than the benefit they provide. Because they are very specific to software delivery capabilities, and because the Elite performance cluster is within reach for many companies, DORA metrics may not be as useful for newer teams who have practiced DevOps from their inception, or teams who have reached Elite already. 

DORA may have limited impact on teams that:

  • Have practiced DevOps and continuous delivery since their inception
  • Have already reached Elite and are maintaining their status with ease
  • Are not responsible for deploying software to customers
  • Are not web application or IT teams, as the benchmarks have been established using data from those teams

How do you collect and implement DORA metrics?

Many off-the-shelf developer tools and developer productivity dashboards include DORA metrics as a standard feature. These tools work by collecting workflow data from your developer tool stack, such as GitHub, GitLab, Jira, or Linear. You’ll be able to see measurements for all four DORA metrics using workflow data from these tools.

For some teams, this instrumentation is plug-and-play, giving you DORA metrics with minimal effort. For many other teams, there is a higher cost with collecting these metrics. The metrics are standardized, but the ways that teams work certainly aren’t. That means that there is plenty of variation when it comes to tools, processes, and in turn, collection methods. Even defining how and when to measure the metrics can vary from team to team (for example, what do you consider a “production deployment”?). 
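As one illustration of what this instrumentation can look like, the sketch below pulls deployment records from GitHub’s REST API and counts them per week to approximate deployment frequency. The owner, repo, and token are placeholders, and treating the “production” environment as the definition of a production deployment is an assumption; your team’s definition may live in a different environment name, a release tag, or a CI job.

```python
# Rough sketch: approximating deployment frequency from GitHub's
# "list deployments" REST endpoint. OWNER, REPO, TOKEN, and the
# "production" environment name are placeholder assumptions.
from collections import Counter
from datetime import datetime
import requests

OWNER, REPO = "your-org", "your-repo"  # hypothetical repository
TOKEN = "<personal-access-token>"      # hypothetical credential

resp = requests.get(
    f"https://api.github.com/repos/{OWNER}/{REPO}/deployments",
    params={"environment": "production", "per_page": 100},
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()

# Count deployments per ISO week to see frequency over time.
per_week = Counter()
for d in resp.json():
    created = datetime.fromisoformat(d["created_at"].replace("Z", "+00:00"))
    year, week, _ = created.isocalendar()
    per_week[(year, week)] += 1

for (year, week), count in sorted(per_week.items()):
    print(f"{year}-W{week:02}: {count} deployments")
```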

However, it’s not necessary to collect data from your workflow tools in order to track DORA metrics. Surveys and self-reported data are a reliable way to collect these measurements; in fact, DORA metrics themselves are based on survey data, not automatically collected data. Self-reported measurements may be less precise and less frequent than automated ones, but they offer enough fidelity to be useful for assessing capabilities, without needing to instrument additional software. However, you will need to administer surveys and track responses, which may take some effort, especially in organizations with a large number of developers and applications.
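
For instance, a lightweight way to aggregate survey data is to tally multiple-choice answers and report the most common response per team, loosely mirroring how DORA’s own research collects these measures. The answer options and responses below are illustrative, not DORA’s official survey wording.

```python
# Sketch: aggregating self-reported deployment-frequency answers from
# a team survey. Answer options and responses are illustrative.
from collections import Counter

responses = [
    "Between once per week and once per month",
    "Between once per day and once per week",
    "Between once per day and once per week",
    "On demand (multiple deploys per day)",
]

counts = Counter(responses)
modal_answer, _ = counts.most_common(1)[0]
print(f"Most common self-reported frequency: {modal_answer}")
```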

Though each DORA metric can be measured in isolation, it’s important to analyze them as a collection. The DORA metrics were deliberately designed to be in tension with each other, providing guardrails as teams work toward adopting more automation. To be classified in the upper performance clusters, teams must deploy more frequently while also reducing the number of defects that reach customers. This tension ensures that teams are not compromising quality as they accelerate their deployment rates.

Once you have measurements in place, it’s still up to your organization to determine what work needs to be done to influence each metric. DORA is prescriptive about what to measure and what benchmarks you must achieve to qualify as Elite, but it does not offer a copy-and-paste solution for how to improve. However, the Continuous Delivery Capabilities called out in Accelerate (see the reference guide on page xix of the preface) can give you a jumpstart when choosing where to focus your efforts first.

What’s important to consider about DORA metrics?

DORA metrics are not a measure of developer productivity, but a measure of software delivery capabilities. In practice, you’ll find that DORA metrics have become almost synonymous with developer productivity, and are often discussed as a productivity measurement in our industry. It’s important to understand the goal of DORA metrics, why they exist, and in what contexts they are appropriate. Otherwise, you run the risk of measuring the wrong thing and getting the wrong signals about developer productivity and developer experience.

Another common misconception is that qualifying as Elite means that your organization is highly productive, or that you will perform well as a business. It’s difficult for developers to be productive in an environment without the capability to rapidly iterate and deploy software, and DORA is a helpful measure for assessing that. But you may still be building the wrong thing, just building it very quickly.

Published January 12, 2024
