Applying the DORA metrics

The DevOps Research and Assessment (DORA) metrics revolutionized how the software industry measures organizational performance and delivery capabilities. Developed through rigorous research drawing on data from thousands of companies, they quickly became a standard for measuring software delivery performance.

What are the four DORA metrics?

The four DORA metrics are listed below, followed by a rough sketch of how they might be calculated:

  • Lead time for changes: time from code commit to deployment in production
  • Change failure rate: percentage of deployments that cause a failure in production
  • Deployment frequency: how often code is deployed to production
  • Mean time to recover (MTTR): how quickly teams recover from a failure. This metric is now called “Failed deployment recovery time.”
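
If your delivery pipeline already records deployments and incidents, all four metrics reduce to simple aggregations over those events. The Python sketch below is a minimal illustration rather than an official implementation: the Deployment record and its fields (commit_time, deploy_time, failed, recovered_time) are assumptions about what your own tooling might provide.

```python
from dataclasses import dataclass
from datetime import datetime
from statistics import median
from typing import Optional


@dataclass
class Deployment:
    commit_time: datetime                      # when the change was committed
    deploy_time: datetime                      # when it reached production
    failed: bool                               # did it cause a failure in production?
    recovered_time: Optional[datetime] = None  # when service was restored, if it failed


def dora_metrics(deployments: list[Deployment], window_days: int) -> dict:
    """Aggregate the four DORA metrics over a reporting window (rough sketch)."""
    if not deployments:
        return {}
    failures = [d for d in deployments if d.failed]

    return {
        # Deployment frequency: production deployments per day
        "deployment_frequency_per_day": len(deployments) / window_days,
        # Lead time for changes: median hours from commit to production
        "lead_time_hours": median(
            (d.deploy_time - d.commit_time).total_seconds() / 3600 for d in deployments
        ),
        # Change failure rate: share of deployments that caused a failure
        "change_failure_rate": len(failures) / len(deployments),
        # Failed deployment recovery time (formerly MTTR): median hours to restore service
        "recovery_time_hours": median(
            (d.recovered_time - d.deploy_time).total_seconds() / 3600 for d in failures
        ) if failures else None,
    }
```

Even this toy version forces decisions about what counts as a “failure” or a “production deployment,” which is exactly the kind of definitional work discussed later in this article.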

How and when to use DORA

Use DORA metrics when you want to improve your software delivery practices. Review them regularly to spot areas that need attention, and then use them to guide the changes you make.

DORA metrics answer “How are we doing?” but also scratch the insatiable itch of “How are we doing compared to everyone else?” When you assess your capabilities using DORA metrics, you see how your organization compares with other respondents, and this benchmarking data is a big draw for users of DORA metrics. Based on your organization’s measurements, you will fall into one of four categories: Elite, High, Mid, or Low Performer. You can see your results by taking the DevOps QuickCheck.

It’s important to understand what DORA metrics are, but it’s equally important to understand when and why they were created. That context clarifies their goals and design and will help you decide how useful they will be in your organization.

DORA metrics were made famous in 2018 by the book Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations by Dr. Nicole Forsgren, Gene Kim, and Jez Humble. At the time, many large enterprises were in the middle of, or wrapping up, sizeable digital transformation projects and were searching for metrics that would help them quantify their progress. That is the landscape DORA metrics emerged from, which is why they focus so heavily on software delivery capabilities.

Who should use DORA metrics?

DORA metrics are best suited for companies undergoing digital transformation, seeking consistent benchmarks for software delivery capabilities, and building processes from scratch.

DORA metrics are standardized measures of software delivery capabilities. These metrics are an excellent fit for companies that:

  • Are going through a digital transformation and modernizing their software development practices, such as adopting DevOps practices
  • Want a consistent benchmark to understand their software delivery capabilities
  • Are building processes from scratch and need to validate their process design and delivery capabilities against industry benchmarks

If your organization is committed to addressing the weaknesses highlighted by DORA metrics, they are more likely to be helpful to you, because the metrics serve as both a measurement and guidance on how your organization should perform. Especially if your team falls within the Low or Mid-Performer clusters, DORA metrics spell out what your teams would need to achieve to qualify as Elite, and from there you can plan high-leverage interventions. Those interventions will improve your organization’s capabilities and, in turn, developer productivity.

For example, if it takes your team over a month to recover from a failed deployment, you will fall into the low-performer cluster. DORA is prescriptive: to qualify as Elite, you must improve that measure to less than one hour.
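
As a rough illustration of how prescriptive that is, the snippet below maps a recovery time onto a performance cluster. Only the under-an-hour (Elite) and over-a-month (Low) cutoffs come from the discussion above; the intermediate boundaries are placeholder assumptions, since the published benchmarks shift from report to report.

```python
from datetime import timedelta


def recovery_cluster(recovery_time: timedelta) -> str:
    """Roughly place a failed-deployment recovery time in a DORA performance cluster."""
    if recovery_time < timedelta(hours=1):
        return "Elite"
    if recovery_time < timedelta(days=1):    # placeholder boundary
        return "High"
    if recovery_time < timedelta(days=30):   # placeholder boundary
        return "Mid"
    return "Low"                             # over a month to recover


print(recovery_cluster(timedelta(days=45)))  # -> "Low"
```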

Who shouldn’t use DORA metrics?

Teams that have always practiced DevOps, have already achieved Elite status, aren’t responsible for deploying customer-facing software, or aren’t web application or IT teams might not find much value in DORA metrics.

Some teams may not see a significant benefit from DORA metrics, and the cost of instrumenting, collecting, and analyzing them may be higher than the value they provide. Because the metrics are narrowly focused on software delivery capabilities, and because the Elite performance cluster is within reach for many companies, DORA metrics may be less helpful for newer teams that have practiced DevOps from their inception or for teams that have already reached Elite.

DORA may have a limited impact on teams that:

  • Have practiced DevOps and continuous delivery since their inception
  • Have already reached Elite and are maintaining their status with ease
  • Are not responsible for deploying software to customers
  • Are not web application or IT teams, as the benchmarks have been established using data from those teams

How do you collect and implement DORA metrics?

To collect DORA metrics, you can pull workflow data from integrated developer tools like GitHub or GitLab, though variations in workflows may require extra setup; alternatively, you can rely on surveys and self-reported data, which require some manual administration.

Many off-the-shelf developer tools and developer productivity dashboards include DORA metrics as a standard feature. These tools collect workflow data from your developer tool stacks, such as GitHub, GitLab, Jira, or Linear. Using workflow data from these tools, you can see measurements for all four DORA metrics.

This instrumentation is plug-and-play for some teams, giving you DORA metrics with minimal effort. For many other teams, collecting these metrics is costly. The metrics are standardized, but the ways teams work aren’t, so there is plenty of variation in tools, processes, and, in turn, collection methods. Even how and when to measure each metric can vary from team to team (for example, what do you consider a “production deployment”?).
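
For example, if GitHub is your system of record, one team might count deployments recorded against a “production” environment via the REST deployments endpoint, while another counts tagged releases. The sketch below takes the first interpretation; it is a simplified illustration that skips pagination, rate limiting, and error handling, and the owner, repo, and token values are placeholders.

```python
import requests
from datetime import datetime, timedelta, timezone


def production_deploys_last_30_days(owner: str, repo: str, token: str) -> int:
    """Count recent deployments to the 'production' environment via GitHub's REST API.

    Simplified sketch: pagination and rate limits are ignored, and treating the
    'production' environment as the source of truth is only one possible
    definition of a "production deployment".
    """
    resp = requests.get(
        f"https://api.github.com/repos/{owner}/{repo}/deployments",
        params={"environment": "production", "per_page": 100},
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
        timeout=30,
    )
    resp.raise_for_status()

    cutoff = datetime.now(timezone.utc) - timedelta(days=30)
    return sum(
        1 for d in resp.json()
        if datetime.fromisoformat(d["created_at"].replace("Z", "+00:00")) >= cutoff
    )


# Deployment frequency, expressed as deployments per day over the window:
# frequency = production_deploys_last_30_days("acme", "webapp", token) / 30
```

Lead time, change failure rate, and recovery time can be derived in a similar way by joining deployment data with commit timestamps and incident records, which is where most of the team-to-team variation shows up.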

However, collecting data from your workflow tools is not the only way to track DORA metrics. Surveys and self-reported data are reliable methods for collecting these measurements; in fact, DORA’s own benchmarks are typically based on survey data rather than automatically collected data. Self-reported measurements may be less precise and less frequent than automated ones, but they offer enough fidelity to assess capabilities without instrumenting additional software. You do have to administer the surveys and track responses, which takes some effort, especially in organizations with many developers and applications.
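
A lightweight way to run the survey approach is to ask developers to pick a response bucket for each metric and summarize the answers per team. The buckets and helper below are illustrative only, not the official DORA survey wording.

```python
from collections import Counter

# Illustrative response buckets; the official DORA survey wording differs.
DEPLOY_FREQUENCY_BUCKETS = [
    "On demand (multiple deploys per day)",
    "Between once per day and once per week",
    "Between once per week and once per month",
    "Fewer than once per month",
]


def team_summary(responses: list[str]) -> str:
    """Summarize a team's self-reported deployment frequency as the most common answer."""
    valid = [r for r in responses if r in DEPLOY_FREQUENCY_BUCKETS]
    if not valid:
        return "No responses"
    return Counter(valid).most_common(1)[0][0]


# Example: five developers on one team self-reporting their cadence.
print(team_summary([
    "Between once per day and once per week",
    "Between once per day and once per week",
    "On demand (multiple deploys per day)",
    "Between once per week and once per month",
    "Between once per day and once per week",
]))  # -> "Between once per day and once per week"
```

Asking the same buckets on a regular cadence keeps the self-reported trend comparable over time, even though it is coarser than instrumented data.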

How to understand DORA results

Understanding DORA results involves analyzing the metrics collectively. They are designed to balance different aspects and guide teams toward higher performance. While DORA provides benchmarks for excellence, it’s up to each organization to strategize how to improve.

Though each DORA metric can be measured in isolation, analyzing them as a collection is essential. The metrics are intentionally designed to be in tension with one another, providing guardrails as teams adopt more automation. To be classified in the upper performance clusters, teams must deploy more frequently and reduce the number of defects that reach customers. This tension ensures that teams do not compromise quality as they accelerate their deployment rates.

Once you have measurements in place, it’s still up to your organization to determine what type of work needs to be done to influence the metric. DORA is prescriptive about what to measure and what benchmarks you must achieve to qualify as Elite but does not offer a copy-and-paste solution for improving. However, the Continuous Delivery Capabilities in Accelerate can give you a jumpstart when choosing where to focus your efforts first.

Misconceptions about DORA metrics

Common misconceptions about DORA include treating the metrics as a measure of developer productivity rather than of software delivery capabilities, and assuming that reaching Elite status guarantees business success.

DORA metrics are not a measure of developer productivity but of software delivery capabilities. In practice, DORA metrics have almost become synonymous with developer productivity and are often discussed as a productivity measurement in our industry. It’s essential to understand the goal of DORA metrics, why they exist, and what contexts are appropriate for them. Otherwise, you risk measuring the wrong thing and getting the wrong signals about developer productivity and developer experience.

Another common misconception is that qualifying as Elite means that your organization is highly productive or that you will perform well as a business. Developers find it difficult to be productive in an environment without the capability to iterate and deploy software rapidly, and DORA is a helpful measure for assessing that. But you may still be building the wrong thing, just very quickly.

How DX can help give you insight into your DORA metrics

DORA metrics help you understand and improve software delivery performance, but turning them into actionable insights can be tough. Enter DX, the ultimate developer intelligence platform. Built for developer productivity, experience, and platform engineering teams, DX combines qualitative and quantitative data to give you a complete view of your development process. With tools like DevEx 360, Data Cloud, and PlatformX, DX helps you dive deep into your DORA metrics and identify areas for improvement.

Through its Data Cloud integration, DX stands out by unifying metrics across various data sources. It allows teams to move beyond basic metrics and uncover the factors affecting developer productivity and delivery performance. The DevEx 360 survey tool offers qualitative insights into developer experience, complementing your DORA metrics for a more nuanced analysis. This powerful combination helps pinpoint specific bottlenecks and optimization opportunities.

Published
August 13, 2024