16 developer productivity metrics top companies actually use

How Dropbox, Booking.com, Adyen, Google, and more measure engineering impact with DX’s Core 4

Taylor Bruneaux

Analyst

Engineering leaders have long known that measuring developer productivity goes far beyond counting lines of code or tickets closed. What actually drives impact is a holistic view—one that balances speed, quality, satisfaction, and outcomes.

The most effective teams today are combining system metrics with developer-reported experience data to capture this whole picture. Companies like Dropbox, Booking.com, Adyen, LinkedIn, and Spotify have shown that this approach not only accelerates delivery but also improves retention, satisfaction, and ROI across engineering.

In this article, we’ll break down 16 essential metrics, used by leaders at companies like Google, Uber, Etsy, GitLab, Atlassian, and more, that together provide a clearer understanding of engineering effectiveness. These metrics reflect the Core 4 dimensions of productivity and demonstrate how modern organizations measure, improve, and ultimately scale the impact of developers.

What is developer productivity?

Developer productivity isn’t just about how much code gets written or how quickly features ship. It is the sum of how effectively teams build software, the quality of what they deliver, and the experience of the developers doing the work.

At its core, productivity means accelerating the path from idea to reliable software in production, while minimizing friction and waste along the way. This includes collaboration across teams, reducing cognitive load in processes such as code review, and fostering an environment where developers feel engaged and supported.

The field has moved beyond chasing individual metrics. Research-backed frameworks, such as SPACE and DX’s own Core 4, emphasize that productivity must account for system dynamics, developer experience, and business outcomes. It’s about understanding not just output, but the conditions that allow developers to do their best work.

Why should you measure developer productivity?

Engineering leaders can’t improve what they can’t measure. Organizations that rigorously track developer productivity gain a critical competitive advantage by identifying bottlenecks, eliminating waste, and making smarter investment decisions.

The most effective approach goes beyond traditional activity metrics. It combines system data with insights from developer surveys to understand where friction slows teams down and where improvements will have the most significant impact. This holistic view helps leaders reduce attrition, accelerate delivery, and build more resilient systems.

DX’s AI Measurement Framework and DXI connect engineering health directly to business performance, making the payoff clear: faster feature delivery and motivated teams doing their best work.

The DX Core 4 framework

The DX Core 4 provides a unified approach to measuring developer productivity by consolidating DORA, SPACE, and DevEx into four balanced dimensions: speed, effectiveness, quality, and impact.

Leading companies are already seeing results. Dropbox uses it to align teams on shared language, Booking.com quantified a 16% productivity lift from AI adoption, and Adyen achieved measurable improvements across half its teams in just three months.

16 developer productivity metrics measured by top companies

Here are 16 Core 4-aligned metrics that top companies use to understand, benchmark, and improve engineering performance.

Speed metrics

Speed metrics measure how quickly code moves from development to production, focusing on the velocity and frequency of software delivery.

1. Diffs per engineer (PRs or MRs)

A directional signal of throughput. Companies like Meta, Microsoft, and Uber use it carefully, paired with developer experience data, to avoid misuse. When balanced with DXI, it helps leaders understand whether efforts to improve productivity are working at scale.

2. Lead time

Measures how long it takes a code change to go from commit to production. GitLab and Atlassian track this to uncover bottlenecks and shorten the path from idea to delivery.
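As a minimal sketch, lead time can be computed from commit and deployment timestamps pulled from version control and CI. The data below is hypothetical; the median is used because a few slow changes can badly skew the mean.

```python
from datetime import datetime, timedelta
from statistics import median

# Hypothetical (commit_time, deploy_time) pairs for recent changes.
changes = [
    (datetime(2025, 9, 1, 9, 0), datetime(2025, 9, 1, 15, 30)),
    (datetime(2025, 9, 2, 10, 0), datetime(2025, 9, 4, 11, 0)),
    (datetime(2025, 9, 3, 14, 0), datetime(2025, 9, 3, 18, 0)),
]

# Lead time per change, in hours.
lead_times = [(deploy - commit) / timedelta(hours=1) for commit, deploy in changes]
print(f"median lead time: {median(lead_times):.1f} hours")
```

In practice the timestamps would come from your Git host and deploy pipeline rather than hard-coded values.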

3. Deployment frequency

How often teams successfully push changes to production. Google, LinkedIn, and GitLab rely on this metric to ensure agility and responsiveness to customer needs.
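A rough sketch of the calculation, assuming a list of production deployment dates exported from your deploy tooling: group deploys by ISO week, then average.

```python
from collections import Counter
from datetime import date

# Hypothetical production deployment dates.
deploys = [date(2025, 9, 1), date(2025, 9, 1), date(2025, 9, 3),
           date(2025, 9, 8), date(2025, 9, 10), date(2025, 9, 11)]

# Group by (ISO year, ISO week) and average deployments per week.
per_week = Counter(d.isocalendar()[:2] for d in deploys)
avg = sum(per_week.values()) / len(per_week)
print(f"deploys per week: {avg:.1f}")
```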

4. Perceived rate of delivery

Captures developers’ perception of how quickly they’re able to ship. This balances system metrics with lived experience, highlighting friction that quantitative data may miss.

Effectiveness metrics

Effectiveness metrics measure how efficiently developers can complete their work without friction, focusing on the developer experience and organizational support systems.

5. Developer Experience Index (DXI)

DX’s proprietary index measures the drivers of productivity: flow, feedback loops, cognitive load, and collaboration. Companies like Dropbox and Booking.com use DXI to connect developer experience directly to business outcomes.

6. Time to 10th PR

Adopted at Peloton, this metric measures onboarding effectiveness. Faster ramp-up signals effective enablement, documentation, and mentorship.

7. Ease of delivery

Used as a north star by Amplitude, GoodRx, and Postman. It captures how intuitive and frictionless it feels for developers to get work into production. Improvements here often correlate with reduced cognitive load and faster cycles.

8. Regrettable attrition (org-level)

Tracks the percentage of high-performing engineers who leave. For executives, this metric translates developer experience into tangible retention costs and culture impact.

Quality metrics

Quality metrics measure the reliability and stability of software deployments, tracking how well the code performs in production and the health of development processes.

9. Change failure rate

A foundational DORA metric, measured at Lattice and Amplitude as incidents per deployment. A lower failure rate means safer deployments and fewer customer-facing outages.
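The arithmetic is simple; the hard part is consistently labeling which deployments "failed." A sketch with hypothetical monthly totals:

```python
# Hypothetical monthly totals from deploy and incident tooling.
# "Failed" here means a deploy that triggered an incident or rollback;
# your definition should be agreed on before trending the number.
deployments = 120
failed_deployments = 6

change_failure_rate = failed_deployments / deployments
print(f"change failure rate: {change_failure_rate:.1%}")
```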

10. Failed deployment recovery time (time to restore service)

Atlassian and GitLab track how long it takes to resolve incidents. Faster recovery protects customer trust and reduces business risk.
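A minimal sketch, assuming incident open/resolve timestamps are available from your incident management tool (the data below is invented):

```python
from datetime import datetime, timedelta

# Hypothetical (opened, resolved) timestamps from incident tooling.
incidents = [
    (datetime(2025, 9, 2, 10, 0), datetime(2025, 9, 2, 10, 45)),
    (datetime(2025, 9, 5, 22, 0), datetime(2025, 9, 6, 1, 0)),
    (datetime(2025, 9, 9, 14, 0), datetime(2025, 9, 9, 14, 20)),
]

# Restore time per incident, in minutes, plus the mean across incidents.
durations = sorted((end - start) / timedelta(minutes=1) for start, end in incidents)
mean_restore = sum(durations) / len(durations)
print(f"mean time to restore: {mean_restore:.0f} minutes")
```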

11. Perceived software quality

Captured through surveys, this metric reflects whether developers believe their systems are reliable, maintainable, and easy to work with. Often, perceived quality surfaces issues before they manifest in production.

12. Operational health and security metrics

Includes things like vulnerability remediation time, CI determinism, and test reliability. LinkedIn uses CI determinism to quantify the consistency of test outcomes across its massive codebase.

Impact metrics

Impact metrics measure how engineering work translates into business value, connecting development activities to organizational outcomes and strategic goals.

13. Percentage of time spent on new capabilities

Shows how much engineering capacity is devoted to innovation rather than maintenance or firefighting. Adyen uses this to ensure talent is focused on strategic work.

14. Initiative progress and ROI

Tracks whether engineering-led initiatives deliver their intended business outcomes. At Booking.com, this included measuring AI adoption’s impact on merge rates and developer satisfaction.

15. Revenue per engineer (org-level)

Provides a financial lens on productivity. Dropbox uses Core 4 metrics like this to connect engineering performance to board-level efficiency discussions.

16. R&D as percentage of revenue (org-level)

Measures how much of the company’s revenue is reinvested in engineering and innovation. This metric helps executives understand efficiency relative to peers and guides long-term allocation decisions.

You have developer productivity metrics. Now what?

Combining quantitative and qualitative data

Measuring these 16 metrics requires two distinct approaches that work together. Quantitative metrics come from your existing toolchain—deployment pipelines, version control systems, incident management tools, and CI/CD platforms—to track system numbers like lead time, deployment frequency, and change failure rate.

Qualitative insights capture the developer experience through continuous surveys that measure perceived delivery speed, software quality, and satisfaction with tools and processes. This dual approach ensures you’re not just seeing what’s happening in your systems, but understanding how to measure developer experience and productivity holistically.

Two types of metrics for different purposes

Think of these metrics as falling into two categories: diagnostic and improvement metrics. Diagnostic metrics like DORA metrics and DX Core 4 provide high-level trends collected monthly or quarterly—they’re your “annual blood panel” that shows overall engineering health and benefits from industry benchmarks for context.

Improvement metrics are collected daily or weekly and focus on smaller, actionable variables within teams’ control. For example, if your diagnostic metric shows high Change Failure Rate, your improvement metrics might track CI flakiness, batch size, and satisfaction with quality practices. Teams can act on these immediately rather than waiting for quarterly reviews.
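One way to make the diagnostic-to-improvement relationship concrete is a simple lookup structure. The metric names below are illustrative examples, not a DX-prescribed taxonomy:

```python
# Sketch: each diagnostic metric maps to the granular, team-controllable
# improvement metrics that influence it (names are hypothetical).
metric_map = {
    "change_failure_rate": [
        "ci_flakiness_rate",
        "median_pr_batch_size",
        "quality_practice_satisfaction",
    ],
    "lead_time": [
        "pr_review_turnaround_hours",
        "ci_pipeline_duration_minutes",
    ],
}

def improvement_metrics(diagnostic: str) -> list[str]:
    """Return the daily/weekly metrics a team can act on for a diagnostic."""
    return metric_map.get(diagnostic, [])

print(improvement_metrics("change_failure_rate"))
```

A team seeing a poor quarterly change failure rate would then pull up its mapped improvement metrics and act on those weekly, rather than waiting for the next diagnostic cycle.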

Making the connection actionable

The key is connecting diagnostic insights to improvement actions through what DX calls “metric mapping.” Start with a diagnostic metric, understand what processes influence it, then identify specific, granular measurements teams can act on daily.

When Booking.com measured their 16% AI productivity lift, they combined merge rate data with developer satisfaction surveys to validate that faster code delivery actually improved the developer experience. Platforms like DX automate this integration, pulling quantitative metrics from your toolchain while systematically collecting qualitative feedback, then correlating the two to reveal which improvements actually move the needle on both system performance and developer happiness.

For organizations looking to implement similar measurement approaches, resources like choosing a developer productivity metrics platform and guides on operationalizing developer productivity metrics provide practical frameworks for getting started.

Common missteps or traps to avoid

The speed trap

Focusing only on velocity metrics like story points or commits creates dangerous tunnel vision. Teams game the system, technical debt accumulates, and paradoxically, delivery slows down. Speed without effectiveness, quality, and impact leads to burnout and churn.

Vanity metrics over value

Many organizations track metrics that appear productive but fail to connect to business outcomes. Lines of code, hours logged, and tickets closed tell you nothing about whether engineering drives strategic value. Every metric must tie to a measurable business impact.

Survey fatigue without action

Collecting developer feedback without acting on insights breeds cynicism. Teams stop responding when surveys don’t lead to improvements. Successful organizations close the feedback loop by sharing results, implementing changes, and measuring impact.

One-size-fits-all approaches

Copying metrics from another company without understanding your own context is ineffective. What works for a startup differs from what works for an enterprise. The Core 4 framework is designed to adapt to an organization’s maturity, team structure, and business objectives.

Measurement bureaucracy

Avoid complex dashboards that overwhelm rather than inform. The best measurement systems are simple, automated, and actionable. Leaders should spend time acting on insights, not generating reports.

Building momentum with the right metrics

Developer productivity cannot be reduced to a single number. The lesson from companies like Dropbox, Booking.com, Adyen, and Google is that progress comes from considering the entire system, encompassing speed, effectiveness, quality, and impact together.

The organizations that win are those that measure with rigor, act with focus, and adapt continuously. By grounding improvement efforts in Core 4 metrics — and validating them with developer experience data — leaders can build engineering organizations that are not just faster, but healthier, more resilient, and better aligned to business outcomes.

Published
September 10, 2025