
Setting targets for developer productivity metrics

Principles for setting goals around developer productivity metrics while avoiding common pitfalls.

This post was originally published in Engineering Enablement, DX’s newsletter dedicated to sharing research and perspectives on developer productivity.

Setting targets for developer productivity metrics takes careful consideration. In some cases, setting the wrong goals can backfire by creating unintended consequences. Teams might start focusing on optimizing the numbers instead of the system, especially if there are anti-patterns like tying bonuses to individual metrics, or setting blanket targets on metrics teams can’t directly control.

At the same time, leaders want to drive meaningful improvement and use goals for motivation and accountability. Teams want transparency and direction on where to focus. Even so, it can be difficult to figure out what kind of targets are realistic in the first place.

These three practices help engineering leaders avoid pitfalls and encourage their teams to use data to improve the system, leading to the right outcomes:

  • Set goals on the right type of metrics
  • Use multi-dimensional systems of measurement
  • Consider organizational context when setting the targets themselves

Without these three things, organizations run the risk of developers feeling mistrusted and micromanaged, teams gaming metrics rather than improving systems, and metrics becoming distorted so they no longer represent reality.

Set team goals on controllable input metrics, not output metrics

Not all metrics are immediately actionable: some measure big-picture trends and are summary metrics influenced by many other factors. Setting goals on these kinds of metrics, known as output metrics, can incentivize the wrong type of behavior and disempower developers, who feel they can’t meaningfully influence the numbers. A different type of metric, the controllable input metric, is very actionable at the team level and contributes to improving the system. Being able to distinguish between these types of metrics is an important skill for any DevEx leader.

  • Output metrics: These metrics represent outcomes you want to achieve, but they are not directly actionable. Because they summarize many other factors, they work best as diagnostic tools rather than as numbers to be directly influenced by a single process, tool, or action. Some examples include:
    • Change Failure Rate
    • PR Throughput
  • Controllable input metrics: These measure behaviors or processes that teams directly influence, which then result in changes to the output metrics. For example, code review turnaround SLAs are controllable and can improve PR throughput, and reducing flaky CI tests can improve Change Failure Rate.

This pattern is not unique to developer experience and can be seen in other parts of life. Let’s imagine you have low levels of iron in your blood. This level is an output metric, and setting a goal on it—without mapping it to controllable input metrics—can make improvement seem out of reach. Instead, you want to focus on controllable input metrics like taking supplements, eating iron-rich foods, and avoiding coffee with meals. Doing these activities will lead to a change in the output metric, which makes them more suitable for goal-setting. Similarly, engineering teams need to identify the actionable inputs that influence the larger output metrics.

Depending on an organization’s size and complexity, it might still be preferable to set goals on output metrics, like improving Change Failure Rate, in order to simplify reporting and align on a single goal. In cases like this, it’s essential that frontline teams go through the process of metric mapping to break down the output metric into controllable input metrics, and that those input metrics have their own goals and structures of reinforcement around them.
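As a sketch of what metric mapping might look like in practice, the relationship between output metrics and controllable inputs can be captured in a simple lookup structure. The input metric names below are illustrative assumptions, not a standard taxonomy; only Change Failure Rate and PR throughput come from the examples above.

```python
# Hypothetical metric map: each output metric lists the controllable
# input metrics a team believes drive it. Input names are illustrative.
METRIC_MAP = {
    "change_failure_rate": [
        "flaky_ci_tests",           # count of known-flaky CI tests
        "pre_merge_test_coverage",  # % of changes gated by tests
    ],
    "pr_throughput": [
        "review_turnaround_hours",  # time to first review
        "pr_size_lines",            # median PR size
    ],
}

def inputs_for(output_metric: str) -> list[str]:
    """Return the controllable input metrics mapped to an output metric."""
    return METRIC_MAP.get(output_metric, [])

print(inputs_for("pr_throughput"))
# ['review_turnaround_hours', 'pr_size_lines']
```

Making the map explicit like this lets frontline teams set goals and reinforcement structures on the inputs while leadership continues to report on the outputs.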

Avoid gamification with multi-dimensional measurement and aligned incentives

A common objection to setting targets around metrics is the fear that developers will game the system. Gamification is the phenomenon where individuals distort the data to make the metrics look good without actually improving the system. Goodhart’s Law describes this phenomenon, commonly summarized as “when a measure becomes a target, it ceases to be a good measure.”

Gamification is dangerous for organizations because while the metrics show surface-level improvements, the reality is that the systems are usually worse off—but those negative changes are largely invisible because they aren’t being measured properly.

Setting goals amplifies the incentive for individuals to game the system, because goals create accountability and pressure to deliver specific results. When people know they’re being evaluated against a specific number, especially if rewards or advancement opportunities depend on it, the temptation to find shortcuts or manipulate metrics becomes stronger than the motivation to make genuine improvements that might take longer to reflect in the measurements.

A well-designed system of measurement and an intentional culture around using metrics can help protect against the effects of gamification. We know how humans behave when metrics are used for measurement and goal-setting. With that knowledge, it’s up to us to design better systems.

  • Use multi-dimensional measurements instead of one-dimensional metrics. When you track multiple related metrics together, manipulating one metric usually affects others negatively, making gamification more obvious. DX Core 4 is an example of a multi-dimensional system of measurement.
  • Focus on learning and improvement rather than incentivizing or rewarding hitting specific thresholds.
  • Give teams time and autonomy to address the root causes affecting metrics. When teams feel pressured without having the resources or authority to make real improvements, they’re more likely to find ways to adjust the numbers without fixing the system.

Set realistic targets based on organizational context and strategy

When determining actual target values, one size doesn’t fit all. Consider:

  • Past performance: Different teams start from different places. Instead of blanket targets across the organization, consider percentage improvements from each team’s current baseline.
  • External benchmarks: Industry benchmarks (like the 75th percentile) provide useful reference points, but remember that context matters.
  • Effort curves: Improvement isn’t linear. For example, moving from the 50th to 75th percentile often requires less effort than moving from the 75th to 90th percentile.
  • Metric characteristics: For some metrics, higher isn’t always better (e.g., extremely short PR cycle times might indicate inadequate code reviews). Some metrics need SLAs or thresholds rather than continuous improvement targets.

Above all, remember that metrics don’t replace strategy. They enhance it. Even with robust metrics, you still need human judgment to set appropriate goals in your specific context.

Getting started

To apply these principles in your organization:

  1. Clearly distinguish between controllable input metrics and output metrics
  2. Identify the specific input metrics teams can influence
  3. Show how these inputs connect to larger organizational goals
  4. Set appropriate targets on those controllable metrics
  5. Ensure teams have time and resources to address improvements
  6. Monitor both input and output metrics to validate your approach

By following these guidelines, you can create a more productive environment focused on genuine system improvement rather than superficial number manipulation.

 

Published
May 28, 2025