August 2, 2023

How Google Measures Developer Productivity

This article was written from an interview with Collin Green and Ciera Jaspan, who lead the Engineering Productivity Research team at Google. If you'd prefer to listen to the discussion rather than read about it, go here.

Google’s investment in developer productivity may be daunting to most: the company has many teams dedicated to the problem, as well as a centralized research team that surfaces insights about how productivity can be improved. But while other organizations may not be able to match that investment, they can still learn from the underlying principles of how Google approaches the problem.

The Engineering Productivity Research team leads Google's efforts to measure developer productivity and distribute insights to the teams and leaders taking action to make improvements. In this article, we describe how the research team measures developer productivity in order to identify areas that can be improved. Specifically, we describe how they choose metrics and the methods they use for measurement.


Google has had to grow quickly into new businesses, which has meant learning how to make our engineers more productive. To do this, we needed to understand what makes them productive, identify inefficiencies in our engineering processes, and fix the identified problems. Then, we would repeat the cycle as needed in a continuous improvement loop. By doing this, we would be able to scale our engineering organization with the increased demand on it.
— Ciera Jaspan // Engineering Productivity Researcher, Google

What Google measures: speed, ease, and quality

The research team serves various stakeholders with different data needs: VPs want to get a broad sense of how engineering is doing and whether there are fires that need their attention; infrastructure teams require data on specific tools so they can make improvements. Regardless of the stakeholder, the research team always follows the Goals, Signals, Metrics (GSM) approach to determine what to measure.

The first step in this approach is for stakeholders to identify what they want to understand at a high level. The research team encourages stakeholders to define their goals in terms of speed, ease, and quality. This way, stakeholders get a more complete picture instead of just measuring one aspect and forgetting its tradeoffs.  

For example, a stakeholder may want to understand:

  • Speed: How quickly can engineers accomplish their tasks?
  • Ease: How much cognitive load is required to complete a task? 
  • Quality: What is the quality of the code being produced? 

These are just examples of questions a stakeholder may have. Having established these questions, the research team can then select metrics.
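
To make the GSM breakdown concrete, here is a minimal sketch of how a team might record goals, signals, and metrics for the speed dimension. The specific signal and metric names are illustrative assumptions, not Google's actual metrics.

```python
from dataclasses import dataclass, field

@dataclass
class Metric:
    name: str
    source: str  # e.g. "survey" or "logs"

@dataclass
class Signal:
    description: str
    metrics: list[Metric] = field(default_factory=list)

@dataclass
class Goal:
    dimension: str   # "speed", "ease", or "quality"
    question: str    # what the stakeholder wants to understand
    signals: list[Signal] = field(default_factory=list)

# Illustrative GSM breakdown for the "speed" dimension (names are hypothetical).
speed_goal = Goal(
    dimension="speed",
    question="How quickly can engineers accomplish their tasks?",
    signals=[
        Signal(
            description="Engineers spend most of their time making progress on tasks",
            metrics=[
                Metric(name="self_reported_velocity", source="survey"),
                Metric(name="active_coding_time", source="logs"),
            ],
        )
    ],
)
```

Writing the breakdown down in this order, goal first and metrics last, mirrors the point of the framework: metrics are only chosen after the goal and its signals are agreed on.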

How metrics are captured: combining data from surveys and systems

The research team uses multiple metrics for each of the three aspects, speed, ease, and quality, in order to understand each one more holistically. For example, for speed, the team will collect metrics from both self-reported data and log data.

This is what researchers call using a “mixed methods” approach: the team uses qualitative and quantitative metrics together. Google captures these metrics through developer surveys and data from systems. The following sections cover these channels in further detail.
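
As a rough illustration of the mixed-methods idea, the sketch below pairs a self-reported survey score with a log-derived measure for the same speed dimension. The team name, field names, and numbers are made up for illustration; they are not Google's data.

```python
# Qualitative: quarterly survey answers (1-5 satisfaction scale, assumed).
survey_responses = [
    {"team": "search-infra", "velocity_satisfaction": 3},
    {"team": "search-infra", "velocity_satisfaction": 4},
]

# Quantitative: metrics derived from tool logs (values are hypothetical).
log_metrics = [
    {"team": "search-infra", "median_active_coding_hours_per_day": 2.7},
]

def speed_summary(team: str) -> dict:
    """Combine both sources so neither is read in isolation."""
    scores = [r["velocity_satisfaction"] for r in survey_responses if r["team"] == team]
    logs = next(m for m in log_metrics if m["team"] == team)
    return {
        "team": team,
        "avg_velocity_satisfaction": sum(scores) / len(scores),
        "median_active_coding_hours_per_day": logs["median_active_coding_hours_per_day"],
    }

print(speed_summary("search-infra"))
```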


We lean into a variety of methods and use them together, not in isolation, so that we get a complete picture of what the developer experience is like. We do qualitative analysis, log data, interviews — a wide range of things to understand exactly what’s happening as best as we can and as holistically as we can.
— Collin Green // Engineering Productivity Researcher, Google

Capturing qualitative metrics through surveys

Google’s research team collects qualitative metrics by conducting a quarterly Engineering Satisfaction survey, which measures developer experience across a wide range of tools, processes, and activities. The survey includes questions about specific topics and sends follow-up open-ended questions to developers when they’ve said they’re less satisfied with an area. 
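
The follow-up behavior described above can be sketched as a simple rule: when a rating for an area falls below a threshold, attach an open-ended question about that area. The threshold and question wording here are assumptions, not Google's actual survey logic.

```python
FOLLOW_UP_THRESHOLD = 2  # on an assumed 1-5 satisfaction scale

def follow_up_questions(ratings: dict[str, int]) -> list[str]:
    """Return an open-ended follow-up for each area rated at or below the threshold."""
    return [
        f"You indicated low satisfaction with {area}. What would improve it?"
        for area, score in ratings.items()
        if score <= FOLLOW_UP_THRESHOLD
    ]

print(follow_up_questions({"build tools": 2, "code review": 4}))
```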

The questions derive from the GSM framework. While the survey does evolve, the research team has found consistency to be powerful: many of the metrics stay the same so that the team can collect data over a long period of time.

To improve survey participation, Google uses sampling: specifically, it splits the developer population into three groups that are surveyed at multiple points throughout the year.
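
A minimal sketch of that sampling scheme, assuming a stable hash is used to assign each developer to one of three cohorts (the assignment mechanism is an assumption for illustration):

```python
import hashlib

def cohort(username: str, num_cohorts: int = 3) -> int:
    """Assign each developer to a stable cohort (0, 1, or 2)."""
    digest = hashlib.sha256(username.encode()).hexdigest()
    return int(digest, 16) % num_cohorts

for dev in ["alice", "bob", "carol", "dan"]:
    print(dev, "-> cohort", cohort(dev))
```

Because the assignment is deterministic, each developer stays in the same cohort across quarters, and no one is asked to fill out every survey.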

Once the survey analysis is complete, the results are delivered to everyone in engineering, from VPs to individual contributors. Anyone in the organization can view the dashboards and query the results. The data is aggregated, which means individuals cannot be identified. 


One key benefit of surveys is that they can help you measure things that you don't know how to measure objectively. They can also help you measure things that are, in principle, not measurable objectively. For example, technical debt is something we’ve struggled to find good objective metrics for, but that we can measure with surveys.
— Collin Green // Engineering Productivity Researcher, Google

In addition to the quarterly survey, the research team has built a tool for real-time surveys when a developer completes a workflow. (This is an approach similar to LinkedIn’s real-time feedback system.) This system, called “experience sampling,” allows tool owners to get feedback from developers while they’re using the internal tools. 
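
A hypothetical sketch of experience sampling: prompt a fraction of developers for feedback immediately after they complete a workflow in an internal tool. The event hook, sampling rate, and prompt text are assumptions, not Google's implementation.

```python
import random

SAMPLING_RATE = 0.1  # only prompt a fraction of completions to limit survey fatigue

def prompt_survey(user: str, question: str) -> None:
    # In a real system this would surface an in-tool prompt; here we just print.
    print(f"[survey -> {user}] {question}")

def on_workflow_completed(user: str, tool: str, workflow: str) -> None:
    """Hook that a tool could call when a developer finishes a workflow."""
    if random.random() < SAMPLING_RATE:
        prompt_survey(user, f"How was your experience with {workflow} in {tool}?")

on_workflow_completed("alice", "code review tool", "submitting a change for review")
```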

Capturing quantitative metrics through systems

The research team also collects quantitative metrics through a system that ingests logs from multiple developer tools. Within this system, the research team has created what it calls “sessions,” which are groups of related events. Each session represents a contiguous block of time when the engineer works on a single task, such as coding or code review. This provides a lens for viewing developer workflows. 
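
As a simplified sketch of how events might be grouped into sessions, the code below merges consecutive log events for the same task until the gap between events exceeds an idle threshold. The event format and the 10-minute threshold are assumptions for illustration, not Google's actual logic.

```python
from dataclasses import dataclass

IDLE_GAP_SECONDS = 10 * 60  # assumed gap that ends a session

@dataclass
class Event:
    timestamp: float  # seconds since epoch
    task: str         # e.g. "coding", "code_review"

def sessionize(events: list[Event]) -> list[dict]:
    """Group events into contiguous blocks of time spent on a single task."""
    sessions: list[dict] = []
    for event in sorted(events, key=lambda e: e.timestamp):
        last = sessions[-1] if sessions else None
        if (last and last["task"] == event.task
                and event.timestamp - last["end"] <= IDLE_GAP_SECONDS):
            last["end"] = event.timestamp  # extend the current session
        else:
            sessions.append({"task": event.task,
                             "start": event.timestamp,
                             "end": event.timestamp})
    return sessions
```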

This system is also used to derive other quantitative metrics. For example, the research team captures coding time, reviewing time, shepherding time (time spent addressing code-review feedback), investigation time (time spent reading documentation), development time, email time, and meeting time. 
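
Building on the sessionization sketch above, summing session durations per category yields time-based metrics like coding time and reviewing time. Again, this is a sketch of the general idea, not Google's pipeline.

```python
from collections import defaultdict

def time_by_category(sessions: list[dict]) -> dict[str, float]:
    """Total seconds spent in each task category across all sessions."""
    totals: dict[str, float] = defaultdict(float)
    for s in sessions:
        totals[s["task"]] += s["end"] - s["start"]
    return dict(totals)
```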


Our logs-based metrics are useful to understand developer behavior at scale. We can collect metrics like active coding time for every engineer all the time.
— Collin Green // Engineering Productivity Researcher, Google

Summary

Google’s approach to measuring developer productivity should provide some inspiration for others building out their measurement programs. To summarize its approach:

  • Before choosing metrics, Google asks leaders to determine what they want to understand at a high level. It encourages leaders to establish goals that balance each other: they set goals for speed, ease, and quality. 
  • The research team uses multiple metrics to understand each goal. It uses a mixed-methods approach to measurement, which means capturing qualitative and quantitative metrics. This gives stakeholders a more complete understanding of productivity.