Inside Extend’s journey of rolling out a Git and Jira metrics tool

Brook Perry

Head of Marketing

At some point, engineering leaders inevitably face the question of which metrics to use to measure and understand developer productivity.

There are many approaches to this problem, one of which is “engineering intelligence” tools that provide engineering metrics based on data pulled from systems like GitHub and Jira. With numerous vendors marketing these solutions, some leaders are left wondering how effective the tools really are.

This article covers a real-world story shared by Matthew Schrepel, who leads the Developer Experience team at Extend. Over the past couple of years, Extend has iterated on multiple approaches to measuring software delivery, including initially rolling out a Git and Jira metrics tool. Matthew shares his experience rolling out the tool across several hundred engineers, and the outcomes after using it for a little over a year.

With that, it’s over to Matthew. 


Executives want information about how the company is doing. How are our teams? What are our issues? They want this information so they can make decisions that improve the situation at the company. Unfortunately, getting that signal in engineering is hard.

When one of our leadership teams brought up the idea of using one of the many DORA and GitHub metrics-y tools, I was opposed. In particular, I thought it would be too easy to consume the metrics from those tools and all of a sudden start creating weird targets. For example, the data might say “our Jira turnover was slower this week,” and we’d feel like we needed to do something about it.

Nevertheless, the leaders who wanted the tool had a clear idea of how they wanted to use it: for gut checks. As a manager or leader, you’re thinking “something is going on with this team,” so you dive into the data to see if your intuition is correct.

Once we rolled out the tool, exactly what I had feared happened. Engineers saw the metrics and changed their behavior, which is unavoidable: it’s human nature to shift your behavior when something is being scored, graded, or tracked. At the end of the day, this distorted our metrics and made them less useful.

Another challenge we faced with the tool was that it felt like an endless sea of dashboards, where you have to work out how to get signal out of the noise. Every time you asked a question, you had to come up with a new way to determine whether what you were doing was right.

Ultimately, we felt we weren’t getting what we needed. Instead, we connected with a tool that captured developer sentiment, and the signal started flowing much more consistently. We found a lot of value in what we were getting, and our executives did too. After using the new tool, our VP of Platform very quickly said, “This is the data that I trust,” meaning he trusted it more than the tool with all the dashboards. He found that the sentiment tool showed what the actual problems were more accurately than the tool showing us how many tickets developers were pushing each week.

Thanks to Matthew for sharing his team’s experience at Extend. Their story is helpful for leaders considering these tools for their own organizations. For more from Matthew and his team, listen to their interview on the Engineering Enablement podcast.

Published September 13, 2023
