Podcast

Behind the scenes with Extend’s developer experience team

Matthew and Luke lead Extend’s Developer Experience team, a team that has approached its work in a way that is more forward-thinking than most. In this episode, they cover how they deliver impact at multiple levels of the organization, their journey with productivity metrics, and how they’ve made DevEx a C-level concern.

Timestamps

  • (1:40) How the DevEx team started and where it fits at Extend
  • (5:08) Tradeoffs of DevEx reporting into Platform 
  • (6:40) The mandate and tasks they focus on
  • (12:07) The impact of learning and development efforts
  • (16:33) How to drive team-level improvements 
  • (18:44) Why developer experience is becoming more prevalent
  • (26:17) How they made DevEx a C-level concern
  • (30:27) Their journey with productivity metrics 
  • (33:10) Advice for presenting DevEx data to executives 
  • (34:52) The team’s experience using git metrics tools
  • (48:30) Being rigorous in leveraging metrics 

Listen to this episode on Spotify, Apple Podcasts, Pocket Casts, Overcast, or wherever you listen to podcasts.

Transcript

Abi: Matthew, Luke, welcome to the show. Excited to chat today.

Matthew: Me too. It’s been a fun ride so far.

Luke: Thanks for having us.

Abi: Well, you are up to so much interesting stuff at Extend. I want to just dive right into it. I think listeners are going to be really inspired, maybe envious of the work you’re able to do at Extend and the impact you’ve been able to have. Start by telling listeners a little bit about your team and how it fits into the rest of your organization?

Matthew: Well, it’s a big story in a sense. I was a normal employee at Extend at the very beginning, just some engineer. I got asked to do a bunch of things related to hiring. The long story short on that one is that it became clear that I cared a fair amount about engineering culture and that is essentially where this developer experience team came from that I’m the manager of. I got really lucky because the technical founder of the company essentially allowed me to create this team in my own way to just do what I wanted to do with the team because he liked what I was doing before.

The developer experience team eventually came together and the whole goal of the team was essentially to do things that would shape engineering culture. Very quickly we got away from thinking about it like a developer productivity team. We didn’t want to talk about that at all. The goal was not to explicitly make developers more productive or say that we’re going to move metrics up and to the right. The whole idea here was we’re going to improve working at this company by making it easier to get things done. And as a side effect, we will move those metrics, but that’s not our mission.

Luke: I think for a lot of people in this company, maybe even in leadership, which we’ll get to a little bit later, we’ve experienced all different types of engineering culture, the good, the bad, and the ugly. I think that some of the decisions around forming this team and the work that we’ve been doing have come out of past experiences that maybe have been really negative. I think we’ve realized we have the people who have the heart to create a better culture and the skillset to be able to do that. And hey, let’s give them a platform and let’s give them the space to be able to do that. That has been very exciting to be a part of because driving feature development and feature work is always important, but to be able to shave off a team to augment all of that and help improve the day-to-day of all of that has been really incredible to be a part of.

Abi: Yeah, it’s clear you have such a purposeful mission and have been given a lot of trust from leadership and, like we said, we’ll talk more about that later. Love the characterization of focusing not just on velocity, as it’s commonly called in the industry, but on really improving the lives of developers. One question I have, do you have a one-line mission statement or North Star that you guys have published internally? And if so, would you mind sharing it?

Matthew: Honestly, it’s so stupid simple that it’s barely even worth saying, or maybe it’s the thing most worth saying, which is to improve the lives of all engineers at Extend. That’s it.

Abi: Yeah, I like that. Share a little bit more about how you fit into the org, structurally speaking. Do you report directly to the CTO, or are you within an umbrella organization like an infra or platform organization?

Matthew: Yeah, there are two big umbrellas within our engineering organization and that is product engineering and platform engineering. We are under the platform engineering umbrella. I report to the vice president of platform engineering and he reports to the CTO. Our team is one layer away.

Abi: This is a question I’ve never really asked anyone. This org structure is common. Having spent a couple of years in this role now, what do you think of that structure? Does it make sense for the developer experience team to be a part of the platform organization, or have you thought about it differently at different points?

Matthew: We have thought about it differently at different points and we have gone through a lot of reorganization over the course of the lifetime of Extend. This is where we’ve settled for the moment and it has trade-offs, as with everything else. The benefit of being on the platform side is that the other people on the platform side are DevOps and the security team, and all of those people have a lot to do with things that affect every engineer at Extend. The fact that we do a lot of work that really does affect every engineer and affect every system makes sense on the platform side. The part that hurts, or the trade-off that we get from being on the platform side, is not being as well attached to the product engineering side. All of the people who are making features, just all of those boots on the ground, we’re a little detached from them and we have to work extra hard to make sure that we’re actually engaging with them to figure out what their problems are so that we can go and solve them.

Abi: I want to move into sharing just a little bit about what you’re actually doing. This DevEx mission sounds awesome. You have your org structure figured out, but I think a lot of listeners are wondering like, “What are these teams actually doing?” Maybe you could share tactically what you are actually doing right now.

Matthew: Yeah, this is interesting because the team is the least centrally focused, which I mean to say we aren’t all doing one task together and then getting that done. The team has six people on it. Luke is our learning and development specialist, so we’re always worried about producing documentation, making videos, producing an internal newsletter for the company, making our own podcasts to get to know other people within the company. All of that’s going on and Luke is doing that on his own. Then there’s another guy whose name is Mike Golus who is on the team, and his job is to sit in between design and engineering. He is making the component library from the engineering side and then organizing the design library for the designers on the design side. He interfaces with all of those people and the engineers to basically produce our component library and make sure that the design language of that component library matches the engineering language.

Like in Figma, we make sure that the way that you refer to what would be the props on the other side for React components, that those line up, so that engineers and designers can speak the same language. Then we have someone who came from the DevOps side. One person who came from the testing side who’s very worried about helping people with the testability of all of these things and spending a lot of time working in our CI system and then producing automated tooling. 
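
As a rough illustration of what that alignment can look like (the component and prop names here are hypothetical, not Extend’s actual library), the idea is that the React props mirror the variant properties a designer sees in Figma:

```tsx
// Hypothetical sketch: the React props mirror the Figma variant properties
// ("Variant", "Size", "State"), so designers and engineers share one vocabulary.
import React from "react";

interface ButtonProps {
  /** Matches the Figma "Variant" property. */
  variant: "primary" | "secondary" | "destructive";
  /** Matches the Figma "Size" property. */
  size: "small" | "medium" | "large";
  /** Matches the Figma "State = Disabled" variant. */
  disabled?: boolean;
  onClick?: () => void;
  children: React.ReactNode;
}

export function Button({ variant, size, disabled = false, onClick, children }: ButtonProps) {
  return (
    <button className={`btn btn--${variant} btn--${size}`} disabled={disabled} onClick={onClick}>
      {children}
    </button>
  );
}
```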

And then a couple of engineers who are working on the broad efforts on the engineering front. We had this huge monorepo and it was causing us a ton of problems because everything was a big tangle of code, and we’re turning that into many repos. We produced all of the documentation to make that go. We migrated a few services ourselves and now we’re having that be a self-serve, team-by-team thing, which allows us to do a bunch of upgrading and then let teams just run faster and make more choices on their own, because the monorepo meant that they were hampered by a single configuration for everything.

Next step is local development. We’re rolling that whole thing out. We work with serverless for everything, and serverless is difficult to do local development for. We figured out a way to do some of that stuff and we’re going to roll that out by working with all of the teams. Just pair with them for a bit, show them all the ropes of what it can and can’t do, and here’s a scenario, here’s how you debug such a thing. Then schema testing is our next big initiative, which is ensuring that the contract that some service publishes is actually correct. Rather than me writing my own way to call that service from some other service, they just publish their client and then I use their client to call them. We can just use that client to test the service and say, “Yes, the client is valid.”

Then we get rid of a bunch of integration testing, which makes life easier for everyone else and for our CI system, because everything doesn’t have to test everything to make anything deploy. And then we’ve got the testing architect on the team as well, who fully implemented Cypress across all of our systems and is now doing Cypress Component Testing and implementing that in our client applications. He works really closely with the Cypress devs and they actually produce code to fix all of our problems, and that’s been super handy.
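
A minimal sketch of how that published-client contract test might look, assuming TypeScript and a Jest-style runner (the service, package, and method names here are hypothetical): once the published client passes against the service, consuming teams can treat the client as the contract and drop their own cross-service integration tests.

```typescript
// Hypothetical sketch: the "orders" service publishes its own typed client,
// and consumers exercise that client against the deployed service instead of
// hand-rolling HTTP calls. If the published client (the contract) passes,
// consuming services don't need their own integration tests for it.
import { OrdersClient } from "@example/orders-client"; // published by the owning team

describe("orders service contract", () => {
  const client = new OrdersClient({
    baseUrl: process.env.ORDERS_URL ?? "http://localhost:4000",
  });

  it("creates an order and reads it back through the published client", async () => {
    const created = await client.createOrder({ sku: "ABC-123", quantity: 2 });
    expect(created.id).toBeDefined();

    const fetched = await client.getOrder(created.id);
    expect(fetched.sku).toBe("ABC-123");
    expect(fetched.quantity).toBe(2);
  });
});
```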

Luke: I think if I could summarize, a lot of what we do is work to understand what the current or upcoming problems might be for the engineers at large. Sometimes the solutions there are tooling. Sometimes they’re new technologies. Sometimes they’re process changes. Sometimes they’re a large-scale shift, like monorepo to polyrepo. Our team has a variety of different expertise and we’re able to come together and work toward facilitating those changes, those improvements, the development of new tools and tooling, and the delivery of those to the broader organization, trying to stay ahead of the needs as best as we can so that those teams who are developing features, who are doing product work, have minimal roadblocks. That’s really how I look at the work that we do.

Matthew: That’s a really good point. There’s one thing worth mentioning here, which is that the mandate of our team says that we should solve whatever problem with the best tool available. And not always is that code. Sometimes this is a process change. Sometimes it’s a video documentation. Sometimes it’s talking with directors somewhere else to implement some other type of change. Code is one of the things that we do, but it’s definitely nowhere near the only thing we do.

Abi: Yeah, thanks so much for that overview. One thing that struck me, Luke, your focus area around learning and development, the internal newsletter. I would love to ask you, how do you think about the impact of that? What kind of impact do you ultimately feel like you’re trying to drive with those efforts?

Luke: Yeah, that’s a good question. There’s a few things. I think culture is one of the clear ones, right? Everyone talks about culture and a lot of times no one really knows, okay, like, “How do we actually impact culture and shift it in a positive direction?” And that’s something that we’ve put a lot of time, thought and energy into, and the results have been really good. I can share a little bit more about that, but again, thinking back to experiences that we’ve had in the past with company culture, engineering culture that is maybe toxic or at best just neutral, right? We’ve stood back and thought about how can we make this an amazing culture but maybe not do it in a way where we have pool tables in every room and massage studios. What are the real things that really matter in culture?

I think encouraging a culture of learning has been huge and there’s a variety of different ways that we’ve done that, right? That goes into some of the resources that we’re developing. I work closely with some of our leaders to really understand the direction that we’re going in six months, in a year, in two years: where are we going when it comes to our infrastructure, when it comes to the way that we’re conceptualizing the code that we’re writing and the systems that we’re building. And then let’s begin to build out some in-house content around those things and begin to educate the rest of the organization around those things. That’s like a win-win-win, because those leaders have an opportunity to share their knowledge and expertise. They’re able to become a thought leader within the organization. The rest of the team gets to learn those concepts as just a benefit of career growth.

And then when we do begin to make shifts in the company towards those directions, everyone is already on the same page. They’ve already been prepared. That’s one example of how that has gone and how that’s been really good. Something else that we’ve invested really heavily into is our onboarding. We realize we’re using some more novel, cutting-edge technologies, and engineers coming into the company perhaps don’t have a lot of experience with those. How can we shorten that learning curve and really equip them to dive in, begin feeling more comfortable in our ecosystem, contribute sooner and faster, and really understand better how we’re doing things and why? Matthew and I, a while back, worked together and built a backend onboarding bootcamp. That’s like A to Z of all the main things that you would need to know.

We wrote all the content; it’s like one-half lecture, one-half practical hands-on coding. Engineers that come into the company go through that and that really gets them up to speed in a powerful way. What’s been cool is that we’ve actually been able to send some of our existing engineers back through that, maybe the whole thing, maybe one part of it. We’re able to use it as an augment to other learning resources that we’ve had and the feedback has been really incredible. The number of people that have worked at other really big companies who have said, “I have never had an onboarding experience like this before.” That’s been really incredible to hear from some of our team members. I don’t know if that fully answers your question, but that’s a little bit of an insight into some of what we do.

Abi: Absolutely. Well, I love everything you shared because it gets at a question that was on my mind, which is how do you lift up the whole organization? You talked about how there are some areas where you guys are building tooling to impact an area, solve a problem, and then other times it’s how do you affect the culture or processes, or knowledge in your case, of folks across the organization. I was just having a conversation with Manuel Pais, one of the co-authors of Team Topologies, and I asked him, “Hey, you have this concept of enabling teams. How do you actually enable other teams? What are you actually practically doing?” His response was curation of knowledge. Exactly what you guys are doing. I love this concrete example of that, but I want to ask you, how else do you guys think about lifting up and driving the team-level improvement, the bottom-up, tide-that-lifts-all-boats side of this, if you will? How do you guys tactically think about and execute on that?

Matthew: I think there are two primary aspects of that. One being just figure out what the roadblocks are, and if that roadblock can be removed, remove it. The other big one is lead by example. We have a lot of staff-level engineers on my team and those people are expected to be the best in the company, and they behave like that too. We think that we are an example. When we’re visible, we try to be the best version of ourselves. We do stuff to pair with teams to roll out features and do things with them instead of being a shadowy team behind the scenes just making code changes and saying things like, “Merge this and you’ll be better off.”

Luke: Yeah. I think a lot of times we can try to do top down improvement campaigns, right? Where we work through the management structure to try to improve things and with some of our internal blog, our internal podcast, our lunch and learns, I try to connect the higher level leaders directly with those team members who are boots on the ground and directly expose them to new ideas and maybe different ways of doing things. It doesn’t have to filter all the way down through the structure. We’re able to just really connect people like, “Let’s all get around a virtual table together and talk about some different ways that we could think about certain problems.” I think that is really helpful because it allows members of our product teams to begin to think differently about how they’re writing software, how they’re building code, and we don’t have to try to push as much from the top down, which has been good.

Abi: Yeah, that’s great. Well, I want to shift topics a little bit. This is actually a question you asked me, Matthew, before the show. I’m going to first ask you. Why do you think developer experience is becoming such an industry-wide trend now?

Matthew: Yeah, it’s a question that I’ve been rolling around in my head for a little bit because I see a lot of stuff related to developer experience teams being formed at fairly large companies. I think we’re probably an aberration in the sense that we’re not a very large company, but we have this team, which I think basically comes down to the forward-thinking leadership of the technical founder of the company. But it seems like all of a sudden this has become an important topic within the industry and I really don’t know why. I would assume that you have slightly more insight into this than I do. Do you have any thoughts on what this is? Why now?

Abi: Yeah, it’s interesting. I have lots of conversations about this and one interesting conversation I was just having with Dr. Nicole Forsgren, creator of DORA and SPACE, and I was like, “Yeah, Nicole, this is like a new approach. This is a new thing.” And she’s like, “It’s not new. It’s not new at all. We’ve been doing this for decades at Microsoft.” I think there’s a newness to it and there’s a non-newness to it. I think there are two things going on that I see. One is actually, I think you yourselves touched on it at the beginning of this call, which is there is just a rebranding, a slight tweaking of something that’s existed for a long time, which is dedicated developer productivity teams and infrastructure organizations. These teams have always existed to drive productivity and experience within organizations. However, they weren’t called developer experience teams until, I think, fairly recently.

When I talk to leaders about that shift, they actually describe things very similar to what you did. They talk about how the word productivity felt too narrow. It didn’t really fully capture the full scope and mission of what they were trying to do within their organizations. I think one piece of this is just a rebranding, an appropriate rebranding of work that’s always been done. I think the other thing that’s really sparking this trend, if you take a step back, is that the new wave of development infrastructure, things like Kubernetes, has reached a point of relative maturity in terms of adoption, and that has opened up this whole new can of worms of, “Oh man, all these new tools, they’re actually really hard for developers to use and it’s causing a lot of problems. It’s actually slowing us down. Should we have even done this?” The same story we saw with microservices five, six years ago is, I think, playing out with infrastructure. I think that’s created a burning need, like a painkiller-level problem, around developer experience when it pertains to infrastructure. That’s, I think, a tide that a lot of organizations are adapting around as well. I don’t know, what do you guys think? This is kind of my two cents.

Matthew: Now that you mention it, this brings up part of why this team exists at this company, which is we decided from the very beginning that we were going to write infrastructure as code and have that be a part of our primary code base. Every engineer was going to write infrastructure as code. For a lot of engineers, this is brand new, and we were using a framework that is not super well known. That itself was pretty new and it was painful for the engineers, and we did need serious stuff around it. I think part of what you touched on there that makes me think about this is the difference between the way a lot of DevOps engineers write code and the way that backend or frontend engineers write code. This is a broad, I don’t know, it sounds mean to say realistically, but I think that people who are more experienced in writing code that other people read, when you’re making features, you really have to care a fair amount about the interface. It has to be relatively easy to use, easy to understand.

When you’re writing DevOps code, it’s just for other DevOps engineers. Not a lot of people actually see that code. It just needs to work. That presented a problem where the DevOps people were writing code for the infrastructure as code at a lot of places, not just our company, and it was hard to use. I think that something had to go around all of that stuff to make what people did every day a little easier to use, and we’re still going down that vein. It’s still hard.

Abi: Yeah, I agree. That’s an interesting observation. 

Luke: Yeah, I think too, thinking about this broader question of developer experience and why is this such a big thing now? I wonder if it’s not partially related to how as a society we are continuing to mature along the lines of understanding the importance of emotions. I think each generation is growing in that as we move along, and I think maybe we’re at a point now where we understand that engineers aren’t just components of computing power that we plug into a system and they generate code and generate value. They’re actually people and they actually have emotions and those emotions correlate to that value in some way. Let’s figure out how that works and begin to measure that and then we can take that information and begin to make adjustments that improve the lives of these engineers in such a way that it does have a bottom line impact, but it also helps shape culture and the company in a positive way as well.

Abi: It’s almost comical when you make that point around, “Oh, we’re coming around to realize developers are humans and their emotions matter.” I’m going to read you guys something. This is from an article just a couple of months ago from productivity researchers at Google. In their paper they have a section that says, “Software developers are humans, all of them. It seems that this should be an uncontroversial assertion, and yet we find ourselves making this assertion on a regular basis.” They’re making the exact same point as you. We’re still struggling to view software developers and development as a human problem, not an assembly line or factory widget. Love that point. I also want to add, earlier before the show, we talked a little bit about what’s happening industry-wide. I think the biggest shift that I’ve seen in the past eight months is the shift in focus from retention and onboarding, which were huge points of concern about 12 months ago when the market was super competitive from a talent standpoint.

Companies were still in hypergrowth hiring mode up until recently, and now companies have slowed down and the focus is on efficiency. Now the irony is it’s actually the same stuff that you’ve got to solve to tackle efficiency as you would to tackle retention and onboarding. When our research team is analyzing the aggregate-level data, we see roughly similar correlations for the top drivers of efficiency and productivity as we do for retention and developer satisfaction. All the decades of research also back up that there’s a tight interdependency between developer happiness and productivity.

That’s something I wanted to share as well. I want to ask you guys, one thing that I think is really special and remarkable about what you’re doing at Extend is that you’ve been able to elevate developer experience to a C-level topic, right? It’s something that your C-suite cares about. You mentioned at the beginning you have a founder who cares, but I’m sure there’s been an evolution. So I want to ask, what’s been that evolution in terms of developer experience becoming a C-level concern?

Matthew: Yeah, at first, as the team formed, we were just designing our own roadmap to get roadblocks out of the way. We had an idea that we wanted to survey people to figure out what the roadblocks were in the first place. Luke and I put together a Google Form survey and it was difficult to work with because it just produces a crappy spreadsheet in the back. I don’t know if this is the same everywhere, but our engineers do not read email. Saying like, “Hey, here’s a survey, look at your email.” No. We just couldn’t get people to actually participate in the survey. The people who did, it was hard to slice and dice the data and then to do something like send that out again was another whole big ball of effort. There wasn’t a good way to just automate that.

We came up with a different way of doing that. This was all based on SPACE metrics and we use an app to do such a thing. Once we started to get real sentiment scores for what people were saying is the real problem areas, it was very easy, surprisingly easy to get buy-in from VP or let’s say director plus all the way up to the C-suite to say, “These are the areas that need focus.” And then they were like, “Well, okay, let’s do it.”

Once we started to see that putting something into the quarterly roadmap, saying let’s improve this particular thing because it’s the biggest pain point at the company for the engineering org, actually moved those metrics, it was like, “Hey, we’re actually doing something. We’re moving the needle here. People are saying we are more satisfied as engineers at the company based on what we did.” Then all of a sudden people started saying, “Hey, let’s look at the results of that at the board meeting.” I didn’t advocate for this personally. One day it was just like, “Hey, we’re doing that. Can you get this presentation ready so the board can see?” And I was like, “Wait, what now? That’s happening. Okay.”

Abi: That’s awesome.

Luke: Yeah, I think most executives are probably looking for that information. How is the company doing? How are the teams doing? What are the issues? They want that information. They want to be able to make choices that improve the situation in their company. Getting that signal is the hard part. And as Matthew mentioned, we tried a variety of different things. We built some pretty great forms. We chased people down in Slack and worked to try to develop that information. That’s really the hard part: can you get a large number of your people to give you feedback, and can you do that in a recurring way so that you can measure progress consistently? I was the one who did a lot of the behind-the-scenes work with our initial efforts and it was a lot of work. If you want to try to go that route, you’re going to need to have someone full-time doing that.

Do you have one person with the background, you mentioned research, to be able to shape and craft those questions in order to actually surface really valuable information? That can be really difficult. I think once we did connect with the tool that we’re using now and began to utilize that, those signals began to flow much more consistently. We found a lot of value in the information that we were getting. Then that’s a clearly desirable thing for executives to get their hands on to understand, “Okay, what are the top things that we’re facing and what can we do to make adjustments towards improving those?”

Matthew: Here’s the funny part about all of that. This isn’t the only tool that we have to measure something like productivity. We have another tool that’s very DORA metric-y, getting stuff from Jira, from GitHub, from the other sources, I can’t remember what. It’s an endless sea of dashboards that is nothing but noise, and you have to determine how to get the signal out of it. Once you ask a different question, you go through the entire thing again: let’s figure out how to get the signal out of this noise. An annoying tool in the sense that every time you want to ask a new question, you have to come up with some new way to actually determine whether what you’re doing is right.

The funny part was, with the tool that was measuring sentiment, the SPACE framework tool, our VP of platform very quickly was like, “This is the one that I trust more than the one that has all of the data in it.” These are people reporting their sentiment… They’re saying, “I feel like this.” They aren’t saying, “I pushed 18 Jira tickets this sprint,” but somehow that one was more accurate to what the actual problems were rather than, “Here are the specific day-to-day metrics.” That became useless.

Abi: Yeah, it’s really interesting. Just in a couple of weeks here, we’re going to actually have some folks from Google on this show. When I asked them, “Hey, what are some things you feel like you really want to get out there for developer productivity leaders?” they said, “We spend a lot of time at Google explaining to leaders why the data we get from our surveys is superior to the data that we get out of our logs.” When I talk to folks like Nicole Forsgren about how they approach it at Microsoft, a lot of the big tech companies are already doing this.

They already have a complementary approach where they’re doing survey-based insights and system-based insights. They’ve also learned that when there’s a discrepancy or muddy waters with the data, usually what your developers are telling you is what’s accurate and true. Also, Luke, your depiction of the effort and the manual work that goes into surveys rings really true. We just did an episode two weeks ago with someone at Peloton who told her entire story of launching her own DevOps survey. It’s really inspiring but also intimidating, as you very well know, to see the effort that goes into it. One question I have for you both. First of all, it’s amazing to hear that your executive leadership team invited you to report on this at the board level. I want to ask you for your advice. What’s your advice for presenting this type of data to that type of audience, for listeners out there who might find themselves in a similar position?

Matthew: Yeah, this was something that I had learned at other companies, which was… As a base-level engineer, you don’t really know how to talk to the C-suite. This is something that you have to learn. The advice I would give is: keep it high level. Don’t go into detail, remove it. If they ask for it, give it to them. What we say is essentially really simple, like, “This is the top priority. This is what the people are saying. This is what we think that they actually mean, and here are a few things that we can do about it.” If you want to be super classic about this, you offer two bad ones and one good one so that they go with the one good one.

Luke: I think Matthew nailed it. There’s a lot of information that we get and so we try to really boil it down into the top three areas of focus that are necessary. We try to assess what’s really going on and then provide some actionable solution steps. We do that in a slideshow deck presentation that can be shared really easily and it takes us maybe half an hour to an hour to read through all the comments that people leave in the survey and we go through it and turn that into something that executives can get value from without having to wade through all the details.

Abi: As you were describing the journey you’ve been on with the data you present or look at as a leadership group, you mentioned this previous tool that was much more focused on metrics from Git and DORA. This journey you’ve been on, I think, is a very common one. A lot of organizations start with DORA metrics and even looking at some of the Git activity metrics. I want to rewind a little bit and let folks live in your shoes as you went along this journey. Rewinding back to when you first brought in the DORA metrics and those Git metrics, what was the hope? What was the vision and expectation, and then what turned out to be the reality?

Matthew: Right. That one was fun. I was opposed to it when this tool was first offered up. My broad thought on those very DORA metric-y tools is that it’s so easy to turn that into the first-order thing that you consume, and then all of a sudden you start trying to aim for weird targets. Your focus shifts because the data says, “Oh, our Jira turnover was slower this week, therefore we need to do something.” The metrics don’t tell you what the story is. They don’t say someone was out sick this week or someone changed something mid-sprint. To me it was like, “This is a hammer and not everything is a nail, but we’re going to hammer everything. This is what we’re going to do.” I wasn’t making the choice to bring in this tool or not, that’s not my zone.

I gave my opinion, but that was about it. Now, the leadership that did bring in the tool, they had an idea of what they wanted to do with it, which was: this is a tool to do a gut check with. You as a manager or a director, you’re thinking, “Something’s going on with this team. Here is where I can go dive into the data to see if my intuition is correct.” I think what ended up happening is exactly what I had expected in the first place, that it is so easy for these to become first-order metrics. Engineers are like, “Oh, I know how they’re measuring all of this, so let’s start trying to game the system.” At the end of the day, the tool becomes less and less useful as more people are trying to either shift some metric that they think they should or game the system to produce the metric that they think they should.

Luke: Yeah, I think that’s such a common thing. You can end up with the tail wagging the dog, where those metrics that should provide a little bit of feedback do end up becoming something that people drive toward. There are usually so many more important underlying things that are part of that, and they can get ignored. I think in our case, we do have leadership that is aware of how that can happen. I think they have worked behind the scenes to help create a clear cultural imperative that we’re not going to do that. To Matthew’s point, sometimes that’s just unavoidable because it’s human nature: when we know that something’s going to be scored or graded or rated, it’s hard to avoid it even with being really clear in a cultural way. Yeah, I think that is a common thing, right? If you don’t have that cultural reinforcement, things can get weird really quick, because your focus is just on that dashboard rather than on what you should perhaps be focused on.

Abi: It’s interesting, Matthew, that you mentioned you voted against bringing in these types of metrics. They got brought in anyway, and then you were proven right in the end; at least your hypothesis or prediction played out. If you were to do that situation over again… Just last week I was talking to a leader who literally asked me, “Hey, there are a few leaders here who are trying to buy this tool that does exactly the type of thing you’re talking about. How do I talk them out of it?” What would be your advice on that, having been there yourself?

Matthew: I think the unfortunate part here is… The way I would approach this is never try to convince. It would be pointed questions like, “Given this situation, how would you avoid turning that dashboard into the target?” When a measure becomes a target, it ceases to be a good measure, right? How would you avoid getting hyper-focused on these metrics? How would you use it only for the problem that you’re trying to solve? Also, just asking like, “What is the problem that we’re trying to solve? Do we have a problem that is worth spending $60,000 a year for?”

They seem like simple questions, but I don’t think there’s a logical argument that you can throw out there that would just convince people. It is only if they can really question clearly: what is this for? What is the problem that we’re trying to solve, and do we really think that this will solve the problem? And if we buy this tool, what are we not doing? What is the counterfactual here? What do we avoid by saying that this is the way that we’re going to solve that problem? Is it the best way? I think probably not, but you could probably get someone around to thinking maybe there is a better way to solve this problem if you asked all of those questions.

Abi: In previous episodes of this podcast, I’ve shared my own experiences. We went through a journey very much like this when I worked at GitHub, including trying to build our own DORA and Git metrics tool, which was shunned internally. We shipped it unsuccessfully. The engineering managers at GitHub gave us the middle finger. When we tried to roll it out they said, “Get these metrics out of here. We don’t want these.” Which is really interesting. 

Another thing that’s interesting, you talked about how these types of metrics mistakenly become first-order metrics. I just want to bring up for listeners that Nicole Forsgren, the creator of DORA, and the current people who run DORA are constantly out there trying to remind people that the DORA metrics are outcomes. They’re not the things that you’re actually trying to optimize. They’re the results of things that have been optimized.

Optimization of those metrics itself has never been, nor should be, the goal of using metrics like that. As we all know here, that just is never remembered when it comes to actual practical use, which is one of the problems. I think before the show, you were asking me a little bit like, “How have we gotten here? How did we end up here? Why is it taking so long to realize this? Why is Google writing articles about how we need to be reminded that software development is a human thing?” I wanted to share a couple of thoughts and get your thoughts as well. I think that there are two things. One factor at play, I think, is that measuring this stuff is hard. Actually, Peter Drucker is the “you can’t manage what you don’t measure” guy, right?

He also has a lesser-known quote that is actually in that Google paper. He said measuring knowledge worker productivity is one of the greatest challenges of the 21st century, right? There’s this distinction between the traditional approaches to measuring mechanical or industrial work as opposed to measuring knowledge work. I think as a society we’re slow to realize that you need different approaches to these problems. I think the other factor is that, and you touched on this when we were chatting before, we’re engineers and we love pulling together data. We love pulling together that real-time data. We love building charts. We keep doing that. We keep pulling together data, and the problem is we stop before we ask ourselves this important question, which is, “Which of these actually matter to the business? What are these numbers actually helping us make decisions on?”

I would really recommend a book, How to Measure Anything. It’s a few years old, but it’s a great book that really boils the science of measurement down to first principles. The very first thing they talk about is like, “If you can’t boil down the decision you’re trying to affect with the measurement, then you shouldn’t be measuring it at all.” There’s no value in measuring something unless it’s actually informing a decision. I think that’s something that, when we get excited about data and just pull together numbers, we’re forgetting about: the actual value of what is this for. Anyways, that’s my two cents on how we’ve gotten here. But yeah, curious to get your take as well.

Matthew: Well, you’ve made me think about something from the very first engineering job I had. There was a CTO that I worked really closely with. It was a tiny company. There were four of us, and I just happened to work with him a lot. One of the things that we discussed a lot was how the industry still treats engineers like we are ditch diggers. That we’re doing manual work. We need to make sure that everyone’s sitting at their desk and working for eight hours a day, because more hours means more linear feet of ditch that we dug. Knowledge work doesn’t work that way. We don’t have an unlimited capacity to do deep focus work.

More realistically, we probably have four hours a day in us to do really deep work, and then there’s a bunch of crud that comes along, like writing emails or cleaning up some documentation. We can do some of that stuff. There isn’t an unlimited well to just get all of this done. And if the metrics that you’re looking at are about trying to get the most out of some person, but you have the wrong preconception in your mind that more hours means more output, then with knowledge work you’re just wrong. You’re not actually thinking about it, because it just doesn’t work that way.

Luke: Yeah, I think it goes back to the question of what is value? What is the value that engineers produce and how do you go about measuring that? I think people who maybe aren’t engineers approach the situation and they say, “Well, the value must be the lines of code or it must be something else like that. We’ll measure that and then we’ll know how valuable the team is or the individual is.” I think for us, having very experienced engineers in our leadership structure has helped protect us from that approach because they understand that it’s much more nuanced than that and that you can’t just take a formulaic metric approach to understand really how the teams are doing and how productive they’re being. You actually have to get in there and dig deeper. I think that really encourages me that there are some companies that are moving that direction because I think it’s much more accurate. It helps preserve the culture as well, right? If you’re coming about this from the wrong direction, it’s going to probably damage your culture in some way at some level.

Matthew: Yeah, if the value that engineers are producing is in large part creative, which I think it is, it’s a creativity kind of thing. It’s a certain puzzle solving where it’s like, “Here’s this puzzle. There are a million ways to solve it. What’s the best one?” That’s our job. We figure that part out. If we’re going down that creative route, but you’re treating the engineering team the same way that you would treat the sales team, yeah, you’re suppressing that creativity.

Abi: Also, as the VP of Sales at our company always reminds me, if you want to treat developers like salespeople, their lives are going to suck.

Matthew: Yeah.

Abi: I want to mention something, because inadvertently I’ve referred to this Google paper a lot, and it’s interesting you used the analogy of ditch diggers and then, Luke, you mentioned lines of code. Here’s another line from their paper, which I’ll be sure to send you after this session: they say pounds of coal shoveled per hour will tell you which shovelers are the best shovelers; lines of code per minute will not tell you which software developers are the best developers. It’s just funny they use a very similar analogy to yours, and this ties into something you had asked me before the show: “What is it like to be working with actual researchers on this problem?”

I don’t talk a lot about my work on this show, but as listeners know, I work with Dr. Nicole Forsgren, Dr. Margaret-Anne Storey, and several other researchers who are very renowned for their research in this field of delivery and measuring productivity. I think one of the things that I’ve taken away, first of all, no surprise, is that research actually helps you tackle these problems. These are difficult problems; as Peter Drucker said, one of the most difficult challenges of the 21st century. Figuring out how to measure knowledge work in practical ways is a very difficult problem, both conceptually and in terms of how to actually apply it within an organization. I’ve found, in contrast to experiences like I had at GitHub where we were just trying to wing it, just come up with solutions, often bad solutions, that applying a rigorous approach to this problem is a much better way to go at it.

The other thing I’ll just say, and this is more reflective of, I think, our company DX and the work I do… When you’re working with researchers, we just have a no-BS kind of rule. When we build solutions, when we advise customers, everyone at the company understands whether we know what the right answer is or we haven’t figured it out yet. I think as you guys have seen, it’s easy to get into a place where there’s a lot of fluff and a lot of BS and a lot of unfounded opinions around how to use metrics, how to measure things, what to focus on. Research gives you grounded facts as opposed to just opinions on these difficult topics. I think that’s something every company… Big companies have researchers working on these problems internally. Smaller companies can’t do that. Thankfully there’s more and more research out there that folks can use to get clear answers on this. That’s my, I think, advice and experience around working with researchers on this problem.

Matthew: How do you think that changes the company culture, having the research be the basis rather than some product person saying, “Here’s my best shot at it”?

Abi: It forces rigor. When we run experiments, it’s grounded in research. It’s grounded in an understanding of the right and wrong ways, the prior ways problems have been solved, and new approaches that we’re testing based on well-formed questions and qualitative research that we’ve done. I don’t think this is about my company. I think when you look at companies like Google and Microsoft that have a staff of researchers focused on developer productivity, you see that same level of rigor, right? Google’s developer productivity survey is a body of science in and of itself.

The way that Microsoft approaches capturing signals on developer productivity is mind-blowing. I had a team from LinkedIn on this show a few months ago. I was mind-blown to hear about the level of investment and rigor that goes into how they approach this. I think when you get in touch with the research on these topics, you realize, first of all, solving these problems is a lot harder than you might have initially thought. It’s not as easy as just whipping out the DORA metrics. Two, you also realize that they are solvable, that there are ways to do this that are valuable and beneficial and actually tell you how things are going and allow you to show the impact of your work. I think there’s just a lot of promise in the research that a lot of folks across the industry are doing. Companies like Google and Microsoft are putting a lot of good knowledge out there for everybody, and I encourage folks listening to stay attuned to that.

Matthew: Considering how successful those companies are, obviously, Google and Microsoft, we’ve heard of them. They’re big. Do you think a focus on developer experience that does have the no-BS attitude to it, and the desire to actually shape the engineering culture around that, is worth way more than whatever you would put into it, and that basically any company should be concerned about this?

Abi: Absolutely. When I talked to Nicole about developer experience, for example, recently, we did a joint interview together and we were asked about this new approach, and she and I were having a discussion about whether it was new or not. Again, she was saying, “Microsoft, Google, we’ve been doing this for years.” I was like, “Yeah, but I don’t think the rest of the industry’s there yet.” I think this is new for the rest of the industry. One thing I think leaders should be aware of: the top companies have been doing this for years. It’s not just Google and Microsoft, it’s a lot of the well-known companies that everyone aspires to and models a lot of their practices after… They invest heavily in this.

To your second question of whether there’s ROI in doing this, right? I think that’s a piece that we’re working on; when I say we, I mean Nicole, myself, and others are working on research to prove it. But anecdotally, when you look at companies on a case-by-case basis, and even just on a back-of-the-napkin calculation, there’s immense ROI. Of course, you could argue sometimes there’s greater ROI the greater your scale. The greater the size of your organization, the more leverage you can get. I think at any point past the 30-engineer mark, there’s maybe not necessarily the need for a dedicated team, but there’s always huge opportunity for leveraged return on finding ways to make developers more productive, or just keep them happy so they don’t leave, which is another thing that our team is actually studying: “What is the cost of losing a developer?” There’s actually surprisingly not a lot of data on that. That’s something we’re investigating.

Matthew: My background, as I was becoming a new engineer, it was all San Francisco startups in that super hot, money-is-free, let’s-go-bananas period. Uber and Lyft and all of the big San Francisco names. That all happened while I was an employee in San Francisco. This kind of focus, and rigor in particular? No, this didn’t exist. The developer experience was basically what Luke said earlier: “Let’s get a pool table. Let’s get some ping pong set up. We’ll get you the pinball machine that you want. That’s developer experience. We’re going to throw a party every weekend, but your job is probably going to suck because we’re going to treat you like salespeople.”

Abi: Yeah, it’s interesting. I think in some ways this is like an inevitable pattern in corporate America. Well, not just America, let’s just say worldwide, but this kind of tension between, I think, treating workers as worker bees, cogs in the wheel, versus treating them as real assets that, when you empower and invest in them, you get real return on. I think this transcends just the conversation around developer experience. You see this as a corporate topic of debate across the board, where some companies really believe in investing in their people and other companies view their people as a cost center, right?

Matthew: Yeah.

Abi: Like, “Who do we cut? How do we optimize it?” I think software development’s interesting because anyone who’s ever done software development realizes that you really can’t approach software development from that worker bee mentality. That just doesn’t work. Flies in the face of what software development is and what makes it successful. Yeah, it’s an interesting question. It’s a debate, a topic that I think will continue on for a long time.

Luke: I’d be curious to hear what you both think about this. But my view is that the world is changing, and people being treated well, being treated as people, and being invested in is not going away. It’s just going to improve. Companies need to recognize that. It doesn’t need to translate into pool tables and ping pong tables and massage studios, right? I think, really, people want something deeper than that. They want to know that how they’re doing matters and that concerns they have, or areas of improvement that they see, are being heard and are being addressed, right? I think that’s just going to be table stakes for operating a successful business in the future as we move along. It seems to me that at some level, companies are going to have to begin to engage a little bit deeper with their people and collect those signals and be willing to act upon them in ways that are productive and that increase fulfillment and improve the work environment for their people. That’s my sense. Curious to hear what you think.

Matthew: Well, I think you’re totally right and this is probably never not true. Workers always want to be treated fairly well, right? The difference now is that when you’re busy doing all of this creativity based work, you really have to put an environment underneath you where you can continually produce that. That really means a lot of things have to be in place. Makes me think of the open office versus having an office thing. Open office is not an environment for, “Let’s produce deep work.” I think that companies really do have to say, “If you’re going to spend this much time with us, we’ll pay you back a little bit by caring about you, your existence, your ability to do the work that we’re asking you to do. And this is mutually beneficial. We both get something out of this. It’s a great way to go.” As opposed to like, “You’re not working out, let’s get the next guy in.”

Luke: Right.

Abi: To tie this all the way back to what we were talking about at the beginning, I think this new trend around developer experience is about exactly that. The trends that have intersected and the rebranding of developer productivity to something broader and more human-centered. I think there’s a greater focus right now than ever before on the human side of software development and how important that is to unlock greater levels of productivity for these businesses. Yeah, I think we’re at an exciting time and an exciting moment where the shift is happening and glad to be able to be part of it with you all as well.

Matthew: Indeed. Well, it’s a funny thing. I think you’re right. We are at a precipice where this is shifting in the right direction and at the same time the automated everything is also happening. Think of how many applications are out there to automate code hiring. I feel like that’s about as inhuman and disrespectful to a person as you can get to be like, “You got 30 minutes to go through this code challenge and it’s pass-fail. The end. We aren’t going to look at, we don’t care who you are. That’s it.” Both of these are happening at the same time and I don’t think that it’s necessarily a battle, but some companies are going to choose to focus on that and some aren’t. And I think if you’re going in the developer experience direction, you’re probably not doing those other things.

Abi: Yeah. I love how you alluded to hiring practices, which is its own big can of worms, as we all know. Really interesting to touch on. I want to wrap up here and just say, thanks so much for coming on the show. I think listeners are going to get so much value out of this and be inspired by hearing how you’re approaching developer experience, how you’ve been able to elevate it to be such a core part of how you view software development and how your C-suite views software developers at the company. The story around how you’ve evolved the way you measure, how you tried to solve that on your own, I think is really interesting to listeners as well. Really enjoyed the chat. Appreciated the questions to me as well. Thanks again, guys, for coming on the chat.

Matthew: Thank you.

Luke: Yeah, thanks for having us.