While CTO, Mike Fisher spearheaded a multi-year DevEx initiative. Here he shares the story of that initiative, including the program's pillars and the investment that went into it.
If you enjoyed this discussion, check out more episodes of the podcast. You can follow on iTunes, Spotify, or anywhere you listen to podcasts.
Abi: Mike, thanks so much for sitting down with me and coming on the show today. Really excited to chat.
Mike: Yeah, thanks for having me.
Abi: So I want to start with the story of Etsy and you were telling me before the show how the story around developer productivity really begins with COVID. So take me back to that time and what was happening at Etsy and why you kicked off an initiative to tackle developer productivity.
Mike: Yeah, absolutely. And maybe a little bit of background on how I got there. I was a consultant. I ran a consultancy called AKF Partners for about 10 years, and we focused on scalability and high-growth companies, one of which was Etsy when they were a startup. Then in 2017 I got the chance to join them full-time. One of the things we focused on right out of the gate was moving to the cloud. We spent the first couple of years that I was with Etsy doing that, and we finished up in February of 2020, which was very fortuitous because, as we know, in March of 2020 COVID happened and everything went... Everybody held their breath initially. And then a couple of days into it, the CDC issued guidance that people should wear masks. And with all the protective gear needed for healthcare workers, people turned to companies like Etsy to manufacture masks.
And so overnight our traffic doubled. Thankfully we were on the cloud, so we could handle that massive surge in traffic. Everybody was searching for masks. There's a whole interesting story about how we had to ask sellers who were making everything from wedding dresses to tablecloths: okay, you can sew, please pivot, download patterns for masks. We got the sellers on board, and we had to retrain our algorithms, because before then, if you'd searched for face mask on Etsy, you probably would've found a Halloween costume or a facial cleanser mask or something. So we actually manually trained our algorithms overnight on what a mask meant today, pushed that out, and within 24 hours we were running. We pivoted very quickly, but a couple of months into it we realized that this wasn't letting up. The way we described it was that people came to Etsy for the mask and then realized all these other wonderful things that we offered, which was great, but it meant people were coming back and our traffic was not going away — it was staying up.
And we realized that we'd also need to start hiring engineers to keep up with this. So began a hiring plan that was pretty aggressive. And then, of course, as you know, when you add a bunch of people to the organization, you've got other challenges. This is the mythical man-month at scale: you can't just add someone else and expect everyone to stay as efficient, because there's more communication required. So all of that was really what got us into this idea that we need to focus on developer experience, because we can't just throw a bunch of engineers into the organization and expect not to have big challenges.
Abi: Well first of all, I had no idea that the pandemic and demand for masks in particular were such a catalyst for Etsy. That's an incredible story. And actually, my mom made her own masks at home and would mail them to me. So I also had artisan masks. I'm sure there were amazing options on Etsy. So I want to ask, you mentioned things started to slow down as far as developer productivity. What were the things you were seeing or hearing? What were the signals that you had that this was happening? Or was it mostly just intuition and your gut telling you this?
Mike: There's a lot of intuition. Knowing and watching organizations scale — no matter what level you're scaling from, whether it's five to 50, 50 to 500, or 500 to 5,000 — you have scale challenges. And this is what I learned and practiced as part of my consulting: when you grow an organization, you've always got to be thinking about people, process, and technology. All three interact, all three are related to each other, and all three need to work. If you focus on one and not the others, you're going to be in trouble. And you really need to do this constantly — at every step, keep tweaking them a little bit, because if you get too far ahead, you're wasting. If you scale your technology before there's demand for it, you're wasting money. If you scale your process before there's a need for it, you're bureaucratic.
So it's key. Part of it is this intuition that those of us who have scaled companies knew about. The other thing we look at — Etsy is very metrics-driven — is things like experiment velocity: what's the velocity of experiments that a team is producing and shipping out to the marketplace? So there are signals like that we can watch for actual losses in productivity, hits like that. So it's a combination of both.
Abi: I want to ask you, well first of all we'll dig more into measurement and experiment velocity later. I just want to call that out for listeners. I want to ask you, you mentioned that as you began identifying these problems, you started to focus on developer experience. And I wanted to ask you about even that term developer experience because at the time of COVID just even a few years ago, I don't feel like developer experience was really the term being used by companies as much when focusing on developer productivity. So I want to ask you, where did that term come into the picture and how did you guys decide on that?
Mike: Yeah, that's a great point. I don't know exactly who came up with it first, but Etsy's always thought about the human side of things. I used this a lot when talking to people who were thinking about joining, because our mission was to keep commerce human. You see that in the marketplace, because you're taking an individual buyer and an individual seller and connecting them, letting them have a human experience, which we don't get much of today. We're consumers of mass-produced products, which is great, but it makes things a little dehumanized. So Etsy's mission is this, but if we just did that on the marketplace and it wasn't how we operated, it wouldn't be true. And so this is actually how we worked. We thought of each other as humans — not, oh, you're just an employee; we know you're a human, and that means lots of things.
It means you experience stress and you have emotions, you have good days and bad days, and you have skills and areas where you need development. So I think we've always had a culture of bringing the human side to things. And when you think about that, it wasn't just, hey, let's focus on making these developers more efficient. It was, there's a holistic thing here: we've got to make sure that not only do we keep them efficient, but we're also making their lives better, making sure we're taking care of them. So it was a natural fit for us, and one of my good friends and colleagues at Etsy said at one point that teams are happy when they're shipping product.
And we really used that as an area to focus on, because it's right: one of the ways we can make sure people are happy is if we can make it easy to ship things. Then they're happy — they're seeing stuff out on the marketplace, they're making an impact. So I think we saw all this tying together: it wasn't just a single perspective of, let's get more efficiency out of the engineers. It was, no, no, let's think about how we make them not only happy but productive and engaged and committed, all these things.
Abi: I love what you said. It's funny you said the line developers are humans. And I was laughing because I was actually just speaking to Tim Cochran, who I know you're going to speak with later today, who's at Thoughtworks. We had just both read this recent article from researchers at Google about developer productivity and in that article they have the line, software developers are humans. And they say it multiple times. And Tim remarked to me that it says a lot about our industry that it has to be stated that developers are humans. So that was a little funny.
Mike: It does. And a quick aside about Tim and the Thoughtworks team: when I first joined, they were the first team that I reached out to, to bring in to help us. Not only to help us with staff augmentation, but also eventually to help us build out our product development process — which we eventually came to frame as a project about culture, because culture beats strategy, and it also beats process. Tim and his team did a wonderful job helping us develop that, and it was a really important part of the evolution of how we work.
Abi: That's awesome. Well, one more question about just the concept of developer experience. I was actually having a debate yesterday with Dr. Nicole Forsgren who I believe you've connected with. And we were debating whether developer experience is a new approach or an old approach. Is it a new idea? So I'm curious as you guys put together this initiative, and we'll talk more about what the components of that were, did this feel like a new approach to you, or something that's been done for years?
Mike: Yeah, that's an interesting question — is this new, or is it... I think about it a little bit like this. Take enablement, which we'll talk about in a while: this work has to get done, whether we organize a team to do it efficiently or we just require engineers to do it on their own. I use this example: if my IDE is broken or my VM is not working, and I, as a developer, need to get it working — if there's not a team of experts who can pitch in and help me, and I'd have to figure it out myself, the work is still there. I think about DevEx in the same way. Somebody's probably doing this a bit in the background no matter what. Maybe now, in the industry, for the last couple of years we've just put a name on it, and like Etsy did, started focusing on it. But I think there's always been work in this area; it just might have gone unnoticed or not centralized.
Abi: I like your take on it. Yeah, I agree. The work and the fundamental principles behind developer experience are things that have existed forever. But I do think that putting a label on it, making it a C-level initiative within the technology organization, I think that is a new trend and there are newer approaches I think that are interesting to follow.
I want to ask you now about your multi-year DevEx initiative. You were telling me about this earlier — what were the core pillars and components of this initiative?
Mike: Yeah. So when we kicked off DevEx — that was our name for this developer experience initiative — we knew it'd be multi-year and that we couldn't get it all done in a single year. We subdivided it into what we called pillars, and we had four of them. The first one was helping people build with data. The reason this was an important first pillar: I use an analogy when I talk to people who aren't familiar with Etsy as a technology company — think of Etsy as an iceberg. What you see above the waterline is the marketplace, and everyone knows that, they're familiar with that, hopefully people have had experience on it. What they don't see, below the waterline, is that Etsy is really a big data and machine learning company, and has been for many, many years. A while ago it was six billion events a day that we processed — terabytes of data processed daily, tens of terabytes of data stored.
It's this massive amount of data, and it all goes to everything from BI — all the teams in marketing, finance, and so forth use it to make decisions — but it also goes right back into the product. It goes back in a couple of ways, and we can talk more about this. It goes back through analysts who look at experiments, results, and impact, and help guide the product managers on next iterations. And it also goes back directly through machine learning and very advanced AI models that power everything from recommendations to search and everything in between. So that's why the very first pillar in our DevEx was helping people build with data. The second was what we called crafting product, and this was the idea that we're very focused on shipping more and more features and adding value to the seller and buyer experience. So making that ability to craft the product easier is important.
And we can probably dig in — we talked about the metrics of why we think it's a craft. We think our development is a craft, not something that's super scientific and can be measured so easily. But anyway, crafting product was our second pillar. We thought about it in terms of things like, how do we modernize our front end so that it's easier for developers to work in that environment? We can bring a new developer in and they're not working with decade-old technology they don't even know — because they're a newer developer, they learned React, they didn't know jQuery, stuff like that. The third pillar was around develop, test, and deploy. Etsy's famously a monolith, although we're much more than that — we've actually got a bunch of services that are independent. Search is actually an independent service, and payments as well.
But still, famously, the core marketplace is a monolith. And one piece of the secret to being able to scale a monolith to such an amazing place, both in traffic and in number of developers, is very, very fast deployment. Etsy's always been a CI/CD shop. So making sure that as we added engineers, they were able to develop, test, and deploy very quickly was critical. Like I said, it's the secret sauce of being able to scale this. And the fourth pillar was around reducing toil. We know that with increased product development and increased team size comes toil. And toil can take many shapes, from struggling to find the right information — who do I ask about this? — and remember, this was also at a time when we were all remote, forced remote.
Etsy's always been pretty heavily remote — I think we were 30% going into the pandemic, but it forced the other 70% into remote. So now I can't run around the office and ask, who knows this? How do we make sure that people can find the information they want quickly? And then, of course, there's the more traditional toil of pages: as we're increasing all of our product development, how do we make sure we're not overwhelming people with pages? So those were the four pillars we focused on: data, crafting product, develop/test/deploy, and reduced toil.
Abi: Well thanks so much for outlining that for me and listeners. One thing you said at the beginning that got me thinking was that you set up front that this was a multi-year journey. I often get asked by leaders who are thinking about DevEx and developer productivity, they say, oh, executives want to double this and that this year. And I tell them no, it's going to be a longer journey. So I'm curious, was there that type of debate internally? Were there some people who wanted more immediate results and how did you convey or think about the fact that this was going to be a multi-year journey?
Mike: Yeah, I don't think Etsy's any different in that you always have executives who want everything today — I want to double this, this year. That's natural. What we thought about was how to do regular check-ins — a couple of times a quarter — to show progress. By doing that, we could alleviate the concern of, oh, you're going to go away and spend a lot of money or a lot of engineering effort and then not show anything for 18 months. We did these check-ins with the executive team and with the entire company. We committed right up front to those check-ins, and to sharing back to the engineering organization at least twice a year. We did an all-hands quarterly, and we eventually made an update on this part of all those all-hands.
By doing so, I think we helped alleviate that concern of, are you really going to go away for this multi-year journey and then come back and show us something? Instead: no, we're going to take you along the journey with us and make sure you're aware of what's going on. And there's a story in there. One of the things we did within crafting product was a pilot — and eventually a rollout — of GraphQL, and we had some setbacks and restarts with that, but ultimately we decided to pivot away from it. That was an example of not just keeping our heads down until the 18th month, the end of the project, but actually benchmarking along the way and asking, 'is this giving us the results that we wanted and that we think are sustainable?' It allowed us to pivot when we didn't think it was meeting our needs. And I think that's the important thing to think about.
Abi: Yeah, that's definitely a great lesson about pivoting and having frequent check-ins to examine and share progress. I want to ask about another common question around these types of initiatives: the investment that goes into it. I know we want to talk about how and who was doing this type of work, but just from a dollars-and-headcount or percentage-of-headcount standpoint, what was allocated toward this DevEx initiative?
Mike: Yeah. We did a multi-year journey to the cloud — that was 2017 until 2020 — and we dedicated about 25% of our engineering capacity to that. So we were familiar with that amount of investment, and in this case it came in a little bit less; I think we were probably around 20%. Fortunately, we had just finished that cloud migration, so we had that capacity in our plans — it wasn't already booked into other projects and initiatives. That made it a little bit easier for us, but it's not something that is free. All of this comes with an investment, but we've now proven this time and time again at different companies: the importance of it, and the payback.
And if you're a very metrics-driven company like Etsy, those are the conversations you have to be prepared to have as a tech leader: all right, this other company's done it, or you've done it, or you've talked to people who've done it — this is the investment, but it pays off, and it pays off in these ways. Those check-ins are key to asking, is my investment really giving me the return that I expected? And when you build the trust with your finance partners or the other executives that you're going to hold the line like we did — we showed we were willing to pivot, both on the product side and, of course, on the engineering side, if it wasn't giving us the returns — then the trust is there and you can execute. But yeah, you've got to be able to show that, because it's not insignificant to invest this time and effort in these projects.
Abi: Figuring out how to talk about and show that payback I think is a challenge a lot of tech leaders do run into. So what's your advice, what are the ways you had success in doing that?
Mike: I think one thing I talked with tech leaders about when I was a consultant, and then as CTO, is that on the executive team you're often the only technical person, and when you get to the board it's the same thing. You might be fortunate — our chairman came up as a software engineer, so he was pretty technical — but there are other people who have no technical experience. As soon as you start talking about something technical — GraphQL, tests, whatever you're going on about — their eyes roll back in their heads; they're like, I don't understand this, it makes no sense. So you've got to be able to translate that into a language they understand. And I think the common language in business is typically finance — money. We'll get into the metrics side, and I'm not a huge fan of only monetarily driven metrics, but when you're able to translate into a dollar value, I think it helps.
And we were able to do that with deploys. If you can take the deploy down from 15 minutes to seven minutes, and you multiply by the number of times engineers deploy and the number of engineers you have, you start getting into a pretty serious number of hours saved by just small things like that. And then there's the reduced help desk load from issues with dev environments, or the reduced pages — which pull people off their full-time work to focus on something, or disrupt their evenings, which means they're not as productive the next day. All of these can be calculated — I called it back-of-the-envelope. It doesn't have to be super precise, because it's hard to track, but if you can do it back-of-the-envelope, you can say, look, these things end up saving us time. And then you get into the longer-term things that really matter, like people leaving.
If someone leaves the company, my estimate was that you lose at least six months — maybe 12, depending on their seniority — of productivity. So when you have really high retention rates, that starts to matter. You can start telling people: okay, if the industry average is 7% a year and we can cut that roughly in half to 3%, that's that many people you're not losing for six months each. Again, these things are back-of-the-envelope, just math in my head. We eventually had a thousand engineers, and at that scale these numbers start to really matter. So that's what I would say: try to translate these into something. It doesn't have to be precise — it can be rough math — but translate it into a language they can understand.
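Mike's rough math can be sketched in a few lines. Note that the deploy frequency and working-hours figures below are illustrative assumptions, not Etsy's actual numbers — only the 15-to-7-minute deploy improvement, the ~7% vs. ~3% attrition rates, the six-month productivity loss, and the thousand-engineer scale come from his account:

```python
# Back-of-the-envelope DevEx ROI, in the spirit of Mike's rough math.
# Inputs marked (assumed) are illustrative, not Etsy figures.

ENGINEERS = 1000            # eventual team size Mike mentions
HOURS_PER_YEAR = 2000       # rough working hours per engineer (assumed)

# 1) Faster deploys: 15 min -> 7 min per deploy,
#    assuming ~2 deploys per engineer per week, 50 weeks/year (assumed)
minutes_saved_per_deploy = 15 - 7
deploys_per_engineer_per_year = 2 * 50
deploy_hours_saved = (
    ENGINEERS * deploys_per_engineer_per_year * minutes_saved_per_deploy / 60
)

# 2) Lower attrition: industry ~7%/yr cut roughly in half to ~3%,
#    each departure costing ~6 months of productivity
departures_avoided = ENGINEERS * (0.07 - 0.03)
attrition_hours_saved = departures_avoided * (HOURS_PER_YEAR / 2)

print(f"Deploy time saved:  {deploy_hours_saved:,.0f} hours/year")
print(f"Attrition avoided:  {attrition_hours_saved:,.0f} hours/year")
```

Even with deliberately rough inputs, the point of the exercise holds: small per-deploy savings and modest retention improvements each add up to tens of thousands of engineer-hours per year at this scale.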
Abi: Got it — back-of-the-envelope, translated to dollar value. We'll talk more about measurement in a little bit. I want to transition now and ask who was actually leading and driving this work. I also want to ask you next about enablement teams, so you can take it there in this response if you want. But who was actually doing this work? Was it just existing teams and directors across the company, or did you put together dedicated groups to focus on it?
Mike: Yeah, for this initiative, we asked some of my VPs to be the exec sponsors. So it was my initiative as CTO, but the VPs really drove each of these pillars. We had a vice president focused on the data piece, someone focused on develop, test, and deploy, and they could then build their teams around it. And we did think this was important. I'll jump to the crux of the story: this work — I think you said this — isn't really ever over. We realized that and said, if we're going to continue to grow the teams, which we hoped to, and continue to grow the company, this should be an ongoing, evergreen exercise.
We thought that might be the case from the beginning, and it turned out that's what we did: after two years, we said this is evergreen, this is just the way we work — we've constantly got to be focused on these areas. But because of that, we didn't want separate teams. We wanted the teams that owned the data pipelines to own building those — what we eventually called paved paths — because we didn't want a third party building them and then handing them off to those teams to own. So we ultimately just had the teams that worked in these areas dedicate engineering effort to this. And that made it an easy transition to making it evergreen, just the way we do work.
Abi: Well, I really love that, and I think a lot of listeners will be inspired by the way you approached it, because all too often the immediate step companies take when they start focusing on developer productivity is to spin out dedicated enablement teams, and I think there are a lot of challenges with that. I recently had Jean-Michel Lemieux on this show — formerly of Shopify and Atlassian — and he talked about the same thing. He actually called dedicated platform teams an anti-pattern, and went as far as saying that a lot of this work really ought to belong with the functional product engineering teams, since they're the ones closest to the problem. So it sounds like you had similar perspectives on it.
Mike: Yeah. The platform team is an interesting idea. You and I were talking before the show — Etsy tried this multiple times. Even when I was there, and maybe before I was there, they tried to create platform teams, and for many reasons they failed. I think you're exactly right that part of the issue is that the people who know the most about the problem are not on the centralized teams. So that's a challenge. You've also got challenges with just human nature, because this work initially is done on the product teams or the infra teams, and giving up that work is sometimes difficult for multiple reasons. One, people usually like it — they might like the work, it might be part of a workflow they're used to, or they might see it as a path to promotion (they really shouldn't, but they might). And the other is that centralizing this tends to slow things down.
And I have a bit of a theory around this central versus decentral work. Many, many times the work starts decentralized, because that's where people are closest to the problem. That's great — they know the problems, they can work on them. Eventually you get multiple teams solving the same problem, and you start looking at this as a leader, going, well, I'm wasting effort. They might be doing it differently, so I don't have standardization; they can't really share. All these challenges come up. And then you say, well, I want to centralize this. And we tried to, as I said, a couple of times. We eventually realized that there is a place for these centralized teams, but we didn't call them platform — we actually called them enablement teams, so that right off the bat people knew what their purpose was.
Their purpose really is to enable the teams — I'm going to say 'above them,' but it's above them only in terms of being closer to the customer; nothing else is meant by that. In the stack order, from customer down through to the infrastructure, the enablement team's job is to enable those product teams to be more efficient. That was the first step: call them enablement teams and help them understand that's their primary purpose. The second thing we found to be really successful was to start putting product managers on those teams. We had not done that in the past, and without them you're asking the engineering manager to do multiple jobs: they've not only got to manage their teams and their projects, but they've also got to meet with their customers — who are other engineering managers and engineers — and figure out what they really need and what should be on the roadmap. Having the product managers in there helps enormously; they take that burden off.
That's what they do for a living. They interact with customers, they take the research and all their stakeholders' input, merge it together, and come up with the roadmap. The product managers are incredibly valuable on our product teams, and we found they were also very, very valuable for these enablement teams. So that was another piece of this that made it work. And then, I guess the third point in my mind is that this cycle between decentralized and centralized has a third step that makes it a complete cycle: back to decentral. You go back to decentral when you find that you've put enough people and effort into the centralized team and you're slowing down the product teams. And I'll give you an example of when we did that.
When I first joined in 2017, we probably had fewer than a dozen machine learning data scientists, and they were on the search team, because that's where we could apply this immediately. And they did some wonderful work. In fact, they did such good work that all the other teams started saying, I want data scientists — which is great, but they were working on the search team, so the other teams couldn't have them. So we centralized this and started growing the team. We said, okay, we're going to centralize it so they can help multiple teams, and we'll scale this way.
We eventually got to about a hundred data scientists and said, okay, now we've got this down — we've got enough people, we've got enough skill, we've got the process down — let's decentralize it again, because we're now slowing down the teams. So I think about it as this iteration. In fact, when I make org changes, people often think of them as permanent, and I try to remind people that they're not. The way I think about engineering is these three factors: people, process, and technology — people being the org structure, how you deal with people, how you promote, all of this. All of these are dynamic. Your people, process, and technology are never stagnant, and as a leader you've always got to be thinking about where we are. The org is one piece of that, and it's not stagnant — it changes. And that's okay. That's what it should do.
Abi: Such great advice and lessons here. My takeaways are to focus these dedicated teams on enablement, not 'platform' or even 'developer productivity.' Have you actually seen that recent article by Sam Newman titled 'Don't Call it a Platform Team, Call it Enablement'?
Mike: No, I haven't seen that, but I'm going to look it up immediately because I totally agree with that. Yeah.
Abi: Yeah. And also the importance, as you mentioned, of keeping open the option of re-decentralizing.
Even if you centralize the way you work, I think you should still be connected to the rest of the organization, and you should always have the ability to rethink whether centralizing is the best option. One hypothesis I've had for why organizations typically incline toward centralization is that I see a lot of leaders in positions similar to yours really struggle with carving out the resources to devote to this type of work, even when it's important to the business, even when it's spoken about as a priority. All too often, it seems like the product management organization and customer features still win out in terms of what teams ultimately end up spending their time on.
So I'm curious what advice you would have, maybe for another CTO or vice president who is trying to champion an initiative around developer experience, but doesn't really have the resources or capacity or maybe alignment with the product management organization. It's a really common thing I hear where frontline managers say, these people are telling us to focus on developer experience, but these people are telling us to ship features and they don't know how to manage that.
Mike: Yeah. I was very, very fortunate to have an amazing product partner in Kruti at Etsy, and she totally got it — she knew from the beginning how important this work was, both the developer experience side and what I wrote about in a recent article: we call it technical debt, but we should probably rename it product debt, because it comes about while we're building product — usually, not always, but usually. By renaming it, you help the product teams understand: oh yeah, not only did we cause it when we built the product — they were a part of it — but it impacts how, and how fast, we can build product in the future. So I definitely think, as much as possible, make sure you're spending time with your product partner helping them understand this, just like you're trying to understand their world and all of its complexities. I think that's very, very important.
And then I also think being able to check in matters, not just saying, I'm going to do this, and going away for 18 months, but checking in frequently and building up the trust that you know how to iterate and pivot. One of my tangents, back to org charts: as an industry we're very used to iterating on products. Start with an MVP, iterate until we get it right. We can talk about experiments and why that's so important, because most of the changes we make are actually harmful in terms of the metrics we care about, so you've got to pivot or iterate very quickly to find what works and what doesn't. We think about that in building products, but then we turn around and do annual reviews, and our org charts are fixed for many years. Why can't these be iterated on as well? Why can't we give our feedback and constructive criticism much, much more frequently?
I'd like to see it almost instantly in an ideal world. Just like we do with products, if something's not working, we pivot, we iterate. We should do the same with feedback and praise. Why wait until the end of the year to tell someone they did a great job? Tell them today, tell them after the meeting: you did a great job, you ran that meeting so well. The same thing with orgs. Orgs can be dynamic, and I know that change has a cost. In fact, we found that moving a team from one initiative to another was a six-week hit in productivity as measured by experiment velocity. So there are downsides to that, and there are downsides to changing processes as well, because people got used to them. It's a hit, but that doesn't mean things need to be static.
Abi: I really love the concept you shared earlier of calling technical debt product debt. I think part of the problem with technical debt is that it sounds like a nebulous concern of the engineers, and the product organization views it as this other thing they're not really responsible for. And as you pointed out, it's really caused by the product development in the first place. So I love that idea.
I wanted to ask you about this idea of checking in and aligning with your product counterparts. For a lot of leaders out there, the idea of going and suggesting that you spend 25% of engineering on DevEx would be a really difficult message to convey to other executives. Tactically, what would be your advice to someone in that role? Do they start in the middle, go up? Who do they go talk to? How do you have those conversations tactfully?
Mike: To start with, we believed, as lots of companies hopefully do, in a pairing of product and engineering from the C-level all the way down to the line manager. And that's important because it builds a relationship. I spoke to my product partner probably every day. We were in meetings together, we spoke on Slack or in one-on-ones. We were in constant communication. So that partnership between product and engineering throughout the organization is important. And then the way you start matters, because timing is important. If you've already put together an annual plan based on the engineering capacity teams think they're going to have, you're going to have a very difficult time getting this pushed in. If you're thoughtful and do this ahead of the annual or quarterly planning cycle, whatever you do, and start socializing it ahead of time, it becomes a much easier conversation, because then their roadmaps, their OKRs, their goals aren't already built around that assumed capacity. So I think that's important: get ahead of that.
And then also sell the benefits, as we've talked about: if you just add another engineer, they're not as productive, but they cost the same. So really talk about how this is not a good investment for the business unless we do this work, because you want that additional engineer to be just as productive. We eventually started to think of engineers who joined the team as not really adding value until the next year. And having that buffer reflects reality. As we talked about, if you have a very high attrition rate, you're at least six months in the hole, maybe more if it's a really senior person with tons of knowledge about the company, the processes, and the technology, but you're at least six months in.
And so really, as you bring someone on, it's going to take about six months on average to get them up to speed. So you want to be working constantly on these tools to make that faster: make hiring faster, get people up to speed faster, all these things. I think that's the type of conversation we should be having.
Abi: Yeah. I think that's great advice, especially the timing point. I think people sometimes lose sight of the bigger picture and the business cycle. So getting ahead of these conversations to try to build up buy-in and design some initiatives around it before the planning cycle I think is really great advice. I want to ask you, in today's climate, efficiency is being talked about a lot. I'm hearing terms like we want to maximize ROI per engineer, get the most out of our people. This is different but also similar to the types of problems you were trying to solve at Etsy during this hyper growth phase. So I want to ask you, in your view, is the way organizations should be addressing efficiency the same as how you were approaching productivity back during COVID?
Mike: I think it's not new, and I think it could be a bit of a win-win if we think about it properly. If we just think about this as a single metric, ROI per engineer, then I think it could be destructive and harmful to both the culture and the people. But we can think of it as one aspect: yes, we can have a high ROI on our engineers, we can measure it and think about it, but we get there by also making sure they're happy, they're productive, they're efficient. I mentioned that a colleague of mine had this saying, happy teams ship product. And it's true, and that also happens to be very beneficial for the business. So it's a win-win. We can make sure our teams are happy, but in my opinion we shouldn't measure that with just a single metric. It oversimplifies things and pushes us toward short-term investments, when we really should be thinking about this longer term.
I mentioned that attrition rate is really important, that we want a very low attrition rate. In the industry, unfortunately, attrition has probably been coming down over the last 12 months, but before that it was very, very high, and losing people is a huge hit. A big part of retaining them is about being happy, having a mission, being productive. That's the stuff we should be thinking about. It's not just a single metric, it's a bunch of things that are really important. We should be looking at all of them and trying to drive them. And if we do that, it has the added benefit of being a great ROI for the business. But that shouldn't be the first focus. I think you've got to get to the underlying stuff that really matters to people and is important to them.
Abi: It's interesting to me, and I completely agree with what you said. This basic concept of making employees happy to unlock higher levels of performance is such a ubiquitous concept across all industries in business, yet it's so elusive to actually put into practice; it gets lost in the business just as often. And I think that's what you're describing here. You brought up measurement again, and I want to go into this topic now. Suppose you step into a new role and the CEO asks you, how are we going to measure developer productivity? What's your answer?
Mike: The way I've been thinking about this is that there are multiple ways we should measure it. There's not a single metric. There's stuff that's easier to measure, like experiment velocity, and that is important, because typically the more experiments we run, the more that eventually results in higher sales, revenue, and so forth, since we can iterate on the product and find what really resonates with our customers. So I think that's a metric we'd want to watch. But we'd also want to do things like measure how quickly people can deploy code. That, of course, impacts experiment velocity, and it matters to people; we want them to be able to do this quickly. We want to measure how much they're getting disrupted by pages. If they're paged during the day, they're taken off their project; if they're paged at night, it disrupts their sleep, and often they're compensated with time off or can't work on their projects the next day.
And then we also want to measure their happiness. We can do that through engagement surveys, through NPS scores, things like that. And there are many, many factors that go into that happiness: are they on the right team, do they have great teammates, are they really bonded with the team, are they getting enough time together? All these things are important. So it's not simple, like it would be for a marketing campaign, where we put a ROAS on the campaign and say, if it drops below positive, we're going to stop it. Engineers, and the work we do, aren't like that. Etsy is famous for Code as Craft, because we think about the work we do as a craft. It's not a production line or a precisely measurable marketing campaign. It's a craft. And the description I've used is that measuring developer productivity with a single metric is like asking how productive an artist is, because our work is more akin to an artist's than to some production line.
And you wouldn't do that, because how would you measure an artist's productivity? By the number of paintings they produce, the quality of one painting, the price they got for one painting? It's so difficult; there are too many things that go into it. And it's the same for engineers: it cannot be a single metric. It really needs to be looked at holistically. A lot of it, like we talked about at the beginning, is that you have a feel for some of this, and this is why having both senior engineers who have been around the industry a long time and engineering managers who have been around a long time matters: they can feel and understand whether the culture is good and whether people are happy, and if not, why not, and get to that. So that's my answer for when my next boss asks how we're going to measure developer productivity. Hopefully I'll condense it a little, but that's the idea: it can't be a single metric.
Abi: Well, you're not alone in that perspective, as you know. I didn't mention this when you shared your analogy with me earlier, but I've come up with the exact same analogy around artists, and I've tried to convey it to people. And similar to you, I ask, would you measure them by the number of paintings produced, the number of brushstrokes per minute, or how much paint gets on the canvas? So I love that analogy. Earlier I mentioned that Tim Cochran and I had just read this paper out of Google where they contrast software engineering against coal shoveling, an industrial process. I remember Tim remarked that he felt that was a bit of a ridiculous contrast. But I think they use that analogy to make the same point you're making here, which is that software development is a creative craft, not an industrial process. You can't measure it like you would an auto manufacturing factory. And in fact, that's how a lot of companies try to measure it.
Mike: They try to, because that is how, as you mentioned, businesses have increased productivity for almost a hundred years. So it makes sense that they would bring that approach. But you're right, the craft side, the artistry side of this, is why many of us love it. It's problem solving, like solving a puzzle. A lot of people will pick up a crossword puzzle because it's relaxing and stimulates their brain. I often do that with small coding problems, because to me it's just a puzzle that allows me to relax and focus.
And the other piece of this is that there's often not one right answer. There are multiple ways we can solve things in our industry, which again is not like a factory or an industrial line you can measure. Because of that, how do you say one developer who is very quick but produces lots of bugs and has to iterate to fix them is any better or worse than someone who takes their time and avoids them? They're very different processes, but they can both ultimately deliver the same outcome for their customers. You can't really say that one is better than the other. It's just different.
Abi: Yeah, I love that final piece about the two types of developers. I think we can all think of people we've worked with in the past who fit in one or the other. And what you said is very true. I want to ask you, earlier you mentioned experiment velocity and you called it a thing that was pretty easy to measure. I've been doing engineering measurement for a long time, I've never even heard of anyone measuring that. So can you just share what that metric is and how you measured it?
Mike: Yeah, Etsy has a really rich culture of measurement. They've been leaders in observability for almost two decades. "If it moves, measure it" was famously one of their mantras at one point. So very, very metrics- and data-driven. And if you look across the industry at companies that share these results, and I think Netflix, Airbnb, Google, and Microsoft have done this, when you experiment, oftentimes you'll find only about 15% of experiments are positive in terms of the metric you're measuring. Let's say you're measuring conversion rate. You've given a team a goal, hey, increase conversion rate by some percentage, and they iterate on it. Often only about 15% of their experiments are positive. Another, call it 30 to 35%, are neutral; they didn't help or hurt that metric. And then a good 50%, sometimes 60%, are actually negative.
And this is measured by experiments you run long enough to become statistically significant, properly powered. If you look at this, you'll see why experimenting is so important. And these are experiments produced by very smart teams with product managers who really know their product, so it's not like we're randomly throwing stuff out there. It's researched, it's well thought out. And still, if you're at scale, you're probably seeing about 15% positive and at least 50% negative. So it's super important that you measure your experiments. But once you start experimenting, you want to know how many experiments you're running. And this is where we get into velocity: we would look at how many experiments we started each week, how many we stopped, and what we called the hit rate, the positive rate.
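The classification Mike describes — positive, neutral, or negative once an experiment is powered and statistically significant — can be sketched with a standard two-proportion z-test on conversion rate. This is a minimal illustration, not Etsy's internal platform; the function name and numbers are hypothetical.

```python
import math

def classify_experiment(conv_ctrl, n_ctrl, conv_treat, n_treat, alpha=0.05):
    """Classify an A/B test as positive/neutral/negative via a
    two-sided two-proportion z-test on conversion rate."""
    p_ctrl, p_treat = conv_ctrl / n_ctrl, conv_treat / n_treat
    p_pool = (conv_ctrl + conv_treat) / (n_ctrl + n_treat)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_ctrl + 1 / n_treat))
    z = (p_treat - p_ctrl) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    if p_value >= alpha:
        return "neutral"  # not statistically significant
    return "positive" if z > 0 else "negative"

# 4.8% vs 5.6% conversion on 10,000 visitors per arm: a significant win
result = classify_experiment(480, 10_000, 560, 10_000)
```

The "run it long enough to be powered" point shows up here directly: with small samples the same lift would land in the neutral bucket, which is why traffic volume determines how long an experiment takes.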
And we could then aggregate those across our teams. Certainly as the engineering team grows, you'd expect to see more experiments, so take that into account. But this question of how many experiments teams are running starts to become really important, not only because experiments produce positive results the company ultimately benefits from, but also as an indicator that something's not working. If a team can't deploy an experiment quickly, there's a problem, and there's often dissatisfaction with that.
If an engineer is waiting around because they can't get their experiment into production, they're not happy. So again, it goes back to the developer experience, to working on the right things, and to measuring how fast teams are. And this is where we found that if we move a team from one initiative to another, it takes them about six weeks to get back to the experiment velocity they had before. The interesting fact is that if you disrupt the team and break its bonds by adding or removing a person, you can typically expect 12 weeks. So you almost double the amount of time the team takes to get back to the velocity it had before.
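The bookkeeping behind the metrics Mike lists — experiments started and stopped per team per week, plus each team's hit rate — could be sketched like this. The team names, week labels, and records are invented for illustration; Etsy's actual experimentation platform (discussed below) is internal software built over many years.

```python
from collections import defaultdict

# Hypothetical experiment log: (team, week started, week stopped, outcome)
experiments = [
    ("search",   "2020-W14", "2020-W16", "positive"),
    ("search",   "2020-W14", "2020-W15", "negative"),
    ("checkout", "2020-W15", "2020-W17", "neutral"),
    ("checkout", "2020-W15", "2020-W16", "positive"),
    ("checkout", "2020-W16", "2020-W18", "negative"),
]

def velocity_report(experiments):
    """Aggregate per-team experiment velocity and hit rate."""
    started = defaultdict(int)    # (team, week) -> experiments started
    stopped = defaultdict(int)    # (team, week) -> experiments stopped
    positives = defaultdict(int)  # team -> positive outcomes
    finished = defaultdict(int)   # team -> total finished experiments
    for team, start_wk, stop_wk, outcome in experiments:
        started[(team, start_wk)] += 1
        stopped[(team, stop_wk)] += 1
        finished[team] += 1
        if outcome == "positive":
            positives[team] += 1
    hit_rate = {t: positives[t] / finished[t] for t in finished}
    return dict(started), dict(stopped), hit_rate

started, stopped, hit_rate = velocity_report(experiments)
```

Tracking the weekly started/stopped counts over time is what makes the six-week recovery after a team moves, or the twelve-week recovery after membership changes, visible in the data.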
Abi: I really love that. And as I was listening to this, I was thinking about how so many organizations just try to count ticket velocity or pull request velocity. And what's really interesting about your approach is focusing on experiments. There's more to an experiment than work getting done. In order to launch an experiment, you also have to understand the context and mission you're operating within, and be able to collaborate with the rest of your team to design thoughtful experiments. So I really love that this captures not just units of work, but real critical thinking that's happening on these teams. I really like that. One follow-up question is just tactically speaking, were you tracking this in spreadsheets or was it like a thing in Jira?
Yeah, how were you actually tracking it?
Mike: Yeah, Etsy happens to have its own experimentation platform, built over many, many years, and it's fantastic. It allows teams to aggregate these metrics and look at them by team. So we ran all of that off internal software, with a dedicated team focused on it. And as much as I like experiment velocity, what I was pushing for, and what I think would ultimately be an even better way of looking at this, is what I call learning velocity. Experiments, at least for us, were typically online, although we did a lot of offline training and testing of models especially. But, and you brought this up with research and so forth, there are other ways we can learn, and experiments are actually one of the most expensive ways to learn. They're great, they're very scientific, especially if you run them to statistical significance and power, but they're expensive.
It takes maybe days to weeks to get them ready, develop them, set them up, and run them, and then it can take weeks, depending on your traffic volumes, to get really good quality results. That's an expensive way to learn. So I would actually say the ideal measurement is more about learning velocity: how fast am I learning? It encompasses the broader perspective of what you're actually trying to do. You're not just trying to experiment, you want to learn, because if you learn early enough in the pipeline, you might not even run the experiment. You might say, that's never going to work, and I didn't know that before. I don't know how to measure that yet, but that would be the ultimate goal, if we could measure learning velocity for teams.
Abi: I really like that. Well, this has been a great conversation around approaches to measurement and great conversation before that on how you approached your multi-year DevEx journey. Mike, it's been so awesome having you on the show. I think listeners are going to get a ton of value out of this conversation. Thanks so much for coming on today.
Mike: Yeah, thanks so much for having me. This is great. Appreciate it.