Julio Santana from Workday shares how he thinks about the ideal scope of a Developer Experience team, getting buy-in for DX initiatives, how his team gathers feedback from developers, and more.
Abi: Welcome to the show, Julio. Could you start with just a quick intro about yourself?
Julio: Thanks, Abi. My name is Julio Santana. I lead a few teams here at Workday, including our Developer Experience Team that supports the Experience Product Engineering Group that I think is about 80 engineers today.
Well, I know from our previous conversations, there's quite a story that you've told around how this team actually came into existence. So I'd love to really start at the beginning. What really sparked the beginning of the DevEx Team at Workday?
So, in our organization, it's helpful to understand the product evolution and where developer experience needs emerged. A few years ago, Workday set out on a product journey to change the way that we were providing an employee-first experience to customers of Workday software. And so there was a little startup-ish group created within Workday, which pulled in and hired some folks who had startup roots and startup DNA to quickly assemble a collection of services and provide a coherent experience that we could get to market quickly.
That was largely successful, and as that organization grew and gained traction, a couple of things happened. The surface area with customers increased a lot. The services that backed that experience increased in complexity and in number. The engineering organization grew tremendously. So one team turned into five, and I think today, we're something like 15 teams, and a little less than a year ago, we as leaders at Workday within this business unit were trying to grapple with a problem that we were starting to run into periodically.
So, really, what was happening is we were hiring people, and we were observing that folks were slow to onboard and build times were taking longer and longer, and we were trying to solve this in a couple of different ways. We assembled a tiger team and had them work together to drive down build times with a very targeted goal for a little bit. They'd claim success, disband, go back to their product teams, and within a short amount of time, we were back in the same position.
So after a couple of rounds of that, we thought, "Well, maybe it is appropriate for us to have dedicated investment to not just builds and tooling, but to developer experience holistically." The organization has grown. It evolved. Those sorts of concerns get forgotten or aren't highly prioritized when you're razor-focused on providing customer value and every team is trying to bring value to market to meet a market window. So that was the genesis of Developer Experience at Workday within my organization. So I was fortunate that our leadership was willing to staff a small team who was responsible and focused on just providing a great developer experience for the engineering teams that we serve, accelerating and catalyzing delivery, and continuing to provide ongoing maintenance for these systems that we build and support.
Thanks for sharing that story. So interesting and lots to dive into more. I'm curious. So, at the beginning of that story, you said about a year ago, you started noticing problems, builds were getting slower, you were hiring more people, and people were having trouble. Where were you at that time? Where were you in the organization?
Yes. So I think a year ago, I was getting ready to go on paternity leave, but a year ago right now, I was leading a couple of search teams, and we were certainly struggling with these challenges. We would struggle to complete work. We noticed that things were taking longer, with time spent not in active development, but in review, awaiting a passing build. You'd spend a day or two either as a team or across the holistic organization waiting for someone to unblock the build to figure out what was going on.
It's a tragedy of the commons problem where there's sort of the... Everybody knew there was an issue there, but it was no one's problem. It was no one's job to go and resolve those issues, and so we would observe this deterioration while everyone is focused on their objectives and their goals. So after a couple of iterations of trying to solve this off the side of people's desks, the late summer is when we got to a point where we said, "All right. Now, it's appropriate for us to invest committed capacity towards developer experience for the long-term."
I'd love to dive into how that decision process went. We actually talked to quite a few developers and leaders who are in positions where they're trying to bootstrap a DevEx team, but are maybe having difficulty getting that commitment from the business to fully fund and staff it. So how was that journey like for you?
Yeah. So I should admit that like a lot of these types of initiatives, I wasn't alone in advocating for its need, and this is coming from everywhere, coming from developers themselves. I want to spend a minute on how developer feedback makes its way to leadership, but developers themselves were pretty vocal about the need for this sort of long-term and ongoing investment. Other managers like myself were trying to advocate for these sorts of needs, and we were fortunate that we had a senior leader within the organization supporting the collection of product teams that Developer Experience now supports, who was willing to advocate with executives for the funding and for the headcount necessary to build this team.
The thing I think that was helpful to us. So there's a couple of things to think about. I think about the world a lot of times in terms of impact to revenue, people's time, the waste that happens when you're not providing a great developer experience. However, Workday is really people-focused and employee-focused, and we found that there was a lot of mileage with our executive team in focusing on being people-centric and employee-centric, and thinking about developer experience as an active service to your engineering organization and as something that allows you to make developers happy, and satisfied, and engaged when they're at work.
I think employee engagement, especially now, since the pandemic, has become a focus for a lot of companies, and I think there are maybe a couple reasons for that. One is that it's been a developer's market for jobs. So competition for staff is really high, and if you can provide a really great day-to-day experience and aren't actively wearing your developers out with the things that they have to go through to get their jobs done, that is something that will allow you to stand out from other competitors or other options that they have on the table.
The other thing is that the nature of work has changed and companies increasingly need to support hybrid and distributed work models. So being able to provide an experience that is really employee-focused, and that supports people in their journey in spite of their not having what used to be normal as they come on board at a new company, an office, colleagues, people to go to, matters. If you can invest some energy in providing some self-service and providing some tools to enable a great experience, that can also set you apart and help not just them feel successful, but help your teams be cohesive and productive. So I think being people-focused in our argument and in the debate, and thinking about the way that engineering organizations have needed to adapt over the last couple of years, aided in our success for advocacy.
That's really interesting, and it sounds like your organization has great leaders who care about things like employee engagement. I feel like sometimes we talk to teams or people who are trying to spin up DevEx teams and the engagement angle falls on deaf ears a little bit, whereas the efficiency piece maybe resonates more strongly. I'm curious about the efficiency side. Were you having ROI type conversations about the level of staffing and headcount that the DevEx team would have?
Yes. That was an important discussion to navigate with middle and senior management just to bootstrap the team initially, and then to provide clarity I think from a couple of perspectives. One, visibility into a roadmap, and so I'll just want to bookmark that for a second and come back to that.
The other thing that was important for us is to have some metrics, metrics that are valuable to leaders in an organization that are not necessarily engineering-specific. Wasted time is one that we track on an ongoing basis, and the improvements that we make to wasted time as reflected through a reduction in flaky builds and a lowering of the amount of time it takes for a passing build. So that's an example of a place where we take the work that we do and try to relate that to something that business leaders are going to care about. Yeah, so those are a couple of areas.
So what's important is that what I have found and learned in this journey over the last nine months or so is that once you have a developer experience team of some form... At least for us, it's a new concept, and so as an engineering leader, it was important for me to just build some structures that can help higher level leaders understand our Developer Experience Team in terms that are similar to other product teams that they look at, and a roadmap is one of those things.
While the way that roadmap is generated may be different, having an understanding of the pace of delivery and the outcomes and output from the team at a reasonable level of fidelity, plus what's going to be coming next, like what are your next focus areas, making sure that, one, you can connect with leadership on that and say, "Hey, here's our roadmap. Here's what we're doing now. Here's what's next. Here's what's going to be down the road and what additional support we expect to be able to execute against those goals," and allowing them to be able to pull that information themselves became really important over, I'd say, the last six months because as a senior leader in an engineering organization, you're responsible for 10 or 15 teams, and it's helpful to have ways of understanding how teams are doing, behaving, performing that are consistent. DevEx is going to break the mold in a few ways in a lot of organizations. But if you are leading a DevEx effort and you can do things to help aid in that translation, I think that makes a big difference and puts executive minds at ease.
Yeah. A lot of good stuff there, and it sounds like some ideas around how to operate a DevEx team may be more similarly to a typical customer-facing product team. So I'd love to come back to that, but I would like to ask. Can you take us behind the scenes a little bit in terms of when you got the official approval or budget for this team? I mean, was there an executive meeting where you made the ask, or was it a written proposal? Tactically, how did that decision actually get made?
I actually think that the executive level discussion happened towards the end of my paternity leave. So I was out, and I came back and was trying to prioritize the things that I was going to be doing. In discussions with my leadership about developer experience because that was a thread that had been opened before I left like, "Hey, we might want to invest permanently here because this tiger team thing doesn't appear to be working very well." It seemed like, by the time I came back, that a senior exec or I'd say middle to senior executive who was championing this for us was able to secure a small amount of investment on our behalf.
So when I came back, they asked... in August of last year, the conversation was, one, "Do we think that there is value in creating this team and how?" because it seems like we have an opportunity to go make DevEx a reality for EXP engineering, and then, "Am I interested in leading that effort, helping to bootstrap the team from the ground up, shape the initial charter and mission, help the team build its initial processes and ceremonies, and start to deliver value to our organization?" So, unfortunately, I can't speak to what those earlier conversations looked like because by the time I came back, it seemed like... Yeah. It was just a different world.
Well, so you were given the blessing to move forward with this team. So what is your charter and your scope then exactly?
I should mention that in constructing the team, it was mostly comprised of internal seasoned developers within the organization who had experience with the services that we have currently and with the problems that we've faced over the last couple of years, as opposed to going and hiring externally. So we actually backfilled those roles across the organization. Then, since then, we've done a little bit of hiring externally now that we have a clear charter. I bring that up because I believe that if you are going to create a developer experience team, it is beneficial to start with that model, like pull from your internal teams because they're going to know the problems of engineering productivity and efficiency best, and then backfill them on those product teams. Together, we crafted the mission and the charter for the team, the frameworks that we use and apply to make decisions about what is in and out of scope for us, and the way that we think about our engagement model with the broader engineering organization. The founding team, I think, should do that.
So our mission is that we make tooling and processes more efficient and our engineers happier and more effective. So we think that yes, developer efficiency is an important component to developer experience, but it's only part of the story. Part of our focus is to attempt to make engineers happier and more effective. So our charter has a few components to it. Well, we measure our developer experience, so we... I think, Abi, on a previous call, you and I talked a bit about some surveys that we run and airing of grievance sessions that we have with teams, and we can talk about that in a bit. We want to be change agents within our organization, yes, but also for the company holistically.
I mentioned that we support an 80-person engineering organization within Workday. Workday is huge. We have 15,000 plus people at Workday. So, at a point, depending on how big you want to go with developer experience, you end up in conversations and thinking about making cultural changes that would impact the entire company. One of the big things that we're grappling with now is, "Where do we decide that our boundary is, and how do we make trade-offs between what's tractable today and where we want to make a difference for a larger section of the company?" We try to consider our developer ecosystem holistically and not focus on any one area for too long.
To the previous point about our facilitation or the way that we work with other teams, we believe it's important for us to promote developer enablement and promote team ownership of services, of processes, of the entire development life cycle. So, to that end, we try to deliver platforms on top of which product teams are able to build, and enhance, and make our concepts and work their own, and not build specific tools for any one team because that doesn't scale, at least not in my experience and not with this team, given that we have one relatively small team trying to support 15 or so. Teaching our teams to fish, so to speak, is something that's important to us.
Well, again, so much there. Great, great tips. I want to drill into the piece you talked about around what's in and out of scope. I also admit I saw something you wrote in a Slack group around... There's these three categories of things. There are things you can directly change, things you can influence, and then there are things you can't change or influence. I'm really curious how you think about that. Maybe not the tool side as much as more that cultural side that you talked about things like engagement and how happy developers are. When you go talk to developers about how happy they are, tools is definitely a part of that, but there's, of course, so much else, like just the dynamics of their team and their manager. Do they have clear direction? Things like that. So I'm curious. How do you think about your scope, and how far are you trying to go in terms of impacting or influencing those types of things?
That's an interesting question because I have maybe a couple of answers. For me, personally, as an engineering leader, independent of DevEx, I focus on what I think and understand is right, and I'm not super concerned with whether or not that is in the scope of my responsibility or not if it's something I think there's a chance that we can move the needle on, depending on how important it is. So maybe that context is helpful for the way I think about DevEx. So certainly, the things that we can control directly are tools, the services that we own, to some degree, the way that they are built and deployed, but there's an asterisk there because of the size of Workday and some of the company-wide processes that we are beholden to. There's a set of things that we can make significant change against in a reasonable amount of time, and those tell a great story. Eventually, you start to get diminishing returns on only driving those changes.
So then, we look to the next concentric circle outward, which is, "Okay. What are the things that we don't exactly own, but maybe a partner organization owns, or we have some influence, or we have some ability to facilitate conversation and surface problems that we are seeing through either the work that we're doing directly, the interviews that we're having with teams, the survey results that we are seeing?" So in those cases, what I and the team are trying to do is facilitate conversations with groups at Workday that maybe are able to make a difference in the lives of our developers, of engineers.
A couple of axes there. So one is management and leadership. So we receive feedback that developer workstations, for example, aren't always powerful enough to locally build the software services that they're supposed to be developing. So developers rely heavily on ephemeral containers and external infrastructure, infrastructure external to their laptops, which means they're dependent on a high-bandwidth internet connection to be able to do development. We can't change that directly. What we could do is raise a flag with leadership and say, "Hey, it might be best to spend some money to get M1 MacBook Pros for everyone. Yes, it's an expensive upfront cost, but it seems like it would make a difference based on the survey results we're seeing." So that's one style of conversation that we're having.
The other thing that may not be true of every company, but I think is true of larger companies is that a lot of times, there's significant horizontal dependency in how you get work shipped to production and out to customers. We experience that. I'm sure that we are not alone in that regard, and so what I and we try to do is try to facilitate conversation with our external dependencies with the groups that we find that we're running into collisions and challenges in the ability to quickly discover and deliver against features because if we can invest some energy in making that relationship healthier and in improving the path to delivery in a short amount of time, yes, that's good for customers, but developers are coming to work and hoping to do a good job. For any number of reasons, reducing the friction and reducing the pain that they encounter when they are just trying to do work feels important, even if it's not something we have absolute direct control over. So we can try to engage with management and try to engage with the external organizations that partner with us.
There's a third axis, and it's things that are not within our control and not really within our scope of influence, but are things that are frustrating, and slow us down, and wear our developers out. That's interesting because up until relatively recently, I think I would've said those fall below our cut line because you can invest a ton of energy and get nowhere, and that energy is probably better spent where there's a more reasonable chance of return on your investment. Except that right now, in this moment, I am in a debate on pushing back on one of those areas that is way outside our scope of control because of the cost, because of how frustrating it is for developers across organizations.
So I guess I'd say that if it's painful and frustrating enough that everyone is upset about it, and it slows everyone down, and it's creating significant attrition risk, then you should try, even if the likelihood of success seems small. Going back to what I mentioned about my personality, I prefer to focus on what I think is right and not be entirely concerned with whether or not that's within my scope of responsibility. I think engineering leaders should bring that perspective to work, especially with developer experience, right, because you're trying to support and create an environment where developers are satisfied and happy to be working. If you don't champion those battles and push back against processes that make developers really frustrated, who's going to?
That's really powerful and almost made me think of an analogy of a union leader, which probably isn't the right analogy, but when you told that story about M1 MacBooks, it made me think that kind of thing happens all the time where developers are talking amongst themselves. They're grumbling about something that's obviously just dormant and frustrating, but the voice of the developer doesn't always really get elevated to leadership in an effective way. It sounds like in a few scenarios, you were describing that really as a huge part of your role is just really advocating for the voice of the developer. So I really, really love that.
I'd love to shift a little bit more into the operational side of what you do. So you mentioned when we were talking earlier some frameworks on which your charter or maybe metrics are based. I'd love to dive into your survey a little bit more. In fact, I'd love to know. Did that survey begin after your team was formed, or did it predate your team and provide ammunition to form the team?
I'm trying to remember the sequence of events. I think what happened was we were given permission to start to form the team, and we thought, "Well, we need to see the backlog." I think the concept of a survey had already existed, but the form of the survey didn't exist yet. So someone in a Google site had sketched out, "Oh, we should do some surveys," and that was probably the level of maturity when we realized that we're going to have real funding here. So that was a catalyst for deciding what the developer survey was going to look like.
It's worth mentioning that cultures vary. Relative to other companies I've worked at, Workday is a very survey-heavy culture. Some companies are survey-heavy, some aren't. Some have other preferred ways of collecting feedback from large groups of the organization. At Workday, surveys are very common, yes, with customers, but also internally through varying modes and different slices of the organization. So it's a pretty organic first step for any team that is a new concept for an organization to start with, "Hey, let's do a survey of the folks that we think that we're serving and get an understanding for what their priorities are." So that is the spirit with which our survey started.
In parallel, as we were figuring out who was going to seed this team, one of the developers had recently read about the SPACE Framework, which is a framework that we are currently using to assess the types of work that we take on as a developer experience team. I believe the SPACE Framework, and I'll get to the acronym in a second, was proposed and constructed by a combination of folks out of the University of Victoria and GitHub. We have a Victoria office, and so I think there's some connectivity there. So I think it's maybe a year or two old, and it provides actually a really holistic perspective on what developer experience looks like and what sorts of things might be important.
So I think the acronym stands for Satisfaction and wellbeing, Performance, Activity, Communication and collaboration, and Efficiency and flow. That's S-P-A-C-E. So that's the SPACE Framework, and what we decided for our developer surveys was that we're going to orient a lot of the questions on those themes, and so make a statement like, "My developer experience is intuitive and comfortable," and provide a Likert score, right, from strongly disagree to strongly agree, and that would be a question that we ask around satisfaction and wellbeing.
We have four or five questions that we ask for each letter of the SPACE Framework. We also ask a couple of questions around developer tooling. The first couple of surveys, we asked specific questions around our repository structure because that was, even before we conducted a survey, a known point of concern for our developer teams. So we wanted to get some quantitative information about what sentiment for that was like.
So we've run our survey now three times, I think, and we try to keep most of the questions consistent. So we're trying to detect trends. Right? So we did a version of the survey before we did anything. As the team was starting to come together, the first task was, "Let's put the survey together, get this out there, get some feedback, and see how things are trending, one, to establish a baseline and two, to hypothesis test an assumption that we had." The assumption was we, for the three years that our organization has been around, have operated off of a monorepo. Based on what we're seeing and what we're hearing, we don't think that serves us anymore, but maybe it does. We should ask and get some information and find out.
The overwhelming response was that it did not serve the current needs of the organization, and we received some feedback through our airing of grievance sessions. There was some concern about what that would look like as we continue to hire and continue to grow. I think I mentioned that we build a number of services, and a lot of teams contribute to that monorepo, and so you can imagine what the integration and build processes look like, especially as the services have less and less relationship to one another yet are tied to the same tooling, even if that doesn't make sense for your team and the thing that you're trying to build. So anyway, those are the sorts of things that we pulsed on in our survey.
How often do you run these? Then, how do you do the reporting? Are you sending this information back to the teams themselves, or is this really just for your team? How is it acted upon?
Yeah, so roughly quarterly. I say roughly because I think we've been around a little longer than nine months, and I think we just completed three iterations of this survey. I try to keep the questions consistent. There is one developer on the team who is really passionate about visualizing the survey results and comparing from one iteration to the next. So he puts together a report based on the survey responses once we've closed the survey. The initial share out is with management, and so we... What we do and I think we've probably done something a little bit different for each iteration of the survey, and it'll probably continue to evolve, but we think minimally, it's important for us as a team to review the results, quantitatively pick out trends, observe where things have gotten better, things have gotten worse, things maybe stayed the same. That's really important.
The second thing that we thought was important was to do a share-out with management leadership, who are all stakeholders of this team's success. So giving them some visibility into the information that we're seeing, so that they... Two reasons. One, they have some understanding that we're moving the needle on areas where we're committing to move the needle, but also, so they have an awareness of what developers are saying along axes that they're not going to see through other forums. So that's a second aspect, and then the third aspect is trying to provide some coarse visibility to the entire organization, getting rid of the qualitative comments and the things that would give individuals away in the way that they talk about their concerns, at a town hall, an org-wide town hall.
We do 10 to 15 minute share-outs of, "Here's what we've learned. Here's what the last round of feedback said. Here's what we have seated in our backlog in response and what we were doing to address the things that you've told us." Not unlike the way that product teams work with customers. So you collect feedback from your customers, and you don't necessarily do everything that they tell you that they need or they want, but it is important to close that feedback loop and make sure that your customers know, "Yes, we heard you, and here's what we're going to do about it." That's the way that we think about that connectivity.
That's really helpful. I'm curious. Earlier, we were talking about how you can influence management and culture. So I'm curious. Is the survey... Is that something you use? Are you highlighting things? I'm sure there are things in the survey that aren't things that you can directly change. Do you drive conversations with management around, "Here are things you need to be doing?"
That has come up. So I'm trying to think if there's an example where we've done that so far. I think that's come up, but as I mentioned, Workday is a survey-heavy culture. Actually, before I started to lead DevEx, I was also responsible for the management-led, engineering-wide survey, like a state of the org health survey, which we used for similar purposes, like for management to make adjustments in how they're relating to their employees. So we have a different survey form that is really... It's not exactly a manager report card, but it does inform managers of how they're doing in certain areas. That's an organization-specific thing. So we've since trimmed back that survey a little bit because there were some areas of overlap with developer experience, and we felt it was more appropriate to allow Developer Experience as a team to own the collection of survey information that relates to the things that we believe we are able to impact, and cull that from the management-led initiative to pulse org sentiment.
The third thing I should mention, so we talked about two surveys, is also a third one that's Workday-wide, where once a week, every people leader, every people manager at Workday gets a snapshot of their organization's health because every employee at Workday gets a weekly survey that asks a few questions about engagement, about a number of topics, and all of that gets fed back into management. So we've talked about using developer experience survey information to work with engineering leadership on some things that engineering leadership needs to do better. However, there are at least two other places where that information shows up. If there is something that we feel is relatively unique that's not getting pulsed on in the Workday-wide weekly survey or in the management-led surveys that are conducted, then there is an opportunity for us to push in that direction, but I think we have not found the need for that yet. It's just something that we've discussed as an option for us.
On past episodes of this podcast, we've had interesting conversations with leaders specifically around the overlap of DevEx or end jobs teams and HR. So when you just shared that, there were some items from your manager survey that you decided to cut from there and move into the DevEx survey that reminded me of that. I'm curious. Do you have any examples of the things that got transitioned over to DevEx from the manager pulse?
Yeah. Much like our DevEx survey, our EXP engineering engagement survey (that's the bigger org) has fixed categories, and occasionally we would vary the question depending on how we felt from one iteration to the next. We had a category that we titled Pride in Craftsmanship, and so that was a category where every four to six months, we would ask a question related to how every engineer was feeling about craftsmanship and the code base.
So an example of this type of question was, "I am happy with the maintainability and legibility of our code base." We looked at that in January of this year and thought, "That doesn't make sense for this group anymore," because we've now staffed a team that, among other things, is helping to improve that and make it delightful for engineers. That really falls within this team's charter; they are responsible for helping us improve it, and we can omit it from our pulsing because not just us as managers, but the whole organization, is getting regular updates from that team on how things are going.
Gotcha. Well, thanks for sharing that. We've touched on it a couple of times, but I would love to hear more about the airing-of-grievances sessions and that practice.
Yes. I think we pulled that practice from our product partners. In product management, particularly when you're conducting customer interviews to understand a problem space, you're thinking about investing in a particular area, but you want some customer insight and don't have a great way to get it, so you'll set up an interview. Alternatively, there are early adopter programs; for us at least, we often spin up partner groups with customers who are interested in being early adopters to get their feedback. So we took a page out of our product management partners' book, and in addition to the survey, we host sessions with every engineering team to understand what's bothering them, what's slowing them down, what's making their life less fun.
The team, including me, has access to the raw notes taken during those structured sessions, but nobody else does. In those sessions we try to set some context and explain why we're having the conversation, then let the team do a lot of the talking while we just try to scribe what they're saying. Our first goal is to accurately transcribe from those teams what they find frustrating and disappointing in their day-to-day lives. The second goal is to look across teams, try to extract patterns, and think about opportunities for us to solve problems at scale for the entire organization.
So, to me, this is very analogous to product discovery and prioritization. We try to use those sessions to complement the survey results. Between the quantitative feedback we get from the surveys across the organization and the specific sessions we have with each team that help to color, detail, and illustrate what's going on, we can tell a story: a story about the problems that we're seeing, our proposals to address or attempt to address some of those problems, and the work that we are going to take on to support teams.
I love that. That sounds like a fun activity and gets you close to your customers, so to speak.
I want to go back to one thing you mentioned earlier in the conversation, which is that you were measuring wasted time and using that as part of the business case for forming the team, and you had mentioned some hard metrics around build times. I'm just curious: is that still something you do today? Do you have a metric called Wasted Time? If so, how do you calculate it?
Yeah. So, actually, we didn't call it Wasted Time when the team was formed. I mean, we were aware that build times were slow, and we were aware that flaky builds were frustrating, and those specific pieces were part of the business case for making this team long-lived, because at some point, everybody gets tired of hearing that their feature didn't go out to prod because we couldn't get a passing build for reasons that had nothing to do with the software we shipped. It wasn't until, I want to say, four or five months ago, when we were talking through OKRs, Objectives and Key Results, that the metric actually came together.
I was asked by my leadership to come up with a key result for developer experience that was related to an objective around employee experience within the organization that we lead. So as I was working with the team to try to identify what could make sense, one of the team members creatively realized that wasted time was a good way to combine some of the important work we were doing around driving down flaky tests, creating visibility around flaky build failures, and the slow and steady progress we were making toward making passing builds faster as well.
So that came out of the team, and I thought it was really insightful and a really great way to connect the benefit of what we're providing to the organization to a metric that makes sense to our non-engineering leaders. So we had a key result around driving down wasted time from those sources by 20% for a quarter. This quarter, we have the same key result, but that 20% is harder than the previous 20%, and that is a good way, I think, to talk at a high level about how things are going with DevEx.
So senior leaders, VPs, and senior directors are going to be less concerned with the minutiae of an epic, or stories, or particular tasks. What they care about is return on investment, and I think that return on investment is made obvious if you connect it to time or dollars. Not only have we made that a part of our OKR process for a couple of quarters now, but it's been a catalyst for producing some Splunk dashboards to help track this at a team level and then at an organization level. So there are some visuals that make it easy for leaders to see, at their preferred level of fidelity, how we're addressing wasted time and what the impact of that looks like.
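The transcript doesn't spell out how the wasted-time number is computed, but the roll-up Julio describes (flaky build failures plus slow-build overhead, tracked against a 20% quarterly reduction goal) could be sketched roughly like this. The data shape, field names, and target build time here are all hypothetical illustrations, not Workday's actual implementation:

```python
from dataclasses import dataclass

# Assumed "acceptable" build duration; the real target would be team-specific.
TARGET_BUILD_MINUTES = 10.0

@dataclass
class BuildStats:
    """Aggregated CI stats for one team over a reporting period (hypothetical shape)."""
    total_builds: int
    flaky_failures: int       # builds that failed for reasons unrelated to the change
    avg_build_minutes: float  # average duration of a passing build

def wasted_hours(stats: BuildStats) -> float:
    """Estimate engineer-hours lost to flaky and slow builds.

    Each flaky failure costs roughly one full rebuild, and every build
    pays the gap between its average duration and the target.
    """
    flaky_cost = stats.flaky_failures * stats.avg_build_minutes
    slow_cost = stats.total_builds * max(0.0, stats.avg_build_minutes - TARGET_BUILD_MINUTES)
    return (flaky_cost + slow_cost) / 60.0

def meets_reduction_goal(previous_hours: float, current_hours: float,
                         target_pct: float = 20.0) -> bool:
    """Check a 'drive wasted time down by N%' key result between two quarters."""
    return current_hours <= previous_hours * (1 - target_pct / 100.0)
```

In practice these aggregates would come from CI logs (in this case, queried via Splunk and surfaced on dashboards per team and per org), and the same percentage-reduction check would get harder each quarter as the easy wins disappear, which matches the "harder 20%" Julio mentions.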
Well, Julio, I really enjoyed this conversation and love the way you think about DevEx. Thanks so much for being on the show.
Thanks for the chat, Abi.