Podcast

DX Core 4: Framework overview, key design principles, and practical applications

In this episode, Abi and Laura introduce the DX Core 4, a new framework designed to simplify how organizations measure developer productivity. They discuss the evolution of productivity metrics, comparing Core 4 with frameworks like DORA, SPACE, and DevEx, and emphasize its focus on speed, effectiveness, quality, and impact. They explore why each metric was chosen, the importance of balancing productivity measures with developer experience, and how Core 4 can help engineering leaders align productivity goals with broader business objectives.

Timestamps

  • (2:42) Introduction to the DX Core 4
  • (3:42) Identifying the Core 4’s target audience and key stakeholders
  • (4:38) Origins and purpose
  • (9:20) Building executive alignment
  • (14:15) Tying metrics to business value through output-oriented measures
  • (24:45) Defining impact
  • (32:42) Choosing between DORA, SPACE, and Core 4 frameworks

Listen to this episode on Spotify, Apple Podcasts, Pocket Casts, Overcast, or wherever you listen to podcasts.

Transcript

Laura Tacho: Let’s get into Core 4.

Abi Noda: Let’s get into Core 4. So just to set the stage and do quick intros, my name is Abi. I am one of the founders and CEO of DX. DX is an engineering intelligence platform that gives leaders insights into developer productivity and how to improve it. And of course, Laura, you’re joining as CTO of DX and our resident expert on engineering metrics. Today we are talking about and unveiling the DX Core 4, a new framework we’ve created for measuring developer productivity. Laura, maybe set some expectations and ground rules for the conversation today.

Laura Tacho: So Abi and I are going to give an overview of the DX Core 4. We’ll talk a little bit about how it was developed, what to do with it, and how it’s different from, and similar to, other frameworks out there.

Abi Noda: So with that, Laura, maybe you could share the slide just so folks can refer to the Core 4 as we’re talking through it, and hopefully that prompts some questions. Of course, this is available at dxcore4.com. That’s a vanity URL that’s easy to remember for folks to access this visual as well as the white paper that explains the framework. Laura, I think that the best place to start would be to just jump in and talk about what is the Core 4. So maybe give folks the elevator pitch and a quick overview.

Laura Tacho: So the 30-second version: the Core 4 is our answer to the question, what should we measure? We have lots of popular metrics frameworks out there, some of you are using DORA, we have the SPACE framework, there’s DevEx. And with all of this, engineering leaders are still left to answer the question for themselves: which metrics should I actually pick? And so Abi and I, in collaboration with other experts in the industry, have put together our recommendation. What we’re advocating is that speed, effectiveness, quality, and impact are the four pillars of a metrics program that is comprehensive and rigorous, yet still simple enough to bring to every level of the business and be easily understood.

So Core 4 basically takes DORA, SPACE, and DevEx, looks at all the metrics that are out there, and puts them together in a simple, unified framework that you can start using pretty much off the shelf, confident that you have what you need and what matters most when you walk into conversations about engineering productivity metrics.
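To pull that elevator pitch together at a glance, here’s the shape of the framework sketched as a small Python structure. The key metrics for speed, effectiveness, and impact are the ones named later in this conversation; quality’s key metric isn’t named in this episode, so the sketch points at the white paper rather than filling it in.

```python
# A compact sketch of the Core 4's shape as described in this episode.
# The quality pillar's key metric is covered in the white paper at
# dxcore4.com rather than named in this conversation.
CORE_4 = {
    "speed": "diffs per engineer",                      # discussed below
    "effectiveness": "Developer Experience Index (DXI)",
    "quality": "see the white paper at dxcore4.com",
    "impact": "% of time spent on new capabilities",
}

for pillar, key_metric in CORE_4.items():
    print(f"{pillar}: {key_metric}")
```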

Abi Noda: Laura, let’s talk a little bit about who this is for. We talk to folks like CTOs, CEOs, and people leading developer productivity and platform organizations. In your view, who is the audience for this framework? Who should be using this?

Laura Tacho: I think one of the things that we really tried to bake into the Core 4 was making it applicable and relevant for all layers of the business. It makes sense that different audiences need different things, so you don’t want to walk into a conversation with an individual developer team carrying the same information that you’re going to bring to your executive meeting or your board. But in order to comprehensively express all the things that engineering is responsible for, and the impact that engineering can have, it’s important to cover quite a broad set of metrics, which is where those four pillars come into play.

Abi Noda: One of the most common questions we get asked when introducing the Core 4 is, how was this developed? So to share a little bit of backstory with listeners and viewers: folks on the DX team, Dr. Nicole Forsgren and Dr. Margaret-Anne Storey, are behind prior frameworks like DORA and the SPACE framework, and last year we jointly published the DevEx framework. And as you mentioned, we’re constantly asked: there are now three frameworks, DORA, SPACE, DevEx, which one do we use?

So this has been a collaborative effort: within our team, you and I, Laura, working with subject matter experts, getting input from Nicole and Margaret, and most importantly, working in partnership with some of our largest and most strategic customers to design something that really solves the practical challenges they’ve been facing in terms of measurement, and we’ll talk more about those in a moment. I also want to get into the why. Why did we create this framework? Because as we just mentioned, in a certain sense there are already too many frameworks. There’s already confusion around which framework to use.

So of course Laura, that is one of the reasons we published this. Both of us have constantly been asked, DORA, SPACE, DevEx, which one should we use? Share your experience and how you’ve approached that question in the past.

Laura Tacho: A lot of the time in the past, my answer has been “it depends,” because businesses are generally very different, and there are so many different things that people are trying to use productivity metrics for. My own answer to that question has evolved a lot in the last two or three years, just because of the sheer number of companies that are now deploying engineering productivity metrics; it’s become much more popular. But now we have actual data and use cases and insights from organizations who are using metrics, and we can point to them and say, okay, what these organizations have in common is that they’re looking at percent of time spent on brand-new features as an indicator of business impact.

And we can confidently look at that and say, you know what? If you’re looking for a metric for business impact, that’s the one you should use. That’s where all of these were born. We can evolve our answer from “it depends,” a very custom process that required you as an engineering leader to do all this sifting and matching and choosing of metrics, never sure whether you were measuring the right things or getting the right signal, to something we can say confidently, based on the data we have now and all the research that’s come before it: this is our recommendation for getting started, so you can skip that step of trying to sift through everything that’s out there in terms of metrics.

Abi Noda: Laura, as we know, this is a really tricky problem. We both know organizations who spend months, sometimes a year or more, trying to figure out what it is they should be measuring. And the hope is that by publishing something that is authoritative, proven, and standardized, we can help organizations shortcut that process and get to an answer, or at least a starting point, more quickly. In one conversation I had several months ago with a CTO at a large company, he said, “Look, our team’s been following your research for years now. We’ve followed DORA, we’ve followed SPACE, we’ve followed DevEx, so what’s the answer? How do we actually measure productivity?”

And I remember wanting to give them the “well, it depends” answer, and at the same time realizing that they really needed something more than that, a real answer to that question. That’s one of the many reasons we decided to put real effort toward developing this. Another factor, something we’ve seen across the industry, is the challenge for developer productivity leaders specifically. These are people who lead infrastructure or platform organizations whose entire purpose is to improve developer productivity internally. And of course, metrics are so important for justifying that investment and guiding the efforts.

And one of the challenges we’ve seen is a disconnect or difficulty in establishing a shared language across the business that builds alignment. So Laura, we’ve seen developer productivity leaders have difficulty going to their CEO or CFO or even the CTO and using data to get alignment and get appreciation and recognition for the work they’re doing. So maybe you can share a story or example that you’ve seen of that happening.

Laura Tacho: Actually, I’m thinking back to a conversation I had on LinkedIn just a couple of days ago, ’cause I posted something about how your CFO doesn’t care about flaky tests. It’s an example of how it’s sometimes really difficult to know which altitude of metrics to use for which audience, and how to make sure you’re not just bringing data but bringing information, because data does not equal information. And I think that’s really the heart behind Core 4: putting together a collection of metrics that are fundamentally designed to convey information, and making sure you’re not getting overly fixated on one or two metrics that might be pointing you in the wrong direction.

And I think now might actually be a good time to go into the fundamental design principles behind what’s in the Core 4, because we’ve covered the four pillars: speed, effectiveness, quality, and impact. There’s something really important that I want to call out about these: they’re what we would describe as oppositional metrics. I sometimes call them tension metrics. The whole point is that speed is great, but if you’re going faster while being less effective, that’s not great. Business impact is great, but if you’re having a lot of business impact while your quality is going down, that’s not great either.

We want to make sure that we have multiple dimensions and that they’re all moving in the same direction together, not that we’re over-indexing on quality or over-indexing on speed and losing sight of the big picture.
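To make the oppositional-metrics idea concrete, here’s a minimal sketch of the kind of check a metrics program might run: flag any pillar whose key metric improves while another’s degrades. The delta values, the pairwise comparison, and the 5% tolerance are illustrative assumptions, not part of the framework.

```python
# Minimal sketch: flag when one Core 4 pillar improves while another
# degrades. The deltas, pairings, and 5% tolerance are illustrative
# assumptions, not prescribed by the framework.

# Quarter-over-quarter change in each pillar's key metric, oriented so
# that a positive number means "improved".
pillar_deltas = {
    "speed": +0.12,          # e.g. diffs per engineer up 12%
    "effectiveness": -0.08,  # e.g. DXI down 8%
    "quality": +0.01,
    "impact": -0.02,
}

TOLERANCE = 0.05  # ignore movements smaller than 5%

def tension_warnings(deltas, tolerance=TOLERANCE):
    """Return pillar pairs that are moving in opposite directions."""
    pillars = list(deltas)
    warnings = []
    for i, a in enumerate(pillars):
        for b in pillars[i + 1:]:
            if deltas[a] > tolerance and deltas[b] < -tolerance:
                warnings.append(f"{a} is up while {b} is down")
            elif deltas[b] > tolerance and deltas[a] < -tolerance:
                warnings.append(f"{b} is up while {a} is down")
    return warnings

for warning in tension_warnings(pillar_deltas):
    print(warning)  # -> "speed is up while effectiveness is down"
```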

Abi Noda: Absolutely. And developer experience being one of these pillars, one of the key metrics in this framework, is something I always highlight to leaders, because of course we at DX believe that developer experience is the key to improving productivity. But at a more surface level, this is really important because one of the challenges and fears we see around rolling out and leveraging metrics in organizations is the impression it creates and the potential harm to culture it can do among developers.

And I always like to highlight that having developer experience as one of the core pillars emphasizes that the potential trade-offs of focusing only on speed or quality, trade-offs to the developer experience, to developers’ satisfaction and fulfillment with their work and their ability to work without friction, are equally important to the more top-down objectives like speed, impact, and quality.

Laura Tacho: Yeah, absolutely. There’s an interesting example of the Core 4 in action that comes from Pfizer. I’ve done a couple of presentations with them. They’re just such a great partner for us; we’ve been working with them for a while. Pfizer is increasing speed while also increasing things like security, quality, and documentation. And this is such an important example of where oppositional metrics are critical for being able to tell that story. Because think about the scale of Pfizer, or even just think about an organization that has more than 1,000 engineers.

When you start to say that such an organization is going to have more stringent security gates and is also going to increase documentation as the code is being written, my mind would automatically go to “that’s going to slow things down,” ’cause those two things usually trend in the opposite direction from speed. And we don’t want that. We don’t want to sacrifice speed for quality or quality for speed. Using the Core 4, they’re able to see that they’re actually moving both of these in the same direction. And then bringing developer experience into it, their developers are reporting that they’re having a better experience: it’s easier to release, and they can get more done.

So they’ve used this as a simple framework. It takes the DORA metrics, it takes metrics from SPACE and DevEx, and puts them all together; it’s comprehensive yet simple. And I think it’s so powerful when it’s deployed out there in real organizations.

Abi Noda: I’m seeing really great questions in the chat that I think will guide a little bit of where we take this discussion. One question we get is, how did we actually arrive at the specific metrics? And even more so, why is a metric like diffs per engineer the key metric instead of a secondary metric? We don’t have time today to explain the rationale for every single metric here, but there are a couple of points worth sharing. Beyond the research dimensions here, and making sure that each dimension is represented with the appropriate metrics, the two things we were really trying to optimize for are, one, applicability to different audiences, and two, practicality in terms of the ability to actually measure these things.

I’m going to briefly touch on each one. In terms of applicability, a question we get asked, for example on the speed dimension, is why diffs per engineer? We’re going to talk more about that shortly, ’cause there’s a lot of caution we want to share with folks about that metric. But why diffs per engineer instead of lead time, for example? One of the reasons is applicability to different audiences. The feedback we’ve gotten from the organizations we work with is that a metric like lead time, while it’s really well understood within the engineering community why it matters, when you take that metric to non-technical stakeholders or your CEO or your CFO, you often get asked questions like, why does lead time matter?

And what we’ve found is that a more output-oriented measure like diffs per engineer is more easily understood and gets buy-in from people like your CFO and your CEO, because it’s an easier concept to grasp and tie back to business value. The second rationale is the ability to actually measure. Those of us who have been involved in the DORA community for a long time know that there are a lot of differing definitions for something like lead time, and that actually measuring lead time at large organizations can be pretty challenging in terms of the instrumentation it requires.

So out of practicality, we really wanted to focus, especially for the key metrics, on things that could be both measured and benchmarked in a standardized way across different organizations. But of course, that doesn’t mean that if you’re trying to approach this problem in your organization, you couldn’t swap a key metric for a secondary metric or another metric that better fits your organization. So I just wanted to convey those points. Laura, I’d love to get your take on that, and then I want to get into diffs per engineer, because I know that’s an area we get asked about a lot.

Laura Tacho: Let’s actually go there, ’cause we sometimes agree so much that I don’t have anything to say that’s different from what you just said. I think that answer encapsulates a lot of my own thoughts. I think now is a good time to wave the caution flag around diffs per engineer. There are some other metrics in the Core 4 framework, for example regrettable attrition, or R&D as a percent of revenue, or revenue per engineer, some of these things that, honestly, even I might have said two or three years ago should under no circumstances be in a metrics framework, because the tendency to misuse them was just too high.

I think now we’ve found a way to use them responsibly, and that is another discussion: the metrics on the slide we’re showing you are one thing; how to deploy and use them ethically and responsibly, in a way that’s helpful and sustainable for your business, is a different thing. With diffs per engineer, we’re not intending to measure how many PRs Abi closed last week. That’s not what this is. We want to look at this at a team level, an organizational level, an aggregate level. What were some of the other names we actually considered for this metric, just because we know diffs per engineer is sometimes a little bit sensitive?

Abi Noda: We considered a lot of names. PR throughput was one. PR merge frequency, or merge rate, was another. And I’m seeing this brought up in the comments as well, but we should really acknowledge that we definitely viewed this as controversial and had a lot of internal conflict, conflicted feelings I should say, around its inclusion. This wasn’t a topic we’d really revisited recently, because generally our position has been to warn organizations about the dangers of using metrics like this. But recently we did revisit it and had really interesting conversations with folks like Nicole and Margaret about their views. And as you said, Laura, the intention here, and it is explicitly called out, is that this should not be measured at the individual level.

However, what we’ve generally found with the organizations we work with, and in talking to other experts, is that while there is a risk of harm and counterproductive behaviors when using a metric like diffs per engineer, the benefits generally outweigh the costs and risks. And in the white paper, we talk about three specific guidelines for how to actually roll out and talk about a metric like diffs per engineer in a way that doesn’t lead to some of the counterproductive outcomes we see. But by and large, what we’ve found is that diffs per engineer is a useful input into understanding productivity.

Again, this is one of many metrics and many dimensions that we’re talking about here, and it’s really important to call that out. A lot of the stories we hear about this metric going wrong are cases where it’s the only metric, or one of two metrics, being measured in the organization. But as part of a balanced approach, this is a metric that does provide insight into the level of activity and flow happening in an organization. It’s supportive of metrics that are less controversial, like lead time and cycle time; diffs per engineer, in some sense, is just a different view into the rate of delivery. And as we’ve been discussing, one of the key reasons we included this metric is its appeal to other parts of the business.
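As a rough illustration of the team-and-aggregate-level intent Laura described earlier, here’s a minimal sketch of how diffs per engineer might be computed at the team level, never the individual level. The PR-event shape, headcounts, and four-week window are illustrative assumptions, not a prescribed implementation.

```python
from collections import defaultdict
from datetime import date

# Minimal sketch of diffs per engineer at the team level, never the
# individual level. The event shape, headcounts, and the four-week
# window are illustrative assumptions, not a prescribed implementation.

# One record per merged PR (diff), tagged with the owning team.
merged_prs = [
    {"team": "payments", "merged_on": date(2024, 11, 4)},
    {"team": "payments", "merged_on": date(2024, 11, 6)},
    {"team": "platform", "merged_on": date(2024, 11, 5)},
    # ... one entry per merged PR in the reporting window
]

team_headcount = {"payments": 8, "platform": 12}
WEEKS_IN_WINDOW = 4

def diffs_per_engineer(prs, headcount, weeks=WEEKS_IN_WINDOW):
    """Merged PRs per engineer per week, reported only at team level."""
    counts = defaultdict(int)
    for pr in prs:
        counts[pr["team"]] += 1
    return {team: counts[team] / n / weeks for team, n in headcount.items()}

print(diffs_per_engineer(merged_prs, team_headcount))
# toy output: {'payments': 0.0625, 'platform': 0.0208...}
```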

Although as engineers and engineering leaders we may have mixed feelings about the pros and cons of a metric like this, what we’ve found is that it’s a metric that appeals to and is understood by CEOs and CFOs, and it’s important for building a bridge there: again, a shared language, a way to get aligned and ultimately drive investment into improving developer experience and productivity. That’s my two cents on the rationale for why this is included, but what are your thoughts, and maybe additional words of caution, about how this metric should be deployed?

Laura Tacho: I think Ano and Nuno both had good points in the comments here. Ano says that diffs per engineer can be easily gamed. And I think Nuno said, “Well, how do you prevent someone from just increasing speed by doing smaller PRs while still producing the same code? How do we prevent that?” Number one, this is dangerous when it’s the only metric, but when you have it in the context of the Core 4, we’re not just looking at speed. You might see speed increase, but then effectiveness maybe stays the same or changes, and quality and impact maybe don’t change or even go down. So we have these oppositional metrics in place to build in some barriers, a little bit of insurance against gaming the system.

The second thing, and this is my own opinion that’s evolved over time, because I am very conscious of Goodhart’s law, which says that when a measure becomes a target, it ceases to be a good measure. In talking with our customers, some of whom are operating at huge scale, and if you want to go back and watch the workshop I did with Grant and Max from LinkedIn, we talked about this specifically, they said that in their tenure doing developer productivity at LinkedIn, they have never come across a case of someone trying to game the system. That wasn’t obvious to me at first, but it is actually really difficult to game the system with these metrics.

It’s possible, it’s within the realm of possibility, no doubt, but the amount of extra effort and overhead a team would have to go through to game this metric is actually quite a lot, and it’s very infrequent that I’ve seen it happen in real life. So it’s a good thing to keep in mind when designing a system, and we definitely kept it in mind when designing this one, but in reality it doesn’t come up as much as we worried it would. And diffs per engineer, again, is just a different way of expressing lead time. If you ask the question of why measure lead time, the answer is going to be the same as for diffs per engineer.

And I think as long as you are a responsible leader who can deploy it in context and not use it as your only metric, then, if I were an individual engineer, I would be comfortable with that. That’s my own opinion and my own experience, but I think there’s plenty of context and narrative and storytelling around it to make it not as dangerous as it perhaps was in the past.

Abi Noda: I’ve had the opportunity to present the DX Core 4 at a number of different organizations, in front of developers, and these concerns, challenges, and questions have been brought up. One thing I always go back to is that developer experience is one of the other core pillars here. A lot of the fears about gaming, or about the consequences of optimizing for dimensions like speed or even impact, are addressed by the fact that developer experience itself is a counterbalancing dimension. If diffs per engineer creates cultural problems, that will be reflected in the Developer Experience Index. If folks are working in counterproductive ways to game metrics, that will be reflected in the Developer Experience Index.

So the voice of the developer, the reflection of how the work is being done from developers’ perspectives, is an equally important dimension here, and I think it provides a really important counterweight to a lot of the concerns we hear from developers and leaders about the risks of certain types of metrics.

Laura Tacho: Let’s move on to another question, on the other side of this chart, or this table here: the impact side. Taking percent of time spent on new capabilities, our key metric for the impact pillar: how would you frame the narrative around that metric? What’s good or bad? How would you approach this, et cetera? Actually, Abi, I’d love to hear your answer first and then see if I have a different one. I don’t think I will, but [inaudible 00:26:59]

Abi Noda: Impact is tricky. A lot of organizations, when they start thinking about how to measure impact, want to quantify the business value of almost each thing they ship, and I’ve never seen an organization actually be able to operationalize that idea. I think that is where we’d love to get to as an industry. In the meantime, percentage of time spent on new capabilities, or innovation, is a metric that we’ve seen work really well. As an example, a lot of the developer productivity organizations we work with will go have conversations with the CFO and CEO about the right amount of investment in developer productivity.

And we’re looking at the menu of metrics and language actually available to us to have that conversation. What we’ve found is that CFOs especially view R&D as: we’re putting X amount of dollars in, and what are we getting out? And the “what are we getting out,” as I just mentioned, is hard to answer at a per-feature or even per-deliverable level. However, at a broader level, if we think of the problem as, for every dollar we put in, how much are we getting back in terms of innovation, as opposed to KTLO, busy work, support, or maintenance, non-accretive work, we’ve found that to be a really good way to think about the ROI and the accretive value being produced by a team.
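To show the arithmetic behind that impact metric, here’s a minimal sketch that assumes engineering time has already been bucketed into categories; the category names and hour counts are illustrative assumptions, and real data would typically come from allocation surveys or project-tracking systems.

```python
# Minimal sketch of the impact key metric: percent of engineering time
# spent on new capabilities versus everything else. Category names and
# hours are illustrative assumptions, not a prescribed taxonomy.
hours_by_category = {
    "new_capabilities": 4200,   # net-new features / innovation
    "ktlo": 2100,               # keeping the lights on
    "maintenance": 1300,
    "support": 400,
}

total_hours = sum(hours_by_category.values())
pct_new_capabilities = hours_by_category["new_capabilities"] / total_hours * 100
print(f"{pct_new_capabilities:.1f}% of time on new capabilities")  # 52.5%
```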

Laura Tacho: One of the things that I draw a lot of inspiration from, and still read through every couple of months, is the evidence-based management framework, specifically its “ability to innovate” dimension, which breaks down this percent of time spent on new capabilities and really outlines why it is so critical: because if your business stops innovating, you will lose. It’s just a fact of, I don’t know, capitalism. And we can take the percentage of time that you’re spending on new capabilities even further: are you a SaaS company where every customer gets everything immediately as it’s released? Do you have some customers that are a quarter or half a year of releases behind?

The percent of time that you spend on new capabilities, how does it actually flow back to the customers? How many of your customers are getting these new capabilities? There’s so much business impact and ability-to-innovate signal trapped in there that can help you draw a line between dollar in and dollar out. I saw in the comments that maybe this feels more business-y in general than things we’ve talked about before, which have really focused purely on the developer experience. I know my answer for why that is, but Abi, I would love to hear your take on why it’s so important that we’re highlighting impact as a pillar, one of the four of the Core 4.

Abi Noda: Well, the question that always comes up when technical leaders are talking to non-technical stakeholders, or even technical executives, about developer productivity and the importance of improving developer experience and productivity is: what’s the business impact? And I saw another question here about how we package all this up into language and terms, such as dollars, that are meaningful to the business. We can touch on that; I think it will be the topic of another discussion we have soon, as will the question of what good looks like for each of these. We plan to talk through the benchmarks and industry data we’re seeing across all these metrics.

But largely, to that question of what’s the business impact of investments in productivity, or of the investment in additional headcount: I think we’ve found, again, that allocation, the percentage of time, the ability of the organization to allocate its resources toward accretive, elective investments, is one of the best proxies we have for answering it.

Laura Tacho: I think as well, we operate in a business setting, and developer experience to me is one of those very interesting places where what’s right and good for your development team is also the thing that’s right and good for your business. What I see so many engineering leaders struggle with is figuring out how to articulate that the thing that’s right and good for your engineering organization, better developer experience, investing in platform tooling, making things frictionless so that people can just sit down and do their job without banging their head against a wall, connects to: hey, this is actually the right thing for the business.

And that really requires us as engineering leaders to strengthen our business skills, which, quite honestly, just hasn’t been expected of us for a long time, because we’ve been in the very fortunate position of not having that demanded of us in the way other parts of the organization have. So one thing that Abi and I and the collaborators on Core 4 are trying to do is give you a bit of structure and some language to better articulate why what you want to do is actually the right thing for the business, and to help you draw that line. Because if you’re listening to this and you manage people or projects, and it feels a little icky to talk about the business stuff, because it’s uncomfortable, because you haven’t done it before, this is a sign.

I’ll tell you now: you have to lean into that discomfort and figure out what skills you’re missing that are making you feel uncomfortable, because it will be expected of you more and more. We’re in a post-ZIRP world. The market is really changing, expectations of engineering leaders are changing a lot, and it will only help you if you can confidently articulate why things like developer experience, platform tooling, and CI/CD are good for the business, and become that business leader. That’s the end of my little speech.

Abi Noda: To go even higher level: I’m preparing a talk with Margaret-Anne Storey that we’re giving at a conference next week, focusing on the history of developer productivity research and what’s new. And it was funny: while we were preparing the talk, she mentioned, “Yeah, we still don’t really have a clear definition of what developer productivity even is.” I think that underscores the point. If as an industry, as a research community, and as practitioners we recognize that even defining productivity is really challenging, and we’ve felt very lost in terms of how to measure it, it becomes almost impossible to drive aligned efforts, conversations, and investment into improving developer experience and productivity.

And so we hope that the Core 4 is at least a step toward providing a standardized, shared, common language around what developer productivity is in practical terms and how to measure it within an organization. We’re almost at time here, so let’s touch on a few final notes. Firstly, Laura: before, we used to get asked, should we use DORA, SPACE, or DevEx, and so we created the Core 4 to address that question. Now, you mentioned some folks are asking you, “Should we use DORA, SPACE, or the Core 4?” What’s your recommendation there?

Laura Tacho: You know the joke: we have all these different standards, so now we’re going to create another standard, and now we have 13 standards instead of 12. We wanted to avoid being yet another competing framework by simplifying everything and making something that encapsulates it all. The short answer is that if you use Core 4, all four of the key DORA metrics are embedded in it; they are not in competition with one another. Similarly, SPACE is an extremely broad framework. It describes dimensions of developer productivity that every single metric falls into. Keystrokes per minute is a SPACE metric. It’s a bad metric, I’m not saying you should measure it, but it’s an illustration that even keystrokes per minute falls into SPACE.

So by default, everything that is in Core 4 is in SPACE. We have taken care, and again collaborated with Margaret-Anne Storey, the co-author of SPACE, to make sure that the SPACE dimensions are adequately balanced and represented in the Core 4. So the short answer is that if you put these on a Venn diagram, they’re not independent circles sitting next to each other; it’s more like one big circle with a few parts that don’t overlap. If you’re using the Core 4, DORA is wrapped up in it, and it’s all aligned to SPACE. And Abi, obviously, as the author of the DevEx framework, has made sure that the DevEx dimensions, feedback loops, cognitive load, and, what’s the third one? I’m blanking on it right now.

Abi Noda: Feedback loops, cognitive load and flow.

Laura Tacho: Flow, yeah. Flow state is encapsulated in Core 4 as well. So we’ve taken all of these, and thank you to those of you in the comments for covering my little brain fart, but we’re making sure that this is well balanced so that you can skip the analysis paralysis of “should it be DORA, should it be SPACE,” and just get started with this framework that is simplified but comprehensive.

Abi Noda: So in the near future, we’ll be having deeper-dive conversations on the DXI, the Developer Experience Index. I saw some questions about that today; it deserves its own conversation entirely. We’ll also be talking about how organizations are using the Core 4, the benchmarks and industry data we’re seeing around it, and how to measure the Core 4 in your organization. In the meantime, we recommend that people go to dxcore4.com to see the white paper, which discusses in greater detail a lot of the things we’ve covered today.

Laura, share more: you have some upcoming educational content, coaching sessions, and office hours around the Core 4. Can you tell folks a little bit more about that?

Laura Tacho: I do notice a couple of names I recognize, people who have taken my developer productivity metrics course before. I’ve redesigned the course and updated the content quite a lot, ’cause research moves pretty quickly here. I’m going to open up another cohort of this developer productivity metrics course in a couple of days. The best thing to do is go to bit.ly/lauratacho and get on the wait list so you’ll be notified. What we’re going to do is I’ll talk you through, and really go into depth on: what is SPACE? What is DORA? What is Core 4? What is DevEx? Why were they designed? What are they good for?

And then, more importantly, we’re going to take it to the next step: how do I actually use this in my organization and on my teams? What do I do with this data? Picking metrics is one thing, but how do I actually deploy them? We’ll have a chance for peer connection and learning from others, and there’ll be a resource library. So get on the wait list if that’s interesting to you, bit.ly/lauratacho, and you’ll be the first to know when enrollment opens.

Abi Noda: We will share that link, as well as the link to the recording, in a follow-up email to everyone who attended and registered. We want to thank everyone for joining today. This was a fun conversation; we appreciate everyone’s input and hope to see you all again soon.

Laura Tacho: We’ll see you around the internet. Take care, everyone.