Laura: Welcome to AI Budgeting: Planning your 2026 AI tooling budget. There's lots of uncertainty and lots of change happening in the space, so forecasting now, in November, how much money you're going to need through 2026 is definitely a difficult task. Abi and I are here to help you, give you some insights, and share some trends we're seeing so that you can, maybe not get the math 100% perfect, but at least be aware of the big things coming up that you need to take into consideration for your 2026 planning. Let me share the results of this poll. A couple folks aren't spending anything yet, which I think is about to change, given that you're here. As I somewhat expected, we have a bit of a bell curve: some folks are spending $100 per developer per year, but the majority are spending between $100 and $500, and some up to $1,000. And 10% of folks are spending more than $1,000 per developer per year. We'll talk about how these numbers should go up or down into next year so you can plan your AI budget.
Abi: Laura, first off, I think we can just dive into the first topic, which is around how much should companies be spending. And I don’t know if folks still have the poll results up, but as Laura and I were looking at some of the industry data, some of the empirical data from the organizations we work with, I’m curious, Laura, what’s your reaction to the poll results?
Laura: I was really curious how many people were spending nothing yet, because even though we see tales of AI usage in headlines all over the place, we know that there’s lots of organizations that are still very much in their early days of bringing AI tools to their developers, so I was interested to see that. I was also dying to know how many are already spending more than $1,000 per developer per year. I did expect that we’re probably going to see a lot clustering around that $500 mark because that is how much GitHub Copilot costs for an enterprise license per developer per year. So overall, not surprised, but still very interesting.
Abi: Yeah, same. I was curious to see the more than $1,000 per developer per year. Yeah, I’m curious to dig into that more. Laura, as we were looking at data across the industry, what were some of the other references out there that you were looking at?
Laura: We were looking at how quickly the costs can actually add up here, especially for organizations that don’t have a single streamlined approach. And actually one of the big things that Abi and I want to emphasize is that a multi-vendor approach is really normal right now. So if you’re in that $1,000 per dev per year, it’s very normal to have multiple tools.
These costs can add up really quickly. Right now, companies are allocating anywhere between around 1 and 8% of their total revenue toward internal productivity tools. This comes from a study that ICONIQ did. They definitely skew toward a particular kind of company; they're an investment firm, if that's the way to categorize them, so their benchmarks might not be 100% relevant depending on your company's background. But overall, we're seeing that range to be fairly accurate, so around 1 to 8%. In practice, that looks like a startup with less than $10 million in revenue spending maybe a couple hundred thousand on AI tools, depending on team size, all the way up to huge enterprises spending $60 to $100 million per year on internal developer tools.
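To make that benchmark concrete, here is a minimal back-of-the-envelope sketch in Python. The revenue and headcount figures are hypothetical examples, not recommendations; the only number taken from the discussion above is the 1 to 8% range.

```python
# Back-of-the-envelope budget range from the ICONIQ-style benchmark of
# 1-8% of total revenue. All company figures below are hypothetical.

def tooling_budget_range(annual_revenue: float,
                         low_pct: float = 0.01,
                         high_pct: float = 0.08) -> tuple[float, float]:
    """Return the (low, high) annual tooling budget implied by the benchmark."""
    return annual_revenue * low_pct, annual_revenue * high_pct

for name, revenue, developers in [
    ("startup", 10_000_000, 40),            # <$10M revenue, small team
    ("enterprise", 2_000_000_000, 5_000),   # large enterprise
]:
    low, high = tooling_budget_range(revenue)
    print(f"{name}: ${low:,.0f}-${high:,.0f}/yr total, "
          f"${low / developers:,.0f}-${high / developers:,.0f} per developer")
```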
Abi: And, Laura, when we’re talking about AI spend, how much we spend and the cost of these tools, one thing that’s a little tricky, especially when comparing across companies, is when we talk about per-developer spend, are we talking strictly about tools like Cursor and Copilot? I see Glean listed up here. There’s also just AI capabilities baked into existing tools. So how should we think about AI budget in terms of what is the scope of the different tools we’re talking about?
Laura: If you are an engineering leader having discussions about AI productivity tools with anyone, within engineering or outside it, it is critical that you get very specific about your language: what you actually mean when you say AI tools or AI coding tools. Depending on who you're talking to, they might only think about tools like Copilot, Cursor, Claude Code, et cetera. Someone else might lump ChatGPT or other chat interactions into that definition of AI productivity tools for developers, because we might use them for pair programming or troubleshooting. There's also a wide gamut of other tools, like code review, security, and observability tools, that are AI tools but not related to code authoring. So get very, very specific with your language, because depending on who you ask, you might come back with a really different answer or find yourselves talking past each other.
My definition has really expanded, even within the last three months, of what I mean when I'm talking about AI coding assistants, or AI-assisted development, which is actually the term I prefer. I do include ChatGPT or just regular Claude in that, whereas three months ago I might not have, because I was really focused on code authoring. Abi, I wonder if that's your experience as well, that your own definition has expanded a little bit as we start to use tools in different ways.
Abi: Absolutely, and it’s interesting because some of these tools, even tools like Cursor, now we hear about sales teams and product teams and designers using them. So are we really talking about just R&D investment for the engineering organization, or are these tools actually more company-wide than even we think about as engineering leaders?
Laura, going back to the data, so based on the poll and some of our research, I think it’s safe to say that $500 to $1,000 per developer per year is sort of the entry point, the bare minimum. Can you talk a little bit about the ICONIQ benchmarks and the range that they’re seeing and what that means for all of us?
Laura: So as you said, the cost of entry going forward for 2026, the floor of what you should expect to spend, is $500 per developer per year. In fact, I would be extremely surprised if even that floor holds as tools increase their feature sets and therefore their costs. I don't think we're at a point where tools are going to get less expensive. I think we're still at a point where tools are going to continue to get more expensive.
Just for some comparative analysis here: when we talk about that ICONIQ report of 1 to 8% spent on internal productivity tools, that's often the same amount that organizations spend on their entire marketing budget for the year, so this is a very significant investment. And what we're talking about here is tools for internal productivity; we're not talking about building AI capabilities into the product offering. So this is a pretty significant investment.
One thing to connect this back to: previously, this was net-new money, new investment. For 2025 and 2026, one of the patterns ICONIQ also pointed out in their State of AI report, which is where we got those benchmarks, is that organizations are taking money that was allocated to headcount and bringing it back into engineering and R&D productivity tools. They're not taking jobs away; I want to make that really clear. This isn't a reduction in headcount. It's slower headcount growth with a heavier emphasis on automation, and it just happens that AI is the way to get there. We saw similar patterns during the DevOps transformation days with cloud. This pattern has repeated itself many times, so I'm not surprised to see it again. But on the surface it can cause concern, because it looks like things are perhaps bad, or that we're using AI as an excuse to remove people from jobs they currently have, when it's really a slowing of headcount growth. I can't say that's true in 100% of cases forever, but that's what we're starting to see for 2025 and 2026 going forward.
Abi: And as you said, Laura, I think this budget is coming from and being justified in different ways. We’ll talk more about how to stabilize and justify this budget longer-term. But as you said, I think discussions around this eating a little bit into future headcount growth budget, obviously coming out of existing R&D budget… I also think it’s pretty common right now for organizations to be treating this as sort of additive discretionary budget, even experimental budget. At least, that’s been the case this year. What are you seeing heading into next year?
Laura: I think that's exactly right. This was sort of pure research and development money: non-returning, very experimental. Now we're getting into the business case of whether this can actually accelerate time to market, shorten the time from please to thank you... thank you, Randy Schaub, for that lovely way of phrasing it. Now we're really getting into ROI and cost control and looking at the economics of it. One thing for 2026 looking forward is being able to measure ROI, being able to really understand where the money is going, because it's no longer non-returning and experimental. This is part of your budget, and we need to be able to prove that we're good stewards of that money and that we're investing it in the right things, things that actually bring the outcomes we want to see and a financial return back to the organization.
Abi: We’ve talked about where the budget loosely comes from and the approximate levels of spend. How should organizations really think about the budgeting process? How should we think about allocating this budget across the developer workforce, how to have those conversations with leaders? What’s your advice on that, Laura?
Laura: Let’s take this time to look at a little bit of information that some of the attendees shared. Those of you who took part in the pre-webinar survey, thank you, and if you didn’t, I think we have a big enough sample size that I hope you see your own experiences reflected here.
A couple of things that we asked: how are you approaching your 2026 AI tooling budget? Some of you have your budget finalized, and congratulations to you, a very small percentage, only 6%. A similarly sized group hasn't started planning yet, 4%. But the majority of you either have a draft budget in place right now or are starting your early conversations. So I'm very glad you're in the room with us right now.
A couple other things we asked: how much of your engineering budget are you planning to allocate to AI tools in 2026? This is interesting because the ICONIQ benchmarks we shared before are a percentage of total revenue, so it's not a one-to-one number match, but it's interesting to see other patterns. We've got less than 1%. The 1 to 3% range is the biggest chunk at 46.5%. We've got some folks doing 4 to 6%, and then actually a significant chunk in the 7 to 10% range or above 10% as well. I was, maybe not surprised, but it was interesting to see. I'm curious how they're spending all of that money.
Abi: And so, Laura, how do we allocate this?
Laura: To answer your allocation question: we also asked whether you are reserving budget for AI tools outside of coding assistants, and most of you said yes.
And, Abi, to answer that question: in 2024 and 2025, I think realistically 80% or more of AI tooling budgets were dedicated to coding assistants, on average, based on my own conversations and the data I've looked at. The reason is simply that there weren't a lot of other tools out there. We had some good AI documentation tools, but the coding-assistant use case was really the biggest one.
My prediction going into 2026 is that we're going to see bigger budgets. Coding assistants will stay at the same hard-dollar spend, or even increase, but we'll need to reserve up to 50% of the budget for tools outside of coding assistants. And on top of that, we need a bit of wiggle room for challenger tools and experimentation. Is that your worldview as well, Abi, based on your conversations?
Abi: I think that mirrors what I'm seeing as well. This was definitely the year of getting your foot in the door with AI code assistants. And increasingly, we talked about this in the last webinar too, Laura, a lot of the conversations we're having are about the now what: how do we expand the impact we can have on the SDLC with AI? It's a more difficult problem. As an industry, we're still beginning to figure out what that future SDLC looks like. But certainly that mirrors the data you're showing here, where organizations are now looking at investments beyond just core code gen and AI code-assistant tooling.
Laura: I think, as you said, the market is maturing. On that note, there was an interesting point brought up. We asked about cost-control mechanisms, and I think cost control and a maturing industry go together. We saw a lot of early genesis in the coding-assistant space, and I think that's now pretty well established. We have the incumbent, GitHub Copilot, and we've got a lot of challenger and AI-native tools coming on the scene. Now we're expanding out into the SDLC, but we've proven that AI tools are useful. So for 2026, cost control, making sure we're not overspending and that the spend is the right size, is becoming a front-and-center concern for the engineering leaders who hold the budget for these decisions.
Abi: Can we talk about how to approach enterprise licensing agreements, how to approach actual contracting with vendors? How do you kind of mitigate the vendor risk here, given how quickly the landscape is changing? So what are you seeing organizations do, Laura? What’s best practice around that?
Laura: I think best practice, overwhelmingly across the industry, is sticking with a multi-vendor approach. There are definitely some advantages to a single vendor: you have streamlined procurement, and there might be organizational situations where you want to stay streamlined. The risk is that these tools, even the underlying models, are leapfrogging each other practically every month, and if you're locked into one tool, you could lose out on more modern tools and the efficiency gains they could bring.
I think the other risk is that there are a lot of use cases to cover. We've got chat, IDE tools, background agents, and then all these other SDLC tools. These vendors might be expanding, but their expansion might not be what you actually need. So I don't think there's going to be a one-stop shop for a while that integrates all parts of the SDLC into one single, streamlined AI solution. There are a lot of contenders trying to do that now. But even speaking with, for example, the go-to-market team at Windsurf, I spoke with them a week or two ago in Las Vegas at the Enterprise Tech Leadership Summit, they said that on the ground, even at the very regulated enterprise level, they're seeing a multi-vendor approach, and they expect that to continue for at least the near term.
Abi: And it's important to note, when we talk about a multi-vendor approach, that in many cases this is still experimental: prolonged POCs and trials of multiple vendors, typically with active data, measurement, and research efforts happening in parallel to understand the impact and ROI of the different tools, looking toward eventual standardization. Laura, in terms of budget, evaluating different tools, and even knowing where to invest, can you talk about how organizations can leverage data in that process?
Laura: Yeah. Given the heat that's on engineering leaders to make these decisions well for the organization and to prove out that ROI, data-driven proofs of concept and data-driven trials really help. So does having measurement visibility into your organization, so you can see what happens to your software delivery processes, the quality of your code, the satisfaction of your developers, and the satisfaction of your customers when you roll out AI. That gives you more confidence in making those decisions. We're not talking about just a $20,000 decision. We're often talking about millions of dollars in investment, especially if your company is of any significant size. These are very big decisions with big budgets and big consequences, so saving money in your budget for the measurement layer is also very important.
Telemetry has lagged behind a little bit in these tools; it wasn't where vendors started. We saw Anthropic come out with a really good metrics API that hit GA last month, which we partnered with them on building, so that you can have a first-class data connector in DX to get that data and see it. Other tools have varying levels of telemetry, so make sure you're able to track adoption and look at other factors of AI impact, like time savings, percentage of code written by AI, developer satisfaction, developer experience, and some cost-control kinds of metrics.
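As a rough illustration of what adoption tracking against vendor telemetry can look like, here is a hedged sketch in Python. The endpoint URL, auth header, and response shape are invented placeholders, not Anthropic's or any other vendor's actual API; consult your vendor's documentation for the real interface.

```python
# Hypothetical sketch: polling a vendor usage endpoint to track monthly
# active users. The endpoint, auth scheme, and response fields are
# invented placeholders; replace them with your vendor's documented API.
import requests

API_KEY = "your-admin-key"                              # placeholder
ENDPOINT = "https://api.example-vendor.com/v1/usage"    # placeholder

resp = requests.get(
    ENDPOINT,
    headers={"Authorization": f"Bearer {API_KEY}"},
    params={"start_date": "2026-01-01", "end_date": "2026-01-31"},
    timeout=30,
)
resp.raise_for_status()

# Assume one record per developer per day: {"user": ..., "requests": n}.
records = resp.json()["data"]
active_users = {r["user"] for r in records if r["requests"] > 0}
print(f"Monthly active users: {len(active_users)}")
```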
If the measurement question is still fuzzy, take a look at a previous webinar that Abi and I did about measuring the impact of AI. There's also a whitepaper on our website, getdx.com; there's a banner right at the top to read our whitepaper on measuring AI code assistants and agents, which will give you some really concrete guidance on what to measure in order to build out the ROI clearly and have confidence that you're making the right choice.
Abi: And as we move past just measuring the impact of these initial investments and look at where to allocate budget and investment going forward, one of the things I've been stressing with the companies I meet with, Laura, is this idea of aim before you fire, especially with this next wave of AI investment. You need data to really understand where the bottlenecks are in the SDLC, where AI investments and tools are going to have the greatest ROI.
A lot of us are going about this retroactively. We make some purchases and investments, then we retroactively look at whether it's worth continuing, whether the ROI is there. I think that works well for a lot of the AI code-assistant tools, where there clearly is ROI; it's a matter of making sure you're achieving and optimizing it within your organization. But going forward, I think the bets are going to be a little less clear for organizations. And again, having the data, through tools like DX, to really understand where in the SDLC we should be deploying, building, or buying AI solutions is going to be really, really critical.
Laura: And I think it’s so hard to get that data without speaking directly with your developers who are using those systems every day and can tell you pretty easily where the problems are. The organizations that I see being most successful with AI tools aren’t ones that take what I call the spray-and-pray approach, which is let’s just give licenses to every single developer and hope that their curiosity and their natural tendency to be problem-solvers takes care of the rest. It’s organizations that are looking across their SDLC and thinking, “Oh, wow, we’ve got a huge bottleneck here.”
Can we point AI at this problem and come up with a solution that wasn’t even possible two years ago because the technology has just evolved so much? And those kinds of problem-oriented use cases and approaches to AI I’m seeing pay off big time because the gains that you can get from eliminating those bottlenecks, which are there, are much bigger than the incremental time gains that we might get from speeding up the coding process. And so if you want organization-wide ROI and organization-wide results, we really have to think about AI as an organizational tool, not just developers getting some acceleration in individual coding tasks.
Abi: What do we expect to stay the same and change? What do we see coming down the pipe here going into the next year, but also really the next two to three years?
Laura: A couple of things. The biggest change is that tools are going to get more expensive, and we're also going to be using them more. One thing to keep in mind for budgeting in 2026 is that your usage and spend in Q1 is probably going to be some fraction of what you should expect to spend in Q4, because adoption should increase, and prices should increase too, independent of adoption. So just expect things to get more expensive.
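One way to sanity-check this effect is a simple ramp projection. The sketch below uses hypothetical growth rates to show how a Q1 run rate can badly understate the full-year total:

```python
# Why "Q1 spend x 4" understates the year: compound a hypothetical
# adoption ramp (15%/quarter) with modest price increases (~10%/year).
q1_monthly_spend = 40_000     # example Q1 monthly run rate
adoption_growth = 1.15        # per quarter, hypothetical
price_growth = 1.025          # per quarter, roughly 10% annually

total, monthly = 0.0, q1_monthly_spend
for quarter in range(1, 5):
    total += monthly * 3
    print(f"Q{quarter}: ~${monthly * 3:,.0f}")
    monthly *= adoption_growth * price_growth

print(f"Naive Q1 x 4: ${q1_monthly_spend * 12:,.0f}")
print(f"Ramped total: ${total:,.0f}")
```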
Another point: factor in a percentage of your budget for tools that either challenge incumbents or expand capabilities across the SDLC. Again, the multi-vendor approach is the norm right now, so if you have the opportunity to take it so you can use best-in-class tools, we'd definitely recommend it. It also helps you avoid vendor lock-in as models update and tools leapfrog each other in capability and performance.
And the last point: making sure you have data-driven pilots and continuous measurement in order to understand ROI is going to be critical, because the spotlight is on engineering. The pressure is on us to use AI in ways that advance the business, and without measurement it's really hard to prove that. We need to show that we're being good stewards of budgets, because this is not a non-returning, experimental investment. This is something that needs to drive the business forward, and we need the data to show it.
Abi: I also think investment in enablement and standardization. This is what we talked about on the last webinar, for those who are interested: the changing role of platform and developer experience teams amidst this change. One of the things we discussed was the huge opportunity for traditional platform engineering concepts, like setting up golden paths and paved paths, standardizing to provide more leverage to the organization, and integrating these tools more seamlessly into the SDLC. So I expect a lot more investment in enablement, education, integration, and standardization.
I also think it's reasonable to expect continued consolidation in the vendor space. We've seen this pattern over and over in the industry, and right now there are a lot of really exciting point solutions and startups coming out almost every week. Over time, I think we'll start to see more standardization and consolidation into the larger players, many of whom folks on this call already work with. So those are things to look ahead to. Laura, anything else you want to wrap up on before we get into some of the questions we're getting?
Laura: There’s a lot of good questions, so why don’t we get into them? And I think anything that I still wanted to say that I’ve left out, I think we’ll get to them in the Q&A.
Abi: I think one really interesting one is, how can I respond to the top management challenge on why I don’t see headcount cut down after introducing an AI tool? I think this is a challenge that predates AI. Anytime we’re talking about engineering efficiency, you run into the problem of someone in finance or someone on the business side saying, “Okay, X percentage more efficient, that means X percentage less headcount or X percentage more something on the P&L.”
Laura, I’m curious to get your advice. My advice has been generally to steer clear of the P&L discussion certainly, but when it comes to headcount, I think the messaging right now needs to be that this is about keeping up with the new pace of innovation. This isn’t about cost-cutting and slowing down. While, yes, it’s true that AI is accelerating engineering productivity, it’s also true that all your competitors are also accelerating nearly just as much. And so I think it’s about staying competitive, and not about cost-cutting, at least in the current environment.
Laura: Could not agree more. I was talking with some folks about this last week. Only cost centers brag about cost savings, and if you brag about cost savings, your reward is going to be a budget cut. So I don't think cost savings is the right angle. Every leader I've talked to recently, save a very few who are more on the services side, sees it that way. I talked with a CTO today who said it really nicely: we're giving these tools to our developers to help them get more work done, and we want them to be part of the future we're building. This is not about reducing headcount. This is about making people more effective in their jobs and being more competitive in the market. Because the truth is that if you're trying to reduce costs with AI while your competitors are treating it as an accelerant, you will be out of business in a matter of time. That's just how the cards will fall in that scenario. So I think that's the right thing to focus on. You might need a little messaging tuning to have it land the right way.
Abi: And I think there’s little things, nuanced things that we can do to make sure we get that right. For example, just the language we use to talk about ROI, when we talk about dollar-savings in terms of dollars, that immediately gets into ideas of cost-cutting. We can instead talk about time recaptured or capacity gained through these investments. So those subtle differences in framing, I think, can make a big impact on how the message lands and what sorts of conversations that leads to.
Laura: Absolutely. Words carry a lot of meaning in these high-stakes executive conversations, so tune your language. If we're talking about time savings, we can talk about reinvestment in feature quality, in documentation, in whatever that might be, and really frame it as profit-center thinking instead of cost-center thinking, as some of you are pointing out in the chat.
Abi: What portion of AI budgets are companies allocating to upskilling, playbooks, enablement, training, et cetera, relative to tooling? Laura, I don't know if we have concrete data on this, but we probably have a lot of empirical data from the companies we're working with. What would be your take on this?
Laura: I would have to do some mental math to figure out the percentage. What I can say is that training and enablement are absolutely essential in order to get adoption up and for it to stick, first of all, and then also for the impact to come. We know that outcomes can’t come if people aren’t using the tooling, and so doing the bare minimum is getting basic training and enablement for people to adopt the tools.
We see heavier investment from companies like Booking.com, who run experience-based accelerators, something they borrowed from AWS. Teams bring a real business problem into the room where they're learning a tool, maybe Windsurf or Cursor or whatever tool they're onboarding onto. They go through some training on AI proficiency, but most of it is experiential learning: they use their real problem as the way to gain proficiency with the tool. At the end, not only do they have proficiency because they've just spent three or five days building those skills, they also have a solution that's actually valuable for their particular team, context, and goals. It's a win-win: they can bring that solution right back, they've learned how to change parts of how they work, and they can use it as a big jumping-off point.
So when we talk about AI spend in the AI measurement framework, I think it's directly in the table, and if not, it's in the text: make sure to also track how much you're spending on training and enablement, because if that number is zero, that's too low. That's definitely too low. I don't have strong guidance on the exact figure, but I think you should expect to spend somewhere around $500 to $2,000 per developer for a high-quality training workshop on AI tooling, depending on who's offering it. So make sure that number is non-zero, and also talk to your developers to figure out who's having trouble onboarding and decide whether you need to change your onboarding strategy based on that info.
Abi: I would say this is not optional, like you said. It needs to be a critical part of your AI strategy. I also think this is a great example of where data can be really helpful. Look at the adoption and utilization numbers, compare them against the industry benchmarks we have in DX, look at the impact numbers, compare those against industry benchmarks, and then, based on that, decide how much of a gap you have to close. From that gap, you can rationalize the dollars, hours, and number of FTEs going into enablement. So again, I think enablement is absolutely critical, but I don't know that there's a blank-check approach to it. Starting with the data is going to be most useful.
Laura: I want to quickly answer this question from Helen, who asks, "Are we also counting all the individual AI price hikes for various tools, or are we really considering the core gen AI tools only?" When I was doing research for this slide about how costs can add up really quickly, I'll just throw that up here, I looked at some existing developer tools, Sentry, for example, and sure enough, these tools are now coming out with AI add-ons. Sentry has an AI production-debugging tool that's not part of their core product offering but an extra $18 per user per month. I'm not commenting on their pricing strategy, but I think it's very valid that we should also expect to see some pricing expansion in existing tools that are now offering AI capabilities.
I think this is what Abi is talking about with the genesis and expansion of tool offerings: we're going to see a lot of AI add-ons to other tools before they get swallowed into the core product offering, or until those more specialized offerings get rolled into one of the existing vendors that does IDE code completion or other agentic workflows, for example.
Abi: There's another question here around trends of companies running local models or investing in more capable hardware. I wouldn't say we see a lot of companies doing that at this point. I do think, from conversations with a lot of larger enterprises, that the model layer, fine-tuning or even custom model development, is generally an area of huge opportunity. Ultimately, the value of all these tools is limited by their ability to be trained on the context of what your organization is working on and the technologies and programming languages it uses. Both fine-tuning and bespoke model development are what a company like Poolside specializes in: partnering with enterprises to bring in more self-hosted, bespoke models. So I think that's a huge area of opportunity. I don't see a lot of organizations, especially midsize and smaller ones, worrying about it right now, and I think that's fine. The broadly available, popular models are still improving so rapidly that we're not close to tapping out the potential there.
Laura: Someone asked a great question about small to mid-scale startups: what are the options if we can't get into the enterprise plans; do we maybe do a stipend or reimbursement? This is actually something worth having in the room here: the approaches to budgeting overall. Even enterprise companies, depending on their AI adoption maturity, might go for the stipend model, where each developer gets, say, $100 a month, and they can spend that however they'd like on the tools of their choice.
Some pros: it's more straightforward to budget for, because you know exactly where the upper limit is per developer, and you don't have vendor lock-in; things can be month to month. The downside is that you miss out on some of those economies of scale, and also on team workflows, which is where we find the biggest productivity unlocks and the biggest automation opportunities. On the flip side, we have enterprise license agreements, which give you the economies of scale, and also streamlined enablement if you can stick to one vendor.
The biggest differences between the individual plans and the enterprise plans are pooled usage and what happens with overages. Generally speaking, and I did talk to quite a few vendors to make sure this is still the case, with the individual licenses you have your usage limit, and once you're over it, you're done. We've maybe all had Claude Code say, "I'm going to self-destruct after my five hours," and then we have to wait in order to do anything.
With the enterprise plans, you won't run into those same usage limits, but usage is pooled at a certain level. So you run the risk of very unpredictable overages, because there is no hard usage limit. At the same time, when we think about cost control, we might want to introduce usage limits for particular groups of developers, junior developers, or certain teams. Some enterprise license agreements are introducing those controls; some haven't yet. There's really a wide spectrum of pricing models, and I haven't found a consistent way that every company does it; there's a lot of variability. So when you're doing your data-driven evaluations, also think about the pricing model and what it actually means for your organization.
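To see why the pooled model makes budgeting harder, here is a toy comparison under stated assumptions: a stipend gives a hard annual ceiling, while pooled per-seat usage tends to be heavy-tailed and unbounded without configured limits. All numbers are invented for illustration.

```python
# Toy comparison of the two budgeting models: a stipend caps exposure,
# while pooled enterprise usage has no ceiling unless limits are set.
# All figures below are hypothetical.
import random

random.seed(0)
developers = 200
stipend_per_dev_month = 100

# Stipend model: the worst case is simply the cap.
stipend_annual_ceiling = developers * stipend_per_dev_month * 12

# Pooled model: simulate heavy-tailed per-developer monthly usage.
monthly_usage = [random.lognormvariate(4.0, 1.0) for _ in range(developers)]

print(f"Stipend ceiling:    ${stipend_annual_ceiling:,.0f}/yr")
print(f"Pooled this month:  ${sum(monthly_usage):,.0f} "
      f"(heaviest single developer: ${max(monthly_usage):,.0f})")
```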
How are folks measuring percentage of code written by AI? How do we know, if we’re trying to get some metrics there?
Abi: We talked about this on one of our recent webinars, so again, I would really recommend folks check that out. We've gotten a lot of questions about measuring ROI. Laura and I published a whitepaper on this a few months ago, and we also did a pretty lively webinar on the topic, where we talked about this exact problem of how to measure the percentage of code generated by AI. It's a really challenging metric for organizations to measure, because a lot of the tools today just count the number of lines accepted, not how many lines are ultimately committed after being accepted, deleted, modified by humans, et cetera.
In terms of how to measure the percentage of lines written by AI today, a couple of the vendors, including Cursor as well as Windsurf, have begun introducing measurement into some of their tools, not all, but some. At DX, we have been measuring this in two ways: one, through self-reported data, so data reported by developers themselves through different types of survey-based approaches. We've also been developing a daemon that runs on developer machines and tracks file changes, so we're really looking at the system level at who is actually writing the code, and ultimately how much of that AI-generated code gets committed and pushed up to trunk.
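As a toy illustration of why acceptance-based and survival-based numbers diverge, here is a small sketch. The per-commit records are invented for illustration; real attribution requires editor or daemon telemetry like the approaches described above, and this is not DX's actual implementation.

```python
# Toy illustration: the "% of code written by AI" looks very different
# if you count lines *accepted* from an assistant vs. lines that
# *survive* to the commit. The records below are invented.

commits = [
    # (ai_lines_accepted, ai_lines_surviving, total_lines_committed)
    (120, 80, 300),
    (45, 40, 90),
    (200, 60, 250),
]

accepted = sum(a for a, _, _ in commits)
survived = sum(s for _, s, _ in commits)
committed = sum(t for _, _, t in commits)

print(f"Acceptance-based share (what many tools report): {accepted / committed:.0%}")
print(f"Survival-based share (code that actually ships): {survived / committed:.0%}")
```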
So for more information on that, visit our website, visit the previous webinar Laura and I did. We could talk for hours about the measurement problem and metrics like percentage of code generated, but again, would point people to the prior sessions and white papers we’ve written on the topic.
Laura: Wonderful. Abi, thanks so much for joining me, and thanks to all of you for spending 45 minutes talking about budgeting and measurement. If you're interested in learning more about the AI measurement framework, AI code metrics, or the new AI mandate and AI charter for platform engineering teams, Abi and I do these webinars about every month, and there's a backlog of all the recordings on our website, getdx.com, so definitely check that out. There's a whitepaper on AI measurement in a nice purple banner at the top of our website that provides some really clear guidance on what to measure if you're doing evaluations or need to prove ROI to make better budget decisions.
Thanks, everyone. We’ll see you around.