
The biggest obstacles preventing GenAI adoption — and how to overcome them

In this episode, Abi Noda speaks with DX CTO Laura Tacho about the real obstacles holding back AI adoption in engineering teams. They discuss why technical challenges are rarely the blocker, and how fear, unclear expectations, and inflated hype can stall progress. Laura shares practical strategies for driving adoption, including how to model usage from the top down, build momentum through champions and training programs, and measure impact effectively—starting with establishing a baseline before introducing AI tools.

Show Notes

Obstacles to AI adoption

  • The hype: Some of the claims written on LinkedIn and other places online overstate the impact of AI coding tools.
  • The cost of AI tools: not cheap when scaled across an organization.
  • Technical barriers aren’t holding back adoption—cultural and human resistance are.

Some AI adoption stats

  • Top-end organizations report that 60-70% of developers are using AI code assistants either daily or weekly.
  • Even at Microsoft, adoption of AI code assistants is below 60% of developers.
  • There is a steady increase in the adoption of AI tools.

Strategies for driving AI adoption

  • Work against the hype by showing the real impact of AI tools. Methods for demonstrating impact include self-reported data, telemetry, direct signals from developers, indirect signals from hard data, and quality measures.
  • There isn’t a special set of metrics for AI adoption: We still care about quality, developer experience, and business impact.
  • Model and encourage AI use from the top down.
  • Remove the fear and stigma from using AI: Reassure employees that using AI isn’t cheating.
  • Carve out time for employees to experiment with AI tools.
  • Have clear conversations about expectations for AI use.
  • Host webinars, in-person trainings, and office hours.
  • Champions programs: Identify the early adopters to help evangelize and drive adoption with their peers.
  • Use DX’s Guide to AI-Assisted Engineering.
  • DORA’s recent AI report shows a 451% increase in adoption among companies with an acceptable use policy.

Key measures from the DX Core 4 productivity framework

  • For quality: Measure change failure rate before and after introducing AI tools.
  • For speed: Measure PR throughput.
  • For maintainability: Survey developers. Teams with higher AI usage often self-report more difficulty understanding and maintaining their code.
  • Always stay on top of developer experience.

Timestamps

(00:00) Intro: The full spectrum of AI adoption

(03:02) The hype of AI

(04:46) Some statistics around the current state of AI coding tool adoption

(07:27) The real barriers to AI adoption

(09:31) How to drive AI adoption

(15:47) Measuring AI’s impact

(19:49) More strategies for driving AI adoption

(23:54) The methods companies are actually using to drive impact

(29:15) Questions from the chat

(39:48) Wrapping up

Transcript

Laura Tacho: Abi, you want to kick things off?

Abi Noda: Yeah, really excited for this conversation. AI obviously really top of mind for everyone here, most organizations, most leaders. So today we really wanted to dive into what we’re seeing, our guidance around AI adoption. As we know this is a really big priority for a lot of the organizations we work with. We’re seeing a lot of effort put into evaluating and trialing different tools, getting these tools in the hands of developers, measuring and really pushing for adoption of these tools. In some cases, we’re also seeing top-down mandates, which effectively require the use of these tools or even tie them to performance reviews. So we’re seeing the full spectrum.

And I think what really underpins this is a feeling that if we don’t adopt AI, we’re going to be left behind, both individually and organizationally. And of course, there’s tremendous noise and hype coming from the headlines and the press, so I think as leaders right now, it’s really difficult to navigate this, right? It’s really difficult to know what’s real. What should we expect? How do we measure this? Those are some of the questions I think we want to talk about today.

Laura Tacho: Yeah, Abi, I think Brian from Microsoft said it really well in a recent podcast episode: the thing working against AI adoption is the hype around AI. The hype is its own worst enemy, because you don’t have to go very far on LinkedIn or in the news to see Microsoft saying 20 to 30% of its code is being written by AI, Google saying it’s 30%, the Anthropic CEO saying give it three to six months and 90% of the code is going to be written by AI. It’s really not hard to find these sorts of sensationalist statements that, in my opinion, are just not matching up with the on-the-ground experience that we’re seeing from real organizations. And so executives and other leaders read this and think, “Oh my gosh, we’re so far behind.” A developer sees it and thinks, “Wow, this must be the best thing on the planet,” then gives it a try and gets pretty lackluster results those first couple of times, because it’s a tool that you have to learn. And when you get those lackluster results, the tool gets set to the side, and that hype cycle ends up working against AI adoption.

So what we’re seeing is that AI adoption is a struggle for a lot of organizations. It’s not cheap. We were just having a conversation about how many tools each of us is personally using on a daily basis; those are all like 20 euros or 20 bucks a month. When you scale that across an organization, it is not cheap at all. And so of course from the business perspective, we want to see return on investment, so we have to understand what the obstacles are that are blocking people. Hype is one of them. Abi, do you see any trends? What are you seeing in terms of adoption? Is it matching the hype that you’re seeing on LinkedIn?

Abi Noda: Yeah, so we’ve been taking a look at our aggregate data, and a couple of things are interesting. First of all, I think we have to acknowledge these AI tools are effective to some extent, right? I think we’ve moved on from the skepticism about whether these tools are worthwhile. Across the board we see real impact, both in terms of self-reported time savings from developers and in terms of some correlation with lift in throughput and other developer experience measures.

In terms of adoption, as you mentioned, Laura, I think there is still a struggle relative to where organizations want to be. Looking at the data, at the very top end, the top decile or top quartile, we see organizations achieving adoption where approximately 60 to 70% of their developers are using AI code assistants either daily or weekly. That’s at the very top end. Microsoft recently published a paper acknowledging that even at Microsoft, adoption of AI code assistants is sub-60%, and that mirrors what I’ve heard from folks at Microsoft as well. So for a lot of organizations right now, there’s a lot of room, 50% or more, to drive adoption up.

We certainly don’t see the level of impact that’s in the headlines, the 30% of code being written by AI, the 2x, the 50 or 100% productivity improvements. We’re not seeing that anywhere in the data on any sort of consistent basis right now. What we are seeing, though, is a steady increase in impact and adoption. And so I want to caution that anything we’re seeing today could be different six months from now, especially as these tools rapidly evolve.

Laura Tacho: Yeah, and I think we are definitely on the upswing, right? Every day these tools get better. Like you said, developers are still highly skeptical of the reliability of the code, and we saw that in the Microsoft study you referenced: only somewhere around 20 to 30% of developers would blindly trust the code. And that’s very fair. It’s not necessarily a replacement yet, it’s meant to augment, and that’s sort of where we are. But things are getting better day by day, which is really great to see, and we see that in our data.

I think when it comes to adoption, and the focus of the conversation we’ll have today, the technical barriers are really not what’s holding back adoption; it’s more cultural. It’s more about how humans interact with these tools and about changing ways of working. That’s the barrier to widespread adoption, and the barrier to getting better ROI, better results from these tools.

And so for leaders, a big part of the job, I don’t want to call it damage control, but you have to do some expectation setting with your non-technical stakeholders or counterparts who are seeing these sensational headlines about 30% of code being written by AI, and you need to level set. I think it’s important for you to be able to communicate what impact is actually realistic. I saw someone in the chat just now asking, “Are these self-reported? How are you actually measuring? How do you measure the impact of AI? Is it about lines of code? Is it about time saved? Is it about something else?” And then there are the actual use cases: what is this class of tools actually set up to do well in our organization?

And Abi and I have had quite a lot of experience looking at these problems at many different companies. We have access to a bunch of data that we can inspect, and so we wanted to come together today to share some of our findings, what we’re seeing, and also the ways we see a solution going forward.

I think the point that we want to land on is that we can’t just give Copilot licenses to any group of developers and expect magic to happen overnight, with suddenly 30% of your code being written by AI. I know that all of you in this room probably know that already, but it’s up to you to have realistic expectations, understand how to communicate them, and then also know what the barriers are, the cultural barriers, and how we can work around them. So Abi, I’m curious to hear a bit more from you: how are you thinking about this problem? How do you think about solving it?

Abi Noda: Yeah, well, I think we’re going to talk today about the three recommendations we have around best practices and the biggest gaps we see in terms of driving AI adoption and impact in balanced ways.

I did want to call out one thing. You touched on the recent research coming out of Microsoft. One interesting thing: although in their research they found that the hype around AI is one of the biggest barriers and friction points for driving adoption, they also found that fear of being replaced by AI was actually not very prevalent. So although “Is this going to replace us?” is a pretty dominant narrative in the headlines around AI, most developers aren’t worried about that as much as the press suggests. Rather, it’s the tension between inflated expectations and hype versus reality. It’s that tension that I think can create friction in the organization, leading to misalignment, skepticism, and reluctance to buy in, get involved, and really adopt and experiment with these tools. So I just wanted to call that out, because I thought it was a really interesting finding from the research.

But Laura, with that said, why don’t you guide us into our discussion of our recommendations across three key areas for how to guide AI adoption?

Laura Tacho: Yeah. So Abi and I talked about what the one-two-three steps would be for how we would advise a company to do this. And one of the very first things is, again, to work against the hype. It’s really important to level set expectations, because the hype not only creates inflated expectations at the executive level, but also makes developers feel like they’ve been promised the world with these tools. It’s really important to establish what’s realistic, and then show people what is actually happening. And to do that, we have to start by measuring.

So the first class of breaking down barriers to adoption is getting clear insight on how the tools are actually being used, what real impact they’re having, and how that’s relevant to your organization. Of course it’s really hard to know how adding AI tooling changes the way your organization performs, or how individual developers perceive their work, without measuring from a baseline. But if you don’t have a baseline, that shouldn’t be a reason to skip the measuring step. Just use what you have; you can use self-reported data to fill in a lot of gaps. Abi, do you want to talk a little bit about our recommended methodology for measuring AI and measuring the impact in general?

Abi Noda: Yeah, it’s really challenging right now, both because of the speed at which new tools are coming out and because of the differences in the ways developers engage with these tools. Another challenge is that the telemetry you can get from these tools is in many cases lacking, sometimes deliberately: Copilot, for example, is very cautious about what telemetry at the user level it provides. Other tools are still in the early phases of standing up APIs at all.

In terms of the measurements we’re finding the most success with right now: as you mentioned, it’s a spectrum which includes both self-reported data and telemetry-based data. We like looking at direct signals from developers, simply asking them how frequently they’re using these tools, how much time they feel they’re saving, and their satisfaction with the tools. Those are some of the types of direct signals we can get from developers, and they cut across different tools.

We also look at indirect signals using hard data to try to validate and triangulate some of the impact we’re seeing through self-report. So we look at how usage correlates, longitudinally and cross-sectionally, with things like PR throughput or weighted PR throughput, true throughput from DX. We also look at quality measures: for teams that are utilizing AI more, how is it affecting their downstream code quality? And we look at developer experience metrics, like their ability to understand the code base easily.
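To make that triangulation concrete, here is a minimal sketch in Python, assuming a hypothetical per-developer export. The file name, columns, and analysis are illustrative only, not a DX schema or API.

```python
# Minimal sketch: triangulating self-reported AI usage against hard data.
# "dev_metrics.csv" and its columns are hypothetical, not a DX schema.
import pandas as pd

df = pd.read_csv("dev_metrics.csv")
# expected columns: developer_id, quarter, ai_days_per_week,
#                   self_reported_hours_saved, prs_merged

# Cross-sectional: within the latest quarter, does heavier AI usage
# line up with higher PR throughput?
latest = df[df["quarter"] == df["quarter"].max()]
r_usage = latest["ai_days_per_week"].corr(latest["prs_merged"])
print(f"usage vs. throughput: r = {r_usage:.2f}")

# Validation: do developers who report saving more time also show it
# in the hard data, or do the two signals disagree?
r_saved = latest["self_reported_hours_saved"].corr(latest["prs_merged"])
print(f"reported hours saved vs. throughput: r = {r_saved:.2f}")
```

A weak correlation from a sketch like this would mirror what Abi describes next: strong self-reported savings that don’t fully show up in PR throughput.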

I would say, in aggregate, if we were to summarize: we have a piece coming out in my newsletter really soon with some aggregate analysis of this data, but some takeaways I can share today. In aggregate, we’re definitely seeing strong signal around self-reported time savings from developers. On average, we’re seeing around two to three hours per week of time savings from developers who are using AI code assistants. That’s meaningful. There’s a spectrum there, with folks on the much higher end, who we’re doing a lot of research into, as well as on the lower end.

We’re not seeing as much correlation to PR throughput as we would expect. This aligns with what we hear in the industry. We hear a lot of organizations saying, “Wait, why aren’t we seeing the lift in PR throughput that we would expect, especially given the self-reported time savings?” So we’re seeing a positive but relatively soft relationship between AI usage and PR throughput. I think that that’s a fascinating thing. I don’t really know the explanation for that personally.

And I know there have been questions about the percentage of code written with AI. That’s something that has been hard to actually measure. Even with the headlines coming out of places like Microsoft, you’ll see the researchers at Microsoft commenting in different forums that it’s a bit of a crude measure right now, because most folks are measuring the number of lines of accepted suggestions, but of course developers actually modify that code after they accept the suggestions. That’s something we’re digging into at DX: we’re actually developing a tool to properly measure AI-generated code versus human-generated code across all IDEs and tools. So we don’t have an answer for what the industry benchmark is around that yet; we just don’t, I think, have good enough data outside of what’s in the headlines.
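For reference, the crude measure being described reduces to a single ratio. A minimal sketch, with made-up numbers purely for illustration:

```python
# The naive "percent of code written by AI" measure: lines from accepted
# suggestions over total lines changed. It overstates AI authorship,
# because developers routinely edit accepted code afterward.
# Both inputs below are made-up illustrations, not real telemetry.
accepted_suggestion_lines = 4_200
total_lines_changed = 18_500

pct_ai = 100 * accepted_suggestion_lines / total_lines_changed
print(f"naive AI-written share: {pct_ai:.0f}%")  # an upper bound at best
```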

So yeah, those are some of the measures we look at, Laura. Again, it’s important to get both the self-reported data and the system data, and to look at all these different signals to build a clear picture of what’s really happening. And as we know, the situation is quickly evolving.

Laura Tacho: Yeah, and to add on, just to help people understand the richness of data that’s required: self-reported data is incredibly valuable when measuring the impact of AI, because we are right now in a world of high fragmentation. Humans are really good at putting together multiple systems. I can ask you, “Abi, how much time did you save last week by using AI?” and you can give me a decent estimate, as long as I’m not asking how much time it saved you five weeks ago, right? And so for developers who are maybe using an AI coding assistant in their IDE, plus ChatGPT and something else, we can ask a self-reported question and get good data on how much time they’re saving across all of the tools. We can also get information about whether they feel more productive. There’s a lot more room and a lot more possibility in what you can capture using self-reported data.

We can then of course look at code suggestions, acceptance rates, percentage of lines of code written, and all of those kinds of output metrics, and then look at the secondary metrics: is this impacting PR throughput? But without that self-reported data to hold everything together, we’re really just getting fragments, and we’d have to do quite a bit of stitching at the system level to assemble them, whereas humans are pretty good at stitching the data together. And I think that’s generally true of any new and novel class of tools: there’s high fragmentation without a ton of unification or visibility into them.

I think another thing I want to emphasize, just to make sure that we put a fine point on it, is that there’s not necessarily a special class of metrics that we need to start measuring now because we use AI. The physics of software development are staying the same, we just have different means of production. And so we still are going to care about quality, we’re still caring about developer experience, we’re still caring about business impact, and we want to measure how AI has changed all of those things. It’s not that we have to start from scratch and redo all of our thinking and all of our understanding about measuring software performance and productivity, because those pieces stay the same regardless of whether it’s AI, and then we can actually look, once we introduce AI coding assistants, what’s the delta? What’s the change? Is it negative, positive? There’s lots of different ways we can use that data to then validate our hypotheses about whether AI is going to help us or not.

Abi Noda: Moving on, I think the other big thing, as we’ve talked about, is that measurement is so important to level set, right? You have heightened, inflated expectations from leaders, you have skepticism from developers: what is really the impact of this? I’m reading these things on Reddit, blah, blah, blah. So getting that baseline for your organization helps create alignment and gives you a grounded place to start having conversations about the benefits of AI in your organization.

I think an equally important piece of driving adoption is how we model and encourage AI use from the top. I don’t know what the exact best practice for this is yet; we’re seeing everything from the crazy CEO mandates on Twitter from organizations like Shopify, to the same thing happening more quietly at a lot of other organizations. But I think a good analogy, or a good anecdote: let’s not even talk about a software developer. Picture a marketer who has to hit deadlines every week, get five blog posts written every week, and is barely hitting that deadline as is. That marketer doesn’t necessarily have time to go tinker around with, “What are the right prompts in ChatGPT? How do I actually use AI?”

So we need to remember that not only do we need to remove the fear or stigma from using these tools, like it’s okay, it’s not cheating, but we really need to give people time and space, and we need to encourage that kind of experimentation from the top. Being able to share, “Hey, here’s a new way I just discovered to use AI as part of my workflow,” encouraging that kind of play and experimentation, and making sure developers have that time. Without that time, people can’t fundamentally change the way they work, and AI is definitely a dramatic shift in how we work.

Laura Tacho: Yeah, absolutely. I think this all comes down to putting individuals in the best position to have a positive change on the system, and we need to give them the tools, resources, autonomy, and time to do that. So again, throwing licenses at your developer population and saying, “Okay, now you have AI,” without giving them time to experiment and actually learn the tool, and without setting the example from the top, isn’t going to get very far.

I think the other side of that, leading by example, leading from the top down, not necessarily a mandate that’s going to go viral on LinkedIn, but having a really clear conversation with the company about, “What’s the expectation here? Are expectations going up? Are we expecting you to do more with less or is this really just something that we’re giving you because we know that it’s going to improve developer experience and that’s the whole point of it?” Having those conversations transparently is really important.

I think the other thing as well is making people feel that they’re not cheating, that it is allowed, and that can actually be structured in an acceptable use policy, with governance, compliance, all of those things built in. That was actually the factor DORA found in their recent AI report: a 451% increase in adoption of AI tools at companies that had an acceptable use policy versus companies that didn’t. And I think it’s extremely surprising to see that it’s not necessarily about which vendor you pick or which tool you’re using, but about making sure the expectation is clear that it is okay and encouraged to use those tools, that it’s not considered cheating, and that you’re not going to get fired for using a robot to do your job. Those kinds of things, driven from the top down, can really, really boost adoption.

Abi Noda: The biggest thing, and we’ve talked about level setting, getting aligned, making sure we’re modeling from the top: with the organizations we’ve been working really closely with on driving these impact and adoption numbers up dramatically, it’s a hands-on effort, right? We’re seeing a lot of investment in workshops and trainings, we’re seeing office hours, we’re seeing champions programs, and we’re seeing a lot of educational content being created.

So Laura, I’m curious what you’re seeing, but I think that’s our third best practice around increasing AI adoption: it can’t be a passive effort. This is really a hands-on, all-hands effort, engaging leaders to encourage and stress the importance of this from the top, but then following that with real enablement efforts. Whether that’s webinars, office hours, bringing in outsiders or insiders for training sessions, or champion programs, we’ve seen these types of activities work. While they maybe don’t sound super sexy, it’s not a technical solution; it’s a cultural shift that you need to drive. We’ve seen dramatic increases in both adoption and impact, in terms of the self-reported data as well as secondary metrics such as PR throughput.

Laura Tacho: Yeah, absolutely. I think the companies that understand that AI tooling needs enablement and support, just like any other tool, are the ones that are going to continue to win. So webinars, training programs, explicitly teaching the skills that developers need in order to maximize the benefit from these tools.

We actually released a report, and maybe someone from DX can drop a link in the chat, or go to getdx.com and you can download it yourself: our Guide to AI-Assisted Engineering. This research was led by Justin, our deputy CTO, shout out to Justin. He spoke with 180-plus companies that were seeing really good benefit from gen AI, talked to the executives, talked to the individual developers, and really wanted to get into the nitty-gritty of the tactical skills they actually had to know in order to get those two, three, five hours of time savings every week. We don’t just know how to do these things innately; they have to be taught. And now that we can quantify and separate that out into discrete skills, you can download this guide and see, “Oh, okay, teaching our engineers how to do recursive prompting or how to write a really good system prompt, these are going to be the levers to pull in order to get higher-order benefits from AI.”

Aside from those very sort of tactical things on the developer side, there’s also some very tactical things on the leadership side. We mentioned acceptable use policies, creating an environment for experimentation, but there’s a lot of motion that the organization needs to go through in order for adoption to be there, and it’s not going to happen by itself. And I think that is sometimes counterintuitive because there is so much enthusiasm and optimism about AI. Developers want to play with cool new stuff, right? And I think every developer would like to have the opportunity. That’s really different though from using it daily in their work setting. It’s one thing to tinker around with stuff or to vibe code, whatever you want to do on the weekend, it’s very, very different to actually use it as part of your software development workflow in your job and in your organization.

Abi Noda: Just plus-one to everything you said, Laura. Also, this guide will be updated probably at least every six months, so there’ll be a second edition later this year as the tools, and the best practices around how to leverage them, evolve. So definitely keep a lookout for that. And Laura, I think we should get into the questions. We’ve been trying our best to aggregate the ones from the chat, and we have some in the Q&A.

But I would just again emphasize that this really forward-thinking kind of enablement seems to be the key. It really hit me personally: our engineering team at DX are heavy users of AI, and I’ve been left behind a little bit. I still code, but I haven’t had the time to tinker with the tools as much as I’d like. So when we were developing this guide, it struck me that if I were coming onto a team of developers not using AI and was supposed to get them to use AI, I would have no idea what to do other than, “Hey, go check out these tools.” I wouldn’t have any actual, “Here’s how to do that. Here’s how to use these tools to maximum benefit.” That’s what this guide is intended to be. I think it’s something folks should incorporate into their larger enablement efforts. The guide itself is definitely shareable with your developers, but pull the nuggets out of there that you like and put them into your own enablement content. We’ve heard of a lot of organizations having success doing that.

Laura Tacho: I want to get into some of the questions just to wrap up here. I saw a couple questions that I’ll sort of aggregate in my own paraphrasing, which is how do we actually measure quality? How do we measure speed? How do you actually measure these things?

I want to just share, in case you’re not familiar with it, the DX Core 4 framework, which Abi and I co-authored based on research from DORA, the SPACE framework, and DevEx, to try to unify all of them and give you some very clear recommendations on what to actually measure. There is a set of key metrics in there to measure across speed, effectiveness, quality, and impact. For quality, we recommend measuring change failure rate: the percentage of deployments that result in degraded service. You can dig down into that and look at some of the secondary metrics. Looking at change failure rate before introducing AI tools and then after is a good way to see what the impact of AI was on your quality. As for PR throughput, that’s the measurement for speed in the Core 4 framework; we want to see whether speed goes up or down after introducing AI tools. So these are some of the process-level, second-order business metrics to track to see what the impact of AI is. They are really good at explaining impact.
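A minimal sketch of that before-and-after comparison, assuming a hypothetical deployment log; the file, columns, and rollout date are illustrative:

```python
# Sketch: change failure rate (CFR) before vs. after an AI rollout.
# "deployments.csv" is hypothetical: one row per deployment, with a
# timestamp and a boolean marking whether it degraded service.
import pandas as pd

deploys = pd.read_csv("deployments.csv", parse_dates=["deployed_at"])
rollout = pd.Timestamp("2025-01-15")  # illustrative AI-rollout date

def change_failure_rate(frame: pd.DataFrame) -> float:
    """Percentage of deployments that resulted in degraded service."""
    return 100 * frame["caused_incident"].mean()

before = deploys[deploys["deployed_at"] < rollout]
after = deploys[deploys["deployed_at"] >= rollout]
print(f"CFR before rollout: {change_failure_rate(before):.1f}%")
print(f"CFR after rollout:  {change_failure_rate(after):.1f}%")
```

Comparing the two windows only suggests a direction; as discussed below, many other factors feed change failure rate, so a shift here is not proof of AI’s effect on quality.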

I think earlier we talked about how suggestions or lines of code are not great at explaining impact. I think that is true, and I’ll stand by it, but where they are useful is in explaining the adoption process. So if you’re the one overseeing the choice that, “Yes, we’re going to have Copilot licenses,” you’re responsible for doing enablement. Looking at those user adoption numbers, how many active users you have on a weekly basis, and their actual usage statistics, is going to give you insight into what kinds of enablement campaigns will be most successful in boosting adoption. So those metrics definitely have their place, but I wouldn’t lead with them; that wouldn’t be my first slide if I were trying to explain the impact of AI either to a developer or to an executive.

Abi Noda: Double-clicking on the quality metrics question: the most recent DORA report that focused on gen AI found a negative relationship between AI utilization and quality as measured by change fail rate, which, again, intuitively isn’t surprising. It was a subtle relationship; it shouldn’t be a headline, I think. At DX, we are not seeing that. We have not seen the negative relationship between AI usage and change fail rate; we have largely seen quality hold pretty steady. That being said, there are a lot of factors that go into change fail rate, so it’s not necessarily valid to say that this confirms AI isn’t introducing quality problems.

Where we are seeing signal on quality is in one of our DXI (Developer Experience Index) drivers, around code base maintainability and readability. We’re not seeing a huge statistical takeaway from the data yet, but I can say empirically that we have seen a lot of cases where, on teams where AI usage is higher, we see a decrease in developers’ own self-reported ability to understand and maintain that code. I think that’s an early signal of what the future could look like. Again, that makes a lot of sense; it’s intuitive. Another metric we sometimes look at that’s a little more inner-loop than change fail rate is PR revert rate: basically, how many oopsies are being committed into the code base. Again, we’re not seeing much of an impact there based on gen AI utilization.

So largely, what we’re seeing in the data, and what I would recommend measuring, is the actual developer experience, the self-reported indicator of, “Are you having more difficulty understanding your own code because more of that code is being generated by AI?” I think that’s really important to keep tabs on. There are AI tools now focused on things like documentation that can help here, but don’t let your code base run away under your developers’ feet, because that is going to lead to future velocity and quality problems.

Laura Tacho: And going back to the Core 4, the DXI, in my opinion, is the thing to keep an eye on above all else, because it’s really the leading indicator, the controllable input metric, that’s going to impact all of the other metrics around speed, quality, and impact. If you watch for degradation of developer experience, you’ll catch it early, before it shows up downstream in ways that are much more challenging to fix.

Moving on, someone asked about the greatest areas of time savings. We have a white paper documenting this; go download that guide, I’ll share it again, the Guide to AI-Assisted Engineering. What we did there was look at the users of AI who were reporting the highest time savings and ask them specifically what actions were resulting in the most time saved. Surprisingly, mid-loop code generation, using it for code authoring, wasn’t at the top. It’s actually things like stack trace analysis, which is incredibly time-consuming but a great application of AI: it saves time and gives you some directionality in terms of where to look, “What does this error mean?” So download that guide; there are a lot of really interesting statistics and research in there about which use cases are best suited to AI and have the best time savings.

Abi Noda: Laura, I think we have time for one more question and then I’ll let you kind of wrap us up. So there’s been a couple questions throughout the discussion around how should this AI shift affect hiring practices? Should we be focusing on interviewing for AI skills or hiring for AI skills?

I’ll just give my personal take on this. I think that is probably a mistake, and here’s why. AI adoption is clearly sloping upward, to the point where most developers are using AI to some extent in their work. So first of all, maybe it’s reasonable to ask a candidate, “Are you anti-AI? Are you culturally or morally opposed?” because maybe that presents cultural problems. But in terms of technical interviewing and hiring, I think we should assume that most developers are using AI.

The question really becomes how effective developers are at incorporating AI into their work to boost their own productivity. I think that’s something that should be demonstrated through traditional interviewing and hiring practices; I don’t think we need to home in on nitpicking, “Hey, how do you use AI? How often do you use AI?” What we see in our data, actually, is that for a good portion of developers, use of AI can be more situational. We’ll publish an article on this soon, but we see the greatest gains in time savings actually come from that initial step of going from not using AI to periodic but regular use of AI. That’s where we see the biggest gains. The folks using AI daily still see more gains, but it’s not a linear increase: if you’re using AI every 10 minutes, for example, you’re not seeing gains 100x greater than the person using it once per week.

So I would definitely caution organizations against sort of gating or introducing rigorous hiring practices around AI. I think it’s better to think of AI as an input into a developer’s overall ability to produce good code and ship good features for the business.

Laura Tacho: Yeah, and just to add onto that, I think another big mistake to make, not the only one, would be to disallow AI in an interview scenario. It’s the same mistake as not allowing Googling during an interview. You want to make sure that the environment in which someone is completing an interview task matches the environment and the resources they’ll have access to when actually working, and a mismatch there is not helpful. I am worried about a future where we have to have proctored tech interviews because we’re too worried about AI cheating. I really hope our industry stays away from that, and that we realize AI is just another tool, another resource that helps people be more productive, and we should be encouraging that rather than being so afraid of it.

So thanks, Abi, and thanks for all of your questions. As we said, 50% of developers using AI on a daily or weekly basis is pretty normal right now at a median-performing organization. The sensational headlines are sensational for a reason; that’s just not what we’re seeing on the ground. That being said, there’s a lot of opportunity out there, and the barriers to adoption aren’t really technical at this point; they’re cultural. The tech is there, it’s working for us. As Abi said, the biggest delta is between users who don’t use AI at all and users who are periodic users of AI, so that’s the population you want to target. If you have 50% of users using AI every week or every day, that’s great, keep supporting them.

But what you want to do is focus on the other 50%: did they use AI once and then stop using it? What can you do to bring them into the fold? Because that is where you’re going to see the most gain. So do some segmentation, do some interviews, go on a listening tour, and figure out what matters to those people; a sketch of that segmentation follows below. You definitely want to level set expectations, both with your execs and with your developers, in terms of what the real impact is, and make sure you’re measuring so that you can tell a good story with data about what’s happening. Lead from the top with examples of experimentation, make it encouraged, and encourage use through acceptable use policies, which is something we found to work. And then, of course, treat this like any other tool that needs support and enablement: do training and office hours, and teach the explicit skills that help your developers get the most out of AI so they can become a user, or become a better user, over time. Those are really the three things we want you to walk away with; those are the levers you can pull to increase AI adoption at your own organization.
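Here is a minimal sketch of that segmentation, assuming hypothetical roster and usage-log exports; the files, column names, and thresholds are all illustrative:

```python
# Sketch: segmenting developers by recent AI-tool usage so enablement
# can target lapsed and periodic users, where the biggest gains are.
# "developers.csv" and "usage_log.csv" are hypothetical exports.
import pandas as pd

roster = pd.read_csv("developers.csv")                    # all licensed developers
log = pd.read_csv("usage_log.csv", parse_dates=["date"])  # one row per dev per active day

cutoff = log["date"].max() - pd.Timedelta(days=28)
days_active = (
    log[log["date"] >= cutoff]
    .groupby("developer_id")["date"]
    .nunique()
    .reindex(roster["developer_id"], fill_value=0)
)

def segment(days: int) -> str:
    if days == 0:
        return "lapsed / never started"  # the population to bring into the fold
    if days < 4:
        return "periodic"                # where the largest per-user gains show up
    return "regular"                     # keep supporting these users

print(days_active.map(segment).value_counts())
```

The thresholds are arbitrary; the point is simply to separate lapsed and periodic users from regulars so listening tours and training can aim where the gains are.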

Another shout-out for our Guide to AI-Assisted Engineering; it’s incredibly valuable, and we’ve gotten tremendously positive feedback on it. It’s meant to go to every engineering manager and every engineer, and there are a lot of tactical examples in there as well as leadership examples. So give it a read-through; there’s a lot more in there that continues the conversation Abi and I are having today. We’ll leave it there. Feel free to follow both of us on LinkedIn if you’re not already, because we’re posting about this stuff in real time as we keep learning more, and this is changing every hour, it seems, so there’s definitely a lot to talk about around AI and AI adoption. Thanks, everyone, for spending the last 45 minutes with us. Take care.