
Where does the time go?

If developers are saving time with AI, where does the time go?

This post was originally published on my LinkedIn.

I had the chance to sit down with Gergely Orosz and talk about how to effectively measure AI impact across engineering organisations. This is a thorny problem in our industry right now: the AI tool ecosystem is expanding rapidly, and companies are eager to adopt tools so they can reap the benefits of improved efficiency and accelerated innovation as soon as possible.

Many companies are tracking time saved per developer as a key metric to measure the impact of AI. When paired with metrics around adoption, developer experience, quality, and cost, time savings is a useful way to understand how AI is changing developer workflows and improving efficiency. It’s a top-level metric for impact in the AI Measurement Framework, and although it’s not perfect, it does help organisations make better decisions about their strategy and investment.

So Gergely (and nearly every CTO I speak with) wanted to know: if developers are saving time with AI, where does the time go?

It’s pretty hard to answer that question across the industry right now, for a few reasons. First, it requires that orgs had robust baseline measurements in place before adopting AI, which is already a hard problem. Second, enough time needs to pass for users to get onboarded onto the tool, get over the learning curve, and then do enough “normal work” that we can be confident in the analysis of what’s happening. We can’t really draw a conclusion about what’s happening after only a month, and many orgs have only integrated these tools into their workflows in recent months.
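To make the timing problem concrete, here’s a rough sketch of how I’d frame the analysis windows before comparing anything. The rollout date and the 60-day ramp-up period are my assumptions for illustration, not a standard:

    from datetime import date, timedelta

    # Hypothetical rollout date and ramp-up period: assumptions for
    # illustration, not a standard.
    ROLLOUT = date(2025, 3, 1)
    RAMP_UP = timedelta(days=60)  # onboarding plus learning curve

    def analysis_window(observed: date) -> str | None:
        """Assign a metric observation to a comparison window,
        discarding the ramp-up period where usage is still unstable."""
        if observed < ROLLOUT:
            return "baseline"
        if observed < ROLLOUT + RAMP_UP:
            return None  # exclude: developers are still onboarding
        return "post-adoption"

The point of the sketch is simply that anything measured during onboarding gets thrown out, so the before/after comparison only covers stable periods of work.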

So while industry-wide data to answer this question might be sparse, I can share some of my recent observations and give you some advice if you’re trying to answer it yourself.

Accelerating the fun parts, leaving the toil

One case that Gergely and I discussed in this episode came from DORA’s 2024 State of DevOps Report. While AI adoption increases developers’ reported flow, job satisfaction, and productivity, it also correlates with less time spent on valuable work, rather than reducing toilsome tasks as expected. The report’s “vacuum hypothesis” suggests that AI helps developers complete valuable work more efficiently, creating extra time that isn’t necessarily filled with more valuable activities. This challenges the common assumption that AI primarily eliminates manual, repetitive work.

More time for bugs, and more time for innovation

I have observed the opposite happen at some companies: developers who are daily+ users of AI fix more bugs per week and also spend more time on innovation work vs. KTLO, bugs, and maintenance.

This sounds great at face value, but it’s important to think about not just what the data tells us, but what it does not tell us.

  • Are they fixing more bugs because AI is creating more bugs?
  • Are they getting more work done in the same amount of time?
  • Are the AI users on particular teams where there is less of a maintenance overhead? Are other people on their teams working on more maintenance tasks?
  • …and 50 other questions

To answer these questions and draw a confident conclusion about what’s happening, we need to look at several measurements at once and triangulate an answer. PR throughput, change complexity, maintainability, quality, allocation analysis – all of these together give us confidence in the analysis.
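To make that idea of triangulation concrete, here’s a minimal sketch. Every metric name and threshold in it is illustrative, not a prescribed formula; the point is that one metric moving alone is not a story, while several moving together is:

    # A minimal triangulation sketch. All metric names, values, and
    # thresholds here are illustrative assumptions, not a formula.
    baseline = {"pr_throughput": 4.1, "change_failure_rate": 0.12,
                "bugs_fixed_per_week": 2.0, "innovation_allocation": 0.35}
    post_ai  = {"pr_throughput": 5.0, "change_failure_rate": 0.13,
                "bugs_fixed_per_week": 2.9, "innovation_allocation": 0.42}

    def pct_change(metric: str) -> float:
        return (post_ai[metric] - baseline[metric]) / baseline[metric]

    # "More bugs fixed" only reads as good news if quality held steady
    # and innovation allocation didn't shrink to pay for it.
    more_bugs_fixed = pct_change("bugs_fixed_per_week") > 0.10
    quality_held    = pct_change("change_failure_rate") < 0.10
    more_innovation = pct_change("innovation_allocation") > 0

    if more_bugs_fixed and quality_held and more_innovation:
        print("Consistent with real efficiency gains, not AI-created rework.")
    else:
        print("Signals disagree; dig into allocation and quality first.")

In this sketch, the bug-fixing trend only counts as a win because change failure rate held roughly steady and innovation allocation went up rather than down.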

Time saved coding is pushed into reviewing and verifying

Some teams are finding that AI is speeding up their code authoring process, but that time is soaked up in other parts of the SDLC; for example, code reviews may take longer. The time isn’t really saved; it’s just shifted.

At DX, we recently looked at 180+ companies and interviewed developers about the use cases that actually save them time. Code generation is up there, but there are so many others – stack trace analysis, unit test generation, writing SQL – that might not be obvious to developers just getting used to developing with AI.

Answering this question yourself

Being able to compare how developers spent their time before and after introducing AI tools is the most effective way to understand how AI has shifted that time. This means you need alignment on what engineering performance looks like (use the Core 4 to simplify this discussion) and good baseline metrics.

PR throughput, allocation, innovation ratio, and PR cycle time are just a few of the metrics I’d recommend looking at to start putting together a picture of how AI is changing the makeup of day-to-day work for development teams.
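If you’re starting from raw work-item data, even something this simple can get you an innovation ratio to baseline against. How your tracker categorises work is a judgment call, and the labels below are assumptions for the sketch:

    from collections import Counter

    # Hypothetical work items tagged by category in your issue tracker.
    items = [
        {"team": "payments", "category": "feature"},
        {"team": "payments", "category": "bug"},
        {"team": "payments", "category": "ktlo"},
        {"team": "payments", "category": "feature"},
    ]

    # What counts as "innovation" is a judgment call your org must align on.
    INNOVATION = {"feature"}

    counts = Counter(item["category"] for item in items)
    innovation_ratio = sum(counts[c] for c in INNOVATION) / sum(counts.values())
    print(f"Innovation ratio: {innovation_ratio:.0%}")  # 50% in this sample

Compute the same ratio per quarter, before and after rollout, and you have one axis of the picture.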

This is also a time when speaking with your developers is crucial.

Some developer activities are not visible from system metrics alone, so it is essential to bring developers into the conversation and let them describe their experiences. For example, AI might reduce the amount of time it takes them to find an answer to a question. That is nearly impossible to measure from systems data, but a developer can tell you the answer quickly. Developers know where these tools are helping them and where they are causing more friction, so believe them when they tell you.
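Self-reported data like this is also easy to collect and summarise. Here’s a tiny sketch, assuming a weekly survey question along the lines of “roughly how many hours did AI save you this week?” (the question and the numbers are hypothetical):

    import statistics

    # Hypothetical weekly survey responses: self-reported hours saved.
    responses = [0.0, 1.5, 3.0, 2.0, 0.5, 4.0, 2.5]

    median_saved = statistics.median(responses)
    mean_saved = statistics.mean(responses)
    print(f"median {median_saved:.1f} h/wk, mean {mean_saved:.1f} h/wk")
    # Report the median alongside the mean: it is less sensitive to a
    # few enthusiastic outliers.
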

 

Published August 1, 2025