AI-assisted engineering: Q4 impact report
Data and insights from 400+ organizations navigating their AI transformation.
This post was originally published in Engineering Enablement, DX’s newsletter dedicated to sharing research and perspectives on developer productivity. Subscribe to be notified when we publish new issues.
“Are we falling behind on AI?”
Without fail, this question makes its way into nearly every conversation I have with software engineering executives these days. Click-worthy headlines claim that Coinbase has 40% of its code written by AI and that two-thirds of AI-native startups are letting AI write nearly all of their code; dozens of similar success stories have surfaced in the past week alone. Combine that with pressure from the boardroom to improve efficiency, and it’s easy to see why so many leaders feel like they’re already behind.
But are those headlines true? And are they a good reference point for your own organization’s performance?
At DX, we have the unique vantage point of working closely with more than 400 organizations and seeing firsthand how their AI strategies are unfolding. To help you better contextualize your own performance, we’re releasing a new report today: AI-Assisted Engineering: Q4 Impact Report.
Each quarter, we’ll analyze the prior quarter’s data to offer a clear, moment-in-time view of AI adoption and impact across the industry—reviewing the good and the bad, not just cherry-picked success stories that make for good headlines. We also include our analysis and interpretation, so you can draw meaningful insights faster and spend less time wondering what the numbers actually mean for your organization.
In this quarter’s report, there are a few big themes.
First, AI tool use is now ubiquitous, with over 90% of developers using AI tools at least once a month. Earlier in the year, many leaders focused their AI strategies on adoption and activation, and it’s paid off. But now that adoption has ramped up, the focus needs to shift toward driving real impact.

We also looked at the extent to which AI-authored code is being merged without significant human rewrites, and found that the percentage of AI-authored code is lower than what you see in some of the major headlines. Getting reliable data here remains a challenge for many organizations, but it’s an area of tool telemetry that’s evolving rapidly, including through a solution we’re developing at DX.
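To make that metric concrete, here is a minimal sketch of how such a measurement could work, assuming you can tag changes as AI-authored at commit time and compare them against what was eventually merged. The `Change` data model, the `ai_survival_rate` function, and the 20% rewrite threshold are all illustrative assumptions, not DX’s actual methodology or telemetry.

```python
from dataclasses import dataclass

@dataclass
class Change:
    """A unit of code change, e.g. a diff hunk, tagged at authoring time (hypothetical model)."""
    ai_authored: bool        # tagged by editor/agent telemetry when the code was written
    lines_original: int      # lines in the change as first authored
    lines_rewritten: int     # lines a human modified before merge

def ai_survival_rate(changes: list[Change], rewrite_threshold: float = 0.2) -> float:
    """Fraction of AI-authored changes merged without significant human rewrites.

    A change counts as "significantly rewritten" if more than
    `rewrite_threshold` of its original lines were modified before merge.
    The threshold is an illustrative assumption, not an industry standard.
    """
    ai_changes = [c for c in changes if c.ai_authored]
    if not ai_changes:
        return 0.0
    surviving = [
        c for c in ai_changes
        if c.lines_rewritten / max(c.lines_original, 1) <= rewrite_threshold
    ]
    return len(surviving) / len(ai_changes)

# Example: three AI-authored changes, one heavily rewritten before merge.
changes = [
    Change(ai_authored=True, lines_original=50, lines_rewritten=2),
    Change(ai_authored=True, lines_original=30, lines_rewritten=20),
    Change(ai_authored=True, lines_original=10, lines_rewritten=0),
    Change(ai_authored=False, lines_original=40, lines_rewritten=5),
]
print(f"{ai_survival_rate(changes):.0%}")  # 67%
```

Even a simple heuristic like this surfaces the gap between “AI wrote the code” and “AI-written code shipped as-is,” which is the distinction the headline numbers tend to blur.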
Finally, enterprises continue to lead on AI impact. Engineers at large organizations report saving more time with AI tools than those at startups or smaller companies. However, AI tool adoption within enterprises still lags behind, suggesting there’s untapped potential once adoption catches up.

You can read the full report here.
A final thought to leave you with is that there is no “average” experience with AI impact. Looking at industry trends and averages like those in this report can help you contextualize your performance and do some pattern-matching, but I want to caution against using these averages as a guide to what your performance should look like. The truth is that AI impact is asymmetrical and looks different at each organization. Some are doing very well when it comes to throughput, but quality is suffering; some have done a great job of using AI for migrations, but are struggling to incorporate AI in testing and release processes. Averages abstract away the nuances of each company’s transformation journey. The best way to use this information is to enhance—not replace—your own metrics and AI strategy.
If you’re looking to improve the way your organization is measuring AI impact, the AI Measurement Framework offers practical guidance on what to measure and how to collect those measurements, and is a great place to start.