
What the 2025 DORA Report means for your AI strategy

AI amplifies bad practices, real gains come from focusing AI efforts on systems, and success depends on strong change management.

This post was originally published in Engineering Enablement, DX’s newsletter dedicated to sharing research and perspectives on developer productivity. Subscribe to be notified when we publish new issues.

DORA just released their new 2025 State of AI-Assisted Engineering Report. In it, you’ll find research on how organisations are changing the way their work gets done with AI tools.

There are a few notable findings and changes this year:

  • With AI tool adoption reaching nearly 90%, organisations need to shift focus beyond adoption metrics and start to maximise impact, which is nearly impossible without robust measurements in place.
  • DORA also found that AI is increasing instability as teams increase velocity, something most devs and leaders have long suspected.
  • DORA itself has evolved too, adding a new key metric to its canonical software delivery performance metrics and retiring the elite, high, medium, and low performance clusters that have been a mainstay of its reports for the last decade.

Here’s guidance on how to interpret these findings and use them to evaluate your current AI strategy.

AI makes bad practices even worse

Organisations putting AI on top of existing dysfunction should expect those bad practices to get even worse. If code review is already a bottleneck, the increased volume and frequency of changes will create longer delays. If your deployment pipeline is brittle, it will break more frequently. If your priorities shift constantly, AI will help your teams build the wrong things faster.

The research in this year’s report shows these aren’t just spooky hypotheticals. Organisations lacking foundational capabilities see AI adoption correlate with decreased team performance, increased friction, and greater instability. AI amplifies whatever system it enters, which means the main thing determining the success of your AI strategy isn’t the tool itself. It’s your processes.

→ AI won’t bypass all of the bottlenecks that slowed you down before. Instead, use AI to create new solutions for existing process problems.

If you’ve got a solid foundation, AI will move you forward faster

Just as AI exacerbates bad practices, it amplifies the good ones. Teams with solid engineering practices in place, like disciplined source control, monitoring and observability, and shipping small changes, are benefiting more from AI, a pattern we’ve also seen in DX data.

To stay ahead with AI, organisations need resilient systems that can detect anomalies quickly and let developers repair them just as fast. This is the “AI safety net” that allows teams to experiment rapidly without degrading performance for customers.
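
To make the safety net concrete, here is a minimal sketch of one way such a net might work: a post-deploy check that compares the live error rate against a pre-deploy baseline and rolls the change back if it spikes. The function names, thresholds, and polling window are all illustrative assumptions, not something prescribed by the report.

```python
# Hypothetical "AI safety net": watch a service right after a deploy and
# repair automatically if it misbehaves. fetch_error_rate and rollback are
# stand-ins for whatever your monitoring and deploy tooling actually expose.
import time

ERROR_RATE_SPIKE = 2.0      # roll back if errors double vs. the baseline
CHECK_WINDOW_SECONDS = 300  # watch the first five minutes after a deploy

def post_deploy_check(service, deploy_id, fetch_error_rate, rollback):
    """Return True if the deploy looks healthy; roll back and return False otherwise."""
    baseline = fetch_error_rate(service, window="pre-deploy")
    deadline = time.time() + CHECK_WINDOW_SECONDS
    while time.time() < deadline:
        current = fetch_error_rate(service, window="live")
        if baseline > 0 and current / baseline >= ERROR_RATE_SPIKE:
            rollback(service, deploy_id)  # fast repair: revert the change
            return False
        time.sleep(15)  # fast detection: poll frequently in the risk window
    return True
```

The point isn’t this particular script; it’s that detection and repair are wired together, so a risky AI-assisted change costs minutes rather than an incident.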

Similarly, working in small batches, something DORA has advocated for more than a decade, is also correlated with better product performance. At the individual level, shipping more code at once might seem like a productivity win. But bigger changes are still riskier, so it’s wise to focus on the outcomes of that code and not get hung up on the novelty of shipping more at once.

DORA specifically calls out internal platforms as the foundation for AI acceleration. First, platforms provide a streamlined developer experience with self-service capabilities, clear golden paths, and integrated toolchains. Internal platforms are already linked to higher organisational performance (though also a small but credible increase in instability), and these positive effects are amplified with AI in the picture. Second, platforms act as a distribution channel for AI tools across the SDLC: platform teams can solve real problems across an entire organisation by introducing AI capabilities into centralised internal platforms.

→ Every investment in developer experience will come back with higher returns when using AI.

For the most benefit, think at the org level, not the individual level

For leaders who want to use AI to boost organisational outcomes like better product performance and faster time to market, this report reinforces what industry leaders have been saying for the past two years: focus on the system, not the individual. While I don’t want to downplay the importance of individual proficiency (and I continue to encourage organisations to invest in individual training), the biggest gains come from applying AI at the system level.

Organisations that have taken the “spray and pray” approach to AI are arriving at a dead end. Handing every developer an AI tool license and hoping the rest takes care of itself limits the impact of AI to the individual level, and those returns just get eaten up in other parts of the system.

Instead, DORA recommends looking at the system first. Use the Value Stream Management techniques shared by DORA (starting on page 73). If your company uses DX, we’ve made it easy to see bottlenecks across the value stream based on your developer experience data. Anecdotally, I’ve never come across a company where the incremental time savings from individual task speed-ups outweighed the time lost to an existing bottleneck in the SDLC. Pointing AI at those bottlenecks is a great way to unlock a tremendous amount of impact.
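
As a toy illustration of why the system view wins, here is a sketch that takes per-PR stage timings and surfaces the largest wait in the value stream, which you can then compare against the minutes an assistant shaves off authoring. The stages, field names, and hours are invented for the example.

```python
# Toy value stream analysis: where do changes actually spend their time?
# Stages, field names, and numbers below are illustrative assumptions.
from statistics import median

prs = [  # hours each pull request spent in each stage
    {"coding": 3.0, "await_review": 26.0, "in_review": 2.0, "await_deploy": 9.0},
    {"coding": 5.0, "await_review": 31.0, "in_review": 1.5, "await_deploy": 12.0},
    {"coding": 2.0, "await_review": 18.0, "in_review": 3.0, "await_deploy": 8.0},
]

stage_medians = {stage: median(pr[stage] for pr in prs) for stage in prs[0]}
bottleneck = max(stage_medians, key=stage_medians.get)

# A 25% AI speed-up on a ~3-hour coding stage saves under an hour per change,
# while changes here sit waiting for review for roughly a day.
print(f"Median hours by stage: {stage_medians}")
print(f"Biggest bottleneck: {bottleneck} ({stage_medians[bottleneck]:.1f}h)")
```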

Another technique to consider is problem-based learning. Booking.com gets teams focused on solving real business problems with AI using a technique called Experience-Based Accelerators, which Bruno Passos and I discussed in a recent talk about scaling AI adoption to 3000+ developers.

→ Stop thinking about AI as a way to speed up individual work, and instead use AI to solve system-wide problems in new ways.

You need a robust measurement strategy

If your approach to measuring AI impact was keeping track of the four key DORA metrics, this is a wake-up call: even DORA does not recommend that.

Instead, use a comprehensive measurement strategy that covers many dimensions of AI adoption and impact, like the AI Measurement Framework. The DORA report emphasises that introducing AI doesn’t require completely overhauling your measurement strategy, but you do need to augment it with AI-specific measures while maintaining baseline comparisons to understand what’s changing.

Your measurement strategy should be designed around the right question. In 2025, the critical question isn’t “are people using AI?” (90% already are) but “is AI helping us achieve better outcomes for individuals, teams, products, and the organisation?”

DORA also recommends mixing self-reported and system data. Surveys and interviews capture subjective experiences, like satisfaction and perceived effectiveness, that are difficult to quantify automatically; without them, organisations risk losing sight of how AI is affecting developers’ day-to-day work. System data (commits, deployment frequency) is standardised and scales well. Both have limitations, and the best measurement strategies use both.
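
A minimal sketch of what mixing the two sources can look like in practice, with invented team names, metrics, and thresholds: pair pipeline data with survey scores per team, and treat disagreement between them as a signal worth investigating.

```python
# Illustrative only: combine system data with self-reported data per team.
# Teams, metrics, and thresholds are made-up assumptions for the sketch.
system_data = {  # from deploy pipelines and version control
    "team-a": {"deploys_per_week": 14, "change_failure_rate": 0.09},
    "team-b": {"deploys_per_week": 4,  "change_failure_rate": 0.03},
}
survey_data = {  # from developer experience surveys (1-5 scale)
    "team-a": {"ai_trust": 2.9, "perceived_effectiveness": 3.8},
    "team-b": {"ai_trust": 4.1, "perceived_effectiveness": 4.0},
}

for team, metrics in system_data.items():
    picture = {**metrics, **survey_data[team]}
    # Disagreement between sources is itself a signal: high throughput paired
    # with low trust in AI output is a cue to inspect review and test gates.
    if picture["deploys_per_week"] > 10 and picture["ai_trust"] < 3.5:
        print(f"{team}: fast delivery but low AI trust - check quality gates")
```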

→ Adoption rate and lines of code generated won’t tell you if your strategy is working. Instead, use your existing frameworks as a baseline to see how performance is changing, and augment with AI-specific metrics for a deeper understanding of how your strategy is playing out in real time.

Don’t underestimate change management

A clear and communicated stance on AI is one of the most important levers an organisation can employ to increase positive outcomes from AI. Leaders need to clearly communicate whether AI use is expected as part of developers’ work, which specific tools are approved for use, and how developers can experiment with AI, even if the answers might seem obvious.

Without explicit guidance, developers must guess at boundaries, leading some to unnecessarily limit their use of AI out of caution about data exposure or career consequences, while others unknowingly violate policies that were never clearly articulated in the first place.

For the second year in a row, DORA’s research shows that change management leads to greater benefits in individual effectiveness and organisational performance, while also reducing day-to-day friction. Companies without clear policies face stagnation or even negative impacts on team performance.

This isn’t about mandates. It’s about creating the operational clarity that allows developers to experiment confidently, learn rapidly, and channel AI’s capabilities toward problems that actually matter to the business, without wasting time second-guessing whether it’s allowed or, worse, exposing your organisation to security and data privacy risk.

→ A transformation of this magnitude takes serious organisational muscle. A clear and communicated stance on AI is an important driver of outcomes.

Keep focus on improving

This year, DORA did not include the elite, high, medium, and low performance clusters that have been a hallmark of DORA reports and performance benchmarking. These clusters were useful for telling organisations how they were doing, but they always lacked a connection back to capabilities, which is what teams need to improve.

With the focus on improvement, those clusters are gone, and in their place DORA introduced seven team types. Like the performance clusters, the team type determination is based on data, but with a big difference: knowing your team type gives you a map of which capabilities to improve in order to increase performance, something the performance clusters were never great at.

This underscores the most prominent theme in this year’s report: measurement enables improvement. We measure AI impact to understand how teams use AI and whether it’s helping or hurting our performance. We track software delivery metrics like change failure rate and deployment frequency so we can see if we’re getting better. We look at software delivery across the value stream to identify bottlenecks and propose improvements.
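
Tracking a delivery metric against a pre-AI baseline needs nothing exotic. Here is a sketch for change failure rate (failed deploys over total deploys) by month; the deploy events and dates are invented for the example.

```python
# Minimal baseline tracking: change failure rate by month, so shifts after
# an AI rollout become visible. The event data below is invented.
from collections import defaultdict

deploys = [  # (month, deploy succeeded?) from your deploy pipeline
    ("2025-06", True), ("2025-06", False), ("2025-06", True),
    ("2025-07", True), ("2025-07", True),  ("2025-07", False),
    ("2025-08", True), ("2025-08", True),  ("2025-08", True),
]

by_month = defaultdict(lambda: [0, 0])  # month -> [failures, total]
for month, ok in deploys:
    by_month[month][1] += 1
    if not ok:
        by_month[month][0] += 1

for month in sorted(by_month):
    failures, total = by_month[month]
    print(f"{month}: change failure rate {failures / total:.0%} ({total} deploys)")
```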

The key question in this year’s report is “Is AI helping us achieve better outcomes for individuals, teams, products, and the organisation?” AI is a tool for improvement. Yes, it is changing the way work gets done. But just as much is staying the same. AI just holds a mirror up to organisations, and amplifies both the good and the bad. Solid engineering practices allow teams to maximise impact, but AI can’t cover up existing dysfunction.

Using the data in this year’s report, take the time to reevaluate your own AI strategy. Does your strategy focus on individuals using AI to accelerate their daily tasks? If it does, then your returns are constrained to individual tasks, and it’s misguided to expect a high organisational payout.

But if your organisation continues to invest in a great developer experience, pointing AI at real bottlenecks and keeping the focus on solving system-wide problems, then AI will be a tremendous accelerant for your teams and your customers.

Published
October 8, 2025