Customer spotlight: DroneDeploy’s AI impact analysis using DX data
Kali Watkins
Product Marketing
We recently hosted a customer spotlight with Joseph Mente, Senior Director of DevOps, Security, and IT at DroneDeploy, where he discussed how his organization uses DX to evaluate AI’s impact on developer productivity. During that session, Mente walked through one experiment and analysis his team ran, which grouped developers into cohorts based on when they started using AI. This article summarizes that analysis and may provide inspiration for other DX customers running their own analyses of AI’s impact.
The analysis and results
To understand the impact of AI, Mente initially focused on two metrics: PR cycle time and TrueThroughput. While cycle time measures speed, TrueThroughput is a complexity-weighted measure of output unique to DX. By accounting for the size and difficulty of each change, it provides a more precise reflection of engineering effort than raw PR counts.
At first glance, aggregate metrics for cycle time and throughput showed almost no change after AI adoption. However, Mente noticed that minor filter adjustments produced very different results, swinging from “massive impact” to “net negative.” This confirmed that the reality was too nuanced for simple averages to capture.
Mente looked into PR cycle time first. After analyzing the metric at both the organizational and individual level, he found no correlation with AI usage. He determined that because engineers at DroneDeploy typically open PRs only when they are ready for review, the metric failed to capture speed gains during the development phase. Recognizing this, he disqualified cycle time as a relevant metric for this analysis.
Mente then shifted his focus to TrueThroughput. Working with his DX Customer Success Manager, he created custom SQL queries to extract raw data on tool adoption dates and TrueThroughput, then analyzed TrueThroughput by grouping developers into cohorts based on when they first adopted the tool. This shift to reporting by time since adoption is what finally revealed the impact AI was having.
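For readers who want to replicate this approach, the sketch below shows one way to structure the same cohort analysis in Python with pandas. It is a minimal illustration only: the table shapes, column names, and values are hypothetical stand-ins for the raw adoption-date and TrueThroughput data that the custom SQL queries pulled from DX, not DroneDeploy’s actual data or queries.

```python
import pandas as pd

# Hypothetical extracts of the two datasets described above:
# - adoption: one row per developer with the date they first used the AI tool
# - throughput: one row per developer per month with their TrueThroughput value
adoption = pd.DataFrame({
    "developer_id": ["a", "b", "c"],
    "first_ai_use": pd.to_datetime(["2024-01-15", "2024-03-02", "2024-05-20"]),
})
throughput = pd.DataFrame({
    "developer_id": ["a", "a", "a", "b", "b", "c"],
    "month": pd.to_datetime([
        "2024-01-01", "2024-02-01", "2024-04-01",
        "2024-03-01", "2024-06-01", "2024-05-01",
    ]),
    "true_throughput": [8.0, 6.5, 9.3, 7.2, 8.8, 9.1],
})

# Join each monthly observation to that developer's adoption date, then index
# every row by months since adoption instead of by calendar month.
df = throughput.merge(adoption, on="developer_id")
df["months_since_adoption"] = (
    (df["month"].dt.year - df["first_ai_use"].dt.year) * 12
    + (df["month"].dt.month - df["first_ai_use"].dt.month)
)

# Average TrueThroughput for each cohort month; plotting this series is what
# surfaces the J-curve (an early dip followed by sustained gains).
cohort_trend = (
    df[df["months_since_adoption"] >= 0]
    .groupby("months_since_adoption")["true_throughput"]
    .mean()
)
print(cohort_trend)
```

Reporting by months since adoption is the key step: it lines up every developer’s “month zero,” so the early learning dip and the later gains are no longer averaged against each other.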
The cohort analysis revealed a clear J-curve. In the first month of adoption, TrueThroughput actually dropped, reflecting the “friction of learning” as engineers experimented with prompting and adjusted their workflows. By month three, this friction dissipated, and the trend turned upward. After eight months of usage, DroneDeploy saw a 5% average monthly increase in TrueThroughput. The J-curve explained the flat organizational average: the early “learning dip” was effectively masking the significant gains of long-term users.

Additional takeaways and lessons
Code quality did not decline. A frequent concern is that AI speed comes at the expense of code quality. By pairing this analysis with automated code scanning, Mente determined that quality did not decline.
AI tools had a positive ROI. Mente found that even marginal productivity gains make these tools incredibly cost-effective. In fact, even at 10x the current licensing cost, the ROI per engineer would still be undeniably positive; the rough calculation after this list illustrates why.
Review times emerged as a new bottleneck. As AI increases the volume of code, the bottleneck often shifts from writing to reviewing. DroneDeploy found that as throughput rose, PR review times became a new constraint on velocity.
Enablement was more important than tool choice. Mente emphasized that AI impact is 30% tool choice and 70% enablement. Instead of obsessing over which tool is “best,” he recommended focusing on training and workflow integration.
Standardizing on one tool proved to be a good strategy. Mente recommended avoiding the temptation to chase shiny new tools. DroneDeploy standardizes on one core tool, Cursor, which provides 95% of the capabilities of specialized alternatives while enabling unified training and peer support.
The greatest gains come from optimizing for task fit. Mente observed that the greatest gains occur when AI is applied to specific, high-friction tasks such as boilerplate generation and unit testing, rather than to complex architectural work. To accelerate the J-curve, encourage teams to use AI where it has the highest impact rather than pushing for universal usage across every task.
More is not always better. Mente found that while AI usage and output are correlated on average, the variance is large: many of the most enthusiastic adopters, who consume the most tokens and therefore the most dollars, are not necessarily the most productive. He advised that a blanket push toward more AI use, without a strong emphasis on effective use, will likely not result in a positive return on investment. Communicating this detail is critical to avoiding mismatched expectations.
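To make the ROI point above concrete, here is a rough back-of-the-envelope sketch. All of the numbers (fully loaded engineer cost, license cost, and the assumed productivity gain) are hypothetical placeholders for illustration, not figures shared by DroneDeploy.

```python
# Back-of-the-envelope ROI check with hypothetical numbers; DroneDeploy's
# actual costs and gains were not disclosed in the session.
fully_loaded_cost = 200_000   # assumed annual cost of one engineer (USD)
license_cost = 250            # assumed annual AI license cost per engineer (USD)
productivity_gain = 0.05      # assume a modest 5% effective productivity gain

# Value of the gain per engineer, expressed as the share of their cost it offsets.
annual_value = fully_loaded_cost * productivity_gain

for multiplier in (1, 10):
    cost = license_cost * multiplier
    roi = (annual_value - cost) / cost
    print(f"License at {multiplier}x cost (${cost:,}): ROI ≈ {roi:.1f}x")
```

Even with these placeholder figures, the license cost is a small fraction of the value of a single-digit productivity gain, which is the dynamic behind the 10x observation above.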

If you’re interested in setting up a similar analysis, reach out to your customer success representative.