DroneDeploy's playbook for evaluating and maximizing AI impact
For DroneDeploy, the leading reality capture platform, AI has always been integral to both the product and the way teams work. When ChatGPT launched, the company quickly recognized the potential of large language models to accelerate software development, but also understood that realizing that potential would require data to guide and measure progress.
“Part of my role is making sure we can measure the performance of the engineering organization, and that we have a way to continuously improve it,” says Joseph Mente, Senior Director of DevOps, Security, and IT. “We’ve used DX for about half a decade now as our engineering dashboard at the management level. So as soon as DX introduced AI impact metrics, we jumped right in.”
A three-phased approach to rolling out AI tools
Rather than simply rolling out AI tools and hoping for the best, DroneDeploy has taken a structured, data-driven approach with three phases: first evaluating AI tools, then driving adoption, and ultimately helping developers use them more effectively. “The first half of this year was all about phase 1: make sure we understand what the right AI tools are,” Mente explains. “Last quarter was about making sure we’re driving adoption on those tools, and this quarter is all about driving effectiveness at scale with those tools.”
This continuous cycle ensures that DroneDeploy doesn’t just chase the latest AI trend, but systematically evaluates, deploys, and optimizes tools to deliver real value.
Phase 1: Identify the most effective tools
The first phase is about running vendor bake-offs through small experiments. “We like to run about a dozen experiments at a time with various tools,” says Mente. For each experiment, Mente works with engineers to understand the specific task, evaluate the approach, and determine whether the tool delivers meaningful value. “It’s interesting, but often we see that AI tools don’t actually save developers time—they’re neutral at best. But occasionally you find a tool and a use case that is actually really good, and that does make a difference.”
Running deliberate experiments, rather than letting developers adopt tools ad hoc, is key. “There’s a tendency for users to want complete freedom. Developers will say, ‘Oh, I heard about this thing, I’m going to go try it. Can I have access?’ But how do we measure that what they’re doing is actually impactful, and we’re not just burning $200 a month keeping GPUs warm?”
Phase 2: Scale adoption of proven tools
Once DroneDeploy identifies a tool that’s driving impact, they focus on promoting adoption across the organization and measuring whether it’s making a difference. “Once I’ve done an experiment and I’ve said, ‘I’m going to go roll this out to a whole team or a whole department,’ did it actually make a difference? That’s where DX comes in,” explains Mente.
He also notes that it’s difficult to get the adoption data he needs from the AI vendors themselves. “I can see how many accounts we have, but that doesn’t show me actual usage,” he says. “DX has been great for filling that gap. It also gives our CEO data he can use to show the board that these tools are being actively adopted.”
Phase 3: Increase effectiveness through targeted enablement
After driving adoption, the next step is to enable developers to get more out of AI. That starts with understanding what differentiates those seeing the biggest productivity gains from everyone else. “Instead of just taking the overall impact of AI on productivity, we’ve found it more insightful to analyze developer cohorts,” says Mente. “The first time we did this, the results were eye-opening. Some developers were seeing huge gains, while others were actually less effective.”
Digging deeper in DX, DroneDeploy found that the difference had to do with task-tool fit. “Our data showed that developers who used these tools moderately or lightly were actually more effective than many who used them heavily. The difference was that they applied the tools selectively to the tasks they’re best suited for,” Mente explains. “In other words, judicious use of these tools is far more effective than trying to apply them to every possible task.” He adds, “Without proper training and enablement, two equally skilled engineers can have completely different outcomes. And that’s something DX helped us uncover.”
That insight led DroneDeploy to focus on helping developers understand when and how to use AI tools most effectively. They’ve done this through two key initiatives. The first is AI 101, a required training for every developer that covers how language models work in practice, including topics like context engineering and when to apply different approaches. The second is a series of biweekly AI Lightning Talks, where engineers take ten minutes to share what they’ve tried, what worked, and, just as importantly, what didn’t.
Directing AI investment toward the highest-impact opportunities
After rolling out general-use AI code assistants, DroneDeploy began using DX to pinpoint where AI could make the biggest impact by addressing real pain points. “DX shows us which factors are affecting productivity the most,” says Mente. “For example, our testing suite has been a challenge, and DX helped us identify it as an area where AI could help us improve.” This creates a continuous cycle: DX surfaces areas of friction and waste, the team identifies where AI might help, and then they return to phase one to experiment with new tools for that specific problem.
This structured approach has helped the engineering organization build credibility and trust with the company’s CFO. “AI tools are evolving quickly, and it’s not easy to predict exactly what we’ll be spending next year,” says Mente. “By being rigorous in how we evaluate, roll out, and optimize our use of AI tools, we’ve been able to demonstrate responsible stewardship. Using DX, we can show the finance and executive teams that we’re rolling out AI tooling responsibly. We’re utilizing funds wisely. We’re not just jumping on the hype train.”
Building lasting impact with DX
For DroneDeploy, DX has become more than just a measurement tool. Mente says it’s the equivalent of having a research team embedded within his organization, helping guide decisions. “We’re trying to run a business. We don’t have time to research the best way to approach every problem,” he explains. “A major reason why we buy DX specifically is because of that research component. It’s like having an expert on the team saying, ‘Here’s what the data actually means and what you should do next.’ That saves us enormous time and helps us stay focused on what really matters.”
Beyond AI adoption, DX has become fundamental to how the company approaches engineering effectiveness as a whole. “AI tools are a means to an end, which is better engineering effectiveness, better engineering productivity, and also better engineering sentiment,” he says. “All that ties back to our DX data and making sure this is a great place to work, because happier engineers are more effective.”