How to optimize your AI tool rollout with data

Kali Watkins
Product Marketing
Companies are taking very different paths in rolling out AI tools for engineering. Some take a hands-off approach, letting developers choose what works best for them. Others go all-in on a single provider and mandate adoption across the board. Most land somewhere in the middle with a hybrid strategy, offering approved tools, guardrails, and enablement programs while still giving teams room to experiment.
No matter the approach, one theme is consistent: organizations often lack the data to understand whether these tools are actually being used, and if they’re delivering the intended impact. Leaders may assume adoption is high simply because licenses have been purchased, only to discover that most developers aren’t using the tool day-to-day. Others hear stories of productivity wins but can’t prove whether the benefits are widespread or sustainable. Without clear visibility into both adoption and impact, organizations risk overspending on licenses, missing chances to double down where AI is working, and overlooking the real blockers to driving meaningful productivity gains.
This guide should help. Here, we’ll outline the following steps to get more from your AI investment and drive meaningful adoption, using data from DX:
- Capture a baseline for AI adoption
- Capture a baseline for AI impact
- Identify common use cases
- Identify why adoption is lagging
- Run an enablement initiative
- Track changes to your baselines
- Find additional use cases for AI
1. Capture a baseline for AI adoption
The first step is understanding who is using AI and how often. After connecting DX to the AI tools teams are using (such as GitHub Copilot, Claude Code, Windsurf, and even bespoke in-house tools), start with two foundational reports:
- Overall AI utilization: shows how many developers are using AI, giving you a high-level view of adoption across all tools.
- Tool-specific AI adoption: breaks down usage by tool, helping you compare adoption patterns across vendors and solutions.
Together, these reports surface important utilization metrics such as:
- Daily active users (DAU): developers using AI at least once per day
- Weekly active users (WAU): developers using AI at least once per week
- Monthly active users (MAU): developers using AI at least once per month
- Power users: the most active AI users across the organization (measured by number of days active)
- Unused licenses: licenses that have not been used within a selected time period
These metrics reveal where AI is gaining traction and where it’s stalling. For example, if a team shows substantially more weekly or monthly active users than daily active users, it signals an opportunity to embed AI more deeply into their daily workflows. Identifying power users is also valuable, as they can become internal champions for enablement initiatives. And tracking unused licenses helps ensure you’re allocating paid seats to developers who are actually getting value from the tools.
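As a rough illustration, here’s a minimal Python sketch of how these utilization metrics could be computed from a per-developer usage export. The event shape, names, and dates are hypothetical and purely for illustration; they are not the DX API.

```python
from datetime import date, timedelta

# Hypothetical export of AI usage events: (developer, tool, day of use).
# Illustrative data shape only, not the DX API.
events = [
    ("alice", "copilot", date(2025, 6, 2)),
    ("alice", "copilot", date(2025, 6, 3)),
    ("bob", "claude_code", date(2025, 5, 20)),
]
licensed = {"alice", "bob", "carol"}  # developers holding a paid seat

def active_users(events, since):
    """Developers with at least one AI event on or after `since`."""
    return {dev for dev, _tool, day in events if day >= since}

today = date(2025, 6, 3)
dau = active_users(events, today)
wau = active_users(events, today - timedelta(days=6))
mau = active_users(events, today - timedelta(days=29))
unused = licensed - mau  # seats with no usage in the selected period

print(f"DAU={len(dau)} WAU={len(wau)} MAU={len(mau)} unused licenses={len(unused)}")
```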
By tracking these adoption metrics over time, you establish a clear baseline to measure against as you roll out training and enablement programs. Capturing usage data over time also makes it possible to see how increased AI adoption affects other developer productivity metrics in DX.
2. Capture a baseline for AI impact
Capturing adoption data is the first step of a data-driven AI rollout. The next step is establishing a baseline for AI impact using the recommended metrics in the Impact dimension of the AI Measurement Framework:
- AI-driven time savings: a measure of the time saved across the SDLC by developers using AI
- AI-authored code: a measure of how much of the code developers produce is AI-authored
- Developer satisfaction: a measure of developer satisfaction with AI tooling through the use of CSAT questions as part of the quarterly snapshot
- Correlations between AI usage and measures of developer productivity: the DX Core 4 metrics
By using this set of metrics as a baseline, you gain a balanced understanding of AI’s impact on developer productivity. You can then measure against this baseline as you run enablement efforts to see whether adoption and impact are improving.
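For illustration, here’s a minimal sketch of what capturing that baseline might look like, assuming you’ve exported per-developer AI usage alongside a throughput measure. The data and field names are hypothetical; in practice these numbers come from DX.

```python
# Requires Python 3.10+ for statistics.correlation (Pearson's r).
from statistics import correlation, mean

# Hypothetical per-developer baseline: weekly AI-active days, a Core 4-style
# throughput measure (PRs merged per week), and self-reported hours saved.
ai_days_per_week = [0, 1, 2, 3, 4, 5, 5]
prs_per_week = [2, 2, 3, 3, 4, 5, 4]
time_saved_hours = [0.0, 0.5, 1.5, 2.0, 3.5, 4.0, 4.5]

baseline = {
    "avg_ai_days_per_week": mean(ai_days_per_week),
    "avg_time_saved_hours": mean(time_saved_hours),
    "ai_usage_vs_throughput_r": correlation(ai_days_per_week, prs_per_week),
}
print(baseline)  # recompute after enablement and compare against these numbers
```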
3. Identify common use cases
After capturing baselines for AI adoption and impact, the next step is to uncover how developers are putting AI to work. Teams apply AI across a range of tasks, from boilerplate generation and code reviews to creating new features, though certain applications tend to deliver greater efficiency gains.
As outlined in DX’s guide to AI-assisted engineering, some of the most time-saving examples we see across customers include stack trace analysis, code refactoring, and mid-loop generation. However, each organization’s workflows are unique. Using DX’s PlatformX, you can identify the most common and most valuable applications by capturing event-based, in-the-moment feedback. For instance, when an engineer merges a pull request, they can be prompted to share whether AI was used, how it was used, and how much time it saved.
These real-time signals reveal which scenarios are saving the most time within your organization. You can use this information to encourage other developers to adopt AI in similar ways.
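As a sketch, assuming the PR-merge prompts are exported as (use case, minutes saved) pairs, ranking use cases by reported time savings could look like the following. The response shape and values are hypothetical.

```python
from collections import defaultdict

# Hypothetical in-the-moment survey responses captured at PR merge:
# (AI use case reported by the developer, minutes saved). Illustrative only.
responses = [
    ("stack trace analysis", 30),
    ("code refactoring", 45),
    ("boilerplate generation", 15),
    ("code refactoring", 60),
    ("stack trace analysis", 20),
]

totals, counts = defaultdict(int), defaultdict(int)
for use_case, minutes in responses:
    totals[use_case] += minutes
    counts[use_case] += 1

# Rank use cases by total reported time saved to see what to promote more broadly.
for use_case, minutes in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{use_case}: {minutes} min saved across {counts[use_case]} PRs")
```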
4. Identify why adoption is lagging
To increase adoption, it’s critical to go beyond overall usage numbers and understand specifically where usage is lagging and why.
With DX, you can use the AI utilization report to break down adoption by team or function and quickly spot groups with little or no usage. Once you’ve identified where adoption is stalled, follow up with a targeted survey to those teams and ask developers directly what’s getting in the way.
Another way to understand why developers aren’t using AI is through CSAT. With DX, you can capture CSAT scores for tools during quarterly snapshots. Alongside the sentiment score, developers’ written feedback provides direct insight into what’s blocking adoption and why certain tools aren’t being used.
Across customers, common blockers surface: some developers aren’t sure what they’re allowed to use AI for, while others don’t know how to get started. These insights highlight exactly where enablement can have the greatest impact, whether that means clearer guidelines, hands-on training, or peer-led demonstrations.
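A minimal sketch of that triage, assuming a per-team export of weekly active AI users and licensed seats (hypothetical numbers and threshold, not the DX utilization report itself):

```python
# Hypothetical per-team adoption: team -> (weekly active AI users, licensed developers).
teams = {
    "payments": (4, 20),
    "search": (15, 18),
    "platform": (9, 12),
}

THRESHOLD = 0.5  # flag teams where fewer than half of licensed developers are weekly-active

# Sort teams from lowest to highest adoption rate and flag where to follow up.
for team, (wau, licensed) in sorted(teams.items(), key=lambda kv: kv[1][0] / kv[1][1]):
    rate = wau / licensed
    flag = "  <- follow up with a targeted survey" if rate < THRESHOLD else ""
    print(f"{team}: {rate:.0%} weekly-active{flag}")
```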
5. Run an enablement initiative
Data shows that structured training and support are what drive AI adoption. For one customer, a structured rollout increased Copilot adoption and satisfaction by 20%. And a Microsoft study found that developers at companies that actively encourage AI use are 7x more likely to be daily users. Here are some examples of enablement strategies we’ve seen succeed:
Peer-led enablement (GitHub)
GitHub has found that successful rollouts require time, training, and credible peer advocacy. Rollouts often begin with a small group of early adopters who not only test the tool but also act as champions and informal trainers for their peers. Because the advocacy comes from colleagues, adoption spreads more credibly and organically than it would if it came from a mandate. These champions share practical techniques, from how to prompt effectively to how to integrate Copilot into daily workflows, making the tool easier to adopt at scale.
Problem-first accelerators (Booking.com)
Booking.com grounds AI enablement in the real problems engineering teams face. They use two formats: mini hackathons and experience-based accelerators (EBAs). In hackathons, teams bring in business problems from across the company, work side by side for several days without meeting distractions, and conclude with a showcase of solutions.
EBAs, a concept from AWS, are three-day immersive workshops where entire teams step away from daily work to focus deeply on a single problem. Some sessions split cohorts across different AI tools to compare outcomes. The team at Booking attributed the success of this initiative to two key features: having AI vendors on-site as resources and eliminating routine meetings. Surveys show around 70% of the code written during EBAs was AI-assisted, with adoption climbing significantly afterward.
Pioneers and peer learning (Twilio)
Twilio’s enablement strategy recognizes that adoption won’t happen just by distributing licenses. Platform teams educate engineers on safe usage, prompting, and how to weave AI into workflows.
To scale learning, Twilio pairs greenfield projects with pioneer engineers who run tight feedback loops—sharing examples like drafting product requirement docs for agents or refining prompts. Insights are then spread across the org. At the same time, a DevEx AI guild quickly drew hundreds of participants, creating a grassroots channel for experimentation and best-practice sharing. Together, these top-down and bottom-up efforts ensure that developers are trained, supported, and equipped with knowledge before leadership expects them to adopt AI at scale.
Company-wide AI hackathon (Toast)
Toast realized that expecting developers to become “AI experts” alongside normal workloads wasn’t realistic. Their solution was a week-long, company-wide hackathon dedicated solely to experimentation and learning.
Teams self-organized around projects, from practical improvements like cleaning up Confluence docs to creative ideas like generating podcasts from code repositories. The format emphasized exploration over outcomes, and failure was accepted as part of the learning process. The result was accelerated skill-building, peer-to-peer sharing, and momentum across the company, turning what could have been a top-down mandate into an empowering, confidence-building experience.
Other tactics we’ve seen include:
- Identifying power users to host lunch-and-learns
- Creating interactive formats like “Prompt-a-thons”
- Setting up dedicated support and community channels
6. Track changes to your baseline
After running an enablement initiative, return to the same set of metrics you captured in your baseline. Look at how adoption rates and impact metrics have shifted.
DX simplifies this by allowing you to create custom attributes for comparisons. Two attributes we recommend using during an AI rollout are usage levels and enablement cohorts:
- Usage levels: DX automatically categorizes developers by usage: none, light, moderate, or heavy. Use this to track how their metrics change over time with the before-and-after report, or compare across groups (e.g. light vs. heavy users) using group comparisons.
- Enablement cohorts: Create an attribute for developers who are part of an enablement cohort or who have attended an AI hackathon. Then, view before-and-after comparisons of developers with that attribute. This is a great way to see exactly what programs moved the needle.
Both of these attributes provide data to help you understand whether enablement efforts are making an impact and the overall impact AI is having as adoption increases.
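As a rough sketch, assuming you’ve exported per-developer records tagged with an enablement-cohort attribute and a metric captured before and after the initiative (hypothetical field names and values), the comparison could look like this:

```python
from statistics import mean

# Hypothetical per-developer records with a custom "enablement cohort" attribute
# and a metric (hours saved per week) captured before and after the initiative.
developers = [
    {"name": "alice", "cohort": True,  "saved_before": 1.0, "saved_after": 3.5},
    {"name": "bob",   "cohort": True,  "saved_before": 0.5, "saved_after": 2.0},
    {"name": "carol", "cohort": False, "saved_before": 1.0, "saved_after": 1.5},
]

def avg_delta(group):
    """Average before-to-after change for a group of developers."""
    return mean(d["saved_after"] - d["saved_before"] for d in group)

cohort = [d for d in developers if d["cohort"]]
others = [d for d in developers if not d["cohort"]]
print(f"cohort: +{avg_delta(cohort):.1f} hrs/week, others: +{avg_delta(others):.1f} hrs/week")
```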
7. Find additional use cases for AI
Coding is just one part of the software development lifecycle. For organizations exploring how to apply AI across the SDLC, DX makes it easy to use qualitative data to identify where the biggest opportunities are. With Snapshots, you can see the top friction points developers face and identify where AI could have the biggest impact.
For example, Faire saw an opportunity to accelerate their code review process, so they built an in-house AI agent which now completes 3,000 reviews per week. Additionally, Morgan Stanley is using AI to support large-scale migration initiatives to improve codebase maintainability.
Share insights and celebrate wins
Finally, don’t let adoption gains or productivity improvements go unnoticed. Share progress and insights, whether it’s an increase in daily active users, a change in throughput, innovative work, an improved developer experience, or a new use case that is saving developers a lot of time. Sharing these insights reinforces the value of AI, builds trust, and helps leadership get a clearer picture of what their organization is gaining from their investment in AI.
As adoption matures, shift the conversation from whether AI is working to the ROI it delivers. At this stage, measuring business outcomes like cost optimization, delivery speed, and product quality becomes both possible and necessary. By combining adoption metrics, enablement strategies, and impact measurement, organizations can move beyond experimentation and unlock the full potential of AI in software development.
To start tracking these metrics, request a demo or speak to your customer success representative.