AI acceptance rate: Easy to measure, easy to misuse
The question is no longer does AI work; it’s how well, and for whom, and where is the most value being created?
This post was originally published on my LinkedIn.
When generative AI coding tools like GitHub Copilot first launched, we needed a simple way to answer a basic question: do these tools actually work? In that context, acceptance rate (how often developers accept an AI-generated code suggestion) offered an appealing early signal. It was easy to track and seemed to show whether suggestions were useful. If developers don’t accept suggestions, it’s a sign the tool’s accuracy is off.
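For concreteness, acceptance rate is just the ratio of accepted suggestions to shown suggestions. A minimal sketch of the calculation (the `Suggestion` schema here is hypothetical, not any tool's actual telemetry format):

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    """One AI code suggestion shown to a developer (illustrative schema)."""
    accepted: bool

def acceptance_rate(suggestions: list[Suggestion]) -> float:
    """Share of shown suggestions the developer accepted (0.0 if none shown)."""
    if not suggestions:
        return 0.0
    return sum(s.accepted for s in suggestions) / len(suggestions)

# Example: 3 of 4 shown suggestions accepted
events = [Suggestion(True), Suggestion(True), Suggestion(False), Suggestion(True)]
print(acceptance_rate(events))  # 0.75
```

The simplicity is exactly the appeal, and exactly the problem: nothing in that ratio says whether the accepted code shipped, was rewritten, or caused a production incident.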
But that era is over. We now know that AI coding assistants help developers solve problems, and that developers like to use them. The question is no longer do they work; it’s how well, and for whom, and where is the most value being created? This is where acceptance rate falls apart. I see it as the new “lines of code” measurement: easy to measure, easy to misuse, and largely irrelevant to business value or team productivity.
Unfortunately, some teams still over-index on acceptance rate simply because it’s accessible. It’s built into dashboards and can be compared across orgs. But that convenience is dangerous. As a performance signal, it tells you nothing about long-term impact, developer satisfaction, or actual business outcomes. While it does have a role, such as during tool evaluation, its value is bounded.
Instead, keep a close eye on your existing software engineering performance metrics: speed, quality, innovation rate, and developer experience. Then layer AI-specific metrics across the dimensions of utilization, impact, and cost alongside them. This gives you the fullest picture of what’s happening, and helps avoid some of the tunnel vision that plagues a lot of the discussion about AI’s impact in the news.
AI is an amplifier for existing processes. It can unlock a tremendous amount of value if your systems are ready, but it will also cause a lot of problems (poor quality, maintenance issues, security risks) if your systems are not already sufficiently resilient. If you do not have good visibility into system and team performance already, the risk is greater.
Acceptance rate may help validate early experiments with AI coding tools, but it’s no longer the metric that matters. To truly understand and unlock the value of AI in software development, teams need to focus on meaningful signals that reflect real usage, real impact, and real investment.