Why cyclomatic complexity and similar code metrics mislead engineering teams (and what works instead)
Cyclomatic complexity and similar metrics don't capture what slows down developers. Learn how to measure software complexity using frameworks that drive real business outcomes.

Taylor Bruneaux
Analyst
Rethinking Code Complexity: Beyond Cyclomatic Metrics
Developers don’t experience complexity as a number. They experience it as friction: code that’s hard to understand, tools that don’t integrate well, and workflows that constantly interrupt their focus.
Most organizations measure complexity with cyclomatic complexity, which calculates the number of paths through a function. These scores are easy to produce but capture only structure—not the cognitive effort developers feel when reading and maintaining code. The result is often backwards optimization: teams improve their scores while making choices that increase cognitive load and workflow friction. Over time, focus erodes, delivery slows, and leaders wonder why “better” code hasn’t improved outcomes.
This guide reframes complexity through the lens of developer experience. Drawing on DX research and case studies, we’ll examine why structural metrics fall short, how “metric-friendly” decisions backfire, and what modern frameworks reveal about the human cost of complexity.
By the end, you’ll learn:
- Why cognitive complexity matters more than structural complexity
- How to measure the friction that actually undermines delivery speed and satisfaction
- What leading organizations are doing to turn complexity management into a competitive advantage
What is cyclomatic complexity?
Cyclomatic complexity is one of the most widely used metrics for measuring structural complexity in code. It calculates the number of possible execution paths through a function, based on control structures like ‘if’ statements or loops. In theory, a lower score indicates simpler code, while a higher score indicates more complex code.
But cyclomatic complexity is not the same as code complexity. In practice, code complexity is about cognitive load—how complex code is for humans to read, understand, and modify. A function with a low cyclomatic score can still be mentally taxing if it nests multiple operations in a single line. Conversely, a function with a higher cyclomatic score may be far easier to follow if it makes each step explicit.
How to measure cyclomatic complexity
Cyclomatic complexity is typically measured directly from source code using static analysis tools. These tools count decision points, such as ‘if’, ‘while’, ‘for’, ‘case’, or ‘catch’ statements, to calculate the number of unique paths through a function or module. Common ways to measure it include:
- Static analysis tools: Most modern IDEs and CI/CD pipelines integrate complexity checkers that automatically report cyclomatic scores.
- Linters: Language-specific linters, such as ESLint for JavaScript or Pylint for Python, can be configured to highlight functions that exceed a specified complexity threshold.
- Custom scripts: Teams can write scripts that apply the cyclomatic formula (edges – nodes + 2) across codebases; a minimal example follows below.
A typical practice is to set thresholds—for example, flagging functions with scores above 10 as “too complex.” This makes cyclomatic complexity easy to benchmark across codebases. However, DX research indicates that relying solely on these numbers risks overlooking the human aspect of complexity, which is more effectively captured through cognitive complexity and developer experience frameworks.
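As a rough illustration of the custom-script approach, the sketch below approximates a function’s cyclomatic complexity by counting decision points with Python’s built-in ast module and flags anything above the common threshold of 10. The exact set of node types counted varies between analyzers, so treat this as an approximation rather than a replacement for a dedicated tool.

```python
import ast
import sys

# Node types that introduce an additional execution path. Real analyzers
# differ slightly in what they count, so this set is an approximation.
DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                  ast.BoolOp, ast.IfExp, ast.comprehension)

def cyclomatic_complexity(func_node: ast.AST) -> int:
    """Approximate cyclomatic complexity: 1 plus the number of decision points."""
    return 1 + sum(isinstance(n, DECISION_NODES) for n in ast.walk(func_node))

def flag_complex_functions(source: str, threshold: int = 10):
    """Yield (name, score) for functions whose score exceeds the threshold."""
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            score = cyclomatic_complexity(node)
            if score > threshold:
                yield node.name, score

if __name__ == "__main__":
    # Usage: python complexity_check.py path/to/module.py
    for name, score in flag_complex_functions(open(sys.argv[1]).read()):
        print(f"{name}: cyclomatic complexity {score} exceeds threshold")
```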
Cyclomatic complexity vs. code complexity
It’s important to distinguish between cyclomatic complexity and code complexity:
- Cyclomatic complexity measures structure. It counts code branches, loops, and decision points. It’s objective and easy to calculate, but it only reflects how many possible paths exist in execution—not how easy the code is for humans to reason about.
- Code complexity, in practice, measures the cognitive effort required. It reflects how complex code is to understand, debug, and extend. Nested operations, poor naming, or fragmented logic can all raise cognitive load, even if structural scores remain low.
Why cyclomatic complexity is a flawed metric
DX research shows that while structural metrics, such as cyclomatic complexity, provide a narrow snapshot, the real barrier to productivity comes from cognitive complexity—the friction developers experience in their day-to-day work. That’s why frameworks like the DX Core 4 and Developer Experience Index prioritize measuring human factors over structural proxies.
Consider these two functions, which implement the same business logic:
```python
# Low cyclomatic complexity (score: 1) but high cognitive load
def process_user_data(user_id):
    return transform_legacy_format(
        validate_permissions(
            fetch_from_distributed_cache(user_id)))


# Higher cyclomatic complexity (score: 3) but clearer intent
def process_user_data(user_id):
    if not user_id:
        return None
    data = fetch_user(user_id)
    if not has_permission(data):
        return error_response()
    return transform_data(data)
```
Traditional metrics would flag the second function as “more complex.” Yet any developer would choose to maintain the second version, because it makes the logic clearer and easier to extend.
This example highlights the flaw in relying on cyclomatic complexity as a proxy for code quality. Research from DX shows that when teams optimize for structural metrics, they often make their codebases harder to work with. The compound effect is costly: each “metric-friendly” decision that adds cognitive burden compounds across every developer, every day. Measuring cognitive complexity—the human effort required to understand and modify code—provides far more actionable insights than traditional structural metrics.
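There is no single agreed-upon formula for cognitive complexity (SonarSource publishes one well-known definition), but even a crude heuristic makes the contrast visible. The sketch below is a hypothetical proxy, not an established metric: it measures how many nested calls a reader must mentally unwind within a single statement, and it scores the nested single-statement version above as the heavier one.

```python
import ast

def max_call_nesting(func_source: str) -> int:
    """Rough proxy for per-statement cognitive load: the deepest chain of
    nested call expressions anywhere in the function."""
    tree = ast.parse(func_source)

    def depth(node: ast.AST) -> int:
        child_depth = max(
            (depth(child) for child in ast.iter_child_nodes(node)), default=0)
        return child_depth + (1 if isinstance(node, ast.Call) else 0)

    return depth(tree)

nested = """
def process_user_data(user_id):
    return transform_legacy_format(
        validate_permissions(
            fetch_from_distributed_cache(user_id)))
"""

explicit = """
def process_user_data(user_id):
    if not user_id:
        return None
    data = fetch_user(user_id)
    if not has_permission(data):
        return error_response()
    return transform_data(data)
"""

print(max_call_nesting(nested))    # 3 -- three calls the reader must unwind at once
print(max_call_nesting(explicit))  # 1 -- each statement does one thing
```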
The better way to understand complexity
If you can measure complexity, you can manage it. The problem is that traditional structural analysis misses the real sources of complexity that impact delivery speed and developer satisfaction. Instead of focusing on complexity scores, teams need to understand what complexity means in practice—the human effort required to work effectively in complex codebases.
Here are a few ways to better understand whether your code is too complex.
Feedback loop delays
Long build times, slow tests, and delayed reviews compound into significant productivity losses. Organizations that measure and improve their lead time for changes see real results: Vercel reduced cycle times by 43% using developer experience frameworks that measure workflow friction.
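One concrete way to start is to track lead time for changes yourself. The sketch below assumes you can export commit and deploy timestamps from your own tooling (the Change record here is hypothetical, not any particular platform’s API) and reports the median commit-to-production time.

```python
from dataclasses import dataclass
from datetime import datetime
from statistics import median

# Hypothetical record: when a change was committed and when it reached production.
@dataclass
class Change:
    commit_time: datetime
    deploy_time: datetime

def lead_time_hours(changes: list[Change]) -> float:
    """Median hours from commit to production deploy (lead time for changes)."""
    deltas = [(c.deploy_time - c.commit_time).total_seconds() / 3600 for c in changes]
    return median(deltas)

changes = [
    Change(datetime(2024, 5, 1, 9, 0), datetime(2024, 5, 1, 15, 30)),
    Change(datetime(2024, 5, 2, 11, 0), datetime(2024, 5, 3, 10, 0)),
    Change(datetime(2024, 5, 3, 14, 0), datetime(2024, 5, 3, 18, 45)),
]
print(f"Median lead time: {lead_time_hours(changes):.1f} hours")  # Median lead time: 6.5 hours
```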
Tool fragmentation
Developers lose hours navigating inconsistent environments, switching between tools, and managing configuration drift. This “coupling” between disparate systems creates cognitive overhead that is invisible to structural metrics, such as cyclomatic complexity.
Context switching overhead
Unplanned work, meetings, and unclear priorities prevent deep focus. Pfizer achieved breakthrough improvements by protecting developer concentration time, measuring flow state disruption rather than code paths.
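Flow disruption can be measured with nothing more than calendar data. The sketch below is a simplified illustration, assuming you can export a day’s meetings as (start, end) pairs; it reports the longest uninterrupted block a developer had for deep work.

```python
from datetime import datetime, timedelta

def longest_focus_block(meetings, day_start, day_end):
    """Longest uninterrupted gap between meetings within the working day."""
    longest = timedelta(0)
    cursor = day_start
    for start, end in sorted(meetings):
        longest = max(longest, start - cursor)   # gap before this meeting
        cursor = max(cursor, end)                # advance past the meeting
    return max(longest, day_end - cursor)        # gap after the last meeting

day_start = datetime(2024, 5, 1, 9, 0)
day_end = datetime(2024, 5, 1, 17, 0)
meetings = [
    (datetime(2024, 5, 1, 10, 0), datetime(2024, 5, 1, 10, 30)),
    (datetime(2024, 5, 1, 13, 0), datetime(2024, 5, 1, 14, 0)),
]
print(longest_focus_block(meetings, day_start, day_end))  # 3:00:00 (14:00 to 17:00)
```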
How leading organizations actually measure complexity
High-performing engineering teams use frameworks that capture both technical and human factors, so they can manage what actually matters for developer productivity.
The DX Core 4
This research-backed approach measures satisfaction, performance, retention, and business outcomes—the metrics that actually correlate with delivery speed and quality. These are among the developer productivity metrics that top companies actually use to drive engineering success.
Developer Experience Index
Industry benchmarking across peer organizations shows where your complexity levels create competitive advantage or disadvantage.
The compound effect of measuring the wrong metrics
The hidden danger isn’t complex code—it’s optimizing for the wrong complexity metrics. When engineering teams chase lower cyclomatic complexity scores while ignoring workflow friction, the compound effect destroys productivity:
Daily decisions add up.
When each developer makes “metric-friendly” choices that increase cognitive load, the cost compounds across hundreds of daily interactions.
Tool optimization backfires.
Teams invest in code analysis tools that count branches while deployment pipelines remain slow and fragmented, creating a misleading picture of complexity that ignores workflow friction. This is why focusing on engineering metrics that actually drive improvement matters more than structural scores.
Leadership loses trust.
When complexity scores improve but delivery speed stagnates, developers lose confidence in measurement-driven improvement efforts. Understanding what engineering KPIs actually matter for software teams helps avoid this disconnect.
How to actually reduce complexity
Here are practical ways to reduce complexity in your codebase and the cognitive load experienced by your developers.
Optimize feedback loops.
Reduce build times, streamline reviews, and automate deployment pipelines. Organizations that measure and optimize feedback delays see significant improvements in deployment frequency.
Minimize cognitive load.
Standardize development environments, improve documentation quality, and create clear ownership models. Organizations see measurable improvements in deployment practices after adopting developer experience frameworks that capture actual cognitive burden.
Protect flow state.
Establish dedicated focus time, reduce unplanned work, and align team priorities to maximize productivity. Deep work drives innovation and job satisfaction—metrics that are closely correlated with business outcomes.
Understand developer experience.
Structural scores only tell part of the story. To capture the full picture, leaders need composite measures that combine perceptual and system data. The Developer Experience Index (DXI) provides exactly that: a research-backed composite score built on 14 standardized drivers of engineering efficiency.
DXI not only benchmarks your organization against peers but also correlates developer friction with time savings, turning technical pain points into dollarized business outcomes.
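To make the idea of a composite measure concrete, here is a deliberately simplified sketch. The driver names, scores, and equal weighting are hypothetical placeholders, not the DXI’s actual 14 drivers or methodology; the point is only that perceptual and system signals can be normalized and rolled into a single comparable number.

```python
# Purely illustrative: combine normalized survey and system signals into one index.
# Driver names and values are hypothetical, not the actual DXI model.
drivers = {
    "build_time_satisfaction": 0.62,   # survey response, normalized to 0..1
    "code_review_turnaround":  0.48,   # system data, normalized to 0..1
    "deep_work_time":          0.55,
    "documentation_quality":   0.40,
}

def composite_index(scores: dict[str, float]) -> float:
    """Equal-weighted average of normalized drivers, scaled to 0..100."""
    return 100 * sum(scores.values()) / len(scores)

print(f"Composite developer experience score: {composite_index(drivers):.0f}/100")  # 51/100
```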
The business case for better complexity management
Traditional structural metrics answer yesterday’s questions about code paths. What leaders need now are modern frameworks that connect developer experience to business outcomes.
Evidence shows the impact is real. Companies with strong developer experience achieve 4–5x higher revenue growth. Etsy cut onboarding time by 40% by uncovering workflow complexity invisible to code metrics. Pfizer saved $5 million annually by improving developer workflows, while Brex saved $5 million by refocusing platform investments based on experience data rather than structural scores.
The lesson is clear: organizations that measure complexity through developer experience frameworks turn productivity into competitive advantage. By focusing on feedback loops, cognitive load, and flow—not just code branches—engineering leaders create environments where complexity fuels innovation instead of frustration.