Analyst reactions: How AI is reshaping engineering organizations

Analysts at Accenture and RedMonk are seeing an increased focus on quality, security, and responsible AI rollout.

Justin Reock

Deputy CTO

This post was originally published in Engineering Enablement, DX’s newsletter dedicated to sharing research and perspectives on developer productivity. Subscribe to be notified when we publish new issues.

In just the past year, AI has set into motion several major shifts in how engineering organizations are structured, including how teams are composed, where responsibilities sit, and what skills matter. These changes are still unfolding, but patterns are emerging.

To help name some of those shifts, I interviewed two analysts who spend their time talking with engineering leaders across industries: Ruchi Goyal, Global AI Practice Leader at Accenture, and Dr. Kate Holterhoff, Senior Industry Analyst at RedMonk. Below I’ve summarized the observations they shared—with the hope that naming these patterns is a useful reference point for leaders to recognize and navigate the same changes in their own organizations.

Hiring priorities are emphasizing AI fluency and code quality

The software industry is still working through a hiring slump, but the hiring that is happening is increasingly concentrated in specific AI-centric roles.

  • The “AI Engineer” trend: We are seeing a surge in “AI Engineer” titles on LinkedIn. These roles are less about training foundation models and more about building the glue between LLMs and production code. (See example roles on Indeed.)
  • Quality in focus: At the same time, a new skill is becoming central to engineering hiring: the ability to distinguish good code from mediocre AI-generated code. As Ryan Dahl and others have noted, AI removes the barrier to writing code, but it introduces a new one: dealing with slop.
  • Review friction: Because quality has become a bigger priority, some organizations are seeing new friction emerge in the code review process.

Fragmented AI experiments are consolidating into centralized platforms

Early AI adoption in most organizations was scattered, with individual teams running their own experiments with different tools and no shared standards. That’s changing. Enterprises are moving toward centralized models such as a formal AI Center of Excellence or a hub-and-spoke structure that sets organization-wide direction. Goyal notes that “a lot of clients are looking into merging Developer Productivity and Internal Tools teams into one.” This can enable a number of new strategies for the organization:

  • Consolidation: Organizations are folding AI initiatives into their internal developer platforms, creating governed paths for how AI gets used across the organization. Without that structure, teams may end up making independent decisions about tools and access, which can be hard to unwind later.
  • Orchestration: Managing individual agents is becoming less of the challenge; coordinating multiple agents working together is where the complexity now lives. Organizations that have already worked through DevOps maturity will recognize the pattern: the evolution tends to move from automation, to orchestration, to choreography. AI adoption is following a similar arc (a minimal sketch of the orchestration step follows this list).
  • Ubiquity: AI is spreading across engineering organizations to the point where most developers are expected to use it as a matter of course. As Holterhoff put it, “AI is becoming water.” As that happens, Platform teams are becoming increasingly responsible for the governance and prompt hardening that make this safe.
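
To make that automation-to-orchestration step concrete, here is a minimal sketch in Python. It assumes a hypothetical call_model() client; AgentTask, the agent names, and the prompts are illustrative, not any specific framework’s API.

```python
from dataclasses import dataclass


@dataclass
class AgentTask:
    """One unit of work routed to a single agent."""
    agent: str
    prompt: str
    output: str | None = None


def call_model(agent: str, prompt: str) -> str:
    """Placeholder for the organization's governed model client."""
    raise NotImplementedError("wire this to your approved LLM gateway")


def orchestrate(tasks: list[AgentTask]) -> list[AgentTask]:
    """Run agents in sequence, feeding each the previous output.

    A central coordinator owning ordering and hand-offs is
    orchestration; in a choreography model, agents would instead
    react to shared events without a single driving loop.
    """
    previous = ""
    for task in tasks:
        # Each agent sees the upstream result as context.
        task.output = call_model(task.agent, f"{task.prompt}\n\nContext:\n{previous}")
        previous = task.output
    return tasks


# Example pipeline: a coding agent hands off to a review agent.
pipeline = [
    AgentTask("coder", "Implement the retry helper."),
    AgentTask("reviewer", "Review the change for quality and security."),
]
```

The point of the sketch is the single driving loop: that loop is what an orchestrator owns, and it is exactly what dissolves into event-driven hand-offs as organizations move toward choreography.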

A new operational layer is emerging: LLMOps

As AI becomes part of the engineering infrastructure, someone has to own the operational layer that keeps it running reliably. In many organizations, that responsibility is coalescing into what’s being called LLMOps. Many Platform and DevProd teams are absorbing this responsibility.

  • Prompt engineering to guideposts: Ad hoc prompting works for individuals but doesn’t scale across a team or organization. Turning prompts into reusable templates builds institutional knowledge rather than leaving every developer to figure it out independently, and it creates a consistent quality baseline across the SDLC (see the sketch after this list).
  • The SWAT Team approach: In many orgs, a specialized team of engineers handles model integration and AI infrastructure, often leveraging the heavy lifting provided by the big cloud providers. Organizations that stand up a dedicated group for this work tend to move faster and make fewer costly mistakes, while the rest of the engineering org can focus on building.
  • Deployment convergence: We are seeing tighter integration between development environments and production. (For example, Expo’s integration with Replit allows developers to deploy directly to production, bypassing traditional friction points.) This creates real efficiency gains, but it also means the guardrails that used to exist between those environments need to be rethought, which is something Platform teams are becoming increasingly responsible for.
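
As a concrete illustration of turning ad hoc prompts into reusable templates, here is a minimal sketch using only the Python standard library; the template text, field names, and helper function are hypothetical examples, not a prescribed format.

```python
from string import Template

# Stored centrally (for example, in the internal developer platform) so
# every team starts from the same vetted baseline instead of ad hoc prompts.
CODE_REVIEW_TEMPLATE = Template(
    "You are reviewing a $language change.\n"
    "Flag correctness, security, and maintainability issues.\n"
    "Diff:\n$diff\n"
)


def build_review_prompt(language: str, diff: str) -> str:
    """Render the shared template with request-specific values."""
    return CODE_REVIEW_TEMPLATE.substitute(language=language, diff=diff)


prompt = build_review_prompt(language="Python", diff="+ def retry(): ...")
```

Keeping templates like this in a shared repository is what turns individual prompting habits into institutional knowledge: the baseline is versioned, reviewable, and consistent across teams.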

The human factor still matters the most

Despite the automation, the primary challenges are still cultural. As RedMonk has famously noted, “The developers are everything.” AI tools are being designed to support practitioners, not replace them.

  • Murky ROI: Individual developers see the value of AI tools clearly enough that many are willing to pay for them personally. But at the organizational level, the picture is murkier. Leaders are measuring the impact of AI while reading headlines claiming 3x productivity gains… and most aren’t seeing numbers that match.
  • Security as a skill: A major skill gap isn’t learning to prompt; it’s learning how to use AI securely. Training now focuses heavily on privacy, vetting specific LLMs, and ensuring that agentic workflows don’t bypass critical security guardrails (one guardrail pattern is sketched after this list).
  • Software engineers as self-optimizers: Developers are naturally inclined to try new tools. That’s generally a good thing, but it means organizations need vetted, safe environments for that experimentation to happen in. The risk is that developers could adopt AI in ways the organization can’t see or govern.
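
One way to keep agentic workflows from bypassing guardrails is to gate every tool call through a deny-by-default allow-list. The sketch below illustrates that pattern in Python under that assumption; the tool names, registry shape, and GuardrailViolation type are hypothetical, not a specific framework’s API.

```python
# Illustrative approved set; in practice this would come from platform config.
ALLOWED_TOOLS = {"read_file", "run_tests", "open_pull_request"}


class GuardrailViolation(Exception):
    """Raised when an agent requests a tool outside the approved set."""


def execute_tool_call(tool: str, args: dict, registry: dict) -> object:
    """Run an agent-requested tool only if the platform has approved it."""
    if tool not in ALLOWED_TOOLS:
        # Fail closed: deny-by-default keeps unreviewed capabilities
        # (shell access, outbound network calls) out of the agent's reach.
        raise GuardrailViolation(f"tool {tool!r} is not approved")
    return registry[tool](**args)
```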

These changes are moving quickly, and most organizations are still early in figuring out what they mean in practice. There’s no single right answer yet, which is part of what makes it valuable to hear from analysts with visibility across many organizations. A special thanks to Ruchi Goyal and Dr. Kate Holterhoff for sharing their perspectives in this newsletter.
