Abi Noda: Thanks, everyone, for joining. I’m Abi, co-founder and CEO of DX. I’m joined by Laura, CTO of DX. We do this every month, so we’re excited for everyone to join. Today, we’re talking about the shift we’re seeing in the role and focus of platform teams. At DX, we predominantly work with platform teams, platform leaders, and developer productivity teams, so there are lots of interesting changes happening right now in the industry. A lot of the platform leaders we talk to are thinking about, “Hey, how do I navigate? How do I position myself in this time of change in the industry?” So, for today, we’ll have a discussion. We’ll also be looking at the chat and the Q&A for questions as we go, and we’ll try our best to get to them. With that, Laura, I’ll pass it to you to kick us off.
Laura Tacho: Platform engineering in general, and developer productivity teams, have a bit of what I would call a squishy definition, one that can vary from organization to organization. And then, when we add AI in, the world changes a little bit. As Abi said, we’re going to be talking about how and why platform teams and dev prod teams need to evolve. And really, I think the central point to kick us off is this observation that AI-assisted engineering really is impacting developer productivity teams and platform engineering teams in a big way.
Abi, do you want to talk about this post that you had on LinkedIn the other week? It got a bit of attention and seemed to resonate with a lot of folks.
Abi Noda: It is worth saying that defining platform teams, even pre-AI, has been interesting, right?
Laura Tacho: Mm-hmm.
Abi Noda: We talked to a lot of platform leaders, a lot of folks. It’s not always easy to define your role in the organization, to define your mandate and vision within the organization. Now, with AI, I think we’re seeing a lot of platform teams caught in the wind of all this change that’s happening and sort of having to redefine themselves again. And I think there’s both challenges from that as well as opportunities. This post was just based on some conversations I had had a couple weeks ago. There were a lot of comments, lots of questions, so we thought why not turn this into a larger discussion today.
Laura Tacho: Usually, what happens in organizations is that, at every company, there’s some individual or some team that the CTO looks to to help with productivity. For a lot of companies, this is the platform team. You’re already kind of the overseers of CI/CD and developer productivity tools. So AI, to no surprise to anyone who’s here in this room, really changes expectations in a big way because, now, we have this monumental transformation, big promises, lots of hype around AI accelerating productivity, which is in the mandate for a lot of these platform engineering teams. You are in an interesting position of needing to reconcile your old responsibilities with this new world of AI, figuring out how to keep up and what’s important to pay attention to, all while the ecosystem is exploding and evolving extremely rapidly.
What Abi and I want to break down is some of the observations we’ve had from working with platform teams and dev prod teams, to answer the question, “Given this new era of AI-assisted engineering, what are the things that platform engineers should be focused on right now? What is the new mandate?” We’re not just talking about pipeline speed anymore. The tools have expanded, and the expectations have expanded as well. I think the first natural place to start is that, for a lot of platform engineering teams, evaluating and rolling out AI tools and coordinating the proofs of concept is now a core responsibility.
Abi Noda: As you mentioned, Laura, platform teams typically are the governors of the developer tool chain within the organization. Again, it’s natural that folks have turned to platform leaders for leadership and guidance on the selection of AI vendors and internal AI tooling opportunities. The overarching theme from my side is that this is such a big opportunity for platform leaders, because AI is a big deal and executives care a lot right now about accelerating with AI. Platform teams and platform leaders are positioned to be the stewards of this change.
Yeah, Laura, as you said, we’re seeing platform teams being asked to evaluate the AI tooling landscape and to bring recommendations to the organization about which vendors to pilot, what the vendor strategy should be, and build versus buy. Then it’s actually going to procurement to bring these tools in, beginning to roll them out to developers, and then demonstrating their impact and success. Really, the full life cycle of rolling out AI tooling is something that is falling upon platform leaders and should now be viewed as a core responsibility and a core pillar in the platform roadmap.
Laura Tacho: Yeah. And I think for leaders, platform engineering leaders, dev prod leaders, who are now responsible for coordinating AI, responsible for adoption, they’re running proof of concepts, this is all happening in a tooling ecosystem that is expanding extremely rapidly where we have vendors kind of leapfrogging each other, sometimes it feels like every week. Every month, the landscape really changes. There’s a lot of whiplash going on here.
For leaders who are now responsible for evaluating and rolling out AI tools, having really good measurements in place is really important in order to make those big decisions, which usually come with big budgets and big consequences. That leads into another point: evaluating and rolling out AI tools is definitely on the plate of platform engineering leaders, but AI is also changing the way that measurement and data collection need to happen. You need to be able to answer the questions, “Are we picking the right tools to invest in? Are we making them available to developers?” In some cases, the methodology and the principles behind the measurement need to stay exactly the same, and in some cases we do need different measurements to account for the different ways that software is being developed now with AI.
Abi Noda: For folks who didn’t catch it, last month, Laura and I did a session specifically on the AI measurement framework, a really deep dive into how we’re seeing organizations tackle measurement of AI engineering, what we’re seeing from different companies, what our research team has put together, and what our guidelines and recommendations are. To add to Laura’s point, for a long time, as an industry, we were really focused on delivery metrics and CI/CD metrics, like DORA metrics. More recently, there’s been a lot more attention on broader productivity signals and developer experience. This is just the next page-turn, the next evolution of how platform teams need to evolve how they think about their measurement programs, and that is: how do we measure AI? How do we measure AI agents? What are the right signals to focus on as an organization as we change the way we build software and how the SDLC is designed? Again, we should point people to our previous session on the AI measurement framework on our website, but, big plus one, platform teams need to shift the way they measure not only the impact of their tools but overall engineering success and productivity.
Laura Tacho: I think right now is also a really pivotal moment for platform engineering leaders and developer productivity leaders to be that internal advocate or internal educator about solid principles in measuring development team and engineering organization performance in general. What I’m seeing a lot, and I just had a conversation with someone on LinkedIn about this today, is that we got past lines of code being a good measurement of developer productivity a while ago, but now “my executive team is just obsessed with understanding how many lines of code AI is generating.” Unfortunately, whether we like it or not, it falls back on us as engineering leaders, whether you’re a platform engineering leader, dev prod, or any other kind of engineering leader, to educate.
A lot of times, I think, with AI, it’s because it’s so new and so novel and there just aren’t a lot of alternatives to offer. So look to the AI measurement framework for some very grounded, research-backed suggestions and guidelines for what to measure to help you with these conversations, so you don’t feel like you’re having to go at it alone trying to explain why it doesn’t matter so much how many lines of code are being generated by AI. That’s one data point that can give us some insight into the problem, but we also have to be looking at all of these other things.
Abi Noda: The measurement piece… History repeats itself in our industry. Every time there’s been change of this magnitude, executives and lots of leaders flock to the same old metrics that we know as practitioners aren’t really the most effective signals for productivity. The very same thing happened during COVID when organizations shifted to hybrid and remote work. There was a huge gravitational pull back to measuring things like lines of code. In fact, that’s why the SPACE framework was published. At the very beginning, it talks specifically about how lines of code isn’t really the right way to think about productivity at a pivotal moment of change in the industry. I think we see the same dynamic happening right now with the AI shift. Again, for leaders, this isn’t really anything new. As Laura said, you are always in that role of educating the rest of the organization and bringing best practices and recommendations on how to think about productivity in the right ways.
Laura Tacho: Yeah, absolutely. Speaking of lines of code and what they can tell us, one thing we are seeing is that, because it is now much easier to generate or author code, we do see faster iteration speed and more throughput, and in some cases, or at least I’ve seen this, more code in general being shipped, so larger batch sizes. Working with AI in the organizational context is going to put stress on certain aspects of your delivery pipeline, of build and test, of the development process itself, of the local development environment. I think the role of platform and developer productivity teams is to harden the system, but then also wrangle the system. One other point that Abi and I wanted to talk about, which we’ll get into now, is the role of developer productivity teams and platform teams in creating both paved paths for AI-assisted engineering and guardrails, and then looking at the whole of the platform and making sure it can handle the increased throughput, maybe bigger batch sizes, and those downstream implications of working with AI.
Abi Noda: It’s really interesting… At a lot of organizations right now, we’re just throwing AI coding tools at our developers and sitting back and seeing what happens, right?
Laura Tacho: Mm-hmm.
Abi Noda: The results vary widely based on the context, the code base, the language, and the technology. I think what we’re seeing from a lot of forward-thinking platform leaders is, “Hey, this is not dissimilar from the same types of tool sprawl problems that we were focused on in the past.” We’ve talked about how standardization provides leverage. Ultimately, narrowing down to a finite set of tools and ways of working allows you to invest more in making those tools and ways of working really excellent and seamless for developers. I think we’re at a very similar transition point around AI tooling, where a lot of organizations to date have just been piloting a lot of different tools, throwing them at developers, and the results vary widely.
A lot of developers are still maybe not feeling fully equipped or enabled to maximize the benefit of these tools, and therein lies the opportunity for platform teams to come in. We’ve already been talking about training and basic enablement for developers learning these new skills and new tools, but beyond that, I think there’s an incredible opportunity in AI to truly create these paved paths: narrowing down to a set of tools for different types of use cases, and starting to build reusable workflows and shared templates, the same types of things that platform teams have been focused on for a long time. A concrete example: Claude Code has a shared workflows capability, and we’re starting to see more and more platform teams really lean into that. How can we be the curators, the creators, of really useful, reusable AI workflows for developers across the organization?
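Just to make that concrete, the curation can be as simple as a small set of named prompts that the platform team owns and versions. This is purely an illustrative sketch, not Claude Code’s actual interface; run_agent is a hypothetical wrapper around whichever coding agent you’ve standardized on, and the workflow names and prompts are made up.

```python
# Illustrative sketch of a platform-curated workflow registry: a few named,
# reusable prompts that every team invokes the same way, instead of each
# developer improvising their own. run_agent() is a hypothetical wrapper
# around whatever coding agent your organization has standardized on.
WORKFLOWS = {
    "add-missing-tests": (
        "Find functions changed on this branch that lack tests and add unit "
        "tests that follow the conventions described in tests/README.md."
    ),
    "security-review": (
        "Review the diff on this branch for injection, authorization, and "
        "secrets-handling issues, and list findings with file and line references."
    ),
}

def run_agent(prompt: str) -> str:  # hypothetical helper, not a real API
    raise NotImplementedError("wrap your organization's coding agent here")

def run_workflow(name: str) -> str:
    """Run a curated workflow by name, e.g. run_workflow('security-review')."""
    if name not in WORKFLOWS:
        raise KeyError(f"unknown workflow {name!r}; available: {sorted(WORKFLOWS)}")
    return run_agent(WORKFLOWS[name])
```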
Laura Tacho: Yeah, I think we can answer one question here that’s very related to this, from Steven: “How do you control tool sprawl when dev teams are themselves creating new AI assistants and agents?”
I think part of it goes back to what Abi said. This is just the same problem that we’ve had with any other kind of tool sprawl. The fact that it’s AI doesn’t necessarily make it any different. We have to solve it in the same way that we did in the past, and it is the role of the platform team to solve it. Actually, there are two things that I think are unique about this, though. The first is that experimentation is good, but tool sprawl is bad. When I hear that question, that we have teams creating new AI assistants and new agents, hacking around, on one hand, that is the kind of stuff we want to preserve and encourage, because experimentation helps us figure out what are the new and novel use cases we can exploit. Where are the boundaries of this? How can we actually use it? What doesn’t work is when it does turn into tool sprawl and creates enough friction that it starts to work against you.
Finding the right and appropriate channels for experimentation is a great way to confine that and put it in its place. I also would get really curious about why they are building those tools. What problem does it solve? You have to think about, again, this is back to the platform-as-a-product mindset, “What are the use cases? Can we unify and standardize?” Get really curious about this organic, garden-like growth that’s coming up from your organization, and find the opportunities to do a bit of pruning and trimming, focusing all of that work to keep it on a standardized path, so that we can also share all of those tools around and it’s not so siloed or one-off.
Abi Noda: You touched on guardrails earlier. What we’re seeing and hearing from a lot of organizations is the unbelievable pace at which code can be generated through AI tools needs to be counterbalanced with safeguards, guardrails, and quality checks. Furthermore, the ability to generate reliable code rapidly with AI tools is also constrained by the feedback loops in the system. We’ll talk more about this later towards the end of the discussion. Again, this is where I think lots of SRE teams, platform teams have been singing this tune for a long time, the need for really systematic quality checks, feedback loops, tests, things like that. Now, it’s actually more important than ever because we’re moving at a speed faster than ever, and so we need to counterbalance that with ensuring that the code we ship to production is good quality, secure, and it has checks in place to avoid issues.
Laura Tacho: Yeah. I do think the increased throughput, the increased volume of code that’s going to start coming through a lot of these platforms and build systems, will test the boundaries and really show you what’s broken. So platform engineering teams can get ahead of this by making sure that the fundamentals are there. Ask yourself the question, “How will we know? How will we know that the code shipped, where 50% of it is AI authored, is secure and compliant according to our standards? How will we know when our systems can’t check that sufficiently?” and do a bit of hardening in those guardrails to get ahead of future problems. Obviously, we don’t want to over-optimize or prematurely optimize, but a little bit of planning and hardening will go a long way, because we know that that’s what’s coming.
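As one very rough illustration of what a guardrail like that can look like, here is a minimal pre-merge check that flags changes that are unusually large or that arrive without touching any tests. The size threshold and the test-path conventions are assumptions you would tune to your own standards; it’s a sketch, not a prescribed implementation.

```python
# Minimal sketch of a pre-merge guardrail: flag changes that are unusually
# large or that touch no test files. Thresholds and path conventions are
# assumptions -- tune them to your own standards and CI setup.
import subprocess
import sys

MAX_CHANGED_LINES = 800              # assumed batch-size ceiling
TEST_PATH_HINTS = ("test", "spec")   # assumed test-file naming convention

def changed_files(base: str = "origin/main") -> dict[str, int]:
    """Return {path: lines_changed} for the current branch vs. base."""
    out = subprocess.run(
        ["git", "diff", "--numstat", base],
        capture_output=True, text=True, check=True,
    ).stdout
    files = {}
    for line in out.splitlines():
        added, deleted, path = line.split("\t", 2)
        if added == "-":  # binary file, no line counts
            continue
        files[path] = int(added) + int(deleted)
    return files

def main() -> int:
    files = changed_files()
    total = sum(files.values())
    has_tests = any(hint in path.lower() for path in files for hint in TEST_PATH_HINTS)

    problems = []
    if total > MAX_CHANGED_LINES:
        problems.append(f"change is {total} lines; consider splitting it up")
    if files and not has_tests:
        problems.append("no test files were touched in this change")

    for p in problems:
        print(f"guardrail: {p}")
    return 1 if problems else 0

if __name__ == "__main__":
    sys.exit(main())
```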
Abi Noda: Since we are talking about guardrails, the ability to identify AI-generated code is one of the primitives that helps with that. We won’t go deep into that discussion today, but I just wanted to point out that, in our previous session focused on the AI measurement framework, Laura and I talked in depth about the different approaches out there to tracking AI-generated code, and that can all feed into these types of guardrails and checks that we’re talking about today.
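Just as a flavor of what one of those approaches can look like, and purely as an illustrative sketch rather than a recommendation, you could count commits whose messages carry an agent co-author trailer. Whether your tools write such a trailer at all is an assumption you would have to verify, and most organizations will want richer telemetry than this.

```python
# Rough sketch of one way to estimate AI-assisted work: count commits whose
# messages carry an agent co-author trailer. The trailer text below is an
# assumption -- check what your tools actually write, or add your own via hooks.
import subprocess

AI_TRAILER_HINTS = ("co-authored-by: claude",)  # assumed marker, verify for your tools

def ai_assisted_commit_share(since: str = "30 days ago") -> float:
    """Fraction of recent commits whose message carries an AI co-author trailer."""
    raw = subprocess.run(
        ["git", "log", f"--since={since}", "--pretty=format:%H%x1f%B%x1e"],
        capture_output=True, text=True, check=True,
    ).stdout
    commits = [c for c in raw.split("\x1e") if c.strip()]
    if not commits:
        return 0.0
    flagged = sum(
        1 for c in commits
        if any(hint in c.lower() for hint in AI_TRAILER_HINTS)
    )
    return flagged / len(commits)

if __name__ == "__main__":
    print(f"AI-assisted commit share (last 30 days): {ai_assisted_commit_share():.0%}")
```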
Laura Tacho: Yeah, great. On guardrails and paved paths, what we’ve really seen is that standardization is a huge point of leverage. I mean standardization in tools, but I would also extend this to knowledge: find the use cases, find the paved paths that really work, and then make sure they’re reproducible and can spread throughout the organization so you get maximum leverage. Experimentation is really important. I think the challenge here is that there always needs to be a balance between experimentation and standardization. Like Steven’s question, we’ve got tool sprawl because people are just so excited to be building stuff. We don’t necessarily want to discourage that, but we also need to keep it in check and make sure those things are working for the organization. That really always has been the role of platform engineering, to make those paved paths and that standardization, but even more so now with this kind of metamorphosis, this evolution, that we’re finding ourselves in with AI.
Abi Noda: All right, let’s get into the last point we have for today around what we see as the new mandate for platform and developer productivity teams. I think this one is a little bit of a prediction because it’s not necessarily a shift we’re fully seeing yet. One of the observations we’ve had is that a lot of platform teams are putting tools out there in the hands of developers across the organization, but platform teams themselves are sitting a little bit on the sidelines right now in terms of their own AI capabilities and tools. There isn’t really a clear role for platform teams themselves to be fully applying and leveraging these tools. I think that’s where we see an opportunity, right?
Laura Tacho: Mm-hmm.
Abi Noda: Even when we put these AI tools into the hands of developers, they don’t necessarily have the time or the expertise or the desire to apply them to a lot of the problems that they’re actually well-suited to solve. For example, things like code refactoring or security patching, a lot of the KTLO (keep-the-lights-on) work that AI is really well-suited to tackle repetitively, are things that a lot of product teams don’t really prioritize in their roadmaps and backlogs and, therefore, won’t actually get to. That’s where we see an opportunity for platform teams to be thinking about, “What are the one-to-many opportunities?” What’s the opportunity for platform teams to maybe own a lot of the metaphorical AI headcount in the organization and apply it in a horizontal way across the entire organization, rather than just putting tools in individuals’ hands and looking for acceleration that way?
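To sketch what that one-to-many, horizontal application could look like in practice, here is a rough outline. run_agent and open_pull_request are hypothetical stand-ins for whatever coding agent and code host integration your organization actually uses, and the prompt and repo names are made up.

```python
# Rough sketch of a "one-to-many" platform workflow: apply the same routine
# KTLO task across many repos. run_agent() and open_pull_request() are
# hypothetical helpers standing in for whatever agent CLI/API and code-host
# integration your organization actually uses.
from dataclasses import dataclass

PATCH_PROMPT = (
    "Upgrade any dependencies with known CVEs to the nearest non-breaking "
    "version, run the test suite, and summarize what changed."
)

@dataclass
class Result:
    repo: str
    succeeded: bool
    summary: str

def run_agent(repo: str, prompt: str) -> Result:          # hypothetical helper
    raise NotImplementedError("wrap your organization's coding agent here")

def open_pull_request(repo: str, summary: str) -> str:    # hypothetical helper
    raise NotImplementedError("wrap your code host's API here")

def patch_fleet(repos: list[str]) -> None:
    """Run the same routine task across a fleet of repos, opening PRs as it goes."""
    for repo in repos:
        result = run_agent(repo, PATCH_PROMPT)
        if result.succeeded:
            url = open_pull_request(repo, result.summary)
            print(f"{repo}: opened {url}")
        else:
            print(f"{repo}: needs a human -- {result.summary}")

# Example invocation (fill in the helpers above first); repo names are illustrative:
# patch_fleet(["payments-service", "search-api", "internal-tools"])
```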
Laura, I’m sure you can add to this, but again this is one of our more future predictions of how we think platform teams can have even more impact with the shift that’s happening.
Laura Tacho: Yeah, I think the bottom line about this one-to-many idea is that AI works best as an accelerant when it’s used at an organizational level, not when we just put a license in the hands of an individual and hope that their curiosity and grit are going to take it the rest of the way. You can imagine a world where you centralize that cost of setup, maybe what I would even describe as context as a service: having platform teams take on the pain and centralize it, so they’re providing really good tooling to the product teams, who can go out there and work on their particular business problems. A good example of this, which I’ll talk about on stage in Las Vegas next month at the Enterprise Tech Leadership Summit with Bruno Passos, who leads GenAI and developer experience at Booking, is that within their developer productivity and developer experience team, they use this concept of experience-based acceleration.
What they did was take a business problem and put it in the hands of a hackathon for team members, using AI, in order to identify use cases that they could then expand, leverage, and standardize across the whole organization. They found, exactly as you mentioned before, that migrations were a really important use case, and, through these experience-based accelerators, they found many teams that had this problem in their business unit. They found a way to make AI get them 70% of the way there, and then, as the centralized team, they could take that method, spread it out, and standardize it across the organization. So they went from having nothing that fulfilled that need to this team being able to spread it across the whole organization and bring some really good acceleration results from it. So there are lots of opportunities.
Single-player mode is hard. Whenever platform teams can apply AI to workflow problems, to organizational problems, the leverage is just so much bigger and adoption will be stickier because you’re solving actual, real problems on the team level, not just for individuals trying to speed up their individual tasks.
Abi Noda: I think that’s a good segue into just recapping the points we’ve talked about thus far. I think a lot of it boils down to enabling great multiplayer mode, right?
Laura Tacho: Yeah.
Abi Noda: By default, where a lot of organizations are starting is, again, throwing tools at developers, letting them try to figure it out, and giving them some training. But I think the opportunity for platform teams is helping the organization take a systematic, organizational approach to rollout and to maximizing the value and velocity that can be gained from all these tools: data-driven rollouts, good vendor selection, paved paths, standardization, and then putting in the right guardrails and shared workflows to really enable the organization to get a lot of impact from AI.
Laura Tacho: Yeah, definitely. One thing we didn’t talk about too much, but which is worth bringing up here, is marrying the old world and the new world together, and I think that’s a particular challenge for those of you, and I know there are plenty in this room, who are working at companies with a lot of brownfield or legacy code. A lot of times, AI is talked about in a greenfield way. It’s all new stuff, it’s hacking, it’s experimentation and innovation. There’s so much opportunity, just like with that Booking example, to use AI to modernize and to migrate, making sure that legacy code is also married up with these AI capabilities so that we don’t have AI silos, and we can bridge those gaps and get this working for the whole organization.
Platform teams have so much influence, and there is so much opportunity for you all to take AI from something that is seen or maybe treated like an individual productivity tool and make it the biggest piece of organizational leverage that you have. It all starts with approaching AI like a tool: finding the use cases, finding the problems that your users, the internal developers, have, and then finding a way to standardize and spread those gains and good use cases across the organization so you can move forward together.
Abi Noda: What I wanted to conclude on today was what we see as maybe one of the most important things that platform and developer productivity teams need to do right now, which is stay focused on the bigger picture. Everyone is focused on AI, everyone is talking about AI tools. “Should we use Cursor? What are we doing with Claude Code? What do we do about Copilot?” But when we look at the data, we see a couple of interesting things, right? The first thing we see is that, across the board, the acceleration gains that we’re seeing from AI are still very much offset and outweighed by the inefficiencies that exist across the SDLC. So all the traditional things like excessive meetings, interruptions, code quality, all the things that have hindered us in the past still hinder us today. We see in the data that these things outweigh the acceleration we’re getting from AI, so that’s something we still need to solve.
The second thing that we’ve seen is that there’s actually a refocus on developer experience with AI because what a lot of organizations are finding is that a lot of the same things that benefit human developers in terms of developer experience are also prerequisites for success with AI developers. Things like well-documented code, feedback loops, documentation, these are all things that actually have a significant impact on the efficacy of LLMs in the same way that they have a significant impact on the ability of the human developer to navigate the system and make changes confidently. Again, really important for platform teams to be not losing sight of the bigger picture. While AI is very much the topic of the moment, it needs to be part of a broader holistic strategy around developer experience and productivity.
Laura Tacho: Yeah. Great. Abi, let’s do maybe just a couple questions before we wrap up. There’s one that relates so well to your point about keeping the focus on developer experience. “The whole reason that AI works is because it improves developer experience, but there’s still so much work to do.”
Logan was thinking of the theory of constraints and asks, “What do you see as the current bottleneck in the SDLC?” I think, to Abi’s point, looking at the slide here, this is a snapshot of the data: AI time savings of 67 hours annually per engineer, totally swallowed by meeting-heavy days and interruptions. Those are still very real problems outside of the coding task. A lot of what we’ve talked about here is AI-assisted coding, so we’re talking about Claude Code, we’re talking about Cursor. If we have the same conversation three months from now, I think we’re going to be talking about a wider set of tools across the SDLC. That’s a very important point to bring up for platform engineering teams as well: we can’t just stay fixated on code authoring. We have to be looking at opportunities in other parts of the SDLC in order to really get the most gain.
I have an interesting anecdote from a company that’s using AI for code authoring, but also for requirements and for testing and validation, and they see the biggest improvement when AI is used in all three places. It’s not a third, a third, a third: if they only use it for code authoring, there’s actually not that much gain, but when they use it for the whole thing, they get a lot of speed-up. AI tools across the SDLC are going to be, for 2026, I think, the most top-of-mind part of the conversation on AI. We’re going to move past code authoring, and that’s something important to keep in mind for platform teams.
Abi Noda: I would offer the tip to communicate that as an opportunity. So rather than be the naysayer in the organization, pointing out how unimpactful code generation increases are, I would rather suggest that leaders present this as an opportunity. “Hey, we have tackled, we have unlocked one bottleneck in the system with AI. Now, let’s go focus on other bottlenecks that still exist in the system.” That’s the way we would recommend positioning it for leaders who are looking to advocate for focusing on those other areas beyond just AI code generation.
Laura Tacho: Yeah. There’s one question here about restructuring legacy code bases to be LLM-optimized. “Any other specific examples other than documenting the code in a better way?”
Certainly, documentation gets us a long way there. One interesting technique that I’ll share is for when we’re talking about legacy code and maybe modernizing it or using AI to do migration kinds of work. I think our tendency is to point whatever, Claude Code, at the old code base and say, “Move it from A to B,” when a better technique, and I’m curious if someone wants to try this and prove me wrong or prove me right, is to do a small chunk of that by hand. Then you have, basically, the diff, and you can feed that instead to your coding tool of choice and say, “This is what I’m trying to do. I’m trying to get things from this format over to that one. Write me a prompt, whatever it is, to do this at a larger scale across the whole code base. What am I not considering?” Then you’re going to get a much better prompt to do that piece of work over and over and over again, rather than just trying to let Cursor or Claude Code come up with the migration methodology itself.
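In rough Python, the flow I’m describing looks something like the sketch below. call_model is a hypothetical stand-in for whatever coding tool or LLM API you use, and the branch name and migration goal are just placeholders.

```python
# Sketch of the "do one chunk by hand, then feed the diff back" technique.
# call_model() is a hypothetical stand-in for whatever LLM interface you use;
# the branch name and goal are illustrative only.
import subprocess

def call_model(prompt: str) -> str:  # hypothetical helper, not a real API
    raise NotImplementedError("wrap your coding tool or LLM API here")

def build_migration_prompt(example_diff: str, goal: str) -> str:
    """Combine a hand-done example diff with the goal into a prompt-writing request."""
    return (
        f"I am migrating a legacy code base. Goal: {goal}\n\n"
        "Here is one example I migrated by hand, as a unified diff:\n\n"
        f"{example_diff}\n\n"
        "Write me a detailed, reusable prompt that applies this same "
        "transformation across the rest of the code base, and list anything "
        "I am not considering."
    )

if __name__ == "__main__":
    # The hand-done example lives on a branch; capture it as a diff.
    diff = subprocess.run(
        ["git", "diff", "main...migration-example"],   # illustrative branch name
        capture_output=True, text=True, check=True,
    ).stdout
    reusable_prompt = call_model(
        build_migration_prompt(diff, goal="move from format A to format B")
    )
    print(reusable_prompt)
```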
So that’s one thing, one way that I’ve seen companies use AI-assisted engineering in the context of legacy without needing to write incredible documentation. I have also seen companies use LLM tools to improve the documentation of legacy code as well. That’s also an opportunity.
Abi Noda: I would say this is an area where we have a lot of podcast episodes and research coming out on this specific topic. I think the right way for folks to think about this problem is that, ultimately, the code base is context that’s being fed into the LLM. Just like with how to write a good prompt or CLAUDE.md files, the code base itself is part of that context that we’re feeding in. We’ve seen organizations thinking about, or having success with, for example, breaking up large code bases, very similar to the way that we’ve created more modular, service-based architectures for other reasons in the past. We’ve seen some success with organizations doing that with code bases today for the purpose of optimizing the context provided to LLMs. As for the specifics, the specific techniques for structuring code, documentation, and meta documentation like CLAUDE.md that can be orchestrated and fed into these tools, I don’t think we have specific pointers yet. I think everyone is still learning and coming to understand what the best practices are.
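Purely as an illustration of that “code base as context” idea, and not a recommendation for any specific tool, a context bundle for one module might be assembled something like this, with meta documentation first and a rough budget on how much gets included. The file layout, priority order, and budget here are all assumptions.

```python
# Illustrative sketch of treating the code base as LLM context: gather the
# module-level docs and interface files for just the area being changed, and
# stay within a rough character budget. File layout and budget are assumptions.
from pathlib import Path

CONTEXT_BUDGET_CHARS = 60_000          # assumed rough budget; tune per model
PRIORITY_FILES = ("CLAUDE.md", "README.md", "ARCHITECTURE.md")

def gather_context(module_root: str) -> str:
    """Concatenate docs and source for one module, highest-value files first."""
    root = Path(module_root)
    chunks: list[str] = []
    used = 0

    # Meta documentation first, then package interfaces, then everything else.
    candidates = [root / name for name in PRIORITY_FILES if (root / name).exists()]
    candidates += sorted(p for p in root.rglob("*.py") if p.name == "__init__.py")
    candidates += sorted(p for p in root.rglob("*.py") if p.name != "__init__.py")

    for path in candidates:
        text = path.read_text(errors="ignore")
        if used + len(text) > CONTEXT_BUDGET_CHARS:
            break
        chunks.append(f"# FILE: {path}\n{text}")
        used += len(text)
    return "\n\n".join(chunks)

if __name__ == "__main__":
    print(gather_context("services/billing"))   # illustrative module path
```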
Laura Tacho: All right. Abi, do you have one picked out?
Abi Noda: I saw a question from Steven about handling tech debt, which many teams don’t allow time for. That resonated with me, though I may be misinterpreting the question. As I talked about earlier, a lot of the things that AI tools are really good at right now, KTLO, repetitive refactorings, patching, code migrations, happen to be the things that most product engineers don’t have time for; they didn’t in the past and still don’t today, because they have roadmaps and deadlines to hit on product features. Again, this is where I think the idea of a one-to-many opportunity for platform teams comes in. We’ve seen this already in the industry; there are plenty of examples. Airbnb has published quite a few articles on how they perform large-scale code migrations in a centralized way. I think there’s a lot of opportunity for platform teams to be thinking about, “Hey, as a part of the organization that is very constrained on human headcount, how can we leverage AI to get a lot more impact and be able to do a lot more than we’ve been able to in the past on things like technical debt?”
Laura Tacho: There’s a question that’s sort of the opposite of what you said, where only the product engineers are having time to experiment with AI and this person asked, “Do we need to hire specific people for fulfilling these new responsibilities of platform teams?”
Possibly; your headcount may grow. But I think my gut reaction is that this isn’t a separate job. This is an evolution of the job. What I’ve seen not work is trying to put together a siloed AI team that’s just doing their thing over here. It gets really, really difficult to spread those ideas back into the organization. So my concern with bringing on special AI platform engineers would be exactly that. This isn’t something we can just tack on. I think we saw the same pattern with companies hiring DevOps engineers instead of integrating the practices into their day-to-day operations. We don’t want to make that same mistake with AI, so we don’t want to have it separate. We want to bring it into platform, into product, into everything, and that means evolving job expectations and duties, not necessarily hiring people to be specialists in these areas.
Abi Noda: One of the most important things for anyone working in platform is being very product-minded and customer-minded, right, ultimately?
With everything we’ve talked about here today, we are building for and serving our customers, aka developers across the organization. That matters more than deep technical expertise in a very fast-moving field like large language models. I think what’s most important is the ability to think very pragmatically about how to apply this technology across the organization, for developers, in ways that are going to really move the needle.
Laura Tacho: Great one. Maybe one last one, Abi, before we close off for today.
Gene, you had a question about recommendations on AI tools that can help with tech debt, and I’ll maybe expand that to tools across the SDLC. Make sure to check out our newsletter, because Justin Reock, who’s our deputy CTO, is doing a lot of case studies on organizations that have done exactly this. They’re either using AI for tech debt or using it in other parts of the SDLC beyond code authoring, applying it to solve other problems. So there are a lot of interesting examples.
Morgan Stanley, for example, has their DevGen AI tool, which I found out actually just got patented as well; I saw that on LinkedIn, which is very, very cool. They have a whole methodology for how to use it beyond just code authoring, as a whole-SDLC improvement tool, and they patented their approach. Of course, they’re open about it and sharing it. But you’re going to be seeing, especially over the next couple of months while Justin is doing a lot of active research on this, really clear real-world examples pulled from different companies of how they’re doing this. So that’s a good resource to follow for some guidance on how to do this in a legacy code base or in the enterprise, across the SDLC.