Abi Noda: Frank, really excited to have you on the show today. Lots to cover. Thanks for your time.
Frank Fodera: Well, thank you for having me on. I’m really excited to be here, and I appreciate the opportunity.
Abi Noda: Today we’re talking about two topics that are top of mind for a lot of leaders and a lot of organizations: internal developer portals and AI coding assistants. I want to start with your IDP journey. Because I don’t know if you saw this week, but I actually published an article with some analysis on the IDP space, and I gave you guys a shout-out as an example of an organization that’s built a homegrown IDP and seems to be having a lot of success with it. So I’d love to dive into that journey to begin. Maybe introduce listeners to, you know, just what your IDP is called, what it does, and a little bit of the history of how it came into existence.
Frank Fodera: Sure. Yeah. So we started our IDP journey really in 2019, before internal developer portals were even a thing. We started it because we had an ownership problem. Our IDP is called Showroom. We like our car names within CarGurus, and we definitely try to stick with that theme as we name our internal tools.
But our ownership problem was primarily that we knew we were going to be moving from our monolith to microservices, and we knew that as soon as we did, it would become very difficult to keep track of all the different artifacts we had, the journey of them transitioning to microservices, and the ownership across the entire organization. So we started very simple and really just created a service catalog, because we knew the number of services was going to explode. We cataloged them with a whole bunch of information, centralizing it and treating it as the source of truth for ownership. And we very quickly found that it helped: as people had questions, they knew which team to ask. So people would ask, “Hey, who owns X?” And we could very quickly reference our catalog and say, “Okay, go check there.” That slowly started to redirect traffic from people asking in Slack or hallway conversations, and having to answer the same questions over and over, to saying, “Hey, I need to know who owns something. Let me actually go to this centralized catalog.”
And that seemed to work really effectively. So we started with services, but then ended up cataloging our jobs as well. And we took very different approaches for each of these. With services, we were cataloging them more manually at the start. But with jobs, we had a centralized system, and we already had thousands of them. So we ended up doing more automated syncing between these systems, pulling jobs in from four or five different versions of the same system into one centralized location. And that alone helped pretty dramatically with our ownership problem.
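To make that automated syncing concrete, here’s a minimal sketch of merging job records from several legacy schedulers into one catalog. The endpoints, field names, and merge policy are assumptions for illustration, not CarGurus’ actual implementation:

```python
# A minimal sketch of syncing job records from several legacy schedulers into
# one catalog. Endpoints, field names, and the merge policy are illustrative
# assumptions, not CarGurus' actual schema.
from dataclasses import dataclass

import requests  # assumes each legacy system exposes a simple JSON API

LEGACY_JOB_SYSTEMS = [
    "https://scheduler-v1.internal/api/jobs",
    "https://scheduler-v2.internal/api/jobs",
    "https://batch-legacy.internal/api/jobs",
]


@dataclass
class CatalogEntry:
    name: str
    owner_team: str     # the "source of truth" ownership field
    source_system: str  # which legacy system the record was synced from


def sync_jobs() -> dict[str, CatalogEntry]:
    """Pull jobs from every legacy system and merge them into one catalog.

    Later systems win on name conflicts, so the newest scheduler's
    ownership data becomes authoritative for duplicated job names.
    """
    catalog: dict[str, CatalogEntry] = {}
    for url in LEGACY_JOB_SYSTEMS:
        for job in requests.get(url, timeout=10).json():
            catalog[job["name"]] = CatalogEntry(
                name=job["name"],
                owner_team=job.get("owner", "unowned"),
                source_system=url,
            )
    return catalog
```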
Abi Noda: You mentioned something interesting, which is that when you began this journey, the term “internal developer portal” didn’t even really exist in the industry. I’m curious, when you were trying to tackle this problem, what were you calling it? This was just called the catalog. What kind of options were you exploring? Were you considering a spreadsheet? Like, what led you to decide, okay, we’re going to build a custom application around this problem?
Frank Fodera: Yeah, so we were just calling it a service catalog. We actually had spreadsheets. We had wiki pages keeping track of all of these. It was kind of scattered, and it got out of date very quickly. So our problem was we wanted something that was a bit more dynamic, both from what we call a checklist perspective and from an ownership perspective. Everything we had cataloged in the wiki got out of date and stale very quickly, and it was hard to maintain. And we started out with essentially production readiness checks in a spreadsheet: as we were bringing new services to production, we said, “Let’s just have a spreadsheet that we’re checking off to say, okay, it has all of these certain criteria.”
So when we first developed it, the two major features we had were the catalog and the checklist component. We later evolved the checklist component into a more dynamic version, which I can talk about later. But that alone was just saying, “Hey, have you checked these various things as you’re developing the service itself?” And it proved to be very effective.
Abi Noda: How has the staffing model investment around Showroom evolved over time? When you were getting started, was there an official person assigned to this or was this someone’s side project? Like a manager just said, “Hey, I’ll solve this problem?” And so yeah, what’s kind of been the journey around the funding and team that’s actually working on this?
Frank Fodera: So we had one back end developer on it full-time. I was actually the direct manager at the time leading it, and I was working on it, coding as well, kind of part-time. And then I had to scrounge to get a front end engineer to build the front end component of it because we didn’t have staffing for it. And we did that for a good amount of time until we kind of got to that minimum viable product and it started to take off from there. And that was just the start of our journey. It really has evolved from that service catalog into more of an IDP. And I can kind of talk about the components that made it become that.
Abi Noda: Yeah, I would love to dive into that. First I would ask, maybe working backwards, looking at Showroom today, what are the core features and capabilities?
Frank Fodera: Okay. So Showroom is really made up of what we call five different pillars. The first pillar is all about discoverability: making sure you have all of the information centralized in one place, making it easier to discover, and easy to search and find what you need. The second pillar is really all about governance. This is where we evolved our checklist into what we call compliance rules. And I say “compliance” as in, “Are you compliant with the Golden Path standard in what you have developed?” It helped individuals say, “Are we ready for production? Are we actually following the best practices?” And that was all about our governance builder.
The third is self-serviceability: making it so the general actions you need on a day-to-day basis, like creating a new service or subscribing to topics so you get notifications as things are happening around you, are self-service.
The fourth is transparency. We really wanted everyone to have all the information they needed about their services. So you could sit there with your IDE open, with your AI editor in it, have Showroom open on the other screen, and have all the information you need from an operational perspective around your services and your artifacts. You had everything you needed to know.
And then the last one is operational efficiency. What we found is that internal developer portals are a very critical way to improve the efficiency of your organization by reducing cognitive load, having that single pane of glass, and centralizing it all in one place. That’s what we ended up doing to make deployments easier, to make commits easier to see, and to see your logs in a centralized place. That was very, very helpful.
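As a concrete illustration of the governance pillar Frank describes, here’s a minimal sketch of what a pluggable compliance rule could look like; the interface, rule names, and fields are assumptions, not Showroom’s actual design:

```python
# A minimal sketch of a pluggable compliance rule, as one way the governance
# pillar could work. The interface, rules, and fields are illustrative
# assumptions, not Showroom's actual design.
from dataclasses import dataclass
from typing import Protocol


@dataclass
class Service:
    name: str
    has_oncall_rotation: bool
    has_runbook: bool


class ComplianceRule(Protocol):
    """Each golden-path standard is one small, independently pluggable rule."""
    description: str

    def check(self, service: Service) -> bool: ...


class HasOncallRotation:
    description = "Service has an on-call rotation"

    def check(self, service: Service) -> bool:
        return service.has_oncall_rotation


class HasRunbook:
    description = "Service has a runbook"

    def check(self, service: Service) -> bool:
        return service.has_runbook


def failing_rules(service: Service, rules: list[ComplianceRule]) -> list[str]:
    """Return descriptions of every golden-path rule the service fails."""
    return [rule.description for rule in rules if not rule.check(service)]


# Adding a new best practice means registering one more rule, nothing else.
rules: list[ComplianceRule] = [HasOncallRotation(), HasRunbook()]
svc = Service(name="listings", has_oncall_rotation=True, has_runbook=False)
print(failing_rules(svc, rules))  # ['Service has a runbook']
```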
Abi Noda: How much did you start reaching into infrastructure? I mean, you mentioned something like logs. To what extent is Showroom the interface for the underlying infrastructure or observability tools that you use across the company?
Frank Fodera: Within CarGurus, my team, developer experience, sits very close to our infrastructure team, which is led by a colleague I work very closely with. So we were always tightly tied to infrastructure, and we had the ability to integrate with our observability systems, our alerting systems, all of those different types of things. That allowed us to use Showroom as an interface: platforms are going to change over time, infrastructure may change over time, but if we have this single pane of glass, this facade on top of it all, then when we change out a platform we can still integrate, we can still show this information, and it remains steady for our developers. They’re used to the look and feel that they have, but under the covers we might be completely switching from one observability platform to another, and it’s invisible to them.
Now, we also wanted to encourage transparency into how we’re doing these integrations. So we did show: where is this information coming from? How are we collecting it? If you wanted to go directly to the source, we gave you the ability to do that. That way it’s not completely abstracting everything away and preventing you from learning those downstream systems if you’re a power developer who wants to learn more.
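Here’s a minimal sketch of that facade idea: the portal codes against one stable logs interface, swapping observability vendors touches a single binding, and a deep link preserves the transparency Frank mentions. All names and URLs are illustrative assumptions:

```python
# A minimal sketch of the facade idea: the portal depends on one stable logs
# interface while backends can be swapped. Names are illustrative assumptions.
from typing import Protocol


class LogsProvider(Protocol):
    def recent_logs(self, service: str, limit: int) -> list[str]: ...

    def source_url(self, service: str) -> str:
        """Deep link to the underlying tool, for power users."""
        ...


class VendorALogs:
    def recent_logs(self, service: str, limit: int) -> list[str]:
        # In reality this would call vendor A's query API.
        return [f"[vendor-a] {service} log line {i}" for i in range(limit)]

    def source_url(self, service: str) -> str:
        return f"https://vendor-a.example/logs?service={service}"


class VendorBLogs:
    def recent_logs(self, service: str, limit: int) -> list[str]:
        return [f"[vendor-b] {service} log line {i}" for i in range(limit)]

    def source_url(self, service: str) -> str:
        return f"https://vendor-b.example/explore?svc={service}"


# Swapping observability vendors changes this one assignment; the
# developer-facing page built on LogsProvider stays exactly the same.
logs: LogsProvider = VendorALogs()
print(logs.recent_logs("listings", limit=2))
print(logs.source_url("listings"))  # transparency: link to the real source
```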
Abi Noda: A question I have is around how you’re continuing to evolve this platform over time. You’ve mentioned goals around compliance, best practices, production readiness, and also developer experience. As you continue to build out Showroom, how are you thinking about the main aspects of the business where you’re really driving impact?
Frank Fodera: So from where it started to where it is today, what we’ve learned is that we’ve really built a set of core frameworks and functionality that we can continue to extend and enhance. We focused on the compliance rules and on making those extremely pluggable, so as new best practices and new standards come about, we can easily implement more of them. We’re more focused on extensibility at this point. Our workflows feature makes it self-service to automate anything that takes a lot of steps to get done, like creating a service and creating the pipelines, and we continue to add more of those.
We’re trying to make it so if there is an initiative that we need to focus on to help ourselves be more efficient and get stuff done faster, we can leverage these existing frameworks to invest into them, to automate those components that we need to. If there’s new information that we want to be centralized and be more visible, we can use our automated data collection feature to go and collect all of that information and put it in that centralized place. So in the current phase, we have a really solid foundation of a whole bunch of different features and those frameworks allow us to continue to extend it. Another thing is the way that we built Showroom was not by sitting there and saying, “We are trying to build an internal developer portal.”
What we ended up doing was saying, “We have these strategic initiatives that we need to invest into, and is there something that can help accelerate that strategic initiative to make us move even faster?” And time and time again, initiative after initiative, we saw that by investing some time into this internal developer portal, we were actually able to move that initiative forward faster with better results. And that comes into the developer experience side. That comes into the efficiency perspective. Our model for developer experience is truly creating a great user experience for our developers with our tools and our platforms, but also helping ourselves be more efficient.
So that is one of our major pillars. And when we saw an opportunity to not only optimize our own development but also optimize the development of the developers we’re serving with our platforms, it really was a two-for-one deal, which was great.
Abi Noda: One of the drivers for developing Showroom was a shift from having a monolith to microservices architecture. Share with listeners a little bit about that journey. From what I understand, there were a few false starts and attempts before you finally found a successful path. So tell that story and any takeaways for listeners.
Frank Fodera: So when we first started our monolith decomposition journey, we really tried to take our monolith and completely detangle it, and we found that that was actually very difficult. We tried to take one application, one user section, and do what we call a full vertical slice of it: taking everything from the front end all the way down to the database and making it more isolated. The reality was, once we attempted that, it was really difficult to take all of those tangled dependencies and split them apart. We had multifaceted dependencies, not only compile-time dependencies but runtime dependencies, and that was where it was really difficult.
We did invest in some tools to help us analyze and visualize that, and it very evidently showed how complex it was. So we shifted gears to more of a strangler fig pattern, where we wanted everything net new to be built on these golden path standards, using these automated workflows to get your services into production very quickly. And we leaned heavily on Showroom to do that. That allowed us to take the time it took to go from the concept of a service, through getting all of the integrations into our platform ready and the code checked in, to deploying it to the various environments, from about 75 days down to under three days today.
And it can move even faster than that if we wanted, but that allowed us to just increase the speed quite dramatically, which was what we were aiming for.
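For readers unfamiliar with the strangler fig pattern, here’s a minimal sketch of the routing it implies: extracted, golden-path services claim specific paths while everything else continues to hit the monolith. The route table and hostnames are illustrative assumptions, not CarGurus’ actual setup:

```python
# A minimal sketch of strangler fig routing: net-new, golden-path services own
# specific paths while everything else still hits the monolith. The route
# table and hostnames are illustrative assumptions.
ROUTES = {
    "/search": "https://search-svc.internal",    # extracted service
    "/dealers": "https://dealer-svc.internal",   # extracted service
}
MONOLITH = "https://monolith.internal"


def upstream_for(path: str) -> str:
    """Pick a backend for a request path; longest matching prefix wins."""
    for prefix in sorted(ROUTES, key=len, reverse=True):
        if path.startswith(prefix):
            return ROUTES[prefix]
    return MONOLITH  # the default: untouched monolith traffic


print(upstream_for("/search/cars"))   # https://search-svc.internal
print(upstream_for("/checkout"))      # https://monolith.internal
```

As more slices are extracted over time, the route table grows and the monolith’s share of traffic shrinks, which is the essence of the pattern.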
Abi Noda: Looking at the landscape today, Spotify’s Backstage, for example, is extremely popular, with lots of organizations adopting it. You went down a little bit of a different path with a completely homegrown approach. What are your thoughts today, looking back on some of the trade-offs? What have been some of the benefits of going down your own path as opposed to using something a little more off the shelf, like Backstage?
Frank Fodera: So when we first started on the journey, Backstage hadn’t even been released as an open source product. So we really didn’t have a choice, but that was where we started. We started with research: is there something in the industry that can do what we were looking to do? We did look at some tools and we did evaluate them, but in 2019 there was really nothing that would satisfy what we were looking to do, even at that catalog layer, in the way we were looking to do it. When Backstage was released, though, that question did come up: “Hey, there is something now that is doing this.” But we already had our minimum viable product at that time, and it was functioning really well.
So we did continue to evaluate build versus buy, or using something open source and enhancing it from there. What we found was that it would take a pretty close to equal amount of support effort to take something open source and customize it for our needs as to keep the solution we currently had and continue to enhance the frameworks we were using to provide the solution we wanted. One of the biggest benefits of having our own homegrown solution was that we could truly customize every aspect of it that we wanted. And we did a lot of ROI analysis to see what the investment we were putting into this tool was versus the efficiency gains and the cognitive load we were reducing.
And we found with every single one that what we were investing paid itself off incredibly quickly. It did require a lot of discipline to make sure you’re measuring, essentially, a conversion rate of how successful these features are. And when we did that, we found that even manually calculating those was very beneficial. What we’ve heard from feedback is that a lot of companies sometimes use very specialized tools, or tools that are not commonly adopted in the industry. But by having our own IDP, we were able to do a level of integration with those that you typically don’t find in an out-of-the-box solution.
And that was really powerful for us because it meant we didn’t have to change the way that the organization worked and the tools that they were used to. We were able to bring that all together and incorporate it into that single pane of glass and continue to keep that similar experience no matter what, which was really awesome. And then that actually would make it so down the road if we did want to replace those other tools that we were acting as that facade for, it was much easier for us to do that.
Abi Noda: Well, thanks for taking me through the journey with Showroom. I want to shift gears a little bit and talk about something just as exciting that I know you’re working on, which is a company-wide initiative around driving up adoption and impact with AI coding assistants. This is not uncommon right now; I’m talking to a lot of leaders who are trying to conduct these types of initiatives, sometimes from the top down, sometimes from the middle out. But for you guys at CarGurus, how did this come about?
Frank Fodera: So we’ve always been focused on continuing to get more efficient. We want to continue to be ambitious with what we’re trying, and being more efficient with the people we have is going to help us achieve our big ambitions. As you mentioned, a lot of companies today are investing in AI to help with that efficiency gain. What we’re doing is trying out the different options that are out there. Currently, we’re running a bake-off of three different AI assistant tools to see which ones are most suitable for the needs we have at the company. We want developers to try them out and see what’s working, and we’re seeing really promising results thus far.
Developers are able to get things done faster in ways they couldn’t before, which is really great, and we’re trying to be really diligent about the way we’re measuring this too. We know there are a lot of reports out there that say AI will help you move more efficiently, but that it also has other impacts, like potentially increased burnout. So we’re using tools to help us measure a whole bunch of different dimensions, following a lot of the developer experience frameworks that exist today, to help us say: look at these different dimensions. Are we able to move faster from a flow perspective? Are we actually more efficient? What is our time savings? Are we negatively impacting burnout? Are we improving our review speed? Things like that are helping us look at AI not only from an “are we able to move more quickly” perspective, but across a whole bunch of different dimensions where we can really see how it is helping our business move toward the goals that we have.
Abi Noda: I want to double-click into both things you touched on. So first of all, you mentioned doing evaluations of different AI coding assistants. Without naming names, since this landscape is evolving so quickly that any names we mention will probably be joined by more in a couple of months: what are some of the key considerations between the different offerings out there? If you boil it down, what are the trade-offs or differences that leaders should be thinking about when they’re doing these types of bake-offs?
Frank Fodera: Yes, I believe the biggest thing is that we’re using a lot of qualitative data from our developers. A lot of the feedback we’re getting is about whether developers feel they’re able to move more efficiently with a given tool. We are looking at some more quantitative metrics as well, like latency, accuracy, and code quality. But we are primarily relying on the qualitative feedback, collecting it in a more holistic way to say: how much more efficiently are you able to move with these various tools? And what we’re finding is that each tool has a different specialty.
There are coding assistants that work better in certain IDEs, or work better with certain languages or parts of the technology stack. You may have one that works phenomenally with Java, another that works phenomenally with more infrastructure or DevOps-based things, and another that works better with data. By having the ability to really test them out for your space, each individual can say which one works best for the domain they have to operate in. And the qualitative feedback we’re collecting can be very critical in helping us decide the right choice or choices for our company.
Abi Noda: That’s really interesting. Which leads me to another question then. If you’re finding that these different tools are specialized for different types of work and domains, are you envisioning a world in which you standardize on just one of these tools? Or are you imagining a long-term scenario in which there is a collection of different tools used by different areas of the organization? What are you hoping for, and what do you think is going to happen?
Frank Fodera: So what we decided as we were doing this evaluation is that there is no one-size-fits-all. What we’re finding is that it actually makes more sense to have the best AI coding assistant for the domain you’re working in, and to give individuals the choice to pick which one will best suit them. That heavily depends on what type of engineer you are and what type of work you’re doing day to day. And I think having that flexibility, having that choice, can really allow individuals to move as quickly as they can, because they will have the right tool for the job.
Abi Noda: And this goes back to the core platform question of freedom of choice around tooling. What are some potential trade-offs that leaders should be aware of if they go down that path? For example, I think you and I have talked about the ability to measure in a consistent way, the telemetry around these tools. What trade-offs are you seeing for others who go down that route?
Frank Fodera: Yeah, so you mentioned telemetry. What we also saw was that not all of these tools have the level of telemetry that we want. So measuring these tools from both a qualitative and a quantitative perspective gets a little bit difficult when you have a whole bunch of different tools in use. That is one part that is difficult, and we’re seeing that we might face that challenge. There’s also the aspect of maintaining the contracts and working through the relationships with those various vendors. That has some overhead, and it is something we have to consider as a trade-off as well. Generally, though, I believe these tools require a low level of active maintenance from us. They are offerings that vendors provide, and that is actually really positive, so it shouldn’t be too much of a burden from that perspective.
Abi Noda: Double-clicking a little bit more into measurement. We’ve talked about relying on a collection of different types of signals, some of them quantitative, some of them self-reported from developers, like time savings. As you’ve embarked on this journey, what have you found to be useful? What have you found surprising in terms of good signals that maybe you didn’t anticipate going in? And what are some signals or metrics you thought would be useful that have proven not to give you as much signal as you would’ve expected?
Frank Fodera: I think the big one we’re seeing on the quantitative side is flow of work, which is really how fast we’re able to produce pull requests and merge them in. Improving that rate is one of the big quantitative ones. Being more efficient with pull request turnaround and analyzing those is another one we’re looking at pretty heavily. One of the most surprising metrics we looked at was that AI coding assistants seem to be slightly dropping code maintainability. Although, when you dive a little deeper, it doesn’t seem that shocking: if AI is generating a lot of this code, the perception of code maintainability might be lower because you didn’t fully write all that code and you didn’t spend all the time doing it.
You were using a generator for it. That was a little bit shocking at first, but not too surprising as we dove deeper into it. Then, continuing to look at the different dimensions, we lean heavily on what the DORA report says about the dimensions that are most critical. And thus far, even in our early findings, we’re seeing results that mimic a lot of what DORA reports. I think we’re seeing efficiency gains that are even more positive than DORA’s, which is great, but it’s still very early in our journey.
Abi Noda: The second part of this topic is that you’re now focused on expanding and enhancing the impact and success of these tools within the organization, and you’ve even set goals around how much more ROI you want to generate from them. So first of all, what advice do you have for listeners on how to embark on that journey? How have you, for example, engaged with leadership, or gotten leadership involved or not? How have you constructed your plan, and maybe the goals and measurements for how you’re going to know this has actually happened? Maybe talk about the components of how to get started successfully on this type of initiative.
Frank Fodera: I think leadership alignment is definitely critical, and that’s the first place to start. At CarGurus, the leaders were very bought in, so that was not a challenge we had. They very much wanted to leverage the efficiency that AI tools could provide to help us move more quickly. And ultimately, moving more efficiently is the biggest benefit we’re seeing. I think when you present that case to leadership, saying, “Hey, we can make ourselves X percent more efficient,” that is very enticing. Especially if you have ambitious goals and you’re not able to achieve them with the current staffing you have. If you’re pushing for those, getting more efficient is a really great way to help you achieve those ambitions and push into the areas you’re trying to. There are a lot of tools out there that help measure this, and I definitely would say research those.
When we were first evaluating the different options for how to measure our own efficiency, we did the same thing we were doing with the AI coding tools: we had a bake-off and evaluated multiple options. We even evaluated investing in our own measurements. So we had four different options we were looking at: three vendor tools and one homegrown solution enhancing things we had today. And we used a pretty objective way to analyze those and say, “Okay, which one of these can help us measure the different aspects that we want?” Both qualitative and quantitative data, and which ones are going to give all of our engineers transparency into the different efficiencies they have. That is very critical. And using those tools to measure your baseline is where you’ll want to start.
Get that baseline of how quickly you’re able to move today, invest in the right AI coding assistants for your company, one or many, and then continue to measure over time as you’re using them. There’s probably going to be a curve, and we’re still early on, so we haven’t fully seen the result of this, but initially you might see some efficiency gains, and then as individuals get more comfortable with these tools, hopefully that efficiency gain compounds and accelerates even further. And then there are more advanced AI coding assistant techniques, like agents, that can help you take it even further.
Abi Noda: What are the main metrics? We’ve talked about a lot of different ways to measure this stuff, but as it pertains to this initiative in particular, what are some of the main metrics that you’re going to be reassessing X months from now, X quarters from now? That’s my first question. Second question is, what are the main things you’re doing to actually drive this forward? How are you trying to drive adoption up and impact up? What are the things that you can actually control?
Frank Fodera: So some of the metrics we’re going to be looking at over the course of the year are diffs per engineer, really just seeing how quickly we’re able to produce pull requests and merge them in, per engineer, over time. That is one we’re looking at pretty closely. We’re also looking at change failure rate, to make sure that if we’re moving more quickly, we’re not introducing more quality issues. Those are more of a balancing effect that we’re watching pretty closely.
The way we’re actually going to measure this is by looking at six different dimensions. We’re looking at flow, or speed, which is diffs per engineer: how many pull requests are being merged in per engineer. We want to see an uptick in that; that’s going to help us be more efficient. We’re also looking at efficiency, and we’re using DXI for that, which will help us say, okay, how much more efficient are we as we’re using these tools? We’re looking at job satisfaction as a countermeasure, because the DORA report says burnout can actually increase when you’re using these tools. So that is something we want to keep a close eye on. We want to make sure that as we move more quickly, we’re not burning people out.
And then time savings; that’s the biggest ROI metric. We’re using qualitative data to account for this, but we want to make sure that we are more efficient and are saving time with these tools, so we’re going to be collecting that as well. The next is satisfaction. We want to make sure people are happy with the tools they have, so we’re collecting a CSAT score on them and monitoring it over time. As people get more familiar and more comfortable with these tools, do they continue to have positive sentiment? That’s really important. And then, quantitatively, adoption. We want to make sure these tools are adopted, that they’re being leveraged, and that every developer feels empowered to use them. I think that’s really critical.
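For concreteness, here’s a minimal sketch of how two of the quantitative metrics Frank names could be computed from raw records; the field names and sample data are illustrative assumptions, not CarGurus’ actual pipeline:

```python
# A minimal sketch of two quantitative metrics described above, computed from
# simple records. Field names and sample data are illustrative assumptions.
from datetime import date

merged_prs = [
    {"author": "ana", "merged": date(2025, 5, 2)},
    {"author": "ben", "merged": date(2025, 5, 3)},
    {"author": "ana", "merged": date(2025, 5, 7)},
]
deploys = [
    {"id": 1, "caused_incident": False},
    {"id": 2, "caused_incident": True},
    {"id": 3, "caused_incident": False},
    {"id": 4, "caused_incident": False},
]


def diffs_per_engineer(prs: list[dict]) -> float:
    """Merged pull requests divided by distinct authors in the window."""
    engineers = {pr["author"] for pr in prs}
    return len(prs) / len(engineers) if engineers else 0.0


def change_failure_rate(all_deploys: list[dict]) -> float:
    """Share of deploys that caused an incident: the quality countermeasure."""
    if not all_deploys:
        return 0.0
    return sum(d["caused_incident"] for d in all_deploys) / len(all_deploys)


print(diffs_per_engineer(merged_prs))   # 1.5
print(change_failure_rate(deploys))     # 0.25
```

Tracking the first number for speed and the second as its balancing countermeasure mirrors the pairing Frank describes.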
So how are we actually going to invest in this? What are we actually doing? I think empowering every engineer to experiment with these tools is the first step. Then education: we really want to continue to invest in education for these tools. We have a lot of champions who use them very extensively and know tips and tricks they can share. So we’re leveraging them to do tech talks or share quick videos that say, “Here’s how I used AI recently to help me solve a problem that I wasn’t able to solve before.” We’re also doing a lot of targeted outreach, making sure that teams where we see lower adoption feel empowered to use the tools. I think that will help.
I also think having leadership talk about the investment we’re making in AI, and the push for it, can be a huge factor that really turns the tide. Something we did recently was get up in front of our team and say: everybody should try it out. Everybody should be using it day to day. Make it part of your development. And I think if we have leaders across the board really investing in that message, it will help adoption quite a bit.
Abi Noda: I was talking with Brian Houck, a researcher at Microsoft, and he actually shared that organizations where senior leadership vocalizes encouragement around using these tools see 30% higher adoption and utilization. It seems like a no-brainer, but surprisingly, a lot of organizations aren’t actually putting emphasis on getting that kind of communication out. So it sounds like you’ve done that.
I want to ask, where are you seeing more of the challenge? Is it just getting people over the hump to start adopting and using these things? Or are you really at the point where, okay, folks are all dabbling with these tools, but how do we actually help them learn how to use them to get to the next level? Which problem are you dealing with more?
Frank Fodera: I think we’re still pretty early on, in the phase where we’re just trying to increase adoption. We’ve had a coding assistant for a while, and we didn’t really do a great job advertising it. Everybody had access to it and everybody was encouraged to use it, but there wasn’t a big push within the company to have everybody leverage it in their day-to-day. So I think we’re still in the phase where we’re trying to get everybody to jump on that. With our more recent bake-off, we had individuals trying out these new tools, and even in that short trial period they realized, “Oh, I miss this now that it’s gone, and I want it back.” I think that was really pivotal in making us realize, okay, these tools are something people want in their day-to-day.
So the big thing we’re pushing is: try it out, leverage it, and get sticky to it. That’s the current phase we’re in. Ever since we’ve done that, we’ve seen a pretty big uptick in adoption. We’ve had our leader encourage everybody to use this, and our leadership below that encouraging it across the board as well. I think that has made a huge push. That’s what we’re focused on.
Abi Noda: And you shared some of the metrics through which you’re going to be tracking progress on this initiative over time. What are you aiming for? It’s okay if you don’t have specific numbers, but what kind of lift are you and leadership hoping for? Is it 30%? Is it 50%? Is it doubling productivity? What do you think is achievable right now?
Frank Fodera: So I can’t fully share what our target is, but I would say that results are looking pretty promising. I do think that there can be somewhere from 15 to 30% efficiency gains that you can see very quickly. And as you continue to get more advanced, I do think that there is an opportunity to get even further than that.
Abi Noda: That’s exciting to hear from someone who’s in the trenches, not selling an AI coding assistant tool. Well, Frank, I’ve really enjoyed this conversation, diving into both your IDP journey and your focus now on driving adoption and impact with AI coding assistants. Thanks so much for coming on the show today and for your time.
Frank Fodera: Yeah, thank you for having me.
Abi Noda: Awesome.