
Scaling developer experience across 1,000 engineers at Dropbox

Developer productivity is often framed as a tooling initiative or a morale issue. At scale, it’s a more complex socio-technical systems challenge that spans engineering foundations, leadership alignment, organizational structure, and culture. In this episode, Laura Tacho sits down with Uma Namasivayam, Senior Director, Engineering Productivity at Dropbox, to discuss how the company approaches developer experience across an organization of nearly 1,000 engineers. Uma explains why productivity must be treated as a business problem, how executive alignment enables sustained progress, and what it means to run developer experience like a product. The conversation also explores the intersection of AI and developer experience. Uma shares how Dropbox prepared its engineering systems to support AI adoption, why daily AI use depends more on habits than access, and how the company evaluates build-versus-buy decisions as AI tools struggle to scale in large environments. The episode concludes with a candid discussion of the open questions facing engineering leaders today: how to understand where AI-driven capacity actually goes, and how to connect improvements in developer experience to meaningful business outcomes in 2026.

Show notes

Developer productivity is a socio-technical problem

  • Productivity cannot be solved through tooling alone; it spans engineering systems, leadership behavior, organizational structure, and people practices.
  • Problems like build and test are engineering problems, while problems like focus time and interruptions are people problems, and both matter equally.
  • Treating productivity as a system forces tradeoffs to be explicit, rather than hidden inside isolated tooling initiatives.

Executive alignment matters more than any single metric

  • Top-down sponsorship creates permission to act, especially when productivity work cuts across org boundaries.
  • A shared framework creates alignment, not answers; its value is giving leaders and engineers a common language.
  • System metrics matter more than single metrics, because productivity improvements rarely move one dimension in isolation.
  • Distributed accountability makes productivity a company problem, not a developer experience team problem.

Developer experience works best when treated as a product discipline

  • Developers are customers, and their experience must be understood through both qualitative feedback and quantitative signals.
  • Good system metrics do not guarantee good developer experience, which is why sentiment and perception matter.
  • DX surveys surface where systems break differently for different teams, such as desktop, mobile, and web developers.
  • Continuous feedback loops are essential, combining surveys, direct conversations, and usage data.
  • Internal communication is part of the product, reinforcing to developers that their feedback leads to real change.

Prioritization requires structure, not intuition

  • Finite capacity makes prioritization unavoidable, even in large, well-resourced engineering orgs.
  • Segmenting developer populations clarifies tradeoffs, since different teams experience different bottlenecks.
  • DX survey data provides a defensible starting point, but prioritization still requires judgment.
  • Leadership-level stack ranking helps resolve conflicts, especially when multiple teams compete for attention.
  • Frameworks make hard decisions easier to explain, even when they do not make them easy.

AI and developer experience must advance in parallel

  • AI accelerates work, while developer experience reduces friction, and both are required for sustained gains.
  • Foundational systems act as plumbing, enabling trust in speed, quality, and safety.
  • Without strong CI, testing, and observability, faster code creation increases risk instead of value.
  • Trust in guardrails enables confidence in AI-assisted development, especially at scale.

AI adoption succeeds through choice, not mandates

  • Early organic adoption revealed real developer needs, rather than forcing a single tool.
  • Different teams require different AI tools, particularly for mobile, desktop, and large-repo workflows.
  • Supporting multiple tools increased adoption, rather than reducing it.
  • Daily use depends on fitting AI into existing workflows, not adding extra steps.
  • Habits matter more than access, which is why SDLC-level integration is critical.

Build vs. buy decisions change at scale

  • Many AI tools fail when tested at large-company scale, despite working well in smaller contexts.
  • Cost and performance become gating factors, not feature completeness.
  • Internal platforms can abstract complexity, enabling teams to build AI workflows safely and consistently.
  • Shared internal platforms unlock reuse, allowing teams to innovate without rebuilding infrastructure.
  • Speed of iteration remains the primary differentiator, even when building in-house.

Timestamps

(00:00) Intro

(00:45) Dropbox’s engineering org

(01:59) Why developer productivity is a business problem

(04:08) The role of executive sponsorship in developer productivity

(06:02) How DX’s Core Four framework created a shared language

(08:13) Treating developer experience as a product

(11:30) How Dropbox prioritizes developer experience work

(14:20) The challenge of tying developer experience to business outcomes

(16:38) How AI and developer experience intersect at Dropbox

(18:35) The prerequisites for AI adoption to accelerate work

(20:26) How Dropbox encourages daily AI use

(23:12) AI use beyond code completion

(25:00) Managing AI tool demand at scale

(27:56) Early results from Dropbox’s AI efforts

(30:05) Progress on developer experience at Dropbox

(32:55) Advice for organizations investing in developer experience

(34:25) Capacity tradeoffs for developer experience

(35:59) The unanswered questions around AI and capacity in 2026


Transcript

Uma Namasivayam (00:00):

Working with our chief people officer and the people team, we had to actually literally think about how do we restructure meeting times? How do we actually think about giving blocks of time to our employees? That is a very different problem. It’s not an engineering problem. So that’s why having a common language and also bringing the people together and the leadership team was very, very important, because we had to attack productivity from multiple different angles.

Laura Tacho (00:19):

Welcome to this week’s episode of the Engineering Enablement podcast. I’m your host, Laura Tacho, and this week I’m joined by Uma Namasivayam, a senior director of developer experience at Dropbox, where we’re gonna talk about developer experience, AI, how they intersect, and what really great executive and organizational support for developer experience initiatives looks like. Welcome to the show.

Welcome, Uma. Thanks for joining me. Let’s set the stage a little bit and talk about Dropbox as a company. Dropbox is a really engineering-first kind of company. You’ve got engineering deep in your DNA. Can you tell me a little bit more about the size and scope of your engineering org, your role, you know, what you’re setting out to do?

Uma Namasivayam (01:01):

Thanks for having me, Laura. So, Dropbox, as you said, has engineering in its DNA. We have close to a thousand engineers in the company, and we ship products in the file sync and share space. That was the bread and butter for Dropbox. Now we are also in the world of Dash, which is focused on AI enterprise search and assistant. So in terms of productivity specifically, what my team does is look at engineering productivity across the company. That includes the core organization that ships the core products and the Dash organization that ships the AI products. So my team looks into the CI/CD systems, the telemetry systems, and also, overarching all of that, the AI rollout within the company for all the engineers. So the way you should think about it is that my engineering team and the product management team look into the overall experience of the developers and how we can actually improve their productivity through the use of AI and also the overall day-to-day experience of how we are shipping code to our customers.

Laura Tacho (01:59):

For a company with so much engineering focus, it’s not, you know, surprising that there’s a lot of engineering muscle behind productivity and platform initiatives, but you’re thinking about this not just as a technology problem, right? You’re thinking about it as a business problem. Can you talk a little bit more about where that mindset came from?

Uma Namasivayam (02:18):

Yeah, that’s a great question, and I think that is, I would say, a big differentiator in terms of how we are approaching productivity within our company here. So I think of the productivity problem at Dropbox, or anywhere actually, as more of a socio-technical problem. Yes, there have to be very strong investments in the technology itself to improve the foundations, to improve the reliability, to improve the speed of the system and whatnot. But then there is also the element of collaboration, the element of working with the leadership team, working with developers themselves, and also working with the people team. It’s all about bringing the people together too, so we can solve for how we can improve productivity. There is also an aspect of improving the overall culture of the organization as well.

Like, why does productivity actually matter for the company, right? How do we think about business impact? For me, when I think about productivity as a concept, it started in the 1950s with the Toyota Production System, right? This is not something unique to software, but when it comes to applying it to the world of software, when we connect it to the arc of how it actually helps our customers, bringing that mindset into the picture clarifies a lot of things for our developers and for our cross-functional partners also. So ultimately, when I think about productivity, it is about how we bring value to our customers in the highest quality and the best possible way. That framing helped not only our engineering team, but also the leadership, to rally behind this whole construct of productivity within Dropbox.

Laura Tacho (03:51):

And executive leadership is something that’s so important when you’re operating at the scale you have, you know, a thousand developers. Tying productivity back to business outcomes is crucial for getting, you know, the story straight from start to finish about how this benefits the business. It’s not about ping pong and beer. That’s often what we talk about as the kind of marketing problem of DevEx. Can you talk a little bit about how strong executive sponsorship accelerated and enabled a lot of success when it comes to DevEx and productivity?

Uma Namasivayam (04:19):

So I think one of the things that actually helps us, and I’ll be very candid here, is that Drew’s also an engineer, as you know, as he’s said in a lot of interviews, and I think we all know that actually helped us a lot in terms of setting the context. So even when thinking about the AI rollout, he’s a very big fan of using AI coding tools and whatnot. His thought process was, hey, if AI is helping me do my job much better, why would it not help my developers do their jobs better? So that framing really, really helped. But where leadership alignment was even more important for us was when we approached this problem from the perspective of, okay, it’s not a single metric that we are trying to move, it’s actually a system of metrics that we’re trying to move.

Like, a good example is the DX Core 4 metrics that DX offers, which were very useful for us because we had benchmarks. We were not focused on a single metric, but on multiple dimensions. So when we brought this to the leadership team, it resonated with them very, very well: hey, we are not just focused on one single metric, but on a framework of metrics. And that also helped drive the alignment from the top down, making sure that we are actually focused on the business problem itself. So that helped us go towards the world of how we can improve productivity as a system, looking at multiple aspects of things, right, not only system metrics, but how the developer experience improves and what impact that brings to the table. So it was a very cohesive way of bringing things together, and that top-down mandate helped us a lot in terms of rallying the team here. Another point that also helped us was, since we had leadership alignment across the board, it was not a single-team problem. Yes, I own developer productivity within Dropbox, but ultimately it’s a shared problem. Every developer, every manager, every leader within the company has to be thinking about productivity, and making sure this becomes a central accountability for the company, I would say, was the biggest win that we had with respect to leadership alignment.

Laura Tacho (06:19):

One thing you shared in preparing for this conversation was that adopting that common language, so in this case the Core 4 framework, and having alignment on how everyone from your leadership to your engineers is thinking about productivity, didn’t give you answers, it gave you alignment, so that everyone is directing effort in the same direction and sees it as a business problem and a shared responsibility, not just something that’s about developer sentiment only, but really a crucial operational and business performance aspect for Dropbox.

Uma Namasivayam (06:54):

Yeah, totally. A hundred percent agree with you. One of the examples I always think about, when I think about productivity and why this is actually a common thread that connects all the teams, is that developer experience has multiple dimensions. I’ll take the example of build and test: the things that we do from a technical standpoint, how we are building software, that is actually a very strong technical problem that we can solve. And that requires talking to developers about what pain points they’re facing, you know, how we standardize some of the tools that we have. So that is a very distinct engineering problem that we can solve. But then there is another dimension when we talk about productivity: as a developer, am I actually focusing on the right things for the company?

Do I have focus time? Can I actually code uninterrupted, with no interruptions? That is a very different people problem. So working with our chief people officer and the people team, we had to actually literally think about how do we restructure meeting times? How do we actually think about giving blocks of time to our employees? That is a very different problem. It’s not an engineering problem. So that’s why having a common language and also bringing the people together and the leadership team was very, very important, because we had to attack productivity from multiple different angles. And that’s why I keep hammering the point that this is a socio-technical problem, not just a technical problem.

Laura Tacho (08:09):

That collaboration is really key. I think one other thing that stands out to me is that you’re really treating developer experience and platform engineering as a product problem where your developers are your customers. Can you talk a little bit about that and, you know, how that mindset changes the way that you approach developer experience in general?

Uma Namasivayam (08:28):

I would say that was actually one of the biggest legs of improvement that we had in making sure productivity was able to turn around in the company. So just to give you some context, right? Dropbox has been built with bespoke tools in the past. We always wanted to move fast, we had a different set of tools in the past, and there was not a lot of standardization in place. So there was an effort ongoing for quite some time in terms of how we standardize the tools, how we make sure they’re reliable and actually moving at full speed. That foundation work was happening, which was great. But then when the problem was looked at, let’s say in late 2024, it was, okay, we feel like the systems are actually doing well from a system metrics standpoint, but developers still felt like it’s very, very hard to ship code within Dropbox.

It could be a perception, it could be a problem that they’re facing, but you’re not able to connect it to the system metrics that you’re looking at, right? For example, TTL, time to land, and attempts to land were actually really, really good, but developers were still feeling the pain. So through that lens, when you think about why this problem is happening, the biggest shift that we had was: how do we think about this problem with a product mindset? Okay, my product is great, so why is it actually causing a problem for my developers, who are our customers? Connecting that arc required a mindset shift in terms of how we think about the problem from a product perspective. So we literally started thinking about getting the DX survey, which was very useful for analyzing what developers are actually saying about different parts of the system, what pain points they’re facing.

It gave us a good way of thinking about what to actually focus on, and nothing beats talking to our developers, right? The PMs on my team talk to the developers, the tech leads on my team talk to the developers, they understand what their pain points are and where in the system things are actually breaking. Having that feedback loop and understanding different parts of the system and different problems matters. A good example: our desktop developers had a very specific problem in build and test, and the web developers had a problem in production debugging. Those are very different problems that we need to solve for, and having that kind of mindshare, and also making sure that leaders in the team are investing in those problems, was helping us connect the whole loop here. And that’s why I think we were able to tackle the problem and change the perception around productivity.

The second piece of the puzzle, which people may not consider and which I think is very important, is the internal communication piece, right? You have a product, you talk to the developers, but then what does it mean in terms of selling the product back to the developers, sharing what you are solving at their level? That’s also where the PMs came in as a big support, sharing out the story not only with leadership, but also with the developers, saying, hey, here are the updates that we’re making to our product stack, here are the things that are changing in your developer stack based on the feedback that we heard from you. So that reinforces the learning between the developers and the system, which actually gave us the edge in terms of moving a lot of metrics that we thought were useful for productivity.

Laura Tacho (11:27):

And I imagine, just as you are prioritizing features and new things for your customer end users, you’re looking at all of this data and prioritizing the parts of your own internal product, your developer experience, that you’re gonna work on for your internal customers, your developers. Can you give us a glimpse into how you make those hard choices? Because even a company with as many resources as Dropbox still needs to make a tough call about doing something first and something second. You know, in that example about the desktop folks struggling with build and test, but web folks struggling with production debugging, how do you decide what happens first and in which sequence these problems get fixed?

Uma Namasivayam (12:07):

That’s a great question. I think we are actually learning as we go. I mean, there is no great answer when it comes to prioritization, as you know, Laura, but at least a couple of things helped us move in the right direction. One I was mentioning earlier, right? The leadership alignment piece was very, very useful. And as part of that, what we also did was make this a distributed accountability for all the engineering VPs, the Core VP, the Dash VP, the CTO organization. They would all have to invest X percentage of their resources in productivity. So that’s the top-level construct that we were able to align on. Now, once that capacity is there, the question of prioritization is what you’re asking, right? So we definitely relied a lot on the DX survey metrics in terms of how we prioritize: what are the top five pain points that we have within the company?

And then going into another level of segmentation, right? Like when I talked about desktop versus mobile versus web, the different developer populations. Once you have a clear matrix like that and you know what capacity you’re going after, then it comes down to roadmap planning, and it comes down to saying, hey, all the teams, provide your bottoms-up work in terms of how we can fix these problems that we have with respect to the DX survey. And then it comes down to stack ranking with the VPs in the room, saying these are the problems that we are going after, these are the things the developers are saying, here’s how much capacity we have in the different teams, here’s how much segmentation we have with the developers. Now let’s put it to a vote and decide how we can actually fix developer productivity. So that’s how our prioritization scheme went. And like you said, it’s not rocket science, but at the same time having some framework to think and rationalize with made these conversations much easier. And there is always gonna be a large backlog that you can go after, but this framework actually helps you to prioritize really well.

Laura Tacho (13:54):

I think that’s reassuring for some of our listeners to hear, because it’s very easy to look at the great work that you and your teams have done and think, oh, that’s inaccessible to us, we can’t do that, they don’t have the same problems that we do. But truth be told, you also still have prioritization problems. There’s still finite resources. You still have to make tough calls about what to do first and what not to do at all. As you said, there’s a lot of things that we’re learning as we go. You mentioned something earlier about, you know, strong executive sponsorship and how that means you’ve got visibility on these DevEx pieces from the top down and from the bottom up, and you’re able to align your DevEx goals to the company goals at scale. Can you talk a little bit about how that alignment works? Like, how are you connecting these DevEx projects back to business value? Where are you on that journey? Have you cracked the code of doing that really well?

Uma Namasivayam (14:47):

I think this is something that keeps me up at night, and my teams also, in terms of how we connect back to the business impact. So when I think about the arc, the way I think about it is: in 2024, it was about how do we actually fix engineering, when we had the foundational issues, and that was a different set of metrics. In 2025, it was about what framework we are using, right, and we used the DX Core 4 framework and were able to move metrics along multiple lines; the speed line moved very well, quality was there, impact was there. We are very close to hitting the benchmarks on our DXI metrics also, which is the developer experience side. But connecting back to the business impact, that is a code that is, in my opinion, very hard to crack at this point.

The reason I say that is it requires a lot of input from not only the developer experience metrics, but also what products we’re trying to ship. So the journey of connecting these developer improvements, plus how much they help you reduce the time to ship features, back to the business impact, that is a code that has not been cracked yet, in my opinion. We are trying to connect all the data; we are working towards how we can show that impact much better. Having said that, for 2025, what was useful for us in terms of channeling the crew and the leadership towards the business impact was that moving DXI, we felt, would also help save the hours that engineers are spending, so they can put them towards writing code and shipping products much faster. We were able to see evidence of that in multiple areas, not only in the experience but also through the adoption. So that was the metric we were using in 2025 to connect back to the business impact, in terms of how we can ship features faster to our customers.

Laura Tacho (16:32):

Great segue, because I do wanna talk about AI. Dropbox is a, you know, developer-DNA kind of company, and you’ve been working on and emphasizing the importance of developer experience for a lot longer than AI coding assistants have been on the market. But when they arrived, they arrived in a big way. Very high level, can you just give us a little bit of an introduction to how developer experience and AI coexist in your world?

Uma Namasivayam (16:59):

And I think this is definitely a hot topic for any engineering leader out there. So back in, I think, late 2024, early 2025, we started on the DXI journey, and that definitely was a developer experience journey. AI was coming in at the time, and coding assistants were actually taking off big time, right? I think at that time roughly one third of Dropbox engineers were using AI in a very, very organic way. People were going after the coding assistant they liked. But when I think about when the takeoff happened with respect to AI, it almost became two different work streams, in my opinion. There is this whole question of how do you reduce the friction in the existing system, which is the DX side of the journey, which is one.

The other is the AI side: how do we accelerate? It almost feels like you are rebuilding your car as you’re accelerating, right? That’s how I view it. It’s almost like two different journeys running in parallel. It cannot be sequential, as we all know, but there is a lot of overlap in the middle also. So I would also characterize DX and AI in a different way, right? DX is all about, like we discussed earlier, defining what the problem is, aligning with leadership, and then slowly making those incremental changes as you standardize the system to reduce friction. AI is all about speed. How do I actually get the best tools possible to our developers? How do we experiment faster? How do we iterate faster? So we are always on the cutting edge of the technology. These two are parallel work streams, but they come together at a lot of different touchpoints. That’s how I see those two work streams coming along at Dropbox.

Laura Tacho (18:32):

I wonder, from your point of view, what were some of those core developer experience building blocks that needed to be in place as a prerequisite for AI to take root and to accelerate? Because, as you said, they’re not totally separate, there’s a lot of crossing of streams, but in a lot of ways the DX practices help AI accelerate. Can you think of a couple of examples of, from your point of view, what those prerequisites are?

Uma Namasivayam (19:00):

Yeah, so I’d like to share one of Drew’s quotes. Drew would say that if your plumbing is not good, there’s no point in beautifying your house, right? I think of developer experience as the plumbing: everything stops if it doesn’t work, right? So that’s pretty much the analogy I would use here. Getting back to your question, I think the fundamental pieces, you know, build and test, your telemetry system, your production systems, these all have to be in a really, really good spot before, in my opinion, AI can actually have a lot of impact. The reason I say that is, with the speed at which the AI coding assistants are going, the speed at which developers are using AI to push code, they need to have trust in the system: hey, if I push code, it is actually tested properly, the quality guardrails are met, the compliance angles are met, the security scans are met, and then when it goes to production, leadership plus engineers have the confidence that the code I’m actually shipping will land and have no issues in production.

To build that level of trust, the foundational systems have to be very, very strong. So I would anchor on those foundational systems being super strong in terms of reliability and in terms of quality before we actually go really, really hard on AI. That would be my take on that.

Laura Tacho (20:25):

And you have a pretty experimental mindset. It’s not about perfection, right? It’s about experimenting faster. Most of your developers are using AI daily or weekly, and you’re really, you know, focusing in on encouraging daily use. What are some of the things that you’re doing to encourage daily use and sustain daily use, so that it becomes a new way of working and not just a temporary state of being that’s gonna, you know, fall out of fashion?

Uma Namasivayam (20:55):

So let me step back to how the journey of AI happened, which can kind of explain how we can make it more sustainable and sticky, in a way, right? So in early 2025, like I said earlier, one third of our developers were using AI in an organic way, and then there was a huge uptake in the coding assistants, as we all know. And I think we had strong leadership, our exec team alignment, saying, hey, we need to be taking AI as a very serious company priority. So that kind of fed into the initial stages of increasing AI adoption. I would say about three fourths of people started to use it on a weekly basis within three months or so because of the top-down help that came from the exec team.

But beyond that, it goes back to the product mindset also, right? You have to look at what developers are actually looking for in the different coding assistants. Today we have three or four different coding assistants available to our developers. We are not a single-coding-assistant shop going after, hey, this is the only thing you have. Different teams have different needs. For example, the mobile team, they cannot use some of the existing AI coding tools that are out there, so we had to find something very specific for their use cases. That helps add to the adoption metric that you have, right? So looking at these pockets of areas, what’s working for developers, what’s not working for them, and providing those kinds of tools definitely helped us fix the long tail.

And we were able to get to the point where almost everybody is using AI tools on a regular basis, right? But now how do we take it to the next level? Again, we have to go into the habits of our developers, right? What are the areas of the SDLC that are actually helping them or blocking them from using AI? Are the tests not being written properly with AI? What are some of the things that we can do as a central team to enable them? So looking at the finer details of the SDLC also helps you make it more and more sticky. The second aspect of it is that the PM team works very, very closely with the vendors, making sure that any updates the developers need are communicated to our vendors and incorporated. That type of feedback loop also helps you make the product more sticky and helps us get the maximum bang for the buck with respect to the quality of the tools.

Laura Tacho (23:11):

Do you have any examples you can share of how your teams are using AI outside of just code completion?

Uma Namasivayam (23:18):

So we started with coding tools as the first one, because that seems to be the easiest one, like everybody’s doing, right? Once we achieved whatever we could achieve there in terms of adoption, then naturally we needed to start looking into the other parts of the SDLC, right? So the next aspect of it is that we started looking into the review cycle, the testing aspect of it, the debugging aspect of it. Looking at every single part of the SDLC is actually going to be critical, and we started doing that. One thing that started resonating with us as we went into the POCs was that there was an array of tools that we started testing out, right? A bunch of tools across different lenses, but a bunch of them did not work at Dropbox scale.

It is very easy for us to get carried away thinking this tool will work. But as we started doing POCs, we realized some of them would not work. A good example that comes to my mind is on the review side of the world, where we were using one of those well-known tools and it just did not work for our scale. So we then decided to think about how we start building those kinds of tools in-house. We now have an in-house effort to build those AI tools for specific use cases in the SDLC. So in terms of AI, build-versus-buy decisions are also very, very critical. I don’t know where your own company is specifically, but at Dropbox, scale was a big issue, and as a result, cost was also a big issue. So we need to think about what will actually help you move the needle faster and how we can unblock ourselves in terms of scale. That was the framework that we were using. And again, like you said earlier, speed is going to be the biggest differentiator when it comes down to AI tool adoption. So having that lens when you think about build versus buy helps you move really fast on the different cases.

Laura Tacho (24:58):

How do you keep up with, you know, a thousand developers and this ecosystem that is maturing at a really rapid pace, with all these tools coming out? How do you keep up with the demand for tools? I imagine that there are lots of developers coming and saying, hey, we wanna use this tool, we wanna use this tool. How do you manage that at scale while still encouraging experimentation, but doing it in a safe way?

Uma Namasivayam (25:25):

Great question. I wouldn’t say it’s happening naturally, but there is actually a method to the madness here. Like I said earlier, the PMs on the team have a very tight pulse on what’s happening in the market in terms of what tools are out there. We also do a lot of industry research in terms of what tools are maturing, what tools are actually doing really well, and we talk to our peers about what AI tools they’re using. So we have that kind of intelligence happening on the external side. The second one is that we also leverage a lot of developer feedback. We have Slack channels, we have the surveys, you name it.

Any input that we get from the developers on what they are seeing on the ground counts. We have a very strong triage process on what software they’re looking at and want to use for their work. So the industry side, the developer side, and obviously leadership also, they have their own inputs, and we bring it all together. Separately, we also worked with our own procurement and the legal and security teams to reduce the time it takes for us to experiment, specifically for AI tools, and that also helped a lot. I would say the biggest innovation in this one was reducing the whole review process down to three days. That was the biggest one in terms of moving really fast.

Once you have all these inputs in play, we also have a framework on the SDLC, looking at which portion of the SDLC has the biggest impact in terms of engineer hours saved, in terms of the number of PRs getting pushed through. We can look at the impact scores that we have and prioritize accordingly. And the last one is that my role is also to manage expectations with leadership, right? You cannot have all the tools at the same time; you need to prioritize. But again, having this framework, and talking through this framework and the inputs that we have, makes the conversation much easier with leadership and also with developers.

Laura Tacho (27:29):

I feel like engineering leaders are caught in a cross-stream of really high expectations from leadership, 'cause they hear, you know, the hype cycle around these tools and what company X is doing. And then developers are also hearing about and, you know, wanting to experiment with these things, and you’re sort of caught in the crossfire of needing to manage expectations both up and down and make sure that everyone’s needs are met, which is a really, really challenging place to be in, especially without data that you can back up and say, okay, well, this is the actual result for our company, and that’s why we can’t get this result that is shared in this LinkedIn, you know, article. That’s not plausible given our current environment.

Uma Namasivayam (28:08):

Absolutely.

Laura Tacho (28:09):

Talking maybe just a little bit about outcomes and some of the results of this great work. Let’s maybe start with the AI space. We already talked about tremendous adoption, really sticky adoption, now weaving it into daily work, weaving it into other parts of the SDLC, so it’s not just about coding. Do you have an example to share with us of something that was really innovative or just really, really impressive that came out of your engineering org, or that you saw someone build with AI, that you thought was just super cool and wanna share with the audience?

Uma Namasivayam (28:43):

There are a bunch of examples that come to my mind. One of them, I’m very proud of one of the teams that is building it, it’s actually my team. They’re building a very specific coding platform, I would say, that takes into account all the third-party systems that we have, but then also works very specifically for Dropbox use cases. When I talk about scale, one of the things I mentioned was that Dropbox is also a monorepo shop where the tree size is very large. We all know that some of the coding assistants that we have may not actually work at that monorepo scale. So one of the developers that we have, he took it on himself to figure out what kind of platform we could build, using Claude, using Claude Code, a platform that can actually work within Dropbox and that others can build on top of. So let’s say I have a use case of building a review product using AI: if I start using this platform, it takes care of all the backend problems that you have, right?

Your deployment is taken care of, you know, the scale of the monorepo is taken care of, your testing is taken care of. I’m very, very proud of that platform in terms of its adoption within our own company, and that’s something we are all looking forward to, having organic adoption build on top of that. So that’s what I’m looking forward to in 2026.

Laura Tacho (30:03):

I think that’s such a nice example to bring us back to the original point that we started on, which was developer experience, because it ties together how AI and building tools with AI do accelerate developer experience. But thinking about the DXI, so the measurement of developer experience in the Core 4 framework, you all have made tremendous progress on that in the last year. Can you talk a little bit about that as well?

Uma Namasivayam (30:29):

I think 2025 was a very strong year for DXI, and to be very candid, we haven’t reached our benchmarks yet, but at the same time we’ve made very strong progress. So some context on DXI: for folks that don’t know, DXI is kind of a sentiment metric, but it is actually very useful in terms of looking at multiple dimensions of the developer experience, and that is very, very powerful. Getting back to the initial discussion around leadership alignment, once we knew DX is very important and we made that a company priority, there was a lot of skepticism within engineering as well as leadership: hey, we are going after a sentiment metric, how can we move this metric? It hasn’t been done in the past. We’ve had DX surveys in the past; it’s not very easy to move.

And anecdotally also, like I was mentioning, in 2024 the system metrics on the foundational pieces were improving, but the sentiment was not moving. In fact, it was actually dropping in a lot of use cases, right? So there was a lot of skepticism around that within engineering, and we had to solve for it. So we actually went back to DX and asked, hey, have you had any companies that were able to move these metrics? And they were very helpful in terms of showing industry benchmarks, what is possible, what’s not possible, and then we took a bet on ourselves: if somebody else is able to do it, we should set an ambitious target and we have to go after that. Even convincing our leadership team was a matter of saying, hey, here is where the benchmark is, here is where we are today, and everybody wants to make it happen in a single year, right?

So we had to show them that it requires a very different approach for different problems, right? For example, like I was telling you, build and test is a technical problem, deep work is a people problem, documentation is an engineering-wide problem. It’s a very different problem space. So making them appreciate the fact that it is not just one metric, that it has a lot of things that go under it, and making sure they understood it’s a multi-year journey, helped us a lot. And I’m proud to say that we exceeded the targets that we set for 2025, and that’s truly because of very, very focused efforts on individual pain points, working with the developers, and having a very strong prioritization scheme. We were able to move the metric, we exceeded it for 2025, and we are on the journey towards the benchmarks in 2026. So it’s been a very remarkable journey, and all the skeptics that were there in 2025, they’re like, okay, this is possible, and we are all marching towards what we can be doing in 2026. So it’s a very, very great story for us.

Laura Tacho (32:58):

I like how you went from internal skepticism and looking out for an example in the industry to being the example in the industry. And I’m sure there are lots of folks listening to this podcast right now that have listened to your story and to the great work your teams have done and thought, wow, that’s really aspirational, I would like to do that as well. What are some little nuggets of advice that you would have for those people listening who wanna get to the same kind of results and maturity that Dropbox has? You know, what are the things that they should definitely keep top of mind?

Uma Namasivayam (33:28):

Looking back at the journey, I would again frame this problem as a socio-technical problem. You need to have a very strong product mindset in terms of how we actually move developer productivity within the company. That means you work with your developers very closely, you understand their pain points, you set your strategies accordingly, you prioritize your roadmap based on all of that. So having that goal, and then making sure all your plans are aligned towards that goal, is extremely critical. That’s one piece of advice that I would give. The second one is that we should not underestimate the importance of leadership alignment and culture, in my opinion. If that is there, and then you have a framework, things will automatically happen. So I would highly, highly encourage folks to think about what is the best way you can get your leadership aligned on a specific framework, whatever it is, and then go after that. And also think about culture at the very early stages. Don’t wait for, you know, hey, let me get my stats to a certain level and then think about culture. It needs to go hand in hand. And talking to your developers, talking to your communication partners, looking at what’s out there in the industry, it all has to come together. That would be my piece of advice, and then things will automatically happen.

Laura Tacho (34:37):

And I think, listening to this, you know, we’re talking about this for a half hour, so it is kind of a highlight reel. We don’t wanna brush over the fact that a lot of this is a multi-year investment. There are a lot of hard strategic choices. You also had to make some really important capacity choices as well in order to facilitate this amount of work. Maybe you can just speak about that, the reality on the ground of what allowed you to achieve all of this.

Uma Namasivayam (35:02):

Yeah, so to the point we made earlier about capacity: all of these projects, when you think about it, I know dev productivity is critical, but we also have to ship products for revenue. So that organizational call is going to be hard. If dev productivity is important, it is a multi-year journey, there is no question about it, not only from the fact that we need to make the systems stronger and also invest in those systems to make sure they are all standardized. It takes time. It’s a pure challenge of prioritization as well. Even today, we always have very healthy debates between the product managers, the broader team, and our team in terms of why is productivity important, what is the ROI we can get. Having those challenging conversations over and over again is important. It’s a healthy debate, but ultimately it’s up to the engineering leader or the product leader who is driving it to keep the mandate moving. So it is important for the company, and it’s a multi-year journey. That’s where I think the frameworks, the prioritization, and the capacity management come into play, and ultimately it is a struggle, but it is possible. That’s all I can say from my own experience at Dropbox.

Laura Tacho (36:12):

One last question for you before we wrap up our conversation. I would love to know what you’re most excited about when it comes to developer experience and AI, looking ahead to 2026.

Uma Namasivayam (36:23):

Great question. So 2025 is behind us, and it was a strong year, but as you know, in 2026 the question that everybody’s asking across the industry right now is: okay, most companies are adopting AI in different forms and fashions, coding assistants are awesome, I get all this capacity. Where is it going? That question comes up from our CFO, from engineering leaders across the world, right? So how do you answer this question? It’s not an easy question to answer, to be honest, because anecdotally, from what we have seen, and we have done some initial analysis that I’m happy to share here, yes, we have seen a capacity uplift because of AI, and naturally it’s going towards the migrations, the tech debt reduction. And these are automatically happening. This is what is very cool about engineering, actually.

You give engineers the capacity and it automatically flows into the right things for the problem, and we are seeing that. Now, how do we systematically think about it? If I am an engineering leader and I have this additional capacity, there is a world where I can focus on tech debt, but there is also a world where I can shift it towards product development, right? So how do we actually make those tradeoffs? It’s not very clear; the instrumentation and telemetry is not there yet. One approach that we are taking is trying to connect all these productivity improvements to the roadmap, to the product roadmap also, and it’s not a one-to-one relationship. So for the listeners out there, if there is a better way to solve for it, if you have actually cracked the code, we would love to talk to you. It’s not an easy one to fix. But that’s what I’m looking forward to in 2026: how we crack the code on the improvements that we’re getting from developer productivity, how we tie it back to the business outcomes, to the real business outcomes, in terms of time to ship features, and also, you know, where the additional capacity is going. How do we answer that question? That’s what I’m focused on in 2026.

Laura Tacho (38:13):

Yep, me too. I think a lot of engineering leaders are in the same boat. Uma, thank you so much for sharing your story, the approach that you’ve taken at Dropbox, your successes. I’m looking forward to seeing what you all accomplish in 2026. I think it’ll be a beacon for the rest of the industry. If you’re listening to this and think, wow, this is really great work, it definitely is, and it’s a lot of hard work, and I think Uma shared so many really great strategies for the audience to take away. So thank you so much.

Uma Namasivayam (38:40):

Thanks, Laura, thank you for having me on the podcast. I think productivity is gonna be extremely critical in the coming years, and I’m happy to share the nuggets that we learned at Dropbox. And I’d also say that DX has been a great partner. We’ve been able to get a lot of leverage from the DX Core 4 framework, so that has been helpful for us to align with the leadership. Thank you so much for all the partnership and collaboration.

Laura Tacho (39:03):

Thank you.