Developer experience at scale – lessons from Dropbox
How Dropbox treats developer productivity as a sociotechnical problem and weaves AI into the fabric of its engineering culture.
Welcome to the latest issue of Engineering Enablement, a weekly newsletter sharing research and perspectives on developer productivity.
🗓 Next month, you can join me for a live Q&A session. I’ll address some of the more pressing questions we’ve received around measuring AI impact, how tool choice matters, and more. Register here.
This week on the Engineering Enablement podcast, we were joined by Uma Namasivayam, Senior Director of Engineering Productivity at Dropbox. Uma’s team owns engineering productivity across Dropbox’s roughly 1,000 engineers: everything from CI/CD systems and telemetry infrastructure to the company’s AI tooling rollout.
During the conversation, Uma shared specifics on how Dropbox drove AI adoption from one-third of engineers to three-quarters, why they chose to deploy multiple AI coding tools rather than standardizing on one from the start, and how they ended up building their own internal AI platform.
Below is a lightly edited excerpt from this conversation. You can listen to the full episode here.
A lot of engineering orgs treat developer productivity as an engineering problem. You have a different take on that.
Uma: I think of productivity at Dropbox—or anywhere, really—as a sociotechnical problem. There absolutely has to be a strong investment in the technology itself: improving the reliability of your systems, improving speed, and reducing friction in your tooling. But then there’s the element of collaboration, of culture, of working with leadership, with developers themselves, and with the people team.
A concrete example: one of the dimensions of developer productivity is deep work—can developers actually code uninterrupted? That’s not necessarily an engineering problem. Working with our chief people officer, we had to literally think about how to restructure meeting times, how to carve out focus blocks for employees. That required a completely different set of partners than fixing a slow CI pipeline. That’s why having a common language across all of those groups, and bringing them together around that language, was so important. You have to attack productivity from multiple different angles.
How did the rollout of AI coding tools intersect with the DevEx work that was already underway?
Uma: I think of them as two parallel work streams that have a lot of overlap in the middle. DevEx is about incremental, systematic friction reduction. It requires aligning with leadership, defining the problems clearly, and making steady progress. AI is about speed. It’s about getting the best tools into developers’ hands quickly, experimenting fast, and staying on the cutting edge.
Before AI can really deliver on its promise, the foundational systems—build and test, telemetry, production observability—have to be in a strong place. Developers need to trust that when they push code through an AI-assisted workflow, the quality guardrails are actually there. Without that trust, you can’t go really hard on AI.
How did you actually get adoption moving? What drove the initial uptake?
Uma: Early on, roughly one-third of our engineers were using AI tools organically—people were just finding tools they liked on their own. That’s a decent starting point, but it’s not a strategy. The real inflection point came when our exec team made AI a clear company priority. Within about three months of that top-down signal, we got to around three-quarters of engineers using tools on a weekly basis.
With that said, top-down mandates only take you so far. After that initial push, we had to go deeper: looking at what was actually blocking adoption in different parts of the engineering population, and addressing those pockets specifically. That’s where a product mindset comes in—understanding your customers’ actual pain points rather than assuming one solution works for everyone.
You offer developers multiple AI coding tools rather than standardizing on one. What’s the thinking there?
Uma: It comes back to treating this like a product problem. Different teams have genuinely different needs. Our mobile developers, for example, couldn’t use the tools that work well for other parts of the codebase. We had to find something specific to their use case. So we deliberately chose not to be a single-tool shop.
We also learned quickly that some tools that work at a smaller scale fall apart at Dropbox’s scale. We were piloting an AI code review tool and it just didn’t hold up. That pushed us toward building some of this in-house. We now have an internal platform that handles the backend complexity specific to Dropbox’s monorepo, and that other teams can build on. The build-versus-buy decision turns out to be really critical when you’re operating at this scale and at this speed.
One practical thing that also helped: we worked with our procurement and legal and security teams to dramatically reduce the time it takes to evaluate and approve new AI tools, getting that review process down to around three days. When the market is moving this fast, your ability to experiment is only as good as how quickly you can get tools in front of developers.
Once you’ve expanded beyond code completion, what does AI usage actually look like across the SDLC?
Uma: Code completion was the obvious starting point. Once we felt we’d gotten what we could from that, we started looking at the rest of the development lifecycle: code review, testing, and debugging. Every stage of the SDLC is on the table.
What we discovered is that a lot of the commercially available tools for these adjacent use cases didn’t hold up at our scale or for our specific codebase. That’s what led us to build in-house. One of our developers took it upon himself to figure out what kind of platform we could build, using Claude and Claude Code as the foundation, that would work within Dropbox’s environment and that others could build on top of. That platform now handles the backend complexity: deployment, monorepo scale, and testing guardrails. If a team wants to build an AI-assisted code review product, they start from that platform rather than from scratch. It’s one of the things I’m most proud of from the past year.
What’s the hardest unsolved problem you’re carrying into 2026?
Uma: Connecting developer productivity improvements to actual business outcomes. We can show that DXI improved, that AI adoption is up, and that developers are saving hours. But the arc from “developers are more productive” to “we shipped more value to customers faster”—that instrumentation isn’t there yet, and I don’t think anyone in the industry has fully cracked it.
What we’re seeing is that the capacity unlocked by AI is naturally flowing toward migrations and tech debt reduction. That’s actually pretty cool—give engineers more capacity, and they automatically invest it in the right things. But as a leader, I need to be able to answer the CFO and the exec team: where is this capacity going, and how does it connect to revenue? We’re working toward that. If anyone listening has cracked that code, I’d genuinely love to talk.
You can listen to the full conversation with Uma on the Engineering Enablement podcast.
This week’s featured DevProd job openings. See more open roles here.
American Express is hiring a Sr. Manager, Digital Product Management - DevProd | Hybrid, London, UK
CoreWeave is hiring a Sr. Software Engineer - Developer Experience | Livingston, NJ; New York, NY
DoorDash is hiring an Engineering Manager - Developer Experience | San Francisco, CA; Sunnyvale, CA; Seattle, WA; Los Angeles, CA; New York, NY
Figma is hiring a Staff Software Engineer, Developer Experience | Remote; US
Plaid is hiring a Software Engineer - Platform | New York, NY
UserTesting is hiring a Software Engineer, Developer Experience (Platform) | Spain
That’s it for this week. Thanks for reading.