AI and productivity: Year-in-review with Microsoft, Google, and GitHub researchers
What 2025’s AI research told us about developer productivity and identity, and why enablement matters more than tool choice.
Welcome to the latest issue of Engineering Enablement, a weekly newsletter sharing research and perspectives on developer productivity.
🗓️ We recently announced DX Annual, our flagship conference for developer productivity leaders navigating the AI era. Go here to learn about the event and request an invite to attend.
In 2025, AI-assisted engineering moved from an experiment to a core business expectation.
At the start of the year, adoption rates varied widely across organizations, and a path toward widespread use was just beginning to come into focus. Now, at the end of 2025, that picture looks very different: roughly 90% of developers across the industry are using AI tools at least once a month to get work done, with more than 40% relying on them every day.
As adoption has increased, so have our questions about its impact. Looking back at 2025, we’ve learned a lot as an industry about how AI is changing the way software gets made. To close out the year, I hosted a research roundtable with prominent voices in the AI and developer productivity research space. I invited them to reflect on what we’ve learned so far, and to share the questions they’re carrying into 2026.
A few clear themes emerged from that conversation. Below, I expand on those themes and add my own perspective on what they mean for the year ahead.
Watch the full discussion with Brian Houck (Microsoft), Ciera Jaspan (Google), Collin Green (Google), and Eirini Kalliamvakou (GitHub) here.
Lines of code is still a bad metric. Don’t let uncertainty and change make you reach for it
Building on themes from the last decade of developer productivity research, 2025’s research into measuring AI impact landed in a familiar place: there’s no single number that tells you whether AI is actually making a difference. Across the industry, organizations are using a broad range of metrics to measure impact (see the AI Measurement Framework, or this research piece on how Google, Microsoft, GitHub, and others are actually measuring productivity).
But even with plenty of good patterns out there, there’s one bad pattern this group of researchers called out: using lines of code (LOC) as a measurement of AI impact. This raw output metric is easy to capture, and in the absence of a clear alternative (especially in times of change), it can be tempting to reach for numbers that seem predictable and objective. Collectively, the group warned against mistaking output for impact: AI lends itself to writing a lot of lines of code, but that doesn’t necessarily translate into a positive impact on your teams, organizations, or business.
Talking to AI instead of your colleagues might not be great for teams in the long term
Brian Houck, Sr. Principal Applied Scientist at Microsoft and co-author of the SPACE Framework of Developer Productivity, shared insights from his paper The SPACE of AI: Real-World Lessons on AI’s Impact on Developers, which shows how the impact of AI tools can vary widely across five key dimensions of developer productivity: Satisfaction, Performance, Activity, Collaboration, and Efficiency. While 90% of developers report that AI makes them more productive, fewer than half agree that it makes them more collaborative and communicative with their teammates. As teams use AI for longer periods of time, it will be interesting to see how this pattern plays out when it comes to knowledge sharing or even the long-term maintenance of codebases.
AI changes what skills developers need, but also how they perceive themselves
Eirini Kalliamvakou, Research Advisor at GitHub, shared some details from her recent research The new identity of a developer: What changes and what doesn’t in the AI era. As developers become more fluent with AI, their identity is shifting from traditional “code producer” toward a role focused on directing, delegating, and validating AI-assisted workflows, with creative judgment and strategic orchestration becoming central to their craft.
Interestingly, many of today’s heavy AI users started out as skeptics. Hands-on experience with the tools often changed both their sentiment and their expectations.
This identity shift has real implications for organizations, particularly around career progression, hiring, and upskilling. Both companies and developers need to place more value on AI fluency, systems thinking, and judgment, rather than raw coding output alone. And keeping pace with the ecosystem will require broader AI enablement, not just tool-specific training programs.
Is AI the death of the junior developer, or an accelerant to help them level up faster?
Lots of folks, from new grads to seasoned developers, are concerned about how AI will impact the talent pipeline. The usual narrative is that AI puts junior developers at risk of extinction, because why hire a junior when a senior developer can just delegate tasks to an AI agent instead? This could be a short-sighted optimization that leaves us with no developing talent a few years from now.
Ciera Jaspan from Google offered a compelling alternative perspective: what if the skills developers need to advance in seniority (strong problem-solving, managing and delegating work, and clearly defining outcomes) are now learned earlier by junior engineers because of the way they have to interact with agents? Previously, these skills developed later because a tech lead or senior team member took on the brunt of the project and professional management for juniors. But when juniors act as the team lead for a handful of agents, do they actually level up faster because they get more practice solving problems end-to-end, even if the time they spend actually typing code shrinks? Of course, for this to happen, companies still need to hire juniors, which isn’t always the case anymore. It’s worth watching for this pattern within your own teams.
Collin Green, another Google researcher, connected this back to earlier concerns about communication. Faster leveling through AI interaction doesn’t automatically translate to stronger collaboration skills. If juniors primarily work with AI rather than people, what does that mean for their professional development? And if seniors spend less time mentoring, what downstream effects might that have as well?
Are automation and code generation the right focus for the future of AI tools?
Collin advised on an AI paper with a slightly different focus: creativity in software engineering. If creativity is the goal, rather than productivity or automation, tools might better help us reimagine how work gets done and lead to more impactful outcomes, rather than just reaching the same outcomes faster. Shifting the focus from productivity to creativity changes how tools are built and how we use them. The current emphasis on automation covers only a small surface area: many developers spend only about one day a week actually coding, on average. Everything else (scoping, experimenting, and validating ideas) lives in a very creative space that AI is well suited to help with, yet many tools started with an emphasis on productivity and efficiency.
At the same time, plenty of tasks classified as “toil” would serve developers well if automated. These are the tasks that make developers look at their to-do list and say, “ugh, today is not going to be great.” But as Collin shared, we’ve had solid research and technology for automating tasks for 40+ years, and we should be automating things that are “dull, dirty, and dangerous.” Yet the most consistent sources of friction and toil in developers’ days (tech debt, lack of documentation, compliance tasks, even expense reports) still haven’t been solved with automation. Can AI change that, or are these tasks simply too complex to automate? And even if they can be automated, is AI the right tool for the job?
Headlines will oversimplify AI research to the point of being incorrect, even if the research itself is full of nuance
Earlier this year, METR published a study showing that, in some contexts, developers actually slow down when using AI, even when their own perception is that they’re moving faster. This discrepancy between self-perception and measured results made a big splash in our industry, and not a week goes by that I don’t hear someone cite the study as evidence that we’re all careening off an AI-generated cliff.
Importantly, the study itself had a lot more nuance and depth than the one-line headlines captured. And since one-line headlines are mostly what gets shared on LinkedIn and other platforms, it didn’t take long for the (well-done) METR study to be oversimplified and distilled down to the simple point that AI makes developers slower, which wasn’t really the point of the paper at all.
But the headline that AI isn’t actually as helpful as promised clearly resonated, and the study was widely shared. Many people were drawn to results that matched their own experience of trying to get started with an unreliable tool. For others, it was a welcome antidote to the hype. One thing the METR study did demonstrate is that AI research is now taking place in situ: with real developers solving real problems, and specifically their own problems.
2026 will surely bring even more headlines that are greatly oversimplified summaries, so here’s an important thing to remember: stay curious about what’s behind the number. For the METR study, the headline was “AI makes devs 19% slower.” But not all devs, and not in all contexts. Those qualifications are missing from the headlines, so you need to ask about them yourself.
Looking to 2026
Closing out 2025, I can confidently say we haven’t seen the full spectrum of AI impact yet. We’ve made a lot of progress on understanding how AI affects teams, but we still need to keep digging to assess whether AI is achieving what we want it to, and to fully understand its impact not just across the whole software delivery lifecycle, but at every level of the organization.
One thing is clear to me, though: the companies that will see the biggest wins with AI in 2026 are the ones that deeply understand their existing bottlenecks. The real acceleration from AI doesn’t come from using the newest models and testing every new tool; it comes from pointing AI at the real problems that slow developers down.
This week’s featured DevProd job openings. See more open roles here.
American Express is hiring a Sr. Manager, Digital Product Management - DevProd | Hybrid - London UK
Capital One is hiring a Product Manager - Developer Experience | Plano TX; McLean VA; Richmond VA
Gusto is hiring a Sr. Platform Engineer | Denver, CO; San Francisco, CA; Atlanta, GA; Austin, TX; Chicago, IL; Miami, FL; Seattle, WA
Plaid is hiring a Software Engineer - Platform | New York, NY
Reddit is hiring a Staff Software Engineer - Developer Experience | Remote - United States
Whatnot is hiring a Software Engineer - Platform | San Francisco, Los Angeles, Seattle, NYC
That’s it for this week. Thanks for reading.

