Driving AI tool adoption — Lessons from Microsoft
SPACE co-author Brian Houck shares proven tactics for moving the needle on AI adoption.
Welcome to the latest issue of Engineering Enablement, a weekly newsletter sharing research and perspectives on developer productivity.
This week on the Engineering Enablement podcast, I sat down with Brian Houck—developer productivity researcher at Microsoft, co-author of the SPACE framework, and returning guest on the show. Last time, we explored his perspective on PR Throughput. This time, we’re diving into one of the most important topics in engineering right now: AI tooling adoption.
AI has massive potential, but most companies are still figuring out how to roll it out in a way that actually drives impact. Microsoft has been at the forefront of the research here, and Brian walked me through some of their latest findings.
We talked about what’s working—from leadership advocacy to local champions to creating a sense of competition—and how Microsoft is measuring adoption and outcomes across organizations.
Below is an excerpt from our conversation. You can listen to the full discussion here.
I'm hearing from leaders who are doing top-down mandates to get developers to use AI tools, and others who are trying to drive AI adoption using a grassroots, bottom-up approach. What are you seeing companies try?
Brian: Many organizations are wrestling with the challenge of how to drive AI adoption, including us at Microsoft. One of the reasons it’s so hard is that developers tend to be skeptical by nature. They're not going to change the way they work just because someone tells them to. You have to show real value before behaviors shift. But through our research across multiple companies, including Microsoft, we’ve been able to measure a few strategies that actually move the needle.
Leadership advocacy: While I’m not sure that top-down mandates are going to be well-received, we’ve seen that simply having leadership strongly advocate for AI tools makes developers 7x more likely to become daily users. So just communicating the value of these tools drives a lot of adoption.
Formal training: Formal training is another lever that’s proven effective. We’ve seen that AI tools aren’t equally effective across all kinds of tasks—they’re better at some than others—and learning which tasks AI is well suited for changes your outcomes. So offering formal training makes organizations about 20% more likely to have most or all of their developers adopt AI.
Local champions: Another strategy we’ve seen work again and again is peer-to-peer sharing. Rather than having a central infrastructure team say "Go use AI," having local champions within teams share their experiences is much more effective. (It’s, “Hey, I’ve tried this tool, it’s great, here’s how I use it.”) They can hold brown bag sessions where they share their screen and walk through real-world use cases that are likely to resonate with that team. We’ve seen that organizations using local champions like this are about 22% more likely to have all or most developers adopt AI.
So those are just some of the strategies we've been able to measure as effective.
So leadership advocacy, formal training, and local champions. Before going further into these, is there anything particular about AI that you think is fundamentally different from driving adoption for other development tools?
Brian: Yes, the hype around AI is actually the biggest barrier to adoption. When I ask developers about their concerns with AI, the most common answer is that it’s a gimmick—about 30% say their number one worry is that it's not going to live up to its promise. This intense attention builds up expectations, leading to two potential negative outcomes: either developers think "there's no way it can be that good" and don't try it, or they try it once, find it's not as transformational as they expected, and never use it again.
It's different from driving adoption of a new build technology or IDE because AI is so widely discussed.
On top of that, there’s this constant narrative about AI replacing jobs that adds another layer of skepticism. Interestingly, only about 10% of developers who use AI daily are concerned about job replacement in any foreseeable timeframe. But these narratives contribute to the skepticism—developers hear "this is so good, it's like having another human" when in reality, it's an assistant, not a replacement.
“These narratives contribute to the skepticism—developers hear ‘this is so good, it's like having another human’ when in reality, it's an assistant, not a replacement.”
Going back to the strategies for driving adoption, how do you recommend approaching leadership advocacy?
Brian: Some of the most common questions that developers have are: Am I even allowed to use these tools? What’s available to me? Am I expected to use them? So having organization-wide messaging is crucial to address questions about expectations, availability, and permitted use cases. This messaging helps overcome the initial fear of "Am I supposed to be using AI for this?"
But the message has to go beyond “you’re allowed to use it.” Developers need to hear, “We want you to use this. We believe this will help you do your best work.” Frame the opportunity positively by presenting AI as a tool to help developers achieve their best work and solve problems they've identified in their daily workflows.
Messaging like this can’t be a one-time thing. It needs to be consistent and visible. A single email isn’t enough. Leaders should be reiterating the message every few weeks—reminding teams what tools are available, where to find them, and how they’re already helping others.
And of course, none of this matters if you can’t track what’s actually happening. So not only do we want to roll these tools out, but we want to make sure we can measure which of these approaches are having an impact. We do that first and foremost by looking at who has installed the tools and who has tried them, and then working to increase the level of usage.
Lastly, perhaps the most important rule when communicating about AI: don’t overpromise. Don't claim AI will eliminate all toil from workflows or completely remove the responsibility for finding bugs in code. Again, the main challenge is combating inherent skepticism about AI tools. So instead of overpromising, focus on framing what AI is good for and how it can realistically help in day-to-day work.
Let’s say we have advocacy from the top and some visibility into who’s actually using it, but still only 15% of people are using these tools on a weekly basis. What are some other components for driving success here?
Brian: As developers get more hands-on time with AI tools, they start to figure out where these tools actually shine. AI coding assistants are particularly good at repetitive, mundane tasks and getting started with boilerplate code. More complex, novel tasks still require developers to be front and center.
So another key strategy is helping teams share this knowledge among themselves to optimize usage and create a flywheel effect—learning to use it better, teaching peers, sharing best practices, and building momentum. This is where internal champions become crucial. Find senior engineers whose voices carry weight within the team, who have discovered best practices and successful integration methods, and ensure they have forums to share these practices.
How do you go about identifying these local champions or initial pilot teams?
Brian: In our experience, we've found it's effective to have team managers identify strong senior developers for this role. These are typically developers who would naturally volunteer, who became early adopters out of interest, and who are passionate about mentoring and sharing their knowledge with peers. So, generally speaking, the best way to scale this approach is by asking managers to identify ideal candidates within their teams.
Another question we get here sometimes is whether organizations should formalize a champion program with defined responsibilities. This varies considerably across organizations—even within Microsoft, approaches are inconsistent. While there are advantages to formalizing a program with clear accountabilities, I've seen success with more of a grassroots approach as well. Organizations may track adoption metrics, and managers may feel pressure to drive adoption, but allowing teams the flexibility to determine their own champion systems can be effective.
You mentioned that if you overpromise and lean into the hype, there’s a good chance it’ll backfire and create a poor experience. What are some other failure modes you’ve seen?
Brian: Well, I'm not convinced that shaming individual developers who aren't adopting AI leads to success. Tracking adoption at broader team levels isn't inherently problematic, but calling out specific individuals probably is.
Another failure mode is around quality: developers may worry that AI might introduce vulnerabilities or bugs into code. Therefore, messaging should emphasize that humans remain responsible for quality as the last line of defense through rigorous testing. Unfortunately, the conversation often focuses exclusively on shipping code faster without addressing quality concerns, yet quality is the second most common developer concern after hype.
An additional anti-pattern is failing to clarify that using AI doesn't atrophy skills—instead, it changes what it means to be a productive developer. Many worry their technical skills will deteriorate with AI reliance. We need to reframe this: AI is a coding assistant or pair programmer, not a replacement. It's someone who sits alongside us, offering help and guidance. The skills we need will evolve rather than atrophy.
“Many worry their technical skills will deteriorate with AI reliance. We need to reframe this: AI is a coding assistant or pair programmer, not a replacement. It's someone who sits alongside us, offering help and guidance. The skills we need will evolve rather than atrophy.”
How do you use data in this process? Are there any specific reports, scorecards, or readouts that you’ve found effective in helping drive adoption?
Brian: The most effective approach we've found is creating dashboards that track usage metrics for leaders. People are naturally competitive, and leaders are motivated to drive adoption within their teams both because AI clearly improves effectiveness and because they want to outperform their peers.
This competitive dynamic actually produces good outcomes because developers who adopt AI generally like it—80% say they would be sad if they could no longer use it. The challenge is getting that initial adoption. So, specifically: leadership scorecards showing team adoption percentages, identifying users who tried and abandoned the tools, and tracking daily usage are all metrics that leaders are eager to improve.
As for how we actually track active usage, we don't reduce it to a single score but instead look across multiple dimensions. Those dimensions include who has installed the tools, who has tried them once, and who uses them weekly, monthly, or daily.
Daily users = at least four days a week
Weekly users = once or twice a week
Monthly users = once or twice a month
We also look at those who have tried but haven't used the tools for over a month.
So usage is not binary—we don't simply label someone as "an AI adopter" or not.
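To make these buckets concrete, here is a minimal sketch in Python of how usage telemetry could be rolled up this way. It assumes activity events arrive as (user, date) pairs; the function names, the exact thresholds at the bucket boundaries, and the team rollup are illustrative assumptions, not Microsoft's actual pipeline.

```python
from collections import Counter, defaultdict
from datetime import date, timedelta

def classify_users(events, today):
    """Bucket users by recent activity. `events` is an iterable of
    (user_id, activity_date) pairs from tool telemetry."""
    days_active = defaultdict(set)
    for user, day in events:
        days_active[user].add(day)

    week_ago = today - timedelta(days=7)
    month_ago = today - timedelta(days=30)
    buckets = {}
    for user, days in days_active.items():
        last_week = sum(1 for d in days if d > week_ago)
        last_month = sum(1 for d in days if d > month_ago)
        if last_week >= 4:
            buckets[user] = "daily"    # 4+ days in the past week
        elif last_week >= 1:
            buckets[user] = "weekly"   # 1-3 days in the past week
        elif last_month >= 1:
            buckets[user] = "monthly"  # active in the past month
        else:
            buckets[user] = "lapsed"   # tried, but inactive for 30+ days
    return buckets

def team_scorecard(buckets, team_of):
    """Roll per-user buckets up into the per-team percentages a
    leadership dashboard might show."""
    counts = defaultdict(Counter)
    for user, bucket in buckets.items():
        counts[team_of[user]][bucket] += 1
    return {
        team: {b: round(100 * n / sum(c.values())) for b, n in c.items()}
        for team, c in counts.items()
    }

# Example: one near-daily user and one who tried the tool and stopped.
events = [
    ("dev_a", date(2025, 1, 6)), ("dev_a", date(2025, 1, 7)),
    ("dev_a", date(2025, 1, 8)), ("dev_a", date(2025, 1, 9)),
    ("dev_b", date(2024, 11, 20)),
]
buckets = classify_users(events, today=date(2025, 1, 10))
print(team_scorecard(buckets, team_of={"dev_a": "Team X", "dev_b": "Team X"}))
# {'Team X': {'daily': 50, 'lapsed': 50}}
```

Keeping the buckets as a dimension rather than collapsing them into one score is what lets a scorecard surface the "tried and abandoned" group separately from never-users, which the interview calls out as the population leaders most want to win back.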
What productivity gains are organizations seeing from AI tools?
Brian: Like many productivity questions, the answer is complex. Studies confirm that AI coding assistants help write code faster, potentially dramatically increasing throughput. But developers only spend about 14% of their day coding, and software engineering is much more than just coding. So asking "how much more productive am I?" only addresses part of what a developer does. That's why I focus on measuring increases in coding efficiency rather than overall productivity. And across various organizations, I'm seeing a range of improvements.
Who’s hiring right now
This week’s featured DevProd & Platform job openings. See more open roles here.
Adyen is hiring a Team Lead - Platform | Amsterdam
Spotify is hiring a Fullstack Engineer - Platform Developer Experience | Stockholm
Scribd is hiring a Senior Manager - Developer Tooling | Remote (US, Canada)
Preply is hiring a Senior DevEx Engineer | Barcelona
Rippling is hiring a Director of Product Management - Platform | San Francisco
Snowflake is hiring a Director of Engineering - Test Framework | Bellevue and Menlo Park
That’s it for this week. If you know someone who would enjoy this issue, share it with them:
-Abi