Listen and watch now on YouTube, Apple, and Spotify.
In this special episode of the Engineering Enablement podcast, recorded live at LeadDev London, I unpack the gap between AI hype and engineering reality—and how leaders can use data to close it.
I share the latest insights from nearly 39,000 developers across 184 companies, walk through the Core 4 and AI Measurement Frameworks, and explain how to use them together to measure what matters, improve developer experience, and drive real organizational impact—without getting lost in the noise.
Some takeaways:
The AI hype cycle vs. ground truth
The “disappointment gap” refers to the widening space between sensational AI headlines and the lived reality of teams on the ground. Organizations are being pushed to move faster with AI, yet few have defined what success even looks like.
Headlines touting “90% of code written by AI” inflate expectations and erode trust. Developers feel let down when tools don’t deliver on the hype. Executives, in turn, expect exponential productivity gains without understanding what’s realistically achievable.
The best way to close this gap is with data. Leaders need to ground their AI strategies in facts, not forecasts.
AI’s current role in high-performing engineering orgs
In the top quartile of organizations, around 60% of developers are now using AI tools daily or weekly. However, this usage does not translate directly into AI generating most of the code.
These organizations are seeing the best results because they invest in enablement, support, and identifying practical use cases that actually work.
Across nearly 39,000 developers at 184 companies, the average reported time savings from AI use is 3 hours and 45 minutes per week, roughly 9% of a 40-hour week. It's a meaningful uplift, but not a silver bullet.
Engineering leaders must shape the narrative
Engineering leaders are also business leaders—and they need to take on the responsibility of educating peers and execs on what AI adoption actually looks like.
Effective leaders can clearly answer:
How is our organization performing today?
How is AI helping—or not helping?
What are we doing next to improve?
Back to basics: what defines engineering excellence?
A shared definition of engineering performance is essential before measuring the effects of AI. The DX Core 4 framework offers this foundation.
Core 4 combines elements of DORA, SPACE, and DevEx into a single, balanced model with four key dimensions: speed, effectiveness, quality, and impact.
These metrics must be evaluated together. Optimizing one at the expense of another (e.g., speed at the cost of quality) risks destabilizing the system.
Developer experience drives performance outcomes
Developer experience is the strongest performance lever available to engineering organizations. The DXI (Developer Experience Index) measures 14 evidence-based drivers of experience and correlates directly with time savings.
For each DXI point gained, developers save 13 minutes per week. While that may seem small, the impact scales dramatically across teams.
Block used DXI to identify 500,000 hours lost annually due to friction—data that directly shaped their investment decisions and enabled faster delivery without compromising quality.
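To make that scaling concrete, here is a back-of-the-envelope sketch in Python using the 13-minutes-per-DXI-point figure from the episode. The org size and point gain below are hypothetical inputs for illustration, not Block's actual numbers.

```python
# Back-of-the-envelope: how per-point DXI gains compound across an org.
# The 13 min/point/developer/week figure comes from the episode; the
# org size, point gain, and working weeks below are assumptions.

MINUTES_PER_DXI_POINT_PER_WEEK = 13  # per developer, from the episode
WORK_WEEKS_PER_YEAR = 46             # assumption: ~6 weeks of leave/holidays

def annual_hours_saved(developers: int, dxi_points_gained: float) -> float:
    """Estimated engineering hours recovered per year from a DXI improvement."""
    weekly_minutes = developers * dxi_points_gained * MINUTES_PER_DXI_POINT_PER_WEEK
    return weekly_minutes * WORK_WEEKS_PER_YEAR / 60

# Hypothetical example: a 2,000-developer org improving its DXI score by 5 points
print(f"{annual_hours_saved(2000, 5):,.0f} hours per year")  # -> 99,667 hours per year
```

Even modest per-developer gains compound into six-figure hour counts once multiplied across thousands of developers and a full year, which is why friction data of this kind can reshape investment decisions.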
A complementary framework for measuring AI
The AI Measurement Framework adds clarity by tracking the effect of AI across three pillars: utilization, impact, and cost.
Utilization captures how broadly and consistently AI tools are being used. The biggest gains typically come when teams move from occasional to consistent usage.
Time savings per week is the metric the industry has most consistently aligned on for measuring impact.
Cost includes not just licenses but also investment in training and enablement—areas that are often overlooked but essential for success.
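As a rough illustration of how the three pillars could be rolled up, here is a minimal sketch assuming self-reported survey data per developer. The record fields, figures, and rollup logic are assumptions for illustration, not part of the published framework.

```python
# Illustrative only: one way to roll up the framework's three pillars
# (utilization, impact, cost) from per-developer survey data. Field names
# and example figures are assumptions, not the published framework.
from dataclasses import dataclass

@dataclass
class DeveloperRecord:
    uses_ai_weekly: bool            # utilization: at least weekly AI tool use
    hours_saved_per_week: float     # impact: self-reported time savings
    license_cost_per_month: float   # cost: seat price for AI tooling

def summarize(devs: list[DeveloperRecord]) -> dict:
    n = len(devs)
    return {
        # Utilization: breadth of consistent (at least weekly) usage
        "weekly_utilization_pct": 100 * sum(d.uses_ai_weekly for d in devs) / n,
        # Impact: average self-reported time savings per developer
        "avg_hours_saved_per_week": sum(d.hours_saved_per_week for d in devs) / n,
        # Cost: licenses only here; training/enablement spend would be added
        "monthly_license_cost": sum(d.license_cost_per_month for d in devs),
    }

team = [
    DeveloperRecord(True, 4.0, 19.0),
    DeveloperRecord(False, 0.0, 19.0),
    DeveloperRecord(True, 3.5, 19.0),
]
print(summarize(team))
# {'weekly_utilization_pct': 66.66..., 'avg_hours_saved_per_week': 2.5,
#  'monthly_license_cost': 57.0}
```

Keeping the three pillars separate in the rollup makes trade-offs visible: rising utilization without rising time savings, for instance, points to an enablement gap rather than a tooling gap.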
Using both frameworks together creates clarity and confidence
Core 4 answers: “What does high performance look like?”
The AI Measurement Framework answers: “How is AI affecting that performance?”
Together, these frameworks enable leaders to move beyond guesswork and act with clarity, especially during times of rapid change.
AI is a multiplier—but only with the right foundations
Accelerating software delivery with AI is possible, but it requires strong fundamentals in place. Cutting corners on quality, stability, or developer experience for short-term gains can create long-term damage.
When grounded in solid frameworks and real data, AI can improve velocity, collaboration, and developer satisfaction without compromising core engineering values.
Better software faster is possible—not by chasing hype, but by aligning teams on what matters and measuring what works.
In this episode, we cover:
(00:00) Intro: Laura’s keynote from LDX3
(01:44) The problem with asking “how much faster can we go with AI?”
(03:02) How the disappointment gap creates barriers to AI adoption
(06:20) What AI adoption looks like at top-performing organizations
(07:53) What leaders must do to turn AI into meaningful impact
(10:50) Why building better software with AI still depends on fundamentals
(12:03) An overview of the DX Core 4 Framework
(13:22) Why developer experience is the biggest performance lever
(15:12) How Block used Core 4 and DXI to identify 500,000 hours in time savings
(16:08) How to get started with Core 4
(17:32) Measuring AI with the AI Measurement Framework
(21:45) Final takeaways and how to get started with confidence
Where to find Laura Tacho:
• LinkedIn: https://www.linkedin.com/in/lauratacho/
• Website: https://lauratacho.com/