Engineering Enablement Podcast
DORA’s 2025 research on the impact of AI


A conversation with DORA’s Nathen Harvey on how AI is transforming engineering systems and why leaders need new metrics to understand its real impact.

Listen and watch now on YouTube, Apple, and Spotify.

In this episode of Engineering Enablement, I sit down with Nathen Harvey, who leads research at DORA, to explore how teams should really think about measuring the impact of AI. We talk about why traditional delivery metrics can give leaders a false sense of confidence and how AI acts as an amplifier, accelerating healthy systems while intensifying existing friction and failure.

We examine findings from the 2025 DORA research on AI-assisted software development alongside DX’s Q4 AI Impact report and unpack where the data aligns and where meaningful gaps emerge. We also dig into how AI is reshaping engineering systems themselves, changing workflows, feedback loops, and team dynamics in ways leaders need to understand to achieve real, sustainable impact.

Some takeaways:

DORA metrics alone cannot measure AI impact

  • The four “key” DORA metrics reflect only delivery outcomes, not system behavior. They show where teams end up, not how they got there.

  • DORA now measures five software delivery performance metrics, not four.

  • These metrics function like a compass rather than a diagnostic tool.

  • Delivery performance metrics are leading indicators of organizational health but lagging indicators of engineering practices.

AI acts as an organizational amplifier

  • AI does not fix systems; it intensifies what already exists. Strong practices compound while weak practices become more painful.

  • Healthy teams experience faster flow while unhealthy systems accumulate more visible friction.

  • AI makes hidden bottlenecks impossible to ignore.

The five DORA software delivery performance metrics

  • DORA divides delivery performance into two categories: throughput and instability.

  • Throughput metrics include lead time for changes, deployment frequency, and failed deployment recovery time.

  • Instability metrics include change fail rate and deployment rework rate.

DX Q4 2025 AI Impact report insights

  • Junior engineers adopt AI more heavily than senior engineers. This shifts how work is distributed across teams.

  • Senior engineers often capture more measurable time savings despite lower visible usage.

  • DX found widespread experimentation with non-enterprise AI tools.

  • Engineers reported high AI usage even when enterprise telemetry showed no activity.

  • Shadow experimentation reflects weak or unclear organizational AI guidance.

The DORA AI Capabilities Model

  • Successful AI adoption depends on team and organizational capabilities, not tool selection.

  • A clear and communicated AI stance reduces uncertainty and speeds adoption.

  • A healthy internal data ecosystem prevents AI usage from being blocked by silos.

  • Internal policies and documentation must be accessible to both humans and AI systems.

  • Strong version control provides rollback safety when AI-generated code diverges.

  • Small batch work improves both AI output quality and system stability.

  • User-centered thinking ensures AI effort aligns with real human outcomes.

  • High-quality internal platforms allow improvements to scale across teams.

AI shifts where work breaks

  • AI accelerates code creation but moves constraints downstream.

  • Code review becomes the dominant bottleneck under AI-assisted development.

  • Increased code volume without improved review systems slows overall throughput.

  • Bottlenecks become more visible, not less, as AI usage grows.

Measuring AI ROI requires human signals

  • Dashboards cannot capture where work feels slow or painful.

  • Leaders need direct conversations with engineers about friction and workflow breakdowns.

  • Qualitative insight exposes failure points that metrics cannot surface.

In this episode, we cover:

(00:00) Intro

(00:55) Why the four key DORA metrics aren’t enough to measure AI impact

(03:44) The shift from four to five DORA metrics and why leaders need more than dashboards

(06:20) The one-sentence takeaway from the 2025 DORA report

(07:38) How AI amplifies both strengths and bottlenecks inside engineering systems

(08:58) What DX data reveals about how junior and senior engineers use AI differently

(10:33) The DORA AI Capabilities Model and why AI success depends on how it’s used

(18:24) How a clear and communicated AI stance improves adoption and reduces friction

(23:02) Why talking to your teams still matters

Where to find Nathen Harvey:

• LinkedIn: https://www.linkedin.com/in/nathen

Where to find Laura Tacho:

• LinkedIn: https://www.linkedin.com/in/lauratacho/

• X: https://x.com/rhein_wein

• Website: https://lauratacho.com/

• Laura’s course (Measuring Engineering Performance and AI Impact): https://lauratacho.com/developer-productivity-metrics-course

Referenced:

DORA | State of AI-assisted Software Development 2025

Steve Fenton - Octonaut | LinkedIn

AI-assisted engineering: Q4 impact report
