Engineering Enablement
Measuring AI impact, assessing readiness, and new data trends


How AI is reshaping the entire SDLC, shifting bottlenecks and redefining AI readiness, and why developer experience, not tools, determines real impact.

Listen and watch now on YouTube, Apple, and Spotify.

In this special episode of Engineering Enablement, I welcome back Jesse Adametz, who this time joins as host.

In our conversation, we explore how AI is showing up across the SDLC, not just in code generation, and how it is shifting bottlenecks across the development process. We unpack what “AI readiness” actually means in practice, and why it often comes down to developer experience fundamentals like documentation, environments, and feedback loops.

We also discuss why enablement matters more than tool choice, how teams are thinking about measuring ROI, and what changes as background agents become more common. Finally, we explore how the role of the engineer may evolve, what questions teams are still trying to answer, and the challenges of non-engineers contributing to codebases.

Some takeaways:

AI is expanding beyond coding into the full SDLC

  • The focus has shifted from code generation to the entire software lifecycle. Teams are applying AI to planning, prototyping, review, and documentation—not just writing code.

AI readiness is a developer experience problem

  • The biggest blockers to AI adoption are long-standing DX gaps. Missing documentation, inconsistent environments, weak CI, and unclear system boundaries all limit effectiveness.

  • Tool choice is not the primary driver of success. Models and tools are evolving too quickly for this to be a durable advantage.

  • Some organizations are formalizing AI enablement as a function. Dedicated teams are emerging to drive adoption and share practices.

Measuring AI ROI is messy and still evolving

  • Correlation versus causation makes attribution difficult. High AI usage often correlates with engineers who were already high-performing.

  • Longitudinal analysis is more reliable than snapshots. Tracking changes over time gives better insight into impact.

  • Token spend introduces real cost considerations. AI creates a direct, variable cost that organizations must evaluate.

AI impact falls into two buckets: amplification and augmentation

  • Amplification improves human productivity. This includes higher throughput, time savings, and better developer experience.

  • Augmentation extends capacity beyond humans. Agents begin to act as additional “headcount,” completing work independently.

  • These require different measurement approaches. Amplification focuses on human output, while augmentation focuses on agent output relative to cost.

Background agents shift how work gets done and where bottlenecks appear

  • Agents enable work to happen outside the human loop. Tasks can be completed asynchronously and proactively.

  • This changes the developer role. Engineers move toward reviewing, guiding, and orchestrating agent output.

  • Human workflows can become the bottleneck. If agents produce work faster than humans can process it, the constraint shifts.

  • This reframes productivity. The question becomes where human involvement adds the most value.

Specs and documentation are becoming critical infrastructure

  • AI makes documentation a core dependency. It directly impacts the quality of outputs.

  • Poor documentation leads to poor results. Agents can duplicate systems or make incorrect assumptions without context.

  • Documentation is shifting from optional to essential. It is now foundational for both human and AI productivity.

In this episode, we cover:

(00:00) Intro

(02:12) Where AI is showing up across the SDLC

(05:53) AI readiness and its link to developer experience

(08:23) Why enablement, education, and experimentation matter more than tool choice

(13:05) The case for a dedicated enablement team

(14:50) Measuring AI ROI: challenges and tradeoffs

(19:46) Background agents and token spend

(24:12) Measuring agent output with PR throughput

(26:58) How the engineer role might change

(31:01) Specs and documentation in the age of AI

(33:11) Non-engineers writing code

(35:30) What’s changing in the SDLC and open questions

Referenced:

Measuring AI code assistants and agents

Lessons from Twilio’s multi-year platform consolidation

The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win

How Claude remembers your project - Claude Code Docs

specIsJustCode : r/ProgrammerHumor
