Listen and watch now on YouTube, Apple, and Spotify.
Today, I am joined once again by our CTO, Laura Tacho, to explore how the role of Platform and DevProd teams is evolving in the AI era. We discuss why evaluating and rolling out AI tools is now a core responsibility, how measurement frameworks must change, and why fundamentals like documentation and feedback loops matter more than ever for both developers and AI agents. We also dive into strategies for handling tool sprawl, hardening pipelines for increased throughput, and applying AI at scale to tech debt, migrations, and workflows across the SDLC.
Some takeaways:
Platform teams in transition
Platform teams are redefining their mandate in the AI era. They’re moving beyond CI/CD into AI evaluation, rollout, and measurement, gaining more strategic influence across the SDLC.
Expectations of leaders are rising. Platform teams are now expected to guide both productivity and AI adoption, balancing technical and cultural change.
Defining the platform role
The platform role is hard to define, and AI raises the stakes. Teams must bridge old responsibilities with new AI-driven expectations.
Clarity creates leverage. Clear mandates help leaders align stakeholders and secure resources.
Evaluation and measurement
Evaluating and rolling out AI tools is now a platform responsibility. Platform leaders handle vendor selection, pilots, and enterprise rollouts while tracking impact.
Strong measurement frameworks are essential. Teams need data-driven ways to evaluate AI’s impact beyond lines of code, blending old and new metrics for a realistic picture.
Platform leaders are internal educators. Explaining metrics to executives avoids hype-driven decisions and builds trust in AI-assisted development.
Hardening systems and guardrails
AI-generated code stresses pipelines and build systems. Platform teams must harden CI/CD, testing, and local environments to handle larger batch sizes and faster iterations.
Guardrails complement throughput. Quality checks, security reviews, and feedback loops maintain reliability as speed increases, laying the groundwork for large-scale refactors and technical debt initiatives.
Standardization and knowledge sharing
Training alone isn’t enough — standardization creates leverage. Curated workflows and reusable templates help maintain consistent AI adoption.
Standardizing tools and knowledge multiplies impact. Shared workflows and internal documentation reduce duplication and accelerate learning.
Applying AI at scale
Platform teams can apply AI directly for high-leverage work. Code migrations, technical debt reduction, and repetitive tasks become one-to-many accelerators rather than personal productivity boosts.
Context as a service matters. Centralizing setup and best practices allows product teams to focus on business problems while platform teams handle complexity and modernization.
AI tackles technical debt at scale. Platform teams can orchestrate migrations, refactors, and other neglected work for organization-wide benefit.
Focusing on fundamentals and the big picture
Fundamentals benefit both developers and AI agents. Documentation, feedback loops, and clean codebases improve outcomes for LLMs just as they do for human developers.
AI doesn’t erase old bottlenecks. Meetings, interruptions, and process inefficiencies still outweigh many AI productivity gains, making platform leaders the stewards of the bigger picture.
Looking beyond code authoring
Opportunities go far beyond code authoring. Requirements, testing, validation, and even documentation are ripe areas for AI leverage.
End-to-end adoption multiplies gains. Companies see the biggest improvements when AI is applied across the SDLC, creating a stronger developer experience overall.
In this episode, we cover:
(00:00) Intro: Why platform teams need to evolve
(02:34) The challenge of defining platform teams and how AI is changing expectations
(04:44) Why evaluating and rolling out AI tools is becoming a core platform responsibility
(07:14) Why platform teams need solid measurement frameworks to evaluate AI tools
(08:56) Why platform leaders should champion education and advocacy on measurement
(11:20) How AI code stresses pipelines and why platform teams must harden systems
(12:24) Why platform teams must go beyond training to standardize tools and create workflows
(14:31) How platform teams control tool sprawl
(16:22) Why platform teams need strong guardrails and safety checks
(18:41) The importance of standardizing tools and knowledge
(19:44) The opportunity for platform teams to apply AI at scale across the organization
(23:40) Quick recap of the key points so far
(24:33) How AI helps modernize legacy code and handle migrations
(25:45) Why focusing on fundamentals benefits both developers and AI agents
(27:42) Identifying SDLC bottlenecks beyond AI code generation
(30:08) Techniques for optimizing legacy code bases
(32:47) How AI helps tackle tech debt and large-scale code migrations
(35:40) Tools across the SDLC
Where to find Abi Noda:
• LinkedIn: https://www.linkedin.com/in/abinoda
• Substack: https://substack.com/@abinoda
Where to find Laura Tacho:
• LinkedIn: https://www.linkedin.com/in/lauratacho/
• Website: https://lauratacho.com/
• Laura’s course (Measuring Engineering Performance and AI Impact): https://lauratacho.com/developer-productivity-metrics-course