Advanced prompting guide for AI-assisted engineering
Structured prompting patterns for using AI in more complex, higher-risk environments.
Welcome to the latest issue of Engineering Enablement, a weekly newsletter sharing research and perspectives on developer productivity.
In 2025, we saw engineering leaders focus on rolling out AI coding assistants at scale across their organizations. As these tools became more widely used, it became clear that outcomes depended less on having access to AI and more on how teams were educated and enabled. In response, DX published the Guide to AI Assisted Engineering, outlining best practices and high-value prompting use cases to help engineering teams use AI effectively in their day-to-day work.
Now, as organizations move beyond pilots, the focus has shifted from adoption and enablement to operational improvements and more complex use cases. Successfully applying AI in these contexts requires more structured prompting practices than those used during early experimentation. To support that next step, we’ve created our first supplement to the original guide: Advanced Prompting Guide for AI Engineering.
This new guide follows the same format as the original, with clear Do and Don’t scenarios, full prompt examples, and code output examples. It is vendor-agnostic, with an emphasis on prompting structure, constraints, and context so the techniques can be applied across tools and architectures.
Inside, you’ll find prompt and code examples that focus on:
Complexity management - For systems with cascading rules or conflicting requirements, the guide demonstrates graph-based prompting to reveal hidden dependencies, prioritize rules, and handle changing state (a minimal sketch follows this list)
Governance and quality - Workflows that run controlled validation loops, improving accuracy and handling more edge cases (see the second sketch below)
Risk mitigation - Dual-implementation strategies that harden outcomes for critical transactions where correctness is non-negotiable (see the third sketch below)
Operational efficiency - Techniques like diff-only refactoring that limit invasive changes to large, complex repositories and cut token usage
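To give a feel for the first of these, here's a minimal sketch of graph-based prompting in Python: encode rule dependencies as an explicit graph, topologically sort them, and hand the model an unambiguous evaluation order. This is illustrative only, not an excerpt from the guide, and the rules are hypothetical placeholders.

```python
# Minimal sketch of graph-based prompting. The rules and their
# dependencies are hypothetical placeholders.
from graphlib import TopologicalSorter

# Each rule maps to the rules it depends on (the edges of the graph).
rules = {
    "apply_discount": {"check_eligibility", "check_inventory"},
    "check_eligibility": {"load_customer_tier"},
    "check_inventory": set(),
    "load_customer_tier": set(),
}

# A topological sort makes hidden dependencies explicit and gives the
# model a priority order: upstream rules resolve first.
order = list(TopologicalSorter(rules).static_order())

prompt = (
    "Evaluate these business rules strictly in this order:\n"
    + "\n".join(f"{i + 1}. {name}" for i, name in enumerate(order))
    + "\n\nIf two rules conflict, the rule earlier in the list wins. "
    "Before producing output, state which rules fired and why."
)
print(prompt)
```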
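Next, a sketch of a controlled validation loop: generate, validate with a real tool, feed the concrete errors back, and cap the attempts. Again, this is illustrative rather than taken from the guide; llm_complete is a stand-in for whatever model call you use.

```python
# Sketch of a controlled validation loop: generate, validate with a real
# tool, feed the concrete errors back, and stop after a fixed budget.
# `llm_complete` is a hypothetical stand-in for your model call.
import subprocess
import tempfile

MAX_ATTEMPTS = 3

def llm_complete(prompt: str) -> str:
    raise NotImplementedError("wire up your model or tool here")

def validate(code: str) -> str | None:
    """Return None if the code compiles, else the compiler's errors."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    result = subprocess.run(
        ["python", "-m", "py_compile", path],
        capture_output=True, text=True,
    )
    return None if result.returncode == 0 else result.stderr

def generate_with_validation(task: str) -> str:
    prompt = task
    for _ in range(MAX_ATTEMPTS):
        code = llm_complete(prompt)
        errors = validate(code)
        if errors is None:
            return code
        # Feed the actual failure back rather than asking for a rewrite.
        prompt = (
            f"{task}\n\nYour previous attempt failed validation with:\n"
            f"{errors}\nFix only what these errors point to."
        )
    raise RuntimeError(f"no valid output after {MAX_ATTEMPTS} attempts")
```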
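And a sketch of the dual-implementation idea: prompt for two independent implementations of the same critical calculation, run both against the same inputs, and route any divergence to a human. The fee logic below is hypothetical; the signal that matters is disagreement between independently produced versions.

```python
# Sketch of a dual-implementation cross-check. fee_v1/fee_v2 stand in
# for two independently AI-generated versions of the same critical
# calculation (here, a hypothetical payment fee).
from decimal import Decimal

def fee_v1(amount: Decimal) -> Decimal:
    # Version prompted from the written spec.
    return (amount * Decimal("0.029") + Decimal("0.30")).quantize(Decimal("0.01"))

def fee_v2(amount: Decimal) -> Decimal:
    # Version prompted from worked examples, deliberately different in style.
    cents = int(amount * 100)
    return Decimal(round(cents * 0.029) + 30) / 100

def cross_check(amounts):
    """Return the inputs where the two implementations disagree."""
    return [
        (a, fee_v1(a), fee_v2(a))
        for a in amounts
        if fee_v1(a) != fee_v2(a)
    ]

if __name__ == "__main__":
    cases = [Decimal("10.00"), Decimal("19.99"), Decimal("0.01")]
    # An empty report means the implementations agree on these inputs;
    # any divergence goes to a human before the code ships.
    for amount, v1, v2 in cross_check(cases):
        print(f"DIVERGENCE at {amount}: v1={v1} v2={v2}")
```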
These use cases are drawn from interviews, educational talks, and community conversations, and apply across many scenarios. Whether you're using coding assistants, building prompts for agents, or writing specs for spec-driven development, you'll find applicable methods in the guide.
One other important update: When the original guide was published, it was written primarily for developers. But 2025 was a pivotal year for people outside traditional builder roles. As highlighted in our Q4 AI Impact Report, engineering leaders are shipping more code, and designers and PMs are building richer designs and prototypes. We still encourage engineering leaders to distribute this guide to their engineering teams, but it need not be exclusive to engineers. Whether you're an engineer, designer, PM, or leader working on complex problems, this guide can offer useful perspectives.
This week’s featured DevProd job openings. See more open roles here.
American Express is hiring a Sr. Manager, Digital Product Management - DevProd | Hybrid - London, UK
CoreWeave is hiring a Sr. Software Engineer - Developer Experience | Livingston, NJ; New York, NY
DoorDash is hiring an Engineering Manager - Developer Experience | San Francisco, CA; Sunnyvale, CA; Seattle, WA; Los Angeles, CA; New York, NY
Experian is hiring a Software Engineering Manager - Security Platform | Remote
Gusto is hiring a Sr. Platform Engineer | Denver, CO; San Francisco, CA; Atlanta, GA; Austin, TX; Chicago, IL; Miami, FL; Seattle, WA
Notion is hiring a Senior Platform Engineer | Dubai, United Arab Emirates
Plaid is hiring a Software Engineer - Platform | New York, NY
That’s it for this week. Thanks for reading.



The diff-only refactoring point is underrated. When you're running AI agents against a large codebase, the default behavior is to rewrite entire files — which makes code review nearly impossible and introduces subtle regressions. Constraining the agent to diff-only output forces it to reason about minimal changes, and as a side benefit, it burns way fewer tokens.
I've been using this pattern with Claude Code on production repos: give it the specific function to modify, the constraint to only change what's necessary, and explicit instructions not to "improve" surrounding code. The output quality goes up dramatically when you narrow the scope.
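Concretely, the prompt shape looks something like this. It's a sketch with a hypothetical path, function name, and task, not a verbatim prompt; what matters is the shape of the constraints.

```python
# Sketch of a diff-only refactoring prompt. The path, function name, and
# task are hypothetical; what matters is the shape of the constraints.
function_source = "...paste or fetch the current body of compute_tax here..."

prompt = f"""Refactor `compute_tax` in src/billing/invoice.py to remove the
duplicated rounding logic.

Hard constraints:
- Respond with a unified diff ONLY: no full-file rewrites, no prose.
- Change nothing outside `compute_tax`: no renames, no import reordering,
  no style fixes to surrounding code, even if they look like improvements.
- If the fix cannot stay inside `compute_tax`, stop and say so instead of
  widening the scope.

Current function body:
{function_source}
"""
print(prompt)
```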