I recently listened to an episode of Engineering Enablement where Abi Noda and Jesse Adametz (Senior Director of Platform Engineering at Twilio) dug into the current state of AI adoption, how to measure its ROI, background agents, and the shifting role of the engineer. It’s a wide-ranging conversation, but what stuck with me most was how much of it lines up with what I’ve been seeing: the teams getting the most out of AI are the ones that already had their developer experience fundamentals in order.

The best AI strategy is the DX strategy you should have had all along

There’s a term floating around right now, “AI readiness,” and it sounds like it should involve something new. A new framework, a new platform initiative, a new way of thinking. But when you actually look at what makes an environment ready for AI, it’s a suspiciously familiar list: consistent development environments, solid CI, good test coverage, clear system boundaries, widely used and well-supported library and platform choices, actual documentation.

These are the same things we’ve been telling ourselves to fix for years. The difference is that AI makes the cost of not fixing them impossible to ignore. When an agent rolls its own logging wrapper instead of using the Winston setup your team already has in place, the gap in your documentation isn’t theoretical anymore. When there aren’t proper test suites or security guardrails in place, you can’t trust what an agent produces, no matter how good the model is.
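To make the Winston example concrete, here’s a minimal sketch of the kind of shared logger module I mean. The file name, config, and environment variables are all hypothetical; the point is that if something like this isn’t documented somewhere an agent will actually find it, the agent has no reason to reach for it.

```ts
// lib/logger.ts: a hypothetical "Winston setup your team already has in place"
import winston from "winston";

// One shared logger: structured JSON, a service name on every line,
// log level controlled by the environment.
export const logger = winston.createLogger({
  level: process.env.LOG_LEVEL ?? "info",
  format: winston.format.combine(
    winston.format.timestamp(),
    winston.format.json()
  ),
  defaultMeta: { service: process.env.SERVICE_NAME ?? "api" },
  transports: [new winston.transports.Console()],
});

// What you want generated code to do:
//   import { logger } from "../lib/logger";
//   logger.info("user created", { userId });
//
// What an agent does when nothing points it here: wrap console.log in its own
// helper, and now there are two logging conventions to reconcile in review.
```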

Code generation was never the hard part

Here’s the thing that I think gets lost in the AI conversation: we’ve always had “code generation.” It was just really expensive. It involved humans, and those humans needed context, judgment, ramp-up time, and coffee. What AI has done is make the act of producing code dramatically cheaper. That’s genuinely significant, but it’s worth being honest about what it does and doesn’t solve.

Cheaper code generation doesn’t tell you what to build. It doesn’t tell you whether the thing you’re building is the right thing, or whether it fits cleanly into the system you already have. The hard part of software engineering has always been the decisions upstream of the code: scoping, prioritizing, understanding the domain, making tradeoffs.

There’s a thought experiment from the podcast that I keep coming back to. Imagine you took a mid-level IC on your team and gave them three direct reports with infinite coding capacity. Would your org suddenly be more productive? Probably not. Because coding was never the thing that constrained that person’s output. It was the judgment, the context-gathering, the figuring-out-what-to-do part. And that’s the part AI hasn’t made cheaper yet.

The bottleneck didn’t disappear, it moved

For a long time, code generation was the bottleneck. Not because it was conceptually hard, but because it was expensive. It required humans, and humans are slow, context-dependent, and limited in how many things they can hold in their heads at once. So we organized entire workflows around that constraint: sprints, story points, capacity planning, all of it shaped by the assumption that producing code was the scarce resource.

AI has removed that constraint, or at least drastically reduced it. But the bottleneck didn’t disappear. It just moved. If you speed up code generation by 5x but you don’t change anything about how code gets reviewed, merged, tested in CI, or deployed, congratulations, you now have a massive queue at the review stage.
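The arithmetic behind that is blunt. A sketch with invented numbers:

```ts
// Back-of-the-envelope, with invented numbers: authoring gets 5x faster,
// review capacity stays where it was.
const authoredPerDay = 4 * 5; // 4 PRs/day before, 20/day with AI assistance
const reviewedPerDay = 6;     // review throughput unchanged
const queueGrowthPerDay = authoredPerDay - reviewedPerDay;

console.log(`Review queue grows by ${queueGrowthPerDay} PRs every day`); // 14
```

Nothing in that toy calculation is specific to review; it applies to whichever stage downstream of code generation still runs at the old speed.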

Anyone who read The Phoenix Project back in the day is nodding along here (if you haven’t read it, you definitely should). The whole point of the assembly-line analogy was that optimizing one station without looking at the full system just relocates the pile. AI tools are, right now, a very effective way of relocating the pile to wherever your next-weakest link is. For most teams I talk to, that’s review. For some, it’s environment setup. For others, it’s the judgment call about what to build in the first place.

The interesting question isn’t “how do we make AI go faster?” It’s “where is the pile going to move next, and are we ready for it?”

The uncomfortable part

The uncomfortable implication is that a lot of orgs don’t have an AI problem. They have a years-old developer experience problem that they finally can’t ignore. That’s not as exciting a story to tell your exec team as “we’re going to transform with AI,” but it’s probably a more honest one.