AI Increased Your Output. It Also Increased Your Variance.

Reading time: ~3 min (like watching bundle install with only one new gem)

You gave a handful of your devs access to Claude Code Max… and things moved fast. Output went up. Then things started to drift. Not just formatting or naming. Structure. Approach. Assumptions. One developer builds guardrails around it. Another uses it like autocomplete. A third treats it like a thinking partner. Same tool, same codebase… different results. The code starts to reflect how each person uses the tool, not how the team builds software together.

There’s a tension here that most teams are sidestepping. You can feel the inconsistency creeping in, but saying “we should align on how we use this” sounds heavy-handed. So it gets framed as experimentation. Let people figure it out. Give it space. But this isn’t a side project. This is production work. Treating AI like open-ended R&D means everyone is running their own experiment… and nobody is learning together.

Inside the team, the gap gets wider.

A few engineers find a rhythm with these tools and start moving faster.

Others don’t hit the same stride.

That difference starts getting interpreted as a performance gap… even though the underlying workflows are all over the place. From the outside, leadership sees a few big wins and starts asking uncomfortable questions about why the rest of the team isn’t there yet.

A recent On Rails guest, Brian Scanlan, shared a metaphor that stuck with me. Commercial pilots often land planes on autopilot. They’re still paying close attention. Monitoring, inspecting, managing the system. They’re tested without it. They take control in unfamiliar scenarios. Sometimes they turn it off on purpose just to stay sharp. There’s pride in the craft. That’s the shift for engineers right now. The tool can handle more of the execution, but someone still owns the outcome… and knows when to step in, or step away from it entirely.

There’s another tension coming into focus.

Some engineers see this as survival.

A way to level up quickly and stay relevant.

That’s not irrational. But these tools pull a lot of the work into the CLI, away from your coworkers and into a more private loop. As a result, the learning also fragments. You start hearing more "look what I figured out" and less "look what we figured out." I’m not immune to this either. It’s easy to optimize your own workflow and forget that the team needs a shared one.

Most teams don’t have a dedicated group thinking about how this should work. So the problem doesn’t wait its turn. It shows up in code reviews, in Slack threads where people share what worked or vent about what didn’t, in 1:1s, and eventually in performance reviews. It shows up when leadership is staring at a list of names and trying to make sense of who is rising to the moment and who seems to be falling behind (regardless of whether that is actually the case).

Engineering leaders already have the authority to shape how work gets done. That hasn’t changed. How your team uses AI is now part of your system. If you don’t define it, it still exists; it’s just fragmented and hard to improve. None of this works if people are each operating in their own setup, with no shared baseline to compare notes against.

Treat your shared AI conventions like a style guide… CLAUDE.md or AGENTS.md or whatever it’s called by the time you’re reading this… prompt rules and shared skills, versioned, reviewed, and owned by the team.
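Concretely, that might look like a short file at the root of the repo that every engineer (and every tool) reads the same way. Here’s a minimal sketch of what a team-owned CLAUDE.md could contain; the sections, stack details, and commands below are placeholders for whatever your team actually agrees on, not a prescription:

```markdown
# CLAUDE.md: how this team works with AI tools

## Ground rules
- AI-generated changes go through the same review process as hand-written code.
- Keep diffs small and reviewable. No "it also refactored three other files" commits.

## Project context (example stack, adjust to yours)
- Rails app with RSpec and RuboCop. Follow the existing patterns in app/services/.
- Run `bundle exec rspec` and `bundle exec rubocop` before opening a PR.

## Prompting conventions
- Share prompts that worked in the team channel, not just the results.
- If the tool suggests a new dependency, call it out in the PR description.
```

The specific rules matter less than the mechanics: the file lives in version control, changes to it go through review, and everyone is iterating on one shared baseline instead of six private ones.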

Otherwise, you’re scaling your inconsistencies.

Hi, I'm Robby.


I run Planet Argon, where we help organizations keep their Ruby on Rails apps maintainable—so they don't have to start over. I created Oh My Zsh to make developers more efficient and host both the On Rails and Maintainable.fm podcasts to explore what it takes to build software that lasts.