Humans in the Loop

The Oh My Zsh core team recently met up in person at GitHub Universe in San Francisco. Getting the maintainers into the same room matters more than most people realize. It strips away the abstractions and forces the conversation back to reality… what’s working, what’s breaking, what’s quietly draining energy, and what’s worth protecting long term.

We spent a good amount of time talking about AI. Not as a culture war or a prediction exercise, but as something already embedded in our day-to-day work. It shows up in our jobs, our creative projects, our personal workflows, and increasingly, in open source contributions to projects like Oh My Zsh. We don’t all share the same worldview on AI usage. That’s fine. Alignment on outcomes matters more than agreement on philosophy.

What we do share is stewardship of a project that millions of people rely on. And stewardship means occasionally slowing down long enough to name a problem instead of pretending it’ll sort itself out.

The pattern we couldn’t ignore

Over the past year, we’ve seen a noticeable increase in contributions that appear to lean heavily on AI tooling. That, by itself, is not a problem. People should use whatever tools help them learn and build. Forks are a playground. Experimentation is healthy.

What changed was the shape and cost of the work landing on maintainers’ desks.

We’re seeing larger pull requests from first-time contributors. Broader scope than necessary. Changes that touch parts of the codebase unrelated to the stated goal. PR descriptions and follow-up comments that feel polished but oddly disconnected from the actual implementation. In some cases, we genuinely can’t tell whether the contributor understands the changes they’re proposing.

That uncertainty matters because review is the bottleneck. Not code generation. Review. When a PR is sprawling, optimistic, and hard to reason about, it doesn’t matter how fast it was produced. It consumes volunteer time. AI doesn’t remove that constraint. In many cases, it amplifies it.

We needed clarity, not vibes

At some point, “we’ll handle it case by case” stops being fair to contributors and starts being exhausting for maintainers. We needed something explicit we could point to. Not a moral stance. Not a ban. Just clarity around expectations and accountability.

The team agreed that I’d take the first pass at researching how other open source projects are approaching AI usage and propose a path forward. We already maintain a private GitHub project where we collaborate on behind-the-scenes decisions… security considerations, moderation questions, and process changes that don’t belong in public threads. That gave us space to pressure-test ideas before bringing anything to the community.

What I found was interesting. Many projects treat AI as a separate category entirely, with standalone policies layered on top of CONTRIBUTING.md. Others get extremely prescriptive, trying to enumerate when AI is allowed, how much is allowed, and under what circumstances.

I understand the impulse. But it also felt like a distraction.

CONTRIBUTING.md already exists to describe how humans contribute responsibly. Tools change. Responsibility doesn’t. Treating AI as something fundamentally different risks avoiding the harder conversation… ownership.

Where does AI start and end anyway?

If your editor suggests a line of code, is that AI? If autocomplete finishes a function, does it count? If a tool rewrites your PR description, is that different from asking a colleague to proofread it? If a Copilot agent updates a handful of links, is that categorically different from doing a global search-and-replace by hand?
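
To make that last comparison concrete, the by-hand version is roughly a one-liner. Here’s a minimal sketch with placeholder URLs, assuming GNU sed:

```zsh
# Minimal sketch with placeholder URLs; assumes GNU sed (macOS sed needs -i '').
# List every tracked file that mentions the old URL, then rewrite it in place.
git grep -l 'https://old.example.com' \
  | xargs sed -i 's|https://old.example.com|https://new.example.com|g'
```

Nobody would call that AI, but the end result is the same batch of mechanical edits.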

Bright lines fall apart quickly. In reality, maintainers rely on judgment, pattern recognition, and experience. We always have. AI just makes it cheaper to submit work that looks plausible without being deeply understood. Volume goes up. Review cost goes up with it.

So we didn’t try to play detective. We’re not interested in policing tooling. We’re interested in accountability.

The approach we took

We chose to integrate guidance about AI-assisted contributions directly into our existing contribution guidelines, instead of treating them as a separate class of work. Concretely, that meant adding a “Working with AI tools” section to our CONTRIBUTING.md and updating our PR templates so that contributors can disclose how they used AI when it’s relevant to their work.
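
As a purely illustrative sketch (not the exact wording we merged), the disclosure prompt in a PR template can be as small as this:

```markdown
<!-- Illustrative sketch only, not the actual Oh My Zsh template text. -->
### AI assistance (if any)

Briefly describe how AI tooling was involved in this change, if at all:

- [ ] I understand every line in this PR and can explain what changed and why
- [ ] I tested the changes I touched
```

The checkbox isn’t the point. The prompt to think about ownership before asking a volunteer for review is.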

The standard is straightforward: if you submit code to core, you own it. You must understand every line. You must be able to explain what changed and why. You must test what you touched and keep the scope focused. And if something breaks, you should be able to debug it without regenerating your way out of the problem.

That’s not an “AI policy.” That’s basic stewardship.

If you’re curious, the pull request where we proposed and merged these changes is ohmyzsh/ohmyzsh#13520.

A quick story, because this matters

We’ve already had people reach out to say they’re done using Oh My Zsh because I once used GitHub Copilot to help update a few links in the codebase and referred to a class of low-effort contributions as “slop.”

Yes, I could have done those updates manually. I’ve been writing shell scripts for decades. The point wasn’t capability. The point was experimentation.

I opened a GitHub issue and let the Copilot agent propose a change. Then we did what maintainers always do: we reviewed it. A human noticed a problem. We modified the PR. We merged it.

Exactly like we always have.

The only difference is that I didn’t need to fire up my editor to replace a handful of URLs. The human checkpoints didn’t disappear. Responsibility didn’t disappear. Review didn’t disappear.

That’s the entire point.

What hasn’t changed

Nothing gets merged without human review. Every approval still represents a maintainer making an informed decision on behalf of the community. AI doesn’t remove that responsibility. It increases it.

We reserve the right to ask contributors to explain their code. To show how they tested it. To narrow scope. To revise. To collaborate. And yes… to decline changes that don’t meet the bar.

Not because we’re precious about the code. Because volunteer time is the scarcest resource in open source.

Oh My Zsh exists to make the terminal a little more delightful for humans, keystroke by keystroke. If your contribution moves us in that direction, we’re excited to review it. If it reads like output optimized for confidence instead of clarity, we’ll say no.

We’ll be friendly about it. But we’ll say no.

If you want to follow along, the project lives at github.com/ohmyzsh/ohmyzsh, and the public-facing home for documentation and installation is ohmyz.sh.

Tools evolve. Stewardship remains a human job.

Hi, I'm Robby.


I run Planet Argon, where we help organizations keep their Ruby on Rails apps maintainable—so they don't have to start over. I created Oh My Zsh to make developers more efficient and host both the On Rails and Maintainable.fm podcasts to explore what it takes to build software that lasts.