Recent Posts

Humans in the Loop

January 20, 2026

The Oh My Zsh core team recently met up in person at GitHub Universe in San Francisco. Getting the maintainers into the same room matters more than most people realize. It strips away the abstractions and forces the conversation back to reality… what’s working, what’s breaking, what’s quietly draining energy, and what’s worth protecting long term.

We spent a good amount of time talking about AI. Not as a culture war or a prediction exercise, but as something already embedded in our day-to-day work. It shows up in our jobs, our creative projects, our personal workflows, and increasingly, in open source contributions to projects like Oh My Zsh. We don’t all share the same worldview on AI usage. That’s fine. Alignment on outcomes matters more than agreement on philosophy.

What we do share is stewardship of a project that millions of people rely on. And stewardship means occasionally slowing down long enough to name a problem instead of pretending it’ll sort itself out.

The pattern we couldn’t ignore

Over the past year, we’ve seen a noticeable increase in contributions that appear to lean heavily on AI tooling. That, by itself, is not a problem. People should use whatever tools help them learn and build. Forks are a playground. Experimentation is healthy.

What changed was the shape and cost of the work landing on maintainers’ desks.

We’re seeing larger pull requests from first-time contributors. Broader scope than necessary. Changes that touch parts of the codebase unrelated to the stated goal. PR descriptions and follow-up comments that feel polished but oddly disconnected from the actual implementation. In some cases, we genuinely can’t tell whether the contributor understands the changes they’re proposing.

That uncertainty matters because review is the bottleneck. Not code generation. Review. When a PR is sprawling, optimistic, and hard to reason about, it doesn’t matter how fast it was produced. It consumes volunteer time. AI doesn’t remove that constraint. In many cases, it amplifies it.

We needed clarity, not vibes

At some point, “we’ll handle it case by case” stops being fair to contributors and starts exhausting maintainers. We needed something explicit we could point to. Not a moral stance. Not a ban. Just clarity around expectations and accountability.

The team agreed that I’d take the first pass at researching how other open source projects are approaching AI usage and propose a path forward. We already maintain a private GitHub project where we collaborate on behind-the-scenes decisions… security considerations, moderation questions, and process changes that don’t belong in public threads. That gave us space to pressure-test ideas before bringing anything to the community.

What I found was interesting. Many projects treat AI as a separate category entirely, with standalone policies layered on top of CONTRIBUTING.md. Others get extremely prescriptive, trying to enumerate when AI is allowed, how much is allowed, and under what circumstances.

I understand the impulse. But it also felt like a distraction.

CONTRIBUTING.md already exists to describe how humans contribute responsibly. Tools change. Responsibility doesn’t. Treating AI as something fundamentally different risks avoiding the harder conversation… ownership.

Where does AI start and end anyway?

If your editor suggests a line of code, is that AI? If autocomplete finishes a function, does it count? If a tool rewrites your PR description, is that different from asking a colleague to proofread it? If a Copilot agent updates a handful of links, is that categorically different from doing a global search-and-replace by hand?

Bright lines fall apart quickly. In reality, maintainers rely on judgment, pattern recognition, and experience. We always have. AI just makes it cheaper to submit work that looks plausible without being deeply understood. Volume goes up. Review cost goes up with it.

So we didn’t try to play detective. We’re not interested in policing tooling. We’re interested in accountability.

The approach we took

We chose to integrate guidance about AI-assisted contributions directly into our existing contribution guidelines, instead of treating it as a separate class of work. Concretely, that meant adding a “Working with AI tools” section to our CONTRIBUTING.md and updating our PR templates so that contributors can disclose how they used AI when it’s relevant to their work.

The standard is straightforward: if you submit code to core, you own it. You must understand every line. You must be able to explain what changed and why. You must test what you touched and keep the scope focused. And if something breaks, you should be able to debug it without regenerating your way out of the problem.

That’s not an “AI policy.” That’s basic stewardship.

If you’re curious, the pull request where we proposed and merged these changes is ohmyzsh/ohmyzsh#13520.

A quick story, because this matters

We’ve already had people reach out to say they’re done using Oh My Zsh because I once used GitHub Copilot to help update a few links in the codebase and referred to a class of low-effort contributions as “slop.”

Yes, I could have done those updates manually. I’ve been writing shell scripts for decades. The point wasn’t capability. The point was experimentation.

I opened a GitHub issue, let the Copilot agent propose a change, and then did what maintainers always do: we reviewed it. A human noticed a problem. We modified the PR. We merged it.

Exactly like we always have.

The only difference is that I didn’t need to fire up my editor to replace a handful of URLs. The human checkpoints didn’t disappear. Responsibility didn’t disappear. Review didn’t disappear.

That’s the entire point.

What hasn’t changed

Nothing gets merged without human review. Every approval still represents a maintainer making an informed decision on behalf of the community. AI doesn’t remove that responsibility. It increases it.

We reserve the right to ask contributors to explain their code. To show how they tested it. To narrow scope. To revise. To collaborate. And yes… to decline changes that don’t meet the bar.

Not because we’re precious about the code. Because volunteer time is the most finite resource in open source.

Oh My Zsh exists to make the terminal a little more delightful for humans, keystroke by keystroke. If your contribution moves us in that direction, we’re excited to review it. If it reads like output optimized for confidence instead of clarity, we’ll say no.

We’ll be friendly about it. But we’ll say no.

If you want to follow along, the project lives at github.com/ohmyzsh/ohmyzsh, and the public-facing home for documentation and installation is ohmyz.sh.

Tools evolve. Stewardship remains a human job.

Why So Serious?

December 01, 2025

The question Sheon Han poses — “Is Ruby a serious programming language?” — says a lot about what someone thinks programming is supposed to feel like. For some folks, if a tool feels good to use… that must mean it isn’t “serious.”

Ruby never agreed to that definition. If it did, I missed the memo.

If you arrived late, you missed a chapter when the language felt like a quiet rebellion. The community was small. The energy was playful. Ruby tapped you on the shoulder and asked what would happen if programming didn’t have to feel intimidating… what might be possible if clarity and joy were allowed.

The early skeptics were predictable. Java architects. Enterprise traditionalists. Anyone whose identity depended on programming being a stern activity. They said Ruby was unserious. And the community mostly shrugged… because we were busy building things.

Ruby made programming approachable. Not simplistic… approachable. That distinction matters. It helped beginners see the path forward. It helped small teams build momentum before anxiety caught up. It helped experienced developers rediscover a sense of lightness in their work.

This is why bootcamps embraced it. Why tiny startups found traction with it. Ruby wasn’t trying to win benchmarks… it was trying to keep you moving. When you’re creating something new, that matters far more than the theoretical purity of your type system.

And yes… critics love the Twitter example. But look closer. Ruby carried them further than most companies will ever reach. They outgrew their shoes. That’s not an indictment… that’s success.

In my world… running a software consultancy for a few decades… I’ve never seen a team fail because they chose Ruby. I have seen them fail because they chose complexity. Because they chose indecision. Because they chose “seriousness” over momentum. Ruby just needed to stay out of the way so people could focus on the real work.

And while folks keep debating its “credibility,” the receipts are plain. Shopify moves billions through Ruby. Doximity supports most physicians in the US with Ruby. GitHub held the world’s source code together for years using Ruby. This isn’t sentiment. This is proof.

What outsiders often miss is the culture. Ruby attracts people who care how code feels to write and read. Not because of nostalgia… but because most of our careers are spent living inside someone else’s decisions. Joy isn’t a luxury. It’s how sustainable software gets made.

I don’t know Sheon personally, but I’m guessing we have about as much in common in our music tastes as we do on whether _why’s Poignant Guide to Ruby made any sense. And that’s fine. That’s actually the point.

And on that note… there’s one thing I genuinely agree with Sheon about. Ruby doesn’t seem to be for them. That’s not a failure of the language. That’s just taste. Some people like jazz. Some like metal. Some prefer the comfort of ceremony. Ruby has never tried to convert anyone. It simply resonates with the people it resonates with.

Since we’re noting taste, I’ll add something of my own. As an atheist, it feels oddly appropriate to mention my lack of religion here… mostly because it mirrors how strangely irrelevant it was for the article to bring up Matz’s religion at all. It didn’t add context. It didn’t deepen the argument. It was just… there. A detail reaching for meaning that wasn’t actually connected to the point.

Sheon mentions approaching Ruby without “the forgiving haze of sentimentality.” Fair enough. But the sentiment wasn’t nostalgia. It was gratitude. Gratitude for a language that centers the human being. Gratitude for a community that believes programming can be expressive. Gratitude for a tool that makes the work feel lighter without making it careless.

But here’s the part the discourse keeps missing… this isn’t just about the past.

The future of programming is fuzzy for everyone. Anyone claiming to have the master recipe for what’s coming is bullshitting you. The future won’t be owned by one paradigm or one language or one ideology. It’ll be a blend… a messy collage of ideas, old and new, borrowed and rediscovered.

And in that future… Ruby’s values aren’t relics. They’re an anchor. Readability will matter more as AI writes more code. Maintainability will matter more as products live longer. Joy will matter more as burnout becomes the default state.

And if you need a reminder that seriousness isn’t the reliable signal people wish it were…

The serious candidate doesn’t always get elected.
The serious musician doesn’t always get signed.
The serious artist doesn’t always sell.
The serious man doesn’t always find a serious relationship.
The serious startup doesn’t always find product-market fit.
The serious engineer doesn’t always write the code that lasts.
The serious rewrite doesn’t always solve the real problem.

Culture doesn’t reliably reward the serious. Neither does business.
It rewards the resonant. The clear. The human. The work that connects.

Ruby has always leaned toward that side of the craft. Toward the part of programming that remembers people are involved. Toward the part that says maybe the code should serve the team… not the other way around.

And honestly… I think unserious people will play an important role in the future too. The curious ones. The playful ones. The ones who keep the door propped open instead of guarding it. They’ll keep the industry honest. They’ll keep it human.

So is Ruby “serious”? I still think that’s the wrong question.

A better one is… does Ruby still have something meaningful to contribute to the next chapter of software?

It does.
And if that makes it “unserious”… maybe that’s exactly why it belongs in the conversation.

Who Keeps the Lights On?

October 20, 2025

Every so often, someone in the Ruby community will ask,
“So… what does Planet Argon actually do these days?”

Fair question.

Architecture for Contraction

October 13, 2025

We’ve spent the last decade optimizing for scale. How do we handle more traffic? More users? More engineers? The assumptions were baked in: Growth is coming. Prepare accordingly.

So we split things apart. We mapped services to teams. We built for the org chart we were about to have.

Then 2023 happened. And 2024. And now 2025.

Turns out, the future isn’t always bigger.

Organizations, Like Code, Deserve Refactoring

October 09, 2025

I’ve been thinking about what happens when open source organizations hit their breaking point… when funding dries up, relationships fracture, and everyone’s scrambling to make sense of what went wrong.

It turns out, the patterns look familiar.

Talking Shop with Ruby & Rails Maintainers at Rails World 2025

September 22, 2025

As the opening keynote on Day 2 of Rails World 2025, I had the chance to host a panel with three people who’ve been shaping the direction of both Ruby and Rails from deep within the internals.

We covered a lot in an hour:

  • What they’ve been working on behind the scenes
  • Which areas of Ruby and Rails could use more community support
  • The evolving release process for the language
  • Why Hiroshi’s focused on improving the experience for developers on Windows
  • How security fixes are coordinated across multiple versions
  • Performance work related to YJIT and ZJIT
  • JSON parsing performance and compatibility
  • What keeps them motivated to continue maintaining the ecosystem

There’s even a moment where Aaron and Jean get into a friendly disagreement about performance and priorities. If you enjoy technical nuance and sharp perspectives, you’ll appreciate that exchange.

And yes… I asked Aaron about his favorite Regular Expression. His response did not disappoint.

It was a fun, thoughtful, and occasionally surprising conversation — and a reminder that Ruby and Rails continue to evolve in the hands of people who care deeply about their future.

If you weren’t in Amsterdam or want to revisit it, the full panel is now available.

Also worth pairing with this interview with Jean on the On Rails podcast, where we dig into IO-bound workloads, misconceptions, and what it’s like maintaining Rails at scale.

A solid pairing if you’re curious where the ecosystem is headed next.

7 Stages of Software Tech Stack Adoption (You're Probably in Stage 5)

September 20, 2025

I’ve been part of the Ruby on Rails ecosystem for over two decades. I’ve watched teams adopt Rails with wild enthusiasm… evolve their systems… struggle through growing pains… and eventually find themselves in an uncomfortable position: debating whether to abandon the tools that once brought them so much joy.

I don’t think that’s necessary… or even wise.

But I do think it’s understandable.

After working with and talking to hundreds of teams… many of them using Rails, Laravel, Ember.js, or even React… I’ve noticed a pattern. A lifecycle of sorts. The way teams internally adopt and evolve their relationship with a technical stack. I’ve seen it reflected in our consulting clients at Planet Argon, the guests on my podcasts (Maintainable and On Rails), and peers who’ve been part of the various peak waves of these ecosystems.

And while every team is different, the stages of internal tech stack adoption often follow a similar spiral.

This post is an attempt to describe that spiral.

Not as a fully baked theory, but as a conversation starter. A mirror. And maybe a compass.

Because whether your team is building your core product with Rails, or you’re a non-software company maintaining internal tools on Laravel, understanding where you are in this lifecycle might help you understand what comes next.

🌀 The Spiral of Internal Tech Stack Adoption

Before we go deeper, here’s a quick overview of the seven stages I’ve observed. These aren’t fixed; your team might skip around or revisit them multiple times. But in general, this is the pattern I’ve seen:

  1. Adopting: A small group of enthusiastic engineers selects and introduces the stack while building a prototype or MVP.

  2. Expanding: The stack proves useful… so it spreads. More features, more developers, more tooling.

  3. Normalizing: The stack becomes the default. Teams standardize around it. Hiring pipelines and best practices emerge.

  4. Fragmenting: Pain points surface. Teams bolt on new tools or sidestep old ones. Internal consistency erodes.

  5. Drifting: The stack feels sluggish. Upgrades are deferred. The excitement is gone.

  6. Debating: Conversations shift to rewrites or migrations. Confidence is shaken.

  7. Recommitting: Teams pause, reflect, and decide to reinvest in the stack… and their shared future with it.

Again, these stages aren’t a ladder; they’re a spiral.

And the question your team has to ask is: Are we spiraling upward… or downward?

Because while The Downward Spiral is a great album, it doesn’t have to be your trajectory.


♻️ It’s a Cycle, Not a Ladder

It might be tempting to look at this lifecycle and think, “Our goal is to get to the Recommitting stage and stay there forever.”

But that’s not how this works.

Every team will move through these stages multiple times over the lifespan of their product. Shifting priorities, team turnover, organizational pivots… they all create new dynamics that ripple across your tech stack.

Recommitting isn’t a finish line. It’s an inflection point. One that clears the fog, sharpens priorities, and invites your team to move forward with intent.

Just don’t mistake clarity for comfort… the spiral keeps turning.

When Your Cache Has a Bigger Carbon Footprint Than Your Users

August 28, 2025

Caching in Rails is like duct tape. Sometimes it saves the day. Sometimes it just makes a sticky mess you’ll regret later.

Nowhere is this more true than in data-heavy apps… like a custom CRM analytics tool that glues together a few systems. You know the type: dashboards full of metrics, funnel charts, KPIs, and reports that customers swear they need “real-time.”

And that’s where the caching debates begin.

Every strategy someone proposes is valid. And every one is expensive in its own way.


The Dashboard Problem

Your SaaS app has grown to 2,000 customers, each with multiple users.

For the overwhelming majority, dashboards load just fine. Nobody complains.

But then your whales log in. The Fortune 500 accounts your sales reps obsess over. Their dashboards pull data from half a dozen APIs, crunch millions of rows, and stitch together a wall of charts. It’s not just a page. It’s practically a data warehouse in disguise.

These dashboards are slow. Painfully slow. And you hear about it… through support tickets, account managers, and sometimes even a terse email from someone with “Chief” in their title.

So your engineering team digs in. You fire up AppSignal, Datadog, or Sentry and zero in on the slowest dashboard requests. You look at traces, database query timings, and request logs. You chart out the p95 and p99 response times to understand how bad it gets for the biggest customers.

From there, you start experimenting:

  • Are we missing database indexes?
  • Are there N+1 queries lurking in the background?
  • Can we preload or memoize expensive calls?
  • Could a few data points be cached individually, without touching the rest?
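Here’s a minimal sketch of what a couple of those experiments can look like in Rails… assuming hypothetical Account and Deal models and associations, not anything from a real codebase:

  class DashboardsController < ApplicationController
    def show
      @account = Account.find(params[:account_id])

      # Preload associations up front instead of firing one query per deal owner (the N+1).
      @deals = @account.deals.includes(:owner, :stage).order(updated_at: :desc)
    end

    private

    # Memoize the expensive aggregate so multiple partials can use it without recomputing.
    def pipeline_totals
      @pipeline_totals ||= @deals.group(:stage_id).sum(:amount)
    end
    helper_method :pipeline_totals
  end

The “cache a few data points individually” idea is usually just a fragment cache in the view… wrapping only the expensive widget in a cache [@account, :pipeline_widget] block instead of caching the whole page.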

You squeeze what you can out of the obvious optimizations. Maybe things improve… but not enough.

So the conversation shifts.


Negotiating “Real-Time” Expectations

When your product team actually sits down with those whale customers, the conversation shifts.

They start by saying: we need real-time data. But after a little probing, everyone realizes “real-time” doesn’t always mean right now this second.

Maybe what they really need is a reliable snapshot of activity as of the end of the previous day. That’s good enough for the kinds of decisions their leadership is making in the morning meeting. Nobody is making million-dollar calls based on a lead that just landed five minutes ago.

And your team can remind them: there are other real-time metrics in the system. For example:

  • New leads created today.
  • Active users in the past hour.
  • Pipeline changes as they happen.

So now you’ve reframed the dashboard story. Instead of one giant “real-time” data warehouse, you split it into two categories:

  1. Daily rollups. Crunch the heavy stuff once a night. End-of-day data is sufficient, and it’s reliable.
  2. Today’s activity. Show a few real-time metrics that are fast to calculate. Give customers the dopamine hit of “live” data without boiling the ocean.

That’s usually enough to recalibrate expectations. Customers feel like they’re still getting fresh data, while your app no longer sets itself on fire every time a big account logs in.


The 1 AM Job (and Its Hidden Cost)

Armed with that agreement, your team ships the “reasonable” solution most of us have built at least once:

  • Every night at 1 AM, loop through all 2,000 customers.
  • Generate every dashboard report.
  • Cache the results somewhere “safe.”

The next morning, dashboards are instant. Support tickets quiet down. Account reps breathe easier. Everyone celebrates.

But here’s the kicker: the whales were the problem. The rest of your customers never needed this optimization in the first place. Their dashboards were already fine.

So now you’ve turned one customer’s problem into everyone’s nightly job. And under the hood, you’ve cranked through hours of CPU, memory, and database load… just to prepare data for customers who won’t even log in later today.

Worse, you’ve stuffed your background job queue with 2,000 little tasks every night. Which means your queue system—whether it’s Sidekiq, Solid Queue, or GoodJob—is spending precious time juggling busy work instead of focusing on the jobs that actually matter. And when those queues get stuck, or a worker crashes, you’re left wading through a mountain of pending jobs just to catch up.

This is what I call Cache Pollution: the buildup of unnecessary caching work that bloats your systems, slows down your queues, and leaves your caching strategy with a far bigger carbon footprint than it needs to. Another benefit of tackling Cache Pollution early is future flexibility — you might eventually solve the computation challenges in a different way, and you won’t be anchored to big, scary scheduled tasks that churn through all of your customers every night.


Frequency Matters More Than We Admit

Do these reports need to run every single day… or only on weekdays when your customers actually log in?

If your traffic drops on Saturdays and Sundays, consider a lighter schedule. Or even none at all. Because “slow” isn’t so slow when almost nobody is around. A BigCorp admin poking the dashboard on Sunday morning might be fine with an on-demand render… especially if the weekday experience is snappy.

And here’s another angle: if your scheduled job runs at 1 AM, that means when a BigCorp user logs in later that same day, they’re still looking at data that’s less than 24 hours old. For most business use cases, that’s plenty. You don’t need to rerun heavy jobs every few hours just because you can.

This is all about right-sizing frequency:

  • Weekday cadence: nightly rollups for whales; maybe twice a day if usage demands it.
  • Weekend cadence: pause, or run a narrower subset.
  • Holiday mode: same idea… different switch.

If your dashboard code doesn’t rely on the cache to render, you keep the option to not precompute. That flexibility is where the savings live. As the business grows, the cost of overly eager schedules grows with it… so design for dials, not hard-coded habits.
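Here’s a rough sketch of one such dial, using a hypothetical nightly rollup job. The Account.whales scope, the DashboardRollup helper, and the WEEKEND_ROLLUPS flag are placeholders, not a prescription:

  class DashboardRollupJob < ApplicationJob
    queue_as :low_priority

    def perform
      # The dial: skip weekends unless someone deliberately flips the flag back on.
      return unless run_today?

      Account.whales.find_each do |account|
        DashboardRollup.generate_for(account)   # the heavy, precomputed path
      end
    end

    private

    def run_today?
      return true if ENV["WEEKEND_ROLLUPS"] == "enabled"

      !Date.current.on_weekend?
    end
  end

The schedule itself ("0 1 * * 1-5" for 1 AM on weekdays) can live in whichever scheduler you already use… sidekiq-cron, Solid Queue’s recurring tasks, GoodJob’s cron support, or a plain crontab entry that runs a Rake task.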


Other Things to Consider with Recurring Tasks

One more question to ask about recurring scheduled jobs: do you really need to iterate through all users or all organizations?

In many cases, the answer is no. Most customers don’t trigger the conditions that require a heavy recompute. Yet teams often design jobs to blast across every top-level object in the database, every night, without discrimination.

Instead, look for signals that help you scope the work down:

  • Which organizations actually logged in today?
  • Which customers have datasets large enough to need optimization?
  • Which accounts crossed a threshold since the last run?

By narrowing the set of work each job touches, you cut down on wasted compute, reduce queue congestion, and avoid the kind of Cache Pollution that grows silently as your business scales.
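In practice, that scoping can be a couple of where clauses on whatever kicks off the nightly run. A sketch, with a hypothetical Organization model and columns:

  # Only organizations that were active today, or that are big enough to need
  # precomputing, get a rollup job enqueued.
  needs_rollup = Organization
    .where("last_active_at >= ?", 24.hours.ago)
    .or(Organization.where("records_count >= ?", 100_000))

  needs_rollup.find_each do |org|
    OrganizationRollupJob.perform_later(org.id)
  end

Everyone outside that set keeps rendering on demand, which was working fine for them anyway.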


Clever Tricks We’ve Seen

The trick isn’t just caching everything for everyone. It’s knowing who to cache for and when.

  • Selective pre-caching. Only build nightly rollups for your whales. Maybe 50 out of 2,000 customers. Everyone else can render on demand, which was fine all along.
  • Cache on login. If you know a user from BigCorp is signing in, enqueue a background job to warm up their dashboard before they hit it. You can even anticipate who they are based on a cookie value when they land on the Sign In page — before they’ve had a chance to trigger 1Password or type in their credentials, the system is already working behind the scenes to prep their dashboard. Even a 10–20 second head start can smooth the experience.
  • Cache on demand… with a fallback. If cached data is missing, build fresh on the spot. Outages happen when teams assume the cache will always be there.

And here’s a bonus: if your job fails at 1 AM, re-running it for 50 customers is a whole lot faster than crawling through 2,000.

Extra credit: scope your scheduled tasks so that when a customer crosses a certain threshold—say, user count, dataset size, or request volume—they automatically join the “whale” group. No manual babysitting required.
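Here’s a sketch of the second and third tricks together: warming when a likely whale lands on the sign-in page, and never assuming the cache exists at render time. The cookie name, the job, and the DashboardReport helper are all stand-ins:

  class SessionsController < ApplicationController
    def new
      # A signed cookie from a previous visit hints at who is about to log in,
      # so warming can start before they have even typed their credentials.
      account_id = cookies.signed[:last_account_id]
      WarmDashboardJob.perform_later(account_id) if account_id.present?
    end
  end

  class WarmDashboardJob < ApplicationJob
    def perform(account_id)
      account = Account.find_by(id: account_id)
      return unless account&.whale?   # everyone else renders on demand just fine

      DashboardReport.write_cache_for(account)
    end
  end

  # Read side: use the cache when it is there, rebuild on the spot when it is not.
  def dashboard_data(account)
    Rails.cache.fetch(["dashboard-rollup", account.id, Date.current], expires_in: 12.hours) do
      DashboardReport.build_for(account)   # the slow path, still available as a fallback
    end
  end

That fetch block is the fallback in action: if the 1 AM job failed or the key expired, the dashboard still renders… just slower.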


Other Patterns

Not all caching challenges look like dashboards.

Case Study: The Press Release Problem

We once managed a public-facing site for a massive brand. Whenever they dropped a big press release, it spread fast across social media. Traffic would spike within minutes.

Of course, that’s when the CEO would notice a typo. Or the PR team would need to update a paragraph to reflect a question from the media. Despite their editorial workflows, changes still had to happen after publication.

So we had to get clever. We couldn’t cache those fresh pages for hours. Instead, we used a sliding window approach:

  • First 5 minutes: cache for 30 seconds at a time.
  • After 5 minutes: increase to 1 minute.
  • After 10 minutes: increase to 2 minutes.
  • After 20 minutes: increase to 5 minutes.
  • After 6 hours: safe to cache for an hour.
  • After a day: cache for a few hours at a time.

This let us protect our Rails servers from massive traffic spikes when a new article was spreading fast, while still giving editors the ability to push corrections through quickly. Older articles, once stable, could safely sit in Akamai’s cache for hours.
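In code, the sliding window boils down to deriving a TTL from the article’s age. A sketch with thresholds mirroring the schedule above and a made-up helper name:

  def cache_ttl_for(article)
    age_in_minutes = (Time.current - article.published_at) / 60

    if    age_in_minutes < 5        then 30.seconds
    elsif age_in_minutes < 10       then 1.minute
    elsif age_in_minutes < 20       then 2.minutes
    elsif age_in_minutes < 6 * 60   then 5.minutes
    elsif age_in_minutes < 24 * 60  then 1.hour
    else                                 3.hours
    end
  end

  # In the controller, hand the window to the edge cache and browsers:
  # expires_in cache_ttl_for(@article), public: true

Setting the Cache-Control header this way is what let the edge cache absorb the spike instead of the Rails servers, while keeping corrections reasonably fast to propagate.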

At the time, Akamai could take up to seven minutes to guarantee a purge across their global network. Not ideal. We had to plan for that lag. Today, most CDNs can purge instantly, but back then… it was a constraint we had to design around.


A Final Challenge

A lot of what we’ve talked about here comes down to avoiding Cache Pollution.

That’s the unnecessary churn your system takes on when it generates data nobody asked for. It’s the background job queue bloated with thousands of tasks that fight with more important work. It’s the 1 AM process chewing through CPU just to prep dashboards for customers who never log in.

Cache Pollution looks like optimization on the surface… but underneath it’s just waste.

So before your team spins up the next caching project, stop and ask:

  • Who really needs this cache?
  • How fresh does it need to be?
  • What happens if the cache isn’t there?
  • Do we need to run it this often, or for this many customers?
  • Could we scale down the busy work instead of scaling it up?

Because the goal isn’t just faster dashboards. The goal is to keep your caching strategy lean, resilient, and focused — instead of leaving behind a trail of Cache Pollution that grows with every new customer you add.

The Internal Tooling Maturity Ladder

August 13, 2025

(or: How I’m Learning to Step Back So We Can Move Forward)

Here’s a pattern I’ve been contributing to more than I’d like to admit:

Someone on the team proposes a new internal tool. There’s a clear need. There’s momentum. Conversations start about how we’ll build it… what tools we’ll use… where it’ll live… whether it might become client-facing someday.

It hasn’t been built yet, but we’re already architecting the scaffolding.

And that’s usually when I step in.

Not with a thumbs-up. Not with funding. But with questions. The kind that start with, “What if we didn’t build this?”

It’s not fun. It’s not fair. And it’s not how I want to lead.

By that point, people are invested. They’ve done the thinking. They’ve shared the idea. They’ve taken a risk. And now I’m asking them to scale it back—or stop entirely.

This is me taking responsibility for that pattern.

So I did the only thing I know to do in moments like this: I wrote it down.
We now have a model. Internally, we call it The Internal Tooling Maturity Ladder.


🪜 The Ladder (in Plain English)

Level 0 – One-off Manual Script
Something one person runs on their own machine to save time or reduce repetitive work.
Maybe it lives in an unsaved file or as a task in Alfred or Automator.
It’s not elegant—but it works.
You run it manually. You copy-paste the result. You feel smug for five minutes.
No repo. No expectations. No long-term promises.

Example: A quick Ruby or Bash script that tallies something from an API and drops it into Slack.
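Roughly what that might look like, assuming a Slack incoming-webhook URL in an environment variable and a stand-in API endpoint:

  require "net/http"
  require "json"

  # Tally something from an API…
  tickets = JSON.parse(Net::HTTP.get(URI("https://example.com/api/tickets?status=open")))

  # …and drop the count into Slack via an incoming webhook.
  Net::HTTP.post(
    URI(ENV.fetch("SLACK_WEBHOOK_URL")),
    { text: "Open tickets this morning: #{tickets.size}" }.to_json,
    "Content-Type" => "application/json"
  )

No error handling, no repo, no promises… which is exactly the point at this rung.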


Level 1 – Shared Manual Script
You cleaned it up. You wrote a little README. You dropped it in a shared Gist or Google Drive.
It’s still manually triggered, but now others can use it too—if they read the instructions.
It’s still lightweight. Still safe.
And it’s often where great tools should stay.

Example: A command-line tool that a few team members can run locally, maybe to generate a report or fetch usage stats.


Level 2 – Scheduled Automation
Now we’re automating things.
It runs on a schedule—maybe through Zapier, a GitHub Action, or a scheduled Rake task.
No UI. No buttons. Just automated updates that go where we already spend time.
Slack. Google Sheets. Email.
These tools hum quietly in the background, doing one job well.

Example: A script that posts weekly project stats to a Slack channel every Monday morning.


Level 3 – Lightweight Internal Service
Now we’re getting fancy.
This has a small UI. A form. A dashboard. Maybe some configuration options.
It needs hosting. Credentials. Some thought about security.
It’s still simple enough that one person can manage it—but now it’s a thing.
And it needs some care.

Example: A mini app that lets the team search across client project docs or surface stale Jira tickets.


Level 4 – Fully Hosted Internal Product
This is a real web app.
It’s deployed. It has a frontend and a backend. It has users. Sessions. Maybe even tests (hopefully).
It needs to be maintained. Updated. Monitored.
It might solve a meaningful problem—but it’s not free.
This is the top of the ladder for a reason.

Example: A polished internal dashboard that’s become a critical part of day-to-day operations.


Start Lower

This isn’t a blueprint. It’s a conversation starter.

The higher you go, the more you commit—time, infrastructure, expectations.
So we’re learning to start lower on the ladder.
To earn our way up.
To see if people care before we care too much.


Why It Matters

Every internal tool is a promise.
To support it. To upgrade it. To explain it to the next person who inherits it.

And sometimes… the smallest version of the tool is all we need.

A Slack post.
A spreadsheet.
A script that helps one person do their job 10% faster.

Not everything needs a UI.
Not everything needs a repo.
And not everything needs me to be the one who calls time on the project two weeks in.

This post isn’t about our internal model. Not really.

It’s about building fewer things that trap us.
And creating more space to experiment without regret.

If you’ve found yourself playing the role of reluctant gatekeeper… you’re not alone.
This ladder is helping me find a better way.

One rung at a time.

The Features We Loved, Lost, and Laughed At: My RailsConf 2025 Talk Is Now Online

July 25, 2025

If you didn’t make it to RailsConf this year…or couldn’t make it to my talk…I’ve got good news: the full video is now live.

🎥 Watch it here


Preparing for this talk was one of the most nostalgic (and sometimes absurd) research dives I’ve done in years. I pitched The Features We Loved, Lost, and Laughed At thinking it would be easy to uncover a long list of removed or weird Rails features to poke fun at.

Turns out? They weren’t so easy to find.

Rails hasn’t just thrown things away. It’s looped. It’s learned. It’s come back to old ideas and made them better.

In the talk, I trace that evolution…using code examples and stories from the early days of ActiveRecord, form builders, observe_field, semicolon routes, and even a few lesser-known misadventures involving matrix parameters.

I touch on features like Observers (invisible glue, invisible bugs) and ActiveResource…which wasn’t confusing so much as it was optimistic. It assumed the APIs you were consuming were designed with Rails-like conventions in mind. That was rarely the case.

I also explore what Rails has taught us about developer happiness, what it means to build with care, and what the community keeps refining (and laughing about).

Here’s a quick example: I once wrote an InvoiceObserver that did four different things silently…and when it broke, it took hours to even figure out where the logic lived. Magical until it wasn’t.
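For anyone who never met one, here’s a rough sketch of the shape of an observer like that, with made-up side effects standing in for whatever the real one did. (Observers left Rails core in 4.0 and live on in the rails-observers gem.)

  class InvoiceObserver < ActiveRecord::Observer
    # Fires after every Invoice save, with no trace of it in the Invoice model itself.
    def after_save(invoice)
      InvoiceMailer.receipt(invoice).deliver
      SearchIndex.refresh(invoice)
      AuditTrail.record("invoice.saved", invoice.id)
    end

    def after_destroy(invoice)
      AuditTrail.record("invoice.deleted", invoice.id)
    end
  end

  # The only hint it exists, tucked away in config/application.rb:
  # config.active_record.observers = :invoice_observer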


Why Look Back Now?

With RailsConf coming to a close, it felt like the right moment to reflect not just on the framework…but on how we evolve alongside it.

Rails doesn’t just chase trends. It revisits its own decisions and asks: “What still brings us joy?”

That’s a rare trait in software. And it’s why Rails still feels like home for so many of us.

“Rails doesn’t just move forward…it reflects. It loops. It asks: Where’s the friction? What can we make effortless again?”

If you’re newer to the framework, or just curious what Rails has quietly taught us over the years…I hope you find something here to smile at.


I’m grateful to my Ruby friends…some old, some new…who shared memories, weird bugs, screenshots, mailing list lore, and just the right amount of healthy skepticism while I was putting this together.

Stop Pretending You're the Last Developer

July 16, 2025

Ruby on Rails is often celebrated for how quickly it lets small teams build and ship web applications. I’d go further: it’s the best tool for that job.

Rails gives solo developers a powerful framework to bring an idea to life—whether it’s a new business venture or a behind-the-scenes app to help a company modernize internal workflows.

You don’t need a massive team. In many cases, you don’t even need a team.

That’s the magic of Rails.

It’s why so many companies have been able to start with just one developer. They might hire a freelancer or a consultancy, or bring on a full-time engineer to get something off the ground. And often, they do.

Ideas get shipped. The app goes live. People start using it. The team adds features, fixes bugs, tweaks things here and there. Maybe they’ve got a Kanban board full of tasks and ideas. Maybe they don’t. Either way, the thing mostly works.

Until something breaks.

Someone has to redo work. A weird bug eats some data. A quick patch is deployed. Then someone in management asks the timeless question: “How do we prevent this from happening again?”

Time marches on. Other engineers come and go, but the original developer is still around. Still knows the system inside and out. Still putting out fires.

Eventually, the company stops backfilling roles. There’s not quite enough in the backlog to justify it. And besides, everything important seems to be in one person’s head. That person becomes both the system’s greatest asset—and its biggest risk.

This is usually about the time our team at Planet Argon gets a call.

Sometimes, it’s the developer who reaches out. They’re burned out. They miss collaborating with others. They’re tired of carrying the whole thing. Other times, it’s leadership. Things are moving too slowly. Tickets aren’t getting closed. The bugs they reported last quarter still haven’t been addressed. They’re worried about what happens if that one dev goes on vacation. Or leaves.

They’ve tried bringing in outside help… but nothing sticks. The long-term engineer keeps saying new people “don’t get it.”

By the time we step in, we’ve seen some version of this story many, many times.

Documentation? Sparse or outdated.
Tests? There are some, but good luck trusting them.
Git commit messages? A series of “fixes” and “WIP”.
Hardcoded credentials? Of course.
Onboarding materials? There’s nobody to onboard.
Rails upgrades? “We’ll get to it eventually… maybe.”

A New Rails Podcast: On Rails

June 25, 2025

Today marks the launch of On Rails, a new podcast produced by the Rails Foundation and hosted by yours truly.

We’ve recorded the first batch of episodes, and Episode 1 is out now: Rosa Gutiérrez on Solid Queue.

The show dives into technical decision-making in the Ruby on Rails world. Not the shiny trend of the week… but the real conversations teams are having about how to scale, what trade-offs to make, and what long-term maintainability actually looks like.

You’ll hear from developers running real apps. Some are building internal tools. Others work on products you’ve probably used. A few are out there blogging and tweeting… but many are too deep in the day-to-day to stop and write about it. They’re just doing the work — shipping, fixing, refactoring, and figuring it out as they go.

The idea for On Rails started with those hallway conversations at conferences. The ones that don’t make it into keynotes or blog posts. It grew out of the calls I have with clients at Planet Argon. And, of course, out of years of hosting Maintainable.fm.

You’d think that after recording over 200 episodes of Maintainable, I wouldn’t be so nervous to hit record on something new… but here we are. New show jitters are real.

We’re approaching this podcast with depth and focus. Fewer episodes. Longer interviews. Conversations that aim to surface lessons learned…and the thinking behind the decisions that shape real systems.

If you’re a Rails fan, I hope you’ll give it a listen. If subscribing is your thing, you know what to do. And if you’ve got a story worth sharing — I’d love to hear from you.


🎧 Listen to Episode 1: Rosa Gutiérrez: Solid Queue

🌐 Browse all episodes: onrails.buzzsprout.com

📢 Official announcement: Ruby on Rails blog


Backstory

Earlier this year, I dusted off this blog — which I started back in 2005 — and found myself reflecting on Maintainable, the podcast I’ve hosted for the past few years about long-term software health.

At the same time, I was toying with the idea of spinning off something more Rails-focused. A show that could spotlight the kinds of conversations I was already having…with clients, with other devs, and in those casual, between-session moments at conferences.

Right around then, Amanda from the Rails Foundation reached out with a prompt:

“A podcast of Rails devs talking about the nitty gritty technical decisions they’ve made along the way.”

…which aligned nicely with what I had been ruminating on. The timing was perfect and we decided to make it happen.

One of the early shifts for me was adapting to a more collaborative production process. I’ve been running Planet Argon for more than two decades — and I’m used to moving quickly, often without needing to pitch or workshop ideas with others. But with On Rails, I’ve had the opportunity to work closely with Amanda, the Foundation, and DHH. They’ve all taken an active interest in shaping the vision, the guests, and the format.

Another early challenge? The first round of guests were pitched to me — which meant jumping into the deep end with folks I hadn’t already spoken with. That raised the bar for prep. On Maintainable, I’ve occasionally relied on some degree of improvisation. Here, I knew I’d need to come in more prepared…and that’s been a good thing.

So On Rails was born.

I’ll still be hosting Maintainable (though likely on a slower cadence). And I’m excited to run both of these shows side by side — each with its own tone and focus.

Hope you get a chance to give it a listen.