Stumbling Into the Future: My Journey Building an Agentic Developer

Steve Mitchell — Steve’s AI Diaries

I set out to build a framework for autonomous software development. What I didn’t expect was how many times I’d have to tear it down and start again.

The first version worked. Elegantly. And the uncomfortable truth I had to learn the hard way — across three rebuilds, two spectacular failures, and one very expensive weekend — is that I should have stayed closer to it.

The Spark: Ralph Wiggum and the Loop

It started with someone else’s good idea. A Claude Code plugin called Ralph Wiggum was gaining traction in the AI developer community. I tried it, and the core concept immediately resonated. The approach was elegant: spec-driven development anchored to a PRD, with the AI tracking its own progress, writing lessons learned when it completed a task, and then — crucially — starting a completely fresh session for the next piece of work.

That fresh context was the insight. Rather than letting an AI accumulate confusion across a long session, you give it a clean slate every time. It picks up the next outstanding user story from the PRD, reviews progress, checks lessons learned from previous sessions, and carries on. Each iteration is focused and self-contained.
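The fresh-context loop described above can be sketched in a few lines of Python. This is an illustration of the pattern, not Ralph Wiggum's actual code; `run_fresh_session` is a stub standing in for a real Claude Code invocation, and the data shapes are assumptions.

```python
def next_story(prd):
    """Return the first user story in the PRD not yet marked done."""
    return next((s for s in prd["stories"] if not s.get("done")), None)

def run_fresh_session(story, lessons):
    # In the real loop this would launch a brand-new AI session whose
    # only context is the story, the progress log, and prior lessons.
    return {"story": story["id"], "lesson": f"completed {story['id']}"}

def loop(prd, lessons):
    """Process stories one by one, a clean slate per iteration."""
    while (story := next_story(prd)) is not None:
        result = run_fresh_session(story, lessons)  # fresh context each time
        story["done"] = True                        # track progress in the PRD
        lessons.append(result["lesson"])            # accumulate insights
    return lessons
```

The important property is that nothing carries over between iterations except the deliberately curated state: progress and lessons.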

I liked the approach. But I didn’t like what was missing.

The Enterprise Gap

The projects I work on professionally are nothing like a weekend side project. I lead a 40-person engineering team across the US and UK, building products with hundreds of thousands of lines of code spread across dozens of repositories. These systems span multiple countries and come together as unified products. They exist in a perpetual state of modernisation because software never stands still — customers expect more, technology evolves, and the architecture of yesterday becomes the technical debt of tomorrow.

Ralph Wiggum had no concept of any of this. There was no way to set organisational context, no product vision, no awareness of where a codebase had been or where it was heading. No way to flag fragile areas where you don’t want an AI making changes. No coding standards. No enterprise guardrails.

I needed an AI developer that understood not just what to build, but how to build it within the constraints of a real organisation.

FADE: Framework for Agentic Development and Engineering

So I built FADE. The name is deliberately plain — it’s a framework, not a product. Its job is to fade into the background and let the engineering standards do the talking.

FADE wraps around Claude Code and introduces the governance layer that was missing. Every AI session begins by reading a project context file that describes the strategic direction of the codebase, a standards library covering everything from API security to git conventions, a progress log of what’s been completed, and a lessons learned file containing cumulative insights from every previous session. Work is driven by structured PRDs containing user stories with acceptance criteria, processed sequentially through a bash-based execution loop.
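For a sense of what that session bootstrap looks like, here is a minimal sketch of assembling the governance context into a prompt preamble. The file names are assumptions for illustration; FADE's actual layout may differ.

```python
from pathlib import Path

# Hypothetical governance files, mirroring the four inputs described above.
CONTEXT_FILES = [
    "PROJECT_CONTEXT.md",  # strategic direction of the codebase
    "STANDARDS.md",        # API security, git conventions, and so on
    "PROGRESS.md",         # what has been completed so far
    "LESSONS.md",          # cumulative insights from previous sessions
]

def build_session_context(root):
    """Concatenate every governance file into one session preamble."""
    parts = []
    for name in CONTEXT_FILES:
        path = Path(root) / name
        if path.exists():
            parts.append(f"## {name}\n{path.read_text()}")
    return "\n\n".join(parts)
```

Every session starts by reading this preamble before touching the next user story, which is what turns a generic coding model into one that knows the house rules.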

Two modes emerged naturally. “FADE Run” processes one user story at a time, pausing for human review between each. “FADE YOLO” — because you only live once — processes the entire queue autonomously. Queue up your PRDs, run YOLO before bed, wake up to delivered software.
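The difference between the two modes is just where the human gate sits. A hedged sketch, with a hypothetical `execute_story` standing in for one fresh Claude Code session:

```python
def execute_story(story):
    # Placeholder for one full fresh-context session on a user story.
    return f"delivered {story}"

def fade_run(queue, review):
    """Process one story at a time, pausing for human review after each."""
    delivered = []
    for story in queue:
        delivered.append(execute_story(story))
        if not review(delivered[-1]):  # human gate: stop if review fails
            break
    return delivered

def fade_yolo(queue):
    """Process the entire queue autonomously, no pauses, no review gate."""
    return [execute_story(story) for story in queue]
```

Same execution loop either way; YOLO simply removes the review callback from between iterations.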

And it worked. It worked incredibly well. I reached a point where I could stack up PRDs and let FADE work through the night. I’d wake up to freshly delivered, tested, working software. Every single time, it was excellent.

The framework was simple. It was reliable. And for a while, I appreciated both of those things.

The Night Everything Broke

And then I ran out of credits.

I was on Anthropic’s top subscription tier and I’d burned through it all by Saturday morning. I was mid-project, momentum was high, and I was desperately frustrated. So I topped up with an additional $50 in API credits and carried on. What I didn’t fully appreciate was the token economics of running on Opus, the most capable and most expensive model. That $50 evaporated in four hours.

I topped up another $50, and this time asked Claude directly what was happening. The answer was simple: Opus consumes tokens at a dramatically higher rate. To finish my project without another top-up, I switched down to Haiku — the fastest, cheapest model in the lineup.

This was a mistake I should have known better than to make. Haiku took my carefully crafted 3,000-line repository and inflated it to roughly 13,000 lines. It duplicated logic, added unnecessary abstractions, and generally made a mess of the clean architecture FADE had been maintaining.

I was gutted. My elegant framework — the one that had been delivering flawless results — was buried under thousands of lines of bloat.

The Response That Made Things Worse

When my credits renewed and I had Opus back, the rational engineering response would have been simple: revert to the last known good commit and carry on. Git exists precisely for moments like this.

But I didn’t do that. In the heat of frustration, I decided this was the moment to start fresh — cross-platform support, test-driven development from the ground up, every enterprise feature included from day one. Go big. Fix everything at once.

This was the birth of MADeIT: My Agentic Developer — Made It.

The name made sense at the time.

MADeIT: The Overengineered Disaster

MADeIT was an exercise in ambition outpacing capability. I wanted acceptance test-driven development, cross-platform support, comprehensive enterprise integrations, and perfect quality gates — all built from scratch, all at once.

I spent a week on it. I used Claude to help me build it, which created an interesting recursive problem: I was using an AI to build a framework for directing AI development, and the complexity of the framework exceeded what the AI could reliably construct in a single coherent effort.

What I was really doing, though I couldn’t see it at the time, was trading reliability for sophistication. FADE was simple enough that it always worked. MADeIT was impressive enough that it never did.

MADeIT never worked. Not once.

Swanson: Back to Basics

The third iteration was named after Ron Swanson, whose philosophy — “Never half-ass two things. Whole-ass one thing” — perfectly captured what I needed to do differently.

Swanson stripped everything back. The core insight I wanted to preserve from MADeIT was test-driven development — specifically acceptance test-driven development where tests are generated from acceptance criteria before any code is written, and validated in separate sessions to prevent the AI marking its own homework. That part was worth keeping. Everything else went.
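The shape of that acceptance-test-driven flow can be sketched as three separated steps. Function names here are illustrative, not Swanson's API; the point is the ordering and the separation of validation from implementation.

```python
def generate_tests(criteria):
    """Step 1: derive one check per acceptance criterion, before any code."""
    return [lambda output, c=c: c in output for c in criteria]

def implement(story):
    """Step 2: the implementing session produces the work."""
    return f"feature covering: {', '.join(story['criteria'])}"

def validate(output, tests):
    """Step 3: run in a separate session, independent of the implementer."""
    return all(test(output) for test in tests)
```

Because the tests exist before the code and the validation runs in its own session, the implementing AI never gets to mark its own homework.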

Where MADeIT tried to be everything, Swanson focused on doing one thing well: taking a queue of PRDs and delivering working, tested software. No self-healing. No integrations. No learning database. Just a clean execution loop with external test validation and standards enforcement.

The result was a Python-based framework that could execute a user story for approximately $0.14 on Sonnet. Predictable, measurable, and reliable. I was pleased with Swanson. It represented the distilled lessons of everything that had come before.

It was also still more complex than FADE. And I was starting to notice a pattern.

The Trap I Kept Falling Into

Every time I rebuilt, I added complexity. And every time I added complexity, I moved further from the thing that had actually worked.

FADE succeeded because it was simple enough to be dependable. The execution loop was straightforward, the governance layer was clear, and the AI had everything it needed and nothing it didn’t. When something went wrong, I could see exactly where. When something went right, I understood why.

MADeIT and Swanson were both, in different ways, attempts to build the impressive version before I’d properly earned it. I kept reaching for the enterprise-grade solution when what I actually needed — what my team actually needed — was something I could rely on completely.

Reliability isn’t a feature you add later. It’s the foundation everything else has to be built on. I knew this as an engineering principle. It took three iterations to live it.

Coming Full Circle

Before Swanson was fully complete, work intervened. I needed to bring agentic development to my professional environment, and I couldn’t wait. So I went back to the last clean version of FADE — the one before the Haiku incident — and ported it to my work environment.

The most sophisticated thing I’m running in production is the first thing that worked.

That’s not a failure. That’s wisdom, arrived at expensively. FADE is running across my organisation and it works remarkably well. I’ve given it to a couple of other engineers, though I have a concern that nags at me: FADE is powerful enough to accelerate engineers who might not fully understand what it’s producing or check its output rigorously enough. The tool amplifies whatever you bring to it — strong engineering judgement produces outstanding results, but insufficient oversight could multiply problems just as efficiently.

This is why, even though YOLO mode exists, I mostly use the step-by-step approach for my team. The human review gate between each user story isn’t a bottleneck — it’s a safety mechanism.

The Stripe Signal

Then, last week, a colleague sent me something that made me sit up. Stripe published a detailed account of their internal system — autonomous coding agents that now produce over 1,300 pull requests per week across their codebase. The human’s role shifts from writing code to defining requirements clearly and reviewing the output.

Reading it felt like looking at a scaled-up version of exactly what I’d been building towards. The core principles were identical: spec-driven development, fresh isolated contexts, human review at the handoff point. The governance layer, not the AI capability, as the differentiator.

What struck me most wasn’t the architecture. It was the validation that the instincts I’d been following — sometimes fumbling — were pointing in the right direction. Stripe got there with a team of engineers and serious infrastructure investment. I got there with Claude Code and a bash script. The destination was the same.

But Stripe operates at a scale that demands infrastructure I haven’t yet built. Their agents run in isolated environments integrated with CI/CD pipelines, with the pull request as the natural handoff between machine and human. My current approach works for individual productivity. The next challenge is making it work for a team.

What Comes Next

The next iteration is forming, informed by every failure and success along the way. The key shift is from individual developer tooling to platform-level capability. Lightweight containers — not full developer environments — where agents can spin up, execute against a well-defined task, and produce a pull request.

But this time, I’m starting with the smallest thing that could possibly work. And I’m not touching it until it’s reliable.

The Lessons

Looking back across this journey — from Ralph Wiggum to FADE, to the Haiku disaster, to MADeIT’s spectacular failure, to Swanson’s disciplined simplicity, and back to FADE in production — a few principles have emerged that I believe will hold regardless of how the technology evolves.

First, reliability before sophistication. Every failed iteration traded dependability for impressiveness. The version that worked was the one simple enough that nothing could hide inside it. Earn reliability first. Build sophistication on top of it, never instead of it.

Second, you have to earn the right to complexity. MADeIT failed because I tried to build the enterprise version before I understood what the essential components actually were. Every successful iteration started simple and added complexity only where experience proved it was needed.

Third, governance matters more than capability. The AI models are already capable enough to write excellent code. What they lack is context — the organisational knowledge, standards, and boundaries that turn raw output into production-ready software. The framework around the model is where the real value lives.

Fourth, fresh context is a feature, not a limitation. Starting each task with a clean session, armed with accumulated progress and lessons learned, consistently produces better results than long-running sessions that accumulate confusion. This is counterintuitive but repeatedly proven.

Fifth, the human review boundary is sacred. The point where human judgement intersects with AI output is the quality control mechanism that makes the whole system trustworthy. Removing it doesn’t make the system faster — it makes it dangerous.

And sixth, failure is the curriculum. The Haiku incident taught me about model economics. MADeIT taught me about earned complexity. Swanson taught me about disciplined scope. None of this knowledge was available in a textbook or a blog post — it came from building, breaking, and rebuilding.

I set out to build an autonomous developer. I’ve built one. It just took longer, cost more, and taught me more than I expected. If you’re on a similar journey, I suspect you already know the feeling — and I’d genuinely love to hear where you’ve got to.


Steve Mitchell is Director of Product Engineering at Milliman, where he leads a 40-person team obsessed with unlocking the next level of software engineering with AI. He writes about his experiments at Steve’s AI Diaries. These experiments are often personal trials, and not everything here translates directly to enterprise software engineering.


The catalyst for this article was the Stripe blog post by Alistair Gray: https://stripe.dev/blog/minions-stripes-one-shot-end-to-end-coding-agents (author page: https://stripe.dev/authors/alistair-gray)