The Reader Axiom
Clean Code, rederived for the age of AI
It begins like it always does. You pick up a coding agent — for me it was Claude Code with Opus 4.5, but the model and the tool don’t matter — and you build something. The first few thousand lines come fast. Then, somewhere around day three, you discover a single file that contains, by your reckoning, most of the application’s actual logic. Functions four hundred lines long. Conditionals nested four levels deep. Naming that suggests the agent ran out of words around the second pass.
A God Module, you mutter, with the disgust of a person who has read the book.
So you do what twenty-five years of professional habit demand. You refactor it. You don’t do it by hand, of course — you have the agent do it, under your supervision, with Uncle Bob whispering in your other ear. You break the file apart. You name things properly. You introduce a few small, pragmatic abstractions. You document the architecture in Markdown. The result reads as prose. It is not over-engineered. It is clean, in the canonical sense.
You feel good. You go to add the next feature.
And the agent stumbles.
The same model that one-shot refactored six thousand lines of slop spaghetti the previous afternoon now cannot find its own utility classes. It hallucinates function arguments. It reinvents code that already exists, two folders away, with a name that almost rhymes with the one the agent just made up. You watch this happen, and you realize, slowly, that something is broken. It is not the model — the model didn’t get worse in twelve hours. It is the codebase. The very act of cleaning it up has made it harder for the agent to navigate.
And then you realize that what’s actually broken runs deeper than the codebase. It is your instinct to refactor. It is the foundation that instinct stood on. It is the entire premise of the book on your shelf.
The Reader Axiom — that code is read far more often than it is written, so optimize for the reader — has been quietly invalidated. Most of what we built on top of it was QWERTY.
The QWERTY keyboard layout was designed in the 1870s to keep the mechanical arms in early typewriters from jamming: common letter pairs were deliberately spread apart so adjacent arms would not collide. The constraint vanished a century ago. The layout did not. We’ve spent 150 years training people on a workaround for a problem that no longer exists, and calling it “how to type”.
Most of Clean Code is like QWERTY. The mechanical typewriter is the human brain; the arms are working memory; the workaround is a thousand small disciplines designed to keep the brain from jamming when it tries to hold a five-thousand-line file in mind. Small functions, meaningful names, careful formatting, intention-revealing structure. The patterns persist because we taught them — not because they were ever about the underlying activity. They were always about the human reader.
Robert Martin’s book wasn’t wrong. It was correctly identifying the typewriter, the arms, and the jam, and prescribing layouts that reduced jams. The book held because the constraint held. The next reader was always a junior engineer, scrolling through that five-thousand-line file in a terminal at 2 a.m., trying to figure out why production is on fire. We owed her clarity. We mostly delivered.
The next reader is no longer her. The next reader has a 1M context window, no opinions about formatting, and reads the file in 800 milliseconds. It does not jam.
Twenty-five years writing code, two writing it with agents at my elbow. Sample size of codebases where I’ve watched the Reader Axiom break: enough to have stopped treating it as bad luck. There is no peer-reviewed paper for this — yet — but the pattern is visible enough that I have stopped trying to refactor my way out of it.
Take a deep breath. Most of what we called engineering “best practice” is up for inspection.
Sorting the canon
The interesting question isn’t whether the Reader Axiom has expired. It has. The interesting question is which of our practices were QWERTY all along — workarounds for a constraint that no longer holds — and which were doing structural work the whole time.
DRY is the cleanest example. Every senior engineer can recite it: don’t repeat yourself. Every senior engineer also knows DRY is a balancing act. Too DRY and you get baroque inheritance hierarchies named AbstractFactoryStrategyVisitor. Not DRY enough and you get bug-multiplying brittleness. The balance was always governed by a hidden term: the cost of changing things in lockstep across the codebase. When that cost is human and high, DRY wins. When that cost approaches zero, DRY loses its argument.
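The hidden term is easy to make concrete. A minimal sketch, with hypothetical function names, of what DRY is actually pricing:

```python
# Two call sites that each format a price. Hypothetical names throughout.

def cart_summary(cents: int) -> str:
    # Duplicated formatting logic: change the currency rule
    # and you must remember to change it here...
    return f"Total: ${cents // 100}.{cents % 100:02d}"

def invoice_line(cents: int) -> str:
    # ...and here. The DRY argument is that a human will forget one.
    return f"Amount due: ${cents // 100}.{cents % 100:02d}"

# The DRY fix: one place to change, bought with one more indirection.
def format_price(cents: int) -> str:
    return f"${cents // 100}.{cents % 100:02d}"

def cart_summary_dry(cents: int) -> str:
    return f"Total: {format_price(cents)}"

print(cart_summary(1999))      # Total: $19.99
print(cart_summary_dry(1999))  # Total: $19.99
```

The whole debate is over who pays to keep the duplicated versions in lockstep. When that lockstep edit is a human hunting call sites, the indirection earns its keep; when it is an agent doing a mechanical sweep, the ledger looks different.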
A SaaS company I know spent the better part of a quarter building a polished internal design system. Tokens, primitives, smart components, the works. The reason was simple: without it, every minor design tweak meant hunting down forty-three slight variations of the date picker. With the design system, you tweak in one place and inherit everywhere.
Then they pointed an agent at the codebase. The agent, faced with a need for a date picker, did what agents do. It built one. From scratch. Inline. Beautifully styled. Using none of the company’s design system, which it had been told about in three different places.
Under the old constraint, that agent’s behavior would have been a catastrophe — forty-three slight variations of the date picker, and any global design tweak now requires forty-three little surgeries. Under the new constraint, the cost of that global tweak is ten minutes of agent time. The case for the design system is no longer that humans will pay for it later. The case has to be made on different grounds, or it can’t be made at all.
You can run the same exercise across the rest of the canon. Small functions: QWERTY. They existed because human working memory caps out at seven items, plus or minus two; the agent does not have working memory in that sense. Meaningful names: QWERTY. The agent is happy with processStuff_v2_FINAL_actually_real. SOLID, design patterns, careful formatting — every rule that was about helping a human chunk meaning loses most of its weight. They are still useful at the human-review boundary. They are no longer load-bearing.
What gets promoted is the half of the discipline that was never QWERTY in the first place — the practices whose load came from the runtime, not the reader. API-first design. End-to-end testing. CI/CD. Observability. Property-based testing. Contracts, invariants, machine-checkable assertions. These were always doing structural work, and the Reader Axiom expiring doesn’t touch them. If anything, it makes them more important: when nobody reads the code, the contracts and the tests are the only things keeping the system honest.
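What a machine-checkable assertion buys when nobody reads the code can be sketched in a dozen lines. This is a hand-rolled property test, no framework, over a hypothetical `normalize_whitespace` function: instead of trusting a reader-friendly name, it checks invariants over a thousand random inputs.

```python
import random

def normalize_whitespace(s: str) -> str:
    # Hypothetical function under test: collapse runs of whitespace.
    return " ".join(s.split())

# Property 1: idempotence -- normalizing twice changes nothing.
# Property 2: no leading or trailing whitespace survives.
random.seed(0)
for _ in range(1000):
    s = "".join(random.choice(" \t\nab") for _ in range(random.randint(0, 30)))
    out = normalize_whitespace(s)
    assert normalize_whitespace(out) == out, (s, out)
    assert out == out.strip(), (s, out)

print("1000 random cases passed")
```

A dedicated library like Hypothesis does this with shrinking and better input generation, but the structural point is the same: the properties hold regardless of who, or what, wrote the function body.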
What dies entirely is the third pile — the practices that existed to coordinate teams of humans. These solved problems that don’t exist in the same shape when half the work is being done by something that doesn’t get tired, doesn’t bikeshed, and doesn’t need a daily standup. The team is something else now, and the rituals that managed it are vestigial.
The rules that survive are the ones about intent, verification, contracts, or safety. The rest was QWERTY.
What this reveals
Once you start running practices through the QWERTY filter, the bigger picture is uncomfortable.
Most software engineering “best practices” were never about software. They were cognitive scaffolding for the humans holding the code. Most of what we taught engineers was never engineering — it was ergonomics dressed up in engineering vocabulary.
The AI age won’t kill software engineering. It will reveal that half of it was always something else.
That single reframing explains a lot of things that used to be confusing. Why a junior engineer who was rigorous about formatting could ship correct systems, and a principal engineer who couldn’t be bothered with naming conventions could too. Why pair programming was transformative on some teams and felt anachronistic on others of the same generation. Why every company that mandated SOLID from the top down produced two camps of engineers, one swearing by it and one quietly working around it. The practices weren’t failing or succeeding on their merits. They were succeeding when they happened to fit the cognitive ergonomics of the people doing the work, and failing when they didn’t.
The discipline never had a clean way to talk about that, so it called the ergonomics “engineering” and went on. The AI age is the first time the labels have been forced apart, because half the practices stop applying when the entity reading the code doesn’t have the cognitive constraint they were designed around.
What survives is the engineering. Which, it turns out, is a much smaller and harder thing than the canon implied.
Pushback
The objections write themselves, and I should voice them before someone gets to them in the comments.
“But humans still read code. When production breaks at 2 a.m., you can’t ask the agent to debug itself.”
True. And when that happens, the residual disciplines are still useful — at the human-review boundary, where someone is reading to validate intent, debug, or audit. What has changed is the ratio. The vast majority of code-touching activity has shifted from read-then-modify to spec-then-regenerate or test-then-verify. You don’t need to optimize the entire hundred-thousand-line codebase for the rare 2 a.m. session anymore. And when that session arrives, the thing you actually need to read is the test that failed, the contract that was violated, or the diff between two agent runs — not the function body.
“This is just an excuse to write sloppy code.”
It is not. The surviving disciplines are harder, not easier. Property-based testing is harder than naming. Designing a bounded interface the agent cannot accidentally misimplement is a stiffer kind of work than refactoring processOrder into validateOrder + persistOrder + emitOrderEvent. What disappears is cosmetic rigor. What survives is the actual kind. If anything, the post-QWERTY world asks more from senior engineers, not less.
“You’re describing a transitional moment. In five years the agents will be reliable enough that none of this matters.”
Maybe. Maybe even probably. But the transition is the rest of our careers, and assuming a problem will resolve itself eventually is exactly the cognitive move that produced every legacy codebase any of us has had to dig out of. We are in the transition. The transition is where the work is.
The objections are not silly. They are written from inside the old constraint — and the old constraint is what changed.
I haven’t refactored that codebase since. The God Module is, of course, gone — Uncle Bob and I made sure of that. But the parts I’d have rewritten next, on instinct, on autopilot, because something in me said this is too messy — those parts are still standing. The agent has been navigating them like it owns the place. We get along now.
I keep the book on the shelf. I just stopped reaching for it before I refactor.