The exponential power of scaffolding
Commit ff38673 shipped 198 insertions and 70 deletions across 11 files in three sites from a seven-word instruction: “extend the cu for all sites” (cu was a typo for ci). No clarifying question. No “did you mean ci?”. The fan-out only worked because the underlying conventions (regex categories, frontmatter format, code-fence handling, the three-site monorepo layout) were settled long before that session opened.
That commit was the punchline of a 16-hour Claude Code arc that started with me typing “I don’t know what RAG is” and ended with a published 5,257-word technical post, three custom SVG diagrams, a shared CI script, the same script wired into three workflows and a pre-commit hook, four legacy posts retroactively fixed, and a longstanding grep bug surfaced and patched. Five commits total: 419821f → 26aa53a → 60ad2d3 → 027f1bd → ff38673. Roughly six hours of active back-and-forth.
This post is a postmortem of that session, and an attempt to disentangle which gains came from the model and which came from a year of accumulated scaffolding around it. The model itself was Claude Opus 4.7. It is the same model that has been available all year. What changed between this session and a session a year ago is not the model. It is the surface area of context the model has to draw on. Each tool I have built with Claude in prior sessions is a tool this session could use. Each memory entry is a constraint that no longer needs to be argued about. Each convention is a class of mistake that cannot recur. The exponential is in the scaffolding.
Layer 1: memory
My MEMORY.md has 32 entries: 12 feedback rules, 13 project-state notes, and 7 external references. None of them are restated at the start of a session. They are loaded automatically and applied silently every time I write a sentence.
A few that fired during this conversation without me ever mentioning them:
- The no-em-dashes rule drove the regex extension that surfaced four legacy violations across all three sites.
- The push-without-asking rule is why all five commits got pushed without a “should I push?” interruption.
- The monitor-CI-before-live rule is why every push was followed by `gh run watch` plus a `curl` verification before reporting back.
- The explicit-`git add` rule is why every commit staged files by explicit path. Without it, a commit touching 11 files across three sites could have grabbed several hundred MB of stray clip footage from a sibling worktree.
These rules cost nothing at runtime. I wrote each one once, after a small mistake, and now they apply forever. Twelve reusable corrections in a flat file override default behavior without ceremony. Each new feedback rule reduces the surface area of mistakes the next session can make.
Layer 2: tooling
The repo has 32 scripts in scripts/. A representative slice:
- `humanizer-check.mjs` runs `npx humanizer score` against staged blog posts and fails the commit above 50.
- `ai-writing-pattern-check.mjs` is the script this case study turns on: 157 lines, four named regex categories.
- `citation-check.mjs` validates inline citations against a sources file.
- `seo-check.mjs` validates frontmatter, OG image existence, and slug shape.
- `generate-tts.mjs` produces ElevenLabs Daniel-voice narration with a dry-run cost preview.
- `instagram-carousel.mjs` produces five 1080×1350 carousel slides plus a caption from any blog post.
- `stamp-staged-publishes.mjs` is a pre-commit helper that auto-stamps `publishedAt` on draft→live flips.
- `seo-deploy.mjs`, `check-indexing.mjs`, and `discoverability-monitor.mjs` handle IndexNow submission, Search Console indexing, and SerpAPI ranking checks on cron.
Plus eight CI workflows and a 187-line pre-commit hook that chains stamp-publishes, humanizer, AI-pattern check, citation accuracy, and SEO frontmatter validation in order.
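The hook’s ordering can be sketched as a small runner. The script names come from the post; the injectable `run` function is an assumption made so the chain logic stands alone (the real hook is shell and would shell out to node directly):

```javascript
// Sketch of the pre-commit chain: run each check in order, stop at the
// first failure. Script names are from the post; the runner signature
// is an assumption for illustration.
const CHECKS = [
  'scripts/stamp-staged-publishes.mjs',
  'scripts/humanizer-check.mjs',
  'scripts/ai-writing-pattern-check.mjs',
  'scripts/citation-check.mjs',
  'scripts/seo-check.mjs',
];

// `run` takes a script path and returns an exit code. In a real hook it
// would be something like spawnSync('node', [script]).status.
function firstFailure(checks, run) {
  for (const script of checks) {
    if (run(script) !== 0) return script; // report the first failing check
  }
  return null; // every check passed; allow the commit
}
```

A non-null result would print the failing script and exit non-zero, which is what blocks the commit.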
The clearest before/after I can measure: the SVG hero pipeline. A year ago, generating an OG image for a new post meant designing in Figma, exporting PNG, and dropping it into public/blog/. About 30 minutes per post. Six months ago I built a TypeScript hero system: each post gets a renderHero(opts) function in heroes/{slug}.ts, registers in heroes/registry.ts, and npm run render:og runs Playwright to screenshot each one to PNG at 1200×630. Today, an OG hero is one TS file plus one registry line. Roughly three minutes per post, observed across 22 heroes. A 10× compression that only became possible because the prior 21 heroes existed to copy from.
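The registry pattern behind that pipeline is worth sketching. The post names `renderHero(opts)` and `heroes/registry.ts`; the signatures and the inline SVG below are illustrative assumptions, written as plain JS rather than the repo’s TS:

```javascript
// Minimal sketch of the hero registry: one render function per post slug,
// looked up by a screenshot runner. Signatures are assumptions.
const registry = new Map();

function registerHero(slug, renderHero) {
  registry.set(slug, renderHero);
}

// The real pipeline (npm run render:og) loads each entry in Playwright and
// screenshots it to a 1200×630 PNG; here we just collect the markup.
function renderAll(width = 1200, height = 630) {
  return [...registry].map(([slug, renderHero]) => ({
    slug,
    html: renderHero({ width, height }),
  }));
}

// Adding a post's hero is one registration line, per the post's workflow.
registerHero('example-post', ({ width, height }) =>
  `<svg xmlns="http://www.w3.org/2000/svg" width="${width}" height="${height}" viewBox="0 0 ${width} ${height}"></svg>`,
);
```

The 10× compression falls out of the structure: the Playwright runner and registry are fixed costs already paid, so each new hero is only the render function.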
Layer 3: conventions
Conventions are scripts you do not run. They are documents that shape what gets built.
context/brand-voice.md is roughly 121 lines of explicit style guidance: vary sentence length, lead with subjects not articles, avoid AI-smell hedges, prefer curiosity-gap headlines over engagement bait. It sits next to MEMORY.md and gets loaded for every writing-adjacent session.
sites/tech/src/components/svg/BLOG-SVGS.md is 11 enforcement rules for in-body SVGs: no text overlap, body text minimum 12px in viewBox units, palette pulled from tokens.ts, viewBox sized to fill the card, Playwright screenshot verification at desktop and mobile widths before commit. Reading that doc once in a previous session is why three new diagrams in this session passed visual review on the first try. Without it, the first attempt would have failed on at least one of: text overlap, undersized body copy, off-palette accent. I know because I cut myself on each of those at least once before the doc existed.
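One of those rules, the 12px body-text minimum, is simple enough to sketch as an automated check. The regex extraction is a deliberate simplification; a real check would parse the SVG:

```javascript
// Hypothetical automation of one BLOG-SVGS.md rule: body text must be at
// least 12px in viewBox units. Regex-based for brevity; not a real parser.
function undersizedTextSizes(svg, min = 12) {
  return [...svg.matchAll(/font-size="([\d.]+)"/g)]
    .map((m) => Number(m[1]))
    .filter((size) => size < min); // any hit fails visual review
}
```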
A content-collection schema in src/content/config.ts enforces frontmatter at build time. Forget a description? Build fails. Wrong tag format? Build fails. Images-not-found never show up live, because the validate-images script runs in CI before deploy.
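The repo enforces this through a content-collection schema in src/content/config.ts; a hand-rolled plain-JS sketch of the same contract, with assumed field names and tag shape, looks like:

```javascript
// Sketch of the build-time frontmatter contract. Field names and the
// kebab-case tag rule are assumptions for illustration.
function validateFrontmatter(fm) {
  const errors = [];
  if (!fm.title) errors.push('title is required');
  if (!fm.description) errors.push('description is required'); // forget it, build fails
  if (!Array.isArray(fm.tags) || !fm.tags.every((t) => /^[a-z0-9-]+$/.test(t))) {
    errors.push('tags must be lowercase kebab-case strings');
  }
  return errors; // any entry here aborts the build
}
```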
These conventions are checked into the repo. They are the institutional memory of every sharp edge I have cut myself on previously, encoded as either documentation or schema validation. New sessions inherit all of them without ever reading them, because the tooling enforces them automatically.
Inside the seven-word fan-out
Back to commit ff38673. Here is what “extend the cu for all sites” actually expanded into.
Before: an inline 42-line beige-ness linter inside ci-tech.yml. A partial copy elsewhere. Truck site CI did not have it. Main site CI did not have it.
After: one new file, scripts/ai-writing-pattern-check.mjs (157 lines), with four named regex categories (puffery, why-X-matters, hedging, fabricated-experience). The 42-line inline shell collapsed to one node call. Three CI workflows (ci.yml, ci-tech.yml, ci-truck.yml) and the pre-commit hook now invoke it. Coverage went from one site to three sites plus drafts.
- `ci-tech.yml`: 42 inline shell lines → 1 line (one node call)
- `ci.yml`: 0 lines → 3 lines (new check)
- `ci-truck.yml`: 0 lines → 3 lines (new check)
- `.githooks/pre-commit`: inline block → one node call
- `scripts/ai-writing-pattern-check.mjs`: 0 → 157 lines (new shared source of truth)

Net diff: 198 insertions, 70 deletions, across 11 files. The fan-out only worked because the regex categories, frontmatter format, and code-fence handling were already settled.
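The shape of that shared source of truth can be sketched in a few lines. The four category names come from the commit; the example patterns below are hypothetical stand-ins, not the repo’s actual regexes:

```javascript
// Condensed sketch of a shared pattern checker with named regex categories.
// Category names are from the post; the patterns are illustrative only.
const CATEGORIES = {
  puffery: /\b(game-changing|revolutionary|cutting-edge)\b/gi,
  hedging: /\b(arguably|it is worth noting)\b/gi,
};

// Settled convention from the post: patterns never fire inside code fences.
const FENCE = '`'.repeat(3);
function stripCodeFences(text) {
  return text.replace(new RegExp(FENCE + '[\\s\\S]*?' + FENCE, 'g'), '');
}

function findViolations(text) {
  const body = stripCodeFences(text);
  const hits = [];
  for (const [category, pattern] of Object.entries(CATEGORIES)) {
    for (const match of body.matchAll(pattern)) {
      hits.push({ category, match: match[0] });
    }
  }
  return hits; // CI and the hook fail when this is non-empty
}
```

Because the categories and fence handling live in one file, extending coverage to a new site is a three-line workflow change, which is exactly what the diff above shows.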
While extracting the script I broadened one regex from a fixed three-word slot to any short noun phrase up to a sentence boundary. That broadening immediately fired on four legacy posts across all three sites. All six pattern hits got fixed in the same commit.
Re-implementing the regex in JavaScript surfaced a longstanding grep bug. The original inline shell version used `[^.!?\n]` to mean “any character except sentence terminators.” Inside a `[...]` bracket expression with `grep -E`, `\n` is not a newline: the backslash and the n are two literal characters. So the class had been excluding backslashes and the letter n while letting actual newlines through. Rewriting in JS surfaced this because JavaScript interprets `\n` as a newline inside character classes. Migration is a debugging tactic; running the same logic through a different parser flushes out assumptions the original parser had quietly absorbed.
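The mismatch is easy to reproduce. The second regex below simulates the POSIX ERE reading inside JavaScript by escaping the backslash explicitly:

```javascript
// What the author intended, and what JS delivers: the class excludes
// sentence terminators and the newline character.
const jsClass = /[^.!?\n]/;

// What grep -E actually did: in a POSIX bracket expression the backslash is
// a literal character, so the class excluded '\' and the letter 'n'.
// Simulated here by escaping the backslash so JS reads it the same way.
const ereClass = /[^.!?\\n]/;

const jsMatchesN = jsClass.test('n');         // true: 'n' is an ordinary letter
const jsMatchesNewline = jsClass.test('\n');  // false: newline is excluded
const ereMatchesN = ereClass.test('n');       // false: the letter 'n' was excluded
const ereMatchesNewline = ereClass.test('\n'); // true: the newline slipped through
```

The last two lines are the bug in miniature: the grep version silently stopped matching words containing n and never stopped at line breaks.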
The recursive moment
Three commits before ff38673, I shipped scripts/ai-writing-pattern-check.mjs. Two commits before ff38673, the same script started running on every staged blog post. One commit before ff38673, a parallel Claude session, working on the same repo concurrently with mine, committed the OG hero assets for the post you are reading. The script’s first real customer was the post about why you should build the script.
That is the lightbulb. Compounding is not a metaphor here; it is the literal data flow. Each tool you build with Claude is a tool the next Claude session can use, including sessions about the tool itself, including parallel sessions you did not coordinate with. The scaffolding becomes the shared workspace.
What this isn’t
I am not advocating for any particular framework, vector store, or model gateway. The companion post on retrieval architecture is the design for a personal homelab document-lookup tool, not a system anyone has shipped at scale. The architecture choices there are domain-distant from anything I work on professionally.
The pattern is what matters. Build small. Save everything. Treat each working solution as a deposit into the environment. A regex extracted into a script is more valuable than a regex inlined into a hook, even if the regex is identical, because the script can be invoked from five places instead of one. A feedback rule written into MEMORY.md after a small mistake is more valuable than the mistake correction itself, because it prevents a class of future mistakes. A convention written into a markdown doc next to the code it governs is more valuable than the same convention held only in your head, because the next session inherits it for free.
A year of small deposits is what made ff38673 possible from seven words. The model is the lightweight piece. The exponential is in the scaffolding around it.