March 18, 2026
I Want to Share — ensemble rewrite
Major rewrite of the I Want to Share episode script. Twelve voices, all speaking in first person. Cast expanded: Vic (the enthusiast who got shut down), Sam (the manager), Sal (the writer stuck in the middle), Gus (anti-AI identity), Alex (the mentee), Mo (the sandwich shop owner), and Parrot (ChatGPT, research voice). Every research stat now grounded in a character’s reaction. Monologues broken into dialogue. The episode went from essay to ensemble drama.
- FPV pass: characters speak for themselves instead of being described
- Parrot takes all research lines, James keeps interpretation
- Gus as recurring antagonist with human moments (Star Trek, hamburgers, “...they said no?”)
- Sal/Vic accidental cruelty scene (tried once, laughed at, never tried again)
- Baking spreadsheet demo: four characters react to the same work, two descriptions
- Meta moment: Sal asks “who wrote this podcast?”
March 15, 2026
New audiobook scripts and /pitch skill
Several new audiobook episode scripts added to the production queue. New /pitch Claude Code skill for iterating on documents through feedback loops.
March 14–15, 2026
I Want a Podcast — the show launches
New podcast: I Want a Podcast. A first-person dramatization where Rich builds a podcast from scratch using seven AI voices. Three episodes deployed with RSS feed.
- Episode 0: What Is This? — 4-minute trailer. Two understudies for Vic and Sam describe the show.
- Part 1: The Build — 35 minutes. Rich says “I want a podcast.” Four words to an encrypted podcast on his phone. Seven voices: Rich, Octopus (CLI), Crab (Co-Work), Parrot (ChatGPT), Error, Stage, James.
- Part 2: What You Built — 35 minutes. Gmail connectors, baking spreadsheets, scheduling, prompt injection, content redaction, voice cloning, and a TOGAF postscript with LOTR references.
- RSS feed at feed.xml — ready for Apple Podcasts and Spotify submission.
Also shipped: three costume pages (Pirate, Star Trek LCARS, LOTR × TOGAF) with audio, a private mentoring portal for Rich, site tagline changed to “Read the book and let’s build,” wall MCP TLS support, and the Aaron ChatGPT quote on the quotes page.
March 13, 2026
The Lightweight Wall landing page
New standalone page: The Lightweight Wall. The ME.md operator brief prompt now has its own home — separate from the full Wall of Data guide. Includes the copyable prompt, the ~/me/ file tree, blind test results (6 runs, best 13/20), and cross-links to related chapters and the podcast episode.
- Cross-linking: Episode 2 now links to the lightweight wall instead of the full wall-of-data for the operator brief references. The full wall link stays where Vic explicitly says “the full wall of data.”
- Index: New card in Project Guides and new entry in the Stage 5 curriculum path
- Wall of Data: Added reverse link back to the lightweight wall in Related Guides
March 13, 2026
Vic and Sam Episode 2: The File That Knows You
Vic and Sam return for a second episode. Vic tried the lightweight wall prompt from the book, built a ME.md file, forgot about it, and then watched an octopus read it. Nine AI-cloned voices this time — including three character voices where Vic reads the prompt as a Victorian naturalist, a compliance officer, and a steward. Sam attempts enthusiasm. It does not go well.
- New voices: VIC-EXPLORER (Dean Griffin), VIC-COMPLIANCE (Ser Vaydar commanding), VIC-STEWARD (Hann of Soloh whisper), SAM-HYPE (Old Ben Kenobi commanding — “a foghorn expressing joy”)
- New metaphor: chatbot = head in a jar, coding agent = octopus in a box. The octopus has arms. It opens folders. It reads the file.
- Episode cross-linking: both episodes now link to each other with navigation bars
- Duration: 8.8 minutes / 58 lines / stereo mix with cold open
March 13, 2026
Making of: Vic and Sam podcast
The Vic and Sam page now has a 12-minute podcast — five AI-cloned voices performing a two-character play about what happens when you give a shape access to your plumbing. This is a detailed making-of: the full pipeline from script to MP3.
The script. The source is a markdown file (vic-and-sam.md). Speaker tags are [UPPERCASE-NAME] on their own line, followed by dialogue. --- marks section breaks rendered as pauses. A separate cold-open.md is prepended at generation time — James’s disclaimer: “Everything you’re about to hear was written by AI, voiced by AI, and assembled by AI. I directed it. I corrected it.” The script includes voice design notes at the top (not parsed): who each character is, why they sound the way they do, and what the casting intent was.
The cast. Five voices, all cloned from short reference WAV clips (5–15 seconds each) using Qwen3-TTS 0.6B running on a DGX GB10 GPU:
- Vic — Dorian Moreau, a Wells-BTTF family voice. f0 ~133 Hz, brightness ~1320 Hz. Low, entitled, confident. The cable-access host who committed so hard to the bit that it became the truth.
- Sam — Old Ben Kenobi from the casting wall. f0 ~93 Hz, brightness ~1292 Hz. Deep, warm, measured. The straight man.
- Stage — Smithers. Obsequious, precise. Stage directions delivered like a nervous assistant reporting what he’s witnessing.
- Hype — Lady Leia. f0 ~373 Hz, brightness ~2758 Hz. The breathless AI announcer voice that Vic parodies.
- James — The author. Cloned from a Google Meet recording. Cold open and outro only. Sounds like a hostage proof of life (his words).
Voice references live on a casting wall (~32 unique voices harvested from various sources). Each voice is identified by its acoustic fingerprint: MFCCs for timbre, f0 for pitch, spectral centroid for brightness. The model clones timbre and cadence from the seed clip.
The generation pipeline. The generator (generate_episode.py) parses the markdown script, resolves each speaker tag to a voice reference WAV, and generates one line at a time via Qwen3-TTS. Each generated line is saved as an individual WAV file using sequenced content-addressed filenames: {seq:03d}_{speaker}_{sha256(text)[:12]}.wav. Sequence numbers count by 10s (Apple II style: 010, 020, 030...) so you can insert lines between them without renumbering. The content hash is the cache key — if the text hasn’t changed, the WAV is reused. If it has, it’s regenerated. A --renumber flag re-spaces everything back to 10s when the gaps get ugly.
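The naming scheme above can be sketched in a few lines. This is an illustrative reconstruction, not the actual code from generate_episode.py; the function names here are hypothetical.

```python
import hashlib

def line_filename(seq: int, speaker: str, text: str) -> str:
    """Sequenced, content-addressed filename for one generated line.

    The SHA-256 prefix of the text is the cache key: if the text
    hasn't changed, the existing WAV is reused.
    """
    digest = hashlib.sha256(text.encode("utf-8")).hexdigest()[:12]
    return f"{seq:03d}_{speaker}_{digest}.wav"

def renumber(count: int) -> list[int]:
    """Re-space sequence numbers back to multiples of 10
    (what the --renumber flag does when the gaps get ugly)."""
    return [(i + 1) * 10 for i in range(count)]
```

Counting by 10s means a new line can slot in at, say, 015 without touching its neighbors; the content hash means editing one line regenerates exactly one WAV.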
Voice prompt caching accelerates generation. The TTS model computes a voice embedding from the reference WAV on first use, then caches it as {speaker}_{refhash}.pt. The cache key includes a hash of the reference file itself — if you swap in a different voice reference, the cache auto-invalidates and the stale file is deleted. No more silent wrong-voice generation.
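The cache-key idea is simple enough to sketch. A minimal version, assuming the cache lives in one directory (helper names here are illustrative, not from the real pipeline):

```python
import hashlib
from pathlib import Path

def prompt_cache_path(speaker: str, ref_wav: Path, cache_dir: Path) -> Path:
    """Cache path for a speaker's voice embedding, keyed by the content
    hash of the reference WAV. Swap in a different reference file and
    the hash changes, so a stale entry can't be picked up by accident.
    """
    refhash = hashlib.sha256(ref_wav.read_bytes()).hexdigest()[:12]
    return cache_dir / f"{speaker}_{refhash}.pt"

def invalidate_stale(speaker: str, current: Path, cache_dir: Path) -> None:
    """Delete this speaker's cached embeddings that don't match the
    current reference (the auto-invalidation described above)."""
    for old in cache_dir.glob(f"{speaker}_*.pt"):
        if old != current:
            old.unlink()
```

The contrast with the earlier failure mode is the point: keying by speaker name alone is exactly the stale-cache bug described in the wrong-voice postmortem below; keying by speaker plus file hash makes the stale entry unreachable.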
The sound board. Assembly runs entirely in numpy — no torch, no GPU, works on a MacBook. All effects are applied during the _assemble() pass over the generated line WAVs. No regeneration needed to change the mix. The effects chain, in order:
- RMS normalization — target amplitude 0.045, peak-limited at 0.95
- Per-speaker volume scaling — Stage at 0.7, Sam at 0.85, others at 1.0
- Emphasis boost — lines ending in ! get 1.3x volume lift (excludes Stage and James)
- Telephone filter — FFT brick-wall bandpass, 300–3400 Hz, applied to all Stage (stage direction) lines
- Horror filter — triggered by “You track it” in the text. Time-segmented: 0–0.8s gets full blast (telephone + drive=5 overdrive + 8% pitch drop + tight reverb), 0.8–1.3s crossfades to medium grit, 1.3–4.1s fades to dry, 4.1s+ fully clean. “No!” gets everything; “Sorry” snaps back.
- Reverb — decaying delay taps: decay=0.3, delay=40ms, 6 taps
- Stereo pan — Vic at -0.30 (left), Sam at 0.30 (right), others center
- Room tone — pink noise injected in pauses between lines, level 0.003
- Fade in (50ms anti-click) / Fade out (2.0s)
Think of it as a pedalboard you can rearrange without re-recording. The Sound Lab on the internal player page has A/B comparisons of the horror filter iterations (hard cut vs. slide-in vs. the final growl-to-dry crossfade) and emphasis boost levels (flat, 1.3x, 2x, 2.5x).
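A few pedals from that board, sketched in numpy. These are plausible reconstructions of the parameters listed above, not the actual _assemble() code; in particular, the pan law is my assumption (a constant-power pan), since the original only specifies positions.

```python
import numpy as np

def rms_normalize(x: np.ndarray, target: float = 0.045, peak: float = 0.95) -> np.ndarray:
    """Scale a mono line to the target RMS amplitude, then peak-limit."""
    rms = np.sqrt(np.mean(x ** 2))
    if rms > 0:
        x = x * (target / rms)
    return np.clip(x, -peak, peak)

def telephone(x: np.ndarray, sr: int = 44100, lo: float = 300.0, hi: float = 3400.0) -> np.ndarray:
    """FFT brick-wall bandpass (300-3400 Hz) for stage-direction lines."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1 / sr)
    spec[(freqs < lo) | (freqs > hi)] = 0  # zero everything outside the band
    return np.fft.irfft(spec, n=len(x))

def pan(x: np.ndarray, position: float) -> np.ndarray:
    """Constant-power stereo pan: -1 full left, 0 center, +1 full right.
    Vic sits at -0.30, Sam at +0.30."""
    theta = (position + 1) * np.pi / 4  # map [-1, 1] onto [0, pi/2]
    return np.stack([x * np.cos(theta), x * np.sin(theta)], axis=-1)
```

Because these run over the already-generated line WAVs, changing a number here and re-running assembly takes seconds; nothing goes back to the GPU.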
The wrong-voice problem. Before Vic and Sam, we tried to recast Episode 1’s narrator from Aubrey to Dorian Moreau. It failed twice. 143 lines generated with Aubrey’s voice instead of Dorian’s, 45 minutes of DGX time wasted each run. The system declared success both times. We only caught it by ear, days later.
Root cause: stacked fallback behavior. The voice prompt cache had a stale entry keyed by speaker name (not voice ref). The voice resolution function silently fell back to a different path. No verification step checked that the generated audio matched the intended voice. Each fallback was individually reasonable. Together they formed a silent pipeline from “I asked for Dorian” to “I got Aubrey” with no error, no warning, and a success message at the end.
Three gates. We built three verification gates that run before committing to a full DGX generation run:
- Input manifest (before any generation) — writes voice-manifest.json with status STARTED, listing every speaker’s voice ref path, file hash, VOICE_MAP entry, and prompt cache status (HIT/FRESH/LOAD_FAILED). A cross-similarity matrix flags accidental voice duplicates (sim > 0.90). Review the manifest before generation proceeds.
- Smoke test (after first generated line per speaker) — extracts an x-vector from the generated audio, computes cosine similarity to the reference. If similarity < 0.70: hard stop. Cost: 1 line per speaker.
- Batch voiceprint (after 5 generated lines) — runs pitch and family check via voiceprint.py. Compares f0 of each generated line to the reference voice. If pitch is off by > 40%: hard stop. Cost: 5 lines.
Only after all three gates pass does the full run proceed. The safe workflow: build the voice DB, generate 5 lines (hits all 3 gates), listen, spot-check with voiceprint, then run the full generation.
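The decision logic of gates 2 and 3 fits in a few lines. This sketch shows only the gate checks, using the thresholds stated above; the x-vector extraction itself (a speaker-embedding model) is out of scope here, and the function names are illustrative.

```python
import numpy as np

def smoke_test(gen_xvec: np.ndarray, ref_xvec: np.ndarray, threshold: float = 0.70) -> None:
    """Gate 2: cosine similarity between the generated line's speaker
    embedding and the reference. Below threshold is a hard stop, not a warning."""
    cos = float(np.dot(gen_xvec, ref_xvec) /
                (np.linalg.norm(gen_xvec) * np.linalg.norm(ref_xvec)))
    if cos < threshold:
        raise RuntimeError(f"voice mismatch: similarity {cos:.2f} < {threshold}")

def pitch_gate(gen_f0: float, ref_f0: float, tol: float = 0.40) -> None:
    """Gate 3: per-line f0 against the reference voice; off by > 40% is a hard stop."""
    if abs(gen_f0 - ref_f0) / ref_f0 > tol:
        raise RuntimeError(f"pitch off: {gen_f0:.0f} Hz vs ref {ref_f0:.0f} Hz")
```

Note what the gates do on failure: raise, immediately. That is the lesson from the wrong-voice incident applied as code; a 209 Hz line generated against a 133 Hz reference trips the pitch gate at line one instead of line 143.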
The voiceprint tool. We needed acoustic voice identification without loading torch (no GPU locally). Built voiceprint.py: MFCCs for timbre, autocorrelation-based f0 for pitch, spectral centroid for brightness. All numpy and scipy. The comparison uses weighted euclidean distance — pitch weighted 5x, brightness 3x — which discriminates voices far better than cosine similarity on MFCCs alone (cosine gave 0.96+ for everything; euclidean separates Aubrey at 209 Hz from Dorian at 133 Hz immediately). The tool can fingerprint all 32 voices on the casting wall, identify which voice a WAV sounds like, scan a full episode chunk-by-chunk emitting a character trace of voice identity across time, and check every line in an episode against expected voices. Runs in seconds on a MacBook.
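The weighted-distance idea can be sketched as follows. The weights (pitch 5x, brightness 3x) come from the text above, but the fingerprint layout and the normalization constants are my assumptions, not the real voiceprint.py internals.

```python
import numpy as np

def voice_distance(a: dict, b: dict, w_pitch: float = 5.0, w_bright: float = 3.0) -> float:
    """Weighted euclidean distance between two voice fingerprints.

    Each fingerprint is assumed to be {'mfcc': ndarray, 'f0': Hz, 'centroid': Hz}.
    Up-weighting pitch and brightness separates a 209 Hz voice from a
    133 Hz one even where MFCC cosine similarity saturates near 1.0.
    """
    d_mfcc = a["mfcc"] - b["mfcc"]
    d_f0 = w_pitch * (a["f0"] - b["f0"]) / 100.0        # scale Hz down so the
    d_cent = w_bright * (a["centroid"] - b["centroid"]) / 1000.0  # weights are comparable
    return float(np.sqrt(np.sum(d_mfcc ** 2) + d_f0 ** 2 + d_cent ** 2))
```

With identical MFCCs, a pure pitch gap of 76 Hz still produces a large distance, which is exactly the discrimination cosine-on-MFCCs failed to provide.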
Final assembly. The assembled stereo WAV (68 MB, 12:22) is converted to MP3 via ffmpeg (-qscale:a 4, 7.7 MB) and embedded on the page with a standard <audio> element. The cold open disclaimer is prepended as its own segment with a pause before the main script begins.
The lesson. Every fallback, every default, every cached value is a place where a disconnection can hide. The system should crash on a mismatch, not produce 143 lines with the wrong voice and declare success. Trace your fallbacks tip to tail. All fallbacks now log when they fire. The input manifest is written before generation, not after — it’s a pre-flight checklist, not an autopsy report.
- Listen: embedded player on Vic and Sam
- New chapter: The Slot Machine
- Publish checklist now includes hash diff against live site before deploy
March 11, 2026
The book that taught me to build
Added a new slush note to hold onto a line that feels bigger than a note but not ready for the front door yet. The shape is that the project stopped being only a manuscript and became a quiver: a set of named arrows, tested explanations, and examples ready to use when the next real person shows up with the next real problem.
March 11, 2026
Flywheel and maintenance got real folders
The book's process pages now line up better with the repo they describe. Added actual flywheel/ and ops/ scaffolds to the project, tightened the maintenance guide so every proposal has to carry a lesson, and improved the slush pile's usability with direct permalinks and a dated note about what "free" currently means for agents.
- Added flywheel/ and ops/ folders with starter files for observations, metrics, incidents, runbooks, schedule, and status
- Maintenance now includes ops/proposals/ and requires each proposal to explain the lesson future systems should inherit
- Slush Pile entries now support direct #hash permalinks for easy sharing
- Added slush notes on the conversation-as-test-suite pattern, manual plain-text sanitization, multimodal meeting transcripts, and the current free-vs-paid agent landscape
- Updated .gitignore for local-only Gemini files and screenshot captures so they do not leak into publishes by accident
March 11, 2026
Director safety layer promoted to first-class pages
The latest bronze blind taste test kept landing on the same question: the site is useful, but what would make it safe to hand to a younger or more casual director? Promoted that gap into public reference pages instead of leaving it buried in the manuscript and slush. The result is a small safety stack: habits, prompt-injection framing, and a deploy checklist.
March 11, 2026
Improve, scale, and simplify the box
Folded the latest meeting-derived lessons back into the public site. The curriculum now names the full loop from read to touch to make to improve to scale, the chatbot guide leans harder into the simple stack that actually worked, and the wall-of-data page now frames local consolidation as infrastructure you control instead of leading with honeypot language.
- Homepage path expanded to five stages: Read, Touch, Make, Improve, Scale, with the artifact-first loop stated directly
- From Read to Make now says the artifact teaches the process it needed, not just that the flywheel exists
- Chatbot now names the practical architecture lessons: child process over premature SDKs, agent tools over homemade keyword logic, localStorage as privacy by architecture, and nested-agent setups as a trap
- Wall of Data now uses calmer security framing and treats ~/wall as shorthand for a user-controlled local path, not a required convention
- Slush Pile gained notes on speech-to-text with agents and artifacts that improve the process that made them
March 11, 2026
Curriculum reframing: from read to make
Reframed the site around the core mentoring mission: help people move from head in a box to octopus in a box, from reading about AI to making with an agent that can touch files. Added a dedicated curriculum page and rewrote the homepage so the site reads like a path instead of a shelf.
- New page: From Read to Make — the curriculum for crossing from chat AI to agentic AI
- Homepage hero rewritten around the mission and a visible three-stage path: Read, Touch, Make
- Homepage card descriptions tightened so they read as stages in a progression, not isolated pages
- What's All the Fuss? and The Cheating Sheet now name the mentoring move explicitly
March 11, 2026
Accountability and context
Tightened the front-door framing so the site says more directly what the project is actually teaching: accountability is the cure for hallucination, and context is what makes the tool stop being generic. Also shipped two more slush notes that felt durable enough to keep around.
- Homepage mission now names the underlying claim instead of leaving readers to infer it
- What's All the Fuss? and From Read to Make now frame the curriculum more explicitly around accountability and context
- The Gap now keeps the stronger line about the book becoming a platform that grows as you climb it
- Slush pile additions: Ask it to explain what it's building and When the agents become interchangeable
March 11, 2026
The flywheel follows the action
Wrote The Flywheel Follows the Action to capture a live realization from the site-building process: the corpus stopped being a record of the work and became part of the machinery that improves the next run. The key move is not abstract "multi-agent collaboration." It's that agents on different machines can read the same transcripts, guides, worklogs, and dev logs, so the improvement loop stays attached to where the work already leaves evidence.
- New page: The Flywheel Follows the Action — the book/site as active infrastructure, not just output
- Homepage card added in Project Guides with essay framing, not hidden in extras
- Cross-links added from Flywheel, Guide-Based Development, The Cheating Sheet, and the slush pile's cross-agent note
- New rule named explicitly: the flywheel works best when it follows the action instead of asking the action to summarize itself later
March 10, 2026
The cheating sheet becomes the thesis
The blind taste test kept pulling toward the same missing page: the one-page thesis statement the rest of the site was orbiting without naming. Built The Cheating Sheet as that page. It starts with the index card story, tracks the pattern across health, renovation, bridge, interviews, and the book itself, then lands on the actual claim: I document my way out of confusion, and AI changed the speed by giving the documents hands.
- New page: The Cheating Sheet — now the first card in The Book section
- Homepage hero reframed around the real thesis: a folder, some files, and an agent that can touch them
- Book introduction now opens with the cheating sheet story and the personal log before "This is a textbook about shapes"
- Slush Pile promoted into Project Guides as first-class navigation, not an extra
- Slush entries added: cheating sheet, blind taste test, translation is free, push the prompt not the agent, voice palette from one recording session
March 10, 2026
Wall of data: cognitive prosthetic, not data project
Rewrote the Wall of Data opening based on blind taste test feedback. The page opened with mkdir ~/wall. Now it opens with what it feels like to have one — the cognitive prosthetic framing, the ice-under-your-feet metaphor. Added the honest security trade-off: one encrypted local folder with no API is genuinely less exposed than your data spread across 40 cloud services.
- Wall of data: new "What it feels like" section before "What this is"
- Wall of data: security trade-off paragraph — consolidation is risk, but risk you control
March 10, 2026
Confusion-driven development and showing the mess
Updated the Board Game guide based on feedback from the blind taste test agent. Added Stage 0: "What if you don't know the game?" — documenting the bridge build pattern where neither the builder nor the agent knew the rules. Added "What a real build looks like" section showing the actual 1,937-line development arc: layout wars, YouTube transcript pivots, context limit hits, confusion-as-requirements.
- Board game guide: new Stage 0 (building a game you don't know) and "What a real build looks like" (the mess behind the clean lifecycle)
- Chatbot guide: TOS language strengthened — "actively blocks and bans" instead of passive warning, added §3.7 reference
- Slush pile: new "confusion-driven development" pattern — your confusion is the most useful input you have
- Slush pile: fixed "CLI arbitrage" → "CLI as backend" in cross-agent reading note
March 10, 2026
Blind taste test round 3: fixing what's actually wrong
Third blind taste test on bronze-november surfaced real gaps. Fixed them all: MIT license, external references on every guide, stronger OAuth security warning, a "What's All the Fuss About?" onramp for beginners, SHA256 hashes for content verification, and honest framing ("observations" not "chapters"). Build script now auto-generates sitemap.xml, pages.json, and hashes.json.
- What's All the Fuss About? — new page for people who haven't started yet
- External references on every guide (Diátaxis, Google SRE, PARA, rclone, OpenClaw, etc.)
- OAuth security warning upgraded from amber "skip this" to rose "these are like passwords"
- MIT LICENSE added to repo
- Build script generates sitemap.xml, pages.json, and hashes.json automatically
- Book framing: "48 chapters" → "48 short observations" (honest about the format)
March 11, 2026
Project roots as local facts
Reworked the setup and starter pages so they no longer teach ~/work or C:\work as if they were defaults. The site now treats the project root as a user-specific local fact: something the agent should ask for, not inherit. Replaced the examples with short neutral names and added explicit PowerShell variants where the copy had drifted Unix-only.
- Setup pages now frame the project root as local configuration, not a universal standard
- Replaced personal-looking path examples with neutral options: proj, lab, forge, bench, craft
- Agent instructions now say "ask where the user keeps projects" instead of assuming a stock path
- Added Windows-friendly PowerShell command variants to start boxes that previously read Unix-only
- Repo path examples now use placeholders instead of the author's actual clone path
March 10, 2026
Principle of least surprise
Dropped numeric prefixes from all URLs. 30-chatbot.html → chatbot.html, 20-board-game.html → board-game.html, and four more. Added sitemap.xml and pages.json for machine-readable discovery. Ran a blind taste test on bronze-november — agent guessed URLs correctly on first try. Added project-level CLAUDE.md with jq steering.
- Renamed 6 files to drop numeric prefixes (principle of least surprise for agents)
- Updated all internal links across 9 source files
- Generated sitemap.xml (74 URLs) and pages.json (agent-readable manifest)
- Blind taste test: 3/3 guessed URLs returned 200 (was 0/3 before rename)
- Slush entry: principle of least surprise — "unsurprising beats correct"
March 10, 2026
Blind taste test, Linux quickstart, and deploy friction
Ran a blind taste test — spun up a fresh exe.dev VM, pretended to be a stranger, had Claude cold-read the site. Found real gaps: no Linux quickstart, buried navigation, missing book preview, ambiguous dev log. Fixed them all. Added link checker to /publish skill.
- Built Zero to Developer (Linux) — the missing third onramp
- Moved "What Do You Want to Build?" to top of project guides
- Added book part titles to homepage card (Learning · Working · Building · Living)
- OAuth page: added "you probably don't need this yet" warning
- Dev log: added pre-history entry, clarified 15-minute ratio
- /publish skill: added link checker as step 7 (now 10 steps total)
- Slush entries: blind taste test shape, deploy friction pattern, remote conversation mining, public URL as portable dev environment
March 10, 2026
Calculus rewrite, meta-guide, and /publish
Rewrote We All Invented Calculus with data: OpenClaw (298K stars), Kaijuu divergent architecture, AI DM proliferation dataset (10+ blogs, 6+ MCP servers, 5+ commercial products, 1 thesis), MCP as inflection point. Built Guide-Based Development — the meta-guide that documents the exact anatomy so agents can build new guides from the pattern.
- Created /publish skill — 8-step checklist: PII scan, tests, build, deploy, spot-check, security review, git commit, dev log
- GitHub links added to all page footers
- Flywheel: added slush pile link in interventions layer, maintenance link in related guides
- Jack video engine: added "it's slop but it's my slop" note (vs NotebookLM's locked pipeline)
- Slush entries: wall data scheduling tiers, jack slop note
- Fixed .gitignore: skills/ → /skills/ so .claude/skills/ can be tracked
March 10, 2026
Daily Briefing, Maintenance, and the operating system takes shape
Built Daily Briefing guide — wins-first morning dashboard generated from calendar, email, messages, goals, and git activity. Built Maintenance guide — ops/runbooks shape with green/yellow/red status dashboard, five layers (inventory, health checks, runbooks, schedule, changelog).
- Flywheel: self-managed scheduling, dashboard section, proposal docs dispatched to project folders, folder renamed from papercuts to flywheel
- Inter-system communication protocol: any system writes to any other system's folder. The folder is the interface.
- Self-reflection skills: each system reflects on its own performance with max local context, logs findings to the flywheel
- Slush entries: scheduler (absolute/recurring/delta/event), Google Meet mining (transcript + screenshot extraction)
- All start boxes now include "navigate to your work folder" before creating project directories
March 10, 2026
Flywheel guide, scheduling, and Kai's mission
Built the Flywheel guide — five layers (observe, metrics, causals, interventions, subgoals), seven collection engines, grounded in real examples from screenshots to sleep data to MSP operations. Added scheduling section (calendar blocks to cron to nightly consolidation). Connected the flywheel to Kai's mission: minimize friction, step one is finding it.
- Added OAuth setup guide — Google Cloud credentials, consent screen traps, the localhost trick
- Internal links across all reference pages — every guide now links to relevant book chapters
- Fixed link colors site-wide (was default blue/purple, now amber)
- Added dev log (this page)
- PII audit — found database credentials in reference files, excluded from tracking
March 9, 2026
Reference site goes live
Built the entire reference companion site from scratch. Static HTML, no framework, no build step beyond a bash script. "Reimplement, don't import." Every page works two ways: human-readable and agent-parseable. Paste a URL, tell the agent to follow the instructions, it does.
- ~10am–2pm: Vic improv screenplay with "wall of data" concept
- ~2:50pm: Reference site started — decided on static site, modular reference pages. Built scaffold with 9 project shapes, workspace/SSH/GitHub pages
- ~2:55pm: Memory Viewer prototyped (shelved) — foveated retrieval from Postgres, "fibers of attention" concept
- ~4pm: Board Game guide — full staged guide with network visibility as core multiplayer principle
- 30-Minute Chatbot, Wall of Data, Zero to Dev (Mac + Windows), Vic and Sam screenplay
- ~6pm: Readability tooling — check-readability.py (textstat), analyze-chapters.py, chapter dashboard. Average grade across book: 6.9
- ~6:30pm: Book on website — build-book.py compiles 48 chapters to HTML with prev/next nav
- ~7pm: Spiral curriculum concept — Wonder → Try → Break → Know cycle, builder track vs director track
March 9, 2026
Cribbage and Bridge in one afternoon
Built two complete board games to prove the flywheel. Cribbage (~5:30pm): full peg board, scoring engine, AI opponent, training mode with expected value analysis. TypeScript, Vite, Vitest, no framework. Bridge: Python/FastAPI + vanilla JS, Chicago scoring, bidding conventions. Started 40 minutes after cribbage, done same session.
The 15-minute ratio: 2.5 hours wall clock, ~1 hour AI work, ~15 minutes of actual human attention (decisions, corrections, testing). Domain expertise matters more than programming skill. Two games in one afternoon = flywheel proof.
March 7–8, 2026
Repo hygiene and manuscript cleanup
Consolidated repos, added .gitignore for data/convo/repos directories. PII audit on the manuscript — stripped real names, locations, and identifying details. Crash Course Philosophy transcript research for AI personhood material.
- Killed redundancy across chapters, sharpened promises
- Grounded chapters in personal stories
- Added color map for chapter-to-part associations
- Refined learning chapters, tightened sensitive context wording
Pre-history
Before the dev log
The book was written over ten months alongside Kai (a personal AI infrastructure project). By the time this reference site launched on March 9, 2026, the manuscript had 48 chapters across six parts — ~32,000 words covering learning, working, building, living, society, and a study guide. The companion site was built in a single afternoon session.
Earlier work that doesn't have timestamped entries: Kai architecture (voice, memory, scheduling, inter-agent communication), Kaijuu (16-day build, 105 commits, divergent 7-layer architecture), OpenClaw integration, cribbage and bridge game builds, health data pipeline, home automation sensors, report automation consulting, and hundreds of conversations across Claude, ChatGPT, and Gemini that became the material for the book.