Board Game
Make a system that will learn, but don't make it guess — tell it what you know.
What this shape is
You have a board game your family plays. You know the rules — maybe better than anyone, because your family has house rules the official rulebook doesn't cover. You want a digital version.
This is one of the best first projects because you are the domain expert. You know when the AI gets it wrong. You know the edge cases. You don't need to learn a new domain — you already have one.
What follows is the full lifecycle. You don't build all of this at once. Each stage is its own project, its own set of prompts, its own corrections. Start at Stage 1 and stop wherever you're satisfied.
Quick start: prompt generator
Pick your options below — the prompt updates live. Hit "Roll d20" to randomize everything, or choose your own and copy.
Your choices determine the complexity. Multiplayer Settlers of Catan with expansions is bigger than Mille Bornes — you don't need a label to tell you that.
Stage 0: What If You Don't Know the Game?
The guide assumes you're the domain expert. But what if you're not?
One build started with: "Build a contract bridge trainer." The builder didn't know bridge. The agent didn't know bridge. Neither of them could have written a rulebook. Here's what happened instead:
- YouTube transcript as corpus. The builder found a "Learn Bridge in 5 Minutes" video and had the agent download the transcript. Neither of them started with expertise — they started with a five-minute video.
- Confusion as requirements. Instead of telling the agent what to build, the builder told it what confused them. "Explain 1NT better." "And why 1?" "Oh I can play from the other person's hand?" Each confusion became a feature: a richer advisor, bid-level explanations, dummy-hand guidance.
- Map to what you know. Partway through, the agent asked: "Do you know Hearts?" The entire learn panel was rewritten — from 120 lines of bridge rules to 50 lines of "you know Hearts, here are three new things." The best teaching strategy emerged mid-build from one question about what the player already knew.
- The artifact taught the builder. By the end, the builder understood bridge — not from reading about it, but from building a trainer that explained it to them. The software was a side effect of learning.
This is a different loop from the one the rest of the guide describes. The guide says: you know the rules, you tell the agent, you correct when it's wrong. Stage 0 says: you don't know the rules, you learn alongside the agent, and your confusion is the most useful input you have.
If you're building a game you don't know: find a short video or tutorial, have the agent read the transcript, and start playing immediately. Tell it what confuses you. That's the prompt.
Stage 1: The Rulebook
The single most important thing you can do is write the rules down. Not a summary. Not "it's kind of like Uno." The actual rules, including your house rules.
About ShapeGame: Throughout this page, examples use a made-up game called ShapeGame — a tile-matching game where players place colored geometric tiles on a shared board, connecting matching edges to score points. ShapeGame isn't real. It's a stand-in for your game. Every sample prompt and example uses it so you can see the shape of a good prompt without needing to know someone else's rules. Swap in your game wherever you see it.
Create a file called RULEBOOK.md. Include:
- Setup: how many players, what pieces, starting state
- Turn structure: what happens in what order
- Legal moves: what you can and can't do
- Scoring: how points work, when they're counted, edge cases
- Win condition: how the game ends, who wins
- House rules: anything your family does differently
Every family has house rules. Maybe you skip a penalty on the first round. Maybe you allow a move the official rules prohibit. Maybe you've played it wrong for twenty years and that's how the game works now. Write it all down. The rules you don't write become the bugs you find later.
Sample prompt
I want to build a digital version of ShapeGame. Interview me about the rules. Ask me about setup, turn structure, legal moves, scoring, win conditions, and any house rules my family plays with. Once you understand the rules, write them into RULEBOOK.md.
Describing space
If your game has a board, a grid, a layout — anything spatial — this is the hardest thing to get right. Language models don't have a canvas. They process text, not geometry. Spatial relationships you leave unstated may be interpreted wrong.
Be explicit about:
- Coordinate system: name your axes. "Columns go left-to-right, rows go top-to-bottom. Position (0,0) is the top-left corner." Don't assume the AI knows which way is up.
- Orientation: if pieces have a direction (face-up, rotated, flipped), define what each orientation means and how it's stored. "A tile's value is read left-to-right. A vertical placement reads top-to-bottom."
- Adjacency: what does "next to" mean? Up/down/left/right only? Diagonals too? "Two tiles are adjacent if they share an edge. Diagonal tiles are not adjacent."
- Connections: if pieces connect (like tiles, cards in a tableau, or linked nodes), define exactly what a legal connection looks like. "The touching ends must match. A tile placed to the right of another tile connects its left end to the existing tile's right end."
- ASCII diagrams: draw the board state in text. This is the single most effective thing you can do. The AI can parse a labeled grid better than any paragraph of prose.
Example board state (ShapeGame):
        col 0    col 1    col 2
row 0  [▲red]—[▲blu]      .
row 1    .     [■blu]—[■grn]
row 2    .       .      [●grn]
Tile [▲red] has edges: top=red, right=red, bottom=blue
Placed at (0,0). Right edge connects to [▲blu] left edge (both red).
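The bullets above translate almost mechanically into code once they're written down. Here's a sketch in Python; the Tile shape, the edge vocabulary, and the board layout are invented stand-ins for whatever your engine actually uses:

```python
# Hypothetical sketch: the coordinate, adjacency, and connection rules above,
# written as code. Tile and edge names are illustrative, not a real API.
from dataclasses import dataclass, field

@dataclass
class Tile:
    name: str
    edges: dict = field(default_factory=dict)  # e.g. {"right": "red"}; absent = no edge

# (col, row): (0,0) is the top-left corner; columns increase rightward,
# rows increase downward. Diagonals are NOT adjacent.
OPPOSITE = {"left": "right", "right": "left", "top": "bottom", "bottom": "top"}
OFFSET = {"left": (-1, 0), "right": (1, 0), "top": (0, -1), "bottom": (0, 1)}

def connection_ok(board: dict, pos: tuple, tile: Tile) -> bool:
    """A placement is legal iff every touching edge matches the neighbor's facing edge."""
    col, row = pos
    for side, (dc, dr) in OFFSET.items():
        neighbor = board.get((col + dc, row + dr))
        if neighbor is None:
            continue  # empty cell: nothing to match against
        if tile.edges.get(side) != neighbor.edges.get(OPPOSITE[side]):
            return False  # the two touching edges must be the same color
    return True
```

Writing the rule as code exposes exactly the ambiguities the prose hides: which edge of the new tile meets which edge of the old one, and what "matches" means.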
Orientation is one of the things you'll correct most often. If two pieces can connect in multiple orientations, spell out every case — and build test cases around it.
Where you'll correct
The AI may miss ambiguities in the rules that you resolve without thinking. "What happens when there are no cards left to draw?" "Who goes first in a tie?" These are the things you know but haven't said. Every game has some of them.
2D spatial reasoning is a weak spot. The AI can confuse left/right, mix up which end of a piece connects where, or lose track of the board layout. The more spatial your game is, the more you may correct. Coordinate systems and ASCII diagrams in your rulebook prevent some of it — and since you've read this far, you may dodge the worst of it entirely.
Stage 2: Core Engine
The game logic, separate from any UI. Pure rules: generate legal moves, validate plays, calculate scores, manage turns, detect game end.
The key architectural decision: the engine should have zero dependencies on UI or networking. It's a library that takes a game state and an action, and returns a new game state. This means you can test it without a browser, run tournaments without a server, and swap the UI later without touching game logic.
Sample prompt
Read RULEBOOK.md. Build the core game engine — no UI, no server. Just the game logic: board state, legal move generation, move validation, scoring, turn management, and game-end detection. Use dataclasses (Python) or plain types (TypeScript). Every function should be pure — take state in, return state out. Write tests that play a full game and verify scoring.
Architecture patterns that work
- Pure reducers: game state in, action in, new state out. No side effects.
- Deterministic RNG: seed your randomness so games are reproducible. You can replay any game from its seed. Essential for testing and debugging.
- Strategy pattern: AI players are just functions that take a read-only game state and return a move. Plug in different strategies without changing the engine.
- Event sourcing: every action is a recorded event. The game state is derived from the event log. This gives you replay, undo, and a full audit trail for free.
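A minimal Python sketch of how these patterns fit together. Every name here (GameState, apply_action, the action format) is invented for illustration, not taken from a real engine:

```python
# Illustrative only: a pure reducer over an immutable state, a seed carried
# in the state for deterministic randomness, and an event log for replay.
import random
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class GameState:
    scores: tuple        # one score per player
    turn: int            # index of the current player
    seed: int            # RNG state: same seed, same game
    events: tuple = ()   # every applied action, for replay and undo

def apply_action(state: GameState, action: dict) -> GameState:
    """Pure reducer: no side effects, no mutation, returns a new state."""
    if action["type"] == "score":
        scores = list(state.scores)
        scores[state.turn] += action["points"]
        state = replace(state, scores=tuple(scores))
    # advance the turn and record the event
    return replace(
        state,
        turn=(state.turn + 1) % len(state.scores),
        events=state.events + (action,),
    )

def draw(state: GameState) -> tuple:
    """Deterministic RNG: derive a value and a fresh seed from the old seed."""
    rng = random.Random(state.seed)
    return rng.randint(1, 6), replace(state, seed=rng.randrange(2**32))
```

Because the seed lives in the state, replaying the event log from the same starting state reproduces the exact same game, which is what makes the golden-log testing in Stage 5 possible.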
Where you'll correct
- Scoring edge cases. Ties, blocked games, what happens when someone can't move.
- The engine may be "close enough" on the first pass. The corrections tend to be about the 20% of rules that require real domain knowledge — the stuff you know from playing, not from reading.
Stage 3: Player Count and Teams
Decide how many players, whether there are teams, and how human vs AI players work.
- Solo play: 1 human + AI opponents. Easiest to build and test.
- Hot-seat: multiple humans take turns on the same device. No networking needed.
- Teams: paired players sharing a score. The engine needs to track team scores separately.
Sample prompt
ShapeGame supports 2–4 players individually. Add individual scoring to the engine. For single-player mode, the human plays against 1–3 AI opponents. AI players use the strategy interface we built.
Where you'll correct
- Team scoring aggregation
- Turn order and how it interacts with teams
- Rules about partnerships — can teammates communicate? Share information?
Stage 4: UI — Make It Visible
Now you make it playable. Start with the simplest thing that works.
Terminal UI is the fastest to build. The AI can generate a working text-based game in minutes. Good for testing, hard to love.
Vanilla HTML/CSS/JS is the sweet spot. No framework needed. A canvas or a grid of divs. The AI may default to React — push back. You don't need a JavaScript framework to render a card game or a board.
The key UX decisions:
- How do you show the game state? (Board view? Hand view? Both?)
- How does the player indicate their move? (Click? Drag? Type?)
- How do you show legal moves? (Highlight? Filter? Tooltip?)
- How do you show scores?
- What happens between turns? (Pause? Animation? Sound?)
Sample prompt
Build a web UI for the game. Plain HTML, CSS, and vanilla JavaScript — no React, no framework.
Show the ShapeGame board in the center as a grid. Show the player's tiles at the bottom. Highlight valid placements when it's your turn. Show scores in the corner. Use a dark felt background with the tile shapes in bright colors.
The UI calls the engine for legal moves and validation — it never implements game logic itself.
Where you'll correct
- Sizing and spacing. The AI may guess wrong on proportions.
- The "feel" of the game — transitions, announcements, pacing between turns
- Mobile layout if you care about phones
- The visual design may be generic. You'll want to make it feel like your game.
Six things you'll say every time
These patterns showed up across multiple board game builds — cribbage, bridge, dominos. They're predictable. Every game hits them.
1. "What just happened?"
The game scores a point and the player doesn't know why. This is always the first complaint. Every game event needs a visible explanation. Don't just update a number — say "15 for 2" or "3-card run." If the player has to guess why their score changed, the UI is broken.
When a game event occurs — a score, a penalty, a turn skip — show what rule triggered it and why. Log it, and briefly animate the rule name near where it happened.
2. "Who goes first?"
The game starts and the player doesn't know whose turn it is, or why. Narrate the flow. "You deal first (random)." "East opens the bidding." A status line that says what's happening in plain language is not optional.
Always show: whose turn it is, what phase the game is in, and what the current player can do. If the starting player is random, say so.
3. "It looks wrong."
If the game has a physical form — a cribbage board, a card table, a chess board — the player has visual expectations. They'll know instantly if the board "looks wrong" even if the logic is correct. Google the real thing before rendering it.
If your game has an iconic physical form, match it. Screenshot a real one and put it in the project as a reference. The first playtest catches this instantly — but only if you look.
4. "It's covering the cards."
Score explanations, advisor tips, and event logs will overlap the game area on the first try. Every time. Overlays get in the way, and making them smaller doesn't fix it. Give feedback its own dedicated space — a panel, a sidebar, a bottom bar with its own real estate.
Never overlay game feedback on the play area. Dedicate a fixed region for scores, messages, and advisor output. If it needs more room, let the player expand it — don't let it cover the board.
5. "What does that mean?"
If the game has domain terms the player might not know — trump, meld, HCP, crib, NT — they need inline explanation. Don't assume the player already knows the game. They might be learning it through your app.
Every domain term should have a tooltip or pop-out the first time it appears. Consider a "learn" panel that covers the basics — card values, scoring rules, common terms. Make it toggleable so experienced players can hide it.
6. "Can it teach me?"
You'll want a training mode. A toggle that shows: what should I play here, why, and what are the risks. On the second game you build, you'll ask for this in your opening prompt. On the first, you'll discover you want it after playing a few rounds.
Plan for a training/advisor mode from the start. Show recommended moves, explain the reasoning, color-code options by expected value. This is often more valuable than the game itself — especially for games you're learning.
Three more things (if you're building for a learner)
If the player is learning the game through your app — not just playing a game they know — these show up fast.
7. "Explain it like I already know Hearts."
The single most effective teaching move: map the new game onto a game the player already knows. "Bridge is like Hearts, except there's a trump suit you choose by bidding, and you play with a partner whose hand you can see." One sentence. More useful than a full rules page.
Ask the player what games they already know. Frame the rules as differences from the familiar game. "Like Hearts, you must follow suit. Unlike Hearts, you choose a trump suit by bidding."
8. "Too much text."
The learn panel had everything in it — and the player still didn't understand. The problem wasn't missing information. It was too much information. Walls of rules don't teach. Short explanations at the moment they matter do.
Keep reference text short. Teach in context — when the player needs to bid, explain bidding. When they need to play, explain playing. Don't front-load a rulebook.
9. "I refreshed and lost my game."
If the player is learning, they'll refresh by accident, fat-finger a navigation, or close the tab. Don't punish them. Save the game state. Let them come back.
Persist game state to localStorage or the server. A refresh should resume the game, not start a new one.
Stage 5: Validation — Prove It Works
This is where testing goes beyond "does it run" to "does it play right."
Break it before you fix it
The biggest sin in debugging with AI: letting it fix something it hasn't reproduced. The AI sees an error, jumps to a fix, and tells you it's solved. But if there was no failing test before the fix, how do you know the fix did anything? Maybe it fixed a different problem. Maybe it fixed nothing and the bug comes and goes. Maybe it introduced a new bug that looks like progress.
The rule: no fix without a failing test first. When you find a bug, the first prompt isn't "fix this." It's:
Write a test that reproduces this bug. The test should fail right now. Don't fix anything yet.
Once you have a red test, then fix it. The test goes green. Now you know. This is true for all software, but it's especially important with AI because the AI will confidently skip this step every single time.
See the same thing
Here's a problem you'll hit: you look at the board and see a bug. The AI looks at the data and thinks everything's fine. You're both right about what you're looking at — you're just not looking at the same thing. You see pixels. The AI sees a data structure. And the two don't agree, but neither of you can tell from your own view.
You need to find a way to look at the same thing. Something you can see and the AI can read.
One approach that works: build small, standalone test pages. Place a known sequence of pieces on the board. Render them. Then interrogate what's actually there — DOM positions, bounding boxes, what's adjacent to what. The AI can read coordinates and adjacency lists. You can see the picture. Now you're both looking at the same thing.
For ShapeGame, a test page might render three tiles in a row and then dump:
Tile [▲red] at (0,0): right_neighbor=[▲blu] at (1,0)
Connection: ▲red.right_edge=red, ▲blu.left_edge=red → MATCH
Tile [▲blu] at (1,0): right_neighbor=[■blu] at (2,0)
Connection: ▲blu.right_edge=green, ■blu.left_edge=blue → MISMATCH
You can see the mismatch on screen. The AI can see it in the text output. Now you're arguing about the same thing instead of talking past each other.
The medium doesn't matter — DOM inspection, canvas pixel sampling, script output, a log file. What matters is that you both agree on what you're looking at.
Test snapshots
Run a full game with a fixed random seed. Capture every move, every score, every state transition as a human-readable log. Save it as a golden file. If you change the engine and the log changes, you'll see exactly what changed and whether it's correct.
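A sketch of the idea. Here play_game is a stand-in for your seeded engine; the real one would yield one human-readable line per move:

```python
# Golden-file snapshot sketch. play_game stands in for the real engine.
import random
from pathlib import Path

def play_game(seed: int):
    rng = random.Random(seed)
    yield f"game start, seed={seed}"
    for turn in range(5):
        yield f"turn {turn}: player {turn % 2} scores {rng.randint(1, 6)}"

def check_snapshot(seed: int, golden: Path) -> bool:
    log = "\n".join(play_game(seed)) + "\n"
    if not golden.exists():
        golden.write_text(log)        # first run: record the golden log
        return True
    return log == golden.read_text()  # later runs: any diff is a failure
```

A real test would show the diff on failure instead of returning a boolean, so you can read exactly which move changed.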
Tournament simulation
Run 100 games with different seeds. Check that:
- Every game terminates
- All scores satisfy your game's scoring rules
- No illegal moves were made
- The winner's score meets the win condition
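That checklist as code, with an invented stand-in engine; the rule-based assertions are the part worth copying:

```python
# Tournament sketch. simulate() is a stand-in engine; the assertions are the
# point: rule-based checks that must hold for EVERY game, whatever the seed.
import random

def simulate(seed: int) -> dict:
    rng = random.Random(seed)
    scores = [0, 0]
    for _ in range(rng.randint(5, 20)):            # bounded loop: terminates
        scores[rng.randint(0, 1)] += rng.choice([2, 4, 6])
    return {"scores": scores, "winner": scores.index(max(scores))}

def tournament(n: int = 100) -> list:
    games = []
    for seed in range(n):
        g = simulate(seed)
        # rule-based assertions ("scores are always even"), not magic numbers
        assert all(s >= 0 and s % 2 == 0 for s in g["scores"]), f"seed {seed}"
        assert g["scores"][g["winner"]] == max(g["scores"]), f"seed {seed}"
        games.append(g)
    return games
```

Note the assertions check invariants of the rules (even scores, winner has the max), not specific values; any seed that violates them is printed so you can replay that exact game.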
Sample prompt
Add deterministic testing: seed the RNG so games are reproducible. Write a test that plays a full game with seed 12345, logs every move to a snapshot file, and compares against a golden log. If anything changes, the test fails and shows the diff.
Then write a tournament test: run 100 seeded games. Assert every game terminates, all scores are valid, and no illegal moves occurred. Report win rates by strategy.
Sample prompt (test page)
Build a standalone test page at /test/three-tiles. Place three ShapeGame tiles in a row with known values. Render them with the real renderer. After render, dump every tile's position, its neighbors, and whether each connection is valid. Show PASS/FAIL for each connection. I want to see the board AND read the validation output on the same page.
Where you'll correct
- The AI may write tests that test its own code, not the rules. Push for rule-based assertions: "the score is always even," not "the score equals 14."
- Snapshot format — make it human-readable. You need to be able to read it and say "yes, that's a correct game of ShapeGame."
- The AI may skip reproduction and go straight to a fix. Hold the line: red test first, then fix, then green test. Every time.
Stage 6: AI Strategy — Make It Smart
Start dumb, then get smarter. The strategy pattern from Stage 2 pays off here.
Level 0 — Random: pick a random legal move. This is your baseline. If random wins 50% against your "smart" AI, your smart AI isn't smart.
Level 1 — Heuristic: simple rules. Play the highest-scoring move. Or the move that blocks your opponent. Or the move that leaves you safest. You know what a good player does — tell the AI.
Level 2 — Context-aware: consider the whole board state. What's been played? What can opponents probably do? What information is hidden? This is where your domain knowledge as a player matters most.
Level 3 — Learned: train a model on game telemetry. Export per-turn features (game state, available moves, outcomes). Train a predictor to rank moves. This is optional and advanced — some games benefit from it, many don't need it.
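Under the Stage 2 strategy pattern, each level is just a function from a read-only state to a move. A hedged sketch; the state and move dictionaries are invented shapes, not a real interface:

```python
# Illustrative strategies. Moves are dicts like {"points": 3, "blocks": True};
# state carries whatever the engine exposes read-only. All shapes are made up.
import random

def random_strategy(state, legal_moves, rng):
    # Level 0: the baseline every smarter strategy must beat
    return rng.choice(legal_moves)

def aggressive_strategy(state, legal_moves, rng):
    # Level 1: pure greed, take the highest-scoring move
    return max(legal_moves, key=lambda m: m["points"])

def context_strategy(state, legal_moves, rng):
    # Level 2: if any opponent is within 10 points of the target, prefer
    # blocking moves; otherwise fall back to greedy scoring
    if max(state["opponent_scores"]) >= state["target"] - 10:
        blocking = [m for m in legal_moves if m["blocks"]]
        if blocking:
            return max(blocking, key=lambda m: m["points"])
    return max(legal_moves, key=lambda m: m["points"])
```

Passing the rng in (rather than using module-level randomness) keeps strategies deterministic under a seeded tournament.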
Sample prompt (heuristic)
Add AI strategies to the engine. Start with three:
- RandomStrategy — picks a random legal move
- AggressiveStrategy — plays the move that scores the most points
- DefensiveStrategy — plays the move that minimizes risk
Run a tournament: each strategy plays 100 games against each other. Report win rates.
Sample prompt (ML, optional)
Export per-turn game features to JSONL: game state, all candidate moves, and the resulting outcome. Run 10,000 seeded games and export the data. Then train a predictor that ranks moves by expected outcome. Start with scikit-learn.
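What the export-and-train loop might look like, assuming scikit-learn is available. The features (points, blocks) and the fake outcome generator are placeholders; in a real build the rows come from your exported game telemetry:

```python
# Hedged sketch of Level 3. Features and labels are synthesized here;
# a real build would export them from thousands of seeded games.
import json
import random
from sklearn.linear_model import LogisticRegression

def export_features(n_rows: int, path: str) -> None:
    rng = random.Random(0)
    with open(path, "w") as f:
        for _ in range(n_rows):
            # one JSONL row per candidate move: features plus the outcome
            pts, blocks = rng.randint(0, 10), rng.randint(0, 1)
            won = int(pts + 3 * blocks + rng.gauss(0, 2) > 6)  # fake label
            f.write(json.dumps({"points": pts, "blocks": blocks, "won": won}) + "\n")

def train(path: str) -> LogisticRegression:
    with open(path) as f:
        rows = [json.loads(line) for line in f]
    X = [[r["points"], r["blocks"]] for r in rows]
    y = [r["won"] for r in rows]
    return LogisticRegression().fit(X, y)

def best_move(model: LogisticRegression, candidates: list) -> dict:
    # rank candidate moves by predicted win probability, return the best
    win_probs = model.predict_proba(
        [[c["points"], c["blocks"]] for c in candidates])[:, 1]
    return max(zip(win_probs, candidates), key=lambda pair: pair[0])[1]
```

The JSONL-then-train split matters: the export is replayable and inspectable on its own, so you can sanity-check the features before trusting the model.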
The real work
Building the AI player is its own correction loop, separate from building the game. The engine can be perfect and the AI player can still be terrible.
The cycle looks like this:
- Watch it play. Run a game with verbose logging. Read every move. You may immediately see moves no human would make.
- Name the mistake. "It played a high-value piece early when it should have held it." "It ignored an obvious block." "It doesn't count what's been played." Turn your instinct into a rule.
- Add the rule. Give the strategy a new heuristic. "If the opponent is within 10 points of winning, prioritize blocking over scoring."
- Tournament it. Run 100 games. Did the win rate change? Did new dumb behavior emerge?
- Repeat.
This is where you're the domain expert, not the programmer. You know what a good player feels like. The AI doesn't — it optimizes whatever you tell it to optimize. If you only tell it to maximize score, it'll play greedily and lose to any strategy that thinks two moves ahead.
Where you'll correct
- The AI may optimize for one thing (usually scoring) and ignore everything else. Push for strategies that play like a real person.
- Feature engineering is where your domain knowledge matters most. You know what a good player pays attention to. The AI doesn't — tell it.
- Spatial games can hit the same 2D problem here: the AI player may misread the board. If it can't see that a position is blocked or that a connection creates a scoring opportunity, its moves may look random. Test spatial reasoning in the strategy separately from move selection.
Stage 7: Multiplayer — Play Together
Some projects never get here. The author's didn't — six repositories, a year of work, an ML pipeline, and the game still runs on one machine. Not because the architecture couldn't handle it. The engine was pure, the state was serializable, the strategy pattern meant swapping humans for network players was a clean interface change. It just wasn't the itch that needed scratching.
That's fine. If you got through Stage 6 and you're playing a good game against an AI opponent that feels right — you built the thing. Multiplayer is here if you want it.
Two paths:
WebSocket — real-time, bidirectional. Both players see moves instantly. The server runs the engine; clients send moves and receive state updates.
Turn-based API — REST endpoints. Players poll for state. Simpler but higher latency.
Network visibility is the whole game
The single most important rule of multiplayer: the client only sees what that player is allowed to see. Not "the UI hides it." The server never sends it.
If ShapeGame has hidden tiles in your hand, the server sends each player only their own hand. The board state is public — everyone gets it. The draw pile count is public — everyone gets it. But the contents of the draw pile? The other players' tiles? Never leaves the server.
This means the server doesn't broadcast one game state to all clients. It builds a per-player view — a filtered projection of the full state that contains only what that player should know.
Full state (server only):
board, draw_pile, hands[player_0, player_1, player_2], scores, turn
Player 0's view (sent to player 0):
board, draw_pile_count, my_hand, scores, turn, legal_moves
Player 1's view (sent to player 1):
board, draw_pile_count, my_hand, scores, turn, legal_moves
If you open the browser dev tools and inspect the WebSocket messages, you should see nothing you couldn't learn from looking at your side of the table. That's the test.
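What that filter might look like in Python. The field names are illustrative; the contract is what matters: everything in the returned dict is information the player could see at a physical table.

```python
# Illustrative per-player projection. Field names are made up; the rule is
# real: the server sends ONLY what this player could see at the table.
def legal_moves(state: dict, player_id: str) -> list:
    # stand-in for the engine's real legal-move generator
    return [{"tile": t} for t in state["hands"][player_id]]

def player_view(state: dict, player_id: str) -> dict:
    return {
        "board": state["board"],                     # public
        "draw_pile_count": len(state["draw_pile"]),  # the count is public...
        # ...but the contents never leave the server
        "my_hand": state["hands"][player_id],        # only this player's hand
        "scores": state["scores"],                   # public
        "turn": state["turn"],
        "legal_moves": legal_moves(state, player_id)
                       if state["turn"] == player_id else [],
    }
```

Note the full hands dict never appears in the output, so there is nothing for the UI to "hide": the secret data simply isn't on the wire.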
The AI may get this wrong on the first pass — broadcasting the full game state to every client and letting the UI decide what to show. That's a cheat code waiting to happen. Push back. The server is the only authority, and it should filter before sending.
Sample prompt
Add WebSocket multiplayer to ShapeGame. The server runs the game engine. Clients connect, send moves as JSON, and receive per-player state views.
Critical: build a player_view(state, player_id) function that returns only what that player is allowed to see — their own hand, the public board, scores, and their legal moves. Never send another player's hand or the draw pile contents. The client receives only its view, never the full state. Handle disconnects gracefully — the game pauses, doesn't crash.
Where you'll correct
- Network visibility — this is the big one. The AI may default to broadcasting full state. Every time you add a new feature (chat, spectators, replays), re-check: does the client see anything it shouldn't?
- Race conditions: two players acting at the same time
- Reconnection: what happens when someone's wifi drops?
- Spectator mode: spectators should see the board but not any player's hand. That's a third view type.
Stage 8: Deployment — Put It Somewhere
Get it running outside your laptop.
GitHub Pages — free, automatic. Works for games where the AI runs in the browser (no server needed).
A VPS — for games with a server. Run the backend, serve the frontend from the same process.
Sample prompt
I want to deploy ShapeGame. The frontend is static HTML/CSS/JS. The backend is a Python WebSocket server. What's the simplest way to get both running on a VPS?
Where you'll correct
- CORS issues (frontend on one domain, backend on another)
- Secure WebSocket connections (wss:// vs ws://)
- Process management (the server needs to stay running)
The real progression
This page describes a clean lifecycle, but real builds aren't clean. The author's board game project went through six repositories over a year. A sandbox. A proof of concept. An engine rewrite. A full-stack rebuild. A production UI. An ML training pipeline. Each one taught something the next one needed.
The breakthrough came when the author could hand the AI a rulebook and a steering file, and get a working game in a single session. But getting to that point required the earlier attempts — not because the code carried over, but because the human's ability to articulate the rules and architecture had sharpened. The AI got better at building. The human got better at asking.
That's the real lesson: the inflection point isn't just about smarter models. It's about you learning what to tell them.
How prompting evolved
Early 2024: function-by-function
Write a function that scores a round of ShapeGame.
It takes a list of tiles remaining in a player's hand.
Each tile's point value is the sum of its edge colors.
Wildcards score zero.
You were the architect. The AI was a typist. Every edge case you forgot was a bug.
Mid 2025: system-level
Here are the rules of ShapeGame. Build a scoring engine that
handles multiple rounds, tracks cumulative scores, and
determines the winner.
Better. But still missed house rules, got confused by ambiguity, needed substantial debugging.
November 2025+: the inflection point
Read RULEBOOK.md. Build a playable version with scoring, turn
management, and a web UI. Follow the rules exactly. Ask me
if anything is ambiguous.
Corrections shifted from "this function is broken" to "my family plays ShapeGame differently" and "I'd prefer the scores displayed this way." The shift from correcting fundamentals to correcting preferences — that's the inflection point.
The second game: the flywheel
The first game prompt was five words: "It's cribbage. On a board."
The second game prompt, started three minutes later:
Build a contract bridge trainer for my aunt. She'll be viewing
it on her phone. She plays with friends, is a recent learner.
I'd like to learn it too. Maybe 2p v 2ai, or 1p+1ai v 2ai.
Customizable house rules. A mode that tells me what I should
play and why, toggle on and off. TV browser too — mouse
and/or keyboard, don't prefer switching between the two.
Same person. Same afternoon. But different reasons. Cribbage was goofing around — a game he already knows, built for fun, just to see if it works. Bridge was a gift — a game he doesn't know, built so he could learn it and play with his aunt on her phone.
The second prompt is more specific because the intent is different. When you're building for someone else, you think about their device, their skill level, their context. When you're building for yourself, "it's cribbage" is enough because you'll correct as you go.
But the flywheel is still there. The cribbage build is where training mode was discovered — asked for late, after playing a few rounds. The bridge build is where training mode was assumed — requested in the opening prompt, because now it's obvious that any game should teach you while you play. Both games were playable within three hours. The human spent about fifteen minutes on each, dropping in to correct what looked wrong.
The flywheel doesn't just make your prompts longer. It changes what you think a game should be. After cribbage, a game without a training mode feels incomplete.
What a real build looks like
The bridge build sounds clean in retrospect: opening prompt, agent builds, deploy. Here's what actually happened across 1,937 lines of conversation:
Layout wars (3 rounds). The advisor panel overlapped the player's cards. First fix: repositioned as a collapsible panel. Still overlapping. Second fix: full CSS grid rewrite with the advisor in its own dedicated row. Third round: couldn't see the advisor at all — turned out to be browser caching. Three rounds of debugging to solve "the text is covering the cards." This is Stage 4, problem #4, in practice.
Learning through playing. The builder started playing and asking beginner questions mid-build. "Explain 1NT better" became a rewrite of the advisor output. "And why 1?" revealed the agent had never explained bid levels. "Oh I can play from the other person's hand?" led to a dedicated Learn panel section about dummy. Every confusion became a feature request.
The YouTube pivot. Midway through, the builder linked a "Learn Bridge in 5 Minutes" video. The agent pulled the transcript, produced a coverage matrix, and the builder admitted they still didn't understand trump, passes, and bonuses. Then the agent asked one question: "Do you know Hearts?" The entire Learn panel was rewritten from scratch — 120 lines replaced with 50 lines framed as "you know Hearts, here are three new things." The best pedagogical decision in the whole build came from a question, not a prompt.
Context limits. The conversation hit Claude's context limit twice. Both times, it continued with a summary. This is normal for a real build — the conversation is longer than any single context window.
What the builder spent time on: telling the agent what confused them, playing the game and reporting what felt wrong, linking a YouTube video. Not writing code. Not debugging logic. The corrections were about understanding, not implementation.
Book connection
This is the correction shape from Chapter 7. The first version is never right — not because the AI failed, but because building something is how you discover what you actually want.
It's also the trust shape from Chapter 10. Each stage builds evidence. The engine passes tests — you trust the scoring. The tournament runs 10,000 games — you trust the rules. Trust is earned stage by stage, with evidence.
And it's the maxim: make a system that will learn, but don't make it guess — tell it what you know. The RULEBOOK.md is you telling it. The corrections are you telling it more. The ML pipeline is you letting it learn from what you've told it. Every stage is the same motion: your knowledge, articulated, becoming the system's capability.
External references
- Phaser — the most popular open-source HTML5 game framework. Overkill for card games, useful if your board game needs real-time rendering or physics.
- Boardgame.io — a framework specifically for turn-based games with multiplayer. Handles state management, AI, and networking.
- chess-programming.org — deep reference on game tree search, evaluation functions, and AI strategy. Relevant for Stage 6.