What happened
In 2022 and 2023, AI systems crossed a capability threshold. They went from "impressive demos" to "useful daily tools." The key shift: they can now read, write, and reason about text well enough to do real work. Not perfectly, but usefully.
You may have heard people talk about ChatGPT, or Claude, or Gemini. These are all examples of the same underlying technology. The names don't matter much. What matters is that a new kind of tool showed up, and it's not going away.
What they actually are
They're called large language models. They were trained on enormous amounts of text from the internet — books, articles, code, conversations, documentation. They learned to predict what comes next, word by word (strictly, token by token — a token is a word or a fragment of one).
That sounds simple, and in a sense it is. But the emergent capabilities are surprising: they can write code, analyze documents, explain concepts, draft emails, debug problems, translate languages, summarize research, and carry on nuanced conversations about almost anything.
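To make "predict what comes next" concrete, here is a deliberately tiny sketch. Real models use neural networks over billions of tokens, not word-pair counts over one sentence, but the training objective is the same shape: given what came before, guess the next item.

```python
from collections import Counter, defaultdict

# Toy "training data". A real model sees a large slice of the internet.
corpus = (
    "the model reads text and the model predicts the next word "
    "and the next word and the next word"
).split()

# "Training": count which word tends to follow which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

def generate(start, length=6):
    """Continue a sequence one word at a time, greedily."""
    out = [start]
    for _ in range(length):
        out.append(predict_next(out[-1]))
    return " ".join(out)

print(generate("the"))  # echoes the most common continuation in the corpus
```

That's the whole trick, scaled up by many orders of magnitude. The surprise of the last few years is how much capability falls out of doing this prediction extremely well.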
They don't "understand" the way you do. There's a real philosophical debate about what's happening inside them. But the practical difference between what they produce and what a knowledgeable person would produce is shrinking fast.
What changed
Three things converged at roughly the same time.
The models got good enough to be useful, not just impressive. Earlier AI could do party tricks. The current generation can do your taxes, review a contract, or help you write a quarterly report. The gap between "demo" and "daily tool" closed.
Companies made them accessible. You can talk to them in plain English through a chat interface or a terminal. No programming required to start. You type a question, you get an answer. The barrier to entry dropped to zero.
They got fast and cheap enough to use all day, every day. What cost hundreds of dollars per conversation in 2020 now costs fractions of a cent. Speed went from minutes per response to seconds. This made casual, exploratory use possible.
What people are doing with them
Not just chatbots. That's the first thing people see, but it's the tip of the iceberg.
People are writing code with AI pair programmers — describing what they want in English and watching the code appear. They're building personal dashboards that read their email and calendar. They're creating chatbots trained on their own documents. They're automating repetitive work that used to eat hours. They're learning new skills faster by having a patient tutor that never gets annoyed and never judges a question.
The mentoring move underneath all of this is simple: help people go from head in a box to octopus in a box — from AI that can only answer, to AI that can actually touch files and help make things. If that's the transition you care about, this site is organized as a curriculum, not just a pile of pages.
The practical lesson under the hype is simpler than the tooling landscape makes it sound: accountability is the cure for hallucination, and context is what makes the tool stop being generic. The first keeps you from trusting output you didn't verify. The second is what turns a clever stranger into a useful collaborator.
I don't know whether the world that's coming will feel more liberating or more brutal. I do know that I need to learn how to build in it. That's why the emphasis here is not prediction. It's practice.
The book this site accompanies documents one person's year of doing all of these things. Forty-eight short chapters on what worked, what didn't, and what it felt like to go from knowing nothing about programming to building real tools.
What they're bad at
Honesty requires this section.
They hallucinate — they confidently state things that aren't true. They'll invent citations, fabricate statistics, and present fiction as fact with the same calm tone they use for everything else. You have to check their work.
If you give them tools, file access, or the power to deploy, you also get new failure modes: leaked secrets, prompt injection, and code you never really reviewed. That is not a reason to panic. It is a reason to add a small safety layer before anything goes public.
They have no memory between conversations unless you build it in. Every new chat starts from zero. They don't know what you told them yesterday.
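"Unless you build it in" can be as simple as saving the conversation yourself and handing it back next time. A minimal sketch using only Python's standard library — the file name and message format here are illustrative, not any particular vendor's API:

```python
import json
from pathlib import Path

# Illustrative location; any durable storage works.
HISTORY_FILE = Path("chat_history.json")

def load_history():
    """Read prior messages from disk, or start fresh."""
    if HISTORY_FILE.exists():
        return json.loads(HISTORY_FILE.read_text())
    return []

def remember(history, role, text):
    """Append one message and persist the whole conversation."""
    history.append({"role": role, "content": text})
    HISTORY_FILE.write_text(json.dumps(history, indent=2))
    return history

# Each session reloads yesterday's context before talking to the model.
history = load_history()
history = remember(history, "user", "My name is Sam.")
```

Tools that "remember you" are mostly doing a fancier version of this: storing what you said and re-sending the relevant parts with each new request.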
They can't access the internet or your files unless you explicitly give them tools to do so. They don't know what's in your inbox or what happened in the news this morning unless someone set that up.
They're expensive to run at scale. Cheap per conversation, yes. But if you want to process a million documents or serve a million users, the costs add up fast.
And they raise real questions about jobs, creativity, authorship, and truth. These questions are not resolved. Anyone who tells you they have all the answers is selling something.
Where to go from here
You have options. None of them require urgency.
Patterns at work
Two chapters from the book that are relevant if you're just starting to think about this:
- AI Rewards Curiosity: Why the people getting the most out of AI aren't the most technical — they're the most curious.
- Learn by Building: The case for making something real as the fastest path to understanding.
Related guides
- From Read to Make: The curriculum page — orientation, setup, first build.
- Zero to Dev (Mac): Get an AI agent running on macOS in 15 minutes.
- Zero to Dev (Linux): Same thing, for Linux.
- Zero to Dev (Windows): Same thing, for Windows.
- Security for Directors: The short trust-and-safety layer for people shipping code they did not personally write.
- Terminology: Key terms defined simply. If someone says a word you don't recognize, it's probably here.
Further reading
Three good pieces if you want to go deeper, all written for a general audience:
- Large Language Models, Explained (Timothy B. Lee): the best plain-English explainer of how LLMs actually work.
- What Just Happened (Ethan Mollick): an academic's take on why this moment matters.
- The AI Revolution: The Road to Superintelligence (Wait But Why): long but accessible. Good for understanding the bigger arc.