Description:
- Why I'm Not "Picking a Fight" on AI: A listener asked if I'm intentionally stoking a flame war by treating agentic coding as a foregone conclusion. The honest answer is that I've used it, the data points in one direction, and a show built around pretending otherwise would slowly drift away from reality — and away from being useful to you.
- Respecting the Misgivings, Without Getting Stuck in Them: Ethical concerns, skill atrophy worries, and questions about long-term effects are all legitimate. But the goal of this show is practical applicability, so we focus on mental models you can use Monday morning rather than litigating every angle of the debate.
- The "Minecraft" Principle: If I ask you to "build Minecraft," I've handed you several chapters of specification in a single word. That's meaning-rich abstraction — language that points at a huge amount of shared context with very little token cost.
- Meaning-Rich AND Specific: "Human history" is meaning-rich but uselessly broad. "Block-building game" is specific but loses fidelity. The sweet spot is vocabulary that is both compact and unambiguous — sitting in the top right of the meaning-density / specificity graph.
- A Real Example — Strategy Pattern: When working on authorization rules, I didn't want a pipeline. Instead of describing base classes, shared interfaces, and parallel execution to the LLM, I used the words "strategy pattern." Two words did the work of three paragraphs, and the output landed where I wanted it.
- Vocabulary as Leverage: Named patterns, named algorithms (Monte Carlo, etc.), named architectural concepts — these act like compressed pointers. The more of them you genuinely understand, the higher the leverage of every prompt you write and every conversation you have with another engineer.
- How to Build This Vocabulary: Have conversations with senior engineers. Ask an LLM what patterns are at play in a codebase, which ones you're using incorrectly, and which ones you only think you're using. Learn the abstraction layer that sits one step above your day-to-day implementation work.
- The Asterisk — Shared Context Required: This only works when both sides know the term. Public, well-documented concepts (patterns, papers, algorithms) translate immediately to LLMs. Private or organization-specific concepts need to be loaded into context — via CLAUDE.md, AGENTS.md, or skills — before that compression kicks in.
- Episode Homework: Pick one area of your current codebase. Ask an LLM to name the patterns in play, the patterns you're using incorrectly, and the ones you might be missing. Use that conversation to add at least one new piece of meaning-rich vocabulary to your working set.
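To make the "strategy pattern" shorthand concrete, here is a minimal Python sketch of what those two words expand to — interchangeable rule classes behind a shared interface. This is a hypothetical authorization example, not the actual codebase from the episode, and the strategies run sequentially here for simplicity rather than in parallel:

```python
from abc import ABC, abstractmethod


class AuthorizationStrategy(ABC):
    """Shared interface every authorization rule implements."""

    @abstractmethod
    def is_allowed(self, user: dict, resource: str) -> bool: ...


class RoleStrategy(AuthorizationStrategy):
    """Grant access when the user holds an admin role."""

    def is_allowed(self, user: dict, resource: str) -> bool:
        return "admin" in user.get("roles", [])


class OwnerStrategy(AuthorizationStrategy):
    """Grant access when the user owns the resource."""

    def is_allowed(self, user: dict, resource: str) -> bool:
        return user.get("owns") == resource


class Authorizer:
    """Runs each configured strategy; grants access if any one passes."""

    def __init__(self, strategies: list[AuthorizationStrategy]):
        self.strategies = strategies

    def is_allowed(self, user: dict, resource: str) -> bool:
        return any(s.is_allowed(user, resource) for s in self.strategies)


auth = Authorizer([RoleStrategy(), OwnerStrategy()])
print(auth.is_allowed({"roles": ["admin"]}, "report"))  # True (role rule)
print(auth.is_allowed({"owns": "report"}, "report"))    # True (owner rule)
print(auth.is_allowed({"roles": []}, "report"))         # False (no rule passes)
```

The point of the compression: everything above — the base class, the shared interface, the pluggable rule objects — is what an LLM (or a colleague) unpacks from the two words "strategy pattern."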
|
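For the private-vocabulary case, loading the term into context can be as simple as a short glossary in a project memory file. A hypothetical CLAUDE.md fragment (all names invented for illustration):

```markdown
## Project vocabulary

- "gatekeeper": our internal name for the authorization layer — a set of
  strategy-pattern rule classes, each exposing an `is_allowed` check.
- "snapshot run": a read-only batch job that must never write to the primary
  database.

When I use these terms in a prompt, apply these definitions.
```

Once a definition like this is in context, the organization-specific term gets the same compression benefit as a public pattern name.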