
Claude Code's Seven Programmable Layers: From User to System Designer

How Claude Code's seven-layer architecture turns a terminal AI assistant into a programmable coding system — context, tools, skills, hooks, subagents, and verification.



Most developers treat Claude Code as a terminal chatbot — ask it to write code, copy the output, move on. That's using maybe 20% of what it actually is. Tw93's deep-dive analysis reframes Claude Code as a seven-layer programmable architecture, where each layer solves a different design problem: what Claude knows, what it can do, how it's constrained, and how its output gets verified. Understanding these layers is the difference between "AI that writes code" and "AI that operates as a reliable engineering system." Here's the full architectural breakdown and how to use each layer effectively.

What Happened

Developer Tw93 (@HiTw93) published a comprehensive architectural analysis of Claude Code, mapping its capabilities into seven distinct programmable layers:

  • CLAUDE.md / rules / memory: Long-term context — tells Claude what it is
  • Tools / MCP: Action capabilities — tells Claude what it can do
  • Skills: On-demand methodology — tells Claude how to do it
  • Hooks: Enforced behaviors — executes rules without relying on Claude's judgment
  • Subagents: Isolated workers — controlled autonomy with separate contexts
  • Verifiers: Validation loops — making output auditable and rollbackable

The key insight: these aren't a feature list. They're design surfaces — each solving a fundamentally different dimension of the human-AI collaboration problem. A CLAUDE.md file defines project-level contracts. Skills provide domain-specific methodology. Hooks enforce rules deterministically, without depending on the model's judgment. Subagents isolate context so parallel work doesn't pollute shared state.

Tw93 also documents Claude Code's core runtime model: the agentic loop of gather context → take action → verify results, running continuously until the task completes. Users can intervene at any point — interrupting, redirecting, or adding context mid-loop.

Why It Matters

The gap between "Claude Code works" and "Claude Code works reliably at scale" is entirely about how you configure these layers. Most teams hit a ceiling because they treat Claude Code as a single-surface tool: type prompt, get code. Tw93's framework reveals why that ceiling exists and how to break through it.

Context economics is the most underappreciated constraint. With a 200K token context window, connecting 5 MCP servers can consume 25,000 tokens just in tool definitions — 12.5% of your budget before any work begins. Tw93 recommends using the /context command to audit token allocation across system prompts, tools, memory files, skills, and messages. Teams that ignore context economics wonder why Claude "forgets" instructions mid-session; it's not forgetting — it's running out of room.
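The arithmetic above can be sketched as a quick budget audit. This is a toy illustration: the per-category token counts are the article's example figures plus assumed values, not measurements from a real session.

```python
# Toy context-budget audit using the article's example numbers.
CONTEXT_WINDOW = 200_000  # tokens available per session

# Assumed per-category costs (illustrative, not measured):
budget = {
    "mcp_tool_definitions": 5 * 5_000,  # 5 MCP servers at ~5K tokens each
    "claude_md": 3_000,                 # hypothetical memory-file cost
    "skill_descriptions": 1_000,        # names/descriptions only at start
}

for category, tokens in budget.items():
    pct = 100 * tokens / CONTEXT_WINDOW
    print(f"{category:24s} {tokens:>7,} tokens ({pct:.1f}%)")

remaining = CONTEXT_WINDOW - sum(budget.values())
print(f"{'remaining for work':24s} {remaining:>7,} tokens")
```

The point of the exercise is the first line of output: tool definitions alone eat 12.5% of the window before any work begins, which is what /context makes visible in a real session.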

The explore vs. execute separation is equally critical. Tw93 advocates using PlanMode for design work (read-only, no side effects) before switching to execution (actual file changes, commands, commits). This isolates the cost of exploring wrong directions. Combined with /clear to cut context pollution from failed paths and fork-session for parallel exploration, you get systematic control over Claude's autonomy.

For teams evaluating Claude Code against Cursor or GitHub Copilot, this layered architecture is the differentiator. Neither competitor offers file-based, version-controlled configuration at this granularity.

Technical Deep-Dive

Context Loading Timing

Not all context loads at once. Understanding when each piece enters the context window is essential for optimization:

  • Session start (always loaded): CLAUDE.md content, MCP server tool definitions, Skill descriptions (names only)
  • On use (loaded when invoked): Full Skill content, file reads, search results
  • Isolated (zero context cost): Subagents (fresh context), Hooks (run externally)

Skills with disableModelInvocation: true load nothing until explicitly called — a powerful optimization for projects with many specialized skills.

Tool Design Quality

Tw93 highlights that tool design directly impacts Claude's decision quality. A well-designed MCP tool has a descriptive name (jira_issue_get, not query), specific parameters (issue_key, not id), returns only decision-relevant information, and includes corrective suggestions in error messages. A badly designed tool returns raw JSON dumps full of UUIDs, has ambiguous parameter names, and bundles multiple actions with opaque side effects.
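Those principles can be sketched as a plain Python function. The Jira fields, the `_fetch_issue` helper, and the error text are hypothetical illustrations of the pattern, not a real MCP server implementation.

```python
# Sketch of the tool-design principles above: descriptive name, specific
# parameter, corrective error message, and a trimmed decision-relevant return.

def jira_issue_get(issue_key: str) -> dict:
    """Fetch one Jira issue by key (e.g. "PROJ-123")."""
    if "-" not in issue_key:
        # Corrective error message: tell the caller how to fix the call,
        # instead of a bare failure.
        raise ValueError(
            f"issue_key {issue_key!r} is malformed; expected the form "
            "'PROJ-123' (project key, hyphen, issue number)."
        )
    issue = _fetch_issue(issue_key)  # stand-in for the real API call
    # Return only decision-relevant fields, not a raw JSON dump of UUIDs.
    return {
        "key": issue["key"],
        "status": issue["status"],
        "summary": issue["summary"],
        "assignee": issue["assignee"],
    }

def _fetch_issue(issue_key: str) -> dict:
    # Placeholder so the sketch runs; a real tool would call Jira's REST API.
    return {
        "key": issue_key,
        "status": "In Progress",
        "summary": "Example issue",
        "assignee": "dev@example.com",
        "internal_uuid": "f3a9-example",  # noise the wrapper deliberately drops
    }
```

The same shape applies whether the tool is exposed via an MCP server or any other function-calling interface: the smaller and more self-describing the surface, the better the model's decisions about when and how to call it.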

Hooks as Shift-Left Validation

The most immediately actionable layer is Hooks — deterministic scripts that fire on lifecycle events like PreToolUse, PostToolUse, and UserPromptSubmit. The shift-left value proposition is concrete: without hooks, errors accumulate across edits and surface late during compilation. With hooks, each edit gets validated immediately at zero latency.

Tw93 estimates that across 100 edits in a session, hooks save 1-2 hours of backtracking. More importantly, they enforce rules that the model might occasionally forget — formatting standards, forbidden patterns, required test runs — without consuming any context tokens.
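A minimal PostToolUse hook might look like the sketch below. The stdin payload shape (`tool_input.file_path`) and the exit-code convention are assumptions about the hook interface — check your Claude Code version's hook documentation before relying on them — and `ruff` is an assumed linter you would swap for your own.

```python
#!/usr/bin/env python3
# Sketch of a PostToolUse hook that lints an edited Python file.
# Payload fields and exit-code semantics are assumptions, not confirmed API.
import json
import subprocess
import sys

def lint_edited_file(payload: dict) -> int:
    """Return 0 if the edited file passes the linter, 2 to flag a problem."""
    file_path = payload.get("tool_input", {}).get("file_path", "")
    if not file_path.endswith(".py"):
        return 0  # only lint Python files; pass everything else through
    result = subprocess.run(
        ["ruff", "check", file_path],  # swap in your project's linter
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        print(result.stdout, file=sys.stderr)  # surface lint errors immediately
        return 2
    return 0

if __name__ == "__main__":
    sys.exit(lint_edited_file(json.load(sys.stdin)))
```

Because the script runs outside the model entirely, it enforces the rule deterministically and costs zero context tokens — the shift-left property described above.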

From Todos to Tasks

As model capabilities improve, task orchestration evolves from linear todo lists (single agent, sequential execution) to task DAGs (multiple agents, parallel execution with dependency tracking). Subagents enable this by providing isolated contexts and constrained tool access per task, preventing the context pollution that makes single-agent multitasking unreliable.
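The todo-list-to-DAG shift can be sketched with the standard library. The task names are hypothetical; the point is that each "ready" batch is a set of tasks with no remaining dependencies, which could fan out to parallel subagents instead of running sequentially.

```python
# Minimal task DAG with dependency tracking, grouped into parallelizable
# batches via the standard library's topological sorter (Python 3.9+).
from graphlib import TopologicalSorter

# Hypothetical coding tasks mapped to their prerequisites.
dag = {
    "design_api": set(),
    "write_tests": {"design_api"},
    "implement": {"design_api"},
    "integrate": {"write_tests", "implement"},
}

ts = TopologicalSorter(dag)
ts.prepare()
batches = []
while ts.is_active():
    ready = sorted(ts.get_ready())  # all deps satisfied: safe to run in parallel
    batches.append(ready)           # each batch could fan out to subagents
    ts.done(*ready)

print(batches)
```

Here "write_tests" and "implement" land in the same batch: a linear todo list would serialize them, while a DAG-aware orchestrator can hand each to an isolated subagent.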

What You Should Do

  1. Audit your context budget. Run /context in your next Claude Code session. If MCP tool definitions exceed 15% of your window, consolidate or remove low-value servers.
  2. Separate explore from execute. Use PlanMode for design tasks. Use /clear aggressively when changing direction. Don't let failed exploration pollute your execution context.
  3. Add one Hook this week. Start with a PostToolUse hook on file writes that runs your linter. The immediate feedback loop compounds fast.
  4. Structure your CLAUDE.md as a contract, not a knowledge base. Include commands, boundaries, and prohibitions — not team documentation. Move path-specific rules to .claude/rules/.
  5. Design MCP tools like APIs: single-purpose, clear names, controlled output size. Every token of tool definition that loads at session start is budget you can't use elsewhere.

Related: Today's newsletter covers more AI developer tooling updates. See also: Claude Code Skills System Guide.


Found this useful? Subscribe to AI News for daily AI briefings.