
Claude Code Hooks: A Complete Guide to Automating Your AI Coding Workflow

Master Claude Code hooks to add deterministic automation to your AI coding workflow. Lifecycle events, configuration, and practical patterns.



Claude Code hooks solve the single biggest reliability problem in AI-assisted development: you can't trust a language model to follow rules every time. Anthropic introduced hooks as a deterministic automation layer for Claude Code, their terminal-based AI coding agent. Instead of hoping the model remembers your linting rules or security policies, hooks guarantee that specific scripts execute at precise points in the agent's lifecycle — before a file is written, after a command runs, or when a tool is invoked. This shifts critical guardrails from probabilistic prompt-following to system-enforced execution.

Why Hooks Exist: The Determinism Problem in AI Coding

Every developer who has used an AI coding agent long enough has hit the same wall. You tell the agent to run tests before committing. It does — sometimes. You add instructions to your project config telling it never to modify production configs. It complies — usually. According to research into early AI coding adoption, practitioners encountered what some termed a "vibe coding paradox": productivity gains from AI agents collapsed as codebases scaled, because the non-deterministic nature of LLMs meant guardrails were only suggestions, not guarantees.

The root cause is architectural. Large language models are probabilistic systems. No matter how clearly you write a CLAUDE.md instruction, the model might deprioritize it when context windows fill up, when competing instructions conflict, or when the task complexity increases. Prompting is influence, not control.

Hooks address this by moving critical automation out of the model's decision-making loop entirely. They operate as interceptive middleware — the software environment triggers execution based on lifecycle events, overriding the AI's autonomous decisions rather than waiting for it to request permission. This is a fundamentally different pattern from the model-driven function calling used by OpenAI and Google Gemini, where the AI decides when to invoke a tool.

How Claude Code Hooks Work: The Lifecycle Event Model

Claude Code hooks are configured in your settings.json file and bind shell commands to specific lifecycle events. When the agent reaches one of these events, the hook fires unconditionally — the model has no say in whether it runs.

The core lifecycle events are:

  • PreToolUse — Fires before Claude Code executes any tool (file write, shell command, etc.). Use this to validate, block, or modify what the agent is about to do.
  • PostToolUse — Fires after a tool completes. Use this for logging, cleanup, or triggering downstream automation.
  • Notification — Fires when the agent surfaces a notification to the user. Useful for routing alerts to external systems.
  • Stop — Fires when the agent completes a task or stops execution.

Each hook receives a JSON payload on stdin describing the event context — which tool is being called, what arguments it received, and the current session state. The hook script can inspect this payload, perform any action, and return a JSON response that modifies the agent's behavior.
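The exact payload varies by event, but a PreToolUse payload for a file write looks roughly like this (an illustrative sketch — field values are made up, and the shape follows the `tool_name`/`tool_input` convention used in the scripts below rather than an exhaustive schema):

```json
{
  "session_id": "abc123",
  "hook_event_name": "PreToolUse",
  "tool_name": "Write",
  "tool_input": {
    "file_path": "src/index.js",
    "content": "console.log('hello');"
  }
}
```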

A Basic Hook Configuration

{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Write",
        "hooks": [
          { "type": "command", "command": "node ./scripts/validate-write.js" }
        ]
      }
    ],
    "PostToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          { "type": "command", "command": "node ./scripts/log-commands.js" }
        ]
      }
    ]
  }
}

This configuration does two things deterministically: before any file write, it runs a validation script. After any shell command, it logs what happened. The matcher field filters which tool triggers the hook — without it, the hook fires for every tool invocation.

Practical Hook Patterns for Real Workflows

The value of hooks becomes clear when you map them to the problems AI coding agents actually create in practice. Here are the patterns that matter most.

Pattern 1: Blocking Dangerous File Modifications

The most common fear with agentic coding tools is that the agent will modify files it shouldn't — production configs, environment files, migration scripts. A PreToolUse hook on the Write tool can inspect the target file path and reject the operation before it happens:

// validate-write.js
const input = JSON.parse(require('fs').readFileSync('/dev/stdin', 'utf8'));
const blockedPaths = ['.env', 'docker-compose.prod.yml', 'migrations/'];
const targetPath = input.tool_input?.file_path || '';

if (blockedPaths.some(p => targetPath.includes(p))) {
  console.log(JSON.stringify({
    decision: "block",
    reason: `Protected file: ${targetPath}`
  }));
} else {
  console.log(JSON.stringify({ decision: "approve" }));
}

The agent sees the block reason and adapts its approach. This is categorically different from a CLAUDE.md instruction saying "don't edit .env files" — the hook makes it physically impossible, not merely discouraged.
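Because a hook script is just a program reading stdin, the decision logic can be factored into a pure function and unit-tested outside Claude Code entirely. A sketch — the function name and default list are illustrative, mirroring validate-write.js above, with the stdin/stdout wiring left as in that script:

```javascript
// Pure decision logic for a PreToolUse hook on the Write tool.
// blockedPaths mirrors the list in validate-write.js; adjust per project.
function decideWrite(event, blockedPaths = ['.env', 'docker-compose.prod.yml', 'migrations/']) {
  const targetPath = (event.tool_input && event.tool_input.file_path) || '';
  if (blockedPaths.some(p => targetPath.includes(p))) {
    return { decision: 'block', reason: `Protected file: ${targetPath}` };
  }
  return { decision: 'approve' };
}
```

Keeping the policy pure means you can pin it down with ordinary tests before trusting it to guard an agent session.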

Pattern 2: Auto-Formatting on Every File Write

Instead of hoping the agent outputs correctly formatted code, a PostToolUse hook can run your formatter after every write. The file path comes from the JSON payload on stdin, extracted here with jq:

{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Write",
        "hooks": [
          {
            "type": "command",
            "command": "FILE=$(jq -r '.tool_input.file_path // empty'); [ -n \"$FILE\" ] && prettier --write \"$FILE\" 2>/dev/null; exit 0"
          }
        ]
      }
    ]
  }
}

This guarantees consistent formatting regardless of what the model produces. The formatter runs on the actual file, not on a suggestion.

Pattern 3: Mandatory Test Execution Before Commits

A PreToolUse hook on the Bash tool can intercept git commit commands and gate them on test passage:

const input = JSON.parse(require('fs').readFileSync('/dev/stdin', 'utf8'));
const command = input.tool_input?.command || '';

if (command.includes('git commit')) {
  const { execSync } = require('child_process');
  try {
    execSync('npm test', { stdio: 'pipe' });
    console.log(JSON.stringify({ decision: "approve" }));
  } catch {
    console.log(JSON.stringify({
      decision: "block",
      reason: "Tests failed. Fix before committing."
    }));
  }
} else {
  console.log(JSON.stringify({ decision: "approve" }));
}

Pattern 4: Audit Logging for Compliance

For teams that need to track what an AI agent did and why, a PostToolUse hook can write structured logs to a file or external service:

const input = JSON.parse(require('fs').readFileSync('/dev/stdin', 'utf8'));
const logEntry = {
  timestamp: new Date().toISOString(),
  tool: input.tool_name,
  arguments: input.tool_input,
  session: input.session_id
};
require('fs').appendFileSync('./agent-audit.jsonl',
  JSON.stringify(logEntry) + '\n');

This creates a deterministic audit trail — every tool invocation logged, no exceptions. As covered in our analysis of agent harnesses in 2026, this kind of observability is becoming non-negotiable for production AI agent deployments.

HTTP and Conditional Hooks: The 2026 Expansion

Early 2026 brought significant expansions to the hooks system. HTTP hooks allow hooks to call external services — posting to Slack when the agent completes a task, triggering CI pipelines, or consulting an external policy service before approving a file write. Conditional hooks added logic to the matcher system, enabling hooks to fire based on file patterns, argument content, or session metadata rather than just tool name.
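Even without a dedicated HTTP hook type, a plain command hook can reach external services itself. A sketch of a Stop hook script that posts a completion message to Slack — SLACK_WEBHOOK_URL is an assumed environment variable you would set yourself, not a Claude Code built-in, and the global fetch assumes Node 18+:

```javascript
// stop-notify.js — a Stop hook that posts a completion message to Slack.
// SLACK_WEBHOOK_URL is a placeholder environment variable; point it at
// your incoming webhook. Requires Node 18+ for the global fetch.
async function notify(text, url = process.env.SLACK_WEBHOOK_URL) {
  if (!url) return false;   // no webhook configured: quietly do nothing
  await fetch(url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ text })
  });
  return true;
}

notify(`Claude Code task finished in ${process.cwd()}`);
```

Bound to the Stop event, this fires once per completed task, so notifications arrive without the model having to remember to send them.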

These additions moved hooks from simple script triggers to a full programmable layer in Claude Code's architecture. Combined with skills, MCP servers, and agent teams, hooks form part of what is effectively a complete extension stack for the platform.

Hooks vs. CLAUDE.md Instructions: When to Use Which

A common question is when to put rules in CLAUDE.md (soft instructions the model follows) versus hooks (hard automation the system enforces). The decision rule is straightforward:

Use CLAUDE.md when you want to influence the model's reasoning — coding style preferences, architectural patterns, naming conventions. These are guidelines where the model needs flexibility to interpret context.

Use hooks when the rule must execute every single time without exception — security boundaries, formatting, test gates, audit logging, deployment checks. If violating the rule once causes real damage, it belongs in a hook.

Use both together when you want the model to understand why a constraint exists (CLAUDE.md) while the system enforces that it's never violated (hooks). For example, your CLAUDE.md might explain your team's security policy around environment files, while a PreToolUse hook physically blocks writes to .env.

This layered approach — soft guidance plus hard enforcement — is what makes Claude Code hooks effective at scale.

The March 2026 Source Leak: What It Revealed About Hooks Architecture

On March 31, 2026, Claude Code's source maps (cli.js.map) were accidentally published to the public npm registry. The leak revealed the internal complexity of the hooks system, including multi-agent coordination pathways, advanced memory consolidation engines, and internal features like an "Undercover Mode." While Anthropic addressed the leak quickly, it confirmed that hooks operate at a deep architectural level — they're not a superficial wrapper but an integral part of Claude Code's execution pipeline.

For the developer community, the leak validated that the hooks system was designed for production-grade automation, not just convenience scripts. The internal architecture showed hooks intercepting at the same level as Claude Code's own safety systems.

Getting Started: A Decision Framework

If you're adopting hooks for the first time, start with these three in order:

  1. File protection hook (PreToolUse on Write) — Block modifications to sensitive files. This is the highest-impact, lowest-effort hook and prevents the most common AI agent mishap.

  2. Audit logging hook (PostToolUse on all tools) — Log every action the agent takes. Even if you don't need compliance logs now, having an audit trail makes debugging agent behavior dramatically easier.

  3. Pre-commit gate (PreToolUse on Bash matching git commit) — Enforce your quality gates (tests, linting, type checking) before the agent can commit. This is the hook equivalent of a CI pipeline running locally.

Once these three are stable, expand to HTTP hooks for team notifications, conditional hooks for file-pattern-specific rules, and Stop hooks for end-of-task automation.

Frequently Asked Questions

Where are Claude Code hooks configured?

Hooks are defined in your settings.json file, either at the project level (.claude/settings.json) or user level (~/.claude/settings.json). Project-level hooks apply to anyone working in that repository. User-level hooks apply across all your projects.

Can hooks modify what Claude Code is about to do, or only block it?

PreToolUse hooks can block an action by returning a "block" decision, or approve it. They can also return modified parameters, effectively rewriting the tool's input before it executes. PostToolUse hooks can trigger follow-up actions but cannot undo what already happened.

Do hooks slow down Claude Code?

Hooks add latency proportional to their execution time. Simple validation scripts (file path checks, JSON inspection) add negligible overhead — typically under 50ms. Hooks that run test suites or call external APIs will add noticeable delay. Keep hot-path hooks fast and reserve expensive operations for less frequent lifecycle events.

What happens if a hook script crashes?

A hook that exits with a non-zero status or produces invalid JSON is treated as a failure. By default, Claude Code treats hook failures conservatively — a crashing PreToolUse hook blocks the action rather than silently approving it. This fail-closed behavior is intentional for security-critical hooks.

Can I use hooks with Claude Code agent teams?

Yes. Hooks fire for the main agent and any sub-agents spawned during a session. This means your security boundaries and audit logging apply uniformly across parallel agent execution — a critical property for teams using multi-agent workflows.


Want more AI insights? Subscribe to LoreAI for daily briefings.