Claude Code Hooks: The Deterministic Layer That Makes AI Coding Reliable
Claude Code hooks enforce security, formatting, and logging automatically — adding deterministic control to AI-driven workflows. Here's how to use them.
Claude Code hooks solve the core reliability problem with AI-assisted development: language models are non-deterministic. They generate excellent code but forget to run your linter, skip your test suite, or occasionally execute something destructive. Hooks are lifecycle triggers that fire at specific points during Claude Code's execution — before a tool call, after a file write, on session end — running predefined scripts or prompts regardless of what the model decides to do. They turn "usually follows conventions" into "always follows conventions."
Anthropic introduced hooks as Claude Code evolved from a smart autocomplete into a full autonomous agent. As agentic workflows get longer and more complex, the need for deterministic guardrails grows proportionally.
What Claude Code Hooks Actually Are
A hook is a shell command or prompt that Claude Code executes at a defined lifecycle event. You configure them in settings.json — user-level (~/.claude/settings.json) or project-level (.claude/settings.json) — alongside any [CLAUDE.md](/blog/claude-code-memory) project instructions. When the trigger fires, the hook runs — no model judgment involved, no risk of it being skipped.
The core lifecycle events include:
- PreToolUse: Runs before any tool execution — ideal for safety checks, logging, or blocking dangerous commands
- PostToolUse: Runs after a tool completes — useful for formatting, validation, or state updates
- PostToolUse matched on Write/Edit: Triggers after any file is written — run your linter here
- SessionEnd: Fires when the session closes — update your task tracker, send a notification, commit a log
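When an event fires, Claude Code passes the hook a JSON payload on stdin describing what triggered it. The exact fields vary by event; for a tool event, the payload looks roughly like this (values here are illustrative):

```json
{
  "session_id": "abc123",
  "cwd": "/path/to/project",
  "hook_event_name": "PreToolUse",
  "tool_name": "Bash",
  "tool_input": { "command": "npm test" }
}
```

Hook scripts typically parse this with jq and act on tool_name and tool_input.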
Anthropic has iterated quickly on this system. Since mid-2025, the feature set has expanded to include Async Hooks (non-blocking execution for logging and notifications) and Subagents (spawning parallel agents from within a hook), making the system substantially more powerful than its initial release.
Why Hooks Are the Killer Feature
The standard critique of AI coding tools is that they're great for greenfield work but unreliable in production codebases with strict conventions. Hooks directly address this.
Without hooks, Claude Code might write a file but skip prettier. It might run a database migration without first checking for pending conflicts. It might delete a file that a hook would have flagged as protected. The model can be prompted to avoid these mistakes, but prompts are suggestions — hooks are guarantees.
Hooks are the difference between a capable assistant and a reliable one.
Boris Cherny, an Anthropic engineer closely associated with Claude Code's development, has noted that AI now writes a significant portion — reportedly up to 100% — of Claude Code's own repository. That level of AI authorship is only viable with robust oversight mechanisms. Hooks are part of that oversight layer.
Five Hook Patterns Worth Implementing
1. Pre-execution safety filter
Block destructive commands before they run. A PreToolUse hook that pattern-matches against rm -rf, DROP TABLE, or git push --force and exits with code 2 will stop the operation. Claude Code treats exit code 2 from a hook as a hard stop and feeds the hook's stderr back to the model.
```bash
#!/bin/bash
# hooks/check-dangerous.sh
# PreToolUse hooks receive the pending tool call as JSON on stdin.
command=$(jq -r '.tool_input.command // empty')
if echo "$command" | grep -qE 'rm -rf|DROP TABLE|--force'; then
  echo "Blocked: dangerous command pattern detected" >&2
  exit 2  # exit code 2 blocks the tool call
fi
```
2. Automatic linting on file write
Register your formatter on PostToolUse with a matcher for the Write and Edit tools: every file write triggers it. No exceptions, no "I forgot."
```bash
#!/bin/bash
# hooks/post-write-lint.sh
# PostToolUse hooks receive the completed tool call as JSON on stdin.
file_path=$(jq -r '.tool_input.file_path // empty')
if [[ "$file_path" == *.ts || "$file_path" == *.tsx ]]; then
  npx prettier --write "$file_path"
  npx eslint --fix "$file_path"
fi
```
3. Structured logging
A PostToolUse async hook that appends every tool invocation to a JSONL log gives you a full audit trail of what the agent did — invaluable for debugging long sessions.
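A minimal sketch of such a logging hook, assuming jq is installed. The payload variable stands in for the JSON a real hook reads from stdin, and the log path is illustrative:

```shell
#!/bin/bash
# hooks/log-tool-call.sh (illustrative path)
# A real hook reads the event JSON from stdin; a sample payload stands in here.
payload='{"tool_name":"Bash","tool_input":{"command":"ls -la"}}'
log_file="/tmp/claude-tool-log.jsonl"

# Append one compact JSON line per tool invocation.
echo "$payload" | jq -c --arg ts "$(date -u +%FT%TZ)" \
  '{ts: $ts, tool: .tool_name, input: .tool_input}' >> "$log_file"
```

Because the log is JSONL, you can replay or filter a session afterward with jq or any log tooling.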
4. Test runner enforcement
A PostToolUse hook on file writes that detects changes to *.test.ts files and immediately runs the test suite catches regressions before they accumulate across a multi-step session.
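A sketch of the detection logic, with a sample payload in place of the stdin JSON and the actual test command left as a comment, since your runner will differ:

```shell
#!/bin/bash
# hooks/run-tests-on-change.sh (illustrative path)
# A real hook reads the event JSON from stdin; a sample payload stands in here.
payload='{"tool_name":"Write","tool_input":{"file_path":"src/user.test.ts"}}'
file_path=$(echo "$payload" | jq -r '.tool_input.file_path // empty')

case "$file_path" in
  *.test.ts|*.test.tsx)
    echo "test file changed: $file_path"
    # npx vitest run   # or: npx jest --findRelatedTests "$file_path"
    ;;
esac
```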
5. Session summary on close
A SessionEnd hook that calls a summarization script and appends the output to your project notes creates automatic documentation of what changed and why.
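A sketch of the append step, with an illustrative notes path and a placeholder where the summarization call would go:

```shell
#!/bin/bash
# hooks/session-summary.sh (illustrative path)
# Append a timestamped entry to project notes when the session ends.
notes_file="/tmp/project-notes.md"
{
  echo "## Session ended $(date -u +%FT%TZ)"
  echo "(summary placeholder)"
} >> "$notes_file"
# A real hook might pipe the session transcript through a summarization
# script and append that output instead of the placeholder.
```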
The Async Hooks and Subagent Expansion
The addition of Async Hooks changes the performance profile of hook-heavy setups. Before async support, every hook was blocking — a slow notification script would pause the entire agent. With async execution, logging, metrics, and notifications run in parallel without interrupting the primary workflow.
Subagent hooks go further: they let a hook spawn a parallel Claude Code instance to handle a specific task while the main agent continues. The pattern emerging in the community is using subagent hooks for cross-cutting concerns — security audits, documentation generation, dependency checks — that should happen alongside the main task rather than blocking it.
This positions agentic coding workflows closer to traditional CI/CD pipelines, where checks run in parallel and gates block only when something fails.
Community Reception: Enthusiastic but Calibrated
The developer community response has split into two camps. Engineers focused on production reliability are enthusiastic — hooks solve real problems with AI code review, security, and consistency. The second camp raises a legitimate concern: when AI writes most of the code and hooks enforce the standards, human engineers risk losing the review bandwidth needed to catch the cases hooks don't cover.
This tension is real. A PreToolUse safety hook blocks known-dangerous patterns, but it can't catch novel attack vectors. Auto-formatting enforces style, but it can't enforce architecture. Hooks are a floor, not a ceiling.
The practical takeaway from community discussion: start with the highest-value hooks (safety checks, linting), validate they're working as expected, then expand. Don't treat hooks as a substitute for review — treat them as automation for the parts of review that are purely mechanical.
Configuration Approach
Hooks can be configured at two levels:
User-level (~/.claude/settings.json): Applies across all projects. Good for safety filters and personal logging preferences.
Project-level (.claude/settings.json in the repo root): Applies to everyone working in the repo. Good for linting, test enforcement, and project-specific guardrails. (CLAUDE.md carries prompt-level instructions; hooks live in the settings files.)
The project-level configuration is the more powerful pattern for teams — it means the hooks travel with the repo and apply consistently regardless of who's running Claude Code or how they've configured their personal settings.
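Wiring the two example scripts from above into a project-level .claude/settings.json looks roughly like this; the paths are illustrative, and the schema follows Anthropic's hooks documentation:

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          { "type": "command", "command": "$CLAUDE_PROJECT_DIR/hooks/check-dangerous.sh" }
        ]
      }
    ],
    "PostToolUse": [
      {
        "matcher": "Write|Edit",
        "hooks": [
          { "type": "command", "command": "$CLAUDE_PROJECT_DIR/hooks/post-write-lint.sh" }
        ]
      }
    ]
  }
}
```

The matcher field scopes each hook to specific tools, so the linter only fires on file-writing tools rather than on every tool call.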
What's Next for Hooks
Anthropic's trajectory with hooks follows the broader shift toward what engineers are calling "agentic engineering" — system-driven orchestration rather than model-driven chat. The hooks system is becoming the coordination layer between Claude Code and the rest of your toolchain: CI systems, monitoring, deployment pipelines.
The introduction of subagents suggests hooks will eventually support full multi-agent coordination patterns, where a primary agent delegates to specialists via hooks rather than handling everything in a single context window. For large codebases, this is a meaningful architectural shift.
For a broader look at where this fits in the AI regulation and governance conversation, see our coverage of AI regulation and what deterministic controls mean for AI in production.
Practical Starting Point
If you're new to Claude Code hooks, the highest-ROI starting point is a two-hook setup:
- A PreToolUse safety filter blocking destructive command patterns
- A PostToolUse linter running on every file write
These two hooks eliminate the most common failure modes — dangerous commands slipping through and style drift accumulating across long sessions — with minimal configuration overhead. Add the async logging hook once you want visibility into session behavior. Everything else builds from there.
The FAQ on Claude Code hooks covers common configuration questions from the community if you run into setup issues.
Want more AI insights? Subscribe to LoreAI for daily briefings.