Building a Claude Code Hook System for Automated Code Review
What You’ll Learn
- What Claude Code hooks are actually good for
- How to build a lightweight automated review layer around file edits
- Which hook events I would use first on a real project
- How to block unsafe behavior and trigger deterministic checks
- Why hooks are better than “please remember to do X” prompts
If you use Claude Code seriously, you eventually run into the same limitation: prompts are not guarantees.
You can tell the assistant to run formatting, avoid dangerous commands, or check changed files before stopping. Often it will. Sometimes it will not. That is not because the model is bad. It is because natural-language instructions are still advisory unless you wrap them in something deterministic.
That is exactly where hooks become useful.
Claude Code hooks let you run shell commands automatically at specific points in the tool lifecycle. The important detail is not that they are customizable. It is that they are deterministic.
If you want review-like behavior to always happen after edits, or you want specific commands blocked before they run, hooks are the right mechanism.
What Hooks Are Good At
From the official docs, hooks can run at points like PreToolUse, PostToolUse, Stop, Notification, SessionStart, and more. That gives you enough control to add a small layer of engineering discipline around the agent.
The kinds of review automation I care about most are:
- run a formatter after file edits
- run a linter after relevant files change
- block obviously dangerous commands
- remind the session about review or testing expectations
- collect an audit trail of edits
That last point matters more than people expect. Once the agent becomes productive, you usually want a record of what happened.
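A minimal audit hook is only a few lines. The sketch below assumes the documented hook payload shape (JSON on stdin with `tool_name` and `tool_input`); the log path is my own choice, not a convention:

```shell
#!/bin/bash
# Sketch of a PostToolUse audit hook. Claude Code sends the hook payload
# as JSON on stdin; the logic lives in a function so it is easy to test.

log_edit() {
  local payload="$1"
  local log_file=".claude/edit-audit.log"
  mkdir -p "$(dirname "$log_file")"
  # One timestamped line per tool call; keeping the raw payload makes the
  # trail simple and machine-readable.
  printf '%s %s\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$payload" >> "$log_file"
}

# The real hook would call: log_edit "$(cat)"
# Demo invocation with a sample payload:
log_edit '{"tool_name":"Edit","tool_input":{"file_path":"src/app.ts"}}'
```

Registered as a `PostToolUse` command on `Edit|Write`, this gives you a grep-able history of every file the agent touched.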
My First Hook: Format After Edits
The fastest useful hook is usually a PostToolUse hook on Write and Edit.
At project scope, that can live in .claude/settings.json:
```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "npm run format"
          }
        ]
      }
    ]
  }
}
```
This is intentionally boring. That is why it works.
If a project already has a formatter, the hook ensures formatting is enforced whether or not the assistant remembered to run it manually.
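On a large repo, `npm run format` across the whole tree can get slow. A narrower variant formats only the edited file. This is a sketch, assuming the PostToolUse payload carries `.tool_input.file_path` (per the documented payload shape) and that Prettier is the project formatter; swap in your own tool:

```shell
#!/bin/bash
# Hook sketch: format only the file that was just edited, rather than
# running the formatter across the entire repository.

format_edited_file() {
  local payload="$1"
  local file
  file=$(echo "$payload" | jq -r '.tool_input.file_path // empty')
  # No file path, or the file no longer exists: succeed quietly so the
  # hook never blocks unrelated edits.
  [ -n "$file" ] && [ -f "$file" ] || return 0
  npx prettier --write "$file"
}

# The real hook reads the payload from stdin:
# format_edited_file "$(cat)"
```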
My Second Hook: Block Unsafe Bash Before It Runs
Claude Code’s hooks reference includes PreToolUse, which is exactly where I would put command guardrails.
For example, a small hook script can reject obviously destructive shell commands:
```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          {
            "type": "command",
            "command": ".claude/hooks/block-dangerous-bash.sh"
          }
        ]
      }
    ]
  }
}
```
And the hook script:
```shell
#!/bin/bash
# PreToolUse guard: the hook payload arrives as JSON on stdin, and
# .tool_input.command holds the shell command Claude is about to run.
COMMAND=$(jq -r '.tool_input.command // empty')
if echo "$COMMAND" | grep -Eq 'rm -rf|git reset --hard|del /f /s /q'; then
  # Emitting {"decision": "block"} on stdout tells Claude Code to reject
  # the tool call and surface the reason back to the model.
  jq -n '{
    decision: "block",
    reason: "Blocked dangerous shell command by project hook."
  }'
  exit 0
fi
exit 0
```
The exact matching logic will vary by team. The point is that a real policy is more reliable than a polite instruction buried in a system prompt.
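Before trusting a guard like this, feed it a few sample payloads. The snippet below recreates the same `jq` + `grep` logic inline so it can be run standalone, outside the hook:

```shell
#!/bin/bash
# Sanity-check the guard pattern against sample payloads before wiring
# it into a hook.
PATTERN='rm -rf|git reset --hard|del /f /s /q'

is_blocked() {
  # $1: a sample hook payload in the same JSON shape Claude Code sends
  echo "$1" | jq -r '.tool_input.command' | grep -Eq "$PATTERN"
}

is_blocked '{"tool_input":{"command":"rm -rf /tmp/scratch"}}' && echo "blocked"
is_blocked '{"tool_input":{"command":"ls -la"}}' || echo "allowed"
```

A couple of minutes spent here catches both false negatives (patterns you forgot) and false positives (ordinary commands the regex accidentally matches).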
My Third Hook: Run a Targeted Review Check on Stop
The Stop event is useful for end-of-turn review automation.
I would not use it for a huge test suite on every turn. I would use it for a focused check that is cheap and valuable.
Examples:
- run `npm run typecheck`
- run `eslint` on changed files
- generate a short review note
- check that no secret-like files were written
The practical pattern is to keep the hook fast enough that people do not hate it.
For example:
```json
{
  "hooks": {
    "Stop": [
      {
        "matcher": "",
        "hooks": [
          {
            "type": "command",
            "command": "npm run typecheck"
          }
        ]
      }
    ]
  }
}
```
If the project is large, I would replace this with a narrower script that checks only the relevant subset of files.
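As a sketch of that narrower script, the functions below lint only TypeScript files changed in the working tree. The git-based file selection and the `eslint` invocation are assumptions; adjust both for your stack:

```shell
#!/bin/bash
# Narrower Stop-hook sketch: lint only the TypeScript files that changed,
# so the end-of-turn check stays cheap on a large repository.

changed_ts_files() {
  # List working-tree changes against HEAD; empty outside a git repo.
  git diff --name-only HEAD 2>/dev/null | grep -E '\.tsx?$' || true
}

lint_changed() {
  local files
  files=$(changed_ts_files)
  # Nothing relevant changed: succeed quickly.
  [ -n "$files" ] || return 0
  # Word splitting over the file list is intended here.
  npx eslint $files
}

# A Stop hook would simply run: lint_changed
```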
Hooks Are Better Than Review Prompts for Deterministic Rules
There is a useful line to draw here.
Use prompts for judgment. Use hooks for rules.
Examples of judgment:
- is this refactor worth it?
- is the naming consistent with the domain?
- should we split this function?
Examples of rules:
- always run formatter after edits
- never allow destructive shell patterns
- always typecheck before ending a turn
- always notify when a session needs input
The mistake is trying to force rules through natural-language reminders when the platform already gives you lifecycle hooks.
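As one concrete example, the notification rule maps directly onto the `Notification` event. A minimal sketch, assuming a Linux desktop where `notify-send` is available (macOS users would swap in an `osascript` one-liner):

```json
{
  "hooks": {
    "Notification": [
      {
        "matcher": "",
        "hooks": [
          {
            "type": "command",
            "command": "notify-send 'Claude Code' 'Session needs input'"
          }
        ]
      }
    ]
  }
}
```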
My Preferred Hook Stack on a Real Project
If I were setting up a client repo today, I would start small:
- `PostToolUse` on `Edit|Write` for formatting
- `PreToolUse` on `Bash` for safety blocking
- `Stop` for a lightweight validation command
That is already enough to make the assistant feel more production-ready.
Later, I might add:
- notifications when long tasks finish
- session-start context injection
- audit logging for changed files
- MCP tool hook checks for higher-risk workflows
But I would earn those additions. The best hook systems start with a few high-value guarantees, not a giant automation labyrinth.
Final Thought
Claude Code gets much more useful once you stop expecting prompts alone to enforce your engineering rules.
Hooks are the missing layer between “the assistant usually remembers” and “this behavior actually happens every time.” For automated review, that difference matters a lot.
If you need help building Claude Code workflows, hook-based guardrails, or AI-assisted engineering systems that stay predictable, take a look at my portfolio: voidcraft-site.vercel.app.