10 Deterministic Hooks That Catch What Your LLM Misses — A Claude Code Pattern
Let's talk about the elephant in the room. You're using Claude Code, or maybe Cursor, or any of these AI coding assistants. They're brilliant. They're creative. They're also... probabilistic. They usually follow your instructions. They often remember your project conventions. They sometimes catch security issues.
That "usually, often, sometimes" is what keeps me up at night when I'm coaching engineering teams. It's what caused a junior dev on one of my projects to almost commit a .env file with production keys last month. Claude Code was told not to. It just... forgot.
This isn't a bug. It's the fundamental nature of LLMs. They're suggestion engines, not rule engines.
That's where hooks come in. Not as a nice-to-have, but as your deterministic control layer. While your LLM explores the solution space creatively, hooks enforce the non-negotiable constraints. They're what turns "vibe coding at scale" into reproducible, auditable, governable agent behavior.
I've been running these patterns in production across five different projects for the past three months. What started as safety rails has become the backbone of our AI-assisted workflow. Here are the ten hooks that catch what Claude misses every single time.
The Architecture of Control
Before we dive into code, understand this mental model: hooks operate outside the LLM's reasoning chain. They're shell commands, LLM prompts, or subagents that execute automatically at specific points in Claude Code's lifecycle.
The official docs list a range of hook events. We're focusing on three critical ones:
- PreToolUse: runs before a tool executes (block dangerous commands)
- PostToolUse: runs after a tool executes (format, validate, transform)
- SessionStart: runs when a session begins (setup, checks)
The pattern is always the same: deterministic verification after probabilistic creation.
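Concretely, every hook in this post is a tiny program with the same contract: Claude Code pipes the event as JSON on stdin; exit code 0 allows the action, and exit code 2 blocks it, with stderr fed back to the model. A minimal Python skeleton (the `verdict` and `run` names are mine, not part of Claude Code):

```python
import json
import sys

def verdict(event: dict) -> tuple[int, str]:
    """Deterministic check on a hook event: returns (exit_code, reason).

    Exit 0 allows the tool call; exit 2 blocks it, and the reason
    printed to stderr is what Claude sees as feedback.
    """
    command = event.get("tool_input", {}).get("command", "")
    if "--force" in command:  # placeholder rule; real guards go here
        return 2, "Force flags are blocked by policy"
    return 0, ""

def run() -> None:
    """Entry point for a hook script: Claude Code pipes the event JSON on stdin."""
    code, reason = verdict(json.load(sys.stdin))
    if reason:
        print(reason, file=sys.stderr)
    sys.exit(code)
```

Wire `run()` up as the script's entry point and reference the script from your hook config, exactly like the command hooks that follow.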
Hook 1: The Force-Push Block
This one saved us from three potential disasters already. Claude Code, in its enthusiasm to help, sometimes tries to git push --force. Never. Ever.
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          {
            "type": "command",
            "command": "jq -r '.tool_input.command' | grep -q 'push --force\\|push -f' && { echo 'Force push blocked by hook' >&2; exit 2; } || exit 0"
          }
        ]
      }
    ]
  }
}
The beauty? It doesn't ask permission. It just blocks. Deterministic. (Exit code 2 is what tells Claude Code to block the tool call; the stderr message is fed back to the model.)
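If you need finer granularity than the grep one-liner (say, allowing the safer `git push --force-with-lease` while still blocking a bare `--force`), a short Python matcher is easier to extend. This is a sketch; `is_blocked_push` is my own helper:

```python
import re

# Block bare --force and -f, but allow --force-with-lease,
# which refuses to clobber commits you haven't fetched.
BLOCKED = re.compile(r"git\s+push\b.*(\s--force(?!-with-lease)\b|\s-f\b)")

def is_blocked_push(command: str) -> bool:
    """True if the shell command is a force push we want to reject."""
    return bool(BLOCKED.search(command))
```

Drop it into the same stdin-reading guard script pattern and exit 2 when it returns True.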
Hook 2: JWT Secret Strength Validator
Claude Code loves to help with authentication. It also loves to suggest secret123 as JWT secrets. This hook ensures HS256 secrets are actually strong.
bash-guard.py:
#!/usr/bin/env python3
import json
import re
import sys

data = json.load(sys.stdin)
command = data.get("tool_input", {}).get("command", "")

# Look for JWT secret generation or usage
if "HS256" in command or "jwt.encode" in command or "SECRET_KEY" in command:
    # Extract potential secrets (simplified pattern)
    secret_pattern = r"['\"]([^'\"]{3,})['\"]"
    for secret in re.findall(secret_pattern, command):
        if len(secret) < 32:
            print(f"JWT secret too weak: '{secret}' (min 32 chars)", file=sys.stderr)
            sys.exit(2)
        # Check if it's a common weak secret
        weak_secrets = ["secret", "password", "123456", "admin", "token"]
        if any(weak in secret.lower() for weak in weak_secrets):
            print(f"JWT secret too common: '{secret}'", file=sys.stderr)
            sys.exit(2)
sys.exit(0)
Hook config:
{
  "type": "command",
  "command": "python3 .claude/hooks/bash-guard.py"
}
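Length alone is a blunt instrument. A stricter variant of the same guard can estimate entropy from the character classes present. This is my own heuristic, not part of any JWT standard; it rejects anything estimated below 128 bits:

```python
import math
import string

def estimated_entropy_bits(secret: str) -> float:
    """Optimistic upper bound: pool_size ** length, expressed in bits."""
    pool = 0
    if any(c in string.ascii_lowercase for c in secret):
        pool += 26
    if any(c in string.ascii_uppercase for c in secret):
        pool += 26
    if any(c in string.digits for c in secret):
        pool += 10
    if any(c in string.punctuation for c in secret):
        pool += len(string.punctuation)
    return len(secret) * math.log2(pool) if pool else 0.0

def strong_enough(secret: str, min_bits: float = 128.0) -> bool:
    """Gate for the hook: block anything estimated under min_bits."""
    return estimated_entropy_bits(secret) >= min_bits
```

Note this is an upper bound: "aaaaaaaa..." padded to 32 chars still scores well, so keep the weak-word blocklist from the script above alongside it.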
Hook 3: MD5 Usage Interceptor
Yes, in 2026, I still see MD5 suggestions for password hashing. No, we don't allow it.
{
  "type": "command",
  "command": "jq -r '.tool_input.command' | grep -q -i 'md5\\|MessageDigest.getInstance(\"MD5\")' && { echo 'MD5 usage blocked - use bcrypt or Argon2' >&2; exit 2; } || exit 0"
}
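The hook's message points to bcrypt and Argon2, which are third-party packages. As a stdlib-only illustration of the memory-hard direction it's nudging people toward, Python's `hashlib.scrypt` works. A sketch with reasonable parameters, not a vetted password policy:

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Memory-hard password hash; returns (salt, digest)."""
    salt = os.urandom(16)
    # n=2**14, r=8, p=1 uses ~16 MiB of memory per hash.
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Constant-time comparison against a stored (salt, digest) pair."""
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)
```

The point of the hook is exactly this contrast: MD5 is fast by design, which is the opposite of what you want for passwords.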
Hook 4: Tkinter GUI File Guard
Specific to our project, but the pattern is universal. We have a legacy Tkinter GUI that should never be modified directly—only through our abstraction layer.
write-guard.py:
#!/usr/bin/env python3
import json
import sys

data = json.load(sys.stdin)
file_path = data.get("tool_input", {}).get("file_path", "")

if file_path.endswith(".py") and "gui/" in file_path:
    content = data.get("tool_input", {}).get("content", "")
    # Block direct tkinter imports in the GUI layer
    if "import tkinter" in content:
        print("Direct tkinter import blocked in GUI layer", file=sys.stderr)
        print("Use abstraction layer: from gui.core import WindowBuilder", file=sys.stderr)
        sys.exit(2)
    # Block direct widget creation
    if "tk." in content or "ttk." in content:
        print("Direct tkinter widget creation blocked", file=sys.stderr)
        print("Use WindowBuilder pattern", file=sys.stderr)
        sys.exit(2)
sys.exit(0)
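Substring checks like `"tk." in content` can false-positive on comments, string literals, or unrelated names. Since the guarded files are Python, parsing them with the stdlib `ast` module is more precise. A sketch (`uses_tkinter` is my own helper):

```python
import ast

def uses_tkinter(source: str) -> bool:
    """True if the module imports tkinter directly, at any nesting level."""
    try:
        tree = ast.parse(source)
    except SyntaxError:
        return False  # let a later tool report the syntax error
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            if any(alias.name.split(".")[0] == "tkinter" for alias in node.names):
                return True
        elif isinstance(node, ast.ImportFrom):
            if node.module and node.module.split(".")[0] == "tkinter":
                return True
    return False
```

Because it walks the real syntax tree, a mention of tkinter inside a docstring or comment no longer trips the guard.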
Hook 5: UUIDv7 Enforcer
For distributed systems, timestamp-ordered UUIDs matter. Claude Code defaults to v4. We enforce v7.
post-write-guard.py:
#!/usr/bin/env python3
import json
import re
import sys

data = json.load(sys.stdin)
file_path = data.get("tool_input", {}).get("file_path", "")
content = data.get("tool_input", {}).get("content", "")

if file_path.endswith((".py", ".js", ".ts", ".go")):
    # Look for UUID generation patterns
    patterns = [
        r"uuid\.v4\(\)",
        r"UUID\.randomUUID\(\)",
        r"new UUID\(\)",
        r"random_uuid",
        r"[Uu][Uu][Ii][Dd]4?\(\)",
    ]
    for pattern in patterns:
        if re.search(pattern, content):
            print("UUIDv4 generation detected. Use UUIDv7 for time-ordered IDs.", file=sys.stderr)
            print("Python: uuid.uuid7()", file=sys.stderr)
            print("Node: import { v7 as uuidv7 } from 'uuid'", file=sys.stderr)
            print("Go: github.com/google/uuid.NewV7()", file=sys.stderr)
            sys.exit(2)
sys.exit(0)
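One caveat: `uuid.uuid7()` only landed in the Python standard library in 3.14. On older interpreters you can construct a v7 UUID by hand from the RFC 9562 layout: a 48-bit Unix-millisecond timestamp, the version and variant bits, and 74 random bits. A sketch:

```python
import os
import time
import uuid

def uuid7() -> uuid.UUID:
    """UUIDv7 per RFC 9562, for Pythons older than 3.14."""
    ts_ms = time.time_ns() // 1_000_000        # 48-bit ms timestamp
    rand = int.from_bytes(os.urandom(10), "big")  # 80 random bits
    value = (ts_ms << 80) | rand                  # 128 bits total
    # Stamp the version field (0b0111 at bits 79..76).
    value &= ~(0xF << 76)
    value |= 0x7 << 76
    # Stamp the variant field (0b10 at bits 63..62).
    value &= ~(0x3 << 62)
    value |= 0x2 << 62
    return uuid.UUID(int=value)
```

Because the timestamp occupies the most significant bits, lexicographic order of these IDs tracks creation time, which is the whole point for database indexes.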
Hook 6: i18n String Parity Check
We maintain English and French versions. This hook ensures when Claude adds a new UI string, it flags the need for translation.
{
  "type": "command",
  "command": "jq -r '.tool_input.file_path, .tool_input.content' | grep -q -i 'error_message\\|success_message\\|label\\|button_text' && echo 'Remember to add French translation in locales/fr.json' >&2 || true"
}
It doesn't block—it reminds. Because some decisions require human judgment.
Hook 7: Conventional Commits Validator
Claude writes decent commit messages. But "conventional commits" require consistency that's hard for LLMs.
{
  "type": "command",
  "command": "cmd=$(jq -r '.tool_input.command'); if echo \"$cmd\" | grep -q 'git commit -m'; then echo \"$cmd\" | sed -n \"s/.*git commit -m '\\([^']*\\)'.*/\\1/p\" | grep -qE '^(feat|fix|docs|style|refactor|test|chore)(\\([a-z-]+\\))?: .{10,}' || { echo 'Commit message must follow conventional commits pattern' >&2; exit 2; }; fi"
}
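Shell pipelines like this are fragile around quoting. If it misfires on your commit style, the same rule is a few maintainable lines of Python (the regex mirrors the grep pattern):

```python
import re

CONVENTIONAL = re.compile(
    r"^(feat|fix|docs|style|refactor|test|chore)"  # type
    r"(\([a-z-]+\))?"                              # optional scope
    r": .{10,}"                                    # description, min 10 chars
)

def valid_commit_message(message: str) -> bool:
    """True if the message follows our conventional-commits subset."""
    return bool(CONVENTIONAL.match(message))
```

The 10-character minimum on the description is our team's choice; the conventional-commits spec itself only requires a non-empty description.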
Hook 8: API Version Header Enforcer
All our API calls need X-API-Version: 2026-04. Claude remembers 80% of the time. Hooks remember 100%.
{
  "type": "command",
  "command": "cmd=$(jq -r '.tool_input.command'); if echo \"$cmd\" | grep -q 'curl.*api\\.'; then echo \"$cmd\" | grep -q 'X-API-Version' || { echo 'API calls require X-API-Version header' >&2; exit 2; }; fi"
}
Hook 9: Database Migration Safety Check
Never apply migrations without a backup. Never.
{
  "type": "command",
  "command": "cmd=$(jq -r '.tool_input.command'); if echo \"$cmd\" | grep -q -i 'migrate\\|alter table\\|drop table'; then echo \"$cmd\" | grep -q -i 'backup\\|pg_dump\\|mysqldump' || { echo 'Database changes require a backup command in the same invocation' >&2; exit 2; }; fi"
}
Hook 10: Cost Guard for AI API Calls
This one's meta. When Claude Code uses our AI API wrapper, we want to prevent accidental $1000 prompts.
#!/usr/bin/env python3
import json
import re
import sys

data = json.load(sys.stdin)
command = data.get("tool_input", {}).get("command", "")

if "call_ai_api" in command or "openai.ChatCompletion.create" in command:
    # Extract the max_tokens parameter
    token_match = re.search(r"max_tokens[=:]\s*(\d+)", command)
    if token_match:
        tokens = int(token_match.group(1))
        if tokens > 10000:
            print(f"API call blocked: {tokens} tokens exceeds 10k limit", file=sys.stderr)
            sys.exit(2)
    # Check for expensive models
    if "gpt-4-128k" in command or "claude-3-5-sonnet-20241022" in command:
        print("Expensive model usage requires manual override", file=sys.stderr)
        sys.exit(2)
sys.exit(0)
The Insight That Changed Everything
Here's what I learned after three months of running this in production: hooks aren't just safety rails. They're a teaching tool.
Every time a hook fires, Claude Code gets immediate feedback. "Force push blocked." "JWT secret too weak." "Use UUIDv7." Over time, I've watched the rate of these interventions drop by 60% across our team. The model course-corrects within a session thanks to that deterministic feedback, and between sessions our prompts and documented conventions keep improving.
But more importantly, my junior developers are learning. Each hook failure is a micro-lesson in security, in best practices, in our team's conventions.
Implementation Strategy
Start small. Don't implement all ten at once.
- Week 1: Add the force-push block and MD5 interceptor. These are non-negotiable safety issues.
- Week 2: Add the commit validator and API version check. These improve code quality.
- Week 3: Add project-specific guards (like our Tkinter rule).
- Continually: Review hook triggers weekly. What's firing often? Maybe it needs better documentation. Maybe the rule needs adjustment.
The Balance
The dotzlaw.com article nailed it: "For decisions that require judgment rather than deterministic rules, you can also use prompt-based hooks or agent-based hooks that use a Claude model to evaluate conditions."
Some rules should be firm (security). Some should be guides (i18n reminders). Some should escalate to human judgment (expensive API calls).
Your Homework
Look at your last week of Claude Code sessions. What did it "forget" to do? What convention did it break? What safety rule did it almost violate?
That's your first hook.
Don't aim for perfect. Aim for "better than probabilistic." Because in production, "usually" isn't good enough. Your users deserve deterministic safety. Your team deserves reproducible quality. And you deserve to sleep at night, knowing your AI assistant has guardrails that actually guard.
Start with one hook today. The force-push block takes 2 minutes. It might save your repository tomorrow.
The Ermite Shinkofa

Jay "The Ermite"
Holistic Coach & Consultant — Creator of Shinkofa
Coach and consultant specializing in neurodivergent support (gifted/high-potential, highly sensitive, multipotentialites). 21 years of entrepreneurship, 12 years of coaching. Based in Spain.