Claude Code Review
Developer Tools

An AI-powered code review tool by Anthropic that dispatches multiple agents in parallel to analyze pull requests for bugs, security issues, and logic errors, with a verification layer that filters false positives before surfacing findings.
You do not call a forensic accountant for every invoice. But when the numbers are large enough and the stakes are real, that is exactly who you want reviewing the books before they are signed off.
Claude Code Review is an automated code review system built into Claude Code, launched by Anthropic on March 9, 2026. It uses a multi-agent architecture where several AI agents examine a pull request concurrently from different angles, then a verification layer cross-checks and filters findings before presenting them to the developer.
The system produces a single overview comment plus inline annotations ranked by severity. It does not approve or merge pull requests; it reports findings and stops there. Review depth scales with PR complexity: large PRs (1,000+ lines) receive findings 84% of the time, averaging 7.5 issues per review, while small PRs (under 50 lines) receive findings 31% of the time.
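The pipeline described above follows a recognizable pattern: several reviewer passes run concurrently over the same diff, their candidate findings are pooled, and a verification step filters out likely false positives before the survivors are ranked by severity. A minimal generic sketch of that pattern is below; all names, agents, and the scoring scheme are illustrative assumptions, not Anthropic's implementation.

```python
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass

# Hypothetical finding record; higher severity means more serious.
@dataclass
class Finding:
    agent: str
    severity: int
    message: str

# Each "agent" reviews the same diff from a different angle.
# These stand-ins return canned findings purely for illustration.
def security_agent(diff: str) -> list[Finding]:
    return [Finding("security", 3, "possible injection in query builder")]

def logic_agent(diff: str) -> list[Finding]:
    return [Finding("logic", 2, "off-by-one in pagination loop"),
            Finding("logic", 1, "stylistic nit")]

def verify(finding: Finding) -> bool:
    # Stand-in for the verification layer: suppress low-confidence
    # findings rather than surfacing them as review noise.
    return finding.severity >= 2

def review(diff: str) -> list[Finding]:
    agents = [security_agent, logic_agent]
    # Run all reviewer passes concurrently over the same diff.
    with ThreadPoolExecutor() as pool:
        batches = pool.map(lambda agent: agent(diff), agents)
    candidates = [f for batch in batches for f in batch]
    # Surface only verified findings, ranked by severity, highest first.
    return sorted(filter(verify, candidates),
                  key=lambda f: f.severity, reverse=True)
```

In this sketch the verification step is a simple severity threshold; the real system cross-checks findings with further model passes, but the shape of the pipeline (fan out, pool, filter, rank) is the same.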
Internally at Anthropic, code output per engineer grew 200% in a year, creating a review bottleneck where only 16% of pull requests received substantive review comments. After deploying Claude Code Review, that figure rose to 54%. Less than 1% of findings are marked incorrect by reviewing engineers.
Claude Code Review is available as a research preview for Team and Enterprise customers at an estimated cost of $15-25 per review, depending on PR complexity. Organizations can set monthly spending caps and enable reviews selectively by repository.
Last updated: March 11, 2026