anthropic.com
Anthropic launched Claude Code Security as a limited research preview, using Opus 4.6 to scan codebases for vulnerabilities the way a human security researcher would — tracing data flows and component interactions rather than matching patterns. During internal testing, the team found over 500 previously undetected vulnerabilities in production open-source projects, with responsible disclosure still underway. Every finding goes through multi-stage verification and receives a severity rating and confidence score before reaching a dashboard where developers approve or reject suggested patches; nothing is applied autonomously. Enterprise and Team customers get first access, with open-source maintainers fast-tracked. Snyk, SonarQube, and GitHub's CodeQL have dominated this space with rule-based approaches for years, and Anthropic entering with reasoning-based analysis at the model layer raises the question of how long pattern-matching scanners will remain the default.
