CloneGuard¶
Hook-level prompt injection defense for AI coding agents.
Your AI agent reads untrusted repos. CloneGuard watches what it does next.
CloneGuard runs at the hook layer: it intercepts tool calls before they execute, outside the agent's control. It detects prompt injection attempts, constrains suspicious operations via OS-level sandboxing, and emits structured audit logs.
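Claude Code exposes this hook layer through `PreToolUse` entries in its settings file, which run an external command before each matching tool call. A hook entry wiring in a defense like CloneGuard could look roughly like the sketch below; the `cloneguard hook` command name is illustrative, not a documented CLI.

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          { "type": "command", "command": "cloneguard hook" }
        ]
      }
    ]
  }
}
```

The hook command receives the pending tool call on stdin and can veto it via its exit status, which is what lets a checker run outside the agent's control.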
- **5-Minute Setup** — Install from PyPI, run one command, and CloneGuard is active in Claude Code.
- **240 Detection Rules** — Pattern matching, semantic classification, and behavioral sequence monitoring across 34 attack categories.
- **Detect, Constrain, Audit** — Three-verdict enforcement with OS-level sandboxing. Dry-run by default.
- **Works With Any Agent** — Built for Claude Code. The standalone scan works with any agent, and the hook protocol is compatible with Gemini CLI, Cursor, and Windsurf.
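The three-verdict model above (allow, constrain, block) can be sketched as a small decision function. This is a toy illustration, not CloneGuard's actual rule engine: the pattern lists, names, and thresholds here are invented stand-ins for the real 240-rule set.

```python
import re
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"          # pass the tool call through unchanged
    CONSTRAIN = "constrain"  # execute, but inside an OS-level sandbox
    BLOCK = "block"          # refuse the call and write an audit log entry

# Illustrative patterns only; the real rule set is far larger.
BLOCK_PATTERNS = [
    r"curl\s+\S+\s*\|\s*(ba)?sh",          # pipe-to-shell download
    r"ignore (all )?previous instructions",  # classic injection phrasing
]
CONSTRAIN_PATTERNS = [
    r"\brm\s+-rf\b",      # destructive delete: allow, but sandboxed
    r"\bchmod\s+777\b",   # permission loosening
]

def evaluate(tool_input: str) -> Verdict:
    """Map a pending tool call's input to one of three verdicts."""
    for pat in BLOCK_PATTERNS:
        if re.search(pat, tool_input, re.IGNORECASE):
            return Verdict.BLOCK
    for pat in CONSTRAIN_PATTERNS:
        if re.search(pat, tool_input, re.IGNORECASE):
            return Verdict.CONSTRAIN
    return Verdict.ALLOW
```

In dry-run mode, a checker like this would only log the verdict instead of enforcing it, which is how a "dry-run by default" rollout avoids breaking workflows while false positive rates are tuned.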
Install¶
```shell
pip install cloneguard           # pattern matching only
pip install "cloneguard[mini]"   # + semantic classifier (recommended)
```
Development Status¶
CloneGuard is in active development (v0.6.0). The core detection engine is covered by 1,677 automated tests, with false positive rates calibrated against 208,127 real coding-agent sessions from published SWE-bench datasets.
Enterprise features (policy backends, SIEM connectors, fleet deployment) are early-stage and experimental.
We want feedback -- open an issue or contribute.