# Vyper Guard
### Step 1: Install and verify
Get a valid local setup in under 60 seconds.

```shell
pip install vyper-guard
```

### Step 2: Run a first scan
Produce deterministic findings and a baseline score.

```shell
vyper-guard analyze contracts/Vault.vy
```

### Step 3: Export artifacts
Generate machine-readable artifacts for release policy.

```shell
vyper-guard stats contracts/Vault.vy --graph
```
Vyper Guard is a Vyper-focused security analysis CLI designed for developer loops, security review workflows, and CI policy enforcement. It combines deterministic static analysis with semantic context and optional AI-assisted explanation, while keeping machine-readable output for automation.
The platform supports both local source-file analysis and address-based intelligence for deployed contracts. Outputs are available in human-friendly terminal format and structured report formats for integration into release gates and audit pipelines.
## Installation
Install Vyper Guard from PyPI and verify the CLI in your active Python environment.
## Quick start
Minimal path to first scan, flow view, and graph exports.
For folder-level analysis, point to a directory path. The scanner recursively evaluates Vyper contracts and aggregates findings per file and per severity class.
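As a rough illustration of the per-file, per-severity aggregation described above, findings can be grouped like this. This is a hypothetical sketch: the field names and finding shape are assumptions, not the actual vyper-guard output schema.

```python
from collections import defaultdict

# Hypothetical findings, e.g. parsed from a machine-readable report.
# The "file" and "severity" keys are illustrative assumptions.
findings = [
    {"file": "contracts/Vault.vy", "severity": "CRITICAL"},
    {"file": "contracts/Vault.vy", "severity": "LOW"},
    {"file": "contracts/Token.vy", "severity": "HIGH"},
]

def aggregate(findings):
    """Count findings per file and per severity class."""
    per_file = defaultdict(lambda: defaultdict(int))
    for f in findings:
        per_file[f["file"]][f["severity"]] += 1
    return {path: dict(sevs) for path, sevs in per_file.items()}

summary = aggregate(findings)
print(summary)
```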
The execution model is intentionally layered: input normalization, deterministic detector pass, semantic verification, optional AI interpretation, and report synthesis. This keeps core findings reproducible while allowing richer human-readable audit context.
The pipeline splits into two layers:

- **Deterministic core:** input normalization, the detector pass, and semantic verification.
- **Advisory/augmentation layer:** optional AI interpretation layered on top of the deterministic results.

This staged model ensures the same source yields stable baseline findings while still allowing optional augmentation layers to improve reviewer velocity and report quality.
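The layered execution model above can be sketched as a pipeline of stages. The stage names mirror the documentation; the stub implementations are placeholders, not vyper-guard internals.

```python
# Illustrative sketch of the staged execution model. All function bodies
# are placeholder assumptions for the example.

def normalize(source: str) -> str:
    return source.strip()

def deterministic_detectors(source: str) -> list:
    # Deterministic: the same input always yields the same findings.
    if "@external" in source:
        return [{"key": "missing_nonreentrant", "severity": "CRITICAL"}]
    return []

def semantic_verification(findings: list) -> list:
    return findings  # placeholder: would filter false positives

def ai_interpretation(findings: list) -> list:
    # Advisory layer: annotates findings but never adds or removes them.
    return [{**f, "explanation": "(AI narrative here)"} for f in findings]

def run_pipeline(source: str, use_ai: bool = False) -> dict:
    findings = semantic_verification(deterministic_detectors(normalize(source)))
    report = {"findings": findings}
    if use_ai:
        report["advisory"] = ai_interpretation(findings)
    return report
```

Because the advisory layer only annotates, enabling `use_ai` never changes the baseline finding set.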
| Command | Purpose |
|---|---|
| `vyper-guard analyze contracts/Vault.vy` | Run full deterministic static analysis and produce risk-scored findings. |
| `vyper-guard analyze contracts/Vault.vy --ai` | Add AI-assisted prioritization and remediation narrative on top of deterministic findings. |
| `vyper-guard ast contracts/Vault.vy --format json` | Inspect declarations and contract structure through an AST summary. |
| `vyper-guard flow contracts/Vault.vy --format mermaid` | Generate function and call-flow views for reasoning about execution order. |
| `vyper-guard fix contracts/Vault.vy --fix-dry-run --max-auto-fix-tier B` | Preview safe remediations with tier-based safety gates. |
| `vyper-guard stats contracts/Vault.vy --graph` | Export analysis metrics with graph artifacts (JSON + HTML). |
| `vyper-guard analyze-address 0xYourAddress --format json` | Analyze deployed contract source via explorer-backed lookup. |
Recommended baseline commands for teams are `analyze`, `analyze --ai`, `fix --fix-dry-run`, `stats --graph`, and `analyze-address` for post-deploy checks.
## Detectors
Detector inventory grouped by category and operational intent.

| Detector key | Severity | Category |
|---|---|---|
| `missing_nonreentrant` | CRITICAL | Security |
| `unprotected_selfdestruct` | CRITICAL | Security |
| `cei_violation` | HIGH | Logic |
| `unsafe_raw_call` | HIGH | Security |
| `dangerous_delegatecall` | HIGH | Security |
| `unprotected_state_change` | HIGH | Security |
| `integer_overflow` | HIGH | Logic |
| `send_in_loop` | HIGH | Security |
| `unchecked_subtraction` | HIGH | Logic |
| `compiler_version_check` | MEDIUM | Best Practice |
| `missing_event_emission` | LOW | Best Practice |
| `timestamp_dependence` | LOW | Best Practice |
## AI configuration
AI mode augments deterministic findings. It should be treated as a triage and explanation layer, not as a replacement for deterministic checks or formal audit review.
## Structure views
Structural views improve reviewer understanding of contract layout, function interactions, and potential high-risk control paths.
AST output is suited for machine processing and tooling integration, while flow summaries are suited for reviewer cognition and architecture walkthroughs.
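To make the "machine processing" point concrete, here is a minimal sketch of consuming an AST summary in tooling. The JSON shape shown is an assumption for illustration; the actual `vyper-guard ast --format json` schema is not documented here.

```python
import json

# Hypothetical excerpt of AST-summary JSON output. The "contract",
# "functions", and "visibility" keys are assumed, not the real schema.
ast_json = """
{
  "contract": "Vault",
  "functions": [
    {"name": "deposit", "visibility": "external", "mutability": "payable"},
    {"name": "_sweep", "visibility": "internal", "mutability": "nonpayable"}
  ]
}
"""

def external_functions(doc: dict) -> list:
    """List externally reachable functions, a common review starting point."""
    return [f["name"] for f in doc.get("functions", [])
            if f.get("visibility") == "external"]

doc = json.loads(ast_json)
print(external_functions(doc))  # the externally callable surface area
```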
## Fix workflow
Teams should require human approval for Tier B and Tier C fix classes and retain fix-plan artifacts in pull request metadata to preserve review transparency.
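A tier-based gate can be sketched as follows. This assumes tiers are ordered A < B < C by risk, inferred from the `--max-auto-fix-tier B` flag in the command table; the tool's actual tier semantics may differ.

```python
# Sketch of a tier-based auto-fix gate. The ordering A < B < C and the
# approval policy are assumptions drawn from the surrounding prose.
TIER_ORDER = {"A": 0, "B": 1, "C": 2}

def may_auto_apply(fix_tier: str, max_auto_tier: str, human_approved: bool) -> bool:
    """Apply only fixes at or below the cap; Tier B/C also need approval."""
    if TIER_ORDER[fix_tier] > TIER_ORDER[max_auto_tier]:
        return False  # above the configured cap: never auto-apply
    if fix_tier in ("B", "C") and not human_approved:
        return False  # higher-risk classes require a human sign-off
    return True
```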
## Explorer lookup
Address-based mode is intended for post-deployment visibility and incident response workflows, especially when source and ABI are verified through explorer providers.
## Report output

- **CLI:** human-readable local feedback with score and severity summary.
- **JSON:** machine-readable artifacts for CI gates, dashboards, and archival.
- **Markdown:** audit- and PR-ready narrative reporting format.
## Configuration
Core configuration surfaces include AI provider/model settings, explorer provider credentials, output defaults, and CI severity thresholds.

- **AI configuration:** managed via `ai config`
- **Explorer configuration:** managed via `explorer config`
- **Penalty model** and **grade bands:** scoring behavior used for risk grading
Severity caps prevent a single class from over-dominating score impact and preserve a more balanced security posture interpretation across multiple risk dimensions.
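A capped penalty model of the kind described above can be sketched like this. The weights and caps are invented for the example; vyper-guard's actual scoring parameters are not documented here.

```python
# Illustrative severity-capped penalty scoring. PENALTY and CAP values
# are assumptions, not the tool's real configuration.
PENALTY = {"CRITICAL": 25, "HIGH": 10, "MEDIUM": 4, "LOW": 1}
CAP = {"CRITICAL": 50, "HIGH": 30, "MEDIUM": 12, "LOW": 5}  # per-class cap

def score(counts: dict) -> int:
    """Start at 100 and subtract per-severity penalties, capped per class."""
    total_penalty = sum(
        min(PENALTY[sev] * n, CAP[sev]) for sev, n in counts.items()
    )
    return max(0, 100 - total_penalty)

# Ten LOW findings cost at most 5 points, so one class cannot dominate.
print(score({"LOW": 10}))      # 95
print(score({"CRITICAL": 1}))  # 75
```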
Existing workflows based on `analyze [path]` remain valid. Newer features such as AI layering, explorer integration, and fix policy controls are additive and can be introduced incrementally.
## CI pipeline
Recommended CI pattern: generate a machine-readable report, evaluate threshold policy, publish the report artifact, and fail fast on disallowed severities.
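The "evaluate policy, fail fast" step might look like the following gate script. The report shape (a `findings` list with `severity` fields) is an assumption for illustration, not the actual vyper-guard JSON schema.

```python
import json
import sys

# Severities that block a merge/deploy in this hypothetical policy.
DISALLOWED = {"CRITICAL", "HIGH"}

def gate(report: dict, disallowed=DISALLOWED) -> int:
    """Return a process exit code: 1 if any disallowed finding exists."""
    bad = [f for f in report.get("findings", [])
           if f.get("severity") in disallowed]
    for f in bad:
        print(f"blocked: {f.get('key', '?')} ({f['severity']})")
    return 1 if bad else 0

if __name__ == "__main__" and len(sys.argv) > 1:
    # e.g. python gate.py report.json  (report produced by a prior CI step)
    with open(sys.argv[1]) as fh:
        sys.exit(gate(json.load(fh)))
```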
A practical team pattern is to run deterministic scans at commit time, run full AI-assisted triage on pull requests, and enforce severity thresholds in CI before merge. This approach keeps developer feedback fast while preserving strict release controls for security-critical changes.
For audit-heavy repositories, teams typically pair `analyze --format json` with markdown reports. JSON outputs are consumed by quality gates, while markdown artifacts are attached to pull requests for reviewer clarity and discussion.
Security governance should define acceptable risk by severity and environment. A common policy is zero tolerated critical findings in all environments, and zero high findings for production deploys. Medium and low findings may be allowed with explicit waivers and remediation timelines.
Waiver records should include detector key, rationale, owner, expiry date, and mitigation status. This creates auditability and prevents permanent drift from baseline security posture.
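A waiver record with the fields listed above could be modeled as a small data structure with an expiry check. The class name and expiry policy here are illustrative, not part of vyper-guard.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Waiver:
    """Hypothetical waiver record; fields mirror the governance guidance."""
    detector_key: str
    rationale: str
    owner: str
    expiry: date
    mitigation_status: str = "open"

    def is_active(self, today: date) -> bool:
        """Expired waivers no longer suppress findings."""
        return today <= self.expiry

w = Waiver("timestamp_dependence", "block-time tolerance acceptable here",
           "alice", date(2025, 6, 30))
print(w.is_active(date(2025, 1, 1)))  # True: waiver still valid
print(w.is_active(date(2025, 7, 1)))  # False: expired, finding resurfaces
```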
Vyper Guard is a security analysis assistant and not a formal guarantee of exploit absence. All critical and high findings should be reviewed by qualified auditors before production deployment.
AI-assisted sections are advisory. Deterministic findings and manual code review remain the authoritative basis for release decisions.
Current AI settings can be inspected with `vyper-guard ai config show`.

## FAQ

### Does AI mode change deterministic findings?
No. AI mode augments interpretation and prioritization, but baseline findings come from deterministic analysis.
### Can I use Vyper Guard only in CI?
Yes, but local scans are recommended to reduce CI churn and speed developer feedback loops.

### Should I auto-apply all fixes?
No. Start with dry-run and tier caps. Require human review for higher-risk fix classes.
This page is a product-level operational reference derived from available Vyper Guard metadata, command references, and detector/scoring model representation in the project context. For upstream implementation details and release notes, use the linked repository and package pages.