|
31 | 31 |
|
32 | 32 | --- |
33 | 33 |
|
| 34 | +## What is SuperClaw? |
| 35 | + |
| 36 | +SuperClaw is a pre-deployment security testing framework for AI coding agents. It systematically identifies vulnerabilities before your agents touch sensitive data or connect to external ecosystems. |
| 37 | + |
| 38 | +### 🎯 Scenario-Driven Testing |
| 39 | + |
| 40 | +Generate and execute adversarial scenarios against real agents with reproducible results. |
| 41 | + |
| 42 | +[Get started →](https://superagenticai.github.io/superclaw/getting-started/quickstart/) |
| 43 | + |
| 44 | +### 📋 Behavior Contracts |
| 45 | + |
| 46 | +Explicit success criteria, evidence extraction, and mitigation guidance for each security property. |
| 47 | + |
| 48 | +[Explore behaviors →](https://superagenticai.github.io/superclaw/architecture/behaviors/) |
| 49 | + |
| 50 | +### 📊 Evidence-First Reporting |
| 51 | + |
| 52 | +Reports include tool calls, outputs, and actionable fixes in HTML, JSON, or SARIF formats. |
| 53 | + |
| 54 | +[CI/CD integration →](https://superagenticai.github.io/superclaw/guides/ci-cd/) |
| 55 | + |
| 56 | +### 🛡️ Built-in Guardrails |
| 57 | + |
| 58 | +Local-only mode and authorization checks reduce misuse risk. |
| 59 | + |
| 60 | +[Safety guide →](https://superagenticai.github.io/superclaw/guides/safety/) |
| 61 | + |
| 62 | +## ⚠️ Security and Ethical Use |
| 63 | + |
| 64 | +### Authorized Testing Only |
| 65 | + |
| 66 | +SuperClaw is for authorized security testing only. Before using: |
| 67 | + |
| 68 | +- ✅ Obtain written permission to test the target system |
| 69 | +- ✅ Run tests in sandboxed or isolated environments |
| 70 | +- ✅ Treat automated findings as signals, not proof—verify manually |
| 71 | + |
| 72 | +**Guardrails enforced by default:** |
| 73 | + |
| 74 | +- Local-only mode blocks remote targets |
| 75 | +- Remote targets require `SUPERCLAW_AUTH_TOKEN` |
| 76 | + |
| 77 | +## Threat Model |
| 78 | + |
| 79 | +### OpenClaw + Moltbook Risk Surface |
| 80 | + |
| 81 | +OpenClaw agents often run with broad tool access. When connected to Moltbook or other agent networks, they can ingest untrusted, adversarial content that enables: |
| 82 | + |
| 83 | +- Prompt injection and hidden instruction attacks |
| 84 | +- Tool misuse and policy bypass |
| 85 | +- Behavioral drift over time |
| 86 | +- Cascading cross-agent exploitation |
| 87 | + |
| 88 | +SuperClaw evaluates these risks before deployment. |
| 89 | + |
34 | 90 | ## Why SuperClaw? |
35 | 91 |
|
36 | 92 | Autonomous AI agents are deployed with high privileges, mutable behavior, and exposure to untrusted inputs—often without structured security validation. This makes prompt injection, tool misuse, configuration drift, and data leakage likely but poorly understood until after exposure. |
37 | 93 |
|
38 | | -**SuperClaw** is a pre-deployment, behavior-driven red-teaming framework that stress-tests your agents before they touch sensitive data or external ecosystems. |
39 | | - |
40 | 94 | ### What It Does |
41 | 95 |
|
42 | 96 | - **Runs scenario-based security evaluations** against your agents |
|
0 commit comments