One decorator. Zero trust. Full control.
Get Started in 30 Seconds · Why Airlock? · All Frameworks · Docs
```
┌────────────────────────────────────────────────────────────────┐
│  🤖 AI Agent: "Let me help clean up disk space..."             │
│                            ↓                                   │
│  rm -rf / --no-preserve-root                                   │
│                            ↓                                   │
│  ┌──────────────────────────────────────────────────────────┐  │
│  │  🛡️ AIRLOCK: BLOCKED                                     │  │
│  │                                                          │  │
│  │  Reason: Matches denied pattern 'rm_*'                   │  │
│  │  Policy: STRICT_POLICY                                   │  │
│  │  Fix:    Use approved cleanup tools only                 │  │
│  └──────────────────────────────────────────────────────────┘  │
└────────────────────────────────────────────────────────────────┘
```
```bash
pip install agent-airlock
```

```python
from agent_airlock import Airlock

@Airlock()
def transfer_funds(account: str, amount: int) -> dict:
    return {"status": "transferred", "amount": amount}

# LLM sends amount="500" (string)  → BLOCKED with fix_hint
# LLM sends force=True (invented arg) → STRIPPED silently
# LLM sends amount=500 (correct)   → EXECUTED safely
```

That's it. Your function now has ghost-argument stripping, strict type validation, and self-healing errors.
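Conceptually, ghost-argument stripping plus strict type validation can be sketched with nothing but the standard library. This is an illustration of the idea, not Airlock's actual implementation (`validate_call` is a hypothetical helper):

```python
import inspect

def validate_call(func, kwargs):
    """Drop arguments the function never declared ('ghosts'),
    then reject values whose type doesn't match the annotation."""
    sig = inspect.signature(func)
    # Strip ghost arguments the LLM invented
    cleaned = {k: v for k, v in kwargs.items() if k in sig.parameters}
    # Strict type check against the annotations
    for name, value in cleaned.items():
        expected = sig.parameters[name].annotation
        if expected is not inspect.Parameter.empty and not isinstance(value, expected):
            # the fix_hint equivalent: tell the LLM exactly what to correct
            raise TypeError(
                f"{name}={value!r}: expected {expected.__name__}, "
                f"got {type(value).__name__}"
            )
    return func(**cleaned)

def transfer_funds(account: str, amount: int) -> dict:
    return {"status": "transferred", "amount": amount}
```

With this sketch, `validate_call(transfer_funds, {"account": "a1", "amount": 500, "force": True})` silently drops `force` and executes, while `amount="500"` raises a `TypeError` carrying the hint.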
LLMs hallucinate tool calls. Every. Single. Day.
Enterprise solutions exist: Prompt Security ($50K/year), Pangea (proxy your data), Cisco ("coming soon").
We built the open-source alternative. One decorator. No vendor lock-in. Your data never leaves your infrastructure.
```python
from agent_airlock import Airlock, STRICT_POLICY

@Airlock(sandbox=True, sandbox_required=True, policy=STRICT_POLICY)
def execute_code(code: str) -> str:
    """Runs in an E2B Firecracker MicroVM. Not on your machine."""
    exec(code)
    return "executed"
```

| Feature | Value |
|---|---|
| Boot time | ~125ms cold, <200ms warm |
| Isolation | Firecracker MicroVM |
| Fallback | `sandbox_required=True` blocks local execution |
```python
from agent_airlock import (
    PERMISSIVE_POLICY,      # Dev - no restrictions
    STRICT_POLICY,          # Prod - rate limited, agent ID required
    READ_ONLY_POLICY,       # Analytics - query only
    BUSINESS_HOURS_POLICY,  # Dangerous ops 9-5 only
)

# Or build your own:
from agent_airlock import SecurityPolicy

MY_POLICY = SecurityPolicy(
    allowed_tools=["read_*", "query_*"],
    denied_tools=["delete_*", "drop_*", "rm_*"],
    rate_limits={"*": "1000/hour", "write_*": "100/hour"},
    time_restrictions={"deploy_*": "09:00-17:00"},
)
```

A runaway agent can burn $500 in API costs before you notice.
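The glob-style patterns in `allowed_tools`/`denied_tools` follow the usual precedence: deny wins over allow, and an unmatched tool is blocked by default. A simplified sketch of that evaluation, assuming a hypothetical `is_allowed` helper (not Airlock's code):

```python
from fnmatch import fnmatch

def is_allowed(tool_name, allowed, denied):
    """Deny patterns win over allow patterns; a tool that
    matches no allow pattern is blocked by default."""
    if any(fnmatch(tool_name, pat) for pat in denied):
        return False
    return any(fnmatch(tool_name, pat) for pat in allowed)
```

So `read_file` passes `["read_*", "query_*"]`, while `rm_tmp` is denied and `write_db` is blocked for matching nothing.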
```python
from agent_airlock import Airlock, AirlockConfig

config = AirlockConfig(
    max_output_chars=5000,   # Truncate before token explosion
    max_output_tokens=2000,  # Hard limit on response size
)

@Airlock(config=config)
def query_logs(query: str) -> str:
    return massive_log_query(query)  # 10MB → 5KB
```

ROI: 10MB of logs ≈ 2.5M tokens ≈ $25/response. Truncated: ~1.25K tokens ≈ $0.01. That's a 99.96% saving.
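The char cap is the cheap first line of defense: at the rough 4-chars-per-token heuristic, 10MB of text is about 2.5M tokens, so truncating before the text reaches the model is where the savings come from. A minimal sketch of the idea (`cap_output` is a hypothetical helper, not Airlock's internal logic):

```python
def cap_output(text, max_chars=5000):
    """Truncate tool output before it explodes into tokens,
    appending a marker so the LLM knows content was dropped."""
    if len(text) <= max_chars:
        return text
    return text[:max_chars] + f"\n[truncated {len(text) - max_chars} chars]"
```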
```python
config = AirlockConfig(
    mask_pii=True,      # SSN, credit cards, phones, emails
    mask_secrets=True,  # API keys, passwords, JWTs
)

@Airlock(config=config)
def get_user(user_id: str) -> dict:
    return db.users.find_one({"id": user_id})

# LLM sees: {"name": "John", "ssn": "[REDACTED]", "api_key": "sk-...XXXX"}
```

12 PII types detected · 4 masking strategies · Zero data leakage
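Under the hood, this class of masking is pattern-driven: match known PII shapes, replace with a placeholder. A deliberately tiny illustration with two hypothetical patterns (real detectors cover far more formats and use smarter strategies than blanket redaction):

```python
import re

# Illustrative patterns only; production detectors handle many more formats
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{8,}\b"),
}

def mask_pii(text):
    """Replace anything matching a known PII pattern with [REDACTED]."""
    for pattern in PII_PATTERNS.values():
        text = pattern.sub("[REDACTED]", text)
    return text
```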
Block data exfiltration during tool execution:
```python
from agent_airlock import network_airgap, NO_NETWORK_POLICY

# Block ALL network access
with network_airgap(NO_NETWORK_POLICY):
    result = untrusted_tool()  # Any socket call → NetworkBlockedError

# Or allow specific hosts only
from agent_airlock import NetworkPolicy

INTERNAL_ONLY = NetworkPolicy(
    allow_egress=True,
    allowed_hosts=["api.internal.com", "*.company.local"],
    allowed_ports=[443],
)
```

Secure existing code without changing a single line:
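One way an in-process airgap can work is by intercepting socket creation for the duration of the block. The sketch below shows that technique in its crudest form; it is not Airlock's implementation, which would also need to handle pre-opened sockets, async I/O, and host allowlists:

```python
import socket
from contextlib import contextmanager

class NetworkBlockedError(RuntimeError):
    pass

@contextmanager
def no_network():
    """Monkeypatch socket.socket so any new connection attempt
    raises, then restore the original on exit."""
    original = socket.socket
    def blocked(*args, **kwargs):
        raise NetworkBlockedError("network access blocked")
    socket.socket = blocked
    try:
        yield
    finally:
        socket.socket = original
```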
```python
from agent_airlock import vaccinate, STRICT_POLICY

# Before: Your existing LangChain tools are unprotected
vaccinate("langchain", policy=STRICT_POLICY)

# After: ALL @tool decorators now include Airlock security
# No code changes required!
```

Supported: LangChain, OpenAI Agents SDK, PydanticAI, CrewAI
Prevent cascading failures with fault tolerance:
```python
from agent_airlock import CircuitBreaker, AGGRESSIVE_BREAKER

breaker = CircuitBreaker("external_api", config=AGGRESSIVE_BREAKER)

@breaker
def call_external_api(query: str) -> dict:
    return external_service.query(query)

# After 5 failures → circuit OPENS → fast-fails for 30s
# Then HALF_OPEN → allows 1 test request → recovers or reopens
```

Enterprise-grade monitoring:
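The state machine in those comments (CLOSED → OPEN → HALF_OPEN) is small enough to sketch in full. `SimpleBreaker` below is a hypothetical illustration of the pattern, not Airlock's `CircuitBreaker` or the actual `AGGRESSIVE_BREAKER` config:

```python
import time

class SimpleBreaker:
    """Minimal circuit breaker: open after N failures, fast-fail
    while open, let one probe call through after the reset window."""
    def __init__(self, max_failures=5, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def __call__(self, func):
        def wrapper(*args, **kwargs):
            if self.opened_at is not None:
                if time.monotonic() - self.opened_at < self.reset_after:
                    raise RuntimeError("circuit open: fast-failing")
                self.opened_at = None  # HALF_OPEN: allow one probe call
            try:
                result = func(*args, **kwargs)
            except Exception:
                self.failures += 1
                if self.failures >= self.max_failures:
                    self.opened_at = time.monotonic()  # trip the circuit
                    self.failures = 0
                raise
            self.failures = 0  # success closes the circuit
            return result
        return wrapper
```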
```python
from agent_airlock import configure_observability, observe

configure_observability(
    service_name="my-agent",
    otlp_endpoint="http://otel-collector:4317",
)

@observe(name="critical_operation")
def process_data(data: dict) -> dict:
    # Automatic span creation, metrics, and audit logging
    return transform(data)
```

The Golden Rule: `@Airlock` must be closest to the function definition.
```python
@framework_decorator    # ← Framework sees secured function
@Airlock()              # ← Security layer (innermost)
def my_function(): ...  # ← Your code
```
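Why the order matters: Python applies decorators bottom-up, so the innermost decorator wraps the raw function first, and whatever sits above it receives the already-wrapped callable. A small demonstration with stand-in decorators (these are illustrative, not the real `@Airlock` or any framework's `@tool`):

```python
registered = []

def security_layer(func):
    # Stand-in for @Airlock(): wraps the raw function first
    def wrapper(*args, **kwargs):
        return ("secured", func(*args, **kwargs))
    return wrapper

def framework_register(func):
    # Stand-in for a framework @tool: registers whatever it receives
    registered.append(func)
    return func

@framework_register  # applied second: sees the secured wrapper
@security_layer      # applied first: closest to the function
def my_tool():
    return "result"
```

Because `security_layer` ran first, the framework registered the secured wrapper, not the raw function.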
**LangChain**

```python
from langchain_core.tools import tool
from agent_airlock import Airlock

@tool
@Airlock()
def search(query: str) -> str:
    """Search for information."""
    return f"Results for: {query}"
```
**OpenAI Agents SDK**

```python
from agents import function_tool
from agent_airlock import Airlock

@function_tool
@Airlock()
def get_weather(city: str) -> str:
    """Get weather for a city."""
    return f"Weather in {city}: 22°C"
```
**PydanticAI**

```python
from pydantic_ai import Agent
from agent_airlock import Airlock

@Airlock()
def get_stock(symbol: str) -> str:
    return f"Stock {symbol}: $150"

agent = Agent("openai:gpt-4o", tools=[get_stock])
```
**CrewAI**

```python
from crewai.tools import tool
from agent_airlock import Airlock

@tool
@Airlock()
def search_docs(query: str) -> str:
    """Search internal docs."""
    return f"Found 5 docs for: {query}"
```
More frameworks: LlamaIndex, AutoGen, smolagents, Anthropic
**LlamaIndex**

```python
from llama_index.core.tools import FunctionTool
from agent_airlock import Airlock

@Airlock()
def calculate(expression: str) -> int:
    return eval(expression, {"__builtins__": {}})

calc_tool = FunctionTool.from_defaults(fn=calculate)
```

**AutoGen**

```python
from autogen import ConversableAgent
from agent_airlock import Airlock

@Airlock()
def analyze_data(dataset: str) -> str:
    return f"Analysis of {dataset}: mean=42.5"

assistant = ConversableAgent(name="analyst", llm_config={"model": "gpt-4o"})
assistant.register_for_llm()(analyze_data)
```

**smolagents**

```python
from smolagents import tool
from agent_airlock import Airlock

@tool
@Airlock(sandbox=True)
def run_code(code: str) -> str:
    """Execute in E2B sandbox."""
    exec(code)
    return "Executed"
```

**Anthropic**

```python
from agent_airlock import Airlock

@Airlock()
def get_weather(city: str) -> str:
    return f"Weather in {city}: 22°C"

# Use in tool handler
def handle_tool_call(name, inputs):
    if name == "get_weather":
        return get_weather(**inputs)  # Airlock validates
```

| Framework | Example | Key Features |
|---|---|---|
| LangChain | langchain_integration.py | @tool, AgentExecutor |
| LangGraph | langgraph_integration.py | StateGraph, ToolNode |
| OpenAI Agents | openai_agents_sdk_integration.py | Handoffs, manager pattern |
| PydanticAI | pydanticai_integration.py | Dependencies, structured output |
| LlamaIndex | llamaindex_integration.py | ReActAgent |
| CrewAI | crewai_integration.py | Crews, roles |
| AutoGen | autogen_integration.py | ConversableAgent |
| smolagents | smolagents_integration.py | CodeAgent, E2B |
| Anthropic | anthropic_integration.py | Direct API |
```python
from fastmcp import FastMCP
from agent_airlock.mcp import secure_tool, STRICT_POLICY

mcp = FastMCP("production-server")

@secure_tool(mcp, policy=STRICT_POLICY)
def delete_user(user_id: str) -> dict:
    """One decorator: MCP registration + Airlock protection."""
    return db.users.delete(user_id)
```

| | Prompt Security | Pangea | Agent-Airlock |
|---|---|---|---|
| Pricing | $50K+/year | Enterprise | Free forever |
| Integration | Proxy gateway | Proxy gateway | One decorator |
| Self-Healing | ❌ | ❌ | ✅ |
| E2B Sandboxing | ❌ | ❌ | ✅ Native |
| Your Data | Their servers | Their servers | Never leaves you |
| Source Code | Closed | Closed | MIT Licensed |
We're not anti-enterprise. We're anti-gatekeeping. Security for AI agents shouldn't require a procurement process.
```bash
# Core (validation + policies + sanitization)
pip install agent-airlock

# With E2B sandbox support
pip install "agent-airlock[sandbox]"

# With FastMCP integration
pip install "agent-airlock[mcp]"

# Everything
pip install "agent-airlock[all]"
```

```bash
# E2B key for sandbox execution
export E2B_API_KEY="your-key-here"
```

Agent-Airlock mitigates the OWASP Top 10 for LLMs (2025):
| OWASP Risk | Mitigation |
|---|---|
| LLM01: Prompt Injection | Strict type validation blocks injected payloads |
| LLM02: Sensitive Data Disclosure | Network airgap prevents data exfiltration |
| LLM05: Improper Output Handling | PII/secret masking sanitizes outputs |
| LLM06: Excessive Agency | Rate limits + RBAC + capability gating prevent runaway agents |
| LLM07: System Prompt Leakage | Honeypot returns fake data instead of errors |
| LLM09: Misinformation | Ghost argument rejection blocks hallucinated params |
| Metric | Value |
|---|---|
| Tests | 1,157 passing |
| Coverage | 79%+ (enforced in CI) |
| Lines of Code | ~25,900 |
| Validation overhead | <50ms |
| Sandbox cold start | ~125ms |
| Sandbox warm pool | <200ms |
| Framework integrations | 9 |
| Core dependencies | 1 (Pydantic only) |
| Resource | Description |
|---|---|
| Examples | 9 framework integrations with copy-paste code |
| Security Guide | Production deployment checklist |
| API Reference | Every function, every parameter |
Built by Sattyam Jain — AI infrastructure engineer.
This started as an internal tool after watching an agent hallucinate its way through a production database. Now it's yours.
We review every PR within 48 hours.
```bash
git clone https://github.com/sattyamjjain/agent-airlock
cd agent-airlock
pip install -e ".[dev]"
pytest tests/ -v
```

- Bug? Open an issue
- Feature idea? Start a discussion
- Want to contribute? See open issues
If Agent-Airlock saved your production database:
- ⭐ Star this repo — Helps others discover it
- 🐛 Report bugs — Open an issue
- 📣 Spread the word — Tweet, blog, share
Sources: This README follows best practices from awesome-readme, Best-README-Template, and the GitHub Blog.