Version control for AI decisions.
Multi-agent AI systems are moving from demos to production, but their decisions are invisible and uncoordinated. Agents contradict each other, relitigate settled work, and have no shared memory of what's already been decided. When something goes wrong, nobody can answer: who decided what, when, why, and what alternatives were considered?
Akashi is the decision coordination layer. Every agent checks for precedents before deciding and records its full reasoning after. When agents diverge on the same topic, Akashi detects it semantically — and when the CTO asks "why did the AI do that?" or an auditor asks for proof of decision traceability, you have the answer.
Akashi is built around two primitives: check before deciding, trace after deciding.
```
Before making a decision          After making a decision
─────────────────────────         ───────────────────────
akashi_check                      akashi_trace
"has anyone decided this?"        "here's what I decided and why"
→ precedents                      → stored permanently
→ known conflicts                 → embeddings computed
                                  → conflicts detected
```
When an agent calls `akashi_trace`, five things happen:

1. Two embeddings are computed — one from `decision_type + outcome + reasoning` (for semantic search and conflict topic matching), one from `outcome` alone (for detecting stance divergence between decisions on the same topic).
2. A completeness score is assigned — a 0–1 heuristic based on whether reasoning, alternatives, evidence, and confidence were provided.
3. Everything is written atomically — the decision, alternatives, evidence, and a search index entry commit in a single transaction.
4. Conflict detection runs asynchronously — the new decision is compared against the 50 most semantically similar decisions in the org's history. Pairs above a significance threshold are validated by an LLM (Ollama or OpenAI), which classifies the relationship as `contradiction`, `supersession`, `complementary`, `refinement`, or `unrelated`. Only genuine conflicts are stored.
5. Subscribers are notified — real-time SSE subscribers and the audit dashboard see new decisions and conflicts immediately.
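Step 2's completeness heuristic can be sketched as a simple presence check. This is an illustrative guess at the scoring (equal weight per field); Akashi's actual weights are internal:

```python
def completeness_score(decision: dict) -> float:
    """Illustrative 0-1 completeness heuristic: equal weight for each
    of the four fields Akashi checks. The real weighting may differ."""
    fields = ["reasoning", "alternatives", "evidence", "confidence"]
    present = sum(1 for f in fields if decision.get(f) not in (None, "", []))
    return present / len(fields)

full = {"reasoning": "needs independent scaling",
        "alternatives": [{"label": "monolith"}],
        "evidence": [{"content": "traffic spikes 10x"}],
        "confidence": 0.85}
print(completeness_score(full))                  # 1.0
print(completeness_score({"reasoning": "why"}))  # 0.25
```

A trace with only an outcome scores 0, which is how thin records become visible in the audit trail.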
Before making a decision, an agent calls `akashi_check` with a description of what it's about to decide. Akashi returns:
- Precedents — the most relevant past decisions on similar topics, re-ranked by assessment outcomes, citation count, and recency
- Conflicts — any open conflicts involving those precedents
This is the mechanism by which later agents build on earlier decisions rather than rediscovering the same ground.
Conflicts are detected semantically, not by type matching. The significance formula is:
```
significance = topic_similarity × outcome_divergence × confidence_weight × temporal_decay
```
A planner recommending microservices and a coder recommending a monolith for the same system will have high topic similarity (same domain vocabulary) and high outcome divergence (opposite conclusions) — that pair surfaces as a conflict regardless of what `decision_type` either agent used.
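As a sketch, the formula is a product of four 0–1 factors. The factor definitions below (cosine similarity on the topic and outcome embeddings, exponential time decay with an assumed 90-day half-life) are illustrative assumptions, not Akashi's exact implementation:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def significance(topic_a, topic_b, outcome_a, outcome_b,
                 conf_a, conf_b, age_days, half_life_days=90.0):
    topic_similarity = cosine(topic_a, topic_b)               # same subject?
    outcome_divergence = 1.0 - cosine(outcome_a, outcome_b)   # opposite stances?
    confidence_weight = conf_a * conf_b                       # both agents sure?
    temporal_decay = 0.5 ** (age_days / half_life_days)       # recent pairs matter more
    return topic_similarity * outcome_divergence * confidence_weight * temporal_decay

# Toy vectors: same topic, orthogonal outcomes, both confident, brand new
s = significance([1.0, 0.0], [1.0, 0.0], [1.0, 0.0], [0.0, 1.0], 0.9, 0.9, age_days=0)
```

Because the factors multiply, any one of them going to zero (unrelated topics, identical outcomes, or very old decisions) suppresses the pair before the LLM validation step ever runs.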
Conflicts have a lifecycle: `open` → `acknowledged` → `resolved` (or `wont_fix`). When resolving, you can declare a winner — which of the two decisions prevailed — or use the adjudication endpoint to record the resolution itself as a new traceable decision.
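The lifecycle can be modeled as a small state machine. This is a sketch of the transitions as described above, not Akashi's internal representation; whether `open` can jump straight to a terminal state is an assumption left strict here:

```python
# Allowed conflict-state transitions, per the lifecycle above.
# Assumption: states advance strictly in order; skipping "acknowledged"
# is disallowed in this sketch.
TRANSITIONS = {
    "open": {"acknowledged"},
    "acknowledged": {"resolved", "wont_fix"},
    "resolved": set(),   # terminal
    "wont_fix": set(),   # terminal
}

def advance(state: str, target: str) -> str:
    """Return the new state, or raise if the transition is illegal."""
    if target not in TRANSITIONS.get(state, set()):
        raise ValueError(f"illegal transition {state} -> {target}")
    return target

state = advance("open", "acknowledged")
state = advance(state, "resolved")
```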
When an agent later observes whether a past decision was correct, it calls `akashi_assess`:

```
akashi_assess
  decision_id: <uuid>
  outcome: "correct" | "incorrect" | "partially_correct"
  notes: "the cache TTL was too short; needed 15 minutes not 5"
```
Assessments feed back into search re-ranking — decisions assessed as correct surface higher as precedents. This is how the audit trail improves over time rather than just accumulating records.
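A sketch of how assessments might feed re-ranking, combining the three signals named earlier (assessment outcome, citation count, recency). The multipliers and half-life below are illustrative assumptions, not Akashi's actual weights:

```python
import math

# Illustrative outcome multipliers; the real values are internal to Akashi.
OUTCOME_WEIGHT = {"correct": 1.2, "partially_correct": 1.0,
                  "incorrect": 0.6, None: 1.0}

def rerank_score(base_similarity, outcome, citations, age_days,
                 half_life_days=180.0):
    """Re-rank a precedent: semantic similarity scaled by assessment
    outcome, a logarithmic citation boost, and recency decay."""
    recency = 0.5 ** (age_days / half_life_days)
    citation_boost = 1.0 + 0.1 * math.log1p(citations)
    return base_similarity * OUTCOME_WEIGHT[outcome] * citation_boost * recency

# A decision assessed "correct" outranks an equally similar "incorrect" one.
good = rerank_score(0.8, "correct", 3, age_days=10)
bad = rerank_score(0.8, "incorrect", 3, age_days=10)
```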
Three modes. Pick the one that matches your setup.
Zero-dependency mode backed by SQLite. Starts in under 3 seconds with no Docker, Postgres, Qdrant, or Ollama. All 6 MCP tools work identically to the full server.
```bash
# Build the local-lite binary
go build -o bin/akashi-local ./cmd/akashi-local

# Start — creates ~/.akashi/local.db on first run
./bin/akashi-local
```

The binary serves MCP over stdio. Add it to Claude Code:

```bash
claude mcp add akashi-local -- ./bin/akashi-local
```

Or to Cursor/Windsurf (`~/.cursor/mcp.json`):
```json
{
  "mcpServers": {
    "akashi": {
      "command": "/path/to/bin/akashi-local"
    }
  }
}
```

Limitations: no multi-tenancy, no SSE subscriptions, no LLM conflict validation (uses text-based conflict detection instead), and no audit dashboard UI. Ideal for individual developers and small teams. When you outgrow it, migrate to the full server — the decision data is portable.
Custom database path:
```bash
AKASHI_DB_PATH=/path/to/decisions.db ./bin/akashi-local
```

Everything runs in Docker — TimescaleDB, Qdrant, Ollama, and the Akashi server. No API keys, no external accounts.
```bash
docker compose -f docker-compose.complete.yml up -d
```

First launch builds the server image from source and downloads two Ollama models: `mxbai-embed-large` (~670 MB) for embeddings and `qwen3.5:9b` (~6.6 GB) for LLM conflict validation. Expect 15–25 minutes on first run depending on your machine and network. Subsequent launches start in seconds.
Watch model download progress:
```bash
docker compose -f docker-compose.complete.yml logs -f ollama-init
```

The server is ready when you see a listening log line. Model downloads run in the background — embeddings and conflict validation activate automatically once complete.
```bash
curl http://localhost:8080/health
# Open http://localhost:8080 for the audit dashboard
```

If port 8080 is already in use, set `AKASHI_PORT` before starting:
```bash
echo "AKASHI_PORT=8081" > .env
docker compose -f docker-compose.complete.yml up -d
# Open http://localhost:8081
```

Just the Akashi server container. You provide TimescaleDB, Qdrant, and an embedding API key.
```bash
cp docker/env.example .env
# Edit .env: set DATABASE_URL, AKASHI_ADMIN_API_KEY, and optionally QDRANT_URL / OPENAI_API_KEY
docker compose up -d
```

First run builds the server image from source (~3–5 minutes). After that, `docker compose up -d` starts in seconds. To force a rebuild after updating the source: `docker compose up -d --build`.
Required variables:
| Variable | Description |
|---|---|
| `DATABASE_URL` | TimescaleDB connection string (pgvector + TimescaleDB extensions required) |
| `AKASHI_ADMIN_API_KEY` | Bootstrap API key for the admin agent |
Optional (server starts without them — search falls back to text):
| Variable | Description |
|---|---|
| `QDRANT_URL` | Qdrant endpoint for vector search |
| `OPENAI_API_KEY` | Enables OpenAI embeddings and LLM conflict validation |
| `OLLAMA_URL` | Ollama endpoint for local embeddings |
| `AKASHI_JWT_PRIVATE_KEY` | Path to Ed25519 private key PEM file. Empty = an ephemeral key pair is generated on every startup, invalidating all tokens on each restart. Set this for any persistent deployment. |
| `AKASHI_JWT_PUBLIC_KEY` | Path to Ed25519 public key PEM file. Must be set alongside the private key. |
| `AKASHI_JWT_EXPIRATION` | JWT token lifetime. Default: `24h`. |
Generating persistent signing keys (run once from the repo root):
```bash
go run ./scripts/genkey -out data/
# Writes: data/jwt_private.pem, data/jwt_public.pem
```

Then add to `.env`:

```bash
AKASHI_JWT_PRIVATE_KEY=/data/jwt_private.pem
AKASHI_JWT_PUBLIC_KEY=/data/jwt_public.pem
```
The `docker-compose.yml` already mounts `./data` as `/data` inside the container — no other changes needed. Both PEM files must have `0600` permissions; the server rejects looser modes at startup.
See Configuration for all variables.
When `AKASHI_SIGNUP_ENABLED=true`, new organizations can register without admin intervention:
```bash
curl -X POST http://localhost:8080/auth/signup \
  -H 'Content-Type: application/json' \
  -d '{
    "org_name": "My Team",
    "agent_id": "planner",
    "email": "team@example.com"
  }'
```

Returns an org ID, an API key, and a ready-to-paste MCP config snippet. The API key is shown exactly once — save it immediately. Rate limited to 1 request/second per IP (burst 5).
```bash
# Get a token (default admin key for local dev is "admin")
TOKEN=$(curl -s -X POST http://localhost:8080/auth/token \
  -H 'Content-Type: application/json' \
  -d '{"agent_id": "admin", "api_key": "admin"}' \
  | jq -r '.data.token')

# Record a decision with reasoning, alternatives, and evidence
curl -X POST http://localhost:8080/v1/trace \
  -H "Authorization: Bearer $TOKEN" \
  -H 'Content-Type: application/json' \
  -d '{
    "agent_id": "admin",
    "decision": {
      "decision_type": "architecture",
      "outcome": "use microservices for the payment system",
      "confidence": 0.85,
      "reasoning": "Payment processing needs independent scaling and deployment. A monolith couples payment latency to unrelated features.",
      "alternatives": [
        {"label": "microservices", "selected": true, "score": 0.85,
         "rationale": "Independent scaling, isolated failures, team autonomy"},
        {"label": "monolith", "selected": false, "score": 0.65,
         "rationale": "Simpler deployment but couples all domains"}
      ],
      "evidence": [
        {"source_type": "analysis", "content": "Payment traffic spikes 10x during promotions while other services stay flat"}
      ]
    }
  }'

# Before making a decision, check if similar ones exist
curl -X POST http://localhost:8080/v1/check \
  -H "Authorization: Bearer $TOKEN" \
  -H 'Content-Type: application/json' \
  -d '{"decision_type": "architecture", "query": "microservices vs monolith"}'

# Semantic search across the decision trail
curl -X POST http://localhost:8080/v1/search \
  -H "Authorization: Bearer $TOKEN" \
  -H 'Content-Type: application/json' \
  -d '{"query": "scaling decisions for high-traffic services", "limit": 5}'
```

The fastest way to use Akashi is through MCP. Your agent gains decision tracing with zero code changes.
The MCP endpoint supports two auth schemes. `ApiKey` is recommended for config files — it never expires and survives server restarts:
| Scheme | Format | Expires? | Best for |
|---|---|---|---|
| `ApiKey` | `ApiKey <agent_id>:<api_key>` | Never | MCP config files |
| `Bearer` | `Bearer <jwt>` | 24h (default) | Programmatic / short-lived access |
Confirm the server is reachable before adding credentials to your config:
```bash
curl http://localhost:8080/mcp/info
```

```bash
# Default API keys:
#   docker-compose.complete.yml → admin
#   docker-compose.yml (binary-only) → changeme
AKASHI_ADMIN_API_KEY="${AKASHI_ADMIN_API_KEY:-admin}"

# Add globally (all projects on this machine) — never expires
claude mcp add --transport http --scope user akashi http://localhost:8080/mcp \
  --header "Authorization: ApiKey admin:$AKASHI_ADMIN_API_KEY"

# Or scope to just the current project
claude mcp add --transport http --scope project akashi http://localhost:8080/mcp \
  --header "Authorization: ApiKey admin:$AKASHI_ADMIN_API_KEY"
```

Add to your MCP configuration file (`~/.cursor/mcp.json`, `~/.windsurf/mcp.json`, etc.):
```json
{
  "mcpServers": {
    "akashi": {
      "url": "http://localhost:8080/mcp",
      "headers": {
        "Authorization": "ApiKey admin:<your-api-key>"
      }
    }
  }
}
```

Replace `admin` with your agent ID and `<your-api-key>` with your `AKASHI_ADMIN_API_KEY` value.
Using JWT tokens instead
JWTs work fine for MCP config files if you configure a long enough expiration. The default is 24 hours — set `AKASHI_JWT_EXPIRATION=8760h` in your `.env` for one-year tokens that won't need refreshing.
Requirements for long-lived JWTs:
- Persistent signing keys — without them, every server restart invalidates all tokens regardless of expiration. Generate them once:
```bash
mkdir -p data
openssl genpkey -algorithm ed25519 -out data/jwt_private.pem
openssl pkey -in data/jwt_private.pem -pubout -out data/jwt_public.pem
chmod 600 data/jwt_private.pem data/jwt_public.pem
```

Then set in `.env`:

```bash
AKASHI_JWT_PRIVATE_KEY=/data/jwt_private.pem
AKASHI_JWT_PUBLIC_KEY=/data/jwt_public.pem
AKASHI_JWT_EXPIRATION=8760h
```
- Issue a token:
```bash
AKASHI_ADMIN_API_KEY="${AKASHI_ADMIN_API_KEY:-admin}"
TOKEN=$(curl -s -X POST http://localhost:8080/auth/token \
  -H 'Content-Type: application/json' \
  -d "{\"agent_id\":\"admin\",\"api_key\":\"$AKASHI_ADMIN_API_KEY\"}" \
  | python3 -c "import sys,json; print(json.load(sys.stdin)['data']['token'])")
```

- Wire Claude Code:

```bash
claude mcp add --transport http --scope user akashi http://localhost:8080/mcp \
  --header "Authorization: Bearer $TOKEN"
```

The token is valid for one year from issuance. If you restart the server with the same key files, existing tokens remain valid.
Simpler alternative: `ApiKey admin:<your-api-key>` never expires and requires no token issuance step. See the examples above.
| Tool | Purpose |
|---|---|
| `akashi_check` | Look for precedents before making a decision (optional type filter; semantic query) |
| `akashi_trace` | Record a decision with reasoning and confidence |
| `akashi_assess` | Record whether a past decision turned out to be correct |
| `akashi_query` | Find decisions: structured filters (type, agent, confidence) or semantic search via the `query` param |
| `akashi_conflicts` | List and filter open conflicts between agents |
| `akashi_stats` | Aggregate health metrics for the decision trail |
Three prompts guide the workflow: `agent-setup` (system prompt with the check-before/record-after pattern), `before-decision` (precedent lookup guidance), and `after-decision` (recording reminder).
A planner agent decides to use microservices for a payment system and records it via `akashi_trace`. Later, a coder agent is about to choose a monolith for the same system. It calls `akashi_check` first and discovers the planner already made a conflicting decision with different reasoning. The coder sees the conflict, reviews the planner's evidence, and either aligns or records a competing decision with its own rationale. Either way, the disagreement is visible and auditable.
| Interface | Endpoint | Audience |
|---|---|---|
| HTTP API | `/v1/...` | Programmatic integrators, SDKs, CI/CD |
| MCP server | `/mcp` | AI agents in Claude, Cursor, Windsurf |
| Audit dashboard | `/` | Human reviewers, auditors, operators |
All three share the same storage, auth, and embedding provider.
Every decision trace records:
- The decision — what was chosen and the agent's confidence level
- Reasoning — step-by-step logic explaining why
- Rejected alternatives — what else was considered, with scores and rejection reasons
- Supporting evidence — what information backed the decision, with provenance
- Temporal context — when it was made and when it was valid (bi-temporal model)
- Integrity proof — SHA-256 content hash and Merkle-tree batch verification
- Conflicts — when two agents disagree on the same question
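The integrity proof uses two standard constructions: a SHA-256 hash over the decision's canonical content, and a Merkle root over a batch of such hashes. The sketch below illustrates the general technique; Akashi's exact canonicalization and tree layout are internal details:

```python
import hashlib
import json

def content_hash(decision: dict) -> str:
    """SHA-256 over a canonical JSON encoding (sorted keys, no whitespace).
    The canonicalization scheme here is an assumption for illustration."""
    canonical = json.dumps(decision, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

def merkle_root(hashes: list) -> str:
    """Pairwise-hash leaves upward; duplicate the last leaf on odd levels."""
    level = [bytes.fromhex(h) for h in hashes]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0].hex()

batch = [content_hash({"outcome": "use microservices", "confidence": 0.85}),
         content_hash({"outcome": "use a monolith", "confidence": 0.65})]
root = merkle_root(batch)  # any tampered decision changes the root
```

Storing only the per-batch root lets an auditor verify a single decision with a logarithmic number of sibling hashes instead of re-hashing the whole batch.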
| Language | Path | Install |
|---|---|---|
| Go | `sdk/go/` | `go get github.com/ashita-ai/akashi/sdk/go/akashi` |
| Python | `sdk/python/` | `pip install git+https://github.com/ashita-ai/akashi.git#subdirectory=sdk/python` |
| TypeScript | `sdk/typescript/` | `npm install github:ashita-ai/akashi#path:sdk/typescript` |
All SDKs provide `Check`, `Trace`, `Query`, `Search`, and `Recent`. Auth token management is automatic.
```mermaid
flowchart TD
    C1["Audit Dashboard"] -->|"/"| AUTH
    C2["MCP Clients<br/>Claude, Cursor, Windsurf"] -->|"/mcp"| AUTH
    C3["HTTP Clients<br/>SDKs, CI/CD, scripts"] -->|"/v1/"| AUTH
    AUTH["Auth + Middleware<br/>Ed25519 JWT, RBAC, tracing"]
    AUTH --> TRACE["Trace Buffer<br/>in-memory batch + COPY flush"]
    AUTH --> QUERY["Query Engine<br/>SQL filters + bi-temporal"]
    AUTH --> SEARCH["Semantic Search<br/>Qdrant / pgvector fallback"]
    AUTH --> CONFLICT["Conflict Detection<br/>semantic scoring"]
    TRACE --> PG
    QUERY --> PG
    SEARCH --> PG
    CONFLICT --> PG
    PG[("PostgreSQL 18<br/>pgvector + TimescaleDB")]
    style C1 fill:#f0f4ff,stroke:#4a6fa5
    style C2 fill:#f0f4ff,stroke:#4a6fa5
    style C3 fill:#f0f4ff,stroke:#4a6fa5
    style AUTH fill:#fff3e0,stroke:#e65100
    style TRACE fill:#f9f9f9,stroke:#999
    style QUERY fill:#f9f9f9,stroke:#999
    style SEARCH fill:#f9f9f9,stroke:#999
    style CONFLICT fill:#f9f9f9,stroke:#999
    style PG fill:#e8f5e9,stroke:#2e7d32
```
| Document | Description |
|---|---|
| Self-Hosting Guide | Step-by-step deployment: Postgres-only through full stack with Qdrant and Ollama |
| Configuration | All environment variables with defaults and descriptions |
| Conflict Detection | How conflicts are found, scored, validated, and resolved |
| GDPR Erasure | Tombstone erasure for right-to-be-forgotten compliance |
| Quality Scoring | Completeness scores, outcome scores, and anti-gaming measures |
| IDE Hooks | Claude Code and Cursor integration via hook endpoints |
| Subsystems | Embedding provider, rate limiting, and Qdrant search pipeline internals |
| Runbook | Production operations: health checks, monitoring, troubleshooting |
| Diagrams | Mermaid diagrams of write path, read path, auth flow, schema |
| ADRs | Architecture decision records (10 technical decisions) |
| OpenAPI Spec | Full API specification (also served at `GET /openapi.yaml`) |
```bash
# Without UI
make build
DATABASE_URL=postgres://... AKASHI_ADMIN_API_KEY=admin ./bin/akashi

# With the embedded audit dashboard
make build-with-ui
DATABASE_URL=postgres://... AKASHI_ADMIN_API_KEY=admin ./bin/akashi
# Open http://localhost:8080
```

The binary requires a PostgreSQL 18 database with the pgvector and timescaledb extensions pre-installed (see `docker/init.sql`). Qdrant and an embedding provider are optional — the server starts without them and falls back to text search.
For a local database during development:
```bash
# Start just the database and Qdrant (no Akashi binary — run that from source)
docker run -d --name akashi-pg \
  -e POSTGRES_USER=akashi -e POSTGRES_PASSWORD=akashi -e POSTGRES_DB=akashi \
  -v "$(pwd)/docker/init.sql:/docker-entrypoint-initdb.d/01-init.sql:ro" \
  -p 5432:5432 timescale/timescaledb-ha:pg18

docker run -d --name akashi-qdrant -p 6333:6333 qdrant/qdrant:v1.13.6

DATABASE_URL=postgres://akashi:akashi@localhost:5432/akashi?sslmode=disable \
QDRANT_URL=http://localhost:6333 \
AKASHI_ADMIN_API_KEY=admin \
./bin/akashi
```

Tests use testcontainers-go for real TimescaleDB + pgvector instances. No mocks for the storage layer.
```bash
make test              # Full suite (requires Docker)
go test -race ./...    # Go tests with race detection
```

- Go 1.26+
- Docker (for testcontainers and local stack)
Apache 2.0. See LICENSE.