# Copilotz
The full-stack framework for AI applications.
LLM wrappers give you chat. Copilotz gives you everything else: persistent memory, RAG, tool calling, background jobs, and multi-tenancy, all in one framework.
Build AI apps, not AI infrastructure.
Building AI features today feels like building websites in 2005.
You start with an LLM wrapper. Then you need memory, so you add Redis. Then RAG, so you add a vector database. Then your tool generates an image, and now you need asset storage and a way to pass it back to the LLM. Then background jobs, multi-tenancy, tool calling, media handling, observability... Before you know it, you're maintaining infrastructure instead of building your product.
There's no Rails for AI. No Next.js. Just parts.
Copilotz is the full-stack framework for AI applications. Everything you need to ship production AI, in one package:
| What You Need | What Copilotz Gives You |
|---|---|
| Memory | Knowledge graph that remembers users, conversations, and entities |
| RAG | Document ingestion, chunking, embeddings, and semantic search |
| Tools | 24 native tools + OpenAPI integration + MCP support |
| Assets | Automatic extraction, storage, and LLM resolution of images and files |
| Background Jobs | Event queue with persistent workers and custom processors |
| Multi-tenancy | Schema isolation + namespace partitioning |
| Database | PostgreSQL (production) or PGLite (development/embedded) |
| Streaming | Real-time token streaming with async iterables |
One framework. One dependency. Production-ready.
```sh
deno add jsr:@copilotz/copilotz
```

Try Copilotz instantly with an interactive chat:

```ts
import { createCopilotz } from "@copilotz/copilotz";

const copilotz = await createCopilotz({
  agents: [{
    id: "assistant",
    name: "Assistant",
    role: "assistant",
    instructions: "You are a helpful assistant. Remember what users tell you.",
    llmOptions: { provider: "openai", model: "gpt-4o-mini" },
  }],
  dbConfig: { url: ":memory:" },
});

// Start an interactive REPL that streams responses to stdout
copilotz.start({ banner: "🤖 Chat with your AI! Type 'quit' to exit.\n" });
```

Run it:

```sh
OPENAI_API_KEY=your-key deno run --allow-net --allow-env chat.ts
```
For applications, use `run()` for full control:
```ts
import { createCopilotz } from "@copilotz/copilotz";

const copilotz = await createCopilotz({
  agents: [{
    id: "assistant",
    name: "Assistant",
    role: "assistant",
    instructions: "You are a helpful assistant with a great memory.",
    llmOptions: { provider: "openai", model: "gpt-4o-mini" },
  }],
  dbConfig: { url: ":memory:" },
});

// First conversation
const result = await copilotz.run({
  content: "Hi! I'm Alex and I love hiking in the mountains.",
  sender: { type: "user", name: "Alex" },
});
await result.done;

// Later... your AI remembers
const result2 = await copilotz.run({
  content: "What do you know about me?",
  sender: { type: "user", name: "Alex" },
});
await result2.done;
// → "You're Alex, and you love hiking in the mountains!"

await copilotz.shutdown();
```

Most AI frameworks give you chat history. Copilotz gives you a knowledge graph: users, conversations, documents, and entities, all connected. Your AI doesn't just remember what was said; it understands relationships.
```ts
// Entities are extracted automatically
await copilotz.run({ content: "I work at Acme Corp as a senior engineer" });

// Later, your AI knows:
// - User: Alex
// - Organization: Acme Corp
// - Role: Senior Engineer
// - Relationship: Alex works at Acme Corp
```

24 built-in tools for file operations, HTTP requests, RAG, agent memory, and more. Plus automatic tool generation from OpenAPI specs and MCP servers.
```ts
const copilotz = await createCopilotz({
  agents: [{
    // ...
    allowedTools: ["read_file", "write_file", "http_request", "search_knowledge"],
  }],
  apis: [{
    id: "github",
    openApiSchema: myOpenApiSchema, // Object or JSON/YAML string
    auth: { type: "bearer", token: process.env.GITHUB_TOKEN },
  }],
});
```

Schema-level isolation for hard boundaries. Namespace-level isolation for logical partitioning. Your SaaS is ready for customers on day one.
```ts
// Each customer gets complete isolation
await copilotz.run(message, onEvent, {
  schema: "tenant_acme",      // PostgreSQL schema
  namespace: "workspace:123", // Logical partition
});
```

When your tool generates an image or fetches a file, what happens next? With most frameworks, you're on your own. Copilotz automatically extracts assets from tool outputs, stores them, and resolves them for vision-capable LLMs.
```ts
// Your tool just returns base64 data
const generateChart = {
  id: "generate_chart",
  execute: async ({ data }) => ({
    mimeType: "image/png",
    dataBase64: await createChart(data),
  }),
};

// Copilotz automatically:
// 1. Detects the asset in the tool output
// 2. Stores it (filesystem, S3, or memory)
// 3. Replaces it with an asset:// reference
// 4. Resolves it to a data URL for the next LLM call
// 5. Emits an ASSET_CREATED event for your hooks
```

Event-driven architecture with persistent queues. Background workers for heavy processing. Custom processors for your business logic. This is infrastructure you'd build anyway, already built.
```ts
// Events are persisted and recoverable
// Background jobs process RAG ingestion and entity extraction
// Custom processors extend the pipeline
const copilotz = await createCopilotz({
  // ...
  processors: [{
    eventType: "NEW_MESSAGE",
    shouldProcess: (event) => event.payload.needsApproval,
    process: async (event, deps) => {
      // Your custom logic here
      return { producedEvents: [] };
    },
  }],
});
```

Multi-agent orchestration with persistent targets, @mentions, loop prevention, and inter-agent communication. Agents can remember learnings across conversations with persistent memory.
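The orchestration above can be sketched as a configuration fragment. This is a hypothetical two-agent setup using the same `agents` shape shown earlier; the agent names, instructions, and the idea of routing work via an `@researcher` mention are illustrative, not a documented pattern:

```typescript
// Hypothetical two-agent setup; names and @mention routing are illustrative
const copilotz = await createCopilotz({
  agents: [
    {
      id: "router",
      name: "Router",
      role: "assistant",
      instructions:
        "Triage user requests. Mention @researcher when a question needs sourced facts.",
      llmOptions: { provider: "openai", model: "gpt-4o-mini" },
    },
    {
      id: "researcher",
      name: "Researcher",
      role: "assistant",
      instructions: "Answer questions with sourced facts when mentioned.",
      llmOptions: { provider: "openai", model: "gpt-4o-mini" },
    },
  ],
  dbConfig: { url: ":memory:" },
});
```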
**Collections** – Type-safe data storage on top of the knowledge graph with JSON Schema validation.
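As a sketch, the type-safe storage might be declared alongside agents. The `collections` option name and its schema shape here are assumptions for illustration, not verified API; consult the Collections docs for the real interface:

```typescript
// Hypothetical collection definition; option name and shape are assumptions
const copilotz = await createCopilotz({
  // ...
  collections: [{
    id: "orders",
    schema: {
      type: "object",
      properties: {
        sku: { type: "string" },
        quantity: { type: "integer", minimum: 1 },
      },
      required: ["sku", "quantity"],
    },
  }],
});
```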
**RAG** – Document ingestion → chunking → embeddings → semantic search. Works out of the box.
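A hedged sketch of that flow: the `ingestDocument` call below is hypothetical (the real ingestion entry point may differ), while `search_knowledge` is one of the built-in tools listed earlier:

```typescript
// Hypothetical RAG flow: ingest a document, then ask a question over it
await copilotz.ingestDocument({ path: "./handbook.pdf" }); // hypothetical API

const result = await copilotz.run({
  content: "What does the handbook say about vacation policy?",
  sender: { type: "user", name: "Alex" },
});
await result.done; // the agent can call search_knowledge over the ingested chunks
```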
**Streaming** – Real-time token streaming with callbacks and async iterables.
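One way this might look, based on the `onEvent` callback seen in the multi-tenancy example; the event shape used here (`type`, `delta`) is an assumption, not a documented contract:

```typescript
// Hypothetical token streaming via the onEvent callback; event shape assumed
const encoder = new TextEncoder();
const result = await copilotz.run(
  { content: "Tell me a story", sender: { type: "user", name: "Alex" } },
  (event) => {
    if (event.type === "TOKEN") {
      Deno.stdout.writeSync(encoder.encode(event.delta ?? ""));
    }
  },
);
await result.done;
```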
**Assets** – Automatic extraction and storage of images, files, and media from tool outputs. Seamless resolution for vision LLMs.
**Getting Started**
- Quick Start – Install and run your first agent
- Overview – Architecture and core concepts

**Core Concepts**
- Agents – Multi-agent configuration and communication
- Events – Event-driven processing pipeline
- Tools – Native tools, APIs, and MCP integration

**Data Layer**
- Database – PostgreSQL, PGLite, and the knowledge graph
- Tables Structure – Database schema reference
- Collections – Type-safe data storage
- RAG – Document ingestion and semantic search

**Advanced**
- Configuration – Full configuration reference
- Assets – File and media storage
- Loaders – Load resources from the filesystem
- API Reference – Complete API documentation
- Deno 2.0+
- PostgreSQL 13+ (production) or PGLite (development/embedded)
- LLM API key (OpenAI, Anthropic, Gemini, Groq, DeepSeek, or Ollama)
MIT – see LICENSE
Stop gluing. Start shipping.