To avoid putting a super_admin relation on every object that can be administered, add a root object to the hierarchy; in this example, platform. Super admin users are granted on platform, and top-level objects get a relation to platform. The admin permission on a resource is then defined as the direct owner of the resource, plus a traversal of the object hierarchy up to the platform super admin.
A Timeline of Model Context Protocol (MCP) Security Breaches
By Sohan Maheshwar, November 25, 2025

AI fundamentally changes the interface, but not the fundamentals of security. Read on to find out why.
\n
It feels like eons ago that the Model Context Protocol (MCP) was introduced (it was only November 2024, lol).
\nIt promised to become the USB-C of AI agents — a universal bridge for connecting LLMs to tools, APIs, documents, emails, codebases, databases and cloud infrastructure. In just months, the ecosystem exploded: dozens of tool servers, open-source integrations, host implementations, and hosted MCP registries began to appear.
\nAs the ecosystem rapidly adopted MCP, it presented the classic challenge of securing any new technology: developers connected powerful, sensitive systems without rigorously applying established security controls and fundamental principles to the new spec. By mid-2025, the vulnerabilities were exposed, confirming that the new AI-native world is governed by the same security principles as traditional software.
\nBelow is the first consolidated timeline tracing the major MCP-related breaches and security failures - what happened, what data was exposed, why it happened, and what they reveal about the new threat surface LLMs bring into organisations.
\nWhat happened: Invariant Labs demonstrated that a malicious MCP server could silently exfiltrate a user’s entire WhatsApp history by combining “tool poisoning” with a legitimate whatsapp-mcp server in the same agent. A “random fact of the day” tool morphed into a sleeper backdoor that rewrote how WhatsApp messages are sent. Invariant Labs Link
Data at risk & why: Once the agent read the poisoned tool description, it happily followed hidden instructions to send hundreds or thousands of past WhatsApp messages (personal chats, business deals, customer data) to an attacker-controlled phone number – all disguised as ordinary outbound messages, bypassing typical Data Loss Prevention (DLP) tooling.
\nWhat happened: Invariant Labs uncovered a prompt-injection attack against the official GitHub MCP server: a malicious public GitHub issue could hijack an AI assistant and make it pull data from private repos, then leak that data back to a public repo. Invariant Labs link
\nData breached & why: With a single over-privileged Personal Access Token wired into the MCP server, the compromised agent exfiltrated private repository contents, internal project details, and even personal financial/salary information into a public pull request. The root cause was broad PAT scopes combined with untrusted content (issues) in the LLM context, letting a prompt-injected agent abuse legitimate MCP tool calls.
\nWhat happened: Asana discovered a bug in its MCP-server feature that could allow data belonging to one organisation to be seen by other organisations using their system. Upguard link.
\nData breached & why: Projects, teams, tasks and other Asana objects belonging to one customer potentially accessible by a different customer. This was caused by a logic flaw in the access control of their MCP-enabled integration (cross-tenant access not properly isolated).
\nWhat happened: Researchers found that Anthropic’s MCP Inspector developer tool allowed unauthenticated remote code execution via its inspector–proxy architecture. An attacker could get arbitrary commands run on a dev machine just by having the victim inspect a malicious MCP server, or even by driving the inspector from a browser. CVE Link
\nData at risk & why: Because the inspector ran with the user’s privileges and lacked authentication while listening on localhost / 0.0.0.0, a successful exploit could expose the entire filesystem, API keys, and environment secrets on the developer workstation – effectively turning a debugging tool into a remote shell. VSec Medium Link
\nWhat happened: JFrog disclosed CVE-2025-6514, a critical OS command-injection bug in mcp-remote, a popular OAuth proxy for connecting local MCP clients to remote servers. Malicious MCP servers could send a booby-trapped authorization_endpoint that mcp-remote passed straight into the system shell, achieving remote code execution on the client machine. CVE Link
Data at risk & why: With over 437,000 downloads and adoption in Cloudflare, Hugging Face, Auth0 and other integration guides, the vuln effectively turned any unpatched install into a supply-chain backdoor: an attacker could execute arbitrary commands, steal API keys, cloud credentials, local files, SSH keys, and Git repo contents, all triggered by pointing your LLM host at a malicious MCP endpoint. Docker Blog
\nWhat happened: Security researchers found two critical flaws in Anthropic’s Filesystem-MCP server: sandbox escape and symlink/containment bypass, enabling arbitrary file access and code execution. Cymulate Link
\nData breached & why: Host filesystem access, meaning sensitive files, credentials, logs, or other data on servers could be impacted. The root cause was poor sandbox implementation and insufficient directory-containment enforcement in the MCP server’s file-tool interface.
\nWhat happened: A malicious MCP server package masquerading as a legitimate “Postmark MCP Server” was found injecting BCC copies of all email communications (including confidential docs) to an attacker’s server. IT Pro
Data breached & why: Emails, internal memos, invoices — essentially all mail traffic processed by that MCP server was exposed. This was due to a supply-chain compromise (a malicious package in the MCP ecosystem) and the fact that MCP servers often run with high-privilege access, which the attacker exploited.
\nWhat happened: While researching Smithery’s hosted MCP server platform, GitGuardian found a path-traversal bug in the smithery.yaml build config. By setting dockerBuildPath: \"..\", attackers could make the registry build Docker images from the builder’s home directory, then exfiltrate its contents and credentials. GitGuardian Blog
Data breached & why: The exploit leaked the builder’s ~/.docker/config.json, including a Fly.io API token that granted control over >3,000 apps, most of them hosted MCP servers. From there, attackers could run arbitrary commands in MCP server containers and tap inbound client traffic that contained API keys and other secrets for downstream services (e.g. Brave API keys), turning the MCP hosting service itself into a high-impact supply-chain compromise.
What happened: A command-injection flaw was discovered in the Figma/Framelink MCP integration: unsanitised user input in shell commands could lead to remote code execution. The Hacker News Link
Data breached & why: Because the integration allowed AI agents to interact with Figma docs, the flaw could enable attackers to run arbitrary commands through the MCP tooling and access design data or infrastructure. The root cause was the unsafe use of child_process.exec with untrusted input in the MCP server code - essentially a lack of input sanitisation. CVE Link
...And we're sure there are more to come. We'll keep this blog updated with the latest security incidents and data breaches in the MCP world.
\nAcross all these breaches, common themes appear:
\n1. Local AI dev tools behave like exposed remote APIs
\nMCP Inspector, mcp-remote, and similar tooling turned into Remote Code Execution (RCE) surfaces simply by trusting localhost connections.
2. Over-privileged API tokens are catastrophic in MCP workflows
\nGitHub MCP, Smithery, and WhatsApp attacks all exploited overly broad token scopes.
\n3. “Tool poisoning” is a new, AI-native supply chain vector
\nTraditional security tools don’t monitor changes to MCP tool descriptions.
\n4. Hosted MCP registries concentrate risk
\nSmithery illustrated what happens when thousands of tenants rely on a single build pipeline.
\n5. Prompt injection becomes a full data breach
\nThe GitHub MCP incident demonstrated how natural language alone can cause exfiltration when MCP calls are available.
\nThe Model Context Protocol (MCP) presents a cutting-edge threat surface, yet the breaches detailed here are rooted in timeless flaws: over-privilege, inadequate input validation, and insufficient isolation.
\nAI fundamentally changes the interface, but not the fundamentals of security. To secure the AI era, we must rigorously apply old-school principles of least privilege and zero-trust to these powerful new software components.
\nAs adoption accelerates, organisations must treat MCP surfaces with the same seriousness as API gateways, CI/CD pipelines, and Cloud IAM.
\nBecause attackers already are.
", + "url": "https://authzed.com/blog/timeline-mcp-breaches", + "title": "A Timeline of Model Context Protocol (MCP) Security Breaches", + "summary": "AI fundamentally changes the interface, but not the fundamentals of security. Here's a timeline of security breaches in MCP Servers from the recent past.", + "image": "https://authzed.com/images/blogs/blog-featured-image.png", + "date_modified": "2025-11-25T18:18:00.000Z", + "date_published": "2025-11-25T18:18:00.000Z", + "author": { + "name": "Sohan Maheshwar", + "url": "https://www.linkedin.com/in/sohanmaheshwar/" + } + }, + { + "id": "https://authzed.com/blog/building-a-multi-tenant-rag-with-fine-grain-authorization-using-motia-and-spicedb", + "content_html": "\n\nLearn how to build a complete retrieval-augmented generation pipeline with multi-tenant authorization using Motia's event-driven framework, OpenAI embeddings, Pinecone vector search, SpiceDB permissions, and natural language querying.
\n
If I were hard-pressed to pick my favourite computer game of all time, I'd go with Stardew Valley (sorry, Dangerous Dave). The stats from my Nintendo Profile are all the proof you need:
\n
Stardew Valley sits at the top with 430 hours played, and in second place is Mario Kart (not pictured) with ~45 hours. That's a significant difference, and it should indicate how much I adore this game.
We've been talking about the importance of Fine-Grained Authorization and RAG recently, so when I sat down to build a sample use case for a production-grade RAG with fine-grained permissions, my immediate thought went to Stardew Valley.

For those not familiar, Stardew Valley is a farm life simulation game where players manage a farm by clearing land, growing seasonal crops, and raising animals. So I thought I could build a logbook for a large farm that one could query in natural language. This use case is ideal for a RAG pipeline (a technique that uses external data to improve the accuracy, relevance, and usefulness of an LLM's output).

I focused on building something as close to production-grade as possible (and perhaps strayed from the original intent of a single farm): an organization owns farms and the data from those farms, the farms contain harvest data, and users can log and query data only for the farms they're part of. This creates a sticky situation for the authorization model: how does an LLM know who has access to what data?
Here's where SpiceDB and ReBAC were vital. By using metadata to indicate where the relevant embeddings came from, the RAG system returned harvest data to the user based only on what data they had access to. In fact, OpenAI uses SpiceDB for fine-grained authorization in ChatGPT Connectors using similar techniques.
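To make that concrete, here is a minimal sketch of the idea rather than this project's actual code: every vector is stored with a farmId in its metadata, and retrieval is scoped to the farms SpiceDB says the user can query. The function name and the Pinecone metadata filter shown here are illustrative assumptions.

// Conceptual sketch: scope retrieval to farms the user is allowed to query.
// `allowedFarmIds` would come from SpiceDB (for example a LookupResources call
// for the 'query' permission on 'farm' objects); Pinecone only sees a filter.
import { Pinecone } from '@pinecone-database/pinecone';

const index = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! }).index('harvest-logbook');

export async function searchAuthorized(queryEmbedding: number[], allowedFarmIds: string[]) {
  if (allowedFarmIds.length === 0) return []; // user can't see any farm: return nothing

  const results = await index.query({
    vector: queryEmbedding,
    topK: 5,
    includeMetadata: true,
    // Every vector was upserted with a farmId in its metadata, so chunks from
    // unauthorized farms never even reach the LLM's context window.
    filter: { farmId: { $in: allowedFarmIds } },
  });

  return results.matches;
}

SpiceDB stays the single source of truth for who can query which farm; the vector store only ever sees an already-filtered request.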
While I know my way around SpiceDB and authorization, I needed help to build out the other components for a production-grade harvest logbook. So I reached out to my friend Rohit Ghumare from Motia for his expertise. Motia.dev is a backend framework that unifies APIs, background jobs, workflows, and AI Agents into a single core primitive with built-in observability and state management.
\nHere's a photo of Rohit and myself at Kubecon Europe in 2025
\n
What follows below is a tutorial-style post on building a Retrieval Augmented Generation system with fine-grained authorization using the Motia framework and SpiceDB. We'll use Pinecone as our vector database, and OpenAI as our LLM.
\nIn this tutorial, you'll create a complete RAG system with authorization that:
\nBy the end of the tutorial, you'll have a complete system that combines semantic search with multi-tenant authorization.
\nBefore starting the tutorial, ensure you have:
\nCreate a new Motia project using the CLI:
\nnpx motia@latest create\n\nThe installer will prompt you:
Choose the Base (TypeScript) template, name the project harvest-logbook-rag, and answer Yes to the remaining prompt. Navigate into your project:
\ncd harvest-logbook-rag\n\nYour initial project structure:
harvest-logbook-rag/
├── src/
│   └── services/
│       └── pet-store/
├── steps/
│   └── petstore/
├── .env
└── package.json

The default template includes a pet store example. We'll replace this with our harvest logbook system. For more on Motia basics, see the Quick Start guide.
\nInstall the SpiceDB client for authorization:
\nnpm install @authzed/authzed-node\n\nThis is the only additional package needed.
\nPinecone will store the vector embeddings for semantic search.
\nClick Create Index
\nConfigure:
Name: harvest-logbook (or your preference)
Dimensions: 1536 (for OpenAI embeddings)
Metric: cosine

Click Create Index
Once the index is ready, note its host (e.g. your-index-abc123.svc.us-east-1.pinecone.io) and your Pinecone API key. Save these for the next step.
\nSpiceDB handles authorization and access control for the system.
\nRun this command to start SpiceDB locally:
docker run -d \
  --name spicedb \
  -p 50051:50051 \
  authzed/spicedb serve \
  --grpc-preshared-key "sometoken"

Check that the container is running:

docker ps | grep spicedb

You should see output similar to:

6316f6cb50b4   authzed/spicedb   "spicedb serve --grp…"   31 seconds ago   Up 31 seconds   0.0.0.0:50051->50051/tcp   spicedb

SpiceDB is now running on localhost:50051 and ready to handle authorization checks.
Create a .env file in the project root:
# OpenAI (Required for embeddings and chat)
OPENAI_API_KEY=sk-proj-xxxxxxxxxxxxx

# Pinecone (Required for vector storage)
PINECONE_API_KEY=pcsk_xxxxxxxxxxxxx
PINECONE_INDEX_HOST=your-index-abc123.svc.us-east-1.pinecone.io

# SpiceDB (Required for authorization)
SPICEDB_ENDPOINT=localhost:50051
SPICEDB_TOKEN=sometoken

# LLM Configuration (OpenAI is default)
USE_OPENAI_CHAT=true

# Logging Configuration (CSV is default)
USE_CSV_LOGGER=true

Replace the placeholder values with your actual credentials from the previous steps.
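For orientation, here is one way the rest of the code can consume these variables: a small module that builds the three clients once. This is only a sketch; it assumes the openai and @pinecone-database/pinecone packages are available in the project (the full implementation on GitHub wires this up its own way), and the module path is made up.

// src/services/harvest-logbook/clients.ts (illustrative module name)
import { v1 } from '@authzed/authzed-node';
import OpenAI from 'openai';
import { Pinecone } from '@pinecone-database/pinecone';

// SpiceDB: preshared key + endpoint from .env; the local container speaks plaintext gRPC,
// so INSECURE_LOCALHOST_ALLOWED is fine here (never in production).
export const spicedb = v1.NewClient(
  process.env.SPICEDB_TOKEN!,
  process.env.SPICEDB_ENDPOINT!,
  v1.ClientSecurity.INSECURE_LOCALHOST_ALLOWED
);

// OpenAI: used for text-embedding-ada-002 embeddings and, optionally, chat completions.
export const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Pinecone: target the index created earlier; depending on SDK version you may need
// to prefix the host from PINECONE_INDEX_HOST with https://.
const pinecone = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! });
export const index = pinecone.index('harvest-logbook', process.env.PINECONE_INDEX_HOST);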
\nSpiceDB needs a schema that defines the authorization model for organizations, farms, and users.
Create src/services/harvest-logbook/spicedb.schema with the authorization model. A SpiceDB schema defines the types of objects found in your application, how those objects can relate to one another, and the permissions that can be computed from those relations.
Here's a snippet of the schema that defines user, organization and farm and the relations and permissions between them.
definition user {}

definition organization {
  relation admin: user
  relation member: user

  permission view = admin + member
  permission edit = admin + member
  permission query = admin + member
  permission manage = admin
}

definition farm {
  relation organization: organization
  relation owner: user
  relation editor: user
  relation viewer: user

  permission view = viewer + editor + owner + organization->view
  permission edit = editor + owner + organization->edit
  permission query = viewer + editor + owner + organization->query
  permission manage = owner + organization->admin
}

View the complete schema on GitHub
The schema establishes that users belong to organizations as admins or members, that each farm belongs to an organization and has its own owners, editors, and viewers, and that the farm permissions (view, edit, query, manage) combine those direct roles with roles inherited from the parent organization.
\nCreate a scripts/ folder and add three files:
scripts/setup-spicedb-schema.ts - Reads the schema file and writes it to SpiceDB (a rough sketch of this script follows after the list)
\nView on GitHub
scripts/verify-spicedb-schema.ts - Verifies the schema was written correctly
\nView on GitHub
scripts/create-sample-permissions.ts - Creates sample users and permissions for testing
\nView on GitHub
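For orientation, here is roughly what that first script, scripts/setup-spicedb-schema.ts, might look like. The real script on GitHub may differ; the promise-style client API (client.promises.writeSchema) is assumed from the @authzed/authzed-node v1 bindings.

// scripts/setup-spicedb-schema.ts (rough sketch, not the exact repo code)
import { readFileSync } from 'node:fs';
import { v1 } from '@authzed/authzed-node';

async function main() {
  // Read the schema file created in the previous step.
  const schema = readFileSync('src/services/harvest-logbook/spicedb.schema', 'utf8');

  // Connect to the local SpiceDB container using the values from .env.
  const client = v1.NewClient(
    process.env.SPICEDB_TOKEN ?? 'sometoken',
    process.env.SPICEDB_ENDPOINT ?? 'localhost:50051',
    v1.ClientSecurity.INSECURE_LOCALHOST_ALLOWED
  );

  // WriteSchema replaces the schema for the whole permission system.
  await client.promises.writeSchema(v1.WriteSchemaRequest.create({ schema }));
  console.log('Schema written to SpiceDB');
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});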
Install tsx so the scripts can run directly, and add matching entries to the scripts section of package.json:

npm install -D tsx

"scripts": {
  "spicedb:setup": "tsx scripts/setup-spicedb-schema.ts",
  "spicedb:verify": "tsx scripts/verify-spicedb-schema.ts",
  "spicedb:sample": "tsx scripts/create-sample-permissions.ts"
}

# Write schema to SpiceDB
npm run spicedb:setup

You should see output confirming the schema was written successfully:
Verify it was written correctly:
\nnpm run spicedb:verify\n\nThis displays the complete authorization schema showing all definitions and permissions:\n
The output shows:
\nCreate sample user (user_alice as owner of farm_1):
\nnpm run spicedb:sample\n\n
This creates user_alice as owner of farm_1, ready for testing.
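Under the hood, that sample script boils down to a single WriteRelationships call along these lines. This is a sketch; the request and enum names assume the current @authzed/authzed-node v1 bindings, and the real script on GitHub is the source of truth.

// Roughly what scripts/create-sample-permissions.ts does: one relationship
// making user_alice the owner of farm_1.
import { v1 } from '@authzed/authzed-node';

const client = v1.NewClient('sometoken', 'localhost:50051', v1.ClientSecurity.INSECURE_LOCALHOST_ALLOWED);

async function main() {
  await client.promises.writeRelationships(
    v1.WriteRelationshipsRequest.create({
      updates: [
        v1.RelationshipUpdate.create({
          // OPERATION_TOUCH means create-or-update; naming follows the generated v1 bindings.
          operation: v1.RelationshipUpdate_Operation.OPERATION_TOUCH,
          relationship: v1.Relationship.create({
            resource: v1.ObjectReference.create({ objectType: 'farm', objectId: 'farm_1' }),
            relation: 'owner',
            subject: v1.SubjectReference.create({
              object: v1.ObjectReference.create({ objectType: 'user', objectId: 'user_alice' }),
            }),
          }),
        }),
      ],
    })
  );
  console.log('user_alice is now owner of farm_1');
}

main().catch(console.error);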
Your authorization system is now ready.
\nStart the Motia development server:
\nnpm run dev\n\nThe server starts at http://localhost:3000. Open this URL in your browser to see the Motia Workbench.
You'll see the default pet store example. We'll replace this with our harvest logbook system in the next sections.
\n
Your development environment is now ready. All services are connected:
\nlocalhost:3000user_alice owns farm_1)Before we start building, let's understand the architecture we're creating.
┌─────────────────────────────────────────────────────────────┐
│                    POST /harvest_logbook                     │
│             (Store harvest data + optional query)            │
└─────────┬─────────────────────────────────────────────────────┘
          │
          ├─→ Authorization Middleware (SpiceDB)
          │     - Check user has 'edit' permission on farm
          │
          ├─→ ReceiveHarvestData Step (API)
          │     - Validate input
          │     - Emit events
          │
          ├─→ ProcessEmbeddings Step (Event)
          │     - Split text into chunks (400 chars, 40 overlap)
          │     - Generate embeddings (OpenAI)
          │     - Store vectors (Pinecone)
          │
          └─→ QueryAgent Step (Event) [if query provided]
                - Retrieve similar content (Pinecone)
                - Generate response (OpenAI/HuggingFace)
                - Emit logging event
                │
                └─→ LogToSheets Step (Event)
                      - Log query & response (CSV/Sheets)

Our system processes harvest data through these stages:
\nThe system uses Motia's event-driven model:
\nEvery API request passes through SpiceDB authorization:
\nWe'll create five main steps:
\nEach component is a single file in the steps/ directory. Motia automatically discovers and connects them based on the events they emit and subscribe to.
In this step, we'll create an API endpoint that receives harvest log data and triggers the processing pipeline. This is the entry point that starts the entire RAG workflow.
\nEvery workflow needs an entry point. In Motia, API steps serve as the gateway between external requests and your event-driven system. By using Motia's api step type, you get automatic HTTP routing, request validation, and event emission, all without writing boilerplate server code. When a farmer calls this endpoint with their harvest data, it validates the input, checks authorization, stores the entry, and emits events that trigger the embedding generation and optional query processing.
Create a new file at steps/harvest-logbook/receive-harvest-data.step.ts.
\n\nThe complete source code for all steps is available on GitHub. You can reference the working implementation at any time.
\n
View the complete Step 1 code on GitHub →
\n
Now let's understand the key parts you'll be implementing:
\nconst bodySchema = z.object({\n content: z.string().min(1, 'Content cannot be empty'),\n farmId: z.string().min(1, 'Farm ID is required for authorization'),\n metadata: z.record(z.any()).optional(),\n query: z.string().optional()\n});\n\nZod validates that requests include the harvest content and farm ID. The query field is optional - if provided, the system will also answer a natural language question about the data after storing it.
export const config: ApiRouteConfig = {\n type: 'api',\n name: 'ReceiveHarvestData',\n path: '/harvest_logbook',\n method: 'POST',\n middleware: [errorHandlerMiddleware, harvestEntryEditMiddleware],\n emits: ['process-embeddings', 'query-agent'],\n bodySchema\n};\n\ntype: 'api' makes this an HTTP endpointmiddleware runs authorization checks before the handleremits declares this step triggers embedding processing and optional query eventsmiddleware: [errorHandlerMiddleware, harvestEntryEditMiddleware]\n\nThe harvestEntryEditMiddleware checks SpiceDB to ensure the user has edit permission on the specified farm. If authorization fails, the request is rejected before reaching the handler. Authorization info is added to the request for use in the handler.
View authorization middleware →
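The exact Motia middleware wiring lives in that linked file, but its core is a single CheckPermission call against SpiceDB. Here is a hedged sketch of the helper such a middleware could wrap; the function name is assumed, and the client API follows the @authzed/authzed-node v1 bindings.

// A helper the authorization middleware can call before the handler runs.
import { v1 } from '@authzed/authzed-node';

const spicedb = v1.NewClient(
  process.env.SPICEDB_TOKEN ?? 'sometoken',
  process.env.SPICEDB_ENDPOINT ?? 'localhost:50051',
  v1.ClientSecurity.INSECURE_LOCALHOST_ALLOWED
);

// Returns true when `userId` holds `permission` (e.g. 'edit' or 'query') on `farmId`.
export async function userHasFarmPermission(
  userId: string,
  farmId: string,
  permission: 'view' | 'edit' | 'query' | 'manage'
): Promise<boolean> {
  const resp = await spicedb.promises.checkPermission(
    v1.CheckPermissionRequest.create({
      resource: v1.ObjectReference.create({ objectType: 'farm', objectId: farmId }),
      permission,
      subject: v1.SubjectReference.create({
        object: v1.ObjectReference.create({ objectType: 'user', objectId: userId }),
      }),
    })
  );
  return resp.permissionship === v1.CheckPermissionResponse_Permissionship.PERMISSIONSHIP_HAS_PERMISSION;
}

// The middleware reads the x-user-id header and the farmId from the request body,
// then rejects the request with a 403 when this helper returns false.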
\nexport const handler: Handlers['ReceiveHarvestData'] = async (req, { emit, logger, state }) => {\n const { content, farmId, metadata, query } = bodySchema.parse(req.body);\n const entryId = `harvest-${Date.now()}`;\n \n // Store entry data in state\n await state.set('harvest-entries', entryId, {\n content, farmId, metadata, timestamp: new Date().toISOString()\n });\n \n // Emit event to process embeddings\n await emit({\n topic: 'process-embeddings',\n data: { entryId, content, metadata }\n });\n};\n\nThe handler generates a unique entry ID, stores the data in Motia's state management, and emits an event to trigger embedding processing. If a query was provided, it also emits a query-agent event.
await emit({\n topic: 'process-embeddings',\n data: { entryId, content, metadata: { ...metadata, farmId, userId } }\n});\n\nif (query) {\n await emit({\n topic: 'query-agent',\n data: { entryId, query }\n });\n}\n\nEvents are how Motia steps communicate. The process-embeddings event triggers the next step to chunk the text and generate embeddings. If a query was provided, the query-agent event runs in parallel to answer the question using RAG.
This keeps the API response fast as it returns immediately while processing happens in the background.
\nOpen the Motia Workbench and test this endpoint:
\nharvest-logbook flowPOST /harvest_logbook in the sidebar {\n \"x-user-id\": \"user_alice\"\n }\n\n {\n \"content\": \"Harvested 500kg of tomatoes from field A. Weather was sunny.\",\n \"farmId\": \"farm_1\",\n \"metadata\": {\n \"field\": \"A\",\n \"crop\": \"tomatoes\"\n }\n }\n\nYou should see a success response with the entry ID. The Workbench will show the workflow executing in real-time, with events flowing to the next steps.
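If you prefer testing from a script instead of the Workbench, a plain fetch call with the same header and body should behave identically once npm run dev is running. This is a sketch that assumes the default port 3000 used above.

// Quick test outside the Workbench; user_alice was created in the setup step.
async function main() {
  const res = await fetch('http://localhost:3000/harvest_logbook', {
    method: 'POST',
    headers: {
      'content-type': 'application/json',
      'x-user-id': 'user_alice', // the middleware reads the caller identity from this header
    },
    body: JSON.stringify({
      content: 'Harvested 500kg of tomatoes from field A. Weather was sunny.',
      farmId: 'farm_1',
      metadata: { field: 'A', crop: 'tomatoes' },
    }),
  });
  console.log(res.status, await res.json());
}

main().catch(console.error);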
\nThis event handler takes the harvest data from Step 1, splits it into chunks, generates vector embeddings, and stores them in Pinecone for semantic search.
\nRAG systems need to break down large text into smaller chunks for better retrieval accuracy. By chunking text with overlap and generating embeddings for each piece, we enable semantic search that finds relevant context even when queries don't match exact keywords.
\nThis step runs in the background after the API returns, keeping the user experience fast while handling the background work of embedding generation and vector storage.
\nCreate a new file at steps/harvest-logbook/process-embeddings.step.ts.
View the complete Step 2 code on GitHub →
\nNow let's understand the key parts you'll be implementing:
\nconst inputSchema = z.object({\n entryId: z.string(),\n content: z.string(),\n metadata: z.record(z.any()).optional()\n});\n\nThis step receives the entry ID, content, and metadata from the previous step's event emission.
\nexport const config: EventConfig = {\n type: 'event',\n name: 'ProcessEmbeddings',\n subscribes: ['process-embeddings'],\n emits: [],\n input: inputSchema\n};\n\ntype: 'event' makes this a background event handlersubscribes: ['process-embeddings'] listens for events from Step 1const vectorIds = await HarvestLogbookService.storeEntry({\n id: entryId,\n content,\n metadata,\n timestamp: new Date().toISOString()\n});\n\nThe service handles text splitting (400 character chunks with 40 character overlap), embedding generation via OpenAI, and storage in Pinecone. This chunking strategy ensures semantic continuity across chunks.
\n\nThe OpenAI service generates 1536-dimension embeddings for each text chunk using the text-embedding-ada-002 model.
await state.set('harvest-vectors', entryId, {\n vectorIds,\n processedAt: new Date().toISOString(),\n chunkCount: vectorIds.length\n});\n\nAfter storing vectors in Pinecone, the step updates Motia's state with the vector IDs for tracking. Each chunk gets a unique ID like harvest-123-chunk-0, harvest-123-chunk-1, etc.
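Compressed into one place, the storeEntry path looks roughly like the sketch below. The real HarvestLogbookService on GitHub is more structured; the chunking helper and the flat metadata shape here are illustrative assumptions.

import OpenAI from 'openai';
import { Pinecone } from '@pinecone-database/pinecone';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const index = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! }).index('harvest-logbook');

// Split text into ~400-character chunks with a 40-character overlap.
function chunkText(text: string, size = 400, overlap = 40): string[] {
  const chunks: string[] = [];
  for (let start = 0; start < text.length; start += size - overlap) {
    chunks.push(text.slice(start, start + size));
  }
  return chunks;
}

async function storeEntry(entryId: string, content: string, metadata: Record<string, string>) {
  const chunks = chunkText(content);

  // One 1536-dimension embedding per chunk.
  const embeddings = await openai.embeddings.create({
    model: 'text-embedding-ada-002',
    input: chunks,
  });

  // Upsert with the <entryId>-chunk-<n> ID scheme; Pinecone metadata values must stay flat
  // (strings/numbers/booleans), and farmId in the metadata is what authorization filters on.
  await index.upsert(
    chunks.map((text, i) => ({
      id: `${entryId}-chunk-${i}`,
      values: embeddings.data[i].embedding,
      metadata: { ...metadata, text },
    }))
  );

  return chunks.map((_, i) => `${entryId}-chunk-${i}`);
}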
The embeddings are now stored and ready for semantic search when users query the system.
\nStep 2 runs automatically when Step 1 emits the process-embeddings event. To test it:
Send a request to the POST /harvest_logbook endpoint (from Step 1)
In the Workbench, watch the workflow visualization
\nYou'll see the ProcessEmbeddings step activate automatically
Check the Logs tab at the bottom to see:
\nThe step completes when you see \"Successfully stored embeddings\" in the logs. The vectors are now in Pinecone and ready for semantic search.
This event handler performs the RAG query: it searches Pinecone for relevant content, retrieves matching chunks, and uses an LLM to generate natural language responses based on the retrieved context.
\nThis is where retrieval-augmented generation happens. Instead of the LLM generating responses from its training data alone, it uses actual harvest data from Pinecone as context. This ensures accurate, source-backed answers specific to the user's farm data.
\nThe step supports both OpenAI and HuggingFace LLMs, giving you flexibility in choosing your AI provider based on cost and performance needs.
\nCreate a new file at steps/harvest-logbook/query-agent.step.ts.
View the complete Step 3 code on GitHub →
\nNow let's understand the key parts you'll be implementing:
\nconst inputSchema = z.object({\n entryId: z.string(),\n query: z.string(),\n conversationHistory: z.array(z.object({\n role: z.enum(['user', 'assistant', 'system']),\n content: z.string()\n })).optional()\n});\n\nThe step receives the query text and optional conversation history for multi-turn conversations.
\nexport const config: EventConfig = {\n type: 'event',\n name: 'QueryAgent',\n subscribes: ['query-agent'],\n emits: ['log-to-sheets'],\n input: inputSchema\n};\n\nsubscribes: ['query-agent'] listens for query events from Step 1emits: ['log-to-sheets'] triggers logging after generating responseconst agentResponse = await HarvestLogbookService.queryWithAgent({\n query,\n conversationHistory\n});\n\nThe service orchestrates the RAG pipeline: embedding the query, searching Pinecone for similar vectors, extracting context from top matches, and generating a response using the LLM.
\nView RAG orchestration service →
\nThe query is embedded using OpenAI and searched against Pinecone to find the top 5 most similar chunks. Each result includes a similarity score and the original text.
\nView Pinecone query implementation →
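The retrieval half can be sketched like this, assuming the standard openai and @pinecone-database/pinecone clients; the linked implementation remains the source of truth.

// Embed the question, then pull the 5 closest chunks with their scores and stored text.
import OpenAI from 'openai';
import { Pinecone } from '@pinecone-database/pinecone';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const index = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! }).index('harvest-logbook');

export async function retrieveContext(query: string) {
  const { data } = await openai.embeddings.create({
    model: 'text-embedding-ada-002',
    input: query,
  });

  const results = await index.query({
    vector: data[0].embedding,
    topK: 5,
    includeMetadata: true,
  });

  // Each match carries a similarity score plus the original chunk text stored as metadata.
  return results.matches.map((m) => ({
    id: m.id,
    score: m.score,
    text: String(m.metadata?.text ?? ''),
  }));
}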
\nawait state.set('agent-responses', entryId, {\n query,\n response: agentResponse.response,\n sources: agentResponse.sources,\n timestamp: agentResponse.timestamp\n});\n\nThe LLM generates a response using the retrieved context. The system supports both OpenAI (default) and HuggingFace, controlled by the USE_OPENAI_CHAT environment variable. The response includes source citations showing which harvest entries informed the answer.
View OpenAI chat service →
\nView HuggingFace service →
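For the OpenAI path (USE_OPENAI_CHAT=true), the generation step boils down to stuffing the retrieved chunks into the prompt, roughly as below. The model name and prompt wording are illustrative, not taken from the repo.

import OpenAI from 'openai';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

export async function answerFromContext(
  query: string,
  chunks: { id: string; text: string }[]
) {
  // Number the chunks so the answer can point back at its sources.
  const context = chunks.map((c, i) => `[${i + 1}] ${c.text}`).join('\n\n');

  const completion = await openai.chat.completions.create({
    model: 'gpt-4o-mini', // illustrative model choice
    messages: [
      {
        role: 'system',
        content:
          'You answer questions about farm harvest logs. Use only the provided context; if the answer is not in the context, say you do not know.',
      },
      { role: 'user', content: `Context:\n${context}\n\nQuestion: ${query}` },
    ],
  });

  return {
    response: completion.choices[0].message.content ?? '',
    sources: chunks.map((c) => c.id), // which harvest entries informed the answer
  };
}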
await emit({\n topic: 'log-to-sheets',\n data: {\n entryId,\n query,\n response: agentResponse.response,\n sources: agentResponse.sources\n }\n});\n\nAfter generating the response, the step emits a logging event to create an audit trail of all queries and responses.
\nStep 3 runs automatically when you include a query field in the Step 1 request. To test it:
POST /harvest_logbook with a query: {\n \"content\": \"Harvested 500kg of tomatoes from field A. Weather was sunny.\",\n \"farmId\": \"farm_1\",\n \"query\": \"What crops did we harvest?\"\n }\n\nIn the Workbench, watch the QueryAgent step activate
Check the Logs tab to see:
\nThe step completes when you see the AI-generated response in the logs. The query and response are automatically logged by Step 5.
\nThis API endpoint allows users to query their existing harvest data without storing new entries. It's a separate endpoint dedicated purely to RAG queries.
\nWhile Step 1 handles both storing and optionally querying data, users often need to just ask questions about their existing harvest logs. This dedicated endpoint keeps the API clean and focused - one endpoint for data entry, another for pure queries.
\nThis separation also makes it easier to apply different rate limits or permissions between data modification and read-only operations.
\nCreate a new file at steps/harvest-logbook/query-only.step.ts.
View the complete Step 4 code on GitHub →
\nNow let's understand the key parts you'll be implementing:
\nconst bodySchema = z.object({\n query: z.string().min(1, 'Query cannot be empty'),\n farmId: z.string().min(1, 'Farm ID is required for authorization'),\n conversationHistory: z.array(z.object({\n role: z.enum(['user', 'assistant', 'system']),\n content: z.string()\n })).optional()\n});\n\nThe request requires a query and farm ID. Conversation history is optional for multi-turn conversations.
\nexport const config: ApiRouteConfig = {\n type: 'api',\n name: 'QueryHarvestLogbook',\n path: '/harvest_logbook/query',\n method: 'POST',\n middleware: [errorHandlerMiddleware, harvestQueryMiddleware],\n emits: ['query-agent']\n};\n\npath: '/harvest_logbook/query' creates a dedicated query endpointharvestQueryMiddleware checks for query permission (not edit)emits: ['query-agent'] triggers the same RAG query handler as Step 3middleware: [errorHandlerMiddleware, harvestQueryMiddleware]\n\nThe harvestQueryMiddleware checks SpiceDB for query permission. This is less restrictive than edit - viewers can query but cannot modify data.
View authorization middleware →
\nexport const handler: Handlers['QueryHarvestLogbook'] = async (req, { emit, logger }) => {\n const { query, farmId } = bodySchema.parse(req.body);\n const queryId = `query-${Date.now()}`;\n \n await emit({\n topic: 'query-agent',\n data: { entryId: queryId, query }\n });\n \n return {\n status: 200,\n body: { success: true, queryId }\n };\n};\n\nThe handler generates a unique query ID and emits the same query-agent event used in Step 1. This reuses the RAG pipeline from Step 3 without duplicating code.
The API returns immediately with the query ID. The actual processing happens in the background, and results are logged by Step 5.
\nThis is the dedicated query endpoint. Test it directly:
\nPOST /harvest_logbook/query in the Workbench {\n \"x-user-id\": \"user_alice\"\n }\n\n {\n \"query\": \"What crops did we harvest?\",\n \"farmId\": \"farm_1\"\n }\n\nYou'll see a 200 OK response with the query ID. In the Logs tab, watch for:
QueryHarvestLogbook - Authorization and query receivedQueryAgent - Querying AI agentQueryAgent - Agent query completedThe query runs in the background and results are logged by Step 5. This endpoint is perfect for read-only query operations without storing new data.
\nThis event handler creates an audit trail by logging every query and its AI-generated response. It supports both local CSV files (for development) and Google Sheets (for production).
\nAudit logs are essential for understanding how users interact with your system. They help with debugging, monitoring usage patterns, and maintaining compliance. By logging queries and responses, you can track what questions users ask, identify common patterns, and improve the system over time.
\nThe dual logging strategy (CSV/Google Sheets) gives you flexibility, use CSV locally for quick testing, then switch to Google Sheets for production without changing code.
\nCreate a new file at steps/harvest-logbook/log-to-sheets.step.ts.
View the complete Step 5 code on GitHub →
\nNow let's understand the key parts you'll be implementing:
\nconst inputSchema = z.object({\n entryId: z.string(),\n query: z.string(),\n response: z.string(),\n sources: z.array(z.string()).optional()\n});\n\nThe step receives the query, AI response, and optional source citations from Step 3.
\nexport const config: EventConfig = {\n type: 'event',\n name: 'LogToSheets',\n subscribes: ['log-to-sheets'],\n emits: [],\n input: inputSchema\n};\n\nsubscribes: ['log-to-sheets'] listens for logging events from Step 3const useCSV = process.env.USE_CSV_LOGGER === 'true' || !process.env.GOOGLE_SHEETS_ID;\n\nawait HarvestLogbookService.logToSheets(query, response, sources);\n\nThe service automatically chooses between CSV and Google Sheets based on environment variables. This keeps the step code simple while supporting different deployment scenarios.
\nView CSV logger →
\nView Google Sheets service →
try {\n await HarvestLogbookService.logToSheets(query, response, sources);\n logger.info(`Successfully logged to ${destination}`);\n} catch (error) {\n logger.error('Failed to log query response');\n // Don't throw - logging failures shouldn't break the main flow\n}\n\nThe step catches logging errors without throwing. This ensures that even if logging fails, the main workflow completes successfully. Users get their query results even if the audit log has issues.
\nThe CSV logger saves entries to logs/harvest_logbook.csv with these columns:
Each entry is automatically escaped to handle quotes and commas in the content.
\nStep 5 runs automatically after Step 3 completes. To verify it's working:
\nPOST /harvest_logbook/queryLogToSheets entries cat logs/harvest_logbook.csv\n\nYou should see your query and response logged with a timestamp. Each subsequent query appends a new row to the CSV file.
\n
Now that all steps are built, let's test the complete workflow using the Motia Workbench.
\nnpm run dev\n\nOpen http://localhost:3000 in your browser to access the Workbench.
harvest-logbook flow from the dropdownPOST /harvest_logbook endpoint in the workflow {\n \"x-user-id\": \"user_alice\"\n }\n\n {\n \"content\": \"Harvested 500kg of tomatoes from field A. Weather was sunny, no pest damage observed.\",\n \"farmId\": \"farm_1\",\n \"metadata\": {\n \"field\": \"A\",\n \"crop\": \"tomatoes\",\n \"weight_kg\": 500\n }\n }\n\nWatch the workflow execute in real-time. You'll see:
\nPOST /harvest_logbook/query endpoint {\n \"x-user-id\": \"user_alice\"\n }\n\n {\n \"farmId\": \"farm_1\",\n \"query\": \"What crops did we harvest recently?\"\n }\n\nWatch the RAG pipeline execute:
\nTry querying as a user without permission:
\n {\n \"x-user-id\": \"user_unauthorized\"\n }\n\nYou'll see a 403 Forbidden response to verify if authorization works correctly.\nYou can also create different users with different levels of access and see fine-grained authorization in action.
\nCheck the audit trail:
\ncat logs/harvest_logbook.csv\n\nYou'll see all queries and responses logged with timestamps.
\nThe Workbench also provides trace visualization showing exactly how data flows through each step, making debugging straightforward.
\nYou've built a complete RAG system with multi-tenant authorization using Motia's event-driven framework. You learned how to:
\nYour system now handles:
\nYour RAG system is ready to help farmers query their harvest data naturally while keeping data secure with proper authorization.
\nThis was a fun exercise in tackling a complex authorization problem and also building something production-grade. I also got to play out some of my Stardew Valley fancies IRL. Maybe it's time I actually move to a cozy farm and grow my own crops (so long as the farm has a good Internet connection!)
\n
The repository can be found on the Motia GitHub.
\nFeel free to reach out to us on LinkedIn or jump into the SpiceDB Discord if you have any questions. Happy farming!
", + "url": "https://authzed.com/blog/building-a-multi-tenant-rag-with-fine-grain-authorization-using-motia-and-spicedb", + "title": "Build a Multi-Tenant RAG with Fine-Grain Authorization using Motia and SpiceDB", + "summary": "Learn how to build a complete retrieval-augmented generation pipeline with multi-tenant authorization using Motia's event-driven framework, OpenAI embeddings, Pinecone vector search, SpiceDB permissions, and natural language querying.", + "image": "https://authzed.com/images/blogs/motia-spicedb.png", + "date_modified": "2025-11-18T22:56:00.000Z", + "date_published": "2025-11-18T17:30:00.000Z", + "author": { + "name": "Sohan Maheshwar", + "url": "https://www.linkedin.com/in/sohanmaheshwar/" + } + }, + { + "id": "https://authzed.com/blog/terraform-and-opentofu-provider-for-authzed-dedicated", + "content_html": "Today, AuthZed is excited to introduce the Terraform and OpenTofu Provider for AuthZed Dedicated, giving customers a powerful way to manage their authorization infrastructure using industry standard best practices.
\nWith this new provider, teams can define, version, and automate their resources in the AuthZed Cloud Platform - entirely through declarative infrastructure-as-code. This makes it easier than ever to integrate authorization management into existing operational workflows.
\nModern infrastructure teams rely on Terraform and OpenTofu to manage everything from compute resources to networking and identity. With the new AuthZed provider, you can now manage your authorization layer in the same way — improving consistency, reducing manual configuration, and enabling repeatable deployments across environments.
The Terraform and OpenTofu provider automates key components of your AuthZed Dedicated environment, including service accounts, API tokens, roles, and permission system configuration.
\nAnd we’re working to support additional resources in AuthZed Dedicated environments, including managing Permissions Systems.
\nBelow is a simple example of how to create a service account using the AuthZed Terraform provider:
provider "authzed" {
  token = var.authzed_token
}

resource "authzed_service_account" "example" {
  name        = "ci-cd-access"
  description = "Service account for CI/CD pipeline"
}

This snippet demonstrates how straightforward it is to manage AuthZed resources alongside your existing infrastructure definitions.
\nThe introduction of the Terraform and OpenTofu provider makes it effortless to manage authorization infrastructure as code — ensuring your permission systems evolve safely and consistently as your organization scales.
\nFor AuthZed customers interested in using the Terraform and OpenTofu provider, please contact your account manager for access.
\nTo explore the provider and get started, visit the AuthZed Terraform Provider on GitHub.
\nNot an AuthZed customer, but want to take the technology for a spin? Sign up for AuthZed Cloud today to try it out.
", + "url": "https://authzed.com/blog/terraform-and-opentofu-provider-for-authzed-dedicated", + "title": "Terraform and OpenTofu Provider for AuthZed Dedicated", + "summary": "AuthZed now supports Terraform and OpenTofu. You can manage service accounts, API tokens, roles, and permission system configuration as code, just like your other infrastructure. Define resources declaratively, version them in git, and automate deployments across environments without manual configuration steps.", + "image": "https://authzed.com/images/blogs/opentofu-terraform-blog-image.png", + "date_modified": "2025-10-30T10:40:00.000Z", + "date_published": "2025-10-30T10:40:00.000Z", + "author": { + "name": "Veronica Lopez", + "url": "https://www.linkedin.com/in/veronica-lopez-8ba1b1256/" + } + }, + { + "id": "https://authzed.com/blog/why-were-not-renaming-the-company-authzed-ai", + "content_html": "It has become popular for companies to align themselves with AI. For good reason! AI has the potential, and ever increasing likelihood, of fundamentally transforming the way that companies work. The hype is out of control! People breathlessly compare AI to the internet and the industrial revolution. And who knows; they could even be right!
\nAt AuthZed, a rapidly growing segment of our customers are AI first companies, including OpenAI. As we work with more AI companies on authorization for AI systems, we often get asked if we will rebrand as an AI company.
\nCompanies have realigned themselves to varying degrees. SalesForce may one day soon be called AgentForce. As an April Fool’s joke, one company started a rumor that Nvidia was going to rebrand as NvidAI, and I think a lot of people probably thought to themselves: “yeah, that tracks.” Mega corps such as Google, Meta, and IBM have .ai top level websites that outline their activities in the AI space.
\nIt can make a lot of sense! After all, unprecedented shifts require unprecedented attention, and a rising tide floats all boats. Well: we’re not. In this post I will lay out some of the pros and cons of going all in on AI branding and alignment, and explain our reasons for keeping our brand in place.
\nWhen considering such a drastic change, I believe each company is looking at the upsides and downsides of a rebrand given their specific situation (revenue, brand value, momentum, staff, etc.) and making a calculated choice that may only apply in their specific context. So what are some of the upsides and downsides?
\n
The risks that I’ve been able to identify boil down to two areas: brand value and perception. Let’s start with brand value.
\nCompanies spend a lot of time and effort building their brand value. It is an intangible asset for companies that pays dividends in areas such as awareness, customer acquisition costs, and reach, just to name a few. Apple is widely considered to have the most valuable brand in the world, and BrandFinance currently values their brand at $575 billion, with a b. That’s approximately 15% of their $3.7 trillion market cap.
\nWhen you rebrand by changing your company’s name, you can put all of that hard work at risk. By changing your name, you need to regain any lost brand mindshare. When you change your web address, you need to re-establish SEO and domain authority that was hard fought and hard won. If Apple rebranded to treefruit.ai (dibs btw) tomorrow, we would expect their sales, mindshare, and even email deliverability to go down.
\nThe second major risk category is around perception. By rebranding around AI you’re signaling a few things to the market. First, you're weighing the upside of being aligned with AI heavily. Second, you signal that you’re willing and able to follow the hype. These factors combined may change the perception of your company to potential buyers: from established, steady, successful, to trendy, fast-moving, up and coming.
\nOn a longer time horizon, we’ve also seen many such trends come and go. Web 1.0, Web 2.0, SoLoMo, Cloud, Crypto, VR/AR, and now AI. In all cases these hype movements have had a massive effect on the way people perceive technology, but they have also become less hyped over time, as a new trend has arrived to supplant them. With AI, I can guarantee that at some point we will achieve an equilibrium where the value prop has been mostly established, and the hype adjusts to fit. Do you want to be saddled with an AI-forward brand when that happens? Will you have been able to ride the wave long and high enough to establish an enduring company that can survive on its own? One of my favorite quotes from Warren Buffet may apply here: “Only when the tide goes out do you discover who's been swimming naked.”
\nThere are many upsides that companies can expect to reap as well! Hype is its own form of reality distortion field, and it causes a lot of people to act in ways that they might not have otherwise. FOMO, or fear of missing out, is a well established phenomenon that we can leverage to our benefit. Let’s take a look at who is acting differently in this hype cycle.
\nInvestors. If you are a startup that’s hoping to raise capital, you had better have either: insane fundamentals or an AI story. Carta recently released an analysis on how AI is affecting fundraising, with the TL;DR being that AI companies are absorbing a ton of the money, and that growing round sizes can primarily be attributed to the AI companies that are raising. Counter to all of the hype, user Xodarap over at LessWrong.com has produced an analysis on YC companies post GenAI hitting the scene, that paints a less rosy picture of the outcomes associated with primarily AI-based companies so far. It’s possible (probable?) that we are just too early in the cycle to have identified the clear winners and losers for AI.
\nVendors. If partnerships are a big part of your model, there are a lot of dollars floating around for partnerships that revolve around AI. I've had a marketing exec from a vendor tell me straight up: “all of our marketing dollars are earmarked only for AI related initiatives right now.” If you can tell a compelling story here, you will be able to find someone willing to help you amplify it.
\nBusinesses. Last, and certainly not least, businesses are also changing their behavior. If you’re a B2B company, your customers are all figuring out what their AI story is too. That means opportunity. They’re looking for vendors, partners, analysts, really anyone who can help them be successful with AI. Their boss told them: “We need an AI story or we’re going to get our lunch eaten! Make it happen!” So they’re out there trying to make it happen. Unfortunately, a study out of MIT recently proclaimed that “95% of generative AI pilots at companies are failing.”
\nThe world is never quite as cut and dry as we think it might be. The good news is, that you can still reap some of the reward without a full rebrand. At AuthZed, we’ve found that you can still tell your AI story, and court customers who are looking to advance their AI initiatives even if you’re not completely AI-native, or all-aboard the hype train. Unfortunately, I don’t have intuition or data for what the comparative advantage is of a rebrand compared to attempting to make waves under a more neutral brand.
\nAt AuthZed, our context-specific decision not to rebrand was based primarily on how neutral our solution is. While many companies, both AI and traditional, are having success with using AuthZed to secure RAG pipelines and AI agents, we also serve many customers who want to protect their data from unauthorized access by humans. Or to build that new sharing workflow that is going to unlock new revenue. Or break into the enterprise. Put succinctly: we think we would be doing the world a great disservice if our technology was only being used for AI-adjacent purposes.
\nThe other, less important reason why we’re not rebranding, is that at AuthZed we often take a slightly contrarian or longer view than whatever the current hype cycle might dictate. We try not to cargo-cult our business decisions. Following the pack is almost by definition a median-caliber decision. Median-caliber decisions are likely to sum up to a median company outcome. The median startup outcome is death or an unprofitable exit. At AuthZed, we think that the opportunity that we have to reshape the way that the world thinks about authorization shouldn’t be wasted.
\nWith that said, I’ve been wrong many times in the past. Too many to count even. “Never say never” are words to live by! Hopefully if and when the time comes where our personal calculus shifts in favor of a big rebrand, I can recognize the changing landscape and we can do what’s right for the company. What’s a little egg on your face when you’re on a mission to fix the way that companies across the world do authorization.
", + "url": "https://authzed.com/blog/why-were-not-renaming-the-company-authzed-ai", + "title": "Why we’re not renaming the company AuthZed.ai", + "summary": "Should your company rebrand as an AI company? We decided not to.\nAI companies attract outsized funding and partnership dollars. Yet rebranding means trading established brand value and customer mindshare for alignment with today's hottest trend.\nWe stayed brand-neutral because our authorization solution serves both AI and non-AI companies alike. Limiting ourselves to AI-only would be a disservice to our broader mission and the diverse customers who depend on us.", + "image": "https://authzed.com/images/blogs/authzed-ai-bg.png", + "date_modified": "2025-10-27T11:45:00.000Z", + "date_published": "2025-10-27T11:45:00.000Z", + "author": { + "name": "Jake Moshenko", + "url": "https://www.linkedin.com/in/jacob-moshenko-381161b/" + } + }, + { + "id": "https://authzed.com/blog/authzed-adds-microsoft-azure-support", + "content_html": "Today, AuthZed is announcing support for Microsoft Azure in AuthZed Dedicated to provide more authorization infrastructure deployment options for customers.\nAuthZed now provides customers the opportunity to choose from all major cloud providers - AWS, Google Cloud and/or Microsoft Azure.
\n
AuthZed customers can now deploy authorization infrastructure to 23+ Azure regions to support their globally distributed applications.\nThis ensures fast, consistent permission decisions regardless of where your users are located.
\n\n\n\"I have been following the development of SpiceDB and AuthZed on how they are providing authorization infrastructure to companies of all sizes,\" said Lachlan Evenson, Principal PDM Manager, Azure Cloud Native Ecosystem.\n\"It's great to see their support for Microsoft Azure and we look forward to collaborating with AuthZed as they work with more Azure customers moving forward.\"
\n
This launch is the direct result of customer demand. Many teams asked for Azure support, and now they have the ability to deploy authorization infrastructure in the cloud of their choice.
\n
AuthZed Dedicated is our managed service that provides fully private deployments of our cloud platform in our customer’s provider and regions of choice.\nThis gives users the benefits of a proven, production-ready authorization system—without the burden of building and maintaining it themselves.
\nIndustry leaders such as OpenAI, Workday, and Turo rely on AuthZed Dedicated for their authorization infrastructure:
\n\n\n“We decided to buy instead of build early on.\nThis is an authorization system with established patterns.\nWe didn’t want to reinvent the wheel when we could move fast with a proven solution.”\n— Member of Technical Staff, OpenAI
\n
With Azure now available, you can deploy AuthZed Dedicated on the cloud of your choice.\nBook a call with our team to learn how AuthZed can power your authorization infrastructure.
", + "url": "https://authzed.com/blog/authzed-adds-microsoft-azure-support", + "title": "AuthZed Dedicated Now Available on Microsoft Azure", + "summary": "AuthZed now supports Microsoft Azure, giving customers the opportunity to choose from all major cloud providers - AWS, Google Cloud, and Microsoft Azure. Deploy authorization infrastructure to 23+ Azure regions for globally distributed applications.\n", + "image": "https://authzed.com/images/blogs/authzed-azure-support-og.png", + "date_modified": "2025-10-21T16:00:00.000Z", + "date_published": "2025-10-21T16:00:00.000Z", + "author": { + "name": "Jimmy Zelinskie", + "url": "https://twitter.com/jimmyzelinskie" + } + }, + { + "id": "https://authzed.com/blog/extended-t-augment-your-design-craft-with-ai-tools", + "content_html": "\n\nTL;DR
\n
\nAI doesn't replace design judgment. It widens my T-shaped skill set by surfacing on-brand options quickly. It's still on me to uphold craft, taste, and standards for what ships.
Designers on small teams, especially at startups, default to being T-shaped: deep in a core craft and broad enough to support adjacent disciplines. My vertical is brand and visual identity, while my horizontal spans marketing, product, illustration, creative strategy, and execution. Lately, AI tools have pushed that horizontal reach further than the usual constraints allow.
\nAt AuthZed, I use AI to explore ideas that would normally be blocked by time or budget: 3D modeling, character variation, and light manufacturing for physical pieces. The point is not to replace design craft with machine output. It is to expand the number of viable ideas I can evaluate, then curate and polish a final product that meets our design standard.
\nPrevious tools mostly sped up execution. AI speeds up exploration. When you can generate twenty plausible directions in minutes, the scarce skill is not pushing Bézier handles. It is knowing which direction communicates the right message, and why.
\nConcrete example: Photoshop made retouching faster, but great photography still depends on eye and intent. Figma made collaboration faster, but good product design still depends on hierarchy, flows, and clarity. AI widens the search field so designers can spend more time on curation instead of setup.
\n\n\nVolume before polish
\n
\nWhile at SVA we focused on volume before refinement. We would thumbnail dozens (sometimes a hundred) poster concepts before committing to one. That practice shaped how I use AI today: explore wide, then curate down to find the right solution. Richard Wilde's program emphasized iterative problem-solving and visual literacy long before today's tools made rapid exploration this easy.
AI works best when it is constrained by the systems you already trust, whether that is the permission model that controls who can view a file or the rules you enforce when writing code. Clarity is what turns an AI model from a toy into a multiplier. When we developed our mascot, Dibs, I knew we would eventually need dozens of consistent, reference-accurate variations: expressions, poses, environments. Historically, that meant a lot of sketching and cleanup before we could show anything.
\nWith specific instructions and a set of reference illustrations, I can review a new variation every few moments. None of those are final, but they land close while surfacing design choices I might not have explored on my own. I still adjust typography, tweak poses, and rebalance compositions before anything ships, so we stay on brand and accessible.
\nThis mirrors every major tool shift. Photoshop did not replace photographers. Figma did not replace designers. AI does not replace design thinking. It gives you a broader search field so you can make better choices earlier.
\n
For our offsite hackathon, I wanted trophies the team would be proud to earn and motivated to chase next time. Our mascot, Dibs, was the obvious hero. I started with approved 2D art and generated a character turn that covered front, side, back, and top views. From there I used a reconstruction tool (Meshy has been the most reliable lately) to get a starter mesh before moving into Blender for cleanup, posing, and print prep.
\n
I am not a Blender expert, but I have made a donut or two. With the starting mesh it was straightforward to get a printable file: repair holes, smooth odd vertices, and thicken delicate areas. When I hit something rusty, I leaned on documentation and the right prompts to fill the gaps. Before doing any of that refinement, I printed the raw export on my Bambu Lab P1P in PLA, cleaned up the supports, and dropped the proof on a teammate's desk. We went from concept to a physical artifact in under a day.
\nWe ended up producing twelve trophies printed in PETG with a removable base that hides a pocket for added weight (or whatever ends up in there). I finished them by hand with Rub 'n Buff, a prop-maker staple, to get a patinated metallic look. Once the pipeline was dialed in, I scaled it down for a sleeping Dibs keychain so everyone could bring something home, even if they were not on the podium. Small lift, real morale boost.
\n

When anyone can produce a hundred logos or pose variations, the value as a designer shifts to selection with intent. Brand expertise tells you which pose reads playful versus chaotic, which silhouette will hold up at small sizes, and which material choice survives handling at an event. The models handle brute-force trial. You own the taste, the narrative, and the necessary constraints.
\nThe result is horizontal expansion without vertical compromise. Consistency improves because character work starts from reference-accurate sources instead of ad-hoc one-offs. Physical production becomes realistic because you can iterate virtually before committing to materials and time.
\nWith newer models, I can get much closer to production-ready assets with far less back-and-forth prompting. I render initial concepts, select top options based on color, layout, expression, and composition, then create a small mood board for stakeholders to review before building the final production-ready version. The goal is not to outsource taste. It is to see more viable paths sooner, pick one, and refine by hand so the final assets stay original and on-brand.
\n\n\nProcess note: I drafted the outline and core ideas, then used an editor to tighten phrasing and proofread. Same pattern as the rest of my work: widen the search, keep the taste.
\n
What is a T-shaped designer?
\nA designer with deep expertise in one area (the vertical) and working knowledge across adjacent disciplines (the horizontal).
How does AI help T-shaped designers?
\nAI quickly generates plausible options so you can evaluate more directions, then apply judgment to pick, refine, and ship the best one.
How do I keep brand consistency with AI images?
\nDefine non-negotiables (proportions, palette, silhouette), use reference images, and keep a human finish pass for polish.
Which tools did you use in this workflow?
\nModel-guided image generation (e.g., Midjourney or a tuned model with references), a 2D-to-3D reconstruction step for a starter mesh (Rodin/Hyper3D or Meshy), Blender for cleanup, a slicer to generate the G-code, and a Bambu Lab P1P to print.
We're excited to announce the launch of two new MCP servers that bring SpiceDB resources closer to your AI workflow, making it easier to learn and get started using SpiceDB for your application permissions: the AuthZed MCP Server and the SpiceDB Dev MCP Server.
\nThe AuthZed MCP Server brings comprehensive documentation and learning resources directly into your AI tools. Whether you're exploring SpiceDB concepts, looking up API references, or searching for schema examples, this server provides instant access to all SpiceDB and AuthZed documentation pages, complete API method definitions, and a curated collection of authorization pattern examples. It's designed to make learning and referencing SpiceDB documentation seamless, right where you're already working.
\nThe SpiceDB Dev MCP Server takes things further by integrating directly into your development workflow. It connects to a sandboxed SpiceDB instance, allowing your AI coding assistant to help you learn and experiment with schema development, relationship testing, and permission checking. Need to validate a schema change? Want to test whether a specific permission check will work? Your AI assistant can now interact with SpiceDB on your behalf, making development faster and more intuitive.
\nReady to try them out? Head over to authzed.com/docs/mcp to get started with both servers.
\n
We've been experimenting with MCP since the first specification was published. Back when the term \"vibe coding\" was just starting to circulate, we built an early prototype MCP server for SpiceDB. The results were eye-opening. We were pleasantly surprised by how effectively LLMs could use the tools we provided, and delighted by the potential of being able to \"talk\" to SpiceDB through natural language.
\nThat initial prototype sparked conversations across the SpiceDB community. We connected with others who were equally excited about the possibilities, sharing ideas and exploring use cases together. Those early discussions helped shape our thinking about what MCP servers for SpiceDB could become.
\nAs the MCP specification continued evolving (particularly around enterprise readiness and authorization), we wanted to deeply understand these new capabilities. This led us to build a reference implementation of a remote MCP server using open source solutions. That reference implementation became a testbed for understanding the authorization aspects of the spec and exploring best practices for building production-ready MCP servers.
\nThrough our own experience with AI coding tools, we've seen firsthand how valuable it is to have the right resources and tools available directly in your AI workflow. Our team's usage of AI assistants has steadily increased, and we know the difference it makes when information and capabilities are just a prompt away.
\nFor AuthZed and SpiceDB users, we wanted to bring learning and development resources closer to where you're already working. Whether you're learning SpiceDB concepts, building a new schema, or debugging permissions logic, having immediate access to documentation, examples, and a sandbox SpiceDB instance can dramatically speed up the development process.
\nThat's why we built both servers: the AuthZed MCP Server puts knowledge at your fingertips, while the SpiceDB Dev MCP Server puts your development environment directly into your AI assistant's toolkit.
\nWe're still actively building and experimenting with MCP. While the specification provides guidance for authorization, there's significant responsibility on MCP server developers to implement appropriate access controls for resources and accurate permissions around tools.
\nThis is particularly important as MCP servers become more powerful and gain access to sensitive systems. We're learning as we build, and we'll be sharing new tools and lessons around building authorization into MCP servers as we discover them. We believe the combination of SpiceDB for MCP permissions and AuthZed for authorization infrastructure is especially well-suited for defining and enforcing the complex permissions that enterprise MCP servers require.
\nIn the meantime, we encourage you to try out our MCP servers. The documentation for each includes detailed use cases and security guidelines to help you use them safely and effectively.
\nIf you're building an enterprise MCP server and would like help integrating permissions and authorization, we'd love to chat. Book a call with our team and let's explore how we can help.
\nHappy coding, and we can't wait to see what you build with these new tools! 🚀
", + "url": "https://authzed.com/blog/introducing-authzeds-mcp-servers", + "title": "Introducing AuthZed's MCP Servers", + "summary": "We're launching two MCP servers to bring SpiceDB closer to your AI workflow. The AuthZed MCP Server provides instant access to documentation and examples, while the SpiceDB Dev MCP Server integrates with your development environment. Learn about our MCP journey from early prototypes to production, and discover how these tools can speed up your SpiceDB development.", + "image": "https://authzed.com/images/upload/chat-with-authzed-mcp.png", + "date_modified": "2025-09-30T10:45:00.000Z", + "date_published": "2025-09-30T10:45:00.000Z", + "author": { + "name": "Sam Kim", + "url": "https://github.com/samkim" + } + }, + { + "id": "https://authzed.com/blog/the-dual-write-problem-in-spicedb-a-deep-dive-from-google-and-canva-experience", + "content_html": "This talk was part of the Authorization Infrastructure event hosted by AuthZed on August 20, 2025.
\nIn this technical deep-dive, Canva software engineer Artie Shevchenko draws on five years of experience with centralized authorization systems—first with Google's Zanzibar and now with SpiceDB—to tackle one of the most challenging aspects of authorization system implementation: the dual-write problem.
\nThe dual-write problem emerges when data must be replicated between your main database (like Postgres or Spanner) and SpiceDB, creating potential inconsistencies due to network failures, race conditions, and system bugs. These inconsistencies can lead to false negatives (blocking legitimate access) or false positives (security vulnerabilities).
\nHowever, as Shevchenko explains, \"the good news is centralized authorization systems, they actually do simplify things quite a bit.\" Unlike traditional event-driven architectures where teams publish events hoping others interpret them correctly, \"with SpiceDB, you're fully in control\" of the entire replication process.
\nSpiceDB offers several key advantages: \"you're not replicating aggregates. Most often, it's simple booleans or relationships,\" making inconsistencies easier to reason about. Additionally, \"the volume of replication is also much smaller\" since authorization data can live primarily in SpiceDB, and you're \"replicating just to SpiceDB, not to 10 other services.\"
\nThe talk explores four solution approaches—from cron sync jobs to transactional outboxes—with real-world examples from Google and Canva. Shevchenko's key insight: \"dual write is not a SpiceDB problem. It's a data replication problem,\" but \"SpiceDB makes the dual write problem, and ultimately the data integrity problem, much more manageable.\"
\n\n\n\"First of all, as a team now, you own the whole replication process. Because you own both copies of the data. Which makes a huge difference. You're not just publishing an event that other teams would hopefully correctly interpret and apply to their data stores.\"
\n
Takeaway: SpiceDB gives you complete control over your authorization data replication, eliminating dependencies on other teams and reducing coordination overhead.
\n\n\n\"And then feed it as an input to our MapReduce style sync job, which would sync data for 100 millions of users in just a couple of hours.\"
\n
Takeaway: SpiceDB's approach has been battle-tested at Google scale, handling hundreds of millions of users efficiently.
\n\n\n\"But, the first three approaches without Zanzibar or SpiceDB would be really tricky, if not impossible. Not only because of the data ownership problem, but also because of aggregates. With event-driven replication, you're probably not replicating simple atomic facts.\"
\n
Takeaway: SpiceDB's simple data model (booleans and relationships) makes dual-write problems significantly more manageable compared to traditional event-driven architectures that deal with complex aggregates.
\nTalk by Artie Shevchenko, Software Engineer at Canva
\nAll right, let's talk about the dual-write problem. My name is Artie Shevchenko, and I'm a software engineer at Canva. My first experience with systems like SpiceDB was actually with Zanzibar at Google in 2017. And now I'm working on SpiceDB integration at Canva. So, yeah, almost five years working with this piece of tech.
\nAnd from my experience, there are two hard things in centralized authorization systems. It's dual-writes and data backfills. But neither of them is unique to Zanzibar or SpiceDB. In fact, dual-write is a fairly standard problem. And when we're talking about replication to another database, it is always challenging. Whether it's a permanent replication of some data to another microservice, or migration to a new database with zero downtime, or even replication to SpiceDB.
\nThe good news is centralized authorization systems, they actually do simplify things quite a bit. First of all, as a team now, you own the whole replication process. Because you own both copies of the data. Which makes a huge difference. You're not just publishing an event that other teams would hopefully correctly interpret and apply to their data stores. With SpiceDB, you're fully in control.
\nSecondly, with SpiceDB, you're not replicating aggregates. Most often, it's simple booleans or relationships. Which makes it much easier to reason about the possible inconsistencies.
\nAnd finally, the volume of replication is also much smaller. For two reasons. First, most of the authorization data you can store in SpiceDB only, once the migration is done. And second, with SpiceDB, you need to replicate just to SpiceDB, not to 10 other services. Well, there are also search indexes, but they're very special for multiple reasons. And the good news is search indexes, you don't need to solve them on the client side. Mostly, you can just delegate this to tools like Materialize.
\nBut that said, even with replication to SpiceDB, there is a lot of essential complexity there that first, you need to understand. And second, you need to decide which approach you're going to use to solve the dual-write problem.
\nThe structure of this talk, unlike the topic itself, is super simple. I don't have any ambition to make the dual-write problem look simple. It's not. But I do hope to make it clear. So, the goal of this talk is to make the problems and the underlying causes clear. And we're going to spend quite a lot of time unpacking what are the practical problems we're solving. And then, talking about the solution space, the goal is to make it clear what works and what doesn't. And, of course, the pros and cons of the different alternatives.
\nBut let's start with a couple of definitions. Almost obvious definitions aside, let's take a look at the left side of the slide, at the diagrams. Throughout the talk, we'll be looking into storing the same piece of data in two databases. Of course, ideally, you would store it in exactly one of them. But in practice, unfortunately, it's not always possible, even with SpiceDB.
\nSo, when information in one database does not match the information in another database, we'll call it a discrepancy or inconsistency. Or I'll simply say that databases are out of sync.
\nWhen talking about the dual-write problem in general, I'll be using the term \"source of truth\" for the database that is kind of primary in the replication process. And the second database I'll call the second database. I was thinking about calling them primary and replica or maybe master and slave. But the problem is, these terms are typically used to describe replication within the same system. But I want to emphasize that these are different databases. And also, the same piece of knowledge may take very different forms in them. So, I'll stick to the terms \"source of truth\" and just some other second database. That's when I talk about the dual-write problem in general.
\nBut not to be too abstract, we'll be mostly looking at the dual-write problem in the context of data replication to SpiceDB, not just to some other abstract second database. And in this case, instead of using the term \"source of truth,\" I'll be using the term \"main database,\" referring to the traditional transactional database where you store most of your data, like Postgres, Dynamo, or Spanner. Because for the purposes of this talk, we'll assume that the main database is a source of truth for any replicated piece of data. Yes, theoretically, replicating in the other direction is also an option, but we won't consider that. We're replicating from the main database to SpiceDB.
\nSo, in different contexts, I'll refer to the database on the left side of this giant white replication arrow as either \"source of truth\" or \"main database\" or, even more specifically, Postgres or Spanner. Please keep this in mind.
\nAnd finally, don't get confused when I call SpiceDB a database. Maybe I can blame the name. Of course, it's more than just a database. It is a centralized authorization system. But in this talk, we actually care about the underlying database only. So, hopefully, that doesn't cause any confusion.
\nAll right. We're done with these primitive definitions. Now, let's define what the dual-write problem is. And let's start with an oversimplified but real example from home automation.
\nLet's say there are two types of resources, homes and devices. Users can be members of multiple homes, and they have access to all the devices in their homes. So, whether a device is in one home or another, that information obviously has to be stored both in the main database, in this case, Spanner, and in SpiceDB.
\nAnd if you want to move a device from one home to another, now you need to update the device's home in both databases. If you get a task to implement that, you would probably start with these two lines of code. You first write to the source of truth, which is Spanner, and then write to the second database, which is SpiceDB. The problem is you cannot write to both data stores in the same transaction, because these are literally different systems.
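\nTo make that starting point concrete, here is a minimal sketch. It is not the speaker's actual code; the dictionaries and function name are hypothetical in-memory stand-ins for the real Spanner and SpiceDB clients, and the point is only that the two writes cannot share a transaction.

# Hypothetical in-memory stand-ins for the two separate systems.
spanner = {}   # main database / source of truth: device_id -> home_id
spicedb = {}   # authorization data: device_id -> home_id

def move_device(device_id: str, new_home: str) -> None:
    # Write 1: the source of truth. If this fails, the error just propagates
    # to the client, who can retry.
    spanner[device_id] = new_home
    # Write 2: the second database. There is no transaction spanning both
    # systems, so if this write fails (network blip, SpiceDB outage, or the
    # machine dying right here), the two stores are now out of sync.
    spicedb[device_id] = new_home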
\nSo, a bunch of things can go wrong. If the first write fails, it's easy. You just let the error propagate to the client, and they can retry. But what about the second write? What if that one fails? Do you try to revert the first write and return an error to the client? But what if reverting the first one fails? It's getting complicated.
\nAnother idea. Maybe open a Spanner transaction and write to SpiceDB with the Spanner transaction open. I won't spend time on exploring this option, but it also doesn't solve anything, and in fact, just makes things worse. The truth is, none of the obvious workarounds actually make things better.
\nSo, we'll use these two simple lines of code as a starting point, and just acknowledge that there is a problem for us to solve there. The second write may fail for different reasons. It's either because of a network problem, or a problem with SpiceDB, or even the machine itself terminating after the first line. In all of these scenarios, the two databases become out of sync with each other. One of them will think that the device is in Home 1, and another will think that it is in Home 2.
\nThe second write failing can create two types of data integrity problems. It's either SpiceDB is too restrictive. It doesn't allow access to someone who should have access, which is called a false negative on the slides. Or the opposite. SpiceDB can be too permissive, allowing access to someone who shouldn't have access. False negatives are more visible. It's more likely you would get a bug report for it from a customer. But false positives are actually more dangerous, because that's potentially a security issue.
\nWe've already tried several obvious workarounds, and none of them worked. But let's give it one last shot, given that it is false positives that are the main issue here. Maybe there is a simple way to get rid of those. Let's try a special write operations ordering. Namely, let's do SpiceDB deletes first. Then, in the same transaction, make all the changes to the main database. And then, do SpiceDB upserts.
\nSo, in our example, the device is first removed from home 1 in SpiceDB. And then, after the Spanner write, the device is added to home 2 in SpiceDB. And it actually does the trick. And it's easy to prove that it works not only in this example, but in general. If there are no negations in the schema, such an ordering of writes ensures no false positives from SpiceDB. So, now the dual write problem looks like this. Much better, isn't it? No security issues anymore.
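\nA sketch of that ordering, again with hypothetical in-memory stand-ins rather than real clients, just to show where the delete and the upsert sit relative to the main-database write:

# Hypothetical stand-ins: SpiceDB holds (device, home) relationships.
spanner = {}                   # main database: device_id -> home_id
spicedb_relationships = set()  # SpiceDB: set of (device_id, home_id) tuples

def move_device_ordered(device_id: str, old_home: str, new_home: str) -> None:
    # 1. SpiceDB delete first: revoke the old access.
    spicedb_relationships.discard((device_id, old_home))
    # 2. Then the main-database change (a real transaction in practice).
    spanner[device_id] = new_home
    # 3. Finally the SpiceDB upsert that grants the new access.
    spicedb_relationships.add((device_id, new_home))
    # If step 2 or 3 fails, SpiceDB is at worst too restrictive (a false
    # negative), never too permissive -- assuming no negations in the schema.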
\nLet me play devil's advocate here. If the second or the third write fails, let's say, 100 times per month, we would probably hear from nobody. Or maybe one user. And for one user, you can fix it manually. But aren't we missing something here?
\nThe problem is, there is a whole class of issues we've ignored so far. It's race conditions. In this scenario from the slide, we're doing writes in the order that was supposed to totally eliminate the false positives. But as a result of these two requests from Alice and Bob, we get a false positive for Tom. That's because we're no longer talking about failing writes. None of the writes failed in this scenario. It is race conditions that caused the data integrity problem here.
\nSo, we have identified two causes or two sources of discrepancies between the two databases. The first is failing writes. And the second is race conditions. So, unfortunately, yet another workaround doesn't really make much difference. Back to our initial simple starting point. Two consecutive writes. First write to the main database. And then write to SpiceDB. Probably in a try-catch like here.
\nAnd one last note looking at this diagram. Often people think about the dual write problem very simplistically. They think if they can make all the writes eventually succeed, that would solve the problem for them. So, all they need is a transactional outbox or a CDC, change data capture, or something like this. But that's not exactly the case. Because at the very least, there are also race conditions. And as we'll see very soon, it's even more than that.
\nAnd now, let's add backfill to the picture. If you're introducing a new field, a new type of information that you want to be present in multiple databases, you just make the schema changes, implement the dual write logic, and that's it. You can immediately start reading from the new field or a new column in all the databases. But if it's not a new type of information, if there is pre-existing data, then the data needs to be backfilled.
\nThen the new column, field, or relation goes through these three phases. You can say there is a lifecycle. First, the schema definition changes. New column is created or something like this. Then, dual write is enabled. And finally, we do a backfill, which iterates through all of the existing data and writes it to the second database. And once the backfill is done, the data in the second database is ready to use. It's ready for reads and ready for access checks if we're talking about SpiceDB.
\nAnd as it's easy to see from the backfill pseudocode, backfill also contributes to race conditions. Simply because the data may change between the read and write operations. And again, welcome false positives.
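\nThe backfill pseudocode amounts to a read-then-write loop over the pre-existing data; this hypothetical sketch exists only to show where the race window sits:

def backfill(main_db: dict, spicedb: dict, batch_size: int = 1000) -> None:
    # Read the pre-existing rows from the source of truth...
    rows = list(main_db.items())
    for start in range(0, len(rows), batch_size):
        for device_id, home_id in rows[start:start + batch_size]:
            # ...and write them to the second database. Anything that changed
            # between the read above and this write is a potential race, and
            # therefore a potential false positive.
            spicedb[device_id] = home_id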
\nOkay. So far, we've done two things. We've defined the problem. And we've examined multiple tempting workarounds just to find that they don't really solve anything. Now, let's take a look at several approaches used at Google and Canva that actually do work. And, of course, discuss their trade-offs.
\nFirst of all, doing nothing about it is probably not a good idea in most cases. Because authorization data integrity is really important. It's not only false negatives. It is false positives as well, which, as you remember, can be a security issue. The good news is there are multiple options to choose from if you want to solve the dual-write problem.
\nAnd let's start with a solution we used in our team at Google, which is pretty simple. We just had a cron sync job. That job would run several times per day and fix all the discrepancies between our Spanner instance and Zanzibar. Looking at the code on the right side, because of the sync job, we can keep the dual-write code itself very, very simple. It's basically the two lines of code we started with.
\nSync jobs at Google are super common. And what made it even easier for us here is consistent snapshots. We could literally have a snapshot of both Spanner and Zanzibar for exactly the same instant. And then feed it as an input to our MapReduce style sync job, which would sync data for hundreds of millions of users in just a couple of hours.
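\nA toy version of such a sync job, assuming you already have consistent snapshots of both stores (modelled here as plain dictionaries), might look like the following; the real job was a MapReduce over those snapshots, but the repair logic is the same idea:

def sync_job(main_snapshot: dict, spicedb_snapshot: dict, spicedb: dict) -> int:
    # Walk both snapshots and repair every discrepancy in SpiceDB.
    fixed = 0
    for device_id, home_id in main_snapshot.items():
        if spicedb_snapshot.get(device_id) != home_id:
            spicedb[device_id] = home_id   # missing or stale relationship
            fixed += 1
    for device_id in spicedb_snapshot.keys() - main_snapshot.keys():
        spicedb.pop(device_id, None)       # relationship that should not exist
        fixed += 1
    return fixed   # the count itself is the visibility benefit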
\nAnd interestingly, sync jobs are the only solution that truly guarantees eventual consistency, no matter what. Because in addition to write failures and races, there is also a third problem here. It is bugs in the data replication logic.
\nNow, the most interesting part is how did it perform in practice? And thanks to our sync job, we actually know for sure how it went. Visibility into the data integrity is a huge, huge benefit. We not only knew that all the discrepancies would get fixed within several hours, but we also knew how many of them we actually had. And interestingly, the number of discrepancies was really high only when we had bugs in our replication logic. Race conditions and failed writes, they did cause some inconsistencies too. But even at our scale, there were only a small number of them, typically tens or hundreds per day.
\nNow, talking about the downsides of this approach, there are two main downsides. The first one is there are always some transient discrepancies, which can be there for several hours. Because we're not trying to address race conditions or failing writes in real time. And the second problem is infra costs. Running a sync job for a large database almost continuously is really, really expensive.
\nAll right. We're done with the sync jobs. Now, all the other approaches we'll be looking at, they leverage the transactional outbox pattern. For some of those approaches, you could achieve similar results with CDC, change data capture, instead of the outbox. But outbox is more flexible, so we'll stick to it.
\nAnd at its core, the transactional outbox pattern is really, really simple. When writing changes to the main database, in the same transaction, we also store a message saying, \"please write something to SpiceDB.\" And unlike traditional message queues outside of the main database, such an approach truly guarantees for us at-least-once delivery. And then there is a worker running continuously that pulls the messages from the outbox and acts upon them, makes the SpiceDB writes. Note that I mentioned a Zedtoken here in the code, but these are orthogonal to our topics, so I'll just skip them on the next slides.
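\nIn sketch form (hypothetical stand-ins again; in the real pattern the outbox is a table written in the same main-database transaction, and the worker would also handle Zedtokens and retries):

from collections import deque

spanner = {}       # main database
outbox = deque()   # stand-in for an outbox table inside the main database
spicedb = {}       # second database

def move_device(device_id: str, new_home: str) -> None:
    # In practice these two writes happen in ONE main-database transaction,
    # which is what gives at-least-once delivery of the message.
    spanner[device_id] = new_home
    outbox.append({"device_id": device_id, "home_id": new_home})

def outbox_worker() -> None:
    # Runs continuously: pull messages and apply them to SpiceDB,
    # retrying failed writes until they eventually succeed.
    while outbox:
        msg = outbox.popleft()
        spicedb[msg["device_id"]] = msg["home_id"]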
\nAs I already mentioned, the problem the transactional outbox solves for us is reliable message delivery. Once SpiceDB and the network are in a healthy state, all the valid SpiceDB writes will eventually succeed. One less problem for us to worry about. But similar to CDC, it doesn't solve any of the other problems. It obviously doesn't provide any safety nets for the bugs in the data replication logic. And as it's easy to see from these examples, the transactional outbox is also subject to race conditions. Unless there are some extra properties guaranteed, which we'll talk very, very soon about.
\nOkay. Now that we've set the stage with transactional outboxes, let's take a look at several solutions. The second approach to solving the dual-write problem is what I would call micro-syncs. Not sure if there's a proper term for it, but let me explain what I mean. In many ways, it's very similar to the first approach, cron sync jobs. But instead of doing a sync for the whole databases, we would be doing targeted syncs for specific relationships only.
\nFor example, if Bob's role in Team X changed, we would completely resync Bob's membership in that team, including all his roles. So in the worker, we would pull the message from the outbox, then read the data from both databases, and fix it in SpiceDB if there are any discrepancies.
\nTo make it scale, instead of writing it to SpiceDB from the worker directly, we can pull those messages in batches and just put them into another durable queue, for example, into Amazon SQS. And then we can have as many workers as we need to process those messages.
\nBut aren't these micro-syncs subject to races themselves? They are. Here on this diagram, you can see an example of such a race condition creating a discrepancy. But adding a delay of just several seconds makes such races highly unlikely. And for our own peace of mind, we can even process the same message again, let's say in one hour. Then races become practically impossible. I mean, yes, in theory, the internet is a weird thing that doesn't make any guarantees. But in practice, even TCP retransmissions, they won't take an hour.
\nSo the race conditions are solved with significantly delayed micro-syncs. And you can even do multiple syncs for the same message with different delays.
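\nA micro-sync worker is then just a delayed, targeted repair. The sketch below uses a sleep and in-memory dictionaries purely for illustration; a real system would rely on delayed redelivery from the queue rather than sleeping in the worker:

import time

def micro_sync(message: dict, main_db: dict, spicedb: dict, delay_seconds: float = 5.0) -> None:
    # Wait long enough that any in-flight writes for this row have settled.
    time.sleep(delay_seconds)
    device_id = message["device_id"]
    truth = main_db.get(device_id)
    if spicedb.get(device_id) != truth:
        # Targeted fix for just this relationship, nothing else.
        if truth is None:
            spicedb.pop(device_id, None)
        else:
            spicedb[device_id] = truth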
\nNow, what about bugs in the data replication logic? And in practice, that's the only difference from the first approach: micro-syncs do not cover some types of bugs. Specifically, let's say you're introducing a new flow that modifies the source of truth, but then you simply forget to update SpiceDB in that particular flow. Obviously, if there is no message sent, there is no micro-sync, and there would be a discrepancy. But apart from that, there are no other substantial downsides to micro-syncs. They provide you with almost the same set of benefits as normal sync jobs, and even fix discrepancies on average much, much faster, which is pretty exciting.
\nAnd finally, let's take a look at a couple of options that do not rely on syncs between the databases. Let's introduce a version field for each replicated field. In our home automation example, it would be a home version column in the devices table, and a corresponding home version relation in the SpiceDB device definition. And then we must ensure that each write to the home ID field in Spanner increments the device home version value. And then in the message itself, we also provide this new version value so that when the worker writes to SpiceDB, it can do a conditional write to make sure it doesn't override a newer home value with an older one.
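\nThe shape of that conditional write, with the version carried in the outbox message, looks roughly like this hedged sketch; how the condition is actually enforced against SpiceDB is implementation-specific and not shown here:

spicedb_home = {}   # device_id -> (home_id, version) as mirrored in SpiceDB

def apply_message(msg: dict) -> None:
    # The message carries the version assigned when the main database was
    # written; only apply it if it is newer than what is already stored.
    device_id, home_id, version = msg["device_id"], msg["home_id"], msg["version"]
    _, current_version = spicedb_home.get(device_id, (None, 0))
    if version > current_version:
        spicedb_home[device_id] = (home_id, version)
    # Otherwise drop the message: a newer value has already landed, and
    # applying this one would overwrite new data with old data.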
\nAnd there are different options for how to implement this. But none of them are really simple. So introducing a bug in the replication logic, honestly, is pretty easy. And the worst thing is, unlike sync jobs or even micro-syncs, this approach doesn't provide you with any safety nets. When you introduce a bug, it won't even be visible. So yeah, those are the three downsides of this approach: complexity, no visibility into the replication consistency, and no safety nets. And the main benefit is, it does guarantee there would be no inconsistencies from race conditions or failed writes.
\nAnd the last option is here more for completeness. To explore the idea that lies on the surface and, in fact, almost works, but there are a lot of nuances, limitations, and pitfalls to avoid there. And that's the only option where we solve the dual write problem by actually abandoning the dual write logic. So let's say we have a transactional outbox. And the only thing the service code does, it writes to the main database and the transactional outbox. No SpiceDB writes there. So there is no dual write.
\nAnd there is just a single worker that processes a single message at a time, the oldest message available in the transactional outbox, and then it attempts to make a SpiceDB write until it succeeds. So the transactional outbox is basically a queue. And that by itself guarantees eventual consistency. I'll give you some time to digest this statement.
\nYou can prove that as long as there are no bugs, the transactional outbox is a queue, and there is a single consumer, eventual consistency between the main database and SpiceDB is guaranteed. Because it's FIFO, first in, first out, and there are no SpiceDB writes from service code.
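\nA sketch of that single-consumer setup (again with hypothetical in-memory stand-ins), which also makes the throughput and head-of-line blocking trade-offs visible:

from collections import deque

outbox = deque()   # FIFO: messages in main-database commit order
spicedb = {}

def single_worker() -> None:
    # Exactly one consumer, one message at a time, oldest first.
    while outbox:
        msg = outbox[0]   # peek; only remove once the write has succeeded
        spicedb[msg["device_id"]] = msg["home_id"]   # in reality: retry until success
        outbox.popleft()
    # FIFO order plus a single consumer (and no SpiceDB writes from service
    # code) is what guarantees eventual consistency -- and also why one
    # malformed message can stall the entire replication process.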
\nHowever, a single worker processing one message at a time from a queue wouldn't provide us with a high throughput. So you might be tempted, instead of writing to SpiceDB directly from the worker, to put the messages into another durable queue. But I'm sure you can see the problem with this change, right? We've lost the FIFO property. So now it's subject to races. Unless that second queue is FIFO as well, of course. But if it's FIFO, guess what? We're not increasing throughput.
\nSo yeah, if we're relying on the FIFO property to address race conditions, there is literally no reason to transfer messages into another durable queue. If you want to increase the throughput, just use bulk SpiceDB writes. But you would need to preprocess them to make sure there are no conflicts within the same batch. Yes, there is no horizontal scalability, but maybe that's not a problem for you.
\nYet, what would probably be a problem for most use cases is that a single problematic write can stop the whole replication process. And we actually experienced exactly this issue once: a single malformed SpiceDB write halted the whole replication process for us. And that's pretty annoying, as it requires manual intervention and is pretty urgent.
\nAnd yet another class of race conditions is introduced by backfills. Because FIFO is a property of the transactional outbox. But backfill writes, fundamentally, they do not go through the outbox. So, yeah. While it's possible to address this by introducing a delay to the transactional outbox specifically for the backfill phase, I would say the overall number of problems with this approach is already pretty catastrophic.
\nSo, let's do a quick summary. We've explored four different approaches to solving the dual write problem. And here is a trade-off table with the pros and cons of each of them. The obvious loser is the last FIFO transactional outbox option. And probably conditional writes with the version field are not the most attractive solution either. Mostly because of their complexity and lack of visibility into the replication consistency.
\nSo, the two options we're probably choosing from are the first and the second one. It's two types of syncs. Either a classic cron sync job or micro syncs. And, yeah. You can totally combine most of these approaches with each other if you want.
\nWe're almost done. I just wanted to reiterate the fact that dual write is not a SpiceDB problem. It's a data replication problem. So, let's say you're doing event-driven replication. Strictly speaking, there are no dual writes, same as in the last FIFO option. But, ultimately, there are two writes to two different systems, to two different databases. So, we're facing exactly the same set of problems.
\nAdding a transactional outbox can kind of ensure that all the valid writes will eventually succeed. But, probably only if you own the other end of the replication process. Then, you can also add the FIFO property to address race conditions, which is option four. But, the first three approaches without Zanzibar or SpiceDB would be really tricky, if not impossible. Not only because of the data ownership problem, but also because of aggregates. With event-driven replication, you're probably not replicating simple atomic facts.
\nSo, yeah. SpiceDB makes the dual write problem, and ultimately the data integrity problem, much more manageable.
\nAnd that's it. Hopefully, this presentation brought some clarity into the highly complex dual write problem.
", + "url": "https://authzed.com/blog/the-dual-write-problem-in-spicedb-a-deep-dive-from-google-and-canva-experience", + "title": "The Dual-Write Problem in SpiceDB: A Deep Dive from Google and Canva Experience", + "summary": "In this technical deep-dive, Canva software engineer Artie Shevchenko draws on five years of experience with centralized authorization systems, first with Google's Zanzibar and now with SpiceDB, to tackle one of the most challenging aspects of authorization system implementation: the dual-write problem. This talk was part of the Authorization Infrastructure event hosted by AuthZed on August 20, 2025.", + "image": "https://authzed.com/images/blogs/a5-recap-canva.png", + "date_modified": "2025-09-16T08:00:00.000Z", + "date_published": "2025-09-16T08:00:00.000Z", + "author": { + "name": "Artie Shevchenko", + "url": "https://au.linkedin.com/in/artie-shevchenko-67845a4b" + } + }, + { + "id": "https://authzed.com/blog/turos-spicedb-success-story-how-the-leading-car-sharing-platform-transformed-authorization", + "content_html": "This talk was part of the Authorization Infrastructure event hosted by AuthZed on August 20, 2025.
\nAndre, a software engineer at Turo, shared how the world's leading car-sharing platform solved critical security and scalability challenges by implementing SpiceDB with managed hosting from AuthZed Dedicated. Faced with fleet owners having to share passwords due to rigid ownership-based permissions, Turo built a relationship-based authorization system enabling fine-grained, team-based access control. The results speak for themselves: \"SpiceDB made it trivial to design and implement the solution compared to traditional relational databases\" while delivering \"much higher performance and throughput.\" The system proved remarkably adaptable—adding support for inactive team members required \"literally one single line of code\" to change in the schema. AuthZed's managed hosting proved equally impressive, with only one incident in over two years of production use. As Andre noted, \"ultimately hosting with AuthZed saved us money in the long run\" by eliminating the need for dedicated infrastructure engineering, allowing Turo to focus on their core business while maintaining a \"blistering fast\" authorization system.
\nOn Reliability and Expert Support:
\n\n\n\"In over two years [...] of operations in production, we had a single incident. And even then in that event, they demonstrated the capacity to recover from faults very, very quickly.\"
\n
On Business Focus:
\n\n\n\"For over two years, Turo has used AuthZed's [Dedicated] offering where they're responsible for deploying and maintaining all the infrastructure required by the SpiceDB clusters. And that gives us time back to focus on growing our business, which is our primary concern.\"
\n
Talk by Andre Sanches, Software Engineer at Turo
\nHello, everyone, and welcome. I'm Andre, a software engineer at Turo, working with SpiceDB for just over two years now. I'm here to share a bit of our experience with SpiceDB as a product and AuthZed as a hosting partner. Congratulations, by the way, to AuthZed for its five-year anniversary. It's a privilege to be celebrating this milestone together. So let's get started.
\nFirst, a quick introduction to those who don't know Turo. We're the leading car-sharing platform in the world, operating in most of the US and four other countries. Our mission is to put the world's 1.5 billion cars to better use. Our business model is similar to popular home-sharing platforms you may be familiar with, with a fundamental difference. Vehicles are less expensive compared to homes, so it's common that hosts build up fleets of vehicles on Turo. In fact, many of our hosts build successful businesses with our help, and therein lies a challenge that we solved with SpiceDB.
\nHosts have responsibilities, such as communicating with guests in a timely manner, taking pictures of vehicles prior to handoff, and again, upon return of the vehicle to resolve disputes that may happen, managing vehicle schedules, etc. These things take time and effort, and as their businesses scale up, fleet owners often hire people to help. And the problem is, in the past, Turo had a flat, ownership-based permission model. You could only interact with the vehicles you own, so hosts had no other choice but to share their accounts and their passwords. It's safe to say that folks in the target audience of this event understand how big of a problem that can be.
\nMoreover, third-party companies started sprouting all over the place to bridge that gap, to manage teams by way of calling our backend, which adds yet another potential attack vector by accessing Turo's customer data. So, it had become a large enough risk and a feature gap that we set out to solve that problem.
\nThe solution was to augment the flat, ownership-based model with a team-based approach, where admin hosts, meaning the fleet owner, can create teams that authorize individual drivers to perform specific actions, really fine-grained, on one or more of the vehicles that they own. Members are invited to join teams via email, which gives them the opportunity to sign up for a Turo account if they don't yet have one.
\nSo, the solution from a technical standpoint is a graph-based solution that enables our backend to determine very quickly, can Driver ABC perform a certain action on vehicle XYZ? In this case right here, can Driver ABC communicate with guests that booked that certain vehicle? SpiceDB made it trivial to design and implement the solution compared to traditional relational databases, which is most of our backend. Moreover, it offloaded our monolithic database with a tool that offers much higher performance and throughput.
\nAnecdotally, the simplicity of SpiceDB helped implement a last-minute requirement that crept in late in the development cycle—support for inactive team members, the ones who are pending invitation acceptance. Prior to that, the invitation system was purely controlled in MySQL. And we realized, you know what, if we're storing the team in SpiceDB, why not make it so that we can store inactive users too? And the reason I'm mentioning this is this impressed everybody who was working on that feature at the time, because it was literally one single line of code that we had to change in the schema to enable this.
\nSo I'll talk more about this in a second where I show some technical things. But the graph that I just mentioned then roughly translates to this schema. So this is a simplified but still accurate rendition of what our SpiceDB schema looks like. Hopefully this clarifies how driver membership propagates to permissions on vehicles, if you're familiar with SpiceDB schemas.
\nSome noteworthy mentions here are self-referencing relations, this one up here, or all the way up there. So basically, this is how we implemented the inactive users. If you notice, there's the member role and then an active member role. And by way of adding a single record that connects the member role with an active member role in the hosting team, you can enable and disable drivers. This was so incredibly impressive at the time, because we thought we were going to have to change the entire schema and make a whole bunch of other changes. And no, that's all it took.
\nAnd again, it's one of those things that once it clicked, if you're familiar with the SpiceDB but not with the self-referencing relation, looking at this, that #member role and pointing to a relation in the same definition, it kind of looks a little daunting. It did to me. I don't know—you're probably smarter than I am, but it was daunting. But then one day it just clicked and I'm like, hmm, okay, that's how it is. And I was super stoked to continue working with SpiceDB and I'm going to implement more and more of the features. And help the feature team, actually, because it was a separate feature team that was working on this. So that self-referencing was interesting.
\nThe other noteworthy mention here is the namespaces. If you notice in front of the definition, there's a hosting teams forward slash. This is how we separate the schema into multiple copies of the same schema in the same cluster. So we have an ephemeral test environment in which we create and destroy on command sandbox replicas of our entire backend system. This enables us to deploy dozens, if not hundreds, of isolated copies of the schema, along with everything else in our backend, to test new features in a controlled environment that we can break, that we can modify as we see fit without affecting customers. And the namespacing feature in SpiceDB allowed us to use the same cluster for all those copies and save some money. So we don't have to stand up a new server, and there are no computational costs or delays in provisioning computing resources.
\nSo the feature was released the week of, you know, us going pre-live, in a test environment. And we were probably the first adopters of this and it was really cool.
\nSo let's see, at a high level, this is how our hosting team feature works. You can see, let me use the mouse here. You can see how permissions propagate to teams. So, team pricing and availability goes to the relation of the team in the hosting team. The hosting team has the pricing and availability permission for active member roles or the admin role. The plus sign, as you all know, is an or, and then it connects to the driver. Simple, fast. This is blistering fast.
\nOne other query that we make to SpiceDB very, very often—matter of fact, this is the single most issued query to SpiceDB at any given time—is: is the currently logged-in user a cohost? And that's done for everybody. Even if you're not a cohost, this is how we determine whether you're a cohost or not. That will then drive UI decisions, you know, what widgets to show, only if it's pertinent to you. If you're not a cohost, then there's no reason to pollute the UI with cohosting features. Yeah.
\nAnd this is what the UI looks like. So, on a team, you have cohosts, and you can add or invite them. Here's an interesting thing: the code name of the project was cohosting. It ended up being hosting teams because we then used the nomenclature cohosts for adding people to teams. So, here you have your cohosts. You can invite them by email. They get an email that points them to sign up for Turo. If they already have an account, they can just log in. And the moment they log in, it automatically accepts the invitation.
\nNext you have the fine-grained permissions for what your group, or your team, can do. In this case, we have trip management enabled. This is actually the base permission, the one that you have to grant to everybody on the team. And then there's pricing and availability, which allows you to set prices for vehicles, discounts, see finances and all that stuff. So you can imagine why it's very nice to be able to toggle this: any cohost who has no business looking at your finances, you can hide it from them simply by untoggling the permission here. And then you have your vehicles. The list shows all the vehicles you own. You just toggle the ones you want, save, and you're off to the races. Your hosting team is in place and working.
\nSo, a word about AuthZed as a hosting partner. When you're considering adopting a new system, a big challenge is setting it up and running it in a scalable and reliable way. You have to manage security issues. You have to manage your scaling. You have to manage all kinds of infrastructure challenges. And that costs money. In this day and age, it's really hard to find engineers who understand infrastructure well enough to manage all the moving parts of a highly scalable system such as SpiceDB.
\nFor over two years, Turo has used AuthZed's fully hosted cloud offering where they're responsible for deploying and maintaining all the infrastructure required by the SpiceDB clusters. And that gives us time back to focus on growing our business, which is our primary concern. So this is a great opportunity actually to give AuthZed a shout out for their excellent reliability.
\nIn over two years, over two years and three months now, actually, of operations in production, we had a single incident. And even then in that event, they demonstrated the capacity to recover from faults very, very quickly, to pinpoint the problem incredibly quickly, and, you know, take care of it. I think the outage was, we were out for like 38 minutes, something like that. We've had other partners where things were much, much more challenging. So, once in two years, and the root cause analysis, the entire handling of the outage, was very, very nice to see. Because it involved thorough analysis, post-mortems, and putting in safeguards to ensure that it doesn't happen again.
\nSo, you know, systems fail. We understand that. And how you deal with it is what shows how good you are. And with AuthZed, we rest easy knowing that we're well taken care of. And ultimately hosting with AuthZed saved us money in the long run because it would otherwise take a lot of engineering time and effort just to keep the clusters running. So if your company is considering adopting SpiceDB, I would highly encourage you to have a chat with AuthZed about hosting as well. From our experience, it's well worth the investment.
", + "url": "https://authzed.com/blog/turos-spicedb-success-story-how-the-leading-car-sharing-platform-transformed-authorization", + "title": "Turo's SpiceDB Success Story: How the Leading Car-Sharing Platform Transformed Authorization", + "summary": "Andre, a software engineer at Turo, shared how the world's leading car-sharing platform solved critical security and scalability challenges by implementing SpiceDB with managed hosting from AuthZed Dedicated. This talk was part of the Authorization Infrastructure event hosted by AuthZed on August 20, 2025.", + "image": "https://authzed.com/images/blogs/a5-recap-turo.png", + "date_modified": "2025-09-15T13:49:00.000Z", + "date_published": "2025-09-15T13:49:00.000Z", + "author": { + "name": "Andre Sanches", + "url": "https://www.linkedin.com/in/ansanch" + } + }, + { + "id": "https://authzed.com/blog/authzed-is-5-event-recap-authorization-infrastructure-insights", + "content_html": "Last month we celebrated AuthZed's fifth birthday with our first-ever \"Authorization Infrastructure Event\" - a deep dive\ninto the technical challenges and innovations shaping the future of access control.
\nThe livestream brought together industry experts from companies like Canva and Turo to share real-world experiences with\nauthorization at scale, featured major product announcements including the launch of AuthZed Cloud, and included\nfascinating discussions with database researchers about the evolution of data infrastructure. From solving the\ndual-write consistency problem to powering OpenAI's document processing, we covered the full spectrum of modern\nauthorization challenges.
\nWatch the full event recording (2.5 hours)
\nBefore we dive into the technical talks, let's start with the big announcements:
\nWe finally launched AuthZed Cloud - a self-service platform that allows you to provision,\nmanage, and scale your\nauthorization infrastructure on demand. Sign up with a credit card, get your permission system running in minutes, and\nscale as needed - authorization that runs like cloud infrastructure. Through\nour AuthZed Cloud Starter Program, we're\nalso providing credits to help teams try out the platform.
\n\nOpenAI securely connects enterprise knowledge with ChatGPT by using AuthZed to\nhandle permissions for their corporate data connectors - when ChatGPT connects to your company's Google Drive or\nSharePoint. They've built connectors to process and search over 37 billion documents for more than 5 million\nbusiness users while respecting existing data permissions using AuthZed's authorization infrastructure.
\nThis demonstrates how authorization infrastructure has become critical for AI systems that need to understand and\nrespect complex organizational data permissions at massive scale.
\nArtie Shevchenko from Canva delivered an excellent explanation of the dual-write problem that many authorization\nteams face. Anyone who has tried to keep data consistent between two different databases (such as your main database +\nSpiceDB) will recognize this challenge. Watch Artie's full talk
\nArtie was direct about the reality: the dual-write problem is hard. Here's what teams need to understand:
\nThings Will Go Wrong
\nFour Ways to Deal With It
\nCanva uses sync jobs as their safety net. Artie's team found that most inconsistencies actually came from bugs in their replication logic, not from the network problems everyone worries about. The sync jobs caught everything and gave them visibility into what was actually happening.
\nThe Real Lesson: Don't try to be clever. Pick an approach, implement it well, and have monitoring so you know when things break.
\nAndre Sanches from Turo told the story of how they moved from \"just share your password with your employees\" to\naccurate fine-grained access controls. Watch Andre's talk
\nThe Problem Was Real\nTuro hosts were sharing account credentials with their team members. Fleet owners needed help managing vehicles, but\nTuro's permission system only understood \"you own it or you don't.\" This created significant security challenges.
\nThe Solution Was Surprisingly Straightforward\nAndre's team built a relationship-based permission system using SpiceDB that supports:
\nThe best part? When they needed to add support for inactive team members late in development, it was literally a\none-line schema change. This exemplifies the utility of SpiceDB schemas and authorization as infrastructure.
\nTwo Years Later\nTuro has had exactly one incident with their AuthZed Dedicated deployment in over two years - and that lasted 38 minutes. Andre made it clear: letting AuthZed handle the infrastructure complexity was absolutely worth it. His team focuses on building features, not babysitting databases.
\nProfessor Andy Pavlo from Carnegie Mellon joined our co-founder Jimmy Zelinskie for a chat about databases, AI,\nand why new data models keep trying to kill SQL. Watch the fireside chat
\nThe SQL Cycle\nAndy's been watching this pattern for decades:
\nVector databases? Being absorbed into PostgreSQL. Graph databases? SQL 2024 added property graph queries. NoSQL? Most of those companies quietly added SQL interfaces.
\nThe Spiciest Take\nJimmy dropped this one: \"The PostgreSQL wire protocol needs to die.\"
\nHis argument: Everyone keeps reimplementing PostgreSQL compatibility thinking they'll get all the client library benefits for free. But what actually happens is you inherit all the complexity of working around a pretty terrible wire protocol, and you never know how far down the rabbit hole you'll need to go.
\nAndy agreed it's terrible, but pointed out there's not enough incentive for anyone to build something better. Classic tech industry problem.
\nAI and Databases\nThey both agreed that current AI hardware isn't radically different from traditional computer architecture - it's just specialized accelerators. The real revolution will come from new hardware designs that change how we think about data processing entirely.
\nJoey Schorr (our CTO) showed off something that made me genuinely excited: a way to make SpiceDB look like regular\nPostgreSQL tables. Watch Joey's demo
\nYou can literally write SQL like this:
\nSELECT * FROM documents\nJOIN permissions ON documents.id = permissions.resource_id\nWHERE permissions.subject_id = 'user:jerry' AND permissions.permission = 'view'\nORDER BY documents.title DESC;\n\nThe foreign data wrapper handles the SpiceDB API calls behind the scenes, and PostgreSQL's query planner figures out the optimal way to fetch the data. Authorization-aware queries become just... queries.
\nVictor Roldán Betancort demonstrated AuthZed Materialize, which precomputes complex permission decisions so SpiceDB\ndoesn't have to traverse complex relationship graphs in real-time. Watch Victor's demo
\nThe demo showed streaming permission updates into DuckDB, then running SQL queries against the materialized permission\nsets. This creates a real-time index of who can access what, without the performance penalty of traversing permission\nhierarchies on every query.
\nSam Kim talked about authorization for Model Context Protocol servers and released a reference implementation for an\nMCP server with fine-grained authorization support built in. Watch Sam's MCP talk
\nThe key insight: if you don't build official MCP servers for your APIs, someone else will. And you probably won't like how they handle authorization. Better to get ahead of it with proper access controls baked in.
\nIrit Goihman (our VP of Engineering) shared some thoughts on how we approach building software. Watch Irit's insights
\nRemote-first engineering teams need different approaches to knowledge sharing and innovation.
\nWe recognized the contributors who make SpiceDB a thriving open source project. The community response has been\nexceptional:
\nCore SpiceDB Contributors:
\nClient Library Heroes (making SpiceDB accessible everywhere):
\nCommunity Tooling Builders (the ecosystem enablers):
\nEvery single one of these folks saw a gap and decided to fill it. That's what makes open source communities amazing.
\nFive years ago, application authorization was often something that was DIY and hard to scale. Today, companies are\nprocessing billions of permission checks through purpose-built infrastructure.
\nThe next five years? AI agents are going to need authorization systems that don't exist yet. Real-time permission materialization will become table stakes. Integration with existing databases will get so seamless you won't think about it.
\nIf you take anything away from our fifth birthday celebration, let it be this:
\nAuthorization infrastructure has gone from \"development requirement\" to \"strategic advantage.\" The companies that figure\nthis out first will have a significant edge in keeping pace with quickening development cycles and heightened security\nneeds.
\nThanks to everyone who joined AuthZed for the celebration, and here's to the next five years of fixing access control\nfor everyone.
\nWant to try AuthZed Cloud? Sign up here and get started in minutes.
\nJoin our community on Discord and\nstar SpiceDB on GitHub.
", + "url": "https://authzed.com/blog/authzed-is-5-event-recap-authorization-infrastructure-insights", + "title": "AuthZed is 5: What We Learned from Our First Authorization Infrastructure Event", + "summary": "We celebrated our 5th birthday with talks from Canva, Turo, and Carnegie Mellon. Here's what we learned about the dual-write problem, scaling authorization in production, and why everyone keeps reimplementing the PostgreSQL wire protocol.", + "image": "https://authzed.com/images/blogs/blog-featured-image.png", + "date_modified": "2025-09-02T18:00:00.000Z", + "date_published": "2025-09-02T18:00:00.000Z", + "author": { + "name": "Corey Thomas", + "url": "https://www.linkedin.com/in/cor3ythomas/" + } + }, + { + "id": "https://authzed.com/blog/authzed-cloud-is-now-available", + "content_html": "Today marks a special milestone for AuthZed: we're celebrating our 5th anniversary! There are honestly too many thoughts and reflections swirling through my mind to fit into a single blog post. The reality is that most startups don't make it to 5 years, and I'm extremely proud of what we've built together as a team and community.
\nIf you want to hear me reflect on the journey of the past 5 years, I'm giving a talk today about exactly that, and we'll post a link to the recording here when it's ready. But today isn't just about looking back, it's also about looking forward, and I’ve personally been looking forward to launching our next iteration of authorization infrastructure: AuthZed Cloud.
\nIn this blog post, I'll cover what we've built and why, but if you don't need that context and just want to dive in, feel free to bail on this post and sign up right now!
\nTo understand why we built AuthZed Cloud, I need to first talk about AuthZed Dedicated, because in many ways, Dedicated represents our vision of the perfect authorization infrastructure product.
\nAuthZed Dedicated is nearly infinitely scalable: capable of handling millions of queries per second when you need it. It's co-located with your workloads, which means there's no internet or cross-cloud latency penalty for your authorization decisions, which are often in the critical path for user interactions. It can run nearly anywhere on earth, with support for all three major cloud providers, giving you the flexibility to deploy where your business needs demand.
\nPerhaps most importantly, Dedicated provides total isolation for each customer across datastore, network, and compute layers. It marries the best permissions database in the world (SpiceDB) with the best infrastructure design (Kubernetes + operators) to create what we believe is the best authorization infrastructure in the world.
\nSo how did we improve on this formula? We made it more accessible!
\nAuthZed Dedicated's biggest challenge isn't technical: it's the enterprise procurement cycle that comes with it. The question we kept asking ourselves was: how can we bring these powerful concepts to more companies, especially those who need enterprise-grade authorization but can't navigate lengthy procurement processes?
\nAuthZed Cloud takes the most powerful concepts from AuthZed Dedicated and makes them available in a self-service product that you can start using today.
\nWe've also made several key improvements over what’s available in Dedicated today:
\nSelf-service registration and deployment: No more waiting weeks for procurement approvals or implementation calls. Sign up, configure your permissions system, and start building. Scale when you need to!
\nRoles: We've added granular access controls that let you limit who can access and change things within your AuthZed organizations. This was a frequent request from teams who needed to federate access to our platform in different ways. You’ll be happy to know that this feature is, of course, also powered by SpiceDB.
\nUsage-based billing: Instead of committing to fixed infrastructure costs upfront, you can spin up resources on-demand and pay for what you actually use.
\nThe best part? These improvements will also be landing in Dedicated soon, so all our customers benefit!
\nDelivering on this vision does require some compromises. AuthZed Cloud uses a shared control plane and operates in pre-selected regions (though please let us know if you need a region we don't support today!). But honestly, that's about it for compromises.
\nAuthZed Cloud is designed for companies of all sizes. Despite the shared infrastructure approach, we've maintained high isolation standards. Your SpiceDB runs as separate Kubernetes deployments, and datastores are dedicated per permissions system. You still get the same scalable technology from Dedicated that allows you to scale up to millions of queries per second when needed, and the same enterprise-grade reliability.
\nWhat makes Cloud special is how attainable it is. The base price is a fraction of our base Dedicated deployment price, opening up AuthZed's capabilities to a much broader range of companies.
\nThat said, some organizations should still consider Dedicated. You might choose Dedicated if you have stricter isolation requirements, like an isolated control plane or private networking, or if you need more flexibility around custom legal terms or deployment in cloud provider regions that AuthZed Cloud doesn't yet support.
\nThe response during our early access period has been incredible. There was clearly pent-up demand for a product like this! We've had several long-time AuthZed customers already making the move to Cloud.
\nLita Cho, CTO at moment.dev, had this to say:
\n\n\n“We love Authzed—it makes evolving our permissions model effortless, with a powerful schema language, makes rapid\nprototyping possible along with rock-solid production performance, all without heavy maintenance. Authzed Cloud\ndelivers the power and reliability of Dedicated at a startup-friendly price, without the hassle of running SpiceDB. That\nlets me focus on building our modern docs platform, confident our authorization is secure, fast, and future-proof.”
\n
The best part about AuthZed Cloud is that you can sign up immediately and get started building. We've also set up a program where you can apply for credits to help with your initial implementation and testing.
\nAs we celebrate five years of AuthZed, I'm more excited than ever about the problems we're solving and the direction we're heading. Authorization remains one of the most critical and complex challenges in modern software development, and we're committed to making it accessible to every team that needs it.
\nHere's to the next five years of building the future of authorization together.
", + "url": "https://authzed.com/blog/authzed-cloud-is-now-available", + "title": "AuthZed Cloud is Now Available!", + "summary": "Bringing the power of AuthZed Dedicated to more with our new shared infrastructure, self-service offering: AuthZed Cloud.", + "image": "https://authzed.com/images/upload/AuthZed-Cloud-Blog@2x.png", + "date_modified": "2025-08-20T16:00:00.000Z", + "date_published": "2025-08-20T16:00:00.000Z", + "author": { + "name": "Jake Moshenko", + "url": "https://www.linkedin.com/in/jacob-moshenko-381161b/" + } + }, + { + "id": "https://authzed.com/blog/predicting-the-latest-owasp-top-10-with-cve-data", + "content_html": "OWASP is set to release their first Top 10 update since 2021, and this year’s list is one of the most awaited because of the generational shift that is AI. The security landscape has fundamentally shifted thanks to AI being embedded in production systems across enterprises from RAG pipelines to autonomous agents. I thought it would be a fun little exercise to look at CVE data from 2022-2025 and make predictions on what the top 5 in the updates list would look like. Read on to find out what I found.
\nThe OWASP Top 10 is a regularly updated list of the most critical security risks to web applications. It’s a go-to reference for organizations looking to prioritize their security efforts. We’ve always had a keen eye on this list as it’s our mission to fix broken access control.
\nThe last 4 lists have been released in 2010, 2013, 2017 and 2021 with the next list scheduled for release soon, in Q3 2025.
\nThe OWASP Foundation builds this list using a combination of large-scale vulnerability data, community surveys, and expert input. The goal is to create a snapshot of the most prevalent and impactful categories of web application risks. So I thought I’d crunch some numbers from publicly available CVE data.
\nThis was not a scientific study — I’m not a data scientist, just an enthusiast in the cloud and security space. The aim here was to explore the data, learn more about how OWASP categories relate to CVEs and CWEs, and see if the trends point toward likely candidates for the upcoming list.
\nHere’s the process I followed to get some metrics around the most common CVEs:
\nCollect CVEs from 2022–2025
\nMap CWEs to OWASP Top 10 Categories
\nFor example:
\nCWE-201 - ‘Insertion of Sensitive Information Into Sent Data’ maps to ‘Broken Access Control’.
\n
def map_cwe_to_owasp(cwe_ids):\n    owasp_set = set()\n    for cwe in cwe_ids:\n        try:\n            cwe_num = int(cwe.replace(\"CWE-\", \"\"))\n            if cwe_num in CWE_TO_OWASP:\n                owasp_set.add(CWE_TO_OWASP[cwe_num])\n        except ValueError:\n            continue\n    return list(owasp_set)\n\nCWE_TO_OWASP = {\n    # A01: Broken Access Control\n    22: \"A01:2021 - Broken Access Control\",\n    23: \"A01:2021 - Broken Access Control\",\n    # ...\n    1275: \"A01:2021 - Broken Access Control\",\n\n    # A02: Cryptographic Failures\n    261: \"A02:2021 - Cryptographic Failures\",\n    296: \"A02:2021 - Cryptographic Failures\",\n    # ...\n    916: \"A02:2021 - Cryptographic Failures\",\n\n    # A03: Injection\n    20: \"A03:2021 - Injection\",\n    74: \"A03:2021 - Injection\",\n    # ...\n    917: \"A03:2021 - Injection\",\n\n    # A04: Insecure Design\n    73: \"A04:2021 - Insecure Design\",\n    183: \"A04:2021 - Insecure Design\",\n    # ...\n    1173: \"A04:2021 - Insecure Design\",\n\n    # A05: Security Misconfiguration\n    2: \"A05:2021 - Security Misconfiguration\",\n    11: \"A05:2021 - Security Misconfiguration\",\n    # ...\n    1032: \"A05:2021 - Security Misconfiguration\",\n\n    # A06: Vulnerable and Outdated Components\n    937: \"A06:2021 - Vulnerable and Outdated Components\",\n    # ...\n    1104: \"A06:2021 - Vulnerable and Outdated Components\",\n\n    # A07: Identification and Authentication Failures\n    255: \"A07:2021 - Identification and Authentication Failures\",\n    259: \"A07:2021 - Identification and Authentication Failures\",\n    # ...\n    1216: \"A07:2021 - Identification and Authentication Failures\",\n\n    # A08: Software and Data Integrity Failures\n    345: \"A08:2021 - Software and Data Integrity Failures\",\n    353: \"A08:2021 - Software and Data Integrity Failures\",\n    # ...\n    915: \"A08:2021 - Software and Data Integrity Failures\",\n}\n\nMap CVEs to CWEs
\nEach NVD record lists its CWE IDs under cve.weaknesses[].description[].value (for example, CWE-201). I wrote a script to process the JSON containing NVD vulnerability data, extract the CWE IDs for each CVE, and then map them to OWASP categories.\nimport json\n\n\ndef process_nvd_file(input_path, output_path):\n    with open(input_path, \"r\") as f:\n        data = json.load(f)\n\n    results = []\n    for entry in data[\"vulnerabilities\"]:\n        cve_id = entry.get(\"cve\", {}).get(\"id\", \"UNKNOWN\")\n        cwe_ids = []\n\n        # Extract CWE IDs from weaknesses\n        for problem in entry.get(\"cve\", {}).get(\"weaknesses\", []):\n            for desc in problem.get(\"description\", []):\n                cwe_id = desc.get(\"value\")\n                if cwe_id and cwe_id != \"NVD-CWE-noinfo\":\n                    cwe_ids.append(cwe_id)\n\n        mapped_owasp = map_cwe_to_owasp(cwe_ids)\n\n        results.append({\n            \"cve_id\": cve_id,\n            \"cwe_ids\": cwe_ids,\n            \"owasp_categories\": mapped_owasp\n        })\n\n    with open(output_path, \"w\") as f:\n        json.dump(results, f, indent=2)\n\n    print(f\"Wrote {len(results)} CVE entries with OWASP mapping to {output_path}\")
\n{\n \"cve_id\": \"CVE-2024-0185\",\n \"cwe_ids\": [\n \"CWE-434\",\n \"CWE-434\"\n ],\n \"owasp_categories\": [\n \"A04:2021 - Insecure Design\"\n ]\n },\n {\n \"cve_id\": \"CVE-2024-0186\",\n \"cwe_ids\": [\n \"CWE-640\"\n ],\n \"owasp_categories\": [\n \"A07:2021 - Identification and Authentication Failures\"\n ]\n },\n\nI ran this code snippet for each data set from 2022-2025 and had separate JSON files for each year.
\nNow that we have this data of mapped outputs, we can run some data analysis to find the most common occurrences per year.
\nimport json\nimport os\nfrom collections import Counter, defaultdict\n\nimport matplotlib.pyplot as plt\nimport pandas as pd\n\n# Directory holding the mapped_output_<year>.json files produced above (adjust as needed).\nDATA_DIR = \"mapped_outputs\"\n\n# Tally of OWASP category counts per year.\nyearly_data = defaultdict(Counter)\n\nfor filename in os.listdir(DATA_DIR):\n    year = filename.replace(\"mapped_output_\", \"\").replace(\".json\", \"\")\n    year_path = os.path.join(DATA_DIR, filename)\n\n    # Load the JSON data from the file, which contains a list of CVE entries.\n    with open(year_path, \"r\") as f:\n        entries = json.load(f)\n\n    for entry in entries:\n        for category in entry.get(\"owasp_categories\", []):\n            yearly_data[year][category] += 1\n\n# Convert to a DataFrame\ndf = pd.DataFrame(yearly_data).fillna(0).astype(int).sort_index()\ndf = df.T.sort_index()  # years as rows\n\n# Save summary\ndf.to_csv(\"owasp_counts_by_year.csv\")\nprint(\"\\nSaved summary to owasp_counts_by_year.csv\")\n\n# Also print\nprint(\"\\n=== OWASP Category Counts by Year ===\")\nprint(df.to_string())\n\n# Plot OWASP trends over time\nplt.figure(figsize=(12, 7))\n\nfor column in df.columns:\n    plt.plot(df.index, df[column], marker='o', label=column)\n\nplt.title(\"OWASP Top 10 Category Trends (2022–2025)\")\nplt.xlabel(\"Year\")\nplt.ylabel(\"Number of CVEs\")\nplt.xticks(rotation=45)\nplt.legend(title=\"OWASP Category\", bbox_to_anchor=(1.05, 1), loc='upper left')\nplt.tight_layout()\nplt.grid(True)\nplt.show()
\n
Here’s a table with all the data:
\n| Year | A01: Broken Access Control | A02: Cryptographic Failures | A03: Injection | A04: Insecure Design | A05: Security Misconfiguration | A06: Vulnerable & Outdated Components | A07: Identification & Authentication Failures | A08: Software & Data Integrity Failures |
|---|---|---|---|---|---|---|---|---|
| 2022 | 4004 | 370 | 6496 | 1217 | 151 | 1 | 1233 | 334 |
| 2023 | 5498 | 411 | 8846 | 1480 | 178 | 1 | 1357 | 468 |
| 2024 | 7182 | 447 | 13280 | 1922 | 163 | 4 | 1430 | 584 |
| 2025 | 4314 | 209 | 7563 | 1056 | 90 | 2 | 774 | 418 |
| Totals | 20998 | 1437 | 36185 | 5675 | 582 | 8 | 4794 | 1804 |
So looking at purely the number of incidences in CVEs, the Top 5 would look like this:
\n#5 Software and Data Integrity Failures
\n#4 Identification & Authentication Failures
\n#3 Insecure Design
\n#2 Broken Access Control
\n#1 Injection
But wait, OWASP’s methodology in compiling the list involves not just the frequency (how common) but the severity or impact of each weakness. Also, 2 out of the 10 in the list are chosen from a community survey among application security professionals, to compensate for the gaps in public data. In the past OWASP has also merged categories to form a new category. So based on that here’s my prediction for the Top 5
\nThere’s absolutely no doubt in my mind that the security implications of AI will have a big impact on the list. One point of note is that OWASP released a Top 10 list of LLM in November 2024. Whether they decided to keep the two lists separate or have overlap will largely determine the Top 10 this year.
\nSo looking at the CVE data above (Broken Access Control and Injection had the most occurrences), and the rise of AI in production, here’s what I think will be the Top 5 in the OWASP list this year:
\n#5 Software and Data Integrity Failures
\n#4 Security Misconfigurations
\n#3 Insecure Design
\n#2 Injection
\n#1 Broken Access Control
With enterprises implementing AI Agents, RAG Pipelines and Model Context Protocol (MCP) in production, access control becomes a priority. Broken Access Control topped the list in 2021, and we’ve seen a slew of high profile data breaches recently so I think it will sit atop the list this year as well.
\nI asked Jake Moshenko, CEO of AuthZed, about his predictions for the list, and while we agreed on the #1 position, there were a couple of things we disagreed on. Watch the video to find out what Jake thought the Top 5 would look like and which category he thinks might drop out of the Top 10 altogether.
\n\nAs I mentioned before, I’m not a data scientist, so please feel free to improve upon this methodology in the GitHub repo. I also need to state that:
\nWhat do you think the 2025 OWASP Top 10 will look like?
\nDo you agree with these trends, or do you think another category will spike?
\nI’d love to hear your thoughts in the comments on LinkedIn, BlueSky or Twitter
If you want to replicate this yourself, I’ve put the dataset links and code snippets on GitHub.
", + "url": "https://authzed.com/blog/predicting-the-latest-owasp-top-10-with-cve-data", + "title": "Predicting the latest OWASP Top 10 with CVE data ", + "summary": "OWASP is set to release their first Top 10 update since 2021, and this year’s list is one of the most awaited because of the generational shift that is AI. The security landscape has fundamentally shifted thanks to AI being embedded in production systems across enterprises from RAG pipelines to autonomous agents. I thought it would be a fun little exercise to look at CVE data from 2022-2025 and make predictions on what the top 5 in the updates list would look like. Read on to find out what I found.", + "image": "https://authzed.com/images/blogs/authzed-predict-owasp.png", + "date_modified": "2025-08-13T18:50:00.000Z", + "date_published": "2025-08-13T18:50:00.000Z", + "author": { + "name": "Sohan Maheshwar", + "url": "https://www.linkedin.com/in/sohanmaheshwar/" + } + }, + { + "id": "https://authzed.com/blog/prevent-ai-agents-from-accessing-unauthorized-data", + "content_html": "I just attended the Secure Minds Summit in Las Vegas, where security and application development experts shared lessons learned from applying AI in their fields. Being adjacent to Black Hat 2025, it's not surprising that a common theme was the security risks of AI agents and MCP (Model Context Protocol). There's an anxious excitement in the community about AI's potential to revolutionize how organizations operate through faster, smarter decision-making, while grappling with the challenge of doing it securely.
\nAs organizations explore AI agent deployment, one thing is clear: neither employees nor AI agents should have access to all data. You wouldn't want a marketing AI agent accessing raw payroll data, just as you wouldn't want an HR agent viewing confidential product roadmaps. Without proper access controls, AI agents can create chaos just as easily as they deliver value, since they don't inherently understand which data they should or shouldn't access.
\nThis is where robust permissions systems become critical. Proper access controls ensure AI agents operate within organizational policy boundaries, accessing only data they're explicitly authorized to use.
\nSohan, our Lead Developer Advocate at AuthZed, recently explored this topic on the AuthZed YouTube channel with a live demo of implementing AI-aware permissions systems.
\nWatch the demo here:
\n\nIn June, we launched AuthZed's Authorization Infrastructure for AI, purpose-built to ensure AI systems respect permissions, prevent data leaks, and maintain comprehensive audit trails.
\nAuthZed's infrastructure is powered by SpiceDB, our open-source project based on Google's Zanzibar. SpiceDB's scale and speed make it an ideal authorization solution for supporting AI's demanding performance requirements.
\nOur infrastructure delivers:
\nWant to learn more about the future of AuthZed and authorization infrastructure for AI? Join us on August 20th for \"AuthZed is 5: The Authorization Infrastructure Event.\" Register here.
", + "url": "https://authzed.com/blog/prevent-ai-agents-from-accessing-unauthorized-data", + "title": "Prevent AI Agents from Accessing Unauthorized Data", + "summary": "AI agents promise to revolutionize enterprise operations, but without proper access controls, they risk exposing sensitive data to unauthorized users. Learn how AuthZed's Authorization Infrastructure for AI prevents data leaks while supporting millions of authorization checks per second. Watch our live demo on implementing AI-aware permissions systems.\n\n", + "image": "https://authzed.com/images/blogs/blog-featured-image.png", + "date_modified": "2025-08-08T15:46:00.000Z", + "date_published": "2025-08-08T15:46:00.000Z", + "author": { + "name": "Sam Kim", + "url": "https://github.com/samkim" + } + }, + { + "id": "https://authzed.com/blog/authzed-is-5-authorization-infrastructure-event", + "content_html": "AuthZed is turning five years old, and we're throwing a celebration! On Wednesday, August 20th, we're hosting \"The Authorization Infrastructure Event\" by bringing together experts in authorization and database technology to talk about where this space is headed.
\n\nYou'll hear from industry experts who've been shaping how we think about authorization:
\nAnd the AuthZed team will be sharing what we've been building—new product announcements, plus a peek into our lab:
\nWe’ll be announcing new products that I think will genuinely change how people approach authorization infrastructure, and I’m particularly excited to finally share what we've been exploring in our lab: experimental work that could shape the future of access control.
\nIt's hard to believe but five years have gone by so fast. Back when I joined Jake, Jimmy, and Joey as the first employee, they had this clear understanding of why application authorization was such a pain point for developers, the Google Zanzibar paper as their guide, and an ambitious vision: bring better authorization infrastructure to everyone who needed it.
\n
Photo from our first team offsite in 2021. Not pictured: me because I'm taking the photo
\nLooking back at our journey, some moments that stand out:
\nWe've grown from that small founding team to a group of people who genuinely care about solving authorization the right way. Along the way, we've had the privilege of helping everyone from early-stage startups to large enterprises build and scale their applications without the usual authorization headaches.
\nThis event is our chance to share our latest work with the community that's supported us, celebrate how far we've all come together, and get a glimpse of what's ahead.
\nWhether you've been following our journey from the beginning or you're just discovering what we're about, we'd love to have you there. It's going to be the kind of event where you leave with new ideas, maybe some useful insights, and definitely a better sense of where authorization infrastructure is headed.
\nWant to share a birthday message with us? Record a short message here—we'd genuinely love to hear from you and share some of them during the event.
\nSee you on August 20th!
", + "url": "https://authzed.com/blog/authzed-is-5-authorization-infrastructure-event", + "title": "Celebrate With Us: AuthZed is 5!", + "summary": "AuthZed is turning five years old! Join us Wednesday, August 20th for our Authorization Infrastructure Event, where we're bringing together industry experts and sharing exciting new product developments plus experimental work from our lab.", + "image": "https://authzed.com/images/blogs/blog-featured-image.png", + "date_modified": "2025-07-23T09:36:00.000Z", + "date_published": "2025-07-23T09:36:00.000Z", + "author": { + "name": "Sam Kim", + "url": "https://github.com/samkim" + } + }, + { + "id": "https://authzed.com/blog/coding-with-ai-my-personal-experience", + "content_html": "I’ve been in tech for over 20 years. I’ve written production code in everything from Fortran to Go, and for the last five of those years, I’ve been a startup founder and CEO. These days, I spend most of my time operating the business, not writing code. But recently, I dipped back in. I needed a new demo built, and fast.
\nIt wasn’t a simple side project. This demo would ideally have multiple applications, all wired into SpiceDB, built with an obscure UI framework, and designed to show off what a real-world, multi-language, permission-aware system looks like. Naturally, I started thinking about who should build it.
\nShould I ask engineering? Probably not a good idea since I didn’t want to interrupt core product work. What about an intern? Too late in the year for that. Maybe a contractor? I’ve had mixed results there. Skills tend to be oversold, results can fall short, and just finding and vetting someone would take time I didn’t have.
\nJust prior to this, Anthropic had released Claude Code and Claude 4. A teammate (with good taste) had good things to say about the development experience, and internet consensus seems to be that (for today at least) Claude is king among coding models, so I figured I’d give it a try. I’m no novice to working with AI: I have been a paying customer of OpenAI’s since Dall-E and ChatGPT had their first public launches. At AuthZed we also make extensive use of the AI features built into some of our most beloved tools, such as Notion, Zoom, Figma, and GitHub. Many of these features have been helpful, but none felt like a game changer.
\nAt first, I wasn’t sure how much Claude Code could take on. I didn’t know how to structure my prompts or how detailed I needed to be. I started small: scaffold a project, get a “hello world” working, and set up the build system. It handled all of that cleanly.
\nEncouraged, I got a little overconfident. My prompts grew larger and fuzzier. The quality of output dropped quickly. I also didn’t have a source control strategy in place, and when Claude Code wandered off track, I lost a lot of work. It’s fantastically bad at undoing what it just did! It was a painful but valuable learning experience.
\nEventually, I found my rhythm. I started treating Claude Code like a highly capable but inexperienced intern. I wrote prompts as if they were JIRA tickets: specific, structured, and assuming zero context. I broke the work down into small, clear deliverables. I committed complete features as I went. When something didn’t feel right, I aborted early, git reverted, and started fresh.
\n
That approach worked really well.
\n

By the end of the project, Claude Code and I had built three application analogues for tools that exist in the Google Workspace suite, in three different languages! We wrote a Docs-like in Java, a Groups-like in Go, and a Gmail-like in Javascript, and a frontend coded up in a wacky wireframe widget library called Wired Elements. Each one was connected through SpiceDB, shared a unified view of group relationships, and included features like email permission checks and a share dialog in the documents app. It all ran in Docker with a single command. The entire effort cost me around $75 in API usage.
\nCheck it out for yourself: https://github.com/authzed/multi-app-demo
\nCould I have done this on my own? Sure, in theory. But I’m not a UI expert, and switching between backend languages would have eaten a lot of time. If I’d gone the solo route, I would’ve likely over-engineered the architecture to minimize how much code I had to write, which might have resulted in something more maintainable, but also something unfinished and way late.
\n
This was a different experience than I’d had with GitHub Copilot. Sometimes people describe Copilot as “spicy autocomplete”, and that feels apt. Claude Code felt like having a pair programmer who could actually build features with me.
\nMy buddy Jason Hall from Chainguard put it best in a post on LinkedIn: “AI coding agents are like giving everyone their own mech suit.” and “...if someone drops one off in my driveway I'm going to find a way to use it.”
\n
For the first time in a long while, I felt like I could create again. As a CEO, that felt energizing. It also made me start wondering what else I could personally accelerate.
\nOf course, I had some doubts. Maybe this only worked because it was greenfield. Maybe I’d regret not being the expert on the codebase. But the feeling of empowerment was real.
\nAt the same time, we had a growing need to migrate our sales CRM. We’d built a bespoke system in Notion, modeled loosely after Salesforce. Meanwhile, all of our marketing data already lived in HubSpot. It was time to unify everything.
\nOn paper, this looked straightforward: export from Notion, import into HubSpot. In reality, it was anything but. Traditional CRM migrations are done with flattened CSV files; that wouldn’t play nicely with the highly relational structure we’d built. And with so much existing marketing data in HubSpot, this was more of a merge than a migration.
\nI’ve been through enough migrations to know better than to try a one-shot cutover. It never goes right the first time, and data is always messier than expected. So I came up with a different plan: build a continuous sync tool.
\nThe idea was to keep both systems aligned while we gradually refined the data. That gave us time to validate everything and flip the switch only when we were ready. Both Notion and HubSpot have rich APIs, so I turned again to Claude Code.
\nOver the course of a week, Claude Code and I wrote about 5,000 lines of JavaScript. The tool matched Notion records to HubSpot objects using a mix of exact matching and fuzzy heuristics. We used Levenshtein distance to help with tricky matches caused by accented names or alternate spellings. The tool handled property synchronization and all the API interactions needed to link objects across systems.
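\nThe matching approach is easy to sketch. Here is a small illustration in Python (the actual tool was JavaScript) of exact matching on normalized names with a Levenshtein-distance fallback; the threshold and record shapes are assumptions for the example.
import unicodedata\n\n\ndef levenshtein(a, b):\n    # Classic dynamic-programming edit distance.\n    prev = list(range(len(b) + 1))\n    for i, ca in enumerate(a, 1):\n        curr = [i]\n        for j, cb in enumerate(b, 1):\n            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))\n        prev = curr\n    return prev[-1]\n\n\ndef normalize(name):\n    # Lowercase and strip accents so \"José\" and \"Jose\" compare equal.\n    stripped = unicodedata.normalize(\"NFKD\", name).encode(\"ascii\", \"ignore\")\n    return stripped.decode().lower().strip()\n\n\ndef best_match(notion_name, hubspot_names, max_distance=2):\n    target = normalize(notion_name)\n    # An exact match on the normalized name wins outright.\n    for candidate in hubspot_names:\n        if normalize(candidate) == target:\n            return candidate\n    # Otherwise take the closest candidate within a small edit distance.\n    distance, candidate = min((levenshtein(target, normalize(c)), c) for c in hubspot_names)\n    return candidate if distance <= max_distance else None\n\n\nprint(best_match(\"José Álvarez\", [\"Jose Alvarez\", \"Joseph Alvares\"]))  # Jose Alvarez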
\nThe cost came in at around $50 in Claude Code credits.
\nCould I have done it myself? Technically, yes. But it would have taken me a lot longer. I’m not fluent in JavaScript, and if I had been writing by hand, I would’ve insisted on TypeScript and clean abstractions. That would have been a waste of time for something we were planning to throw away after the migration.
\nOur current generation of coding agents are undeniably powerful. Yes, they’re technically still just a next-token predictor, but that description misses the point. It’s like saying Bagger 288 is “just a big shovel.” Sure, but it’s a shovel that can eat mountains.
\nI now feel confident taking on software projects again in my limited spare time. That’s not something I expected to feel again as a full-time CEO. And the most exciting part? This is probably the worst that these tools will ever be. From here, the tools only get better. Companies like OpenAI, with Codex, and Superblocks are already riffing on other possible user experiences for coding agents. I’m keen to see where the industry goes.
\nIt also seems clear that AI will play a bigger and bigger role in how code gets written. As an API provider, we’re going to need to design for that reality. In the not-too-distant future, our primary users will likely be coding agents, not just humans.
\nWe’re in the middle of a huge transformation, not just in software, but across the broader economy. The genie is out of the bottle. Even if the tools stopped improving tomorrow (and I don’t think they will) there’s already enough capability to change the way software gets built.
\nI’ll admit, it’s a little bittersweet. For most of my career, I have self-identified as a computer whisperer: someone who can speak just the right incantations to make computers (or sometimes whole datacenters) do what I need. But like most workplace superpowers, this one also turned out to be a time-limited arbitrage opportunity.
\nWhat hasn’t changed is the need for control. As AI gets more capable, the need for clear, enforceable boundaries becomes more important than ever. The answer to “what should this AI be allowed to do?” isn’t “more AI.” It’s strong, principled authorization.
\nThat’s exactly what we’re building at AuthZed. And you’ll be seeing more from us soon about how we’re thinking about AI-first developer experience and AI-native authorization.
\nStay tuned.
", + "url": "https://authzed.com/blog/coding-with-ai-my-personal-experience", + "title": "Coding with AI: My Personal Experience", + "summary": "AuthZed CEO Jake Moshenko shares his experience coding with AI.", + "image": "https://authzed.com/images/blogs/blog-featured-image.png", + "date_modified": "2025-07-16T08:21:00.000Z", + "date_published": "2025-07-16T08:21:00.000Z", + "author": { + "name": "Jake Moshenko", + "url": "https://www.linkedin.com/in/jacob-moshenko-381161b/" + } + }, + { + "id": "https://authzed.com/blog/authzed-cloud-is-coming-soon", + "content_html": "Here at AuthZed, we are counting down the days until we launch AuthZed Cloud because we are so eager to bring the power of our authorization infrastructure to every company, large and small. If you're just as excited as we are about AuthZed Cloud, sign up for the waitlist. We will be in touch with AuthZed Cloud news, and you'll be the first to know when the product launches.
\n\n
From the start of our journey, we have had a strong focus on serving the needs of authorization at enterprise businesses. Our most popular product, AuthZed Dedicated, is a reflection of that focus as it caters to those looking for dedicated hardware resources and fully-isolated deployment environments. However, not everyone has such strict requirements, and there are many companies who prefer a self-service product where they can sign up, manage their deployments from a single, shared control plane with other users, and pay for dynamic usage with a credit card. The latter is how we consumed most of our high-value services at our last startup when we were building the first enterprise container registry: Quay.io. In fact, you can read more about our journey from Quay to AuthZed here.
\nThe most gratifying part of creating AuthZed has been working alongside so many amazing companies that are changing the landscape of various industries. It's truly validating to see them come to the same conclusion: homegrown authorization solutions are not sufficient for modern businesses. With AuthZed Cloud, we expect to expand the number of companies we can work alongside to set a new standard of security that ensures the safety of all of our private data by fixing access control.
", + "url": "https://authzed.com/blog/authzed-cloud-is-coming-soon", + "title": "AuthZed Cloud is Coming Soon", + "summary": "AuthZed Cloud is coming soon, expanding beyond enterprise-only solutions to offer self-service authorization infrastructure for companies of all sizes. Join our waitlist to be first in line when we launch this game-changing platform.", + "image": "https://authzed.com/images/blogs/blog-featured-image.png", + "date_modified": "2025-07-03T10:31:00.000Z", + "date_published": "2025-07-03T10:31:00.000Z", + "author": { + "name": "Jimmy Zelinskie", + "url": "https://twitter.com/jimmyzelinskie" + } + }, + { + "id": "https://authzed.com/blog/authzed-brings-additional-observability-to-authorization-via-the-datadog-integration", + "content_html": "Today, AuthZed is providing additional observability capabilities to AuthZed's cloud products with the introduction of our official Datadog Integration. All critical infrastructure should be observable and authorization is no exception. Our integration with Datadog gives engineering teams instant insight into authorization performance, latency, and anomalies—without adding custom tooling or overhead.
\nWith this new integration, customers can now centralize that observability data with the rest of their data in Datadog—giving them the ability to correlate events across their entire platform. AuthZed's cloud products continue to include a web console with out-of-the-box dashboards containing metrics across the various infrastructure components that power a permissions system. At the same time, users of the Datadog integration will also have a mirror of these dashboards available in Datadog if they do not wish to create their own.
\n
\"Being able to visualize how AuthZed performs alongside our other systems gives us real peace of mind,\" said Eric Zaporzan, Director of Infrastructure, at Neo Financial. \"Since we already use Datadog, it was simple to send AuthZed metrics there and gain a unified view of our entire stack.\"
\nAuthZed metrics allow developers and SREs to monitor their deployments, including request latency, cache metrics (such as size and hit/miss rates), and datastore connection and query performance. These metrics help diagnose performance issues and fine-tune the performance of their SpiceDB clusters.
\nThe Datadog integration is available in the AuthZed Dashboard under the “Settings” tab on a Permission System.
\nTo ensure that the dashboard graph for latency correctly shows the p50, p95, and p99 latencies, you’ll also need to set the Percentiles setting for the authzed.grpc.server_handling metric in the Metrics Summary view to ON.
\nTADA 🎉 You should see metrics start to flow to Datadog shortly thereafter.
\nI want to thank all of the AuthZed engineers involved in shipping this feature, but especially Tanner Stirrat who shepherded this project from inception and I can't wait to see all the custom dashboards our customers make in the future!
\n
\nInterested in learning more? Join our Office Hours on July 3rd here on YouTube.
Secure your AI systems with fine-grained authorization for RAG pipelines and agents
\nToday we are announcing Authorization Infrastructure for AI, providing official support for Retrieval-Augmented Generation (RAG) pipelines and agentic AI systems. With this launch, teams building AI into their applications, developing AI products or building an AI company can enforce fine-grained permissions across every stage - from document ingestion to vector search to agent behavior - ensuring data is protected, actions are authorized, and compliance is maintained.
\nAI is quickly becoming a first-class feature in modern applications. From retrieval-augmented search to autonomous agents, engineering teams are building smarter user experiences by integrating large language models (LLMs) into their platforms.
\nBut with that intelligence comes risk.
\nAI systems do not just interact with public endpoints. They pull data from sensitive internal systems, reason over embeddings that bypass traditional filters, and trigger actions on behalf of users. Without strong access control, they can expose customer records, cross tenant boundaries, or operate with more agency than intended.
\nThis is the authorization problem for AI. And it is one every team building with LLMs now faces.
\nWhen you add AI to your application, you also expand your attack surface. Consider just a few examples:
\nAccording to the OWASP Top 10 for LLM Applications, four of the top risks require robust authorization controls as a primary mitigation. And yet, most developers are still relying on brittle, manual enforcement scattered across their codebases.
\nWe believe it’s time for a better solution.
\n
AuthZed’s authorization infrastructure for AI brings enterprise-grade permission systems to AI workloads. AuthZed has been better positioned to support AI from the get-go because of SpiceDB.
\nSpiceDB is an open-source Google Zanzibar-inspired database for storing and computing permissions data that companies use to build global-scale fine grained authorization services. Since it is based on Google Zanzibar’s proven architecture, it can scale to massive datasets while handling complex permissions queries. In fact SpiceDB can scale to trillions of access control lists and millions of authorization checks per second.
\n“AI systems are only as trustworthy as the infrastructure that governs them,\" said Janakiram MSV, industry analyst of Janakiram & Associates. \"AuthZed’s SpiceDB brings proven, cloud-native authorization principles to AI, delivering the control enterprises need to adopt AI safely and at scale.”
\nUsing SpiceDB to enforce access policies at every step of your AI pipeline ensures that data and actions remain properly governed. With AuthZed’s Authorization Infrastructure for AI, teams can safely scale their AI features without introducing security risks or violating data boundaries.
\nRetrieval-Augmented Generation improves the usefulness of LLMs by injecting external knowledge. But when that knowledge includes sensitive customer or corporate data, access rules must be enforced at every stage.
\nAuthZed enables teams to:
\nWhether you are building with a private knowledge base, CRM data, or support logs, SpiceDB ensures your AI respects the same access controls as the rest of your systems.
\nAI agents are designed to act autonomously, but autonomy without boundaries is dangerous. With the AuthZed Agentic AI Authorization Model, teams can enforce clear limits on what agents can access and do.
\nThis model includes:
\nWhether your agent is summarizing data, booking a meeting, or triggering a workflow, it should only ever do what it is explicitly allowed to do.
\nLet’s say an employee types a natural language query into your internal AI assistant:
\n“What was our Q3 revenue?”
\nWithout authorization, the assistant might retrieve sensitive board slides or budget drafts and present them directly to the user. No checks, no logs, no traceability.
\nWith AuthZed:
\nThis is what AuthZed’s Authorization Infrastructure for AI makes possible.
\nYou should not have to choose between building smart features and maintaining secure boundaries. With AuthZed:
\nAnd it is already being used in production. Workday uses AuthZed Dedicated to\nsecure its AI-driven contract lifecycle platform. Other major AI providers rely on SpiceDB to enforce permissions across\nmulti-tenant LLM infrastructure.
\nIf you are building AI features, AuthZed’s Authorization Infrastructure for AI helps you ship faster by allowing you to focus on your product, instead of cobbling together an authorization solution. Whether you are securing vector search, gating agent behavior, or building out internal tools, AuthZed provides the authorization infrastructure you need.
\nFor the team at AuthZed, our mission is to fix access control. The first step is creating the foundational infrastructure for others to build their access control systems upon. Infrastructure for Authorization, you say? Didn't infrastructure just go through its largest transformation ever with cloud computing? From introduction to the eventual mass adoption of cloud computing, the industry has had to learn to manage all of the cloud resources they created. In response, cloud providers offered APIs for managing resource lifecycles. Our infrastructure follows this same pattern, so today we're proud to announce the AuthZed Cloud API is in Tech Preview.
\nThe AuthZed Cloud API is a RESTful JSON API for managing the infrastructure provisioned on AuthZed Dedicated Cloud. Today, it is able to list the available permissions systems and fully manage the configuration for restricting API-level access to SpiceDB within those permissions systems.
\nAs with all Tech Preview functionality, to get started, you must reach out to your account team and request access. Afterwards, you will be provided credentials for accessing the API. With these credentials, you're free to automate AuthZed Cloud infrastructure in any way you like! We recommend getting started by heading over to Postman to explore the API. Next, why not break out a little bit of curl?
\nListing all of your permissions systems:
\ncurl --location 'https://api.$YOUR_AUTHZED_DEDICATED_ENDPOINT/ps' \\\n --header 'X-API-Version: 25r1' \\\n --header 'Accept: application/json' \\\n --header 'Authorization: Bearer $YOUR_CREDENTIALS_HERE' | jq .\u000b[{\n \"id\": \"ps-8HXyWFOzGtk0Yq8dH0GBT\",\n \"name\": \"example\",\n \"systemType\": \"Production\",\n \"systemState\": {\n \"status\": \"RUNNING\"\n },\n \"version\": {\n \"selectedChannel\": \"Rapid\",\n \"currentVersion\": {\n \"displayName\": \"SpiceDB 1.41.0\",\n \"version\": \"v1.41.0+enterprise.v1\",\n \"supportedFeatureNames\": [\n \"FineGrainedAccessManagement\"\n ]\n }\n }\n }]\n\nTake note of the required headers: the API requires specifying a version as a header so that changes can be made to the API in the future releases.
\nI'm eager to see all of the integrations our customers will build with API-level access to our cloud platform! Look out for another announcement coming very soon about an integration that we've built using this new API, too!
\nJoin us on the mission to fix access control.
\nSchedule a call with us to learn more about how AuthZed can help you.
", + "url": "https://authzed.com/blog/introducing-the-authzed-cloud-api", + "title": "Introducing The AuthZed Cloud API", + "summary": "Announcing the AuthZed Cloud API in Tech Preview—an API for managing AuthZed Dedicated Cloud infrastructure. Following the cloud computing pattern of lifecycle management APIs, this new tool allows you to manage permissions systems and restrict API-level access to SpiceDB within your authorization infrastructure.", + "image": "https://authzed.com/images/blogs/blog-featured-image.png", + "date_modified": "2025-05-28T12:00:00.000Z", + "date_published": "2025-05-28T12:00:00.000Z", + "author": { + "name": "Jimmy Zelinskie", + "url": "https://twitter.com/jimmyzelinskie" + } + }, + { + "id": "https://authzed.com/blog/a-closer-look-at-authzed-dedicated", + "content_html": "At AuthZed, our mission is to fix broken access control. After years of suffering in industry from insufficient solutions for building authorization systems, we concluded that we'd have to start from the ground up by building the right infrastructure software. SpiceDB, open sourced in late 2021, was our first-step to providing the solution that modern enterprises need. AuthZed Dedicated Cloud, often referred to as simply Dedicated, launched in early 2022 and productized SpiceDB by offering a dedicated cloud platform for provisioning SpiceDB deployments similar to the user experience you'd find provisioning infrastructure on a major cloud provider.
\n
Dedicated Clouds are a relatively new concept. When AWS hit the market, the term Public Cloud was coined; Public Clouds are cloud platforms that share their underlying hardware resources across a variety of customers. At the same time this term got coined, folks needed a term used to refer to what most folks were already doing before AWS launched: running their own dedicated infrastructure. Unfortunately, instead of calling this Dedicated Cloud, it became known as Private Cloud. So what are Dedicated Clouds? Well, they're the middle ground between Private and Public Clouds; Dedicated Clouds provide varying levels of isolation and dedicated resources than Public Clouds, but aren't placing end users fully in control quite like the traditional Private Cloud. Enterprises in regulated industries, or those that want to isolate particularly sensitive data, increasingly reach for Dedicated Cloud because it can provide most of the niceties of the Public Cloud while also delivering better security.
\n
When AuthZed looked to create the first commercial offering of SpiceDB, we looked at where the industry was heading and implemented a Serverless product. However, it turned out that most enterprises value peace of mind that comes from isolating their authorization data from a shared data plane with other tenants. This was a happy coincidence because at the same time we learned that the best way to operate low-latency systems is to isolate workloads by having dedicated hardware resources. With our new insights, we launched Dedicated, our \"middleground\" that provided dedicated cloud environments with reserved compute resources and private networking. Dedicated customers get a private control plane deployed into their cloud regions of choice where they can provision their own deployments using our web console, API, or Terraform/OpenTOFU. Remaining true to the Infrastructure-as-a-Service (IaaS) spirit, pricing is done on a resource consumption basis.
\nSince launch, Dedicated immediately became our flagship product. However, we recognized that some customers didn't require all of its isolation features.These are the same users looking for a self-service product to try things out without a long enterprise sales cycle. Our Serverless product inadvertently fits this description, but it's a limited experience compared to Dedicated. What if we could bridge the gap and bring a version of our Dedicated product where customers could share the control plane? We're calling this AuthZed Cloud (as opposed to AuthZed Dedicated Cloud) and it's under active development and expected to launch later this year. Best of all, because both Cloud and Dedicated will share the same codebase, all of the self-service features we're building will also be coming to Dedicated.
\nIf you are interested in learning more about AuthZed Cloud, you can sign up here for the beta waitlist.
\n", + "url": "https://authzed.com/blog/a-closer-look-at-authzed-dedicated", + "title": "A Closer Look at AuthZed Dedicated", + "summary": "AuthZed tackles broken access control through innovative authorization infrastructure. After launching open-source SpiceDB in 2021, they created AuthZed Dedicated Cloud—offering enterprises the security benefits of private clouds with public cloud convenience. This middle-ground solution provides isolated authorization data processing with dedicated resources, perfect for regulated industries requiring enhanced security.", + "image": "https://authzed.com/images/blogs/blog-featured-image.png", + "date_modified": "2025-05-20T13:00:00.000Z", + "date_published": "2025-05-20T13:00:00.000Z", + "author": { + "name": "Jimmy Zelinskie", + "url": "https://twitter.com/jimmyzelinskie" + } + }, + { + "id": "https://authzed.com/blog/building-better-authorization-infrastructure-with-arm", + "content_html": "How ARM helps AuthZed build and operate authorization infrastructure, from day-to-day productivity gains to cost-effective, performant cloud compute.
\nToday's cloud-native development environment requires running a growing list of simultaneous services: container orchestration, monitoring, databases, observability tools, and more. For engineering teams, this creates a critical challenge: how to balance performance, cost, and efficiency across both development environments and production deployments.
\nAt AuthZed, we provide flexible, scalable authorization infrastructure—the permissions systems that secure access for your applications’ data and functionality—enabling engineering teams to focus on building what matters—their core products. For our customers using AuthZed's dedicated cloud, the balance of performance, cost, and efficiency is also crucial—they expect a reliable, performant, and cost-effective solution.
\nARM architecture has become our strategic advantage in meeting these challenges across our entire workflow.
\nThe availability of ARM-based laptops with customizable configurations and ample RAM has transformed our development environment. Our journey began with ARM processors in early 2022 and expanded to more powerful variants as they became available. The developer community quickly adopted these machines, and tooling and library support rapidly matured, enabling us to fully adopt ARM as our primary architecture in development.
\nAt AuthZed, we work with distributed systems and databases daily, and running the full stack locally can be resource-intensive, often requiring significant CPU and memory. ARM's efficient performance helps utilize machine capacity, while its energy efficiency keeps our laptops cool enough to truly stay on laps—even when running our resource-intensive local environment.
\nAfter upgrading to higher-performance ARM-based laptops, notable improvements compared to our previous development environment included:
\nThe qualitative benefits have been even more significant—true mobility with our laptops due to minimal battery drain and absence of overheating, smoother performance during resource-intensive tasks, and most importantly, tighter feedback loops during debugging and testing.
\nAuthZed has been building and publishing multi-architecture Docker images for our tools and authorization database for over three years (since March 2022), so we recognized the value of multi-architecture support in CI/CD early on.
\nThere's now robust support for third-party ARM-based action runners for GitHub Actions, our CI/CD platform. Combined with toolchain maturity across runner images for popular architectures, migration to ARM for CI/CD has never been easier.
\nBuild and test workflows are unique to each project and evolve as the project develops. Consequently, the benefits and tradeoffs for a CI/CD platform change over time. We've benefited from being able to easily migrate between architectures and runner providers to best meet our engineering needs at different stages.
\nMajor providers like Google Cloud, AWS, and Azure have all released custom-designed ARM-based CPUs for their cloud compute platforms. The expanding ARM ecosystem bolsters our multi-cloud strategy for AuthZed Dedicated and allows our production workloads to benefit from ARM's design, which prioritizes high core count and power efficiency under load.
\nAuthZed Dedicated is our dedicated authorization infrastructure deployed adjacent to customer applications in their preferred cloud platform. This allows for the lowest latency between user applications and our permissions systems, and for the most comprehensive region support. With the availability of ARM-based compute options across the major providers, we are able to take advantage of the economic and performance advantages of ARM-based infrastructure in production:
\nFrom developer laptops to cloud infrastructure, ARM delivers consistent advantages throughout our engineering pipeline. For AuthZed, it's now our preferred platform for building and running authorization infrastructure that helps customers secure applications with confidence and scale efficiently.
\nThe combination of developer productivity, cost efficiency, and performance gains enables our growing startup to innovate and compete effectively. As cloud providers continue expanding ARM-based offerings and development tools mature further, we expect these advantages to compound, creating even more opportunities to deliver value through our authorization infrastructure.
\nBy embracing ARM across development and production environments, we've created a seamless experience that benefits both our team and our customers—accelerating development while delivering more performant and cost-effective services.
\nCurious about the inspiration behind AuthZed’s modern approach to authorization? Explore the Google Zanzibar research paper with our annotations and foreword by Kelsey Hightower to learn how it all began.
\nhttps://authzed.com/z/google-zanzibar-annotated-paper
Zed is the command line interface (CLI) tool that you can use to interact with your SpiceDB cluster. With it you can easily switch between clusters, write and read schemas, write and read relationships, and check for permissions. It can be launched as a standalone binary or as a Docker container. Detailed installation options documented here.
\nOver the last few months we’ve been making many improvements to it, such as:
\nzed backup commandAnd many other small fixes that are too many to list here. We are happy to announce that last week we released zed v0.30.2, which includes all of these changes.
\nIn the near future we expect to be adding support for a new test syntax in schema files, which will allow you to validate that your schema and relationships work as you expect them to. Stay tuned!
\nAs you can see, we are continuously making improvements to zed. If you see anything not working as expected, or if you have an idea for a new feature, please don’t hesitate to open an issue in https://github.com/authzed/zed. Also, while you’re at it, please give us a star!
", + "url": "https://authzed.com/blog/zed-v0-30-2-release", + "title": "Zed v0.30.2 Release", + "summary": "Zed CLI provides seamless interaction with SpiceDB clusters, allowing you to manage schemas, relationships, and permissions checks. Our v0.30.2 release adds composable schema support, automatic retries, backup functionality, and upcoming Windows package integration via Chocolatey.", + "image": "https://authzed.com/images/blogs/blog-featured-image.png", + "date_modified": "2025-05-01T11:12:00.000Z", + "date_published": "2025-05-01T11:12:00.000Z", + "author": { + "name": "Maria Inés Parnisari", + "url": "https://github.com/miparnisari" + } + }, + { + "id": "https://authzed.com/blog/kubecon-europe-2025-highlights-navigating-authorization-challenges-in-fintech-with-authzeds-jimmy-zelinskie-and-pierre-alexandre-lacerte-from-upgrade", + "content_html": "At this year's KubeCon + CloudNativeCon Europe 2025 in London, AuthZed CPO Jimmy Zelinskie sat down with Pierre-Alexandre Lacerte, Director of Software Development at Upgrade, for an insightful discussion on modern authorization challenges and solutions. The interview, hosted by Michael Vizard of Techstrong TV, covers several key topics that should be on every developer's radar.
\nBefore diving into the highlights, you can watch the complete interview on Techstrong TV here. It's packed with valuable insights for anyone interested in authorization, security, and cloud-native architectures.
\nJimmy shares the origin story of AuthZed, explaining how his experience building Quay (one of the first private Docker registries) revealed fundamental challenges with authorization:
\n\n\n\"When you think about it, the only thing that makes a private Docker registry different from like a regular Docker registry where anyone can pull any container down is literally authorization... the core differentiator of that product was authorization.\"
\n
The turning point came when Google published the Zanzibar paper in 2019:
\n\n\n\"We read this paper and said, this is actually how you're supposed to solve these problems. This would have solved all the problems we had building Quay.\"
\n
One of the most valuable segments of the interview explains the concept of relationship-based access control:
\n\n\n\"The approach in the Zanzibar paper is basically this idea of relationship-based access control, which is not how most people are doing things today. The idea is essentially that you can save sets of relationships inside of a database and then query that later to determine who has access.\"
\n
Jimmy illustrates this with a simple example that makes the concept accessible:
\n\n\n\"Jimmy is a part of this team. This team has access to this resource. And then if I can find that chain from Jimmy through the team to that resource, that means Jimmy has access to that resource transitively through those relationships.\"
\n
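\nAs a rough illustration of that chain in SpiceDB's schema language (a hypothetical sketch for this post, not something shown in the interview):

definition user {}

definition team {
  relation member: user
}

definition resource {
  // a resource belongs to a team, and any member of that team can view it
  relation team: team
  permission view = team->member
}

\nWith relationships like team:acme#member@user:jimmy and resource:report#team@team:acme stored, a check of whether user:jimmy can view resource:report resolves transitively through the team, exactly as described in the quote above.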
Pierre-Alexandre explains the decision-making process that led Upgrade to adopt SpiceDB rather than building an in-house solution:
\n\n\n\"We're a fintech, so we offer personal loans, checking accounts. But eventually we started developing more advanced products where we had to kind of change the foundation of our authorization model... we're kind of not that small, but at the same time we cannot allocate like 200 engineers on authorization.\"
\n
Their evaluation involved looking at industry leaders:
\n\n\n\"We started looking at a few solutions actually, and then also the landscape, like what is GitHub doing? What is the Carta, Airbnb doing?... a lot of those solutions were kind of hedging into the direction of Zanzibar or Zanzibar-ish approach.\"
\n
The interview highlights a critical advantage of centralized authorization systems:
\n\n\n\"The real end solution to all that is centralization. If there's only one system of record, it's really easy to make sure you've just removed that person from the one single system of record.\"
\n
Pierre-Alexandre describes how Upgrade implemented this approach:
\n\n\n\"When someone leaves the company or when someone changes teams, we do have automation that would propagate the changes across the applications you have access to down to the SpiceDB instance. So we have this kind of sync infrastructure that makes sure that this is replicated within a few seconds.\"
\n
For companies operating in regulated industries like fintech, having a cloud-native solution is essential. Pierre-Alexandre emphasizes:
\n\n\n\"We're on Amazon EKS, so Kubernetes Foundation... For us, finding something that was cloud native, Kubernetes native was very important.\"
\n
One of the most forward-looking parts of the discussion addresses the intersection of authorization and AI:
\n\n\n\"The real kind of question is actually applying authorization to AI and not vice versa... now with AI, we don't have that same advantage of it just being like a couple folks. If you train a model or have tons of embeddings around your personal private data, now anyone querying that LLM has access to all that data at your business.\"
\n
Upgrade is already exploring solutions:
\n\n\n\"In our lab, we're exploring different patterns, leveraging SpiceDB where we have a lot of internal documentation and the idea is to ingest those documents and tag them on SpiceDB and then leveraging some tools in the GenAI space to query some of this data.\"
\n
Perhaps the most quotable moment from the interview is Jimmy's passionate plea to developers:
\n\n\n\"If there's like one takeaway from kind of us building this business, it's that folks shouldn't be building their own authorization. Whether the tool is SpiceDB that they end up choosing or another one, like developers, they wouldn't dream of building their own database when they're building their applications. But authorization systems, they've been studied and researched and written about in computer science since the exact same time. Yet every developer thinks they can write custom code for each app implementing custom logic for a thing they don't have no background in, right? And I think this is kind of just like preposterous.\"
\n
Pierre-Alexandre adds a pragmatic perspective from the customer side:
\n\n\n\"Obviously, I probably have decided to go with SpiceDB sooner. But yeah, I mean, we had to do our homework and learn.\"
\n
The full interview covers additional topics not summarized here, including:
\nInterested in learning more about modern authorization approaches after watching the interview?
\nDon't miss this insightful conversation that challenges conventional wisdom about authorization and provides a glimpse into how forward-thinking companies are approaching these challenges. Watch the full interview now →
", + "url": "https://authzed.com/blog/kubecon-europe-2025-highlights-navigating-authorization-challenges-in-fintech-with-authzeds-jimmy-zelinskie-and-pierre-alexandre-lacerte-from-upgrade", + "title": "Techstrong.tv Interview with Jimmy Zelinskie and Pierre-Alexandre Lacerte from Upgrade", + "summary": "Watch AuthZed CPO Jimmy Zelinskie and Upgrade's Pierre-Alexandre Lacerte discuss modern authorization challenges, relationship-based access control, and why companies shouldn't build their own authorization systems in this insightful KubeCon Europe 2025 interview with Techstrong.", + "image": "https://authzed.com/images/blogs/blog-featured-image.png", + "date_modified": "2025-04-08T16:15:00.000Z", + "date_published": "2025-04-08T16:15:00.000Z", + "author": { + "name": "Sam Kim", + "url": "https://github.com/samkim" + } + }, + { + "id": "https://authzed.com/blog/meet-dibs-the-mascot-bringing-spicedb-to-life", + "content_html": "We're pleased to introduce you to the official SpiceDB mascot – the Muad'dib, or Dibs for short. As we prepare for KubeCon + CloudNativeCon EU in London, we're unveiling this distinctive character who will represent our project in meaningful ways.
\n
The name \"Muad'dib\" continues our tradition of referencing Frank Herbert's Dune series. For those unfamiliar with Dune, the Muad'dib is a small desert mouse known for its resilience and adaptability—qualities we strive to incorporate into SpiceDB.
\nWith its distinctive oversized ears and agile movements, the Muad'dib is far more than just a charming emblem. In the unforgiving desert, every step matters, and this remarkable creature's fast, efficient navigation mirrors how SpiceDB processes complex data in real time. Those attentive ears serve as a reminder to remain vigilant and responsive, embodying survival instincts honed in the harshest environments.
\nMuch like SpiceDB's approach to authorization challenges, the Muad'dib transforms obstacles into opportunities. This desert-dwelling creature represents our commitment to resilience, speed, and a collaborative spirit – all values that drive SpiceDB forward in the cloud-native ecosystem.
\nWe will be at KubeCon + CloudNativeCon in London, so stop by our booth (#N632) to pick up your very own Dibs swag.
\nAnd join us for our scheduled activities:
\nKelsey Hightower AMA at our booth (#N632)
\n
Come party with AuthZed, Spotify, Rootly and Infiscal at the Munich Cricket Club Canary Wharf.
\n\n
We would love to talk with you about how we can help fix your access control and provide the infrastructure necessary to support your applications.
\nWe look forward to seeing how our community connects with Dibs the Muad'dib. Here's how you can get involved:
\nThis creature represents not just our project, but the spirit of our community – adaptable, resilient, and ready to navigate complex challenges.
\nWelcome, Dibs.
", + "url": "https://authzed.com/blog/meet-dibs-the-mascot-bringing-spicedb-to-life", + "title": "Meet Dibs: The Mascot Bringing SpiceDB to Life", + "summary": "Meet Dibs the Muad'dib, SpiceDB's new mascot that embodies our commitment to resilience, adaptability, and precision in solving complex authorization challenges. Drawing inspiration from Frank Herbert's Dune universe, this vigilant desert creature symbolizes how SpiceDB navigates the harsh terrain of modern access control with efficiency and intelligence.", + "image": "https://authzed.com/images/upload/blog-meet_dibs-2x.png", + "date_modified": "2025-03-25T12:17:00.000Z", + "date_published": "2025-03-25T12:17:00.000Z", + "author": { + "name": "Corey Thomas", + "url": "https://www.linkedin.com/in/cor3ythomas/" + } + }, + { + "id": "https://authzed.com/blog/the-evolution-of-expiration", + "content_html": "We are excited to announce that as of the SpiceDB v1.40 release, users now have access to a new experimental feature: Relationship Expiration. When writing relationships, requests can now include an optional expiration time, after which a relationship will be treated as removed, and eventually automatically cleaned up.
\nEven when first setting out to create SpiceDB, there was never any doubt that users would want time-bound access control to their resources. However, the inspiration for SpiceDB, Google's Zanzibar system, has no public documentation for how this functionality is built. As our initial goals for the SpiceDB project were to be as faithful to Google's design as possible, we initially left expiration as an exercise for the user.
\nWithout explicit support within SpiceDB, users could still use external systems like workflow engines (e.g. Temporal) to schedule calls to the SpiceDB DeleteRelationships or WriteRelationships APIs in order to solve this problem. This is a perfectly valid approach, but it has a major tradeoff: users must adopt yet another system to coordinate their usage of the SpiceDB API.
\nAfter we had successfully reached our goal of being the premier implementation of the concepts expressed in the Google Zanzibar paper, we turned our focus to improving developer experience and more real-world requirements outside of the walls of Google. This led us to collaborating with Netflix on a system for supporting lightweight policies to more effectively model ABAC-style use cases. This design came to be known as Relationship Caveats. Caveats allow SpiceDB users to write conditional relationships that exist depending on whether a CEL expression evaluates to true while their request is being processed. With the introduction of Caveats, SpiceDB had its first way to create time-bounding without relying on any external system. The use case was so obvious, even our first examples of Caveats demonstrated how to implement time-bounded relationship expiration.
\nAs more SpiceDB users adopted Caveats, we began to acknowledge some trends in its usage. Many folks didn't actually need or want the full expressiveness of policy; instead they cared solely about modelling expiration itself. Eventually it became obvious that expiration was its own fully-fledged use case. If we could craft an experience specifically for expiration, we could steer many folks away from some of the tradeoffs associated with caveats. If you still need caveats for reasons other than expiration and you're wondering if relationships support both caveats and expiration simultaneously, they do!
\nIf you've spent time reading some of the deeper discussions on SpiceDB internals or studying other systems, you might be familiar with the fact that time is incredibly nebulous in distributed systems. Distributed systems typically eschew \"wall clocks\" altogether. Instead, for correctness they need to model time based on the ordering of events that occur in the system. This observation, among others, ultimately led Leslie Lamport to win a Turing Award. SpiceDB is no exception to this research: the opaque values encoded into SpiceDB's ZedTokens act as logical clocks used to provide consistency guarantees throughout the system.
\nIf the problem here isn’t already clear: fundamentally, relationship expiration is tied to wall clock time, but distributed systems research proves this is a Bad Idea™. In order to avoid any inconsistencies caused by the skew in synchronization of clocks across machines, SpiceDB implements expiration by pushing as much logic into the underlying datastore as possible. For a datastore like PostgreSQL, there is no longer a synchronization problem because there's only one clock that matters: the one on the leader's machine. Some datastores even have their own first-class expiration primitives that SpiceDB can leverage in order to offload this logic entirely while ensuring that the removal of relationships is done as efficiently as possible. This strategy is only possible because of SpiceDB's unique architecture of reusing other existing databases for its storage layer rather than the typical disk-backed key-value store.
\nThere are only a few steps required to try out expiration once you've upgraded to SpiceDB v1.40:
\nspicedb serve --enable-experimental-relationship-expiration [...]\n\nuse expiration\u000b\u000b\n\ndefinition folder {}\u000b\ndefinition resource {\n relation parent: folder\n}\n\nuse expiration\u000b\u000b\n\ndefinition folder {}\u000b\n definition resource {\n relation parent: folder with expiration\n}\n\nWriteRelationshipsRequest { Updates: [\n RelationshipUpdate {\n Operation: CREATE\n Relationship: {\n Resource: { ObjectType: \"resource\", ObjectId: \"123\", },\n Relation: \"parent\",\n Subject: { ObjectType: \"folder\", ObjectId: \"456\", },\n OptionalExpiresAt: \"2025-12-31T23:59:59Z\"\n }\n }]\n}\n\nRelationship Expiration is a great example of our never-ending journey to achieve the best possible performance for SpiceDB users. As SpiceDB is put to the test in an ever-increasing number of diverse enterprise use-cases, we learn new things about where optimizations should be made in order to deliver the best product for scaling authorization. Sometimes it requires going back to the drawing board on a problem we thought we had previously solved and totally reconsidering its design. With that, I encourage you to go out and experiment with Relationship Expiration so that we learn even more about the problemspace and continue refining our approach.
", + "url": "https://authzed.com/blog/the-evolution-of-expiration", + "title": "The Evolution of Expiration", + "summary": "We are excited to announce that as of the SpiceDB 1.40 release, users now have access to a new experimental feature: Relationship Expiration. When writing relationships, requests can now include an optional expiration time, after which a relationship will be treated as removed, and eventually automatically cleaned up.", + "image": "https://authzed.com/images/blogs/blog-eng-relationship-expiration-hero-2x.png", + "date_modified": "2025-02-13T10:16:00.000Z", + "date_published": "2025-02-13T10:16:00.000Z", + "author": { + "name": "Jimmy Zelinskie", + "url": "https://twitter.com/jimmyzelinskie" + } + }, + { + "id": "https://authzed.com/blog/build-time-bound-permissions-with-relationship-expiration-in-spicedb", + "content_html": "Today we are announcing the experimental release of Relationship Expiration, which is a straightforward, secure, and dynamic way to manage time-bound permissions directly within SpiceDB.
\nBuilding secure applications is hard, especially when it comes to implementing temporary access management for sensitive data. You need to grant the right level of access to the right people for the right duration, without creating long-term vulnerabilities or drowning in administrative overhead.
\nConsider the last time you needed to give a contractor access to your brand guidelines, a vendor access to a staging environment, or a new employee access to onboarding materials. The usual workarounds – emailing files, uploading to external systems, or (please, please don’t) sharing logins – quickly become a tangled mess of version control nightmares, security risks, and administrative headaches. And what happened when you completed the project? How did you guarantee that access gets promptly revoked? Leaving lingering access privileges hanging around is an AppSec war room waiting to happen.
\nWe’re helping application development teams solve this problem with this powerful new feature in SpiceDB v1.40.
\n\"Authorization is essential for building secure applications with advanced sharing capabilities,\" said Larry Carvalho, Principal Consultant and Founder at RobustCloud. \"SpiceDB, inspired by Google's approach to authorization, provides developers with a much-needed feature for managing fine-grained access control. By leveraging AuthZed’s expertise, developers can build the next generation of applications with greater efficiency, security, and flexibility.\"
\nWhile workarounds exist – scheduling API calls with external tools like Temporal or crafting complex policies – they add complexity and can be difficult to manage and deploy at scale (think 10,000 relationships generated and refreshed every 10 minutes). SpiceDB's Relationship Expiration provides first-class support for building time-bound permissions, leveraging SpiceDB’s powerful relationship-based approach.
\nAs the name suggests, expirations are attached as a trait to relationships between subjects and resources in SpiceDB’s graph-based permissions evaluation engine. Once the relationship expires, SpiceDB automatically removes it. Without this built-in support, conditional time-bound relationships in a Zanzibar-style schema clutter the permissions graph, bloating the system and impacting performance.
\nTime-bound access helps teams to collaborate securely and efficiently. By eliminating the friction of manual access management, it frees up valuable time and resources while minimizing the risk of human error. Knowing that access will automatically expire fosters a culture of confident sharing, removing the hesitation that can lead to information silos and slower project cycles. Additionally, just-in-time access with session-based privileges streamlines workflows and minimizes the risk of unauthorized access.
\nPut access control in the hands of your users: they can define expiration limits for the resources they manage, unlocking powerful workflows like time-limited review cycles or project-based access. A designer, for example, could grant edit access to a file for a specific review period, with access automatically revoked afterward. This granular control enhances security by minimizing the window of opportunity for unauthorized access and fosters a culture of security awareness. Leave a positive impression with custom permissions options that welcome a broad range of use cases.
\nWith millions of users and billions of resources, authorization can become a major performance bottleneck, especially since permissions checks sit in the critical path between user input and service response. By automatically removing expired relationships, SpiceDB reduces the size of its database and load on its system, leading to more performant authorization checks and lower costs.
\nWant to learn more TODAY? Join Sohan, AuthZed technical evangelist, and Joey Schorr, one of the founders of AuthZed, during our biweekly Office Hours livestream at 9 am PT / 12 pm ET on February 13th! We hope to see you there.
\n\nOr, hop over to Jimmy Zelinskie’s blog post to learn more about how to implement expiring relationships and try them out in SpiceDB today.
\nYou may have noticed that we've lined up this launch just in time for Valentine’s Day. Most relationships between humans do, sadly, have an expiration date… To recognize the (somewhat) unfortunate timing of this release, we’ve compiled a Spotify list of songs sourced from the AuthZed team just for those nursing broken hearts this season. And if you’re one of the lucky ones celebrating, hey, it’s fun music to jam to while you learn SpiceDB.
\n\nIf you haven’t already, give SpiceDB a star on GitHub, or follow us on LinkedIn, X, or BlueSky to stay up to date on all things AuthZed. Or ready to get started? Schedule a call with us to talk about how we can help with your authorization needs.
", + "url": "https://authzed.com/blog/build-time-bound-permissions-with-relationship-expiration-in-spicedb", + "title": "Build Time-Bound Permissions with Relationship Expiration in SpiceDB", + "summary": "Today we are announcing the experimental release of Relationship Expiration, which is a straightforward, secure, and dynamic way to manage time-bound permissions directly within SpiceDB. \n", + "image": "https://authzed.com/images/blogs/blog-relationship-expiration-hero-2x.png", + "date_modified": "2025-02-13T10:16:00.000Z", + "date_published": "2025-02-13T10:16:00.000Z", + "author": { + "name": "Jess Hustace", + "url": "https://twitter.com/_jessdesu" + } + }, + { + "id": "https://authzed.com/blog/deepseek-balancing-potential-and-precaution-with-spicedb", + "content_html": "DeepSeek has emerged as a phenomenon since its announcement in late December 2024 by hedge fund company High-Flyer. The AI industry and general public have been captivated by both its capabilities and potential implications.
\nSecurity has been at the forefront of recent conversation due to reports from Wiz that the DeepSeek database is leaking sensitive information, including chat history as well as geopolitical concerns. Even RedMonk analyst Stephen O’Grady discussed DeepSeek and the Enterprise focusing on considerations for business adoption.
\nAt AuthZed, we recognize that trust and security fundamentally shape how organizations evaluate AI models, which is why we're sharing our perspective on this crucial discussion.
\nWhat makes DeepSeek particularly noteworthy is its unique combination of features. As an open-source model, it demonstrates performance comparable to frontier models from industry leaders like OpenAI and Anthropic, yet achieves this with (reportedly) significantly lower training costs. The R1 version exhibits impressive reasoning capabilities, further challenging conventional assumptions about the infrastructure investments required for advancing LLM performance.
\nWhile these factors drive DeepSeek’s popularity, they’ve also drawn skepticism alongside geopolitical considerations based on DeepSeek’s origin. The uncertainty surrounding the source of training data and potential biases in responses warrants careful consideration. A recent data breach of the hosted service has heightened privacy concerns, particularly given the official hosted service’s terms of service permit user data retention for future model training.
\nDespite the concerns, users and companies increasingly express interest in exploring its capabilities. Organizations seeking to leverage DeepSeek's capabilities while maintaining data security can adopt permissions systems to define data access controls. This strategy is especially relevant for applications built on DeepSeek's large language models, where protecting sensitive information is paramount.
\nSpiceDB offers a robust framework for organizations integrating AI capabilities. Its fine-grained permissions help avoid oversharing by letting you precisely define which data the model can and cannot access. This granular control extends beyond data access - you can prevent excessive agency by explicitly defining the scope of actions a DeepSeek-based agent is permitted to take. This dual approach to security - controlling both data exposure and action boundaries - makes SpiceDB particularly valuable for organizations that want to leverage DeepSeek’s capabilities but in a controlled environment.
\nTo help organizations get started, we've created a demo notebook showcasing SpiceDB integration with a DeepSeek-based RAG system: https://github.com/authzed/workshops/tree/deepseek/secure-rag-pipelines
\nFor further exploration and community support, join our SpiceDB Discord community to connect with other developers implementing secure AI applications.
", + "url": "https://authzed.com/blog/deepseek-balancing-potential-and-precaution-with-spicedb", + "title": "DeepSeek: Balancing Potential and Precaution with SpiceDB", + "summary": "DeepSeek has emerged as a phenomenon since its announcement in late December 2024 and security has been at the forefront of recent conversation. At AuthZed, we recognize that trust and security fundamentally shape how organizations evaluate AI models, which is why we're sharing our perspective on this crucial discussion.", + "image": "https://authzed.com/images/blogs/blog-featured-image.png", + "date_modified": "2025-01-31T07:56:00.000Z", + "date_published": "2025-01-31T07:56:00.000Z", + "author": { + "name": "Sam Kim", + "url": "https://github.com/samkim" + } + }, + { + "id": "https://authzed.com/blog/2024-soc2-reflection", + "content_html": "I'm happy to announce that AuthZed recently renewed our SOC2 compliance and our SOC2 Type 2 and SOC3 reports are now available on security.authzed.com.
\nHaving just endured the audit process again, I figured it would be a good time to reflect on my personal feelings toward compliance and how my opinion has evolved.
\nIf you're reading this now and aren't familiar with SOC2 and SOC3, I'll give you an overview by someone that isn't trying to sell you a compliance tool (feel free to skip this section):
\nSOC (System and Organization Controls) is a suite of annual reports that result from conducting an audit of the internal controls that you use to guarantee security practices at your company. An example of an \"internal control\" is a company-wide policy that enforces that \"all employees have an anti-virus installed on their devices\". Controls vary greatly and can be automated by using software like password managers and MDM solutions, but some will always require human intervention, such as performing quarterly security reviews and annual employee performance reviews.
\nIn the tech industry, SOC2 is the standard customers expect (or ISO27001 if you're in the EU, but they are similar enough that you often only need either one). As I wrote this, it came to my attention that I have no idea what SOC1 is, so I looked it up to discover that it is apparently a financial report which I've never heard of customers requesting in the tech industry. SOC3 is a summary of a SOC2 report that contains less detail and is designed to be more publicly sharable so that you don't necessarily need to sign an NDA to get some details. SOC2 comes in two variants \"Type 1\" and \"Type 2\". It's fairly confusing, but this is just shorthand for how long the audit period was. Type 1 means that the audit looked at the company at one point in time, while Type 2 means that the auditor actually monitored the company over a period of time usually 6 or 12 months.
\nTo engineering organizations, compliance is often seen as a nuisance or a distraction from shipping code that moves the needle for actual security issues. Software engineers are those deepest in the weeds, so they have the code that they're familiar with at the top of mind when you ask where security concerns lie. Because I knew where the bodies were buried when I first transitioned my career to product management from engineering, I always tried to push back and shield my team from having to deliver compliance features. The team celebrated this as a win for focus, but we never got to fully understand the externalities of this approach.
\nFast forward a few years, I've now gotten much wider exposure to the rest of the business functions at a technology company. From the overarching view of an executive, the perspective of the software engineer seems quite amiss. If you asked an engineer what they're concerned about, it might be that they quickly used the defaults for bcrypt and didn't spend the time evaluating the ideal number of bcrypt rounds or alternative algorithms. This perspective is valuable, but can also be missing the forest for the trees; it's far easier to perform phishing attacks on a new hire than it is to reverse engineer the cryptography in their codebase. That simple fact makes it clear that if you haven't already addressed the foundational security processes at your business, it doesn't matter how secure the software you're building is.
\nAll of that said, AuthZed's engineering-heavy team is not innocent from this line of thinking, especially since our core product is engineering security infrastructure. However, if we put our egos aside, there is one thing that reigns supreme regardless of the product you're building: the trust you build with your customers.
\nThe compliance industry was never trying to hide that its end goal is purely trust in processes. SOC2 is defined by the American Institute of Certified Public Accountants and not a cybersecurity standards body; this is because compliance is about ensuring processes at your business and not finding remote code execution in your codebase. That doesn't mean that compliance cannot uncover deep code issues because SOC2 audits actually require you to perform an annual penetration test from an actual cybersecurity vendor. Coding vulnerabilities are only one aspect of the comprehensive approach that compliance is focused on.
\nWithout compliance, our industry would be stuck having to blindly trust that vendors are following acceptable security practices. By conforming to the processes required for certifications like SOC2, we can build trust with our partners and customers as well as prove the maturity of our products and business. While it may feel like toil at times, it's a necessary evil to ensure consistency across our supply chains.
\nThe final thought I'd like to leave you with is the idea that compliance isn't a checkbox to do business. It's a continuous process where you offer transparency to your customers to prove that they should trust you. I'm looking forward to seeing if my opinions change next renewal.
\nI'd like to thank the teams at SecureFrame and Modern Assurance who we've collaborated with during this last audit as well as all of the vendors and data subprocessors we rely on to operate our business everyday.
", + "url": "https://authzed.com/blog/2024-soc2-reflection", + "title": "Our SOC2 Renewal and Reflections on Compliance", + "summary": "I'm happy to announce that AuthZed recently renewed our SOC2 compliance and our SOC2 Type 2 and SOC3 reports are now available on security.authzed.com.\nHaving just endured the audit process again, I figured it would be a good time to reflect on my personal feelings toward compliance and how my opinion has evolved.\n", + "image": "https://authzed.com/images/blogs/blog-featured-image.png", + "date_modified": "2025-01-07T20:20:00.000Z", + "date_published": "2025-01-07T20:20:00.000Z", + "author": { + "name": "Jimmy Zelinskie", + "url": "https://twitter.com/jimmyzelinskie" + } + }, + { + "id": "https://authzed.com/blog/the-dual-write-problem", + "content_html": "The dual-write problem presents itself in all distributed systems. A system that uses SpiceDB for authorization and also has an application database (read: most of them) is a distributed system. Working around the dual-write problem typically requires a non-trivial amount of work.
\nIf you've heard this one before, feel free to skip down where we talk about solutions and approaches to the dual-write problem. If it's your first time, welcome!
\nLet's consider a typical monolithic web application. Perhaps it's for managing and sharing files and folders, which makes it a natural candidate for a relation-based access control system like SpiceDB. The application has an upload endpoint that looks something like the following:
\ndef upload(req):\n validate_request(req)\n with new_transaction() as db:\n db.write_file(req.file)\n return Response(status=200)\n\nAll of the access control logic is neatly contained within the application database, so no other work needed to happen up to this point. However, we want to start using SpiceDB in anticipation of the application growing more complex and services splitting off of our main monolith.
\nWe start with a simple schema:
\ndefinition user {}\n\ndefinition folder {\n relation viewer: user\n permission view = viewer\n}\n\ndefinition file {\n relation viewer: user\n relation folder: folder\n permission view = viewer + folder->viewer\n}\n\nNote that if a user is a viewer of the folder, they are able to view any file within the folder. That means that we'll need to keep SpiceDB updated with the relationships between files and folders, which is held in the folder relation on the file.

That doesn't sound so bad. Let's go and implement it:
\ndef upload(req):\n validate_request(req)\n with new_transaction() as db:\n file_id = db.write_file(req.file)\n write_folder_relationship(\n file_id=file_id\n folder_id=req.folder_id\n )\n \n return Response(status=200)\n\nWe've got a problem, though. What happens if the server crashes? We're going to use a server crash as an example problem because it's relatively conceptually simple and is also something that's hard to recover from. Let's mark up the function and then consider what happens if the server crashes at each point:
\ndef upload(req):\n validate_request(req)\n # point 1\n with new_transaction() as db:\n file_id = db.write_file(req.file)\n # point 2\n write_folder_relationship(\n file_id=file_id\n folder_id=req.folder_id\n )\n # point 3\n # point 4 (outside of the transaction)\n return Response(status=200)\n\nNote that the points refer to the boundaries between lines of code, rather than pointing at the line of code above or below them.\nHere's an alternative view of things in a sequence diagram:

If the server crashes at points #1 or #4, we're fine - the request will fail, but we're still in a consistent state. The application server and SpiceDB agree about what the system should look like. If the server crashes at point #2, we're still okay - we've opened a database transaction but we haven't committed it, so the database will roll back the transaction and everything will be fine. If we crash at point #3, however, we're in a state where we've written to SpiceDB but we haven't committed the transaction to our database, and now SpiceDB and our database disagree about the state of the world.
\nThere isn't a neat way around this problem within the context of the process, either. This blog post goes further into potential approaches and their issues if you're curious. Things like adding a transactional semantic to SpiceDB or reordering the operations move the problem around but don't solve it, because there's still going to be some boundary in the code where the process could crash and leave you in an inconsistent state.
\nNote as well that there's nothing particularly unique about the dual-write problem in systems using SpiceDB and an application database, either. If we were writing to two different application databases, or to an application database and to a cache, or to two different RPC-invoked services, we still have the same issue.
\nWe can solve the dual-write problem in SpiceDB using a few different approaches, each with varying levels of complexity, prerequisites, and tradeoffs to be made
\nDoing nothing is an option that may be viable in the right context.\nThe sort of data inconsistency where SpiceDB and your application database disagree can be hard to diagnose.\nHowever, if there are mechanisms by which a user could recognize that something is wrong and remediate it in a timely manner, or if the authorized content in question isn't particularly sensitive, you may be able to run a naive implementation and avoid the complexity associated with other approaches.\nThe more stable your platform is, the more likely this is to cause fewer issues.
\nOut-of-band consistency checking would be one step beyond \"doing nothing.\"\nIf you have a source of truth that SpiceDB's state is meant to reflect in a given context, you can check that the two systems agree on a periodic basis.\nIf there's disagreement, the issues can be automatically remediated or flagged for manual intervention.
\nThis is a conceptually simple approach, but it's limited by both the size of your data and the velocity of changes to your data.\nThe more data you have, the more expensive and time-consuming the reconciliation process becomes.\nIf the data change rapidly, you could have false positives or false negatives when a change has been applied\nto one system but not the other.\nThis could theoretically be handled through locking or otherwise pinning SpiceDB and your application's database so that their data\nreflect the same version of the world while you're checking their associated states,\nbut that will greatly reduce your ability to make writes in your system.\nThe sync process itself can become a source of drift or inconsistency.
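\nA minimal sketch of such a reconciliation pass, assuming hypothetical helpers that list the viewer state held by each system:

def find_viewer_drift(db, spicedb):
    # Treat the application database as the source of truth that SpiceDB should mirror.
    expected = set(db.list_folder_viewers())                  # e.g. {("folder:1", "user:42"), ...}
    actual = set(spicedb.list_folder_viewer_relationships())

    missing_in_spicedb = expected - actual   # repair with idempotent TOUCH writes
    stale_in_spicedb = actual - expected     # delete, or flag for manual review
    return missing_in_spicedb, stale_in_spicedb

\nA periodic job can then TOUCH the missing relationships and delete or report the stale ones, with the caveat that anything written while the two listings were being taken may show up as transient drift.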
\nFor certain kinds of relationships and data, it may be sufficient to make SpiceDB the source of truth for that particular information.\nThis works best for data that matches SpiceDB's storage and access model: binary presence or absence of a relationship between two objects, and no requirement to sort those relationships or filter by anything other than which subject or object they're associated with.
\nIf your data meet those conditions, you can remove the application database from the question and make a single write to SpiceDB and avoid the dual-write problem entirely.
\nFor example, if we wanted to add a notion of a file \"owner\" to our example application, we probably wouldn't need an owner column with a foreign key to a user ID in our application database.\nInstead, we could represent the relationship entirely with an owner relation in SpiceDB, such that an API handler for adding or updating an owner of a file or folder would only talk to SpiceDB and not to the application database.\nBecause only one system is being written to in the handler, we avoid the dual-write problem.

The limitation here is that if you wanted to build a user interface where a user can see a table of all of the files they own, you wouldn't be able to filter, sort, or paginate\nthat table as easily, because SpiceDB isn't a general-purpose database and doesn't support that functionality in the same way.
\nEvent sourcing and CQRS are related ideas that involve treating your system as eventually consistent.\nRather than an API call being a procedure that runs to completion, an API call becomes an event that kicks off a chain of actions.\nThat event goes into an event stream, where consumers (to use Kafka's language) can pick them up and process them, which may involve producing new events.\nMultiple consumers can listen to the same topic.\nThe events flow through the system until they've all been processed, and the surrounding runtime ensures that nothing is dropped.
\nThere's a cute high-level illustration of how an event sourcing system works here: https://www.gentlydownthe.stream/
\nIn our example application, it might look like the following:
\nThe upside is that you're never particularly worried about the dual-write problem, because any individual failure of a subscriber can be recovered and re-run.\nEverything just percolates through until the system arrives at a new consistent state.
\nThe downside is that you can't treat API calls as RPCs.\nThe API call doesn't represent a change to the state of your system, but rather a command or request that will\neventually result in your desired changes happening.\nYou can work around this by having the client or UI listen to an event stream from the backend,\nsuch that all you're doing is passing messages back and forth, but this often requires\nsignificant rearchitecture, and not every runtime is amenable to this architecture.
\nHere are some examples of event queues that you might see in an event sourcing system:
\n\nA durable execution environment is a set of software tools that let you pretend that you're writing relatively simple transactional logic within your application while abstracting over the concerns involved in writing to multiple services. They promise to take care of errors, rollbacks, and coordination, provided you've written the according logic into the framework.
\nAn upside is that you don't have to rearchitect your system if you aren't already using the paradigms necessary for event sourcing.\nThe code that you write with these systems tends to be familiar, procedural, and imperative, which lowers the barrier to entry\nfor a dev trying to solve a dual-write problem.
\nA downside is that it can be difficult to know when your write has landed, because you're effectively dispatching it off to a job runner.\nThe business logic is moved off of the immediate request path. This means that the result of the business logic is also off of the request\npath, which raises a question of what you would return to an API client.
\nSome durable execution environments are explicitly for running jobs and don't give you introspection into the results;\nothers can be inserted into your code in such a way that you can wait for the result and pretend that everything happened synchronously.\nNote that this means that the associated runtime that handles those jobs becomes a part of the request path, which can carry operational overhead.
\nTemporal, Restate, Windmill, Trigger.dev, and Inngest are a few examples of durable execution environments. You'll have to evaluate which one best fits your architecture and infrastructure.
\nA transactional outbox pattern is related to both Event Sourcing and Durable Execution, in that it works around the dual-write problem\nthrough eventual consistency.\nThe idea is that within your application database, when there's a change that needs to be written to SpiceDB, you write to an outbox table, which is an append-only log of modifications that should happen to SpiceDB.\nThat write can happen within the same database transaction, which means you don't have the dual write problem.\nYou then read that log (or subscribe to a changestream) with a separate process which marks the entries as it reads them and then submits them to SpiceDB through some other mechanism.
\nAs long as this process is effectively single-threaded and retries operations until they succeed (which is helped by SpiceDB allowing for idempotent writes with its TOUCH operation), you have worked around the dual-write problem.
\nOne of the most commonly-used tools in a system based on the transactional outbox pattern is Debezium.\nIt watches changes in an outbox table and submits them as events to Kafka, which can then be consumed downstream to write to another system.
\nSome other resources are available here:
\nUnfortunately, when making writes to multiple systems, there are no easy answers. SpiceDB isn't unique in this regard, and most systems of sufficient complexity will eventually run into some variant of this problem. Which solution you choose will depend on the shape of your existing system, the requirements of your domain, and the appetite of your organization to make the associated changes. We still think it's worth it - when you centralize the data required for authorization decisions, you get big wins in consistency, performance, and safety. It just takes a little work.
", + "url": "https://authzed.com/blog/the-dual-write-problem", + "title": "The Dual-Write Problem", + "summary": "The dual-write problem is present in any distributed system and is difficult to solve. We discuss where the problem arises and several approaches.", + "image": "https://authzed.com/images/blogs/blog-featured-image.png", + "date_modified": "2025-01-02T12:48:00.000Z", + "date_published": "2025-01-02T12:48:00.000Z", + "author": { + "name": "Tanner Stirrat", + "url": "https://www.linkedin.com/in/tannerstirrat/" + } + }, + { + "id": "https://authzed.com/blog/spicedb-amazon-ecs", + "content_html": "Amazon Elastic Container Service (ECS) is a fully managed container orchestration service that simplifies your deployment, management, and scaling of containerized applications. This blog will illustrate how you can install SpiceDB on Amazon ECS and is divided into 3 parts:
\nIt's important to note that this guide is meant for:
\nIt is not recommended to use SpiceDB on ECS as a production deployment target. See the final section of this post for more details.
\nHere are the prerequisites to follow this guide:
\nLet’s start by pushing the SpiceDB Docker image to Amazon Elastic Container Registry (ECR)
\n
Alternately, you can create this using the AWS CLI with the following command:
\naws ecr create-repository --repository-name spicedb --region <your-region>\n\nAmazon ECR requires Docker to authenticate before pushing images.\nRetrieve an authentication token and authenticate your Docker client to your registry using the following command (you’ll need to replace region with your specific AWS region, like us-east-1)
aws ecr get-login-password --region <region> | docker login --username AWS --password-stdin <account-id>.dkr.ecr.<region>.amazonaws.com\n\ndocker pull authzed/spicedb:latest\ndocker build -t spicedb .\n\ndocker tag spicedb:latest <account-id>.dkr.ecr.<region>.amazonaws.com/spicedb:latest\n\nNote: If you are using an Apple ARM-based machine (Ex: Mac with Apple Silicon) and you eventually want to deploy it to a x86-based instance you need to build this image for multi-architecture using the buildx command.
You cannot use docker buildx build with an image reference directly.\nInstead, create a lightweight Dockerfile to reference the existing image by adding this one line:
FROM authzed/spicedb:latest
and save it in the directory. While in that directory, build and push a Multi-Architecture Image using the buildx command:
docker buildx build --platform linux/amd64,linux/arm64 -t <account-id>.dkr.ecr.<region>.amazonaws.com/spicedb:latest --push .\n\ndocker push <account-id>.dkr.ecr.<region>.amazonaws.com/spicedb:latest\n\nReplace account-id and region with your AWS account ID and region.
spicedb repository. Verify that the spicedb:latest image is available.Note: All the above commands are pre-filled with your account details and can be seen by opening your repository on ECR and clicking the View push commands button
\n
Using AWS Console:
\nAlternately, you can create this using the AWS CLI with this command:
\naws ecs create-cluster --cluster-name spicedb-cluster\n\n
If you don’t see these roles, you can create them as follows:
\nCreating ecsTaskExecutionRole:
The ECS Task Execution Role is needed for ECS to pull container images from ECR, write logs to CloudWatch, and access other AWS resources.
\nGo to the IAM Console.
\nClick Create Role.
\nFor Trusted Entity Type, choose AWS Service.
\nSelect Elastic Container Service and then Elastic Container Service Task.
\nClick Next and attach the following policies:
\nOr use these commands using AWS CLI:
\naws iam create-role --role-name ecsTaskExecutionRole \n\n--assume-role-policy-document '{\"Version\": \"2012-10-17\", \"Statement\": [{\"Effect\": \"Allow\", \"Principal\": {\"Service\": \"ecs-tasks.amazonaws.com\"}, \"Action\": \"sts:AssumeRole\"}]}'\n\nAttach the AmazonECSTaskExecutionRolePolicy to the role:
\naws iam attach-role-policy --role-name ecsTaskExecutionRole \n\n--policy-arn arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy\n\nCreating ecsTaskRole (Optional):
The ECS Task Role is optional and should be created if your containers need access to other AWS services such as Amazon RDS or Secrets Manager.
\nOr use these commands using AWS CLI:
\nCreate the role using:
\naws iam create-role --role-name ecsTaskRole \n\n--assume-role-policy-document '{\"Version\": \"2012-10-17\", \"Statement\": [{\"Effect\": \"Allow\", \"Principal\": {\"Service\": \"ecs-tasks.amazonaws.com\"}, \"Action\": \"sts:AssumeRole\"}]}'\n\nAttach any policies based on the specific AWS services your application needs access to:
\naws iam attach-role-policy --role-name ecsTaskRole \n\n--policy-arn arn:aws:iam::<policy-arn-for-service-access>\n\nThe task definition defines how SpiceDB containers will be configured and run. Below is the JSON configuration for the task definition. To create a task definition:
\nAWS Console
\nCopy the JSON below:
\n{\n \"family\": \"spicedb-task\",\n \"networkMode\": \"awsvpc\",\n \"requiresCompatibilities\": [\"FARGATE\"], \n \"cpu\": \"512\", \n \"memory\": \"1024\", \n \"executionRoleArn\": \"arn:aws:iam::<account-id>:role/ecsTaskExecutionRole\", //Copy the ARN from the ecsTaskExecutionRole created above\n \"taskRoleArn\": \"arn:aws:iam::<account-id>:role/ecsTaskRole\", //Copy the ARN from the ecsTaskRole created above\n \"containerDefinitions\": [\n {\n \"name\": \"spicedb\",\n \"image\": \"<account-id>.dkr.ecr.<region>.amazonaws.com/spicedb\", //ECR Repository URI\n \"essential\": true,\n \"command\": [\n \"serve\",\n \"--grpc-preshared-key\",\n \"somekey\" \n ],\n \"portMappings\": [\n {\n \"containerPort\": 50051,\n \"hostPort\": 50051,\n \"protocol\": \"tcp\"\n }\n ],\n \"environment\": [],\n \"logConfiguration\": {\n \"logDriver\": \"awslogs\",\n \"options\": {\n \"awslogs-group\": \"/ecs/spicedb-ecs\",\n \"mode\": \"non-blocking\",\n \"awslogs-create-group\": \"true\",\n \"max-buffer-size\": \"25m\",\n \"awslogs-region\": \"us-east-1\",\n \"awslogs-stream-prefix\": \"ecs\"\n }\n }\n }\n ]\n}\n\nThe command section specifies serve which is the primary command for running SpiceDB.\nThis command serves the gRPC and HTTP APIs by default along with a pre-shared key for authenticated requests.
Note: This is purely for learning purposes so any permissions and relationships written to this instance of SpiceDB will be stored in-memory and not in a persistent database.\nTo write relationships to a persistent database, create a Amazon RDS instance for Postgres and note down the DB name, Master Password and Endpoint.
\nYou can add those into the task definition JSON in the command array like this:
\"command\": [\n \"serve\",\n \"--grpc-preshared-key\",\n \"somekey\",\n \"--datastore-engine\",\n \"postgres\",\n \"--datastore-conn-uri\",\n \"postgres://<username>:<password>@<RDS endpoint>:5432/<dbname>?sslmode=require\"\n ],\n\nThe defaults for username and dbname are usually postgres
You can also use the AWS CLI by storing the above JSON in a file an then running this command
\naws ecs register-task-definition --cli-input-json file://spicedb-task-definition.json\n\nNow that we’ve defined a task, we can create a task that would run within your ECS cluster.\nClick on your ECS Cluster created earlier
\nsuper_admin relation on every object that can be administered, add a root object to the hierarchy, in this example platform.
-Super admin users can be applied to platform and a relation to platform on top level objects.
-Admin permission on resources is then defined as the direct owner of the resource as well as through a traversal of the object hierarchy to the platform super admin.
-
-\n\nAI fundamentally changes the interface, but not the fundamentals of security. Read on to find out why
\n
It feels like eons ago when the Model Context Protocol (MCP) was introduced (it was only in November 2024 lol)
\nIt promised to become the USB-C of AI agents — a universal bridge for connecting LLMs to tools, APIs, documents, emails, codebases, databases and cloud infrastructure. In just months, the ecosystem exploded: dozens of tool servers, open-source integrations, host implementations, and hosted MCP registries began to appear.
\nAs the ecosystem rapidly adopted MCP, it presented the classic challenge of securing any new technology: developers connected powerful, sensitive systems without rigorously applying established security controls and fundamental principles to the new spec. By mid-2025, the vulnerabilities were exposed, confirming that the new AI-native world is governed by the same security principles as traditional software.
\nBelow is the first consolidated timeline tracing the major MCP-related breaches and security failures - what happened, what data was exposed, why it happened, and what they reveal about the new threat surface LLMs bring into organisations.
\nWhat happened: Invariant Labs demonstrated that a malicious MCP server could silently exfiltrate a user’s entire WhatsApp history by combining “tool poisoning” with a legitimate whatsapp-mcp server in the same agent. A “random fact of the day” tool morphed into a sleeper backdoor that rewrote how WhatsApp messages are sent. Invariant Labs Link
Data at risk & why: Once the agent read the poisoned tool description, it happily followed hidden instructions to send hundreds or thousands of past WhatsApp messages (personal chats, business deals, customer data) to an attacker-controlled phone number – all disguised as ordinary outbound messages, bypassing typical Data Loss Prevention (DLP) tooling.
\nWhat happened: Invariant Labs uncovered a prompt-injection attack against the official GitHub MCP server: a malicious public GitHub issue could hijack an AI assistant and make it pull data from private repos, then leak that data back to a public repo. Invariant Labs link
\nData breached & why: With a single over-privileged Personal Access Token wired into the MCP server, the compromised agent exfiltrated private repository contents, internal project details, and even personal financial/salary information into a public pull request. The root cause was broad PAT scopes combined with untrusted content (issues) in the LLM context, letting a prompt-injected agent abuse legitimate MCP tool calls.
\nWhat happened: Asana discovered a bug in its MCP-server feature that could allow data belonging to one organisation to be seen by other organisations using their system. Upguard link.
\nData breached & why: Projects, teams, tasks and other Asana objects belonging to one customer potentially accessible by a different customer. This was caused by a logic flaw in the access control of their MCP-enabled integration (cross-tenant access not properly isolated).
\nWhat happened: Researchers found that Anthropic’s MCP Inspector developer tool allowed unauthenticated remote code execution via its inspector–proxy architecture. An attacker could get arbitrary commands run on a dev machine just by having the victim inspect a malicious MCP server, or even by driving the inspector from a browser. CVE Link
\nData at risk & why: Because the inspector ran with the user’s privileges and lacked authentication while listening on localhost / 0.0.0.0, a successful exploit could expose the entire filesystem, API keys, and environment secrets on the developer workstation – effectively turning a debugging tool into a remote shell. VSec Medium Link
\nWhat happened: JFrog disclosed CVE-2025-6514, a critical OS command-injection bug in mcp-remote, a popular OAuth proxy for connecting local MCP clients to remote servers. Malicious MCP servers could send a booby-trapped authorization_endpoint that mcp-remote passed straight into the system shell, achieving remote code execution on the client machine. CVE Link
Data at risk & why: With over 437,000 downloads and adoption in Cloudflare, Hugging Face, Auth0 and other integration guides, the vuln effectively turned any unpatched install into a supply-chain backdoor: an attacker could execute arbitrary commands, steal API keys, cloud credentials, local files, SSH keys, and Git repo contents, all triggered by pointing your LLM host at a malicious MCP endpoint. Docker Blog
\nWhat happened: Security researchers found two critical flaws in Anthropic’s Filesystem-MCP server: sandbox escape and symlink/containment bypass, enabling arbitrary file access and code execution. Cymulate Link
\nData breached & why: Host filesystem access, meaning sensitive files, credentials, logs, or other data on servers could be impacted. The root cause was poor sandbox implementation and insufficient directory-containment enforcement in the MCP server’s file-tool interface.
\nWhat happened: A malicious MCP server package masquerading as a legitimate “Postmark MCP Server” was found injecting BCC copies of all email communications (including confidential docs) to an attacker’s server. IT Pro
\nData breached & why: Emails, internal memos, invoices — essentially all mail traffic processed by that MCP server were exposed. This was due to a supply-chain compromise / malicious package in MCP ecosystem, and the fact that MCP servers often run with high-privilege accesses which were exploited.
\nWhat happened: While researching Smithery’s hosted MCP server platform, GitGuardian found a path-traversal bug in the smithery.yaml build config. By setting dockerBuildPath: \"..\", attackers could make the registry build Docker images from the builder’s home directory, then exfiltrate its contents and credentials. GitGuardian Blog
Data breached & why: The exploit leaked the builder’s ~/.docker/config.json, including a Fly.io API token that granted control over >3,000 apps, most of them hosted MCP servers. From there, attackers could run arbitrary commands in MCP server containers and tap inbound client traffic that contained API keys and other secrets for downstream services (e.g. Brave API keys), turning the MCP hosting service itself into a high-impact supply-chain compromise.
What happened: A command-injection flaw was discovered in the Figma/Framelink MCP integration: unsanitised user input in shell commands could lead to remote code execution. The Hacker News Link
\nData breached & why: Because the integration allowed AI agents to interact with Figma docs, the flaw could enable attackers to run arbitrary commands through the MCP tooling and access design data or infrastructure. The root cause was the unsafe use of child_process.exec with untrusted input in the MCP server code - essentially a lack of input sanitisation. CVE Link
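\nTo illustrate the class of bug (a generic sketch, not the actual Framelink code; figma-export is a made-up command name), the difference comes down to whether untrusted input ever reaches a shell:

import { exec, execFile } from 'node:child_process';

// Vulnerable pattern: untrusted input is interpolated into a shell command string,
// so a value like 'report.fig; echo pwned' runs a second command.
function exportUnsafe(untrustedPath: string) {
  exec(`figma-export ${untrustedPath}`, (err, stdout) => {
    if (err) console.error(err);
    else console.log(stdout);
  });
}

// Safer pattern: pass arguments as an array so no shell ever parses the input.
function exportSafer(untrustedPath: string) {
  execFile('figma-export', [untrustedPath], (err, stdout) => {
    if (err) console.error(err);
    else console.log(stdout);
  });
}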
..And we’re sure there are more to come. We’ll keep this blog updated with the latest in security and data breaches in the MCP world.
\nAcross all these breaches, common themes appear:
\n1. Local AI dev tools behave like exposed remote APIs
\nMCP Inspector, mcp-remote, and similar tooling turned into Remote Code Execution (RCE) surfaces simply by trusting localhost connections.
2. Over-privileged API tokens are catastrophic in MCP workflows
\nGitHub MCP, Smithery, and WhatsApp attacks all exploited overly broad token scopes.
\n3. “Tool poisoning” is a new, AI-native supply chain vector
\nTraditional security tools don’t monitor changes to MCP tool descriptions.
\n4. Hosted MCP registries concentrate risk
\nSmithery illustrated what happens when thousands of tenants rely on a single build pipeline.
\n5. Prompt injection becomes a full data breach
\nThe GitHub MCP incident demonstrated how natural language alone can cause exfiltration when MCP calls are available.
\nThe Model Context Protocol (MCP) presents a cutting-edge threat surface, yet the breaches detailed here are rooted in timeless flaws: over-privilege, inadequate input validation, and insufficient isolation.
\nAI fundamentally changes the interface, but not the fundamentals of security. To secure the AI era, we must rigorously apply old-school principles of least privilege and zero-trust to these powerful new software components.
\nAs adoption accelerates, organisations must treat MCP surfaces with the same seriousness as API gateways, CI/CD pipelines, and Cloud IAM.
\nBecause attackers already are.
", - "url": "https://authzed.com/blog/timeline-mcp-breaches", - "title": "A Timeline of Model Context Protocol (MCP) Security Breaches", - "summary": "AI fundamentally changes the interface, but not the fundamentals of security. Here's a timeline of security breaches in MCP Servers from the recent past.", - "image": "https://authzed.com/images/blogs/blog-featured-image.png", - "date_modified": "2025-11-25T18:18:00.000Z", - "date_published": "2025-11-25T18:18:00.000Z", - "author": { - "name": "Sohan Maheshwar", - "url": "https://www.linkedin.com/in/sohanmaheshwar/" - } - }, - { - "id": "https://authzed.com/blog/building-a-multi-tenant-rag-with-fine-grain-authorization-using-motia-and-spicedb", - "content_html": "\n\nLearn how to build a complete retrieval-augmented generation pipeline with multi-tenant authorization using Motia's event-driven framework, OpenAI embeddings, Pinecone vector search, SpiceDB permissions, and natural language querying.
\n
If I were hard-pressed to pick my favourite computer game of all time, I'd go with Stardew Valley (sorry, Dangerous Dave). The stats from my Nintendo profile are all the proof you need:
\n
Stardew Valley sits at the top with 430 hours played, and in second place is Mario Kart (not pictured) with ~45 hours. That's a significant difference, and it should indicate how much I adore this game.
\nWe've been talking about the importance of Fine-Grained Authorization and RAG recently, so when I sat down to build a sample use case for a production-grade RAG pipeline with fine-grained permissions, my immediate thought went to Stardew Valley.
\nFor those not familiar, Stardew Valley is a farm life simulation game where players manage a farm by clearing land, growing seasonal crops, and raising animals. So I thought I could build a logbook for a large farm that one could query using natural language. This use case is ideal for a RAG pipeline (a technique that uses external data to improve the accuracy, relevance, and usefulness of an LLM's output).
\nI focused on building something that was as close to production-grade as possible (and perhaps strayed from the original intent of a single farm), where an organization can own farms and the data from those farms. The farms contain harvest data, and users can log and query data for the farms they're part of. This creates a sticky situation for the authorization model: how does an LLM know who has access to what data?
\nHere's where SpiceDB and ReBAC were vital. By using metadata to indicate where the relevant embeddings came from, the RAG system returned harvest data to the user based only on what data they had access to. In fact, OpenAI uses SpiceDB for their fine-grained authorization in ChatGPT Connectors using similar techniques.
\nWhile I know my way around SpiceDB and authorization, I needed help to build out the other components for a production-grade harvest logbook. So I reached out to my friend Rohit Ghumare from Motia for his expertise. Motia.dev is a backend framework that unifies APIs, background jobs, workflows, and AI agents into a single core primitive with built-in observability and state management.
\nHere's a photo of Rohit and me at KubeCon Europe in 2025:
\n
What follows below is a tutorial-style post on building a Retrieval Augmented Generation system with fine-grained authorization using the Motia framework and SpiceDB. We'll use Pinecone as our vector database, and OpenAI as our LLM.
\nIn this tutorial, you'll create a complete RAG system with authorization that stores harvest logs through an API, chunks and embeds them with OpenAI, indexes the vectors in Pinecone, checks every request against SpiceDB permissions, answers natural language questions with retrieved context, and logs each query for auditing.
\nBy the end of the tutorial, you'll have a complete system that combines semantic search with multi-tenant authorization.
\nBefore starting the tutorial, ensure you have Node.js and npm installed, Docker running locally (for SpiceDB), an OpenAI API key, and a Pinecone account.
\nCreate a new Motia project using the CLI:
\nnpx motia@latest create\n\nThe installer will prompt you:
\nChoose Base (TypeScript) as the template, harvest-logbook-rag as the project name, and Yes to install dependencies.\n\nNavigate into your project:
\ncd harvest-logbook-rag\n\nYour initial project structure:
\nharvest-logbook-rag/\n├── src/\n│ └── services/\n│ └── pet-store/\n├── steps/\n│ └── petstore/\n├── .env\n└── package.json\n\nThe default template includes a pet store example. We'll replace this with our harvest logbook system. For more on Motia basics, see the Quick Start guide.
\nInstall the SpiceDB client for authorization:
\nnpm install @authzed/authzed-node\n\nThis is the only additional package needed.
\nPinecone will store the vector embeddings for semantic search.
\nClick Create Index
\nConfigure:
\nName: harvest-logbook (or your preference)\nDimensions: 1536 (for OpenAI embeddings)\nMetric: cosine\n\nClick Create Index
\nOnce the index is ready, copy your API key and the index host (e.g. your-index-abc123.svc.us-east-1.pinecone.io). Save these for the next step.
\nSpiceDB handles authorization and access control for the system.
\nRun this command to start SpiceDB locally:
\ndocker run -d \\\n --name spicedb \\\n -p 50051:50051 \\\n authzed/spicedb serve \\\n --grpc-preshared-key \"sometoken\"\n\nCheck that the container is running:
\ndocker ps | grep spicedb\n\nYou should see output similar to:
\n6316f6cb50b4 authzed/spicedb \"spicedb serve --grp…\" 31 seconds ago Up 31 seconds 0.0.0.0:50051->50051/tcp spicedb\n\nSpiceDB is now running on localhost:50051 and ready to handle authorization checks.
Create a .env file in the project root:
# OpenAI (Required for embeddings and chat)\nOPENAI_API_KEY=sk-proj-xxxxxxxxxxxxx\n\n# Pinecone (Required for vector storage)\nPINECONE_API_KEY=pcsk_xxxxxxxxxxxxx\nPINECONE_INDEX_HOST=your-index-abc123.svc.us-east-1.pinecone.io\n\n# SpiceDB (Required for authorization)\nSPICEDB_ENDPOINT=localhost:50051\nSPICEDB_TOKEN=sometoken\n\n# LLM Configuration (OpenAI is default)\nUSE_OPENAI_CHAT=true\n\n# Logging Configuration (CSV is default)\nUSE_CSV_LOGGER=true\n\nReplace the placeholder values with your actual credentials from the previous steps.
\nSpiceDB needs a schema that defines the authorization model for organizations, farms, and users.
\nCreate src/services/harvest-logbook/spicedb.schema with the authorization model. A SpiceDB schema defines the types of objects found in your application, how those objects can relate to one another, and the permissions that can be computed from those relations.
Here's a snippet of the schema that defines user, organization and farm and the relations and permissions between them.
definition user {}\n\ndefinition organization {\n relation admin: user\n relation member: user\n \n permission view = admin + member\n permission edit = admin + member\n permission query = admin + member\n permission manage = admin\n}\n\ndefinition farm {\n relation organization: organization\n relation owner: user\n relation editor: user\n relation viewer: user\n \n permission view = viewer + editor + owner + organization->view\n permission edit = editor + owner + organization->edit\n permission query = viewer + editor + owner + organization->query\n permission manage = owner + organization->admin\n}\n\nView the complete schema on GitHub
\nThe schema establishes organization-level roles (admin and member), farm-level roles (owner, editor, and viewer), and farm permissions that are inherited from the parent organization via the organization-> arrows.
\nCreate a scripts/ folder and add three files:
scripts/setup-spicedb-schema.ts - Reads the schema file and writes it to SpiceDB (see the sketch after this list)
\nView on GitHub
scripts/verify-spicedb-schema.ts - Verifies the schema was written correctly
\nView on GitHub
scripts/create-sample-permissions.ts - Creates sample users and permissions for testing
\nView on GitHub
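\nFor reference, the core of a schema-setup script like scripts/setup-spicedb-schema.ts is a single WriteSchema call. Here's a minimal sketch using @authzed/authzed-node (illustrative only; the real script is linked above):

// Illustrative sketch of scripts/setup-spicedb-schema.ts
import { readFileSync } from 'node:fs';
import { v1 } from '@authzed/authzed-node';

const schema = readFileSync('src/services/harvest-logbook/spicedb.schema', 'utf8');

// Connect to the local SpiceDB started earlier (plaintext gRPC on localhost).
const client = v1.NewClient(
  process.env.SPICEDB_TOKEN ?? 'sometoken',
  process.env.SPICEDB_ENDPOINT ?? 'localhost:50051',
  v1.ClientSecurity.INSECURE_LOCALHOST_ALLOWED
);

async function main() {
  await client.promises.writeSchema(v1.WriteSchemaRequest.create({ schema }));
  console.log('Schema written to SpiceDB');
}

main().catch((err) => {
  console.error('Failed to write schema:', err);
  process.exit(1);
});

To run the three scripts easily, install tsx and add npm script entries for them: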
npm install -D tsx\n\n\"scripts\": {\n \"spicedb:setup\": \"tsx scripts/setup-spicedb-schema.ts\",\n \"spicedb:verify\": \"tsx scripts/verify-spicedb-schema.ts\",\n \"spicedb:sample\": \"tsx scripts/create-sample-permissions.ts\"\n}\n\n# Write schema to SpiceDB\nnpm run spicedb:setup\n\nYou should see output confirming the schema was written successfully:\n
Verify it was written correctly:
\nnpm run spicedb:verify\n\nThis displays the complete authorization schema showing all definitions and permissions:\n
The output shows:
\nCreate sample user (user_alice as owner of farm_1):
\nnpm run spicedb:sample\n\n
This creates user_alice as owner of farm_1, ready for testing.
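\nUnder the hood, that sample script boils down to a WriteRelationships call. A minimal sketch with @authzed/authzed-node (illustrative; the real script is linked above):

import { v1 } from '@authzed/authzed-node';

const client = v1.NewClient(
  process.env.SPICEDB_TOKEN ?? 'sometoken',
  process.env.SPICEDB_ENDPOINT ?? 'localhost:50051',
  v1.ClientSecurity.INSECURE_LOCALHOST_ALLOWED
);

async function main() {
  // Writes farm:farm_1#owner@user:user_alice. TOUCH is idempotent, so re-running is safe.
  await client.promises.writeRelationships(
    v1.WriteRelationshipsRequest.create({
      updates: [
        v1.RelationshipUpdate.create({
          operation: v1.RelationshipUpdate_Operation.TOUCH,
          relationship: v1.Relationship.create({
            resource: v1.ObjectReference.create({ objectType: 'farm', objectId: 'farm_1' }),
            relation: 'owner',
            subject: v1.SubjectReference.create({
              object: v1.ObjectReference.create({ objectType: 'user', objectId: 'user_alice' }),
            }),
          }),
        }),
      ],
    })
  );
  console.log('user_alice is now the owner of farm_1');
}

main().catch(console.error);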
Your authorization system is now ready.
\nStart the Motia development server:
\nnpm run dev\n\nThe server starts at http://localhost:3000. Open this URL in your browser to see the Motia Workbench.
You'll see the default pet store example. We'll replace this with our harvest logbook system in the next sections.
\n
Your development environment is now ready. All services are connected:
\nThe Motia dev server is running at localhost:3000, your OpenAI and Pinecone credentials are configured, and SpiceDB is seeded with sample permissions (user_alice owns farm_1).\n\nBefore we start building, let's understand the architecture we're creating.
\n┌─────────────────────────────────────────────────────────────┐\n│ POST /harvest_logbook │\n│ (Store harvest data + optional query) │\n└─────────┬───────────────────────────────────────────────────┘\n │\n ├─→ Authorization Middleware (SpiceDB)\n │ - Check user has 'edit' permission on farm\n │\n ├─→ ReceiveHarvestData Step (API)\n │ - Validate input\n │ - Emit events\n │\n ├─→ ProcessEmbeddings Step (Event)\n │ - Split text into chunks (400 chars, 40 overlap)\n │ - Generate embeddings (OpenAI)\n │ - Store vectors (Pinecone)\n │\n └─→ QueryAgent Step (Event) [if query provided]\n - Retrieve similar content (Pinecone)\n - Generate response (OpenAI/HuggingFace)\n - Emit logging event\n │\n └─→ LogToSheets Step (Event)\n - Log query & response (CSV/Sheets)\n\nOur system processes harvest data through these stages:
\nFirst the API receives and authorizes a request; then the text is chunked and embedded into Pinecone; an optional query retrieves similar chunks and generates an answer; finally, the result is logged.\n\nThe system uses Motia's event-driven model: API steps validate requests and emit events, while event steps subscribe to those topics and do the heavy lifting in the background.
\nEvery API request passes through SpiceDB authorization: middleware checks the caller's permission on the target farm (edit for writes, query for reads) before the handler runs.
\nWe'll create five main steps: ReceiveHarvestData (the API entry point), ProcessEmbeddings (chunking and vector storage), QueryAgent (the RAG query), QueryHarvestLogbook (a query-only endpoint), and LogToSheets (audit logging).
\nEach component is a single file in the steps/ directory. Motia automatically discovers and connects them based on the events they emit and subscribe to.
In this step, we'll create an API endpoint that receives harvest log data and triggers the processing pipeline. This is the entry point that starts the entire RAG workflow.
\nEvery workflow needs an entry point. In Motia, API steps serve as the gateway between external requests and your event-driven system. By using Motia's api step type, you get automatic HTTP routing, request validation, and event emission, all without writing boilerplate server code. When a farmer calls this endpoint with their harvest data, it validates the input, checks authorization, stores the entry, and emits events that trigger the embedding generation and optional query processing.
Create a new file at steps/harvest-logbook/receive-harvest-data.step.ts.
\n\nThe complete source code for all steps is available on GitHub. You can reference the working implementation at any time.
\n
View the complete Step 1 code on GitHub →
\n
Now let's understand the key parts you'll be implementing:
\nconst bodySchema = z.object({\n content: z.string().min(1, 'Content cannot be empty'),\n farmId: z.string().min(1, 'Farm ID is required for authorization'),\n metadata: z.record(z.any()).optional(),\n query: z.string().optional()\n});\n\nZod validates that requests include the harvest content and farm ID. The query field is optional - if provided, the system will also answer a natural language question about the data after storing it.
export const config: ApiRouteConfig = {\n  type: 'api',\n  name: 'ReceiveHarvestData',\n  path: '/harvest_logbook',\n  method: 'POST',\n  middleware: [errorHandlerMiddleware, harvestEntryEditMiddleware],\n  emits: ['process-embeddings', 'query-agent'],\n  bodySchema\n};\n\ntype: 'api' makes this an HTTP endpoint\nmiddleware runs authorization checks before the handler\nemits declares this step triggers embedding processing and optional query events\n\nmiddleware: [errorHandlerMiddleware, harvestEntryEditMiddleware]\n\nThe harvestEntryEditMiddleware checks SpiceDB to ensure the user has edit permission on the specified farm. If authorization fails, the request is rejected before reaching the handler. Authorization info is added to the request for use in the handler.
View authorization middleware →
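\nFor context, the heart of such a middleware is a single CheckPermission call against the farm object. Here's a rough sketch using the @authzed/authzed-node client (hasFarmPermission is an illustrative helper name; the real middleware is linked above):

import { v1 } from '@authzed/authzed-node';

const spicedb = v1.NewClient(
  process.env.SPICEDB_TOKEN ?? 'sometoken',
  process.env.SPICEDB_ENDPOINT ?? 'localhost:50051',
  v1.ClientSecurity.INSECURE_LOCALHOST_ALLOWED
);

// Does userId hold the given permission ('edit' here, 'query' for reads) on farmId?
export async function hasFarmPermission(
  userId: string,
  farmId: string,
  permission: 'edit' | 'query'
): Promise<boolean> {
  const resp = await spicedb.promises.checkPermission(
    v1.CheckPermissionRequest.create({
      resource: v1.ObjectReference.create({ objectType: 'farm', objectId: farmId }),
      permission,
      subject: v1.SubjectReference.create({
        object: v1.ObjectReference.create({ objectType: 'user', objectId: userId }),
      }),
    })
  );
  return resp.permissionship === v1.CheckPermissionResponse_Permissionship.HAS_PERMISSION;
}

If the check fails, the middleware rejects the request with a 403 before the handler ever runs.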
\nexport const handler: Handlers['ReceiveHarvestData'] = async (req, { emit, logger, state }) => {\n const { content, farmId, metadata, query } = bodySchema.parse(req.body);\n const entryId = `harvest-${Date.now()}`;\n \n // Store entry data in state\n await state.set('harvest-entries', entryId, {\n content, farmId, metadata, timestamp: new Date().toISOString()\n });\n \n // Emit event to process embeddings\n await emit({\n topic: 'process-embeddings',\n data: { entryId, content, metadata }\n });\n};\n\nThe handler generates a unique entry ID, stores the data in Motia's state management, and emits an event to trigger embedding processing. If a query was provided, it also emits a query-agent event.
await emit({\n topic: 'process-embeddings',\n data: { entryId, content, metadata: { ...metadata, farmId, userId } }\n});\n\nif (query) {\n await emit({\n topic: 'query-agent',\n data: { entryId, query }\n });\n}\n\nEvents are how Motia steps communicate. The process-embeddings event triggers the next step to chunk the text and generate embeddings. If a query was provided, the query-agent event runs in parallel to answer the question using RAG.
This keeps the API response fast as it returns immediately while processing happens in the background.
\nOpen the Motia Workbench and test this endpoint:
\nSelect the harvest-logbook flow, then choose POST /harvest_logbook in the sidebar. Set this header:\n\n {\n    \"x-user-id\": \"user_alice\"\n  }\n\nAnd this body:\n\n {\n    \"content\": \"Harvested 500kg of tomatoes from field A. Weather was sunny.\",\n    \"farmId\": \"farm_1\",\n    \"metadata\": {\n      \"field\": \"A\",\n      \"crop\": \"tomatoes\"\n    }\n  }\n\nYou should see a success response with the entry ID. The Workbench will show the workflow executing in real-time, with events flowing to the next steps.
\nThis event handler takes the harvest data from Step 1, splits it into chunks, generates vector embeddings, and stores them in Pinecone for semantic search.
\nRAG systems need to break down large text into smaller chunks for better retrieval accuracy. By chunking text with overlap and generating embeddings for each piece, we enable semantic search that finds relevant context even when queries don't match exact keywords.
\nThis step runs in the background after the API returns, keeping the user experience fast while handling the background work of embedding generation and vector storage.
\nCreate a new file at steps/harvest-logbook/process-embeddings.step.ts.
View the complete Step 2 code on GitHub →
\nNow let's understand the key parts you'll be implementing:
\nconst inputSchema = z.object({\n entryId: z.string(),\n content: z.string(),\n metadata: z.record(z.any()).optional()\n});\n\nThis step receives the entry ID, content, and metadata from the previous step's event emission.
\nexport const config: EventConfig = {\n  type: 'event',\n  name: 'ProcessEmbeddings',\n  subscribes: ['process-embeddings'],\n  emits: [],\n  input: inputSchema\n};\n\ntype: 'event' makes this a background event handler\nsubscribes: ['process-embeddings'] listens for events from Step 1\n\nconst vectorIds = await HarvestLogbookService.storeEntry({\n  id: entryId,\n  content,\n  metadata,\n  timestamp: new Date().toISOString()\n});\n\nThe service handles text splitting (400 character chunks with 40 character overlap), embedding generation via OpenAI, and storage in Pinecone. This chunking strategy ensures semantic continuity across chunks.
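\nAs a rough illustration of that strategy (not the repo's actual implementation), a splitter with those numbers could look like this:

// Split text into overlapping windows: 400-character chunks that step forward by
// 360 characters, so consecutive chunks share 40 characters of context.
export function chunkText(text: string, chunkSize = 400, overlap = 40): string[] {
  const chunks: string[] = [];
  const step = chunkSize - overlap;
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break;
  }
  return chunks;
}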
\n\nThe OpenAI service generates 1536-dimension embeddings for each text chunk using the text-embedding-ada-002 model.
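\nFor reference, with the official openai Node SDK the embedding call looks roughly like this (illustrative; the actual service wraps this logic):

import OpenAI from 'openai';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Returns one 1536-dimension vector per input chunk.
export async function embedChunks(chunks: string[]): Promise<number[][]> {
  const res = await openai.embeddings.create({
    model: 'text-embedding-ada-002',
    input: chunks,
  });
  return res.data.map((d) => d.embedding);
}

Each vector is then upserted to Pinecone together with its chunk text and metadata (including the farmId) so results can be traced back to their source.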
await state.set('harvest-vectors', entryId, {\n vectorIds,\n processedAt: new Date().toISOString(),\n chunkCount: vectorIds.length\n});\n\nAfter storing vectors in Pinecone, the step updates Motia's state with the vector IDs for tracking. Each chunk gets a unique ID like harvest-123-chunk-0, harvest-123-chunk-1, etc.
The embeddings are now stored and ready for semantic search when users query the system.
\nStep 2 runs automatically when Step 1 emits the process-embeddings event. To test it:
Send a request to the POST /harvest_logbook endpoint (from Step 1)
In the Workbench, watch the workflow visualization
\nYou'll see the ProcessEmbeddings step activate automatically
Check the Logs tab at the bottom to see:
\nThe step completes when you see \"Successfully stored embeddings\" in the logs. The vectors are now in Pinecone and ready for semantic search.
\nThis event handler performs the RAG query: it searches Pinecone for relevant content, retrieves matching chunks, and uses an LLM to generate natural language responses based on the retrieved context.
\nThis is where retrieval-augmented generation happens. Instead of the LLM generating responses from its training data alone, it uses actual harvest data from Pinecone as context. This ensures accurate, source-backed answers specific to the user's farm data.
\nThe step supports both OpenAI and HuggingFace LLMs, giving you flexibility in choosing your AI provider based on cost and performance needs.
\nCreate a new file at steps/harvest-logbook/query-agent.step.ts.
View the complete Step 3 code on GitHub →
\nNow let's understand the key parts you'll be implementing:
\nconst inputSchema = z.object({\n entryId: z.string(),\n query: z.string(),\n conversationHistory: z.array(z.object({\n role: z.enum(['user', 'assistant', 'system']),\n content: z.string()\n })).optional()\n});\n\nThe step receives the query text and optional conversation history for multi-turn conversations.
\nexport const config: EventConfig = {\n  type: 'event',\n  name: 'QueryAgent',\n  subscribes: ['query-agent'],\n  emits: ['log-to-sheets'],\n  input: inputSchema\n};\n\nsubscribes: ['query-agent'] listens for query events from Step 1\nemits: ['log-to-sheets'] triggers logging after generating response\n\nconst agentResponse = await HarvestLogbookService.queryWithAgent({\n  query,\n  conversationHistory\n});\n\nThe service orchestrates the RAG pipeline: embedding the query, searching Pinecone for similar vectors, extracting context from top matches, and generating a response using the LLM.
\nView RAG orchestration service →
\nThe query is embedded using OpenAI and searched against Pinecone to find the top 5 most similar chunks. Each result includes a similarity score and the original text.
\nView Pinecone query implementation →
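\nAs a sketch of that retrieval step (assuming the @pinecone-database/pinecone SDK, the harvest-logbook index created earlier, and that each vector was upserted with its source text under metadata.text):

import { Pinecone } from '@pinecone-database/pinecone';

const pc = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! });
const index = pc.index('harvest-logbook');

// Given an embedded query vector, fetch the 5 closest chunks and return their text.
export async function retrieveContext(queryVector: number[]): Promise<string[]> {
  const results = await index.query({
    vector: queryVector,
    topK: 5,
    includeMetadata: true,
  });
  return (results.matches ?? []).map((m) => String(m.metadata?.text ?? ''));
}

The matched chunk texts become the context handed to the LLM in the next part.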
\nawait state.set('agent-responses', entryId, {\n query,\n response: agentResponse.response,\n sources: agentResponse.sources,\n timestamp: agentResponse.timestamp\n});\n\nThe LLM generates a response using the retrieved context. The system supports both OpenAI (default) and HuggingFace, controlled by the USE_OPENAI_CHAT environment variable. The response includes source citations showing which harvest entries informed the answer.
View OpenAI chat service →
\nView HuggingFace service →
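\nThe OpenAI path boils down to a chat completion grounded in the retrieved chunks. A minimal sketch (the model name here is an assumption; the actual services are linked above):

import OpenAI from 'openai';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Answer the query using only the retrieved harvest-log excerpts as context.
export async function answerWithContext(query: string, contextChunks: string[]): Promise<string> {
  const completion = await openai.chat.completions.create({
    model: 'gpt-4o-mini', // assumption; use whichever chat model your service configures
    messages: [
      {
        role: 'system',
        content: 'Answer using only these harvest log excerpts:\n' + contextChunks.join('\n---\n'),
      },
      { role: 'user', content: query },
    ],
  });
  return completion.choices[0]?.message?.content ?? '';
}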
await emit({\n topic: 'log-to-sheets',\n data: {\n entryId,\n query,\n response: agentResponse.response,\n sources: agentResponse.sources\n }\n});\n\nAfter generating the response, the step emits a logging event to create an audit trail of all queries and responses.
\nStep 3 runs automatically when you include a query field in the Step 1 request. To test it:
POST /harvest_logbook with a query: {\n \"content\": \"Harvested 500kg of tomatoes from field A. Weather was sunny.\",\n \"farmId\": \"farm_1\",\n \"query\": \"What crops did we harvest?\"\n }\n\nIn the Workbench, watch the QueryAgent step activate
Check the Logs tab to see:
\nThe step completes when you see the AI-generated response in the logs. The query and response are automatically logged by Step 5.
\nThis API endpoint allows users to query their existing harvest data without storing new entries. It's a separate endpoint dedicated purely to RAG queries.
\nWhile Step 1 handles both storing and optionally querying data, users often need to just ask questions about their existing harvest logs. This dedicated endpoint keeps the API clean and focused - one endpoint for data entry, another for pure queries.
\nThis separation also makes it easier to apply different rate limits or permissions between data modification and read-only operations.
\nCreate a new file at steps/harvest-logbook/query-only.step.ts.
View the complete Step 4 code on GitHub →
\nNow let's understand the key parts you'll be implementing:
\nconst bodySchema = z.object({\n query: z.string().min(1, 'Query cannot be empty'),\n farmId: z.string().min(1, 'Farm ID is required for authorization'),\n conversationHistory: z.array(z.object({\n role: z.enum(['user', 'assistant', 'system']),\n content: z.string()\n })).optional()\n});\n\nThe request requires a query and farm ID. Conversation history is optional for multi-turn conversations.
\nexport const config: ApiRouteConfig = {\n  type: 'api',\n  name: 'QueryHarvestLogbook',\n  path: '/harvest_logbook/query',\n  method: 'POST',\n  middleware: [errorHandlerMiddleware, harvestQueryMiddleware],\n  emits: ['query-agent']\n};\n\npath: '/harvest_logbook/query' creates a dedicated query endpoint\nharvestQueryMiddleware checks for query permission (not edit)\nemits: ['query-agent'] triggers the same RAG query handler as Step 3\n\nmiddleware: [errorHandlerMiddleware, harvestQueryMiddleware]\n\nThe harvestQueryMiddleware checks SpiceDB for query permission. This is less restrictive than edit - viewers can query but cannot modify data.
View authorization middleware →
\nexport const handler: Handlers['QueryHarvestLogbook'] = async (req, { emit, logger }) => {\n const { query, farmId } = bodySchema.parse(req.body);\n const queryId = `query-${Date.now()}`;\n \n await emit({\n topic: 'query-agent',\n data: { entryId: queryId, query }\n });\n \n return {\n status: 200,\n body: { success: true, queryId }\n };\n};\n\nThe handler generates a unique query ID and emits the same query-agent event used in Step 1. This reuses the RAG pipeline from Step 3 without duplicating code.
The API returns immediately with the query ID. The actual processing happens in the background, and results are logged by Step 5.
\nThis is the dedicated query endpoint. Test it directly:
\nPOST /harvest_logbook/query in the Workbench {\n \"x-user-id\": \"user_alice\"\n }\n\n {\n \"query\": \"What crops did we harvest?\",\n \"farmId\": \"farm_1\"\n }\n\nYou'll see a 200 OK response with the query ID. In the Logs tab, watch for:
QueryHarvestLogbook - Authorization and query received\nQueryAgent - Querying AI agent\nQueryAgent - Agent query completed\n\nThe query runs in the background and results are logged by Step 5. This endpoint is perfect for read-only query operations without storing new data.
\nThis event handler creates an audit trail by logging every query and its AI-generated response. It supports both local CSV files (for development) and Google Sheets (for production).
\nAudit logs are essential for understanding how users interact with your system. They help with debugging, monitoring usage patterns, and maintaining compliance. By logging queries and responses, you can track what questions users ask, identify common patterns, and improve the system over time.
\nThe dual logging strategy (CSV/Google Sheets) gives you flexibility: use CSV locally for quick testing, then switch to Google Sheets for production without changing code.
\nCreate a new file at steps/harvest-logbook/log-to-sheets.step.ts.
View the complete Step 5 code on GitHub →
\nNow let's understand the key parts you'll be implementing:
\nconst inputSchema = z.object({\n entryId: z.string(),\n query: z.string(),\n response: z.string(),\n sources: z.array(z.string()).optional()\n});\n\nThe step receives the query, AI response, and optional source citations from Step 3.
\nexport const config: EventConfig = {\n  type: 'event',\n  name: 'LogToSheets',\n  subscribes: ['log-to-sheets'],\n  emits: [],\n  input: inputSchema\n};\n\nsubscribes: ['log-to-sheets'] listens for logging events from Step 3\n\nconst useCSV = process.env.USE_CSV_LOGGER === 'true' || !process.env.GOOGLE_SHEETS_ID;\n\nawait HarvestLogbookService.logToSheets(query, response, sources);\n\nThe service automatically chooses between CSV and Google Sheets based on environment variables. This keeps the step code simple while supporting different deployment scenarios.
\nView CSV logger →
\nView Google Sheets service →
try {\n await HarvestLogbookService.logToSheets(query, response, sources);\n logger.info(`Successfully logged to ${destination}`);\n} catch (error) {\n logger.error('Failed to log query response');\n // Don't throw - logging failures shouldn't break the main flow\n}\n\nThe step catches logging errors without throwing. This ensures that even if logging fails, the main workflow completes successfully. Users get their query results even if the audit log has issues.
\nThe CSV logger saves entries to logs/harvest_logbook.csv with these columns: timestamp, query, response, and sources.
Each entry is automatically escaped to handle quotes and commas in the content.
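\nA common way to do that escaping (illustrative; the repo's logger may differ) is the RFC 4180 rule: wrap the field in quotes and double any embedded quotes.

// Quote a CSV field when it contains commas, quotes, or newlines.
export function escapeCsvField(value: string): string {
  const needsQuoting = /[",\n]/.test(value);
  const escaped = value.replace(/"/g, '""');
  return needsQuoting ? '"' + escaped + '"' : value;
}

// escapeCsvField('500kg of "Roma" tomatoes, field A')
// => '"500kg of ""Roma"" tomatoes, field A"'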
\nStep 5 runs automatically after Step 3 completes. To verify it's working:
\nPOST /harvest_logbook/queryLogToSheets entries cat logs/harvest_logbook.csv\n\nYou should see your query and response logged with a timestamp. Each subsequent query appends a new row to the CSV file.
\n
Now that all steps are built, let's test the complete workflow using the Motia Workbench.
\nnpm run dev\n\nOpen http://localhost:3000 in your browser to access the Workbench.
Select the harvest-logbook flow from the dropdown, then click the POST /harvest_logbook endpoint in the workflow. Set this header:\n\n {\n    \"x-user-id\": \"user_alice\"\n  }\n\nAnd this body:\n\n {\n    \"content\": \"Harvested 500kg of tomatoes from field A. Weather was sunny, no pest damage observed.\",\n    \"farmId\": \"farm_1\",\n    \"metadata\": {\n      \"field\": \"A\",\n      \"crop\": \"tomatoes\",\n      \"weight_kg\": 500\n    }\n  }\n\nWatch the workflow execute in real-time: you'll see the API step accept the request, then the ProcessEmbeddings step chunk the text and store the vectors in Pinecone.
\nPOST /harvest_logbook/query endpoint {\n \"x-user-id\": \"user_alice\"\n }\n\n {\n \"farmId\": \"farm_1\",\n \"query\": \"What crops did we harvest recently?\"\n }\n\nWatch the RAG pipeline execute:
\nTry querying as a user without permission:
\n {\n \"x-user-id\": \"user_unauthorized\"\n }\n\nYou'll see a 403 Forbidden response to verify if authorization works correctly.\nYou can also create different users with different levels of access and see fine-grained authorization in action.
\nCheck the audit trail:
\ncat logs/harvest_logbook.csv\n\nYou'll see all queries and responses logged with timestamps.
\nThe Workbench also provides trace visualization showing exactly how data flows through each step, making debugging straightforward.
\nYou've built a complete RAG system with multi-tenant authorization using Motia's event-driven framework. You learned how to define a SpiceDB schema and write relationships, enforce permissions with authorization middleware, chunk and embed text with OpenAI, store and search vectors in Pinecone, and log every query for auditing.
\nYour system now handles multi-tenant authorization, semantic search over harvest data, natural language queries grounded in retrieved context, and an audit trail of every query and response.
\nYour RAG system is ready to help farmers query their harvest data naturally while keeping data secure with proper authorization.
\nThis was a fun exercise in tackling a complex authorization problem and also building something production-grade. I also got to play out some of my Stardew Valley fancies IRL. Maybe it's time I actually move to a cozy farm and grow my own crops (so long as the farm has a good Internet connection!)
\n
The repository can be found on the Motia GitHub.
\nFeel free to reach out to us on LinkedIn or jump into the SpiceDB Discord if you have any questions. Happy farming!
", - "url": "https://authzed.com/blog/building-a-multi-tenant-rag-with-fine-grain-authorization-using-motia-and-spicedb", - "title": "Build a Multi-Tenant RAG with Fine-Grain Authorization using Motia and SpiceDB", - "summary": "Learn how to build a complete retrieval-augmented generation pipeline with multi-tenant authorization using Motia's event-driven framework, OpenAI embeddings, Pinecone vector search, SpiceDB permissions, and natural language querying.", - "image": "https://authzed.com/images/blogs/motia-spicedb.png", - "date_modified": "2025-11-18T22:56:00.000Z", - "date_published": "2025-11-18T17:30:00.000Z", - "author": { - "name": "Sohan Maheshwar", - "url": "https://www.linkedin.com/in/sohanmaheshwar/" - } - }, - { - "id": "https://authzed.com/blog/terraform-and-opentofu-provider-for-authzed-dedicated", - "content_html": "Today, AuthZed is excited to introduce the Terraform and OpenTofu Provider for AuthZed Dedicated, giving customers a powerful way to manage their authorization infrastructure using industry standard best practices.
\nWith this new provider, teams can define, version, and automate their resources in the AuthZed Cloud Platform - entirely through declarative infrastructure-as-code. This makes it easier than ever to integrate authorization management into existing operational workflows.
\nModern infrastructure teams rely on Terraform and OpenTofu to manage everything from compute resources to networking and identity. With the new AuthZed provider, you can now manage your authorization layer in the same way — improving consistency, reducing manual configuration, and enabling repeatable deployments across environments.
\nThe Terraform and OpenTofu provider automates key components of your AuthZed Dedicated environment, including:
\nAnd we’re working to support additional resources in AuthZed Dedicated environments, including managing Permissions Systems.
\nBelow is a simple example of how to create a service account using the AuthZed Terraform provider:
\nprovider \"authzed\" {\n token = var.authzed_token\n}\n\nresource \"authzed_service_account\" \"example\" {\n name = \"ci-cd-access\"\n description = \"Service account for CI/CD pipeline\"\n}\n\nThis snippet demonstrates how straightforward it is to manage AuthZed resources alongside your existing infrastructure definitions.
\nThe introduction of the Terraform and OpenTofu provider makes it effortless to manage authorization infrastructure as code — ensuring your permission systems evolve safely and consistently as your organization scales.
\nFor AuthZed customers interested in using the Terraform and OpenTofu provider, please contact your account manager for access.
\nTo explore the provider and get started, visit the AuthZed Terraform Provider on GitHub.
\nNot an AuthZed customer, but want to take the technology for a spin? Sign up for AuthZed Cloud today to try it out.
", - "url": "https://authzed.com/blog/terraform-and-opentofu-provider-for-authzed-dedicated", - "title": "Terraform and OpenTofu Provider for AuthZed Dedicated", - "summary": "AuthZed now supports Terraform and OpenTofu. You can manage service accounts, API tokens, roles, and permission system configuration as code, just like your other infrastructure. Define resources declaratively, version them in git, and automate deployments across environments without manual configuration steps.", - "image": "https://authzed.com/images/blogs/opentofu-terraform-blog-image.png", - "date_modified": "2025-10-30T10:40:00.000Z", - "date_published": "2025-10-30T10:40:00.000Z", - "author": { - "name": "Veronica Lopez", - "url": "https://www.linkedin.com/in/veronica-lopez-8ba1b1256/" - } - }, - { - "id": "https://authzed.com/blog/why-were-not-renaming-the-company-authzed-ai", - "content_html": "It has become popular for companies to align themselves with AI. For good reason! AI has the potential, and ever increasing likelihood, of fundamentally transforming the way that companies work. The hype is out of control! People breathlessly compare AI to the internet and the industrial revolution. And who knows; they could even be right!
\nAt AuthZed, a rapidly growing segment of our customers are AI first companies, including OpenAI. As we work with more AI companies on authorization for AI systems, we often get asked if we will rebrand as an AI company.
\nCompanies have realigned themselves to varying degrees. SalesForce may one day soon be called AgentForce. As an April Fool’s joke, one company started a rumor that Nvidia was going to rebrand as NvidAI, and I think a lot of people probably thought to themselves: “yeah, that tracks.” Mega corps such as Google, Meta, and IBM have .ai top level websites that outline their activities in the AI space.
\nIt can make a lot of sense! After all, unprecedented shifts require unprecedented attention, and a rising tide floats all boats. Well: we’re not. In this post I will lay out some of the pros and cons of going all in on AI branding and alignment, and explain our reasons for keeping our brand in place.
\nWhen considering such a drastic change, I believe each company is looking at the upsides and downsides of a rebrand given their specific situation (revenue, brand value, momentum, staff, etc.) and making a calculated choice that may only apply in their specific context. So what are some of the upsides and downsides?
\n
The risks that I’ve been able to identify boil down to two areas: brand value and perception. Let’s start with brand value.
\nCompanies spend a lot of time and effort building their brand value. It is an intangible asset for companies that pays dividends in areas such as awareness, customer acquisition costs, and reach, just to name a few. Apple is widely considered to have the most valuable brand in the world, and BrandFinance currently values their brand at $575 billion, with a b. That’s approximately 15% of their $3.7 trillion market cap.
\nWhen you rebrand by changing your company’s name, you can put all of that hard work at risk. By changing your name, you need to regain any lost brand mindshare. When you change your web address, you need to re-establish SEO and domain authority that was hard fought and hard won. If Apple rebranded to treefruit.ai (dibs btw) tomorrow, we would expect their sales, mindshare, and even email deliverability to go down.
\nThe second major risk category is around perception. By rebranding around AI you’re signaling a few things to the market. First, you're weighing the upside of being aligned with AI heavily. Second, you signal that you’re willing and able to follow the hype. These factors combined may change the perception of your company to potential buyers: from established, steady, successful, to trendy, fast-moving, up and coming.
\nOn a longer time horizon, we’ve also seen many such trends come and go. Web 1.0, Web 2.0, SoLoMo, Cloud, Crypto, VR/AR, and now AI. In all cases these hype movements have had a massive effect on the way people perceive technology, but they have also become less hyped over time, as a new trend has arrived to supplant them. With AI, I can guarantee that at some point we will achieve an equilibrium where the value prop has been mostly established, and the hype adjusts to fit. Do you want to be saddled with an AI-forward brand when that happens? Will you have been able to ride the wave long and high enough to establish an enduring company that can survive on its own? One of my favorite quotes from Warren Buffet may apply here: “Only when the tide goes out do you discover who's been swimming naked.”
\nThere are many upsides that companies can expect to reap as well! Hype is its own form of reality distortion field, and it causes a lot of people to act in ways that they might not have otherwise. FOMO, or fear of missing out, is a well established phenomenon that we can leverage to our benefit. Let’s take a look at who is acting differently in this hype cycle.
\nInvestors. If you are a startup that’s hoping to raise capital, you had better have either: insane fundamentals or an AI story. Carta recently released an analysis on how AI is affecting fundraising, with the TL;DR being that AI companies are absorbing a ton of the money, and that growing round sizes can primarily be attributed to the AI companies that are raising. Counter to all of the hype, user Xodarap over at LessWrong.com has produced an analysis on YC companies post GenAI hitting the scene, that paints a less rosy picture of the outcomes associated with primarily AI-based companies so far. It’s possible (probable?) that we are just too early in the cycle to have identified the clear winners and losers for AI.
\nVendors. If partnerships are a big part of your model, there are a lot of dollars floating around for partnerships that revolve around AI. I've had a marketing exec from a vendor tell me straight up: “all of our marketing dollars are earmarked only for AI related initiatives right now.” If you can tell a compelling story here, you will be able to find someone willing to help you amplify it.
\nBusinesses. Last, and certainly not least, businesses are also changing their behavior. If you’re a B2B company, your customers are all figuring out what their AI story is too. That means opportunity. They’re looking for vendors, partners, analysts, really anyone who can help them be successful with AI. Their boss told them: “We need an AI story or we’re going to get our lunch eaten! Make it happen!” So they’re out there trying to make it happen. Unfortunately, a study out of MIT recently proclaimed that “95% of generative AI pilots at companies are failing.”
\nThe world is never quite as cut and dry as we think it might be. The good news is, that you can still reap some of the reward without a full rebrand. At AuthZed, we’ve found that you can still tell your AI story, and court customers who are looking to advance their AI initiatives even if you’re not completely AI-native, or all-aboard the hype train. Unfortunately, I don’t have intuition or data for what the comparative advantage is of a rebrand compared to attempting to make waves under a more neutral brand.
\nAt AuthZed, our context-specific decision not to rebrand was based primarily on how neutral our solution is. While many companies, both AI and traditional, are having success with using AuthZed to secure RAG pipelines and AI agents, we also serve many customers who want to protect their data from unauthorized access by humans. Or to build that new sharing workflow that is going to unlock new revenue. Or break into the enterprise. Put succinctly: we think we would be doing the world a great disservice if our technology was only being used for AI-adjacent purposes.
\nThe other, less important reason why we’re not rebranding, is that at AuthZed we often take a slightly contrarian or longer view than whatever the current hype cycle might dictate. We try not to cargo-cult our business decisions. Following the pack is almost by definition a median-caliber decision. Median-caliber decisions are likely to sum up to a median company outcome. The median startup outcome is death or an unprofitable exit. At AuthZed, we think that the opportunity that we have to reshape the way that the world thinks about authorization shouldn’t be wasted.
\nWith that said, I’ve been wrong many times in the past. Too many to count even. “Never say never” are words to live by! Hopefully if and when the time comes where our personal calculus shifts in favor of a big rebrand, I can recognize the changing landscape and we can do what’s right for the company. What’s a little egg on your face when you’re on a mission to fix the way that companies across the world do authorization.
", - "url": "https://authzed.com/blog/why-were-not-renaming-the-company-authzed-ai", - "title": "Why we’re not renaming the company AuthZed.ai", - "summary": "Should your company rebrand as an AI company? We decided not to.\nAI companies attract outsized funding and partnership dollars. Yet rebranding means trading established brand value and customer mindshare for alignment with today's hottest trend.\nWe stayed brand-neutral because our authorization solution serves both AI and non-AI companies alike. Limiting ourselves to AI-only would be a disservice to our broader mission and the diverse customers who depend on us.", - "image": "https://authzed.com/images/blogs/authzed-ai-bg.png", - "date_modified": "2025-10-27T11:45:00.000Z", - "date_published": "2025-10-27T11:45:00.000Z", - "author": { - "name": "Jake Moshenko", - "url": "https://www.linkedin.com/in/jacob-moshenko-381161b/" - } - }, - { - "id": "https://authzed.com/blog/authzed-adds-microsoft-azure-support", - "content_html": "Today, AuthZed is announcing support for Microsoft Azure in AuthZed Dedicated to provide more authorization infrastructure deployment options for customers.\nAuthZed now provides customers the opportunity to choose from all major cloud providers - AWS, Google Cloud and/or Microsoft Azure.
\n
AuthZed customers can now deploy authorization infrastructure to 23+ Azure regions to support their globally distributed applications.\nThis ensures fast, consistent permission decisions regardless of where your users are located.
\n\n\n\"I have been following the development of SpiceDB and AuthZed on how they are providing authorization infrastructure to companies of all sizes,\" said Lachlan Evenson, Principal PDM Manager, Azure Cloud Native Ecosystem.\n\"It's great to see their support for Microsoft Azure and we look forward to collaborating with AuthZed as they work with more Azure customers moving forward.\"
\n
This launch is the direct result of customer demand. Many teams asked for Azure support, and now they have the ability to deploy authorization infrastructure in the cloud of their choice.
\n
AuthZed Dedicated is our managed service that provides fully private deployments of our cloud platform in our customer’s provider and regions of choice.\nThis gives users the benefits of a proven, production-ready authorization system—without the burden of building and maintaining it themselves.
\nIndustry leaders such as OpenAI, Workday, and Turo rely on AuthZed Dedicated for their authorization infrastructure:
\n\n\n“We decided to buy instead of build early on.\nThis is an authorization system with established patterns.\nWe didn’t want to reinvent the wheel when we could move fast with a proven solution.”\n— Member of Technical Staff, OpenAI
\n
With Azure now available, you can deploy AuthZed Dedicated on the cloud of your choice.\nBook a call with our team to learn how AuthZed can power your authorization infrastructure.
", - "url": "https://authzed.com/blog/authzed-adds-microsoft-azure-support", - "title": "AuthZed Dedicated Now Available on Microsoft Azure", - "summary": "AuthZed now supports Microsoft Azure, giving customers the opportunity to choose from all major cloud providers - AWS, Google Cloud, and Microsoft Azure. Deploy authorization infrastructure to 23+ Azure regions for globally distributed applications.\n", - "image": "https://authzed.com/images/blogs/authzed-azure-support-og.png", - "date_modified": "2025-10-21T16:00:00.000Z", - "date_published": "2025-10-21T16:00:00.000Z", - "author": { - "name": "Jimmy Zelinskie", - "url": "https://twitter.com/jimmyzelinskie" - } - }, - { - "id": "https://authzed.com/blog/extended-t-augment-your-design-craft-with-ai-tools", - "content_html": "\n\nTL;DR
\n
\nAI doesn't replace design judgment. It widens my T-shaped skill set by surfacing on-brand options quickly. It's still on me to uphold craft, taste, and standards for what ships.
Designers on small teams, especially at startups, default to being T-shaped: deep in a core craft and broad enough to support adjacent disciplines. My vertical is brand and visual identity, while my horizontal spans marketing, product, illustration, creative strategy, and execution. Lately, AI tools have pushed that horizontal reach further than the usual constraints allow.
\nAt AuthZed, I use AI to explore ideas that would normally be blocked by time or budget: 3D modeling, character variation, and light manufacturing for physical pieces. The point is not to replace design craft with machine output. It is to expand the number of viable ideas I can evaluate, then curate and polish a final product that meets our design standard.
\nPrevious tools mostly sped up execution. AI speeds up exploration. When you can generate twenty plausible directions in minutes, the scarce skill is not pushing Bézier handles. It is knowing which direction communicates the right message, and why.
\nConcrete example: Photoshop made retouching faster, but great photography still depends on eye and intent. Figma made collaboration faster, but good product design still depends on hierarchy, flows, and clarity. AI widens the search field so designers can spend more time on curation instead of setup.
\n\n\nVolume before polish
\n
\nWhile at SVA we focused on volume before refinement. We would thumbnail dozens (sometimes a hundred) poster concepts before committing to one. That practice shaped how I use AI today: explore wide, then curate down to find the right solution. Richard Wilde's program emphasized iterative problem-solving and visual literacy long before today's tools made rapid exploration this easy.
AI works best when it is constrained by the systems you already trust, whether that is the permission model that controls who can view a file or the rules you enforce when writing code. Clarity is what turns an AI model from a toy into a multiplier. When we developed our mascot, Dibs, I knew we would eventually need dozens of consistent, reference-accurate variations: expressions, poses, environments. Historically, that meant a lot of sketching and cleanup before we could show anything.
\nWith specific instructions and a set of reference illustrations, I can review a new variation every few moments. None of those are final, but they land close while surfacing design choices I might not have explored on my own. I still adjust typography, tweak poses, and rebalance compositions before anything ships, so we stay on brand and accessible.
\nThis mirrors every major tool shift. Photoshop did not replace photographers. Figma did not replace designers. AI does not replace design thinking. It gives you a broader search field so you can make better choices earlier.
\n
For our offsite hackathon, I wanted trophies the team would be proud to earn and motivated to chase next time. Our mascot, Dibs, was the obvious hero. I started with approved 2D art and generated a character turn that covered front, side, back, and top views. From there I used a reconstruction tool (Meshy has been the most reliable lately) to get a starter mesh before moving into Blender for cleanup, posing, and print prep.
\n
I am not a Blender expert, but I have made a donut or two. With the starting mesh it was straightforward to get a printable file: repair holes, smooth odd vertices, and thicken delicate areas. When I hit something rusty, I leaned on documentation and the right prompts to fill the gaps. Before doing any of that refinement, I printed the raw export on my Bambu Lab P1P in PLA, cleaned up the supports, and dropped the proof on a teammate's desk. We went from concept to a physical artifact in under a day.
\nWe ended up producing twelve trophies printed in PETG with a removable base that hides a pocket for added weight (or whatever ends up in there). I finished them by hand with Rub 'n Buff, a prop-maker staple, to get a patinated metallic look. Once the pipeline was dialed in, I scaled it down for a sleeping Dibs keychain so everyone could bring something home, even if they were not on the podium. Small lift, real morale boost.
\n

When anyone can produce a hundred logos or pose variations, the value as a designer shifts to selection with intent. Brand expertise tells you which pose reads playful versus chaotic, which silhouette will hold up at small sizes, and which material choice survives handling at an event. The models handle brute-force trial. You own the taste, the narrative, and the necessary constraints.
\nThe result is horizontal expansion without vertical compromise. Consistency improves because character work starts from reference-accurate sources instead of ad-hoc one-offs. Physical production becomes realistic because you can iterate virtually before committing to materials and time.
\nWith newer models, I can get much closer to production-ready assets with far less back-and-forth prompting. I render initial concepts, select top options based on color, layout, expression, and composition, then create a small mood board for stakeholders to review before building the final production-ready version. The goal is not to outsource taste. It is to see more viable paths sooner, pick one, and refine by hand so the final assets stay original and on-brand.
\n\n\nProcess note: I drafted the outline and core ideas, then used an editor to tighten phrasing and proofread. Same pattern as the rest of my work: widen the search, keep the taste.
\n
What is a T-shaped designer?
\nA designer with deep expertise in one area (the vertical) and working knowledge across adjacent disciplines (the horizontal).
How does AI help T-shaped designers?
\nAI quickly generates plausible options so you can evaluate more directions, then apply judgment to pick, refine, and ship the best one.
How do I keep brand consistency with AI images?
\nDefine non-negotiables (proportions, palette, silhouette), use reference images, and keep a human finish pass for polish.
Which tools did you use in this workflow?
\nModel-guided image generation (e.g., Midjourney or a tuned model with references), a 2D-to-3D reconstruction step for a starter mesh (Rodin/Hyper3D or Meshy), Blender for cleanup, and a Bambu Lab P1P to slice G-code and print.
We're excited to announce the launch of two new MCP servers that bring SpiceDB resources closer to your AI workflow, making it easier to learn and get started using SpiceDB for your application permissions: the AuthZed MCP Server and the SpiceDB Dev MCP Server.
\nThe AuthZed MCP Server brings comprehensive documentation and learning resources directly into your AI tools. Whether you're exploring SpiceDB concepts, looking up API references, or searching for schema examples, this server provides instant access to all SpiceDB and AuthZed documentation pages, complete API method definitions, and a curated collection of authorization pattern examples. It's designed to make learning and referencing SpiceDB documentation seamless, right where you're already working.
\nThe SpiceDB Dev MCP Server takes things further by integrating directly into your development workflow. It connects to a sandboxed SpiceDB instance, allowing your AI coding assistant to help you learn and experiment with schema development, relationship testing, and permission checking. Need to validate a schema change? Want to test whether a specific permission check will work? Your AI assistant can now interact with SpiceDB on your behalf, making development faster and more intuitive.
\nReady to try them out? Head over to authzed.com/docs/mcp to get started with both servers.
\n
We've been experimenting with MCP since the first specification was published. Back when the term \"vibe coding\" was just starting to circulate, we built an early prototype MCP server for SpiceDB. The results were eye-opening. We were pleasantly surprised by how effectively LLMs could use the tools we provided, and delighted by the potential of being able to \"talk\" to SpiceDB through natural language.
\nThat initial prototype sparked conversations across the SpiceDB community. We connected with others who were equally excited about the possibilities, sharing ideas and exploring use cases together. Those early discussions helped shape our thinking about what MCP servers for SpiceDB could become.
\nAs the MCP specification continued evolving (particularly around enterprise readiness and authorization), we wanted to deeply understand these new capabilities. This led us to build a reference implementation of a remote MCP server using open source solutions. That reference implementation became a testbed for understanding the authorization aspects of the spec and exploring best practices for building production-ready MCP servers.
\nThrough our own experience with AI coding tools, we've seen firsthand how valuable it is to have the right resources and tools available directly in your AI workflow. Our team's usage of AI assistants has steadily increased, and we know the difference it makes when information and capabilities are just a prompt away.
\nFor AuthZed and SpiceDB users, we wanted to bring learning and development resources closer to where you're already working. Whether you're learning SpiceDB concepts, building a new schema, or debugging permissions logic, having immediate access to documentation, examples, and a sandbox SpiceDB instance can dramatically speed up the development process.
\nThat's why we built both servers: the AuthZed MCP Server puts knowledge at your fingertips, while the SpiceDB Dev MCP Server puts your development environment directly into your AI assistant's toolkit.
\nWe're still actively building and experimenting with MCP. While the specification provides guidance for authorization, there's significant responsibility on MCP server developers to implement appropriate access controls for resources and accurate permissions around tools.
\nThis is particularly important as MCP servers become more powerful and gain access to sensitive systems. We're learning as we build, and we'll be sharing new tools and lessons around building authorization into MCP servers as we discover them. We believe the combination of SpiceDB for MCP permissions and AuthZed for authorization infrastructure is especially well-suited for defining and enforcing the complex permissions that enterprise MCP servers require.
\nIn the meantime, we encourage you to try out our MCP servers. The documentation for each includes detailed use cases and security guidelines to help you use them safely and effectively.
\nIf you're building an enterprise MCP server and would like help integrating permissions and authorization, we'd love to chat. Book a call with our team and let's explore how we can help.
\nHappy coding, and we can't wait to see what you build with these new tools! 🚀
", - "url": "https://authzed.com/blog/introducing-authzeds-mcp-servers", - "title": "Introducing AuthZed's MCP Servers", - "summary": "We're launching two MCP servers to bring SpiceDB closer to your AI workflow. The AuthZed MCP Server provides instant access to documentation and examples, while the SpiceDB Dev MCP Server integrates with your development environment. Learn about our MCP journey from early prototypes to production, and discover how these tools can speed up your SpiceDB development.", - "image": "https://authzed.com/images/upload/chat-with-authzed-mcp.png", - "date_modified": "2025-09-30T10:45:00.000Z", - "date_published": "2025-09-30T10:45:00.000Z", - "author": { - "name": "Sam Kim", - "url": "https://github.com/samkim" - } - }, - { - "id": "https://authzed.com/blog/the-dual-write-problem-in-spicedb-a-deep-dive-from-google-and-canva-experience", - "content_html": "This talk was part of the Authorization Infrastructure event hosted by AuthZed on August 20, 2025.
\nIn this technical deep-dive, Canva software engineer Artie Shevchenko draws on five years of experience with centralized authorization systems—first with Google's Zanzibar and now with SpiceDB—to tackle one of the most challenging aspects of authorization system implementation: the dual-write problem.
\nThe dual-write problem emerges when data must be replicated between your main database (like Postgres or Spanner) and SpiceDB, creating potential inconsistencies due to network failures, race conditions, and system bugs. These inconsistencies can lead to false negatives (blocking legitimate access) or false positives (security vulnerabilities).
\nHowever, as Shevchenko explains, \"the good news is centralized authorization systems, they actually do simplify things quite a bit.\" Unlike traditional event-driven architectures where teams publish events hoping others interpret them correctly, \"with SpiceDB, you're fully in control\" of the entire replication process.
\nSpiceDB offers several key advantages: \"you're not replicating aggregates. Most often, it's simple booleans or relationships,\" making inconsistencies easier to reason about. Additionally, \"the volume of replication is also much smaller\" since authorization data can live primarily in SpiceDB, and you're \"replicating just to SpiceDB, not to 10 other services.\"
\nThe talk explores four solution approaches—from cron sync jobs to transactional outboxes—with real-world examples from Google and Canva. Shevchenko's key insight: \"dual write is not a SpiceDB problem. It's a data replication problem,\" but \"SpiceDB makes the dual write problem, and ultimately the data integrity problem, much more manageable.\"
\n\n\n\"First of all, as a team now, you own the whole replication process. Because you own both copies of the data. Which makes a huge difference. You're not just publishing an event that other teams would hopefully correctly interpret and apply to their data stores.\"
\n
Takeaway: SpiceDB gives you complete control over your authorization data replication, eliminating dependencies on other teams and reducing coordination overhead.
\n\n\n\"And then feed it as an input to our MapReduce style sync job, which would sync data for 100 millions of users in just a couple of hours.\"
\n
Takeaway: SpiceDB's approach has been battle-tested at Google scale, handling hundreds of millions of users efficiently.
\n\n\n\"But, the first three approaches without Zanzibar or SpiceDB would be really tricky, if not impossible. Not only because of the data ownership problem, but also because of aggregates. With event-driven replication, you're probably not replicating simple atomic facts.\"
\n
Takeaway: SpiceDB's simple data model (booleans and relationships) makes dual-write problems significantly more manageable compared to traditional event-driven architectures that deal with complex aggregates.
\nTalk by Artie Shevchenko, Software Engineer at Canva
\nAll right, let's talk about the dual-write problem. My name is Artie Shevchenko, and I'm a software engineer at Canva. My first experience with systems like SpiceDB was actually with Zanzibar at Google in 2017. And now I'm working on SpiceDB integration at Canva. So, yeah, almost five years working with this piece of tech.
\nAnd from my experience, there are two hard things in centralized authorization systems. It's dual-writes and data backfills. But neither of them is unique to Zanzibar or SpiceDB. In fact, dual-write is a fairly standard problem. And when we're talking about replication to another database, it is always challenging. Whether it's a permanent replication of some data to another microservice, or migration to a new database with zero downtime, or even replication to SpiceDB.
\nThe good news is centralized authorization systems, they actually do simplify things quite a bit. First of all, as a team now, you own the whole replication process. Because you own both copies of the data. Which makes a huge difference. You're not just publishing an event that other teams would hopefully correctly interpret and apply to their data stores. With SpiceDB, you're fully in control.
\nSecondly, with SpiceDB, you're not replicating aggregates. Most often, it's simple booleans or relationships. Which makes it much easier to reason about the possible inconsistencies.
\nAnd finally, the volume of replication is also much smaller. For two reasons. First, most of the authorization data you can store in SpiceDB only, once the migration is done. And second, with SpiceDB, you need to replicate just to SpiceDB, not to 10 other services. Well, there are also search indexes, but they're very special for multiple reasons. And the good news is search indexes, you don't need to solve them on the client side. Mostly, you can just delegate this to tools like Materialize.
\nBut that said, even with replication to SpiceDB, there is a lot of essential complexity there that first, you need to understand. And second, you need to decide which approach you're going to use to solve the dual-write problem.
\nThe structure of this talk, unlike the topic itself, is super simple. I don't have any ambition to make the dual-write problem look simple. It's not. But I do hope to make it clear. So, the goal of this talk is to make the problems and the underlying causes clear. And we're going to spend quite a lot of time unpacking what are the practical problems we're solving. And then, talking about the solution space, the goal is to make it clear what works and what doesn't. And, of course, the pros and cons of the different alternatives.
\nBut let's start with a couple of definitions. Almost obvious definitions aside, let's take a look at the left side of the slide, at the diagrams. Throughout the talk, we'll be looking into storing the same piece of data in two databases. Of course, ideally, you would store it in exactly one of them. But in practice, unfortunately, it's not always possible, even with SpiceDB.
\nSo, when information in one database does not match the information in another database, we'll call it a discrepancy or inconsistency. Or I'll simply say that databases are out of sync.
\nWhen talking about the dual-write problem in general, I'll be using the term \"source of truth\" for the database that is kind of primary in the replication process. And the second database I'll call the second database. I was thinking about calling them primary and replica or maybe master and slave. But the problem is, these terms are typically used to describe replication within the same system. But I want to emphasize that these are different databases. And also, the same piece of knowledge may take very different forms in them. So, I'll stick to the terms \"source of truth\" and just some other second database. That's when I talk about the dual-write problem in general.
\nBut not to be too abstract, we'll be mostly looking at the dual-write problem in the context of data replication to SpiceDB, not just to some other abstract second database. And in this case, instead of using the term \"source of truth,\" I'll be using the term \"main database,\" referring to the traditional transactional database where you store most of your data, like Postgres, Dynamo, or Spanner. Because for the purposes of this talk, we'll assume that the main database is a source of truth for any replicated piece of data. Yes, theoretically, replicating in the other direction is also an option, but we won't consider that. We're replicating from the main database to SpiceDB.
\nSo, in different contexts, I'll refer to the database on the left side of this giant white replication arrow as either \"source of truth\" or \"main database\" or, even more specifically, Postgres or Spanner. Please keep this in mind.
\nAnd finally, don't get confused when I call SpiceDB a database. Maybe I can blame the name. Of course, it's more than just a database. It is a centralized authorization system. But in this talk, we actually care about the underlying database only. So, hopefully, that doesn't cause any confusion.
\nAll right. We're done with these primitive definitions. Now, let's define what the dual-write problem is. And let's start with an oversimplified but real example from home automation.
\nLet's say there are two types of resources, homes and devices. Users can be members of multiple homes, and they have access to all the devices in their homes. So, whether a device is in one home or another, that information obviously has to be stored both in the main database, in this case, Spanner, and in SpiceDB.
\nAnd if you want to move a device from one home to another, now you need to update the device's home in both databases. If you get a task to implement that, you would probably start with these two lines of code. You first write to the source of truth, which is Spanner, and then write to the second database, which is SpiceDB. The problem is you cannot write to both data stores in the same transaction, because these are literally different systems.
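\nAs a rough sketch (not the speaker's actual slide code), those two writes might look like this, with spanner and spicedb standing in for whatever client wrappers you use:
def move_device(spanner, spicedb, device_id, new_home_id):\n    # Write 1: the source of truth (Spanner in this example).\n    spanner.execute('UPDATE devices SET home_id = @home WHERE id = @id',\n                    params={'home': new_home_id, 'id': device_id})\n    # Write 2: SpiceDB. This call is not covered by the Spanner transaction,\n    # so it can fail or race independently of write 1.\n    spicedb.write_relationship(resource=f'device:{device_id}',\n                               relation='home',\n                               subject=f'home:{new_home_id}')\n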
\nSo, a bunch of things can go wrong. If the first write fails, it's easy. You just let the error propagate to the client, and they can retry. But what about the second write? What if that one fails? Do you try to revert the first write and return an error to the client? But what if reverting the first one fails? It's getting complicated.
\nAnother idea. Maybe open a Spanner transaction and write to SpiceDB with the Spanner transaction open. I won't spend time on exploring this option, but it also doesn't solve anything, and in fact, just makes things worse. The truth is, none of the obvious workarounds actually make things better.
\nSo, we'll use these two simple lines of code as a starting point, and just acknowledge that there is a problem for us to solve there. The second write may fail for different reasons. It's either because of a network problem, or a problem with SpiceDB, or even the machine itself terminating after the first line. In all of these scenarios, the two databases become out of sync with each other. One of them will think that the device is in Home 1, and another will think that it is in Home 2.
\nThe second write failing can create two types of data integrity problems. It's either SpiceDB is too restrictive. It doesn't allow access to someone who should have access, which is called a false negative on the slides. Or the opposite. SpiceDB can be too permissive, allowing access to someone who shouldn't have access. False negatives are more visible. It's more likely you would get a bug report for it from a customer. But false positives are actually more dangerous, because that's potentially a security issue.
\nWe've already tried several obvious workarounds, and none of them worked. But let's give it one last shot, given that it is false positives that are the main issue here. Maybe there is a simple way to get rid of those. Let's try a special write operations ordering. Namely, let's do SpiceDB deletes first. Then, in the same transaction, make all the changes to the main database. And then, do SpiceDB upserts.
\nSo, in our example, the device is first removed from home 1 in SpiceDB. And then, after the Spanner write, the device is added to home 2 in SpiceDB. And it actually does the trick. And it's easy to prove that it works not only in this example, but in general. If there are no negations in the schema, such an ordering of writes ensures no false positives from SpiceDB. So, now the dual write problem looks like this. Much better, isn't it? No security issues anymore.
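\nWith the same illustrative wrappers as before, the reordered version looks roughly like this: deletes first, then the main-database write, then upserts.
def move_device(spanner, spicedb, device_id, old_home_id, new_home_id):\n    # 1. SpiceDB deletes first: access via home 1 disappears immediately.\n    spicedb.delete_relationship(resource=f'device:{device_id}',\n                                relation='home',\n                                subject=f'home:{old_home_id}')\n    # 2. All main-database changes in a single transaction.\n    spanner.execute('UPDATE devices SET home_id = @home WHERE id = @id',\n                    params={'home': new_home_id, 'id': device_id})\n    # 3. SpiceDB upserts last: access via home 2 appears only after the source\n    # of truth already says the device is in home 2. If step 2 or 3 fails, the\n    # result is a false negative, never a false positive (assuming no negations\n    # in the schema).\n    spicedb.write_relationship(resource=f'device:{device_id}',\n                               relation='home',\n                               subject=f'home:{new_home_id}')\n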
\nLet me play devil's advocate here. If the second or the third write fails, let's say, 100 times per month, we would probably hear from nobody. Or maybe one user. But for one user, can you fix it manually? But aren't we missing something here?
\nThe problem is, there is a whole class of issues we've ignored so far. It's race conditions. In this scenario from the slide, we're doing writes in the order that was supposed to totally eliminate the false positives. But as a result of these two requests from Alice and Bob, we get a false positive for Tom. That's because we're no longer talking about failing writes. None of the writes failed in this scenario. It is race conditions that caused the data integrity problem here.
\nSo, we have identified two causes or two sources of discrepancies between the two databases. The first is failing writes. And the second is race conditions. So, unfortunately, yet another workaround doesn't really make much difference. Back to our initial simple starting point. Two consecutive writes. First write to the main database. And then write to SpiceDB. Probably in a try-catch like here.
\nAnd one last note looking at this diagram. Often people think about the dual write problem very simplistically. They think if they can make all the writes eventually succeed, that would solve the problem for them. So, all they need is a transactional outbox or a CDC, change data capture, or something like this. But that's not exactly the case. Because at the very least, there are also race conditions. And as we'll see very soon, it's even more than that.
\nAnd now, let's add backfill to the picture. If you're introducing a new field, a new type of information that you want to be present in multiple databases, you just make the schema changes, implement the dual write logic, and that's it. You can immediately start reading from the new field or a new column in all the databases. But if it's not a new type of information, if there is pre-existing data, then the data needs to be backfilled.
\nThen the new column, field, or relation goes through these three phases. You can say there is a lifecycle. First, the schema definition changes. New column is created or something like this. Then, dual write is enabled. And finally, we do a backfill, which iterates through all of the existing data and writes it to the second database. And once the backfill is done, the data in the second database is ready to use. It's ready for reads and ready for access checks if we're talking about SpiceDB.
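\nA minimal sketch of what such a backfill looks like (illustrative, not the speaker's actual pseudocode):
def backfill_device_homes(spanner, spicedb):\n    # Iterate over all pre-existing rows in the main database...\n    for device in spanner.query('SELECT id, home_id FROM devices'):\n        # ...and write the corresponding relationship to SpiceDB.\n        # The row can change between this read and this write,\n        # which is exactly the race condition discussed next.\n        spicedb.write_relationship(resource=f'device:{device.id}',\n                                   relation='home',\n                                   subject=f'home:{device.home_id}')\n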
\nAnd as it's easy to see from the backfill pseudocode, backfill also contributes to race conditions. Simply because the data may change between the read and write operations. And again, welcome false positives.
\nOkay. So far, we've done two things. We've defined the problem. And we've examined multiple tempting workarounds just to find that they don't really solve anything. Now, let's take a look at several approaches used at Google and Canva that actually do work. And, of course, discuss their trade-offs.
\nFirst of all, doing nothing about it is probably not a good idea in most cases. Because authorization data integrity is really important. It's not only false negatives. It is false positives as well, which, as you remember, can be a security issue. The good news is there are multiple options to choose from if you want to solve the dual-write problem.
\nAnd let's start with a solution we used in our team at Google, which is pretty simple. We just had a cron sync job. That job would run several times per day and fix all the discrepancies between our Spanner instance and Zanzibar. Looking at the code on the right side, because of the sync job, we can keep the dual-write code itself very, very simple. It's basically the two lines of code we started with.
\nSync jobs at Google are super common. And what made it even easier for us here is consistent snapshots. We could literally have a snapshot of both Spanner and Zanzibar for exactly the same instant. And then feed it as an input to our MapReduce style sync job, which would sync data for 100 millions of users in just a couple of hours.
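\nConceptually, a sync job like that is just a diff-and-repair pass over snapshots of both stores. A much-simplified, non-MapReduce sketch, with hypothetical helper methods on the clients:
def sync_device_homes(spanner, spicedb):\n    # Read both sides, ideally from consistent snapshots taken at the same instant.\n    truth = {row.id: row.home_id\n             for row in spanner.query('SELECT id, home_id FROM devices')}\n    replicated = spicedb.read_device_home_map()  # hypothetical helper: {device_id: home_id}\n\n    for device_id, home_id in truth.items():\n        if replicated.get(device_id) != home_id:\n            # Discrepancy: repair SpiceDB to match the source of truth.\n            spicedb.set_home(device_id, home_id)\n    for device_id in replicated.keys() - truth.keys():\n        # Relationship exists in SpiceDB but not in the source of truth.\n        spicedb.clear_home(device_id)\n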
\nAnd interestingly, sync jobs are the only solution that truly guarantees eventual consistency, no matter what. Because in addition to write failures and races, there is also a third problem here. It is bugs in the data replication logic.
\nNow, the most interesting part: how did it perform in practice? And thanks to our sync job, we actually know for sure how it went. Visibility into the data integrity is a huge, huge benefit. We not only knew that all the discrepancies got fixed within several hours, but we also knew how many of them we actually had. And interestingly, the number of discrepancies was really high only when we had bugs in our replication logic. Race conditions and failed writes did cause some inconsistencies too. But even at our scale, there was only a small number of them, typically tens or hundreds per day.
\nNow, talking about the downsides of this approach, there are two main downsides. The first one is there are always some transient discrepancies, which can be there for several hours. Because we're not trying to address race conditions or failing writes in real time. And the second problem is infra costs. Running a sync job for a large database almost continuously is really, really expensive.
\nAll right. We're done with the sync jobs. Now, all the other approaches we'll be looking at, they leverage the transactional outbox pattern. For some of those approaches, you could achieve similar results with CDC, change data capture, instead of the outbox. But outbox is more flexible, so we'll stick to it.
\nAnd at its core, the transactional outbox pattern is really, really simple. When writing changes to the main database, in the same transaction, we also store a message saying, \"please write something to SpiceDB.\" And unlike traditional message queues outside of the main database, such an approach truly guarantees for us at-least-once delivery. And then there is a worker running continuously that pulls the messages from the outbox and acts upon them, makes the SpiceDB writes. Note that I mentioned a Zedtoken here in the code, but these are orthogonal to our topics, so I'll just skip them on the next slides.
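\nStripped down (and with the ZedToken handling omitted, as the speaker notes), the pattern looks roughly like this, again with illustrative wrappers:
def move_device(spanner, device_id, new_home_id):\n    with spanner.transaction() as txn:\n        # The business write and the outbox message commit atomically.\n        txn.execute('UPDATE devices SET home_id = @home WHERE id = @id',\n                    params={'home': new_home_id, 'id': device_id})\n        txn.insert('outbox', {'resource': f'device:{device_id}',\n                              'relation': 'home',\n                              'subject': f'home:{new_home_id}'})\n\ndef outbox_worker(outbox, spicedb):\n    # Runs continuously; gives at-least-once delivery of the SpiceDB writes.\n    while True:\n        for msg in outbox.poll_batch():\n            spicedb.write_relationship(resource=msg['resource'],\n                                       relation=msg['relation'],\n                                       subject=msg['subject'])\n            outbox.ack(msg)\n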
\nAs I already mentioned, the problem the transactional outbox solves for us is reliable message delivery. Once SpiceDB and the network are in a healthy state, all the valid SpiceDB writes will eventually succeed. One less problem for us to worry about. But similar to CDC, it doesn't solve any of the other problems. It obviously doesn't provide any safety nets for the bugs in the data replication logic. And as it's easy to see from these examples, the transactional outbox is also subject to race conditions. Unless there are some extra properties guaranteed, which we'll talk very, very soon about.
\nOkay. Now that we've set the stage with transactional outboxes, let's take a look at several solutions. The second approach to solving the dual-write problem is what I would call micro-syncs. Not sure if there's a proper term for it, but let me explain what I mean. In many ways, it's very similar to the first approach, cron sync jobs. But instead of doing a sync for the whole databases, we would be doing targeted syncs for specific relationships only.
\nFor example, if Bob's role in Team X changed, we would completely resync Bob's membership in that team, including all his roles. So in the worker, we would pull the message from the outbox, then read the data from both databases, and fix it in SpiceDB if there are any discrepancies.
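\nA sketch of such a micro-sync handler, using the team-membership example and the same kind of illustrative helpers (in practice the message would also be processed after a delay, as described below):
def handle_team_membership_change(spanner, spicedb, msg):\n    # Resync one specific relationship instead of the whole database.\n    truth_roles = spanner.query_roles(team_id=msg.team_id, user_id=msg.user_id)\n    spicedb_roles = spicedb.read_roles(team_id=msg.team_id, user_id=msg.user_id)\n\n    for role in truth_roles - spicedb_roles:\n        spicedb.write_relationship(resource=f'team:{msg.team_id}',\n                                   relation=role,\n                                   subject=f'user:{msg.user_id}')\n    for role in spicedb_roles - truth_roles:\n        spicedb.delete_relationship(resource=f'team:{msg.team_id}',\n                                    relation=role,\n                                    subject=f'user:{msg.user_id}')\n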
\nTo make it scale, instead of writing it to SpiceDB from the worker directly, we can pull those messages in batches and just put them into another durable queue, for example, into Amazon SQS. And then we can have as many workers as we need to process those messages.
\nBut aren't these micro-syncs subject to races themselves? They are. Here on this diagram, you can see an example of such a race condition creating a discrepancy. But adding just a several-seconds delay makes such races highly unlikely. And for our own peace of mind, we can even process the same message again, let's say in one hour. Then races become practically impossible. I mean, yes, in theory, the internet is a weird thing that doesn't make any guarantees. But in practice, even TCP retransmissions, they won't take an hour.
\nSo the race conditions are solved with significantly delayed micro-syncs. And you can even do multiple syncs for the same message with different delays.
\nNow, what about bugs in the data replication logic? In practice, that's the only difference from the first approach: micro-syncs do not cover some types of bugs. Specifically, let's say you're introducing a new flow that modifies the source of truth, but then you simply forget to update SpiceDB in that particular flow. Obviously, if there is no message sent, there is no micro-sync, and there would be a discrepancy. But apart from that, there are no other substantial downsides to micro-syncs. They provide you with almost the same set of benefits as normal sync jobs, and even fix discrepancies on average much, much faster, which is pretty exciting.
\nAnd finally, let's take a look at a couple of options that do not rely on syncs between the databases. Let's introduce a version field for each replicated field. In our home automation example, it would be a home version column in the devices table, and a corresponding home version relation in the SpiceDB device definition. And then we must ensure that each write to the home ID field in Spanner increments the device home version value. And then in the message itself, we also provide this new version value so that when the worker writes to SpiceDB, it can do a conditional write to make sure it doesn't override a newer home value with an older one.
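\nOne possible shape for this, sketched with the same illustrative helpers (the conditional write itself could be built on SpiceDB write preconditions, for example; the helper names here are hypothetical):
def move_device(spanner, device_id, new_home_id):\n    with spanner.transaction() as txn:\n        # Bump the per-field version together with the value it guards.\n        version = txn.query_value('SELECT home_version FROM devices WHERE id = @id',\n                                  params={'id': device_id}) + 1\n        txn.execute('UPDATE devices SET home_id = @home, home_version = @v WHERE id = @id',\n                    params={'home': new_home_id, 'v': version, 'id': device_id})\n        txn.insert('outbox', {'device_id': device_id,\n                              'home_id': new_home_id,\n                              'home_version': version})\n\ndef apply_outbox_message(spicedb, msg):\n    # Conditional write: only apply if SpiceDB still holds an older version,\n    # so a delayed or reordered message can never overwrite a newer value.\n    spicedb.write_home_if_version_older(device_id=msg['device_id'],\n                                        home_id=msg['home_id'],\n                                        home_version=msg['home_version'])\n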
\nAnd there are different options for how to implement this. But none of them are really simple. So introducing a bug in the replication logic, honestly, is pretty easy. And the worst thing is, unlike sync jobs or even micro-syncs, this approach doesn't provide you with any safety nets. When you introduce a bug, it won't even make it visible. So yeah, those are the three downsides of this approach: complexity, no visibility into the replication consistency, and no safety nets. And the main benefit is, it does guarantee there will be no inconsistencies from race conditions or failed writes.
\nAnd the last option is here more for completeness. To explore the idea that lies on the surface and, in fact, almost works, but there are a lot of nuances, limitations, and pitfalls to avoid there. And that's the only option where we solve the dual write problem by actually abandoning the dual write logic. So let's say we have a transactional outbox. And the only thing the service code does, it writes to the main database and the transactional outbox. No SpiceDB writes there. So there is no dual write.
\nAnd there is just a single worker that processes a single message at a time, the oldest message available in the transactional outbox, and then it attempts to make a SpiceDB write until it succeeds. So the transactional outbox is basically a queue. And that by itself guarantees eventual consistency. I'll give you some time to digest this statement.
\nYou can prove that as long as there are no bugs, the transactional outbox is a queue, and there is a single consumer, eventual consistency between the main database and SpiceDB is guaranteed. Because it's FIFO, first in, first out, and there are no SpiceDB writes from service code.
\nHowever, a single worker processing one message at a time from a queue wouldn't provide us with a high throughput. So you might be tempted to, instead of writing to SpiceDB directly from the worker, to put it into another durable queue. But I'm sure you can see the problem with this change, right? We've lost the FIFO property. So now it's subject to races. Unless that second queue is FIFO as well, of course. But if it's FIFO, guess what? We're not increasing throughput.
\nSo yeah, if we're relying on the FIFO property to address race conditions, there is literally no reason to transfer messages into another durable queue. If you want to increase the throughput, just use bulk SpiceDB writes. But you would need to preprocess them to make sure there are no conflicts within the same batch. Yes, there is no horizontal scalability, but maybe that's not a problem for you.
\nYet, what would probably be a problem for most use cases is that a single problematic write can stop the whole replication process. And we actually experienced exactly this issue once: a single malformed SpiceDB write halted the whole replication process for us. And that's pretty annoying, as it requires manual intervention and is pretty urgent.
\nAnd yet another class of race conditions is introduced by backfills. Because FIFO is a property of the transactional outbox. But backfill writes, fundamentally, they do not go through the outbox. So, yeah. While it's possible to introduce a delay to the transactional outbox specifically for the backfill phase, to address it, I would say the overall amount of problems with this approach is already pretty catastrophic.
\nSo, let's do a quick summary. We've explored four different approaches to solving the dual write problem. And here is a trade-off table with the pros and cons of each of them. The obvious loser is the last FIFO transactional outbox option. And conditional writes with the version field are probably not the most attractive solution either, mostly because of their complexity and lack of visibility into the replication consistency.
\nSo, the two options we're probably choosing from are the first and the second one. It's two types of syncs. Either a classic cron sync job or micro syncs. And, yeah. You can totally combine most of these approaches with each other if you want.
\nWe're almost done. I just wanted to reiterate the fact that dual write is not a SpiceDB problem. It's a data replication problem. So, let's say you're doing event-driven replication. Strictly speaking, there are no dual writes, same as in the last FIFO option. But, ultimately, there are two writes to two different systems, to two different databases. So, we're facing exactly the same set of problems.
\nAdding a transactional outbox can kind of ensure that all the valid writes will eventually succeed. But, probably only if you own the other end of the replication process. Then, you can also add the FIFO property to address race conditions, which is option four. But, the first three approaches without Zanzibar or SpiceDB would be really tricky, if not impossible. Not only because of the data ownership problem, but also because of aggregates. With event-driven replication, you're probably not replicating simple atomic facts.
\nSo, yeah. SpiceDB makes the dual write problem, and ultimately the data integrity problem, much more manageable.
\nAnd that's it. Hopefully, this presentation brought some clarity into the highly complex dual write problem.
", - "url": "https://authzed.com/blog/the-dual-write-problem-in-spicedb-a-deep-dive-from-google-and-canva-experience", - "title": "The Dual-Write Problem in SpiceDB: A Deep Dive from Google and Canva Experience", - "summary": "In this technical deep-dive, Canva software engineer Artie Shevchenko draws on five years of experience with centralized authorization systems, first with Google's Zanzibar and now with SpiceDB, to tackle one of the most challenging aspects of authorization system implementation: the dual-write problem. This talk was part of the Authorization Infrastructure event hosted by AuthZed on August 20, 2025.", - "image": "https://authzed.com/images/blogs/a5-recap-canva.png", - "date_modified": "2025-09-16T08:00:00.000Z", - "date_published": "2025-09-16T08:00:00.000Z", - "author": { - "name": "Artie Shevchenko", - "url": "https://au.linkedin.com/in/artie-shevchenko-67845a4b" - } - }, - { - "id": "https://authzed.com/blog/turos-spicedb-success-story-how-the-leading-car-sharing-platform-transformed-authorization", - "content_html": "This talk was part of the Authorization Infrastructure event hosted by AuthZed on August 20, 2025.
\nAndre, a software engineer at Turo, shared how the world's leading car-sharing platform solved critical security and scalability challenges by implementing SpiceDB with managed hosting from AuthZed Dedicated. Faced with fleet owners having to share passwords due to rigid ownership-based permissions, Turo built a relationship-based authorization system enabling fine-grained, team-based access control. The results speak for themselves: \"SpiceDB made it trivial to design and implement the solution compared to traditional relational databases\" while delivering \"much higher performance and throughput.\" The system proved remarkably adaptable—adding support for inactive team members required \"literally one single line of code\" to change in the schema. AuthZed's managed hosting proved equally impressive, with only one incident in over two years of production use. As Andre noted, \"ultimately hosting with AuthZed saved us money in the long run\" by eliminating the need for dedicated infrastructure engineering, allowing Turo to focus on their core business while maintaining a \"blistering fast\" authorization system.
\nOn Reliability and Expert Support:
\n\n\n\"In over two years [...] of operations in production, we had a single incident. And even then in that event, they demonstrated the capacity to recover from faults very, very quickly.\"
\n
On Business Focus:
\n\n\n\"For over two years, Turo has used AuthZed's [Dedicated] offering where they're responsible for deploying and maintaining all the infrastructure required by the SpiceDB clusters. And that gives us time back to focus on growing our business, which is our primary concern.\"
\n
Talk by Andre Sanches, Software Engineer at Turo
\nHello, everyone, and welcome. I'm Andre, a software engineer at Turo, working with SpiceDB for just over two years now. I'm here to share a bit of our experience with SpiceDB as a product and AuthZed as a hosting partner. Congratulations, by the way, to AuthZed for its five-year anniversary. It's a privilege to be celebrating this milestone together. So let's get started.
\nFirst, a quick introduction to those who don't know Turo. We're the leading car-sharing platform in the world, operating in most of the US and four other countries. Our mission is to put the world's 1.5 billion cars to better use. Our business model is similar to popular home-sharing platforms you may be familiar with, with a fundamental difference. Vehicles are less expensive compared to homes, so it's common that hosts build up fleets of vehicles on Turo. In fact, many of our hosts build successful businesses with our help, and therein lies a challenge that we solved with SpiceDB.
\nHosts have responsibilities, such as communicating with guests in a timely manner, taking pictures of vehicles prior to handoff, and again, upon return of the vehicle to resolve disputes that may happen, managing vehicle schedules, etc. These things take time and effort, and as you scale up your business, fleet owners often hire people to help. And the problem is, in the past, Turo had a flat, ownership-based permission model. You could only interact with the vehicles you own, so hosts had no other choice but to share their accounts and their passwords. It's safe to say that folks in the target audience of this event understand how big of a problem that can be.
\nMoreover, third-party companies started sprouting all over the place to bridge that gap, to manage teams by way of calling our backend, which adds yet another potential attack vector by accessing Turo's customer data. So, it had become a large enough risk and a feature gap that we set out to solve that problem.
\nThe solution was to augment the flat, ownership-based model with a team-based approach, where admin hosts, meaning the fleet owner, can create teams that authorize individual drivers to perform specific actions, really fine-grained, on one or more of the vehicles that they own. Members are invited to join teams via email, which gives them the opportunity to sign up for a Turo account if they don't yet have one.
\nSo, the solution from a technical standpoint is a graph-based solution that enables our backend to determine very quickly, can Driver ABC perform a certain action on vehicle XYZ? In this case right here, can Driver ABC communicate with guests that booked that certain vehicle? SpiceDB made it trivial to design and implement the solution compared to traditional relational databases, which is most of our backend. Moreover, it offloaded our monolithic database with a tool that offers much higher performance and throughput.
\nAnecdotally, the simplicity of SpiceDB helped implement a last-minute requirement that crept in late in the development cycle—support for inactive team members, the ones who are pending invitation acceptance. Prior to that, the invitation system was purely controlled in MySQL. And we realized, you know what, if we're storing the team in SpiceDB, why not make it so that we can store inactive users too? And the reason I'm mentioning this is this impressed everybody who was working on that feature at the time, because it was literally one single line of code that we had to change in the schema to enable this.
\nSo I'll talk more about this in a second where I show some technical things. But the graph that I just mentioned then roughly translates to this schema. So this is a simplified but still accurate rendition of what our SpiceDB schema looks like. Hopefully this clarifies how driver membership propagates to permissions on vehicles, if you're familiar with SpiceDB schemas.
\nSome noteworthy mentions here are self-referencing relations, this one up here, or all the way up there. So basically, this is how we implemented the inactive users. If you notice that there, there's the member role and then an active member role. And by way of adding a single record that connects the member role with an active member role in the hosting team, you can enable and disable drivers. So this was so incredibly impressive at the time, because we thought we're going to have to change the entire schema and a whole bunch of other changes. And no, that's all it took.
\nAnd again, it's one of those things that once it clicked, if you're familiar with the SpiceDB but not with the self-referencing relation, looking at this, that #member role and pointing to a relation in the same definition, it kind of looks a little daunting. It did to me. I don't know—you're probably smarter than I am, but it was daunting. But then one day it just clicked and I'm like, hmm, okay, that's how it is. And I was super stoked to continue working with SpiceDB and I'm going to implement more and more of the features. And help the feature team, actually, because it was a separate feature team that was working on this. So that self-referencing was interesting.
\nThe other noteworthy mention here is the same namespaces. If you notice in front of the definition, there's a hosting teams forward slash. This is how we separate the schema into multiple copies of the same schema in the same cluster. So we have an ephemeral test environment in which we create and destroy on command sandbox replicas of our entire backend system. This enables us to deploy dozens, if not hundreds, of isolated copies of the schema, along with everything else in our backend, to test new features in a controlled environment that we can break, that we can modify as we see fit without affecting customers. And the namespacing feature in SpiceDB allowed us to use the same cluster for all those copies and save us some money. So we don't have to stand up a new server. We, you know, there's no computational costs or delays or any of that in provisioning computing resources and this and that.
\nSo the feature was released the week of, you know, us going pre-live, in a test environment. And we were probably the first adopters of this and it was really cool.
\nSo let me see at a high level, this is how our hosting team feature works. You can see, let me use the mouse here. You can see how permissions propagate to teams. So, team pricing and availability goes to the relation of the team in the hosting team. Hosting team has the pricing and availability for active member roles or admin role. Plus sign, as you all know, is a or, and then it connects to the driver. Simple, fast. This is blistering fast.
\nOne other query that we make to SpiceDB very, very often—matter of fact, this is the single most, you know, issued query to SpiceDB at any given time—is, is the currently logged in user a cohost. And that's done for everybody. Even if you're not a cohost, this is how we determine whether you're a cohost or not. That will then drive UI, you know, decisions, what, what widgets to show. You know, only if you're, if it's pertinent to you, if you're a cohost, if not, then there's no, no reason to. To pollute the UI with, you know, cohosting features. Yeah.
\nAnd this is what the UI looks like. So, you, on a team, you have cohosts and you can add or invite, here's an interesting thing. The code name of the project was cohosting. It ended up being hosting teams because we then used the nomenclature cohosts to add people to teams. So, here you have your cohosts. You can invite them by email. They get an, an email that points them to sign up to Turo. If they already have an account, they can just log in. And the moment they log in, it automatically accepts the invitation.
\nNext you have the fine grain permissions of what your group can, or your team can do. In this case, we have trip management enabled. This is the base actually, you know, the base permission that you have to grant to everybody on the team. And then there's pricing and availability that allows you to set prices for vehicles, discounts, you know, see finances and all that stuff. So you can imagine why that's, you know, why it's very nice to be able to toggle this and not let, you know, just any cohost that has no business looking at your finances, you know, just hiding it from them by way of untoggling the permission here. And then you have your vehicles. The list shows all the vehicles you own. You just toggle the ones you want, save, and you're off to the races. Your hosting team is in place and working.
\nSo, a word about AuthZed as a hosting partner. When you're considering adopting a new system, a big challenge is setting it up and running it in a scalable and reliable way. You have to manage, you know, security issues. You have to manage your scaling. You have to manage all kinds of, you know, infrastructure challenges. And that costs money. In this day and age, it's really hard to find engineers who understand infrastructure well enough to manage all the moving parts of a highly scalable system such as SpiceDB.
\nFor over two years, Turo has used AuthZed's fully hosted cloud offering where they're responsible for deploying and maintaining all the infrastructure required by the SpiceDB clusters. And that gives us time back to focus on growing our business, which is our primary concern. So this is a great opportunity actually to give AuthZed a shout out for their excellent reliability.
\nIn over two years, over two years and three months now, actually of operations in production, we had a single incident. And even then in that event, they demonstrated the capacity to recover from faults very, very quickly to pinpoint the problem incredibly quickly. And, you know, take care of it. I think the outage was, we were out for like 38 minutes, something like that. It was, you know, we've had other partners that things were much, much more challenging. So, and once in two years, the root cause, the entire handling of the outage was very, very, you know, nice to see. Because it involved thorough analysis, post-mortems and making sure that it doesn't happen again, putting in safeguards to ensure that it doesn't happen again.
\nSo everything was, you know, systems fail. We understand that. And how we deal with it is how, is what shows how, you know, how good you are. And with AuthZed, we rest, you know, easy knowing that we're well taken care of. And ultimately hosting with AuthZed saved us money in the long run because it would otherwise take a lot of engineering time and effort just to keep the clusters running. So if your company is considering adopting SpiceDB, I would highly encourage you to have a chat with AuthZed about hosting as well. From our experience, it's well worth the investment.
", - "url": "https://authzed.com/blog/turos-spicedb-success-story-how-the-leading-car-sharing-platform-transformed-authorization", - "title": "Turo's SpiceDB Success Story: How the Leading Car-Sharing Platform Transformed Authorization", - "summary": "Andre, a software engineer at Turo, shared how the world's leading car-sharing platform solved critical security and scalability challenges by implementing SpiceDB with managed hosting from AuthZed Dedicated. This talk was part of the Authorization Infrastructure event hosted by AuthZed on August 20, 2025.", - "image": "https://authzed.com/images/blogs/a5-recap-turo.png", - "date_modified": "2025-09-15T13:49:00.000Z", - "date_published": "2025-09-15T13:49:00.000Z", - "author": { - "name": "Andre Sanches", - "url": "https://www.linkedin.com/in/ansanch" - } - }, - { - "id": "https://authzed.com/blog/authzed-is-5-event-recap-authorization-infrastructure-insights", - "content_html": "Last month we celebrated AuthZed's fifth birthday with our first-ever \"Authorization Infrastructure Event\" - a deep dive\ninto the technical challenges and innovations shaping the future of access control.
\nThe livestream brought together industry experts from companies like Canva and Turo to share real-world experiences with\nauthorization at scale, featured major product announcements including the launch of AuthZed Cloud, and included\nfascinating discussions with database researchers about the evolution of data infrastructure. From solving the\ndual-write consistency problem to powering OpenAI's document processing, we covered the full spectrum of modern\nauthorization challenges.
\nWatch the full event recording (2.5 hours)
\nBefore we dive into the technical talks, let's start with the big announcements:
\nWe finally launched AuthZed Cloud - a self-service platform that allows you to provision,\nmanage, and scale your\nauthorization infrastructure on demand. Sign up with a credit card, get your permission system running in minutes, and\nscale as needed - authorization that runs like cloud infrastructure. Through\nour AuthZed Cloud Starter Program, we're\nalso providing credits to help teams try out the platform.
\n\nOpenAI securely connects enterprise knowledge with ChatGPT by using AuthZed to\nhandle permissions for their corporate data connectors - when ChatGPT connects to your company's Google Drive or\nSharePoint. They've built connectors to process and search over 37 billion documents for more than 5 million\nbusiness users while respecting existing data permissions using AuthZed's authorization infrastructure.
\nThis demonstrates how authorization infrastructure has become critical for AI systems that need to understand and\nrespect complex organizational data permissions at massive scale.
\nArtie Shevchenko from Canva delivered an excellent explanation of the dual-write problem that many authorization\nteams face. Anyone who has tried to keep data consistent between two different databases (such as your main database +\nSpiceDB) will recognize this challenge. Watch Artie's full talk
\nArtie was direct about the reality: the dual-write problem is hard. Here's what teams need to understand:
\nThings Will Go Wrong
\nFour Ways to Deal With It
\nCanva uses sync jobs as their safety net. Artie's team found that most inconsistencies actually came from bugs in their replication logic, not from the network problems everyone worries about. The sync jobs caught everything and gave them visibility into what was actually happening.
\nThe Real Lesson: Don't try to be clever. Pick an approach, implement it well, and have monitoring so you know when things break.
\nAndre Sanches from Turo told the story of how they moved from \"just share your password with your employees\" to\naccurate fine-grained access controls. Watch Andre's talk
\nThe Problem Was Real\nTuro hosts were sharing account credentials with their team members. Fleet owners needed help managing vehicles, but\nTuro's permission system only understood \"you own it or you don't.\" This created significant security challenges.
\nThe Solution Was Surprisingly Straightforward\nAndre's team built a relationship-based permission system using SpiceDB that supports:
\nThe best part? When they needed to add support for inactive team members late in development, it was literally a\none-line schema change. This exemplifies the utility of SpiceDB schemas and authorization as infrastructure.
\nTwo Years Later\nTuro has had exactly one incident with their AuthZed Dedicated deployment in over two years - and that lasted 38 minutes. Andre made it clear: letting AuthZed handle the infrastructure complexity was absolutely worth it. His team focuses on building features, not babysitting databases.
\nProfessor Andy Pavlo from Carnegie Mellon joined our co-founder Jimmy Zelinskie for a chat about databases, AI,\nand why new data models keep trying to kill SQL. Watch the fireside chat
\nThe SQL Cycle\nAndy's been watching this pattern for decades:
\nVector databases? Being absorbed into PostgreSQL. Graph databases? SQL 2024 added property graph queries. NoSQL? Most of those companies quietly added SQL interfaces.
\nThe Spiciest Take\nJimmy dropped this one: \"The PostgreSQL wire protocol needs to die.\"
\nHis argument: Everyone keeps reimplementing PostgreSQL compatibility thinking they'll get all the client library benefits for free. But what actually happens is you inherit all the complexity of working around a pretty terrible wire protocol, and you never know how far down the rabbit hole you'll need to go.
\nAndy agreed it's terrible, but pointed out there's not enough incentive for anyone to build something better. Classic tech industry problem.
\nAI and Databases\nThey both agreed that current AI hardware isn't radically different from traditional computer architecture - it's just specialized accelerators. The real revolution will come from new hardware designs that change how we think about data processing entirely.
\nJoey Schorr (our CTO) showed off something that made me genuinely excited: a way to make SpiceDB look like regular\nPostgreSQL tables. Watch Joey's demo
\nYou can literally write SQL like this:
\nSELECT * FROM documents\nJOIN permissions ON documents.id = permissions.resource_id\nWHERE permissions.subject_id = 'user:jerry' AND permissions.permission = 'view'\nORDER BY documents.title DESC;\n\nThe foreign data wrapper handles the SpiceDB API calls behind the scenes, and PostgreSQL's query planner figures out the optimal way to fetch the data. Authorization-aware queries become just... queries.
\nVictor Roldán Betancort demonstrated AuthZed Materialize, which precomputes complex permission decisions so SpiceDB\ndoesn't have to traverse complex relationship graphs in real-time. Watch Victor's demo
\nThe demo showed streaming permission updates into DuckDB, then running SQL queries against the materialized permission\nsets. This creates a real-time index of who can access what, without the performance penalty of traversing permission\nhierarchies on every query.
\nSam Kim talked about authorization for Model Context Protocol servers and released a reference implementation for an\nMCP server with fine-grained authorization support built in. Watch Sam's MCP talk
\nThe key insight: if you don't build official MCP servers for your APIs, someone else will. And you probably won't like how they handle authorization. Better to get ahead of it with proper access controls baked in.
\nIrit Goihman (our VP of Engineering) shared some thoughts on how we approach building software. Watch Irit's insights
\nRemote-first engineering teams need different approaches to knowledge sharing and innovation.
\nWe recognized the contributors who make SpiceDB a thriving open source project. The community response has been\nexceptional:
\nCore SpiceDB Contributors:
\nClient Library Heroes (making SpiceDB accessible everywhere):
\nCommunity Tooling Builders (the ecosystem enablers):
\nEvery single one of these folks saw a gap and decided to fill it. That's what makes open source communities amazing.
\nFive years ago, application authorization was often a DIY effort that was hard to scale. Today, companies are\nprocessing billions of permission checks through purpose-built infrastructure.
\nThe next five years? AI agents are going to need authorization systems that don't exist yet. Real-time permission materialization will become table stakes. Integration with existing databases will get so seamless you won't think about it.
\nIf you take anything away from our fifth birthday celebration, let it be this:
\nAuthorization infrastructure has gone from \"development requirement\" to \"strategic advantage.\" The companies that figure\nthis out first will have a significant edge in keeping pace with quickening development cycles and heightened security\nneeds.
\nThanks to everyone who joined AuthZed for the celebration, and here's to the next five years of fixing access control\nfor everyone.
\nWant to try AuthZed Cloud? Sign up here and get started in minutes.
\nJoin our community on Discord and\nstar SpiceDB on GitHub.
", - "url": "https://authzed.com/blog/authzed-is-5-event-recap-authorization-infrastructure-insights", - "title": "AuthZed is 5: What We Learned from Our First Authorization Infrastructure Event", - "summary": "We celebrated our 5th birthday with talks from Canva, Turo, and Carnegie Mellon. Here's what we learned about the dual-write problem, scaling authorization in production, and why everyone keeps reimplementing the PostgreSQL wire protocol.", - "image": "https://authzed.com/images/blogs/blog-featured-image.png", - "date_modified": "2025-09-02T18:00:00.000Z", - "date_published": "2025-09-02T18:00:00.000Z", - "author": { - "name": "Corey Thomas", - "url": "https://www.linkedin.com/in/cor3ythomas/" - } - }, - { - "id": "https://authzed.com/blog/authzed-cloud-is-now-available", - "content_html": "Today marks a special milestone for AuthZed: we're celebrating our 5th anniversary! There are honestly too many thoughts and reflections swirling through my mind to fit into a single blog post. The reality is that most startups don't make it to 5 years, and I'm extremely proud of what we've built together as a team and community.
\nIf you want to hear me reflect on the journey of the past 5 years, I'm giving a talk today about exactly that, and we'll post a link to the recording here when it's ready. But today isn't just about looking back, it's also about looking forward, and I’ve personally been looking forward to launching our next iteration of authorization infrastructure: AuthZed Cloud.
\nIn this blog post, I'll cover what we've built and why, but if you don't need that context and just want to dive in, feel free to bail on this post and sign up right now!
\nTo understand why we built AuthZed Cloud, I need to first talk about AuthZed Dedicated, because in many ways, Dedicated represents our vision of the perfect authorization infrastructure product.
\nAuthZed Dedicated is nearly infinitely scalable: capable of handling millions of queries per second when you need it. It's co-located with your workloads, which means there's no internet or cross-cloud latency penalty for your authorization decisions, which are often in the critical path for user interactions. It can run nearly anywhere on earth, with support for all three major cloud providers, giving you the flexibility to deploy where your business needs demand.
\nPerhaps most importantly, Dedicated provides total isolation for each customer across datastore, network, and compute layers. It marries the best permissions database in the world (SpiceDB) with the best infrastructure design (Kubernetes + operators) to create what we believe is the best authorization infrastructure in the world.
\nSo how did we improve on this formula? We made it more accessible!
\nAuthZed Dedicated's biggest challenge isn't technical: it's the enterprise procurement cycle that comes with it. The question we kept asking ourselves was: how can we bring these powerful concepts to more companies, especially those who need enterprise-grade authorization but can't navigate lengthy procurement processes?
\nAuthZed Cloud takes the most powerful concepts from AuthZed Dedicated and makes them available in a self-service product that you can start using today.
\nWe've also made several key improvements over what’s available in Dedicated today:
\nSelf-service registration and deployment: No more waiting weeks for procurement approvals or implementation calls. Sign up, configure your permissions system, and start building. Scale when you need to!
\nRoles: We've added granular access controls that let you limit who can access and change things within your AuthZed organizations. This was a frequent request from teams who needed to federate access to our platform in different ways. You’ll be happy to know that this feature is, of course, also powered by SpiceDB.
\nUsage-based billing: Instead of committing to fixed infrastructure costs upfront, you can spin up resources on-demand and pay for what you actually use.
\nThe best part? These improvements will also be landing in Dedicated soon, so all our customers benefit!
\nDelivering on this vision does require some compromises. AuthZed Cloud uses a shared control plane and operates in pre-selected regions (though please let us know if you need a region we don't support today!). But honestly, that's about it for compromises.
\nAuthZed Cloud is designed for companies of all sizes. Despite the shared infrastructure approach, we've maintained high isolation standards. Your SpiceDB runs as separate Kubernetes deployments, and datastores are dedicated per permissions system. You still get the same scalable technology from Dedicated that allows you to scale up to millions of queries per second when needed, and the same enterprise-grade reliability.
\nWhat makes Cloud special is how attainable it is. The base price is a fraction of our base Dedicated deployment price, opening up AuthZed's capabilities to a much broader range of companies.
\nThat said, some organizations should still consider Dedicated. You might choose Dedicated if you have stricter isolation requirements, like an isolated control plane or private networking, or if you need more flexibility around custom legal terms or deployment in cloud provider regions that AuthZed Cloud doesn't yet support.
\nThe response during our early access period has been incredible. There was clearly pent-up demand for a product like this! We've had several long-time AuthZed customers already making the move to Cloud.
\nLita Cho, CTO at moment.dev, had this to say:
\n\n\n“We love Authzed—it makes evolving our permissions model effortless, with a powerful schema language, makes rapid\nprototyping possible along with rock-solid production performance, all without heavy maintenance. Authzed Cloud\ndelivers the power and reliability of Dedicated at a startup-friendly price, without the hassle of running SpiceDB. That\nlets me focus on building our modern docs platform, confident our authorization is secure, fast, and future-proof.”
\n
The best part about AuthZed Cloud is that you can sign up immediately and get started building. We've also set up a program where you can apply for credits to help with your initial implementation and testing.
\nAs we celebrate five years of AuthZed, I'm more excited than ever about the problems we're solving and the direction we're heading. Authorization remains one of the most critical and complex challenges in modern software development, and we're committed to making it accessible to every team that needs it.
\nHere's to the next five years of building the future of authorization together.
", - "url": "https://authzed.com/blog/authzed-cloud-is-now-available", - "title": "AuthZed Cloud is Now Available!", - "summary": "Bringing the power of AuthZed Dedicated to more with our new shared infrastructure, self-service offering: AuthZed Cloud.", - "image": "https://authzed.com/images/upload/AuthZed-Cloud-Blog@2x.png", - "date_modified": "2025-08-20T16:00:00.000Z", - "date_published": "2025-08-20T16:00:00.000Z", - "author": { - "name": "Jake Moshenko", - "url": "https://www.linkedin.com/in/jacob-moshenko-381161b/" - } - }, - { - "id": "https://authzed.com/blog/predicting-the-latest-owasp-top-10-with-cve-data", - "content_html": "OWASP is set to release their first Top 10 update since 2021, and this year’s list is one of the most awaited because of the generational shift that is AI. The security landscape has fundamentally shifted thanks to AI being embedded in production systems across enterprises from RAG pipelines to autonomous agents. I thought it would be a fun little exercise to look at CVE data from 2022-2025 and make predictions on what the top 5 in the updates list would look like. Read on to find out what I found.
\nThe OWASP Top 10 is a regularly updated list of the most critical security risks to web applications. It’s a go-to reference for organizations looking to prioritize their security efforts. We’ve always had a keen eye on this list as it’s our mission to fix broken access control.
\nThe last four lists were released in 2010, 2013, 2017, and 2021, with the next one scheduled for release soon, in Q3 2025.
\nThe OWASP Foundation builds this list using a combination of large-scale vulnerability data, community surveys, and expert input. The goal is to create a snapshot of the most prevalent and impactful categories of web application risks. So I thought I’d crunch some numbers from publicly available CVE data.
\nThis was not a scientific study — I’m not a data scientist, just an enthusiast in the cloud and security space. The aim here was to explore the data, learn more about how OWASP categories relate to CVEs and CWEs, and see if the trends point toward likely candidates for the upcoming list.
\nHere’s the process I followed to get some metrics around the most common CVEs:
\nCollect CVEs from 2022–2025
\nMap CWEs to OWASP Top 10 Categories
\nFor example:
\nCWE-201 - ‘Insertion of Sensitive Information Into Sent Data’ maps to ‘Broken Access Control’.
\n
def map_cwe_to_owasp(cwe_ids):\n    owasp_set = set()\n    for cwe in cwe_ids:\n        try:\n            cwe_num = int(cwe.replace(\"CWE-\", \"\"))\n            if cwe_num in CWE_TO_OWASP:\n                owasp_set.add(CWE_TO_OWASP[cwe_num])\n        except ValueError:\n            continue\n    return list(owasp_set)\n\nCWE_TO_OWASP = {\n    # A01: Broken Access Control\n    22: \"A01:2021 - Broken Access Control\",\n    23: \"A01:2021 - Broken Access Control\",\n    # ...\n    1275: \"A01:2021 - Broken Access Control\",\n\n    # A02: Cryptographic Failures\n    261: \"A02:2021 - Cryptographic Failures\",\n    296: \"A02:2021 - Cryptographic Failures\",\n    # ...\n    916: \"A02:2021 - Cryptographic Failures\",\n\n    # A03: Injection\n    20: \"A03:2021 - Injection\",\n    74: \"A03:2021 - Injection\",\n    # ...\n    917: \"A03:2021 - Injection\",\n\n    # A04: Insecure Design\n    73: \"A04:2021 - Insecure Design\",\n    183: \"A04:2021 - Insecure Design\",\n    # ...\n    1173: \"A04:2021 - Insecure Design\",\n\n    # A05: Security Misconfiguration\n    2: \"A05:2021 - Security Misconfiguration\",\n    11: \"A05:2021 - Security Misconfiguration\",\n    # ...\n    1032: \"A05:2021 - Security Misconfiguration\",\n\n    # A06: Vulnerable and Outdated Components\n    937: \"A06:2021 - Vulnerable and Outdated Components\",\n    # ...\n    1104: \"A06:2021 - Vulnerable and Outdated Components\",\n\n    # A07: Identification and Authentication Failures\n    255: \"A07:2021 - Identification and Authentication Failures\",\n    259: \"A07:2021 - Identification and Authentication Failures\",\n    # ...\n    1216: \"A07:2021 - Identification and Authentication Failures\",\n\n    # A08: Software and Data Integrity Failures\n    345: \"A08:2021 - Software and Data Integrity Failures\",\n    353: \"A08:2021 - Software and Data Integrity Failures\",\n    # ...\n    915: \"A08:2021 - Software and Data Integrity Failures\",\n\n    # A09/A10 entries omitted here\n}\n\nMap CVEs to CWEs
\nEach CVE entry in the NVD data lists its weaknesses under cve.weaknesses[].description[].value as CWE IDs like CWE-201. I wrote a script to process the JSON containing NVD vulnerability data, extract the CWE IDs for each CVE, and then map them to OWASP categories.\n\nimport json\n\ndef process_nvd_file(input_path, output_path):\n    with open(input_path, \"r\") as f:\n        data = json.load(f)\n\n    results = []\n    for entry in data[\"vulnerabilities\"]:\n        cve_id = entry.get(\"cve\", {}).get(\"id\", \"UNKNOWN\")\n        cwe_ids = []\n\n        # Extract CWE IDs from weaknesses\n        for problem in entry.get(\"cve\", {}).get(\"weaknesses\", []):\n            for desc in problem.get(\"description\", []):\n                cwe_id = desc.get(\"value\")\n                if cwe_id and cwe_id != \"NVD-CWE-noinfo\":\n                    cwe_ids.append(cwe_id)\n\n        mapped_owasp = map_cwe_to_owasp(cwe_ids)\n\n        results.append({\n            \"cve_id\": cve_id,\n            \"cwe_ids\": cwe_ids,\n            \"owasp_categories\": mapped_owasp\n        })\n\n    with open(output_path, \"w\") as f:\n        json.dump(results, f, indent=2)\n\n    print(f\"Wrote {len(results)} CVE entries with OWASP mapping to {output_path}\")\n\nWe now have a new JSON file with all the CVEs mapped to OWASP categories (where there’s a match). This is what it looks like:
\n{\n \"cve_id\": \"CVE-2024-0185\",\n \"cwe_ids\": [\n \"CWE-434\",\n \"CWE-434\"\n ],\n \"owasp_categories\": [\n \"A04:2021 - Insecure Design\"\n ]\n },\n {\n \"cve_id\": \"CVE-2024-0186\",\n \"cwe_ids\": [\n \"CWE-640\"\n ],\n \"owasp_categories\": [\n \"A07:2021 - Identification and Authentication Failures\"\n ]\n },\n\nI ran this code snippet for each data set from 2022-2025 and had separate JSON files for each year.
\nNow that we have the mapped data, we can run some analysis to find the most common categories per year.
\nimport json\nimport os\nfrom collections import Counter, defaultdict\n\nimport matplotlib.pyplot as plt\nimport pandas as pd\n\n# Directory holding the mapped_output_<year>.json files produced above\nDATA_DIR = \"mapped_outputs\"\n\n# Count OWASP category occurrences per year\nyearly_data = defaultdict(Counter)\n\nfor filename in os.listdir(DATA_DIR):\n    year = filename.replace(\"mapped_output_\", \"\").replace(\".json\", \"\")\n    year_path = os.path.join(DATA_DIR, filename)\n\n    # Load the JSON data from the file, which contains a list of CVE entries\n    with open(year_path, \"r\") as f:\n        entries = json.load(f)\n\n    for entry in entries:\n        for category in entry.get(\"owasp_categories\", []):\n            yearly_data[year][category] += 1\n\n# Convert to a DataFrame\ndf = pd.DataFrame(yearly_data).fillna(0).astype(int).sort_index()\ndf = df.T.sort_index()  # years as rows\n\n# Save summary\ndf.to_csv(\"owasp_counts_by_year.csv\")\nprint(\"\\nSaved summary to owasp_counts_by_year.csv\")\n\n# Also print\nprint(\"\\n=== OWASP Category Counts by Year ===\")\nprint(df.to_string())\n\n# Plot OWASP trends over time\nplt.figure(figsize=(12, 7))\n\nfor column in df.columns:\n    plt.plot(df.index, df[column], marker='o', label=column)\n\nplt.title(\"OWASP Top 10 Category Trends (2022–2025)\")\nplt.xlabel(\"Year\")\nplt.ylabel(\"Number of CVEs\")\nplt.xticks(rotation=45)\nplt.legend(title=\"OWASP Category\", bbox_to_anchor=(1.05, 1), loc='upper left')\nplt.tight_layout()\nplt.grid(True)\nplt.show()\n\nThis is what it looked like:
\n
Here’s a table with all the data:
\n|  | A01: Broken Access Control | A02: Cryptographic Failures | A03: Injection | A04: Insecure Design | A05: Security Misconfiguration | A06: Vulnerable & Outdated Components | A07: Identification & Authentication Failures | A08: Software & Data Integrity Failures |
|---|---|---|---|---|---|---|---|---|
| 2022 | 4004 | 370 | 6496 | 1217 | 151 | 1 | 1233 | 334 |
| 2023 | 5498 | 411 | 8846 | 1480 | 178 | 1 | 1357 | 468 |
| 2024 | 7182 | 447 | 13280 | 1922 | 163 | 4 | 1430 | 584 |
| 2025 | 4314 | 209 | 7563 | 1056 | 90 | 2 | 774 | 418 |
| Totals | 20998 | 1437 | 36185 | 5675 | 582 | 8 | 4794 | 1804 |
So, looking purely at the number of occurrences in the CVE data, the Top 5 would look like this:
\n#5 Software and Data Integrity Failures
\n#4 Identification & Authentication Failures
\n#3 Insecure Design
\n#2 Broken Access Control
\n#1 Injection
But wait, OWASP’s methodology in compiling the list involves not just the frequency (how common) but the severity or impact of each weakness. Also, 2 out of the 10 in the list are chosen from a community survey among application security professionals, to compensate for the gaps in public data. In the past OWASP has also merged categories to form a new category. So based on that here’s my prediction for the Top 5
\nThere’s absolutely no doubt in my mind that the security implications of AI will have a big impact on the list. One point of note is that OWASP released a separate Top 10 for LLM Applications in November 2024. Whether they decide to keep the two lists separate or allow them to overlap will largely determine what this year’s Top 10 looks like.
\nSo looking at the CVE data above (Broken Access Control and Injection had the most occurrences), and the rise of AI in production, here’s what I think will be the Top 5 in the OWASP list this year:
\n#5 Software and Data Integrity Failures
\n#4 Security Misconfigurations
\n#3 Insecure Design
\n#2 Injection
\n#1 Broken Access Control
With enterprises implementing AI Agents, RAG Pipelines and Model Context Protocol (MCP) in production, access control becomes a priority. Broken Access Control topped the list in 2021, and we’ve seen a slew of high profile data breaches recently so I think it will sit atop the list this year as well.
\nI asked Jake Moshenko, CEO of AuthZed, about his predictions for the list, and while we agreed on the #1 position, there were a couple of things we disagreed on. Watch the video to find out what Jake thought the Top 5 would look like and which category he thinks might drop out of the Top 10 altogether.
\n\nAs I mentioned before, I’m not a data scientist, so please feel free to improve upon this methodology in the GitHub repo. I also need to state that:
\nWhat do you think the 2025 OWASP Top 10 will look like?
\nDo you agree with these trends, or do you think another category will spike?
\nI’d love to hear your thoughts in the comments on LinkedIn, Bluesky, or Twitter.
If you want to replicate this yourself, I’ve put the dataset links and code snippets on GitHub.
", - "url": "https://authzed.com/blog/predicting-the-latest-owasp-top-10-with-cve-data", - "title": "Predicting the latest OWASP Top 10 with CVE data ", - "summary": "OWASP is set to release their first Top 10 update since 2021, and this year’s list is one of the most awaited because of the generational shift that is AI. The security landscape has fundamentally shifted thanks to AI being embedded in production systems across enterprises from RAG pipelines to autonomous agents. I thought it would be a fun little exercise to look at CVE data from 2022-2025 and make predictions on what the top 5 in the updates list would look like. Read on to find out what I found.", - "image": "https://authzed.com/images/blogs/authzed-predict-owasp.png", - "date_modified": "2025-08-13T18:50:00.000Z", - "date_published": "2025-08-13T18:50:00.000Z", - "author": { - "name": "Sohan Maheshwar", - "url": "https://www.linkedin.com/in/sohanmaheshwar/" - } - }, - { - "id": "https://authzed.com/blog/prevent-ai-agents-from-accessing-unauthorized-data", - "content_html": "I just attended the Secure Minds Summit in Las Vegas, where security and application development experts shared lessons learned from applying AI in their fields. Being adjacent to Black Hat 2025, it's not surprising that a common theme was the security risks of AI agents and MCP (Model Context Protocol). There's an anxious excitement in the community about AI's potential to revolutionize how organizations operate through faster, smarter decision-making, while grappling with the challenge of doing it securely.
\nAs organizations explore AI agent deployment, one thing is clear: neither employees nor AI agents should have access to all data. You wouldn't want a marketing AI agent accessing raw payroll data, just as you wouldn't want an HR agent viewing confidential product roadmaps. Without proper access controls, AI agents can create chaos just as easily as they deliver value, since they don't inherently understand which data they should or shouldn't access.
\nThis is where robust permissions systems become critical. Proper access controls ensure AI agents operate within organizational policy boundaries, accessing only data they're explicitly authorized to use.
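\nTo make that concrete, here is a minimal sketch of what such a guardrail can look like using the open-source authzed Python client. The schema names (a document type exposing a view permission, agents modeled as their own agent subject type) and the endpoint and token are illustrative assumptions, not a prescribed model:
# Hypothetical guardrail: the agent's tool layer asks SpiceDB before touching data.
# Assumes a schema with `document` resources exposing a `view` permission and
# agents modeled as `agent` subjects; endpoint and token are placeholders.
from authzed.api.v1 import (
    CheckPermissionRequest,
    CheckPermissionResponse,
    Client,
    ObjectReference,
    SubjectReference,
)
from grpcutil import bearer_token_credentials

client = Client("grpc.authzed.com:443", bearer_token_credentials("t_your_token"))

def agent_can_view(agent_id: str, document_id: str) -> bool:
    """Return True only if this agent was explicitly granted `view` on the document."""
    resp = client.CheckPermission(
        CheckPermissionRequest(
            resource=ObjectReference(object_type="document", object_id=document_id),
            permission="view",
            subject=SubjectReference(
                object=ObjectReference(object_type="agent", object_id=agent_id)
            ),
        )
    )
    return resp.permissionship == CheckPermissionResponse.PERMISSIONSHIP_HAS_PERMISSION

# A marketing agent asking for payroll data is stopped before any retrieval happens.
if not agent_can_view("marketing-assistant", "payroll-2025"):
    raise PermissionError("agent is not authorized to read this document")
\nThe same check works identically whether the caller is a human or an autonomous agent, which is the point: the policy lives in one place instead of being scattered across prompts and tool code.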
\nSohan, our Lead Developer Advocate at AuthZed, recently explored this topic on the AuthZed YouTube channel with a live demo of implementing AI-aware permissions systems.
\nWatch the demo here:
\n\nIn June, we launched AuthZed's Authorization Infrastructure for AI, purpose-built to ensure AI systems respect permissions, prevent data leaks, and maintain comprehensive audit trails.
\nAuthZed's infrastructure is powered by SpiceDB, our open-source project based on Google's Zanzibar. SpiceDB's scale and speed make it an ideal authorization solution for supporting AI's demanding performance requirements.
\nOur infrastructure delivers:
\nWant to learn more about the future of AuthZed and authorization infrastructure for AI? Join us on August 20th for \"AuthZed is 5: The Authorization Infrastructure Event.\" Register here.
", - "url": "https://authzed.com/blog/prevent-ai-agents-from-accessing-unauthorized-data", - "title": "Prevent AI Agents from Accessing Unauthorized Data", - "summary": "AI agents promise to revolutionize enterprise operations, but without proper access controls, they risk exposing sensitive data to unauthorized users. Learn how AuthZed's Authorization Infrastructure for AI prevents data leaks while supporting millions of authorization checks per second. Watch our live demo on implementing AI-aware permissions systems.\n\n", - "image": "https://authzed.com/images/blogs/blog-featured-image.png", - "date_modified": "2025-08-08T15:46:00.000Z", - "date_published": "2025-08-08T15:46:00.000Z", - "author": { - "name": "Sam Kim", - "url": "https://github.com/samkim" - } - }, - { - "id": "https://authzed.com/blog/authzed-is-5-authorization-infrastructure-event", - "content_html": "AuthZed is turning five years old, and we're throwing a celebration! On Wednesday, August 20th, we're hosting \"The Authorization Infrastructure Event\" by bringing together experts in authorization and database technology to talk about where this space is headed.
\n\nYou'll hear from industry experts who've been shaping how we think about authorization:
\nAnd the AuthZed team will be sharing what we've been building—new product announcements, plus a peek into our lab:
\nWe’ll be announcing new products that I think will genuinely change how people approach authorization infrastructure, and I’m particularly excited to finally share what we've been exploring in our lab: experimental work that could shape the future of access control.
\nIt's hard to believe but five years have gone by so fast. Back when I joined Jake, Jimmy, and Joey as the first employee, they had this clear understanding of why application authorization was such a pain point for developers, the Google Zanzibar paper as their guide, and an ambitious vision: bring better authorization infrastructure to everyone who needed it.
\n
Photo from our first team offsite in 2021. Not pictured: me because I'm taking the photo
\nLooking back at our journey, some moments that stand out:
\nWe've grown from that small founding team to a group of people who genuinely care about solving authorization the right way. Along the way, we've had the privilege of helping everyone from early-stage startups to large enterprises build and scale their applications without the usual authorization headaches.
\nThis event is our chance to share our latest work with the community that's supported us, celebrate how far we've all come together, and get a glimpse of what's ahead.
\nWhether you've been following our journey from the beginning or you're just discovering what we're about, we'd love to have you there. It's going to be the kind of event where you leave with new ideas, maybe some useful insights, and definitely a better sense of where authorization infrastructure is headed.
\nWant to share a birthday message with us? Record a short message here—we'd genuinely love to hear from you and share some of them during the event.
\nSee you on August 20th!
", - "url": "https://authzed.com/blog/authzed-is-5-authorization-infrastructure-event", - "title": "Celebrate With Us: AuthZed is 5!", - "summary": "AuthZed is turning five years old! Join us Wednesday, August 20th for our Authorization Infrastructure Event, where we're bringing together industry experts and sharing exciting new product developments plus experimental work from our lab.", - "image": "https://authzed.com/images/blogs/blog-featured-image.png", - "date_modified": "2025-07-23T09:36:00.000Z", - "date_published": "2025-07-23T09:36:00.000Z", - "author": { - "name": "Sam Kim", - "url": "https://github.com/samkim" - } - }, - { - "id": "https://authzed.com/blog/coding-with-ai-my-personal-experience", - "content_html": "I’ve been in tech for over 20 years. I’ve written production code in everything from Fortran to Go, and for the last five of those years, I’ve been a startup founder and CEO. These days, I spend most of my time operating the business, not writing code. But recently, I dipped back in. I needed a new demo built, and fast.
\nIt wasn’t a simple side project. This demo would ideally have multiple applications, all wired into SpiceDB, built with an obscure UI framework, and designed to show off what a real-world, multi-language, permission-aware system looks like. Naturally, I started thinking about who should build it.
\nShould I ask engineering? Probably not a good idea since I didn’t want to interrupt core product work. What about an intern? Too late in the year for that. Maybe a contractor? I’ve had mixed results there. Skills tend to be oversold, results can fall short, and just finding and vetting someone would take time I didn’t have.
\nJust prior to this, Anthropic had released Claude Code and Claude 4. A teammate (with good taste) had good things to say about the development experience, and the internet consensus seems to be that (for today at least) Claude is king for coding models, so I figured I’d give it a try. I’m no novice to working with AI: I have been a paying customer of OpenAI’s since DALL-E and ChatGPT had their first public launches. At AuthZed we also make extensive use of the AI features built into some of our most beloved tools, such as Notion, Zoom, Figma, and GitHub. Many of these features have been helpful, but none felt like a game changer.
\nAt first, I wasn’t sure how much Claude Code could take on. I didn’t know how to structure my prompts or how detailed I needed to be. I started small: scaffold a project, get a “hello world” working, and set up the build system. It handled all of that cleanly.
\nEncouraged, I got a little overconfident. My prompts grew larger and fuzzier. The quality of output dropped quickly. I also didn’t have a source control strategy in place, and when Claude Code wandered off track, I lost a lot of work. It’s fantastically bad at undoing what it just did! It was a painful but valuable learning experience.
\nEventually, I found my rhythm. I started treating Claude Code like a highly capable but inexperienced intern. I wrote prompts as if they were JIRA tickets: specific, structured, and assuming zero context. I broke the work down into small, clear deliverables. I committed complete features as I went. When something didn’t feel right, I aborted early, git reverted, and started fresh.
\n
That approach worked really well.
\n

By the end of the project, Claude Code and I had built three application analogues for tools in the Google Workspace suite, in three different languages! We wrote a Docs-like app in Java, a Groups-like app in Go, and a Gmail-like app in JavaScript, with a frontend coded up in a wacky wireframe widget library called Wired Elements. Each one was connected through SpiceDB, shared a unified view of group relationships, and included features like email permission checks and a share dialog in the documents app. It all ran in Docker with a single command. The entire effort cost me around $75 in API usage.
\nCheck it out for yourself: https://github.com/authzed/multi-app-demo
\nCould I have done this on my own? Sure, in theory. But I’m not a UI expert, and switching between backend languages would have eaten a lot of time. If I’d gone the solo route, I would’ve likely over-engineered the architecture to minimize how much code I had to write, which might have resulted in something more maintainable, but also something unfinished and way late.
\n
This was a different experience than I’d had with GitHub Copilot. Sometimes people describe Copilot as “spicy autocomplete”, and that feels apt. Claude Code felt like having a pair programmer who could actually build features with me.
\nMy buddy Jason Hall from Chainguard put it best in a post on LinkedIn: “AI coding agents are like giving everyone their own mech suit.” and “...if someone drops one off in my driveway I'm going to find a way to use it.”
\n
For the first time in a long while, I felt like I could create again. As a CEO, that felt energizing. It also made me start wondering what else I could personally accelerate.
\nOf course, I had some doubts. Maybe this only worked because it was greenfield. Maybe I’d regret not being the expert on the codebase. But the feeling of empowerment was real.
\nAt the same time, we had a growing need to migrate our sales CRM. We’d built a bespoke system in Notion, modeled loosely after Salesforce. Meanwhile, all of our marketing data already lived in HubSpot. It was time to unify everything.
\nOn paper, this looked straightforward: export from Notion, import into HubSpot. In reality, it was anything but. Traditional CRM migrations are done with flattened CSV files; that wouldn’t play nicely with the highly relational structure we’d built. And with so much existing marketing data in HubSpot, this was more of a merge than a migration.
\nI’ve been through enough migrations to know better than to try a one-shot cutover. It never goes right the first time, and data is always messier than expected. So I came up with a different plan: build a continuous sync tool.
\nThe idea was to keep both systems aligned while we gradually refined the data. That gave us time to validate everything and flip the switch only when we were ready. Both Notion and HubSpot have rich APIs, so I turned again to Claude Code.
\nOver the course of a week, Claude Code and I wrote about 5,000 lines of JavaScript. The tool matched Notion records to HubSpot objects using a mix of exact matching and fuzzy heuristics. We used Levenshtein distance to help with tricky matches caused by accented names or alternate spellings. The tool handled property synchronization and all the API interactions needed to link objects across systems.
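\nThe real tool was JavaScript and isn't reproduced here; purely as an illustration of the matching idea described above, a Python sketch of Levenshtein-based fuzzy matching for accented or alternately spelled names might look like this:
# Illustrative sketch of Levenshtein-based fuzzy matching between two record
# names (the actual sync tool described above was JavaScript; names are made up).
import unicodedata

def strip_accents(text: str) -> str:
    """Normalize accented characters, e.g. 'José' -> 'Jose'."""
    decomposed = unicodedata.normalize("NFKD", text)
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,               # deletion
                curr[j - 1] + 1,           # insertion
                prev[j - 1] + (ca != cb),  # substitution
            ))
        prev = curr
    return prev[-1]

def is_probable_match(name_a: str, name_b: str, threshold: float = 0.85) -> bool:
    """Treat two names as the same record when their similarity clears a threshold."""
    a = strip_accents(name_a).casefold().strip()
    b = strip_accents(name_b).casefold().strip()
    if a == b:
        return True
    distance = levenshtein(a, b)
    similarity = 1 - distance / max(len(a), len(b), 1)
    return similarity >= threshold

print(is_probable_match("José Álvarez", "Jose Alvarez"))  # True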
\nThe cost came in at around $50 in Claude Code credits.
\nCould I have done it myself? Technically, yes. But it would have taken me a lot longer. I’m not fluent in JavaScript, and if I had been writing by hand, I would’ve insisted on TypeScript and clean abstractions. That would have been a waste of time for something we were planning to throw away after the migration.
\nOur current generation of coding agents is undeniably powerful. Yes, they’re technically still just next-token predictors, but that description misses the point. It’s like saying Bagger 288 is “just a big shovel.” Sure, but it’s a shovel that can eat mountains.
\nI now feel confident taking on software projects again in my limited spare time. That’s not something I expected to feel again as a full-time CEO. And the most exciting part? This is probably the worst that these tools will ever be. From here, the tools only get better. Companies like OpenAI, with Codex, and Superblocks are already riffing on other possible user experiences for coding agents. I’m keen to see where the industry goes.
\nIt also seems clear that AI will play a bigger and bigger role in how code gets written. As an API provider, we’re going to need to design for that reality. In the not-too-distant future, our primary users will likely be coding agents, not just humans.
\nWe’re in the middle of a huge transformation, not just in software, but across the broader economy. The genie is out of the bottle. Even if the tools stopped improving tomorrow (and I don’t think they will) there’s already enough capability to change the way software gets built.
\nI’ll admit, it’s a little bittersweet. For most of my career, I have self-identified as a computer whisperer: someone who can speak just the right incantations to make computers (or sometimes whole datacenters) do what I need. But like most workplace superpowers, this one also turned out to be a time-limited arbitrage opportunity.
\nWhat hasn’t changed is the need for control. As AI gets more capable, the need for clear, enforceable boundaries becomes more important than ever. The answer to “what should this AI be allowed to do?” isn’t “more AI.” It’s strong, principled authorization.
\nThat’s exactly what we’re building at AuthZed. And you’ll be seeing more from us soon about how we’re thinking about AI-first developer experience and AI-native authorization.
\nStay tuned.
", - "url": "https://authzed.com/blog/coding-with-ai-my-personal-experience", - "title": "Coding with AI: My Personal Experience", - "summary": "AuthZed CEO Jake Moshenko shares his experience coding with AI.", - "image": "https://authzed.com/images/blogs/blog-featured-image.png", - "date_modified": "2025-07-16T08:21:00.000Z", - "date_published": "2025-07-16T08:21:00.000Z", - "author": { - "name": "Jake Moshenko", - "url": "https://www.linkedin.com/in/jacob-moshenko-381161b/" - } - }, - { - "id": "https://authzed.com/blog/authzed-cloud-is-coming-soon", - "content_html": "Here at AuthZed, we are counting down the days until we launch AuthZed Cloud because we are so eager to bring the power of our authorization infrastructure to every company, large and small. If you're just as excited as we are about AuthZed Cloud, sign up for the waitlist. We will be in touch with AuthZed Cloud news, and you'll be the first to know when the product launches.
\n\n
From the start of our journey, we have had a strong focus on serving the needs of authorization at enterprise businesses. Our most popular product, AuthZed Dedicated, is a reflection of that focus as it caters to those looking for dedicated hardware resources and fully-isolated deployment environments. However, not everyone has such strict requirements, and there are many companies who prefer a self-service product where they can sign up, manage their deployments from a single, shared control plane with other users, and pay for dynamic usage with a credit card. The latter is how we consumed most of our high-value services at our last startup when we were building the first enterprise container registry: Quay.io. In fact, you can read more about our journey from Quay to AuthZed here.
\nThe most gratifying part of creating AuthZed has been working alongside so many amazing companies that are changing the landscape of various industries. It's truly validating to see them come to the same conclusion: homegrown authorization solutions are not sufficient for modern businesses. With AuthZed Cloud, we expect to expand the number of companies we can work alongside to set a new standard of security that ensures the safety of all of our private data by fixing access control.
", - "url": "https://authzed.com/blog/authzed-cloud-is-coming-soon", - "title": "AuthZed Cloud is Coming Soon", - "summary": "AuthZed Cloud is coming soon, expanding beyond enterprise-only solutions to offer self-service authorization infrastructure for companies of all sizes. Join our waitlist to be first in line when we launch this game-changing platform.", - "image": "https://authzed.com/images/blogs/blog-featured-image.png", - "date_modified": "2025-07-03T10:31:00.000Z", - "date_published": "2025-07-03T10:31:00.000Z", - "author": { - "name": "Jimmy Zelinskie", - "url": "https://twitter.com/jimmyzelinskie" - } - }, - { - "id": "https://authzed.com/blog/authzed-brings-additional-observability-to-authorization-via-the-datadog-integration", - "content_html": "Today, AuthZed is providing additional observability capabilities to AuthZed's cloud products with the introduction of our official Datadog Integration. All critical infrastructure should be observable and authorization is no exception. Our integration with Datadog gives engineering teams instant insight into authorization performance, latency, and anomalies—without adding custom tooling or overhead.
\nWith this new integration, customers can now centralize that observability data with the rest of their data in Datadog—giving them the ability to correlate events across their entire platform. AuthZed's cloud products continue to include a web console with out-of-the-box dashboards containing metrics across the various infrastructure components that power a permissions system. At the same time, users of the Datadog integration will also have a mirror of these dashboards available in Datadog if they do not wish to create their own.
\n
\"Being able to visualize how AuthZed performs alongside our other systems gives us real peace of mind,\" said Eric Zaporzan, Director of Infrastructure, at Neo Financial. \"Since we already use Datadog, it was simple to send AuthZed metrics there and gain a unified view of our entire stack.\"
\nAuthZed metrics allow developers and SREs to monitor their deployments, including request latency, cache metrics (such as size and hit/miss rates), and datastore connection and query performance. These metrics help diagnose performance issues and fine-tune the performance of their SpiceDB clusters.
\nThe Datadog integration is available in the AuthZed Dashboard under the “Settings” tab on a Permission System.
\nTo ensure that the dashboard graph for latency correctly shows the p50, p95, and p99 latencies, you’ll also need to set the Percentiles setting for the authzed.grpc.server_handling metric in the Metrics Summary view to ON.
\nTADA 🎉 You should see metrics start to flow to Datadog shortly thereafter.
\nI want to thank all of the AuthZed engineers involved in shipping this feature, but especially Tanner Stirrat who shepherded this project from inception and I can't wait to see all the custom dashboards our customers make in the future!
\n
\nInterested in learning more? Join our Office Hours on July 3rd here on YouTube.
Secure your AI systems with fine-grained authorization for RAG pipelines and agents
\nToday we are announcing Authorization Infrastructure for AI, providing official support for Retrieval-Augmented Generation (RAG) pipelines and agentic AI systems. With this launch, teams building AI into their applications, developing AI products or building an AI company can enforce fine-grained permissions across every stage - from document ingestion to vector search to agent behavior - ensuring data is protected, actions are authorized, and compliance is maintained.
\nAI is quickly becoming a first-class feature in modern applications. From retrieval-augmented search to autonomous agents, engineering teams are building smarter user experiences by integrating large language models (LLMs) into their platforms.
\nBut with that intelligence comes risk.
\nAI systems do not just interact with public endpoints. They pull data from sensitive internal systems, reason over embeddings that bypass traditional filters, and trigger actions on behalf of users. Without strong access control, they can expose customer records, cross tenant boundaries, or operate with more agency than intended.
\nThis is the authorization problem for AI. And it is one every team building with LLMs now faces.
\nWhen you add AI to your application, you also expand your attack surface. Consider just a few examples:
\nAccording to the OWASP Top 10 for LLM Applications, four of the top risks require robust authorization controls as a primary mitigation. And yet, most developers are still relying on brittle, manual enforcement scattered across their codebases.
\nWe believe it’s time for a better solution.
\n
AuthZed’s authorization infrastructure for AI brings enterprise-grade permission systems to AI workloads. AuthZed has been better positioned to support AI from the get-go because of SpiceDB.
\nSpiceDB is an open-source Google Zanzibar-inspired database for storing and computing permissions data that companies use to build global-scale, fine-grained authorization services. Since it is based on Google Zanzibar’s proven architecture, it can scale to massive datasets while handling complex permissions queries. In fact, SpiceDB can scale to trillions of access control lists and millions of authorization checks per second.
\n“AI systems are only as trustworthy as the infrastructure that governs them,\" said Janakiram MSV, industry analyst of Janakiram & Associates. \"AuthZed’s SpiceDB brings proven, cloud-native authorization principles to AI, delivering the control enterprises need to adopt AI safely and at scale.”
\nUsing SpiceDB to enforce access policies at every step of your AI pipeline ensures that data and actions remain properly governed. With AuthZed’s Authorization Infrastructure for AI, teams can safely scale their AI features without introducing security risks or violating data boundaries.
\nRetrieval-Augmented Generation improves the usefulness of LLMs by injecting external knowledge. But when that knowledge includes sensitive customer or corporate data, access rules must be enforced at every stage.
\nAuthZed enables teams to:
\nWhether you are building with a private knowledge base, CRM data, or support logs, SpiceDB ensures your AI respects the same access controls as the rest of your systems.
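\nAs a minimal sketch of what that looks like with the authzed Python client: before handing retrieved chunks to the LLM, ask SpiceDB which documents the requesting user can view and drop everything else. The document type, view permission, vector store interface, and chunk metadata field are all illustrative assumptions:
# Sketch: permission-aware retrieval for a RAG pipeline. The `document` type,
# `view` permission, retriever interface, and metadata field are assumptions.
from authzed.api.v1 import (
    Client,
    LookupResourcesRequest,
    ObjectReference,
    SubjectReference,
)
from grpcutil import bearer_token_credentials

client = Client("grpc.authzed.com:443", bearer_token_credentials("t_your_token"))

def viewable_document_ids(user_id: str) -> set:
    """Ask SpiceDB for every document this user can `view`."""
    stream = client.LookupResources(
        LookupResourcesRequest(
            resource_object_type="document",
            permission="view",
            subject=SubjectReference(
                object=ObjectReference(object_type="user", object_id=user_id)
            ),
        )
    )
    return {resp.resource_object_id for resp in stream}

def retrieve_for_prompt(user_id: str, query: str, vector_store) -> list:
    """Only authorized chunks ever reach the LLM's context window."""
    allowed = viewable_document_ids(user_id)
    candidates = vector_store.similarity_search(query)  # stand-in for your retriever
    return [c for c in candidates if c.metadata.get("document_id") in allowed]
\nFor very large permission sets, checking each retrieved chunk individually (post-filtering) can be a better fit than enumerating everything up front; both patterns rely on the same underlying relationships.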
\nAI agents are designed to act autonomously, but autonomy without boundaries is dangerous. With the AuthZed Agentic AI Authorization Model, teams can enforce clear limits on what agents can access and do.
\nThis model includes:
\nWhether your agent is summarizing data, booking a meeting, or triggering a workflow, it should only ever do what it is explicitly allowed to do.
\nLet’s say an employee types a natural language query into your internal AI assistant:
\n“What was our Q3 revenue?”
\nWithout authorization, the assistant might retrieve sensitive board slides or budget drafts and present them directly to the user. No checks, no logs, no traceability.
\nWith AuthZed:
\nThis is what AuthZed’s Authorization Infrastructure for AI makes possible.
\nYou should not have to choose between building smart features and maintaining secure boundaries. With AuthZed:
\nAnd it is already being used in production. Workday uses AuthZed Dedicated to secure its AI-driven contract lifecycle platform. Other major AI providers rely on SpiceDB to enforce permissions across multi-tenant LLM infrastructure.
\nIf you are building AI features, AuthZed’s Authorization Infrastructure for AI helps you ship faster by allowing you to focus on your product, instead of cobbling together an authorization solution. Whether you are securing vector search, gating agent behavior, or building out internal tools, AuthZed provides the authorization infrastructure you need.
\nFor the team at AuthZed, our mission is to fix access control. The first step is creating the foundational infrastructure for others to build their access control systems upon. Infrastructure for Authorization, you say? Didn't infrastructure just go through its largest transformation ever with cloud computing? From introduction to the eventual mass adoption of cloud computing, the industry has had to learn to manage all of the cloud resources they created. In response, cloud providers offered APIs for managing resource lifecycles. Our infrastructure follows this same pattern, so today we're proud to announce the AuthZed Cloud API is in Tech Preview.
\nThe AuthZed Cloud API is a RESTful JSON API for managing the infrastructure provisioned on AuthZed Dedicated Cloud. Today, it is able to list the available permissions systems and fully manage the configuration for restricting API-level access to SpiceDB within those permissions systems.
\nAs with all Tech Preview functionality, to get started, you must reach out to your account team and request access. Afterwards, you will be provided credentials for accessing the API. With these credentials, you're free to automate AuthZed Cloud infrastructure in any way you like! We recommend getting started by heading over to Postman to explore the API. Next, why not break out a little bit of curl?
\nListing all of your permissions systems:
\ncurl --location 'https://api.$YOUR_AUTHZED_DEDICATED_ENDPOINT/ps' \\\n    --header 'X-API-Version: 25r1' \\\n    --header 'Accept: application/json' \\\n    --header 'Authorization: Bearer $YOUR_CREDENTIALS_HERE' | jq .\n\n[{\n    \"id\": \"ps-8HXyWFOzGtk0Yq8dH0GBT\",\n    \"name\": \"example\",\n    \"systemType\": \"Production\",\n    \"systemState\": {\n      \"status\": \"RUNNING\"\n    },\n    \"version\": {\n      \"selectedChannel\": \"Rapid\",\n      \"currentVersion\": {\n        \"displayName\": \"SpiceDB 1.41.0\",\n        \"version\": \"v1.41.0+enterprise.v1\",\n        \"supportedFeatureNames\": [\n          \"FineGrainedAccessManagement\"\n        ]\n      }\n    }\n  }]\n\nTake note of the required headers: the API requires specifying a version as a header so that changes can be made to the API in future releases.
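\nIf you'd rather script against the API than shell out to curl, the same call is a few lines of Python; the endpoint and token below are placeholders, exactly as in the example above:
# Same "list permissions systems" request as the curl example, via Python requests.
# The endpoint and credentials are placeholders.
import requests

AUTHZED_DEDICATED_ENDPOINT = "YOUR_AUTHZED_DEDICATED_ENDPOINT"  # placeholder
API_TOKEN = "YOUR_CREDENTIALS_HERE"                             # placeholder

resp = requests.get(
    f"https://api.{AUTHZED_DEDICATED_ENDPOINT}/ps",
    headers={
        "X-API-Version": "25r1",        # required: pins the API version
        "Accept": "application/json",
        "Authorization": f"Bearer {API_TOKEN}",
    },
    timeout=30,
)
resp.raise_for_status()

for ps in resp.json():
    print(ps["id"], ps["name"], ps["systemState"]["status"])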
\nI'm eager to see all of the integrations our customers will build with API-level access to our cloud platform! Look out for another announcement coming very soon about an integration that we've built using this new API, too!
\nJoin us on the mission to fix access control.
\nSchedule a call with us to learn more about how AuthZed can help you.
", - "url": "https://authzed.com/blog/introducing-the-authzed-cloud-api", - "title": "Introducing The AuthZed Cloud API", - "summary": "Announcing the AuthZed Cloud API in Tech Preview—an API for managing AuthZed Dedicated Cloud infrastructure. Following the cloud computing pattern of lifecycle management APIs, this new tool allows you to manage permissions systems and restrict API-level access to SpiceDB within your authorization infrastructure.", - "image": "https://authzed.com/images/blogs/blog-featured-image.png", - "date_modified": "2025-05-28T12:00:00.000Z", - "date_published": "2025-05-28T12:00:00.000Z", - "author": { - "name": "Jimmy Zelinskie", - "url": "https://twitter.com/jimmyzelinskie" - } - }, - { - "id": "https://authzed.com/blog/a-closer-look-at-authzed-dedicated", - "content_html": "At AuthZed, our mission is to fix broken access control. After years of suffering in industry from insufficient solutions for building authorization systems, we concluded that we'd have to start from the ground up by building the right infrastructure software. SpiceDB, open sourced in late 2021, was our first-step to providing the solution that modern enterprises need. AuthZed Dedicated Cloud, often referred to as simply Dedicated, launched in early 2022 and productized SpiceDB by offering a dedicated cloud platform for provisioning SpiceDB deployments similar to the user experience you'd find provisioning infrastructure on a major cloud provider.
\n
Dedicated Clouds are a relatively new concept. When AWS hit the market, the term Public Cloud was coined; Public Clouds are cloud platforms that share their underlying hardware resources across a variety of customers. At the same time this term got coined, folks needed a term used to refer to what most folks were already doing before AWS launched: running their own dedicated infrastructure. Unfortunately, instead of calling this Dedicated Cloud, it became known as Private Cloud. So what are Dedicated Clouds? Well, they're the middle ground between Private and Public Clouds; Dedicated Clouds provide varying levels of isolation and dedicated resources than Public Clouds, but aren't placing end users fully in control quite like the traditional Private Cloud. Enterprises in regulated industries, or those that want to isolate particularly sensitive data, increasingly reach for Dedicated Cloud because it can provide most of the niceties of the Public Cloud while also delivering better security.
\n
When AuthZed looked to create the first commercial offering of SpiceDB, we looked at where the industry was heading and implemented a Serverless product. However, it turned out that most enterprises value peace of mind that comes from isolating their authorization data from a shared data plane with other tenants. This was a happy coincidence because at the same time we learned that the best way to operate low-latency systems is to isolate workloads by having dedicated hardware resources. With our new insights, we launched Dedicated, our \"middleground\" that provided dedicated cloud environments with reserved compute resources and private networking. Dedicated customers get a private control plane deployed into their cloud regions of choice where they can provision their own deployments using our web console, API, or Terraform/OpenTOFU. Remaining true to the Infrastructure-as-a-Service (IaaS) spirit, pricing is done on a resource consumption basis.
\nUpon launch, Dedicated immediately became our flagship product. However, we recognized that some customers didn't require all of its isolation features. These are the same users looking for a self-service product to try things out without a long enterprise sales cycle. Our Serverless product inadvertently fits this description, but it's a limited experience compared to Dedicated. What if we could bridge the gap and bring a version of our Dedicated product where customers could share the control plane? We're calling this AuthZed Cloud (as opposed to AuthZed Dedicated Cloud) and it's under active development and expected to launch later this year. Best of all, because both Cloud and Dedicated will share the same codebase, all of the self-service features we're building will also be coming to Dedicated.
\nIf you are interested in learning more about AuthZed Cloud, you can sign up here for the beta waitlist.
\n", - "url": "https://authzed.com/blog/a-closer-look-at-authzed-dedicated", - "title": "A Closer Look at AuthZed Dedicated", - "summary": "AuthZed tackles broken access control through innovative authorization infrastructure. After launching open-source SpiceDB in 2021, they created AuthZed Dedicated Cloud—offering enterprises the security benefits of private clouds with public cloud convenience. This middle-ground solution provides isolated authorization data processing with dedicated resources, perfect for regulated industries requiring enhanced security.", - "image": "https://authzed.com/images/blogs/blog-featured-image.png", - "date_modified": "2025-05-20T13:00:00.000Z", - "date_published": "2025-05-20T13:00:00.000Z", - "author": { - "name": "Jimmy Zelinskie", - "url": "https://twitter.com/jimmyzelinskie" - } - }, - { - "id": "https://authzed.com/blog/building-better-authorization-infrastructure-with-arm", - "content_html": "How ARM helps AuthZed build and operate authorization infrastructure, from day-to-day productivity gains to cost-effective, performant cloud compute.
\nToday's cloud-native development environment requires running a growing list of simultaneous services: container orchestration, monitoring, databases, observability tools, and more. For engineering teams, this creates a critical challenge: how to balance performance, cost, and efficiency across both development environments and production deployments.
\nAt AuthZed, we provide flexible, scalable authorization infrastructure—the permissions systems that secure access for your applications’ data and functionality—enabling engineering teams to focus on building what matters—their core products. For our customers using AuthZed's dedicated cloud, the balance of performance, cost, and efficiency is also crucial—they expect a reliable, performant, and cost-effective solution.
\nARM architecture has become our strategic advantage in meeting these challenges across our entire workflow.
\nThe availability of ARM-based laptops with customizable configurations and ample RAM has transformed our development environment. Our journey began with ARM processors in early 2022 and expanded to more powerful variants as they became available. The developer community quickly adopted these machines, and tooling and library support rapidly matured, enabling us to fully adopt ARM as our primary architecture in development.
\nAt AuthZed, we work with distributed systems and databases daily, and running the full stack locally can be resource-intensive, often requiring significant CPU and memory. ARM's efficient performance helps utilize machine capacity, while its energy efficiency keeps our laptops cool enough to truly stay on laps—even when running our resource-intensive local environment.
\nAfter upgrading to higher-performance ARM-based laptops, notable improvements compared to our previous development environment included:
\nThe qualitative benefits have been even more significant—true mobility with our laptops due to minimal battery drain and absence of overheating, smoother performance during resource-intensive tasks, and most importantly, tighter feedback loops during debugging and testing.
\nAuthZed has been building and publishing multi-architecture Docker images for our tools and authorization database for over three years (since March 2022), so we recognized the value of multi-architecture support in CI/CD early on.
\nThere's now robust support for third-party ARM-based action runners for GitHub Actions, our CI/CD platform. Combined with toolchain maturity across runner images for popular architectures, migration to ARM for CI/CD has never been easier.
\nBuild and test workflows are unique to each project and evolve as the project develops. Consequently, the benefits and tradeoffs for a CI/CD platform change over time. We've benefited from being able to easily migrate between architectures and runner providers to best meet our engineering needs at different stages.
\nMajor providers like Google Cloud, AWS, and Azure have all released custom-designed ARM-based CPUs for their cloud compute platforms. The expanding ARM ecosystem bolsters our multi-cloud strategy for AuthZed Dedicated and allows our production workloads to benefit from ARM's design, which prioritizes high core count and power efficiency under load.
\nAuthZed Dedicated is our dedicated authorization infrastructure deployed adjacent to customer applications in their preferred cloud platform. This allows for the lowest latency between user applications and our permissions systems, and for the most comprehensive region support. With the availability of ARM-based compute options across the major providers, we are able to take advantage of the economic and performance advantages of ARM-based infrastructure in production:
\nFrom developer laptops to cloud infrastructure, ARM delivers consistent advantages throughout our engineering pipeline. For AuthZed, it's now our preferred platform for building and running authorization infrastructure that helps customers secure applications with confidence and scale efficiently.
\nThe combination of developer productivity, cost efficiency, and performance gains enables our growing startup to innovate and compete effectively. As cloud providers continue expanding ARM-based offerings and development tools mature further, we expect these advantages to compound, creating even more opportunities to deliver value through our authorization infrastructure.
\nBy embracing ARM across development and production environments, we've created a seamless experience that benefits both our team and our customers—accelerating development while delivering more performant and cost-effective services.
\nCurious about the inspiration behind AuthZed’s modern approach to authorization? Explore the Google Zanzibar research paper with our annotations and foreword by Kelsey Hightower to learn how it all began.
\nhttps://authzed.com/z/google-zanzibar-annotated-paper
Zed is the command line interface (CLI) tool that you can use to interact with your SpiceDB cluster. With it you can easily switch between clusters, write and read schemas, write and read relationships, and check permissions. It can be launched as a standalone binary or as a Docker container. Detailed installation options are documented here.
\nOver the last few months we’ve been making many improvements to it, such as:
\nzed backup commandAnd many other small fixes that are too many to list here. We are happy to announce that last week we released zed v0.30.2, which includes all of these changes.
\nIn the near future we expect to be adding support for a new test syntax in schema files, which will allow you to validate that your schema and relationships work as you expect them to. Stay tuned!
\nAs you can see, we are continuously making improvements to zed. If you see anything not working as expected, or if you have an idea for a new feature, please don’t hesitate to open an issue in https://github.com/authzed/zed. Also, while you’re at it, please give us a star!
", - "url": "https://authzed.com/blog/zed-v0-30-2-release", - "title": "Zed v0.30.2 Release", - "summary": "Zed CLI provides seamless interaction with SpiceDB clusters, allowing you to manage schemas, relationships, and permissions checks. Our v0.30.2 release adds composable schema support, automatic retries, backup functionality, and upcoming Windows package integration via Chocolatey.", - "image": "https://authzed.com/images/blogs/blog-featured-image.png", - "date_modified": "2025-05-01T11:12:00.000Z", - "date_published": "2025-05-01T11:12:00.000Z", - "author": { - "name": "Maria Inés Parnisari", - "url": "https://github.com/miparnisari" - } - }, - { - "id": "https://authzed.com/blog/kubecon-europe-2025-highlights-navigating-authorization-challenges-in-fintech-with-authzeds-jimmy-zelinskie-and-pierre-alexandre-lacerte-from-upgrade", - "content_html": "At this year's KubeCon + CloudNativeCon Europe 2025 in London, AuthZed CPO Jimmy Zelinskie sat down with Pierre-Alexandre Lacerte, Director of Software Development at Upgrade, for an insightful discussion on modern authorization challenges and solutions. The interview, hosted by Michael Vizard of Techstrong TV, covers several key topics that should be on every developer's radar.
\nBefore diving into the highlights, you can watch the complete interview on Techstrong TV here. It's packed with valuable insights for anyone interested in authorization, security, and cloud-native architectures.
\nJimmy shares the origin story of AuthZed, explaining how his experience building Quay (one of the first private Docker registries) revealed fundamental challenges with authorization:
\n\n\n\"When you think about it, the only thing that makes a private Docker registry different from like a regular Docker registry where anyone can pull any container down is literally authorization... the core differentiator of that product was authorization.\"
\n
The turning point came when Google published the Zanzibar paper in 2019:
\n\n\n\"We read this paper and said, this is actually how you're supposed to solve these problems. This would have solved all the problems we had building Quay.\"
\n
One of the most valuable segments of the interview explains the concept of relationship-based access control:
\n\n\n\"The approach in the Zanzibar paper is basically this idea of relationship-based access control, which is not how most people are doing things today. The idea is essentially that you can save sets of relationships inside of a database and then query that later to determine who has access.\"
\n
Jimmy illustrates this with a simple example that makes the concept accessible:
\n\n\n\"Jimmy is a part of this team. This team has access to this resource. And then if I can find that chain from Jimmy through the team to that resource, that means Jimmy has access to that resource transitively through those relationships.\"
\n
Pierre-Alexandre explains the decision-making process that led Upgrade to adopt SpiceDB rather than building an in-house solution:
\n\n\n\"We're a fintech, so we offer personal loans, checking accounts. But eventually we started developing more advanced products where we had to kind of change the foundation of our authorization model... we're kind of not that small, but at the same time we cannot allocate like 200 engineers on authorization.\"
\n
Their evaluation involved looking at industry leaders:
\n\n\n\"We started looking at a few solutions actually, and then also the landscape, like what is GitHub doing? What is the Carta, Airbnb doing?... a lot of those solutions were kind of hedging into the direction of Zanzibar or Zanzibar-ish approach.\"
\n
The interview highlights a critical advantage of centralized authorization systems:
\n\n\n\"The real end solution to all that is centralization. If there's only one system of record, it's really easy to make sure you've just removed that person from the one single system of record.\"
\n
Pierre-Alexandre describes how Upgrade implemented this approach:
\n\n\n\"When someone leaves the company or when someone changes teams, we do have automation that would propagate the changes across the applications you have access to down to the SpiceDB instance. So we have this kind of sync infrastructure that makes sure that this is replicated within a few seconds.\"
\n
For companies operating in regulated industries like fintech, having a cloud-native solution is essential. Pierre-Alexandre emphasizes:
\n\n\n\"We're on Amazon EKS, so Kubernetes Foundation... For us, finding something that was cloud native, Kubernetes native was very important.\"
\n
One of the most forward-looking parts of the discussion addresses the intersection of authorization and AI:
\n\n\n\"The real kind of question is actually applying authorization to AI and not vice versa... now with AI, we don't have that same advantage of it just being like a couple folks. If you train a model or have tons of embeddings around your personal private data, now anyone querying that LLM has access to all that data at your business.\"
\n
Upgrade is already exploring solutions:
\n\n\n\"In our lab, we're exploring different patterns, leveraging SpiceDB where we have a lot of internal documentation and the idea is to ingest those documents and tag them on SpiceDB and then leveraging some tools in the GenAI space to query some of this data.\"
\n
Perhaps the most quotable moment from the interview is Jimmy's passionate plea to developers:
\n\n\n\"If there's like one takeaway from kind of us building this business, it's that folks shouldn't be building their own authorization. Whether the tool is SpiceDB that they end up choosing or another one, like developers, they wouldn't dream of building their own database when they're building their applications. But authorization systems, they've been studied and researched and written about in computer science since the exact same time. Yet every developer thinks they can write custom code for each app implementing custom logic for a thing they don't have no background in, right? And I think this is kind of just like preposterous.\"
\n
Pierre-Alexandre adds a pragmatic perspective from the customer side:
\n\n\n\"Obviously, I probably have decided to go with SpiceDB sooner. But yeah, I mean, we had to do our homework and learn.\"
\n
The full interview covers additional topics not summarized here, including:
\nInterested in learning more about modern authorization approaches after watching the interview?
\nDon't miss this insightful conversation that challenges conventional wisdom about authorization and provides a glimpse into how forward-thinking companies are approaching these challenges. Watch the full interview now →
", - "url": "https://authzed.com/blog/kubecon-europe-2025-highlights-navigating-authorization-challenges-in-fintech-with-authzeds-jimmy-zelinskie-and-pierre-alexandre-lacerte-from-upgrade", - "title": "Techstrong.tv Interview with Jimmy Zelinskie and Pierre-Alexandre Lacerte from Upgrade", - "summary": "Watch AuthZed CPO Jimmy Zelinskie and Upgrade's Pierre-Alexandre Lacerte discuss modern authorization challenges, relationship-based access control, and why companies shouldn't build their own authorization systems in this insightful KubeCon Europe 2025 interview with Techstrong.", - "image": "https://authzed.com/images/blogs/blog-featured-image.png", - "date_modified": "2025-04-08T16:15:00.000Z", - "date_published": "2025-04-08T16:15:00.000Z", - "author": { - "name": "Sam Kim", - "url": "https://github.com/samkim" - } - }, - { - "id": "https://authzed.com/blog/meet-dibs-the-mascot-bringing-spicedb-to-life", - "content_html": "We're pleased to introduce you to the official SpiceDB mascot – the Muad'dib, or Dibs for short. As we prepare for KubeCon + CloudNativeCon EU in London, we're unveiling this distinctive character who will represent our project in meaningful ways.
\n
The name \"Muad'dib\" continues our tradition of referencing Frank Herbert's Dune series. For those unfamiliar with Dune, the Muad'dib is a small desert mouse known for its resilience and adaptability—qualities we strive to incorporate into SpiceDB.
\nWith its distinctive oversized ears and agile movements, the Muad'dib is far more than just a charming emblem. In the unforgiving desert, every step matters, and this remarkable creature's fast, efficient navigation mirrors how SpiceDB processes complex data in real time. Those attentive ears serve as a reminder to remain vigilant and responsive, embodying survival instincts honed in the harshest environments.
\nMuch like SpiceDB's approach to authorization challenges, the Muad'dib transforms obstacles into opportunities. This desert-dwelling creature represents our commitment to resilience, speed, and a collaborative spirit – all values that drive SpiceDB forward in the cloud-native ecosystem.
\nWe will be at KubeCon + CloudNativeCon in London, so stop by our booth, #N632, to pick up your very own Dibs swag.
\nAnd join us for our scheduled activities:
\nKelsey Hightower AMA at our booth, #N632
\n
Come party with AuthZed, Spotify, Rootly and Infisical at the Munich Cricket Club Canary Wharf.
\n\n
We would love to talk with you about how we can help fix your access control and provide the infrastructure necessary to support your applications.
\nWe look forward to seeing how our community connects with Dibs the Muad'dib. Here's how you can get involved:
\nThis creature represents not just our project, but the spirit of our community – adaptable, resilient, and ready to navigate complex challenges.
\nWelcome, Dibs.
", - "url": "https://authzed.com/blog/meet-dibs-the-mascot-bringing-spicedb-to-life", - "title": "Meet Dibs: The Mascot Bringing SpiceDB to Life", - "summary": "Meet Dibs the Muad'dib, SpiceDB's new mascot that embodies our commitment to resilience, adaptability, and precision in solving complex authorization challenges. Drawing inspiration from Frank Herbert's Dune universe, this vigilant desert creature symbolizes how SpiceDB navigates the harsh terrain of modern access control with efficiency and intelligence.", - "image": "https://authzed.com/images/upload/blog-meet_dibs-2x.png", - "date_modified": "2025-03-25T12:17:00.000Z", - "date_published": "2025-03-25T12:17:00.000Z", - "author": { - "name": "Corey Thomas", - "url": "https://www.linkedin.com/in/cor3ythomas/" - } - }, - { - "id": "https://authzed.com/blog/the-evolution-of-expiration", - "content_html": "We are excited to announce that as of the SpiceDB v1.40 release, users now have access to a new experimental feature: Relationship Expiration. When writing relationships, requests can now include an optional expiration time, after which a relationship will be treated as removed, and eventually automatically cleaned up.
\nEven when first setting out to create SpiceDB, there was never any doubt that users would want time-bound access control to their resources. However, the inspiration for SpiceDB, Google's Zanzibar system, has no public documentation for how this functionality is built. Because our initial goal for the SpiceDB project was to be as faithful to Google's design as possible, we initially left expiration as an exercise for the user.
\nWithout explicit support within SpiceDB, users could still use external systems like workflow engines (e.g. Temporal) to schedule calls to the SpiceDB DeleteRelationships or WriteRelationships APIs to remove access when it should lapse. This approach is perfectly valid, but it has a major tradeoff: users must adopt yet another system to coordinate their usage of the SpiceDB API.
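\nFor illustration, here is a minimal sketch of that workaround in Python-style pseudocode, assuming a hypothetical spicedb client wrapper and a generic schedule_job helper standing in for a workflow engine such as Temporal:
\nfrom datetime import datetime, timedelta\n\n# All helper names below are hypothetical illustrations, not the real SpiceDB API.\n# spicedb.write_relationship / delete_relationship wrap WriteRelationships and\n# DeleteRelationships; schedule_job stands in for an external workflow engine.\ndef grant_temporary_access(spicedb, schedule_job, user_id, doc_id, ttl_hours=24):\n    # Grant access immediately.\n    spicedb.write_relationship(\n        resource=(\"document\", doc_id),\n        relation=\"viewer\",\n        subject=(\"user\", user_id),\n    )\n    # Ask the external scheduler to revoke it later; SpiceDB itself plays no part\n    # in the timing, which is exactly the tradeoff described above.\n    schedule_job(\n        run_at=datetime.utcnow() + timedelta(hours=ttl_hours),\n        task=lambda: spicedb.delete_relationship(\n            resource=(\"document\", doc_id),\n            relation=\"viewer\",\n            subject=(\"user\", user_id),\n        ),\n    )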
\nAfter we had successfully reached our goal of being the premier implementation of the concepts expressed in the Google Zanzibar paper, we turned our focus to improving developer experience and addressing real-world requirements outside the walls of Google. This led us to collaborate with Netflix on a system for supporting lightweight policies to more effectively model ABAC-style use cases. This design came to be known as Relationship Caveats. Caveats allow SpiceDB users to write conditional relationships that exist depending on whether a CEL expression evaluates to true while their request is being processed. With the introduction of Caveats, SpiceDB had its first way to create time-bounded access without relying on any external system. The use case was so obvious that even our first examples of Caveats demonstrated how to implement time-bounded relationship expiration.
\nAs more SpiceDB users adopted Caveats, we began to notice some trends in their usage. Many folks didn't actually need or want the full expressiveness of policy; instead, they cared solely about modelling expiration itself. Eventually it became obvious that expiration was its own fully-fledged use case. If we could craft an experience specifically for expiration, we could steer many folks away from some of the tradeoffs associated with caveats. And if you still need caveats for reasons other than expiration and are wondering whether relationships support both caveats and expiration simultaneously, they do!
\nIf you've spent time reading some of the deeper discussions on SpiceDB internals or studying other systems, you might be familiar with the fact that time is incredibly nebulous in distributed systems. Distributed systems typically eschew \"wall clocks\" altogether. Instead, for correctness they need to model time based on the ordering of events that occur in the system. This observation, among others, ultimately led Leslie Lamport to win a Turing Award. SpiceDB is no exception to this research: the opaque values encoded into SpiceDB's ZedTokens act as logical clocks used to provide consistency guarantees throughout the system.
\nIf the problem here isn't already clear: fundamentally, relationship expiration is tied to wall clock time, but distributed systems research proves this is a Bad Idea™. In order to avoid any inconsistencies caused by clock skew across machines, SpiceDB implements expiration by pushing as much logic into the underlying datastore as possible. For a datastore like PostgreSQL, there is no longer a synchronization problem because there's only one clock that matters: the one on the leader's machine. Some datastores even have their own first-class expiration primitives that SpiceDB can leverage to offload this logic entirely while ensuring that the removal of relationships is done as efficiently as possible. This strategy is only possible because of SpiceDB's unique architecture of reusing other existing databases for its storage layer rather than the typical disk-backed key-value store.
\nThere are only a few steps required to try out expiration once you've upgraded to SpiceDB v1.40:
\nFirst, start SpiceDB with the experimental flag enabled:\n\nspicedb serve --enable-experimental-relationship-expiration [...]\n\nNext, add use expiration to the top of your schema:\n\nuse expiration\n\ndefinition folder {}\n\ndefinition resource {\n    relation parent: folder\n}\n\nThen mark the relation with the with expiration trait:\n\nuse expiration\n\ndefinition folder {}\n\ndefinition resource {\n    relation parent: folder with expiration\n}\n\nFinally, include an expiration time when writing a relationship:\n\nWriteRelationshipsRequest { Updates: [\n    RelationshipUpdate {\n        Operation: CREATE\n        Relationship: {\n            Resource: { ObjectType: \"resource\", ObjectId: \"123\", },\n            Relation: \"parent\",\n            Subject: { ObjectType: \"folder\", ObjectId: \"456\", },\n            OptionalExpiresAt: \"2025-12-31T23:59:59Z\"\n        }\n    }]\n}\n\nRelationship Expiration is a great example of our never-ending journey to achieve the best possible performance for SpiceDB users. As SpiceDB is put to the test in an ever-increasing number of diverse enterprise use-cases, we learn new things about where optimizations should be made in order to deliver the best product for scaling authorization. Sometimes it requires going back to the drawing board on a problem we thought we had previously solved and totally reconsidering its design. With that, I encourage you to go out and experiment with Relationship Expiration so that we learn even more about the problem space and continue refining our approach.
", - "url": "https://authzed.com/blog/the-evolution-of-expiration", - "title": "The Evolution of Expiration", - "summary": "We are excited to announce that as of the SpiceDB 1.40 release, users now have access to a new experimental feature: Relationship Expiration. When writing relationships, requests can now include an optional expiration time, after which a relationship will be treated as removed, and eventually automatically cleaned up.", - "image": "https://authzed.com/images/blogs/blog-eng-relationship-expiration-hero-2x.png", - "date_modified": "2025-02-13T10:16:00.000Z", - "date_published": "2025-02-13T10:16:00.000Z", - "author": { - "name": "Jimmy Zelinskie", - "url": "https://twitter.com/jimmyzelinskie" - } - }, - { - "id": "https://authzed.com/blog/build-time-bound-permissions-with-relationship-expiration-in-spicedb", - "content_html": "Today we are announcing the experimental release of Relationship Expiration, which is a straightforward, secure, and dynamic way to manage time-bound permissions directly within SpiceDB.
\nBuilding secure applications is hard, especially when it comes to implementing temporary access management for sensitive data. You need to grant the right level of access to the right people for the right duration, without creating long-term vulnerabilities or drowning in administrative overhead.
\nConsider the last time you needed to give a contractor access to your brand guidelines, a vendor access to a staging environment, or a new employee access to onboarding materials. The usual workarounds – emailing files, uploading to external systems, or (please, please don’t) sharing logins – quickly become a tangled mess of version control nightmares, security risks, and administrative headaches. And what happened when you completed the project? How did you guarantee that access gets promptly revoked? Leaving lingering access privileges hanging around is an AppSec war room waiting to happen.
\nWe’re helping application development teams solve this problem with this powerful new feature in SpiceDB v1.40.
\n\"Authorization is essential for building secure applications with advanced sharing capabilities,\" said Larry Carvalho, Principal Consultant and Founder at RobustCloud. \"SpiceDB, inspired by Google's approach to authorization, provides developers with a much-needed feature for managing fine-grained access control. By leveraging AuthZed’s expertise, developers can build the next generation of applications with greater efficiency, security, and flexibility.\"
\nWhile workarounds exist – scheduling API calls with external tools like Temporal or crafting complex policies – they add complexity and can be difficult to manage and deploy at scale (think 10,000 relationships generated and refreshed every 10 minutes). SpiceDB's Relationship Expiration provides first-class support for building time-bound permissions, leveraging SpiceDB’s powerful relationship-based approach.
\nAs the name suggests, expirations are attached as a trait to relationships between subjects and resources in SpiceDB’s graph-based permissions evaluation engine. Once the relationship expires, SpiceDB automatically removes it. Without this built-in support, conditional time-bound relationships in a Zanzibar-style schema clutter the permissions graph, bloating the system and impacting performance.
\nTime-bound access helps teams to collaborate securely and efficiently. By eliminating the friction of manual access management, it frees up valuable time and resources while minimizing the risk of human error. Knowing that access will automatically expire fosters a culture of confident sharing, removing the hesitation that can lead to information silos and slower project cycles. Additionally, just-in-time access with session-based privileges streamlines workflows and minimizes the risk of unauthorized access.
\nPut access control in the hands of your users: they can define expiration limits for the resources they manage, unlocking powerful workflows like time-limited review cycles or project-based access. A designer, for example, could grant edit access to a file for a specific review period, with access automatically revoked afterward. This granular control enhances security by minimizing the window of opportunity for unauthorized access and fosters a culture of security awareness. Leave a positive impression with custom permissions options that welcome a broad range of use cases.
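\nAs a rough sketch of what that review-cycle workflow could look like from application code (using a hypothetical client wrapper rather than the exact SpiceDB API), the expiration simply rides along with the write:
\nfrom datetime import datetime, timedelta\n\n# Hypothetical wrapper around SpiceDB's WriteRelationships API; expires_at maps\n# to the relationship's optional expiration time.\ndef share_for_review(spicedb, reviewer_id, file_id, days=7):\n    spicedb.write_relationship(\n        resource=(\"file\", file_id),\n        relation=\"editor\",\n        subject=(\"user\", reviewer_id),\n        expires_at=datetime.utcnow() + timedelta(days=days),\n    )\n    # No cleanup job is needed: once expires_at passes, SpiceDB treats the\n    # relationship as removed and eventually garbage-collects it.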
\nWith millions of users and billions of resources, authorization can become a major performance bottleneck, especially since permissions checks sit in the critical path between user input and service response. By automatically removing expired relationships, SpiceDB reduces the size of its database and load on its system, leading to more performant authorization checks and lower costs.
\nWant to learn more TODAY? Join Sohan, AuthZed technical evangelist, and Joey Schorr, one of the founders of AuthZed, during our biweekly Office Hours livestream at 9 am PT / 12 pm ET on February 13th! We hope to see you there.
\n\nOr, hop over to Jimmy Zelinskie’s blog post to learn more about how to implement expiring relationships and try them out in SpiceDB today.
\nYou may have noticed that we've lined up this launch just in time for Valentine’s Day. Most relationships between humans do, sadly, have an expiration date… To recognize the (somewhat) unfortunate timing of this release, we’ve compiled a Spotify list of songs sourced from the AuthZed team just for those nursing broken hearts this season. And if you’re one of the lucky ones celebrating, hey, it’s fun music to jam to while you learn SpiceDB.
\n\nIf you haven’t already, give SpiceDB a star on GitHub, or follow us on LinkedIn, X, or BlueSky to stay up to date on all things AuthZed. Or ready to get started? Schedule a call with us to talk about how we can help with your authorization needs.
", - "url": "https://authzed.com/blog/build-time-bound-permissions-with-relationship-expiration-in-spicedb", - "title": "Build Time-Bound Permissions with Relationship Expiration in SpiceDB", - "summary": "Today we are announcing the experimental release of Relationship Expiration, which is a straightforward, secure, and dynamic way to manage time-bound permissions directly within SpiceDB. \n", - "image": "https://authzed.com/images/blogs/blog-relationship-expiration-hero-2x.png", - "date_modified": "2025-02-13T10:16:00.000Z", - "date_published": "2025-02-13T10:16:00.000Z", - "author": { - "name": "Jess Hustace", - "url": "https://twitter.com/_jessdesu" - } - }, - { - "id": "https://authzed.com/blog/deepseek-balancing-potential-and-precaution-with-spicedb", - "content_html": "DeepSeek has emerged as a phenomenon since its announcement in late December 2024 by hedge fund company High-Flyer. The AI industry and general public have been captivated by both its capabilities and potential implications.
\nSecurity has been at the forefront of recent conversation, driven both by reports from Wiz that the DeepSeek database was leaking sensitive information, including chat history, and by geopolitical concerns. Even RedMonk analyst Stephen O'Grady discussed DeepSeek and the Enterprise, focusing on considerations for business adoption.
\nAt AuthZed, we recognize that trust and security fundamentally shape how organizations evaluate AI models, which is why we're sharing our perspective on this crucial discussion.
\nWhat makes DeepSeek particularly noteworthy is its unique combination of features. As an open-source model, it demonstrates performance comparable to frontier models from industry leaders like OpenAI and Anthropic, yet achieves this with (reportedly) significantly lower training costs. The R1 version exhibits impressive reasoning capabilities, further challenging conventional assumptions about the infrastructure investments required for advancing LLM performance.
\nWhile these factors drive DeepSeek's popularity, they've also drawn skepticism, alongside geopolitical considerations based on DeepSeek's origin. The uncertainty surrounding the source of training data and potential biases in responses warrants careful consideration. A recent data breach of the hosted service has heightened privacy concerns, particularly given that the official hosted service's terms of service permit user data retention for future model training.
\nDespite the concerns, users and companies increasingly express interest in exploring its capabilities. Organizations seeking to leverage DeepSeek's capabilities while maintaining data security can adopt permissions systems to define data access controls. This strategy is especially relevant for applications built on DeepSeek's large language models, where protecting sensitive information is paramount.
\nSpiceDB offers a robust framework for organizations integrating AI capabilities. Its fine-grained permissions help avoid oversharing by letting you precisely define which data the model can and cannot access. This granular control extends beyond data access - you can prevent excessive agency by explicitly defining the scope of actions a DeepSeek-based agent is permitted to take. This dual approach to security - controlling both data exposure and action boundaries - makes SpiceDB particularly valuable for organizations that want to leverage DeepSeek’s capabilities but in a controlled environment.
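\nOne common pattern, sketched below with hypothetical spicedb.check_permission and build_prompt helpers, is to filter retrieved documents through a permission check before they ever reach the model's context:
\n# A minimal sketch of permission-aware RAG. The spicedb.check_permission helper is a\n# hypothetical wrapper around SpiceDB's CheckPermission API; retriever and llm come\n# from whatever GenAI stack is in use.\ndef answer_with_rag(spicedb, retriever, llm, user_id, question):\n    candidates = retriever.search(question, top_k=20)\n\n    # Drop any document the requesting user cannot view, so the model never sees\n    # content it could leak back in its answer.\n    allowed = [\n        doc for doc in candidates\n        if spicedb.check_permission(\n            subject=(\"user\", user_id),\n            permission=\"view\",\n            resource=(\"document\", doc.id),\n        )\n    ]\n\n    prompt = build_prompt(question, allowed)  # hypothetical prompt assembly\n    return llm.generate(prompt)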
\nTo help organizations get started, we've created a demo notebook showcasing SpiceDB integration with a DeepSeek-based RAG system: https://github.com/authzed/workshops/tree/deepseek/secure-rag-pipelines
\nFor further exploration and community support, join our SpiceDB Discord community to connect with other developers implementing secure AI applications.
", - "url": "https://authzed.com/blog/deepseek-balancing-potential-and-precaution-with-spicedb", - "title": "DeepSeek: Balancing Potential and Precaution with SpiceDB", - "summary": "DeepSeek has emerged as a phenomenon since its announcement in late December 2024 and security has been at the forefront of recent conversation. At AuthZed, we recognize that trust and security fundamentally shape how organizations evaluate AI models, which is why we're sharing our perspective on this crucial discussion.", - "image": "https://authzed.com/images/blogs/blog-featured-image.png", - "date_modified": "2025-01-31T07:56:00.000Z", - "date_published": "2025-01-31T07:56:00.000Z", - "author": { - "name": "Sam Kim", - "url": "https://github.com/samkim" - } - }, - { - "id": "https://authzed.com/blog/2024-soc2-reflection", - "content_html": "I'm happy to announce that AuthZed recently renewed our SOC2 compliance and our SOC2 Type 2 and SOC3 reports are now available on security.authzed.com.
\nHaving just endured the audit process again, I figured it would be a good time to reflect on my personal feelings toward compliance and how my opinion has evolved.
\nIf you're reading this now and aren't familiar with SOC2 and SOC3, I'll give you an overview by someone that isn't trying to sell you a compliance tool (feel free to skip this section):
\nSOC (System and Organization Controls) is a suite of annual reports that result from conducting an audit of the internal controls that you use to guarantee security practices at your company. An example of an \"internal control\" is a company-wide policy that enforces that \"all employees have an anti-virus installed on their devices\". Controls vary greatly and can be automated by using software like password managers and MDM solutions, but some will always require human intervention, such as performing quarterly security reviews and annual employee performance reviews.
\nIn the tech industry, SOC2 is the standard customers expect (or ISO27001 if you're in the EU, but they are similar enough that you often only need one or the other). As I wrote this, it came to my attention that I had no idea what SOC1 is, so I looked it up and discovered that it is apparently a financial report, which I've never heard of customers requesting in the tech industry. SOC3 is a summary of a SOC2 report that contains less detail and is designed to be more publicly sharable, so you don't necessarily need to sign an NDA to get some details. SOC2 comes in two variants, \"Type 1\" and \"Type 2\". It's fairly confusing, but this is just shorthand for how long the audit period was: Type 1 means that the audit looked at the company at one point in time, while Type 2 means that the auditor monitored the company over a period of time, usually 6 or 12 months.
\nTo engineering organizations, compliance is often seen as a nuisance or a distraction from shipping code that moves the needle for actual security issues. Software engineers are those deepest in the weeds, so they have the code that they're familiar with at the top of mind when you ask where security concerns lie. Because I knew where the bodies were buried when I first transitioned my career to product management from engineering, I always tried to push back and shield my team from having to deliver compliance features. The team celebrated this as a win for focus, but we never got to fully understand the externalities of this approach.
\nFast forward a few years, I've now gotten much wider exposure to the rest of the business functions at a technology company. From the overarching view of an executive, the perspective of the software engineer seems quite amiss. If you asked an engineer what they're concerned about, it might be that they quickly used the defaults for bcrypt and didn't spend the time evaluating the ideal number of bcrypt rounds or alternative algorithms. This perspective is valuable, but can also be missing the forest for the trees; it's far easier to perform phishing attacks on a new hire than it is to reverse engineer the cryptography in their codebase. That simple fact makes it clear that if you haven't already addressed the foundational security processes at your business, it doesn't matter how secure the software you're building is.
\nAll of that said, AuthZed's engineering-heavy team is not innocent from this line of thinking, especially since our core product is engineering security infrastructure. However, if we put our egos aside, there is one thing that reigns supreme regardless of the product you're building: the trust you build with your customers.
\nThe compliance industry was never trying to hide that its end goal is purely trust in processes. SOC2 is defined by the American Institute of Certified Public Accountants and not a cybersecurity standards body; this is because compliance is about ensuring processes at your business and not finding remote code execution in your codebase. That doesn't mean that compliance cannot uncover deep code issues because SOC2 audits actually require you to perform an annual penetration test from an actual cybersecurity vendor. Coding vulnerabilities are only one aspect of the comprehensive approach that compliance is focused on.
\nWithout compliance, our industry would be stuck having to blindly trust that vendors are following acceptable security practices. By conforming to the processes required for certifications like SOC2, we can build trust with our partners and customers as well as prove the maturity of our products and business. While it may feel like toil at times, it's a necessary evil to ensure consistency across our supply chains.
\nThe final thought I'd like to leave you with is the idea that compliance isn't a checkbox to do business. It's a continuous process where you offer transparency to your customers to prove that they should trust you. I'm looking forward to seeing if my opinions change next renewal.
\nI'd like to thank the teams at SecureFrame and Modern Assurance who we've collaborated with during this last audit as well as all of the vendors and data subprocessors we rely on to operate our business everyday.
", - "url": "https://authzed.com/blog/2024-soc2-reflection", - "title": "Our SOC2 Renewal and Reflections on Compliance", - "summary": "I'm happy to announce that AuthZed recently renewed our SOC2 compliance and our SOC2 Type 2 and SOC3 reports are now available on security.authzed.com.\nHaving just endured the audit process again, I figured it would be a good time to reflect on my personal feelings toward compliance and how my opinion has evolved.\n", - "image": "https://authzed.com/images/blogs/blog-featured-image.png", - "date_modified": "2025-01-07T20:20:00.000Z", - "date_published": "2025-01-07T20:20:00.000Z", - "author": { - "name": "Jimmy Zelinskie", - "url": "https://twitter.com/jimmyzelinskie" - } - }, - { - "id": "https://authzed.com/blog/the-dual-write-problem", - "content_html": "The dual-write problem presents itself in all distributed systems. A system that uses SpiceDB for authorization and also has an application database (read: most of them) is a distributed system. Working around the dual-write problem typically requires a non-trivial amount of work.
\nIf you've heard this one before, feel free to skip down where we talk about solutions and approaches to the dual-write problem. If it's your first time, welcome!
\nLet's consider a typical monolithic web application. Perhaps it's for managing and sharing files and folders, which makes it a natural candidate for a relation-based access control system like SpiceDB. The application has an upload endpoint that looks something like the following:
\ndef upload(req):\n validate_request(req)\n with new_transaction() as db:\n db.write_file(req.file)\n return Response(status=200)\n\nAll of the access control logic is neatly contained within the application database, so no other work needed to happen up to this point. However, we want to start using SpiceDB in anticipation of the application growing more complex and services splitting off of our main monolith.
\nWe start with a simple schema:
\ndefinition user {}\n\ndefinition folder {\n relation viewer: user\n permission view = viewer\n}\n\ndefinition file {\n relation viewer: user\n relation folder: folder\n permission view = viewer + folder->viewer\n}\n\nNote that if a user is a viewer of the folder, they are able to view any file within the folder. That means that we'll need to keep SpiceDB updated with the relationships between files and folders, which is held in the folder relation on the file.

That doesn't sound so bad. Let's go and implement it:
\ndef upload(req):\n    validate_request(req)\n    with new_transaction() as db:\n        file_id = db.write_file(req.file)\n        write_folder_relationship(\n            file_id=file_id,\n            folder_id=req.folder_id,\n        )\n\n    return Response(status=200)\n\nWe've got a problem, though. What happens if the server crashes? We're going to use a server crash as an example problem because it's relatively conceptually simple and is also something that's hard to recover from. Let's mark up the function and then consider what happens if the server crashes at each point:
\ndef upload(req):\n    validate_request(req)\n    # point 1\n    with new_transaction() as db:\n        file_id = db.write_file(req.file)\n        # point 2\n        write_folder_relationship(\n            file_id=file_id,\n            folder_id=req.folder_id,\n        )\n        # point 3\n    # point 4 (outside of the transaction)\n    return Response(status=200)\n\nNote that the points refer to the boundaries between lines of code, rather than pointing at the line of code above or below them.\nHere's an alternative view of things in a sequence diagram:

If the server crashes at points #1 or #4, we're fine - the request will fail, but we're still in a consistent state. The application server and SpiceDB agree about what the system should look like. If the server crashes at point #2, we're still okay - we've opened a database transaction but we haven't committed it, so the database will roll back the transaction and everything will be fine. If we crash at point #3, however, we're in a state where we've written to SpiceDB but we haven't committed the transaction to our database, and now SpiceDB and our database disagree about the state of the world.
\nThere isn't a neat way around this problem within the context of the process, either. This blog post goes further into potential approaches and their issues if you're curious. Things like adding a transactional semantic to SpiceDB or reordering the operations move the problem around but don't solve it, because there's still going to be some boundary in the code where the process could crash and leave you in an inconsistent state.
\nNote as well that there's nothing particularly unique about the dual-write problem in systems using SpiceDB and an application database, either. If we were writing to two different application databases, or to an application database and to a cache, or to two different RPC-invoked services, we still have the same issue.
\nWe can solve the dual-write problem in SpiceDB using a few different approaches, each with varying levels of complexity, prerequisites, and tradeoffs.
\nDoing nothing is an option that may be viable in the right context.\nThe sort of data inconsistency where SpiceDB and your application database disagree can be hard to diagnose.\nHowever, if there are mechanisms by which a user could recognize that something is wrong and remediate it in a timely manner, or if the authorized content in question isn't particularly sensitive, you may be able to run a naive implementation and avoid the complexity associated with other approaches.\nThe more stable your platform, the fewer issues this approach is likely to cause.
\nOut-of-band consistency checking would be one step beyond \"doing nothing.\"\nIf you have a source of truth that SpiceDB's state is meant to reflect in a given context, you can check that the two systems agree on a periodic basis.\nIf there's disagreement, the issues can be automatically remediated or flagged for manual intervention.
\nThis is a conceptually simple approach, but it's limited by both the size of your data and the velocity of changes to your data.\nThe more data you have, the more expensive and time-consuming the reconciliation process becomes.\nIf the data change rapidly, you could have false positives or false negatives when a change has been applied\nto one system but not the other.\nThis could theoretically be handled through locking or otherwise pinning SpiceDB and your application's database so that their data\nreflect the same version of the world while you're checking their associated states,\nbut that will greatly reduce your ability to make writes in your system.\nThe sync process itself can become a source of drift or inconsistency.
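\nAs a sketch, a reconciliation job for our file-and-folder example (with hypothetical helpers on both sides) might look like this:
\n# Periodic reconciliation sketch: compare the application database's notion of\n# file-to-folder membership against SpiceDB and repair or flag any drift.\n# All helper names here are hypothetical.\ndef reconcile_folders(app_db, spicedb, report):\n    expected = set(app_db.all_file_folder_pairs())            # application source of truth\n    actual = set(spicedb.read_parent_relationships(\"file\"))   # what SpiceDB currently holds\n\n    for file_id, folder_id in expected - actual:\n        # Missing in SpiceDB: re-create it. TOUCH-style writes are idempotent,\n        # so repeating this on the next run is harmless.\n        spicedb.touch_relationship((\"file\", file_id), \"parent\", (\"folder\", folder_id))\n\n    for file_id, folder_id in actual - expected:\n        # Present in SpiceDB but not in the application database: flag for\n        # removal or manual review.\n        report.flag_orphaned_relationship(file_id, folder_id)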
\nFor certain kinds of relationships and data, it may be sufficient to make SpiceDB the source of truth for that particular information.\nThis works best for data that matches SpiceDB's storage and access model: binary presence or absence of a relationship between two objects, and no requirement to sort those relationships or filter by anything other than which subject or object they're associated with.
\nIf your data meet those conditions, you can remove the application database from the question and make a single write to SpiceDB and avoid the dual-write problem entirely.
\nFor example, if we wanted to add a notion of a file \"owner\" to our example application, we probably wouldn't need an owner column with a foreign key to a user ID in our application database.\nInstead, we could represent the relationship entirely with an owner relation in SpiceDB, such that an API handler for adding or updating an owner of a file or folder would only talk to SpiceDB and not to the application database.\nBecause only one system is being written to in the handler, we avoid the dual-write problem.

The limitation here is that if you wanted to build a user interface where a user can see a table of all of the files they own, you wouldn't be able to filter, sort, or paginate\nthat table as easily, because SpiceDB isn't a general-purpose database and doesn't support that functionality in the same way.
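\nA sketch of such a handler, in the same pseudocode style as the upload examples above, shows only a single system in the write path (write_owner_relationship is a hypothetical wrapper around WriteRelationships):
\ndef set_owner(req):\n    validate_request(req)\n    # Ownership lives only in SpiceDB, so this handler performs exactly one write\n    # and the dual-write problem never appears.\n    write_owner_relationship(\n        file_id=req.file_id,\n        owner_id=req.user_id,\n    )\n    return Response(status=200)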
\nEvent sourcing and CQRS are related ideas that involve treating your system as eventually consistent.\nRather than an API call being a procedure that runs to completion, an API call becomes an event that kicks off a chain of actions.\nThat event goes into an event stream, where consumers (to use Kafka's language) can pick them up and process them, which may involve producing new events.\nMultiple consumers can listen to the same topic.\nThe events flow through the system until they've all been processed, and the surrounding runtime ensures that nothing is dropped.
\nThere's a cute high-level illustration of how an event sourcing system works here: https://www.gentlydownthe.stream/
\nIn our example application, it might look like the following:
\nThe upside is that you're never particularly worried about the dual-write problem, because any individual failure of a subscriber can be recovered and re-run.\nEverything just percolates through until the system arrives at a new consistent state.
\nThe downside is that you can't treat API calls as RPCs.\nThe API call doesn't represent a change to the state of your system, but rather a command or request that will\neventually result in your desired changes happening.\nYou can work around this by having the client or UI listen to an event stream from the backend,\nsuch that all you're doing is passing messages back and forth, but this often requires\nsignificant rearchitecture, and not every runtime is amenable to this architecture.
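\nTo make the flow concrete, here is a sketch of a consumer that applies file-uploaded events to SpiceDB; the event-stream and client helpers are hypothetical:
\n# Event-sourcing consumer sketch: each event is applied to SpiceDB with an\n# idempotent TOUCH-style write, so redelivery after a crash is harmless.\ndef run_spicedb_projector(event_stream, spicedb):\n    for event in event_stream.subscribe(\"file-events\"):\n        if event.type == \"file_uploaded\":\n            spicedb.touch_relationship(\n                resource=(\"file\", event.file_id),\n                relation=\"parent\",\n                subject=(\"folder\", event.folder_id),\n            )\n        # Acknowledge only after the write succeeds; otherwise the runtime\n        # redelivers the event and we simply retry.\n        event_stream.ack(event)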
\nHere are some examples of event queues that you might see in an event sourcing system:
\n\nA durable execution environment is a set of software tools that lets you pretend that you're writing relatively simple transactional logic within your application while abstracting over the concerns involved in writing to multiple services. These tools promise to take care of errors, rollbacks, and coordination, provided you've written the corresponding logic into the framework.
\nAn upside is that you don't have to rearchitect your system if you aren't already using the paradigms necessary for event sourcing.\nThe code that you write with these systems tends to be familiar, procedural, and imperative, which lowers the barrier to entry\nfor a dev trying to solve a dual-write problem.
\nA downside is that it can be difficult to know when your write has landed, because you're effectively dispatching it off to a job runner.\nThe business logic is moved off of the immediate request path. This means that the result of the business logic is also off of the request\npath, which raises a question of what you would return to an API client.
\nSome durable execution environments are explicitly for running jobs and don't give you introspection into the results;\nothers can be inserted into your code in such a way that you can wait for the result and pretend that everything happened synchronously.\nNote that this means that the associated runtime that handles those jobs becomes a part of the request path, which can carry operational overhead.
\nTemporal, Restate, Windmill, Trigger.dev, and Inngest are a few examples of durable execution environments. You'll have to evaluate which one best fits your architecture and infrastructure.
\nA transactional outbox pattern is related to both Event Sourcing and Durable Execution, in that it works around the dual-write problem\nthrough eventual consistency.\nThe idea is that within your application database, when there's a change that needs to be written to SpiceDB, you write to an outbox table, which is an append-only log of modifications that should happen to SpiceDB.\nThat write can happen within the same database transaction, which means you don't have the dual write problem.\nYou then read that log (or subscribe to a changestream) with a separate process which marks the entries as it reads them and then submits them to SpiceDB through some other mechanism.
\nAs long as this process is effectively single-threaded and retries operations until they succeed (which is helped by SpiceDB allowing for idempotent writes with its TOUCH operation), you have worked around the dual-write problem.
\nOne of the most commonly-used tools in a system based on the transactional outbox pattern is Debezium.\nIt watches changes in an outbox table and submits them as events to Kafka, which can then be consumed downstream to write to another system.
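\nHere is a sketch of the two halves of the pattern without Debezium, again with hypothetical helpers: the request handler appends to the outbox inside the application transaction, and a separate relay drains the outbox into SpiceDB:
\n# Transactional outbox sketch. The outbox row commits (or rolls back) together with\n# the file row, so the two systems cannot silently diverge; the relay retries until\n# SpiceDB accepts the write, relying on TOUCH being idempotent.\ndef upload(req):\n    validate_request(req)\n    with new_transaction() as db:\n        file_id = db.write_file(req.file)\n        db.append_outbox(\n            operation=\"TOUCH\",\n            resource=(\"file\", file_id),\n            relation=\"parent\",\n            subject=(\"folder\", req.folder_id),\n        )\n    return Response(status=200)\n\ndef run_outbox_relay(db, spicedb):\n    while True:\n        for entry in db.unprocessed_outbox_entries(limit=100):\n            spicedb.touch_relationship(entry.resource, entry.relation, entry.subject)\n            db.mark_outbox_processed(entry.id)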
\nSome other resources are available here:
\nUnfortunately, when making writes to multiple systems, there are no easy answers. SpiceDB isn't unique in this regard, and most systems of sufficient complexity will eventually run into some variant of this problem. Which solution you choose will depend on the shape of your existing system, the requirements of your domain, and the appetite of your organization to make the associated changes. We still think it's worth it - when you centralize the data required for authorization decisions, you get big wins in consistency, performance, and safety. It just takes a little work.
", - "url": "https://authzed.com/blog/the-dual-write-problem", - "title": "The Dual-Write Problem", - "summary": "The dual-write problem is present in any distributed system and is difficult to solve. We discuss where the problem arises and several approaches.", - "image": "https://authzed.com/images/blogs/blog-featured-image.png", - "date_modified": "2025-01-02T12:48:00.000Z", - "date_published": "2025-01-02T12:48:00.000Z", - "author": { - "name": "Tanner Stirrat", - "url": "https://www.linkedin.com/in/tannerstirrat/" - } - }, - { - "id": "https://authzed.com/blog/spicedb-amazon-ecs", - "content_html": "Amazon Elastic Container Service (ECS) is a fully managed container orchestration service that simplifies your deployment, management, and scaling of containerized applications. This blog will illustrate how you can install SpiceDB on Amazon ECS and is divided into 3 parts:
\nIt's important to note that this guide is meant for:
\nIt is not recommended to use SpiceDB on ECS as a production deployment target. See the final section of this post for more details.
\nHere are the prerequisites to follow this guide:
\nLet’s start by pushing the SpiceDB Docker image to Amazon Elastic Container Registry (ECR)
\n
Alternatively, you can create the repository using the AWS CLI with the following command:
\naws ecr create-repository --repository-name spicedb --region <your-region>\n\nAmazon ECR requires Docker to authenticate before pushing images.\nRetrieve an authentication token and authenticate your Docker client to your registry using the following command (you’ll need to replace region with your specific AWS region, like us-east-1)
aws ecr get-login-password --region <region> | docker login --username AWS --password-stdin <account-id>.dkr.ecr.<region>.amazonaws.com\n\ndocker pull authzed/spicedb:latest\ndocker build -t spicedb .\n\ndocker tag spicedb:latest <account-id>.dkr.ecr.<region>.amazonaws.com/spicedb:latest\n\nNote: If you are using an Apple ARM-based machine (Ex: Mac with Apple Silicon) and you eventually want to deploy it to a x86-based instance you need to build this image for multi-architecture using the buildx command.
You cannot use docker buildx build with an image reference directly.\nInstead, create a lightweight Dockerfile to reference the existing image by adding this one line:
FROM authzed/spicedb:latest
and save it in the directory. While in that directory, build and push a Multi-Architecture Image using the buildx command:
docker buildx build --platform linux/amd64,linux/arm64 -t <account-id>.dkr.ecr.<region>.amazonaws.com/spicedb:latest --push .\n\ndocker push <account-id>.dkr.ecr.<region>.amazonaws.com/spicedb:latest\n\nReplace account-id and region with your AWS account ID and region.
In the ECR console, open the spicedb repository and verify that the spicedb:latest image is available. Note: All of the above commands come pre-filled with your account details; you can view them by opening your repository on ECR and clicking the View push commands button.
\n
Using AWS Console:
\nAlternatively, you can create the cluster using the AWS CLI with this command:
\naws ecs create-cluster --cluster-name spicedb-cluster\n\n
If you don’t see these roles, you can create them as follows:
\nCreating ecsTaskExecutionRole:
The ECS Task Execution Role is needed for ECS to pull container images from ECR, write logs to CloudWatch, and access other AWS resources.
\nGo to the IAM Console.
\nClick Create Role.
\nFor Trusted Entity Type, choose AWS Service.
\nSelect Elastic Container Service and then Elastic Container Service Task.
\nClick Next and attach the following policies:
\nOr use these commands using AWS CLI:
\naws iam create-role --role-name ecsTaskExecutionRole \n\n--assume-role-policy-document '{\"Version\": \"2012-10-17\", \"Statement\": [{\"Effect\": \"Allow\", \"Principal\": {\"Service\": \"ecs-tasks.amazonaws.com\"}, \"Action\": \"sts:AssumeRole\"}]}'\n\nAttach the AmazonECSTaskExecutionRolePolicy to the role:
\naws iam attach-role-policy --role-name ecsTaskExecutionRole \n\n--policy-arn arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy\n\nCreating ecsTaskRole (Optional):
The ECS Task Role is optional and should be created if your containers need access to other AWS services such as Amazon RDS or Secrets Manager.
\nOr use these commands using AWS CLI:
\nCreate the role using:
\naws iam create-role --role-name ecsTaskRole \n\n--assume-role-policy-document '{\"Version\": \"2012-10-17\", \"Statement\": [{\"Effect\": \"Allow\", \"Principal\": {\"Service\": \"ecs-tasks.amazonaws.com\"}, \"Action\": \"sts:AssumeRole\"}]}'\n\nAttach any policies based on the specific AWS services your application needs access to:
\naws iam attach-role-policy --role-name ecsTaskRole \n\n--policy-arn arn:aws:iam::<policy-arn-for-service-access>\n\nThe task definition defines how SpiceDB containers will be configured and run. Below is the JSON configuration for the task definition. To create a task definition:
\nAWS Console
\nCopy the JSON below:
\n{\n \"family\": \"spicedb-task\",\n \"networkMode\": \"awsvpc\",\n \"requiresCompatibilities\": [\"FARGATE\"], \n \"cpu\": \"512\", \n \"memory\": \"1024\", \n \"executionRoleArn\": \"arn:aws:iam::<account-id>:role/ecsTaskExecutionRole\", //Copy the ARN from the ecsTaskExecutionRole created above\n \"taskRoleArn\": \"arn:aws:iam::<account-id>:role/ecsTaskRole\", //Copy the ARN from the ecsTaskRole created above\n \"containerDefinitions\": [\n {\n \"name\": \"spicedb\",\n \"image\": \"<account-id>.dkr.ecr.<region>.amazonaws.com/spicedb\", //ECR Repository URI\n \"essential\": true,\n \"command\": [\n \"serve\",\n \"--grpc-preshared-key\",\n \"somekey\" \n ],\n \"portMappings\": [\n {\n \"containerPort\": 50051,\n \"hostPort\": 50051,\n \"protocol\": \"tcp\"\n }\n ],\n \"environment\": [],\n \"logConfiguration\": {\n \"logDriver\": \"awslogs\",\n \"options\": {\n \"awslogs-group\": \"/ecs/spicedb-ecs\",\n \"mode\": \"non-blocking\",\n \"awslogs-create-group\": \"true\",\n \"max-buffer-size\": \"25m\",\n \"awslogs-region\": \"us-east-1\",\n \"awslogs-stream-prefix\": \"ecs\"\n }\n }\n }\n ]\n}\n\nThe command section specifies serve which is the primary command for running SpiceDB.\nThis command serves the gRPC and HTTP APIs by default along with a pre-shared key for authenticated requests.
Note: This is purely for learning purposes, so any permissions and relationships written to this instance of SpiceDB will be stored in-memory and not in a persistent database.\nTo write relationships to a persistent database, create an Amazon RDS instance for Postgres and note down the DB name, Master Password, and Endpoint.
\nYou can add those into the task definition JSON in the command array like this:
\"command\": [\n \"serve\",\n \"--grpc-preshared-key\",\n \"somekey\",\n \"--datastore-engine\",\n \"postgres\",\n \"--datastore-conn-uri\",\n \"postgres://<username>:<password>@<RDS endpoint>:5432/<dbname>?sslmode=require\"\n ],\n\nThe defaults for username and dbname are usually postgres
You can also use the AWS CLI by storing the above JSON in a file and then running this command:
\naws ecs register-task-definition --cli-input-json file://spicedb-task-definition.json\n\nNow that we’ve defined a task, we can create a task that would run within your ECS cluster.\nClick on your ECS Cluster created earlier
\n