Volumes and Persistent Storage
Version 0.9.2 · Last updated February 2026
Volumes provide node-local persistent storage for pods. Data written to a volume survives pod restarts and can be shared across replicas running on the same node.
By default pods are ephemeral — when a pod stops, any data it produced is lost. Volumes solve this by providing a named storage unit that is tied to a specific node and persists independently of any individual pod.
Key properties:
| Property | Detail |
|---|---|
| Scope | Node-local. A volume lives on one node. |
| Uniqueness | Name is unique per node (the same name can exist on different nodes). |
| Persistence | Data survives pod restarts, deletes, and re-creates. |
| Sharing | Multiple pods/replicas on the same node can mount the same volume simultaneously. |
| Isomorphic | Works on both Node.js and browser runtimes (different storage backends). |
Volume: A named, node-local persistent storage unit managed by the orchestrator. Created ahead of time with stark volume create and referenced at pod/service creation with --volume.
Volume mount: A mapping from a volume name to a mount path inside the pack runtime. The mount path is an absolute path (e.g. /app/data) that the pack code uses to read and write files.
--volume <name>:<mount-path>
Example: --volume counter-data:/app/data mounts the volume named counter-data at /app/data inside the pod.
| Runtime | Backend | Location |
|---|---|---|
| Node.js | File system | <agent-cwd>/volumes/<volume-name>/ |
| Browser | IndexedDB | stark-volumes database, files object store |
Both backends expose the same API to pack code, making packs portable across runtimes.
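Because the helpers are injected by the orchestrator, a pack never imports node:fs or opens IndexedDB itself. A minimal sketch, assuming the pod was created with a volume mounted at /app/data as in the examples below:
module.exports.default = async function(context) {
  // The same code runs on both runtimes; the backend behind
  // context.readFile / context.writeFile is chosen by the node.
  await context.writeFile('/app/data/hello.txt', 'hello from a volume');
  var text = await context.readFile('/app/data/hello.txt');
  return { text: text };
};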
A typical flow creates the volume, registers a pack, then creates a pod that mounts it:
stark volume create counter-data --node production-node-1
stark pack register examples/bundle_volume_counter.js \
--name volume-counter -V 0.0.1 -r universal --visibility public
stark pod create volume-counter \
--node production-node-1 \
--volume counter-data:/app/data
The pod's pack code can now read and write files under /app/data, and the data persists across restarts.
Create a named volume on a node.
stark volume create <name> --node <name-or-uuid>
| Option | Description |
|---|---|
| --name <name> | Volume name (alternative to positional argument) |
| -n, --node <nameOrId> | (required) Target node (name or UUID) |
Volume names must be lowercase alphanumeric with hyphens, starting/ending with an alphanumeric character, max 63 characters. (DNS-like: ^[a-z0-9]([a-z0-9-]*[a-z0-9])?$)
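The same rule can be checked client-side before calling stark volume create; a small sketch (not part of the CLI):
var VOLUME_NAME_RE = /^[a-z0-9]([a-z0-9-]*[a-z0-9])?$/;

function isValidVolumeName(name) {
  // DNS-like: lowercase alphanumerics and hyphens, alphanumeric at both ends, 1-63 chars.
  return name.length <= 63 && VOLUME_NAME_RE.test(name);
}

// isValidVolumeName('counter-data')  -> true
// isValidVolumeName('Counter_Data')  -> false (uppercase, underscore)
// isValidVolumeName('-data')         -> false (must start with an alphanumeric)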
List volumes, optionally filtered by node.
stark volume list [--node <name-or-uuid>]
Alias: stark volume ls
| Option | Description |
|---|---|
| -n, --node <nameOrId> | Filter by node (name or UUID) |
Download volume contents as a tar archive.
stark volume download <name> --node <name-or-uuid> --output ./backup.tar
| Option | Description |
|---|---|
| --name <name> | Volume name (alternative to positional argument) |
| -n, --node <nameOrId> | (required) Node where the volume resides |
| -o, --output <path> | (required) Output file path |
Use --volume (repeatable) when creating a pod:
stark pod create my-pack \
--node production-node-1 \
--volume data:/app/data \
--volume logs:/app/logs
Format: <volume-name>:<mount-path>
Services propagate volume mounts to every replica pod. Because volumes are node-local, --node is required when --volume is specified:
stark service create my-svc \
--pack shared-log --replicas 3 \
--node production-node-1 \
--volume shared-log:/app/logs
All three replicas will mount the same shared-log volume at /app/logs, enabling shared storage across the service.
When a pod has volume mounts, the orchestrator injects file I/O helpers into the pack context:
context.volumeMounts: Array<{ name: string; mountPath: string }> | undefined
An array of volume mount descriptors the pod was created with. Use this to detect whether a volume is available.
context.readFile(filePath: string): Promise<string>
Read a file from a mounted volume. The path must start with one of the mountPath prefixes in context.volumeMounts. Throws if the path is not inside a mounted volume or the file does not exist.
context.writeFile(filePath: string, content: string): Promise<void>
Write (overwrite) a file on a mounted volume. Intermediate directories are created automatically. Throws if the path is not inside a mounted volume.
context.appendFile(filePath: string, content: string): Promise<void>
Append to a file on a mounted volume (available in some runtimes).
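Rather than hard-coding a mount path, a pack can look it up from context.volumeMounts. A sketch (counter-data is simply the volume name used in the examples below):
module.exports.default = async function(context) {
  var mounts = context.volumeMounts || [];
  // Find the mount this pod was created with, if any.
  var dataMount = mounts.find(function(m) { return m.name === 'counter-data'; });
  if (!dataMount) {
    return { persisted: false }; // pod was created without --volume
  }
  var stateFile = dataMount.mountPath + '/state.json';
  await context.writeFile(stateFile, JSON.stringify({ ok: true }));
  return { persisted: true, state: await context.readFile(stateFile) };
};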
File I/O is sandboxed — only paths under a mounted volume's mountPath are accessible. Attempting to read or write outside a mount throws an error:
Error: Path '/etc/passwd' is not inside any mounted volume
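Packs that probe optional files can simply catch the error. A sketch using a hypothetical helper:
async function readOrDefault(context, filePath, fallback) {
  // readFile throws both for paths outside every mount and for missing files,
  // so treat either case as "use the fallback".
  try {
    return await context.readFile(filePath);
  } catch (_e) {
    return fallback;
  }
}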
A pod that persists an incrementing counter to a volume, surviving restarts:
module.exports.default = async function(context) {
var volumePath = '/app/data';
var counterFile = volumePath + '/counter.json';
var counter = 0;
// Restore counter from volume
if (context.readFile) {
try {
var raw = await context.readFile(counterFile);
counter = JSON.parse(raw).count || 0;
} catch (_e) { /* first run */ }
}
// Increment and persist
while (!context.lifecycle?.isShuttingDown) {
counter++;
if (context.writeFile) {
await context.writeFile(counterFile,
JSON.stringify({ count: counter, updatedAt: new Date().toISOString() }));
}
await new Promise(r => setTimeout(r, 1000));
}
return { lastCount: counter };
};
Run it:
stark volume create counter-data --node production-node-1
stark pack register examples/bundle_volume_counter.js \
--name volume-counter -V 0.0.1 -r universal --visibility public
stark pod create volume-counter \
--node production-node-1 \
--volume counter-data:/app/data
Multiple service replicas sharing an append-only log file via a common volume:
module.exports.default = async function(context) {
var logFile = '/app/logs/shared.log';
var entryCount = 0;
while (!context.lifecycle?.isShuttingDown) {
entryCount++;
var entry = new Date().toISOString() +
' [pod:' + context.podId + '] Entry #' + entryCount;
// Append to shared log
if (context.appendFile) {
await context.appendFile(logFile, entry + '\n');
}
// Periodically read all entries from all replicas
if (entryCount % 5 === 0 && context.readFile) {
var contents = await context.readFile(logFile);
console.log('Total shared entries: ' +
contents.split('\n').filter(Boolean).length);
}
await new Promise(r => setTimeout(r, 2000));
}
};
Run it:
stark volume create shared-log --node production-node-1
stark pack register examples/bundle_volume_shared_log.js \
--name shared-log -V 0.0.1 -r universal --visibility public
stark service create shared-log-svc \
--pack shared-log --replicas 3 \
--node production-node-1 \
--volume shared-log:/app/logs
Volumes are stored in the volumes table:
| Column | Type | Description |
|---|---|---|
| id | UUID | Primary key |
| name | TEXT | Volume name |
| node_id | UUID | Node FK (CASCADE on delete) |
| created_at | TIMESTAMPTZ | Creation timestamp |
| updated_at | TIMESTAMPTZ | Last update timestamp |
Constraints:
- UNIQUE(name, node_id): volume name is unique per node
- node_id references nodes(id) with ON DELETE CASCADE
Pods and services store their mounts in a volume_mounts JSONB column containing an array of { name, mountPath } objects.
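For the pod created earlier with --volume counter-data:/app/data, the stored value would look like this (illustrative):
[
  { "name": "counter-data", "mountPath": "/app/data" }
]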
| Field | Rule |
|---|---|
| Volume name | Lowercase alphanumeric + hyphens, starts/ends alphanumeric, 1–63 chars |
| Mount path | Absolute path (/...), alphanumeric with _, ., /, - |
| Mounts per pod | Max 20 |
| Duplicate mount paths | Not allowed within the same pod/service |
| Node ID | Valid UUID |
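A sketch of how the per-pod mount rules might be enforced (validateMounts is a hypothetical helper, not part of the CLI):
function validateMounts(mounts) {
  // mounts: array of { name, mountPath } as stored in volume_mounts
  if (mounts.length > 20) throw new Error('A pod may have at most 20 volume mounts');
  var seen = {};
  mounts.forEach(function(m) {
    if (!/^\/[A-Za-z0-9_.\/-]*$/.test(m.mountPath)) {
      throw new Error('Mount path must be absolute: ' + m.mountPath);
    }
    if (seen[m.mountPath]) throw new Error('Duplicate mount path: ' + m.mountPath);
    seen[m.mountPath] = true;
  });
}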
- Volumes are control-plane metadata — the orchestrator tracks which volumes exist and where, but actual file storage is handled by the node runtime.
- On Node.js runtimes, volume data is stored on the local filesystem under <agent-cwd>/volumes/<volume-name>/. Files are read/written synchronously via node:fs.
- On browser runtimes, volume data is stored in an IndexedDB database named stark-volumes with a files object store. Keys follow the pattern stark-volumes/<volume-name>/<relative-path>.
- Volume I/O closures cannot be serialized across IPC/postMessage boundaries, so they are recreated inside workers using the serializable volumeMounts metadata.
- Volume download (stark volume download) is a V1 placeholder; full content retrieval from remote nodes is planned for a future release.
- Volumes are node-local only. There is no cross-node replication or migration.
- Services using --volume must specify --node to pin all replicas to the same node.
- Browser-runtime volumes are scoped to the browser's IndexedDB and are subject to browser storage limits and eviction policies.
- The appendFile helper is not universally available across all runtime implementations.
- Volume download returns a placeholder tar archive in V1.