Volumes & Persistent Storage

Version 0.9.2 · Last updated February 2026

Volumes provide node-local persistent storage for pods. Data written to a volume survives pod restarts and can be shared across replicas running on the same node.


Overview

By default, pods are ephemeral: when a pod stops, any data it produced is lost. Volumes solve this by providing a named storage unit that is tied to a specific node and persists independently of any individual pod.

Key properties:

| Property | Detail |
|---|---|
| Scope | Node-local. A volume lives on one node. |
| Uniqueness | Name is unique per node (the same name can exist on different nodes). |
| Persistence | Data survives pod restarts, deletes, and re-creates. |
| Sharing | Multiple pods/replicas on the same node can mount the same volume simultaneously. |
| Isomorphic | Works on both Node.js and browser runtimes (different storage backends). |

Concepts

Volume

A named, node-local persistent storage unit managed by the orchestrator. Created ahead of time with stark volume create and referenced at pod/service creation using --volume.

Volume Mount

A mapping from a volume name to a mount path inside the pack runtime. The mount path is an absolute path (e.g. /app/data) that the pack code uses to read and write files.

--volume <name>:<mount-path>

Example: --volume counter-data:/app/data mounts the volume named counter-data at /app/data inside the pod.

Storage Backends

| Runtime | Backend | Location |
|---|---|---|
| Node.js | File system | <agent-cwd>/volumes/<volume-name>/ |
| Browser | IndexedDB | stark-volumes database, files object store |

Both backends expose the same API to pack code, making packs portable across runtimes.


Quick Start

1. Create a volume

stark volume create counter-data --node production-node-1

2. Register a pack that uses volumes

stark pack register examples/bundle_volume_counter.js \
  --name volume-counter -V 0.0.1 -r universal --visibility public

3. Create a pod with the volume mounted

stark pod create volume-counter \
  --node production-node-1 \
  --volume counter-data:/app/data

The pod's pack code can now read and write files under /app/data, and the data persists across restarts.
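
For reference, the pack side of this flow is ordinary file I/O through the context helpers described under Pack Runtime API below. A minimal sketch (the state.json file name and its JSON shape are illustrative, not part of the API):

module.exports.default = async function(context) {
  var stateFile = '/app/data/state.json';
  var state = { runs: 0 };

  // readFile throws when the file does not exist yet,
  // so treat a failed read as the first run.
  try {
    state = JSON.parse(await context.readFile(stateFile));
  } catch (_e) { /* first run */ }

  state.runs++;
  await context.writeFile(stateFile, JSON.stringify(state));
  return state;
};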


CLI Commands

volume create

Create a named volume on a node.

stark volume create <name> --node <name-or-uuid>

| Option | Description |
|---|---|
| --name <name> | Volume name (alternative to positional argument) |
| -n, --node <nameOrId> | (required) Target node (name or UUID) |

Volume names must be lowercase alphanumeric with hyphens, starting/ending with an alphanumeric character, max 63 characters. (DNS-like: ^[a-z0-9]([a-z0-9-]*[a-z0-9])?$)
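
If you generate volume names programmatically, the same rule can be checked client-side before calling the CLI. A small sketch using the pattern above:

var VOLUME_NAME_RE = /^[a-z0-9]([a-z0-9-]*[a-z0-9])?$/;

// Mirrors the documented rule: lowercase alphanumeric plus hyphens,
// starts and ends alphanumeric, at most 63 characters.
function isValidVolumeName(name) {
  return name.length <= 63 && VOLUME_NAME_RE.test(name);
}

isValidVolumeName('counter-data'); // true
isValidVolumeName('-bad-name');    // false (leading hyphen)
isValidVolumeName('Counter');      // false (uppercase)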

volume list

List volumes, optionally filtered by node.

stark volume list [--node <name-or-uuid>]

Alias: stark volume ls

| Option | Description |
|---|---|
| -n, --node <nameOrId> | Filter by node (name or UUID) |

volume download

Download volume contents as a tar archive.

stark volume download <name> --node <name-or-uuid> --output ./backup.tar

| Option | Description |
|---|---|
| --name <name> | Volume name (alternative to positional argument) |
| -n, --node <nameOrId> | (required) Node where the volume resides |
| -o, --output <path> | (required) Output file path |

Mounting Volumes on Pods and Services

Pod creation

Use --volume (repeatable) when creating a pod:

stark pod create my-pack \
  --node production-node-1 \
  --volume data:/app/data \
  --volume logs:/app/logs

Format: <volume-name>:<mount-path>

Service creation

Services propagate volume mounts to every replica pod. Because volumes are node-local, --node is required when --volume is specified:

stark service create my-svc \
  --pack shared-log --replicas 3 \
  --node production-node-1 \
  --volume shared-log:/app/logs

All three replicas will mount the same shared-log volume at /app/logs, enabling shared storage across the service.


Pack Runtime API

When a pod has volume mounts, the orchestrator injects file I/O helpers into the pack context:

context.volumeMounts

context.volumeMounts: Array<{ name: string; mountPath: string }> | undefined

An array of volume mount descriptors the pod was created with. Use this to detect whether a volume is available.
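
For example, a pack that degrades gracefully when run without a volume can look up its mount first. The findMountPath helper here is illustrative, not provided by the runtime:

// Illustrative helper: resolve the mount path for a named volume, if any.
function findMountPath(context, volumeName) {
  var mounts = context.volumeMounts || [];
  for (var i = 0; i < mounts.length; i++) {
    if (mounts[i].name === volumeName) return mounts[i].mountPath;
  }
  return null;
}

module.exports.default = async function(context) {
  var dataDir = findMountPath(context, 'counter-data');
  if (!dataDir) return { persistent: false }; // run without persistence
  // ... read and write under dataDir ...
};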

context.readFile(path)

context.readFile(filePath: string): Promise<string>

Read a file from a mounted volume. The path must start with one of the mountPath prefixes in context.volumeMounts. Throws if the path is not inside a mounted volume or the file does not exist.

context.writeFile(path, content)

context.writeFile(filePath: string, content: string): Promise<void>

Write (overwrite) a file on a mounted volume. Intermediate directories are created automatically. Throws if the path is not inside a mounted volume.

context.appendFile(path, content)

context.appendFile(filePath: string, content: string): Promise<void>

Append to a file on a mounted volume (available in some runtimes).
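
Because appendFile is not guaranteed on every runtime, a portable pack can feature-detect it and fall back to read-then-write. A sketch (note the fallback is not atomic if several replicas append concurrently):

// Append a line, preferring context.appendFile when the runtime
// provides it, otherwise falling back to readFile + writeFile.
async function appendLine(context, filePath, line) {
  if (context.appendFile) {
    return context.appendFile(filePath, line + '\n');
  }
  var existing = '';
  try {
    existing = await context.readFile(filePath);
  } catch (_e) { /* file does not exist yet */ }
  return context.writeFile(filePath, existing + line + '\n');
}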

Security

File I/O is sandboxed — only paths under a mounted volume's mountPath are accessible. Attempting to read or write outside a mount throws an error:

Error: Path '/etc/passwd' is not inside any mounted volume
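
Packs that handle user-supplied paths can treat this as an expected failure mode rather than pre-validating paths themselves. The safeRead wrapper below is illustrative:

// Wrap reads of untrusted paths; the orchestrator rejects anything
// outside a mounted volume with an error like the one above.
async function safeRead(context, filePath) {
  try {
    return await context.readFile(filePath);
  } catch (err) {
    console.warn('read rejected: ' + err.message);
    return null;
  }
}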

Examples

Volume Counter

A pod that persists an incrementing counter to a volume, surviving restarts:

module.exports.default = async function(context) {
  var volumePath = '/app/data';
  var counterFile = volumePath + '/counter.json';
  var counter = 0;

  // Restore counter from volume
  if (context.readFile) {
    try {
      var raw = await context.readFile(counterFile);
      counter = JSON.parse(raw).count || 0;
    } catch (_e) { /* first run */ }
  }

  // Increment and persist
  while (!context.lifecycle?.isShuttingDown) {
    counter++;
    if (context.writeFile) {
      await context.writeFile(counterFile,
        JSON.stringify({ count: counter, updatedAt: new Date().toISOString() }));
    }
    await new Promise(r => setTimeout(r, 1000));
  }

  return { lastCount: counter };
};

Run it:

stark volume create counter-data --node production-node-1
stark pack register examples/bundle_volume_counter.js \
  --name volume-counter -V 0.0.1 -r universal --visibility public
stark pod create volume-counter \
  --node production-node-1 \
  --volume counter-data:/app/data

Shared Volume Log

Multiple service replicas sharing an append-only log file via a common volume:

module.exports.default = async function(context) {
  var logFile = '/app/logs/shared.log';
  var entryCount = 0;

  while (!context.lifecycle?.isShuttingDown) {
    entryCount++;
    var entry = new Date().toISOString() +
      ' [pod:' + context.podId + '] Entry #' + entryCount;

    // Append to shared log
    if (context.appendFile) {
      await context.appendFile(logFile, entry + '\n');
    }

    // Periodically read all entries from all replicas
    if (entryCount % 5 === 0 && context.readFile) {
      var contents = await context.readFile(logFile);
      console.log('Total shared entries: ' +
        contents.split('\n').filter(Boolean).length);
    }

    await new Promise(r => setTimeout(r, 2000));
  }
};

Run it:

stark volume create shared-log --node production-node-1
stark pack register examples/bundle_volume_shared_log.js \
  --name shared-log -V 0.0.1 -r universal --visibility public
stark service create shared-log-svc \
  --pack shared-log --replicas 3 \
  --node production-node-1 \
  --volume shared-log:/app/logs

Database Schema

Volumes are stored in the volumes table:

| Column | Type | Description |
|---|---|---|
| id | UUID | Primary key |
| name | TEXT | Volume name |
| node_id | UUID | Node FK (CASCADE on delete) |
| created_at | TIMESTAMPTZ | Creation timestamp |
| updated_at | TIMESTAMPTZ | Last update timestamp |

Constraints:

  • UNIQUE(name, node_id) — volume name is unique per node
  • node_id references nodes(id) with ON DELETE CASCADE

Pods and services store their mounts in a volume_mounts JSONB column containing an array of { name, mountPath } objects.
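
For instance, the pod created with the two --volume flags in the earlier example would store something like:

[
  { "name": "data", "mountPath": "/app/data" },
  { "name": "logs", "mountPath": "/app/logs" }
]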


Validation Rules

| Field | Rule |
|---|---|
| Volume name | Lowercase alphanumeric + hyphens, starts/ends alphanumeric, 1–63 chars |
| Mount path | Absolute path (/...), alphanumeric with _, ., /, - |
| Mounts per pod | Max 20 |
| Duplicate mount paths | Not allowed within the same pod/service |
| Node ID | Valid UUID |

Architecture Notes

  • Volumes are control-plane metadata — the orchestrator tracks which volumes exist and where, but actual file storage is handled by the node runtime.
  • On Node.js runtimes, volume data is stored on the local filesystem under <agent-cwd>/volumes/<volume-name>/. Files are read/written synchronously via node:fs.
  • On browser runtimes, volume data is stored in an IndexedDB database named stark-volumes with a files object store. Keys follow the pattern stark-volumes/<volume-name>/<relative-path> (see the sketch after this list).
  • Volume I/O closures cannot be serialized across IPC/postMessage boundaries, so they are recreated inside workers using the serializable volumeMounts metadata.
  • Volume download (stark volume download) is a V1 placeholder — full content retrieval from remote nodes is planned for a future release.
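
The browser backend is internal, but to make the key scheme above concrete, a write under that layout might look roughly like this. A sketch only, assuming the database and object store already exist; it is not the actual implementation:

// Hypothetical sketch: store file content in the 'files' object store
// of the 'stark-volumes' IndexedDB database, keyed as
// stark-volumes/<volume-name>/<relative-path>.
function writeVolumeFile(volumeName, relativePath, content) {
  return new Promise(function(resolve, reject) {
    var open = indexedDB.open('stark-volumes');
    open.onerror = function() { reject(open.error); };
    open.onsuccess = function() {
      var tx = open.result.transaction('files', 'readwrite');
      var key = 'stark-volumes/' + volumeName + '/' + relativePath;
      tx.objectStore('files').put(content, key);
      tx.oncomplete = function() { resolve(); };
      tx.onerror = function() { reject(tx.error); };
    };
  });
}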

Limitations

  • Volumes are node-local only. There is no cross-node replication or migration.
  • Services using --volume must specify --node to pin all replicas to the same node.
  • Browser-runtime volumes are scoped to the browser's IndexedDB and are subject to browser storage limits and eviction policies (see the quota check after this list).
  • The appendFile helper is not universally available across all runtime implementations.
  • Volume download returns a placeholder tar archive in V1.
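
In browser runtimes you can see how close you are to those limits with the standard StorageManager API, which is independent of stark:

// IndexedDB-backed volumes count against the browser's storage quota
// and may be evicted under storage pressure.
async function logStorageBudget() {
  if (navigator.storage && navigator.storage.estimate) {
    var est = await navigator.storage.estimate();
    console.log('storage: ' + est.usage + ' of ' + est.quota + ' bytes used');
  }
}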
