# Glossary

A comprehensive glossary of terms used in Stark Orchestrator.
## Pack

A bundled, versioned JavaScript package that can be deployed to nodes. Packs are immutable once registered: a version 1.0.0 pack will always contain the same code.

- Created via: `stark pack bundle` then `stark pack register`
- Has: name, version, runtime type, visibility
- Analogous to: Docker image, npm package
## Pod

A running instance of a pack on a specific node. Pods are ephemeral and track execution state.

- Created via: `stark pod create` or by a Service
- Has: status, assigned node, pack reference, labels
- Lifecycle: Pending → Scheduled → Running → Succeeded/Failed
- Analogous to: Docker container, Kubernetes pod
## Node

An execution environment that runs pods. Nodes can be Node.js servers or browsers.

- Types: Node.js runtime, Browser runtime
- Has: labels, taints, resources (CPU, memory), status
- Connects via: WebSocket
- Analogous to: Kubernetes node
## Service

A declarative resource that manages pods automatically. Services ensure the desired state is maintained.

- Created via: `stark service create`
- Features: auto-healing, scaling, rolling updates
- Modes: replica-based, DaemonSet (replicas=0)
## Secret

An encrypted key-value store for sensitive data injected into pods at runtime. Secrets are encrypted at rest using AES-256-GCM and never exposed in CLI output or logs.

- Created via: `stark secret create`
- Types: `opaque` (generic), `tls` (certificate + key), `docker-registry` (registry credentials)
- Injection modes: environment variables (`env`) or volume mounts (`volume`)
- Referenced by services via: `--secret <name>`
## Volume

A node-local named persistent storage unit managed by the orchestrator. Volumes persist data across pod restarts and can be shared by multiple pods on the same node.

- Created via: `stark volume create`
- Has: name, nodeId, createdAt, updatedAt
- Uniqueness: name is unique per node
- Storage backend: file system (Node.js) or IndexedDB (browser)
- See: Volumes & Persistent Storage
## Volume Mount

A mapping from a volume name to a mount path inside the pack runtime. Specified as `<name>:<mount-path>` when creating pods or services.

- Example: `--volume counter-data:/app/data`
- Pack code accesses volume data via `context.readFile()` / `context.writeFile()`
- Paths outside mounted volumes are rejected (sandboxed I/O)
- Max 20 mounts per pod/service
- See: Volumes & Persistent Storage
## PodGroup

An ephemeral, overlapping collection of pods grouped by a locally-computed groupId. PodGroups are created lazily on the first join and garbage-collected when all members expire. Unlike Services (which are persistent and managed by the reconciler), PodGroups are in-memory, TTL-scoped, and self-managed by pods.

- Created via: `context.ephemeral.joinGroup(groupId)`
- Has: groupId, members (each with their own TTL), createdAt, updatedAt
- Stored in: in-memory `PodGroupStore` (not persisted to database)
- Use for: presence, signalling, contact tracing, ephemeral fan-out queries
- See: PodGroups & Ephemeral Data Plane
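Lazy creation, per-member TTLs, and garbage collection can be captured in a few lines. The following is a minimal sketch in the spirit of `PodGroupStore`; the class name, method names, and eager-on-read expiry strategy are assumptions, not Stark's API:

```javascript
// In-memory, TTL-scoped group membership. An injectable clock makes
// expiry deterministic for testing.
class GroupStore {
  constructor(now = () => Date.now()) {
    this.now = now;
    this.groups = new Map(); // groupId -> Map<podId, expiresAt>
  }

  join(groupId, podId, ttlMs) {
    // Lazy creation: the group exists only once a pod joins it.
    if (!this.groups.has(groupId)) this.groups.set(groupId, new Map());
    this.groups.get(groupId).set(podId, this.now() + ttlMs);
  }

  members(groupId) {
    const group = this.groups.get(groupId);
    if (!group) return [];
    const t = this.now();
    for (const [podId, expiresAt] of group) {
      if (expiresAt <= t) group.delete(podId); // expire stale members
    }
    if (group.size === 0) this.groups.delete(groupId); // GC empty group
    return [...group.keys()];
  }
}

let clock = 0;
const store = new GroupStore(() => clock);
store.join("room-1", "pod-a", 1000);
store.join("room-1", "pod-b", 5000);

clock = 2000; // pod-a's TTL has elapsed
const afterExpiry = store.members("room-1");

clock = 6000; // all TTLs elapsed; the group is garbage-collected
const afterAll = store.members("room-1");
```

Expiring members on read keeps the store free of background timers, at the cost of stale entries lingering until the next query.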
## Ephemeral Data Plane

The transient communication layer for pod-to-pod state that lives outside the persistent control plane. Consists of the PodGroupStore (membership) and EphemeralDataPlane (high-level API). All state is in-memory with automatic TTL-based expiration.

- Contrast with: control plane (ServiceRegistry, PodStore), which is persistent and database-backed
- API: `context.ephemeral` injected into packs
- See: PodGroups & Ephemeral Data Plane
## Ephemeral Query

A lightweight, non-persistent probe sent to one or more pods for transient state. Unlike ServiceRequest (which is a full HTTP-like RPC via the service mesh), ephemeral queries are fan-out reads that bypass routing, policies, and WebRTC handshakes.

- API: `plane.queryPods(podIds, path, query)`
- Returns: aggregated `EphemeralQueryResult` with per-pod responses and timeout tracking
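The aggregation half of a fan-out read can be sketched as follows. Responder functions stand in for live pods (a `undefined` reply simulates a timeout), and the result shape is an assumption modeled on "per-pod responses and timeout tracking" — it is not the real `EphemeralQueryResult`:

```javascript
// Hypothetical fan-out aggregation in the spirit of queryPods().
function queryPods(podIds, path, respond) {
  const responses = {};
  const timedOut = [];
  for (const podId of podIds) {
    const reply = respond(podId, path); // undefined = no reply in time
    if (reply === undefined) timedOut.push(podId);
    else responses[podId] = reply;
  }
  return { path, responses, timedOut, respondedCount: Object.keys(responses).length };
}

// Probe three pods for presence; pod-c never answers.
const result = queryPods(["pod-a", "pod-b", "pod-c"], "/presence", (podId) =>
  podId === "pod-c" ? undefined : { podId, online: true }
);
```

The caller gets partial results plus an explicit list of non-responders, rather than the all-or-nothing failure of a single RPC.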
## Namespace

An isolated resource boundary for organizing resources. Provides logical separation without infrastructure isolation.

- Default: `default`
- Use for: environment separation (dev/staging/prod), team isolation
## Label

A key-value pair attached to resources for identification and selection.

```
--label env=production
--label tier=frontend
```

## Selector

A query that matches resources by their labels.

```
--node-selector env=production
```

## Taint

A node attribute that repels pods unless they have a matching toleration. Used for dedicated workloads.

```
# On node
--taint gpu=dedicated:NoSchedule
# Format: key=value:effect
# Effects: NoSchedule, PreferNoSchedule, NoExecute
```

## Toleration

A pod attribute that allows scheduling on tainted nodes.

```
# On pod
--toleration gpu=dedicated:NoSchedule
```

## Affinity

Rules that attract pods to certain nodes based on criteria.
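How selectors and taints combine when filtering candidate nodes can be sketched as two predicates: the pod's selector must match the node's labels, and every `NoSchedule` taint must be matched by a toleration. The data shapes and function names below are assumptions; Stark's scheduler may differ:

```javascript
// A selector matches when every requested label is present on the node.
function matchesSelector(nodeLabels, selector) {
  return Object.entries(selector).every(([k, v]) => nodeLabels[k] === v);
}

// Every NoSchedule taint must be covered by a matching toleration.
function toleratesTaints(taints, tolerations) {
  return taints
    .filter((t) => t.effect === "NoSchedule")
    .every((t) => tolerations.some((tol) => tol.key === t.key && tol.value === t.value));
}

function schedulable(node, pod) {
  return matchesSelector(node.labels, pod.selector) && toleratesTaints(node.taints, pod.tolerations);
}

// A production GPU node repels ordinary pods but accepts tolerating ones.
const node = {
  labels: { env: "production" },
  taints: [{ key: "gpu", value: "dedicated", effect: "NoSchedule" }],
};
const plainPod = { selector: { env: "production" }, tolerations: [] };
const gpuPod = {
  selector: { env: "production" },
  tolerations: [{ key: "gpu", value: "dedicated" }],
};

const plainFits = schedulable(node, plainPod);
const gpuFits = schedulable(node, gpuPod);
```

Note the asymmetry: a selector *attracts* a pod to labeled nodes, while a taint *repels* all pods except those that explicitly tolerate it.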
## Resource Requests

The minimum resources a pod requires to run.

```
--cpu 500    # 500 millicores (0.5 CPU)
--memory 256 # 256 MB RAM
```

## Priority

A numeric value (0-1000) that determines scheduling order and preemption eligibility. Higher-priority pods are scheduled first.

```
--priority 200
```

## Role

A set of permissions assigned to users.
| Role | Description |
|---|---|
| `admin` | Full access to all resources |
| `user` | Self-service, manage own resources |
| `node` | Node agents, update assigned pods |
| `viewer` | Read-only access |
## Pack Visibility

Controls who can deploy a pack.

| Visibility | Who Can Deploy |
|---|---|
| `private` | Only the owner |
| `public` | Any user |
## Node.js Runtime

The server-side runtime for running packs in Node.js environments.

- Use for: API servers, workers, background jobs
- Runtime identifier: `node`

## Browser Runtime

The client-side runtime for running packs in web browsers.

- Use for: UI applications, client-side logic
- Runtime identifier: `browser`
- Requires: self-contained bundle with inlined assets
## Replica

An individual pod instance managed by a service.

```
--replicas 3 # Create 3 pod instances
```

## DaemonSet Mode

A service mode where replicas=0 means deploy to all matching nodes.

```
--replicas 0 # Deploy to all nodes matching selectors
```

## Rolling Update

A service update strategy where new pods are created before old pods are removed.

## Rollback

Reverting to a previous pack version when the current version fails.

```
stark pod rollback <pod-id> --ver 1.0.0
```

## Follow Latest

A service setting that automatically updates to new pack versions.

```
--follow-latest
```

## Pod States

| State | Description |
|---|---|
| Pending | Created, waiting for scheduling |
| Scheduled | Assigned to node, waiting for execution |
| Running | Actively executing |
| Succeeded | Completed successfully |
| Failed | Execution failed |
| Stopping | Graceful shutdown in progress |
| Stopped | Terminated |
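The lifecycle can be expressed as a transition table. The exact set of legal transitions is inferred from the states and the Pending → Scheduled → Running → Succeeded/Failed lifecycle above, so treat it as an assumption rather than Stark's definitive state machine:

```javascript
// Legal pod state transitions (inferred). Terminal states have no exits.
const TRANSITIONS = {
  Pending: ["Scheduled", "Failed"],
  Scheduled: ["Running", "Failed"],
  Running: ["Succeeded", "Failed", "Stopping"],
  Stopping: ["Stopped"],
  Succeeded: [],
  Failed: [],
  Stopped: [],
};

function canTransition(from, to) {
  return (TRANSITIONS[from] || []).includes(to);
}
```

A guard like this rejects impossible updates (e.g. a node reporting a `Succeeded` pod as `Running` again) instead of silently corrupting state.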
## Node States

| State | Description |
|---|---|
| Ready | Online and accepting pods |
| NotReady | Not accepting new pods |
| Offline | Disconnected from orchestrator |
## Heartbeat

A periodic signal from nodes to the orchestrator indicating the node is alive.
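A typical liveness check derived from heartbeats marks a node Offline once it has missed several consecutive intervals. The interval length and miss threshold below are illustrative assumptions, not Stark's configured values:

```javascript
// Hypothetical heartbeat staleness check.
const HEARTBEAT_INTERVAL_MS = 10_000; // assumed beat interval
const MISSED_BEATS_BEFORE_OFFLINE = 3; // assumed tolerance

function nodeStatus(lastHeartbeatAt, now) {
  const elapsed = now - lastHeartbeatAt;
  return elapsed > HEARTBEAT_INTERVAL_MS * MISSED_BEATS_BEFORE_OFFLINE ? "Offline" : "Ready";
}
```

Tolerating a few missed beats avoids flapping a node Offline on a single dropped WebSocket message.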
## REST API

The HTTP API for managing resources. Available at `https://<host>/api/`.
## WebSocket API

The real-time API for node connections and live updates. Available at `wss://<host>/ws`.