s2.dev is a serverless datastore for real-time, streaming data.
This repository contains:
- s2-cli - The official S2 command-line interface
- s2-lite - An open source, self-hostable server implementation of the S2 API
## Installation

Homebrew:

```shell
brew install s2-streamstore/s2/s2
```

Cargo:

```shell
cargo install --locked s2-cli
```

Install script:

```shell
curl -fsSL s2.dev/install.sh | bash
```

Or specify a version with `VERSION=x.y.z` before the command. See all releases.
Docker:

```shell
docker pull ghcr.io/s2-streamstore/s2
```

s2-lite is available as the `s2 lite` subcommand. It's a self-hostable server implementation of the S2 API.
It uses SlateDB as its storage engine, which relies entirely on object storage for durability.
It is easy to run `s2 lite` against object stores like AWS S3 and Tigris. It is a single-node binary with no other external dependencies. Just like s2.dev, data is always durable on object storage before being acknowledged or returned to readers.

You can also omit the `--bucket` flag entirely, which makes the server operate purely in-memory. This is great for integration tests involving S2.
> [!NOTE]
> Point the S2 CLI or SDKs at your lite instance like this:
>
> ```shell
> export S2_ACCOUNT_ENDPOINT="http://localhost:8080"
> export S2_BASIN_ENDPOINT="http://localhost:8080"
> export S2_ACCESS_TOKEN="redundant"
> ```

Here's how you can run in-memory without any external dependency:
```shell
# Using Docker
docker run -p 8080:80 ghcr.io/s2-streamstore/s2 lite

# Or directly with the CLI
s2 lite --port 8080
```

AWS S3 bucket example:
```shell
docker run -p 8080:80 \
  -e AWS_PROFILE=${AWS_PROFILE} \
  -v ~/.aws:/root/.aws:ro \
  ghcr.io/s2-streamstore/s2 lite \
  --bucket ${S3_BUCKET} \
  --path s2lite
```

Static credentials example (Tigris, R2, etc.):
```shell
docker run -p 8080:80 \
  -e AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID} \
  -e AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY} \
  -e AWS_ENDPOINT_URL_S3=${AWS_ENDPOINT_URL_S3} \
  ghcr.io/s2-streamstore/s2 lite \
  --bucket ${S3_BUCKET} \
  --path s2lite
```

Let's make sure the server is ready:
```shell
while ! curl -sf ${S2_ACCOUNT_ENDPOINT}/health -o /dev/null; do echo Waiting...; sleep 2; done && echo Up!
```

Install the CLI (see Installation above), or upgrade if `s2 --version` is older than 0.26.
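The same readiness check is often useful inside integration-test setup code. Here is a minimal Python sketch of the polling loop above, assuming only that `/health` returns 200 once the server is up (pass whatever you exported as `S2_ACCOUNT_ENDPOINT`):

```python
import time
import urllib.error
import urllib.request

def wait_until_ready(endpoint: str, timeout: float = 30.0) -> bool:
    """Poll {endpoint}/health until it returns 200 or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(f"{endpoint}/health", timeout=2) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # server not accepting connections yet; retry
        time.sleep(0.5)
    return False
```

Call it once in your test fixture before creating basins or streams, and fail fast if it returns `False`.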
Let's create a basin with auto-creation of streams enabled:
```shell
s2 create-basin liteness --create-stream-on-append --create-stream-on-read
```

Test your performance:
```shell
s2 bench liteness --target-mibps 10 --duration 5s --catchup-delay 0s
```

Now let's try streaming sessions. In one or more new terminals (make sure you re-export the env vars noted above):
```shell
s2 read s2://liteness/starwars 2> /dev/null
```

Now, back in your original terminal, let's write to the stream:
```shell
nc starwars.s2.dev 23 | s2 append s2://liteness/starwars
```

`/health` will return 200 on success for readiness and liveness checks.
`/metrics` returns Prometheus text format.

Use `SL8_`-prefixed environment variables, e.g.:
```shell
# Defaults to 50ms for remote bucket / 5ms in-memory
SL8_FLUSH_INTERVAL=10ms
```

- HTTP serving is implemented using axum
- Each stream corresponds to a Tokio task called `streamer` that owns the current `tail` position, serializes appends, and broadcasts acknowledged records to followers
- Appends are pipelined to improve performance against high-latency object storage
  - Temporarily disabled by default; you can try it with `S2LITE_PIPELINE=true`
- `lite::backend::kv::Key` documents the data modeling in SlateDB
- Deletion is not fully plumbed up yet
- Pipelining needs to be made safe and default #48
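The per-stream `streamer` task described above can be pictured with a short, language-neutral sketch. This Python asyncio version is illustrative only, not the actual Rust implementation: a single task owns `tail`, appends are serialized through a queue, and acknowledged records fan out to follower queues.

```python
import asyncio
from dataclasses import dataclass, field

@dataclass
class Streamer:
    """Illustrative model: one task per stream owns `tail`, serializes
    appends via a queue, and broadcasts acked records to followers."""
    tail: int = 0
    appends: asyncio.Queue = field(default_factory=asyncio.Queue)
    followers: list = field(default_factory=list)

    def subscribe(self) -> asyncio.Queue:
        q = asyncio.Queue()
        self.followers.append(q)
        return q

    async def run(self) -> None:
        while True:
            record, done = await self.appends.get()
            if record is None:       # shutdown sentinel
                break
            seq = self.tail
            self.tail += 1           # only this task ever mutates tail
            # (in the real server, the record is durable on object
            #  storage before this point)
            for q in self.followers:
                q.put_nowait((seq, record))  # broadcast to readers
            done.set_result(seq)     # acknowledge the append

async def demo():
    s = Streamer()
    task = asyncio.create_task(s.run())
    reader = s.subscribe()
    acks = []
    for payload in (b"hello", b"world"):
        done = asyncio.get_running_loop().create_future()
        await s.appends.put((payload, done))
        acks.append(await done)
    await s.appends.put((None, None))
    await task
    got = [reader.get_nowait() for _ in range(reader.qsize())]
    return acks, got

acks, got = asyncio.run(demo())
```

Because every append flows through one owner task, sequence numbers are assigned without locks and followers observe records in acknowledgment order.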
- CLI ✅ v0.26+
- TypeScript SDK ✅ v0.22+
- Go SDK ✅ v0.11+
- Rust SDK ✅ v0.22+
- Python 🚧 needs to be migrated to v1 API
- Java 🚧 needs to be migrated to v1 API
> [!TIP]
> Complete specs are available: OpenAPI for the REST-ful core, Protobuf definitions, and S2S, the streaming session protocol.
Fully supported:

- `/basins`
- `/streams`
- `/streams/{stream}/records`
> [!IMPORTANT]
> Unlike the cloud service, where the basin is implicit as a subdomain, `/streams/*` requests must specify the basin using the `S2-Basin` header. The SDKs take care of this automatically.
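If you call the REST API directly rather than through an SDK, the `S2-Basin` header must be attached to every `/streams/*` request yourself. A hedged sketch with `urllib` — the basin/stream names are the quickstart's placeholders, the JSON payload shape is illustrative rather than the exact records schema, and the request is only constructed here, not sent:

```python
import json
import urllib.request

def build_append_request(endpoint: str, basin: str, stream: str,
                         body: dict) -> urllib.request.Request:
    """Build (but do not send) a records request against s2-lite."""
    return urllib.request.Request(
        url=f"{endpoint}/streams/{stream}/records",
        data=json.dumps(body).encode(),
        headers={
            "Content-Type": "application/json",
            # Required by s2-lite; implied by the subdomain on the cloud service.
            "S2-Basin": basin,
        },
        method="POST",
    )

req = build_append_request(
    "http://localhost:8080", "liteness", "starwars",
    {"records": [{"body": "hello"}]},  # illustrative payload, not the exact schema
)
```

Consult the OpenAPI spec mentioned above for the authoritative request and response bodies.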
Not supported:

- `/access-tokens` #28
- `/metrics`

