- Install Homebrew:

  ```shell
  /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
  ```

- Install PostgreSQL:

  ```shell
  brew install postgresql@17
  ```
- Start PostgreSQL:

  ```shell
  brew services start postgresql@17
  ```
- Install Rust:

  ```shell
  brew install rustup
  rustup-init
  ```
- The backend uses SQLx to interact with PostgreSQL. Install the SQLx CLI to run migrations and perform other administrative operations:

  ```shell
  cargo install sqlx-cli --no-default-features --features rustls,postgres
  ```
- Configure SQLx with the `DATABASE_URL`:

  ```shell
  export DATABASE_URL=postgresql://localhost/koso
  ```

  Also, add the environment variable to the appropriate profile file (`~/.profile`, `~/.bash_profile`, `~/.bashrc`, `~/.zshrc`, `~/.zshenv`) so you don't have to set it every time.
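The profile update above can be scripted idempotently. A small sketch, assuming zsh (swap in your own profile file):

```shell
# Persist DATABASE_URL in the shell profile (~/.zshrc assumed here).
profile="$HOME/.zshrc"
line='export DATABASE_URL=postgresql://localhost/koso'
# Append only if the exact line isn't already present, to keep the profile clean.
grep -qxF "$line" "$profile" 2>/dev/null || echo "$line" >> "$profile"
```

Re-running the snippet is safe: the `grep -qxF` guard prevents duplicate entries.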
- Create the database and run the DB migrations. In the `backend` folder, run:

  ```shell
  sqlx database create
  sqlx migrate run
  ```
- Install Node.js:

  ```shell
  brew install node pnpm
  ```
- Install the Stripe CLI:

  ```shell
  brew install stripe/stripe-cli/stripe
  stripe login
  ```
- Run the most recent DB migrations. In the `backend` folder, run:

  ```shell
  sqlx migrate run
  ```
- Install the latest frontend dependencies. In the `frontend` folder, run:

  ```shell
  pnpm install
  ```
- Start the backend server. In the `backend` folder, run:

  ```shell
  cargo run
  ```
- Start the frontend server. In the `frontend` folder, run:

  ```shell
  pnpm run dev
  ```
- Navigate to http://localhost:5173/
The Koso Workspace is configured for development in VS Code.
The following plugins are recommended:
- Rust Analyzer
- Even Better TOML
- Svelte for VS Code
- Tailwind CSS IntelliSense
- Prettier - Code Formatter
- ESLint
- Vitest
- Playwright Test for VSCode
Add a migration:

```shell
sqlx migrate add some-meaningful-name
```

Run migrations:

```shell
sqlx migrate run
```

Once a server has been started, you can interact with it at http://localhost:3000. There are example requests in koso.http which you can run with REST Client.
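For reference, `sqlx migrate add` creates timestamped `.sql` files under a `migrations/` directory (for Koso, inside `backend`), and `sqlx migrate run` applies them in timestamp order. A scratch illustration with a hypothetical file name and table, not one of Koso's real migrations:

```shell
# Illustrative layout only; real files are created by `sqlx migrate add`.
mkdir -p migrations
cat > migrations/20240101000000_some-meaningful-name.sql <<'SQL'
-- Hypothetical example migration.
CREATE TABLE IF NOT EXISTS example (id BIGINT PRIMARY KEY);
SQL
ls migrations
```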
Tired of manually restarting your server after editing the code? Use systemfd and cargo-watch to automatically recompile and restart the server whenever the source code changes. It uses listenfd to be able to migrate connections from an old version of the app to a newly-compiled version.
One-time setup:

```shell
cargo install cargo-watch systemfd
```

Running:

```shell
systemfd --no-pid -s http::3000 -- cargo watch -x run
```

This setup is similar to how the app runs in production: a single server serves the API, WebSocket, and static frontend files.
- In the `frontend` folder, run:

  ```shell
  pnpm run build
  ```

- In the `backend` folder, run the server:

  ```shell
  systemfd --no-pid -s http::3000 -- cargo watch -x run
  ```

This will create a `frontend/build` folder. The `backend/static` folder is symlinked to that folder, so the backend serves the compiled frontend directly.
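The symlink arrangement can be sketched in a scratch directory to see how builds flow through to the backend. The `demo` paths below are illustrative, not the repo's real layout:

```shell
# Scratch demo of the backend/static -> frontend/build symlink described above.
mkdir -p demo/frontend/build demo/backend
ln -sfn ../frontend/build demo/backend/static
# Files written to frontend/build are visible through backend/static.
echo "app" > demo/frontend/build/index.html
cat demo/backend/static/index.html   # prints "app"
```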
Playwright tests, i.e. integration tests, flex the entire system end-to-end via the frontend testing framework Playwright. The tests run as part of CI, but you may also run them locally.
Make changes and run the tests quickly without rebuilding the world. Start a frontend and backend server in the usual manner (see above) and run the tests in VS Code using the Playwright extension or via the CLI:

```shell
pnpm exec playwright test
```

Follow "Running a Built Frontend with the Backend" above to build the frontend and run the backend. Then run the tests:

```shell
PW_SERVER_PORT=3000 pnpm exec playwright test
```

This is what our CI workflows do. Playwright will build the frontend and run a backend for the duration of the tests:

```shell
CI=true pnpm exec playwright test
```

Build and run the docker image defined in Dockerfile.
- Download and install Docker: https://www.docker.com/products/docker-desktop/
- Build the image:

  ```shell
  DOCKER_DEFAULT_PLATFORM=linux/amd64 docker build -t ghcr.io/kosolabs/koso .
  ```
Configure the DATABASE_URL.
export DATABASE_URL=postgresql://localhost/kosoAlso, add the environment variable to the appropriate profile file (
~/.profile,~/.bash_profile,~/.bashrc,~/.zshrc,~/.zshenv) so you don't have to run it every time. -
- Run database migrations:

  ```shell
  DOCKER_DEFAULT_PLATFORM=linux/amd64 docker run \
    --env DATABASE_URL \
    --network=host \
    --rm -it \
    ghcr.io/kosolabs/koso:latest \
    "./sqlx" migrate run
  ```

- Run the server:

  ```shell
  DOCKER_DEFAULT_PLATFORM=linux/amd64 docker run \
    --env KOSO_ENV=dev \
    -v $HOME/.secrets:/.secrets \
    --network=host \
    --rm -it \
    ghcr.io/kosolabs/koso:latest
  ```
- Install docker:

  ```shell
  sudo su
  apt update
  apt install ca-certificates curl gnupg apt-transport-https gpg
  curl -fsSL https://download.docker.com/linux/debian/gpg | gpg --dearmor -o /usr/share/keyrings/docker.gpg
  echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker.gpg] https://download.docker.com/linux/debian bookworm stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
  apt update
  apt install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin docker-compose
  systemctl is-active docker
  echo $PULL_TOKEN | docker login ghcr.io -u $USER --password-stdin
  ```
We use a Github Environment configured on the Deploy workflow which exposes a KOSO_KEY to access the server.
- Add 172.17.0.1 to `listen_addresses` in /etc/postgresql/17/main/postgresql.conf:

  ```
  listen_addresses = 'localhost,172.17.0.1'
  ```

- Add an entry to /etc/postgresql/17/main/pg_hba.conf:

  ```
  # Allow docker bridge
  host all all 172.0.0.0/8 scram-sha-256
  ```
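After editing the real files, PostgreSQL must be restarted for a `listen_addresses` change to take effect (a reload is sufficient for pg_hba.conf changes alone). A scratch illustration of the appended entry, using a demo file rather than the live config:

```shell
# Scratch illustration; on the server, edit /etc/postgresql/17/main/pg_hba.conf.
conf=pg_hba.conf.demo
printf '%s\n' '# Allow docker bridge' 'host all all 172.0.0.0/8 scram-sha-256' >> "$conf"
cat "$conf"
# On the server, apply the changes (listen_addresses needs a full restart):
#   sudo systemctl restart postgresql
```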
Rather than using our personal keys, and since we only need read access, we use Github Deploy Keys to authenticate with Github from our server:

```shell
ssh-keygen -t ed25519 -C "koso-github-read-key" -f /root/.ssh/koso_github_read_id_ed25519 -N ''
eval "$(ssh-agent -s)"
cat >>/root/.ssh/config <<EOL
Host github.com
  AddKeysToAgent yes
  IdentityFile ~/.ssh/koso_github_read_id_ed25519
EOL
# MANUAL - add a new deploy key with the public key (e.g. ssh-ed25519 KEY) to https://github.com/kosolabs/koso/settings/keys/new
cat /root/.ssh/koso_github_read_id_ed25519.pub
# Expect a greeting from Github; note this command exits non-zero even on success.
ssh -T git@github.com
```
psql_backup.sh exports backups of our PostgreSQL DB to cloud storage.
The script is run by a daily cron, and logs are available at koso-psql-backups/backups.log.
Backups are stored in a GCP cloud storage bucket named koso-psql-backups. The bucket has
soft deletion and object versioning configured, along with lifecycle rules to
auto-delete objects after 30 days.
Identify the backup to restore in the cloud console and update `backup_name` below with the target object name:

```shell
backup_name=TARGET-backup.sql.gz
```

Download and unzip the backup:

```shell
backup_object=gs://koso-psql-backups/$backup_name
gcloud storage cp --print-created-message $backup_object ./
gzip -dk $backup_name
```

Restore the backup:
```shell
PGPASSWORD=$PSQL_PASSWORD psql \
  --host="$PSQL_HOST" \
  --port="$PSQL_PORT" \
  --dbname="$PSQL_DB" \
  --username="$PSQL_USER" \
  -f "${backup_name%.gz}"
```

The dump is plain SQL, so it is applied with psql against the decompressed file; pg_restore only reads custom- or tar-format archives.

Upgrade Postgres to a new major version. In the example below, from 16 to 17.
- Update the postgres image version from postgres:16 to postgres:17 in ci.yml and merge.

- Install the new version of postgres:

  ```shell
  sudo apt update
  sudo apt install postgresql-17
  pg_lsclusters
  ```
- Backup the cluster just in case:

  ```shell
  pg_dumpall > ~/postgres-dump-$(date -u "+%Y-%m-%dT%H-%M-%S-%3NZ")
  ```
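A note on that filename: `%3N` (millisecond precision) is a GNU date extension, so the timestamp works on the Debian server but not with BSD/macOS date. For illustration:

```shell
# GNU date only: %3N yields milliseconds, e.g. postgres-dump-2025-01-02T03-04-05-678Z
name="postgres-dump-$(date -u "+%Y-%m-%dT%H-%M-%S-%3NZ")"
echo "$name"
```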
- Upgrade the cluster:

  ```shell
  sudo service postgresql stop
  sudo pg_renamecluster 17 main main_pristine
  sudo pg_upgradecluster 16 main
  sudo service postgresql start
  pg_lsclusters
  ```

- Verify the new cluster is working by visiting our app. Look at the backend logs as well for anything suspicious.

  ```shell
  pg_lsclusters
  ```
- Drop the old and transitional clusters:

  ```shell
  sudo pg_dropcluster 16 main --stop
  sudo pg_dropcluster 17 main_pristine --stop
  ```
References:
- https://docs.github.com/en/webhooks/webhook-events-and-payloads
- https://docs.github.com/en/apps/creating-github-apps/registering-a-github-app/using-webhooks-with-github-apps
- https://docs.github.com/en/webhooks/testing-and-troubleshooting-webhooks/testing-webhooks
Install Smee:

```shell
npm install --global smee-client
```

After starting your local server:

- Configure your development webhook secret in koso/.secrets/github/webhook_secret
- Start a new Smee channel: https://smee.io/
- Start smee locally with the new channel:

  ```shell
  smee -u $CHANNEL_URL --port 3000 --path /plugins/github/app/webhook
  ```

- Trigger or redeliver some events
Install the CLI:

```shell
brew install stripe/stripe-cli/stripe
stripe login
```

We use the "Koso Labs Sandbox" Stripe sandbox for testing. Log in to Stripe and switch to the sandbox to find API keys and webhook details. Feel free to create a new sandbox if needed.
After starting your local server:
- Configure your sandbox secret API key in koso/.secrets/stripe/secret_key

- Configure your sandbox webhook secret:

  ```shell
  stripe listen --api-key $(cat koso/.secrets/stripe/secret_key) --print-secret > koso/.secrets/stripe/webhook_secret
  ```
- Start a local listener with stripe listen. Add events as needed. Omit the API key to use an ephemeral test environment.

  ```shell
  stripe listen \
    --forward-to localhost:3000/api/billing/stripe/webhook \
    --api-key=$(cat koso/.secrets/stripe/secret_key) \
    --events=checkout.session.completed,invoice.paid,customer.subscription.created,customer.subscription.deleted,customer.subscription.paused,customer.subscription.resumed,customer.subscription.updated
  ```
With this in place and your local servers running, you can:

- Trigger events on demand with stripe trigger. For example:

  ```shell
  stripe trigger checkout.session.completed
  ```

- Test interactively using the 4242 4242 4242 4242 card number: https://docs.stripe.com/testing#testing-interactively