
Development Environment (macOS)

First Time Setup

  1. Install Homebrew.

    /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
  2. Install PostgreSQL.

    brew install postgresql@17
  3. Start PostgreSQL.

    brew services start postgresql@17
  4. Install Rust.

    brew install rustup
    rustup-init
  5. The backend uses SQLx to interact with PostgreSQL. Install the SQLx CLI to run migrations and perform other administrative operations.

    cargo install sqlx-cli --no-default-features --features rustls,postgres
  6. Configure SQLx with the DATABASE_URL.

    export DATABASE_URL=postgresql://localhost/koso

    Also, add the environment variable to the appropriate profile file (~/.profile, ~/.bash_profile, ~/.bashrc, ~/.zshrc, ~/.zshenv) so you don't have to run it every time.

  7. Create the database and run the DB migrations.

    In the backend folder, run:

    sqlx database create
    sqlx migrate run
  8. Install Node.js.

    brew install node pnpm
  9. Install the Stripe CLI.

    brew install stripe/stripe-cli/stripe
    stripe login
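The profile note in step 6 can be done in one line. This sketch assumes zsh; use ~/.bashrc or another profile file for other shells:

```shell
# Persist DATABASE_URL for future shells (zsh assumed; adjust the file for your shell).
echo 'export DATABASE_URL=postgresql://localhost/koso' >> ~/.zshrc
grep DATABASE_URL ~/.zshrc
```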

Once A Day / After Every Pull

  1. Run the most recent DB migrations.

    In the backend folder, run:

    sqlx migrate run
  2. Install the latest frontend dependencies.

    In the frontend folder, run:

    pnpm install

Start Backend and Frontend

  1. Start the backend server.

    In the backend folder, run:

    cargo run
  2. Start the frontend server.

    In the frontend folder, run:

    pnpm run dev
  3. Navigate to http://localhost:5173/

VS Code

The Koso Workspace is configured for development in VS Code.

The workspace configuration includes a list of recommended plugins.

DB Migrations

Add a migration:

sqlx migrate add some-meaningful-name

Run migrations

sqlx migrate run

Backend Interactions

Once a server has been started, you can interact with it at http://localhost:3000. There are example requests in koso.http which you can run with the REST Client VS Code extension.

Backend Auto-reload

Tired of manually restarting your server after editing the code? Use systemfd and cargo-watch to automatically recompile and restart the server whenever the source code changes. Under the hood, listenfd migrates connections from the old version of the app to the newly compiled one.

One time setup:

cargo install cargo-watch systemfd

Running:

systemfd --no-pid -s http::3000 -- cargo watch -x run

Running a Built Frontend with the Backend

This setup is similar to how the app will run in production. A single server serves the API, WebSocket, and static frontend files.

  1. In the frontend folder, run:

    pnpm run build
  2. In the backend folder, run the server:

    systemfd --no-pid -s http::3000 -- cargo watch -x run

The frontend build creates a frontend/build folder. The backend/static folder is symlinked to it, so the backend serves the compiled frontend directly.
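The symlink relationship can be illustrated as follows (directory names taken from the text above; shown in a throwaway layout, not a required setup step):

```shell
# Illustration of the backend/static -> frontend/build symlink described above.
mkdir -p frontend/build backend
ln -sfn ../frontend/build backend/static
readlink backend/static   # -> ../frontend/build
```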

Running playwright tests

Playwright tests, i.e. integration tests, exercise the entire system end-to-end via the Playwright frontend testing framework. The tests run as part of CI, but you can also run them locally.

Option A: Iterate quickly by running against your development server

Make changes and run the tests quickly without rebuilding the world. Start a frontend and backend server in the usual manner (see above), then run the tests in VS Code using the Playwright extension or via the CLI:

pnpm exec playwright test

Option B: Mimic production and run against a built frontend

Follow "Running a Built Frontend with the Backend" above to build the frontend and run the backend. Run the tests:

PW_SERVER_PORT=3000 pnpm exec playwright test

Option C: Mimic CI and build things from scratch

This is what our CI workflows do. Playwright will build the frontend and run a backend for the duration of the tests:

CI=true pnpm exec playwright test

Development Docker builds

Build and run the docker image defined in Dockerfile.

One time setup

  1. Download and install Docker: https://www.docker.com/products/docker-desktop/

Build & run

  1. Build the image:

    DOCKER_DEFAULT_PLATFORM=linux/amd64 docker build -t ghcr.io/kosolabs/koso .
  2. Configure the DATABASE_URL.

    export DATABASE_URL=postgresql://localhost/koso

    Also, add the environment variable to the appropriate profile file (~/.profile, ~/.bash_profile, ~/.bashrc, ~/.zshrc, ~/.zshenv) so you don't have to run it every time.

  3. Run database migrations:

    DOCKER_DEFAULT_PLATFORM=linux/amd64 docker run \
       --env DATABASE_URL \
       --network=host \
       --rm -it \
       ghcr.io/kosolabs/koso:latest \
       "./sqlx" migrate run
  4. Run the server:

    DOCKER_DEFAULT_PLATFORM=linux/amd64 docker run \
       --env KOSO_ENV=dev \
       -v $HOME/.secrets:/.secrets \
       --network=host \
       --rm -it \
       ghcr.io/kosolabs/koso:latest
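The docker run above mounts $HOME/.secrets into the container at /.secrets. A quick pre-flight check that the host directory exists (a sketch; which secret files it must contain depends on your configuration):

```shell
# Ensure the mount source exists before starting the server.
mkdir -p "$HOME/.secrets"
ls -ld "$HOME/.secrets"
```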

Server setup

  1. Install docker:

    sudo su
    apt update
    apt install ca-certificates curl gnupg apt-transport-https gpg
    curl -fsSL https://download.docker.com/linux/debian/gpg | gpg --dearmor -o /usr/share/keyrings/docker.gpg
    echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker.gpg] https://download.docker.com/linux/debian bookworm stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
    apt update
    apt install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin docker-compose
    systemctl is-active docker
    
    echo $PULL_TOKEN | docker login ghcr.io -u $USER --password-stdin

Environment

We use a GitHub Environment, configured on the Deploy workflow, which exposes a KOSO_KEY to access the server.

Access in bridge mode (old)

  1. Add 172.17.0.1 to /etc/postgresql/17/main/postgresql.conf:

    listen_addresses = 'localhost,172.17.0.1'
    
  2. Add entry to /etc/postgresql/17/main/pg_hba.conf:

    # Allow docker bridge networks (Docker's default pools fall within 172.16.0.0/12)
    host    all             all             172.16.0.0/12           scram-sha-256
    

Configure GitHub SSH keys (old)

https://docs.github.com/en/authentication/connecting-to-github-with-ssh/managing-deploy-keys#set-up-deploy-keys

ssh-keygen -t ed25519 -C "koso-github-read-key" -f /root/.ssh/koso_github_read_id_ed25519 -N ''
eval "$(ssh-agent -s)"
cat >>/root/.ssh/config <<EOL
Host github.com
  AddKeysToAgent yes
  IdentityFile  ~/.ssh/koso_github_read_id_ed25519
EOL
# MANUAL - add a new deploy key with the public key (e.g. ssh-ed25519 KEY) to https://github.com/kosolabs/koso/settings/keys/new
cat /root/.ssh/koso_github_read_id_ed25519.pub
ssh -T git@github.com && echo "Github auth works"

Server GitHub access (old)

Since we only need read access, we use GitHub deploy keys, rather than our personal keys, to authenticate with GitHub from our server.

Postgres

Backups

psql_backup.sh exports backups of our PostgreSQL DB to cloud storage.

The script runs via a daily cron job; logs are available at koso-psql-backups/backups.log.

Backups are stored in a GCP cloud storage bucket named koso-psql-backups. The bucket has soft deletion and object versioning configured, along with lifecycle rules to auto-delete objects after 30 days.

Restore

Identify the backup to restore in the cloud console and update backup_name below with the target object name.

backup_name=TARGET-backup.sql.gz

Download and unzip the backup:

backup_object=gs://koso-psql-backups/$backup_name
gcloud storage cp --print-created-message $backup_object ./
gzip -dk $backup_name
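Note that gzip -dk keeps the compressed file and writes the decompressed copy alongside it, so the .sql filename can be derived with shell parameter expansion:

```shell
backup_name=TARGET-backup.sql.gz
# Strip the .gz suffix to get the file gzip -dk produced.
sql_file=${backup_name%.gz}
echo "$sql_file"   # TARGET-backup.sql
```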

Restore the backup. The dump is plain SQL, so apply it with psql (pg_restore only reads custom-format archives, and its -f flag names an output file):

PGPASSWORD=$PSQL_PASSWORD psql \
   --host="$PSQL_HOST" \
   --port="$PSQL_PORT" \
   --dbname="$PSQL_DB" \
   --username="$PSQL_USER" \
   -f "${backup_name%.gz}"

Upgrade

Upgrade Postgres to a new major version. In the example below, from 16 to 17.

  1. Update the postgres image version from postgres:16 to postgres:17 in ci.yml and merge.

  2. Install the new version of Postgres:

    sudo apt update
    sudo apt install postgresql-17
    pg_lsclusters
  3. Backup the cluster just in case:

    pg_dumpall > ~/postgres-dump-$(date -u "+%Y-%m-%dT%H-%M-%S-%3NZ")
  4. Upgrade the cluster:

    sudo service postgresql stop
    sudo pg_renamecluster 17 main main_pristine
    sudo pg_upgradecluster 16 main
    sudo service postgresql start
    
    pg_lsclusters
    
  5. Verify the new cluster by visiting our app and confirming it works. Check the backend logs for anything suspicious.

    pg_lsclusters
  6. Drop the old cluster and the transitional pristine cluster:

    sudo pg_dropcluster 16 main --stop
    sudo pg_dropcluster 17 main_pristine --stop

GitHub Webhooks


One-time setup

Install Smee

npm install --global smee-client

Testing locally

After starting your local server:

  1. Configure your development webhook secret in: koso/.secrets/github/webhook_secret
  2. Start a new Smee channel: https://smee.io/
  3. Start smee locally with the new channel:

     smee -u $CHANNEL_URL --port 3000 --path /plugins/github/app/webhook
  4. Trigger or redeliver some events.

Stripe

One-time setup

  1. Install the CLI:

    brew install stripe/stripe-cli/stripe
    stripe login

Testing locally

We use the Koso Labs Sandbox, a Stripe sandbox, for testing. Log in to Stripe and switch to the sandbox to find API keys and webhook details. Feel free to create a new sandbox if needed.

After starting your local server:

  1. Configure your sandbox secret API key in koso/.secrets/stripe/secret_key

  2. Configure your sandbox webhook secret

    stripe listen --api-key $(cat koso/.secrets/stripe/secret_key) --print-secret > koso/.secrets/stripe/webhook_secret
  3. Start a local listener with stripe listen; add events as needed. Omit the API key to use an ephemeral test environment.

    stripe listen \
       --forward-to localhost:3000/api/billing/stripe/webhook \
       --api-key=$(cat koso/.secrets/stripe/secret_key) \
       --events=checkout.session.completed,invoice.paid,customer.subscription.created,customer.subscription.deleted,customer.subscription.paused,customer.subscription.resumed,customer.subscription.updated

With this in place and your local servers running, you can: