- Microsoft SQL Server - MSSQL
- Mail Dev - MailDev
  - MailDev Docker Image
- In the `api` folder:
  - Create a `.env.development` file with the following content:

    ```bash
    BLOB_CONNECTION_STRING=....
    ```

    See `docker-compose.development.yml` -> `x-default-environment` for optional values that you can customize as needed. Only the `BLOB_CONNECTION_STRING` must be manually supplied during setup, as it is a secret type value.
- Go back to the top level directory, and do the same in the `archive` folder.
- Set up the `dev` command, or use `docker compose -f docker-compose.development.yml` instead of `dev` in all instructions.
- Boot the api, web, and db services via `dev up --watch` or `dev watch` or `docker compose -f docker-compose.development.yml up --watch`. This will run the boot pipeline and create the database, run migrations, and run seeds.
- Stop the api, web, and db services via `ctrl+c` or `dev down`, or, if you want to wipe the database, `dev down -v`.
- Install local dependencies by installing `asdf` and node via `asdf`, and then running `npm install` at the top level of the project.
- To get the local per-service node_modules, so your code editor gets linting and types, run `cd api && npm i` and `cd web && npm i`.
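The `.env.development` steps above can be sketched as a few shell commands (a sketch only — the placeholder value is hypothetical and must be replaced with your real secret):

```shell
# Create the development env files in both service folders.
# <your-blob-connection-string> is a hypothetical placeholder for the secret.
mkdir -p api archive   # directories already exist in the repo; created here so the sketch is self-contained
echo 'BLOB_CONNECTION_STRING=<your-blob-connection-string>' > api/.env.development
echo 'BLOB_CONNECTION_STRING=<your-blob-connection-string>' > archive/.env.development
```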
- Boot only the api service using:

  ```bash
  dev up api
  # or
  docker compose -f docker-compose.development.yml up --watch api
  # or
  cd api
  npm run start
  ```
- Access the api by logging in to the front-end, then going to http://localhost:3000
- Boot only the web service using:

  ```bash
  dev up web
  # or
  docker compose -f docker-compose.development.yml up --watch web
  # or
  cd web
  npm run start
  ```
- Log in to the front-end service at http://localhost:8080
- Boot only the db service using:

  ```bash
  dev up db
  # or
  docker compose -f docker-compose.development.yml up --watch db
  ```

  Migrations run automatically, as do seeds. NOTE: production and development have different seeds.
- You can access the `sqlcmd` command line via:

  ```bash
  dev sqlcmd
  # or
  docker compose -f docker-compose.development.yml exec db sh -c '
    /opt/mssql-tools18/bin/sqlcmd \
      -U "$DB_USERNAME" \
      -P "$DB_PASSWORD" \
      -H "$DB_HOST" \
      -d "$DB_DATABASE" \
      -C -I'
  ```
You can also run migrations and seeding manually, after logging in to the web UI, by going to:

- http://localhost:3000/migrate/latest
- http://localhost:3000/migrate/up
- http://localhost:3000/migrate/down
- http://localhost:3000/migrate/seed

You can also skip seeding, if the database is not empty, by setting the `SKIP_SEEDING_UNLESS_EMPTY=true` environment variable.
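For example, the flag can be exported before booting the stack (a sketch — the variable could equally be added to the env files, depending on how you run the services):

```shell
# Skip re-seeding a non-empty database: set the flag before booting.
export SKIP_SEEDING_UNLESS_EMPTY=true
# ...then boot as usual, e.g. `dev up --watch`
```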
- Access the web interface at http://localhost:1080
If you are getting a bunch of "Login required" errors in the console, make sure that you have disabled any kind of enhanced tracking protection.
Auth0 uses third-party cookies for authentication, and they are blocked by default in all major browsers.
- Run the api test suite via `dev test_api`.

  See api/tests/README.md for more detailed info.
This project uses knex, with the config hoisted from the `db/db-client.ts` file.

NOTE: Migrations should use snake_case. While database table and column names use snake_case, we use Sequelize for our models so that we get camelCase, matching the JS standard, in the JS portion of the codebase.
- To create a new migration from the template sample-migration do:

  ```bash
  dev migrate make create-users-table
  # or
  dev knex migrate:make create-users-table
  # or
  dev api sh
  npm run knex migrate:make create-users-table
  ```

  If you are using Linux, all files created in docker will be created as `root`, so you won't be able to edit them. Luckily, this is handled by the `dev knex` command when using Linux, after you provide your `sudo` password.
- To run all new migrations do:

  ```bash
  dev migrate latest
  # or
  dev migrate up
  ```
- To rollback the last executed migration:

  ```bash
  dev migrate down
  ```
- To rollback all migrations:

  ```bash
  dev migrate rollback --all
  ```
Seeding is similar to migrating, but with fewer options; see the Knex docs.

Currently it only has these commands:

```bash
dev seed make fill-users-table
dev seed run
```

You can also use the other patterns:

```bash
dev knex seed:make fill-users-table
# or
dev api sh
npm run knex seed:make fill-users-table
```

Seeds are separated by environment, i.e. api/src/db/seeds/development vs. api/src/db/seeds/production. This allows for the convenient loading of required defaults in production, with more complex seeds in development, for easy QA.

Seeds currently don't keep track of whether they have run or not. As such, seed code should be idempotent, so that it can be executed at any point in every environment.
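A minimal illustration of that idempotency requirement (an illustrative sketch only — a flat file stands in for a database table here, and the email is a made-up example; real seeds would use knex queries with an existence check or upsert):

```shell
# Insert a row only when it is missing, so re-running the seed is a no-op.
TABLE=/tmp/seed_users_demo.txt
: > "$TABLE"   # start from an empty "table" for this demo

seed_user() {
  # append the row only if an identical row is not already present
  grep -qxF "$1" "$TABLE" || echo "$1" >> "$TABLE"
}

seed_user "admin@example.com"   # first run inserts the row
seed_user "admin@example.com"   # second run changes nothing
grep -c . "$TABLE"              # prints 1
```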
If you want to take over a directory or file in Linux, you can use `dev ownit <path-to-directory-or-file>`.

If you are on Windows or Mac, and you want that to work, you should implement it in the `bin/dev` file. You might never actually need to take ownership of anything, so this might not be relevant to you.
The `dev` command vastly simplifies development using docker compose. It only requires Ruby; however, direnv and asdf will make it easier to use.

It's simply a wrapper around docker compose, with the ability to quickly add custom helpers.

All commands are just strings joined together, so it's easy to add new commands. `dev` prints out each command that it runs, so that you can run the command manually to debug it, or just so you learn some docker compose syntax as you go.
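The pattern is roughly the following (a sketch only — the real `bin/dev` is a Ruby script, and this demo only prints the composed command rather than executing it):

```shell
# Build a docker compose command by joining strings, and echo it the way
# `dev` prints each command before running it.
compose="docker compose -f docker-compose.development.yml"

dev_cmd() {
  echo "$compose $*"   # the real wrapper would also execute this string
}

dev_cmd up --watch
# prints: docker compose -f docker-compose.development.yml up --watch
```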
- (optional) Install `asdf` as seen in https://asdf-vm.com/guide/getting-started.html.

  e.g. for Linux:

  ```bash
  apt install curl git

  git clone https://github.com/asdf-vm/asdf.git ~/.asdf --branch v0.12.0

  echo '
  # asdf
  . "$HOME/.asdf/asdf.sh"
  . "$HOME/.asdf/completions/asdf.bash"
  ' >> ~/.bashrc
  ```

- Install `ruby` via `asdf` as seen here https://github.com/asdf-vm/asdf-ruby, or using whatever custom Ruby install method works for your platform.

  e.g. for Linux:

  ```bash
  asdf plugin add ruby https://github.com/asdf-vm/asdf-ruby.git

  # install version from .tool-versions file
  asdf install ruby

  asdf reshim ruby
  ```

  You will now be able to run the `./bin/dev` command.

- (optional) Install direnv and create an `.envrc` with:

  ```bash
  #!/usr/bin/env bash

  PATH_add bin
  ```

  and then run `direnv allow`.

  You will now be able to do `dev xxx` instead of `./bin/dev xxx`.
- Create the appropriate database, as specified by the `DB_DATABASE` environment variable, and
- Make sure the default `dbo` schema exists in that database.
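For example, via the `sqlcmd` access shown earlier, the two steps might look like this (a T-SQL sketch — the database name must match whatever your `DB_DATABASE` is set to; `traditional_knowledge_production` is used here only as an illustration):

```sql
-- Create the database if it is missing (name must match DB_DATABASE).
IF DB_ID('traditional_knowledge_production') IS NULL
  CREATE DATABASE traditional_knowledge_production;
GO

-- dbo is the default schema in a new SQL Server database, so this is
-- normally just a sanity check.
USE traditional_knowledge_production;
GO
SELECT SCHEMA_ID('dbo'); -- non-NULL means the schema exists
GO
```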
Files:

- Dockerfile
- docker-compose.yml
- Non-committed `.env` file
- Create a `.env` file in the top level directory with the appropriate values.

  ```bash
  VITE_APPLICATION_NAME="Traditional Knowledge"
  HOST_PORT=8080
  API_PORT=8080
  DB_HOST=db
  DB_PORT=1433
  DB_USERNAME=sa
  DB_PASSWORD=DevPwd99!
  DB_DATABASE=traditional_knowledge_production
  VITE_API_BASE_URL="http://localhost:8080"
  VITE_AUTH0_CLIENT_ID="fsWyrDohhHtojdOpOFnAYtFMxwAMHUEF"
  VITE_AUTH0_AUDIENCE="testing"
  VITE_AUTH0_DOMAIN="https://dev-0tc6bn14.eu.auth0.com"
  MAIL_HOST="mail"
  MAIL_PORT=1025
  MAIL_FROM="traditional-knowledge@yukon.ca"
  MAIL_SERVICE="MailDev" # Outlook365 or unset in production environment
  ```
- (optional) If testing build arguments, do:

  ```bash
  dc build \
    --build-arg RELEASE_TAG=2024.01.8.1 \
    --build-arg GIT_COMMIT_HASH=532bd759c301ddc3352a1cee41ceac8061bfa3f7
  # or
  dc build \
    --build-arg RELEASE_TAG=$(date +%Y.%m.%d) \
    --build-arg GIT_COMMIT_HASH=$(git rev-parse HEAD)
  ```

  and then in the next step drop the `--build` flag.
- Build and boot the production image via:

  ```bash
  docker compose up --build
  ```
- Go to http://localhost:3000/ and log in.
- Navigate around the app, do some stuff, and see if it works.
- https://nektosact.com/installation/gh.html
- https://github.com/cli/cli/blob/trunk/docs/install_linux.md
- Install GitHub CLI via (see https://github.com/cli/cli/blob/trunk/docs/install_linux.md):

  ```bash
  sudo apt install gh
  ```

  NOTE: the `snap` version of `gh` has permission limits and will not work correctly, so use the `apt` version instead.

- Install the GitHub publish library (gh-act) via (see https://nektosact.com/installation/gh.html):

  ```bash
  gh extension install https://github.com/nektos/gh-act
  ```
- Generate the secrets file via:

  ```bash
  ./bin/build-github-act-secrets-file.sh
  ```
- Run the publish action via:

  ```bash
  gh act push \
    -P ubuntu-latest=-self-hosted \
    --job build \
    --env PUSH_ENABLED=false \
    --secret-file .secrets
  ```

  Wait a long time; this will be very slow and will not show much progress.
- Check that the secrets built correctly:

  ```bash
  docker run --rm -it \
    --entrypoint /bin/sh \
    icefoganalytics.azurecr.io/traditional-knowledge-archiver:<LATEST_TAG>

  cat /etc/ssl/private/icefog.pem # permission should be "denied"
  cat /etc/ssl/certs/fullchain.pem # should show multi-line output
  ```