- We use Kubernetes and all resources are located in the `k8s` folder
- We use Skaffold to handle the workflow for building, pushing and deploying our applications
- We use Kustomize for easier separation between different environments and less code duplication for Kubernetes resources
- We use Github actions for automatic releases to the `dev` environment
- We make manual releases to the `prod` environment using `skaffold run`, described further down
- docker
- kubectl
- skaffold
- kustomize
- kubectx + kubens (not needed but highly recommended)
- We use our own Kubernetes cluster; information on how to connect to it is available in the drift Slack channel
- All resources are located in k8s folder
- We use kubernetes namespaces to separate environments
- The namespaces we use are:
  - `predictivemovement` for `prod`, available at https://admin.predictivemovement.se
  - `predictivemovement-dev` for `dev`
  - `pelias` for all Pelias services that we use
  - `predictivemovement-se` for the website https://predictivemovement.se
NOTE: At the time of writing (2021-07-07):

- we use the cluster at 192.168.100.90
- we have only the `predictivemovement` and `predictivemovement-se` namespaces set up
- the `predictivemovement-dev` namespace and Github actions are not configured as mentioned above FIXME
- the k8s resources for https://predictivemovement.se were not added to this repository FIXME
- `cicd-rbac.yaml` creates a service account that is used for connecting to the cluster from Github actions
- `config.yaml` contains the cluster connection config that is used from Github actions
NOTE: More about Github actions setup is described further down
- `pelias` folder contains everything for Pelias
- `base` folder contains `dependencies` (databases) and `stack` (applications) that are deployed to `predictivemovement-dev`
- `overlays` folder contains `dependencies-prod` (databases) and `stack-prod` (applications) that use the corresponding folder from `base`, extend it, and are then deployed to the `predictivemovement` namespace
NOTE: Read more about our use of Kustomize overlays below
- `ghost-website` folder contains the configuration for https://predictivemovement.se (instructions are available further down)
- Secrets are created manually
- You find the values in LastPass for the different environments (`dev`, `prod`), or feel free to create and update them in the specific namespace used
- Replace `<FROM_LASTPASS>` with the correct value and `<NAMESPACE>` with what applies for your needs:

```shell
kubectl create secret generic booking-token --from-literal=BOOKING_TOKEN=<FROM_LASTPASS> -n <NAMESPACE>
kubectl create secret generic driver-token --from-literal=DRIVER_TOKEN=<FROM_LASTPASS> -n <NAMESPACE>
kubectl create secret generic google-token --from-literal=GOOGLE_TOKEN=<FROM_LASTPASS> -n <NAMESPACE>
kubectl create secret generic minio-password --from-literal=MINIO_ROOT_PASSWORD=<FROM_LASTPASS> -n <NAMESPACE>
kubectl create secret generic postgres-password --from-literal=POSTGRES_PASSWORD=<FROM_LASTPASS> -n <NAMESPACE>
kubectl create secret generic postnord-api-key --from-literal=POSTNORD_KEY=<FROM_LASTPASS> -n <NAMESPACE>
kubectl create secret generic ui-basic-auth --from-file=auth
```
NOTE: Learn to create the auth file for basic auth at https://imti.co/kubernetes-ingress-basic-auth/
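As a sketch of what the linked guide describes: the `auth` file is in htpasswd format (the guide uses `htpasswd` from apache2-utils; `openssl` works as an alternative). The username `ui-user` and the password below are placeholder assumptions.

```shell
# Generate an htpasswd-style "auth" file for the ui-basic-auth secret.
# "ui-user" and "CHANGE_ME" are placeholders -- pick your own values.
printf 'ui-user:%s\n' "$(openssl passwd -apr1 'CHANGE_ME')" > auth

# Then create the secret from the file as shown above, e.g.:
# kubectl create secret generic ui-basic-auth --from-file=auth -n <NAMESPACE>
```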
- All workflows are located in `.github`
- The current flows are:
  - `main`, which runs on the `main` branch and will install dependencies (kubectl, skaffold) and run the `skaffold run` command
  - `test`, which runs on a pull request and runs all tests in the different packages
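A rough sketch of what such a `main` workflow could look like (the file path, action versions and install steps are assumptions, not the repository's actual workflow file):

```yaml
# Hypothetical sketch of .github/workflows/main.yml
name: main
on:
  push:
    branches: [main]
jobs:
  release-dev:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Install skaffold
        run: |
          curl -Lo skaffold https://storage.googleapis.com/skaffold/releases/latest/skaffold-linux-amd64
          chmod +x skaffold && sudo mv skaffold /usr/local/bin/
      - name: Docker login
        run: echo "${{ secrets.DOCKER_PASSWORD }}" | docker login -u "${{ secrets.DOCKER_USER }}" --password-stdin
      - name: Deploy
        run: skaffold run
```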
We have credentials for a service account for Docker in LastPass. Add them as secrets in Github.
- DOCKER_USER
- DOCKER_PASSWORD
- in the `main.yml` workflow we replace placeholder values from the config defined in `k8s/config.yaml`
- the replacement values are stored in the following Github secrets:
  - KUBE_CLUSTER_NAME (doesn't affect connectivity)
  - KUBE_CLUSTER_SERVER (replace with the external IP of the cluster so that Github can connect to it)
  - KUBE_CLUSTER_CERTIFICATE (replace with a certificate that is trusted for the external IP)
  - KUBE_USER_NAME (replace with the value `service-account`)
  - KUBE_USER_TOKEN (find the name of the secret with `kubectl get secrets --all-namespaces` and look for something like "cicd", then get the secret using `kubectl get secrets <secret name> -o yaml`)
NOTE: kubectl gives you the secret base64-encoded, you might need to decode it
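Decoding is a one-liner; the token value below is a dummy example, not a real credential:

```shell
# kubectl prints secret data base64-encoded; decode it before saving it
# as the KUBE_USER_TOKEN Github secret.
echo 'bXktY2ljZC10b2tlbg==' | base64 --decode
# -> my-cicd-token
```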
- Kustomize allows you to define `bases` that you can extend from `overlays`
- A simple example of how to use kustomize:
- for the `dev` environment we define everything in `k8s/base/stack`
- the `kustomization.yaml` has a list of `resources` that are created, the namespace that will be used, and a config map generator that creates ConfigMaps in our cluster (one for common properties reused by the majority of pods and an engine-specific one)
- for the `prod` environment we define everything in `k8s/overlays/stack-prod`
- the `kustomization.yaml` inside this reuses the base from `k8s/base/stack` and extends it with customizations like `patchesStrategicMerge`, which allows us to duplicate less code (like the Ingress that should have a different URL between environments), use a different namespace, and define only the properties that differ from the ones in `dev` with the `configMapGenerator`
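The layout above can be sketched as two hypothetical `kustomization.yaml` files (resource names and config literals are assumptions, not the real files):

```yaml
# k8s/base/stack/kustomization.yaml (sketch)
namespace: predictivemovement-dev
resources:
  - engine.yaml     # placeholder resource names
  - ingress.yaml
configMapGenerator:
  - name: common-properties
    literals:
      - LOG_LEVEL=debug

# k8s/overlays/stack-prod/kustomization.yaml (sketch)
namespace: predictivemovement
bases:
  - ../../base/stack
patchesStrategicMerge:
  - ingress.yaml    # overrides only what differs, e.g. the Ingress URL
configMapGenerator:
  - name: common-properties
    behavior: merge
    literals:
      - LOG_LEVEL=info
```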
- We use `kustomize` to separate the resource files for `dependencies` (minio, postgres, rabbitmq, redis...)
- Since the majority of our database configs are defined using `StatefulSet`, this approach with `kustomize` and `skaffold` works best when you set up the cluster from scratch.

NOTE: An issue with this approach (rather than using plain `kubectl apply -f` commands) is that when you want to add a new database, create the yaml file, add it to the `kustomization` file and run the `skaffold` command, you might get an error due to StatefulSets not allowing some updates. It should still successfully apply the new configuration and create your new database.
Skaffold is the tool that allows us to automate the building, pushing and deploying of our code
- there are 2 configuration files: `skaffold.yaml`, used for packages, and `skaffold-dependencies.yaml`, used for databases (explained above)
- `skaffold.yaml` contains a `build` section where we define the Docker images we build with the correct path to the package, a `deploy` section where we specify that we want to use `kustomize`, and a `profiles` section where we define a `prod` profile (this profile builds the same Docker images but defines a different `deploy` section)
  - this means the skaffold commands can be run with `--profile prod`
- when `skaffold` runs, if a Docker repository doesn't exist, it will create it as long as the logged-in Docker user has permission to do that
  - this means that for Github Actions, you need to go to the Dockerhub website and either update permissions for the repository, or create the repository and change permissions to give the service account read/write access
- `skaffold` allows you to debug code by running `skaffold dev` (it's like a `nodemon` which runs the code in the cluster you are currently connected to)
  - NOTE: When you exit `skaffold dev` it will clean up all resources created, so DO NOT RUN THIS ON prod
- NOTE: No instructions for deploying `dev`, as that is done by Github actions (see above)
- To deploy the dependencies of the stack (i.e. databases):

```shell
skaffold -f skaffold-dependencies.yaml run --profile prod
```

- Set the environment variables used at build time by Docker and run the skaffold command with the production profile:

```shell
export REACT_APP_MAPBOX_ACCESS_TOKEN=<FROM LASTPASS>
export REACT_APP_ENGINE_SERVER=https://engine-server.iteamdev.io
skaffold run --profile prod
```
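The structure described above might look roughly like this `skaffold.yaml` sketch (API version, image names and paths are placeholders, not the actual file):

```yaml
# Hypothetical skaffold.yaml sketch
apiVersion: skaffold/v2beta13
kind: Config
build:
  artifacts:
    - image: myorg/engine-server        # placeholder image and path
      context: packages/engine-server
deploy:
  kustomize:
    paths:
      - k8s/base/stack
profiles:
  - name: prod                          # same build, different deploy target
    deploy:
      kustomize:
        paths:
          - k8s/overlays/stack-prod
```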
We use postgres-backup.

To restore a backup, exec into the postgres-backup pod:

```shell
kubectl exec -it postgres-backup -- /bin/bash
```

And run:

```shell
/restore.sh /backup/latest.psql.gz # or choose a different backup you want
```

- The csv-importer.yaml requires data from Lantmäteriet to be available on the node in the folder `/storage/lantmateriet/csv/` (it will read *.csv)
- To deploy:

```shell
kubectl apply -f k8s/pelias
```
- Is currently only inside `k8s/overlays/dependencies-prod` and deployed only to `prod`, but used from localhost, dev and prod
NOTE: OSRM should probably be moved to its own namespace FIXME
- This assumes that the DNS configuration is set up so that `predictivemovement.se` points to the cluster you use
- Edit the secrets `k8s/predictivemovement.se/ghost-secret.yaml` and `k8s/predictivemovement.se/mariadb-password-secret.yaml` and replace the template values (instructions are in the yaml files when you open them)
- After editing the secrets, you can apply all k8s configuration files (secrets and deployments for ghost and mariadb) by running:

```shell
kubectl apply -f k8s/predictivemovement.se
```
- After the pods are running you have to add the ghost and mariadb content for the website to the volumes used by the deployments.
- Download the backups from Google Drive, in the Predictive Movement/Data folder
- Retrieve the specific pod name for ghost or mariadb:

```shell
kubectl get pods -n predictivemovement-se
```

- Using `kubectl cp`, copy the backup contents to each pod (you've retrieved their names with the command above) at these paths:
  - `/bitnami/mariadb` for mariadb
  - `/bitnami/ghost` for ghost
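For example, assuming the backups were extracted locally into `./mariadb-backup` and `./ghost-backup` (placeholder paths, as are the pod names), the copy could look like:

```shell
# <mariadb-pod> / <ghost-pod> are the pod names from `kubectl get pods` above
kubectl cp ./mariadb-backup/. predictivemovement-se/<mariadb-pod>:/bitnami/mariadb
kubectl cp ./ghost-backup/. predictivemovement-se/<ghost-pod>:/bitnami/ghost
```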
NOTE: You might have to restart mariadb after restoring the backup.
You might also not be able to overwrite the existing mariadb pod folder while it's running the mariadb command.
In that case, update `ghost-mariadb.yaml` and add a command for the container to sleep.
Changing the command should allow you to use `kubectl cp` and correctly overwrite the existing contents; after that, just revert the sleep command and run `kubectl apply` on mariadb again.
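For the sleep workaround, a hypothetical snippet of the container spec in `ghost-mariadb.yaml` could look like this (the image name is a placeholder; keep whatever the file already uses):

```yaml
# Temporary override so mariadb is not running while you `kubectl cp`;
# revert this and `kubectl apply` again once the restore is done.
containers:
  - name: mariadb
    image: bitnami/mariadb          # placeholder
    command: ["sleep", "infinity"]
```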