Minikube
This page aims to provide detailed information about how the cluster is configured beyond a simple minikube start, and serves as a runbook for getting this up and running locally on a k8s cluster.
Obviously check the individual steps but the SUPER high level steps of getting this minikube cluster running are described here:
- Download & install pre-req software.
- Edit your local hostfile with `vets.internal` addresses.
- Use `minikube start` to create & boot up a k8s cluster.
- Deploy `argoCD` & `cluster-services`.
- Start the tunnel `minikube -p vets tunnel`.
- Initialise Hashicorp vault (create root keys & unseal).
- Run vault `terraform` to create auth backends and secrets.
- Deploy `vets-app` environments (`dev` & `production`).
- `python3` - If you want to run the project outside of `minikube` on your machine this becomes a requirement.
- `sqlite3` - For interacting with the local db.
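Before going further it is worth a quick sanity check that the tooling used below is installed and on your PATH. A minimal sketch, assuming the pre-reqs include minikube, kubectl and helm (all used later on this page) plus the optional extras above:

```
# Confirm the tooling used in this runbook is installed
minikube version
kubectl version --client
helm version
python3 --version
sqlite3 --version
```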
To use the Ingress rules we create, add some entries to your local hostfile:
# Host entries for fluffy-octo-telegram testing
127.0.0.1 dev.vets.internal production.vets.internal
# ci/cd entries
127.0.0.1 argocd.vets.internal workflows.vets.internal
# Logging
127.0.0.1 kibana.vets.internal
# Monitoring
127.0.0.1 grafana.vets.internal alertmanager.vets.internal prometheus.vets.internal
# user admin / secrets
127.0.0.1 reset.vets.internal admin.vets.internal vault.vets.internal
# pgadmin
127.0.0.1 pgadmin.dev.vets.internal pgadmin.production.vets.internal

Create & boot the cluster with:

minikube start --nodes 1 --addons ingress \
--cpus max --memory 12192 --addons metrics-server \
--extra-config=kubelet.max-pods=1000 -p vets

This will churn away for a few minutes, after which you will have a single node kubernetes cluster running on your local machine, nice one!
N.B. you may want to tune the memory given to the cluster to suit your machine.
After that you can start/stop/open traffic to the cluster with these commands
minikube start -p vets
minikube stop -p vets
minikube status -p vets
minikube tunnel -p vets

There are two ways to run kubectl commands against the cluster: one using the kubectl bundled in with minikube:
minikube -p vets kubectl -- get pods --all-namespaces

And the other just using kubectl (if you installed it as part of the optional tooling):
kubectl get pods --all-namespaces

Much nicer :)
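If plain kubectl is not pointing at the right cluster, minikube normally writes a kubeconfig context named after the profile, so you can switch to it explicitly (the context name `vets` here is an assumption based on the `-p vets` profile):

```
# List the available contexts and switch to the minikube "vets" profile
kubectl config get-contexts
kubectl config use-context vets
```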
# Create namespace for argocd
kubectl create ns argocd
# Install argocd, the main CD tool I'm playing with right now
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
# Get the initial admin password for argocd
kubectl get secret argocd-initial-admin-secret -n argocd -o jsonpath='{.data.password}' | base64 -d
# Check everything is up
kubectl get pods -n argocd
# Install the rest of the cluster services (as argocd Applications)
helm install cluster-services deploy-descriptors/cluster/chart --namespace argocd

Give this 5 mins or so to warm up all the pods and make sure you have started
minikube -p vets tunnel

in another terminal, which will route the localhost entries into the ingress rules of the cluster.
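If you want to confirm the cluster-services Applications registered without opening the UI, they are just ArgoCD CRDs, so kubectl can list them along with their sync and health status (assumes the CRDs from the install manifest above are in place):

```
# Application is an ArgoCD CRD; the default columns show sync and health status
kubectl get applications.argoproj.io -n argocd
```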
We can then start configuring the Vault service, as this needs to be populated before we deploy the vets-apps.
Go to Vault and create the initial root keys for your vault installation.
Just create one unseal key and save the .json file it dumps for you somewhere very safe. You will need it every time
you start the cluster to unseal the vault.
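If you prefer driving this from the CLI rather than the Vault UI, roughly the same flow can be done with vault operator commands. This is only a sketch: the `vault` namespace and `vault-0` pod name are assumptions based on the stock Vault Helm chart, so adjust them to whatever cluster-services actually deploys (it also assumes jq is installed):

```
# Initialise with a single unseal key and keep the output somewhere very safe
kubectl exec -n vault vault-0 -- vault operator init \
  -key-shares=1 -key-threshold=1 -format=json > vault-init.json

# Unseal with that key (needed every time the cluster starts)
kubectl exec -n vault vault-0 -- vault operator unseal "$(jq -r '.unseal_keys_b64[0]' vault-init.json)"
```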
Manually, for now:
kubectl create ns dev-vets
kubectl create secret generic vets-app -n dev-vets \
--from-literal=DJANGO_SECRET_KEY='<<<<<<<<<<<<<<<<<<< A VERY LONG RANDOM STRING >>>>>>>>>>>>>>>>>>>' \
--from-literal=POSTGRES_PASSWORD='<<<<< A COMPLEX PASSWORD >>>>>'

Repeat for namespace production-vets as well.
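Any sufficiently long random strings will do for those placeholders; one way to generate them (openssl here is just an example, not a requirement):

```
# Generate random values to paste into the placeholders above
openssl rand -hex 48   # DJANGO_SECRET_KEY
openssl rand -hex 24   # POSTGRES_PASSWORD
```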
This will be moved to a vault implementation of secrets soon.
Create a token in dockerhub and update below with your own creds for pushes to your own dockerhub.
export DOCKER_USERNAME=******
export DOCKER_TOKEN=******
kubectl create secret generic docker-config \
--from-literal="config.json={\"auths\": {\"https://index.docker.io/v1/\": {\"auth\": \" \
$(echo -n $DOCKER_USERNAME:$DOCKER_TOKEN|base64)\"}}}"If you do not do this we can just set push=false on the buildkit step in the
CI pipeline later. No worries.
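A quick way to sanity-check what actually landed in the secret is to decode the stored config.json back out (this assumes it was created in your current namespace, as above):

```
# Decode the stored config.json and eyeball the auth entry
kubectl get secret docker-config -o jsonpath='{.data.config\.json}' | base64 -d
```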
You can now use minikube tunnel -p vets (if not already) to open the ports
as needed and get to the ingress controller.
You can now hit argocd and monitor the rest of the cluster-services deploy from there.
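If you would rather watch from a terminal than the argocd UI, streaming pod status across all namespaces works just as well while the cluster services come up:

```
# Watch pods in every namespace while cluster-services sync
kubectl get pods -A --watch
```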
Go to Vault and follow its init steps to unseal it and save the creds somewhere safe.
After you do this, its state in argo will go Healthy.
In order for argo-workflows to be able to use output parameters, it needs to be able to patch pods in the namespace. This should really be a properly scoped role that only allows it what it needs to do (see the sketch at the end of this page).
kubectl create rolebinding default-admin \
--clusterrole=admin --serviceaccount=argo-workflows:default -n argo-workflows

Now you can move over to the Testing page for how to deploy the ci pipelines and the vets apps to the namespaces.
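As flagged above, a more tightly scoped alternative to binding the admin clusterrole might look like the sketch below. The exact verbs the Argo Workflows executor needs can vary by version, so treat get/watch/patch on pods as a starting point rather than a definitive list (the role name is made up for the example):

```
# Namespaced role that only allows what the workflow executor needs for output parameters
kubectl create role workflow-pod-patcher -n argo-workflows \
  --verb=get --verb=watch --verb=patch --resource=pods

# Bind it to the default service account instead of the admin clusterrole
kubectl create rolebinding workflow-pod-patcher -n argo-workflows \
  --role=workflow-pod-patcher --serviceaccount=argo-workflows:default
```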