This tutorial will walk you through running the vms management server, setting up a management backbone on a Kubernetes cluster, and connecting a van to the backbone from a second Kubernetes cluster.
- Access to at least two Kubernetes clusters, from any provider you choose.

  NOTE: The cluster running the management backbone must be an OpenShift cluster, as the Management Controller requires routes.
- The `kubectl` command-line tool, version 1.15 or later (installation guide).
- cert-manager installed on the cluster you are running the management server from:

  ```shell
  kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.19.2/cert-manager.yaml
  ```

  You can get the most recent version from https://cert-manager.io/docs/installation/kubectl/

  NOTE: cert-manager is included in OpenShift.
- A Keycloak instance running and configured for the management-controller. For instructions, see the Keycloak setup guide.
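If you installed cert-manager with the manifest above, it can help to confirm it is ready before proceeding. A minimal check, assuming the `cert-manager` namespace that the upstream manifest creates by default:

```shell
# Wait for cert-manager's deployments to report Available (the "cert-manager"
# namespace is the default created by the upstream install manifest).
kubectl -n cert-manager wait --for=condition=Available deployment --all --timeout=120s
kubectl -n cert-manager get pods
```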
Open a new terminal window and run the following commands:
```shell
export KUBECONFIG=~/.kube/management-server
<provider-specific login command>
```

Note: The login procedure varies by provider.

Note: This must be done on an OpenShift cluster, as the Management Controller requires routes.

```shell
kubectl create namespace vms
kubectl config set-context --current --namespace vms
```

Apply the following yaml files, found in the /yaml folder in the root of the repo.
Note: Paths are relative to the repository root.
Get the default storage class from your cluster:

```shell
export STORAGE_CLASS=$(kubectl get storageclasses.storage.k8s.io -o json | jq -r '.items[] | select(.metadata.annotations."storageclass.kubernetes.io/is-default-class" == "true") | .metadata.name')
```

Update the storageClassName in the yaml/openshift-postgres.yaml and yaml/postgres-pvc-pv.yaml files, using:

```shell
sed -i "s/storageClassName: .*/storageClassName: ${STORAGE_CLASS:?}/g" yaml/openshift-postgres.yaml yaml/postgres-pvc-pv.yaml
```

Now you can proceed and apply the following yamls on OpenShift.
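To preview what the substitution will do before editing the manifests in place, the same sed expression can be run as a filter. A local sketch with a stand-in value (on a real cluster, the value comes from the `STORAGE_CLASS` export above):

```shell
# Stand-in value for illustration only; on a real cluster, use the
# STORAGE_CLASS exported from the default-storage-class lookup above.
STORAGE_CLASS=standard
echo "  storageClassName: old-class" \
  | sed "s/storageClassName: .*/storageClassName: ${STORAGE_CLASS:?}/g"
# prints "  storageClassName: standard"
```

Note that `sed -i` with no argument assumes GNU sed; on macOS/BSD sed, use `sed -i ''` instead.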
```shell
kubectl -n vms apply -f yaml/postgres-config.yaml
kubectl -n vms apply -f yaml/postgres-pvc-pv.yaml
kubectl -n vms apply -f yaml/openshift-postgres.yaml
kubectl -n vms apply -f yaml/root-ca.yaml
```

If you are not using OpenShift, apply these instead:

```shell
kubectl apply -f yaml/postgres-config.yaml
kubectl apply -f yaml/postgres-deployment.yaml
kubectl apply -f yaml/postgres-pvc-pv.yaml
kubectl apply -f yaml/postgres-service.yaml
kubectl apply -f yaml/root-ca.yaml
```

From the root of the repo, run the following command to install the necessary Node packages, then navigate to the management-controller directory.
```shell
pnpm install
cd ./components/management-controller
```

Note: It is required to use pnpm, not npm, as the package manager for the install.
```shell
export PGUSER=access
export PGPASSWORD=password
export PGDATABASE=studiodb
export SKX_STANDALONE_NAMESPACE=vms
export APP_USER_PASSWORD=password
export APP_SYSTEM_PASSWORD=password
export VMS_SESSION_SECRET=mysecret
```

To set the PGHOST environment variable, run the following command to find the cluster IP of the postgres service (if you are not using OpenShift):

```shell
export PGHOST=$(kubectl -n vms get svc postgres -o json | jq -r .spec.clusterIP)
```

To set up the postgres database schema, run the following command against the postgres pod to execute the database setup script found in ./scripts from the root of the repo.

On OpenShift:

```shell
kubectl -n vms exec -it statefulsets/postgres -- psql -U $PGUSER -d $PGDATABASE -v APP_USER_PASSWORD=password -v APP_SYSTEM_PASSWORD=password < ./scripts/db-setup.sql
```

On other clusters:

```shell
kubectl exec -it deployment/postgres -- psql -U $PGUSER -d $PGDATABASE -v APP_USER_PASSWORD=password -v APP_SYSTEM_PASSWORD=password < ./scripts/db-setup.sql
```

NOTE: If using minikube, run `minikube tunnel` in a separate terminal.
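To confirm the schema was created, you can list the tables psql now sees. A sketch assuming the OpenShift StatefulSet variant (swap in `deployment/postgres` on other clusters):

```shell
# List the tables created by db-setup.sql ("\dt" is psql's table listing).
kubectl -n vms exec -it statefulsets/postgres -- psql -U $PGUSER -d $PGDATABASE -c '\dt'
```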
Place keycloak.json in components/management-controller/ as described in the Keycloak setup guide; the API server expects it at startup.
From inside the management-controller directory, run:
Note: If you are on OpenShift, you can port-forward localhost port 5432 to pod/postgres-0 and set PGHOST=localhost.

```shell
# under components/management-controller, run:
node index.js
```

Open a new terminal window and run the following commands:
```shell
export KUBECONFIG=~/.kube/backbone
<provider-specific login command>
kubectl create namespace <backbone-namespace>
kubectl config set-context --current --namespace <backbone-namespace>
```

- Navigate to http://localhost:8085 and open the "backbones" tab
- Create a new backbone and give it a name
- Click on the newly created backbone and click the "Activate" button
- Click "Create site...", give it a name, and select "skx-prototype" as the target platform
- Create an access point on the newly created site with kind: "manage"
- Create a second access point with kind: "van"
- Click the "Bootstrap Step 1" download link and apply the downloaded yaml in your backbone namespace
- Run `kubectl exec -it <vms-site-pod> -c controller -- skxhosts`
- Copy the output ingress data into the "Upload ingress data" section in the vms console under "Bootstrap Step 2"
- Once the host and port data appear on the "manage" access point, click the "Bootstrap Step 3" download link and apply the downloaded yaml file in your backbone namespace
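The `<vms-site-pod>` placeholder above is the name of the site pod created by the Bootstrap Step 1 manifest. A sketch for locating it (no label selector is assumed here, so simply list the pods):

```shell
# The site pod should appear here once the Step 1 manifest has been applied.
kubectl -n <backbone-namespace> get pods
```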
Open a new terminal window and run the following commands:
```shell
export KUBECONFIG=~/.kube/van
<provider-specific login command>
```

Note: The login procedure varies by provider.
- Apply the following crds to your cluster:

  ```shell
  kubectl apply -f https://github.com/fgiorgetti/skupper/raw/refs/heads/multi-van/config/crd/bases/skupper_network_crd.yaml
  kubectl apply -f https://github.com/fgiorgetti/skupper/raw/refs/heads/multi-van/config/crd/bases/skupper_network_link_crd.yaml
  kubectl apply -f https://github.com/fgiorgetti/skupper/raw/refs/heads/multi-van/config/crd/bases/skupper_inter_network_ingress_crd.yaml
  kubectl apply -f https://github.com/fgiorgetti/skupper/raw/refs/heads/multi-van/config/crd/bases/skupper_network_access_crd.yaml
  kubectl apply -f https://github.com/fgiorgetti/skupper/raw/refs/heads/multi-van/config/crd/bases/skupper_certificate_request_crd.yaml
  ```
- Change the skupper-controller deployment to use multi-van images

  a. Run `kubectl edit deployment skupper-controller -n skupper`

  b. Swap the kube-adaptor, skupper-router, and controller images for the following:

     - quay.io/fgiorgetti/kube-adaptor:multi-van
     - quay.io/tedlross/skupper-router:multi-van
     - quay.io/fgiorgetti/controller:multi-van

- Run `kubectl edit clusterrole skupper-controller -n skupper` and add the following to the skupper.io apiGroups section:

  - networks
  - networks/status
  - internetworkingresses
  - internetworkingresses/status
  - networklinks
  - networklinks/status
  - networkaccesses
  - networkaccesses/status
  - certificaterequests
  - certificaterequests/status
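Before creating the van site, it can be worth spot-checking that the CRDs registered and that the ClusterRole edit took effect. A sketch (the grep patterns are inferred from the resource names above):

```shell
# CRDs applied from the multi-van branch should show up in the CRD list.
kubectl get crd | grep skupper
# The edited ClusterRole should now mention the newly added resources.
kubectl get clusterrole skupper-controller -o yaml | grep -E 'networkaccesses|certificaterequests'
```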
```shell
kubectl create namespace van
kubectl config set-context --current --namespace van
skupper site create van-site
```

- In the console, navigate to the "VANs" tab and click "Create Externally-Created VAN..."
- Give it a name and click "Submit"
- Click on the newly created van and navigate to the "Configuration" tab
- Select the configuration type of "VAN Site Connected to the Management Backbone" and select the backbone we created in the previous steps from the dropdown
- Click the "download configuration" button and apply the downloaded yaml in the van namespace
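After applying the downloaded configuration, you can check that the site resources exist in the van namespace. A sketch (the `sites` resource name assumes the Site CRD's plural; `kubectl get pods` works regardless):

```shell
# The skupper site and its router pods should be present in the van namespace.
kubectl -n van get pods
kubectl -n van get sites
```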
In order to check that the van and backbone are connected and communicating, run the following command in the backbone terminal:

```shell
kubectl get routes
```

There should be two active routes, one for the "manage" access point and another for the "van" access point.