This directory contains the core infrastructure definitions (Layer 4 & 5) managed by ArgoCD.
Note that this relies on the user having prepared their environment (see dev_setup.md in the Yggdrasil project), created a k3d cluster, and run the bootstrap script.
Create the k3d cluster (disable the built-in Traefik, since Nordri installs its own):

```sh
k3d cluster create refr-k8s \
  --port "8080:80@loadbalancer" --port "8443:443@loadbalancer" \
  --agents 2 --k3s-arg "--disable=traefik@server:*"
```

For Rancher Desktop: disable Traefik via the GUI instead.
Run the bootstrap from within a compatible bash shell:

- Mac/Linux (bash):

  ```sh
  ./bootstrap.sh homelab
  ```

- Windows:
  - `cd "C:\Program Files\Git\bin"` (or wherever Git Bash is installed)
  - Run `bash` or `./bash.exe` if using a PowerShell terminal
  - `cd /d/Dev/GitWS/nordri` (or wherever you cloned the repo)
  - Run `./bootstrap.sh homelab`
After bootstrapping, you can access the services:

- Gitea: `nordri-admin` / `nordri-password-change-me` (defined in `bootstrap.sh`)
- ArgoCD:
  - User: `admin`
  - Password: run the command below to retrieve it.

    ```sh
    kubectl -n argo get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d
    ```
If you have deployed the IngressRoutes (Layer 4), you can access them at:

- ArgoCD: http://argocd.localhost (or the load balancer IP)
- Gitea: http://gitea.localhost
- Longhorn: http://longhorn.localhost
- Garage S3: http://s3.localhost (S3 API endpoint; returns XML)
- Garage Web: http://garage.localhost (static site hosting; returns 404 by default until a bucket is configured for website hosting)
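These routes map hostnames to in-cluster Services via Traefik. As a rough illustration only (the names, namespace, and entrypoint below are assumptions, not the repo's actual manifests), a Traefik `IngressRoute` for Gitea might look like:

```yaml
# Illustrative sketch, not a file from this repo
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: gitea            # hypothetical name
  namespace: gitea
spec:
  entryPoints:
    - web                # Traefik's HTTP entrypoint
  routes:
    - match: Host(`gitea.localhost`)
      kind: Rule
      services:
        - name: gitea-http   # the same Service used for port-forwarding
          port: 3000
```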
If Ingress is not yet up (or you are debugging Layer 4), use port forwarding:

```sh
# ArgoCD
kubectl port-forward svc/argocd-server -n argo 8080:443
# Access at https://localhost:8080

# Gitea
kubectl port-forward svc/gitea-http -n gitea 3000:3000
# Access at http://localhost:3000

# Longhorn
kubectl port-forward svc/longhorn-frontend -n longhorn 8000:80
# Access at http://localhost:8000

# Garage S3
kubectl port-forward svc/garage -n garage 3900:3900
# Access at http://localhost:3900
```

We use a Kustomize-based App-of-Apps pattern to handle environment differences.
```text
/platform
  /argocd          # The Root App-of-Apps definition
  /fundamentals    # Layer 4 Components (Traefik, Crossplane, etc.)
    /apps          # ArgoCD Application definitions (Helm Wrappers)
    /manifests     # Raw Kubernetes Resources (ClusterIssuers, Providers)
    /overlays      # Kustomize Overlays per Environment
      /gke         # Includes Apps specific to GKE (e.g., CertManager)
      /homelab     # Includes Apps specific to Homelab (e.g., Garage, Longhorn)
```
- Bootstrap: The `bootstrap.sh` script hydrates the Git repo from the user's workspace into the cluster, then points the Root App to the correct overlay (e.g., `platform/fundamentals/overlays/homelab`).
- Argo Sync: ArgoCD syncs the `kustomization.yaml` in that overlay.
- Application Creation: The overlay includes the specific `Application` manifests from `apps/`.
- Resource Creation: The overlay includes raw manifests from `manifests/`.
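As a sketch of how an overlay ties this together (the file names below are illustrative, not the repo's actual contents), a homelab `kustomization.yaml` might include:

```yaml
# platform/fundamentals/overlays/homelab/kustomization.yaml (illustrative)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../apps/traefik.yaml          # ArgoCD Application wrapping the Traefik chart
  - ../../apps/longhorn.yaml         # homelab-only storage
  - ../../manifests/cluster-issuer.yaml  # raw manifest, no Application wrapper
```

Syncing this one file is what causes ArgoCD to create the child Applications and raw resources for that environment.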
- Storage Strategy: The default `local-path` provisioner (built into k3d/k3s) is used for development. It is node-local and does not replicate across nodes.
  - k3d (Docker): Longhorn is non-functional because k3d containers lack `iscsid`. Use `local-path` for development.
  - Rancher Desktop: The bootstrap script auto-installs `open-iscsi` in the VM, so Longhorn works here.
  - Multi-node homelab / production: Longhorn (or another distributed storage solution such as Rook-Ceph) is essential, since `local-path` does not survive node loss. This is a future migration target.
  - GKE: Uses its own CSI driver (Persistent Disk). Longhorn is not needed.
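For development workloads this means simply requesting the built-in class. A minimal PVC sketch (the claim name is hypothetical):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-data            # hypothetical name
spec:
  storageClassName: local-path  # k3s built-in; node-local, not replicated
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

On k3s/k3d, `local-path` is also the default StorageClass, so omitting `storageClassName` usually yields the same result.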
- Argonception: Most YAML files in `apps/` are `kind: Application`. They tell Argo to sync another Helm chart (e.g., the official Traefik chart).
- Namespaces: Any "loose" manifest (like a `ClusterIssuer`) applied by the App-of-Apps will default to the `argo` namespace unless explicitly namespaced in the file.
- Values: Environment-specific values (e.g., LoadBalancer vs. NodePort) are injected via the `envs/` directory, which the App-of-Apps or individual Applications reference.
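As a rough sketch of such a wrapper (the chart repo URL, version, and names below are placeholders, not this repo's actual values), one of these `Application` files looks like:

```yaml
# Illustrative sketch of an Application in apps/
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: traefik
  namespace: argo              # Applications live where ArgoCD runs
spec:
  project: default
  source:
    repoURL: https://traefik.github.io/charts   # upstream Helm repo (placeholder)
    chart: traefik
    targetRevision: "33.0.0"                    # placeholder version
  destination:
    server: https://kubernetes.default.svc
    namespace: traefik         # explicit, to avoid the `argo` namespace default
  syncPolicy:
    automated: {}
```

Note the explicit `destination.namespace`: this is the same gotcha as the "loose" manifests above, handled at the Application level.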
To push changes to the cluster without re-running the full bootstrap (which can be slow):

- Edit files in `platform/` or `envs/`.
- Run:

  ```sh
  ./update.sh homelab
  ```

This hydrates the configuration, pushes it to the internal Gitea, and triggers an ArgoCD sync.
To verify the health of all platform components (Pods, Ingress, Storage):

```sh
pip install -r requirements.txt
python validate.py
```

Garage requires an initial layout assignment to function. This is NOT handled by Helm/ArgoCD automatically. If you reset the cluster, run:

```sh
# List nodes and their IDs
kubectl exec -n garage garage-0 -- /garage status

# Assign all nodes to zone 'dc1' with 1GB capacity
kubectl exec -n garage garage-0 -- /garage layout assign -z dc1 -c 1G <NODE_ID_1> <NODE_ID_2> ...

# Apply the changes
kubectl exec -n garage garage-0 -- /garage layout apply --version 1
```
For testing Crossplane Compositions (infrastructure logic), we recommend KUTTL (the KUbernetes Test TooL):

- It allows declarative testing (YAML) of infrastructure claims.
- See jonashackt/crossplane-kuttl for examples.
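A KUTTL test is a directory of numbered step files: KUTTL applies each `NN-*.yaml` step and then waits until the cluster matches the corresponding `NN-assert.yaml`. As a sketch (the claim kind, API group, and names here are hypothetical, not this repo's actual Compositions):

```yaml
# tests/e2e/bucket/00-apply.yaml — apply a claim (hypothetical resource)
apiVersion: example.yggdrasil.cloud/v1alpha1
kind: BucketClaim
metadata:
  name: test-bucket
```

```yaml
# tests/e2e/bucket/00-assert.yaml — KUTTL waits until this state is reached
apiVersion: example.yggdrasil.cloud/v1alpha1
kind: BucketClaim
metadata:
  name: test-bucket
status:
  conditions:
    - type: Ready
      status: "True"
```

The assert file only lists the fields that must match, so checking a single `Ready` condition is enough to gate the test.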
- The Issuer uses a hardcoded email (admin@yggdrasil.cloud) and Gateway name (traefik-gateway). You may want to template these using Kustomize overlays in `envs/` later if they vary significantly.
- Need to compare Crossplane versions with what worked in Mimir.
- Need to cover Velero in a later Day 2 step. Original details can be fetched from `outdated/setup.md`.