This repository manages the full infrastructure of my homelab using GitOps with ArgoCD.
All manifests are organized under day2/ and applied to the cluster via ArgoCD (OpenShift GitOps).
Dell Precision Tower 7810
| Component | Spec | Purpose |
|---|---|---|
| CPU | 72 cores | Bare metal SNO + nested VMs |
| RAM | 160 GB | Bare metal SNO + nested VMs |
| NVMe 1 | 512 GB | SNO bare metal OS (RHCOS) |
| NVMe 2 | 1 TB | Nested cluster VM root disks (lvms-fast-ssd-pool) |
| SATA SSD 1 | — | Image registry (local-sata-registry) |
| SATA SSD 2 | — | Future use |
| NIC | 1x 1GbE (enp0s25) | Single interface, enslaved to OVS br-ex |
```
192.168.2.0/24
├── 192.168.2.1   Gateway (router)
├── 192.168.2.50  SNO — main.araclab.xyz (bare metal, OCP 4.21)
│   └── OpenShift Virtualization
│       ├── master-0  192.168.2.62
│       ├── master-1  192.168.2.63
│       ├── master-2  192.168.2.64
│       └── worker-0  192.168.2.65
├── 192.168.2.51  CoreDNS — Raspberry Pi (Wi-Fi)
├── 192.168.2.60  Nested API VIP — api.nested.araclab.xyz
└── 192.168.2.61  Nested Ingress VIP — *.apps.nested.araclab.xyz
```
Main cluster (SNO):
- Domain: `main.araclab.xyz`
- OCP version: 4.21
- Installed via: interactive assisted installer (bare metal)

Nested cluster:
- Domain: `nested.araclab.xyz`
- OCP version: 4.21
- Topology: 3 control planes + 1 worker running as KubeVirt VMs on the SNO
- Installed via: agent-based installer (`openshift-install agent create image`)
- CoreDNS running on a Raspberry Pi (Wi-Fi, 192.168.2.51)
- Handles internal resolution for both `main.araclab.xyz` and `nested.araclab.xyz`
- Forwards all other queries to `1.1.1.1` and `8.8.8.8`
| Operator | Purpose |
|---|---|
| OpenShift Virtualization | Run nested VMs |
| Kubernetes NMState | Network state management |
| LVM Storage | Manage NVMe 2 — storage class lvms-fast-ssd-pool |
| Local Storage Operator | Manage SATA SSD 1 — storage class local-sata-registry |
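Each of these operators is declared under `day2/operators/`. As an illustration only — the channel, namespace, and package names below are assumptions, not values copied from this repo's manifests — a minimal OLM `Subscription` looks roughly like:

```yaml
# Illustrative sketch — channel/namespace values are assumptions.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: kubevirt-hyperconverged   # OpenShift Virtualization
  namespace: openshift-cnv
spec:
  channel: stable
  name: kubevirt-hyperconverged
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```

ArgoCD syncs these Subscriptions like any other manifest, so operator installs are reproducible from Git.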
```
day2/
├── nested/
│   ├── agent-installer/     # install-config.yaml and agent-config.yaml (reference)
│   ├── networking/          # NetworkAttachmentDefinition for OVN localnet
│   └── vms/                 # VirtualMachine, DataVolume, and Namespace manifests
├── operators/               # Operator subscriptions and operands
├── lvm/                     # LVMCluster manifest
├── lso/                     # Local Storage and image registry manifests
├── acme-issuer/             # Cluster certificate issuer
├── argo-apps/               # ArgoCD app-of-apps definitions
├── coredns/                 # CoreDNS Corefile (deployed on Raspberry Pi)
└── day1-gitops-manual/      # Bootstrap manifests to install ArgoCD itself
```
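The `argo-apps/` directory follows the app-of-apps pattern: a parent Application generates one child Application per directory above. As a sketch — the repo URL and sync options here are placeholders, not this repo's actual values — a child Application could look like:

```yaml
# Illustrative sketch — repoURL and syncPolicy are assumptions.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: operators
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://github.com/example/homelab.git  # placeholder
    targetRevision: main
    path: day2/operators
  destination:
    server: https://kubernetes.default.svc
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```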
Installed OpenShift 4.21 on the Dell Precision using the interactive assisted installer.
The single NIC (enp0s25) is managed by OVN-Kubernetes and enslaved to the OVS bridge br-ex.
Via the OCP console or CLI, installed:
- OpenShift Virtualization
- LVM Storage — created an `LVMCluster` pointing to NVMe 2, producing `lvms-fast-ssd-pool`
- Local Storage Operator — created a `LocalVolume` on SATA SSD 1 for the image registry
- Kubernetes NMState
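The `LVMCluster` step above can be sketched as follows; the device path `/dev/nvme1n1` and thin-pool sizing are assumptions, not values from this repo's `day2/lvm/` manifest. LVM Storage names the resulting storage class `lvms-<deviceClassName>`, which is how `lvms-fast-ssd-pool` comes about:

```yaml
# Sketch only — device path and thin-pool sizing are assumptions.
apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
  name: lvmcluster
  namespace: openshift-storage
spec:
  storage:
    deviceClasses:
      - name: fast-ssd-pool        # yields storage class lvms-fast-ssd-pool
        default: true
        deviceSelector:
          paths:
            - /dev/nvme1n1         # assumed path of the 1 TB NVMe
        thinPoolConfig:
          name: thin-pool
          overprovisionRatio: 10
          sizePercent: 90
```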
Configured CoreDNS on the Raspberry Pi with:
- A zone for `main.araclab.xyz` pointing to the SNO node (192.168.2.50)
- A zone for `nested.araclab.xyz` pointing to the nested cluster VIPs (192.168.2.60/61)
- A wildcard rewrite for `*.apps.*.araclab.xyz`
- A global forwarder to `1.1.1.1`/`8.8.8.8` for all other lookups
The Corefile lives in coredns/Corefile in this repo.
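The real Corefile in `coredns/Corefile` is authoritative; a minimal sketch of the layout described above — assuming the `hosts` plugin for the fixed records and the `template` plugin for the `*.apps` wildcards, which may differ from the actual file — might be:

```
# Sketch only — the actual Corefile in coredns/Corefile is authoritative.
main.araclab.xyz {
    hosts {
        192.168.2.50 main.araclab.xyz
        fallthrough
    }
    template IN A apps.main.araclab.xyz {
        answer "{{ .Name }} 300 IN A 192.168.2.50"
    }
}
nested.araclab.xyz {
    hosts {
        192.168.2.60 api.nested.araclab.xyz
        fallthrough
    }
    template IN A apps.nested.araclab.xyz {
        answer "{{ .Name }} 300 IN A 192.168.2.61"
    }
}
. {
    forward . 1.1.1.1 8.8.8.8
}
```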
Created install-config.yaml and agent-config.yaml under day2/nested/agent-installer/.
Key decisions:
- `platform: none` — no cloud provider
- `networkType: OVNKubernetes`
- Static IPs assigned per host via MAC address in `agent-config.yaml`
- `rendezvousIP: 192.168.2.62` (master-0 acts as the bootstrap rendezvous host)
- DNS server set to `192.168.2.51` (Raspberry Pi CoreDNS)
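For illustration, the per-host MAC-to-static-IP pinning in `agent-config.yaml` looks roughly like this (one host shown; the guest interface name `enp1s0` is an assumption — the MAC and IPs come from the mapping table below):

```yaml
# Sketch of one host entry — interface name enp1s0 is an assumption.
apiVersion: v1beta1
kind: AgentConfig
metadata:
  name: nested
rendezvousIP: 192.168.2.62
hosts:
  - hostname: master-0
    interfaces:
      - name: enp1s0
        macAddress: 02:7c:71:f2:81:4b
    networkConfig:
      interfaces:
        - name: enp1s0
          type: ethernet
          state: up
          ipv4:
            enabled: true
            dhcp: false
            address:
              - ip: 192.168.2.62
                prefix-length: 24
      dns-resolver:
        config:
          server:
            - 192.168.2.51
      routes:
        config:
          - destination: 0.0.0.0/0
            next-hop-address: 192.168.2.1
            next-hop-interface: enp1s0
```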
```
openshift-install agent create image --dir day2/nested/agent-installer/
```

Created a DataVolume for the ISO upload and used virtctl from a Mac to upload:

```
virtctl image-upload dv nested-ocp-agent-iso \
  --namespace nested-cluster \
  --size=5Gi \
  --image-path agent.x86_64.iso \
  --storage-class lvms-fast-ssd-pool \
  --upload-proxy-url https://cdi-uploadproxy-openshift-cnv.apps.main.araclab.xyz \
  --insecure
```

Each VM also gets a cloned copy of the ISO (see iso-datavolume.yaml).
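The per-VM ISO clone in `iso-datavolume.yaml` can be sketched as a CDI DataVolume with a PVC source. The clone's target name is a hypothetical; the source name, namespace, and storage class match the upload command above:

```yaml
# Sketch — target name is hypothetical; source matches the uploaded DV above.
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: master-0-agent-iso
  namespace: nested-cluster
spec:
  source:
    pvc:
      name: nested-ocp-agent-iso
      namespace: nested-cluster
  storage:
    storageClassName: lvms-fast-ssd-pool
    resources:
      requests:
        storage: 5Gi
```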
The challenge: the SNO has a single NIC (enp0s25) already managed by OVN-Kubernetes
and enslaved to the OVS bridge br-ex. VMs needed L2 access to 192.168.2.0/24 so they
could get their static IPs and reach the internet for image pulls.
Solution: OVN localnet topology
The OVS bridge mapping physnet:br-ex already exists on the node. A
NetworkAttachmentDefinition of type ovn-k8s-cni-overlay with topology: localnet
uses this mapping to give VMs direct L2 access to the physical network — no additional
bridges or NMState changes required.
Critical detail: the NAD config.name field must be "physnet" (matching the OVS bridge
mapping key), not the NAD resource name. The subnets: field must be omitted to prevent
OVN from taking over IPAM and enforcing port security.
```yaml
# day2/nested/networking/networkattachmentdef.yaml
config: |
  {
    "cniVersion": "0.3.1",
    "name": "physnet",
    "type": "ovn-k8s-cni-overlay",
    "topology": "localnet",
    "netAttachDefName": "nested-cluster/br-nested"
  }
```

**Warning:** Never apply an NMState `NodeNetworkConfigurationPolicy` that references `br-ex`. Doing so removes OVN-K patch ports and breaks all cluster networking. The node becomes unreachable and NMState's checkpoint rollback does not reliably recover it.
Created DataVolume resources for the root disks (blank, sized per role) and
VirtualMachine resources with:
- MAC addresses pinned to match the static IP mapping in `agent-config.yaml`
- Boot order: agent ISO first, then root disk
- Interface: `bridge` binding on the localnet NAD
MAC to IP mapping:

| VM | MAC | IP |
|---|---|---|
| master-0 | 02:7c:71:f2:81:4b | 192.168.2.62 |
| master-1 | 02:7c:71:f2:81:4c | 192.168.2.63 |
| master-2 | 02:7c:71:f2:81:4d | 192.168.2.64 |
| worker-0 | 02:7c:71:f2:81:4e | 192.168.2.65 |
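Putting those pieces together, the relevant parts of one VirtualMachine manifest might look like the abridged sketch below. The MAC and NAD reference come from the table and NAD above; the DataVolume names and device buses are illustrative assumptions:

```yaml
# Abridged sketch — DataVolume names and buses are assumptions;
# MAC and NAD reference come from the mapping table and NAD above.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: master-0
  namespace: nested-cluster
spec:
  running: true
  template:
    spec:
      domain:
        devices:
          disks:
            - name: iso
              bootOrder: 1          # agent ISO boots first
              cdrom:
                bus: sata
            - name: rootdisk
              bootOrder: 2
              disk:
                bus: virtio
          interfaces:
            - name: lan
              bridge: {}            # bridge binding on the localnet NAD
              macAddress: "02:7c:71:f2:81:4b"
      networks:
        - name: lan
          multus:
            networkName: nested-cluster/br-nested
      volumes:
        - name: iso
          dataVolume:
            name: master-0-agent-iso   # hypothetical ISO clone name
        - name: rootdisk
          dataVolume:
            name: master-0-root        # hypothetical root-disk DV name
```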
Started all 4 VMs. The agent installer ran on each node:
- `master-0` (192.168.2.62) served as the rendezvous host
- Each node pulled the agent image from `quay.io` and registered with master-0
- The cluster installation proceeded automatically once all nodes registered

Monitor progress:

```
ssh core@192.168.2.62 "sudo journalctl -u agent -f"
```