Starting with Proxmox & Ansible and building out from there
```
DOMAIN=example.com
PUBLIC_KEY="ssh-ed25519 asdfasdfasdfasdfasdf willam@example.com"
PROXMOX_API_USER=proxmox_api_user@pam
PROXMOX_API_PASSWORD=password
```
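One minimal way to get variables like these into your shell (a sketch; the example writes a throwaway `.env` just to be self-contained):

```shell
# Example .env file. Quoting matters: values containing spaces
# (like PUBLIC_KEY) must be quoted or sourcing the file will fail.
cat > .env <<'EOF'
DOMAIN=example.com
PUBLIC_KEY="ssh-ed25519 asdfasdfasdfasdfasdf willam@example.com"
EOF

set -a        # auto-export every variable defined while this flag is on
. ./.env      # source the file: each VAR=value line becomes an exported env var
set +a        # stop auto-exporting
echo "$DOMAIN"   # prints: example.com
```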
- Data files are snapshotted by ZFS and backed up for remote sync by borg.
- VMs and such are backed up by Proxmox Backup Server, with retention controlled in Proxmox. We keep a minimal number of ZFS snapshots of those locations (just enough to rule out short term error).
Use k9s, it's amazing for peering at kubernetes cluster resources. The logs can be a bit finicky though, so still use kubectl logs as necessary.
Useful line for ad-hoc operations (where `beets-import-27836809-c6wsg` is a current pod, `beets-debug` is the replica pod you'll create from it, and `beets-import` is the container in that pod to attach to):

```
kubectl debug -n apps -it --copy-to=beets-debug --container=beets-import beets-import-27836809-c6wsg -- sh
```

The copied pod is not cleaned up automatically; `kubectl delete pod -n apps beets-debug` when you're done.
Using this example deployment, notice the `initContainers` entry and the `rancher-key` volume, which mounts an SSH key that can be used to connect to the old server.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: niftyapp
  labels:
    app: niftyapp
spec:
  selector:
    matchLabels:
      app: niftyapp
  template:
    metadata:
      labels:
        app: niftyapp
    spec:
      initContainers:
        - name: config-data
          image: debian
          command: ["/bin/sh", "-c"]
          args: ["apt update; apt install -y rsync openssh-client; rsync -vrtplDog --append-verify --chown=1000:1000 rancher@192.168.0.112:/home/rancher/docker-vulcanus/nifty/config/* /data/"]
          volumeMounts:
            - mountPath: /data
              name: config
            - name: rancher-key
              mountPath: /root/.ssh/
              readOnly: true
      containers:
        - name: niftyapp
          image: nifty/app
          ports:
            - containerPort: 32400
          volumeMounts:
            - mountPath: /config
              name: config
      volumes:
        - name: config
          persistentVolumeClaim:
            claimName: nifty-config-pvc
        - name: rancher-key
          secret:
            secretName: rancher-key
            defaultMode: 0400
            items:
              - key: ssh-privatekey
                path: id_rsa
              - key: ssh-publickey
                path: id_rsa.pub
              - key: known_hosts
                path: known_hosts
```
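The `rancher-key` Secret referenced by the deployment needs to exist with matching key names. A hypothetical sketch (the key material and host line are placeholders, not values from this repo):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: rancher-key
type: Opaque
stringData:
  ssh-privatekey: |
    -----BEGIN OPENSSH PRIVATE KEY-----
    ...
  ssh-publickey: ssh-ed25519 AAAA... rancher@oldserver
  known_hosts: |
    192.168.0.112 ssh-ed25519 AAAA...
```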
Follow the guide at docs/disk_management.md
- 192.168.0.202: IP address for CoreDNS. Designed to be a DNS server for the whole internal network (including and beyond kubernetes).
- 192.168.0.203: IP address for the internal kubernetes ingress controller. CoreDNS (192.168.0.202) will fall through to this for any *.immortalkeep.com domains.
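The fall-through behavior can be expressed in a Corefile roughly like this (a sketch using CoreDNS's `template` plugin; the actual Corefile and the upstream resolver here are assumptions, not values from this repo):

```
immortalkeep.com:53 {
    # Answer every A query under immortalkeep.com with the ingress IP.
    template IN A {
        answer "{{ .Name }} 60 IN A 192.168.0.203"
    }
}
.:53 {
    forward . 1.1.1.1   # upstream resolver is a placeholder
}
```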
- SSH into the Proxmox host
- Run `qm resize 910 virtio1 +512G`, where `910` is the VM ID, `virtio1` is the disk ID, and `+512G` is the amount of space to add.
- Back in this repo, edit `terraform/main.tf` and update the `openebs_disk_size` to match the new VM disk size.
- Run `tofu apply`
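The `terraform/main.tf` edit is just keeping a variable in step with the resize. A hypothetical sketch (only the `openebs_disk_size` name comes from this doc; the value format and surrounding layout are assumptions):

```hcl
# terraform/main.tf (sketch)
openebs_disk_size = "512G"  # must match the size set via `qm resize`
```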
`kubectl exec` into the Plex container and check whether the file `/config/Library/Application Support/Plex Media Server/Preferences.xml` is empty. If so, delete it and then recreate the pod. Launching Plex in a browser may still not work, because the server hasn't been claimed locally. To claim it, run `kubectl port-forward -n apps plex-blahblah-blah 32400:32400` (with the proper pod name) to forward its port to your machine, then visit http://localhost:32400/web/index.html in your browser (yes, the full URL is important).
Especially if Mumble's logs say that it can't write to the database, this is likely a sign that the VM disk is full. Increase its size:
- SSH into the Proxmox host
- Run `qm resize 107 virtio1 +548G`
- SSH into the Proxmox Backup Manager host: `ssh root@192.168.0.107`
- Confirm you're operating on the right disk: `lsblk`
- Grow the partition and filesystem:

```
sgdisk -e /dev/vdb      # move the backup GPT header to the new end of the disk
sgdisk -d 1 /dev/vdb    # delete partition 1 (the data on disk is untouched)
sgdisk -N 1 /dev/vdb    # recreate partition 1, filling the available space
partprobe /dev/vdb      # make the kernel re-read the partition table
resize2fs /dev/vdb1     # grow the ext filesystem to fill the partition
```
Ansible connection fails with `Failed to connect to the host via ssh: ssh_askpass: exec(): No such file or directory`
Ansible can't prompt for SSH key passphrases, so we need to use an ssh-agent instead. This is made slightly more awkward by running in fish shell (in bash, the first line would be `eval $(ssh-agent -s)`):

```
eval (ssh-agent -c)
ssh-add ~/.ssh/id_ed25519
ansible-playbook wireguard.yaml
```
- https://www.nathancurry.com/blog/14-ansible-deployment-with-proxmox/
- Replacing a ZFS Proxmox boot disk: http://r00t.dk/post/2022/05/02/proxmox-ve-7-replace-zfs-boot-disk/