vulcanus-proxmox

Starting with Proxmox & Ansible and building out from there

Required environment variables

DOMAIN=example.com
PUBLIC_KEY="ssh-ed25519 asdfasdfasdfasdfasdf willam@example.com"
PROXMOX_API_USER=proxmox_api_user@pam
PROXMOX_API_PASSWORD=password
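These can be exported for the current shell session, e.g. (values below are the same placeholders, not real credentials):

```shell
# Placeholder values — substitute your own domain, key, and credentials.
export DOMAIN=example.com
export PUBLIC_KEY="ssh-ed25519 asdfasdfasdfasdfasdf willam@example.com"
export PROXMOX_API_USER=proxmox_api_user@pam
export PROXMOX_API_PASSWORD=password
```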

Principles

  • Data files are snapshotted by ZFS and backed up for remote sync by borg.
  • VMs and such are backed up by Proxmox Backup Server, with retention controlled in Proxmox. We keep a minimal number of ZFS snapshots of those locations (just enough to rule out short-term error).

Ad-hoc Kubernetes

Use k9s, it's amazing for peering at Kubernetes cluster resources. Its log view can be a bit finicky, though, so fall back to kubectl logs as necessary.

Useful line for ad-hoc operations (where beets-import-27836809-c6wsg is an existing pod, beets-debug is the name of the debug copy you will make from it, and beets-import is the container in that pod to attach to):

kubectl debug -n apps -it --copy-to=beets-debug --container=beets-import beets-import-27836809-c6wsg -- sh
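Note the copied debug pod is not cleaned up automatically when you exit; remove it by hand (pod name follows the example above):

```shell
# The pod created by --copy-to persists after the debug session ends;
# delete it once you're finished poking around.
kubectl delete pod -n apps beets-debug
```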

Data migration to Kubernetes

In this example Deployment, note the initContainer and the rancher-key volume, which mounts an SSH key that can be used to connect to the old server.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: niftyapp
  labels:
    app: niftyapp
spec:
  selector:
    matchLabels:
      app: niftyapp
  template:
    metadata:
      labels:
        app: niftyapp
    spec:
      initContainers:
        - name: config-data
          image: debian
          command: ["/bin/sh", "-c"]
          args: ["apt update; apt install -y rsync openssh-client; rsync -vrtplDog --append-verify --chown=1000:1000 rancher@192.168.0.112:/home/rancher/docker-vulcanus/nifty/config/* /data/"]
          volumeMounts:
            - mountPath: /data
              name: config
            - name: rancher-key
              mountPath: /root/.ssh/
              readOnly: true
      containers:
        - name: niftyapp
          image: nifty/app
          ports:
            - containerPort: 32400
          volumeMounts:
            - mountPath: /config
              name: config
      volumes:
        - name: config
          persistentVolumeClaim:
            claimName: nifty-config-pvc
        - name: rancher-key
          secret:
            secretName: rancher-key
            defaultMode: 0400
            items:
              - key: ssh-privatekey
                path: id_rsa
              - key: ssh-publickey
                path: id_rsa.pub
              - key: known_hosts
                path: known_hosts
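The rancher-key Secret referenced above has to exist before the Deployment is applied. It can be created from local key files along these lines (the file paths and the apps namespace are assumptions — adjust to your environment):

```shell
# Hypothetical local paths — point these at wherever the migration key lives.
kubectl create secret generic rancher-key -n apps \
  --from-file=ssh-privatekey=$HOME/.ssh/rancher_id_rsa \
  --from-file=ssh-publickey=$HOME/.ssh/rancher_id_rsa.pub \
  --from-file=known_hosts=$HOME/.ssh/known_hosts
```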

Hard drive management & disk replacement

Follow the guide at docs/disk_management.md

Networking notes

  • 192.168.0.202: IP address for CoreDNS. Serves as the DNS server for the whole internal network (including and beyond Kubernetes).
  • 192.168.0.203: IP address for the internal Kubernetes ingress controller. CoreDNS (192.168.0.202) falls through to this for any *.immortalkeep.com domains.
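A quick way to confirm the fall-through behavior is to query CoreDNS directly (the hostname below is illustrative):

```shell
# Ask CoreDNS for an internal domain; the answer should be the
# ingress controller's address, 192.168.0.203.
dig +short @192.168.0.202 someapp.immortalkeep.com
```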

Increase VM disk size

  1. SSH into the Proxmox host
  2. Run qm resize 910 virtio1 +512G, where 910 is the VM ID, virtio1 is the disk ID, and +512G is the amount of space to add.
  3. Back in this repo, edit terraform/main.tf and update the openebs_disk_size to match the new VM disk size.
  4. tofu apply
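The procedure above, condensed (910, virtio1, and +512G are the example values from step 2):

```shell
# On the Proxmox host: grow the VM's virtual disk.
qm resize 910 virtio1 +512G

# Back on your workstation, after updating openebs_disk_size
# in terraform/main.tf to match:
tofu apply
```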

Troubleshooting

Plex is failing to start

kubectl exec into the Plex container and check whether the file /config/Library/Application\ Support/Plex\ Media\ Server/Preferences.xml is empty. If it is, delete it and recreate the pod. Launching Plex in a browser may still not work afterwards, because the server hasn't been claimed locally. To claim it, run kubectl port-forward -n apps plex-blahblah-blah 32400:32400 (with the real pod name) to forward its port to your machine, then visit http://localhost:32400/web/index.html in your browser (yes, the full URL is important).

Mumble is failing to start, plex is kinda unreachable

Especially if Mumble's logs say that it can't write to its database, this is likely a sign that the VM's disk is full. Increase its size.
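Assuming shell access to the VM, a quick check (a full volume shows up at or near 100% in the Use% column):

```shell
# Show filesystem usage; look for a volume at or near 100% Use%.
df -h
```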

Proxmox Backup Manager (PBM) out of disk space

  1. SSH into the Proxmox host
  2. qm resize 107 virtio1 +548G
  3. SSH into the Proxmox Backup Manager host: ssh root@192.168.0.107
  4. Confirm you're operating on the right disk: lsblk
  5. sgdisk -e /dev/vdb
  6. sgdisk -d 1 /dev/vdb
  7. sgdisk -N 1 /dev/vdb
  8. partprobe /dev/vdb
  9. resize2fs /dev/vdb1
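The same sequence, with the intent of each command spelled out (steps 4–9; run on the PBM host after the qm resize):

```shell
lsblk                  # confirm /dev/vdb is the disk that grew
sgdisk -e /dev/vdb     # move the backup GPT header to the new end of the disk
sgdisk -d 1 /dev/vdb   # delete partition 1 (table entry only; data untouched)
sgdisk -N 1 /dev/vdb   # recreate partition 1 spanning the largest free block
partprobe /dev/vdb     # tell the kernel to re-read the partition table
resize2fs /dev/vdb1    # grow the ext filesystem to fill the partition
```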

Ansible connection fails with `Failed to connect to the host via ssh: ssh_askpass: exec(): No such file or directory`

Ansible can't prompt for SSH key passphrases, so we need to use an ssh-agent instead. This is slightly more involved under the fish shell:

eval (ssh-agent -c)
ssh-add ~/.ssh/id_ed25519
ansible-playbook wireguard.yaml
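In bash or zsh the equivalent is:

```shell
# -s emits Bourne-style output for bash/zsh (fish uses -c above).
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_ed25519
ansible-playbook wireguard.yaml
```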

References
