A how-to guide for planning and tracking a Proxmox home lab build. This guide was initially written for Proxmox 6.2 but will be kept up to date as necessary.
These are my goals for this project.
- Easy to build & maintain, in case I need to rebuild it from scratch
- Modular, I prefer to put things in a Container or VM rather than the base config
- Secure, with security built in from the start
- Single host (no HA), this is a home lab build, not enterprise production; I can afford some downtime and don't need High Availability
- Redundant Storage, in case of drive failure
- Offsite/Offhost Backups, in case of complete host failure
- Monitoring & Alerts, to know what's going on
- Automated, Automate the world
1. Initial Install
2. Base Config
3. Setting up Networking
4. Building ZFS Storage
5. Securing Proxmox
6. Deploying the first VM :: PFSense
7. Deploying the first Container :: NAS
8. To Do
- Intel EMT64 or AMD64 with Intel VT/AMD-V CPU flag.
- Memory, minimum 2 GB for the OS and Proxmox VE services, plus designated memory for guests. For Ceph or ZFS additional memory is required, approximately 1 GB of memory for every TB of used storage.
- Fast and redundant storage, best results with SSD disks.
- OS storage: Hardware RAID with battery-protected write cache ("BBU") or non-RAID with ZFS and SSD cache.
- VM storage: For local storage use a hardware RAID with battery backed write cache (BBU) or non-RAID for ZFS. Neither ZFS nor Ceph are compatible with a hardware RAID controller. Shared and distributed storage is also possible.
- Redundant Gbit NICs, additional NICs depending on the preferred storage technology and cluster setup – 10 Gbit and higher is also supported.
- For PCI(e) passthrough a CPU with the VT-d/AMD-Vi CPU flag is needed.
In addition to the official system requirements, we have to take into account our use of ZFS as the storage filesystem, so the ZFS-specific notes should be kept in mind as well.
ZFS depends heavily on memory, so you need at least 8 GB to start. In practice, use as much as you can get for your hardware/budget. To prevent data corruption, we recommend the use of high quality ECC RAM.
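The rule of thumb above can be sketched as a quick back-of-the-envelope calculation. Note that `zfs_min_ram_gb` is a hypothetical helper name for illustration, not a Proxmox or ZFS tool:

```shell
#!/bin/sh
# Rough sizing from the rule above: 8 GB baseline for ZFS,
# plus ~1 GB of RAM per TB of used storage.
# zfs_min_ram_gb is a hypothetical helper, not a real utility.
zfs_min_ram_gb() {
    storage_tb=$1
    echo $((8 + storage_tb))
}

zfs_min_ram_gb 20   # a 20 TB pool suggests at least 28 GB
```

This is only a floor; ARC will happily use far more if you let it.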
If you use a dedicated cache and/or log disk, you should use an enterprise class SSD (e.g. Intel SSD DC S3700 Series). This can increase the overall performance significantly.
Important: Do not use ZFS on top of a hardware RAID controller that has its own cache management. ZFS needs to communicate directly with the disks. An HBA adapter is the way to go, or something like an LSI controller flashed into "IT" mode.
If you are experimenting with an installation of Proxmox VE inside a VM (Nested Virtualization), don’t use virtio for disks of that VM, since they are not supported by ZFS. Use IDE or SCSI instead (works also with virtio SCSI controller type).
A word on choosing an appropriate boot drive. While ESXi suggests using a USB key or SD card to boot its minimal OS, that is primarily because it loads the OS into memory after the initial boot and limits all writes to the boot drive afterwards. FreeNAS, likewise, has been optimized to run on USB keys and limits writes to the boot drive. Proxmox, on the other hand, is based on Debian and has not been optimized to use a USB key as a boot drive.
So you have a choice . . .
1. Install Proxmox to a storage medium that is optimized for lots of writes, e.g. a hard drive or a high-endurance SSD/USB key.
2. If installing to a standard USB key, make sure you have a mirrored boot drive so that when your boot drive fails (and it will) you have a backup ready to take over.
3. Both 1 & 2
My choice in this regard is #3 because I have lots of hard drives in my server and I can use two in a mirrored-vdev ZFS RAID1 pool.
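Once the mirrored boot pool exists, its health can be checked with `zpool status`. Below is a minimal sketch of parsing that output for the pool state; it runs against canned sample text so it works anywhere, but on the real host you would pipe in `zpool status rpool` instead:

```shell
#!/bin/sh
# Canned sample of `zpool status` output with one failed mirror member.
sample_status='  pool: rpool
 state: DEGRADED
config:
        rpool       DEGRADED
          mirror-0  DEGRADED
            sda3    ONLINE
            sdb3    UNAVAIL'

# Pull out the "state:" line; anything other than ONLINE needs attention.
state=$(printf '%s\n' "$sample_status" | awk '/^ state:/ {print $2}')
if [ "$state" != "ONLINE" ]; then
    echo "boot pool needs attention: $state"
fi
```

A cron job built on this idea is a cheap early warning for the "and it will fail" scenario.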
| Hardware | Details |
|---|---|
| Server | Cisco C240 M4L |
| CPU | 2 x E5-2620 v3 (12 cores / 24 Threads) |
| RAM | 64GB ECC |
| Boot Drive | 2 x 2TB |
| Data Drive | 10 x 2TB |
| HBA | CISCO UCSC-MRAID12G |
| Network | 2 x 10GE, 2 x 1GE + IPMI |
| GPU | None |
A note on the RAID card. I have set the drives to JBOD, disabled all caching & turned off the card's BIOS in its settings so it's operating in HBA mode. I still need to test whether the host OS can see the drives directly without a cache in between, as this, generally speaking, isn't a recommended card for ZFS. However, if I put in any 3rd-party card that the BIOS doesn't recognize (read: non-Cisco), the server will override the fan policy & turn the fans on full blast, which converts my home lab build into a Cessna on takeoff.
I'm not going to re-write the installation process here as the official documentation is good in this regard, and if not, there are lots of resources on the web to help. It's pretty easy, however, and can be summed up by . . .
- Download the ISO, https://www.proxmox.com/en/downloads/category/iso-images-pve
- Write the ISO to a bootable USB flash drive using a tool like Etcher
- Insert the USB key into your server and follow the prompts
Although in my case I use the KVM on the IPMI port of my server to load the ISO to a virtual drive and boot the server off of that :D
Using the Proxmox VE Installer
Some Notes:
- Choose `Install Proxmox VE`.
- Because I am installing a ZFS root using a mirrored (RAID1) setup, I'll click `Options`, select `zfs (RAID1)` as my filesystem & pick my two boot drives.
- I leave the `advanced options` at the default.
I know I am wasting a ton of space by not partitioning my boot disk into separate partitions for the host, VMs, containers & data, but my goal is to keep things simple. If I need to perform a complete rebuild or transfer my system to another host, I can reinstall Proxmox on a new drive, redo the base config/networking, connect my data drives, import my pools, VMs & containers, and I'm pretty much back up and running from a boot drive failure.
- the rest is pretty straight forward
Great, you still with me :) This is where the fun part starts.
https://<server_ip>:8006
AND
ssh root@<server_ip>
type in credentials:
- Username: root
- Password: <the_password_you_chose_during_install>
Assuming you were able to log in then you should be good at this point to disconnect the monitor, keyboard and mouse from the server and complete the rest of the steps remotely.
If you weren't able to connect, you need to figure out why. I would suggest trying to ping the <server_ip>; if it responds, the problem may be the username/password. If it doesn't respond, the issue is most likely network related: start by tracing the physical cable and check your network config after that.
Some useful links:
https://pve.proxmox.com/pve-docs/pve-admin-guide.html#sysadmin_network_configuration
https://ubuntu.com/blog/if-youre-still-using-ifconfig-youre-living-in-the-past
Some useful commands:
systemctl status networking
systemctl restart networking
ip address show
ip link show
To make changes to the network config . . .
nano /etc/network/interfaces
Update the config, save the file, then:
systemctl restart networking
ip address show
Check whether the changes took effect & check remote connectivity again.
Because this is a home lab and not an enterprise deployment with a subscription, you need to update the package repos to reflect this.
nano /etc/apt/sources.list
Add the no subscription repo . . .
# PVE pve-no-subscription repository provided by proxmox.com,
# NOT recommended for production use
deb http://download.proxmox.com/debian/pve buster pve-no-subscription
nano /etc/apt/sources.list.d/pve-enterprise.list
Comment out the line in that file like this, otherwise any apt-get update will fail because you don't have access to that repo . . .
#deb https://enterprise.proxmox.com/debian/pve buster pve-enterprise
reference : https://pve.proxmox.com/wiki/Package_Repositories#sysadmin_no_subscription_repo
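The two repo edits above can also be done non-interactively, which is handy for rebuilds. This is a sketch: the helper names are mine, and the release name (`buster`, matching this guide's Proxmox 6.x / Debian 10 base) is assumed — pass the real file paths shown in the text:

```shell
#!/bin/sh
# Sketch of the repo changes above, scriptable for rebuilds.
# disable_enterprise_repo / add_no_subscription_repo are hypothetical
# helper names, not Proxmox tools.

# Comment out every active "deb" line in the given sources file.
disable_enterprise_repo() {
    sed -i 's|^deb |#deb |' "$1"
}

# Append the no-subscription repo line to the given sources file.
add_no_subscription_repo() {
    echo 'deb http://download.proxmox.com/debian/pve buster pve-no-subscription' >> "$1"
}
```

Usage would be `disable_enterprise_repo /etc/apt/sources.list.d/pve-enterprise.list` followed by `add_no_subscription_repo /etc/apt/sources.list`.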
You should now be able to update either via the gui or command line
apt-get update && apt-get dist-upgrade -y
Perform a reboot...'just in case'
reboot
reference : https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_system_software_updates
nano /etc/systemd/timesyncd.conf
Uncomment the lines and update the NTP line to look like this, but pick your own time servers to sync to
[Time]
NTP=time.nrc.ca time.chu.nrc.ca ntp.torix.ca
systemctl restart systemd-timesyncd
journalctl --since -1h -u systemd-timesyncd
reference:: https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_time_synchronization
apt install libsasl2-modules
cp /etc/postfix/main.cf /etc/postfix/main.cf.bak
nano /etc/postfix/main.cf
relayhost = [smtp.gmail.com]:587
smtp_sender_dependent_authentication = yes
sender_dependent_relayhost_maps = hash:/etc/postfix/sender_relayhost.hash
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_auth.hash
smtp_sasl_security_options = noanonymous
smtp_use_tls = yes
smtp_tls_CAfile = /etc/ssl/certs/ca-certificates.crt
NOTE: if you have MFA configured you'll need to create an App Password for your Gmail account and use that below as yourpassword
https://support.google.com/accounts/answer/185833?hl=en
echo [smtp.gmail.com]:587 your_username@gmail.com:yourpassword > /etc/postfix/sasl_auth.hash
Create your sender_relayhost file (this makes sure that you always use your Gmail as the sender):
echo your_username@gmail.com [smtp.gmail.com]:587 > /etc/postfix/sender_relayhost.hash
postmap /etc/postfix/sender_relayhost.hash
postmap /etc/postfix/sasl_auth.hash
chmod 400 /etc/postfix/sasl_auth.*
postfix reload OR systemctl restart postfix.service
Test from Postfix:
systemctl status postfix.service
echo "Test mail from postfix" | mail -s "Test Postfix" test@test.com
Test from PVE:
echo "test" | /usr/bin/pvemailforward
Check the mail logs if something isn't working:
/var/log/mail.warn
/var/log/mail.info
References:
https://github.com/ShoGinn/homelab/wiki/Proxmox-PostFix---Email
https://forum.proxmox.com/threads/get-postfix-to-send-notifications-email-externally.59940/
https://slack.com/apps/A0F7XDUAZ-incoming-webhooks
Configure it how you want, but copy down the webhook URL for later; it will look something like this (this one is faked) . . .
https://hooks.slack.com/services/d8a9das90d8/ds79d07asd0a/dff9sdf89dfdpivcs
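If you'd rather not pull in Perl and CPAN, an incoming webhook can also be hit with plain curl: Slack incoming webhooks accept a JSON body with a `text` field. A minimal sketch, where `WEBHOOK_URL` is a placeholder for your real hook URL:

```shell
#!/bin/sh
# Build the JSON payload that Slack incoming webhooks expect.
slack_payload() {
    printf '{"text": "%s"}' "$1"
}

# Post a message to the webhook. WEBHOOK_URL is a placeholder you must set;
# this sketch does no JSON escaping, so keep messages simple.
notify_slack() {
    curl -s -X POST -H 'Content-Type: application/json' \
        -d "$(slack_payload "$1")" "$WEBHOOK_URL"
}
```

Something like `echo "disk alert" | xargs -I{} notify_slack "{}"` would then push a one-line alert to the channel.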
apt-get install dh-make-perl
cpan Slack::WebHook
nano /usr/local/sbin/post2slack.pl
Add the following lines to the file . . .
#!/usr/bin/perl -T
# https://metacpan.org/pod/Slack::WebHook
use strict;
use warnings;
use Slack::WebHook;

$ENV{'PATH'} = '/sbin:/bin:/usr/sbin:/usr/bin';

my $hook = Slack::WebHook->new( url => 'https://hooks.slack.com/services/fdsafdsfdsfds/ffdsasdfsdfsdf' );
my $stdin_h = do { local $/; <STDIN> };
$hook->post_ok( $stdin_h );
chmod 755 /usr/local/sbin/post2slack.pl
echo "|/usr/local/sbin/post2slack.pl" >> /root/.forward
echo "Test mail from postfix sent to slack" | mail -s "Test Slack" root
If you have a paid Slack account you can alternatively set up an email address that Slack will monitor for incoming emails and post to a Slack channel.
This is the slack app you need to setup
https://slack.com/apps/A0F81496D-email
Add the Slack email to the bottom of root's .forward file
echo "foobar@example.com" >> /root/.forward
And test
echo "Test mail from postfix sent to slack" | mail -s "Test Slack" root
References:
https://api.slack.com/messaging/webhooks
https://metacpan.org/pod/Slack::WebHook
https://forum.proxmox.com/threads/proxmox-alert-emails-can-you-automatically-cc-people.53332/
If you really want to keep the network simple, the basic Linux bridge is perfectly fine. In fact, in recent versions of Linux & Proxmox it has most of the functionality you would need/want. However, the goals of this home lab are not only to be simple but also modular, secure, extensible & maybe optimized for speed. For that we need Open vSwitch.
To Sum Up . . .
Linux Bridge = LAN/Switch
Linux Bond = Port-Channel
Linux VLAN = Virtual LAN
Linux Tap = Virtual Interface
OVS Bridge = LAN/Switch with one or more Ports.
OVS Port = A Port in a Bridge. A Port can have 1 or more Interfaces. A Port with more than 1 Interface is a Bonded Port.
OVS Interface = An Interface in a Port. An Interface can be many types but the most basic ones are System, Internal & tap.
See man ovs-vsctl or man ovs-vswitchd.conf.db or man ovs-vswitchd for more details, it's really well documented.
References:
https://github.com/openvswitch/ovs/blob/master/debian/openvswitch-switch.README.Debian
A Layer 2 switch connecting one or more physical or virtual interfaces together. Bridges direct traffic to the appropriate interfaces based on MAC addresses, so traffic goes directly from one host to another through the bridge.
Default Configuration using a Bridge via Proxmox Admin Guide 3.3.4

Default Linux Bridge Example
via the CLI
nano /etc/network/interfaces
auto eno1
iface eno1 inet manual

auto vmbr0
iface vmbr0 inet manual
bridge_ports eno1
bridge_stp off
bridge_fd 0
Default Linux Bridge Example with a static ip for the host
auto eno1
iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
address 192.168.10.2
netmask 255.255.255.0
gateway 192.168.10.1
bridge_ports eno1
bridge_stp off
bridge_fd 0
How to show the Mac Addresses of the interfaces on the bridge . . .
bridge fdb show
01:00:5e:00:00:fb dev rename4 master vmbr0
01:00:5e:7f:ff:fa dev eno1 vlan 1 master vmbr0 permanent
01:00:5e:7f:ff:fb dev eno1 master vmbr0 permanent
40:2c:ff:ec:f8:ff dev eno1 master vmbr0
00:0c:11:90:c6:11 dev eno1 master vmbr0
01:00:5e:7f:ff:ff dev eno1 self permanent
33:33:00:00:00:01 dev vmbr0 self permanent
How to show the ARP table for the host . . .
ip neigh show
192.168.10.1 dev vmbr0 lladdr 00:0c:11:90:c6:11 STALE
192.168.10.2 dev vmbr0 lladdr 40:2c:ff:ec:f8:ff REACHABLE
192.168.10.100 dev vmbr0 lladdr 40:2c:ff:ec:f8:ff REACHABLE
Other useful commands
# Check if interfaces are added to the bridge
brctl show vmbr0
# Check if the interfaces are up and receiving traffic
ip link show
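As a small worked example of reading that `bridge fdb show` output, the sketch below counts learned MAC addresses per interface — handy for spotting which port a guest's traffic is using. It runs against canned sample lines so it works anywhere; on the host you would pipe `bridge fdb show` in instead:

```shell
#!/bin/sh
# Canned sample of `bridge fdb show` output (real output comes from the host).
sample_fdb='40:2c:ff:ec:f8:ff dev eno1 master vmbr0
00:0c:11:90:c6:11 dev eno1 master vmbr0
33:33:00:00:00:01 dev vmbr0 self permanent'

# Print the interface name after each "dev" keyword, then count per interface.
# With this sample it reports 2 entries for eno1 and 1 for vmbr0.
printf '%s\n' "$sample_fdb" \
    | awk '{for (i = 1; i <= NF; i++) if ($i == "dev") print $(i+1)}' \
    | sort | uniq -c
```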
References:
https://pve.proxmox.com/pve-docs/pve-admin-guide.html#sysadmin_network_configuration
Bonds join multiple interfaces together; think port-channel/EtherChannel, link aggregation (LACP), or NIC teaming. When you create a bond, the member interfaces act as one and don't need their own configuration section.
Linux Bond Example
via the CLI
nano /etc/network/interfaces
# Auto is used to bring up the interfaces
# manual defines the interface with no default configuration
auto eno1
iface eno1 inet manual
auto eno2
iface eno2 inet manual

# Bonding the interfaces together using 802.3ad (LACP)
auto bond0
iface bond0 inet manual
slaves eno1 eno2
bond_miimon 100
bond_mode 802.3ad
bond_xmit_hash_policy layer2+3

# Create a new bridge with the bonded port
auto vmbr0
iface vmbr0 inet static
address 10.10.10.2
netmask 255.255.255.0
gateway 10.10.10.1
bridge_ports bond0
bridge_stp off
bridge_fd 0
Other useful commands
# Check if interfaces are added to the bond
cat /proc/net/bonding/bond0
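A quick health check can be layered on top of that file: every slave should report `MII Status: up`. The sketch below uses canned sample text (with one slave deliberately down) so it is self-contained; on the host, read the real `/proc/net/bonding/bond0` instead:

```shell
#!/bin/sh
# Canned sample of /proc/net/bonding/bond0 (one slave deliberately down).
sample='Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Slave Interface: eno1
MII Status: up
Slave Interface: eno2
MII Status: down'

# Count slaves whose link is down; 0 means the bond is healthy.
down=$(printf '%s\n' "$sample" | grep -c 'MII Status: down')
if [ "$down" -eq 0 ]; then
    echo "bond healthy"
else
    echo "$down slave(s) down"    # prints "1 slave(s) down" for this sample
fi
```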
References:
https://pve.proxmox.com/pve-docs/pve-admin-guide.html#sysadmin_network_configuration
https://www.kernel.org/doc/Documentation/networking/bonding.txt
A VLAN lets you divide up (subnet) your physical network/Interface/LAN/Bridge into separate logical ones. A benefit would be to create a guest wifi network and give friends access to the internet only (VLAN10) but nothing internal to your home network (VLAN20).
References:
https://pve.proxmox.com/pve-docs/pve-admin-guide.html#sysadmin_network_configuration
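As a concrete sketch of the guest-wifi example, the host side could carve two VLAN sub-interfaces out of the bridge in /etc/network/interfaces. This assumes a VLAN-aware Linux bridge; the addresses and VLAN IDs (10 for guests, 20 for internal) are illustrative only, and the actual internet-only restriction would be enforced by the firewall/PFSense, not by this file:

```
auto vmbr0
iface vmbr0 inet manual
    bridge_ports eno1
    bridge_stp off
    bridge_fd 0
    bridge_vlan_aware yes

# Hypothetical guest wifi VLAN (internet only)
auto vmbr0.10
iface vmbr0.10 inet static
    address 192.168.10.1
    netmask 255.255.255.0

# Hypothetical internal home VLAN
auto vmbr0.20
iface vmbr0.20 inet static
    address 192.168.20.1
    netmask 255.255.255.0
```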
From the ssh terminal, install the Open vSwitch package; otherwise the GUI will complain that it's not installed when you try to create an OVS Bridge . . .
apt update
apt install openvswitch-switch
A host can only use Linux bridges or Open vSwitch, not both, so choose wisely.
If you want to run OVS in the long run, it's best to switch now, before you create any containers or VMs.
The easiest way to do this would be to edit /etc/network/interfaces directly and then restart Proxmox or reload network services but first we want to make a backup.
# Make a backup
cp /etc/network/interfaces /etc/network/interfaces.bak
# Edit Networking
nano /etc/network/interfaces
# Replace the contents of the file with the below
Open vSwitch Bridge Example
# Loopback interface
auto lo
iface lo inet loopback

# Bridge for our eno1 physical interface and vlan virtual interfaces (our VMs will
# also attach to this bridge)
allow-ovs vmbr0
iface vmbr0 inet manual
ovs_type OVSBridge
# NOTE: we MUST mention eno1, vlan11 and vlan12 even though each
# of them lists ovs_bridge vmbr0! Not sure why it needs this
# kind of cross-referencing but it won't work without it!
ovs_ports eno1 vlan11 vlan12

# Physical interface for traffic coming into the system. Retag untagged
# traffic into vlan 11, but pass through other tags.
auto eno1
allow-vmbr0 eno1
iface eno1 inet manual
ovs_bridge vmbr0
ovs_type OVSPort
ovs_options tag=11 vlan_mode=native-untagged
# Alternatively if you want to also restrict what vlans are allowed through
# you could use:
# ovs_options tag=11 vlan_mode=native-untagged trunks=11,12

# Virtual interface to take advantage of originally untagged management traffic
allow-vmbr0 vlan11
iface vlan11 inet static
ovs_type OVSIntPort
ovs_bridge vmbr0
ovs_options tag=11
address 10.50.10.44
netmask 255.255.255.0
gateway 10.50.10.1
ovs_mtu 1500
# Either reload Proxmox or restart network
# If doing this via ssh, be careful to do this in one command
systemctl restart networking
References:
https://pve.proxmox.com/wiki/Open_vSwitch
https://www.hindawi.com/journals/jece/2016/5249421/
https://www.actualtechmedia.com/wp-content/uploads/2018/01/CUMULUS-Understanding-Linux-Internetworking.pdf
http://arthurchiao.art/blog/ovs-deep-dive-6-internal-port/