Installs and configures Proxmox Virtual Environment 6.x/7.x on Debian servers.

This role allows you to deploy and manage single-node PVE installations and PVE
clusters (3+ nodes) on Debian Buster (10) and Bullseye (11). You are able to
configure the following with the assistance of this role:

- PVE RBAC definitions (roles, groups, users, and access control lists)
- PVE Storage definitions
- [`datacenter.cfg`][datacenter-cfg]
- HTTPS certificates for the Proxmox Web GUI (BYO)
- PVE repository selection (e.g. `pve-no-subscription` or `pve-enterprise`)
- Watchdog modules (IPMI and NMI) with applicable pve-ha-manager config
- ZFS module setup and ZED notification email

With clustering enabled, this role does (or allows you to do) the following:

- Ensure all hosts can connect to one another as root over SSH
- Initialize a new PVE cluster (or possibly adopt an existing one)
- Create or add new nodes to a PVE cluster
- Set up Ceph on a PVE cluster
- Create and manage high availability groups

## Support/Contributing

For support or if you'd like to contribute to this role but want guidance, feel
free to join this Discord server: https://discord.gg/cjqr6Fg. Please note that
this is a temporary invite, so you'll need to wait for @lae to assign you a
role, otherwise Discord will remove you from the server when you log out.

## Quickstart

Copy the following playbook to a file like `install_proxmox.yml`:

    - hosts: all
      become: True
      roles:
        - role: geerlingguy.ntp
          ntp_manage_config: true
          ntp_servers:
            - clock.sjc.he.net
            - clock.fmt.he.net
            - clock.nyc.he.net
        - role: lae.proxmox
          pve_group: all
          pve_reboot_on_kernel_update: true
Install this role and a role for configuring NTP:
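
The exact commands are elided here; a plausible sequence, assuming both roles
are published on Ansible Galaxy under the names used in the playbook, and that
`$SSH_HOST_FQDN` is set as above (`$SSH_USER` is a hypothetical placeholder for
your remote username):

    # Install both roles from Ansible Galaxy (role names as used in the playbook)
    ansible-galaxy install lae.proxmox geerlingguy.ntp

    # Run the playbook against a single host; the trailing comma makes Ansible
    # treat this as an inline host list rather than an inventory file path.
    ansible-playbook install_proxmox.yml -i "$SSH_HOST_FQDN," -u "$SSH_USER"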

Once complete, you should be able to access your Proxmox VE instance at
`https://$SSH_HOST_FQDN:8006`.

## Deploying a fully-featured PVE 7.x cluster

Create a new playbook directory. We call ours `lab-cluster`. Our playbook will
eventually look like this, but yours does not have to follow all of the steps:

pvecluster. Here, a file lookup is used to read the contents of a file in the
playbook, e.g. `files/pve01/lab-node01.key`. You could possibly just use host
variables instead of files, if you prefer.

`pve_cluster_enabled` enables the role to perform all cluster management tasks.
This includes creating a cluster if it doesn't exist, or adding nodes to the
existing cluster. There are checks to make sure you're not mixing nodes that
are already in existing clusters.

`pve_groups`, `pve_users`, and `pve_acls` authorize some local UNIX users (they
must already exist) to access PVE and give them the Administrator role as part
of the `ops` group. Read the **User and ACL Management** section for more info.
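
To illustrate the shape such definitions might take, here is a hypothetical
sketch; the field names (`name`, `groups`, `path`, `roles`) are assumptions for
illustration, not taken from this document, so consult the role's defaults and
the **User and ACL Management** section for the authoritative schema:

```yaml
pve_groups:
  - name: ops                   # hypothetical group name
    comment: Operations Team
pve_users:
  - name: admin1@pam            # must map to an existing UNIX user (pam realm)
    groups: [ "ops" ]
pve_acls:
  - path: /                     # grant at the root of the PVE object tree
    roles: [ "Administrator" ]
    groups: [ "ops" ]
```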

`pve_storages` allows you to create different types of storage and configure
them. The backend needs to be supported by [Proxmox][pvesm]. Read the **Storage
Management** section for more info.
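
As a sketch of what a storage definition might look like (the field names here
are assumptions for illustration; see the **Storage Management** section and
the role's defaults for the real schema):

```yaml
pve_storages:
  - name: local-iso             # hypothetical storage name
    type: dir                   # backend type; must be one supported by Proxmox
    content: [ "iso", "backup" ]
    path: /srv/proxmox/iso      # hypothetical path
```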

`pve_ssh_port` allows you to change the SSH port. If your SSH is listening on
a port other than the default 22, please set this variable. If a new node is
joining the cluster, the PVE cluster needs to communicate once via SSH.

`pve_manage_ssh` (default true) disables any changes this role
would make to your SSH server config. This is useful if you use another role
to manage your SSH server. Note that setting this to false is not officially
supported; you're on your own to replicate the changes normally made in
`ssh_cluster_config.yml` and `pve_add_node.yml`.

`interfaces_template` is set to the path of a template we'll use for configuring
the network on these Debian machines. This is only necessary if you want to
manage networking with this role.

serially during a maintenance period.) It will also enable the IPMI watchdog.

    - hosts: pve01
      become: True
      roles:
        - role: geerlingguy.ntp
          ntp_manage_config: true
          ntp_servers:
            - clock.sjc.he.net
            - clock.fmt.he.net
            - clock.nyc.he.net
        - role: lae.proxmox
          pve_group: pve01
          pve_cluster_enabled: yes
          pve_reboot_on_kernel_update: true
          pve_watchdog: ipmi
## Role Variables
```
[variable]: [default] # [description/purpose]
pve_group: proxmox # host group that contains the Proxmox hosts to be clustered together
pve_repository_line: "deb http://download.proxmox.com/debian/pve bullseye pve-no-subscription" # apt-repository configuration - change to enterprise if needed (although TODO further configuration may be needed)
pve_remove_subscription_warning: true # patches the subscription warning messages in proxmox if you are using the community edition
pve_extra_packages: [] # Any extra packages you may want to install, e.g. ngrep
pve_run_system_upgrades: false # Let role perform system upgrades
pve_watchdog_ipmi_timeout: 10 # Number of seconds the watchdog should wait
pve_zfs_enabled: no # Specifies whether or not to install and configure ZFS packages
# pve_zfs_options: "" # modprobe parameters to pass to zfs module on boot/modprobe
# pve_zfs_zed_email: "" # Should be set to an email to receive ZFS notifications
pve_zfs_create_volumes: [] # List of ZFS Volumes to create (to use as PVE Storages). See section on Storage Management.
pve_ceph_enabled: false # Specifies whether or not to install and configure Ceph packages. See below for an example configuration.
pve_ceph_repository_line: "deb http://download.proxmox.com/debian/ceph-pacific bullseye main" # apt-repository configuration. Will be automatically set for 6.x and 7.x (Further information: https://pve.proxmox.com/wiki/Package_Repositories)
# pve_ceph_cluster_network: "" # Optional, if the ceph cluster network is different from the public network (see https://pve.proxmox.com/pve-docs/chapter-pveceph.html#pve_ceph_install_wizard)
pve_ceph_nodes: "{{ pve_group }}" # Host group containing all Ceph nodes
pve_ceph_fs: [] # List of CephFS filesystems to create
pve_ceph_crush_rules: [] # List of CRUSH rules to create
# pve_ssl_private_key: "" # Should be set to the contents of the private key to use for HTTPS
# pve_ssl_certificate: "" # Should be set to the contents of the certificate to use for HTTPS
pve_roles: [] # Adds more roles with specific privileges. See section on User Management.
pve_groups: [] # List of group definitions to manage in PVE. See section on User Management.
pve_users: [] # List of user definitions to manage in PVE. See section on User Management.

pve_cluster_ha_groups:
    restricted: 0
```

`pve_ceph_network` by default uses the `ipaddr` filter, which requires the
`netaddr` library to be installed and usable by your Ansible controller.
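
On most controllers this is a one-line install; assuming a pip-managed Python
environment on the machine running Ansible:

    # Install the netaddr library used by the ipaddr filter
    # (on the Ansible controller, not on the Proxmox hosts)
    pip install netaddr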

`pve_ceph_nodes` by default uses `pve_group`; this parameter allows you to
specify which nodes to install Ceph on (e.g. if you don't want to install
Ceph on all your nodes).

`pve_ceph_osds` by default creates unencrypted Ceph volumes. To use encrypted
volumes, the parameter `encrypted` has to be set to `true` per drive.
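
For example, an OSD list with one encrypted drive might look like this (the
`device` field name and drive paths are assumptions for illustration; only the
per-drive `encrypted: true` flag is described above):

```yaml
pve_ceph_osds:
  - device: /dev/sdb            # hypothetical drive; created encrypted
    encrypted: true
  - device: /dev/sdc            # left unencrypted (the default)
```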
## Developer Notes

When developing new features or fixing something in this role, you can test out
your changes by using Vagrant (only libvirt is supported currently). The
playbook can be found in `tests/vagrant` (so be sure to modify group variables
as needed). Be sure to test any changes on both Debian 10 and 11 (update the
Vagrantfile locally to use `debian/buster64`) before submitting a PR.

You can also specify an apt caching proxy (e.g. `apt-cacher-ng`, and it must
run on port 3142) with the `APT_CACHE_HOST` environment variable to speed up
package downloads if you have one running locally in your environment. The
Vagrant playbook will detect whether or not the caching proxy is available and
only use it if it is accessible from your network, so you could just
permanently set this variable in your development environment if you prefer.

For example, you could run the following to show verbose/easier-to-read output,
use a caching proxy, and keep the VMs running if you run into an error (so that
you can troubleshoot it and/or run `vagrant provision` after fixing):

    APT_CACHE_HOST=10.71.71.10 ANSIBLE_STDOUT_CALLBACK=debug vagrant up --no-destroy-on-error
## Contributors
Musee Ullah ([@lae](https://github.com/lae), <[email protected]>) - Main developer
Fabien Brachere ([@Fbrachere](https://github.com/Fbrachere)) - Storage config support