
Commit f3bcd26

Release 1.7.0
Merge branch develop@bfa12b006fd326cb89201ad63c8328e78175710f into main

2 parents (0e91945 + bfa12b0)

38 files changed: +568 −1102 lines

.ansible-lint
Lines changed: 2 additions & 0 deletions

@@ -0,0 +1,2 @@
+skip_list:
+  - no-handler

LICENSE_IMPORTS
Lines changed: 25 additions & 0 deletions

@@ -0,0 +1,25 @@
+==============================================================================
+
+The following files are licensed under APL2:
+
+library/pve_ceph_volume.py (This is a combined version of the original files module_utils/ca_common.py and library/ceph_volume.py)
+
+The license text from ceph/ceph-ansible is as follows:
+
+Copyright [2014] [Sébastien Han]
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+==============================================================================
+
+# Licenses for libraries imported in the future should go here

README.md
Lines changed: 107 additions & 61 deletions
@@ -1,18 +1,36 @@
-[![Build Status](https://travis-ci.org/lae/ansible-role-proxmox.svg?branch=master)](https://travis-ci.org/lae/ansible-role-proxmox)
 [![Galaxy Role](https://img.shields.io/badge/ansible--galaxy-proxmox-blue.svg)](https://galaxy.ansible.com/lae/proxmox/)
 
 lae.proxmox
 ===========
 
-Installs and configures a Proxmox 5.x/6.x cluster with the following features:
+Installs and configures Proxmox Virtual Environment 6.x/7.x on Debian servers.
 
-- Ensures all hosts can connect to one another as root
-- Ability to create/manage groups, users, access control lists and storage
-- Ability to create or add nodes to a PVE cluster
-- Ability to setup Ceph on the nodes
-- IPMI watchdog support
-- BYO HTTPS certificate support
-- Ability to use either `pve-no-subscription` or `pve-enterprise` repositories
+This role allows you to deploy and manage single-node PVE installations and PVE
+clusters (3+ nodes) on Debian Buster (10) and Bullseye (11). You are able to
+configure the following with the assistance of this role:
+
+- PVE RBAC definitions (roles, groups, users, and access control lists)
+- PVE Storage definitions
+- [`datacenter.cfg`][datacenter-cfg]
+- HTTPS certificates for the Proxmox Web GUI (BYO)
+- PVE repository selection (e.g. `pve-no-subscription` or `pve-enterprise`)
+- Watchdog modules (IPMI and NMI) with applicable pve-ha-manager config
+- ZFS module setup and ZED notification email
+
+With clustering enabled, this role does (or allows you to do) the following:
+
+- Ensure all hosts can connect to one another as root over SSH
+- Initialize a new PVE cluster (or possibly adopt an existing one)
+- Create or add new nodes to a PVE cluster
+- Setup Ceph on a PVE cluster
+- Create and manage high availability groups
+
+## Support/Contributing
+
+For support or if you'd like to contribute to this role but want guidance, feel
+free to join this Discord server: https://discord.gg/cjqr6Fg. Please note, this
+is a temporary invite, so you'll need to wait for @lae to assign you a role,
+otherwise Discord will remove you from the server when you log out.
 
 ## Quickstart
 
@@ -30,20 +48,15 @@ Copy the following playbook to a file like `install_proxmox.yml`:
 - hosts: all
   become: True
   roles:
-    - {
-        role: geerlingguy.ntp,
-        ntp_manage_config: true,
-        ntp_servers: [
-          clock.sjc.he.net,
-          clock.fmt.he.net,
-          clock.nyc.he.net
-        ]
-      }
-    - {
-        role: lae.proxmox,
-        pve_group: all,
-        pve_reboot_on_kernel_update: true
-      }
+    - role: geerlingguy.ntp
+      ntp_manage_config: true
+      ntp_servers:
+        - clock.sjc.he.net
+        - clock.fmt.he.net
+        - clock.nyc.he.net
+    - role: lae.proxmox
+      pve_group: all
+      pve_reboot_on_kernel_update: true
 
 Install this role and a role for configuring NTP:
 
@@ -63,12 +76,7 @@ file containing a list of hosts).
 Once complete, you should be able to access your Proxmox VE instance at
 `https://$SSH_HOST_FQDN:8006`.
 
-## Support/Contributing
-
-For support or if you'd like to contribute to this role but want guidance, feel
-free to join this Discord server: https://discord.gg/cjqr6Fg
-
-## Deploying a fully-featured PVE 5.x cluster
+## Deploying a fully-featured PVE 7.x cluster
 
 Create a new playbook directory. We call ours `lab-cluster`. Our playbook will
 eventually look like this, but yours does not have to follow all of the steps:
@@ -195,10 +203,6 @@ pvecluster. Here, a file lookup is used to read the contents of a file in the
 playbook, e.g. `files/pve01/lab-node01.key`. You could possibly just use host
 variables instead of files, if you prefer.
 
-`pve_ssl_letsencrypt` allows to obtain a Let's Encrypt SSL certificate for
-pvecluster. The Ansible role [systemli.letsencrypt](https://galaxy.ansible.com/systemli/letsencrypt/)
-needs to be installed first in order to use this function.
-
 `pve_cluster_enabled` enables the role to perform all cluster management tasks.
 This includes creating a cluster if it doesn't exist, or adding nodes to the
 existing cluster. There are checks to make sure you're not mixing nodes that
@@ -209,8 +213,8 @@ must already exist) to access PVE and gives them the Administrator role as part
 of the `ops` group. Read the **User and ACL Management** section for more info.
 
 `pve_storages` allows to create different types of storage and configure them.
-The backend needs to be supported by [Proxmox](https://pve.proxmox.com/pve-docs/chapter-pvesm.html).
-Read the **Storage Management** section for more info.
+The backend needs to be supported by [Proxmox][pvesm]. Read the **Storage
+Management** section for more info.
 
 `pve_ssh_port` allows you to change the SSH port. If your SSH is listening on
 a port other than the default 22, please set this variable. If a new node is
@@ -220,7 +224,7 @@ joining the cluster, the PVE cluster needs to communicate once via SSH.
 would make to your SSH server config. This is useful if you use another role
 to manage your SSH server. Note that setting this to false is not officially
 supported, you're on your own to replicate the changes normally made in
-ssh_cluster_config.yml.
+`ssh_cluster_config.yml` and `pve_add_node.yml`.
 
 `interfaces_template` is set to the path of a template we'll use for configuring
 the network on these Debian machines. This is only necessary if you want to
@@ -354,29 +358,24 @@ serially during a maintenance period.) It will also enable the IPMI watchdog.
 - hosts: pve01
   become: True
   roles:
-    - {
-        role: geerlingguy.ntp,
-        ntp_manage_config: true,
-        ntp_servers: [
-          clock.sjc.he.net,
-          clock.fmt.he.net,
-          clock.nyc.he.net
-        ]
-      }
-    - {
-        role: lae.proxmox,
-        pve_group: pve01,
-        pve_cluster_enabled: yes,
-        pve_reboot_on_kernel_update: true,
+    - role: geerlingguy.ntp
+      ntp_manage_config: true
+      ntp_servers:
+        - clock.sjc.he.net
+        - clock.fmt.he.net
+        - clock.nyc.he.net
+    - role: lae.proxmox
+      pve_group: pve01
+      pve_cluster_enabled: yes
+      pve_reboot_on_kernel_update: true
       pve_watchdog: ipmi
-      }
 
 ## Role Variables
 
 ```
 [variable]: [default] #[description/purpose]
 pve_group: proxmox # host group that contains the Proxmox hosts to be clustered together
-pve_repository_line: "deb http://download.proxmox.com/debian/pve stretch pve-no-subscription" # apt-repository configuration - change to enterprise if needed (although TODO further configuration may be needed)
+pve_repository_line: "deb http://download.proxmox.com/debian/pve bullseye pve-no-subscription" # apt-repository configuration - change to enterprise if needed (although TODO further configuration may be needed)
 pve_remove_subscription_warning: true # patches the subscription warning messages in proxmox if you are using the community edition
 pve_extra_packages: [] # Any extra packages you may want to install, e.g. ngrep
 pve_run_system_upgrades: false # Let role perform system upgrades
@@ -391,8 +390,9 @@ pve_watchdog_ipmi_timeout: 10 # Number of seconds the watchdog should wait
 pve_zfs_enabled: no # Specifies whether or not to install and configure ZFS packages
 # pve_zfs_options: "" # modprobe parameters to pass to zfs module on boot/modprobe
 # pve_zfs_zed_email: "" # Should be set to an email to receive ZFS notifications
+pve_zfs_create_volumes: [] # List of ZFS Volumes to create (to use as PVE Storages). See section on Storage Management.
 pve_ceph_enabled: false # Specifies whether or not to install and configure Ceph packages. See below for an example configuration.
-pve_ceph_repository_line: "deb http://download.proxmox.com/debian/ceph-nautilus buster main" # apt-repository configuration. Will be automatically set for 5.x and 6.x (Further information: https://pve.proxmox.com/wiki/Package_Repositories)
+pve_ceph_repository_line: "deb http://download.proxmox.com/debian/ceph-pacific bullseye main" # apt-repository configuration. Will be automatically set for 6.x and 7.x (Further information: https://pve.proxmox.com/wiki/Package_Repositories)
 pve_ceph_network: "{{ (ansible_default_ipv4.network +'/'+ ansible_default_ipv4.netmask) | ipaddr('net') }}" # Ceph public network
 # pve_ceph_cluster_network: "" # Optional, if the ceph cluster network is different from the public network (see https://pve.proxmox.com/pve-docs/chapter-pveceph.html#pve_ceph_install_wizard)
 pve_ceph_nodes: "{{ pve_group }}" # Host group containing all Ceph nodes
@@ -405,7 +405,6 @@ pve_ceph_fs: [] # List of CephFS filesystems to create
 pve_ceph_crush_rules: [] # List of CRUSH rules to create
 # pve_ssl_private_key: "" # Should be set to the contents of the private key to use for HTTPS
 # pve_ssl_certificate: "" # Should be set to the contents of the certificate to use for HTTPS
-pve_ssl_letsencrypt: false # Specifies whether or not to obtain a SSL certificate using Let's Encrypt
 pve_roles: [] # Added more roles with specific privileges. See section on User Management.
 pve_groups: [] # List of group definitions to manage in PVE. See section on User Management.
 pve_users: [] # List of user definitions to manage in PVE. See section on User Management.
@@ -454,8 +453,8 @@ pve_cluster_ha_groups:
     restricted: 0
 ```
 
-All configuration options supported in the datacenter.cfg file are documented in the
-[Proxmox manual datacenter.cfg section][datacenter-cfg].
+All configuration options supported in the datacenter.cfg file are documented
+in the [Proxmox manual datacenter.cfg section][datacenter-cfg].
 
 In order for live reloading of network interfaces to work via the PVE web UI,
 you need to install the `ifupdown2` package. Note that this will remove
@@ -537,14 +536,14 @@ pve_acls:
       - test_users
 ```
 
-Refer to `library/proxmox_role.py` [link][user-module] and
+Refer to `library/proxmox_role.py` [link][user-module] and
 `library/proxmox_acl.py` [link][acl-module] for module documentation.
 
 ## Storage Management
 
 You can use this role to manage storage within Proxmox VE (both in
 single server deployments and cluster deployments). For now, the only supported
-types are `dir`, `rbd`, `nfs`, `cephfs`, `lvm` and `lvmthin`.
+types are `dir`, `rbd`, `nfs`, `cephfs`, `lvm`, `lvmthin`, and `zfspool`.
 Here are some examples.
 
 ```
@@ -588,6 +587,26 @@ pve_storages:
       - 10.0.0.1
       - 10.0.0.2
       - 10.0.0.3
+  - name: zfs1
+    type: zfspool
+    content: [ "images", "rootdir" ]
+    pool: rpool/data
+    sparse: true
+```
+
+Currently the `zfspool` type can be used only for `images` and `rootdir` contents.
+If you want to store the other content types on a ZFS volume, you need to specify
+them with type `dir`, path `/<POOL>/<VOLUME>` and add an entry in
+`pve_zfs_create_volumes`. This example adds an `iso` storage on a ZFS pool:
+
+```
+pve_zfs_create_volumes:
+  - rpool/iso
+pve_storages:
+  - name: iso
+    type: dir
+    path: /rpool/iso
+    content: [ "iso" ]
 ```
 
 Refer to `library/proxmox_storage.py` [link][storage-module] for module
@@ -627,7 +646,8 @@ pve_ceph_osds:
     block.db: /dev/sdb1
     encrypted: true
 # Crush rules for different storage classes
-# By default 'type' is set to host, you can find valid types at (https://docs.ceph.com/en/latest/rados/operations/crush-map/)
+# By default 'type' is set to host, you can find valid types at
+# (https://docs.ceph.com/en/latest/rados/operations/crush-map/)
 # listed under 'TYPES AND BUCKETS'
 pve_ceph_crush_rules:
   - name: replicated_rule
@@ -675,15 +695,40 @@ pve_ceph_fs:
 `pve_ceph_network` by default uses the `ipaddr` filter, which requires the
 `netaddr` library to be installed and usable by your Ansible controller.
 
-`pve_ceph_nodes` by default uses `pve_group`, this parameter allows to specify on which nodes install Ceph (e.g. if you don't want to install Ceph on all your nodes).
+`pve_ceph_nodes` by default uses `pve_group`; this parameter allows you to
+specify which nodes to install Ceph on (e.g. if you don't want to install
+Ceph on all your nodes).
+
+`pve_ceph_osds` by default creates unencrypted ceph volumes. To use encrypted
+volumes, the parameter `encrypted` has to be set to `true` per drive.
+
+## Developer Notes
+
+When developing new features or fixing something in this role, you can test out
+your changes by using Vagrant (only libvirt is supported currently). The
+playbook can be found in `tests/vagrant` (so be sure to modify group variables
+as needed). Be sure to test any changes on both Debian 10 and 11 (update the
+Vagrantfile locally to use `debian/buster64`) before submitting a PR.
+
+You can also specify an apt caching proxy (e.g. `apt-cacher-ng`; it must run
+on port 3142) with the `APT_CACHE_HOST` environment variable to speed up
+package downloads if you have one running locally in your environment. The
+vagrant playbook will detect whether or not the caching proxy is available and
+only use it if it is accessible from your network, so you could just
+permanently set this variable in your development environment if you prefer.
+
+For example, you could run the following to show verbose/easier to read output,
+use a caching proxy, and keep the VMs running if you run into an error (so that
+you can troubleshoot it and/or run `vagrant provision` after fixing):
 
-`pve_ceph_osds` by default creates unencrypted ceph volumes. To use encrypted volumes the parameter `encrypted` has to be set per drive to `true`.
+    APT_CACHE_HOST=10.71.71.10 ANSIBLE_STDOUT_CALLBACK=debug vagrant up --no-destroy-on-error
 
 ## Contributors
 
 Musee Ullah ([@lae](https://github.com/lae), <[email protected]>) - Main developer
 Fabien Brachere ([@Fbrachere](https://github.com/Fbrachere)) - Storage config support
 Gaudenz Steinlin ([@gaundez](https://github.com/gaudenz)) - Ceph support, etc
+Richard Scott ([@zenntrix](https://github.com/zenntrix)) - Ceph support, PVE 7.x support, etc
 Thoralf Rickert-Wendt ([@trickert76](https://github.com/trickert76)) - PVE 6.x support, etc
 Engin Dumlu ([@roadrunner](https://github.com/roadrunner))
 Jonas Meurer ([@mejo-](https://github.com/mejo-))
@@ -695,6 +740,7 @@ Michael Holasek ([@mholasek](https://github.com/mholasek))
 [pve-cluster]: https://pve.proxmox.com/wiki/Cluster_Manager
 [install-ansible]: http://docs.ansible.com/ansible/intro_installation.html
 [pvecm-network]: https://pve.proxmox.com/pve-docs/chapter-pvecm.html#_separate_cluster_network
+[pvesm]: https://pve.proxmox.com/pve-docs/chapter-pvesm.html
 [user-module]: https://github.com/lae/ansible-role-proxmox/blob/master/library/proxmox_user.py
 [group-module]: https://github.com/lae/ansible-role-proxmox/blob/master/library/proxmox_group.py
 [acl-module]: https://github.com/lae/ansible-role-proxmox/blob/master/library/proxmox_group.py
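The README's `pve_ceph_network` default pipes `ansible_default_ipv4.network` and `.netmask` through the `ipaddr('net')` filter, which depends on the `netaddr` library. As a rough illustration of what that default computes (a sketch using Python's stdlib `ipaddress` module rather than netaddr, which Ansible actually uses):

```python
import ipaddress

# Combine a network address and netmask into CIDR form, roughly what
# "{{ (network + '/' + netmask) | ipaddr('net') }}" yields from the
# ansible_default_ipv4 facts. (Illustrative only; not the role's code.)
def ceph_public_network(network: str, netmask: str) -> str:
    return str(ipaddress.ip_network(f"{network}/{netmask}"))

print(ceph_public_network("10.0.0.0", "255.255.255.0"))  # 10.0.0.0/24
```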

Vagrantfile
Lines changed: 4 additions & 3 deletions

@@ -1,10 +1,11 @@
 Vagrant.configure("2") do |config|
-  config.vm.box = "debian/buster64"
+  config.vm.box = "debian/bullseye64"
 
   config.vm.provider :libvirt do |libvirt|
-    libvirt.memory = 2048
+    libvirt.memory = 2560
     libvirt.cpus = 2
-    libvirt.storage :file, :size => '2G'
+    libvirt.storage :file, :size => '128M'
+    libvirt.storage :file, :size => '128M'
   end
 
   N = 3

defaults/main.yml
Lines changed: 3 additions & 2 deletions

@@ -16,8 +16,9 @@ pve_watchdog_ipmi_timeout: 10
 pve_zfs_enabled: no
 # pve_zfs_options: "parameters to pass to zfs module"
 # pve_zfs_zed_email: "email address for zfs events"
+pve_zfs_create_volumes: []
 pve_ceph_enabled: false
-pve_ceph_repository_line: "deb http://download.proxmox.com/debian/{% if ansible_distribution_release == 'stretch' %}ceph-luminous stretch{% else %}ceph-nautilus buster{% endif %} main"
+pve_ceph_repository_line: "deb http://download.proxmox.com/debian/{% if ansible_distribution_release == 'buster' %}ceph-nautilus buster{% else %}ceph-pacific bullseye{% endif %} main"
 pve_ceph_network: "{{ (ansible_default_ipv4.network +'/'+ ansible_default_ipv4.netmask) | ipaddr('net') }}"
 pve_ceph_nodes: "{{ pve_group }}"
 pve_ceph_mon_group: "{{ pve_group }}"

@@ -36,7 +37,6 @@ pve_manage_hosts_enabled: yes
 # pve_cluster_addr1: "{{ ansible_eth1.ipv4.address }}
 pve_datacenter_cfg: {}
 pve_cluster_ha_groups: []
-pve_ssl_letsencrypt: false
 # additional roles for your cluster (f.e. for monitoring)
 pve_roles: []
 pve_groups: []

@@ -45,3 +45,4 @@ pve_acls: []
 pve_storages: []
 pve_ssh_port: 22
 pve_manage_ssh: true
+pve_hooks: {}

files/00_remove_checked_command_buster.patch
Lines changed: 0 additions & 13 deletions
This file was deleted.
