lae.proxmox
===========
Installs and configures Proxmox Virtual Environment 6.x/7.x/8.x on Debian servers.
This role allows you to deploy and manage single-node PVE installations and PVE
clusters (3+ nodes) on Debian Buster (10) and Bullseye (11). You are able to
Once complete, you should be able to access your Proxmox VE instance at
`https://$SSH_HOST_FQDN:8006`.
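
For reference, a minimal sketch of the inventory such a run might use (the hostname is illustrative, and the `pve` group name assumes the role's default `pve_group`):

```
# hosts — hypothetical one-node inventory for this role
[pve]
pve01.example.com
```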
## Deploying a fully-featured PVE 8.x cluster
Create a new playbook directory. We call ours `lab-cluster`. Our playbook will
eventually look like this, but yours does not have to follow all of the steps:
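
At its core, the playbook in that directory simply applies this role to a host group; a hedged sketch (the filename and group name are illustrative):

```
# site.yml — hypothetical minimal playbook applying this role
- hosts: pve
  become: true
  roles:
    - lae.proxmox
```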

Some of the role variables relevant to ZFS and Ceph:

```
pve_zfs_enabled: no # Specifies whether or not to install and configure ZFS packages
pve_zfs_create_volumes: [] # List of ZFS Volumes to create (to use as PVE Storages). See section on Storage Management.
pve_ceph_enabled: false # Specifies whether or not to install and configure Ceph packages. See below for an example configuration.
pve_ceph_repository_line: "deb http://download.proxmox.com/debian/ceph-pacific bullseye main" # apt-repository configuration. Will be automatically set for 6.x and 7.x (Further information: https://pve.proxmox.com/wiki/Package_Repositories)
# pve_ceph_cluster_network: "" # Optional, if the ceph cluster network is different from the public network (see https://pve.proxmox.com/pve-docs/chapter-pveceph.html#pve_ceph_install_wizard)
pve_ceph_nodes: "{{ pve_group }}" # Host group containing all Ceph nodes
pve_ceph_mon_group: "{{ pve_group }}" # Host group containing all Ceph monitor hosts
```
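
For example, these defaults might be overridden in group variables; a sketch with illustrative values:

```
# group_vars/pve.yml — hypothetical overrides
pve_zfs_enabled: yes    # install and configure ZFS packages
pve_ceph_enabled: true  # install and configure Ceph on the hosts in pve_ceph_nodes
pve_ceph_nodes: "pve"   # host group to install Ceph on
```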
You can use this role to manage storage within Proxmox VE (both in
single server deployments and cluster deployments). For now, the only supported
types are `dir`, `rbd`, `nfs`, `cephfs`, `lvm`, `lvmthin`, `zfspool`, `btrfs`,
and `pbs`. Here are some examples.
```
pve_storages:
      - 10.0.0.1
      - 10.0.0.2
      - 10.0.0.3
  - name: pbs1
    type: pbs
    content: [ "backup" ]
    server: 192.168.122.2
    username: user@pbs
    password: PBSPassword1
    datastore: main
  - name: zfs1
    type: zfspool
    content: [ "images", "rootdir" ]
    pool: rpool/data
    sparse: true
  - name: btrfs1
    type: btrfs
    content: [ "images", "rootdir" ]
    nodes: [ "lab-node01.local", "lab-node02.local" ]
    path: /mnt/proxmox_storage
    is_mountpoint: true
```
Refer to https://pve.proxmox.com/pve-docs/api-viewer/index.html for more information.
Currently the `zfspool` type can be used only for `images` and `rootdir` contents.
If you want to store the other content types on a ZFS volume, you need to specify
them with type `dir`, path `/<POOL>/<VOLUME>` and add an entry in
`pve_zfs_create_volumes`.
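
A sketch of that workaround, with placeholder pool and volume names (`rpool/iso` is illustrative):

```
# Hypothetical: expose a ZFS volume for "iso" content via a dir storage
pve_zfs_create_volumes:
  - rpool/iso
pve_storages:
  - name: iso
    type: dir
    path: /rpool/iso
    content: [ "iso" ]
```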
```
pve_ceph_fs:
    mountpoint: /srv/proxmox/backup
```

`pve_ceph_network` by default uses the `ansible.utils.ipaddr` filter, which
requires the `netaddr` library to be installed and usable by your Ansible
controller.

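One way to satisfy those prerequisites (the file name and commands are common conventions, not something this role mandates) is a collections requirements file plus a pip install of `netaddr`:

```
# requirements.yml — hypothetical; pulls in the collection that provides
# the ansible.utils.ipaddr filter.
# Install with: ansible-galaxy collection install -r requirements.yml
# The netaddr Python library is installed separately: pip install netaddr
collections:
  - ansible.utils
```
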
`pve_ceph_nodes` by default uses `pve_group`; this parameter allows you to
specify on which nodes to install Ceph (e.g. if you don't want to install Ceph
on all your nodes).
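
For instance, limiting Ceph to a subset of hosts might look like this (the group and host names are hypothetical):

```
# Inventory: Ceph only on the hosts in [ceph_nodes]
[pve]
pve01
pve02
pve03

[ceph_nodes]
pve01
pve02
```

with `pve_ceph_nodes: "ceph_nodes"` set in your group variables.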

Jonas Meurer ([@mejo-](https://github.com/mejo-))

Ondrej Flidr ([@SniperCZE](https://github.com/SniperCZE))

niko2 ([@niko2](https://github.com/niko2))

Christian Aublet ([@caublet](https://github.com/caublet))

Gille Pietri ([@gilou](https://github.com/gilou))

Michael Holasek ([@mholasek](https://github.com/mholasek))

Alexander Petermann ([@lexxxel](https://github.com/lexxxel)) - PVE 8.x support, etc.

Bruno Travouillon ([@btravouillon](https://github.com/btravouillon)) - UX improvements

Tobias Negd ([@wu3rstle](https://github.com/wu3rstle)) - Ceph support

PendaGTP ([@PendaGTP](https://github.com/PendaGTP)) - Ceph support

John Marion ([@jmariondev](https://github.com/jmariondev))

foerkede ([@foerkede](https://github.com/foerkede)) - ZFS storage support

Guiffo Joel ([@futuriste](https://github.com/futuriste)) - Pool configuration support

[Full list of contributors](https://github.com/lae/ansible-role-proxmox/graphs/contributors)