README.md (32 additions, 14 deletions)
@@ -340,9 +340,9 @@ For example:
This will ask for a sudo password, then login to the `admin1` user (using public
key auth - add `-k` for pw) and run the playbook.

-That's it! You should now have a fully deployed Proxmox cluster. You may want to
-create Ceph storage on it afterward, which this role does not (yet?) do, and
-other tasks possibly, but the hard part is mostly complete.
+That's it! You should now have a fully deployed Proxmox cluster. You may want
+to create Ceph storage on it afterwards (see Ceph for more info) and other
+tasks possibly, but the hard part is mostly complete.
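To make that run concrete, a command along these lines would match the description above; the inventory path and playbook filename are placeholders rather than anything this README prescribes:

```sh
# Prompt for the sudo/become password with -K and connect as admin1.
# Add -k as well if admin1 uses password authentication instead of a public key.
ansible-playbook -i inventory -u admin1 -K site.yml
```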
## Example Playbook
@@ -394,7 +394,8 @@ pve_zfs_enabled: no # Specifies whether or not to install and configure ZFS pack
# pve_zfs_zed_email: "" # Should be set to an email to receive ZFS notifications
pve_ceph_enabled: false # Specifies whether or not to install and configure Ceph packages. See below for an example configuration.
pve_ceph_repository_line: "deb http://download.proxmox.com/debian/ceph-nautilus buster main" # apt-repository configuration. Will be automatically set for 5.x and 6.x (Further information: https://pve.proxmox.com/wiki/Package_Repositories)
+# pve_ceph_cluster_network: "" # Optional, if the ceph cluster network is different from the public network (see https://pve.proxmox.com/pve-docs/chapter-pveceph.html#pve_ceph_install_wizard)
pve_ceph_mon_group: "{{ pve_group }}" # Host group containing all Ceph monitor hosts
pve_ceph_mds_group: "{{ pve_group }}" # Host group containing all Ceph metadata server hosts
pve_ceph_osds: [] # List of OSD disks
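For illustration, these Ceph variables might be combined in group_vars roughly as follows. This is only a sketch: the group name, network range, and the `device` key used for OSD entries are assumptions, and the example configuration the README refers to further below remains the authoritative reference.

```yaml
# group_vars/pve01.yml -- hypothetical sketch, not the README's own example
pve_ceph_enabled: true
# pve_ceph_cluster_network: "10.0.1.0/24"  # only if replication traffic should use a separate network
pve_ceph_osds:
  - device: /dev/sdb   # one spare disk per host; the 'device' key is an assumption, check the role's example
```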
@@ -417,18 +418,18 @@ pve_cluster_enabled: no # Set this to yes to configure hosts to be clustered tog
pve_cluster_clustername: "{{ pve_group }}" # Should be set to the name of the PVE cluster
```

-Information about the following can be found in the PVE Documentation in the
-[Cluster Manager][pvecm-network] chapter.
+The following variables are used to provide networking information to corosync.
+These are known as ring0_addr/ring1_addr or link0_addr/link1_addr, depending on
+PVE version. They should be IPv4 or IPv6 addresses. For more information, refer
+to the [Cluster Manager][pvecm-network] chapter in the PVE Documentation.
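As a rough sketch of the per-host values that paragraph describes: the variable names below are an assumption about what "the following variables" refers to, since the actual list falls outside this excerpt, so verify them against the role's defaults before use.

```yaml
# host_vars/pve-node1.yml -- hypothetical; variable names assumed, not confirmed by this excerpt
pve_cluster_addr0: 10.0.0.11   # primary corosync link (ring0/link0), a plain IPv4 or IPv6 address
pve_cluster_addr1: 10.0.1.11   # optional redundant link (ring1/link1)
```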