Description
Hello, I get the same error. SELinux is off. I found a previous issue here, but the advice from there does not help. I tried the package from master, version 1.0.4, and the package from the ISO image.
TASK [gluster.infra/roles/backend_setup : Group devices by volume group name, including existing devices] ***
fatal: [brest2.f.com]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'str object' has no attribute 'vgname'\n\nThe error appears to be in '/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/get_vg_groupings.yml': line 3, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Group devices by volume group name, including existing devices\n ^ here\n"}
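As far as I can tell, the error means the task is looping over items that Ansible sees as plain strings instead of mappings with a vgname key, which usually points at an indentation or parsing problem in the inventory. A minimal sketch of the failure mode (a hypothetical playbook, not the actual role task):

- hosts: localhost
  gather_facts: false
  vars:
    # Correct shape: a list of mappings, each carrying a vgname key.
    good_vgs:
      - vgname: gluster_vg_sdb
        pvname: /dev/sdb
    # Broken shape: the entry collapsed into a single string, so
    # item.vgname raises "'str object' has no attribute 'vgname'".
    bad_vgs:
      - 'vgname: gluster_vg_sdb pvname: /dev/sdb'
  tasks:
    - name: Works, item is a mapping
      ansible.builtin.debug:
        msg: '{{ item.vgname }}'
      loop: '{{ good_vgs }}'
    - name: Fails with the reported error, item is a string
      ansible.builtin.debug:
        msg: '{{ item.vgname }}'
      loop: '{{ bad_vgs }}'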
This is my deploy file for a single-node GlusterFS setup:
hc_nodes:
  hosts:
    brest2.f.com:
      gluster_infra_volume_groups:
        - vgname: gluster_vg_sdb
          pvname: /dev/sdb
      gluster_infra_mount_devices:
        - path: /gluster_bricks/engine
          lvname: gluster_lv_engine
          vgname: gluster_vg_sdb
        - path: /gluster_bricks/data
          lvname: gluster_lv_data
          vgname: gluster_vg_sdb
        - path: /gluster_bricks/vmstore
          lvname: gluster_lv_vmstore
          vgname: gluster_vg_sdb
      blacklist_mpath_devices:
        - sdb
      gluster_infra_thick_lvs:
        - vgname: gluster_vg_sdb
          lvname: gluster_lv_engine
          size: 100G
      gluster_infra_thinpools:
        - vgname: gluster_vg_sdb
          thinpoolname: gluster_thinpool_gluster_vg_sdb
          poolmetadatasize: 3G
      gluster_infra_lv_logicalvols:
        - vgname: gluster_vg_sdb
          thinpool: gluster_thinpool_gluster_vg_sdb
          lvname: gluster_lv_data
          lvsize: 500G
        - vgname: gluster_vg_sdb
          thinpool: gluster_thinpool_gluster_vg_sdb
          lvname: gluster_lv_vmstore
          lvsize: 500G
  vars:
    gluster_infra_disktype: RAID6
    gluster_infra_stripe_unit_size: 256
    gluster_infra_diskcount: 10
    gluster_set_selinux_labels: true
    gluster_infra_fw_ports:
      - 2049/tcp
      - 54321/tcp
      - 5900/tcp
      - 5900-6923/tcp
      - 5666/tcp
      - 16514/tcp
    gluster_infra_fw_permanent: true
    gluster_infra_fw_state: enabled
    gluster_infra_fw_zone: public
    gluster_infra_fw_services:
      - glusterfs
    gluster_features_force_varlogsizecheck: false
    cluster_nodes:
      - brest2.f.com
    gluster_features_hci_cluster: '{{ cluster_nodes }}'
    gluster_features_hci_volumes:
      - volname: engine
        brick: /gluster_bricks/engine/engine
        arbiter: 0
      - volname: data
        brick: /gluster_bricks/data/data
        arbiter: 0
      - volname: vmstore
        brick: /gluster_bricks/vmstore/vmstore
        arbiter: 0
    gluster_features_hci_volume_options:
      storage.owner-uid: '36'
      storage.owner-gid: '36'
      features.shard: 'on'
      performance.low-prio-threads: '32'
      performance.strict-o-direct: 'on'
      network.remote-dio: 'off'
      network.ping-timeout: '30'
      user.cifs: 'off'
      nfs.disable: 'on'
      performance.quick-read: 'off'
      performance.read-ahead: 'off'
      performance.io-cache: 'off'
      cluster.eager-lock: enable
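Before running the deploy, a quick sanity check (a hypothetical one-off playbook, not part of the role) can confirm that the host actually receives these variables as lists of mappings rather than strings:

- hosts: brest2.f.com
  gather_facts: false
  tasks:
    - name: Show how Ansible parsed the volume group variable
      ansible.builtin.debug:
        msg: '{{ gluster_infra_volume_groups | type_debug }} -> {{ gluster_infra_volume_groups }}'

If type_debug reports str for the variable or for its items, the inventory indentation above would be the culprit.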
How can I resolve this? I am using the oVirt 4.5 node ISO.