25 commits
faccbe6  Create Create_PBR_config.yml (Akshada-Thorat, Sep 22, 2023)
546384f  Create PBR_variable.txt (Akshada-Thorat, Sep 22, 2023)
1d3e251  Create README.txt (Akshada-Thorat, Sep 22, 2023)
32c9c05  Update PBR_variable.txt (Akshada-Thorat, Sep 25, 2023)
a6178f4  Update Create_PBR_config.yml (Akshada-Thorat, Apr 1, 2024)
8e3e997  Create Create_mTLs.yml (Akshada-Thorat, Apr 1, 2024)
4e5d676  Update PBR_variable.txt (Akshada-Thorat, Apr 1, 2024)
d31200a  Update README.txt (Akshada-Thorat, Apr 1, 2024)
24121bd  Update Create_PBR_config.yml (Akshada-Thorat, Apr 2, 2024)
d71966c  Create Create_mdiskgrp_drp.yml (Akshada-Thorat, Apr 2, 2024)
a2d18e7  Update Create_PBR_config.yml (Akshada-Thorat, Apr 2, 2024)
e26e933  Update Create_PBR_config.yml (Akshada-Thorat, Apr 3, 2024)
c58160d  Update Create_mTLs.yml (Akshada-Thorat, Apr 3, 2024)
7ffc429  Update and rename Create_mdiskgrp_drp.yml to Create_mdiskgrp_drp_prov… (Akshada-Thorat, Apr 3, 2024)
03ea21c  Update PBR_variable.txt (Akshada-Thorat, Apr 3, 2024)
ab06741  Update README.txt (Akshada-Thorat, Apr 3, 2024)
7078b9e  Update README.txt (Akshada-Thorat, Apr 3, 2024)
3dcad4d  Rename Create_PBR_config.yml to mainplaybook.yml (Akshada-Thorat, Apr 3, 2024)
2ff3b54  Rename PBR_variable.txt to PBR_variable.yml (Akshada-Thorat, Apr 3, 2024)
91d3ed2  Update mainplaybook.yml (Akshada-Thorat, Apr 3, 2024)
ad7b7b5  Update mainplaybook.yml (Akshada-Thorat, Apr 24, 2024)
23a2624  Update Create_mdiskgrp_drp_proviPolicy.yml (Akshada-Thorat, Apr 24, 2024)
fcb0415  Update README.txt (Akshada-Thorat, Apr 24, 2024)
af70d49  Update Create_mTLs.yml (Akshada-Thorat, Apr 24, 2024)
b206d75  Rename mainplaybook.yml to main.yml (Akshada-Thorat, Apr 24, 2024)
41 changes: 41 additions & 0 deletions playbooks/PBR/Create_mTLs.yml
- name: Generate certificate
  ibm_svctask_command:
    command: "svctask chsystemcert -mkselfsigned"
    clustername: "{{ item.cluster_ip }}"
    username: "{{ item.cluster_username }}"
    password: "{{ item.cluster_password }}"
    log_path: "{{ log_path | default('/tmp/ansiblePB.debug') }}"
  loop: "{{ users_data }}"

- name: Export SSL certificate internally
  ibm_sv_manage_ssl_certificate:
    clustername: "{{ item.cluster_ip }}"
    username: "{{ item.cluster_username }}"
    password: "{{ item.cluster_password }}"
    log_path: "{{ log_path | default('/tmp/ansiblePB.debug') }}"
    certificate_type: "system"
  loop: "{{ users_data }}"

- name: Create truststore on primary
  ibm_sv_manage_truststore_for_replication:
    clustername: "{{ users_data[0].cluster_ip }}"
    username: "{{ users_data[0].cluster_username }}"
    password: "{{ users_data[0].cluster_password }}"
    log_path: "{{ log_path | default('/tmp/ansiblePB.debug') }}"
    name: trust
    remote_clustername: "{{ users_data[1].cluster_ip }}"
    remote_username: "{{ users_data[1].cluster_username }}"
    remote_password: "{{ users_data[1].cluster_password }}"
    state: "present"

- name: Create truststore on secondary
  ibm_sv_manage_truststore_for_replication:
    clustername: "{{ users_data[1].cluster_ip }}"
    username: "{{ users_data[1].cluster_username }}"
    password: "{{ users_data[1].cluster_password }}"
    log_path: "{{ log_path | default('/tmp/ansiblePB.debug') }}"
    name: trust
    remote_clustername: "{{ users_data[0].cluster_ip }}"
    remote_username: "{{ users_data[0].cluster_username }}"
    remote_password: "{{ users_data[0].cluster_password }}"
    state: "present"
156 changes: 156 additions & 0 deletions playbooks/PBR/Create_mdiskgrp_drp_proviPolicy.yml
- name: create mdiskgrp on both clusters
  ibm_svc_mdiskgrp:
    clustername: "{{ item.cluster_ip }}"
    username: "{{ item.cluster_username }}"
    password: "{{ item.cluster_password }}"
    log_path: "{{ log_path | default('/tmp/ansiblePB.debug') }}"
    name: mdg0
    state: present
    datareduction: yes
    ext: 1024
  loop: "{{ users_data }}"

- name: Get drive info
  register: results
  ibm_svcinfo_command:
    command: "svcinfo lsdrive"
    clustername: "{{ users_data[0].cluster_ip }}"
    username: "{{ users_data[0].cluster_username }}"
    password: "{{ users_data[0].cluster_password }}"
    log_path: "{{ log_path | default('/tmp/ansiblePB.debug') }}"

- name: set drive id
  set_fact:
    drive_id: "{{ item['id'] }}"
  loop: "{{ results['stdout'] }}"

- name: set drive status
  set_fact:
    drive_status: "{{ item['use'] }}"
  loop: "{{ results['stdout'] }}"

- name: Set drive count
  set_fact:
    TotalDrive: "{{ drive_id | int + 1 }}"

- name: set level
  set_fact:
    Level:   # placeholder; the RAID level is decided below from the drive count

- name: Decide Level
  set_fact:
    Level: raid1
  when: TotalDrive | int <= 3

- name: Decide Level
  set_fact:
    Level: raid6
  when: TotalDrive | int > 3

- name: Create a List of variable
  set_fact:
    list1: []

- name: set variable
  set_fact:
    member: member

- name: Make drive in candidate state
  ibm_svctask_command:
    command: [ "svctask chdrive -use candidate {{ item }}" ]
    clustername: "{{ users_data[0].cluster_ip }}"
    username: "{{ users_data[0].cluster_username }}"
    password: "{{ users_data[0].cluster_password }}"
    log_path: "{{ log_path | default('/tmp/ansiblePB.debug') }}"
  with_sequence: start=0 end="{{ drive_id }}"
  when: drive_status != member

- name: create distribute array on primary
  ibm_svc_mdisk:
    clustername: "{{ users_data[0].cluster_ip }}"
    username: "{{ users_data[0].cluster_username }}"
    password: "{{ users_data[0].cluster_password }}"
    log_path: "{{ log_path | default('/tmp/ansiblePB.debug') }}"
    name: mdisk0
    state: present
    level: "{{ Level }}"
    drivecount: "{{ TotalDrive | int }}"
    driveclass: 0
    encrypt: no
    mdiskgrp: mdg0

- name: Get drive info
  register: results
  ibm_svcinfo_command:
    command: "svcinfo lsdrive"
    clustername: "{{ users_data[1].cluster_ip }}"
    username: "{{ users_data[1].cluster_username }}"
    password: "{{ users_data[1].cluster_password }}"
    log_path: "{{ log_path | default('/tmp/ansiblePB.debug') }}"

- name: set drive id
  set_fact:
    drive_id: "{{ item['id'] }}"
  loop: "{{ results['stdout'] }}"

- name: set drive status
  set_fact:
    drive_status1: "{{ item['use'] }}"
  loop: "{{ results['stdout'] }}"

- name: Drive count
  set_fact:
    TotalDrive2: "{{ drive_id | int + 1 }}"

- name: set level
  set_fact:
    Level2:   # placeholder; the RAID level is decided below from the drive count

- name: Decide Level
  set_fact:
    Level2: raid1
  when: TotalDrive2 | int <= 3

- name: Decide Level
  set_fact:
    Level2: raid6
  when: TotalDrive2 | int > 3

- name: set variable as a member
  set_fact:
    member: member

- name: Make drive in candidate state
  ibm_svctask_command:
    command: [ "svctask chdrive -use candidate {{ item }}" ]
    clustername: "{{ users_data[1].cluster_ip }}"
    username: "{{ users_data[1].cluster_username }}"
    password: "{{ users_data[1].cluster_password }}"
    log_path: "{{ log_path | default('/tmp/ansiblePB.debug') }}"
  with_sequence: start=0 end="{{ drive_id }}"
  when: drive_status1 != member

- name: create distribute array on secondary
  ibm_svc_mdisk:
    clustername: "{{ users_data[1].cluster_ip }}"
    username: "{{ users_data[1].cluster_username }}"
    password: "{{ users_data[1].cluster_password }}"
    log_path: "{{ log_path | default('/tmp/ansiblePB.debug') }}"
    name: mdisk0
    state: present
    level: "{{ Level2 }}"
    drivecount: "{{ TotalDrive2 | int }}"
    driveclass: 0
    encrypt: no
    mdiskgrp: mdg0

- name: Create provisioning policy on both the clusters
  ibm_sv_manage_provisioning_policy:
    clustername: "{{ item.cluster_ip }}"
    username: "{{ item.cluster_username }}"
    password: "{{ item.cluster_password }}"
    log_path: "{{ log_path | default('/tmp/ansiblePB.debug') }}"
    name: provisioning_policy0
    capacitysaving: "drivebased"
    state: present
  loop: "{{ users_data }}"
17 changes: 17 additions & 0 deletions playbooks/PBR/PBR_variable.yml
users_data:
  - cluster_name: <primary_cluster_name>
    cluster_ip: <primary_cluster_ip>
    cluster_username: <primary_cluster_username>
    cluster_password: <primary_cluster_password>

  - cluster_name: <secondary_cluster_name>
    cluster_ip: <secondary_cluster_ip>
    cluster_username: <secondary_cluster_username>
    cluster_password: <secondary_cluster_password>

host_name: <host_name>
volume_size: <volume_size>
volume_prefix: <volume_prefix>
volume_group_name: <volume_group_name>
number_of_volumes: <number_of_volumes>
log_path: <log_path>
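A hypothetical filled-in example of this variables file, assuming it is loaded as a top-level mapping (for example via vars_files). Every value below is a placeholder invented for illustration, not taken from this PR:

```yaml
# Example PBR_variable.yml (all values hypothetical)
users_data:
  - cluster_name: fs-primary          # placeholder system name
    cluster_ip: 192.0.2.10            # documentation-range address
    cluster_username: superuser
    cluster_password: Passw0rd!
  - cluster_name: fs-secondary
    cluster_ip: 192.0.2.11
    cluster_username: superuser
    cluster_password: Passw0rd!

host_name: app-host-01                # host assumed to already exist on the primary
volume_size: 1024                     # per-volume size (unit as expected by the playbook)
volume_prefix: pbr_vol
volume_group_name: pbr_vg0
number_of_volumes: 4
log_path: /tmp/ansiblePB.debug
```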
32 changes: 32 additions & 0 deletions playbooks/PBR/README.txt
Objective:
Set up mTLS and configure Policy-Based Replication.

Prerequisite:
- The IBM Storage Virtualize Ansible collection must be installed.

These playbooks set up mutual TLS (mTLS) and configure Policy-Based Replication between a primary cluster and a secondary cluster.
- They use the IBM Storage Virtualize Ansible modules.
- The playbooks set up mTLS on both sites and configure Policy-Based Replication from the source cluster to the destination cluster. On each cluster they create a Data Reduction Pool, link the two pools, and create a provisioning policy and a replication policy.
- The playbooks also create multiple volumes with a specified name prefix, place them in a volume group, and map all of them to the specified host.
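The replication policy mentioned above is created outside the files shown in this diff (presumably in the main playbook). A minimal sketch of such a task, assuming the ibm_sv_manage_replication_policy module from the IBM Storage Virtualize collection, the users_data layout from PBR_variable.yml, and placeholder policy name and RPO values:

```yaml
# Hypothetical sketch (not part of this PR): create the replication policy
# that pairs the two systems for Policy-Based Replication.
- name: Create replication policy on the primary cluster
  ibm_sv_manage_replication_policy:
    clustername: "{{ users_data[0].cluster_ip }}"
    username: "{{ users_data[0].cluster_username }}"
    password: "{{ users_data[0].cluster_password }}"
    log_path: "{{ log_path | default('/tmp/ansiblePB.debug') }}"
    name: replication_policy0          # placeholder name
    topology: 2-site-async-dr
    location1system: "{{ users_data[0].cluster_name }}"
    location1iogrp: 0
    location2system: "{{ users_data[1].cluster_name }}"
    location2iogrp: 0
    rpoalerttime: 5                    # placeholder RPO alert, in minutes
    state: present
```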

There are 4 files in total for this use case.
1. main.yml:
This is the main playbook; the user needs to execute only this playbook, and it leverages the remaining 3 files.
It first runs the 2 task files 'Create_mTLs.yml' and 'Create_mdiskgrp_drp_proviPolicy.yml'; the playbook then creates the volume group and its volumes, named with the volume_prefix specified in the variable file 'PBR_variable.yml', and maps all the volumes to the specified host.
On later executions, the playbook can add volumes to an existing or a new volume group under the existing replication and provisioning policies, and it maps the newly added volumes to the existing host object.
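main.yml itself is not included in this diff; the following is only a sketch of how it might wire the files together, assuming a localhost play and ansible.builtin.include_tasks (the actual playbook may differ):

```yaml
# Hypothetical sketch of main.yml (not part of this PR)
- name: Set up mTLS and Policy-Based Replication
  hosts: localhost
  gather_facts: false
  collections:
    - ibm.storage_virtualize          # assumed collection namespace
  vars_files:
    - PBR_variable.yml
  tasks:
    - name: Set up mutual TLS between the clusters
      ansible.builtin.include_tasks: Create_mTLs.yml

    - name: Create pools, arrays, and the provisioning policy
      ansible.builtin.include_tasks: Create_mdiskgrp_drp_proviPolicy.yml
```

A run would then be a single command such as `ansible-playbook main.yml`.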

2. PBR_variable.yml:
This file holds all the variables required by the playbooks.
- users_data : Details of the primary cluster from which the user wants to replicate data, and of the secondary cluster to which the volumes are replicated.
- host_name : The host to which all the volumes are mapped after creation. The host is assumed to already exist on the primary cluster.
- volume* : Parameters starting with "volume" describe the volumes to be created, such as the name prefix, the size, and the volume group name.
- number_of_volumes : The number of volumes to be created and replicated between the clusters.
- log_path : The log path for the playbooks. If not specified, logs are written to the default path '/tmp/ansiblePB.debug'.

3. Create_mTLs.yml:
This playbook sets up mTLS (mutual Transport Layer Security): it generates a certificate on each cluster, exports it, and creates a certificate truststore that contains the certificate bundle. These operations are performed on both the primary and the secondary site. This playbook is called from 'main.yml'.

4. Create_mdiskgrp_drp_proviPolicy.yml:
This playbook checks the drive status and drive count and, based on them, creates an mdiskgrp (Data Reduction Pool) with the appropriate RAID level. It links the pools of both sites and creates the provisioning policy and the replication policy. This playbook is called from 'main.yml'.

Authors: Akshada Thorat ([email protected]), Sandip Rajbanshi ([email protected])