
Commit 80d2cf2

Update Blog “highly-available-nfs-workload-on-hpe-greenlake-for-private-cloud-enterprise-using-serviceguard-for-linux”
1 parent 9b3ef86 commit 80d2cf2

1 file changed: +67 −43 lines changed

content/blog/highly-available-nfs-workload-on-hpe-greenlake-for-private-cloud-enterprise-using-serviceguard-for-linux.md

@@ -54,7 +54,7 @@ Selecting a Terraform provider
The first section of the file will enumerate the “providers” you rely upon for building your infrastructure, and there can be multiple providers in a single TF file. In the case here, you only have the HPE GreenLake provider, referenced as hpe/hpegl in the official Terraform registry.

The first lines of your Terraform configuration file should look like this:

```json
# Load HPE GreenLake terraform provider

terraform {
@@ -79,7 +79,7 @@ Note: Because this is open source, do not hesitate to open issues, or even a pul
Set up the required parameters for the hpegl provider that was specified earlier. As previously explained, you can either set those parameters explicitly in your TF file, set them in a series of environment variables, or use a mix of both. It is recommended to add the following two parameters to your TF file:

```json
# Setup provider environment (location and space)
provider "hpegl" {
  vmaas {
@@ -111,7 +111,7 @@ The value for the tenant ID may be seen in the Tenant ID field under the API Acc
With this you can now build a resource file that defines the following environment variables:

```shell
export HPEGL_TENANT_ID=<Your Tenant ID>
export HPEGL_USER_ID=<Client ID of the API Client>
export HPEGL_USER_SECRET=<Secret Key displayed when you created the API Client>
@@ -134,7 +134,7 @@ API Clients which are used to create virtual machines can also set Linux and Win
Here is a sample script which reads the VM_USERNAME and VM_PASSWORD environment variables and uses the values as the Linux and Windows username and password for the API Client. The script assumes a Location value of ‘FTC06’ and a Space value of ‘Default’.

To execute this script, first set appropriate values for the VM_USERNAME and VM_PASSWORD environment variables. Next, execute the resource file created earlier, which sets the HPEGL_* environment variables for your API Client. Finally, execute the script below.

```shell
#!/bin/bash
export LOCATION='FTC06'
export SPACE='Default'
@@ -172,37 +172,27 @@ Querying for infrastructure components
Your next step with the TF file is to query the HPE GreenLake provider to collect information needed to create your first VM instance. From the documentation, you can see that you need to gather the following information:

• Cloud ID
• Group ID
• Layout ID
• Plan ID
• Instance type code
• Network ID
• Resource Pool ID
• Template ID
• Folder Code

For this, you will use Terraform data statements. For example, the following statement retrieves the Cloud ID and stores it (in a variable called cloud), which we can later retrieve using: data.hpegl_vmaas_cloud.cloud.id

```json
# Retrieve cloud id
data "hpegl_vmaas_cloud" "cloud" {
  name = "HPE GreenLake VMaaS Cloud"
@@ -211,7 +201,7 @@ data "hpegl_vmaas_cloud" "cloud" {
Using a similar technique, you can retrieve the rest of the data you need:

```json
# And a network
data "hpegl_vmaas_network" "blue_segment" {
  name = "Blue-Segment"
@@ -256,7 +246,7 @@ You can get information about each of the data statements supported by the hpegl
The next step is to use a Terraform resource statement to create a random integer (used in VM names) and a second resource to request the creation of several VM instances:

```json
resource "random_integer" "random" {
  min = 1
  max = 50000
@@ -300,7 +290,7 @@ resource "hpegl_vmaas_instance" " my_HA_NFS" {
Finally, we will create a VM to act as the Serviceguard quorum node:

```json
resource "hpegl_vmaas_instance" "my_quorum" {
  count = 1
  name = "drbd-${count.index}-qs-${random_integer.random.result}"
@@ -332,20 +322,26 @@ resource "hpegl_vmaas_instance" "my_quorum" {
```

3 VMs need to be created to set up SGLX. 2 VMs will be used to create the Serviceguard for Linux nodes where the NFS service will be up and running. The third VM will act as a quorum server for the Serviceguard cluster, to ensure that a split brain of the cluster does not impact the availability of the monitored workload.

**Note:** You can get information about each of the resource statements supported by the hpegl provider from [GitHub](https://github.com/hpe/terraform-provider-hpegl/tree/main/docs/resources).

**Note:** An existing Serviceguard Quorum Server in your environment can be used instead of provisioning a third VM, provided the Quorum Server is reachable from the 2 VMs that were created.

### Terraform init

Before you can use Terraform, you need to initialize it from the configuration file we have created. This is done with the following step:

`terraform init`

### Terraform ready to plan

To validate your configuration file, it is recommended to run the terraform validate command. Once ready, the terraform plan command will provide a summary of the deployment that would be built when terraform apply is used. Once you agree with the plan and confirm, you can apply the configuration.
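The validate/plan/apply sequence described above can be sketched as follows (a hedged sketch using standard Terraform CLI flags; the saved plan file name `tfplan` is illustrative):

```shell
# Validate the configuration for syntax and internal consistency
terraform validate

# Preview the changes; -out saves the plan so apply runs exactly what was reviewed
terraform plan -out=tfplan

# Apply the previously saved plan (tfplan is an illustrative file name)
terraform apply tfplan
```

Saving the plan with `-out` and applying that file guarantees the apply step executes exactly the changes that were reviewed.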
### Terraform ready to apply

The command you need to use is now:

@@ -386,7 +382,7 @@ Now that the VMs are provisioned, we can now deploy HPE Serviceguard for Linux o
Serviceguard and all its components can be installed using Ansible playbooks.

Clone the repository on the Ansible control node:

```shell
git clone https://github.com/HewlettPackard/serviceguard.git
cd serviceguard/ansible-sglx
```
@@ -402,17 +398,37 @@ To upgrade to the latest version of the playbooks:
`git pull https://github.com/HewlettPackard/serviceguard.git`

Master playbook `site.yml` contains the roles which will be executed for the inventory defined in `hosts`.

When the master playbook is run, the version specified in the parameters file will be installed. The parameters for the master playbook and its roles are configured in `group_vars/all.yml`.

We will now look into some of the fields in this file which need to be configured.

First, configure the version of Serviceguard to be installed; in this case SGLX 15.10.00 will be installed:

`sglx_version : 15.10.00`

Now provide the Serviceguard for Linux ISO location on the controller node:

```
sglx_inst_upg_mode: iso
sglx_inst_upg_additional_params:
  ..
  iso_params:
    iso_location: <absolute path of the iso on ansible controller node>
```

Next, install the Serviceguard NFS add-on:

```
sglx_add_on_inst_upg_params:
  sglx_addon: nfs
```

Serviceguard installation mandates a replicated user configuration. As part of the installation, a replicated user for Serviceguard Manager (sgmgr) is created on the hosts, and the password for it can be configured under the parameter below:

`sglx_sgmgr_password: "{{ vault_sglx_sgmgr_password }}"`

Ansible Vault will be used to encrypt this password; run the `ansible-vault` command to generate the encrypted value.
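A typical invocation of `ansible-vault encrypt_string` looks like the following sketch (the password value and the vault password file path are placeholders, not values from the blog):

```shell
# Encrypt the sgmgr password so the output can be pasted into group_vars/all.yml.
# 'MySecretPassword' and ~/.vault_pass are illustrative placeholders.
ansible-vault encrypt_string --vault-password-file ~/.vault_pass \
  'MySecretPassword' --name 'vault_sglx_sgmgr_password'
```

The command prints a `vault_sglx_sgmgr_password: !vault |` block like the one shown next.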
@@ -421,42 +437,50 @@ Ansible vault will be used to encrypt this password, run the command as below

The generated output must be substituted in:

```
vault_sglx_sgmgr_password: !vault |
          $ANSIBLE_VAULT;1.1;AES256
          34363834323266326237363636613833396665333061653138623431626261343064373363656165
          6639383863383633643035656336336639373161323663380a303331306337396435366535313663
          31336636333862303462346234336138393135393363323739633661653534306162323565646561
          6662396366333534350a663033303862646331613765306433353632316435306630343761623237
          3863
```

Once these parameters are populated, one can modify the `hosts` file to add the 2 VMs that were provisioned earlier, where the cluster will be formed, and the quorum server that was provisioned earlier. In this case, it is as shown below:

```
[sglx-storage-flex-add-on-hosts]
drbd-0-808
drbd-1-808

[sglx-cluster-hosts:children]
sglx-storage-flex-add-on-hosts

[quorum-server-hosts]
drbd-0-qs-808

[primary]
drbd-0-808

[secondary]
drbd-1-808
```

When the parameters specified above are configured, the playbook `site.yml` can be run from the directory where the repository was cloned on the Ansible control node:

```shell
cd serviceguard/ansible-sglx
ansible-playbook -i hosts -v --vault-password-file <path_to_vault_password_file> site.yml
```

This completes the Serviceguard software installation.
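As a quick sanity check (a sketch, assuming an RPM-based distribution such as RHEL or SLES), you can confirm that the Serviceguard packages landed on each node:

```shell
# List installed Serviceguard packages on a cluster node
rpm -qa | grep -i serviceguard
```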

## Configuring data replication using Serviceguard flex Storage Add-on

Serviceguard for Linux Flex Storage Add-on is a software-based, shared-nothing, replicated storage solution that mirrors the content of block devices. NFS server export data will be replicated to all Serviceguard cluster nodes using this add-on. The Ansible snippet below can be used to configure the replication:

```yaml
- hosts: sglx-storage-flex-add-on-hosts
  tasks:
    - name: Populate /etc/drbd.d/global_common.conf file
@@ -535,7 +559,7 @@ Serviceguard for Linux Flex Storage Add-on is a software-based, shared-nothing,
Once data replication is configured on the nodes, we can now configure LVM on top of the DRBD disk /dev/drbd0. The following Ansible snippet can be used to configure the LVM volume group named nfsvg and a logical volume named nfsvol of size 45 GB.
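For reference, the playbook steps correspond roughly to these raw LVM commands run against the DRBD device (a hedged sketch; the volume group and logical volume names follow the text, and the flags are standard LVM):

```shell
# Initialize the DRBD device as an LVM physical volume
pvcreate /dev/drbd0
# Create the volume group nfsvg on that physical volume
vgcreate nfsvg /dev/drbd0
# Create a 45 GB logical volume named nfsvol in nfsvg
lvcreate -L 45G -n nfsvol nfsvg
```

The Ansible snippet automates these same steps across the hosts.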

```yaml
---
- hosts: sglx-storage-flex-add-on-hosts
  tasks:
@@ -583,7 +607,7 @@ Once data replication is configured on the nodes, we can now configure LVM on to
Now we will start the NFS service and export the NFS share from the primary node using the Ansible snippet below:

```yaml
---
- hosts: sglx-storage-flex-add-on-hosts
  tasks:
@@ -633,7 +657,7 @@ Now we will start the NFS service and export the NFS share from the primary node
Once the NFS share is configured, we will now look at creating an SGLX cluster and deploying the NFS workload in the SGLX environment to make it highly available. The snippet below will help us achieve this:

```yaml
---
- hosts: primary
- name: Build string of primary nodes
