
Commit ed01c63

Update Blog “highly-available-nfs-workload-on-hpe-greenlake-for-private-cloud-enterprise-using-serviceguard-for-linux”
1 parent 5e4f360 commit ed01c63

File tree

1 file changed: +12 additions, −67 deletions

content/blog/highly-available-nfs-workload-on-hpe-greenlake-for-private-cloud-enterprise-using-serviceguard-for-linux.md

Lines changed: 12 additions & 67 deletions
@@ -6,6 +6,10 @@ priority: ""
author: John Lenihan, Thirukkannan M, Saurabh Kadiyali
authorimage: /img/Avatar1.svg
disable: false
tags:
- high-availability
- serviceguard-for-linux
- greenlake-private-cloud-enterprise
---

# Introduction

@@ -96,7 +100,6 @@ In the capture below, Default is the space you will use for your work with Terra
## Setting up API Client access

Next, you need to create a new API Client access dedicated to Terraform. You can do this from the HPE GreenLake console: under your settings icon, select User Management and then the API Clients tab.

![](/img/picture3.png)
@@ -115,22 +118,17 @@ export HPEGL_USER_SECRET=<Secret Key displayed when you created the API Client>
export HPEGL_IAM_SERVICE_URL=<Issuer URL>
```

And execute it on your machine to set these environment variables.
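For example, if you save those export lines into a small shell script (the filename below is hypothetical), you can load them into your current shell with:

```
# hpegl-env.sh is a hypothetical file holding the export lines shown above
source ./hpegl-env.sh
```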
## Assign Roles to API Client

Once your API Client has been created, you need to assign it a Role and a Space. You can do this by clicking on your new API Client and then clicking the Create Assignment button.
Since the intent is to use this API Client to create resources in the Virtual Machines Service, you need to assign an appropriate Virtual Machines Role. Choose a Role such as ‘Private Cloud Tenant Contributor’ and the same Space used earlier, i.e., ‘Default.’

Note: More details on HPE GreenLake user roles can be found in the HPE GreenLake documentation.

## Set API Client Usernames and Passwords

When a user creates virtual machines using the HPE GreenLake for Private Cloud Enterprise: Virtual Machines user interface, they first set the Linux and Windows username and password. Any virtual machines subsequently created by that user inherit these credentials, and the user can later use them to log into those virtual machines.
API Clients that are used to create virtual machines can also set Linux and Windows username and password values. Since an API Client does not use the HPE GreenLake for Private Cloud Enterprise: Virtual Machines user interface, this must be done via an API call.
Here is a sample script which reads the VM_USERNAME and VM_PASSWORD environment variables and uses their values as the Linux and Windows username and password for the API Client. The script assumes a Location value of ‘FTC06’ and a Space value of ‘Default’.
@@ -168,13 +166,10 @@ curl -s -k -X POST \
    "windowsPassword": '${VM_PASSWORD}'
  }
}'
```

## Querying for infrastructure components

Your next step with the TF file is to query the HPE GreenLake provider to collect information needed to create your first VM instance. From the documentation, you can see that you need to gather the following information:
• Cloud ID
• Group ID
@@ -192,7 +187,6 @@ For this, you will use the Terraform data statements. For example, the following
data "hpegl_vmaas_cloud" "cloud" {
  name = "HPE GreenLake VMaaS Cloud"
}
```

Using a similar technique, you can retrieve the rest of the data you need:
@@ -234,21 +228,14 @@ data "hpegl_vmaas_layout" "vmware" {
data "hpegl_vmaas_template" "vanilla" {
  name = "redhat8-20220331T1850"
}
```

You can get information about each of the data statements supported by the hpegl provider from [GitHub.](https://github.com/hpe/terraform-provider-hpegl/tree/main/docs/data-sources)

## Creating VM resources
The next step is to use a Terraform resource statement to create a random integer (used in VM names) and a second resource to request the creation of several VM instances:

```
resource "random_integer" "random" {
  min = 1
@@ -289,13 +276,10 @@ resource "hpegl_vmaas_instance" "my_HA_NFS" {
}

}
```

Finally, we will create a VM to act as the Serviceguard quorum node:
```
resource "hpegl_vmaas_instance" "my_quorum" {
  count = 1
@@ -325,33 +309,24 @@ resource "hpegl_vmaas_instance" "my_quorum" {
create_user = true
}
}
```

Three VMs need to be created to set up SGLX. Two of them will be used as Serviceguard for Linux nodes where the NFS service will be up and running. The third VM will act as a quorum server for the Serviceguard cluster, ensuring that a split-brain condition in the cluster does not impact the availability of the monitored workload.
Note: You can get information about each of the resource statements supported by the hpegl provider from GitHub.
Note: An existing Serviceguard Quorum Server in your environment can be used instead of provisioning a third VM, provided the Quorum Server is reachable from the two VMs that were created.

## Terraform init
Before you can use Terraform, you need to initialize it from the configuration file we have created. This is done with the following step:

`terraform init`

## Terraform ready to plan

To validate your configuration file, it is recommended to run the terraform validate command. Once ready, the terraform plan command will provide a summary of the deployment that will be built when terraform apply is used.
Once you agree with the plan and confirm, you can apply the configuration.
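As a minimal sketch, the validation and planning steps use the standard Terraform CLI:

```
terraform validate   # check the configuration for syntax and internal consistency
terraform plan       # preview the resources that would be created, without creating them
```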
## Terraform ready to apply

The command you need to use is now:

`terraform apply`
@@ -380,40 +355,33 @@ hpegl_vmaas_instance.my_quorum\[0]: Creation complete after 2m8s \[id=3108]
Apply complete! Resources: 4 added, 0 changed, 0 destroyed.
```

Once the command completes, your virtual machines are ready.

# Configuring a Highly Available NFS solution

Now that the VMs are provisioned, we can deploy HPE Serviceguard for Linux on them to create a cluster that provides high availability for the applications running on the VMs, in this case an NFS server.

## Installing Serviceguard for Linux

Serviceguard and all its components can be installed using Ansible playbooks. Clone the repository on the Ansible control node:

```
git clone https://github.com/HewlettPackard/serviceguard.git
cd serviceguard/ansible-sglx
```

Check out the stable branch. For example, to check out branch 1.0:

`git checkout Stable-v1.0`

To upgrade to the latest version of the playbooks:

`git pull https://github.com/HewlettPackard/serviceguard.git`

The master playbook `site.yml` contains the roles that will be executed for the inventory defined in `hosts`.
When the master playbook is run, the version specified in the parameters file will be installed. The parameters for the master playbook and its roles are configured in `group_vars/all.yml`. We will now look at some of the fields in this file that need to be configured.
First, configure the version of Serviceguard to be installed; in this case, SGLX 15.10.00: `sglx_version: 15.10.00`
Now provide the Serviceguard for Linux ISO location on the controller node:
@@ -424,21 +392,16 @@ sglx_inst_upg_additional_params:
iso_location: <absolute path of the iso on ansible controller node>`

Next, install the Serviceguard NFS add-on:

`sglx_add_on_inst_upg_params:
sglx_addon: nfs`

Serviceguard installation mandates a replicated user configuration. As part of the installation, a replicated user for Serviceguard Manager (sgmgr) is created on the hosts, and its password can be configured under the parameter below:

`sglx_sgmgr_password: "{{ vault_sglx_sgmgr_password }}"`

Ansible Vault will be used to encrypt this password; run the command below:

`ansible-vault encrypt_string 'your_password' --name 'vault_sglx_sgmgr_password'`

The generated output must be substituted in:

`vault_sglx_sgmgr_password: !vault |
$ANSIBLE_VAULT;1.1;AES256
34363834323266326237363636613833396665333061653138623431626261343064373363656165
6639383863383633643035656336336639373161323663380a303331306337396435366535313663
@@ -448,10 +411,8 @@ vault_sglx_sgmgr_password: !vault |
`

Once these parameters are populated, modify the hosts file to add the two VMs provisioned earlier, where the cluster will be formed, as well as the quorum server. In this case, it’s as shown below:

`[sglx-storage-flex-add-on-hosts]
drbd-0-808
drbd-1-808`\
@@ -464,19 +425,15 @@ drbd-0-808
[secondary]
drbd-1-808`

When the parameters specified above are configured, the playbook site.yml can be run from the directory where the repository was cloned on the Ansible control node:

`cd serviceguard/ansible-sglx
ansible-playbook -i hosts -v --vault-password-file <path_to_vault_password_file> site.yml`

This completes the Serviceguard software installation.

## Configuring data replication using Serviceguard Flex Storage Add-on

The Serviceguard for Linux Flex Storage Add-on is a software-based, shared-nothing, replicated storage solution that mirrors the content of block devices. NFS server export data will be replicated to all Serviceguard cluster nodes using this add-on. The Ansible snippet below can be used to configure the replication:

```
@@ -552,14 +509,10 @@ Serviceguard for Linux Flex Storage Add-on is a software-based, shared-nothing,
shell: |
  drbdadm primary drbd0 --force
when: res.rc != 0
```
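For reference, a rough manual equivalent of what the playbook automates, assuming the DRBD resource drbd0 from the snippet above is already defined on both nodes:

```
# On each cluster node: initialize metadata and bring the resource up
drbdadm create-md drbd0
drbdadm up drbd0
# On the intended primary only (mirrors the playbook's forced promotion)
drbdadm primary drbd0 --force
```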

## Configuring LVM

Once data replication is configured on the nodes, we can configure LVM on top of the DRBD disk /dev/drbd0. The following Ansible snippet can be used to create an LVM volume group named nfsvg and a logical volume named nfsvol of size 45 GB:

```
@@ -604,14 +557,10 @@ Once data replication is configured on the nodes, we can now configure LVM on to
filesystem:
  dev: /dev/nfsvg/nfsvol
  fstype: xfs
```
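The equivalent manual commands, sketched under the assumption that they run on the DRBD primary (the names nfsvg and nfsvol and the 45 GB size come from the description above):

```
pvcreate /dev/drbd0              # initialize the replicated device as an LVM physical volume
vgcreate nfsvg /dev/drbd0        # create the volume group on top of it
lvcreate -n nfsvol -L 45G nfsvg  # carve out the 45 GB logical volume
mkfs.xfs /dev/nfsvg/nfsvol       # create the xfs filesystem, matching the snippet
```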

## Setting up the NFS server

Now we will start the NFS service and export the NFS share from the primary node using the Ansible snippet below:

```
@@ -658,12 +607,10 @@ Now we will start the NFS service and export the NFS share from the primary node
shell: |
  mount /dev/nfsvg/nfsvol /nfs
  exportfs -a
```
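As a sketch of the manual steps behind this snippet (the export path /nfs comes from the snippet; the export options shown are illustrative assumptions, not taken from the original post):

```
systemctl enable --now nfs-server                     # start and enable the NFS service
mount /dev/nfsvg/nfsvol /nfs                          # mount the replicated volume
echo '/nfs *(rw,sync,no_root_squash)' >> /etc/exports # illustrative export options
exportfs -a                                           # export all configured shares
```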

## Creating an SGLX cluster and providing HA to the NFS workload

Once the NFS share is configured, we can create an SGLX cluster and deploy the NFS workload in it to make it highly available. The snippet below helps achieve this:

```
@@ -762,8 +709,6 @@ $SGSBIN/cmcheckconf -P /tmp/nfs_drbd.conf
become: True
shell: |
  $SGSBIN/cmmodpkg -e nfs_drbd
```
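For orientation, a hedged sketch of the Serviceguard commands a playbook like this typically drives; the node names are the VMs created earlier, while the quorum host placeholder and file paths are assumptions:

```
# Generate, apply, and start the cluster configuration
$SGSBIN/cmquerycl -n drbd-0-808 -n drbd-1-808 -q <quorum_host> -C /tmp/cluster.conf
$SGSBIN/cmapplyconf -C /tmp/cluster.conf
$SGSBIN/cmruncl
# Verify and apply the NFS package configuration, then enable the package
$SGSBIN/cmcheckconf -P /tmp/nfs_drbd.conf
$SGSBIN/cmapplyconf -P /tmp/nfs_drbd.conf
$SGSBIN/cmmodpkg -e nfs_drbd
```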

Now we have the NFS server deployed in a Serviceguard cluster with high availability.
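To verify, you can check the cluster, node, and package status with Serviceguard's standard view command:

```
$SGSBIN/cmviewcl -v   # show detailed cluster, node, and package status
```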
