content/blog/highly-available-nfs-workload-on-hpe-greenlake-for-private-cloud-enterprise-using-serviceguard-for-linux.md
## Selecting a Terraform provider
The first section of the file enumerates the “providers” you rely upon for building your infrastructure; there can be multiple providers in a single TF file. In this case, you only have the HPE GreenLake provider, referenced as hpe/hpegl in the official Terraform registry.
The first lines of your Terraform configuration file should look like this:
```hcl
# Load HPE GreenLake terraform provider
terraform {
  required_providers {
    hpegl = {
      source = "hpe/hpegl"
    }
  }
}
```
Note: Because this is open source, do not hesitate to open issues, or even a pull request, against the project.
Set up the required parameters for the hpegl provider that was specified earlier. As previously explained, you can either set those parameters explicitly in your TF file, set them in a series of environment variables, or use a mix of both. It is recommended to add the following two parameters in your TF file:
```hcl
# Setup provider environment (location and space)
provider "hpegl" {
  vmaas {
    location   = "FTC06"
    space_name = "Default"
  }
}
```
The value for the tenant ID may be seen in the Tenant ID field under the API Access screen.
With this you can now build a resource file that defines the following environment variables:
```shell
export HPEGL_TENANT_ID=<Your Tenant ID>
export HPEGL_USER_ID=<Client ID of the API Client>
export HPEGL_USER_SECRET=<Secret Key displayed when you created the API Client>
```
API Clients which are used to create virtual machines can also set Linux and Windows usernames and passwords.
Here is a sample script which reads the VM_USERNAME and VM_PASSWORD environment variables and uses the values as the Linux and Windows username and password for the API Client. The script assumes a Location value of ‘FTC06’ and a Space value of ‘Default’.
To execute this script, first set appropriate values for the VM_USERNAME and VM_PASSWORD environment variables. Next, execute the resource file created earlier, which sets the HPEGL_* environment variables for your API Client. Finally, execute the script below.
```shell
#!/bin/bash
export LOCATION='FTC06'
export SPACE='Default'
# ... (remainder of the script, truncated in this diff, applies
# VM_USERNAME/VM_PASSWORD to the API Client)
```
## Querying for infrastructure components
Your next step with the TF file is to query the HPE GreenLake provider to collect information needed to create your first VM instance. From the documentation, you can see that you need to gather the following information:
- Cloud ID
- Group ID
- Layout ID
- Plan ID
- Instance type code
- Network ID
- Resource Pool ID
- Template ID
- Folder Code
For this, you will use Terraform data statements. For example, the following statement retrieves the Cloud ID and stores it in a variable called cloud, which you can later retrieve using: `data.hpegl_vmaas_cloud.cloud.id`
```hcl
# Retrieve cloud id
data "hpegl_vmaas_cloud" "cloud" {
  name = "HPE GreenLake VMaaS Cloud"
}
```
Using a similar technique, you can retrieve the rest of the data you need:
```hcl
# And a network
data "hpegl_vmaas_network" "blue_segment" {
  name = "Blue-Segment"
}
```
You can get information about each of the data statements supported by the hpegl provider from GitHub.
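For example, a sketch of two more such lookups (the data-source and object names below are illustrative assumptions — confirm them against the provider's documentation):

```hcl
# Retrieve the group and service plan by name (names are illustrative)
data "hpegl_vmaas_group" "default_group" {
  name = "Default"
}

data "hpegl_vmaas_plan" "g1_small" {
  name = "G1-Small"
}
```

Each lookup can then be referenced elsewhere in the file, e.g. `data.hpegl_vmaas_group.default_group.id`.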
The next step is to use a Terraform resource statement to create a random integer (used in VM names) and a second resource to request the creation of several VM instances.
Three VMs need to be created to set up SGLX. Two VMs will be used to create the Serviceguard for Linux nodes where the NFS service will be up and running. The third VM will act as a quorum server for the Serviceguard cluster to ensure that a split-brain condition does not impact the availability of the monitored workload.
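As a hedged sketch of what these two resource statements might look like (attribute names follow the provider's `hpegl_vmaas_instance` documentation, but required sub-blocks such as `config` are omitted, and the `group`, `layout`, and `plan` data sources referenced here are assumed to have been defined via data statements as discussed above):

```hcl
# Random suffix used in VM names
resource "random_integer" "random" {
  min = 1
  max = 999
}

# Three VMs: two Serviceguard nodes and one quorum server
# (illustrative sketch, not the article's exact resource)
resource "hpegl_vmaas_instance" "sglx_vm" {
  count              = 3
  name               = "drbd-${count.index}-${random_integer.random.result}"
  cloud_id           = data.hpegl_vmaas_cloud.cloud.id
  group_id           = data.hpegl_vmaas_group.default_group.id
  layout_id          = data.hpegl_vmaas_layout.vmware.id
  plan_id            = data.hpegl_vmaas_plan.g1_small.id
  instance_type_code = "vmware"

  network {
    id = data.hpegl_vmaas_network.blue_segment.id
  }
}
```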
**Note:** You can get information about each of the resource statements supported by the hpegl provider from [GitHub](https://github.com/hpe/terraform-provider-hpegl/tree/main/docs/resources).

**Note:** An existing Serviceguard Quorum Server in your environment can be used instead of provisioning a third VM, provided the Quorum Server is reachable from the two VMs that were created.

### Terraform init
Before you can use Terraform, you need to initialize it from the configuration file we have created. This is done with the following step:
`terraform init`
### Terraform ready to plan
To validate your configuration file, it is recommended to run the `terraform validate` command. Once ready, the `terraform plan` command will provide a summary of the deployment that will be built when `terraform apply` is used.
Once you agree with the plan and confirm, you can apply the configuration.
### Terraform ready to apply
The command you need to use is now:
`terraform apply`
Now that the VMs are provisioned, we can deploy HPE Serviceguard for Linux on them.
Serviceguard and all its components can be installed using Ansible playbooks. The master playbook `site.yml` contains the roles which will be executed for the inventory defined in `hosts`.
When the master playbook is run, the version specified in the parameters file will be installed. The parameters for the master playbook and its roles are configured in `group_vars/all.yml`.

We will now look into some of the fields in this file which need to be configured.

We should configure the version of Serviceguard to be installed; in this case, SGLX 15.10.00 will be installed:

`sglx_version : 15.10.00`
Now provide the Serviceguard for Linux ISO location on the controller node:

```yaml
iso_location: <absolute path of the iso on ansible controller node>
```
Next, install the Serviceguard NFS add-on:
```yaml
sglx_add_on_inst_upg_params:
   sglx_addon: nfs
```

Serviceguard installation mandates a replicated user configuration. As part of the installation, a replicated user for Serviceguard Manager (sgmgr) is created on the hosts, and its password can be configured under the corresponding parameter in `group_vars/all.yml`.
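For illustration only — the exact key name below is an assumption; consult `group_vars/all.yml` in the cloned repository for the actual parameter:

```yaml
# Hypothetical field name; an Ansible Vault variable is a sensible way
# to avoid storing the sgmgr password in plain text
sglx_sgmgr_password: "{{ vault_sglx_sgmgr_password }}"
```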
Once these parameters are populated, one can modify the `hosts` file to add the two VMs that were provisioned earlier, where the cluster will be formed, and the quorum server that was also provisioned. In this case, it is as shown below:
```ini
[sglx-storage-flex-add-on-hosts]
drbd-0-808
drbd-1-808

[sglx-cluster-hosts:children]
sglx-storage-flex-add-on-hosts

[quorum-server-hosts]
drbd-0-qs-808

[primary]
drbd-0-808

[secondary]
drbd-1-808
```
When the parameters specified above are configured, the playbook `site.yml` can be run (e.g. `ansible-playbook -i hosts site.yml`) from the directory where the repository is cloned on the Ansible control node.
This completes the Serviceguard software installation.
## Configuring data replication using Serviceguard Flex Storage Add-on
Serviceguard for Linux Flex Storage Add-on is a software-based, shared-nothing, replicated storage solution that mirrors the content of block devices. NFS server export data will be replicated to all Serviceguard cluster nodes using this add-on. The Ansible snippet below can be used to configure the replication.
Once data replication is configured on the nodes, we can configure LVM on top of the DRBD disk `/dev/drbd0`. The following Ansible snippet can be used to configure an LVM volume group named nfsvg and a logical volume named nfsvol of size 45 GB:
```yaml
---
- hosts: sglx-storage-flex-add-on-hosts
  tasks:
    # ... (tasks creating volume group nfsvg and logical volume nfsvol
    # on /dev/drbd0; truncated in this diff)
```
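A minimal sketch of such tasks — assuming the `community.general` collection is installed and that the LVM commands run on the current DRBD primary node — could look like:

```yaml
---
- hosts: sglx-storage-flex-add-on-hosts
  become: true
  tasks:
    - name: Create volume group nfsvg on the DRBD device
      community.general.lvg:
        vg: nfsvg
        pvs: /dev/drbd0
      when: inventory_hostname in groups['primary']

    - name: Create the 45 GB logical volume nfsvol
      community.general.lvol:
        vg: nfsvg
        lv: nfsvol
        size: 45g
      when: inventory_hostname in groups['primary']
```

The `when` conditions rely on the `[primary]` group defined in the inventory above, since the replicated device is writable only on the DRBD primary.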
Now we will start the NFS service and export the NFS share from the primary node using the Ansible snippet below.
```yaml
---
- hosts: sglx-storage-flex-add-on-hosts
  tasks:
    # ... (tasks starting the NFS server and exporting the share
    # on the primary node; truncated in this diff)
```
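A sketch of such a play, under the assumption that the share path is `/nfs` (illustrative — use the mount point actually configured for `nfsvol`):

```yaml
---
- hosts: sglx-storage-flex-add-on-hosts
  become: true
  tasks:
    - name: Start and enable the NFS server on the primary node
      ansible.builtin.service:
        name: nfs-server
        state: started
        enabled: true
      when: inventory_hostname in groups['primary']

    - name: Add the share to /etc/exports (path is illustrative)
      ansible.builtin.lineinfile:
        path: /etc/exports
        line: "/nfs *(rw,sync,no_root_squash)"
      when: inventory_hostname in groups['primary']

    - name: Export the configured shares
      ansible.builtin.command: exportfs -a
      when: inventory_hostname in groups['primary']
```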
Once the NFS share is configured, we will look into creating an SGLX cluster and deploying the NFS workload in the SGLX environment to make it highly available. The snippet below will help us achieve this.