priority: ""
author: John Lenihan, Thirukkannan M, Saurabh Kadiyali
authorimage: /img/Avatar1.svg
disable: false
tags:
  - high-availability
  - serviceguard-for-linux
  - greenlake-private-cloud-enterprise
---
# Introduction
In the capture below, Default is the space you will use for your work with Terraform.
## Setting up API Client access
Next, you need to create a new API Client dedicated to Terraform. You can do this from the HPE GreenLake console: under your settings icon, select User Management and then the API Clients tab.
```
export HPEGL_USER_SECRET=<Secret Key displayed when you created the API Client>
export HPEGL_IAM_SERVICE_URL=<Issuer URL>
```
And execute it on your machine to set these environment variables.
## Assign Roles to API Client

Once your API Client has been created, you need to assign it a Role and a Space. You can do this by clicking on your new API Client and then clicking the Create Assignment button.

Since the intent is to use this API Client to create resources in the Virtual Machines Service, you need to assign an appropriate Virtual Machines Role. Choose a Role such as 'Private Cloud Tenant Contributor' and the same Space as used earlier, i.e., 'Default'.

Note: More details on HPE GreenLake user roles can be found in the HPE GreenLake documentation.
## Set API Client Usernames and Passwords
When a user creates virtual machines using the HPE GreenLake for Private Cloud Enterprise: Virtual Machines user interface, they first set the Linux and Windows username and password. Once this is done, any virtual machines subsequently created by that user will inherit these credentials. The user can later use these credentials to log into these virtual machines.
API Clients which are used to create virtual machines can also set Linux and Windows username and password values. Since the API Client does not use the HPE GreenLake for Private Cloud Enterprise: Virtual Machines user interface, this must be done via an API call.
Here is a sample script which reads the VM_USERNAME and VM_PASSWORD environment variables and uses the values for the Linux and Windows username and password for the API Client. The script assumes a Location value of 'FTC06' and a Space value of 'Default'.
```
curl -s -k -X POST \
  ...
    "windowsPassword": '${VM_PASSWORD}'
  }
}'
```
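Before running such a script, export the two variables it reads; the script file name below is only an example and not taken from the article:

```
export VM_USERNAME=<username to set for Linux and Windows VMs>
export VM_PASSWORD=<password to set for Linux and Windows VMs>
./set-vmaas-credentials.sh   # example name for the script shown above
```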
## Querying for infrastructure components

Your next step with the TF file is to query the HPE GreenLake provider to collect information needed to create your first VM instance. From the documentation, you can see that you need to gather the following information:
* Cloud ID
* Group ID

For this, you will use the Terraform data statements. For example, the following statement retrieves the Cloud ID:
```
data "hpegl_vmaas_cloud" "cloud" {
  name = "HPE GreenLake VMaaS Cloud"
}
```
Using a similar technique, you can retrieve the rest of the data you need:
```
data "hpegl_vmaas_layout" "vmware" {
  # ...
}

data "hpegl_vmaas_template" "vanilla" {
  name = "redhat8-20220331T1850"
}
```
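For example, the group, plan, and network that the VM instances will later reference can be retrieved in the same way. The lookup names and values below are illustrative assumptions, not the article's exact code:

```
data "hpegl_vmaas_group" "default_group" {
  name = "Default"
}

data "hpegl_vmaas_plan" "g1_small" {
  name = "G1-Small"
}

data "hpegl_vmaas_network" "blue_net" {
  name = "Blue-Net"
}
```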
You can get information about each of the data statements supported by the hpegl provider from [GitHub](https://github.com/hpe/terraform-provider-hpegl/tree/main/docs/data-sources).
## Creating VM resources
The next step is to use a Terraform resource statement to create a random integer (used in VM names) and a second resource to request the creation of several VM instances, as sketched below. Three VMs need to be created to set up SGLX: two VMs will host the Serviceguard for Linux nodes where the NFS service will run, and the third VM will act as a quorum server for the Serviceguard cluster, ensuring that a split-brain condition does not impact the availability of the monitored workload.
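As a rough sketch only (not the article's exact code), the random integer and the two cluster nodes might be declared as follows. The group, plan, and network lookup names are illustrative, and the full set of required attributes and blocks (for example, volume definitions) should be taken from the hpegl provider documentation:

```
# Random suffix used to build unique VM names (hashicorp/random provider).
resource "random_integer" "random" {
  min = 1
  max = 999
}

# Illustrative sketch of the two cluster nodes. A similar block with count = 1
# can create the quorum VM. Check the hpegl provider docs for any additional
# required blocks (such as volume) in your environment.
resource "hpegl_vmaas_instance" "my_nodes" {
  count              = 2
  name               = "drbd-${count.index}-${random_integer.random.result}"
  cloud_id           = data.hpegl_vmaas_cloud.cloud.id
  group_id           = data.hpegl_vmaas_group.default_group.id
  layout_id          = data.hpegl_vmaas_layout.vmware.id
  plan_id            = data.hpegl_vmaas_plan.g1_small.id
  instance_type_code = "vmware"

  network {
    id = data.hpegl_vmaas_network.blue_net.id
  }

  config {
    template_id = data.hpegl_vmaas_template.vanilla.id
  }
}
```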
Note: You can get information about each of the resource statements supported by the hpegl provider from GitHub.

Note: An existing Serviceguard Quorum Server in your environment can be used instead of provisioning a third VM, provided the Quorum Server is reachable from the two VMs that were created.
## Terraform init

Before you can use Terraform, you need to initialize it from the configuration file we have created. This is done with the following step:

`terraform init`
## Terraform ready to plan

To validate your configuration file, it is recommended to run the terraform validate command. Once ready, the terraform plan command will provide a summary of the deployment that would be built when the terraform apply command is used.

Once you agree with the plan and confirm, you can apply the configuration.
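In practice, that sequence looks like this:

```
terraform validate   # check the configuration for syntax and consistency errors
terraform plan       # preview the resources that would be created
```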
## Terraform ready to apply
The command you need to use is now:

`terraform apply`

Terraform reports progress for each resource and finishes with output similar to:

`hpegl_vmaas_instance.my_quorum[0]: Creation complete after 2m8s [id=3108]`
Once the command completes, your virtual machines are ready.
# Configuring a Highly Available NFS solution
Now that the VMs are provisioned, we can deploy HPE Serviceguard for Linux on these VMs to create a cluster that provides high availability for the applications running on them, in this case the NFS server.
## Installing Serviceguard for Linux

Serviceguard and all its components can be installed using Ansible playbooks. The master playbook `site.yml` contains the roles which will be executed for the inventory defined in `hosts`.

When the master playbook is run, the version specified in the parameters file will be installed. The parameters for the master playbook and its roles are configured in `group_vars/all.yml`. We will now look at some of the fields in this file which need to be configured.
We should configure the version of Serviceguard to be installed; in this case SGLX 15.10.00 will be installed: `sglx_version : 15.10.00`

Next, provide the Serviceguard for Linux ISO location on the controller node. The NFS add-on is selected with `sglx_addon: nfs`.

Serviceguard installation mandates a replicated user configuration. As part of the installation, a replicated user for Serviceguard Manager (sgmgr) is created on the hosts, and the password for this user can be configured under the corresponding parameter in `group_vars/all.yml`.
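Pulled together, the relevant part of `group_vars/all.yml` looks roughly like the excerpt below; the parameter names for the ISO location and the sgmgr password are not shown in this excerpt of the article, so use the names defined in the repository's sample file:

```
sglx_version : 15.10.00
sglx_addon: nfs
# ISO location and sgmgr password parameters go here, using the parameter
# names defined in the repository's group_vars/all.yml template.
```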
Once these parameters are populated, you can modify the `hosts` file to add the two VMs that were provisioned earlier, on which the cluster will be formed, and the quorum server. In this case, it looks as shown below:
```
[sglx-storage-flex-add-on-hosts]
drbd-0-808
drbd-1-808

# ...
drbd-0-808

[secondary]
drbd-1-808
```
When the parameters specified above are configured, the playbook `site.yml` can be run from the directory where the repository is cloned on the Ansible control node, for example as shown below.
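A typical invocation from the root of the cloned repository looks like the following; any additional flags (for example, privilege escalation or vault options) depend on how the repository and your environment are set up:

```
ansible-playbook -i hosts site.yml
```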
This completes the Serviceguard software installation.
## Configuring data replication using Serviceguard Flex Storage Add-on
Serviceguard for Linux Flex Storage Add-on is a software-based, shared-nothing, replicated storage solution that mirrors the content of block devices. NFS server export data will be replicated to all Serviceguard cluster nodes using this add-on. The Ansible snippet below can be used to configure the replication.
```
# ...
    shell: |
      drbdadm primary drbd0 --force
    when: res.rc != 0
```
## Configuring LVM
Once data replication is configured on the nodes, we can configure LVM on top of the DRBD disk /dev/drbd0. The following Ansible snippet can be used to configure an LVM volume group named nfsvg and a logical volume named nfsvol of size 45 GB.
```
# ...
    filesystem:
      dev: /dev/nfsvg/nfsvol
      fstype: xfs
```
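Only the tail of that snippet (the filesystem task) is visible above. A minimal sketch of the volume group and logical volume tasks it refers to, assuming the `community.general` collection is available, could look like this:

```
    - name: Create volume group nfsvg on the replicated DRBD device
      lvg:
        vg: nfsvg
        pvs: /dev/drbd0

    - name: Create the 45 GB logical volume nfsvol
      lvol:
        vg: nfsvg
        lv: nfsvol
        size: 45g
```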
## Setting up the NFS server
Now we will start the NFS service and export the NFS share from the primary node using the Ansible snippet below.
```
# ...
    shell: |
      mount /dev/nfsvg/nfsvol /nfs
      exportfs -a
```
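The excerpt above only shows the mount and export commands. The preceding tasks typically start the NFS service and define the export; a minimal sketch, with an illustrative export line, could be:

```
    - name: Start and enable the NFS server
      service:
        name: nfs-server
        state: started
        enabled: yes

    - name: Export the /nfs directory (export options here are illustrative)
      lineinfile:
        path: /etc/exports
        line: "/nfs *(rw,sync,no_root_squash)"
```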
## Creating an SGLX cluster and providing HA to the NFS workload
Once the NFS share is configured, we will now look at creating an SGLX cluster and deploying the NFS workload in the SGLX environment to make it highly available. The snippet below will help us achieve this.