.. figure:: assets/diagram1.png
Step 1: Prepare environment for HA Load
#######################################

F5 Distributed Cloud Services allow you to create edge sites with worker nodes on a wide variety of cloud providers: AWS, Azure, GCP. The prerequisite is one or more Distributed Cloud CE Sites; once deployed, you can expose the services created on these edge sites via a Site mesh and any additional Load Balancers. The choice of a TCP (L3/L4) or HTTP/S (L7) Load Balancer depends on how the services need to communicate with each other. In our case we are exposing a database service, which is a fit for a TCP Load Balancer. If there were a backend service or anything else exposing an HTTP endpoint for other services to connect to, we could have used an HTTP/S LB instead. (Note that a single CE Site may support one or more virtual sites, which act as a logical grouping of site resources.)
.. figure:: assets/diagr.png
Creating an Azure VNET site
***************************

Let's start creating the Azure VNET site with worker nodes. Log in to the F5 Distributed Cloud Console and navigate to the **Multi-Cloud Network Connect** service, then to **Site Management**, and select **Azure VNET Sites**. Click the **Add Azure VNET Site** button.
Note the virtual site name, as it will be required later.
Creating VK8S cluster
*********************

At this point, our edge site for the HA database deployment is ready. Now create the VK8S cluster. Select both virtual sites (one on the CE and one on the RE) by using the corresponding labels: the one created earlier and *ves-io-shared/ves-io-all-res*. The *all-res* one will be used for the deployment of workloads on all REs.
Step 2: Deploy HA PostgreSQL to CE
##################################

Now that the environment for both RE and CE deployments is ready, we can move on to deploying HA PostgreSQL to the CE. We will use Helm charts to deploy a PostgreSQL cluster configuration with the help of Bitnami, which provides ready-made Helm charts for HA databases (MongoDB, MariaDB, PostgreSQL, etc.) available in the Bitnami Library for Kubernetes: `https://github.com/bitnami/charts <https://github.com/bitnami/charts>`_. In general, these Helm charts work very similarly, so the example used here can be applied to most other databases or services.
HA PostgreSQL Architecture in vK8s
**********************************

There are several ways of deploying HA PostgreSQL. The architecture used in this guide is shown in the diagram below. The pgPool deployment will be used to provide the HA features.
.. figure:: assets/diagram2.png
Downloading Key
***************

To work with the kubectl utility or, in our case, Helm, the *kubeconfig* file is required. xC provides an easy way to get the *kubeconfig* file, control its expiration date, etc. So, let's download the *kubeconfig* for the created VK8s cluster.
.. figure:: assets/kubeconfigdate.png
Adding Bitnami Helm Chart repository to Helm
********************************************

Now we need to add the Bitnami Helm chart repository to Helm and then deploy the chart.
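
Assuming Helm is already installed locally, the repository can be added and refreshed with the standard Helm commands::

    helm repo add bitnami https://charts.bitnami.com/bitnami
    helm repo update

The chart itself will be installed later with the customized values and the downloaded *kubeconfig*.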
Before we can proceed to the next step, we will need to update the credentials in the Makefile. Go to the Makefile and update the following variables:
5. Indicate your *docker-password* (which can be password or access token).
Making Secrets
**************

VK8s needs to download Docker images from a registry. This might be *docker.io* or any other Docker registry your company uses. The Docker secret needs to be created from the command line using the *kubectl create secret* command. Use the name of the *kubeconfig* file that you downloaded in the previous step.
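
As a sketch, the command might look like the following; the secret name, registry, and credential placeholders are illustrative, not taken from the guide::

    kubectl --kubeconfig <downloaded-kubeconfig.yaml> create secret docker-registry regcred \
        --docker-server=docker.io \
        --docker-username=<docker-username> \
        --docker-password=<docker-password-or-token>

The resulting secret is then referenced from the chart values as an image pull secret.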

NOTE: Please note that the created secret will not be seen in the Registries UI.

Updating DB Deployment Chart Values
***********************************

Bitnami provides ready-made charts for HA database deployments; here the *postgresql-ha* chart can be used. The chart installation requires setting the corresponding variables so that the HA cluster can run in the xC environment. The main things to change are:

Let's proceed to specify the above-mentioned values in the *values.yaml*.

Deploying HA PostgreSQL chart to xC vK8s
****************************************

As the values are now set up to run in xC, deploy the chart to the xC vK8s cluster using the **xc-deploy-bd** command in the Visual Studio Code CLI::

    make xc-deploy-bd

Checking deployment
*******************

After we have deployed the HA PostgreSQL to vK8s, we can check that the pods and services are deployed successfully from the distributed virtual Kubernetes dashboard.
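
The same check can also be done from the command line with the downloaded *kubeconfig* (file name is a placeholder)::

    kubectl --kubeconfig <downloaded-kubeconfig.yaml> get pods,services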
320
320
@@ -340,19 +340,19 @@ Go one step back and take the same steps for the second pod to see its status. T
340
340
.. figure:: assets/logs2.png
Step 3: Expose CE services to RE deployment
###########################################

The CE deployment is up and running. Now it is necessary to create a secure channel between the RE and CE so they can communicate: the RE will read data from the CE-deployed database. To do so, two additional objects need to be created.
Exposing CE services
********************

To access the HA database deployed to the CE site, we will need to expose this service via a TCP Load Balancer. Since Load Balancers are created on the basis of an Origin Pool, we will start by creating a pool.
.. figure:: assets/diagram3.png
Creating origin pool
********************

To create an Origin Pool for the vK8s-deployed service, follow the steps below.

From the Origin Pool drop-down menu, select the origin pool created in the previous step.

.. figure:: assets/tcppool.png
Advertising Load Balancer on RE
*******************************

From the **Where to Advertise the VIP** menu, select **Advertise Custom** to set up our own custom configuration, and click **Configure**.

Complete creating the load balancer by clicking **Save and Exit**.

.. figure:: assets/saveadvertise.png
Step 4: Test connection from RE to DB
#####################################

Infrastructure to Test the deployed PostgreSQL
**********************************************

To test access to the CE-deployed database from the RE deployment, we will use an NGINX reverse proxy with a module that gets data from PostgreSQL; this service will be deployed to the Regional Edge. It is not a good idea to use this type of data pull in production, but it is very useful for test purposes. So, the test user will query the RE-deployed NGINX reverse proxy, which will perform a query to the database. An HTTP Load Balancer and Origin Pool also need to be created to access NGINX on the RE.

And now let's build all this by running the **make docker** command in the Visual Studio Code CLI.

.. figure:: assets/makedocker.png
NGINX Reverse Proxy Config to Query PostgreSQL DB
*************************************************

NGINX creates a server listening on port 8080. The default location gets all items from the *article* table and caches them. The following NGINX config sets up the reverse proxy configuration to forward traffic from RE to CE, where "re2ce.internal" is the TCP load balancer we created earlier in `Creating TCP Load Balancer`_.
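
As a rough illustration of such a configuration (directive names assume the third-party *ngx_postgres* module; the database name and credentials are placeholders, not taken from the guide)::

    upstream database {
        # "re2ce.internal" resolves to the TCP Load Balancer in front of the CE database
        postgres_server re2ce.internal:5432 dbname=<dbname> user=<user> password=<password>;
    }

    server {
        listen 8080;

        location / {
            # query the "article" table mentioned above and return the result
            postgres_pass  database;
            postgres_query "SELECT * FROM article";
        }
    }

The config actually used in the guide is shown in the screenshot that follows.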

It also sets up a server on port 8080 to present the returned query data.

.. figure:: assets/proxyconfig.png
Deploying NGINX Reverse Proxy
*****************************

To deploy NGINX run the following command in the Visual Studio Code CLI::

    make xc-deploy-nginx

Overviewing the NGINX Deployment
********************************

The vK8s deployment now has additional RE deployments, which contain the newly-configured NGINX proxy. The RE locations include many Points of Presence (PoPs) worldwide, and when selected, the Reverse Proxy service can be deployed automatically to each of these sites.

Complete creating the load balancer by clicking **Save and Exit**.

.. figure:: assets/httpsave.png
Testing: Request data from PostgreSQL DB
****************************************

So, in just a few steps, the HTTP Load Balancer is set up and can be used to access the reverse proxy, which pulls the data from our PostgreSQL DB backend deployed on the CE. Let's copy the generated **CNAME value** of the created HTTP Load Balancer to test requesting data from the PostgreSQL database.
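
Besides the browser, the endpoint can be checked from a terminal, for example (the hostname is the copied CNAME; placeholder shown)::

    curl http://<http-lb-cname>/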

Refresh the page and pay attention to the decrease in the loading time.

Wrap-Up
#######

At this stage you should have successfully deployed a highly-available distributed app architecture with: