Commit 773e643 ("fixed markup"), parent commit 8570501

1 file changed: README.rst, 22 additions and 22 deletions
@@ -59,7 +59,7 @@ The app services that consume database objects could reside close to the end-use

 .. figure:: assets/diagram1.png

 Step 1: Prepare environment for HA Load
-######################################
+#######################################

 F5 Distributed Cloud Services let you create edge sites with worker nodes on a wide variety of cloud providers: AWS, Azure, and GCP. The prerequisite is one or more Distributed Cloud CE Sites; once deployed, you can expose the services created on these edge sites via a Site mesh and any additional Load Balancers. The choice between a TCP (L3/L4) and an HTTP/S (L7) Load Balancer depends on how the services need to communicate with each other. In our case we're exposing a database service, which is a fit for a TCP Load Balancer. If there were a backend service exposing an HTTP endpoint for other services to connect to, we could have used an HTTP/S LB instead. (Note that a single CE Site may support one or more virtual sites, which act as logical groupings of site resources.)

@@ -72,7 +72,7 @@ The diagram shows how VK8S clusters can be deployed across multiple CEs with vir

 .. figure:: assets/diagr.png

 Creating an Azure VNET site
-********************
+***************************

 Let's start creating the Azure VNET site with worker nodes. Log in to the F5 Distributed Cloud Console and navigate to the **Multi-Cloud Network Connect** service, then to **Site Management**, and select **Azure VNET Sites**. Click the **Add Azure VNET Site** button.

@@ -193,7 +193,7 @@ And finally, type in the Azure VNET site name, assign it as a label value, and c

 Note the virtual site name, as it will be required later.

 Creating VK8S cluster
-********************
+*********************

 At this point, our edge site for the HA Database deployment is ready. Now create the VK8S cluster. Select both virtual sites (one on the CE and one on the RE) by using the corresponding labels: the one created earlier and *ves-io-shared/ves-io-all-res*. The *all-res* one will be used to deploy workloads on all REs.

@@ -220,14 +220,14 @@ Step 2: Deploy HA PostgreSQL to CE

 Now that the environment for both RE and CE deployments is ready, we can move on to deploying HA PostgreSQL to CE. We will use Helm charts to deploy a PostgreSQL cluster configuration with the help of Bitnami, which provides ready-made Helm charts for HA databases (MongoDB, MariaDB, PostgreSQL, etc.) available in the Bitnami Library for Kubernetes: `https://github.com/bitnami/charts <https://github.com/bitnami/charts>`_. In general, these Helm charts work very similarly, so the example used here can be applied to most other databases or services.

 HA PostgreSQL Architecture in vK8s
-*****************************
+**********************************

 There are several ways of deploying HA PostgreSQL. The architecture used in this guide is shown in the picture below. The pgPool deployment will be used to provide the HA features.

 .. figure:: assets/diagram2.png

 Downloading Key
-**************
+***************

 To operate with the kubectl utility or, in our case, Helm, the *kubeconfig* key is required. xC provides an easy way to get the *kubeconfig* file, control its expiration date, etc. So, let's download the *kubeconfig* for the created VK8s cluster.
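Once downloaded, the *kubeconfig* works with both kubectl and Helm. As a quick sanity check (the filename below is a placeholder for whatever the Console actually gives you):

```shell
# Filename is hypothetical; use the kubeconfig you downloaded from the Console
export KUBECONFIG="$PWD/vk8s-demo-kubeconfig.yaml"
kubectl get namespaces
helm list --all-namespaces
```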

@@ -240,14 +240,14 @@ In the popup window that appears, select the expiration date, and then click **D

 .. figure:: assets/kubeconfigdate.png

 Adding Bitnami Helm Chart repository to Helm
-*****************************************
+********************************************

 Now we need to add the Bitnami Helm chart repository to Helm and then deploy the chart::

    helm repo add bitnami https://charts.bitnami.com/bitnami
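Before installing, it can be worth refreshing the local chart index and confirming the chart is visible; a minimal sketch:

```shell
# Refresh the local index and look up the HA PostgreSQL chart
helm repo update
helm search repo bitnami/postgresql-ha
```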
 Updating Credentials in Makefile
-***************************
+********************************

 Before we can proceed to the next step, we need to update the credentials in the Makefile. Go to the Makefile and update the following variables:

@@ -264,7 +264,7 @@ Before we can proceed to the next step, we will need to update the creds in the

 5. Indicate your *docker-password* (which can be a password or an access token).

 Making Secrets
-************
+**************

 VK8s needs to download Docker images from a registry. This might be *docker.io* or any other Docker registry your company uses. The Docker secrets need to be created from the command line using the *kubectl create secret* command. Use the name of the *kubeconfig* file that you downloaded in the previous step.
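The step above can be sketched with the standard *kubectl create secret docker-registry* form; the secret name *regcred*, the kubeconfig filename, and the environment variables are placeholders for the values from your Makefile:

```shell
# Secret name, filename, and env vars are placeholders
kubectl --kubeconfig ./vk8s-demo-kubeconfig.yaml \
  create secret docker-registry regcred \
  --docker-server=docker.io \
  --docker-username="$DOCKER_USERNAME" \
  --docker-password="$DOCKER_PASSWORD"
```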

@@ -274,7 +274,7 @@ NOTE. Please, note that the created secret will not be seen from Registries UI a

 Updating DB Deployment Chart Values
-********************************
+***********************************

 Bitnami provides ready-made charts for HA database deployments; the postgresql-ha chart can be used here. Installing the chart requires setting up the corresponding variables so that the HA cluster can run in the xC environment. The main things to change are:
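As an illustration only (the exact keys depend on the chart version, and these are assumptions rather than the guide's exact values), the overrides in *values.yaml* might look like:

```yaml
# Hypothetical values.yaml overrides for bitnami/postgresql-ha
global:
  imagePullSecrets:
    - regcred                  # docker-registry secret created earlier (name assumed)
postgresql:
  containerSecurityContext:
    runAsNonRoot: true         # the guide sets runAsNonRoot: true
pgpool:
  containerSecurityContext:
    runAsNonRoot: true
```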

@@ -307,14 +307,14 @@ Let's proceed to specify the above-mentioned values in the *values.yaml*:

    runAsNonRoot: true

 Deploying HA PostgreSQL chart to xC vK8s
-********************************
+****************************************

 As the values are now set up to run in xC, deploy the chart to the xC vK8s cluster using the **xc-deploy-bd** command in the Visual Studio Code CLI::

    make xc-deploy-bd

 Checking deployment
-******************
+*******************

 After we have deployed HA PostgreSQL to vK8s, we can check that the pods and services are deployed successfully from the distributed virtual Kubernetes dashboard.
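If you prefer the CLI to the dashboard, the same check can be done with kubectl (the kubeconfig filename and namespace are placeholders):

```shell
# List the deployed pods and services in the target namespace
kubectl --kubeconfig ./vk8s-demo-kubeconfig.yaml get pods,svc -n my-namespace
```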

@@ -340,19 +340,19 @@ Go one step back and take the same steps for the second pod to see its status. T

 .. figure:: assets/logs2.png

 Step 3: Expose CE services to RE deployment
-####################################
+###########################################

 The CE deployment is up and running. Now a secure channel needs to be created so that RE and CE can communicate: the RE will read data from the CE-deployed database. To do so, two additional objects need to be created.

 Exposing CE services
-*****************
+********************

 To access the HA database deployed to the CE site, we will need to expose this service via a TCP Load Balancer. Since Load Balancers are created on the basis of an Origin Pool, we will start with creating a pool.

 .. figure:: assets/diagram3.png

 Creating origin pool
-*****************
+********************

 To create an Origin Pool for the vk8s-deployed service, follow the steps below.

@@ -399,7 +399,7 @@ From the Origin Pool drop-down menu, select the origin pool created in the previ

 .. figure:: assets/tcppool.png

 Advertising Load Balancer on RE
-**************************
+*******************************

 From the **Where to Advertise the VIP** menu, select **Advertise Custom** to configure our own custom config and click **Configure**.

@@ -422,10 +422,10 @@ Complete creating the load balancer by clicking **Save and Exit**.

 .. figure:: assets/saveadvertise.png

 Step 4: Test connection from RE to DB
-#################################
+#####################################

 Infrastructure to Test the deployed PostgreSQL
-****************************************
+**********************************************

 To test access to the CE-deployed database from the RE deployment, we will use an NGINX reverse proxy with a module that gets data from PostgreSQL; this service will be deployed to the Regional Edge. This type of data pull is not a good idea for production, but it is very useful for test purposes. A test user will query the RE-deployed NGINX reverse proxy, which will in turn query the database. An HTTP Load Balancer and an Origin Pool also need to be created to access NGINX on the RE.

@@ -451,7 +451,7 @@ And now let’s build all this by running the **make docker** command in the Vis

 .. figure:: assets/makedocker.png

 NGINX Reverse Proxy Config to Query PostgreSQL DB
-***********************************************
+*************************************************

 NGINX creates a server listening on port 8080. The default location gets all items from the article table and caches them. The following NGINX config sets up the reverse proxy configuration to forward traffic from RE to CE, where “re2ce.internal” is the TCP load balancer we created earlier in `Creating TCP Load Balancer`_.
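As a rough sketch of such a configuration, assuming an ngx_postgres-style module (the upstream name, credentials, and exact directives are illustrative, not the guide's actual file):

```nginx
# Illustrative only; not the guide's actual config
upstream database {
    postgres_server re2ce.internal:5432 dbname=postgres
                    user=app password=app-password;   # placeholder credentials
}

server {
    listen 8080;

    location / {
        postgres_pass  database;                  # ngx_postgres proxy directive
        postgres_query "SELECT * FROM article";   # table name taken from the text above
    }
}
```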

@@ -461,15 +461,15 @@ It also sets up a server on a port 8080 to present the query data that returns a

 .. figure:: assets/proxyconfig.png

 Deploying NGINX Reverse Proxy
-****************************
+*****************************

 To deploy NGINX, run the following command in the Visual Studio Code CLI::

    make xc-deploy-nginx

 Overviewing the NGINX Deployment
-******************************
+********************************

 The vK8s deployment now has additional RE deployments, which contain the newly configured NGINX proxy. The RE locations include many Points of Presence (PoPs) worldwide, and when they are selected, our reverse proxy service is deployed automatically to each of these sites.

@@ -528,7 +528,7 @@ Complete creating the load balancer by clicking **Save and Exit**.

 .. figure:: assets/httpsave.png

 Testing: Request data from PostgreSQL DB
-************************************
+****************************************

 So, in just a few steps above, the HTTP Load Balancer is set up and can be used to access the reverse proxy, which pulls the data from our PostgreSQL DB backend deployed on the CE. Let's copy the generated **CNAME value** of the created HTTP Load Balancer to test requesting data from the PostgreSQL database.
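From a terminal, the same test can be run with curl; the hostname below is a placeholder for the copied CNAME value:

```shell
# Replace the hostname with the CNAME value copied from the Console
curl -s http://your-lb-cname.example.com/
```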

@@ -546,7 +546,7 @@ Refresh the page and pay attention to the decrease in the loading time.

 Wrap-Up
-########
+#######

 At this stage you should have successfully deployed a highly available distributed app architecture with:
