F5 Enterprise Networking with High-Availability Config (in Microsoft Azure)
Use this guide to explore an Azure-specific example of High-Availability (HA) Application Delivery using F5 Distributed Cloud Services (XC). This guide walks through a sample configuration of F5 Enterprise Networking (F5 Distributed Cloud Network Connect) and F5 Load Balancing (F5 Distributed Cloud App Connect). Leveraging a distributed architecture such as this simplifies the deployment and management of workloads across multiple clouds and regions.
This will help you get familiar with the general pattern of deploying high-availability configurations on Microsoft Azure by using Kubernetes Helm charts in a multi-node F5 Distributed Cloud Customer Edge (CE) "Site", which itself can then be exposed to other services. This is a common use-case for deploying a highly-available backend or a database on Kubernetes, which can then be used in conjunction with Regional Edge (RE) deployments that consume and/or interact with the central CE location.
In this guide we will use an example of a PostgreSQL database deployment in a High-Availability (HA) configuration on a CE and expose it to RE location(s) closer to end-users for lower latency, improved performance, and data resiliency.
The guide includes the following key steps:
• Preparing the environment for the HA workload
• Deploying the HA PostgreSQL database to the CE
• Exposing CE services to the RE deployment
• Testing the connection from the RE to the DB
The resulting architecture is a PostgreSQL database deployed in an HA configuration on Kubernetes, running on several compute nodes within an Azure VNET (a perfect fit for a CE deployment), and exposed via a TCP Load Balancer to a service in an RE that reads and presents the database contents to end-users.
Each of the steps in this guide addresses a specific part of the whole deployment process and describes it in detail. Therefore, this guide can be completed step-by-step or by skipping some of the steps if you are already familiar with them.
For more information on the use cases covered by this Demo Guide, please see the following resources including DevCentral article(s) and YouTube video(s):
DevCentral Articles:
YouTube videos:
To complete this guide, you will need:
• F5 Distributed Cloud Console account
• Microsoft Azure for deploying a CE Site
• A web browser to test the connectivity from RE to DB
• Kubernetes CLI
F5 Distributed Cloud Sites on an Azure VNET provide a framework for easily connecting and managing distributed workloads. Such deployment results in a robust app infrastructure with full mesh connectivity, and ease of management as if it were a single Kubernetes (K8s) cluster. It provides an ideal platform for several nodes to be provisioned in a high-availability configuration for a PostgreSQL database cluster. The services within this cluster can then be exposed to other app services via a TCP load balancer.
The app services that consume database objects could reside close to the end-user if they are deployed in F5 Distributed Cloud RE, resulting in the following optimized architecture:
F5 Distributed Cloud Services allow you to create edge sites with worker nodes on a wide variety of cloud providers: AWS, Azure, GCP. The prerequisite is one or more Distributed Cloud CE Sites, and once deployed, you can expose the services created on these edge sites via a Site mesh and any additional Load Balancers. The selection of TCP (L3/L4) or HTTP/S (L7) Load Balancers depends on the requirements for the services to communicate with each other. In our case, we're exposing a database service, which is a fit for a TCP Load Balancer. Should there be a backend service or anything that exposes an HTTP endpoint for other services to connect to, we could have used an HTTP/S LB instead. (Note that a single CE Site may support one or more virtual sites, which is similar to a logical grouping of site resources.)
A single virtual site can also be deployed across multiple CEs, thus creating a multi-cloud infrastructure. It is also possible to place several virtual sites into one CE, each with its own policy settings for more granular security and app service management. Several virtual sites can also share the same CE site, or span different CE sites, as underlying resources.
During the creation of sites & virtual sites, labels such as site name, site type and others can be used to organize site resources. If you want to use site name to organize an edge site as a virtual site, then the ves.io/siteName label can be used.
The diagram shows how vK8s clusters can be deployed across multiple CEs with virtual sites to control distributed cloud infrastructure. Note that this architecture shows four virtual clusters assigned to CE sites in different ways.
Let's start by creating the Azure VNET site with worker nodes. Log into the Console and navigate to the Multi-Cloud Network Connect service, then to Site Management and select Azure VNET Sites. Click the Add Azure VNET Site button.
Give the site a name and scroll down to the Site Type Selection section.
From the Cloud Credentials drop-down menu, select the existing Azure credentials object. Fill in the Resource Group and select the Azure Region Name for it. In this guide we use the centralus region.
Enter the 10.0.0.0/16 CIDR in the IPv4 CIDR block field and move on to set the node configuration. Under the Ingress Gateway (One Interface) on Recommended Region click Configure.
Click Add Item to configure the Ingress Gateway (One Interface).
Now we will configure the first node: select 1 from the Azure AZ name menu which matches the configured Azure Region. Enter the new subnet address 10.0.1.0/24 in IPv4 Subnet. Click Apply to save the first node settings.
Click the Add Item button again to configure the second node.
Let's now configure the second node: select 2 from the Azure AZ name menu and enter the new subnet address 10.0.2.0/24 in IPv4 Subnet. Then click Apply to save the node settings.
Click the Add Item button one more time to configure the third node.
Select 3 from the Azure AZ name menu and enter the new subnet address 10.0.3.0/24 in IPv4 Subnet. Then click Apply to save the node settings.
Now that the three nodes are configured, let's proceed and apply the configuration.
Next, we will paste the Public SSH key to access the site. Note that if you don't have a key, you can generate one using the ssh-keygen command and then display it with the command cat ~/.ssh/id_rsa.pub.
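For example, if you have OpenSSH installed, you can generate a key pair and print the public part as follows (the rsa type and 4096-bit size are just a common choice):

ssh-keygen -t rsa -b 4096
cat ~/.ssh/id_rsa.pub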
Next, we will configure Desired Worker Nodes in the advanced configuration. To do that, in the Advanced Configuration section, enable the Show Advanced Fields option. From the Desired Worker Nodes Selection menu, choose the Desired Worker Nodes Per AZ option and enter 1 as the number of worker nodes for this demo. The number of worker nodes you set here will be created in each availability zone where you configured a node. Then click the Add Azure VNET Site button to complete the Azure VNET Site creation.
Note that site upgrades may take up to 10 minutes per site node. Once the VNET site object has been created and its Status box displays Validation Succeeded, we need to apply the Terraform parameters to the site: click Apply in the Deployment column.
First, the Status field for the Azure VNET object changes to Applying. Wait for the apply process to complete and the status to change to Applied.
When the site is created, assign a label to it. Use the ves.io/siteName label with the site name as its value. Follow the instructions below to configure the site.
First, open the menu of the created Azure VNET site and navigate to Manage Configuration.
Open the editing mode and click Add Label.
As mentioned before, select the ves.io/siteName key.
Then type in the name of the Azure VNET site we created earlier to assign it as the custom value for this key.
Click Save Azure VNET Site to apply the label configuration.
As soon as an edge site is created and the label is assigned, create a virtual site, as described below. The virtual site should be of the CE type and the label must be ves.io/siteName with operation == and the name of the Azure VNET site.
Navigate to the Distributed Apps service and select Virtual Sites in the Manage section. After that click Add Virtual Site to load the creation form.
In the Metadata section Name field, enter a virtual site name. In the Site Type section, select the CE site type from the drop-down menu, and then move on to adding a label.
Now we will configure the label expression. First, select ves.io/siteName as a key.
Then select the == operator.
And finally, type in the name of the Azure VNET site we created earlier, assign it as a label value, and complete the process by clicking the Add Virtual site button.
Note the virtual site name, as it will be required later.
At this point, our edge site for the HA Database deployment is ready. Now create the vK8s cluster and select both virtual sites (one on the CE and one on the RE): the one created earlier and ves-io-shared/ves-io-all-res. The all-res virtual site will be used for deploying workloads on all REs.
Navigate to the Virtual K8s and click the Add Virtual K8s button to create a vK8s object.
In the Name field, enter a name. In the Virtual Sites section, select Add Item.
Then select the virtual site we created from the drop-down menu. Click Add Item again to add the second virtual site which is on RE.
Select the ves-io-shared/ves-io-all-res. The all-res one will be used for the deployment of workloads on all REs. It includes all regional edge sites across the F5 ADN.
Complete creating the vK8s object by clicking Add Virtual K8s. Wait for the vK8s object to get created and displayed.
Now that the environment for both RE and CE deployments is ready, we can move on to deploying HA PostgreSQL to CE. We will use Helm charts to deploy a PostgreSQL cluster configuration with the help of Bitnami, which provides ready-made Helm charts for HA databases: MongoDB, MariaDB, PostgreSQL, etc., and is available in the Bitnami Library for Kubernetes: https://github.com/bitnami/charts. In general, these Helm charts work very similarly, so the example used here can be applied to most other databases or services.
There are several ways to deploy HA PostgreSQL. The architecture used in this guide is shown in the picture below. The pgpool deployment will be used to provide the HA features.
To operate with the kubectl utility or, in our case, Helm, the kubeconfig key is required. The Console provides an easy way to get the kubeconfig file, control its expiration date, etc. So, let's download the kubeconfig for the created vK8s cluster.
Open the menu of the created virtual K8s and click Kubeconfig.
In the popup window that appears, select the expiration date, and then click Download Credential.
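One way to point kubectl and Helm at the vK8s cluster is to reference the downloaded file via the KUBECONFIG environment variable (YOURFILENAME.yaml is a placeholder for the file you just downloaded):

export KUBECONFIG=YOURFILENAME.yaml
kubectl version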
Now we need to add the Bitnami Helm chart repository to Helm and then deploy the chart:
helm repo add bitnami https://charts.bitnami.com/bitnami
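Optionally, you can refresh the local chart index and confirm that the postgresql-ha chart used below is available (standard Helm commands):

helm repo update
helm search repo bitnami/postgresql-ha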
Before we can proceed to the next step, we need to update the credentials in the Makefile. Go to the Makefile and update the following variables:
- Indicate your docker registry (which is your docker registry user name).
- Replace the name of our kubeconfig file with the one you just downloaded (note there are a few mentions of it).
- Indicate your docker-server (which is https://index.docker.io/v1 for Docker Hub).
- Indicate your docker-username.
- Indicate your docker-password (which can be a password or an access token).
vK8s needs to pull docker images from a registry. This might be docker.io or any other docker registry your company uses. The docker registry secret needs to be created from the command line using the kubectl create secret command, pointing at the kubeconfig file that you downloaded in the previous step.
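A minimal sketch of that command, assuming Docker Hub and a secret named regcred (the secret name and credential values are placeholders; use the same values you put in the Makefile):

KUBECONFIG=YOURFILENAME.yaml kubectl create secret docker-registry regcred \
  --docker-server=https://index.docker.io/v1 \
  --docker-username=YOUR_DOCKER_USERNAME \
  --docker-password=YOUR_DOCKER_PASSWORD_OR_TOKEN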
NOTE: The created secret will not be visible in the Registries UI, since that section is used for creating deployments from the UI, whereas this demo deploys via the Helm script.
Bitnami provides ready charts for HA database deployments. The postgresql-ha chart can be used. The chart installation requires setup of the corresponding variables so that the HA cluster can run in the Distributed Cloud Platform environment. The main things to change are:
- ves.io/virtual-sites, which specifies the virtual site where the chart will be deployed; here it must reference the CE virtual site we created.
- The clusterDomain key must also be set, so that the PostgreSQL services can resolve each other.
- And finally, the kubeVersion key.
NOTE: It is important to specify memory and CPU resource values for the PostgreSQL services; otherwise the Console applies its own minimal values, which are not enough for PostgreSQL to operate successfully.
Let's proceed to specify the above-mentioned values in the values.yaml:
- To deploy the workload to a predefined virtual site, copy the virtual site name and namespace from the Virtual Sites page. Paste the namespace and the virtual site name into the ves.io/virtual-sites value in values.yaml.
- An important key in the values for the database is clusterDomain. Construct the value this way: {sitename}.{tenant_id}.tenant.local. Note that the site name here is the name of the Edge site, not the virtual one. We can get this information from the site settings: navigate to the Multi-Cloud Network Connect service, proceed to the Site Management section, and select the Azure VNET Sites option. Open the JSON settings of the site in the Azure VNET Sites list. The tenant id and site name are shown as the tenant and name fields of the object.
- Next, let's get the kubeVersion key. Open the terminal and run the KUBECONFIG=YOURFILENAME.yaml kubectl version command to get the Kubernetes version. Then copy the value into values.yaml.
- Since vK8s supports only non-root containers, make sure the following values are specified:
containerSecurityContext:
  runAsNonRoot: true
Now that the values are set up to run in the Distributed Cloud Platform, deploy the chart to the vK8s cluster using the xc-deploy-bd make target in the Visual Studio Code terminal:
make xc-deploy-bd
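Under the hood, this make target wraps a Helm installation roughly equivalent to the following sketch (the ha-postgres release name is illustrative, ha-services-ce is the namespace used later in this guide, and values.yaml contains the values prepared above):

KUBECONFIG=YOURFILENAME.yaml helm upgrade --install ha-postgres bitnami/postgresql-ha \
  --namespace ha-services-ce -f values.yaml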
After deploying the HA PostgreSQL to vK8s, we can check from the distributed virtual Kubernetes dashboard that the pods and services were deployed successfully.
To do that, take the following steps. On the Virtual K8s page, click the vK8s we created earlier to drill down into its details.
Then move on to the Pods tab, open the menu of the first pod and select Show Logs.
Open the drop-down menu and select postgresql as the container to show the logs from.
As we can see, the first pod is successfully deployed, up and running.
Go one step back and take the same steps for the second pod to see its status. After selecting postgresql as the container to show the logs from, we can see that the second pod was properly deployed and is up and running.
The CE deployment is up and running. Now we need to create a secure channel between the RE and the CE so that the RE can read data from the database deployed on the CE. To do so, two additional objects need to be created.
To access the HA Database deployed to the CE site, we will need to expose this service via a TCP Load Balancer. Since Load Balancers are created on the basis of an Origin Pool, we will start with creating a pool.
To create an Origin Pool for the vK8s deployed service, follow the steps below.
First, copy the name of the service we will create the pool for. Then navigate to Load Balancer and proceed to Origin Pools.
Click Add Origin Pool to open the origin pool creation form.
In the Name field, enter a name. In the Origin Servers section click Add Item.
From the Select Type of Origin Server menu, select the K8s Service Name of Origin Server on given Sites type to specify the origin server with its K8s service name. Then enter the service name of the origin server in the <service-name>.<namespace> format, using the service name we copied earlier. Select the Virtual Site option in the Site or Virtual Site menu, and select the virtual site created earlier. After that, select vK8s Networks on Site as the network. Finally, click Apply.
Enter a port number in the Port field. We use 5432 for this guide. Complete creating the origin pool by clicking Add Origin Pool.
As soon as the Origin Pool is ready, the TCP Load Balancer can be created, as described below. This load balancer needs to be accessible only from the RE network, or, in other words, to be advertised there, which will be done in the next step.
Navigate to the TCP Load Balancers option of the Load Balancers section. Then click Add TCP Load Balancer to open the load balancer creation form.
In the Metadata section, enter a name for your TCP load balancer. Then click Add item to add a domain.
In the Domains field, enter the name of the domain to be used with this load balancer (re2ce.internal), and in the Listen Port field, enter port 5432. This makes it possible to access the service by the TCP Load Balancer domain and port: with the domain re2ce.internal and port 5432, the connection to the DB can be made from the RE using these settings. Then move on to the Origin Pools section and click Add Item to open the configuration form.
From the Origin Pool drop-down menu, select the origin pool created in the previous step and click Apply.
From the Where to Advertise the VIP menu, select Advertise Custom to configure our own custom config and click Configure.
Click Add Item to add a site to advertise.
First, select vK8s Service Network on RE for the Select Where to Advertise field. Then select Virtual Site Reference as the reference type, and assign ves-io-shared/ves-io-all-res as the reference. Next, set the Listen Port to 5432. Finally, click Apply.
Take a look at the custom advertise VIP configuration and proceed by clicking Apply.
Complete creating the load balancer by clicking Add TCP Load Balancer.
To test access to the database deployed on the CE from the RE deployment, we will use an NGINX reverse proxy with a module that pulls data from PostgreSQL; this service will be deployed to the Regional Edge. This type of data pull is not a good idea in production, but it is very useful for test purposes. So, the test user will query the NGINX reverse proxy deployed on the RE, which will in turn query the database. An HTTP Load Balancer and an Origin Pool also need to be created to access NGINX on the RE.
To query our PostgreSQL data, the data should first be put into the database. The easiest way to initialize a database is to use the migrate/migrate project. As a Dockerfile we will use dockerfile.migrate.nonroot. The only customization required is to run the container in non-root mode.
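For reference, the migrate CLI inside that image is typically invoked along these lines (user, password, database name, and migrations path are placeholders; re2ce.internal:5432 is the TCP Load Balancer endpoint created earlier):

migrate -path /migrations -database "postgres://USER:PASSWORD@re2ce.internal:5432/DBNAME?sslmode=disable" up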
The default NGINX build does not include the PostgreSQL module. Luckily, the OpenResty project allows easy compilation of NGINX with the module. Take a look at Dockerfile.openresty.
The NGINX deployed on the RE should run in non-root mode, so we convert the OpenResty-compiled NGINX into a non-root one.
Now let's build all of this by running the make docker command in the Visual Studio Code terminal. Note that this may take some time.
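If the make docker target also pushes the images to your registry, make sure you are logged in first (for Docker Hub the registry argument can be omitted):

docker login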
NGINX creates a server, listening on port 8080. The default location gets all items from the article table and caches them. The following NGINX config sets up the reverse proxy configuration to forward traffic from RE to CE, where "re2ce.internal" is the TCP load balancer we created earlier in Creating TCP Load Balancer.
It also sets up a server on port 8080 to present the query data that returns all items from the "articles" table.
To deploy NGINX, run the following command in the Visual Studio Code terminal:
make xc-deploy-nginx
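As an alternative to the Console dashboards described next, a quick command-line check with the downloaded kubeconfig also shows the new RE pods and services:

KUBECONFIG=YOURFILENAME.yaml kubectl get pods
KUBECONFIG=YOURFILENAME.yaml kubectl get services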
The vK8s deployment now has additional RE deployments, which contain the newly-configured NGINX proxy. The RE locations include many Points of Presence (PoPs) worldwide, and when selected, it is possible to have our Reverse Proxy service deployed automatically to each of these sites.
Let's now take a look at the NGINX Deployment. Go back to the F5 Distributed Cloud Console and navigate to the Distributed Apps service. Proceed to the Virtual K8s and click the one we created earlier.
Here we can drill down into the cluster information to see the number of pods in it and their status, deployed applications and their services, sites, memory and storage.
Next, let's look at the pods in the cluster. Click the Pods tab to proceed.
Here we will drill down into the cluster pods: their nodes, statuses, virtual sites they are referenced to, and more.
To access our NGINX module that pulls the data from PostgreSQL, we need an HTTP Load Balancer. This load balancer needs to be advertised on the internet so that it can be accessed from outside of the vK8s cluster. Let's move on and create an HTTP Load Balancer.
Navigate to Load Balancers and select the HTTP Load Balancers option. Then click the Add HTTP Load Balancer button to open the creation form.
In the Name field, enter a name for the new load balancer. Then proceed to the Domains section and fill in the nginx.domain.
Next, from the Load Balancer Type drop-down menu, select HTTP to create the HTTP type of load balancer. After that move on to the Origins section and click Add Item to add an origin pool for the HTTP Load Balancer.
To create a new origin pool, click Add Item.
First, give it a name, then specify the 9080 port and proceed to add Origin Servers by clicking the Add Item button.
First, from the Select Type of Origin Server menu, select K8s Service Name of Origin Server on given Sites to specify the origin server with its K8s service name. Then enter the nginx-rp.ha-services-ce service name in the Service Name field where nginx-rp is the deployed service name and ha-services-ce is the namespace. Next, select the Virtual Site option in the Site or Virtual Site menu to choose ves-io-shared/ves-io-all-res site which includes all regional edge sites across the F5 ADN. After that, select vK8s Networks on Site which means that the origin server is on the vK8s network on the site and, finally, click Apply.
Click Add Origin Pool to move on to apply the origin pool configuration.
Click the Apply button to apply the origin pool configuration to the HTTP Load Balancer.
Complete creating the load balancer by clicking Add HTTP Load Balancer.
So, in just a few steps above, the HTTP Load Balancer is set up and can be used to access the reverse proxy which pulls the data from our PostgreSQL DB backend deployed on the CE. Let's copy the generated CNAME value of the created HTTP Load Balancer to test requesting data from the PostgreSQL database.
Click on the DNS info and then the copy icon.
Go to your browser and open the developer tools. Then paste the copied CNAME value. Take a look at the loading time.
Refresh the page and pay attention to the decrease in the loading time.
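If you prefer the command line, you can also time the request with curl (replace the host with the CNAME value you copied):

curl -s -o /dev/null -w "total time: %{time_total}s\n" http://PASTE_CNAME_HERE/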
At this stage you should have successfully deployed a highly-available distributed app architecture with:
- A PostgreSQL database in an HA configuration in a central location, deployed across multiple vK8s pods running on several compute nodes within a Customer Edge Site in an Azure VNET;
- A TCP load balancer that exposes and advertises this workload to other deployments within our topology;
- An RE deployment that can run across many geographic regions, and contains an NGINX Reverse Proxy with a module that reads the data from our central database.
Such a configuration could be used as a reference architecture for deploying a centralized database or backend service by way of Helm Charts running in Kubernetes, which can be connected to REs containing customer-facing apps & services closer to the users' region. These services can all be deployed and managed via the F5 Distributed Cloud Console for faster time-to-value and more control. Of course, any of these services can also be secured with Web Application and API Protection solutions as well, further improving the reliability and robustness of the resulting architecture.
We hope you now have a better understanding of how F5 Distributed Cloud Services provide virtual Kubernetes (vK8s) capabilities to simplify the deployment and management of distributed workloads across multiple clouds and regions, and that you are ready to implement them for your own organization. Should you have any issues or questions, please feel free to raise them via GitHub. Thank you!