This is the TCP Installation Guide for the NGINX Loadbalancer for Kubernetes Controller Solution. It contains detailed instructions for implementing the different components for the Solution.
This Solution from NGINX provides Enterprise-class features that address common challenges with networking, traffic management, and High Availability for On-Premises Kubernetes Clusters.
- Provides a replacement Loadbalancer Service. The Loadbalancer Service is a key component provided by most Cloud Providers. However, when running a cluster On Premises, the Loadbalancer Service is not available. This Solution provides a replacement, using an NGINX Server and a new K8s Controller. These two components work together to watch the NodePort Service in the cluster, and immediately update the NGINX Loadbalancing Server when changes occur. No more static `ExternalIP` needed in your `loadbalancer.yaml` manifests!
- Provides automatic NGINX upstream config updates, application health checks, advanced Loadbalancing algorithms, and enhanced metrics.
- Provides an upgrade option to NGINX's powerful HTTP processing - dynamic, ratio-based Load Balancing for Multiple Clusters. This allows for advanced traffic steering and operational efficiency with no Reloads or downtime. See the HTTP Install Guide for additional details on the advanced HTTP Solution, which can provide:
- MultiCluster Loadbalancing and High Availability
- Horizontal Cluster scaling
- Non-stop seamless K8s Cluster upgrades, migrations, patching
- HTTP Split clients for A/B, Blue/Green, and Canary testing and production traffic
- Additional security features like App Protect Firewall, JWT auth, Rate Limiting, Service and Bandwidth controls, FIPS, advanced TLS features.
- Install NGINX Ingress Controller in your Cluster
- Install NGINX Cafe Demo Application in your Cluster
- Install NGINX Plus on the Loadbalancer Server(s)
- Configure NGINX Plus for TCP Load Balancing
- Install NLK NGINX Loadbalancing for Kubernetes Controller in your Cluster
- Install NLK LoadBalancer or NodePort Service manifest
- Test out NLK
- Working Kubernetes cluster, with admin privileges
- Running nginx-ingress controller, either OSS or Plus. This install guide followed the instructions for deploying an NGINX Ingress Controller here: https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests
- Demo application; this install guide uses the NGINX Cafe example, found here: https://github.com/nginxinc/kubernetes-ingress/tree/main/examples/ingress-resources/complete-example
- A bare metal Linux server or VM for the external NGINX Loadbalancing Server, connected to a network external to the cluster. Two of these will be required if High Availability is needed, as shown here.
- NGINX Plus software loaded on the Loadbalancing Server(s). This install guide follows the instructions for installing NGINX Plus on CentOS 7, located here: https://docs.nginx.com/nginx/admin-guide/installing-nginx/installing-nginx-plus/
- The NGINX Loadbalancer for Kubernetes (NLK) Controller, new software from NGINX for this Solution.
A standard K8s cluster is all that is required. There must be enough resources available to run the NGINX Ingress Controller, the new NGINX Loadbalancer for Kubernetes Controller, and a test application like the Cafe Demo. You must have administrative access to be able to create the namespace, services, and deployments for this Solution. This Solution was tested on Kubernetes version 1.23.
The NGINX Ingress Controller in this Solution is the destination target for traffic (north-south) that is being sent to the cluster. The installation of the actual Ingress Controller is outside the scope of this installation guide, but we include the links to the docs for your reference. The NIC installation must follow the documents exactly as written, as this Solution refers to the nginx-ingress namespace and service objects. Only the very last step is changed.
NOTE: This Solution only works with nginx-ingress from NGINX. It will not work with the Community version of Ingress, called ingress-nginx.
If you are unsure which Ingress Controller you are running, check out the blog on nginx.com:
https://www.nginx.com/blog/guide-to-choosing-ingress-controller-part-4-nginx-ingress-controller-options
Important! Do not complete the very last step in the NIC deployment with Manifests - do not deploy the `loadbalancer.yaml` or `nodeport.yaml` Service file! You will apply a different loadbalancer or nodeport Service manifest later, after the NLK Controller is up and running. The nginx-ingress Service file must be changed - it is not the default file.
This is not part of the actual Solution, but it is useful to have a well-known application running in the cluster, as a known-good target for test commands. The example provided here is used by the Solution to demonstrate proper traffic flows.
Note: If you choose a different Application to test with, the NGINX health checks provided here will likely NOT work, and will need to be modified to work correctly.
- Use the provided Cafe Demo manifests in the cafe-demo folder:

  ```bash
  kubectl apply -f cafe-secret.yaml
  kubectl apply -f cafe.yaml
  kubectl apply -f cafe-virtualserver.yaml
  ```
- The Cafe Demo reference files are located here: https://github.com/nginxinc/kubernetes-ingress/tree/main/examples/ingress-resources/complete-example
- The Cafe Demo Docker image used here is an upgraded one, with simple graphics and additional TCP/IP and HTTP variables added: https://hub.docker.com/r/nginxinc/ingress-demo
IMPORTANT - Do not use the `cafe-ingress.yaml` file. Rather, use the `cafe-virtualserver.yaml` file that is provided here. It uses the NGINX Plus CRDs to define a VirtualServer and the related VirtualServerRoutes needed. If you are using the NGINX OSS Ingress Controller, you will need to use the appropriate manifests, which are not covered in this Solution.
This can be any standard Linux OS system, based on the Linux Distro and Technical Specs required for NGINX Plus, which can be found here: https://docs.nginx.com/nginx/technical-specs/
This Solution followed the Installation of NGINX Plus on Centos/Redhat/Oracle steps for installing NGINX Plus.
https://docs.nginx.com/nginx/admin-guide/installing-nginx/installing-nginx-plus/
NOTE: This Solution will only work with NGINX Plus, as NGINX OpenSource does not have the API that is used in this Solution. Installation on unsupported Linux Distros is not recommended.
If you need a license for NGINX Plus, a 30-day Trial license is available here:
https://www.nginx.com/free-trial-request/
This is the configuration required for the NGINX Loadbalancing Server, external to the cluster. It must be configured for the following:
- Move the NGINX default Welcome page from port 80 to port 8080. Port 80 will be used by the stream context, instead of the http context.
- Plus API with write access enabled on port 9000.
- Plus Dashboard enabled, used for testing, monitoring, and visualization of the Solution working.
- The NGINX `stream` context is enabled, and configured for TCP loadbalancing.
For easy installation/configuration, Git Clone this repository onto the Loadbalancing Server; it contains all the example files that are used here: https://github.com/nginxinc/nginx-loadbalancer-kubernetes.git

```
etc/
└── nginx/
    ├── conf.d/
    │   ├── dashboard.conf........ NGINX Plus API and Dashboard config
    │   └── default.conf.......... New default.conf config
    ├── nginx.conf................ New nginx.conf
    └── stream/
        └── nginxk8slb.conf....... NGINX TCP Loadbalancing config

nginx-loadbalancer-kubernetes/
└── docs/
    └── tcp/
        ├── loadbalancer-nlk.yaml........ LoadBalancer manifest
        └── nodeport-nlk.yaml ........... NodePort manifest
```

After the new installation of NGINX Plus, make the following configuration changes:
- Change NGINX's http default server to port 8080. See the included `default-tcp.conf` file. After reloading NGINX, the default `Welcome to NGINX` page will be located at http://localhost:8080.

  ```
  cat /etc/nginx/conf.d/default.conf

  # NGINX Loadbalancing for Kubernetes Solution
  # Chris Akker, Apr 2023
  # Example default.conf
  # Change default_server to port 8080
  #
  server {
      listen 8080 default_server;    # Changed to 8080
      server_name localhost;

      #access_log  /var/log/nginx/host.access.log  main;

      location / {
          root   /usr/share/nginx/html;
          index  index.html index.htm;
      }

      #error_page  404              /404.html;

      # redirect server error pages to the static page /50x.html
      #
      error_page   500 502 503 504  /50x.html;
      location = /50x.html {
          root   /usr/share/nginx/html;
      }

      ### other sections removed for clarity
  }
  ```
- Enable the NGINX Plus dashboard. Use the `dashboard.conf` file provided. It will enable the /api endpoint, change the port to 9000, and provide access to the Plus Dashboard. Note: There is no security for the /api endpoint in this example config; it should be secured as appropriate with TLS or an IP allow list. Place this file in the /etc/nginx/conf.d folder, and reload nginx. The Plus dashboard is now accessible at http://nginx-lbserver-ip:9000/dashboard.html. It should look similar to this:
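  The `dashboard.conf` file provided in the repo is the authoritative version; as a rough sketch of what such a config contains (directives follow the standard NGINX Plus api/dashboard setup, details may differ from the repo's file):

  ```nginx
  # Sketch only - use the dashboard.conf provided in the repo.
  server {
      listen 9000;                     # Plus API and Dashboard port

      location /api {
          api write=on;                # read-write API, required by NLK
          # No access control shown here - add TLS or an allow list
          # in production, for example:
          # allow 10.1.1.0/24;
          # deny  all;
      }

      location = /dashboard.html {
          root /usr/share/nginx/html;  # live monitoring dashboard
      }
  }
  ```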
- Create a new folder for the NGINX stream .conf files. `/etc/nginx/stream` is used in this Solution.

  ```bash
  mkdir /etc/nginx/stream
  ```
- Enable the `stream` context for NGINX, which provides TCP load balancing. See the included nginx.conf file. Notice that the stream context is no longer commented out, the new folder is included, and a new stream.log logfile is used to track requests/responses.

  ```
  cat /etc/nginx/nginx.conf

  # NGINX Loadbalancing for Kubernetes Solution
  # Chris Akker, Apr 2023
  # Example nginx.conf
  # Enable Stream context, add /var/log/nginx/stream.log
  #
  user  nginx;
  worker_processes  auto;

  error_log  /var/log/nginx/error.log notice;
  pid        /var/run/nginx.pid;

  events {
      worker_connections  1024;
  }

  http {
      include       /etc/nginx/mime.types;
      default_type  application/octet-stream;

      log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                        '$status $body_bytes_sent "$http_referer" '
                        '"$http_user_agent" "$http_x_forwarded_for"';

      access_log  /var/log/nginx/access.log  main;

      sendfile        on;
      #tcp_nopush     on;

      keepalive_timeout  65;

      #gzip  on;

      include /etc/nginx/conf.d/*.conf;
  }

  # TCP/UDP proxy and load balancing block
  #
  stream {
      include  /etc/nginx/stream/*.conf;

      log_format  stream  '$remote_addr - $server_addr [$time_local] $status $upstream_addr $upstream_bytes_sent';

      access_log  /var/log/nginx/stream.log  stream;
  }
  ```
- Configure NGINX Stream for TCP loadbalancing for this Solution. Notice that this example Solution uses Ports 80 and 443. Place this file in the /etc/nginx/stream folder, and reload NGINX. Notice the match block and health check directives are for the cafe.example.com Demo application from NGINX.

  ```
  # NGINX Loadbalancing for Kubernetes Stream configuration, for L4 load balancing
  # Chris Akker, Apr 2023
  # TCP Proxy and load balancing block
  # NGINX Loadbalancer for Kubernetes
  # State File for persistent reloads/restarts
  # Health Check Match example for cafe.example.com
  #
  #### nginxk8slb.conf

  upstream nginx-lb-http {
      zone nginx-lb-http 256k;
      # Servers managed by NLK Controller
      state /var/lib/nginx/state/nginx-lb-http.state;
  }

  upstream nginx-lb-https {
      zone nginx-lb-https 256k;
      # Servers managed by NLK Controller
      state /var/lib/nginx/state/nginx-lb-https.state;
  }

  server {
      listen 80;
      status_zone nginx-lb-http;
      proxy_pass nginx-lb-http;
      health_check match=cafe;
  }

  server {
      listen 443;
      status_zone nginx-lb-https;
      proxy_pass nginx-lb-https;
      health_check match=cafe;
  }

  match cafe {
      send "GET cafe.example.com/ HTTP/1.0\r\n";
      expect ~ "30*";
  }
  ```
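As noted earlier, the match block above is specific to the Cafe Demo (its `expect` regex looks for a 30x-style status in the response). If you test with a different application, the health check will need adjusting. A sketch of a match block for a hypothetical app that returns 200 OK on `/` (the name `myapp` and the Host header are placeholders) might look like:

```nginx
# Hypothetical health check match - adjust send/expect for your application,
# then reference it with: health_check match=myapp;
match myapp {
    send "GET / HTTP/1.0\r\nHost: myapp.example.com\r\n\r\n";
    expect ~ "200";    # response data must match this regex
}
```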
- Check the NGINX Plus Dashboard, at http://nginx-lbserver-ip:9000/dashboard.html. You should see something like this:
- If you have 2 NGINX Loadbalancing Servers for High Availability, repeat the previous NGINX Plus installation and configuration steps on the second Loadbalancing Server.
This is the new K8s Controller from NGINX, which is configured to watch the k8s environment, the nginx-ingress Service object, and send API updates to the NGINX Loadbalancing Server(s) when there are changes. It only requires three things:
- New Kubernetes namespace and RBAC
- NLK ConfigMap, to configure the Controller
- NLK Deployment, to deploy and run the Controller
- Create the new K8s namespace:

  ```bash
  kubectl create namespace nlk
  ```

- Apply the manifests for the Secret, ServiceAccount, ClusterRole, and ClusterRoleBinding:

  ```bash
  kubectl apply -f secret.yaml -f serviceaccount.yaml -f clusterrole.yaml -f clusterrolebinding.yaml
  ```

- Modify the ConfigMap manifest to match your NGINX Loadbalancing Server(s). Change the nginx-hosts IP address to match your NGINX Loadbalancing Server IP. If you have 2 or more Loadbalancing Servers, separate them with a comma. Keep the port number for the Plus API endpoint, and the /api URL, as shown.
```yaml
apiVersion: v1
kind: ConfigMap
data:
  nginx-hosts: "http://10.1.1.4:9000/api,http://10.1.1.5:9000/api" # change IP(s) to match NGINX Loadbalancing Server(s)
metadata:
  name: nlk-config
  namespace: nlk
```
- Apply the updated ConfigMap:

  ```bash
  kubectl apply -f nlk-configmap.yaml
  ```

- Deploy the NLK Controller:

  ```bash
  kubectl apply -f nlk-deployment.yaml
  ```

- Check to see if the NLK Controller is running, with the updated ConfigMap:

  ```bash
  kubectl get pods -n nlk
  kubectl describe cm nlk-config -n nlk
  ```

  The status should show "Running", and your nginx-hosts should have the Loadbalancing Server IP:9000/api.
To make it easy to watch the NLK Controller's log messages, add the following bash alias:

```bash
alias nlk-follow-logs='kubectl -n nlk get pods | grep nlk-deployment | cut -f1 -d" " | xargs kubectl logs -n nlk --follow'
```

Using a Terminal, you can watch the NLK Controller log:

```bash
nlk-follow-logs
```

Leave this Terminal window open, so you can watch the log messages.
Select which Service Type you would like, and follow the appropriate steps below. Do not use both the LoadBalancer and NodePort Service files at the same time.
Instead, use the loadbalancer-nlk.yaml or nodeport-nlk.yaml manifest file that is provided here with this Solution. The ports name in the manifests MUST be in the correct format for this Solution to work correctly. The port name is the mapping from NodePorts to the Loadbalancing Server's upstream blocks. The port names are intentionally changed to avoid conflicts with other NodePort definitions.
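The naming convention can be read as: everything after the `nlk-` prefix names the target upstream block on the Loadbalancing Server. A small shell sketch of the mapping, using the port names from the manifests in this guide (the prefix-stripping shown here illustrates the convention, not NLK's internal code):

```shell
# Port names from the Service manifests in this guide
for port_name in nlk-nginx-lb-http nlk-nginx-lb-https; do
  upstream="${port_name#nlk-}"   # strip the nlk- prefix
  echo "Service port '${port_name}' -> NGINX upstream block '${upstream}'"
done
```

So `nlk-nginx-lb-https` targets the `nginx-lb-https` upstream block defined in `nginxk8slb.conf`.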
Review the new loadbalancer-nlk.yaml Service definition file:
```yaml
# NLK LoadBalancer Service file
# Spec -ports name must be in the format of
# nlk-<upstream-block-name>
# The nginxinc.io Annotation must be added
# externalIPs are set to NGINX Loadbalancing Servers
# Chris Akker, Apr 2023
#
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
  annotations:
    nginxinc.io/nlk-nginx-lb-http: "stream"    # Must be added
    nginxinc.io/nlk-nginx-lb-https: "stream"   # Must be added
spec:
  type: LoadBalancer
  externalIPs:
  - 10.1.1.4    # NGINX Loadbalancing1 Server
  - 10.1.1.5    # NGINX Loadbalancing2 Server
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: nlk-nginx-lb-http    # Must be changed
  - port: 443
    targetPort: 443
    protocol: TCP
    name: nlk-nginx-lb-https   # Must be changed
  selector:
    app: nginx-ingress
```
- Apply the NLK-compatible LoadBalancer `loadbalancer-nlk.yaml` Service Manifest:

  ```bash
  kubectl apply -f loadbalancer-nlk.yaml
  ```

- Verify the LoadBalancer is now defined:

  ```bash
  kubectl get svc nginx-ingress -n nginx-ingress
  ```

  The nginx-ingress Service ExternalIPs should match your external NGINX Loadbalancing Server IP(s):
Legend:
- Orange is the TYPE LoadBalancer Service.
- Red is the LoadBalancer Service EXTERNAL-IP, which are your NGINX Loadbalancing Server IP(s); 10.1.1.4 and 10.1.1.5 in this example.
- Blue is the K8s NodePort mapping for Port 80.
- Indigo is the K8s NodePort mapping for Port 443.
- Green is the NLK Log messages, creating the upstreams to match.
- The new NLK Controller updates the NGINX Loadbalancing Server upstreams with these, shown on the dashboard.
No Reload of NGINX needed! The NLK Controller uses the Plus API to dynamically add/delete/modify the upstreams as the nginx-ingress Service changes.
Review the new nodeport-nlk.yaml Service definition file:
```yaml
# NLK NodePort Service file
# NodePort -ports name must be in the format of
# nlk-<upstream-block-name>
# The nginxinc.io Annotation must be added
# Chris Akker, Apr 2023
#
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
  annotations:
    nginxinc.io/nlk-nginx-lb-http: "stream"    # Must be added
    nginxinc.io/nlk-nginx-lb-https: "stream"   # Must be added
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: nlk-nginx-lb-http    # Must be changed
  - port: 443
    targetPort: 443
    protocol: TCP
    name: nlk-nginx-lb-https   # Must be changed
  selector:
    app: nginx-ingress
```
- Create the NLK-compatible NodePort Service, using the `nodeport-nlk.yaml` manifest provided:

  ```bash
  kubectl apply -f nodeport-nlk.yaml
  ```

- Verify the NodePort is now defined:

  ```bash
  kubectl get svc nginx-ingress -n nginx-ingress
  ```

Legend:
- Orange is the TYPE NodePort Service.
- Notice the EXTERNAL-IP is blank, as expected.
- Blue is the K8s NodePort mapping for Port 80.
- Indigo is the K8s NodePort mapping for Port 443.
The name of the Service port is matched to the name of the upstream block in NGINX. The Plus API follows a defined format, so the URL for the API call must be correct in order to update the correct NGINX upstream block. There are 2 types of upstreams in NGINX: stream upstreams are used in the stream context, for TCP/UDP load balancing configurations; http upstreams are used in the http context, for HTTP/HTTPS configurations. (See details for HTTP in the http-installation-guide.md, here: HTTP Guide.)
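For reference, the Plus API endpoints for upstream servers follow the pattern `/api/<version>/stream/upstreams/<name>/servers` for stream upstreams, with `http` in place of `stream` for http upstreams. A shell sketch of the URLs involved in this Solution, assuming the example Loadbalancing Server from this guide and API version 8 (the version depends on your NGINX Plus release):

```shell
# Assumed values from this guide's examples
lb_api="http://10.1.1.4:9000/api/8"

# Stream upstream servers (this TCP Solution):
stream_url="${lb_api}/stream/upstreams/nginx-lb-http/servers"
echo "$stream_url"

# HTTP upstream servers (the HTTP Solution) use the http path instead:
http_url="${lb_api}/http/upstreams/nginx-lb-http/servers"
echo "$http_url"
```

This is why the port name format matters: a misnamed port would build a URL for an upstream block that does not exist on the Loadbalancing Server, and the API call would fail.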
When you are finished, the NGINX Plus Dashboard on the Loadbalancing Server should look similar to the following image:
Important items for reference:
- Orange are the upstream server blocks, from the /etc/nginx/stream/nginxk8slb.conf file.
- Blue is the IP:Port of the nginx-ingress Service for http.
- Indigo is the IP:Port of the nginx-ingress Service for https.
Note: In this example, there is a 3-Node K8s cluster, with one Control Node, and 2 Worker Nodes. The NLK Controller only configures Worker Node IP addresses, which are:
- 10.1.1.8
- 10.1.1.10
Note: K8s Control Nodes are excluded intentionally.
Configure DNS, or your local hosts file, to resolve cafe.example.com to the NGINX Loadbalancing Server IP Address. In this example:
```bash
cat /etc/hosts

10.1.1.4 cafe.example.com
```

- Open a browser tab to https://cafe.example.com/coffee.
The Dashboard's TCP/UDP Upstreams Connection counters will increase as you refresh the browser page several times.
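If you prefer the command line to a browser, curl's `--resolve` flag can point cafe.example.com at the Loadbalancing Server without editing /etc/hosts. A sketch (the command is printed here rather than executed, so you can run it against your own environment; `-k` skips verification of the demo's self-signed certificate):

```shell
lb_ip="10.1.1.4"   # your NGINX Loadbalancing Server IP

cmd="curl -k --resolve cafe.example.com:443:${lb_ip} https://cafe.example.com/coffee"
echo "$cmd"        # run the printed command to drive traffic through the stream upstreams
```

Repeating the request several times will increment the same Dashboard counters as the browser test.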
- Using a Terminal, delete the nginx-ingress loadbalancer or nodeport Service definition:

  ```bash
  kubectl delete -f loadbalancer-nlk.yaml
  ```

  or

  ```bash
  kubectl delete -f nodeport-nlk.yaml
  ```

  Now the nginx-ingress Service is gone, and the upstream lists will be empty in the Dashboard.
The NLK log messages confirm the deletion of the upstreams:
- If you refresh the cafe.example.com browser page, it will Time Out. There are NO upstreams for NGINX to send the request to!
- Add the nginx-ingress Service back to the cluster:

  ```bash
  kubectl apply -f loadbalancer-nlk.yaml
  ```

  or

  ```bash
  kubectl apply -f nodeport-nlk.yaml
  ```

- Verify the nginx-ingress Service is re-created. Notice that the NodePort numbers have changed!

  ```bash
  kubectl get svc nginx-ingress -n nginx-ingress
  ```

  The NLK Controller detects this change, and modifies the Loadbalancing Server(s) upstreams to match. The Dashboard will show you the new Port numbers, matching the new LoadBalancer or NodePort definitions. The NLK logs show these messages, confirming the changes:
or
This completes the Testing section.
- Chris Akker - Solutions Architect - Community and Alliances @ F5, Inc.
- Steve Wagner - Solutions Architect - Community and Alliances @ F5, Inc.