
Commit b0700f7

luis5tb and maxwelldb authored
OpenStack: Add scaling information for API and Ingress (#23467)
Add scaling information for API and Ingress

Co-authored-by: Max Bridges <[email protected]>
1 parent 0aace3f commit b0700f7

File tree

5 files changed: +267 -1 lines changed

Lines changed: 18 additions & 0 deletions
@@ -0,0 +1,18 @@
// Module included in the following assemblies:
//
// * networking/openstack/load-balancing-openstack.adoc

[id="installation-osp-api-octavia_{context}"]
= Scaling clusters for application traffic by using Octavia

{product-title} clusters that run on {rh-openstack-first} can use the Octavia load balancing service to distribute traffic across multiple VMs or floating IP addresses. This feature mitigates the bottleneck that single machines or addresses create.

If your cluster uses Kuryr, the Cluster Network Operator created an internal Octavia load balancer at deployment. You can use this load balancer for application network scaling.

If your cluster does not use Kuryr, you must create your own Octavia load balancer to use it for application network scaling.
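Before you choose a procedure, you can confirm which case applies to your cluster. The following check is a minimal sketch, assuming that the `oc` and `openstack` command-line clients are configured for your cluster and your {rh-openstack} deployment:

[source,terminal]
----
$ oc get network.config/cluster -o jsonpath='{.status.networkType}'
----

The command prints `Kuryr` if the cluster uses Kuryr. To verify that the Octavia endpoint is reachable, enter `openstack loadbalancer list`; the command fails with an endpoint error if Octavia is not available.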
Lines changed: 82 additions & 0 deletions
@@ -0,0 +1,82 @@
// Module included in the following assemblies:
//
// * networking/openstack/load-balancing-openstack.adoc

[id="installation-osp-api-scaling_{context}"]
= Scaling clusters by using Octavia

If you want to use multiple API load balancers, or if your cluster does not use Kuryr, create an Octavia load balancer and then configure your cluster to use it.

.Prerequisites

* Octavia is available on your {rh-openstack} deployment.

.Procedure

. From a command line, create an Octavia load balancer that uses the Amphora driver:
+
[source,terminal]
----
$ openstack loadbalancer create --name API_OCP_CLUSTER --vip-subnet-id <id_of_worker_vms_subnet>
----
+
You can use a name of your choice instead of `API_OCP_CLUSTER`.

. After the load balancer becomes active, create listeners:
+
[source,terminal]
----
$ openstack loadbalancer listener create --name API_OCP_CLUSTER_6443 --protocol HTTPS --protocol-port 6443 API_OCP_CLUSTER
----
+
[NOTE]
====
To view the load balancer's status, enter `openstack loadbalancer list`.
====
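+
To wait on the `provisioning_status` field directly instead of re-running the list command, you can use a minimal polling sketch such as the following, assuming the load balancer is named `API_OCP_CLUSTER`:
+
[source,terminal]
----
$ while [ "$(openstack loadbalancer show -c provisioning_status -f value API_OCP_CLUSTER)" != "ACTIVE" ]; do sleep 5; done
----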
. Create a pool that uses the round robin algorithm and has session persistence enabled:
+
[source,terminal]
----
$ openstack loadbalancer pool create --name API_OCP_CLUSTER_pool_6443 --lb-algorithm ROUND_ROBIN --session-persistence type=SOURCE_IP --listener API_OCP_CLUSTER_6443 --protocol HTTPS
----

. To ensure that control plane machines are available, create a health monitor:
+
[source,terminal]
----
$ openstack loadbalancer healthmonitor create --delay 5 --max-retries 4 --timeout 10 --type TCP API_OCP_CLUSTER_pool_6443
----

. Add the control plane machines as members of the load balancer pool:
+
[source,terminal]
----
$ for SERVER in MASTER-0-IP MASTER-1-IP MASTER-2-IP
do
  openstack loadbalancer member create --address $SERVER --protocol-port 6443 API_OCP_CLUSTER_pool_6443
done
----
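+
Replace `MASTER-0-IP`, `MASTER-1-IP`, and `MASTER-2-IP` with the addresses of your control plane machines. If you need to look them up, a hedged sketch, assuming your control plane server names contain the string `master`, is:
+
[source,terminal]
----
$ openstack server list --name master -c Name -c Networks
----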
. Optional: To reuse the cluster API floating IP address, unset it:
+
[source,terminal]
----
$ openstack floating ip unset $API_FIP
----

. Add either the unset `API_FIP` or a new address to the created load balancer VIP:
+
[source,terminal]
----
$ openstack floating ip set --port $(openstack loadbalancer show -c vip_port_id -f value API_OCP_CLUSTER) $API_FIP
----
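+
To confirm that the floating IP address is attached to the load balancer's VIP port, a quick check, assuming `API_FIP` holds the floating IP address, is:
+
[source,terminal]
----
$ openstack floating ip show $API_FIP -c port_id -f value
----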
Your cluster now uses Octavia for load balancing.

[NOTE]
====
If Kuryr uses the Octavia Amphora driver, all traffic is routed through a single Amphora VM.

You can repeat this procedure to create additional load balancers, which can alleviate the bottleneck.
====
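To inspect the Amphora VMs that back a load balancer, a hedged sketch, assuming your OpenStack user is allowed to view amphorae (this typically requires administrative credentials), is:

[source,terminal]
----
$ openstack loadbalancer amphora list --loadbalancer $(openstack loadbalancer show -c id -f value API_OCP_CLUSTER)
----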
Lines changed: 40 additions & 0 deletions
@@ -0,0 +1,40 @@
// Module included in the following assemblies:
//
// * networking/openstack/load-balancing-openstack.adoc

[id="installation-osp-kuryr-api-scaling_{context}"]
= Scaling clusters that use Kuryr by using Octavia

If your cluster uses Kuryr, associate your cluster's API floating IP address with the pre-existing Octavia load balancer.
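To confirm that this load balancer exists and to check its exact name, a quick sketch, assuming the default Kuryr naming that the procedure below relies on, is:

[source,terminal]
----
$ openstack loadbalancer list | grep kuryr-api-loadbalancer
----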
.Prerequisites

* Your {product-title} cluster uses Kuryr.

* Octavia is available on your {rh-openstack} deployment.

.Procedure

. Optional: From a command line, to reuse the cluster API floating IP address, unset it:
+
[source,terminal]
----
$ openstack floating ip unset $API_FIP
----

. Add either the unset `API_FIP` or a new address to the pre-existing load balancer VIP:
+
[source,terminal]
----
$ openstack floating ip set --port $(openstack loadbalancer show -c vip_port_id -f value ${OCP_CLUSTER}-kuryr-api-loadbalancer) $API_FIP
----

Your cluster now uses Octavia for load balancing.

[NOTE]
====
If Kuryr uses the Octavia Amphora driver, all traffic is routed through a single Amphora VM.

You can repeat this procedure to create additional load balancers, which can alleviate the bottleneck.
====
Lines changed: 122 additions & 0 deletions
@@ -0,0 +1,122 @@
// Module included in the following assemblies:
//
// * networking/openstack/load-balancing-openstack.adoc

[id="installation-osp-kuryr-octavia-scale_{context}"]
= Scaling for ingress traffic by using {rh-openstack} Octavia

You can use Octavia load balancers to scale Ingress controllers on clusters that use Kuryr.

.Prerequisites

* Your {product-title} cluster uses Kuryr.

* Octavia is available on your {rh-openstack} deployment.

.Procedure

. To copy the current internal router service, on a command line, enter:
+
[source,terminal]
----
$ oc -n openshift-ingress get svc router-internal-default -o yaml > external_router.yaml
----

. In the file `external_router.yaml`, change the value of `metadata.name` to a new, descriptive name, such as `router-external-default`, and change the value of `spec.type` to `LoadBalancer`.
+
[source,yaml]
.Example router file
----
apiVersion: v1
kind: Service
metadata:
  labels:
    ingresscontroller.operator.openshift.io/owning-ingresscontroller: default
  name: router-external-default <1>
  namespace: openshift-ingress
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  - name: metrics
    port: 1936
    protocol: TCP
    targetPort: 1936
  selector:
    ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default
  sessionAffinity: None
  type: LoadBalancer <2>
----
<1> Ensure that this value is descriptive, like `router-external-default`.
<2> Ensure that this value is `LoadBalancer`.

[NOTE]
====
You can delete timestamps and other information that is irrelevant to load balancing.
====
. From a command line, create a service from the `external_router.yaml` file:
+
[source,terminal]
----
$ oc apply -f external_router.yaml
----

. Verify that the service's external IP address is the same as the one that is associated with the load balancer:
.. On a command line, retrieve the service's external IP address:
+
[source,terminal]
----
$ oc -n openshift-ingress get svc
----
+
[source,terminal]
.Example output
----
NAME                      TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)                                     AGE
router-external-default   LoadBalancer   172.30.235.33    10.46.22.161   80:30112/TCP,443:32359/TCP,1936:30317/TCP   3m38s
router-internal-default   ClusterIP      172.30.115.123   <none>         80/TCP,443/TCP,1936/TCP                     22h
----

.. Retrieve the load balancer's IP address:
+
[source,terminal]
----
$ openstack loadbalancer list | grep router-external
----
+
.Example output
[source,terminal]
----
| 21bf6afe-b498-4a16-a958-3229e83c002c | openshift-ingress/router-external-default | 66f3816acf1b431691b8d132cc9d793c | 172.30.235.33 | ACTIVE | octavia |
----

.. Verify that the addresses you retrieved in the previous steps are associated with each other in the floating IP list:
+
[source,terminal]
----
$ openstack floating ip list | grep 172.30.235.33
----
+
.Example output
[source,terminal]
----
| e2f80e97-8266-4b69-8636-e58bacf1879e | 10.46.22.161 | 172.30.235.33 | 655e7122-806a-4e0a-a104-220c6e17bda6 | a565e55a-99e7-4d15-b4df-f9d7ee8c9deb | 66f3816acf1b431691b8d132cc9d793c |
----

You can now use the value of `EXTERNAL-IP` as the new Ingress address.
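As a quick reachability sketch, you can send a request to the external address; the command below uses the example `EXTERNAL-IP` from the previous output, so substitute your own address. A response code such as `503` from the default Ingress controller indicates that traffic flows through the new load balancer even though no route matches the request host:

[source,terminal]
----
$ curl -s -o /dev/null -w "%{http_code}\n" http://10.46.22.161
----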
[NOTE]
====
If Kuryr uses the Octavia Amphora driver, all traffic is routed through a single Amphora VM.

You can repeat this procedure to create additional load balancers, which can alleviate the bottleneck.
====

networking/load-balancing-openstack.adoc

Lines changed: 5 additions & 1 deletion
@@ -5,4 +5,8 @@ include::modules/common-attributes.adoc[]
 
 toc::[]
 
-include::modules/installation-osp-kuryr-octavia-upgrade.adoc[leveloffset=+1]
+include::modules/installation-osp-kuryr-octavia-upgrade.adoc[leveloffset=+1]
+include::modules/installation-osp-api-octavia.adoc[leveloffset=+1]
+include::modules/installation-osp-api-scaling.adoc[leveloffset=+2]
+include::modules/installation-osp-kuryr-api-scaling.adoc[leveloffset=+2]
+include::modules/installation-osp-kuryr-ingress-scaling.adoc[leveloffset=+1]
