|
# Deploying OpenShift with a user managed load balancer

This document explains how to deploy OpenShift with a user managed load balancer, rather than with the self-hosted, OpenShift managed load balancer based on HAProxy and Keepalived.

## Table of Contents
- [Deploying OpenShift with a user managed load balancer](#deploying-openshift-with-a-user-managed-load-balancer)
  - [Table of Contents](#table-of-contents)
  - [Common prerequisites](#common-prerequisites)
  - [Deploy the load balancer](#deploy-the-load-balancer)
  - [Deploy OpenShift](#deploy-openshift)
  - [Known limitations](#known-limitations)
  - [Notes](#notes)

## Common prerequisites

* When deploying OpenShift with a user managed load balancer, you must bring your own network(s) before
  the deployment. This can be a tenant network or a provider network.
* The load balancer(s) have to be deployed before installing OpenShift.
  * They have to be connected to the network(s) where OpenShift will be deployed.
  * If a load balancer runs on a server managed by OpenStack, allowed address pairs have to be configured
    for the port that will serve API and Ingress traffic, otherwise the VIP traffic
    will be rejected by the OpenStack SDN when port security is enabled (see the example commands after this list).
  * The firewall has to allow the following traffic to reach the load balancer (on a server managed by OpenStack, create a security group):
    * 22/TCP - SSH (to allow Ansible to perform remote tasks from the host where it runs). This rule can be removed after the load balancer is deployed.
    * 6443/TCP (from within and outside the cluster) - OpenShift API
    * 80/TCP (from within and outside the cluster) - Ingress HTTP
    * 443/TCP (from within and outside the cluster) - Ingress HTTPS
    * 22623/TCP (from within the OCP network) - Machine Config Server
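
If a load balancer runs on a server managed by OpenStack, the allowed address pairs and the security group described above could be configured with the `openstack` CLI roughly as follows. This is a sketch only: the port name `lb-port-0` and the security group name `lb-sg` are placeholders, and the VIPs and machine network CIDR are taken from the example `install-config.yaml` shown later in this document.

```bash
# Sketch only: "lb-port-0" and "lb-sg" are placeholder names; adjust the
# addresses to your environment (here: API VIP 192.168.10.5, Ingress VIP
# 192.168.10.7, machine network 192.168.10.0/24).

# Allow the VIPs on the load balancer's port so the OpenStack SDN does not
# reject the VIP traffic when port security is enabled:
openstack port set lb-port-0 \
  --allowed-address ip-address=192.168.10.5 \
  --allowed-address ip-address=192.168.10.7

# Security group for the traffic listed above:
openstack security group create lb-sg
for port in 22 80 443 6443; do
  openstack security group rule create lb-sg \
    --protocol tcp --dst-port "$port" --remote-ip 0.0.0.0/0
done
# Machine Config Server traffic is only needed from within the OCP network:
openstack security group rule create lb-sg \
  --protocol tcp --dst-port 22623 --remote-ip 192.168.10.0/24
openstack port set lb-port-0 --security-group lb-sg
```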

## Deploy the load balancer

Before you install OpenShift, you must provision at least one load balancer.
The load balancer will manage the VIPs for the API, Ingress and Machine Config Server services.
If you deploy in production, at least two load balancers should be deployed per network fabric for high availability.

You can use any solution of your own that suits your needs, or you can use this [Ansible role](https://github.com/shiftstack/ansible-role-routed-lb),
which was created for testing purposes and can be used as an example. This role has not been tested in production,
therefore we can't recommend using it outside of testing environments.
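
Whichever solution you choose, the load balancer has to listen on the ports listed in the prerequisites before the installation starts. Below is a minimal sanity check, assuming the VIPs of the example `install-config.yaml` shown later in this document:

```bash
# Assumed VIPs, matching the example install-config.yaml below.
API_VIP=192.168.10.5
INGRESS_VIP=192.168.10.7

# The frontends should accept the TCP connection even while the OpenShift
# backends are still down (depending on the load balancer, the connection
# may be closed again right away).
nc -vz "$API_VIP" 6443
nc -vz "$API_VIP" 22623
nc -vz "$INGRESS_VIP" 80
nc -vz "$INGRESS_VIP" 443
```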

## Deploy OpenShift

Now that your load balancer(s) are ready, you can deploy OpenShift.
Here is an example of an `install-config.yaml`:

```yaml
apiVersion: v1
baseDomain: mydomain.test
compute:
- name: worker
  platform:
    openstack:
      type: m1.xlarge
  replicas: 3
controlPlane:
  name: master
  platform:
    openstack:
      type: m1.xlarge
  replicas: 3
metadata:
  name: mycluster
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 192.168.10.0/24
platform:
  openstack:
    cloud: mycloud
    machinesSubnet: 8586bf1a-cc3c-4d40-bdf6-c243decc603a
    apiVIPs:
    - 192.168.10.5
    ingressVIPs:
    - 192.168.10.7
    loadBalancer:
      type: UserManaged
```

There are some important things to note here:

* `loadBalancer` is a new stanza introduced in OCP 4.13. The default type is `OpenShiftManagedDefault`, which deploys HAProxy and Keepalived in OCP and is known as the OpenShift managed load balancer. Setting the type to `UserManaged` lets a user managed load balancer replace the OpenShift managed one.
* `machinesSubnet` is the ID of the subnet where both the OpenShift cluster and the user managed load balancer are deployed.
* In OCP 4.13 the feature had to be enabled as a Technology Preview, by adding `featureSet: TechPreviewNoUpgrade` to the `install-config.yaml`.

Deploy the cluster:
```bash
openshift-install create cluster
```
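
Once the installation has finished, you can verify that the API answers through the user managed load balancer. A minimal check, with the URL derived from the example `metadata.name` and `baseDomain` above:

```bash
# "mycluster" and "mydomain.test" come from the example install-config.yaml above.
# An HTTP 200 means the API is reachable through the load balancer VIP.
curl --insecure --silent --output /dev/null --write-out '%{http_code}\n' \
  https://api.mycluster.mydomain.test:6443/readyz
```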

## Known limitations

These limitations are on our roadmap and will eventually be addressed:

* Deploying OpenShift with static IPs for the machines is only supported on the bare metal platform.
* Changing the IP address of any OpenShift control plane VIP (API and Ingress) is currently not supported: once the user managed load balancer and the OpenShift cluster are deployed, the VIPs can't be changed.
* Migrating an OpenShift cluster from the OpenShift managed load balancer to a user managed load balancer is currently not supported.

## Notes

* In combination with `FailureDomains`, this feature allows customers to deploy their OpenShift cluster across multiple subnets.
* Using a user managed load balancer has been proven to reduce the load on the Kube API and to help large scale deployments perform better, because there are fewer
  pods polling the API every few seconds (the HAProxy and Keepalived monitors).