# Octavia DCN

## Octavia in DCN deployments

The deployment of the Octavia services in Distributed Compute Node (DCN)
environments differs from standard deployments.
While Octavia supports using only one management network across the
Availability Zones (AZs) for communication between the control plane and the
Amphora instances, admins might want to isolate the network traffic and use
one management network per AZ.

In this case, they must configure the octavia-operator to define specific
settings for those AZs.
| 14 | + |
## Configuration of the Neutron AZs

When deploying DCN, each compute node is assigned to an AZ (for example:
az[1..n]); the default AZ created for the control plane (az0 in this document)
is not used by the compute nodes.
This means that the `lb-mgmt-net` network created by the octavia-operator for
the default AZ is not required.
It can optionally be disabled by removing the route from the octavia Network
Attachment Definition:

Example:

```shell
oc edit network-attachment-definitions.k8s.cni.cncf.io octavia
```
| 30 | + |
```yaml
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "octavia",
      "type": "bridge",
      "bridge": "octbr",
      "ipam": {
        "type": "whereabouts",
        "range": "172.23.0.0/24",
        "range_start": "172.23.0.30",
        "range_end": "172.23.0.70"
      }
    }
```

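To confirm the change was applied, the CNI configuration of the NAD can be inspected directly; this is a quick check (it assumes the NAD lives in the current namespace):

```shell
# Print the applied CNI config of the octavia NAD; the "routes" key
# should be absent once the default route has been removed.
oc get network-attachment-definitions.k8s.cni.cncf.io octavia \
  -o jsonpath='{.spec.config}'
```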
The `lbMgmtNetwork.availabilityZones` spec of the Octavia Kind must contain the
AZ of the control plane.

The `lbMgmtNetwork.createDefaultLbMgmtNetwork` spec can optionally be set to
`false` to prevent the operator from creating the default `lb-mgmt-net`
network for the default AZ.
In this case, admins should set `lbMgmtNetwork.lbMgmtRouterGateway` to an IP
address of the octavia NAD; this address should be selected in a range that
starts after the `ipam.range_end` IP address.

The `lbMgmtNetwork.availabilityZoneCIDRs` spec should then define a different
CIDR for each AZ. The octavia-operator will ensure that those CIDRs are
routable from the Octavia service through a Neutron router.
| 61 | + |
```shell
oc patch openstackcontrolplane openstack-galera-network-isolation --type=merge --patch='
  spec:
    octavia:
      template:
        lbMgmtNetwork:
          createDefaultLbMgmtNetwork: false
          lbMgmtRouterGateway: 172.23.0.150
          availabilityZones:
          - az0
          availabilityZoneCIDRs:
            az1: 172.34.0.0/16
            az2: 172.44.0.0/16
'
```
| 77 | + |
With those settings, the octavia-operator will create:

* a `lb-mgmt-az1-net` network with a `lb-mgmt-az1-subnet` subnet (CIDR
  `172.34.0.0/16`) with availability hint `az1`
* a `lb-mgmt-az2-net` network with a `lb-mgmt-az2-subnet` subnet (CIDR
  `172.44.0.0/16`) with availability hint `az2`
* an `octavia-provider-net` network with an `octavia-provider-subnet` subnet
  (CIDR `172.23.0.0/24`)
* an `octavia-link-router` router in `az0`, `az1` and `az2`;
  `octavia-provider-subnet` is plugged into this router through a port with
  the IP address `172.23.0.150`, and `lb-mgmt-az1-subnet` and
  `lb-mgmt-az2-subnet` are also plugged into the router
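The resulting topology can be verified from the `openstackclient` pod; the resource names below are those the operator creates with the settings above:

```shell
oc rsh openstackclient
# Per-AZ management network and subnet created by the octavia-operator
openstack network show -c name -c subnets lb-mgmt-az1-net
openstack subnet show -c cidr -c availability_zone_hints lb-mgmt-az1-subnet
# Router linking octavia-provider-subnet to the per-AZ management subnets
openstack router show -c name -c availability_zones octavia-link-router
openstack port list --router octavia-link-router -c "Fixed IP Addresses"
```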
| 90 | + |
## Creating Octavia Availability Zone Profiles and Availability Zones

When creating a Load Balancer for a specific AZ in Octavia, some metadata must
be passed to the Octavia service to indicate which compute AZ and management
network it should use to create the Amphora VMs.

This metadata is stored in Octavia Availability Zone Profiles and Availability
Zones. They can be created by admins:
| 98 | + |
```shell
oc rsh openstackclient
network_id=$(openstack network show -c id -f value lb-mgmt-az1-net)
openstack loadbalancer availabilityzoneprofile create \
  --provider amphora \
  --availability-zone-data '{"compute_zone": "az1", "management_network": "'$network_id'"}' \
  --name azp1
openstack loadbalancer availabilityzone create \
  --availabilityzoneprofile azp1 \
  --name az1
```
| 110 | + |
```shell
oc rsh openstackclient
network_id=$(openstack network show -c id -f value lb-mgmt-az2-net)
openstack loadbalancer availabilityzoneprofile create \
  --provider amphora \
  --availability-zone-data '{"compute_zone": "az2", "management_network": "'$network_id'"}' \
  --name azp2
openstack loadbalancer availabilityzone create \
  --availabilityzoneprofile azp2 \
  --name az2
```
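The created zones and profiles can be listed afterwards to confirm the stored metadata:

```shell
# Availability zones visible to users creating load balancers
openstack loadbalancer availabilityzone list
# Profile details, including compute_zone and management_network
openstack loadbalancer availabilityzoneprofile show azp1
```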
| 122 | + |
A user can then pass an `availability-zone` parameter to the Octavia API when
creating a Load Balancer:

```shell
openstack loadbalancer create \
  --availability-zone az2 \
  --vip-subnet-id public-subnet \
  --name lb1
```
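Once the load balancer is ACTIVE, an admin can check that its Amphora was scheduled in the requested compute AZ; this sketch assumes admin credentials, since the amphora commands are admin-only:

```shell
# Look up the amphorae of lb1 and the compute AZ hosting the first one
lb_id=$(openstack loadbalancer show -c id -f value lb1)
openstack loadbalancer amphora list --loadbalancer "$lb_id"
compute_id=$(openstack loadbalancer amphora list --loadbalancer "$lb_id" \
  -c compute_id -f value | head -1)
openstack server show -c OS-EXT-AZ:availability_zone "$compute_id"
```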