
Commit 6926714

Merge pull request #74064 from ahardin-rh/configuring-ip-failover-audit
OSDOCS-8661: Configuring IP failover content audit
2 parents c5a86ff + 68d0ea5 commit 6926714

8 files changed: +42 -59 lines changed

modules/nw-ipfailover-cluster-ha-ingress.adoc

Lines changed: 4 additions & 4 deletions
@@ -3,10 +3,10 @@
 // * networking/configuring-ipfailover.adoc

 [id="nw-ipfailover-cluster-ha-ingress_{context}"]
-= High availability for ingressIP
+= High availability for ExternalIP

-In non-cloud clusters, IP failover and `ingressIP` to a service can be combined. The result is high availability services for users that create services using `ingressIP`.
+In non-cloud clusters, IP failover and `ExternalIP` to a service can be combined. The result is high availability services for users that create services using `ExternalIP`.

-The approach is to specify an `ingressIPNetworkCIDR` range and then use the same range in creating the IP failover configuration.
+The approach is to specify a `spec.ExternalIP.autoAssignCIDRs` range in the cluster network configuration, and then use the same range when creating the IP failover configuration.

-Because IP failover can support up to a maximum of 255 VIPs for the entire cluster, the `ingressIPNetworkCIDR` must be `/24` or smaller.
+Because IP failover can support a maximum of 255 VIPs for the entire cluster, the `spec.ExternalIP.autoAssignCIDRs` range must be `/24` or smaller.
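For reference, `spec.ExternalIP.autoAssignCIDRs` is set on the cluster `Network` configuration object. A minimal sketch, with an illustrative CIDR value:

[source,yaml]
----
apiVersion: config.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  externalIP:
    autoAssignCIDRs:
    - 192.168.132.0/29 <1>
----
<1> Illustrative range; it must be `/24` or smaller to stay within the 255-VIP limit.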

modules/nw-ipfailover-configuration.adoc

Lines changed: 5 additions & 1 deletion
@@ -4,7 +4,7 @@

 :_mod-docs-content-type: PROCEDURE
 [id="nw-ipfailover-configuration_{context}"]
-= Configuring IP failover
+= Configuring IP failover in your cluster

 As a cluster administrator, you can configure IP failover on an entire cluster, or on a subset of nodes, as defined by the label selector. You can also configure multiple IP failover deployment configurations in your cluster, where each one is independent of the others.

@@ -35,6 +35,10 @@ $ oc create sa ipfailover
 [source,terminal]
 ----
 $ oc adm policy add-scc-to-user privileged -z ipfailover
+----
++
+[source,terminal]
+----
 $ oc adm policy add-scc-to-user hostnetwork -z ipfailover
 ----
 +
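After the `ipfailover` service account has both SCCs, the module goes on to create the IP failover deployment itself. A trimmed sketch of the relevant parts, with illustrative names and values (the full example lives in the module):

[source,yaml]
----
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ipfailover-keepalived <1>
spec:
  replicas: 2
  selector:
    matchLabels:
      ipfailover: hello-openshift
  template:
    metadata:
      labels:
        ipfailover: hello-openshift
    spec:
      serviceAccountName: ipfailover
      hostNetwork: true <2>
      containers:
      - name: openshift-ipfailover
        image: quay.io/openshift/origin-keepalived-ipfailover
        securityContext:
          privileged: true <2>
        env:
        - name: OPENSHIFT_HA_VIRTUAL_IPS <3>
          value: "192.168.1.1-2"
        - name: OPENSHIFT_HA_NETWORK_INTERFACE <3>
          value: "ens3"
        - name: OPENSHIFT_HA_REPLICA_COUNT
          value: "2"
----
<1> Illustrative deployment and label names.
<2> Why the `privileged` and `hostnetwork` SCCs granted above are required.
<3> Illustrative VIP range and interface name.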

modules/nw-ipfailover-configuring-check-notify-scripts.adoc

Lines changed: 14 additions & 12 deletions
@@ -6,34 +6,36 @@
 [id="nw-ipfailover-configuring-check-notify-scripts_{context}"]
 = Configuring check and notify scripts

-Keepalived monitors the health of the application by periodically running an optional user supplied check script. For example, the script can test a web server by issuing a request and verifying the response.
+Keepalived monitors the health of the application by periodically running an optional user-supplied check script. For example, the script can test a web server by issuing a request and verifying the response. As cluster administrator, you can provide an optional notify script, which is called whenever the state changes.
+
+The check and notify scripts run in the IP failover pod and use the pod file system, not the host file system. However, the IP failover pod makes the host file system available under the `/hosts` mount path. When configuring a check or notify script, you must provide the full path to the script. The recommended approach for providing the scripts is to use a `ConfigMap` object.
+
+The full path names of the check and notify scripts are added to the Keepalived configuration file, `_/etc/keepalived/keepalived.conf_`, which is loaded every time Keepalived starts. The scripts can be added to the pod with a `ConfigMap` object as described in the following methods.
+
+*Check script*

 When a check script is not provided, a simple default script is run that tests the TCP connection. This default test is suppressed when the monitor port is `0`.

-Each IP failover pod manages a Keepalived daemon that manages one or more virtual IPs (VIP) on the node where the pod is running. The Keepalived daemon keeps the state of each VIP for that node. A particular VIP on a particular node may be in `master`, `backup`, or `fault` state.
+Each IP failover pod manages a Keepalived daemon that manages one or more virtual IP (VIP) addresses on the node where the pod is running. The Keepalived daemon keeps the state of each VIP for that node. A particular VIP on a particular node might be in `master`, `backup`, or `fault` state.

-When the check script for that VIP on the node that is in `master` state fails, the VIP on that node enters the `fault` state, which triggers a renegotiation. During renegotiation, all VIPs on a node that are not in the `fault` state participate in deciding which node takes over the VIP. Ultimately, the VIP enters the `master` state on some node, and the VIP stays in the `backup` state on the other nodes.
+If the check script returns non-zero, the node enters the `backup` state, and any VIPs it holds are reassigned.

-When a node with a VIP in `backup` state fails, the VIP on that node enters the `fault` state. When the check script passes again for a VIP on a node in the `fault` state, the VIP on that node exits the `fault` state and negotiates to enter the `master` state. The VIP on that node may then enter either the `master` or the `backup` state.
+*Notify script*

-As cluster administrator, you can provide an optional notify script, which is called whenever the state changes. Keepalived passes the following three parameters to the script:
+Keepalived passes the following three parameters to the notify script:

 * `$1` - `group` or `instance`
 * `$2` - Name of the `group` or `instance`
 * `$3` - The new state: `master`, `backup`, or `fault`

-The check and notify scripts run in the IP failover pod and use the pod file system, not the host file system. However, the IP failover pod makes the host file system available under the `/hosts` mount path. When configuring a check or notify script, you must provide the full path to the script. The recommended approach for providing the scripts is to use a config map.
-
-The full path names of the check and notify scripts are added to the Keepalived configuration file, `_/etc/keepalived/keepalived.conf_`, which is loaded every time Keepalived starts. The scripts can be added to the pod with a config map as follows.
-
 .Prerequisites

 * You installed the OpenShift CLI (`oc`).
 * You are logged in to the cluster with a user with `cluster-admin` privileges.

 .Procedure

-. Create the desired script and create a config map to hold it. The script has no input arguments and must return `0` for `OK` and `1` for `fail`.
+. Create the desired script and create a `ConfigMap` object to hold it. The script has no input arguments and must return `0` for `OK` and `1` for `fail`.
 +
 The check script, `_mycheckscript.sh_`:
 +
@@ -45,14 +47,14 @@ The check script, `_mycheckscript.sh_`:
 exit 0
 ----

-. Create the config map:
+. Create the `ConfigMap` object:
 +
 [source,terminal]
 ----
 $ oc create configmap mycustomcheck --from-file=mycheckscript.sh
 ----
 +
-. Add the script to the pod. The `defaultMode` for the mounted config map files must able to run by using `oc` commands or by editing the deployment configuration. A value of `0755`, `493` decimal, is typical:
+. Add the script to the pod. The `defaultMode` for the mounted `ConfigMap` object files must be able to run by using `oc` commands or by editing the deployment configuration. A value of `0755`, `493` decimal, is typical:
 +
 [source,terminal]
 ----
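For illustration, a notify script that consumes the three parameters above could be as small as the following sketch (the file name `mynotifyscript.sh` is an assumption; it is delivered to the pod the same way as the check script):

[source,bash]
----
#!/bin/bash
# $1 - "group" or "instance"
# $2 - name of the group or instance
# $3 - new state: "master", "backup", or "fault"
# Illustrative only: record each state transition.
echo "$(date) keepalived $1 $2 -> $3" >> /tmp/keepalived-transitions.log
exit 0
----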

modules/nw-ipfailover-configuring-vrrp-preemption.adoc

Lines changed: 1 addition & 6 deletions
@@ -6,14 +6,9 @@
 [id="nw-ipfailover-configuring-vrrp-preemption_{context}"]
 = Configuring VRRP preemption

-When a Virtual IP (VIP) on a node leaves the `fault` state by passing the check script, the VIP on the node enters the `backup` state if it has lower priority than the VIP on the node that is currently in the `master` state. However, if the VIP on the node that is leaving `fault` state has a higher priority, the preemption strategy determines its role in the cluster.
-
+When a Virtual IP (VIP) on a node leaves the `fault` state by passing the check script, the VIP on the node enters the `backup` state if it has lower priority than the VIP on the node that is currently in the `master` state.
 The `nopreempt` strategy does not move `master` from the lower priority VIP on the host to the higher priority VIP on the host. With `preempt_delay 300`, the default, Keepalived waits the specified 300 seconds and moves `master` to the higher priority VIP on the host.

-.Prerequisites
-
-* You installed the OpenShift CLI (`oc`).
-
 .Procedure

 * To specify preemption enter `oc edit deploy ipfailover-keepalived` to edit the router deployment configuration:
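The preemption strategy is carried by the `OPENSHIFT_HA_PREEMPTION` environment variable, so as a sketch of an alternative to interactive editing, it can also be set directly (the deployment name follows the module's example):

[source,terminal]
----
$ oc set env deploy/ipfailover-keepalived \
    OPENSHIFT_HA_PREEMPTION="preempt_delay 300"
----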

modules/nw-ipfailover-remove.adoc

Lines changed: 4 additions & 3 deletions
@@ -92,11 +92,12 @@ spec:
       - name: remove-ipfailover
         image: quay.io/openshift/origin-keepalived-ipfailover:{product-version}
         command: ["/var/lib/ipfailover/keepalived/remove-failover.sh"]
-      nodeSelector:
-        kubernetes.io/hostname: <host_name> <.>
+      nodeSelector: <1>
+        kubernetes.io/hostname: <host_name> <2>
       restartPolicy: Never
 ----
-<.> Run the job for each node in your cluster that was configured for IP failover and replace the hostname each time.
+<1> The `nodeSelector` is likely the same as the selector used in the old IP failover deployment.
+<2> Run the job for each node in your cluster that was configured for IP failover and replace the hostname each time.

 .. Run the job:
 +
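The "Run the job" step that follows is then along these lines (the manifest file name is an assumption):

[source,terminal]
----
$ oc create -f remove-ipfailover-job.yaml
----

You can verify that the script ran by checking the job's pod logs, for example with `oc logs job/<job_name>`.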

modules/nw-ipfailover-virtual-ip-addresses-concept.adoc

Lines changed: 0 additions & 19 deletions
This file was deleted.

modules/nw-ipfailover-vrrp-ip-offset.adoc

Lines changed: 1 addition & 1 deletion
@@ -4,7 +4,7 @@

 :_mod-docs-content-type: CONCEPT
 [id="nw-ipfailover-vrrp-ip-offset_{context}"]
-= About VRRP ID offset
+= Deploying multiple IP failover instances

 Each IP failover pod managed by the IP failover deployment configuration, `1` pod per node or replica, runs a Keepalived daemon. As more IP failover deployment configurations are configured, more pods are created and more daemons join into the common Virtual Router Redundancy Protocol (VRRP) negotiation. This negotiation is done by all the Keepalived daemons and it determines which nodes service which virtual IPs (VIP).
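A sketch of how a second, independent instance can avoid VRRP ID collisions (names and values are illustrative): each VIP consumes one VRRP ID counted up from the offset, so the offsets of different instances must be spaced at least the VIP count apart.

[source,yaml]
----
        env:
        - name: OPENSHIFT_HA_CONFIG_NAME
          value: "ipfailover-2"
        - name: OPENSHIFT_HA_VIRTUAL_IPS
          value: "192.168.1.21-24" # four VIPs for this instance
        - name: OPENSHIFT_HA_VRRP_ID_OFFSET
          value: "50" # uses VRRP IDs 50-53; must not overlap another instance
----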

networking/configuring-ipfailover.adoc

Lines changed: 13 additions & 13 deletions
@@ -8,17 +8,24 @@ toc::[]

 This topic describes configuring IP failover for pods and services on your {product-title} cluster.

-IP failover manages a pool of Virtual IP (VIP) addresses on a set of nodes. Every VIP in the set is serviced by a node selected from the set. As long a single node is available, the VIPs are served. There is no way to explicitly distribute the VIPs over the nodes, so there can be nodes with no VIPs and other nodes with many VIPs. If there is only one node, all VIPs are on it.
+IP failover uses link:http://www.keepalived.org/[Keepalived] to host a set of externally accessible Virtual IP (VIP) addresses on a set of hosts. Each VIP address is only serviced by a single host at a time. Keepalived uses the Virtual Router Redundancy Protocol (VRRP) to determine which host, from the set of hosts, services which VIP. If a host becomes unavailable, or if the service that Keepalived is watching does not respond, the VIP is switched to another host from the set. This means a VIP is always serviced as long as a host is available.
+
+Every VIP in the set is serviced by a node selected from the set. As long as a single node is available, the VIPs are served. There is no way to explicitly distribute the VIPs over the nodes, so there can be nodes with no VIPs and other nodes with many VIPs. If there is only one node, all VIPs are on it.
+
+The administrator must ensure that all of the VIP addresses meet the following requirements:
+
+* Accessible on the configured hosts from outside the cluster.
+* Not used for any other purpose within the cluster.
+
+Keepalived on each node determines whether the needed service is running. If it is, VIPs are supported and Keepalived participates in the negotiation to determine which node serves the VIP. For a node to participate, the service must be listening on the watch port on a VIP or the check must be disabled.

 [NOTE]
 ====
-The VIPs must be routable from outside the cluster.
+Each VIP in the set might be served by a different node.
 ====

 IP failover monitors a port on each VIP to determine whether the port is reachable on the node. If the port is not reachable, the VIP is not assigned to the node. If the port is set to `0`, this check is suppressed. The check script does the needed testing.

-IP failover uses link:http://www.keepalived.org/[Keepalived] to host a set of externally accessible VIP addresses on a set of hosts. Each VIP is only serviced by a single host at a time. Keepalived uses the Virtual Router Redundancy Protocol (VRRP) to determine which host, from the set of hosts, services which VIP. If a host becomes unavailable, or if the service that Keepalived is watching does not respond, the VIP is switched to another host from the set. This means a VIP is always serviced as long as a host is available.
-
 When a node running Keepalived passes the check script, the VIP on that node can enter the `master` state based on its priority and the priority of the current master and as determined by the preemption strategy.

 A cluster administrator can provide a script through the `OPENSHIFT_HA_NOTIFY_SCRIPT` variable, and this script is called whenever the state of the VIP on the node changes. Keepalived uses the `master` state when it is servicing the VIP, the `backup` state when another node is servicing the VIP, or in the `fault` state when the check script fails. The notify script is called with the new state whenever the state changes.

@@ -27,21 +34,16 @@ You can create an IP failover deployment configuration on {product-title}. The I

 When using VIPs to access a pod with host networking, the application pod runs on all nodes that are running the IP failover pods. This enables any of the IP failover nodes to become the master and service the VIPs when needed. If application pods are not running on all nodes with IP failover, either some IP failover nodes never service the VIPs or some application pods never receive any traffic. Use the same selector and replication count, for both IP failover and the application pods, to avoid this mismatch.

-While using VIPs to access a service, any of the nodes can be in the IP failover set of nodes, since the service is reachable on all nodes, no matter where the application pod is running. Any of the IP failover nodes can become master at any time. The service can either use external IPs and a service port or it can use a `NodePort`.
+While using VIPs to access a service, any of the nodes can be in the IP failover set of nodes, since the service is reachable on all nodes, no matter where the application pod is running. Any of the IP failover nodes can become master at any time. The service can either use external IPs and a service port or it can use a `NodePort`. Setting up a `NodePort` is a privileged operation.

 When using external IPs in the service definition, the VIPs are set to the external IPs, and the IP failover monitoring port is set to the service port. When using a node port, the port is open on every node in the cluster, and the service load-balances traffic from whatever node currently services the VIP. In this case, the IP failover monitoring port is set to the `NodePort` in the service definition.

-[IMPORTANT]
-====
-Setting up a `NodePort` is a privileged operation.
-====
-
 [IMPORTANT]
 ====
 Even though a service VIP is highly available, performance can still be affected. Keepalived makes sure that each of the VIPs is serviced by some node in the configuration, and several VIPs can end up on the same node even when other nodes have none. Strategies that externally load-balance across a set of VIPs can be thwarted when IP failover puts multiple VIPs on the same node.
 ====

-When you use `ingressIP`, you can set up IP failover to have the same VIP range as the `ingressIP` range. You can also disable the monitoring port. In this case, all the VIPs appear on same node in the cluster. Any user can set up a service with an `ingressIP` and have it highly available.
+When you use `ExternalIP`, you can set up IP failover to have the same VIP range as the `ExternalIP` range. You can also disable the monitoring port. In this case, all of the VIPs appear on the same node in the cluster. Any user can set up a service with an `ExternalIP` and make it highly available.

 [IMPORTANT]
 ====
@@ -52,8 +54,6 @@ include::modules/nw-ipfailover-environment-variables.adoc[leveloffset=+1]

 include::modules/nw-ipfailover-configuration.adoc[leveloffset=+1]

-include::modules/nw-ipfailover-virtual-ip-addresses-concept.adoc[leveloffset=+1]
-
 include::modules/nw-ipfailover-configuring-check-notify-scripts.adoc[leveloffset=+1]

 include::modules/nw-ipfailover-configuring-vrrp-preemption.adoc[leveloffset=+1]
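To make the `ExternalIP` pairing concrete, a service that fronts one of the failover-managed VIPs might look like the following sketch (names, IP, and port are illustrative):

[source,yaml]
----
apiVersion: v1
kind: Service
metadata:
  name: hello-openshift
spec:
  selector:
    app: hello-openshift
  ports:
  - port: 8080 <1>
  externalIPs:
  - 192.168.1.107 <2>
----
<1> When using external IPs, the IP failover monitoring port is set to this service port.
<2> One of the VIPs managed by the IP failover configuration.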
