modules/nw-ipfailover-cluster-ha-ingress.adoc
4 additions & 4 deletions
@@ -3,10 +3,10 @@
// * networking/configuring-ipfailover.adoc

[id="nw-ipfailover-cluster-ha-ingress_{context}"]
-= High availability for ingressIP
+= High availability for ExternalIP

-In non-cloud clusters, IP failover and `ingressIP` to a service can be combined. The result is high availability services for users that create services using `ingressIP`.
+In non-cloud clusters, you can combine IP failover with an `ExternalIP` address that is assigned to a service. The result is highly available services for users that create services by using `ExternalIP`.

-The approach is to specify an `ingressIPNetworkCIDR` range and then use the same range in creating the IP failover configuration.
+The approach is to specify a `spec.externalIP.autoAssignCIDRs` range in the cluster network configuration, and then use the same range when creating the IP failover configuration.

-Because IP failover can support up to a maximum of 255 VIPs for the entire cluster, the `ingressIPNetworkCIDR` must be `/24` or smaller.
+Because IP failover can support a maximum of 255 VIPs for the entire cluster, the `spec.externalIP.autoAssignCIDRs` range must be `/24` or smaller.
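As a sketch of what that pairing looks like in the cluster network configuration (the CIDR below is illustrative, not a value taken from this change):

[source,yaml]
----
apiVersion: config.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  externalIP:
    autoAssignCIDRs:
    - 192.168.126.0/28  # example range; /24 or smaller, reachable from outside the cluster
----

The same range would then be fed to the IP failover configuration, for example through its `OPENSHIFT_HA_VIRTUAL_IPS` setting.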
modules/nw-ipfailover-configuration.adoc
5 additions & 1 deletion
@@ -4,7 +4,7 @@

:_mod-docs-content-type: PROCEDURE
[id="nw-ipfailover-configuration_{context}"]
-= Configuring IP failover
+= Configuring IP failover in your cluster

As a cluster administrator, you can configure IP failover on an entire cluster, or on a subset of nodes, as defined by the label selector. You can also configure multiple IP failover deployment configurations in your cluster, where each one is independent of the others.
-Keepalived monitors the health of the application by periodically running an optional user supplied check script. For example, the script can test a web server by issuing a request and verifying the response.
+Keepalived monitors the health of the application by periodically running an optional user-supplied check script. For example, the script can test a web server by issuing a request and verifying the response. As cluster administrator, you can provide an optional notify script, which is called whenever the state changes.
+
+The check and notify scripts run in the IP failover pod and use the pod file system, not the host file system. However, the IP failover pod makes the host file system available under the `/hosts` mount path. When configuring a check or notify script, you must provide the full path to the script. The recommended approach for providing the scripts is to use a `ConfigMap` object.
+
+The full path names of the check and notify scripts are added to the Keepalived configuration file, `_/etc/keepalived/keepalived.conf_`, which is loaded every time Keepalived starts. The scripts can be added to the pod with a `ConfigMap` object as described in the following methods.
+
+*Check script*

When a check script is not provided, a simple default script is run that tests the TCP connection. This default test is suppressed when the monitor port is `0`.

-Each IP failover pod manages a Keepalived daemon that manages one or more virtual IPs (VIP) on the node where the pod is running. The Keepalived daemon keeps the state of each VIP for that node. A particular VIP on a particular node may be in `master`, `backup`, or `fault` state.
+Each IP failover pod manages a Keepalived daemon that manages one or more virtual IP (VIP) addresses on the node where the pod is running. The Keepalived daemon keeps the state of each VIP for that node. A particular VIP on a particular node might be in `master`, `backup`, or `fault` state.

-When the check script for that VIP on the node that is in `master` state fails, the VIP on that node enters the `fault` state, which triggers a renegotiation. During renegotiation, all VIPs on a node that are not in the `fault` state participate in deciding which node takes over the VIP. Ultimately, the VIP enters the `master` state on some node, and the VIP stays in the `backup` state on the other nodes.
+If the check script returns a non-zero value, the node enters the `backup` state, and any VIPs that the node holds are reassigned.

-When a node with a VIP in `backup` state fails, the VIP on that node enters the `fault` state. When the check script passes again for a VIP on a node in the `fault` state, the VIP on that node exits the `fault` state and negotiates to enter the `master` state. The VIP on that node may then enter either the `master` or the `backup` state.
+*Notify script*

-As cluster administrator, you can provide an optional notify script, which is called whenever the state changes. Keepalived passes the following three parameters to the script:
+Keepalived passes the following three parameters to the notify script:

* `$1` - `group` or `instance`
* `$2` - Name of the `group` or `instance`
* `$3` - The new state: `master`, `backup`, or `fault`
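As an illustration only, a minimal notify script held in a `ConfigMap` object might consume those parameters as follows; the object name, script name, and log destination are assumptions, not part of this change:

[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: mynotifyscript  # hypothetical name
data:
  mynotifyscript.sh: |
    #!/bin/bash
    # $1: "group" or "instance", $2: name of the group or instance, $3: the new state
    echo "$(date) keepalived $1 $2 entered state $3" >> /tmp/keepalived-state.log
----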

-The check and notify scripts run in the IP failover pod and use the pod file system, not the host file system. However, the IP failover pod makes the host file system available under the `/hosts` mount path. When configuring a check or notify script, you must provide the full path to the script. The recommended approach for providing the scripts is to use a config map.
-
-The full path names of the check and notify scripts are added to the Keepalived configuration file, `_/etc/keepalived/keepalived.conf`, which is loaded every time Keepalived starts. The scripts can be added to the pod with a config map as follows.
-
.Prerequisites

* You installed the OpenShift CLI (`oc`).
* You are logged in to the cluster as a user with `cluster-admin` privileges.

.Procedure

-. Create the desired script and create a config map to hold it. The script has no input arguments and must return `0` for `OK` and `1` for `fail`.
+. Create the desired script and create a `ConfigMap` object to hold it. The script has no input arguments and must return `0` for `OK` and `1` for `fail`.
+
The check script, `_mycheckscript.sh_`:
+
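(The script body itself is elided in this diff. As a sketch only, a check script held in a `ConfigMap` object could look like the following; the object name and the probed address are assumptions:)

[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: mycustomcheck  # hypothetical name
data:
  mycheckscript.sh: |
    #!/bin/bash
    # Probe a local web server; exit 0 for OK and 1 for fail, as Keepalived requires.
    if (echo > "/dev/tcp/127.0.0.1/8080") 2>/dev/null; then
      exit 0
    fi
    exit 1
----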
@@ -45,14 +47,14 @@ The check script, `_mycheckscript.sh_`:
-. Add the script to the pod. The `defaultMode` for the mounted config map files must able to run by using `oc` commands or by editing the deployment configuration. A value of `0755`, `493` decimal, is typical:
+. Add the script to the pod. The `defaultMode` for the mounted `ConfigMap` object files must allow the script to run. You can set the mode by using `oc` commands or by editing the deployment configuration. A value of `0755`, `493` decimal, is typical:
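(The deployment snippet that the colon introduces is elided here. A rough sketch of the shape it takes, with every name assumed rather than taken from this change:)

[source,yaml]
----
spec:
  containers:
  - name: ipfailover-keepalived
    env:
    - name: OPENSHIFT_HA_CHECK_SCRIPT
      value: /etc/keepalive/mycheckscript.sh  # full path as seen inside the pod
    volumeMounts:
    - mountPath: /etc/keepalive
      name: config-volume
  volumes:
  - name: config-volume
    configMap:
      name: mycustomcheck  # hypothetical ConfigMap name
      defaultMode: 0755    # 493 decimal; makes the mounted script executable
----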
-When a Virtual IP (VIP) on a node leaves the `fault` state by passing the check script, the VIP on the node enters the `backup` state if it has lower priority than the VIP on the node that is currently in the `master` state.
+When a Virtual IP (VIP) on a node leaves the `fault` state by passing the check script, the VIP on the node enters the `backup` state if it has lower priority than the VIP on the node that is currently in the `master` state. However, if the VIP on the node that is leaving the `fault` state has a higher priority, the preemption strategy determines its role in the cluster.
The `nopreempt` strategy does not move `master` from the lower-priority VIP on the host to the higher-priority VIP on the host. With `preempt_delay 300`, the default, Keepalived waits the specified 300 seconds and moves `master` to the higher-priority VIP on the host.

-.Prerequisites
-
-* You installed the OpenShift CLI (`oc`).
-
.Procedure

* To specify preemption, enter `oc edit deploy ipfailover-keepalived` to edit the router deployment configuration:
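(The edit itself is elided in this diff. A minimal sketch of the setting being changed, assuming the `OPENSHIFT_HA_PREEMPTION` variable used by the IP failover image:)

[source,yaml]
----
spec:
  containers:
  - name: ipfailover-keepalived
    env:
    - name: OPENSHIFT_HA_PREEMPTION
      value: preempt_delay 300  # the default; use "nopreempt" to leave master on the lower-priority VIP
----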
modules/nw-ipfailover-vrrp-ip-offset.adoc
1 addition & 1 deletion
@@ -4,7 +4,7 @@

:_mod-docs-content-type: CONCEPT
[id="nw-ipfailover-vrrp-ip-offset_{context}"]
-= About VRRP ID offset
+= Deploying multiple IP failover instances

Each IP failover pod managed by the IP failover deployment configuration, `1` pod per node or replica, runs a Keepalived daemon. As you configure more IP failover deployment configurations, more pods are created and more daemons join the common Virtual Router Redundancy Protocol (VRRP) negotiation. This negotiation is done by all the Keepalived daemons, and it determines which nodes service which virtual IP (VIP) addresses.
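A sketch of how a second deployment configuration might keep its virtual router IDs from colliding with the first, assuming the `OPENSHIFT_HA_VRRP_ID_OFFSET` variable (the offset value is illustrative):

[source,yaml]
----
# Environment for a second IP failover deployment configuration. The first
# deployment keeps the default offset of 0; this one starts its VRRP router
# IDs at 20 so the two sets of VIPs negotiate independently.
env:
- name: OPENSHIFT_HA_VRRP_ID_OFFSET
  value: "20"
----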
networking/configuring-ipfailover.adoc
13 additions & 13 deletions
@@ -8,17 +8,24 @@ toc::[]

This topic describes configuring IP failover for pods and services on your {product-title} cluster.

-IP failover manages a pool of Virtual IP (VIP) addresses on a set of nodes. Every VIP in the set is serviced by a node selected from the set. As long a single node is available, the VIPs are served. There is no way to explicitly distribute the VIPs over the nodes, so there can be nodes with no VIPs and other nodes with many VIPs. If there is only one node, all VIPs are on it.
+IP failover uses link:http://www.keepalived.org/[Keepalived] to host a set of externally accessible Virtual IP (VIP) addresses on a set of hosts. Each VIP address is serviced by only a single host at a time. Keepalived uses the Virtual Router Redundancy Protocol (VRRP) to determine which host, from the set of hosts, services which VIP. If a host becomes unavailable, or if the service that Keepalived is watching does not respond, the VIP is switched to another host from the set. This means a VIP is always serviced as long as a host is available.
+
+Every VIP in the set is serviced by a node selected from the set. If a single node is available, the VIPs are served. There is no way to explicitly distribute the VIPs over the nodes, so there can be nodes with no VIPs and other nodes with many VIPs. If there is only one node, all VIPs are on it.
+
+The administrator must ensure that all of the VIP addresses meet the following requirements:
+
+* Accessible on the configured hosts from outside the cluster.
+* Not used for any other purpose within the cluster.
+
+Keepalived on each node determines whether the needed service is running. If it is, VIPs are supported and Keepalived participates in the negotiation to determine which node serves the VIP. For a node to participate, the service must be listening on the watch port on a VIP or the check must be disabled.

[NOTE]
====
-The VIPs must be routable from outside the cluster.
+Each VIP in the set might be served by a different node.
====

IP failover monitors a port on each VIP to determine whether the port is reachable on the node. If the port is not reachable, the VIP is not assigned to the node. If the port is set to `0`, this check is suppressed. The check script does the needed testing.

-IP failover uses link:http://www.keepalived.org/[Keepalived] to host a set of externally accessible VIP addresses on a set of hosts. Each VIP is only serviced by a single host at a time. Keepalived uses the Virtual Router Redundancy Protocol (VRRP) to determine which host, from the set of hosts, services which VIP. If a host becomes unavailable, or if the service that Keepalived is watching does not respond, the VIP is switched to another host from the set. This means a VIP is always serviced as long as a host is available.
-
When a node running Keepalived passes the check script, the VIP on that node can enter the `master` state based on its priority and the priority of the current master, as determined by the preemption strategy.

A cluster administrator can provide a script through the `OPENSHIFT_HA_NOTIFY_SCRIPT` variable, and this script is called whenever the state of the VIP on the node changes. Keepalived uses the `master` state when it is servicing the VIP, the `backup` state when another node is servicing the VIP, or the `fault` state when the check script fails. The notify script is called with the new state whenever the state changes.
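A sketch of wiring that variable into the deployment, with the script path assumed to point at a file mounted from a `ConfigMap` object:

[source,yaml]
----
env:
- name: OPENSHIFT_HA_NOTIFY_SCRIPT
  value: /etc/keepalive/mynotifyscript.sh  # hypothetical full path inside the pod
----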
@@ -27,21 +34,16 @@ You can create an IP failover deployment configuration on {product-title}. The I

When using VIPs to access a pod with host networking, the application pod runs on all nodes that are running the IP failover pods. This enables any of the IP failover nodes to become the master and service the VIPs when needed. If application pods are not running on all nodes with IP failover, either some IP failover nodes never service the VIPs or some application pods never receive any traffic. Use the same selector and replication count for both IP failover and the application pods to avoid this mismatch.

-While using VIPs to access a service, any of the nodes can be in the IP failover set of nodes, since the service is reachable on all nodes, no matter where the application pod is running. Any of the IP failover nodes can become master at any time. The service can either use external IPs and a service port or it can use a `NodePort`.
+While using VIPs to access a service, any of the nodes can be in the IP failover set of nodes, because the service is reachable on all nodes, no matter where the application pod is running. Any of the IP failover nodes can become master at any time. The service can either use external IPs and a service port or it can use a `NodePort`. Setting up a `NodePort` is a privileged operation.

When using external IPs in the service definition, the VIPs are set to the external IPs, and the IP failover monitoring port is set to the service port. When using a node port, the port is open on every node in the cluster, and the service load-balances traffic from whatever node currently services the VIP. In this case, the IP failover monitoring port is set to the `NodePort` in the service definition.
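As a sketch of the external IP variant (all names and addresses here are illustrative):

[source,yaml]
----
apiVersion: v1
kind: Service
metadata:
  name: myservice  # hypothetical
spec:
  selector:
    app: myapp     # hypothetical
  ports:
  - port: 80       # the service port that IP failover monitors
  externalIPs:
  - 192.168.126.10 # one of the failover-managed VIPs
----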

-[IMPORTANT]
-====
-Setting up a `NodePort` is a privileged operation.
-====
-
[IMPORTANT]
====
Even though a service VIP is highly available, performance can still be affected. Keepalived makes sure that each of the VIPs is serviced by some node in the configuration, and several VIPs can end up on the same node even when other nodes have none. Strategies that externally load-balance across a set of VIPs can be thwarted when IP failover puts multiple VIPs on the same node.
====

-When you use `ingressIP`, you can set up IP failover to have the same VIP range as the `ingressIP` range. You can also disable the monitoring port. In this case, all the VIPs appear on same node in the cluster. Any user can set up a service with an `ingressIP` and have it highly available.
+When you use `ExternalIP`, you can set up IP failover to have the same VIP range as the `ExternalIP` range. You can also disable the monitoring port. In this case, all of the VIPs appear on the same node in the cluster. Any user can set up a service with an `ExternalIP` and make it highly available.
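Under that setup, a service created by any user can pick up an address from the shared range automatically; a sketch, assuming `autoAssignCIDRs` assigns addresses to services of type `LoadBalancer`:

[source,yaml]
----
apiVersion: v1
kind: Service
metadata:
  name: mysvc           # hypothetical
spec:
  type: LoadBalancer    # receives an ExternalIP from the autoAssignCIDRs range
  selector:
    app: myapp          # hypothetical
  ports:
  - port: 8080
----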