Commit 411b46a

OLS-1903: OCP docs update, week of 2025/07/07
1 parent 62c6a0b commit 411b46a

File tree

141 files changed (+3261 / -895 lines changed)


ocp-product-docs-plaintext/4.15/installing/installing_bare_metal/installing-bare-metal-network-customizations.txt

Lines changed: 1 addition & 1 deletion
@@ -1790,7 +1790,7 @@ $ coreos-installer install \
```

The desired secondary console. In this case, the graphical console. Omitting this option will disable the graphical console.
- The desired primary console. In this case the serial console. The options field defines the baud rate and other settings. A common value for this field is 11520n8. If no options are provided, the default kernel value of 9600n8 is used. For more information on the format of this option, see Linux kernel serial console documentation.
+ The desired primary console. In this case the serial console. The options field defines the baud rate and other settings. A common value for this field is 115200n8. If no options are provided, the default kernel value of 9600n8 is used. For more information on the format of this option, see Linux kernel serial console documentation.
3. Reboot into the installed system.

[NOTE]
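
For reference, a minimal sketch of how the corrected serial console value typically appears on a coreos-installer invocation; the device path and Ignition URL below are illustrative placeholders, not taken from this commit:

```terminal
$ coreos-installer install \
  --console=tty0 \
  --console=ttyS0,115200n8 \
  --ignition-url=http://<http_server>/worker.ign \
  /dev/sda
```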

ocp-product-docs-plaintext/4.15/installing/installing_bare_metal/installing-bare-metal.txt

Lines changed: 1 addition & 1 deletion
@@ -1779,7 +1779,7 @@ $ coreos-installer install \
```

The desired secondary console. In this case, the graphical console. Omitting this option will disable the graphical console.
- The desired primary console. In this case the serial console. The options field defines the baud rate and other settings. A common value for this field is 11520n8. If no options are provided, the default kernel value of 9600n8 is used. For more information on the format of this option, see Linux kernel serial console documentation.
+ The desired primary console. In this case the serial console. The options field defines the baud rate and other settings. A common value for this field is 115200n8. If no options are provided, the default kernel value of 9600n8 is used. For more information on the format of this option, see Linux kernel serial console documentation.
3. Reboot into the installed system.

[NOTE]

ocp-product-docs-plaintext/4.15/installing/installing_bare_metal/installing-restricted-networks-bare-metal.txt

Lines changed: 1 addition & 1 deletion
@@ -1765,7 +1765,7 @@ $ coreos-installer install \
```

The desired secondary console. In this case, the graphical console. Omitting this option will disable the graphical console.
- The desired primary console. In this case the serial console. The options field defines the baud rate and other settings. A common value for this field is 11520n8. If no options are provided, the default kernel value of 9600n8 is used. For more information on the format of this option, see Linux kernel serial console documentation.
+ The desired primary console. In this case the serial console. The options field defines the baud rate and other settings. A common value for this field is 115200n8. If no options are provided, the default kernel value of 9600n8 is used. For more information on the format of this option, see Linux kernel serial console documentation.
3. Reboot into the installed system.

[NOTE]

ocp-product-docs-plaintext/4.15/networking/configuring_ingress_cluster_traffic/configuring-ingress-cluster-traffic-aws.txt

Lines changed: 9 additions & 7 deletions
@@ -16,10 +16,13 @@ If the timeout period of the CLB is shorter than the route timeout or Ingress Co

## Configuring route timeouts

- You can configure the default timeouts for an existing route when you
- have services in need of a low timeout, which is required for Service Level
- Availability (SLA) purposes, or a high timeout, for cases with a slow
- back end.
+ You can configure the default timeouts for an existing route when you have services in need of a low timeout, which is required for Service Level Availability (SLA) purposes, or a high timeout, for cases with a slow back end.
+
+
+ [IMPORTANT]
+ ----
+ If you configured a user-managed external load balancer in front of your Red Hat OpenShift Container Platform cluster, ensure that the timeout value for the user-managed external load balancer is higher than the timeout value for the route. This configuration prevents network congestion issues over the network that your cluster uses.
+ ----

* You need a deployed Ingress Controller on a running cluster.

@@ -30,10 +33,9 @@ $ oc annotate route <route_name> \
--overwrite haproxy.router.openshift.io/timeout=<timeout><time_unit> 1
```

- Supported time units are microseconds (us), milliseconds (ms), seconds (s),
- minutes (m), hours (h), or days (d).
+ Supported time units are microseconds (us), milliseconds (ms), seconds (s), minutes (m), hours (h), or days (d).

- The following example sets a timeout of two seconds on a route named myroute:
+ The following example sets a timeout of two seconds on a route named myroute:

```terminal
$ oc annotate route myroute --overwrite haproxy.router.openshift.io/timeout=2s
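
As a quick check after applying such an annotation (an assumed follow-up command, not part of this commit), the value can be confirmed on the route; for the example above, the output should include haproxy.router.openshift.io/timeout: 2s:

```terminal
$ oc get route myroute -o yaml | grep haproxy.router.openshift.io/timeout
```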

ocp-product-docs-plaintext/4.15/networking/ovn_kubernetes_network_provider/configuring-ipsec-ovn.txt

Lines changed: 10 additions & 1 deletion
@@ -3,6 +3,15 @@

By enabling IPsec, you can encrypt both internal pod-to-pod cluster traffic between nodes and external traffic between pods and IPsec endpoints external to your cluster. All pod-to-pod network traffic between nodes on the OVN-Kubernetes cluster network is encrypted with IPsec in Transport mode.
IPsec is disabled by default. You can enable IPsec either during or after installing the cluster. For information about cluster installation, see Red Hat OpenShift Container Platform installation overview.
+
+ [NOTE]
+ ----
+ Upgrading your cluster to Red Hat OpenShift Container Platform 4.15 when the libreswan and NetworkManager-libreswan packages have different Red Hat OpenShift Container Platform versions causes two consecutive compute node reboot operations. For the first reboot, the Cluster Network Operator (CNO) applies the IPsec configuration to compute nodes. For the second reboot, the Machine Config Operator (MCO) applies the latest machine configs to the cluster.
+ To combine the CNO and MCO updates into a single node reboot, complete the following tasks:
+ * Before upgrading your cluster, set the paused parameter to true in the MachineConfigPools custom resource (CR) that groups compute nodes.
+ * After you upgrade your cluster, set the parameter to false.
+ For more information, see Performing a Control Plane Only update.
+ ----
The following support limitations exist for IPsec on a Red Hat OpenShift Container Platform cluster:
* On IBM Cloud(R), IPsec supports only NAT-T. Encapsulating Security Payload (ESP) is not supported on this platform.
* If your cluster uses hosted control planes for Red Hat Red Hat OpenShift Container Platform, IPsec is not supported for IPsec encryption of either pod-to-pod or traffic to external hosts.

@@ -472,4 +481,4 @@ $ oc patch networks.operator.openshift.io cluster --type=merge \
* Installing Butane
* About the OVN-Kubernetes Container Network Interface (CNI) network plugin
* Changing the MTU for the cluster network
- * Network [operator.openshift.io/v1] API
+ * xref:../../rest_api/operator_apis/network-operator-openshift-io-v1.adoc#network-operator-openshift-io-v1[Network [operator.openshift.io/v1\] API
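
Regarding the MachineConfigPool pause guidance in the note added above, a minimal sketch of the pause and unpause steps, assuming the compute nodes are grouped by the default worker pool (the pool name and exact workflow may differ in your cluster):

```terminal
$ oc patch machineconfigpool worker --type merge -p '{"spec":{"paused":true}}'

# ...perform the cluster upgrade, then unpause so the MCO can roll out machine configs...

$ oc patch machineconfigpool worker --type merge -p '{"spec":{"paused":false}}'
```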

ocp-product-docs-plaintext/4.15/networking/ovn_kubernetes_network_provider/migrate-from-openshift-sdn.txt

Lines changed: 103 additions & 6 deletions
@@ -3,8 +3,9 @@

As a cluster administrator, you can migrate to the OVN-Kubernetes network plugin from the OpenShift software-defined networking (SDN) plugin.
The following methods exist for migrating from the OpenShift SDN network plugin to the OVN-Kubernetes plugin:
- Limited live migration (preferred method):: This is an automated process that migrates your cluster from OpenShift SDN to OVN-Kubernetes.
+ Ansible playbook:: The Ansible playbook method automates the offline migration method steps. This method has the same usage scenarios as the manual offline migration method.
Offline migration:: This is a manual process that includes some downtime. This method is primarily used for self-managed Red Hat OpenShift Container Platform deployments, and consider using this method when you cannot perform a limited live migration to the OVN-Kubernetes network plugin.
+ Limited live migration (Preferred method):: This is an automated process that migrates your cluster from OpenShift SDN to OVN-Kubernetes.

[WARNING]
----

@@ -623,7 +624,7 @@ The following table shows you the available metrics and the label values populat

# Offline migration to the OVN-Kubernetes network plugin overview

- The offline migration method is a manual process that includes some downtime, during which your cluster is unreachable. This method is primarily used for self-managed Red Hat OpenShift Container Platform deployments, and is an alternative to the limited live migration procedure. It should be used in the event that you cannot perform a limited live migration to the OVN-Kubernetes network plugin.
+ The offline migration method is a manual process that includes some downtime, during which your cluster is unreachable. You can use an Ansible playbook that automates the offline migration steps so that you can save time. These methods are primarily used for self-managed Red Hat OpenShift Container Platform deployments, and are an alternative to the limited live migration procedure. Use these methods only when you cannot perform a limited live migration to the OVN-Kubernetes network plugin.

Although a rollback procedure is provided, the offline migration is intended to be a one-way process.

@@ -635,13 +636,13 @@ OpenShift SDN CNI is deprecated as of Red Hat OpenShift Container Platform 4.14.

The following sections provide more information about the offline migration method.

- ## Supported platforms when using the offline migration method
+ ## Supported platforms when using the offline migration methods

- The following table provides information about the supported platforms for the offline migration type.
+ The following table provides information about the supported platforms for the manual offline migration type.



- ## Considerations for offline migration to the OVN-Kubernetes network plugin
+ ## Considerations for the offline migration methods to the OVN-Kubernetes network plugin

If you have more than 150 nodes in your Red Hat OpenShift Container Platform cluster, then open a support case for consultation on your migration to the OVN-Kubernetes network plugin.

@@ -732,6 +733,101 @@ The following table summarizes the migration process by segmenting between the u



+ ## Using an Ansible playbook to migrate to the OVN-Kubernetes network plugin
+
+ As a cluster administrator, you can use an Ansible collection, network.offline_migration_sdn_to_ovnk, to migrate from the OpenShift SDN Container Network Interface (CNI) network plugin to the OVN-Kubernetes plugin for your cluster. The Ansible collection includes the following playbooks:
+
+ * playbooks/playbook-migration.yml: Includes playbooks that execute in a sequence where each playbook represents a step in the migration process.
+ * playbooks/playbook-rollback.yml: Includes playbooks that execute in a sequence where each playbook represents a step in the rollback process.
+
+ * You installed the python3 package, minimum version 3.10.
+ * You installed the jmespath and jq packages.
+ * You logged in to the Red Hat Hybrid Cloud Console and opened the Ansible Automation Platform web console.
+ * You created a security group rule that allows User Datagram Protocol (UDP) packets on port 6081 for all nodes on all cloud platforms. If you do not do this task, your cluster might fail to schedule pods.
+ * If the OpenShift-SDN plugin uses the 100.64.0.0/16 and 100.88.0.0/16 address ranges, you patched the address ranges. For more information, see "Patching OVN-Kubernetes address ranges" in the Additional resources section.
+
+ 1. Install the ansible-core package, minimum version 2.15. The following example command shows how to install the ansible-core package on Red Hat Enterprise Linux (RHEL):
+
+ ```terminal
+ $ sudo dnf install -y ansible-core
+ ```
+
+ 2. Create an ansible.cfg file and add information similar to the following example to the file. Ensure that file exists in the same directory as where the ansible-galaxy commands and the playbooks run.
+
+ ```ini
+ $ cat << EOF >> ansible.cfg
+ [galaxy]
+ server_list = automation_hub, validated
+
+ [galaxy_server.automation_hub]
+ url=https://console.redhat.com/api/automation-hub/content/published/
+ auth_url=https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token
+ token=
+
+ #[galaxy_server.release_galaxy]
+ #url=https://galaxy.ansible.com/
+
+ [galaxy_server.validated]
+ url=https://console.redhat.com/api/automation-hub/content/validated/
+ auth_url=https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token
+ token=
+ EOF
+ ```
+
+ 3. From the Ansible Automation Platform web console, go to the Connect to Hub page and complete the following steps:
+ 1. In the Offline token section of the page, click the Load token button.
+ 2. After the token loads, click the Copy to clipboard icon.
+ 3. Open the ansible.cfg file and paste the API token in the token= parameter. The API token is required for authenticating against the server URL specified in the ansible.cfg file.
+ 4. Install the network.offline_migration_sdn_to_ovnk Ansible collection by entering the following ansible-galaxy command:
+
+ ```terminal
+ $ ansible-galaxy collection install network.offline_migration_sdn_to_ovnk
+ ```
+
+ 5. Verify that the network.offline_migration_sdn_to_ovnk Ansible collection is installed on your system:
+
+ ```terminal
+ $ ansible-galaxy collection list | grep network.offline_migration_sdn_to_ovnk
+ ```
+
+ Example output
+
+ ```terminal
+ network.offline_migration_sdn_to_ovnk 1.0.2
+ ```
+
+
+ The network.offline_migration_sdn_to_ovnk Ansible collection is saved in the default path of ~/.ansible/collections/ansible_collections/network/offline_migration_sdn_to_ovnk/.
+ 6. Configure migration features in the playbooks/playbook-migration.yml file:
+
+ ```yaml
+ # ...
+ migration_interface_name: eth0
+ migration_disable_auto_migration: true
+ migration_egress_ip: false
+ migration_egress_firewall: false
+ migration_multicast: false
+ migration_mtu: 1400
+ migration_geneve_port: 6081
+ migration_ipv4_subnet: "100.64.0.0/16"
+ # ...
+ ```
+
+ migration_interface_name:: If you use an NodeNetworkConfigurationPolicy (NNCP) resource on a primary interface, specify the interface name in the migration-playbook.yml file so that the NNCP resource gets deleted on the primary interface during the migration process.
+ migration_disable_auto_migration:: Disables the auto-migration of OpenShift SDN CNI plug-in features to the OVN-Kubernetes plugin. If you disable auto-migration of features, you must also set the migration_egress_ip, migration_egress_firewall, and migration_multicast parameters to false. If you need to enable auto-migration of features, set the parameter to false.
+ migration_mtu:: Optional parameter that sets a specific maximum transmission unit (MTU) to your cluster network after the migration process.
+ migration_geneve_port:: Optional parameter that sets a Geneve port for OVN-Kubernetes. The default port is 6081.
+ migration_ipv4_subnet:: Optional parameter that sets an IPv4 address range for internal use by OVN-Kubernetes. The default value for the parameter is 100.64.0.0/16.
+ 7. To run the playbooks/playbook-migration.yml file, enter the following command:
+
+ ```terminal
+ $ ansible-playbook -v playbooks/playbook-migration.yml
+ ```
+
+
+ * Patching OVN-Kubernetes address ranges
+ * Getting started with playbooks (Red Hat Ansible Automation Platform documentation)
+
## Migrating to the OVN-Kubernetes network plugin by using the offline migration method

As a cluster administrator, you can change the network plugin for your cluster to OVN-Kubernetes. During the migration, you must reboot every node in your cluster.
@@ -891,7 +987,8 @@ $ oc patch Network.operator.openshift.io cluster --type=merge \
"ovnKubernetesConfig":{
"mtu":<mtu>,
"genevePort":<port>,
- "v4InternalSubnet":"<ipv4_subnet>"
+ "ipv4":
+ "InternalJoinSubnet": "<ipv4_subnet>"
}}}}'
```
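
As a hedged post-migration check (an assumed follow-up, not part of this commit), the active network plugin can be confirmed from the cluster network configuration; the output should read OVNKubernetes once the migration completes:

```terminal
$ oc get network.config/cluster -o jsonpath='{.status.networkType}{"\n"}'
```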

ocp-product-docs-plaintext/4.15/networking/ovn_kubernetes_network_provider/rollback-to-openshift-sdn.txt

Lines changed: 86 additions & 0 deletions
@@ -316,6 +316,92 @@ $ oc delete namespace openshift-ovn-kubernetes
```


+ # Using an Ansible playbook to roll back to the OpenShift SDN network plugin
+
+ As a cluster administrator, you can use the playbooks/playbook-rollback.yml from the network.offline_migration_sdn_to_ovnk Ansible collection to roll back from the OVN-Kubernetes plugin to the OpenShift SDN Container Network Interface (CNI) network plugin.
+
+ * You installed the python3 package, minimum version 3.10.
+ * You installed the jmespath and jq packages.
+ * You logged in to the Red Hat Hybrid Cloud Console and opened the Ansible Automation Platform web console.
+ * You created a security group rule that allows User Datagram Protocol (UDP) packets on port 6081 for all nodes on all cloud platforms. If you do not do this task, your cluster might fail to schedule pods.
+
+ 1. Install the ansible-core package, minimum version 2.15. The following example command shows how to install the ansible-core package on Red Hat Enterprise Linux (RHEL):
+
+ ```terminal
+ $ sudo dnf install -y ansible-core
+ ```
+
+ 2. Create an ansible.cfg file and add information similar to the following example to the file. Ensure that file exists in the same directory as where the ansible-galaxy commands and the playbooks run.
+
+ ```ini
+ $ cat << EOF >> ansible.cfg
+ [galaxy]
+ server_list = automation_hub, validated
+
+ [galaxy_server.automation_hub]
+ url=https://console.redhat.com/api/automation-hub/content/published/
+ auth_url=https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token
+ token=
+
+ #[galaxy_server.release_galaxy]
+ #url=https://galaxy.ansible.com/
+
+ [galaxy_server.validated]
+ url=https://console.redhat.com/api/automation-hub/content/validated/
+ auth_url=https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token
+ token=
+ EOF
+ ```
+
+ 3. From the Ansible Automation Platform web console, go to the Connect to Hub page and complete the following steps:
+ 1. In the Offline token section of the page, click the Load token button.
+ 2. After the token loads, click the Copy to clipboard icon.
+ 3. Open the ansible.cfg file and paste the API token in the token= parameter. The API token is required for authenticating against the server URL specified in the ansible.cfg file.
+ 4. Install the network.offline_migration_sdn_to_ovnk Ansible collection by entering the following ansible-galaxy command:
+
+ ```terminal
+ $ ansible-galaxy collection install network.offline_migration_sdn_to_ovnk
+ ```
+
+ 5. Verify that the network.offline_migration_sdn_to_ovnk Ansible collection is installed on your system:
+
+ ```terminal
+ $ ansible-galaxy collection list | grep network.offline_migration_sdn_to_ovnk
+ ```
+
+ Example output
+
+ ```terminal
+ network.offline_migration_sdn_to_ovnk 1.0.2
+ ```
+
+
+ The network.offline_migration_sdn_to_ovnk Ansible collection is saved in the default path of ~/.ansible/collections/ansible_collections/network/offline_migration_sdn_to_ovnk/.
+ 6. Configure rollback features in the playbooks/playbook-migration.yml file:
+
+ ```terminal
+ # ...
+ rollback_disable_auto_migration: true
+ rollback_egress_ip: false
+ rollback_egress_firewall: false
+ rollback_multicast: false
+ rollback_mtu: 1400
+ rollback_vxlanPort: 4790
+ # ...
+ ```
+
+ rollback_disable_auto_migration:: Disables the auto-migration of OVN-Kubernetes plug-in features to the OpenShift SDN CNI plug-in. If you disable auto-migration of features, you must also set the rollback_egress_ip, rollback_egress_firewall, and rollback_multicast parameters to false. If you need to enable auto-migration of features, set the parameter to false.
+ rollback_mtu:: Optional parameter that sets a specific maximum transmission unit (MTU) to your cluster network after the migration process.
+ rollback_vxlanPort:: Optional parameter that sets a VXLAN (Virtual Extensible LAN) port for use by OpenShift SDN CNI plug-in. The default value for the parameter is 4790.
+ 7. To run the playbooks/playbook-rollback.yml file, enter the following command:
+
+ ```terminal
+ $ ansible-playbook -v playbooks/playbook-rollback.yml
+ ```
+
+
+ * Patching OVN-Kubernetes address ranges
+
# Using the limited live migration method to roll back to the OpenShift SDN network plugin

As a cluster administrator, you can roll back to the OpenShift SDN Container Network Interface (CNI) network plugin by using the limited live migration method. During the migration with this method, nodes are automatically rebooted and service to the cluster is not interrupted.
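
As a hedged post-rollback check (assumed follow-up commands, not part of this commit), you can confirm that the cluster reports the OpenShift SDN plugin again and that the machine config pools have settled after the node reboots:

```terminal
$ oc get network.config/cluster -o jsonpath='{.status.networkType}{"\n"}'
$ oc get machineconfigpools
```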
