
Commit 5ebd4e0

OCP docs update, week of 2025/06/02
1 parent 52e8cfd commit 5ebd4e0

297 files changed: +5319 additions, -2979 deletions


ocp-product-docs-plaintext/4.15/architecture/nvidia-gpu-architecture-overview.txt

Lines changed: 1 addition & 1 deletion
@@ -105,7 +105,7 @@ You can deploy Red Hat OpenShift Container Platform to one of the major cloud se
 
 Two modes of operation are available: a fully managed deployment and a self-managed deployment.
 
-* In a fully managed deployment, everything is automated by Red Hat in collaboration with CSP. You can request an OpenShift instance through the CSP web console, and the cluster is automatically created and fully managed by Red Hat. You do not have to worry about node failures or errors in the environment. Red Hat is fully responsible for maintaining the uptime of the cluster. The fully managed services are available on AWS and Azure. For AWS, the OpenShift service is called ROSA (Red Hat OpenShift Service on AWS). For Azure, the service is called Azure Red Hat OpenShift.
+* In a fully managed deployment, everything is automated by Red Hat in collaboration with CSP. You can request an OpenShift instance through the CSP web console, and the cluster is automatically created and fully managed by Red Hat. You do not have to worry about node failures or errors in the environment. Red Hat is fully responsible for maintaining the uptime of the cluster. The fully managed services are available on AWS, Azure, and GCP. For AWS, the OpenShift service is called ROSA (Red Hat OpenShift Service on AWS). For Azure, the service is called Azure Red Hat OpenShift. For GCP, the service is called OpenShift Dedicated on GCP.
 * In a self-managed deployment, you are responsible for instantiating and maintaining the OpenShift cluster. Red Hat provides the OpenShift-install utility to support the deployment of the OpenShift cluster in this case. The self-managed services are available globally to all CSPs.
 
 It is important that this compute instance is a GPU-accelerated compute instance and that the GPU type matches the list of supported GPUs from NVIDIA AI Enterprise. For example, T4, V100, and A100 are part of this list.
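
The self-managed path mentioned above centers on the openshift-install utility. Cluster creation typically starts along these lines (a minimal sketch; the installation directory is a placeholder and the installer prompts for platform-specific values not shown here):

```terminal
$ openshift-install create cluster --dir <installation_directory> --log-level=info
```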

ocp-product-docs-plaintext/4.15/installing/install_config/configuring-firewall.txt

Lines changed: 1 addition & 2 deletions
@@ -1,8 +1,7 @@
 # Configuring your firewall
 
 
-If you use a firewall, you must configure it so that Red Hat OpenShift Container Platform can access the sites that it requires to function. You must always grant access to some sites, and you grant access to more if you use
-Red Hat Insights, the Telemetry service, a cloud to host your cluster, and certain build strategies.
+If you use a firewall, you must configure it so that Red Hat OpenShift Container Platform can access the sites that it requires to function. You must always grant access to some sites, and you grant access to more if you use Red Hat Insights, the Telemetry service, a cloud to host your cluster, and certain build strategies.
 
 # Configuring your firewall for Red Hat OpenShift Container Platform
 
ocp-product-docs-plaintext/4.15/installing/installing_aws/installing-aws-outposts.txt

Lines changed: 4 additions & 2 deletions
@@ -154,12 +154,12 @@ To extend your VPC cluster into an Outpost, you must complete the following netw
 
 ## Changing the cluster network MTU to support AWS Outposts
 
-During installation, the maximum transmission unit (MTU) for the cluster network is detected automatically based on the MTU of the primary network interface of nodes in the cluster.
-You might need to decrease the MTU value for the cluster network to support an AWS Outposts subnet.
+During installation, the maximum transmission unit (MTU) for the cluster network is detected automatically based on the MTU of the primary network interface of nodes in the cluster. You might need to decrease the MTU value for the cluster network to support an AWS Outposts subnet.
 
 
 [IMPORTANT]
 ----
+You cannot roll back an MTU value for nodes during the MTU migration process, but you can roll back the value after the MTU migration process completes.
 The migration is disruptive and nodes in your cluster might be temporarily unavailable as the MTU update takes effect.
 ----
 
@@ -170,6 +170,8 @@ For more details about the migration process, including important service interr
 * You have identified the target MTU for your cluster.
 * The MTU for the OVN-Kubernetes network plugin must be set to 100 less than the lowest hardware MTU value in your cluster.
 * The MTU for the OpenShift SDN network plugin must be set to 50 less than the lowest hardware MTU value in your cluster.
+* If your nodes are physical machines, ensure that the cluster network and the connected network switches support jumbo frames.
+* If your nodes are virtual machines (VMs), ensure that the hypervisor and the connected network switches support jumbo frames.
 
 1. To obtain the current MTU for the cluster network, enter the following command:
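
The hunk ends before the command for this step. For orientation, the cluster network configuration, including the current MTU, can be inspected along these lines (a sketch, not taken from this diff):

```terminal
$ oc describe network.config cluster
```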

ocp-product-docs-plaintext/4.15/installing/installing_sno/install-sno-installing-sno.txt

Lines changed: 5 additions & 0 deletions
@@ -214,6 +214,11 @@ $ coreos-installer iso ignition embed -fi ocp/bootstrap-in-place-for-live-iso.ig
 ```
 
 
+[IMPORTANT]
+----
+The SSL certificates for the RHCOS ISO installation image are only valid for 24 hours. If you use the ISO image to install a node more than 24 hours after creating the image, the installation can fail. To re-create the image after 24 hours, delete the ocp directory and re-create the Red Hat OpenShift Container Platform assets.
+----
+
 * See Requirements for installing OpenShift on a single node for more information about installing Red Hat OpenShift Container Platform on a single node.
 * See Cluster capabilities for more information about enabling cluster capabilities that were disabled before installation.
 * See Optional cluster capabilities in Red Hat OpenShift Container Platform 4.15 for more information about the features provided by each capability.
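
For reference, re-creating the assets described in the note above might look like the following (a sketch, assuming the assets were generated into the ocp directory with the single-node ignition target used earlier in this procedure, and that a copy of install-config.yaml was kept):

```terminal
$ rm -rf ocp
$ mkdir ocp
$ cp install-config.yaml ocp/
$ openshift-install --dir=ocp create single-node-ignition-config
```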

ocp-product-docs-plaintext/4.15/installing/overview/cluster-capabilities.txt

Lines changed: 1 addition & 1 deletion
@@ -204,7 +204,7 @@ Disable the DeploymentConfig capability only if you do not require DeploymentCon
 
 The Insights Operator provides the features for the Insights capability.
 
-The Insights Operator gathers Red Hat OpenShift Container Platform configuration data and sends it to Red Hat. The data is used to produce proactive insights recommendations about potential issues that a cluster might be exposed to. These insights are communicated to cluster administrators through Insights Advisor on console.redhat.com.
+The Insights Operator gathers Red Hat OpenShift Container Platform configuration data and sends it to Red Hat. The data is used to produce proactive insights recommendations about potential issues that a cluster might be exposed to. These insights are communicated to cluster administrators through the Insights advisor service on console.redhat.com.
 
 ## Notes
 
ocp-product-docs-plaintext/4.15/machine_management/control_plane_machine_management/cpmso_provider_configurations/cpmso-config-options-azure.txt

Lines changed: 2 additions & 2 deletions
@@ -438,11 +438,11 @@ The machines should be in the Running state.
 2. For a machine that is running and has a node attached, validate the partition by running the following command:
 
 ```terminal
-$ oc debug node/<node-name> -- chroot /host lsblk
+$ oc debug node/<node_name> -- chroot /host lsblk
 ```
 
 
-In this command, oc debug node/<node-name> starts a debugging shell on the node <node-name> and passes a command with --. The passed command chroot /host provides access to the underlying host OS binaries, and lsblk shows the block devices that are attached to the host OS machine.
+In this command, oc debug node/<node_name> starts a debugging shell on the node <node_name> and passes a command with --. The passed command chroot /host provides access to the underlying host OS binaries, and lsblk shows the block devices that are attached to the host OS machine.
 
 * To use an ultra disk on the control plane, reconfigure your workload to use the control plane's ultra disk mount point.
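
If the full lsblk listing is noisy, the same check can be narrowed to a few columns (a variant sketch using standard lsblk options):

```terminal
$ oc debug node/<node_name> -- chroot /host lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
```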

ocp-product-docs-plaintext/4.15/machine_management/creating_machinesets/creating-machineset-azure.txt

Lines changed: 5 additions & 5 deletions
@@ -603,11 +603,11 @@ For <role>-user-data-x5, specify the name of the secret. Replace <role> with {ma
 5. Copy an existing Azure MachineSet custom resource (CR) and edit it by running the following command:
 
 ```terminal
-$ oc edit machineset <machine-set-name>
+$ oc edit machineset <machine_set_name>
 ```
 
 
-where <machine-set-name> is the machine set that you want to provision machines with ultra disks.
+where <machine_set_name> is the machine set that you want to provision machines with ultra disks.
 6. Add the following lines in the positions indicated:
 
 ```yaml

@@ -641,7 +641,7 @@ Specify the user data secret created earlier. Replace <role> with {machine-role}
 7. Create a machine set using the updated configuration by running the following command:
 
 ```terminal
-$ oc create -f <machine-set-name>.yaml
+$ oc create -f <machine_set_name>.yaml
 ```
 
 

@@ -656,11 +656,11 @@ The machines should be in the Running state.
 2. For a machine that is running and has a node attached, validate the partition by running the following command:
 
 ```terminal
-$ oc debug node/<node-name> -- chroot /host lsblk
+$ oc debug node/<node_name> -- chroot /host lsblk
 ```
 
 
-In this command, oc debug node/<node-name> starts a debugging shell on the node <node-name> and passes a command with --. The passed command chroot /host provides access to the underlying host OS binaries, and lsblk shows the block devices that are attached to the host OS machine.
+In this command, oc debug node/<node_name> starts a debugging shell on the node <node_name> and passes a command with --. The passed command chroot /host provides access to the underlying host OS binaries, and lsblk shows the block devices that are attached to the host OS machine.
 
 * To use an ultra disk from within a pod, create a workload that uses the mount point. Create a YAML file similar to the following example:
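
The hunk ends before the example itself. An illustrative workload of that shape might look like the following sketch (the pod name, image, and mount path are assumptions, not the documentation's example):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ultra-disk-test
spec:
  containers:
  - name: app
    image: registry.access.redhat.com/ubi9/ubi-minimal
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: ultra-disk
      mountPath: /var/lib/lun0p1  # assumed mount point for the ultra disk partition
  volumes:
  - name: ultra-disk
    hostPath:
      path: /var/lib/lun0p1       # assumed host path where the partition is mounted
      type: Directory
```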

ocp-product-docs-plaintext/4.15/networking/changing-cluster-network-mtu.txt

Lines changed: 3 additions & 0 deletions
@@ -48,6 +48,7 @@ As a cluster administrator, you can increase or decrease the maximum transmissio
 
 [IMPORTANT]
 ----
+You cannot roll back an MTU value for nodes during the MTU migration process, but you can roll back the value after the MTU migration process completes.
 The migration is disruptive and nodes in your cluster might be temporarily unavailable as the MTU update takes effect.
 ----
 

@@ -58,6 +59,8 @@ The following procedure describes how to change the cluster network MTU by using
 * You have identified the target MTU for your cluster.
 * The MTU for the OVN-Kubernetes network plugin must be set to 100 less than the lowest hardware MTU value in your cluster.
 * The MTU for the OpenShift SDN network plugin must be set to 50 less than the lowest hardware MTU value in your cluster.
+* If your nodes are physical machines, ensure that the cluster network and the connected network switches support jumbo frames.
+* If your nodes are virtual machines (VMs), ensure that the hypervisor and the connected network switches support jumbo frames.
 
 1. To obtain the current MTU for the cluster network, enter the following command:
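
The hunk ends before the command for this step. As a quick check for the jumbo-frames prerequisites above, the per-interface MTU on a node can be inspected from a debug shell (a sketch reusing the oc debug pattern shown elsewhere in this commit):

```terminal
$ oc debug node/<node_name> -- chroot /host ip link show
```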

ocp-product-docs-plaintext/4.15/networking/dns-operator.txt

Lines changed: 18 additions & 0 deletions
@@ -354,6 +354,24 @@ $ oc get configmap/dns-default -n openshift-dns -o yaml
 ```
 
 
+# Viewing the CoreDNS logs
+
+You can view CoreDNS logs by using the oc logs command.
+
+* View the logs of a specific CoreDNS pod by entering the following command:
+
+```terminal
+$ oc -n openshift-dns logs -c dns <core_dns_pod_name>
+```
+
+* Follow the logs of all CoreDNS pods by entering the following command:
+
+```terminal
+$ oc -n openshift-dns logs -c dns -l dns.operator.openshift.io/daemonset-dns=default -f --max-log-requests=<number> 1
+```
+
+Specifies the number of DNS pods to stream logs from. The maximum is 6.
+
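For example, to follow the logs of all CoreDNS pods at the documented maximum of six streams:

```terminal
$ oc -n openshift-dns logs -c dns -l dns.operator.openshift.io/daemonset-dns=default -f --max-log-requests=6
```
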
 # Setting the CoreDNS Operator log level
 
 Cluster administrators can configure the Operator log level to more quickly track down OpenShift DNS issues. The valid values for operatorLogLevel are Normal, Debug, and Trace. Trace has the most detailed information. The default operatorLogLevel is Normal. There are seven logging levels for issues: Trace, Debug, Info, Warning, Error, Fatal, and Panic. After the logging level is set, log entries with that severity or anything above it will be logged.
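
A log level change of the kind described here is typically applied by patching the default DNS Operator resource (a sketch; confirm the field name against your cluster's dnses.operator.openshift.io CRD):

```terminal
$ oc patch dnses.operator.openshift.io/default --type=merge -p '{"spec":{"operatorLogLevel":"Debug"}}'
```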

ocp-product-docs-plaintext/4.15/nodes/pods/nodes-pods-vertical-autoscaler.txt

Lines changed: 43 additions & 0 deletions
@@ -616,6 +616,49 @@ spec:
 ```
 
 
+## Custom memory bump-up after OOM event
+
+If your cluster experiences an OOM (out of memory) event, the Vertical Pod Autoscaler Operator (VPA) increases the memory recommendation based on the memory consumption observed during the OOM event and a specified multiplier value, in order to prevent future crashes due to insufficient memory.
+
+The recommendation is the higher of two calculations: the memory in use by the pod when the OOM event happened plus a specified number of bytes, or that same memory usage multiplied by a specified ratio. The calculation is represented by the following formula:
+
+```text
+recommendation = max(memory-usage-in-oom-event + oom-min-bump-up-bytes, memory-usage-in-oom-event * oom-bump-up-ratio)
+```
+
+You can configure the memory increase by specifying the following values in the recommender pod:
+
+* oom-min-bump-up-bytes. This value, in bytes, is a fixed minimum increase in memory after an OOM event occurs. The default is 100MiB.
+* oom-bump-up-ratio. This value is the ratio by which the memory in use during the OOM event is multiplied. The default value is 1.2.
+
+For example, if the pod memory usage during an OOM event is 100MB, and oom-min-bump-up-bytes is set to 150MB with an oom-bump-up-ratio of 1.2, after an OOM event the VPA would recommend increasing the memory request for that pod to 250MB (100MB + 150MB), as that is higher than 120MB (100MB * 1.2).
+
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: vpa-recommender-default
+  namespace: openshift-vertical-pod-autoscaler
+# ...
+spec:
+# ...
+  template:
+# ...
+    spec:
+      containers:
+      - name: recommender
+        args:
+        - --oom-bump-up-ratio=2.0
+        - --oom-min-bump-up-bytes=524288000
+# ...
+```
+
+
+* Understanding OOM kill policy
+
 ## Using an alternative recommender
 
 You can use your own recommender to autoscale based on your own algorithms. If you do not specify an alternative recommender, Red Hat OpenShift Container Platform uses the default recommender, which suggests CPU and memory requests based on historical usage. Because there is no universal recommendation policy that applies to all types of workloads, you might want to create and deploy different recommenders for specific workloads.
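
Working the bump-up formula through the example values above makes the max() behavior concrete:

```text
recommendation = max(100MB + 150MB, 100MB * 1.2)
               = max(250MB, 120MB)
               = 250MB
```

A VerticalPodAutoscaler CR that opts into an alternative recommender might look like the following sketch (the recommender and workload names are assumptions; the recommenders field selects a recommender that you deploy separately):

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: vpa-alternative-recommender
spec:
  recommenders:
  - name: my-recommender  # assumed name of the separately deployed recommender
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: frontend        # assumed workload
  updatePolicy:
    updateMode: "Auto"
```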
