OLS-2020: OCP docs and Python deps update 2025/08/11 #473

Merged: 3 commits, Aug 11, 2025
2 changes: 1 addition & 1 deletion Containerfile
@@ -20,7 +20,7 @@ USER 0
WORKDIR /workdir

COPY requirements.gpu.txt .
RUN pip3.11 install --no-cache-dir -r requirements.gpu.txt
RUN pip3.11 install --no-cache-dir -r requirements.gpu.txt && ln -s /usr/local/lib/python3.11/site-packages/llama_index/core/_static/nltk_cache /root/nltk_data

COPY ocp-product-docs-plaintext ./ocp-product-docs-plaintext
COPY runbooks ./runbooks
@@ -4,8 +4,11 @@
Frequent backups might consume storage on the backup storage location. Check the frequency of backups, the retention time, and the amount of data in the persistent volumes (PVs) if you use non-local backups, for example, S3 buckets.
Because every backup is retained until it expires, also check the time to live (TTL) setting of the schedule.
You can back up applications by creating a Backup custom resource (CR). For more information, see Creating a Backup CR.
The following are the different backup types for a Backup CR:
* The Backup CR creates backup files for Kubernetes resources and internal images on S3 object storage.
* If you use Velero's snapshot feature to back up data stored on the persistent volume, only snapshot-related information is stored in the S3 bucket along with the OpenShift object data.
* If your cloud provider has a native snapshot API or supports CSI snapshots, the Backup CR backs up persistent volumes (PVs) by creating snapshots. For more information about working with CSI snapshots, see Backing up persistent volumes with CSI snapshots.
If the underlying storage or the backup bucket is part of the same cluster, the data might be lost in a disaster.
For more information about CSI volume snapshots, see CSI volume snapshots.
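
As a minimal sketch of such a Backup CR (the name, namespace list, and TTL are illustrative; OADP typically uses the velero.io/v1 API in the openshift-adp namespace):

```yaml
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: backup-example
  namespace: openshift-adp
spec:
  includedNamespaces:
  - my-app                  # namespaces whose resources are backed up
  storageLocation: default  # BackupStorageLocation, for example an S3 bucket
  ttl: 720h0m0s             # retention period; the backup expires after this TTL
```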

[IMPORTANT]
@@ -364,4 +364,49 @@ Prior to the installation of the Red Hat OpenShift Container Platform cluster, g
* Control plane and worker nodes are configured.
* All nodes are accessible via out-of-band management.
* (Optional) A separate management network has been created.
* Required data for installation.

# Installation overview

The installation program supports interactive mode. However, you can prepare an install-config.yaml file containing the provisioning details for all of the bare-metal hosts, and the relevant cluster details, in advance.

The installation program loads the install-config.yaml file, and the administrator generates the manifests and verifies all prerequisites.
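
For illustration, a trimmed install-config.yaml for a bare-metal deployment might look like the following sketch (the domain, names, addresses, and credentials are placeholders):

```yaml
apiVersion: v1
baseDomain: example.com
metadata:
  name: mycluster
controlPlane:
  name: master
  replicas: 3
compute:
- name: worker
  replicas: 2
platform:
  baremetal:
    apiVIPs:
    - 192.168.111.5
    ingressVIPs:
    - 192.168.111.4
    hosts:
    - name: openshift-master-0
      role: master
      bmc:
        address: ipmi://192.168.111.10  # BMC endpoint that Ironic uses
        username: admin
        password: password
      bootMACAddress: 52:54:00:00:00:01
```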

The installation program performs the following tasks:

* Enrolls all nodes in the cluster
* Starts the bootstrap virtual machine (VM)
* Starts the metal platform components as systemd services, which run the following containers:
* Ironic-dnsmasq: The DHCP server responsible for handing out IP addresses to the provisioning interface of the various nodes on the provisioning network. Ironic-dnsmasq is enabled only when you deploy a Red Hat OpenShift Container Platform cluster with a provisioning network.
* Ironic-httpd: The HTTP server that is used to ship the images to the nodes.
* Image-customization
* Ironic
* Ironic-inspector (available in Red Hat OpenShift Container Platform 4.16 and earlier)
* Ironic-ramdisk-logs
* Extract-machine-os
* Provisioning-interface
* Metal3-baremetal-operator

The nodes enter the validation phase, where each node moves to a manageable state after Ironic validates the credentials to access the Baseboard Management Controller (BMC).

When the node is in the manageable state, the inspection phase starts. The inspection phase ensures that the hardware meets the minimum requirements needed for a successful deployment of Red Hat OpenShift Container Platform.

The install-config.yaml file details the provisioning network. On the bootstrap VM, the installation program uses the Pre-Boot Execution Environment (PXE) to push a live image, with the Ironic Python Agent (IPA) loaded, to every node. When using virtual media, the installation program connects directly to the BMC of each node to virtually attach the image.

When using PXE boot, all nodes reboot to start the process:

* The ironic-dnsmasq service running on the bootstrap VM provides the IP address of the node and the TFTP boot server.
* The first-boot software loads the root file system over HTTP.
* The ironic service on the bootstrap VM receives the hardware information from each node.

The nodes enter the cleaning state, where each node must clean all the disks before continuing with the configuration.

After the cleaning state finishes, the nodes enter the available state and the installation program moves the nodes to the deploying state.

IPA runs the coreos-installer command to install the Red Hat Enterprise Linux CoreOS (RHCOS) image on the disk defined by the rootDeviceHints parameter in the install-config.yaml file. The node boots by using RHCOS.
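
For example, a host entry in install-config.yaml can pin the installation disk with rootDeviceHints (the device name is a placeholder):

```yaml
hosts:
- name: openshift-master-0
  role: master
  rootDeviceHints:
    deviceName: /dev/sda  # IPA installs RHCOS on the disk that matches this hint
```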

After the installation program configures the control plane nodes, it moves control from the bootstrap VM to the control plane nodes and deletes the bootstrap VM.

The Bare Metal Operator continues the deployment of the worker, storage, and infrastructure nodes.

After the installation completes, the nodes move to the active state. You can then proceed with postinstallation configuration and other Day 2 tasks.
@@ -1,9 +1,16 @@
# Installing a cluster on vSphere using the Agent-based Installer



The Agent-based installation method provides the flexibility to boot your on-premises servers in any way that you choose. It combines the ease of use of the Assisted Installation service with the ability to run offline, including in air-gapped environments.

Agent-based installation is a subcommand of the Red Hat OpenShift Container Platform installer. It generates a bootable ISO image containing all of the information required to deploy a Red Hat OpenShift Container Platform cluster with an available release image.
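
As a sketch, a minimal agent-config.yaml that can accompany install-config.yaml when you generate the ISO with the openshift-install agent create image command (the cluster name and rendezvous IP are placeholders):

```yaml
apiVersion: v1beta1
kind: AgentConfig
metadata:
  name: mycluster
rendezvousIP: 192.168.111.80  # the node that coordinates the installation
```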

# Additional resources

* Preparing to install with the Agent-based Installer

[IMPORTANT]
----
Your vSphere account must include privileges for reading and creating the resources required to install a Red Hat OpenShift Container Platform cluster.
For more information about privileges, see vCenter requirements.
----
@@ -61,7 +61,7 @@ Red Hat OpenShift Container Platform supports both automatic and manual IP addre
To use IP address blocks defined by autoAssignCIDRs in Red Hat OpenShift Container Platform, you must configure the necessary IP address assignment and routing for your host network.
----
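
For reference, automatic assignment is enabled through the cluster Network configuration, as in the following sketch (the CIDR is a placeholder):

```yaml
apiVersion: config.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  externalIP:
    autoAssignCIDRs:
    - 192.168.132.254/29  # block from which external IP addresses are assigned
```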

The following YAML describes a service with an external IP address configured:
The following YAML shows a Service object with a configured external IP:


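A minimal sketch of such a Service object (the name, selector, and address are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: http-service
spec:
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
  externalIPs:
  - 192.168.132.253  # traffic sent to this external IP is routed to the service
```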
@@ -249,6 +249,12 @@ For the configuration in the previous example, Red Hat OpenShift Container Platf
The following YAML describes the API for the EgressIP object. The scope of the object is cluster-wide; it is not created in a namespace.


[IMPORTANT]
----
EgressIP-selected pods cannot serve as backends for services with externalTrafficPolicy set to Local. If you try this configuration, service ingress traffic that targets the pods is incorrectly rerouted to the egress node that hosts the EgressIP. This disrupts the handling of incoming service traffic and causes connections to drop, leaving the services unavailable.
----


```yaml
apiVersion: k8s.ovn.org/v1
kind: EgressIP
```
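
A complete sketch of the object above, with an illustrative egress IP and namespace selector:

```yaml
apiVersion: k8s.ovn.org/v1
kind: EgressIP
metadata:
  name: egressip-sample
spec:
  egressIPs:
  - 192.168.126.10   # must be reachable on the egress node's subnet
  namespaceSelector:
    matchLabels:
      env: qa        # pods in matching namespaces egress with this IP
```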
@@ -0,0 +1,16 @@
# About Logging 6.0



As a cluster administrator, you can deploy logging on a Red Hat OpenShift Container Platform cluster and use it to collect and aggregate node system audit logs, application container logs, and infrastructure logs.

You can use logging to perform the following tasks:

* Forward logs to your chosen log outputs, including on-cluster, Red Hat managed log storage.
* Visualize your log data in the Red Hat OpenShift Container Platform web console.
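
As a sketch of the log forwarding API in logging 6 (the service account, output, and LokiStack names are illustrative):

```yaml
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
  namespace: openshift-logging
spec:
  serviceAccount:
    name: collector          # must be authorized to collect the chosen inputs
  outputs:
  - name: default-lokistack
    type: lokiStack
    lokiStack:
      target:
        name: logging-loki
        namespace: openshift-logging
      authentication:
        token:
          from: serviceAccount
  pipelines:
  - name: all-logs
    inputRefs:
    - application
    - infrastructure
    outputRefs:
    - default-lokistack
```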


[NOTE]
----
Because logging releases on a different cadence from Red Hat OpenShift Container Platform, the logging 6 documentation is available as a separate documentation set at Red Hat OpenShift Logging.
----

This file was deleted.
