diff --git a/Containerfile b/Containerfile index 89351280..93d16a07 100644 --- a/Containerfile +++ b/Containerfile @@ -20,7 +20,7 @@ USER 0 WORKDIR /workdir COPY requirements.gpu.txt . -RUN pip3.11 install --no-cache-dir -r requirements.gpu.txt +RUN pip3.11 install --no-cache-dir -r requirements.gpu.txt && ln -s /usr/local/lib/python3.11/site-packages/llama_index/core/_static/nltk_cache /root/nltk_data COPY ocp-product-docs-plaintext ./ocp-product-docs-plaintext COPY runbooks ./runbooks diff --git a/ocp-product-docs-plaintext/4.15/backup_and_restore/application_backup_and_restore/backing_up_and_restoring/backing-up-applications.txt b/ocp-product-docs-plaintext/4.15/backup_and_restore/application_backup_and_restore/backing_up_and_restoring/backing-up-applications.txt index 88bb90d0..06de27d6 100644 --- a/ocp-product-docs-plaintext/4.15/backup_and_restore/application_backup_and_restore/backing_up_and_restoring/backing-up-applications.txt +++ b/ocp-product-docs-plaintext/4.15/backup_and_restore/application_backup_and_restore/backing_up_and_restoring/backing-up-applications.txt @@ -4,8 +4,11 @@ Frequent backups might consume storage on the backup storage location. Check the frequency of backups, retention time, and the amount of data of the persistent volumes (PVs) if using non-local backups, for example, S3 buckets. Because all taken backup remains until expired, also check the time to live (TTL) setting of the schedule. You can back up applications by creating a Backup custom resource (CR). For more information, see Creating a Backup CR. +The following are the different backup types for a Backup CR: * The Backup CR creates backup files for Kubernetes resources and internal images on S3 object storage. +* If you use Velero's snapshot feature to back up data stored on the persistent volume, only snapshot-related information is stored in the S3 bucket along with the OpenShift object data. * If your cloud provider has a native snapshot API or supports CSI snapshots, the Backup CR backs up persistent volumes (PVs) by creating snapshots. For more information about working with CSI snapshots, see Backing up persistent volumes with CSI snapshots. +If the underlying storage or the backup bucket is part of the same cluster, the data might be lost in the event of a disaster. For more information about CSI volume snapshots, see CSI volume snapshots. [IMPORTANT] diff --git a/ocp-product-docs-plaintext/4.15/installing/installing_bare_metal_ipi/ipi-install-prerequisites.txt b/ocp-product-docs-plaintext/4.15/installing/installing_bare_metal_ipi/ipi-install-prerequisites.txt index e731bbbc..a53b50ec 100644 --- a/ocp-product-docs-plaintext/4.15/installing/installing_bare_metal_ipi/ipi-install-prerequisites.txt +++ b/ocp-product-docs-plaintext/4.15/installing/installing_bare_metal_ipi/ipi-install-prerequisites.txt @@ -364,4 +364,49 @@ Prior to the installation of the Red Hat OpenShift Container Platform cluster, g * Control plane and worker nodes are configured. * All nodes accessible via out-of-band management. * (Optional) A separate management network has been created. -* Required data for installation. \ No newline at end of file +* Required data for installation. + +# Installation overview + +The installation program supports interactive mode. However, you can prepare an install-config.yaml file in advance that contains the provisioning details for all of the bare-metal hosts and the relevant cluster details.
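+The following is a minimal, illustrative sketch of what such an install-config.yaml file might look like for a single bare-metal host. All names, addresses, and credentials are placeholders, and the complete set of required fields, including networking and VIP settings, is described in the installation configuration documentation:
+
+```yaml
+apiVersion: v1
+baseDomain: example.com                  # placeholder base domain
+metadata:
+  name: example-cluster                  # placeholder cluster name
+platform:
+  baremetal:
+    hosts:
+    - name: openshift-master-0
+      role: master
+      bmc:
+        address: ipmi://192.0.2.10       # BMC address of the host (placeholder)
+        username: admin
+        password: <password>
+      bootMACAddress: 52:54:00:00:00:01  # MAC of the provisioning interface
+      rootDeviceHints:
+        deviceName: /dev/sda             # disk that receives the RHCOS image
+pullSecret: '<pull_secret>'
+sshKey: '<ssh_public_key>'
+```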
+ +The installation program loads the install-config.yaml file and the administrator generates the manifests and verifies all prerequisites. + +The installation program performs the following tasks: + +* Enrolls all nodes in the cluster +* Starts the bootstrap virtual machine (VM) +* Starts the metal platform components as systemd services, which have the following containers: +* Ironic-dnsmasq: The DHCP server responsible for handing over the IP addresses to the provisioning interface of various nodes on the provisioning network. Ironic-dnsmasq is only enabled when you deploy a Red Hat OpenShift Container Platform cluster with a provisioning network. +* Ironic-httpd: The HTTP server that is used to ship the images to the nodes. +* Image-customization +* Ironic +* Ironic-inspector (available in Red Hat OpenShift Container Platform 4.16 and earlier) +* Ironic-ramdisk-logs +* Extract-machine-os +* Provisioning-interface +* Metal3-baremetal-operator + +The nodes enter the validation phase, where each node moves to a manageable state after Ironic validates the credentials to access the Baseboard Management Controller (BMC). + +When the node is in the manageable state, the inspection phase starts. The inspection phase ensures that the hardware meets the minimum requirements needed for a successful deployment of Red Hat OpenShift Container Platform. + +The install-config.yaml file details the provisioning network. On the bootstrap VM, the installation program uses the Pre-Boot Execution Environment (PXE) to push a live image to every node with the Ironic Python Agent (IPA) loaded. When using virtual media, it connects directly to the BMC of each node to virtually attach the image. + +When using PXE boot, all nodes reboot to start the process: + +* The ironic-dnsmasq service running on the bootstrap VM provides the IP address of the node and the TFTP boot server. +* The first-boot software loads the root file system over HTTP. +* The ironic service on the bootstrap VM receives the hardware information from each node. + +The nodes enter the cleaning state, where each node must clean all the disks before continuing with the configuration. + +After the cleaning state finishes, the nodes enter the available state and the installation program moves the nodes to the deploying state. + +IPA runs the coreos-installer command to install the Red Hat Enterprise Linux CoreOS (RHCOS) image on the disk defined by the rootDeviceHints parameter in the install-config.yaml file. The node boots by using RHCOS. + +After the installation program configures the control plane nodes, it moves control from the bootstrap VM to the control plane nodes and deletes the bootstrap VM. + +The Bare-Metal Operator continues the deployment of the workers, storage, and infra nodes. + +After the installation completes, the nodes move to the active state. You can then proceed with postinstallation configuration and other Day 2 tasks.
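+As an illustrative check that is not part of the installation program flow described above, you can review the provisioning state of each host after deployment by listing the BareMetalHost resources, which the Bare-Metal Operator manages in the openshift-machine-api namespace, and the cluster nodes:
+
+```terminal
+$ oc get baremetalhosts -n openshift-machine-api
+$ oc get nodes
+```
+
+Successfully deployed hosts report the provisioned state, and the corresponding nodes report a Ready status.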
\ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.15/installing/installing_vsphere/installing-vsphere-agent-based-installer.txt b/ocp-product-docs-plaintext/4.15/installing/installing_vsphere/installing-vsphere-agent-based-installer.txt index 1921582c..2558ab8b 100644 --- a/ocp-product-docs-plaintext/4.15/installing/installing_vsphere/installing-vsphere-agent-based-installer.txt +++ b/ocp-product-docs-plaintext/4.15/installing/installing_vsphere/installing-vsphere-agent-based-installer.txt @@ -1,9 +1,16 @@ # Installing a cluster on vSphere using the Agent-based Installer + The Agent-based installation method provides the flexibility to boot your on-premise servers in any way that you choose. It combines the ease of use of the Assisted Installation service with the ability to run offline, including in air-gapped environments. + Agent-based installation is a subcommand of the Red Hat OpenShift Container Platform installer. It generates a bootable ISO image containing all of the information required to deploy an Red Hat OpenShift Container Platform cluster with an available release image. -# Additional resources +For more information about installing a cluster using the Agent-based Installer, see Preparing to install with the Agent-based Installer. + -* Preparing to install with the Agent-based Installer \ No newline at end of file +[IMPORTANT] +---- +Your vSphere account must include privileges for reading and creating the resources required to install a Red Hat OpenShift Container Platform cluster. +For more information about privileges, see vCenter requirements. +---- \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.15/networking/configuring_ingress_cluster_traffic/configuring-externalip.txt b/ocp-product-docs-plaintext/4.15/networking/configuring_ingress_cluster_traffic/configuring-externalip.txt index 7e057331..375eb4f0 100644 --- a/ocp-product-docs-plaintext/4.15/networking/configuring_ingress_cluster_traffic/configuring-externalip.txt +++ b/ocp-product-docs-plaintext/4.15/networking/configuring_ingress_cluster_traffic/configuring-externalip.txt @@ -61,7 +61,7 @@ Red Hat OpenShift Container Platform supports both automatic and manual IP addre To use IP address blocks defined by autoAssignCIDRs in Red Hat OpenShift Container Platform, you must configure the necessary IP address assignment and routing for your host network. ---- -The following YAML describes a service with an external IP address configured: +The following YAML shows a Service object with a configured external IP: ```yaml diff --git a/ocp-product-docs-plaintext/4.15/networking/ovn_kubernetes_network_provider/configuring-egress-ips-ovn.txt b/ocp-product-docs-plaintext/4.15/networking/ovn_kubernetes_network_provider/configuring-egress-ips-ovn.txt index 1cd77082..6b5b2f97 100644 --- a/ocp-product-docs-plaintext/4.15/networking/ovn_kubernetes_network_provider/configuring-egress-ips-ovn.txt +++ b/ocp-product-docs-plaintext/4.15/networking/ovn_kubernetes_network_provider/configuring-egress-ips-ovn.txt @@ -249,6 +249,12 @@ For the configuration in the previous example, Red Hat OpenShift Container Platf The following YAML describes the API for the EgressIP object. The scope of the object is cluster-wide; it is not created in a namespace. +[IMPORTANT] +---- +EgressIP-selected pods cannot serve as backends for services with externalTrafficPolicy set to Local.
If you try this configuration, service ingress traffic that targets the pods gets incorrectly rerouted to the egress node that hosts the EgressIP. As a result, incoming service traffic is handled incorrectly, connections drop, and the service becomes unavailable. +---- + + ```yaml apiVersion: k8s.ovn.org/v1 kind: EgressIP diff --git a/ocp-product-docs-plaintext/4.15/observability/logging/logging-6.0/log6x-about-logging.txt b/ocp-product-docs-plaintext/4.15/observability/logging/logging-6.0/log6x-about-logging.txt new file mode 100644 index 00000000..8bd7a0d9 --- /dev/null +++ b/ocp-product-docs-plaintext/4.15/observability/logging/logging-6.0/log6x-about-logging.txt @@ -0,0 +1,16 @@ +# About Logging 6.0 + + + +As a cluster administrator, you can deploy logging on a Red Hat OpenShift Container Platform cluster and use it to collect and aggregate node system audit logs, application container logs, and infrastructure logs. + +You can use logging to perform the following tasks: + +* Forward logs to your chosen log outputs, including on-cluster, Red Hat managed log storage. +* Visualize your log data in the Red Hat OpenShift Container Platform web console. + + +[NOTE] +---- +Because logging releases on a different cadence from Red Hat OpenShift Container Platform, the logging 6 documentation is available as a separate documentation set at Red Hat OpenShift Logging. +---- \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.15/observability/logging/logging-6.0/log6x-about.txt b/ocp-product-docs-plaintext/4.15/observability/logging/logging-6.0/log6x-about.txt deleted file mode 100644 index 01fad372..00000000 --- a/ocp-product-docs-plaintext/4.15/observability/logging/logging-6.0/log6x-about.txt +++ /dev/null @@ -1,153 +0,0 @@ -# Logging 6.0 - - -The ClusterLogForwarder custom resource (CR) is the central configuration point for log collection and forwarding. - -# Inputs and Outputs - -Inputs specify the sources of logs to be forwarded. Logging provides built-in input types: application, infrastructure, and audit, which select logs from different parts of your cluster. You can also define custom inputs based on namespaces or pod labels to fine-tune log selection. - -Outputs define the destinations where logs are sent. Each output type has its own set of configuration options, allowing you to customize the behavior and authentication settings. - -# Receiver Input Type - -The receiver input type enables the Logging system to accept logs from external sources. It supports two formats for receiving logs: http and syslog. - -The ReceiverSpec defines the configuration for a receiver input. - -# Pipelines and Filters - -Pipelines determine the flow of logs from inputs to outputs. A pipeline consists of one or more input refs, output refs, and optional filter refs. Filters can be used to transform or drop log messages within a pipeline. The order of filters matters, as they are applied sequentially, and earlier filters can prevent log messages from reaching later stages. - -# Operator Behavior - -The Cluster Logging Operator manages the deployment and configuration of the collector based on the managementState field: - -* When set to Managed (default), the operator actively manages the logging resources to match the configuration defined in the spec. -* When set to Unmanaged, the operator does not take any action, allowing you to manually manage the logging components.
- -# Validation - -Logging includes extensive validation rules and default values to ensure a smooth and error-free configuration experience. The ClusterLogForwarder resource enforces validation checks on required fields, dependencies between fields, and the format of input values. Default values are provided for certain fields, reducing the need for explicit configuration in common scenarios. - -# Quick Start - -* You have access to an Red Hat OpenShift Container Platform cluster with cluster-admin permissions. -* You installed the OpenShift CLI (`oc`). -* You have access to a supported object store. For example, AWS S3, Google Cloud Storage, Azure, Swift, Minio, or OpenShift Data Foundation. - -1. Install the Red Hat OpenShift Logging Operator, Loki Operator, and Cluster Observability Operator (COO) from OperatorHub. -2. Create a secret to access an existing object storage bucket: -Example command for AWS - -```terminal -$ oc create secret generic logging-loki-s3 \ - --from-literal=bucketnames="" \ - --from-literal=endpoint="" \ - --from-literal=access_key_id="" \ - --from-literal=access_key_secret="" \ - --from-literal=region="" \ - -n openshift-logging -``` - -3. Create a LokiStack custom resource (CR) in the openshift-logging namespace: - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: - managementState: Managed - size: 1x.extra-small - storage: - schemas: - - effectiveDate: '2022-06-01' - version: v13 - secret: - name: logging-loki-s3 - type: s3 - storageClassName: gp3-csi - tenants: - mode: openshift-logging -``` - -4. Create a service account for the collector: - -```shell -$ oc create sa collector -n openshift-logging -``` - -5. Bind the ClusterRole to the service account: - -```shell -$ oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector -n openshift-logging -``` - -6. Create a UIPlugin to enable the Log section in the Observe tab: - -```yaml -apiVersion: observability.openshift.io/v1alpha1 -kind: UIPlugin -metadata: - name: logging -spec: - type: Logging - logging: - lokiStack: - name: logging-loki -``` - -7. Add additional roles to the collector service account: - -```shell -$ oc adm policy add-cluster-role-to-user collect-application-logs -z collector -n openshift-logging -``` - - -```terminal -$ oc adm policy add-cluster-role-to-user collect-audit-logs -z collector -n openshift-logging -``` - - -```terminal -$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector -n openshift-logging -``` - -8. Create a ClusterLogForwarder CR to configure log forwarding: - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - name: collector - namespace: openshift-logging -spec: - serviceAccount: - name: collector - outputs: - - name: default-lokistack - type: lokiStack - lokiStack: - target: - name: logging-loki - namespace: openshift-logging - authentication: - token: - from: serviceAccount - tls: - ca: - key: service-ca.crt - configMapName: openshift-service-ca.crt - pipelines: - - name: default-logstore - inputRefs: - - application - - infrastructure - outputRefs: - - default-lokistack -``` - - -* Verify that logs are visible in the Log section of the Observe tab in the Red Hat OpenShift Container Platform web console. 
\ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.15/observability/logging/logging-6.0/log6x-clf.txt b/ocp-product-docs-plaintext/4.15/observability/logging/logging-6.0/log6x-clf.txt deleted file mode 100644 index b12d8ea7..00000000 --- a/ocp-product-docs-plaintext/4.15/observability/logging/logging-6.0/log6x-clf.txt +++ /dev/null @@ -1,765 +0,0 @@ -# Configuring log forwarding - - -The ClusterLogForwarder (CLF) allows users to configure forwarding of logs to various destinations. It provides a flexible way to select log messages from different sources, send them through a pipeline that can transform or filter them, and forward them to one or more outputs. -* Selects log messages using inputs -* Forwards logs to external destinations using outputs -* Filters, transforms, and drops log messages using filters -* Defines log forwarding pipelines connecting inputs, filters and outputs - -# Setting up log collection - -This release of Cluster Logging requires administrators to explicitly grant log collection permissions to the service account associated with ClusterLogForwarder. This was not required in previous releases for the legacy logging scenario consisting of a ClusterLogging and, optionally, a ClusterLogForwarder.logging.openshift.io resource. - -The Red Hat OpenShift Logging Operator provides collect-audit-logs, collect-application-logs, and collect-infrastructure-logs cluster roles, which enable the collector to collect audit logs, application logs, and infrastructure logs respectively. - -Setup log collection by binding the required cluster roles to your service account. - -## Legacy service accounts - -To use the existing legacy service account logcollector, create the following ClusterRoleBinding: - - -```terminal -$ oc adm policy add-cluster-role-to-user collect-application-logs system:serviceaccount:openshift-logging:logcollector -``` - - - -```terminal -$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs system:serviceaccount:openshift-logging:logcollector -``` - - -Additionally, create the following ClusterRoleBinding if collecting audit logs: - - -```terminal -$ oc adm policy add-cluster-role-to-user collect-audit-logs system:serviceaccount:openshift-logging:logcollector -``` - - -## Creating service accounts - -* The Red Hat OpenShift Logging Operator is installed in the openshift-logging namespace. -* You have administrator permissions. - -1. Create a service account for the collector. If you want to write logs to storage that requires a token for authentication, you must include a token in the service account. -2. Bind the appropriate cluster roles to the service account: -Example binding command - -```terminal -$ oc adm policy add-cluster-role-to-user system:serviceaccount:: -``` - - -### Cluster Role Binding for your Service Account - -The role_binding.yaml file binds the ClusterLogging operator’s ClusterRole to a specific ServiceAccount, allowing it to manage Kubernetes resources cluster-wide. - - -```yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRoleBinding -metadata: - name: manager-rolebinding -roleRef: 1 - apiGroup: rbac.authorization.k8s.io 2 - kind: ClusterRole 3 - name: cluster-logging-operator 4 -subjects: 5 - - kind: ServiceAccount 6 - name: cluster-logging-operator 7 - namespace: openshift-logging 8 -``` - - -roleRef: References the ClusterRole to which the binding applies. -apiGroup: Indicates the RBAC API group, specifying that the ClusterRole is part of Kubernetes' RBAC system. 
-kind: Specifies that the referenced role is a ClusterRole, which applies cluster-wide. -name: The name of the ClusterRole being bound to the ServiceAccount, here cluster-logging-operator. -subjects: Defines the entities (users or service accounts) that are being granted the permissions from the ClusterRole. -kind: Specifies that the subject is a ServiceAccount. -Name: The name of the ServiceAccount being granted the permissions. -namespace: Indicates the namespace where the ServiceAccount is located. - -### Writing application logs - -The write-application-logs-clusterrole.yaml file defines a ClusterRole that grants permissions to write application logs to the Loki logging application. - - -```yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole -metadata: - name: cluster-logging-write-application-logs -rules: 1 - - apiGroups: 2 - - loki.grafana.com 3 - resources: 4 - - application 5 - resourceNames: 6 - - logs 7 - verbs: 8 - - create 9 -Annotations -<1> rules: Specifies the permissions granted by this ClusterRole. -<2> apiGroups: Refers to the API group loki.grafana.com, which relates to the Loki logging system. -<3> loki.grafana.com: The API group for managing Loki-related resources. -<4> resources: The resource type that the ClusterRole grants permission to interact with. -<5> application: Refers to the application resources within the Loki logging system. -<6> resourceNames: Specifies the names of resources that this role can manage. -<7> logs: Refers to the log resources that can be created. -<8> verbs: The actions allowed on the resources. -<9> create: Grants permission to create new logs in the Loki system. -``` - - -### Writing audit logs - -The write-audit-logs-clusterrole.yaml file defines a ClusterRole that grants permissions to create audit logs in the Loki logging system. - - -```yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole -metadata: - name: cluster-logging-write-audit-logs -rules: 1 - - apiGroups: 2 - - loki.grafana.com 3 - resources: 4 - - audit 5 - resourceNames: 6 - - logs 7 - verbs: 8 - - create 9 -``` - - -rules: Defines the permissions granted by this ClusterRole. -apiGroups: Specifies the API group loki.grafana.com. -loki.grafana.com: The API group responsible for Loki logging resources. -resources: Refers to the resource type this role manages, in this case, audit. -audit: Specifies that the role manages audit logs within Loki. -resourceNames: Defines the specific resources that the role can access. -logs: Refers to the logs that can be managed under this role. -verbs: The actions allowed on the resources. -create: Grants permission to create new audit logs. - -### Writing infrastructure logs - -The write-infrastructure-logs-clusterrole.yaml file defines a ClusterRole that grants permission to create infrastructure logs in the Loki logging system. - - -```yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole -metadata: - name: cluster-logging-write-infrastructure-logs -rules: 1 - - apiGroups: 2 - - loki.grafana.com 3 - resources: 4 - - infrastructure 5 - resourceNames: 6 - - logs 7 - verbs: 8 - - create 9 -``` - - -rules: Specifies the permissions this ClusterRole grants. -apiGroups: Specifies the API group for Loki-related resources. -loki.grafana.com: The API group managing the Loki logging system. -resources: Defines the resource type that this role can interact with. -infrastructure: Refers to infrastructure-related resources that this role manages. 
-resourceNames: Specifies the names of resources this role can manage. -logs: Refers to the log resources related to infrastructure. -verbs: The actions permitted by this role. -create: Grants permission to create infrastructure logs in the Loki system. - -### ClusterLogForwarder editor role - -The clusterlogforwarder-editor-role.yaml file defines a ClusterRole that allows users to manage ClusterLogForwarders in OpenShift. - - -```yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole -metadata: - name: clusterlogforwarder-editor-role -rules: 1 - - apiGroups: 2 - - observability.openshift.io 3 - resources: 4 - - clusterlogforwarders 5 - verbs: 6 - - create 7 - - delete 8 - - get 9 - - list 10 - - patch 11 - - update 12 - - watch 13 -``` - - -rules: Specifies the permissions this ClusterRole grants. -apiGroups: Refers to the OpenShift-specific API group -obervability.openshift.io: The API group for managing observability resources, like logging. -resources: Specifies the resources this role can manage. -clusterlogforwarders: Refers to the log forwarding resources in OpenShift. -verbs: Specifies the actions allowed on the ClusterLogForwarders. -create: Grants permission to create new ClusterLogForwarders. -delete: Grants permission to delete existing ClusterLogForwarders. -get: Grants permission to retrieve information about specific ClusterLogForwarders. -list: Allows listing all ClusterLogForwarders. -patch: Grants permission to partially modify ClusterLogForwarders. -update: Grants permission to update existing ClusterLogForwarders. -watch: Grants permission to monitor changes to ClusterLogForwarders. - -# Modifying log level in collector - -To modify the log level in the collector, you can set the observability.openshift.io/log-level annotation to trace, debug, info, warn, error, and off. - - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - name: collector - annotations: - observability.openshift.io/log-level: debug -# ... -``` - - -# Managing the Operator - -The ClusterLogForwarder resource has a managementState field that controls whether the operator actively manages its resources or leaves them Unmanaged: - -Managed:: (default) The operator will drive the logging resources to match the desired state in the CLF spec. -Unmanaged:: The operator will not take any action related to the logging components. - -This allows administrators to temporarily pause log forwarding by setting managementState to Unmanaged. - -# Structure of the ClusterLogForwarder - -The CLF has a spec section that contains the following key components: - -Inputs:: Select log messages to be forwarded. Built-in input types application, infrastructure and audit forward logs from different parts of the cluster. You can also define custom inputs. -Outputs:: Define destinations to forward logs to. Each output has a unique name and type-specific configuration. -Pipelines:: Define the path logs take from inputs, through filters, to outputs. Pipelines have a unique name and consist of a list of input, output and filter names. -Filters:: Transform or drop log messages in the pipeline. Users can define filters that match certain log fields and drop or modify the messages. Filters are applied in the order specified in the pipeline. - -## Inputs - -Inputs are configured in an array under spec.inputs. There are three built-in input types: - -application:: Selects logs from all application containers, excluding those in infrastructure namespaces. 
-infrastructure:: Selects logs from nodes and from infrastructure components running in the following namespaces: -* default -* kube -* openshift -* Containing the kube- or openshift- prefix -audit:: Selects logs from the OpenShift API server audit logs, Kubernetes API server audit logs, ovn audit logs, and node audit logs from auditd. - -Users can define custom inputs of type application that select logs from specific namespaces or using pod labels. - -## Outputs - -Outputs are configured in an array under spec.outputs. Each output must have a unique name and a type. Supported types are: - -azureMonitor:: Forwards logs to Azure Monitor. -cloudwatch:: Forwards logs to AWS CloudWatch. -elasticsearch:: Forwards logs to an external Elasticsearch instance. -googleCloudLogging:: Forwards logs to Google Cloud Logging. -http:: Forwards logs to a generic HTTP endpoint. -kafka:: Forwards logs to a Kafka broker. -loki:: Forwards logs to a Loki logging backend. -lokistack:: Forwards logs to the logging supported combination of Loki and web proxy with Red Hat OpenShift Container Platform authentication integration. LokiStack's proxy uses Red Hat OpenShift Container Platform authentication to enforce multi-tenancy -otlp:: Forwards logs using the OpenTelemetry Protocol. -splunk:: Forwards logs to Splunk. -syslog:: Forwards logs to an external syslog server. - -Each output type has its own configuration fields. - -## Pipelines - -Pipelines are configured in an array under spec.pipelines. Each pipeline must have a unique name and consists of: - -inputRefs:: Names of inputs whose logs should be forwarded to this pipeline. -outputRefs:: Names of outputs to send logs to. -filterRefs:: (optional) Names of filters to apply. - -The order of filterRefs matters, as they are applied sequentially. Earlier filters can drop messages that will not be processed by later filters. - -## Filters - -Filters are configured in an array under spec.filters. They can match incoming log messages based on the value of structured fields and modify or drop them. - -Administrators can configure the following types of filters: - -## Enabling multi-line exception detection - -Enables multi-line error detection of container logs. - - -[WARNING] ----- -Enabling this feature could have performance implications and may require additional computing resources or alternate logging solutions. ----- - -Log parsers often incorrectly identify separate lines of the same exception as separate exceptions. This leads to extra log entries and an incomplete or inaccurate view of the traced information. - - -```java -java.lang.NullPointerException: Cannot invoke "String.toString()" because "" is null - at testjava.Main.handle(Main.java:47) - at testjava.Main.printMe(Main.java:19) - at testjava.Main.main(Main.java:10) -``` - - -* To enable logging to detect multi-line exceptions and reassemble them into a single log entry, ensure that the ClusterLogForwarder Custom Resource (CR) contains a detectMultilineErrors field under the .spec.filters. - - -```yaml -apiVersion: "observability.openshift.io/v1" -kind: ClusterLogForwarder -metadata: - name: - namespace: -spec: - serviceAccount: - name: - filters: - - name: - type: detectMultilineException - pipelines: - - inputRefs: - - - name: - filterRefs: - - - outputRefs: - - -``` - - -### Details - -When log messages appear as a consecutive sequence forming an exception stack trace, they are combined into a single, unified log record. 
The first log message’s content is replaced with the concatenated content of all the message fields in the sequence. - -The collector supports the following languages: - -* Java -* JS -* Ruby -* Python -* Golang -* PHP -* Dart - -## Configuring content filters to drop unwanted log records - -When the drop filter is configured, the log collector evaluates log streams according to the filters before forwarding. The collector drops unwanted log records that match the specified configuration. - -1. Add a configuration for a filter to the filters spec in the ClusterLogForwarder CR. - -The following example shows how to configure the ClusterLogForwarder CR to drop log records based on regular expressions: -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: -# ... -spec: - serviceAccount: - name: - filters: - - name: - type: drop 1 - drop: 2 - - test: 3 - - field: .kubernetes.labels."foo-bar/baz" 4 - matches: .+ 5 - - field: .kubernetes.pod_name - notMatches: "my-pod" 6 - pipelines: - - name: 7 - filterRefs: [""] -# ... -``` - -Specifies the type of filter. The drop filter drops log records that match the filter configuration. -Specifies configuration options for applying the drop filter. -Specifies the configuration for tests that are used to evaluate whether a log record is dropped. -* If all the conditions specified for a test are true, the test passes and the log record is dropped. -* When multiple tests are specified for the drop filter configuration, if any of the tests pass, the record is dropped. -* If there is an error evaluating a condition, for example, the field is missing from the log record being evaluated, that condition evaluates to false. -Specifies a dot-delimited field path, which is a path to a field in the log record. The path can contain alpha-numeric characters and underscores (a-zA-Z0-9_), for example, .kubernetes.namespace_name. If segments contain characters outside of this range, the segment must be in quotes, for example, .kubernetes.labels."foo.bar-bar/baz". You can include multiple field paths in a single test configuration, but they must all evaluate to true for the test to pass and the drop filter to be applied. -Specifies a regular expression. If log records match this regular expression, they are dropped. You can set either the matches or notMatches condition for a single field path, but not both. -Specifies a regular expression. If log records do not match this regular expression, they are dropped. You can set either the matches or notMatches condition for a single field path, but not both. -Specifies the pipeline that the drop filter is applied to. -2. Apply the ClusterLogForwarder CR by running the following command: - -```terminal -$ oc apply -f .yaml -``` - - -The following additional example shows how you can configure the drop filter to only keep higher priority log records: - - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: -# ... -spec: - serviceAccount: - name: - filters: - - name: important - type: drop - drop: - - test: - - field: .message - notMatches: "(?i)critical|error" - - field: .level - matches: "info|warning" -# ... -``` - - -In addition to including multiple field paths in a single test configuration, you can also include additional tests that are treated as OR checks. In the following example, records are dropped if either test configuration evaluates to true. 
However, for the second test configuration, both field specs must be true for it to be evaluated to true: - - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: -# ... -spec: - serviceAccount: - name: - filters: - - name: important - type: drop - drop: - - test: - - field: .kubernetes.namespace_name - matches: "^open" - - test: - - field: .log_type - matches: "application" - - field: .kubernetes.pod_name - notMatches: "my-pod" -# ... -``` - - -## Overview of API audit filter - -OpenShift API servers generate audit events for each API call, detailing the request, response, and the identity of the requester, leading to large volumes of data. The API Audit filter uses rules to enable the exclusion of non-essential events and the reduction of event size, facilitating a more manageable audit trail. Rules are checked in order, and checking stops at the first match. The amount of data that is included in an event is determined by the value of the level field: - -* None: The event is dropped. -* Metadata: Audit metadata is included, request and response bodies are removed. -* Request: Audit metadata and the request body are included, the response body is removed. -* RequestResponse: All data is included: metadata, request body and response body. The response body can be very large. For example, oc get pods -A generates a response body containing the YAML description of every pod in the cluster. - -The ClusterLogForwarder custom resource (CR) uses the same format as the standard Kubernetes audit policy, while providing the following additional functions: - -Wildcards:: Names of users, groups, namespaces, and resources can have a leading or trailing * asterisk character. For example, the namespace openshift-\* matches openshift-apiserver or openshift-authentication. Resource \*/status matches Pod/status or Deployment/status. -Default Rules:: Events that do not match any rule in the policy are filtered as follows: -* Read-only system events such as get, list, and watch are dropped. -* Service account write events that occur within the same namespace as the service account are dropped. -* All other events are forwarded, subject to any configured rate limits. - -To disable these defaults, either end your rules list with a rule that has only a level field or add an empty rule. - -Omit Response Codes:: A list of integer status codes to omit. You can drop events based on the HTTP status code in the response by using the OmitResponseCodes field, which lists HTTP status codes for which no events are created. The default value is [404, 409, 422, 429]. If the value is an empty list, [], then no status codes are omitted. - -The ClusterLogForwarder CR audit policy acts in addition to the Red Hat OpenShift Container Platform audit policy. The ClusterLogForwarder CR audit filter changes what the log collector forwards and provides the ability to filter by verb, user, group, namespace, or resource. You can create multiple filters to send different summaries of the same audit stream to different places. For example, you can send a detailed stream to the local cluster log store and a less detailed stream to a remote site. - - -[NOTE] ----- -You must have a cluster role collect-audit-logs to collect the audit logs. The following example provided is intended to illustrate the range of rules possible in an audit policy and is not a recommended configuration. 
----- - - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - name: - namespace: -spec: - serviceAccount: - name: - pipelines: - - name: my-pipeline - inputRefs: audit 1 - filterRefs: my-policy 2 - filters: - - name: my-policy - type: kubeAPIAudit - kubeAPIAudit: - # Don't generate audit events for all requests in RequestReceived stage. - omitStages: - - "RequestReceived" - - rules: - # Log pod changes at RequestResponse level - - level: RequestResponse - resources: - - group: "" - resources: ["pods"] - - # Log "pods/log", "pods/status" at Metadata level - - level: Metadata - resources: - - group: "" - resources: ["pods/log", "pods/status"] - - # Don't log requests to a configmap called "controller-leader" - - level: None - resources: - - group: "" - resources: ["configmaps"] - resourceNames: ["controller-leader"] - - # Don't log watch requests by the "system:kube-proxy" on endpoints or services - - level: None - users: ["system:kube-proxy"] - verbs: ["watch"] - resources: - - group: "" # core API group - resources: ["endpoints", "services"] - - # Don't log authenticated requests to certain non-resource URL paths. - - level: None - userGroups: ["system:authenticated"] - nonResourceURLs: - - "/api*" # Wildcard matching. - - "/version" - - # Log the request body of configmap changes in kube-system. - - level: Request - resources: - - group: "" # core API group - resources: ["configmaps"] - # This rule only applies to resources in the "kube-system" namespace. - # The empty string "" can be used to select non-namespaced resources. - namespaces: ["kube-system"] - - # Log configmap and secret changes in all other namespaces at the Metadata level. - - level: Metadata - resources: - - group: "" # core API group - resources: ["secrets", "configmaps"] - - # Log all other resources in core and extensions at the Request level. - - level: Request - resources: - - group: "" # core API group - - group: "extensions" # Version of group should NOT be included. - - # A catch-all rule to log all other requests at the Metadata level. - - level: Metadata -``` - - -The log types that are collected. The value for this field can be audit for audit logs, application for application logs, infrastructure for infrastructure logs, or a named input that has been defined for your application. -The name of your audit policy. - -## Filtering application logs at input by including the label expressions or a matching label key and values - -You can include the application logs based on the label expressions or a matching label key and its values by using the input selector. - -1. Add a configuration for a filter to the input spec in the ClusterLogForwarder CR. - -The following example shows how to configure the ClusterLogForwarder CR to include logs based on label expressions or matched label key/values: -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -# ... -spec: - serviceAccount: - name: - inputs: - - name: mylogs - application: - selector: - matchExpressions: - - key: env 1 - operator: In 2 - values: ["prod", "qa"] 3 - - key: zone - operator: NotIn - values: ["east", "west"] - matchLabels: 4 - app: one - name: app1 - type: application -# ... -``` - -Specifies the label key to match. -Specifies the operator. Valid values include: In, NotIn, Exists, and DoesNotExist. -Specifies an array of string values. If the operator value is either Exists or DoesNotExist, the value array must be empty. 
-Specifies an exact key or value mapping. -2. Apply the ClusterLogForwarder CR by running the following command: - -```terminal -$ oc apply -f .yaml -``` - - -## Configuring content filters to prune log records - -When the prune filter is configured, the log collector evaluates log streams according to the filters before forwarding. The collector prunes log records by removing low value fields such as pod annotations. - -1. Add a configuration for a filter to the prune spec in the ClusterLogForwarder CR. - -The following example shows how to configure the ClusterLogForwarder CR to prune log records based on field paths: - -[IMPORTANT] ----- -If both are specified, records are pruned based on the notIn array first, which takes precedence over the in array. After records have been pruned by using the notIn array, they are then pruned by using the in array. ----- -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: -# ... -spec: - serviceAccount: - name: - filters: - - name: - type: prune 1 - prune: 2 - in: [.kubernetes.annotations, .kubernetes.namespace_id] 3 - notIn: [.kubernetes,.log_type,.message,."@timestamp"] 4 - pipelines: - - name: 5 - filterRefs: [""] -# ... -``` - -Specify the type of filter. The prune filter prunes log records by configured fields. -Specify configuration options for applying the prune filter. The in and notIn fields are specified as arrays of dot-delimited field paths, which are paths to fields in log records. These paths can contain alpha-numeric characters and underscores (a-zA-Z0-9_), for example, .kubernetes.namespace_name. If segments contain characters outside of this range, the segment must be in quotes, for example, .kubernetes.labels."foo.bar-bar/baz". -Optional: Any fields that are specified in this array are removed from the log record. -Optional: Any fields that are not specified in this array are removed from the log record. -Specify the pipeline that the prune filter is applied to. - -[NOTE] ----- -The filters exempts the log_type, .log_source, and .message fields. ----- -2. Apply the ClusterLogForwarder CR by running the following command: - -```terminal -$ oc apply -f .yaml -``` - - -# Filtering the audit and infrastructure log inputs by source - -You can define the list of audit and infrastructure sources to collect the logs by using the input selector. - -1. Add a configuration to define the audit and infrastructure sources in the ClusterLogForwarder CR. - -The following example shows how to configure the ClusterLogForwarder CR to define audit and infrastructure sources: -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -# ... -spec: - serviceAccount: - name: - inputs: - - name: mylogs1 - type: infrastructure - infrastructure: - sources: 1 - - node - - name: mylogs2 - type: audit - audit: - sources: 2 - - kubeAPI - - openshiftAPI - - ovn -# ... -``` - -Specifies the list of infrastructure sources to collect. The valid sources include: -* node: Journal log from the node -* container: Logs from the workloads deployed in the namespaces -Specifies the list of audit sources to collect. The valid sources include: -* kubeAPI: Logs from the Kubernetes API servers -* openshiftAPI: Logs from the OpenShift API servers -* auditd: Logs from a node auditd service -* ovn: Logs from an open virtual network service -2. 
Apply the ClusterLogForwarder CR by running the following command: - -```terminal -$ oc apply -f .yaml -``` - - -# Filtering application logs at input by including or excluding the namespace or container name - -You can include or exclude the application logs based on the namespace and container name by using the input selector. - -1. Add a configuration to include or exclude the namespace and container names in the ClusterLogForwarder CR. - -The following example shows how to configure the ClusterLogForwarder CR to include or exclude namespaces and container names: -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -# ... -spec: - serviceAccount: - name: - inputs: - - name: mylogs - application: - includes: - - namespace: "my-project" 1 - container: "my-container" 2 - excludes: - - container: "other-container*" 3 - namespace: "other-namespace" 4 - type: application -# ... -``` - -Specifies that the logs are only collected from these namespaces. -Specifies that the logs are only collected from these containers. -Specifies the pattern of namespaces to ignore when collecting the logs. -Specifies the set of containers to ignore when collecting the logs. - -[NOTE] ----- -The excludes field takes precedence over the includes field. ----- -2. Apply the ClusterLogForwarder CR by running the following command: - -```terminal -$ oc apply -f .yaml -``` diff --git a/ocp-product-docs-plaintext/4.15/observability/logging/logging-6.0/log6x-loki.txt b/ocp-product-docs-plaintext/4.15/observability/logging/logging-6.0/log6x-loki.txt deleted file mode 100644 index f35c7e2a..00000000 --- a/ocp-product-docs-plaintext/4.15/observability/logging/logging-6.0/log6x-loki.txt +++ /dev/null @@ -1,770 +0,0 @@ -# Storing logs with LokiStack - - -You can configure a LokiStack CR to store application, audit, and infrastructure-related logs. -Loki is a horizontally scalable, highly available, multi-tenant log aggregation system offered as a GA log store for logging for Red Hat OpenShift that can be visualized with the OpenShift Observability UI. The Loki configuration provided by OpenShift Logging is a short-term log store designed to enable users to perform fast troubleshooting with the collected logs. For that purpose, the logging for Red Hat OpenShift configuration of Loki has short-term storage, and is optimized for very recent queries. - -[IMPORTANT] ----- -For long-term storage or queries over a long time period, users should look to log stores external to their cluster. Loki sizing is only tested and supported for short term storage, for a maximum of 30 days. ----- - -# Prerequisites - -* You have installed the Loki Operator by using the CLI or web console. -* You have a serviceAccount in the same namespace in which you create the ClusterLogForwarder. -* The serviceAccount is assigned collect-audit-logs, collect-application-logs, and collect-infrastructure-logs cluster roles. - -# Core Setup and Configuration - -Role-based access controls, basic monitoring, and pod placement to deploy Loki. - -# Loki deployment sizing - -Sizing for Loki follows the format of 1x. where the value 1x is number of instances and specifies performance capabilities. - -The 1x.pico configuration defines a single Loki deployment with minimal resource and limit requirements, offering high availability (HA) support for all Loki components. This configuration is suited for deployments that do not require a single replication factor or auto-compaction. 
- -Disk requests are similar across size configurations, allowing customers to test different sizes to determine the best fit for their deployment needs. - - -[IMPORTANT] ----- -It is not possible to change the number 1x for the deployment size. ----- - - - -# Authorizing LokiStack rules RBAC permissions - -Administrators can allow users to create and manage their own alerting and recording rules by binding cluster roles to usernames. -Cluster roles are defined as ClusterRole objects that contain necessary role-based access control (RBAC) permissions for users. - -The following cluster roles for alerting and recording rules are available for LokiStack: - - - -## Examples - -To apply cluster roles for a user, you must bind an existing cluster role to a specific username. - -Cluster roles can be cluster or namespace scoped, depending on which type of role binding you use. -When a RoleBinding object is used, as when using the oc adm policy add-role-to-user command, the cluster role only applies to the specified namespace. -When a ClusterRoleBinding object is used, as when using the oc adm policy add-cluster-role-to-user command, the cluster role applies to all namespaces in the cluster. - -The following example command gives the specified user create, read, update and delete (CRUD) permissions for alerting rules in a specific namespace in the cluster: - - -```terminal -$ oc adm policy add-role-to-user alertingrules.loki.grafana.com-v1-admin -n -``` - - -The following command gives the specified user administrator permissions for alerting rules in all namespaces: - - -```terminal -$ oc adm policy add-cluster-role-to-user alertingrules.loki.grafana.com-v1-admin -``` - - -# Creating a log-based alerting rule with Loki - -The AlertingRule CR contains a set of specifications and webhook validation definitions to declare groups of alerting rules for a single LokiStack instance. In addition, the webhook validation definition provides support for rule validation conditions: - -* If an AlertingRule CR includes an invalid interval period, it is an invalid alerting rule -* If an AlertingRule CR includes an invalid for period, it is an invalid alerting rule. -* If an AlertingRule CR includes an invalid LogQL expr, it is an invalid alerting rule. -* If an AlertingRule CR includes two groups with the same name, it is an invalid alerting rule. -* If none of the above applies, an alerting rule is considered valid. - - - -1. Create an AlertingRule custom resource (CR): -Example infrastructure AlertingRule CR - -```yaml - apiVersion: loki.grafana.com/v1 - kind: AlertingRule - metadata: - name: loki-operator-alerts - namespace: openshift-operators-redhat 1 - labels: 2 - openshift.io/: "true" - spec: - tenantID: "infrastructure" 3 - groups: - - name: LokiOperatorHighReconciliationError - rules: - - alert: HighPercentageError - expr: | 4 - sum(rate({kubernetes_namespace_name="openshift-operators-redhat", kubernetes_pod_name=~"loki-operator-controller-manager.*"} |= "error" [1m])) by (job) - / - sum(rate({kubernetes_namespace_name="openshift-operators-redhat", kubernetes_pod_name=~"loki-operator-controller-manager.*"}[1m])) by (job) - > 0.01 - for: 10s - labels: - severity: critical 5 - annotations: - summary: High Loki Operator Reconciliation Errors 6 - description: High Loki Operator Reconciliation Errors 7 -``` - -The namespace where this AlertingRule CR is created must have a label matching the LokiStack spec.rules.namespaceSelector definition. 
-The labels block must match the LokiStack spec.rules.selector definition. -AlertingRule CRs for infrastructure tenants are only supported in the openshift-*, kube-\*, or default namespaces. -The value for kubernetes_namespace_name: must match the value for metadata.namespace. -The value of this mandatory field must be critical, warning, or info. -This field is mandatory. -This field is mandatory. -Example application AlertingRule CR - -```yaml - apiVersion: loki.grafana.com/v1 - kind: AlertingRule - metadata: - name: app-user-workload - namespace: app-ns 1 - labels: 2 - openshift.io/: "true" - spec: - tenantID: "application" - groups: - - name: AppUserWorkloadHighError - rules: - - alert: - expr: | 3 - sum(rate({kubernetes_namespace_name="app-ns", kubernetes_pod_name=~"podName.*"} |= "error" [1m])) by (job) - for: 10s - labels: - severity: critical 4 - annotations: - summary: 5 - description: 6 -``` - -The namespace where this AlertingRule CR is created must have a label matching the LokiStack spec.rules.namespaceSelector definition. -The labels block must match the LokiStack spec.rules.selector definition. -Value for kubernetes_namespace_name: must match the value for metadata.namespace. -The value of this mandatory field must be critical, warning, or info. -The value of this mandatory field is a summary of the rule. -The value of this mandatory field is a detailed description of the rule. -2. Apply the AlertingRule CR: - -```terminal -$ oc apply -f .yaml -``` - - -# Configuring Loki to tolerate memberlist creation failure - -In an Red Hat OpenShift Container Platform cluster, administrators generally use a non-private IP network range. As a result, the LokiStack memberlist configuration fails because, by default, it only uses private IP networks. - -As an administrator, you can select the pod network for the memberlist configuration. You can modify the LokiStack custom resource (CR) to use the podIP address in the hashRing spec. To configure the LokiStack CR, use the following command: - - -```terminal -$ oc patch LokiStack logging-loki -n openshift-logging --type=merge -p '{"spec": {"hashRing":{"memberlist":{"instanceAddrType":"podIP"},"type":"memberlist"}}}' -``` - - - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: -# ... - hashRing: - type: memberlist - memberlist: - instanceAddrType: podIP -# ... -``` - - -# Enabling stream-based retention with Loki - -You can configure retention policies based on log streams. Rules for these may be set globally, per-tenant, or both. If you configure both, tenant rules apply before global rules. - - -[IMPORTANT] ----- -If there is no retention period defined on the s3 bucket or in the LokiStack custom resource (CR), then the logs are not pruned and they stay in the s3 bucket forever, which might fill up the s3 storage. ----- - - -[NOTE] ----- -Schema v13 is recommended. ----- - -1. 
Create a LokiStack CR: -* Enable stream-based retention globally as shown in the following example: -Example global stream-based retention for AWS - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: - limits: - global: 1 - retention: 2 - days: 20 - streams: - - days: 4 - priority: 1 - selector: '{kubernetes_namespace_name=~"test.+"}' 3 - - days: 1 - priority: 1 - selector: '{log_type="infrastructure"}' - managementState: Managed - replicationFactor: 1 - size: 1x.small - storage: - schemas: - - effectiveDate: "2020-10-11" - version: v13 - secret: - name: logging-loki-s3 - type: aws - storageClassName: gp3-csi - tenants: - mode: openshift-logging -``` - -Sets retention policy for all log streams. Note: This field does not impact the retention period for stored logs in object storage. -Retention is enabled in the cluster when this block is added to the CR. -Contains the LogQL query used to define the log stream.spec: -limits: -* Enable stream-based retention per-tenant basis as shown in the following example: -Example per-tenant stream-based retention for AWS - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: - limits: - global: - retention: - days: 20 - tenants: 1 - application: - retention: - days: 1 - streams: - - days: 4 - selector: '{kubernetes_namespace_name=~"test.+"}' 2 - infrastructure: - retention: - days: 5 - streams: - - days: 1 - selector: '{kubernetes_namespace_name=~"openshift-cluster.+"}' - managementState: Managed - replicationFactor: 1 - size: 1x.small - storage: - schemas: - - effectiveDate: "2020-10-11" - version: v13 - secret: - name: logging-loki-s3 - type: aws - storageClassName: gp3-csi - tenants: - mode: openshift-logging -``` - -Sets retention policy by tenant. Valid tenant types are application, audit, and infrastructure. -Contains the LogQL query used to define the log stream. -2. Apply the LokiStack CR: - -```terminal -$ oc apply -f .yaml -``` - - -# Loki pod placement - -You can control which nodes the Loki pods run on, and prevent other workloads from using those nodes, by using tolerations or node selectors on the pods. - -You can apply tolerations to the log store pods with the LokiStack custom resource (CR) and apply taints to a node with the node specification. A taint on a node is a key:value pair that instructs the node to repel all pods that do not allow the taint. Using a specific key:value pair that is not on other pods ensures that only the log store pods can run on that node. - - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: -# ... - template: - compactor: 1 - nodeSelector: - node-role.kubernetes.io/infra: "" 2 - distributor: - nodeSelector: - node-role.kubernetes.io/infra: "" - gateway: - nodeSelector: - node-role.kubernetes.io/infra: "" - indexGateway: - nodeSelector: - node-role.kubernetes.io/infra: "" - ingester: - nodeSelector: - node-role.kubernetes.io/infra: "" - querier: - nodeSelector: - node-role.kubernetes.io/infra: "" - queryFrontend: - nodeSelector: - node-role.kubernetes.io/infra: "" - ruler: - nodeSelector: - node-role.kubernetes.io/infra: "" -# ... -``` - - -Specifies the component pod type that applies to the node selector. -Specifies the pods that are moved to nodes containing the defined label. 
- - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: -# ... - template: - compactor: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - distributor: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - indexGateway: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - ingester: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - querier: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - queryFrontend: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - ruler: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - gateway: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved -# ... -``` - - -To configure the nodeSelector and tolerations fields of the LokiStack (CR), you can use the oc explain command to view the description and fields for a particular resource: - - -```terminal -$ oc explain lokistack.spec.template -``` - - - -```text -KIND: LokiStack -VERSION: loki.grafana.com/v1 - -RESOURCE: template - -DESCRIPTION: - Template defines the resource/limits/tolerations/nodeselectors per - component - -FIELDS: - compactor - Compactor defines the compaction component spec. - - distributor - Distributor defines the distributor component spec. -... -``` - - -For more detailed information, you can add a specific field: - - -```terminal -$ oc explain lokistack.spec.template.compactor -``` - - - -```text -KIND: LokiStack -VERSION: loki.grafana.com/v1 - -RESOURCE: compactor - -DESCRIPTION: - Compactor defines the compaction component spec. - -FIELDS: - nodeSelector - NodeSelector defines the labels required by a node to schedule the - component onto it. -... -``` - - -## Enhanced Reliability and Performance - -Configurations to ensure Loki’s reliability and efficiency in production. - -## Enabling authentication to cloud-based log stores using short-lived tokens - -Workload identity federation enables authentication to cloud-based log stores using short-lived tokens. 
- -* Use one of the following options to enable authentication: -* If you use the Red Hat OpenShift Container Platform web console to install the Loki Operator, clusters that use short-lived tokens are automatically detected. You are prompted to create roles and supply the data required for the Loki Operator to create a CredentialsRequest object, which populates a secret. -* If you use the OpenShift CLI (`oc`) to install the Loki Operator, you must manually create a Subscription object using the appropriate template for your storage provider, as shown in the following examples. This authentication strategy is only supported for the storage providers indicated. -Example Azure sample subscription - -```yaml -apiVersion: operators.coreos.com/v1alpha1 -kind: Subscription -metadata: - name: loki-operator - namespace: openshift-operators-redhat -spec: - channel: "stable-6.0" - installPlanApproval: Manual - name: loki-operator - source: redhat-operators - sourceNamespace: openshift-marketplace - config: - env: - - name: CLIENTID - value: - - name: TENANTID - value: - - name: SUBSCRIPTIONID - value: - - name: REGION - value: -``` - -Example AWS sample subscription - -```yaml -apiVersion: operators.coreos.com/v1alpha1 -kind: Subscription -metadata: - name: loki-operator - namespace: openshift-operators-redhat -spec: - channel: "stable-6.0" - installPlanApproval: Manual - name: loki-operator - source: redhat-operators - sourceNamespace: openshift-marketplace - config: - env: - - name: ROLEARN - value: -``` - - -## Configuring Loki to tolerate node failure - -The Loki Operator supports setting pod anti-affinity rules to request that pods of the same component are scheduled on different available nodes in the cluster. - -Affinity is a property of pods that controls the nodes on which they prefer to be scheduled. Anti-affinity is a property of pods -that prevents a pod from being scheduled on a node. - -In Red Hat OpenShift Container Platform, pod affinity and pod anti-affinity allow you to constrain which nodes your pod is eligible to be scheduled on based on the key-value labels on other pods. - -The Operator sets default, preferred podAntiAffinity rules for all Loki components, which includes the compactor, distributor, gateway, indexGateway, ingester, querier, queryFrontend, and ruler components. - -You can override the preferred podAntiAffinity settings for Loki components by configuring required settings in the requiredDuringSchedulingIgnoredDuringExecution field: - - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: -# ... - template: - ingester: - podAntiAffinity: - # ... - requiredDuringSchedulingIgnoredDuringExecution: 1 - - labelSelector: - matchLabels: 2 - app.kubernetes.io/component: ingester - topologyKey: kubernetes.io/hostname -# ... -``` - - -The stanza to define a required rule. -The key-value pair (label) that must be matched to apply the rule. - -## LokiStack behavior during cluster restarts - -When an Red Hat OpenShift Container Platform cluster is restarted, LokiStack ingestion and the query path continue to operate within the available CPU and memory resources available for the node. This means that there is no downtime for the LokiStack during Red Hat OpenShift Container Platform cluster updates. This behavior is achieved by using PodDisruptionBudget resources. 
The Loki Operator provisions PodDisruptionBudget resources for Loki, which determine the minimum number of pods that must be available per component to ensure normal operations under certain conditions. - -## Advanced Deployment and Scalability - -Specialized configurations for high availability, scalability, and error handling. - -## Zone aware data replication - -The Loki Operator offers support for zone-aware data replication through pod topology spread constraints. Enabling this feature enhances reliability and safeguards against log loss in the event of a single zone failure. When configuring the deployment size as 1x.extra-small, 1x.small, or 1x.medium, the replication.factor field is automatically set to 2. - -To ensure proper replication, you need to have at least as many availability zones as the replication factor specifies. While it is possible to have more availability zones than the replication factor, having fewer zones can lead to write failures. Each zone should host an equal number of instances for optimal operation. - - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: - replicationFactor: 2 1 - replication: - factor: 2 2 - zones: - - maxSkew: 1 3 - topologyKey: topology.kubernetes.io/zone 4 -``` - - -Deprecated field, values entered are overwritten by replication.factor. -This value is automatically set when deployment size is selected at setup. -The maximum difference in number of pods between any two topology domains. The default is 1, and you cannot specify a value of 0. -Defines zones in the form of a topology key that corresponds to a node label. - -## Recovering Loki pods from failed zones - -In Red Hat OpenShift Container Platform a zone failure happens when specific availability zone resources become inaccessible. Availability zones are isolated areas within a cloud provider’s data center, aimed at enhancing redundancy and fault tolerance. If your Red Hat OpenShift Container Platform cluster is not configured to handle this, a zone failure can lead to service or data loss. - -Loki pods are part of a StatefulSet, and they come with Persistent Volume Claims (PVCs) provisioned by a StorageClass object. Each Loki pod and its PVCs reside in the same zone. When a zone failure occurs in a cluster, the StatefulSet controller automatically attempts to recover the affected pods in the failed zone. - - -[WARNING] ----- -The following procedure will delete the PVCs in the failed zone, and all data contained therein. To avoid complete data loss the replication factor field of the LokiStack CR should always be set to a value greater than 1 to ensure that Loki is replicating. ----- - -* Verify your LokiStack CR has a replication factor greater than 1. -* Zone failure detected by the control plane, and nodes in the failed zone are marked by cloud provider integration. - -The StatefulSet controller automatically attempts to reschedule pods in a failed zone. Because the associated PVCs are also in the failed zone, automatic rescheduling to a different zone does not work. You must manually delete the PVCs in the failed zone to allow successful re-creation of the stateful Loki Pod and its provisioned PVC in the new zone. - -1. 
List the pods in Pending status by running the following command: - -```terminal -$ oc get pods --field-selector status.phase==Pending -n openshift-logging -``` - -Example oc get pods output - -```terminal -NAME READY STATUS RESTARTS AGE 1 -logging-loki-index-gateway-1 0/1 Pending 0 17m -logging-loki-ingester-1 0/1 Pending 0 16m -logging-loki-ruler-1 0/1 Pending 0 16m -``` - -These pods are in Pending status because their corresponding PVCs are in the failed zone. -2. List the PVCs in Pending status by running the following command: - -```terminal -$ oc get pvc -o=json -n openshift-logging | jq '.items[] | select(.status.phase == "Pending") | .metadata.name' -r -``` - -Example oc get pvc output - -```terminal -storage-logging-loki-index-gateway-1 -storage-logging-loki-ingester-1 -wal-logging-loki-ingester-1 -storage-logging-loki-ruler-1 -wal-logging-loki-ruler-1 -``` - -3. Delete the PVC(s) for a pod by running the following command: - -```terminal -$ oc delete pvc -n openshift-logging -``` - -4. Delete the pod(s) by running the following command: - -```terminal -$ oc delete pod -n openshift-logging -``` - - -Once these objects have been successfully deleted, they should automatically be rescheduled in an available zone. - -### Troubleshooting PVC in a terminating state - -The PVCs might hang in the terminating state without being deleted, if PVC metadata finalizers are set to kubernetes.io/pv-protection. Removing the finalizers should allow the PVCs to delete successfully. - -* Remove the finalizer for each PVC by running the command below, then retry deletion. - -```terminal -$ oc patch pvc -p '{"metadata":{"finalizers":null}}' -n openshift-logging -``` - - -## Troubleshooting Loki rate limit errors - -If the Log Forwarder API forwards a large block of messages that exceeds the rate limit to Loki, Loki generates rate limit (429) errors. - -These errors can occur during normal operation. For example, when adding the logging to a cluster that already has some logs, rate limit errors might occur while the logging tries to ingest all of the existing log entries. In this case, if the rate of addition of new logs is less than the total rate limit, the historical data is eventually ingested, and the rate limit errors are resolved without requiring user intervention. - -In cases where the rate limit errors continue to occur, you can fix the issue by modifying the LokiStack custom resource (CR). - - -[IMPORTANT] ----- -The LokiStack CR is not available on Grafana-hosted Loki. This topic does not apply to Grafana-hosted Loki servers. ----- - -* The Log Forwarder API is configured to forward logs to Loki. -* Your system sends a block of messages that is larger than 2 MB to Loki. For example: - -```text -"values":[["1630410392689800468","{\"kind\":\"Event\",\"apiVersion\":\ -....... -...... -...... -...... 
-\"received_at\":\"2021-08-31T11:46:32.800278+00:00\",\"version\":\"1.7.4 1.6.0\"}},\"@timestamp\":\"2021-08-31T11:46:32.799692+00:00\",\"viaq_index_name\":\"audit-write\",\"viaq_msg_id\":\"MzFjYjJkZjItNjY0MC00YWU4LWIwMTEtNGNmM2E5ZmViMGU4\",\"log_type\":\"audit\"}"]]}]} -``` - -* After you enter oc logs -n openshift-logging -l component=collector, the collector logs in your cluster show a line containing one of the following error messages: - -```text -429 Too Many Requests Ingestion rate limit exceeded -``` - -Example Vector error message - -```text -2023-08-25T16:08:49.301780Z WARN sink{component_kind="sink" component_id=default_loki_infra component_type=loki component_name=default_loki_infra}: vector::sinks::util::retries: Retrying after error. error=Server responded with an error: 429 Too Many Requests internal_log_rate_limit=true -``` - -Example Fluentd error message - -```text -2023-08-30 14:52:15 +0000 [warn]: [default_loki_infra] failed to flush the buffer. retry_times=2 next_retry_time=2023-08-30 14:52:19 +0000 chunk="604251225bf5378ed1567231a1c03b8b" error_class=Fluent::Plugin::LokiOutput::LogPostError error="429 Too Many Requests Ingestion rate limit exceeded for user infrastructure (limit: 4194304 bytes/sec) while attempting to ingest '4082' lines totaling '7820025' bytes, reduce log volume or contact your Loki administrator to see if the limit can be increased\n" -``` - - -The error is also visible on the receiving end. For example, in the LokiStack ingester pod: -Example Loki ingester error message - -```text -level=warn ts=2023-08-30T14:57:34.155592243Z caller=grpc_logging.go:43 duration=1.434942ms method=/logproto.Pusher/Push err="rpc error: code = Code(429) desc = entry with timestamp 2023-08-30 14:57:32.012778399 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream -``` - - -* Update the ingestionBurstSize and ingestionRate fields in the LokiStack CR: - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: - limits: - global: - ingestion: - ingestionBurstSize: 16 1 - ingestionRate: 8 2 -# ... -``` - -The ingestionBurstSize field defines the maximum local rate-limited sample size per distributor replica in MB. This value is a hard limit. Set this value to at least the maximum logs size expected in a single push request. Single requests that are larger than the ingestionBurstSize value are not permitted. -The ingestionRate field is a soft limit on the maximum amount of ingested samples per second in MB. Rate limit errors occur if the rate of logs exceeds the limit, but the collector retries sending the logs. As long as the total average is lower than the limit, the system recovers and errors are resolved without user intervention. \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.15/observability/logging/logging-6.0/log6x-release-notes.txt b/ocp-product-docs-plaintext/4.15/observability/logging/logging-6.0/log6x-release-notes.txt deleted file mode 100644 index 2acb75cd..00000000 --- a/ocp-product-docs-plaintext/4.15/observability/logging/logging-6.0/log6x-release-notes.txt +++ /dev/null @@ -1,144 +0,0 @@ -# Release notes - - - -# Logging 6.0.3 - -This release includes RHBA-2024:10991. 
- -## New features and enhancements - -* With this update, the Loki Operator supports the configuring of the workload identity federation on the Google Cloud Platform (GCP) by using the Cluster Credential Operator (CCO) in Red Hat OpenShift Container Platform 4.17 or later. (LOG-6421) - -## Bug fixes - -* Before this update, the collector used the default settings to collect audit logs, which did not account for back pressure from output receivers. With this update, the audit log collection is optimized for file handling and log reading. (LOG-6034) -* Before this update, any namespace containing openshift or kube was treated as an infrastructure namespace. With this update, only the following namespaces are treated as infrastructure namespaces: default, kube, openshift, and namespaces that begin with openshift- or kube-. (LOG-6204) -* Before this update, an input receiver service was repeatedly created and deleted, causing issues with mounting the TLS secrets. With this update, the service is created once and only deleted if it is not defined in the ClusterLogForwarder custom resource. (LOG-6343) -* Before this update, pipeline validation might enter an infinite loop if a name was a substring of another name. With this update, stricter name equality checks prevent the infinite loop. (LOG-6352) -* Before this update, the collector alerting rules included the summary and message fields. With this update, the collector alerting rules include the summary and description fields. (LOG-6406) -* Before this update, setting up the custom audit inputs in the ClusterLogForwarder custom resource with configured LokiStack output caused errors due to the nil pointer dereference. With this update, the Operator performs the nil checks, preventing such errors. (LOG-6441) -* Before this update, the collector did not correctly mount the /var/log/oauth-server/ path, which prevented the collection of the audit logs. With this update, the volume mount is added, and the audit logs are collected as expected. (LOG-6486) -* Before this update, the collector did not correctly mount the oauth-apiserver audit log file. As a result, such audit logs were not collected. With this update, the volume mount is correctly mounted, and the logs are collected as expected. (LOG-6543) - -## CVEs - -* CVE-2019-12900 -* CVE-2024-2511 -* CVE-2024-3596 -* CVE-2024-4603 -* CVE-2024-4741 -* CVE-2024-5535 -* CVE-2024-10963 -* CVE-2024-50602 - -# Logging 6.0.2 - -This release includes RHBA-2024:10051. - -## Bug fixes - -* Before this update, Loki did not correctly load some configurations, which caused issues when using Alibaba Cloud or IBM Cloud object storage. This update fixes the configuration-loading code in Loki, resolving the issue. (LOG-5325) -* Before this update, the collector would discard audit log messages that exceeded the configured threshold. This modifies the audit configuration thresholds for the maximum line size as well as the number of bytes read during a read cycle. (LOG-5998) -* Before this update, the Cluster Logging Operator did not watch and reconcile resources associated with an instance of a ClusterLogForwarder like it did in prior releases. This update modifies the operator to watch and reconcile all resources it owns and creates. (LOG-6264) -* Before this update, log events with an unknown severity level sent to Google Cloud Logging would trigger a warning in the vector collector, which would then default the severity to 'DEFAULT'. 
With this update, log severity levels are now standardized to match Google Cloud Logging specifications, and audit logs are assigned a severity of 'INFO'. (LOG-6296) -* Before this update, when infrastructure namespaces were included in application inputs, the log_type was set as application. With this update, the log_type of infrastructure namespaces included in application inputs is set to infrastructure. (LOG-6354) -* Before this update, specifying a value for the syslog.enrichment field of the ClusterLogForwarder added namespace_name, container_name, and pod_name to the messages of non-container logs. With this update, only container logs include namespace_name, container_name, and pod_name in their messages when syslog.enrichment is set. (LOG-6402) - -## CVEs - -* CVE-2024-6119 -* CVE-2024-6232 - -# Logging 6.0.1 - -This release includes OpenShift Logging Bug Fix Release 6.0.1. - -## Bug fixes - -* With this update, the default memory limit for the collector has been increased from 1024 Mi to 2024 Mi. However, users should always adjust their resource limits according to their cluster specifications and needs. (LOG-6180) -* Before this update, the Loki Operator failed to add the default namespace label to all AlertingRule resources, which caused the User-Workload-Monitoring Alertmanager to skip routing these alerts. This update adds the rule namespace as a label to all alerting and recording rules, resolving the issue and restoring proper alert routing in Alertmanager. -(LOG-6151) -* Before this update, the LokiStack ruler component view did not initialize properly, causing an invalid field error when the ruler component was disabled. This update ensures that the component view initializes with an empty value, resolving the issue. -(LOG-6129) -* Before this update, it was possible to set log_source in the prune filter, which could lead to inconsistent log data. With this update, the configuration is validated before being applied, and any configuration that includes log_source in the prune filter is rejected. -(LOG-6202) - -## CVEs - -* CVE-2024-24791 -* CVE-2024-34155 -* CVE-2024-34156 -* CVE-2024-34158 -* CVE-2024-6104 -* CVE-2024-6119 -* CVE-2024-45490 -* CVE-2024-45491 -* CVE-2024-45492 - -# Logging 6.0.0 - -This release includes Logging for Red Hat OpenShift Bug Fix Release 6.0.0 - - -[NOTE] ----- -Logging is provided as an installable component, with a distinct release cycle from the core Red Hat OpenShift Container Platform. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility. ----- - - - -# Removal notice - -* With this release, logging no longer supports the ClusterLogging.logging.openshift.io and ClusterLogForwarder.logging.openshift.io custom resources. Refer to the product documentation for details on the replacement features. (LOG-5803) -* With this release, logging no longer manages or deploys log storage (such as Elasticsearch), visualization (such as Kibana), or Fluentd-based log collectors. (LOG-5368) - - -[NOTE] ----- -In order to continue to use Elasticsearch and Kibana managed by the elasticsearch-operator, the administrator must modify those object's ownerRefs before deleting the ClusterLogging resource. ----- - -# New features and enhancements - -* This feature introduces a new architecture for logging for Red Hat OpenShift by shifting component responsibilities to their relevant Operators, such as for storage, visualization, and collection. 
It introduces the ClusterLogForwarder.observability.openshift.io API for log collection and forwarding. Support for the ClusterLogging.logging.openshift.io and ClusterLogForwarder.logging.openshift.io APIs, along with the Red Hat managed Elastic stack (Elasticsearch and Kibana), is removed. Users are encouraged to migrate to the Red Hat LokiStack for log storage. Existing managed Elasticsearch deployments can be used for a limited time. Automated migration for log collection is not provided, so administrators need to create a new ClusterLogForwarder.observability.openshift.io specification to replace their previous custom resources. Refer to the official product documentation for more details. (LOG-3493) -* With this release, the responsibility for deploying the logging view plugin shifts from the Red Hat OpenShift Logging Operator to the Cluster Observability Operator (COO). For new log storage installations that need visualization, the Cluster Observability Operator and the associated UIPlugin resource must be deployed. Refer to the Cluster Observability Operator Overview product documentation for more details. (LOG-5461) -* This enhancement sets default requests and limits for Vector collector deployments' memory and CPU usage based on Vector documentation recommendations. (LOG-4745) -* This enhancement updates Vector to align with the upstream version v0.37.1. (LOG-5296) -* This enhancement introduces an alert that triggers when log collectors buffer logs to a node's file system and use over 15% of the available space, indicating potential back pressure issues. (LOG-5381) -* This enhancement updates the selectors for all components to use common Kubernetes labels. (LOG-5906) -* This enhancement changes the collector configuration to deploy as a ConfigMap instead of a secret, allowing users to view and edit the configuration when the ClusterLogForwarder is set to Unmanaged. (LOG-5599) -* This enhancement adds the ability to configure the Vector collector log level using an annotation on the ClusterLogForwarder, with options including trace, debug, info, warn, error, or off. (LOG-5372) -* This enhancement adds validation to reject configurations where Amazon CloudWatch outputs use multiple AWS roles, preventing incorrect log routing. (LOG-5640) -* This enhancement removes the Log Bytes Collected and Log Bytes Sent graphs from the metrics dashboard. (LOG-5964) -* This enhancement updates the must-gather functionality to only capture information for inspecting Logging 6.0 components, including Vector deployments from ClusterLogForwarder.observability.openshift.io resources and the Red Hat managed LokiStack. (LOG-5949) -* This enhancement improves Azure storage secret validation by providing early warnings for specific error conditions. (LOG-4571) -* This enhancement updates the ClusterLogForwarder API to follow the Kubernetes standards. (LOG-5977) -Example of a new configuration in the ClusterLogForwarder custom resource for the updated API - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - name: -spec: - outputs: - - name: - type: - : - tuning: - deliveryMode: AtMostOnce -``` - - -# Technology Preview features - -* This release introduces a Technology Preview feature for log forwarding using OpenTelemetry. A new output type,` OTLP`, allows sending JSON-encoded log records using the OpenTelemetry data model and resource semantic conventions. 
(LOG-4225) - -# Bug fixes - -* Before this update, the CollectorHighErrorRate and CollectorVeryHighErrorRate alerts were still present. With this update, both alerts are removed in the logging 6.0 release but might return in a future release. (LOG-3432) - -# CVEs - -* CVE-2024-34397 \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.15/observability/logging/logging-6.0/log6x-upgrading-to-6.txt b/ocp-product-docs-plaintext/4.15/observability/logging/logging-6.0/log6x-upgrading-to-6.txt deleted file mode 100644 index c23045a4..00000000 --- a/ocp-product-docs-plaintext/4.15/observability/logging/logging-6.0/log6x-upgrading-to-6.txt +++ /dev/null @@ -1,544 +0,0 @@ -# Upgrading to Logging 6.0 - - -Logging v6.0 is a significant upgrade from previous releases, achieving several longstanding goals of Cluster Logging: -* Introduction of distinct operators to manage logging components (e.g., collectors, storage, visualization). -* Removal of support for managed log storage and visualization based on Elastic products (i.e., Elasticsearch, Kibana). -* Deprecation of the Fluentd log collector implementation. -* Removal of support for ClusterLogging.logging.openshift.io and ClusterLogForwarder.logging.openshift.io resources. - -[NOTE] ----- -The cluster-logging-operator does not provide an automated upgrade process. ----- -Given the various configurations for log collection, forwarding, and storage, no automated upgrade is provided by the cluster-logging-operator. This documentation assists administrators in converting existing ClusterLogging.logging.openshift.io and ClusterLogForwarder.logging.openshift.io specifications to the new API. Examples of migrated ClusterLogForwarder.observability.openshift.io resources for common use cases are included. - -# Using the oc explain command - -The oc explain command is an essential tool in the OpenShift CLI oc that provides detailed descriptions of the fields within Custom Resources (CRs). This command is invaluable for administrators and developers who are configuring or troubleshooting resources in an OpenShift cluster. - -## Resource Descriptions - -oc explain offers in-depth explanations of all fields associated with a specific object. This includes standard resources like pods and services, as well as more complex entities like statefulsets and custom resources defined by Operators. - -To view the documentation for the outputs field of the ClusterLogForwarder custom resource, you can use: - - -```terminal -$ oc explain clusterlogforwarders.observability.openshift.io.spec.outputs -``` - - - -[NOTE] ----- -In place of clusterlogforwarder the short form obsclf can be used. ----- - -This will display detailed information about these fields, including their types, default values, and any associated sub-fields. - -## Hierarchical Structure - -The command displays the structure of resource fields in a hierarchical format, clarifying the relationships between different configuration options. - -For instance, here’s how you can drill down into the storage configuration for a LokiStack custom resource: - - -```terminal -$ oc explain lokistacks.loki.grafana.com -$ oc explain lokistacks.loki.grafana.com.spec -$ oc explain lokistacks.loki.grafana.com.spec.storage -$ oc explain lokistacks.loki.grafana.com.spec.storage.schemas -``` - - -Each command reveals a deeper level of the resource specification, making the structure clear. 
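If you prefer to see the complete nested structure in a single call rather than drilling down one level at a time, oc explain also accepts the standard --recursive flag of the underlying kubectl explain command:

```terminal
$ oc explain lokistacks.loki.grafana.com.spec.storage --recursive
```

The recursive view prints field names and types for the whole subtree without field descriptions, which is useful for quickly checking the shape of a nested specification.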
- -## Type Information - -oc explain also indicates the type of each field (such as string, integer, or boolean), allowing you to verify that resource definitions use the correct data types. - -For example: - - -```terminal -$ oc explain lokistacks.loki.grafana.com.spec.size -``` - - -This will show that size should be defined using an integer value. - -## Default Values - -When applicable, the command shows the default values for fields, providing insights into what values will be used if none are explicitly specified. - -Again using lokistacks.loki.grafana.com as an example: - - -```terminal -$ oc explain lokistacks.spec.template.distributor.replicas -``` - - - -```terminal -GROUP: loki.grafana.com -KIND: LokiStack -VERSION: v1 - -FIELD: replicas - -DESCRIPTION: - Replicas defines the number of replica pods of the component. -``` - - -# Log Storage - -The only managed log storage solution available in this release is a Lokistack, managed by the Loki Operator. This solution, previously available as the preferred alternative to the managed Elasticsearch offering, remains unchanged in its deployment process. - - -[IMPORTANT] ----- -To continue using an existing Red Hat managed Elasticsearch or Kibana deployment provided by the Elasticsearch Operator, remove the owner references from the Elasticsearch resource named elasticsearch, and the Kibana resource named kibana in the openshift-logging namespace before removing the ClusterLogging resource named instance in the same namespace. ----- - -1. Temporarily set ClusterLogging resource to the Unmanaged state by running the following command: - -```terminal -$ oc -n openshift-logging patch clusterlogging/instance -p '{"spec":{"managementState": "Unmanaged"}}' --type=merge -``` - -2. Remove the ownerReferences parameter from the Elasticsearch resource by running the following command: - -The following command ensures that ClusterLogging no longer owns the Elasticsearch resource. Updates to the ClusterLogging resource’s logStore field will no longer affect the Elasticsearch resource. - -```terminal -$ oc -n openshift-logging patch elasticsearch/elasticsearch -p '{"metadata":{"ownerReferences": []}}' --type=merge -``` - -3. Remove the ownerReferences parameter from the Kibana resource. - -The following command ensures that Cluster Logging no longer owns the Kibana resource. Updates to the ClusterLogging resource’s visualization field will no longer affect the Kibana resource. - -```terminal -$ oc -n openshift-logging patch kibana/kibana -p '{"metadata":{"ownerReferences": []}}' --type=merge -``` - -4. Set the ClusterLogging resource to the Managed state by running the following command: - -```terminal -$ oc -n openshift-logging patch clusterlogging/instance -p '{"spec":{"managementState": "Managed"}}' --type=merge -``` - - -# Log Visualization - -The OpenShift console UI plugin for log visualization has been moved to the cluster-observability-operator from the cluster-logging-operator. - -# Log Collection and Forwarding - -Log collection and forwarding configurations are now specified under the new API, part of the observability.openshift.io API group. The following sections highlight the differences from the old API resources. - - -[NOTE] ----- -Vector is the only supported collector implementation. ----- - -# Management, Resource Allocation, and Workload Scheduling - -Configuration for management state (e.g., Managed, Unmanaged), resource requests and limits, tolerations, and node selection is now part of the new ClusterLogForwarder API. 
- - -```yaml -apiVersion: "logging.openshift.io/v1" -kind: "ClusterLogging" -spec: - managementState: "Managed" - collection: - resources: - limits: {} - requests: {} - nodeSelector: {} - tolerations: {} -``` - - - -```yaml -apiVersion: "observability.openshift.io/v1" -kind: ClusterLogForwarder -spec: - managementState: Managed - collector: - resources: - limits: {} - requests: {} - nodeSelector: {} - tolerations: {} -``` - - -# Input Specifications - -The input specification is an optional part of the ClusterLogForwarder specification. Administrators can continue to use the predefined values of application, infrastructure, and audit to collect these sources. - -## Application Inputs - -Namespace and container inclusions and exclusions have been consolidated into a single field. - - -```yaml -apiVersion: "logging.openshift.io/v1" -kind: ClusterLogForwarder -spec: - inputs: - - name: application-logs - type: application - application: - namespaces: - - foo - - bar - includes: - - namespace: my-important - container: main - excludes: - - container: too-verbose -``` - - - -```yaml -apiVersion: "observability.openshift.io/v1" -kind: ClusterLogForwarder -spec: - inputs: - - name: application-logs - type: application - application: - includes: - - namespace: foo - - namespace: bar - - namespace: my-important - container: main - excludes: - - container: too-verbose -``` - - - -[NOTE] ----- -application, infrastructure, and audit are reserved words and cannot be used as names when defining an input. ----- - -## Input Receivers - -Changes to input receivers include: - -* Explicit configuration of the type at the receiver level. -* Port settings moved to the receiver level. - - -```yaml -apiVersion: "logging.openshift.io/v1" -kind: ClusterLogForwarder -spec: - inputs: - - name: an-http - receiver: - http: - port: 8443 - format: kubeAPIAudit - - name: a-syslog - receiver: - type: syslog - syslog: - port: 9442 -``` - - - -```yaml -apiVersion: "observability.openshift.io/v1" -kind: ClusterLogForwarder -spec: - inputs: - - name: an-http - type: receiver - receiver: - type: http - port: 8443 - http: - format: kubeAPIAudit - - name: a-syslog - type: receiver - receiver: - type: syslog - port: 9442 -``` - - -# Output Specifications - -High-level changes to output specifications include: - -* URL settings moved to each output type specification. -* Tuning parameters moved to each output type specification. -* Separation of TLS configuration from authentication. -* Explicit configuration of keys and secret/configmap for TLS and authentication. - -# Secrets and TLS Configuration - -Secrets and TLS configurations are now separated into authentication and TLS configuration for each output. They must be explicitly defined in the specification rather than relying on administrators to define secrets with recognized keys. Upgrading TLS and authorization configurations requires administrators to understand previously recognized keys to continue using existing secrets. Examples in the following sections provide details on how to configure ClusterLogForwarder secrets to forward to existing Red Hat managed log storage solutions. 
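For example, the Elasticsearch outputs in the next section reference a secret named collector that contains the ca-bundle.crt, tls.crt, and tls.key keys. A minimal sketch of creating such a secret, assuming the certificate and key files are available locally under those file names, is:

```terminal
# Create the secret that the TLS settings of each output reference by name and key
$ oc -n openshift-logging create secret generic collector \
  --from-file=ca-bundle.crt=ca-bundle.crt \
  --from-file=tls.crt=tls.crt \
  --from-file=tls.key=tls.key
```

Adjust the key names and file paths to match the keys your existing secrets already use, because the new API reads only the keys that are explicitly referenced in the output configuration.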
- -# Red Hat Managed Elasticsearch - - -```yaml -apiVersion: logging.openshift.io/v1 -kind: ClusterLogging -metadata: - name: instance - namespace: openshift-logging -spec: - logStore: - type: elasticsearch -``` - - - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - name: instance - namespace: openshift-logging -spec: - serviceAccount: - name: - managementState: Managed - outputs: - - name: audit-elasticsearch - type: elasticsearch - elasticsearch: - url: https://elasticsearch:9200 - version: 6 - index: audit-write - tls: - ca: - key: ca-bundle.crt - secretName: collector - certificate: - key: tls.crt - secretName: collector - key: - key: tls.key - secretName: collector - - name: app-elasticsearch - type: elasticsearch - elasticsearch: - url: https://elasticsearch:9200 - version: 6 - index: app-write - tls: - ca: - key: ca-bundle.crt - secretName: collector - certificate: - key: tls.crt - secretName: collector - key: - key: tls.key - secretName: collector - - name: infra-elasticsearch - type: elasticsearch - elasticsearch: - url: https://elasticsearch:9200 - version: 6 - index: infra-write - tls: - ca: - key: ca-bundle.crt - secretName: collector - certificate: - key: tls.crt - secretName: collector - key: - key: tls.key - secretName: collector - pipelines: - - name: app - inputRefs: - - application - outputRefs: - - app-elasticsearch - - name: audit - inputRefs: - - audit - outputRefs: - - audit-elasticsearch - - name: infra - inputRefs: - - infrastructure - outputRefs: - - infra-elasticsearch -``` - - -# Red Hat Managed LokiStack - - -```yaml -apiVersion: logging.openshift.io/v1 -kind: ClusterLogging -metadata: - name: instance - namespace: openshift-logging -spec: - logStore: - type: lokistack - lokistack: - name: logging-loki -``` - - - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - name: instance - namespace: openshift-logging -spec: - serviceAccount: - name: - outputs: - - name: default-lokistack - type: lokiStack - lokiStack: - target: - name: logging-loki - namespace: openshift-logging - authentication: - token: - from: serviceAccount - tls: - ca: - key: service-ca.crt - configMapName: openshift-service-ca.crt - pipelines: - - outputRefs: - - default-lokistack - - inputRefs: - - application - - infrastructure -``` - - -# Filters and Pipeline Configuration - -Pipeline configurations now define only the routing of input sources to their output destinations, with any required transformations configured separately as filters. All attributes of pipelines from previous releases have been converted to filters in this release. Individual filters are defined in the filters specification and referenced by a pipeline. - - -```yaml -apiVersion: logging.openshift.io/v1 -kind: ClusterLogForwarder -spec: - pipelines: - - name: application-logs - parse: json - labels: - foo: bar - detectMultilineErrors: true -``` - - - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -spec: - filters: - - name: detectexception - type: detectMultilineException - - name: parse-json - type: parse - - name: labels - type: openshiftLabels - openshiftLabels: - foo: bar - pipelines: - - name: application-logs - filterRefs: - - detectexception - - labels - - parse-json -``` - - -# Validation and Status - -Most validations are enforced when a resource is created or updated, providing immediate feedback. 
This is a departure from previous releases, where validation occurred post-creation and required inspecting the resource status. Some validation still occurs post-creation for cases where it is not possible to validate at creation or update time. - -Instances of the ClusterLogForwarder.observability.openshift.io must satisfy the following conditions before the operator will deploy the log collector: Authorized, Valid, Ready. An example of these conditions is: - - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -status: - conditions: - - lastTransitionTime: "2024-09-13T03:28:44Z" - message: 'permitted to collect log types: [application]' - reason: ClusterRolesExist - status: "True" - type: observability.openshift.io/Authorized - - lastTransitionTime: "2024-09-13T12:16:45Z" - message: "" - reason: ValidationSuccess - status: "True" - type: observability.openshift.io/Valid - - lastTransitionTime: "2024-09-13T12:16:45Z" - message: "" - reason: ReconciliationComplete - status: "True" - type: Ready - filterConditions: - - lastTransitionTime: "2024-09-13T13:02:59Z" - message: filter "detectexception" is valid - reason: ValidationSuccess - status: "True" - type: observability.openshift.io/ValidFilter-detectexception - - lastTransitionTime: "2024-09-13T13:02:59Z" - message: filter "parse-json" is valid - reason: ValidationSuccess - status: "True" - type: observability.openshift.io/ValidFilter-parse-json - inputConditions: - - lastTransitionTime: "2024-09-13T12:23:03Z" - message: input "application1" is valid - reason: ValidationSuccess - status: "True" - type: observability.openshift.io/ValidInput-application1 - outputConditions: - - lastTransitionTime: "2024-09-13T13:02:59Z" - message: output "default-lokistack-application1" is valid - reason: ValidationSuccess - status: "True" - type: observability.openshift.io/ValidOutput-default-lokistack-application1 - pipelineConditions: - - lastTransitionTime: "2024-09-13T03:28:44Z" - message: pipeline "default-before" is valid - reason: ValidationSuccess - status: "True" - type: observability.openshift.io/ValidPipeline-default-before -``` - - - -[NOTE] ----- -Conditions that are satisfied and applicable have a "status" value of "True". Conditions with a status other than "True" provide a reason and a message explaining the issue. ----- \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.15/observability/logging/logging-6.0/log6x-visual.txt b/ocp-product-docs-plaintext/4.15/observability/logging/logging-6.0/log6x-visual.txt deleted file mode 100644 index 28ccf097..00000000 --- a/ocp-product-docs-plaintext/4.15/observability/logging/logging-6.0/log6x-visual.txt +++ /dev/null @@ -1,11 +0,0 @@ -# Visualization for logging - - - -Visualization for logging is provided by deploying the Logging UI Plugin of the Cluster Observability Operator, which requires Operator installation. - - -[IMPORTANT] ----- -Until the approaching General Availability (GA) release of the Cluster Observability Operator (COO), which is currently in Technology Preview (TP), Red Hat provides support to customers who are using Logging 6.0 or later with the COO for its Logging UI Plugin on Red Hat OpenShift Container Platform 4.14 or later. This support exception is temporary as the COO includes several independent features, some of which are still TP features, but the Logging UI Plugin is ready for GA. 
----- \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.15/observability/logging/logging-6.1/log6x-about-6.1.txt b/ocp-product-docs-plaintext/4.15/observability/logging/logging-6.1/log6x-about-6.1.txt deleted file mode 100644 index 0b313dc8..00000000 --- a/ocp-product-docs-plaintext/4.15/observability/logging/logging-6.1/log6x-about-6.1.txt +++ /dev/null @@ -1,323 +0,0 @@ -# Logging 6.1 - - -The ClusterLogForwarder custom resource (CR) is the central configuration point for log collection and forwarding. - -# Inputs and outputs - -Inputs specify the sources of logs to be forwarded. Logging provides built-in input types: application, receiver, infrastructure, and audit, which select logs from different parts of your cluster. You can also define custom inputs based on namespaces or pod labels to fine-tune log selection. - -Outputs define the destinations where logs are sent. Each output type has its own set of configuration options, allowing you to customize the behavior and authentication settings. - -# Receiver input type - -The receiver input type enables the Logging system to accept logs from external sources. It supports two formats for receiving logs: http and syslog. - -The ReceiverSpec defines the configuration for a receiver input. - -# Pipelines and filters - -Pipelines determine the flow of logs from inputs to outputs. A pipeline consists of one or more input refs, output refs, and optional filter refs. Filters can be used to transform or drop log messages within a pipeline. The order of filters matters, as they are applied sequentially, and earlier filters can prevent log messages from reaching later stages. - -# Operator behavior - -The Cluster Logging Operator manages the deployment and configuration of the collector based on the managementState field of the ClusterLogForwarder resource: - -* When set to Managed (default), the operator actively manages the logging resources to match the configuration defined in the spec. -* When set to Unmanaged, the operator does not take any action, allowing you to manually manage the logging components. - -# Validation - -Logging includes extensive validation rules and default values to ensure a smooth and error-free configuration experience. The ClusterLogForwarder resource enforces validation checks on required fields, dependencies between fields, and the format of input values. Default values are provided for certain fields, reducing the need for explicit configuration in common scenarios. - -# Quick start - -OpenShift Logging supports two data models: - -* ViaQ (General Availability) -* OpenTelemetry (Technology Preview) - -You can select either of these data models based on your requirement by configuring the lokiStack.dataModel field in the ClusterLogForwarder. ViaQ is the default data model when forwarding logs to LokiStack. - - -[NOTE] ----- -In future releases of OpenShift Logging, the default data model will change from ViaQ to OpenTelemetry. ----- - -## Quick start with ViaQ - -To use the default ViaQ data model, follow these steps: - -* You have access to an Red Hat OpenShift Container Platform cluster with cluster-admin permissions. -* You installed the OpenShift CLI (`oc`). -* You have access to a supported object store. For example, AWS S3, Google Cloud Storage, Azure, Swift, Minio, or OpenShift Data Foundation. - -1. Install the Red Hat OpenShift Logging Operator, Loki Operator, and Cluster Observability Operator (COO) from OperatorHub. -2. 
Create a LokiStack custom resource (CR) in the openshift-logging namespace: - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: - managementState: Managed - size: 1x.extra-small - storage: - schemas: - - effectiveDate: '2024-10-01' - version: v13 - secret: - name: logging-loki-s3 - type: s3 - storageClassName: gp3-csi - tenants: - mode: openshift-logging -``` - - -[NOTE] ----- -Ensure that the logging-loki-s3 secret is created beforehand. The contents of this secret vary depending on the object storage in use. For more information, see Secrets and TLS Configuration. ----- -3. Create a service account for the collector: - -```terminal -$ oc create sa collector -n openshift-logging -``` - -4. Allow the collector's service account to write data to the LokiStack CR: - -```terminal -$ oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector -n openshift-logging -``` - - -[NOTE] ----- -The ClusterRole resource is created automatically during the Cluster Logging Operator installation and does not need to be created manually. ----- -5. To collect logs, use the service account of the collector by running the following commands: - -```terminal -$ oc adm policy add-cluster-role-to-user collect-application-logs -z collector -n openshift-logging -``` - - -```terminal -$ oc adm policy add-cluster-role-to-user collect-audit-logs -z collector -n openshift-logging -``` - - -```terminal -$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector -n openshift-logging -``` - - -[NOTE] ----- -The example binds the collector to all three roles (application, infrastructure, and audit), but by default, only application and infrastructure logs are collected. To collect audit logs, update your ClusterLogForwarder configuration to include them. Assign roles based on the specific log types required for your environment. ----- -6. Create a UIPlugin CR to enable the Log section in the Observe tab: - -```yaml -apiVersion: observability.openshift.io/v1alpha1 -kind: UIPlugin -metadata: - name: logging -spec: - type: Logging - logging: - lokiStack: - name: logging-loki -``` - -7. Create a ClusterLogForwarder CR to configure log forwarding: - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - name: collector - namespace: openshift-logging -spec: - serviceAccount: - name: collector - outputs: - - name: default-lokistack - type: lokiStack - lokiStack: - authentication: - token: - from: serviceAccount - target: - name: logging-loki - namespace: openshift-logging - tls: - ca: - key: service-ca.crt - configMapName: openshift-service-ca.crt - pipelines: - - name: default-logstore - inputRefs: - - application - - infrastructure - outputRefs: - - default-lokistack -``` - - -[NOTE] ----- -The dataModel field is optional and left unset (dataModel: "") by default. This allows the Cluster Logging Operator (CLO) to automatically select a data model. Currently, the CLO defaults to the ViaQ model when the field is unset, but this will change in future releases. Specifying dataModel: ViaQ ensures the configuration remains compatible if the default changes. ----- - -* Verify that logs are visible in the Log section of the Observe tab in the Red Hat OpenShift Container Platform web console. - -## Quick start with OpenTelemetry - - -[IMPORTANT] ----- -{FeatureName} is a Technology Preview feature only. 
Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. -For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. ----- - -To configure OTLP ingestion and enable the OpenTelemetry data model, follow these steps: - -* You have access to an Red Hat OpenShift Container Platform cluster with cluster-admin permissions. -* You have installed the OpenShift CLI (`oc`). -* You have access to a supported object store. For example, AWS S3, Google Cloud Storage, Azure, Swift, Minio, or OpenShift Data Foundation. - -1. Install the Red Hat OpenShift Logging Operator, Loki Operator, and Cluster Observability Operator (COO) from OperatorHub. -2. Create a LokiStack custom resource (CR) in the openshift-logging namespace: - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: - managementState: Managed - size: 1x.extra-small - storage: - schemas: - - effectiveDate: '2024-10-01' - version: v13 - secret: - name: logging-loki-s3 - type: s3 - storageClassName: gp3-csi - tenants: - mode: openshift-logging -``` - - -[NOTE] ----- -Ensure that the logging-loki-s3 secret is created beforehand. The contents of this secret vary depending on the object storage in use. For more information, see "Secrets and TLS Configuration". ----- -3. Create a service account for the collector: - -```terminal -$ oc create sa collector -n openshift-logging -``` - -4. Allow the collector's service account to write data to the LokiStack CR: - -```terminal -$ oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector -n openshift-logging -``` - - -[NOTE] ----- -The ClusterRole resource is created automatically during the Cluster Logging Operator installation and does not need to be created manually. ----- -5. To collect logs, use the service account of the collector by running the following commands: - -```terminal -$ oc adm policy add-cluster-role-to-user collect-application-logs -z collector -n openshift-logging -``` - - -```terminal -$ oc adm policy add-cluster-role-to-user collect-audit-logs -z collector -n openshift-logging -``` - - -```terminal -$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector -n openshift-logging -``` - - -[NOTE] ----- -The example binds the collector to all three roles (application, infrastructure, and audit). By default, only application and infrastructure logs are collected. To collect audit logs, update your ClusterLogForwarder configuration to include them. Assign roles based on the specific log types required for your environment. ----- -6. Create a UIPlugin CR to enable the Log section in the Observe tab: - -```yaml -apiVersion: observability.openshift.io/v1alpha1 -kind: UIPlugin -metadata: - name: logging -spec: - type: Logging - logging: - lokiStack: - name: logging-loki -``` - -7. 
Create a ClusterLogForwarder CR to configure log forwarding: - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - name: collector - namespace: openshift-logging - annotations: - observability.openshift.io/tech-preview-otlp-output: "enabled" 1 -spec: - serviceAccount: - name: collector - outputs: - - name: loki-otlp - type: lokiStack 2 - lokiStack: - target: - name: logging-loki - namespace: openshift-logging - dataModel: Otel 3 - authentication: - token: - from: serviceAccount - tls: - ca: - key: service-ca.crt - configMapName: openshift-service-ca.crt - pipelines: - - name: my-pipeline - inputRefs: - - application - - infrastructure - outputRefs: - - loki-otlp -``` - -Use the annotation to enable the Otel data model, which is a Technology Preview feature. -Define the output type as lokiStack. -Specifies the OpenTelemetry data model. - -[NOTE] ----- -You cannot use lokiStack.labelKeys when dataModel is Otel. To achieve similar functionality when dataModel is Otel, refer to "Configuring LokiStack for OTLP data ingestion". ----- - -* To verify that OTLP is functioning correctly, complete the following steps: -1. In the OpenShift web console, click Observe -> OpenShift Logging -> LokiStack -> Writes. -2. Check the Distributor - Structured Metadata section. \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.15/observability/logging/logging-6.1/log6x-about-logging.txt b/ocp-product-docs-plaintext/4.15/observability/logging/logging-6.1/log6x-about-logging.txt new file mode 100644 index 00000000..7b0dbb28 --- /dev/null +++ b/ocp-product-docs-plaintext/4.15/observability/logging/logging-6.1/log6x-about-logging.txt @@ -0,0 +1,16 @@ +# About Logging 6.1 + + + +As a cluster administrator, you can deploy logging on an Red Hat OpenShift Container Platform cluster, and use it to collect and aggregate node system audit logs, application container logs, and infrastructure logs. + +You can use logging to perform the following tasks: + +* Forward logs to your chosen log outputs, including on-cluster, Red Hat managed log storage. +* Visualize your log data in the Red Hat OpenShift Container Platform web console. + + +[NOTE] +---- +Because logging releases on a different cadence from Red Hat OpenShift Container Platform, the logging 6 documentation is available as a separate documentation set at Red Hat OpenShift Logging. +---- \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.15/observability/logging/logging-6.1/log6x-clf-6.1.txt b/ocp-product-docs-plaintext/4.15/observability/logging/logging-6.1/log6x-clf-6.1.txt deleted file mode 100644 index a969b758..00000000 --- a/ocp-product-docs-plaintext/4.15/observability/logging/logging-6.1/log6x-clf-6.1.txt +++ /dev/null @@ -1,819 +0,0 @@ -# Configuring log forwarding - - -The ClusterLogForwarder (CLF) allows users to configure forwarding of logs to various destinations. It provides a flexible way to select log messages from different sources, send them through a pipeline that can transform or filter them, and forward them to one or more outputs. -* Selects log messages using inputs -* Forwards logs to external destinations using outputs -* Filters, transforms, and drops log messages using filters -* Defines log forwarding pipelines connecting inputs, filters and outputs - -# Setting up log collection - -This release of Cluster Logging requires administrators to explicitly grant log collection permissions to the service account associated with ClusterLogForwarder. 
This was not required in previous releases for the legacy logging scenario consisting of a ClusterLogging and, optionally, a ClusterLogForwarder.logging.openshift.io resource. - -The Red Hat OpenShift Logging Operator provides collect-audit-logs, collect-application-logs, and collect-infrastructure-logs cluster roles, which enable the collector to collect audit logs, application logs, and infrastructure logs respectively. - -Setup log collection by binding the required cluster roles to your service account. - -## Legacy service accounts - -To use the existing legacy service account logcollector, create the following ClusterRoleBinding: - - -```terminal -$ oc adm policy add-cluster-role-to-user collect-application-logs system:serviceaccount:openshift-logging:logcollector -``` - - - -```terminal -$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs system:serviceaccount:openshift-logging:logcollector -``` - - -Additionally, create the following ClusterRoleBinding if collecting audit logs: - - -```terminal -$ oc adm policy add-cluster-role-to-user collect-audit-logs system:serviceaccount:openshift-logging:logcollector -``` - - -## Creating service accounts - -* The Red Hat OpenShift Logging Operator is installed in the openshift-logging namespace. -* You have administrator permissions. - -1. Create a service account for the collector. If you want to write logs to storage that requires a token for authentication, you must include a token in the service account. -2. Bind the appropriate cluster roles to the service account: -Example binding command - -```terminal -$ oc adm policy add-cluster-role-to-user system:serviceaccount:: -``` - - -### Cluster Role Binding for your Service Account - -The role_binding.yaml file binds the ClusterLogging operator’s ClusterRole to a specific ServiceAccount, allowing it to manage Kubernetes resources cluster-wide. - - -```yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRoleBinding -metadata: - name: manager-rolebinding -roleRef: 1 - apiGroup: rbac.authorization.k8s.io 2 - kind: ClusterRole 3 - name: cluster-logging-operator 4 -subjects: 5 - - kind: ServiceAccount 6 - name: cluster-logging-operator 7 - namespace: openshift-logging 8 -``` - - -roleRef: References the ClusterRole to which the binding applies. -apiGroup: Indicates the RBAC API group, specifying that the ClusterRole is part of Kubernetes' RBAC system. -kind: Specifies that the referenced role is a ClusterRole, which applies cluster-wide. -name: The name of the ClusterRole being bound to the ServiceAccount, here cluster-logging-operator. -subjects: Defines the entities (users or service accounts) that are being granted the permissions from the ClusterRole. -kind: Specifies that the subject is a ServiceAccount. -Name: The name of the ServiceAccount being granted the permissions. -namespace: Indicates the namespace where the ServiceAccount is located. - -### Writing application logs - -The write-application-logs-clusterrole.yaml file defines a ClusterRole that grants permissions to write application logs to the Loki logging application. - - -```yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole -metadata: - name: cluster-logging-write-application-logs -rules: 1 - - apiGroups: 2 - - loki.grafana.com 3 - resources: 4 - - application 5 - resourceNames: 6 - - logs 7 - verbs: 8 - - create 9 -Annotations -<1> rules: Specifies the permissions granted by this ClusterRole. 
-<2> apiGroups: Refers to the API group loki.grafana.com, which relates to the Loki logging system. -<3> loki.grafana.com: The API group for managing Loki-related resources. -<4> resources: The resource type that the ClusterRole grants permission to interact with. -<5> application: Refers to the application resources within the Loki logging system. -<6> resourceNames: Specifies the names of resources that this role can manage. -<7> logs: Refers to the log resources that can be created. -<8> verbs: The actions allowed on the resources. -<9> create: Grants permission to create new logs in the Loki system. -``` - - -### Writing audit logs - -The write-audit-logs-clusterrole.yaml file defines a ClusterRole that grants permissions to create audit logs in the Loki logging system. - - -```yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole -metadata: - name: cluster-logging-write-audit-logs -rules: 1 - - apiGroups: 2 - - loki.grafana.com 3 - resources: 4 - - audit 5 - resourceNames: 6 - - logs 7 - verbs: 8 - - create 9 -``` - - -rules: Defines the permissions granted by this ClusterRole. -apiGroups: Specifies the API group loki.grafana.com. -loki.grafana.com: The API group responsible for Loki logging resources. -resources: Refers to the resource type this role manages, in this case, audit. -audit: Specifies that the role manages audit logs within Loki. -resourceNames: Defines the specific resources that the role can access. -logs: Refers to the logs that can be managed under this role. -verbs: The actions allowed on the resources. -create: Grants permission to create new audit logs. - -### Writing infrastructure logs - -The write-infrastructure-logs-clusterrole.yaml file defines a ClusterRole that grants permission to create infrastructure logs in the Loki logging system. - - -```yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole -metadata: - name: cluster-logging-write-infrastructure-logs -rules: 1 - - apiGroups: 2 - - loki.grafana.com 3 - resources: 4 - - infrastructure 5 - resourceNames: 6 - - logs 7 - verbs: 8 - - create 9 -``` - - -rules: Specifies the permissions this ClusterRole grants. -apiGroups: Specifies the API group for Loki-related resources. -loki.grafana.com: The API group managing the Loki logging system. -resources: Defines the resource type that this role can interact with. -infrastructure: Refers to infrastructure-related resources that this role manages. -resourceNames: Specifies the names of resources this role can manage. -logs: Refers to the log resources related to infrastructure. -verbs: The actions permitted by this role. -create: Grants permission to create infrastructure logs in the Loki system. - -### ClusterLogForwarder editor role - -The clusterlogforwarder-editor-role.yaml file defines a ClusterRole that allows users to manage ClusterLogForwarders in OpenShift. - - -```yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole -metadata: - name: clusterlogforwarder-editor-role -rules: 1 - - apiGroups: 2 - - observability.openshift.io 3 - resources: 4 - - clusterlogforwarders 5 - verbs: 6 - - create 7 - - delete 8 - - get 9 - - list 10 - - patch 11 - - update 12 - - watch 13 -``` - - -rules: Specifies the permissions this ClusterRole grants. -apiGroups: Refers to the OpenShift-specific API group -obervability.openshift.io: The API group for managing observability resources, like logging. -resources: Specifies the resources this role can manage. -clusterlogforwarders: Refers to the log forwarding resources in OpenShift. 
-verbs: Specifies the actions allowed on the ClusterLogForwarders. -create: Grants permission to create new ClusterLogForwarders. -delete: Grants permission to delete existing ClusterLogForwarders. -get: Grants permission to retrieve information about specific ClusterLogForwarders. -list: Allows listing all ClusterLogForwarders. -patch: Grants permission to partially modify ClusterLogForwarders. -update: Grants permission to update existing ClusterLogForwarders. -watch: Grants permission to monitor changes to ClusterLogForwarders. - -# Modifying log level in collector - -To modify the log level in the collector, you can set the observability.openshift.io/log-level annotation to trace, debug, info, warn, error, and off. - - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - name: collector - annotations: - observability.openshift.io/log-level: debug -# ... -``` - - -# Managing the Operator - -The ClusterLogForwarder resource has a managementState field that controls whether the operator actively manages its resources or leaves them Unmanaged: - -Managed:: (default) The operator will drive the logging resources to match the desired state in the CLF spec. -Unmanaged:: The operator will not take any action related to the logging components. - -This allows administrators to temporarily pause log forwarding by setting managementState to Unmanaged. - -# Structure of the ClusterLogForwarder - -The CLF has a spec section that contains the following key components: - -Inputs:: Select log messages to be forwarded. Built-in input types application, infrastructure and audit forward logs from different parts of the cluster. You can also define custom inputs. -Outputs:: Define destinations to forward logs to. Each output has a unique name and type-specific configuration. -Pipelines:: Define the path logs take from inputs, through filters, to outputs. Pipelines have a unique name and consist of a list of input, output and filter names. -Filters:: Transform or drop log messages in the pipeline. Users can define filters that match certain log fields and drop or modify the messages. Filters are applied in the order specified in the pipeline. - -## Inputs - -Inputs are configured in an array under spec.inputs. There are three built-in input types: - -application:: Selects logs from all application containers, excluding those in infrastructure namespaces. -infrastructure:: Selects logs from nodes and from infrastructure components running in the following namespaces: -* default -* kube -* openshift -* Containing the kube- or openshift- prefix -audit:: Selects logs from the OpenShift API server audit logs, Kubernetes API server audit logs, ovn audit logs, and node audit logs from auditd. - -Users can define custom inputs of type application that select logs from specific namespaces or using pod labels. - -## Outputs - -Outputs are configured in an array under spec.outputs. Each output must have a unique name and a type. Supported types are: - -azureMonitor:: Forwards logs to Azure Monitor. -cloudwatch:: Forwards logs to AWS CloudWatch. -elasticsearch:: Forwards logs to an external Elasticsearch instance. -googleCloudLogging:: Forwards logs to Google Cloud Logging. -http:: Forwards logs to a generic HTTP endpoint. -kafka:: Forwards logs to a Kafka broker. -loki:: Forwards logs to a Loki logging backend. -lokistack:: Forwards logs to the logging supported combination of Loki and web proxy with Red Hat OpenShift Container Platform authentication integration. 
LokiStack's proxy uses Red Hat OpenShift Container Platform authentication to enforce multi-tenancy -otlp:: Forwards logs using the OpenTelemetry Protocol. -splunk:: Forwards logs to Splunk. -syslog:: Forwards logs to an external syslog server. - -Each output type has its own configuration fields. - -# Configuring OTLP output - -Cluster administrators can use the OpenTelemetry Protocol (OTLP) output to collect and forward logs to OTLP receivers. The OTLP output uses the specification defined by the OpenTelemetry Observability framework to send data over HTTP with JSON encoding. - - -[IMPORTANT] ----- -{FeatureName} is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. -For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. ----- - -* Create or edit a ClusterLogForwarder custom resource (CR) to enable forwarding using OTLP by adding the following annotation: -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - annotations: - observability.openshift.io/tech-preview-otlp-output: "enabled" 1 - name: clf-otlp -spec: - serviceAccount: - name: - outputs: - - name: otlp - type: otlp - otlp: - tuning: - compression: gzip - deliveryMode: AtLeastOnce - maxRetryDuration: 20 - maxWrite: 10M - minRetryDuration: 5 - url: 2 - pipelines: - - inputRefs: - - application - - infrastructure - - audit - name: otlp-logs - outputRefs: - - otlp -``` - -Use this annotation to enable the OpenTelemetry Protocol (OTLP) output, which is a Technology Preview feature. -This URL must be absolute and is a placeholder for the OTLP endpoint where logs are sent. - - -[NOTE] ----- -The OTLP output uses the OpenTelemetry data model, which is different from the ViaQ data model that is used by other output types. It adheres to the OTLP using OpenTelemetry Semantic Conventions defined by the OpenTelemetry Observability framework. ----- - -## Pipelines - -Pipelines are configured in an array under spec.pipelines. Each pipeline must have a unique name and consists of: - -inputRefs:: Names of inputs whose logs should be forwarded to this pipeline. -outputRefs:: Names of outputs to send logs to. -filterRefs:: (optional) Names of filters to apply. - -The order of filterRefs matters, as they are applied sequentially. Earlier filters can drop messages that will not be processed by later filters. - -## Filters - -Filters are configured in an array under spec.filters. They can match incoming log messages based on the value of structured fields and modify or drop them. - -Administrators can configure the following types of filters: - -## Enabling multi-line exception detection - -Enables multi-line error detection of container logs. - - -[WARNING] ----- -Enabling this feature could have performance implications and may require additional computing resources or alternate logging solutions. ----- - -Log parsers often incorrectly identify separate lines of the same exception as separate exceptions. This leads to extra log entries and an incomplete or inaccurate view of the traced information. 
- - -```java -java.lang.NullPointerException: Cannot invoke "String.toString()" because "" is null - at testjava.Main.handle(Main.java:47) - at testjava.Main.printMe(Main.java:19) - at testjava.Main.main(Main.java:10) -``` - - -* To enable logging to detect multi-line exceptions and reassemble them into a single log entry, ensure that the ClusterLogForwarder Custom Resource (CR) contains a detectMultilineErrors field under the .spec.filters. - - -```yaml -apiVersion: "observability.openshift.io/v1" -kind: ClusterLogForwarder -metadata: - name: - namespace: -spec: - serviceAccount: - name: - filters: - - name: - type: detectMultilineException - pipelines: - - inputRefs: - - - name: - filterRefs: - - - outputRefs: - - -``` - - -### Details - -When log messages appear as a consecutive sequence forming an exception stack trace, they are combined into a single, unified log record. The first log message’s content is replaced with the concatenated content of all the message fields in the sequence. - -The collector supports the following languages: - -* Java -* JS -* Ruby -* Python -* Golang -* PHP -* Dart - -## Configuring content filters to drop unwanted log records - -When the drop filter is configured, the log collector evaluates log streams according to the filters before forwarding. The collector drops unwanted log records that match the specified configuration. - -1. Add a configuration for a filter to the filters spec in the ClusterLogForwarder CR. - -The following example shows how to configure the ClusterLogForwarder CR to drop log records based on regular expressions: -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: -# ... -spec: - serviceAccount: - name: - filters: - - name: - type: drop 1 - drop: 2 - - test: 3 - - field: .kubernetes.labels."foo-bar/baz" 4 - matches: .+ 5 - - field: .kubernetes.pod_name - notMatches: "my-pod" 6 - pipelines: - - name: 7 - filterRefs: [""] -# ... -``` - -Specifies the type of filter. The drop filter drops log records that match the filter configuration. -Specifies configuration options for applying the drop filter. -Specifies the configuration for tests that are used to evaluate whether a log record is dropped. -* If all the conditions specified for a test are true, the test passes and the log record is dropped. -* When multiple tests are specified for the drop filter configuration, if any of the tests pass, the record is dropped. -* If there is an error evaluating a condition, for example, the field is missing from the log record being evaluated, that condition evaluates to false. -Specifies a dot-delimited field path, which is a path to a field in the log record. The path can contain alpha-numeric characters and underscores (a-zA-Z0-9_), for example, .kubernetes.namespace_name. If segments contain characters outside of this range, the segment must be in quotes, for example, .kubernetes.labels."foo.bar-bar/baz". You can include multiple field paths in a single test configuration, but they must all evaluate to true for the test to pass and the drop filter to be applied. -Specifies a regular expression. If log records match this regular expression, they are dropped. You can set either the matches or notMatches condition for a single field path, but not both. -Specifies a regular expression. If log records do not match this regular expression, they are dropped. You can set either the matches or notMatches condition for a single field path, but not both. 
-Specifies the pipeline that the drop filter is applied to. -2. Apply the ClusterLogForwarder CR by running the following command: - -```terminal -$ oc apply -f .yaml -``` - - -The following additional example shows how you can configure the drop filter to only keep higher priority log records: - - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: -# ... -spec: - serviceAccount: - name: - filters: - - name: important - type: drop - drop: - - test: - - field: .message - notMatches: "(?i)critical|error" - - field: .level - matches: "info|warning" -# ... -``` - - -In addition to including multiple field paths in a single test configuration, you can also include additional tests that are treated as OR checks. In the following example, records are dropped if either test configuration evaluates to true. However, for the second test configuration, both field specs must be true for it to be evaluated to true: - - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: -# ... -spec: - serviceAccount: - name: - filters: - - name: important - type: drop - drop: - - test: - - field: .kubernetes.namespace_name - matches: "^open" - - test: - - field: .log_type - matches: "application" - - field: .kubernetes.pod_name - notMatches: "my-pod" -# ... -``` - - -## Overview of API audit filter - -OpenShift API servers generate audit events for each API call, detailing the request, response, and the identity of the requester, leading to large volumes of data. The API Audit filter uses rules to enable the exclusion of non-essential events and the reduction of event size, facilitating a more manageable audit trail. Rules are checked in order, and checking stops at the first match. The amount of data that is included in an event is determined by the value of the level field: - -* None: The event is dropped. -* Metadata: Audit metadata is included, request and response bodies are removed. -* Request: Audit metadata and the request body are included, the response body is removed. -* RequestResponse: All data is included: metadata, request body and response body. The response body can be very large. For example, oc get pods -A generates a response body containing the YAML description of every pod in the cluster. - -The ClusterLogForwarder custom resource (CR) uses the same format as the standard Kubernetes audit policy, while providing the following additional functions: - -Wildcards:: Names of users, groups, namespaces, and resources can have a leading or trailing * asterisk character. For example, the namespace openshift-\* matches openshift-apiserver or openshift-authentication. Resource \*/status matches Pod/status or Deployment/status. -Default Rules:: Events that do not match any rule in the policy are filtered as follows: -* Read-only system events such as get, list, and watch are dropped. -* Service account write events that occur within the same namespace as the service account are dropped. -* All other events are forwarded, subject to any configured rate limits. - -To disable these defaults, either end your rules list with a rule that has only a level field or add an empty rule. - -Omit Response Codes:: A list of integer status codes to omit. You can drop events based on the HTTP status code in the response by using the OmitResponseCodes field, which lists HTTP status codes for which no events are created. The default value is [404, 409, 422, 429]. If the value is an empty list, [], then no status codes are omitted. 
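For example, the following minimal sketch combines the level rules and the omit response codes setting in a single kubeAPIAudit filter. The filter, pipeline, and output names are illustrative, and the omitResponseCodes key follows the field description above:

```yaml
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
# ...
spec:
  serviceAccount:
    name: collector             # illustrative service account name
  filters:
    - name: metadata-only       # illustrative filter name
      type: kubeAPIAudit
      kubeAPIAudit:
        omitResponseCodes: []   # an empty list means no events are dropped by status code
        rules:
          - level: Metadata     # a catch-all, level-only rule also disables the default rules
  pipelines:
    - name: audit-metadata
      inputRefs:
        - audit
      filterRefs:
        - metadata-only
      outputRefs:
        - my-output             # illustrative output that is defined under spec.outputs
# ...
```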
- -The ClusterLogForwarder CR audit policy acts in addition to the Red Hat OpenShift Container Platform audit policy. The ClusterLogForwarder CR audit filter changes what the log collector forwards and provides the ability to filter by verb, user, group, namespace, or resource. You can create multiple filters to send different summaries of the same audit stream to different places. For example, you can send a detailed stream to the local cluster log store and a less detailed stream to a remote site. - - -[NOTE] ----- -You must have a cluster role collect-audit-logs to collect the audit logs. The following example provided is intended to illustrate the range of rules possible in an audit policy and is not a recommended configuration. ----- - - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - name: - namespace: -spec: - serviceAccount: - name: - pipelines: - - name: my-pipeline - inputRefs: audit 1 - filterRefs: my-policy 2 - filters: - - name: my-policy - type: kubeAPIAudit - kubeAPIAudit: - # Don't generate audit events for all requests in RequestReceived stage. - omitStages: - - "RequestReceived" - - rules: - # Log pod changes at RequestResponse level - - level: RequestResponse - resources: - - group: "" - resources: ["pods"] - - # Log "pods/log", "pods/status" at Metadata level - - level: Metadata - resources: - - group: "" - resources: ["pods/log", "pods/status"] - - # Don't log requests to a configmap called "controller-leader" - - level: None - resources: - - group: "" - resources: ["configmaps"] - resourceNames: ["controller-leader"] - - # Don't log watch requests by the "system:kube-proxy" on endpoints or services - - level: None - users: ["system:kube-proxy"] - verbs: ["watch"] - resources: - - group: "" # core API group - resources: ["endpoints", "services"] - - # Don't log authenticated requests to certain non-resource URL paths. - - level: None - userGroups: ["system:authenticated"] - nonResourceURLs: - - "/api*" # Wildcard matching. - - "/version" - - # Log the request body of configmap changes in kube-system. - - level: Request - resources: - - group: "" # core API group - resources: ["configmaps"] - # This rule only applies to resources in the "kube-system" namespace. - # The empty string "" can be used to select non-namespaced resources. - namespaces: ["kube-system"] - - # Log configmap and secret changes in all other namespaces at the Metadata level. - - level: Metadata - resources: - - group: "" # core API group - resources: ["secrets", "configmaps"] - - # Log all other resources in core and extensions at the Request level. - - level: Request - resources: - - group: "" # core API group - - group: "extensions" # Version of group should NOT be included. - - # A catch-all rule to log all other requests at the Metadata level. - - level: Metadata -``` - - -The log types that are collected. The value for this field can be audit for audit logs, application for application logs, infrastructure for infrastructure logs, or a named input that has been defined for your application. -The name of your audit policy. - -## Filtering application logs at input by including the label expressions or a matching label key and values - -You can include the application logs based on the label expressions or a matching label key and its values by using the input selector. - -1. Add a configuration for a filter to the input spec in the ClusterLogForwarder CR. 
- -The following example shows how to configure the ClusterLogForwarder CR to include logs based on label expressions or matched label key/values: -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -# ... -spec: - serviceAccount: - name: - inputs: - - name: mylogs - application: - selector: - matchExpressions: - - key: env 1 - operator: In 2 - values: ["prod", "qa"] 3 - - key: zone - operator: NotIn - values: ["east", "west"] - matchLabels: 4 - app: one - name: app1 - type: application -# ... -``` - -Specifies the label key to match. -Specifies the operator. Valid values include: In, NotIn, Exists, and DoesNotExist. -Specifies an array of string values. If the operator value is either Exists or DoesNotExist, the value array must be empty. -Specifies an exact key or value mapping. -2. Apply the ClusterLogForwarder CR by running the following command: - -```terminal -$ oc apply -f .yaml -``` - - -## Configuring content filters to prune log records - -When the prune filter is configured, the log collector evaluates log streams according to the filters before forwarding. The collector prunes log records by removing low value fields such as pod annotations. - -1. Add a configuration for a filter to the prune spec in the ClusterLogForwarder CR. - -The following example shows how to configure the ClusterLogForwarder CR to prune log records based on field paths: - -[IMPORTANT] ----- -If both are specified, records are pruned based on the notIn array first, which takes precedence over the in array. After records have been pruned by using the notIn array, they are then pruned by using the in array. ----- -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: -# ... -spec: - serviceAccount: - name: - filters: - - name: - type: prune 1 - prune: 2 - in: [.kubernetes.annotations, .kubernetes.namespace_id] 3 - notIn: [.kubernetes,.log_type,.message,."@timestamp"] 4 - pipelines: - - name: 5 - filterRefs: [""] -# ... -``` - -Specify the type of filter. The prune filter prunes log records by configured fields. -Specify configuration options for applying the prune filter. The in and notIn fields are specified as arrays of dot-delimited field paths, which are paths to fields in log records. These paths can contain alpha-numeric characters and underscores (a-zA-Z0-9_), for example, .kubernetes.namespace_name. If segments contain characters outside of this range, the segment must be in quotes, for example, .kubernetes.labels."foo.bar-bar/baz". -Optional: Any fields that are specified in this array are removed from the log record. -Optional: Any fields that are not specified in this array are removed from the log record. -Specify the pipeline that the prune filter is applied to. - -[NOTE] ----- -The filters exempts the log_type, .log_source, and .message fields. ----- -2. Apply the ClusterLogForwarder CR by running the following command: - -```terminal -$ oc apply -f .yaml -``` - - -# Filtering the audit and infrastructure log inputs by source - -You can define the list of audit and infrastructure sources to collect the logs by using the input selector. - -1. Add a configuration to define the audit and infrastructure sources in the ClusterLogForwarder CR. 
- -The following example shows how to configure the ClusterLogForwarder CR to define audit and infrastructure sources: -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -# ... -spec: - serviceAccount: - name: - inputs: - - name: mylogs1 - type: infrastructure - infrastructure: - sources: 1 - - node - - name: mylogs2 - type: audit - audit: - sources: 2 - - kubeAPI - - openshiftAPI - - ovn -# ... -``` - -Specifies the list of infrastructure sources to collect. The valid sources include: -* node: Journal log from the node -* container: Logs from the workloads deployed in the namespaces -Specifies the list of audit sources to collect. The valid sources include: -* kubeAPI: Logs from the Kubernetes API servers -* openshiftAPI: Logs from the OpenShift API servers -* auditd: Logs from a node auditd service -* ovn: Logs from an open virtual network service -2. Apply the ClusterLogForwarder CR by running the following command: - -```terminal -$ oc apply -f .yaml -``` - - -# Filtering application logs at input by including or excluding the namespace or container name - -You can include or exclude the application logs based on the namespace and container name by using the input selector. - -1. Add a configuration to include or exclude the namespace and container names in the ClusterLogForwarder CR. - -The following example shows how to configure the ClusterLogForwarder CR to include or exclude namespaces and container names: -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -# ... -spec: - serviceAccount: - name: - inputs: - - name: mylogs - application: - includes: - - namespace: "my-project" 1 - container: "my-container" 2 - excludes: - - container: "other-container*" 3 - namespace: "other-namespace" 4 - type: application -# ... -``` - -Specifies that the logs are only collected from these namespaces. -Specifies that the logs are only collected from these containers. -Specifies the pattern of namespaces to ignore when collecting the logs. -Specifies the set of containers to ignore when collecting the logs. - -[NOTE] ----- -The excludes field takes precedence over the includes field. ----- -2. Apply the ClusterLogForwarder CR by running the following command: - -```terminal -$ oc apply -f .yaml -``` diff --git a/ocp-product-docs-plaintext/4.15/observability/logging/logging-6.1/log6x-configuring-lokistack-otlp-6.1.txt b/ocp-product-docs-plaintext/4.15/observability/logging/logging-6.1/log6x-configuring-lokistack-otlp-6.1.txt deleted file mode 100644 index 94dfcddb..00000000 --- a/ocp-product-docs-plaintext/4.15/observability/logging/logging-6.1/log6x-configuring-lokistack-otlp-6.1.txt +++ /dev/null @@ -1,181 +0,0 @@ -# OTLP data ingestion in Loki - - -Logging 6.1 enables an API endpoint using the OpenTelemetry Protocol (OTLP). As OTLP is a standardized format not specifically designed for Loki, it requires additional configuration on Loki's side to map OpenTelemetry's data format to Loki's data model. OTLP lacks concepts such as stream labels or structured metadata. Instead, OTLP provides metadata about log entries as attributes, grouped into three categories: -* Resource -* Scope -* Log -This allows metadata to be set for multiple entries simultaneously or individually as needed. - -# Configuring LokiStack for OTLP data ingestion - - -[IMPORTANT] ----- -{FeatureName} is a Technology Preview feature only. 
Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. -For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. ----- - -To configure a LokiStack custom resource (CR) for OTLP ingestion, follow these steps: - -* Ensure that your Loki setup supports structured metadata, introduced in schema version 13 to enable OTLP log ingestion. - -1. Set the schema version: -* When creating a new LokiStack CR, set version: v13 in the storage schema configuration. - -[NOTE] ----- -For existing configurations, add a new schema entry with version: v13 and an effectiveDate in the future. For more information on updating schema versions, see Upgrading Schemas (Grafana documentation). ----- -2. Configure the storage schema as follows: -Example configure storage schema - -```yaml -# ... -spec: - storage: - schemas: - - version: v13 - effectiveDate: 2024-10-25 -``` - - -Once the effectiveDate has passed, the v13 schema takes effect, enabling your LokiStack to store structured metadata. - -# Attribute mapping - -When the Loki Operator is set to openshift-logging mode, it automatically applies a default set of attribute mappings. These mappings align specific OTLP attributes with Loki’s stream labels and structured metadata. - -For typical setups, these default mappings should be sufficient. However, you might need to customize attribute mapping in the following cases: - -* Using a custom Collector: If your setup includes a custom collector that generates additional attributes, consider customizing the mapping to ensure these attributes are retained in Loki. -* Adjusting attribute detail levels: If the default attribute set is more detailed than necessary, you can reduce it to essential attributes only. This can avoid excessive data storage and streamline the logging process. - - -[IMPORTANT] ----- -Attributes that are not mapped to either stream labels or structured metadata are not stored in Loki. ----- - -## Custom attribute mapping for OpenShift - -When using the Loki Operator in openshift-logging mode, attribute mapping follow OpenShift defaults, but custom mappings can be configured to adjust these. Custom mappings allow further configurations to meet specific needs. - -In openshift-logging mode, custom attribute mappings can be configured globally for all tenants or for individual tenants as needed. When custom mappings are defined, they are appended to the OpenShift defaults. If default recommended labels are not required, they can be disabled in the tenant configuration. - - -[NOTE] ----- -A major difference between the Loki Operator and Loki itself lies in inheritance handling. Loki only copies default_resource_attributes_as_index_labels to tenants by default, while the Loki Operator applies the entire global configuration to each tenant in openshift-logging mode. ----- - -Within LokiStack, attribute mapping configuration is managed through the limits setting: - - -```yaml -# ... -spec: - limits: - global: - otlp: {} 1 - tenants: - application: - otlp: {} 2 -``` - - -Global OTLP attribute configuration. -OTLP attribute configuration for the application tenant within openshift-logging mode. 
- - -[NOTE] ----- -Both global and per-tenant OTLP configurations can map attributes to stream labels or structured metadata. At least one stream label is required to save a log entry to Loki storage, so ensure this configuration meets that requirement. ----- - -Stream labels derive only from resource-level attributes, which the LokiStack resource structure reflects: - - -```yaml -spec: - limits: - global: - otlp: - streamLabels: - resourceAttributes: - - name: "k8s.namespace.name" - - name: "k8s.pod.name" - - name: "k8s.container.name" -``` - - -Structured metadata, in contrast, can be generated from resource, scope or log-level attributes: - - -```yaml -# ... -spec: - limits: - global: - otlp: - streamLabels: - # ... - structuredMetadata: - resourceAttributes: - - name: "process.command_line" - - name: "k8s\\.pod\\.labels\\..+" - regex: true - scopeAttributes: - - name: "service.name" - logAttributes: - - name: "http.route" -``` - - - -[TIP] ----- -Use regular expressions by setting regex: true for attributes names when mapping similar attributes in Loki. ----- - - -[IMPORTANT] ----- -Avoid using regular expressions for stream labels, as this can increase data volume. ----- - -## Customizing OpenShift defaults - -In openshift-logging mode, certain attributes are required and cannot be removed from the configuration due to their role in OpenShift functions. Other attributes, labeled recommended, might be disabled if performance is impacted. - -When using the openshift-logging mode without custom attributes, you can achieve immediate compatibility with OpenShift tools. If additional attributes are needed as stream labels or structured metadata, use custom configuration. Custom configurations can merge with default configurations. - -## Removing recommended attributes - -To reduce default attributes in openshift-logging mode, disable recommended attributes: - - -```yaml -# ... -spec: - tenants: - mode: openshift-logging - openshift: - otlp: - disableRecommendedAttributes: true 1 -``` - - -Set disableRecommendedAttributes: true to remove recommended attributes, which limits default attributes to the required attributes. - - -[NOTE] ----- -This option is beneficial if the default attributes causes performance or storage issues. This setting might negatively impact query performance, as it removes default stream labels. You should pair this option with a custom attribute configuration to retain attributes essential for queries. ----- - -# Additional resources - -* Loki labels -* Structured metadata -* OpenTelemetry attribute \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.15/observability/logging/logging-6.1/log6x-loki-6.1.txt b/ocp-product-docs-plaintext/4.15/observability/logging/logging-6.1/log6x-loki-6.1.txt deleted file mode 100644 index e66a4654..00000000 --- a/ocp-product-docs-plaintext/4.15/observability/logging/logging-6.1/log6x-loki-6.1.txt +++ /dev/null @@ -1,770 +0,0 @@ -# Storing logs with LokiStack - - -You can configure a LokiStack CR to store application, audit, and infrastructure-related logs. -Loki is a horizontally scalable, highly available, multi-tenant log aggregation system offered as a GA log store for logging for Red Hat OpenShift that can be visualized with the OpenShift Observability UI. The Loki configuration provided by OpenShift Logging is a short-term log store designed to enable users to perform fast troubleshooting with the collected logs. 
For that purpose, the logging for Red Hat OpenShift configuration of Loki has short-term storage, and is optimized for very recent queries. - -[IMPORTANT] ----- -For long-term storage or queries over a long time period, users should look to log stores external to their cluster. Loki sizing is only tested and supported for short term storage, for a maximum of 30 days. ----- - -# Loki deployment sizing - -Sizing for Loki follows the format of 1x. where the value 1x is number of instances and specifies performance capabilities. - -The 1x.pico configuration defines a single Loki deployment with minimal resource and limit requirements, offering high availability (HA) support for all Loki components. This configuration is suited for deployments that do not require a single replication factor or auto-compaction. - -Disk requests are similar across size configurations, allowing customers to test different sizes to determine the best fit for their deployment needs. - - -[IMPORTANT] ----- -It is not possible to change the number 1x for the deployment size. ----- - - - -# Prerequisites - -* You have installed the Loki Operator by using the CLI or web console. -* You have a serviceAccount in the same namespace in which you create the ClusterLogForwarder. -* The serviceAccount is assigned collect-audit-logs, collect-application-logs, and collect-infrastructure-logs cluster roles. - -# Core Setup and Configuration - -Role-based access controls, basic monitoring, and pod placement to deploy Loki. - -# Authorizing LokiStack rules RBAC permissions - -Administrators can allow users to create and manage their own alerting and recording rules by binding cluster roles to usernames. -Cluster roles are defined as ClusterRole objects that contain necessary role-based access control (RBAC) permissions for users. - -The following cluster roles for alerting and recording rules are available for LokiStack: - - - -## Examples - -To apply cluster roles for a user, you must bind an existing cluster role to a specific username. - -Cluster roles can be cluster or namespace scoped, depending on which type of role binding you use. -When a RoleBinding object is used, as when using the oc adm policy add-role-to-user command, the cluster role only applies to the specified namespace. -When a ClusterRoleBinding object is used, as when using the oc adm policy add-cluster-role-to-user command, the cluster role applies to all namespaces in the cluster. - -The following example command gives the specified user create, read, update and delete (CRUD) permissions for alerting rules in a specific namespace in the cluster: - - -```terminal -$ oc adm policy add-role-to-user alertingrules.loki.grafana.com-v1-admin -n -``` - - -The following command gives the specified user administrator permissions for alerting rules in all namespaces: - - -```terminal -$ oc adm policy add-cluster-role-to-user alertingrules.loki.grafana.com-v1-admin -``` - - -# Creating a log-based alerting rule with Loki - -The AlertingRule CR contains a set of specifications and webhook validation definitions to declare groups of alerting rules for a single LokiStack instance. In addition, the webhook validation definition provides support for rule validation conditions: - -* If an AlertingRule CR includes an invalid interval period, it is an invalid alerting rule -* If an AlertingRule CR includes an invalid for period, it is an invalid alerting rule. -* If an AlertingRule CR includes an invalid LogQL expr, it is an invalid alerting rule. 
-* If an AlertingRule CR includes two groups with the same name, it is an invalid alerting rule. -* If none of the above applies, an alerting rule is considered valid. - - - -1. Create an AlertingRule custom resource (CR): -Example infrastructure AlertingRule CR - -```yaml - apiVersion: loki.grafana.com/v1 - kind: AlertingRule - metadata: - name: loki-operator-alerts - namespace: openshift-operators-redhat 1 - labels: 2 - openshift.io/: "true" - spec: - tenantID: "infrastructure" 3 - groups: - - name: LokiOperatorHighReconciliationError - rules: - - alert: HighPercentageError - expr: | 4 - sum(rate({kubernetes_namespace_name="openshift-operators-redhat", kubernetes_pod_name=~"loki-operator-controller-manager.*"} |= "error" [1m])) by (job) - / - sum(rate({kubernetes_namespace_name="openshift-operators-redhat", kubernetes_pod_name=~"loki-operator-controller-manager.*"}[1m])) by (job) - > 0.01 - for: 10s - labels: - severity: critical 5 - annotations: - summary: High Loki Operator Reconciliation Errors 6 - description: High Loki Operator Reconciliation Errors 7 -``` - -The namespace where this AlertingRule CR is created must have a label matching the LokiStack spec.rules.namespaceSelector definition. -The labels block must match the LokiStack spec.rules.selector definition. -AlertingRule CRs for infrastructure tenants are only supported in the openshift-*, kube-\*, or default namespaces. -The value for kubernetes_namespace_name: must match the value for metadata.namespace. -The value of this mandatory field must be critical, warning, or info. -This field is mandatory. -This field is mandatory. -Example application AlertingRule CR - -```yaml - apiVersion: loki.grafana.com/v1 - kind: AlertingRule - metadata: - name: app-user-workload - namespace: app-ns 1 - labels: 2 - openshift.io/: "true" - spec: - tenantID: "application" - groups: - - name: AppUserWorkloadHighError - rules: - - alert: - expr: | 3 - sum(rate({kubernetes_namespace_name="app-ns", kubernetes_pod_name=~"podName.*"} |= "error" [1m])) by (job) - for: 10s - labels: - severity: critical 4 - annotations: - summary: 5 - description: 6 -``` - -The namespace where this AlertingRule CR is created must have a label matching the LokiStack spec.rules.namespaceSelector definition. -The labels block must match the LokiStack spec.rules.selector definition. -Value for kubernetes_namespace_name: must match the value for metadata.namespace. -The value of this mandatory field must be critical, warning, or info. -The value of this mandatory field is a summary of the rule. -The value of this mandatory field is a detailed description of the rule. -2. Apply the AlertingRule CR: - -```terminal -$ oc apply -f .yaml -``` - - -# Configuring Loki to tolerate memberlist creation failure - -In an Red Hat OpenShift Container Platform cluster, administrators generally use a non-private IP network range. As a result, the LokiStack memberlist configuration fails because, by default, it only uses private IP networks. - -As an administrator, you can select the pod network for the memberlist configuration. You can modify the LokiStack custom resource (CR) to use the podIP address in the hashRing spec. 
To configure the LokiStack CR, use the following command: - - -```terminal -$ oc patch LokiStack logging-loki -n openshift-logging --type=merge -p '{"spec": {"hashRing":{"memberlist":{"instanceAddrType":"podIP"},"type":"memberlist"}}}' -``` - - - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: -# ... - hashRing: - type: memberlist - memberlist: - instanceAddrType: podIP -# ... -``` - - -# Enabling stream-based retention with Loki - -You can configure retention policies based on log streams. Rules for these may be set globally, per-tenant, or both. If you configure both, tenant rules apply before global rules. - - -[IMPORTANT] ----- -If there is no retention period defined on the s3 bucket or in the LokiStack custom resource (CR), then the logs are not pruned and they stay in the s3 bucket forever, which might fill up the s3 storage. ----- - - -[NOTE] ----- -Schema v13 is recommended. ----- - -1. Create a LokiStack CR: -* Enable stream-based retention globally as shown in the following example: -Example global stream-based retention for AWS - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: - limits: - global: 1 - retention: 2 - days: 20 - streams: - - days: 4 - priority: 1 - selector: '{kubernetes_namespace_name=~"test.+"}' 3 - - days: 1 - priority: 1 - selector: '{log_type="infrastructure"}' - managementState: Managed - replicationFactor: 1 - size: 1x.small - storage: - schemas: - - effectiveDate: "2020-10-11" - version: v13 - secret: - name: logging-loki-s3 - type: aws - storageClassName: gp3-csi - tenants: - mode: openshift-logging -``` - -Sets retention policy for all log streams. Note: This field does not impact the retention period for stored logs in object storage. -Retention is enabled in the cluster when this block is added to the CR. -Contains the LogQL query used to define the log stream.spec: -limits: -* Enable stream-based retention per-tenant basis as shown in the following example: -Example per-tenant stream-based retention for AWS - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: - limits: - global: - retention: - days: 20 - tenants: 1 - application: - retention: - days: 1 - streams: - - days: 4 - selector: '{kubernetes_namespace_name=~"test.+"}' 2 - infrastructure: - retention: - days: 5 - streams: - - days: 1 - selector: '{kubernetes_namespace_name=~"openshift-cluster.+"}' - managementState: Managed - replicationFactor: 1 - size: 1x.small - storage: - schemas: - - effectiveDate: "2020-10-11" - version: v13 - secret: - name: logging-loki-s3 - type: aws - storageClassName: gp3-csi - tenants: - mode: openshift-logging -``` - -Sets retention policy by tenant. Valid tenant types are application, audit, and infrastructure. -Contains the LogQL query used to define the log stream. -2. Apply the LokiStack CR: - -```terminal -$ oc apply -f .yaml -``` - - -# Loki pod placement - -You can control which nodes the Loki pods run on, and prevent other workloads from using those nodes, by using tolerations or node selectors on the pods. - -You can apply tolerations to the log store pods with the LokiStack custom resource (CR) and apply taints to a node with the node specification. A taint on a node is a key:value pair that instructs the node to repel all pods that do not allow the taint. 
Using a specific key:value pair that is not on other pods ensures that only the log store pods can run on that node. - - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: -# ... - template: - compactor: 1 - nodeSelector: - node-role.kubernetes.io/infra: "" 2 - distributor: - nodeSelector: - node-role.kubernetes.io/infra: "" - gateway: - nodeSelector: - node-role.kubernetes.io/infra: "" - indexGateway: - nodeSelector: - node-role.kubernetes.io/infra: "" - ingester: - nodeSelector: - node-role.kubernetes.io/infra: "" - querier: - nodeSelector: - node-role.kubernetes.io/infra: "" - queryFrontend: - nodeSelector: - node-role.kubernetes.io/infra: "" - ruler: - nodeSelector: - node-role.kubernetes.io/infra: "" -# ... -``` - - -Specifies the component pod type that applies to the node selector. -Specifies the pods that are moved to nodes containing the defined label. - - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: -# ... - template: - compactor: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - distributor: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - indexGateway: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - ingester: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - querier: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - queryFrontend: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - ruler: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - gateway: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved -# ... -``` - - -To configure the nodeSelector and tolerations fields of the LokiStack (CR), you can use the oc explain command to view the description and fields for a particular resource: - - -```terminal -$ oc explain lokistack.spec.template -``` - - - -```text -KIND: LokiStack -VERSION: loki.grafana.com/v1 - -RESOURCE: template - -DESCRIPTION: - Template defines the resource/limits/tolerations/nodeselectors per - component - -FIELDS: - compactor - Compactor defines the compaction component spec. 
- - distributor - Distributor defines the distributor component spec. -... -``` - - -For more detailed information, you can add a specific field: - - -```terminal -$ oc explain lokistack.spec.template.compactor -``` - - - -```text -KIND: LokiStack -VERSION: loki.grafana.com/v1 - -RESOURCE: compactor - -DESCRIPTION: - Compactor defines the compaction component spec. - -FIELDS: - nodeSelector - NodeSelector defines the labels required by a node to schedule the - component onto it. -... -``` - - -## Enhanced Reliability and Performance - -Configurations to ensure Loki’s reliability and efficiency in production. - -## Enabling authentication to cloud-based log stores using short-lived tokens - -Workload identity federation enables authentication to cloud-based log stores using short-lived tokens. - -* Use one of the following options to enable authentication: -* If you use the Red Hat OpenShift Container Platform web console to install the Loki Operator, clusters that use short-lived tokens are automatically detected. You are prompted to create roles and supply the data required for the Loki Operator to create a CredentialsRequest object, which populates a secret. -* If you use the OpenShift CLI (`oc`) to install the Loki Operator, you must manually create a Subscription object using the appropriate template for your storage provider, as shown in the following examples. This authentication strategy is only supported for the storage providers indicated. -Example Azure sample subscription - -```yaml -apiVersion: operators.coreos.com/v1alpha1 -kind: Subscription -metadata: - name: loki-operator - namespace: openshift-operators-redhat -spec: - channel: "stable-6.0" - installPlanApproval: Manual - name: loki-operator - source: redhat-operators - sourceNamespace: openshift-marketplace - config: - env: - - name: CLIENTID - value: - - name: TENANTID - value: - - name: SUBSCRIPTIONID - value: - - name: REGION - value: -``` - -Example AWS sample subscription - -```yaml -apiVersion: operators.coreos.com/v1alpha1 -kind: Subscription -metadata: - name: loki-operator - namespace: openshift-operators-redhat -spec: - channel: "stable-6.0" - installPlanApproval: Manual - name: loki-operator - source: redhat-operators - sourceNamespace: openshift-marketplace - config: - env: - - name: ROLEARN - value: -``` - - -## Configuring Loki to tolerate node failure - -The Loki Operator supports setting pod anti-affinity rules to request that pods of the same component are scheduled on different available nodes in the cluster. - -Affinity is a property of pods that controls the nodes on which they prefer to be scheduled. Anti-affinity is a property of pods -that prevents a pod from being scheduled on a node. - -In Red Hat OpenShift Container Platform, pod affinity and pod anti-affinity allow you to constrain which nodes your pod is eligible to be scheduled on based on the key-value labels on other pods. - -The Operator sets default, preferred podAntiAffinity rules for all Loki components, which includes the compactor, distributor, gateway, indexGateway, ingester, querier, queryFrontend, and ruler components. - -You can override the preferred podAntiAffinity settings for Loki components by configuring required settings in the requiredDuringSchedulingIgnoredDuringExecution field: - - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: -# ... - template: - ingester: - podAntiAffinity: - # ... 
- requiredDuringSchedulingIgnoredDuringExecution: 1 - - labelSelector: - matchLabels: 2 - app.kubernetes.io/component: ingester - topologyKey: kubernetes.io/hostname -# ... -``` - - -The stanza to define a required rule. -The key-value pair (label) that must be matched to apply the rule. - -## LokiStack behavior during cluster restarts - -When an Red Hat OpenShift Container Platform cluster is restarted, LokiStack ingestion and the query path continue to operate within the available CPU and memory resources available for the node. This means that there is no downtime for the LokiStack during Red Hat OpenShift Container Platform cluster updates. This behavior is achieved by using PodDisruptionBudget resources. The Loki Operator provisions PodDisruptionBudget resources for Loki, which determine the minimum number of pods that must be available per component to ensure normal operations under certain conditions. - -## Advanced Deployment and Scalability - -Specialized configurations for high availability, scalability, and error handling. - -## Zone aware data replication - -The Loki Operator offers support for zone-aware data replication through pod topology spread constraints. Enabling this feature enhances reliability and safeguards against log loss in the event of a single zone failure. When configuring the deployment size as 1x.extra-small, 1x.small, or 1x.medium, the replication.factor field is automatically set to 2. - -To ensure proper replication, you need to have at least as many availability zones as the replication factor specifies. While it is possible to have more availability zones than the replication factor, having fewer zones can lead to write failures. Each zone should host an equal number of instances for optimal operation. - - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: - replicationFactor: 2 1 - replication: - factor: 2 2 - zones: - - maxSkew: 1 3 - topologyKey: topology.kubernetes.io/zone 4 -``` - - -Deprecated field, values entered are overwritten by replication.factor. -This value is automatically set when deployment size is selected at setup. -The maximum difference in number of pods between any two topology domains. The default is 1, and you cannot specify a value of 0. -Defines zones in the form of a topology key that corresponds to a node label. - -## Recovering Loki pods from failed zones - -In Red Hat OpenShift Container Platform a zone failure happens when specific availability zone resources become inaccessible. Availability zones are isolated areas within a cloud provider’s data center, aimed at enhancing redundancy and fault tolerance. If your Red Hat OpenShift Container Platform cluster is not configured to handle this, a zone failure can lead to service or data loss. - -Loki pods are part of a StatefulSet, and they come with Persistent Volume Claims (PVCs) provisioned by a StorageClass object. Each Loki pod and its PVCs reside in the same zone. When a zone failure occurs in a cluster, the StatefulSet controller automatically attempts to recover the affected pods in the failed zone. - - -[WARNING] ----- -The following procedure will delete the PVCs in the failed zone, and all data contained therein. To avoid complete data loss the replication factor field of the LokiStack CR should always be set to a value greater than 1 to ensure that Loki is replicating. ----- - -* Verify your LokiStack CR has a replication factor greater than 1. 
-* Zone failure detected by the control plane, and nodes in the failed zone are marked by cloud provider integration. - -The StatefulSet controller automatically attempts to reschedule pods in a failed zone. Because the associated PVCs are also in the failed zone, automatic rescheduling to a different zone does not work. You must manually delete the PVCs in the failed zone to allow successful re-creation of the stateful Loki Pod and its provisioned PVC in the new zone. - -1. List the pods in Pending status by running the following command: - -```terminal -$ oc get pods --field-selector status.phase==Pending -n openshift-logging -``` - -Example oc get pods output - -```terminal -NAME READY STATUS RESTARTS AGE 1 -logging-loki-index-gateway-1 0/1 Pending 0 17m -logging-loki-ingester-1 0/1 Pending 0 16m -logging-loki-ruler-1 0/1 Pending 0 16m -``` - -These pods are in Pending status because their corresponding PVCs are in the failed zone. -2. List the PVCs in Pending status by running the following command: - -```terminal -$ oc get pvc -o=json -n openshift-logging | jq '.items[] | select(.status.phase == "Pending") | .metadata.name' -r -``` - -Example oc get pvc output - -```terminal -storage-logging-loki-index-gateway-1 -storage-logging-loki-ingester-1 -wal-logging-loki-ingester-1 -storage-logging-loki-ruler-1 -wal-logging-loki-ruler-1 -``` - -3. Delete the PVC(s) for a pod by running the following command: - -```terminal -$ oc delete pvc -n openshift-logging -``` - -4. Delete the pod(s) by running the following command: - -```terminal -$ oc delete pod -n openshift-logging -``` - - -Once these objects have been successfully deleted, they should automatically be rescheduled in an available zone. - -### Troubleshooting PVC in a terminating state - -The PVCs might hang in the terminating state without being deleted, if PVC metadata finalizers are set to kubernetes.io/pv-protection. Removing the finalizers should allow the PVCs to delete successfully. - -* Remove the finalizer for each PVC by running the command below, then retry deletion. - -```terminal -$ oc patch pvc -p '{"metadata":{"finalizers":null}}' -n openshift-logging -``` - - -## Troubleshooting Loki rate limit errors - -If the Log Forwarder API forwards a large block of messages that exceeds the rate limit to Loki, Loki generates rate limit (429) errors. - -These errors can occur during normal operation. For example, when adding the logging to a cluster that already has some logs, rate limit errors might occur while the logging tries to ingest all of the existing log entries. In this case, if the rate of addition of new logs is less than the total rate limit, the historical data is eventually ingested, and the rate limit errors are resolved without requiring user intervention. - -In cases where the rate limit errors continue to occur, you can fix the issue by modifying the LokiStack custom resource (CR). - - -[IMPORTANT] ----- -The LokiStack CR is not available on Grafana-hosted Loki. This topic does not apply to Grafana-hosted Loki servers. ----- - -* The Log Forwarder API is configured to forward logs to Loki. -* Your system sends a block of messages that is larger than 2 MB to Loki. For example: - -```text -"values":[["1630410392689800468","{\"kind\":\"Event\",\"apiVersion\":\ -....... -...... -...... -...... 
-\"received_at\":\"2021-08-31T11:46:32.800278+00:00\",\"version\":\"1.7.4 1.6.0\"}},\"@timestamp\":\"2021-08-31T11:46:32.799692+00:00\",\"viaq_index_name\":\"audit-write\",\"viaq_msg_id\":\"MzFjYjJkZjItNjY0MC00YWU4LWIwMTEtNGNmM2E5ZmViMGU4\",\"log_type\":\"audit\"}"]]}]} -``` - -* After you enter oc logs -n openshift-logging -l component=collector, the collector logs in your cluster show a line containing one of the following error messages: - -```text -429 Too Many Requests Ingestion rate limit exceeded -``` - -Example Vector error message - -```text -2023-08-25T16:08:49.301780Z WARN sink{component_kind="sink" component_id=default_loki_infra component_type=loki component_name=default_loki_infra}: vector::sinks::util::retries: Retrying after error. error=Server responded with an error: 429 Too Many Requests internal_log_rate_limit=true -``` - -Example Fluentd error message - -```text -2023-08-30 14:52:15 +0000 [warn]: [default_loki_infra] failed to flush the buffer. retry_times=2 next_retry_time=2023-08-30 14:52:19 +0000 chunk="604251225bf5378ed1567231a1c03b8b" error_class=Fluent::Plugin::LokiOutput::LogPostError error="429 Too Many Requests Ingestion rate limit exceeded for user infrastructure (limit: 4194304 bytes/sec) while attempting to ingest '4082' lines totaling '7820025' bytes, reduce log volume or contact your Loki administrator to see if the limit can be increased\n" -``` - - -The error is also visible on the receiving end. For example, in the LokiStack ingester pod: -Example Loki ingester error message - -```text -level=warn ts=2023-08-30T14:57:34.155592243Z caller=grpc_logging.go:43 duration=1.434942ms method=/logproto.Pusher/Push err="rpc error: code = Code(429) desc = entry with timestamp 2023-08-30 14:57:32.012778399 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream -``` - - -* Update the ingestionBurstSize and ingestionRate fields in the LokiStack CR: - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: - limits: - global: - ingestion: - ingestionBurstSize: 16 1 - ingestionRate: 8 2 -# ... -``` - -The ingestionBurstSize field defines the maximum local rate-limited sample size per distributor replica in MB. This value is a hard limit. Set this value to at least the maximum logs size expected in a single push request. Single requests that are larger than the ingestionBurstSize value are not permitted. -The ingestionRate field is a soft limit on the maximum amount of ingested samples per second in MB. Rate limit errors occur if the rate of logs exceeds the limit, but the collector retries sending the logs. As long as the total average is lower than the limit, the system recovers and errors are resolved without user intervention. \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.15/observability/logging/logging-6.1/log6x-opentelemetry-data-model-6.1.txt b/ocp-product-docs-plaintext/4.15/observability/logging/logging-6.1/log6x-opentelemetry-data-model-6.1.txt deleted file mode 100644 index 71eb6a76..00000000 --- a/ocp-product-docs-plaintext/4.15/observability/logging/logging-6.1/log6x-opentelemetry-data-model-6.1.txt +++ /dev/null @@ -1,81 +0,0 @@ -# OpenTelemetry data model - - -This document outlines the protocol and semantic conventions for Red Hat OpenShift Logging's OpenTelemetry support with Logging 6.1. 
- -[IMPORTANT] ----- -The OpenTelemetry Protocol (OTLP) output log forwarder is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. -For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. ----- - -# Forwarding and ingestion protocol - -Red Hat OpenShift Logging collects and forwards logs to OpenTelemetry endpoints using OTLP Specification. OTLP encodes, transports, and delivers telemetry data. You can also deploy Loki storage, which provides an OTLP endpont to ingest log streams. This document defines the semantic conventions for the logs collected from various OpenShift cluster sources. - -# Semantic conventions - -The log collector in this solution gathers the following log streams: - -* Container logs -* Cluster node journal logs -* Cluster node auditd logs -* Kubernetes and OpenShift API server logs -* OpenShift Virtual Network (OVN) logs - -You can forward these streams according to the semantic conventions defined by OpenTelemetry semantic attributes. The semantic conventions in OpenTelemetry define a resource as an immutable representation of the entity producing telemetry, identified by attributes. For example, a process running in a container includes attributes such as container_name, cluster_id, pod_name, namespace, and possibly deployment or app_name. These attributes are grouped under the resource object, which helps reduce repetition and optimizes log transmission as telemetry data. - -In addition to resource attributes, logs might also contain scope attributes specific to instrumentation libraries and log attributes specific to each log entry. These attributes provide greater detail about each log entry and enhance filtering capabilities when querying logs in storage. - -The following sections define the attributes that are generally forwarded. - -## Log entry structure - -All log streams include the following log data fields: - -The Applicable Sources column indicates which log sources each field applies to: - -* all: This field is present in all logs. -* container: This field is present in Kubernetes container logs, both application and infrastructure. -* audit: This field is present in Kubernetes, OpenShift API, and OVN logs. -* auditd: This field is present in node auditd logs. -* journal: This field is present in node journal logs. - - - -## Attributes - -Log entries include a set of resource, scope, and log attributes based on their source, as described in the following table. - -The Location column specifies the type of attribute: - -* resource: Indicates a resource attribute -* scope: Indicates a scope attribute -* log: Indicates a log attribute - -The Storage column indicates whether the attribute is stored in a LokiStack using the default openshift-logging mode and specifies where the attribute is stored: - -* stream label: -* Enables efficient filtering and querying based on specific labels. -* Can be labeled as required if the Loki Operator enforces this attribute in the configuration. -* structured metadata: -* Allows for detailed filtering and storage of key-value pairs. 
-* Enables users to use direct labels for streamlined queries without requiring JSON parsing. - -With OTLP, users can filter queries directly by labels rather than using JSON parsing, improving the speed and efficiency of queries. - - - - -[NOTE] ----- -Attributes marked as Compatibility attribute support minimal backward compatibility with the ViaQ data model. These attributes are deprecated and function as a compatibility layer to ensure continued UI functionality. These attributes will remain supported until the Logging UI fully supports the OpenTelemetry counterparts in future releases. ----- - -Loki changes the attribute names when persisting them to storage. The names will be lowercased, and all characters in the set: (.,/,-) will be replaced by underscores (_). For example, k8s.namespace.name will become k8s_namespace_name. - -# Additional resources - -* Semantic Conventions -* Logs Data Model -* General Logs Attributes \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.15/observability/logging/logging-6.1/log6x-release-notes-6.1.txt b/ocp-product-docs-plaintext/4.15/observability/logging/logging-6.1/log6x-release-notes-6.1.txt deleted file mode 100644 index d248add1..00000000 --- a/ocp-product-docs-plaintext/4.15/observability/logging/logging-6.1/log6x-release-notes-6.1.txt +++ /dev/null @@ -1,71 +0,0 @@ -# Logging 6.1 - - - -# Logging 6.1.1 Release Notes - -This release includes Logging for Red Hat OpenShift Bug Fix Release 6.1.1. - -## New Features and Enhancements - -* With this update, the Loki Operator supports configuring the workload identity federation on the Google Cloud Platform (GCP) by using the Cluster Credential Operator (CCO) in Red Hat OpenShift Container Platform 4.17 or later. (LOG-6420) - -## Bug Fixes - -* Before this update, the collector was discarding longer audit log messages with the following error message: Internal log [Found line that exceeds max_line_bytes; discarding.]. With this update, the discarding of longer audit messages is avoided by increasing the audit configuration thresholds: The maximum line size, max_line_bytes, is 3145728 bytes. The maximum number of bytes read during a read cycle, max_read_bytes, is 262144 bytes. (LOG-6379) -* Before this update, an input receiver service was repeatedly created and deleted, causing issues with mounting the TLS secrets. With this update, the service is created once and only deleted if it is not defined in the ClusterLogForwarder custom resource. (LOG-6383) -* Before this update, pipeline validation might have entered an infinite loop if a name was a substring of another name. With this update, stricter name equality checks prevent the infinite loop. (LOG-6405) -* Before this update, the collector alerting rules included the summary and message fields. With this update, the collector alerting rules include the summary and description fields. (LOG-6407) -* Before this update, setting up the custom audit inputs in the ClusterLogForwarder custom resource with configured LokiStack output caused errors due to the nil pointer dereference. With this update, the Operator performs the nil checks, preventing such errors. (LOG-6449) -* Before this update, the ValidLokistackOTLPOutputs condition appeared in the status of the ClusterLogForwarder custom resource even when the output type is not LokiStack. With this update, the ValidLokistackOTLPOutputs condition is removed, and the validation messages for the existing output conditions are corrected. 
(LOG-6469) -* Before this update, the collector did not correctly mount the /var/log/oauth-server/ path, which prevented the collection of the audit logs. With this update, the volume mount is added, and the audit logs are collected as expected. (LOG-6484) -* Before this update, the must-gather script of the Red Hat OpenShift Logging Operator might have failed to gather the LokiStack data. With this update, the must-gather script is fixed, and the LokiStack data is gathered reliably. (LOG-6498) -* Before this update, the collector did not correctly mount the oauth-apiserver audit log file. As a result, such audit logs were not collected. With this update, the volume mount is correctly mounted, and the logs are collected as expected. (LOG-6533) - -## CVEs - -* CVE-2019-12900 -* CVE-2024-2511 -* CVE-2024-3596 -* CVE-2024-4603 -* CVE-2024-4741 -* CVE-2024-5535 -* CVE-2024-10963 -* CVE-2024-50602 - -# Logging 6.1.0 Release Notes - -This release includes Logging for Red Hat OpenShift Bug Fix Release 6.1.0. - -## New Features and Enhancements - -### Log Collection - -* This enhancement adds the source iostream to the attributes sent from collected container logs. The value is set to either stdout or stderr based on how the collector received it. (LOG-5292) -* With this update, the default memory limit for the collector increases from 1024 Mi to 2048 Mi. Users should adjust resource limits based on their cluster’s specific needs and specifications. (LOG-6072) -* With this update, users can now set the syslog output delivery mode of the ClusterLogForwarder CR to either AtLeastOnce or AtMostOnce. (LOG-6355) - -### Log Storage - -* With this update, the new 1x.pico LokiStack size supports clusters with fewer workloads and lower log volumes (up to 50GB/day). (LOG-5939) - -## Technology Preview - - -[IMPORTANT] ----- -{FeatureName} is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. -For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. ----- - -* With this update, OpenTelemetry logs can now be forwarded using the OTel (OpenTelemetry) data model to a Red Hat Managed LokiStack instance. To enable this feature, add the observability.openshift.io/tech-preview-otlp-output: "enabled" annotation to your ClusterLogForwarder configuration. For additional configuration information, see OTLP Forwarding. -* With this update, a dataModel field has been added to the lokiStack output specification. Set the dataModel to Otel to configure log forwarding using the OpenTelemetry data format. The default is set to Viaq. For information about data mapping see OTLP Specification. - -## Bug Fixes - -None. 
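
The Technology Preview notes above reference the observability.openshift.io/tech-preview-otlp-output annotation and the new dataModel field of the lokiStack output. The following is a rough sketch of where those two settings sit in a ClusterLogForwarder configuration; the annotation and the dataModel: Otel value are taken from the notes above, while the remaining fields (resource names, service account, and pipeline wiring) are illustrative assumptions and may differ from your deployment:

```yaml
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector                    # assumed name for illustration
  namespace: openshift-logging
  annotations:
    observability.openshift.io/tech-preview-otlp-output: "enabled"   # enables OTLP forwarding (Technology Preview)
spec:
  serviceAccount:
    name: collector                  # assumed service account for illustration
  outputs:
  - name: default-lokistack
    type: lokiStack
    lokiStack:
      target:
        name: logging-loki
        namespace: openshift-logging
      dataModel: Otel                # forward using the OpenTelemetry data format; the default is Viaq
  pipelines:
  - name: all-logs
    inputRefs:
    - application
    - infrastructure
    outputRefs:
    - default-lokistack
```
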
- -## CVEs - -* CVE-2024-6119 -* CVE-2024-6232 \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.15/observability/logging/logging-6.1/log6x-visual-6.1.txt b/ocp-product-docs-plaintext/4.15/observability/logging/logging-6.1/log6x-visual-6.1.txt deleted file mode 100644 index 28ccf097..00000000 --- a/ocp-product-docs-plaintext/4.15/observability/logging/logging-6.1/log6x-visual-6.1.txt +++ /dev/null @@ -1,11 +0,0 @@ -# Visualization for logging - - - -Visualization for logging is provided by deploying the Logging UI Plugin of the Cluster Observability Operator, which requires Operator installation. - - -[IMPORTANT] ----- -Until the approaching General Availability (GA) release of the Cluster Observability Operator (COO), which is currently in Technology Preview (TP), Red Hat provides support to customers who are using Logging 6.0 or later with the COO for its Logging UI Plugin on Red Hat OpenShift Container Platform 4.14 or later. This support exception is temporary as the COO includes several independent features, some of which are still TP features, but the Logging UI Plugin is ready for GA. ----- \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.15/observability/monitoring/accessing-metrics/accessing-metrics-as-an-administrator.txt b/ocp-product-docs-plaintext/4.15/observability/monitoring/accessing-metrics/accessing-metrics-as-an-administrator.txt index 91af9607..3eb2121a 100644 --- a/ocp-product-docs-plaintext/4.15/observability/monitoring/accessing-metrics/accessing-metrics-as-an-administrator.txt +++ b/ocp-product-docs-plaintext/4.15/observability/monitoring/accessing-metrics/accessing-metrics-as-an-administrator.txt @@ -41,7 +41,7 @@ cluster administrator or as a user with view permissions for all projects, you can access metrics for all default Red Hat OpenShift Container Platform and user-defined projects in the Metrics UI. * You have access to the cluster as a user with the cluster-admin cluster role or with view permissions for all projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. From the Administrator perspective in the Red Hat OpenShift Container Platform web console, select Observe -> Metrics. 2. To add one or more queries, do any of the following: diff --git a/ocp-product-docs-plaintext/4.15/observability/monitoring/accessing-metrics/accessing-monitoring-apis-by-using-the-cli.txt b/ocp-product-docs-plaintext/4.15/observability/monitoring/accessing-metrics/accessing-monitoring-apis-by-using-the-cli.txt index 6f0afcb5..749af895 100644 --- a/ocp-product-docs-plaintext/4.15/observability/monitoring/accessing-metrics/accessing-monitoring-apis-by-using-the-cli.txt +++ b/ocp-product-docs-plaintext/4.15/observability/monitoring/accessing-metrics/accessing-monitoring-apis-by-using-the-cli.txt @@ -88,7 +88,7 @@ Limit queries to a maximum of one every 30 seconds. If you need to forward large amounts of data outside the cluster, use remote write instead. For more information, see the Configuring remote write storage section. ---- -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). * You have access to the cluster as a user with the cluster-monitoring-view cluster role or have obtained a bearer token with get permission on the namespaces resource. 
[NOTE] diff --git a/ocp-product-docs-plaintext/4.15/observability/monitoring/configuring-core-platform-monitoring/configuring-alerts-and-notifications.txt b/ocp-product-docs-plaintext/4.15/observability/monitoring/configuring-core-platform-monitoring/configuring-alerts-and-notifications.txt index f44a14d1..dd36bf73 100644 --- a/ocp-product-docs-plaintext/4.15/observability/monitoring/configuring-core-platform-monitoring/configuring-alerts-and-notifications.txt +++ b/ocp-product-docs-plaintext/4.15/observability/monitoring/configuring-core-platform-monitoring/configuring-alerts-and-notifications.txt @@ -13,7 +13,7 @@ If you add the same external Alertmanager configuration for multiple clusters an * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config ConfigMap object. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -84,7 +84,7 @@ If you do not need the local Alertmanager, you can disable it by configuring the * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config config map. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the cluster-monitoring-config config map in the openshift-monitoring project: @@ -129,7 +129,7 @@ After you add a secret to the config map, the secret is mounted as a volume at / * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config config map. * You have created the secret to be configured in Alertmanager in the {namespace-name} project. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -180,7 +180,7 @@ You can attach custom labels to all time series and alerts leaving Prometheus by * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config ConfigMap object. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: diff --git a/ocp-product-docs-plaintext/4.15/observability/monitoring/configuring-core-platform-monitoring/configuring-metrics.txt b/ocp-product-docs-plaintext/4.15/observability/monitoring/configuring-core-platform-monitoring/configuring-metrics.txt index 70fe93c4..a0206e8f 100644 --- a/ocp-product-docs-plaintext/4.15/observability/monitoring/configuring-core-platform-monitoring/configuring-metrics.txt +++ b/ocp-product-docs-plaintext/4.15/observability/monitoring/configuring-core-platform-monitoring/configuring-metrics.txt @@ -11,7 +11,7 @@ You can configure remote write storage to enable Prometheus to send ingested met * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config ConfigMap object. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). * You have set up a remote write compatible endpoint (such as Thanos) and know the endpoint URL. See the Prometheus remote endpoints and storage documentation for information about endpoints that are compatible with the remote write feature. 
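
As a minimal sketch of the setting that this procedure configures (the endpoint URL is a placeholder for your own remote write compatible endpoint, and any other fields in your config map are unchanged), the remoteWrite list is added under the prometheusK8s section of the cluster-monitoring-config config map:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      remoteWrite:
      # Placeholder endpoint; replace with your remote write compatible endpoint URL.
      - url: "https://remote-write-endpoint.example.com/api/v1/write"
```

Authentication and write relabel settings, such as the cluster ID labels described later in this section, are typically added to the same remoteWrite entry.
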
[IMPORTANT] @@ -449,7 +449,7 @@ You can create cluster ID labels for metrics by adding the write_relabel setting * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config ConfigMap object. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). * You have configured remote write storage. 1. Edit the {configmap-name} config map in the {namespace-name} project: diff --git a/ocp-product-docs-plaintext/4.15/observability/monitoring/configuring-core-platform-monitoring/configuring-performance-and-scalability.txt b/ocp-product-docs-plaintext/4.15/observability/monitoring/configuring-core-platform-monitoring/configuring-performance-and-scalability.txt index 87a145f2..114eefff 100644 --- a/ocp-product-docs-plaintext/4.15/observability/monitoring/configuring-core-platform-monitoring/configuring-performance-and-scalability.txt +++ b/ocp-product-docs-plaintext/4.15/observability/monitoring/configuring-core-platform-monitoring/configuring-performance-and-scalability.txt @@ -29,7 +29,7 @@ You cannot add a node selector constraint directly to an existing scheduled pod. * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config ConfigMap object. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. If you have not done so yet, add a label to the nodes on which you want to run the monitoring components: @@ -85,7 +85,7 @@ You can assign tolerations to any of the monitoring stack components to enable m * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config ConfigMap object. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -151,7 +151,7 @@ Prometheus then considers this target to be down and sets its up metric value to ---- * You have access to the cluster as a user with the cluster-admin cluster role. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring namespace: @@ -194,7 +194,7 @@ To configure CPU and memory resources, specify values for resource limits and re * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the ConfigMap object named cluster-monitoring-config. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -325,7 +325,7 @@ For more information about the support scope of Red Hat Technology Preview featu To choose a metrics collection profile for core Red Hat OpenShift Container Platform monitoring components, edit the cluster-monitoring-config ConfigMap object. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). * You have enabled Technology Preview features by using the FeatureGate custom resource (CR). * You have created the cluster-monitoring-config ConfigMap object. * You have access to the cluster as a user with the cluster-admin cluster role. @@ -385,7 +385,7 @@ You can configure pod topology spread constraints for monitoring pods by using t * You have access to the cluster as a user with the cluster-admin cluster role. 
* You have created the cluster-monitoring-config ConfigMap object. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: diff --git a/ocp-product-docs-plaintext/4.15/observability/monitoring/configuring-core-platform-monitoring/preparing-to-configure-the-monitoring-stack.txt b/ocp-product-docs-plaintext/4.15/observability/monitoring/configuring-core-platform-monitoring/preparing-to-configure-the-monitoring-stack.txt index 6eae9317..42976610 100644 --- a/ocp-product-docs-plaintext/4.15/observability/monitoring/configuring-core-platform-monitoring/preparing-to-configure-the-monitoring-stack.txt +++ b/ocp-product-docs-plaintext/4.15/observability/monitoring/configuring-core-platform-monitoring/preparing-to-configure-the-monitoring-stack.txt @@ -34,7 +34,7 @@ Each procedure that requires a change in the config map includes its expected ou You can configure the core Red Hat OpenShift Container Platform monitoring components by creating and updating the cluster-monitoring-config config map in the openshift-monitoring project. The Cluster Monitoring Operator (CMO) then configures the core components of the monitoring stack. * You have access to the cluster as a user with the cluster-admin cluster role. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Check whether the cluster-monitoring-config ConfigMap object exists: diff --git a/ocp-product-docs-plaintext/4.15/observability/monitoring/configuring-core-platform-monitoring/storing-and-recording-data.txt b/ocp-product-docs-plaintext/4.15/observability/monitoring/configuring-core-platform-monitoring/storing-and-recording-data.txt index c12efb24..4b7a1dd2 100644 --- a/ocp-product-docs-plaintext/4.15/observability/monitoring/configuring-core-platform-monitoring/storing-and-recording-data.txt +++ b/ocp-product-docs-plaintext/4.15/observability/monitoring/configuring-core-platform-monitoring/storing-and-recording-data.txt @@ -37,7 +37,7 @@ To use a persistent volume (PV) for monitoring components, you must configure a * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config ConfigMap object. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -113,7 +113,7 @@ You can only expand the size of the PVC. Shrinking the storage size is not possi * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config ConfigMap object. * You have configured at least one PVC for core Red Hat OpenShift Container Platform monitoring components. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Manually expand a PVC with the updated storage request. For more information, see "Expanding persistent volume claims (PVCs) with a file system" in Expanding persistent volumes. 2. Edit the {configmap-name} config map in the {namespace-name} project: @@ -187,7 +187,7 @@ Data compaction occurs every two hours. Therefore, a persistent volume (PV) migh * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config ConfigMap object. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. 
Edit the {configmap-name} config map in the {namespace-name} project: @@ -242,7 +242,7 @@ data: In default platform monitoring, you can configure the audit log level for the Prometheus Adapter. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config ConfigMap object. @@ -344,7 +344,7 @@ The default log level is info. * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config ConfigMap object. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -409,7 +409,7 @@ Because log rotation is not supported, only enable this feature temporarily when * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config ConfigMap object. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -475,7 +475,7 @@ For default platform monitoring in the openshift-monitoring project, you can ena Because log rotation is not supported, only enable this feature temporarily when you need to troubleshoot an issue. After you finish troubleshooting, disable query logging by reverting the changes you made to the ConfigMap object to enable the feature. ---- -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config ConfigMap object. diff --git a/ocp-product-docs-plaintext/4.15/observability/monitoring/configuring-user-workload-monitoring/configuring-alerts-and-notifications-uwm.txt b/ocp-product-docs-plaintext/4.15/observability/monitoring/configuring-user-workload-monitoring/configuring-alerts-and-notifications-uwm.txt index c6e4b9be..087ec9aa 100644 --- a/ocp-product-docs-plaintext/4.15/observability/monitoring/configuring-user-workload-monitoring/configuring-alerts-and-notifications-uwm.txt +++ b/ocp-product-docs-plaintext/4.15/observability/monitoring/configuring-user-workload-monitoring/configuring-alerts-and-notifications-uwm.txt @@ -13,7 +13,7 @@ If you add the same external Alertmanager configuration for multiple clusters an * You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -95,7 +95,7 @@ After you add a secret to the config map, the secret is mounted as a volume at / * You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. * You have created the secret to be configured in Alertmanager in the {namespace-name} project. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. 
Edit the {configmap-name} config map in the {namespace-name} project: @@ -146,7 +146,7 @@ You can attach custom labels to all time series and alerts leaving Prometheus by * You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -233,7 +233,7 @@ If you are a non-administrator user who has been given the alert-routing-edit cl * A cluster administrator has enabled monitoring for user-defined projects. * A cluster administrator has enabled alert routing for user-defined projects. * You are logged in as a user that has the alert-routing-edit cluster role for the project for which you want to create alert routing. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Create a YAML file for alert routing. The example in this procedure uses a file called example-app-alert-routing.yaml. 2. Add an AlertmanagerConfig YAML definition to the file. For example: @@ -278,7 +278,7 @@ All features of a supported version of upstream Alertmanager are also supported * You have access to the cluster as a user with the cluster-admin cluster role. * You have enabled a separate instance of Alertmanager for user-defined alert routing. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Print the currently active Alertmanager configuration into the file alertmanager.yaml: diff --git a/ocp-product-docs-plaintext/4.15/observability/monitoring/configuring-user-workload-monitoring/configuring-metrics-uwm.txt b/ocp-product-docs-plaintext/4.15/observability/monitoring/configuring-user-workload-monitoring/configuring-metrics-uwm.txt index cbe11296..df57b0cf 100644 --- a/ocp-product-docs-plaintext/4.15/observability/monitoring/configuring-user-workload-monitoring/configuring-metrics-uwm.txt +++ b/ocp-product-docs-plaintext/4.15/observability/monitoring/configuring-user-workload-monitoring/configuring-metrics-uwm.txt @@ -11,7 +11,7 @@ You can configure remote write storage to enable Prometheus to send ingested met * You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). * You have set up a remote write compatible endpoint (such as Thanos) and know the endpoint URL. See the Prometheus remote endpoints and storage documentation for information about endpoints that are compatible with the remote write feature. [IMPORTANT] @@ -457,7 +457,7 @@ You cannot override this default configuration by setting the value of the honor * You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). * You have configured remote write storage. 1. 
Edit the {configmap-name} config map in the {namespace-name} project: diff --git a/ocp-product-docs-plaintext/4.15/observability/monitoring/configuring-user-workload-monitoring/configuring-performance-and-scalability-uwm.txt b/ocp-product-docs-plaintext/4.15/observability/monitoring/configuring-user-workload-monitoring/configuring-performance-and-scalability-uwm.txt index b840a8f6..306ac705 100644 --- a/ocp-product-docs-plaintext/4.15/observability/monitoring/configuring-user-workload-monitoring/configuring-performance-and-scalability-uwm.txt +++ b/ocp-product-docs-plaintext/4.15/observability/monitoring/configuring-user-workload-monitoring/configuring-performance-and-scalability-uwm.txt @@ -28,7 +28,7 @@ It is not permitted to move components to control plane or infrastructure nodes. * You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. If you have not done so yet, add a label to the nodes on which you want to run the monitoring components: @@ -84,7 +84,7 @@ You can assign tolerations to the components that monitor user-defined projects, * You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -145,7 +145,7 @@ You can configure these limits and requests for monitoring components that monit To configure CPU and memory resources, specify values for resource limits and requests in the {configmap-name} ConfigMap object in the {namespace-name} namespace. * You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -232,7 +232,7 @@ If you set sample or label limits, no further sample data is ingested for that t * You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project: @@ -289,7 +289,7 @@ You can create alerts that notify you when: * You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. * You have limited the number of samples that can be accepted per target scrape in user-defined projects, by using enforcedSampleLimit. -* You have installed the OpenShift CLI (oc). 
+* You have installed the OpenShift CLI (`oc`). 1. Create a YAML file with alerts that inform you when the targets are down and when the enforced sample limit is approaching. The file in this example is called monitoring-stack-alerts.yaml: @@ -353,7 +353,7 @@ You can configure pod topology spread constraints for monitoring pods by using t * You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: diff --git a/ocp-product-docs-plaintext/4.15/observability/monitoring/configuring-user-workload-monitoring/preparing-to-configure-the-monitoring-stack-uwm.txt b/ocp-product-docs-plaintext/4.15/observability/monitoring/configuring-user-workload-monitoring/preparing-to-configure-the-monitoring-stack-uwm.txt index 652f03bb..a5f80005 100644 --- a/ocp-product-docs-plaintext/4.15/observability/monitoring/configuring-user-workload-monitoring/preparing-to-configure-the-monitoring-stack-uwm.txt +++ b/ocp-product-docs-plaintext/4.15/observability/monitoring/configuring-user-workload-monitoring/preparing-to-configure-the-monitoring-stack-uwm.txt @@ -55,7 +55,7 @@ You must have access to the cluster as a user with the cluster-admin cluster rol ---- * You have access to the cluster as a user with the cluster-admin cluster role. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). * You have created the cluster-monitoring-config ConfigMap object. * You have optionally created and configured the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project. You can add configuration options to this ConfigMap object for the components that monitor user-defined projects. @@ -116,7 +116,7 @@ As a cluster administrator, you can assign the user-workload-monitoring-config-e * You have access to the cluster as a user with the cluster-admin cluster role. * The user account that you are assigning the role to already exists. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Assign the user-workload-monitoring-config-edit role to a user in the openshift-user-workload-monitoring project: @@ -175,7 +175,7 @@ You can allow users to create user-defined alert routing configurations that use * You have access to the cluster as a user with the cluster-admin cluster role. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the cluster-monitoring-config ConfigMap object: @@ -258,7 +258,7 @@ You can grant users permission to configure alert routing for user-defined proje * You have access to the cluster as a user with the cluster-admin cluster role. * You have enabled monitoring for user-defined projects. * The user account that you are assigning the role to already exists. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). * Assign the alert-routing-edit cluster role to a user in the user-defined project: @@ -268,7 +268,7 @@ $ oc -n adm policy add-role-to-user alert-routing-edit 1 For , substitute the namespace for the user-defined project, such as ns1. 
For , substitute the username for the account to which you want to assign the role. -Configuring alert notifications +* Configuring alert notifications # Granting users permissions for monitoring for user-defined projects diff --git a/ocp-product-docs-plaintext/4.15/observability/monitoring/configuring-user-workload-monitoring/storing-and-recording-data-uwm.txt b/ocp-product-docs-plaintext/4.15/observability/monitoring/configuring-user-workload-monitoring/storing-and-recording-data-uwm.txt index e341c10e..88072b4f 100644 --- a/ocp-product-docs-plaintext/4.15/observability/monitoring/configuring-user-workload-monitoring/storing-and-recording-data-uwm.txt +++ b/ocp-product-docs-plaintext/4.15/observability/monitoring/configuring-user-workload-monitoring/storing-and-recording-data-uwm.txt @@ -37,7 +37,7 @@ To use a persistent volume (PV) for monitoring components, you must configure a * You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -118,7 +118,7 @@ You can only expand the size of the PVC. Shrinking the storage size is not possi * You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. * You have configured at least one PVC for components that monitor user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Manually expand a PVC with the updated storage request. For more information, see "Expanding persistent volume claims (PVCs) with a file system" in Expanding persistent volumes. 2. Edit the {configmap-name} config map in the {namespace-name} project: @@ -197,7 +197,7 @@ Data compaction occurs every two hours. Therefore, a persistent volume (PV) migh * You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -247,7 +247,7 @@ By default, for user-defined projects, Thanos Ruler automatically retains metric * You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project: @@ -311,7 +311,7 @@ The default log level is info. * You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. 
* A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -376,7 +376,7 @@ Because log rotation is not supported, only enable this feature temporarily when * You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: diff --git a/ocp-product-docs-plaintext/4.15/observability/monitoring/managing-alerts/managing-alerts-as-a-developer.txt b/ocp-product-docs-plaintext/4.15/observability/monitoring/managing-alerts/managing-alerts-as-a-developer.txt index 215d2a76..7eaa0b6c 100644 --- a/ocp-product-docs-plaintext/4.15/observability/monitoring/managing-alerts/managing-alerts-as-a-developer.txt +++ b/ocp-product-docs-plaintext/4.15/observability/monitoring/managing-alerts/managing-alerts-as-a-developer.txt @@ -164,7 +164,7 @@ You can create alerting rules for user-defined projects. Those alerting rules wi * You have enabled monitoring for user-defined projects. * You are logged in as a cluster administrator or as a user that has the monitoring-rules-edit cluster role for the project where you want to create an alerting rule. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Create a YAML file for alerting rules. In this example, it is called example-app-alerting-rule.yaml. 2. Add an alerting rule configuration to the YAML file. @@ -210,7 +210,7 @@ To list alerting rules for a user-defined project, you must have been assigned t * You have enabled monitoring for user-defined projects. * You are logged in as a user that has the monitoring-rules-view cluster role for your project. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. To list alerting rules in : @@ -231,7 +231,7 @@ You can remove alerting rules for user-defined projects. * You have enabled monitoring for user-defined projects. * You are logged in as a cluster administrator or as a user that has the monitoring-rules-edit cluster role for the project where you want to create an alerting rule. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). * To remove rule in , run the following: diff --git a/ocp-product-docs-plaintext/4.15/observability/monitoring/managing-alerts/managing-alerts-as-an-administrator.txt b/ocp-product-docs-plaintext/4.15/observability/monitoring/managing-alerts/managing-alerts-as-an-administrator.txt index 30c3e085..6b535b1b 100644 --- a/ocp-product-docs-plaintext/4.15/observability/monitoring/managing-alerts/managing-alerts-as-an-administrator.txt +++ b/ocp-product-docs-plaintext/4.15/observability/monitoring/managing-alerts/managing-alerts-as-an-administrator.txt @@ -169,7 +169,7 @@ These alerting rules trigger alerts based on the values of chosen metrics. ---- * You have access to the cluster as a user that has the cluster-admin cluster role. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Create a new YAML configuration file named example-alerting-rule.yaml. 2. 
Add an AlertingRule resource to the YAML file. @@ -218,7 +218,7 @@ As a cluster administrator, you can modify core platform alerts before Alertmana For example, you can change the severity label of an alert, add a custom label, or exclude an alert from being sent to Alertmanager. * You have access to the cluster as a user with the cluster-admin cluster role. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Create a new YAML configuration file named example-modified-alerting-rule.yaml. 2. Add an AlertRelabelConfig resource to the YAML file. @@ -285,7 +285,7 @@ You can create alerting rules for user-defined projects. Those alerting rules wi * You have enabled monitoring for user-defined projects. * You are logged in as a cluster administrator or as a user that has the monitoring-rules-edit cluster role for the project where you want to create an alerting rule. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Create a YAML file for alerting rules. In this example, it is called example-app-alerting-rule.yaml. 2. Add an alerting rule configuration to the YAML file. @@ -331,7 +331,7 @@ As a cluster administrator, you can list alerting rules for core Red Hat OpenShift Container Platform and user-defined projects together in a single view. * You have access to the cluster as a user with the cluster-admin role. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. From the Administrator perspective of the Red Hat OpenShift Container Platform web console, go to Observe -> Alerting -> Alerting rules. 2. Select the Platform and User sources in the Filter drop-down menu. @@ -347,7 +347,7 @@ You can remove alerting rules for user-defined projects. * You have enabled monitoring for user-defined projects. * You are logged in as a cluster administrator or as a user that has the monitoring-rules-edit cluster role for the project where you want to create an alerting rule. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). * To remove rule in , run the following: diff --git a/ocp-product-docs-plaintext/4.15/observability/monitoring/troubleshooting-monitoring-issues.txt b/ocp-product-docs-plaintext/4.15/observability/monitoring/troubleshooting-monitoring-issues.txt index d489223b..56e040b6 100644 --- a/ocp-product-docs-plaintext/4.15/observability/monitoring/troubleshooting-monitoring-issues.txt +++ b/ocp-product-docs-plaintext/4.15/observability/monitoring/troubleshooting-monitoring-issues.txt @@ -188,7 +188,7 @@ Using attributes that are bound to a limited set of possible values reduces the * Enforce limits on the number of samples that can be scraped across user-defined projects. This requires cluster administrator privileges. * You have access to the cluster as a user with the cluster-admin cluster role. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. In the Administrator perspective, navigate to Observe -> Metrics. 2. Enter a Prometheus Query Language (PromQL) query in the Expression field. @@ -261,7 +261,7 @@ There are two KubePersistentVolumeFillingUp alerts: To address this issue, you can remove Prometheus time-series database (TSDB) blocks to create more space for the PV. * You have access to the cluster as a user with the cluster-admin cluster role. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. 
List the size of all TSDB blocks, sorted from oldest to newest, by running the following command: diff --git a/ocp-product-docs-plaintext/4.15/observability/network_observability/observing-network-traffic.txt b/ocp-product-docs-plaintext/4.15/observability/network_observability/observing-network-traffic.txt index db2949df..45884b9b 100644 --- a/ocp-product-docs-plaintext/4.15/observability/network_observability/observing-network-traffic.txt +++ b/ocp-product-docs-plaintext/4.15/observability/network_observability/observing-network-traffic.txt @@ -132,6 +132,8 @@ See the Additional resources in this section for more information about enabling You can use rule-based filtering to control the volume of packets cached in the eBPF flow table. For example, a filter can specify that only packets coming from port 100 should be recorded. Then only the packets that match the filter are cached and the rest are not cached. +You can apply multiple filter rules. + ### Ingress and egress traffic filtering CIDR notation efficiently represents IP address ranges by combining the base IP address with a prefix length. For both ingress and egress traffic, the source IP address is first used to match filter rules configured with CIDR notation. If there is a match, then the filtering proceeds. If there is no match, then the destination IP is used to match filter rules configured with CIDR notation. @@ -381,7 +383,14 @@ When you refresh the Network Traffic page, the Overview, Traffic Flow, and Topol ## Filtering eBPF flow data using a global rule -You can configure the FlowCollector to filter eBPF flows using a global rule to control the flow of packets cached in the eBPF flow table. +You can configure the FlowCollector custom resource to filter eBPF flows using multiple rules to control the flow of packets cached in the eBPF flow table. + + +[IMPORTANT] +---- +* You cannot use duplicate Classless Inter-Domain Routing (CIDRs) in filter rules. +* When an IP address matches multiple filter rules, the rule with the most specific CIDR prefix (longest prefix) takes precedence. +---- 1. In the web console, navigate to Operators -> Installed Operators. 2. Under the Provided APIs heading for Network Observability, select Flow Collector. diff --git a/ocp-product-docs-plaintext/4.15/post_installation_configuration/machine-configuration-tasks.txt b/ocp-product-docs-plaintext/4.15/post_installation_configuration/machine-configuration-tasks.txt index ae2e00a8..5a6f88cb 100644 --- a/ocp-product-docs-plaintext/4.15/post_installation_configuration/machine-configuration-tasks.txt +++ b/ocp-product-docs-plaintext/4.15/post_installation_configuration/machine-configuration-tasks.txt @@ -228,7 +228,7 @@ UPDATED:: The True status indicates that the MCO has applied the current machine UPDATING:: The True status indicates that the MCO is applying the desired machine config, as specified in the MachineConfigPool custom resource, to at least one of the nodes in that MCP. The desired machine config is the new, edited machine config. Nodes that are updating might not be available for scheduling. The False status indicates that all nodes in the MCP are updated. DEGRADED:: A True status indicates the MCO is blocked from applying the current or desired machine config to at least one of the nodes in that MCP, or the configuration is failing. Nodes that are degraded might not be available for scheduling. A False status indicates that all nodes in the MCP are ready. MACHINECOUNT:: Indicates the total number of machines in that MCP. 
-READYMACHINECOUNT:: Indicates the total number of machines in that MCP that are ready for scheduling. +READYMACHINECOUNT:: Indicates the number of machines that are both running the current machine config and are ready for scheduling. This count is always less than or equal to the UPDATEDMACHINECOUNT number. UPDATEDMACHINECOUNT:: Indicates the total number of machines in that MCP that have the current machine config. DEGRADEDMACHINECOUNT:: Indicates the total number of machines in that MCP that are marked as degraded or unreconcilable. diff --git a/ocp-product-docs-plaintext/4.15/release_notes/ocp-4-15-release-notes.txt b/ocp-product-docs-plaintext/4.15/release_notes/ocp-4-15-release-notes.txt index f169f430..d1c3a434 100644 --- a/ocp-product-docs-plaintext/4.15/release_notes/ocp-4-15-release-notes.txt +++ b/ocp-product-docs-plaintext/4.15/release_notes/ocp-4-15-release-notes.txt @@ -1396,6 +1396,30 @@ This section will continue to be updated over time to provide notes on enhanceme For any Red Hat OpenShift Container Platform release, always review the instructions on updating your cluster properly. ---- +## RHSA-2025:12370 - Red Hat OpenShift Container Platform 4.15.56 bug fix and security update + +Issued: 06 August 2025 + +Red Hat OpenShift Container Platform release 4.15.56 is now available. The list of bug fixes that are included in this update is documented in the RHSA-2025:12370 advisory. The RPM packages that are included in this update are provided by the RHBA-2025:12371 advisory. + +Space precluded documenting all of the container images for this release in the advisory. + +You can view the container images in this release by running the following command: + + +```terminal +$ oc adm release info 4.15.56 --pullspecs +``` + + +### Bug fixes + +* Before this update, the loopback certificate expired because there was no validity period. With this release, a validity period is set and the loopback certificate does not expire. (OCPBUGS-59147) + +### Updating + +To update an Red Hat OpenShift Container Platform 4.15 cluster to this latest release, see Updating a cluster by using the CLI. + ## RHSA-2025:11351 - Red Hat OpenShift Container Platform 4.15.55 bug fix update Issued: 23 July 2025 diff --git a/ocp-product-docs-plaintext/4.15/service_mesh/v2x/servicemesh-release-notes.txt b/ocp-product-docs-plaintext/4.15/service_mesh/v2x/servicemesh-release-notes.txt index 30f95c54..b2f115c8 100644 --- a/ocp-product-docs-plaintext/4.15/service_mesh/v2x/servicemesh-release-notes.txt +++ b/ocp-product-docs-plaintext/4.15/service_mesh/v2x/servicemesh-release-notes.txt @@ -2,14 +2,32 @@ +# Red Hat OpenShift Service Mesh version 2.6.9 + +This release of Red Hat OpenShift Service Mesh updates the Red Hat OpenShift Service Mesh Operator version to 2.6.9, and includes the following ServiceMeshControlPlane resource version updates: 2.6.9 and 2.5.12. + +This release addresses Common Vulnerabilities and Exposures (CVEs) and is supported on Red Hat OpenShift Container Platform 4.14 and later. + +You can use the most current version of the Kiali Operator provided by Red Hat with all supported versions of Red Hat OpenShift Service Mesh. The version of Service Mesh automatically ensures a compatible version of Kiali. + +## Component updates + + + +# Red Hat OpenShift Service Mesh version 2.5.12 + +This release of Red Hat OpenShift Service Mesh is included with the Red Hat OpenShift Service Mesh Operator 2.6.9 and is supported on Red Hat OpenShift Container Platform 4.14 and later. 
This release addresses Common Vulnerabilities and Exposures (CVEs). + +## Component updates + + + # Red Hat OpenShift Service Mesh version 2.6.8 This release of Red Hat OpenShift Service Mesh updates the Red Hat OpenShift Service Mesh Operator version to 2.6.8, and includes the following ServiceMeshControlPlane resource version updates: 2.6.8 and 2.5.11. This release addresses Common Vulnerabilities and Exposures (CVEs) and is supported on Red Hat OpenShift Container Platform 4.14 and later. -The most current version of the Red Hat OpenShift Service Mesh Operator can be used with all supported versions of Service Mesh. The version of Service Mesh is specified by using the ServiceMeshControlPlane resource. - You can use the most current version of the Kiali Operator provided by Red Hat with all supported versions of Red Hat OpenShift Service Mesh. The version of Service Mesh automatically ensures a compatible version of Kiali. ## Component updates diff --git a/ocp-product-docs-plaintext/4.15/support/troubleshooting/investigating-monitoring-issues.txt b/ocp-product-docs-plaintext/4.15/support/troubleshooting/investigating-monitoring-issues.txt index a8209aca..05e0d7f0 100644 --- a/ocp-product-docs-plaintext/4.15/support/troubleshooting/investigating-monitoring-issues.txt +++ b/ocp-product-docs-plaintext/4.15/support/troubleshooting/investigating-monitoring-issues.txt @@ -192,7 +192,7 @@ Using attributes that are bound to a limited set of possible values reduces the * Enforce limits on the number of samples that can be scraped across user-defined projects. This requires cluster administrator privileges. * You have access to the cluster as a user with the cluster-admin cluster role. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. In the Administrator perspective, navigate to Observe -> Metrics. 2. Enter a Prometheus Query Language (PromQL) query in the Expression field. @@ -263,7 +263,7 @@ There are two KubePersistentVolumeFillingUp alerts: To address this issue, you can remove Prometheus time-series database (TSDB) blocks to create more space for the PV. * You have access to the cluster as a user with the cluster-admin cluster role. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. List the size of all TSDB blocks, sorted from oldest to newest, by running the following command: diff --git a/ocp-product-docs-plaintext/4.15/support/troubleshooting/troubleshooting-installations.txt b/ocp-product-docs-plaintext/4.15/support/troubleshooting/troubleshooting-installations.txt index f3897bf6..d5109503 100644 --- a/ocp-product-docs-plaintext/4.15/support/troubleshooting/troubleshooting-installations.txt +++ b/ocp-product-docs-plaintext/4.15/support/troubleshooting/troubleshooting-installations.txt @@ -110,7 +110,7 @@ $ ./openshift-install create ignition-configs --dir=./install_dir You can monitor high-level installation, bootstrap, and control plane logs as an Red Hat OpenShift Container Platform installation progresses. This provides greater visibility into how an installation progresses and helps identify the stage at which an installation failure occurs. * You have access to the cluster as a user with the cluster-admin cluster role. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). * You have SSH access to your hosts. * You have the fully qualified domain names of the bootstrap and control plane nodes. 
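With these prerequisites in place, the bootstrap stage is typically the first thing to watch. The following is a minimal sketch of that kind of monitoring, assuming a hypothetical bootstrap host name of bootstrap.example.com and the release-image.service and bootkube.service units present on a standard RHCOS bootstrap node; adjust the host name and unit names for your environment:

```terminal
$ ssh core@bootstrap.example.com \
    journalctl -b -f -u release-image.service -u bootkube.service
```

The command follows the bootstrap journal for the current boot, so you can see which installation stage is running when a failure occurs.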
diff --git a/ocp-product-docs-plaintext/4.15/virt/about_virt/about-virt.txt b/ocp-product-docs-plaintext/4.15/virt/about_virt/about-virt.txt index 20b2bee3..937b1e9e 100644 --- a/ocp-product-docs-plaintext/4.15/virt/about_virt/about-virt.txt +++ b/ocp-product-docs-plaintext/4.15/virt/about_virt/about-virt.txt @@ -30,6 +30,8 @@ You can use OpenShift Virtualization with OVN-Kubernetes, OpenShift SDN, or one You can check your OpenShift Virtualization cluster for compliance issues by installing the Compliance Operator and running a scan with the ocp4-moderate and ocp4-moderate-node profiles. The Compliance Operator uses OpenSCAP, a NIST-certified tool, to scan and enforce security policies. +For information about partnering with Independent Software Vendors (ISVs) and Services partners for specialized storage, networking, backup, and additional functionality, see the Red Hat Ecosystem Catalog. + ## OpenShift Virtualization supported cluster version The latest stable release of OpenShift Virtualization 4.15 is 4.15.1. @@ -40,6 +42,8 @@ OpenShift Virtualization 4.15 is supported for use on Red Hat OpenShift Containe If you use the storage API with known storage providers, the volume and access modes are selected automatically. However, if you use a storage class that does not have a storage profile, you must configure the volume and access mode. +For a list of known storage providers for OpenShift Virtualization, see the Red Hat Ecosystem Catalog. + For best results, use the ReadWriteMany (RWX) access mode and the Block volume mode. This is important for the following reasons: * ReadWriteMany (RWX) access mode is required for live migration. diff --git a/ocp-product-docs-plaintext/4.15/virt/install/preparing-cluster-for-virt.txt b/ocp-product-docs-plaintext/4.15/virt/install/preparing-cluster-for-virt.txt index 187ec437..5f207fd6 100644 --- a/ocp-product-docs-plaintext/4.15/virt/install/preparing-cluster-for-virt.txt +++ b/ocp-product-docs-plaintext/4.15/virt/install/preparing-cluster-for-virt.txt @@ -114,6 +114,8 @@ To mark a storage class as the default for virtualization workloads, set the ann If you use the storage API with known storage providers, the volume and access modes are selected automatically. However, if you use a storage class that does not have a storage profile, you must configure the volume and access mode. +For a list of known storage providers for OpenShift Virtualization, see the Red Hat Ecosystem Catalog. + For best results, use the ReadWriteMany (RWX) access mode and the Block volume mode. This is important for the following reasons: * ReadWriteMany (RWX) access mode is required for live migration. diff --git a/ocp-product-docs-plaintext/4.15/virt/monitoring/virt-prometheus-queries.txt b/ocp-product-docs-plaintext/4.15/virt/monitoring/virt-prometheus-queries.txt index 83ac9f0f..420ffa9b 100644 --- a/ocp-product-docs-plaintext/4.15/virt/monitoring/virt-prometheus-queries.txt +++ b/ocp-product-docs-plaintext/4.15/virt/monitoring/virt-prometheus-queries.txt @@ -17,7 +17,7 @@ cluster administrator or as a user with view permissions for all projects, you can access metrics for all default Red Hat OpenShift Container Platform and user-defined projects in the Metrics UI. * You have access to the cluster as a user with the cluster-admin cluster role or with view permissions for all projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. 
From the Administrator perspective in the Red Hat OpenShift Container Platform web console, select Observe -> Metrics. 2. To add one or more queries, do any of the following: diff --git a/ocp-product-docs-plaintext/4.15/virt/vm_networking/virt-hot-plugging-network-interfaces.txt b/ocp-product-docs-plaintext/4.15/virt/vm_networking/virt-hot-plugging-network-interfaces.txt index 27249fb4..80903d6b 100644 --- a/ocp-product-docs-plaintext/4.15/virt/vm_networking/virt-hot-plugging-network-interfaces.txt +++ b/ocp-product-docs-plaintext/4.15/virt/vm_networking/virt-hot-plugging-network-interfaces.txt @@ -25,21 +25,12 @@ If you restart the VM after hot plugging an interface, that interface becomes pa Hot plug a secondary network interface to a virtual machine (VM) while the VM is running. * A network attachment definition is configured in the same namespace as your VM. +* The VM to which you want to hot plug the network interface is running. * You have installed the virtctl tool. -* You have installed the OpenShift CLI (oc). - -1. If the VM to which you want to hot plug the network interface is not running, start it by using the following command: - -```terminal -$ virtctl start -n -``` - -2. Use the following command to add the new network interface to the running VM. Editing the VM specification adds the new network interface to the VM and virtual machine instance (VMI) configuration but does not attach it to the running VM. - -```terminal -$ oc edit vm -``` +* You have permission to create and list VirtualMachineInstanceMigration objects. +* You have installed the OpenShift CLI (`oc`). +1. Use your preferred text editor to edit the VirtualMachine manifest, as shown in the following example: Example VM configuration ```yaml @@ -70,7 +61,7 @@ template: Specifies the name of the new network interface. Specifies the name of the network. This must be the same as the name of the new network interface that you defined in the template.spec.domain.devices.interfaces list. Specifies the name of the NetworkAttachmentDefinition object. -3. To attach the network interface to the running VM, live migrate the VM by running the following command: +2. To attach the network interface to the running VM, live migrate the VM by running the following command: ```terminal $ virtctl migrate diff --git a/ocp-product-docs-plaintext/4.16/backup_and_restore/application_backup_and_restore/backing_up_and_restoring/backing-up-applications.txt b/ocp-product-docs-plaintext/4.16/backup_and_restore/application_backup_and_restore/backing_up_and_restoring/backing-up-applications.txt index 78c8b731..2ef031f1 100644 --- a/ocp-product-docs-plaintext/4.16/backup_and_restore/application_backup_and_restore/backing_up_and_restoring/backing-up-applications.txt +++ b/ocp-product-docs-plaintext/4.16/backup_and_restore/application_backup_and_restore/backing_up_and_restoring/backing-up-applications.txt @@ -4,8 +4,11 @@ Frequent backups might consume storage on the backup storage location. Check the frequency of backups, retention time, and the amount of data of the persistent volumes (PVs) if using non-local backups, for example, S3 buckets. Because all taken backup remains until expired, also check the time to live (TTL) setting of the schedule. You can back up applications by creating a Backup custom resource (CR). For more information, see Creating a Backup CR. +The following are the different backup types for a Backup CR: * The Backup CR creates backup files for Kubernetes resources and internal images on S3 object storage. 
+* If you use Velero's snapshot feature to back up data stored on the persistent volume, only snapshot related information is stored in the S3 bucket along with the Openshift object data. * If your cloud provider has a native snapshot API or supports CSI snapshots, the Backup CR backs up persistent volumes (PVs) by creating snapshots. For more information about working with CSI snapshots, see Backing up persistent volumes with CSI snapshots. +If the underlying storage or the backup bucket are part of the same cluster, then the data might be lost in case of disaster. For more information about CSI volume snapshots, see CSI volume snapshots. [IMPORTANT] diff --git a/ocp-product-docs-plaintext/4.16/installing/installing_bare_metal_ipi/ipi-install-prerequisites.txt b/ocp-product-docs-plaintext/4.16/installing/installing_bare_metal_ipi/ipi-install-prerequisites.txt index e731bbbc..a53b50ec 100644 --- a/ocp-product-docs-plaintext/4.16/installing/installing_bare_metal_ipi/ipi-install-prerequisites.txt +++ b/ocp-product-docs-plaintext/4.16/installing/installing_bare_metal_ipi/ipi-install-prerequisites.txt @@ -364,4 +364,49 @@ Prior to the installation of the Red Hat OpenShift Container Platform cluster, g * Control plane and worker nodes are configured. * All nodes accessible via out-of-band management. * (Optional) A separate management network has been created. -* Required data for installation. \ No newline at end of file +* Required data for installation. + +# Installation overview + +The installation program supports interactive mode. However, you can prepare an install-config.yaml file containing the provisioning details for all of the bare-metal hosts, and the relevant cluster details, in advance. + +The installation program loads the install-config.yaml file and the administrator generates the manifests and verifies all prerequisites. + +The installation program performs the following tasks: + +* Enrolls all nodes in the cluster +* Starts the bootstrap virtual machine (VM) +* Starts the metal platform components as systemd services, which have the following containers: +* Ironic-dnsmasq: The DHCP server responsible for handing over the IP addresses to the provisioning interface of various nodes on the provisioning network. Ironic-dnsmasq is only enabled when you deploy an Red Hat OpenShift Container Platform cluster with a provisioning network. +* Ironic-httpd: The HTTP server that is used to ship the images to the nodes. +* Image-customization +* Ironic +* Ironic-inspector (available in Red Hat OpenShift Container Platform 4.16 and earlier) +* Ironic-ramdisk-logs +* Extract-machine-os +* Provisioning-interface +* Metal3-baremetal-operator + +The nodes enter the validation phase, where each node moves to a manageable state after Ironic validates the credentials to access the Baseboard Management Controller (BMC). + +When the node is in the manageable state, the inspection phase starts. The inspection phase ensures that the hardware meets the minimum requirements needed for a successful deployment of Red Hat OpenShift Container Platform. + +The install-config.yaml file details the provisioning network. On the bootstrap VM, the installation program uses the Pre-Boot Execution Environment (PXE) to push a live image to every node with the Ironic Python Agent (IPA) loaded. When using virtual media, it connects directly to the BMC of each node to virtually attach the image. 
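+
+A convenient way to follow the enrollment, validation, and inspection phases described above, once the cluster API is reachable, is to watch the BareMetalHost resources. This is a minimal sketch that assumes the hosts are registered in the standard openshift-machine-api namespace; the state names shown by the watch, such as registering, inspecting, and provisioning, correspond to the phases described here:
+
+```terminal
+$ oc get baremetalhosts -n openshift-machine-api -w
+```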
+ +When using PXE boot, all nodes reboot to start the process: + +* The ironic-dnsmasq service running on the bootstrap VM provides the IP address of the node and the TFTP boot server. +* The first-boot software loads the root file system over HTTP. +* The ironic service on the bootstrap VM receives the hardware information from each node. + +The nodes enter the cleaning state, where each node must clean all the disks before continuing with the configuration. + +After the cleaning state finishes, the nodes enter the available state and the installation program moves the nodes to the deploying state. + +IPA runs the coreos-installer command to install the Red Hat Enterprise Linux CoreOS (RHCOS) image on the disk defined by the rootDeviceHints parameter in the install-config.yaml file. The node boots by using RHCOS. + +After the installation program configures the control plane nodes, it moves control from the bootstrap VM to the control plane nodes and deletes the bootstrap VM. + +The Bare-Metal Operator continues the deployment of the workers, storage, and infra nodes. + +After the installation completes, the nodes move to the active state. You can then proceed with postinstallation configuration and other Day 2 tasks. \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.16/installing/installing_vsphere/installing-vsphere-agent-based-installer.txt b/ocp-product-docs-plaintext/4.16/installing/installing_vsphere/installing-vsphere-agent-based-installer.txt index 1921582c..2558ab8b 100644 --- a/ocp-product-docs-plaintext/4.16/installing/installing_vsphere/installing-vsphere-agent-based-installer.txt +++ b/ocp-product-docs-plaintext/4.16/installing/installing_vsphere/installing-vsphere-agent-based-installer.txt @@ -1,9 +1,16 @@ # Installing a cluster on vSphere using the Agent-based Installer + The Agent-based installation method provides the flexibility to boot your on-premise servers in any way that you choose. It combines the ease of use of the Assisted Installation service with the ability to run offline, including in air-gapped environments. + Agent-based installation is a subcommand of the Red Hat OpenShift Container Platform installer. It generates a bootable ISO image containing all of the information required to deploy an Red Hat OpenShift Container Platform cluster with an available release image. -# Additional resources +For more information about installing a cluster using the Agent-based Installer, see Preparing to install with the Agent-based Installer. + -* Preparing to install with the Agent-based Installer \ No newline at end of file +[IMPORTANT] +---- +Your vSphere account must include privileges for reading and creating the resources required to install an Red Hat OpenShift Container Platform cluster. +For more information about privileges, see vCenter requirements. +---- \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.16/machine_configuration/index.txt b/ocp-product-docs-plaintext/4.16/machine_configuration/index.txt index 8b4a5888..65201372 100644 --- a/ocp-product-docs-plaintext/4.16/machine_configuration/index.txt +++ b/ocp-product-docs-plaintext/4.16/machine_configuration/index.txt @@ -335,7 +335,7 @@ UPDATED:: The True status indicates that the MCO has applied the current machine UPDATING:: The True status indicates that the MCO is applying the desired machine config, as specified in the MachineConfigPool custom resource, to at least one of the nodes in that MCP. The desired machine config is the new, edited machine config. 
Nodes that are updating might not be available for scheduling. The False status indicates that all nodes in the MCP are updated. DEGRADED:: A True status indicates the MCO is blocked from applying the current or desired machine config to at least one of the nodes in that MCP, or the configuration is failing. Nodes that are degraded might not be available for scheduling. A False status indicates that all nodes in the MCP are ready. MACHINECOUNT:: Indicates the total number of machines in that MCP.
-READYMACHINECOUNT:: Indicates the total number of machines in that MCP that are ready for scheduling.
+READYMACHINECOUNT:: Indicates the number of machines that are both running the current machine config and are ready for scheduling. This count is always less than or equal to the UPDATEDMACHINECOUNT number.
 UPDATEDMACHINECOUNT:: Indicates the total number of machines in that MCP that have the current machine config.
 DEGRADEDMACHINECOUNT:: Indicates the total number of machines in that MCP that are marked as degraded or unreconcilable.
diff --git a/ocp-product-docs-plaintext/4.16/networking/networking_operators/aws_load_balancer_operator/understanding-aws-load-balancer-operator.txt b/ocp-product-docs-plaintext/4.16/networking/networking_operators/aws_load_balancer_operator/understanding-aws-load-balancer-operator.txt
index e3efb179..205d2c4c 100644
--- a/ocp-product-docs-plaintext/4.16/networking/networking_operators/aws_load_balancer_operator/understanding-aws-load-balancer-operator.txt
+++ b/ocp-product-docs-plaintext/4.16/networking/networking_operators/aws_load_balancer_operator/understanding-aws-load-balancer-operator.txt
@@ -20,30 +20,18 @@ The AWS Load Balancer Operator can tag the public subnets if the kubernetes.io/r
 The AWS Load Balancer Operator supports the Kubernetes service resource of type LoadBalancer by using Network Load Balancer (NLB) with the instance target type only.
 
-1. You can deploy the AWS Load Balancer Operator on demand from OperatorHub, by creating a Subscription object by running the following command:
+1. After you deploy the AWS Load Balancer Operator on demand from OperatorHub by creating a Subscription object, get the name of the associated install plan by running the following command:
 
```terminal
$ oc -n aws-load-balancer-operator get sub aws-load-balancer-operator --template='{{.status.installplan.name}}{{"\n"}}'
```
 
-Example output
-
-```terminal
-install-zlfbt
-```
-
 2. Check if the status of an install plan is Complete by running the following command:
 
```terminal
$ oc -n aws-load-balancer-operator get ip --template='{{.status.phase}}{{"\n"}}'
```
 
-Example output
-
-```terminal
-Complete
-```
-
 3. View the status of the aws-load-balancer-operator-controller-manager deployment by running the following command:
 
```terminal
diff --git a/ocp-product-docs-plaintext/4.16/networking/networking_operators/dns-operator.txt b/ocp-product-docs-plaintext/4.16/networking/networking_operators/dns-operator.txt
index 81d40e67..8a3e7145 100644
--- a/ocp-product-docs-plaintext/4.16/networking/networking_operators/dns-operator.txt
+++ b/ocp-product-docs-plaintext/4.16/networking/networking_operators/dns-operator.txt
@@ -71,6 +71,12 @@ The Cluster Domain field is the base DNS domain used to construct fully qualified pod and service domain names.
 The Cluster IP is the address pods query for name resolution. The IP is defined as the 10th address in the service CIDR range.
+2. 
To find the service CIDR range, such as 172.30.0.0/16, of your cluster, use the oc get command: + +```terminal +$ oc get networks.config/cluster -o jsonpath='{$.status.serviceNetwork}' +``` + # Using DNS forwarding @@ -131,7 +137,7 @@ spec: clusterDomain: cluster.local clusterIP: x.y.z.10 conditions: - ... +... ``` Must comply with the rfc6335 service name syntax. @@ -337,7 +343,7 @@ The string value can be a combination of units such as 0.5h10m and is converted 1. To review the change, look at the config map again by running the following command: ```terminal -oc get configmap/dns-default -n openshift-dns -o yaml +$ oc get configmap/dns-default -n openshift-dns -o yaml ``` 2. Verify that you see entries that look like the following example: @@ -368,19 +374,12 @@ The following are use cases for changing the DNS Operator managementState: oc patch dns.operator.openshift.io default --type merge --patch '{"spec":{"managementState":"Unmanaged"}}' ``` -2. Review managementState of the DNS Operator using the jsonpath command-line JSON parser: +2. Review managementState of the DNS Operator by using the jsonpath command-line JSON parser: ```terminal $ oc get dns.operator.openshift.io default -ojsonpath='{.spec.managementState}' ``` -Example output - -```terminal -"Unmanaged" -``` - - [NOTE] ---- diff --git a/ocp-product-docs-plaintext/4.16/networking/networking_operators/external_dns_operator/nw-configuring-cluster-wide-egress-proxy.txt b/ocp-product-docs-plaintext/4.16/networking/networking_operators/external_dns_operator/nw-configuring-cluster-wide-egress-proxy.txt index 45caae03..e4ecdcf0 100644 --- a/ocp-product-docs-plaintext/4.16/networking/networking_operators/external_dns_operator/nw-configuring-cluster-wide-egress-proxy.txt +++ b/ocp-product-docs-plaintext/4.16/networking/networking_operators/external_dns_operator/nw-configuring-cluster-wide-egress-proxy.txt @@ -26,14 +26,8 @@ $ oc -n external-dns-operator patch subscription external-dns-operator --type='j ``` -* After the deployment of the External DNS Operator is completed, verify that the trusted CA environment variable is added to the external-dns-operator deployment by running the following command: +* After the deployment of the External DNS Operator is completed, verify that the trusted CA environment variable is added, outputted as trusted-ca, to the external-dns-operator deployment by running the following command: ```terminal $ oc -n external-dns-operator exec deploy/external-dns-operator -c external-dns-operator -- printenv TRUSTED_CA_CONFIGMAP_NAME ``` - -Example output - -```terminal -trusted-ca -``` diff --git a/ocp-product-docs-plaintext/4.16/networking/networking_operators/external_dns_operator/nw-creating-dns-records-on-aws.txt b/ocp-product-docs-plaintext/4.16/networking/networking_operators/external_dns_operator/nw-creating-dns-records-on-aws.txt index d899e333..475a42b8 100644 --- a/ocp-product-docs-plaintext/4.16/networking/networking_operators/external_dns_operator/nw-creating-dns-records-on-aws.txt +++ b/ocp-product-docs-plaintext/4.16/networking/networking_operators/external_dns_operator/nw-creating-dns-records-on-aws.txt @@ -7,22 +7,20 @@ You can create DNS records on AWS and AWS GovCloud by using the External DNS Ope You can create DNS records on a public hosted zone for AWS by using the Red Hat External DNS Operator. You can use the same instructions to create DNS records on a hosted zone for AWS GovCloud. -1. Check the user. The user must have access to the kube-system namespace. 
If you don’t have the credentials, as you can fetch the credentials from the kube-system namespace to use the cloud provider client: +1. Check the user profile, such as system:admin, by running the following command. The user profile must have access to the kube-system namespace. If you do not have the credentials, you can fetch the credentials from the kube-system namespace to use the cloud provider client by running the following command: ```terminal $ oc whoami ``` -Example output +2. Fetch the values from aws-creds secret present in kube-system namespace. ```terminal -system:admin +$ export AWS_ACCESS_KEY_ID=$(oc get secrets aws-creds -n kube-system --template={{.data.aws_access_key_id}} | base64 -d) ``` -2. Fetch the values from aws-creds secret present in kube-system namespace. ```terminal -$ export AWS_ACCESS_KEY_ID=$(oc get secrets aws-creds -n kube-system --template={{.data.aws_access_key_id}} | base64 -d) $ export AWS_SECRET_ACCESS_KEY=$(oc get secrets aws-creds -n kube-system --template={{.data.aws_secret_access_key}} | base64 -d) ``` @@ -39,7 +37,7 @@ openshift-console console console-openshift-console.apps.te openshift-console downloads downloads-openshift-console.apps.testextdnsoperator.apacshift.support downloads http edge/Redirect None ``` -4. Get the list of dns zones to find the one which corresponds to the previously found route's domain: +4. Get the list of DNS zones and find the DNS zone that corresponds to the domain of the route that you previously queried: ```terminal $ aws route53 list-hosted-zones | grep testextdnsoperator.apacshift.support diff --git a/ocp-product-docs-plaintext/4.16/networking/networking_operators/external_dns_operator/nw-creating-dns-records-on-gcp.txt b/ocp-product-docs-plaintext/4.16/networking/networking_operators/external_dns_operator/nw-creating-dns-records-on-gcp.txt index 32c37150..d09fbc90 100644 --- a/ocp-product-docs-plaintext/4.16/networking/networking_operators/external_dns_operator/nw-creating-dns-records-on-gcp.txt +++ b/ocp-product-docs-plaintext/4.16/networking/networking_operators/external_dns_operator/nw-creating-dns-records-on-gcp.txt @@ -51,18 +51,12 @@ openshift-console console console-openshift-console.apps.te openshift-console downloads downloads-openshift-console.apps.test.gcp.example.com downloads http edge/Redirect None ``` -6. Get a list of managed zones by running the following command: +6. Get a list of managed zones, such as qe-cvs4g-private-zone test.gcp.example.com, by running the following command: ```terminal $ gcloud dns managed-zones list | grep test.gcp.example.com ``` -Example output - -```terminal -qe-cvs4g-private-zone test.gcp.example.com -``` - 7. 
Create a YAML file, for example, external-dns-sample-gcp.yaml, that defines the ExternalDNS object: Example external-dns-sample-gcp.yaml file diff --git a/ocp-product-docs-plaintext/4.16/networking/networking_operators/external_dns_operator/nw-installing-external-dns-operator-on-cloud-providers.txt b/ocp-product-docs-plaintext/4.16/networking/networking_operators/external_dns_operator/nw-installing-external-dns-operator-on-cloud-providers.txt index 5d82bab3..67559fa9 100644 --- a/ocp-product-docs-plaintext/4.16/networking/networking_operators/external_dns_operator/nw-installing-external-dns-operator-on-cloud-providers.txt +++ b/ocp-product-docs-plaintext/4.16/networking/networking_operators/external_dns_operator/nw-installing-external-dns-operator-on-cloud-providers.txt @@ -131,22 +131,8 @@ external-dns-operator-5584585fd7-5lwqm 2/2 Running 0 11m $ oc -n external-dns-operator get subscription ``` -Example output - -```terminal -NAME PACKAGE SOURCE CHANNEL -external-dns-operator external-dns-operator redhat-operators stable-v1 -``` - 5. Check the external-dns-operator version by running the following command: ```terminal $ oc -n external-dns-operator get csv ``` - -Example output - -```terminal -NAME DISPLAY VERSION REPLACES PHASE -external-dns-operator.v<1.y.z> ExternalDNS Operator <1.y.z> Succeeded -``` diff --git a/ocp-product-docs-plaintext/4.16/networking/networking_operators/external_dns_operator/understanding-external-dns-operator.txt b/ocp-product-docs-plaintext/4.16/networking/networking_operators/external_dns_operator/understanding-external-dns-operator.txt index 508d9cd6..89abe4d8 100644 --- a/ocp-product-docs-plaintext/4.16/networking/networking_operators/external_dns_operator/understanding-external-dns-operator.txt +++ b/ocp-product-docs-plaintext/4.16/networking/networking_operators/external_dns_operator/understanding-external-dns-operator.txt @@ -11,30 +11,18 @@ The External DNS Operator implements the External DNS API from the olm.openshift You can deploy the External DNS Operator on demand from the OperatorHub. Deploying the External DNS Operator creates a Subscription object. -1. Check the name of an install plan by running the following command: +1. Check the name of an install plan, such as install-zcvlr, by running the following command: ```terminal $ oc -n external-dns-operator get sub external-dns-operator -o yaml | yq '.status.installplan.name' ``` -Example output - -```terminal -install-zcvlr -``` - 2. Check if the status of an install plan is Complete by running the following command: ```terminal $ oc -n external-dns-operator get ip -o yaml | yq '.status.phase' ``` -Example output - -```terminal -Complete -``` - 3. View the status of the external-dns-operator deployment by running the following command: ```terminal diff --git a/ocp-product-docs-plaintext/4.16/networking/networking_operators/ingress-operator.txt b/ocp-product-docs-plaintext/4.16/networking/networking_operators/ingress-operator.txt index d2a1e5e0..ed3583b3 100644 --- a/ocp-product-docs-plaintext/4.16/networking/networking_operators/ingress-operator.txt +++ b/ocp-product-docs-plaintext/4.16/networking/networking_operators/ingress-operator.txt @@ -314,19 +314,12 @@ certificate authority that you configured in a custom PKI. * Your certificate meets the following requirements: * The certificate is valid for the ingress domain. * The certificate uses the subjectAltName extension to specify a wildcard domain, such as *.apps.ocp4.example.com. -* You must have an IngressController CR. 
You may use the default one: +* You must have an IngressController CR, which includes just having the default IngressController CR. You can run the following command to check that you have an IngressController CR: ```terminal $ oc --namespace openshift-ingress-operator get ingresscontrollers ``` -Example output - -```terminal -NAME AGE -default 10m -``` - [NOTE] @@ -617,18 +610,12 @@ $ oc apply -f ingress-autoscaler.yaml * Verify that the default Ingress Controller is scaled out to match the value returned by the kube-state-metrics query by running the following commands: -* Use the grep command to search the Ingress Controller YAML file for replicas: +* Use the grep command to search the Ingress Controller YAML file for the number of replicas: ```terminal $ oc get -n openshift-ingress-operator ingresscontroller/default -o yaml | grep replicas: ``` -Example output - -```terminal - replicas: 3 -``` - * Get the pods in the openshift-ingress project: ```terminal @@ -670,39 +657,18 @@ Scaling is not an immediate action, as it takes time to create the desired numbe $ oc get -n openshift-ingress-operator ingresscontrollers/default -o jsonpath='{$.status.availableReplicas}' ``` -Example output - -```terminal -2 -``` - -2. Scale the default IngressController to the desired number of replicas using -the oc patch command. The following example scales the default IngressController -to 3 replicas: +2. Scale the default IngressController to the desired number of replicas by using the oc patch command. The following example scales the default IngressController to 3 replicas. ```terminal $ oc patch -n openshift-ingress-operator ingresscontroller/default --patch '{"spec":{"replicas": 3}}' --type=merge ``` -Example output - -```terminal -ingresscontroller.operator.openshift.io/default patched -``` - -3. Verify that the default IngressController scaled to the number of replicas -that you specified: +3. Verify that the default IngressController scaled to the number of replicas that you specified: ```terminal $ oc get -n openshift-ingress-operator ingresscontrollers/default -o jsonpath='{$.status.availableReplicas}' ``` -Example output - -```terminal -3 -``` - [TIP] ---- @@ -1519,18 +1485,12 @@ Optional: Domain for Red Hat OpenShift Container Platform infrastructure to use ---- Wait for the openshift-apiserver finish rolling updates before exposing the route. ---- -1. Expose the route: +1. Expose the route by entering the following command. The command outputs route.route.openshift.io/hello-openshift exposed to designate exposure of the route. ```terminal $ oc expose service hello-openshift ``` -Example output - -```terminal -route.route.openshift.io/hello-openshift exposed -``` - 2. Get a list of routes by running the following command: ```terminal diff --git a/ocp-product-docs-plaintext/4.16/networking/networking_operators/k8s-nmstate-about-the-k8s-nmstate-operator.txt b/ocp-product-docs-plaintext/4.16/networking/networking_operators/k8s-nmstate-about-the-k8s-nmstate-operator.txt index 2357465d..a8293b39 100644 --- a/ocp-product-docs-plaintext/4.16/networking/networking_operators/k8s-nmstate-about-the-k8s-nmstate-operator.txt +++ b/ocp-product-docs-plaintext/4.16/networking/networking_operators/k8s-nmstate-about-the-k8s-nmstate-operator.txt @@ -30,7 +30,7 @@ You can install the Kubernetes NMState Operator by using the web console or the ## Installing the Kubernetes NMState Operator by using the web console -You can install the Kubernetes NMState Operator by using the web console. 
After it is installed, the Operator can deploy the NMState State Controller as a daemon set across all of the cluster nodes.
+You can install the Kubernetes NMState Operator by using the web console. After you install the Kubernetes NMState Operator, the Operator deploys the NMState State Controller as a daemon set across all of the cluster nodes.
 
 * You are logged in as a user with cluster-admin privileges.
 
@@ -49,8 +49,6 @@ The name restriction is a known issue. The instance is a singleton for the entir
 ----
 9. Accept the default settings and click Create to create the instance.
 
-Once complete, the Operator has deployed the NMState State Controller as a daemon set across all of the cluster nodes.
-
 ## Installing the Kubernetes NMState Operator by using the CLI
 
 You can install the Kubernetes NMState Operator by using the OpenShift CLI (oc). After it is installed, the Operator can deploy the NMState State Controller as a daemon set across all of the cluster nodes.
@@ -112,13 +110,6 @@ $ oc get clusterserviceversion -n openshift-nmstate \
   -o custom-columns=Name:.metadata.name,Phase:.status.phase
```
 
-Example output
-
-```terminal
-Name Phase
-kubernetes-nmstate-operator.4.16.0-202210210157 Succeeded
-```
-
 5. Create an instance of the nmstate Operator:
 
```terminal
@@ -130,19 +121,31 @@ metadata:
 EOF
```
 
-6. Verify that all pods for the NMState Operator are in a Running state:
+6. If your cluster has problems with the DNS health check probe because of DNS connectivity issues, you can add the following DNS host name configuration to the NMState CRD to build in health checks that can resolve these issues:
 
```yaml
-$ oc get pod -n openshift-nmstate
+apiVersion: nmstate.io/v1
+kind: NMState
+metadata:
+  name: nmstate
+spec:
+  probeConfiguration:
+    dns:
+      host: redhat.com
+# ...
+```
+
+7. Apply the DNS host name configuration to your cluster network by running the following command. Ensure that you replace with the name of your CRD file.
+
+```terminal
+$ oc apply -f .yaml
```
-Example output
+
+* Verify that all pods for the NMState Operator have the Running status by entering the following command:
 
```terminal
-Name Ready Status Restarts Age
-pod/nmstate-handler-wn55p 1/1 Running 0 77s
-pod/nmstate-operator-f6bb869b6-v5m92 1/1 Running 0 4m51s
-...
+$ oc get pod -n openshift-nmstate
```
diff --git a/ocp-product-docs-plaintext/4.16/networking/networking_operators/metallb-operator/metallb-operator-install.txt b/ocp-product-docs-plaintext/4.16/networking/networking_operators/metallb-operator/metallb-operator-install.txt
index a16a1d28..ce2fc591 100644
--- a/ocp-product-docs-plaintext/4.16/networking/networking_operators/metallb-operator/metallb-operator-install.txt
+++ b/ocp-product-docs-plaintext/4.16/networking/networking_operators/metallb-operator/metallb-operator-install.txt
@@ -119,20 +119,13 @@ install-wzg94 metallb-operator.4.16.0-nnnnnnnnnnnn Automatic true
 ----
 Installation of the Operator might take a few seconds.
 ----
-2. To verify that the Operator is installed, enter the following command:
+2. 
To verify that the Operator is installed, enter the following command and then check that output shows Succeeded for the Operator: ```terminal $ oc get clusterserviceversion -n metallb-system \ -o custom-columns=Name:.metadata.name,Phase:.status.phase ``` -Example output - -```terminal -Name Phase -metallb-operator.4.16.0-nnnnnnnnnnnn Succeeded -``` - # Starting MetalLB on your cluster diff --git a/ocp-product-docs-plaintext/4.16/networking/networking_operators/metallb-operator/metallb-upgrading-operator.txt b/ocp-product-docs-plaintext/4.16/networking/networking_operators/metallb-operator/metallb-upgrading-operator.txt index dd9d693e..39c5f81b 100644 --- a/ocp-product-docs-plaintext/4.16/networking/networking_operators/metallb-operator/metallb-upgrading-operator.txt +++ b/ocp-product-docs-plaintext/4.16/networking/networking_operators/metallb-operator/metallb-upgrading-operator.txt @@ -42,13 +42,6 @@ spec: $ oc -n metallb-system get csv ``` -Example output - -```terminal -NAME DISPLAY VERSION REPLACES PHASE -metallb-operator.v4.16.0 MetalLB Operator 4.16.0 Succeeded -``` - 4. Check the install plan that exists in the namespace by entering the following command. ```terminal @@ -76,19 +69,12 @@ $ oc edit installplan -n metallb-system After you edit the install plan, the upgrade operation starts. If you enter the oc -n metallb-system get csv command during the upgrade operation, the output might show the Replacing or the Pending status. ---- -1. Verify the upgrade was successful by entering the following command: +* To verify that the Operator is upgraded, enter the following command and then check that output shows Succeeded for the Operator: ```terminal $ oc -n metallb-system get csv ``` -Example output - -```terminal -NAME DISPLAY VERSION REPLACE PHASE -metallb-operator.v4..0-202503102139 MetalLB Operator 4.16.0-202503102139 metallb-operator.v4.16.0-202502261233 Succeeded -``` - # Additional resources diff --git a/ocp-product-docs-plaintext/4.16/networking/networking_operators/sr-iov-operator/installing-sriov-operator.txt b/ocp-product-docs-plaintext/4.16/networking/networking_operators/sr-iov-operator/installing-sriov-operator.txt index e2d6311d..8f91163c 100644 --- a/ocp-product-docs-plaintext/4.16/networking/networking_operators/sr-iov-operator/installing-sriov-operator.txt +++ b/ocp-product-docs-plaintext/4.16/networking/networking_operators/sr-iov-operator/installing-sriov-operator.txt @@ -78,20 +78,13 @@ EOF ``` -* Check that the Operator is installed by entering the following command: +* To verify that the Operator is installed, enter the following command and then check that output shows Succeeded for the Operator: ```terminal $ oc get csv -n openshift-sriov-network-operator \ -o custom-columns=Name:.metadata.name,Phase:.status.phase ``` -Example output - -```terminal -Name Phase -sriov-network-operator.4.16.0-202406131906 Succeeded -``` - ## Web console: Installing the SR-IOV Network Operator diff --git a/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.0/log60-cluster-logging-support.txt b/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.0/log60-cluster-logging-support.txt deleted file mode 100644 index d4c8e815..00000000 --- a/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.0/log60-cluster-logging-support.txt +++ /dev/null @@ -1,141 +0,0 @@ -# Support - - -Only the configuration options described in this documentation are supported for logging. -Do not use any other configuration options, as they are unsupported. 
Configuration paradigms might change across Red Hat OpenShift Container Platform releases, and such cases can only be handled gracefully if all configuration possibilities are controlled. If you use configurations other than those described in this documentation, your changes will be overwritten, because Operators are designed to reconcile any differences. - -[NOTE] ----- -If you must perform configurations not described in the Red Hat OpenShift Container Platform documentation, you must set your Red Hat OpenShift Logging Operator to Unmanaged. An unmanaged logging instance is not supported and does not receive updates until you return its status to Managed. ----- - -[NOTE] ----- -Logging is provided as an installable component, with a distinct release cycle from the core Red Hat OpenShift Container Platform. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility. ----- -Loki is a horizontally scalable, highly available, multi-tenant log aggregation system offered as a GA log store for logging for Red Hat OpenShift that can be visualized with the OpenShift Observability UI. The Loki configuration provided by OpenShift Logging is a short-term log store designed to enable users to perform fast troubleshooting with the collected logs. For that purpose, the logging for Red Hat OpenShift configuration of Loki has short-term storage, and is optimized for very recent queries. - -[IMPORTANT] ----- -For long-term storage or queries over a long time period, users should look to log stores external to their cluster. Loki sizing is only tested and supported for short term storage, for a maximum of 30 days. ----- -Logging for Red Hat OpenShift is an opinionated collector and normalizer of application, infrastructure, and audit logs. It is intended to be used for forwarding logs to various supported systems. -Logging is not: -* A high scale log collection system -* Security Information and Event Monitoring (SIEM) compliant -* A "bring your own" (BYO) log collector configuration -* Historical or long term log retention or storage -* A guaranteed log sink -* Secure storage - audit logs are not stored by default - -# Supported API custom resource definitions - -The following table describes the supported Logging APIs. - - - -# Unsupported configurations - -You must set the Red Hat OpenShift Logging Operator to the Unmanaged state to modify the following components: - -* The collector configuration file -* The collector daemonset - -Explicitly unsupported cases include: - -* Configuring the logging collector using environment variables. You cannot use environment variables to modify the log collector. -* Configuring how the log collector normalizes logs. You cannot modify default log normalization. - -# Support policy for unmanaged Operators - -The management state of an Operator determines whether an Operator is actively -managing the resources for its related component in the cluster as designed. If -an Operator is set to an unmanaged state, it does not respond to changes in -configuration nor does it receive updates. - -While this can be helpful in non-production clusters or during debugging, -Operators in an unmanaged state are unsupported and the cluster administrator -assumes full control of the individual component configurations and upgrades. - -An Operator can be set to an unmanaged state using the following methods: - -* Individual Operator configuration - -Individual Operators have a managementState parameter in their configuration. 
-This can be accessed in different ways, depending on the Operator. For example, -the Red Hat OpenShift Logging Operator accomplishes this by modifying a custom resource -(CR) that it manages, while the Cluster Samples Operator uses a cluster-wide -configuration resource. - -Changing the managementState parameter to Unmanaged means that the Operator -is not actively managing its resources and will take no action related to the -related component. Some Operators might not support this management state as it -might damage the cluster and require manual recovery. - -[WARNING] ----- -Changing individual Operators to the Unmanaged state renders that particular -component and functionality unsupported. Reported issues must be reproduced in -Managed state for support to proceed. ----- -* Cluster Version Operator (CVO) overrides - -The spec.overrides parameter can be added to the CVO’s configuration to allow -administrators to provide a list of overrides to the CVO’s behavior for a -component. Setting the spec.overrides[].unmanaged parameter to true for a -component blocks cluster upgrades and alerts the administrator after a CVO -override has been set: - -```terminal -Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing. -``` - - -[WARNING] ----- -Setting a CVO override puts the entire cluster in an unsupported state. Reported -issues must be reproduced after removing any overrides for support to proceed. ----- - -# Collecting logging data for Red Hat Support - -When opening a support case, it is helpful to provide debugging information about your cluster to Red Hat Support. - -You can use the must-gather tool to collect diagnostic information for project-level resources, cluster-level resources, and each of the logging components. -For prompt support, supply diagnostic information for both Red Hat OpenShift Container Platform and logging. - -## About the must-gather tool - -The oc adm must-gather CLI command collects the information from your cluster that is most likely needed for debugging issues. - -For your logging, must-gather collects the following information: - -* Project-level resources, including pods, configuration maps, service accounts, roles, role bindings, and events at the project level -* Cluster-level resources, including nodes, roles, and role bindings at the cluster level -* OpenShift Logging resources in the openshift-logging and openshift-operators-redhat namespaces, including health status for the log collector, the log store, and the log visualizer - -When you run oc adm must-gather, a new pod is created on the cluster. The data is collected on that pod and saved in a new directory that starts with must-gather.local. This directory is created in the current working directory. - -## Collecting logging data - -You can use the oc adm must-gather CLI command to collect information about logging. - -To collect logging information with must-gather: - -1. Navigate to the directory where you want to store the must-gather information. -2. Run the oc adm must-gather command against the logging image: - -```terminal -$ oc adm must-gather --image=$(oc -n openshift-logging get deployment.apps/cluster-logging-operator -o jsonpath='{.spec.template.spec.containers[?(@.name == "cluster-logging-operator")].image}') -``` - - -The must-gather tool creates a new directory that starts with must-gather.local within the current directory. For example: -must-gather.local.4157245944708210408. -3. 
Create a compressed file from the must-gather directory that was just created. For example, on a computer that uses a Linux operating system, run the following command: - -```terminal -$ tar -cvaf must-gather.tar.gz must-gather.local.4157245944708210408 -``` - -4. Attach the compressed file to your support case on the Red Hat Customer Portal. \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.0/log6x-about-logging.txt b/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.0/log6x-about-logging.txt new file mode 100644 index 00000000..8bd7a0d9 --- /dev/null +++ b/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.0/log6x-about-logging.txt @@ -0,0 +1,16 @@ +# About Logging 6.0 + + + +As a cluster administrator, you can deploy logging on an Red Hat OpenShift Container Platform cluster, and use it to collect and aggregate node system audit logs, application container logs, and infrastructure logs. + +You can use logging to perform the following tasks: + +* Forward logs to your chosen log outputs, including on-cluster, Red Hat managed log storage. +* Visualize your log data in the Red Hat OpenShift Container Platform web console. + + +[NOTE] +---- +Because logging releases on a different cadence from Red Hat OpenShift Container Platform, the logging 6 documentation is available as a separate documentation set at Red Hat OpenShift Logging. +---- \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.0/log6x-about.txt b/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.0/log6x-about.txt deleted file mode 100644 index 7777605d..00000000 --- a/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.0/log6x-about.txt +++ /dev/null @@ -1,160 +0,0 @@ -# Logging 6.0 - - -The ClusterLogForwarder custom resource (CR) is the central configuration point for log collection and forwarding. - -# Inputs and Outputs - -Inputs specify the sources of logs to be forwarded. Logging provides the following built-in input types that select logs from different parts of your cluster: - -* application -* receiver -* infrastructure -* audit - -You can also define custom inputs based on namespaces or pod labels to fine-tune log selection. - -Outputs define the destinations where logs are sent. Each output type has its own set of configuration options, allowing you to customize the behavior and authentication settings. - -# Receiver Input Type - -The receiver input type enables the Logging system to accept logs from external sources. It supports two formats for receiving logs: http and syslog. - -The ReceiverSpec field defines the configuration for a receiver input. - -# Pipelines and Filters - -Pipelines determine the flow of logs from inputs to outputs. A pipeline consists of one or more input refs, output refs, and optional filter refs. You can use filters to transform or drop log messages within a pipeline. The order of filters matters, as they are applied sequentially, and earlier filters can prevent log messages from reaching later stages. - -# Operator Behavior - -The Cluster Logging Operator manages the deployment and configuration of the collector based on the managementState field: - -* When set to Managed (default), the Operator actively manages the logging resources to match the configuration defined in the spec. -* When set to Unmanaged, the Operator does not take any action, allowing you to manually manage the logging components. 
- -# Validation - -Logging includes extensive validation rules and default values to ensure a smooth and error-free configuration experience. The ClusterLogForwarder resource enforces validation checks on required fields, dependencies between fields, and the format of input values. Default values are provided for certain fields, reducing the need for explicit configuration in common scenarios. - -# Quick Start - -* You have access to an Red Hat OpenShift Container Platform cluster with cluster-admin permissions. -* You installed the OpenShift CLI (`oc`). -* You have access to a supported object store. For example, AWS S3, Google Cloud Storage, Azure, Swift, Minio, or OpenShift Data Foundation. - -1. Install the Red Hat OpenShift Logging Operator, Loki Operator, and Cluster Observability Operator (COO) from OperatorHub. -2. Create a secret to access an existing object storage bucket: -Example command for AWS - -```terminal -$ oc create secret generic logging-loki-s3 \ - --from-literal=bucketnames="" \ - --from-literal=endpoint="" \ - --from-literal=access_key_id="" \ - --from-literal=access_key_secret="" \ - --from-literal=region="" \ - -n openshift-logging -``` - -3. Create a LokiStack custom resource (CR) in the openshift-logging namespace: - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: - managementState: Managed - size: 1x.extra-small - storage: - schemas: - - effectiveDate: '2022-06-01' - version: v13 - secret: - name: logging-loki-s3 - type: s3 - storageClassName: gp3-csi - tenants: - mode: openshift-logging -``` - -4. Create a service account for the collector: - -```shell -$ oc create sa collector -n openshift-logging -``` - -5. Bind the ClusterRole to the service account: - -```shell -$ oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector -n openshift-logging -``` - -6. Create a UIPlugin to enable the Log section in the Observe tab: - -```yaml -apiVersion: observability.openshift.io/v1alpha1 -kind: UIPlugin -metadata: - name: logging -spec: - type: Logging - logging: - lokiStack: - name: logging-loki -``` - -7. Add additional roles to the collector service account: - -```shell -$ oc adm policy add-cluster-role-to-user collect-application-logs -z collector -n openshift-logging -``` - - -```terminal -$ oc adm policy add-cluster-role-to-user collect-audit-logs -z collector -n openshift-logging -``` - - -```terminal -$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector -n openshift-logging -``` - -8. Create a ClusterLogForwarder CR to configure log forwarding: - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - name: collector - namespace: openshift-logging -spec: - serviceAccount: - name: collector - outputs: - - name: default-lokistack - type: lokiStack - lokiStack: - target: - name: logging-loki - namespace: openshift-logging - authentication: - token: - from: serviceAccount - tls: - ca: - key: service-ca.crt - configMapName: openshift-service-ca.crt - pipelines: - - name: default-logstore - inputRefs: - - application - - infrastructure - outputRefs: - - default-lokistack -``` - - -* Verify that logs are visible in the Log section of the Observe tab in the Red Hat OpenShift Container Platform web console. 
\ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.0/log6x-clf.txt b/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.0/log6x-clf.txt deleted file mode 100644 index 55d26652..00000000 --- a/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.0/log6x-clf.txt +++ /dev/null @@ -1,764 +0,0 @@ -# Configuring log forwarding - - -The ClusterLogForwarder (CLF) allows users to configure forwarding of logs to various destinations. It provides a flexible way to select log messages from different sources, send them through a pipeline that can transform or filter them, and forward them to one or more outputs. -* Selects log messages using inputs -* Forwards logs to external destinations using outputs -* Filters, transforms, and drops log messages using filters -* Defines log forwarding pipelines connecting inputs, filters and outputs - -# Setting up log collection - -This release of Cluster Logging requires administrators to explicitly grant log collection permissions to the service account associated with ClusterLogForwarder. This was not required in previous releases for the legacy logging scenario consisting of a ClusterLogging and, optionally, a ClusterLogForwarder.logging.openshift.io resource. - -The Red Hat OpenShift Logging Operator provides collect-audit-logs, collect-application-logs, and collect-infrastructure-logs cluster roles, which enable the collector to collect audit logs, application logs, and infrastructure logs respectively. - -Setup log collection by binding the required cluster roles to your service account. - -## Legacy service accounts - -To use the existing legacy service account logcollector, create the following ClusterRoleBinding: - - -```terminal -$ oc adm policy add-cluster-role-to-user collect-application-logs system:serviceaccount:openshift-logging:logcollector -``` - - - -```terminal -$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs system:serviceaccount:openshift-logging:logcollector -``` - - -Additionally, create the following ClusterRoleBinding if collecting audit logs: - - -```terminal -$ oc adm policy add-cluster-role-to-user collect-audit-logs system:serviceaccount:openshift-logging:logcollector -``` - - -## Creating service accounts - -* The Red Hat OpenShift Logging Operator is installed in the openshift-logging namespace. -* You have administrator permissions. - -1. Create a service account for the collector. If you want to write logs to storage that requires a token for authentication, you must include a token in the service account. -2. Bind the appropriate cluster roles to the service account: -Example binding command - -```terminal -$ oc adm policy add-cluster-role-to-user system:serviceaccount:: -``` - - -### Cluster Role Binding for your Service Account - -The role_binding.yaml file binds the ClusterLogging operator’s ClusterRole to a specific ServiceAccount, allowing it to manage Kubernetes resources cluster-wide. - - -```yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRoleBinding -metadata: - name: manager-rolebinding -roleRef: 1 - apiGroup: rbac.authorization.k8s.io 2 - kind: ClusterRole 3 - name: cluster-logging-operator 4 -subjects: 5 - - kind: ServiceAccount 6 - name: cluster-logging-operator 7 - namespace: openshift-logging 8 -``` - - -roleRef: References the ClusterRole to which the binding applies. -apiGroup: Indicates the RBAC API group, specifying that the ClusterRole is part of Kubernetes' RBAC system. 
-kind: Specifies that the referenced role is a ClusterRole, which applies cluster-wide. -name: The name of the ClusterRole being bound to the ServiceAccount, here cluster-logging-operator. -subjects: Defines the entities (users or service accounts) that are being granted the permissions from the ClusterRole. -kind: Specifies that the subject is a ServiceAccount. -Name: The name of the ServiceAccount being granted the permissions. -namespace: Indicates the namespace where the ServiceAccount is located. - -### Writing application logs - -The write-application-logs-clusterrole.yaml file defines a ClusterRole that grants permissions to write application logs to the Loki logging application. - - -```yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole -metadata: - name: cluster-logging-write-application-logs -rules: 1 - - apiGroups: 2 - - loki.grafana.com 3 - resources: 4 - - application 5 - resourceNames: 6 - - logs 7 - verbs: 8 - - create 9 -``` - - -rules: Specifies the permissions granted by this ClusterRole. -apiGroups: Refers to the API group loki.grafana.com, which relates to the Loki logging system. -loki.grafana.com: The API group for managing Loki-related resources. -resources: The resource type that the ClusterRole grants permission to interact with. -application: Refers to the application resources within the Loki logging system. -resourceNames: Specifies the names of resources that this role can manage. -logs: Refers to the log resources that can be created. -verbs: The actions allowed on the resources. -create: Grants permission to create new logs in the Loki system. - -### Writing audit logs - -The write-audit-logs-clusterrole.yaml file defines a ClusterRole that grants permissions to create audit logs in the Loki logging system. - - -```yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole -metadata: - name: cluster-logging-write-audit-logs -rules: 1 - - apiGroups: 2 - - loki.grafana.com 3 - resources: 4 - - audit 5 - resourceNames: 6 - - logs 7 - verbs: 8 - - create 9 -``` - - -rules: Defines the permissions granted by this ClusterRole. -apiGroups: Specifies the API group loki.grafana.com. -loki.grafana.com: The API group responsible for Loki logging resources. -resources: Refers to the resource type this role manages, in this case, audit. -audit: Specifies that the role manages audit logs within Loki. -resourceNames: Defines the specific resources that the role can access. -logs: Refers to the logs that can be managed under this role. -verbs: The actions allowed on the resources. -create: Grants permission to create new audit logs. - -### Writing infrastructure logs - -The write-infrastructure-logs-clusterrole.yaml file defines a ClusterRole that grants permission to create infrastructure logs in the Loki logging system. - - -```yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole -metadata: - name: cluster-logging-write-infrastructure-logs -rules: 1 - - apiGroups: 2 - - loki.grafana.com 3 - resources: 4 - - infrastructure 5 - resourceNames: 6 - - logs 7 - verbs: 8 - - create 9 -``` - - -rules: Specifies the permissions this ClusterRole grants. -apiGroups: Specifies the API group for Loki-related resources. -loki.grafana.com: The API group managing the Loki logging system. -resources: Defines the resource type that this role can interact with. -infrastructure: Refers to infrastructure-related resources that this role manages. -resourceNames: Specifies the names of resources this role can manage. 
-logs: Refers to the log resources related to infrastructure. -verbs: The actions permitted by this role. -create: Grants permission to create infrastructure logs in the Loki system. - -### ClusterLogForwarder editor role - -The clusterlogforwarder-editor-role.yaml file defines a ClusterRole that allows users to manage ClusterLogForwarders in OpenShift. - - -```yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole -metadata: - name: clusterlogforwarder-editor-role -rules: 1 - - apiGroups: 2 - - observability.openshift.io 3 - resources: 4 - - clusterlogforwarders 5 - verbs: 6 - - create 7 - - delete 8 - - get 9 - - list 10 - - patch 11 - - update 12 - - watch 13 -``` - - -rules: Specifies the permissions this ClusterRole grants. -apiGroups: Refers to the OpenShift-specific API group -obervability.openshift.io: The API group for managing observability resources, like logging. -resources: Specifies the resources this role can manage. -clusterlogforwarders: Refers to the log forwarding resources in OpenShift. -verbs: Specifies the actions allowed on the ClusterLogForwarders. -create: Grants permission to create new ClusterLogForwarders. -delete: Grants permission to delete existing ClusterLogForwarders. -get: Grants permission to retrieve information about specific ClusterLogForwarders. -list: Allows listing all ClusterLogForwarders. -patch: Grants permission to partially modify ClusterLogForwarders. -update: Grants permission to update existing ClusterLogForwarders. -watch: Grants permission to monitor changes to ClusterLogForwarders. - -# Modifying log level in collector - -To modify the log level in the collector, you can set the observability.openshift.io/log-level annotation to trace, debug, info, warn, error, and off. - - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - name: collector - annotations: - observability.openshift.io/log-level: debug -# ... -``` - - -# Managing the Operator - -The ClusterLogForwarder resource has a managementState field that controls whether the operator actively manages its resources or leaves them Unmanaged: - -Managed:: (default) The operator will drive the logging resources to match the desired state in the CLF spec. -Unmanaged:: The operator will not take any action related to the logging components. - -This allows administrators to temporarily pause log forwarding by setting managementState to Unmanaged. - -# Structure of the ClusterLogForwarder - -The CLF has a spec section that contains the following key components: - -Inputs:: Select log messages to be forwarded. Built-in input types application, infrastructure and audit forward logs from different parts of the cluster. You can also define custom inputs. -Outputs:: Define destinations to forward logs to. Each output has a unique name and type-specific configuration. -Pipelines:: Define the path logs take from inputs, through filters, to outputs. Pipelines have a unique name and consist of a list of input, output and filter names. -Filters:: Transform or drop log messages in the pipeline. Users can define filters that match certain log fields and drop or modify the messages. Filters are applied in the order specified in the pipeline. - -## Inputs - -Inputs are configured in an array under spec.inputs. There are three built-in input types: - -application:: Selects logs from all application containers, excluding those in infrastructure namespaces. 
-infrastructure:: Selects logs from nodes and from infrastructure components running in the following namespaces: -* default -* kube -* openshift -* Containing the kube- or openshift- prefix -audit:: Selects logs from the OpenShift API server audit logs, Kubernetes API server audit logs, ovn audit logs, and node audit logs from auditd. - -Users can define custom inputs of type application that select logs from specific namespaces or using pod labels. - -## Outputs - -Outputs are configured in an array under spec.outputs. Each output must have a unique name and a type. Supported types are: - -azureMonitor:: Forwards logs to Azure Monitor. -cloudwatch:: Forwards logs to AWS CloudWatch. -googleCloudLogging:: Forwards logs to Google Cloud Logging. -http:: Forwards logs to a generic HTTP endpoint. -kafka:: Forwards logs to a Kafka broker. -loki:: Forwards logs to a Loki logging backend. -lokistack:: Forwards logs to the logging supported combination of Loki and web proxy with Red Hat OpenShift Container Platform authentication integration. LokiStack's proxy uses Red Hat OpenShift Container Platform authentication to enforce multi-tenancy -otlp:: Forwards logs using the OpenTelemetry Protocol. -splunk:: Forwards logs to Splunk. -syslog:: Forwards logs to an external syslog server. - -Each output type has its own configuration fields. - -## Pipelines - -Pipelines are configured in an array under spec.pipelines. Each pipeline must have a unique name and consists of: - -inputRefs:: Names of inputs whose logs should be forwarded to this pipeline. -outputRefs:: Names of outputs to send logs to. -filterRefs:: (optional) Names of filters to apply. - -The order of filterRefs matters, as they are applied sequentially. Earlier filters can drop messages that will not be processed by later filters. - -## Filters - -Filters are configured in an array under spec.filters. They can match incoming log messages based on the value of structured fields and modify or drop them. - -Administrators can configure the following types of filters: - -## Enabling multi-line exception detection - -Enables multi-line error detection of container logs. - - -[WARNING] ----- -Enabling this feature could have performance implications and may require additional computing resources or alternate logging solutions. ----- - -Log parsers often incorrectly identify separate lines of the same exception as separate exceptions. This leads to extra log entries and an incomplete or inaccurate view of the traced information. - - -```java -java.lang.NullPointerException: Cannot invoke "String.toString()" because "" is null - at testjava.Main.handle(Main.java:47) - at testjava.Main.printMe(Main.java:19) - at testjava.Main.main(Main.java:10) -``` - - -* To enable logging to detect multi-line exceptions and reassemble them into a single log entry, ensure that the ClusterLogForwarder Custom Resource (CR) contains a detectMultilineErrors field under the .spec.filters. - - -```yaml -apiVersion: "observability.openshift.io/v1" -kind: ClusterLogForwarder -metadata: - name: - namespace: -spec: - serviceAccount: - name: - filters: - - name: - type: detectMultilineException - pipelines: - - inputRefs: - - - name: - filterRefs: - - - outputRefs: - - -``` - - -### Details - -When log messages appear as a consecutive sequence forming an exception stack trace, they are combined into a single, unified log record. The first log message’s content is replaced with the concatenated content of all the message fields in the sequence. 
- -The collector supports the following languages: - -* Java -* JS -* Ruby -* Python -* Golang -* PHP -* Dart - -## Configuring content filters to drop unwanted log records - -When the drop filter is configured, the log collector evaluates log streams according to the filters before forwarding. The collector drops unwanted log records that match the specified configuration. - -1. Add a configuration for a filter to the filters spec in the ClusterLogForwarder CR. - -The following example shows how to configure the ClusterLogForwarder CR to drop log records based on regular expressions: -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: -# ... -spec: - serviceAccount: - name: - filters: - - name: - type: drop 1 - drop: 2 - - test: 3 - - field: .kubernetes.labels."foo-bar/baz" 4 - matches: .+ 5 - - field: .kubernetes.pod_name - notMatches: "my-pod" 6 - pipelines: - - name: 7 - filterRefs: [""] -# ... -``` - -Specifies the type of filter. The drop filter drops log records that match the filter configuration. -Specifies configuration options for applying the drop filter. -Specifies the configuration for tests that are used to evaluate whether a log record is dropped. -* If all the conditions specified for a test are true, the test passes and the log record is dropped. -* When multiple tests are specified for the drop filter configuration, if any of the tests pass, the record is dropped. -* If there is an error evaluating a condition, for example, the field is missing from the log record being evaluated, that condition evaluates to false. -Specifies a dot-delimited field path, which is a path to a field in the log record. The path can contain alpha-numeric characters and underscores (a-zA-Z0-9_), for example, .kubernetes.namespace_name. If segments contain characters outside of this range, the segment must be in quotes, for example, .kubernetes.labels."foo.bar-bar/baz". You can include multiple field paths in a single test configuration, but they must all evaluate to true for the test to pass and the drop filter to be applied. -Specifies a regular expression. If log records match this regular expression, they are dropped. You can set either the matches or notMatches condition for a single field path, but not both. -Specifies a regular expression. If log records do not match this regular expression, they are dropped. You can set either the matches or notMatches condition for a single field path, but not both. -Specifies the pipeline that the drop filter is applied to. -2. Apply the ClusterLogForwarder CR by running the following command: - -```terminal -$ oc apply -f .yaml -``` - - -The following additional example shows how you can configure the drop filter to only keep higher priority log records: - - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: -# ... -spec: - serviceAccount: - name: - filters: - - name: important - type: drop - drop: - - test: - - field: .message - notMatches: "(?i)critical|error" - - field: .level - matches: "info|warning" -# ... -``` - - -In addition to including multiple field paths in a single test configuration, you can also include additional tests that are treated as OR checks. In the following example, records are dropped if either test configuration evaluates to true. 
However, for the second test configuration, both field specs must be true for it to be evaluated to true: - - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: -# ... -spec: - serviceAccount: - name: - filters: - - name: important - type: drop - drop: - - test: - - field: .kubernetes.namespace_name - matches: "^open" - - test: - - field: .log_type - matches: "application" - - field: .kubernetes.pod_name - notMatches: "my-pod" -# ... -``` - - -## Overview of API audit filter - -OpenShift API servers generate audit events for each API call, detailing the request, response, and the identity of the requester, leading to large volumes of data. The API Audit filter uses rules to enable the exclusion of non-essential events and the reduction of event size, facilitating a more manageable audit trail. Rules are checked in order, and checking stops at the first match. The amount of data that is included in an event is determined by the value of the level field: - -* None: The event is dropped. -* Metadata: Audit metadata is included, request and response bodies are removed. -* Request: Audit metadata and the request body are included, the response body is removed. -* RequestResponse: All data is included: metadata, request body and response body. The response body can be very large. For example, oc get pods -A generates a response body containing the YAML description of every pod in the cluster. - -The ClusterLogForwarder custom resource (CR) uses the same format as the standard Kubernetes audit policy, while providing the following additional functions: - -Wildcards:: Names of users, groups, namespaces, and resources can have a leading or trailing * asterisk character. For example, the namespace openshift-\* matches openshift-apiserver or openshift-authentication. Resource \*/status matches Pod/status or Deployment/status. -Default Rules:: Events that do not match any rule in the policy are filtered as follows: -* Read-only system events such as get, list, and watch are dropped. -* Service account write events that occur within the same namespace as the service account are dropped. -* All other events are forwarded, subject to any configured rate limits. - -To disable these defaults, either end your rules list with a rule that has only a level field or add an empty rule. - -Omit Response Codes:: A list of integer status codes to omit. You can drop events based on the HTTP status code in the response by using the OmitResponseCodes field, which lists HTTP status codes for which no events are created. The default value is [404, 409, 422, 429]. If the value is an empty list, [], then no status codes are omitted. - -The ClusterLogForwarder CR audit policy acts in addition to the Red Hat OpenShift Container Platform audit policy. The ClusterLogForwarder CR audit filter changes what the log collector forwards and provides the ability to filter by verb, user, group, namespace, or resource. You can create multiple filters to send different summaries of the same audit stream to different places. For example, you can send a detailed stream to the local cluster log store and a less detailed stream to a remote site. - - -[NOTE] ----- -You must have a cluster role collect-audit-logs to collect the audit logs. The following example provided is intended to illustrate the range of rules possible in an audit policy and is not a recommended configuration. 
----- - - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - name: - namespace: -spec: - serviceAccount: - name: - pipelines: - - name: my-pipeline - inputRefs: audit 1 - filterRefs: my-policy 2 - filters: - - name: my-policy - type: kubeAPIAudit - kubeAPIAudit: - # Don't generate audit events for all requests in RequestReceived stage. - omitStages: - - "RequestReceived" - - rules: - # Log pod changes at RequestResponse level - - level: RequestResponse - resources: - - group: "" - resources: ["pods"] - - # Log "pods/log", "pods/status" at Metadata level - - level: Metadata - resources: - - group: "" - resources: ["pods/log", "pods/status"] - - # Don't log requests to a configmap called "controller-leader" - - level: None - resources: - - group: "" - resources: ["configmaps"] - resourceNames: ["controller-leader"] - - # Don't log watch requests by the "system:kube-proxy" on endpoints or services - - level: None - users: ["system:kube-proxy"] - verbs: ["watch"] - resources: - - group: "" # core API group - resources: ["endpoints", "services"] - - # Don't log authenticated requests to certain non-resource URL paths. - - level: None - userGroups: ["system:authenticated"] - nonResourceURLs: - - "/api*" # Wildcard matching. - - "/version" - - # Log the request body of configmap changes in kube-system. - - level: Request - resources: - - group: "" # core API group - resources: ["configmaps"] - # This rule only applies to resources in the "kube-system" namespace. - # The empty string "" can be used to select non-namespaced resources. - namespaces: ["kube-system"] - - # Log configmap and secret changes in all other namespaces at the Metadata level. - - level: Metadata - resources: - - group: "" # core API group - resources: ["secrets", "configmaps"] - - # Log all other resources in core and extensions at the Request level. - - level: Request - resources: - - group: "" # core API group - - group: "extensions" # Version of group should NOT be included. - - # A catch-all rule to log all other requests at the Metadata level. - - level: Metadata -``` - - -The log types that are collected. The value for this field can be audit for audit logs, application for application logs, infrastructure for infrastructure logs, or a named input that has been defined for your application. -The name of your audit policy. - -## Filtering application logs at input by including the label expressions or a matching label key and values - -You can include the application logs based on the label expressions or a matching label key and its values by using the input selector. - -1. Add a configuration for a filter to the input spec in the ClusterLogForwarder CR. - -The following example shows how to configure the ClusterLogForwarder CR to include logs based on label expressions or matched label key/values: -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -# ... -spec: - serviceAccount: - name: - inputs: - - name: mylogs - application: - selector: - matchExpressions: - - key: env 1 - operator: In 2 - values: ["prod", "qa"] 3 - - key: zone - operator: NotIn - values: ["east", "west"] - matchLabels: 4 - app: one - name: app1 - type: application -# ... -``` - -Specifies the label key to match. -Specifies the operator. Valid values include: In, NotIn, Exists, and DoesNotExist. -Specifies an array of string values. If the operator value is either Exists or DoesNotExist, the value array must be empty. 
-Specifies an exact key or value mapping. -2. Apply the ClusterLogForwarder CR by running the following command: - -```terminal -$ oc apply -f .yaml -``` - - -## Configuring content filters to prune log records - -When the prune filter is configured, the log collector evaluates log streams according to the filters before forwarding. The collector prunes log records by removing low value fields such as pod annotations. - -1. Add a configuration for a filter to the prune spec in the ClusterLogForwarder CR. - -The following example shows how to configure the ClusterLogForwarder CR to prune log records based on field paths: - -[IMPORTANT] ----- -If both are specified, records are pruned based on the notIn array first, which takes precedence over the in array. After records have been pruned by using the notIn array, they are then pruned by using the in array. ----- -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: -# ... -spec: - serviceAccount: - name: - filters: - - name: - type: prune 1 - prune: 2 - in: [.kubernetes.annotations, .kubernetes.namespace_id] 3 - notIn: [.kubernetes,.log_type,.message,."@timestamp"] 4 - pipelines: - - name: 5 - filterRefs: [""] -# ... -``` - -Specify the type of filter. The prune filter prunes log records by configured fields. -Specify configuration options for applying the prune filter. The in and notIn fields are specified as arrays of dot-delimited field paths, which are paths to fields in log records. These paths can contain alpha-numeric characters and underscores (a-zA-Z0-9_), for example, .kubernetes.namespace_name. If segments contain characters outside of this range, the segment must be in quotes, for example, .kubernetes.labels."foo.bar-bar/baz". -Optional: Any fields that are specified in this array are removed from the log record. -Optional: Any fields that are not specified in this array are removed from the log record. -Specify the pipeline that the prune filter is applied to. - -[NOTE] ----- -The filters exempts the log_type, .log_source, and .message fields. ----- -2. Apply the ClusterLogForwarder CR by running the following command: - -```terminal -$ oc apply -f .yaml -``` - - -# Filtering the audit and infrastructure log inputs by source - -You can define the list of audit and infrastructure sources to collect the logs by using the input selector. - -1. Add a configuration to define the audit and infrastructure sources in the ClusterLogForwarder CR. - -The following example shows how to configure the ClusterLogForwarder CR to define audit and infrastructure sources: -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -# ... -spec: - serviceAccount: - name: - inputs: - - name: mylogs1 - type: infrastructure - infrastructure: - sources: 1 - - node - - name: mylogs2 - type: audit - audit: - sources: 2 - - kubeAPI - - openshiftAPI - - ovn -# ... -``` - -Specifies the list of infrastructure sources to collect. The valid sources include: -* node: Journal log from the node -* container: Logs from the workloads deployed in the namespaces -Specifies the list of audit sources to collect. The valid sources include: -* kubeAPI: Logs from the Kubernetes API servers -* openshiftAPI: Logs from the OpenShift API servers -* auditd: Logs from a node auditd service -* ovn: Logs from an open virtual network service -2. 
Apply the ClusterLogForwarder CR by running the following command: - -```terminal -$ oc apply -f .yaml -``` - - -# Filtering application logs at input by including or excluding the namespace or container name - -You can include or exclude the application logs based on the namespace and container name by using the input selector. - -1. Add a configuration to include or exclude the namespace and container names in the ClusterLogForwarder CR. - -The following example shows how to configure the ClusterLogForwarder CR to include or exclude namespaces and container names: -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -# ... -spec: - serviceAccount: - name: - inputs: - - name: mylogs - application: - includes: - - namespace: "my-project" 1 - container: "my-container" 2 - excludes: - - container: "other-container*" 3 - namespace: "other-namespace" 4 - type: application -# ... -``` - -Specifies that the logs are only collected from these namespaces. -Specifies that the logs are only collected from these containers. -Specifies the pattern of namespaces to ignore when collecting the logs. -Specifies the set of containers to ignore when collecting the logs. - -[NOTE] ----- -The excludes field takes precedence over the includes field. ----- -2. Apply the ClusterLogForwarder CR by running the following command: - -```terminal -$ oc apply -f .yaml -``` diff --git a/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.0/log6x-loki.txt b/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.0/log6x-loki.txt deleted file mode 100644 index 740ca474..00000000 --- a/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.0/log6x-loki.txt +++ /dev/null @@ -1,764 +0,0 @@ -# Storing logs with LokiStack - - -You can configure a LokiStack CR to store application, audit, and infrastructure-related logs. -Loki is a horizontally scalable, highly available, multi-tenant log aggregation system offered as a GA log store for logging for Red Hat OpenShift that can be visualized with the OpenShift Observability UI. The Loki configuration provided by OpenShift Logging is a short-term log store designed to enable users to perform fast troubleshooting with the collected logs. For that purpose, the logging for Red Hat OpenShift configuration of Loki has short-term storage, and is optimized for very recent queries. - -[IMPORTANT] ----- -For long-term storage or queries over a long time period, users should look to log stores external to their cluster. Loki sizing is only tested and supported for short term storage, for a maximum of 30 days. ----- - -# Prerequisites - -* You have installed the Loki Operator by using the CLI or web console. -* You have a serviceAccount in the same namespace in which you create the ClusterLogForwarder. -* The serviceAccount is assigned collect-audit-logs, collect-application-logs, and collect-infrastructure-logs cluster roles. - -# Core Setup and Configuration - -Role-based access controls, basic monitoring, and pod placement to deploy Loki. - -# Loki deployment sizing - -Sizing for Loki follows the format of 1x. where the value 1x is number of instances and specifies performance capabilities. - -The 1x.pico configuration defines a single Loki deployment with minimal resource and limit requirements, offering high availability (HA) support for all Loki components. This configuration is suited for deployments that do not require a single replication factor or auto-compaction. 
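For illustration only, assuming the object names from the earlier quick start example, the deployment size is selected through the size field of the LokiStack CR:

```yaml
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  size: 1x.pico  # only the suffix after "1x." varies; the 1x prefix cannot be changed
# ... storage, storageClassName, and tenants configured as shown in the quick start example
```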
- -Disk requests are similar across size configurations, allowing customers to test different sizes to determine the best fit for their deployment needs. - - -[IMPORTANT] ----- -It is not possible to change the number 1x for the deployment size. ----- - - - -# Authorizing LokiStack rules RBAC permissions - -Administrators can allow users to create and manage their own alerting and recording rules by binding cluster roles to usernames. -Cluster roles are defined as ClusterRole objects that contain necessary role-based access control (RBAC) permissions for users. - -The following cluster roles for alerting and recording rules are available for LokiStack: - - - -## Examples - -To apply cluster roles for a user, you must bind an existing cluster role to a specific username. - -Cluster roles can be cluster or namespace scoped, depending on which type of role binding you use. -When a RoleBinding object is used, as when using the oc adm policy add-role-to-user command, the cluster role only applies to the specified namespace. -When a ClusterRoleBinding object is used, as when using the oc adm policy add-cluster-role-to-user command, the cluster role applies to all namespaces in the cluster. - -The following example command gives the specified user create, read, update and delete (CRUD) permissions for alerting rules in a specific namespace in the cluster: - - -```terminal -$ oc adm policy add-role-to-user alertingrules.loki.grafana.com-v1-admin -n -``` - - -The following command gives the specified user administrator permissions for alerting rules in all namespaces: - - -```terminal -$ oc adm policy add-cluster-role-to-user alertingrules.loki.grafana.com-v1-admin -``` - - -# Creating a log-based alerting rule with Loki - -The AlertingRule CR contains a set of specifications and webhook validation definitions to declare groups of alerting rules for a single LokiStack instance. In addition, the webhook validation definition provides support for rule validation conditions: - -* If an AlertingRule CR includes an invalid interval period, it is an invalid alerting rule -* If an AlertingRule CR includes an invalid for period, it is an invalid alerting rule. -* If an AlertingRule CR includes an invalid LogQL expr, it is an invalid alerting rule. -* If an AlertingRule CR includes two groups with the same name, it is an invalid alerting rule. -* If none of the above applies, an alerting rule is considered valid. - - - -1. Create an AlertingRule custom resource (CR): -Example infrastructure AlertingRule CR - -```yaml - apiVersion: loki.grafana.com/v1 - kind: AlertingRule - metadata: - name: loki-operator-alerts - namespace: openshift-operators-redhat 1 - labels: 2 - openshift.io/: "true" - spec: - tenantID: "infrastructure" 3 - groups: - - name: LokiOperatorHighReconciliationError - rules: - - alert: HighPercentageError - expr: | 4 - sum(rate({kubernetes_namespace_name="openshift-operators-redhat", kubernetes_pod_name=~"loki-operator-controller-manager.*"} |= "error" [1m])) by (job) - / - sum(rate({kubernetes_namespace_name="openshift-operators-redhat", kubernetes_pod_name=~"loki-operator-controller-manager.*"}[1m])) by (job) - > 0.01 - for: 10s - labels: - severity: critical 5 - annotations: - summary: High Loki Operator Reconciliation Errors 6 - description: High Loki Operator Reconciliation Errors 7 -``` - -The namespace where this AlertingRule CR is created must have a label matching the LokiStack spec.rules.namespaceSelector definition. 
-The labels block must match the LokiStack spec.rules.selector definition. -AlertingRule CRs for infrastructure tenants are only supported in the openshift-*, kube-\*, or default namespaces. -The value for kubernetes_namespace_name: must match the value for metadata.namespace. -The value of this mandatory field must be critical, warning, or info. -This field is mandatory. -This field is mandatory. -Example application AlertingRule CR - -```yaml - apiVersion: loki.grafana.com/v1 - kind: AlertingRule - metadata: - name: app-user-workload - namespace: app-ns 1 - labels: 2 - openshift.io/: "true" - spec: - tenantID: "application" - groups: - - name: AppUserWorkloadHighError - rules: - - alert: - expr: | 3 - sum(rate({kubernetes_namespace_name="app-ns", kubernetes_pod_name=~"podName.*"} |= "error" [1m])) by (job) - for: 10s - labels: - severity: critical 4 - annotations: - summary: 5 - description: 6 -``` - -The namespace where this AlertingRule CR is created must have a label matching the LokiStack spec.rules.namespaceSelector definition. -The labels block must match the LokiStack spec.rules.selector definition. -Value for kubernetes_namespace_name: must match the value for metadata.namespace. -The value of this mandatory field must be critical, warning, or info. -The value of this mandatory field is a summary of the rule. -The value of this mandatory field is a detailed description of the rule. -2. Apply the AlertingRule CR: - -```terminal -$ oc apply -f .yaml -``` - - -# Configuring Loki to tolerate memberlist creation failure - -In an Red Hat OpenShift Container Platform cluster, administrators generally use a non-private IP network range. As a result, the LokiStack memberlist configuration fails because, by default, it only uses private IP networks. - -As an administrator, you can select the pod network for the memberlist configuration. You can modify the LokiStack custom resource (CR) to use the podIP address in the hashRing spec. To configure the LokiStack CR, use the following command: - - -```terminal -$ oc patch LokiStack logging-loki -n openshift-logging --type=merge -p '{"spec": {"hashRing":{"memberlist":{"instanceAddrType":"podIP"},"type":"memberlist"}}}' -``` - - - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: -# ... - hashRing: - type: memberlist - memberlist: - instanceAddrType: podIP -# ... -``` - - -# Enabling stream-based retention with Loki - -You can configure retention policies based on log streams. Rules for these may be set globally, per-tenant, or both. If you configure both, tenant rules apply before global rules. - - -[IMPORTANT] ----- -If there is no retention period defined on the s3 bucket or in the LokiStack custom resource (CR), then the logs are not pruned and they stay in the s3 bucket forever, which might fill up the s3 storage. ----- - - -[NOTE] ----- -Schema v13 is recommended. ----- - -1. 
Create a LokiStack CR: -* Enable stream-based retention globally as shown in the following example: -Example global stream-based retention for AWS - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: - limits: - global: 1 - retention: 2 - days: 20 - streams: - - days: 4 - priority: 1 - selector: '{kubernetes_namespace_name=~"test.+"}' 3 - - days: 1 - priority: 1 - selector: '{log_type="infrastructure"}' - managementState: Managed - replicationFactor: 1 - size: 1x.small - storage: - schemas: - - effectiveDate: "2020-10-11" - version: v13 - secret: - name: logging-loki-s3 - type: aws - storageClassName: gp3-csi - tenants: - mode: openshift-logging -``` - -Sets retention policy for all log streams. Note: This field does not impact the retention period for stored logs in object storage. -Retention is enabled in the cluster when this block is added to the CR. -Contains the LogQL query used to define the log stream.spec: -limits: -* Enable stream-based retention per-tenant basis as shown in the following example: -Example per-tenant stream-based retention for AWS - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: - limits: - global: - retention: - days: 20 - tenants: 1 - application: - retention: - days: 1 - streams: - - days: 4 - selector: '{kubernetes_namespace_name=~"test.+"}' 2 - infrastructure: - retention: - days: 5 - streams: - - days: 1 - selector: '{kubernetes_namespace_name=~"openshift-cluster.+"}' - managementState: Managed - replicationFactor: 1 - size: 1x.small - storage: - schemas: - - effectiveDate: "2020-10-11" - version: v13 - secret: - name: logging-loki-s3 - type: aws - storageClassName: gp3-csi - tenants: - mode: openshift-logging -``` - -Sets retention policy by tenant. Valid tenant types are application, audit, and infrastructure. -Contains the LogQL query used to define the log stream. -2. Apply the LokiStack CR: - -```terminal -$ oc apply -f .yaml -``` - - -# Loki pod placement - -You can control which nodes the Loki pods run on, and prevent other workloads from using those nodes, by using tolerations or node selectors on the pods. - -You can apply tolerations to the log store pods with the LokiStack custom resource (CR) and apply taints to a node with the node specification. A taint on a node is a key:value pair that instructs the node to repel all pods that do not allow the taint. Using a specific key:value pair that is not on other pods ensures that only the log store pods can run on that node. - - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: -# ... - template: - compactor: 1 - nodeSelector: - node-role.kubernetes.io/infra: "" 2 - distributor: - nodeSelector: - node-role.kubernetes.io/infra: "" - gateway: - nodeSelector: - node-role.kubernetes.io/infra: "" - indexGateway: - nodeSelector: - node-role.kubernetes.io/infra: "" - ingester: - nodeSelector: - node-role.kubernetes.io/infra: "" - querier: - nodeSelector: - node-role.kubernetes.io/infra: "" - queryFrontend: - nodeSelector: - node-role.kubernetes.io/infra: "" - ruler: - nodeSelector: - node-role.kubernetes.io/infra: "" -# ... -``` - - -Specifies the component pod type that applies to the node selector. -Specifies the pods that are moved to nodes containing the defined label. 
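To reserve a node for the log store pods, you can add the matching taint with the oc adm taint command. The following commands are a sketch in which the node name is a placeholder; they pair with the tolerations shown in the next example:

```terminal
$ oc adm taint nodes <node_name> node-role.kubernetes.io/infra=reserved:NoSchedule
```

```terminal
$ oc adm taint nodes <node_name> node-role.kubernetes.io/infra=reserved:NoExecute
```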
- - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: -# ... - template: - compactor: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - distributor: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - indexGateway: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - ingester: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - querier: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - queryFrontend: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - ruler: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - gateway: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved -# ... -``` - - -To configure the nodeSelector and tolerations fields of the LokiStack (CR), you can use the oc explain command to view the description and fields for a particular resource: - - -```terminal -$ oc explain lokistack.spec.template -``` - - - -```text -KIND: LokiStack -VERSION: loki.grafana.com/v1 - -RESOURCE: template - -DESCRIPTION: - Template defines the resource/limits/tolerations/nodeselectors per - component - -FIELDS: - compactor - Compactor defines the compaction component spec. - - distributor - Distributor defines the distributor component spec. -... -``` - - -For more detailed information, you can add a specific field: - - -```terminal -$ oc explain lokistack.spec.template.compactor -``` - - - -```text -KIND: LokiStack -VERSION: loki.grafana.com/v1 - -RESOURCE: compactor - -DESCRIPTION: - Compactor defines the compaction component spec. - -FIELDS: - nodeSelector - NodeSelector defines the labels required by a node to schedule the - component onto it. -... -``` - - -## Enhanced Reliability and Performance - -Configurations to ensure Loki’s reliability and efficiency in production. - -## Enabling authentication to cloud-based log stores using short-lived tokens - -Workload identity federation enables authentication to cloud-based log stores using short-lived tokens. 
- -* Use one of the following options to enable authentication: -* If you use the Red Hat OpenShift Container Platform web console to install the Loki Operator, clusters that use short-lived tokens are automatically detected. You are prompted to create roles and supply the data required for the Loki Operator to create a CredentialsRequest object, which populates a secret. -* If you use the OpenShift CLI (`oc`) to install the Loki Operator, you must manually create a Subscription object using the appropriate template for your storage provider, as shown in the following examples. This authentication strategy is only supported for the storage providers indicated. -Example Azure sample subscription - -```yaml -apiVersion: operators.coreos.com/v1alpha1 -kind: Subscription -metadata: - name: loki-operator - namespace: openshift-operators-redhat -spec: - channel: "stable-6.0" - installPlanApproval: Manual - name: loki-operator - source: redhat-operators - sourceNamespace: openshift-marketplace - config: - env: - - name: CLIENTID - value: - - name: TENANTID - value: - - name: SUBSCRIPTIONID - value: - - name: REGION - value: -``` - -Example AWS sample subscription - -```yaml -apiVersion: operators.coreos.com/v1alpha1 -kind: Subscription -metadata: - name: loki-operator - namespace: openshift-operators-redhat -spec: - channel: "stable-6.0" - installPlanApproval: Manual - name: loki-operator - source: redhat-operators - sourceNamespace: openshift-marketplace - config: - env: - - name: ROLEARN - value: -``` - - -## Configuring Loki to tolerate node failure - -The Loki Operator supports setting pod anti-affinity rules to request that pods of the same component are scheduled on different available nodes in the cluster. - -Affinity is a property of pods that controls the nodes on which they prefer to be scheduled. Anti-affinity is a property of pods -that prevents a pod from being scheduled on a node. - -In Red Hat OpenShift Container Platform, pod affinity and pod anti-affinity allow you to constrain which nodes your pod is eligible to be scheduled on based on the key-value labels on other pods. - -The Operator sets default, preferred podAntiAffinity rules for all Loki components, which includes the compactor, distributor, gateway, indexGateway, ingester, querier, queryFrontend, and ruler components. - -You can override the preferred podAntiAffinity settings for Loki components by configuring required settings in the requiredDuringSchedulingIgnoredDuringExecution field: - - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: -# ... - template: - ingester: - podAntiAffinity: - # ... - requiredDuringSchedulingIgnoredDuringExecution: 1 - - labelSelector: - matchLabels: 2 - app.kubernetes.io/component: ingester - topologyKey: kubernetes.io/hostname -# ... -``` - - -The stanza to define a required rule. -The key-value pair (label) that must be matched to apply the rule. - -## LokiStack behavior during cluster restarts - -When an Red Hat OpenShift Container Platform cluster is restarted, LokiStack ingestion and the query path continue to operate within the available CPU and memory resources available for the node. This means that there is no downtime for the LokiStack during Red Hat OpenShift Container Platform cluster updates. This behavior is achieved by using PodDisruptionBudget resources. 
The Loki Operator provisions PodDisruptionBudget resources for Loki, which determine the minimum number of pods that must be available per component to ensure normal operations under certain conditions. - -## Advanced Deployment and Scalability - -Specialized configurations for high availability, scalability, and error handling. - -## Zone aware data replication - -The Loki Operator offers support for zone-aware data replication through pod topology spread constraints. Enabling this feature enhances reliability and safeguards against log loss in the event of a single zone failure. When configuring the deployment size as 1x.extra-small, 1x.small, or 1x.medium, the replication.factor field is automatically set to 2. - -To ensure proper replication, you need to have at least as many availability zones as the replication factor specifies. While it is possible to have more availability zones than the replication factor, having fewer zones can lead to write failures. Each zone should host an equal number of instances for optimal operation. - - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: - replicationFactor: 2 1 - replication: - factor: 2 2 - zones: - - maxSkew: 1 3 - topologyKey: topology.kubernetes.io/zone 4 -``` - - -Deprecated field, values entered are overwritten by replication.factor. -This value is automatically set when deployment size is selected at setup. -The maximum difference in number of pods between any two topology domains. The default is 1, and you cannot specify a value of 0. -Defines zones in the form of a topology key that corresponds to a node label. - -## Recovering Loki pods from failed zones - -In Red Hat OpenShift Container Platform a zone failure happens when specific availability zone resources become inaccessible. Availability zones are isolated areas within a cloud provider’s data center, aimed at enhancing redundancy and fault tolerance. If your Red Hat OpenShift Container Platform cluster is not configured to handle this, a zone failure can lead to service or data loss. - -Loki pods are part of a StatefulSet, and they come with Persistent Volume Claims (PVCs) provisioned by a StorageClass object. Each Loki pod and its PVCs reside in the same zone. When a zone failure occurs in a cluster, the StatefulSet controller automatically attempts to recover the affected pods in the failed zone. - - -[WARNING] ----- -The following procedure will delete the PVCs in the failed zone, and all data contained therein. To avoid complete data loss the replication factor field of the LokiStack CR should always be set to a value greater than 1 to ensure that Loki is replicating. ----- - -* Verify your LokiStack CR has a replication factor greater than 1. -* Zone failure detected by the control plane, and nodes in the failed zone are marked by cloud provider integration. - -The StatefulSet controller automatically attempts to reschedule pods in a failed zone. Because the associated PVCs are also in the failed zone, automatic rescheduling to a different zone does not work. You must manually delete the PVCs in the failed zone to allow successful re-creation of the stateful Loki Pod and its provisioned PVC in the new zone. - -1. 
List the pods in Pending status by running the following command: - -```terminal -$ oc get pods --field-selector status.phase==Pending -n openshift-logging -``` - -Example oc get pods output - -```terminal -NAME READY STATUS RESTARTS AGE 1 -logging-loki-index-gateway-1 0/1 Pending 0 17m -logging-loki-ingester-1 0/1 Pending 0 16m -logging-loki-ruler-1 0/1 Pending 0 16m -``` - -These pods are in Pending status because their corresponding PVCs are in the failed zone. -2. List the PVCs in Pending status by running the following command: - -```terminal -$ oc get pvc -o=json -n openshift-logging | jq '.items[] | select(.status.phase == "Pending") | .metadata.name' -r -``` - -Example oc get pvc output - -```terminal -storage-logging-loki-index-gateway-1 -storage-logging-loki-ingester-1 -wal-logging-loki-ingester-1 -storage-logging-loki-ruler-1 -wal-logging-loki-ruler-1 -``` - -3. Delete the PVC(s) for a pod by running the following command: - -```terminal -$ oc delete pvc -n openshift-logging -``` - -4. Delete the pod(s) by running the following command: - -```terminal -$ oc delete pod -n openshift-logging -``` - - -Once these objects have been successfully deleted, they should automatically be rescheduled in an available zone. - -### Troubleshooting PVC in a terminating state - -The PVCs might hang in the terminating state without being deleted, if PVC metadata finalizers are set to kubernetes.io/pv-protection. Removing the finalizers should allow the PVCs to delete successfully. - -* Remove the finalizer for each PVC by running the command below, then retry deletion. - -```terminal -$ oc patch pvc -p '{"metadata":{"finalizers":null}}' -n openshift-logging -``` - - -## Troubleshooting Loki rate limit errors - -If the Log Forwarder API forwards a large block of messages that exceeds the rate limit to Loki, Loki generates rate limit (429) errors. - -These errors can occur during normal operation. For example, when adding the logging to a cluster that already has some logs, rate limit errors might occur while the logging tries to ingest all of the existing log entries. In this case, if the rate of addition of new logs is less than the total rate limit, the historical data is eventually ingested, and the rate limit errors are resolved without requiring user intervention. - -In cases where the rate limit errors continue to occur, you can fix the issue by modifying the LokiStack custom resource (CR). - - -[IMPORTANT] ----- -The LokiStack CR is not available on Grafana-hosted Loki. This topic does not apply to Grafana-hosted Loki servers. ----- - -* The Log Forwarder API is configured to forward logs to Loki. -* Your system sends a block of messages that is larger than 2 MB to Loki. For example: - -```text -"values":[["1630410392689800468","{\"kind\":\"Event\",\"apiVersion\":\ -....... -...... -...... -...... 
-\"received_at\":\"2021-08-31T11:46:32.800278+00:00\",\"version\":\"1.7.4 1.6.0\"}},\"@timestamp\":\"2021-08-31T11:46:32.799692+00:00\",\"viaq_index_name\":\"audit-write\",\"viaq_msg_id\":\"MzFjYjJkZjItNjY0MC00YWU4LWIwMTEtNGNmM2E5ZmViMGU4\",\"log_type\":\"audit\"}"]]}]} -``` - -* After you enter oc logs -n openshift-logging -l component=collector, the collector logs in your cluster show a line containing one of the following error messages: - -```text -429 Too Many Requests Ingestion rate limit exceeded -``` - -Example Vector error message - -```text -2023-08-25T16:08:49.301780Z WARN sink{component_kind="sink" component_id=default_loki_infra component_type=loki component_name=default_loki_infra}: vector::sinks::util::retries: Retrying after error. error=Server responded with an error: 429 Too Many Requests internal_log_rate_limit=true -``` - - -The error is also visible on the receiving end. For example, in the LokiStack ingester pod: -Example Loki ingester error message - -```text -level=warn ts=2023-08-30T14:57:34.155592243Z caller=grpc_logging.go:43 duration=1.434942ms method=/logproto.Pusher/Push err="rpc error: code = Code(429) desc = entry with timestamp 2023-08-30 14:57:32.012778399 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream -``` - - -* Update the ingestionBurstSize and ingestionRate fields in the LokiStack CR: - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: - limits: - global: - ingestion: - ingestionBurstSize: 16 1 - ingestionRate: 8 2 -# ... -``` - -The ingestionBurstSize field defines the maximum local rate-limited sample size per distributor replica in MB. This value is a hard limit. Set this value to at least the maximum logs size expected in a single push request. Single requests that are larger than the ingestionBurstSize value are not permitted. -The ingestionRate field is a soft limit on the maximum amount of ingested samples per second in MB. Rate limit errors occur if the rate of logs exceeds the limit, but the collector retries sending the logs. As long as the total average is lower than the limit, the system recovers and errors are resolved without user intervention. \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.0/log6x-release-notes.txt b/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.0/log6x-release-notes.txt deleted file mode 100644 index 2acb75cd..00000000 --- a/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.0/log6x-release-notes.txt +++ /dev/null @@ -1,144 +0,0 @@ -# Release notes - - - -# Logging 6.0.3 - -This release includes RHBA-2024:10991. - -## New features and enhancements - -* With this update, the Loki Operator supports the configuring of the workload identity federation on the Google Cloud Platform (GCP) by using the Cluster Credential Operator (CCO) in Red Hat OpenShift Container Platform 4.17 or later. (LOG-6421) - -## Bug fixes - -* Before this update, the collector used the default settings to collect audit logs, which did not account for back pressure from output receivers. With this update, the audit log collection is optimized for file handling and log reading. (LOG-6034) -* Before this update, any namespace containing openshift or kube was treated as an infrastructure namespace. 
With this update, only the following namespaces are treated as infrastructure namespaces: default, kube, openshift, and namespaces that begin with openshift- or kube-. (LOG-6204) -* Before this update, an input receiver service was repeatedly created and deleted, causing issues with mounting the TLS secrets. With this update, the service is created once and only deleted if it is not defined in the ClusterLogForwarder custom resource. (LOG-6343) -* Before this update, pipeline validation might enter an infinite loop if a name was a substring of another name. With this update, stricter name equality checks prevent the infinite loop. (LOG-6352) -* Before this update, the collector alerting rules included the summary and message fields. With this update, the collector alerting rules include the summary and description fields. (LOG-6406) -* Before this update, setting up the custom audit inputs in the ClusterLogForwarder custom resource with configured LokiStack output caused errors due to the nil pointer dereference. With this update, the Operator performs the nil checks, preventing such errors. (LOG-6441) -* Before this update, the collector did not correctly mount the /var/log/oauth-server/ path, which prevented the collection of the audit logs. With this update, the volume mount is added, and the audit logs are collected as expected. (LOG-6486) -* Before this update, the collector did not correctly mount the oauth-apiserver audit log file. As a result, such audit logs were not collected. With this update, the volume mount is correctly mounted, and the logs are collected as expected. (LOG-6543) - -## CVEs - -* CVE-2019-12900 -* CVE-2024-2511 -* CVE-2024-3596 -* CVE-2024-4603 -* CVE-2024-4741 -* CVE-2024-5535 -* CVE-2024-10963 -* CVE-2024-50602 - -# Logging 6.0.2 - -This release includes RHBA-2024:10051. - -## Bug fixes - -* Before this update, Loki did not correctly load some configurations, which caused issues when using Alibaba Cloud or IBM Cloud object storage. This update fixes the configuration-loading code in Loki, resolving the issue. (LOG-5325) -* Before this update, the collector would discard audit log messages that exceeded the configured threshold. This modifies the audit configuration thresholds for the maximum line size as well as the number of bytes read during a read cycle. (LOG-5998) -* Before this update, the Cluster Logging Operator did not watch and reconcile resources associated with an instance of a ClusterLogForwarder like it did in prior releases. This update modifies the operator to watch and reconcile all resources it owns and creates. (LOG-6264) -* Before this update, log events with an unknown severity level sent to Google Cloud Logging would trigger a warning in the vector collector, which would then default the severity to 'DEFAULT'. With this update, log severity levels are now standardized to match Google Cloud Logging specifications, and audit logs are assigned a severity of 'INFO'. (LOG-6296) -* Before this update, when infrastructure namespaces were included in application inputs, the log_type was set as application. With this update, the log_type of infrastructure namespaces included in application inputs is set to infrastructure. (LOG-6354) -* Before this update, specifying a value for the syslog.enrichment field of the ClusterLogForwarder added namespace_name, container_name, and pod_name to the messages of non-container logs. 
With this update, only container logs include namespace_name, container_name, and pod_name in their messages when syslog.enrichment is set. (LOG-6402) - -## CVEs - -* CVE-2024-6119 -* CVE-2024-6232 - -# Logging 6.0.1 - -This release includes OpenShift Logging Bug Fix Release 6.0.1. - -## Bug fixes - -* With this update, the default memory limit for the collector has been increased from 1024 Mi to 2024 Mi. However, users should always adjust their resource limits according to their cluster specifications and needs. (LOG-6180) -* Before this update, the Loki Operator failed to add the default namespace label to all AlertingRule resources, which caused the User-Workload-Monitoring Alertmanager to skip routing these alerts. This update adds the rule namespace as a label to all alerting and recording rules, resolving the issue and restoring proper alert routing in Alertmanager. -(LOG-6151) -* Before this update, the LokiStack ruler component view did not initialize properly, causing an invalid field error when the ruler component was disabled. This update ensures that the component view initializes with an empty value, resolving the issue. -(LOG-6129) -* Before this update, it was possible to set log_source in the prune filter, which could lead to inconsistent log data. With this update, the configuration is validated before being applied, and any configuration that includes log_source in the prune filter is rejected. -(LOG-6202) - -## CVEs - -* CVE-2024-24791 -* CVE-2024-34155 -* CVE-2024-34156 -* CVE-2024-34158 -* CVE-2024-6104 -* CVE-2024-6119 -* CVE-2024-45490 -* CVE-2024-45491 -* CVE-2024-45492 - -# Logging 6.0.0 - -This release includes Logging for Red Hat OpenShift Bug Fix Release 6.0.0 - - -[NOTE] ----- -Logging is provided as an installable component, with a distinct release cycle from the core Red Hat OpenShift Container Platform. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility. ----- - - - -# Removal notice - -* With this release, logging no longer supports the ClusterLogging.logging.openshift.io and ClusterLogForwarder.logging.openshift.io custom resources. Refer to the product documentation for details on the replacement features. (LOG-5803) -* With this release, logging no longer manages or deploys log storage (such as Elasticsearch), visualization (such as Kibana), or Fluentd-based log collectors. (LOG-5368) - - -[NOTE] ----- -In order to continue to use Elasticsearch and Kibana managed by the elasticsearch-operator, the administrator must modify those object's ownerRefs before deleting the ClusterLogging resource. ----- - -# New features and enhancements - -* This feature introduces a new architecture for logging for Red Hat OpenShift by shifting component responsibilities to their relevant Operators, such as for storage, visualization, and collection. It introduces the ClusterLogForwarder.observability.openshift.io API for log collection and forwarding. Support for the ClusterLogging.logging.openshift.io and ClusterLogForwarder.logging.openshift.io APIs, along with the Red Hat managed Elastic stack (Elasticsearch and Kibana), is removed. Users are encouraged to migrate to the Red Hat LokiStack for log storage. Existing managed Elasticsearch deployments can be used for a limited time. Automated migration for log collection is not provided, so administrators need to create a new ClusterLogForwarder.observability.openshift.io specification to replace their previous custom resources. 
Refer to the official product documentation for more details. (LOG-3493) -* With this release, the responsibility for deploying the logging view plugin shifts from the Red Hat OpenShift Logging Operator to the Cluster Observability Operator (COO). For new log storage installations that need visualization, the Cluster Observability Operator and the associated UIPlugin resource must be deployed. Refer to the Cluster Observability Operator Overview product documentation for more details. (LOG-5461) -* This enhancement sets default requests and limits for Vector collector deployments' memory and CPU usage based on Vector documentation recommendations. (LOG-4745) -* This enhancement updates Vector to align with the upstream version v0.37.1. (LOG-5296) -* This enhancement introduces an alert that triggers when log collectors buffer logs to a node's file system and use over 15% of the available space, indicating potential back pressure issues. (LOG-5381) -* This enhancement updates the selectors for all components to use common Kubernetes labels. (LOG-5906) -* This enhancement changes the collector configuration to deploy as a ConfigMap instead of a secret, allowing users to view and edit the configuration when the ClusterLogForwarder is set to Unmanaged. (LOG-5599) -* This enhancement adds the ability to configure the Vector collector log level using an annotation on the ClusterLogForwarder, with options including trace, debug, info, warn, error, or off. (LOG-5372) -* This enhancement adds validation to reject configurations where Amazon CloudWatch outputs use multiple AWS roles, preventing incorrect log routing. (LOG-5640) -* This enhancement removes the Log Bytes Collected and Log Bytes Sent graphs from the metrics dashboard. (LOG-5964) -* This enhancement updates the must-gather functionality to only capture information for inspecting Logging 6.0 components, including Vector deployments from ClusterLogForwarder.observability.openshift.io resources and the Red Hat managed LokiStack. (LOG-5949) -* This enhancement improves Azure storage secret validation by providing early warnings for specific error conditions. (LOG-4571) -* This enhancement updates the ClusterLogForwarder API to follow the Kubernetes standards. (LOG-5977) -Example of a new configuration in the ClusterLogForwarder custom resource for the updated API - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - name: -spec: - outputs: - - name: - type: - : - tuning: - deliveryMode: AtMostOnce -``` - - -# Technology Preview features - -* This release introduces a Technology Preview feature for log forwarding using OpenTelemetry. A new output type,` OTLP`, allows sending JSON-encoded log records using the OpenTelemetry data model and resource semantic conventions. (LOG-4225) - -# Bug fixes - -* Before this update, the CollectorHighErrorRate and CollectorVeryHighErrorRate alerts were still present. With this update, both alerts are removed in the logging 6.0 release but might return in a future release. 
(LOG-3432) - -# CVEs - -* CVE-2024-34397 \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.0/log6x-upgrading-to-6.txt b/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.0/log6x-upgrading-to-6.txt deleted file mode 100644 index c23045a4..00000000 --- a/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.0/log6x-upgrading-to-6.txt +++ /dev/null @@ -1,544 +0,0 @@ -# Upgrading to Logging 6.0 - - -Logging v6.0 is a significant upgrade from previous releases, achieving several longstanding goals of Cluster Logging: -* Introduction of distinct operators to manage logging components (e.g., collectors, storage, visualization). -* Removal of support for managed log storage and visualization based on Elastic products (i.e., Elasticsearch, Kibana). -* Deprecation of the Fluentd log collector implementation. -* Removal of support for ClusterLogging.logging.openshift.io and ClusterLogForwarder.logging.openshift.io resources. - -[NOTE] ----- -The cluster-logging-operator does not provide an automated upgrade process. ----- -Given the various configurations for log collection, forwarding, and storage, no automated upgrade is provided by the cluster-logging-operator. This documentation assists administrators in converting existing ClusterLogging.logging.openshift.io and ClusterLogForwarder.logging.openshift.io specifications to the new API. Examples of migrated ClusterLogForwarder.observability.openshift.io resources for common use cases are included. - -# Using the oc explain command - -The oc explain command is an essential tool in the OpenShift CLI oc that provides detailed descriptions of the fields within Custom Resources (CRs). This command is invaluable for administrators and developers who are configuring or troubleshooting resources in an OpenShift cluster. - -## Resource Descriptions - -oc explain offers in-depth explanations of all fields associated with a specific object. This includes standard resources like pods and services, as well as more complex entities like statefulsets and custom resources defined by Operators. - -To view the documentation for the outputs field of the ClusterLogForwarder custom resource, you can use: - - -```terminal -$ oc explain clusterlogforwarders.observability.openshift.io.spec.outputs -``` - - - -[NOTE] ----- -In place of clusterlogforwarder the short form obsclf can be used. ----- - -This will display detailed information about these fields, including their types, default values, and any associated sub-fields. - -## Hierarchical Structure - -The command displays the structure of resource fields in a hierarchical format, clarifying the relationships between different configuration options. - -For instance, here’s how you can drill down into the storage configuration for a LokiStack custom resource: - - -```terminal -$ oc explain lokistacks.loki.grafana.com -$ oc explain lokistacks.loki.grafana.com.spec -$ oc explain lokistacks.loki.grafana.com.spec.storage -$ oc explain lokistacks.loki.grafana.com.spec.storage.schemas -``` - - -Each command reveals a deeper level of the resource specification, making the structure clear. - -## Type Information - -oc explain also indicates the type of each field (such as string, integer, or boolean), allowing you to verify that resource definitions use the correct data types. - -For example: - - -```terminal -$ oc explain lokistacks.loki.grafana.com.spec.size -``` - - -This will show that size should be defined using an integer value. 
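If you want to review an entire subtree of fields and their types at once, rather than drilling down one level at a time, you can add the --recursive flag to oc explain. The following invocation is only an illustration and assumes the spec.limits path used elsewhere in this document:

```terminal
$ oc explain lokistacks.loki.grafana.com.spec.limits --recursive
```

The output lists every nested field under spec.limits along with its type, which can help when planning a change such as adjusting the ingestion limits.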
- -## Default Values - -When applicable, the command shows the default values for fields, providing insights into what values will be used if none are explicitly specified. - -Again using lokistacks.loki.grafana.com as an example: - - -```terminal -$ oc explain lokistacks.spec.template.distributor.replicas -``` - - - -```terminal -GROUP: loki.grafana.com -KIND: LokiStack -VERSION: v1 - -FIELD: replicas - -DESCRIPTION: - Replicas defines the number of replica pods of the component. -``` - - -# Log Storage - -The only managed log storage solution available in this release is a Lokistack, managed by the Loki Operator. This solution, previously available as the preferred alternative to the managed Elasticsearch offering, remains unchanged in its deployment process. - - -[IMPORTANT] ----- -To continue using an existing Red Hat managed Elasticsearch or Kibana deployment provided by the Elasticsearch Operator, remove the owner references from the Elasticsearch resource named elasticsearch, and the Kibana resource named kibana in the openshift-logging namespace before removing the ClusterLogging resource named instance in the same namespace. ----- - -1. Temporarily set ClusterLogging resource to the Unmanaged state by running the following command: - -```terminal -$ oc -n openshift-logging patch clusterlogging/instance -p '{"spec":{"managementState": "Unmanaged"}}' --type=merge -``` - -2. Remove the ownerReferences parameter from the Elasticsearch resource by running the following command: - -The following command ensures that ClusterLogging no longer owns the Elasticsearch resource. Updates to the ClusterLogging resource’s logStore field will no longer affect the Elasticsearch resource. - -```terminal -$ oc -n openshift-logging patch elasticsearch/elasticsearch -p '{"metadata":{"ownerReferences": []}}' --type=merge -``` - -3. Remove the ownerReferences parameter from the Kibana resource. - -The following command ensures that Cluster Logging no longer owns the Kibana resource. Updates to the ClusterLogging resource’s visualization field will no longer affect the Kibana resource. - -```terminal -$ oc -n openshift-logging patch kibana/kibana -p '{"metadata":{"ownerReferences": []}}' --type=merge -``` - -4. Set the ClusterLogging resource to the Managed state by running the following command: - -```terminal -$ oc -n openshift-logging patch clusterlogging/instance -p '{"spec":{"managementState": "Managed"}}' --type=merge -``` - - -# Log Visualization - -The OpenShift console UI plugin for log visualization has been moved to the cluster-observability-operator from the cluster-logging-operator. - -# Log Collection and Forwarding - -Log collection and forwarding configurations are now specified under the new API, part of the observability.openshift.io API group. The following sections highlight the differences from the old API resources. - - -[NOTE] ----- -Vector is the only supported collector implementation. ----- - -# Management, Resource Allocation, and Workload Scheduling - -Configuration for management state (e.g., Managed, Unmanaged), resource requests and limits, tolerations, and node selection is now part of the new ClusterLogForwarder API. 
- - -```yaml -apiVersion: "logging.openshift.io/v1" -kind: "ClusterLogging" -spec: - managementState: "Managed" - collection: - resources: - limits: {} - requests: {} - nodeSelector: {} - tolerations: {} -``` - - - -```yaml -apiVersion: "observability.openshift.io/v1" -kind: ClusterLogForwarder -spec: - managementState: Managed - collector: - resources: - limits: {} - requests: {} - nodeSelector: {} - tolerations: {} -``` - - -# Input Specifications - -The input specification is an optional part of the ClusterLogForwarder specification. Administrators can continue to use the predefined values of application, infrastructure, and audit to collect these sources. - -## Application Inputs - -Namespace and container inclusions and exclusions have been consolidated into a single field. - - -```yaml -apiVersion: "logging.openshift.io/v1" -kind: ClusterLogForwarder -spec: - inputs: - - name: application-logs - type: application - application: - namespaces: - - foo - - bar - includes: - - namespace: my-important - container: main - excludes: - - container: too-verbose -``` - - - -```yaml -apiVersion: "observability.openshift.io/v1" -kind: ClusterLogForwarder -spec: - inputs: - - name: application-logs - type: application - application: - includes: - - namespace: foo - - namespace: bar - - namespace: my-important - container: main - excludes: - - container: too-verbose -``` - - - -[NOTE] ----- -application, infrastructure, and audit are reserved words and cannot be used as names when defining an input. ----- - -## Input Receivers - -Changes to input receivers include: - -* Explicit configuration of the type at the receiver level. -* Port settings moved to the receiver level. - - -```yaml -apiVersion: "logging.openshift.io/v1" -kind: ClusterLogForwarder -spec: - inputs: - - name: an-http - receiver: - http: - port: 8443 - format: kubeAPIAudit - - name: a-syslog - receiver: - type: syslog - syslog: - port: 9442 -``` - - - -```yaml -apiVersion: "observability.openshift.io/v1" -kind: ClusterLogForwarder -spec: - inputs: - - name: an-http - type: receiver - receiver: - type: http - port: 8443 - http: - format: kubeAPIAudit - - name: a-syslog - type: receiver - receiver: - type: syslog - port: 9442 -``` - - -# Output Specifications - -High-level changes to output specifications include: - -* URL settings moved to each output type specification. -* Tuning parameters moved to each output type specification. -* Separation of TLS configuration from authentication. -* Explicit configuration of keys and secret/configmap for TLS and authentication. - -# Secrets and TLS Configuration - -Secrets and TLS configurations are now separated into authentication and TLS configuration for each output. They must be explicitly defined in the specification rather than relying on administrators to define secrets with recognized keys. Upgrading TLS and authorization configurations requires administrators to understand previously recognized keys to continue using existing secrets. Examples in the following sections provide details on how to configure ClusterLogForwarder secrets to forward to existing Red Hat managed log storage solutions. 
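Before writing the new output specification, it can be useful to confirm which keys an existing secret actually contains, so that the explicit key references (for example, tls.crt, tls.key, and ca-bundle.crt) match what is stored in the cluster. The following command is a minimal sketch that assumes the collector secret used in the examples that follow; it prints only the key names, not the secret values:

```terminal
$ oc -n openshift-logging get secret collector -o json | jq -r '.data | keys[]'
```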
- -# Red Hat Managed Elasticsearch - - -```yaml -apiVersion: logging.openshift.io/v1 -kind: ClusterLogging -metadata: - name: instance - namespace: openshift-logging -spec: - logStore: - type: elasticsearch -``` - - - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - name: instance - namespace: openshift-logging -spec: - serviceAccount: - name: - managementState: Managed - outputs: - - name: audit-elasticsearch - type: elasticsearch - elasticsearch: - url: https://elasticsearch:9200 - version: 6 - index: audit-write - tls: - ca: - key: ca-bundle.crt - secretName: collector - certificate: - key: tls.crt - secretName: collector - key: - key: tls.key - secretName: collector - - name: app-elasticsearch - type: elasticsearch - elasticsearch: - url: https://elasticsearch:9200 - version: 6 - index: app-write - tls: - ca: - key: ca-bundle.crt - secretName: collector - certificate: - key: tls.crt - secretName: collector - key: - key: tls.key - secretName: collector - - name: infra-elasticsearch - type: elasticsearch - elasticsearch: - url: https://elasticsearch:9200 - version: 6 - index: infra-write - tls: - ca: - key: ca-bundle.crt - secretName: collector - certificate: - key: tls.crt - secretName: collector - key: - key: tls.key - secretName: collector - pipelines: - - name: app - inputRefs: - - application - outputRefs: - - app-elasticsearch - - name: audit - inputRefs: - - audit - outputRefs: - - audit-elasticsearch - - name: infra - inputRefs: - - infrastructure - outputRefs: - - infra-elasticsearch -``` - - -# Red Hat Managed LokiStack - - -```yaml -apiVersion: logging.openshift.io/v1 -kind: ClusterLogging -metadata: - name: instance - namespace: openshift-logging -spec: - logStore: - type: lokistack - lokistack: - name: logging-loki -``` - - - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - name: instance - namespace: openshift-logging -spec: - serviceAccount: - name: - outputs: - - name: default-lokistack - type: lokiStack - lokiStack: - target: - name: logging-loki - namespace: openshift-logging - authentication: - token: - from: serviceAccount - tls: - ca: - key: service-ca.crt - configMapName: openshift-service-ca.crt - pipelines: - - outputRefs: - - default-lokistack - - inputRefs: - - application - - infrastructure -``` - - -# Filters and Pipeline Configuration - -Pipeline configurations now define only the routing of input sources to their output destinations, with any required transformations configured separately as filters. All attributes of pipelines from previous releases have been converted to filters in this release. Individual filters are defined in the filters specification and referenced by a pipeline. - - -```yaml -apiVersion: logging.openshift.io/v1 -kind: ClusterLogForwarder -spec: - pipelines: - - name: application-logs - parse: json - labels: - foo: bar - detectMultilineErrors: true -``` - - - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -spec: - filters: - - name: detectexception - type: detectMultilineException - - name: parse-json - type: parse - - name: labels - type: openshiftLabels - openshiftLabels: - foo: bar - pipelines: - - name: application-logs - filterRefs: - - detectexception - - labels - - parse-json -``` - - -# Validation and Status - -Most validations are enforced when a resource is created or updated, providing immediate feedback. 
This is a departure from previous releases, where validation occurred post-creation and required inspecting the resource status. Some validation still occurs post-creation for cases where it is not possible to validate at creation or update time. - -Instances of the ClusterLogForwarder.observability.openshift.io must satisfy the following conditions before the operator will deploy the log collector: Authorized, Valid, Ready. An example of these conditions is: - - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -status: - conditions: - - lastTransitionTime: "2024-09-13T03:28:44Z" - message: 'permitted to collect log types: [application]' - reason: ClusterRolesExist - status: "True" - type: observability.openshift.io/Authorized - - lastTransitionTime: "2024-09-13T12:16:45Z" - message: "" - reason: ValidationSuccess - status: "True" - type: observability.openshift.io/Valid - - lastTransitionTime: "2024-09-13T12:16:45Z" - message: "" - reason: ReconciliationComplete - status: "True" - type: Ready - filterConditions: - - lastTransitionTime: "2024-09-13T13:02:59Z" - message: filter "detectexception" is valid - reason: ValidationSuccess - status: "True" - type: observability.openshift.io/ValidFilter-detectexception - - lastTransitionTime: "2024-09-13T13:02:59Z" - message: filter "parse-json" is valid - reason: ValidationSuccess - status: "True" - type: observability.openshift.io/ValidFilter-parse-json - inputConditions: - - lastTransitionTime: "2024-09-13T12:23:03Z" - message: input "application1" is valid - reason: ValidationSuccess - status: "True" - type: observability.openshift.io/ValidInput-application1 - outputConditions: - - lastTransitionTime: "2024-09-13T13:02:59Z" - message: output "default-lokistack-application1" is valid - reason: ValidationSuccess - status: "True" - type: observability.openshift.io/ValidOutput-default-lokistack-application1 - pipelineConditions: - - lastTransitionTime: "2024-09-13T03:28:44Z" - message: pipeline "default-before" is valid - reason: ValidationSuccess - status: "True" - type: observability.openshift.io/ValidPipeline-default-before -``` - - - -[NOTE] ----- -Conditions that are satisfied and applicable have a "status" value of "True". Conditions with a status other than "True" provide a reason and a message explaining the issue. ----- \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.0/log6x-visual.txt b/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.0/log6x-visual.txt deleted file mode 100644 index c669b518..00000000 --- a/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.0/log6x-visual.txt +++ /dev/null @@ -1,5 +0,0 @@ -# Visualization for logging - - - -Visualization for logging is provided by deploying the Logging UI Plugin of the Cluster Observability Operator, which requires Operator installation. \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.1/log61-cluster-logging-support.txt b/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.1/log61-cluster-logging-support.txt deleted file mode 100644 index d4c8e815..00000000 --- a/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.1/log61-cluster-logging-support.txt +++ /dev/null @@ -1,141 +0,0 @@ -# Support - - -Only the configuration options described in this documentation are supported for logging. -Do not use any other configuration options, as they are unsupported. 
Configuration paradigms might change across Red Hat OpenShift Container Platform releases, and such cases can only be handled gracefully if all configuration possibilities are controlled. If you use configurations other than those described in this documentation, your changes will be overwritten, because Operators are designed to reconcile any differences. - -[NOTE] ----- -If you must perform configurations not described in the Red Hat OpenShift Container Platform documentation, you must set your Red Hat OpenShift Logging Operator to Unmanaged. An unmanaged logging instance is not supported and does not receive updates until you return its status to Managed. ----- - -[NOTE] ----- -Logging is provided as an installable component, with a distinct release cycle from the core Red Hat OpenShift Container Platform. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility. ----- -Loki is a horizontally scalable, highly available, multi-tenant log aggregation system offered as a GA log store for logging for Red Hat OpenShift that can be visualized with the OpenShift Observability UI. The Loki configuration provided by OpenShift Logging is a short-term log store designed to enable users to perform fast troubleshooting with the collected logs. For that purpose, the logging for Red Hat OpenShift configuration of Loki has short-term storage, and is optimized for very recent queries. - -[IMPORTANT] ----- -For long-term storage or queries over a long time period, users should look to log stores external to their cluster. Loki sizing is only tested and supported for short term storage, for a maximum of 30 days. ----- -Logging for Red Hat OpenShift is an opinionated collector and normalizer of application, infrastructure, and audit logs. It is intended to be used for forwarding logs to various supported systems. -Logging is not: -* A high scale log collection system -* Security Information and Event Monitoring (SIEM) compliant -* A "bring your own" (BYO) log collector configuration -* Historical or long term log retention or storage -* A guaranteed log sink -* Secure storage - audit logs are not stored by default - -# Supported API custom resource definitions - -The following table describes the supported Logging APIs. - - - -# Unsupported configurations - -You must set the Red Hat OpenShift Logging Operator to the Unmanaged state to modify the following components: - -* The collector configuration file -* The collector daemonset - -Explicitly unsupported cases include: - -* Configuring the logging collector using environment variables. You cannot use environment variables to modify the log collector. -* Configuring how the log collector normalizes logs. You cannot modify default log normalization. - -# Support policy for unmanaged Operators - -The management state of an Operator determines whether an Operator is actively -managing the resources for its related component in the cluster as designed. If -an Operator is set to an unmanaged state, it does not respond to changes in -configuration nor does it receive updates. - -While this can be helpful in non-production clusters or during debugging, -Operators in an unmanaged state are unsupported and the cluster administrator -assumes full control of the individual component configurations and upgrades. - -An Operator can be set to an unmanaged state using the following methods: - -* Individual Operator configuration - -Individual Operators have a managementState parameter in their configuration. 
-This can be accessed in different ways, depending on the Operator. For example, -the Red Hat OpenShift Logging Operator accomplishes this by modifying a custom resource -(CR) that it manages, while the Cluster Samples Operator uses a cluster-wide -configuration resource. - -Changing the managementState parameter to Unmanaged means that the Operator -is not actively managing its resources and will take no action related to the -related component. Some Operators might not support this management state as it -might damage the cluster and require manual recovery. - -[WARNING] ----- -Changing individual Operators to the Unmanaged state renders that particular -component and functionality unsupported. Reported issues must be reproduced in -Managed state for support to proceed. ----- -* Cluster Version Operator (CVO) overrides - -The spec.overrides parameter can be added to the CVO’s configuration to allow -administrators to provide a list of overrides to the CVO’s behavior for a -component. Setting the spec.overrides[].unmanaged parameter to true for a -component blocks cluster upgrades and alerts the administrator after a CVO -override has been set: - -```terminal -Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing. -``` - - -[WARNING] ----- -Setting a CVO override puts the entire cluster in an unsupported state. Reported -issues must be reproduced after removing any overrides for support to proceed. ----- - -# Collecting logging data for Red Hat Support - -When opening a support case, it is helpful to provide debugging information about your cluster to Red Hat Support. - -You can use the must-gather tool to collect diagnostic information for project-level resources, cluster-level resources, and each of the logging components. -For prompt support, supply diagnostic information for both Red Hat OpenShift Container Platform and logging. - -## About the must-gather tool - -The oc adm must-gather CLI command collects the information from your cluster that is most likely needed for debugging issues. - -For your logging, must-gather collects the following information: - -* Project-level resources, including pods, configuration maps, service accounts, roles, role bindings, and events at the project level -* Cluster-level resources, including nodes, roles, and role bindings at the cluster level -* OpenShift Logging resources in the openshift-logging and openshift-operators-redhat namespaces, including health status for the log collector, the log store, and the log visualizer - -When you run oc adm must-gather, a new pod is created on the cluster. The data is collected on that pod and saved in a new directory that starts with must-gather.local. This directory is created in the current working directory. - -## Collecting logging data - -You can use the oc adm must-gather CLI command to collect information about logging. - -To collect logging information with must-gather: - -1. Navigate to the directory where you want to store the must-gather information. -2. Run the oc adm must-gather command against the logging image: - -```terminal -$ oc adm must-gather --image=$(oc -n openshift-logging get deployment.apps/cluster-logging-operator -o jsonpath='{.spec.template.spec.containers[?(@.name == "cluster-logging-operator")].image}') -``` - - -The must-gather tool creates a new directory that starts with must-gather.local within the current directory. For example: -must-gather.local.4157245944708210408. -3. 
Create a compressed file from the must-gather directory that was just created. For example, on a computer that uses a Linux operating system, run the following command: - -```terminal -$ tar -cvaf must-gather.tar.gz must-gather.local.4157245944708210408 -``` - -4. Attach the compressed file to your support case on the Red Hat Customer Portal. \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.1/log6x-about-6.1.txt b/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.1/log6x-about-6.1.txt deleted file mode 100644 index 4d7fe521..00000000 --- a/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.1/log6x-about-6.1.txt +++ /dev/null @@ -1,330 +0,0 @@ -# Logging 6.1 - - -The ClusterLogForwarder custom resource (CR) is the central configuration point for log collection and forwarding. - -# Inputs and outputs - -Inputs specify the sources of logs to be forwarded. Logging provides the following built-in input types that select logs from different parts of your cluster: - -* application -* receiver -* infrastructure -* audit - -You can also define custom inputs based on namespaces or pod labels to fine-tune log selection. - -Outputs define the destinations where logs are sent. Each output type has its own set of configuration options, allowing you to customize the behavior and authentication settings. - -# Receiver input type - -The receiver input type enables the Logging system to accept logs from external sources. It supports two formats for receiving logs: http and syslog. - -The ReceiverSpec field defines the configuration for a receiver input. - -# Pipelines and filters - -Pipelines determine the flow of logs from inputs to outputs. A pipeline consists of one or more input refs, output refs, and optional filter refs. You can use filters to transform or drop log messages within a pipeline. The order of filters matters, as they are applied sequentially, and earlier filters can prevent log messages from reaching later stages. - -# Operator behavior - -The Cluster Logging Operator manages the deployment and configuration of the collector based on the managementState field of the ClusterLogForwarder resource: - -* When set to Managed (default), the Operator actively manages the logging resources to match the configuration defined in the spec. -* When set to Unmanaged, the Operator does not take any action, allowing you to manually manage the logging components. - -# Validation - -Logging includes extensive validation rules and default values to ensure a smooth and error-free configuration experience. The ClusterLogForwarder resource enforces validation checks on required fields, dependencies between fields, and the format of input values. Default values are provided for certain fields, reducing the need for explicit configuration in common scenarios. - -# Quick start - -OpenShift Logging supports two data models: - -* ViaQ (General Availability) -* OpenTelemetry (Technology Preview) - -You can select either of these data models based on your requirement by configuring the lokiStack.dataModel field in the ClusterLogForwarder. ViaQ is the default data model when forwarding logs to LokiStack. - - -[NOTE] ----- -In future releases of OpenShift Logging, the default data model will change from ViaQ to OpenTelemetry. ----- - -## Quick start with ViaQ - -To use the default ViaQ data model, follow these steps: - -* You have access to an Red Hat OpenShift Container Platform cluster with cluster-admin permissions. 
-* You installed the OpenShift CLI (`oc`). -* You have access to a supported object store. For example, AWS S3, Google Cloud Storage, Azure, Swift, Minio, or OpenShift Data Foundation. - -1. Install the Red Hat OpenShift Logging Operator, Loki Operator, and Cluster Observability Operator (COO) from OperatorHub. -2. Create a LokiStack custom resource (CR) in the openshift-logging namespace: - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: - managementState: Managed - size: 1x.extra-small - storage: - schemas: - - effectiveDate: '2024-10-01' - version: v13 - secret: - name: logging-loki-s3 - type: s3 - storageClassName: gp3-csi - tenants: - mode: openshift-logging -``` - - -[NOTE] ----- -Ensure that the logging-loki-s3 secret is created beforehand. The contents of this secret vary depending on the object storage in use. For more information, see Secrets and TLS Configuration. ----- -3. Create a service account for the collector: - -```terminal -$ oc create sa collector -n openshift-logging -``` - -4. Allow the collector's service account to write data to the LokiStack CR: - -```terminal -$ oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector -n openshift-logging -``` - - -[NOTE] ----- -The ClusterRole resource is created automatically during the Cluster Logging Operator installation and does not need to be created manually. ----- -5. To collect logs, use the service account of the collector by running the following commands: - -```terminal -$ oc adm policy add-cluster-role-to-user collect-application-logs -z collector -n openshift-logging -``` - - -```terminal -$ oc adm policy add-cluster-role-to-user collect-audit-logs -z collector -n openshift-logging -``` - - -```terminal -$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector -n openshift-logging -``` - - -[NOTE] ----- -The example binds the collector to all three roles (application, infrastructure, and audit), but by default, only application and infrastructure logs are collected. To collect audit logs, update your ClusterLogForwarder configuration to include them. Assign roles based on the specific log types required for your environment. ----- -6. Create a UIPlugin CR to enable the Log section in the Observe tab: - -```yaml -apiVersion: observability.openshift.io/v1alpha1 -kind: UIPlugin -metadata: - name: logging -spec: - type: Logging - logging: - lokiStack: - name: logging-loki -``` - -7. Create a ClusterLogForwarder CR to configure log forwarding: - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - name: collector - namespace: openshift-logging -spec: - serviceAccount: - name: collector - outputs: - - name: default-lokistack - type: lokiStack - lokiStack: - authentication: - token: - from: serviceAccount - target: - name: logging-loki - namespace: openshift-logging - tls: - ca: - key: service-ca.crt - configMapName: openshift-service-ca.crt - pipelines: - - name: default-logstore - inputRefs: - - application - - infrastructure - outputRefs: - - default-lokistack -``` - - -[NOTE] ----- -The dataModel field is optional and left unset (dataModel: "") by default. This allows the Cluster Logging Operator (CLO) to automatically select a data model. Currently, the CLO defaults to the ViaQ model when the field is unset, but this will change in future releases. Specifying dataModel: ViaQ ensures the configuration remains compatible if the default changes. 
----- - -* Verify that logs are visible in the Log section of the Observe tab in the Red Hat OpenShift Container Platform web console. - -## Quick start with OpenTelemetry - - -[IMPORTANT] ----- -{FeatureName} is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. -For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. ----- - -To configure OTLP ingestion and enable the OpenTelemetry data model, follow these steps: - -* You have access to an Red Hat OpenShift Container Platform cluster with cluster-admin permissions. -* You have installed the OpenShift CLI (`oc`). -* You have access to a supported object store. For example, AWS S3, Google Cloud Storage, Azure, Swift, Minio, or OpenShift Data Foundation. - -1. Install the Red Hat OpenShift Logging Operator, Loki Operator, and Cluster Observability Operator (COO) from OperatorHub. -2. Create a LokiStack custom resource (CR) in the openshift-logging namespace: - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: - managementState: Managed - size: 1x.extra-small - storage: - schemas: - - effectiveDate: '2024-10-01' - version: v13 - secret: - name: logging-loki-s3 - type: s3 - storageClassName: gp3-csi - tenants: - mode: openshift-logging -``` - - -[NOTE] ----- -Ensure that the logging-loki-s3 secret is created beforehand. The contents of this secret vary depending on the object storage in use. For more information, see "Secrets and TLS Configuration". ----- -3. Create a service account for the collector: - -```terminal -$ oc create sa collector -n openshift-logging -``` - -4. Allow the collector's service account to write data to the LokiStack CR: - -```terminal -$ oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector -n openshift-logging -``` - - -[NOTE] ----- -The ClusterRole resource is created automatically during the Cluster Logging Operator installation and does not need to be created manually. ----- -5. To collect logs, use the service account of the collector by running the following commands: - -```terminal -$ oc adm policy add-cluster-role-to-user collect-application-logs -z collector -n openshift-logging -``` - - -```terminal -$ oc adm policy add-cluster-role-to-user collect-audit-logs -z collector -n openshift-logging -``` - - -```terminal -$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector -n openshift-logging -``` - - -[NOTE] ----- -The example binds the collector to all three roles (application, infrastructure, and audit). By default, only application and infrastructure logs are collected. To collect audit logs, update your ClusterLogForwarder configuration to include them. Assign roles based on the specific log types required for your environment. ----- -6. Create a UIPlugin CR to enable the Log section in the Observe tab: - -```yaml -apiVersion: observability.openshift.io/v1alpha1 -kind: UIPlugin -metadata: - name: logging -spec: - type: Logging - logging: - lokiStack: - name: logging-loki -``` - -7. 
Create a ClusterLogForwarder CR to configure log forwarding: - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - name: collector - namespace: openshift-logging - annotations: - observability.openshift.io/tech-preview-otlp-output: "enabled" 1 -spec: - serviceAccount: - name: collector - outputs: - - name: loki-otlp - type: lokiStack 2 - lokiStack: - target: - name: logging-loki - namespace: openshift-logging - dataModel: Otel 3 - authentication: - token: - from: serviceAccount - tls: - ca: - key: service-ca.crt - configMapName: openshift-service-ca.crt - pipelines: - - name: my-pipeline - inputRefs: - - application - - infrastructure - outputRefs: - - loki-otlp -``` - -Use the annotation to enable the Otel data model, which is a Technology Preview feature. -Define the output type as lokiStack. -Specifies the OpenTelemetry data model. - -[NOTE] ----- -You cannot use lokiStack.labelKeys when dataModel is Otel. To achieve similar functionality when dataModel is Otel, refer to "Configuring LokiStack for OTLP data ingestion". ----- - -* To verify that OTLP is functioning correctly, complete the following steps: -1. In the OpenShift web console, click Observe -> OpenShift Logging -> LokiStack -> Writes. -2. Check the Distributor - Structured Metadata section. \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.1/log6x-about-logging.txt b/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.1/log6x-about-logging.txt new file mode 100644 index 00000000..7b0dbb28 --- /dev/null +++ b/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.1/log6x-about-logging.txt @@ -0,0 +1,16 @@ +# About Logging 6.1 + + + +As a cluster administrator, you can deploy logging on an Red Hat OpenShift Container Platform cluster, and use it to collect and aggregate node system audit logs, application container logs, and infrastructure logs. + +You can use logging to perform the following tasks: + +* Forward logs to your chosen log outputs, including on-cluster, Red Hat managed log storage. +* Visualize your log data in the Red Hat OpenShift Container Platform web console. + + +[NOTE] +---- +Because logging releases on a different cadence from Red Hat OpenShift Container Platform, the logging 6 documentation is available as a separate documentation set at Red Hat OpenShift Logging. +---- \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.1/log6x-clf-6.1.txt b/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.1/log6x-clf-6.1.txt deleted file mode 100644 index eee9c76a..00000000 --- a/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.1/log6x-clf-6.1.txt +++ /dev/null @@ -1,818 +0,0 @@ -# Configuring log forwarding - - -The ClusterLogForwarder (CLF) allows users to configure forwarding of logs to various destinations. It provides a flexible way to select log messages from different sources, send them through a pipeline that can transform or filter them, and forward them to one or more outputs. -* Selects log messages using inputs -* Forwards logs to external destinations using outputs -* Filters, transforms, and drops log messages using filters -* Defines log forwarding pipelines connecting inputs, filters and outputs - -# Setting up log collection - -This release of Cluster Logging requires administrators to explicitly grant log collection permissions to the service account associated with ClusterLogForwarder. 
This was not required in previous releases for the legacy logging scenario consisting of a ClusterLogging and, optionally, a ClusterLogForwarder.logging.openshift.io resource. - -The Red Hat OpenShift Logging Operator provides collect-audit-logs, collect-application-logs, and collect-infrastructure-logs cluster roles, which enable the collector to collect audit logs, application logs, and infrastructure logs respectively. - -Setup log collection by binding the required cluster roles to your service account. - -## Legacy service accounts - -To use the existing legacy service account logcollector, create the following ClusterRoleBinding: - - -```terminal -$ oc adm policy add-cluster-role-to-user collect-application-logs system:serviceaccount:openshift-logging:logcollector -``` - - - -```terminal -$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs system:serviceaccount:openshift-logging:logcollector -``` - - -Additionally, create the following ClusterRoleBinding if collecting audit logs: - - -```terminal -$ oc adm policy add-cluster-role-to-user collect-audit-logs system:serviceaccount:openshift-logging:logcollector -``` - - -## Creating service accounts - -* The Red Hat OpenShift Logging Operator is installed in the openshift-logging namespace. -* You have administrator permissions. - -1. Create a service account for the collector. If you want to write logs to storage that requires a token for authentication, you must include a token in the service account. -2. Bind the appropriate cluster roles to the service account: -Example binding command - -```terminal -$ oc adm policy add-cluster-role-to-user system:serviceaccount:: -``` - - -### Cluster Role Binding for your Service Account - -The role_binding.yaml file binds the ClusterLogging operator’s ClusterRole to a specific ServiceAccount, allowing it to manage Kubernetes resources cluster-wide. - - -```yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRoleBinding -metadata: - name: manager-rolebinding -roleRef: 1 - apiGroup: rbac.authorization.k8s.io 2 - kind: ClusterRole 3 - name: cluster-logging-operator 4 -subjects: 5 - - kind: ServiceAccount 6 - name: cluster-logging-operator 7 - namespace: openshift-logging 8 -``` - - -roleRef: References the ClusterRole to which the binding applies. -apiGroup: Indicates the RBAC API group, specifying that the ClusterRole is part of Kubernetes' RBAC system. -kind: Specifies that the referenced role is a ClusterRole, which applies cluster-wide. -name: The name of the ClusterRole being bound to the ServiceAccount, here cluster-logging-operator. -subjects: Defines the entities (users or service accounts) that are being granted the permissions from the ClusterRole. -kind: Specifies that the subject is a ServiceAccount. -Name: The name of the ServiceAccount being granted the permissions. -namespace: Indicates the namespace where the ServiceAccount is located. - -### Writing application logs - -The write-application-logs-clusterrole.yaml file defines a ClusterRole that grants permissions to write application logs to the Loki logging application. - - -```yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole -metadata: - name: cluster-logging-write-application-logs -rules: 1 - - apiGroups: 2 - - loki.grafana.com 3 - resources: 4 - - application 5 - resourceNames: 6 - - logs 7 - verbs: 8 - - create 9 -``` - - -rules: Specifies the permissions granted by this ClusterRole. -apiGroups: Refers to the API group loki.grafana.com, which relates to the Loki logging system. 
-loki.grafana.com: The API group for managing Loki-related resources. -resources: The resource type that the ClusterRole grants permission to interact with. -application: Refers to the application resources within the Loki logging system. -resourceNames: Specifies the names of resources that this role can manage. -logs: Refers to the log resources that can be created. -verbs: The actions allowed on the resources. -create: Grants permission to create new logs in the Loki system. - -### Writing audit logs - -The write-audit-logs-clusterrole.yaml file defines a ClusterRole that grants permissions to create audit logs in the Loki logging system. - - -```yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole -metadata: - name: cluster-logging-write-audit-logs -rules: 1 - - apiGroups: 2 - - loki.grafana.com 3 - resources: 4 - - audit 5 - resourceNames: 6 - - logs 7 - verbs: 8 - - create 9 -``` - - -rules: Defines the permissions granted by this ClusterRole. -apiGroups: Specifies the API group loki.grafana.com. -loki.grafana.com: The API group responsible for Loki logging resources. -resources: Refers to the resource type this role manages, in this case, audit. -audit: Specifies that the role manages audit logs within Loki. -resourceNames: Defines the specific resources that the role can access. -logs: Refers to the logs that can be managed under this role. -verbs: The actions allowed on the resources. -create: Grants permission to create new audit logs. - -### Writing infrastructure logs - -The write-infrastructure-logs-clusterrole.yaml file defines a ClusterRole that grants permission to create infrastructure logs in the Loki logging system. - - -```yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole -metadata: - name: cluster-logging-write-infrastructure-logs -rules: 1 - - apiGroups: 2 - - loki.grafana.com 3 - resources: 4 - - infrastructure 5 - resourceNames: 6 - - logs 7 - verbs: 8 - - create 9 -``` - - -rules: Specifies the permissions this ClusterRole grants. -apiGroups: Specifies the API group for Loki-related resources. -loki.grafana.com: The API group managing the Loki logging system. -resources: Defines the resource type that this role can interact with. -infrastructure: Refers to infrastructure-related resources that this role manages. -resourceNames: Specifies the names of resources this role can manage. -logs: Refers to the log resources related to infrastructure. -verbs: The actions permitted by this role. -create: Grants permission to create infrastructure logs in the Loki system. - -### ClusterLogForwarder editor role - -The clusterlogforwarder-editor-role.yaml file defines a ClusterRole that allows users to manage ClusterLogForwarders in OpenShift. - - -```yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole -metadata: - name: clusterlogforwarder-editor-role -rules: 1 - - apiGroups: 2 - - observability.openshift.io 3 - resources: 4 - - clusterlogforwarders 5 - verbs: 6 - - create 7 - - delete 8 - - get 9 - - list 10 - - patch 11 - - update 12 - - watch 13 -``` - - -rules: Specifies the permissions this ClusterRole grants. -apiGroups: Refers to the OpenShift-specific API group -obervability.openshift.io: The API group for managing observability resources, like logging. -resources: Specifies the resources this role can manage. -clusterlogforwarders: Refers to the log forwarding resources in OpenShift. -verbs: Specifies the actions allowed on the ClusterLogForwarders. -create: Grants permission to create new ClusterLogForwarders. 
-delete: Grants permission to delete existing ClusterLogForwarders. -get: Grants permission to retrieve information about specific ClusterLogForwarders. -list: Allows listing all ClusterLogForwarders. -patch: Grants permission to partially modify ClusterLogForwarders. -update: Grants permission to update existing ClusterLogForwarders. -watch: Grants permission to monitor changes to ClusterLogForwarders. - -# Modifying log level in collector - -To modify the log level in the collector, you can set the observability.openshift.io/log-level annotation to trace, debug, info, warn, error, and off. - - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - name: collector - annotations: - observability.openshift.io/log-level: debug -# ... -``` - - -# Managing the Operator - -The ClusterLogForwarder resource has a managementState field that controls whether the operator actively manages its resources or leaves them Unmanaged: - -Managed:: (default) The operator will drive the logging resources to match the desired state in the CLF spec. -Unmanaged:: The operator will not take any action related to the logging components. - -This allows administrators to temporarily pause log forwarding by setting managementState to Unmanaged. - -# Structure of the ClusterLogForwarder - -The CLF has a spec section that contains the following key components: - -Inputs:: Select log messages to be forwarded. Built-in input types application, infrastructure and audit forward logs from different parts of the cluster. You can also define custom inputs. -Outputs:: Define destinations to forward logs to. Each output has a unique name and type-specific configuration. -Pipelines:: Define the path logs take from inputs, through filters, to outputs. Pipelines have a unique name and consist of a list of input, output and filter names. -Filters:: Transform or drop log messages in the pipeline. Users can define filters that match certain log fields and drop or modify the messages. Filters are applied in the order specified in the pipeline. - -## Inputs - -Inputs are configured in an array under spec.inputs. There are three built-in input types: - -application:: Selects logs from all application containers, excluding those in infrastructure namespaces. -infrastructure:: Selects logs from nodes and from infrastructure components running in the following namespaces: -* default -* kube -* openshift -* Containing the kube- or openshift- prefix -audit:: Selects logs from the OpenShift API server audit logs, Kubernetes API server audit logs, ovn audit logs, and node audit logs from auditd. - -Users can define custom inputs of type application that select logs from specific namespaces or using pod labels. - -## Outputs - -Outputs are configured in an array under spec.outputs. Each output must have a unique name and a type. Supported types are: - -azureMonitor:: Forwards logs to Azure Monitor. -cloudwatch:: Forwards logs to AWS CloudWatch. -googleCloudLogging:: Forwards logs to Google Cloud Logging. -http:: Forwards logs to a generic HTTP endpoint. -kafka:: Forwards logs to a Kafka broker. -loki:: Forwards logs to a Loki logging backend. -lokistack:: Forwards logs to the logging supported combination of Loki and web proxy with Red Hat OpenShift Container Platform authentication integration. LokiStack's proxy uses Red Hat OpenShift Container Platform authentication to enforce multi-tenancy -otlp:: Forwards logs using the OpenTelemetry Protocol. -splunk:: Forwards logs to Splunk. 
-syslog:: Forwards logs to an external syslog server. - -Each output type has its own configuration fields. - -# Configuring OTLP output - -Cluster administrators can use the OpenTelemetry Protocol (OTLP) output to collect and forward logs to OTLP receivers. The OTLP output uses the specification defined by the OpenTelemetry Observability framework to send data over HTTP with JSON encoding. - - -[IMPORTANT] ----- -{FeatureName} is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. -For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. ----- - -* Create or edit a ClusterLogForwarder custom resource (CR) to enable forwarding using OTLP by adding the following annotation: -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - annotations: - observability.openshift.io/tech-preview-otlp-output: "enabled" 1 - name: clf-otlp -spec: - serviceAccount: - name: - outputs: - - name: otlp - type: otlp - otlp: - tuning: - compression: gzip - deliveryMode: AtLeastOnce - maxRetryDuration: 20 - maxWrite: 10M - minRetryDuration: 5 - url: 2 - pipelines: - - inputRefs: - - application - - infrastructure - - audit - name: otlp-logs - outputRefs: - - otlp -``` - -Use this annotation to enable the OpenTelemetry Protocol (OTLP) output, which is a Technology Preview feature. -This URL must be absolute and is a placeholder for the OTLP endpoint where logs are sent. - - -[NOTE] ----- -The OTLP output uses the OpenTelemetry data model, which is different from the ViaQ data model that is used by other output types. It adheres to the OTLP using OpenTelemetry Semantic Conventions defined by the OpenTelemetry Observability framework. ----- - -## Pipelines - -Pipelines are configured in an array under spec.pipelines. Each pipeline must have a unique name and consists of: - -inputRefs:: Names of inputs whose logs should be forwarded to this pipeline. -outputRefs:: Names of outputs to send logs to. -filterRefs:: (optional) Names of filters to apply. - -The order of filterRefs matters, as they are applied sequentially. Earlier filters can drop messages that will not be processed by later filters. - -## Filters - -Filters are configured in an array under spec.filters. They can match incoming log messages based on the value of structured fields and modify or drop them. - -Administrators can configure the following types of filters: - -## Enabling multi-line exception detection - -Enables multi-line error detection of container logs. - - -[WARNING] ----- -Enabling this feature could have performance implications and may require additional computing resources or alternate logging solutions. ----- - -Log parsers often incorrectly identify separate lines of the same exception as separate exceptions. This leads to extra log entries and an incomplete or inaccurate view of the traced information. 
- - -```java -java.lang.NullPointerException: Cannot invoke "String.toString()" because "" is null - at testjava.Main.handle(Main.java:47) - at testjava.Main.printMe(Main.java:19) - at testjava.Main.main(Main.java:10) -``` - - -* To enable logging to detect multi-line exceptions and reassemble them into a single log entry, ensure that the ClusterLogForwarder Custom Resource (CR) contains a detectMultilineErrors field under the .spec.filters. - - -```yaml -apiVersion: "observability.openshift.io/v1" -kind: ClusterLogForwarder -metadata: - name: - namespace: -spec: - serviceAccount: - name: - filters: - - name: - type: detectMultilineException - pipelines: - - inputRefs: - - - name: - filterRefs: - - - outputRefs: - - -``` - - -### Details - -When log messages appear as a consecutive sequence forming an exception stack trace, they are combined into a single, unified log record. The first log message’s content is replaced with the concatenated content of all the message fields in the sequence. - -The collector supports the following languages: - -* Java -* JS -* Ruby -* Python -* Golang -* PHP -* Dart - -## Configuring content filters to drop unwanted log records - -When the drop filter is configured, the log collector evaluates log streams according to the filters before forwarding. The collector drops unwanted log records that match the specified configuration. - -1. Add a configuration for a filter to the filters spec in the ClusterLogForwarder CR. - -The following example shows how to configure the ClusterLogForwarder CR to drop log records based on regular expressions: -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: -# ... -spec: - serviceAccount: - name: - filters: - - name: - type: drop 1 - drop: 2 - - test: 3 - - field: .kubernetes.labels."foo-bar/baz" 4 - matches: .+ 5 - - field: .kubernetes.pod_name - notMatches: "my-pod" 6 - pipelines: - - name: 7 - filterRefs: [""] -# ... -``` - -Specifies the type of filter. The drop filter drops log records that match the filter configuration. -Specifies configuration options for applying the drop filter. -Specifies the configuration for tests that are used to evaluate whether a log record is dropped. -* If all the conditions specified for a test are true, the test passes and the log record is dropped. -* When multiple tests are specified for the drop filter configuration, if any of the tests pass, the record is dropped. -* If there is an error evaluating a condition, for example, the field is missing from the log record being evaluated, that condition evaluates to false. -Specifies a dot-delimited field path, which is a path to a field in the log record. The path can contain alpha-numeric characters and underscores (a-zA-Z0-9_), for example, .kubernetes.namespace_name. If segments contain characters outside of this range, the segment must be in quotes, for example, .kubernetes.labels."foo.bar-bar/baz". You can include multiple field paths in a single test configuration, but they must all evaluate to true for the test to pass and the drop filter to be applied. -Specifies a regular expression. If log records match this regular expression, they are dropped. You can set either the matches or notMatches condition for a single field path, but not both. -Specifies a regular expression. If log records do not match this regular expression, they are dropped. You can set either the matches or notMatches condition for a single field path, but not both. 
-Specifies the pipeline that the drop filter is applied to. -2. Apply the ClusterLogForwarder CR by running the following command: - -```terminal -$ oc apply -f .yaml -``` - - -The following additional example shows how you can configure the drop filter to only keep higher priority log records: - - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: -# ... -spec: - serviceAccount: - name: - filters: - - name: important - type: drop - drop: - - test: - - field: .message - notMatches: "(?i)critical|error" - - field: .level - matches: "info|warning" -# ... -``` - - -In addition to including multiple field paths in a single test configuration, you can also include additional tests that are treated as OR checks. In the following example, records are dropped if either test configuration evaluates to true. However, for the second test configuration, both field specs must be true for it to be evaluated to true: - - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: -# ... -spec: - serviceAccount: - name: - filters: - - name: important - type: drop - drop: - - test: - - field: .kubernetes.namespace_name - matches: "^open" - - test: - - field: .log_type - matches: "application" - - field: .kubernetes.pod_name - notMatches: "my-pod" -# ... -``` - - -## Overview of API audit filter - -OpenShift API servers generate audit events for each API call, detailing the request, response, and the identity of the requester, leading to large volumes of data. The API Audit filter uses rules to enable the exclusion of non-essential events and the reduction of event size, facilitating a more manageable audit trail. Rules are checked in order, and checking stops at the first match. The amount of data that is included in an event is determined by the value of the level field: - -* None: The event is dropped. -* Metadata: Audit metadata is included, request and response bodies are removed. -* Request: Audit metadata and the request body are included, the response body is removed. -* RequestResponse: All data is included: metadata, request body and response body. The response body can be very large. For example, oc get pods -A generates a response body containing the YAML description of every pod in the cluster. - -The ClusterLogForwarder custom resource (CR) uses the same format as the standard Kubernetes audit policy, while providing the following additional functions: - -Wildcards:: Names of users, groups, namespaces, and resources can have a leading or trailing * asterisk character. For example, the namespace openshift-\* matches openshift-apiserver or openshift-authentication. Resource \*/status matches Pod/status or Deployment/status. -Default Rules:: Events that do not match any rule in the policy are filtered as follows: -* Read-only system events such as get, list, and watch are dropped. -* Service account write events that occur within the same namespace as the service account are dropped. -* All other events are forwarded, subject to any configured rate limits. - -To disable these defaults, either end your rules list with a rule that has only a level field or add an empty rule. - -Omit Response Codes:: A list of integer status codes to omit. You can drop events based on the HTTP status code in the response by using the OmitResponseCodes field, which lists HTTP status codes for which no events are created. The default value is [404, 409, 422, 429]. If the value is an empty list, [], then no status codes are omitted. 
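-
-The following is a minimal, hypothetical sketch that ties the three options above together: a wildcard namespace match, a final rule containing only a level field to replace the default rules, and an empty omit-response-codes list. The field spelling omitResponseCodes, the filter name, and the pipeline name are assumptions for illustration, and the output configuration is omitted for brevity; a fuller policy example appears later in this section.
-
-```yaml
-apiVersion: observability.openshift.io/v1
-kind: ClusterLogForwarder
-metadata:
-  name: 
-  namespace: 
-spec:
-  serviceAccount:
-    name: 
-  filters:
-  - name: concise-audit
-    type: kubeAPIAudit
-    kubeAPIAudit:
-      # Assumed spelling of the field described above; an empty list means no status codes are omitted.
-      omitResponseCodes: []
-      rules:
-      # Wildcard match: keep only metadata for requests in namespaces with the openshift- prefix.
-      - level: Metadata
-        namespaces: ["openshift-*"]
-      # A final rule with only a level field replaces the default rules; here it drops all other events.
-      - level: None
-  pipelines:
-  - name: audit-policy-pipeline
-    inputRefs:
-    - audit
-    filterRefs:
-    - concise-audit
-```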
- -The ClusterLogForwarder CR audit policy acts in addition to the Red Hat OpenShift Container Platform audit policy. The ClusterLogForwarder CR audit filter changes what the log collector forwards and provides the ability to filter by verb, user, group, namespace, or resource. You can create multiple filters to send different summaries of the same audit stream to different places. For example, you can send a detailed stream to the local cluster log store and a less detailed stream to a remote site. - - -[NOTE] ----- -You must have a cluster role collect-audit-logs to collect the audit logs. The following example provided is intended to illustrate the range of rules possible in an audit policy and is not a recommended configuration. ----- - - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - name: - namespace: -spec: - serviceAccount: - name: - pipelines: - - name: my-pipeline - inputRefs: audit 1 - filterRefs: my-policy 2 - filters: - - name: my-policy - type: kubeAPIAudit - kubeAPIAudit: - # Don't generate audit events for all requests in RequestReceived stage. - omitStages: - - "RequestReceived" - - rules: - # Log pod changes at RequestResponse level - - level: RequestResponse - resources: - - group: "" - resources: ["pods"] - - # Log "pods/log", "pods/status" at Metadata level - - level: Metadata - resources: - - group: "" - resources: ["pods/log", "pods/status"] - - # Don't log requests to a configmap called "controller-leader" - - level: None - resources: - - group: "" - resources: ["configmaps"] - resourceNames: ["controller-leader"] - - # Don't log watch requests by the "system:kube-proxy" on endpoints or services - - level: None - users: ["system:kube-proxy"] - verbs: ["watch"] - resources: - - group: "" # core API group - resources: ["endpoints", "services"] - - # Don't log authenticated requests to certain non-resource URL paths. - - level: None - userGroups: ["system:authenticated"] - nonResourceURLs: - - "/api*" # Wildcard matching. - - "/version" - - # Log the request body of configmap changes in kube-system. - - level: Request - resources: - - group: "" # core API group - resources: ["configmaps"] - # This rule only applies to resources in the "kube-system" namespace. - # The empty string "" can be used to select non-namespaced resources. - namespaces: ["kube-system"] - - # Log configmap and secret changes in all other namespaces at the Metadata level. - - level: Metadata - resources: - - group: "" # core API group - resources: ["secrets", "configmaps"] - - # Log all other resources in core and extensions at the Request level. - - level: Request - resources: - - group: "" # core API group - - group: "extensions" # Version of group should NOT be included. - - # A catch-all rule to log all other requests at the Metadata level. - - level: Metadata -``` - - -The log types that are collected. The value for this field can be audit for audit logs, application for application logs, infrastructure for infrastructure logs, or a named input that has been defined for your application. -The name of your audit policy. - -## Filtering application logs at input by including the label expressions or a matching label key and values - -You can include the application logs based on the label expressions or a matching label key and its values by using the input selector. - -1. Add a configuration for a filter to the input spec in the ClusterLogForwarder CR. 
- -The following example shows how to configure the ClusterLogForwarder CR to include logs based on label expressions or matched label key/values: -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -# ... -spec: - serviceAccount: - name: - inputs: - - name: mylogs - application: - selector: - matchExpressions: - - key: env 1 - operator: In 2 - values: ["prod", "qa"] 3 - - key: zone - operator: NotIn - values: ["east", "west"] - matchLabels: 4 - app: one - name: app1 - type: application -# ... -``` - -Specifies the label key to match. -Specifies the operator. Valid values include: In, NotIn, Exists, and DoesNotExist. -Specifies an array of string values. If the operator value is either Exists or DoesNotExist, the value array must be empty. -Specifies an exact key or value mapping. -2. Apply the ClusterLogForwarder CR by running the following command: - -```terminal -$ oc apply -f .yaml -``` - - -## Configuring content filters to prune log records - -When the prune filter is configured, the log collector evaluates log streams according to the filters before forwarding. The collector prunes log records by removing low value fields such as pod annotations. - -1. Add a configuration for a filter to the prune spec in the ClusterLogForwarder CR. - -The following example shows how to configure the ClusterLogForwarder CR to prune log records based on field paths: - -[IMPORTANT] ----- -If both are specified, records are pruned based on the notIn array first, which takes precedence over the in array. After records have been pruned by using the notIn array, they are then pruned by using the in array. ----- -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: -# ... -spec: - serviceAccount: - name: - filters: - - name: - type: prune 1 - prune: 2 - in: [.kubernetes.annotations, .kubernetes.namespace_id] 3 - notIn: [.kubernetes,.log_type,.message,."@timestamp"] 4 - pipelines: - - name: 5 - filterRefs: [""] -# ... -``` - -Specify the type of filter. The prune filter prunes log records by configured fields. -Specify configuration options for applying the prune filter. The in and notIn fields are specified as arrays of dot-delimited field paths, which are paths to fields in log records. These paths can contain alpha-numeric characters and underscores (a-zA-Z0-9_), for example, .kubernetes.namespace_name. If segments contain characters outside of this range, the segment must be in quotes, for example, .kubernetes.labels."foo.bar-bar/baz". -Optional: Any fields that are specified in this array are removed from the log record. -Optional: Any fields that are not specified in this array are removed from the log record. -Specify the pipeline that the prune filter is applied to. - -[NOTE] ----- -The filters exempts the log_type, .log_source, and .message fields. ----- -2. Apply the ClusterLogForwarder CR by running the following command: - -```terminal -$ oc apply -f .yaml -``` - - -# Filtering the audit and infrastructure log inputs by source - -You can define the list of audit and infrastructure sources to collect the logs by using the input selector. - -1. Add a configuration to define the audit and infrastructure sources in the ClusterLogForwarder CR. 
- -The following example shows how to configure the ClusterLogForwarder CR to define audit and infrastructure sources: -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -# ... -spec: - serviceAccount: - name: - inputs: - - name: mylogs1 - type: infrastructure - infrastructure: - sources: 1 - - node - - name: mylogs2 - type: audit - audit: - sources: 2 - - kubeAPI - - openshiftAPI - - ovn -# ... -``` - -Specifies the list of infrastructure sources to collect. The valid sources include: -* node: Journal log from the node -* container: Logs from the workloads deployed in the namespaces -Specifies the list of audit sources to collect. The valid sources include: -* kubeAPI: Logs from the Kubernetes API servers -* openshiftAPI: Logs from the OpenShift API servers -* auditd: Logs from a node auditd service -* ovn: Logs from an open virtual network service -2. Apply the ClusterLogForwarder CR by running the following command: - -```terminal -$ oc apply -f .yaml -``` - - -# Filtering application logs at input by including or excluding the namespace or container name - -You can include or exclude the application logs based on the namespace and container name by using the input selector. - -1. Add a configuration to include or exclude the namespace and container names in the ClusterLogForwarder CR. - -The following example shows how to configure the ClusterLogForwarder CR to include or exclude namespaces and container names: -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -# ... -spec: - serviceAccount: - name: - inputs: - - name: mylogs - application: - includes: - - namespace: "my-project" 1 - container: "my-container" 2 - excludes: - - container: "other-container*" 3 - namespace: "other-namespace" 4 - type: application -# ... -``` - -Specifies that the logs are only collected from these namespaces. -Specifies that the logs are only collected from these containers. -Specifies the pattern of namespaces to ignore when collecting the logs. -Specifies the set of containers to ignore when collecting the logs. - -[NOTE] ----- -The excludes field takes precedence over the includes field. ----- -2. Apply the ClusterLogForwarder CR by running the following command: - -```terminal -$ oc apply -f .yaml -``` diff --git a/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.1/log6x-configuring-lokistack-otlp-6.1.txt b/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.1/log6x-configuring-lokistack-otlp-6.1.txt deleted file mode 100644 index d9bc000e..00000000 --- a/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.1/log6x-configuring-lokistack-otlp-6.1.txt +++ /dev/null @@ -1,180 +0,0 @@ -# OTLP data ingestion in Loki - - -You can use an API endpoint by using the OpenTelemetry Protocol (OTLP) with Logging 6.1. As OTLP is a standardized format not specifically designed for Loki, OTLP requires an additional Loki configuration to map data format of OpenTelemetry to data model of Loki. OTLP lacks concepts such as stream labels or structured metadata. Instead, OTLP provides metadata about log entries as attributes, grouped into the following three categories: -* Resource -* Scope -* Log -You can set metadata for multiple entries simultaneously or individually as needed. - -# Configuring LokiStack for OTLP data ingestion - - -[IMPORTANT] ----- -{FeatureName} is a Technology Preview feature only. 
Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. -For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. ----- - -To configure a LokiStack custom resource (CR) for OTLP ingestion, follow these steps: - -* Ensure that your Loki setup supports structured metadata, introduced in schema version 13 to enable OTLP log ingestion. - -1. Set the schema version: -* When creating a new LokiStack CR, set version: v13 in the storage schema configuration. - -[NOTE] ----- -For existing configurations, add a new schema entry with version: v13 and an effectiveDate in the future. For more information on updating schema versions, see Upgrading Schemas (Grafana documentation). ----- -2. Configure the storage schema as follows: -Example configure storage schema - -```yaml -# ... -spec: - storage: - schemas: - - version: v13 - effectiveDate: 2024-10-25 -``` - - -Once the effectiveDate has passed, the v13 schema takes effect, enabling your LokiStack to store structured metadata. - -# Attribute mapping - -When you set the Loki Operator to the openshift-logging mode, Loki Operator automatically applies a default set of attribute mappings. These mappings align specific OTLP attributes with stream labels and structured metadata of Loki. - -For typical setups, these default mappings are sufficient. However, you might need to customize attribute mapping in the following cases: - -* Using a custom collector: If your setup includes a custom collector that generates additional attributes, consider customizing the mapping to ensure these attributes are retained in Loki. -* Adjusting attribute detail levels: If the default attribute set is more detailed than necessary, you can reduce it to essential attributes only. This can avoid excessive data storage and streamline the logging process. - - -[IMPORTANT] ----- -Attributes that are not mapped to either stream labels or structured metadata are not stored in Loki. ----- - -## Custom attribute mapping for OpenShift - -When using the Loki Operator in openshift-logging mode, attribute mapping follow OpenShift default values, but you can configure custom mappings to adjust default values. -In the openshift-logging mode, you can configure custom attribute mappings globally for all tenants or for individual tenants as needed. When you define custom mappings, they are appended to the OpenShift default values. If you do not need default labels, you can disable them in the tenant configuration. - - -[NOTE] ----- -A major difference between the Loki Operator and Loki lies in inheritance handling. Loki copies only default_resource_attributes_as_index_labels to tenants by default, while the Loki Operator applies the entire global configuration to each tenant in the openshift-logging mode. ----- - -Within LokiStack, attribute mapping configuration is managed through the limits setting. See the following example LokiStack configuration: - - -```yaml -# ... -spec: - limits: - global: - otlp: {} 1 - tenants: - application: - otlp: {} 2 -``` - - -Defines global OTLP attribute configuration. -OTLP attribute configuration for the application tenant within openshift-logging mode. 
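-
-For orientation, the following is a minimal, hypothetical sketch of how the empty application tenant block above might be filled in. The attribute names are examples only; as noted above, custom mappings that you define are appended to the OpenShift defaults.
-
-```yaml
-# ...
-spec:
-  limits:
-    tenants:
-      application:
-        otlp:
-          streamLabels:
-            resourceAttributes:
-            - name: "k8s.namespace.name" # resource-level attribute promoted to a stream label
-          structuredMetadata:
-            logAttributes:
-            - name: "http.route" # log-level attribute stored as structured metadata
-```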
- - -[NOTE] ----- -Both global and per-tenant OTLP configurations can map attributes to stream labels or structured metadata. At least one stream label is required to save a log entry to Loki storage, so ensure this configuration meets that requirement. ----- - -Stream labels derive only from resource-level attributes, which the LokiStack resource structure reflects: - - -```yaml -spec: - limits: - global: - otlp: - streamLabels: - resourceAttributes: - - name: "k8s.namespace.name" - - name: "k8s.pod.name" - - name: "k8s.container.name" -``` - - -Structured metadata, in contrast, can be generated from resource, scope or log-level attributes: - - -```yaml -# ... -spec: - limits: - global: - otlp: - streamLabels: -# ... - structuredMetadata: - resourceAttributes: - - name: "process.command_line" - - name: "k8s\\.pod\\.labels\\..+" - regex: true - scopeAttributes: - - name: "service.name" - logAttributes: - - name: "http.route" -``` - - - -[TIP] ----- -Use regular expressions by setting regex: true for attributes names when mapping similar attributes in Loki. ----- - - -[IMPORTANT] ----- -Avoid using regular expressions for stream labels, as this can increase data volume. ----- - -## Customizing OpenShift defaults - -In openshift-logging mode, certain attributes are required and cannot be removed from the configuration due to their role in OpenShift functions. Other attributes, labeled recommended, might be disabled if performance is impacted. - -When using the openshift-logging mode without custom attributes, you can achieve immediate compatibility with OpenShift tools. If additional attributes are needed as stream labels or structured metadata, use custom configuration. Custom configurations can merge with default configurations. - -## Removing recommended attributes - -To reduce default attributes in openshift-logging mode, disable recommended attributes: - - -```yaml -# ... -spec: - tenants: - mode: openshift-logging - openshift: - otlp: - disableRecommendedAttributes: true 1 -``` - - -Set disableRecommendedAttributes: true to remove recommended attributes, which limits default attributes to the required attributes. - - -[NOTE] ----- -This option is beneficial if the default attributes causes performance or storage issues. This setting might negatively impact query performance, as it removes default stream labels. You should pair this option with a custom attribute configuration to retain attributes essential for queries. ----- - -# Additional resources - -* Loki labels -* Structured metadata -* OpenTelemetry attribute \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.1/log6x-loki-6.1.txt b/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.1/log6x-loki-6.1.txt deleted file mode 100644 index 620da8db..00000000 --- a/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.1/log6x-loki-6.1.txt +++ /dev/null @@ -1,764 +0,0 @@ -# Storing logs with LokiStack - - -You can configure a LokiStack CR to store application, audit, and infrastructure-related logs. -Loki is a horizontally scalable, highly available, multi-tenant log aggregation system offered as a GA log store for logging for Red Hat OpenShift that can be visualized with the OpenShift Observability UI. The Loki configuration provided by OpenShift Logging is a short-term log store designed to enable users to perform fast troubleshooting with the collected logs. 
For that purpose, the logging for Red Hat OpenShift configuration of Loki has short-term storage, and is optimized for very recent queries. - -[IMPORTANT] ----- -For long-term storage or queries over a long time period, users should look to log stores external to their cluster. Loki sizing is only tested and supported for short term storage, for a maximum of 30 days. ----- - -# Loki deployment sizing - -Sizing for Loki follows the format of 1x. where the value 1x is number of instances and specifies performance capabilities. - -The 1x.pico configuration defines a single Loki deployment with minimal resource and limit requirements, offering high availability (HA) support for all Loki components. This configuration is suited for deployments that do not require a single replication factor or auto-compaction. - -Disk requests are similar across size configurations, allowing customers to test different sizes to determine the best fit for their deployment needs. - - -[IMPORTANT] ----- -It is not possible to change the number 1x for the deployment size. ----- - - - -# Prerequisites - -* You have installed the Loki Operator by using the CLI or web console. -* You have a serviceAccount in the same namespace in which you create the ClusterLogForwarder. -* The serviceAccount is assigned collect-audit-logs, collect-application-logs, and collect-infrastructure-logs cluster roles. - -# Core Setup and Configuration - -Role-based access controls, basic monitoring, and pod placement to deploy Loki. - -# Authorizing LokiStack rules RBAC permissions - -Administrators can allow users to create and manage their own alerting and recording rules by binding cluster roles to usernames. -Cluster roles are defined as ClusterRole objects that contain necessary role-based access control (RBAC) permissions for users. - -The following cluster roles for alerting and recording rules are available for LokiStack: - - - -## Examples - -To apply cluster roles for a user, you must bind an existing cluster role to a specific username. - -Cluster roles can be cluster or namespace scoped, depending on which type of role binding you use. -When a RoleBinding object is used, as when using the oc adm policy add-role-to-user command, the cluster role only applies to the specified namespace. -When a ClusterRoleBinding object is used, as when using the oc adm policy add-cluster-role-to-user command, the cluster role applies to all namespaces in the cluster. - -The following example command gives the specified user create, read, update and delete (CRUD) permissions for alerting rules in a specific namespace in the cluster: - - -```terminal -$ oc adm policy add-role-to-user alertingrules.loki.grafana.com-v1-admin -n -``` - - -The following command gives the specified user administrator permissions for alerting rules in all namespaces: - - -```terminal -$ oc adm policy add-cluster-role-to-user alertingrules.loki.grafana.com-v1-admin -``` - - -# Creating a log-based alerting rule with Loki - -The AlertingRule CR contains a set of specifications and webhook validation definitions to declare groups of alerting rules for a single LokiStack instance. In addition, the webhook validation definition provides support for rule validation conditions: - -* If an AlertingRule CR includes an invalid interval period, it is an invalid alerting rule -* If an AlertingRule CR includes an invalid for period, it is an invalid alerting rule. -* If an AlertingRule CR includes an invalid LogQL expr, it is an invalid alerting rule. 
-* If an AlertingRule CR includes two groups with the same name, it is an invalid alerting rule. -* If none of the above applies, an alerting rule is considered valid. - - - -1. Create an AlertingRule custom resource (CR): -Example infrastructure AlertingRule CR - -```yaml - apiVersion: loki.grafana.com/v1 - kind: AlertingRule - metadata: - name: loki-operator-alerts - namespace: openshift-operators-redhat 1 - labels: 2 - openshift.io/: "true" - spec: - tenantID: "infrastructure" 3 - groups: - - name: LokiOperatorHighReconciliationError - rules: - - alert: HighPercentageError - expr: | 4 - sum(rate({kubernetes_namespace_name="openshift-operators-redhat", kubernetes_pod_name=~"loki-operator-controller-manager.*"} |= "error" [1m])) by (job) - / - sum(rate({kubernetes_namespace_name="openshift-operators-redhat", kubernetes_pod_name=~"loki-operator-controller-manager.*"}[1m])) by (job) - > 0.01 - for: 10s - labels: - severity: critical 5 - annotations: - summary: High Loki Operator Reconciliation Errors 6 - description: High Loki Operator Reconciliation Errors 7 -``` - -The namespace where this AlertingRule CR is created must have a label matching the LokiStack spec.rules.namespaceSelector definition. -The labels block must match the LokiStack spec.rules.selector definition. -AlertingRule CRs for infrastructure tenants are only supported in the openshift-*, kube-\*, or default namespaces. -The value for kubernetes_namespace_name: must match the value for metadata.namespace. -The value of this mandatory field must be critical, warning, or info. -This field is mandatory. -This field is mandatory. -Example application AlertingRule CR - -```yaml - apiVersion: loki.grafana.com/v1 - kind: AlertingRule - metadata: - name: app-user-workload - namespace: app-ns 1 - labels: 2 - openshift.io/: "true" - spec: - tenantID: "application" - groups: - - name: AppUserWorkloadHighError - rules: - - alert: - expr: | 3 - sum(rate({kubernetes_namespace_name="app-ns", kubernetes_pod_name=~"podName.*"} |= "error" [1m])) by (job) - for: 10s - labels: - severity: critical 4 - annotations: - summary: 5 - description: 6 -``` - -The namespace where this AlertingRule CR is created must have a label matching the LokiStack spec.rules.namespaceSelector definition. -The labels block must match the LokiStack spec.rules.selector definition. -Value for kubernetes_namespace_name: must match the value for metadata.namespace. -The value of this mandatory field must be critical, warning, or info. -The value of this mandatory field is a summary of the rule. -The value of this mandatory field is a detailed description of the rule. -2. Apply the AlertingRule CR: - -```terminal -$ oc apply -f .yaml -``` - - -# Configuring Loki to tolerate memberlist creation failure - -In an Red Hat OpenShift Container Platform cluster, administrators generally use a non-private IP network range. As a result, the LokiStack memberlist configuration fails because, by default, it only uses private IP networks. - -As an administrator, you can select the pod network for the memberlist configuration. You can modify the LokiStack custom resource (CR) to use the podIP address in the hashRing spec. 
To configure the LokiStack CR, use the following command: - - -```terminal -$ oc patch LokiStack logging-loki -n openshift-logging --type=merge -p '{"spec": {"hashRing":{"memberlist":{"instanceAddrType":"podIP"},"type":"memberlist"}}}' -``` - - - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: -# ... - hashRing: - type: memberlist - memberlist: - instanceAddrType: podIP -# ... -``` - - -# Enabling stream-based retention with Loki - -You can configure retention policies based on log streams. Rules for these may be set globally, per-tenant, or both. If you configure both, tenant rules apply before global rules. - - -[IMPORTANT] ----- -If there is no retention period defined on the s3 bucket or in the LokiStack custom resource (CR), then the logs are not pruned and they stay in the s3 bucket forever, which might fill up the s3 storage. ----- - - -[NOTE] ----- -Schema v13 is recommended. ----- - -1. Create a LokiStack CR: -* Enable stream-based retention globally as shown in the following example: -Example global stream-based retention for AWS - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: - limits: - global: 1 - retention: 2 - days: 20 - streams: - - days: 4 - priority: 1 - selector: '{kubernetes_namespace_name=~"test.+"}' 3 - - days: 1 - priority: 1 - selector: '{log_type="infrastructure"}' - managementState: Managed - replicationFactor: 1 - size: 1x.small - storage: - schemas: - - effectiveDate: "2020-10-11" - version: v13 - secret: - name: logging-loki-s3 - type: aws - storageClassName: gp3-csi - tenants: - mode: openshift-logging -``` - -Sets retention policy for all log streams. Note: This field does not impact the retention period for stored logs in object storage. -Retention is enabled in the cluster when this block is added to the CR. -Contains the LogQL query used to define the log stream.spec: -limits: -* Enable stream-based retention per-tenant basis as shown in the following example: -Example per-tenant stream-based retention for AWS - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: - limits: - global: - retention: - days: 20 - tenants: 1 - application: - retention: - days: 1 - streams: - - days: 4 - selector: '{kubernetes_namespace_name=~"test.+"}' 2 - infrastructure: - retention: - days: 5 - streams: - - days: 1 - selector: '{kubernetes_namespace_name=~"openshift-cluster.+"}' - managementState: Managed - replicationFactor: 1 - size: 1x.small - storage: - schemas: - - effectiveDate: "2020-10-11" - version: v13 - secret: - name: logging-loki-s3 - type: aws - storageClassName: gp3-csi - tenants: - mode: openshift-logging -``` - -Sets retention policy by tenant. Valid tenant types are application, audit, and infrastructure. -Contains the LogQL query used to define the log stream. -2. Apply the LokiStack CR: - -```terminal -$ oc apply -f .yaml -``` - - -# Loki pod placement - -You can control which nodes the Loki pods run on, and prevent other workloads from using those nodes, by using tolerations or node selectors on the pods. - -You can apply tolerations to the log store pods with the LokiStack custom resource (CR) and apply taints to a node with the node specification. A taint on a node is a key:value pair that instructs the node to repel all pods that do not allow the taint. 
Using a specific key:value pair that is not on other pods ensures that only the log store pods can run on that node. - - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: -# ... - template: - compactor: 1 - nodeSelector: - node-role.kubernetes.io/infra: "" 2 - distributor: - nodeSelector: - node-role.kubernetes.io/infra: "" - gateway: - nodeSelector: - node-role.kubernetes.io/infra: "" - indexGateway: - nodeSelector: - node-role.kubernetes.io/infra: "" - ingester: - nodeSelector: - node-role.kubernetes.io/infra: "" - querier: - nodeSelector: - node-role.kubernetes.io/infra: "" - queryFrontend: - nodeSelector: - node-role.kubernetes.io/infra: "" - ruler: - nodeSelector: - node-role.kubernetes.io/infra: "" -# ... -``` - - -Specifies the component pod type that applies to the node selector. -Specifies the pods that are moved to nodes containing the defined label. - - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: -# ... - template: - compactor: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - distributor: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - indexGateway: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - ingester: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - querier: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - queryFrontend: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - ruler: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - gateway: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved -# ... -``` - - -To configure the nodeSelector and tolerations fields of the LokiStack (CR), you can use the oc explain command to view the description and fields for a particular resource: - - -```terminal -$ oc explain lokistack.spec.template -``` - - - -```text -KIND: LokiStack -VERSION: loki.grafana.com/v1 - -RESOURCE: template - -DESCRIPTION: - Template defines the resource/limits/tolerations/nodeselectors per - component - -FIELDS: - compactor - Compactor defines the compaction component spec. 
- - distributor - Distributor defines the distributor component spec. -... -``` - - -For more detailed information, you can add a specific field: - - -```terminal -$ oc explain lokistack.spec.template.compactor -``` - - - -```text -KIND: LokiStack -VERSION: loki.grafana.com/v1 - -RESOURCE: compactor - -DESCRIPTION: - Compactor defines the compaction component spec. - -FIELDS: - nodeSelector - NodeSelector defines the labels required by a node to schedule the - component onto it. -... -``` - - -## Enhanced Reliability and Performance - -Configurations to ensure Loki’s reliability and efficiency in production. - -## Enabling authentication to cloud-based log stores using short-lived tokens - -Workload identity federation enables authentication to cloud-based log stores using short-lived tokens. - -* Use one of the following options to enable authentication: -* If you use the Red Hat OpenShift Container Platform web console to install the Loki Operator, clusters that use short-lived tokens are automatically detected. You are prompted to create roles and supply the data required for the Loki Operator to create a CredentialsRequest object, which populates a secret. -* If you use the OpenShift CLI (`oc`) to install the Loki Operator, you must manually create a Subscription object using the appropriate template for your storage provider, as shown in the following examples. This authentication strategy is only supported for the storage providers indicated. -Example Azure sample subscription - -```yaml -apiVersion: operators.coreos.com/v1alpha1 -kind: Subscription -metadata: - name: loki-operator - namespace: openshift-operators-redhat -spec: - channel: "stable-6.0" - installPlanApproval: Manual - name: loki-operator - source: redhat-operators - sourceNamespace: openshift-marketplace - config: - env: - - name: CLIENTID - value: - - name: TENANTID - value: - - name: SUBSCRIPTIONID - value: - - name: REGION - value: -``` - -Example AWS sample subscription - -```yaml -apiVersion: operators.coreos.com/v1alpha1 -kind: Subscription -metadata: - name: loki-operator - namespace: openshift-operators-redhat -spec: - channel: "stable-6.0" - installPlanApproval: Manual - name: loki-operator - source: redhat-operators - sourceNamespace: openshift-marketplace - config: - env: - - name: ROLEARN - value: -``` - - -## Configuring Loki to tolerate node failure - -The Loki Operator supports setting pod anti-affinity rules to request that pods of the same component are scheduled on different available nodes in the cluster. - -Affinity is a property of pods that controls the nodes on which they prefer to be scheduled. Anti-affinity is a property of pods -that prevents a pod from being scheduled on a node. - -In Red Hat OpenShift Container Platform, pod affinity and pod anti-affinity allow you to constrain which nodes your pod is eligible to be scheduled on based on the key-value labels on other pods. - -The Operator sets default, preferred podAntiAffinity rules for all Loki components, which includes the compactor, distributor, gateway, indexGateway, ingester, querier, queryFrontend, and ruler components. - -You can override the preferred podAntiAffinity settings for Loki components by configuring required settings in the requiredDuringSchedulingIgnoredDuringExecution field: - - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: -# ... - template: - ingester: - podAntiAffinity: - # ... 
- requiredDuringSchedulingIgnoredDuringExecution: 1 - - labelSelector: - matchLabels: 2 - app.kubernetes.io/component: ingester - topologyKey: kubernetes.io/hostname -# ... -``` - - -The stanza to define a required rule. -The key-value pair (label) that must be matched to apply the rule. - -## LokiStack behavior during cluster restarts - -When an Red Hat OpenShift Container Platform cluster is restarted, LokiStack ingestion and the query path continue to operate within the available CPU and memory resources available for the node. This means that there is no downtime for the LokiStack during Red Hat OpenShift Container Platform cluster updates. This behavior is achieved by using PodDisruptionBudget resources. The Loki Operator provisions PodDisruptionBudget resources for Loki, which determine the minimum number of pods that must be available per component to ensure normal operations under certain conditions. - -## Advanced Deployment and Scalability - -Specialized configurations for high availability, scalability, and error handling. - -## Zone aware data replication - -The Loki Operator offers support for zone-aware data replication through pod topology spread constraints. Enabling this feature enhances reliability and safeguards against log loss in the event of a single zone failure. When configuring the deployment size as 1x.extra-small, 1x.small, or 1x.medium, the replication.factor field is automatically set to 2. - -To ensure proper replication, you need to have at least as many availability zones as the replication factor specifies. While it is possible to have more availability zones than the replication factor, having fewer zones can lead to write failures. Each zone should host an equal number of instances for optimal operation. - - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: - replicationFactor: 2 1 - replication: - factor: 2 2 - zones: - - maxSkew: 1 3 - topologyKey: topology.kubernetes.io/zone 4 -``` - - -Deprecated field, values entered are overwritten by replication.factor. -This value is automatically set when deployment size is selected at setup. -The maximum difference in number of pods between any two topology domains. The default is 1, and you cannot specify a value of 0. -Defines zones in the form of a topology key that corresponds to a node label. - -## Recovering Loki pods from failed zones - -In Red Hat OpenShift Container Platform a zone failure happens when specific availability zone resources become inaccessible. Availability zones are isolated areas within a cloud provider’s data center, aimed at enhancing redundancy and fault tolerance. If your Red Hat OpenShift Container Platform cluster is not configured to handle this, a zone failure can lead to service or data loss. - -Loki pods are part of a StatefulSet, and they come with Persistent Volume Claims (PVCs) provisioned by a StorageClass object. Each Loki pod and its PVCs reside in the same zone. When a zone failure occurs in a cluster, the StatefulSet controller automatically attempts to recover the affected pods in the failed zone. - - -[WARNING] ----- -The following procedure will delete the PVCs in the failed zone, and all data contained therein. To avoid complete data loss the replication factor field of the LokiStack CR should always be set to a value greater than 1 to ensure that Loki is replicating. ----- - -* Verify your LokiStack CR has a replication factor greater than 1. 
-* Zone failure detected by the control plane, and nodes in the failed zone are marked by cloud provider integration. - -The StatefulSet controller automatically attempts to reschedule pods in a failed zone. Because the associated PVCs are also in the failed zone, automatic rescheduling to a different zone does not work. You must manually delete the PVCs in the failed zone to allow successful re-creation of the stateful Loki Pod and its provisioned PVC in the new zone. - -1. List the pods in Pending status by running the following command: - -```terminal -$ oc get pods --field-selector status.phase==Pending -n openshift-logging -``` - -Example oc get pods output - -```terminal -NAME READY STATUS RESTARTS AGE 1 -logging-loki-index-gateway-1 0/1 Pending 0 17m -logging-loki-ingester-1 0/1 Pending 0 16m -logging-loki-ruler-1 0/1 Pending 0 16m -``` - -These pods are in Pending status because their corresponding PVCs are in the failed zone. -2. List the PVCs in Pending status by running the following command: - -```terminal -$ oc get pvc -o=json -n openshift-logging | jq '.items[] | select(.status.phase == "Pending") | .metadata.name' -r -``` - -Example oc get pvc output - -```terminal -storage-logging-loki-index-gateway-1 -storage-logging-loki-ingester-1 -wal-logging-loki-ingester-1 -storage-logging-loki-ruler-1 -wal-logging-loki-ruler-1 -``` - -3. Delete the PVC(s) for a pod by running the following command: - -```terminal -$ oc delete pvc -n openshift-logging -``` - -4. Delete the pod(s) by running the following command: - -```terminal -$ oc delete pod -n openshift-logging -``` - - -Once these objects have been successfully deleted, they should automatically be rescheduled in an available zone. - -### Troubleshooting PVC in a terminating state - -The PVCs might hang in the terminating state without being deleted, if PVC metadata finalizers are set to kubernetes.io/pv-protection. Removing the finalizers should allow the PVCs to delete successfully. - -* Remove the finalizer for each PVC by running the command below, then retry deletion. - -```terminal -$ oc patch pvc -p '{"metadata":{"finalizers":null}}' -n openshift-logging -``` - - -## Troubleshooting Loki rate limit errors - -If the Log Forwarder API forwards a large block of messages that exceeds the rate limit to Loki, Loki generates rate limit (429) errors. - -These errors can occur during normal operation. For example, when adding the logging to a cluster that already has some logs, rate limit errors might occur while the logging tries to ingest all of the existing log entries. In this case, if the rate of addition of new logs is less than the total rate limit, the historical data is eventually ingested, and the rate limit errors are resolved without requiring user intervention. - -In cases where the rate limit errors continue to occur, you can fix the issue by modifying the LokiStack custom resource (CR). - - -[IMPORTANT] ----- -The LokiStack CR is not available on Grafana-hosted Loki. This topic does not apply to Grafana-hosted Loki servers. ----- - -* The Log Forwarder API is configured to forward logs to Loki. -* Your system sends a block of messages that is larger than 2 MB to Loki. For example: - -```text -"values":[["1630410392689800468","{\"kind\":\"Event\",\"apiVersion\":\ -....... -...... -...... -...... 
-\"received_at\":\"2021-08-31T11:46:32.800278+00:00\",\"version\":\"1.7.4 1.6.0\"}},\"@timestamp\":\"2021-08-31T11:46:32.799692+00:00\",\"viaq_index_name\":\"audit-write\",\"viaq_msg_id\":\"MzFjYjJkZjItNjY0MC00YWU4LWIwMTEtNGNmM2E5ZmViMGU4\",\"log_type\":\"audit\"}"]]}]} -``` - -* After you enter oc logs -n openshift-logging -l component=collector, the collector logs in your cluster show a line containing one of the following error messages: - -```text -429 Too Many Requests Ingestion rate limit exceeded -``` - -Example Vector error message - -```text -2023-08-25T16:08:49.301780Z WARN sink{component_kind="sink" component_id=default_loki_infra component_type=loki component_name=default_loki_infra}: vector::sinks::util::retries: Retrying after error. error=Server responded with an error: 429 Too Many Requests internal_log_rate_limit=true -``` - - -The error is also visible on the receiving end. For example, in the LokiStack ingester pod: -Example Loki ingester error message - -```text -level=warn ts=2023-08-30T14:57:34.155592243Z caller=grpc_logging.go:43 duration=1.434942ms method=/logproto.Pusher/Push err="rpc error: code = Code(429) desc = entry with timestamp 2023-08-30 14:57:32.012778399 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream -``` - - -* Update the ingestionBurstSize and ingestionRate fields in the LokiStack CR: - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: - limits: - global: - ingestion: - ingestionBurstSize: 16 1 - ingestionRate: 8 2 -# ... -``` - -The ingestionBurstSize field defines the maximum local rate-limited sample size per distributor replica in MB. This value is a hard limit. Set this value to at least the maximum logs size expected in a single push request. Single requests that are larger than the ingestionBurstSize value are not permitted. -The ingestionRate field is a soft limit on the maximum amount of ingested samples per second in MB. Rate limit errors occur if the rate of logs exceeds the limit, but the collector retries sending the logs. As long as the total average is lower than the limit, the system recovers and errors are resolved without user intervention. \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.1/log6x-opentelemetry-data-model-6.1.txt b/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.1/log6x-opentelemetry-data-model-6.1.txt deleted file mode 100644 index 71eb6a76..00000000 --- a/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.1/log6x-opentelemetry-data-model-6.1.txt +++ /dev/null @@ -1,81 +0,0 @@ -# OpenTelemetry data model - - -This document outlines the protocol and semantic conventions for Red Hat OpenShift Logging's OpenTelemetry support with Logging 6.1. - -[IMPORTANT] ----- -The OpenTelemetry Protocol (OTLP) output log forwarder is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. -For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. 
----
-
-# Forwarding and ingestion protocol
-
-Red Hat OpenShift Logging collects and forwards logs to OpenTelemetry endpoints by using the OTLP Specification. OTLP encodes, transports, and delivers telemetry data. You can also deploy Loki storage, which provides an OTLP endpoint to ingest log streams. This document defines the semantic conventions for the logs collected from various OpenShift cluster sources.
-
-# Semantic conventions
-
-The log collector in this solution gathers the following log streams:
-
-* Container logs
-* Cluster node journal logs
-* Cluster node auditd logs
-* Kubernetes and OpenShift API server logs
-* OpenShift Virtual Network (OVN) logs
-
-You can forward these streams according to the semantic conventions defined by OpenTelemetry semantic attributes. The semantic conventions in OpenTelemetry define a resource as an immutable representation of the entity producing telemetry, identified by attributes. For example, a process running in a container includes attributes such as container_name, cluster_id, pod_name, namespace, and possibly deployment or app_name. These attributes are grouped under the resource object, which helps reduce repetition and optimizes log transmission as telemetry data.
-
-In addition to resource attributes, logs might also contain scope attributes specific to instrumentation libraries and log attributes specific to each log entry. These attributes provide greater detail about each log entry and enhance filtering capabilities when querying logs in storage.
-
-The following sections define the attributes that are generally forwarded.
-
-## Log entry structure
-
-All log streams include the following log data fields:
-
-The Applicable Sources column indicates which log sources each field applies to:
-
-* all: This field is present in all logs.
-* container: This field is present in Kubernetes container logs, both application and infrastructure.
-* audit: This field is present in Kubernetes, OpenShift API, and OVN logs.
-* auditd: This field is present in node auditd logs.
-* journal: This field is present in node journal logs.
-
-
-
-## Attributes
-
-Log entries include a set of resource, scope, and log attributes based on their source, as described in the following table.
-
-The Location column specifies the type of attribute:
-
-* resource: Indicates a resource attribute
-* scope: Indicates a scope attribute
-* log: Indicates a log attribute
-
-The Storage column indicates whether the attribute is stored in a LokiStack using the default openshift-logging mode and specifies where the attribute is stored:
-
-* stream label:
-* Enables efficient filtering and querying based on specific labels.
-* Can be labeled as required if the Loki Operator enforces this attribute in the configuration.
-* structured metadata:
-* Allows for detailed filtering and storage of key-value pairs.
-* Enables users to use direct labels for streamlined queries without requiring JSON parsing.
-
-With OTLP, users can filter queries directly by labels rather than using JSON parsing, improving the speed and efficiency of queries.
-
-
-
-
-[NOTE]
-----
-Attributes marked as Compatibility attribute support minimal backward compatibility with the ViaQ data model. These attributes are deprecated and function as a compatibility layer to ensure continued UI functionality. These attributes will remain supported until the Logging UI fully supports the OpenTelemetry counterparts in future releases.
-----
-
-Loki changes the attribute names when persisting them to storage.
The names will be lowercased, and all characters in the set: (.,/,-) will be replaced by underscores (_). For example, k8s.namespace.name will become k8s_namespace_name. - -# Additional resources - -* Semantic Conventions -* Logs Data Model -* General Logs Attributes \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.1/log6x-release-notes-6.1.txt b/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.1/log6x-release-notes-6.1.txt deleted file mode 100644 index d248add1..00000000 --- a/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.1/log6x-release-notes-6.1.txt +++ /dev/null @@ -1,71 +0,0 @@ -# Logging 6.1 - - - -# Logging 6.1.1 Release Notes - -This release includes Logging for Red Hat OpenShift Bug Fix Release 6.1.1. - -## New Features and Enhancements - -* With this update, the Loki Operator supports configuring the workload identity federation on the Google Cloud Platform (GCP) by using the Cluster Credential Operator (CCO) in Red Hat OpenShift Container Platform 4.17 or later. (LOG-6420) - -## Bug Fixes - -* Before this update, the collector was discarding longer audit log messages with the following error message: Internal log [Found line that exceeds max_line_bytes; discarding.]. With this update, the discarding of longer audit messages is avoided by increasing the audit configuration thresholds: The maximum line size, max_line_bytes, is 3145728 bytes. The maximum number of bytes read during a read cycle, max_read_bytes, is 262144 bytes. (LOG-6379) -* Before this update, an input receiver service was repeatedly created and deleted, causing issues with mounting the TLS secrets. With this update, the service is created once and only deleted if it is not defined in the ClusterLogForwarder custom resource. (LOG-6383) -* Before this update, pipeline validation might have entered an infinite loop if a name was a substring of another name. With this update, stricter name equality checks prevent the infinite loop. (LOG-6405) -* Before this update, the collector alerting rules included the summary and message fields. With this update, the collector alerting rules include the summary and description fields. (LOG-6407) -* Before this update, setting up the custom audit inputs in the ClusterLogForwarder custom resource with configured LokiStack output caused errors due to the nil pointer dereference. With this update, the Operator performs the nil checks, preventing such errors. (LOG-6449) -* Before this update, the ValidLokistackOTLPOutputs condition appeared in the status of the ClusterLogForwarder custom resource even when the output type is not LokiStack. With this update, the ValidLokistackOTLPOutputs condition is removed, and the validation messages for the existing output conditions are corrected. (LOG-6469) -* Before this update, the collector did not correctly mount the /var/log/oauth-server/ path, which prevented the collection of the audit logs. With this update, the volume mount is added, and the audit logs are collected as expected. (LOG-6484) -* Before this update, the must-gather script of the Red Hat OpenShift Logging Operator might have failed to gather the LokiStack data. With this update, the must-gather script is fixed, and the LokiStack data is gathered reliably. (LOG-6498) -* Before this update, the collector did not correctly mount the oauth-apiserver audit log file. As a result, such audit logs were not collected. 
With this update, the volume mount is correctly mounted, and the logs are collected as expected. (LOG-6533) - -## CVEs - -* CVE-2019-12900 -* CVE-2024-2511 -* CVE-2024-3596 -* CVE-2024-4603 -* CVE-2024-4741 -* CVE-2024-5535 -* CVE-2024-10963 -* CVE-2024-50602 - -# Logging 6.1.0 Release Notes - -This release includes Logging for Red Hat OpenShift Bug Fix Release 6.1.0. - -## New Features and Enhancements - -### Log Collection - -* This enhancement adds the source iostream to the attributes sent from collected container logs. The value is set to either stdout or stderr based on how the collector received it. (LOG-5292) -* With this update, the default memory limit for the collector increases from 1024 Mi to 2048 Mi. Users should adjust resource limits based on their cluster’s specific needs and specifications. (LOG-6072) -* With this update, users can now set the syslog output delivery mode of the ClusterLogForwarder CR to either AtLeastOnce or AtMostOnce. (LOG-6355) - -### Log Storage - -* With this update, the new 1x.pico LokiStack size supports clusters with fewer workloads and lower log volumes (up to 50GB/day). (LOG-5939) - -## Technology Preview - - -[IMPORTANT] ----- -{FeatureName} is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. -For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. ----- - -* With this update, OpenTelemetry logs can now be forwarded using the OTel (OpenTelemetry) data model to a Red Hat Managed LokiStack instance. To enable this feature, add the observability.openshift.io/tech-preview-otlp-output: "enabled" annotation to your ClusterLogForwarder configuration. For additional configuration information, see OTLP Forwarding. -* With this update, a dataModel field has been added to the lokiStack output specification. Set the dataModel to Otel to configure log forwarding using the OpenTelemetry data format. The default is set to Viaq. For information about data mapping see OTLP Specification. - -## Bug Fixes - -None. - -## CVEs - -* CVE-2024-6119 -* CVE-2024-6232 \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.1/log6x-visual-6.1.txt b/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.1/log6x-visual-6.1.txt deleted file mode 100644 index c669b518..00000000 --- a/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.1/log6x-visual-6.1.txt +++ /dev/null @@ -1,5 +0,0 @@ -# Visualization for logging - - - -Visualization for logging is provided by deploying the Logging UI Plugin of the Cluster Observability Operator, which requires Operator installation. \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.2/log62-cluster-logging-support.txt b/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.2/log62-cluster-logging-support.txt deleted file mode 100644 index d4c8e815..00000000 --- a/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.2/log62-cluster-logging-support.txt +++ /dev/null @@ -1,141 +0,0 @@ -# Support - - -Only the configuration options described in this documentation are supported for logging. 
-Do not use any other configuration options, as they are unsupported. Configuration paradigms might change across Red Hat OpenShift Container Platform releases, and such cases can only be handled gracefully if all configuration possibilities are controlled. If you use configurations other than those described in this documentation, your changes will be overwritten, because Operators are designed to reconcile any differences. - -[NOTE] ----- -If you must perform configurations not described in the Red Hat OpenShift Container Platform documentation, you must set your Red Hat OpenShift Logging Operator to Unmanaged. An unmanaged logging instance is not supported and does not receive updates until you return its status to Managed. ----- - -[NOTE] ----- -Logging is provided as an installable component, with a distinct release cycle from the core Red Hat OpenShift Container Platform. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility. ----- -Loki is a horizontally scalable, highly available, multi-tenant log aggregation system offered as a GA log store for logging for Red Hat OpenShift that can be visualized with the OpenShift Observability UI. The Loki configuration provided by OpenShift Logging is a short-term log store designed to enable users to perform fast troubleshooting with the collected logs. For that purpose, the logging for Red Hat OpenShift configuration of Loki has short-term storage, and is optimized for very recent queries. - -[IMPORTANT] ----- -For long-term storage or queries over a long time period, users should look to log stores external to their cluster. Loki sizing is only tested and supported for short term storage, for a maximum of 30 days. ----- -Logging for Red Hat OpenShift is an opinionated collector and normalizer of application, infrastructure, and audit logs. It is intended to be used for forwarding logs to various supported systems. -Logging is not: -* A high scale log collection system -* Security Information and Event Monitoring (SIEM) compliant -* A "bring your own" (BYO) log collector configuration -* Historical or long term log retention or storage -* A guaranteed log sink -* Secure storage - audit logs are not stored by default - -# Supported API custom resource definitions - -The following table describes the supported Logging APIs. - - - -# Unsupported configurations - -You must set the Red Hat OpenShift Logging Operator to the Unmanaged state to modify the following components: - -* The collector configuration file -* The collector daemonset - -Explicitly unsupported cases include: - -* Configuring the logging collector using environment variables. You cannot use environment variables to modify the log collector. -* Configuring how the log collector normalizes logs. You cannot modify default log normalization. - -# Support policy for unmanaged Operators - -The management state of an Operator determines whether an Operator is actively -managing the resources for its related component in the cluster as designed. If -an Operator is set to an unmanaged state, it does not respond to changes in -configuration nor does it receive updates. - -While this can be helpful in non-production clusters or during debugging, -Operators in an unmanaged state are unsupported and the cluster administrator -assumes full control of the individual component configurations and upgrades. 
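For the logging components, the document's later sections note that the unmanaged state is reached through the managementState field of the ClusterLogForwarder custom resource, as described under the individual Operator configuration method below. A minimal sketch, assuming a forwarder named collector in the openshift-logging namespace (both names are illustrative):

```yaml
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
  namespace: openshift-logging
spec:
  managementState: Unmanaged # the Operator stops reconciling this resource until it is set back to Managed
# ...
```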
- -An Operator can be set to an unmanaged state using the following methods: - -* Individual Operator configuration - -Individual Operators have a managementState parameter in their configuration. -This can be accessed in different ways, depending on the Operator. For example, -the Red Hat OpenShift Logging Operator accomplishes this by modifying a custom resource -(CR) that it manages, while the Cluster Samples Operator uses a cluster-wide -configuration resource. - -Changing the managementState parameter to Unmanaged means that the Operator -is not actively managing its resources and will take no action related to the -related component. Some Operators might not support this management state as it -might damage the cluster and require manual recovery. - -[WARNING] ----- -Changing individual Operators to the Unmanaged state renders that particular -component and functionality unsupported. Reported issues must be reproduced in -Managed state for support to proceed. ----- -* Cluster Version Operator (CVO) overrides - -The spec.overrides parameter can be added to the CVO’s configuration to allow -administrators to provide a list of overrides to the CVO’s behavior for a -component. Setting the spec.overrides[].unmanaged parameter to true for a -component blocks cluster upgrades and alerts the administrator after a CVO -override has been set: - -```terminal -Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing. -``` - - -[WARNING] ----- -Setting a CVO override puts the entire cluster in an unsupported state. Reported -issues must be reproduced after removing any overrides for support to proceed. ----- - -# Collecting logging data for Red Hat Support - -When opening a support case, it is helpful to provide debugging information about your cluster to Red Hat Support. - -You can use the must-gather tool to collect diagnostic information for project-level resources, cluster-level resources, and each of the logging components. -For prompt support, supply diagnostic information for both Red Hat OpenShift Container Platform and logging. - -## About the must-gather tool - -The oc adm must-gather CLI command collects the information from your cluster that is most likely needed for debugging issues. - -For your logging, must-gather collects the following information: - -* Project-level resources, including pods, configuration maps, service accounts, roles, role bindings, and events at the project level -* Cluster-level resources, including nodes, roles, and role bindings at the cluster level -* OpenShift Logging resources in the openshift-logging and openshift-operators-redhat namespaces, including health status for the log collector, the log store, and the log visualizer - -When you run oc adm must-gather, a new pod is created on the cluster. The data is collected on that pod and saved in a new directory that starts with must-gather.local. This directory is created in the current working directory. - -## Collecting logging data - -You can use the oc adm must-gather CLI command to collect information about logging. - -To collect logging information with must-gather: - -1. Navigate to the directory where you want to store the must-gather information. -2. 
Run the oc adm must-gather command against the logging image: - -```terminal -$ oc adm must-gather --image=$(oc -n openshift-logging get deployment.apps/cluster-logging-operator -o jsonpath='{.spec.template.spec.containers[?(@.name == "cluster-logging-operator")].image}') -``` - - -The must-gather tool creates a new directory that starts with must-gather.local within the current directory. For example: -must-gather.local.4157245944708210408. -3. Create a compressed file from the must-gather directory that was just created. For example, on a computer that uses a Linux operating system, run the following command: - -```terminal -$ tar -cvaf must-gather.tar.gz must-gather.local.4157245944708210408 -``` - -4. Attach the compressed file to your support case on the Red Hat Customer Portal. \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.2/log6x-about-6.2.txt b/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.2/log6x-about-6.2.txt deleted file mode 100644 index 2b6545ea..00000000 --- a/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.2/log6x-about-6.2.txt +++ /dev/null @@ -1,330 +0,0 @@ -# Logging 6.2 - - -The ClusterLogForwarder custom resource (CR) is the central configuration point for log collection and forwarding. - -# Inputs and outputs - -Inputs specify the sources of logs to be forwarded. Logging provides the following built-in input types that select logs from different parts of your cluster: - -* application -* receiver -* infrastructure -* audit - -You can also define custom inputs based on namespaces or pod labels to fine-tune log selection. - -Outputs define the destinations where logs are sent. Each output type has its own set of configuration options, allowing you to customize the behavior and authentication settings. - -# Receiver input type - -The receiver input type enables the Logging system to accept logs from external sources. It supports two formats for receiving logs: http and syslog. - -The ReceiverSpec field defines the configuration for a receiver input. - -# Pipelines and filters - -Pipelines determine the flow of logs from inputs to outputs. A pipeline consists of one or more input refs, output refs, and optional filter refs. You can use filters to transform or drop log messages within a pipeline. The order of filters matters, as they are applied sequentially, and earlier filters can prevent log messages from reaching later stages. - -# Operator behavior - -The Cluster Logging Operator manages the deployment and configuration of the collector based on the managementState field of the ClusterLogForwarder resource: - -* When set to Managed (default), the Operator actively manages the logging resources to match the configuration defined in the spec. -* When set to Unmanaged, the Operator does not take any action, allowing you to manually manage the logging components. - -# Validation - -Logging includes extensive validation rules and default values to ensure a smooth and error-free configuration experience. The ClusterLogForwarder resource enforces validation checks on required fields, dependencies between fields, and the format of input values. Default values are provided for certain fields, reducing the need for explicit configuration in common scenarios. 
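A practical way to see the outcome of these validation checks is to read the status conditions that the Operator writes back to the resource. The following command is a sketch, assuming a ClusterLogForwarder named collector in the openshift-logging namespace, as used in the quick-start examples that follow:

```terminal
$ oc get clusterlogforwarder collector -n openshift-logging -o jsonpath='{.status.conditions}'
```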
- -# Quick start - -OpenShift Logging supports two data models: - -* ViaQ (General Availability) -* OpenTelemetry (Technology Preview) - -You can select either of these data models based on your requirement by configuring the lokiStack.dataModel field in the ClusterLogForwarder. ViaQ is the default data model when forwarding logs to LokiStack. - - -[NOTE] ----- -In future releases of OpenShift Logging, the default data model will change from ViaQ to OpenTelemetry. ----- - -## Quick start with ViaQ - -To use the default ViaQ data model, follow these steps: - -* You have access to an Red Hat OpenShift Container Platform cluster with cluster-admin permissions. -* You installed the OpenShift CLI (`oc`). -* You have access to a supported object store. For example, AWS S3, Google Cloud Storage, Azure, Swift, Minio, or OpenShift Data Foundation. - -1. Install the Red Hat OpenShift Logging Operator, Loki Operator, and Cluster Observability Operator (COO) from OperatorHub. -2. Create a LokiStack custom resource (CR) in the openshift-logging namespace: - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: - managementState: Managed - size: 1x.extra-small - storage: - schemas: - - effectiveDate: '2024-10-01' - version: v13 - secret: - name: logging-loki-s3 - type: s3 - storageClassName: gp3-csi - tenants: - mode: openshift-logging -``` - - -[NOTE] ----- -Ensure that the logging-loki-s3 secret is created beforehand. The contents of this secret vary depending on the object storage in use. For more information, see Secrets and TLS Configuration. ----- -3. Create a service account for the collector: - -```terminal -$ oc create sa collector -n openshift-logging -``` - -4. Allow the collector's service account to write data to the LokiStack CR: - -```terminal -$ oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector -n openshift-logging -``` - - -[NOTE] ----- -The ClusterRole resource is created automatically during the Cluster Logging Operator installation and does not need to be created manually. ----- -5. To collect logs, use the service account of the collector by running the following commands: - -```terminal -$ oc adm policy add-cluster-role-to-user collect-application-logs -z collector -n openshift-logging -``` - - -```terminal -$ oc adm policy add-cluster-role-to-user collect-audit-logs -z collector -n openshift-logging -``` - - -```terminal -$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector -n openshift-logging -``` - - -[NOTE] ----- -The example binds the collector to all three roles (application, infrastructure, and audit), but by default, only application and infrastructure logs are collected. To collect audit logs, update your ClusterLogForwarder configuration to include them. Assign roles based on the specific log types required for your environment. ----- -6. Create a UIPlugin CR to enable the Log section in the Observe tab: - -```yaml -apiVersion: observability.openshift.io/v1alpha1 -kind: UIPlugin -metadata: - name: logging -spec: - type: Logging - logging: - lokiStack: - name: logging-loki -``` - -7. 
Create a ClusterLogForwarder CR to configure log forwarding: - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - name: collector - namespace: openshift-logging -spec: - serviceAccount: - name: collector - outputs: - - name: default-lokistack - type: lokiStack - lokiStack: - authentication: - token: - from: serviceAccount - target: - name: logging-loki - namespace: openshift-logging - tls: - ca: - key: service-ca.crt - configMapName: openshift-service-ca.crt - pipelines: - - name: default-logstore - inputRefs: - - application - - infrastructure - outputRefs: - - default-lokistack -``` - - -[NOTE] ----- -The dataModel field is optional and left unset (dataModel: "") by default. This allows the Cluster Logging Operator (CLO) to automatically select a data model. Currently, the CLO defaults to the ViaQ model when the field is unset, but this will change in future releases. Specifying dataModel: ViaQ ensures the configuration remains compatible if the default changes. ----- - -* Verify that logs are visible in the Log section of the Observe tab in the Red Hat OpenShift Container Platform web console. - -## Quick start with OpenTelemetry - - -[IMPORTANT] ----- -{FeatureName} is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. -For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. ----- - -To configure OTLP ingestion and enable the OpenTelemetry data model, follow these steps: - -* You have access to an Red Hat OpenShift Container Platform cluster with cluster-admin permissions. -* You have installed the OpenShift CLI (`oc`). -* You have access to a supported object store. For example, AWS S3, Google Cloud Storage, Azure, Swift, Minio, or OpenShift Data Foundation. - -1. Install the Red Hat OpenShift Logging Operator, Loki Operator, and Cluster Observability Operator (COO) from OperatorHub. -2. Create a LokiStack custom resource (CR) in the openshift-logging namespace: - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: - managementState: Managed - size: 1x.extra-small - storage: - schemas: - - effectiveDate: '2024-10-01' - version: v13 - secret: - name: logging-loki-s3 - type: s3 - storageClassName: gp3-csi - tenants: - mode: openshift-logging -``` - - -[NOTE] ----- -Ensure that the logging-loki-s3 secret is created beforehand. The contents of this secret vary depending on the object storage in use. For more information, see "Secrets and TLS Configuration". ----- -3. Create a service account for the collector: - -```terminal -$ oc create sa collector -n openshift-logging -``` - -4. Allow the collector's service account to write data to the LokiStack CR: - -```terminal -$ oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector -n openshift-logging -``` - - -[NOTE] ----- -The ClusterRole resource is created automatically during the Cluster Logging Operator installation and does not need to be created manually. ----- -5. 
To collect logs, use the service account of the collector by running the following commands: - -```terminal -$ oc adm policy add-cluster-role-to-user collect-application-logs -z collector -n openshift-logging -``` - - -```terminal -$ oc adm policy add-cluster-role-to-user collect-audit-logs -z collector -n openshift-logging -``` - - -```terminal -$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector -n openshift-logging -``` - - -[NOTE] ----- -The example binds the collector to all three roles (application, infrastructure, and audit). By default, only application and infrastructure logs are collected. To collect audit logs, update your ClusterLogForwarder configuration to include them. Assign roles based on the specific log types required for your environment. ----- -6. Create a UIPlugin CR to enable the Log section in the Observe tab: - -```yaml -apiVersion: observability.openshift.io/v1alpha1 -kind: UIPlugin -metadata: - name: logging -spec: - type: Logging - logging: - lokiStack: - name: logging-loki -``` - -7. Create a ClusterLogForwarder CR to configure log forwarding: - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - name: collector - namespace: openshift-logging - annotations: - observability.openshift.io/tech-preview-otlp-output: "enabled" 1 -spec: - serviceAccount: - name: collector - outputs: - - name: loki-otlp - type: lokiStack 2 - lokiStack: - target: - name: logging-loki - namespace: openshift-logging - dataModel: Otel 3 - authentication: - token: - from: serviceAccount - tls: - ca: - key: service-ca.crt - configMapName: openshift-service-ca.crt - pipelines: - - name: my-pipeline - inputRefs: - - application - - infrastructure - outputRefs: - - loki-otlp -``` - -Use the annotation to enable the Otel data model, which is a Technology Preview feature. -Define the output type as lokiStack. -Specifies the OpenTelemetry data model. - -[NOTE] ----- -You cannot use lokiStack.labelKeys when dataModel is Otel. To achieve similar functionality when dataModel is Otel, refer to "Configuring LokiStack for OTLP data ingestion". ----- - -* To verify that OTLP is functioning correctly, complete the following steps: -1. In the OpenShift web console, click Observe -> OpenShift Logging -> LokiStack -> Writes. -2. Check the Distributor - Structured Metadata section. \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.2/log6x-about-logging.txt b/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.2/log6x-about-logging.txt new file mode 100644 index 00000000..e7f9a947 --- /dev/null +++ b/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.2/log6x-about-logging.txt @@ -0,0 +1,16 @@ +# About Logging 6.2 + + + +As a cluster administrator, you can deploy logging on an Red Hat OpenShift Container Platform cluster, and use it to collect and aggregate node system audit logs, application container logs, and infrastructure logs. + +You can use logging to perform the following tasks: + +* Forward logs to your chosen log outputs, including on-cluster, Red Hat managed log storage. +* Visualize your log data in the Red Hat OpenShift Container Platform web console. + + +[NOTE] +---- +Because logging releases on a different cadence from Red Hat OpenShift Container Platform, the logging documentation is available as a separate documentation set at Red Hat OpenShift Logging. 
+---- \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.2/log6x-clf-6.2.txt b/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.2/log6x-clf-6.2.txt deleted file mode 100644 index d1c4390a..00000000 --- a/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.2/log6x-clf-6.2.txt +++ /dev/null @@ -1,988 +0,0 @@ -# Configuring log forwarding - - -The ClusterLogForwarder (CLF) allows users to configure forwarding of logs to various destinations. It provides a flexible way to select log messages from different sources, send them through a pipeline that can transform or filter them, and forward them to one or more outputs. -* Selects log messages using inputs -* Forwards logs to external destinations using outputs -* Filters, transforms, and drops log messages using filters -* Defines log forwarding pipelines connecting inputs, filters and outputs - -# Setting up log collection - -This release of Cluster Logging requires administrators to explicitly grant log collection permissions to the service account associated with ClusterLogForwarder. This was not required in previous releases for the legacy logging scenario consisting of a ClusterLogging and, optionally, a ClusterLogForwarder.logging.openshift.io resource. - -The Red Hat OpenShift Logging Operator provides collect-audit-logs, collect-application-logs, and collect-infrastructure-logs cluster roles, which enable the collector to collect audit logs, application logs, and infrastructure logs respectively. - -Setup log collection by binding the required cluster roles to your service account. - -## Legacy service accounts - -To use the existing legacy service account logcollector, create the following ClusterRoleBinding: - - -```terminal -$ oc adm policy add-cluster-role-to-user collect-application-logs system:serviceaccount:openshift-logging:logcollector -``` - - - -```terminal -$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs system:serviceaccount:openshift-logging:logcollector -``` - - -Additionally, create the following ClusterRoleBinding if collecting audit logs: - - -```terminal -$ oc adm policy add-cluster-role-to-user collect-audit-logs system:serviceaccount:openshift-logging:logcollector -``` - - -## Creating service accounts - -* The Red Hat OpenShift Logging Operator is installed in the openshift-logging namespace. -* You have administrator permissions. - -1. Create a service account for the collector. If you want to write logs to storage that requires a token for authentication, you must include a token in the service account. -2. Bind the appropriate cluster roles to the service account: -Example binding command - -```terminal -$ oc adm policy add-cluster-role-to-user system:serviceaccount:: -``` - - -### Cluster Role Binding for your Service Account - -The role_binding.yaml file binds the ClusterLogging operator’s ClusterRole to a specific ServiceAccount, allowing it to manage Kubernetes resources cluster-wide. - - -```yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRoleBinding -metadata: - name: manager-rolebinding -roleRef: 1 - apiGroup: rbac.authorization.k8s.io 2 - kind: ClusterRole 3 - name: cluster-logging-operator 4 -subjects: 5 - - kind: ServiceAccount 6 - name: cluster-logging-operator 7 - namespace: openshift-logging 8 -``` - - -roleRef: References the ClusterRole to which the binding applies. -apiGroup: Indicates the RBAC API group, specifying that the ClusterRole is part of Kubernetes' RBAC system. 
-kind: Specifies that the referenced role is a ClusterRole, which applies cluster-wide. -name: The name of the ClusterRole being bound to the ServiceAccount, here cluster-logging-operator. -subjects: Defines the entities (users or service accounts) that are being granted the permissions from the ClusterRole. -kind: Specifies that the subject is a ServiceAccount. -Name: The name of the ServiceAccount being granted the permissions. -namespace: Indicates the namespace where the ServiceAccount is located. - -### Writing application logs - -The write-application-logs-clusterrole.yaml file defines a ClusterRole that grants permissions to write application logs to the Loki logging application. - - -```yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole -metadata: - name: cluster-logging-write-application-logs -rules: 1 - - apiGroups: 2 - - loki.grafana.com 3 - resources: 4 - - application 5 - resourceNames: 6 - - logs 7 - verbs: 8 - - create 9 -``` - - -rules: Specifies the permissions granted by this ClusterRole. -apiGroups: Refers to the API group loki.grafana.com, which relates to the Loki logging system. -loki.grafana.com: The API group for managing Loki-related resources. -resources: The resource type that the ClusterRole grants permission to interact with. -application: Refers to the application resources within the Loki logging system. -resourceNames: Specifies the names of resources that this role can manage. -logs: Refers to the log resources that can be created. -verbs: The actions allowed on the resources. -create: Grants permission to create new logs in the Loki system. - -### Writing audit logs - -The write-audit-logs-clusterrole.yaml file defines a ClusterRole that grants permissions to create audit logs in the Loki logging system. - - -```yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole -metadata: - name: cluster-logging-write-audit-logs -rules: 1 - - apiGroups: 2 - - loki.grafana.com 3 - resources: 4 - - audit 5 - resourceNames: 6 - - logs 7 - verbs: 8 - - create 9 -``` - - -rules: Defines the permissions granted by this ClusterRole. -apiGroups: Specifies the API group loki.grafana.com. -loki.grafana.com: The API group responsible for Loki logging resources. -resources: Refers to the resource type this role manages, in this case, audit. -audit: Specifies that the role manages audit logs within Loki. -resourceNames: Defines the specific resources that the role can access. -logs: Refers to the logs that can be managed under this role. -verbs: The actions allowed on the resources. -create: Grants permission to create new audit logs. - -### Writing infrastructure logs - -The write-infrastructure-logs-clusterrole.yaml file defines a ClusterRole that grants permission to create infrastructure logs in the Loki logging system. - - -```yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole -metadata: - name: cluster-logging-write-infrastructure-logs -rules: 1 - - apiGroups: 2 - - loki.grafana.com 3 - resources: 4 - - infrastructure 5 - resourceNames: 6 - - logs 7 - verbs: 8 - - create 9 -``` - - -rules: Specifies the permissions this ClusterRole grants. -apiGroups: Specifies the API group for Loki-related resources. -loki.grafana.com: The API group managing the Loki logging system. -resources: Defines the resource type that this role can interact with. -infrastructure: Refers to infrastructure-related resources that this role manages. -resourceNames: Specifies the names of resources this role can manage. 
-logs: Refers to the log resources related to infrastructure. -verbs: The actions permitted by this role. -create: Grants permission to create infrastructure logs in the Loki system. - -### ClusterLogForwarder editor role - -The clusterlogforwarder-editor-role.yaml file defines a ClusterRole that allows users to manage ClusterLogForwarders in OpenShift. - - -```yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole -metadata: - name: clusterlogforwarder-editor-role -rules: 1 - - apiGroups: 2 - - observability.openshift.io 3 - resources: 4 - - clusterlogforwarders 5 - verbs: 6 - - create 7 - - delete 8 - - get 9 - - list 10 - - patch 11 - - update 12 - - watch 13 -``` - - -rules: Specifies the permissions this ClusterRole grants. -apiGroups: Refers to the OpenShift-specific API group -obervability.openshift.io: The API group for managing observability resources, like logging. -resources: Specifies the resources this role can manage. -clusterlogforwarders: Refers to the log forwarding resources in OpenShift. -verbs: Specifies the actions allowed on the ClusterLogForwarders. -create: Grants permission to create new ClusterLogForwarders. -delete: Grants permission to delete existing ClusterLogForwarders. -get: Grants permission to retrieve information about specific ClusterLogForwarders. -list: Allows listing all ClusterLogForwarders. -patch: Grants permission to partially modify ClusterLogForwarders. -update: Grants permission to update existing ClusterLogForwarders. -watch: Grants permission to monitor changes to ClusterLogForwarders. - -# Modifying log level in collector - -To modify the log level in the collector, you can set the observability.openshift.io/log-level annotation to trace, debug, info, warn, error, and off. - - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - name: collector - annotations: - observability.openshift.io/log-level: debug -# ... -``` - - -# Managing the Operator - -The ClusterLogForwarder resource has a managementState field that controls whether the operator actively manages its resources or leaves them Unmanaged: - -Managed:: (default) The operator will drive the logging resources to match the desired state in the CLF spec. -Unmanaged:: The operator will not take any action related to the logging components. - -This allows administrators to temporarily pause log forwarding by setting managementState to Unmanaged. - -# Structure of the ClusterLogForwarder - -The CLF has a spec section that contains the following key components: - -Inputs:: Select log messages to be forwarded. Built-in input types application, infrastructure and audit forward logs from different parts of the cluster. You can also define custom inputs. -Outputs:: Define destinations to forward logs to. Each output has a unique name and type-specific configuration. -Pipelines:: Define the path logs take from inputs, through filters, to outputs. Pipelines have a unique name and consist of a list of input, output and filter names. -Filters:: Transform or drop log messages in the pipeline. Users can define filters that match certain log fields and drop or modify the messages. Filters are applied in the order specified in the pipeline. - -## Inputs - -Inputs are configured in an array under spec.inputs. There are three built-in input types: - -application:: Selects logs from all application containers, excluding those in infrastructure namespaces. 
-infrastructure:: Selects logs from nodes and from infrastructure components running in the following namespaces: -* default -* kube -* openshift -* Containing the kube- or openshift- prefix -audit:: Selects logs from the OpenShift API server audit logs, Kubernetes API server audit logs, ovn audit logs, and node audit logs from auditd. - -Users can define custom inputs of type application that select logs from specific namespaces or using pod labels. - -## Outputs - -Outputs are configured in an array under spec.outputs. Each output must have a unique name and a type. Supported types are: - -azureMonitor:: Forwards logs to Azure Monitor. -cloudwatch:: Forwards logs to AWS CloudWatch. -elasticsearch:: Forwards logs to an external Elasticsearch instance. -googleCloudLogging:: Forwards logs to Google Cloud Logging. -http:: Forwards logs to a generic HTTP endpoint. -kafka:: Forwards logs to a Kafka broker. -loki:: Forwards logs to a Loki logging backend. -lokistack:: Forwards logs to the logging supported combination of Loki and web proxy with Red Hat OpenShift Container Platform authentication integration. LokiStack's proxy uses Red Hat OpenShift Container Platform authentication to enforce multi-tenancy -otlp:: Forwards logs using the OpenTelemetry Protocol. -splunk:: Forwards logs to Splunk. -syslog:: Forwards logs to an external syslog server. - -Each output type has its own configuration fields. - -# Configuring OTLP output - -Cluster administrators can use the OpenTelemetry Protocol (OTLP) output to collect and forward logs to OTLP receivers. The OTLP output uses the specification defined by the OpenTelemetry Observability framework to send data over HTTP with JSON encoding. - - -[IMPORTANT] ----- -{FeatureName} is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. -For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. ----- - -* Create or edit a ClusterLogForwarder custom resource (CR) to enable forwarding using OTLP by adding the following annotation: -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - annotations: - observability.openshift.io/tech-preview-otlp-output: "enabled" 1 - name: clf-otlp -spec: - serviceAccount: - name: - outputs: - - name: otlp - type: otlp - otlp: - tuning: - compression: gzip - deliveryMode: AtLeastOnce - maxRetryDuration: 20 - maxWrite: 10M - minRetryDuration: 5 - url: 2 - pipelines: - - inputRefs: - - application - - infrastructure - - audit - name: otlp-logs - outputRefs: - - otlp -``` - -Use this annotation to enable the OpenTelemetry Protocol (OTLP) output, which is a Technology Preview feature. -This URL must be absolute and is a placeholder for the OTLP endpoint where logs are sent. - - -[NOTE] ----- -The OTLP output uses the OpenTelemetry data model, which is different from the ViaQ data model that is used by other output types. It adheres to the OTLP using OpenTelemetry Semantic Conventions defined by the OpenTelemetry Observability framework. ----- - -## Pipelines - -Pipelines are configured in an array under spec.pipelines. 
Each pipeline must have a unique name and consists of: - -inputRefs:: Names of inputs whose logs should be forwarded to this pipeline. -outputRefs:: Names of outputs to send logs to. -filterRefs:: (optional) Names of filters to apply. - -The order of filterRefs matters, as they are applied sequentially. Earlier filters can drop messages that will not be processed by later filters. - -## Filters - -Filters are configured in an array under spec.filters. They can match incoming log messages based on the value of structured fields and modify or drop them. - -Administrators can configure the following types of filters: - -# Enabling multi-line exception detection - -Enables multi-line error detection of container logs. - - -[WARNING] ----- -Enabling this feature could have performance implications and may require additional computing resources or alternate logging solutions. ----- - -Log parsers often incorrectly identify separate lines of the same exception as separate exceptions. This leads to extra log entries and an incomplete or inaccurate view of the traced information. - - -```java -java.lang.NullPointerException: Cannot invoke "String.toString()" because "" is null - at testjava.Main.handle(Main.java:47) - at testjava.Main.printMe(Main.java:19) - at testjava.Main.main(Main.java:10) -``` - - -* To enable logging to detect multi-line exceptions and reassemble them into a single log entry, ensure that the ClusterLogForwarder Custom Resource (CR) contains a detectMultilineErrors field under the .spec.filters. - - -```yaml -apiVersion: "observability.openshift.io/v1" -kind: ClusterLogForwarder -metadata: - name: - namespace: -spec: - serviceAccount: - name: - filters: - - name: - type: detectMultilineException - pipelines: - - inputRefs: - - - name: - filterRefs: - - - outputRefs: - - -``` - - -## Details - -When log messages appear as a consecutive sequence forming an exception stack trace, they are combined into a single, unified log record. The first log message’s content is replaced with the concatenated content of all the message fields in the sequence. - -The collector supports the following languages: - -* Java -* JS -* Ruby -* Python -* Golang -* PHP -* Dart - -# Forwarding logs over HTTP - -To enable forwarding logs over HTTP, specify http as the output type in the ClusterLogForwarder custom resource (CR). - -* Create or edit the ClusterLogForwarder CR using the template below: -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - name: - namespace: -spec: - managementState: Managed - outputs: - - name: - type: http - http: - headers: 1 - h1: v1 - h2: v2 - authentication: - username: - key: username - secretName: - password: - key: password - secretName: - timeout: 300 - proxyURL: 2 - url: 3 - tls: - insecureSkipVerify: 4 - ca: - key: - secretName: 5 - pipelines: - - inputRefs: - - application - name: pipe1 - outputRefs: - - 6 - serviceAccount: - name: 7 -``` - -Additional headers to send with the log record. -Optional: URL of the HTTP/HTTPS proxy that should be used to forward logs over http or https from this output. This setting overrides any default proxy settings for the cluster or the node. -Destination address for logs. -Values are either true or false. -Secret name for destination credentials. -This value should be the same as the output name. -The name of your service account. 
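The procedure above ends with editing the template; as with the other ClusterLogForwarder examples in this document, the finished file still needs to be created on the cluster. A minimal sketch, where clf-http.yaml is an illustrative file name for the edited template:

```terminal
$ oc apply -f clf-http.yaml
```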
- -# Forwarding logs using the syslog protocol - -You can use the syslog RFC3164 or RFC5424 protocol to send a copy of your logs to an external log aggregator that is configured to accept the protocol instead of, or in addition to, the default Elasticsearch log store. You are responsible for configuring the external log aggregator, such as a syslog server, to receive the logs from Red Hat OpenShift Container Platform. - -To configure log forwarding using the syslog protocol, you must create a ClusterLogForwarder custom resource (CR) with one or more outputs to the syslog servers, and pipelines that use those outputs. The syslog output can use a UDP, TCP, or TLS connection. - -* You must have a logging server that is configured to receive the logging data using the specified protocol or format. - -1. Create or edit a YAML file that defines the ClusterLogForwarder CR object: - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - name: collector -spec: - managementState: Managed - outputs: - - name: rsyslog-east 1 - syslog: - appName: 2 - enrichment: KubernetesMinimal - facility: 3 - msgId: 4 - payloadKey: 5 - procId: 6 - rfc: 7 - severity: informational 8 - tuning: - deliveryMode: 9 - url: 10 - tls: 11 - ca: - key: ca-bundle.crt - secretName: syslog-secret - type: syslog - pipelines: - - inputRefs: 12 - - application - name: syslog-east 13 - outputRefs: - - rsyslog-east - serviceAccount: 14 - name: logcollector -``` - -Specify a name for the output. -Optional: Specify the value for the APP-NAME part of the syslog message header. The value must conform with The Syslog Protocol. The value can be a combination of static and dynamic values consisting of field paths followed by ||, and then followed by another field path or a static value. The maximum length of the final values is truncated to 48 characters. You must encase a dynamic value curly brackets and the value must be followed with a static fallback value separated with ||. Static values can only contain alphanumeric characters along with dashes, underscores, dots and forward slashes. Example value: -{.||"none"}. -Optional: Specify the value for Facility part of the syslog-msg header. -Optional: Specify the value for MSGID part of the syslog-msg header. The value can be a combination of static and dynamic values consisting of field paths followed by ||, and then followed by another field path or a static value. The maximum length of the final values is truncated to 32 characters. You must encase a dynamic value curly brackets and the value must be followed with a static fallback value separated with ||. Static values can only contain alphanumeric characters along with dashes, underscores, dots and forward slashes. Example value: -{.||"none"}. -Optional: Specify the record field to use as the payload. The payloadKey value must be a single field path encased in single curly brackets {}. Example: {.}. -Optional: Specify the value for the PROCID part of the syslog message header. The value must conform with The Syslog Protocol. The value can be a combination of static and dynamic values consisting of field paths followed by ||, and then followed by another field path or a static value. The maximum length of the final values is truncated to 48 characters. You must encase a dynamic value curly brackets and the value must be followed with a static fallback value separated with ||. Static values can only contain alphanumeric characters along with dashes, underscores, dots and forward slashes. 
Example value: -{.||"none"}. -Optional: Set the RFC that the generated messages conform to. The value can be RFC3164 or RFC5424. -Optional: Set the severity level for the message. For more information, see The Syslog Protocol. -Optional: Set the delivery mode for log forwarding. The value can be either AtLeastOnce, or AtMostOnce. -Specify the absolute URL with a scheme. Valid schemes are: tcp, tls, and udp. For example: tls://syslog-receiver.example.com:6514. -Specify the settings for controlling options of the transport layer security (TLS) client connections. -Specify which log types to forward by using the pipeline: application, infrastructure, or audit. -Specify a name for the pipeline. -The name of your service account. -2. Create the CR object: - -```terminal -$ oc create -f .yaml -``` - - -## Adding log source information to the message output - -You can add namespace_name, pod_name, and container_name elements to the message field of the record by adding the enrichment field to your ClusterLogForwarder custom resource (CR). - - -```yaml -# ... - spec: - outputs: - - name: syslogout - syslog: - enrichment: KubernetesMinimal: true - facility: user - payloadKey: message - rfc: RFC3164 - severity: debug - tag: mytag - type: syslog - url: tls://syslog-receiver.example.com:6514 - pipelines: - - inputRefs: - - application - name: test-app - outputRefs: - - syslogout -# ... -``` - - - -[NOTE] ----- -This configuration is compatible with both RFC3164 and RFC5424. ----- - - -```text - 2025-03-03T11:48:01+00:00 example-worker-x syslogsyslogserverd846bb9b: {...} -``` - - - -```text -2025-03-03T11:48:01+00:00 example-worker-x syslogsyslogserverd846bb9b: namespace_name=cakephp-project container_name=mysql pod_name=mysql-1-wr96h,message: {...} -``` - - -# Configuring content filters to drop unwanted log records - -When the drop filter is configured, the log collector evaluates log streams according to the filters before forwarding. The collector drops unwanted log records that match the specified configuration. - -1. Add a configuration for a filter to the filters spec in the ClusterLogForwarder CR. - -The following example shows how to configure the ClusterLogForwarder CR to drop log records based on regular expressions: -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: -# ... -spec: - serviceAccount: - name: - filters: - - name: - type: drop 1 - drop: 2 - - test: 3 - - field: .kubernetes.labels."foo-bar/baz" 4 - matches: .+ 5 - - field: .kubernetes.pod_name - notMatches: "my-pod" 6 - pipelines: - - name: 7 - filterRefs: [""] -# ... -``` - -Specifies the type of filter. The drop filter drops log records that match the filter configuration. -Specifies configuration options for applying the drop filter. -Specifies the configuration for tests that are used to evaluate whether a log record is dropped. -* If all the conditions specified for a test are true, the test passes and the log record is dropped. -* When multiple tests are specified for the drop filter configuration, if any of the tests pass, the record is dropped. -* If there is an error evaluating a condition, for example, the field is missing from the log record being evaluated, that condition evaluates to false. -Specifies a dot-delimited field path, which is a path to a field in the log record. The path can contain alpha-numeric characters and underscores (a-zA-Z0-9_), for example, .kubernetes.namespace_name. 
If segments contain characters outside of this range, the segment must be in quotes, for example, .kubernetes.labels."foo.bar-bar/baz". You can include multiple field paths in a single test configuration, but they must all evaluate to true for the test to pass and the drop filter to be applied. -Specifies a regular expression. If log records match this regular expression, they are dropped. You can set either the matches or notMatches condition for a single field path, but not both. -Specifies a regular expression. If log records do not match this regular expression, they are dropped. You can set either the matches or notMatches condition for a single field path, but not both. -Specifies the pipeline that the drop filter is applied to. -2. Apply the ClusterLogForwarder CR by running the following command: - -```terminal -$ oc apply -f .yaml -``` - - -The following additional example shows how you can configure the drop filter to only keep higher priority log records: - - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: -# ... -spec: - serviceAccount: - name: - filters: - - name: important - type: drop - drop: - - test: - - field: .message - notMatches: "(?i)critical|error" - - field: .level - matches: "info|warning" -# ... -``` - - -In addition to including multiple field paths in a single test configuration, you can also include additional tests that are treated as OR checks. In the following example, records are dropped if either test configuration evaluates to true. However, for the second test configuration, both field specs must be true for it to be evaluated to true: - - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: -# ... -spec: - serviceAccount: - name: - filters: - - name: important - type: drop - drop: - - test: - - field: .kubernetes.namespace_name - matches: "^open" - - test: - - field: .log_type - matches: "application" - - field: .kubernetes.pod_name - notMatches: "my-pod" -# ... -``` - - -# Overview of API audit filter - -OpenShift API servers generate audit events for each API call, detailing the request, response, and the identity of the requester, leading to large volumes of data. The API Audit filter uses rules to enable the exclusion of non-essential events and the reduction of event size, facilitating a more manageable audit trail. Rules are checked in order, and checking stops at the first match. The amount of data that is included in an event is determined by the value of the level field: - -* None: The event is dropped. -* Metadata: Audit metadata is included, request and response bodies are removed. -* Request: Audit metadata and the request body are included, the response body is removed. -* RequestResponse: All data is included: metadata, request body and response body. The response body can be very large. For example, oc get pods -A generates a response body containing the YAML description of every pod in the cluster. - -The ClusterLogForwarder custom resource (CR) uses the same format as the standard Kubernetes audit policy, while providing the following additional functions: - -Wildcards:: Names of users, groups, namespaces, and resources can have a leading or trailing * asterisk character. For example, the namespace openshift-\* matches openshift-apiserver or openshift-authentication. Resource \*/status matches Pod/status or Deployment/status. 
-Default Rules:: Events that do not match any rule in the policy are filtered as follows: -* Read-only system events such as get, list, and watch are dropped. -* Service account write events that occur within the same namespace as the service account are dropped. -* All other events are forwarded, subject to any configured rate limits. - -To disable these defaults, either end your rules list with a rule that has only a level field or add an empty rule. - -Omit Response Codes:: A list of integer status codes to omit. You can drop events based on the HTTP status code in the response by using the OmitResponseCodes field, which lists HTTP status codes for which no events are created. The default value is [404, 409, 422, 429]. If the value is an empty list, [], then no status codes are omitted. - -The ClusterLogForwarder CR audit policy acts in addition to the Red Hat OpenShift Container Platform audit policy. The ClusterLogForwarder CR audit filter changes what the log collector forwards and provides the ability to filter by verb, user, group, namespace, or resource. You can create multiple filters to send different summaries of the same audit stream to different places. For example, you can send a detailed stream to the local cluster log store and a less detailed stream to a remote site. - - -[NOTE] ----- -You must have a cluster role collect-audit-logs to collect the audit logs. The following example provided is intended to illustrate the range of rules possible in an audit policy and is not a recommended configuration. ----- - - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - name: - namespace: -spec: - serviceAccount: - name: - pipelines: - - name: my-pipeline - inputRefs: audit 1 - filterRefs: my-policy 2 - filters: - - name: my-policy - type: kubeAPIAudit - kubeAPIAudit: - # Don't generate audit events for all requests in RequestReceived stage. - omitStages: - - "RequestReceived" - - rules: - # Log pod changes at RequestResponse level - - level: RequestResponse - resources: - - group: "" - resources: ["pods"] - - # Log "pods/log", "pods/status" at Metadata level - - level: Metadata - resources: - - group: "" - resources: ["pods/log", "pods/status"] - - # Don't log requests to a configmap called "controller-leader" - - level: None - resources: - - group: "" - resources: ["configmaps"] - resourceNames: ["controller-leader"] - - # Don't log watch requests by the "system:kube-proxy" on endpoints or services - - level: None - users: ["system:kube-proxy"] - verbs: ["watch"] - resources: - - group: "" # core API group - resources: ["endpoints", "services"] - - # Don't log authenticated requests to certain non-resource URL paths. - - level: None - userGroups: ["system:authenticated"] - nonResourceURLs: - - "/api*" # Wildcard matching. - - "/version" - - # Log the request body of configmap changes in kube-system. - - level: Request - resources: - - group: "" # core API group - resources: ["configmaps"] - # This rule only applies to resources in the "kube-system" namespace. - # The empty string "" can be used to select non-namespaced resources. - namespaces: ["kube-system"] - - # Log configmap and secret changes in all other namespaces at the Metadata level. - - level: Metadata - resources: - - group: "" # core API group - resources: ["secrets", "configmaps"] - - # Log all other resources in core and extensions at the Request level. 
- - level: Request - resources: - - group: "" # core API group - - group: "extensions" # Version of group should NOT be included. - - # A catch-all rule to log all other requests at the Metadata level. - - level: Metadata -``` - - -The log types that are collected. The value for this field can be audit for audit logs, application for application logs, infrastructure for infrastructure logs, or a named input that has been defined for your application. -The name of your audit policy. - -# Filtering application logs at input by including the label expressions or a matching label key and values - -You can include the application logs based on the label expressions or a matching label key and its values by using the input selector. - -1. Add a configuration for a filter to the input spec in the ClusterLogForwarder CR. - -The following example shows how to configure the ClusterLogForwarder CR to include logs based on label expressions or matched label key/values: -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -# ... -spec: - serviceAccount: - name: - inputs: - - name: mylogs - application: - selector: - matchExpressions: - - key: env 1 - operator: In 2 - values: ["prod", "qa"] 3 - - key: zone - operator: NotIn - values: ["east", "west"] - matchLabels: 4 - app: one - name: app1 - type: application -# ... -``` - -Specifies the label key to match. -Specifies the operator. Valid values include: In, NotIn, Exists, and DoesNotExist. -Specifies an array of string values. If the operator value is either Exists or DoesNotExist, the value array must be empty. -Specifies an exact key or value mapping. -2. Apply the ClusterLogForwarder CR by running the following command: - -```terminal -$ oc apply -f .yaml -``` - - -# Configuring content filters to prune log records - -When the prune filter is configured, the log collector evaluates log streams according to the filters before forwarding. The collector prunes log records by removing low value fields such as pod annotations. - -1. Add a configuration for a filter to the prune spec in the ClusterLogForwarder CR. - -The following example shows how to configure the ClusterLogForwarder CR to prune log records based on field paths: - -[IMPORTANT] ----- -If both are specified, records are pruned based on the notIn array first, which takes precedence over the in array. After records have been pruned by using the notIn array, they are then pruned by using the in array. ----- -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: -# ... -spec: - serviceAccount: - name: - filters: - - name: - type: prune 1 - prune: 2 - in: [.kubernetes.annotations, .kubernetes.namespace_id] 3 - notIn: [.kubernetes,.log_type,.message,."@timestamp"] 4 - pipelines: - - name: 5 - filterRefs: [""] -# ... -``` - -Specify the type of filter. The prune filter prunes log records by configured fields. -Specify configuration options for applying the prune filter. The in and notIn fields are specified as arrays of dot-delimited field paths, which are paths to fields in log records. These paths can contain alpha-numeric characters and underscores (a-zA-Z0-9_), for example, .kubernetes.namespace_name. If segments contain characters outside of this range, the segment must be in quotes, for example, .kubernetes.labels."foo.bar-bar/baz". -Optional: Any fields that are specified in this array are removed from the log record. 
-Optional: Any fields that are not specified in this array are removed from the log record. -Specify the pipeline that the prune filter is applied to. - -[NOTE] ----- -The filters exempts the log_type, .log_source, and .message fields. ----- -2. Apply the ClusterLogForwarder CR by running the following command: - -```terminal -$ oc apply -f .yaml -``` - - -# Filtering the audit and infrastructure log inputs by source - -You can define the list of audit and infrastructure sources to collect the logs by using the input selector. - -1. Add a configuration to define the audit and infrastructure sources in the ClusterLogForwarder CR. - -The following example shows how to configure the ClusterLogForwarder CR to define audit and infrastructure sources: -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -# ... -spec: - serviceAccount: - name: - inputs: - - name: mylogs1 - type: infrastructure - infrastructure: - sources: 1 - - node - - name: mylogs2 - type: audit - audit: - sources: 2 - - kubeAPI - - openshiftAPI - - ovn -# ... -``` - -Specifies the list of infrastructure sources to collect. The valid sources include: -* node: Journal log from the node -* container: Logs from the workloads deployed in the namespaces -Specifies the list of audit sources to collect. The valid sources include: -* kubeAPI: Logs from the Kubernetes API servers -* openshiftAPI: Logs from the OpenShift API servers -* auditd: Logs from a node auditd service -* ovn: Logs from an open virtual network service -2. Apply the ClusterLogForwarder CR by running the following command: - -```terminal -$ oc apply -f .yaml -``` - - -# Filtering application logs at input by including or excluding the namespace or container name - -You can include or exclude the application logs based on the namespace and container name by using the input selector. - -1. Add a configuration to include or exclude the namespace and container names in the ClusterLogForwarder CR. - -The following example shows how to configure the ClusterLogForwarder CR to include or exclude namespaces and container names: -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -# ... -spec: - serviceAccount: - name: - inputs: - - name: mylogs - application: - includes: - - namespace: "my-project" 1 - container: "my-container" 2 - excludes: - - container: "other-container*" 3 - namespace: "other-namespace" 4 - type: application -# ... -``` - -Specifies that the logs are only collected from these namespaces. -Specifies that the logs are only collected from these containers. -Specifies the pattern of namespaces to ignore when collecting the logs. -Specifies the set of containers to ignore when collecting the logs. - -[NOTE] ----- -The excludes field takes precedence over the includes field. ----- -2. Apply the ClusterLogForwarder CR by running the following command: - -```terminal -$ oc apply -f .yaml -``` diff --git a/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.2/log6x-configuring-lokistack-otlp-6.2.txt b/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.2/log6x-configuring-lokistack-otlp-6.2.txt deleted file mode 100644 index c1df8b35..00000000 --- a/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.2/log6x-configuring-lokistack-otlp-6.2.txt +++ /dev/null @@ -1,172 +0,0 @@ -# OTLP data ingestion in Loki - - -You can use an API endpoint by using the OpenTelemetry Protocol (OTLP) with Logging. 
As OTLP is a standardized format not specifically designed for Loki, OTLP requires an additional Loki configuration to map data format of OpenTelemetry to data model of Loki. OTLP lacks concepts such as stream labels or structured metadata. Instead, OTLP provides metadata about log entries as attributes, grouped into the following three categories: -* Resource -* Scope -* Log -You can set metadata for multiple entries simultaneously or individually as needed. - -# Configuring LokiStack for OTLP data ingestion - - -[IMPORTANT] ----- -{FeatureName} is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. -For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. ----- - -To configure a LokiStack custom resource (CR) for OTLP ingestion, follow these steps: - -* Ensure that your Loki setup supports structured metadata, introduced in schema version 13 to enable OTLP log ingestion. - -1. Set the schema version: -* When creating a new LokiStack CR, set version: v13 in the storage schema configuration. - -[NOTE] ----- -For existing configurations, add a new schema entry with version: v13 and an effectiveDate in the future. For more information on updating schema versions, see Upgrading Schemas (Grafana documentation). ----- -2. Configure the storage schema as follows: -Example configure storage schema - -```yaml -# ... -spec: - storage: - schemas: - - version: v13 - effectiveDate: 2024-10-25 -``` - - -Once the effectiveDate has passed, the v13 schema takes effect, enabling your LokiStack to store structured metadata. - -# Attribute mapping - -When you set the Loki Operator to the openshift-logging mode, Loki Operator automatically applies a default set of attribute mappings. These mappings align specific OTLP attributes with stream labels and structured metadata of Loki. - -For typical setups, these default mappings are sufficient. However, you might need to customize attribute mapping in the following cases: - -* Using a custom collector: If your setup includes a custom collector that generates additional attributes that you do not want to store, consider customizing the mapping to ensure these attributes are dropped by Loki. -* Adjusting attribute detail levels: If the default attribute set is more detailed than necessary, you can reduce it to essential attributes only. This can avoid excessive data storage and streamline the logging process. - -## Custom attribute mapping for OpenShift - -When using the Loki Operator in openshift-logging mode, attribute mapping follow OpenShift default values, but you can configure custom mappings to adjust default values. -In the openshift-logging mode, you can configure custom attribute mappings globally for all tenants or for individual tenants as needed. When you define custom mappings, they are appended to the OpenShift default values. If you do not need default labels, you can disable them in the tenant configuration. - - -[NOTE] ----- -A major difference between the Loki Operator and Loki lies in inheritance handling. 
Loki copies only default_resource_attributes_as_index_labels to tenants by default, while the Loki Operator applies the entire global configuration to each tenant in the openshift-logging mode. ----- - -Within LokiStack, attribute mapping configuration is managed through the limits setting. See the following example LokiStack configuration: - - -```yaml -# ... -spec: - limits: - global: - otlp: {} 1 - tenants: - application: 2 - otlp: {} -``` - - -Defines global OTLP attribute configuration. -Defines the OTLP attribute configuration for the application tenant within the openshift-logging mode. You can also configure infrastructure and audit tenants in addition to application tenants. - - -[NOTE] ----- -You can use both global and per-tenant OTLP configurations for mapping attributes to stream labels. ----- - -Stream labels derive only from resource-level attributes, which the LokiStack resource structure reflects. See the following LokiStack example configuration: - - -```yaml -spec: - limits: - global: - otlp: - streamLabels: - resourceAttributes: - - name: "k8s.namespace.name" - - name: "k8s.pod.name" - - name: "k8s.container.name" -``` - - -You can drop attributes of type resource, scope, or log from the log entry. - - -```yaml -# ... -spec: - limits: - global: - otlp: - streamLabels: -# ... - drop: - resourceAttributes: - - name: "process.command_line" - - name: "k8s\\.pod\\.labels\\..+" - regex: true - scopeAttributes: - - name: "service.name" - logAttributes: - - name: "http.route" -``` - - -You can use regular expressions by setting regex: true to apply a configuration for attributes with similar names. - - -[IMPORTANT] ----- -Avoid using regular expressions for stream labels, as this can increase data volume. ----- - -Attributes that are not explicitly set as stream labels or dropped from the entry are saved as structured metadata by default. - -## Customizing OpenShift defaults - -In the openshift-logging mode, certain attributes are required and cannot be removed from the configuration due to their role in OpenShift functions. Other attributes, labeled recommended, might be dropped if performance is impacted. For information about the attributes, see OpenTelemetry data model attributes. - -When using the openshift-logging mode without custom attributes, you can achieve immediate compatibility with OpenShift tools. If additional attributes are needed as stream labels or some attributes need to be droped, use custom configuration. Custom configurations can merge with default configurations. - -## Removing recommended attributes - -To reduce default attributes in the openshift-logging mode, disable recommended attributes: - - -```yaml -# ... -spec: - tenants: - mode: openshift-logging - openshift: - otlp: - disableRecommendedAttributes: true 1 -``` - - -Set disableRecommendedAttributes: true to remove recommended attributes, which limits default attributes to the required attributes or stream labels. - -[NOTE] ----- -This setting might negatively impact query performance, as it removes default stream labels. You must pair this option with a custom attribute configuration to retain attributes essential for queries. 
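For example, a sketch that combines fields shown earlier in this section (the attribute name is illustrative) keeps a single namespace stream label while disabling the recommended attributes:

```yaml
# ...
spec:
  limits:
    global:
      otlp:
        streamLabels:
          resourceAttributes:
            - name: "k8s.namespace.name"
  tenants:
    mode: openshift-logging
    openshift:
      otlp:
        disableRecommendedAttributes: true
```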
----- - -# Additional resources - -* Loki labels (Grafana documentation) -* Structured metadata (Grafana documentation) -* OpenTelemetry data model -* OpenTelemetry attribute (OpenTelemetry documentation) \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.2/log6x-loki-6.2.txt b/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.2/log6x-loki-6.2.txt deleted file mode 100644 index fd0b4f0e..00000000 --- a/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.2/log6x-loki-6.2.txt +++ /dev/null @@ -1,764 +0,0 @@ -# Storing logs with LokiStack - - -You can configure a LokiStack custom resource (CR) to store application, audit, and infrastructure-related logs. -Loki is a horizontally scalable, highly available, multi-tenant log aggregation system offered as a GA log store for logging for Red Hat OpenShift that can be visualized with the OpenShift Observability UI. The Loki configuration provided by OpenShift Logging is a short-term log store designed to enable users to perform fast troubleshooting with the collected logs. For that purpose, the logging for Red Hat OpenShift configuration of Loki has short-term storage, and is optimized for very recent queries. - -[IMPORTANT] ----- -For long-term storage or queries over a long time period, users should look to log stores external to their cluster. Loki sizing is only tested and supported for short term storage, for a maximum of 30 days. ----- - -# Loki deployment sizing - -Sizing for Loki follows the format of 1x. where the value 1x is number of instances and specifies performance capabilities. - -The 1x.pico configuration defines a single Loki deployment with minimal resource and limit requirements, offering high availability (HA) support for all Loki components. This configuration is suited for deployments that do not require a single replication factor or auto-compaction. - -Disk requests are similar across size configurations, allowing customers to test different sizes to determine the best fit for their deployment needs. - - -[IMPORTANT] ----- -It is not possible to change the number 1x for the deployment size. ----- - - - -# Prerequisites - -* You have installed the Loki Operator by using the command-line interface (CLI) or web console. -* You have created a serviceAccount CR in the same namespace as the ClusterLogForwarder CR. -* You have assigned the collect-audit-logs, collect-application-logs, and collect-infrastructure-logs cluster roles to the serviceAccount CR. - -# Core set up and configuration - -Use role-based access controls, basic monitoring, and pod placement to deploy Loki. - -# Authorizing LokiStack rules RBAC permissions - -Administrators can allow users to create and manage their own alerting and recording rules by binding cluster roles to usernames. -Cluster roles are defined as ClusterRole objects that contain necessary role-based access control (RBAC) permissions for users. - -The following cluster roles for alerting and recording rules are available for LokiStack: - - - -## Examples - -To apply cluster roles for a user, you must bind an existing cluster role to a specific username. - -Cluster roles can be cluster or namespace scoped, depending on which type of role binding you use. -When a RoleBinding object is used, as when using the oc adm policy add-role-to-user command, the cluster role only applies to the specified namespace. 
-When a ClusterRoleBinding object is used, as when using the oc adm policy add-cluster-role-to-user command, the cluster role applies to all namespaces in the cluster. - -The following example command gives the specified user create, read, update and delete (CRUD) permissions for alerting rules in a specific namespace in the cluster: - - -```terminal -$ oc adm policy add-role-to-user alertingrules.loki.grafana.com-v1-admin -n -``` - - -The following command gives the specified user administrator permissions for alerting rules in all namespaces: - - -```terminal -$ oc adm policy add-cluster-role-to-user alertingrules.loki.grafana.com-v1-admin -``` - - -# Creating a log-based alerting rule with Loki - -The AlertingRule CR contains a set of specifications and webhook validation definitions to declare groups of alerting rules for a single LokiStack instance. In addition, the webhook validation definition provides support for rule validation conditions: - -* If an AlertingRule CR includes an invalid interval period, it is an invalid alerting rule -* If an AlertingRule CR includes an invalid for period, it is an invalid alerting rule. -* If an AlertingRule CR includes an invalid LogQL expr, it is an invalid alerting rule. -* If an AlertingRule CR includes two groups with the same name, it is an invalid alerting rule. -* If none of the above applies, an alerting rule is considered valid. - - - -1. Create an AlertingRule custom resource (CR): -Example infrastructure AlertingRule CR - -```yaml - apiVersion: loki.grafana.com/v1 - kind: AlertingRule - metadata: - name: loki-operator-alerts - namespace: openshift-operators-redhat 1 - labels: 2 - openshift.io/: "true" - spec: - tenantID: "infrastructure" 3 - groups: - - name: LokiOperatorHighReconciliationError - rules: - - alert: HighPercentageError - expr: | 4 - sum(rate({kubernetes_namespace_name="openshift-operators-redhat", kubernetes_pod_name=~"loki-operator-controller-manager.*"} |= "error" [1m])) by (job) - / - sum(rate({kubernetes_namespace_name="openshift-operators-redhat", kubernetes_pod_name=~"loki-operator-controller-manager.*"}[1m])) by (job) - > 0.01 - for: 10s - labels: - severity: critical 5 - annotations: - summary: High Loki Operator Reconciliation Errors 6 - description: High Loki Operator Reconciliation Errors 7 -``` - -The namespace where this AlertingRule CR is created must have a label matching the LokiStack spec.rules.namespaceSelector definition. -The labels block must match the LokiStack spec.rules.selector definition. -AlertingRule CRs for infrastructure tenants are only supported in the openshift-*, kube-\*, or default namespaces. -The value for kubernetes_namespace_name: must match the value for metadata.namespace. -The value of this mandatory field must be critical, warning, or info. -This field is mandatory. -This field is mandatory. -Example application AlertingRule CR - -```yaml - apiVersion: loki.grafana.com/v1 - kind: AlertingRule - metadata: - name: app-user-workload - namespace: app-ns 1 - labels: 2 - openshift.io/: "true" - spec: - tenantID: "application" - groups: - - name: AppUserWorkloadHighError - rules: - - alert: - expr: | 3 - sum(rate({kubernetes_namespace_name="app-ns", kubernetes_pod_name=~"podName.*"} |= "error" [1m])) by (job) - for: 10s - labels: - severity: critical 4 - annotations: - summary: 5 - description: 6 -``` - -The namespace where this AlertingRule CR is created must have a label matching the LokiStack spec.rules.namespaceSelector definition. 
-The labels block must match the LokiStack spec.rules.selector definition. -Value for kubernetes_namespace_name: must match the value for metadata.namespace. -The value of this mandatory field must be critical, warning, or info. -The value of this mandatory field is a summary of the rule. -The value of this mandatory field is a detailed description of the rule. -2. Apply the AlertingRule CR: - -```terminal -$ oc apply -f .yaml -``` - - -# Configuring Loki to tolerate memberlist creation failure - -In an Red Hat OpenShift Container Platform cluster, administrators generally use a non-private IP network range. As a result, the LokiStack memberlist configuration fails because, by default, it only uses private IP networks. - -As an administrator, you can select the pod network for the memberlist configuration. You can modify the LokiStack custom resource (CR) to use the podIP address in the hashRing spec. To configure the LokiStack CR, use the following command: - - -```terminal -$ oc patch LokiStack logging-loki -n openshift-logging --type=merge -p '{"spec": {"hashRing":{"memberlist":{"instanceAddrType":"podIP"},"type":"memberlist"}}}' -``` - - - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: -# ... - hashRing: - type: memberlist - memberlist: - instanceAddrType: podIP -# ... -``` - - -# Enabling stream-based retention with Loki - -You can configure retention policies based on log streams. Rules for these may be set globally, per-tenant, or both. If you configure both, tenant rules apply before global rules. - - -[IMPORTANT] ----- -If there is no retention period defined on the s3 bucket or in the LokiStack custom resource (CR), then the logs are not pruned and they stay in the s3 bucket forever, which might fill up the s3 storage. ----- - - -[NOTE] ----- -Schema v13 is recommended. ----- - -1. Create a LokiStack CR: -* Enable stream-based retention globally as shown in the following example: -Example global stream-based retention for AWS - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: - limits: - global: 1 - retention: 2 - days: 20 - streams: - - days: 4 - priority: 1 - selector: '{kubernetes_namespace_name=~"test.+"}' 3 - - days: 1 - priority: 1 - selector: '{log_type="infrastructure"}' - managementState: Managed - replicationFactor: 1 - size: 1x.small - storage: - schemas: - - effectiveDate: "2020-10-11" - version: v13 - secret: - name: logging-loki-s3 - type: aws - storageClassName: gp3-csi - tenants: - mode: openshift-logging -``` - -Sets retention policy for all log streams. Note: This field does not impact the retention period for stored logs in object storage. -Retention is enabled in the cluster when this block is added to the CR. 
-Contains the LogQL query used to define the log stream.spec: -limits: -* Enable stream-based retention per-tenant basis as shown in the following example: -Example per-tenant stream-based retention for AWS - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: - limits: - global: - retention: - days: 20 - tenants: 1 - application: - retention: - days: 1 - streams: - - days: 4 - selector: '{kubernetes_namespace_name=~"test.+"}' 2 - infrastructure: - retention: - days: 5 - streams: - - days: 1 - selector: '{kubernetes_namespace_name=~"openshift-cluster.+"}' - managementState: Managed - replicationFactor: 1 - size: 1x.small - storage: - schemas: - - effectiveDate: "2020-10-11" - version: v13 - secret: - name: logging-loki-s3 - type: aws - storageClassName: gp3-csi - tenants: - mode: openshift-logging -``` - -Sets retention policy by tenant. Valid tenant types are application, audit, and infrastructure. -Contains the LogQL query used to define the log stream. -2. Apply the LokiStack CR: - -```terminal -$ oc apply -f .yaml -``` - - -# Loki pod placement - -You can control which nodes the Loki pods run on, and prevent other workloads from using those nodes, by using tolerations or node selectors on the pods. - -You can apply tolerations to the log store pods with the LokiStack custom resource (CR) and apply taints to a node with the node specification. A taint on a node is a key:value pair that instructs the node to repel all pods that do not allow the taint. Using a specific key:value pair that is not on other pods ensures that only the log store pods can run on that node. - - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: -# ... - template: - compactor: 1 - nodeSelector: - node-role.kubernetes.io/infra: "" 2 - distributor: - nodeSelector: - node-role.kubernetes.io/infra: "" - gateway: - nodeSelector: - node-role.kubernetes.io/infra: "" - indexGateway: - nodeSelector: - node-role.kubernetes.io/infra: "" - ingester: - nodeSelector: - node-role.kubernetes.io/infra: "" - querier: - nodeSelector: - node-role.kubernetes.io/infra: "" - queryFrontend: - nodeSelector: - node-role.kubernetes.io/infra: "" - ruler: - nodeSelector: - node-role.kubernetes.io/infra: "" -# ... -``` - - -Specifies the component pod type that applies to the node selector. -Specifies the pods that are moved to nodes containing the defined label. - - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: -# ... 
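# The template below pins every Loki component to infra nodes and tolerates the
# matching NoSchedule and NoExecute taints; adjust the key and value to match
# the taints that are set on your own nodes.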
- template: - compactor: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - distributor: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - indexGateway: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - ingester: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - querier: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - queryFrontend: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - ruler: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - gateway: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved -# ... -``` - - -To configure the nodeSelector and tolerations fields of the LokiStack (CR), you can use the oc explain command to view the description and fields for a particular resource: - - -```terminal -$ oc explain lokistack.spec.template -``` - - - -```text -KIND: LokiStack -VERSION: loki.grafana.com/v1 - -RESOURCE: template - -DESCRIPTION: - Template defines the resource/limits/tolerations/nodeselectors per - component - -FIELDS: - compactor - Compactor defines the compaction component spec. - - distributor - Distributor defines the distributor component spec. -... -``` - - -For more detailed information, you can add a specific field: - - -```terminal -$ oc explain lokistack.spec.template.compactor -``` - - - -```text -KIND: LokiStack -VERSION: loki.grafana.com/v1 - -RESOURCE: compactor - -DESCRIPTION: - Compactor defines the compaction component spec. - -FIELDS: - nodeSelector - NodeSelector defines the labels required by a node to schedule the - component onto it. -... -``` - - -## Enhanced reliability and performance - -Use the following configurations to ensure reliability and efficiency of Loki in production. - -## Enabling authentication to cloud-based log stores using short-lived tokens - -Workload identity federation enables authentication to cloud-based log stores using short-lived tokens. - -* Use one of the following options to enable authentication: -* If you use the Red Hat OpenShift Container Platform web console to install the Loki Operator, clusters that use short-lived tokens are automatically detected. 
You are prompted to create roles and supply the data required for the Loki Operator to create a CredentialsRequest object, which populates a secret. -* If you use the OpenShift CLI (`oc`) to install the Loki Operator, you must manually create a Subscription object using the appropriate template for your storage provider, as shown in the following examples. This authentication strategy is only supported for the storage providers indicated. -Example Azure sample subscription - -```yaml -apiVersion: operators.coreos.com/v1alpha1 -kind: Subscription -metadata: - name: loki-operator - namespace: openshift-operators-redhat -spec: - channel: "stable-6.0" - installPlanApproval: Manual - name: loki-operator - source: redhat-operators - sourceNamespace: openshift-marketplace - config: - env: - - name: CLIENTID - value: - - name: TENANTID - value: - - name: SUBSCRIPTIONID - value: - - name: REGION - value: -``` - -Example AWS sample subscription - -```yaml -apiVersion: operators.coreos.com/v1alpha1 -kind: Subscription -metadata: - name: loki-operator - namespace: openshift-operators-redhat -spec: - channel: "stable-6.0" - installPlanApproval: Manual - name: loki-operator - source: redhat-operators - sourceNamespace: openshift-marketplace - config: - env: - - name: ROLEARN - value: -``` - - -## Configuring Loki to tolerate node failure - -The Loki Operator supports setting pod anti-affinity rules to request that pods of the same component are scheduled on different available nodes in the cluster. - -Affinity is a property of pods that controls the nodes on which they prefer to be scheduled. Anti-affinity is a property of pods -that prevents a pod from being scheduled on a node. - -In Red Hat OpenShift Container Platform, pod affinity and pod anti-affinity allow you to constrain which nodes your pod is eligible to be scheduled on based on the key-value labels on other pods. - -The Operator sets default, preferred podAntiAffinity rules for all Loki components, which includes the compactor, distributor, gateway, indexGateway, ingester, querier, queryFrontend, and ruler components. - -You can override the preferred podAntiAffinity settings for Loki components by configuring required settings in the requiredDuringSchedulingIgnoredDuringExecution field: - - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: -# ... - template: - ingester: - podAntiAffinity: - # ... - requiredDuringSchedulingIgnoredDuringExecution: 1 - - labelSelector: - matchLabels: 2 - app.kubernetes.io/component: ingester - topologyKey: kubernetes.io/hostname -# ... -``` - - -The stanza to define a required rule. -The key-value pair (label) that must be matched to apply the rule. - -## LokiStack behavior during cluster restarts - -When an Red Hat OpenShift Container Platform cluster is restarted, LokiStack ingestion and the query path continue to operate within the available CPU and memory resources available for the node. This means that there is no downtime for the LokiStack during Red Hat OpenShift Container Platform cluster updates. This behavior is achieved by using PodDisruptionBudget resources. The Loki Operator provisions PodDisruptionBudget resources for Loki, which determine the minimum number of pods that must be available per component to ensure normal operations under certain conditions. - -## Advanced deployment and scalability - -To configure high availability, scalability, and error handling, use the following information. 
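As a quick check of the restart safeguards described in the previous section, you can list the PodDisruptionBudget resources that the Loki Operator provisions; the namespace shown assumes a default openshift-logging deployment:

```terminal
$ oc get poddisruptionbudget -n openshift-logging
```

The MIN AVAILABLE column in the output shows how many pods of each component must stay available during voluntary disruptions.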
- -## Zone aware data replication - -The Loki Operator offers support for zone-aware data replication through pod topology spread constraints. Enabling this feature enhances reliability and safeguards against log loss in the event of a single zone failure. When configuring the deployment size as 1x.extra-small, 1x.small, or 1x.medium, the replication.factor field is automatically set to 2. - -To ensure proper replication, you need to have at least as many availability zones as the replication factor specifies. While it is possible to have more availability zones than the replication factor, having fewer zones can lead to write failures. Each zone should host an equal number of instances for optimal operation. - - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: - replicationFactor: 2 1 - replication: - factor: 2 2 - zones: - - maxSkew: 1 3 - topologyKey: topology.kubernetes.io/zone 4 -``` - - -Deprecated field, values entered are overwritten by replication.factor. -This value is automatically set when deployment size is selected at setup. -The maximum difference in number of pods between any two topology domains. The default is 1, and you cannot specify a value of 0. -Defines zones in the form of a topology key that corresponds to a node label. - -## Recovering Loki pods from failed zones - -In Red Hat OpenShift Container Platform a zone failure happens when specific availability zone resources become inaccessible. Availability zones are isolated areas within a cloud provider’s data center, aimed at enhancing redundancy and fault tolerance. If your Red Hat OpenShift Container Platform cluster is not configured to handle this, a zone failure can lead to service or data loss. - -Loki pods are part of a StatefulSet, and they come with Persistent Volume Claims (PVCs) provisioned by a StorageClass object. Each Loki pod and its PVCs reside in the same zone. When a zone failure occurs in a cluster, the StatefulSet controller automatically attempts to recover the affected pods in the failed zone. - - -[WARNING] ----- -The following procedure will delete the PVCs in the failed zone, and all data contained therein. To avoid complete data loss the replication factor field of the LokiStack CR should always be set to a value greater than 1 to ensure that Loki is replicating. ----- - -* Verify your LokiStack CR has a replication factor greater than 1. -* Zone failure detected by the control plane, and nodes in the failed zone are marked by cloud provider integration. - -The StatefulSet controller automatically attempts to reschedule pods in a failed zone. Because the associated PVCs are also in the failed zone, automatic rescheduling to a different zone does not work. You must manually delete the PVCs in the failed zone to allow successful re-creation of the stateful Loki Pod and its provisioned PVC in the new zone. - -1. List the pods in Pending status by running the following command: - -```terminal -$ oc get pods --field-selector status.phase==Pending -n openshift-logging -``` - -Example oc get pods output - -```terminal -NAME READY STATUS RESTARTS AGE 1 -logging-loki-index-gateway-1 0/1 Pending 0 17m -logging-loki-ingester-1 0/1 Pending 0 16m -logging-loki-ruler-1 0/1 Pending 0 16m -``` - -These pods are in Pending status because their corresponding PVCs are in the failed zone. -2. 
List the PVCs in Pending status by running the following command: - -```terminal -$ oc get pvc -o=json -n openshift-logging | jq '.items[] | select(.status.phase == "Pending") | .metadata.name' -r -``` - -Example oc get pvc output - -```terminal -storage-logging-loki-index-gateway-1 -storage-logging-loki-ingester-1 -wal-logging-loki-ingester-1 -storage-logging-loki-ruler-1 -wal-logging-loki-ruler-1 -``` - -3. Delete the PVC(s) for a pod by running the following command: - -```terminal -$ oc delete pvc -n openshift-logging -``` - -4. Delete the pod(s) by running the following command: - -```terminal -$ oc delete pod -n openshift-logging -``` - - -Once these objects have been successfully deleted, they should automatically be rescheduled in an available zone. - -### Troubleshooting PVC in a terminating state - -The PVCs might hang in the terminating state without being deleted, if PVC metadata finalizers are set to kubernetes.io/pv-protection. Removing the finalizers should allow the PVCs to delete successfully. - -* Remove the finalizer for each PVC by running the command below, then retry deletion. - -```terminal -$ oc patch pvc -p '{"metadata":{"finalizers":null}}' -n openshift-logging -``` - - -## Troubleshooting Loki rate limit errors - -If the Log Forwarder API forwards a large block of messages that exceeds the rate limit to Loki, Loki generates rate limit (429) errors. - -These errors can occur during normal operation. For example, when adding the logging to a cluster that already has some logs, rate limit errors might occur while the logging tries to ingest all of the existing log entries. In this case, if the rate of addition of new logs is less than the total rate limit, the historical data is eventually ingested, and the rate limit errors are resolved without requiring user intervention. - -In cases where the rate limit errors continue to occur, you can fix the issue by modifying the LokiStack custom resource (CR). - - -[IMPORTANT] ----- -The LokiStack CR is not available on Grafana-hosted Loki. This topic does not apply to Grafana-hosted Loki servers. ----- - -* The Log Forwarder API is configured to forward logs to Loki. -* Your system sends a block of messages that is larger than 2 MB to Loki. For example: - -```text -"values":[["1630410392689800468","{\"kind\":\"Event\",\"apiVersion\":\ -....... -...... -...... -...... -\"received_at\":\"2021-08-31T11:46:32.800278+00:00\",\"version\":\"1.7.4 1.6.0\"}},\"@timestamp\":\"2021-08-31T11:46:32.799692+00:00\",\"viaq_index_name\":\"audit-write\",\"viaq_msg_id\":\"MzFjYjJkZjItNjY0MC00YWU4LWIwMTEtNGNmM2E5ZmViMGU4\",\"log_type\":\"audit\"}"]]}]} -``` - -* After you enter oc logs -n openshift-logging -l component=collector, the collector logs in your cluster show a line containing one of the following error messages: - -```text -429 Too Many Requests Ingestion rate limit exceeded -``` - -Example Vector error message - -```text -2023-08-25T16:08:49.301780Z WARN sink{component_kind="sink" component_id=default_loki_infra component_type=loki component_name=default_loki_infra}: vector::sinks::util::retries: Retrying after error. error=Server responded with an error: 429 Too Many Requests internal_log_rate_limit=true -``` - - -The error is also visible on the receiving end. 
For example, in the LokiStack ingester pod: -Example Loki ingester error message - -```text -level=warn ts=2023-08-30T14:57:34.155592243Z caller=grpc_logging.go:43 duration=1.434942ms method=/logproto.Pusher/Push err="rpc error: code = Code(429) desc = entry with timestamp 2023-08-30 14:57:32.012778399 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream -``` - - -* Update the ingestionBurstSize and ingestionRate fields in the LokiStack CR: - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: - limits: - global: - ingestion: - ingestionBurstSize: 16 1 - ingestionRate: 8 2 -# ... -``` - -The ingestionBurstSize field defines the maximum local rate-limited sample size per distributor replica in MB. This value is a hard limit. Set this value to at least the maximum logs size expected in a single push request. Single requests that are larger than the ingestionBurstSize value are not permitted. -The ingestionRate field is a soft limit on the maximum amount of ingested samples per second in MB. Rate limit errors occur if the rate of logs exceeds the limit, but the collector retries sending the logs. As long as the total average is lower than the limit, the system recovers and errors are resolved without user intervention. \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.2/log6x-release-notes-6.2.txt b/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.2/log6x-release-notes-6.2.txt deleted file mode 100644 index a8856929..00000000 --- a/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.2/log6x-release-notes-6.2.txt +++ /dev/null @@ -1,114 +0,0 @@ -# Logging 6.2 Release Notes - - - -# Logging 6.2.3 Release Notes - -This release includes RHBA-2025:8138. - -## Bug Fixes - -* Before this update, the cluster logging installation page contained an incorrect URL to the installation steps in the documentation. With this update, the link has been corrected, resolving the issue and helping users successfully navigate to the documentation. (LOG-6760) -* Before this update, the API documentation about default settings of the tuning delivery mode for log forwarding lacked clarity and sufficient detail. This could lead to users experiencing difficulty in understanding or optimally configuring these settings for their logging pipelines. With this update, the documentation has been revised to provide more comprehensive and clearer guidance on tuning delivery mode default settings, resolving potential ambiguities. (LOG-7131) -* Before this update, merging data from the message field into the root of a Syslog log event caused the log event to be inconsistent with the ViaQ data model. The inconsistency could lead to overwritten system information, data duplication, or event corruption. This update revises Syslog parsing and merging for the Syslog output to align with other output types, resolving this inconsistency. (LOG-7185) -* Before this update, log forwarding failed if you configured a cluster-wide proxy with a URL containing a username with an encoded at sign (@); for example user%40name. This update resolves the issue by adding correct support for URL-encoded values in proxy configurations. 
(LOG-7188) - -## CVEs - -* CVE-2022-49043 -* CVE-2024-12087 -* CVE-2024-12088 -* CVE-2024-12133 -* CVE-2024-12243 -* CVE-2024-12747 -* CVE-2024-56171 -* CVE-2025-0395 -* CVE-2025-24928 - -# Logging 6.2.2 Release Notes - -This release includes RHBA-2025:4526. - -## Bug Fixes - -* Before this update, logs without the responseStatus.code field caused parsing errors in the Loki distributor component. This happened when using the OpenTelemetry data model. With this update, logs without the responseStatus.code field are parsed correctly. (LOG-7012) -* Before this update, the Cloudwatch output supported log events up to 256 KB in size. With this update, the Cloudwatch output supports up to 1 MB in size to match the updates published by Amazon Web Services (AWS). (LOG-7013) -* Before this update, auditd log messages with multiple msg keys could cause errors in collector pods, because the standard auditd log format expects a single msg field per log entry that follows the msg=audit(TIMESTAMP:ID) structure. With this update, only the first msg value is used, which resolves the issue and ensures accurate extraction of audit metadata. (LOG-7014) -* Before this update, collector pods would enter a crash loop due to a configuration error when attempting token-based authentication with an Elasticsearch output. With this update, token authentication with an Elasticsearch output generates a valid configuration. (LOG-7017) - -# Logging 6.2.1 Release Notes - -This release includes RHBA-2025:3908. - -## Bug Fixes - -* Before this update, application programming interface (API) audit logs collected from the management cluster used the cluster_id value from the management cluster. With this update, API audit logs use the cluster_id value from the guest cluster. (LOG-4445) -* Before this update, issuing the oc explain obsclf.spec.filters command did not list all the supported filters in the command output. With this update, all the supported filter types are listed in the command output. (LOG-6753) -* Before this update the log collector flagged a ClusterLogForwarder resource with multiple inputs to a LokiStack output as invalid due to incorrect internal processing logic. This update fixes the issue. (LOG-6758) -* Before this update, issuing the oc explain command for the clusterlogforwarder.spec.outputs.syslog resource returned an incomplete result. With this update, the missing supported types for rfc and enrichment attributes are listed in the result correctly. (LOG-6869) -* Before this update, empty OpenTelemetry (OTEL) tuning configuration caused validation errors. With this update, validation rules have been updated to accept empty tuning configuration. (LOG-6878) -* Before this update the Red Hat OpenShift Logging Operator could not update the securitycontextconstraint resource that is required by the log collector. With this update, the required cluster role has been provided to the service account of the Red Hat OpenShift Logging Operator. As a result of which, Red Hat OpenShift Logging Operator can create or update the securitycontextconstraint resource. (LOG-6879) -* Before this update, the API documentation for the URL attribute of the syslog resource incorrectly mentioned the value udps as a supported value. With this update, all references to udps have been removed. (LOG-6896) -* Before this update, the Red Hat OpenShift Logging Operator was intermittently unable to update the object in logs due to update conflicts. 
This update resolves the issue and prevents conflicts during object updates by using the Patch() function instead of the Update() function. (LOG-6953) -* Before this update, Loki ingesters that got into an unhealthy state due to networking issues stayed in that state even after the network recovered. With this update, you can configure the Loki Operator to perform service discovery more often so that unhealthy ingesters can rejoin the group. (LOG-6992) -* Before this update, the Vector collector could not forward Open Virtual Network (OVN) and Auditd logs. With this update, the Vector collector can forward OVN and Auditd logs. (LOG-6997) - -## CVEs - -* CVE-2022-49043 -* CVE-2024-2236 -* CVE-2024-5535 -* CVE-2024-56171 -* CVE-2025-24928 - -# Logging 6.2.0 Release Notes - -This release includes Logging for Red Hat OpenShift Bug Fix Release 6.2.0. - -## New Features and Enhancements - -### Log Collection - -* With this update, HTTP outputs include a proxy field that you can use to send log data through an HTTP proxy. (LOG-6069) - -### Log Storage - -* With this update, time-based stream sharding in Loki is now enabled by the Loki Operator. This solves the issue of ingesting log entries older than the sliding time-window used by Loki. (LOG-6757) -* With this update, you can configure a custom certificate authority (CA) certificate with Loki Operator when using Swift as an object store. (LOG-4818) -* With this update, you can configure workload identity federation on Google Cloud Platform (GCP) by using the Cluster Credential Operator in OpenShift 4.17 and later releases with the Loki Operator. (LOG-6158) - -## Technology Preview - - -[IMPORTANT] ----- -{FeatureName} is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. -For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. ----- - -* With this update, OpenTelemetry support offered by OpenShift Logging continues to improve, specifically in the area of enabling migrations from the ViaQ data model to OpenTelemetry when forwarding to LokiStack. (LOG-6146) -* With this update, the structuredMetadata field has been removed from Loki Operator in the otlp configuration because structured metadata is now the default type. Additionally, the update introduces a drop field that administrators can use to drop OpenTelemetry attributes when receiving data through OpenTelemetry protocol (OTLP). (LOG-6507) - -## Bug Fixes - -* Before this update, the timestamp shown in the console logs did not match the @timestamp field in the message. With this update the timestamp is correctly shown in the console. (LOG-6222) -* The introduction of ClusterLogForwarder 6.x modified the ClusterLogForwarder API to allow for a consistent templating mechanism. However, this was not applied to the syslog output spec API for the facility and severity fields. This update adds the required validation to the ClusterLogForwarder API for the facility and severity fields. (LOG-6661) -* Before this update, an error in the Loki Operator generating the Loki configuration caused the amount of workers to delete to be zero when 1x.pico was set as the LokiStack size. 
With this update, the number of workers to delete is set to 10. (LOG-6781) - -## Known Issues - -* The previous data model encoded all information in JSON. The console still uses the query of the previous data model to decode both old and new entries. The logs that are stored by using the new OpenTelemetry data model for the LokiStack output display the following error in the logging console: - -``` -__error__ JSONParserErr -__error_details__ Value looks like object, but can't find closing '}' symbol -``` - - -You can ignore the error as it is only a result of the query and not a data-related error. (LOG-6808) -* Currently, the API documentation incorrectly mentions OpenTelemetry protocol (OTLP) attributes as included instead of excluded in the descriptions of the drop field. (LOG-6839). - -## CVEs - -* CVE-2020-11023 -* CVE-2024-12797 \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.2/log6x-visual-6.2.txt b/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.2/log6x-visual-6.2.txt deleted file mode 100644 index c669b518..00000000 --- a/ocp-product-docs-plaintext/4.16/observability/logging/logging-6.2/log6x-visual-6.2.txt +++ /dev/null @@ -1,5 +0,0 @@ -# Visualization for logging - - - -Visualization for logging is provided by deploying the Logging UI Plugin of the Cluster Observability Operator, which requires Operator installation. \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.16/observability/monitoring/accessing-metrics/accessing-metrics-as-an-administrator.txt b/ocp-product-docs-plaintext/4.16/observability/monitoring/accessing-metrics/accessing-metrics-as-an-administrator.txt index 91af9607..3eb2121a 100644 --- a/ocp-product-docs-plaintext/4.16/observability/monitoring/accessing-metrics/accessing-metrics-as-an-administrator.txt +++ b/ocp-product-docs-plaintext/4.16/observability/monitoring/accessing-metrics/accessing-metrics-as-an-administrator.txt @@ -41,7 +41,7 @@ cluster administrator or as a user with view permissions for all projects, you can access metrics for all default Red Hat OpenShift Container Platform and user-defined projects in the Metrics UI. * You have access to the cluster as a user with the cluster-admin cluster role or with view permissions for all projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. From the Administrator perspective in the Red Hat OpenShift Container Platform web console, select Observe -> Metrics. 2. To add one or more queries, do any of the following: diff --git a/ocp-product-docs-plaintext/4.16/observability/monitoring/accessing-metrics/accessing-monitoring-apis-by-using-the-cli.txt b/ocp-product-docs-plaintext/4.16/observability/monitoring/accessing-metrics/accessing-monitoring-apis-by-using-the-cli.txt index 4b1ee0cf..9f731ada 100644 --- a/ocp-product-docs-plaintext/4.16/observability/monitoring/accessing-metrics/accessing-monitoring-apis-by-using-the-cli.txt +++ b/ocp-product-docs-plaintext/4.16/observability/monitoring/accessing-metrics/accessing-monitoring-apis-by-using-the-cli.txt @@ -88,7 +88,7 @@ Limit queries to a maximum of one every 30 seconds. If you need to forward large amounts of data outside the cluster, use remote write instead. For more information, see the Configuring remote write storage section. ---- -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 
* You have access to the cluster as a user with the cluster-monitoring-view cluster role or have obtained a bearer token with get permission on the namespaces resource. [NOTE] diff --git a/ocp-product-docs-plaintext/4.16/observability/monitoring/configuring-core-platform-monitoring/configuring-alerts-and-notifications.txt b/ocp-product-docs-plaintext/4.16/observability/monitoring/configuring-core-platform-monitoring/configuring-alerts-and-notifications.txt index 922efde6..dd6aaccb 100644 --- a/ocp-product-docs-plaintext/4.16/observability/monitoring/configuring-core-platform-monitoring/configuring-alerts-and-notifications.txt +++ b/ocp-product-docs-plaintext/4.16/observability/monitoring/configuring-core-platform-monitoring/configuring-alerts-and-notifications.txt @@ -13,7 +13,7 @@ If you add the same external Alertmanager configuration for multiple clusters an * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config ConfigMap object. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -84,7 +84,7 @@ If you do not need the local Alertmanager, you can disable it by configuring the * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config config map. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the cluster-monitoring-config config map in the openshift-monitoring project: @@ -129,7 +129,7 @@ After you add a secret to the config map, the secret is mounted as a volume at / * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config config map. * You have created the secret to be configured in Alertmanager in the {namespace-name} project. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -180,7 +180,7 @@ You can attach custom labels to all time series and alerts leaving Prometheus by * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config ConfigMap object. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: diff --git a/ocp-product-docs-plaintext/4.16/observability/monitoring/configuring-core-platform-monitoring/configuring-metrics.txt b/ocp-product-docs-plaintext/4.16/observability/monitoring/configuring-core-platform-monitoring/configuring-metrics.txt index 70fe93c4..a0206e8f 100644 --- a/ocp-product-docs-plaintext/4.16/observability/monitoring/configuring-core-platform-monitoring/configuring-metrics.txt +++ b/ocp-product-docs-plaintext/4.16/observability/monitoring/configuring-core-platform-monitoring/configuring-metrics.txt @@ -11,7 +11,7 @@ You can configure remote write storage to enable Prometheus to send ingested met * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config ConfigMap object. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). * You have set up a remote write compatible endpoint (such as Thanos) and know the endpoint URL. 
See the Prometheus remote endpoints and storage documentation for information about endpoints that are compatible with the remote write feature. [IMPORTANT] @@ -449,7 +449,7 @@ You can create cluster ID labels for metrics by adding the write_relabel setting * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config ConfigMap object. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). * You have configured remote write storage. 1. Edit the {configmap-name} config map in the {namespace-name} project: diff --git a/ocp-product-docs-plaintext/4.16/observability/monitoring/configuring-core-platform-monitoring/configuring-performance-and-scalability.txt b/ocp-product-docs-plaintext/4.16/observability/monitoring/configuring-core-platform-monitoring/configuring-performance-and-scalability.txt index a7c3d5ad..4ce7bd22 100644 --- a/ocp-product-docs-plaintext/4.16/observability/monitoring/configuring-core-platform-monitoring/configuring-performance-and-scalability.txt +++ b/ocp-product-docs-plaintext/4.16/observability/monitoring/configuring-core-platform-monitoring/configuring-performance-and-scalability.txt @@ -29,7 +29,7 @@ You cannot add a node selector constraint directly to an existing scheduled pod. * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config ConfigMap object. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. If you have not done so yet, add a label to the nodes on which you want to run the monitoring components: @@ -85,7 +85,7 @@ You can assign tolerations to any of the monitoring stack components to enable m * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config ConfigMap object. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -151,7 +151,7 @@ Prometheus then considers this target to be down and sets its up metric value to ---- * You have access to the cluster as a user with the cluster-admin cluster role. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring namespace: @@ -194,7 +194,7 @@ To configure CPU and memory resources, specify values for resource limits and re * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the ConfigMap object named cluster-monitoring-config. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -325,7 +325,7 @@ For more information about the support scope of Red Hat Technology Preview featu To choose a metrics collection profile for core Red Hat OpenShift Container Platform monitoring components, edit the cluster-monitoring-config ConfigMap object. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). * You have enabled Technology Preview features by using the FeatureGate custom resource (CR). * You have created the cluster-monitoring-config ConfigMap object. * You have access to the cluster as a user with the cluster-admin cluster role. 
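The hunk above ends with the prerequisites for choosing a metrics collection profile by editing the cluster-monitoring-config ConfigMap object. A minimal sketch of that config map follows; the metricsCollectionProfile field under prometheusK8s and the minimal value are assumptions based on the Technology Preview collection profiles feature, not values taken from this diff:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    # Assumption: the profile is set on the Prometheus component; "full" is the
    # usual default and "minimal" collects only the metrics used by default
    # platform alerts and dashboards.
    prometheusK8s:
      metricsCollectionProfile: minimal
```

Applying a config map along these lines lets the Cluster Monitoring Operator reconcile the selected profile.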
@@ -385,7 +385,7 @@ You can configure pod topology spread constraints for monitoring pods by using t * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config ConfigMap object. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: diff --git a/ocp-product-docs-plaintext/4.16/observability/monitoring/configuring-core-platform-monitoring/preparing-to-configure-the-monitoring-stack.txt b/ocp-product-docs-plaintext/4.16/observability/monitoring/configuring-core-platform-monitoring/preparing-to-configure-the-monitoring-stack.txt index ce3b00df..39104491 100644 --- a/ocp-product-docs-plaintext/4.16/observability/monitoring/configuring-core-platform-monitoring/preparing-to-configure-the-monitoring-stack.txt +++ b/ocp-product-docs-plaintext/4.16/observability/monitoring/configuring-core-platform-monitoring/preparing-to-configure-the-monitoring-stack.txt @@ -34,7 +34,7 @@ Each procedure that requires a change in the config map includes its expected ou You can configure the core Red Hat OpenShift Container Platform monitoring components by creating and updating the cluster-monitoring-config config map in the openshift-monitoring project. The Cluster Monitoring Operator (CMO) then configures the core components of the monitoring stack. * You have access to the cluster as a user with the cluster-admin cluster role. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Check whether the cluster-monitoring-config ConfigMap object exists: diff --git a/ocp-product-docs-plaintext/4.16/observability/monitoring/configuring-core-platform-monitoring/storing-and-recording-data.txt b/ocp-product-docs-plaintext/4.16/observability/monitoring/configuring-core-platform-monitoring/storing-and-recording-data.txt index 43e06841..c4cc3be2 100644 --- a/ocp-product-docs-plaintext/4.16/observability/monitoring/configuring-core-platform-monitoring/storing-and-recording-data.txt +++ b/ocp-product-docs-plaintext/4.16/observability/monitoring/configuring-core-platform-monitoring/storing-and-recording-data.txt @@ -37,7 +37,7 @@ To use a persistent volume (PV) for monitoring components, you must configure a * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config ConfigMap object. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -113,7 +113,7 @@ You can only expand the size of the PVC. Shrinking the storage size is not possi * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config ConfigMap object. * You have configured at least one PVC for core Red Hat OpenShift Container Platform monitoring components. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Manually expand a PVC with the updated storage request. For more information, see "Expanding persistent volume claims (PVCs) with a file system" in Expanding persistent volumes. 2. Edit the {configmap-name} config map in the {namespace-name} project: @@ -187,7 +187,7 @@ Data compaction occurs every two hours. Therefore, a persistent volume (PV) migh * You have access to the cluster as a user with the cluster-admin cluster role. 
* You have created the cluster-monitoring-config ConfigMap object. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -305,7 +305,7 @@ The default log level is info. * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config ConfigMap object. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -370,7 +370,7 @@ Because log rotation is not supported, only enable this feature temporarily when * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config ConfigMap object. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -436,7 +436,7 @@ For default platform monitoring in the openshift-monitoring project, you can ena Because log rotation is not supported, only enable this feature temporarily when you need to troubleshoot an issue. After you finish troubleshooting, disable query logging by reverting the changes you made to the ConfigMap object to enable the feature. ---- -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config ConfigMap object. diff --git a/ocp-product-docs-plaintext/4.16/observability/monitoring/configuring-user-workload-monitoring/configuring-alerts-and-notifications-uwm.txt b/ocp-product-docs-plaintext/4.16/observability/monitoring/configuring-user-workload-monitoring/configuring-alerts-and-notifications-uwm.txt index c6e4b9be..087ec9aa 100644 --- a/ocp-product-docs-plaintext/4.16/observability/monitoring/configuring-user-workload-monitoring/configuring-alerts-and-notifications-uwm.txt +++ b/ocp-product-docs-plaintext/4.16/observability/monitoring/configuring-user-workload-monitoring/configuring-alerts-and-notifications-uwm.txt @@ -13,7 +13,7 @@ If you add the same external Alertmanager configuration for multiple clusters an * You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -95,7 +95,7 @@ After you add a secret to the config map, the secret is mounted as a volume at / * You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. * You have created the secret to be configured in Alertmanager in the {namespace-name} project. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. 
Edit the {configmap-name} config map in the {namespace-name} project: @@ -146,7 +146,7 @@ You can attach custom labels to all time series and alerts leaving Prometheus by * You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -233,7 +233,7 @@ If you are a non-administrator user who has been given the alert-routing-edit cl * A cluster administrator has enabled monitoring for user-defined projects. * A cluster administrator has enabled alert routing for user-defined projects. * You are logged in as a user that has the alert-routing-edit cluster role for the project for which you want to create alert routing. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Create a YAML file for alert routing. The example in this procedure uses a file called example-app-alert-routing.yaml. 2. Add an AlertmanagerConfig YAML definition to the file. For example: @@ -278,7 +278,7 @@ All features of a supported version of upstream Alertmanager are also supported * You have access to the cluster as a user with the cluster-admin cluster role. * You have enabled a separate instance of Alertmanager for user-defined alert routing. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Print the currently active Alertmanager configuration into the file alertmanager.yaml: diff --git a/ocp-product-docs-plaintext/4.16/observability/monitoring/configuring-user-workload-monitoring/configuring-metrics-uwm.txt b/ocp-product-docs-plaintext/4.16/observability/monitoring/configuring-user-workload-monitoring/configuring-metrics-uwm.txt index cbe11296..df57b0cf 100644 --- a/ocp-product-docs-plaintext/4.16/observability/monitoring/configuring-user-workload-monitoring/configuring-metrics-uwm.txt +++ b/ocp-product-docs-plaintext/4.16/observability/monitoring/configuring-user-workload-monitoring/configuring-metrics-uwm.txt @@ -11,7 +11,7 @@ You can configure remote write storage to enable Prometheus to send ingested met * You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). * You have set up a remote write compatible endpoint (such as Thanos) and know the endpoint URL. See the Prometheus remote endpoints and storage documentation for information about endpoints that are compatible with the remote write feature. [IMPORTANT] @@ -457,7 +457,7 @@ You cannot override this default configuration by setting the value of the honor * You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). * You have configured remote write storage. 1. 
Edit the {configmap-name} config map in the {namespace-name} project: diff --git a/ocp-product-docs-plaintext/4.16/observability/monitoring/configuring-user-workload-monitoring/configuring-performance-and-scalability-uwm.txt b/ocp-product-docs-plaintext/4.16/observability/monitoring/configuring-user-workload-monitoring/configuring-performance-and-scalability-uwm.txt index 5e38ead7..b5b3bcd2 100644 --- a/ocp-product-docs-plaintext/4.16/observability/monitoring/configuring-user-workload-monitoring/configuring-performance-and-scalability-uwm.txt +++ b/ocp-product-docs-plaintext/4.16/observability/monitoring/configuring-user-workload-monitoring/configuring-performance-and-scalability-uwm.txt @@ -28,7 +28,7 @@ It is not permitted to move components to control plane or infrastructure nodes. * You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. If you have not done so yet, add a label to the nodes on which you want to run the monitoring components: @@ -84,7 +84,7 @@ You can assign tolerations to the components that monitor user-defined projects, * You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -145,7 +145,7 @@ You can configure these limits and requests for monitoring components that monit To configure CPU and memory resources, specify values for resource limits and requests in the {configmap-name} ConfigMap object in the {namespace-name} namespace. * You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -232,7 +232,7 @@ If you set sample or label limits, no further sample data is ingested for that t * You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project: @@ -289,7 +289,7 @@ You can create alerts that notify you when: * You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. * You have limited the number of samples that can be accepted per target scrape in user-defined projects, by using enforcedSampleLimit. -* You have installed the OpenShift CLI (oc). 
+* You have installed the OpenShift CLI (`oc`). 1. Create a YAML file with alerts that inform you when the targets are down and when the enforced sample limit is approaching. The file in this example is called monitoring-stack-alerts.yaml: @@ -357,7 +357,7 @@ You can configure pod topology spread constraints for monitoring pods by using t * You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: diff --git a/ocp-product-docs-plaintext/4.16/observability/monitoring/configuring-user-workload-monitoring/preparing-to-configure-the-monitoring-stack-uwm.txt b/ocp-product-docs-plaintext/4.16/observability/monitoring/configuring-user-workload-monitoring/preparing-to-configure-the-monitoring-stack-uwm.txt index afda2cf5..624c19a8 100644 --- a/ocp-product-docs-plaintext/4.16/observability/monitoring/configuring-user-workload-monitoring/preparing-to-configure-the-monitoring-stack-uwm.txt +++ b/ocp-product-docs-plaintext/4.16/observability/monitoring/configuring-user-workload-monitoring/preparing-to-configure-the-monitoring-stack-uwm.txt @@ -55,7 +55,7 @@ You must have access to the cluster as a user with the cluster-admin cluster rol ---- * You have access to the cluster as a user with the cluster-admin cluster role. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). * You have created the cluster-monitoring-config ConfigMap object. * You have optionally created and configured the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project. You can add configuration options to this ConfigMap object for the components that monitor user-defined projects. @@ -116,7 +116,7 @@ As a cluster administrator, you can assign the user-workload-monitoring-config-e * You have access to the cluster as a user with the cluster-admin cluster role. * The user account that you are assigning the role to already exists. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Assign the user-workload-monitoring-config-edit role to a user in the openshift-user-workload-monitoring project: @@ -175,7 +175,7 @@ You can allow users to create user-defined alert routing configurations that use * You have access to the cluster as a user with the cluster-admin cluster role. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the cluster-monitoring-config ConfigMap object: @@ -258,7 +258,7 @@ You can grant users permission to configure alert routing for user-defined proje * You have access to the cluster as a user with the cluster-admin cluster role. * You have enabled monitoring for user-defined projects. * The user account that you are assigning the role to already exists. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). * Assign the alert-routing-edit cluster role to a user in the user-defined project: @@ -268,7 +268,7 @@ $ oc -n adm policy add-role-to-user alert-routing-edit 1 For , substitute the namespace for the user-defined project, such as ns1. 
For , substitute the username for the account to which you want to assign the role. -Configuring alert notifications +* Configuring alert notifications # Granting users permissions for monitoring for user-defined projects diff --git a/ocp-product-docs-plaintext/4.16/observability/monitoring/configuring-user-workload-monitoring/storing-and-recording-data-uwm.txt b/ocp-product-docs-plaintext/4.16/observability/monitoring/configuring-user-workload-monitoring/storing-and-recording-data-uwm.txt index e341c10e..88072b4f 100644 --- a/ocp-product-docs-plaintext/4.16/observability/monitoring/configuring-user-workload-monitoring/storing-and-recording-data-uwm.txt +++ b/ocp-product-docs-plaintext/4.16/observability/monitoring/configuring-user-workload-monitoring/storing-and-recording-data-uwm.txt @@ -37,7 +37,7 @@ To use a persistent volume (PV) for monitoring components, you must configure a * You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -118,7 +118,7 @@ You can only expand the size of the PVC. Shrinking the storage size is not possi * You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. * You have configured at least one PVC for components that monitor user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Manually expand a PVC with the updated storage request. For more information, see "Expanding persistent volume claims (PVCs) with a file system" in Expanding persistent volumes. 2. Edit the {configmap-name} config map in the {namespace-name} project: @@ -197,7 +197,7 @@ Data compaction occurs every two hours. Therefore, a persistent volume (PV) migh * You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -247,7 +247,7 @@ By default, for user-defined projects, Thanos Ruler automatically retains metric * You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project: @@ -311,7 +311,7 @@ The default log level is info. * You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. 
* A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -376,7 +376,7 @@ Because log rotation is not supported, only enable this feature temporarily when * You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: diff --git a/ocp-product-docs-plaintext/4.16/observability/monitoring/managing-alerts/managing-alerts-as-a-developer.txt b/ocp-product-docs-plaintext/4.16/observability/monitoring/managing-alerts/managing-alerts-as-a-developer.txt index 215d2a76..7eaa0b6c 100644 --- a/ocp-product-docs-plaintext/4.16/observability/monitoring/managing-alerts/managing-alerts-as-a-developer.txt +++ b/ocp-product-docs-plaintext/4.16/observability/monitoring/managing-alerts/managing-alerts-as-a-developer.txt @@ -164,7 +164,7 @@ You can create alerting rules for user-defined projects. Those alerting rules wi * You have enabled monitoring for user-defined projects. * You are logged in as a cluster administrator or as a user that has the monitoring-rules-edit cluster role for the project where you want to create an alerting rule. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Create a YAML file for alerting rules. In this example, it is called example-app-alerting-rule.yaml. 2. Add an alerting rule configuration to the YAML file. @@ -210,7 +210,7 @@ To list alerting rules for a user-defined project, you must have been assigned t * You have enabled monitoring for user-defined projects. * You are logged in as a user that has the monitoring-rules-view cluster role for your project. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. To list alerting rules in : @@ -231,7 +231,7 @@ You can remove alerting rules for user-defined projects. * You have enabled monitoring for user-defined projects. * You are logged in as a cluster administrator or as a user that has the monitoring-rules-edit cluster role for the project where you want to create an alerting rule. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). * To remove rule in , run the following: diff --git a/ocp-product-docs-plaintext/4.16/observability/monitoring/managing-alerts/managing-alerts-as-an-administrator.txt b/ocp-product-docs-plaintext/4.16/observability/monitoring/managing-alerts/managing-alerts-as-an-administrator.txt index 30c3e085..6b535b1b 100644 --- a/ocp-product-docs-plaintext/4.16/observability/monitoring/managing-alerts/managing-alerts-as-an-administrator.txt +++ b/ocp-product-docs-plaintext/4.16/observability/monitoring/managing-alerts/managing-alerts-as-an-administrator.txt @@ -169,7 +169,7 @@ These alerting rules trigger alerts based on the values of chosen metrics. ---- * You have access to the cluster as a user that has the cluster-admin cluster role. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Create a new YAML configuration file named example-alerting-rule.yaml. 2. 
Add an AlertingRule resource to the YAML file. @@ -218,7 +218,7 @@ As a cluster administrator, you can modify core platform alerts before Alertmana For example, you can change the severity label of an alert, add a custom label, or exclude an alert from being sent to Alertmanager. * You have access to the cluster as a user with the cluster-admin cluster role. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Create a new YAML configuration file named example-modified-alerting-rule.yaml. 2. Add an AlertRelabelConfig resource to the YAML file. @@ -285,7 +285,7 @@ You can create alerting rules for user-defined projects. Those alerting rules wi * You have enabled monitoring for user-defined projects. * You are logged in as a cluster administrator or as a user that has the monitoring-rules-edit cluster role for the project where you want to create an alerting rule. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Create a YAML file for alerting rules. In this example, it is called example-app-alerting-rule.yaml. 2. Add an alerting rule configuration to the YAML file. @@ -331,7 +331,7 @@ As a cluster administrator, you can list alerting rules for core Red Hat OpenShift Container Platform and user-defined projects together in a single view. * You have access to the cluster as a user with the cluster-admin role. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. From the Administrator perspective of the Red Hat OpenShift Container Platform web console, go to Observe -> Alerting -> Alerting rules. 2. Select the Platform and User sources in the Filter drop-down menu. @@ -347,7 +347,7 @@ You can remove alerting rules for user-defined projects. * You have enabled monitoring for user-defined projects. * You are logged in as a cluster administrator or as a user that has the monitoring-rules-edit cluster role for the project where you want to create an alerting rule. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). * To remove rule in , run the following: diff --git a/ocp-product-docs-plaintext/4.16/observability/monitoring/troubleshooting-monitoring-issues.txt b/ocp-product-docs-plaintext/4.16/observability/monitoring/troubleshooting-monitoring-issues.txt index 9e0d0078..ff187c0c 100644 --- a/ocp-product-docs-plaintext/4.16/observability/monitoring/troubleshooting-monitoring-issues.txt +++ b/ocp-product-docs-plaintext/4.16/observability/monitoring/troubleshooting-monitoring-issues.txt @@ -200,7 +200,7 @@ Using attributes that are bound to a limited set of possible values reduces the * Enforce limits on the number of samples that can be scraped across user-defined projects. This requires cluster administrator privileges. * You have access to the cluster as a user with the cluster-admin cluster role. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. In the Administrator perspective, navigate to Observe -> Metrics. 2. Enter a Prometheus Query Language (PromQL) query in the Expression field. @@ -273,7 +273,7 @@ There are two KubePersistentVolumeFillingUp alerts: To address this issue, you can remove Prometheus time-series database (TSDB) blocks to create more space for the PV. * You have access to the cluster as a user with the cluster-admin cluster role. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. 
List the size of all TSDB blocks, sorted from oldest to newest, by running the following command: diff --git a/ocp-product-docs-plaintext/4.16/security/cert_manager_operator/cert-manager-customizing-api-fields.txt b/ocp-product-docs-plaintext/4.16/security/cert_manager_operator/cert-manager-customizing-api-fields.txt index 7f2cffda..06da814e 100644 --- a/ocp-product-docs-plaintext/4.16/security/cert_manager_operator/cert-manager-customizing-api-fields.txt +++ b/ocp-product-docs-plaintext/4.16/security/cert_manager_operator/cert-manager-customizing-api-fields.txt @@ -1,13 +1,111 @@ -# Customizing cert-manager Operator API fields +# Customizing the cert-manager Operator by using the CertManager custom resource -You can customize the cert-manager Operator for Red Hat OpenShift API fields by overriding environment variables and arguments. +After installing the cert-manager Operator for Red Hat OpenShift, you can perform the following actions by configuring the CertManager custom resource (CR): +* Configure the arguments to modify the behavior of the cert-manager components, such as the cert-manager controller, CA injector, and Webhook. +* Set environment variables for the controller pod. +* Define resource requests and limits to manage CPU and memory usage. +* Configure scheduling rules to control where pods run in your cluster. + +```yaml +apiVersion: operator.openshift.io/v1alpha1 +kind: CertManager +metadata: + name: cluster +spec: + controllerConfig: + overrideArgs: + - "--dns01-recursive-nameservers=8.8.8.8:53,1.1.1.1:53" + overrideEnv: + - name: HTTP_PROXY + value: http://proxy.example.com:8080 + overrideResources: + limits: + cpu: "200m" + memory: "512Mi" + requests: + cpu: "100m" + memory: "256Mi" + overrideScheduling: + nodeSelector: + custom: "label" + tolerations: + - key: "key1" + operator: "Equal" + value: "value1" + effect: "NoSchedule" + + webhookConfig: + overrideArgs: +#... + overrideResources: +#... + overrideScheduling: +#... + + cainjectorConfig: + overrideArgs: +#... + overrideResources: +#... + overrideScheduling: +#... +``` + [WARNING] ---- To override unsupported arguments, you can add spec.unsupportedConfigOverrides section in the CertManager resource, but using spec.unsupportedConfigOverrides is unsupported. ---- +# Explanation of fields in the CertManager custom resource + +You can use the CertManager custom resource (CR) to configure the following core components of the cert-manager Operator for Red Hat OpenShift: + +* Cert-manager controller: You can use the spec.controllerConfig field to configure the cert‑manager controller pod. +* Webhook: You can use the spec.webhookConfig field to configure the webhook pod, which handles validation and mutation requests. +* CA injector: You can use the spec.cainjectorConfig field to configure the CA injector pod. + +## Common configurable fields in the CertManager CR for the cert-manager components + +The following table lists the common fields that you can configure in the spec.controllerConfig, spec.webhookConfig, and spec.cainjectorConfig sections in the CertManager CR. + + + +## Overridable arguments for the cert-manager components + +You can configure the overridable arguments for the cert-manager components in the spec.controllerConfig, spec.webhookConfig, and spec.cainjectorConfig sections in the CertManager CR. 
+ +The following table describes the overridable arguments for the cert-manager components: + + + +## Overridable environment variables for the cert-manager controller + +You can configure the overridable environment variables for the cert-manager controller in the spec.controllerConfig.overrideEnv field in the CertManager CR. + +The following table describes the overridable environment variables for the cert-manager controller: + + + +## Overridable resource parameters for the cert-manager components + +You can configure the CPU and memory limits for the cert-manager components in the spec.controllerConfig, spec.webhookConfig, and spec.cainjectorConfig sections in the CertManager CR. + +The following table describes the overridable resource parameters for the cert-manager components: + + + +## Overridable scheduling parameters for the cert-manager components + +You can configure the pod scheduling constraints for the cert-manager components in the spec.controllerConfig, spec.webhookConfig, and spec.cainjectorConfig sections in the CertManager CR. + +The following table describes the pod scheduling parameters for the cert-manager components: + + + +* Deleting a TLS secret automatically upon Certificate removal + # Customizing cert-manager by overriding environment variables from the cert-manager Operator API You can override the supported environment variables for the cert-manager Operator for Red Hat OpenShift by adding a spec.controllerConfig section in the CertManager resource. @@ -42,6 +140,11 @@ spec: Replace with the proxy server URL. Replace with a comma separated list of domains. These domains are ignored by the proxy server. + +[NOTE] +---- +For more information about the overridable environment variables, see "Overridable environment variables for the cert-manager components" in "Explanation of fields in the CertManager custom resource". +---- 3. Save your changes and quit the text editor to apply your changes. 1. Verify that the cert-manager controller pod is redeployed by running the following command: @@ -77,6 +180,8 @@ Example output ``` +* Explanation of fields in the CertManager custom resource + # Customizing cert-manager by overriding arguments from the cert-manager Operator API You can override the supported arguments for the cert-manager Operator for Red Hat OpenShift by adding a spec.controllerConfig section in the CertManager resource. @@ -102,30 +207,24 @@ spec: controllerConfig: overrideArgs: - '--dns01-recursive-nameservers=' 1 - - '--dns01-recursive-nameservers-only' 2 - - '--acme-http01-solver-nameservers=:' 3 - - '--v=' 4 - - '--metrics-listen-address=:' 5 - - '--issuer-ambient-credentials' 6 + - '--dns01-recursive-nameservers-only' + - '--acme-http01-solver-nameservers=:' + - '--v=' + - '--metrics-listen-address=:' + - '--issuer-ambient-credentials' + - '--acme-http01-solver-resource-limits-cpu=' + - '--acme-http01-solver-resource-limits-memory=' + - '--acme-http01-solver-resource-request-cpu=' + - '--acme-http01-solver-resource-request-memory=' webhookConfig: overrideArgs: - - '--v=4' 4 + - '--v=' cainjectorConfig: overrideArgs: - - '--v=2' 4 + - '--v=' ``` -Provide a comma-separated list of nameservers to query for the DNS-01 self check. The nameservers can be specified either as :, for example, 1.1.1.1:53, or use DNS over HTTPS (DoH), for example, https://1.1.1.1/dns-query. -Specify to only use recursive nameservers instead of checking the authoritative nameservers associated with that domain.
-Provide a comma-separated list of : nameservers to query for the Automated Certificate Management Environment (ACME) HTTP01 self check. For example, --acme-http01-solver-nameservers=1.1.1.1:53. -Specify to set the log level verbosity to determine the verbosity of log messages. -Specify the host and port for the metrics endpoint. The default value is --metrics-listen-address=0.0.0.0:9402. -You must use the --issuer-ambient-credentials argument when configuring an ACME Issuer to solve DNS-01 challenges by using ambient credentials. - -[NOTE] ---- -DNS over HTTPS (DoH) is supported starting only from cert-manager Operator for Red Hat OpenShift version 1.13.0 and later. ---- +For information about the overridable arguments, see "Overridable arguments for the cert-manager components" in "Explanation of fields in the CertManager custom resource". 3. Save your changes and quit the text editor to apply your changes. * Verify that arguments are updated for cert-manager pods by running the following command: @@ -176,6 +275,8 @@ Example output ``` +* Explanation of fields in the CertManager custom resource + # Deleting a TLS secret automatically upon Certificate removal You can enable the --enable-certificate-owner-ref flag for the cert-manager Operator for Red Hat OpenShift by adding a spec.controllerConfig section in the CertManager resource. The --enable-certificate-owner-ref flag sets the certificate resource as an owner of the secret where the TLS certificate is stored. @@ -248,7 +349,7 @@ Example output # Overriding CPU and memory limits for the cert-manager components -After installing the cert-manager Operator for Red Hat OpenShift, you can configure the CPU and memory limits from the cert-manager Operator for Red Hat OpenShift API for the cert-manager components such as cert-manager controller, CA injector, and Webhook. +After installing the cert-manager Operator for Red Hat OpenShift, you can configure the CPU and memory limits from the cert-manager Operator for Red Hat OpenShift API for the cert-manager components, such as the cert-manager controller, CA injector, and Webhook. * You have access to the Red Hat OpenShift Container Platform cluster as a user with the cluster-admin role. * You have installed version 1.12.0 or later of the cert-manager Operator for Red Hat OpenShift. @@ -316,48 +417,37 @@ Example output The spec.resources field is empty by default. The cert-manager components do not have CPU and memory limits. 3. To configure the CPU and memory limits for the cert-manager controller, CA injector, and Webhook, enter the following command: -```yaml +```terminal $ oc patch certmanager.operator cluster --type=merge -p=" spec: controllerConfig: - overrideResources: - limits: 1 - cpu: 200m 2 - memory: 64Mi 3 - requests: 4 - cpu: 10m 2 - memory: 16Mi 3 + overrideResources: 1 + limits: + cpu: 200m + memory: 64Mi + requests: + cpu: 10m + memory: 16Mi webhookConfig: overrideResources: - limits: 5 - cpu: 200m 6 - memory: 64Mi 7 - requests: 8 - cpu: 10m 6 - memory: 16Mi 7 + limits: + cpu: 200m + memory: 64Mi + requests: + cpu: 10m + memory: 16Mi cainjectorConfig: overrideResources: - limits: 9 - cpu: 200m 10 - memory: 64Mi 11 - requests: 12 - cpu: 10m 10 - memory: 16Mi 11 + limits: + cpu: 200m + memory: 64Mi + requests: + cpu: 10m + memory: 16Mi " ``` -Defines the maximum amount of CPU and memory that a single container in a cert-manager controller pod can request. -You can specify the CPU limit that a cert-manager controller pod can request. The default value is 10m.
-You can specify the memory limit that a cert-manager controller pod can request. The default value is 32Mi. -Defines the amount of CPU and memory set by scheduler for the cert-manager controller pod. -Defines the maximum amount of CPU and memory that a single container in a CA injector pod can request. -You can specify the CPU limit that a CA injector pod can request. The default value is 10m. -You can specify the memory limit that a CA injector pod can request. The default value is 32Mi. -Defines the amount of CPU and memory set by scheduler for the CA injector pod. -Defines the maximum amount of CPU and memory Defines the maximum amount of CPU and memory that a single container in a Webhook pod can request. -You can specify the CPU limit that a Webhook pod can request. The default value is 10m. -You can specify the memory limit that a Webhook pod can request. The default value is 32Mi. -Defines the amount of CPU and memory set by scheduler for the Webhook pod. +For information about the overridable resource parameters, see "Overridable resource parameters for the cert-manager components" in "Explanation of fields in the CertManager custom resource". Example output ```terminal @@ -429,9 +519,11 @@ Example output ``` +* Explanation of fields in the CertManager custom resource + # Configuring scheduling overrides for cert-manager components -You can configure the pod scheduling from the cert-manager Operator for Red Hat OpenShift API for the cert-manager Operator for Red Hat OpenShift components such as cert-manager controller, CA injector, and Webhook. +You can configure the pod scheduling from the cert-manager Operator for Red Hat OpenShift API for the cert-manager Operator for Red Hat OpenShift components, such as the cert-manager controller, CA injector, and Webhook. * You have access to the Red Hat OpenShift Container Platform cluster as a user with the cluster-admin role. * You have installed version 1.15.0 or later of the cert-manager Operator for Red Hat OpenShift. @@ -442,37 +534,33 @@ You can configure the pod scheduling from the cert-manager Operat $ oc patch certmanager.operator cluster --type=merge -p=" spec: controllerConfig: - overrideScheduling: + overrideScheduling: 1 nodeSelector: - node-role.kubernetes.io/control-plane: '' 1 + node-role.kubernetes.io/control-plane: '' tolerations: - key: node-role.kubernetes.io/master operator: Exists - effect: NoSchedule 2 + effect: NoSchedule webhookConfig: overrideScheduling: nodeSelector: - node-role.kubernetes.io/control-plane: '' 3 + node-role.kubernetes.io/control-plane: '' tolerations: - key: node-role.kubernetes.io/master operator: Exists - effect: NoSchedule 4 + effect: NoSchedule cainjectorConfig: overrideScheduling: nodeSelector: - node-role.kubernetes.io/control-plane: '' 5 + node-role.kubernetes.io/control-plane: '' tolerations: - key: node-role.kubernetes.io/master operator: Exists - effect: NoSchedule" 6 + effect: NoSchedule" ``` -Defines the nodeSelector for the cert-manager controller deployment. -Defines the tolerations for the cert-manager controller deployment. -Defines the nodeSelector for the cert-manager webhook deployment. -Defines the tolerations for the cert-manager webhook deployment. -Defines the nodeSelector for the cert-manager cainjector deployment. -Defines the tolerations for the cert-manager cainjector deployment.
+For information about the overridable scheduling parameters, see "Overridable scheduling parameters for the cert-manager components" in "Explanation of fields in the CertManager custom resource". 1. Verify pod scheduling settings for cert-manager pods: 1. Check the deployments in the cert-manager namespace to confirm they have the correct nodeSelector and tolerations by running the following command: @@ -517,3 +605,6 @@ cert-manager-webhook ```terminal $ oc get events -n cert-manager --field-selector reason=Scheduled ``` + + +* Explanation of fields in the CertManager custom resource \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.16/security/cert_manager_operator/cert-manager-operator-release-notes.txt b/ocp-product-docs-plaintext/4.16/security/cert_manager_operator/cert-manager-operator-release-notes.txt index 40c02a83..44681f93 100644 --- a/ocp-product-docs-plaintext/4.16/security/cert_manager_operator/cert-manager-operator-release-notes.txt +++ b/ocp-product-docs-plaintext/4.16/security/cert_manager_operator/cert-manager-operator-release-notes.txt @@ -5,6 +5,44 @@ The cert-manager Operator for Red Hat OpenShift is a cluster-wide service that p These release notes track the development of cert-manager Operator for Red Hat OpenShift. For more information, see About the cert-manager Operator for Red Hat OpenShift. +# cert-manager Operator for Red Hat OpenShift 1.17.0 + +Issued: 2025-08-06 + +The following advisories are available for the cert-manager Operator for Red Hat OpenShift 1.17.0: + +* RHBA-2025:13182 +* RHBA-2025:13134 +* RHBA-2025:13133 + +Version 1.17.0 of the cert-manager Operator for Red Hat OpenShift is based on the upstream cert-manager version v1.17.4. For more information, see the cert-manager project release notes for v1.17.4. + +## Bug fixes + +* Previously, the status field in the IstioCSR custom resource (CR) was not set to Ready even after the successful deployment of Istio‑CSR. With this fix, the status field is correctly set to Ready, ensuring consistent and reliable status reporting. (CM-546) + +## New features and enhancements + +Support to configure resource requests and limits for ACME HTTP‑01 solver pods + +With this release, the cert-manager Operator for Red Hat OpenShift supports configuring CPU and memory resource requests and limits for ACME HTTP‑01 solver pods. You can configure the CPU and memory resource requests and limits by using the following overridable arguments in the CertManager custom resource (CR): + +* --acme-http01-solver-resource-limits-cpu +* --acme-http01-solver-resource-limits-memory +* --acme-http01-solver-resource-request-cpu +* --acme-http01-solver-resource-request-memory + +For more information, see Overridable arguments for the cert‑manager components. 
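To illustrate the new overridable arguments listed above, the following sketch sets them through spec.controllerConfig.overrideArgs in the CertManager CR; the CPU and memory values are placeholders chosen for the example, not documented defaults:

```yaml
apiVersion: operator.openshift.io/v1alpha1
kind: CertManager
metadata:
  name: cluster
spec:
  controllerConfig:
    overrideArgs:
      # Placeholder resource values for the ACME HTTP-01 solver pods; adjust for your workload.
      - "--acme-http01-solver-resource-request-cpu=50m"
      - "--acme-http01-solver-resource-request-memory=64Mi"
      - "--acme-http01-solver-resource-limits-cpu=100m"
      - "--acme-http01-solver-resource-limits-memory=128Mi"
```

The same arguments can also be applied with oc patch certmanager.operator cluster --type=merge, as in the earlier overrideArgs examples.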
+ +## CVEs + +* CVE-2025-22866 +* CVE-2025-22868 +* CVE-2025-22872 +* CVE-2025-22870 +* CVE-2025-27144 +* CVE-2025-22871 + # cert-manager Operator for Red Hat OpenShift 1.16.1 Issued: 2025-07-10 diff --git a/ocp-product-docs-plaintext/4.16/service_mesh/v2x/servicemesh-release-notes.txt b/ocp-product-docs-plaintext/4.16/service_mesh/v2x/servicemesh-release-notes.txt index 30f95c54..b2f115c8 100644 --- a/ocp-product-docs-plaintext/4.16/service_mesh/v2x/servicemesh-release-notes.txt +++ b/ocp-product-docs-plaintext/4.16/service_mesh/v2x/servicemesh-release-notes.txt @@ -2,14 +2,32 @@ +# Red Hat OpenShift Service Mesh version 2.6.9 + +This release of Red Hat OpenShift Service Mesh updates the Red Hat OpenShift Service Mesh Operator version to 2.6.9, and includes the following ServiceMeshControlPlane resource version updates: 2.6.9 and 2.5.12. + +This release addresses Common Vulnerabilities and Exposures (CVEs) and is supported on Red Hat OpenShift Container Platform 4.14 and later. + +You can use the most current version of the Kiali Operator provided by Red Hat with all supported versions of Red Hat OpenShift Service Mesh. The version of Service Mesh automatically ensures a compatible version of Kiali. + +## Component updates + + + +# Red Hat OpenShift Service Mesh version 2.5.12 + +This release of Red Hat OpenShift Service Mesh is included with the Red Hat OpenShift Service Mesh Operator 2.6.9 and is supported on Red Hat OpenShift Container Platform 4.14 and later. This release addresses Common Vulnerabilities and Exposures (CVEs). + +## Component updates + + + # Red Hat OpenShift Service Mesh version 2.6.8 This release of Red Hat OpenShift Service Mesh updates the Red Hat OpenShift Service Mesh Operator version to 2.6.8, and includes the following ServiceMeshControlPlane resource version updates: 2.6.8 and 2.5.11. This release addresses Common Vulnerabilities and Exposures (CVEs) and is supported on Red Hat OpenShift Container Platform 4.14 and later. -The most current version of the Red Hat OpenShift Service Mesh Operator can be used with all supported versions of Service Mesh. The version of Service Mesh is specified by using the ServiceMeshControlPlane resource. - You can use the most current version of the Kiali Operator provided by Red Hat with all supported versions of Red Hat OpenShift Service Mesh. The version of Service Mesh automatically ensures a compatible version of Kiali. ## Component updates diff --git a/ocp-product-docs-plaintext/4.16/support/troubleshooting/investigating-monitoring-issues.txt b/ocp-product-docs-plaintext/4.16/support/troubleshooting/investigating-monitoring-issues.txt index de764350..07a86573 100644 --- a/ocp-product-docs-plaintext/4.16/support/troubleshooting/investigating-monitoring-issues.txt +++ b/ocp-product-docs-plaintext/4.16/support/troubleshooting/investigating-monitoring-issues.txt @@ -204,7 +204,7 @@ Using attributes that are bound to a limited set of possible values reduces the * Enforce limits on the number of samples that can be scraped across user-defined projects. This requires cluster administrator privileges. * You have access to the cluster as a user with the cluster-admin cluster role. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. In the Administrator perspective, navigate to Observe -> Metrics. 2. Enter a Prometheus Query Language (PromQL) query in the Expression field. 
@@ -275,7 +275,7 @@ There are two KubePersistentVolumeFillingUp alerts: To address this issue, you can remove Prometheus time-series database (TSDB) blocks to create more space for the PV. * You have access to the cluster as a user with the cluster-admin cluster role. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. List the size of all TSDB blocks, sorted from oldest to newest, by running the following command: diff --git a/ocp-product-docs-plaintext/4.16/support/troubleshooting/troubleshooting-installations.txt b/ocp-product-docs-plaintext/4.16/support/troubleshooting/troubleshooting-installations.txt index 6631cb67..9d102970 100644 --- a/ocp-product-docs-plaintext/4.16/support/troubleshooting/troubleshooting-installations.txt +++ b/ocp-product-docs-plaintext/4.16/support/troubleshooting/troubleshooting-installations.txt @@ -110,7 +110,7 @@ $ ./openshift-install create ignition-configs --dir=./install_dir You can monitor high-level installation, bootstrap, and control plane logs as an Red Hat OpenShift Container Platform installation progresses. This provides greater visibility into how an installation progresses and helps identify the stage at which an installation failure occurs. * You have access to the cluster as a user with the cluster-admin cluster role. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). * You have SSH access to your hosts. * You have the fully qualified domain names of the bootstrap and control plane nodes. diff --git a/ocp-product-docs-plaintext/4.16/virt/about_virt/about-virt.txt b/ocp-product-docs-plaintext/4.16/virt/about_virt/about-virt.txt index 1d83218d..1812c569 100644 --- a/ocp-product-docs-plaintext/4.16/virt/about_virt/about-virt.txt +++ b/ocp-product-docs-plaintext/4.16/virt/about_virt/about-virt.txt @@ -30,6 +30,8 @@ You can use OpenShift Virtualization with OVN-Kubernetes, OpenShift SDN, or one You can check your OpenShift Virtualization cluster for compliance issues by installing the Compliance Operator and running a scan with the ocp4-moderate and ocp4-moderate-node profiles. The Compliance Operator uses OpenSCAP, a NIST-certified tool, to scan and enforce security policies. +For information about partnering with Independent Software Vendors (ISVs) and Services partners for specialized storage, networking, backup, and additional functionality, see the Red Hat Ecosystem Catalog. + ## OpenShift Virtualization supported cluster version The latest stable release of OpenShift Virtualization 4.15 is 4.15.0. @@ -40,6 +42,8 @@ OpenShift Virtualization 4.15 is supported for use on Red Hat OpenShift Containe If you use the storage API with known storage providers, the volume and access modes are selected automatically. However, if you use a storage class that does not have a storage profile, you must configure the volume and access mode. +For a list of known storage providers for OpenShift Virtualization, see the Red Hat Ecosystem Catalog. + For best results, use the ReadWriteMany (RWX) access mode and the Block volume mode. This is important for the following reasons: * ReadWriteMany (RWX) access mode is required for live migration. 
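The storage guidance above recommends the ReadWriteMany (RWX) access mode and the Block volume mode for virtualization workloads; a minimal PersistentVolumeClaim sketch along those lines follows, where the claim name, storage class placeholder, and size are illustrative assumptions rather than values from this diff:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vm-disk-rwx                               # illustrative name
spec:
  storageClassName: <rwx-capable-storage-class>   # replace with a storage class that supports RWX and Block
  accessModes:
    - ReadWriteMany                               # required for live migration
  volumeMode: Block
  resources:
    requests:
      storage: 30Gi
```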
diff --git a/ocp-product-docs-plaintext/4.16/virt/install/preparing-cluster-for-virt.txt b/ocp-product-docs-plaintext/4.16/virt/install/preparing-cluster-for-virt.txt index d58aa8db..88621987 100644 --- a/ocp-product-docs-plaintext/4.16/virt/install/preparing-cluster-for-virt.txt +++ b/ocp-product-docs-plaintext/4.16/virt/install/preparing-cluster-for-virt.txt @@ -120,6 +120,8 @@ To mark a storage class as the default for virtualization workloads, set the ann If you use the storage API with known storage providers, the volume and access modes are selected automatically. However, if you use a storage class that does not have a storage profile, you must configure the volume and access mode. +For a list of known storage providers for OpenShift Virtualization, see the Red Hat Ecosystem Catalog. + For best results, use the ReadWriteMany (RWX) access mode and the Block volume mode. This is important for the following reasons: * ReadWriteMany (RWX) access mode is required for live migration. diff --git a/ocp-product-docs-plaintext/4.16/virt/monitoring/virt-prometheus-queries.txt b/ocp-product-docs-plaintext/4.16/virt/monitoring/virt-prometheus-queries.txt index 83ac9f0f..420ffa9b 100644 --- a/ocp-product-docs-plaintext/4.16/virt/monitoring/virt-prometheus-queries.txt +++ b/ocp-product-docs-plaintext/4.16/virt/monitoring/virt-prometheus-queries.txt @@ -17,7 +17,7 @@ cluster administrator or as a user with view permissions for all projects, you can access metrics for all default Red Hat OpenShift Container Platform and user-defined projects in the Metrics UI. * You have access to the cluster as a user with the cluster-admin cluster role or with view permissions for all projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. From the Administrator perspective in the Red Hat OpenShift Container Platform web console, select Observe -> Metrics. 2. To add one or more queries, do any of the following: diff --git a/ocp-product-docs-plaintext/4.16/virt/vm_networking/virt-hot-plugging-network-interfaces.txt b/ocp-product-docs-plaintext/4.16/virt/vm_networking/virt-hot-plugging-network-interfaces.txt index a16a31ea..1098f24b 100644 --- a/ocp-product-docs-plaintext/4.16/virt/vm_networking/virt-hot-plugging-network-interfaces.txt +++ b/ocp-product-docs-plaintext/4.16/virt/vm_networking/virt-hot-plugging-network-interfaces.txt @@ -25,21 +25,12 @@ If you restart the VM after hot plugging an interface, that interface becomes pa Hot plug a secondary network interface to a virtual machine (VM) while the VM is running. * A network attachment definition is configured in the same namespace as your VM. +* The VM to which you want to hot plug the network interface is running. * You have installed the virtctl tool. -* You have installed the OpenShift CLI (oc). - -1. If the VM to which you want to hot plug the network interface is not running, start it by using the following command: - -```terminal -$ virtctl start -n -``` - -2. Use the following command to add the new network interface to the running VM. Editing the VM specification adds the new network interface to the VM and virtual machine instance (VMI) configuration but does not attach it to the running VM. - -```terminal -$ oc edit vm -``` +* You have permission to create and list VirtualMachineInstanceMigration objects. +* You have installed the OpenShift CLI (`oc`). +1. 
Use your preferred text editor to edit the VirtualMachine manifest, as shown in the following example: Example VM configuration ```yaml @@ -70,7 +61,7 @@ template: Specifies the name of the new network interface. Specifies the name of the network. This must be the same as the name of the new network interface that you defined in the template.spec.domain.devices.interfaces list. Specifies the name of the NetworkAttachmentDefinition object. -3. To attach the network interface to the running VM, live migrate the VM by running the following command: +2. To attach the network interface to the running VM, live migrate the VM by running the following command: ```terminal $ virtctl migrate  diff --git a/ocp-product-docs-plaintext/4.17/architecture/architecture.txt b/ocp-product-docs-plaintext/4.17/architecture/architecture.txt index 69383728..ffbc7af1 100644 --- a/ocp-product-docs-plaintext/4.17/architecture/architecture.txt +++ b/ocp-product-docs-plaintext/4.17/architecture/architecture.txt @@ -144,7 +144,7 @@ In Red Hat OpenShift Container Platform 4.17, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. diff --git a/ocp-product-docs-plaintext/4.17/backup_and_restore/application_backup_and_restore/backing_up_and_restoring/backing-up-applications.txt b/ocp-product-docs-plaintext/4.17/backup_and_restore/application_backup_and_restore/backing_up_and_restoring/backing-up-applications.txt index eaa29887..2e75155d 100644 --- a/ocp-product-docs-plaintext/4.17/backup_and_restore/application_backup_and_restore/backing_up_and_restoring/backing-up-applications.txt +++ b/ocp-product-docs-plaintext/4.17/backup_and_restore/application_backup_and_restore/backing_up_and_restoring/backing-up-applications.txt @@ -4,8 +4,11 @@ Frequent backups might consume storage on the backup storage location. Check the frequency of backups, retention time, and the amount of data of the persistent volumes (PVs) if using non-local backups, for example, S3 buckets. Because all taken backup remains until expired, also check the time to live (TTL) setting of the schedule. You can back up applications by creating a Backup custom resource (CR). For more information, see Creating a Backup CR. +The following are the different backup types for a Backup CR: * The Backup CR creates backup files for Kubernetes resources and internal images on S3 object storage. +* If you use Velero's snapshot feature to back up data stored on the persistent volume, only snapshot-related information is stored in the S3 bucket along with the OpenShift object data. * If your cloud provider has a native snapshot API or supports CSI snapshots, the Backup CR backs up persistent volumes (PVs) by creating snapshots. For more information about working with CSI snapshots, see Backing up persistent volumes with CSI snapshots. +If the underlying storage or the backup bucket is part of the same cluster, then the data might be lost in case of a disaster. For more information about CSI volume snapshots, see CSI volume snapshots.
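To make the Backup CR discussion above concrete, here is a minimal sketch of such a CR that enables the snapshot feature and sets a TTL. The name example-backup, the namespace openshift-adp, the application namespace my-app, and the TTL value are assumptions for illustration only, not text from the patch.

```yaml
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: example-backup            # illustrative name
  namespace: openshift-adp        # assumed OADP namespace
spec:
  includedNamespaces:
    - my-app                      # assumed application namespace to back up
  snapshotVolumes: true           # store snapshot-related information alongside the object data
  ttl: 720h0m0s                   # the backup remains until this TTL expires
```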
[IMPORTANT] diff --git a/ocp-product-docs-plaintext/4.17/hosted_control_planes/hcp-deploy/hcp-deploy-bm.txt b/ocp-product-docs-plaintext/4.17/hosted_control_planes/hcp-deploy/hcp-deploy-bm.txt index f4285558..1a807614 100644 --- a/ocp-product-docs-plaintext/4.17/hosted_control_planes/hcp-deploy/hcp-deploy-bm.txt +++ b/ocp-product-docs-plaintext/4.17/hosted_control_planes/hcp-deploy/hcp-deploy-bm.txt @@ -9,8 +9,8 @@ The management cluster is not the same thing as the managed cluster. A managed c ---- The hosted control planes feature is enabled by default. The multicluster engine Operator supports only the default local-cluster, which is a hub cluster that is managed, and the hub cluster as the management cluster. If you have Red Hat Advanced Cluster Management installed, you can use the managed hub cluster, also known as the local-cluster, as the management cluster. -A hosted cluster is an Red Hat OpenShift Container Platform cluster with its API endpoint and control plane that are hosted on the management cluster. The hosted cluster includes the control plane and its corresponding data plane. You can use the multicluster engine Operator console or the hosted control plane command-line interface, hcp, to create a hosted cluster. -The hosted cluster is automatically imported as a managed cluster. If you want to disable this automatic import feature, see Disabling the automatic import of hosted clusters into multicluster engine Operator. +A hosted cluster is an Red Hat OpenShift Container Platform cluster with its API endpoint and control plane that are hosted on the management cluster. The hosted cluster includes the control plane and its corresponding data plane. You can use the multicluster engine Operator console or the hosted control plane command-line interface (hcp) to create a hosted cluster. +The hosted cluster is automatically imported as a managed cluster. If you want to disable this automatic import feature, see "Disabling the automatic import of hosted clusters into multicluster engine Operator". # Preparing to deploy hosted control planes on bare metal @@ -213,7 +213,7 @@ cluster-api-f75d86f8c-56wfz 1/1 Running 0 4m To create a hosted cluster by using the console, complete the following steps. -1. Open the Red Hat OpenShift Container Platform web console and log in by entering your administrator credentials. For instructions to open the console, see Accessing the web console. +1. Open the Red Hat OpenShift Container Platform web console and log in by entering your administrator credentials. For instructions to open the console, see "Accessing the web console". 2. In the console header, ensure that All Clusters is selected. 3. Click Infrastructure -> Clusters. 4. Click Create cluster -> Host inventory -> Hosted control plane. @@ -224,7 +224,7 @@ The Create cluster page is displayed. [NOTE] ---- As you enter details about the cluster, you might find the following tips useful: -* If you want to use predefined values to automatically populate fields in the console, you can create a host inventory credential. For more information, see Creating a credential for an on-premises environment. +* If you want to use predefined values to automatically populate fields in the console, you can create a host inventory credential. For more information, see "Creating a credential for an on-premises environment". * On the Cluster details page, the pull secret is your Red Hat OpenShift Container Platform pull secret that you use to access Red Hat OpenShift Container Platform resources. 
If you selected a host inventory credential, the pull secret is automatically populated. * On the Node pools page, the namespace contains the hosts for the node pool. If you created a host inventory by using the console, the console creates a dedicated namespace. * On the Networking page, you select an API server publishing strategy. The API server for the hosted cluster can be exposed either by using an existing load balancer or as a service of the NodePort type. A DNS entry must exist for the api.. setting that points to the destination where the API server can be reached. This entry can be a record that points to one of the nodes in the management cluster or a record that points to a load balancer that redirects incoming traffic to the Ingress pods. @@ -281,7 +281,7 @@ The --api-server-address flag defines the IP address that is used for the Kubern Specify the icsp.yaml file that defines ICSP and your mirror registries. Specify the path to your SSH public key. The default file path is ~/.ssh/id_rsa.pub. Specify your hosted cluster namespace. -Specify the supported Red Hat OpenShift Container Platform version that you want to use, for example, 4.17.0-multi. If you are using a disconnected environment, replace with the digest image. To extract the Red Hat OpenShift Container Platform release image digest, see Extracting the Red Hat OpenShift Container Platform release image digest. +Specify the supported Red Hat OpenShift Container Platform version that you want to use, for example, 4.17.0-multi. If you are using a disconnected environment, replace with the digest image. To extract the Red Hat OpenShift Container Platform release image digest, see "Extracting the Red Hat OpenShift Container Platform release image digest". * To create credentials that you can reuse when you create a hosted cluster with the console, see Creating a credential for an on-premises environment. * To access a hosted cluster, see Accessing the hosted cluster. diff --git a/ocp-product-docs-plaintext/4.17/hosted_control_planes/hcp-deploy/hcp-deploy-non-bm.txt b/ocp-product-docs-plaintext/4.17/hosted_control_planes/hcp-deploy/hcp-deploy-non-bm.txt index f329629b..a314c02d 100644 --- a/ocp-product-docs-plaintext/4.17/hosted_control_planes/hcp-deploy/hcp-deploy-non-bm.txt +++ b/ocp-product-docs-plaintext/4.17/hosted_control_planes/hcp-deploy/hcp-deploy-non-bm.txt @@ -282,7 +282,7 @@ The --api-server-address flag defines the IP address that is used for the Kubern Specify the icsp.yaml file that defines ICSP and your mirror registries. Specify the path to your SSH public key. The default file path is ~/.ssh/id_rsa.pub. Specify your hosted cluster namespace. -Specify the supported Red Hat OpenShift Container Platform version that you want to use, for example, 4.17.0-multi. If you are using a disconnected environment, replace with the digest image. To extract the Red Hat OpenShift Container Platform release image digest, see Extracting the Red Hat OpenShift Container Platform release image digest. +Specify the supported Red Hat OpenShift Container Platform version that you want to use, for example, 4.17.0-multi. If you are using a disconnected environment, replace with the digest image. To extract the Red Hat OpenShift Container Platform release image digest, see "Extracting the Red Hat OpenShift Container Platform release image digest". * To create credentials that you can reuse when you create a hosted cluster with the console, see Creating a credential for an on-premises environment. 
* To access a hosted cluster, see Accessing the hosted cluster. diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_aws/ipi/installing-aws-china.txt b/ocp-product-docs-plaintext/4.17/installing/installing_aws/ipi/installing-aws-china.txt index 0f36f1ff..cd5cc096 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_aws/ipi/installing-aws-china.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_aws/ipi/installing-aws-china.txt @@ -1184,7 +1184,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -1. Change to the directory that contains the installation program and initialize the cluster deployment: +1. In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -1239,9 +1239,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1249,7 +1249,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_aws/ipi/installing-aws-customizations.txt b/ocp-product-docs-plaintext/4.17/installing/installing_aws/ipi/installing-aws-customizations.txt index 41d4389d..21ff5516 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_aws/ipi/installing-aws-customizations.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_aws/ipi/installing-aws-customizations.txt @@ -885,7 +885,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -1. Change to the directory that contains the installation program and initialize the cluster deployment: +1. In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -940,9 +940,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. 
* You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -950,7 +950,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_aws/ipi/installing-aws-default.txt b/ocp-product-docs-plaintext/4.17/installing/installing_aws/ipi/installing-aws-default.txt index 75cac4f1..576bf440 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_aws/ipi/installing-aws-default.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_aws/ipi/installing-aws-default.txt @@ -30,7 +30,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -1. Change to the directory that contains the installation program and initialize the cluster deployment: +1. In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -110,9 +110,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -120,7 +120,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_aws/ipi/installing-aws-government-region.txt b/ocp-product-docs-plaintext/4.17/installing/installing_aws/ipi/installing-aws-government-region.txt index 855e1f84..3adf54d1 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_aws/ipi/installing-aws-government-region.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_aws/ipi/installing-aws-government-region.txt @@ -1102,7 +1102,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. 
An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -1. Change to the directory that contains the installation program and initialize the cluster deployment: +1. In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -1157,9 +1157,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1167,7 +1167,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_aws/ipi/installing-aws-localzone.txt b/ocp-product-docs-plaintext/4.17/installing/installing_aws/ipi/installing-aws-localzone.txt index c8b1cc26..b4f6bce9 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_aws/ipi/installing-aws-localzone.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_aws/ipi/installing-aws-localzone.txt @@ -1165,7 +1165,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -1. Change to the directory that contains the installation program and initialize the cluster deployment: +1. In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -1224,9 +1224,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1234,7 +1234,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. 
Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_aws/ipi/installing-aws-network-customizations.txt b/ocp-product-docs-plaintext/4.17/installing/installing_aws/ipi/installing-aws-network-customizations.txt index 85b4a6ef..aea012db 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_aws/ipi/installing-aws-network-customizations.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_aws/ipi/installing-aws-network-customizations.txt @@ -1122,7 +1122,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -1. Change to the directory that contains the installation program and initialize the cluster deployment: +1. In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -1177,9 +1177,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1187,7 +1187,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_aws/ipi/installing-aws-private.txt b/ocp-product-docs-plaintext/4.17/installing/installing_aws/ipi/installing-aws-private.txt index 0919b1fe..d7d7cf16 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_aws/ipi/installing-aws-private.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_aws/ipi/installing-aws-private.txt @@ -1037,7 +1037,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -1. Change to the directory that contains the installation program and initialize the cluster deployment: +1. 
In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -1092,9 +1092,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1102,7 +1102,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_aws/ipi/installing-aws-secret-region.txt b/ocp-product-docs-plaintext/4.17/installing/installing_aws/ipi/installing-aws-secret-region.txt index 2d18c2c5..d8abb269 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_aws/ipi/installing-aws-secret-region.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_aws/ipi/installing-aws-secret-region.txt @@ -1196,7 +1196,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -1. Change to the directory that contains the installation program and initialize the cluster deployment: +1. In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -1251,9 +1251,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1261,7 +1261,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. 
Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_aws/ipi/installing-aws-vpc.txt b/ocp-product-docs-plaintext/4.17/installing/installing_aws/ipi/installing-aws-vpc.txt index c9c6ade0..683ad881 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_aws/ipi/installing-aws-vpc.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_aws/ipi/installing-aws-vpc.txt @@ -1036,7 +1036,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -1. Change to the directory that contains the installation program and initialize the cluster deployment: +1. In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -1091,9 +1091,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1101,7 +1101,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_aws/ipi/installing-aws-wavelength-zone.txt b/ocp-product-docs-plaintext/4.17/installing/installing_aws/ipi/installing-aws-wavelength-zone.txt index cc58c794..5404d3b7 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_aws/ipi/installing-aws-wavelength-zone.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_aws/ipi/installing-aws-wavelength-zone.txt @@ -1225,7 +1225,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -1. Change to the directory that contains the installation program and initialize the cluster deployment: +1. 
In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -1284,9 +1284,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1294,7 +1294,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_aws/ipi/installing-restricted-networks-aws-installer-provisioned.txt b/ocp-product-docs-plaintext/4.17/installing/installing_aws/ipi/installing-restricted-networks-aws-installer-provisioned.txt index 06749cc7..b56bf0d0 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_aws/ipi/installing-restricted-networks-aws-installer-provisioned.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_aws/ipi/installing-restricted-networks-aws-installer-provisioned.txt @@ -1040,7 +1040,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -1. Change to the directory that contains the installation program and initialize the cluster deployment: +1. In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -1095,9 +1095,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1105,7 +1105,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. 
Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_aws/ipi/ipi-aws-preparing-to-install.txt b/ocp-product-docs-plaintext/4.17/installing/installing_aws/ipi/ipi-aws-preparing-to-install.txt index 15d187f8..4008d49e 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_aws/ipi/ipi-aws-preparing-to-install.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_aws/ipi/ipi-aws-preparing-to-install.txt @@ -25,7 +25,7 @@ In Red Hat OpenShift Container Platform 4.17, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_aws/upi/installing-aws-user-infra.txt b/ocp-product-docs-plaintext/4.17/installing/installing_aws/upi/installing-aws-user-infra.txt index dc0eb725..2e6452ec 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_aws/upi/installing-aws-user-infra.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_aws/upi/installing-aws-user-infra.txt @@ -1690,9 +1690,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1700,7 +1700,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_aws/upi/installing-restricted-networks-aws.txt b/ocp-product-docs-plaintext/4.17/installing/installing_aws/upi/installing-restricted-networks-aws.txt index dedb51e8..6a17e8a1 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_aws/upi/installing-restricted-networks-aws.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_aws/upi/installing-restricted-networks-aws.txt @@ -2075,9 +2075,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -2085,7 +2085,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. 
Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_aws/upi/upi-aws-preparing-to-install.txt b/ocp-product-docs-plaintext/4.17/installing/installing_aws/upi/upi-aws-preparing-to-install.txt index 5921ae16..54a0991c 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_aws/upi/upi-aws-preparing-to-install.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_aws/upi/upi-aws-preparing-to-install.txt @@ -26,7 +26,7 @@ In Red Hat OpenShift Container Platform 4.17, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_azure/ipi/installing-azure-customizations.txt b/ocp-product-docs-plaintext/4.17/installing/installing_azure/ipi/installing-azure-customizations.txt index 8d632942..0e1d850c 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_azure/ipi/installing-azure-customizations.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_azure/ipi/installing-azure-customizations.txt @@ -1049,7 +1049,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have an Azure subscription ID and tenant ID. -* Change to the directory that contains the installation program and initialize the cluster deployment: +* In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -1097,9 +1097,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1107,7 +1107,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. 
Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_azure/ipi/installing-azure-default.txt b/ocp-product-docs-plaintext/4.17/installing/installing_azure/ipi/installing-azure-default.txt index ad7e6359..5c9dfbb2 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_azure/ipi/installing-azure-default.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_azure/ipi/installing-azure-default.txt @@ -21,7 +21,7 @@ You can run the create cluster command of the installation program only once, du 1. Optional: If you have run the installation program on this computer before, and want to use an alternative service principal, go to the ~/.azure/ directory and delete the osServicePrincipal.json configuration file. Deleting this file prevents the installation program from automatically reusing subscription and authentication values from a previous installation. -2. Change to the directory that contains the installation program and initialize the cluster deployment: +2. In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -100,9 +100,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -110,7 +110,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_azure/ipi/installing-azure-government-region.txt b/ocp-product-docs-plaintext/4.17/installing/installing_azure/ipi/installing-azure-government-region.txt index 46e0574e..fcde5cfb 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_azure/ipi/installing-azure-government-region.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_azure/ipi/installing-azure-government-region.txt @@ -623,7 +623,7 @@ You can run the create cluster command of the installation program only once, du 1. Optional: If you have run the installation program on this computer before, and want to use an alternative service principal or managed identity, go to the ~/.azure/ directory and delete the osServicePrincipal.json configuration file. Deleting this file prevents the installation program from automatically reusing subscription and authentication values from a previous installation. -2. Change to the directory that contains the installation program and initialize the cluster deployment: +2. 
In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -686,9 +686,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -696,7 +696,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_azure/ipi/installing-azure-network-customizations.txt b/ocp-product-docs-plaintext/4.17/installing/installing_azure/ipi/installing-azure-network-customizations.txt index c6e6560d..0f6a4d03 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_azure/ipi/installing-azure-network-customizations.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_azure/ipi/installing-azure-network-customizations.txt @@ -1065,7 +1065,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have an Azure subscription ID and tenant ID. -* Change to the directory that contains the installation program and initialize the cluster deployment: +* In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -1113,9 +1113,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1123,7 +1123,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_azure/ipi/installing-azure-preparing-ipi.txt b/ocp-product-docs-plaintext/4.17/installing/installing_azure/ipi/installing-azure-preparing-ipi.txt index afe40274..57487de1 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_azure/ipi/installing-azure-preparing-ipi.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_azure/ipi/installing-azure-preparing-ipi.txt @@ -12,7 +12,7 @@ In Red Hat OpenShift Container Platform 4.17, you require access to the internet install your cluster. 
-You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_azure/ipi/installing-azure-private.txt b/ocp-product-docs-plaintext/4.17/installing/installing_azure/ipi/installing-azure-private.txt index 62a204d9..7d8aec09 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_azure/ipi/installing-azure-private.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_azure/ipi/installing-azure-private.txt @@ -1084,7 +1084,7 @@ You can run the create cluster command of the installation program only once, du 1. Optional: If you have run the installation program on this computer before, and want to use an alternative service principal or managed identity, go to the ~/.azure/ directory and delete the osServicePrincipal.json configuration file. Deleting this file prevents the installation program from automatically reusing subscription and authentication values from a previous installation. -2. Change to the directory that contains the installation program and initialize the cluster deployment: +2. In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -1147,9 +1147,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1157,7 +1157,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_azure/ipi/installing-azure-vnet.txt b/ocp-product-docs-plaintext/4.17/installing/installing_azure/ipi/installing-azure-vnet.txt index cc0eda40..2a4ca735 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_azure/ipi/installing-azure-vnet.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_azure/ipi/installing-azure-vnet.txt @@ -943,7 +943,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have an Azure subscription ID and tenant ID. 
-* Change to the directory that contains the installation program and initialize the cluster deployment: +* In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_azure/ipi/installing-restricted-networks-azure-installer-provisioned.txt b/ocp-product-docs-plaintext/4.17/installing/installing_azure/ipi/installing-restricted-networks-azure-installer-provisioned.txt index 7c0ae2aa..a590c9de 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_azure/ipi/installing-restricted-networks-azure-installer-provisioned.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_azure/ipi/installing-restricted-networks-azure-installer-provisioned.txt @@ -1101,7 +1101,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have an Azure subscription ID and tenant ID. -* Change to the directory that contains the installation program and initialize the cluster deployment: +* In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -1149,9 +1149,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1159,7 +1159,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_azure/upi/installing-azure-preparing-upi.txt b/ocp-product-docs-plaintext/4.17/installing/installing_azure/upi/installing-azure-preparing-upi.txt index 0e014b42..06006d86 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_azure/upi/installing-azure-preparing-upi.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_azure/upi/installing-azure-preparing-upi.txt @@ -12,7 +12,7 @@ In Red Hat OpenShift Container Platform 4.17, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. 
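The kubeconfig verification steps that recur in the hunks above and below follow one pattern. As a reader's reference only, a typical sequence is sketched here; the <installation_directory> placeholder and the system:admin output are illustrative assumptions rather than text from the patch.

```terminal
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig   # assumed placeholder for the install directory
$ oc whoami
system:admin
```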
diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_azure/upi/installing-azure-user-infra.txt b/ocp-product-docs-plaintext/4.17/installing/installing_azure/upi/installing-azure-user-infra.txt index fb637adc..deece80c 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_azure/upi/installing-azure-user-infra.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_azure/upi/installing-azure-user-infra.txt @@ -29,7 +29,7 @@ In Red Hat OpenShift Container Platform 4.17, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -1939,9 +1939,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1949,7 +1949,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_azure/upi/installing-restricted-networks-azure-user-provisioned.txt b/ocp-product-docs-plaintext/4.17/installing/installing_azure/upi/installing-restricted-networks-azure-user-provisioned.txt index 5fd03d46..7f63c61e 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_azure/upi/installing-restricted-networks-azure-user-provisioned.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_azure/upi/installing-restricted-networks-azure-user-provisioned.txt @@ -59,7 +59,7 @@ In Red Hat OpenShift Container Platform 4.17, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -1988,9 +1988,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. 
Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1998,7 +1998,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_azure_stack_hub/ipi/installing-azure-stack-hub-default.txt b/ocp-product-docs-plaintext/4.17/installing/installing_azure_stack_hub/ipi/installing-azure-stack-hub-default.txt index 9d2db5a5..edf05e5e 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_azure_stack_hub/ipi/installing-azure-stack-hub-default.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_azure_stack_hub/ipi/installing-azure-stack-hub-default.txt @@ -312,7 +312,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -* Change to the directory that contains the installation program and initialize the cluster deployment: +* In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -360,9 +360,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -370,7 +370,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_azure_stack_hub/ipi/installing-azure-stack-hub-network-customizations.txt b/ocp-product-docs-plaintext/4.17/installing/installing_azure_stack_hub/ipi/installing-azure-stack-hub-network-customizations.txt index 4f6572f4..fa107d3f 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_azure_stack_hub/ipi/installing-azure-stack-hub-network-customizations.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_azure_stack_hub/ipi/installing-azure-stack-hub-network-customizations.txt @@ -528,7 +528,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. 
An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -* Change to the directory that contains the installation program and initialize the cluster deployment: +* In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -576,9 +576,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -586,7 +586,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_azure_stack_hub/ipi/ipi-ash-preparing-to-install.txt b/ocp-product-docs-plaintext/4.17/installing/installing_azure_stack_hub/ipi/ipi-ash-preparing-to-install.txt index eb3ea15e..e203c6cd 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_azure_stack_hub/ipi/ipi-ash-preparing-to-install.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_azure_stack_hub/ipi/ipi-ash-preparing-to-install.txt @@ -15,7 +15,7 @@ In Red Hat OpenShift Container Platform 4.17, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_azure_stack_hub/upi/installing-azure-stack-hub-user-infra.txt b/ocp-product-docs-plaintext/4.17/installing/installing_azure_stack_hub/upi/installing-azure-stack-hub-user-infra.txt index bc28cb3c..fcb1eb68 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_azure_stack_hub/upi/installing-azure-stack-hub-user-infra.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_azure_stack_hub/upi/installing-azure-stack-hub-user-infra.txt @@ -1339,9 +1339,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1349,7 +1349,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. 
Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_azure_stack_hub/upi/upi-ash-preparing-to-install.txt b/ocp-product-docs-plaintext/4.17/installing/installing_azure_stack_hub/upi/upi-ash-preparing-to-install.txt index 477da857..23541e9a 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_azure_stack_hub/upi/upi-ash-preparing-to-install.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_azure_stack_hub/upi/upi-ash-preparing-to-install.txt @@ -14,7 +14,7 @@ In Red Hat OpenShift Container Platform 4.17, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_bare_metal/installing-bare-metal-network-customizations.txt b/ocp-product-docs-plaintext/4.17/installing/installing_bare_metal/installing-bare-metal-network-customizations.txt index b8dbe3fd..d6dad985 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_bare_metal/installing-bare-metal-network-customizations.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_bare_metal/installing-bare-metal-network-customizations.txt @@ -21,7 +21,7 @@ In Red Hat OpenShift Container Platform 4.17, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -2983,9 +2983,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -2993,7 +2993,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. 
Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_bare_metal/installing-bare-metal.txt b/ocp-product-docs-plaintext/4.17/installing/installing_bare_metal/installing-bare-metal.txt index 2c29a5c4..cdf2a98e 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_bare_metal/installing-bare-metal.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_bare_metal/installing-bare-metal.txt @@ -30,7 +30,7 @@ In Red Hat OpenShift Container Platform 4.17, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -2981,9 +2981,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -2991,7 +2991,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_bare_metal/installing-restricted-networks-bare-metal.txt b/ocp-product-docs-plaintext/4.17/installing/installing_bare_metal/installing-restricted-networks-bare-metal.txt index e6b77f43..b7a5214d 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_bare_metal/installing-restricted-networks-bare-metal.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_bare_metal/installing-restricted-networks-bare-metal.txt @@ -69,7 +69,7 @@ In Red Hat OpenShift Container Platform 4.17, you require access to the internet obtain the images that are necessary to install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -2961,9 +2961,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. 
Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -2971,7 +2971,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_bare_metal_ipi/ipi-install-prerequisites.txt b/ocp-product-docs-plaintext/4.17/installing/installing_bare_metal_ipi/ipi-install-prerequisites.txt index 03ca0f0f..cb50b9b2 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_bare_metal_ipi/ipi-install-prerequisites.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_bare_metal_ipi/ipi-install-prerequisites.txt @@ -370,4 +370,49 @@ Prior to the installation of the Red Hat OpenShift Container Platform cluster, g * Control plane and worker nodes are configured. * All nodes accessible via out-of-band management. * (Optional) A separate management network has been created. -* Required data for installation. \ No newline at end of file +* Required data for installation. + +# Installation overview + +The installation program supports interactive mode. However, you can prepare an install-config.yaml file containing the provisioning details for all of the bare-metal hosts, and the relevant cluster details, in advance. + +The installation program loads the install-config.yaml file and the administrator generates the manifests and verifies all prerequisites. + +The installation program performs the following tasks: + +* Enrolls all nodes in the cluster +* Starts the bootstrap virtual machine (VM) +* Starts the metal platform components as systemd services, which include the following containers: +* Ironic-dnsmasq: The DHCP server responsible for assigning IP addresses to the provisioning interface of the various nodes on the provisioning network. Ironic-dnsmasq is only enabled when you deploy a Red Hat OpenShift Container Platform cluster with a provisioning network. +* Ironic-httpd: The HTTP server that is used to ship the images to the nodes. +* Image-customization +* Ironic +* Ironic-inspector (available in Red Hat OpenShift Container Platform 4.16 and earlier) +* Ironic-ramdisk-logs +* Extract-machine-os +* Provisioning-interface +* Metal3-baremetal-operator + +The nodes enter the validation phase, where each node moves to a manageable state after Ironic validates the credentials to access the Baseboard Management Controller (BMC). + +When the node is in the manageable state, the inspection phase starts. The inspection phase ensures that the hardware meets the minimum requirements needed for a successful deployment of Red Hat OpenShift Container Platform. + +The install-config.yaml file details the provisioning network. On the bootstrap VM, the installation program uses the Pre-Boot Execution Environment (PXE) to push a live image to every node with the Ironic Python Agent (IPA) loaded. When using virtual media, the installation program connects directly to the BMC of each node to virtually attach the image. + +When using PXE boot, all nodes reboot to start the process: + +* The ironic-dnsmasq service running on the bootstrap VM provides the IP address of the node and the TFTP boot server. +* The first-boot software loads the root file system over HTTP.
+* The ironic service on the bootstrap VM receives the hardware information from each node. + +The nodes enter the cleaning state, where each node must clean all the disks before continuing with the configuration. + +After the cleaning state finishes, the nodes enter the available state and the installation program moves the nodes to the deploying state. + +IPA runs the coreos-installer command to install the Red Hat Enterprise Linux CoreOS (RHCOS) image on the disk defined by the rootDeviceHints parameter in the install-config.yaml file. The node boots by using RHCOS. + +After the installation program configures the control plane nodes, it moves control from the bootstrap VM to the control plane nodes and deletes the bootstrap VM. + +The Bare-Metal Operator continues the deployment of the workers, storage, and infra nodes. + +After the installation completes, the nodes move to the active state. You can then proceed with postinstallation configuration and other Day 2 tasks. \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_gcp/installing-gcp-customizations.txt b/ocp-product-docs-plaintext/4.17/installing/installing_gcp/installing-gcp-customizations.txt index 69d74f02..70317e0f 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_gcp/installing-gcp-customizations.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_gcp/installing-gcp-customizations.txt @@ -19,7 +19,7 @@ In Red Hat OpenShift Container Platform 4.17, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -1242,7 +1242,7 @@ following locations: environment variables * The ~/.gcp/osServiceAccount.json file * The gcloud cli default credentials -2. Change to the directory that contains the installation program and initialize the cluster deployment: +2. In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -1294,9 +1294,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1304,7 +1304,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. 
Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_gcp/installing-gcp-default.txt b/ocp-product-docs-plaintext/4.17/installing/installing_gcp/installing-gcp-default.txt index 50c3800c..e7b5dc7e 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_gcp/installing-gcp-default.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_gcp/installing-gcp-default.txt @@ -17,7 +17,7 @@ In Red Hat OpenShift Container Platform 4.17, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -167,7 +167,7 @@ following locations: environment variables * The ~/.gcp/osServiceAccount.json file * The gcloud cli default credentials -2. Change to the directory that contains the installation program and initialize the cluster deployment: +2. In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -341,9 +341,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -351,7 +351,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_gcp/installing-gcp-network-customizations.txt b/ocp-product-docs-plaintext/4.17/installing/installing_gcp/installing-gcp-network-customizations.txt index 1a2bc820..f8cc3e58 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_gcp/installing-gcp-network-customizations.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_gcp/installing-gcp-network-customizations.txt @@ -25,7 +25,7 @@ In Red Hat OpenShift Container Platform 4.17, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -1199,7 +1199,7 @@ following locations: environment variables * The ~/.gcp/osServiceAccount.json file * The gcloud cli default credentials -2. 
Change to the directory that contains the installation program and initialize the cluster deployment: +2. In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -1251,9 +1251,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1261,7 +1261,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_gcp/installing-gcp-private.txt b/ocp-product-docs-plaintext/4.17/installing/installing_gcp/installing-gcp-private.txt index 15ea7f3b..ce7a2796 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_gcp/installing-gcp-private.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_gcp/installing-gcp-private.txt @@ -112,7 +112,7 @@ In Red Hat OpenShift Container Platform 4.17, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -1195,7 +1195,7 @@ following locations: environment variables * The ~/.gcp/osServiceAccount.json file * The gcloud cli default credentials -2. Change to the directory that contains the installation program and initialize the cluster deployment: +2. In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -1247,9 +1247,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1257,7 +1257,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. 
Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_gcp/installing-gcp-shared-vpc.txt b/ocp-product-docs-plaintext/4.17/installing/installing_gcp/installing-gcp-shared-vpc.txt index 278e184c..e0c1a2b4 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_gcp/installing-gcp-shared-vpc.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_gcp/installing-gcp-shared-vpc.txt @@ -19,7 +19,7 @@ In Red Hat OpenShift Container Platform 4.17, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -913,7 +913,7 @@ following locations: environment variables * The ~/.gcp/osServiceAccount.json file * The gcloud cli default credentials -2. Change to the directory that contains the installation program and initialize the cluster deployment: +2. In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -965,9 +965,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -975,7 +975,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_gcp/installing-gcp-user-infra-vpc.txt b/ocp-product-docs-plaintext/4.17/installing/installing_gcp/installing-gcp-user-infra-vpc.txt index 650adf2d..a959659d 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_gcp/installing-gcp-user-infra-vpc.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_gcp/installing-gcp-user-infra-vpc.txt @@ -35,7 +35,7 @@ In Red Hat OpenShift Container Platform 4.17, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. 
@@ -1890,10 +1890,10 @@ You can log in to your cluster as a default system user by exporting the cluster The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). * Ensure the bootstrap process completed successfully. -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1901,7 +1901,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_gcp/installing-gcp-user-infra.txt b/ocp-product-docs-plaintext/4.17/installing/installing_gcp/installing-gcp-user-infra.txt index 3444ff2c..c12f2ca1 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_gcp/installing-gcp-user-infra.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_gcp/installing-gcp-user-infra.txt @@ -31,7 +31,7 @@ In Red Hat OpenShift Container Platform 4.17, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -2039,10 +2039,10 @@ You can log in to your cluster as a default system user by exporting the cluster The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). * Ensure the bootstrap process completed successfully. -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -2050,7 +2050,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_gcp/installing-gcp-vpc.txt b/ocp-product-docs-plaintext/4.17/installing/installing_gcp/installing-gcp-vpc.txt index 6a4e734f..2f735f3e 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_gcp/installing-gcp-vpc.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_gcp/installing-gcp-vpc.txt @@ -60,7 +60,7 @@ In Red Hat OpenShift Container Platform 4.17, you require access to the internet install your cluster. 
-You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -1157,7 +1157,7 @@ following locations: environment variables * The ~/.gcp/osServiceAccount.json file * The gcloud cli default credentials -2. Change to the directory that contains the installation program and initialize the cluster deployment: +2. In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -1209,9 +1209,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1219,7 +1219,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_gcp/installing-restricted-networks-gcp-installer-provisioned.txt b/ocp-product-docs-plaintext/4.17/installing/installing_gcp/installing-restricted-networks-gcp-installer-provisioned.txt index 599d85db..4df0a331 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_gcp/installing-restricted-networks-gcp-installer-provisioned.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_gcp/installing-restricted-networks-gcp-installer-provisioned.txt @@ -56,7 +56,7 @@ In Red Hat OpenShift Container Platform 4.17, you require access to the internet obtain the images that are necessary to install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -1188,7 +1188,7 @@ following locations: environment variables * The ~/.gcp/osServiceAccount.json file * The gcloud cli default credentials -2. Change to the directory that contains the installation program and initialize the cluster deployment: +2. In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -1240,9 +1240,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. 
-* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1250,7 +1250,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_gcp/installing-restricted-networks-gcp.txt b/ocp-product-docs-plaintext/4.17/installing/installing_gcp/installing-restricted-networks-gcp.txt index 570330ed..0fabc6aa 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_gcp/installing-restricted-networks-gcp.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_gcp/installing-restricted-networks-gcp.txt @@ -65,7 +65,7 @@ In Red Hat OpenShift Container Platform 4.17, you require access to the internet obtain the images that are necessary to install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -2001,10 +2001,10 @@ You can log in to your cluster as a default system user by exporting the cluster The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). * Ensure the bootstrap process completed successfully. -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -2012,7 +2012,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_ibm_cloud/installing-ibm-cloud-customizations.txt b/ocp-product-docs-plaintext/4.17/installing/installing_ibm_cloud/installing-ibm-cloud-customizations.txt index 2273f74f..6bb18b31 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_ibm_cloud/installing-ibm-cloud-customizations.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_ibm_cloud/installing-ibm-cloud-customizations.txt @@ -17,7 +17,7 @@ In Red Hat OpenShift Container Platform 4.17, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. 
* Access Quay.io to obtain the packages that are required to install your cluster. @@ -509,7 +509,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -* Change to the directory that contains the installation program and initialize the cluster deployment: +* In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -654,9 +654,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -664,7 +664,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_ibm_cloud/installing-ibm-cloud-network-customizations.txt b/ocp-product-docs-plaintext/4.17/installing/installing_ibm_cloud/installing-ibm-cloud-network-customizations.txt index 7ea03f51..e3910398 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_ibm_cloud/installing-ibm-cloud-network-customizations.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_ibm_cloud/installing-ibm-cloud-network-customizations.txt @@ -19,7 +19,7 @@ In Red Hat OpenShift Container Platform 4.17, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -656,7 +656,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. 
-* Change to the directory that contains the installation program and initialize the cluster deployment: +* In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -801,9 +801,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -811,7 +811,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_ibm_cloud/installing-ibm-cloud-private.txt b/ocp-product-docs-plaintext/4.17/installing/installing_ibm_cloud/installing-ibm-cloud-private.txt index d4025909..3a006499 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_ibm_cloud/installing-ibm-cloud-private.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_ibm_cloud/installing-ibm-cloud-private.txt @@ -125,7 +125,7 @@ In Red Hat OpenShift Container Platform 4.17, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -627,7 +627,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -* Change to the directory that contains the installation program and initialize the cluster deployment: +* In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -772,9 +772,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -782,7 +782,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. 
Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_ibm_cloud/installing-ibm-cloud-restricted.txt b/ocp-product-docs-plaintext/4.17/installing/installing_ibm_cloud/installing-ibm-cloud-restricted.txt index 3f8d96e1..5fa37c8c 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_ibm_cloud/installing-ibm-cloud-restricted.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_ibm_cloud/installing-ibm-cloud-restricted.txt @@ -887,7 +887,7 @@ If the Red Hat Enterprise Linux CoreOS (RHCOS) image is available locally, the h $ export OPENSHIFT_INSTALL_OS_IMAGE_OVERRIDE="/rhcos--ibmcloud.x86_64.qcow2.gz" ``` -2. Change to the directory that contains the installation program and initialize the cluster deployment: +2. In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -935,9 +935,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -945,7 +945,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_ibm_cloud/installing-ibm-cloud-vpc.txt b/ocp-product-docs-plaintext/4.17/installing/installing_ibm_cloud/installing-ibm-cloud-vpc.txt index a8d5caf2..956ee25a 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_ibm_cloud/installing-ibm-cloud-vpc.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_ibm_cloud/installing-ibm-cloud-vpc.txt @@ -84,7 +84,7 @@ In Red Hat OpenShift Container Platform 4.17, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -592,7 +592,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. 
-* Change to the directory that contains the installation program and initialize the cluster deployment: +* In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -737,9 +737,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -747,7 +747,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_ibm_power/installing-ibm-power.txt b/ocp-product-docs-plaintext/4.17/installing/installing_ibm_power/installing-ibm-power.txt index 58d4a6dc..e38dcca1 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_ibm_power/installing-ibm-power.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_ibm_power/installing-ibm-power.txt @@ -29,7 +29,7 @@ In Red Hat OpenShift Container Platform 4.17, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -1823,9 +1823,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1833,7 +1833,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. 
Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_ibm_power/installing-restricted-networks-ibm-power.txt b/ocp-product-docs-plaintext/4.17/installing/installing_ibm_power/installing-restricted-networks-ibm-power.txt index b9277965..3c5f2427 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_ibm_power/installing-restricted-networks-ibm-power.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_ibm_power/installing-restricted-networks-ibm-power.txt @@ -61,7 +61,7 @@ In Red Hat OpenShift Container Platform 4.17, you require access to the internet obtain the images that are necessary to install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -1734,9 +1734,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1744,7 +1744,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_ibm_powervs/installing-ibm-power-vs-customizations.txt b/ocp-product-docs-plaintext/4.17/installing/installing_ibm_powervs/installing-ibm-power-vs-customizations.txt index 117f2935..95ac2152 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_ibm_powervs/installing-ibm-power-vs-customizations.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_ibm_powervs/installing-ibm-power-vs-customizations.txt @@ -17,7 +17,7 @@ In Red Hat OpenShift Container Platform 4.17, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -486,7 +486,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. 
An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -* Change to the directory that contains the installation program and initialize the cluster deployment: +* In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -631,9 +631,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -641,7 +641,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_ibm_powervs/installing-ibm-power-vs-private-cluster.txt b/ocp-product-docs-plaintext/4.17/installing/installing_ibm_powervs/installing-ibm-power-vs-private-cluster.txt index 06aa6607..70be6110 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_ibm_powervs/installing-ibm-power-vs-private-cluster.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_ibm_powervs/installing-ibm-power-vs-private-cluster.txt @@ -101,7 +101,7 @@ In Red Hat OpenShift Container Platform 4.17, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -579,7 +579,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -* Change to the directory that contains the installation program and initialize the cluster deployment: +* In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -724,9 +724,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. 
Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -734,7 +734,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_ibm_powervs/installing-ibm-powervs-vpc.txt b/ocp-product-docs-plaintext/4.17/installing/installing_ibm_powervs/installing-ibm-powervs-vpc.txt index 500b2ea3..0e526d2f 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_ibm_powervs/installing-ibm-powervs-vpc.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_ibm_powervs/installing-ibm-powervs-vpc.txt @@ -66,7 +66,7 @@ In Red Hat OpenShift Container Platform 4.17, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -576,7 +576,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -* Change to the directory that contains the installation program and initialize the cluster deployment: +* In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -721,9 +721,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -731,7 +731,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. 
Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_ibm_powervs/installing-restricted-networks-ibm-power-vs.txt b/ocp-product-docs-plaintext/4.17/installing/installing_ibm_powervs/installing-restricted-networks-ibm-power-vs.txt index 4a3059ec..406a1a31 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_ibm_powervs/installing-restricted-networks-ibm-power-vs.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_ibm_powervs/installing-restricted-networks-ibm-power-vs.txt @@ -105,7 +105,7 @@ In Red Hat OpenShift Container Platform 4.17, you require access to the internet obtain the images that are necessary to install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -643,7 +643,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -* Change to the directory that contains the installation program and initialize the cluster deployment: +* In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -788,9 +788,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -798,7 +798,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_ibm_z/upi/installing-ibm-z-kvm.txt b/ocp-product-docs-plaintext/4.17/installing/installing_ibm_z/upi/installing-ibm-z-kvm.txt index 4da6b0d8..251bb181 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_ibm_z/upi/installing-ibm-z-kvm.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_ibm_z/upi/installing-ibm-z-kvm.txt @@ -964,9 +964,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. 
-* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -974,7 +974,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_ibm_z/upi/installing-ibm-z-lpar.txt b/ocp-product-docs-plaintext/4.17/installing/installing_ibm_z/upi/installing-ibm-z-lpar.txt index b6922a71..f591d1ec 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_ibm_z/upi/installing-ibm-z-lpar.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_ibm_z/upi/installing-ibm-z-lpar.txt @@ -897,9 +897,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -907,7 +907,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_ibm_z/upi/installing-ibm-z.txt b/ocp-product-docs-plaintext/4.17/installing/installing_ibm_z/upi/installing-ibm-z.txt index 22fdd660..bb301bb4 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_ibm_z/upi/installing-ibm-z.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_ibm_z/upi/installing-ibm-z.txt @@ -914,9 +914,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -924,7 +924,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. 
Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_ibm_z/upi/installing-restricted-networks-ibm-z-kvm.txt b/ocp-product-docs-plaintext/4.17/installing/installing_ibm_z/upi/installing-restricted-networks-ibm-z-kvm.txt index c8891a97..54b202e5 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_ibm_z/upi/installing-restricted-networks-ibm-z-kvm.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_ibm_z/upi/installing-restricted-networks-ibm-z-kvm.txt @@ -1022,9 +1022,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1032,7 +1032,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_ibm_z/upi/installing-restricted-networks-ibm-z-lpar.txt b/ocp-product-docs-plaintext/4.17/installing/installing_ibm_z/upi/installing-restricted-networks-ibm-z-lpar.txt index 2df3aa0b..e5fabe9b 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_ibm_z/upi/installing-restricted-networks-ibm-z-lpar.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_ibm_z/upi/installing-restricted-networks-ibm-z-lpar.txt @@ -949,9 +949,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -959,7 +959,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_ibm_z/upi/installing-restricted-networks-ibm-z.txt b/ocp-product-docs-plaintext/4.17/installing/installing_ibm_z/upi/installing-restricted-networks-ibm-z.txt index 2da4c919..6525d2ad 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_ibm_z/upi/installing-restricted-networks-ibm-z.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_ibm_z/upi/installing-restricted-networks-ibm-z.txt @@ -971,9 +971,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. 
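The export-and-verify steps above reduce to a short session. A minimal sketch, assuming a hypothetical installation directory of ./ocp-install (use the directory you passed to the installer):

```terminal
$ export KUBECONFIG=./ocp-install/auth/kubeconfig   # ./ocp-install is a hypothetical installation directory
$ oc whoami
```

With the installer-generated admin kubeconfig active, oc whoami typically reports system:admin.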
* You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -981,7 +981,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_ibm_z/upi/upi-ibm-z-preparing-to-install.txt b/ocp-product-docs-plaintext/4.17/installing/installing_ibm_z/upi/upi-ibm-z-preparing-to-install.txt index 2442021f..064041e0 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_ibm_z/upi/upi-ibm-z-preparing-to-install.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_ibm_z/upi/upi-ibm-z-preparing-to-install.txt @@ -24,7 +24,7 @@ In Red Hat OpenShift Container Platform 4.17, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_nutanix/installing-nutanix-installer-provisioned.txt b/ocp-product-docs-plaintext/4.17/installing/installing_nutanix/installing-nutanix-installer-provisioned.txt index bac3683e..49a69b0e 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_nutanix/installing-nutanix-installer-provisioned.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_nutanix/installing-nutanix-installer-provisioned.txt @@ -28,7 +28,7 @@ In Red Hat OpenShift Container Platform 4.17, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -1228,7 +1228,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. 
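The internet-access prerequisites above are stated only in prose. If a quick reachability check from the installation host is useful, one informal sketch (not part of the documented procedure) is:

```terminal
$ curl -sSI https://console.redhat.com/openshift > /dev/null && echo "OpenShift Cluster Manager reachable"
$ curl -sSI https://quay.io > /dev/null && echo "Quay.io reachable"
```

Both checks only confirm basic HTTPS reachability; they do not validate subscriptions or pull secrets.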
-* Change to the directory that contains the installation program and initialize the cluster deployment: +* In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_nutanix/installing-restricted-networks-nutanix-installer-provisioned.txt b/ocp-product-docs-plaintext/4.17/installing/installing_nutanix/installing-restricted-networks-nutanix-installer-provisioned.txt index 7d7fa4c0..93d6dd17 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_nutanix/installing-restricted-networks-nutanix-installer-provisioned.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_nutanix/installing-restricted-networks-nutanix-installer-provisioned.txt @@ -840,7 +840,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -* Change to the directory that contains the installation program and initialize the cluster deployment: +* In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_openstack/installing-openstack-installer-custom.txt b/ocp-product-docs-plaintext/4.17/installing/installing_openstack/installing-openstack-installer-custom.txt index ee08c126..332dbba2 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_openstack/installing-openstack-installer-custom.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_openstack/installing-openstack-installer-custom.txt @@ -221,7 +221,7 @@ In Red Hat OpenShift Container Platform 4.17, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -1418,7 +1418,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. 
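Each "initialize the cluster deployment" step above reduces to the same installer invocation. A minimal sketch, assuming a hypothetical assets directory of ./ocp-install that already contains install-config.yaml; the --log-level flag is optional:

```terminal
$ ./openshift-install create cluster --dir ./ocp-install --log-level=info   # ./ocp-install is a hypothetical assets directory
```

The installer writes auth/kubeconfig and auth/kubeadmin-password into that directory, which the later login steps rely on.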
-* Change to the directory that contains the installation program and initialize the cluster deployment: +* In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -1504,9 +1504,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1514,7 +1514,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_openstack/installing-openstack-installer-restricted.txt b/ocp-product-docs-plaintext/4.17/installing/installing_openstack/installing-openstack-installer-restricted.txt index ffa1a181..c9c12e97 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_openstack/installing-openstack-installer-restricted.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_openstack/installing-openstack-installer-restricted.txt @@ -116,7 +116,7 @@ In Red Hat OpenShift Container Platform 4.17, you require access to the internet obtain the images that are necessary to install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -760,7 +760,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -* Change to the directory that contains the installation program and initialize the cluster deployment: +* In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -846,9 +846,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. 
Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -856,7 +856,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_openstack/installing-openstack-user.txt b/ocp-product-docs-plaintext/4.17/installing/installing_openstack/installing-openstack-user.txt index 910a7cbf..1e1405a9 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_openstack/installing-openstack-user.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_openstack/installing-openstack-user.txt @@ -22,7 +22,7 @@ In Red Hat OpenShift Container Platform 4.17, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -1421,9 +1421,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1431,7 +1431,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_platform_agnostic/installing-platform-agnostic.txt b/ocp-product-docs-plaintext/4.17/installing/installing_platform_agnostic/installing-platform-agnostic.txt index ff616ea9..3fef1f53 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_platform_agnostic/installing-platform-agnostic.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_platform_agnostic/installing-platform-agnostic.txt @@ -26,7 +26,7 @@ In Red Hat OpenShift Container Platform 4.17, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -1978,9 +1978,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. 
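The recurring prerequisite "You installed the OpenShift CLI (`oc`)" can be confirmed before exporting any kubeconfig. A quick client-side check:

```terminal
$ oc version --client
```

This prints only the client version and does not contact a cluster.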
* You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1988,7 +1988,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_vsphere/installing-vsphere-agent-based-installer.txt b/ocp-product-docs-plaintext/4.17/installing/installing_vsphere/installing-vsphere-agent-based-installer.txt index 1921582c..2558ab8b 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_vsphere/installing-vsphere-agent-based-installer.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_vsphere/installing-vsphere-agent-based-installer.txt @@ -1,9 +1,16 @@ # Installing a cluster on vSphere using the Agent-based Installer + The Agent-based installation method provides the flexibility to boot your on-premise servers in any way that you choose. It combines the ease of use of the Assisted Installation service with the ability to run offline, including in air-gapped environments. + Agent-based installation is a subcommand of the Red Hat OpenShift Container Platform installer. It generates a bootable ISO image containing all of the information required to deploy an Red Hat OpenShift Container Platform cluster with an available release image. -# Additional resources +For more information about installing a cluster using the Agent-based Installer, see Preparing to install with the Agent-based Installer. + -* Preparing to install with the Agent-based Installer \ No newline at end of file +[IMPORTANT] +---- +Your vSphere account must include privileges for reading and creating the resources required to install an Red Hat OpenShift Container Platform cluster. +For more information about privileges, see vCenter requirements. +---- \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_vsphere/ipi/installing-restricted-networks-installer-provisioned-vsphere.txt b/ocp-product-docs-plaintext/4.17/installing/installing_vsphere/ipi/installing-restricted-networks-installer-provisioned-vsphere.txt index 97d2d8ae..8ca0b3df 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_vsphere/ipi/installing-restricted-networks-installer-provisioned-vsphere.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_vsphere/ipi/installing-restricted-networks-installer-provisioned-vsphere.txt @@ -63,7 +63,7 @@ In Red Hat OpenShift Container Platform 4.17, you require access to the internet obtain the images that are necessary to install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. 
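The Agent-based Installer text above describes ISO generation only in prose. As a sketch, assuming a hypothetical assets directory of ./agent-assets that contains install-config.yaml and agent-config.yaml:

```terminal
$ ./openshift-install agent create image --dir ./agent-assets   # ./agent-assets is a hypothetical assets directory
```

The generated bootable ISO is written into that directory and is what you boot the on-premise servers with.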
@@ -440,20 +440,20 @@ You can modify the default installation configuration file, so that you can depl The default install-config.yaml file configuration from the previous release of Red Hat OpenShift Container Platform is deprecated. You can continue to use the deprecated default configuration, but the openshift-installer will prompt you with a warning message that indicates the use of deprecated fields in the configuration file. +* You have an existing install-config.yaml installation configuration file. [IMPORTANT] ---- -The example uses the govc command. The govc command is an open source command available from VMware; it is not available from Red Hat. The Red Hat support team does not maintain the govc command. Instructions for downloading and installing govc are found on the VMware documentation website +You must specify at least one failure domain for your Red Hat OpenShift Container Platform cluster, so that you can provision data center objects for your VMware vCenter server. Consider specifying multiple failure domains if you need to provision virtual machine nodes in different data centers, clusters, datastores, and other components. To enable regions and zones, you must define multiple failure domains for your Red Hat OpenShift Container Platform cluster. ---- - -* You have an existing install-config.yaml installation configuration file. +* You have installed the govc command line tool. [IMPORTANT] ---- -You must specify at least one failure domain for your Red Hat OpenShift Container Platform cluster, so that you can provision data center objects for your VMware vCenter server. Consider specifying multiple failure domains if you need to provision virtual machine nodes in different data centers, clusters, datastores, and other components. To enable regions and zones, you must define multiple failure domains for your Red Hat OpenShift Container Platform cluster. +The example uses the govc command. The govc command is an open source command available from VMware; it is not available from Red Hat. The Red Hat support team does not maintain the govc command. Instructions for downloading and installing govc are found on the VMware documentation website. ---- -1. Enter the following govc command-line tool commands to create the openshift-region and openshift-zone vCenter tag categories: +1. Create the openshift-region and openshift-zone vCenter tag categories by running the following commands: [IMPORTANT] ---- @@ -469,25 +469,25 @@ $ govc tags.category.create -d "OpenShift region" openshift-region $ govc tags.category.create -d "OpenShift zone" openshift-zone ``` -2. To create a region tag for each region vSphere data center where you want to deploy your cluster, enter the following command in your terminal: +2. For each region where you want to deploy your cluster, create a region tag by running the following command: ```terminal $ govc tags.create -c ``` -3. To create a zone tag for each vSphere cluster where you want to deploy your cluster, enter the following command: +3. For each zone where you want to deploy your cluster, create a zone tag by running the following command: ```terminal $ govc tags.create -c ``` -4. Attach region tags to each vCenter data center object by entering the following command: +4. Attach region tags to each vCenter data center object by running the following command: ```terminal $ govc tags.attach -c / ``` -5. Attach the zone tags to each vCenter cluster object by entering the following command: +5. 
Attach the zone tags to each vCenter cluster object by running the following command: ```terminal $ govc tags.attach -c //host/ @@ -983,14 +983,14 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -* Optional: Before you create the cluster, configure an external load balancer in place of the default load balancer. +* Optional: Before you create the cluster, you configured an external load balancer in place of the default load balancer. [IMPORTANT] ---- You do not need to specify API and Ingress static addresses for your installation program. If you choose this configuration, you must take additional actions to define network targets that accept an IP address from each referenced vSphere subnet. See the section "Configuring a user-managed load balancer". ---- -* Change to the directory that contains the installation program and initialize the cluster deployment: +* In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -1038,9 +1038,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1048,7 +1048,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami @@ -1127,13 +1127,13 @@ testing that was possibly completed against these Red Hat OpenShift Container Pl components. ---- -1. To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. +1. Change the spec.storage.pvc field in the configs.imageregistry/cluster resource. [NOTE] ---- When you use shared storage, review your security settings to prevent outside access. ---- -2. Verify that you do not have a registry pod: +2. Verify that you do not have a registry pod by running the following command: ```terminal $ oc get pod -n openshift-image-registry -l docker-registry=default @@ -1150,7 +1150,7 @@ No resourses found in openshift-image-registry namespace ---- If you do have a registry pod in your output, you do not need to continue with this procedure. ---- -3. Check the registry configuration: +3. Check the registry configuration by running the following command: ```terminal $ oc edit configs.imageregistry.operator.openshift.io @@ -1165,7 +1165,7 @@ storage: ``` Leave the claim field blank to allow the automatic creation of an image-registry-storage persistent volume claim (PVC). The PVC is generated based on the default storage class. 
However, be aware that the default storage class might provide ReadWriteOnce (RWO) volumes, such as a RADOS Block Device (RBD), which can cause issues when you replicate to more than one replica. -4. Check the clusteroperator status: +4. Check the clusteroperator status by running the following command: ```terminal $ oc get clusteroperator image-registry @@ -1174,8 +1174,8 @@ $ oc get clusteroperator image-registry Example output ```terminal -NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE -image-registry 4.7 True False False 6h50m +NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE +image-registry 4.7 True False False 6h50m ``` diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_vsphere/ipi/installing-vsphere-installer-provisioned-customizations.txt b/ocp-product-docs-plaintext/4.17/installing/installing_vsphere/ipi/installing-vsphere-installer-provisioned-customizations.txt index 36dda1cd..35d28ba9 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_vsphere/ipi/installing-vsphere-installer-provisioned-customizations.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_vsphere/ipi/installing-vsphere-installer-provisioned-customizations.txt @@ -32,7 +32,7 @@ In Red Hat OpenShift Container Platform 4.17, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -312,20 +312,20 @@ You can modify the default installation configuration file, so that you can depl The default install-config.yaml file configuration from the previous release of Red Hat OpenShift Container Platform is deprecated. You can continue to use the deprecated default configuration, but the openshift-installer will prompt you with a warning message that indicates the use of deprecated fields in the configuration file. +* You have an existing install-config.yaml installation configuration file. [IMPORTANT] ---- -The example uses the govc command. The govc command is an open source command available from VMware; it is not available from Red Hat. The Red Hat support team does not maintain the govc command. Instructions for downloading and installing govc are found on the VMware documentation website +You must specify at least one failure domain for your Red Hat OpenShift Container Platform cluster, so that you can provision data center objects for your VMware vCenter server. Consider specifying multiple failure domains if you need to provision virtual machine nodes in different data centers, clusters, datastores, and other components. To enable regions and zones, you must define multiple failure domains for your Red Hat OpenShift Container Platform cluster. ---- - -* You have an existing install-config.yaml installation configuration file. +* You have installed the govc command line tool. [IMPORTANT] ---- -You must specify at least one failure domain for your Red Hat OpenShift Container Platform cluster, so that you can provision data center objects for your VMware vCenter server. Consider specifying multiple failure domains if you need to provision virtual machine nodes in different data centers, clusters, datastores, and other components. 
To enable regions and zones, you must define multiple failure domains for your Red Hat OpenShift Container Platform cluster. +The example uses the govc command. The govc command is an open source command available from VMware; it is not available from Red Hat. The Red Hat support team does not maintain the govc command. Instructions for downloading and installing govc are found on the VMware documentation website. ---- -1. Enter the following govc command-line tool commands to create the openshift-region and openshift-zone vCenter tag categories: +1. Create the openshift-region and openshift-zone vCenter tag categories by running the following commands: [IMPORTANT] ---- @@ -341,25 +341,25 @@ $ govc tags.category.create -d "OpenShift region" openshift-region $ govc tags.category.create -d "OpenShift zone" openshift-zone ``` -2. To create a region tag for each region vSphere data center where you want to deploy your cluster, enter the following command in your terminal: +2. For each region where you want to deploy your cluster, create a region tag by running the following command: ```terminal $ govc tags.create -c ``` -3. To create a zone tag for each vSphere cluster where you want to deploy your cluster, enter the following command: +3. For each zone where you want to deploy your cluster, create a zone tag by running the following command: ```terminal $ govc tags.create -c ``` -4. Attach region tags to each vCenter data center object by entering the following command: +4. Attach region tags to each vCenter data center object by running the following command: ```terminal $ govc tags.attach -c / ``` -5. Attach the zone tags to each vCenter cluster object by entering the following command: +5. Attach the zone tags to each vCenter cluster object by running the following command: ```terminal $ govc tags.attach -c //host/ @@ -855,14 +855,14 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -* Optional: Before you create the cluster, configure an external load balancer in place of the default load balancer. +* Optional: Before you create the cluster, you configured an external load balancer in place of the default load balancer. [IMPORTANT] ---- You do not need to specify API and Ingress static addresses for your installation program. If you choose this configuration, you must take additional actions to define network targets that accept an IP address from each referenced vSphere subnet. See the section "Configuring a user-managed load balancer". ---- -* Change to the directory that contains the installation program and initialize the cluster deployment: +* In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -910,9 +910,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. 
Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -920,7 +920,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami @@ -980,13 +980,13 @@ testing that was possibly completed against these Red Hat OpenShift Container Pl components. ---- -1. To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. +1. Change the spec.storage.pvc field in the configs.imageregistry/cluster resource. [NOTE] ---- When you use shared storage, review your security settings to prevent outside access. ---- -2. Verify that you do not have a registry pod: +2. Verify that you do not have a registry pod by running the following command: ```terminal $ oc get pod -n openshift-image-registry -l docker-registry=default @@ -1003,7 +1003,7 @@ No resourses found in openshift-image-registry namespace ---- If you do have a registry pod in your output, you do not need to continue with this procedure. ---- -3. Check the registry configuration: +3. Check the registry configuration by running the following command: ```terminal $ oc edit configs.imageregistry.operator.openshift.io @@ -1018,7 +1018,7 @@ storage: ``` Leave the claim field blank to allow the automatic creation of an image-registry-storage persistent volume claim (PVC). The PVC is generated based on the default storage class. However, be aware that the default storage class might provide ReadWriteOnce (RWO) volumes, such as a RADOS Block Device (RBD), which can cause issues when you replicate to more than one replica. -4. Check the clusteroperator status: +4. Check the clusteroperator status by running the following command: ```terminal $ oc get clusteroperator image-registry @@ -1027,8 +1027,8 @@ $ oc get clusteroperator image-registry Example output ```terminal -NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE -image-registry 4.7 True False False 6h50m +NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE +image-registry 4.7 True False False 6h50m ``` @@ -1051,7 +1051,7 @@ have more than one replica. $ oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' ``` -2. Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. +2. Provision the persistent volume (PV) for the block storage device, and create a persistent volume claim (PVC) for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. 1. 
Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: ```yaml diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_vsphere/ipi/installing-vsphere-installer-provisioned-network-customizations.txt b/ocp-product-docs-plaintext/4.17/installing/installing_vsphere/ipi/installing-vsphere-installer-provisioned-network-customizations.txt index 2c41c858..d8ccb0b7 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_vsphere/ipi/installing-vsphere-installer-provisioned-network-customizations.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_vsphere/ipi/installing-vsphere-installer-provisioned-network-customizations.txt @@ -34,7 +34,7 @@ In Red Hat OpenShift Container Platform 4.17, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -360,20 +360,20 @@ You can modify the default installation configuration file, so that you can depl The default install-config.yaml file configuration from the previous release of Red Hat OpenShift Container Platform is deprecated. You can continue to use the deprecated default configuration, but the openshift-installer will prompt you with a warning message that indicates the use of deprecated fields in the configuration file. +* You have an existing install-config.yaml installation configuration file. [IMPORTANT] ---- -The example uses the govc command. The govc command is an open source command available from VMware; it is not available from Red Hat. The Red Hat support team does not maintain the govc command. Instructions for downloading and installing govc are found on the VMware documentation website +You must specify at least one failure domain for your Red Hat OpenShift Container Platform cluster, so that you can provision data center objects for your VMware vCenter server. Consider specifying multiple failure domains if you need to provision virtual machine nodes in different data centers, clusters, datastores, and other components. To enable regions and zones, you must define multiple failure domains for your Red Hat OpenShift Container Platform cluster. ---- - -* You have an existing install-config.yaml installation configuration file. +* You have installed the govc command line tool. [IMPORTANT] ---- -You must specify at least one failure domain for your Red Hat OpenShift Container Platform cluster, so that you can provision data center objects for your VMware vCenter server. Consider specifying multiple failure domains if you need to provision virtual machine nodes in different data centers, clusters, datastores, and other components. To enable regions and zones, you must define multiple failure domains for your Red Hat OpenShift Container Platform cluster. +The example uses the govc command. The govc command is an open source command available from VMware; it is not available from Red Hat. The Red Hat support team does not maintain the govc command. Instructions for downloading and installing govc are found on the VMware documentation website. ---- -1. 
Enter the following govc command-line tool commands to create the openshift-region and openshift-zone vCenter tag categories: +1. Create the openshift-region and openshift-zone vCenter tag categories by running the following commands: [IMPORTANT] ---- @@ -389,25 +389,25 @@ $ govc tags.category.create -d "OpenShift region" openshift-region $ govc tags.category.create -d "OpenShift zone" openshift-zone ``` -2. To create a region tag for each region vSphere data center where you want to deploy your cluster, enter the following command in your terminal: +2. For each region where you want to deploy your cluster, create a region tag by running the following command: ```terminal $ govc tags.create -c ``` -3. To create a zone tag for each vSphere cluster where you want to deploy your cluster, enter the following command: +3. For each zone where you want to deploy your cluster, create a zone tag by running the following command: ```terminal $ govc tags.create -c ``` -4. Attach region tags to each vCenter data center object by entering the following command: +4. Attach region tags to each vCenter data center object by running the following command: ```terminal $ govc tags.attach -c / ``` -5. Attach the zone tags to each vCenter cluster object by entering the following command: +5. Attach the zone tags to each vCenter cluster object by running the following command: ```terminal $ govc tags.attach -c //host/ @@ -1099,14 +1099,14 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -* Optional: Before you create the cluster, configure an external load balancer in place of the default load balancer. +* Optional: Before you create the cluster, you configured an external load balancer in place of the default load balancer. [IMPORTANT] ---- You do not need to specify API and Ingress static addresses for your installation program. If you choose this configuration, you must take additional actions to define network targets that accept an IP address from each referenced vSphere subnet. See the section "Configuring a user-managed load balancer". ---- -* Change to the directory that contains the installation program and initialize the cluster deployment: +* In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -1154,9 +1154,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1164,7 +1164,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. 
Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami @@ -1224,13 +1224,13 @@ testing that was possibly completed against these Red Hat OpenShift Container Pl components. ---- -1. To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. +1. Change the spec.storage.pvc field in the configs.imageregistry/cluster resource. [NOTE] ---- When you use shared storage, review your security settings to prevent outside access. ---- -2. Verify that you do not have a registry pod: +2. Verify that you do not have a registry pod by running the following command: ```terminal $ oc get pod -n openshift-image-registry -l docker-registry=default @@ -1247,7 +1247,7 @@ No resourses found in openshift-image-registry namespace ---- If you do have a registry pod in your output, you do not need to continue with this procedure. ---- -3. Check the registry configuration: +3. Check the registry configuration by running the following command: ```terminal $ oc edit configs.imageregistry.operator.openshift.io @@ -1262,7 +1262,7 @@ storage: ``` Leave the claim field blank to allow the automatic creation of an image-registry-storage persistent volume claim (PVC). The PVC is generated based on the default storage class. However, be aware that the default storage class might provide ReadWriteOnce (RWO) volumes, such as a RADOS Block Device (RBD), which can cause issues when you replicate to more than one replica. -4. Check the clusteroperator status: +4. Check the clusteroperator status by running the following command: ```terminal $ oc get clusteroperator image-registry @@ -1271,8 +1271,8 @@ $ oc get clusteroperator image-registry Example output ```terminal -NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE -image-registry 4.7 True False False 6h50m +NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE +image-registry 4.7 True False False 6h50m ``` @@ -1295,7 +1295,7 @@ have more than one replica. $ oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' ``` -2. Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. +2. Provision the persistent volume (PV) for the block storage device, and create a persistent volume claim (PVC) for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. 1. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: ```yaml diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_vsphere/ipi/installing-vsphere-installer-provisioned.txt b/ocp-product-docs-plaintext/4.17/installing/installing_vsphere/ipi/installing-vsphere-installer-provisioned.txt index 2ec2541d..a82065da 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_vsphere/ipi/installing-vsphere-installer-provisioned.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_vsphere/ipi/installing-vsphere-installer-provisioned.txt @@ -33,7 +33,7 @@ In Red Hat OpenShift Container Platform 4.17, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. 
If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -51,14 +51,14 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -* Optional: Before you create the cluster, configure an external load balancer in place of the default load balancer. +* Optional: Before you create the cluster, you configured an external load balancer in place of the default load balancer. [IMPORTANT] ---- You do not need to specify API and Ingress static addresses for your installation program. If you choose this configuration, you must take additional actions to define network targets that accept an IP address from each referenced vSphere subnet. See the section "Configuring a user-managed load balancer". ---- -1. Change to the directory that contains the installation program and initialize the cluster deployment: +1. In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -148,9 +148,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -158,7 +158,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami @@ -218,13 +218,13 @@ testing that was possibly completed against these Red Hat OpenShift Container Pl components. ---- -1. To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. +1. Change the spec.storage.pvc field in the configs.imageregistry/cluster resource. [NOTE] ---- When you use shared storage, review your security settings to prevent outside access. ---- -2. Verify that you do not have a registry pod: +2. Verify that you do not have a registry pod by running the following command: ```terminal $ oc get pod -n openshift-image-registry -l docker-registry=default @@ -241,7 +241,7 @@ No resourses found in openshift-image-registry namespace ---- If you do have a registry pod in your output, you do not need to continue with this procedure. ---- -3. Check the registry configuration: +3. Check the registry configuration by running the following command: ```terminal $ oc edit configs.imageregistry.operator.openshift.io @@ -256,7 +256,7 @@ storage: ``` Leave the claim field blank to allow the automatic creation of an image-registry-storage persistent volume claim (PVC). 
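If you prefer a non-interactive change over oc edit, a minimal sketch of the same update as a merge patch might look like the following; the empty claim value simply mirrors the blank claim field described here and is an assumption, not the documented procedure:

```terminal
$ oc patch configs.imageregistry.operator.openshift.io cluster --type=merge \
    -p '{"spec":{"storage":{"pvc":{"claim":""}}}}'
```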
The PVC is generated based on the default storage class. However, be aware that the default storage class might provide ReadWriteOnce (RWO) volumes, such as a RADOS Block Device (RBD), which can cause issues when you replicate to more than one replica. -4. Check the clusteroperator status: +4. Check the clusteroperator status by running the following command: ```terminal $ oc get clusteroperator image-registry @@ -265,8 +265,8 @@ $ oc get clusteroperator image-registry Example output ```terminal -NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE -image-registry 4.7 True False False 6h50m +NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE +image-registry 4.7 True False False 6h50m ``` @@ -289,7 +289,7 @@ have more than one replica. $ oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' ``` -2. Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. +2. Provision the persistent volume (PV) for the block storage device, and create a persistent volume claim (PVC) for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. 1. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: ```yaml diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_vsphere/upi/installing-restricted-networks-vsphere.txt b/ocp-product-docs-plaintext/4.17/installing/installing_vsphere/upi/installing-restricted-networks-vsphere.txt index 5af0584c..2121f707 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_vsphere/upi/installing-restricted-networks-vsphere.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_vsphere/upi/installing-restricted-networks-vsphere.txt @@ -76,7 +76,7 @@ In Red Hat OpenShift Container Platform 4.17, you require access to the internet obtain the images that are necessary to install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -378,20 +378,20 @@ You can modify the default installation configuration file, so that you can depl The default install-config.yaml file configuration from the previous release of Red Hat OpenShift Container Platform is deprecated. You can continue to use the deprecated default configuration, but the openshift-installer will prompt you with a warning message that indicates the use of deprecated fields in the configuration file. +* You have an existing install-config.yaml installation configuration file. [IMPORTANT] ---- -The example uses the govc command. The govc command is an open source command available from VMware; it is not available from Red Hat. The Red Hat support team does not maintain the govc command. Instructions for downloading and installing govc are found on the VMware documentation website +You must specify at least one failure domain for your Red Hat OpenShift Container Platform cluster, so that you can provision data center objects for your VMware vCenter server. 
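As a rough sketch only, a single failure domain entry under the platform.vsphere section of the install-config.yaml file might look like the following; every name, path, and address shown is a hypothetical placeholder rather than a value taken from this procedure:

```yaml
platform:
  vsphere:
    failureDomains:
    - name: us-east-1                # hypothetical failure domain name
      region: us-east                # value attached through the openshift-region tag category
      zone: us-east-1a               # value attached through the openshift-zone tag category
      server: vcenter.example.com
      topology:
        datacenter: example-datacenter
        computeCluster: /example-datacenter/host/example-cluster
        datastore: /example-datacenter/datastore/example-datastore
        networks:
        - example-network
```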
Consider specifying multiple failure domains if you need to provision virtual machine nodes in different data centers, clusters, datastores, and other components. To enable regions and zones, you must define multiple failure domains for your Red Hat OpenShift Container Platform cluster. ---- - -* You have an existing install-config.yaml installation configuration file. +* You have installed the govc command line tool. [IMPORTANT] ---- -You must specify at least one failure domain for your Red Hat OpenShift Container Platform cluster, so that you can provision data center objects for your VMware vCenter server. Consider specifying multiple failure domains if you need to provision virtual machine nodes in different data centers, clusters, datastores, and other components. To enable regions and zones, you must define multiple failure domains for your Red Hat OpenShift Container Platform cluster. +The example uses the govc command. The govc command is an open source command available from VMware; it is not available from Red Hat. The Red Hat support team does not maintain the govc command. Instructions for downloading and installing govc are found on the VMware documentation website. ---- -1. Enter the following govc command-line tool commands to create the openshift-region and openshift-zone vCenter tag categories: +1. Create the openshift-region and openshift-zone vCenter tag categories by running the following commands: [IMPORTANT] ---- @@ -407,25 +407,25 @@ $ govc tags.category.create -d "OpenShift region" openshift-region $ govc tags.category.create -d "OpenShift zone" openshift-zone ``` -2. To create a region tag for each region vSphere data center where you want to deploy your cluster, enter the following command in your terminal: +2. For each region where you want to deploy your cluster, create a region tag by running the following command: ```terminal $ govc tags.create -c ``` -3. To create a zone tag for each vSphere cluster where you want to deploy your cluster, enter the following command: +3. For each zone where you want to deploy your cluster, create a zone tag by running the following command: ```terminal $ govc tags.create -c ``` -4. Attach region tags to each vCenter data center object by entering the following command: +4. Attach region tags to each vCenter data center object by running the following command: ```terminal $ govc tags.attach -c / ``` -5. Attach the zone tags to each vCenter cluster object by entering the following command: +5. Attach the zone tags to each vCenter cluster object by running the following command: ```terminal $ govc tags.attach -c //host/ @@ -994,9 +994,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1004,7 +1004,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. 
Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami @@ -1253,13 +1253,13 @@ testing that was possibly completed against these Red Hat OpenShift Container Pl components. ---- -1. To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. +1. Change the spec.storage.pvc field in the configs.imageregistry/cluster resource. [NOTE] ---- When you use shared storage, review your security settings to prevent outside access. ---- -2. Verify that you do not have a registry pod: +2. Verify that you do not have a registry pod by running the following command: ```terminal $ oc get pod -n openshift-image-registry -l docker-registry=default @@ -1276,7 +1276,7 @@ No resourses found in openshift-image-registry namespace ---- If you do have a registry pod in your output, you do not need to continue with this procedure. ---- -3. Check the registry configuration: +3. Check the registry configuration by running the following command: ```terminal $ oc edit configs.imageregistry.operator.openshift.io @@ -1291,7 +1291,7 @@ storage: ``` Leave the claim field blank to allow the automatic creation of an image-registry-storage persistent volume claim (PVC). The PVC is generated based on the default storage class. However, be aware that the default storage class might provide ReadWriteOnce (RWO) volumes, such as a RADOS Block Device (RBD), which can cause issues when you replicate to more than one replica. -4. Check the clusteroperator status: +4. Check the clusteroperator status by running the following command: ```terminal $ oc get clusteroperator image-registry @@ -1300,8 +1300,8 @@ $ oc get clusteroperator image-registry Example output ```terminal -NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE -image-registry 4.7 True False False 6h50m +NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE +image-registry 4.7 True False False 6h50m ``` @@ -1352,7 +1352,7 @@ have more than one replica. $ oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' ``` -2. Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. +2. Provision the persistent volume (PV) for the block storage device, and create a persistent volume claim (PVC) for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. 1. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: ```yaml diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_vsphere/upi/installing-vsphere-network-customizations.txt b/ocp-product-docs-plaintext/4.17/installing/installing_vsphere/upi/installing-vsphere-network-customizations.txt index 02aab130..054cff3a 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_vsphere/upi/installing-vsphere-network-customizations.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_vsphere/upi/installing-vsphere-network-customizations.txt @@ -37,7 +37,7 @@ In Red Hat OpenShift Container Platform 4.17, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. 
If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -301,20 +301,20 @@ You can modify the default installation configuration file, so that you can depl The default install-config.yaml file configuration from the previous release of Red Hat OpenShift Container Platform is deprecated. You can continue to use the deprecated default configuration, but the openshift-installer will prompt you with a warning message that indicates the use of deprecated fields in the configuration file. +* You have an existing install-config.yaml installation configuration file. [IMPORTANT] ---- -The example uses the govc command. The govc command is an open source command available from VMware; it is not available from Red Hat. The Red Hat support team does not maintain the govc command. Instructions for downloading and installing govc are found on the VMware documentation website +You must specify at least one failure domain for your Red Hat OpenShift Container Platform cluster, so that you can provision data center objects for your VMware vCenter server. Consider specifying multiple failure domains if you need to provision virtual machine nodes in different data centers, clusters, datastores, and other components. To enable regions and zones, you must define multiple failure domains for your Red Hat OpenShift Container Platform cluster. ---- - -* You have an existing install-config.yaml installation configuration file. +* You have installed the govc command line tool. [IMPORTANT] ---- -You must specify at least one failure domain for your Red Hat OpenShift Container Platform cluster, so that you can provision data center objects for your VMware vCenter server. Consider specifying multiple failure domains if you need to provision virtual machine nodes in different data centers, clusters, datastores, and other components. To enable regions and zones, you must define multiple failure domains for your Red Hat OpenShift Container Platform cluster. +The example uses the govc command. The govc command is an open source command available from VMware; it is not available from Red Hat. The Red Hat support team does not maintain the govc command. Instructions for downloading and installing govc are found on the VMware documentation website. ---- -1. Enter the following govc command-line tool commands to create the openshift-region and openshift-zone vCenter tag categories: +1. Create the openshift-region and openshift-zone vCenter tag categories by running the following commands: [IMPORTANT] ---- @@ -330,25 +330,25 @@ $ govc tags.category.create -d "OpenShift region" openshift-region $ govc tags.category.create -d "OpenShift zone" openshift-zone ``` -2. To create a region tag for each region vSphere data center where you want to deploy your cluster, enter the following command in your terminal: +2. For each region where you want to deploy your cluster, create a region tag by running the following command: ```terminal $ govc tags.create -c ``` -3. To create a zone tag for each vSphere cluster where you want to deploy your cluster, enter the following command: +3. For each zone where you want to deploy your cluster, create a zone tag by running the following command: ```terminal $ govc tags.create -c ``` -4. Attach region tags to each vCenter data center object by entering the following command: +4. 
Attach region tags to each vCenter data center object by running the following command: ```terminal $ govc tags.attach -c / ``` -5. Attach the zone tags to each vCenter cluster object by entering the following command: +5. Attach the zone tags to each vCenter cluster object by running the following command: ```terminal $ govc tags.attach -c //host/ @@ -1053,9 +1053,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1063,7 +1063,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami @@ -1291,7 +1291,7 @@ have more than one replica. $ oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' ``` -2. Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. +2. Provision the persistent volume (PV) for the block storage device, and create a persistent volume claim (PVC) for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. 1. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: ```yaml diff --git a/ocp-product-docs-plaintext/4.17/installing/installing_vsphere/upi/installing-vsphere.txt b/ocp-product-docs-plaintext/4.17/installing/installing_vsphere/upi/installing-vsphere.txt index 29ad639c..f603aa61 100644 --- a/ocp-product-docs-plaintext/4.17/installing/installing_vsphere/upi/installing-vsphere.txt +++ b/ocp-product-docs-plaintext/4.17/installing/installing_vsphere/upi/installing-vsphere.txt @@ -37,7 +37,7 @@ In Red Hat OpenShift Container Platform 4.17, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -296,20 +296,20 @@ You can modify the default installation configuration file, so that you can depl The default install-config.yaml file configuration from the previous release of Red Hat OpenShift Container Platform is deprecated. You can continue to use the deprecated default configuration, but the openshift-installer will prompt you with a warning message that indicates the use of deprecated fields in the configuration file. +* You have an existing install-config.yaml installation configuration file. [IMPORTANT] ---- -The example uses the govc command. The govc command is an open source command available from VMware; it is not available from Red Hat. The Red Hat support team does not maintain the govc command. 
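For reference, a hedged sketch of one way to install govc on a Linux host from the upstream govmomi project releases follows; the release asset name is an assumption and might differ for your architecture or operating system:

```terminal
$ curl -L -o - "https://github.com/vmware/govmomi/releases/latest/download/govc_Linux_x86_64.tar.gz" \
    | tar -C /usr/local/bin -xvzf - govc
```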
Instructions for downloading and installing govc are found on the VMware documentation website +You must specify at least one failure domain for your Red Hat OpenShift Container Platform cluster, so that you can provision data center objects for your VMware vCenter server. Consider specifying multiple failure domains if you need to provision virtual machine nodes in different data centers, clusters, datastores, and other components. To enable regions and zones, you must define multiple failure domains for your Red Hat OpenShift Container Platform cluster. ---- - -* You have an existing install-config.yaml installation configuration file. +* You have installed the govc command line tool. [IMPORTANT] ---- -You must specify at least one failure domain for your Red Hat OpenShift Container Platform cluster, so that you can provision data center objects for your VMware vCenter server. Consider specifying multiple failure domains if you need to provision virtual machine nodes in different data centers, clusters, datastores, and other components. To enable regions and zones, you must define multiple failure domains for your Red Hat OpenShift Container Platform cluster. +The example uses the govc command. The govc command is an open source command available from VMware; it is not available from Red Hat. The Red Hat support team does not maintain the govc command. Instructions for downloading and installing govc are found on the VMware documentation website. ---- -1. Enter the following govc command-line tool commands to create the openshift-region and openshift-zone vCenter tag categories: +1. Create the openshift-region and openshift-zone vCenter tag categories by running the following commands: [IMPORTANT] ---- @@ -325,25 +325,25 @@ $ govc tags.category.create -d "OpenShift region" openshift-region $ govc tags.category.create -d "OpenShift zone" openshift-zone ``` -2. To create a region tag for each region vSphere data center where you want to deploy your cluster, enter the following command in your terminal: +2. For each region where you want to deploy your cluster, create a region tag by running the following command: ```terminal $ govc tags.create -c ``` -3. To create a zone tag for each vSphere cluster where you want to deploy your cluster, enter the following command: +3. For each zone where you want to deploy your cluster, create a zone tag by running the following command: ```terminal $ govc tags.create -c ``` -4. Attach region tags to each vCenter data center object by entering the following command: +4. Attach region tags to each vCenter data center object by running the following command: ```terminal $ govc tags.attach -c / ``` -5. Attach the zone tags to each vCenter cluster object by entering the following command: +5. Attach the zone tags to each vCenter cluster object by running the following command: ```terminal $ govc tags.attach -c //host/ @@ -869,9 +869,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -879,7 +879,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. 
Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami @@ -1115,13 +1115,13 @@ testing that was possibly completed against these Red Hat OpenShift Container Pl components. ---- -1. To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. +1. Change the spec.storage.pvc field in the configs.imageregistry/cluster resource. [NOTE] ---- When you use shared storage, review your security settings to prevent outside access. ---- -2. Verify that you do not have a registry pod: +2. Verify that you do not have a registry pod by running the following command: ```terminal $ oc get pod -n openshift-image-registry -l docker-registry=default @@ -1138,7 +1138,7 @@ No resourses found in openshift-image-registry namespace ---- If you do have a registry pod in your output, you do not need to continue with this procedure. ---- -3. Check the registry configuration: +3. Check the registry configuration by running the following command: ```terminal $ oc edit configs.imageregistry.operator.openshift.io @@ -1153,7 +1153,7 @@ storage: ``` Leave the claim field blank to allow the automatic creation of an image-registry-storage persistent volume claim (PVC). The PVC is generated based on the default storage class. However, be aware that the default storage class might provide ReadWriteOnce (RWO) volumes, such as a RADOS Block Device (RBD), which can cause issues when you replicate to more than one replica. -4. Check the clusteroperator status: +4. Check the clusteroperator status by running the following command: ```terminal $ oc get clusteroperator image-registry @@ -1162,8 +1162,8 @@ $ oc get clusteroperator image-registry Example output ```terminal -NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE -image-registry 4.7 True False False 6h50m +NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE +image-registry 4.7 True False False 6h50m ``` @@ -1214,7 +1214,7 @@ have more than one replica. $ oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' ``` -2. Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. +2. Provision the persistent volume (PV) for the block storage device, and create a persistent volume claim (PVC) for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. 1. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: ```yaml diff --git a/ocp-product-docs-plaintext/4.17/machine_configuration/index.txt b/ocp-product-docs-plaintext/4.17/machine_configuration/index.txt index 8784926e..72c33b5b 100644 --- a/ocp-product-docs-plaintext/4.17/machine_configuration/index.txt +++ b/ocp-product-docs-plaintext/4.17/machine_configuration/index.txt @@ -335,7 +335,7 @@ UPDATED:: The True status indicates that the MCO has applied the current machine UPDATING:: The True status indicates that the MCO is applying the desired machine config, as specified in the MachineConfigPool custom resource, to at least one of the nodes in that MCP. The desired machine config is the new, edited machine config. Nodes that are updating might not be available for scheduling. The False status indicates that all nodes in the MCP are updated. 
DEGRADED:: A True status indicates the MCO is blocked from applying the current or desired machine config to at least one of the nodes in that MCP, or the configuration is failing. Nodes that are degraded might not be available for scheduling. A False status indicates that all nodes in the MCP are ready. MACHINECOUNT:: Indicates the total number of machines in that MCP. -READYMACHINECOUNT:: Indicates the total number of machines in that MCP that are ready for scheduling. +READYMACHINECOUNT:: Indicates the number of machines that are both running the current machine config and are ready for scheduling. This count is always less than or equal to the UPDATEDMACHINECOUNT number. UPDATEDMACHINECOUNT:: Indicates the total number of machines in that MCP that have the current machine config. DEGRADEDMACHINECOUNT:: Indicates the total number of machines in that MCP that are marked as degraded or unreconcilable. diff --git a/ocp-product-docs-plaintext/4.17/machine_configuration/mco-update-boot-images.txt b/ocp-product-docs-plaintext/4.17/machine_configuration/mco-update-boot-images.txt index 2197f168..21ed0f2f 100644 --- a/ocp-product-docs-plaintext/4.17/machine_configuration/mco-update-boot-images.txt +++ b/ocp-product-docs-plaintext/4.17/machine_configuration/mco-update-boot-images.txt @@ -15,7 +15,12 @@ The updating boot image feature for AWS is a Technology Preview feature only. Te For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. ---- If you are not using the default user data secret, named worker-user-data, in your machine set, or you have modified the worker-user-data secret, you should not use managed boot image updates. This is because the Machine Config Operator (MCO) updates the machine set to use a managed version of the secret. By using the managed boot images feature, you are giving up the capability to customize the secret stored in the machine set object. -To view the current boot image used in your cluster, examine a machine set: +To view the current boot image used in your cluster, examine a machine set. + +[NOTE] +---- +The location and format of the boot image within the machine set differs, based on the platform. However, the boot image is always listed in the spec.template.spec.providerSpec. parameter. +---- ```yaml apiVersion: machine.openshift.io/v1beta1 @@ -40,6 +45,26 @@ spec: ``` This boot image is the same as the originally-installed Red Hat OpenShift Container Platform version, in this example Red Hat OpenShift Container Platform 4.12, regardless of the current version of the cluster. The way that the boot image is represented in the machine set depends on the platform, as the structure of the providerSpec field differs from platform to platform. + +```yaml +apiVersion: machine.openshift.io/v1beta1 +kind: MachineSet +metadata: + name: ci-ln-hmy310k-72292-5f87z-worker-a + namespace: openshift-machine-api +spec: +# ... + template: +# ... + spec: +# ... + providerSpec: + value: + ami: + id: ami-0e8fd9094e487d1ff +# ... +``` + If you configure your cluster to update your boot images, the boot image referenced in your machine sets matches the current version of the cluster. # Configuring updated boot images @@ -179,7 +204,7 @@ spec: # ... ``` -This boot image is the same as the current Red Hat OpenShift Container Platform version. +This boot image is the same as the current Red Hat OpenShift Container Platform version. 
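For example, one way to spot-check which boot image a compute machine set currently references is to list the machine sets and then inspect the providerSpec of one of them; the machineset_name value is a placeholder for a machine set in your cluster:

```terminal
$ oc get machinesets.machine.openshift.io -n openshift-machine-api
$ oc get machinesets.machine.openshift.io <machineset_name> -n openshift-machine-api -o yaml
```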
The location and format of the boot image within the machine set differs, based on the platform. However, the boot image is always listed in the spec.template.spec.providerSpec. parameter. * Enabling features using feature gates diff --git a/ocp-product-docs-plaintext/4.17/networking/networking_operators/aws_load_balancer_operator/understanding-aws-load-balancer-operator.txt b/ocp-product-docs-plaintext/4.17/networking/networking_operators/aws_load_balancer_operator/understanding-aws-load-balancer-operator.txt index e3efb179..205d2c4c 100644 --- a/ocp-product-docs-plaintext/4.17/networking/networking_operators/aws_load_balancer_operator/understanding-aws-load-balancer-operator.txt +++ b/ocp-product-docs-plaintext/4.17/networking/networking_operators/aws_load_balancer_operator/understanding-aws-load-balancer-operator.txt @@ -20,30 +20,18 @@ The AWS Load Balancer Operator can tag the public subnets if the kubernetes.io/r The AWS Load Balancer Operator supports the Kubernetes service resource of type LoadBalancer by using Network Load Balancer (NLB) with the instance target type only. -1. You can deploy the AWS Load Balancer Operator on demand from OperatorHub, by creating a Subscription object by running the following command: +1. To deploy the AWS Load Balancer Operator on-demand from OperatorHub, create a Subscription object by running the following command: ```terminal $ oc -n aws-load-balancer-operator get sub aws-load-balancer-operator --template='{{.status.installplan.name}}{{"\n"}}' ``` -Example output - -```terminal -install-zlfbt -``` - 2. Check if the status of an install plan is Complete by running the following command: ```terminal $ oc -n aws-load-balancer-operator get ip --template='{{.status.phase}}{{"\n"}}' ``` -Example output - -```terminal -Complete -``` - 3. View the status of the aws-load-balancer-operator-controller-manager deployment by running the following command: ```terminal diff --git a/ocp-product-docs-plaintext/4.17/networking/networking_operators/dns-operator.txt b/ocp-product-docs-plaintext/4.17/networking/networking_operators/dns-operator.txt index 81d40e67..8a3e7145 100644 --- a/ocp-product-docs-plaintext/4.17/networking/networking_operators/dns-operator.txt +++ b/ocp-product-docs-plaintext/4.17/networking/networking_operators/dns-operator.txt @@ -71,6 +71,12 @@ The Cluster Domain field is the base DNS domain used to construct fully qualified pod and service domain names. The Cluster IP is the address pods query for name resolution. The IP is defined as the 10th address in the service CIDR range. +2. To find the service CIDR range, such as 172.30.0.0/16, of your cluster, use the oc get command: + +```terminal +$ oc get networks.config/cluster -o jsonpath='{$.status.serviceNetwork}' +``` + # Using DNS forwarding @@ -131,7 +137,7 @@ spec: clusterDomain: cluster.local clusterIP: x.y.z.10 conditions: - ... +... ``` Must comply with the rfc6335 service name syntax. @@ -337,7 +343,7 @@ The string value can be a combination of units such as 0.5h10m and is converted 1. To review the change, look at the config map again by running the following command: ```terminal -oc get configmap/dns-default -n openshift-dns -o yaml +$ oc get configmap/dns-default -n openshift-dns -o yaml ``` 2. Verify that you see entries that look like the following example: @@ -368,19 +374,12 @@ The following are use cases for changing the DNS Operator managementState: oc patch dns.operator.openshift.io default --type merge --patch '{"spec":{"managementState":"Unmanaged"}}' ``` -2. 
Review managementState of the DNS Operator using the jsonpath command-line JSON parser: +2. Review managementState of the DNS Operator by using the jsonpath command-line JSON parser: ```terminal $ oc get dns.operator.openshift.io default -ojsonpath='{.spec.managementState}' ``` -Example output - -```terminal -"Unmanaged" -``` - - [NOTE] ---- diff --git a/ocp-product-docs-plaintext/4.17/networking/networking_operators/ebpf_manager/ebpf-manager-operator-deploy.txt b/ocp-product-docs-plaintext/4.17/networking/networking_operators/ebpf_manager/ebpf-manager-operator-deploy.txt index 29693792..eb241e2e 100644 --- a/ocp-product-docs-plaintext/4.17/networking/networking_operators/ebpf_manager/ebpf-manager-operator-deploy.txt +++ b/ocp-product-docs-plaintext/4.17/networking/networking_operators/ebpf_manager/ebpf-manager-operator-deploy.txt @@ -87,19 +87,5 @@ Example output ```text 2024/08/13 15:20:06 15016 packets received 2024/08/13 15:20:06 93581579 bytes received - -2024/08/13 15:20:09 19284 packets received -2024/08/13 15:20:09 99638680 bytes received - -2024/08/13 15:20:12 23522 packets received -2024/08/13 15:20:12 105666062 bytes received - -2024/08/13 15:20:15 27276 packets received -2024/08/13 15:20:15 112028608 bytes received - -2024/08/13 15:20:18 29470 packets received -2024/08/13 15:20:18 112732299 bytes received - -2024/08/13 15:20:21 32588 packets received -2024/08/13 15:20:21 113813781 bytes received +... ``` diff --git a/ocp-product-docs-plaintext/4.17/networking/networking_operators/external_dns_operator/nw-configuring-cluster-wide-egress-proxy.txt b/ocp-product-docs-plaintext/4.17/networking/networking_operators/external_dns_operator/nw-configuring-cluster-wide-egress-proxy.txt index 45caae03..e4ecdcf0 100644 --- a/ocp-product-docs-plaintext/4.17/networking/networking_operators/external_dns_operator/nw-configuring-cluster-wide-egress-proxy.txt +++ b/ocp-product-docs-plaintext/4.17/networking/networking_operators/external_dns_operator/nw-configuring-cluster-wide-egress-proxy.txt @@ -26,14 +26,8 @@ $ oc -n external-dns-operator patch subscription external-dns-operator --type='j ``` -* After the deployment of the External DNS Operator is completed, verify that the trusted CA environment variable is added to the external-dns-operator deployment by running the following command: +* After the deployment of the External DNS Operator is completed, verify that the trusted CA environment variable is added, outputted as trusted-ca, to the external-dns-operator deployment by running the following command: ```terminal $ oc -n external-dns-operator exec deploy/external-dns-operator -c external-dns-operator -- printenv TRUSTED_CA_CONFIGMAP_NAME ``` - -Example output - -```terminal -trusted-ca -``` diff --git a/ocp-product-docs-plaintext/4.17/networking/networking_operators/external_dns_operator/nw-creating-dns-records-on-aws.txt b/ocp-product-docs-plaintext/4.17/networking/networking_operators/external_dns_operator/nw-creating-dns-records-on-aws.txt index d899e333..475a42b8 100644 --- a/ocp-product-docs-plaintext/4.17/networking/networking_operators/external_dns_operator/nw-creating-dns-records-on-aws.txt +++ b/ocp-product-docs-plaintext/4.17/networking/networking_operators/external_dns_operator/nw-creating-dns-records-on-aws.txt @@ -7,22 +7,20 @@ You can create DNS records on AWS and AWS GovCloud by using the External DNS Ope You can create DNS records on a public hosted zone for AWS by using the Red Hat External DNS Operator. 
You can use the same instructions to create DNS records on a hosted zone for AWS GovCloud. -1. Check the user. The user must have access to the kube-system namespace. If you don’t have the credentials, as you can fetch the credentials from the kube-system namespace to use the cloud provider client: +1. Check the user profile, such as system:admin, by running the following command. The user profile must have access to the kube-system namespace. If you do not have the credentials, you can fetch the credentials from the kube-system namespace to use the cloud provider client by running the following command: ```terminal $ oc whoami ``` -Example output +2. Fetch the values from aws-creds secret present in kube-system namespace. ```terminal -system:admin +$ export AWS_ACCESS_KEY_ID=$(oc get secrets aws-creds -n kube-system --template={{.data.aws_access_key_id}} | base64 -d) ``` -2. Fetch the values from aws-creds secret present in kube-system namespace. ```terminal -$ export AWS_ACCESS_KEY_ID=$(oc get secrets aws-creds -n kube-system --template={{.data.aws_access_key_id}} | base64 -d) $ export AWS_SECRET_ACCESS_KEY=$(oc get secrets aws-creds -n kube-system --template={{.data.aws_secret_access_key}} | base64 -d) ``` @@ -39,7 +37,7 @@ openshift-console console console-openshift-console.apps.te openshift-console downloads downloads-openshift-console.apps.testextdnsoperator.apacshift.support downloads http edge/Redirect None ``` -4. Get the list of dns zones to find the one which corresponds to the previously found route's domain: +4. Get the list of DNS zones and find the DNS zone that corresponds to the domain of the route that you previously queried: ```terminal $ aws route53 list-hosted-zones | grep testextdnsoperator.apacshift.support diff --git a/ocp-product-docs-plaintext/4.17/networking/networking_operators/external_dns_operator/nw-creating-dns-records-on-gcp.txt b/ocp-product-docs-plaintext/4.17/networking/networking_operators/external_dns_operator/nw-creating-dns-records-on-gcp.txt index 32c37150..d09fbc90 100644 --- a/ocp-product-docs-plaintext/4.17/networking/networking_operators/external_dns_operator/nw-creating-dns-records-on-gcp.txt +++ b/ocp-product-docs-plaintext/4.17/networking/networking_operators/external_dns_operator/nw-creating-dns-records-on-gcp.txt @@ -51,18 +51,12 @@ openshift-console console console-openshift-console.apps.te openshift-console downloads downloads-openshift-console.apps.test.gcp.example.com downloads http edge/Redirect None ``` -6. Get a list of managed zones by running the following command: +6. Get a list of managed zones, such as qe-cvs4g-private-zone test.gcp.example.com, by running the following command: ```terminal $ gcloud dns managed-zones list | grep test.gcp.example.com ``` -Example output - -```terminal -qe-cvs4g-private-zone test.gcp.example.com -``` - 7. 
Create a YAML file, for example, external-dns-sample-gcp.yaml, that defines the ExternalDNS object: Example external-dns-sample-gcp.yaml file diff --git a/ocp-product-docs-plaintext/4.17/networking/networking_operators/external_dns_operator/nw-installing-external-dns-operator-on-cloud-providers.txt b/ocp-product-docs-plaintext/4.17/networking/networking_operators/external_dns_operator/nw-installing-external-dns-operator-on-cloud-providers.txt index 5d82bab3..67559fa9 100644 --- a/ocp-product-docs-plaintext/4.17/networking/networking_operators/external_dns_operator/nw-installing-external-dns-operator-on-cloud-providers.txt +++ b/ocp-product-docs-plaintext/4.17/networking/networking_operators/external_dns_operator/nw-installing-external-dns-operator-on-cloud-providers.txt @@ -131,22 +131,8 @@ external-dns-operator-5584585fd7-5lwqm 2/2 Running 0 11m $ oc -n external-dns-operator get subscription ``` -Example output - -```terminal -NAME PACKAGE SOURCE CHANNEL -external-dns-operator external-dns-operator redhat-operators stable-v1 -``` - 5. Check the external-dns-operator version by running the following command: ```terminal $ oc -n external-dns-operator get csv ``` - -Example output - -```terminal -NAME DISPLAY VERSION REPLACES PHASE -external-dns-operator.v<1.y.z> ExternalDNS Operator <1.y.z> Succeeded -``` diff --git a/ocp-product-docs-plaintext/4.17/networking/networking_operators/external_dns_operator/understanding-external-dns-operator.txt b/ocp-product-docs-plaintext/4.17/networking/networking_operators/external_dns_operator/understanding-external-dns-operator.txt index 508d9cd6..89abe4d8 100644 --- a/ocp-product-docs-plaintext/4.17/networking/networking_operators/external_dns_operator/understanding-external-dns-operator.txt +++ b/ocp-product-docs-plaintext/4.17/networking/networking_operators/external_dns_operator/understanding-external-dns-operator.txt @@ -11,30 +11,18 @@ The External DNS Operator implements the External DNS API from the olm.openshift You can deploy the External DNS Operator on demand from the OperatorHub. Deploying the External DNS Operator creates a Subscription object. -1. Check the name of an install plan by running the following command: +1. Check the name of an install plan, such as install-zcvlr, by running the following command: ```terminal $ oc -n external-dns-operator get sub external-dns-operator -o yaml | yq '.status.installplan.name' ``` -Example output - -```terminal -install-zcvlr -``` - 2. Check if the status of an install plan is Complete by running the following command: ```terminal $ oc -n external-dns-operator get ip -o yaml | yq '.status.phase' ``` -Example output - -```terminal -Complete -``` - 3. View the status of the external-dns-operator deployment by running the following command: ```terminal diff --git a/ocp-product-docs-plaintext/4.17/networking/networking_operators/ingress-operator.txt b/ocp-product-docs-plaintext/4.17/networking/networking_operators/ingress-operator.txt index d2a1e5e0..ed3583b3 100644 --- a/ocp-product-docs-plaintext/4.17/networking/networking_operators/ingress-operator.txt +++ b/ocp-product-docs-plaintext/4.17/networking/networking_operators/ingress-operator.txt @@ -314,19 +314,12 @@ certificate authority that you configured in a custom PKI. * Your certificate meets the following requirements: * The certificate is valid for the ingress domain. * The certificate uses the subjectAltName extension to specify a wildcard domain, such as *.apps.ocp4.example.com. -* You must have an IngressController CR. 
You may use the default one: +* You must have an IngressController CR. Having only the default IngressController CR is sufficient. You can run the following command to check that you have an IngressController CR: ```terminal $ oc --namespace openshift-ingress-operator get ingresscontrollers ``` -Example output - -```terminal -NAME AGE -default 10m -``` - [NOTE] @@ -617,18 +610,12 @@ $ oc apply -f ingress-autoscaler.yaml * Verify that the default Ingress Controller is scaled out to match the value returned by the kube-state-metrics query by running the following commands: -* Use the grep command to search the Ingress Controller YAML file for replicas: +* Use the grep command to search the Ingress Controller YAML file for the number of replicas: ```terminal $ oc get -n openshift-ingress-operator ingresscontroller/default -o yaml | grep replicas: ``` -Example output - -```terminal - replicas: 3 -``` - * Get the pods in the openshift-ingress project: ```terminal @@ -670,39 +657,18 @@ Scaling is not an immediate action, as it takes time to create the desired numbe $ oc get -n openshift-ingress-operator ingresscontrollers/default -o jsonpath='{$.status.availableReplicas}' ``` -Example output - -```terminal -2 -``` - -2. Scale the default IngressController to the desired number of replicas using -the oc patch command. The following example scales the default IngressController -to 3 replicas: +2. Scale the default IngressController to the desired number of replicas by using the oc patch command. The following example scales the default IngressController to 3 replicas. ```terminal $ oc patch -n openshift-ingress-operator ingresscontroller/default --patch '{"spec":{"replicas": 3}}' --type=merge ``` -Example output - -```terminal -ingresscontroller.operator.openshift.io/default patched -``` - -3. Verify that the default IngressController scaled to the number of replicas -that you specified: +3. Verify that the default IngressController scaled to the number of replicas that you specified: ```terminal $ oc get -n openshift-ingress-operator ingresscontrollers/default -o jsonpath='{$.status.availableReplicas}' ``` -Example output - -```terminal -3 -``` - [TIP] ---- @@ -1519,18 +1485,12 @@ Optional: Domain for Red Hat OpenShift Container Platform infrastructure to use ---- Wait for the openshift-apiserver finish rolling updates before exposing the route. ---- -1. Expose the route: +1. Expose the route by entering the following command. The command outputs route.route.openshift.io/hello-openshift exposed to indicate that the route is exposed. ```terminal $ oc expose service hello-openshift ``` -Example output - -```terminal -route.route.openshift.io/hello-openshift exposed -``` - 2. Get a list of routes by running the following command: ```terminal diff --git a/ocp-product-docs-plaintext/4.17/networking/networking_operators/k8s-nmstate-about-the-k8s-nmstate-operator.txt b/ocp-product-docs-plaintext/4.17/networking/networking_operators/k8s-nmstate-about-the-k8s-nmstate-operator.txt index 5ce32a48..b5a1ee5c 100644 --- a/ocp-product-docs-plaintext/4.17/networking/networking_operators/k8s-nmstate-about-the-k8s-nmstate-operator.txt +++ b/ocp-product-docs-plaintext/4.17/networking/networking_operators/k8s-nmstate-about-the-k8s-nmstate-operator.txt @@ -31,7 +31,7 @@ You can install the Kubernetes NMState Operator by using the web console or the ## Installing the Kubernetes NMState Operator by using the web console -You can install the Kubernetes NMState Operator by using the web console. 
After it is installed, the Operator can deploy the NMState State Controller as a daemon set across all of the cluster nodes. +You can install the Kubernetes NMState Operator by using the web console. After you install the Kubernetes NMState Operator, the Operator has deployed the NMState State Controller as a daemon set across all of the cluster nodes. * You are logged in as a user with cluster-admin privileges. @@ -50,8 +50,6 @@ The name restriction is a known issue. The instance is a singleton for the entir ---- 9. Accept the default settings and click Create to create the instance. -After you install the Kubernetes NMState Operator, the Operator has deployed the NMState State Controller as a daemon set across all of the cluster nodes. - ## Installing the Kubernetes NMState Operator by using the CLI You can install the Kubernetes NMState Operator by using the OpenShift CLI (oc). After it is installed, the Operator can deploy the NMState State Controller as a daemon set across all of the cluster nodes. @@ -113,13 +111,6 @@ $ oc get clusterserviceversion -n openshift-nmstate \ -o custom-columns=Name:.metadata.name,Phase:.status.phase ``` -Example output - -```terminal -Name Phase -kubernetes-nmstate-operator.4.17.0-202210210157 Succeeded -``` - 5. Create an instance of the nmstate Operator: ```terminal @@ -131,21 +122,12 @@ metadata: EOF ``` -6. Verify that all pods for the NMState Operator are in a Running state: +6. Verify that all pods for the NMState Operator are in a Running state by entering the following command: ```terminal $ oc get pod -n openshift-nmstate ``` -Example output - -```terminal -Name Ready Status Restarts Age -pod/nmstate-handler-wn55p 1/1 Running 0 77s -pod/nmstate-operator-f6bb869b6-v5m92 1/1 Running 0 4m51s -... -``` - ## Viewing metrics collected by the Kubernetes NMState Operator diff --git a/ocp-product-docs-plaintext/4.17/networking/networking_operators/metallb-operator/metallb-operator-install.txt b/ocp-product-docs-plaintext/4.17/networking/networking_operators/metallb-operator/metallb-operator-install.txt index eb1994ba..95dbcd02 100644 --- a/ocp-product-docs-plaintext/4.17/networking/networking_operators/metallb-operator/metallb-operator-install.txt +++ b/ocp-product-docs-plaintext/4.17/networking/networking_operators/metallb-operator/metallb-operator-install.txt @@ -119,20 +119,13 @@ install-wzg94 metallb-operator.4.17.0-nnnnnnnnnnnn Automatic true ---- Installation of the Operator might take a few seconds. ---- -2. To verify that the Operator is installed, enter the following command: +2. 
To verify that the Operator is installed, enter the following command and then check that output shows Succeeded for the Operator: ```terminal $ oc get clusterserviceversion -n metallb-system \ -o custom-columns=Name:.metadata.name,Phase:.status.phase ``` -Example output - -```terminal -Name Phase -metallb-operator.4.17.0-nnnnnnnnnnnn Succeeded -``` - # Starting MetalLB on your cluster diff --git a/ocp-product-docs-plaintext/4.17/networking/networking_operators/metallb-operator/metallb-upgrading-operator.txt b/ocp-product-docs-plaintext/4.17/networking/networking_operators/metallb-operator/metallb-upgrading-operator.txt index 32613380..bb752045 100644 --- a/ocp-product-docs-plaintext/4.17/networking/networking_operators/metallb-operator/metallb-upgrading-operator.txt +++ b/ocp-product-docs-plaintext/4.17/networking/networking_operators/metallb-operator/metallb-upgrading-operator.txt @@ -42,13 +42,6 @@ spec: $ oc -n metallb-system get csv ``` -Example output - -```terminal -NAME DISPLAY VERSION REPLACES PHASE -metallb-operator.v4.17.0 MetalLB Operator 4.17.0 Succeeded -``` - 4. Check the install plan that exists in the namespace by entering the following command. ```terminal @@ -76,19 +69,12 @@ $ oc edit installplan -n metallb-system After you edit the install plan, the upgrade operation starts. If you enter the oc -n metallb-system get csv command during the upgrade operation, the output might show the Replacing or the Pending status. ---- -1. Verify the upgrade was successful by entering the following command: +* To verify that the Operator is upgraded, enter the following command and then check that output shows Succeeded for the Operator: ```terminal $ oc -n metallb-system get csv ``` -Example output - -```terminal -NAME DISPLAY VERSION REPLACE PHASE -metallb-operator.v.0-202503102139 MetalLB Operator 4.17.0-202503102139 metallb-operator.v4.17.0-202502261233 Succeeded -``` - # Additional resources diff --git a/ocp-product-docs-plaintext/4.17/networking/networking_operators/sr-iov-operator/installing-sriov-operator.txt b/ocp-product-docs-plaintext/4.17/networking/networking_operators/sr-iov-operator/installing-sriov-operator.txt index 78095179..8f91163c 100644 --- a/ocp-product-docs-plaintext/4.17/networking/networking_operators/sr-iov-operator/installing-sriov-operator.txt +++ b/ocp-product-docs-plaintext/4.17/networking/networking_operators/sr-iov-operator/installing-sriov-operator.txt @@ -78,20 +78,13 @@ EOF ``` -* Check that the Operator is installed by entering the following command: +* To verify that the Operator is installed, enter the following command and then check that output shows Succeeded for the Operator: ```terminal $ oc get csv -n openshift-sriov-network-operator \ -o custom-columns=Name:.metadata.name,Phase:.status.phase ``` -Example output - -```terminal -Name Phase -sriov-network-operator.4.17.0-202406131906 Succeeded -``` - ## Web console: Installing the SR-IOV Network Operator diff --git a/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.0/6x-cluster-logging-deploying-6.0.txt b/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.0/6x-cluster-logging-deploying-6.0.txt deleted file mode 100644 index 1671b011..00000000 --- a/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.0/6x-cluster-logging-deploying-6.0.txt +++ /dev/null @@ -1,601 +0,0 @@ -# Installing Logging - - -Red Hat OpenShift Container Platform Operators use custom resources (CRs) to manage applications and their components. 
You provide high-level configuration and settings through the CR. The Operator translates high-level directives into low-level actions, based on best practices embedded within the logic of the Operator. A custom resource definition (CRD) defines a CR and lists all the configurations available to users of the Operator. Installing an Operator creates the CRDs to generate CRs. -To get started with logging, you must install the following Operators: -* Loki Operator to manage your log store. -* Red Hat OpenShift Logging Operator to manage log collection and forwarding. -* Cluster Observability Operator (COO) to manage visualization. -You can use either the Red Hat OpenShift Container Platform web console or the Red Hat OpenShift Container Platform CLI to install or configure logging. - -[IMPORTANT] ----- -You must configure the Red Hat OpenShift Logging Operator after the Loki Operator. ----- - -# Installation by using the CLI - -The following sections describe installing the Loki Operator and the Red Hat OpenShift Logging Operator by using the CLI. - -## Installing the Loki Operator by using the CLI - -Install Loki Operator on your Red Hat OpenShift Container Platform cluster to manage the log store Loki by using the Red Hat OpenShift Container Platform command-line interface (CLI). You can deploy and configure the Loki log store by reconciling the resource LokiStack with the Loki Operator. - -* You have administrator permissions. -* You installed the OpenShift CLI (`oc`). -* You have access to a supported object store. For example: AWS S3, Google Cloud Storage, Azure, Swift, Minio, or OpenShift Data Foundation. - -1. Create a Namespace object for Loki Operator: -Example Namespace object - -```yaml -apiVersion: v1 -kind: Namespace -metadata: - name: openshift-operators-redhat 1 - labels: - openshift.io/cluster-monitoring: "true" 2 -``` - -You must specify openshift-operators-redhat as the namespace. To enable monitoring for the operator, configure Cluster Monitoring Operator to scrape metrics from the openshift-operators-redhat namespace and not the openshift-operators namespace. The openshift-operators namespace might contain community operators, which are untrusted and could publish a metric with the same name as an Red Hat OpenShift Container Platform metric, causing conflicts. -A string value that specifies the label as shown to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace. -2. Apply the Namespace object by running the following command: - -```terminal -$ oc apply -f .yaml -``` - -3. Create an OperatorGroup object. -Example OperatorGroup object - -```yaml -apiVersion: operators.coreos.com/v1 -kind: OperatorGroup -metadata: - name: loki-operator - namespace: openshift-operators-redhat 1 -spec: - upgradeStrategy: Default -``` - -You must specify openshift-operators-redhat as the namespace. -4. Apply the OperatorGroup object by running the following command: - -```terminal -$ oc apply -f .yaml -``` - -5. Create a Subscription object for Loki Operator: -Example Subscription object - -```yaml -apiVersion: operators.coreos.com/v1alpha1 -kind: Subscription -metadata: - name: loki-operator - namespace: openshift-operators-redhat 1 -spec: - channel: stable-6. 2 - installPlanApproval: Automatic 3 - name: loki-operator - source: redhat-operators 4 - sourceNamespace: openshift-marketplace -``` - -You must specify openshift-operators-redhat as the namespace. -Specify stable-6. as the channel. 
-If the approval strategy in the subscription is set to Automatic, the update process initiates as soon as a new operator version is available in the selected channel. If the approval strategy is set to Manual, you must manually approve pending updates. -Specify redhat-operators as the value. If your Red Hat OpenShift Container Platform cluster is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object that you created when you configured Operator Lifecycle Manager (OLM). -6. Apply the Subscription object by running the following command: - -```terminal -$ oc apply -f .yaml -``` - -7. Create a namespace object for deploy the LokiStack: -Example namespace object - -```yaml -apiVersion: v1 -kind: Namespace -metadata: - name: openshift-logging 1 - labels: - openshift.io/cluster-monitoring: "true" 2 -``` - -The openshift-logging namespace is dedicated for all logging workloads. -A string value that specifies the label, as shown, to ensure that cluster monitoring scrapes the openshift-logging namespace. -8. Apply the namespace object by running the following command: - -```terminal -$ oc apply -f .yaml -``` - -9. Create a secret with the credentials to access the object storage. For example, create a secret to access Amazon Web Services (AWS) s3. -Example Secret object - -```yaml -apiVersion: v1 -kind: Secret -metadata: - name: logging-loki-s3 1 - namespace: openshift-logging -stringData: 2 - access_key_id: - access_key_secret: - bucketnames: s3-bucket-name - endpoint: https://s3.eu-central-1.amazonaws.com - region: eu-central-1 -``` - -Use the name logging-loki-s3 to match the name used in LokiStack. -For the contents of the secret see the Loki object storage section. - -[IMPORTANT] ----- -If there is no retention period defined on the s3 bucket or in the LokiStack custom resource (CR), then the logs are not pruned and they stay in the s3 bucket forever, which might fill up the s3 storage. ----- -10. Apply the Secret object by running the following command: - -```terminal -$ oc apply -f .yaml -``` - -11. Create a LokiStack CR: -Example LokiStack CR - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki 1 - namespace: openshift-logging 2 -spec: - size: 1x.small 3 - storage: - schemas: - - version: v13 - effectiveDate: "--
" 4 - secret: - name: logging-loki-s3 5 - type: s3 6 - storageClassName: 7 - tenants: - mode: openshift-logging 8 -``` - -Use the name logging-loki. -You must specify openshift-logging as the namespace. -Specify the deployment size. Supported size options for production instances of Loki are 1x.extra-small, 1x.small, or 1x.medium. Additionally, 1x.pico is supported starting with logging 6.1. -For new installations this date should be set to the equivalent of "yesterday", as this will be the date from when the schema takes effect. -Specify the name of your log store secret. -Specify the corresponding storage type. -Specify the name of a storage class for temporary storage. For best performance, specify a storage class that allocates block storage. You can list the available storage classes for your cluster by using the oc get storageclasses command. -The openshift-logging mode is the default tenancy mode where a tenant is created for log types, such as audit, infrastructure, and application. This enables access control for individual users and user groups to different log streams. -12. Apply the LokiStack CR object by running the following command: - -```terminal -$ oc apply -f .yaml -``` - - -* Verify the installation by running the following command: - -```terminal -$ oc get pods -n openshift-logging -``` - -Example output - -```terminal -$ oc get pods -n openshift-logging -NAME READY STATUS RESTARTS AGE -logging-loki-compactor-0 1/1 Running 0 42m -logging-loki-distributor-7d7688bcb9-dvcj8 1/1 Running 0 42m -logging-loki-gateway-5f6c75f879-bl7k9 2/2 Running 0 42m -logging-loki-gateway-5f6c75f879-xhq98 2/2 Running 0 42m -logging-loki-index-gateway-0 1/1 Running 0 42m -logging-loki-ingester-0 1/1 Running 0 42m -logging-loki-querier-6b7b56bccc-2v9q4 1/1 Running 0 42m -logging-loki-query-frontend-84fb57c578-gq2f7 1/1 Running 0 42m -``` - - -## Installing Red Hat OpenShift Logging Operator by using the CLI - -Install Red Hat OpenShift Logging Operator on your Red Hat OpenShift Container Platform cluster to collect and forward logs to a log store by using the OpenShift CLI (`oc`). - -* You have administrator permissions. -* You installed the OpenShift CLI (`oc`). -* You installed and configured Loki Operator. -* You have created the openshift-logging namespace. - -1. Create an OperatorGroup object: -Example OperatorGroup object - -```yaml -apiVersion: operators.coreos.com/v1 -kind: OperatorGroup -metadata: - name: cluster-logging - namespace: openshift-logging 1 -spec: - upgradeStrategy: Default -``` - -You must specify openshift-logging as the namespace. -2. Apply the OperatorGroup object by running the following command: - -```terminal -$ oc apply -f .yaml -``` - -3. Create a Subscription object for Red Hat OpenShift Logging Operator: -Example Subscription object - -```yaml -apiVersion: operators.coreos.com/v1alpha1 -kind: Subscription -metadata: - name: cluster-logging - namespace: openshift-logging 1 -spec: - channel: stable-6. 2 - installPlanApproval: Automatic 3 - name: cluster-logging - source: redhat-operators 4 - sourceNamespace: openshift-marketplace -``` - -You must specify openshift-logging as the namespace. -Specify stable-6. as the channel. -If the approval strategy in the subscription is set to Automatic, the update process initiates as soon as a new operator version is available in the selected channel. If the approval strategy is set to Manual, you must manually approve pending updates. -Specify redhat-operators as the value. 
If your Red Hat OpenShift Container Platform cluster is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object that you created when you configured Operator Lifecycle Manager (OLM). -4. Apply the Subscription object by running the following command: - -```terminal -$ oc apply -f .yaml -``` - -5. Create a service account to be used by the log collector: - -```terminal -$ oc create sa logging-collector -n openshift-logging -``` - -6. Assign the necessary permissions to the service account for the collector to be able to collect and forward logs. In this example, the collector is provided permissions to collect logs from both infrastructure and application logs. - -```terminal -$ oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z logging-collector -n openshift-logging -$ oc adm policy add-cluster-role-to-user collect-application-logs -z logging-collector -n openshift-logging -$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z logging-collector -n openshift-logging -``` - -7. Create a ClusterLogForwarder CR: -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - name: instance - namespace: openshift-logging 1 -spec: - serviceAccount: - name: logging-collector 2 - outputs: - - name: lokistack-out - type: lokiStack 3 - lokiStack: - target: 4 - name: logging-loki - namespace: openshift-logging - authentication: - token: - from: serviceAccount - tls: - ca: - key: service-ca.crt - configMapName: openshift-service-ca.crt - pipelines: - - name: infra-app-logs - inputRefs: 5 - - application - - infrastructure - outputRefs: - - lokistack-out -``` - -You must specify the openshift-logging namespace. -Specify the name of the service account created before. -Select the lokiStack output type to send logs to the LokiStack instance. -Point the ClusterLogForwarder to the LokiStack instance created earlier. -Select the log output types you want to send to the LokiStack instance. -8. Apply the ClusterLogForwarder CR object by running the following command: - -```terminal -$ oc apply -f .yaml -``` - - -1. Verify the installation by running the following command: - -```terminal -$ oc get pods -n openshift-logging -``` - -Example output - -```terminal -$ oc get pods -n openshift-logging -NAME READY STATUS RESTARTS AGE -cluster-logging-operator-fb7f7cf69-8jsbq 1/1 Running 0 98m -instance-222js 2/2 Running 0 18m -instance-g9ddv 2/2 Running 0 18m -instance-hfqq8 2/2 Running 0 18m -instance-sphwg 2/2 Running 0 18m -instance-vv7zn 2/2 Running 0 18m -instance-wk5zz 2/2 Running 0 18m -logging-loki-compactor-0 1/1 Running 0 42m -logging-loki-distributor-7d7688bcb9-dvcj8 1/1 Running 0 42m -logging-loki-gateway-5f6c75f879-bl7k9 2/2 Running 0 42m -logging-loki-gateway-5f6c75f879-xhq98 2/2 Running 0 42m -logging-loki-index-gateway-0 1/1 Running 0 42m -logging-loki-ingester-0 1/1 Running 0 42m -logging-loki-querier-6b7b56bccc-2v9q4 1/1 Running 0 42m -logging-loki-query-frontend-84fb57c578-gq2f7 1/1 Running 0 42m -``` - - -# Installation by using the web console - -The following sections describe installing the Loki Operator and the Red Hat OpenShift Logging Operator by using the web console. - -## Installing Logging by using the web console - -Install Loki Operator on your Red Hat OpenShift Container Platform cluster to manage the log store Loki from the OperatorHub by using the Red Hat OpenShift Container Platform web console. 
You can deploy and configure the Loki log store by reconciling the resource LokiStack with the Loki Operator. - -* You have administrator permissions. -* You have access to the Red Hat OpenShift Container Platform web console. -* You have access to a supported object store (AWS S3, Google Cloud Storage, Azure, Swift, Minio, OpenShift Data Foundation). - -1. In the Red Hat OpenShift Container Platform web console Administrator perspective, go to Operators -> OperatorHub. -2. Type Loki Operator in the Filter by keyword field. Click Loki Operator in the list of available Operators, and then click Install. - -[IMPORTANT] ----- -The Community Loki Operator is not supported by Red Hat. ----- -3. Select stable-x.y as the Update channel. - -The Loki Operator must be deployed to the global Operator group namespace openshift-operators-redhat, so the Installation mode and Installed Namespace are already selected. If this namespace does not already exist, it will be created for you. -4. Select Enable Operator-recommended cluster monitoring on this namespace. - -This option sets the openshift.io/cluster-monitoring: "true" label in the Namespace object. You must select this option to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace. -5. For Update approval select Automatic, then click Install. - -If the approval strategy in the subscription is set to Automatic, the update process initiates as soon as a new Operator version is available in the selected channel. If the approval strategy is set to Manual, you must manually approve pending updates. - -[NOTE] ----- -An Operator might display a Failed status before the installation completes. If the Operator install completes with an InstallSucceeded message, refresh the page. ----- -6. While the Operator installs, create the namespace to which the log store will be deployed. -1. Click + in the top right of the screen to access the Import YAML page. -2. Add the YAML definition for the openshift-logging namespace: -Example namespace object - -```yaml -apiVersion: v1 -kind: Namespace -metadata: - name: openshift-logging 1 - labels: - openshift.io/cluster-monitoring: "true" 2 -``` - -The openshift-logging namespace is dedicated for all logging workloads. -A string value that specifies the label, as shown, to ensure that cluster monitoring scrapes the openshift-logging namespace. -3. Click Create. -7. Create a secret with the credentials to access the object storage. -1. Click + in the top right of the screen to access the Import YAML page. -2. Add the YAML definition for the secret. For example, create a secret to access Amazon Web Services (AWS) s3: -Example Secret object - -```yaml -apiVersion: v1 -kind: Secret -metadata: - name: logging-loki-s3 1 - namespace: openshift-logging 2 -stringData: 3 - access_key_id: - access_key_secret: - bucketnames: s3-bucket-name - endpoint: https://s3.eu-central-1.amazonaws.com - region: eu-central-1 -``` - -Note down the name used for the secret logging-loki-s3 to use it later when creating the LokiStack resource. -Set the namespace to openshift-logging as that will be the namespace used to deploy LokiStack. -For the contents of the secret see the Loki object storage section. - -[IMPORTANT] ----- -If there is no retention period defined on the s3 bucket or in the LokiStack custom resource (CR), then the logs are not pruned and they stay in the s3 bucket forever, which might fill up the s3 storage. ----- -3. Click Create. -8. Navigate to the Installed Operators page. 
Select the Loki Operator. Under the Provided APIs, find the LokiStack resource and click Create Instance. -9. Select YAML view, and then use the following template to create a LokiStack CR: - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki 1 - namespace: openshift-logging 2 -spec: - size: 1x.small 3 - storage: - schemas: - - version: v13 - effectiveDate: "--
" - secret: - name: logging-loki-s3 4 - type: s3 5 - storageClassName: 6 - tenants: - mode: openshift-logging 7 -``` - -Use the name logging-loki. -You must specify openshift-logging as the namespace. -Specify the deployment size. Supported size options for production instances of Loki are 1x.extra-small, 1x.small, or 1x.medium. Additionally, 1x.pico is supported starting with logging 6.1. -Specify the name of your log store secret. -Specify the corresponding storage type. -Specify the name of a storage class for temporary storage. For best performance, specify a storage class that allocates block storage. You can list the available storage classes for your cluster by using the oc get storageclasses command. -The openshift-logging mode is the default tenancy mode where a tenant is created for log types, such as audit, infrastructure, and application. This enables access control for individual users and user groups to different log streams. -10. Click Create. - -1. In the LokiStack tab verify that you see your LokiStack instance. -2. In the Status column, verify that you see the message Condition: Ready with a green checkmark. - -## Installing Red Hat OpenShift Logging Operator by using the web console - -Install Red Hat OpenShift Logging Operator on your Red Hat OpenShift Container Platform cluster to collect and forward logs to a log store from the OperatorHub by using the Red Hat OpenShift Container Platform web console. - -* You have administrator permissions. -* You have access to the Red Hat OpenShift Container Platform web console. -* You installed and configured Loki Operator. - -1. In the Red Hat OpenShift Container Platform web console Administrator perspective, go to Operators -> OperatorHub. -2. Type Red Hat OpenShift Logging Operator in the Filter by keyword field. Click Red Hat OpenShift Logging Operator in the list of available Operators, and then click Install. -3. Select stable-x.y as the Update channel. The latest version is already selected in the Version field. - -The Red Hat OpenShift Logging Operator must be deployed to the logging namespace openshift-logging, so the Installation mode and Installed Namespace are already selected. If this namespace does not already exist, it will be created for you. -4. Select Enable Operator-recommended cluster monitoring on this namespace. - -This option sets the openshift.io/cluster-monitoring: "true" label in the Namespace object. You must select this option to ensure that cluster monitoring scrapes the openshift-logging namespace. -5. For Update approval select Automatic, then click Install. - -If the approval strategy in the subscription is set to Automatic, the update process initiates as soon as a new operator version is available in the selected channel. If the approval strategy is set to Manual, you must manually approve pending updates. - -[NOTE] ---- -An Operator might display a Failed status before the installation completes. If the operator installation completes with an InstallSucceeded message, refresh the page. ---- -6. While the operator installs, create the service account that will be used by the log collector to collect the logs. -1. Click the + in the top right of the screen to access the Import YAML page. -2. Enter the YAML definition for the service account. 
-Example ServiceAccount object - -```yaml -apiVersion: v1 -kind: ServiceAccount -metadata: - name: logging-collector 1 - namespace: openshift-logging 2 -``` - -Note down the name used for the service account logging-collector to use it later when creating the ClusterLogForwarder resource. -Set the namespace to openshift-logging because that is the namespace for deploying the ClusterLogForwarder resource. -3. Click the Create button. -7. Create the ClusterRoleBinding objects to grant the necessary permissions to the log collector for accessing the logs that you want to collect and to write the log store, for example infrastructure and application logs. -1. Click the + in the top right of the screen to access the Import YAML page. -2. Enter the YAML definition for the ClusterRoleBinding resources. -Example ClusterRoleBinding resources - -```yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRoleBinding -metadata: - name: logging-collector:write-logs -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: logging-collector-logs-writer 1 -subjects: -- kind: ServiceAccount - name: logging-collector - namespace: openshift-logging ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRoleBinding -metadata: - name: logging-collector:collect-application -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: collect-application-logs 2 -subjects: -- kind: ServiceAccount - name: logging-collector - namespace: openshift-logging ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRoleBinding -metadata: - name: logging-collector:collect-infrastructure -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: collect-infrastructure-logs 3 -subjects: -- kind: ServiceAccount - name: logging-collector - namespace: openshift-logging -``` - -The cluster role to allow the log collector to write logs to LokiStack. -The cluster role to allow the log collector to collect logs from applications. -The cluster role to allow the log collector to collect logs from infrastructure. -3. Click the Create button. -8. Go to the Operators -> Installed Operators page. Select the operator and click the All instances tab. -9. After granting the necessary permissions to the service account, navigate to the Installed Operators page. Select the Red Hat OpenShift Logging Operator under the Provided APIs, find the ClusterLogForwarder resource and click Create Instance. -10. Select YAML view, and then use the following template to create a ClusterLogForwarder CR: -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - name: instance - namespace: openshift-logging 1 -spec: - serviceAccount: - name: logging-collector 2 - outputs: - - name: lokistack-out - type: lokiStack 3 - lokiStack: - target: 4 - name: logging-loki - namespace: openshift-logging - authentication: - token: - from: serviceAccount - tls: - ca: - key: service-ca.crt - configMapName: openshift-service-ca.crt - pipelines: - - name: infra-app-logs - inputRefs: 5 - - application - - infrastructure - outputRefs: - - lokistack-out -``` - -You must specify openshift-logging as the namespace. -Specify the name of the service account created earlier. -Select the lokiStack output type to send logs to the LokiStack instance. -Point the ClusterLogForwarder to the LokiStack instance created earlier. -Select the log output types you want to send to the LokiStack instance. -11. Click Create. - -1. 
In the ClusterLogForwarder tab verify that you see your ClusterLogForwarder instance. -2. In the Status column, verify that you see the messages: -* Condition: observability.openshift.io/Authorized -* observability.openshift.io/Valid, Ready \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.0/log60-cluster-logging-support.txt b/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.0/log60-cluster-logging-support.txt deleted file mode 100644 index d4c8e815..00000000 --- a/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.0/log60-cluster-logging-support.txt +++ /dev/null @@ -1,141 +0,0 @@ -# Support - - -Only the configuration options described in this documentation are supported for logging. -Do not use any other configuration options, as they are unsupported. Configuration paradigms might change across Red Hat OpenShift Container Platform releases, and such cases can only be handled gracefully if all configuration possibilities are controlled. If you use configurations other than those described in this documentation, your changes will be overwritten, because Operators are designed to reconcile any differences. - -[NOTE] ----- -If you must perform configurations not described in the Red Hat OpenShift Container Platform documentation, you must set your Red Hat OpenShift Logging Operator to Unmanaged. An unmanaged logging instance is not supported and does not receive updates until you return its status to Managed. ----- - -[NOTE] ----- -Logging is provided as an installable component, with a distinct release cycle from the core Red Hat OpenShift Container Platform. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility. ----- -Loki is a horizontally scalable, highly available, multi-tenant log aggregation system offered as a GA log store for logging for Red Hat OpenShift that can be visualized with the OpenShift Observability UI. The Loki configuration provided by OpenShift Logging is a short-term log store designed to enable users to perform fast troubleshooting with the collected logs. For that purpose, the logging for Red Hat OpenShift configuration of Loki has short-term storage, and is optimized for very recent queries. - -[IMPORTANT] ----- -For long-term storage or queries over a long time period, users should look to log stores external to their cluster. Loki sizing is only tested and supported for short term storage, for a maximum of 30 days. ----- -Logging for Red Hat OpenShift is an opinionated collector and normalizer of application, infrastructure, and audit logs. It is intended to be used for forwarding logs to various supported systems. -Logging is not: -* A high scale log collection system -* Security Information and Event Monitoring (SIEM) compliant -* A "bring your own" (BYO) log collector configuration -* Historical or long term log retention or storage -* A guaranteed log sink -* Secure storage - audit logs are not stored by default - -# Supported API custom resource definitions - -The following table describes the supported Logging APIs. - - - -# Unsupported configurations - -You must set the Red Hat OpenShift Logging Operator to the Unmanaged state to modify the following components: - -* The collector configuration file -* The collector daemonset - -Explicitly unsupported cases include: - -* Configuring the logging collector using environment variables. You cannot use environment variables to modify the log collector. 
-* Configuring how the log collector normalizes logs. You cannot modify default log normalization. - -# Support policy for unmanaged Operators - -The management state of an Operator determines whether an Operator is actively -managing the resources for its related component in the cluster as designed. If -an Operator is set to an unmanaged state, it does not respond to changes in -configuration nor does it receive updates. - -While this can be helpful in non-production clusters or during debugging, -Operators in an unmanaged state are unsupported and the cluster administrator -assumes full control of the individual component configurations and upgrades. - -An Operator can be set to an unmanaged state using the following methods: - -* Individual Operator configuration - -Individual Operators have a managementState parameter in their configuration. -This can be accessed in different ways, depending on the Operator. For example, -the Red Hat OpenShift Logging Operator accomplishes this by modifying a custom resource -(CR) that it manages, while the Cluster Samples Operator uses a cluster-wide -configuration resource. - -Changing the managementState parameter to Unmanaged means that the Operator -is not actively managing its resources and will take no action related to the -related component. Some Operators might not support this management state as it -might damage the cluster and require manual recovery. - -[WARNING] ----- -Changing individual Operators to the Unmanaged state renders that particular -component and functionality unsupported. Reported issues must be reproduced in -Managed state for support to proceed. ----- -* Cluster Version Operator (CVO) overrides - -The spec.overrides parameter can be added to the CVO’s configuration to allow -administrators to provide a list of overrides to the CVO’s behavior for a -component. Setting the spec.overrides[].unmanaged parameter to true for a -component blocks cluster upgrades and alerts the administrator after a CVO -override has been set: - -```terminal -Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing. -``` - - -[WARNING] ----- -Setting a CVO override puts the entire cluster in an unsupported state. Reported -issues must be reproduced after removing any overrides for support to proceed. ----- - -# Collecting logging data for Red Hat Support - -When opening a support case, it is helpful to provide debugging information about your cluster to Red Hat Support. - -You can use the must-gather tool to collect diagnostic information for project-level resources, cluster-level resources, and each of the logging components. -For prompt support, supply diagnostic information for both Red Hat OpenShift Container Platform and logging. - -## About the must-gather tool - -The oc adm must-gather CLI command collects the information from your cluster that is most likely needed for debugging issues. - -For your logging, must-gather collects the following information: - -* Project-level resources, including pods, configuration maps, service accounts, roles, role bindings, and events at the project level -* Cluster-level resources, including nodes, roles, and role bindings at the cluster level -* OpenShift Logging resources in the openshift-logging and openshift-operators-redhat namespaces, including health status for the log collector, the log store, and the log visualizer - -When you run oc adm must-gather, a new pod is created on the cluster. 
The data is collected on that pod and saved in a new directory that starts with must-gather.local. This directory is created in the current working directory. - -## Collecting logging data - -You can use the oc adm must-gather CLI command to collect information about logging. - -To collect logging information with must-gather: - -1. Navigate to the directory where you want to store the must-gather information. -2. Run the oc adm must-gather command against the logging image: - -```terminal -$ oc adm must-gather --image=$(oc -n openshift-logging get deployment.apps/cluster-logging-operator -o jsonpath='{.spec.template.spec.containers[?(@.name == "cluster-logging-operator")].image}') -``` - - -The must-gather tool creates a new directory that starts with must-gather.local within the current directory. For example: -must-gather.local.4157245944708210408. -3. Create a compressed file from the must-gather directory that was just created. For example, on a computer that uses a Linux operating system, run the following command: - -```terminal -$ tar -cvaf must-gather.tar.gz must-gather.local.4157245944708210408 -``` - -4. Attach the compressed file to your support case on the Red Hat Customer Portal. \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.0/log6x-about-logging.txt b/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.0/log6x-about-logging.txt new file mode 100644 index 00000000..eae2690e --- /dev/null +++ b/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.0/log6x-about-logging.txt @@ -0,0 +1,16 @@ +# About Logging + + + +As a cluster administrator, you can deploy logging on an Red Hat OpenShift Container Platform cluster, and use it to collect and aggregate node system audit logs, application container logs, and infrastructure logs. + +You can use logging to perform the following tasks: + +* Forward logs to your chosen log outputs, including on-cluster, Red Hat managed log storage. +* Visualize your log data in the Red Hat OpenShift Container Platform web console. + + +[NOTE] +---- +Because logging releases on a different cadence from Red Hat OpenShift Container Platform, the logging 6 documentation is available as a separate documentation set at Red Hat OpenShift Logging. +---- \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.0/log6x-about.txt b/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.0/log6x-about.txt deleted file mode 100644 index 7777605d..00000000 --- a/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.0/log6x-about.txt +++ /dev/null @@ -1,160 +0,0 @@ -# Logging 6.0 - - -The ClusterLogForwarder custom resource (CR) is the central configuration point for log collection and forwarding. - -# Inputs and Outputs - -Inputs specify the sources of logs to be forwarded. Logging provides the following built-in input types that select logs from different parts of your cluster: - -* application -* receiver -* infrastructure -* audit - -You can also define custom inputs based on namespaces or pod labels to fine-tune log selection. - -Outputs define the destinations where logs are sent. Each output type has its own set of configuration options, allowing you to customize the behavior and authentication settings. - -# Receiver Input Type - -The receiver input type enables the Logging system to accept logs from external sources. It supports two formats for receiving logs: http and syslog. 
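For illustration, the following is a minimal sketch of a ClusterLogForwarder spec that defines an http receiver input; the receiver field names (type, port, and the http format setting), the port value, and the lokistack-out output are assumptions made for this sketch rather than values confirmed by this document:

```yaml
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
  namespace: openshift-logging
spec:
  serviceAccount:
    name: collector
  inputs:
    - name: http-receiver
      type: receiver
      receiver:
        type: http        # assumed receiver fields; syslog is the other supported format
        port: 8443
        http:
          format: kubeAPIAudit
  pipelines:
    - name: receiver-pipeline
      inputRefs:
        - http-receiver
      outputRefs:
        - lokistack-out   # hypothetical output defined elsewhere in the spec
```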
- -The ReceiverSpec field defines the configuration for a receiver input. - -# Pipelines and Filters - -Pipelines determine the flow of logs from inputs to outputs. A pipeline consists of one or more input refs, output refs, and optional filter refs. You can use filters to transform or drop log messages within a pipeline. The order of filters matters, as they are applied sequentially, and earlier filters can prevent log messages from reaching later stages. - -# Operator Behavior - -The Cluster Logging Operator manages the deployment and configuration of the collector based on the managementState field: - -* When set to Managed (default), the Operator actively manages the logging resources to match the configuration defined in the spec. -* When set to Unmanaged, the Operator does not take any action, allowing you to manually manage the logging components. - -# Validation - -Logging includes extensive validation rules and default values to ensure a smooth and error-free configuration experience. The ClusterLogForwarder resource enforces validation checks on required fields, dependencies between fields, and the format of input values. Default values are provided for certain fields, reducing the need for explicit configuration in common scenarios. - -# Quick Start - -* You have access to an Red Hat OpenShift Container Platform cluster with cluster-admin permissions. -* You installed the OpenShift CLI (`oc`). -* You have access to a supported object store. For example, AWS S3, Google Cloud Storage, Azure, Swift, Minio, or OpenShift Data Foundation. - -1. Install the Red Hat OpenShift Logging Operator, Loki Operator, and Cluster Observability Operator (COO) from OperatorHub. -2. Create a secret to access an existing object storage bucket: -Example command for AWS - -```terminal -$ oc create secret generic logging-loki-s3 \ - --from-literal=bucketnames="" \ - --from-literal=endpoint="" \ - --from-literal=access_key_id="" \ - --from-literal=access_key_secret="" \ - --from-literal=region="" \ - -n openshift-logging -``` - -3. Create a LokiStack custom resource (CR) in the openshift-logging namespace: - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: - managementState: Managed - size: 1x.extra-small - storage: - schemas: - - effectiveDate: '2022-06-01' - version: v13 - secret: - name: logging-loki-s3 - type: s3 - storageClassName: gp3-csi - tenants: - mode: openshift-logging -``` - -4. Create a service account for the collector: - -```shell -$ oc create sa collector -n openshift-logging -``` - -5. Bind the ClusterRole to the service account: - -```shell -$ oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector -n openshift-logging -``` - -6. Create a UIPlugin to enable the Log section in the Observe tab: - -```yaml -apiVersion: observability.openshift.io/v1alpha1 -kind: UIPlugin -metadata: - name: logging -spec: - type: Logging - logging: - lokiStack: - name: logging-loki -``` - -7. Add additional roles to the collector service account: - -```shell -$ oc adm policy add-cluster-role-to-user collect-application-logs -z collector -n openshift-logging -``` - - -```terminal -$ oc adm policy add-cluster-role-to-user collect-audit-logs -z collector -n openshift-logging -``` - - -```terminal -$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector -n openshift-logging -``` - -8. 
Create a ClusterLogForwarder CR to configure log forwarding: - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - name: collector - namespace: openshift-logging -spec: - serviceAccount: - name: collector - outputs: - - name: default-lokistack - type: lokiStack - lokiStack: - target: - name: logging-loki - namespace: openshift-logging - authentication: - token: - from: serviceAccount - tls: - ca: - key: service-ca.crt - configMapName: openshift-service-ca.crt - pipelines: - - name: default-logstore - inputRefs: - - application - - infrastructure - outputRefs: - - default-lokistack -``` - - -* Verify that logs are visible in the Log section of the Observe tab in the Red Hat OpenShift Container Platform web console. \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.0/log6x-clf.txt b/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.0/log6x-clf.txt deleted file mode 100644 index 55d26652..00000000 --- a/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.0/log6x-clf.txt +++ /dev/null @@ -1,764 +0,0 @@ -# Configuring log forwarding - - -The ClusterLogForwarder (CLF) allows users to configure forwarding of logs to various destinations. It provides a flexible way to select log messages from different sources, send them through a pipeline that can transform or filter them, and forward them to one or more outputs. -* Selects log messages using inputs -* Forwards logs to external destinations using outputs -* Filters, transforms, and drops log messages using filters -* Defines log forwarding pipelines connecting inputs, filters and outputs - -# Setting up log collection - -This release of Cluster Logging requires administrators to explicitly grant log collection permissions to the service account associated with ClusterLogForwarder. This was not required in previous releases for the legacy logging scenario consisting of a ClusterLogging and, optionally, a ClusterLogForwarder.logging.openshift.io resource. - -The Red Hat OpenShift Logging Operator provides collect-audit-logs, collect-application-logs, and collect-infrastructure-logs cluster roles, which enable the collector to collect audit logs, application logs, and infrastructure logs respectively. - -Setup log collection by binding the required cluster roles to your service account. - -## Legacy service accounts - -To use the existing legacy service account logcollector, create the following ClusterRoleBinding: - - -```terminal -$ oc adm policy add-cluster-role-to-user collect-application-logs system:serviceaccount:openshift-logging:logcollector -``` - - - -```terminal -$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs system:serviceaccount:openshift-logging:logcollector -``` - - -Additionally, create the following ClusterRoleBinding if collecting audit logs: - - -```terminal -$ oc adm policy add-cluster-role-to-user collect-audit-logs system:serviceaccount:openshift-logging:logcollector -``` - - -## Creating service accounts - -* The Red Hat OpenShift Logging Operator is installed in the openshift-logging namespace. -* You have administrator permissions. - -1. Create a service account for the collector. If you want to write logs to storage that requires a token for authentication, you must include a token in the service account. -2. 
Bind the appropriate cluster roles to the service account: -Example binding command - -```terminal -$ oc adm policy add-cluster-role-to-user system:serviceaccount:: -``` - - -### Cluster Role Binding for your Service Account - -The role_binding.yaml file binds the ClusterLogging operator’s ClusterRole to a specific ServiceAccount, allowing it to manage Kubernetes resources cluster-wide. - - -```yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRoleBinding -metadata: - name: manager-rolebinding -roleRef: 1 - apiGroup: rbac.authorization.k8s.io 2 - kind: ClusterRole 3 - name: cluster-logging-operator 4 -subjects: 5 - - kind: ServiceAccount 6 - name: cluster-logging-operator 7 - namespace: openshift-logging 8 -``` - - -roleRef: References the ClusterRole to which the binding applies. -apiGroup: Indicates the RBAC API group, specifying that the ClusterRole is part of Kubernetes' RBAC system. -kind: Specifies that the referenced role is a ClusterRole, which applies cluster-wide. -name: The name of the ClusterRole being bound to the ServiceAccount, here cluster-logging-operator. -subjects: Defines the entities (users or service accounts) that are being granted the permissions from the ClusterRole. -kind: Specifies that the subject is a ServiceAccount. -Name: The name of the ServiceAccount being granted the permissions. -namespace: Indicates the namespace where the ServiceAccount is located. - -### Writing application logs - -The write-application-logs-clusterrole.yaml file defines a ClusterRole that grants permissions to write application logs to the Loki logging application. - - -```yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole -metadata: - name: cluster-logging-write-application-logs -rules: 1 - - apiGroups: 2 - - loki.grafana.com 3 - resources: 4 - - application 5 - resourceNames: 6 - - logs 7 - verbs: 8 - - create 9 -``` - - -rules: Specifies the permissions granted by this ClusterRole. -apiGroups: Refers to the API group loki.grafana.com, which relates to the Loki logging system. -loki.grafana.com: The API group for managing Loki-related resources. -resources: The resource type that the ClusterRole grants permission to interact with. -application: Refers to the application resources within the Loki logging system. -resourceNames: Specifies the names of resources that this role can manage. -logs: Refers to the log resources that can be created. -verbs: The actions allowed on the resources. -create: Grants permission to create new logs in the Loki system. - -### Writing audit logs - -The write-audit-logs-clusterrole.yaml file defines a ClusterRole that grants permissions to create audit logs in the Loki logging system. - - -```yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole -metadata: - name: cluster-logging-write-audit-logs -rules: 1 - - apiGroups: 2 - - loki.grafana.com 3 - resources: 4 - - audit 5 - resourceNames: 6 - - logs 7 - verbs: 8 - - create 9 -``` - - -rules: Defines the permissions granted by this ClusterRole. -apiGroups: Specifies the API group loki.grafana.com. -loki.grafana.com: The API group responsible for Loki logging resources. -resources: Refers to the resource type this role manages, in this case, audit. -audit: Specifies that the role manages audit logs within Loki. -resourceNames: Defines the specific resources that the role can access. -logs: Refers to the logs that can be managed under this role. -verbs: The actions allowed on the resources. -create: Grants permission to create new audit logs. 
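These write roles only take effect once they are bound to the service account that the collector uses. The following is a minimal sketch of such a binding, assuming an illustrative logging-collector service account in the openshift-logging namespace:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: logging-collector:write-audit-logs
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-logging-write-audit-logs   # the ClusterRole defined above
subjects:
  - kind: ServiceAccount
    name: logging-collector                # illustrative service account name
    namespace: openshift-logging
```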
- -### Writing infrastructure logs - -The write-infrastructure-logs-clusterrole.yaml file defines a ClusterRole that grants permission to create infrastructure logs in the Loki logging system. - - -```yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole -metadata: - name: cluster-logging-write-infrastructure-logs -rules: 1 - - apiGroups: 2 - - loki.grafana.com 3 - resources: 4 - - infrastructure 5 - resourceNames: 6 - - logs 7 - verbs: 8 - - create 9 -``` - - -rules: Specifies the permissions this ClusterRole grants. -apiGroups: Specifies the API group for Loki-related resources. -loki.grafana.com: The API group managing the Loki logging system. -resources: Defines the resource type that this role can interact with. -infrastructure: Refers to infrastructure-related resources that this role manages. -resourceNames: Specifies the names of resources this role can manage. -logs: Refers to the log resources related to infrastructure. -verbs: The actions permitted by this role. -create: Grants permission to create infrastructure logs in the Loki system. - -### ClusterLogForwarder editor role - -The clusterlogforwarder-editor-role.yaml file defines a ClusterRole that allows users to manage ClusterLogForwarders in OpenShift. - - -```yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole -metadata: - name: clusterlogforwarder-editor-role -rules: 1 - - apiGroups: 2 - - observability.openshift.io 3 - resources: 4 - - clusterlogforwarders 5 - verbs: 6 - - create 7 - - delete 8 - - get 9 - - list 10 - - patch 11 - - update 12 - - watch 13 -``` - - -rules: Specifies the permissions this ClusterRole grants. -apiGroups: Refers to the OpenShift-specific API group -obervability.openshift.io: The API group for managing observability resources, like logging. -resources: Specifies the resources this role can manage. -clusterlogforwarders: Refers to the log forwarding resources in OpenShift. -verbs: Specifies the actions allowed on the ClusterLogForwarders. -create: Grants permission to create new ClusterLogForwarders. -delete: Grants permission to delete existing ClusterLogForwarders. -get: Grants permission to retrieve information about specific ClusterLogForwarders. -list: Allows listing all ClusterLogForwarders. -patch: Grants permission to partially modify ClusterLogForwarders. -update: Grants permission to update existing ClusterLogForwarders. -watch: Grants permission to monitor changes to ClusterLogForwarders. - -# Modifying log level in collector - -To modify the log level in the collector, you can set the observability.openshift.io/log-level annotation to trace, debug, info, warn, error, and off. - - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - name: collector - annotations: - observability.openshift.io/log-level: debug -# ... -``` - - -# Managing the Operator - -The ClusterLogForwarder resource has a managementState field that controls whether the operator actively manages its resources or leaves them Unmanaged: - -Managed:: (default) The operator will drive the logging resources to match the desired state in the CLF spec. -Unmanaged:: The operator will not take any action related to the logging components. - -This allows administrators to temporarily pause log forwarding by setting managementState to Unmanaged. - -# Structure of the ClusterLogForwarder - -The CLF has a spec section that contains the following key components: - -Inputs:: Select log messages to be forwarded. 
Built-in input types application, infrastructure and audit forward logs from different parts of the cluster. You can also define custom inputs. -Outputs:: Define destinations to forward logs to. Each output has a unique name and type-specific configuration. -Pipelines:: Define the path logs take from inputs, through filters, to outputs. Pipelines have a unique name and consist of a list of input, output and filter names. -Filters:: Transform or drop log messages in the pipeline. Users can define filters that match certain log fields and drop or modify the messages. Filters are applied in the order specified in the pipeline. - -## Inputs - -Inputs are configured in an array under spec.inputs. There are three built-in input types: - -application:: Selects logs from all application containers, excluding those in infrastructure namespaces. -infrastructure:: Selects logs from nodes and from infrastructure components running in the following namespaces: -* default -* kube -* openshift -* Containing the kube- or openshift- prefix -audit:: Selects logs from the OpenShift API server audit logs, Kubernetes API server audit logs, ovn audit logs, and node audit logs from auditd. - -Users can define custom inputs of type application that select logs from specific namespaces or using pod labels. - -## Outputs - -Outputs are configured in an array under spec.outputs. Each output must have a unique name and a type. Supported types are: - -azureMonitor:: Forwards logs to Azure Monitor. -cloudwatch:: Forwards logs to AWS CloudWatch. -googleCloudLogging:: Forwards logs to Google Cloud Logging. -http:: Forwards logs to a generic HTTP endpoint. -kafka:: Forwards logs to a Kafka broker. -loki:: Forwards logs to a Loki logging backend. -lokistack:: Forwards logs to the logging supported combination of Loki and web proxy with Red Hat OpenShift Container Platform authentication integration. LokiStack's proxy uses Red Hat OpenShift Container Platform authentication to enforce multi-tenancy -otlp:: Forwards logs using the OpenTelemetry Protocol. -splunk:: Forwards logs to Splunk. -syslog:: Forwards logs to an external syslog server. - -Each output type has its own configuration fields. - -## Pipelines - -Pipelines are configured in an array under spec.pipelines. Each pipeline must have a unique name and consists of: - -inputRefs:: Names of inputs whose logs should be forwarded to this pipeline. -outputRefs:: Names of outputs to send logs to. -filterRefs:: (optional) Names of filters to apply. - -The order of filterRefs matters, as they are applied sequentially. Earlier filters can drop messages that will not be processed by later filters. - -## Filters - -Filters are configured in an array under spec.filters. They can match incoming log messages based on the value of structured fields and modify or drop them. - -Administrators can configure the following types of filters: - -## Enabling multi-line exception detection - -Enables multi-line error detection of container logs. - - -[WARNING] ----- -Enabling this feature could have performance implications and may require additional computing resources or alternate logging solutions. ----- - -Log parsers often incorrectly identify separate lines of the same exception as separate exceptions. This leads to extra log entries and an incomplete or inaccurate view of the traced information. 
- - -```java -java.lang.NullPointerException: Cannot invoke "String.toString()" because "" is null - at testjava.Main.handle(Main.java:47) - at testjava.Main.printMe(Main.java:19) - at testjava.Main.main(Main.java:10) -``` - - -* To enable logging to detect multi-line exceptions and reassemble them into a single log entry, ensure that the ClusterLogForwarder Custom Resource (CR) contains a detectMultilineErrors field under the .spec.filters. - - -```yaml -apiVersion: "observability.openshift.io/v1" -kind: ClusterLogForwarder -metadata: - name: - namespace: -spec: - serviceAccount: - name: - filters: - - name: - type: detectMultilineException - pipelines: - - inputRefs: - - - name: - filterRefs: - - - outputRefs: - - -``` - - -### Details - -When log messages appear as a consecutive sequence forming an exception stack trace, they are combined into a single, unified log record. The first log message’s content is replaced with the concatenated content of all the message fields in the sequence. - -The collector supports the following languages: - -* Java -* JS -* Ruby -* Python -* Golang -* PHP -* Dart - -## Configuring content filters to drop unwanted log records - -When the drop filter is configured, the log collector evaluates log streams according to the filters before forwarding. The collector drops unwanted log records that match the specified configuration. - -1. Add a configuration for a filter to the filters spec in the ClusterLogForwarder CR. - -The following example shows how to configure the ClusterLogForwarder CR to drop log records based on regular expressions: -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: -# ... -spec: - serviceAccount: - name: - filters: - - name: - type: drop 1 - drop: 2 - - test: 3 - - field: .kubernetes.labels."foo-bar/baz" 4 - matches: .+ 5 - - field: .kubernetes.pod_name - notMatches: "my-pod" 6 - pipelines: - - name: 7 - filterRefs: [""] -# ... -``` - -Specifies the type of filter. The drop filter drops log records that match the filter configuration. -Specifies configuration options for applying the drop filter. -Specifies the configuration for tests that are used to evaluate whether a log record is dropped. -* If all the conditions specified for a test are true, the test passes and the log record is dropped. -* When multiple tests are specified for the drop filter configuration, if any of the tests pass, the record is dropped. -* If there is an error evaluating a condition, for example, the field is missing from the log record being evaluated, that condition evaluates to false. -Specifies a dot-delimited field path, which is a path to a field in the log record. The path can contain alpha-numeric characters and underscores (a-zA-Z0-9_), for example, .kubernetes.namespace_name. If segments contain characters outside of this range, the segment must be in quotes, for example, .kubernetes.labels."foo.bar-bar/baz". You can include multiple field paths in a single test configuration, but they must all evaluate to true for the test to pass and the drop filter to be applied. -Specifies a regular expression. If log records match this regular expression, they are dropped. You can set either the matches or notMatches condition for a single field path, but not both. -Specifies a regular expression. If log records do not match this regular expression, they are dropped. You can set either the matches or notMatches condition for a single field path, but not both. 
-Specifies the pipeline that the drop filter is applied to. -2. Apply the ClusterLogForwarder CR by running the following command: - -```terminal -$ oc apply -f .yaml -``` - - -The following additional example shows how you can configure the drop filter to only keep higher priority log records: - - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: -# ... -spec: - serviceAccount: - name: - filters: - - name: important - type: drop - drop: - - test: - - field: .message - notMatches: "(?i)critical|error" - - field: .level - matches: "info|warning" -# ... -``` - - -In addition to including multiple field paths in a single test configuration, you can also include additional tests that are treated as OR checks. In the following example, records are dropped if either test configuration evaluates to true. However, for the second test configuration, both field specs must be true for it to be evaluated to true: - - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: -# ... -spec: - serviceAccount: - name: - filters: - - name: important - type: drop - drop: - - test: - - field: .kubernetes.namespace_name - matches: "^open" - - test: - - field: .log_type - matches: "application" - - field: .kubernetes.pod_name - notMatches: "my-pod" -# ... -``` - - -## Overview of API audit filter - -OpenShift API servers generate audit events for each API call, detailing the request, response, and the identity of the requester, leading to large volumes of data. The API Audit filter uses rules to enable the exclusion of non-essential events and the reduction of event size, facilitating a more manageable audit trail. Rules are checked in order, and checking stops at the first match. The amount of data that is included in an event is determined by the value of the level field: - -* None: The event is dropped. -* Metadata: Audit metadata is included, request and response bodies are removed. -* Request: Audit metadata and the request body are included, the response body is removed. -* RequestResponse: All data is included: metadata, request body and response body. The response body can be very large. For example, oc get pods -A generates a response body containing the YAML description of every pod in the cluster. - -The ClusterLogForwarder custom resource (CR) uses the same format as the standard Kubernetes audit policy, while providing the following additional functions: - -Wildcards:: Names of users, groups, namespaces, and resources can have a leading or trailing * asterisk character. For example, the namespace openshift-\* matches openshift-apiserver or openshift-authentication. Resource \*/status matches Pod/status or Deployment/status. -Default Rules:: Events that do not match any rule in the policy are filtered as follows: -* Read-only system events such as get, list, and watch are dropped. -* Service account write events that occur within the same namespace as the service account are dropped. -* All other events are forwarded, subject to any configured rate limits. - -To disable these defaults, either end your rules list with a rule that has only a level field or add an empty rule. - -Omit Response Codes:: A list of integer status codes to omit. You can drop events based on the HTTP status code in the response by using the OmitResponseCodes field, which lists HTTP status codes for which no events are created. The default value is [404, 409, 422, 429]. If the value is an empty list, [], then no status codes are omitted. 
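For example, the following fragment sketches a kubeAPIAudit filter that clears the omission list so that events are kept for every response code; the camel-case omitResponseCodes spelling is an assumption based on the OmitResponseCodes field described above:

```yaml
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
# ...
spec:
  filters:
    - name: my-policy
      type: kubeAPIAudit
      kubeAPIAudit:
        omitResponseCodes: []   # assumed field name; an empty list means no status codes are omitted
# ...
```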
- -The ClusterLogForwarder CR audit policy acts in addition to the Red Hat OpenShift Container Platform audit policy. The ClusterLogForwarder CR audit filter changes what the log collector forwards and provides the ability to filter by verb, user, group, namespace, or resource. You can create multiple filters to send different summaries of the same audit stream to different places. For example, you can send a detailed stream to the local cluster log store and a less detailed stream to a remote site. - - -[NOTE] ----- -You must have a cluster role collect-audit-logs to collect the audit logs. The following example provided is intended to illustrate the range of rules possible in an audit policy and is not a recommended configuration. ----- - - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - name: - namespace: -spec: - serviceAccount: - name: - pipelines: - - name: my-pipeline - inputRefs: audit 1 - filterRefs: my-policy 2 - filters: - - name: my-policy - type: kubeAPIAudit - kubeAPIAudit: - # Don't generate audit events for all requests in RequestReceived stage. - omitStages: - - "RequestReceived" - - rules: - # Log pod changes at RequestResponse level - - level: RequestResponse - resources: - - group: "" - resources: ["pods"] - - # Log "pods/log", "pods/status" at Metadata level - - level: Metadata - resources: - - group: "" - resources: ["pods/log", "pods/status"] - - # Don't log requests to a configmap called "controller-leader" - - level: None - resources: - - group: "" - resources: ["configmaps"] - resourceNames: ["controller-leader"] - - # Don't log watch requests by the "system:kube-proxy" on endpoints or services - - level: None - users: ["system:kube-proxy"] - verbs: ["watch"] - resources: - - group: "" # core API group - resources: ["endpoints", "services"] - - # Don't log authenticated requests to certain non-resource URL paths. - - level: None - userGroups: ["system:authenticated"] - nonResourceURLs: - - "/api*" # Wildcard matching. - - "/version" - - # Log the request body of configmap changes in kube-system. - - level: Request - resources: - - group: "" # core API group - resources: ["configmaps"] - # This rule only applies to resources in the "kube-system" namespace. - # The empty string "" can be used to select non-namespaced resources. - namespaces: ["kube-system"] - - # Log configmap and secret changes in all other namespaces at the Metadata level. - - level: Metadata - resources: - - group: "" # core API group - resources: ["secrets", "configmaps"] - - # Log all other resources in core and extensions at the Request level. - - level: Request - resources: - - group: "" # core API group - - group: "extensions" # Version of group should NOT be included. - - # A catch-all rule to log all other requests at the Metadata level. - - level: Metadata -``` - - -The log types that are collected. The value for this field can be audit for audit logs, application for application logs, infrastructure for infrastructure logs, or a named input that has been defined for your application. -The name of your audit policy. - -## Filtering application logs at input by including the label expressions or a matching label key and values - -You can include the application logs based on the label expressions or a matching label key and its values by using the input selector. - -1. Add a configuration for a filter to the input spec in the ClusterLogForwarder CR. 
- -The following example shows how to configure the ClusterLogForwarder CR to include logs based on label expressions or matched label key/values: -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -# ... -spec: - serviceAccount: - name: - inputs: - - name: mylogs - application: - selector: - matchExpressions: - - key: env 1 - operator: In 2 - values: ["prod", "qa"] 3 - - key: zone - operator: NotIn - values: ["east", "west"] - matchLabels: 4 - app: one - name: app1 - type: application -# ... -``` - -Specifies the label key to match. -Specifies the operator. Valid values include: In, NotIn, Exists, and DoesNotExist. -Specifies an array of string values. If the operator value is either Exists or DoesNotExist, the value array must be empty. -Specifies an exact key or value mapping. -2. Apply the ClusterLogForwarder CR by running the following command: - -```terminal -$ oc apply -f .yaml -``` - - -## Configuring content filters to prune log records - -When the prune filter is configured, the log collector evaluates log streams according to the filters before forwarding. The collector prunes log records by removing low value fields such as pod annotations. - -1. Add a configuration for a filter to the prune spec in the ClusterLogForwarder CR. - -The following example shows how to configure the ClusterLogForwarder CR to prune log records based on field paths: - -[IMPORTANT] ----- -If both are specified, records are pruned based on the notIn array first, which takes precedence over the in array. After records have been pruned by using the notIn array, they are then pruned by using the in array. ----- -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: -# ... -spec: - serviceAccount: - name: - filters: - - name: - type: prune 1 - prune: 2 - in: [.kubernetes.annotations, .kubernetes.namespace_id] 3 - notIn: [.kubernetes,.log_type,.message,."@timestamp"] 4 - pipelines: - - name: 5 - filterRefs: [""] -# ... -``` - -Specify the type of filter. The prune filter prunes log records by configured fields. -Specify configuration options for applying the prune filter. The in and notIn fields are specified as arrays of dot-delimited field paths, which are paths to fields in log records. These paths can contain alpha-numeric characters and underscores (a-zA-Z0-9_), for example, .kubernetes.namespace_name. If segments contain characters outside of this range, the segment must be in quotes, for example, .kubernetes.labels."foo.bar-bar/baz". -Optional: Any fields that are specified in this array are removed from the log record. -Optional: Any fields that are not specified in this array are removed from the log record. -Specify the pipeline that the prune filter is applied to. - -[NOTE] ----- -The filters exempts the log_type, .log_source, and .message fields. ----- -2. Apply the ClusterLogForwarder CR by running the following command: - -```terminal -$ oc apply -f .yaml -``` - - -# Filtering the audit and infrastructure log inputs by source - -You can define the list of audit and infrastructure sources to collect the logs by using the input selector. - -1. Add a configuration to define the audit and infrastructure sources in the ClusterLogForwarder CR. 
- -The following example shows how to configure the ClusterLogForwarder CR to define audit and infrastructure sources: -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -# ... -spec: - serviceAccount: - name: - inputs: - - name: mylogs1 - type: infrastructure - infrastructure: - sources: 1 - - node - - name: mylogs2 - type: audit - audit: - sources: 2 - - kubeAPI - - openshiftAPI - - ovn -# ... -``` - -Specifies the list of infrastructure sources to collect. The valid sources include: -* node: Journal log from the node -* container: Logs from the workloads deployed in the namespaces -Specifies the list of audit sources to collect. The valid sources include: -* kubeAPI: Logs from the Kubernetes API servers -* openshiftAPI: Logs from the OpenShift API servers -* auditd: Logs from a node auditd service -* ovn: Logs from an open virtual network service -2. Apply the ClusterLogForwarder CR by running the following command: - -```terminal -$ oc apply -f .yaml -``` - - -# Filtering application logs at input by including or excluding the namespace or container name - -You can include or exclude the application logs based on the namespace and container name by using the input selector. - -1. Add a configuration to include or exclude the namespace and container names in the ClusterLogForwarder CR. - -The following example shows how to configure the ClusterLogForwarder CR to include or exclude namespaces and container names: -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -# ... -spec: - serviceAccount: - name: - inputs: - - name: mylogs - application: - includes: - - namespace: "my-project" 1 - container: "my-container" 2 - excludes: - - container: "other-container*" 3 - namespace: "other-namespace" 4 - type: application -# ... -``` - -Specifies that the logs are only collected from these namespaces. -Specifies that the logs are only collected from these containers. -Specifies the pattern of namespaces to ignore when collecting the logs. -Specifies the set of containers to ignore when collecting the logs. - -[NOTE] ----- -The excludes field takes precedence over the includes field. ----- -2. Apply the ClusterLogForwarder CR by running the following command: - -```terminal -$ oc apply -f .yaml -``` diff --git a/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.0/log6x-loki.txt b/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.0/log6x-loki.txt deleted file mode 100644 index 697b17c6..00000000 --- a/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.0/log6x-loki.txt +++ /dev/null @@ -1,748 +0,0 @@ -# Storing logs with LokiStack - - -You can configure a LokiStack CR to store application, audit, and infrastructure-related logs. -Loki is a horizontally scalable, highly available, multi-tenant log aggregation system offered as a GA log store for logging for Red Hat OpenShift that can be visualized with the OpenShift Observability UI. The Loki configuration provided by OpenShift Logging is a short-term log store designed to enable users to perform fast troubleshooting with the collected logs. For that purpose, the logging for Red Hat OpenShift configuration of Loki has short-term storage, and is optimized for very recent queries. - -[IMPORTANT] ----- -For long-term storage or queries over a long time period, users should look to log stores external to their cluster. 
Loki sizing is only tested and supported for short term storage, for a maximum of 30 days. ----- - -# Prerequisites - -* You have installed the Loki Operator by using the CLI or web console. -* You have a serviceAccount in the same namespace in which you create the ClusterLogForwarder. -* The serviceAccount is assigned collect-audit-logs, collect-application-logs, and collect-infrastructure-logs cluster roles. - -# Core Setup and Configuration - -Role-based access controls, basic monitoring, and pod placement to deploy Loki. - -# Authorizing LokiStack rules RBAC permissions - -Administrators can allow users to create and manage their own alerting and recording rules by binding cluster roles to usernames. -Cluster roles are defined as ClusterRole objects that contain necessary role-based access control (RBAC) permissions for users. - -The following cluster roles for alerting and recording rules are available for LokiStack: - - - -## Examples - -To apply cluster roles for a user, you must bind an existing cluster role to a specific username. - -Cluster roles can be cluster or namespace scoped, depending on which type of role binding you use. -When a RoleBinding object is used, as when using the oc adm policy add-role-to-user command, the cluster role only applies to the specified namespace. -When a ClusterRoleBinding object is used, as when using the oc adm policy add-cluster-role-to-user command, the cluster role applies to all namespaces in the cluster. - -The following example command gives the specified user create, read, update and delete (CRUD) permissions for alerting rules in a specific namespace in the cluster: - - -```terminal -$ oc adm policy add-role-to-user alertingrules.loki.grafana.com-v1-admin -n -``` - - -The following command gives the specified user administrator permissions for alerting rules in all namespaces: - - -```terminal -$ oc adm policy add-cluster-role-to-user alertingrules.loki.grafana.com-v1-admin -``` - - -# Creating a log-based alerting rule with Loki - -The AlertingRule CR contains a set of specifications and webhook validation definitions to declare groups of alerting rules for a single LokiStack instance. In addition, the webhook validation definition provides support for rule validation conditions: - -* If an AlertingRule CR includes an invalid interval period, it is an invalid alerting rule -* If an AlertingRule CR includes an invalid for period, it is an invalid alerting rule. -* If an AlertingRule CR includes an invalid LogQL expr, it is an invalid alerting rule. -* If an AlertingRule CR includes two groups with the same name, it is an invalid alerting rule. -* If none of the above applies, an alerting rule is considered valid. - - - -1. 
Create an AlertingRule custom resource (CR): -Example infrastructure AlertingRule CR - -```yaml - apiVersion: loki.grafana.com/v1 - kind: AlertingRule - metadata: - name: loki-operator-alerts - namespace: openshift-operators-redhat 1 - labels: 2 - openshift.io/: "true" - spec: - tenantID: "infrastructure" 3 - groups: - - name: LokiOperatorHighReconciliationError - rules: - - alert: HighPercentageError - expr: | 4 - sum(rate({kubernetes_namespace_name="openshift-operators-redhat", kubernetes_pod_name=~"loki-operator-controller-manager.*"} |= "error" [1m])) by (job) - / - sum(rate({kubernetes_namespace_name="openshift-operators-redhat", kubernetes_pod_name=~"loki-operator-controller-manager.*"}[1m])) by (job) - > 0.01 - for: 10s - labels: - severity: critical 5 - annotations: - summary: High Loki Operator Reconciliation Errors 6 - description: High Loki Operator Reconciliation Errors 7 -``` - -The namespace where this AlertingRule CR is created must have a label matching the LokiStack spec.rules.namespaceSelector definition. -The labels block must match the LokiStack spec.rules.selector definition. -AlertingRule CRs for infrastructure tenants are only supported in the openshift-*, kube-\*, or default namespaces. -The value for kubernetes_namespace_name: must match the value for metadata.namespace. -The value of this mandatory field must be critical, warning, or info. -This field is mandatory. -This field is mandatory. -Example application AlertingRule CR - -```yaml - apiVersion: loki.grafana.com/v1 - kind: AlertingRule - metadata: - name: app-user-workload - namespace: app-ns 1 - labels: 2 - openshift.io/: "true" - spec: - tenantID: "application" - groups: - - name: AppUserWorkloadHighError - rules: - - alert: - expr: | 3 - sum(rate({kubernetes_namespace_name="app-ns", kubernetes_pod_name=~"podName.*"} |= "error" [1m])) by (job) - for: 10s - labels: - severity: critical 4 - annotations: - summary: 5 - description: 6 -``` - -The namespace where this AlertingRule CR is created must have a label matching the LokiStack spec.rules.namespaceSelector definition. -The labels block must match the LokiStack spec.rules.selector definition. -Value for kubernetes_namespace_name: must match the value for metadata.namespace. -The value of this mandatory field must be critical, warning, or info. -The value of this mandatory field is a summary of the rule. -The value of this mandatory field is a detailed description of the rule. -2. Apply the AlertingRule CR: - -```terminal -$ oc apply -f .yaml -``` - - -# Configuring Loki to tolerate memberlist creation failure - -In an Red Hat OpenShift Container Platform cluster, administrators generally use a non-private IP network range. As a result, the LokiStack memberlist configuration fails because, by default, it only uses private IP networks. - -As an administrator, you can select the pod network for the memberlist configuration. You can modify the LokiStack custom resource (CR) to use the podIP address in the hashRing spec. To configure the LokiStack CR, use the following command: - - -```terminal -$ oc patch LokiStack logging-loki -n openshift-logging --type=merge -p '{"spec": {"hashRing":{"memberlist":{"instanceAddrType":"podIP"},"type":"memberlist"}}}' -``` - - - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: -# ... - hashRing: - type: memberlist - memberlist: - instanceAddrType: podIP -# ... 
-``` - - -# Enabling stream-based retention with Loki - -You can configure retention policies based on log streams. Rules for these may be set globally, per-tenant, or both. If you configure both, tenant rules apply before global rules. - - -[IMPORTANT] ----- -If there is no retention period defined on the s3 bucket or in the LokiStack custom resource (CR), then the logs are not pruned and they stay in the s3 bucket forever, which might fill up the s3 storage. ----- - - -[NOTE] ----- -Schema v13 is recommended. ----- - -1. Create a LokiStack CR: -* Enable stream-based retention globally as shown in the following example: -Example global stream-based retention for AWS - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: - limits: - global: 1 - retention: 2 - days: 20 - streams: - - days: 4 - priority: 1 - selector: '{kubernetes_namespace_name=~"test.+"}' 3 - - days: 1 - priority: 1 - selector: '{log_type="infrastructure"}' - managementState: Managed - replicationFactor: 1 - size: 1x.small - storage: - schemas: - - effectiveDate: "2020-10-11" - version: v13 - secret: - name: logging-loki-s3 - type: aws - storageClassName: gp3-csi - tenants: - mode: openshift-logging -``` - -Sets retention policy for all log streams. Note: This field does not impact the retention period for stored logs in object storage. -Retention is enabled in the cluster when this block is added to the CR. -Contains the LogQL query used to define the log stream.spec: -limits: -* Enable stream-based retention per-tenant basis as shown in the following example: -Example per-tenant stream-based retention for AWS - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: - limits: - global: - retention: - days: 20 - tenants: 1 - application: - retention: - days: 1 - streams: - - days: 4 - selector: '{kubernetes_namespace_name=~"test.+"}' 2 - infrastructure: - retention: - days: 5 - streams: - - days: 1 - selector: '{kubernetes_namespace_name=~"openshift-cluster.+"}' - managementState: Managed - replicationFactor: 1 - size: 1x.small - storage: - schemas: - - effectiveDate: "2020-10-11" - version: v13 - secret: - name: logging-loki-s3 - type: aws - storageClassName: gp3-csi - tenants: - mode: openshift-logging -``` - -Sets retention policy by tenant. Valid tenant types are application, audit, and infrastructure. -Contains the LogQL query used to define the log stream. -2. Apply the LokiStack CR: - -```terminal -$ oc apply -f .yaml -``` - - -# Loki pod placement - -You can control which nodes the Loki pods run on, and prevent other workloads from using those nodes, by using tolerations or node selectors on the pods. - -You can apply tolerations to the log store pods with the LokiStack custom resource (CR) and apply taints to a node with the node specification. A taint on a node is a key:value pair that instructs the node to repel all pods that do not allow the taint. Using a specific key:value pair that is not on other pods ensures that only the log store pods can run on that node. - - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: -# ... 
- template: - compactor: 1 - nodeSelector: - node-role.kubernetes.io/infra: "" 2 - distributor: - nodeSelector: - node-role.kubernetes.io/infra: "" - gateway: - nodeSelector: - node-role.kubernetes.io/infra: "" - indexGateway: - nodeSelector: - node-role.kubernetes.io/infra: "" - ingester: - nodeSelector: - node-role.kubernetes.io/infra: "" - querier: - nodeSelector: - node-role.kubernetes.io/infra: "" - queryFrontend: - nodeSelector: - node-role.kubernetes.io/infra: "" - ruler: - nodeSelector: - node-role.kubernetes.io/infra: "" -# ... -``` - - -Specifies the component pod type that applies to the node selector. -Specifies the pods that are moved to nodes containing the defined label. - - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: -# ... - template: - compactor: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - distributor: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - indexGateway: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - ingester: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - querier: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - queryFrontend: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - ruler: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - gateway: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved -# ... -``` - - -To configure the nodeSelector and tolerations fields of the LokiStack (CR), you can use the oc explain command to view the description and fields for a particular resource: - - -```terminal -$ oc explain lokistack.spec.template -``` - - - -```text -KIND: LokiStack -VERSION: loki.grafana.com/v1 - -RESOURCE: template - -DESCRIPTION: - Template defines the resource/limits/tolerations/nodeselectors per - component - -FIELDS: - compactor - Compactor defines the compaction component spec. - - distributor - Distributor defines the distributor component spec. -... 
-``` - - -For more detailed information, you can add a specific field: - - -```terminal -$ oc explain lokistack.spec.template.compactor -``` - - - -```text -KIND: LokiStack -VERSION: loki.grafana.com/v1 - -RESOURCE: compactor - -DESCRIPTION: - Compactor defines the compaction component spec. - -FIELDS: - nodeSelector - NodeSelector defines the labels required by a node to schedule the - component onto it. -... -``` - - -## Enhanced Reliability and Performance - -Configurations to ensure Loki’s reliability and efficiency in production. - -## Enabling authentication to cloud-based log stores using short-lived tokens - -Workload identity federation enables authentication to cloud-based log stores using short-lived tokens. - -* Use one of the following options to enable authentication: -* If you use the Red Hat OpenShift Container Platform web console to install the Loki Operator, clusters that use short-lived tokens are automatically detected. You are prompted to create roles and supply the data required for the Loki Operator to create a CredentialsRequest object, which populates a secret. -* If you use the OpenShift CLI (`oc`) to install the Loki Operator, you must manually create a Subscription object using the appropriate template for your storage provider, as shown in the following examples. This authentication strategy is only supported for the storage providers indicated. -Example Azure sample subscription - -```yaml -apiVersion: operators.coreos.com/v1alpha1 -kind: Subscription -metadata: - name: loki-operator - namespace: openshift-operators-redhat -spec: - channel: "stable-6.0" - installPlanApproval: Manual - name: loki-operator - source: redhat-operators - sourceNamespace: openshift-marketplace - config: - env: - - name: CLIENTID - value: - - name: TENANTID - value: - - name: SUBSCRIPTIONID - value: - - name: REGION - value: -``` - -Example AWS sample subscription - -```yaml -apiVersion: operators.coreos.com/v1alpha1 -kind: Subscription -metadata: - name: loki-operator - namespace: openshift-operators-redhat -spec: - channel: "stable-6.0" - installPlanApproval: Manual - name: loki-operator - source: redhat-operators - sourceNamespace: openshift-marketplace - config: - env: - - name: ROLEARN - value: -``` - - -## Configuring Loki to tolerate node failure - -The Loki Operator supports setting pod anti-affinity rules to request that pods of the same component are scheduled on different available nodes in the cluster. - -Affinity is a property of pods that controls the nodes on which they prefer to be scheduled. Anti-affinity is a property of pods -that prevents a pod from being scheduled on a node. - -In Red Hat OpenShift Container Platform, pod affinity and pod anti-affinity allow you to constrain which nodes your pod is eligible to be scheduled on based on the key-value labels on other pods. - -The Operator sets default, preferred podAntiAffinity rules for all Loki components, which includes the compactor, distributor, gateway, indexGateway, ingester, querier, queryFrontend, and ruler components. - -You can override the preferred podAntiAffinity settings for Loki components by configuring required settings in the requiredDuringSchedulingIgnoredDuringExecution field: - - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: -# ... - template: - ingester: - podAntiAffinity: - # ... 
- requiredDuringSchedulingIgnoredDuringExecution: 1 - - labelSelector: - matchLabels: 2 - app.kubernetes.io/component: ingester - topologyKey: kubernetes.io/hostname -# ... -``` - - -The stanza to define a required rule. -The key-value pair (label) that must be matched to apply the rule. - -## LokiStack behavior during cluster restarts - -When an Red Hat OpenShift Container Platform cluster is restarted, LokiStack ingestion and the query path continue to operate within the available CPU and memory resources available for the node. This means that there is no downtime for the LokiStack during Red Hat OpenShift Container Platform cluster updates. This behavior is achieved by using PodDisruptionBudget resources. The Loki Operator provisions PodDisruptionBudget resources for Loki, which determine the minimum number of pods that must be available per component to ensure normal operations under certain conditions. - -## Advanced Deployment and Scalability - -Specialized configurations for high availability, scalability, and error handling. - -## Zone aware data replication - -The Loki Operator offers support for zone-aware data replication through pod topology spread constraints. Enabling this feature enhances reliability and safeguards against log loss in the event of a single zone failure. When configuring the deployment size as 1x.extra-small, 1x.small, or 1x.medium, the replication.factor field is automatically set to 2. - -To ensure proper replication, you need to have at least as many availability zones as the replication factor specifies. While it is possible to have more availability zones than the replication factor, having fewer zones can lead to write failures. Each zone should host an equal number of instances for optimal operation. - - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: - replicationFactor: 2 1 - replication: - factor: 2 2 - zones: - - maxSkew: 1 3 - topologyKey: topology.kubernetes.io/zone 4 -``` - - -Deprecated field, values entered are overwritten by replication.factor. -This value is automatically set when deployment size is selected at setup. -The maximum difference in number of pods between any two topology domains. The default is 1, and you cannot specify a value of 0. -Defines zones in the form of a topology key that corresponds to a node label. - -## Recovering Loki pods from failed zones - -In Red Hat OpenShift Container Platform a zone failure happens when specific availability zone resources become inaccessible. Availability zones are isolated areas within a cloud provider’s data center, aimed at enhancing redundancy and fault tolerance. If your Red Hat OpenShift Container Platform cluster is not configured to handle this, a zone failure can lead to service or data loss. - -Loki pods are part of a StatefulSet, and they come with Persistent Volume Claims (PVCs) provisioned by a StorageClass object. Each Loki pod and its PVCs reside in the same zone. When a zone failure occurs in a cluster, the StatefulSet controller automatically attempts to recover the affected pods in the failed zone. - - -[WARNING] ----- -The following procedure will delete the PVCs in the failed zone, and all data contained therein. To avoid complete data loss the replication factor field of the LokiStack CR should always be set to a value greater than 1 to ensure that Loki is replicating. ----- - -* Verify your LokiStack CR has a replication factor greater than 1. 
-* Zone failure detected by the control plane, and nodes in the failed zone are marked by cloud provider integration. - -The StatefulSet controller automatically attempts to reschedule pods in a failed zone. Because the associated PVCs are also in the failed zone, automatic rescheduling to a different zone does not work. You must manually delete the PVCs in the failed zone to allow successful re-creation of the stateful Loki Pod and its provisioned PVC in the new zone. - -1. List the pods in Pending status by running the following command: - -```terminal -$ oc get pods --field-selector status.phase==Pending -n openshift-logging -``` - -Example oc get pods output - -```terminal -NAME READY STATUS RESTARTS AGE 1 -logging-loki-index-gateway-1 0/1 Pending 0 17m -logging-loki-ingester-1 0/1 Pending 0 16m -logging-loki-ruler-1 0/1 Pending 0 16m -``` - -These pods are in Pending status because their corresponding PVCs are in the failed zone. -2. List the PVCs in Pending status by running the following command: - -```terminal -$ oc get pvc -o=json -n openshift-logging | jq '.items[] | select(.status.phase == "Pending") | .metadata.name' -r -``` - -Example oc get pvc output - -```terminal -storage-logging-loki-index-gateway-1 -storage-logging-loki-ingester-1 -wal-logging-loki-ingester-1 -storage-logging-loki-ruler-1 -wal-logging-loki-ruler-1 -``` - -3. Delete the PVC(s) for a pod by running the following command: - -```terminal -$ oc delete pvc -n openshift-logging -``` - -4. Delete the pod(s) by running the following command: - -```terminal -$ oc delete pod -n openshift-logging -``` - - -Once these objects have been successfully deleted, they should automatically be rescheduled in an available zone. - -### Troubleshooting PVC in a terminating state - -The PVCs might hang in the terminating state without being deleted, if PVC metadata finalizers are set to kubernetes.io/pv-protection. Removing the finalizers should allow the PVCs to delete successfully. - -* Remove the finalizer for each PVC by running the command below, then retry deletion. - -```terminal -$ oc patch pvc -p '{"metadata":{"finalizers":null}}' -n openshift-logging -``` - - -## Troubleshooting Loki rate limit errors - -If the Log Forwarder API forwards a large block of messages that exceeds the rate limit to Loki, Loki generates rate limit (429) errors. - -These errors can occur during normal operation. For example, when adding the logging to a cluster that already has some logs, rate limit errors might occur while the logging tries to ingest all of the existing log entries. In this case, if the rate of addition of new logs is less than the total rate limit, the historical data is eventually ingested, and the rate limit errors are resolved without requiring user intervention. - -In cases where the rate limit errors continue to occur, you can fix the issue by modifying the LokiStack custom resource (CR). - - -[IMPORTANT] ----- -The LokiStack CR is not available on Grafana-hosted Loki. This topic does not apply to Grafana-hosted Loki servers. ----- - -* The Log Forwarder API is configured to forward logs to Loki. -* Your system sends a block of messages that is larger than 2 MB to Loki. For example: - -```text -"values":[["1630410392689800468","{\"kind\":\"Event\",\"apiVersion\":\ -....... -...... -...... -...... 
-\"received_at\":\"2021-08-31T11:46:32.800278+00:00\",\"version\":\"1.7.4 1.6.0\"}},\"@timestamp\":\"2021-08-31T11:46:32.799692+00:00\",\"viaq_index_name\":\"audit-write\",\"viaq_msg_id\":\"MzFjYjJkZjItNjY0MC00YWU4LWIwMTEtNGNmM2E5ZmViMGU4\",\"log_type\":\"audit\"}"]]}]} -``` - -* After you enter oc logs -n openshift-logging -l component=collector, the collector logs in your cluster show a line containing one of the following error messages: - -```text -429 Too Many Requests Ingestion rate limit exceeded -``` - -Example Vector error message - -```text -2023-08-25T16:08:49.301780Z WARN sink{component_kind="sink" component_id=default_loki_infra component_type=loki component_name=default_loki_infra}: vector::sinks::util::retries: Retrying after error. error=Server responded with an error: 429 Too Many Requests internal_log_rate_limit=true -``` - - -The error is also visible on the receiving end. For example, in the LokiStack ingester pod: -Example Loki ingester error message - -```text -level=warn ts=2023-08-30T14:57:34.155592243Z caller=grpc_logging.go:43 duration=1.434942ms method=/logproto.Pusher/Push err="rpc error: code = Code(429) desc = entry with timestamp 2023-08-30 14:57:32.012778399 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream -``` - - -* Update the ingestionBurstSize and ingestionRate fields in the LokiStack CR: - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: - limits: - global: - ingestion: - ingestionBurstSize: 16 1 - ingestionRate: 8 2 -# ... -``` - -The ingestionBurstSize field defines the maximum local rate-limited sample size per distributor replica in MB. This value is a hard limit. Set this value to at least the maximum logs size expected in a single push request. Single requests that are larger than the ingestionBurstSize value are not permitted. -The ingestionRate field is a soft limit on the maximum amount of ingested samples per second in MB. Rate limit errors occur if the rate of logs exceeds the limit, but the collector retries sending the logs. As long as the total average is lower than the limit, the system recovers and errors are resolved without user intervention. \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.0/log6x-release-notes.txt b/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.0/log6x-release-notes.txt deleted file mode 100644 index 754e4592..00000000 --- a/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.0/log6x-release-notes.txt +++ /dev/null @@ -1,261 +0,0 @@ -# Release notes - - - -# Logging 6.0.9 - -This release includes RHBA-2025:8144. - -## Bug fixes - -* Before this update, merging data from the message field into the root of a Syslog log event caused inconsistencies with the ViaQ data model. These inconsistencies could overwrite system information, duplicate data, or corrupt the log event. This update makes Syslog parsing and merging consistent with the other output types and resolves the issue. (LOG-7183) -* Before this update, log forwarding failed when the cluster-wide proxy configuration included a URL with a username containing an encoded @ symbol; for example, user%40name. This update adds the correct support for URL-encoded values in proxy configurations and resolves the issue. 
(LOG-7186) - -## CVEs - -* CVE-2022-49043 -* CVE-2024-12087 -* CVE-2024-12088 -* CVE-2024-12133 -* CVE-2024-12243 -* CVE-2024-12747 -* CVE-2024-56171 -* CVE-2025-0395 -* CVE-2025-24928 - -For detailed information about Red Hat security ratings, see Severity ratings. - -# Logging 6.0.8 - -This release includes RHBA-2025:4520. - -## Bug fixes - -* Before this update, collector pods would enter a crash loop due to a configuration error when attempting token-based authentication with an Elasticsearch output. With this update, token authentication with an Elasticsearch output generates a valid configuration. (LOG-7019) - -## CVEs - -* CVE-2024-2236 -* CVE-2024-5535 - - -[NOTE] ----- -For detailed information on Red Hat security ratings, review Severity ratings. ----- - -# Logging 6.0.7 - -This release includes RHSA-2025:3905. - -## New features and enhancements - -* Before this update, time-based stream sharding was not enabled in Loki, which resulted in Loki being unable to save historical data. With this update, the Loki Operator enables time-based stream sharding in Loki, which helps Loki save historical data. (LOG-6990) - -## CVEs - -* CVE-2025-30204 - - -[NOTE] ----- -For detailed information on Red Hat security ratings, review Severity ratings. ----- - -# Logging 6.0.6 - -This release includes RHSA-2025:3132. - -## Bug fixes - -* Before this update, the Logging Operator deployed the collector config map with output configurations that were not referenced by any inputs. With this update, the Operator adds validation to fail the ClusterLogForwarder custom resource if an output configuration is not referenced by any inputs, preventing the deployment of the collector. -(LOG-6759) - -## CVEs - -* CVE-2022-49043 -* CVE-2024-45338 -* CVE-2024-56171 -* CVE-2025-24928 -* CVE-2025-27144 - - -[NOTE] ----- -For detailed information on Red Hat security ratings, review Severity ratings. ----- - -# Logging 6.0.5 - -This release includes RHBA-2025:1986. - -## CVEs - -* CVE-2020-11023 -* CVE-2024-9287 -* CVE-2024-12797 - -# Logging 6.0.4 - -This release includes RHBA-2025:1228. - -## New features and enhancements - -* This enhancement adds OTel semantic stream labels to the lokiStack output so that you can query logs by using both ViaQ and OTel stream labels. -(LOG-6580) - -## Bug fixes - -* Before this update, the Operator used a cached client to fetch the SecurityContextConstraint cluster resource, which could result in an error when the cache is invalid. With this update, the Operator now always retrieves data from the API server instead of using the cache. -(LOG-6130) -* Before this update, the Vector startup script attempted to delete buffer lock files during startup. With this update, the Vector startup script no longer attempts to delete buffer lock files during startup. -(LOG-6348) -* Before this update, a bug in the must-gather script for the cluster-logging-operator prevented the LokiStack from being gathered correctly when it existed. With this update, the LokiStack is gathered correctly. -(LOG-6499) -* Before this update, the collector metrics dashboard could get removed after an Operator upgrade due to a race condition during the change from the old to the new pod deployment. With this update, labels are added to the dashboard ConfigMap to identify the upgraded deployment as the current owner so that it will not be removed. -(LOG-6608) -* Before this update, the logging must-gather did not collect resources such as UIPlugin, ClusterLogForwarder, LogFileMetricExporter and LokiStack CR. 
With this update, these resources are now collected in their namespace directory instead of the cluster-logging one. -(LOG-6654) -* Before this update, Vector did not retain process information, such as the program name, app-name, procID, and other details, when forwarding journal logs by using the syslog protocol. This could lead to the loss of important information. With this update, the Vector collector now preserves all required process information, and the data format adheres to the specifications of RFC3164 and RFC5424. -(LOG-6659) - -# Logging 6.0.3 - -This release includes RHBA-2024:10991. - -## New features and enhancements - -* With this update, the Loki Operator supports the configuring of the workload identity federation on the Google Cloud Platform (GCP) by using the Cluster Credential Operator (CCO) in Red Hat OpenShift Container Platform 4.17 or later. (LOG-6421) - -## Bug fixes - -* Before this update, the collector used the default settings to collect audit logs, which did not account for back pressure from output receivers. With this update, the audit log collection is optimized for file handling and log reading. (LOG-6034) -* Before this update, any namespace containing openshift or kube was treated as an infrastructure namespace. With this update, only the following namespaces are treated as infrastructure namespaces: default, kube, openshift, and namespaces that begin with openshift- or kube-. (LOG-6204) -* Before this update, an input receiver service was repeatedly created and deleted, causing issues with mounting the TLS secrets. With this update, the service is created once and only deleted if it is not defined in the ClusterLogForwarder custom resource. (LOG-6343) -* Before this update, pipeline validation might enter an infinite loop if a name was a substring of another name. With this update, stricter name equality checks prevent the infinite loop. (LOG-6352) -* Before this update, the collector alerting rules included the summary and message fields. With this update, the collector alerting rules include the summary and description fields. (LOG-6406) -* Before this update, setting up the custom audit inputs in the ClusterLogForwarder custom resource with configured LokiStack output caused errors due to the nil pointer dereference. With this update, the Operator performs the nil checks, preventing such errors. (LOG-6441) -* Before this update, the collector did not correctly mount the /var/log/oauth-server/ path, which prevented the collection of the audit logs. With this update, the volume mount is added, and the audit logs are collected as expected. (LOG-6486) -* Before this update, the collector did not correctly mount the oauth-apiserver audit log file. As a result, such audit logs were not collected. With this update, the volume mount is correctly mounted, and the logs are collected as expected. (LOG-6543) - -## CVEs - -* CVE-2019-12900 -* CVE-2024-2511 -* CVE-2024-3596 -* CVE-2024-4603 -* CVE-2024-4741 -* CVE-2024-5535 -* CVE-2024-10963 -* CVE-2024-50602 - -# Logging 6.0.2 - -This release includes RHBA-2024:10051. - -## Bug fixes - -* Before this update, Loki did not correctly load some configurations, which caused issues when using Alibaba Cloud or IBM Cloud object storage. This update fixes the configuration-loading code in Loki, resolving the issue. (LOG-5325) -* Before this update, the collector would discard audit log messages that exceeded the configured threshold. 
This modifies the audit configuration thresholds for the maximum line size as well as the number of bytes read during a read cycle. (LOG-5998) -* Before this update, the Cluster Logging Operator did not watch and reconcile resources associated with an instance of a ClusterLogForwarder like it did in prior releases. This update modifies the operator to watch and reconcile all resources it owns and creates. (LOG-6264) -* Before this update, log events with an unknown severity level sent to Google Cloud Logging would trigger a warning in the vector collector, which would then default the severity to 'DEFAULT'. With this update, log severity levels are now standardized to match Google Cloud Logging specifications, and audit logs are assigned a severity of 'INFO'. (LOG-6296) -* Before this update, when infrastructure namespaces were included in application inputs, the log_type was set as application. With this update, the log_type of infrastructure namespaces included in application inputs is set to infrastructure. (LOG-6354) -* Before this update, specifying a value for the syslog.enrichment field of the ClusterLogForwarder added namespace_name, container_name, and pod_name to the messages of non-container logs. With this update, only container logs include namespace_name, container_name, and pod_name in their messages when syslog.enrichment is set. (LOG-6402) - -## CVEs - -* CVE-2024-6119 -* CVE-2024-6232 - -# Logging 6.0.1 - -This release includes OpenShift Logging Bug Fix Release 6.0.1. - -## Bug fixes - -* With this update, the default memory limit for the collector has been increased from 1024 Mi to 2024 Mi. However, users should always adjust their resource limits according to their cluster specifications and needs. (LOG-6180) -* Before this update, the Loki Operator failed to add the default namespace label to all AlertingRule resources, which caused the User-Workload-Monitoring Alertmanager to skip routing these alerts. This update adds the rule namespace as a label to all alerting and recording rules, resolving the issue and restoring proper alert routing in Alertmanager. -(LOG-6151) -* Before this update, the LokiStack ruler component view did not initialize properly, causing an invalid field error when the ruler component was disabled. This update ensures that the component view initializes with an empty value, resolving the issue. -(LOG-6129) -* Before this update, it was possible to set log_source in the prune filter, which could lead to inconsistent log data. With this update, the configuration is validated before being applied, and any configuration that includes log_source in the prune filter is rejected. -(LOG-6202) - -## CVEs - -* CVE-2024-24791 -* CVE-2024-34155 -* CVE-2024-34156 -* CVE-2024-34158 -* CVE-2024-6104 -* CVE-2024-6119 -* CVE-2024-45490 -* CVE-2024-45491 -* CVE-2024-45492 - -# Logging 6.0.0 - -This release includes Logging for Red Hat OpenShift Bug Fix Release 6.0.0 - - -[NOTE] ----- -Logging is provided as an installable component, with a distinct release cycle from the core Red Hat OpenShift Container Platform. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility. ----- - - - -# Removal notice - -* With this release, logging no longer supports the ClusterLogging.logging.openshift.io and ClusterLogForwarder.logging.openshift.io custom resources. Refer to the product documentation for details on the replacement features. 
(LOG-5803) -* With this release, logging no longer manages or deploys log storage (such as Elasticsearch), visualization (such as Kibana), or Fluentd-based log collectors. (LOG-5368) - - -[NOTE] ----- -In order to continue to use Elasticsearch and Kibana managed by the elasticsearch-operator, the administrator must modify those object's ownerRefs before deleting the ClusterLogging resource. ----- - -# New features and enhancements - -* This feature introduces a new architecture for logging for Red Hat OpenShift by shifting component responsibilities to their relevant Operators, such as for storage, visualization, and collection. It introduces the ClusterLogForwarder.observability.openshift.io API for log collection and forwarding. Support for the ClusterLogging.logging.openshift.io and ClusterLogForwarder.logging.openshift.io APIs, along with the Red Hat managed Elastic stack (Elasticsearch and Kibana), is removed. Users are encouraged to migrate to the Red Hat LokiStack for log storage. Existing managed Elasticsearch deployments can be used for a limited time. Automated migration for log collection is not provided, so administrators need to create a new ClusterLogForwarder.observability.openshift.io specification to replace their previous custom resources. Refer to the official product documentation for more details. (LOG-3493) -* With this release, the responsibility for deploying the logging view plugin shifts from the Red Hat OpenShift Logging Operator to the Cluster Observability Operator (COO). For new log storage installations that need visualization, the Cluster Observability Operator and the associated UIPlugin resource must be deployed. Refer to the Cluster Observability Operator Overview product documentation for more details. (LOG-5461) -* This enhancement sets default requests and limits for Vector collector deployments' memory and CPU usage based on Vector documentation recommendations. (LOG-4745) -* This enhancement updates Vector to align with the upstream version v0.37.1. (LOG-5296) -* This enhancement introduces an alert that triggers when log collectors buffer logs to a node's file system and use over 15% of the available space, indicating potential back pressure issues. (LOG-5381) -* This enhancement updates the selectors for all components to use common Kubernetes labels. (LOG-5906) -* This enhancement changes the collector configuration to deploy as a ConfigMap instead of a secret, allowing users to view and edit the configuration when the ClusterLogForwarder is set to Unmanaged. (LOG-5599) -* This enhancement adds the ability to configure the Vector collector log level using an annotation on the ClusterLogForwarder, with options including trace, debug, info, warn, error, or off. (LOG-5372) -* This enhancement adds validation to reject configurations where Amazon CloudWatch outputs use multiple AWS roles, preventing incorrect log routing. (LOG-5640) -* This enhancement removes the Log Bytes Collected and Log Bytes Sent graphs from the metrics dashboard. (LOG-5964) -* This enhancement updates the must-gather functionality to only capture information for inspecting Logging 6.0 components, including Vector deployments from ClusterLogForwarder.observability.openshift.io resources and the Red Hat managed LokiStack. (LOG-5949) -* This enhancement improves Azure storage secret validation by providing early warnings for specific error conditions. (LOG-4571) -* This enhancement updates the ClusterLogForwarder API to follow the Kubernetes standards. 
(LOG-5977) -Example of a new configuration in the ClusterLogForwarder custom resource for the updated API - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - name: -spec: - outputs: - - name: - type: - : - tuning: - deliveryMode: AtMostOnce -``` - - -# Technology Preview features - -* This release introduces a Technology Preview feature for log forwarding using OpenTelemetry. A new output type,` OTLP`, allows sending JSON-encoded log records using the OpenTelemetry data model and resource semantic conventions. (LOG-4225) - -# Bug fixes - -* Before this update, the CollectorHighErrorRate and CollectorVeryHighErrorRate alerts were still present. With this update, both alerts are removed in the logging 6.0 release but might return in a future release. (LOG-3432) - -# CVEs - -* CVE-2024-34397 \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.0/log6x-upgrading-to-6.txt b/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.0/log6x-upgrading-to-6.txt deleted file mode 100644 index c23045a4..00000000 --- a/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.0/log6x-upgrading-to-6.txt +++ /dev/null @@ -1,544 +0,0 @@ -# Upgrading to Logging 6.0 - - -Logging v6.0 is a significant upgrade from previous releases, achieving several longstanding goals of Cluster Logging: -* Introduction of distinct operators to manage logging components (e.g., collectors, storage, visualization). -* Removal of support for managed log storage and visualization based on Elastic products (i.e., Elasticsearch, Kibana). -* Deprecation of the Fluentd log collector implementation. -* Removal of support for ClusterLogging.logging.openshift.io and ClusterLogForwarder.logging.openshift.io resources. - -[NOTE] ----- -The cluster-logging-operator does not provide an automated upgrade process. ----- -Given the various configurations for log collection, forwarding, and storage, no automated upgrade is provided by the cluster-logging-operator. This documentation assists administrators in converting existing ClusterLogging.logging.openshift.io and ClusterLogForwarder.logging.openshift.io specifications to the new API. Examples of migrated ClusterLogForwarder.observability.openshift.io resources for common use cases are included. - -# Using the oc explain command - -The oc explain command is an essential tool in the OpenShift CLI oc that provides detailed descriptions of the fields within Custom Resources (CRs). This command is invaluable for administrators and developers who are configuring or troubleshooting resources in an OpenShift cluster. - -## Resource Descriptions - -oc explain offers in-depth explanations of all fields associated with a specific object. This includes standard resources like pods and services, as well as more complex entities like statefulsets and custom resources defined by Operators. - -To view the documentation for the outputs field of the ClusterLogForwarder custom resource, you can use: - - -```terminal -$ oc explain clusterlogforwarders.observability.openshift.io.spec.outputs -``` - - - -[NOTE] ----- -In place of clusterlogforwarder the short form obsclf can be used. ----- - -This will display detailed information about these fields, including their types, default values, and any associated sub-fields. - -## Hierarchical Structure - -The command displays the structure of resource fields in a hierarchical format, clarifying the relationships between different configuration options. 
- -For instance, here’s how you can drill down into the storage configuration for a LokiStack custom resource: - - -```terminal -$ oc explain lokistacks.loki.grafana.com -$ oc explain lokistacks.loki.grafana.com.spec -$ oc explain lokistacks.loki.grafana.com.spec.storage -$ oc explain lokistacks.loki.grafana.com.spec.storage.schemas -``` - - -Each command reveals a deeper level of the resource specification, making the structure clear. - -## Type Information - -oc explain also indicates the type of each field (such as string, integer, or boolean), allowing you to verify that resource definitions use the correct data types. - -For example: - - -```terminal -$ oc explain lokistacks.loki.grafana.com.spec.size -``` - - -This will show that size should be defined using an integer value. - -## Default Values - -When applicable, the command shows the default values for fields, providing insights into what values will be used if none are explicitly specified. - -Again using lokistacks.loki.grafana.com as an example: - - -```terminal -$ oc explain lokistacks.spec.template.distributor.replicas -``` - - - -```terminal -GROUP: loki.grafana.com -KIND: LokiStack -VERSION: v1 - -FIELD: replicas - -DESCRIPTION: - Replicas defines the number of replica pods of the component. -``` - - -# Log Storage - -The only managed log storage solution available in this release is a Lokistack, managed by the Loki Operator. This solution, previously available as the preferred alternative to the managed Elasticsearch offering, remains unchanged in its deployment process. - - -[IMPORTANT] ----- -To continue using an existing Red Hat managed Elasticsearch or Kibana deployment provided by the Elasticsearch Operator, remove the owner references from the Elasticsearch resource named elasticsearch, and the Kibana resource named kibana in the openshift-logging namespace before removing the ClusterLogging resource named instance in the same namespace. ----- - -1. Temporarily set ClusterLogging resource to the Unmanaged state by running the following command: - -```terminal -$ oc -n openshift-logging patch clusterlogging/instance -p '{"spec":{"managementState": "Unmanaged"}}' --type=merge -``` - -2. Remove the ownerReferences parameter from the Elasticsearch resource by running the following command: - -The following command ensures that ClusterLogging no longer owns the Elasticsearch resource. Updates to the ClusterLogging resource’s logStore field will no longer affect the Elasticsearch resource. - -```terminal -$ oc -n openshift-logging patch elasticsearch/elasticsearch -p '{"metadata":{"ownerReferences": []}}' --type=merge -``` - -3. Remove the ownerReferences parameter from the Kibana resource. - -The following command ensures that Cluster Logging no longer owns the Kibana resource. Updates to the ClusterLogging resource’s visualization field will no longer affect the Kibana resource. - -```terminal -$ oc -n openshift-logging patch kibana/kibana -p '{"metadata":{"ownerReferences": []}}' --type=merge -``` - -4. Set the ClusterLogging resource to the Managed state by running the following command: - -```terminal -$ oc -n openshift-logging patch clusterlogging/instance -p '{"spec":{"managementState": "Managed"}}' --type=merge -``` - - -# Log Visualization - -The OpenShift console UI plugin for log visualization has been moved to the cluster-observability-operator from the cluster-logging-operator. 
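For clusters that need the console logging view, a UIPlugin resource is created after the Cluster Observability Operator is installed. The following is a minimal sketch only; the field layout and the logging-loki LokiStack name are assumptions based on the Cluster Observability Operator documentation and the LokiStack examples elsewhere in this document, and might differ by Operator version.

```yaml
apiVersion: observability.openshift.io/v1alpha1
kind: UIPlugin
metadata:
  name: logging
spec:
  type: Logging
  logging:
    # Assumed reference to the LokiStack instance used in the examples in this document.
    lokiStack:
      name: logging-loki
```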
- -# Log Collection and Forwarding - -Log collection and forwarding configurations are now specified under the new API, part of the observability.openshift.io API group. The following sections highlight the differences from the old API resources. - - -[NOTE] ----- -Vector is the only supported collector implementation. ----- - -# Management, Resource Allocation, and Workload Scheduling - -Configuration for management state (e.g., Managed, Unmanaged), resource requests and limits, tolerations, and node selection is now part of the new ClusterLogForwarder API. - - -```yaml -apiVersion: "logging.openshift.io/v1" -kind: "ClusterLogging" -spec: - managementState: "Managed" - collection: - resources: - limits: {} - requests: {} - nodeSelector: {} - tolerations: {} -``` - - - -```yaml -apiVersion: "observability.openshift.io/v1" -kind: ClusterLogForwarder -spec: - managementState: Managed - collector: - resources: - limits: {} - requests: {} - nodeSelector: {} - tolerations: {} -``` - - -# Input Specifications - -The input specification is an optional part of the ClusterLogForwarder specification. Administrators can continue to use the predefined values of application, infrastructure, and audit to collect these sources. - -## Application Inputs - -Namespace and container inclusions and exclusions have been consolidated into a single field. - - -```yaml -apiVersion: "logging.openshift.io/v1" -kind: ClusterLogForwarder -spec: - inputs: - - name: application-logs - type: application - application: - namespaces: - - foo - - bar - includes: - - namespace: my-important - container: main - excludes: - - container: too-verbose -``` - - - -```yaml -apiVersion: "observability.openshift.io/v1" -kind: ClusterLogForwarder -spec: - inputs: - - name: application-logs - type: application - application: - includes: - - namespace: foo - - namespace: bar - - namespace: my-important - container: main - excludes: - - container: too-verbose -``` - - - -[NOTE] ----- -application, infrastructure, and audit are reserved words and cannot be used as names when defining an input. ----- - -## Input Receivers - -Changes to input receivers include: - -* Explicit configuration of the type at the receiver level. -* Port settings moved to the receiver level. - - -```yaml -apiVersion: "logging.openshift.io/v1" -kind: ClusterLogForwarder -spec: - inputs: - - name: an-http - receiver: - http: - port: 8443 - format: kubeAPIAudit - - name: a-syslog - receiver: - type: syslog - syslog: - port: 9442 -``` - - - -```yaml -apiVersion: "observability.openshift.io/v1" -kind: ClusterLogForwarder -spec: - inputs: - - name: an-http - type: receiver - receiver: - type: http - port: 8443 - http: - format: kubeAPIAudit - - name: a-syslog - type: receiver - receiver: - type: syslog - port: 9442 -``` - - -# Output Specifications - -High-level changes to output specifications include: - -* URL settings moved to each output type specification. -* Tuning parameters moved to each output type specification. -* Separation of TLS configuration from authentication. -* Explicit configuration of keys and secret/configmap for TLS and authentication. - -# Secrets and TLS Configuration - -Secrets and TLS configurations are now separated into authentication and TLS configuration for each output. They must be explicitly defined in the specification rather than relying on administrators to define secrets with recognized keys. 
Upgrading TLS and authorization configurations requires administrators to understand previously recognized keys to continue using existing secrets. Examples in the following sections provide details on how to configure ClusterLogForwarder secrets to forward to existing Red Hat managed log storage solutions. - -# Red Hat Managed Elasticsearch - - -```yaml -apiVersion: logging.openshift.io/v1 -kind: ClusterLogging -metadata: - name: instance - namespace: openshift-logging -spec: - logStore: - type: elasticsearch -``` - - - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - name: instance - namespace: openshift-logging -spec: - serviceAccount: - name: - managementState: Managed - outputs: - - name: audit-elasticsearch - type: elasticsearch - elasticsearch: - url: https://elasticsearch:9200 - version: 6 - index: audit-write - tls: - ca: - key: ca-bundle.crt - secretName: collector - certificate: - key: tls.crt - secretName: collector - key: - key: tls.key - secretName: collector - - name: app-elasticsearch - type: elasticsearch - elasticsearch: - url: https://elasticsearch:9200 - version: 6 - index: app-write - tls: - ca: - key: ca-bundle.crt - secretName: collector - certificate: - key: tls.crt - secretName: collector - key: - key: tls.key - secretName: collector - - name: infra-elasticsearch - type: elasticsearch - elasticsearch: - url: https://elasticsearch:9200 - version: 6 - index: infra-write - tls: - ca: - key: ca-bundle.crt - secretName: collector - certificate: - key: tls.crt - secretName: collector - key: - key: tls.key - secretName: collector - pipelines: - - name: app - inputRefs: - - application - outputRefs: - - app-elasticsearch - - name: audit - inputRefs: - - audit - outputRefs: - - audit-elasticsearch - - name: infra - inputRefs: - - infrastructure - outputRefs: - - infra-elasticsearch -``` - - -# Red Hat Managed LokiStack - - -```yaml -apiVersion: logging.openshift.io/v1 -kind: ClusterLogging -metadata: - name: instance - namespace: openshift-logging -spec: - logStore: - type: lokistack - lokistack: - name: logging-loki -``` - - - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - name: instance - namespace: openshift-logging -spec: - serviceAccount: - name: - outputs: - - name: default-lokistack - type: lokiStack - lokiStack: - target: - name: logging-loki - namespace: openshift-logging - authentication: - token: - from: serviceAccount - tls: - ca: - key: service-ca.crt - configMapName: openshift-service-ca.crt - pipelines: - - outputRefs: - - default-lokistack - - inputRefs: - - application - - infrastructure -``` - - -# Filters and Pipeline Configuration - -Pipeline configurations now define only the routing of input sources to their output destinations, with any required transformations configured separately as filters. All attributes of pipelines from previous releases have been converted to filters in this release. Individual filters are defined in the filters specification and referenced by a pipeline. 
- - -```yaml -apiVersion: logging.openshift.io/v1 -kind: ClusterLogForwarder -spec: - pipelines: - - name: application-logs - parse: json - labels: - foo: bar - detectMultilineErrors: true -``` - - - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -spec: - filters: - - name: detectexception - type: detectMultilineException - - name: parse-json - type: parse - - name: labels - type: openshiftLabels - openshiftLabels: - foo: bar - pipelines: - - name: application-logs - filterRefs: - - detectexception - - labels - - parse-json -``` - - -# Validation and Status - -Most validations are enforced when a resource is created or updated, providing immediate feedback. This is a departure from previous releases, where validation occurred post-creation and required inspecting the resource status. Some validation still occurs post-creation for cases where it is not possible to validate at creation or update time. - -Instances of the ClusterLogForwarder.observability.openshift.io must satisfy the following conditions before the operator will deploy the log collector: Authorized, Valid, Ready. An example of these conditions is: - - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -status: - conditions: - - lastTransitionTime: "2024-09-13T03:28:44Z" - message: 'permitted to collect log types: [application]' - reason: ClusterRolesExist - status: "True" - type: observability.openshift.io/Authorized - - lastTransitionTime: "2024-09-13T12:16:45Z" - message: "" - reason: ValidationSuccess - status: "True" - type: observability.openshift.io/Valid - - lastTransitionTime: "2024-09-13T12:16:45Z" - message: "" - reason: ReconciliationComplete - status: "True" - type: Ready - filterConditions: - - lastTransitionTime: "2024-09-13T13:02:59Z" - message: filter "detectexception" is valid - reason: ValidationSuccess - status: "True" - type: observability.openshift.io/ValidFilter-detectexception - - lastTransitionTime: "2024-09-13T13:02:59Z" - message: filter "parse-json" is valid - reason: ValidationSuccess - status: "True" - type: observability.openshift.io/ValidFilter-parse-json - inputConditions: - - lastTransitionTime: "2024-09-13T12:23:03Z" - message: input "application1" is valid - reason: ValidationSuccess - status: "True" - type: observability.openshift.io/ValidInput-application1 - outputConditions: - - lastTransitionTime: "2024-09-13T13:02:59Z" - message: output "default-lokistack-application1" is valid - reason: ValidationSuccess - status: "True" - type: observability.openshift.io/ValidOutput-default-lokistack-application1 - pipelineConditions: - - lastTransitionTime: "2024-09-13T03:28:44Z" - message: pipeline "default-before" is valid - reason: ValidationSuccess - status: "True" - type: observability.openshift.io/ValidPipeline-default-before -``` - - - -[NOTE] ----- -Conditions that are satisfied and applicable have a "status" value of "True". Conditions with a status other than "True" provide a reason and a message explaining the issue. 
----- \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.0/log6x-visual.txt b/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.0/log6x-visual.txt deleted file mode 100644 index c669b518..00000000 --- a/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.0/log6x-visual.txt +++ /dev/null @@ -1,5 +0,0 @@ -# Visualization for logging - - - -Visualization for logging is provided by deploying the Logging UI Plugin of the Cluster Observability Operator, which requires Operator installation. \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.1/6x-cluster-logging-deploying-6.1.txt b/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.1/6x-cluster-logging-deploying-6.1.txt deleted file mode 100644 index 7521e48d..00000000 --- a/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.1/6x-cluster-logging-deploying-6.1.txt +++ /dev/null @@ -1,640 +0,0 @@ -# Installing Logging - - -Red Hat OpenShift Container Platform Operators use custom resources (CRs) to manage applications and their components. You provide high-level configuration and settings through the CR. The Operator translates high-level directives into low-level actions, based on best practices embedded within the logic of the Operator. A custom resource definition (CRD) defines a CR and lists all the configurations available to users of the Operator. Installing an Operator creates the CRDs to generate CRs. -To get started with logging, you must install the following Operators: -* Loki Operator to manage your log store. -* Red Hat OpenShift Logging Operator to manage log collection and forwarding. -* Cluster Observability Operator (COO) to manage visualization. -You can use either the Red Hat OpenShift Container Platform web console or the Red Hat OpenShift Container Platform CLI to install or configure logging. - -[IMPORTANT] ----- -You must configure the Red Hat OpenShift Logging Operator after the Loki Operator. ----- - -# Installation by using the CLI - -The following sections describe installing the Loki Operator and the Red Hat OpenShift Logging Operator by using the CLI. - -## Installing the Loki Operator by using the CLI - -Install Loki Operator on your Red Hat OpenShift Container Platform cluster to manage the log store Loki by using the Red Hat OpenShift Container Platform command-line interface (CLI). You can deploy and configure the Loki log store by reconciling the resource LokiStack with the Loki Operator. - -* You have administrator permissions. -* You installed the OpenShift CLI (`oc`). -* You have access to a supported object store. For example: AWS S3, Google Cloud Storage, Azure, Swift, Minio, or OpenShift Data Foundation. - -1. Create a Namespace object for Loki Operator: -Example Namespace object - -```yaml -apiVersion: v1 -kind: Namespace -metadata: - name: openshift-operators-redhat 1 - labels: - openshift.io/cluster-monitoring: "true" 2 -``` - -You must specify openshift-operators-redhat as the namespace. To enable monitoring for the operator, configure Cluster Monitoring Operator to scrape metrics from the openshift-operators-redhat namespace and not the openshift-operators namespace. The openshift-operators namespace might contain community operators, which are untrusted and could publish a metric with the same name as an Red Hat OpenShift Container Platform metric, causing conflicts. 
-A string value that specifies the label as shown to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace. -2. Apply the Namespace object by running the following command: - -```terminal -$ oc apply -f .yaml -``` - -3. Create an OperatorGroup object. -Example OperatorGroup object - -```yaml -apiVersion: operators.coreos.com/v1 -kind: OperatorGroup -metadata: - name: loki-operator - namespace: openshift-operators-redhat 1 -spec: - upgradeStrategy: Default -``` - -You must specify openshift-operators-redhat as the namespace. -4. Apply the OperatorGroup object by running the following command: - -```terminal -$ oc apply -f .yaml -``` - -5. Create a Subscription object for Loki Operator: -Example Subscription object - -```yaml -apiVersion: operators.coreos.com/v1alpha1 -kind: Subscription -metadata: - name: loki-operator - namespace: openshift-operators-redhat 1 -spec: - channel: stable-6. 2 - installPlanApproval: Automatic 3 - name: loki-operator - source: redhat-operators 4 - sourceNamespace: openshift-marketplace -``` - -You must specify openshift-operators-redhat as the namespace. -Specify stable-6. as the channel. -If the approval strategy in the subscription is set to Automatic, the update process initiates as soon as a new operator version is available in the selected channel. If the approval strategy is set to Manual, you must manually approve pending updates. -Specify redhat-operators as the value. If your Red Hat OpenShift Container Platform cluster is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object that you created when you configured Operator Lifecycle Manager (OLM). -6. Apply the Subscription object by running the following command: - -```terminal -$ oc apply -f .yaml -``` - -7. Create a namespace object for deploy the LokiStack: -Example namespace object - -```yaml -apiVersion: v1 -kind: Namespace -metadata: - name: openshift-logging 1 - labels: - openshift.io/cluster-monitoring: "true" 2 -``` - -The openshift-logging namespace is dedicated for all logging workloads. -A string value that specifies the label, as shown, to ensure that cluster monitoring scrapes the openshift-logging namespace. -8. Apply the namespace object by running the following command: - -```terminal -$ oc apply -f .yaml -``` - -9. Create a secret with the credentials to access the object storage. For example, create a secret to access Amazon Web Services (AWS) s3. -Example Secret object - -```yaml -apiVersion: v1 -kind: Secret -metadata: - name: logging-loki-s3 1 - namespace: openshift-logging -stringData: 2 - access_key_id: - access_key_secret: - bucketnames: s3-bucket-name - endpoint: https://s3.eu-central-1.amazonaws.com - region: eu-central-1 -``` - -Use the name logging-loki-s3 to match the name used in LokiStack. -For the contents of the secret see the Loki object storage section. - -[IMPORTANT] ----- -If there is no retention period defined on the s3 bucket or in the LokiStack custom resource (CR), then the logs are not pruned and they stay in the s3 bucket forever, which might fill up the s3 storage. ----- -10. Apply the Secret object by running the following command: - -```terminal -$ oc apply -f .yaml -``` - -11. Create a LokiStack CR: -Example LokiStack CR - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki 1 - namespace: openshift-logging 2 -spec: - size: 1x.small 3 - storage: - schemas: - - version: v13 - effectiveDate: "--
" 4 - secret: - name: logging-loki-s3 5 - type: s3 6 - storageClassName: 7 - tenants: - mode: openshift-logging 8 -``` - -Use the name logging-loki. -You must specify openshift-logging as the namespace. -Specify the deployment size. Supported size options for production instances of Loki are 1x.extra-small, 1x.small, or 1x.medium. Additionally, 1x.pico is supported starting with logging 6.1. -For new installations this date should be set to the equivalent of "yesterday", as this will be the date from when the schema takes effect. -Specify the name of your log store secret. -Specify the corresponding storage type. -Specify the name of a storage class for temporary storage. For best performance, specify a storage class that allocates block storage. You can list the available storage classes for your cluster by using the oc get storageclasses command. -The openshift-logging mode is the default tenancy mode where a tenant is created for log types, such as audit, infrastructure, and application. This enables access control for individual users and user groups to different log streams. -12. Apply the LokiStack CR object by running the following command: - -```terminal -$ oc apply -f .yaml -``` - - -* Verify the installation by running the following command: - -```terminal -$ oc get pods -n openshift-logging -``` - -Example output - -```terminal -$ oc get pods -n openshift-logging -NAME READY STATUS RESTARTS AGE -logging-loki-compactor-0 1/1 Running 0 42m -logging-loki-distributor-7d7688bcb9-dvcj8 1/1 Running 0 42m -logging-loki-gateway-5f6c75f879-bl7k9 2/2 Running 0 42m -logging-loki-gateway-5f6c75f879-xhq98 2/2 Running 0 42m -logging-loki-index-gateway-0 1/1 Running 0 42m -logging-loki-ingester-0 1/1 Running 0 42m -logging-loki-querier-6b7b56bccc-2v9q4 1/1 Running 0 42m -logging-loki-query-frontend-84fb57c578-gq2f7 1/1 Running 0 42m -``` - - -## Installing Red Hat OpenShift Logging Operator by using the CLI - -Install Red Hat OpenShift Logging Operator on your Red Hat OpenShift Container Platform cluster to collect and forward logs to a log store by using the OpenShift CLI (`oc`). - -* You have administrator permissions. -* You installed the OpenShift CLI (`oc`). -* You installed and configured Loki Operator. -* You have created the openshift-logging namespace. - -1. Create an OperatorGroup object: -Example OperatorGroup object - -```yaml -apiVersion: operators.coreos.com/v1 -kind: OperatorGroup -metadata: - name: cluster-logging - namespace: openshift-logging 1 -spec: - upgradeStrategy: Default -``` - -You must specify openshift-logging as the namespace. -2. Apply the OperatorGroup object by running the following command: - -```terminal -$ oc apply -f .yaml -``` - -3. Create a Subscription object for Red Hat OpenShift Logging Operator: -Example Subscription object - -```yaml -apiVersion: operators.coreos.com/v1alpha1 -kind: Subscription -metadata: - name: cluster-logging - namespace: openshift-logging 1 -spec: - channel: stable-6. 2 - installPlanApproval: Automatic 3 - name: cluster-logging - source: redhat-operators 4 - sourceNamespace: openshift-marketplace -``` - -You must specify openshift-logging as the namespace. -Specify stable-6. as the channel. -If the approval strategy in the subscription is set to Automatic, the update process initiates as soon as a new operator version is available in the selected channel. If the approval strategy is set to Manual, you must manually approve pending updates. -Specify redhat-operators as the value. 
If your Red Hat OpenShift Container Platform cluster is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object that you created when you configured Operator Lifecycle Manager (OLM). -4. Apply the Subscription object by running the following command: - -```terminal -$ oc apply -f .yaml -``` - -5. Create a service account to be used by the log collector: - -```terminal -$ oc create sa logging-collector -n openshift-logging -``` - -6. Assign the necessary permissions to the service account for the collector to be able to collect and forward logs. In this example, the collector is provided permissions to collect logs from both infrastructure and application logs. - -```terminal -$ oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z logging-collector -n openshift-logging -$ oc adm policy add-cluster-role-to-user collect-application-logs -z logging-collector -n openshift-logging -$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z logging-collector -n openshift-logging -``` - -7. Create a ClusterLogForwarder CR: -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - name: instance - namespace: openshift-logging 1 -spec: - serviceAccount: - name: logging-collector 2 - outputs: - - name: lokistack-out - type: lokiStack 3 - lokiStack: - target: 4 - name: logging-loki - namespace: openshift-logging - authentication: - token: - from: serviceAccount - tls: - ca: - key: service-ca.crt - configMapName: openshift-service-ca.crt - pipelines: - - name: infra-app-logs - inputRefs: 5 - - application - - infrastructure - outputRefs: - - lokistack-out -``` - -You must specify the openshift-logging namespace. -Specify the name of the service account created before. -Select the lokiStack output type to send logs to the LokiStack instance. -Point the ClusterLogForwarder to the LokiStack instance created earlier. -Select the log output types you want to send to the LokiStack instance. -8. Apply the ClusterLogForwarder CR object by running the following command: - -```terminal -$ oc apply -f .yaml -``` - - -1. Verify the installation by running the following command: - -```terminal -$ oc get pods -n openshift-logging -``` - -Example output - -```terminal -$ oc get pods -n openshift-logging -NAME READY STATUS RESTARTS AGE -cluster-logging-operator-fb7f7cf69-8jsbq 1/1 Running 0 98m -instance-222js 2/2 Running 0 18m -instance-g9ddv 2/2 Running 0 18m -instance-hfqq8 2/2 Running 0 18m -instance-sphwg 2/2 Running 0 18m -instance-vv7zn 2/2 Running 0 18m -instance-wk5zz 2/2 Running 0 18m -logging-loki-compactor-0 1/1 Running 0 42m -logging-loki-distributor-7d7688bcb9-dvcj8 1/1 Running 0 42m -logging-loki-gateway-5f6c75f879-bl7k9 2/2 Running 0 42m -logging-loki-gateway-5f6c75f879-xhq98 2/2 Running 0 42m -logging-loki-index-gateway-0 1/1 Running 0 42m -logging-loki-ingester-0 1/1 Running 0 42m -logging-loki-querier-6b7b56bccc-2v9q4 1/1 Running 0 42m -logging-loki-query-frontend-84fb57c578-gq2f7 1/1 Running 0 42m -``` - - -nclude::modules/log6x-installing-the-logging-ui-plug-in-cli.adoc[leveloffset=+2] - -# Installation by using the web console - -The following sections describe installing the Loki Operator and the Red Hat OpenShift Logging Operator by using the web console. 
- -## Installing Logging by using the web console - -Install Loki Operator on your Red Hat OpenShift Container Platform cluster to manage the log store Loki from the OperatorHub by using the Red Hat OpenShift Container Platform web console. You can deploy and configure the Loki log store by reconciling the resource LokiStack with the Loki Operator. - -* You have administrator permissions. -* You have access to the Red Hat OpenShift Container Platform web console. -* You have access to a supported object store (AWS S3, Google Cloud Storage, Azure, Swift, Minio, OpenShift Data Foundation). - -1. In the Red Hat OpenShift Container Platform web console Administrator perspective, go to Operators -> OperatorHub. -2. Type Loki Operator in the Filter by keyword field. Click Loki Operator in the list of available Operators, and then click Install. - -[IMPORTANT] ----- -The Community Loki Operator is not supported by Red Hat. ----- -3. Select stable-x.y as the Update channel. - -The Loki Operator must be deployed to the global Operator group namespace openshift-operators-redhat, so the Installation mode and Installed Namespace are already selected. If this namespace does not already exist, it will be created for you. -4. Select Enable Operator-recommended cluster monitoring on this namespace. - -This option sets the openshift.io/cluster-monitoring: "true" label in the Namespace object. You must select this option to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace. -5. For Update approval select Automatic, then click Install. - -If the approval strategy in the subscription is set to Automatic, the update process initiates as soon as a new Operator version is available in the selected channel. If the approval strategy is set to Manual, you must manually approve pending updates. - -[NOTE] ----- -An Operator might display a Failed status before the installation completes. If the Operator install completes with an InstallSucceeded message, refresh the page. ----- -6. While the Operator installs, create the namespace to which the log store will be deployed. -1. Click + in the top right of the screen to access the Import YAML page. -2. Add the YAML definition for the openshift-logging namespace: -Example namespace object - -```yaml -apiVersion: v1 -kind: Namespace -metadata: - name: openshift-logging 1 - labels: - openshift.io/cluster-monitoring: "true" 2 -``` - -The openshift-logging namespace is dedicated for all logging workloads. -A string value that specifies the label, as shown, to ensure that cluster monitoring scrapes the openshift-logging namespace. -3. Click Create. -7. Create a secret with the credentials to access the object storage. -1. Click + in the top right of the screen to access the Import YAML page. -2. Add the YAML definition for the secret. For example, create a secret to access Amazon Web Services (AWS) s3: -Example Secret object - -```yaml -apiVersion: v1 -kind: Secret -metadata: - name: logging-loki-s3 1 - namespace: openshift-logging 2 -stringData: 3 - access_key_id: - access_key_secret: - bucketnames: s3-bucket-name - endpoint: https://s3.eu-central-1.amazonaws.com - region: eu-central-1 -``` - -Note down the name used for the secret logging-loki-s3 to use it later when creating the LokiStack resource. -Set the namespace to openshift-logging as that will be the namespace used to deploy LokiStack. -For the contents of the secret see the Loki object storage section. 
-
-[IMPORTANT]
-----
-If there is no retention period defined on the s3 bucket or in the LokiStack custom resource (CR), then the logs are not pruned and they stay in the s3 bucket forever, which might fill up the s3 storage.
-----
-3. Click Create.
-8. Navigate to the Installed Operators page. Select the Loki Operator under the Provided APIs, find the LokiStack resource and click Create Instance.
-9. Select YAML view, and then use the following template to create a LokiStack CR:
-
-```yaml
-apiVersion: loki.grafana.com/v1
-kind: LokiStack
-metadata:
-  name: logging-loki 1
-  namespace: openshift-logging 2
-spec:
-  size: 1x.small 3
-  storage:
-    schemas:
-    - version: v13
-      effectiveDate: "--"
-    secret:
-      name: logging-loki-s3 4
-      type: s3 5
-  storageClassName:  6
-  tenants:
-    mode: openshift-logging 7
-```
-
-Use the name logging-loki.
-You must specify openshift-logging as the namespace.
-Specify the deployment size. Supported size options for production instances of Loki are 1x.extra-small, 1x.small, or 1x.medium. Additionally, 1x.pico is supported starting with logging 6.1.
-Specify the name of your log store secret.
-Specify the corresponding storage type.
-Specify the name of a storage class for temporary storage. For best performance, specify a storage class that allocates block storage. You can list the available storage classes for your cluster by using the oc get storageclasses command.
-The openshift-logging mode is the default tenancy mode where a tenant is created for log types, such as audit, infrastructure, and application. This enables access control for individual users and user groups to different log streams.
-10. Click Create.
-
-1. In the LokiStack tab verify that you see your LokiStack instance.
-2. In the Status column, verify that you see the message Condition: Ready with a green checkmark.
-
-## Installing Red Hat OpenShift Logging Operator by using the web console
-
-Install Red Hat OpenShift Logging Operator on your Red Hat OpenShift Container Platform cluster to collect and forward logs to a log store from the OperatorHub by using the Red Hat OpenShift Container Platform web console.
-
-* You have administrator permissions.
-* You have access to the Red Hat OpenShift Container Platform web console.
-* You installed and configured Loki Operator.
-
-1. In the Red Hat OpenShift Container Platform web console Administrator perspective, go to Operators -> OperatorHub.
-2. Type Red Hat OpenShift Logging Operator in the Filter by keyword field. Click Red Hat OpenShift Logging Operator in the list of available Operators, and then click Install.
-3. Select stable-x.y as the Update channel. The latest version is already selected in the Version field.
-
-The Red Hat OpenShift Logging Operator must be deployed to the logging namespace openshift-logging, so the Installation mode and Installed Namespace are already selected. If this namespace does not already exist, it will be created for you.
-4. Select Enable Operator-recommended cluster monitoring on this namespace.
-
-This option sets the openshift.io/cluster-monitoring: "true" label in the Namespace object. You must select this option to ensure that cluster monitoring scrapes the openshift-logging namespace.
-5. For Update approval select Automatic, then click Install.
-
-If the approval strategy in the subscription is set to Automatic, the update process initiates as soon as a new operator version is available in the selected channel. If the approval strategy is set to Manual, you must manually approve pending updates.
-
-[NOTE]
-----
-An Operator might display a Failed status before the installation completes. If the operator installation completes with an InstallSucceeded message, refresh the page.
-----
-6. While the operator installs, create the service account that will be used by the log collector to collect the logs.
-1. Click the + in the top right of the screen to access the Import YAML page.
-2. Enter the YAML definition for the service account.
-Example ServiceAccount object - -```yaml -apiVersion: v1 -kind: ServiceAccount -metadata: - name: logging-collector 1 - namespace: openshift-logging 2 -``` - -Note down the name used for the service account logging-collector to use it later when creating the ClusterLogForwarder resource. -Set the namespace to openshift-logging because that is the namespace for deploying the ClusterLogForwarder resource. -3. Click the Create button. -7. Create the ClusterRoleBinding objects to grant the necessary permissions to the log collector for accessing the logs that you want to collect and to write the log store, for example infrastructure and application logs. -1. Click the + in the top right of the screen to access the Import YAML page. -2. Enter the YAML definition for the ClusterRoleBinding resources. -Example ClusterRoleBinding resources - -```yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRoleBinding -metadata: - name: logging-collector:write-logs -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: logging-collector-logs-writer 1 -subjects: -- kind: ServiceAccount - name: logging-collector - namespace: openshift-logging ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRoleBinding -metadata: - name: logging-collector:collect-application -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: collect-application-logs 2 -subjects: -- kind: ServiceAccount - name: logging-collector - namespace: openshift-logging ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRoleBinding -metadata: - name: logging-collector:collect-infrastructure -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: collect-infrastructure-logs 3 -subjects: -- kind: ServiceAccount - name: logging-collector - namespace: openshift-logging -``` - -The cluster role to allow the log collector to write logs to LokiStack. -The cluster role to allow the log collector to collect logs from applications. -The cluster role to allow the log collector to collect logs from infrastructure. -3. Click the Create button. -8. Go to the Operators -> Installed Operators page. Select the operator and click the All instances tab. -9. After granting the necessary permissions to the service account, navigate to the Installed Operators page. Select the Red Hat OpenShift Logging Operator under the Provided APIs, find the ClusterLogForwarder resource and click Create Instance. -10. Select YAML view, and then use the following template to create a ClusterLogForwarder CR: -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - name: instance - namespace: openshift-logging 1 -spec: - serviceAccount: - name: logging-collector 2 - outputs: - - name: lokistack-out - type: lokiStack 3 - lokiStack: - target: 4 - name: logging-loki - namespace: openshift-logging - authentication: - token: - from: serviceAccount - tls: - ca: - key: service-ca.crt - configMapName: openshift-service-ca.crt - pipelines: - - name: infra-app-logs - inputRefs: 5 - - application - - infrastructure - outputRefs: - - lokistack-out -``` - -You must specify openshift-logging as the namespace. -Specify the name of the service account created earlier. -Select the lokiStack output type to send logs to the LokiStack instance. -Point the ClusterLogForwarder to the LokiStack instance created earlier. -Select the log output types you want to send to the LokiStack instance. -11. Click Create. - -1. 
In the ClusterLogForwarder tab verify that you see your ClusterLogForwarder instance. -2. In the Status column, verify that you see the messages: -* Condition: observability.openshift.io/Authorized -* observability.openshift.io/Valid, Ready - -## Installing the Logging UI plugin by using the web console - -Install the Logging UI plugin by using the web console so that you can visualize logs. - -* You have administrator permissions. -* You have access to the Red Hat OpenShift Container Platform web console. -* You installed and configured Loki Operator. - -1. Install the Cluster Observability Operator. For more information, see Installing the Cluster Observability Operator. -2. Navigate to the Installed Operators page. Under Provided APIs, select ClusterObservabilityOperator. Find the UIPlugin resource and click Create Instance. -3. Select the YAML view, and then use the following template to create a UIPlugin custom resource (CR): - -```yaml -apiVersion: observability.openshift.io/v1alpha1 -kind: UIPlugin -metadata: - name: logging 1 -spec: - type: Logging 2 - logging: - lokiStack: - name: logging-loki 3 -``` - -Set name to logging. -Set type to Logging. -The name value must match the name of your LokiStack instance. - -[NOTE] ----- -If you did not install LokiStack in the openshift-logging namespace, set the LokiStack namespace under the lokiStack configuration. ----- -4. Click Create. - -1. Refresh the page when a pop-up message instructs you to do so. -2. Navigate to the Observe → Logs panel, where you can run LogQL queries. You can also query logs for individual pods from the Aggregated Logs tab of a specific pod. \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.1/log61-cluster-logging-support.txt b/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.1/log61-cluster-logging-support.txt deleted file mode 100644 index d4c8e815..00000000 --- a/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.1/log61-cluster-logging-support.txt +++ /dev/null @@ -1,141 +0,0 @@ -# Support - - -Only the configuration options described in this documentation are supported for logging. -Do not use any other configuration options, as they are unsupported. Configuration paradigms might change across Red Hat OpenShift Container Platform releases, and such cases can only be handled gracefully if all configuration possibilities are controlled. If you use configurations other than those described in this documentation, your changes will be overwritten, because Operators are designed to reconcile any differences. - -[NOTE] ----- -If you must perform configurations not described in the Red Hat OpenShift Container Platform documentation, you must set your Red Hat OpenShift Logging Operator to Unmanaged. An unmanaged logging instance is not supported and does not receive updates until you return its status to Managed. ----- - -[NOTE] ----- -Logging is provided as an installable component, with a distinct release cycle from the core Red Hat OpenShift Container Platform. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility. ----- -Loki is a horizontally scalable, highly available, multi-tenant log aggregation system offered as a GA log store for logging for Red Hat OpenShift that can be visualized with the OpenShift Observability UI. The Loki configuration provided by OpenShift Logging is a short-term log store designed to enable users to perform fast troubleshooting with the collected logs. 
For that purpose, the logging for Red Hat OpenShift configuration of Loki has short-term storage, and is optimized for very recent queries. - -[IMPORTANT] ----- -For long-term storage or queries over a long time period, users should look to log stores external to their cluster. Loki sizing is only tested and supported for short term storage, for a maximum of 30 days. ----- -Logging for Red Hat OpenShift is an opinionated collector and normalizer of application, infrastructure, and audit logs. It is intended to be used for forwarding logs to various supported systems. -Logging is not: -* A high scale log collection system -* Security Information and Event Monitoring (SIEM) compliant -* A "bring your own" (BYO) log collector configuration -* Historical or long term log retention or storage -* A guaranteed log sink -* Secure storage - audit logs are not stored by default - -# Supported API custom resource definitions - -The following table describes the supported Logging APIs. - - - -# Unsupported configurations - -You must set the Red Hat OpenShift Logging Operator to the Unmanaged state to modify the following components: - -* The collector configuration file -* The collector daemonset - -Explicitly unsupported cases include: - -* Configuring the logging collector using environment variables. You cannot use environment variables to modify the log collector. -* Configuring how the log collector normalizes logs. You cannot modify default log normalization. - -# Support policy for unmanaged Operators - -The management state of an Operator determines whether an Operator is actively -managing the resources for its related component in the cluster as designed. If -an Operator is set to an unmanaged state, it does not respond to changes in -configuration nor does it receive updates. - -While this can be helpful in non-production clusters or during debugging, -Operators in an unmanaged state are unsupported and the cluster administrator -assumes full control of the individual component configurations and upgrades. - -An Operator can be set to an unmanaged state using the following methods: - -* Individual Operator configuration - -Individual Operators have a managementState parameter in their configuration. -This can be accessed in different ways, depending on the Operator. For example, -the Red Hat OpenShift Logging Operator accomplishes this by modifying a custom resource -(CR) that it manages, while the Cluster Samples Operator uses a cluster-wide -configuration resource. - -Changing the managementState parameter to Unmanaged means that the Operator -is not actively managing its resources and will take no action related to the -related component. Some Operators might not support this management state as it -might damage the cluster and require manual recovery. - -[WARNING] ----- -Changing individual Operators to the Unmanaged state renders that particular -component and functionality unsupported. Reported issues must be reproduced in -Managed state for support to proceed. ----- -* Cluster Version Operator (CVO) overrides - -The spec.overrides parameter can be added to the CVO’s configuration to allow -administrators to provide a list of overrides to the CVO’s behavior for a -component. Setting the spec.overrides[].unmanaged parameter to true for a -component blocks cluster upgrades and alerts the administrator after a CVO -override has been set: - -```terminal -Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing. 
-``` - - -[WARNING] ----- -Setting a CVO override puts the entire cluster in an unsupported state. Reported -issues must be reproduced after removing any overrides for support to proceed. ----- - -# Collecting logging data for Red Hat Support - -When opening a support case, it is helpful to provide debugging information about your cluster to Red Hat Support. - -You can use the must-gather tool to collect diagnostic information for project-level resources, cluster-level resources, and each of the logging components. -For prompt support, supply diagnostic information for both Red Hat OpenShift Container Platform and logging. - -## About the must-gather tool - -The oc adm must-gather CLI command collects the information from your cluster that is most likely needed for debugging issues. - -For your logging, must-gather collects the following information: - -* Project-level resources, including pods, configuration maps, service accounts, roles, role bindings, and events at the project level -* Cluster-level resources, including nodes, roles, and role bindings at the cluster level -* OpenShift Logging resources in the openshift-logging and openshift-operators-redhat namespaces, including health status for the log collector, the log store, and the log visualizer - -When you run oc adm must-gather, a new pod is created on the cluster. The data is collected on that pod and saved in a new directory that starts with must-gather.local. This directory is created in the current working directory. - -## Collecting logging data - -You can use the oc adm must-gather CLI command to collect information about logging. - -To collect logging information with must-gather: - -1. Navigate to the directory where you want to store the must-gather information. -2. Run the oc adm must-gather command against the logging image: - -```terminal -$ oc adm must-gather --image=$(oc -n openshift-logging get deployment.apps/cluster-logging-operator -o jsonpath='{.spec.template.spec.containers[?(@.name == "cluster-logging-operator")].image}') -``` - - -The must-gather tool creates a new directory that starts with must-gather.local within the current directory. For example: -must-gather.local.4157245944708210408. -3. Create a compressed file from the must-gather directory that was just created. For example, on a computer that uses a Linux operating system, run the following command: - -```terminal -$ tar -cvaf must-gather.tar.gz must-gather.local.4157245944708210408 -``` - -4. Attach the compressed file to your support case on the Red Hat Customer Portal. \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.1/log6x-about-6.1.txt b/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.1/log6x-about-6.1.txt deleted file mode 100644 index 4d7fe521..00000000 --- a/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.1/log6x-about-6.1.txt +++ /dev/null @@ -1,330 +0,0 @@ -# Logging 6.1 - - -The ClusterLogForwarder custom resource (CR) is the central configuration point for log collection and forwarding. - -# Inputs and outputs - -Inputs specify the sources of logs to be forwarded. Logging provides the following built-in input types that select logs from different parts of your cluster: - -* application -* receiver -* infrastructure -* audit - -You can also define custom inputs based on namespaces or pod labels to fine-tune log selection. - -Outputs define the destinations where logs are sent. 
Each output type has its own set of configuration options, allowing you to customize the behavior and authentication settings. - -# Receiver input type - -The receiver input type enables the Logging system to accept logs from external sources. It supports two formats for receiving logs: http and syslog. - -The ReceiverSpec field defines the configuration for a receiver input. - -# Pipelines and filters - -Pipelines determine the flow of logs from inputs to outputs. A pipeline consists of one or more input refs, output refs, and optional filter refs. You can use filters to transform or drop log messages within a pipeline. The order of filters matters, as they are applied sequentially, and earlier filters can prevent log messages from reaching later stages. - -# Operator behavior - -The Cluster Logging Operator manages the deployment and configuration of the collector based on the managementState field of the ClusterLogForwarder resource: - -* When set to Managed (default), the Operator actively manages the logging resources to match the configuration defined in the spec. -* When set to Unmanaged, the Operator does not take any action, allowing you to manually manage the logging components. - -# Validation - -Logging includes extensive validation rules and default values to ensure a smooth and error-free configuration experience. The ClusterLogForwarder resource enforces validation checks on required fields, dependencies between fields, and the format of input values. Default values are provided for certain fields, reducing the need for explicit configuration in common scenarios. - -# Quick start - -OpenShift Logging supports two data models: - -* ViaQ (General Availability) -* OpenTelemetry (Technology Preview) - -You can select either of these data models based on your requirement by configuring the lokiStack.dataModel field in the ClusterLogForwarder. ViaQ is the default data model when forwarding logs to LokiStack. - - -[NOTE] ----- -In future releases of OpenShift Logging, the default data model will change from ViaQ to OpenTelemetry. ----- - -## Quick start with ViaQ - -To use the default ViaQ data model, follow these steps: - -* You have access to an Red Hat OpenShift Container Platform cluster with cluster-admin permissions. -* You installed the OpenShift CLI (`oc`). -* You have access to a supported object store. For example, AWS S3, Google Cloud Storage, Azure, Swift, Minio, or OpenShift Data Foundation. - -1. Install the Red Hat OpenShift Logging Operator, Loki Operator, and Cluster Observability Operator (COO) from OperatorHub. -2. Create a LokiStack custom resource (CR) in the openshift-logging namespace: - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: - managementState: Managed - size: 1x.extra-small - storage: - schemas: - - effectiveDate: '2024-10-01' - version: v13 - secret: - name: logging-loki-s3 - type: s3 - storageClassName: gp3-csi - tenants: - mode: openshift-logging -``` - - -[NOTE] ----- -Ensure that the logging-loki-s3 secret is created beforehand. The contents of this secret vary depending on the object storage in use. For more information, see Secrets and TLS Configuration. ----- -3. Create a service account for the collector: - -```terminal -$ oc create sa collector -n openshift-logging -``` - -4. 
Allow the collector's service account to write data to the LokiStack CR: - -```terminal -$ oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector -n openshift-logging -``` - - -[NOTE] ----- -The ClusterRole resource is created automatically during the Cluster Logging Operator installation and does not need to be created manually. ----- -5. To collect logs, use the service account of the collector by running the following commands: - -```terminal -$ oc adm policy add-cluster-role-to-user collect-application-logs -z collector -n openshift-logging -``` - - -```terminal -$ oc adm policy add-cluster-role-to-user collect-audit-logs -z collector -n openshift-logging -``` - - -```terminal -$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector -n openshift-logging -``` - - -[NOTE] ----- -The example binds the collector to all three roles (application, infrastructure, and audit), but by default, only application and infrastructure logs are collected. To collect audit logs, update your ClusterLogForwarder configuration to include them. Assign roles based on the specific log types required for your environment. ----- -6. Create a UIPlugin CR to enable the Log section in the Observe tab: - -```yaml -apiVersion: observability.openshift.io/v1alpha1 -kind: UIPlugin -metadata: - name: logging -spec: - type: Logging - logging: - lokiStack: - name: logging-loki -``` - -7. Create a ClusterLogForwarder CR to configure log forwarding: - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - name: collector - namespace: openshift-logging -spec: - serviceAccount: - name: collector - outputs: - - name: default-lokistack - type: lokiStack - lokiStack: - authentication: - token: - from: serviceAccount - target: - name: logging-loki - namespace: openshift-logging - tls: - ca: - key: service-ca.crt - configMapName: openshift-service-ca.crt - pipelines: - - name: default-logstore - inputRefs: - - application - - infrastructure - outputRefs: - - default-lokistack -``` - - -[NOTE] ----- -The dataModel field is optional and left unset (dataModel: "") by default. This allows the Cluster Logging Operator (CLO) to automatically select a data model. Currently, the CLO defaults to the ViaQ model when the field is unset, but this will change in future releases. Specifying dataModel: ViaQ ensures the configuration remains compatible if the default changes. ----- - -* Verify that logs are visible in the Log section of the Observe tab in the Red Hat OpenShift Container Platform web console. - -## Quick start with OpenTelemetry - - -[IMPORTANT] ----- -{FeatureName} is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. -For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. ----- - -To configure OTLP ingestion and enable the OpenTelemetry data model, follow these steps: - -* You have access to an Red Hat OpenShift Container Platform cluster with cluster-admin permissions. -* You have installed the OpenShift CLI (`oc`). -* You have access to a supported object store. 
For example, AWS S3, Google Cloud Storage, Azure, Swift, Minio, or OpenShift Data Foundation. - -1. Install the Red Hat OpenShift Logging Operator, Loki Operator, and Cluster Observability Operator (COO) from OperatorHub. -2. Create a LokiStack custom resource (CR) in the openshift-logging namespace: - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: - managementState: Managed - size: 1x.extra-small - storage: - schemas: - - effectiveDate: '2024-10-01' - version: v13 - secret: - name: logging-loki-s3 - type: s3 - storageClassName: gp3-csi - tenants: - mode: openshift-logging -``` - - -[NOTE] ----- -Ensure that the logging-loki-s3 secret is created beforehand. The contents of this secret vary depending on the object storage in use. For more information, see "Secrets and TLS Configuration". ----- -3. Create a service account for the collector: - -```terminal -$ oc create sa collector -n openshift-logging -``` - -4. Allow the collector's service account to write data to the LokiStack CR: - -```terminal -$ oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector -n openshift-logging -``` - - -[NOTE] ----- -The ClusterRole resource is created automatically during the Cluster Logging Operator installation and does not need to be created manually. ----- -5. To collect logs, use the service account of the collector by running the following commands: - -```terminal -$ oc adm policy add-cluster-role-to-user collect-application-logs -z collector -n openshift-logging -``` - - -```terminal -$ oc adm policy add-cluster-role-to-user collect-audit-logs -z collector -n openshift-logging -``` - - -```terminal -$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector -n openshift-logging -``` - - -[NOTE] ----- -The example binds the collector to all three roles (application, infrastructure, and audit). By default, only application and infrastructure logs are collected. To collect audit logs, update your ClusterLogForwarder configuration to include them. Assign roles based on the specific log types required for your environment. ----- -6. Create a UIPlugin CR to enable the Log section in the Observe tab: - -```yaml -apiVersion: observability.openshift.io/v1alpha1 -kind: UIPlugin -metadata: - name: logging -spec: - type: Logging - logging: - lokiStack: - name: logging-loki -``` - -7. Create a ClusterLogForwarder CR to configure log forwarding: - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - name: collector - namespace: openshift-logging - annotations: - observability.openshift.io/tech-preview-otlp-output: "enabled" 1 -spec: - serviceAccount: - name: collector - outputs: - - name: loki-otlp - type: lokiStack 2 - lokiStack: - target: - name: logging-loki - namespace: openshift-logging - dataModel: Otel 3 - authentication: - token: - from: serviceAccount - tls: - ca: - key: service-ca.crt - configMapName: openshift-service-ca.crt - pipelines: - - name: my-pipeline - inputRefs: - - application - - infrastructure - outputRefs: - - loki-otlp -``` - -Use the annotation to enable the Otel data model, which is a Technology Preview feature. -Define the output type as lokiStack. -Specifies the OpenTelemetry data model. - -[NOTE] ----- -You cannot use lokiStack.labelKeys when dataModel is Otel. To achieve similar functionality when dataModel is Otel, refer to "Configuring LokiStack for OTLP data ingestion". 
----- - -* To verify that OTLP is functioning correctly, complete the following steps: -1. In the OpenShift web console, click Observe -> OpenShift Logging -> LokiStack -> Writes. -2. Check the Distributor - Structured Metadata section. \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.1/log6x-about-logging.txt b/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.1/log6x-about-logging.txt new file mode 100644 index 00000000..eae2690e --- /dev/null +++ b/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.1/log6x-about-logging.txt @@ -0,0 +1,16 @@ +# About Logging + + + +As a cluster administrator, you can deploy logging on an Red Hat OpenShift Container Platform cluster, and use it to collect and aggregate node system audit logs, application container logs, and infrastructure logs. + +You can use logging to perform the following tasks: + +* Forward logs to your chosen log outputs, including on-cluster, Red Hat managed log storage. +* Visualize your log data in the Red Hat OpenShift Container Platform web console. + + +[NOTE] +---- +Because logging releases on a different cadence from Red Hat OpenShift Container Platform, the logging 6 documentation is available as a separate documentation set at Red Hat OpenShift Logging. +---- \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.1/log6x-clf-6.1.txt b/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.1/log6x-clf-6.1.txt deleted file mode 100644 index eee9c76a..00000000 --- a/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.1/log6x-clf-6.1.txt +++ /dev/null @@ -1,818 +0,0 @@ -# Configuring log forwarding - - -The ClusterLogForwarder (CLF) allows users to configure forwarding of logs to various destinations. It provides a flexible way to select log messages from different sources, send them through a pipeline that can transform or filter them, and forward them to one or more outputs. -* Selects log messages using inputs -* Forwards logs to external destinations using outputs -* Filters, transforms, and drops log messages using filters -* Defines log forwarding pipelines connecting inputs, filters and outputs - -# Setting up log collection - -This release of Cluster Logging requires administrators to explicitly grant log collection permissions to the service account associated with ClusterLogForwarder. This was not required in previous releases for the legacy logging scenario consisting of a ClusterLogging and, optionally, a ClusterLogForwarder.logging.openshift.io resource. - -The Red Hat OpenShift Logging Operator provides collect-audit-logs, collect-application-logs, and collect-infrastructure-logs cluster roles, which enable the collector to collect audit logs, application logs, and infrastructure logs respectively. - -Setup log collection by binding the required cluster roles to your service account. 
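-
-For example, assuming a service account named collector in the openshift-logging namespace (an illustrative name that matches the quick start examples in this document), the bindings look like the following commands. The next sections show the equivalent commands for the legacy logcollector service account and for newly created service accounts.
-
-```terminal
-$ oc adm policy add-cluster-role-to-user collect-application-logs -z collector -n openshift-logging
-$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector -n openshift-logging
-$ oc adm policy add-cluster-role-to-user collect-audit-logs -z collector -n openshift-logging
-```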
- -## Legacy service accounts - -To use the existing legacy service account logcollector, create the following ClusterRoleBinding: - - -```terminal -$ oc adm policy add-cluster-role-to-user collect-application-logs system:serviceaccount:openshift-logging:logcollector -``` - - - -```terminal -$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs system:serviceaccount:openshift-logging:logcollector -``` - - -Additionally, create the following ClusterRoleBinding if collecting audit logs: - - -```terminal -$ oc adm policy add-cluster-role-to-user collect-audit-logs system:serviceaccount:openshift-logging:logcollector -``` - - -## Creating service accounts - -* The Red Hat OpenShift Logging Operator is installed in the openshift-logging namespace. -* You have administrator permissions. - -1. Create a service account for the collector. If you want to write logs to storage that requires a token for authentication, you must include a token in the service account. -2. Bind the appropriate cluster roles to the service account: -Example binding command - -```terminal -$ oc adm policy add-cluster-role-to-user system:serviceaccount:: -``` - - -### Cluster Role Binding for your Service Account - -The role_binding.yaml file binds the ClusterLogging operator’s ClusterRole to a specific ServiceAccount, allowing it to manage Kubernetes resources cluster-wide. - - -```yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRoleBinding -metadata: - name: manager-rolebinding -roleRef: 1 - apiGroup: rbac.authorization.k8s.io 2 - kind: ClusterRole 3 - name: cluster-logging-operator 4 -subjects: 5 - - kind: ServiceAccount 6 - name: cluster-logging-operator 7 - namespace: openshift-logging 8 -``` - - -roleRef: References the ClusterRole to which the binding applies. -apiGroup: Indicates the RBAC API group, specifying that the ClusterRole is part of Kubernetes' RBAC system. -kind: Specifies that the referenced role is a ClusterRole, which applies cluster-wide. -name: The name of the ClusterRole being bound to the ServiceAccount, here cluster-logging-operator. -subjects: Defines the entities (users or service accounts) that are being granted the permissions from the ClusterRole. -kind: Specifies that the subject is a ServiceAccount. -Name: The name of the ServiceAccount being granted the permissions. -namespace: Indicates the namespace where the ServiceAccount is located. - -### Writing application logs - -The write-application-logs-clusterrole.yaml file defines a ClusterRole that grants permissions to write application logs to the Loki logging application. - - -```yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole -metadata: - name: cluster-logging-write-application-logs -rules: 1 - - apiGroups: 2 - - loki.grafana.com 3 - resources: 4 - - application 5 - resourceNames: 6 - - logs 7 - verbs: 8 - - create 9 -``` - - -rules: Specifies the permissions granted by this ClusterRole. -apiGroups: Refers to the API group loki.grafana.com, which relates to the Loki logging system. -loki.grafana.com: The API group for managing Loki-related resources. -resources: The resource type that the ClusterRole grants permission to interact with. -application: Refers to the application resources within the Loki logging system. -resourceNames: Specifies the names of resources that this role can manage. -logs: Refers to the log resources that can be created. -verbs: The actions allowed on the resources. -create: Grants permission to create new logs in the Loki system. 
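-
-On its own, this ClusterRole grants nothing until it is bound to a subject. The following is a minimal sketch of such a binding; the binding name and the logging-collector service account are assumptions taken from other examples in this document, which more commonly bind the aggregate logging-collector-logs-writer role for the same purpose:
-
-```yaml
-apiVersion: rbac.authorization.k8s.io/v1
-kind: ClusterRoleBinding
-metadata:
-  name: collector-write-application-logs
-roleRef:
-  apiGroup: rbac.authorization.k8s.io
-  kind: ClusterRole
-  name: cluster-logging-write-application-logs
-subjects:
-- kind: ServiceAccount
-  name: logging-collector
-  namespace: openshift-logging
-```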
- -### Writing audit logs - -The write-audit-logs-clusterrole.yaml file defines a ClusterRole that grants permissions to create audit logs in the Loki logging system. - - -```yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole -metadata: - name: cluster-logging-write-audit-logs -rules: 1 - - apiGroups: 2 - - loki.grafana.com 3 - resources: 4 - - audit 5 - resourceNames: 6 - - logs 7 - verbs: 8 - - create 9 -``` - - -rules: Defines the permissions granted by this ClusterRole. -apiGroups: Specifies the API group loki.grafana.com. -loki.grafana.com: The API group responsible for Loki logging resources. -resources: Refers to the resource type this role manages, in this case, audit. -audit: Specifies that the role manages audit logs within Loki. -resourceNames: Defines the specific resources that the role can access. -logs: Refers to the logs that can be managed under this role. -verbs: The actions allowed on the resources. -create: Grants permission to create new audit logs. - -### Writing infrastructure logs - -The write-infrastructure-logs-clusterrole.yaml file defines a ClusterRole that grants permission to create infrastructure logs in the Loki logging system. - - -```yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole -metadata: - name: cluster-logging-write-infrastructure-logs -rules: 1 - - apiGroups: 2 - - loki.grafana.com 3 - resources: 4 - - infrastructure 5 - resourceNames: 6 - - logs 7 - verbs: 8 - - create 9 -``` - - -rules: Specifies the permissions this ClusterRole grants. -apiGroups: Specifies the API group for Loki-related resources. -loki.grafana.com: The API group managing the Loki logging system. -resources: Defines the resource type that this role can interact with. -infrastructure: Refers to infrastructure-related resources that this role manages. -resourceNames: Specifies the names of resources this role can manage. -logs: Refers to the log resources related to infrastructure. -verbs: The actions permitted by this role. -create: Grants permission to create infrastructure logs in the Loki system. - -### ClusterLogForwarder editor role - -The clusterlogforwarder-editor-role.yaml file defines a ClusterRole that allows users to manage ClusterLogForwarders in OpenShift. - - -```yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole -metadata: - name: clusterlogforwarder-editor-role -rules: 1 - - apiGroups: 2 - - observability.openshift.io 3 - resources: 4 - - clusterlogforwarders 5 - verbs: 6 - - create 7 - - delete 8 - - get 9 - - list 10 - - patch 11 - - update 12 - - watch 13 -``` - - -rules: Specifies the permissions this ClusterRole grants. -apiGroups: Refers to the OpenShift-specific API group -obervability.openshift.io: The API group for managing observability resources, like logging. -resources: Specifies the resources this role can manage. -clusterlogforwarders: Refers to the log forwarding resources in OpenShift. -verbs: Specifies the actions allowed on the ClusterLogForwarders. -create: Grants permission to create new ClusterLogForwarders. -delete: Grants permission to delete existing ClusterLogForwarders. -get: Grants permission to retrieve information about specific ClusterLogForwarders. -list: Allows listing all ClusterLogForwarders. -patch: Grants permission to partially modify ClusterLogForwarders. -update: Grants permission to update existing ClusterLogForwarders. -watch: Grants permission to monitor changes to ClusterLogForwarders. 
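For example, to allow a hypothetical user named logging-admin to manage ClusterLogForwarder resources cluster-wide, you might bind the role as follows:

```terminal
$ oc adm policy add-cluster-role-to-user clusterlogforwarder-editor-role logging-admin
```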
- -# Modifying log level in collector - -To modify the log level in the collector, you can set the observability.openshift.io/log-level annotation to trace, debug, info, warn, error, and off. - - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - name: collector - annotations: - observability.openshift.io/log-level: debug -# ... -``` - - -# Managing the Operator - -The ClusterLogForwarder resource has a managementState field that controls whether the operator actively manages its resources or leaves them Unmanaged: - -Managed:: (default) The operator will drive the logging resources to match the desired state in the CLF spec. -Unmanaged:: The operator will not take any action related to the logging components. - -This allows administrators to temporarily pause log forwarding by setting managementState to Unmanaged. - -# Structure of the ClusterLogForwarder - -The CLF has a spec section that contains the following key components: - -Inputs:: Select log messages to be forwarded. Built-in input types application, infrastructure and audit forward logs from different parts of the cluster. You can also define custom inputs. -Outputs:: Define destinations to forward logs to. Each output has a unique name and type-specific configuration. -Pipelines:: Define the path logs take from inputs, through filters, to outputs. Pipelines have a unique name and consist of a list of input, output and filter names. -Filters:: Transform or drop log messages in the pipeline. Users can define filters that match certain log fields and drop or modify the messages. Filters are applied in the order specified in the pipeline. - -## Inputs - -Inputs are configured in an array under spec.inputs. There are three built-in input types: - -application:: Selects logs from all application containers, excluding those in infrastructure namespaces. -infrastructure:: Selects logs from nodes and from infrastructure components running in the following namespaces: -* default -* kube -* openshift -* Containing the kube- or openshift- prefix -audit:: Selects logs from the OpenShift API server audit logs, Kubernetes API server audit logs, ovn audit logs, and node audit logs from auditd. - -Users can define custom inputs of type application that select logs from specific namespaces or using pod labels. - -## Outputs - -Outputs are configured in an array under spec.outputs. Each output must have a unique name and a type. Supported types are: - -azureMonitor:: Forwards logs to Azure Monitor. -cloudwatch:: Forwards logs to AWS CloudWatch. -googleCloudLogging:: Forwards logs to Google Cloud Logging. -http:: Forwards logs to a generic HTTP endpoint. -kafka:: Forwards logs to a Kafka broker. -loki:: Forwards logs to a Loki logging backend. -lokistack:: Forwards logs to the logging supported combination of Loki and web proxy with Red Hat OpenShift Container Platform authentication integration. LokiStack's proxy uses Red Hat OpenShift Container Platform authentication to enforce multi-tenancy -otlp:: Forwards logs using the OpenTelemetry Protocol. -splunk:: Forwards logs to Splunk. -syslog:: Forwards logs to an external syslog server. - -Each output type has its own configuration fields. - -# Configuring OTLP output - -Cluster administrators can use the OpenTelemetry Protocol (OTLP) output to collect and forward logs to OTLP receivers. The OTLP output uses the specification defined by the OpenTelemetry Observability framework to send data over HTTP with JSON encoding. 
- - -[IMPORTANT] ----- -{FeatureName} is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. -For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. ----- - -* Create or edit a ClusterLogForwarder custom resource (CR) to enable forwarding using OTLP by adding the following annotation: -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - annotations: - observability.openshift.io/tech-preview-otlp-output: "enabled" 1 - name: clf-otlp -spec: - serviceAccount: - name: - outputs: - - name: otlp - type: otlp - otlp: - tuning: - compression: gzip - deliveryMode: AtLeastOnce - maxRetryDuration: 20 - maxWrite: 10M - minRetryDuration: 5 - url: 2 - pipelines: - - inputRefs: - - application - - infrastructure - - audit - name: otlp-logs - outputRefs: - - otlp -``` - -Use this annotation to enable the OpenTelemetry Protocol (OTLP) output, which is a Technology Preview feature. -This URL must be absolute and is a placeholder for the OTLP endpoint where logs are sent. - - -[NOTE] ----- -The OTLP output uses the OpenTelemetry data model, which is different from the ViaQ data model that is used by other output types. It adheres to the OTLP using OpenTelemetry Semantic Conventions defined by the OpenTelemetry Observability framework. ----- - -## Pipelines - -Pipelines are configured in an array under spec.pipelines. Each pipeline must have a unique name and consists of: - -inputRefs:: Names of inputs whose logs should be forwarded to this pipeline. -outputRefs:: Names of outputs to send logs to. -filterRefs:: (optional) Names of filters to apply. - -The order of filterRefs matters, as they are applied sequentially. Earlier filters can drop messages that will not be processed by later filters. - -## Filters - -Filters are configured in an array under spec.filters. They can match incoming log messages based on the value of structured fields and modify or drop them. - -Administrators can configure the following types of filters: - -## Enabling multi-line exception detection - -Enables multi-line error detection of container logs. - - -[WARNING] ----- -Enabling this feature could have performance implications and may require additional computing resources or alternate logging solutions. ----- - -Log parsers often incorrectly identify separate lines of the same exception as separate exceptions. This leads to extra log entries and an incomplete or inaccurate view of the traced information. - - -```java -java.lang.NullPointerException: Cannot invoke "String.toString()" because "" is null - at testjava.Main.handle(Main.java:47) - at testjava.Main.printMe(Main.java:19) - at testjava.Main.main(Main.java:10) -``` - - -* To enable logging to detect multi-line exceptions and reassemble them into a single log entry, ensure that the ClusterLogForwarder Custom Resource (CR) contains a detectMultilineErrors field under the .spec.filters. 
- - -```yaml -apiVersion: "observability.openshift.io/v1" -kind: ClusterLogForwarder -metadata: - name: - namespace: -spec: - serviceAccount: - name: - filters: - - name: - type: detectMultilineException - pipelines: - - inputRefs: - - - name: - filterRefs: - - - outputRefs: - - -``` - - -### Details - -When log messages appear as a consecutive sequence forming an exception stack trace, they are combined into a single, unified log record. The first log message’s content is replaced with the concatenated content of all the message fields in the sequence. - -The collector supports the following languages: - -* Java -* JS -* Ruby -* Python -* Golang -* PHP -* Dart - -## Configuring content filters to drop unwanted log records - -When the drop filter is configured, the log collector evaluates log streams according to the filters before forwarding. The collector drops unwanted log records that match the specified configuration. - -1. Add a configuration for a filter to the filters spec in the ClusterLogForwarder CR. - -The following example shows how to configure the ClusterLogForwarder CR to drop log records based on regular expressions: -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: -# ... -spec: - serviceAccount: - name: - filters: - - name: - type: drop 1 - drop: 2 - - test: 3 - - field: .kubernetes.labels."foo-bar/baz" 4 - matches: .+ 5 - - field: .kubernetes.pod_name - notMatches: "my-pod" 6 - pipelines: - - name: 7 - filterRefs: [""] -# ... -``` - -Specifies the type of filter. The drop filter drops log records that match the filter configuration. -Specifies configuration options for applying the drop filter. -Specifies the configuration for tests that are used to evaluate whether a log record is dropped. -* If all the conditions specified for a test are true, the test passes and the log record is dropped. -* When multiple tests are specified for the drop filter configuration, if any of the tests pass, the record is dropped. -* If there is an error evaluating a condition, for example, the field is missing from the log record being evaluated, that condition evaluates to false. -Specifies a dot-delimited field path, which is a path to a field in the log record. The path can contain alpha-numeric characters and underscores (a-zA-Z0-9_), for example, .kubernetes.namespace_name. If segments contain characters outside of this range, the segment must be in quotes, for example, .kubernetes.labels."foo.bar-bar/baz". You can include multiple field paths in a single test configuration, but they must all evaluate to true for the test to pass and the drop filter to be applied. -Specifies a regular expression. If log records match this regular expression, they are dropped. You can set either the matches or notMatches condition for a single field path, but not both. -Specifies a regular expression. If log records do not match this regular expression, they are dropped. You can set either the matches or notMatches condition for a single field path, but not both. -Specifies the pipeline that the drop filter is applied to. -2. Apply the ClusterLogForwarder CR by running the following command: - -```terminal -$ oc apply -f .yaml -``` - - -The following additional example shows how you can configure the drop filter to only keep higher priority log records: - - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: -# ... 
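# The 'important' filter below drops records whose .message does not contain 'critical' or 'error' (case-insensitive) and whose .level matches info or warning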
-spec: - serviceAccount: - name: - filters: - - name: important - type: drop - drop: - - test: - - field: .message - notMatches: "(?i)critical|error" - - field: .level - matches: "info|warning" -# ... -``` - - -In addition to including multiple field paths in a single test configuration, you can also include additional tests that are treated as OR checks. In the following example, records are dropped if either test configuration evaluates to true. However, for the second test configuration, both field specs must be true for it to be evaluated to true: - - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: -# ... -spec: - serviceAccount: - name: - filters: - - name: important - type: drop - drop: - - test: - - field: .kubernetes.namespace_name - matches: "^open" - - test: - - field: .log_type - matches: "application" - - field: .kubernetes.pod_name - notMatches: "my-pod" -# ... -``` - - -## Overview of API audit filter - -OpenShift API servers generate audit events for each API call, detailing the request, response, and the identity of the requester, leading to large volumes of data. The API Audit filter uses rules to enable the exclusion of non-essential events and the reduction of event size, facilitating a more manageable audit trail. Rules are checked in order, and checking stops at the first match. The amount of data that is included in an event is determined by the value of the level field: - -* None: The event is dropped. -* Metadata: Audit metadata is included, request and response bodies are removed. -* Request: Audit metadata and the request body are included, the response body is removed. -* RequestResponse: All data is included: metadata, request body and response body. The response body can be very large. For example, oc get pods -A generates a response body containing the YAML description of every pod in the cluster. - -The ClusterLogForwarder custom resource (CR) uses the same format as the standard Kubernetes audit policy, while providing the following additional functions: - -Wildcards:: Names of users, groups, namespaces, and resources can have a leading or trailing * asterisk character. For example, the namespace openshift-\* matches openshift-apiserver or openshift-authentication. Resource \*/status matches Pod/status or Deployment/status. -Default Rules:: Events that do not match any rule in the policy are filtered as follows: -* Read-only system events such as get, list, and watch are dropped. -* Service account write events that occur within the same namespace as the service account are dropped. -* All other events are forwarded, subject to any configured rate limits. - -To disable these defaults, either end your rules list with a rule that has only a level field or add an empty rule. - -Omit Response Codes:: A list of integer status codes to omit. You can drop events based on the HTTP status code in the response by using the OmitResponseCodes field, which lists HTTP status codes for which no events are created. The default value is [404, 409, 422, 429]. If the value is an empty list, [], then no status codes are omitted. - -The ClusterLogForwarder CR audit policy acts in addition to the Red Hat OpenShift Container Platform audit policy. The ClusterLogForwarder CR audit filter changes what the log collector forwards and provides the ability to filter by verb, user, group, namespace, or resource. You can create multiple filters to send different summaries of the same audit stream to different places. 
For example, you can send a detailed stream to the local cluster log store and a less detailed stream to a remote site. - - -[NOTE] ----- -You must have a cluster role collect-audit-logs to collect the audit logs. The following example provided is intended to illustrate the range of rules possible in an audit policy and is not a recommended configuration. ----- - - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - name: - namespace: -spec: - serviceAccount: - name: - pipelines: - - name: my-pipeline - inputRefs: audit 1 - filterRefs: my-policy 2 - filters: - - name: my-policy - type: kubeAPIAudit - kubeAPIAudit: - # Don't generate audit events for all requests in RequestReceived stage. - omitStages: - - "RequestReceived" - - rules: - # Log pod changes at RequestResponse level - - level: RequestResponse - resources: - - group: "" - resources: ["pods"] - - # Log "pods/log", "pods/status" at Metadata level - - level: Metadata - resources: - - group: "" - resources: ["pods/log", "pods/status"] - - # Don't log requests to a configmap called "controller-leader" - - level: None - resources: - - group: "" - resources: ["configmaps"] - resourceNames: ["controller-leader"] - - # Don't log watch requests by the "system:kube-proxy" on endpoints or services - - level: None - users: ["system:kube-proxy"] - verbs: ["watch"] - resources: - - group: "" # core API group - resources: ["endpoints", "services"] - - # Don't log authenticated requests to certain non-resource URL paths. - - level: None - userGroups: ["system:authenticated"] - nonResourceURLs: - - "/api*" # Wildcard matching. - - "/version" - - # Log the request body of configmap changes in kube-system. - - level: Request - resources: - - group: "" # core API group - resources: ["configmaps"] - # This rule only applies to resources in the "kube-system" namespace. - # The empty string "" can be used to select non-namespaced resources. - namespaces: ["kube-system"] - - # Log configmap and secret changes in all other namespaces at the Metadata level. - - level: Metadata - resources: - - group: "" # core API group - resources: ["secrets", "configmaps"] - - # Log all other resources in core and extensions at the Request level. - - level: Request - resources: - - group: "" # core API group - - group: "extensions" # Version of group should NOT be included. - - # A catch-all rule to log all other requests at the Metadata level. - - level: Metadata -``` - - -The log types that are collected. The value for this field can be audit for audit logs, application for application logs, infrastructure for infrastructure logs, or a named input that has been defined for your application. -The name of your audit policy. - -## Filtering application logs at input by including the label expressions or a matching label key and values - -You can include the application logs based on the label expressions or a matching label key and its values by using the input selector. - -1. Add a configuration for a filter to the input spec in the ClusterLogForwarder CR. - -The following example shows how to configure the ClusterLogForwarder CR to include logs based on label expressions or matched label key/values: -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -# ... 
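# The 'mylogs' input below collects application logs only from pods labeled env in (prod, qa), zone not in (east, west), app: one, and name: app1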
-spec: - serviceAccount: - name: - inputs: - - name: mylogs - application: - selector: - matchExpressions: - - key: env 1 - operator: In 2 - values: ["prod", "qa"] 3 - - key: zone - operator: NotIn - values: ["east", "west"] - matchLabels: 4 - app: one - name: app1 - type: application -# ... -``` - -Specifies the label key to match. -Specifies the operator. Valid values include: In, NotIn, Exists, and DoesNotExist. -Specifies an array of string values. If the operator value is either Exists or DoesNotExist, the value array must be empty. -Specifies an exact key or value mapping. -2. Apply the ClusterLogForwarder CR by running the following command: - -```terminal -$ oc apply -f .yaml -``` - - -## Configuring content filters to prune log records - -When the prune filter is configured, the log collector evaluates log streams according to the filters before forwarding. The collector prunes log records by removing low value fields such as pod annotations. - -1. Add a configuration for a filter to the prune spec in the ClusterLogForwarder CR. - -The following example shows how to configure the ClusterLogForwarder CR to prune log records based on field paths: - -[IMPORTANT] ----- -If both are specified, records are pruned based on the notIn array first, which takes precedence over the in array. After records have been pruned by using the notIn array, they are then pruned by using the in array. ----- -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: -# ... -spec: - serviceAccount: - name: - filters: - - name: - type: prune 1 - prune: 2 - in: [.kubernetes.annotations, .kubernetes.namespace_id] 3 - notIn: [.kubernetes,.log_type,.message,."@timestamp"] 4 - pipelines: - - name: 5 - filterRefs: [""] -# ... -``` - -Specify the type of filter. The prune filter prunes log records by configured fields. -Specify configuration options for applying the prune filter. The in and notIn fields are specified as arrays of dot-delimited field paths, which are paths to fields in log records. These paths can contain alpha-numeric characters and underscores (a-zA-Z0-9_), for example, .kubernetes.namespace_name. If segments contain characters outside of this range, the segment must be in quotes, for example, .kubernetes.labels."foo.bar-bar/baz". -Optional: Any fields that are specified in this array are removed from the log record. -Optional: Any fields that are not specified in this array are removed from the log record. -Specify the pipeline that the prune filter is applied to. - -[NOTE] ----- -The filters exempts the log_type, .log_source, and .message fields. ----- -2. Apply the ClusterLogForwarder CR by running the following command: - -```terminal -$ oc apply -f .yaml -``` - - -# Filtering the audit and infrastructure log inputs by source - -You can define the list of audit and infrastructure sources to collect the logs by using the input selector. - -1. Add a configuration to define the audit and infrastructure sources in the ClusterLogForwarder CR. - -The following example shows how to configure the ClusterLogForwarder CR to define audit and infrastructure sources: -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -# ... -spec: - serviceAccount: - name: - inputs: - - name: mylogs1 - type: infrastructure - infrastructure: - sources: 1 - - node - - name: mylogs2 - type: audit - audit: - sources: 2 - - kubeAPI - - openshiftAPI - - ovn -# ... 
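# auditd is not listed above, so node auditd logs are not collected by the 'mylogs2' input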
-``` - -Specifies the list of infrastructure sources to collect. The valid sources include: -* node: Journal log from the node -* container: Logs from the workloads deployed in the namespaces -Specifies the list of audit sources to collect. The valid sources include: -* kubeAPI: Logs from the Kubernetes API servers -* openshiftAPI: Logs from the OpenShift API servers -* auditd: Logs from a node auditd service -* ovn: Logs from an open virtual network service -2. Apply the ClusterLogForwarder CR by running the following command: - -```terminal -$ oc apply -f .yaml -``` - - -# Filtering application logs at input by including or excluding the namespace or container name - -You can include or exclude the application logs based on the namespace and container name by using the input selector. - -1. Add a configuration to include or exclude the namespace and container names in the ClusterLogForwarder CR. - -The following example shows how to configure the ClusterLogForwarder CR to include or exclude namespaces and container names: -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -# ... -spec: - serviceAccount: - name: - inputs: - - name: mylogs - application: - includes: - - namespace: "my-project" 1 - container: "my-container" 2 - excludes: - - container: "other-container*" 3 - namespace: "other-namespace" 4 - type: application -# ... -``` - -Specifies that the logs are only collected from these namespaces. -Specifies that the logs are only collected from these containers. -Specifies the pattern of namespaces to ignore when collecting the logs. -Specifies the set of containers to ignore when collecting the logs. - -[NOTE] ----- -The excludes field takes precedence over the includes field. ----- -2. Apply the ClusterLogForwarder CR by running the following command: - -```terminal -$ oc apply -f .yaml -``` diff --git a/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.1/log6x-configuring-lokistack-otlp-6.1.txt b/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.1/log6x-configuring-lokistack-otlp-6.1.txt deleted file mode 100644 index d9bc000e..00000000 --- a/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.1/log6x-configuring-lokistack-otlp-6.1.txt +++ /dev/null @@ -1,180 +0,0 @@ -# OTLP data ingestion in Loki - - -You can use an API endpoint by using the OpenTelemetry Protocol (OTLP) with Logging 6.1. As OTLP is a standardized format not specifically designed for Loki, OTLP requires an additional Loki configuration to map data format of OpenTelemetry to data model of Loki. OTLP lacks concepts such as stream labels or structured metadata. Instead, OTLP provides metadata about log entries as attributes, grouped into the following three categories: -* Resource -* Scope -* Log -You can set metadata for multiple entries simultaneously or individually as needed. - -# Configuring LokiStack for OTLP data ingestion - - -[IMPORTANT] ----- -{FeatureName} is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. -For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. 
----- - -To configure a LokiStack custom resource (CR) for OTLP ingestion, follow these steps: - -* Ensure that your Loki setup supports structured metadata, introduced in schema version 13 to enable OTLP log ingestion. - -1. Set the schema version: -* When creating a new LokiStack CR, set version: v13 in the storage schema configuration. - -[NOTE] ----- -For existing configurations, add a new schema entry with version: v13 and an effectiveDate in the future. For more information on updating schema versions, see Upgrading Schemas (Grafana documentation). ----- -2. Configure the storage schema as follows: -Example configure storage schema - -```yaml -# ... -spec: - storage: - schemas: - - version: v13 - effectiveDate: 2024-10-25 -``` - - -Once the effectiveDate has passed, the v13 schema takes effect, enabling your LokiStack to store structured metadata. - -# Attribute mapping - -When you set the Loki Operator to the openshift-logging mode, Loki Operator automatically applies a default set of attribute mappings. These mappings align specific OTLP attributes with stream labels and structured metadata of Loki. - -For typical setups, these default mappings are sufficient. However, you might need to customize attribute mapping in the following cases: - -* Using a custom collector: If your setup includes a custom collector that generates additional attributes, consider customizing the mapping to ensure these attributes are retained in Loki. -* Adjusting attribute detail levels: If the default attribute set is more detailed than necessary, you can reduce it to essential attributes only. This can avoid excessive data storage and streamline the logging process. - - -[IMPORTANT] ----- -Attributes that are not mapped to either stream labels or structured metadata are not stored in Loki. ----- - -## Custom attribute mapping for OpenShift - -When using the Loki Operator in openshift-logging mode, attribute mapping follow OpenShift default values, but you can configure custom mappings to adjust default values. -In the openshift-logging mode, you can configure custom attribute mappings globally for all tenants or for individual tenants as needed. When you define custom mappings, they are appended to the OpenShift default values. If you do not need default labels, you can disable them in the tenant configuration. - - -[NOTE] ----- -A major difference between the Loki Operator and Loki lies in inheritance handling. Loki copies only default_resource_attributes_as_index_labels to tenants by default, while the Loki Operator applies the entire global configuration to each tenant in the openshift-logging mode. ----- - -Within LokiStack, attribute mapping configuration is managed through the limits setting. See the following example LokiStack configuration: - - -```yaml -# ... -spec: - limits: - global: - otlp: {} 1 - tenants: - application: - otlp: {} 2 -``` - - -Defines global OTLP attribute configuration. -OTLP attribute configuration for the application tenant within openshift-logging mode. - - -[NOTE] ----- -Both global and per-tenant OTLP configurations can map attributes to stream labels or structured metadata. At least one stream label is required to save a log entry to Loki storage, so ensure this configuration meets that requirement. 
----- - -Stream labels derive only from resource-level attributes, which the LokiStack resource structure reflects: - - -```yaml -spec: - limits: - global: - otlp: - streamLabels: - resourceAttributes: - - name: "k8s.namespace.name" - - name: "k8s.pod.name" - - name: "k8s.container.name" -``` - - -Structured metadata, in contrast, can be generated from resource, scope or log-level attributes: - - -```yaml -# ... -spec: - limits: - global: - otlp: - streamLabels: -# ... - structuredMetadata: - resourceAttributes: - - name: "process.command_line" - - name: "k8s\\.pod\\.labels\\..+" - regex: true - scopeAttributes: - - name: "service.name" - logAttributes: - - name: "http.route" -``` - - - -[TIP] ----- -Use regular expressions by setting regex: true for attributes names when mapping similar attributes in Loki. ----- - - -[IMPORTANT] ----- -Avoid using regular expressions for stream labels, as this can increase data volume. ----- - -## Customizing OpenShift defaults - -In openshift-logging mode, certain attributes are required and cannot be removed from the configuration due to their role in OpenShift functions. Other attributes, labeled recommended, might be disabled if performance is impacted. - -When using the openshift-logging mode without custom attributes, you can achieve immediate compatibility with OpenShift tools. If additional attributes are needed as stream labels or structured metadata, use custom configuration. Custom configurations can merge with default configurations. - -## Removing recommended attributes - -To reduce default attributes in openshift-logging mode, disable recommended attributes: - - -```yaml -# ... -spec: - tenants: - mode: openshift-logging - openshift: - otlp: - disableRecommendedAttributes: true 1 -``` - - -Set disableRecommendedAttributes: true to remove recommended attributes, which limits default attributes to the required attributes. - - -[NOTE] ----- -This option is beneficial if the default attributes causes performance or storage issues. This setting might negatively impact query performance, as it removes default stream labels. You should pair this option with a custom attribute configuration to retain attributes essential for queries. ----- - -# Additional resources - -* Loki labels -* Structured metadata -* OpenTelemetry attribute \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.1/log6x-loki-6.1.txt b/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.1/log6x-loki-6.1.txt deleted file mode 100644 index 620da8db..00000000 --- a/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.1/log6x-loki-6.1.txt +++ /dev/null @@ -1,764 +0,0 @@ -# Storing logs with LokiStack - - -You can configure a LokiStack CR to store application, audit, and infrastructure-related logs. -Loki is a horizontally scalable, highly available, multi-tenant log aggregation system offered as a GA log store for logging for Red Hat OpenShift that can be visualized with the OpenShift Observability UI. The Loki configuration provided by OpenShift Logging is a short-term log store designed to enable users to perform fast troubleshooting with the collected logs. For that purpose, the logging for Red Hat OpenShift configuration of Loki has short-term storage, and is optimized for very recent queries. - -[IMPORTANT] ----- -For long-term storage or queries over a long time period, users should look to log stores external to their cluster. 
Loki sizing is only tested and supported for short term storage, for a maximum of 30 days. ----- - -# Loki deployment sizing - -Sizing for Loki follows the format of 1x. where the value 1x is number of instances and specifies performance capabilities. - -The 1x.pico configuration defines a single Loki deployment with minimal resource and limit requirements, offering high availability (HA) support for all Loki components. This configuration is suited for deployments that do not require a single replication factor or auto-compaction. - -Disk requests are similar across size configurations, allowing customers to test different sizes to determine the best fit for their deployment needs. - - -[IMPORTANT] ----- -It is not possible to change the number 1x for the deployment size. ----- - - - -# Prerequisites - -* You have installed the Loki Operator by using the CLI or web console. -* You have a serviceAccount in the same namespace in which you create the ClusterLogForwarder. -* The serviceAccount is assigned collect-audit-logs, collect-application-logs, and collect-infrastructure-logs cluster roles. - -# Core Setup and Configuration - -Role-based access controls, basic monitoring, and pod placement to deploy Loki. - -# Authorizing LokiStack rules RBAC permissions - -Administrators can allow users to create and manage their own alerting and recording rules by binding cluster roles to usernames. -Cluster roles are defined as ClusterRole objects that contain necessary role-based access control (RBAC) permissions for users. - -The following cluster roles for alerting and recording rules are available for LokiStack: - - - -## Examples - -To apply cluster roles for a user, you must bind an existing cluster role to a specific username. - -Cluster roles can be cluster or namespace scoped, depending on which type of role binding you use. -When a RoleBinding object is used, as when using the oc adm policy add-role-to-user command, the cluster role only applies to the specified namespace. -When a ClusterRoleBinding object is used, as when using the oc adm policy add-cluster-role-to-user command, the cluster role applies to all namespaces in the cluster. - -The following example command gives the specified user create, read, update and delete (CRUD) permissions for alerting rules in a specific namespace in the cluster: - - -```terminal -$ oc adm policy add-role-to-user alertingrules.loki.grafana.com-v1-admin -n -``` - - -The following command gives the specified user administrator permissions for alerting rules in all namespaces: - - -```terminal -$ oc adm policy add-cluster-role-to-user alertingrules.loki.grafana.com-v1-admin -``` - - -# Creating a log-based alerting rule with Loki - -The AlertingRule CR contains a set of specifications and webhook validation definitions to declare groups of alerting rules for a single LokiStack instance. In addition, the webhook validation definition provides support for rule validation conditions: - -* If an AlertingRule CR includes an invalid interval period, it is an invalid alerting rule -* If an AlertingRule CR includes an invalid for period, it is an invalid alerting rule. -* If an AlertingRule CR includes an invalid LogQL expr, it is an invalid alerting rule. -* If an AlertingRule CR includes two groups with the same name, it is an invalid alerting rule. -* If none of the above applies, an alerting rule is considered valid. - - - -1. 
Create an AlertingRule custom resource (CR): -Example infrastructure AlertingRule CR - -```yaml - apiVersion: loki.grafana.com/v1 - kind: AlertingRule - metadata: - name: loki-operator-alerts - namespace: openshift-operators-redhat 1 - labels: 2 - openshift.io/: "true" - spec: - tenantID: "infrastructure" 3 - groups: - - name: LokiOperatorHighReconciliationError - rules: - - alert: HighPercentageError - expr: | 4 - sum(rate({kubernetes_namespace_name="openshift-operators-redhat", kubernetes_pod_name=~"loki-operator-controller-manager.*"} |= "error" [1m])) by (job) - / - sum(rate({kubernetes_namespace_name="openshift-operators-redhat", kubernetes_pod_name=~"loki-operator-controller-manager.*"}[1m])) by (job) - > 0.01 - for: 10s - labels: - severity: critical 5 - annotations: - summary: High Loki Operator Reconciliation Errors 6 - description: High Loki Operator Reconciliation Errors 7 -``` - -The namespace where this AlertingRule CR is created must have a label matching the LokiStack spec.rules.namespaceSelector definition. -The labels block must match the LokiStack spec.rules.selector definition. -AlertingRule CRs for infrastructure tenants are only supported in the openshift-*, kube-\*, or default namespaces. -The value for kubernetes_namespace_name: must match the value for metadata.namespace. -The value of this mandatory field must be critical, warning, or info. -This field is mandatory. -This field is mandatory. -Example application AlertingRule CR - -```yaml - apiVersion: loki.grafana.com/v1 - kind: AlertingRule - metadata: - name: app-user-workload - namespace: app-ns 1 - labels: 2 - openshift.io/: "true" - spec: - tenantID: "application" - groups: - - name: AppUserWorkloadHighError - rules: - - alert: - expr: | 3 - sum(rate({kubernetes_namespace_name="app-ns", kubernetes_pod_name=~"podName.*"} |= "error" [1m])) by (job) - for: 10s - labels: - severity: critical 4 - annotations: - summary: 5 - description: 6 -``` - -The namespace where this AlertingRule CR is created must have a label matching the LokiStack spec.rules.namespaceSelector definition. -The labels block must match the LokiStack spec.rules.selector definition. -Value for kubernetes_namespace_name: must match the value for metadata.namespace. -The value of this mandatory field must be critical, warning, or info. -The value of this mandatory field is a summary of the rule. -The value of this mandatory field is a detailed description of the rule. -2. Apply the AlertingRule CR: - -```terminal -$ oc apply -f .yaml -``` - - -# Configuring Loki to tolerate memberlist creation failure - -In an Red Hat OpenShift Container Platform cluster, administrators generally use a non-private IP network range. As a result, the LokiStack memberlist configuration fails because, by default, it only uses private IP networks. - -As an administrator, you can select the pod network for the memberlist configuration. You can modify the LokiStack custom resource (CR) to use the podIP address in the hashRing spec. To configure the LokiStack CR, use the following command: - - -```terminal -$ oc patch LokiStack logging-loki -n openshift-logging --type=merge -p '{"spec": {"hashRing":{"memberlist":{"instanceAddrType":"podIP"},"type":"memberlist"}}}' -``` - - - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: -# ... - hashRing: - type: memberlist - memberlist: - instanceAddrType: podIP -# ... 
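# instanceAddrType: podIP makes memberlist advertise the pod network address instead of requiring a private-range node IP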
-``` - - -# Enabling stream-based retention with Loki - -You can configure retention policies based on log streams. Rules for these may be set globally, per-tenant, or both. If you configure both, tenant rules apply before global rules. - - -[IMPORTANT] ----- -If there is no retention period defined on the s3 bucket or in the LokiStack custom resource (CR), then the logs are not pruned and they stay in the s3 bucket forever, which might fill up the s3 storage. ----- - - -[NOTE] ----- -Schema v13 is recommended. ----- - -1. Create a LokiStack CR: -* Enable stream-based retention globally as shown in the following example: -Example global stream-based retention for AWS - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: - limits: - global: 1 - retention: 2 - days: 20 - streams: - - days: 4 - priority: 1 - selector: '{kubernetes_namespace_name=~"test.+"}' 3 - - days: 1 - priority: 1 - selector: '{log_type="infrastructure"}' - managementState: Managed - replicationFactor: 1 - size: 1x.small - storage: - schemas: - - effectiveDate: "2020-10-11" - version: v13 - secret: - name: logging-loki-s3 - type: aws - storageClassName: gp3-csi - tenants: - mode: openshift-logging -``` - -Sets retention policy for all log streams. Note: This field does not impact the retention period for stored logs in object storage. -Retention is enabled in the cluster when this block is added to the CR. -Contains the LogQL query used to define the log stream.spec: -limits: -* Enable stream-based retention per-tenant basis as shown in the following example: -Example per-tenant stream-based retention for AWS - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: - limits: - global: - retention: - days: 20 - tenants: 1 - application: - retention: - days: 1 - streams: - - days: 4 - selector: '{kubernetes_namespace_name=~"test.+"}' 2 - infrastructure: - retention: - days: 5 - streams: - - days: 1 - selector: '{kubernetes_namespace_name=~"openshift-cluster.+"}' - managementState: Managed - replicationFactor: 1 - size: 1x.small - storage: - schemas: - - effectiveDate: "2020-10-11" - version: v13 - secret: - name: logging-loki-s3 - type: aws - storageClassName: gp3-csi - tenants: - mode: openshift-logging -``` - -Sets retention policy by tenant. Valid tenant types are application, audit, and infrastructure. -Contains the LogQL query used to define the log stream. -2. Apply the LokiStack CR: - -```terminal -$ oc apply -f .yaml -``` - - -# Loki pod placement - -You can control which nodes the Loki pods run on, and prevent other workloads from using those nodes, by using tolerations or node selectors on the pods. - -You can apply tolerations to the log store pods with the LokiStack custom resource (CR) and apply taints to a node with the node specification. A taint on a node is a key:value pair that instructs the node to repel all pods that do not allow the taint. Using a specific key:value pair that is not on other pods ensures that only the log store pods can run on that node. - - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: -# ... 
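# Each Loki component below uses the same node selector to schedule only onto nodes labeled node-role.kubernetes.io/infra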
- template: - compactor: 1 - nodeSelector: - node-role.kubernetes.io/infra: "" 2 - distributor: - nodeSelector: - node-role.kubernetes.io/infra: "" - gateway: - nodeSelector: - node-role.kubernetes.io/infra: "" - indexGateway: - nodeSelector: - node-role.kubernetes.io/infra: "" - ingester: - nodeSelector: - node-role.kubernetes.io/infra: "" - querier: - nodeSelector: - node-role.kubernetes.io/infra: "" - queryFrontend: - nodeSelector: - node-role.kubernetes.io/infra: "" - ruler: - nodeSelector: - node-role.kubernetes.io/infra: "" -# ... -``` - - -Specifies the component pod type that applies to the node selector. -Specifies the pods that are moved to nodes containing the defined label. - - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: -# ... - template: - compactor: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - distributor: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - indexGateway: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - ingester: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - querier: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - queryFrontend: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - ruler: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - gateway: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved -# ... -``` - - -To configure the nodeSelector and tolerations fields of the LokiStack (CR), you can use the oc explain command to view the description and fields for a particular resource: - - -```terminal -$ oc explain lokistack.spec.template -``` - - - -```text -KIND: LokiStack -VERSION: loki.grafana.com/v1 - -RESOURCE: template - -DESCRIPTION: - Template defines the resource/limits/tolerations/nodeselectors per - component - -FIELDS: - compactor - Compactor defines the compaction component spec. - - distributor - Distributor defines the distributor component spec. -... 
-``` - - -For more detailed information, you can add a specific field: - - -```terminal -$ oc explain lokistack.spec.template.compactor -``` - - - -```text -KIND: LokiStack -VERSION: loki.grafana.com/v1 - -RESOURCE: compactor - -DESCRIPTION: - Compactor defines the compaction component spec. - -FIELDS: - nodeSelector - NodeSelector defines the labels required by a node to schedule the - component onto it. -... -``` - - -## Enhanced Reliability and Performance - -Configurations to ensure Loki’s reliability and efficiency in production. - -## Enabling authentication to cloud-based log stores using short-lived tokens - -Workload identity federation enables authentication to cloud-based log stores using short-lived tokens. - -* Use one of the following options to enable authentication: -* If you use the Red Hat OpenShift Container Platform web console to install the Loki Operator, clusters that use short-lived tokens are automatically detected. You are prompted to create roles and supply the data required for the Loki Operator to create a CredentialsRequest object, which populates a secret. -* If you use the OpenShift CLI (`oc`) to install the Loki Operator, you must manually create a Subscription object using the appropriate template for your storage provider, as shown in the following examples. This authentication strategy is only supported for the storage providers indicated. -Example Azure sample subscription - -```yaml -apiVersion: operators.coreos.com/v1alpha1 -kind: Subscription -metadata: - name: loki-operator - namespace: openshift-operators-redhat -spec: - channel: "stable-6.0" - installPlanApproval: Manual - name: loki-operator - source: redhat-operators - sourceNamespace: openshift-marketplace - config: - env: - - name: CLIENTID - value: - - name: TENANTID - value: - - name: SUBSCRIPTIONID - value: - - name: REGION - value: -``` - -Example AWS sample subscription - -```yaml -apiVersion: operators.coreos.com/v1alpha1 -kind: Subscription -metadata: - name: loki-operator - namespace: openshift-operators-redhat -spec: - channel: "stable-6.0" - installPlanApproval: Manual - name: loki-operator - source: redhat-operators - sourceNamespace: openshift-marketplace - config: - env: - - name: ROLEARN - value: -``` - - -## Configuring Loki to tolerate node failure - -The Loki Operator supports setting pod anti-affinity rules to request that pods of the same component are scheduled on different available nodes in the cluster. - -Affinity is a property of pods that controls the nodes on which they prefer to be scheduled. Anti-affinity is a property of pods -that prevents a pod from being scheduled on a node. - -In Red Hat OpenShift Container Platform, pod affinity and pod anti-affinity allow you to constrain which nodes your pod is eligible to be scheduled on based on the key-value labels on other pods. - -The Operator sets default, preferred podAntiAffinity rules for all Loki components, which includes the compactor, distributor, gateway, indexGateway, ingester, querier, queryFrontend, and ruler components. - -You can override the preferred podAntiAffinity settings for Loki components by configuring required settings in the requiredDuringSchedulingIgnoredDuringExecution field: - - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: -# ... - template: - ingester: - podAntiAffinity: - # ... 
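      # Hard requirement: pods carrying the ingester component label must be placed on nodes with different hostnames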
- requiredDuringSchedulingIgnoredDuringExecution: 1 - - labelSelector: - matchLabels: 2 - app.kubernetes.io/component: ingester - topologyKey: kubernetes.io/hostname -# ... -``` - - -The stanza to define a required rule. -The key-value pair (label) that must be matched to apply the rule. - -## LokiStack behavior during cluster restarts - -When an Red Hat OpenShift Container Platform cluster is restarted, LokiStack ingestion and the query path continue to operate within the available CPU and memory resources available for the node. This means that there is no downtime for the LokiStack during Red Hat OpenShift Container Platform cluster updates. This behavior is achieved by using PodDisruptionBudget resources. The Loki Operator provisions PodDisruptionBudget resources for Loki, which determine the minimum number of pods that must be available per component to ensure normal operations under certain conditions. - -## Advanced Deployment and Scalability - -Specialized configurations for high availability, scalability, and error handling. - -## Zone aware data replication - -The Loki Operator offers support for zone-aware data replication through pod topology spread constraints. Enabling this feature enhances reliability and safeguards against log loss in the event of a single zone failure. When configuring the deployment size as 1x.extra-small, 1x.small, or 1x.medium, the replication.factor field is automatically set to 2. - -To ensure proper replication, you need to have at least as many availability zones as the replication factor specifies. While it is possible to have more availability zones than the replication factor, having fewer zones can lead to write failures. Each zone should host an equal number of instances for optimal operation. - - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: - replicationFactor: 2 1 - replication: - factor: 2 2 - zones: - - maxSkew: 1 3 - topologyKey: topology.kubernetes.io/zone 4 -``` - - -Deprecated field, values entered are overwritten by replication.factor. -This value is automatically set when deployment size is selected at setup. -The maximum difference in number of pods between any two topology domains. The default is 1, and you cannot specify a value of 0. -Defines zones in the form of a topology key that corresponds to a node label. - -## Recovering Loki pods from failed zones - -In Red Hat OpenShift Container Platform a zone failure happens when specific availability zone resources become inaccessible. Availability zones are isolated areas within a cloud provider’s data center, aimed at enhancing redundancy and fault tolerance. If your Red Hat OpenShift Container Platform cluster is not configured to handle this, a zone failure can lead to service or data loss. - -Loki pods are part of a StatefulSet, and they come with Persistent Volume Claims (PVCs) provisioned by a StorageClass object. Each Loki pod and its PVCs reside in the same zone. When a zone failure occurs in a cluster, the StatefulSet controller automatically attempts to recover the affected pods in the failed zone. - - -[WARNING] ----- -The following procedure will delete the PVCs in the failed zone, and all data contained therein. To avoid complete data loss the replication factor field of the LokiStack CR should always be set to a value greater than 1 to ensure that Loki is replicating. ----- - -* Verify your LokiStack CR has a replication factor greater than 1. 
-* Zone failure detected by the control plane, and nodes in the failed zone are marked by cloud provider integration. - -The StatefulSet controller automatically attempts to reschedule pods in a failed zone. Because the associated PVCs are also in the failed zone, automatic rescheduling to a different zone does not work. You must manually delete the PVCs in the failed zone to allow successful re-creation of the stateful Loki Pod and its provisioned PVC in the new zone. - -1. List the pods in Pending status by running the following command: - -```terminal -$ oc get pods --field-selector status.phase==Pending -n openshift-logging -``` - -Example oc get pods output - -```terminal -NAME READY STATUS RESTARTS AGE 1 -logging-loki-index-gateway-1 0/1 Pending 0 17m -logging-loki-ingester-1 0/1 Pending 0 16m -logging-loki-ruler-1 0/1 Pending 0 16m -``` - -These pods are in Pending status because their corresponding PVCs are in the failed zone. -2. List the PVCs in Pending status by running the following command: - -```terminal -$ oc get pvc -o=json -n openshift-logging | jq '.items[] | select(.status.phase == "Pending") | .metadata.name' -r -``` - -Example oc get pvc output - -```terminal -storage-logging-loki-index-gateway-1 -storage-logging-loki-ingester-1 -wal-logging-loki-ingester-1 -storage-logging-loki-ruler-1 -wal-logging-loki-ruler-1 -``` - -3. Delete the PVC(s) for a pod by running the following command: - -```terminal -$ oc delete pvc -n openshift-logging -``` - -4. Delete the pod(s) by running the following command: - -```terminal -$ oc delete pod -n openshift-logging -``` - - -Once these objects have been successfully deleted, they should automatically be rescheduled in an available zone. - -### Troubleshooting PVC in a terminating state - -The PVCs might hang in the terminating state without being deleted, if PVC metadata finalizers are set to kubernetes.io/pv-protection. Removing the finalizers should allow the PVCs to delete successfully. - -* Remove the finalizer for each PVC by running the command below, then retry deletion. - -```terminal -$ oc patch pvc -p '{"metadata":{"finalizers":null}}' -n openshift-logging -``` - - -## Troubleshooting Loki rate limit errors - -If the Log Forwarder API forwards a large block of messages that exceeds the rate limit to Loki, Loki generates rate limit (429) errors. - -These errors can occur during normal operation. For example, when adding the logging to a cluster that already has some logs, rate limit errors might occur while the logging tries to ingest all of the existing log entries. In this case, if the rate of addition of new logs is less than the total rate limit, the historical data is eventually ingested, and the rate limit errors are resolved without requiring user intervention. - -In cases where the rate limit errors continue to occur, you can fix the issue by modifying the LokiStack custom resource (CR). - - -[IMPORTANT] ----- -The LokiStack CR is not available on Grafana-hosted Loki. This topic does not apply to Grafana-hosted Loki servers. ----- - -* The Log Forwarder API is configured to forward logs to Loki. -* Your system sends a block of messages that is larger than 2 MB to Loki. For example: - -```text -"values":[["1630410392689800468","{\"kind\":\"Event\",\"apiVersion\":\ -....... -...... -...... -...... 
-\"received_at\":\"2021-08-31T11:46:32.800278+00:00\",\"version\":\"1.7.4 1.6.0\"}},\"@timestamp\":\"2021-08-31T11:46:32.799692+00:00\",\"viaq_index_name\":\"audit-write\",\"viaq_msg_id\":\"MzFjYjJkZjItNjY0MC00YWU4LWIwMTEtNGNmM2E5ZmViMGU4\",\"log_type\":\"audit\"}"]]}]} -``` - -* After you enter oc logs -n openshift-logging -l component=collector, the collector logs in your cluster show a line containing one of the following error messages: - -```text -429 Too Many Requests Ingestion rate limit exceeded -``` - -Example Vector error message - -```text -2023-08-25T16:08:49.301780Z WARN sink{component_kind="sink" component_id=default_loki_infra component_type=loki component_name=default_loki_infra}: vector::sinks::util::retries: Retrying after error. error=Server responded with an error: 429 Too Many Requests internal_log_rate_limit=true -``` - - -The error is also visible on the receiving end. For example, in the LokiStack ingester pod: -Example Loki ingester error message - -```text -level=warn ts=2023-08-30T14:57:34.155592243Z caller=grpc_logging.go:43 duration=1.434942ms method=/logproto.Pusher/Push err="rpc error: code = Code(429) desc = entry with timestamp 2023-08-30 14:57:32.012778399 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream -``` - - -* Update the ingestionBurstSize and ingestionRate fields in the LokiStack CR: - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: - limits: - global: - ingestion: - ingestionBurstSize: 16 1 - ingestionRate: 8 2 -# ... -``` - -The ingestionBurstSize field defines the maximum local rate-limited sample size per distributor replica in MB. This value is a hard limit. Set this value to at least the maximum logs size expected in a single push request. Single requests that are larger than the ingestionBurstSize value are not permitted. -The ingestionRate field is a soft limit on the maximum amount of ingested samples per second in MB. Rate limit errors occur if the rate of logs exceeds the limit, but the collector retries sending the logs. As long as the total average is lower than the limit, the system recovers and errors are resolved without user intervention. \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.1/log6x-opentelemetry-data-model-6.1.txt b/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.1/log6x-opentelemetry-data-model-6.1.txt deleted file mode 100644 index 71eb6a76..00000000 --- a/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.1/log6x-opentelemetry-data-model-6.1.txt +++ /dev/null @@ -1,81 +0,0 @@ -# OpenTelemetry data model - - -This document outlines the protocol and semantic conventions for Red Hat OpenShift Logging's OpenTelemetry support with Logging 6.1. - -[IMPORTANT] ----- -The OpenTelemetry Protocol (OTLP) output log forwarder is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. -For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. 
----- - -# Forwarding and ingestion protocol - -Red Hat OpenShift Logging collects and forwards logs to OpenTelemetry endpoints using OTLP Specification. OTLP encodes, transports, and delivers telemetry data. You can also deploy Loki storage, which provides an OTLP endpont to ingest log streams. This document defines the semantic conventions for the logs collected from various OpenShift cluster sources. - -# Semantic conventions - -The log collector in this solution gathers the following log streams: - -* Container logs -* Cluster node journal logs -* Cluster node auditd logs -* Kubernetes and OpenShift API server logs -* OpenShift Virtual Network (OVN) logs - -You can forward these streams according to the semantic conventions defined by OpenTelemetry semantic attributes. The semantic conventions in OpenTelemetry define a resource as an immutable representation of the entity producing telemetry, identified by attributes. For example, a process running in a container includes attributes such as container_name, cluster_id, pod_name, namespace, and possibly deployment or app_name. These attributes are grouped under the resource object, which helps reduce repetition and optimizes log transmission as telemetry data. - -In addition to resource attributes, logs might also contain scope attributes specific to instrumentation libraries and log attributes specific to each log entry. These attributes provide greater detail about each log entry and enhance filtering capabilities when querying logs in storage. - -The following sections define the attributes that are generally forwarded. - -## Log entry structure - -All log streams include the following log data fields: - -The Applicable Sources column indicates which log sources each field applies to: - -* all: This field is present in all logs. -* container: This field is present in Kubernetes container logs, both application and infrastructure. -* audit: This field is present in Kubernetes, OpenShift API, and OVN logs. -* auditd: This field is present in node auditd logs. -* journal: This field is present in node journal logs. - - - -## Attributes - -Log entries include a set of resource, scope, and log attributes based on their source, as described in the following table. - -The Location column specifies the type of attribute: - -* resource: Indicates a resource attribute -* scope: Indicates a scope attribute -* log: Indicates a log attribute - -The Storage column indicates whether the attribute is stored in a LokiStack using the default openshift-logging mode and specifies where the attribute is stored: - -* stream label: -* Enables efficient filtering and querying based on specific labels. -* Can be labeled as required if the Loki Operator enforces this attribute in the configuration. -* structured metadata: -* Allows for detailed filtering and storage of key-value pairs. -* Enables users to use direct labels for streamlined queries without requiring JSON parsing. - -With OTLP, users can filter queries directly by labels rather than using JSON parsing, improving the speed and efficiency of queries. - - - - -[NOTE] ----- -Attributes marked as Compatibility attribute support minimal backward compatibility with the ViaQ data model. These attributes are deprecated and function as a compatibility layer to ensure continued UI functionality. These attributes will remain supported until the Logging UI fully supports the OpenTelemetry counterparts in future releases. ----- - -Loki changes the attribute names when persisting them to storage. 
The names will be lowercased, and all characters in the set: (.,/,-) will be replaced by underscores (_). For example, k8s.namespace.name will become k8s_namespace_name. - -# Additional resources - -* Semantic Conventions -* Logs Data Model -* General Logs Attributes \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.1/log6x-release-notes-6.1.txt b/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.1/log6x-release-notes-6.1.txt deleted file mode 100644 index 0dfa4800..00000000 --- a/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.1/log6x-release-notes-6.1.txt +++ /dev/null @@ -1,222 +0,0 @@ -# Logging 6.1 Release Notes - - - -# Logging 6.1.7 Release Notes - -This release includes RHBA-2025:8143. - -## Bug fixes - -* Before this update, merging data from the message field into the root of a Syslog log event caused the log event to be inconsistent with the ViaQ data model. The inconsistency could lead to overwritten system information, data duplication, or event corruption. This update revises Syslog parsing and merging for the Syslog output to align with other output types, resolving this inconsistency. (LOG-7184) -* Before this update, log forwarding failed if you configured a cluster-wide proxy with a URL containing a username with an encoded "@" symbol; for example "user%40name". This update resolves the issue by adding correct support for URL-encoded values in proxy configurations. (LOG-7187) - -## CVEs - -* CVE-2022-49043 -* CVE-2024-12087 -* CVE-2024-12088 -* CVE-2024-12133 -* CVE-2024-12243 -* CVE-2024-12747 -* CVE-2024-56171 -* CVE-2025-0395 -* CVE-2025-24928 - - -[NOTE] ----- -For detailed information on Red Hat security ratings, review Severity ratings. ----- - -# Logging 6.1.6 Release Notes - -This release includes RHBA-2025:4529. - -## Bug fixes - -* Before this update, collector pods would enter a crash loop due to a configuration error when attempting token-based authentication with an Elasticsearch output. With this update, token authentication with an Elasticsearch output generates a valid configuration. (LOG-7018) -* Before this update, auditd log messages with multiple msg keys could cause errors in collector pods, because the standard auditd log format expects a single msg field per log entry that follows the msg=audit(TIMESTAMP:ID) structure. With this update, only the first msg value is used, which resolves the issue and ensures accurate extraction of audit metadata. (LOG-7029) - -## CVEs - -* CVE-2022-49043 -* CVE-2024-2236 -* CVE-2024-5535 -* CVE-2024-56171 -* CVE-2025-24928 - - -[NOTE] ----- -For detailed information on Red Hat security ratings, review Severity ratings. ----- - -# Logging 6.1.5 Release Notes - -This release includes RHSA-2025:3907. - -## New features and enhancements - -* Before this update, time-based stream sharding was not enabled in Loki, which resulted in Loki being unable to save historical data. With this update, Loki Operator enables time-based stream sharding in Loki, which helps Loki save historical data. (LOG-6991) - -## Bug fixes - -* Before this update, the Vector collector could not forward Open Virtual Network (OVN) and Auditd logs. With this update, the Vector collector can forward OVN and Auditd logs. (LOG-6996) - -## CVEs - -* CVE-2025-30204 - - -[NOTE] ----- -For detailed information on Red Hat security ratings, review Severity ratings. 
----- - -# Logging 6.1.4 Release Notes - -This release includes Logging for Red Hat OpenShift Bug Fix Release 6.1.4. - -## Bug fixes - -* Before this update, Red Hat Managed Elasticsearch failed to receive logs if the index name did not follow the required patterns (app-, infra-, audit-), resulting in an index_not_found_exception error due to a restricted automatic index creation. With this update, improved documentation and explanations in the oc explain obsclf.spec.outputs.elasticsearch.index command clarify the index naming limitations, helping users configure log forwarding correctly. -(LOG-6623) -* Before this update, when you used 1x.pico as the LokiStack size, the number of delete workers was set to zero. This issue occurred because of an error in the Operator that generates the Loki configuration. With this update, the number of delete workers is set to ten. -(LOG-6797) -* Before this update, the Operator failed to update the securitycontextconstraint object required by the log collector, which was a regression from previous releases. With this update, the Operator restores the cluster role to the service account and updates the resource. -(LOG-6816) - -## CVEs - -* CVE-2022-49043 -* CVE-2024-45336 -* CVE-2024-45338 -* CVE-2024-56171 -* CVE-2025-24928 -* CVE-2025-27144 - - -[NOTE] ----- -For detailed information on Red Hat security ratings, review Severity ratings. ----- - -# Logging 6.1.3 Release Notes - -This release includes Logging for Red Hat OpenShift Bug Fix Release 6.1.3. - -## Bug Fixes - -* Before this update, when using the new 1x.pico size with the Loki Operator, the PodDisruptionBudget created for the Ingester pod allowed Kubernetes to evict two of the three Ingester pods. With this update, the Operator now creates a PodDisruptionBudget that allows eviction of only a single Ingester pod. -(LOG-6693) -* Before this update, the Operator did not support templating of syslog facility and severity level, which was consistent with the rest of the API. Instead, the Operator relied upon the 5.x API, which is no longer supported. With this update, the Operator supports templating by adding the required validation to the API and rejecting resources that do not match the required format. -(LOG-6788) -* Before this update, empty OTEL tuning configuration caused a validation error. With this update, the validation rules allow empty OTEL tuning configurations. -(LOG-6532) - -## CVEs - -* CVE-2020-11023 -* CVE-2024-9287 -* CVE-2024-12797 - -# Logging 6.1.2 Release Notes - -This release includes Logging for Red Hat OpenShift Bug Fix Release 6.1.2. - -## New Features and Enhancements - -* This enhancement adds OTel semantic stream labels to the lokiStack output so that you can query logs by using both ViaQ and OTel stream labels. -(LOG-6579) - -## Bug Fixes - -* Before this update, the collector alerting rules contained summary and message fields. With this update, the collector alerting rules contain summary and description fields. -(LOG-6126) -* Before this update, the collector metrics dashboard could get removed after an Operator upgrade due to a race condition during the transition from the old to the new pod deployment. With this update, labels are added to the dashboard ConfigMap to identify the upgraded deployment as the current owner so that it will not be removed. -(LOG-6280) -* Before this update, when you included infrastructure namespaces in application inputs, their log_type would be set to application. 
With this update, the log_type of infrastructure namespaces included in application inputs is set to infrastructure. -(LOG-6373) -* Before this update, the Cluster Logging Operator used a cached client to fetch the SecurityContextConstraint cluster resource, which could result in an error when the cache is invalid. With this update, the Operator now always retrieves data from the API server instead of using a cache. -(LOG-6418) -* Before this update, the logging must-gather did not collect resources such as UIPlugin, ClusterLogForwarder, LogFileMetricExporter, and LokiStack. With this update, the must-gather now collects all of these resources and places them in their respective namespace directory instead of the cluster-logging directory. -(LOG-6422) -* Before this update, the Vector startup script attempted to delete buffer lock files during startup. With this update, the Vector startup script no longer attempts to delete buffer lock files during startup. -(LOG-6506) -* Before this update, the API documentation incorrectly claimed that lokiStack outputs would default the target namespace, which could prevent the collector from writing to that output. With this update, this claim has been removed from the API documentation and the Cluster Logging Operator now validates that a target namespace is present. -(LOG-6573) -* Before this update, the Cluster Logging Operator could deploy the collector with output configurations that were not referenced by any inputs. With this update, a validation check for the ClusterLogForwarder resource prevents the Operator from deploying the collector. -(LOG-6585) - -## CVEs - -* CVE-2019-12900 - -# Logging 6.1.1 Release Notes - -This release includes Logging for Red Hat OpenShift Bug Fix Release 6.1.1. - -## New Features and Enhancements - -* With this update, the Loki Operator supports configuring the workload identity federation on the Google Cloud Platform (GCP) by using the Cluster Credential Operator (CCO) in Red Hat OpenShift Container Platform 4.17 or later. (LOG-6420) - -## Bug Fixes - -* Before this update, the collector was discarding longer audit log messages with the following error message: Internal log [Found line that exceeds max_line_bytes; discarding.]. With this update, the discarding of longer audit messages is avoided by increasing the audit configuration thresholds: The maximum line size, max_line_bytes, is 3145728 bytes. The maximum number of bytes read during a read cycle, max_read_bytes, is 262144 bytes. (LOG-6379) -* Before this update, an input receiver service was repeatedly created and deleted, causing issues with mounting the TLS secrets. With this update, the service is created once and only deleted if it is not defined in the ClusterLogForwarder custom resource. (LOG-6383) -* Before this update, pipeline validation might have entered an infinite loop if a name was a substring of another name. With this update, stricter name equality checks prevent the infinite loop. (LOG-6405) -* Before this update, the collector alerting rules included the summary and message fields. With this update, the collector alerting rules include the summary and description fields. (LOG-6407) -* Before this update, setting up the custom audit inputs in the ClusterLogForwarder custom resource with configured LokiStack output caused errors due to the nil pointer dereference. With this update, the Operator performs the nil checks, preventing such errors. 
(LOG-6449) -* Before this update, the ValidLokistackOTLPOutputs condition appeared in the status of the ClusterLogForwarder custom resource even when the output type is not LokiStack. With this update, the ValidLokistackOTLPOutputs condition is removed, and the validation messages for the existing output conditions are corrected. (LOG-6469) -* Before this update, the collector did not correctly mount the /var/log/oauth-server/ path, which prevented the collection of the audit logs. With this update, the volume mount is added, and the audit logs are collected as expected. (LOG-6484) -* Before this update, the must-gather script of the Red Hat OpenShift Logging Operator might have failed to gather the LokiStack data. With this update, the must-gather script is fixed, and the LokiStack data is gathered reliably. (LOG-6498) -* Before this update, the collector did not correctly mount the oauth-apiserver audit log file. As a result, such audit logs were not collected. With this update, the volume mount is correctly mounted, and the logs are collected as expected. (LOG-6533) - -## CVEs - -* CVE-2019-12900 -* CVE-2024-2511 -* CVE-2024-3596 -* CVE-2024-4603 -* CVE-2024-4741 -* CVE-2024-5535 -* CVE-2024-10963 -* CVE-2024-50602 - -# Logging 6.1.0 Release Notes - -This release includes Logging for Red Hat OpenShift Bug Fix Release 6.1.0. - -## New Features and Enhancements - -### Log Collection - -* This enhancement adds the source iostream to the attributes sent from collected container logs. The value is set to either stdout or stderr based on how the collector received it. (LOG-5292) -* With this update, the default memory limit for the collector increases from 1024 Mi to 2048 Mi. Users should adjust resource limits based on their cluster’s specific needs and specifications. (LOG-6072) -* With this update, users can now set the syslog output delivery mode of the ClusterLogForwarder CR to either AtLeastOnce or AtMostOnce. (LOG-6355) - -### Log Storage - -* With this update, the new 1x.pico LokiStack size supports clusters with fewer workloads and lower log volumes (up to 50GB/day). (LOG-5939) - -## Technology Preview - - -[IMPORTANT] ----- -{FeatureName} is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. -For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. ----- - -* With this update, OpenTelemetry logs can now be forwarded using the OTel (OpenTelemetry) data model to a Red Hat Managed LokiStack instance. To enable this feature, add the observability.openshift.io/tech-preview-otlp-output: "enabled" annotation to your ClusterLogForwarder configuration. For additional configuration information, see OTLP Forwarding. -* With this update, a dataModel field has been added to the lokiStack output specification. Set the dataModel to Otel to configure log forwarding using the OpenTelemetry data format. The default is set to Viaq. For information about data mapping see OTLP Specification. - -## Bug Fixes - -None. 
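Taken together, the two Technology Preview items above map to a small amount of ClusterLogForwarder configuration. The following is an illustrative sketch only, not part of the release notes: the forwarder name, service account, pipeline, and LokiStack target are assumptions, and the sketch simply combines the tech-preview annotation with dataModel: Otel on a lokiStack output.

```yaml
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector                  # assumed forwarder name
  namespace: openshift-logging
  annotations:
    observability.openshift.io/tech-preview-otlp-output: "enabled" # enables the Technology Preview OTLP output
spec:
  serviceAccount:
    name: collector                # assumes this service account already exists with log collection permissions
  outputs:
  - name: otlp-lokistack
    type: lokiStack
    lokiStack:
      dataModel: Otel              # forward logs using the OpenTelemetry data model; the default is ViaQ
      target:
        name: logging-loki         # assumed Red Hat Managed LokiStack instance
        namespace: openshift-logging
      authentication:
        token:
          from: serviceAccount
      tls:
        ca:
          key: service-ca.crt
          configMapName: openshift-service-ca.crt
  pipelines:
  - name: app-logs
    inputRefs:
    - application
    outputRefs:
    - otlp-lokistack
```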
- -## CVEs - -* CVE-2024-6119 -* CVE-2024-6232 \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.1/log6x-visual-6.1.txt b/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.1/log6x-visual-6.1.txt deleted file mode 100644 index c669b518..00000000 --- a/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.1/log6x-visual-6.1.txt +++ /dev/null @@ -1,5 +0,0 @@ -# Visualization for logging - - - -Visualization for logging is provided by deploying the Logging UI Plugin of the Cluster Observability Operator, which requires Operator installation. \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.2/6x-cluster-logging-deploying-6.2.txt b/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.2/6x-cluster-logging-deploying-6.2.txt deleted file mode 100644 index a56192da..00000000 --- a/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.2/6x-cluster-logging-deploying-6.2.txt +++ /dev/null @@ -1,680 +0,0 @@ -# Installing Logging - - -Red Hat OpenShift Container Platform Operators use custom resources (CRs) to manage applications and their components. You provide high-level configuration and settings through the CR. The Operator translates high-level directives into low-level actions, based on best practices embedded within the logic of the Operator. A custom resource definition (CRD) defines a CR and lists all the configurations available to users of the Operator. Installing an Operator creates the CRDs to generate CRs. -To get started with logging, you must install the following Operators: -* Loki Operator to manage your log store. -* Red Hat OpenShift Logging Operator to manage log collection and forwarding. -* Cluster Observability Operator (COO) to manage visualization. -You can use either the Red Hat OpenShift Container Platform web console or the Red Hat OpenShift Container Platform CLI to install or configure logging. - -[IMPORTANT] ----- -You must configure the Red Hat OpenShift Logging Operator after the Loki Operator. ----- - -# Installation by using the CLI - -The following sections describe installing the Loki Operator and the Red Hat OpenShift Logging Operator by using the CLI. - -## Installing the Loki Operator by using the CLI - -Install Loki Operator on your Red Hat OpenShift Container Platform cluster to manage the log store Loki by using the Red Hat OpenShift Container Platform command-line interface (CLI). You can deploy and configure the Loki log store by reconciling the resource LokiStack with the Loki Operator. - -* You have administrator permissions. -* You installed the OpenShift CLI (`oc`). -* You have access to a supported object store. For example: AWS S3, Google Cloud Storage, Azure, Swift, Minio, or OpenShift Data Foundation. - -1. Create a Namespace object for Loki Operator: -Example Namespace object - -```yaml -apiVersion: v1 -kind: Namespace -metadata: - name: openshift-operators-redhat 1 - labels: - openshift.io/cluster-monitoring: "true" 2 -``` - -You must specify openshift-operators-redhat as the namespace. To enable monitoring for the operator, configure Cluster Monitoring Operator to scrape metrics from the openshift-operators-redhat namespace and not the openshift-operators namespace. The openshift-operators namespace might contain community operators, which are untrusted and could publish a metric with the same name as an Red Hat OpenShift Container Platform metric, causing conflicts. 
-A string value that specifies the label as shown to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace. -2. Apply the Namespace object by running the following command: - -```terminal -$ oc apply -f .yaml -``` - -3. Create an OperatorGroup object. -Example OperatorGroup object - -```yaml -apiVersion: operators.coreos.com/v1 -kind: OperatorGroup -metadata: - name: loki-operator - namespace: openshift-operators-redhat 1 -spec: - upgradeStrategy: Default -``` - -You must specify openshift-operators-redhat as the namespace. -4. Apply the OperatorGroup object by running the following command: - -```terminal -$ oc apply -f .yaml -``` - -5. Create a Subscription object for Loki Operator: -Example Subscription object - -```yaml -apiVersion: operators.coreos.com/v1alpha1 -kind: Subscription -metadata: - name: loki-operator - namespace: openshift-operators-redhat 1 -spec: - channel: stable-6. 2 - installPlanApproval: Automatic 3 - name: loki-operator - source: redhat-operators 4 - sourceNamespace: openshift-marketplace -``` - -You must specify openshift-operators-redhat as the namespace. -Specify stable-6. as the channel. -If the approval strategy in the subscription is set to Automatic, the update process initiates as soon as a new operator version is available in the selected channel. If the approval strategy is set to Manual, you must manually approve pending updates. -Specify redhat-operators as the value. If your Red Hat OpenShift Container Platform cluster is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object that you created when you configured Operator Lifecycle Manager (OLM). -6. Apply the Subscription object by running the following command: - -```terminal -$ oc apply -f .yaml -``` - -7. Create a namespace object for deploy the LokiStack: -Example namespace object - -```yaml -apiVersion: v1 -kind: Namespace -metadata: - name: openshift-logging 1 - labels: - openshift.io/cluster-monitoring: "true" 2 -``` - -The openshift-logging namespace is dedicated for all logging workloads. -A string value that specifies the label, as shown, to ensure that cluster monitoring scrapes the openshift-logging namespace. -8. Apply the namespace object by running the following command: - -```terminal -$ oc apply -f .yaml -``` - -9. Create a secret with the credentials to access the object storage. For example, create a secret to access Amazon Web Services (AWS) s3. -Example Secret object - -```yaml -apiVersion: v1 -kind: Secret -metadata: - name: logging-loki-s3 1 - namespace: openshift-logging -stringData: 2 - access_key_id: - access_key_secret: - bucketnames: s3-bucket-name - endpoint: https://s3.eu-central-1.amazonaws.com - region: eu-central-1 -``` - -Use the name logging-loki-s3 to match the name used in LokiStack. -For the contents of the secret see the Loki object storage section. - -[IMPORTANT] ----- -If there is no retention period defined on the s3 bucket or in the LokiStack custom resource (CR), then the logs are not pruned and they stay in the s3 bucket forever, which might fill up the s3 storage. ----- -10. Apply the Secret object by running the following command: - -```terminal -$ oc apply -f .yaml -``` - -11. Create a LokiStack CR: -Example LokiStack CR - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki 1 - namespace: openshift-logging 2 -spec: - size: 1x.small 3 - storage: - schemas: - - version: v13 - effectiveDate: "--
" 4 - secret: - name: logging-loki-s3 5 - type: s3 6 - storageClassName: 7 - tenants: - mode: openshift-logging 8 -``` - -Use the name logging-loki. -You must specify openshift-logging as the namespace. -Specify the deployment size. Supported size options for production instances of Loki are 1x.extra-small, 1x.small, or 1x.medium. Additionally, 1x.pico is supported starting with logging 6.1. -For new installations this date should be set to the equivalent of "yesterday", as this will be the date from when the schema takes effect. -Specify the name of your log store secret. -Specify the corresponding storage type. -Specify the name of a storage class for temporary storage. For best performance, specify a storage class that allocates block storage. You can list the available storage classes for your cluster by using the oc get storageclasses command. -The openshift-logging mode is the default tenancy mode where a tenant is created for log types, such as audit, infrastructure, and application. This enables access control for individual users and user groups to different log streams. -12. Apply the LokiStack CR object by running the following command: - -```terminal -$ oc apply -f .yaml -``` - - -* Verify the installation by running the following command: - -```terminal -$ oc get pods -n openshift-logging -``` - -Example output - -```terminal -$ oc get pods -n openshift-logging -NAME READY STATUS RESTARTS AGE -logging-loki-compactor-0 1/1 Running 0 42m -logging-loki-distributor-7d7688bcb9-dvcj8 1/1 Running 0 42m -logging-loki-gateway-5f6c75f879-bl7k9 2/2 Running 0 42m -logging-loki-gateway-5f6c75f879-xhq98 2/2 Running 0 42m -logging-loki-index-gateway-0 1/1 Running 0 42m -logging-loki-ingester-0 1/1 Running 0 42m -logging-loki-querier-6b7b56bccc-2v9q4 1/1 Running 0 42m -logging-loki-query-frontend-84fb57c578-gq2f7 1/1 Running 0 42m -``` - - -## Installing Red Hat OpenShift Logging Operator by using the CLI - -Install Red Hat OpenShift Logging Operator on your Red Hat OpenShift Container Platform cluster to collect and forward logs to a log store by using the OpenShift CLI (`oc`). - -* You have administrator permissions. -* You installed the OpenShift CLI (`oc`). -* You installed and configured Loki Operator. -* You have created the openshift-logging namespace. - -1. Create an OperatorGroup object: -Example OperatorGroup object - -```yaml -apiVersion: operators.coreos.com/v1 -kind: OperatorGroup -metadata: - name: cluster-logging - namespace: openshift-logging 1 -spec: - upgradeStrategy: Default -``` - -You must specify openshift-logging as the namespace. -2. Apply the OperatorGroup object by running the following command: - -```terminal -$ oc apply -f .yaml -``` - -3. Create a Subscription object for Red Hat OpenShift Logging Operator: -Example Subscription object - -```yaml -apiVersion: operators.coreos.com/v1alpha1 -kind: Subscription -metadata: - name: cluster-logging - namespace: openshift-logging 1 -spec: - channel: stable-6. 2 - installPlanApproval: Automatic 3 - name: cluster-logging - source: redhat-operators 4 - sourceNamespace: openshift-marketplace -``` - -You must specify openshift-logging as the namespace. -Specify stable-6. as the channel. -If the approval strategy in the subscription is set to Automatic, the update process initiates as soon as a new operator version is available in the selected channel. If the approval strategy is set to Manual, you must manually approve pending updates. -Specify redhat-operators as the value. 
If your Red Hat OpenShift Container Platform cluster is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object that you created when you configured Operator Lifecycle Manager (OLM). -4. Apply the Subscription object by running the following command: - -```terminal -$ oc apply -f .yaml -``` - -5. Create a service account to be used by the log collector: - -```terminal -$ oc create sa logging-collector -n openshift-logging -``` - -6. Assign the necessary permissions to the service account for the collector to be able to collect and forward logs. In this example, the collector is provided permissions to collect logs from both infrastructure and application logs. - -```terminal -$ oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z logging-collector -n openshift-logging -$ oc adm policy add-cluster-role-to-user collect-application-logs -z logging-collector -n openshift-logging -$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z logging-collector -n openshift-logging -``` - -7. Create a ClusterLogForwarder CR: -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - name: instance - namespace: openshift-logging 1 -spec: - serviceAccount: - name: logging-collector 2 - outputs: - - name: lokistack-out - type: lokiStack 3 - lokiStack: - target: 4 - name: logging-loki - namespace: openshift-logging - authentication: - token: - from: serviceAccount - tls: - ca: - key: service-ca.crt - configMapName: openshift-service-ca.crt - pipelines: - - name: infra-app-logs - inputRefs: 5 - - application - - infrastructure - outputRefs: - - lokistack-out -``` - -You must specify the openshift-logging namespace. -Specify the name of the service account created before. -Select the lokiStack output type to send logs to the LokiStack instance. -Point the ClusterLogForwarder to the LokiStack instance created earlier. -Select the log output types you want to send to the LokiStack instance. -8. Apply the ClusterLogForwarder CR object by running the following command: - -```terminal -$ oc apply -f .yaml -``` - - -1. Verify the installation by running the following command: - -```terminal -$ oc get pods -n openshift-logging -``` - -Example output - -```terminal -$ oc get pods -n openshift-logging -NAME READY STATUS RESTARTS AGE -cluster-logging-operator-fb7f7cf69-8jsbq 1/1 Running 0 98m -instance-222js 2/2 Running 0 18m -instance-g9ddv 2/2 Running 0 18m -instance-hfqq8 2/2 Running 0 18m -instance-sphwg 2/2 Running 0 18m -instance-vv7zn 2/2 Running 0 18m -instance-wk5zz 2/2 Running 0 18m -logging-loki-compactor-0 1/1 Running 0 42m -logging-loki-distributor-7d7688bcb9-dvcj8 1/1 Running 0 42m -logging-loki-gateway-5f6c75f879-bl7k9 2/2 Running 0 42m -logging-loki-gateway-5f6c75f879-xhq98 2/2 Running 0 42m -logging-loki-index-gateway-0 1/1 Running 0 42m -logging-loki-ingester-0 1/1 Running 0 42m -logging-loki-querier-6b7b56bccc-2v9q4 1/1 Running 0 42m -logging-loki-query-frontend-84fb57c578-gq2f7 1/1 Running 0 42m -``` - - -## Installing the Logging UI plugin by using the CLI - -Install the Logging UI plugin by using the command-line interface (CLI) so that you can visualize logs. - -* You have administrator permissions. -* You installed the OpenShift CLI (`oc`). -* You installed and configured Loki Operator. - -1. Install the Cluster Observability Operator. For more information, see Installing the Cluster Observability Operator. -2. 
Create a UIPlugin custom resource (CR): -Example UIPlugin CR - -```yaml -apiVersion: observability.openshift.io/v1alpha1 -kind: UIPlugin -metadata: - name: logging 1 -spec: - type: Logging 2 - logging: - lokiStack: - name: logging-loki 3 -``` - -Set name to logging. -Set type to Logging. -The name value must match the name of your LokiStack instance. - -[NOTE] ----- -If you did not install LokiStack in the openshift-logging namespace, set the LokiStack namespace under the lokiStack configuration. ----- -3. Apply the UIPlugin CR object by running the following command: - -```terminal -$ oc apply -f .yaml -``` - - -1. Access the Red Hat OpenShift Container Platform web console, and refresh the page if a pop-up message instructs you to do so. -2. Navigate to the Observe → Logs panel, where you can run LogQL queries. You can also query logs for individual pods from the Aggregated Logs tab of a specific pod. - -# Installation by using the web console - -The following sections describe installing the Loki Operator and the Red Hat OpenShift Logging Operator by using the web console. - -## Installing Logging by using the web console - -Install Loki Operator on your Red Hat OpenShift Container Platform cluster to manage the log store Loki from the OperatorHub by using the Red Hat OpenShift Container Platform web console. You can deploy and configure the Loki log store by reconciling the resource LokiStack with the Loki Operator. - -* You have administrator permissions. -* You have access to the Red Hat OpenShift Container Platform web console. -* You have access to a supported object store (AWS S3, Google Cloud Storage, Azure, Swift, Minio, OpenShift Data Foundation). - -1. In the Red Hat OpenShift Container Platform web console Administrator perspective, go to Operators -> OperatorHub. -2. Type Loki Operator in the Filter by keyword field. Click Loki Operator in the list of available Operators, and then click Install. - -[IMPORTANT] ----- -The Community Loki Operator is not supported by Red Hat. ----- -3. Select stable-x.y as the Update channel. - -The Loki Operator must be deployed to the global Operator group namespace openshift-operators-redhat, so the Installation mode and Installed Namespace are already selected. If this namespace does not already exist, it will be created for you. -4. Select Enable Operator-recommended cluster monitoring on this namespace. - -This option sets the openshift.io/cluster-monitoring: "true" label in the Namespace object. You must select this option to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace. -5. For Update approval select Automatic, then click Install. - -If the approval strategy in the subscription is set to Automatic, the update process initiates as soon as a new Operator version is available in the selected channel. If the approval strategy is set to Manual, you must manually approve pending updates. - -[NOTE] ----- -An Operator might display a Failed status before the installation completes. If the Operator install completes with an InstallSucceeded message, refresh the page. ----- -6. While the Operator installs, create the namespace to which the log store will be deployed. -1. Click + in the top right of the screen to access the Import YAML page. -2. 
Add the YAML definition for the openshift-logging namespace: -Example namespace object - -```yaml -apiVersion: v1 -kind: Namespace -metadata: - name: openshift-logging 1 - labels: - openshift.io/cluster-monitoring: "true" 2 -``` - -The openshift-logging namespace is dedicated for all logging workloads. -A string value that specifies the label, as shown, to ensure that cluster monitoring scrapes the openshift-logging namespace. -3. Click Create. -7. Create a secret with the credentials to access the object storage. -1. Click + in the top right of the screen to access the Import YAML page. -2. Add the YAML definition for the secret. For example, create a secret to access Amazon Web Services (AWS) s3: -Example Secret object - -```yaml -apiVersion: v1 -kind: Secret -metadata: - name: logging-loki-s3 1 - namespace: openshift-logging 2 -stringData: 3 - access_key_id: - access_key_secret: - bucketnames: s3-bucket-name - endpoint: https://s3.eu-central-1.amazonaws.com - region: eu-central-1 -``` - -Note down the name used for the secret logging-loki-s3 to use it later when creating the LokiStack resource. -Set the namespace to openshift-logging as that will be the namespace used to deploy LokiStack. -For the contents of the secret see the Loki object storage section. - -[IMPORTANT] ----- -If there is no retention period defined on the s3 bucket or in the LokiStack custom resource (CR), then the logs are not pruned and they stay in the s3 bucket forever, which might fill up the s3 storage. ----- -3. Click Create. -8. Navigate to the Installed Operators page. Select the Loki Operator under the Provided APIs find the LokiStack resource and click Create Instance. -9. Select YAML view, and then use the following template to create a LokiStack CR: - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki 1 - namespace: openshift-logging 2 -spec: - size: 1x.small 3 - storage: - schemas: - - version: v13 - effectiveDate: "--
" - secret: - name: logging-loki-s3 4 - type: s3 5 - storageClassName: 6 - tenants: - mode: openshift-logging 7 -``` - -Use the name logging-loki. -You must specify openshift-logging as the namespace. -Specify the deployment size. Supported size options for production instances of Loki are 1x.extra-small, 1x.small, or 1x.medium. Additionally, 1x.pico is supported starting with logging 6.1. -Specify the name of your log store secret. -Specify the corresponding storage type. -Specify the name of a storage class for temporary storage. For best performance, specify a storage class that allocates block storage. You can list the available storage classes for your cluster by using the oc get storageclasses command. -The openshift-logging mode is the default tenancy mode where a tenant is created for log types, such as audit, infrastructure, and application. This enables access control for individual users and user groups to different log streams. -10. Click Create. - -1. In the LokiStack tab veriy that you see your LokiStack instance. -2. In the Status column, verify that you see the message Condition: Ready with a green checkmark. - -## Installing Red Hat OpenShift Logging Operator by using the web console - -Install Red Hat OpenShift Logging Operator on your Red Hat OpenShift Container Platform cluster to collect and forward logs to a log store from the OperatorHub by using the Red Hat OpenShift Container Platform web console. - -* You have administrator permissions. -* You have access to the Red Hat OpenShift Container Platform web console. -* You installed and configured Loki Operator. - -1. In the Red Hat OpenShift Container Platform web console Administrator perspective, go to Operators -> OperatorHub. -2. Type Red Hat OpenShift Logging Operator in the Filter by keyword field. Click Red Hat OpenShift Logging Operator in the list of available Operators, and then click Install. -3. Select stable-x.y as the Update channel. The latest version is already selected in the Version field. - -The Red Hat OpenShift Logging Operator must be deployed to the logging namespace openshift-logging, so the Installation mode and Installed Namespace are already selected. If this namespace does not already exist, it will be created for you. -4. Select Enable Operator-recommended cluster monitoring on this namespace. - -This option sets the openshift.io/cluster-monitoring: "true" label in the Namespace object. You must select this option to ensure that cluster monitoring scrapes the openshift-logging namespace. -5. For Update approval select Automatic, then click Install. - -If the approval strategy in the subscription is set to Automatic, the update process initiates as soon as a new operator version is available in the selected channel. If the approval strategy is set to Manual, you must manually approve pending updates. - -[NOTE] ----- -An Operator might display a Failed status before the installation completes. If the operator installation completes with an InstallSucceeded message, refresh the page. ----- -6. While the operator installs, create the service account that will be used by the log collector to collect the logs. -1. Click the + in the top right of the screen to access the Import YAML page. -2. Enter the YAML definition for the service account. 
-Example ServiceAccount object - -```yaml -apiVersion: v1 -kind: ServiceAccount -metadata: - name: logging-collector 1 - namespace: openshift-logging 2 -``` - -Note down the name used for the service account logging-collector to use it later when creating the ClusterLogForwarder resource. -Set the namespace to openshift-logging because that is the namespace for deploying the ClusterLogForwarder resource. -3. Click the Create button. -7. Create the ClusterRoleBinding objects to grant the necessary permissions to the log collector for accessing the logs that you want to collect and to write the log store, for example infrastructure and application logs. -1. Click the + in the top right of the screen to access the Import YAML page. -2. Enter the YAML definition for the ClusterRoleBinding resources. -Example ClusterRoleBinding resources - -```yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRoleBinding -metadata: - name: logging-collector:write-logs -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: logging-collector-logs-writer 1 -subjects: -- kind: ServiceAccount - name: logging-collector - namespace: openshift-logging ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRoleBinding -metadata: - name: logging-collector:collect-application -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: collect-application-logs 2 -subjects: -- kind: ServiceAccount - name: logging-collector - namespace: openshift-logging ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRoleBinding -metadata: - name: logging-collector:collect-infrastructure -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: collect-infrastructure-logs 3 -subjects: -- kind: ServiceAccount - name: logging-collector - namespace: openshift-logging -``` - -The cluster role to allow the log collector to write logs to LokiStack. -The cluster role to allow the log collector to collect logs from applications. -The cluster role to allow the log collector to collect logs from infrastructure. -3. Click the Create button. -8. Go to the Operators -> Installed Operators page. Select the operator and click the All instances tab. -9. After granting the necessary permissions to the service account, navigate to the Installed Operators page. Select the Red Hat OpenShift Logging Operator under the Provided APIs, find the ClusterLogForwarder resource and click Create Instance. -10. Select YAML view, and then use the following template to create a ClusterLogForwarder CR: -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - name: instance - namespace: openshift-logging 1 -spec: - serviceAccount: - name: logging-collector 2 - outputs: - - name: lokistack-out - type: lokiStack 3 - lokiStack: - target: 4 - name: logging-loki - namespace: openshift-logging - authentication: - token: - from: serviceAccount - tls: - ca: - key: service-ca.crt - configMapName: openshift-service-ca.crt - pipelines: - - name: infra-app-logs - inputRefs: 5 - - application - - infrastructure - outputRefs: - - lokistack-out -``` - -You must specify openshift-logging as the namespace. -Specify the name of the service account created earlier. -Select the lokiStack output type to send logs to the LokiStack instance. -Point the ClusterLogForwarder to the LokiStack instance created earlier. -Select the log output types you want to send to the LokiStack instance. -11. Click Create. - -1. 
In the ClusterLogForwarder tab verify that you see your ClusterLogForwarder instance. -2. In the Status column, verify that you see the messages: -* Condition: observability.openshift.io/Authorized -* observability.openshift.io/Valid, Ready - -## Installing the Logging UI plugin by using the web console - -Install the Logging UI plugin by using the web console so that you can visualize logs. - -* You have administrator permissions. -* You have access to the Red Hat OpenShift Container Platform web console. -* You installed and configured Loki Operator. - -1. Install the Cluster Observability Operator. For more information, see Installing the Cluster Observability Operator. -2. Navigate to the Installed Operators page. Under Provided APIs, select ClusterObservabilityOperator. Find the UIPlugin resource and click Create Instance. -3. Select the YAML view, and then use the following template to create a UIPlugin custom resource (CR): - -```yaml -apiVersion: observability.openshift.io/v1alpha1 -kind: UIPlugin -metadata: - name: logging 1 -spec: - type: Logging 2 - logging: - lokiStack: - name: logging-loki 3 -``` - -Set name to logging. -Set type to Logging. -The name value must match the name of your LokiStack instance. - -[NOTE] ----- -If you did not install LokiStack in the openshift-logging namespace, set the LokiStack namespace under the lokiStack configuration. ----- -4. Click Create. - -1. Refresh the page when a pop-up message instructs you to do so. -2. Navigate to the Observe → Logs panel, where you can run LogQL queries. You can also query logs for individual pods from the Aggregated Logs tab of a specific pod. \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.2/log62-cluster-logging-support.txt b/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.2/log62-cluster-logging-support.txt deleted file mode 100644 index d4c8e815..00000000 --- a/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.2/log62-cluster-logging-support.txt +++ /dev/null @@ -1,141 +0,0 @@ -# Support - - -Only the configuration options described in this documentation are supported for logging. -Do not use any other configuration options, as they are unsupported. Configuration paradigms might change across Red Hat OpenShift Container Platform releases, and such cases can only be handled gracefully if all configuration possibilities are controlled. If you use configurations other than those described in this documentation, your changes will be overwritten, because Operators are designed to reconcile any differences. - -[NOTE] ----- -If you must perform configurations not described in the Red Hat OpenShift Container Platform documentation, you must set your Red Hat OpenShift Logging Operator to Unmanaged. An unmanaged logging instance is not supported and does not receive updates until you return its status to Managed. ----- - -[NOTE] ----- -Logging is provided as an installable component, with a distinct release cycle from the core Red Hat OpenShift Container Platform. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility. ----- -Loki is a horizontally scalable, highly available, multi-tenant log aggregation system offered as a GA log store for logging for Red Hat OpenShift that can be visualized with the OpenShift Observability UI. The Loki configuration provided by OpenShift Logging is a short-term log store designed to enable users to perform fast troubleshooting with the collected logs. 
For that purpose, the logging for Red Hat OpenShift configuration of Loki has short-term storage, and is optimized for very recent queries. - -[IMPORTANT] ----- -For long-term storage or queries over a long time period, users should look to log stores external to their cluster. Loki sizing is only tested and supported for short term storage, for a maximum of 30 days. ----- -Logging for Red Hat OpenShift is an opinionated collector and normalizer of application, infrastructure, and audit logs. It is intended to be used for forwarding logs to various supported systems. -Logging is not: -* A high scale log collection system -* Security Information and Event Monitoring (SIEM) compliant -* A "bring your own" (BYO) log collector configuration -* Historical or long term log retention or storage -* A guaranteed log sink -* Secure storage - audit logs are not stored by default - -# Supported API custom resource definitions - -The following table describes the supported Logging APIs. - - - -# Unsupported configurations - -You must set the Red Hat OpenShift Logging Operator to the Unmanaged state to modify the following components: - -* The collector configuration file -* The collector daemonset - -Explicitly unsupported cases include: - -* Configuring the logging collector using environment variables. You cannot use environment variables to modify the log collector. -* Configuring how the log collector normalizes logs. You cannot modify default log normalization. - -# Support policy for unmanaged Operators - -The management state of an Operator determines whether an Operator is actively -managing the resources for its related component in the cluster as designed. If -an Operator is set to an unmanaged state, it does not respond to changes in -configuration nor does it receive updates. - -While this can be helpful in non-production clusters or during debugging, -Operators in an unmanaged state are unsupported and the cluster administrator -assumes full control of the individual component configurations and upgrades. - -An Operator can be set to an unmanaged state using the following methods: - -* Individual Operator configuration - -Individual Operators have a managementState parameter in their configuration. -This can be accessed in different ways, depending on the Operator. For example, -the Red Hat OpenShift Logging Operator accomplishes this by modifying a custom resource -(CR) that it manages, while the Cluster Samples Operator uses a cluster-wide -configuration resource. - -Changing the managementState parameter to Unmanaged means that the Operator -is not actively managing its resources and will take no action related to the -related component. Some Operators might not support this management state as it -might damage the cluster and require manual recovery. - -[WARNING] ----- -Changing individual Operators to the Unmanaged state renders that particular -component and functionality unsupported. Reported issues must be reproduced in -Managed state for support to proceed. ----- -* Cluster Version Operator (CVO) overrides - -The spec.overrides parameter can be added to the CVO’s configuration to allow -administrators to provide a list of overrides to the CVO’s behavior for a -component. Setting the spec.overrides[].unmanaged parameter to true for a -component blocks cluster upgrades and alerts the administrator after a CVO -override has been set: - -```terminal -Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing. 
-``` - - -[WARNING] ----- -Setting a CVO override puts the entire cluster in an unsupported state. Reported -issues must be reproduced after removing any overrides for support to proceed. ----- - -# Collecting logging data for Red Hat Support - -When opening a support case, it is helpful to provide debugging information about your cluster to Red Hat Support. - -You can use the must-gather tool to collect diagnostic information for project-level resources, cluster-level resources, and each of the logging components. -For prompt support, supply diagnostic information for both Red Hat OpenShift Container Platform and logging. - -## About the must-gather tool - -The oc adm must-gather CLI command collects the information from your cluster that is most likely needed for debugging issues. - -For your logging, must-gather collects the following information: - -* Project-level resources, including pods, configuration maps, service accounts, roles, role bindings, and events at the project level -* Cluster-level resources, including nodes, roles, and role bindings at the cluster level -* OpenShift Logging resources in the openshift-logging and openshift-operators-redhat namespaces, including health status for the log collector, the log store, and the log visualizer - -When you run oc adm must-gather, a new pod is created on the cluster. The data is collected on that pod and saved in a new directory that starts with must-gather.local. This directory is created in the current working directory. - -## Collecting logging data - -You can use the oc adm must-gather CLI command to collect information about logging. - -To collect logging information with must-gather: - -1. Navigate to the directory where you want to store the must-gather information. -2. Run the oc adm must-gather command against the logging image: - -```terminal -$ oc adm must-gather --image=$(oc -n openshift-logging get deployment.apps/cluster-logging-operator -o jsonpath='{.spec.template.spec.containers[?(@.name == "cluster-logging-operator")].image}') -``` - - -The must-gather tool creates a new directory that starts with must-gather.local within the current directory. For example: -must-gather.local.4157245944708210408. -3. Create a compressed file from the must-gather directory that was just created. For example, on a computer that uses a Linux operating system, run the following command: - -```terminal -$ tar -cvaf must-gather.tar.gz must-gather.local.4157245944708210408 -``` - -4. Attach the compressed file to your support case on the Red Hat Customer Portal. \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.2/log6x-about-6.2.txt b/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.2/log6x-about-6.2.txt deleted file mode 100644 index 2b6545ea..00000000 --- a/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.2/log6x-about-6.2.txt +++ /dev/null @@ -1,330 +0,0 @@ -# Logging 6.2 - - -The ClusterLogForwarder custom resource (CR) is the central configuration point for log collection and forwarding. - -# Inputs and outputs - -Inputs specify the sources of logs to be forwarded. Logging provides the following built-in input types that select logs from different parts of your cluster: - -* application -* receiver -* infrastructure -* audit - -You can also define custom inputs based on namespaces or pod labels to fine-tune log selection. - -Outputs define the destinations where logs are sent. 
Each output type has its own set of configuration options, allowing you to customize the behavior and authentication settings. - -# Receiver input type - -The receiver input type enables the Logging system to accept logs from external sources. It supports two formats for receiving logs: http and syslog. - -The ReceiverSpec field defines the configuration for a receiver input. - -# Pipelines and filters - -Pipelines determine the flow of logs from inputs to outputs. A pipeline consists of one or more input refs, output refs, and optional filter refs. You can use filters to transform or drop log messages within a pipeline. The order of filters matters, as they are applied sequentially, and earlier filters can prevent log messages from reaching later stages. - -# Operator behavior - -The Cluster Logging Operator manages the deployment and configuration of the collector based on the managementState field of the ClusterLogForwarder resource: - -* When set to Managed (default), the Operator actively manages the logging resources to match the configuration defined in the spec. -* When set to Unmanaged, the Operator does not take any action, allowing you to manually manage the logging components. - -# Validation - -Logging includes extensive validation rules and default values to ensure a smooth and error-free configuration experience. The ClusterLogForwarder resource enforces validation checks on required fields, dependencies between fields, and the format of input values. Default values are provided for certain fields, reducing the need for explicit configuration in common scenarios. - -# Quick start - -OpenShift Logging supports two data models: - -* ViaQ (General Availability) -* OpenTelemetry (Technology Preview) - -You can select either of these data models based on your requirement by configuring the lokiStack.dataModel field in the ClusterLogForwarder. ViaQ is the default data model when forwarding logs to LokiStack. - - -[NOTE] ----- -In future releases of OpenShift Logging, the default data model will change from ViaQ to OpenTelemetry. ----- - -## Quick start with ViaQ - -To use the default ViaQ data model, follow these steps: - -* You have access to an Red Hat OpenShift Container Platform cluster with cluster-admin permissions. -* You installed the OpenShift CLI (`oc`). -* You have access to a supported object store. For example, AWS S3, Google Cloud Storage, Azure, Swift, Minio, or OpenShift Data Foundation. - -1. Install the Red Hat OpenShift Logging Operator, Loki Operator, and Cluster Observability Operator (COO) from OperatorHub. -2. Create a LokiStack custom resource (CR) in the openshift-logging namespace: - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: - managementState: Managed - size: 1x.extra-small - storage: - schemas: - - effectiveDate: '2024-10-01' - version: v13 - secret: - name: logging-loki-s3 - type: s3 - storageClassName: gp3-csi - tenants: - mode: openshift-logging -``` - - -[NOTE] ----- -Ensure that the logging-loki-s3 secret is created beforehand. The contents of this secret vary depending on the object storage in use. For more information, see Secrets and TLS Configuration. ----- -3. Create a service account for the collector: - -```terminal -$ oc create sa collector -n openshift-logging -``` - -4. 
Allow the collector's service account to write data to the LokiStack CR: - -```terminal -$ oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector -n openshift-logging -``` - - -[NOTE] ----- -The ClusterRole resource is created automatically during the Cluster Logging Operator installation and does not need to be created manually. ----- -5. To collect logs, use the service account of the collector by running the following commands: - -```terminal -$ oc adm policy add-cluster-role-to-user collect-application-logs -z collector -n openshift-logging -``` - - -```terminal -$ oc adm policy add-cluster-role-to-user collect-audit-logs -z collector -n openshift-logging -``` - - -```terminal -$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector -n openshift-logging -``` - - -[NOTE] ----- -The example binds the collector to all three roles (application, infrastructure, and audit), but by default, only application and infrastructure logs are collected. To collect audit logs, update your ClusterLogForwarder configuration to include them. Assign roles based on the specific log types required for your environment. ----- -6. Create a UIPlugin CR to enable the Log section in the Observe tab: - -```yaml -apiVersion: observability.openshift.io/v1alpha1 -kind: UIPlugin -metadata: - name: logging -spec: - type: Logging - logging: - lokiStack: - name: logging-loki -``` - -7. Create a ClusterLogForwarder CR to configure log forwarding: - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - name: collector - namespace: openshift-logging -spec: - serviceAccount: - name: collector - outputs: - - name: default-lokistack - type: lokiStack - lokiStack: - authentication: - token: - from: serviceAccount - target: - name: logging-loki - namespace: openshift-logging - tls: - ca: - key: service-ca.crt - configMapName: openshift-service-ca.crt - pipelines: - - name: default-logstore - inputRefs: - - application - - infrastructure - outputRefs: - - default-lokistack -``` - - -[NOTE] ----- -The dataModel field is optional and left unset (dataModel: "") by default. This allows the Cluster Logging Operator (CLO) to automatically select a data model. Currently, the CLO defaults to the ViaQ model when the field is unset, but this will change in future releases. Specifying dataModel: ViaQ ensures the configuration remains compatible if the default changes. ----- - -* Verify that logs are visible in the Log section of the Observe tab in the Red Hat OpenShift Container Platform web console. - -## Quick start with OpenTelemetry - - -[IMPORTANT] ----- -{FeatureName} is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. -For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. ----- - -To configure OTLP ingestion and enable the OpenTelemetry data model, follow these steps: - -* You have access to an Red Hat OpenShift Container Platform cluster with cluster-admin permissions. -* You have installed the OpenShift CLI (`oc`). -* You have access to a supported object store. 
For example, AWS S3, Google Cloud Storage, Azure, Swift, Minio, or OpenShift Data Foundation. - -1. Install the Red Hat OpenShift Logging Operator, Loki Operator, and Cluster Observability Operator (COO) from OperatorHub. -2. Create a LokiStack custom resource (CR) in the openshift-logging namespace: - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: - managementState: Managed - size: 1x.extra-small - storage: - schemas: - - effectiveDate: '2024-10-01' - version: v13 - secret: - name: logging-loki-s3 - type: s3 - storageClassName: gp3-csi - tenants: - mode: openshift-logging -``` - - -[NOTE] ----- -Ensure that the logging-loki-s3 secret is created beforehand. The contents of this secret vary depending on the object storage in use. For more information, see "Secrets and TLS Configuration". ----- -3. Create a service account for the collector: - -```terminal -$ oc create sa collector -n openshift-logging -``` - -4. Allow the collector's service account to write data to the LokiStack CR: - -```terminal -$ oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector -n openshift-logging -``` - - -[NOTE] ----- -The ClusterRole resource is created automatically during the Cluster Logging Operator installation and does not need to be created manually. ----- -5. To collect logs, use the service account of the collector by running the following commands: - -```terminal -$ oc adm policy add-cluster-role-to-user collect-application-logs -z collector -n openshift-logging -``` - - -```terminal -$ oc adm policy add-cluster-role-to-user collect-audit-logs -z collector -n openshift-logging -``` - - -```terminal -$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector -n openshift-logging -``` - - -[NOTE] ----- -The example binds the collector to all three roles (application, infrastructure, and audit). By default, only application and infrastructure logs are collected. To collect audit logs, update your ClusterLogForwarder configuration to include them. Assign roles based on the specific log types required for your environment. ----- -6. Create a UIPlugin CR to enable the Log section in the Observe tab: - -```yaml -apiVersion: observability.openshift.io/v1alpha1 -kind: UIPlugin -metadata: - name: logging -spec: - type: Logging - logging: - lokiStack: - name: logging-loki -``` - -7. Create a ClusterLogForwarder CR to configure log forwarding: - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - name: collector - namespace: openshift-logging - annotations: - observability.openshift.io/tech-preview-otlp-output: "enabled" 1 -spec: - serviceAccount: - name: collector - outputs: - - name: loki-otlp - type: lokiStack 2 - lokiStack: - target: - name: logging-loki - namespace: openshift-logging - dataModel: Otel 3 - authentication: - token: - from: serviceAccount - tls: - ca: - key: service-ca.crt - configMapName: openshift-service-ca.crt - pipelines: - - name: my-pipeline - inputRefs: - - application - - infrastructure - outputRefs: - - loki-otlp -``` - -Use the annotation to enable the Otel data model, which is a Technology Preview feature. -Define the output type as lokiStack. -Specifies the OpenTelemetry data model. - -[NOTE] ----- -You cannot use lokiStack.labelKeys when dataModel is Otel. To achieve similar functionality when dataModel is Otel, refer to "Configuring LokiStack for OTLP data ingestion". 
----- - -* To verify that OTLP is functioning correctly, complete the following steps: -1. In the OpenShift web console, click Observe -> OpenShift Logging -> LokiStack -> Writes. -2. Check the Distributor - Structured Metadata section. \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.2/log6x-about-logging.txt b/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.2/log6x-about-logging.txt new file mode 100644 index 00000000..eae2690e --- /dev/null +++ b/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.2/log6x-about-logging.txt @@ -0,0 +1,16 @@ +# About Logging + + + +As a cluster administrator, you can deploy logging on an Red Hat OpenShift Container Platform cluster, and use it to collect and aggregate node system audit logs, application container logs, and infrastructure logs. + +You can use logging to perform the following tasks: + +* Forward logs to your chosen log outputs, including on-cluster, Red Hat managed log storage. +* Visualize your log data in the Red Hat OpenShift Container Platform web console. + + +[NOTE] +---- +Because logging releases on a different cadence from Red Hat OpenShift Container Platform, the logging 6 documentation is available as a separate documentation set at Red Hat OpenShift Logging. +---- \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.2/log6x-clf-6.2.txt b/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.2/log6x-clf-6.2.txt deleted file mode 100644 index d1c4390a..00000000 --- a/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.2/log6x-clf-6.2.txt +++ /dev/null @@ -1,988 +0,0 @@ -# Configuring log forwarding - - -The ClusterLogForwarder (CLF) allows users to configure forwarding of logs to various destinations. It provides a flexible way to select log messages from different sources, send them through a pipeline that can transform or filter them, and forward them to one or more outputs. -* Selects log messages using inputs -* Forwards logs to external destinations using outputs -* Filters, transforms, and drops log messages using filters -* Defines log forwarding pipelines connecting inputs, filters and outputs - -# Setting up log collection - -This release of Cluster Logging requires administrators to explicitly grant log collection permissions to the service account associated with ClusterLogForwarder. This was not required in previous releases for the legacy logging scenario consisting of a ClusterLogging and, optionally, a ClusterLogForwarder.logging.openshift.io resource. - -The Red Hat OpenShift Logging Operator provides collect-audit-logs, collect-application-logs, and collect-infrastructure-logs cluster roles, which enable the collector to collect audit logs, application logs, and infrastructure logs respectively. - -Setup log collection by binding the required cluster roles to your service account. 
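For reference, binding one of these cluster roles can also be done declaratively. The following ClusterRoleBinding is a minimal sketch only; the binding name, the collector service account name, and the openshift-logging namespace are assumptions that you adapt to your environment:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: collector-collect-application-logs  # illustrative name; any unique name works
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: collect-application-logs  # cluster role provided by the Red Hat OpenShift Logging Operator
subjects:
  - kind: ServiceAccount
    name: collector  # assumed service account name
    namespace: openshift-logging  # assumed namespace
```

Create equivalent bindings for collect-infrastructure-logs and collect-audit-logs if you collect those log types.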
- -## Legacy service accounts - -To use the existing legacy service account logcollector, create the following ClusterRoleBinding: - - -```terminal -$ oc adm policy add-cluster-role-to-user collect-application-logs system:serviceaccount:openshift-logging:logcollector -``` - - - -```terminal -$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs system:serviceaccount:openshift-logging:logcollector -``` - - -Additionally, create the following ClusterRoleBinding if collecting audit logs: - - -```terminal -$ oc adm policy add-cluster-role-to-user collect-audit-logs system:serviceaccount:openshift-logging:logcollector -``` - - -## Creating service accounts - -* The Red Hat OpenShift Logging Operator is installed in the openshift-logging namespace. -* You have administrator permissions. - -1. Create a service account for the collector. If you want to write logs to storage that requires a token for authentication, you must include a token in the service account. -2. Bind the appropriate cluster roles to the service account: -Example binding command - -```terminal -$ oc adm policy add-cluster-role-to-user system:serviceaccount:: -``` - - -### Cluster Role Binding for your Service Account - -The role_binding.yaml file binds the ClusterLogging operator’s ClusterRole to a specific ServiceAccount, allowing it to manage Kubernetes resources cluster-wide. - - -```yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRoleBinding -metadata: - name: manager-rolebinding -roleRef: 1 - apiGroup: rbac.authorization.k8s.io 2 - kind: ClusterRole 3 - name: cluster-logging-operator 4 -subjects: 5 - - kind: ServiceAccount 6 - name: cluster-logging-operator 7 - namespace: openshift-logging 8 -``` - - -roleRef: References the ClusterRole to which the binding applies. -apiGroup: Indicates the RBAC API group, specifying that the ClusterRole is part of Kubernetes' RBAC system. -kind: Specifies that the referenced role is a ClusterRole, which applies cluster-wide. -name: The name of the ClusterRole being bound to the ServiceAccount, here cluster-logging-operator. -subjects: Defines the entities (users or service accounts) that are being granted the permissions from the ClusterRole. -kind: Specifies that the subject is a ServiceAccount. -Name: The name of the ServiceAccount being granted the permissions. -namespace: Indicates the namespace where the ServiceAccount is located. - -### Writing application logs - -The write-application-logs-clusterrole.yaml file defines a ClusterRole that grants permissions to write application logs to the Loki logging application. - - -```yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole -metadata: - name: cluster-logging-write-application-logs -rules: 1 - - apiGroups: 2 - - loki.grafana.com 3 - resources: 4 - - application 5 - resourceNames: 6 - - logs 7 - verbs: 8 - - create 9 -``` - - -rules: Specifies the permissions granted by this ClusterRole. -apiGroups: Refers to the API group loki.grafana.com, which relates to the Loki logging system. -loki.grafana.com: The API group for managing Loki-related resources. -resources: The resource type that the ClusterRole grants permission to interact with. -application: Refers to the application resources within the Loki logging system. -resourceNames: Specifies the names of resources that this role can manage. -logs: Refers to the log resources that can be created. -verbs: The actions allowed on the resources. -create: Grants permission to create new logs in the Loki system. 
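If you use this ClusterRole, bind it to the service account that writes logs to Loki by using the same command pattern shown elsewhere in this document. The collector service account and the openshift-logging namespace in this sketch are assumptions:

```terminal
$ oc adm policy add-cluster-role-to-user cluster-logging-write-application-logs -z collector -n openshift-logging
```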
- -### Writing audit logs - -The write-audit-logs-clusterrole.yaml file defines a ClusterRole that grants permissions to create audit logs in the Loki logging system. - - -```yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole -metadata: - name: cluster-logging-write-audit-logs -rules: 1 - - apiGroups: 2 - - loki.grafana.com 3 - resources: 4 - - audit 5 - resourceNames: 6 - - logs 7 - verbs: 8 - - create 9 -``` - - -rules: Defines the permissions granted by this ClusterRole. -apiGroups: Specifies the API group loki.grafana.com. -loki.grafana.com: The API group responsible for Loki logging resources. -resources: Refers to the resource type this role manages, in this case, audit. -audit: Specifies that the role manages audit logs within Loki. -resourceNames: Defines the specific resources that the role can access. -logs: Refers to the logs that can be managed under this role. -verbs: The actions allowed on the resources. -create: Grants permission to create new audit logs. - -### Writing infrastructure logs - -The write-infrastructure-logs-clusterrole.yaml file defines a ClusterRole that grants permission to create infrastructure logs in the Loki logging system. - - -```yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole -metadata: - name: cluster-logging-write-infrastructure-logs -rules: 1 - - apiGroups: 2 - - loki.grafana.com 3 - resources: 4 - - infrastructure 5 - resourceNames: 6 - - logs 7 - verbs: 8 - - create 9 -``` - - -rules: Specifies the permissions this ClusterRole grants. -apiGroups: Specifies the API group for Loki-related resources. -loki.grafana.com: The API group managing the Loki logging system. -resources: Defines the resource type that this role can interact with. -infrastructure: Refers to infrastructure-related resources that this role manages. -resourceNames: Specifies the names of resources this role can manage. -logs: Refers to the log resources related to infrastructure. -verbs: The actions permitted by this role. -create: Grants permission to create infrastructure logs in the Loki system. - -### ClusterLogForwarder editor role - -The clusterlogforwarder-editor-role.yaml file defines a ClusterRole that allows users to manage ClusterLogForwarders in OpenShift. - - -```yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole -metadata: - name: clusterlogforwarder-editor-role -rules: 1 - - apiGroups: 2 - - observability.openshift.io 3 - resources: 4 - - clusterlogforwarders 5 - verbs: 6 - - create 7 - - delete 8 - - get 9 - - list 10 - - patch 11 - - update 12 - - watch 13 -``` - - -rules: Specifies the permissions this ClusterRole grants. -apiGroups: Refers to the OpenShift-specific API group -obervability.openshift.io: The API group for managing observability resources, like logging. -resources: Specifies the resources this role can manage. -clusterlogforwarders: Refers to the log forwarding resources in OpenShift. -verbs: Specifies the actions allowed on the ClusterLogForwarders. -create: Grants permission to create new ClusterLogForwarders. -delete: Grants permission to delete existing ClusterLogForwarders. -get: Grants permission to retrieve information about specific ClusterLogForwarders. -list: Allows listing all ClusterLogForwarders. -patch: Grants permission to partially modify ClusterLogForwarders. -update: Grants permission to update existing ClusterLogForwarders. -watch: Grants permission to monitor changes to ClusterLogForwarders. 
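To grant this role to a user, bind it with the standard oc adm policy command. The username developer is a placeholder for illustration:

```terminal
$ oc adm policy add-cluster-role-to-user clusterlogforwarder-editor-role developer
```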
- -# Modifying log level in collector - -To modify the log level in the collector, you can set the observability.openshift.io/log-level annotation to trace, debug, info, warn, error, and off. - - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - name: collector - annotations: - observability.openshift.io/log-level: debug -# ... -``` - - -# Managing the Operator - -The ClusterLogForwarder resource has a managementState field that controls whether the operator actively manages its resources or leaves them Unmanaged: - -Managed:: (default) The operator will drive the logging resources to match the desired state in the CLF spec. -Unmanaged:: The operator will not take any action related to the logging components. - -This allows administrators to temporarily pause log forwarding by setting managementState to Unmanaged. - -# Structure of the ClusterLogForwarder - -The CLF has a spec section that contains the following key components: - -Inputs:: Select log messages to be forwarded. Built-in input types application, infrastructure and audit forward logs from different parts of the cluster. You can also define custom inputs. -Outputs:: Define destinations to forward logs to. Each output has a unique name and type-specific configuration. -Pipelines:: Define the path logs take from inputs, through filters, to outputs. Pipelines have a unique name and consist of a list of input, output and filter names. -Filters:: Transform or drop log messages in the pipeline. Users can define filters that match certain log fields and drop or modify the messages. Filters are applied in the order specified in the pipeline. - -## Inputs - -Inputs are configured in an array under spec.inputs. There are three built-in input types: - -application:: Selects logs from all application containers, excluding those in infrastructure namespaces. -infrastructure:: Selects logs from nodes and from infrastructure components running in the following namespaces: -* default -* kube -* openshift -* Containing the kube- or openshift- prefix -audit:: Selects logs from the OpenShift API server audit logs, Kubernetes API server audit logs, ovn audit logs, and node audit logs from auditd. - -Users can define custom inputs of type application that select logs from specific namespaces or using pod labels. - -## Outputs - -Outputs are configured in an array under spec.outputs. Each output must have a unique name and a type. Supported types are: - -azureMonitor:: Forwards logs to Azure Monitor. -cloudwatch:: Forwards logs to AWS CloudWatch. -elasticsearch:: Forwards logs to an external Elasticsearch instance. -googleCloudLogging:: Forwards logs to Google Cloud Logging. -http:: Forwards logs to a generic HTTP endpoint. -kafka:: Forwards logs to a Kafka broker. -loki:: Forwards logs to a Loki logging backend. -lokistack:: Forwards logs to the logging supported combination of Loki and web proxy with Red Hat OpenShift Container Platform authentication integration. LokiStack's proxy uses Red Hat OpenShift Container Platform authentication to enforce multi-tenancy -otlp:: Forwards logs using the OpenTelemetry Protocol. -splunk:: Forwards logs to Splunk. -syslog:: Forwards logs to an external syslog server. - -Each output type has its own configuration fields. - -# Configuring OTLP output - -Cluster administrators can use the OpenTelemetry Protocol (OTLP) output to collect and forward logs to OTLP receivers. 
The OTLP output uses the specification defined by the OpenTelemetry Observability framework to send data over HTTP with JSON encoding. - - -[IMPORTANT] ----- -{FeatureName} is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. -For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. ----- - -* Create or edit a ClusterLogForwarder custom resource (CR) to enable forwarding using OTLP by adding the following annotation: -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - annotations: - observability.openshift.io/tech-preview-otlp-output: "enabled" 1 - name: clf-otlp -spec: - serviceAccount: - name: - outputs: - - name: otlp - type: otlp - otlp: - tuning: - compression: gzip - deliveryMode: AtLeastOnce - maxRetryDuration: 20 - maxWrite: 10M - minRetryDuration: 5 - url: 2 - pipelines: - - inputRefs: - - application - - infrastructure - - audit - name: otlp-logs - outputRefs: - - otlp -``` - -Use this annotation to enable the OpenTelemetry Protocol (OTLP) output, which is a Technology Preview feature. -This URL must be absolute and is a placeholder for the OTLP endpoint where logs are sent. - - -[NOTE] ----- -The OTLP output uses the OpenTelemetry data model, which is different from the ViaQ data model that is used by other output types. It adheres to the OTLP using OpenTelemetry Semantic Conventions defined by the OpenTelemetry Observability framework. ----- - -## Pipelines - -Pipelines are configured in an array under spec.pipelines. Each pipeline must have a unique name and consists of: - -inputRefs:: Names of inputs whose logs should be forwarded to this pipeline. -outputRefs:: Names of outputs to send logs to. -filterRefs:: (optional) Names of filters to apply. - -The order of filterRefs matters, as they are applied sequentially. Earlier filters can drop messages that will not be processed by later filters. - -## Filters - -Filters are configured in an array under spec.filters. They can match incoming log messages based on the value of structured fields and modify or drop them. - -Administrators can configure the following types of filters: - -# Enabling multi-line exception detection - -Enables multi-line error detection of container logs. - - -[WARNING] ----- -Enabling this feature could have performance implications and may require additional computing resources or alternate logging solutions. ----- - -Log parsers often incorrectly identify separate lines of the same exception as separate exceptions. This leads to extra log entries and an incomplete or inaccurate view of the traced information. - - -```java -java.lang.NullPointerException: Cannot invoke "String.toString()" because "" is null - at testjava.Main.handle(Main.java:47) - at testjava.Main.printMe(Main.java:19) - at testjava.Main.main(Main.java:10) -``` - - -* To enable logging to detect multi-line exceptions and reassemble them into a single log entry, ensure that the ClusterLogForwarder Custom Resource (CR) contains a detectMultilineErrors field under the .spec.filters. 
- - -```yaml -apiVersion: "observability.openshift.io/v1" -kind: ClusterLogForwarder -metadata: - name: - namespace: -spec: - serviceAccount: - name: - filters: - - name: - type: detectMultilineException - pipelines: - - inputRefs: - - - name: - filterRefs: - - - outputRefs: - - -``` - - -## Details - -When log messages appear as a consecutive sequence forming an exception stack trace, they are combined into a single, unified log record. The first log message’s content is replaced with the concatenated content of all the message fields in the sequence. - -The collector supports the following languages: - -* Java -* JS -* Ruby -* Python -* Golang -* PHP -* Dart - -# Forwarding logs over HTTP - -To enable forwarding logs over HTTP, specify http as the output type in the ClusterLogForwarder custom resource (CR). - -* Create or edit the ClusterLogForwarder CR using the template below: -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - name: - namespace: -spec: - managementState: Managed - outputs: - - name: - type: http - http: - headers: 1 - h1: v1 - h2: v2 - authentication: - username: - key: username - secretName: - password: - key: password - secretName: - timeout: 300 - proxyURL: 2 - url: 3 - tls: - insecureSkipVerify: 4 - ca: - key: - secretName: 5 - pipelines: - - inputRefs: - - application - name: pipe1 - outputRefs: - - 6 - serviceAccount: - name: 7 -``` - -Additional headers to send with the log record. -Optional: URL of the HTTP/HTTPS proxy that should be used to forward logs over http or https from this output. This setting overrides any default proxy settings for the cluster or the node. -Destination address for logs. -Values are either true or false. -Secret name for destination credentials. -This value should be the same as the output name. -The name of your service account. - -# Forwarding logs using the syslog protocol - -You can use the syslog RFC3164 or RFC5424 protocol to send a copy of your logs to an external log aggregator that is configured to accept the protocol instead of, or in addition to, the default Elasticsearch log store. You are responsible for configuring the external log aggregator, such as a syslog server, to receive the logs from Red Hat OpenShift Container Platform. - -To configure log forwarding using the syslog protocol, you must create a ClusterLogForwarder custom resource (CR) with one or more outputs to the syslog servers, and pipelines that use those outputs. The syslog output can use a UDP, TCP, or TLS connection. - -* You must have a logging server that is configured to receive the logging data using the specified protocol or format. - -1. Create or edit a YAML file that defines the ClusterLogForwarder CR object: - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - name: collector -spec: - managementState: Managed - outputs: - - name: rsyslog-east 1 - syslog: - appName: 2 - enrichment: KubernetesMinimal - facility: 3 - msgId: 4 - payloadKey: 5 - procId: 6 - rfc: 7 - severity: informational 8 - tuning: - deliveryMode: 9 - url: 10 - tls: 11 - ca: - key: ca-bundle.crt - secretName: syslog-secret - type: syslog - pipelines: - - inputRefs: 12 - - application - name: syslog-east 13 - outputRefs: - - rsyslog-east - serviceAccount: 14 - name: logcollector -``` - -Specify a name for the output. -Optional: Specify the value for the APP-NAME part of the syslog message header. The value must conform with The Syslog Protocol. 
The value can be a combination of static and dynamic values consisting of field paths followed by ||, and then followed by another field path or a static value. The maximum length of the final values is truncated to 48 characters. You must encase a dynamic value curly brackets and the value must be followed with a static fallback value separated with ||. Static values can only contain alphanumeric characters along with dashes, underscores, dots and forward slashes. Example value: -{.||"none"}. -Optional: Specify the value for Facility part of the syslog-msg header. -Optional: Specify the value for MSGID part of the syslog-msg header. The value can be a combination of static and dynamic values consisting of field paths followed by ||, and then followed by another field path or a static value. The maximum length of the final values is truncated to 32 characters. You must encase a dynamic value curly brackets and the value must be followed with a static fallback value separated with ||. Static values can only contain alphanumeric characters along with dashes, underscores, dots and forward slashes. Example value: -{.||"none"}. -Optional: Specify the record field to use as the payload. The payloadKey value must be a single field path encased in single curly brackets {}. Example: {.}. -Optional: Specify the value for the PROCID part of the syslog message header. The value must conform with The Syslog Protocol. The value can be a combination of static and dynamic values consisting of field paths followed by ||, and then followed by another field path or a static value. The maximum length of the final values is truncated to 48 characters. You must encase a dynamic value curly brackets and the value must be followed with a static fallback value separated with ||. Static values can only contain alphanumeric characters along with dashes, underscores, dots and forward slashes. Example value: -{.||"none"}. -Optional: Set the RFC that the generated messages conform to. The value can be RFC3164 or RFC5424. -Optional: Set the severity level for the message. For more information, see The Syslog Protocol. -Optional: Set the delivery mode for log forwarding. The value can be either AtLeastOnce, or AtMostOnce. -Specify the absolute URL with a scheme. Valid schemes are: tcp, tls, and udp. For example: tls://syslog-receiver.example.com:6514. -Specify the settings for controlling options of the transport layer security (TLS) client connections. -Specify which log types to forward by using the pipeline: application, infrastructure, or audit. -Specify a name for the pipeline. -The name of your service account. -2. Create the CR object: - -```terminal -$ oc create -f .yaml -``` - - -## Adding log source information to the message output - -You can add namespace_name, pod_name, and container_name elements to the message field of the record by adding the enrichment field to your ClusterLogForwarder custom resource (CR). - - -```yaml -# ... - spec: - outputs: - - name: syslogout - syslog: - enrichment: KubernetesMinimal: true - facility: user - payloadKey: message - rfc: RFC3164 - severity: debug - tag: mytag - type: syslog - url: tls://syslog-receiver.example.com:6514 - pipelines: - - inputRefs: - - application - name: test-app - outputRefs: - - syslogout -# ... -``` - - - -[NOTE] ----- -This configuration is compatible with both RFC3164 and RFC5424. 
----- - - -```text - 2025-03-03T11:48:01+00:00 example-worker-x syslogsyslogserverd846bb9b: {...} -``` - - - -```text -2025-03-03T11:48:01+00:00 example-worker-x syslogsyslogserverd846bb9b: namespace_name=cakephp-project container_name=mysql pod_name=mysql-1-wr96h,message: {...} -``` - - -# Configuring content filters to drop unwanted log records - -When the drop filter is configured, the log collector evaluates log streams according to the filters before forwarding. The collector drops unwanted log records that match the specified configuration. - -1. Add a configuration for a filter to the filters spec in the ClusterLogForwarder CR. - -The following example shows how to configure the ClusterLogForwarder CR to drop log records based on regular expressions: -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: -# ... -spec: - serviceAccount: - name: - filters: - - name: - type: drop 1 - drop: 2 - - test: 3 - - field: .kubernetes.labels."foo-bar/baz" 4 - matches: .+ 5 - - field: .kubernetes.pod_name - notMatches: "my-pod" 6 - pipelines: - - name: 7 - filterRefs: [""] -# ... -``` - -Specifies the type of filter. The drop filter drops log records that match the filter configuration. -Specifies configuration options for applying the drop filter. -Specifies the configuration for tests that are used to evaluate whether a log record is dropped. -* If all the conditions specified for a test are true, the test passes and the log record is dropped. -* When multiple tests are specified for the drop filter configuration, if any of the tests pass, the record is dropped. -* If there is an error evaluating a condition, for example, the field is missing from the log record being evaluated, that condition evaluates to false. -Specifies a dot-delimited field path, which is a path to a field in the log record. The path can contain alpha-numeric characters and underscores (a-zA-Z0-9_), for example, .kubernetes.namespace_name. If segments contain characters outside of this range, the segment must be in quotes, for example, .kubernetes.labels."foo.bar-bar/baz". You can include multiple field paths in a single test configuration, but they must all evaluate to true for the test to pass and the drop filter to be applied. -Specifies a regular expression. If log records match this regular expression, they are dropped. You can set either the matches or notMatches condition for a single field path, but not both. -Specifies a regular expression. If log records do not match this regular expression, they are dropped. You can set either the matches or notMatches condition for a single field path, but not both. -Specifies the pipeline that the drop filter is applied to. -2. Apply the ClusterLogForwarder CR by running the following command: - -```terminal -$ oc apply -f .yaml -``` - - -The following additional example shows how you can configure the drop filter to only keep higher priority log records: - - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: -# ... -spec: - serviceAccount: - name: - filters: - - name: important - type: drop - drop: - - test: - - field: .message - notMatches: "(?i)critical|error" - - field: .level - matches: "info|warning" -# ... -``` - - -In addition to including multiple field paths in a single test configuration, you can also include additional tests that are treated as OR checks. In the following example, records are dropped if either test configuration evaluates to true. 
However, for the second test configuration, both field specs must be true for it to be evaluated to true: - - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: -# ... -spec: - serviceAccount: - name: - filters: - - name: important - type: drop - drop: - - test: - - field: .kubernetes.namespace_name - matches: "^open" - - test: - - field: .log_type - matches: "application" - - field: .kubernetes.pod_name - notMatches: "my-pod" -# ... -``` - - -# Overview of API audit filter - -OpenShift API servers generate audit events for each API call, detailing the request, response, and the identity of the requester, leading to large volumes of data. The API Audit filter uses rules to enable the exclusion of non-essential events and the reduction of event size, facilitating a more manageable audit trail. Rules are checked in order, and checking stops at the first match. The amount of data that is included in an event is determined by the value of the level field: - -* None: The event is dropped. -* Metadata: Audit metadata is included, request and response bodies are removed. -* Request: Audit metadata and the request body are included, the response body is removed. -* RequestResponse: All data is included: metadata, request body and response body. The response body can be very large. For example, oc get pods -A generates a response body containing the YAML description of every pod in the cluster. - -The ClusterLogForwarder custom resource (CR) uses the same format as the standard Kubernetes audit policy, while providing the following additional functions: - -Wildcards:: Names of users, groups, namespaces, and resources can have a leading or trailing * asterisk character. For example, the namespace openshift-\* matches openshift-apiserver or openshift-authentication. Resource \*/status matches Pod/status or Deployment/status. -Default Rules:: Events that do not match any rule in the policy are filtered as follows: -* Read-only system events such as get, list, and watch are dropped. -* Service account write events that occur within the same namespace as the service account are dropped. -* All other events are forwarded, subject to any configured rate limits. - -To disable these defaults, either end your rules list with a rule that has only a level field or add an empty rule. - -Omit Response Codes:: A list of integer status codes to omit. You can drop events based on the HTTP status code in the response by using the OmitResponseCodes field, which lists HTTP status codes for which no events are created. The default value is [404, 409, 422, 429]. If the value is an empty list, [], then no status codes are omitted. - -The ClusterLogForwarder CR audit policy acts in addition to the Red Hat OpenShift Container Platform audit policy. The ClusterLogForwarder CR audit filter changes what the log collector forwards and provides the ability to filter by verb, user, group, namespace, or resource. You can create multiple filters to send different summaries of the same audit stream to different places. For example, you can send a detailed stream to the local cluster log store and a less detailed stream to a remote site. - - -[NOTE] ----- -You must have a cluster role collect-audit-logs to collect the audit logs. The following example provided is intended to illustrate the range of rules possible in an audit policy and is not a recommended configuration. 
----- - - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - name: - namespace: -spec: - serviceAccount: - name: - pipelines: - - name: my-pipeline - inputRefs: audit 1 - filterRefs: my-policy 2 - filters: - - name: my-policy - type: kubeAPIAudit - kubeAPIAudit: - # Don't generate audit events for all requests in RequestReceived stage. - omitStages: - - "RequestReceived" - - rules: - # Log pod changes at RequestResponse level - - level: RequestResponse - resources: - - group: "" - resources: ["pods"] - - # Log "pods/log", "pods/status" at Metadata level - - level: Metadata - resources: - - group: "" - resources: ["pods/log", "pods/status"] - - # Don't log requests to a configmap called "controller-leader" - - level: None - resources: - - group: "" - resources: ["configmaps"] - resourceNames: ["controller-leader"] - - # Don't log watch requests by the "system:kube-proxy" on endpoints or services - - level: None - users: ["system:kube-proxy"] - verbs: ["watch"] - resources: - - group: "" # core API group - resources: ["endpoints", "services"] - - # Don't log authenticated requests to certain non-resource URL paths. - - level: None - userGroups: ["system:authenticated"] - nonResourceURLs: - - "/api*" # Wildcard matching. - - "/version" - - # Log the request body of configmap changes in kube-system. - - level: Request - resources: - - group: "" # core API group - resources: ["configmaps"] - # This rule only applies to resources in the "kube-system" namespace. - # The empty string "" can be used to select non-namespaced resources. - namespaces: ["kube-system"] - - # Log configmap and secret changes in all other namespaces at the Metadata level. - - level: Metadata - resources: - - group: "" # core API group - resources: ["secrets", "configmaps"] - - # Log all other resources in core and extensions at the Request level. - - level: Request - resources: - - group: "" # core API group - - group: "extensions" # Version of group should NOT be included. - - # A catch-all rule to log all other requests at the Metadata level. - - level: Metadata -``` - - -The log types that are collected. The value for this field can be audit for audit logs, application for application logs, infrastructure for infrastructure logs, or a named input that has been defined for your application. -The name of your audit policy. - -# Filtering application logs at input by including the label expressions or a matching label key and values - -You can include the application logs based on the label expressions or a matching label key and its values by using the input selector. - -1. Add a configuration for a filter to the input spec in the ClusterLogForwarder CR. - -The following example shows how to configure the ClusterLogForwarder CR to include logs based on label expressions or matched label key/values: -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -# ... -spec: - serviceAccount: - name: - inputs: - - name: mylogs - application: - selector: - matchExpressions: - - key: env 1 - operator: In 2 - values: ["prod", "qa"] 3 - - key: zone - operator: NotIn - values: ["east", "west"] - matchLabels: 4 - app: one - name: app1 - type: application -# ... -``` - -Specifies the label key to match. -Specifies the operator. Valid values include: In, NotIn, Exists, and DoesNotExist. -Specifies an array of string values. If the operator value is either Exists or DoesNotExist, the value array must be empty. 
-Specifies an exact key or value mapping. -2. Apply the ClusterLogForwarder CR by running the following command: - -```terminal -$ oc apply -f .yaml -``` - - -# Configuring content filters to prune log records - -When the prune filter is configured, the log collector evaluates log streams according to the filters before forwarding. The collector prunes log records by removing low value fields such as pod annotations. - -1. Add a configuration for a filter to the prune spec in the ClusterLogForwarder CR. - -The following example shows how to configure the ClusterLogForwarder CR to prune log records based on field paths: - -[IMPORTANT] ----- -If both are specified, records are pruned based on the notIn array first, which takes precedence over the in array. After records have been pruned by using the notIn array, they are then pruned by using the in array. ----- -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: -# ... -spec: - serviceAccount: - name: - filters: - - name: - type: prune 1 - prune: 2 - in: [.kubernetes.annotations, .kubernetes.namespace_id] 3 - notIn: [.kubernetes,.log_type,.message,."@timestamp"] 4 - pipelines: - - name: 5 - filterRefs: [""] -# ... -``` - -Specify the type of filter. The prune filter prunes log records by configured fields. -Specify configuration options for applying the prune filter. The in and notIn fields are specified as arrays of dot-delimited field paths, which are paths to fields in log records. These paths can contain alpha-numeric characters and underscores (a-zA-Z0-9_), for example, .kubernetes.namespace_name. If segments contain characters outside of this range, the segment must be in quotes, for example, .kubernetes.labels."foo.bar-bar/baz". -Optional: Any fields that are specified in this array are removed from the log record. -Optional: Any fields that are not specified in this array are removed from the log record. -Specify the pipeline that the prune filter is applied to. - -[NOTE] ----- -The filters exempts the log_type, .log_source, and .message fields. ----- -2. Apply the ClusterLogForwarder CR by running the following command: - -```terminal -$ oc apply -f .yaml -``` - - -# Filtering the audit and infrastructure log inputs by source - -You can define the list of audit and infrastructure sources to collect the logs by using the input selector. - -1. Add a configuration to define the audit and infrastructure sources in the ClusterLogForwarder CR. - -The following example shows how to configure the ClusterLogForwarder CR to define audit and infrastructure sources: -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -# ... -spec: - serviceAccount: - name: - inputs: - - name: mylogs1 - type: infrastructure - infrastructure: - sources: 1 - - node - - name: mylogs2 - type: audit - audit: - sources: 2 - - kubeAPI - - openshiftAPI - - ovn -# ... -``` - -Specifies the list of infrastructure sources to collect. The valid sources include: -* node: Journal log from the node -* container: Logs from the workloads deployed in the namespaces -Specifies the list of audit sources to collect. The valid sources include: -* kubeAPI: Logs from the Kubernetes API servers -* openshiftAPI: Logs from the OpenShift API servers -* auditd: Logs from a node auditd service -* ovn: Logs from an open virtual network service -2. 
Apply the ClusterLogForwarder CR by running the following command: - -```terminal -$ oc apply -f .yaml -``` - - -# Filtering application logs at input by including or excluding the namespace or container name - -You can include or exclude the application logs based on the namespace and container name by using the input selector. - -1. Add a configuration to include or exclude the namespace and container names in the ClusterLogForwarder CR. - -The following example shows how to configure the ClusterLogForwarder CR to include or exclude namespaces and container names: -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -# ... -spec: - serviceAccount: - name: - inputs: - - name: mylogs - application: - includes: - - namespace: "my-project" 1 - container: "my-container" 2 - excludes: - - container: "other-container*" 3 - namespace: "other-namespace" 4 - type: application -# ... -``` - -Specifies that the logs are only collected from these namespaces. -Specifies that the logs are only collected from these containers. -Specifies the pattern of namespaces to ignore when collecting the logs. -Specifies the set of containers to ignore when collecting the logs. - -[NOTE] ----- -The excludes field takes precedence over the includes field. ----- -2. Apply the ClusterLogForwarder CR by running the following command: - -```terminal -$ oc apply -f .yaml -``` diff --git a/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.2/log6x-configuring-lokistack-otlp-6.2.txt b/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.2/log6x-configuring-lokistack-otlp-6.2.txt deleted file mode 100644 index c1df8b35..00000000 --- a/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.2/log6x-configuring-lokistack-otlp-6.2.txt +++ /dev/null @@ -1,172 +0,0 @@ -# OTLP data ingestion in Loki - - -You can use an API endpoint by using the OpenTelemetry Protocol (OTLP) with Logging. As OTLP is a standardized format not specifically designed for Loki, OTLP requires an additional Loki configuration to map data format of OpenTelemetry to data model of Loki. OTLP lacks concepts such as stream labels or structured metadata. Instead, OTLP provides metadata about log entries as attributes, grouped into the following three categories: -* Resource -* Scope -* Log -You can set metadata for multiple entries simultaneously or individually as needed. - -# Configuring LokiStack for OTLP data ingestion - - -[IMPORTANT] ----- -{FeatureName} is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. -For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. ----- - -To configure a LokiStack custom resource (CR) for OTLP ingestion, follow these steps: - -* Ensure that your Loki setup supports structured metadata, introduced in schema version 13 to enable OTLP log ingestion. - -1. Set the schema version: -* When creating a new LokiStack CR, set version: v13 in the storage schema configuration. - -[NOTE] ----- -For existing configurations, add a new schema entry with version: v13 and an effectiveDate in the future. 
For more information on updating schema versions, see Upgrading Schemas (Grafana documentation). ----- -2. Configure the storage schema as follows: -Example configure storage schema - -```yaml -# ... -spec: - storage: - schemas: - - version: v13 - effectiveDate: 2024-10-25 -``` - - -Once the effectiveDate has passed, the v13 schema takes effect, enabling your LokiStack to store structured metadata. - -# Attribute mapping - -When you set the Loki Operator to the openshift-logging mode, Loki Operator automatically applies a default set of attribute mappings. These mappings align specific OTLP attributes with stream labels and structured metadata of Loki. - -For typical setups, these default mappings are sufficient. However, you might need to customize attribute mapping in the following cases: - -* Using a custom collector: If your setup includes a custom collector that generates additional attributes that you do not want to store, consider customizing the mapping to ensure these attributes are dropped by Loki. -* Adjusting attribute detail levels: If the default attribute set is more detailed than necessary, you can reduce it to essential attributes only. This can avoid excessive data storage and streamline the logging process. - -## Custom attribute mapping for OpenShift - -When using the Loki Operator in openshift-logging mode, attribute mapping follow OpenShift default values, but you can configure custom mappings to adjust default values. -In the openshift-logging mode, you can configure custom attribute mappings globally for all tenants or for individual tenants as needed. When you define custom mappings, they are appended to the OpenShift default values. If you do not need default labels, you can disable them in the tenant configuration. - - -[NOTE] ----- -A major difference between the Loki Operator and Loki lies in inheritance handling. Loki copies only default_resource_attributes_as_index_labels to tenants by default, while the Loki Operator applies the entire global configuration to each tenant in the openshift-logging mode. ----- - -Within LokiStack, attribute mapping configuration is managed through the limits setting. See the following example LokiStack configuration: - - -```yaml -# ... -spec: - limits: - global: - otlp: {} 1 - tenants: - application: 2 - otlp: {} -``` - - -Defines global OTLP attribute configuration. -Defines the OTLP attribute configuration for the application tenant within the openshift-logging mode. You can also configure infrastructure and audit tenants in addition to application tenants. - - -[NOTE] ----- -You can use both global and per-tenant OTLP configurations for mapping attributes to stream labels. ----- - -Stream labels derive only from resource-level attributes, which the LokiStack resource structure reflects. See the following LokiStack example configuration: - - -```yaml -spec: - limits: - global: - otlp: - streamLabels: - resourceAttributes: - - name: "k8s.namespace.name" - - name: "k8s.pod.name" - - name: "k8s.container.name" -``` - - -You can drop attributes of type resource, scope, or log from the log entry. - - -```yaml -# ... -spec: - limits: - global: - otlp: - streamLabels: -# ... - drop: - resourceAttributes: - - name: "process.command_line" - - name: "k8s\\.pod\\.labels\\..+" - regex: true - scopeAttributes: - - name: "service.name" - logAttributes: - - name: "http.route" -``` - - -You can use regular expressions by setting regex: true to apply a configuration for attributes with similar names. 
- - -[IMPORTANT] ----- -Avoid using regular expressions for stream labels, as this can increase data volume. ----- - -Attributes that are not explicitly set as stream labels or dropped from the entry are saved as structured metadata by default. - -## Customizing OpenShift defaults - -In the openshift-logging mode, certain attributes are required and cannot be removed from the configuration due to their role in OpenShift functions. Other attributes, labeled recommended, might be dropped if performance is impacted. For information about the attributes, see OpenTelemetry data model attributes. - -When using the openshift-logging mode without custom attributes, you can achieve immediate compatibility with OpenShift tools. If additional attributes are needed as stream labels or some attributes need to be droped, use custom configuration. Custom configurations can merge with default configurations. - -## Removing recommended attributes - -To reduce default attributes in the openshift-logging mode, disable recommended attributes: - - -```yaml -# ... -spec: - tenants: - mode: openshift-logging - openshift: - otlp: - disableRecommendedAttributes: true 1 -``` - - -Set disableRecommendedAttributes: true to remove recommended attributes, which limits default attributes to the required attributes or stream labels. - -[NOTE] ----- -This setting might negatively impact query performance, as it removes default stream labels. You must pair this option with a custom attribute configuration to retain attributes essential for queries. ----- - -# Additional resources - -* Loki labels (Grafana documentation) -* Structured metadata (Grafana documentation) -* OpenTelemetry data model -* OpenTelemetry attribute (OpenTelemetry documentation) \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.2/log6x-loki-6.2.txt b/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.2/log6x-loki-6.2.txt deleted file mode 100644 index fd0b4f0e..00000000 --- a/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.2/log6x-loki-6.2.txt +++ /dev/null @@ -1,764 +0,0 @@ -# Storing logs with LokiStack - - -You can configure a LokiStack custom resource (CR) to store application, audit, and infrastructure-related logs. -Loki is a horizontally scalable, highly available, multi-tenant log aggregation system offered as a GA log store for logging for Red Hat OpenShift that can be visualized with the OpenShift Observability UI. The Loki configuration provided by OpenShift Logging is a short-term log store designed to enable users to perform fast troubleshooting with the collected logs. For that purpose, the logging for Red Hat OpenShift configuration of Loki has short-term storage, and is optimized for very recent queries. - -[IMPORTANT] ----- -For long-term storage or queries over a long time period, users should look to log stores external to their cluster. Loki sizing is only tested and supported for short term storage, for a maximum of 30 days. ----- - -# Loki deployment sizing - -Sizing for Loki follows the format of 1x. where the value 1x is number of instances and specifies performance capabilities. - -The 1x.pico configuration defines a single Loki deployment with minimal resource and limit requirements, offering high availability (HA) support for all Loki components. This configuration is suited for deployments that do not require a single replication factor or auto-compaction. 
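As a point of reference, the deployment size is selected through the size field in the LokiStack custom resource. The following partial example is a sketch that uses the 1x.pico value described above; choose the size that fits your workload:

```yaml
# ...
spec:
  size: 1x.pico
# ...
```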
- -Disk requests are similar across size configurations, allowing customers to test different sizes to determine the best fit for their deployment needs. - - -[IMPORTANT] ----- -It is not possible to change the number 1x for the deployment size. ----- - - - -# Prerequisites - -* You have installed the Loki Operator by using the command-line interface (CLI) or web console. -* You have created a serviceAccount CR in the same namespace as the ClusterLogForwarder CR. -* You have assigned the collect-audit-logs, collect-application-logs, and collect-infrastructure-logs cluster roles to the serviceAccount CR. - -# Core set up and configuration - -Use role-based access controls, basic monitoring, and pod placement to deploy Loki. - -# Authorizing LokiStack rules RBAC permissions - -Administrators can allow users to create and manage their own alerting and recording rules by binding cluster roles to usernames. -Cluster roles are defined as ClusterRole objects that contain necessary role-based access control (RBAC) permissions for users. - -The following cluster roles for alerting and recording rules are available for LokiStack: - - - -## Examples - -To apply cluster roles for a user, you must bind an existing cluster role to a specific username. - -Cluster roles can be cluster or namespace scoped, depending on which type of role binding you use. -When a RoleBinding object is used, as when using the oc adm policy add-role-to-user command, the cluster role only applies to the specified namespace. -When a ClusterRoleBinding object is used, as when using the oc adm policy add-cluster-role-to-user command, the cluster role applies to all namespaces in the cluster. - -The following example command gives the specified user create, read, update and delete (CRUD) permissions for alerting rules in a specific namespace in the cluster: - - -```terminal -$ oc adm policy add-role-to-user alertingrules.loki.grafana.com-v1-admin -n -``` - - -The following command gives the specified user administrator permissions for alerting rules in all namespaces: - - -```terminal -$ oc adm policy add-cluster-role-to-user alertingrules.loki.grafana.com-v1-admin -``` - - -# Creating a log-based alerting rule with Loki - -The AlertingRule CR contains a set of specifications and webhook validation definitions to declare groups of alerting rules for a single LokiStack instance. In addition, the webhook validation definition provides support for rule validation conditions: - -* If an AlertingRule CR includes an invalid interval period, it is an invalid alerting rule -* If an AlertingRule CR includes an invalid for period, it is an invalid alerting rule. -* If an AlertingRule CR includes an invalid LogQL expr, it is an invalid alerting rule. -* If an AlertingRule CR includes two groups with the same name, it is an invalid alerting rule. -* If none of the above applies, an alerting rule is considered valid. - - - -1. 
Create an AlertingRule custom resource (CR): -Example infrastructure AlertingRule CR - -```yaml - apiVersion: loki.grafana.com/v1 - kind: AlertingRule - metadata: - name: loki-operator-alerts - namespace: openshift-operators-redhat 1 - labels: 2 - openshift.io/: "true" - spec: - tenantID: "infrastructure" 3 - groups: - - name: LokiOperatorHighReconciliationError - rules: - - alert: HighPercentageError - expr: | 4 - sum(rate({kubernetes_namespace_name="openshift-operators-redhat", kubernetes_pod_name=~"loki-operator-controller-manager.*"} |= "error" [1m])) by (job) - / - sum(rate({kubernetes_namespace_name="openshift-operators-redhat", kubernetes_pod_name=~"loki-operator-controller-manager.*"}[1m])) by (job) - > 0.01 - for: 10s - labels: - severity: critical 5 - annotations: - summary: High Loki Operator Reconciliation Errors 6 - description: High Loki Operator Reconciliation Errors 7 -``` - -The namespace where this AlertingRule CR is created must have a label matching the LokiStack spec.rules.namespaceSelector definition. -The labels block must match the LokiStack spec.rules.selector definition. -AlertingRule CRs for infrastructure tenants are only supported in the openshift-*, kube-\*, or default namespaces. -The value for kubernetes_namespace_name: must match the value for metadata.namespace. -The value of this mandatory field must be critical, warning, or info. -This field is mandatory. -This field is mandatory. -Example application AlertingRule CR - -```yaml - apiVersion: loki.grafana.com/v1 - kind: AlertingRule - metadata: - name: app-user-workload - namespace: app-ns 1 - labels: 2 - openshift.io/: "true" - spec: - tenantID: "application" - groups: - - name: AppUserWorkloadHighError - rules: - - alert: - expr: | 3 - sum(rate({kubernetes_namespace_name="app-ns", kubernetes_pod_name=~"podName.*"} |= "error" [1m])) by (job) - for: 10s - labels: - severity: critical 4 - annotations: - summary: 5 - description: 6 -``` - -The namespace where this AlertingRule CR is created must have a label matching the LokiStack spec.rules.namespaceSelector definition. -The labels block must match the LokiStack spec.rules.selector definition. -Value for kubernetes_namespace_name: must match the value for metadata.namespace. -The value of this mandatory field must be critical, warning, or info. -The value of this mandatory field is a summary of the rule. -The value of this mandatory field is a detailed description of the rule. -2. Apply the AlertingRule CR: - -```terminal -$ oc apply -f .yaml -``` - - -# Configuring Loki to tolerate memberlist creation failure - -In an Red Hat OpenShift Container Platform cluster, administrators generally use a non-private IP network range. As a result, the LokiStack memberlist configuration fails because, by default, it only uses private IP networks. - -As an administrator, you can select the pod network for the memberlist configuration. You can modify the LokiStack custom resource (CR) to use the podIP address in the hashRing spec. To configure the LokiStack CR, use the following command: - - -```terminal -$ oc patch LokiStack logging-loki -n openshift-logging --type=merge -p '{"spec": {"hashRing":{"memberlist":{"instanceAddrType":"podIP"},"type":"memberlist"}}}' -``` - - - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: -# ... - hashRing: - type: memberlist - memberlist: - instanceAddrType: podIP -# ... 
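  # Setting instanceAddrType to podIP makes the memberlist ring advertise each pod's IP
  # address instead of relying on a private IP network range, as described above.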
-``` - - -# Enabling stream-based retention with Loki - -You can configure retention policies based on log streams. Rules for these may be set globally, per-tenant, or both. If you configure both, tenant rules apply before global rules. - - -[IMPORTANT] ----- -If there is no retention period defined on the s3 bucket or in the LokiStack custom resource (CR), then the logs are not pruned and they stay in the s3 bucket forever, which might fill up the s3 storage. ----- - - -[NOTE] ----- -Schema v13 is recommended. ----- - -1. Create a LokiStack CR: -* Enable stream-based retention globally as shown in the following example: -Example global stream-based retention for AWS - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: - limits: - global: 1 - retention: 2 - days: 20 - streams: - - days: 4 - priority: 1 - selector: '{kubernetes_namespace_name=~"test.+"}' 3 - - days: 1 - priority: 1 - selector: '{log_type="infrastructure"}' - managementState: Managed - replicationFactor: 1 - size: 1x.small - storage: - schemas: - - effectiveDate: "2020-10-11" - version: v13 - secret: - name: logging-loki-s3 - type: aws - storageClassName: gp3-csi - tenants: - mode: openshift-logging -``` - -Sets retention policy for all log streams. Note: This field does not impact the retention period for stored logs in object storage. -Retention is enabled in the cluster when this block is added to the CR. -Contains the LogQL query used to define the log stream.spec: -limits: -* Enable stream-based retention per-tenant basis as shown in the following example: -Example per-tenant stream-based retention for AWS - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: - limits: - global: - retention: - days: 20 - tenants: 1 - application: - retention: - days: 1 - streams: - - days: 4 - selector: '{kubernetes_namespace_name=~"test.+"}' 2 - infrastructure: - retention: - days: 5 - streams: - - days: 1 - selector: '{kubernetes_namespace_name=~"openshift-cluster.+"}' - managementState: Managed - replicationFactor: 1 - size: 1x.small - storage: - schemas: - - effectiveDate: "2020-10-11" - version: v13 - secret: - name: logging-loki-s3 - type: aws - storageClassName: gp3-csi - tenants: - mode: openshift-logging -``` - -Sets retention policy by tenant. Valid tenant types are application, audit, and infrastructure. -Contains the LogQL query used to define the log stream. -2. Apply the LokiStack CR: - -```terminal -$ oc apply -f .yaml -``` - - -# Loki pod placement - -You can control which nodes the Loki pods run on, and prevent other workloads from using those nodes, by using tolerations or node selectors on the pods. - -You can apply tolerations to the log store pods with the LokiStack custom resource (CR) and apply taints to a node with the node specification. A taint on a node is a key:value pair that instructs the node to repel all pods that do not allow the taint. Using a specific key:value pair that is not on other pods ensures that only the log store pods can run on that node. - - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: -# ... 
- template: - compactor: 1 - nodeSelector: - node-role.kubernetes.io/infra: "" 2 - distributor: - nodeSelector: - node-role.kubernetes.io/infra: "" - gateway: - nodeSelector: - node-role.kubernetes.io/infra: "" - indexGateway: - nodeSelector: - node-role.kubernetes.io/infra: "" - ingester: - nodeSelector: - node-role.kubernetes.io/infra: "" - querier: - nodeSelector: - node-role.kubernetes.io/infra: "" - queryFrontend: - nodeSelector: - node-role.kubernetes.io/infra: "" - ruler: - nodeSelector: - node-role.kubernetes.io/infra: "" -# ... -``` - - -Specifies the component pod type that applies to the node selector. -Specifies the pods that are moved to nodes containing the defined label. - - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: -# ... - template: - compactor: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - distributor: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - indexGateway: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - ingester: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - querier: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - queryFrontend: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - ruler: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - gateway: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved -# ... -``` - - -To configure the nodeSelector and tolerations fields of the LokiStack (CR), you can use the oc explain command to view the description and fields for a particular resource: - - -```terminal -$ oc explain lokistack.spec.template -``` - - - -```text -KIND: LokiStack -VERSION: loki.grafana.com/v1 - -RESOURCE: template - -DESCRIPTION: - Template defines the resource/limits/tolerations/nodeselectors per - component - -FIELDS: - compactor - Compactor defines the compaction component spec. - - distributor - Distributor defines the distributor component spec. -... 
-``` - - -For more detailed information, you can add a specific field: - - -```terminal -$ oc explain lokistack.spec.template.compactor -``` - - - -```text -KIND: LokiStack -VERSION: loki.grafana.com/v1 - -RESOURCE: compactor - -DESCRIPTION: - Compactor defines the compaction component spec. - -FIELDS: - nodeSelector - NodeSelector defines the labels required by a node to schedule the - component onto it. -... -``` - - -## Enhanced reliability and performance - -Use the following configurations to ensure reliability and efficiency of Loki in production. - -## Enabling authentication to cloud-based log stores using short-lived tokens - -Workload identity federation enables authentication to cloud-based log stores using short-lived tokens. - -* Use one of the following options to enable authentication: -* If you use the Red Hat OpenShift Container Platform web console to install the Loki Operator, clusters that use short-lived tokens are automatically detected. You are prompted to create roles and supply the data required for the Loki Operator to create a CredentialsRequest object, which populates a secret. -* If you use the OpenShift CLI (`oc`) to install the Loki Operator, you must manually create a Subscription object using the appropriate template for your storage provider, as shown in the following examples. This authentication strategy is only supported for the storage providers indicated. -Example Azure sample subscription - -```yaml -apiVersion: operators.coreos.com/v1alpha1 -kind: Subscription -metadata: - name: loki-operator - namespace: openshift-operators-redhat -spec: - channel: "stable-6.0" - installPlanApproval: Manual - name: loki-operator - source: redhat-operators - sourceNamespace: openshift-marketplace - config: - env: - - name: CLIENTID - value: - - name: TENANTID - value: - - name: SUBSCRIPTIONID - value: - - name: REGION - value: -``` - -Example AWS sample subscription - -```yaml -apiVersion: operators.coreos.com/v1alpha1 -kind: Subscription -metadata: - name: loki-operator - namespace: openshift-operators-redhat -spec: - channel: "stable-6.0" - installPlanApproval: Manual - name: loki-operator - source: redhat-operators - sourceNamespace: openshift-marketplace - config: - env: - - name: ROLEARN - value: -``` - - -## Configuring Loki to tolerate node failure - -The Loki Operator supports setting pod anti-affinity rules to request that pods of the same component are scheduled on different available nodes in the cluster. - -Affinity is a property of pods that controls the nodes on which they prefer to be scheduled. Anti-affinity is a property of pods -that prevents a pod from being scheduled on a node. - -In Red Hat OpenShift Container Platform, pod affinity and pod anti-affinity allow you to constrain which nodes your pod is eligible to be scheduled on based on the key-value labels on other pods. - -The Operator sets default, preferred podAntiAffinity rules for all Loki components, which includes the compactor, distributor, gateway, indexGateway, ingester, querier, queryFrontend, and ruler components. - -You can override the preferred podAntiAffinity settings for Loki components by configuring required settings in the requiredDuringSchedulingIgnoredDuringExecution field: - - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: -# ... - template: - ingester: - podAntiAffinity: - # ... 
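      # The required rule that follows overrides the Operator's default preferred
      # anti-affinity, so that ingester pods must be scheduled on different hosts.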
- requiredDuringSchedulingIgnoredDuringExecution: 1 - - labelSelector: - matchLabels: 2 - app.kubernetes.io/component: ingester - topologyKey: kubernetes.io/hostname -# ... -``` - - -The stanza to define a required rule. -The key-value pair (label) that must be matched to apply the rule. - -## LokiStack behavior during cluster restarts - -When an Red Hat OpenShift Container Platform cluster is restarted, LokiStack ingestion and the query path continue to operate within the available CPU and memory resources available for the node. This means that there is no downtime for the LokiStack during Red Hat OpenShift Container Platform cluster updates. This behavior is achieved by using PodDisruptionBudget resources. The Loki Operator provisions PodDisruptionBudget resources for Loki, which determine the minimum number of pods that must be available per component to ensure normal operations under certain conditions. - -## Advanced deployment and scalability - -To configure high availability, scalability, and error handling, use the following information. - -## Zone aware data replication - -The Loki Operator offers support for zone-aware data replication through pod topology spread constraints. Enabling this feature enhances reliability and safeguards against log loss in the event of a single zone failure. When configuring the deployment size as 1x.extra-small, 1x.small, or 1x.medium, the replication.factor field is automatically set to 2. - -To ensure proper replication, you need to have at least as many availability zones as the replication factor specifies. While it is possible to have more availability zones than the replication factor, having fewer zones can lead to write failures. Each zone should host an equal number of instances for optimal operation. - - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: - replicationFactor: 2 1 - replication: - factor: 2 2 - zones: - - maxSkew: 1 3 - topologyKey: topology.kubernetes.io/zone 4 -``` - - -Deprecated field, values entered are overwritten by replication.factor. -This value is automatically set when deployment size is selected at setup. -The maximum difference in number of pods between any two topology domains. The default is 1, and you cannot specify a value of 0. -Defines zones in the form of a topology key that corresponds to a node label. - -## Recovering Loki pods from failed zones - -In Red Hat OpenShift Container Platform a zone failure happens when specific availability zone resources become inaccessible. Availability zones are isolated areas within a cloud provider’s data center, aimed at enhancing redundancy and fault tolerance. If your Red Hat OpenShift Container Platform cluster is not configured to handle this, a zone failure can lead to service or data loss. - -Loki pods are part of a StatefulSet, and they come with Persistent Volume Claims (PVCs) provisioned by a StorageClass object. Each Loki pod and its PVCs reside in the same zone. When a zone failure occurs in a cluster, the StatefulSet controller automatically attempts to recover the affected pods in the failed zone. - - -[WARNING] ----- -The following procedure will delete the PVCs in the failed zone, and all data contained therein. To avoid complete data loss the replication factor field of the LokiStack CR should always be set to a value greater than 1 to ensure that Loki is replicating. ----- - -* Verify your LokiStack CR has a replication factor greater than 1. 
-* Zone failure detected by the control plane, and nodes in the failed zone are marked by cloud provider integration. - -The StatefulSet controller automatically attempts to reschedule pods in a failed zone. Because the associated PVCs are also in the failed zone, automatic rescheduling to a different zone does not work. You must manually delete the PVCs in the failed zone to allow successful re-creation of the stateful Loki Pod and its provisioned PVC in the new zone. - -1. List the pods in Pending status by running the following command: - -```terminal -$ oc get pods --field-selector status.phase==Pending -n openshift-logging -``` - -Example oc get pods output - -```terminal -NAME READY STATUS RESTARTS AGE 1 -logging-loki-index-gateway-1 0/1 Pending 0 17m -logging-loki-ingester-1 0/1 Pending 0 16m -logging-loki-ruler-1 0/1 Pending 0 16m -``` - -These pods are in Pending status because their corresponding PVCs are in the failed zone. -2. List the PVCs in Pending status by running the following command: - -```terminal -$ oc get pvc -o=json -n openshift-logging | jq '.items[] | select(.status.phase == "Pending") | .metadata.name' -r -``` - -Example oc get pvc output - -```terminal -storage-logging-loki-index-gateway-1 -storage-logging-loki-ingester-1 -wal-logging-loki-ingester-1 -storage-logging-loki-ruler-1 -wal-logging-loki-ruler-1 -``` - -3. Delete the PVC(s) for a pod by running the following command: - -```terminal -$ oc delete pvc -n openshift-logging -``` - -4. Delete the pod(s) by running the following command: - -```terminal -$ oc delete pod -n openshift-logging -``` - - -Once these objects have been successfully deleted, they should automatically be rescheduled in an available zone. - -### Troubleshooting PVC in a terminating state - -The PVCs might hang in the terminating state without being deleted, if PVC metadata finalizers are set to kubernetes.io/pv-protection. Removing the finalizers should allow the PVCs to delete successfully. - -* Remove the finalizer for each PVC by running the command below, then retry deletion. - -```terminal -$ oc patch pvc -p '{"metadata":{"finalizers":null}}' -n openshift-logging -``` - - -## Troubleshooting Loki rate limit errors - -If the Log Forwarder API forwards a large block of messages that exceeds the rate limit to Loki, Loki generates rate limit (429) errors. - -These errors can occur during normal operation. For example, when adding the logging to a cluster that already has some logs, rate limit errors might occur while the logging tries to ingest all of the existing log entries. In this case, if the rate of addition of new logs is less than the total rate limit, the historical data is eventually ingested, and the rate limit errors are resolved without requiring user intervention. - -In cases where the rate limit errors continue to occur, you can fix the issue by modifying the LokiStack custom resource (CR). - - -[IMPORTANT] ----- -The LokiStack CR is not available on Grafana-hosted Loki. This topic does not apply to Grafana-hosted Loki servers. ----- - -* The Log Forwarder API is configured to forward logs to Loki. -* Your system sends a block of messages that is larger than 2 MB to Loki. For example: - -```text -"values":[["1630410392689800468","{\"kind\":\"Event\",\"apiVersion\":\ -....... -...... -...... -...... 
-\"received_at\":\"2021-08-31T11:46:32.800278+00:00\",\"version\":\"1.7.4 1.6.0\"}},\"@timestamp\":\"2021-08-31T11:46:32.799692+00:00\",\"viaq_index_name\":\"audit-write\",\"viaq_msg_id\":\"MzFjYjJkZjItNjY0MC00YWU4LWIwMTEtNGNmM2E5ZmViMGU4\",\"log_type\":\"audit\"}"]]}]} -``` - -* After you enter oc logs -n openshift-logging -l component=collector, the collector logs in your cluster show a line containing one of the following error messages: - -```text -429 Too Many Requests Ingestion rate limit exceeded -``` - -Example Vector error message - -```text -2023-08-25T16:08:49.301780Z WARN sink{component_kind="sink" component_id=default_loki_infra component_type=loki component_name=default_loki_infra}: vector::sinks::util::retries: Retrying after error. error=Server responded with an error: 429 Too Many Requests internal_log_rate_limit=true -``` - - -The error is also visible on the receiving end. For example, in the LokiStack ingester pod: -Example Loki ingester error message - -```text -level=warn ts=2023-08-30T14:57:34.155592243Z caller=grpc_logging.go:43 duration=1.434942ms method=/logproto.Pusher/Push err="rpc error: code = Code(429) desc = entry with timestamp 2023-08-30 14:57:32.012778399 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream -``` - - -* Update the ingestionBurstSize and ingestionRate fields in the LokiStack CR: - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: - limits: - global: - ingestion: - ingestionBurstSize: 16 1 - ingestionRate: 8 2 -# ... -``` - -The ingestionBurstSize field defines the maximum local rate-limited sample size per distributor replica in MB. This value is a hard limit. Set this value to at least the maximum logs size expected in a single push request. Single requests that are larger than the ingestionBurstSize value are not permitted. -The ingestionRate field is a soft limit on the maximum amount of ingested samples per second in MB. Rate limit errors occur if the rate of logs exceeds the limit, but the collector retries sending the logs. As long as the total average is lower than the limit, the system recovers and errors are resolved without user intervention. \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.2/log6x-release-notes-6.2.txt b/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.2/log6x-release-notes-6.2.txt deleted file mode 100644 index a8856929..00000000 --- a/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.2/log6x-release-notes-6.2.txt +++ /dev/null @@ -1,114 +0,0 @@ -# Logging 6.2 Release Notes - - - -# Logging 6.2.3 Release Notes - -This release includes RHBA-2025:8138. - -## Bug Fixes - -* Before this update, the cluster logging installation page contained an incorrect URL to the installation steps in the documentation. With this update, the link has been corrected, resolving the issue and helping users successfully navigate to the documentation. (LOG-6760) -* Before this update, the API documentation about default settings of the tuning delivery mode for log forwarding lacked clarity and sufficient detail. This could lead to users experiencing difficulty in understanding or optimally configuring these settings for their logging pipelines. 
With this update, the documentation has been revised to provide more comprehensive and clearer guidance on tuning delivery mode default settings, resolving potential ambiguities. (LOG-7131) -* Before this update, merging data from the message field into the root of a Syslog log event caused the log event to be inconsistent with the ViaQ data model. The inconsistency could lead to overwritten system information, data duplication, or event corruption. This update revises Syslog parsing and merging for the Syslog output to align with other output types, resolving this inconsistency. (LOG-7185) -* Before this update, log forwarding failed if you configured a cluster-wide proxy with a URL containing a username with an encoded at sign (@); for example user%40name. This update resolves the issue by adding correct support for URL-encoded values in proxy configurations. (LOG-7188) - -## CVEs - -* CVE-2022-49043 -* CVE-2024-12087 -* CVE-2024-12088 -* CVE-2024-12133 -* CVE-2024-12243 -* CVE-2024-12747 -* CVE-2024-56171 -* CVE-2025-0395 -* CVE-2025-24928 - -# Logging 6.2.2 Release Notes - -This release includes RHBA-2025:4526. - -## Bug Fixes - -* Before this update, logs without the responseStatus.code field caused parsing errors in the Loki distributor component. This happened when using the OpenTelemetry data model. With this update, logs without the responseStatus.code field are parsed correctly. (LOG-7012) -* Before this update, the Cloudwatch output supported log events up to 256 KB in size. With this update, the Cloudwatch output supports up to 1 MB in size to match the updates published by Amazon Web Services (AWS). (LOG-7013) -* Before this update, auditd log messages with multiple msg keys could cause errors in collector pods, because the standard auditd log format expects a single msg field per log entry that follows the msg=audit(TIMESTAMP:ID) structure. With this update, only the first msg value is used, which resolves the issue and ensures accurate extraction of audit metadata. (LOG-7014) -* Before this update, collector pods would enter a crash loop due to a configuration error when attempting token-based authentication with an Elasticsearch output. With this update, token authentication with an Elasticsearch output generates a valid configuration. (LOG-7017) - -# Logging 6.2.1 Release Notes - -This release includes RHBA-2025:3908. - -## Bug Fixes - -* Before this update, application programming interface (API) audit logs collected from the management cluster used the cluster_id value from the management cluster. With this update, API audit logs use the cluster_id value from the guest cluster. (LOG-4445) -* Before this update, issuing the oc explain obsclf.spec.filters command did not list all the supported filters in the command output. With this update, all the supported filter types are listed in the command output. (LOG-6753) -* Before this update the log collector flagged a ClusterLogForwarder resource with multiple inputs to a LokiStack output as invalid due to incorrect internal processing logic. This update fixes the issue. (LOG-6758) -* Before this update, issuing the oc explain command for the clusterlogforwarder.spec.outputs.syslog resource returned an incomplete result. With this update, the missing supported types for rfc and enrichment attributes are listed in the result correctly. (LOG-6869) -* Before this update, empty OpenTelemetry (OTEL) tuning configuration caused validation errors. 
With this update, validation rules have been updated to accept empty tuning configuration. (LOG-6878) -* Before this update the Red Hat OpenShift Logging Operator could not update the securitycontextconstraint resource that is required by the log collector. With this update, the required cluster role has been provided to the service account of the Red Hat OpenShift Logging Operator. As a result of which, Red Hat OpenShift Logging Operator can create or update the securitycontextconstraint resource. (LOG-6879) -* Before this update, the API documentation for the URL attribute of the syslog resource incorrectly mentioned the value udps as a supported value. With this update, all references to udps have been removed. (LOG-6896) -* Before this update, the Red Hat OpenShift Logging Operator was intermittently unable to update the object in logs due to update conflicts. This update resolves the issue and prevents conflicts during object updates by using the Patch() function instead of the Update() function. (LOG-6953) -* Before this update, Loki ingesters that got into an unhealthy state due to networking issues stayed in that state even after the network recovered. With this update, you can configure the Loki Operator to perform service discovery more often so that unhealthy ingesters can rejoin the group. (LOG-6992) -* Before this update, the Vector collector could not forward Open Virtual Network (OVN) and Auditd logs. With this update, the Vector collector can forward OVN and Auditd logs. (LOG-6997) - -## CVEs - -* CVE-2022-49043 -* CVE-2024-2236 -* CVE-2024-5535 -* CVE-2024-56171 -* CVE-2025-24928 - -# Logging 6.2.0 Release Notes - -This release includes Logging for Red Hat OpenShift Bug Fix Release 6.2.0. - -## New Features and Enhancements - -### Log Collection - -* With this update, HTTP outputs include a proxy field that you can use to send log data through an HTTP proxy. (LOG-6069) - -### Log Storage - -* With this update, time-based stream sharding in Loki is now enabled by the Loki Operator. This solves the issue of ingesting log entries older than the sliding time-window used by Loki. (LOG-6757) -* With this update, you can configure a custom certificate authority (CA) certificate with Loki Operator when using Swift as an object store. (LOG-4818) -* With this update, you can configure workload identity federation on Google Cloud Platform (GCP) by using the Cluster Credential Operator in OpenShift 4.17 and later releases with the Loki Operator. (LOG-6158) - -## Technology Preview - - -[IMPORTANT] ----- -{FeatureName} is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. -For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. ----- - -* With this update, OpenTelemetry support offered by OpenShift Logging continues to improve, specifically in the area of enabling migrations from the ViaQ data model to OpenTelemetry when forwarding to LokiStack. (LOG-6146) -* With this update, the structuredMetadata field has been removed from Loki Operator in the otlp configuration because structured metadata is now the default type. 
Additionally, the update introduces a drop field that administrators can use to drop OpenTelemetry attributes when receiving data through OpenTelemetry protocol (OTLP). (LOG-6507) - -## Bug Fixes - -* Before this update, the timestamp shown in the console logs did not match the @timestamp field in the message. With this update the timestamp is correctly shown in the console. (LOG-6222) -* The introduction of ClusterLogForwarder 6.x modified the ClusterLogForwarder API to allow for a consistent templating mechanism. However, this was not applied to the syslog output spec API for the facility and severity fields. This update adds the required validation to the ClusterLogForwarder API for the facility and severity fields. (LOG-6661) -* Before this update, an error in the Loki Operator generating the Loki configuration caused the amount of workers to delete to be zero when 1x.pico was set as the LokiStack size. With this update, the number of workers to delete is set to 10. (LOG-6781) - -## Known Issues - -* The previous data model encoded all information in JSON. The console still uses the query of the previous data model to decode both old and new entries. The logs that are stored by using the new OpenTelemetry data model for the LokiStack output display the following error in the logging console: - -``` -__error__ JSONParserErr -__error_details__ Value looks like object, but can't find closing '}' symbol -``` - - -You can ignore the error as it is only a result of the query and not a data-related error. (LOG-6808) -* Currently, the API documentation incorrectly mentions OpenTelemetry protocol (OTLP) attributes as included instead of excluded in the descriptions of the drop field. (LOG-6839). - -## CVEs - -* CVE-2020-11023 -* CVE-2024-12797 \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.2/log6x-visual-6.2.txt b/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.2/log6x-visual-6.2.txt deleted file mode 100644 index c669b518..00000000 --- a/ocp-product-docs-plaintext/4.17/observability/logging/logging-6.2/log6x-visual-6.2.txt +++ /dev/null @@ -1,5 +0,0 @@ -# Visualization for logging - - - -Visualization for logging is provided by deploying the Logging UI Plugin of the Cluster Observability Operator, which requires Operator installation. \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.17/observability/monitoring/accessing-metrics/accessing-metrics-as-an-administrator.txt b/ocp-product-docs-plaintext/4.17/observability/monitoring/accessing-metrics/accessing-metrics-as-an-administrator.txt index 8a716516..50a40dcd 100644 --- a/ocp-product-docs-plaintext/4.17/observability/monitoring/accessing-metrics/accessing-metrics-as-an-administrator.txt +++ b/ocp-product-docs-plaintext/4.17/observability/monitoring/accessing-metrics/accessing-metrics-as-an-administrator.txt @@ -43,7 +43,7 @@ or as a user with view permissions for all projects, you can access metrics for The Metrics UI includes predefined queries, for example, CPU, memory, bandwidth, or network packet for all projects. You can also run custom Prometheus Query Language (PromQL) queries. * You have access to the cluster as a user with the cluster-admin cluster role or with view permissions for all projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. In the Administrator perspective of the Red Hat OpenShift Container Platform web console, click Observe and go to the Metrics tab. 2. 
To add one or more queries, perform any of the following actions: diff --git a/ocp-product-docs-plaintext/4.17/observability/monitoring/accessing-metrics/accessing-monitoring-apis-by-using-the-cli.txt b/ocp-product-docs-plaintext/4.17/observability/monitoring/accessing-metrics/accessing-monitoring-apis-by-using-the-cli.txt index 5d415f34..730d069a 100644 --- a/ocp-product-docs-plaintext/4.17/observability/monitoring/accessing-metrics/accessing-monitoring-apis-by-using-the-cli.txt +++ b/ocp-product-docs-plaintext/4.17/observability/monitoring/accessing-metrics/accessing-monitoring-apis-by-using-the-cli.txt @@ -88,7 +88,7 @@ Limit queries to a maximum of one every 30 seconds. If you need to forward large amounts of data outside the cluster, use remote write instead. For more information, see the Configuring remote write storage section. ---- -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). * You have access to the cluster as a user with the cluster-monitoring-view cluster role or have obtained a bearer token with get permission on the namespaces resource. [NOTE] diff --git a/ocp-product-docs-plaintext/4.17/observability/monitoring/configuring-core-platform-monitoring/configuring-alerts-and-notifications.txt b/ocp-product-docs-plaintext/4.17/observability/monitoring/configuring-core-platform-monitoring/configuring-alerts-and-notifications.txt index 0a04868d..70e163f8 100644 --- a/ocp-product-docs-plaintext/4.17/observability/monitoring/configuring-core-platform-monitoring/configuring-alerts-and-notifications.txt +++ b/ocp-product-docs-plaintext/4.17/observability/monitoring/configuring-core-platform-monitoring/configuring-alerts-and-notifications.txt @@ -13,7 +13,7 @@ If you add the same external Alertmanager configuration for multiple clusters an * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config ConfigMap object. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -84,7 +84,7 @@ If you do not need the local Alertmanager, you can disable it by configuring the * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config config map. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the cluster-monitoring-config config map in the openshift-monitoring project: @@ -129,7 +129,7 @@ After you add a secret to the config map, the secret is mounted as a volume at / * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config config map. * You have created the secret to be configured in Alertmanager in the {namespace-name} project. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -180,7 +180,7 @@ You can attach custom labels to all time series and alerts leaving Prometheus by * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config ConfigMap object. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. 
Edit the {configmap-name} config map in the {namespace-name} project: diff --git a/ocp-product-docs-plaintext/4.17/observability/monitoring/configuring-core-platform-monitoring/configuring-metrics.txt b/ocp-product-docs-plaintext/4.17/observability/monitoring/configuring-core-platform-monitoring/configuring-metrics.txt index aa65995a..13aadfce 100644 --- a/ocp-product-docs-plaintext/4.17/observability/monitoring/configuring-core-platform-monitoring/configuring-metrics.txt +++ b/ocp-product-docs-plaintext/4.17/observability/monitoring/configuring-core-platform-monitoring/configuring-metrics.txt @@ -11,7 +11,7 @@ You can configure remote write storage to enable Prometheus to send ingested met * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config ConfigMap object. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). * You have set up a remote write compatible endpoint (such as Thanos) and know the endpoint URL. See the Prometheus remote endpoints and storage documentation for information about endpoints that are compatible with the remote write feature. [IMPORTANT] @@ -451,7 +451,7 @@ You can create cluster ID labels for metrics by adding the write_relabel setting * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config ConfigMap object. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). * You have configured remote write storage. 1. Edit the {configmap-name} config map in the {namespace-name} project: diff --git a/ocp-product-docs-plaintext/4.17/observability/monitoring/configuring-core-platform-monitoring/configuring-performance-and-scalability.txt b/ocp-product-docs-plaintext/4.17/observability/monitoring/configuring-core-platform-monitoring/configuring-performance-and-scalability.txt index a7c3d5ad..4ce7bd22 100644 --- a/ocp-product-docs-plaintext/4.17/observability/monitoring/configuring-core-platform-monitoring/configuring-performance-and-scalability.txt +++ b/ocp-product-docs-plaintext/4.17/observability/monitoring/configuring-core-platform-monitoring/configuring-performance-and-scalability.txt @@ -29,7 +29,7 @@ You cannot add a node selector constraint directly to an existing scheduled pod. * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config ConfigMap object. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. If you have not done so yet, add a label to the nodes on which you want to run the monitoring components: @@ -85,7 +85,7 @@ You can assign tolerations to any of the monitoring stack components to enable m * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config ConfigMap object. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -151,7 +151,7 @@ Prometheus then considers this target to be down and sets its up metric value to ---- * You have access to the cluster as a user with the cluster-admin cluster role. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. 
Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring namespace: @@ -194,7 +194,7 @@ To configure CPU and memory resources, specify values for resource limits and re * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the ConfigMap object named cluster-monitoring-config. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -325,7 +325,7 @@ For more information about the support scope of Red Hat Technology Preview featu To choose a metrics collection profile for core Red Hat OpenShift Container Platform monitoring components, edit the cluster-monitoring-config ConfigMap object. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). * You have enabled Technology Preview features by using the FeatureGate custom resource (CR). * You have created the cluster-monitoring-config ConfigMap object. * You have access to the cluster as a user with the cluster-admin cluster role. @@ -385,7 +385,7 @@ You can configure pod topology spread constraints for monitoring pods by using t * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config ConfigMap object. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: diff --git a/ocp-product-docs-plaintext/4.17/observability/monitoring/configuring-core-platform-monitoring/preparing-to-configure-the-monitoring-stack.txt b/ocp-product-docs-plaintext/4.17/observability/monitoring/configuring-core-platform-monitoring/preparing-to-configure-the-monitoring-stack.txt index ce3b00df..39104491 100644 --- a/ocp-product-docs-plaintext/4.17/observability/monitoring/configuring-core-platform-monitoring/preparing-to-configure-the-monitoring-stack.txt +++ b/ocp-product-docs-plaintext/4.17/observability/monitoring/configuring-core-platform-monitoring/preparing-to-configure-the-monitoring-stack.txt @@ -34,7 +34,7 @@ Each procedure that requires a change in the config map includes its expected ou You can configure the core Red Hat OpenShift Container Platform monitoring components by creating and updating the cluster-monitoring-config config map in the openshift-monitoring project. The Cluster Monitoring Operator (CMO) then configures the core components of the monitoring stack. * You have access to the cluster as a user with the cluster-admin cluster role. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Check whether the cluster-monitoring-config ConfigMap object exists: diff --git a/ocp-product-docs-plaintext/4.17/observability/monitoring/configuring-core-platform-monitoring/storing-and-recording-data.txt b/ocp-product-docs-plaintext/4.17/observability/monitoring/configuring-core-platform-monitoring/storing-and-recording-data.txt index 43e06841..c4cc3be2 100644 --- a/ocp-product-docs-plaintext/4.17/observability/monitoring/configuring-core-platform-monitoring/storing-and-recording-data.txt +++ b/ocp-product-docs-plaintext/4.17/observability/monitoring/configuring-core-platform-monitoring/storing-and-recording-data.txt @@ -37,7 +37,7 @@ To use a persistent volume (PV) for monitoring components, you must configure a * You have access to the cluster as a user with the cluster-admin cluster role. 
* You have created the cluster-monitoring-config ConfigMap object. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -113,7 +113,7 @@ You can only expand the size of the PVC. Shrinking the storage size is not possi * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config ConfigMap object. * You have configured at least one PVC for core Red Hat OpenShift Container Platform monitoring components. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Manually expand a PVC with the updated storage request. For more information, see "Expanding persistent volume claims (PVCs) with a file system" in Expanding persistent volumes. 2. Edit the {configmap-name} config map in the {namespace-name} project: @@ -187,7 +187,7 @@ Data compaction occurs every two hours. Therefore, a persistent volume (PV) migh * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config ConfigMap object. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -305,7 +305,7 @@ The default log level is info. * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config ConfigMap object. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -370,7 +370,7 @@ Because log rotation is not supported, only enable this feature temporarily when * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config ConfigMap object. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -436,7 +436,7 @@ For default platform monitoring in the openshift-monitoring project, you can ena Because log rotation is not supported, only enable this feature temporarily when you need to troubleshoot an issue. After you finish troubleshooting, disable query logging by reverting the changes you made to the ConfigMap object to enable the feature. ---- -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config ConfigMap object. 
diff --git a/ocp-product-docs-plaintext/4.17/observability/monitoring/configuring-user-workload-monitoring/configuring-alerts-and-notifications-uwm.txt b/ocp-product-docs-plaintext/4.17/observability/monitoring/configuring-user-workload-monitoring/configuring-alerts-and-notifications-uwm.txt index c6e4b9be..087ec9aa 100644 --- a/ocp-product-docs-plaintext/4.17/observability/monitoring/configuring-user-workload-monitoring/configuring-alerts-and-notifications-uwm.txt +++ b/ocp-product-docs-plaintext/4.17/observability/monitoring/configuring-user-workload-monitoring/configuring-alerts-and-notifications-uwm.txt @@ -13,7 +13,7 @@ If you add the same external Alertmanager configuration for multiple clusters an * You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -95,7 +95,7 @@ After you add a secret to the config map, the secret is mounted as a volume at / * You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. * You have created the secret to be configured in Alertmanager in the {namespace-name} project. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -146,7 +146,7 @@ You can attach custom labels to all time series and alerts leaving Prometheus by * You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -233,7 +233,7 @@ If you are a non-administrator user who has been given the alert-routing-edit cl * A cluster administrator has enabled monitoring for user-defined projects. * A cluster administrator has enabled alert routing for user-defined projects. * You are logged in as a user that has the alert-routing-edit cluster role for the project for which you want to create alert routing. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Create a YAML file for alert routing. The example in this procedure uses a file called example-app-alert-routing.yaml. 2. Add an AlertmanagerConfig YAML definition to the file. For example: @@ -278,7 +278,7 @@ All features of a supported version of upstream Alertmanager are also supported * You have access to the cluster as a user with the cluster-admin cluster role. * You have enabled a separate instance of Alertmanager for user-defined alert routing. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. 
Print the currently active Alertmanager configuration into the file alertmanager.yaml: diff --git a/ocp-product-docs-plaintext/4.17/observability/monitoring/configuring-user-workload-monitoring/configuring-metrics-uwm.txt b/ocp-product-docs-plaintext/4.17/observability/monitoring/configuring-user-workload-monitoring/configuring-metrics-uwm.txt index a860b44e..ff87e106 100644 --- a/ocp-product-docs-plaintext/4.17/observability/monitoring/configuring-user-workload-monitoring/configuring-metrics-uwm.txt +++ b/ocp-product-docs-plaintext/4.17/observability/monitoring/configuring-user-workload-monitoring/configuring-metrics-uwm.txt @@ -11,7 +11,7 @@ You can configure remote write storage to enable Prometheus to send ingested met * You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). * You have set up a remote write compatible endpoint (such as Thanos) and know the endpoint URL. See the Prometheus remote endpoints and storage documentation for information about endpoints that are compatible with the remote write feature. [IMPORTANT] @@ -459,7 +459,7 @@ You cannot override this default configuration by setting the value of the honor * You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). * You have configured remote write storage. 1. Edit the {configmap-name} config map in the {namespace-name} project: diff --git a/ocp-product-docs-plaintext/4.17/observability/monitoring/configuring-user-workload-monitoring/configuring-performance-and-scalability-uwm.txt b/ocp-product-docs-plaintext/4.17/observability/monitoring/configuring-user-workload-monitoring/configuring-performance-and-scalability-uwm.txt index 5e38ead7..b5b3bcd2 100644 --- a/ocp-product-docs-plaintext/4.17/observability/monitoring/configuring-user-workload-monitoring/configuring-performance-and-scalability-uwm.txt +++ b/ocp-product-docs-plaintext/4.17/observability/monitoring/configuring-user-workload-monitoring/configuring-performance-and-scalability-uwm.txt @@ -28,7 +28,7 @@ It is not permitted to move components to control plane or infrastructure nodes. * You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. If you have not done so yet, add a label to the nodes on which you want to run the monitoring components: @@ -84,7 +84,7 @@ You can assign tolerations to the components that monitor user-defined projects, * You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). 
+* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -145,7 +145,7 @@ You can configure these limits and requests for monitoring components that monit To configure CPU and memory resources, specify values for resource limits and requests in the {configmap-name} ConfigMap object in the {namespace-name} namespace. * You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -232,7 +232,7 @@ If you set sample or label limits, no further sample data is ingested for that t * You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project: @@ -289,7 +289,7 @@ You can create alerts that notify you when: * You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. * You have limited the number of samples that can be accepted per target scrape in user-defined projects, by using enforcedSampleLimit. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Create a YAML file with alerts that inform you when the targets are down and when the enforced sample limit is approaching. The file in this example is called monitoring-stack-alerts.yaml: @@ -357,7 +357,7 @@ You can configure pod topology spread constraints for monitoring pods by using t * You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: diff --git a/ocp-product-docs-plaintext/4.17/observability/monitoring/configuring-user-workload-monitoring/preparing-to-configure-the-monitoring-stack-uwm.txt b/ocp-product-docs-plaintext/4.17/observability/monitoring/configuring-user-workload-monitoring/preparing-to-configure-the-monitoring-stack-uwm.txt index afda2cf5..624c19a8 100644 --- a/ocp-product-docs-plaintext/4.17/observability/monitoring/configuring-user-workload-monitoring/preparing-to-configure-the-monitoring-stack-uwm.txt +++ b/ocp-product-docs-plaintext/4.17/observability/monitoring/configuring-user-workload-monitoring/preparing-to-configure-the-monitoring-stack-uwm.txt @@ -55,7 +55,7 @@ You must have access to the cluster as a user with the cluster-admin cluster rol ---- * You have access to the cluster as a user with the cluster-admin cluster role. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 
* You have created the cluster-monitoring-config ConfigMap object. * You have optionally created and configured the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project. You can add configuration options to this ConfigMap object for the components that monitor user-defined projects. @@ -116,7 +116,7 @@ As a cluster administrator, you can assign the user-workload-monitoring-config-e * You have access to the cluster as a user with the cluster-admin cluster role. * The user account that you are assigning the role to already exists. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Assign the user-workload-monitoring-config-edit role to a user in the openshift-user-workload-monitoring project: @@ -175,7 +175,7 @@ You can allow users to create user-defined alert routing configurations that use * You have access to the cluster as a user with the cluster-admin cluster role. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the cluster-monitoring-config ConfigMap object: @@ -258,7 +258,7 @@ You can grant users permission to configure alert routing for user-defined proje * You have access to the cluster as a user with the cluster-admin cluster role. * You have enabled monitoring for user-defined projects. * The user account that you are assigning the role to already exists. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). * Assign the alert-routing-edit cluster role to a user in the user-defined project: @@ -268,7 +268,7 @@ $ oc -n adm policy add-role-to-user alert-routing-edit 1 For , substitute the namespace for the user-defined project, such as ns1. For , substitute the username for the account to which you want to assign the role. -Configuring alert notifications +* Configuring alert notifications # Granting users permissions for monitoring for user-defined projects diff --git a/ocp-product-docs-plaintext/4.17/observability/monitoring/configuring-user-workload-monitoring/storing-and-recording-data-uwm.txt b/ocp-product-docs-plaintext/4.17/observability/monitoring/configuring-user-workload-monitoring/storing-and-recording-data-uwm.txt index e341c10e..88072b4f 100644 --- a/ocp-product-docs-plaintext/4.17/observability/monitoring/configuring-user-workload-monitoring/storing-and-recording-data-uwm.txt +++ b/ocp-product-docs-plaintext/4.17/observability/monitoring/configuring-user-workload-monitoring/storing-and-recording-data-uwm.txt @@ -37,7 +37,7 @@ To use a persistent volume (PV) for monitoring components, you must configure a * You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -118,7 +118,7 @@ You can only expand the size of the PVC. Shrinking the storage size is not possi * You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. 
* You have configured at least one PVC for components that monitor user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Manually expand a PVC with the updated storage request. For more information, see "Expanding persistent volume claims (PVCs) with a file system" in Expanding persistent volumes. 2. Edit the {configmap-name} config map in the {namespace-name} project: @@ -197,7 +197,7 @@ Data compaction occurs every two hours. Therefore, a persistent volume (PV) migh * You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -247,7 +247,7 @@ By default, for user-defined projects, Thanos Ruler automatically retains metric * You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project: @@ -311,7 +311,7 @@ The default log level is info. * You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -376,7 +376,7 @@ Because log rotation is not supported, only enable this feature temporarily when * You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: diff --git a/ocp-product-docs-plaintext/4.17/observability/monitoring/managing-alerts/managing-alerts-as-a-developer.txt b/ocp-product-docs-plaintext/4.17/observability/monitoring/managing-alerts/managing-alerts-as-a-developer.txt index 215d2a76..7eaa0b6c 100644 --- a/ocp-product-docs-plaintext/4.17/observability/monitoring/managing-alerts/managing-alerts-as-a-developer.txt +++ b/ocp-product-docs-plaintext/4.17/observability/monitoring/managing-alerts/managing-alerts-as-a-developer.txt @@ -164,7 +164,7 @@ You can create alerting rules for user-defined projects. Those alerting rules wi * You have enabled monitoring for user-defined projects. * You are logged in as a cluster administrator or as a user that has the monitoring-rules-edit cluster role for the project where you want to create an alerting rule. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Create a YAML file for alerting rules. 
In this example, it is called example-app-alerting-rule.yaml. 2. Add an alerting rule configuration to the YAML file. @@ -210,7 +210,7 @@ To list alerting rules for a user-defined project, you must have been assigned t * You have enabled monitoring for user-defined projects. * You are logged in as a user that has the monitoring-rules-view cluster role for your project. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. To list alerting rules in : @@ -231,7 +231,7 @@ You can remove alerting rules for user-defined projects. * You have enabled monitoring for user-defined projects. * You are logged in as a cluster administrator or as a user that has the monitoring-rules-edit cluster role for the project where you want to create an alerting rule. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). * To remove rule in , run the following: diff --git a/ocp-product-docs-plaintext/4.17/observability/monitoring/managing-alerts/managing-alerts-as-an-administrator.txt b/ocp-product-docs-plaintext/4.17/observability/monitoring/managing-alerts/managing-alerts-as-an-administrator.txt index 30c3e085..6b535b1b 100644 --- a/ocp-product-docs-plaintext/4.17/observability/monitoring/managing-alerts/managing-alerts-as-an-administrator.txt +++ b/ocp-product-docs-plaintext/4.17/observability/monitoring/managing-alerts/managing-alerts-as-an-administrator.txt @@ -169,7 +169,7 @@ These alerting rules trigger alerts based on the values of chosen metrics. ---- * You have access to the cluster as a user that has the cluster-admin cluster role. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Create a new YAML configuration file named example-alerting-rule.yaml. 2. Add an AlertingRule resource to the YAML file. @@ -218,7 +218,7 @@ As a cluster administrator, you can modify core platform alerts before Alertmana For example, you can change the severity label of an alert, add a custom label, or exclude an alert from being sent to Alertmanager. * You have access to the cluster as a user with the cluster-admin cluster role. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Create a new YAML configuration file named example-modified-alerting-rule.yaml. 2. Add an AlertRelabelConfig resource to the YAML file. @@ -285,7 +285,7 @@ You can create alerting rules for user-defined projects. Those alerting rules wi * You have enabled monitoring for user-defined projects. * You are logged in as a cluster administrator or as a user that has the monitoring-rules-edit cluster role for the project where you want to create an alerting rule. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Create a YAML file for alerting rules. In this example, it is called example-app-alerting-rule.yaml. 2. Add an alerting rule configuration to the YAML file. @@ -331,7 +331,7 @@ As a cluster administrator, you can list alerting rules for core Red Hat OpenShift Container Platform and user-defined projects together in a single view. * You have access to the cluster as a user with the cluster-admin role. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. From the Administrator perspective of the Red Hat OpenShift Container Platform web console, go to Observe -> Alerting -> Alerting rules. 2. Select the Platform and User sources in the Filter drop-down menu. 
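Because both platform and user-defined alerting rules are stored as PrometheusRule objects, you can also get a quick command-line view of them; the following command is an illustrative sketch rather than a documented step in this procedure:

```terminal
$ oc get prometheusrules --all-namespaces
```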
@@ -347,7 +347,7 @@ You can remove alerting rules for user-defined projects. * You have enabled monitoring for user-defined projects. * You are logged in as a cluster administrator or as a user that has the monitoring-rules-edit cluster role for the project where you want to create an alerting rule. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). * To remove rule in , run the following: diff --git a/ocp-product-docs-plaintext/4.17/observability/monitoring/troubleshooting-monitoring-issues.txt b/ocp-product-docs-plaintext/4.17/observability/monitoring/troubleshooting-monitoring-issues.txt index 64f33c33..81409e19 100644 --- a/ocp-product-docs-plaintext/4.17/observability/monitoring/troubleshooting-monitoring-issues.txt +++ b/ocp-product-docs-plaintext/4.17/observability/monitoring/troubleshooting-monitoring-issues.txt @@ -200,7 +200,7 @@ Using attributes that are bound to a limited set of possible values reduces the * Enforce limits on the number of samples that can be scraped across user-defined projects. This requires cluster administrator privileges. * You have access to the cluster as a user with the cluster-admin cluster role. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. In the Administrator perspective, navigate to Observe -> Metrics. 2. Enter a Prometheus Query Language (PromQL) query in the Expression field. @@ -273,7 +273,7 @@ There are two KubePersistentVolumeFillingUp alerts: To address this issue, you can remove Prometheus time-series database (TSDB) blocks to create more space for the PV. * You have access to the cluster as a user with the cluster-admin cluster role. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. List the size of all TSDB blocks, sorted from oldest to newest, by running the following command: diff --git a/ocp-product-docs-plaintext/4.17/registry/configuring_registry_storage/configuring-registry-storage-vsphere.txt b/ocp-product-docs-plaintext/4.17/registry/configuring_registry_storage/configuring-registry-storage-vsphere.txt index 19af3fd5..dba94b13 100644 --- a/ocp-product-docs-plaintext/4.17/registry/configuring_registry_storage/configuring-registry-storage-vsphere.txt +++ b/ocp-product-docs-plaintext/4.17/registry/configuring_registry_storage/configuring-registry-storage-vsphere.txt @@ -56,13 +56,13 @@ testing that was possibly completed against these Red Hat OpenShift Container Pl components. ---- -1. To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. +1. Change the spec.storage.pvc field in the configs.imageregistry/cluster resource. [NOTE] ---- When you use shared storage, review your security settings to prevent outside access. ---- -2. Verify that you do not have a registry pod: +2. Verify that you do not have a registry pod by running the following command: ```terminal $ oc get pod -n openshift-image-registry -l docker-registry=default @@ -79,7 +79,7 @@ No resourses found in openshift-image-registry namespace ---- If you do have a registry pod in your output, you do not need to continue with this procedure. ---- -3. Check the registry configuration: +3. Check the registry configuration by running the following command: ```terminal $ oc edit configs.imageregistry.operator.openshift.io @@ -94,7 +94,7 @@ storage: ``` Leave the claim field blank to allow the automatic creation of an image-registry-storage persistent volume claim (PVC). 
The PVC is generated based on the default storage class. However, be aware that the default storage class might provide ReadWriteOnce (RWO) volumes, such as a RADOS Block Device (RBD), which can cause issues when you replicate to more than one replica. -4. Check the clusteroperator status: +4. Check the clusteroperator status by running the following command: ```terminal $ oc get clusteroperator image-registry @@ -103,8 +103,8 @@ $ oc get clusteroperator image-registry Example output ```terminal -NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE -image-registry 4.7 True False False 6h50m +NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE +image-registry 4.7 True False False 6h50m ``` @@ -155,7 +155,7 @@ have more than one replica. $ oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' ``` -2. Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. +2. Provision the persistent volume (PV) for the block storage device, and create a persistent volume claim (PVC) for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. 1. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: ```yaml diff --git a/ocp-product-docs-plaintext/4.17/release_notes/ocp-4-17-release-notes.txt b/ocp-product-docs-plaintext/4.17/release_notes/ocp-4-17-release-notes.txt index 8ad69465..048ec5c6 100644 --- a/ocp-product-docs-plaintext/4.17/release_notes/ocp-4-17-release-notes.txt +++ b/ocp-product-docs-plaintext/4.17/release_notes/ocp-4-17-release-notes.txt @@ -1278,6 +1278,32 @@ This section will continue to be updated over time to provide notes on enhanceme For any Red Hat OpenShift Container Platform release, always review the instructions on updating your cluster properly. ---- +## RHSA-2025:12437 - Red Hat OpenShift Container Platform 4.17.37 bug fix update and security + +Issued: 06 August 2025 + +Red Hat OpenShift Container Platform release 4.17.37 is now available. The list of bug fixes that are included in the update is documented in the RHSA-2025:12437 advisory. The RPM packages that are included in the update are provided by the RHBA-2025:12438 advisory. + +Space precluded documenting all of the container images for this release in the advisory. + +You can view the container images in this release by running the following command: + + +```terminal +$ oc adm release info 4.17.37 --pullspecs +``` + + +### Bug fixes + +* Before this update, the catalog-operator captured snapshots every five minutes, which caused CPU spikes when dealing with many namespaces, subscriptions, and large catalog sources. This increased load on the catalog source pods and prevented users from installing or upgrading Operators. With this release, the catalog snapshot cache lifetime has been increased to 30 minutes allowing enough time for the catalog source to resolve attempts without causing an undue load and stabilizing the Operator installation and upgrade process. (OCPBUGS-57428) +* Before this update, forward slashes were permitted in console.tab/horizontalNav href values. Starting in 4.15, a regression resulted in forward slashes no longer working correctly when used in href values. With this release, forward slashes in console.tab/horizontalNav href values work as expected. 
(OCPBUGS-59265) +* Before this update, the Observe -> Metrics -> query -> QueryKebab -> Export as csv drop-down item did not handle an undefined title element. As a consequence, users were unable to export the CSV file for certain queries on the Metrics tab of Red Hat OpenShift Container Platform version 4.17. With this release, the metrics download for all queries correctly handles the object properties in the drop-down menu items, allowing for successful CSV exports. (OCPBUGS-52592) + +### Updating + +To update an Red Hat OpenShift Container Platform 4.17 cluster to this latest release, see Updating a cluster using the CLI. + ## RHSA-2025:11359 - Red Hat OpenShift Container Platform 4.17.36 bug fix update and security Issued: 23 July 2025 @@ -1369,7 +1395,7 @@ $ oc adm release info 4.17.34 --pullspecs * Previously, a Machine Config Operator (MCO) incorrectly set an Upgradeable=False condition to all new nodes that were added to a cluster. A PoolUpdating reason was provided for the Upgradeable=False condition. With this release, the MCO now correctly sets an Upgradeable=True condition to all new nodes that get added to a cluster, which resolves the issue. (OCPBUGS-57135) * Previously, the installer was not checking for ESXi hosts that were powered off within a VMware vSphere cluster, which caused the installation to fail because the OVA could not be uploaded. With this release, the installer now checks the power status of each ESXi host and skips any that are powered off, which resolves the issue and allows the OVA to be imported successfully. (OCPBUGS-56448) * Previously, in certain situations the gateway IP address for a node changed and caused the OVN cluster router to add a new static route with the new gateway IP address, without deleting the original one. The OVN cluster router manages the static route to the cluster subnet. As a result, a stale route still pointed to the switch subnet and this caused intermittent drops during egress traffic transfer. With this release, a patch applied to the OVN cluster router ensures that if the gateway IP address changes, the OVN cluster router updates the existing static route with the new gateway IP address. A stale route no longer points to the OVN cluster router so that egress traffic flow does not drop. (OCPBUGS-56443) -* Previously, a pod with an IP address in an OVN localnet network was unreachable by other pods that ran on the same node but used the default network for communication. Communication between pods on different nodes was not impacted by this communication issue. With this release, communication between a localnet pod and a default network pod that both ran on the same node is improved so that this issue no longer exists. (OCPBUGS-56244) +* Previously, a pod with a secondary interface in an OVN-Kubernetes Localnet network that was plugged into a br-ex interface bridge was unreachable by other pods on the same node that used the default network for communication. Communication between pods on different nodes was not impacted. With this release, communication between a Localnet pod and a default network pod running on the same node is possible. However, the IP addresses that are used in the Localnet network must be within the same subnet as the host network.
(OCPBUGS-56244) ### Updating diff --git a/ocp-product-docs-plaintext/4.17/security/cert_manager_operator/cert-manager-customizing-api-fields.txt b/ocp-product-docs-plaintext/4.17/security/cert_manager_operator/cert-manager-customizing-api-fields.txt index 7f2cffda..06da814e 100644 --- a/ocp-product-docs-plaintext/4.17/security/cert_manager_operator/cert-manager-customizing-api-fields.txt +++ b/ocp-product-docs-plaintext/4.17/security/cert_manager_operator/cert-manager-customizing-api-fields.txt @@ -1,13 +1,111 @@ -# Customizing cert-manager Operator API fields +# Customizing the cert-manager Operator by using the CertManager custom resource -You can customize the cert-manager Operator for Red Hat OpenShift API fields by overriding environment variables and arguments. +After installing the cert-manager Operator for Red Hat OpenShift, you can perform the following actions by configuring the CertManager custom resource (CR): +* Configure the arguments to modify the behavior of the cert-manager components, such as the cert-manager controller, CA injector, and Webhook. +* Set environment variables for the controller pod. +* Define resource requests and limits to manage CPU and memory usage. +* Configure scheduling rules to control where pods run in your cluster. + +```yaml +apiVersion: operator.openshift.io/v1alpha1 +kind: CertManager +metadata: + name: cluster +spec: + controllerConfig: + overrideArgs: + - "--dns01-recursive-nameservers=8.8.8.8:53,1.1.1.1:53" + overrideEnv: + - name: HTTP_PROXY + value: http://proxy.example.com:8080 + overrideResources: + limits: + cpu: "200m" + memory: "512Mi" + requests: + cpu: "100m" + memory: "256Mi" + overrideScheduling: + nodeSelector: + custom: "label" + tolerations: + - key: "key1" + operator: "Equal" + value: "value1" + effect: "NoSchedule" + + webhookConfig: + overrideArgs: +#... + overrideResources: +#... + overrideScheduling: +#... + + cainjectorConfig: + overrideArgs: +#... + overrideResources: +#... + overrideScheduling: +#... +``` + [WARNING] ---- To override unsupported arguments, you can add spec.unsupportedConfigOverrides section in the CertManager resource, but using spec.unsupportedConfigOverrides is unsupported. ---- +# Explanation of fields in the CertManager custom resource + +You can use the CertManager custom resource (CR) to configure the following core components of the cert-manager Operator for Red Hat OpenShift: + +* Cert-manager controller: You can use the spec.controllerConfig field to configure the cert‑manager controller pod. +* Webhook: You can use the spec.webhookConfig field to configure the webhook pod, which handles validation and mutation requests. +* CA injector: You can use the spec.cainjectorConfig field to configure the CA injector pod. + +## Common configurable fields in the CertManager CR for the cert-manager components + +The following table lists the common fields that you can configure in the spec.controllerConfig, spec.webhookConfig, and spec.cainjectorConfig sections in the CertManager CR. + + + +## Overridable arguments for the cert-manager components + +You can configure the overridable arguments for the cert-manager components in the spec.controllerConfig, spec.webhookConfig, and spec.cainjectorConfig sections in the CertManager CR. 
+ +The following table describes the overridable arguments for the cert-manager components: + + + +## Overridable environment variables for the cert-manager controller + +You can configure the overridable environment variables for the cert-manager controller in the spec.controllerConfig.overrideEnv field in the CertManager CR. + +The following table describes the overridable environment variables for the cert-manager controller: + + + +## Overridable resource parameters for the cert-manager components + +You can configure the CPU and memory limits for the cert-manager components in the spec.controllerConfig, spec.webhookConfig, and spec.cainjectorConfig sections in the CertManager CR. + +The following table describes the overridable resource parameters for the cert-manager components: + + + +## Overridable scheduling parameters for the cert-manager components + +You can configure the pod scheduling constraints for the cert-manager components in the spec.controllerConfig, spec.webhookConfig, and spec.cainjectorConfig sections in the CertManager CR. + +The following table describes the pod scheduling parameters for the cert-manager components: + + + +* Deleting a TLS secret automatically upon Certificate removal + # Customizing cert-manager by overriding environment variables from the cert-manager Operator API You can override the supported environment variables for the cert-manager Operator for Red Hat OpenShift by adding a spec.controllerConfig section in the CertManager resource. @@ -42,6 +140,11 @@ spec: Replace with the proxy server URL. Replace with a comma separated list of domains. These domains are ignored by the proxy server. + +[NOTE] +---- +For more information about the overridable environment variables, see "Overridable environment variables for the cert-manager components" in "Explanation of fields in the CertManager custom resource". +---- 3. Save your changes and quit the text editor to apply your changes. 1. Verify that the cert-manager controller pod is redeployed by running the following command: @@ -77,6 +180,8 @@ Example output ``` +* Explanation of fields in the CertManager custom resource + # Customizing cert-manager by overriding arguments from the cert-manager Operator API You can override the supported arguments for the cert-manager Operator for Red Hat OpenShift by adding a spec.controllerConfig section in the CertManager resource. @@ -102,30 +207,24 @@ spec: controllerConfig: overrideArgs: - '--dns01-recursive-nameservers=' 1 - - '--dns01-recursive-nameservers-only' 2 - - '--acme-http01-solver-nameservers=:' 3 - - '--v=' 4 - - '--metrics-listen-address=:' 5 - - '--issuer-ambient-credentials' 6 + - '--dns01-recursive-nameservers-only' + - '--acme-http01-solver-nameservers=:' + - '--v=' + - '--metrics-listen-address=:' + - '--issuer-ambient-credentials' + - '--acme-http01-solver-resource-limits-cpu=' + - '--acme-http01-solver-resource-limits-memory=' + - '--acme-http01-solver-resource-request-cpu=' + - '--acme-http01-solver-resource-request-memory=' webhookConfig: overrideArgs: - - '--v=4' 4 + - '--v=' cainjectorConfig: overrideArgs: - - '--v=2' 4 + - '--v=' ``` -Provide a comma-separated list of nameservers to query for the DNS-01 self check. The nameservers can be specified either as :, for example, 1.1.1.1:53, or use DNS over HTTPS (DoH), for example, https://1.1.1.1/dns-query. -Specify to only use recursive nameservers instead of checking the authoritative nameservers associated with that domain.
-Provide a comma-separated list of : nameservers to query for the Automated Certificate Management Environment (ACME) HTTP01 self check. For example, --acme-http01-solver-nameservers=1.1.1.1:53. -Specify to set the log level verbosity to determine the verbosity of log messages. -Specify the host and port for the metrics endpoint. The default value is --metrics-listen-address=0.0.0.0:9402. -You must use the --issuer-ambient-credentials argument when configuring an ACME Issuer to solve DNS-01 challenges by using ambient credentials. - -[NOTE] ----- -DNS over HTTPS (DoH) is supported starting only from cert-manager Operator for Red Hat OpenShift version 1.13.0 and later. ----- +For information about the overridable arguments, see "Overridable arguments for the cert-manager components" in "Explanation of fields in the CertManager custom resource". 3. Save your changes and quit the text editor to apply your changes. * Verify that arguments are updated for cert-manager pods by running the following command: @@ -176,6 +275,8 @@ Example output ``` +* Explanation of fields in the CertManager custom resource + # Deleting a TLS secret automatically upon Certificate removal You can enable the --enable-certificate-owner-ref flag for the cert-manager Operator for Red Hat OpenShift by adding a spec.controllerConfig section in the CertManager resource. The --enable-certificate-owner-ref flag sets the certificate resource as an owner of the secret where the TLS certificate is stored. @@ -248,7 +349,7 @@ Example output # Overriding CPU and memory limits for the cert-manager components -After installing the cert-manager Operator for Red Hat OpenShift, you can configure the CPU and memory limits from the cert-manager Operator for Red Hat OpenShift API for the cert-manager components such as cert-manager controller, CA injector, and Webhook. +After installing the cert-manager Operator for Red Hat OpenShift, you can configure the CPU and memory limits from the cert-manager Operator for Red Hat OpenShift API for the cert-manager components, such as the cert-manager controller, CA injector, and Webhook. * You have access to the Red Hat OpenShift Container Platform cluster as a user with the cluster-admin role. * You have installed version 1.12.0 or later of the cert-manager Operator for Red Hat OpenShift. @@ -316,48 +417,37 @@ Example output The spec.resources field is empty by default. The cert-manager components do not have CPU and memory limits. 3. To configure the CPU and memory limits for the cert-manager controller, CA injector, and Webhook, enter the following command: -```yaml +```terminal $ oc patch certmanager.operator cluster --type=merge -p=" spec: controllerConfig: - overrideResources: - limits: 1 - cpu: 200m 2 - memory: 64Mi 3 - requests: 4 - cpu: 10m 2 - memory: 16Mi 3 + overrideResources: 1 + limits: + cpu: 200m + memory: 64Mi + requests: + cpu: 10m + memory: 16Mi webhookConfig: overrideResources: - limits: 5 - cpu: 200m 6 - memory: 64Mi 7 - requests: 8 - cpu: 10m 6 - memory: 16Mi 7 + limits: + cpu: 200m + memory: 64Mi + requests: + cpu: 10m + memory: 16Mi cainjectorConfig: overrideResources: - limits: 9 - cpu: 200m 10 - memory: 64Mi 11 - requests: 12 - cpu: 10m 10 - memory: 16Mi 11 + limits: + cpu: 200m + memory: 64Mi + requests: + cpu: 10m + memory: 16Mi " ``` -Defines the maximum amount of CPU and memory that a single container in a cert-manager controller pod can request. -You can specify the CPU limit that a cert-manager controller pod can request. The default value is 10m.
-You can specify the memory limit that a cert-manager controller pod can request. The default value is 32Mi. -Defines the amount of CPU and memory set by scheduler for the cert-manager controller pod. -Defines the maximum amount of CPU and memory that a single container in a CA injector pod can request. -You can specify the CPU limit that a CA injector pod can request. The default value is 10m. -You can specify the memory limit that a CA injector pod can request. The default value is 32Mi. -Defines the amount of CPU and memory set by scheduler for the CA injector pod. -Defines the maximum amount of CPU and memory Defines the maximum amount of CPU and memory that a single container in a Webhook pod can request. -You can specify the CPU limit that a Webhook pod can request. The default value is 10m. -You can specify the memory limit that a Webhook pod can request. The default value is 32Mi. -Defines the amount of CPU and memory set by scheduler for the Webhook pod. +For information about the overridable resource parameters, see "Overridable resource parameters for the cert-manager components" in "Explanation of fields in the CertManager custom resource". Example output ```terminal @@ -429,9 +519,11 @@ Example output ``` +* Explanation of fields in the CertManager custom resource + # Configuring scheduling overrides for cert-manager components -You can configure the pod scheduling from the cert-manager Operator for Red Hat OpenShift API for the cert-manager Operator for Red Hat OpenShift components such as cert-manager controller, CA injector, and Webhook. +You can configure the pod scheduling from the cert-manager Operator for Red Hat OpenShift API for the cert-manager Operator for Red Hat OpenShift components, such as the cert-manager controller, CA injector, and Webhook. * You have access to the Red Hat OpenShift Container Platform cluster as a user with the cluster-admin role. * You have installed version 1.15.0 or later of the cert-manager Operator for Red Hat OpenShift. @@ -442,37 +534,33 @@ You can configure the pod scheduling from the cert-manager Operator for Red Hat $ oc patch certmanager.operator cluster --type=merge -p=" spec: controllerConfig: - overrideScheduling: + overrideScheduling: 1 nodeSelector: - node-role.kubernetes.io/control-plane: '' 1 + node-role.kubernetes.io/control-plane: '' tolerations: - key: node-role.kubernetes.io/master operator: Exists - effect: NoSchedule 2 + effect: NoSchedule webhookConfig: overrideScheduling: nodeSelector: - node-role.kubernetes.io/control-plane: '' 3 + node-role.kubernetes.io/control-plane: '' tolerations: - key: node-role.kubernetes.io/master operator: Exists - effect: NoSchedule 4 + effect: NoSchedule cainjectorConfig: overrideScheduling: nodeSelector: - node-role.kubernetes.io/control-plane: '' 5 + node-role.kubernetes.io/control-plane: '' tolerations: - key: node-role.kubernetes.io/master operator: Exists - effect: NoSchedule" 6 + effect: NoSchedule" +" ``` -Defines the nodeSelector for the cert-manager controller deployment. -Defines the tolerations for the cert-manager controller deployment. -Defines the nodeSelector for the cert-manager webhook deployment. -Defines the tolerations for the cert-manager webhook deployment. -Defines the nodeSelector for the cert-manager cainjector deployment. -Defines the tolerations for the cert-manager cainjector deployment. 
+For information about the overridable scheduling parameters, see "Overridable scheduling parameters for the cert-manager components" in "Explanation of fields in the CertManager custom resource". 1. Verify pod scheduling settings for cert-manager pods: 1. Check the deployments in the cert-manager namespace to confirm they have the correct nodeSelector and tolerations by running the following command: @@ -517,3 +605,6 @@ cert-manager-webhook ```terminal $ oc get events -n cert-manager --field-selector reason=Scheduled ``` + + +* Explanation of fields in the CertManager custom resource \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.17/security/cert_manager_operator/cert-manager-operator-release-notes.txt b/ocp-product-docs-plaintext/4.17/security/cert_manager_operator/cert-manager-operator-release-notes.txt index 0dc65a1d..935e2cf1 100644 --- a/ocp-product-docs-plaintext/4.17/security/cert_manager_operator/cert-manager-operator-release-notes.txt +++ b/ocp-product-docs-plaintext/4.17/security/cert_manager_operator/cert-manager-operator-release-notes.txt @@ -5,6 +5,44 @@ The cert-manager Operator for Red Hat OpenShift is a cluster-wide service that p These release notes track the development of cert-manager Operator for Red Hat OpenShift. For more information, see About the cert-manager Operator for Red Hat OpenShift. +# cert-manager Operator for Red Hat OpenShift 1.17.0 + +Issued: 2025-08-06 + +The following advisories are available for the cert-manager Operator for Red Hat OpenShift 1.17.0: + +* RHBA-2025:13182 +* RHBA-2025:13134 +* RHBA-2025:13133 + +Version 1.17.0 of the cert-manager Operator for Red Hat OpenShift is based on the upstream cert-manager version v1.17.4. For more information, see the cert-manager project release notes for v1.17.4. + +## Bug fixes + +* Previously, the status field in the IstioCSR custom resource (CR) was not set to Ready even after the successful deployment of Istio‑CSR. With this fix, the status field is correctly set to Ready, ensuring consistent and reliable status reporting. (CM-546) + +## New features and enhancements + +Support to configure resource requests and limits for ACME HTTP‑01 solver pods + +With this release, the cert-manager Operator for Red Hat OpenShift supports configuring CPU and memory resource requests and limits for ACME HTTP‑01 solver pods. You can configure the CPU and memory resource requests and limits by using the following overridable arguments in the CertManager custom resource (CR): + +* --acme-http01-solver-resource-limits-cpu +* --acme-http01-solver-resource-limits-memory +* --acme-http01-solver-resource-request-cpu +* --acme-http01-solver-resource-request-memory + +For more information, see Overridable arguments for the cert‑manager components. 
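As an illustrative sketch of how these new arguments might be used, they are set under spec.controllerConfig.overrideArgs in the CertManager CR; the resource values shown here are arbitrary examples, not documented defaults:

```yaml
apiVersion: operator.openshift.io/v1alpha1
kind: CertManager
metadata:
  name: cluster
spec:
  controllerConfig:
    overrideArgs:
    - '--acme-http01-solver-resource-limits-cpu=200m'      # example value, not a documented default
    - '--acme-http01-solver-resource-limits-memory=128Mi'  # example value, not a documented default
    - '--acme-http01-solver-resource-request-cpu=50m'      # example value, not a documented default
    - '--acme-http01-solver-resource-request-memory=64Mi'  # example value, not a documented default
```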
+ +## CVEs + +* CVE-2025-22866 +* CVE-2025-22868 +* CVE-2025-22872 +* CVE-2025-22870 +* CVE-2025-27144 +* CVE-2025-22871 + # cert-manager Operator for Red Hat OpenShift 1.16.1 Issued: 2025-07-10 diff --git a/ocp-product-docs-plaintext/4.17/service_mesh/v2x/servicemesh-release-notes.txt b/ocp-product-docs-plaintext/4.17/service_mesh/v2x/servicemesh-release-notes.txt index b5a46177..b2f115c8 100644 --- a/ocp-product-docs-plaintext/4.17/service_mesh/v2x/servicemesh-release-notes.txt +++ b/ocp-product-docs-plaintext/4.17/service_mesh/v2x/servicemesh-release-notes.txt @@ -2,14 +2,32 @@ +# Red Hat OpenShift Service Mesh version 2.6.9 + +This release of Red Hat OpenShift Service Mesh updates the Red Hat OpenShift Service Mesh Operator version to 2.6.9, and includes the following ServiceMeshControlPlane resource version updates: 2.6.9 and 2.5.12. + +This release addresses Common Vulnerabilities and Exposures (CVEs) and is supported on Red Hat OpenShift Container Platform 4.14 and later. + +You can use the most current version of the Kiali Operator provided by Red Hat with all supported versions of Red Hat OpenShift Service Mesh. The version of Service Mesh automatically ensures a compatible version of Kiali. + +## Component updates + + + +# Red Hat OpenShift Service Mesh version 2.5.12 + +This release of Red Hat OpenShift Service Mesh is included with the Red Hat OpenShift Service Mesh Operator 2.6.9 and is supported on Red Hat OpenShift Container Platform 4.14 and later. This release addresses Common Vulnerabilities and Exposures (CVEs). + +## Component updates + + + # Red Hat OpenShift Service Mesh version 2.6.8 This release of Red Hat OpenShift Service Mesh updates the Red Hat OpenShift Service Mesh Operator version to 2.6.8, and includes the following ServiceMeshControlPlane resource version updates: 2.6.8 and 2.5.11. This release addresses Common Vulnerabilities and Exposures (CVEs) and is supported on Red Hat OpenShift Container Platform 4.14 and later. -The most current version of the Red Hat OpenShift Service Mesh Operator can be used with all supported versions of Service Mesh. The version of Service Mesh is specified using the ServiceMeshControlPlane. - You can use the most current version of the Kiali Operator provided by Red Hat with all supported versions of Red Hat OpenShift Service Mesh. The version of Service Mesh automatically ensures a compatible version of Kiali. ## Component updates diff --git a/ocp-product-docs-plaintext/4.17/support/troubleshooting/investigating-monitoring-issues.txt b/ocp-product-docs-plaintext/4.17/support/troubleshooting/investigating-monitoring-issues.txt index ae6a84ad..dd2e2eb2 100644 --- a/ocp-product-docs-plaintext/4.17/support/troubleshooting/investigating-monitoring-issues.txt +++ b/ocp-product-docs-plaintext/4.17/support/troubleshooting/investigating-monitoring-issues.txt @@ -204,7 +204,7 @@ Using attributes that are bound to a limited set of possible values reduces the * Enforce limits on the number of samples that can be scraped across user-defined projects. This requires cluster administrator privileges. * You have access to the cluster as a user with the cluster-admin cluster role. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. In the Administrator perspective, navigate to Observe -> Metrics. 2. Enter a Prometheus Query Language (PromQL) query in the Expression field. 
@@ -275,7 +275,7 @@ There are two KubePersistentVolumeFillingUp alerts: To address this issue, you can remove Prometheus time-series database (TSDB) blocks to create more space for the PV. * You have access to the cluster as a user with the cluster-admin cluster role. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. List the size of all TSDB blocks, sorted from oldest to newest, by running the following command: diff --git a/ocp-product-docs-plaintext/4.17/support/troubleshooting/troubleshooting-installations.txt b/ocp-product-docs-plaintext/4.17/support/troubleshooting/troubleshooting-installations.txt index f301dc94..123e18db 100644 --- a/ocp-product-docs-plaintext/4.17/support/troubleshooting/troubleshooting-installations.txt +++ b/ocp-product-docs-plaintext/4.17/support/troubleshooting/troubleshooting-installations.txt @@ -110,7 +110,7 @@ $ ./openshift-install create ignition-configs --dir=./install_dir You can monitor high-level installation, bootstrap, and control plane logs as an Red Hat OpenShift Container Platform installation progresses. This provides greater visibility into how an installation progresses and helps identify the stage at which an installation failure occurs. * You have access to the cluster as a user with the cluster-admin cluster role. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). * You have SSH access to your hosts. * You have the fully qualified domain names of the bootstrap and control plane nodes. diff --git a/ocp-product-docs-plaintext/4.17/virt/about_virt/about-virt.txt b/ocp-product-docs-plaintext/4.17/virt/about_virt/about-virt.txt index 776433fd..e272ac23 100644 --- a/ocp-product-docs-plaintext/4.17/virt/about_virt/about-virt.txt +++ b/ocp-product-docs-plaintext/4.17/virt/about_virt/about-virt.txt @@ -37,6 +37,8 @@ You can use OpenShift Virtualization with OVN-Kubernetes or one of the other cer You can check your OpenShift Virtualization cluster for compliance issues by installing the Compliance Operator and running a scan with the ocp4-moderate and ocp4-moderate-node profiles. The Compliance Operator uses OpenSCAP, a NIST-certified tool, to scan and enforce security policies. +For information about partnering with Independent Software Vendors (ISVs) and Services partners for specialized storage, networking, backup, and additional functionality, see the Red Hat Ecosystem Catalog. + # Comparing OpenShift Virtualization to VMware vSphere If you are familiar with VMware vSphere, the following table lists OpenShift Virtualization components that you can use to accomplish similar tasks. However, because OpenShift Virtualization is conceptually different from vSphere, and much of its functionality comes from the underlying Red Hat OpenShift Container Platform, OpenShift Virtualization does not have direct alternatives for all vSphere concepts or components. @@ -53,6 +55,8 @@ OpenShift Virtualization 4.16 is supported for use on Red Hat OpenShift Containe If you use the storage API with known storage providers, the volume and access modes are selected automatically. However, if you use a storage class that does not have a storage profile, you must configure the volume and access mode. +For a list of known storage providers for OpenShift Virtualization, see the Red Hat Ecosystem Catalog. + For best results, use the ReadWriteMany (RWX) access mode and the Block volume mode. 
This is important for the following reasons: * ReadWriteMany (RWX) access mode is required for live migration. diff --git a/ocp-product-docs-plaintext/4.17/virt/install/preparing-cluster-for-virt.txt b/ocp-product-docs-plaintext/4.17/virt/install/preparing-cluster-for-virt.txt index d58aa8db..88621987 100644 --- a/ocp-product-docs-plaintext/4.17/virt/install/preparing-cluster-for-virt.txt +++ b/ocp-product-docs-plaintext/4.17/virt/install/preparing-cluster-for-virt.txt @@ -120,6 +120,8 @@ To mark a storage class as the default for virtualization workloads, set the ann If you use the storage API with known storage providers, the volume and access modes are selected automatically. However, if you use a storage class that does not have a storage profile, you must configure the volume and access mode. +For a list of known storage providers for OpenShift Virtualization, see the Red Hat Ecosystem Catalog. + For best results, use the ReadWriteMany (RWX) access mode and the Block volume mode. This is important for the following reasons: * ReadWriteMany (RWX) access mode is required for live migration. diff --git a/ocp-product-docs-plaintext/4.17/virt/monitoring/virt-prometheus-queries.txt b/ocp-product-docs-plaintext/4.17/virt/monitoring/virt-prometheus-queries.txt index d9090b43..00a5c78c 100644 --- a/ocp-product-docs-plaintext/4.17/virt/monitoring/virt-prometheus-queries.txt +++ b/ocp-product-docs-plaintext/4.17/virt/monitoring/virt-prometheus-queries.txt @@ -19,7 +19,7 @@ or as a user with view permissions for all projects, you can access metrics for The Metrics UI includes predefined queries, for example, CPU, memory, bandwidth, or network packet for all projects. You can also run custom Prometheus Query Language (PromQL) queries. * You have access to the cluster as a user with the cluster-admin cluster role or with view permissions for all projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. In the Administrator perspective of the Red Hat OpenShift Container Platform web console, click Observe and go to the Metrics tab. 2. To add one or more queries, perform any of the following actions: diff --git a/ocp-product-docs-plaintext/4.17/virt/vm_networking/virt-hot-plugging-network-interfaces.txt b/ocp-product-docs-plaintext/4.17/virt/vm_networking/virt-hot-plugging-network-interfaces.txt index a16a31ea..1098f24b 100644 --- a/ocp-product-docs-plaintext/4.17/virt/vm_networking/virt-hot-plugging-network-interfaces.txt +++ b/ocp-product-docs-plaintext/4.17/virt/vm_networking/virt-hot-plugging-network-interfaces.txt @@ -25,21 +25,12 @@ If you restart the VM after hot plugging an interface, that interface becomes pa Hot plug a secondary network interface to a virtual machine (VM) while the VM is running. * A network attachment definition is configured in the same namespace as your VM. +* The VM to which you want to hot plug the network interface is running. * You have installed the virtctl tool. -* You have installed the OpenShift CLI (oc). - -1. If the VM to which you want to hot plug the network interface is not running, start it by using the following command: - -```terminal -$ virtctl start -n -``` - -2. Use the following command to add the new network interface to the running VM. Editing the VM specification adds the new network interface to the VM and virtual machine instance (VMI) configuration but does not attach it to the running VM. 
- -```terminal -$ oc edit vm -``` +* You have permission to create and list VirtualMachineInstanceMigration objects. +* You have installed the OpenShift CLI (`oc`). +1. Use your preferred text editor to edit the VirtualMachine manifest, as shown in the following example: Example VM configuration ```yaml @@ -70,7 +61,7 @@ template: Specifies the name of the new network interface. Specifies the name of the network. This must be the same as the name of the new network interface that you defined in the template.spec.domain.devices.interfaces list. Specifies the name of the NetworkAttachmentDefinition object. -3. To attach the network interface to the running VM, live migrate the VM by running the following command: +2. To attach the network interface to the running VM, live migrate the VM by running the following command: ```terminal $ virtctl migrate diff --git a/ocp-product-docs-plaintext/4.18/architecture/architecture.txt b/ocp-product-docs-plaintext/4.18/architecture/architecture.txt index bda5c637..0584b940 100644 --- a/ocp-product-docs-plaintext/4.18/architecture/architecture.txt +++ b/ocp-product-docs-plaintext/4.18/architecture/architecture.txt @@ -144,7 +144,7 @@ In Red Hat OpenShift Container Platform 4.18, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. diff --git a/ocp-product-docs-plaintext/4.18/backup_and_restore/application_backup_and_restore/backing_up_and_restoring/backing-up-applications.txt b/ocp-product-docs-plaintext/4.18/backup_and_restore/application_backup_and_restore/backing_up_and_restoring/backing-up-applications.txt index 33450a3e..fa979288 100644 --- a/ocp-product-docs-plaintext/4.18/backup_and_restore/application_backup_and_restore/backing_up_and_restoring/backing-up-applications.txt +++ b/ocp-product-docs-plaintext/4.18/backup_and_restore/application_backup_and_restore/backing_up_and_restoring/backing-up-applications.txt @@ -4,8 +4,11 @@ Frequent backups might consume storage on the backup storage location. Check the frequency of backups, retention time, and the amount of data of the persistent volumes (PVs) if using non-local backups, for example, S3 buckets. Because all taken backup remains until expired, also check the time to live (TTL) setting of the schedule. You can back up applications by creating a Backup custom resource (CR). For more information, see Creating a Backup CR. +The following are the different backup types for a Backup CR: * The Backup CR creates backup files for Kubernetes resources and internal images on S3 object storage. +* If you use Velero's snapshot feature to back up data stored on the persistent volume, only snapshot related information is stored in the S3 bucket along with the Openshift object data. * If your cloud provider has a native snapshot API or supports CSI snapshots, the Backup CR backs up persistent volumes (PVs) by creating snapshots. For more information about working with CSI snapshots, see Backing up persistent volumes with CSI snapshots. +If the underlying storage or the backup bucket are part of the same cluster, then the data might be lost in case of disaster. 
For more information about CSI volume snapshots, see CSI volume snapshots. [IMPORTANT] diff --git a/ocp-product-docs-plaintext/4.18/cli_reference/openshift_cli/configuring-cli.txt b/ocp-product-docs-plaintext/4.18/cli_reference/openshift_cli/configuring-cli.txt index 21f76556..b7889985 100644 --- a/ocp-product-docs-plaintext/4.18/cli_reference/openshift_cli/configuring-cli.txt +++ b/ocp-product-docs-plaintext/4.18/cli_reference/openshift_cli/configuring-cli.txt @@ -50,4 +50,44 @@ EOF ``` -Tab completion is enabled when you open a new terminal. \ No newline at end of file +Tab completion is enabled when you open a new terminal. + +# Accessing kubeconfig by using the oc CLI + +You can use the oc CLI to log in to your OpenShift cluster and retrieve a kubeconfig file for accessing the cluster from the command line. + +* You have access to the Red Hat OpenShift Container Platform web console or API server endpoint. + +1. Log in to your OpenShift cluster by running the following command: + +```terminal +$ oc login -u -p 123 +``` + +Specify the full API server URL. For example: https://api.my-cluster.example.com:6443. +Specify a valid username. For example: kubeadmin. +Provide the password for the specified user. For example, the kubeadmin password generated during cluster installation. +2. Save the cluster configuration to a local file by running the following command: + +```terminal +$ oc config view --raw > kubeconfig +``` + +3. Set the KUBECONFIG environment variable to point to the exported file by running the following command: + +```terminal +$ export KUBECONFIG=./kubeconfig +``` + +4. Use oc to interact with your OpenShift cluster by running the following command: + +```terminal +$ oc get nodes +``` + + + +[NOTE] +---- +If you plan to reuse the exported kubeconfig file across sessions or machines, store it securely and avoid committing it to source control. +---- \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.18/disconnected/mirroring/about-installing-oc-mirror-v2.txt b/ocp-product-docs-plaintext/4.18/disconnected/mirroring/about-installing-oc-mirror-v2.txt index 44641765..ce2577c3 100644 --- a/ocp-product-docs-plaintext/4.18/disconnected/mirroring/about-installing-oc-mirror-v2.txt +++ b/ocp-product-docs-plaintext/4.18/disconnected/mirroring/about-installing-oc-mirror-v2.txt @@ -355,10 +355,21 @@ CatalogSource:: Retrieves information about the available Operators in the mirro ClusterCatalog:: Retrieves information about the available cluster extensions (which includes Operators) in the mirror registry. Used by OLM v1. UpdateService:: Provides update graph data to the disconnected environment. Used by the OpenShift Update Service. +* CatalogSource * ImageDigestMirrorSet * ImageTagMirrorSet * About catalogs in OLM v1 +## Restrictions on modifying resources that are generated by the oc-mirror plugin + +When using resources that are generated by the oc-mirror plugin v2 to configure your cluster, you must not change certain fields. Modifying these fields can cause errors and is not supported. + +The following table lists the resources and their fields that must remain unchanged: + + + +For more information about these resources, see the OpenShift API documentation for CatalogSource, ImageDigestMirrorSet, and ImageTagMirrorSet. 
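For orientation, a generated ImageDigestMirrorSet typically has the following shape; the resource name and mirror registry host below are hypothetical placeholders, and the generated values under spec are the kind of content that the restriction above applies to:

```yaml
apiVersion: config.openshift.io/v1
kind: ImageDigestMirrorSet
metadata:
  name: idms-release-0   # hypothetical name; oc-mirror chooses the generated name
spec:
  imageDigestMirrors:
  - mirrors:
    - mirror.registry.example.com:8443/openshift/release-images   # hypothetical mirror registry
    source: quay.io/openshift-release-dev/ocp-release
```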
+ ## Configuring your cluster to use the resources generated by oc-mirror plugin v2 After you have mirrored your image set to the mirror registry, you must apply the generated ImageDigestMirrorSet (IDMS), ImageTagMirrorSet (ITMS), CatalogSource, and UpdateService resources to the cluster. diff --git a/ocp-product-docs-plaintext/4.18/extensions/ce/crd-upgrade-safety.txt b/ocp-product-docs-plaintext/4.18/extensions/ce/crd-upgrade-safety.txt index bfea505f..c33be27a 100644 --- a/ocp-product-docs-plaintext/4.18/extensions/ce/crd-upgrade-safety.txt +++ b/ocp-product-docs-plaintext/4.18/extensions/ce/crd-upgrade-safety.txt @@ -49,9 +49,9 @@ The following changes to an existing custom resource definition (CRD) are safe f * The maximum value of an existing field is increased in an existing version * A new version of the CRD is added with no modifications to existing versions -# Disabling CRD upgrade safety preflight check +# Disabling the CRD upgrade safety preflight check -The custom resource definition (CRD) upgrade safety preflight check can be disabled by adding the preflight.crdUpgradeSafety.disabled field with a value of true to the ClusterExtension object that provides the CRD. +You can disable the custom resource definition (CRD) upgrade safety preflight check. In the ClusterExtension object that provides the CRD, set the install.preflight.crdUpgradeSafety.enforcement field with the value of None. [WARNING] @@ -59,15 +59,14 @@ The custom resource definition (CRD) upgrade safety preflight check can be disab Disabling the CRD upgrade safety preflight check could break backwards compatibility with stored versions of the CRD and cause other unintended consequences on the cluster. ---- -You cannot disable individual field validators. If you disable the CRD upgrade safety preflight check, all field validators are disabled. +You cannot disable individual field validators. If you disable the CRD upgrade safety preflight check, you disable all field validators. [NOTE] ---- -The following checks are handled by the Kubernetes API server: -* The scope changes from Cluster to Namespace or from Namespace to Cluster -* An existing stored version of the CRD is removed -After disabling the CRD upgrade safety preflight check via Operator Lifecycle Manager (OLM) v1, these two operations are still prevented by Kubernetes. +If you disable the CRD upgrade safety preflight check in Operator Lifecycle Manager (OLM) v1, the Kubernetes API server still prevents the following operations: +* Changing scope from Cluster to Namespace or from Namespace to Cluster +* Removing an existing stored version of the CRD ---- * You have a cluster extension installed. @@ -78,24 +77,29 @@ After disabling the CRD upgrade safety preflight check via Operator Lifecycle Ma $ oc edit clusterextension ``` -2. Set the preflight.crdUpgradeSafety.disabled field to true: +2. Set the install.preflight.crdUpgradeSafety.enforcement field to None: Example ClusterExtension object ```yaml -apiVersion: olm.operatorframework.io/v1alpha1 +apiVersion: olm.operatorframework.io/v1 kind: ClusterExtension metadata: - name: clusterextension-sample + name: clusterextension-sample spec: - installNamespace: default - packageName: argocd-operator - version: 0.6.0 + namespace: default + serviceAccount: + name: sa-example + source: + sourceType: "Catalog" + catalog: + packageName: argocd-operator + version: 0.6.0 + install: preflight: - crdUpgradeSafety: - disabled: true 1 + crdUpgradeSafety: + enforcement: None ``` -Set to true. 
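After you save the edited ClusterExtension object, one way to confirm the new setting, sketched here using the sample name from the preceding example, is to read the field back:

```terminal
$ oc get clusterextension clusterextension-sample \
  -o jsonpath='{.spec.install.preflight.crdUpgradeSafety.enforcement}'
```

If the preflight check is disabled, the command prints None.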
# Examples of unsafe CRD changes diff --git a/ocp-product-docs-plaintext/4.18/hosted_control_planes/hcp-deploy/hcp-deploy-bm.txt b/ocp-product-docs-plaintext/4.18/hosted_control_planes/hcp-deploy/hcp-deploy-bm.txt index 6fc749a5..ae7edc3c 100644 --- a/ocp-product-docs-plaintext/4.18/hosted_control_planes/hcp-deploy/hcp-deploy-bm.txt +++ b/ocp-product-docs-plaintext/4.18/hosted_control_planes/hcp-deploy/hcp-deploy-bm.txt @@ -9,8 +9,8 @@ The management cluster is not the same thing as the managed cluster. A managed c ---- The hosted control planes feature is enabled by default. The multicluster engine Operator supports only the default local-cluster, which is a hub cluster that is managed, and the hub cluster as the management cluster. If you have Red Hat Advanced Cluster Management installed, you can use the managed hub cluster, also known as the local-cluster, as the management cluster. -A hosted cluster is an Red Hat OpenShift Container Platform cluster with its API endpoint and control plane that are hosted on the management cluster. The hosted cluster includes the control plane and its corresponding data plane. You can use the multicluster engine Operator console or the hosted control plane command-line interface, hcp, to create a hosted cluster. -The hosted cluster is automatically imported as a managed cluster. If you want to disable this automatic import feature, see Disabling the automatic import of hosted clusters into multicluster engine Operator. +A hosted cluster is an Red Hat OpenShift Container Platform cluster with its API endpoint and control plane that are hosted on the management cluster. The hosted cluster includes the control plane and its corresponding data plane. You can use the multicluster engine Operator console or the hosted control plane command-line interface (hcp) to create a hosted cluster. +The hosted cluster is automatically imported as a managed cluster. If you want to disable this automatic import feature, see "Disabling the automatic import of hosted clusters into multicluster engine Operator". # Preparing to deploy hosted control planes on bare metal @@ -213,7 +213,7 @@ cluster-api-f75d86f8c-56wfz 1/1 Running 0 4m To create a hosted cluster by using the console, complete the following steps. -1. Open the Red Hat OpenShift Container Platform web console and log in by entering your administrator credentials. For instructions to open the console, see Accessing the web console. +1. Open the Red Hat OpenShift Container Platform web console and log in by entering your administrator credentials. For instructions to open the console, see "Accessing the web console". 2. In the console header, ensure that All Clusters is selected. 3. Click Infrastructure -> Clusters. 4. Click Create cluster -> Host inventory -> Hosted control plane. @@ -224,7 +224,7 @@ The Create cluster page is displayed. [NOTE] ---- As you enter details about the cluster, you might find the following tips useful: -* If you want to use predefined values to automatically populate fields in the console, you can create a host inventory credential. For more information, see Creating a credential for an on-premises environment. +* If you want to use predefined values to automatically populate fields in the console, you can create a host inventory credential. For more information, see "Creating a credential for an on-premises environment". 
* On the Cluster details page, the pull secret is your Red Hat OpenShift Container Platform pull secret that you use to access Red Hat OpenShift Container Platform resources. If you selected a host inventory credential, the pull secret is automatically populated. * On the Node pools page, the namespace contains the hosts for the node pool. If you created a host inventory by using the console, the console creates a dedicated namespace. * On the Networking page, you select an API server publishing strategy. The API server for the hosted cluster can be exposed either by using an existing load balancer or as a service of the NodePort type. A DNS entry must exist for the api.. setting that points to the destination where the API server can be reached. This entry can be a record that points to one of the nodes in the management cluster or a record that points to a load balancer that redirects incoming traffic to the Ingress pods. @@ -281,7 +281,7 @@ The --api-server-address flag defines the IP address that is used for the Kubern Specify the icsp.yaml file that defines ICSP and your mirror registries. Specify the path to your SSH public key. The default file path is ~/.ssh/id_rsa.pub. Specify your hosted cluster namespace. -Specify the supported Red Hat OpenShift Container Platform version that you want to use, for example, 4.18.0-multi. If you are using a disconnected environment, replace with the digest image. To extract the Red Hat OpenShift Container Platform release image digest, see Extracting the Red Hat OpenShift Container Platform release image digest. +Specify the supported Red Hat OpenShift Container Platform version that you want to use, for example, 4.18.0-multi. If you are using a disconnected environment, replace with the digest image. To extract the Red Hat OpenShift Container Platform release image digest, see "Extracting the Red Hat OpenShift Container Platform release image digest". * To create credentials that you can reuse when you create a hosted cluster with the console, see Creating a credential for an on-premises environment. * To access a hosted cluster, see Accessing the hosted cluster. diff --git a/ocp-product-docs-plaintext/4.18/hosted_control_planes/hcp-deploy/hcp-deploy-non-bm.txt b/ocp-product-docs-plaintext/4.18/hosted_control_planes/hcp-deploy/hcp-deploy-non-bm.txt index 5658f713..fb2186bc 100644 --- a/ocp-product-docs-plaintext/4.18/hosted_control_planes/hcp-deploy/hcp-deploy-non-bm.txt +++ b/ocp-product-docs-plaintext/4.18/hosted_control_planes/hcp-deploy/hcp-deploy-non-bm.txt @@ -282,7 +282,7 @@ The --api-server-address flag defines the IP address that is used for the Kubern Specify the icsp.yaml file that defines ICSP and your mirror registries. Specify the path to your SSH public key. The default file path is ~/.ssh/id_rsa.pub. Specify your hosted cluster namespace. -Specify the supported Red Hat OpenShift Container Platform version that you want to use, for example, 4.18.0-multi. If you are using a disconnected environment, replace with the digest image. To extract the Red Hat OpenShift Container Platform release image digest, see Extracting the Red Hat OpenShift Container Platform release image digest. +Specify the supported Red Hat OpenShift Container Platform version that you want to use, for example, 4.18.0-multi. If you are using a disconnected environment, replace with the digest image. To extract the Red Hat OpenShift Container Platform release image digest, see "Extracting the Red Hat OpenShift Container Platform release image digest". 
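Where the preceding callout refers to extracting the release image digest, the referenced procedure amounts to something like the following sketch; the release image reference here is an assumption for illustration, so substitute the image in your mirror registry.

```terminal
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.18.0-multi \
  -o jsonpath='{.digest}'
```

You can then use the printed sha256 digest in place of the version tag when you specify the release image for the hosted cluster.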
* To create credentials that you can reuse when you create a hosted cluster with the console, see Creating a credential for an on-premises environment. * To access a hosted cluster, see Accessing the hosted cluster. diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_aws/ipi/installing-aws-china.txt b/ocp-product-docs-plaintext/4.18/installing/installing_aws/ipi/installing-aws-china.txt index feacb47e..3e75e995 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_aws/ipi/installing-aws-china.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_aws/ipi/installing-aws-china.txt @@ -1098,7 +1098,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -1. Change to the directory that contains the installation program and initialize the cluster deployment: +1. In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -1153,9 +1153,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1163,7 +1163,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_aws/ipi/installing-aws-customizations.txt b/ocp-product-docs-plaintext/4.18/installing/installing_aws/ipi/installing-aws-customizations.txt index e0022a2f..f296ad4f 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_aws/ipi/installing-aws-customizations.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_aws/ipi/installing-aws-customizations.txt @@ -810,7 +810,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -1. Change to the directory that contains the installation program and initialize the cluster deployment: +1. 
In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -865,9 +865,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -875,7 +875,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_aws/ipi/installing-aws-default.txt b/ocp-product-docs-plaintext/4.18/installing/installing_aws/ipi/installing-aws-default.txt index cf8d873b..51e9c497 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_aws/ipi/installing-aws-default.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_aws/ipi/installing-aws-default.txt @@ -30,7 +30,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -1. Change to the directory that contains the installation program and initialize the cluster deployment: +1. In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -110,9 +110,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -120,7 +120,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. 
Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_aws/ipi/installing-aws-government-region.txt b/ocp-product-docs-plaintext/4.18/installing/installing_aws/ipi/installing-aws-government-region.txt index c7946868..ffbd7a9b 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_aws/ipi/installing-aws-government-region.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_aws/ipi/installing-aws-government-region.txt @@ -1016,7 +1016,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -1. Change to the directory that contains the installation program and initialize the cluster deployment: +1. In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -1071,9 +1071,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1081,7 +1081,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_aws/ipi/installing-aws-localzone.txt b/ocp-product-docs-plaintext/4.18/installing/installing_aws/ipi/installing-aws-localzone.txt index ed02b3bc..49ce4d24 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_aws/ipi/installing-aws-localzone.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_aws/ipi/installing-aws-localzone.txt @@ -1165,7 +1165,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -1. Change to the directory that contains the installation program and initialize the cluster deployment: +1. 
In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -1224,9 +1224,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1234,7 +1234,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_aws/ipi/installing-aws-network-customizations.txt b/ocp-product-docs-plaintext/4.18/installing/installing_aws/ipi/installing-aws-network-customizations.txt index a122de20..813c73e5 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_aws/ipi/installing-aws-network-customizations.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_aws/ipi/installing-aws-network-customizations.txt @@ -1048,7 +1048,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -1. Change to the directory that contains the installation program and initialize the cluster deployment: +1. In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -1103,9 +1103,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1113,7 +1113,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. 
Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_aws/ipi/installing-aws-private.txt b/ocp-product-docs-plaintext/4.18/installing/installing_aws/ipi/installing-aws-private.txt index 53aad7d2..a868dada 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_aws/ipi/installing-aws-private.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_aws/ipi/installing-aws-private.txt @@ -950,7 +950,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -1. Change to the directory that contains the installation program and initialize the cluster deployment: +1. In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -1005,9 +1005,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1015,7 +1015,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_aws/ipi/installing-aws-secret-region.txt b/ocp-product-docs-plaintext/4.18/installing/installing_aws/ipi/installing-aws-secret-region.txt index 5cee9a0a..04d78813 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_aws/ipi/installing-aws-secret-region.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_aws/ipi/installing-aws-secret-region.txt @@ -1104,7 +1104,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -1. Change to the directory that contains the installation program and initialize the cluster deployment: +1. 
In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -1159,9 +1159,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1169,7 +1169,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_aws/ipi/installing-aws-vpc.txt b/ocp-product-docs-plaintext/4.18/installing/installing_aws/ipi/installing-aws-vpc.txt index 42d84233..db15f00a 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_aws/ipi/installing-aws-vpc.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_aws/ipi/installing-aws-vpc.txt @@ -951,7 +951,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -1. Change to the directory that contains the installation program and initialize the cluster deployment: +1. In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -1006,9 +1006,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1016,7 +1016,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. 
Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_aws/ipi/installing-aws-wavelength-zone.txt b/ocp-product-docs-plaintext/4.18/installing/installing_aws/ipi/installing-aws-wavelength-zone.txt index 3e8de6ca..fa4b488a 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_aws/ipi/installing-aws-wavelength-zone.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_aws/ipi/installing-aws-wavelength-zone.txt @@ -1225,7 +1225,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -1. Change to the directory that contains the installation program and initialize the cluster deployment: +1. In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -1284,9 +1284,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1294,7 +1294,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_aws/ipi/installing-restricted-networks-aws-installer-provisioned.txt b/ocp-product-docs-plaintext/4.18/installing/installing_aws/ipi/installing-restricted-networks-aws-installer-provisioned.txt index f0c86d6d..594bd0f0 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_aws/ipi/installing-restricted-networks-aws-installer-provisioned.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_aws/ipi/installing-restricted-networks-aws-installer-provisioned.txt @@ -938,7 +938,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -1. Change to the directory that contains the installation program and initialize the cluster deployment: +1. 
In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -993,9 +993,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1003,7 +1003,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_aws/ipi/ipi-aws-preparing-to-install.txt b/ocp-product-docs-plaintext/4.18/installing/installing_aws/ipi/ipi-aws-preparing-to-install.txt index 1c370b15..4777f4e6 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_aws/ipi/ipi-aws-preparing-to-install.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_aws/ipi/ipi-aws-preparing-to-install.txt @@ -25,7 +25,7 @@ In Red Hat OpenShift Container Platform 4.18, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_aws/upi/installing-aws-user-infra.txt b/ocp-product-docs-plaintext/4.18/installing/installing_aws/upi/installing-aws-user-infra.txt index 34d9958e..35d70a83 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_aws/upi/installing-aws-user-infra.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_aws/upi/installing-aws-user-infra.txt @@ -1690,9 +1690,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1700,7 +1700,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. 
Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_aws/upi/installing-restricted-networks-aws.txt b/ocp-product-docs-plaintext/4.18/installing/installing_aws/upi/installing-restricted-networks-aws.txt index cd11ea6c..9b9d0d16 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_aws/upi/installing-restricted-networks-aws.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_aws/upi/installing-restricted-networks-aws.txt @@ -2075,9 +2075,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -2085,7 +2085,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_aws/upi/upi-aws-preparing-to-install.txt b/ocp-product-docs-plaintext/4.18/installing/installing_aws/upi/upi-aws-preparing-to-install.txt index 87b0fbb1..f5d7e7a6 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_aws/upi/upi-aws-preparing-to-install.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_aws/upi/upi-aws-preparing-to-install.txt @@ -26,7 +26,7 @@ In Red Hat OpenShift Container Platform 4.18, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_azure/ipi/installing-azure-customizations.txt b/ocp-product-docs-plaintext/4.18/installing/installing_azure/ipi/installing-azure-customizations.txt index 9731fc8c..50f58867 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_azure/ipi/installing-azure-customizations.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_azure/ipi/installing-azure-customizations.txt @@ -1048,7 +1048,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have an Azure subscription ID and tenant ID. 
-* Change to the directory that contains the installation program and initialize the cluster deployment: +* In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -1096,9 +1096,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1106,7 +1106,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_azure/ipi/installing-azure-default.txt b/ocp-product-docs-plaintext/4.18/installing/installing_azure/ipi/installing-azure-default.txt index ad7e6359..5c9dfbb2 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_azure/ipi/installing-azure-default.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_azure/ipi/installing-azure-default.txt @@ -21,7 +21,7 @@ You can run the create cluster command of the installation program only once, du 1. Optional: If you have run the installation program on this computer before, and want to use an alternative service principal, go to the ~/.azure/ directory and delete the osServicePrincipal.json configuration file. Deleting this file prevents the installation program from automatically reusing subscription and authentication values from a previous installation. -2. Change to the directory that contains the installation program and initialize the cluster deployment: +2. In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -100,9 +100,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -110,7 +110,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. 
Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_azure/ipi/installing-azure-government-region.txt b/ocp-product-docs-plaintext/4.18/installing/installing_azure/ipi/installing-azure-government-region.txt index e51199ab..f76841ea 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_azure/ipi/installing-azure-government-region.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_azure/ipi/installing-azure-government-region.txt @@ -623,7 +623,7 @@ You can run the create cluster command of the installation program only once, du 1. Optional: If you have run the installation program on this computer before, and want to use an alternative service principal or managed identity, go to the ~/.azure/ directory and delete the osServicePrincipal.json configuration file. Deleting this file prevents the installation program from automatically reusing subscription and authentication values from a previous installation. -2. Change to the directory that contains the installation program and initialize the cluster deployment: +2. In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -686,9 +686,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -696,7 +696,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_azure/ipi/installing-azure-network-customizations.txt b/ocp-product-docs-plaintext/4.18/installing/installing_azure/ipi/installing-azure-network-customizations.txt index f1a55419..4e8e3033 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_azure/ipi/installing-azure-network-customizations.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_azure/ipi/installing-azure-network-customizations.txt @@ -1065,7 +1065,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have an Azure subscription ID and tenant ID. -* Change to the directory that contains the installation program and initialize the cluster deployment: +* In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -1113,9 +1113,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. 
* You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1123,7 +1123,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_azure/ipi/installing-azure-preparing-ipi.txt b/ocp-product-docs-plaintext/4.18/installing/installing_azure/ipi/installing-azure-preparing-ipi.txt index abd7ed96..cd5c7a72 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_azure/ipi/installing-azure-preparing-ipi.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_azure/ipi/installing-azure-preparing-ipi.txt @@ -12,7 +12,7 @@ In Red Hat OpenShift Container Platform 4.18, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_azure/ipi/installing-azure-private.txt b/ocp-product-docs-plaintext/4.18/installing/installing_azure/ipi/installing-azure-private.txt index c6e01cf6..d151cc50 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_azure/ipi/installing-azure-private.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_azure/ipi/installing-azure-private.txt @@ -1083,7 +1083,7 @@ You can run the create cluster command of the installation program only once, du 1. Optional: If you have run the installation program on this computer before, and want to use an alternative service principal or managed identity, go to the ~/.azure/ directory and delete the osServicePrincipal.json configuration file. Deleting this file prevents the installation program from automatically reusing subscription and authentication values from a previous installation. -2. Change to the directory that contains the installation program and initialize the cluster deployment: +2. In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -1146,9 +1146,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1156,7 +1156,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. 
Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_azure/ipi/installing-azure-vnet.txt b/ocp-product-docs-plaintext/4.18/installing/installing_azure/ipi/installing-azure-vnet.txt index be13233b..79751490 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_azure/ipi/installing-azure-vnet.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_azure/ipi/installing-azure-vnet.txt @@ -942,7 +942,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have an Azure subscription ID and tenant ID. -* Change to the directory that contains the installation program and initialize the cluster deployment: +* In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_azure/ipi/installing-restricted-networks-azure-installer-provisioned.txt b/ocp-product-docs-plaintext/4.18/installing/installing_azure/ipi/installing-restricted-networks-azure-installer-provisioned.txt index 9c2c99e8..a04bd85f 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_azure/ipi/installing-restricted-networks-azure-installer-provisioned.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_azure/ipi/installing-restricted-networks-azure-installer-provisioned.txt @@ -1100,7 +1100,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have an Azure subscription ID and tenant ID. -* Change to the directory that contains the installation program and initialize the cluster deployment: +* In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -1148,9 +1148,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1158,7 +1158,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. 
Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_azure/upi/installing-azure-preparing-upi.txt b/ocp-product-docs-plaintext/4.18/installing/installing_azure/upi/installing-azure-preparing-upi.txt index d94c1713..8ad9c573 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_azure/upi/installing-azure-preparing-upi.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_azure/upi/installing-azure-preparing-upi.txt @@ -12,7 +12,7 @@ In Red Hat OpenShift Container Platform 4.18, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_azure/upi/installing-azure-user-infra.txt b/ocp-product-docs-plaintext/4.18/installing/installing_azure/upi/installing-azure-user-infra.txt index ea53ed03..79216114 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_azure/upi/installing-azure-user-infra.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_azure/upi/installing-azure-user-infra.txt @@ -29,7 +29,7 @@ In Red Hat OpenShift Container Platform 4.18, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -1908,9 +1908,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1918,7 +1918,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. 
Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_azure/upi/installing-restricted-networks-azure-user-provisioned.txt b/ocp-product-docs-plaintext/4.18/installing/installing_azure/upi/installing-restricted-networks-azure-user-provisioned.txt index 11682e09..6aa80f82 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_azure/upi/installing-restricted-networks-azure-user-provisioned.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_azure/upi/installing-restricted-networks-azure-user-provisioned.txt @@ -59,7 +59,7 @@ In Red Hat OpenShift Container Platform 4.18, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -1954,9 +1954,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1964,7 +1964,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_azure_stack_hub/ipi/installing-azure-stack-hub-default.txt b/ocp-product-docs-plaintext/4.18/installing/installing_azure_stack_hub/ipi/installing-azure-stack-hub-default.txt index d6082117..674183a8 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_azure_stack_hub/ipi/installing-azure-stack-hub-default.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_azure_stack_hub/ipi/installing-azure-stack-hub-default.txt @@ -312,7 +312,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. 
-* Change to the directory that contains the installation program and initialize the cluster deployment: +* In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -360,9 +360,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -370,7 +370,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_azure_stack_hub/ipi/installing-azure-stack-hub-network-customizations.txt b/ocp-product-docs-plaintext/4.18/installing/installing_azure_stack_hub/ipi/installing-azure-stack-hub-network-customizations.txt index 73ec2cff..538403db 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_azure_stack_hub/ipi/installing-azure-stack-hub-network-customizations.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_azure_stack_hub/ipi/installing-azure-stack-hub-network-customizations.txt @@ -529,7 +529,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -* Change to the directory that contains the installation program and initialize the cluster deployment: +* In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -577,9 +577,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -587,7 +587,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. 
Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_azure_stack_hub/ipi/ipi-ash-preparing-to-install.txt b/ocp-product-docs-plaintext/4.18/installing/installing_azure_stack_hub/ipi/ipi-ash-preparing-to-install.txt index e542cf23..dae25398 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_azure_stack_hub/ipi/ipi-ash-preparing-to-install.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_azure_stack_hub/ipi/ipi-ash-preparing-to-install.txt @@ -15,7 +15,7 @@ In Red Hat OpenShift Container Platform 4.18, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_azure_stack_hub/upi/installing-azure-stack-hub-user-infra.txt b/ocp-product-docs-plaintext/4.18/installing/installing_azure_stack_hub/upi/installing-azure-stack-hub-user-infra.txt index 16dc5aa8..8bee809f 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_azure_stack_hub/upi/installing-azure-stack-hub-user-infra.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_azure_stack_hub/upi/installing-azure-stack-hub-user-infra.txt @@ -1341,9 +1341,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1351,7 +1351,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_azure_stack_hub/upi/upi-ash-preparing-to-install.txt b/ocp-product-docs-plaintext/4.18/installing/installing_azure_stack_hub/upi/upi-ash-preparing-to-install.txt index 12cc4750..ddd18d6d 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_azure_stack_hub/upi/upi-ash-preparing-to-install.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_azure_stack_hub/upi/upi-ash-preparing-to-install.txt @@ -14,7 +14,7 @@ In Red Hat OpenShift Container Platform 4.18, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. 
* Access Quay.io to obtain the packages that are required to install your cluster. diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_bare_metal/ipi/ipi-install-prerequisites.txt b/ocp-product-docs-plaintext/4.18/installing/installing_bare_metal/ipi/ipi-install-prerequisites.txt index 133c1db5..ee2898c8 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_bare_metal/ipi/ipi-install-prerequisites.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_bare_metal/ipi/ipi-install-prerequisites.txt @@ -370,4 +370,49 @@ Prior to the installation of the Red Hat OpenShift Container Platform cluster, g * Control plane and worker nodes are configured. * All nodes accessible via out-of-band management. * (Optional) A separate management network has been created. -* Required data for installation. \ No newline at end of file +* Required data for installation. + +# Installation overview + +The installation program supports interactive mode. However, you can prepare an install-config.yaml file containing the provisioning details for all of the bare-metal hosts, and the relevant cluster details, in advance. + +The installation program loads the install-config.yaml file and the administrator generates the manifests and verifies all prerequisites. + +The installation program performs the following tasks: + +* Enrolls all nodes in the cluster +* Starts the bootstrap virtual machine (VM) +* Starts the metal platform components as systemd services, which have the following containers: +* Ironic-dnsmasq: The DHCP server responsible for handing over the IP addresses to the provisioning interface of various nodes on the provisioning network. Ironic-dnsmasq is only enabled when you deploy an Red Hat OpenShift Container Platform cluster with a provisioning network. +* Ironic-httpd: The HTTP server that is used to ship the images to the nodes. +* Image-customization +* Ironic +* Ironic-inspector (available in Red Hat OpenShift Container Platform 4.16 and earlier) +* Ironic-ramdisk-logs +* Extract-machine-os +* Provisioning-interface +* Metal3-baremetal-operator + +The nodes enter the validation phase, where each node moves to a manageable state after Ironic validates the credentials to access the Baseboard Management Controller (BMC). + +When the node is in the manageable state, the inspection phase starts. The inspection phase ensures that the hardware meets the minimum requirements needed for a successful deployment of Red Hat OpenShift Container Platform. + +The install-config.yaml file details the provisioning network. On the bootstrap VM, the installation program uses the Pre-Boot Execution Environment (PXE) to push a live image to every node with the Ironic Python Agent (IPA) loaded. When using virtual media, it connects directly to the BMC of each node to virtually attach the image. + +When using PXE boot, all nodes reboot to start the process: + +* The ironic-dnsmasq service running on the bootstrap VM provides the IP address of the node and the TFTP boot server. +* The first-boot software loads the root file system over HTTP. +* The ironic service on the bootstrap VM receives the hardware information from each node. + +The nodes enter the cleaning state, where each node must clean all the disks before continuing with the configuration. + +After the cleaning state finishes, the nodes enter the available state and the installation program moves the nodes to the deploying state. 
+ +IPA runs the coreos-installer command to install the Red Hat Enterprise Linux CoreOS (RHCOS) image on the disk defined by the rootDeviceHints parameter in the install-config.yaml file. The node boots by using RHCOS. + +After the installation program configures the control plane nodes, it moves control from the bootstrap VM to the control plane nodes and deletes the bootstrap VM. + +The Bare-Metal Operator continues the deployment of the workers, storage, and infra nodes. + +After the installation completes, the nodes move to the active state. You can then proceed with postinstallation configuration and other Day 2 tasks. \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_bare_metal/upi/installing-bare-metal-network-customizations.txt b/ocp-product-docs-plaintext/4.18/installing/installing_bare_metal/upi/installing-bare-metal-network-customizations.txt index 446e9d89..c2efc78a 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_bare_metal/upi/installing-bare-metal-network-customizations.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_bare_metal/upi/installing-bare-metal-network-customizations.txt @@ -21,7 +21,7 @@ In Red Hat OpenShift Container Platform 4.18, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -2985,9 +2985,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -2995,7 +2995,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_bare_metal/upi/installing-bare-metal.txt b/ocp-product-docs-plaintext/4.18/installing/installing_bare_metal/upi/installing-bare-metal.txt index e7a852e5..e3647ecf 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_bare_metal/upi/installing-bare-metal.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_bare_metal/upi/installing-bare-metal.txt @@ -25,7 +25,7 @@ In Red Hat OpenShift Container Platform 4.18, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. 
* Access Quay.io to obtain the packages that are required to install your cluster. @@ -2977,9 +2977,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -2987,7 +2987,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_bare_metal/upi/installing-restricted-networks-bare-metal.txt b/ocp-product-docs-plaintext/4.18/installing/installing_bare_metal/upi/installing-restricted-networks-bare-metal.txt index e59aaed4..33772466 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_bare_metal/upi/installing-restricted-networks-bare-metal.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_bare_metal/upi/installing-restricted-networks-bare-metal.txt @@ -69,7 +69,7 @@ In Red Hat OpenShift Container Platform 4.18, you require access to the internet obtain the images that are necessary to install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -2976,9 +2976,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -2986,7 +2986,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_gcp/installing-gcp-customizations.txt b/ocp-product-docs-plaintext/4.18/installing/installing_gcp/installing-gcp-customizations.txt index e2aa5abd..7564031c 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_gcp/installing-gcp-customizations.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_gcp/installing-gcp-customizations.txt @@ -19,7 +19,7 @@ In Red Hat OpenShift Container Platform 4.18, you require access to the internet install your cluster. 
-You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -1246,7 +1246,7 @@ following locations: environment variables * The ~/.gcp/osServiceAccount.json file * The gcloud cli default credentials -2. Change to the directory that contains the installation program and initialize the cluster deployment: +2. In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -1298,9 +1298,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1308,7 +1308,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_gcp/installing-gcp-default.txt b/ocp-product-docs-plaintext/4.18/installing/installing_gcp/installing-gcp-default.txt index e45faae2..012047d0 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_gcp/installing-gcp-default.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_gcp/installing-gcp-default.txt @@ -17,7 +17,7 @@ In Red Hat OpenShift Container Platform 4.18, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -165,7 +165,7 @@ following locations: environment variables * The ~/.gcp/osServiceAccount.json file * The gcloud cli default credentials -2. Change to the directory that contains the installation program and initialize the cluster deployment: +2. In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -339,9 +339,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. 
Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -349,7 +349,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_gcp/installing-gcp-network-customizations.txt b/ocp-product-docs-plaintext/4.18/installing/installing_gcp/installing-gcp-network-customizations.txt index 38941006..1f6b0880 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_gcp/installing-gcp-network-customizations.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_gcp/installing-gcp-network-customizations.txt @@ -25,7 +25,7 @@ In Red Hat OpenShift Container Platform 4.18, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -1204,7 +1204,7 @@ following locations: environment variables * The ~/.gcp/osServiceAccount.json file * The gcloud cli default credentials -2. Change to the directory that contains the installation program and initialize the cluster deployment: +2. In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -1256,9 +1256,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1266,7 +1266,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_gcp/installing-gcp-private.txt b/ocp-product-docs-plaintext/4.18/installing/installing_gcp/installing-gcp-private.txt index df33ec50..06376898 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_gcp/installing-gcp-private.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_gcp/installing-gcp-private.txt @@ -112,7 +112,7 @@ In Red Hat OpenShift Container Platform 4.18, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. 
If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -1199,7 +1199,7 @@ following locations: environment variables * The ~/.gcp/osServiceAccount.json file * The gcloud cli default credentials -2. Change to the directory that contains the installation program and initialize the cluster deployment: +2. In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -1251,9 +1251,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1261,7 +1261,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_gcp/installing-gcp-shared-vpc.txt b/ocp-product-docs-plaintext/4.18/installing/installing_gcp/installing-gcp-shared-vpc.txt index 4cc6698c..c99d8d00 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_gcp/installing-gcp-shared-vpc.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_gcp/installing-gcp-shared-vpc.txt @@ -20,7 +20,7 @@ In Red Hat OpenShift Container Platform 4.18, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -911,7 +911,7 @@ following locations: environment variables * The ~/.gcp/osServiceAccount.json file * The gcloud cli default credentials -2. Change to the directory that contains the installation program and initialize the cluster deployment: +2. In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -963,9 +963,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -973,7 +973,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. 
Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_gcp/installing-gcp-user-infra-vpc.txt b/ocp-product-docs-plaintext/4.18/installing/installing_gcp/installing-gcp-user-infra-vpc.txt index a38110ec..ae534fc6 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_gcp/installing-gcp-user-infra-vpc.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_gcp/installing-gcp-user-infra-vpc.txt @@ -36,7 +36,7 @@ In Red Hat OpenShift Container Platform 4.18, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -1898,10 +1898,10 @@ You can log in to your cluster as a default system user by exporting the cluster The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). * Ensure the bootstrap process completed successfully. -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1909,7 +1909,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_gcp/installing-gcp-user-infra.txt b/ocp-product-docs-plaintext/4.18/installing/installing_gcp/installing-gcp-user-infra.txt index d57ed0fe..5f4eb4ed 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_gcp/installing-gcp-user-infra.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_gcp/installing-gcp-user-infra.txt @@ -31,7 +31,7 @@ In Red Hat OpenShift Container Platform 4.18, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -2046,10 +2046,10 @@ You can log in to your cluster as a default system user by exporting the cluster The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). 
* Ensure the bootstrap process completed successfully. -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -2057,7 +2057,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_gcp/installing-gcp-vpc.txt b/ocp-product-docs-plaintext/4.18/installing/installing_gcp/installing-gcp-vpc.txt index 1f646b8b..0600be67 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_gcp/installing-gcp-vpc.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_gcp/installing-gcp-vpc.txt @@ -60,7 +60,7 @@ In Red Hat OpenShift Container Platform 4.18, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -1161,7 +1161,7 @@ following locations: environment variables * The ~/.gcp/osServiceAccount.json file * The gcloud cli default credentials -2. Change to the directory that contains the installation program and initialize the cluster deployment: +2. In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -1213,9 +1213,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1223,7 +1223,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_gcp/installing-restricted-networks-gcp-installer-provisioned.txt b/ocp-product-docs-plaintext/4.18/installing/installing_gcp/installing-restricted-networks-gcp-installer-provisioned.txt index 08ba0a6e..71834f4f 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_gcp/installing-restricted-networks-gcp-installer-provisioned.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_gcp/installing-restricted-networks-gcp-installer-provisioned.txt @@ -56,7 +56,7 @@ In Red Hat OpenShift Container Platform 4.18, you require access to the internet obtain the images that are necessary to install your cluster. 
-You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -1192,7 +1192,7 @@ following locations: environment variables * The ~/.gcp/osServiceAccount.json file * The gcloud cli default credentials -2. Change to the directory that contains the installation program and initialize the cluster deployment: +2. In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -1244,9 +1244,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1254,7 +1254,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_gcp/installing-restricted-networks-gcp.txt b/ocp-product-docs-plaintext/4.18/installing/installing_gcp/installing-restricted-networks-gcp.txt index 111a97c9..638e620a 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_gcp/installing-restricted-networks-gcp.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_gcp/installing-restricted-networks-gcp.txt @@ -65,7 +65,7 @@ In Red Hat OpenShift Container Platform 4.18, you require access to the internet obtain the images that are necessary to install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -2008,10 +2008,10 @@ You can log in to your cluster as a default system user by exporting the cluster The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). * Ensure the bootstrap process completed successfully. -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -2019,7 +2019,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. 
Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_ibm_cloud/installing-ibm-cloud-customizations.txt b/ocp-product-docs-plaintext/4.18/installing/installing_ibm_cloud/installing-ibm-cloud-customizations.txt index b726a678..49c581b4 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_ibm_cloud/installing-ibm-cloud-customizations.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_ibm_cloud/installing-ibm-cloud-customizations.txt @@ -17,7 +17,7 @@ In Red Hat OpenShift Container Platform 4.18, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -507,7 +507,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -* Change to the directory that contains the installation program and initialize the cluster deployment: +* In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -652,9 +652,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -662,7 +662,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_ibm_cloud/installing-ibm-cloud-network-customizations.txt b/ocp-product-docs-plaintext/4.18/installing/installing_ibm_cloud/installing-ibm-cloud-network-customizations.txt index 708c9f1a..f6d6b08a 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_ibm_cloud/installing-ibm-cloud-network-customizations.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_ibm_cloud/installing-ibm-cloud-network-customizations.txt @@ -19,7 +19,7 @@ In Red Hat OpenShift Container Platform 4.18, you require access to the internet install your cluster. 
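The kubeconfig hunks above all reword the same two-step login check. As a minimal consolidated sketch, assuming the installation program wrote its assets to a hypothetical directory named mycluster (the installation directory placeholder is rendered as an empty token in this plaintext, so the path shown here is illustrative only), the check looks roughly like this:

```terminal
# point the CLI at the kubeconfig created by the installation program
$ export KUBECONFIG=mycluster/auth/kubeconfig
# confirm that oc commands work with the exported configuration
$ oc whoami
```

When the exported configuration is valid, oc whoami typically returns the default system user, for example system:admin.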
-You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -655,7 +655,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -* Change to the directory that contains the installation program and initialize the cluster deployment: +* In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -800,9 +800,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -810,7 +810,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_ibm_cloud/installing-ibm-cloud-private.txt b/ocp-product-docs-plaintext/4.18/installing/installing_ibm_cloud/installing-ibm-cloud-private.txt index 59c7fc02..76f601f3 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_ibm_cloud/installing-ibm-cloud-private.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_ibm_cloud/installing-ibm-cloud-private.txt @@ -125,7 +125,7 @@ In Red Hat OpenShift Container Platform 4.18, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -625,7 +625,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. 
-* Change to the directory that contains the installation program and initialize the cluster deployment: +* In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -770,9 +770,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -780,7 +780,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_ibm_cloud/installing-ibm-cloud-restricted.txt b/ocp-product-docs-plaintext/4.18/installing/installing_ibm_cloud/installing-ibm-cloud-restricted.txt index ddc24149..16fc7e2a 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_ibm_cloud/installing-ibm-cloud-restricted.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_ibm_cloud/installing-ibm-cloud-restricted.txt @@ -885,7 +885,7 @@ If the Red Hat Enterprise Linux CoreOS (RHCOS) image is available locally, the h $ export OPENSHIFT_INSTALL_OS_IMAGE_OVERRIDE="/rhcos--ibmcloud.x86_64.qcow2.gz" ``` -2. Change to the directory that contains the installation program and initialize the cluster deployment: +2. In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -933,9 +933,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -943,7 +943,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_ibm_cloud/installing-ibm-cloud-vpc.txt b/ocp-product-docs-plaintext/4.18/installing/installing_ibm_cloud/installing-ibm-cloud-vpc.txt index 1ba4b02b..1954419b 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_ibm_cloud/installing-ibm-cloud-vpc.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_ibm_cloud/installing-ibm-cloud-vpc.txt @@ -84,7 +84,7 @@ In Red Hat OpenShift Container Platform 4.18, you require access to the internet install your cluster. 
-You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -590,7 +590,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -* Change to the directory that contains the installation program and initialize the cluster deployment: +* In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -735,9 +735,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -745,7 +745,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_ibm_power/installing-ibm-power.txt b/ocp-product-docs-plaintext/4.18/installing/installing_ibm_power/installing-ibm-power.txt index 65a98b0c..d303093e 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_ibm_power/installing-ibm-power.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_ibm_power/installing-ibm-power.txt @@ -29,7 +29,7 @@ In Red Hat OpenShift Container Platform 4.18, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -1817,9 +1817,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. 
Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1827,7 +1827,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_ibm_power/installing-restricted-networks-ibm-power.txt b/ocp-product-docs-plaintext/4.18/installing/installing_ibm_power/installing-restricted-networks-ibm-power.txt index b79d7f4c..c2735b38 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_ibm_power/installing-restricted-networks-ibm-power.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_ibm_power/installing-restricted-networks-ibm-power.txt @@ -61,7 +61,7 @@ In Red Hat OpenShift Container Platform 4.18, you require access to the internet obtain the images that are necessary to install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -1728,9 +1728,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1738,7 +1738,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_ibm_powervs/installing-ibm-power-vs-customizations.txt b/ocp-product-docs-plaintext/4.18/installing/installing_ibm_powervs/installing-ibm-power-vs-customizations.txt index f43c4df4..2848c233 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_ibm_powervs/installing-ibm-power-vs-customizations.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_ibm_powervs/installing-ibm-power-vs-customizations.txt @@ -17,7 +17,7 @@ In Red Hat OpenShift Container Platform 4.18, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. 
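Several hunks in this patch, including the one that follows, reword the step that starts the cluster deployment. As a minimal sketch, assuming the installation program sits in the current directory and a hypothetical asset directory named mycluster supplies the elided directory argument (callout 1 in the patched snippets), the invocation looks roughly like this:

```terminal
# initialize the cluster deployment from the directory that contains the installation program
$ ./openshift-install create cluster --dir mycluster \
    --log-level=info
```

The --log-level flag is optional and only controls how much detail the installation program logs while it provisions the cluster; as the surrounding docs note, the create cluster command can be run only once, during installation.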
@@ -484,7 +484,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -* Change to the directory that contains the installation program and initialize the cluster deployment: +* In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -629,9 +629,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -639,7 +639,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_ibm_powervs/installing-ibm-power-vs-private-cluster.txt b/ocp-product-docs-plaintext/4.18/installing/installing_ibm_powervs/installing-ibm-power-vs-private-cluster.txt index 43987dd7..f04929b6 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_ibm_powervs/installing-ibm-power-vs-private-cluster.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_ibm_powervs/installing-ibm-power-vs-private-cluster.txt @@ -101,7 +101,7 @@ In Red Hat OpenShift Container Platform 4.18, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -577,7 +577,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. 
-* Change to the directory that contains the installation program and initialize the cluster deployment: +* In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -722,9 +722,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -732,7 +732,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_ibm_powervs/installing-ibm-powervs-vpc.txt b/ocp-product-docs-plaintext/4.18/installing/installing_ibm_powervs/installing-ibm-powervs-vpc.txt index 0f6cbc4c..76871554 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_ibm_powervs/installing-ibm-powervs-vpc.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_ibm_powervs/installing-ibm-powervs-vpc.txt @@ -66,7 +66,7 @@ In Red Hat OpenShift Container Platform 4.18, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -574,7 +574,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -* Change to the directory that contains the installation program and initialize the cluster deployment: +* In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -719,9 +719,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -729,7 +729,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. 
Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_ibm_powervs/installing-restricted-networks-ibm-power-vs.txt b/ocp-product-docs-plaintext/4.18/installing/installing_ibm_powervs/installing-restricted-networks-ibm-power-vs.txt index 1054de87..d28c7d20 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_ibm_powervs/installing-restricted-networks-ibm-power-vs.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_ibm_powervs/installing-restricted-networks-ibm-power-vs.txt @@ -105,7 +105,7 @@ In Red Hat OpenShift Container Platform 4.18, you require access to the internet obtain the images that are necessary to install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -641,7 +641,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -* Change to the directory that contains the installation program and initialize the cluster deployment: +* In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -786,9 +786,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -796,7 +796,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_ibm_z/upi/installing-ibm-z-kvm.txt b/ocp-product-docs-plaintext/4.18/installing/installing_ibm_z/upi/installing-ibm-z-kvm.txt index 221ab70f..d3d5dbe0 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_ibm_z/upi/installing-ibm-z-kvm.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_ibm_z/upi/installing-ibm-z-kvm.txt @@ -964,9 +964,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. 
* You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -974,7 +974,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_ibm_z/upi/installing-ibm-z-lpar.txt b/ocp-product-docs-plaintext/4.18/installing/installing_ibm_z/upi/installing-ibm-z-lpar.txt index 10633d48..2e2b35f7 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_ibm_z/upi/installing-ibm-z-lpar.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_ibm_z/upi/installing-ibm-z-lpar.txt @@ -897,9 +897,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -907,7 +907,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_ibm_z/upi/installing-ibm-z.txt b/ocp-product-docs-plaintext/4.18/installing/installing_ibm_z/upi/installing-ibm-z.txt index 6cfac7f6..d7ae9b43 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_ibm_z/upi/installing-ibm-z.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_ibm_z/upi/installing-ibm-z.txt @@ -914,9 +914,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -924,7 +924,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. 
Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_ibm_z/upi/installing-restricted-networks-ibm-z-kvm.txt b/ocp-product-docs-plaintext/4.18/installing/installing_ibm_z/upi/installing-restricted-networks-ibm-z-kvm.txt index b3546bef..c08e02b5 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_ibm_z/upi/installing-restricted-networks-ibm-z-kvm.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_ibm_z/upi/installing-restricted-networks-ibm-z-kvm.txt @@ -1022,9 +1022,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1032,7 +1032,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_ibm_z/upi/installing-restricted-networks-ibm-z-lpar.txt b/ocp-product-docs-plaintext/4.18/installing/installing_ibm_z/upi/installing-restricted-networks-ibm-z-lpar.txt index b1a4afdc..ea4c5947 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_ibm_z/upi/installing-restricted-networks-ibm-z-lpar.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_ibm_z/upi/installing-restricted-networks-ibm-z-lpar.txt @@ -949,9 +949,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -959,7 +959,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_ibm_z/upi/installing-restricted-networks-ibm-z.txt b/ocp-product-docs-plaintext/4.18/installing/installing_ibm_z/upi/installing-restricted-networks-ibm-z.txt index edff72d1..065d9952 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_ibm_z/upi/installing-restricted-networks-ibm-z.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_ibm_z/upi/installing-restricted-networks-ibm-z.txt @@ -971,9 +971,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. 
* You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -981,7 +981,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_ibm_z/upi/upi-ibm-z-preparing-to-install.txt b/ocp-product-docs-plaintext/4.18/installing/installing_ibm_z/upi/upi-ibm-z-preparing-to-install.txt index f53a3d42..a532d67f 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_ibm_z/upi/upi-ibm-z-preparing-to-install.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_ibm_z/upi/upi-ibm-z-preparing-to-install.txt @@ -24,7 +24,7 @@ In Red Hat OpenShift Container Platform 4.18, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_nutanix/installing-nutanix-installer-provisioned.txt b/ocp-product-docs-plaintext/4.18/installing/installing_nutanix/installing-nutanix-installer-provisioned.txt index cf67ad47..47a1f8cc 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_nutanix/installing-nutanix-installer-provisioned.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_nutanix/installing-nutanix-installer-provisioned.txt @@ -28,7 +28,7 @@ In Red Hat OpenShift Container Platform 4.18, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -1226,7 +1226,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. 
-* Change to the directory that contains the installation program and initialize the cluster deployment: +* In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_nutanix/installing-restricted-networks-nutanix-installer-provisioned.txt b/ocp-product-docs-plaintext/4.18/installing/installing_nutanix/installing-restricted-networks-nutanix-installer-provisioned.txt index 60ca1c97..7f5eaad7 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_nutanix/installing-restricted-networks-nutanix-installer-provisioned.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_nutanix/installing-restricted-networks-nutanix-installer-provisioned.txt @@ -838,7 +838,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -* Change to the directory that contains the installation program and initialize the cluster deployment: +* In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_openstack/installing-openstack-installer-custom.txt b/ocp-product-docs-plaintext/4.18/installing/installing_openstack/installing-openstack-installer-custom.txt index 6c5b169c..f873308a 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_openstack/installing-openstack-installer-custom.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_openstack/installing-openstack-installer-custom.txt @@ -221,7 +221,7 @@ In Red Hat OpenShift Container Platform 4.18, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -1508,7 +1508,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. 
-* Change to the directory that contains the installation program and initialize the cluster deployment: +* In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -1594,9 +1594,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1604,7 +1604,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_openstack/installing-openstack-installer-restricted.txt b/ocp-product-docs-plaintext/4.18/installing/installing_openstack/installing-openstack-installer-restricted.txt index 6a1d21d8..408aa4d0 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_openstack/installing-openstack-installer-restricted.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_openstack/installing-openstack-installer-restricted.txt @@ -116,7 +116,7 @@ In Red Hat OpenShift Container Platform 4.18, you require access to the internet obtain the images that are necessary to install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -758,7 +758,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -* Change to the directory that contains the installation program and initialize the cluster deployment: +* In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -844,9 +844,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. 
Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -854,7 +854,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_openstack/installing-openstack-user.txt b/ocp-product-docs-plaintext/4.18/installing/installing_openstack/installing-openstack-user.txt index f60795ac..d776fd2a 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_openstack/installing-openstack-user.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_openstack/installing-openstack-user.txt @@ -22,7 +22,7 @@ In Red Hat OpenShift Container Platform 4.18, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -1419,9 +1419,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1429,7 +1429,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_platform_agnostic/installing-platform-agnostic.txt b/ocp-product-docs-plaintext/4.18/installing/installing_platform_agnostic/installing-platform-agnostic.txt index 2ffd02cf..08540b14 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_platform_agnostic/installing-platform-agnostic.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_platform_agnostic/installing-platform-agnostic.txt @@ -26,7 +26,7 @@ In Red Hat OpenShift Container Platform 4.18, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -1979,9 +1979,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. 
* You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1989,7 +1989,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_vsphere/installing-vsphere-agent-based-installer.txt b/ocp-product-docs-plaintext/4.18/installing/installing_vsphere/installing-vsphere-agent-based-installer.txt index 1921582c..2558ab8b 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_vsphere/installing-vsphere-agent-based-installer.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_vsphere/installing-vsphere-agent-based-installer.txt @@ -1,9 +1,16 @@ # Installing a cluster on vSphere using the Agent-based Installer + The Agent-based installation method provides the flexibility to boot your on-premise servers in any way that you choose. It combines the ease of use of the Assisted Installation service with the ability to run offline, including in air-gapped environments. + Agent-based installation is a subcommand of the Red Hat OpenShift Container Platform installer. It generates a bootable ISO image containing all of the information required to deploy an Red Hat OpenShift Container Platform cluster with an available release image. -# Additional resources +For more information about installing a cluster using the Agent-based Installer, see Preparing to install with the Agent-based Installer. + -* Preparing to install with the Agent-based Installer \ No newline at end of file +[IMPORTANT] +---- +Your vSphere account must include privileges for reading and creating the resources required to install an Red Hat OpenShift Container Platform cluster. +For more information about privileges, see vCenter requirements. +---- \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_vsphere/ipi/installing-restricted-networks-installer-provisioned-vsphere.txt b/ocp-product-docs-plaintext/4.18/installing/installing_vsphere/ipi/installing-restricted-networks-installer-provisioned-vsphere.txt index d06cd5c3..344d9ceb 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_vsphere/ipi/installing-restricted-networks-installer-provisioned-vsphere.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_vsphere/ipi/installing-restricted-networks-installer-provisioned-vsphere.txt @@ -57,7 +57,7 @@ In Red Hat OpenShift Container Platform 4.18, you require access to the internet obtain the images that are necessary to install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. 
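The rewritten introduction for installing-vsphere-agent-based-installer.txt describes the Agent-based subcommand without showing an invocation. As an illustrative sketch only, assuming that install-config.yaml and agent-config.yaml already exist in a hypothetical ./agent-install directory:

```terminal
$ ./openshift-install agent create image --dir ./agent-install
```

The generated bootable ISO (agent.<arch>.iso in the assets directory) can then be attached to the vSphere hosts, for example through virtual media.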
@@ -434,20 +434,20 @@ You can modify the default installation configuration file, so that you can depl The default install-config.yaml file configuration from the previous release of Red Hat OpenShift Container Platform is deprecated. You can continue to use the deprecated default configuration, but the openshift-installer will prompt you with a warning message that indicates the use of deprecated fields in the configuration file. +* You have an existing install-config.yaml installation configuration file. [IMPORTANT] ---- -The example uses the govc command. The govc command is an open source command available from VMware; it is not available from Red Hat. The Red Hat support team does not maintain the govc command. Instructions for downloading and installing govc are found on the VMware documentation website +You must specify at least one failure domain for your Red Hat OpenShift Container Platform cluster, so that you can provision data center objects for your VMware vCenter server. Consider specifying multiple failure domains if you need to provision virtual machine nodes in different data centers, clusters, datastores, and other components. To enable regions and zones, you must define multiple failure domains for your Red Hat OpenShift Container Platform cluster. ---- - -* You have an existing install-config.yaml installation configuration file. +* You have installed the govc command line tool. [IMPORTANT] ---- -You must specify at least one failure domain for your Red Hat OpenShift Container Platform cluster, so that you can provision data center objects for your VMware vCenter server. Consider specifying multiple failure domains if you need to provision virtual machine nodes in different data centers, clusters, datastores, and other components. To enable regions and zones, you must define multiple failure domains for your Red Hat OpenShift Container Platform cluster. +The example uses the govc command. The govc command is an open source command available from VMware; it is not available from Red Hat. The Red Hat support team does not maintain the govc command. Instructions for downloading and installing govc are found on the VMware documentation website. ---- -1. Enter the following govc command-line tool commands to create the openshift-region and openshift-zone vCenter tag categories: +1. Create the openshift-region and openshift-zone vCenter tag categories by running the following commands: [IMPORTANT] ---- @@ -463,25 +463,25 @@ $ govc tags.category.create -d "OpenShift region" openshift-region $ govc tags.category.create -d "OpenShift zone" openshift-zone ``` -2. To create a region tag for each region vSphere data center where you want to deploy your cluster, enter the following command in your terminal: +2. For each region where you want to deploy your cluster, create a region tag by running the following command: ```terminal $ govc tags.create -c ``` -3. To create a zone tag for each vSphere cluster where you want to deploy your cluster, enter the following command: +3. For each zone where you want to deploy your cluster, create a zone tag by running the following command: ```terminal $ govc tags.create -c ``` -4. Attach region tags to each vCenter data center object by entering the following command: +4. Attach region tags to each vCenter data center object by running the following command: ```terminal $ govc tags.attach -c / ``` -5. Attach the zone tags to each vCenter cluster object by entering the following command: +5. 
Attach the zone tags to each vCenter cluster object by running the following command: ```terminal $ govc tags.attach -c //host/ @@ -977,14 +977,14 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -* Optional: Before you create the cluster, configure an external load balancer in place of the default load balancer. +* Optional: Before you create the cluster, you configured an external load balancer in place of the default load balancer. [IMPORTANT] ---- You do not need to specify API and Ingress static addresses for your installation program. If you choose this configuration, you must take additional actions to define network targets that accept an IP address from each referenced vSphere subnet. See the section "Configuring a user-managed load balancer". ---- -* Change to the directory that contains the installation program and initialize the cluster deployment: +* In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -1032,9 +1032,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1042,7 +1042,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami @@ -1121,13 +1121,13 @@ testing that was possibly completed against these Red Hat OpenShift Container Pl components. ---- -1. To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. +1. Change the spec.storage.pvc field in the configs.imageregistry/cluster resource. [NOTE] ---- When you use shared storage, review your security settings to prevent outside access. ---- -2. Verify that you do not have a registry pod: +2. Verify that you do not have a registry pod by running the following command: ```terminal $ oc get pod -n openshift-image-registry -l docker-registry=default @@ -1144,7 +1144,7 @@ No resourses found in openshift-image-registry namespace ---- If you do have a registry pod in your output, you do not need to continue with this procedure. ---- -3. Check the registry configuration: +3. Check the registry configuration by running the following command: ```terminal $ oc edit configs.imageregistry.operator.openshift.io @@ -1159,7 +1159,7 @@ storage: ``` Leave the claim field blank to allow the automatic creation of an image-registry-storage persistent volume claim (PVC). The PVC is generated based on the default storage class. 
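The image-registry steps that these hunks tighten chain a few oc commands. A condensed sketch of that check follows, with shell comments added here for orientation only (they are not part of the documented output):

```terminal
$ oc get pod -n openshift-image-registry -l docker-registry=default   # expect no registry pod before storage is configured
$ oc edit configs.imageregistry.operator.openshift.io                 # set spec.storage.pvc, leaving claim blank
$ oc get clusteroperator image-registry                               # AVAILABLE should report True after the Operator reconciles
```

When claim is left blank, the image-registry-storage PVC is created automatically from the default storage class.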
However, be aware that the default storage class might provide ReadWriteOnce (RWO) volumes, such as a RADOS Block Device (RBD), which can cause issues when you replicate to more than one replica. -4. Check the clusteroperator status: +4. Check the clusteroperator status by running the following command: ```terminal $ oc get clusteroperator image-registry @@ -1168,8 +1168,8 @@ $ oc get clusteroperator image-registry Example output ```terminal -NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE -image-registry 4.7 True False False 6h50m +NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE +image-registry 4.7 True False False 6h50m ``` diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_vsphere/ipi/installing-vsphere-installer-provisioned-customizations.txt b/ocp-product-docs-plaintext/4.18/installing/installing_vsphere/ipi/installing-vsphere-installer-provisioned-customizations.txt index 97aad0b0..eba9116e 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_vsphere/ipi/installing-vsphere-installer-provisioned-customizations.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_vsphere/ipi/installing-vsphere-installer-provisioned-customizations.txt @@ -26,7 +26,7 @@ In Red Hat OpenShift Container Platform 4.18, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -306,20 +306,20 @@ You can modify the default installation configuration file, so that you can depl The default install-config.yaml file configuration from the previous release of Red Hat OpenShift Container Platform is deprecated. You can continue to use the deprecated default configuration, but the openshift-installer will prompt you with a warning message that indicates the use of deprecated fields in the configuration file. +* You have an existing install-config.yaml installation configuration file. [IMPORTANT] ---- -The example uses the govc command. The govc command is an open source command available from VMware; it is not available from Red Hat. The Red Hat support team does not maintain the govc command. Instructions for downloading and installing govc are found on the VMware documentation website +You must specify at least one failure domain for your Red Hat OpenShift Container Platform cluster, so that you can provision data center objects for your VMware vCenter server. Consider specifying multiple failure domains if you need to provision virtual machine nodes in different data centers, clusters, datastores, and other components. To enable regions and zones, you must define multiple failure domains for your Red Hat OpenShift Container Platform cluster. ---- - -* You have an existing install-config.yaml installation configuration file. +* You have installed the govc command line tool. [IMPORTANT] ---- -You must specify at least one failure domain for your Red Hat OpenShift Container Platform cluster, so that you can provision data center objects for your VMware vCenter server. Consider specifying multiple failure domains if you need to provision virtual machine nodes in different data centers, clusters, datastores, and other components. 
To enable regions and zones, you must define multiple failure domains for your Red Hat OpenShift Container Platform cluster. +The example uses the govc command. The govc command is an open source command available from VMware; it is not available from Red Hat. The Red Hat support team does not maintain the govc command. Instructions for downloading and installing govc are found on the VMware documentation website. ---- -1. Enter the following govc command-line tool commands to create the openshift-region and openshift-zone vCenter tag categories: +1. Create the openshift-region and openshift-zone vCenter tag categories by running the following commands: [IMPORTANT] ---- @@ -335,25 +335,25 @@ $ govc tags.category.create -d "OpenShift region" openshift-region $ govc tags.category.create -d "OpenShift zone" openshift-zone ``` -2. To create a region tag for each region vSphere data center where you want to deploy your cluster, enter the following command in your terminal: +2. For each region where you want to deploy your cluster, create a region tag by running the following command: ```terminal $ govc tags.create -c ``` -3. To create a zone tag for each vSphere cluster where you want to deploy your cluster, enter the following command: +3. For each zone where you want to deploy your cluster, create a zone tag by running the following command: ```terminal $ govc tags.create -c ``` -4. Attach region tags to each vCenter data center object by entering the following command: +4. Attach region tags to each vCenter data center object by running the following command: ```terminal $ govc tags.attach -c / ``` -5. Attach the zone tags to each vCenter cluster object by entering the following command: +5. Attach the zone tags to each vCenter cluster object by running the following command: ```terminal $ govc tags.attach -c //host/ @@ -849,14 +849,14 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -* Optional: Before you create the cluster, configure an external load balancer in place of the default load balancer. +* Optional: Before you create the cluster, you configured an external load balancer in place of the default load balancer. [IMPORTANT] ---- You do not need to specify API and Ingress static addresses for your installation program. If you choose this configuration, you must take additional actions to define network targets that accept an IP address from each referenced vSphere subnet. See the section "Configuring a user-managed load balancer". ---- -* Change to the directory that contains the installation program and initialize the cluster deployment: +* In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -904,9 +904,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. 
Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -914,7 +914,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami @@ -974,13 +974,13 @@ testing that was possibly completed against these Red Hat OpenShift Container Pl components. ---- -1. To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. +1. Change the spec.storage.pvc field in the configs.imageregistry/cluster resource. [NOTE] ---- When you use shared storage, review your security settings to prevent outside access. ---- -2. Verify that you do not have a registry pod: +2. Verify that you do not have a registry pod by running the following command: ```terminal $ oc get pod -n openshift-image-registry -l docker-registry=default @@ -997,7 +997,7 @@ No resourses found in openshift-image-registry namespace ---- If you do have a registry pod in your output, you do not need to continue with this procedure. ---- -3. Check the registry configuration: +3. Check the registry configuration by running the following command: ```terminal $ oc edit configs.imageregistry.operator.openshift.io @@ -1012,7 +1012,7 @@ storage: ``` Leave the claim field blank to allow the automatic creation of an image-registry-storage persistent volume claim (PVC). The PVC is generated based on the default storage class. However, be aware that the default storage class might provide ReadWriteOnce (RWO) volumes, such as a RADOS Block Device (RBD), which can cause issues when you replicate to more than one replica. -4. Check the clusteroperator status: +4. Check the clusteroperator status by running the following command: ```terminal $ oc get clusteroperator image-registry @@ -1021,8 +1021,8 @@ $ oc get clusteroperator image-registry Example output ```terminal -NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE -image-registry 4.7 True False False 6h50m +NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE +image-registry 4.7 True False False 6h50m ``` @@ -1045,7 +1045,7 @@ have more than one replica. $ oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' ``` -2. Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. +2. Provision the persistent volume (PV) for the block storage device, and create a persistent volume claim (PVC) for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. 1. 
Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: ```yaml diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_vsphere/ipi/installing-vsphere-installer-provisioned-network-customizations.txt b/ocp-product-docs-plaintext/4.18/installing/installing_vsphere/ipi/installing-vsphere-installer-provisioned-network-customizations.txt index 0e1154cc..383868d9 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_vsphere/ipi/installing-vsphere-installer-provisioned-network-customizations.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_vsphere/ipi/installing-vsphere-installer-provisioned-network-customizations.txt @@ -28,7 +28,7 @@ In Red Hat OpenShift Container Platform 4.18, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -354,20 +354,20 @@ You can modify the default installation configuration file, so that you can depl The default install-config.yaml file configuration from the previous release of Red Hat OpenShift Container Platform is deprecated. You can continue to use the deprecated default configuration, but the openshift-installer will prompt you with a warning message that indicates the use of deprecated fields in the configuration file. +* You have an existing install-config.yaml installation configuration file. [IMPORTANT] ---- -The example uses the govc command. The govc command is an open source command available from VMware; it is not available from Red Hat. The Red Hat support team does not maintain the govc command. Instructions for downloading and installing govc are found on the VMware documentation website +You must specify at least one failure domain for your Red Hat OpenShift Container Platform cluster, so that you can provision data center objects for your VMware vCenter server. Consider specifying multiple failure domains if you need to provision virtual machine nodes in different data centers, clusters, datastores, and other components. To enable regions and zones, you must define multiple failure domains for your Red Hat OpenShift Container Platform cluster. ---- - -* You have an existing install-config.yaml installation configuration file. +* You have installed the govc command line tool. [IMPORTANT] ---- -You must specify at least one failure domain for your Red Hat OpenShift Container Platform cluster, so that you can provision data center objects for your VMware vCenter server. Consider specifying multiple failure domains if you need to provision virtual machine nodes in different data centers, clusters, datastores, and other components. To enable regions and zones, you must define multiple failure domains for your Red Hat OpenShift Container Platform cluster. +The example uses the govc command. The govc command is an open source command available from VMware; it is not available from Red Hat. The Red Hat support team does not maintain the govc command. Instructions for downloading and installing govc are found on the VMware documentation website. ---- -1. 
Enter the following govc command-line tool commands to create the openshift-region and openshift-zone vCenter tag categories: +1. Create the openshift-region and openshift-zone vCenter tag categories by running the following commands: [IMPORTANT] ---- @@ -383,25 +383,25 @@ $ govc tags.category.create -d "OpenShift region" openshift-region $ govc tags.category.create -d "OpenShift zone" openshift-zone ``` -2. To create a region tag for each region vSphere data center where you want to deploy your cluster, enter the following command in your terminal: +2. For each region where you want to deploy your cluster, create a region tag by running the following command: ```terminal $ govc tags.create -c ``` -3. To create a zone tag for each vSphere cluster where you want to deploy your cluster, enter the following command: +3. For each zone where you want to deploy your cluster, create a zone tag by running the following command: ```terminal $ govc tags.create -c ``` -4. Attach region tags to each vCenter data center object by entering the following command: +4. Attach region tags to each vCenter data center object by running the following command: ```terminal $ govc tags.attach -c / ``` -5. Attach the zone tags to each vCenter cluster object by entering the following command: +5. Attach the zone tags to each vCenter cluster object by running the following command: ```terminal $ govc tags.attach -c //host/ @@ -1168,14 +1168,14 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -* Optional: Before you create the cluster, configure an external load balancer in place of the default load balancer. +* Optional: Before you create the cluster, you configured an external load balancer in place of the default load balancer. [IMPORTANT] ---- You do not need to specify API and Ingress static addresses for your installation program. If you choose this configuration, you must take additional actions to define network targets that accept an IP address from each referenced vSphere subnet. See the section "Configuring a user-managed load balancer". ---- -* Change to the directory that contains the installation program and initialize the cluster deployment: +* In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -1223,9 +1223,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1233,7 +1233,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. 
Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami @@ -1293,13 +1293,13 @@ testing that was possibly completed against these Red Hat OpenShift Container Pl components. ---- -1. To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. +1. Change the spec.storage.pvc field in the configs.imageregistry/cluster resource. [NOTE] ---- When you use shared storage, review your security settings to prevent outside access. ---- -2. Verify that you do not have a registry pod: +2. Verify that you do not have a registry pod by running the following command: ```terminal $ oc get pod -n openshift-image-registry -l docker-registry=default @@ -1316,7 +1316,7 @@ No resourses found in openshift-image-registry namespace ---- If you do have a registry pod in your output, you do not need to continue with this procedure. ---- -3. Check the registry configuration: +3. Check the registry configuration by running the following command: ```terminal $ oc edit configs.imageregistry.operator.openshift.io @@ -1331,7 +1331,7 @@ storage: ``` Leave the claim field blank to allow the automatic creation of an image-registry-storage persistent volume claim (PVC). The PVC is generated based on the default storage class. However, be aware that the default storage class might provide ReadWriteOnce (RWO) volumes, such as a RADOS Block Device (RBD), which can cause issues when you replicate to more than one replica. -4. Check the clusteroperator status: +4. Check the clusteroperator status by running the following command: ```terminal $ oc get clusteroperator image-registry @@ -1340,8 +1340,8 @@ $ oc get clusteroperator image-registry Example output ```terminal -NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE -image-registry 4.7 True False False 6h50m +NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE +image-registry 4.7 True False False 6h50m ``` @@ -1364,7 +1364,7 @@ have more than one replica. $ oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' ``` -2. Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. +2. Provision the persistent volume (PV) for the block storage device, and create a persistent volume claim (PVC) for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. 1. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: ```yaml diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_vsphere/ipi/installing-vsphere-installer-provisioned.txt b/ocp-product-docs-plaintext/4.18/installing/installing_vsphere/ipi/installing-vsphere-installer-provisioned.txt index 82456692..c75bdbea 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_vsphere/ipi/installing-vsphere-installer-provisioned.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_vsphere/ipi/installing-vsphere-installer-provisioned.txt @@ -27,7 +27,7 @@ In Red Hat OpenShift Container Platform 4.18, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. 
If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -45,14 +45,14 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -* Optional: Before you create the cluster, configure an external load balancer in place of the default load balancer. +* Optional: Before you create the cluster, you configured an external load balancer in place of the default load balancer. [IMPORTANT] ---- You do not need to specify API and Ingress static addresses for your installation program. If you choose this configuration, you must take additional actions to define network targets that accept an IP address from each referenced vSphere subnet. See the section "Configuring a user-managed load balancer". ---- -1. Change to the directory that contains the installation program and initialize the cluster deployment: +1. In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -142,9 +142,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -152,7 +152,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami @@ -212,13 +212,13 @@ testing that was possibly completed against these Red Hat OpenShift Container Pl components. ---- -1. To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. +1. Change the spec.storage.pvc field in the configs.imageregistry/cluster resource. [NOTE] ---- When you use shared storage, review your security settings to prevent outside access. ---- -2. Verify that you do not have a registry pod: +2. Verify that you do not have a registry pod by running the following command: ```terminal $ oc get pod -n openshift-image-registry -l docker-registry=default @@ -235,7 +235,7 @@ No resourses found in openshift-image-registry namespace ---- If you do have a registry pod in your output, you do not need to continue with this procedure. ---- -3. Check the registry configuration: +3. Check the registry configuration by running the following command: ```terminal $ oc edit configs.imageregistry.operator.openshift.io @@ -250,7 +250,7 @@ storage: ``` Leave the claim field blank to allow the automatic creation of an image-registry-storage persistent volume claim (PVC). 
The PVC is generated based on the default storage class. However, be aware that the default storage class might provide ReadWriteOnce (RWO) volumes, such as a RADOS Block Device (RBD), which can cause issues when you replicate to more than one replica. -4. Check the clusteroperator status: +4. Check the clusteroperator status by running the following command: ```terminal $ oc get clusteroperator image-registry @@ -259,8 +259,8 @@ $ oc get clusteroperator image-registry Example output ```terminal -NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE -image-registry 4.7 True False False 6h50m +NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE +image-registry 4.7 True False False 6h50m ``` @@ -283,7 +283,7 @@ have more than one replica. $ oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' ``` -2. Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. +2. Provision the persistent volume (PV) for the block storage device, and create a persistent volume claim (PVC) for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. 1. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: ```yaml diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_vsphere/upi/installing-restricted-networks-vsphere.txt b/ocp-product-docs-plaintext/4.18/installing/installing_vsphere/upi/installing-restricted-networks-vsphere.txt index 53a01109..766d8329 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_vsphere/upi/installing-restricted-networks-vsphere.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_vsphere/upi/installing-restricted-networks-vsphere.txt @@ -70,7 +70,7 @@ In Red Hat OpenShift Container Platform 4.18, you require access to the internet obtain the images that are necessary to install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -372,20 +372,20 @@ You can modify the default installation configuration file, so that you can depl The default install-config.yaml file configuration from the previous release of Red Hat OpenShift Container Platform is deprecated. You can continue to use the deprecated default configuration, but the openshift-installer will prompt you with a warning message that indicates the use of deprecated fields in the configuration file. +* You have an existing install-config.yaml installation configuration file. [IMPORTANT] ---- -The example uses the govc command. The govc command is an open source command available from VMware; it is not available from Red Hat. The Red Hat support team does not maintain the govc command. Instructions for downloading and installing govc are found on the VMware documentation website +You must specify at least one failure domain for your Red Hat OpenShift Container Platform cluster, so that you can provision data center objects for your VMware vCenter server. 
Consider specifying multiple failure domains if you need to provision virtual machine nodes in different data centers, clusters, datastores, and other components. To enable regions and zones, you must define multiple failure domains for your Red Hat OpenShift Container Platform cluster. ---- - -* You have an existing install-config.yaml installation configuration file. +* You have installed the govc command line tool. [IMPORTANT] ---- -You must specify at least one failure domain for your Red Hat OpenShift Container Platform cluster, so that you can provision data center objects for your VMware vCenter server. Consider specifying multiple failure domains if you need to provision virtual machine nodes in different data centers, clusters, datastores, and other components. To enable regions and zones, you must define multiple failure domains for your Red Hat OpenShift Container Platform cluster. +The example uses the govc command. The govc command is an open source command available from VMware; it is not available from Red Hat. The Red Hat support team does not maintain the govc command. Instructions for downloading and installing govc are found on the VMware documentation website. ---- -1. Enter the following govc command-line tool commands to create the openshift-region and openshift-zone vCenter tag categories: +1. Create the openshift-region and openshift-zone vCenter tag categories by running the following commands: [IMPORTANT] ---- @@ -401,25 +401,25 @@ $ govc tags.category.create -d "OpenShift region" openshift-region $ govc tags.category.create -d "OpenShift zone" openshift-zone ``` -2. To create a region tag for each region vSphere data center where you want to deploy your cluster, enter the following command in your terminal: +2. For each region where you want to deploy your cluster, create a region tag by running the following command: ```terminal $ govc tags.create -c ``` -3. To create a zone tag for each vSphere cluster where you want to deploy your cluster, enter the following command: +3. For each zone where you want to deploy your cluster, create a zone tag by running the following command: ```terminal $ govc tags.create -c ``` -4. Attach region tags to each vCenter data center object by entering the following command: +4. Attach region tags to each vCenter data center object by running the following command: ```terminal $ govc tags.attach -c / ``` -5. Attach the zone tags to each vCenter cluster object by entering the following command: +5. Attach the zone tags to each vCenter cluster object by running the following command: ```terminal $ govc tags.attach -c //host/ @@ -988,9 +988,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -998,7 +998,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. 
Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami @@ -1247,13 +1247,13 @@ testing that was possibly completed against these Red Hat OpenShift Container Pl components. ---- -1. To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. +1. Change the spec.storage.pvc field in the configs.imageregistry/cluster resource. [NOTE] ---- When you use shared storage, review your security settings to prevent outside access. ---- -2. Verify that you do not have a registry pod: +2. Verify that you do not have a registry pod by running the following command: ```terminal $ oc get pod -n openshift-image-registry -l docker-registry=default @@ -1270,7 +1270,7 @@ No resourses found in openshift-image-registry namespace ---- If you do have a registry pod in your output, you do not need to continue with this procedure. ---- -3. Check the registry configuration: +3. Check the registry configuration by running the following command: ```terminal $ oc edit configs.imageregistry.operator.openshift.io @@ -1285,7 +1285,7 @@ storage: ``` Leave the claim field blank to allow the automatic creation of an image-registry-storage persistent volume claim (PVC). The PVC is generated based on the default storage class. However, be aware that the default storage class might provide ReadWriteOnce (RWO) volumes, such as a RADOS Block Device (RBD), which can cause issues when you replicate to more than one replica. -4. Check the clusteroperator status: +4. Check the clusteroperator status by running the following command: ```terminal $ oc get clusteroperator image-registry @@ -1294,8 +1294,8 @@ $ oc get clusteroperator image-registry Example output ```terminal -NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE -image-registry 4.7 True False False 6h50m +NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE +image-registry 4.7 True False False 6h50m ``` @@ -1346,7 +1346,7 @@ have more than one replica. $ oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' ``` -2. Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. +2. Provision the persistent volume (PV) for the block storage device, and create a persistent volume claim (PVC) for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. 1. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: ```yaml diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_vsphere/upi/installing-vsphere-network-customizations.txt b/ocp-product-docs-plaintext/4.18/installing/installing_vsphere/upi/installing-vsphere-network-customizations.txt index ec03e874..84a6a939 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_vsphere/upi/installing-vsphere-network-customizations.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_vsphere/upi/installing-vsphere-network-customizations.txt @@ -31,7 +31,7 @@ In Red Hat OpenShift Container Platform 4.18, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. 
If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -295,20 +295,20 @@ You can modify the default installation configuration file, so that you can depl The default install-config.yaml file configuration from the previous release of Red Hat OpenShift Container Platform is deprecated. You can continue to use the deprecated default configuration, but the openshift-installer will prompt you with a warning message that indicates the use of deprecated fields in the configuration file. +* You have an existing install-config.yaml installation configuration file. [IMPORTANT] ---- -The example uses the govc command. The govc command is an open source command available from VMware; it is not available from Red Hat. The Red Hat support team does not maintain the govc command. Instructions for downloading and installing govc are found on the VMware documentation website +You must specify at least one failure domain for your Red Hat OpenShift Container Platform cluster, so that you can provision data center objects for your VMware vCenter server. Consider specifying multiple failure domains if you need to provision virtual machine nodes in different data centers, clusters, datastores, and other components. To enable regions and zones, you must define multiple failure domains for your Red Hat OpenShift Container Platform cluster. ---- - -* You have an existing install-config.yaml installation configuration file. +* You have installed the govc command line tool. [IMPORTANT] ---- -You must specify at least one failure domain for your Red Hat OpenShift Container Platform cluster, so that you can provision data center objects for your VMware vCenter server. Consider specifying multiple failure domains if you need to provision virtual machine nodes in different data centers, clusters, datastores, and other components. To enable regions and zones, you must define multiple failure domains for your Red Hat OpenShift Container Platform cluster. +The example uses the govc command. The govc command is an open source command available from VMware; it is not available from Red Hat. The Red Hat support team does not maintain the govc command. Instructions for downloading and installing govc are found on the VMware documentation website. ---- -1. Enter the following govc command-line tool commands to create the openshift-region and openshift-zone vCenter tag categories: +1. Create the openshift-region and openshift-zone vCenter tag categories by running the following commands: [IMPORTANT] ---- @@ -324,25 +324,25 @@ $ govc tags.category.create -d "OpenShift region" openshift-region $ govc tags.category.create -d "OpenShift zone" openshift-zone ``` -2. To create a region tag for each region vSphere data center where you want to deploy your cluster, enter the following command in your terminal: +2. For each region where you want to deploy your cluster, create a region tag by running the following command: ```terminal $ govc tags.create -c ``` -3. To create a zone tag for each vSphere cluster where you want to deploy your cluster, enter the following command: +3. For each zone where you want to deploy your cluster, create a zone tag by running the following command: ```terminal $ govc tags.create -c ``` -4. Attach region tags to each vCenter data center object by entering the following command: +4. 
Attach region tags to each vCenter data center object by running the following command: ```terminal $ govc tags.attach -c / ``` -5. Attach the zone tags to each vCenter cluster object by entering the following command: +5. Attach the zone tags to each vCenter cluster object by running the following command: ```terminal $ govc tags.attach -c //host/ @@ -1048,9 +1048,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1058,7 +1058,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami @@ -1286,7 +1286,7 @@ have more than one replica. $ oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' ``` -2. Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. +2. Provision the persistent volume (PV) for the block storage device, and create a persistent volume claim (PVC) for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. 1. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: ```yaml diff --git a/ocp-product-docs-plaintext/4.18/installing/installing_vsphere/upi/installing-vsphere.txt b/ocp-product-docs-plaintext/4.18/installing/installing_vsphere/upi/installing-vsphere.txt index e2109dc0..14a548cd 100644 --- a/ocp-product-docs-plaintext/4.18/installing/installing_vsphere/upi/installing-vsphere.txt +++ b/ocp-product-docs-plaintext/4.18/installing/installing_vsphere/upi/installing-vsphere.txt @@ -31,7 +31,7 @@ In Red Hat OpenShift Container Platform 4.18, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -290,20 +290,20 @@ You can modify the default installation configuration file, so that you can depl The default install-config.yaml file configuration from the previous release of Red Hat OpenShift Container Platform is deprecated. You can continue to use the deprecated default configuration, but the openshift-installer will prompt you with a warning message that indicates the use of deprecated fields in the configuration file. +* You have an existing install-config.yaml installation configuration file. [IMPORTANT] ---- -The example uses the govc command. The govc command is an open source command available from VMware; it is not available from Red Hat. The Red Hat support team does not maintain the govc command. 
Instructions for downloading and installing govc are found on the VMware documentation website +You must specify at least one failure domain for your Red Hat OpenShift Container Platform cluster, so that you can provision data center objects for your VMware vCenter server. Consider specifying multiple failure domains if you need to provision virtual machine nodes in different data centers, clusters, datastores, and other components. To enable regions and zones, you must define multiple failure domains for your Red Hat OpenShift Container Platform cluster. ---- - -* You have an existing install-config.yaml installation configuration file. +* You have installed the govc command line tool. [IMPORTANT] ---- -You must specify at least one failure domain for your Red Hat OpenShift Container Platform cluster, so that you can provision data center objects for your VMware vCenter server. Consider specifying multiple failure domains if you need to provision virtual machine nodes in different data centers, clusters, datastores, and other components. To enable regions and zones, you must define multiple failure domains for your Red Hat OpenShift Container Platform cluster. +The example uses the govc command. The govc command is an open source command available from VMware; it is not available from Red Hat. The Red Hat support team does not maintain the govc command. Instructions for downloading and installing govc are found on the VMware documentation website. ---- -1. Enter the following govc command-line tool commands to create the openshift-region and openshift-zone vCenter tag categories: +1. Create the openshift-region and openshift-zone vCenter tag categories by running the following commands: [IMPORTANT] ---- @@ -319,25 +319,25 @@ $ govc tags.category.create -d "OpenShift region" openshift-region $ govc tags.category.create -d "OpenShift zone" openshift-zone ``` -2. To create a region tag for each region vSphere data center where you want to deploy your cluster, enter the following command in your terminal: +2. For each region where you want to deploy your cluster, create a region tag by running the following command: ```terminal $ govc tags.create -c ``` -3. To create a zone tag for each vSphere cluster where you want to deploy your cluster, enter the following command: +3. For each zone where you want to deploy your cluster, create a zone tag by running the following command: ```terminal $ govc tags.create -c ``` -4. Attach region tags to each vCenter data center object by entering the following command: +4. Attach region tags to each vCenter data center object by running the following command: ```terminal $ govc tags.attach -c / ``` -5. Attach the zone tags to each vCenter cluster object by entering the following command: +5. Attach the zone tags to each vCenter cluster object by running the following command: ```terminal $ govc tags.attach -c //host/ @@ -863,9 +863,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -873,7 +873,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. 
Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami @@ -1109,13 +1109,13 @@ testing that was possibly completed against these Red Hat OpenShift Container Pl components. ---- -1. To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. +1. Change the spec.storage.pvc field in the configs.imageregistry/cluster resource. [NOTE] ---- When you use shared storage, review your security settings to prevent outside access. ---- -2. Verify that you do not have a registry pod: +2. Verify that you do not have a registry pod by running the following command: ```terminal $ oc get pod -n openshift-image-registry -l docker-registry=default @@ -1132,7 +1132,7 @@ No resourses found in openshift-image-registry namespace ---- If you do have a registry pod in your output, you do not need to continue with this procedure. ---- -3. Check the registry configuration: +3. Check the registry configuration by running the following command: ```terminal $ oc edit configs.imageregistry.operator.openshift.io @@ -1147,7 +1147,7 @@ storage: ``` Leave the claim field blank to allow the automatic creation of an image-registry-storage persistent volume claim (PVC). The PVC is generated based on the default storage class. However, be aware that the default storage class might provide ReadWriteOnce (RWO) volumes, such as a RADOS Block Device (RBD), which can cause issues when you replicate to more than one replica. -4. Check the clusteroperator status: +4. Check the clusteroperator status by running the following command: ```terminal $ oc get clusteroperator image-registry @@ -1156,8 +1156,8 @@ $ oc get clusteroperator image-registry Example output ```terminal -NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE -image-registry 4.7 True False False 6h50m +NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE +image-registry 4.7 True False False 6h50m ``` @@ -1208,7 +1208,7 @@ have more than one replica. $ oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' ``` -2. Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. +2. Provision the persistent volume (PV) for the block storage device, and create a persistent volume claim (PVC) for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. 1. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: ```yaml diff --git a/ocp-product-docs-plaintext/4.18/machine_configuration/index.txt b/ocp-product-docs-plaintext/4.18/machine_configuration/index.txt index cabe4c81..6288da4a 100644 --- a/ocp-product-docs-plaintext/4.18/machine_configuration/index.txt +++ b/ocp-product-docs-plaintext/4.18/machine_configuration/index.txt @@ -335,7 +335,7 @@ UPDATED:: The True status indicates that the MCO has applied the current machine UPDATING:: The True status indicates that the MCO is applying the desired machine config, as specified in the MachineConfigPool custom resource, to at least one of the nodes in that MCP. The desired machine config is the new, edited machine config. Nodes that are updating might not be available for scheduling. The False status indicates that all nodes in the MCP are updated. 
DEGRADED:: A True status indicates the MCO is blocked from applying the current or desired machine config to at least one of the nodes in that MCP, or the configuration is failing. Nodes that are degraded might not be available for scheduling. A False status indicates that all nodes in the MCP are ready. MACHINECOUNT:: Indicates the total number of machines in that MCP. -READYMACHINECOUNT:: Indicates the total number of machines in that MCP that are ready for scheduling. +READYMACHINECOUNT:: Indicates the number of machines that are both running the current machine config and are ready for scheduling. This count is always less than or equal to the UPDATEDMACHINECOUNT number. UPDATEDMACHINECOUNT:: Indicates the total number of machines in that MCP that have the current machine config. DEGRADEDMACHINECOUNT:: Indicates the total number of machines in that MCP that are marked as degraded or unreconcilable. diff --git a/ocp-product-docs-plaintext/4.18/machine_configuration/mco-coreos-layering.txt b/ocp-product-docs-plaintext/4.18/machine_configuration/mco-coreos-layering.txt index b4dcb387..e1114863 100644 --- a/ocp-product-docs-plaintext/4.18/machine_configuration/mco-coreos-layering.txt +++ b/ocp-product-docs-plaintext/4.18/machine_configuration/mco-coreos-layering.txt @@ -22,12 +22,6 @@ As soon as you apply the custom layered image to your cluster, you effectively t There are two methods for deploying a custom layered image onto your nodes: On-cluster layering:: With on-cluster layering, you create a MachineOSConfig object where you include the Containerfile and other parameters. The build is performed on your cluster and the resulting custom layered image is automatically pushed to your repository and applied to the machine config pool that you specified in the MachineOSConfig object. The entire process is performed completely within your cluster. - -[IMPORTANT] ----- -On-cluster image layering is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. -For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. ----- Out-of-cluster layering:: With out-of-cluster layering, you create a Containerfile that references an Red Hat OpenShift Container Platform image and the RPM that you want to apply, build the layered image in your own environment, and push the image to your repository. Then, in your cluster, create a MachineConfig object for the targeted node pool that points to the new image. The Machine Config Operator overrides the base RHCOS image, as specified by the osImageURL value in the associated machine config, and boots the new image. 
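As a reviewing aid for the out-of-cluster layering paragraph above, the following is a minimal sketch of the MachineConfig object that the paragraph describes. The object name, pool label, and image reference are hypothetical placeholders, not values taken from this change; substitute the layered image that you built and pushed to your own registry.

```yaml
# Hypothetical example: apply an externally built layered image to the worker pool.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker   # targeted machine config pool
  name: 99-worker-custom-os-image                    # placeholder name
spec:
  # The MCO overrides the base RHCOS image with this reference and reboots the nodes in the pool.
  osImageURL: quay.io/example/custom-rhcos@sha256:<digest>   # placeholder image reference
```

Building and pushing the image itself happens outside the cluster, as the paragraph notes; only this MachineConfig is applied to the cluster.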
diff --git a/ocp-product-docs-plaintext/4.18/machine_configuration/mco-update-boot-images.txt b/ocp-product-docs-plaintext/4.18/machine_configuration/mco-update-boot-images.txt index c9a755c3..d33df032 100644 --- a/ocp-product-docs-plaintext/4.18/machine_configuration/mco-update-boot-images.txt +++ b/ocp-product-docs-plaintext/4.18/machine_configuration/mco-update-boot-images.txt @@ -9,7 +9,12 @@ This process could cause the following issues: * Version skew issues To avoid these issues, you can configure your cluster to update the boot image whenever you update your cluster. By modifying the MachineConfiguration object, you can enable this feature. Currently, the ability to update the boot image is available for only Google Cloud Platform (GCP) and Amazon Web Services (AWS) clusters. It is not supported for clusters managed by the Cluster CAPI Operator. If you are not using the default user data secret, named worker-user-data, in your machine set, or you have modified the worker-user-data secret, you should not use managed boot image updates. This is because the Machine Config Operator (MCO) updates the machine set to use a managed version of the secret. By using the managed boot images feature, you are giving up the capability to customize the secret stored in the machine set object. -To view the current boot image used in your cluster, examine a machine set: +To view the current boot image used in your cluster, examine a machine set. + +[NOTE] +---- +The location and format of the boot image within the machine set differs, based on the platform. However, the boot image is always listed in the spec.template.spec.providerSpec. parameter. +---- ```yaml apiVersion: machine.openshift.io/v1beta1 @@ -34,6 +39,26 @@ spec: ``` This boot image is the same as the originally-installed Red Hat OpenShift Container Platform version, in this example Red Hat OpenShift Container Platform 4.12, regardless of the current version of the cluster. The way that the boot image is represented in the machine set depends on the platform, as the structure of the providerSpec field differs from platform to platform. + +```yaml +apiVersion: machine.openshift.io/v1beta1 +kind: MachineSet +metadata: + name: ci-ln-hmy310k-72292-5f87z-worker-a + namespace: openshift-machine-api +spec: +# ... + template: +# ... + spec: +# ... + providerSpec: + value: + ami: + id: ami-0e8fd9094e487d1ff +# ... +``` + If you configure your cluster to update your boot images, the boot image referenced in your machine sets matches the current version of the cluster. # Configuring updated boot images @@ -166,7 +191,7 @@ spec: # ... ``` -This boot image is the same as the current Red Hat OpenShift Container Platform version. +This boot image is the same as the current Red Hat OpenShift Container Platform version. The location and format of the boot image within the machine set differs, based on the platform. However, the boot image is always listed in the spec.template.spec.providerSpec. parameter. 
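To make the preceding note concrete for reviewers, the following command prints the providerSpec stanza of a single machine set so that you can locate the platform-specific boot image field, for example the ami block shown in the added AWS example above. The machine set name is a hypothetical placeholder, not a value from this change.

```terminal
# <machine-set-name> is a placeholder; list the names first with "oc get machinesets -n openshift-machine-api".
$ oc get machineset <machine-set-name> -n openshift-machine-api \
    -o jsonpath='{.spec.template.spec.providerSpec.value}{"\n"}'
```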
* Enabling features using feature gates diff --git a/ocp-product-docs-plaintext/4.18/networking/networking_operators/aws_load_balancer_operator/understanding-aws-load-balancer-operator.txt b/ocp-product-docs-plaintext/4.18/networking/networking_operators/aws_load_balancer_operator/understanding-aws-load-balancer-operator.txt index e3efb179..205d2c4c 100644 --- a/ocp-product-docs-plaintext/4.18/networking/networking_operators/aws_load_balancer_operator/understanding-aws-load-balancer-operator.txt +++ b/ocp-product-docs-plaintext/4.18/networking/networking_operators/aws_load_balancer_operator/understanding-aws-load-balancer-operator.txt @@ -20,30 +20,18 @@ The AWS Load Balancer Operator can tag the public subnets if the kubernetes.io/r The AWS Load Balancer Operator supports the Kubernetes service resource of type LoadBalancer by using Network Load Balancer (NLB) with the instance target type only. -1. You can deploy the AWS Load Balancer Operator on demand from OperatorHub, by creating a Subscription object by running the following command: +1. To deploy the AWS Load Balancer Operator on-demand from OperatorHub, create a Subscription object by running the following command: ```terminal $ oc -n aws-load-balancer-operator get sub aws-load-balancer-operator --template='{{.status.installplan.name}}{{"\n"}}' ``` -Example output - -```terminal -install-zlfbt -``` - 2. Check if the status of an install plan is Complete by running the following command: ```terminal $ oc -n aws-load-balancer-operator get ip --template='{{.status.phase}}{{"\n"}}' ``` -Example output - -```terminal -Complete -``` - 3. View the status of the aws-load-balancer-operator-controller-manager deployment by running the following command: ```terminal diff --git a/ocp-product-docs-plaintext/4.18/networking/networking_operators/dns-operator.txt b/ocp-product-docs-plaintext/4.18/networking/networking_operators/dns-operator.txt index a3c92709..9c1d1b66 100644 --- a/ocp-product-docs-plaintext/4.18/networking/networking_operators/dns-operator.txt +++ b/ocp-product-docs-plaintext/4.18/networking/networking_operators/dns-operator.txt @@ -71,6 +71,12 @@ The Cluster Domain field is the base DNS domain used to construct fully qualified pod and service domain names. The Cluster IP is the address pods query for name resolution. The IP is defined as the 10th address in the service CIDR range. +2. To find the service CIDR range, such as 172.30.0.0/16, of your cluster, use the oc get command: + +```terminal +$ oc get networks.config/cluster -o jsonpath='{$.status.serviceNetwork}' +``` + # Using DNS forwarding @@ -131,7 +137,7 @@ spec: clusterDomain: cluster.local clusterIP: x.y.z.10 conditions: - ... +... ``` Must comply with the rfc6335 service name syntax. @@ -337,7 +343,7 @@ The string value can be a combination of units such as 0.5h10m and is converted 1. To review the change, look at the config map again by running the following command: ```terminal -oc get configmap/dns-default -n openshift-dns -o yaml +$ oc get configmap/dns-default -n openshift-dns -o yaml ``` 2. Verify that you see entries that look like the following example: @@ -368,19 +374,12 @@ The following are use cases for changing the DNS Operator managementState: oc patch dns.operator.openshift.io default --type merge --patch '{"spec":{"managementState":"Unmanaged"}}' ``` -2. Review managementState of the DNS Operator using the jsonpath command-line JSON parser: +2. 
Review managementState of the DNS Operator by using the jsonpath command-line JSON parser: ```terminal $ oc get dns.operator.openshift.io default -ojsonpath='{.spec.managementState}' ``` -Example output - -```terminal -"Unmanaged" -``` - - [NOTE] ---- diff --git a/ocp-product-docs-plaintext/4.18/networking/networking_operators/ebpf_manager/ebpf-manager-operator-deploy.txt b/ocp-product-docs-plaintext/4.18/networking/networking_operators/ebpf_manager/ebpf-manager-operator-deploy.txt index 29693792..eb241e2e 100644 --- a/ocp-product-docs-plaintext/4.18/networking/networking_operators/ebpf_manager/ebpf-manager-operator-deploy.txt +++ b/ocp-product-docs-plaintext/4.18/networking/networking_operators/ebpf_manager/ebpf-manager-operator-deploy.txt @@ -87,19 +87,5 @@ Example output ```text 2024/08/13 15:20:06 15016 packets received 2024/08/13 15:20:06 93581579 bytes received - -2024/08/13 15:20:09 19284 packets received -2024/08/13 15:20:09 99638680 bytes received - -2024/08/13 15:20:12 23522 packets received -2024/08/13 15:20:12 105666062 bytes received - -2024/08/13 15:20:15 27276 packets received -2024/08/13 15:20:15 112028608 bytes received - -2024/08/13 15:20:18 29470 packets received -2024/08/13 15:20:18 112732299 bytes received - -2024/08/13 15:20:21 32588 packets received -2024/08/13 15:20:21 113813781 bytes received +... ``` diff --git a/ocp-product-docs-plaintext/4.18/networking/networking_operators/external_dns_operator/nw-configuring-cluster-wide-egress-proxy.txt b/ocp-product-docs-plaintext/4.18/networking/networking_operators/external_dns_operator/nw-configuring-cluster-wide-egress-proxy.txt index 45caae03..e4ecdcf0 100644 --- a/ocp-product-docs-plaintext/4.18/networking/networking_operators/external_dns_operator/nw-configuring-cluster-wide-egress-proxy.txt +++ b/ocp-product-docs-plaintext/4.18/networking/networking_operators/external_dns_operator/nw-configuring-cluster-wide-egress-proxy.txt @@ -26,14 +26,8 @@ $ oc -n external-dns-operator patch subscription external-dns-operator --type='j ``` -* After the deployment of the External DNS Operator is completed, verify that the trusted CA environment variable is added to the external-dns-operator deployment by running the following command: +* After the deployment of the External DNS Operator is completed, verify that the trusted CA environment variable is added, outputted as trusted-ca, to the external-dns-operator deployment by running the following command: ```terminal $ oc -n external-dns-operator exec deploy/external-dns-operator -c external-dns-operator -- printenv TRUSTED_CA_CONFIGMAP_NAME ``` - -Example output - -```terminal -trusted-ca -``` diff --git a/ocp-product-docs-plaintext/4.18/networking/networking_operators/external_dns_operator/nw-creating-dns-records-on-aws.txt b/ocp-product-docs-plaintext/4.18/networking/networking_operators/external_dns_operator/nw-creating-dns-records-on-aws.txt index d899e333..475a42b8 100644 --- a/ocp-product-docs-plaintext/4.18/networking/networking_operators/external_dns_operator/nw-creating-dns-records-on-aws.txt +++ b/ocp-product-docs-plaintext/4.18/networking/networking_operators/external_dns_operator/nw-creating-dns-records-on-aws.txt @@ -7,22 +7,20 @@ You can create DNS records on AWS and AWS GovCloud by using the External DNS Ope You can create DNS records on a public hosted zone for AWS by using the Red Hat External DNS Operator. You can use the same instructions to create DNS records on a hosted zone for AWS GovCloud. -1. Check the user. 
The user must have access to the kube-system namespace. If you don’t have the credentials, as you can fetch the credentials from the kube-system namespace to use the cloud provider client: +1. Check the user profile, such as system:admin, by running the following command. The user profile must have access to the kube-system namespace. If you do not have the credentials, you can fetch them from the kube-system namespace to use the cloud provider client: ```terminal $ oc whoami ``` -Example output +2. Fetch the values from the aws-creds secret in the kube-system namespace by running the following commands: ```terminal -system:admin +$ export AWS_ACCESS_KEY_ID=$(oc get secrets aws-creds -n kube-system --template={{.data.aws_access_key_id}} | base64 -d) ``` -2. Fetch the values from aws-creds secret present in kube-system namespace. ```terminal -$ export AWS_ACCESS_KEY_ID=$(oc get secrets aws-creds -n kube-system --template={{.data.aws_access_key_id}} | base64 -d) $ export AWS_SECRET_ACCESS_KEY=$(oc get secrets aws-creds -n kube-system --template={{.data.aws_secret_access_key}} | base64 -d) ``` @@ -39,7 +37,7 @@ openshift-console console console-openshift-console.apps.te openshift-console downloads downloads-openshift-console.apps.testextdnsoperator.apacshift.support downloads http edge/Redirect None ``` -4. Get the list of dns zones to find the one which corresponds to the previously found route's domain: +4. Get the list of DNS zones and find the DNS zone that corresponds to the domain of the route that you previously queried: ```terminal $ aws route53 list-hosted-zones | grep testextdnsoperator.apacshift.support diff --git a/ocp-product-docs-plaintext/4.18/networking/networking_operators/external_dns_operator/nw-creating-dns-records-on-gcp.txt b/ocp-product-docs-plaintext/4.18/networking/networking_operators/external_dns_operator/nw-creating-dns-records-on-gcp.txt index 32c37150..d09fbc90 100644 --- a/ocp-product-docs-plaintext/4.18/networking/networking_operators/external_dns_operator/nw-creating-dns-records-on-gcp.txt +++ b/ocp-product-docs-plaintext/4.18/networking/networking_operators/external_dns_operator/nw-creating-dns-records-on-gcp.txt @@ -51,18 +51,12 @@ openshift-console console console-openshift-console.apps.te openshift-console downloads downloads-openshift-console.apps.test.gcp.example.com downloads http edge/Redirect None ``` -6. Get a list of managed zones by running the following command: +6. Get a list of managed zones, such as qe-cvs4g-private-zone test.gcp.example.com, by running the following command: ```terminal $ gcloud dns managed-zones list | grep test.gcp.example.com ``` -Example output - -```terminal -qe-cvs4g-private-zone test.gcp.example.com -``` - 7. 
Create a YAML file, for example, external-dns-sample-gcp.yaml, that defines the ExternalDNS object: Example external-dns-sample-gcp.yaml file diff --git a/ocp-product-docs-plaintext/4.18/networking/networking_operators/external_dns_operator/nw-installing-external-dns-operator-on-cloud-providers.txt b/ocp-product-docs-plaintext/4.18/networking/networking_operators/external_dns_operator/nw-installing-external-dns-operator-on-cloud-providers.txt index 5d82bab3..67559fa9 100644 --- a/ocp-product-docs-plaintext/4.18/networking/networking_operators/external_dns_operator/nw-installing-external-dns-operator-on-cloud-providers.txt +++ b/ocp-product-docs-plaintext/4.18/networking/networking_operators/external_dns_operator/nw-installing-external-dns-operator-on-cloud-providers.txt @@ -131,22 +131,8 @@ external-dns-operator-5584585fd7-5lwqm 2/2 Running 0 11m $ oc -n external-dns-operator get subscription ``` -Example output - -```terminal -NAME PACKAGE SOURCE CHANNEL -external-dns-operator external-dns-operator redhat-operators stable-v1 -``` - 5. Check the external-dns-operator version by running the following command: ```terminal $ oc -n external-dns-operator get csv ``` - -Example output - -```terminal -NAME DISPLAY VERSION REPLACES PHASE -external-dns-operator.v<1.y.z> ExternalDNS Operator <1.y.z> Succeeded -``` diff --git a/ocp-product-docs-plaintext/4.18/networking/networking_operators/external_dns_operator/understanding-external-dns-operator.txt b/ocp-product-docs-plaintext/4.18/networking/networking_operators/external_dns_operator/understanding-external-dns-operator.txt index 508d9cd6..89abe4d8 100644 --- a/ocp-product-docs-plaintext/4.18/networking/networking_operators/external_dns_operator/understanding-external-dns-operator.txt +++ b/ocp-product-docs-plaintext/4.18/networking/networking_operators/external_dns_operator/understanding-external-dns-operator.txt @@ -11,30 +11,18 @@ The External DNS Operator implements the External DNS API from the olm.openshift You can deploy the External DNS Operator on demand from the OperatorHub. Deploying the External DNS Operator creates a Subscription object. -1. Check the name of an install plan by running the following command: +1. Check the name of an install plan, such as install-zcvlr, by running the following command: ```terminal $ oc -n external-dns-operator get sub external-dns-operator -o yaml | yq '.status.installplan.name' ``` -Example output - -```terminal -install-zcvlr -``` - 2. Check if the status of an install plan is Complete by running the following command: ```terminal $ oc -n external-dns-operator get ip -o yaml | yq '.status.phase' ``` -Example output - -```terminal -Complete -``` - 3. View the status of the external-dns-operator deployment by running the following command: ```terminal diff --git a/ocp-product-docs-plaintext/4.18/networking/networking_operators/ingress-operator.txt b/ocp-product-docs-plaintext/4.18/networking/networking_operators/ingress-operator.txt index bd83b890..10c33e1e 100644 --- a/ocp-product-docs-plaintext/4.18/networking/networking_operators/ingress-operator.txt +++ b/ocp-product-docs-plaintext/4.18/networking/networking_operators/ingress-operator.txt @@ -314,19 +314,12 @@ certificate authority that you configured in a custom PKI. * Your certificate meets the following requirements: * The certificate is valid for the ingress domain. * The certificate uses the subjectAltName extension to specify a wildcard domain, such as *.apps.ocp4.example.com. -* You must have an IngressController CR. 
You may use the default one: +* You must have an IngressController CR. The default IngressController CR is sufficient. You can check that you have an IngressController CR by running the following command: ```terminal $ oc --namespace openshift-ingress-operator get ingresscontrollers ``` -Example output - -```terminal -NAME AGE -default 10m -``` - [NOTE] @@ -617,18 +610,12 @@ $ oc apply -f ingress-autoscaler.yaml * Verify that the default Ingress Controller is scaled out to match the value returned by the kube-state-metrics query by running the following commands: -* Use the grep command to search the Ingress Controller YAML file for replicas: +* Use the grep command to search the Ingress Controller YAML file for the number of replicas: ```terminal $ oc get -n openshift-ingress-operator ingresscontroller/default -o yaml | grep replicas: ``` -Example output - -```terminal - replicas: 3 -``` - * Get the pods in the openshift-ingress project: ```terminal @@ -670,39 +657,18 @@ Scaling is not an immediate action, as it takes time to create the desired numbe $ oc get -n openshift-ingress-operator ingresscontrollers/default -o jsonpath='{$.status.availableReplicas}' ``` -Example output - -```terminal -2 -``` - -2. Scale the default IngressController to the desired number of replicas using -the oc patch command. The following example scales the default IngressController -to 3 replicas: +2. Scale the default IngressController to the desired number of replicas by using the oc patch command. The following example scales the default IngressController to 3 replicas: ```terminal $ oc patch -n openshift-ingress-operator ingresscontroller/default --patch '{"spec":{"replicas": 3}}' --type=merge ``` -Example output - -```terminal -ingresscontroller.operator.openshift.io/default patched -``` - -3. Verify that the default IngressController scaled to the number of replicas -that you specified: +3. Verify that the default IngressController scaled to the number of replicas that you specified: ```terminal $ oc get -n openshift-ingress-operator ingresscontrollers/default -o jsonpath='{$.status.availableReplicas}' ``` -Example output - -```terminal -3 -``` - [TIP] ---- @@ -1525,18 +1491,12 @@ Optional: Domain for Red Hat OpenShift Container Platform infrastructure to use ---- Wait for the openshift-apiserver finish rolling updates before exposing the route. ---- -1. Expose the route: +1. Expose the route by entering the following command. The command outputs route.route.openshift.io/hello-openshift exposed to indicate that the route is exposed. ```terminal $ oc expose service hello-openshift ``` -Example output - -```terminal -route.route.openshift.io/hello-openshift exposed -``` - 2. Get a list of routes by running the following command: ```terminal diff --git a/ocp-product-docs-plaintext/4.18/networking/networking_operators/k8s-nmstate-about-the-k8s-nmstate-operator.txt b/ocp-product-docs-plaintext/4.18/networking/networking_operators/k8s-nmstate-about-the-k8s-nmstate-operator.txt index 7549a15e..79c1b968 100644 --- a/ocp-product-docs-plaintext/4.18/networking/networking_operators/k8s-nmstate-about-the-k8s-nmstate-operator.txt +++ b/ocp-product-docs-plaintext/4.18/networking/networking_operators/k8s-nmstate-about-the-k8s-nmstate-operator.txt @@ -31,7 +31,7 @@ You can install the Kubernetes NMState Operator by using the web console or the ## Installing the Kubernetes NMState Operator by using the web console -You can install the Kubernetes NMState Operator by using the web console. 
After it is installed, the Operator can deploy the NMState State Controller as a daemon set across all of the cluster nodes. +You can install the Kubernetes NMState Operator by using the web console. After you install the Kubernetes NMState Operator, the Operator has deployed the NMState State Controller as a daemon set across all of the cluster nodes. * You are logged in as a user with cluster-admin privileges. @@ -50,8 +50,6 @@ The name restriction is a known issue. The instance is a singleton for the entir ---- 9. Accept the default settings and click Create to create the instance. -After you install the Kubernetes NMState Operator, the Operator has deployed the NMState State Controller as a daemon set across all of the cluster nodes. - ## Installing the Kubernetes NMState Operator by using the CLI You can install the Kubernetes NMState Operator by using the OpenShift CLI (oc). After it is installed, the Operator can deploy the NMState State Controller as a daemon set across all of the cluster nodes. @@ -113,13 +111,6 @@ $ oc get clusterserviceversion -n openshift-nmstate \ -o custom-columns=Name:.metadata.name,Phase:.status.phase ``` -Example output - -```terminal -Name Phase -kubernetes-nmstate-operator.4.18.0-202210210157 Succeeded -``` - 5. Create an instance of the nmstate Operator: ```terminal @@ -131,21 +122,12 @@ metadata: EOF ``` -6. Verify that all pods for the NMState Operator are in a Running state: +6. Verify that all pods for the NMState Operator have the Running status by entering the following command: ```terminal $ oc get pod -n openshift-nmstate ``` -Example output - -```terminal -Name Ready Status Restarts Age -pod/nmstate-handler-wn55p 1/1 Running 0 77s -pod/nmstate-operator-f6bb869b6-v5m92 1/1 Running 0 4m51s -... -``` - ## Viewing metrics collected by the Kubernetes NMState Operator diff --git a/ocp-product-docs-plaintext/4.18/networking/networking_operators/metallb-operator/metallb-operator-install.txt b/ocp-product-docs-plaintext/4.18/networking/networking_operators/metallb-operator/metallb-operator-install.txt index 7eaff928..33f54e13 100644 --- a/ocp-product-docs-plaintext/4.18/networking/networking_operators/metallb-operator/metallb-operator-install.txt +++ b/ocp-product-docs-plaintext/4.18/networking/networking_operators/metallb-operator/metallb-operator-install.txt @@ -119,20 +119,13 @@ install-wzg94 metallb-operator.4.18.0-nnnnnnnnnnnn Automatic true ---- Installation of the Operator might take a few seconds. ---- -2. To verify that the Operator is installed, enter the following command: +2. 
To verify that the Operator is installed, enter the following command and then check that output shows Succeeded for the Operator: ```terminal $ oc get clusterserviceversion -n metallb-system \ -o custom-columns=Name:.metadata.name,Phase:.status.phase ``` -Example output - -```terminal -Name Phase -metallb-operator.4.18.0-nnnnnnnnnnnn Succeeded -``` - # Starting MetalLB on your cluster diff --git a/ocp-product-docs-plaintext/4.18/networking/networking_operators/metallb-operator/metallb-upgrading-operator.txt b/ocp-product-docs-plaintext/4.18/networking/networking_operators/metallb-operator/metallb-upgrading-operator.txt index 281312d7..dbccc746 100644 --- a/ocp-product-docs-plaintext/4.18/networking/networking_operators/metallb-operator/metallb-upgrading-operator.txt +++ b/ocp-product-docs-plaintext/4.18/networking/networking_operators/metallb-operator/metallb-upgrading-operator.txt @@ -42,13 +42,6 @@ spec: $ oc -n metallb-system get csv ``` -Example output - -```terminal -NAME DISPLAY VERSION REPLACES PHASE -metallb-operator.v4.18.0 MetalLB Operator 4.18.0 Succeeded -``` - 4. Check the install plan that exists in the namespace by entering the following command. ```terminal @@ -76,19 +69,12 @@ $ oc edit installplan -n metallb-system After you edit the install plan, the upgrade operation starts. If you enter the oc -n metallb-system get csv command during the upgrade operation, the output might show the Replacing or the Pending status. ---- -1. Verify the upgrade was successful by entering the following command: +* To verify that the Operator is upgraded, enter the following command and then check that output shows Succeeded for the Operator: ```terminal $ oc -n metallb-system get csv ``` -Example output - -```terminal -NAME DISPLAY VERSION REPLACE PHASE -metallb-operator.v.0-202503102139 MetalLB Operator 4.18.0-202503102139 metallb-operator.v4.18.0-202502261233 Succeeded -``` - # Additional resources diff --git a/ocp-product-docs-plaintext/4.18/networking/networking_operators/sr-iov-operator/installing-sriov-operator.txt b/ocp-product-docs-plaintext/4.18/networking/networking_operators/sr-iov-operator/installing-sriov-operator.txt index eda83d79..8f91163c 100644 --- a/ocp-product-docs-plaintext/4.18/networking/networking_operators/sr-iov-operator/installing-sriov-operator.txt +++ b/ocp-product-docs-plaintext/4.18/networking/networking_operators/sr-iov-operator/installing-sriov-operator.txt @@ -78,20 +78,13 @@ EOF ``` -* Check that the Operator is installed by entering the following command: +* To verify that the Operator is installed, enter the following command and then check that output shows Succeeded for the Operator: ```terminal $ oc get csv -n openshift-sriov-network-operator \ -o custom-columns=Name:.metadata.name,Phase:.status.phase ``` -Example output - -```terminal -Name Phase -sriov-network-operator.4.18.0-202406131906 Succeeded -``` - ## Web console: Installing the SR-IOV Network Operator diff --git a/ocp-product-docs-plaintext/4.18/observability/logging/about-logging.txt b/ocp-product-docs-plaintext/4.18/observability/logging/about-logging.txt new file mode 100644 index 00000000..c900a0c6 --- /dev/null +++ b/ocp-product-docs-plaintext/4.18/observability/logging/about-logging.txt @@ -0,0 +1,16 @@ +# About logging + + + +As a cluster administrator, you can deploy logging on an Red Hat OpenShift Container Platform cluster, and use it to collect and aggregate node system audit logs, application container logs, and infrastructure logs. 
+ +You can use logging to perform the following tasks: + +* Forward logs to your chosen log outputs, including on-cluster, Red Hat managed log storage. +* Visualize your log data in the Red Hat OpenShift Container Platform web console. + + +[NOTE] +---- +Because logging releases on a different cadence from Red Hat OpenShift Container Platform, the logging documentation is available as a separate documentation set at Red Hat OpenShift Logging. +---- \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.18/observability/logging/logging-6.1/6x-cluster-logging-deploying-6.1.txt b/ocp-product-docs-plaintext/4.18/observability/logging/logging-6.1/6x-cluster-logging-deploying-6.1.txt deleted file mode 100644 index 7521e48d..00000000 --- a/ocp-product-docs-plaintext/4.18/observability/logging/logging-6.1/6x-cluster-logging-deploying-6.1.txt +++ /dev/null @@ -1,640 +0,0 @@ -# Installing Logging - - -Red Hat OpenShift Container Platform Operators use custom resources (CRs) to manage applications and their components. You provide high-level configuration and settings through the CR. The Operator translates high-level directives into low-level actions, based on best practices embedded within the logic of the Operator. A custom resource definition (CRD) defines a CR and lists all the configurations available to users of the Operator. Installing an Operator creates the CRDs to generate CRs. -To get started with logging, you must install the following Operators: -* Loki Operator to manage your log store. -* Red Hat OpenShift Logging Operator to manage log collection and forwarding. -* Cluster Observability Operator (COO) to manage visualization. -You can use either the Red Hat OpenShift Container Platform web console or the Red Hat OpenShift Container Platform CLI to install or configure logging. - -[IMPORTANT] ----- -You must configure the Red Hat OpenShift Logging Operator after the Loki Operator. ----- - -# Installation by using the CLI - -The following sections describe installing the Loki Operator and the Red Hat OpenShift Logging Operator by using the CLI. - -## Installing the Loki Operator by using the CLI - -Install Loki Operator on your Red Hat OpenShift Container Platform cluster to manage the log store Loki by using the Red Hat OpenShift Container Platform command-line interface (CLI). You can deploy and configure the Loki log store by reconciling the resource LokiStack with the Loki Operator. - -* You have administrator permissions. -* You installed the OpenShift CLI (`oc`). -* You have access to a supported object store. For example: AWS S3, Google Cloud Storage, Azure, Swift, Minio, or OpenShift Data Foundation. - -1. Create a Namespace object for Loki Operator: -Example Namespace object - -```yaml -apiVersion: v1 -kind: Namespace -metadata: - name: openshift-operators-redhat 1 - labels: - openshift.io/cluster-monitoring: "true" 2 -``` - -You must specify openshift-operators-redhat as the namespace. To enable monitoring for the operator, configure Cluster Monitoring Operator to scrape metrics from the openshift-operators-redhat namespace and not the openshift-operators namespace. The openshift-operators namespace might contain community operators, which are untrusted and could publish a metric with the same name as an Red Hat OpenShift Container Platform metric, causing conflicts. -A string value that specifies the label as shown to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace. -2. 
Apply the Namespace object by running the following command: - -```terminal -$ oc apply -f .yaml -``` - -3. Create an OperatorGroup object. -Example OperatorGroup object - -```yaml -apiVersion: operators.coreos.com/v1 -kind: OperatorGroup -metadata: - name: loki-operator - namespace: openshift-operators-redhat 1 -spec: - upgradeStrategy: Default -``` - -You must specify openshift-operators-redhat as the namespace. -4. Apply the OperatorGroup object by running the following command: - -```terminal -$ oc apply -f .yaml -``` - -5. Create a Subscription object for Loki Operator: -Example Subscription object - -```yaml -apiVersion: operators.coreos.com/v1alpha1 -kind: Subscription -metadata: - name: loki-operator - namespace: openshift-operators-redhat 1 -spec: - channel: stable-6. 2 - installPlanApproval: Automatic 3 - name: loki-operator - source: redhat-operators 4 - sourceNamespace: openshift-marketplace -``` - -You must specify openshift-operators-redhat as the namespace. -Specify stable-6. as the channel. -If the approval strategy in the subscription is set to Automatic, the update process initiates as soon as a new operator version is available in the selected channel. If the approval strategy is set to Manual, you must manually approve pending updates. -Specify redhat-operators as the value. If your Red Hat OpenShift Container Platform cluster is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object that you created when you configured Operator Lifecycle Manager (OLM). -6. Apply the Subscription object by running the following command: - -```terminal -$ oc apply -f .yaml -``` - -7. Create a namespace object for deploy the LokiStack: -Example namespace object - -```yaml -apiVersion: v1 -kind: Namespace -metadata: - name: openshift-logging 1 - labels: - openshift.io/cluster-monitoring: "true" 2 -``` - -The openshift-logging namespace is dedicated for all logging workloads. -A string value that specifies the label, as shown, to ensure that cluster monitoring scrapes the openshift-logging namespace. -8. Apply the namespace object by running the following command: - -```terminal -$ oc apply -f .yaml -``` - -9. Create a secret with the credentials to access the object storage. For example, create a secret to access Amazon Web Services (AWS) s3. -Example Secret object - -```yaml -apiVersion: v1 -kind: Secret -metadata: - name: logging-loki-s3 1 - namespace: openshift-logging -stringData: 2 - access_key_id: - access_key_secret: - bucketnames: s3-bucket-name - endpoint: https://s3.eu-central-1.amazonaws.com - region: eu-central-1 -``` - -Use the name logging-loki-s3 to match the name used in LokiStack. -For the contents of the secret see the Loki object storage section. - -[IMPORTANT] ----- -If there is no retention period defined on the s3 bucket or in the LokiStack custom resource (CR), then the logs are not pruned and they stay in the s3 bucket forever, which might fill up the s3 storage. ----- -10. Apply the Secret object by running the following command: - -```terminal -$ oc apply -f .yaml -``` - -11. Create a LokiStack CR: -Example LokiStack CR - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki 1 - namespace: openshift-logging 2 -spec: - size: 1x.small 3 - storage: - schemas: - - version: v13 - effectiveDate: "--
" 4 - secret: - name: logging-loki-s3 5 - type: s3 6 - storageClassName: 7 - tenants: - mode: openshift-logging 8 -``` - -Use the name logging-loki. -You must specify openshift-logging as the namespace. -Specify the deployment size. Supported size options for production instances of Loki are 1x.extra-small, 1x.small, or 1x.medium. Additionally, 1x.pico is supported starting with logging 6.1. -For new installations this date should be set to the equivalent of "yesterday", as this will be the date from when the schema takes effect. -Specify the name of your log store secret. -Specify the corresponding storage type. -Specify the name of a storage class for temporary storage. For best performance, specify a storage class that allocates block storage. You can list the available storage classes for your cluster by using the oc get storageclasses command. -The openshift-logging mode is the default tenancy mode where a tenant is created for log types, such as audit, infrastructure, and application. This enables access control for individual users and user groups to different log streams. -12. Apply the LokiStack CR object by running the following command: - -```terminal -$ oc apply -f .yaml -``` - - -* Verify the installation by running the following command: - -```terminal -$ oc get pods -n openshift-logging -``` - -Example output - -```terminal -$ oc get pods -n openshift-logging -NAME READY STATUS RESTARTS AGE -logging-loki-compactor-0 1/1 Running 0 42m -logging-loki-distributor-7d7688bcb9-dvcj8 1/1 Running 0 42m -logging-loki-gateway-5f6c75f879-bl7k9 2/2 Running 0 42m -logging-loki-gateway-5f6c75f879-xhq98 2/2 Running 0 42m -logging-loki-index-gateway-0 1/1 Running 0 42m -logging-loki-ingester-0 1/1 Running 0 42m -logging-loki-querier-6b7b56bccc-2v9q4 1/1 Running 0 42m -logging-loki-query-frontend-84fb57c578-gq2f7 1/1 Running 0 42m -``` - - -## Installing Red Hat OpenShift Logging Operator by using the CLI - -Install Red Hat OpenShift Logging Operator on your Red Hat OpenShift Container Platform cluster to collect and forward logs to a log store by using the OpenShift CLI (`oc`). - -* You have administrator permissions. -* You installed the OpenShift CLI (`oc`). -* You installed and configured Loki Operator. -* You have created the openshift-logging namespace. - -1. Create an OperatorGroup object: -Example OperatorGroup object - -```yaml -apiVersion: operators.coreos.com/v1 -kind: OperatorGroup -metadata: - name: cluster-logging - namespace: openshift-logging 1 -spec: - upgradeStrategy: Default -``` - -You must specify openshift-logging as the namespace. -2. Apply the OperatorGroup object by running the following command: - -```terminal -$ oc apply -f .yaml -``` - -3. Create a Subscription object for Red Hat OpenShift Logging Operator: -Example Subscription object - -```yaml -apiVersion: operators.coreos.com/v1alpha1 -kind: Subscription -metadata: - name: cluster-logging - namespace: openshift-logging 1 -spec: - channel: stable-6. 2 - installPlanApproval: Automatic 3 - name: cluster-logging - source: redhat-operators 4 - sourceNamespace: openshift-marketplace -``` - -You must specify openshift-logging as the namespace. -Specify stable-6. as the channel. -If the approval strategy in the subscription is set to Automatic, the update process initiates as soon as a new operator version is available in the selected channel. If the approval strategy is set to Manual, you must manually approve pending updates. -Specify redhat-operators as the value. 
If your Red Hat OpenShift Container Platform cluster is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object that you created when you configured Operator Lifecycle Manager (OLM). -4. Apply the Subscription object by running the following command: - -```terminal -$ oc apply -f .yaml -``` - -5. Create a service account to be used by the log collector: - -```terminal -$ oc create sa logging-collector -n openshift-logging -``` - -6. Assign the necessary permissions to the service account for the collector to be able to collect and forward logs. In this example, the collector is provided permissions to collect logs from both infrastructure and application logs. - -```terminal -$ oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z logging-collector -n openshift-logging -$ oc adm policy add-cluster-role-to-user collect-application-logs -z logging-collector -n openshift-logging -$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z logging-collector -n openshift-logging -``` - -7. Create a ClusterLogForwarder CR: -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - name: instance - namespace: openshift-logging 1 -spec: - serviceAccount: - name: logging-collector 2 - outputs: - - name: lokistack-out - type: lokiStack 3 - lokiStack: - target: 4 - name: logging-loki - namespace: openshift-logging - authentication: - token: - from: serviceAccount - tls: - ca: - key: service-ca.crt - configMapName: openshift-service-ca.crt - pipelines: - - name: infra-app-logs - inputRefs: 5 - - application - - infrastructure - outputRefs: - - lokistack-out -``` - -You must specify the openshift-logging namespace. -Specify the name of the service account created before. -Select the lokiStack output type to send logs to the LokiStack instance. -Point the ClusterLogForwarder to the LokiStack instance created earlier. -Select the log output types you want to send to the LokiStack instance. -8. Apply the ClusterLogForwarder CR object by running the following command: - -```terminal -$ oc apply -f .yaml -``` - - -1. Verify the installation by running the following command: - -```terminal -$ oc get pods -n openshift-logging -``` - -Example output - -```terminal -$ oc get pods -n openshift-logging -NAME READY STATUS RESTARTS AGE -cluster-logging-operator-fb7f7cf69-8jsbq 1/1 Running 0 98m -instance-222js 2/2 Running 0 18m -instance-g9ddv 2/2 Running 0 18m -instance-hfqq8 2/2 Running 0 18m -instance-sphwg 2/2 Running 0 18m -instance-vv7zn 2/2 Running 0 18m -instance-wk5zz 2/2 Running 0 18m -logging-loki-compactor-0 1/1 Running 0 42m -logging-loki-distributor-7d7688bcb9-dvcj8 1/1 Running 0 42m -logging-loki-gateway-5f6c75f879-bl7k9 2/2 Running 0 42m -logging-loki-gateway-5f6c75f879-xhq98 2/2 Running 0 42m -logging-loki-index-gateway-0 1/1 Running 0 42m -logging-loki-ingester-0 1/1 Running 0 42m -logging-loki-querier-6b7b56bccc-2v9q4 1/1 Running 0 42m -logging-loki-query-frontend-84fb57c578-gq2f7 1/1 Running 0 42m -``` - - -nclude::modules/log6x-installing-the-logging-ui-plug-in-cli.adoc[leveloffset=+2] - -# Installation by using the web console - -The following sections describe installing the Loki Operator and the Red Hat OpenShift Logging Operator by using the web console. 
- -## Installing Logging by using the web console - -Install Loki Operator on your Red Hat OpenShift Container Platform cluster to manage the log store Loki from the OperatorHub by using the Red Hat OpenShift Container Platform web console. You can deploy and configure the Loki log store by reconciling the resource LokiStack with the Loki Operator. - -* You have administrator permissions. -* You have access to the Red Hat OpenShift Container Platform web console. -* You have access to a supported object store (AWS S3, Google Cloud Storage, Azure, Swift, Minio, OpenShift Data Foundation). - -1. In the Red Hat OpenShift Container Platform web console Administrator perspective, go to Operators -> OperatorHub. -2. Type Loki Operator in the Filter by keyword field. Click Loki Operator in the list of available Operators, and then click Install. - -[IMPORTANT] ----- -The Community Loki Operator is not supported by Red Hat. ----- -3. Select stable-x.y as the Update channel. - -The Loki Operator must be deployed to the global Operator group namespace openshift-operators-redhat, so the Installation mode and Installed Namespace are already selected. If this namespace does not already exist, it will be created for you. -4. Select Enable Operator-recommended cluster monitoring on this namespace. - -This option sets the openshift.io/cluster-monitoring: "true" label in the Namespace object. You must select this option to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace. -5. For Update approval select Automatic, then click Install. - -If the approval strategy in the subscription is set to Automatic, the update process initiates as soon as a new Operator version is available in the selected channel. If the approval strategy is set to Manual, you must manually approve pending updates. - -[NOTE] ----- -An Operator might display a Failed status before the installation completes. If the Operator install completes with an InstallSucceeded message, refresh the page. ----- -6. While the Operator installs, create the namespace to which the log store will be deployed. -1. Click + in the top right of the screen to access the Import YAML page. -2. Add the YAML definition for the openshift-logging namespace: -Example namespace object - -```yaml -apiVersion: v1 -kind: Namespace -metadata: - name: openshift-logging 1 - labels: - openshift.io/cluster-monitoring: "true" 2 -``` - -The openshift-logging namespace is dedicated for all logging workloads. -A string value that specifies the label, as shown, to ensure that cluster monitoring scrapes the openshift-logging namespace. -3. Click Create. -7. Create a secret with the credentials to access the object storage. -1. Click + in the top right of the screen to access the Import YAML page. -2. Add the YAML definition for the secret. For example, create a secret to access Amazon Web Services (AWS) s3: -Example Secret object - -```yaml -apiVersion: v1 -kind: Secret -metadata: - name: logging-loki-s3 1 - namespace: openshift-logging 2 -stringData: 3 - access_key_id: - access_key_secret: - bucketnames: s3-bucket-name - endpoint: https://s3.eu-central-1.amazonaws.com - region: eu-central-1 -``` - -Note down the name used for the secret logging-loki-s3 to use it later when creating the LokiStack resource. -Set the namespace to openshift-logging as that will be the namespace used to deploy LokiStack. -For the contents of the secret see the Loki object storage section. 
- -[IMPORTANT] ----- -If there is no retention period defined on the s3 bucket or in the LokiStack custom resource (CR), then the logs are not pruned and they stay in the s3 bucket forever, which might fill up the s3 storage. ----- -3. Click Create. -8. Navigate to the Installed Operators page. Select the Loki Operator under the Provided APIs find the LokiStack resource and click Create Instance. -9. Select YAML view, and then use the following template to create a LokiStack CR: - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki 1 - namespace: openshift-logging 2 -spec: - size: 1x.small 3 - storage: - schemas: - - version: v13 - effectiveDate: "--
" - secret: - name: logging-loki-s3 4 - type: s3 5 - storageClassName: 6 - tenants: - mode: openshift-logging 7 -``` - -Use the name logging-loki. -You must specify openshift-logging as the namespace. -Specify the deployment size. Supported size options for production instances of Loki are 1x.extra-small, 1x.small, or 1x.medium. Additionally, 1x.pico is supported starting with logging 6.1. -Specify the name of your log store secret. -Specify the corresponding storage type. -Specify the name of a storage class for temporary storage. For best performance, specify a storage class that allocates block storage. You can list the available storage classes for your cluster by using the oc get storageclasses command. -The openshift-logging mode is the default tenancy mode where a tenant is created for log types, such as audit, infrastructure, and application. This enables access control for individual users and user groups to different log streams. -10. Click Create. - -1. In the LokiStack tab veriy that you see your LokiStack instance. -2. In the Status column, verify that you see the message Condition: Ready with a green checkmark. - -## Installing Red Hat OpenShift Logging Operator by using the web console - -Install Red Hat OpenShift Logging Operator on your Red Hat OpenShift Container Platform cluster to collect and forward logs to a log store from the OperatorHub by using the Red Hat OpenShift Container Platform web console. - -* You have administrator permissions. -* You have access to the Red Hat OpenShift Container Platform web console. -* You installed and configured Loki Operator. - -1. In the Red Hat OpenShift Container Platform web console Administrator perspective, go to Operators -> OperatorHub. -2. Type Red Hat OpenShift Logging Operator in the Filter by keyword field. Click Red Hat OpenShift Logging Operator in the list of available Operators, and then click Install. -3. Select stable-x.y as the Update channel. The latest version is already selected in the Version field. - -The Red Hat OpenShift Logging Operator must be deployed to the logging namespace openshift-logging, so the Installation mode and Installed Namespace are already selected. If this namespace does not already exist, it will be created for you. -4. Select Enable Operator-recommended cluster monitoring on this namespace. - -This option sets the openshift.io/cluster-monitoring: "true" label in the Namespace object. You must select this option to ensure that cluster monitoring scrapes the openshift-logging namespace. -5. For Update approval select Automatic, then click Install. - -If the approval strategy in the subscription is set to Automatic, the update process initiates as soon as a new operator version is available in the selected channel. If the approval strategy is set to Manual, you must manually approve pending updates. - -[NOTE] ----- -An Operator might display a Failed status before the installation completes. If the operator installation completes with an InstallSucceeded message, refresh the page. ----- -6. While the operator installs, create the service account that will be used by the log collector to collect the logs. -1. Click the + in the top right of the screen to access the Import YAML page. -2. Enter the YAML definition for the service account. 
-Example ServiceAccount object - -```yaml -apiVersion: v1 -kind: ServiceAccount -metadata: - name: logging-collector 1 - namespace: openshift-logging 2 -``` - -Note down the name used for the service account logging-collector to use it later when creating the ClusterLogForwarder resource. -Set the namespace to openshift-logging because that is the namespace for deploying the ClusterLogForwarder resource. -3. Click the Create button. -7. Create the ClusterRoleBinding objects to grant the necessary permissions to the log collector for accessing the logs that you want to collect and to write the log store, for example infrastructure and application logs. -1. Click the + in the top right of the screen to access the Import YAML page. -2. Enter the YAML definition for the ClusterRoleBinding resources. -Example ClusterRoleBinding resources - -```yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRoleBinding -metadata: - name: logging-collector:write-logs -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: logging-collector-logs-writer 1 -subjects: -- kind: ServiceAccount - name: logging-collector - namespace: openshift-logging ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRoleBinding -metadata: - name: logging-collector:collect-application -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: collect-application-logs 2 -subjects: -- kind: ServiceAccount - name: logging-collector - namespace: openshift-logging ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRoleBinding -metadata: - name: logging-collector:collect-infrastructure -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: collect-infrastructure-logs 3 -subjects: -- kind: ServiceAccount - name: logging-collector - namespace: openshift-logging -``` - -The cluster role to allow the log collector to write logs to LokiStack. -The cluster role to allow the log collector to collect logs from applications. -The cluster role to allow the log collector to collect logs from infrastructure. -3. Click the Create button. -8. Go to the Operators -> Installed Operators page. Select the operator and click the All instances tab. -9. After granting the necessary permissions to the service account, navigate to the Installed Operators page. Select the Red Hat OpenShift Logging Operator under the Provided APIs, find the ClusterLogForwarder resource and click Create Instance. -10. Select YAML view, and then use the following template to create a ClusterLogForwarder CR: -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - name: instance - namespace: openshift-logging 1 -spec: - serviceAccount: - name: logging-collector 2 - outputs: - - name: lokistack-out - type: lokiStack 3 - lokiStack: - target: 4 - name: logging-loki - namespace: openshift-logging - authentication: - token: - from: serviceAccount - tls: - ca: - key: service-ca.crt - configMapName: openshift-service-ca.crt - pipelines: - - name: infra-app-logs - inputRefs: 5 - - application - - infrastructure - outputRefs: - - lokistack-out -``` - -You must specify openshift-logging as the namespace. -Specify the name of the service account created earlier. -Select the lokiStack output type to send logs to the LokiStack instance. -Point the ClusterLogForwarder to the LokiStack instance created earlier. -Select the log output types you want to send to the LokiStack instance. -11. Click Create. - -1. 
In the ClusterLogForwarder tab verify that you see your ClusterLogForwarder instance. -2. In the Status column, verify that you see the messages: -* Condition: observability.openshift.io/Authorized -* observability.openshift.io/Valid, Ready - -## Installing the Logging UI plugin by using the web console - -Install the Logging UI plugin by using the web console so that you can visualize logs. - -* You have administrator permissions. -* You have access to the Red Hat OpenShift Container Platform web console. -* You installed and configured Loki Operator. - -1. Install the Cluster Observability Operator. For more information, see Installing the Cluster Observability Operator. -2. Navigate to the Installed Operators page. Under Provided APIs, select ClusterObservabilityOperator. Find the UIPlugin resource and click Create Instance. -3. Select the YAML view, and then use the following template to create a UIPlugin custom resource (CR): - -```yaml -apiVersion: observability.openshift.io/v1alpha1 -kind: UIPlugin -metadata: - name: logging 1 -spec: - type: Logging 2 - logging: - lokiStack: - name: logging-loki 3 -``` - -Set name to logging. -Set type to Logging. -The name value must match the name of your LokiStack instance. - -[NOTE] ----- -If you did not install LokiStack in the openshift-logging namespace, set the LokiStack namespace under the lokiStack configuration. ----- -4. Click Create. - -1. Refresh the page when a pop-up message instructs you to do so. -2. Navigate to the Observe → Logs panel, where you can run LogQL queries. You can also query logs for individual pods from the Aggregated Logs tab of a specific pod. \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.18/observability/logging/logging-6.1/log61-cluster-logging-support.txt b/ocp-product-docs-plaintext/4.18/observability/logging/logging-6.1/log61-cluster-logging-support.txt deleted file mode 100644 index a1e0aa01..00000000 --- a/ocp-product-docs-plaintext/4.18/observability/logging/logging-6.1/log61-cluster-logging-support.txt +++ /dev/null @@ -1,136 +0,0 @@ -# Support - - -Only the configuration options described in this documentation are supported for logging. -Do not use any other configuration options, as they are unsupported. Configuration paradigms might change across Red Hat OpenShift Container Platform releases, and such cases can only be handled gracefully if all configuration possibilities are controlled. If you use configurations other than those described in this documentation, your changes will be overwritten, because Operators are designed to reconcile any differences. - -[NOTE] ----- -If you must perform configurations not described in the Red Hat OpenShift Container Platform documentation, you must set your Red Hat OpenShift Logging Operator to Unmanaged. An unmanaged logging instance is not supported and does not receive updates until you return its status to Managed. ----- - -[NOTE] ----- -Logging is provided as an installable component, with a distinct release cycle from the core Red Hat OpenShift Container Platform. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility. ----- -Loki is a horizontally scalable, highly available, multi-tenant log aggregation system offered as a GA log store for logging for Red Hat OpenShift that can be visualized with the OpenShift Observability UI. The Loki configuration provided by OpenShift Logging is a short-term log store designed to enable users to perform fast troubleshooting with the collected logs. 
For that purpose, the logging for Red Hat OpenShift configuration of Loki has short-term storage, and is optimized for very recent queries. For long-term storage or queries over a long time period, users should look to log stores external to their cluster. -Logging for Red Hat OpenShift is an opinionated collector and normalizer of application, infrastructure, and audit logs. It is intended to be used for forwarding logs to various supported systems. -Logging is not: -* A high scale log collection system -* Security Information and Event Monitoring (SIEM) compliant -* A "bring your own" (BYO) log collector configuration -* Historical or long term log retention or storage -* A guaranteed log sink -* Secure storage - audit logs are not stored by default - -# Supported API custom resource definitions - -The following table describes the supported Logging APIs. - - - -# Unsupported configurations - -You must set the Red Hat OpenShift Logging Operator to the Unmanaged state to modify the following components: - -* The collector configuration file -* The collector daemonset - -Explicitly unsupported cases include: - -* Configuring the logging collector using environment variables. You cannot use environment variables to modify the log collector. -* Configuring how the log collector normalizes logs. You cannot modify default log normalization. - -# Support policy for unmanaged Operators - -The management state of an Operator determines whether an Operator is actively -managing the resources for its related component in the cluster as designed. If -an Operator is set to an unmanaged state, it does not respond to changes in -configuration nor does it receive updates. - -While this can be helpful in non-production clusters or during debugging, -Operators in an unmanaged state are unsupported and the cluster administrator -assumes full control of the individual component configurations and upgrades. - -An Operator can be set to an unmanaged state using the following methods: - -* Individual Operator configuration - -Individual Operators have a managementState parameter in their configuration. -This can be accessed in different ways, depending on the Operator. For example, -the Red Hat OpenShift Logging Operator accomplishes this by modifying a custom resource -(CR) that it manages, while the Cluster Samples Operator uses a cluster-wide -configuration resource. - -Changing the managementState parameter to Unmanaged means that the Operator -is not actively managing its resources and will take no action related to the -related component. Some Operators might not support this management state as it -might damage the cluster and require manual recovery. - -[WARNING] ----- -Changing individual Operators to the Unmanaged state renders that particular -component and functionality unsupported. Reported issues must be reproduced in -Managed state for support to proceed. ----- -* Cluster Version Operator (CVO) overrides - -The spec.overrides parameter can be added to the CVO’s configuration to allow -administrators to provide a list of overrides to the CVO’s behavior for a -component. Setting the spec.overrides[].unmanaged parameter to true for a -component blocks cluster upgrades and alerts the administrator after a CVO -override has been set: - -```terminal -Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing. -``` - - -[WARNING] ----- -Setting a CVO override puts the entire cluster in an unsupported state. 
Reported -issues must be reproduced after removing any overrides for support to proceed. ----- - -# Collecting logging data for Red Hat Support - -When opening a support case, it is helpful to provide debugging information about your cluster to Red Hat Support. - -You can use the must-gather tool to collect diagnostic information for project-level resources, cluster-level resources, and each of the logging components. -For prompt support, supply diagnostic information for both Red Hat OpenShift Container Platform and logging. - -## About the must-gather tool - -The oc adm must-gather CLI command collects the information from your cluster that is most likely needed for debugging issues. - -For your logging, must-gather collects the following information: - -* Project-level resources, including pods, configuration maps, service accounts, roles, role bindings, and events at the project level -* Cluster-level resources, including nodes, roles, and role bindings at the cluster level -* OpenShift Logging resources in the openshift-logging and openshift-operators-redhat namespaces, including health status for the log collector, the log store, and the log visualizer - -When you run oc adm must-gather, a new pod is created on the cluster. The data is collected on that pod and saved in a new directory that starts with must-gather.local. This directory is created in the current working directory. - -## Collecting logging data - -You can use the oc adm must-gather CLI command to collect information about logging. - -To collect logging information with must-gather: - -1. Navigate to the directory where you want to store the must-gather information. -2. Run the oc adm must-gather command against the logging image: - -```terminal -$ oc adm must-gather --image=$(oc -n openshift-logging get deployment.apps/cluster-logging-operator -o jsonpath='{.spec.template.spec.containers[?(@.name == "cluster-logging-operator")].image}') -``` - - -The must-gather tool creates a new directory that starts with must-gather.local within the current directory. For example: -must-gather.local.4157245944708210408. -3. Create a compressed file from the must-gather directory that was just created. For example, on a computer that uses a Linux operating system, run the following command: - -```terminal -$ tar -cvaf must-gather.tar.gz must-gather.local.4157245944708210408 -``` - -4. Attach the compressed file to your support case on the Red Hat Customer Portal. \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.18/observability/logging/logging-6.1/log6x-about-6.1.txt b/ocp-product-docs-plaintext/4.18/observability/logging/logging-6.1/log6x-about-6.1.txt deleted file mode 100644 index 4d7fe521..00000000 --- a/ocp-product-docs-plaintext/4.18/observability/logging/logging-6.1/log6x-about-6.1.txt +++ /dev/null @@ -1,330 +0,0 @@ -# Logging 6.1 - - -The ClusterLogForwarder custom resource (CR) is the central configuration point for log collection and forwarding. - -# Inputs and outputs - -Inputs specify the sources of logs to be forwarded. Logging provides the following built-in input types that select logs from different parts of your cluster: - -* application -* receiver -* infrastructure -* audit - -You can also define custom inputs based on namespaces or pod labels to fine-tune log selection. - -Outputs define the destinations where logs are sent. Each output type has its own set of configuration options, allowing you to customize the behavior and authentication settings. 
- -# Receiver input type - -The receiver input type enables the Logging system to accept logs from external sources. It supports two formats for receiving logs: http and syslog. - -The ReceiverSpec field defines the configuration for a receiver input. - -# Pipelines and filters - -Pipelines determine the flow of logs from inputs to outputs. A pipeline consists of one or more input refs, output refs, and optional filter refs. You can use filters to transform or drop log messages within a pipeline. The order of filters matters, as they are applied sequentially, and earlier filters can prevent log messages from reaching later stages. - -# Operator behavior - -The Cluster Logging Operator manages the deployment and configuration of the collector based on the managementState field of the ClusterLogForwarder resource: - -* When set to Managed (default), the Operator actively manages the logging resources to match the configuration defined in the spec. -* When set to Unmanaged, the Operator does not take any action, allowing you to manually manage the logging components. - -# Validation - -Logging includes extensive validation rules and default values to ensure a smooth and error-free configuration experience. The ClusterLogForwarder resource enforces validation checks on required fields, dependencies between fields, and the format of input values. Default values are provided for certain fields, reducing the need for explicit configuration in common scenarios. - -# Quick start - -OpenShift Logging supports two data models: - -* ViaQ (General Availability) -* OpenTelemetry (Technology Preview) - -You can select either of these data models based on your requirement by configuring the lokiStack.dataModel field in the ClusterLogForwarder. ViaQ is the default data model when forwarding logs to LokiStack. - - -[NOTE] ----- -In future releases of OpenShift Logging, the default data model will change from ViaQ to OpenTelemetry. ----- - -## Quick start with ViaQ - -To use the default ViaQ data model, follow these steps: - -* You have access to an Red Hat OpenShift Container Platform cluster with cluster-admin permissions. -* You installed the OpenShift CLI (`oc`). -* You have access to a supported object store. For example, AWS S3, Google Cloud Storage, Azure, Swift, Minio, or OpenShift Data Foundation. - -1. Install the Red Hat OpenShift Logging Operator, Loki Operator, and Cluster Observability Operator (COO) from OperatorHub. -2. Create a LokiStack custom resource (CR) in the openshift-logging namespace: - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: - managementState: Managed - size: 1x.extra-small - storage: - schemas: - - effectiveDate: '2024-10-01' - version: v13 - secret: - name: logging-loki-s3 - type: s3 - storageClassName: gp3-csi - tenants: - mode: openshift-logging -``` - - -[NOTE] ----- -Ensure that the logging-loki-s3 secret is created beforehand. The contents of this secret vary depending on the object storage in use. For more information, see Secrets and TLS Configuration. ----- -3. Create a service account for the collector: - -```terminal -$ oc create sa collector -n openshift-logging -``` - -4. 
Allow the collector's service account to write data to the LokiStack CR: - -```terminal -$ oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector -n openshift-logging -``` - - -[NOTE] ----- -The ClusterRole resource is created automatically during the Cluster Logging Operator installation and does not need to be created manually. ----- -5. To collect logs, use the service account of the collector by running the following commands: - -```terminal -$ oc adm policy add-cluster-role-to-user collect-application-logs -z collector -n openshift-logging -``` - - -```terminal -$ oc adm policy add-cluster-role-to-user collect-audit-logs -z collector -n openshift-logging -``` - - -```terminal -$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector -n openshift-logging -``` - - -[NOTE] ----- -The example binds the collector to all three roles (application, infrastructure, and audit), but by default, only application and infrastructure logs are collected. To collect audit logs, update your ClusterLogForwarder configuration to include them. Assign roles based on the specific log types required for your environment. ----- -6. Create a UIPlugin CR to enable the Log section in the Observe tab: - -```yaml -apiVersion: observability.openshift.io/v1alpha1 -kind: UIPlugin -metadata: - name: logging -spec: - type: Logging - logging: - lokiStack: - name: logging-loki -``` - -7. Create a ClusterLogForwarder CR to configure log forwarding: - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - name: collector - namespace: openshift-logging -spec: - serviceAccount: - name: collector - outputs: - - name: default-lokistack - type: lokiStack - lokiStack: - authentication: - token: - from: serviceAccount - target: - name: logging-loki - namespace: openshift-logging - tls: - ca: - key: service-ca.crt - configMapName: openshift-service-ca.crt - pipelines: - - name: default-logstore - inputRefs: - - application - - infrastructure - outputRefs: - - default-lokistack -``` - - -[NOTE] ----- -The dataModel field is optional and left unset (dataModel: "") by default. This allows the Cluster Logging Operator (CLO) to automatically select a data model. Currently, the CLO defaults to the ViaQ model when the field is unset, but this will change in future releases. Specifying dataModel: ViaQ ensures the configuration remains compatible if the default changes. ----- - -* Verify that logs are visible in the Log section of the Observe tab in the Red Hat OpenShift Container Platform web console. - -## Quick start with OpenTelemetry - - -[IMPORTANT] ----- -{FeatureName} is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. -For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. ----- - -To configure OTLP ingestion and enable the OpenTelemetry data model, follow these steps: - -* You have access to an Red Hat OpenShift Container Platform cluster with cluster-admin permissions. -* You have installed the OpenShift CLI (`oc`). -* You have access to a supported object store. 
For example, AWS S3, Google Cloud Storage, Azure, Swift, Minio, or OpenShift Data Foundation. - -1. Install the Red Hat OpenShift Logging Operator, Loki Operator, and Cluster Observability Operator (COO) from OperatorHub. -2. Create a LokiStack custom resource (CR) in the openshift-logging namespace: - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: - managementState: Managed - size: 1x.extra-small - storage: - schemas: - - effectiveDate: '2024-10-01' - version: v13 - secret: - name: logging-loki-s3 - type: s3 - storageClassName: gp3-csi - tenants: - mode: openshift-logging -``` - - -[NOTE] ----- -Ensure that the logging-loki-s3 secret is created beforehand. The contents of this secret vary depending on the object storage in use. For more information, see "Secrets and TLS Configuration". ----- -3. Create a service account for the collector: - -```terminal -$ oc create sa collector -n openshift-logging -``` - -4. Allow the collector's service account to write data to the LokiStack CR: - -```terminal -$ oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector -n openshift-logging -``` - - -[NOTE] ----- -The ClusterRole resource is created automatically during the Cluster Logging Operator installation and does not need to be created manually. ----- -5. To collect logs, use the service account of the collector by running the following commands: - -```terminal -$ oc adm policy add-cluster-role-to-user collect-application-logs -z collector -n openshift-logging -``` - - -```terminal -$ oc adm policy add-cluster-role-to-user collect-audit-logs -z collector -n openshift-logging -``` - - -```terminal -$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector -n openshift-logging -``` - - -[NOTE] ----- -The example binds the collector to all three roles (application, infrastructure, and audit). By default, only application and infrastructure logs are collected. To collect audit logs, update your ClusterLogForwarder configuration to include them. Assign roles based on the specific log types required for your environment. ----- -6. Create a UIPlugin CR to enable the Log section in the Observe tab: - -```yaml -apiVersion: observability.openshift.io/v1alpha1 -kind: UIPlugin -metadata: - name: logging -spec: - type: Logging - logging: - lokiStack: - name: logging-loki -``` - -7. Create a ClusterLogForwarder CR to configure log forwarding: - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - name: collector - namespace: openshift-logging - annotations: - observability.openshift.io/tech-preview-otlp-output: "enabled" 1 -spec: - serviceAccount: - name: collector - outputs: - - name: loki-otlp - type: lokiStack 2 - lokiStack: - target: - name: logging-loki - namespace: openshift-logging - dataModel: Otel 3 - authentication: - token: - from: serviceAccount - tls: - ca: - key: service-ca.crt - configMapName: openshift-service-ca.crt - pipelines: - - name: my-pipeline - inputRefs: - - application - - infrastructure - outputRefs: - - loki-otlp -``` - -Use the annotation to enable the Otel data model, which is a Technology Preview feature. -Define the output type as lokiStack. -Specifies the OpenTelemetry data model. - -[NOTE] ----- -You cannot use lokiStack.labelKeys when dataModel is Otel. To achieve similar functionality when dataModel is Otel, refer to "Configuring LokiStack for OTLP data ingestion". 
----- - -* To verify that OTLP is functioning correctly, complete the following steps: -1. In the OpenShift web console, click Observe -> OpenShift Logging -> LokiStack -> Writes. -2. Check the Distributor - Structured Metadata section. \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.18/observability/logging/logging-6.1/log6x-clf-6.1.txt b/ocp-product-docs-plaintext/4.18/observability/logging/logging-6.1/log6x-clf-6.1.txt deleted file mode 100644 index eee9c76a..00000000 --- a/ocp-product-docs-plaintext/4.18/observability/logging/logging-6.1/log6x-clf-6.1.txt +++ /dev/null @@ -1,818 +0,0 @@ -# Configuring log forwarding - - -The ClusterLogForwarder (CLF) allows users to configure forwarding of logs to various destinations. It provides a flexible way to select log messages from different sources, send them through a pipeline that can transform or filter them, and forward them to one or more outputs. -* Selects log messages using inputs -* Forwards logs to external destinations using outputs -* Filters, transforms, and drops log messages using filters -* Defines log forwarding pipelines connecting inputs, filters and outputs - -# Setting up log collection - -This release of Cluster Logging requires administrators to explicitly grant log collection permissions to the service account associated with ClusterLogForwarder. This was not required in previous releases for the legacy logging scenario consisting of a ClusterLogging and, optionally, a ClusterLogForwarder.logging.openshift.io resource. - -The Red Hat OpenShift Logging Operator provides collect-audit-logs, collect-application-logs, and collect-infrastructure-logs cluster roles, which enable the collector to collect audit logs, application logs, and infrastructure logs respectively. - -Setup log collection by binding the required cluster roles to your service account. - -## Legacy service accounts - -To use the existing legacy service account logcollector, create the following ClusterRoleBinding: - - -```terminal -$ oc adm policy add-cluster-role-to-user collect-application-logs system:serviceaccount:openshift-logging:logcollector -``` - - - -```terminal -$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs system:serviceaccount:openshift-logging:logcollector -``` - - -Additionally, create the following ClusterRoleBinding if collecting audit logs: - - -```terminal -$ oc adm policy add-cluster-role-to-user collect-audit-logs system:serviceaccount:openshift-logging:logcollector -``` - - -## Creating service accounts - -* The Red Hat OpenShift Logging Operator is installed in the openshift-logging namespace. -* You have administrator permissions. - -1. Create a service account for the collector. If you want to write logs to storage that requires a token for authentication, you must include a token in the service account. -2. Bind the appropriate cluster roles to the service account: -Example binding command - -```terminal -$ oc adm policy add-cluster-role-to-user system:serviceaccount:: -``` - - -### Cluster Role Binding for your Service Account - -The role_binding.yaml file binds the ClusterLogging operator’s ClusterRole to a specific ServiceAccount, allowing it to manage Kubernetes resources cluster-wide. 
- - -```yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRoleBinding -metadata: - name: manager-rolebinding -roleRef: 1 - apiGroup: rbac.authorization.k8s.io 2 - kind: ClusterRole 3 - name: cluster-logging-operator 4 -subjects: 5 - - kind: ServiceAccount 6 - name: cluster-logging-operator 7 - namespace: openshift-logging 8 -``` - - -roleRef: References the ClusterRole to which the binding applies. -apiGroup: Indicates the RBAC API group, specifying that the ClusterRole is part of Kubernetes' RBAC system. -kind: Specifies that the referenced role is a ClusterRole, which applies cluster-wide. -name: The name of the ClusterRole being bound to the ServiceAccount, here cluster-logging-operator. -subjects: Defines the entities (users or service accounts) that are being granted the permissions from the ClusterRole. -kind: Specifies that the subject is a ServiceAccount. -Name: The name of the ServiceAccount being granted the permissions. -namespace: Indicates the namespace where the ServiceAccount is located. - -### Writing application logs - -The write-application-logs-clusterrole.yaml file defines a ClusterRole that grants permissions to write application logs to the Loki logging application. - - -```yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole -metadata: - name: cluster-logging-write-application-logs -rules: 1 - - apiGroups: 2 - - loki.grafana.com 3 - resources: 4 - - application 5 - resourceNames: 6 - - logs 7 - verbs: 8 - - create 9 -``` - - -rules: Specifies the permissions granted by this ClusterRole. -apiGroups: Refers to the API group loki.grafana.com, which relates to the Loki logging system. -loki.grafana.com: The API group for managing Loki-related resources. -resources: The resource type that the ClusterRole grants permission to interact with. -application: Refers to the application resources within the Loki logging system. -resourceNames: Specifies the names of resources that this role can manage. -logs: Refers to the log resources that can be created. -verbs: The actions allowed on the resources. -create: Grants permission to create new logs in the Loki system. - -### Writing audit logs - -The write-audit-logs-clusterrole.yaml file defines a ClusterRole that grants permissions to create audit logs in the Loki logging system. - - -```yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole -metadata: - name: cluster-logging-write-audit-logs -rules: 1 - - apiGroups: 2 - - loki.grafana.com 3 - resources: 4 - - audit 5 - resourceNames: 6 - - logs 7 - verbs: 8 - - create 9 -``` - - -rules: Defines the permissions granted by this ClusterRole. -apiGroups: Specifies the API group loki.grafana.com. -loki.grafana.com: The API group responsible for Loki logging resources. -resources: Refers to the resource type this role manages, in this case, audit. -audit: Specifies that the role manages audit logs within Loki. -resourceNames: Defines the specific resources that the role can access. -logs: Refers to the logs that can be managed under this role. -verbs: The actions allowed on the resources. -create: Grants permission to create new audit logs. - -### Writing infrastructure logs - -The write-infrastructure-logs-clusterrole.yaml file defines a ClusterRole that grants permission to create infrastructure logs in the Loki logging system. 
- - -```yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole -metadata: - name: cluster-logging-write-infrastructure-logs -rules: 1 - - apiGroups: 2 - - loki.grafana.com 3 - resources: 4 - - infrastructure 5 - resourceNames: 6 - - logs 7 - verbs: 8 - - create 9 -``` - - -rules: Specifies the permissions this ClusterRole grants. -apiGroups: Specifies the API group for Loki-related resources. -loki.grafana.com: The API group managing the Loki logging system. -resources: Defines the resource type that this role can interact with. -infrastructure: Refers to infrastructure-related resources that this role manages. -resourceNames: Specifies the names of resources this role can manage. -logs: Refers to the log resources related to infrastructure. -verbs: The actions permitted by this role. -create: Grants permission to create infrastructure logs in the Loki system. - -### ClusterLogForwarder editor role - -The clusterlogforwarder-editor-role.yaml file defines a ClusterRole that allows users to manage ClusterLogForwarders in OpenShift. - - -```yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole -metadata: - name: clusterlogforwarder-editor-role -rules: 1 - - apiGroups: 2 - - observability.openshift.io 3 - resources: 4 - - clusterlogforwarders 5 - verbs: 6 - - create 7 - - delete 8 - - get 9 - - list 10 - - patch 11 - - update 12 - - watch 13 -``` - - -rules: Specifies the permissions this ClusterRole grants. -apiGroups: Refers to the OpenShift-specific API group -obervability.openshift.io: The API group for managing observability resources, like logging. -resources: Specifies the resources this role can manage. -clusterlogforwarders: Refers to the log forwarding resources in OpenShift. -verbs: Specifies the actions allowed on the ClusterLogForwarders. -create: Grants permission to create new ClusterLogForwarders. -delete: Grants permission to delete existing ClusterLogForwarders. -get: Grants permission to retrieve information about specific ClusterLogForwarders. -list: Allows listing all ClusterLogForwarders. -patch: Grants permission to partially modify ClusterLogForwarders. -update: Grants permission to update existing ClusterLogForwarders. -watch: Grants permission to monitor changes to ClusterLogForwarders. - -# Modifying log level in collector - -To modify the log level in the collector, you can set the observability.openshift.io/log-level annotation to trace, debug, info, warn, error, and off. - - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - name: collector - annotations: - observability.openshift.io/log-level: debug -# ... -``` - - -# Managing the Operator - -The ClusterLogForwarder resource has a managementState field that controls whether the operator actively manages its resources or leaves them Unmanaged: - -Managed:: (default) The operator will drive the logging resources to match the desired state in the CLF spec. -Unmanaged:: The operator will not take any action related to the logging components. - -This allows administrators to temporarily pause log forwarding by setting managementState to Unmanaged. - -# Structure of the ClusterLogForwarder - -The CLF has a spec section that contains the following key components: - -Inputs:: Select log messages to be forwarded. Built-in input types application, infrastructure and audit forward logs from different parts of the cluster. You can also define custom inputs. -Outputs:: Define destinations to forward logs to. 
Each output has a unique name and type-specific configuration. -Pipelines:: Define the path logs take from inputs, through filters, to outputs. Pipelines have a unique name and consist of a list of input, output and filter names. -Filters:: Transform or drop log messages in the pipeline. Users can define filters that match certain log fields and drop or modify the messages. Filters are applied in the order specified in the pipeline. - -## Inputs - -Inputs are configured in an array under spec.inputs. There are three built-in input types: - -application:: Selects logs from all application containers, excluding those in infrastructure namespaces. -infrastructure:: Selects logs from nodes and from infrastructure components running in the following namespaces: -* default -* kube -* openshift -* Containing the kube- or openshift- prefix -audit:: Selects logs from the OpenShift API server audit logs, Kubernetes API server audit logs, ovn audit logs, and node audit logs from auditd. - -Users can define custom inputs of type application that select logs from specific namespaces or using pod labels. - -## Outputs - -Outputs are configured in an array under spec.outputs. Each output must have a unique name and a type. Supported types are: - -azureMonitor:: Forwards logs to Azure Monitor. -cloudwatch:: Forwards logs to AWS CloudWatch. -googleCloudLogging:: Forwards logs to Google Cloud Logging. -http:: Forwards logs to a generic HTTP endpoint. -kafka:: Forwards logs to a Kafka broker. -loki:: Forwards logs to a Loki logging backend. -lokistack:: Forwards logs to the logging supported combination of Loki and web proxy with Red Hat OpenShift Container Platform authentication integration. LokiStack's proxy uses Red Hat OpenShift Container Platform authentication to enforce multi-tenancy -otlp:: Forwards logs using the OpenTelemetry Protocol. -splunk:: Forwards logs to Splunk. -syslog:: Forwards logs to an external syslog server. - -Each output type has its own configuration fields. - -# Configuring OTLP output - -Cluster administrators can use the OpenTelemetry Protocol (OTLP) output to collect and forward logs to OTLP receivers. The OTLP output uses the specification defined by the OpenTelemetry Observability framework to send data over HTTP with JSON encoding. - - -[IMPORTANT] ----- -{FeatureName} is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. -For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. 
----- - -* Create or edit a ClusterLogForwarder custom resource (CR) to enable forwarding using OTLP by adding the following annotation: -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - annotations: - observability.openshift.io/tech-preview-otlp-output: "enabled" 1 - name: clf-otlp -spec: - serviceAccount: - name: - outputs: - - name: otlp - type: otlp - otlp: - tuning: - compression: gzip - deliveryMode: AtLeastOnce - maxRetryDuration: 20 - maxWrite: 10M - minRetryDuration: 5 - url: 2 - pipelines: - - inputRefs: - - application - - infrastructure - - audit - name: otlp-logs - outputRefs: - - otlp -``` - -Use this annotation to enable the OpenTelemetry Protocol (OTLP) output, which is a Technology Preview feature. -This URL must be absolute and is a placeholder for the OTLP endpoint where logs are sent. - - -[NOTE] ----- -The OTLP output uses the OpenTelemetry data model, which is different from the ViaQ data model that is used by other output types. It adheres to the OTLP using OpenTelemetry Semantic Conventions defined by the OpenTelemetry Observability framework. ----- - -## Pipelines - -Pipelines are configured in an array under spec.pipelines. Each pipeline must have a unique name and consists of: - -inputRefs:: Names of inputs whose logs should be forwarded to this pipeline. -outputRefs:: Names of outputs to send logs to. -filterRefs:: (optional) Names of filters to apply. - -The order of filterRefs matters, as they are applied sequentially. Earlier filters can drop messages that will not be processed by later filters. - -## Filters - -Filters are configured in an array under spec.filters. They can match incoming log messages based on the value of structured fields and modify or drop them. - -Administrators can configure the following types of filters: - -## Enabling multi-line exception detection - -Enables multi-line error detection of container logs. - - -[WARNING] ----- -Enabling this feature could have performance implications and may require additional computing resources or alternate logging solutions. ----- - -Log parsers often incorrectly identify separate lines of the same exception as separate exceptions. This leads to extra log entries and an incomplete or inaccurate view of the traced information. - - -```java -java.lang.NullPointerException: Cannot invoke "String.toString()" because "" is null - at testjava.Main.handle(Main.java:47) - at testjava.Main.printMe(Main.java:19) - at testjava.Main.main(Main.java:10) -``` - - -* To enable logging to detect multi-line exceptions and reassemble them into a single log entry, ensure that the ClusterLogForwarder Custom Resource (CR) contains a detectMultilineErrors field under the .spec.filters. - - -```yaml -apiVersion: "observability.openshift.io/v1" -kind: ClusterLogForwarder -metadata: - name: - namespace: -spec: - serviceAccount: - name: - filters: - - name: - type: detectMultilineException - pipelines: - - inputRefs: - - - name: - filterRefs: - - - outputRefs: - - -``` - - -### Details - -When log messages appear as a consecutive sequence forming an exception stack trace, they are combined into a single, unified log record. The first log message’s content is replaced with the concatenated content of all the message fields in the sequence. 
- -The collector supports the following languages: - -* Java -* JS -* Ruby -* Python -* Golang -* PHP -* Dart - -## Configuring content filters to drop unwanted log records - -When the drop filter is configured, the log collector evaluates log streams according to the filters before forwarding. The collector drops unwanted log records that match the specified configuration. - -1. Add a configuration for a filter to the filters spec in the ClusterLogForwarder CR. - -The following example shows how to configure the ClusterLogForwarder CR to drop log records based on regular expressions: -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: -# ... -spec: - serviceAccount: - name: - filters: - - name: - type: drop 1 - drop: 2 - - test: 3 - - field: .kubernetes.labels."foo-bar/baz" 4 - matches: .+ 5 - - field: .kubernetes.pod_name - notMatches: "my-pod" 6 - pipelines: - - name: 7 - filterRefs: [""] -# ... -``` - -Specifies the type of filter. The drop filter drops log records that match the filter configuration. -Specifies configuration options for applying the drop filter. -Specifies the configuration for tests that are used to evaluate whether a log record is dropped. -* If all the conditions specified for a test are true, the test passes and the log record is dropped. -* When multiple tests are specified for the drop filter configuration, if any of the tests pass, the record is dropped. -* If there is an error evaluating a condition, for example, the field is missing from the log record being evaluated, that condition evaluates to false. -Specifies a dot-delimited field path, which is a path to a field in the log record. The path can contain alpha-numeric characters and underscores (a-zA-Z0-9_), for example, .kubernetes.namespace_name. If segments contain characters outside of this range, the segment must be in quotes, for example, .kubernetes.labels."foo.bar-bar/baz". You can include multiple field paths in a single test configuration, but they must all evaluate to true for the test to pass and the drop filter to be applied. -Specifies a regular expression. If log records match this regular expression, they are dropped. You can set either the matches or notMatches condition for a single field path, but not both. -Specifies a regular expression. If log records do not match this regular expression, they are dropped. You can set either the matches or notMatches condition for a single field path, but not both. -Specifies the pipeline that the drop filter is applied to. -2. Apply the ClusterLogForwarder CR by running the following command: - -```terminal -$ oc apply -f .yaml -``` - - -The following additional example shows how you can configure the drop filter to only keep higher priority log records: - - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: -# ... -spec: - serviceAccount: - name: - filters: - - name: important - type: drop - drop: - - test: - - field: .message - notMatches: "(?i)critical|error" - - field: .level - matches: "info|warning" -# ... -``` - - -In addition to including multiple field paths in a single test configuration, you can also include additional tests that are treated as OR checks. In the following example, records are dropped if either test configuration evaluates to true. 
However, for the second test configuration, both field specs must be true for it to be evaluated to true: - - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: -# ... -spec: - serviceAccount: - name: - filters: - - name: important - type: drop - drop: - - test: - - field: .kubernetes.namespace_name - matches: "^open" - - test: - - field: .log_type - matches: "application" - - field: .kubernetes.pod_name - notMatches: "my-pod" -# ... -``` - - -## Overview of API audit filter - -OpenShift API servers generate audit events for each API call, detailing the request, response, and the identity of the requester, leading to large volumes of data. The API Audit filter uses rules to enable the exclusion of non-essential events and the reduction of event size, facilitating a more manageable audit trail. Rules are checked in order, and checking stops at the first match. The amount of data that is included in an event is determined by the value of the level field: - -* None: The event is dropped. -* Metadata: Audit metadata is included, request and response bodies are removed. -* Request: Audit metadata and the request body are included, the response body is removed. -* RequestResponse: All data is included: metadata, request body and response body. The response body can be very large. For example, oc get pods -A generates a response body containing the YAML description of every pod in the cluster. - -The ClusterLogForwarder custom resource (CR) uses the same format as the standard Kubernetes audit policy, while providing the following additional functions: - -Wildcards:: Names of users, groups, namespaces, and resources can have a leading or trailing * asterisk character. For example, the namespace openshift-\* matches openshift-apiserver or openshift-authentication. Resource \*/status matches Pod/status or Deployment/status. -Default Rules:: Events that do not match any rule in the policy are filtered as follows: -* Read-only system events such as get, list, and watch are dropped. -* Service account write events that occur within the same namespace as the service account are dropped. -* All other events are forwarded, subject to any configured rate limits. - -To disable these defaults, either end your rules list with a rule that has only a level field or add an empty rule. - -Omit Response Codes:: A list of integer status codes to omit. You can drop events based on the HTTP status code in the response by using the OmitResponseCodes field, which lists HTTP status codes for which no events are created. The default value is [404, 409, 422, 429]. If the value is an empty list, [], then no status codes are omitted. - -The ClusterLogForwarder CR audit policy acts in addition to the Red Hat OpenShift Container Platform audit policy. The ClusterLogForwarder CR audit filter changes what the log collector forwards and provides the ability to filter by verb, user, group, namespace, or resource. You can create multiple filters to send different summaries of the same audit stream to different places. For example, you can send a detailed stream to the local cluster log store and a less detailed stream to a remote site. - - -[NOTE] ----- -You must have a cluster role collect-audit-logs to collect the audit logs. The following example provided is intended to illustrate the range of rules possible in an audit policy and is not a recommended configuration. 
----- - - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - name: - namespace: -spec: - serviceAccount: - name: - pipelines: - - name: my-pipeline - inputRefs: audit 1 - filterRefs: my-policy 2 - filters: - - name: my-policy - type: kubeAPIAudit - kubeAPIAudit: - # Don't generate audit events for all requests in RequestReceived stage. - omitStages: - - "RequestReceived" - - rules: - # Log pod changes at RequestResponse level - - level: RequestResponse - resources: - - group: "" - resources: ["pods"] - - # Log "pods/log", "pods/status" at Metadata level - - level: Metadata - resources: - - group: "" - resources: ["pods/log", "pods/status"] - - # Don't log requests to a configmap called "controller-leader" - - level: None - resources: - - group: "" - resources: ["configmaps"] - resourceNames: ["controller-leader"] - - # Don't log watch requests by the "system:kube-proxy" on endpoints or services - - level: None - users: ["system:kube-proxy"] - verbs: ["watch"] - resources: - - group: "" # core API group - resources: ["endpoints", "services"] - - # Don't log authenticated requests to certain non-resource URL paths. - - level: None - userGroups: ["system:authenticated"] - nonResourceURLs: - - "/api*" # Wildcard matching. - - "/version" - - # Log the request body of configmap changes in kube-system. - - level: Request - resources: - - group: "" # core API group - resources: ["configmaps"] - # This rule only applies to resources in the "kube-system" namespace. - # The empty string "" can be used to select non-namespaced resources. - namespaces: ["kube-system"] - - # Log configmap and secret changes in all other namespaces at the Metadata level. - - level: Metadata - resources: - - group: "" # core API group - resources: ["secrets", "configmaps"] - - # Log all other resources in core and extensions at the Request level. - - level: Request - resources: - - group: "" # core API group - - group: "extensions" # Version of group should NOT be included. - - # A catch-all rule to log all other requests at the Metadata level. - - level: Metadata -``` - - -The log types that are collected. The value for this field can be audit for audit logs, application for application logs, infrastructure for infrastructure logs, or a named input that has been defined for your application. -The name of your audit policy. - -## Filtering application logs at input by including the label expressions or a matching label key and values - -You can include the application logs based on the label expressions or a matching label key and its values by using the input selector. - -1. Add a configuration for a filter to the input spec in the ClusterLogForwarder CR. - -The following example shows how to configure the ClusterLogForwarder CR to include logs based on label expressions or matched label key/values: -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -# ... -spec: - serviceAccount: - name: - inputs: - - name: mylogs - application: - selector: - matchExpressions: - - key: env 1 - operator: In 2 - values: ["prod", "qa"] 3 - - key: zone - operator: NotIn - values: ["east", "west"] - matchLabels: 4 - app: one - name: app1 - type: application -# ... -``` - -Specifies the label key to match. -Specifies the operator. Valid values include: In, NotIn, Exists, and DoesNotExist. -Specifies an array of string values. If the operator value is either Exists or DoesNotExist, the value array must be empty. 
-Specifies an exact key or value mapping. -2. Apply the ClusterLogForwarder CR by running the following command: - -```terminal -$ oc apply -f .yaml -``` - - -## Configuring content filters to prune log records - -When the prune filter is configured, the log collector evaluates log streams according to the filters before forwarding. The collector prunes log records by removing low value fields such as pod annotations. - -1. Add a configuration for a filter to the prune spec in the ClusterLogForwarder CR. - -The following example shows how to configure the ClusterLogForwarder CR to prune log records based on field paths: - -[IMPORTANT] ----- -If both are specified, records are pruned based on the notIn array first, which takes precedence over the in array. After records have been pruned by using the notIn array, they are then pruned by using the in array. ----- -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: -# ... -spec: - serviceAccount: - name: - filters: - - name: - type: prune 1 - prune: 2 - in: [.kubernetes.annotations, .kubernetes.namespace_id] 3 - notIn: [.kubernetes,.log_type,.message,."@timestamp"] 4 - pipelines: - - name: 5 - filterRefs: [""] -# ... -``` - -Specify the type of filter. The prune filter prunes log records by configured fields. -Specify configuration options for applying the prune filter. The in and notIn fields are specified as arrays of dot-delimited field paths, which are paths to fields in log records. These paths can contain alpha-numeric characters and underscores (a-zA-Z0-9_), for example, .kubernetes.namespace_name. If segments contain characters outside of this range, the segment must be in quotes, for example, .kubernetes.labels."foo.bar-bar/baz". -Optional: Any fields that are specified in this array are removed from the log record. -Optional: Any fields that are not specified in this array are removed from the log record. -Specify the pipeline that the prune filter is applied to. - -[NOTE] ----- -The filters exempts the log_type, .log_source, and .message fields. ----- -2. Apply the ClusterLogForwarder CR by running the following command: - -```terminal -$ oc apply -f .yaml -``` - - -# Filtering the audit and infrastructure log inputs by source - -You can define the list of audit and infrastructure sources to collect the logs by using the input selector. - -1. Add a configuration to define the audit and infrastructure sources in the ClusterLogForwarder CR. - -The following example shows how to configure the ClusterLogForwarder CR to define audit and infrastructure sources: -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -# ... -spec: - serviceAccount: - name: - inputs: - - name: mylogs1 - type: infrastructure - infrastructure: - sources: 1 - - node - - name: mylogs2 - type: audit - audit: - sources: 2 - - kubeAPI - - openshiftAPI - - ovn -# ... -``` - -Specifies the list of infrastructure sources to collect. The valid sources include: -* node: Journal log from the node -* container: Logs from the workloads deployed in the namespaces -Specifies the list of audit sources to collect. The valid sources include: -* kubeAPI: Logs from the Kubernetes API servers -* openshiftAPI: Logs from the OpenShift API servers -* auditd: Logs from a node auditd service -* ovn: Logs from an open virtual network service -2. 
Apply the ClusterLogForwarder CR by running the following command:
-
-```terminal
-$ oc apply -f .yaml
-```
-
-
-# Filtering application logs at input by including or excluding the namespace or container name
-
-You can include or exclude the application logs based on the namespace and container name by using the input selector.
-
-1. Add a configuration to include or exclude the namespace and container names in the ClusterLogForwarder CR.
-
-The following example shows how to configure the ClusterLogForwarder CR to include or exclude namespaces and container names:
-Example ClusterLogForwarder CR
-
-```yaml
-apiVersion: observability.openshift.io/v1
-kind: ClusterLogForwarder
-# ...
-spec:
-  serviceAccount:
-    name:
-  inputs:
-    - name: mylogs
-      application:
-        includes:
-          - namespace: "my-project" 1
-            container: "my-container" 2
-        excludes:
-          - container: "other-container*" 3
-            namespace: "other-namespace" 4
-      type: application
-# ...
-```
-
-Specifies that the logs are only collected from these namespaces.
-Specifies that the logs are only collected from these containers.
-Specifies the pattern of namespaces to ignore when collecting the logs.
-Specifies the set of containers to ignore when collecting the logs.
-
-[NOTE]
-----
-The excludes field takes precedence over the includes field.
-----
-2. Apply the ClusterLogForwarder CR by running the following command:
-
-```terminal
-$ oc apply -f .yaml
-```
diff --git a/ocp-product-docs-plaintext/4.18/observability/logging/logging-6.1/log6x-configuring-lokistack-otlp-6.1.txt b/ocp-product-docs-plaintext/4.18/observability/logging/logging-6.1/log6x-configuring-lokistack-otlp-6.1.txt
deleted file mode 100644
index d9bc000e..00000000
--- a/ocp-product-docs-plaintext/4.18/observability/logging/logging-6.1/log6x-configuring-lokistack-otlp-6.1.txt
+++ /dev/null
@@ -1,180 +0,0 @@
-# OTLP data ingestion in Loki
-
-
-You can use an API endpoint by using the OpenTelemetry Protocol (OTLP) with Logging 6.1. Because OTLP is a standardized format that is not specifically designed for Loki, it requires an additional Loki configuration to map the OpenTelemetry data format to the Loki data model. OTLP lacks concepts such as stream labels or structured metadata. Instead, OTLP provides metadata about log entries as attributes, grouped into the following three categories:
-* Resource
-* Scope
-* Log
-You can set metadata for multiple entries simultaneously or individually as needed.
-
-# Configuring LokiStack for OTLP data ingestion
-
-
-[IMPORTANT]
-----
-OTLP data ingestion in Loki is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
-For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
-----
-
-To configure a LokiStack custom resource (CR) for OTLP ingestion, follow these steps:
-
-* Ensure that your Loki setup supports structured metadata, introduced in schema version 13 to enable OTLP log ingestion.
-
-1. Set the schema version:
-* When creating a new LokiStack CR, set version: v13 in the storage schema configuration.
-
-[NOTE]
-----
-For existing configurations, add a new schema entry with version: v13 and an effectiveDate in the future.
For more information on updating schema versions, see Upgrading Schemas (Grafana documentation). ----- -2. Configure the storage schema as follows: -Example configure storage schema - -```yaml -# ... -spec: - storage: - schemas: - - version: v13 - effectiveDate: 2024-10-25 -``` - - -Once the effectiveDate has passed, the v13 schema takes effect, enabling your LokiStack to store structured metadata. - -# Attribute mapping - -When you set the Loki Operator to the openshift-logging mode, Loki Operator automatically applies a default set of attribute mappings. These mappings align specific OTLP attributes with stream labels and structured metadata of Loki. - -For typical setups, these default mappings are sufficient. However, you might need to customize attribute mapping in the following cases: - -* Using a custom collector: If your setup includes a custom collector that generates additional attributes, consider customizing the mapping to ensure these attributes are retained in Loki. -* Adjusting attribute detail levels: If the default attribute set is more detailed than necessary, you can reduce it to essential attributes only. This can avoid excessive data storage and streamline the logging process. - - -[IMPORTANT] ----- -Attributes that are not mapped to either stream labels or structured metadata are not stored in Loki. ----- - -## Custom attribute mapping for OpenShift - -When using the Loki Operator in openshift-logging mode, attribute mapping follow OpenShift default values, but you can configure custom mappings to adjust default values. -In the openshift-logging mode, you can configure custom attribute mappings globally for all tenants or for individual tenants as needed. When you define custom mappings, they are appended to the OpenShift default values. If you do not need default labels, you can disable them in the tenant configuration. - - -[NOTE] ----- -A major difference between the Loki Operator and Loki lies in inheritance handling. Loki copies only default_resource_attributes_as_index_labels to tenants by default, while the Loki Operator applies the entire global configuration to each tenant in the openshift-logging mode. ----- - -Within LokiStack, attribute mapping configuration is managed through the limits setting. See the following example LokiStack configuration: - - -```yaml -# ... -spec: - limits: - global: - otlp: {} 1 - tenants: - application: - otlp: {} 2 -``` - - -Defines global OTLP attribute configuration. -OTLP attribute configuration for the application tenant within openshift-logging mode. - - -[NOTE] ----- -Both global and per-tenant OTLP configurations can map attributes to stream labels or structured metadata. At least one stream label is required to save a log entry to Loki storage, so ensure this configuration meets that requirement. ----- - -Stream labels derive only from resource-level attributes, which the LokiStack resource structure reflects: - - -```yaml -spec: - limits: - global: - otlp: - streamLabels: - resourceAttributes: - - name: "k8s.namespace.name" - - name: "k8s.pod.name" - - name: "k8s.container.name" -``` - - -Structured metadata, in contrast, can be generated from resource, scope or log-level attributes: - - -```yaml -# ... -spec: - limits: - global: - otlp: - streamLabels: -# ... 
- structuredMetadata:
-      resourceAttributes:
-      - name: "process.command_line"
-      - name: "k8s\\.pod\\.labels\\..+"
-        regex: true
-      scopeAttributes:
-      - name: "service.name"
-      logAttributes:
-      - name: "http.route"
-```
-
-
-[TIP]
-----
-Use regular expressions by setting regex: true for attribute names when mapping similar attributes in Loki.
-----
-
-
-[IMPORTANT]
-----
-Avoid using regular expressions for stream labels, as this can increase data volume.
-----
-
-## Customizing OpenShift defaults
-
-In openshift-logging mode, certain attributes are required and cannot be removed from the configuration due to their role in OpenShift functions. Other attributes, labeled recommended, might be disabled if performance is impacted.
-
-When using the openshift-logging mode without custom attributes, you can achieve immediate compatibility with OpenShift tools. If additional attributes are needed as stream labels or structured metadata, use custom configuration. Custom configurations can merge with default configurations.
-
-## Removing recommended attributes
-
-To reduce default attributes in openshift-logging mode, disable recommended attributes:
-
-
-```yaml
-# ...
-spec:
-  tenants:
-    mode: openshift-logging
-    openshift:
-      otlp:
-        disableRecommendedAttributes: true 1
-```
-
-
-Set disableRecommendedAttributes: true to remove recommended attributes, which limits default attributes to the required attributes.
-
-
-[NOTE]
-----
-This option is beneficial if the default attributes cause performance or storage issues. This setting might negatively impact query performance, as it removes default stream labels. You should pair this option with a custom attribute configuration to retain attributes essential for queries.
-----
-
-# Additional resources
-
-* Loki labels
-* Structured metadata
-* OpenTelemetry attribute
\ No newline at end of file
diff --git a/ocp-product-docs-plaintext/4.18/observability/logging/logging-6.1/log6x-loki-6.1.txt b/ocp-product-docs-plaintext/4.18/observability/logging/logging-6.1/log6x-loki-6.1.txt
deleted file mode 100644
index 9374cfe2..00000000
--- a/ocp-product-docs-plaintext/4.18/observability/logging/logging-6.1/log6x-loki-6.1.txt
+++ /dev/null
@@ -1,764 +0,0 @@
-# Storing logs with LokiStack
-
-
-You can configure a LokiStack CR to store application, audit, and infrastructure-related logs.
-Loki is a horizontally scalable, highly available, multi-tenant log aggregation system offered as a GA log store for logging for Red Hat OpenShift that can be visualized with the OpenShift Observability UI. The Loki configuration provided by OpenShift Logging is a short-term log store designed to enable users to perform fast troubleshooting with the collected logs. For that purpose, the logging for Red Hat OpenShift configuration of Loki has short-term storage, and is optimized for very recent queries. For long-term storage or queries over a long time period, users should look to log stores external to their cluster.
-
-# Loki deployment sizing
-
-Sizing for Loki follows the format of 1x.<size>, for example 1x.pico, where the value 1x is the number of instances and the size value specifies performance capabilities.
-
-The 1x.pico configuration defines a single Loki deployment with minimal resource and limit requirements, offering high availability (HA) support for all Loki components. This configuration is suited for deployments that do not require a single replication factor or auto-compaction.
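-
-For reference, the following minimal LokiStack CR sketch selects this size through the spec.size field. The metadata names, object storage secret, and storage class mirror the examples later in this document and are assumptions for your environment:
-
-```yaml
-apiVersion: loki.grafana.com/v1
-kind: LokiStack
-metadata:
-  name: logging-loki
-  namespace: openshift-logging
-spec:
-  size: 1x.pico               # deployment size; the 1x prefix cannot be changed
-  storage:
-    schemas:
-    - effectiveDate: "2024-10-25"
-      version: v13
-    secret:
-      name: logging-loki-s3   # assumed object storage secret
-      type: aws
-  storageClassName: gp3-csi   # assumed storage class
-  tenants:
-    mode: openshift-logging
-```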
- -Disk requests are similar across size configurations, allowing customers to test different sizes to determine the best fit for their deployment needs. - - -[IMPORTANT] ----- -It is not possible to change the number 1x for the deployment size. ----- - - - -# Prerequisites - -* You have installed the Loki Operator by using the CLI or web console. -* You have a serviceAccount in the same namespace in which you create the ClusterLogForwarder. -* The serviceAccount is assigned collect-audit-logs, collect-application-logs, and collect-infrastructure-logs cluster roles. - -# Core Setup and Configuration - -Role-based access controls, basic monitoring, and pod placement to deploy Loki. - -# Authorizing LokiStack rules RBAC permissions - -Administrators can allow users to create and manage their own alerting and recording rules by binding cluster roles to usernames. -Cluster roles are defined as ClusterRole objects that contain necessary role-based access control (RBAC) permissions for users. - -The following cluster roles for alerting and recording rules are available for LokiStack: - - - -## Examples - -To apply cluster roles for a user, you must bind an existing cluster role to a specific username. - -Cluster roles can be cluster or namespace scoped, depending on which type of role binding you use. -When a RoleBinding object is used, as when using the oc adm policy add-role-to-user command, the cluster role only applies to the specified namespace. -When a ClusterRoleBinding object is used, as when using the oc adm policy add-cluster-role-to-user command, the cluster role applies to all namespaces in the cluster. - -The following example command gives the specified user create, read, update and delete (CRUD) permissions for alerting rules in a specific namespace in the cluster: - - -```terminal -$ oc adm policy add-role-to-user alertingrules.loki.grafana.com-v1-admin -n -``` - - -The following command gives the specified user administrator permissions for alerting rules in all namespaces: - - -```terminal -$ oc adm policy add-cluster-role-to-user alertingrules.loki.grafana.com-v1-admin -``` - - -# Creating a log-based alerting rule with Loki - -The AlertingRule CR contains a set of specifications and webhook validation definitions to declare groups of alerting rules for a single LokiStack instance. In addition, the webhook validation definition provides support for rule validation conditions: - -* If an AlertingRule CR includes an invalid interval period, it is an invalid alerting rule -* If an AlertingRule CR includes an invalid for period, it is an invalid alerting rule. -* If an AlertingRule CR includes an invalid LogQL expr, it is an invalid alerting rule. -* If an AlertingRule CR includes two groups with the same name, it is an invalid alerting rule. -* If none of the above applies, an alerting rule is considered valid. - - - -1. 
Create an AlertingRule custom resource (CR): -Example infrastructure AlertingRule CR - -```yaml - apiVersion: loki.grafana.com/v1 - kind: AlertingRule - metadata: - name: loki-operator-alerts - namespace: openshift-operators-redhat 1 - labels: 2 - openshift.io/: "true" - spec: - tenantID: "infrastructure" 3 - groups: - - name: LokiOperatorHighReconciliationError - rules: - - alert: HighPercentageError - expr: | 4 - sum(rate({kubernetes_namespace_name="openshift-operators-redhat", kubernetes_pod_name=~"loki-operator-controller-manager.*"} |= "error" [1m])) by (job) - / - sum(rate({kubernetes_namespace_name="openshift-operators-redhat", kubernetes_pod_name=~"loki-operator-controller-manager.*"}[1m])) by (job) - > 0.01 - for: 10s - labels: - severity: critical 5 - annotations: - summary: High Loki Operator Reconciliation Errors 6 - description: High Loki Operator Reconciliation Errors 7 -``` - -The namespace where this AlertingRule CR is created must have a label matching the LokiStack spec.rules.namespaceSelector definition. -The labels block must match the LokiStack spec.rules.selector definition. -AlertingRule CRs for infrastructure tenants are only supported in the openshift-*, kube-\*, or default namespaces. -The value for kubernetes_namespace_name: must match the value for metadata.namespace. -The value of this mandatory field must be critical, warning, or info. -This field is mandatory. -This field is mandatory. -Example application AlertingRule CR - -```yaml - apiVersion: loki.grafana.com/v1 - kind: AlertingRule - metadata: - name: app-user-workload - namespace: app-ns 1 - labels: 2 - openshift.io/: "true" - spec: - tenantID: "application" - groups: - - name: AppUserWorkloadHighError - rules: - - alert: - expr: | 3 - sum(rate({kubernetes_namespace_name="app-ns", kubernetes_pod_name=~"podName.*"} |= "error" [1m])) by (job) - for: 10s - labels: - severity: critical 4 - annotations: - summary: 5 - description: 6 -``` - -The namespace where this AlertingRule CR is created must have a label matching the LokiStack spec.rules.namespaceSelector definition. -The labels block must match the LokiStack spec.rules.selector definition. -Value for kubernetes_namespace_name: must match the value for metadata.namespace. -The value of this mandatory field must be critical, warning, or info. -The value of this mandatory field is a summary of the rule. -The value of this mandatory field is a detailed description of the rule. -2. Apply the AlertingRule CR: - -```terminal -$ oc apply -f .yaml -``` - - -# Configuring Loki to tolerate memberlist creation failure - -In an Red Hat OpenShift Container Platform cluster, administrators generally use a non-private IP network range. As a result, the LokiStack memberlist configuration fails because, by default, it only uses private IP networks. - -As an administrator, you can select the pod network for the memberlist configuration. You can modify the LokiStack custom resource (CR) to use the podIP address in the hashRing spec. To configure the LokiStack CR, use the following command: - - -```terminal -$ oc patch LokiStack logging-loki -n openshift-logging --type=merge -p '{"spec": {"hashRing":{"memberlist":{"instanceAddrType":"podIP"},"type":"memberlist"}}}' -``` - - - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: -# ... - hashRing: - type: memberlist - memberlist: - instanceAddrType: podIP -# ... 
-``` - - -# Enabling stream-based retention with Loki - -You can configure retention policies based on log streams. Rules for these may be set globally, per-tenant, or both. If you configure both, tenant rules apply before global rules. - - -[IMPORTANT] ----- -If there is no retention period defined on the s3 bucket or in the LokiStack custom resource (CR), then the logs are not pruned and they stay in the s3 bucket forever, which might fill up the s3 storage. ----- - - -[NOTE] ----- -Schema v13 is recommended. ----- - -1. Create a LokiStack CR: -* Enable stream-based retention globally as shown in the following example: -Example global stream-based retention for AWS - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: - limits: - global: 1 - retention: 2 - days: 20 - streams: - - days: 4 - priority: 1 - selector: '{kubernetes_namespace_name=~"test.+"}' 3 - - days: 1 - priority: 1 - selector: '{log_type="infrastructure"}' - managementState: Managed - replicationFactor: 1 - size: 1x.small - storage: - schemas: - - effectiveDate: "2020-10-11" - version: v13 - secret: - name: logging-loki-s3 - type: aws - storageClassName: gp3-csi - tenants: - mode: openshift-logging -``` - -Sets retention policy for all log streams. Note: This field does not impact the retention period for stored logs in object storage. -Retention is enabled in the cluster when this block is added to the CR. -Contains the LogQL query used to define the log stream.spec: -limits: -* Enable stream-based retention per-tenant basis as shown in the following example: -Example per-tenant stream-based retention for AWS - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: - limits: - global: - retention: - days: 20 - tenants: 1 - application: - retention: - days: 1 - streams: - - days: 4 - selector: '{kubernetes_namespace_name=~"test.+"}' 2 - infrastructure: - retention: - days: 5 - streams: - - days: 1 - selector: '{kubernetes_namespace_name=~"openshift-cluster.+"}' - managementState: Managed - replicationFactor: 1 - size: 1x.small - storage: - schemas: - - effectiveDate: "2020-10-11" - version: v13 - secret: - name: logging-loki-s3 - type: aws - storageClassName: gp3-csi - tenants: - mode: openshift-logging -``` - -Sets retention policy by tenant. Valid tenant types are application, audit, and infrastructure. -Contains the LogQL query used to define the log stream. -2. Apply the LokiStack CR: - -```terminal -$ oc apply -f .yaml -``` - - -[NOTE] ----- -This is not for managing the retention for stored logs. Global retention periods for stored logs to a supported maximum of 30 days is configured with your object storage. ----- - -# Loki pod placement - -You can control which nodes the Loki pods run on, and prevent other workloads from using those nodes, by using tolerations or node selectors on the pods. - -You can apply tolerations to the log store pods with the LokiStack custom resource (CR) and apply taints to a node with the node specification. A taint on a node is a key:value pair that instructs the node to repel all pods that do not allow the taint. Using a specific key:value pair that is not on other pods ensures that only the log store pods can run on that node. - - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: -# ... 
- template: - compactor: 1 - nodeSelector: - node-role.kubernetes.io/infra: "" 2 - distributor: - nodeSelector: - node-role.kubernetes.io/infra: "" - gateway: - nodeSelector: - node-role.kubernetes.io/infra: "" - indexGateway: - nodeSelector: - node-role.kubernetes.io/infra: "" - ingester: - nodeSelector: - node-role.kubernetes.io/infra: "" - querier: - nodeSelector: - node-role.kubernetes.io/infra: "" - queryFrontend: - nodeSelector: - node-role.kubernetes.io/infra: "" - ruler: - nodeSelector: - node-role.kubernetes.io/infra: "" -# ... -``` - - -Specifies the component pod type that applies to the node selector. -Specifies the pods that are moved to nodes containing the defined label. - - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: -# ... - template: - compactor: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - distributor: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - indexGateway: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - ingester: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - querier: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - queryFrontend: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - ruler: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - gateway: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved -# ... -``` - - -To configure the nodeSelector and tolerations fields of the LokiStack (CR), you can use the oc explain command to view the description and fields for a particular resource: - - -```terminal -$ oc explain lokistack.spec.template -``` - - - -```text -KIND: LokiStack -VERSION: loki.grafana.com/v1 - -RESOURCE: template - -DESCRIPTION: - Template defines the resource/limits/tolerations/nodeselectors per - component - -FIELDS: - compactor - Compactor defines the compaction component spec. - - distributor - Distributor defines the distributor component spec. -... 
-``` - - -For more detailed information, you can add a specific field: - - -```terminal -$ oc explain lokistack.spec.template.compactor -``` - - - -```text -KIND: LokiStack -VERSION: loki.grafana.com/v1 - -RESOURCE: compactor - -DESCRIPTION: - Compactor defines the compaction component spec. - -FIELDS: - nodeSelector - NodeSelector defines the labels required by a node to schedule the - component onto it. -... -``` - - -# Enhanced Reliability and Performance - -Configurations to ensure Loki’s reliability and efficiency in production. - -# Enabling authentication to cloud-based log stores using short-lived tokens - -Workload identity federation enables authentication to cloud-based log stores using short-lived tokens. - -* Use one of the following options to enable authentication: -* If you use the Red Hat OpenShift Container Platform web console to install the Loki Operator, clusters that use short-lived tokens are automatically detected. You are prompted to create roles and supply the data required for the Loki Operator to create a CredentialsRequest object, which populates a secret. -* If you use the OpenShift CLI (`oc`) to install the Loki Operator, you must manually create a Subscription object using the appropriate template for your storage provider, as shown in the following examples. This authentication strategy is only supported for the storage providers indicated. -Example Azure sample subscription - -```yaml -apiVersion: operators.coreos.com/v1alpha1 -kind: Subscription -metadata: - name: loki-operator - namespace: openshift-operators-redhat -spec: - channel: "stable-6.0" - installPlanApproval: Manual - name: loki-operator - source: redhat-operators - sourceNamespace: openshift-marketplace - config: - env: - - name: CLIENTID - value: - - name: TENANTID - value: - - name: SUBSCRIPTIONID - value: - - name: REGION - value: -``` - -Example AWS sample subscription - -```yaml -apiVersion: operators.coreos.com/v1alpha1 -kind: Subscription -metadata: - name: loki-operator - namespace: openshift-operators-redhat -spec: - channel: "stable-6.0" - installPlanApproval: Manual - name: loki-operator - source: redhat-operators - sourceNamespace: openshift-marketplace - config: - env: - - name: ROLEARN - value: -``` - - -# Configuring Loki to tolerate node failure - -The Loki Operator supports setting pod anti-affinity rules to request that pods of the same component are scheduled on different available nodes in the cluster. - -Affinity is a property of pods that controls the nodes on which they prefer to be scheduled. Anti-affinity is a property of pods -that prevents a pod from being scheduled on a node. - -In Red Hat OpenShift Container Platform, pod affinity and pod anti-affinity allow you to constrain which nodes your pod is eligible to be scheduled on based on the key-value labels on other pods. - -The Operator sets default, preferred podAntiAffinity rules for all Loki components, which includes the compactor, distributor, gateway, indexGateway, ingester, querier, queryFrontend, and ruler components. - -You can override the preferred podAntiAffinity settings for Loki components by configuring required settings in the requiredDuringSchedulingIgnoredDuringExecution field: - - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: -# ... - template: - ingester: - podAntiAffinity: - # ... 
- requiredDuringSchedulingIgnoredDuringExecution: 1 - - labelSelector: - matchLabels: 2 - app.kubernetes.io/component: ingester - topologyKey: kubernetes.io/hostname -# ... -``` - - -The stanza to define a required rule. -The key-value pair (label) that must be matched to apply the rule. - -# LokiStack behavior during cluster restarts - -When an Red Hat OpenShift Container Platform cluster is restarted, LokiStack ingestion and the query path continue to operate within the available CPU and memory resources available for the node. This means that there is no downtime for the LokiStack during Red Hat OpenShift Container Platform cluster updates. This behavior is achieved by using PodDisruptionBudget resources. The Loki Operator provisions PodDisruptionBudget resources for Loki, which determine the minimum number of pods that must be available per component to ensure normal operations under certain conditions. - -# Advanced Deployment and Scalability - -Specialized configurations for high availability, scalability, and error handling. - -# Zone aware data replication - -The Loki Operator offers support for zone-aware data replication through pod topology spread constraints. Enabling this feature enhances reliability and safeguards against log loss in the event of a single zone failure. When configuring the deployment size as 1x.extra-small, 1x.small, or 1x.medium, the replication.factor field is automatically set to 2. - -To ensure proper replication, you need to have at least as many availability zones as the replication factor specifies. While it is possible to have more availability zones than the replication factor, having fewer zones can lead to write failures. Each zone should host an equal number of instances for optimal operation. - - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: - replicationFactor: 2 1 - replication: - factor: 2 2 - zones: - - maxSkew: 1 3 - topologyKey: topology.kubernetes.io/zone 4 -``` - - -Deprecated field, values entered are overwritten by replication.factor. -This value is automatically set when deployment size is selected at setup. -The maximum difference in number of pods between any two topology domains. The default is 1, and you cannot specify a value of 0. -Defines zones in the form of a topology key that corresponds to a node label. - -# Recovering Loki pods from failed zones - -In Red Hat OpenShift Container Platform a zone failure happens when specific availability zone resources become inaccessible. Availability zones are isolated areas within a cloud provider’s data center, aimed at enhancing redundancy and fault tolerance. If your Red Hat OpenShift Container Platform cluster is not configured to handle this, a zone failure can lead to service or data loss. - -Loki pods are part of a StatefulSet, and they come with Persistent Volume Claims (PVCs) provisioned by a StorageClass object. Each Loki pod and its PVCs reside in the same zone. When a zone failure occurs in a cluster, the StatefulSet controller automatically attempts to recover the affected pods in the failed zone. - - -[WARNING] ----- -The following procedure will delete the PVCs in the failed zone, and all data contained therein. To avoid complete data loss the replication factor field of the LokiStack CR should always be set to a value greater than 1 to ensure that Loki is replicating. ----- - -* Verify your LokiStack CR has a replication factor greater than 1. 
-* Zone failure detected by the control plane, and nodes in the failed zone are marked by cloud provider integration. - -The StatefulSet controller automatically attempts to reschedule pods in a failed zone. Because the associated PVCs are also in the failed zone, automatic rescheduling to a different zone does not work. You must manually delete the PVCs in the failed zone to allow successful re-creation of the stateful Loki Pod and its provisioned PVC in the new zone. - -1. List the pods in Pending status by running the following command: - -```terminal -$ oc get pods --field-selector status.phase==Pending -n openshift-logging -``` - -Example oc get pods output - -```terminal -NAME READY STATUS RESTARTS AGE 1 -logging-loki-index-gateway-1 0/1 Pending 0 17m -logging-loki-ingester-1 0/1 Pending 0 16m -logging-loki-ruler-1 0/1 Pending 0 16m -``` - -These pods are in Pending status because their corresponding PVCs are in the failed zone. -2. List the PVCs in Pending status by running the following command: - -```terminal -$ oc get pvc -o=json -n openshift-logging | jq '.items[] | select(.status.phase == "Pending") | .metadata.name' -r -``` - -Example oc get pvc output - -```terminal -storage-logging-loki-index-gateway-1 -storage-logging-loki-ingester-1 -wal-logging-loki-ingester-1 -storage-logging-loki-ruler-1 -wal-logging-loki-ruler-1 -``` - -3. Delete the PVC(s) for a pod by running the following command: - -```terminal -$ oc delete pvc -n openshift-logging -``` - -4. Delete the pod(s) by running the following command: - -```terminal -$ oc delete pod -n openshift-logging -``` - - -Once these objects have been successfully deleted, they should automatically be rescheduled in an available zone. - -## Troubleshooting PVC in a terminating state - -The PVCs might hang in the terminating state without being deleted, if PVC metadata finalizers are set to kubernetes.io/pv-protection. Removing the finalizers should allow the PVCs to delete successfully. - -* Remove the finalizer for each PVC by running the command below, then retry deletion. - -```terminal -$ oc patch pvc -p '{"metadata":{"finalizers":null}}' -n openshift-logging -``` - - -# Troubleshooting Loki rate limit errors - -If the Log Forwarder API forwards a large block of messages that exceeds the rate limit to Loki, Loki generates rate limit (429) errors. - -These errors can occur during normal operation. For example, when adding the logging to a cluster that already has some logs, rate limit errors might occur while the logging tries to ingest all of the existing log entries. In this case, if the rate of addition of new logs is less than the total rate limit, the historical data is eventually ingested, and the rate limit errors are resolved without requiring user intervention. - -In cases where the rate limit errors continue to occur, you can fix the issue by modifying the LokiStack custom resource (CR). - - -[IMPORTANT] ----- -The LokiStack CR is not available on Grafana-hosted Loki. This topic does not apply to Grafana-hosted Loki servers. ----- - -* The Log Forwarder API is configured to forward logs to Loki. -* Your system sends a block of messages that is larger than 2 MB to Loki. For example: - -```text -"values":[["1630410392689800468","{\"kind\":\"Event\",\"apiVersion\":\ -....... -...... -...... -...... 
-\"received_at\":\"2021-08-31T11:46:32.800278+00:00\",\"version\":\"1.7.4 1.6.0\"}},\"@timestamp\":\"2021-08-31T11:46:32.799692+00:00\",\"viaq_index_name\":\"audit-write\",\"viaq_msg_id\":\"MzFjYjJkZjItNjY0MC00YWU4LWIwMTEtNGNmM2E5ZmViMGU4\",\"log_type\":\"audit\"}"]]}]} -``` - -* After you enter oc logs -n openshift-logging -l component=collector, the collector logs in your cluster show a line containing one of the following error messages: - -```text -429 Too Many Requests Ingestion rate limit exceeded -``` - -Example Vector error message - -```text -2023-08-25T16:08:49.301780Z WARN sink{component_kind="sink" component_id=default_loki_infra component_type=loki component_name=default_loki_infra}: vector::sinks::util::retries: Retrying after error. error=Server responded with an error: 429 Too Many Requests internal_log_rate_limit=true -``` - - -The error is also visible on the receiving end. For example, in the LokiStack ingester pod: -Example Loki ingester error message - -```text -level=warn ts=2023-08-30T14:57:34.155592243Z caller=grpc_logging.go:43 duration=1.434942ms method=/logproto.Pusher/Push err="rpc error: code = Code(429) desc = entry with timestamp 2023-08-30 14:57:32.012778399 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream -``` - - -* Update the ingestionBurstSize and ingestionRate fields in the LokiStack CR: - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: - limits: - global: - ingestion: - ingestionBurstSize: 16 1 - ingestionRate: 8 2 -# ... -``` - -The ingestionBurstSize field defines the maximum local rate-limited sample size per distributor replica in MB. This value is a hard limit. Set this value to at least the maximum logs size expected in a single push request. Single requests that are larger than the ingestionBurstSize value are not permitted. -The ingestionRate field is a soft limit on the maximum amount of ingested samples per second in MB. Rate limit errors occur if the rate of logs exceeds the limit, but the collector retries sending the logs. As long as the total average is lower than the limit, the system recovers and errors are resolved without user intervention. \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.18/observability/logging/logging-6.1/log6x-opentelemetry-data-model-6.1.txt b/ocp-product-docs-plaintext/4.18/observability/logging/logging-6.1/log6x-opentelemetry-data-model-6.1.txt deleted file mode 100644 index 71eb6a76..00000000 --- a/ocp-product-docs-plaintext/4.18/observability/logging/logging-6.1/log6x-opentelemetry-data-model-6.1.txt +++ /dev/null @@ -1,81 +0,0 @@ -# OpenTelemetry data model - - -This document outlines the protocol and semantic conventions for Red Hat OpenShift Logging's OpenTelemetry support with Logging 6.1. - -[IMPORTANT] ----- -The OpenTelemetry Protocol (OTLP) output log forwarder is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. -For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. 
----
-
-# Forwarding and ingestion protocol
-
-Red Hat OpenShift Logging collects and forwards logs to OpenTelemetry endpoints by using the OTLP specification. OTLP encodes, transports, and delivers telemetry data. You can also deploy Loki storage, which provides an OTLP endpoint to ingest log streams. This document defines the semantic conventions for the logs collected from various OpenShift cluster sources.
-
-# Semantic conventions
-
-The log collector in this solution gathers the following log streams:
-
-* Container logs
-* Cluster node journal logs
-* Cluster node auditd logs
-* Kubernetes and OpenShift API server logs
-* OpenShift Virtual Network (OVN) logs
-
-You can forward these streams according to the semantic conventions defined by OpenTelemetry semantic attributes. The semantic conventions in OpenTelemetry define a resource as an immutable representation of the entity producing telemetry, identified by attributes. For example, a process running in a container includes attributes such as container_name, cluster_id, pod_name, namespace, and possibly deployment or app_name. These attributes are grouped under the resource object, which helps reduce repetition and optimizes log transmission as telemetry data.
-
-In addition to resource attributes, logs might also contain scope attributes specific to instrumentation libraries and log attributes specific to each log entry. These attributes provide greater detail about each log entry and enhance filtering capabilities when querying logs in storage.
-
-The following sections define the attributes that are generally forwarded.
-
-## Log entry structure
-
-All log streams include the following log data fields:
-
-The Applicable Sources column indicates which log sources each field applies to:
-
-* all: This field is present in all logs.
-* container: This field is present in Kubernetes container logs, both application and infrastructure.
-* audit: This field is present in Kubernetes, OpenShift API, and OVN logs.
-* auditd: This field is present in node auditd logs.
-* journal: This field is present in node journal logs.
-
-
-
-## Attributes
-
-Log entries include a set of resource, scope, and log attributes based on their source, as described in the following table.
-
-The Location column specifies the type of attribute:
-
-* resource: Indicates a resource attribute
-* scope: Indicates a scope attribute
-* log: Indicates a log attribute
-
-The Storage column indicates whether the attribute is stored in a LokiStack using the default openshift-logging mode and specifies where the attribute is stored:
-
-* stream label:
-* Enables efficient filtering and querying based on specific labels.
-* Can be labeled as required if the Loki Operator enforces this attribute in the configuration.
-* structured metadata:
-* Allows for detailed filtering and storage of key-value pairs.
-* Enables users to use direct labels for streamlined queries without requiring JSON parsing.
-
-With OTLP, users can filter queries directly by labels rather than using JSON parsing, improving the speed and efficiency of queries.
-
-
-
-
-[NOTE]
-----
-Attributes marked as Compatibility attribute support minimal backward compatibility with the ViaQ data model. These attributes are deprecated and function as a compatibility layer to ensure continued UI functionality. These attributes will remain supported until the Logging UI fully supports the OpenTelemetry counterparts in future releases.
-----
-
-Loki changes the attribute names when persisting them to storage.
The names will be lowercased, and all characters in the set: (.,/,-) will be replaced by underscores (_). For example, k8s.namespace.name will become k8s_namespace_name. - -# Additional resources - -* Semantic Conventions -* Logs Data Model -* General Logs Attributes \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.18/observability/logging/logging-6.1/log6x-release-notes-6.1.txt b/ocp-product-docs-plaintext/4.18/observability/logging/logging-6.1/log6x-release-notes-6.1.txt deleted file mode 100644 index 0dfa4800..00000000 --- a/ocp-product-docs-plaintext/4.18/observability/logging/logging-6.1/log6x-release-notes-6.1.txt +++ /dev/null @@ -1,222 +0,0 @@ -# Logging 6.1 Release Notes - - - -# Logging 6.1.7 Release Notes - -This release includes RHBA-2025:8143. - -## Bug fixes - -* Before this update, merging data from the message field into the root of a Syslog log event caused the log event to be inconsistent with the ViaQ data model. The inconsistency could lead to overwritten system information, data duplication, or event corruption. This update revises Syslog parsing and merging for the Syslog output to align with other output types, resolving this inconsistency. (LOG-7184) -* Before this update, log forwarding failed if you configured a cluster-wide proxy with a URL containing a username with an encoded "@" symbol; for example "user%40name". This update resolves the issue by adding correct support for URL-encoded values in proxy configurations. (LOG-7187) - -## CVEs - -* CVE-2022-49043 -* CVE-2024-12087 -* CVE-2024-12088 -* CVE-2024-12133 -* CVE-2024-12243 -* CVE-2024-12747 -* CVE-2024-56171 -* CVE-2025-0395 -* CVE-2025-24928 - - -[NOTE] ----- -For detailed information on Red Hat security ratings, review Severity ratings. ----- - -# Logging 6.1.6 Release Notes - -This release includes RHBA-2025:4529. - -## Bug fixes - -* Before this update, collector pods would enter a crash loop due to a configuration error when attempting token-based authentication with an Elasticsearch output. With this update, token authentication with an Elasticsearch output generates a valid configuration. (LOG-7018) -* Before this update, auditd log messages with multiple msg keys could cause errors in collector pods, because the standard auditd log format expects a single msg field per log entry that follows the msg=audit(TIMESTAMP:ID) structure. With this update, only the first msg value is used, which resolves the issue and ensures accurate extraction of audit metadata. (LOG-7029) - -## CVEs - -* CVE-2022-49043 -* CVE-2024-2236 -* CVE-2024-5535 -* CVE-2024-56171 -* CVE-2025-24928 - - -[NOTE] ----- -For detailed information on Red Hat security ratings, review Severity ratings. ----- - -# Logging 6.1.5 Release Notes - -This release includes RHSA-2025:3907. - -## New features and enhancements - -* Before this update, time-based stream sharding was not enabled in Loki, which resulted in Loki being unable to save historical data. With this update, Loki Operator enables time-based stream sharding in Loki, which helps Loki save historical data. (LOG-6991) - -## Bug fixes - -* Before this update, the Vector collector could not forward Open Virtual Network (OVN) and Auditd logs. With this update, the Vector collector can forward OVN and Auditd logs. (LOG-6996) - -## CVEs - -* CVE-2025-30204 - - -[NOTE] ----- -For detailed information on Red Hat security ratings, review Severity ratings. 
----- - -# Logging 6.1.4 Release Notes - -This release includes Logging for Red Hat OpenShift Bug Fix Release 6.1.4. - -## Bug fixes - -* Before this update, Red Hat Managed Elasticsearch failed to receive logs if the index name did not follow the required patterns (app-, infra-, audit-), resulting in an index_not_found_exception error due to a restricted automatic index creation. With this update, improved documentation and explanations in the oc explain obsclf.spec.outputs.elasticsearch.index command clarify the index naming limitations, helping users configure log forwarding correctly. -(LOG-6623) -* Before this update, when you used 1x.pico as the LokiStack size, the number of delete workers was set to zero. This issue occurred because of an error in the Operator that generates the Loki configuration. With this update, the number of delete workers is set to ten. -(LOG-6797) -* Before this update, the Operator failed to update the securitycontextconstraint object required by the log collector, which was a regression from previous releases. With this update, the Operator restores the cluster role to the service account and updates the resource. -(LOG-6816) - -## CVEs - -* CVE-2022-49043 -* CVE-2024-45336 -* CVE-2024-45338 -* CVE-2024-56171 -* CVE-2025-24928 -* CVE-2025-27144 - - -[NOTE] ----- -For detailed information on Red Hat security ratings, review Severity ratings. ----- - -# Logging 6.1.3 Release Notes - -This release includes Logging for Red Hat OpenShift Bug Fix Release 6.1.3. - -## Bug Fixes - -* Before this update, when using the new 1x.pico size with the Loki Operator, the PodDisruptionBudget created for the Ingester pod allowed Kubernetes to evict two of the three Ingester pods. With this update, the Operator now creates a PodDisruptionBudget that allows eviction of only a single Ingester pod. -(LOG-6693) -* Before this update, the Operator did not support templating of syslog facility and severity level, which was consistent with the rest of the API. Instead, the Operator relied upon the 5.x API, which is no longer supported. With this update, the Operator supports templating by adding the required validation to the API and rejecting resources that do not match the required format. -(LOG-6788) -* Before this update, empty OTEL tuning configuration caused a validation error. With this update, the validation rules allow empty OTEL tuning configurations. -(LOG-6532) - -## CVEs - -* CVE-2020-11023 -* CVE-2024-9287 -* CVE-2024-12797 - -# Logging 6.1.2 Release Notes - -This release includes Logging for Red Hat OpenShift Bug Fix Release 6.1.2. - -## New Features and Enhancements - -* This enhancement adds OTel semantic stream labels to the lokiStack output so that you can query logs by using both ViaQ and OTel stream labels. -(LOG-6579) - -## Bug Fixes - -* Before this update, the collector alerting rules contained summary and message fields. With this update, the collector alerting rules contain summary and description fields. -(LOG-6126) -* Before this update, the collector metrics dashboard could get removed after an Operator upgrade due to a race condition during the transition from the old to the new pod deployment. With this update, labels are added to the dashboard ConfigMap to identify the upgraded deployment as the current owner so that it will not be removed. -(LOG-6280) -* Before this update, when you included infrastructure namespaces in application inputs, their log_type would be set to application. 
With this update, the log_type of infrastructure namespaces included in application inputs is set to infrastructure. -(LOG-6373) -* Before this update, the Cluster Logging Operator used a cached client to fetch the SecurityContextConstraint cluster resource, which could result in an error when the cache is invalid. With this update, the Operator now always retrieves data from the API server instead of using a cache. -(LOG-6418) -* Before this update, the logging must-gather did not collect resources such as UIPlugin, ClusterLogForwarder, LogFileMetricExporter, and LokiStack. With this update, the must-gather now collects all of these resources and places them in their respective namespace directory instead of the cluster-logging directory. -(LOG-6422) -* Before this update, the Vector startup script attempted to delete buffer lock files during startup. With this update, the Vector startup script no longer attempts to delete buffer lock files during startup. -(LOG-6506) -* Before this update, the API documentation incorrectly claimed that lokiStack outputs would default the target namespace, which could prevent the collector from writing to that output. With this update, this claim has been removed from the API documentation and the Cluster Logging Operator now validates that a target namespace is present. -(LOG-6573) -* Before this update, the Cluster Logging Operator could deploy the collector with output configurations that were not referenced by any inputs. With this update, a validation check for the ClusterLogForwarder resource prevents the Operator from deploying the collector. -(LOG-6585) - -## CVEs - -* CVE-2019-12900 - -# Logging 6.1.1 Release Notes - -This release includes Logging for Red Hat OpenShift Bug Fix Release 6.1.1. - -## New Features and Enhancements - -* With this update, the Loki Operator supports configuring the workload identity federation on the Google Cloud Platform (GCP) by using the Cluster Credential Operator (CCO) in Red Hat OpenShift Container Platform 4.17 or later. (LOG-6420) - -## Bug Fixes - -* Before this update, the collector was discarding longer audit log messages with the following error message: Internal log [Found line that exceeds max_line_bytes; discarding.]. With this update, the discarding of longer audit messages is avoided by increasing the audit configuration thresholds: The maximum line size, max_line_bytes, is 3145728 bytes. The maximum number of bytes read during a read cycle, max_read_bytes, is 262144 bytes. (LOG-6379) -* Before this update, an input receiver service was repeatedly created and deleted, causing issues with mounting the TLS secrets. With this update, the service is created once and only deleted if it is not defined in the ClusterLogForwarder custom resource. (LOG-6383) -* Before this update, pipeline validation might have entered an infinite loop if a name was a substring of another name. With this update, stricter name equality checks prevent the infinite loop. (LOG-6405) -* Before this update, the collector alerting rules included the summary and message fields. With this update, the collector alerting rules include the summary and description fields. (LOG-6407) -* Before this update, setting up the custom audit inputs in the ClusterLogForwarder custom resource with configured LokiStack output caused errors due to the nil pointer dereference. With this update, the Operator performs the nil checks, preventing such errors. 
(LOG-6449) -* Before this update, the ValidLokistackOTLPOutputs condition appeared in the status of the ClusterLogForwarder custom resource even when the output type is not LokiStack. With this update, the ValidLokistackOTLPOutputs condition is removed, and the validation messages for the existing output conditions are corrected. (LOG-6469) -* Before this update, the collector did not correctly mount the /var/log/oauth-server/ path, which prevented the collection of the audit logs. With this update, the volume mount is added, and the audit logs are collected as expected. (LOG-6484) -* Before this update, the must-gather script of the Red Hat OpenShift Logging Operator might have failed to gather the LokiStack data. With this update, the must-gather script is fixed, and the LokiStack data is gathered reliably. (LOG-6498) -* Before this update, the collector did not correctly mount the oauth-apiserver audit log file. As a result, such audit logs were not collected. With this update, the volume mount is correctly mounted, and the logs are collected as expected. (LOG-6533) - -## CVEs - -* CVE-2019-12900 -* CVE-2024-2511 -* CVE-2024-3596 -* CVE-2024-4603 -* CVE-2024-4741 -* CVE-2024-5535 -* CVE-2024-10963 -* CVE-2024-50602 - -# Logging 6.1.0 Release Notes - -This release includes Logging for Red Hat OpenShift Bug Fix Release 6.1.0. - -## New Features and Enhancements - -### Log Collection - -* This enhancement adds the source iostream to the attributes sent from collected container logs. The value is set to either stdout or stderr based on how the collector received it. (LOG-5292) -* With this update, the default memory limit for the collector increases from 1024 Mi to 2048 Mi. Users should adjust resource limits based on their cluster’s specific needs and specifications. (LOG-6072) -* With this update, users can now set the syslog output delivery mode of the ClusterLogForwarder CR to either AtLeastOnce or AtMostOnce. (LOG-6355) - -### Log Storage - -* With this update, the new 1x.pico LokiStack size supports clusters with fewer workloads and lower log volumes (up to 50GB/day). (LOG-5939) - -## Technology Preview - - -[IMPORTANT] ----- -{FeatureName} is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. -For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. ----- - -* With this update, OpenTelemetry logs can now be forwarded using the OTel (OpenTelemetry) data model to a Red Hat Managed LokiStack instance. To enable this feature, add the observability.openshift.io/tech-preview-otlp-output: "enabled" annotation to your ClusterLogForwarder configuration. For additional configuration information, see OTLP Forwarding. -* With this update, a dataModel field has been added to the lokiStack output specification. Set the dataModel to Otel to configure log forwarding using the OpenTelemetry data format. The default is set to Viaq. For information about data mapping see OTLP Specification. - -## Bug Fixes - -None. 
- -## CVEs - -* CVE-2024-6119 -* CVE-2024-6232 \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.18/observability/logging/logging-6.1/log6x-visual-6.1.txt b/ocp-product-docs-plaintext/4.18/observability/logging/logging-6.1/log6x-visual-6.1.txt deleted file mode 100644 index c669b518..00000000 --- a/ocp-product-docs-plaintext/4.18/observability/logging/logging-6.1/log6x-visual-6.1.txt +++ /dev/null @@ -1,5 +0,0 @@ -# Visualization for logging - - - -Visualization for logging is provided by deploying the Logging UI Plugin of the Cluster Observability Operator, which requires Operator installation. \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.18/observability/logging/logging-6.2/6x-cluster-logging-deploying-6.2.txt b/ocp-product-docs-plaintext/4.18/observability/logging/logging-6.2/6x-cluster-logging-deploying-6.2.txt deleted file mode 100644 index a56192da..00000000 --- a/ocp-product-docs-plaintext/4.18/observability/logging/logging-6.2/6x-cluster-logging-deploying-6.2.txt +++ /dev/null @@ -1,680 +0,0 @@ -# Installing Logging - - -Red Hat OpenShift Container Platform Operators use custom resources (CRs) to manage applications and their components. You provide high-level configuration and settings through the CR. The Operator translates high-level directives into low-level actions, based on best practices embedded within the logic of the Operator. A custom resource definition (CRD) defines a CR and lists all the configurations available to users of the Operator. Installing an Operator creates the CRDs to generate CRs. -To get started with logging, you must install the following Operators: -* Loki Operator to manage your log store. -* Red Hat OpenShift Logging Operator to manage log collection and forwarding. -* Cluster Observability Operator (COO) to manage visualization. -You can use either the Red Hat OpenShift Container Platform web console or the Red Hat OpenShift Container Platform CLI to install or configure logging. - -[IMPORTANT] ----- -You must configure the Red Hat OpenShift Logging Operator after the Loki Operator. ----- - -# Installation by using the CLI - -The following sections describe installing the Loki Operator and the Red Hat OpenShift Logging Operator by using the CLI. - -## Installing the Loki Operator by using the CLI - -Install Loki Operator on your Red Hat OpenShift Container Platform cluster to manage the log store Loki by using the Red Hat OpenShift Container Platform command-line interface (CLI). You can deploy and configure the Loki log store by reconciling the resource LokiStack with the Loki Operator. - -* You have administrator permissions. -* You installed the OpenShift CLI (`oc`). -* You have access to a supported object store. For example: AWS S3, Google Cloud Storage, Azure, Swift, Minio, or OpenShift Data Foundation. - -1. Create a Namespace object for Loki Operator: -Example Namespace object - -```yaml -apiVersion: v1 -kind: Namespace -metadata: - name: openshift-operators-redhat 1 - labels: - openshift.io/cluster-monitoring: "true" 2 -``` - -You must specify openshift-operators-redhat as the namespace. To enable monitoring for the operator, configure Cluster Monitoring Operator to scrape metrics from the openshift-operators-redhat namespace and not the openshift-operators namespace. The openshift-operators namespace might contain community operators, which are untrusted and could publish a metric with the same name as a Red Hat OpenShift Container Platform metric, causing conflicts.
-A string value that specifies the label as shown to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace. -2. Apply the Namespace object by running the following command: - -```terminal -$ oc apply -f .yaml -``` - -3. Create an OperatorGroup object. -Example OperatorGroup object - -```yaml -apiVersion: operators.coreos.com/v1 -kind: OperatorGroup -metadata: - name: loki-operator - namespace: openshift-operators-redhat 1 -spec: - upgradeStrategy: Default -``` - -You must specify openshift-operators-redhat as the namespace. -4. Apply the OperatorGroup object by running the following command: - -```terminal -$ oc apply -f .yaml -``` - -5. Create a Subscription object for Loki Operator: -Example Subscription object - -```yaml -apiVersion: operators.coreos.com/v1alpha1 -kind: Subscription -metadata: - name: loki-operator - namespace: openshift-operators-redhat 1 -spec: - channel: stable-6. 2 - installPlanApproval: Automatic 3 - name: loki-operator - source: redhat-operators 4 - sourceNamespace: openshift-marketplace -``` - -You must specify openshift-operators-redhat as the namespace. -Specify stable-6. as the channel. -If the approval strategy in the subscription is set to Automatic, the update process initiates as soon as a new operator version is available in the selected channel. If the approval strategy is set to Manual, you must manually approve pending updates. -Specify redhat-operators as the value. If your Red Hat OpenShift Container Platform cluster is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object that you created when you configured Operator Lifecycle Manager (OLM). -6. Apply the Subscription object by running the following command: - -```terminal -$ oc apply -f .yaml -``` - -7. Create a namespace object for deploying the LokiStack: -Example namespace object - -```yaml -apiVersion: v1 -kind: Namespace -metadata: - name: openshift-logging 1 - labels: - openshift.io/cluster-monitoring: "true" 2 -``` - -The openshift-logging namespace is dedicated for all logging workloads. -A string value that specifies the label, as shown, to ensure that cluster monitoring scrapes the openshift-logging namespace. -8. Apply the namespace object by running the following command: - -```terminal -$ oc apply -f .yaml -``` - -9. Create a secret with the credentials to access the object storage. For example, create a secret to access Amazon Web Services (AWS) s3. -Example Secret object - -```yaml -apiVersion: v1 -kind: Secret -metadata: - name: logging-loki-s3 1 - namespace: openshift-logging -stringData: 2 - access_key_id: - access_key_secret: - bucketnames: s3-bucket-name - endpoint: https://s3.eu-central-1.amazonaws.com - region: eu-central-1 -``` - -Use the name logging-loki-s3 to match the name used in LokiStack. -For the contents of the secret see the Loki object storage section. - -[IMPORTANT] ----- -If there is no retention period defined on the s3 bucket or in the LokiStack custom resource (CR), then the logs are not pruned and they stay in the s3 bucket forever, which might fill up the s3 storage. ----- -10. Apply the Secret object by running the following command: - -```terminal -$ oc apply -f .yaml -``` - -11. Create a LokiStack CR: -Example LokiStack CR - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki 1 - namespace: openshift-logging 2 -spec: - size: 1x.small 3 - storage: - schemas: - - version: v13 - effectiveDate: "--
" 4 - secret: - name: logging-loki-s3 5 - type: s3 6 - storageClassName: 7 - tenants: - mode: openshift-logging 8 -``` - -Use the name logging-loki. -You must specify openshift-logging as the namespace. -Specify the deployment size. Supported size options for production instances of Loki are 1x.extra-small, 1x.small, or 1x.medium. Additionally, 1x.pico is supported starting with logging 6.1. -For new installations this date should be set to the equivalent of "yesterday", as this will be the date from when the schema takes effect. -Specify the name of your log store secret. -Specify the corresponding storage type. -Specify the name of a storage class for temporary storage. For best performance, specify a storage class that allocates block storage. You can list the available storage classes for your cluster by using the oc get storageclasses command. -The openshift-logging mode is the default tenancy mode where a tenant is created for log types, such as audit, infrastructure, and application. This enables access control for individual users and user groups to different log streams. -12. Apply the LokiStack CR object by running the following command: - -```terminal -$ oc apply -f .yaml -``` - - -* Verify the installation by running the following command: - -```terminal -$ oc get pods -n openshift-logging -``` - -Example output - -```terminal -$ oc get pods -n openshift-logging -NAME READY STATUS RESTARTS AGE -logging-loki-compactor-0 1/1 Running 0 42m -logging-loki-distributor-7d7688bcb9-dvcj8 1/1 Running 0 42m -logging-loki-gateway-5f6c75f879-bl7k9 2/2 Running 0 42m -logging-loki-gateway-5f6c75f879-xhq98 2/2 Running 0 42m -logging-loki-index-gateway-0 1/1 Running 0 42m -logging-loki-ingester-0 1/1 Running 0 42m -logging-loki-querier-6b7b56bccc-2v9q4 1/1 Running 0 42m -logging-loki-query-frontend-84fb57c578-gq2f7 1/1 Running 0 42m -``` - - -## Installing Red Hat OpenShift Logging Operator by using the CLI - -Install Red Hat OpenShift Logging Operator on your Red Hat OpenShift Container Platform cluster to collect and forward logs to a log store by using the OpenShift CLI (`oc`). - -* You have administrator permissions. -* You installed the OpenShift CLI (`oc`). -* You installed and configured Loki Operator. -* You have created the openshift-logging namespace. - -1. Create an OperatorGroup object: -Example OperatorGroup object - -```yaml -apiVersion: operators.coreos.com/v1 -kind: OperatorGroup -metadata: - name: cluster-logging - namespace: openshift-logging 1 -spec: - upgradeStrategy: Default -``` - -You must specify openshift-logging as the namespace. -2. Apply the OperatorGroup object by running the following command: - -```terminal -$ oc apply -f .yaml -``` - -3. Create a Subscription object for Red Hat OpenShift Logging Operator: -Example Subscription object - -```yaml -apiVersion: operators.coreos.com/v1alpha1 -kind: Subscription -metadata: - name: cluster-logging - namespace: openshift-logging 1 -spec: - channel: stable-6. 2 - installPlanApproval: Automatic 3 - name: cluster-logging - source: redhat-operators 4 - sourceNamespace: openshift-marketplace -``` - -You must specify openshift-logging as the namespace. -Specify stable-6. as the channel. -If the approval strategy in the subscription is set to Automatic, the update process initiates as soon as a new operator version is available in the selected channel. If the approval strategy is set to Manual, you must manually approve pending updates. -Specify redhat-operators as the value. 
If your Red Hat OpenShift Container Platform cluster is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object that you created when you configured Operator Lifecycle Manager (OLM). -4. Apply the Subscription object by running the following command: - -```terminal -$ oc apply -f .yaml -``` - -5. Create a service account to be used by the log collector: - -```terminal -$ oc create sa logging-collector -n openshift-logging -``` - -6. Assign the necessary permissions to the service account for the collector to be able to collect and forward logs. In this example, the collector is provided permissions to collect logs from both infrastructure and application logs. - -```terminal -$ oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z logging-collector -n openshift-logging -$ oc adm policy add-cluster-role-to-user collect-application-logs -z logging-collector -n openshift-logging -$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z logging-collector -n openshift-logging -``` - -7. Create a ClusterLogForwarder CR: -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - name: instance - namespace: openshift-logging 1 -spec: - serviceAccount: - name: logging-collector 2 - outputs: - - name: lokistack-out - type: lokiStack 3 - lokiStack: - target: 4 - name: logging-loki - namespace: openshift-logging - authentication: - token: - from: serviceAccount - tls: - ca: - key: service-ca.crt - configMapName: openshift-service-ca.crt - pipelines: - - name: infra-app-logs - inputRefs: 5 - - application - - infrastructure - outputRefs: - - lokistack-out -``` - -You must specify the openshift-logging namespace. -Specify the name of the service account created before. -Select the lokiStack output type to send logs to the LokiStack instance. -Point the ClusterLogForwarder to the LokiStack instance created earlier. -Select the log output types you want to send to the LokiStack instance. -8. Apply the ClusterLogForwarder CR object by running the following command: - -```terminal -$ oc apply -f .yaml -``` - - -1. Verify the installation by running the following command: - -```terminal -$ oc get pods -n openshift-logging -``` - -Example output - -```terminal -$ oc get pods -n openshift-logging -NAME READY STATUS RESTARTS AGE -cluster-logging-operator-fb7f7cf69-8jsbq 1/1 Running 0 98m -instance-222js 2/2 Running 0 18m -instance-g9ddv 2/2 Running 0 18m -instance-hfqq8 2/2 Running 0 18m -instance-sphwg 2/2 Running 0 18m -instance-vv7zn 2/2 Running 0 18m -instance-wk5zz 2/2 Running 0 18m -logging-loki-compactor-0 1/1 Running 0 42m -logging-loki-distributor-7d7688bcb9-dvcj8 1/1 Running 0 42m -logging-loki-gateway-5f6c75f879-bl7k9 2/2 Running 0 42m -logging-loki-gateway-5f6c75f879-xhq98 2/2 Running 0 42m -logging-loki-index-gateway-0 1/1 Running 0 42m -logging-loki-ingester-0 1/1 Running 0 42m -logging-loki-querier-6b7b56bccc-2v9q4 1/1 Running 0 42m -logging-loki-query-frontend-84fb57c578-gq2f7 1/1 Running 0 42m -``` - - -## Installing the Logging UI plugin by using the CLI - -Install the Logging UI plugin by using the command-line interface (CLI) so that you can visualize logs. - -* You have administrator permissions. -* You installed the OpenShift CLI (`oc`). -* You installed and configured Loki Operator. - -1. Install the Cluster Observability Operator. For more information, see Installing the Cluster Observability Operator. -2. 
Create a UIPlugin custom resource (CR): -Example UIPlugin CR - -```yaml -apiVersion: observability.openshift.io/v1alpha1 -kind: UIPlugin -metadata: - name: logging 1 -spec: - type: Logging 2 - logging: - lokiStack: - name: logging-loki 3 -``` - -Set name to logging. -Set type to Logging. -The name value must match the name of your LokiStack instance. - -[NOTE] ----- -If you did not install LokiStack in the openshift-logging namespace, set the LokiStack namespace under the lokiStack configuration. ----- -3. Apply the UIPlugin CR object by running the following command: - -```terminal -$ oc apply -f .yaml -``` - - -1. Access the Red Hat OpenShift Container Platform web console, and refresh the page if a pop-up message instructs you to do so. -2. Navigate to the Observe → Logs panel, where you can run LogQL queries. You can also query logs for individual pods from the Aggregated Logs tab of a specific pod. - -# Installation by using the web console - -The following sections describe installing the Loki Operator and the Red Hat OpenShift Logging Operator by using the web console. - -## Installing Logging by using the web console - -Install Loki Operator on your Red Hat OpenShift Container Platform cluster to manage the log store Loki from the OperatorHub by using the Red Hat OpenShift Container Platform web console. You can deploy and configure the Loki log store by reconciling the resource LokiStack with the Loki Operator. - -* You have administrator permissions. -* You have access to the Red Hat OpenShift Container Platform web console. -* You have access to a supported object store (AWS S3, Google Cloud Storage, Azure, Swift, Minio, OpenShift Data Foundation). - -1. In the Red Hat OpenShift Container Platform web console Administrator perspective, go to Operators -> OperatorHub. -2. Type Loki Operator in the Filter by keyword field. Click Loki Operator in the list of available Operators, and then click Install. - -[IMPORTANT] ----- -The Community Loki Operator is not supported by Red Hat. ----- -3. Select stable-x.y as the Update channel. - -The Loki Operator must be deployed to the global Operator group namespace openshift-operators-redhat, so the Installation mode and Installed Namespace are already selected. If this namespace does not already exist, it will be created for you. -4. Select Enable Operator-recommended cluster monitoring on this namespace. - -This option sets the openshift.io/cluster-monitoring: "true" label in the Namespace object. You must select this option to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace. -5. For Update approval select Automatic, then click Install. - -If the approval strategy in the subscription is set to Automatic, the update process initiates as soon as a new Operator version is available in the selected channel. If the approval strategy is set to Manual, you must manually approve pending updates. - -[NOTE] ----- -An Operator might display a Failed status before the installation completes. If the Operator install completes with an InstallSucceeded message, refresh the page. ----- -6. While the Operator installs, create the namespace to which the log store will be deployed. -1. Click + in the top right of the screen to access the Import YAML page. -2. 
Add the YAML definition for the openshift-logging namespace: -Example namespace object - -```yaml -apiVersion: v1 -kind: Namespace -metadata: - name: openshift-logging 1 - labels: - openshift.io/cluster-monitoring: "true" 2 -``` - -The openshift-logging namespace is dedicated for all logging workloads. -A string value that specifies the label, as shown, to ensure that cluster monitoring scrapes the openshift-logging namespace. -3. Click Create. -7. Create a secret with the credentials to access the object storage. -1. Click + in the top right of the screen to access the Import YAML page. -2. Add the YAML definition for the secret. For example, create a secret to access Amazon Web Services (AWS) s3: -Example Secret object - -```yaml -apiVersion: v1 -kind: Secret -metadata: - name: logging-loki-s3 1 - namespace: openshift-logging 2 -stringData: 3 - access_key_id: - access_key_secret: - bucketnames: s3-bucket-name - endpoint: https://s3.eu-central-1.amazonaws.com - region: eu-central-1 -``` - -Note down the name used for the secret logging-loki-s3 to use it later when creating the LokiStack resource. -Set the namespace to openshift-logging as that will be the namespace used to deploy LokiStack. -For the contents of the secret see the Loki object storage section. - -[IMPORTANT] ----- -If there is no retention period defined on the s3 bucket or in the LokiStack custom resource (CR), then the logs are not pruned and they stay in the s3 bucket forever, which might fill up the s3 storage. ----- -3. Click Create. -8. Navigate to the Installed Operators page. Select the Loki Operator under the Provided APIs, find the LokiStack resource, and click Create Instance. -9. Select YAML view, and then use the following template to create a LokiStack CR: - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki 1 - namespace: openshift-logging 2 -spec: - size: 1x.small 3 - storage: - schemas: - - version: v13 - effectiveDate: "--
" - secret: - name: logging-loki-s3 4 - type: s3 5 - storageClassName: 6 - tenants: - mode: openshift-logging 7 -``` - -Use the name logging-loki. -You must specify openshift-logging as the namespace. -Specify the deployment size. Supported size options for production instances of Loki are 1x.extra-small, 1x.small, or 1x.medium. Additionally, 1x.pico is supported starting with logging 6.1. -Specify the name of your log store secret. -Specify the corresponding storage type. -Specify the name of a storage class for temporary storage. For best performance, specify a storage class that allocates block storage. You can list the available storage classes for your cluster by using the oc get storageclasses command. -The openshift-logging mode is the default tenancy mode where a tenant is created for log types, such as audit, infrastructure, and application. This enables access control for individual users and user groups to different log streams. -10. Click Create. - -1. In the LokiStack tab verify that you see your LokiStack instance. -2. In the Status column, verify that you see the message Condition: Ready with a green checkmark. - -## Installing Red Hat OpenShift Logging Operator by using the web console - -Install Red Hat OpenShift Logging Operator on your Red Hat OpenShift Container Platform cluster to collect and forward logs to a log store from the OperatorHub by using the Red Hat OpenShift Container Platform web console. - -* You have administrator permissions. -* You have access to the Red Hat OpenShift Container Platform web console. -* You installed and configured Loki Operator. - -1. In the Red Hat OpenShift Container Platform web console Administrator perspective, go to Operators -> OperatorHub. -2. Type Red Hat OpenShift Logging Operator in the Filter by keyword field. Click Red Hat OpenShift Logging Operator in the list of available Operators, and then click Install. -3. Select stable-x.y as the Update channel. The latest version is already selected in the Version field. - -The Red Hat OpenShift Logging Operator must be deployed to the logging namespace openshift-logging, so the Installation mode and Installed Namespace are already selected. If this namespace does not already exist, it will be created for you. -4. Select Enable Operator-recommended cluster monitoring on this namespace. - -This option sets the openshift.io/cluster-monitoring: "true" label in the Namespace object. You must select this option to ensure that cluster monitoring scrapes the openshift-logging namespace. -5. For Update approval select Automatic, then click Install. - -If the approval strategy in the subscription is set to Automatic, the update process initiates as soon as a new operator version is available in the selected channel. If the approval strategy is set to Manual, you must manually approve pending updates. - -[NOTE] ----- -An Operator might display a Failed status before the installation completes. If the operator installation completes with an InstallSucceeded message, refresh the page. ----- -6. While the operator installs, create the service account that will be used by the log collector to collect the logs. -1. Click the + in the top right of the screen to access the Import YAML page. -2. Enter the YAML definition for the service account.
-Example ServiceAccount object - -```yaml -apiVersion: v1 -kind: ServiceAccount -metadata: - name: logging-collector 1 - namespace: openshift-logging 2 -``` - -Note down the name used for the service account logging-collector to use it later when creating the ClusterLogForwarder resource. -Set the namespace to openshift-logging because that is the namespace for deploying the ClusterLogForwarder resource. -3. Click the Create button. -7. Create the ClusterRoleBinding objects to grant the necessary permissions to the log collector for accessing the logs that you want to collect and to write the log store, for example infrastructure and application logs. -1. Click the + in the top right of the screen to access the Import YAML page. -2. Enter the YAML definition for the ClusterRoleBinding resources. -Example ClusterRoleBinding resources - -```yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRoleBinding -metadata: - name: logging-collector:write-logs -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: logging-collector-logs-writer 1 -subjects: -- kind: ServiceAccount - name: logging-collector - namespace: openshift-logging ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRoleBinding -metadata: - name: logging-collector:collect-application -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: collect-application-logs 2 -subjects: -- kind: ServiceAccount - name: logging-collector - namespace: openshift-logging ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRoleBinding -metadata: - name: logging-collector:collect-infrastructure -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: collect-infrastructure-logs 3 -subjects: -- kind: ServiceAccount - name: logging-collector - namespace: openshift-logging -``` - -The cluster role to allow the log collector to write logs to LokiStack. -The cluster role to allow the log collector to collect logs from applications. -The cluster role to allow the log collector to collect logs from infrastructure. -3. Click the Create button. -8. Go to the Operators -> Installed Operators page. Select the operator and click the All instances tab. -9. After granting the necessary permissions to the service account, navigate to the Installed Operators page. Select the Red Hat OpenShift Logging Operator under the Provided APIs, find the ClusterLogForwarder resource and click Create Instance. -10. Select YAML view, and then use the following template to create a ClusterLogForwarder CR: -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - name: instance - namespace: openshift-logging 1 -spec: - serviceAccount: - name: logging-collector 2 - outputs: - - name: lokistack-out - type: lokiStack 3 - lokiStack: - target: 4 - name: logging-loki - namespace: openshift-logging - authentication: - token: - from: serviceAccount - tls: - ca: - key: service-ca.crt - configMapName: openshift-service-ca.crt - pipelines: - - name: infra-app-logs - inputRefs: 5 - - application - - infrastructure - outputRefs: - - lokistack-out -``` - -You must specify openshift-logging as the namespace. -Specify the name of the service account created earlier. -Select the lokiStack output type to send logs to the LokiStack instance. -Point the ClusterLogForwarder to the LokiStack instance created earlier. -Select the log output types you want to send to the LokiStack instance. -11. Click Create. - -1. 
In the ClusterLogForwarder tab verify that you see your ClusterLogForwarder instance. -2. In the Status column, verify that you see the messages: -* Condition: observability.openshift.io/Authorized -* observability.openshift.io/Valid, Ready - -## Installing the Logging UI plugin by using the web console - -Install the Logging UI plugin by using the web console so that you can visualize logs. - -* You have administrator permissions. -* You have access to the Red Hat OpenShift Container Platform web console. -* You installed and configured Loki Operator. - -1. Install the Cluster Observability Operator. For more information, see Installing the Cluster Observability Operator. -2. Navigate to the Installed Operators page. Under Provided APIs, select ClusterObservabilityOperator. Find the UIPlugin resource and click Create Instance. -3. Select the YAML view, and then use the following template to create a UIPlugin custom resource (CR): - -```yaml -apiVersion: observability.openshift.io/v1alpha1 -kind: UIPlugin -metadata: - name: logging 1 -spec: - type: Logging 2 - logging: - lokiStack: - name: logging-loki 3 -``` - -Set name to logging. -Set type to Logging. -The name value must match the name of your LokiStack instance. - -[NOTE] ----- -If you did not install LokiStack in the openshift-logging namespace, set the LokiStack namespace under the lokiStack configuration. ----- -4. Click Create. - -1. Refresh the page when a pop-up message instructs you to do so. -2. Navigate to the Observe → Logs panel, where you can run LogQL queries. You can also query logs for individual pods from the Aggregated Logs tab of a specific pod. \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.18/observability/logging/logging-6.2/log62-cluster-logging-support.txt b/ocp-product-docs-plaintext/4.18/observability/logging/logging-6.2/log62-cluster-logging-support.txt deleted file mode 100644 index a1e0aa01..00000000 --- a/ocp-product-docs-plaintext/4.18/observability/logging/logging-6.2/log62-cluster-logging-support.txt +++ /dev/null @@ -1,136 +0,0 @@ -# Support - - -Only the configuration options described in this documentation are supported for logging. -Do not use any other configuration options, as they are unsupported. Configuration paradigms might change across Red Hat OpenShift Container Platform releases, and such cases can only be handled gracefully if all configuration possibilities are controlled. If you use configurations other than those described in this documentation, your changes will be overwritten, because Operators are designed to reconcile any differences. - -[NOTE] ----- -If you must perform configurations not described in the Red Hat OpenShift Container Platform documentation, you must set your Red Hat OpenShift Logging Operator to Unmanaged. An unmanaged logging instance is not supported and does not receive updates until you return its status to Managed. ----- - -[NOTE] ----- -Logging is provided as an installable component, with a distinct release cycle from the core Red Hat OpenShift Container Platform. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility. ----- -Loki is a horizontally scalable, highly available, multi-tenant log aggregation system offered as a GA log store for logging for Red Hat OpenShift that can be visualized with the OpenShift Observability UI. The Loki configuration provided by OpenShift Logging is a short-term log store designed to enable users to perform fast troubleshooting with the collected logs. 
For that purpose, the logging for Red Hat OpenShift configuration of Loki has short-term storage, and is optimized for very recent queries. For long-term storage or queries over a long time period, users should look to log stores external to their cluster. -Logging for Red Hat OpenShift is an opinionated collector and normalizer of application, infrastructure, and audit logs. It is intended to be used for forwarding logs to various supported systems. -Logging is not: -* A high scale log collection system -* Security Information and Event Monitoring (SIEM) compliant -* A "bring your own" (BYO) log collector configuration -* Historical or long term log retention or storage -* A guaranteed log sink -* Secure storage - audit logs are not stored by default - -# Supported API custom resource definitions - -The following table describes the supported Logging APIs. - - - -# Unsupported configurations - -You must set the Red Hat OpenShift Logging Operator to the Unmanaged state to modify the following components: - -* The collector configuration file -* The collector daemonset - -Explicitly unsupported cases include: - -* Configuring the logging collector using environment variables. You cannot use environment variables to modify the log collector. -* Configuring how the log collector normalizes logs. You cannot modify default log normalization. - -# Support policy for unmanaged Operators - -The management state of an Operator determines whether an Operator is actively -managing the resources for its related component in the cluster as designed. If -an Operator is set to an unmanaged state, it does not respond to changes in -configuration nor does it receive updates. - -While this can be helpful in non-production clusters or during debugging, -Operators in an unmanaged state are unsupported and the cluster administrator -assumes full control of the individual component configurations and upgrades. - -An Operator can be set to an unmanaged state using the following methods: - -* Individual Operator configuration - -Individual Operators have a managementState parameter in their configuration. -This can be accessed in different ways, depending on the Operator. For example, -the Red Hat OpenShift Logging Operator accomplishes this by modifying a custom resource -(CR) that it manages, while the Cluster Samples Operator uses a cluster-wide -configuration resource. - -Changing the managementState parameter to Unmanaged means that the Operator -is not actively managing its resources and will take no action related to the -related component. Some Operators might not support this management state as it -might damage the cluster and require manual recovery. - -[WARNING] ----- -Changing individual Operators to the Unmanaged state renders that particular -component and functionality unsupported. Reported issues must be reproduced in -Managed state for support to proceed. ----- -* Cluster Version Operator (CVO) overrides - -The spec.overrides parameter can be added to the CVO’s configuration to allow -administrators to provide a list of overrides to the CVO’s behavior for a -component. Setting the spec.overrides[].unmanaged parameter to true for a -component blocks cluster upgrades and alerts the administrator after a CVO -override has been set: - -```terminal -Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing. -``` - - -[WARNING] ----- -Setting a CVO override puts the entire cluster in an unsupported state. 
Reported -issues must be reproduced after removing any overrides for support to proceed. ----- - -# Collecting logging data for Red Hat Support - -When opening a support case, it is helpful to provide debugging information about your cluster to Red Hat Support. - -You can use the must-gather tool to collect diagnostic information for project-level resources, cluster-level resources, and each of the logging components. -For prompt support, supply diagnostic information for both Red Hat OpenShift Container Platform and logging. - -## About the must-gather tool - -The oc adm must-gather CLI command collects the information from your cluster that is most likely needed for debugging issues. - -For your logging, must-gather collects the following information: - -* Project-level resources, including pods, configuration maps, service accounts, roles, role bindings, and events at the project level -* Cluster-level resources, including nodes, roles, and role bindings at the cluster level -* OpenShift Logging resources in the openshift-logging and openshift-operators-redhat namespaces, including health status for the log collector, the log store, and the log visualizer - -When you run oc adm must-gather, a new pod is created on the cluster. The data is collected on that pod and saved in a new directory that starts with must-gather.local. This directory is created in the current working directory. - -## Collecting logging data - -You can use the oc adm must-gather CLI command to collect information about logging. - -To collect logging information with must-gather: - -1. Navigate to the directory where you want to store the must-gather information. -2. Run the oc adm must-gather command against the logging image: - -```terminal -$ oc adm must-gather --image=$(oc -n openshift-logging get deployment.apps/cluster-logging-operator -o jsonpath='{.spec.template.spec.containers[?(@.name == "cluster-logging-operator")].image}') -``` - - -The must-gather tool creates a new directory that starts with must-gather.local within the current directory. For example: -must-gather.local.4157245944708210408. -3. Create a compressed file from the must-gather directory that was just created. For example, on a computer that uses a Linux operating system, run the following command: - -```terminal -$ tar -cvaf must-gather.tar.gz must-gather.local.4157245944708210408 -``` - -4. Attach the compressed file to your support case on the Red Hat Customer Portal. \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.18/observability/logging/logging-6.2/log6x-about-6.2.txt b/ocp-product-docs-plaintext/4.18/observability/logging/logging-6.2/log6x-about-6.2.txt deleted file mode 100644 index 2b6545ea..00000000 --- a/ocp-product-docs-plaintext/4.18/observability/logging/logging-6.2/log6x-about-6.2.txt +++ /dev/null @@ -1,330 +0,0 @@ -# Logging 6.2 - - -The ClusterLogForwarder custom resource (CR) is the central configuration point for log collection and forwarding. - -# Inputs and outputs - -Inputs specify the sources of logs to be forwarded. Logging provides the following built-in input types that select logs from different parts of your cluster: - -* application -* receiver -* infrastructure -* audit - -You can also define custom inputs based on namespaces or pod labels to fine-tune log selection. - -Outputs define the destinations where logs are sent. Each output type has its own set of configuration options, allowing you to customize the behavior and authentication settings. 
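The following sketch illustrates how a namespace- and label-scoped custom input might be wired into a pipeline. It is illustrative only: the input name selected-app, the namespace my-namespace, and the label app: my-app are hypothetical placeholders, and the output assumes a LokiStack instance named logging-loki such as the one created in the quick start later in this section. Verify the exact field names for your release, for example by running oc explain obsclf.spec.inputs.

```yaml
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
  namespace: openshift-logging
spec:
  serviceAccount:
    name: collector
  inputs:
  - name: selected-app               # hypothetical custom input name
    type: application
    application:
      includes:
      - namespace: my-namespace      # placeholder: collect only from this namespace
      selector:
        matchLabels:
          app: my-app                # placeholder: collect only from pods with this label
  outputs:
  - name: default-lokistack
    type: lokiStack
    lokiStack:
      authentication:
        token:
          from: serviceAccount
      target:
        name: logging-loki
        namespace: openshift-logging
      tls:
        ca:
          key: service-ca.crt
          configMapName: openshift-service-ca.crt
  pipelines:
  - name: selected-app-to-loki
    inputRefs:
    - selected-app                   # reference the custom input by name
    outputRefs:
    - default-lokistack
```

Built-in input types such as application, infrastructure, and audit can be referenced in inputRefs in the same way, with or without custom inputs.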
- -# Receiver input type - -The receiver input type enables the Logging system to accept logs from external sources. It supports two formats for receiving logs: http and syslog. - -The ReceiverSpec field defines the configuration for a receiver input. - -# Pipelines and filters - -Pipelines determine the flow of logs from inputs to outputs. A pipeline consists of one or more input refs, output refs, and optional filter refs. You can use filters to transform or drop log messages within a pipeline. The order of filters matters, as they are applied sequentially, and earlier filters can prevent log messages from reaching later stages. - -# Operator behavior - -The Cluster Logging Operator manages the deployment and configuration of the collector based on the managementState field of the ClusterLogForwarder resource: - -* When set to Managed (default), the Operator actively manages the logging resources to match the configuration defined in the spec. -* When set to Unmanaged, the Operator does not take any action, allowing you to manually manage the logging components. - -# Validation - -Logging includes extensive validation rules and default values to ensure a smooth and error-free configuration experience. The ClusterLogForwarder resource enforces validation checks on required fields, dependencies between fields, and the format of input values. Default values are provided for certain fields, reducing the need for explicit configuration in common scenarios. - -# Quick start - -OpenShift Logging supports two data models: - -* ViaQ (General Availability) -* OpenTelemetry (Technology Preview) - -You can select either of these data models based on your requirement by configuring the lokiStack.dataModel field in the ClusterLogForwarder. ViaQ is the default data model when forwarding logs to LokiStack. - - -[NOTE] ----- -In future releases of OpenShift Logging, the default data model will change from ViaQ to OpenTelemetry. ----- - -## Quick start with ViaQ - -To use the default ViaQ data model, follow these steps: - -* You have access to an Red Hat OpenShift Container Platform cluster with cluster-admin permissions. -* You installed the OpenShift CLI (`oc`). -* You have access to a supported object store. For example, AWS S3, Google Cloud Storage, Azure, Swift, Minio, or OpenShift Data Foundation. - -1. Install the Red Hat OpenShift Logging Operator, Loki Operator, and Cluster Observability Operator (COO) from OperatorHub. -2. Create a LokiStack custom resource (CR) in the openshift-logging namespace: - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: - managementState: Managed - size: 1x.extra-small - storage: - schemas: - - effectiveDate: '2024-10-01' - version: v13 - secret: - name: logging-loki-s3 - type: s3 - storageClassName: gp3-csi - tenants: - mode: openshift-logging -``` - - -[NOTE] ----- -Ensure that the logging-loki-s3 secret is created beforehand. The contents of this secret vary depending on the object storage in use. For more information, see Secrets and TLS Configuration. ----- -3. Create a service account for the collector: - -```terminal -$ oc create sa collector -n openshift-logging -``` - -4. 
Allow the collector's service account to write data to the LokiStack CR: - -```terminal -$ oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector -n openshift-logging -``` - - -[NOTE] ----- -The ClusterRole resource is created automatically during the Cluster Logging Operator installation and does not need to be created manually. ----- -5. To collect logs, use the service account of the collector by running the following commands: - -```terminal -$ oc adm policy add-cluster-role-to-user collect-application-logs -z collector -n openshift-logging -``` - - -```terminal -$ oc adm policy add-cluster-role-to-user collect-audit-logs -z collector -n openshift-logging -``` - - -```terminal -$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector -n openshift-logging -``` - - -[NOTE] ----- -The example binds the collector to all three roles (application, infrastructure, and audit), but by default, only application and infrastructure logs are collected. To collect audit logs, update your ClusterLogForwarder configuration to include them. Assign roles based on the specific log types required for your environment. ----- -6. Create a UIPlugin CR to enable the Log section in the Observe tab: - -```yaml -apiVersion: observability.openshift.io/v1alpha1 -kind: UIPlugin -metadata: - name: logging -spec: - type: Logging - logging: - lokiStack: - name: logging-loki -``` - -7. Create a ClusterLogForwarder CR to configure log forwarding: - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - name: collector - namespace: openshift-logging -spec: - serviceAccount: - name: collector - outputs: - - name: default-lokistack - type: lokiStack - lokiStack: - authentication: - token: - from: serviceAccount - target: - name: logging-loki - namespace: openshift-logging - tls: - ca: - key: service-ca.crt - configMapName: openshift-service-ca.crt - pipelines: - - name: default-logstore - inputRefs: - - application - - infrastructure - outputRefs: - - default-lokistack -``` - - -[NOTE] ----- -The dataModel field is optional and left unset (dataModel: "") by default. This allows the Cluster Logging Operator (CLO) to automatically select a data model. Currently, the CLO defaults to the ViaQ model when the field is unset, but this will change in future releases. Specifying dataModel: ViaQ ensures the configuration remains compatible if the default changes. ----- - -* Verify that logs are visible in the Log section of the Observe tab in the Red Hat OpenShift Container Platform web console. - -## Quick start with OpenTelemetry - - -[IMPORTANT] ----- -{FeatureName} is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. -For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. ----- - -To configure OTLP ingestion and enable the OpenTelemetry data model, follow these steps: - -* You have access to an Red Hat OpenShift Container Platform cluster with cluster-admin permissions. -* You have installed the OpenShift CLI (`oc`). -* You have access to a supported object store. 
For example, AWS S3, Google Cloud Storage, Azure, Swift, Minio, or OpenShift Data Foundation. - -1. Install the Red Hat OpenShift Logging Operator, Loki Operator, and Cluster Observability Operator (COO) from OperatorHub. -2. Create a LokiStack custom resource (CR) in the openshift-logging namespace: - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: - managementState: Managed - size: 1x.extra-small - storage: - schemas: - - effectiveDate: '2024-10-01' - version: v13 - secret: - name: logging-loki-s3 - type: s3 - storageClassName: gp3-csi - tenants: - mode: openshift-logging -``` - - -[NOTE] ----- -Ensure that the logging-loki-s3 secret is created beforehand. The contents of this secret vary depending on the object storage in use. For more information, see "Secrets and TLS Configuration". ----- -3. Create a service account for the collector: - -```terminal -$ oc create sa collector -n openshift-logging -``` - -4. Allow the collector's service account to write data to the LokiStack CR: - -```terminal -$ oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector -n openshift-logging -``` - - -[NOTE] ----- -The ClusterRole resource is created automatically during the Cluster Logging Operator installation and does not need to be created manually. ----- -5. To collect logs, use the service account of the collector by running the following commands: - -```terminal -$ oc adm policy add-cluster-role-to-user collect-application-logs -z collector -n openshift-logging -``` - - -```terminal -$ oc adm policy add-cluster-role-to-user collect-audit-logs -z collector -n openshift-logging -``` - - -```terminal -$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector -n openshift-logging -``` - - -[NOTE] ----- -The example binds the collector to all three roles (application, infrastructure, and audit). By default, only application and infrastructure logs are collected. To collect audit logs, update your ClusterLogForwarder configuration to include them. Assign roles based on the specific log types required for your environment. ----- -6. Create a UIPlugin CR to enable the Log section in the Observe tab: - -```yaml -apiVersion: observability.openshift.io/v1alpha1 -kind: UIPlugin -metadata: - name: logging -spec: - type: Logging - logging: - lokiStack: - name: logging-loki -``` - -7. Create a ClusterLogForwarder CR to configure log forwarding: - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - name: collector - namespace: openshift-logging - annotations: - observability.openshift.io/tech-preview-otlp-output: "enabled" 1 -spec: - serviceAccount: - name: collector - outputs: - - name: loki-otlp - type: lokiStack 2 - lokiStack: - target: - name: logging-loki - namespace: openshift-logging - dataModel: Otel 3 - authentication: - token: - from: serviceAccount - tls: - ca: - key: service-ca.crt - configMapName: openshift-service-ca.crt - pipelines: - - name: my-pipeline - inputRefs: - - application - - infrastructure - outputRefs: - - loki-otlp -``` - -Use the annotation to enable the Otel data model, which is a Technology Preview feature. -Define the output type as lokiStack. -Specifies the OpenTelemetry data model. - -[NOTE] ----- -You cannot use lokiStack.labelKeys when dataModel is Otel. To achieve similar functionality when dataModel is Otel, refer to "Configuring LokiStack for OTLP data ingestion". 
----- - -* To verify that OTLP is functioning correctly, complete the following steps: -1. In the OpenShift web console, click Observe -> OpenShift Logging -> LokiStack -> Writes. -2. Check the Distributor - Structured Metadata section. \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.18/observability/logging/logging-6.2/log6x-clf-6.2.txt b/ocp-product-docs-plaintext/4.18/observability/logging/logging-6.2/log6x-clf-6.2.txt deleted file mode 100644 index 71752411..00000000 --- a/ocp-product-docs-plaintext/4.18/observability/logging/logging-6.2/log6x-clf-6.2.txt +++ /dev/null @@ -1,987 +0,0 @@ -# Configuring log forwarding - - -The ClusterLogForwarder (CLF) allows users to configure forwarding of logs to various destinations. It provides a flexible way to select log messages from different sources, send them through a pipeline that can transform or filter them, and forward them to one or more outputs. -* Selects log messages using inputs -* Forwards logs to external destinations using outputs -* Filters, transforms, and drops log messages using filters -* Defines log forwarding pipelines connecting inputs, filters and outputs - -# Setting up log collection - -This release of Cluster Logging requires administrators to explicitly grant log collection permissions to the service account associated with ClusterLogForwarder. This was not required in previous releases for the legacy logging scenario consisting of a ClusterLogging and, optionally, a ClusterLogForwarder.logging.openshift.io resource. - -The Red Hat OpenShift Logging Operator provides collect-audit-logs, collect-application-logs, and collect-infrastructure-logs cluster roles, which enable the collector to collect audit logs, application logs, and infrastructure logs respectively. - -Setup log collection by binding the required cluster roles to your service account. - -## Legacy service accounts - -To use the existing legacy service account logcollector, create the following ClusterRoleBinding: - - -```terminal -$ oc adm policy add-cluster-role-to-user collect-application-logs system:serviceaccount:openshift-logging:logcollector -``` - - - -```terminal -$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs system:serviceaccount:openshift-logging:logcollector -``` - - -Additionally, create the following ClusterRoleBinding if collecting audit logs: - - -```terminal -$ oc adm policy add-cluster-role-to-user collect-audit-logs system:serviceaccount:openshift-logging:logcollector -``` - - -## Creating service accounts - -* The Red Hat OpenShift Logging Operator is installed in the openshift-logging namespace. -* You have administrator permissions. - -1. Create a service account for the collector. If you want to write logs to storage that requires a token for authentication, you must include a token in the service account. -2. Bind the appropriate cluster roles to the service account: -Example binding command - -```terminal -$ oc adm policy add-cluster-role-to-user system:serviceaccount:: -``` - - -### Cluster Role Binding for your Service Account - -The role_binding.yaml file binds the ClusterLogging operator’s ClusterRole to a specific ServiceAccount, allowing it to manage Kubernetes resources cluster-wide. 
- - -```yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRoleBinding -metadata: - name: manager-rolebinding -roleRef: 1 - apiGroup: rbac.authorization.k8s.io 2 - kind: ClusterRole 3 - name: cluster-logging-operator 4 -subjects: 5 - - kind: ServiceAccount 6 - name: cluster-logging-operator 7 - namespace: openshift-logging 8 -``` - - -roleRef: References the ClusterRole to which the binding applies. -apiGroup: Indicates the RBAC API group, specifying that the ClusterRole is part of Kubernetes' RBAC system. -kind: Specifies that the referenced role is a ClusterRole, which applies cluster-wide. -name: The name of the ClusterRole being bound to the ServiceAccount, here cluster-logging-operator. -subjects: Defines the entities (users or service accounts) that are being granted the permissions from the ClusterRole. -kind: Specifies that the subject is a ServiceAccount. -Name: The name of the ServiceAccount being granted the permissions. -namespace: Indicates the namespace where the ServiceAccount is located. - -### Writing application logs - -The write-application-logs-clusterrole.yaml file defines a ClusterRole that grants permissions to write application logs to the Loki logging application. - - -```yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole -metadata: - name: cluster-logging-write-application-logs -rules: 1 - - apiGroups: 2 - - loki.grafana.com 3 - resources: 4 - - application 5 - resourceNames: 6 - - logs 7 - verbs: 8 - - create 9 -``` - - -rules: Specifies the permissions granted by this ClusterRole. -apiGroups: Refers to the API group loki.grafana.com, which relates to the Loki logging system. -loki.grafana.com: The API group for managing Loki-related resources. -resources: The resource type that the ClusterRole grants permission to interact with. -application: Refers to the application resources within the Loki logging system. -resourceNames: Specifies the names of resources that this role can manage. -logs: Refers to the log resources that can be created. -verbs: The actions allowed on the resources. -create: Grants permission to create new logs in the Loki system. - -### Writing audit logs - -The write-audit-logs-clusterrole.yaml file defines a ClusterRole that grants permissions to create audit logs in the Loki logging system. - - -```yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole -metadata: - name: cluster-logging-write-audit-logs -rules: 1 - - apiGroups: 2 - - loki.grafana.com 3 - resources: 4 - - audit 5 - resourceNames: 6 - - logs 7 - verbs: 8 - - create 9 -``` - - -rules: Defines the permissions granted by this ClusterRole. -apiGroups: Specifies the API group loki.grafana.com. -loki.grafana.com: The API group responsible for Loki logging resources. -resources: Refers to the resource type this role manages, in this case, audit. -audit: Specifies that the role manages audit logs within Loki. -resourceNames: Defines the specific resources that the role can access. -logs: Refers to the logs that can be managed under this role. -verbs: The actions allowed on the resources. -create: Grants permission to create new audit logs. - -### Writing infrastructure logs - -The write-infrastructure-logs-clusterrole.yaml file defines a ClusterRole that grants permission to create infrastructure logs in the Loki logging system. 
- - -```yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole -metadata: - name: cluster-logging-write-infrastructure-logs -rules: 1 - - apiGroups: 2 - - loki.grafana.com 3 - resources: 4 - - infrastructure 5 - resourceNames: 6 - - logs 7 - verbs: 8 - - create 9 -``` - - -rules: Specifies the permissions this ClusterRole grants. -apiGroups: Specifies the API group for Loki-related resources. -loki.grafana.com: The API group managing the Loki logging system. -resources: Defines the resource type that this role can interact with. -infrastructure: Refers to infrastructure-related resources that this role manages. -resourceNames: Specifies the names of resources this role can manage. -logs: Refers to the log resources related to infrastructure. -verbs: The actions permitted by this role. -create: Grants permission to create infrastructure logs in the Loki system. - -### ClusterLogForwarder editor role - -The clusterlogforwarder-editor-role.yaml file defines a ClusterRole that allows users to manage ClusterLogForwarders in OpenShift. - - -```yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole -metadata: - name: clusterlogforwarder-editor-role -rules: 1 - - apiGroups: 2 - - observability.openshift.io 3 - resources: 4 - - clusterlogforwarders 5 - verbs: 6 - - create 7 - - delete 8 - - get 9 - - list 10 - - patch 11 - - update 12 - - watch 13 -``` - - -rules: Specifies the permissions this ClusterRole grants. -apiGroups: Refers to the OpenShift-specific API group. -observability.openshift.io: The API group for managing observability resources, like logging. -resources: Specifies the resources this role can manage. -clusterlogforwarders: Refers to the log forwarding resources in OpenShift. -verbs: Specifies the actions allowed on the ClusterLogForwarders. -create: Grants permission to create new ClusterLogForwarders. -delete: Grants permission to delete existing ClusterLogForwarders. -get: Grants permission to retrieve information about specific ClusterLogForwarders. -list: Allows listing all ClusterLogForwarders. -patch: Grants permission to partially modify ClusterLogForwarders. -update: Grants permission to update existing ClusterLogForwarders. -watch: Grants permission to monitor changes to ClusterLogForwarders. - -# Modifying log level in collector - -To modify the log level in the collector, you can set the observability.openshift.io/log-level annotation to trace, debug, info, warn, error, or off. - - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - name: collector - annotations: - observability.openshift.io/log-level: debug -# ... -``` - - -# Managing the Operator - -The ClusterLogForwarder resource has a managementState field that controls whether the operator actively manages its resources or leaves them Unmanaged: - -Managed:: (default) The operator will drive the logging resources to match the desired state in the CLF spec. -Unmanaged:: The operator will not take any action related to the logging components. - -This allows administrators to temporarily pause log forwarding by setting managementState to Unmanaged. - -# Structure of the ClusterLogForwarder - -The CLF has a spec section that contains the following key components: - -Inputs:: Select log messages to be forwarded. Built-in input types application, infrastructure, and audit forward logs from different parts of the cluster. You can also define custom inputs. -Outputs:: Define destinations to forward logs to.
Each output has a unique name and type-specific configuration. -Pipelines:: Define the path logs take from inputs, through filters, to outputs. Pipelines have a unique name and consist of a list of input, output and filter names. -Filters:: Transform or drop log messages in the pipeline. Users can define filters that match certain log fields and drop or modify the messages. Filters are applied in the order specified in the pipeline. - -## Inputs - -Inputs are configured in an array under spec.inputs. There are three built-in input types: - -application:: Selects logs from all application containers, excluding those in infrastructure namespaces. -infrastructure:: Selects logs from nodes and from infrastructure components running in the following namespaces: -* default -* kube -* openshift -* Containing the kube- or openshift- prefix -audit:: Selects logs from the OpenShift API server audit logs, Kubernetes API server audit logs, ovn audit logs, and node audit logs from auditd. - -Users can define custom inputs of type application that select logs from specific namespaces or using pod labels. - -## Outputs - -Outputs are configured in an array under spec.outputs. Each output must have a unique name and a type. Supported types are: - -azureMonitor:: Forwards logs to Azure Monitor. -cloudwatch:: Forwards logs to AWS CloudWatch. -elasticsearch:: Forwards logs to an external Elasticsearch instance. -googleCloudLogging:: Forwards logs to Google Cloud Logging. -http:: Forwards logs to a generic HTTP endpoint. -kafka:: Forwards logs to a Kafka broker. -loki:: Forwards logs to a Loki logging backend. -lokistack:: Forwards logs to the logging supported combination of Loki and web proxy with Red Hat OpenShift Container Platform authentication integration. LokiStack's proxy uses Red Hat OpenShift Container Platform authentication to enforce multi-tenancy -otlp:: Forwards logs using the OpenTelemetry Protocol. -splunk:: Forwards logs to Splunk. -syslog:: Forwards logs to an external syslog server. - -Each output type has its own configuration fields. - -# Configuring OTLP output - -Cluster administrators can use the OpenTelemetry Protocol (OTLP) output to collect and forward logs to OTLP receivers. The OTLP output uses the specification defined by the OpenTelemetry Observability framework to send data over HTTP with JSON encoding. - - -[IMPORTANT] ----- -{FeatureName} is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. -For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. 
----- - -* Create or edit a ClusterLogForwarder custom resource (CR) to enable forwarding using OTLP by adding the following annotation: -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - annotations: - observability.openshift.io/tech-preview-otlp-output: "enabled" 1 - name: clf-otlp -spec: - serviceAccount: - name: - outputs: - - name: otlp - type: otlp - otlp: - tuning: - compression: gzip - deliveryMode: AtLeastOnce - maxRetryDuration: 20 - maxWrite: 10M - minRetryDuration: 5 - url: 2 - pipelines: - - inputRefs: - - application - - infrastructure - - audit - name: otlp-logs - outputRefs: - - otlp -``` - -Use this annotation to enable the OpenTelemetry Protocol (OTLP) output, which is a Technology Preview feature. -This URL must be absolute and is a placeholder for the OTLP endpoint where logs are sent. - - -[NOTE] ----- -The OTLP output uses the OpenTelemetry data model, which is different from the ViaQ data model that is used by other output types. It adheres to the OTLP using OpenTelemetry Semantic Conventions defined by the OpenTelemetry Observability framework. ----- - -## Pipelines - -Pipelines are configured in an array under spec.pipelines. Each pipeline must have a unique name and consists of: - -inputRefs:: Names of inputs whose logs should be forwarded to this pipeline. -outputRefs:: Names of outputs to send logs to. -filterRefs:: (optional) Names of filters to apply. - -The order of filterRefs matters, as they are applied sequentially. Earlier filters can drop messages that will not be processed by later filters. - -## Filters - -Filters are configured in an array under spec.filters. They can match incoming log messages based on the value of structured fields and modify or drop them. - -Administrators can configure the following types of filters: - -# Enabling multi-line exception detection - -Enables multi-line error detection of container logs. - - -[WARNING] ----- -Enabling this feature could have performance implications and may require additional computing resources or alternate logging solutions. ----- - -Log parsers often incorrectly identify separate lines of the same exception as separate exceptions. This leads to extra log entries and an incomplete or inaccurate view of the traced information. - - -```java -java.lang.NullPointerException: Cannot invoke "String.toString()" because "" is null - at testjava.Main.handle(Main.java:47) - at testjava.Main.printMe(Main.java:19) - at testjava.Main.main(Main.java:10) -``` - - -* To enable logging to detect multi-line exceptions and reassemble them into a single log entry, ensure that the ClusterLogForwarder Custom Resource (CR) contains a detectMultilineErrors field under the .spec.filters. - - -```yaml -apiVersion: "observability.openshift.io/v1" -kind: ClusterLogForwarder -metadata: - name: - namespace: -spec: - serviceAccount: - name: - filters: - - name: - type: detectMultilineException - pipelines: - - inputRefs: - - - name: - filterRefs: - - - outputRefs: - - -``` - - -## Details - -When log messages appear as a consecutive sequence forming an exception stack trace, they are combined into a single, unified log record. The first log message’s content is replaced with the concatenated content of all the message fields in the sequence. 
- -The collector supports the following languages: - -* Java -* JS -* Ruby -* Python -* Golang -* PHP -* Dart - -# Forwarding logs over HTTP - -To enable forwarding logs over HTTP, specify http as the output type in the ClusterLogForwarder custom resource (CR). - -* Create or edit the ClusterLogForwarder CR using the template below: -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - name: - namespace: -spec: - managementState: Managed - outputs: - - name: - type: http - http: - headers: 1 - h1: v1 - h2: v2 - authentication: - username: - key: username - secretName: - password: - key: password - secretName: - timeout: 300 - proxyURL: 2 - url: 3 - tls: - insecureSkipVerify: 4 - ca: - key: - secretName: 5 - pipelines: - - inputRefs: - - application - name: pipe1 - outputRefs: - - 6 - serviceAccount: - name: 7 -``` - -Additional headers to send with the log record. -Optional: URL of the HTTP/HTTPS proxy that should be used to forward logs over http or https from this output. This setting overrides any default proxy settings for the cluster or the node. -Destination address for logs. -Values are either true or false. -Secret name for destination credentials. -This value should be the same as the output name. -The name of your service account. - -# Forwarding logs using the syslog protocol - -You can use the syslog RFC3164 or RFC5424 protocol to send a copy of your logs to an external log aggregator that is configured to accept the protocol instead of, or in addition to, the default Elasticsearch log store. You are responsible for configuring the external log aggregator, such as a syslog server, to receive the logs from Red Hat OpenShift Container Platform. - -To configure log forwarding using the syslog protocol, you must create a ClusterLogForwarder custom resource (CR) with one or more outputs to the syslog servers, and pipelines that use those outputs. The syslog output can use a UDP, TCP, or TLS connection. - -* You must have a logging server that is configured to receive the logging data using the specified protocol or format. - -1. Create or edit a YAML file that defines the ClusterLogForwarder CR object: - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - name: collector -spec: - managementState: Managed - outputs: - - name: rsyslog-east 1 - syslog: - appName: 2 - enrichment: KubernetesMinimal - facility: 3 - msgId: 4 - payloadKey: 5 - procId: 6 - rfc: 7 - severity: informational 8 - tuning: - deliveryMode: 9 - url: 10 - tls: 11 - ca: - key: ca-bundle.crt - secretName: syslog-secret - type: syslog - pipelines: - - inputRefs: 12 - - application - name: syslog-east 13 - outputRefs: - - rsyslog-east - serviceAccount: 14 - name: logcollector -``` - -Specify a name for the output. -Optional: Specify the value for the APP-NAME part of the syslog message header. The value must conform with The Syslog Protocol. The value can be a combination of static and dynamic values consisting of field paths followed by ||, and then followed by another field path or a static value. The maximum length of the final values is truncated to 48 characters. You must encase a dynamic value curly brackets and the value must be followed with a static fallback value separated with ||. Static values can only contain alphanumeric characters along with dashes, underscores, dots and forward slashes. Example value: -{.||"none"}. -Optional: Specify the value for Facility part of the syslog-msg header. 
-Optional: Specify the value for MSGID part of the syslog-msg header. The value can be a combination of static and dynamic values consisting of field paths followed by ||, and then followed by another field path or a static value. The maximum length of the final values is truncated to 32 characters. You must encase a dynamic value curly brackets and the value must be followed with a static fallback value separated with ||. Static values can only contain alphanumeric characters along with dashes, underscores, dots and forward slashes. Example value: -{.||"none"}. -Optional: Specify the record field to use as the payload. The payloadKey value must be a single field path encased in single curly brackets {}. Example: {.}. -Optional: Specify the value for the PROCID part of the syslog message header. The value must conform with The Syslog Protocol. The value can be a combination of static and dynamic values consisting of field paths followed by ||, and then followed by another field path or a static value. The maximum length of the final values is truncated to 48 characters. You must encase a dynamic value curly brackets and the value must be followed with a static fallback value separated with ||. Static values can only contain alphanumeric characters along with dashes, underscores, dots and forward slashes. Example value: -{.||"none"}. -Optional: Set the RFC that the generated messages conform to. The value can be RFC3164 or RFC5424. -Optional: Set the severity level for the message. For more information, see The Syslog Protocol. -Optional: Set the delivery mode for log forwarding. The value can be either AtLeastOnce, or AtMostOnce. -Specify the absolute URL with a scheme. Valid schemes are: tcp, tls, and udp. For example: tls://syslog-receiver.example.com:6514. -Specify the settings for controlling options of the transport layer security (TLS) client connections. -Specify which log types to forward by using the pipeline: application, infrastructure, or audit. -Specify a name for the pipeline. -The name of your service account. -2. Create the CR object: - -```terminal -$ oc create -f .yaml -``` - - -## Adding log source information to the message output - -You can add namespace_name, pod_name, and container_name elements to the message field of the record by adding the enrichment field to your ClusterLogForwarder custom resource (CR). - - -```yaml -# ... - spec: - outputs: - - name: syslogout - syslog: - enrichment: KubernetesMinimal - facility: user - payloadKey: message - rfc: RFC3164 - severity: debug - type: syslog - url: tls://syslog-receiver.example.com:6514 - pipelines: - - inputRefs: - - application - name: test-app - outputRefs: - - syslogout -# ... -``` - - - -[NOTE] ----- -This configuration is compatible with both RFC3164 and RFC5424. ----- - - -```text - 2025-03-03T11:48:01+00:00 example-worker-x syslogsyslogserverd846bb9b: {...} -``` - - - -```text -2025-03-03T11:48:01+00:00 example-worker-x syslogsyslogserverd846bb9b: namespace_name=cakephp-project container_name=mysql pod_name=mysql-1-wr96h,message: {...} -``` - - -# Configuring content filters to drop unwanted log records - -When the drop filter is configured, the log collector evaluates log streams according to the filters before forwarding. The collector drops unwanted log records that match the specified configuration. - -1. Add a configuration for a filter to the filters spec in the ClusterLogForwarder CR. 
- -The following example shows how to configure the ClusterLogForwarder CR to drop log records based on regular expressions: -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: -# ... -spec: - serviceAccount: - name: - filters: - - name: - type: drop 1 - drop: 2 - - test: 3 - - field: .kubernetes.labels."foo-bar/baz" 4 - matches: .+ 5 - - field: .kubernetes.pod_name - notMatches: "my-pod" 6 - pipelines: - - name: 7 - filterRefs: [""] -# ... -``` - -Specifies the type of filter. The drop filter drops log records that match the filter configuration. -Specifies configuration options for applying the drop filter. -Specifies the configuration for tests that are used to evaluate whether a log record is dropped. -* If all the conditions specified for a test are true, the test passes and the log record is dropped. -* When multiple tests are specified for the drop filter configuration, if any of the tests pass, the record is dropped. -* If there is an error evaluating a condition, for example, the field is missing from the log record being evaluated, that condition evaluates to false. -Specifies a dot-delimited field path, which is a path to a field in the log record. The path can contain alpha-numeric characters and underscores (a-zA-Z0-9_), for example, .kubernetes.namespace_name. If segments contain characters outside of this range, the segment must be in quotes, for example, .kubernetes.labels."foo.bar-bar/baz". You can include multiple field paths in a single test configuration, but they must all evaluate to true for the test to pass and the drop filter to be applied. -Specifies a regular expression. If log records match this regular expression, they are dropped. You can set either the matches or notMatches condition for a single field path, but not both. -Specifies a regular expression. If log records do not match this regular expression, they are dropped. You can set either the matches or notMatches condition for a single field path, but not both. -Specifies the pipeline that the drop filter is applied to. -2. Apply the ClusterLogForwarder CR by running the following command: - -```terminal -$ oc apply -f .yaml -``` - - -The following additional example shows how you can configure the drop filter to only keep higher priority log records: - - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: -# ... -spec: - serviceAccount: - name: - filters: - - name: important - type: drop - drop: - - test: - - field: .message - notMatches: "(?i)critical|error" - - field: .level - matches: "info|warning" -# ... -``` - - -In addition to including multiple field paths in a single test configuration, you can also include additional tests that are treated as OR checks. In the following example, records are dropped if either test configuration evaluates to true. However, for the second test configuration, both field specs must be true for it to be evaluated to true: - - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: -# ... -spec: - serviceAccount: - name: - filters: - - name: important - type: drop - drop: - - test: - - field: .kubernetes.namespace_name - matches: "^open" - - test: - - field: .log_type - matches: "application" - - field: .kubernetes.pod_name - notMatches: "my-pod" -# ... 
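# In this example, a record is dropped if the first test matches
# (the namespace name starts with "open"), or if both conditions of the
# second test are true: .log_type is "application" and the pod name is
# not "my-pod".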
-``` - - -# Overview of API audit filter - -OpenShift API servers generate audit events for each API call, detailing the request, response, and the identity of the requester, leading to large volumes of data. The API Audit filter uses rules to enable the exclusion of non-essential events and the reduction of event size, facilitating a more manageable audit trail. Rules are checked in order, and checking stops at the first match. The amount of data that is included in an event is determined by the value of the level field: - -* None: The event is dropped. -* Metadata: Audit metadata is included, request and response bodies are removed. -* Request: Audit metadata and the request body are included, the response body is removed. -* RequestResponse: All data is included: metadata, request body and response body. The response body can be very large. For example, oc get pods -A generates a response body containing the YAML description of every pod in the cluster. - -The ClusterLogForwarder custom resource (CR) uses the same format as the standard Kubernetes audit policy, while providing the following additional functions: - -Wildcards:: Names of users, groups, namespaces, and resources can have a leading or trailing * asterisk character. For example, the namespace openshift-\* matches openshift-apiserver or openshift-authentication. Resource \*/status matches Pod/status or Deployment/status. -Default Rules:: Events that do not match any rule in the policy are filtered as follows: -* Read-only system events such as get, list, and watch are dropped. -* Service account write events that occur within the same namespace as the service account are dropped. -* All other events are forwarded, subject to any configured rate limits. - -To disable these defaults, either end your rules list with a rule that has only a level field or add an empty rule. - -Omit Response Codes:: A list of integer status codes to omit. You can drop events based on the HTTP status code in the response by using the OmitResponseCodes field, which lists HTTP status codes for which no events are created. The default value is [404, 409, 422, 429]. If the value is an empty list, [], then no status codes are omitted. - -The ClusterLogForwarder CR audit policy acts in addition to the Red Hat OpenShift Container Platform audit policy. The ClusterLogForwarder CR audit filter changes what the log collector forwards and provides the ability to filter by verb, user, group, namespace, or resource. You can create multiple filters to send different summaries of the same audit stream to different places. For example, you can send a detailed stream to the local cluster log store and a less detailed stream to a remote site. - - -[NOTE] ----- -You must have a cluster role collect-audit-logs to collect the audit logs. The following example provided is intended to illustrate the range of rules possible in an audit policy and is not a recommended configuration. ----- - - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - name: - namespace: -spec: - serviceAccount: - name: - pipelines: - - name: my-pipeline - inputRefs: audit 1 - filterRefs: my-policy 2 - filters: - - name: my-policy - type: kubeAPIAudit - kubeAPIAudit: - # Don't generate audit events for all requests in RequestReceived stage. 
- omitStages: - - "RequestReceived" - - rules: - # Log pod changes at RequestResponse level - - level: RequestResponse - resources: - - group: "" - resources: ["pods"] - - # Log "pods/log", "pods/status" at Metadata level - - level: Metadata - resources: - - group: "" - resources: ["pods/log", "pods/status"] - - # Don't log requests to a configmap called "controller-leader" - - level: None - resources: - - group: "" - resources: ["configmaps"] - resourceNames: ["controller-leader"] - - # Don't log watch requests by the "system:kube-proxy" on endpoints or services - - level: None - users: ["system:kube-proxy"] - verbs: ["watch"] - resources: - - group: "" # core API group - resources: ["endpoints", "services"] - - # Don't log authenticated requests to certain non-resource URL paths. - - level: None - userGroups: ["system:authenticated"] - nonResourceURLs: - - "/api*" # Wildcard matching. - - "/version" - - # Log the request body of configmap changes in kube-system. - - level: Request - resources: - - group: "" # core API group - resources: ["configmaps"] - # This rule only applies to resources in the "kube-system" namespace. - # The empty string "" can be used to select non-namespaced resources. - namespaces: ["kube-system"] - - # Log configmap and secret changes in all other namespaces at the Metadata level. - - level: Metadata - resources: - - group: "" # core API group - resources: ["secrets", "configmaps"] - - # Log all other resources in core and extensions at the Request level. - - level: Request - resources: - - group: "" # core API group - - group: "extensions" # Version of group should NOT be included. - - # A catch-all rule to log all other requests at the Metadata level. - - level: Metadata -``` - - -The log types that are collected. The value for this field can be audit for audit logs, application for application logs, infrastructure for infrastructure logs, or a named input that has been defined for your application. -The name of your audit policy. - -# Filtering application logs at input by including the label expressions or a matching label key and values - -You can include the application logs based on the label expressions or a matching label key and its values by using the input selector. - -1. Add a configuration for a filter to the input spec in the ClusterLogForwarder CR. - -The following example shows how to configure the ClusterLogForwarder CR to include logs based on label expressions or matched label key/values: -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -# ... -spec: - serviceAccount: - name: - inputs: - - name: mylogs - application: - selector: - matchExpressions: - - key: env 1 - operator: In 2 - values: ["prod", "qa"] 3 - - key: zone - operator: NotIn - values: ["east", "west"] - matchLabels: 4 - app: one - name: app1 - type: application -# ... -``` - -Specifies the label key to match. -Specifies the operator. Valid values include: In, NotIn, Exists, and DoesNotExist. -Specifies an array of string values. If the operator value is either Exists or DoesNotExist, the value array must be empty. -Specifies an exact key or value mapping. -2. Apply the ClusterLogForwarder CR by running the following command: - -```terminal -$ oc apply -f .yaml -``` - - -# Configuring content filters to prune log records - -When the prune filter is configured, the log collector evaluates log streams according to the filters before forwarding. 
The collector prunes log records by removing low-value fields such as pod annotations. - -1. Add a configuration for a filter to the prune spec in the ClusterLogForwarder CR. - -The following example shows how to configure the ClusterLogForwarder CR to prune log records based on field paths: - -[IMPORTANT] ----- -If both are specified, records are pruned based on the notIn array first, which takes precedence over the in array. After records have been pruned by using the notIn array, they are then pruned by using the in array. ----- -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: -# ... -spec: - serviceAccount: - name: - filters: - - name: - type: prune 1 - prune: 2 - in: [.kubernetes.annotations, .kubernetes.namespace_id] 3 - notIn: [.kubernetes,.log_type,.message,."@timestamp"] 4 - pipelines: - - name: 5 - filterRefs: [""] -# ... -``` - -Specify the type of filter. The prune filter prunes log records by configured fields. -Specify configuration options for applying the prune filter. The in and notIn fields are specified as arrays of dot-delimited field paths, which are paths to fields in log records. These paths can contain alpha-numeric characters and underscores (a-zA-Z0-9_), for example, .kubernetes.namespace_name. If segments contain characters outside of this range, the segment must be in quotes, for example, .kubernetes.labels."foo.bar-bar/baz". -Optional: Any fields that are specified in this array are removed from the log record. -Optional: Any fields that are not specified in this array are removed from the log record. -Specify the pipeline that the prune filter is applied to. - -[NOTE] ----- -The prune filter exempts the .log_type, .log_source, and .message fields. ----- -2. Apply the ClusterLogForwarder CR by running the following command: - -```terminal -$ oc apply -f .yaml -``` - - -# Filtering the audit and infrastructure log inputs by source - -You can define the list of audit and infrastructure sources to collect the logs by using the input selector. - -1. Add a configuration to define the audit and infrastructure sources in the ClusterLogForwarder CR. - -The following example shows how to configure the ClusterLogForwarder CR to define audit and infrastructure sources: -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -# ... -spec: - serviceAccount: - name: - inputs: - - name: mylogs1 - type: infrastructure - infrastructure: - sources: 1 - - node - - name: mylogs2 - type: audit - audit: - sources: 2 - - kubeAPI - - openshiftAPI - - ovn -# ... -``` - -Specifies the list of infrastructure sources to collect. The valid sources include: -* node: Journal log from the node -* container: Logs from the workloads deployed in the namespaces -Specifies the list of audit sources to collect. The valid sources include: -* kubeAPI: Logs from the Kubernetes API servers -* openshiftAPI: Logs from the OpenShift API servers -* auditd: Logs from a node auditd service -* ovn: Logs from an open virtual network service -2. Apply the ClusterLogForwarder CR by running the following command: - -```terminal -$ oc apply -f .yaml -``` - - -# Filtering application logs at input by including or excluding the namespace or container name - -You can include or exclude the application logs based on the namespace and container name by using the input selector. - -1. Add a configuration to include or exclude the namespace and container names in the ClusterLogForwarder CR.
- -The following example shows how to configure the ClusterLogForwarder CR to include or exclude namespaces and container names: -Example ClusterLogForwarder CR - -```yaml -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -# ... -spec: - serviceAccount: - name: - inputs: - - name: mylogs - application: - includes: - - namespace: "my-project" 1 - container: "my-container" 2 - excludes: - - container: "other-container*" 3 - namespace: "other-namespace" 4 - type: application -# ... -``` - -Specifies that the logs are only collected from these namespaces. -Specifies that the logs are only collected from these containers. -Specifies the pattern of namespaces to ignore when collecting the logs. -Specifies the set of containers to ignore when collecting the logs. - -[NOTE] ----- -The excludes field takes precedence over the includes field. ----- -2. Apply the ClusterLogForwarder CR by running the following command: - -```terminal -$ oc apply -f .yaml -``` diff --git a/ocp-product-docs-plaintext/4.18/observability/logging/logging-6.2/log6x-configuring-lokistack-otlp-6.2.txt b/ocp-product-docs-plaintext/4.18/observability/logging/logging-6.2/log6x-configuring-lokistack-otlp-6.2.txt deleted file mode 100644 index c1df8b35..00000000 --- a/ocp-product-docs-plaintext/4.18/observability/logging/logging-6.2/log6x-configuring-lokistack-otlp-6.2.txt +++ /dev/null @@ -1,172 +0,0 @@ -# OTLP data ingestion in Loki - - -You can use an API endpoint by using the OpenTelemetry Protocol (OTLP) with Logging. As OTLP is a standardized format not specifically designed for Loki, OTLP requires an additional Loki configuration to map data format of OpenTelemetry to data model of Loki. OTLP lacks concepts such as stream labels or structured metadata. Instead, OTLP provides metadata about log entries as attributes, grouped into the following three categories: -* Resource -* Scope -* Log -You can set metadata for multiple entries simultaneously or individually as needed. - -# Configuring LokiStack for OTLP data ingestion - - -[IMPORTANT] ----- -{FeatureName} is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. -For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. ----- - -To configure a LokiStack custom resource (CR) for OTLP ingestion, follow these steps: - -* Ensure that your Loki setup supports structured metadata, introduced in schema version 13 to enable OTLP log ingestion. - -1. Set the schema version: -* When creating a new LokiStack CR, set version: v13 in the storage schema configuration. - -[NOTE] ----- -For existing configurations, add a new schema entry with version: v13 and an effectiveDate in the future. For more information on updating schema versions, see Upgrading Schemas (Grafana documentation). ----- -2. Configure the storage schema as follows: -Example configure storage schema - -```yaml -# ... -spec: - storage: - schemas: - - version: v13 - effectiveDate: 2024-10-25 -``` - - -Once the effectiveDate has passed, the v13 schema takes effect, enabling your LokiStack to store structured metadata. 
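For an existing LokiStack CR that currently uses an older schema version, for example v12, a minimal sketch of the migration looks like the following; the dates are illustrative only, and the new entry must use an effectiveDate in the future:

```yaml
# ...
spec:
  storage:
    schemas:
    - version: v12                  # existing entry, keep unchanged
      effectiveDate: "2024-03-01"
    - version: v13                  # new entry; takes effect after this date passes
      effectiveDate: "2025-01-01"
```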
- -# Attribute mapping - -When you set the Loki Operator to the openshift-logging mode, Loki Operator automatically applies a default set of attribute mappings. These mappings align specific OTLP attributes with stream labels and structured metadata of Loki. - -For typical setups, these default mappings are sufficient. However, you might need to customize attribute mapping in the following cases: - -* Using a custom collector: If your setup includes a custom collector that generates additional attributes that you do not want to store, consider customizing the mapping to ensure these attributes are dropped by Loki. -* Adjusting attribute detail levels: If the default attribute set is more detailed than necessary, you can reduce it to essential attributes only. This can avoid excessive data storage and streamline the logging process. - -## Custom attribute mapping for OpenShift - -When using the Loki Operator in openshift-logging mode, attribute mapping follow OpenShift default values, but you can configure custom mappings to adjust default values. -In the openshift-logging mode, you can configure custom attribute mappings globally for all tenants or for individual tenants as needed. When you define custom mappings, they are appended to the OpenShift default values. If you do not need default labels, you can disable them in the tenant configuration. - - -[NOTE] ----- -A major difference between the Loki Operator and Loki lies in inheritance handling. Loki copies only default_resource_attributes_as_index_labels to tenants by default, while the Loki Operator applies the entire global configuration to each tenant in the openshift-logging mode. ----- - -Within LokiStack, attribute mapping configuration is managed through the limits setting. See the following example LokiStack configuration: - - -```yaml -# ... -spec: - limits: - global: - otlp: {} 1 - tenants: - application: 2 - otlp: {} -``` - - -Defines global OTLP attribute configuration. -Defines the OTLP attribute configuration for the application tenant within the openshift-logging mode. You can also configure infrastructure and audit tenants in addition to application tenants. - - -[NOTE] ----- -You can use both global and per-tenant OTLP configurations for mapping attributes to stream labels. ----- - -Stream labels derive only from resource-level attributes, which the LokiStack resource structure reflects. See the following LokiStack example configuration: - - -```yaml -spec: - limits: - global: - otlp: - streamLabels: - resourceAttributes: - - name: "k8s.namespace.name" - - name: "k8s.pod.name" - - name: "k8s.container.name" -``` - - -You can drop attributes of type resource, scope, or log from the log entry. - - -```yaml -# ... -spec: - limits: - global: - otlp: - streamLabels: -# ... - drop: - resourceAttributes: - - name: "process.command_line" - - name: "k8s\\.pod\\.labels\\..+" - regex: true - scopeAttributes: - - name: "service.name" - logAttributes: - - name: "http.route" -``` - - -You can use regular expressions by setting regex: true to apply a configuration for attributes with similar names. - - -[IMPORTANT] ----- -Avoid using regular expressions for stream labels, as this can increase data volume. ----- - -Attributes that are not explicitly set as stream labels or dropped from the entry are saved as structured metadata by default. - -## Customizing OpenShift defaults - -In the openshift-logging mode, certain attributes are required and cannot be removed from the configuration due to their role in OpenShift functions. 
Other attributes, labeled recommended, might be dropped if performance is impacted. For information about the attributes, see OpenTelemetry data model attributes. - -When using the openshift-logging mode without custom attributes, you can achieve immediate compatibility with OpenShift tools. If additional attributes are needed as stream labels or some attributes need to be droped, use custom configuration. Custom configurations can merge with default configurations. - -## Removing recommended attributes - -To reduce default attributes in the openshift-logging mode, disable recommended attributes: - - -```yaml -# ... -spec: - tenants: - mode: openshift-logging - openshift: - otlp: - disableRecommendedAttributes: true 1 -``` - - -Set disableRecommendedAttributes: true to remove recommended attributes, which limits default attributes to the required attributes or stream labels. - -[NOTE] ----- -This setting might negatively impact query performance, as it removes default stream labels. You must pair this option with a custom attribute configuration to retain attributes essential for queries. ----- - -# Additional resources - -* Loki labels (Grafana documentation) -* Structured metadata (Grafana documentation) -* OpenTelemetry data model -* OpenTelemetry attribute (OpenTelemetry documentation) \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.18/observability/logging/logging-6.2/log6x-loki-6.2.txt b/ocp-product-docs-plaintext/4.18/observability/logging/logging-6.2/log6x-loki-6.2.txt deleted file mode 100644 index ce7d9b5d..00000000 --- a/ocp-product-docs-plaintext/4.18/observability/logging/logging-6.2/log6x-loki-6.2.txt +++ /dev/null @@ -1,764 +0,0 @@ -# Storing logs with LokiStack - - -You can configure a LokiStack custom resource (CR) to store application, audit, and infrastructure-related logs. -Loki is a horizontally scalable, highly available, multi-tenant log aggregation system offered as a GA log store for logging for Red Hat OpenShift that can be visualized with the OpenShift Observability UI. The Loki configuration provided by OpenShift Logging is a short-term log store designed to enable users to perform fast troubleshooting with the collected logs. For that purpose, the logging for Red Hat OpenShift configuration of Loki has short-term storage, and is optimized for very recent queries. For long-term storage or queries over a long time period, users should look to log stores external to their cluster. - -# Loki deployment sizing - -Sizing for Loki follows the format of 1x. where the value 1x is number of instances and specifies performance capabilities. - -The 1x.pico configuration defines a single Loki deployment with minimal resource and limit requirements, offering high availability (HA) support for all Loki components. This configuration is suited for deployments that do not require a single replication factor or auto-compaction. - -Disk requests are similar across size configurations, allowing customers to test different sizes to determine the best fit for their deployment needs. - - -[IMPORTANT] ----- -It is not possible to change the number 1x for the deployment size. ----- - - - -# Prerequisites - -* You have installed the Loki Operator by using the command-line interface (CLI) or web console. -* You have created a serviceAccount CR in the same namespace as the ClusterLogForwarder CR. -* You have assigned the collect-audit-logs, collect-application-logs, and collect-infrastructure-logs cluster roles to the serviceAccount CR. 
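For example, assuming the collector service account is named collector and is created in the openshift-logging namespace, as in the earlier quick-start examples, you can bind the cluster roles with the following commands:

```terminal
$ oc adm policy add-cluster-role-to-user collect-application-logs -z collector -n openshift-logging
$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector -n openshift-logging
$ oc adm policy add-cluster-role-to-user collect-audit-logs -z collector -n openshift-logging
```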
- -# Core set up and configuration - -Use role-based access controls, basic monitoring, and pod placement to deploy Loki. - -# Authorizing LokiStack rules RBAC permissions - -Administrators can allow users to create and manage their own alerting and recording rules by binding cluster roles to usernames. -Cluster roles are defined as ClusterRole objects that contain necessary role-based access control (RBAC) permissions for users. - -The following cluster roles for alerting and recording rules are available for LokiStack: - - - -## Examples - -To apply cluster roles for a user, you must bind an existing cluster role to a specific username. - -Cluster roles can be cluster or namespace scoped, depending on which type of role binding you use. -When a RoleBinding object is used, as when using the oc adm policy add-role-to-user command, the cluster role only applies to the specified namespace. -When a ClusterRoleBinding object is used, as when using the oc adm policy add-cluster-role-to-user command, the cluster role applies to all namespaces in the cluster. - -The following example command gives the specified user create, read, update and delete (CRUD) permissions for alerting rules in a specific namespace in the cluster: - - -```terminal -$ oc adm policy add-role-to-user alertingrules.loki.grafana.com-v1-admin -n -``` - - -The following command gives the specified user administrator permissions for alerting rules in all namespaces: - - -```terminal -$ oc adm policy add-cluster-role-to-user alertingrules.loki.grafana.com-v1-admin -``` - - -# Creating a log-based alerting rule with Loki - -The AlertingRule CR contains a set of specifications and webhook validation definitions to declare groups of alerting rules for a single LokiStack instance. In addition, the webhook validation definition provides support for rule validation conditions: - -* If an AlertingRule CR includes an invalid interval period, it is an invalid alerting rule -* If an AlertingRule CR includes an invalid for period, it is an invalid alerting rule. -* If an AlertingRule CR includes an invalid LogQL expr, it is an invalid alerting rule. -* If an AlertingRule CR includes two groups with the same name, it is an invalid alerting rule. -* If none of the above applies, an alerting rule is considered valid. - - - -1. Create an AlertingRule custom resource (CR): -Example infrastructure AlertingRule CR - -```yaml - apiVersion: loki.grafana.com/v1 - kind: AlertingRule - metadata: - name: loki-operator-alerts - namespace: openshift-operators-redhat 1 - labels: 2 - openshift.io/: "true" - spec: - tenantID: "infrastructure" 3 - groups: - - name: LokiOperatorHighReconciliationError - rules: - - alert: HighPercentageError - expr: | 4 - sum(rate({kubernetes_namespace_name="openshift-operators-redhat", kubernetes_pod_name=~"loki-operator-controller-manager.*"} |= "error" [1m])) by (job) - / - sum(rate({kubernetes_namespace_name="openshift-operators-redhat", kubernetes_pod_name=~"loki-operator-controller-manager.*"}[1m])) by (job) - > 0.01 - for: 10s - labels: - severity: critical 5 - annotations: - summary: High Loki Operator Reconciliation Errors 6 - description: High Loki Operator Reconciliation Errors 7 -``` - -The namespace where this AlertingRule CR is created must have a label matching the LokiStack spec.rules.namespaceSelector definition. -The labels block must match the LokiStack spec.rules.selector definition. -AlertingRule CRs for infrastructure tenants are only supported in the openshift-*, kube-\*, or default namespaces. 
-The value for kubernetes_namespace_name: must match the value for metadata.namespace. -The value of this mandatory field must be critical, warning, or info. -This field is mandatory. -This field is mandatory. -Example application AlertingRule CR - -```yaml - apiVersion: loki.grafana.com/v1 - kind: AlertingRule - metadata: - name: app-user-workload - namespace: app-ns 1 - labels: 2 - openshift.io/: "true" - spec: - tenantID: "application" - groups: - - name: AppUserWorkloadHighError - rules: - - alert: - expr: | 3 - sum(rate({kubernetes_namespace_name="app-ns", kubernetes_pod_name=~"podName.*"} |= "error" [1m])) by (job) - for: 10s - labels: - severity: critical 4 - annotations: - summary: 5 - description: 6 -``` - -The namespace where this AlertingRule CR is created must have a label matching the LokiStack spec.rules.namespaceSelector definition. -The labels block must match the LokiStack spec.rules.selector definition. -Value for kubernetes_namespace_name: must match the value for metadata.namespace. -The value of this mandatory field must be critical, warning, or info. -The value of this mandatory field is a summary of the rule. -The value of this mandatory field is a detailed description of the rule. -2. Apply the AlertingRule CR: - -```terminal -$ oc apply -f .yaml -``` - - -# Configuring Loki to tolerate memberlist creation failure - -In an Red Hat OpenShift Container Platform cluster, administrators generally use a non-private IP network range. As a result, the LokiStack memberlist configuration fails because, by default, it only uses private IP networks. - -As an administrator, you can select the pod network for the memberlist configuration. You can modify the LokiStack custom resource (CR) to use the podIP address in the hashRing spec. To configure the LokiStack CR, use the following command: - - -```terminal -$ oc patch LokiStack logging-loki -n openshift-logging --type=merge -p '{"spec": {"hashRing":{"memberlist":{"instanceAddrType":"podIP"},"type":"memberlist"}}}' -``` - - - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: -# ... - hashRing: - type: memberlist - memberlist: - instanceAddrType: podIP -# ... -``` - - -# Enabling stream-based retention with Loki - -You can configure retention policies based on log streams. Rules for these may be set globally, per-tenant, or both. If you configure both, tenant rules apply before global rules. - - -[IMPORTANT] ----- -If there is no retention period defined on the s3 bucket or in the LokiStack custom resource (CR), then the logs are not pruned and they stay in the s3 bucket forever, which might fill up the s3 storage. ----- - - -[NOTE] ----- -Schema v13 is recommended. ----- - -1. Create a LokiStack CR: -* Enable stream-based retention globally as shown in the following example: -Example global stream-based retention for AWS - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: - limits: - global: 1 - retention: 2 - days: 20 - streams: - - days: 4 - priority: 1 - selector: '{kubernetes_namespace_name=~"test.+"}' 3 - - days: 1 - priority: 1 - selector: '{log_type="infrastructure"}' - managementState: Managed - replicationFactor: 1 - size: 1x.small - storage: - schemas: - - effectiveDate: "2020-10-11" - version: v13 - secret: - name: logging-loki-s3 - type: aws - storageClassName: gp3-csi - tenants: - mode: openshift-logging -``` - -Sets retention policy for all log streams. 
Note: This field does not impact the retention period for stored logs in object storage. -Retention is enabled in the cluster when this block is added to the CR. -Contains the LogQL query used to define the log stream. -* Enable stream-based retention on a per-tenant basis as shown in the following example: -Example per-tenant stream-based retention for AWS - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: - limits: - global: - retention: - days: 20 - tenants: 1 - application: - retention: - days: 1 - streams: - - days: 4 - selector: '{kubernetes_namespace_name=~"test.+"}' 2 - infrastructure: - retention: - days: 5 - streams: - - days: 1 - selector: '{kubernetes_namespace_name=~"openshift-cluster.+"}' - managementState: Managed - replicationFactor: 1 - size: 1x.small - storage: - schemas: - - effectiveDate: "2020-10-11" - version: v13 - secret: - name: logging-loki-s3 - type: aws - storageClassName: gp3-csi - tenants: - mode: openshift-logging -``` - -Sets retention policy by tenant. Valid tenant types are application, audit, and infrastructure. -Contains the LogQL query used to define the log stream. -2. Apply the LokiStack CR: - -```terminal -$ oc apply -f .yaml -``` - - -[NOTE] ----- -This does not manage the retention for stored logs. You configure global retention periods for stored logs, up to a supported maximum of 30 days, with your object storage. ----- - -# Loki pod placement - -You can control which nodes the Loki pods run on, and prevent other workloads from using those nodes, by using tolerations or node selectors on the pods. - -You can apply tolerations to the log store pods with the LokiStack custom resource (CR) and apply taints to a node with the node specification. A taint on a node is a key:value pair that instructs the node to repel all pods that do not allow the taint. Using a specific key:value pair that is not on other pods ensures that only the log store pods can run on that node. - - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: -# ... - template: - compactor: 1 - nodeSelector: - node-role.kubernetes.io/infra: "" 2 - distributor: - nodeSelector: - node-role.kubernetes.io/infra: "" - gateway: - nodeSelector: - node-role.kubernetes.io/infra: "" - indexGateway: - nodeSelector: - node-role.kubernetes.io/infra: "" - ingester: - nodeSelector: - node-role.kubernetes.io/infra: "" - querier: - nodeSelector: - node-role.kubernetes.io/infra: "" - queryFrontend: - nodeSelector: - node-role.kubernetes.io/infra: "" - ruler: - nodeSelector: - node-role.kubernetes.io/infra: "" -# ... -``` - - -Specifies the component pod type that applies to the node selector. -Specifies the pods that are moved to nodes containing the defined label. - - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: -# ...
- template: - compactor: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - distributor: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - indexGateway: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - ingester: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - querier: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - queryFrontend: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - ruler: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved - gateway: - nodeSelector: - node-role.kubernetes.io/infra: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/infra - value: reserved - - effect: NoExecute - key: node-role.kubernetes.io/infra - value: reserved -# ... -``` - - -To configure the nodeSelector and tolerations fields of the LokiStack (CR), you can use the oc explain command to view the description and fields for a particular resource: - - -```terminal -$ oc explain lokistack.spec.template -``` - - - -```text -KIND: LokiStack -VERSION: loki.grafana.com/v1 - -RESOURCE: template - -DESCRIPTION: - Template defines the resource/limits/tolerations/nodeselectors per - component - -FIELDS: - compactor - Compactor defines the compaction component spec. - - distributor - Distributor defines the distributor component spec. -... -``` - - -For more detailed information, you can add a specific field: - - -```terminal -$ oc explain lokistack.spec.template.compactor -``` - - - -```text -KIND: LokiStack -VERSION: loki.grafana.com/v1 - -RESOURCE: compactor - -DESCRIPTION: - Compactor defines the compaction component spec. - -FIELDS: - nodeSelector - NodeSelector defines the labels required by a node to schedule the - component onto it. -... -``` - - -# Enhanced reliability and performance - -Use the following configurations to ensure reliability and efficiency of Loki in production. - -# Enabling authentication to cloud-based log stores using short-lived tokens - -Workload identity federation enables authentication to cloud-based log stores using short-lived tokens. - -* Use one of the following options to enable authentication: -* If you use the Red Hat OpenShift Container Platform web console to install the Loki Operator, clusters that use short-lived tokens are automatically detected. 
You are prompted to create roles and supply the data required for the Loki Operator to create a CredentialsRequest object, which populates a secret. -* If you use the OpenShift CLI (`oc`) to install the Loki Operator, you must manually create a Subscription object using the appropriate template for your storage provider, as shown in the following examples. This authentication strategy is only supported for the storage providers indicated. -Example Azure sample subscription - -```yaml -apiVersion: operators.coreos.com/v1alpha1 -kind: Subscription -metadata: - name: loki-operator - namespace: openshift-operators-redhat -spec: - channel: "stable-6.0" - installPlanApproval: Manual - name: loki-operator - source: redhat-operators - sourceNamespace: openshift-marketplace - config: - env: - - name: CLIENTID - value: - - name: TENANTID - value: - - name: SUBSCRIPTIONID - value: - - name: REGION - value: -``` - -Example AWS sample subscription - -```yaml -apiVersion: operators.coreos.com/v1alpha1 -kind: Subscription -metadata: - name: loki-operator - namespace: openshift-operators-redhat -spec: - channel: "stable-6.0" - installPlanApproval: Manual - name: loki-operator - source: redhat-operators - sourceNamespace: openshift-marketplace - config: - env: - - name: ROLEARN - value: -``` - - -# Configuring Loki to tolerate node failure - -The Loki Operator supports setting pod anti-affinity rules to request that pods of the same component are scheduled on different available nodes in the cluster. - -Affinity is a property of pods that controls the nodes on which they prefer to be scheduled. Anti-affinity is a property of pods -that prevents a pod from being scheduled on a node. - -In Red Hat OpenShift Container Platform, pod affinity and pod anti-affinity allow you to constrain which nodes your pod is eligible to be scheduled on based on the key-value labels on other pods. - -The Operator sets default, preferred podAntiAffinity rules for all Loki components, which includes the compactor, distributor, gateway, indexGateway, ingester, querier, queryFrontend, and ruler components. - -You can override the preferred podAntiAffinity settings for Loki components by configuring required settings in the requiredDuringSchedulingIgnoredDuringExecution field: - - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: -# ... - template: - ingester: - podAntiAffinity: - # ... - requiredDuringSchedulingIgnoredDuringExecution: 1 - - labelSelector: - matchLabels: 2 - app.kubernetes.io/component: ingester - topologyKey: kubernetes.io/hostname -# ... -``` - - -The stanza to define a required rule. -The key-value pair (label) that must be matched to apply the rule. - -# LokiStack behavior during cluster restarts - -When an Red Hat OpenShift Container Platform cluster is restarted, LokiStack ingestion and the query path continue to operate within the available CPU and memory resources available for the node. This means that there is no downtime for the LokiStack during Red Hat OpenShift Container Platform cluster updates. This behavior is achieved by using PodDisruptionBudget resources. The Loki Operator provisions PodDisruptionBudget resources for Loki, which determine the minimum number of pods that must be available per component to ensure normal operations under certain conditions. - -# Advanced deployment and scalability - -To configure high availability, scalability, and error handling, use the following information. 
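Before moving on to the advanced options, you can verify the scheduling behavior described in the previous sections. The following is a minimal sketch, assuming the default openshift-logging namespace and the app.kubernetes.io/component labels shown in the anti-affinity example; it lists the PodDisruptionBudget resources that the Loki Operator provisions and shows which nodes the ingester pods landed on:

```terminal
$ oc get poddisruptionbudget -n openshift-logging
$ oc get pods -l app.kubernetes.io/component=ingester -n openshift-logging -o wide
```

If the anti-affinity rules are satisfied, each ingester pod in the output is scheduled on a different node.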
- -# Zone aware data replication - -The Loki Operator offers support for zone-aware data replication through pod topology spread constraints. Enabling this feature enhances reliability and safeguards against log loss in the event of a single zone failure. When configuring the deployment size as 1x.extra-small, 1x.small, or 1x.medium, the replication.factor field is automatically set to 2. - -To ensure proper replication, you need to have at least as many availability zones as the replication factor specifies. While it is possible to have more availability zones than the replication factor, having fewer zones can lead to write failures. Each zone should host an equal number of instances for optimal operation. - - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: - replicationFactor: 2 1 - replication: - factor: 2 2 - zones: - - maxSkew: 1 3 - topologyKey: topology.kubernetes.io/zone 4 -``` - - -Deprecated field, values entered are overwritten by replication.factor. -This value is automatically set when deployment size is selected at setup. -The maximum difference in number of pods between any two topology domains. The default is 1, and you cannot specify a value of 0. -Defines zones in the form of a topology key that corresponds to a node label. - -# Recovering Loki pods from failed zones - -In Red Hat OpenShift Container Platform a zone failure happens when specific availability zone resources become inaccessible. Availability zones are isolated areas within a cloud provider’s data center, aimed at enhancing redundancy and fault tolerance. If your Red Hat OpenShift Container Platform cluster is not configured to handle this, a zone failure can lead to service or data loss. - -Loki pods are part of a StatefulSet, and they come with Persistent Volume Claims (PVCs) provisioned by a StorageClass object. Each Loki pod and its PVCs reside in the same zone. When a zone failure occurs in a cluster, the StatefulSet controller automatically attempts to recover the affected pods in the failed zone. - - -[WARNING] ----- -The following procedure will delete the PVCs in the failed zone, and all data contained therein. To avoid complete data loss the replication factor field of the LokiStack CR should always be set to a value greater than 1 to ensure that Loki is replicating. ----- - -* Verify your LokiStack CR has a replication factor greater than 1. -* Zone failure detected by the control plane, and nodes in the failed zone are marked by cloud provider integration. - -The StatefulSet controller automatically attempts to reschedule pods in a failed zone. Because the associated PVCs are also in the failed zone, automatic rescheduling to a different zone does not work. You must manually delete the PVCs in the failed zone to allow successful re-creation of the stateful Loki Pod and its provisioned PVC in the new zone. - -1. List the pods in Pending status by running the following command: - -```terminal -$ oc get pods --field-selector status.phase==Pending -n openshift-logging -``` - -Example oc get pods output - -```terminal -NAME READY STATUS RESTARTS AGE 1 -logging-loki-index-gateway-1 0/1 Pending 0 17m -logging-loki-ingester-1 0/1 Pending 0 16m -logging-loki-ruler-1 0/1 Pending 0 16m -``` - -These pods are in Pending status because their corresponding PVCs are in the failed zone. -2. 
List the PVCs in Pending status by running the following command: - -```terminal -$ oc get pvc -o=json -n openshift-logging | jq '.items[] | select(.status.phase == "Pending") | .metadata.name' -r -``` - -Example oc get pvc output - -```terminal -storage-logging-loki-index-gateway-1 -storage-logging-loki-ingester-1 -wal-logging-loki-ingester-1 -storage-logging-loki-ruler-1 -wal-logging-loki-ruler-1 -``` - -3. Delete the PVC(s) for a pod by running the following command: - -```terminal -$ oc delete pvc -n openshift-logging -``` - -4. Delete the pod(s) by running the following command: - -```terminal -$ oc delete pod -n openshift-logging -``` - - -Once these objects have been successfully deleted, they should automatically be rescheduled in an available zone. - -## Troubleshooting PVC in a terminating state - -The PVCs might hang in the terminating state without being deleted, if PVC metadata finalizers are set to kubernetes.io/pv-protection. Removing the finalizers should allow the PVCs to delete successfully. - -* Remove the finalizer for each PVC by running the command below, then retry deletion. - -```terminal -$ oc patch pvc -p '{"metadata":{"finalizers":null}}' -n openshift-logging -``` - - -# Troubleshooting Loki rate limit errors - -If the Log Forwarder API forwards a large block of messages that exceeds the rate limit to Loki, Loki generates rate limit (429) errors. - -These errors can occur during normal operation. For example, when adding the logging to a cluster that already has some logs, rate limit errors might occur while the logging tries to ingest all of the existing log entries. In this case, if the rate of addition of new logs is less than the total rate limit, the historical data is eventually ingested, and the rate limit errors are resolved without requiring user intervention. - -In cases where the rate limit errors continue to occur, you can fix the issue by modifying the LokiStack custom resource (CR). - - -[IMPORTANT] ----- -The LokiStack CR is not available on Grafana-hosted Loki. This topic does not apply to Grafana-hosted Loki servers. ----- - -* The Log Forwarder API is configured to forward logs to Loki. -* Your system sends a block of messages that is larger than 2 MB to Loki. For example: - -```text -"values":[["1630410392689800468","{\"kind\":\"Event\",\"apiVersion\":\ -....... -...... -...... -...... -\"received_at\":\"2021-08-31T11:46:32.800278+00:00\",\"version\":\"1.7.4 1.6.0\"}},\"@timestamp\":\"2021-08-31T11:46:32.799692+00:00\",\"viaq_index_name\":\"audit-write\",\"viaq_msg_id\":\"MzFjYjJkZjItNjY0MC00YWU4LWIwMTEtNGNmM2E5ZmViMGU4\",\"log_type\":\"audit\"}"]]}]} -``` - -* After you enter oc logs -n openshift-logging -l component=collector, the collector logs in your cluster show a line containing one of the following error messages: - -```text -429 Too Many Requests Ingestion rate limit exceeded -``` - -Example Vector error message - -```text -2023-08-25T16:08:49.301780Z WARN sink{component_kind="sink" component_id=default_loki_infra component_type=loki component_name=default_loki_infra}: vector::sinks::util::retries: Retrying after error. error=Server responded with an error: 429 Too Many Requests internal_log_rate_limit=true -``` - - -The error is also visible on the receiving end. 
For example, in the LokiStack ingester pod: -Example Loki ingester error message - -```text -level=warn ts=2023-08-30T14:57:34.155592243Z caller=grpc_logging.go:43 duration=1.434942ms method=/logproto.Pusher/Push err="rpc error: code = Code(429) desc = entry with timestamp 2023-08-30 14:57:32.012778399 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream -``` - - -* Update the ingestionBurstSize and ingestionRate fields in the LokiStack CR: - -```yaml -apiVersion: loki.grafana.com/v1 -kind: LokiStack -metadata: - name: logging-loki - namespace: openshift-logging -spec: - limits: - global: - ingestion: - ingestionBurstSize: 16 1 - ingestionRate: 8 2 -# ... -``` - -The ingestionBurstSize field defines the maximum local rate-limited sample size per distributor replica in MB. This value is a hard limit. Set this value to at least the maximum logs size expected in a single push request. Single requests that are larger than the ingestionBurstSize value are not permitted. -The ingestionRate field is a soft limit on the maximum amount of ingested samples per second in MB. Rate limit errors occur if the rate of logs exceeds the limit, but the collector retries sending the logs. As long as the total average is lower than the limit, the system recovers and errors are resolved without user intervention. \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.18/observability/logging/logging-6.2/log6x-release-notes-6.2.txt b/ocp-product-docs-plaintext/4.18/observability/logging/logging-6.2/log6x-release-notes-6.2.txt deleted file mode 100644 index a8856929..00000000 --- a/ocp-product-docs-plaintext/4.18/observability/logging/logging-6.2/log6x-release-notes-6.2.txt +++ /dev/null @@ -1,114 +0,0 @@ -# Logging 6.2 Release Notes - - - -# Logging 6.2.3 Release Notes - -This release includes RHBA-2025:8138. - -## Bug Fixes - -* Before this update, the cluster logging installation page contained an incorrect URL to the installation steps in the documentation. With this update, the link has been corrected, resolving the issue and helping users successfully navigate to the documentation. (LOG-6760) -* Before this update, the API documentation about default settings of the tuning delivery mode for log forwarding lacked clarity and sufficient detail. This could lead to users experiencing difficulty in understanding or optimally configuring these settings for their logging pipelines. With this update, the documentation has been revised to provide more comprehensive and clearer guidance on tuning delivery mode default settings, resolving potential ambiguities. (LOG-7131) -* Before this update, merging data from the message field into the root of a Syslog log event caused the log event to be inconsistent with the ViaQ data model. The inconsistency could lead to overwritten system information, data duplication, or event corruption. This update revises Syslog parsing and merging for the Syslog output to align with other output types, resolving this inconsistency. (LOG-7185) -* Before this update, log forwarding failed if you configured a cluster-wide proxy with a URL containing a username with an encoded at sign (@); for example user%40name. This update resolves the issue by adding correct support for URL-encoded values in proxy configurations. 
(LOG-7188) - -## CVEs - -* CVE-2022-49043 -* CVE-2024-12087 -* CVE-2024-12088 -* CVE-2024-12133 -* CVE-2024-12243 -* CVE-2024-12747 -* CVE-2024-56171 -* CVE-2025-0395 -* CVE-2025-24928 - -# Logging 6.2.2 Release Notes - -This release includes RHBA-2025:4526. - -## Bug Fixes - -* Before this update, logs without the responseStatus.code field caused parsing errors in the Loki distributor component. This happened when using the OpenTelemetry data model. With this update, logs without the responseStatus.code field are parsed correctly. (LOG-7012) -* Before this update, the Cloudwatch output supported log events up to 256 KB in size. With this update, the Cloudwatch output supports up to 1 MB in size to match the updates published by Amazon Web Services (AWS). (LOG-7013) -* Before this update, auditd log messages with multiple msg keys could cause errors in collector pods, because the standard auditd log format expects a single msg field per log entry that follows the msg=audit(TIMESTAMP:ID) structure. With this update, only the first msg value is used, which resolves the issue and ensures accurate extraction of audit metadata. (LOG-7014) -* Before this update, collector pods would enter a crash loop due to a configuration error when attempting token-based authentication with an Elasticsearch output. With this update, token authentication with an Elasticsearch output generates a valid configuration. (LOG-7017) - -# Logging 6.2.1 Release Notes - -This release includes RHBA-2025:3908. - -## Bug Fixes - -* Before this update, application programming interface (API) audit logs collected from the management cluster used the cluster_id value from the management cluster. With this update, API audit logs use the cluster_id value from the guest cluster. (LOG-4445) -* Before this update, issuing the oc explain obsclf.spec.filters command did not list all the supported filters in the command output. With this update, all the supported filter types are listed in the command output. (LOG-6753) -* Before this update the log collector flagged a ClusterLogForwarder resource with multiple inputs to a LokiStack output as invalid due to incorrect internal processing logic. This update fixes the issue. (LOG-6758) -* Before this update, issuing the oc explain command for the clusterlogforwarder.spec.outputs.syslog resource returned an incomplete result. With this update, the missing supported types for rfc and enrichment attributes are listed in the result correctly. (LOG-6869) -* Before this update, empty OpenTelemetry (OTEL) tuning configuration caused validation errors. With this update, validation rules have been updated to accept empty tuning configuration. (LOG-6878) -* Before this update the Red Hat OpenShift Logging Operator could not update the securitycontextconstraint resource that is required by the log collector. With this update, the required cluster role has been provided to the service account of the Red Hat OpenShift Logging Operator. As a result of which, Red Hat OpenShift Logging Operator can create or update the securitycontextconstraint resource. (LOG-6879) -* Before this update, the API documentation for the URL attribute of the syslog resource incorrectly mentioned the value udps as a supported value. With this update, all references to udps have been removed. (LOG-6896) -* Before this update, the Red Hat OpenShift Logging Operator was intermittently unable to update the object in logs due to update conflicts. 
This update resolves the issue and prevents conflicts during object updates by using the Patch() function instead of the Update() function. (LOG-6953) -* Before this update, Loki ingesters that got into an unhealthy state due to networking issues stayed in that state even after the network recovered. With this update, you can configure the Loki Operator to perform service discovery more often so that unhealthy ingesters can rejoin the group. (LOG-6992) -* Before this update, the Vector collector could not forward Open Virtual Network (OVN) and Auditd logs. With this update, the Vector collector can forward OVN and Auditd logs. (LOG-6997) - -## CVEs - -* CVE-2022-49043 -* CVE-2024-2236 -* CVE-2024-5535 -* CVE-2024-56171 -* CVE-2025-24928 - -# Logging 6.2.0 Release Notes - -This release includes Logging for Red Hat OpenShift Bug Fix Release 6.2.0. - -## New Features and Enhancements - -### Log Collection - -* With this update, HTTP outputs include a proxy field that you can use to send log data through an HTTP proxy. (LOG-6069) - -### Log Storage - -* With this update, time-based stream sharding in Loki is now enabled by the Loki Operator. This solves the issue of ingesting log entries older than the sliding time-window used by Loki. (LOG-6757) -* With this update, you can configure a custom certificate authority (CA) certificate with Loki Operator when using Swift as an object store. (LOG-4818) -* With this update, you can configure workload identity federation on Google Cloud Platform (GCP) by using the Cluster Credential Operator in OpenShift 4.17 and later releases with the Loki Operator. (LOG-6158) - -## Technology Preview - - -[IMPORTANT] ----- -{FeatureName} is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. -For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. ----- - -* With this update, OpenTelemetry support offered by OpenShift Logging continues to improve, specifically in the area of enabling migrations from the ViaQ data model to OpenTelemetry when forwarding to LokiStack. (LOG-6146) -* With this update, the structuredMetadata field has been removed from Loki Operator in the otlp configuration because structured metadata is now the default type. Additionally, the update introduces a drop field that administrators can use to drop OpenTelemetry attributes when receiving data through OpenTelemetry protocol (OTLP). (LOG-6507) - -## Bug Fixes - -* Before this update, the timestamp shown in the console logs did not match the @timestamp field in the message. With this update the timestamp is correctly shown in the console. (LOG-6222) -* The introduction of ClusterLogForwarder 6.x modified the ClusterLogForwarder API to allow for a consistent templating mechanism. However, this was not applied to the syslog output spec API for the facility and severity fields. This update adds the required validation to the ClusterLogForwarder API for the facility and severity fields. (LOG-6661) -* Before this update, an error in the Loki Operator generating the Loki configuration caused the amount of workers to delete to be zero when 1x.pico was set as the LokiStack size. 
With this update, the number of workers to delete is set to 10. (LOG-6781) - -## Known Issues - -* The previous data model encoded all information in JSON. The console still uses the query of the previous data model to decode both old and new entries. The logs that are stored by using the new OpenTelemetry data model for the LokiStack output display the following error in the logging console: - -``` -__error__ JSONParserErr -__error_details__ Value looks like object, but can't find closing '}' symbol -``` - - -You can ignore the error as it is only a result of the query and not a data-related error. (LOG-6808) -* Currently, the API documentation incorrectly mentions OpenTelemetry protocol (OTLP) attributes as included instead of excluded in the descriptions of the drop field. (LOG-6839). - -## CVEs - -* CVE-2020-11023 -* CVE-2024-12797 \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.18/observability/logging/logging-6.2/log6x-visual-6.2.txt b/ocp-product-docs-plaintext/4.18/observability/logging/logging-6.2/log6x-visual-6.2.txt deleted file mode 100644 index c669b518..00000000 --- a/ocp-product-docs-plaintext/4.18/observability/logging/logging-6.2/log6x-visual-6.2.txt +++ /dev/null @@ -1,5 +0,0 @@ -# Visualization for logging - - - -Visualization for logging is provided by deploying the Logging UI Plugin of the Cluster Observability Operator, which requires Operator installation. \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.18/observability/monitoring/accessing-metrics/accessing-metrics-as-an-administrator.txt b/ocp-product-docs-plaintext/4.18/observability/monitoring/accessing-metrics/accessing-metrics-as-an-administrator.txt index 8a716516..50a40dcd 100644 --- a/ocp-product-docs-plaintext/4.18/observability/monitoring/accessing-metrics/accessing-metrics-as-an-administrator.txt +++ b/ocp-product-docs-plaintext/4.18/observability/monitoring/accessing-metrics/accessing-metrics-as-an-administrator.txt @@ -43,7 +43,7 @@ or as a user with view permissions for all projects, you can access metrics for The Metrics UI includes predefined queries, for example, CPU, memory, bandwidth, or network packet for all projects. You can also run custom Prometheus Query Language (PromQL) queries. * You have access to the cluster as a user with the cluster-admin cluster role or with view permissions for all projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. In the Administrator perspective of the Red Hat OpenShift Container Platform web console, click Observe and go to the Metrics tab. 2. To add one or more queries, perform any of the following actions: diff --git a/ocp-product-docs-plaintext/4.18/observability/monitoring/accessing-metrics/accessing-monitoring-apis-by-using-the-cli.txt b/ocp-product-docs-plaintext/4.18/observability/monitoring/accessing-metrics/accessing-monitoring-apis-by-using-the-cli.txt index e1b6af50..a92a6b85 100644 --- a/ocp-product-docs-plaintext/4.18/observability/monitoring/accessing-metrics/accessing-monitoring-apis-by-using-the-cli.txt +++ b/ocp-product-docs-plaintext/4.18/observability/monitoring/accessing-metrics/accessing-monitoring-apis-by-using-the-cli.txt @@ -88,7 +88,7 @@ Limit queries to a maximum of one every 30 seconds. If you need to forward large amounts of data outside the cluster, use remote write instead. For more information, see the Configuring remote write storage section. ---- -* You have installed the OpenShift CLI (oc). 
+* You have installed the OpenShift CLI (`oc`). * You have access to the cluster as a user with the cluster-monitoring-view cluster role or have obtained a bearer token with get permission on the namespaces resource. [NOTE] diff --git a/ocp-product-docs-plaintext/4.18/observability/monitoring/configuring-core-platform-monitoring/configuring-alerts-and-notifications.txt b/ocp-product-docs-plaintext/4.18/observability/monitoring/configuring-core-platform-monitoring/configuring-alerts-and-notifications.txt index be7af384..b8e9ee56 100644 --- a/ocp-product-docs-plaintext/4.18/observability/monitoring/configuring-core-platform-monitoring/configuring-alerts-and-notifications.txt +++ b/ocp-product-docs-plaintext/4.18/observability/monitoring/configuring-core-platform-monitoring/configuring-alerts-and-notifications.txt @@ -13,7 +13,7 @@ If you add the same external Alertmanager configuration for multiple clusters an * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config ConfigMap object. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -84,7 +84,7 @@ If you do not need the local Alertmanager, you can disable it by configuring the * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config config map. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the cluster-monitoring-config config map in the openshift-monitoring project: @@ -129,7 +129,7 @@ After you add a secret to the config map, the secret is mounted as a volume at / * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config config map. * You have created the secret to be configured in Alertmanager in the {namespace-name} project. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -180,7 +180,7 @@ You can attach custom labels to all time series and alerts leaving Prometheus by * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config ConfigMap object. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: diff --git a/ocp-product-docs-plaintext/4.18/observability/monitoring/configuring-core-platform-monitoring/configuring-metrics.txt b/ocp-product-docs-plaintext/4.18/observability/monitoring/configuring-core-platform-monitoring/configuring-metrics.txt index aa65995a..13aadfce 100644 --- a/ocp-product-docs-plaintext/4.18/observability/monitoring/configuring-core-platform-monitoring/configuring-metrics.txt +++ b/ocp-product-docs-plaintext/4.18/observability/monitoring/configuring-core-platform-monitoring/configuring-metrics.txt @@ -11,7 +11,7 @@ You can configure remote write storage to enable Prometheus to send ingested met * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config ConfigMap object. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). * You have set up a remote write compatible endpoint (such as Thanos) and know the endpoint URL. 
See the Prometheus remote endpoints and storage documentation for information about endpoints that are compatible with the remote write feature. [IMPORTANT] @@ -451,7 +451,7 @@ You can create cluster ID labels for metrics by adding the write_relabel setting * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config ConfigMap object. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). * You have configured remote write storage. 1. Edit the {configmap-name} config map in the {namespace-name} project: diff --git a/ocp-product-docs-plaintext/4.18/observability/monitoring/configuring-core-platform-monitoring/configuring-performance-and-scalability.txt b/ocp-product-docs-plaintext/4.18/observability/monitoring/configuring-core-platform-monitoring/configuring-performance-and-scalability.txt index a7c3d5ad..4ce7bd22 100644 --- a/ocp-product-docs-plaintext/4.18/observability/monitoring/configuring-core-platform-monitoring/configuring-performance-and-scalability.txt +++ b/ocp-product-docs-plaintext/4.18/observability/monitoring/configuring-core-platform-monitoring/configuring-performance-and-scalability.txt @@ -29,7 +29,7 @@ You cannot add a node selector constraint directly to an existing scheduled pod. * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config ConfigMap object. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. If you have not done so yet, add a label to the nodes on which you want to run the monitoring components: @@ -85,7 +85,7 @@ You can assign tolerations to any of the monitoring stack components to enable m * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config ConfigMap object. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -151,7 +151,7 @@ Prometheus then considers this target to be down and sets its up metric value to ---- * You have access to the cluster as a user with the cluster-admin cluster role. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring namespace: @@ -194,7 +194,7 @@ To configure CPU and memory resources, specify values for resource limits and re * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the ConfigMap object named cluster-monitoring-config. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -325,7 +325,7 @@ For more information about the support scope of Red Hat Technology Preview featu To choose a metrics collection profile for core Red Hat OpenShift Container Platform monitoring components, edit the cluster-monitoring-config ConfigMap object. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). * You have enabled Technology Preview features by using the FeatureGate custom resource (CR). * You have created the cluster-monitoring-config ConfigMap object. * You have access to the cluster as a user with the cluster-admin cluster role. 
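The core platform monitoring procedures in the hunks above all start from the same step: editing the cluster-monitoring-config config map in the openshift-monitoring project. As a minimal sketch, assuming that config map already exists as required by the prerequisites, you can confirm its presence and open it for editing with the following commands:

```terminal
$ oc -n openshift-monitoring get configmap cluster-monitoring-config
$ oc -n openshift-monitoring edit configmap cluster-monitoring-config
```

The changes take effect after the Cluster Monitoring Operator reconciles the updated config map.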
@@ -385,7 +385,7 @@ You can configure pod topology spread constraints for monitoring pods by using t * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config ConfigMap object. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: diff --git a/ocp-product-docs-plaintext/4.18/observability/monitoring/configuring-core-platform-monitoring/preparing-to-configure-the-monitoring-stack.txt b/ocp-product-docs-plaintext/4.18/observability/monitoring/configuring-core-platform-monitoring/preparing-to-configure-the-monitoring-stack.txt index ce3b00df..39104491 100644 --- a/ocp-product-docs-plaintext/4.18/observability/monitoring/configuring-core-platform-monitoring/preparing-to-configure-the-monitoring-stack.txt +++ b/ocp-product-docs-plaintext/4.18/observability/monitoring/configuring-core-platform-monitoring/preparing-to-configure-the-monitoring-stack.txt @@ -34,7 +34,7 @@ Each procedure that requires a change in the config map includes its expected ou You can configure the core Red Hat OpenShift Container Platform monitoring components by creating and updating the cluster-monitoring-config config map in the openshift-monitoring project. The Cluster Monitoring Operator (CMO) then configures the core components of the monitoring stack. * You have access to the cluster as a user with the cluster-admin cluster role. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Check whether the cluster-monitoring-config ConfigMap object exists: diff --git a/ocp-product-docs-plaintext/4.18/observability/monitoring/configuring-core-platform-monitoring/storing-and-recording-data.txt b/ocp-product-docs-plaintext/4.18/observability/monitoring/configuring-core-platform-monitoring/storing-and-recording-data.txt index 43e06841..c4cc3be2 100644 --- a/ocp-product-docs-plaintext/4.18/observability/monitoring/configuring-core-platform-monitoring/storing-and-recording-data.txt +++ b/ocp-product-docs-plaintext/4.18/observability/monitoring/configuring-core-platform-monitoring/storing-and-recording-data.txt @@ -37,7 +37,7 @@ To use a persistent volume (PV) for monitoring components, you must configure a * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config ConfigMap object. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -113,7 +113,7 @@ You can only expand the size of the PVC. Shrinking the storage size is not possi * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config ConfigMap object. * You have configured at least one PVC for core Red Hat OpenShift Container Platform monitoring components. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Manually expand a PVC with the updated storage request. For more information, see "Expanding persistent volume claims (PVCs) with a file system" in Expanding persistent volumes. 2. Edit the {configmap-name} config map in the {namespace-name} project: @@ -187,7 +187,7 @@ Data compaction occurs every two hours. Therefore, a persistent volume (PV) migh * You have access to the cluster as a user with the cluster-admin cluster role. 
* You have created the cluster-monitoring-config ConfigMap object. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -305,7 +305,7 @@ The default log level is info. * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config ConfigMap object. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -370,7 +370,7 @@ Because log rotation is not supported, only enable this feature temporarily when * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config ConfigMap object. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -436,7 +436,7 @@ For default platform monitoring in the openshift-monitoring project, you can ena Because log rotation is not supported, only enable this feature temporarily when you need to troubleshoot an issue. After you finish troubleshooting, disable query logging by reverting the changes you made to the ConfigMap object to enable the feature. ---- -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config ConfigMap object. diff --git a/ocp-product-docs-plaintext/4.18/observability/monitoring/configuring-user-workload-monitoring/configuring-alerts-and-notifications-uwm.txt b/ocp-product-docs-plaintext/4.18/observability/monitoring/configuring-user-workload-monitoring/configuring-alerts-and-notifications-uwm.txt index c1d8482f..c55e71e7 100644 --- a/ocp-product-docs-plaintext/4.18/observability/monitoring/configuring-user-workload-monitoring/configuring-alerts-and-notifications-uwm.txt +++ b/ocp-product-docs-plaintext/4.18/observability/monitoring/configuring-user-workload-monitoring/configuring-alerts-and-notifications-uwm.txt @@ -13,7 +13,7 @@ If you add the same external Alertmanager configuration for multiple clusters an * You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -95,7 +95,7 @@ After you add a secret to the config map, the secret is mounted as a volume at / * You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. * You have created the secret to be configured in Alertmanager in the {namespace-name} project. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. 
Edit the {configmap-name} config map in the {namespace-name} project: @@ -146,7 +146,7 @@ You can attach custom labels to all time series and alerts leaving Prometheus by * You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -233,7 +233,7 @@ If you are a non-administrator user who has been given the alert-routing-edit cl * A cluster administrator has enabled monitoring for user-defined projects. * A cluster administrator has enabled alert routing for user-defined projects. * You are logged in as a user that has the alert-routing-edit cluster role for the project for which you want to create alert routing. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Create a YAML file for alert routing. The example in this procedure uses a file called example-app-alert-routing.yaml. 2. Add an AlertmanagerConfig YAML definition to the file. For example: @@ -278,7 +278,7 @@ All features of a supported version of upstream Alertmanager are also supported * You have access to the cluster as a user with the cluster-admin cluster role. * You have enabled a separate instance of Alertmanager for user-defined alert routing. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Print the currently active Alertmanager configuration into the file alertmanager.yaml: diff --git a/ocp-product-docs-plaintext/4.18/observability/monitoring/configuring-user-workload-monitoring/configuring-metrics-uwm.txt b/ocp-product-docs-plaintext/4.18/observability/monitoring/configuring-user-workload-monitoring/configuring-metrics-uwm.txt index a860b44e..ff87e106 100644 --- a/ocp-product-docs-plaintext/4.18/observability/monitoring/configuring-user-workload-monitoring/configuring-metrics-uwm.txt +++ b/ocp-product-docs-plaintext/4.18/observability/monitoring/configuring-user-workload-monitoring/configuring-metrics-uwm.txt @@ -11,7 +11,7 @@ You can configure remote write storage to enable Prometheus to send ingested met * You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). * You have set up a remote write compatible endpoint (such as Thanos) and know the endpoint URL. See the Prometheus remote endpoints and storage documentation for information about endpoints that are compatible with the remote write feature. [IMPORTANT] @@ -459,7 +459,7 @@ You cannot override this default configuration by setting the value of the honor * You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). * You have configured remote write storage. 1. 
Edit the {configmap-name} config map in the {namespace-name} project: diff --git a/ocp-product-docs-plaintext/4.18/observability/monitoring/configuring-user-workload-monitoring/configuring-performance-and-scalability-uwm.txt b/ocp-product-docs-plaintext/4.18/observability/monitoring/configuring-user-workload-monitoring/configuring-performance-and-scalability-uwm.txt index f37d129d..198d4bc8 100644 --- a/ocp-product-docs-plaintext/4.18/observability/monitoring/configuring-user-workload-monitoring/configuring-performance-and-scalability-uwm.txt +++ b/ocp-product-docs-plaintext/4.18/observability/monitoring/configuring-user-workload-monitoring/configuring-performance-and-scalability-uwm.txt @@ -28,7 +28,7 @@ It is not permitted to move components to control plane or infrastructure nodes. * You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. If you have not done so yet, add a label to the nodes on which you want to run the monitoring components: @@ -84,7 +84,7 @@ You can assign tolerations to the components that monitor user-defined projects, * You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -145,7 +145,7 @@ You can configure these limits and requests for monitoring components that monit To configure CPU and memory resources, specify values for resource limits and requests in the {configmap-name} ConfigMap object in the {namespace-name} namespace. * You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -239,7 +239,7 @@ If you set sample or label limits, no further sample data is ingested for that t * You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project: @@ -294,7 +294,7 @@ You can create alerts that notify you when: * You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. * You have limited the number of samples that can be accepted per target scrape in user-defined projects, by using enforcedSampleLimit. -* You have installed the OpenShift CLI (oc). 
+* You have installed the OpenShift CLI (`oc`). 1. Create a YAML file with alerts that inform you when the targets are down and when the enforced sample limit is approaching. The file in this example is called monitoring-stack-alerts.yaml: @@ -362,7 +362,7 @@ You can configure pod topology spread constraints for monitoring pods by using t * You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: diff --git a/ocp-product-docs-plaintext/4.18/observability/monitoring/configuring-user-workload-monitoring/preparing-to-configure-the-monitoring-stack-uwm.txt b/ocp-product-docs-plaintext/4.18/observability/monitoring/configuring-user-workload-monitoring/preparing-to-configure-the-monitoring-stack-uwm.txt index afda2cf5..624c19a8 100644 --- a/ocp-product-docs-plaintext/4.18/observability/monitoring/configuring-user-workload-monitoring/preparing-to-configure-the-monitoring-stack-uwm.txt +++ b/ocp-product-docs-plaintext/4.18/observability/monitoring/configuring-user-workload-monitoring/preparing-to-configure-the-monitoring-stack-uwm.txt @@ -55,7 +55,7 @@ You must have access to the cluster as a user with the cluster-admin cluster rol ---- * You have access to the cluster as a user with the cluster-admin cluster role. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). * You have created the cluster-monitoring-config ConfigMap object. * You have optionally created and configured the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project. You can add configuration options to this ConfigMap object for the components that monitor user-defined projects. @@ -116,7 +116,7 @@ As a cluster administrator, you can assign the user-workload-monitoring-config-e * You have access to the cluster as a user with the cluster-admin cluster role. * The user account that you are assigning the role to already exists. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Assign the user-workload-monitoring-config-edit role to a user in the openshift-user-workload-monitoring project: @@ -175,7 +175,7 @@ You can allow users to create user-defined alert routing configurations that use * You have access to the cluster as a user with the cluster-admin cluster role. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the cluster-monitoring-config ConfigMap object: @@ -258,7 +258,7 @@ You can grant users permission to configure alert routing for user-defined proje * You have access to the cluster as a user with the cluster-admin cluster role. * You have enabled monitoring for user-defined projects. * The user account that you are assigning the role to already exists. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). * Assign the alert-routing-edit cluster role to a user in the user-defined project: @@ -268,7 +268,7 @@ $ oc -n adm policy add-role-to-user alert-routing-edit 1 For , substitute the namespace for the user-defined project, such as ns1. 
For , substitute the username for the account to which you want to assign the role. -Configuring alert notifications +* Configuring alert notifications # Granting users permissions for monitoring for user-defined projects diff --git a/ocp-product-docs-plaintext/4.18/observability/monitoring/configuring-user-workload-monitoring/storing-and-recording-data-uwm.txt b/ocp-product-docs-plaintext/4.18/observability/monitoring/configuring-user-workload-monitoring/storing-and-recording-data-uwm.txt index e341c10e..88072b4f 100644 --- a/ocp-product-docs-plaintext/4.18/observability/monitoring/configuring-user-workload-monitoring/storing-and-recording-data-uwm.txt +++ b/ocp-product-docs-plaintext/4.18/observability/monitoring/configuring-user-workload-monitoring/storing-and-recording-data-uwm.txt @@ -37,7 +37,7 @@ To use a persistent volume (PV) for monitoring components, you must configure a * You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -118,7 +118,7 @@ You can only expand the size of the PVC. Shrinking the storage size is not possi * You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. * You have configured at least one PVC for components that monitor user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Manually expand a PVC with the updated storage request. For more information, see "Expanding persistent volume claims (PVCs) with a file system" in Expanding persistent volumes. 2. Edit the {configmap-name} config map in the {namespace-name} project: @@ -197,7 +197,7 @@ Data compaction occurs every two hours. Therefore, a persistent volume (PV) migh * You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -247,7 +247,7 @@ By default, for user-defined projects, Thanos Ruler automatically retains metric * You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project: @@ -311,7 +311,7 @@ The default log level is info. * You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. 
* A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -376,7 +376,7 @@ Because log rotation is not supported, only enable this feature temporarily when * You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: diff --git a/ocp-product-docs-plaintext/4.18/observability/monitoring/managing-alerts/managing-alerts-as-a-developer.txt b/ocp-product-docs-plaintext/4.18/observability/monitoring/managing-alerts/managing-alerts-as-a-developer.txt index 18f38ac0..5625cd70 100644 --- a/ocp-product-docs-plaintext/4.18/observability/monitoring/managing-alerts/managing-alerts-as-a-developer.txt +++ b/ocp-product-docs-plaintext/4.18/observability/monitoring/managing-alerts/managing-alerts-as-a-developer.txt @@ -163,7 +163,7 @@ To help users understand the impact and cause of the alert, ensure that your ale * You have enabled monitoring for user-defined projects. * You are logged in as a cluster administrator or as a user that has the monitoring-rules-edit cluster role for the project where you want to create an alerting rule. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Create a YAML file for alerting rules. In this example, it is called example-app-alerting-rule.yaml. 2. Add an alerting rule configuration to the YAML file. @@ -211,7 +211,7 @@ Therefore, you can have generic alerting rules that apply to multiple user-defin * The user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project to edit the user-workload-monitoring-config config map. * The monitoring-rules-edit cluster role for the project where you want to create an alerting rule. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project: @@ -286,7 +286,7 @@ To list alerting rules for a user-defined project, you must have been assigned t * You have enabled monitoring for user-defined projects. * You are logged in as a user that has the monitoring-rules-view cluster role for your project. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. To list alerting rules in : @@ -307,7 +307,7 @@ You can remove alerting rules for user-defined projects. * You have enabled monitoring for user-defined projects. * You are logged in as a cluster administrator or as a user that has the monitoring-rules-edit cluster role for the project where you want to create an alerting rule. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 
* To remove rule in , run the following: diff --git a/ocp-product-docs-plaintext/4.18/observability/monitoring/managing-alerts/managing-alerts-as-an-administrator.txt b/ocp-product-docs-plaintext/4.18/observability/monitoring/managing-alerts/managing-alerts-as-an-administrator.txt index 4d90bf72..a334c637 100644 --- a/ocp-product-docs-plaintext/4.18/observability/monitoring/managing-alerts/managing-alerts-as-an-administrator.txt +++ b/ocp-product-docs-plaintext/4.18/observability/monitoring/managing-alerts/managing-alerts-as-an-administrator.txt @@ -169,7 +169,7 @@ These alerting rules trigger alerts based on the values of chosen metrics. ---- * You have access to the cluster as a user that has the cluster-admin cluster role. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Create a new YAML configuration file named example-alerting-rule.yaml. 2. Add an AlertingRule resource to the YAML file. @@ -218,7 +218,7 @@ As a cluster administrator, you can modify core platform alerts before Alertmana For example, you can change the severity label of an alert, add a custom label, or exclude an alert from being sent to Alertmanager. * You have access to the cluster as a user with the cluster-admin cluster role. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Create a new YAML configuration file named example-modified-alerting-rule.yaml. 2. Add an AlertRelabelConfig resource to the YAML file. @@ -284,7 +284,7 @@ To help users understand the impact and cause of the alert, ensure that your ale * You have enabled monitoring for user-defined projects. * You are logged in as a cluster administrator or as a user that has the monitoring-rules-edit cluster role for the project where you want to create an alerting rule. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Create a YAML file for alerting rules. In this example, it is called example-app-alerting-rule.yaml. 2. Add an alerting rule configuration to the YAML file. @@ -332,7 +332,7 @@ Therefore, you can have generic alerting rules that apply to multiple user-defin * The user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project to edit the user-workload-monitoring-config config map. * The monitoring-rules-edit cluster role for the project where you want to create an alerting rule. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project: @@ -407,7 +407,7 @@ As a cluster administrator, you can list alerting rules for core Red Hat OpenShift Container Platform and user-defined projects together in a single view. * You have access to the cluster as a user with the cluster-admin role. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. From the Administrator perspective of the Red Hat OpenShift Container Platform web console, go to Observe -> Alerting -> Alerting rules. 2. Select the Platform and User sources in the Filter drop-down menu. @@ -423,7 +423,7 @@ You can remove alerting rules for user-defined projects. * You have enabled monitoring for user-defined projects. 
* You are logged in as a cluster administrator or as a user that has the monitoring-rules-edit cluster role for the project where you want to create an alerting rule. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). * To remove rule in , run the following: @@ -440,7 +440,7 @@ Creating cross-project alerting rules for user-defined projects is enabled by de * To prevent buggy alerting rules from being applied to the cluster without having to identify the rule that causes the issue. * You have access to the cluster as a user with the cluster-admin cluster role. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the cluster-monitoring-config config map in the openshift-monitoring project: diff --git a/ocp-product-docs-plaintext/4.18/observability/monitoring/troubleshooting-monitoring-issues.txt b/ocp-product-docs-plaintext/4.18/observability/monitoring/troubleshooting-monitoring-issues.txt index 9e0d0078..ff187c0c 100644 --- a/ocp-product-docs-plaintext/4.18/observability/monitoring/troubleshooting-monitoring-issues.txt +++ b/ocp-product-docs-plaintext/4.18/observability/monitoring/troubleshooting-monitoring-issues.txt @@ -200,7 +200,7 @@ Using attributes that are bound to a limited set of possible values reduces the * Enforce limits on the number of samples that can be scraped across user-defined projects. This requires cluster administrator privileges. * You have access to the cluster as a user with the cluster-admin cluster role. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. In the Administrator perspective, navigate to Observe -> Metrics. 2. Enter a Prometheus Query Language (PromQL) query in the Expression field. @@ -273,7 +273,7 @@ There are two KubePersistentVolumeFillingUp alerts: To address this issue, you can remove Prometheus time-series database (TSDB) blocks to create more space for the PV. * You have access to the cluster as a user with the cluster-admin cluster role. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. List the size of all TSDB blocks, sorted from oldest to newest, by running the following command: diff --git a/ocp-product-docs-plaintext/4.18/registry/configuring_registry_storage/configuring-registry-storage-vsphere.txt b/ocp-product-docs-plaintext/4.18/registry/configuring_registry_storage/configuring-registry-storage-vsphere.txt index 19af3fd5..dba94b13 100644 --- a/ocp-product-docs-plaintext/4.18/registry/configuring_registry_storage/configuring-registry-storage-vsphere.txt +++ b/ocp-product-docs-plaintext/4.18/registry/configuring_registry_storage/configuring-registry-storage-vsphere.txt @@ -56,13 +56,13 @@ testing that was possibly completed against these Red Hat OpenShift Container Pl components. ---- -1. To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. +1. Change the spec.storage.pvc field in the configs.imageregistry/cluster resource. [NOTE] ---- When you use shared storage, review your security settings to prevent outside access. ---- -2. Verify that you do not have a registry pod: +2. 
Verify that you do not have a registry pod by running the following command: ```terminal $ oc get pod -n openshift-image-registry -l docker-registry=default @@ -79,7 +79,7 @@ No resourses found in openshift-image-registry namespace ---- If you do have a registry pod in your output, you do not need to continue with this procedure. ---- -3. Check the registry configuration: +3. Check the registry configuration by running the following command: ```terminal $ oc edit configs.imageregistry.operator.openshift.io @@ -94,7 +94,7 @@ storage: ``` Leave the claim field blank to allow the automatic creation of an image-registry-storage persistent volume claim (PVC). The PVC is generated based on the default storage class. However, be aware that the default storage class might provide ReadWriteOnce (RWO) volumes, such as a RADOS Block Device (RBD), which can cause issues when you replicate to more than one replica. -4. Check the clusteroperator status: +4. Check the clusteroperator status by running the following command: ```terminal $ oc get clusteroperator image-registry @@ -103,8 +103,8 @@ $ oc get clusteroperator image-registry Example output ```terminal -NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE -image-registry 4.7 True False False 6h50m +NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE +image-registry 4.7 True False False 6h50m ``` @@ -155,7 +155,7 @@ have more than one replica. $ oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' ``` -2. Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. +2. Provision the persistent volume (PV) for the block storage device, and create a persistent volume claim (PVC) for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. 1. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: ```yaml diff --git a/ocp-product-docs-plaintext/4.18/release_notes/ocp-4-18-release-notes.txt b/ocp-product-docs-plaintext/4.18/release_notes/ocp-4-18-release-notes.txt index d2be0f6f..56c25bd2 100644 --- a/ocp-product-docs-plaintext/4.18/release_notes/ocp-4-18-release-notes.txt +++ b/ocp-product-docs-plaintext/4.18/release_notes/ocp-4-18-release-notes.txt @@ -1642,7 +1642,7 @@ $ oc adm release info 4.18.14 --pullspecs * Previously, incorrectly formatted proxy variables in an external binary resulted in build failures. With this release, an update removes proxy environment variables from the build pod and prevents any build failures. (OCPBUGS-55699) * Previously, no event was logged when an error occurred from failed conversion from ingress to route. With this update, this error appear in the event logs. (OCPBUGS-55338) * Previously, a missing afterburn package resulted in the failure of the gcp-hostname.service, which caused the scale-up job to fail, impacting end-user deployments. With this release, the afterburn package is installed in the RHEL scale-up job. This fix enables a successful scale-up action, resolving the gcp-hostname service failure. (OCPBUGS-55158) -* Previously, there was no communication between a localnet pod and a pod in the default network when both pods were on the same node. With this release, an update fixes the communication problem when pods are on the same node. 
(OCPBUGS-55016) +* Previously, a pod with a secondary interface in an OVN-Kubernetes Localnet network that was plugged into a br-ex interface bridge was out of reach by other pods on the same node, but used the default network for communication. The communication between pods on different nodes was not impacted. With this release, the communication between a Localnet pod and a default network pod running on the same node is possible, however the IP addresses that are used in the Localnet network must be within the same subnet as the host network. (OCPBUGS-55016) * Previously, image pull timeouts occurred due to the Zscaler platform scanning all data transfers. This resulted in timed out image pulls. With this release, the image pull timeout is increased to 30 seconds, allowing successful updates. (OCPBUGS-54663) * Previously, you could add white space to Amazon Web Services (AWS) tag names, but the installation program did not support them. This situation resulted in the installation program returning an ERROR failed to fetch Metadata message. With this release, the regular expression for AWS tags now validates any tag name that has white space. The installation program accepts these tags and no longer returns an error because of white space. (OCPBUGS-53221) * Previously, cluster nodes repeatedly lost communication due to improper remote port binding by Open Virtual Network (OVN)-Kubernetes. This affected pod communication across nodes. With this release, the remote port binding functionality is updated to be handled by OVN directly, improving the reliability of cluster node communication. (OCPBUGS-51144) diff --git a/ocp-product-docs-plaintext/4.18/security/cert_manager_operator/cert-manager-customizing-api-fields.txt b/ocp-product-docs-plaintext/4.18/security/cert_manager_operator/cert-manager-customizing-api-fields.txt index 7f2cffda..06da814e 100644 --- a/ocp-product-docs-plaintext/4.18/security/cert_manager_operator/cert-manager-customizing-api-fields.txt +++ b/ocp-product-docs-plaintext/4.18/security/cert_manager_operator/cert-manager-customizing-api-fields.txt @@ -1,13 +1,111 @@ -# Customizing cert-manager Operator API fields +# Customizing the cert-manager Operator by using the CertManager custom resource -You can customize the cert-manager Operator for Red Hat OpenShift API fields by overriding environment variables and arguments. +After installing the cert-manager Operator for Red Hat OpenShift, you can perform the following actions by configuring the CertManager custom resource (CR): +* Configure the arguments to modify the behavior of the cert-manager components, such as the cert-manager controller, CA injector, and Webhook. +* Set environment variables for the controller pod. +* Define resource requests and limits to manage CPU and memory usage. +* Configure scheduling rules to control where pods run in your cluster. + +```yaml +apiVersion: operator.openshift.io/v1alpha1 +kind: CertManager +metadata: + name: cluster +spec: + controllerConfig: + overrideArgs: + - "--dns01-recursive-nameservers=8.8.8.8:53,1.1.1.1:53" + overrideEnv: + - name: HTTP_PROXY + value: http://proxy.example.com:8080 + overrideResources: + limits: + cpu: "200m" + memory: "512Mi" + requests: + cpu: "100m" + memory: "256Mi" + overrideScheduling: + nodeSelector: + custom: "label" + tolerations: + - key: "key1" + operator: "Equal" + value: "value1" + effect: "NoSchedule" + + webhookConfig: + overrideArgs: +#... + overrideResources: +#... + overrideScheduling: +#... + + cainjectorConfig: + overrideArgs: +#... 
+ overrideResources: +#... + overrideScheduling: +#... +``` + [WARNING] ---- To override unsupported arguments, you can add spec.unsupportedConfigOverrides section in the CertManager resource, but using spec.unsupportedConfigOverrides is unsupported. ---- +# Explanation of fields in the CertManager custom resource + +You can use the CertManager custom resource (CR) to configure the following core components of the cert-manager Operator for Red Hat OpenShift: + +* Cert-manager controller: You can use the spec.controllerConfig field to configure the cert‑manager controller pod. +* Webhook: You can use the spec.webhookConfig field to configure the webhook pod, which handles validation and mutation requests. +* CA injector: You can use the spec.cainjectorConfig field to configure the CA injector pod. + +## Common configurable fields in the CertManager CR for the cert-manager components + +The following table lists the common fields that you can configure in the spec.controllerConfig, spec.webhookConfig, and spec.cainjectorConfig sections in the CertManager CR. + + + +## Overridable arguments for the cert-manager components + +You can configure the overridable arguments for the cert-manager components in the spec.controllerConfig, spec.webhookConfig, and spec.cainjectorConfig sections in the CertManager CR. + +The following table describes the overridable arguments for the cert-manager components: + + + +## Overridable environment variables for the cert-manager controller + +You can configure the overridable environment variables for the cert-manager controller in the spec.controllerConfig.overrideEnv field in the CertManager CR. + +The following table describes the overridable environment variables for the cert-manager controller: + + + +## Overridable resource parameters for the cert-manager components + +You can configure the CPU and memory limits for the cert-manager components in the spec.controllerConfig, spec.webhookConfig, and spec.cainjectorConfig sections in the CertManager CR. + +The following table describes the overridable resource parameters for the cert-manager components: + + + +## Overridable scheduling parameters for the cert-manager components + +You can configure the pod scheduling constraints for the cert-manager components in the spec.controllerConfig, spec.webhookConfig, and spec.cainjectorConfig sections in the CertManager CR. + +The following table describes the pod scheduling parameters for the cert-manager components: + + + +* Deleting a TLS secret automatically upon Certificate removal + # Customizing cert-manager by overriding environment variables from the cert-manager Operator API You can override the supported environment variables for the cert-manager Operator for Red Hat OpenShift by adding a spec.controllerConfig section in the CertManager resource. @@ -42,6 +140,11 @@ spec: Replace with the proxy server URL. Replace with a comma separated list of domains. These domains are ignored by the proxy server. + +[NOTE] +---- +For more information about the overridable environment variables, see "Overridable environment variables for the cert-manager components" in "Explanation of fields in the CertManager custom resource". +---- 3. Save your changes and quit the text editor to apply your changes. 1.
Verify that the cert-manager controller pod is redeployed by running the following command: @@ -77,6 +180,8 @@ Example output ``` +* Explanation of fields in the CertManager custom resource + # Customizing cert-manager by overriding arguments from the cert-manager Operator API You can override the supported arguments for the cert-manager Operator for Red Hat OpenShift by adding a spec.controllerConfig section in the CertManager resource. @@ -102,30 +207,24 @@ spec: controllerConfig: overrideArgs: - '--dns01-recursive-nameservers=' 1 - - '--dns01-recursive-nameservers-only' 2 - - '--acme-http01-solver-nameservers=:' 3 - - '--v=' 4 - - '--metrics-listen-address=:' 5 - - '--issuer-ambient-credentials' 6 + - '--dns01-recursive-nameservers-only' + - '--acme-http01-solver-nameservers=:' + - '--v=' + - '--metrics-listen-address=:' + - '--issuer-ambient-credentials' + - '--acme-http01-solver-resource-limits-cpu=' + - '--acme-http01-solver-resource-limits-memory=' + - '--acme-http01-solver-resource-request-cpu=' + - '--acme-http01-solver-resource-request-memory=' webhookConfig: overrideArgs: - - '--v=4' 4 + - '--v=' cainjectorConfig: overrideArgs: - - '--v=2' 4 + - '--v=' ``` -Provide a comma-separated list of nameservers to query for the DNS-01 self check. The nameservers can be specified either as :, for example, 1.1.1.1:53, or use DNS over HTTPS (DoH), for example, https://1.1.1.1/dns-query. -Specify to only use recursive nameservers instead of checking the authoritative nameservers associated with that domain. -Provide a comma-separated list of : nameservers to query for the Automated Certificate Management Environment (ACME) HTTP01 self check. For example, --acme-http01-solver-nameservers=1.1.1.1:53. -Specify to set the log level verbosity to determine the verbosity of log messages. -Specify the host and port for the metrics endpoint. The default value is --metrics-listen-address=0.0.0.0:9402. -You must use the --issuer-ambient-credentials argument when configuring an ACME Issuer to solve DNS-01 challenges by using ambient credentials. - -[NOTE] ----- -DNS over HTTPS (DoH) is supported starting only from cert-manager Operator for Red Hat OpenShift version 1.13.0 and later. ----- +For information about the overridable arguments, see "Overridable arguments for the cert-manager components" in "Explanation of fields in the CertManager custom resource". 3. Save your changes and quit the text editor to apply your changes. * Verify that arguments are updated for cert-manager pods by running the following command: @@ -176,6 +275,8 @@ Example output ``` +* Explanation of fields in the CertManager custom resource + # Deleting a TLS secret automatically upon Certificate removal You can enable the --enable-certificate-owner-ref flag for the cert-manager Operator for Red Hat OpenShift by adding a spec.controllerConfig section in the CertManager resource. The --enable-certificate-owner-ref flag sets the certificate resource as an owner of the secret where the TLS certificate is stored. @@ -248,7 +349,7 @@ Example output # Overriding CPU and memory limits for the cert-manager components -After installing the cert-manager Operator for Red Hat OpenShift, you can configure the CPU and memory limits from the cert-manager Operator for Red Hat OpenShift API for the cert-manager components such as cert-manager controller, CA injector, and Webhook.
+After installing the cert-manager Operator for Red Hat OpenShift, you can configure the CPU and memory limits from the cert-manager Operator for Red Hat OpenShift API for the cert-manager components, such as the cert-manager controller, CA injector, and Webhook. * You have access to the Red Hat OpenShift Container Platform cluster as a user with the cluster-admin role. * You have installed version 1.12.0 or later of the cert-manager Operator for Red Hat OpenShift. @@ -316,48 +417,37 @@ Example output The spec.resources field is empty by default. The cert-manager components do not have CPU and memory limits. 3. To configure the CPU and memory limits for the cert-manager controller, CA injector, and Webhook, enter the following command: -```yaml +```terminal $ oc patch certmanager.operator cluster --type=merge -p=" spec: controllerConfig: - overrideResources: - limits: 1 - cpu: 200m 2 - memory: 64Mi 3 - requests: 4 - cpu: 10m 2 - memory: 16Mi 3 + overrideResources: 1 + limits: + cpu: 200m + memory: 64Mi + requests: + cpu: 10m + memory: 16Mi webhookConfig: overrideResources: - limits: 5 - cpu: 200m 6 - memory: 64Mi 7 - requests: 8 - cpu: 10m 6 - memory: 16Mi 7 + limits: + cpu: 200m + memory: 64Mi + requests: + cpu: 10m + memory: 16Mi cainjectorConfig: overrideResources: - limits: 9 - cpu: 200m 10 - memory: 64Mi 11 - requests: 12 - cpu: 10m 10 - memory: 16Mi 11 + limits: + cpu: 200m + memory: 64Mi + requests: + cpu: 10m + memory: 16Mi " ``` -Defines the maximum amount of CPU and memory that a single container in a cert-manager controller pod can request. -You can specify the CPU limit that a cert-manager controller pod can request. The default value is 10m. -You can specify the memory limit that a cert-manager controller pod can request. The default value is 32Mi. -Defines the amount of CPU and memory set by scheduler for the cert-manager controller pod. -Defines the maximum amount of CPU and memory that a single container in a CA injector pod can request. -You can specify the CPU limit that a CA injector pod can request. The default value is 10m. -You can specify the memory limit that a CA injector pod can request. The default value is 32Mi. -Defines the amount of CPU and memory set by scheduler for the CA injector pod. -Defines the maximum amount of CPU and memory Defines the maximum amount of CPU and memory that a single container in a Webhook pod can request. -You can specify the CPU limit that a Webhook pod can request. The default value is 10m. -You can specify the memory limit that a Webhook pod can request. The default value is 32Mi. -Defines the amount of CPU and memory set by scheduler for the Webhook pod. +For information about the overridable resource parameters, see "Overridable resource parameters for the cert-manager components" in "Explanation of fields in the CertManager custom resource". Example output ```terminal @@ -429,9 +519,11 @@ Example output ``` +* Explanation of fields in the CertManager custom resource + # Configuring scheduling overrides for cert-manager components -You can configure the pod scheduling from the cert-manager Operator for Red Hat OpenShift API for the cert-manager Operator for Red Hat OpenShift components such as cert-manager controller, CA injector, and Webhook. +You can configure the pod scheduling from the cert-manager Operator for Red Hat OpenShift API for the cert-manager Operator for Red Hat OpenShift components, such as the cert-manager controller, CA injector, and Webhook. 
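For orientation, the following is a minimal, illustrative sketch of the same overrideScheduling fields that the oc patch command later in this procedure sets, expressed declaratively in the CertManager CR; the node selector and toleration values are examples only, not requirements:

```yaml
apiVersion: operator.openshift.io/v1alpha1
kind: CertManager
metadata:
  name: cluster
spec:
  controllerConfig:
    overrideScheduling:
      nodeSelector:                       # schedule the cert-manager controller pod onto matching nodes
        node-role.kubernetes.io/control-plane: ''
      tolerations:                        # tolerate the control-plane taint so the pod can land there
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
```

The webhookConfig and cainjectorConfig sections accept the same overrideScheduling structure.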
* You have access to the Red Hat OpenShift Container Platform cluster as a user with the cluster-admin role. * You have installed version 1.15.0 or later of the cert-manager Operator for Red Hat OpenShift. @@ -442,37 +534,33 @@ You can configure the pod scheduling from the cert-manager Operator for Red Hat $ oc patch certmanager.operator cluster --type=merge -p=" spec: controllerConfig: - overrideScheduling: + overrideScheduling: 1 nodeSelector: - node-role.kubernetes.io/control-plane: '' 1 + node-role.kubernetes.io/control-plane: '' tolerations: - key: node-role.kubernetes.io/master operator: Exists - effect: NoSchedule 2 + effect: NoSchedule webhookConfig: overrideScheduling: nodeSelector: - node-role.kubernetes.io/control-plane: '' 3 + node-role.kubernetes.io/control-plane: '' tolerations: - key: node-role.kubernetes.io/master operator: Exists - effect: NoSchedule 4 + effect: NoSchedule cainjectorConfig: overrideScheduling: nodeSelector: - node-role.kubernetes.io/control-plane: '' 5 + node-role.kubernetes.io/control-plane: '' tolerations: - key: node-role.kubernetes.io/master operator: Exists - effect: NoSchedule" 6 + effect: NoSchedule" +" ``` -Defines the nodeSelector for the cert-manager controller deployment. -Defines the tolerations for the cert-manager controller deployment. -Defines the nodeSelector for the cert-manager webhook deployment. -Defines the tolerations for the cert-manager webhook deployment. -Defines the nodeSelector for the cert-manager cainjector deployment. -Defines the tolerations for the cert-manager cainjector deployment. +For information about the overridable scheduling parameters, see "Overridable scheduling parameters for the cert-manager components" in "Explanation of fields in the CertManager custom resource". 1. Verify pod scheduling settings for cert-manager pods: 1. Check the deployments in the cert-manager namespace to confirm they have the correct nodeSelector and tolerations by running the following command: @@ -517,3 +605,6 @@ cert-manager-webhook ```terminal $ oc get events -n cert-manager --field-selector reason=Scheduled ``` + + +* Explanation of fields in the CertManager custom resource \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.18/security/cert_manager_operator/cert-manager-operator-release-notes.txt b/ocp-product-docs-plaintext/4.18/security/cert_manager_operator/cert-manager-operator-release-notes.txt index 767fbc30..6f373fcd 100644 --- a/ocp-product-docs-plaintext/4.18/security/cert_manager_operator/cert-manager-operator-release-notes.txt +++ b/ocp-product-docs-plaintext/4.18/security/cert_manager_operator/cert-manager-operator-release-notes.txt @@ -5,6 +5,44 @@ The cert-manager Operator for Red Hat OpenShift is a cluster-wide service that p These release notes track the development of cert-manager Operator for Red Hat OpenShift. For more information, see About the cert-manager Operator for Red Hat OpenShift. +# cert-manager Operator for Red Hat OpenShift 1.17.0 + +Issued: 2025-08-06 + +The following advisories are available for the cert-manager Operator for Red Hat OpenShift 1.17.0: + +* RHBA-2025:13182 +* RHBA-2025:13134 +* RHBA-2025:13133 + +Version 1.17.0 of the cert-manager Operator for Red Hat OpenShift is based on the upstream cert-manager version v1.17.4. For more information, see the cert-manager project release notes for v1.17.4. + +## Bug fixes + +* Previously, the status field in the IstioCSR custom resource (CR) was not set to Ready even after the successful deployment of Istio‑CSR. 
With this fix, the status field is correctly set to Ready, ensuring consistent and reliable status reporting. (CM-546) + +## New features and enhancements + +Support to configure resource requests and limits for ACME HTTP‑01 solver pods + +With this release, the cert-manager Operator for Red Hat OpenShift supports configuring CPU and memory resource requests and limits for ACME HTTP‑01 solver pods. You can configure the CPU and memory resource requests and limits by using the following overridable arguments in the CertManager custom resource (CR): + +* --acme-http01-solver-resource-limits-cpu +* --acme-http01-solver-resource-limits-memory +* --acme-http01-solver-resource-request-cpu +* --acme-http01-solver-resource-request-memory + +For more information, see Overridable arguments for the cert‑manager components. + +## CVEs + +* CVE-2025-22866 +* CVE-2025-22868 +* CVE-2025-22872 +* CVE-2025-22870 +* CVE-2025-27144 +* CVE-2025-22871 + # cert-manager Operator for Red Hat OpenShift 1.16.1 Issued: 2025-07-10 diff --git a/ocp-product-docs-plaintext/4.18/service_mesh/v2x/servicemesh-release-notes.txt b/ocp-product-docs-plaintext/4.18/service_mesh/v2x/servicemesh-release-notes.txt index b5a46177..b2f115c8 100644 --- a/ocp-product-docs-plaintext/4.18/service_mesh/v2x/servicemesh-release-notes.txt +++ b/ocp-product-docs-plaintext/4.18/service_mesh/v2x/servicemesh-release-notes.txt @@ -2,14 +2,32 @@ +# Red Hat OpenShift Service Mesh version 2.6.9 + +This release of Red Hat OpenShift Service Mesh updates the Red Hat OpenShift Service Mesh Operator version to 2.6.9, and includes the following ServiceMeshControlPlane resource version updates: 2.6.9 and 2.5.12. + +This release addresses Common Vulnerabilities and Exposures (CVEs) and is supported on Red Hat OpenShift Container Platform 4.14 and later. + +You can use the most current version of the Kiali Operator provided by Red Hat with all supported versions of Red Hat OpenShift Service Mesh. The version of Service Mesh automatically ensures a compatible version of Kiali. + +## Component updates + + + +# Red Hat OpenShift Service Mesh version 2.5.12 + +This release of Red Hat OpenShift Service Mesh is included with the Red Hat OpenShift Service Mesh Operator 2.6.9 and is supported on Red Hat OpenShift Container Platform 4.14 and later. This release addresses Common Vulnerabilities and Exposures (CVEs). + +## Component updates + + + # Red Hat OpenShift Service Mesh version 2.6.8 This release of Red Hat OpenShift Service Mesh updates the Red Hat OpenShift Service Mesh Operator version to 2.6.8, and includes the following ServiceMeshControlPlane resource version updates: 2.6.8 and 2.5.11. This release addresses Common Vulnerabilities and Exposures (CVEs) and is supported on Red Hat OpenShift Container Platform 4.14 and later. -The most current version of the Red Hat OpenShift Service Mesh Operator can be used with all supported versions of Service Mesh. The version of Service Mesh is specified using the ServiceMeshControlPlane. - You can use the most current version of the Kiali Operator provided by Red Hat with all supported versions of Red Hat OpenShift Service Mesh. The version of Service Mesh automatically ensures a compatible version of Kiali. 
## Component updates diff --git a/ocp-product-docs-plaintext/4.18/support/troubleshooting/investigating-monitoring-issues.txt b/ocp-product-docs-plaintext/4.18/support/troubleshooting/investigating-monitoring-issues.txt index b431deb2..1456815a 100644 --- a/ocp-product-docs-plaintext/4.18/support/troubleshooting/investigating-monitoring-issues.txt +++ b/ocp-product-docs-plaintext/4.18/support/troubleshooting/investigating-monitoring-issues.txt @@ -204,7 +204,7 @@ Using attributes that are bound to a limited set of possible values reduces the * Enforce limits on the number of samples that can be scraped across user-defined projects. This requires cluster administrator privileges. * You have access to the cluster as a user with the cluster-admin cluster role. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. In the Administrator perspective, navigate to Observe -> Metrics. 2. Enter a Prometheus Query Language (PromQL) query in the Expression field. @@ -275,7 +275,7 @@ There are two KubePersistentVolumeFillingUp alerts: To address this issue, you can remove Prometheus time-series database (TSDB) blocks to create more space for the PV. * You have access to the cluster as a user with the cluster-admin cluster role. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. List the size of all TSDB blocks, sorted from oldest to newest, by running the following command: diff --git a/ocp-product-docs-plaintext/4.18/support/troubleshooting/troubleshooting-installations.txt b/ocp-product-docs-plaintext/4.18/support/troubleshooting/troubleshooting-installations.txt index 3dfeeeb2..10df756b 100644 --- a/ocp-product-docs-plaintext/4.18/support/troubleshooting/troubleshooting-installations.txt +++ b/ocp-product-docs-plaintext/4.18/support/troubleshooting/troubleshooting-installations.txt @@ -110,7 +110,7 @@ $ ./openshift-install create ignition-configs --dir=./install_dir You can monitor high-level installation, bootstrap, and control plane logs as an Red Hat OpenShift Container Platform installation progresses. This provides greater visibility into how an installation progresses and helps identify the stage at which an installation failure occurs. * You have access to the cluster as a user with the cluster-admin cluster role. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). * You have SSH access to your hosts. * You have the fully qualified domain names of the bootstrap and control plane nodes. diff --git a/ocp-product-docs-plaintext/4.18/virt/about_virt/about-virt.txt b/ocp-product-docs-plaintext/4.18/virt/about_virt/about-virt.txt index 54c5894d..829145e7 100644 --- a/ocp-product-docs-plaintext/4.18/virt/about_virt/about-virt.txt +++ b/ocp-product-docs-plaintext/4.18/virt/about_virt/about-virt.txt @@ -37,6 +37,8 @@ You can use OpenShift Virtualization with OVN-Kubernetes or one of the other cer You can check your OpenShift Virtualization cluster for compliance issues by installing the Compliance Operator and running a scan with the ocp4-moderate and ocp4-moderate-node profiles. The Compliance Operator uses OpenSCAP, a NIST-certified tool, to scan and enforce security policies. +For information about partnering with Independent Software Vendors (ISVs) and Services partners for specialized storage, networking, backup, and additional functionality, see the Red Hat Ecosystem Catalog. 
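As a concrete illustration of the compliance scan described above, the following is a minimal, hypothetical ScanSettingBinding sketch that binds the ocp4-moderate and ocp4-moderate-node profiles to the default scan setting; it assumes the Compliance Operator is installed in the openshift-compliance namespace:

```yaml
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSettingBinding
metadata:
  name: moderate-compliance             # illustrative name
  namespace: openshift-compliance
profiles:                               # the two profiles named in the paragraph above
- name: ocp4-moderate
  kind: Profile
  apiGroup: compliance.openshift.io/v1alpha1
- name: ocp4-moderate-node
  kind: Profile
  apiGroup: compliance.openshift.io/v1alpha1
settingsRef:                            # reuse the default ScanSetting that ships with the Operator
  name: default
  kind: ScanSetting
  apiGroup: compliance.openshift.io/v1alpha1
```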
+ # Comparing OpenShift Virtualization to VMware vSphere If you are familiar with VMware vSphere, the following table lists OpenShift Virtualization components that you can use to accomplish similar tasks. However, because OpenShift Virtualization is conceptually different from vSphere, and much of its functionality comes from the underlying Red Hat OpenShift Container Platform, OpenShift Virtualization does not have direct alternatives for all vSphere concepts or components. @@ -53,6 +55,8 @@ OpenShift Virtualization 4.18 is supported for use on Red Hat OpenShift Containe If you use the storage API with known storage providers, the volume and access modes are selected automatically. However, if you use a storage class that does not have a storage profile, you must configure the volume and access mode. +For a list of known storage providers for OpenShift Virtualization, see the Red Hat Ecosystem Catalog. + For best results, use the ReadWriteMany (RWX) access mode and the Block volume mode. This is important for the following reasons: * ReadWriteMany (RWX) access mode is required for live migration. diff --git a/ocp-product-docs-plaintext/4.18/virt/install/preparing-cluster-for-virt.txt b/ocp-product-docs-plaintext/4.18/virt/install/preparing-cluster-for-virt.txt index e88e5db7..2c4ae63f 100644 --- a/ocp-product-docs-plaintext/4.18/virt/install/preparing-cluster-for-virt.txt +++ b/ocp-product-docs-plaintext/4.18/virt/install/preparing-cluster-for-virt.txt @@ -170,6 +170,8 @@ To mark a storage class as the default for virtualization workloads, set the ann If you use the storage API with known storage providers, the volume and access modes are selected automatically. However, if you use a storage class that does not have a storage profile, you must configure the volume and access mode. +For a list of known storage providers for OpenShift Virtualization, see the Red Hat Ecosystem Catalog. + For best results, use the ReadWriteMany (RWX) access mode and the Block volume mode. This is important for the following reasons: * ReadWriteMany (RWX) access mode is required for live migration. diff --git a/ocp-product-docs-plaintext/4.18/virt/monitoring/virt-prometheus-queries.txt b/ocp-product-docs-plaintext/4.18/virt/monitoring/virt-prometheus-queries.txt index d9090b43..00a5c78c 100644 --- a/ocp-product-docs-plaintext/4.18/virt/monitoring/virt-prometheus-queries.txt +++ b/ocp-product-docs-plaintext/4.18/virt/monitoring/virt-prometheus-queries.txt @@ -19,7 +19,7 @@ or as a user with view permissions for all projects, you can access metrics for The Metrics UI includes predefined queries, for example, CPU, memory, bandwidth, or network packet for all projects. You can also run custom Prometheus Query Language (PromQL) queries. * You have access to the cluster as a user with the cluster-admin cluster role or with view permissions for all projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. In the Administrator perspective of the Red Hat OpenShift Container Platform web console, click Observe and go to the Metrics tab. 2. 
To add one or more queries, perform any of the following actions: diff --git a/ocp-product-docs-plaintext/4.18/virt/vm_networking/virt-hot-plugging-network-interfaces.txt b/ocp-product-docs-plaintext/4.18/virt/vm_networking/virt-hot-plugging-network-interfaces.txt index c8375eb7..b9c7b5c4 100644 --- a/ocp-product-docs-plaintext/4.18/virt/vm_networking/virt-hot-plugging-network-interfaces.txt +++ b/ocp-product-docs-plaintext/4.18/virt/vm_networking/virt-hot-plugging-network-interfaces.txt @@ -25,21 +25,12 @@ If you restart the VM after hot plugging an interface, that interface becomes pa Hot plug a secondary network interface to a virtual machine (VM) while the VM is running. * A network attachment definition is configured in the same namespace as your VM. +* The VM to which you want to hot plug the network interface is running. * You have installed the virtctl tool. -* You have installed the OpenShift CLI (oc). - -1. If the VM to which you want to hot plug the network interface is not running, start it by using the following command: - -```terminal -$ virtctl start -n -``` - -2. Use the following command to add the new network interface to the running VM. Editing the VM specification adds the new network interface to the VM and virtual machine instance (VMI) configuration but does not attach it to the running VM. - -```terminal -$ oc edit vm -``` +* You have permission to create and list VirtualMachineInstanceMigration objects. +* You have installed the OpenShift CLI (`oc`). +1. Use your preferred text editor to edit the VirtualMachine manifest, as shown in the following example: Example VM configuration ```yaml @@ -70,7 +61,7 @@ template: Specifies the name of the new network interface. Specifies the name of the network. This must be the same as the name of the new network interface that you defined in the template.spec.domain.devices.interfaces list. Specifies the name of the NetworkAttachmentDefinition object. -3. To attach the network interface to the running VM, live migrate the VM by running the following command: +2. To attach the network interface to the running VM, live migrate the VM by running the following command: ```terminal $ virtctl migrate diff --git a/ocp-product-docs-plaintext/4.19/architecture/architecture.txt b/ocp-product-docs-plaintext/4.19/architecture/architecture.txt index 21d24d6f..7feab090 100644 --- a/ocp-product-docs-plaintext/4.19/architecture/architecture.txt +++ b/ocp-product-docs-plaintext/4.19/architecture/architecture.txt @@ -144,7 +144,7 @@ In Red Hat OpenShift Container Platform 4.19, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. 
diff --git a/ocp-product-docs-plaintext/4.19/backup_and_restore/application_backup_and_restore/backing_up_and_restoring/backing-up-applications.txt b/ocp-product-docs-plaintext/4.19/backup_and_restore/application_backup_and_restore/backing_up_and_restoring/backing-up-applications.txt index 18cb806b..4e8e68db 100644 --- a/ocp-product-docs-plaintext/4.19/backup_and_restore/application_backup_and_restore/backing_up_and_restoring/backing-up-applications.txt +++ b/ocp-product-docs-plaintext/4.19/backup_and_restore/application_backup_and_restore/backing_up_and_restoring/backing-up-applications.txt @@ -4,8 +4,11 @@ Frequent backups might consume storage on the backup storage location. Check the frequency of backups, retention time, and the amount of data of the persistent volumes (PVs) if using non-local backups, for example, S3 buckets. Because all taken backup remains until expired, also check the time to live (TTL) setting of the schedule. You can back up applications by creating a Backup custom resource (CR). For more information, see Creating a Backup CR. +The following are the different backup types for a Backup CR: * The Backup CR creates backup files for Kubernetes resources and internal images on S3 object storage. +* If you use Velero's snapshot feature to back up data stored on the persistent volume, only snapshot related information is stored in the S3 bucket along with the Openshift object data. * If your cloud provider has a native snapshot API or supports CSI snapshots, the Backup CR backs up persistent volumes (PVs) by creating snapshots. For more information about working with CSI snapshots, see Backing up persistent volumes with CSI snapshots. +If the underlying storage or the backup bucket are part of the same cluster, then the data might be lost in case of disaster. For more information about CSI volume snapshots, see CSI volume snapshots. [IMPORTANT] diff --git a/ocp-product-docs-plaintext/4.19/backup_and_restore/application_backup_and_restore/installing/oadp-backup-restore-csi-snapshots.txt b/ocp-product-docs-plaintext/4.19/backup_and_restore/application_backup_and_restore/installing/oadp-backup-restore-csi-snapshots.txt index ed52b897..d3f9b46a 100644 --- a/ocp-product-docs-plaintext/4.19/backup_and_restore/application_backup_and_restore/installing/oadp-backup-restore-csi-snapshots.txt +++ b/ocp-product-docs-plaintext/4.19/backup_and_restore/application_backup_and_restore/installing/oadp-backup-restore-csi-snapshots.txt @@ -121,7 +121,7 @@ You can restore a volume snapshot by creating a Restore CR. [NOTE] ---- -You cannot restore Volsync backups from OADP 1.2 with the OAPD 1.3 built-in Data Mover. It is recommended to do a file system backup of all of your workloads with Restic prior to upgrading to OADP 1.3. +You cannot restore Volsync backups from OADP 1.2 with the OAPD 1.3 built-in Data Mover. It is recommended to do a file system backup of all of your workloads with Restic before upgrading to OADP 1.3. ---- * You have access to the cluster with the cluster-admin role. 
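To make the Restore CR referenced in the hunk above more concrete, the following is a minimal, illustrative sketch; the backup name is an assumption, and openshift-adp is the default namespace that OADP uses for Velero resources:

```yaml
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: restore-example                 # illustrative name
  namespace: openshift-adp
spec:
  backupName: backup-example            # name of an existing Backup CR (assumed)
  restorePVs: true                      # restore the persistent volumes captured by the snapshots
```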
diff --git a/ocp-product-docs-plaintext/4.19/backup_and_restore/application_backup_and_restore/oadp-intro.txt b/ocp-product-docs-plaintext/4.19/backup_and_restore/application_backup_and_restore/oadp-intro.txt index 89bc0606..fd9394fa 100644 --- a/ocp-product-docs-plaintext/4.19/backup_and_restore/application_backup_and_restore/oadp-intro.txt +++ b/ocp-product-docs-plaintext/4.19/backup_and_restore/application_backup_and_restore/oadp-intro.txt @@ -1,12 +1,12 @@ # Introduction to OpenShift API for Data Protection -The OpenShift API for Data Protection (OADP) product safeguards customer applications on Red Hat OpenShift Container Platform. It offers comprehensive disaster recovery protection, covering Red Hat OpenShift Container Platform applications, application-related cluster resources, persistent volumes, and internal images. OADP is also capable of backing up both containerized applications and virtual machines (VMs). +The OpenShift API for Data Protection (OADP) Operator can safeguard customer applications on Red Hat OpenShift Container Platform. It offers comprehensive disaster recovery protection, covering ROSA with HCP applications, application-related cluster resources, persistent volumes, and internal images. OADP is also capable of backing up both containerized applications and virtual machines (VMs). However, OADP does not serve as a disaster recovery solution for etcd or OpenShift Operators. [IMPORTANT] ---- -OADP support is provided to customer workload namespaces and cluster scope resources. +OADP support is applicable to customer workload namespaces and cluster scope resources. Full cluster backup and restore are not supported. ---- diff --git a/ocp-product-docs-plaintext/4.19/cli_reference/cli_manager/cli-manager-release-notes.txt b/ocp-product-docs-plaintext/4.19/cli_reference/cli_manager/cli-manager-release-notes.txt index 66f4b4de..2f17a441 100644 --- a/ocp-product-docs-plaintext/4.19/cli_reference/cli_manager/cli-manager-release-notes.txt +++ b/ocp-product-docs-plaintext/4.19/cli_reference/cli_manager/cli-manager-release-notes.txt @@ -21,16 +21,4 @@ The following advisory is available for the CLI Manager Operator 0.1.1: ## New features and enhancements -This release of the CLI Manager updates the Kubernetes version to 1.32. - -# CLI Manager Operator 0.1.0 (Technology Preview) - -Issued: 19 November 2024 - -The following advisory is available for the CLI Manager Operator 0.1.0: - -* RHEA-2024:8303 - -## New features and enhancements - -* This version is the initial Technology Preview release of the CLI Manager Operator. For installation information, see Installing the CLI Manager Operator. \ No newline at end of file +This release of the CLI Manager updates the Kubernetes version to 1.32. \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.19/cli_reference/openshift_cli/configuring-cli.txt b/ocp-product-docs-plaintext/4.19/cli_reference/openshift_cli/configuring-cli.txt index 21f76556..b7889985 100644 --- a/ocp-product-docs-plaintext/4.19/cli_reference/openshift_cli/configuring-cli.txt +++ b/ocp-product-docs-plaintext/4.19/cli_reference/openshift_cli/configuring-cli.txt @@ -50,4 +50,44 @@ EOF ``` -Tab completion is enabled when you open a new terminal. \ No newline at end of file +Tab completion is enabled when you open a new terminal. + +# Accessing kubeconfig by using the oc CLI + +You can use the oc CLI to log in to your OpenShift cluster and retrieve a kubeconfig file for accessing the cluster from the command line. 
+ +* You have access to the Red Hat OpenShift Container Platform web console or API server endpoint. + +1. Log in to your OpenShift cluster by running the following command: + +```terminal +$ oc login -u -p 123 +``` + +Specify the full API server URL. For example: https://api.my-cluster.example.com:6443. +Specify a valid username. For example: kubeadmin. +Provide the password for the specified user. For example, the kubeadmin password generated during cluster installation. +2. Save the cluster configuration to a local file by running the following command: + +```terminal +$ oc config view --raw > kubeconfig +``` + +3. Set the KUBECONFIG environment variable to point to the exported file by running the following command: + +```terminal +$ export KUBECONFIG=./kubeconfig +``` + +4. Use oc to interact with your OpenShift cluster by running the following command: + +```terminal +$ oc get nodes +``` + + + +[NOTE] +---- +If you plan to reuse the exported kubeconfig file across sessions or machines, store it securely and avoid committing it to source control. +---- \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.19/disconnected/mirroring/about-installing-oc-mirror-v2.txt b/ocp-product-docs-plaintext/4.19/disconnected/mirroring/about-installing-oc-mirror-v2.txt index c4edeee9..f75daeb3 100644 --- a/ocp-product-docs-plaintext/4.19/disconnected/mirroring/about-installing-oc-mirror-v2.txt +++ b/ocp-product-docs-plaintext/4.19/disconnected/mirroring/about-installing-oc-mirror-v2.txt @@ -355,10 +355,21 @@ CatalogSource:: Retrieves information about the available Operators in the mirro ClusterCatalog:: Retrieves information about the available cluster extensions (which includes Operators) in the mirror registry. Used by OLM v1. UpdateService:: Provides update graph data to the disconnected environment. Used by the OpenShift Update Service. +* CatalogSource * ImageDigestMirrorSet * ImageTagMirrorSet * About catalogs in OLM v1 +## Restrictions on modifying resources that are generated by the oc-mirror plugin + +When using resources that are generated by the oc-mirror plugin v2 to configure your cluster, you must not change certain fields. Modifying these fields can cause errors and is not supported. + +The following table lists the resources and their fields that must remain unchanged: + + + +For more information about these resources, see the OpenShift API documentation for CatalogSource, ImageDigestMirrorSet, and ImageTagMirrorSet. + ## Configuring your cluster to use the resources generated by oc-mirror plugin v2 After you have mirrored your image set to the mirror registry, you must apply the generated ImageDigestMirrorSet (IDMS), ImageTagMirrorSet (ITMS), CatalogSource, and UpdateService resources to the cluster. @@ -1007,6 +1018,70 @@ The following tables describe the oc mirror subcommands and flags for deleting i +## About the --cache-dir and --workspace flags + +You can use the --cache-dir flag to specify a directory where the oc-mirror plugin stores a persistent cache of image blobs and manifests for use during mirroring operations. + +The oc-mirror plugin uses the cache in the disk-to-mirror and mirror-to-disk workflows but does not use the cache in the mirror-to-mirror workflow. The plugin uses the cache to perform incremental mirroring and avoids remirroring unchanged images, which saves time and reduces network bandwidth usage. + +The cache directory only contains data up to the last successful mirroring operation. 
If you delete or corrupt the cache directory, the oc-mirror plugin pulls the image blobs and manifests again, which can force a full remirror and increase network usage. + +You can use the --workspace flag to specify a directory where the oc-mirror plugin stores the working files that it creates during a mirroring operation, such as the ImageDigestMirrorSet and ImageTagMirrorSet manifests. You can also use the workspace directory to perform the following actions: + +* Store the untarred metadata for release and operator images. +* Generate tar archives for use in disk-to-mirror workflows. +* Apply the generated configuration to clusters. +* Repeat or resume previous mirroring operations. + +If you remove or modify the workspace directory, future mirroring operations might fail, or clusters might use inconsistent image sources. + + +[WARNING] +---- +Deleting or modifying the contents of cache or workspace directories can cause the following issues: +* Failed or incomplete mirroring operations. +* Loss of incremental mirroring data. +* Full remirroring requirements and increased network overhead. +Do not modify, relocate, or delete these directories unless you fully understand the impact. You must regularly back up the cache directory after successful mirroring operations. It is not necessary to back up the workspace directory because its contents are regenerated during each mirroring cycle. +---- + +Consider the following best practices to manage the cache and workspace directories effectively: + +* Use persistent storage: Place the cache and workspace directories on reliable and backed‑up storage. +* Back up after successful operations: Regularly back up the cache directory, especially after completing a mirroring cycle. +* Restore when needed: In case of data loss, restore the cache and workspace directories from backup to resume mirroring operations without performing a full remirror. +* Separate environments: Use dedicated directories for different environments to prevent conflicts. + +Use the following example to specify the cache and workspace directories when running the oc-mirror command: + + +```terminal +$ oc mirror --config=imageset-config.yaml \ + file://local_mirror \ + --workspace /mnt/mirror-data/workspace \ + --cache-dir /mnt/mirror-data/cache \ + --v2 +``` + + +After the mirroring operation completes, your directory structure is as follows: + + +```text +/mnt/mirror-data/ +├── cache/ +│ ├── manifests/ +│ ├── metadata.db +│ └── previous-mirror-state.json +└── workspace/ + ├── imageset-config-state.yaml + ├── manifests/ + └── icsp/ +``` + + +You must back up the /mnt/mirror-data/cache directory after each successful mirroring operation.
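One simple way to take that backup, assuming the /mnt/mirror-data layout shown above and a writable backup location of your choice:

```terminal
$ tar -czf /backups/oc-mirror-cache-$(date +%Y%m%d).tar.gz -C /mnt/mirror-data cache
```

Restoring the archive into the same location before the next run lets the plugin continue mirroring incrementally.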
+ * Configuring your cluster to use the resources generated by oc-mirror # Next steps diff --git a/ocp-product-docs-plaintext/4.19/extensions/ce/crd-upgrade-safety.txt b/ocp-product-docs-plaintext/4.19/extensions/ce/crd-upgrade-safety.txt index bfea505f..c33be27a 100644 --- a/ocp-product-docs-plaintext/4.19/extensions/ce/crd-upgrade-safety.txt +++ b/ocp-product-docs-plaintext/4.19/extensions/ce/crd-upgrade-safety.txt @@ -49,9 +49,9 @@ The following changes to an existing custom resource definition (CRD) are safe f * The maximum value of an existing field is increased in an existing version * A new version of the CRD is added with no modifications to existing versions -# Disabling CRD upgrade safety preflight check +# Disabling the CRD upgrade safety preflight check -The custom resource definition (CRD) upgrade safety preflight check can be disabled by adding the preflight.crdUpgradeSafety.disabled field with a value of true to the ClusterExtension object that provides the CRD. +You can disable the custom resource definition (CRD) upgrade safety preflight check. In the ClusterExtension object that provides the CRD, set the install.preflight.crdUpgradeSafety.enforcement field with the value of None. [WARNING] @@ -59,15 +59,14 @@ The custom resource definition (CRD) upgrade safety preflight check can be disab Disabling the CRD upgrade safety preflight check could break backwards compatibility with stored versions of the CRD and cause other unintended consequences on the cluster. ---- -You cannot disable individual field validators. If you disable the CRD upgrade safety preflight check, all field validators are disabled. +You cannot disable individual field validators. If you disable the CRD upgrade safety preflight check, you disable all field validators. [NOTE] ---- -The following checks are handled by the Kubernetes API server: -* The scope changes from Cluster to Namespace or from Namespace to Cluster -* An existing stored version of the CRD is removed -After disabling the CRD upgrade safety preflight check via Operator Lifecycle Manager (OLM) v1, these two operations are still prevented by Kubernetes. +If you disable the CRD upgrade safety preflight check in Operator Lifecycle Manager (OLM) v1, the Kubernetes API server still prevents the following operations: +* Changing scope from Cluster to Namespace or from Namespace to Cluster +* Removing an existing stored version of the CRD ---- * You have a cluster extension installed. @@ -78,24 +77,29 @@ After disabling the CRD upgrade safety preflight check via Operator Lifecycle Ma $ oc edit clusterextension ``` -2. Set the preflight.crdUpgradeSafety.disabled field to true: +2. Set the install.preflight.crdUpgradeSafety.enforcement field to None: Example ClusterExtension object ```yaml -apiVersion: olm.operatorframework.io/v1alpha1 +apiVersion: olm.operatorframework.io/v1 kind: ClusterExtension metadata: - name: clusterextension-sample + name: clusterextension-sample spec: - installNamespace: default - packageName: argocd-operator - version: 0.6.0 + namespace: default + serviceAccount: + name: sa-example + source: + sourceType: "Catalog" + catalog: + packageName: argocd-operator + version: 0.6.0 + install: preflight: - crdUpgradeSafety: - disabled: true 1 + crdUpgradeSafety: + enforcement: None ``` -Set to true. 
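After you save the edit shown in the hunk above, you can confirm that the new enforcement value was applied by using a standard oc query; the cluster extension name matches the example and is illustrative:

```terminal
$ oc get clusterextension clusterextension-sample -o jsonpath='{.spec.install.preflight.crdUpgradeSafety.enforcement}'
```

The command prints None when the preflight check is disabled.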
# Examples of unsafe CRD changes diff --git a/ocp-product-docs-plaintext/4.19/hosted_control_planes/hcp-deploy/hcp-deploy-bm.txt b/ocp-product-docs-plaintext/4.19/hosted_control_planes/hcp-deploy/hcp-deploy-bm.txt index a57e4556..a67a3c58 100644 --- a/ocp-product-docs-plaintext/4.19/hosted_control_planes/hcp-deploy/hcp-deploy-bm.txt +++ b/ocp-product-docs-plaintext/4.19/hosted_control_planes/hcp-deploy/hcp-deploy-bm.txt @@ -9,8 +9,8 @@ The management cluster is not the same thing as the managed cluster. A managed c ---- The hosted control planes feature is enabled by default. The multicluster engine Operator supports only the default local-cluster, which is a hub cluster that is managed, and the hub cluster as the management cluster. If you have Red Hat Advanced Cluster Management installed, you can use the managed hub cluster, also known as the local-cluster, as the management cluster. -A hosted cluster is an Red Hat OpenShift Container Platform cluster with its API endpoint and control plane that are hosted on the management cluster. The hosted cluster includes the control plane and its corresponding data plane. You can use the multicluster engine Operator console or the hosted control plane command-line interface, hcp, to create a hosted cluster. -The hosted cluster is automatically imported as a managed cluster. If you want to disable this automatic import feature, see Disabling the automatic import of hosted clusters into multicluster engine Operator. +A hosted cluster is an Red Hat OpenShift Container Platform cluster with its API endpoint and control plane that are hosted on the management cluster. The hosted cluster includes the control plane and its corresponding data plane. You can use the multicluster engine Operator console or the hosted control plane command-line interface (hcp) to create a hosted cluster. +The hosted cluster is automatically imported as a managed cluster. If you want to disable this automatic import feature, see "Disabling the automatic import of hosted clusters into multicluster engine Operator". # Preparing to deploy hosted control planes on bare metal @@ -259,7 +259,7 @@ cluster-api-f75d86f8c-56wfz 1/1 Running 0 4m To create a hosted cluster by using the console, complete the following steps. -1. Open the Red Hat OpenShift Container Platform web console and log in by entering your administrator credentials. For instructions to open the console, see Accessing the web console. +1. Open the Red Hat OpenShift Container Platform web console and log in by entering your administrator credentials. For instructions to open the console, see "Accessing the web console". 2. In the console header, ensure that All Clusters is selected. 3. Click Infrastructure -> Clusters. 4. Click Create cluster -> Host inventory -> Hosted control plane. @@ -270,7 +270,7 @@ The Create cluster page is displayed. [NOTE] ---- As you enter details about the cluster, you might find the following tips useful: -* If you want to use predefined values to automatically populate fields in the console, you can create a host inventory credential. For more information, see Creating a credential for an on-premises environment. +* If you want to use predefined values to automatically populate fields in the console, you can create a host inventory credential. For more information, see "Creating a credential for an on-premises environment". 
* On the Cluster details page, the pull secret is your Red Hat OpenShift Container Platform pull secret that you use to access Red Hat OpenShift Container Platform resources. If you selected a host inventory credential, the pull secret is automatically populated. * On the Node pools page, the namespace contains the hosts for the node pool. If you created a host inventory by using the console, the console creates a dedicated namespace. * On the Networking page, you select an API server publishing strategy. The API server for the hosted cluster can be exposed either by using an existing load balancer or as a service of the NodePort type. A DNS entry must exist for the api.. setting that points to the destination where the API server can be reached. This entry can be a record that points to one of the nodes in the management cluster or a record that points to a load balancer that redirects incoming traffic to the Ingress pods. @@ -327,7 +327,7 @@ The --api-server-address flag defines the IP address that is used for the Kubern Specify the icsp.yaml file that defines ICSP and your mirror registries. Specify the path to your SSH public key. The default file path is ~/.ssh/id_rsa.pub. Specify your hosted cluster namespace. -Specify the supported Red Hat OpenShift Container Platform version that you want to use, for example, 4.19.0-multi. If you are using a disconnected environment, replace with the digest image. To extract the Red Hat OpenShift Container Platform release image digest, see Extracting the Red Hat OpenShift Container Platform release image digest. +Specify the supported Red Hat OpenShift Container Platform version that you want to use, for example, 4.19.0-multi. If you are using a disconnected environment, replace with the digest image. To extract the Red Hat OpenShift Container Platform release image digest, see "Extracting the Red Hat OpenShift Container Platform release image digest". * To create credentials that you can reuse when you create a hosted cluster with the console, see Creating a credential for an on-premises environment. * To access a hosted cluster, see Accessing the hosted cluster. diff --git a/ocp-product-docs-plaintext/4.19/hosted_control_planes/hcp-deploy/hcp-deploy-non-bm.txt b/ocp-product-docs-plaintext/4.19/hosted_control_planes/hcp-deploy/hcp-deploy-non-bm.txt index 350e704b..7c270579 100644 --- a/ocp-product-docs-plaintext/4.19/hosted_control_planes/hcp-deploy/hcp-deploy-non-bm.txt +++ b/ocp-product-docs-plaintext/4.19/hosted_control_planes/hcp-deploy/hcp-deploy-non-bm.txt @@ -328,7 +328,7 @@ The --api-server-address flag defines the IP address that is used for the Kubern Specify the icsp.yaml file that defines ICSP and your mirror registries. Specify the path to your SSH public key. The default file path is ~/.ssh/id_rsa.pub. Specify your hosted cluster namespace. -Specify the supported Red Hat OpenShift Container Platform version that you want to use, for example, 4.19.0-multi. If you are using a disconnected environment, replace with the digest image. To extract the Red Hat OpenShift Container Platform release image digest, see Extracting the Red Hat OpenShift Container Platform release image digest. +Specify the supported Red Hat OpenShift Container Platform version that you want to use, for example, 4.19.0-multi. If you are using a disconnected environment, replace with the digest image. To extract the Red Hat OpenShift Container Platform release image digest, see "Extracting the Red Hat OpenShift Container Platform release image digest". 
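The last callout refers to a separate procedure for extracting the release image digest. As a rough sketch only, the digest can be read from the release image metadata; the image reference below is a placeholder for the release image that you actually use, and the exact steps are in the referenced procedure:

```terminal
# Placeholder release image reference; substitute the release image for your environment
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.19.0-multi
```

The sha256 digest reported in the command output is the value to use in place of the tagged image in disconnected environments.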
* To create credentials that you can reuse when you create a hosted cluster with the console, see Creating a credential for an on-premises environment. * To access a hosted cluster, see Accessing the hosted cluster. diff --git a/ocp-product-docs-plaintext/4.19/hosted_control_planes/hcp-manage/hcp-manage-aws.txt b/ocp-product-docs-plaintext/4.19/hosted_control_planes/hcp-manage/hcp-manage-aws.txt index 38e5196f..7bb7a4b5 100644 --- a/ocp-product-docs-plaintext/4.19/hosted_control_planes/hcp-manage/hcp-manage-aws.txt +++ b/ocp-product-docs-plaintext/4.19/hosted_control_planes/hcp-manage/hcp-manage-aws.txt @@ -974,4 +974,152 @@ spec: ``` -Specify the tag that you want to add to your resource. \ No newline at end of file +Specify the tag that you want to add to your resource. + +# Configuring node pool capacity blocks on AWS + +After creating a hosted cluster, you can configure node pool capacity blocks for graphics processing unit (GPU) reservations on Amazon Web Services (AWS). + +1. Create GPU reservations on AWS by running the following command: + +[IMPORTANT] +---- +The zone of the GPU reservation must match your hosted cluster zone. +---- + +```terminal +$ aws ec2 describe-capacity-block-offerings \ + --instance-type "p4d.24xlarge"\ 1 + --instance-count "1" \ 2 + --start-date-range "$(date -u +"%Y-%m-%dT%H:%M:%SZ")" \ 3 + --end-date-range "$(date -u -d "2 day" +"%Y-%m-%dT%H:%M:%SZ")" \ 4 + --capacity-duration-hours 24 \ 5 + --output json +``` + +Defines the type of your AWS instance, for example, p4d.24xlarge. +Defines your instance purchase quantity, for example, 1. Valid values are integers ranging from 1 to 64. +Defines the start date range, for example, 2025-07-21T10:14:39Z. +Defines the end date range, for example, 2025-07-22T10:16:36Z. +Defines the duration of capacity blocks in hours, for example, 24. +2. Purchase the minimum fee capacity block by running the following command: + +```terminal +$ aws ec2 purchase-capacity-block \ + --capacity-block-offering-id "${MIN_FEE_ID}" \ 1 + --instance-platform "Linux/UNIX"\ 2 + --tag-specifications 'ResourceType=capacity-reservation,Tags=[{Key=usage-cluster-type,Value=hypershift-hosted}]' \ 3 + --output json > "${CR_OUTPUT_FILE}" +``` + +Defines the ID of the capacity block offering. +Defines the platform of your instance. +Defines the tag for your instance. +3. Create an environment variable to set the capacity reservation ID by running the following command: + +```terminal +$ CB_RESERVATION_ID=$(jq -r '.CapacityReservation.CapacityReservationId' "${CR_OUTPUT_FILE}") +``` + + +Wait for a couple of minutes for the GPU reservation to become available. +4. Add a node pool to use the GPU reservation by running the following command: + +```terminal +$ hcp create nodepool aws \ + --cluster-name \ 1 + --name \ 2 + --node-count 1 \ 3 + --instance-type p4d.24xlarge \ 4 + --arch amd64 \ 5 + --release-image \ 6 + --render > /tmp/np.yaml +``` + +Replace with the name of your hosted cluster. +Replace with the name of your node pool. +Defines the node pool count, for example, 1. +Defines the instance type, for example, p4d.24xlarge. +Defines an architecture type, for example, amd64. +Replace with the release image you want to use. +5. Add the capacityReservation setting in your NodePool resource by using the following example configuration: + +```yaml +# ... 
+spec: + arch: amd64 + clusterName: cb-np-hcp + management: + autoRepair: false + upgradeType: Replace + platform: + aws: + instanceProfile: cb-np-hcp-dqppw-worker + instanceType: p4d.24xlarge + rootVolume: + size: 120 + type: gp3 + subnet: + id: subnet-00000 + placement: + capacityReservation: + id: ${CB_RESERVATION_ID} + marketType: CapacityBlocks + type: AWS +# ... +``` + +6. Apply the node pool configuration by running the following command: + +```terminal +$ oc apply -f /tmp/np.yaml +``` + + +1. Verify that your new node pool is created successfully by running the following command: + +```terminal +$ oc get np -n clusters +``` + +Example output + +```terminal +NAMESPACE NAME CLUSTER DESIRED NODES CURRENT NODES AUTOSCALING AUTOREPAIR VERSION UPDATINGVERSION UPDATINGCONFIG MESSAGE +clusters cb-np cb-np-hcp 1 1 False False 4.20.0-0.nightly-2025-06-05-224220 False False +``` + +2. Verify that your new compute nodes are created in the hosted cluster by running the following command: + +```terminal +$ oc get nodes +``` + +Example output + +```terminal +NAME STATUS ROLES AGE VERSION +ip-10-0-132-74.ec2.internal Ready worker 17m v1.32.5 +ip-10-0-134-183.ec2.internal Ready worker 4h5m v1.32.5 +``` + + +## Destroying a hosted cluster after configuring node pool capacity blocks + +After you configured node pool capacity blocks, you can optionally destroy a hosted cluster and uninstall the HyperShift Operator. + +1. To destroy a hosted cluster, run the following example command: + +```terminal +$ hcp destroy cluster aws \ + --name cb-np-hcp \ + --aws-creds $HOME/.aws/credentials \ + --namespace clusters \ + --region us-east-2 +``` + +2. To uninstall the HyperShift Operator, run the following command: + +```terminal +$ hcp install render --format=yaml | oc delete -f - +``` diff --git a/ocp-product-docs-plaintext/4.19/hosted_control_planes/hcp-prepare/hcp-requirements.txt b/ocp-product-docs-plaintext/4.19/hosted_control_planes/hcp-prepare/hcp-requirements.txt index adc8d9e5..31225ef9 100644 --- a/ocp-product-docs-plaintext/4.19/hosted_control_planes/hcp-prepare/hcp-requirements.txt +++ b/ocp-product-docs-plaintext/4.19/hosted_control_planes/hcp-prepare/hcp-requirements.txt @@ -112,7 +112,7 @@ When running RHEL or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode After you set up your management cluster in FIPS mode, the hosted cluster creation process runs on that management cluster. 
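Because the hosted cluster creation process runs on the management cluster, it can help to confirm that the management cluster nodes really booted in FIPS mode before you proceed. A minimal check, assuming cluster-admin access; the node name is a placeholder:

```terminal
# Prints 1 when the node kernel is running in FIPS mode (node name is a placeholder)
$ oc debug node/<management_cluster_node> -- chroot /host cat /proc/sys/crypto/fips_enabled
```

A value of 1 indicates that the node is running in FIPS mode; 0 indicates that it is not.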
-* The multicluster engine for Kubernetes Operator 2.8 support matrix +* The multicluster engine for Kubernetes Operator 2.9 support matrix * Red Hat Red Hat OpenShift Container Platform Operator Update Information Checker * Shared infrastructure between hosted and standalone control planes diff --git a/ocp-product-docs-plaintext/4.19/hosted_control_planes/hcp-updating.txt b/ocp-product-docs-plaintext/4.19/hosted_control_planes/hcp-updating.txt index 67fc76b0..f904fef8 100644 --- a/ocp-product-docs-plaintext/4.19/hosted_control_planes/hcp-updating.txt +++ b/ocp-product-docs-plaintext/4.19/hosted_control_planes/hcp-updating.txt @@ -176,6 +176,8 @@ The multicluster engine for Kubernetes Operator requires a specific Red Hat Open See the following support matrices for the multicluster engine Operator versions: +* multicluster engine Operator 2.9 +* multicluster engine Operator 2.8 * multicluster engine Operator 2.7 * multicluster engine Operator 2.6 * multicluster engine Operator 2.5 diff --git a/ocp-product-docs-plaintext/4.19/hosted_control_planes/hosted-control-planes-release-notes.txt b/ocp-product-docs-plaintext/4.19/hosted_control_planes/hosted-control-planes-release-notes.txt index e9bf062e..aa58b51a 100644 --- a/ocp-product-docs-plaintext/4.19/hosted_control_planes/hosted-control-planes-release-notes.txt +++ b/ocp-product-docs-plaintext/4.19/hosted_control_planes/hosted-control-planes-release-notes.txt @@ -31,6 +31,10 @@ Hosted control planes on RHOSP 17.1 is now supported as a Technology Preview fea For more information, see Deploying hosted control planes on OpenStack. +### Configuring node pool capacity blocks on AWS + +You can now configure node pool capacity blocks for hosted control planes on Amazon Web Services (AWS). For more information, see Configuring node pool capacity blocks on AWS. + ## Bug fixes * Previously, when an IDMS or ICSP in the management OpenShift cluster defined a source that pointed to registry.redhat.io or registry.redhat.io/redhat, and the mirror registry did not contain the required OLM catalog images, provisioning for the HostedCluster resource stalled due to unauthorized image pulls. As a consequence, the HostedCluster resource was not deployed, and it remained blocked, where it could not pull essential catalog images from the mirrored registry. diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_aws/ipi/installing-aws-china.txt b/ocp-product-docs-plaintext/4.19/installing/installing_aws/ipi/installing-aws-china.txt index 2fe862c3..76a6b44d 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_aws/ipi/installing-aws-china.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_aws/ipi/installing-aws-china.txt @@ -1098,7 +1098,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -1. Change to the directory that contains the installation program and initialize the cluster deployment: +1. 
In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -1153,9 +1153,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1163,7 +1163,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_aws/ipi/installing-aws-customizations.txt b/ocp-product-docs-plaintext/4.19/installing/installing_aws/ipi/installing-aws-customizations.txt index 5b443db0..af33d34e 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_aws/ipi/installing-aws-customizations.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_aws/ipi/installing-aws-customizations.txt @@ -810,7 +810,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -1. Change to the directory that contains the installation program and initialize the cluster deployment: +1. In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -865,9 +865,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -875,7 +875,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. 
Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_aws/ipi/installing-aws-default.txt b/ocp-product-docs-plaintext/4.19/installing/installing_aws/ipi/installing-aws-default.txt index 5b82395d..52f6baf7 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_aws/ipi/installing-aws-default.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_aws/ipi/installing-aws-default.txt @@ -30,7 +30,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -1. Change to the directory that contains the installation program and initialize the cluster deployment: +1. In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -110,9 +110,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -120,7 +120,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_aws/ipi/installing-aws-government-region.txt b/ocp-product-docs-plaintext/4.19/installing/installing_aws/ipi/installing-aws-government-region.txt index 1273920b..2fc4048c 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_aws/ipi/installing-aws-government-region.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_aws/ipi/installing-aws-government-region.txt @@ -1016,7 +1016,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -1. Change to the directory that contains the installation program and initialize the cluster deployment: +1. 
In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -1071,9 +1071,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1081,7 +1081,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_aws/ipi/installing-aws-localzone.txt b/ocp-product-docs-plaintext/4.19/installing/installing_aws/ipi/installing-aws-localzone.txt index d8b8874e..ce3dcb14 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_aws/ipi/installing-aws-localzone.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_aws/ipi/installing-aws-localzone.txt @@ -1165,7 +1165,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -1. Change to the directory that contains the installation program and initialize the cluster deployment: +1. In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -1224,9 +1224,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1234,7 +1234,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. 
Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_aws/ipi/installing-aws-network-customizations.txt b/ocp-product-docs-plaintext/4.19/installing/installing_aws/ipi/installing-aws-network-customizations.txt index a4233a8c..093fcc0b 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_aws/ipi/installing-aws-network-customizations.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_aws/ipi/installing-aws-network-customizations.txt @@ -1029,7 +1029,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -1. Change to the directory that contains the installation program and initialize the cluster deployment: +1. In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -1084,9 +1084,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1094,7 +1094,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_aws/ipi/installing-aws-private.txt b/ocp-product-docs-plaintext/4.19/installing/installing_aws/ipi/installing-aws-private.txt index 4a4bec8c..ac228bdb 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_aws/ipi/installing-aws-private.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_aws/ipi/installing-aws-private.txt @@ -950,7 +950,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -1. Change to the directory that contains the installation program and initialize the cluster deployment: +1. 
In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -1005,9 +1005,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1015,7 +1015,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_aws/ipi/installing-aws-secret-region.txt b/ocp-product-docs-plaintext/4.19/installing/installing_aws/ipi/installing-aws-secret-region.txt index 82a762a2..3cc54f1d 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_aws/ipi/installing-aws-secret-region.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_aws/ipi/installing-aws-secret-region.txt @@ -1104,7 +1104,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -1. Change to the directory that contains the installation program and initialize the cluster deployment: +1. In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -1159,9 +1159,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1169,7 +1169,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. 
Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_aws/ipi/installing-aws-vpc.txt b/ocp-product-docs-plaintext/4.19/installing/installing_aws/ipi/installing-aws-vpc.txt index 7c874f03..980556e6 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_aws/ipi/installing-aws-vpc.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_aws/ipi/installing-aws-vpc.txt @@ -951,7 +951,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -1. Change to the directory that contains the installation program and initialize the cluster deployment: +1. In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -1006,9 +1006,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1016,7 +1016,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_aws/ipi/installing-aws-wavelength-zone.txt b/ocp-product-docs-plaintext/4.19/installing/installing_aws/ipi/installing-aws-wavelength-zone.txt index 144b7806..99c8437a 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_aws/ipi/installing-aws-wavelength-zone.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_aws/ipi/installing-aws-wavelength-zone.txt @@ -1225,7 +1225,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -1. Change to the directory that contains the installation program and initialize the cluster deployment: +1. 
In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -1284,9 +1284,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1294,7 +1294,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_aws/ipi/installing-restricted-networks-aws-installer-provisioned.txt b/ocp-product-docs-plaintext/4.19/installing/installing_aws/ipi/installing-restricted-networks-aws-installer-provisioned.txt index 3b6713c7..d57cd928 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_aws/ipi/installing-restricted-networks-aws-installer-provisioned.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_aws/ipi/installing-restricted-networks-aws-installer-provisioned.txt @@ -941,7 +941,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -1. Change to the directory that contains the installation program and initialize the cluster deployment: +1. In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -996,9 +996,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1006,7 +1006,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. 
Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_aws/ipi/ipi-aws-preparing-to-install.txt b/ocp-product-docs-plaintext/4.19/installing/installing_aws/ipi/ipi-aws-preparing-to-install.txt index 5e983ef4..ddb910ce 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_aws/ipi/ipi-aws-preparing-to-install.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_aws/ipi/ipi-aws-preparing-to-install.txt @@ -25,7 +25,7 @@ In Red Hat OpenShift Container Platform 4.19, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_aws/upi/installing-aws-user-infra.txt b/ocp-product-docs-plaintext/4.19/installing/installing_aws/upi/installing-aws-user-infra.txt index 0c6818c5..9cba7438 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_aws/upi/installing-aws-user-infra.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_aws/upi/installing-aws-user-infra.txt @@ -1690,9 +1690,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1700,7 +1700,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_aws/upi/installing-restricted-networks-aws.txt b/ocp-product-docs-plaintext/4.19/installing/installing_aws/upi/installing-restricted-networks-aws.txt index d96b2a61..dc822502 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_aws/upi/installing-restricted-networks-aws.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_aws/upi/installing-restricted-networks-aws.txt @@ -2075,9 +2075,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -2085,7 +2085,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. 
Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_aws/upi/upi-aws-preparing-to-install.txt b/ocp-product-docs-plaintext/4.19/installing/installing_aws/upi/upi-aws-preparing-to-install.txt index 10c7ea55..34a0dfc4 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_aws/upi/upi-aws-preparing-to-install.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_aws/upi/upi-aws-preparing-to-install.txt @@ -26,7 +26,7 @@ In Red Hat OpenShift Container Platform 4.19, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_azure/ipi/installing-azure-customizations.txt b/ocp-product-docs-plaintext/4.19/installing/installing_azure/ipi/installing-azure-customizations.txt index 64601bde..79dd167c 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_azure/ipi/installing-azure-customizations.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_azure/ipi/installing-azure-customizations.txt @@ -1041,7 +1041,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have an Azure subscription ID and tenant ID. -* Change to the directory that contains the installation program and initialize the cluster deployment: +* In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -1089,9 +1089,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1099,7 +1099,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. 
Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_azure/ipi/installing-azure-default.txt b/ocp-product-docs-plaintext/4.19/installing/installing_azure/ipi/installing-azure-default.txt index ad7e6359..5c9dfbb2 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_azure/ipi/installing-azure-default.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_azure/ipi/installing-azure-default.txt @@ -21,7 +21,7 @@ You can run the create cluster command of the installation program only once, du 1. Optional: If you have run the installation program on this computer before, and want to use an alternative service principal, go to the ~/.azure/ directory and delete the osServicePrincipal.json configuration file. Deleting this file prevents the installation program from automatically reusing subscription and authentication values from a previous installation. -2. Change to the directory that contains the installation program and initialize the cluster deployment: +2. In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -100,9 +100,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -110,7 +110,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_azure/ipi/installing-azure-government-region.txt b/ocp-product-docs-plaintext/4.19/installing/installing_azure/ipi/installing-azure-government-region.txt index e6fc9fa4..72dcfa1b 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_azure/ipi/installing-azure-government-region.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_azure/ipi/installing-azure-government-region.txt @@ -616,7 +616,7 @@ You can run the create cluster command of the installation program only once, du 1. Optional: If you have run the installation program on this computer before, and want to use an alternative service principal or managed identity, go to the ~/.azure/ directory and delete the osServicePrincipal.json configuration file. Deleting this file prevents the installation program from automatically reusing subscription and authentication values from a previous installation. -2. Change to the directory that contains the installation program and initialize the cluster deployment: +2. 
In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -679,9 +679,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -689,7 +689,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_azure/ipi/installing-azure-network-customizations.txt b/ocp-product-docs-plaintext/4.19/installing/installing_azure/ipi/installing-azure-network-customizations.txt index 26e8828f..b58b6734 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_azure/ipi/installing-azure-network-customizations.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_azure/ipi/installing-azure-network-customizations.txt @@ -1039,7 +1039,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have an Azure subscription ID and tenant ID. -* Change to the directory that contains the installation program and initialize the cluster deployment: +* In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -1087,9 +1087,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1097,7 +1097,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_azure/ipi/installing-azure-preparing-ipi.txt b/ocp-product-docs-plaintext/4.19/installing/installing_azure/ipi/installing-azure-preparing-ipi.txt index 67c0da24..ee465f27 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_azure/ipi/installing-azure-preparing-ipi.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_azure/ipi/installing-azure-preparing-ipi.txt @@ -12,7 +12,7 @@ In Red Hat OpenShift Container Platform 4.19, you require access to the internet install your cluster. 
-You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_azure/ipi/installing-azure-private.txt b/ocp-product-docs-plaintext/4.19/installing/installing_azure/ipi/installing-azure-private.txt index a54625ed..5b5e8e3b 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_azure/ipi/installing-azure-private.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_azure/ipi/installing-azure-private.txt @@ -1076,7 +1076,7 @@ You can run the create cluster command of the installation program only once, du 1. Optional: If you have run the installation program on this computer before, and want to use an alternative service principal or managed identity, go to the ~/.azure/ directory and delete the osServicePrincipal.json configuration file. Deleting this file prevents the installation program from automatically reusing subscription and authentication values from a previous installation. -2. Change to the directory that contains the installation program and initialize the cluster deployment: +2. In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -1139,9 +1139,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1149,7 +1149,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_azure/ipi/installing-azure-vnet.txt b/ocp-product-docs-plaintext/4.19/installing/installing_azure/ipi/installing-azure-vnet.txt index 160bbe76..a4856378 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_azure/ipi/installing-azure-vnet.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_azure/ipi/installing-azure-vnet.txt @@ -935,7 +935,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have an Azure subscription ID and tenant ID. 
-* Change to the directory that contains the installation program and initialize the cluster deployment: +* In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_azure/ipi/installing-restricted-networks-azure-installer-provisioned.txt b/ocp-product-docs-plaintext/4.19/installing/installing_azure/ipi/installing-restricted-networks-azure-installer-provisioned.txt index 7869fb5c..2d12ae28 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_azure/ipi/installing-restricted-networks-azure-installer-provisioned.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_azure/ipi/installing-restricted-networks-azure-installer-provisioned.txt @@ -1093,7 +1093,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have an Azure subscription ID and tenant ID. -* Change to the directory that contains the installation program and initialize the cluster deployment: +* In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -1141,9 +1141,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1151,7 +1151,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_azure/upi/installing-azure-preparing-upi.txt b/ocp-product-docs-plaintext/4.19/installing/installing_azure/upi/installing-azure-preparing-upi.txt index 54469b75..42b22ce1 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_azure/upi/installing-azure-preparing-upi.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_azure/upi/installing-azure-preparing-upi.txt @@ -12,7 +12,7 @@ In Red Hat OpenShift Container Platform 4.19, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. 
diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_azure/upi/installing-azure-user-infra.txt b/ocp-product-docs-plaintext/4.19/installing/installing_azure/upi/installing-azure-user-infra.txt index 6776e493..fd1ec410 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_azure/upi/installing-azure-user-infra.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_azure/upi/installing-azure-user-infra.txt @@ -29,7 +29,7 @@ In Red Hat OpenShift Container Platform 4.19, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -1908,9 +1908,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1918,7 +1918,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_azure/upi/installing-restricted-networks-azure-user-provisioned.txt b/ocp-product-docs-plaintext/4.19/installing/installing_azure/upi/installing-restricted-networks-azure-user-provisioned.txt index 697eb46f..e1387298 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_azure/upi/installing-restricted-networks-azure-user-provisioned.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_azure/upi/installing-restricted-networks-azure-user-provisioned.txt @@ -59,7 +59,7 @@ In Red Hat OpenShift Container Platform 4.19, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -1954,9 +1954,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. 
Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1964,7 +1964,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_azure_stack_hub/ipi/installing-azure-stack-hub-default.txt b/ocp-product-docs-plaintext/4.19/installing/installing_azure_stack_hub/ipi/installing-azure-stack-hub-default.txt index 047316de..56ed4366 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_azure_stack_hub/ipi/installing-azure-stack-hub-default.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_azure_stack_hub/ipi/installing-azure-stack-hub-default.txt @@ -312,7 +312,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -* Change to the directory that contains the installation program and initialize the cluster deployment: +* In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -360,9 +360,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -370,7 +370,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_azure_stack_hub/ipi/installing-azure-stack-hub-network-customizations.txt b/ocp-product-docs-plaintext/4.19/installing/installing_azure_stack_hub/ipi/installing-azure-stack-hub-network-customizations.txt index dbc378a6..b9149540 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_azure_stack_hub/ipi/installing-azure-stack-hub-network-customizations.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_azure_stack_hub/ipi/installing-azure-stack-hub-network-customizations.txt @@ -510,7 +510,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. 
An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -* Change to the directory that contains the installation program and initialize the cluster deployment: +* In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -558,9 +558,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -568,7 +568,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_azure_stack_hub/ipi/ipi-ash-preparing-to-install.txt b/ocp-product-docs-plaintext/4.19/installing/installing_azure_stack_hub/ipi/ipi-ash-preparing-to-install.txt index 7141ebd4..b7d955ef 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_azure_stack_hub/ipi/ipi-ash-preparing-to-install.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_azure_stack_hub/ipi/ipi-ash-preparing-to-install.txt @@ -15,7 +15,7 @@ In Red Hat OpenShift Container Platform 4.19, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_azure_stack_hub/upi/installing-azure-stack-hub-user-infra.txt b/ocp-product-docs-plaintext/4.19/installing/installing_azure_stack_hub/upi/installing-azure-stack-hub-user-infra.txt index 84d886d6..4780e2f5 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_azure_stack_hub/upi/installing-azure-stack-hub-user-infra.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_azure_stack_hub/upi/installing-azure-stack-hub-user-infra.txt @@ -1341,9 +1341,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1351,7 +1351,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. 
Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_azure_stack_hub/upi/upi-ash-preparing-to-install.txt b/ocp-product-docs-plaintext/4.19/installing/installing_azure_stack_hub/upi/upi-ash-preparing-to-install.txt index 9c7c8fc2..c6874862 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_azure_stack_hub/upi/upi-ash-preparing-to-install.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_azure_stack_hub/upi/upi-ash-preparing-to-install.txt @@ -14,7 +14,7 @@ In Red Hat OpenShift Container Platform 4.19, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_bare_metal/ipi/ipi-install-prerequisites.txt b/ocp-product-docs-plaintext/4.19/installing/installing_bare_metal/ipi/ipi-install-prerequisites.txt index cdd78261..f80d9e0d 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_bare_metal/ipi/ipi-install-prerequisites.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_bare_metal/ipi/ipi-install-prerequisites.txt @@ -387,4 +387,49 @@ Prior to the installation of the Red Hat OpenShift Container Platform cluster, g * Control plane and worker nodes are configured. * All nodes accessible via out-of-band management. * (Optional) A separate management network has been created. -* Required data for installation. \ No newline at end of file +* Required data for installation. + +# Installation overview + +The installation program supports interactive mode. However, you can prepare an install-config.yaml file containing the provisioning details for all of the bare-metal hosts, and the relevant cluster details, in advance. + +The installation program loads the install-config.yaml file and the administrator generates the manifests and verifies all prerequisites. + +The installation program performs the following tasks: + +* Enrolls all nodes in the cluster +* Starts the bootstrap virtual machine (VM) +* Starts the metal platform components as systemd services, which have the following containers: +* Ironic-dnsmasq: The DHCP server responsible for handing over the IP addresses to the provisioning interface of various nodes on the provisioning network. Ironic-dnsmasq is only enabled when you deploy an Red Hat OpenShift Container Platform cluster with a provisioning network. +* Ironic-httpd: The HTTP server that is used to ship the images to the nodes. +* Image-customization +* Ironic +* Ironic-inspector (available in Red Hat OpenShift Container Platform 4.16 and earlier) +* Ironic-ramdisk-logs +* Extract-machine-os +* Provisioning-interface +* Metal3-baremetal-operator + +The nodes enter the validation phase, where each node moves to a manageable state after Ironic validates the credentials to access the Baseboard Management Controller (BMC). + +When the node is in the manageable state, the inspection phase starts. 
The inspection phase ensures that the hardware meets the minimum requirements needed for a successful deployment of Red Hat OpenShift Container Platform. + +The install-config.yaml file details the provisioning network. On the bootstrap VM, the installation program uses the Pre-Boot Execution Environment (PXE) to push a live image to every node with the Ironic Python Agent (IPA) loaded. When using virtual media, it connects directly to the BMC of each node to virtually attach the image. + +When using PXE boot, all nodes reboot to start the process: + +* The ironic-dnsmasq service running on the bootstrap VM provides the IP address of the node and the TFTP boot server. +* The first-boot software loads the root file system over HTTP. +* The ironic service on the bootstrap VM receives the hardware information from each node. + +The nodes enter the cleaning state, where each node must clean all the disks before continuing with the configuration. + +After the cleaning state finishes, the nodes enter the available state and the installation program moves the nodes to the deploying state. + +IPA runs the coreos-installer command to install the Red Hat Enterprise Linux CoreOS (RHCOS) image on the disk defined by the rootDeviceHints parameter in the install-config.yaml file. The node boots by using RHCOS. + +After the installation program configures the control plane nodes, it moves control from the bootstrap VM to the control plane nodes and deletes the bootstrap VM. + +The Bare-Metal Operator continues the deployment of the workers, storage, and infra nodes. + +After the installation completes, the nodes move to the active state. You can then proceed with postinstallation configuration and other Day 2 tasks. \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_bare_metal/upi/installing-bare-metal-network-customizations.txt b/ocp-product-docs-plaintext/4.19/installing/installing_bare_metal/upi/installing-bare-metal-network-customizations.txt index 43df955f..46369ba0 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_bare_metal/upi/installing-bare-metal-network-customizations.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_bare_metal/upi/installing-bare-metal-network-customizations.txt @@ -21,7 +21,7 @@ In Red Hat OpenShift Container Platform 4.19, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -3145,9 +3145,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -3155,7 +3155,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. 
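The "Installation overview" section added above walks through the Ironic state machine (manageable, inspecting, cleaning, available, deploying, active) without showing how to observe it. As an illustrative aside, and assuming the kubeconfig exported in the surrounding procedures, once the cluster API is reachable the BareMetalHost resources in the openshift-machine-api namespace surface these transitions:

```terminal
$ oc get baremetalhosts -n openshift-machine-api
```

The output reports each host's current provisioning state, which corresponds to the phases described in the overview; re-running the command (or adding --watch) shows hosts progressing toward the provisioned state as the Bare-Metal Operator completes the deployment.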
Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_bare_metal/upi/installing-bare-metal.txt b/ocp-product-docs-plaintext/4.19/installing/installing_bare_metal/upi/installing-bare-metal.txt index a4aa7990..8e4b4658 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_bare_metal/upi/installing-bare-metal.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_bare_metal/upi/installing-bare-metal.txt @@ -30,7 +30,7 @@ In Red Hat OpenShift Container Platform 4.19, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -3142,9 +3142,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -3152,7 +3152,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_bare_metal/upi/installing-restricted-networks-bare-metal.txt b/ocp-product-docs-plaintext/4.19/installing/installing_bare_metal/upi/installing-restricted-networks-bare-metal.txt index e58df737..942d9560 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_bare_metal/upi/installing-restricted-networks-bare-metal.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_bare_metal/upi/installing-restricted-networks-bare-metal.txt @@ -69,7 +69,7 @@ In Red Hat OpenShift Container Platform 4.19, you require access to the internet obtain the images that are necessary to install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -3136,9 +3136,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. 
Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -3146,7 +3146,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_gcp/installing-gcp-customizations.txt b/ocp-product-docs-plaintext/4.19/installing/installing_gcp/installing-gcp-customizations.txt index ed7f7ce3..8dc72744 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_gcp/installing-gcp-customizations.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_gcp/installing-gcp-customizations.txt @@ -19,7 +19,7 @@ In Red Hat OpenShift Container Platform 4.19, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -1281,7 +1281,7 @@ following locations: environment variables * The ~/.gcp/osServiceAccount.json file * The gcloud cli default credentials -2. Change to the directory that contains the installation program and initialize the cluster deployment: +2. In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -1372,9 +1372,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1382,7 +1382,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_gcp/installing-gcp-default.txt b/ocp-product-docs-plaintext/4.19/installing/installing_gcp/installing-gcp-default.txt index ce98bc3a..8f6a1288 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_gcp/installing-gcp-default.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_gcp/installing-gcp-default.txt @@ -17,7 +17,7 @@ In Red Hat OpenShift Container Platform 4.19, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. 
If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -165,7 +165,7 @@ following locations: environment variables * The ~/.gcp/osServiceAccount.json file * The gcloud cli default credentials -2. Change to the directory that contains the installation program and initialize the cluster deployment: +2. In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -339,9 +339,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -349,7 +349,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_gcp/installing-gcp-network-customizations.txt b/ocp-product-docs-plaintext/4.19/installing/installing_gcp/installing-gcp-network-customizations.txt index 22eb9a11..9afea68c 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_gcp/installing-gcp-network-customizations.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_gcp/installing-gcp-network-customizations.txt @@ -25,7 +25,7 @@ In Red Hat OpenShift Container Platform 4.19, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -1239,7 +1239,7 @@ following locations: environment variables * The ~/.gcp/osServiceAccount.json file * The gcloud cli default credentials -2. Change to the directory that contains the installation program and initialize the cluster deployment: +2. In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -1330,9 +1330,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. 
Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1340,7 +1340,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_gcp/installing-gcp-private.txt b/ocp-product-docs-plaintext/4.19/installing/installing_gcp/installing-gcp-private.txt index b425335f..96816ec3 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_gcp/installing-gcp-private.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_gcp/installing-gcp-private.txt @@ -112,7 +112,7 @@ In Red Hat OpenShift Container Platform 4.19, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -1234,7 +1234,7 @@ following locations: environment variables * The ~/.gcp/osServiceAccount.json file * The gcloud cli default credentials -2. Change to the directory that contains the installation program and initialize the cluster deployment: +2. In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -1325,9 +1325,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1335,7 +1335,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_gcp/installing-gcp-shared-vpc.txt b/ocp-product-docs-plaintext/4.19/installing/installing_gcp/installing-gcp-shared-vpc.txt index fbcc5ac1..36922219 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_gcp/installing-gcp-shared-vpc.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_gcp/installing-gcp-shared-vpc.txt @@ -20,7 +20,7 @@ In Red Hat OpenShift Container Platform 4.19, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. 
If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -946,7 +946,7 @@ following locations: environment variables * The ~/.gcp/osServiceAccount.json file * The gcloud cli default credentials -2. Change to the directory that contains the installation program and initialize the cluster deployment: +2. In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -1037,9 +1037,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1047,7 +1047,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_gcp/installing-gcp-user-infra-vpc.txt b/ocp-product-docs-plaintext/4.19/installing/installing_gcp/installing-gcp-user-infra-vpc.txt index 460c85c8..1c262e45 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_gcp/installing-gcp-user-infra-vpc.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_gcp/installing-gcp-user-infra-vpc.txt @@ -36,7 +36,7 @@ In Red Hat OpenShift Container Platform 4.19, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -1898,10 +1898,10 @@ You can log in to your cluster as a default system user by exporting the cluster The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). * Ensure the bootstrap process completed successfully. -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1909,7 +1909,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. 
Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_gcp/installing-gcp-user-infra.txt b/ocp-product-docs-plaintext/4.19/installing/installing_gcp/installing-gcp-user-infra.txt index 42fffa07..cbcb7ba1 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_gcp/installing-gcp-user-infra.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_gcp/installing-gcp-user-infra.txt @@ -31,7 +31,7 @@ In Red Hat OpenShift Container Platform 4.19, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -2048,10 +2048,10 @@ You can log in to your cluster as a default system user by exporting the cluster The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). * Ensure the bootstrap process completed successfully. -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -2059,7 +2059,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_gcp/installing-gcp-vpc.txt b/ocp-product-docs-plaintext/4.19/installing/installing_gcp/installing-gcp-vpc.txt index 98c51b89..f65880b0 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_gcp/installing-gcp-vpc.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_gcp/installing-gcp-vpc.txt @@ -60,7 +60,7 @@ In Red Hat OpenShift Container Platform 4.19, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -1196,7 +1196,7 @@ following locations: environment variables * The ~/.gcp/osServiceAccount.json file * The gcloud cli default credentials -2. Change to the directory that contains the installation program and initialize the cluster deployment: +2. 
In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -1287,9 +1287,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1297,7 +1297,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_gcp/installing-restricted-networks-gcp-installer-provisioned.txt b/ocp-product-docs-plaintext/4.19/installing/installing_gcp/installing-restricted-networks-gcp-installer-provisioned.txt index e1453433..e91e5a9f 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_gcp/installing-restricted-networks-gcp-installer-provisioned.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_gcp/installing-restricted-networks-gcp-installer-provisioned.txt @@ -56,7 +56,7 @@ In Red Hat OpenShift Container Platform 4.19, you require access to the internet obtain the images that are necessary to install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -1227,7 +1227,7 @@ following locations: environment variables * The ~/.gcp/osServiceAccount.json file * The gcloud cli default credentials -2. Change to the directory that contains the installation program and initialize the cluster deployment: +2. In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -1318,9 +1318,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1328,7 +1328,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. 
Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_gcp/installing-restricted-networks-gcp.txt b/ocp-product-docs-plaintext/4.19/installing/installing_gcp/installing-restricted-networks-gcp.txt index d235693e..3c484869 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_gcp/installing-restricted-networks-gcp.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_gcp/installing-restricted-networks-gcp.txt @@ -65,7 +65,7 @@ In Red Hat OpenShift Container Platform 4.19, you require access to the internet obtain the images that are necessary to install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -2010,10 +2010,10 @@ You can log in to your cluster as a default system user by exporting the cluster The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). * Ensure the bootstrap process completed successfully. -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -2021,7 +2021,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_ibm_cloud/installing-ibm-cloud-customizations.txt b/ocp-product-docs-plaintext/4.19/installing/installing_ibm_cloud/installing-ibm-cloud-customizations.txt index 104a5f51..964551a6 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_ibm_cloud/installing-ibm-cloud-customizations.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_ibm_cloud/installing-ibm-cloud-customizations.txt @@ -17,7 +17,7 @@ In Red Hat OpenShift Container Platform 4.19, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -507,7 +507,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. 
An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -* Change to the directory that contains the installation program and initialize the cluster deployment: +* In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -652,9 +652,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -662,7 +662,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_ibm_cloud/installing-ibm-cloud-network-customizations.txt b/ocp-product-docs-plaintext/4.19/installing/installing_ibm_cloud/installing-ibm-cloud-network-customizations.txt index f40a51a6..23911bd8 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_ibm_cloud/installing-ibm-cloud-network-customizations.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_ibm_cloud/installing-ibm-cloud-network-customizations.txt @@ -19,7 +19,7 @@ In Red Hat OpenShift Container Platform 4.19, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -655,7 +655,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -* Change to the directory that contains the installation program and initialize the cluster deployment: +* In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -800,9 +800,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. 
Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -810,7 +810,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_ibm_cloud/installing-ibm-cloud-private.txt b/ocp-product-docs-plaintext/4.19/installing/installing_ibm_cloud/installing-ibm-cloud-private.txt index 3014c6ff..6ed783a8 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_ibm_cloud/installing-ibm-cloud-private.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_ibm_cloud/installing-ibm-cloud-private.txt @@ -125,7 +125,7 @@ In Red Hat OpenShift Container Platform 4.19, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -625,7 +625,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -* Change to the directory that contains the installation program and initialize the cluster deployment: +* In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -770,9 +770,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -780,7 +780,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. 
Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_ibm_cloud/installing-ibm-cloud-restricted.txt b/ocp-product-docs-plaintext/4.19/installing/installing_ibm_cloud/installing-ibm-cloud-restricted.txt index c9a87870..51a5a776 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_ibm_cloud/installing-ibm-cloud-restricted.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_ibm_cloud/installing-ibm-cloud-restricted.txt @@ -885,7 +885,7 @@ If the Red Hat Enterprise Linux CoreOS (RHCOS) image is available locally, the h $ export OPENSHIFT_INSTALL_OS_IMAGE_OVERRIDE="/rhcos--ibmcloud.x86_64.qcow2.gz" ``` -2. Change to the directory that contains the installation program and initialize the cluster deployment: +2. In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -933,9 +933,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -943,7 +943,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_ibm_cloud/installing-ibm-cloud-vpc.txt b/ocp-product-docs-plaintext/4.19/installing/installing_ibm_cloud/installing-ibm-cloud-vpc.txt index fd9f3997..53c349ee 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_ibm_cloud/installing-ibm-cloud-vpc.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_ibm_cloud/installing-ibm-cloud-vpc.txt @@ -84,7 +84,7 @@ In Red Hat OpenShift Container Platform 4.19, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -590,7 +590,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. 
-* Change to the directory that contains the installation program and initialize the cluster deployment: +* In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -735,9 +735,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -745,7 +745,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_ibm_power/installing-ibm-power.txt b/ocp-product-docs-plaintext/4.19/installing/installing_ibm_power/installing-ibm-power.txt index 31490f4e..e75ee0fc 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_ibm_power/installing-ibm-power.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_ibm_power/installing-ibm-power.txt @@ -29,7 +29,7 @@ In Red Hat OpenShift Container Platform 4.19, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -1817,9 +1817,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1827,7 +1827,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. 
Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_ibm_power/installing-restricted-networks-ibm-power.txt b/ocp-product-docs-plaintext/4.19/installing/installing_ibm_power/installing-restricted-networks-ibm-power.txt index 1658a78d..3118d053 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_ibm_power/installing-restricted-networks-ibm-power.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_ibm_power/installing-restricted-networks-ibm-power.txt @@ -61,7 +61,7 @@ In Red Hat OpenShift Container Platform 4.19, you require access to the internet obtain the images that are necessary to install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -1728,9 +1728,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1738,7 +1738,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_ibm_powervs/installing-ibm-power-vs-customizations.txt b/ocp-product-docs-plaintext/4.19/installing/installing_ibm_powervs/installing-ibm-power-vs-customizations.txt index e0735ab3..27e1337e 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_ibm_powervs/installing-ibm-power-vs-customizations.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_ibm_powervs/installing-ibm-power-vs-customizations.txt @@ -17,7 +17,7 @@ In Red Hat OpenShift Container Platform 4.19, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -484,7 +484,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. 
An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -* Change to the directory that contains the installation program and initialize the cluster deployment: +* In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -629,9 +629,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -639,7 +639,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_ibm_powervs/installing-ibm-power-vs-private-cluster.txt b/ocp-product-docs-plaintext/4.19/installing/installing_ibm_powervs/installing-ibm-power-vs-private-cluster.txt index cb65c464..fd6ba3d8 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_ibm_powervs/installing-ibm-power-vs-private-cluster.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_ibm_powervs/installing-ibm-power-vs-private-cluster.txt @@ -101,7 +101,7 @@ In Red Hat OpenShift Container Platform 4.19, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -577,7 +577,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -* Change to the directory that contains the installation program and initialize the cluster deployment: +* In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -722,9 +722,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. 
Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -732,7 +732,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_ibm_powervs/installing-ibm-powervs-vpc.txt b/ocp-product-docs-plaintext/4.19/installing/installing_ibm_powervs/installing-ibm-powervs-vpc.txt index e8151634..4366460b 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_ibm_powervs/installing-ibm-powervs-vpc.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_ibm_powervs/installing-ibm-powervs-vpc.txt @@ -66,7 +66,7 @@ In Red Hat OpenShift Container Platform 4.19, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -574,7 +574,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -* Change to the directory that contains the installation program and initialize the cluster deployment: +* In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -719,9 +719,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -729,7 +729,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. 
Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_ibm_powervs/installing-restricted-networks-ibm-power-vs.txt b/ocp-product-docs-plaintext/4.19/installing/installing_ibm_powervs/installing-restricted-networks-ibm-power-vs.txt index 77848acd..b03ce103 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_ibm_powervs/installing-restricted-networks-ibm-power-vs.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_ibm_powervs/installing-restricted-networks-ibm-power-vs.txt @@ -99,7 +99,7 @@ In Red Hat OpenShift Container Platform 4.19, you require access to the internet obtain the images that are necessary to install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -631,7 +631,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -* Change to the directory that contains the installation program and initialize the cluster deployment: +* In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -776,9 +776,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -786,7 +786,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_ibm_z/upi/installing-ibm-z-kvm.txt b/ocp-product-docs-plaintext/4.19/installing/installing_ibm_z/upi/installing-ibm-z-kvm.txt index 4aea67ac..dca15a09 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_ibm_z/upi/installing-ibm-z-kvm.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_ibm_z/upi/installing-ibm-z-kvm.txt @@ -1020,9 +1020,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. 
-* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1030,7 +1030,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_ibm_z/upi/installing-ibm-z-lpar.txt b/ocp-product-docs-plaintext/4.19/installing/installing_ibm_z/upi/installing-ibm-z-lpar.txt index 3ebfcbdb..98f281aa 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_ibm_z/upi/installing-ibm-z-lpar.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_ibm_z/upi/installing-ibm-z-lpar.txt @@ -986,9 +986,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -996,7 +996,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_ibm_z/upi/installing-ibm-z.txt b/ocp-product-docs-plaintext/4.19/installing/installing_ibm_z/upi/installing-ibm-z.txt index 04d4f3a7..bec6ef01 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_ibm_z/upi/installing-ibm-z.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_ibm_z/upi/installing-ibm-z.txt @@ -1003,9 +1003,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1013,7 +1013,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. 
Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_ibm_z/upi/installing-restricted-networks-ibm-z-kvm.txt b/ocp-product-docs-plaintext/4.19/installing/installing_ibm_z/upi/installing-restricted-networks-ibm-z-kvm.txt index 0832752f..d3be792a 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_ibm_z/upi/installing-restricted-networks-ibm-z-kvm.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_ibm_z/upi/installing-restricted-networks-ibm-z-kvm.txt @@ -1078,9 +1078,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1088,7 +1088,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_ibm_z/upi/installing-restricted-networks-ibm-z-lpar.txt b/ocp-product-docs-plaintext/4.19/installing/installing_ibm_z/upi/installing-restricted-networks-ibm-z-lpar.txt index c23a7b20..3be5ba52 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_ibm_z/upi/installing-restricted-networks-ibm-z-lpar.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_ibm_z/upi/installing-restricted-networks-ibm-z-lpar.txt @@ -1038,9 +1038,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1048,7 +1048,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. 
Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_ibm_z/upi/installing-restricted-networks-ibm-z.txt b/ocp-product-docs-plaintext/4.19/installing/installing_ibm_z/upi/installing-restricted-networks-ibm-z.txt index eb2cdc90..36a9c800 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_ibm_z/upi/installing-restricted-networks-ibm-z.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_ibm_z/upi/installing-restricted-networks-ibm-z.txt @@ -1060,9 +1060,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1070,7 +1070,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_ibm_z/upi/upi-ibm-z-preparing-to-install.txt b/ocp-product-docs-plaintext/4.19/installing/installing_ibm_z/upi/upi-ibm-z-preparing-to-install.txt index 523ae0b4..c0526215 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_ibm_z/upi/upi-ibm-z-preparing-to-install.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_ibm_z/upi/upi-ibm-z-preparing-to-install.txt @@ -24,7 +24,7 @@ In Red Hat OpenShift Container Platform 4.19, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_nutanix/installing-nutanix-installer-provisioned.txt b/ocp-product-docs-plaintext/4.19/installing/installing_nutanix/installing-nutanix-installer-provisioned.txt index 8a244b9c..58c5c4c9 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_nutanix/installing-nutanix-installer-provisioned.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_nutanix/installing-nutanix-installer-provisioned.txt @@ -28,7 +28,7 @@ In Red Hat OpenShift Container Platform 4.19, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. 
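The hunks above repeatedly tighten the same two steps: exporting the kubeadmin credentials and verifying them with oc whoami. As a minimal sketch of that flow, assuming a hypothetical installation directory named ./ocp-install (the real path placeholder is not shown above), the kubeadmin kubeconfig typically authenticates you as system:admin:

```terminal
# Point oc at the kubeconfig written by the installer; the directory name is hypothetical
$ export KUBECONFIG=./ocp-install/auth/kubeconfig

# Confirm that the exported configuration works
$ oc whoami
system:admin
```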
@@ -1226,7 +1226,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -* Change to the directory that contains the installation program and initialize the cluster deployment: +* In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_nutanix/installing-restricted-networks-nutanix-installer-provisioned.txt b/ocp-product-docs-plaintext/4.19/installing/installing_nutanix/installing-restricted-networks-nutanix-installer-provisioned.txt index e845016f..5b4c17d0 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_nutanix/installing-restricted-networks-nutanix-installer-provisioned.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_nutanix/installing-restricted-networks-nutanix-installer-provisioned.txt @@ -838,7 +838,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -* Change to the directory that contains the installation program and initialize the cluster deployment: +* In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_openstack/installing-openstack-installer-custom.txt b/ocp-product-docs-plaintext/4.19/installing/installing_openstack/installing-openstack-installer-custom.txt index 3b4b1711..5f81a050 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_openstack/installing-openstack-installer-custom.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_openstack/installing-openstack-installer-custom.txt @@ -221,7 +221,7 @@ In Red Hat OpenShift Container Platform 4.19, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -1508,7 +1508,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. 
An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -* Change to the directory that contains the installation program and initialize the cluster deployment: +* In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -1594,9 +1594,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1604,7 +1604,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_openstack/installing-openstack-installer-restricted.txt b/ocp-product-docs-plaintext/4.19/installing/installing_openstack/installing-openstack-installer-restricted.txt index 41095259..9e42a770 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_openstack/installing-openstack-installer-restricted.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_openstack/installing-openstack-installer-restricted.txt @@ -116,7 +116,7 @@ In Red Hat OpenShift Container Platform 4.19, you require access to the internet obtain the images that are necessary to install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -758,7 +758,7 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -* Change to the directory that contains the installation program and initialize the cluster deployment: +* In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -844,9 +844,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. 
Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -854,7 +854,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_openstack/installing-openstack-user.txt b/ocp-product-docs-plaintext/4.19/installing/installing_openstack/installing-openstack-user.txt index da6f6acc..67a11fc4 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_openstack/installing-openstack-user.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_openstack/installing-openstack-user.txt @@ -22,7 +22,7 @@ In Red Hat OpenShift Container Platform 4.19, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -1419,9 +1419,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1429,7 +1429,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_platform_agnostic/installing-platform-agnostic.txt b/ocp-product-docs-plaintext/4.19/installing/installing_platform_agnostic/installing-platform-agnostic.txt index 3dcdd5f1..14ef03fd 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_platform_agnostic/installing-platform-agnostic.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_platform_agnostic/installing-platform-agnostic.txt @@ -26,7 +26,7 @@ In Red Hat OpenShift Container Platform 4.19, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -1979,9 +1979,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. 
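Several hunks above likewise standardize the cluster deployment step. A minimal sketch of that command, assuming the same hypothetical ./ocp-install directory and the optional --log-level flag:

```terminal
# The directory name is hypothetical; it typically holds the install-config.yaml you prepared,
# otherwise the installer prompts for the values interactively
$ ./openshift-install create cluster --dir ./ocp-install \
    --log-level=info
```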
* You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1989,7 +1989,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_vsphere/installing-vsphere-agent-based-installer.txt b/ocp-product-docs-plaintext/4.19/installing/installing_vsphere/installing-vsphere-agent-based-installer.txt index 1921582c..2558ab8b 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_vsphere/installing-vsphere-agent-based-installer.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_vsphere/installing-vsphere-agent-based-installer.txt @@ -1,9 +1,16 @@ # Installing a cluster on vSphere using the Agent-based Installer + The Agent-based installation method provides the flexibility to boot your on-premise servers in any way that you choose. It combines the ease of use of the Assisted Installation service with the ability to run offline, including in air-gapped environments. + Agent-based installation is a subcommand of the Red Hat OpenShift Container Platform installer. It generates a bootable ISO image containing all of the information required to deploy an Red Hat OpenShift Container Platform cluster with an available release image. -# Additional resources +For more information about installing a cluster using the Agent-based Installer, see Preparing to install with the Agent-based Installer. + -* Preparing to install with the Agent-based Installer \ No newline at end of file +[IMPORTANT] +---- +Your vSphere account must include privileges for reading and creating the resources required to install an Red Hat OpenShift Container Platform cluster. +For more information about privileges, see vCenter requirements. +---- \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_vsphere/ipi/installing-restricted-networks-installer-provisioned-vsphere.txt b/ocp-product-docs-plaintext/4.19/installing/installing_vsphere/ipi/installing-restricted-networks-installer-provisioned-vsphere.txt index b617e02a..52453bd6 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_vsphere/ipi/installing-restricted-networks-installer-provisioned-vsphere.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_vsphere/ipi/installing-restricted-networks-installer-provisioned-vsphere.txt @@ -57,7 +57,7 @@ In Red Hat OpenShift Container Platform 4.19, you require access to the internet obtain the images that are necessary to install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. 
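The Agent-based Installer overview added above describes generating a bootable ISO image but does not show a command. As a hedged sketch only, the agent subcommand of openshift-install can produce that image, assuming install-config.yaml and agent-config.yaml were already placed in a hypothetical ./ocp-agent directory:

```terminal
# Directory name is hypothetical; it holds install-config.yaml and agent-config.yaml
$ ./openshift-install agent create image --dir ./ocp-agent
```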
@@ -475,20 +475,20 @@ You can modify the default installation configuration file, so that you can depl The default install-config.yaml file configuration from the previous release of Red Hat OpenShift Container Platform is deprecated. You can continue to use the deprecated default configuration, but the openshift-installer will prompt you with a warning message that indicates the use of deprecated fields in the configuration file. +* You have an existing install-config.yaml installation configuration file. [IMPORTANT] ---- -The example uses the govc command. The govc command is an open source command available from VMware; it is not available from Red Hat. The Red Hat support team does not maintain the govc command. Instructions for downloading and installing govc are found on the VMware documentation website +You must specify at least one failure domain for your Red Hat OpenShift Container Platform cluster, so that you can provision data center objects for your VMware vCenter server. Consider specifying multiple failure domains if you need to provision virtual machine nodes in different data centers, clusters, datastores, and other components. To enable regions and zones, you must define multiple failure domains for your Red Hat OpenShift Container Platform cluster. ---- - -* You have an existing install-config.yaml installation configuration file. +* You have installed the govc command line tool. [IMPORTANT] ---- -You must specify at least one failure domain for your Red Hat OpenShift Container Platform cluster, so that you can provision data center objects for your VMware vCenter server. Consider specifying multiple failure domains if you need to provision virtual machine nodes in different data centers, clusters, datastores, and other components. To enable regions and zones, you must define multiple failure domains for your Red Hat OpenShift Container Platform cluster. +The example uses the govc command. The govc command is an open source command available from VMware; it is not available from Red Hat. The Red Hat support team does not maintain the govc command. Instructions for downloading and installing govc are found on the VMware documentation website. ---- -1. Enter the following govc command-line tool commands to create the openshift-region and openshift-zone vCenter tag categories: +1. Create the openshift-region and openshift-zone vCenter tag categories by running the following commands: [IMPORTANT] ---- @@ -504,25 +504,25 @@ $ govc tags.category.create -d "OpenShift region" openshift-region $ govc tags.category.create -d "OpenShift zone" openshift-zone ``` -2. To create a region tag for each region vSphere data center where you want to deploy your cluster, enter the following command in your terminal: +2. For each region where you want to deploy your cluster, create a region tag by running the following command: ```terminal $ govc tags.create -c ``` -3. To create a zone tag for each vSphere cluster where you want to deploy your cluster, enter the following command: +3. For each zone where you want to deploy your cluster, create a zone tag by running the following command: ```terminal $ govc tags.create -c ``` -4. Attach region tags to each vCenter data center object by entering the following command: +4. Attach region tags to each vCenter data center object by running the following command: ```terminal $ govc tags.attach -c / ``` -5. Attach the zone tags to each vCenter cluster object by entering the following command: +5. 
Attach the zone tags to each vCenter cluster object by running the following command: ```terminal $ govc tags.attach -c //host/ @@ -1132,14 +1132,14 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -* Optional: Before you create the cluster, configure an external load balancer in place of the default load balancer. +* Optional: Before you create the cluster, you configured an external load balancer in place of the default load balancer. [IMPORTANT] ---- You do not need to specify API and Ingress static addresses for your installation program. If you choose this configuration, you must take additional actions to define network targets that accept an IP address from each referenced vSphere subnet. See the section "Configuring a user-managed load balancer". ---- -* Change to the directory that contains the installation program and initialize the cluster deployment: +* In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -1187,9 +1187,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1197,7 +1197,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami @@ -1276,13 +1276,13 @@ testing that was possibly completed against these Red Hat OpenShift Container Pl components. ---- -1. To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. +1. Change the spec.storage.pvc field in the configs.imageregistry/cluster resource. [NOTE] ---- When you use shared storage, review your security settings to prevent outside access. ---- -2. Verify that you do not have a registry pod: +2. Verify that you do not have a registry pod by running the following command: ```terminal $ oc get pod -n openshift-image-registry -l docker-registry=default @@ -1299,7 +1299,7 @@ No resourses found in openshift-image-registry namespace ---- If you do have a registry pod in your output, you do not need to continue with this procedure. ---- -3. Check the registry configuration: +3. Check the registry configuration by running the following command: ```terminal $ oc edit configs.imageregistry.operator.openshift.io @@ -1314,7 +1314,7 @@ storage: ``` Leave the claim field blank to allow the automatic creation of an image-registry-storage persistent volume claim (PVC). The PVC is generated based on the default storage class. 
However, be aware that the default storage class might provide ReadWriteOnce (RWO) volumes, such as a RADOS Block Device (RBD), which can cause issues when you replicate to more than one replica. -4. Check the clusteroperator status: +4. Check the clusteroperator status by running the following command: ```terminal $ oc get clusteroperator image-registry @@ -1323,8 +1323,8 @@ $ oc get clusteroperator image-registry Example output ```terminal -NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE -image-registry 4.7 True False False 6h50m +NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE +image-registry 4.7 True False False 6h50m ``` diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_vsphere/ipi/installing-vsphere-installer-provisioned-customizations.txt b/ocp-product-docs-plaintext/4.19/installing/installing_vsphere/ipi/installing-vsphere-installer-provisioned-customizations.txt index f9713211..36246912 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_vsphere/ipi/installing-vsphere-installer-provisioned-customizations.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_vsphere/ipi/installing-vsphere-installer-provisioned-customizations.txt @@ -26,7 +26,7 @@ In Red Hat OpenShift Container Platform 4.19, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -347,20 +347,20 @@ You can modify the default installation configuration file, so that you can depl The default install-config.yaml file configuration from the previous release of Red Hat OpenShift Container Platform is deprecated. You can continue to use the deprecated default configuration, but the openshift-installer will prompt you with a warning message that indicates the use of deprecated fields in the configuration file. +* You have an existing install-config.yaml installation configuration file. [IMPORTANT] ---- -The example uses the govc command. The govc command is an open source command available from VMware; it is not available from Red Hat. The Red Hat support team does not maintain the govc command. Instructions for downloading and installing govc are found on the VMware documentation website +You must specify at least one failure domain for your Red Hat OpenShift Container Platform cluster, so that you can provision data center objects for your VMware vCenter server. Consider specifying multiple failure domains if you need to provision virtual machine nodes in different data centers, clusters, datastores, and other components. To enable regions and zones, you must define multiple failure domains for your Red Hat OpenShift Container Platform cluster. ---- - -* You have an existing install-config.yaml installation configuration file. +* You have installed the govc command line tool. [IMPORTANT] ---- -You must specify at least one failure domain for your Red Hat OpenShift Container Platform cluster, so that you can provision data center objects for your VMware vCenter server. Consider specifying multiple failure domains if you need to provision virtual machine nodes in different data centers, clusters, datastores, and other components. 
To enable regions and zones, you must define multiple failure domains for your Red Hat OpenShift Container Platform cluster. +The example uses the govc command. The govc command is an open source command available from VMware; it is not available from Red Hat. The Red Hat support team does not maintain the govc command. Instructions for downloading and installing govc are found on the VMware documentation website. ---- -1. Enter the following govc command-line tool commands to create the openshift-region and openshift-zone vCenter tag categories: +1. Create the openshift-region and openshift-zone vCenter tag categories by running the following commands: [IMPORTANT] ---- @@ -376,25 +376,25 @@ $ govc tags.category.create -d "OpenShift region" openshift-region $ govc tags.category.create -d "OpenShift zone" openshift-zone ``` -2. To create a region tag for each region vSphere data center where you want to deploy your cluster, enter the following command in your terminal: +2. For each region where you want to deploy your cluster, create a region tag by running the following command: ```terminal $ govc tags.create -c ``` -3. To create a zone tag for each vSphere cluster where you want to deploy your cluster, enter the following command: +3. For each zone where you want to deploy your cluster, create a zone tag by running the following command: ```terminal $ govc tags.create -c ``` -4. Attach region tags to each vCenter data center object by entering the following command: +4. Attach region tags to each vCenter data center object by running the following command: ```terminal $ govc tags.attach -c / ``` -5. Attach the zone tags to each vCenter cluster object by entering the following command: +5. Attach the zone tags to each vCenter cluster object by running the following command: ```terminal $ govc tags.attach -c //host/ @@ -1004,14 +1004,14 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -* Optional: Before you create the cluster, configure an external load balancer in place of the default load balancer. +* Optional: Before you create the cluster, you configured an external load balancer in place of the default load balancer. [IMPORTANT] ---- You do not need to specify API and Ingress static addresses for your installation program. If you choose this configuration, you must take additional actions to define network targets that accept an IP address from each referenced vSphere subnet. See the section "Configuring a user-managed load balancer". ---- -* Change to the directory that contains the installation program and initialize the cluster deployment: +* In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -1059,9 +1059,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. 
Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1069,7 +1069,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami @@ -1129,13 +1129,13 @@ testing that was possibly completed against these Red Hat OpenShift Container Pl components. ---- -1. To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. +1. Change the spec.storage.pvc field in the configs.imageregistry/cluster resource. [NOTE] ---- When you use shared storage, review your security settings to prevent outside access. ---- -2. Verify that you do not have a registry pod: +2. Verify that you do not have a registry pod by running the following command: ```terminal $ oc get pod -n openshift-image-registry -l docker-registry=default @@ -1152,7 +1152,7 @@ No resourses found in openshift-image-registry namespace ---- If you do have a registry pod in your output, you do not need to continue with this procedure. ---- -3. Check the registry configuration: +3. Check the registry configuration by running the following command: ```terminal $ oc edit configs.imageregistry.operator.openshift.io @@ -1167,7 +1167,7 @@ storage: ``` Leave the claim field blank to allow the automatic creation of an image-registry-storage persistent volume claim (PVC). The PVC is generated based on the default storage class. However, be aware that the default storage class might provide ReadWriteOnce (RWO) volumes, such as a RADOS Block Device (RBD), which can cause issues when you replicate to more than one replica. -4. Check the clusteroperator status: +4. Check the clusteroperator status by running the following command: ```terminal $ oc get clusteroperator image-registry @@ -1176,8 +1176,8 @@ $ oc get clusteroperator image-registry Example output ```terminal -NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE -image-registry 4.7 True False False 6h50m +NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE +image-registry 4.7 True False False 6h50m ``` @@ -1200,7 +1200,7 @@ have more than one replica. $ oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' ``` -2. Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. +2. Provision the persistent volume (PV) for the block storage device, and create a persistent volume claim (PVC) for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. 1. 
Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: ```yaml diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_vsphere/ipi/installing-vsphere-installer-provisioned-network-customizations.txt b/ocp-product-docs-plaintext/4.19/installing/installing_vsphere/ipi/installing-vsphere-installer-provisioned-network-customizations.txt index aee92930..48a6648f 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_vsphere/ipi/installing-vsphere-installer-provisioned-network-customizations.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_vsphere/ipi/installing-vsphere-installer-provisioned-network-customizations.txt @@ -28,7 +28,7 @@ In Red Hat OpenShift Container Platform 4.19, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -395,20 +395,20 @@ You can modify the default installation configuration file, so that you can depl The default install-config.yaml file configuration from the previous release of Red Hat OpenShift Container Platform is deprecated. You can continue to use the deprecated default configuration, but the openshift-installer will prompt you with a warning message that indicates the use of deprecated fields in the configuration file. +* You have an existing install-config.yaml installation configuration file. [IMPORTANT] ---- -The example uses the govc command. The govc command is an open source command available from VMware; it is not available from Red Hat. The Red Hat support team does not maintain the govc command. Instructions for downloading and installing govc are found on the VMware documentation website +You must specify at least one failure domain for your Red Hat OpenShift Container Platform cluster, so that you can provision data center objects for your VMware vCenter server. Consider specifying multiple failure domains if you need to provision virtual machine nodes in different data centers, clusters, datastores, and other components. To enable regions and zones, you must define multiple failure domains for your Red Hat OpenShift Container Platform cluster. ---- - -* You have an existing install-config.yaml installation configuration file. +* You have installed the govc command line tool. [IMPORTANT] ---- -You must specify at least one failure domain for your Red Hat OpenShift Container Platform cluster, so that you can provision data center objects for your VMware vCenter server. Consider specifying multiple failure domains if you need to provision virtual machine nodes in different data centers, clusters, datastores, and other components. To enable regions and zones, you must define multiple failure domains for your Red Hat OpenShift Container Platform cluster. +The example uses the govc command. The govc command is an open source command available from VMware; it is not available from Red Hat. The Red Hat support team does not maintain the govc command. Instructions for downloading and installing govc are found on the VMware documentation website. ---- -1. 
Enter the following govc command-line tool commands to create the openshift-region and openshift-zone vCenter tag categories: +1. Create the openshift-region and openshift-zone vCenter tag categories by running the following commands: [IMPORTANT] ---- @@ -424,25 +424,25 @@ $ govc tags.category.create -d "OpenShift region" openshift-region $ govc tags.category.create -d "OpenShift zone" openshift-zone ``` -2. To create a region tag for each region vSphere data center where you want to deploy your cluster, enter the following command in your terminal: +2. For each region where you want to deploy your cluster, create a region tag by running the following command: ```terminal $ govc tags.create -c ``` -3. To create a zone tag for each vSphere cluster where you want to deploy your cluster, enter the following command: +3. For each zone where you want to deploy your cluster, create a zone tag by running the following command: ```terminal $ govc tags.create -c ``` -4. Attach region tags to each vCenter data center object by entering the following command: +4. Attach region tags to each vCenter data center object by running the following command: ```terminal $ govc tags.attach -c / ``` -5. Attach the zone tags to each vCenter cluster object by entering the following command: +5. Attach the zone tags to each vCenter cluster object by running the following command: ```terminal $ govc tags.attach -c //host/ @@ -1323,14 +1323,14 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -* Optional: Before you create the cluster, configure an external load balancer in place of the default load balancer. +* Optional: Before you create the cluster, you configured an external load balancer in place of the default load balancer. [IMPORTANT] ---- You do not need to specify API and Ingress static addresses for your installation program. If you choose this configuration, you must take additional actions to define network targets that accept an IP address from each referenced vSphere subnet. See the section "Configuring a user-managed load balancer". ---- -* Change to the directory that contains the installation program and initialize the cluster deployment: +* In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -1378,9 +1378,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1388,7 +1388,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. 
Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami @@ -1448,13 +1448,13 @@ testing that was possibly completed against these Red Hat OpenShift Container Pl components. ---- -1. To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. +1. Change the spec.storage.pvc field in the configs.imageregistry/cluster resource. [NOTE] ---- When you use shared storage, review your security settings to prevent outside access. ---- -2. Verify that you do not have a registry pod: +2. Verify that you do not have a registry pod by running the following command: ```terminal $ oc get pod -n openshift-image-registry -l docker-registry=default @@ -1471,7 +1471,7 @@ No resourses found in openshift-image-registry namespace ---- If you do have a registry pod in your output, you do not need to continue with this procedure. ---- -3. Check the registry configuration: +3. Check the registry configuration by running the following command: ```terminal $ oc edit configs.imageregistry.operator.openshift.io @@ -1486,7 +1486,7 @@ storage: ``` Leave the claim field blank to allow the automatic creation of an image-registry-storage persistent volume claim (PVC). The PVC is generated based on the default storage class. However, be aware that the default storage class might provide ReadWriteOnce (RWO) volumes, such as a RADOS Block Device (RBD), which can cause issues when you replicate to more than one replica. -4. Check the clusteroperator status: +4. Check the clusteroperator status by running the following command: ```terminal $ oc get clusteroperator image-registry @@ -1495,8 +1495,8 @@ $ oc get clusteroperator image-registry Example output ```terminal -NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE -image-registry 4.7 True False False 6h50m +NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE +image-registry 4.7 True False False 6h50m ``` @@ -1519,7 +1519,7 @@ have more than one replica. $ oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' ``` -2. Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. +2. Provision the persistent volume (PV) for the block storage device, and create a persistent volume claim (PVC) for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. 1. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: ```yaml diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_vsphere/ipi/installing-vsphere-installer-provisioned.txt b/ocp-product-docs-plaintext/4.19/installing/installing_vsphere/ipi/installing-vsphere-installer-provisioned.txt index 6d6a7e36..c99a2750 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_vsphere/ipi/installing-vsphere-installer-provisioned.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_vsphere/ipi/installing-vsphere-installer-provisioned.txt @@ -27,7 +27,7 @@ In Red Hat OpenShift Container Platform 4.19, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. 
If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -45,14 +45,14 @@ You can run the create cluster command of the installation program only once, du * You have the Red Hat OpenShift Container Platform installation program and the pull secret for your cluster. * You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. -* Optional: Before you create the cluster, configure an external load balancer in place of the default load balancer. +* Optional: Before you create the cluster, you configured an external load balancer in place of the default load balancer. [IMPORTANT] ---- You do not need to specify API and Ingress static addresses for your installation program. If you choose this configuration, you must take additional actions to define network targets that accept an IP address from each referenced vSphere subnet. See the section "Configuring a user-managed load balancer". ---- -1. Change to the directory that contains the installation program and initialize the cluster deployment: +1. In the directory that contains the installation program, initialize the cluster deployment by running the following command: ```terminal $ ./openshift-install create cluster --dir \ 1 @@ -142,9 +142,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -152,7 +152,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami @@ -212,13 +212,13 @@ testing that was possibly completed against these Red Hat OpenShift Container Pl components. ---- -1. To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. +1. Change the spec.storage.pvc field in the configs.imageregistry/cluster resource. [NOTE] ---- When you use shared storage, review your security settings to prevent outside access. ---- -2. Verify that you do not have a registry pod: +2. Verify that you do not have a registry pod by running the following command: ```terminal $ oc get pod -n openshift-image-registry -l docker-registry=default @@ -235,7 +235,7 @@ No resourses found in openshift-image-registry namespace ---- If you do have a registry pod in your output, you do not need to continue with this procedure. ---- -3. Check the registry configuration: +3. Check the registry configuration by running the following command: ```terminal $ oc edit configs.imageregistry.operator.openshift.io @@ -250,7 +250,7 @@ storage: ``` Leave the claim field blank to allow the automatic creation of an image-registry-storage persistent volume claim (PVC). 
The PVC is generated based on the default storage class. However, be aware that the default storage class might provide ReadWriteOnce (RWO) volumes, such as a RADOS Block Device (RBD), which can cause issues when you replicate to more than one replica. -4. Check the clusteroperator status: +4. Check the clusteroperator status by running the following command: ```terminal $ oc get clusteroperator image-registry @@ -259,8 +259,8 @@ $ oc get clusteroperator image-registry Example output ```terminal -NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE -image-registry 4.7 True False False 6h50m +NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE +image-registry 4.7 True False False 6h50m ``` @@ -283,7 +283,7 @@ have more than one replica. $ oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' ``` -2. Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. +2. Provision the persistent volume (PV) for the block storage device, and create a persistent volume claim (PVC) for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. 1. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: ```yaml diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_vsphere/uninstalling-cluster-vsphere-installer-provisioned.txt b/ocp-product-docs-plaintext/4.19/installing/installing_vsphere/uninstalling-cluster-vsphere-installer-provisioned.txt index ecb33b96..a71794d2 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_vsphere/uninstalling-cluster-vsphere-installer-provisioned.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_vsphere/uninstalling-cluster-vsphere-installer-provisioned.txt @@ -3,11 +3,6 @@ You can remove a cluster that you deployed in your VMware vSphere instance by using installer-provisioned infrastructure. -[NOTE] ----- -When you run the openshift-install destroy cluster command to uninstall Red Hat OpenShift Container Platform, vSphere volumes are not automatically deleted. The cluster administrator must manually find the vSphere volumes and delete them. ----- - # Removing a cluster that uses installer-provisioned infrastructure You can remove a cluster that uses installer-provisioned infrastructure from your cloud. diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_vsphere/upi/installing-restricted-networks-vsphere.txt b/ocp-product-docs-plaintext/4.19/installing/installing_vsphere/upi/installing-restricted-networks-vsphere.txt index 7acf90f3..a4da1d1d 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_vsphere/upi/installing-restricted-networks-vsphere.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_vsphere/upi/installing-restricted-networks-vsphere.txt @@ -70,7 +70,7 @@ In Red Hat OpenShift Container Platform 4.19, you require access to the internet obtain the images that are necessary to install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. 
@@ -372,20 +372,20 @@ You can modify the default installation configuration file, so that you can depl The default install-config.yaml file configuration from the previous release of Red Hat OpenShift Container Platform is deprecated. You can continue to use the deprecated default configuration, but the openshift-installer will prompt you with a warning message that indicates the use of deprecated fields in the configuration file. +* You have an existing install-config.yaml installation configuration file. [IMPORTANT] ---- -The example uses the govc command. The govc command is an open source command available from VMware; it is not available from Red Hat. The Red Hat support team does not maintain the govc command. Instructions for downloading and installing govc are found on the VMware documentation website +You must specify at least one failure domain for your Red Hat OpenShift Container Platform cluster, so that you can provision data center objects for your VMware vCenter server. Consider specifying multiple failure domains if you need to provision virtual machine nodes in different data centers, clusters, datastores, and other components. To enable regions and zones, you must define multiple failure domains for your Red Hat OpenShift Container Platform cluster. ---- - -* You have an existing install-config.yaml installation configuration file. +* You have installed the govc command line tool. [IMPORTANT] ---- -You must specify at least one failure domain for your Red Hat OpenShift Container Platform cluster, so that you can provision data center objects for your VMware vCenter server. Consider specifying multiple failure domains if you need to provision virtual machine nodes in different data centers, clusters, datastores, and other components. To enable regions and zones, you must define multiple failure domains for your Red Hat OpenShift Container Platform cluster. +The example uses the govc command. The govc command is an open source command available from VMware; it is not available from Red Hat. The Red Hat support team does not maintain the govc command. Instructions for downloading and installing govc are found on the VMware documentation website. ---- -1. Enter the following govc command-line tool commands to create the openshift-region and openshift-zone vCenter tag categories: +1. Create the openshift-region and openshift-zone vCenter tag categories by running the following commands: [IMPORTANT] ---- @@ -401,25 +401,25 @@ $ govc tags.category.create -d "OpenShift region" openshift-region $ govc tags.category.create -d "OpenShift zone" openshift-zone ``` -2. To create a region tag for each region vSphere data center where you want to deploy your cluster, enter the following command in your terminal: +2. For each region where you want to deploy your cluster, create a region tag by running the following command: ```terminal $ govc tags.create -c ``` -3. To create a zone tag for each vSphere cluster where you want to deploy your cluster, enter the following command: +3. For each zone where you want to deploy your cluster, create a zone tag by running the following command: ```terminal $ govc tags.create -c ``` -4. Attach region tags to each vCenter data center object by entering the following command: +4. Attach region tags to each vCenter data center object by running the following command: ```terminal $ govc tags.attach -c / ``` -5. Attach the zone tags to each vCenter cluster object by entering the following command: +5. 
Attach the zone tags to each vCenter cluster object by running the following command: ```terminal $ govc tags.attach -c //host/ @@ -988,9 +988,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -998,7 +998,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami @@ -1247,13 +1247,13 @@ testing that was possibly completed against these Red Hat OpenShift Container Pl components. ---- -1. To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. +1. Change the spec.storage.pvc field in the configs.imageregistry/cluster resource. [NOTE] ---- When you use shared storage, review your security settings to prevent outside access. ---- -2. Verify that you do not have a registry pod: +2. Verify that you do not have a registry pod by running the following command: ```terminal $ oc get pod -n openshift-image-registry -l docker-registry=default @@ -1270,7 +1270,7 @@ No resourses found in openshift-image-registry namespace ---- If you do have a registry pod in your output, you do not need to continue with this procedure. ---- -3. Check the registry configuration: +3. Check the registry configuration by running the following command: ```terminal $ oc edit configs.imageregistry.operator.openshift.io @@ -1285,7 +1285,7 @@ storage: ``` Leave the claim field blank to allow the automatic creation of an image-registry-storage persistent volume claim (PVC). The PVC is generated based on the default storage class. However, be aware that the default storage class might provide ReadWriteOnce (RWO) volumes, such as a RADOS Block Device (RBD), which can cause issues when you replicate to more than one replica. -4. Check the clusteroperator status: +4. Check the clusteroperator status by running the following command: ```terminal $ oc get clusteroperator image-registry @@ -1294,8 +1294,8 @@ $ oc get clusteroperator image-registry Example output ```terminal -NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE -image-registry 4.7 True False False 6h50m +NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE +image-registry 4.7 True False False 6h50m ``` @@ -1346,7 +1346,7 @@ have more than one replica. $ oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' ``` -2. Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. +2. Provision the persistent volume (PV) for the block storage device, and create a persistent volume claim (PVC) for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. 1. 
Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: ```yaml diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_vsphere/upi/installing-vsphere-network-customizations.txt b/ocp-product-docs-plaintext/4.19/installing/installing_vsphere/upi/installing-vsphere-network-customizations.txt index 40fc1cbb..b223940c 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_vsphere/upi/installing-vsphere-network-customizations.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_vsphere/upi/installing-vsphere-network-customizations.txt @@ -31,7 +31,7 @@ In Red Hat OpenShift Container Platform 4.19, you require access to the internet install your cluster. -You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -295,20 +295,20 @@ You can modify the default installation configuration file, so that you can depl The default install-config.yaml file configuration from the previous release of Red Hat OpenShift Container Platform is deprecated. You can continue to use the deprecated default configuration, but the openshift-installer will prompt you with a warning message that indicates the use of deprecated fields in the configuration file. +* You have an existing install-config.yaml installation configuration file. [IMPORTANT] ---- -The example uses the govc command. The govc command is an open source command available from VMware; it is not available from Red Hat. The Red Hat support team does not maintain the govc command. Instructions for downloading and installing govc are found on the VMware documentation website +You must specify at least one failure domain for your Red Hat OpenShift Container Platform cluster, so that you can provision data center objects for your VMware vCenter server. Consider specifying multiple failure domains if you need to provision virtual machine nodes in different data centers, clusters, datastores, and other components. To enable regions and zones, you must define multiple failure domains for your Red Hat OpenShift Container Platform cluster. ---- - -* You have an existing install-config.yaml installation configuration file. +* You have installed the govc command line tool. [IMPORTANT] ---- -You must specify at least one failure domain for your Red Hat OpenShift Container Platform cluster, so that you can provision data center objects for your VMware vCenter server. Consider specifying multiple failure domains if you need to provision virtual machine nodes in different data centers, clusters, datastores, and other components. To enable regions and zones, you must define multiple failure domains for your Red Hat OpenShift Container Platform cluster. +The example uses the govc command. The govc command is an open source command available from VMware; it is not available from Red Hat. The Red Hat support team does not maintain the govc command. Instructions for downloading and installing govc are found on the VMware documentation website. ---- -1. Enter the following govc command-line tool commands to create the openshift-region and openshift-zone vCenter tag categories: +1. 
Create the openshift-region and openshift-zone vCenter tag categories by running the following commands: [IMPORTANT] ---- @@ -324,25 +324,25 @@ $ govc tags.category.create -d "OpenShift region" openshift-region $ govc tags.category.create -d "OpenShift zone" openshift-zone ``` -2. To create a region tag for each region vSphere data center where you want to deploy your cluster, enter the following command in your terminal: +2. For each region where you want to deploy your cluster, create a region tag by running the following command: ```terminal $ govc tags.create -c ``` -3. To create a zone tag for each vSphere cluster where you want to deploy your cluster, enter the following command: +3. For each zone where you want to deploy your cluster, create a zone tag by running the following command: ```terminal $ govc tags.create -c ``` -4. Attach region tags to each vCenter data center object by entering the following command: +4. Attach region tags to each vCenter data center object by running the following command: ```terminal $ govc tags.attach -c / ``` -5. Attach the zone tags to each vCenter cluster object by entering the following command: +5. Attach the zone tags to each vCenter cluster object by running the following command: ```terminal $ govc tags.attach -c //host/ @@ -1048,9 +1048,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -1058,7 +1058,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami @@ -1286,7 +1286,7 @@ have more than one replica. $ oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' ``` -2. Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. +2. Provision the persistent volume (PV) for the block storage device, and create a persistent volume claim (PVC) for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. 1. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: ```yaml diff --git a/ocp-product-docs-plaintext/4.19/installing/installing_vsphere/upi/installing-vsphere.txt b/ocp-product-docs-plaintext/4.19/installing/installing_vsphere/upi/installing-vsphere.txt index 41648448..f30a05e1 100644 --- a/ocp-product-docs-plaintext/4.19/installing/installing_vsphere/upi/installing-vsphere.txt +++ b/ocp-product-docs-plaintext/4.19/installing/installing_vsphere/upi/installing-vsphere.txt @@ -31,7 +31,7 @@ In Red Hat OpenShift Container Platform 4.19, you require access to the internet install your cluster. 
-You must have internet access to: +You must have internet access to perform the following actions: * Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. * Access Quay.io to obtain the packages that are required to install your cluster. @@ -290,20 +290,20 @@ You can modify the default installation configuration file, so that you can depl The default install-config.yaml file configuration from the previous release of Red Hat OpenShift Container Platform is deprecated. You can continue to use the deprecated default configuration, but the openshift-installer will prompt you with a warning message that indicates the use of deprecated fields in the configuration file. +* You have an existing install-config.yaml installation configuration file. [IMPORTANT] ---- -The example uses the govc command. The govc command is an open source command available from VMware; it is not available from Red Hat. The Red Hat support team does not maintain the govc command. Instructions for downloading and installing govc are found on the VMware documentation website +You must specify at least one failure domain for your Red Hat OpenShift Container Platform cluster, so that you can provision data center objects for your VMware vCenter server. Consider specifying multiple failure domains if you need to provision virtual machine nodes in different data centers, clusters, datastores, and other components. To enable regions and zones, you must define multiple failure domains for your Red Hat OpenShift Container Platform cluster. ---- - -* You have an existing install-config.yaml installation configuration file. +* You have installed the govc command line tool. [IMPORTANT] ---- -You must specify at least one failure domain for your Red Hat OpenShift Container Platform cluster, so that you can provision data center objects for your VMware vCenter server. Consider specifying multiple failure domains if you need to provision virtual machine nodes in different data centers, clusters, datastores, and other components. To enable regions and zones, you must define multiple failure domains for your Red Hat OpenShift Container Platform cluster. +The example uses the govc command. The govc command is an open source command available from VMware; it is not available from Red Hat. The Red Hat support team does not maintain the govc command. Instructions for downloading and installing govc are found on the VMware documentation website. ---- -1. Enter the following govc command-line tool commands to create the openshift-region and openshift-zone vCenter tag categories: +1. Create the openshift-region and openshift-zone vCenter tag categories by running the following commands: [IMPORTANT] ---- @@ -319,25 +319,25 @@ $ govc tags.category.create -d "OpenShift region" openshift-region $ govc tags.category.create -d "OpenShift zone" openshift-zone ``` -2. To create a region tag for each region vSphere data center where you want to deploy your cluster, enter the following command in your terminal: +2. For each region where you want to deploy your cluster, create a region tag by running the following command: ```terminal $ govc tags.create -c ``` -3. To create a zone tag for each vSphere cluster where you want to deploy your cluster, enter the following command: +3. 
For each zone where you want to deploy your cluster, create a zone tag by running the following command: ```terminal $ govc tags.create -c ``` -4. Attach region tags to each vCenter data center object by entering the following command: +4. Attach region tags to each vCenter data center object by running the following command: ```terminal $ govc tags.attach -c / ``` -5. Attach the zone tags to each vCenter cluster object by entering the following command: +5. Attach the zone tags to each vCenter cluster object by running the following command: ```terminal $ govc tags.attach -c //host/ @@ -863,9 +863,9 @@ The kubeconfig file contains information about the cluster that is used by the C The file is specific to a cluster and is created during Red Hat OpenShift Container Platform installation. * You deployed an Red Hat OpenShift Container Platform cluster. -* You installed the oc CLI. +* You installed the OpenShift CLI (`oc`). -1. Export the kubeadmin credentials: +1. Export the kubeadmin credentials by running the following command: ```terminal $ export KUBECONFIG=/auth/kubeconfig 1 @@ -873,7 +873,7 @@ $ export KUBECONFIG=/auth/kubeconfig 1 For , specify the path to the directory that you stored the installation files in. -2. Verify you can run oc commands successfully using the exported configuration: +2. Verify you can run oc commands successfully using the exported configuration by running the following command: ```terminal $ oc whoami @@ -1109,13 +1109,13 @@ testing that was possibly completed against these Red Hat OpenShift Container Pl components. ---- -1. To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. +1. Change the spec.storage.pvc field in the configs.imageregistry/cluster resource. [NOTE] ---- When you use shared storage, review your security settings to prevent outside access. ---- -2. Verify that you do not have a registry pod: +2. Verify that you do not have a registry pod by running the following command: ```terminal $ oc get pod -n openshift-image-registry -l docker-registry=default @@ -1132,7 +1132,7 @@ No resourses found in openshift-image-registry namespace ---- If you do have a registry pod in your output, you do not need to continue with this procedure. ---- -3. Check the registry configuration: +3. Check the registry configuration by running the following command: ```terminal $ oc edit configs.imageregistry.operator.openshift.io @@ -1147,7 +1147,7 @@ storage: ``` Leave the claim field blank to allow the automatic creation of an image-registry-storage persistent volume claim (PVC). The PVC is generated based on the default storage class. However, be aware that the default storage class might provide ReadWriteOnce (RWO) volumes, such as a RADOS Block Device (RBD), which can cause issues when you replicate to more than one replica. -4. Check the clusteroperator status: +4. Check the clusteroperator status by running the following command: ```terminal $ oc get clusteroperator image-registry @@ -1156,8 +1156,8 @@ $ oc get clusteroperator image-registry Example output ```terminal -NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE -image-registry 4.7 True False False 6h50m +NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE +image-registry 4.7 True False False 6h50m ``` @@ -1208,7 +1208,7 @@ have more than one replica. $ oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' ``` -2. 
Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. +2. Provision the persistent volume (PV) for the block storage device, and create a persistent volume claim (PVC) for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. 1. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: ```yaml diff --git a/ocp-product-docs-plaintext/4.19/machine_configuration/index.txt b/ocp-product-docs-plaintext/4.19/machine_configuration/index.txt index 91021b33..73ec63a0 100644 --- a/ocp-product-docs-plaintext/4.19/machine_configuration/index.txt +++ b/ocp-product-docs-plaintext/4.19/machine_configuration/index.txt @@ -335,7 +335,7 @@ UPDATED:: The True status indicates that the MCO has applied the current machine UPDATING:: The True status indicates that the MCO is applying the desired machine config, as specified in the MachineConfigPool custom resource, to at least one of the nodes in that MCP. The desired machine config is the new, edited machine config. Nodes that are updating might not be available for scheduling. The False status indicates that all nodes in the MCP are updated. DEGRADED:: A True status indicates the MCO is blocked from applying the current or desired machine config to at least one of the nodes in that MCP, or the configuration is failing. Nodes that are degraded might not be available for scheduling. A False status indicates that all nodes in the MCP are ready. MACHINECOUNT:: Indicates the total number of machines in that MCP. -READYMACHINECOUNT:: Indicates the total number of machines in that MCP that are ready for scheduling. +READYMACHINECOUNT:: Indicates the number of machines that are both running the current machine config and are ready for scheduling. This count is always less than or equal to the UPDATEDMACHINECOUNT number. UPDATEDMACHINECOUNT:: Indicates the total number of machines in that MCP that have the current machine config. DEGRADEDMACHINECOUNT:: Indicates the total number of machines in that MCP that are marked as degraded or unreconcilable. diff --git a/ocp-product-docs-plaintext/4.19/machine_configuration/mco-update-boot-images.txt b/ocp-product-docs-plaintext/4.19/machine_configuration/mco-update-boot-images.txt index ba7db430..eb5874d2 100644 --- a/ocp-product-docs-plaintext/4.19/machine_configuration/mco-update-boot-images.txt +++ b/ocp-product-docs-plaintext/4.19/machine_configuration/mco-update-boot-images.txt @@ -43,7 +43,13 @@ How the cluster behaves after disabling or re-enabling the feature, depends upon Because a boot image is used only when a node is scaled up, this feature has no effect on existing nodes. ---- -To view the current boot image used in your cluster, examine a machine set: +To view the current boot image used in your cluster, examine a machine set. + + +[NOTE] +---- +The location and format of the boot image within the machine set differs, based on the platform. However, the boot image is always listed in the spec.template.spec.providerSpec. parameter. +---- ```yaml @@ -72,6 +78,27 @@ spec: This boot image is the same as the originally-installed Red Hat OpenShift Container Platform version, in this example Red Hat OpenShift Container Platform 4.12, regardless of the current version of the cluster. The way that the boot image is represented in the machine set depends on the platform, as the structure of the providerSpec field differs from platform to platform. 
+```yaml +apiVersion: machine.openshift.io/v1beta1 +kind: MachineSet +metadata: + name: ci-ln-hmy310k-72292-5f87z-worker-a + namespace: openshift-machine-api +spec: +# ... + template: +# ... + spec: +# ... + providerSpec: + value: + ami: + id: ami-0e8fd9094e487d1ff +# ... +``` + + + [IMPORTANT] ---- If any of the machine sets for which you want to enable boot image management use a *-user-data secret that is based on Ignition version 2.2.0, the Machine Config Operator converts the Ignition version to 3.4.0 when you enable the feature. Red Hat OpenShift Container Platform versions 4.5 and lower use Ignition version 2.2.0. If this conversion fails, the MCO or your cluster could degrade. An error message that includes err: converting ignition stub failed: failed to parse Ignition config is added to the output of the oc get ClusterOperator machine-config command. You can use the following general steps to correct the problem: @@ -153,7 +180,7 @@ status: mode: All ``` -* Get the boot image version by running the following command: +* Get the boot image version by running the following command. The location and format of the boot image within the machine set differs, based on the platform. However, the boot image is always listed in the spec.template.spec.providerSpec. parameter. ```terminal $ oc get machinesets -n openshift-machine-api -o yaml @@ -182,7 +209,7 @@ spec: disks: - autoDelete: true boot: true - image: projects/rhcos-cloud/global/images/rhcos-9-6-20250402-0-gcp-x86-64 1 + image: projects/rhcos-cloud/global/images/ 1 # ... ``` @@ -295,7 +322,7 @@ status: mode: All ``` -* Get the boot image version by running the following command: +* Get the boot image version by running the following command. The location and format of the boot image within the machine set differs, based on the platform. However, the boot image is always listed in the spec.template.spec.providerSpec. parameter. ```terminal $ oc get machinesets -n openshift-machine-api -o yaml diff --git a/ocp-product-docs-plaintext/4.19/networking/networking_operators/aws_load_balancer_operator/understanding-aws-load-balancer-operator.txt b/ocp-product-docs-plaintext/4.19/networking/networking_operators/aws_load_balancer_operator/understanding-aws-load-balancer-operator.txt index e3efb179..205d2c4c 100644 --- a/ocp-product-docs-plaintext/4.19/networking/networking_operators/aws_load_balancer_operator/understanding-aws-load-balancer-operator.txt +++ b/ocp-product-docs-plaintext/4.19/networking/networking_operators/aws_load_balancer_operator/understanding-aws-load-balancer-operator.txt @@ -20,30 +20,18 @@ The AWS Load Balancer Operator can tag the public subnets if the kubernetes.io/r The AWS Load Balancer Operator supports the Kubernetes service resource of type LoadBalancer by using Network Load Balancer (NLB) with the instance target type only. -1. You can deploy the AWS Load Balancer Operator on demand from OperatorHub, by creating a Subscription object by running the following command: +1. To deploy the AWS Load Balancer Operator on-demand from OperatorHub, create a Subscription object by running the following command: ```terminal $ oc -n aws-load-balancer-operator get sub aws-load-balancer-operator --template='{{.status.installplan.name}}{{"\n"}}' ``` -Example output - -```terminal -install-zlfbt -``` - 2. 
Check if the status of an install plan is Complete by running the following command: ```terminal $ oc -n aws-load-balancer-operator get ip --template='{{.status.phase}}{{"\n"}}' ``` -Example output - -```terminal -Complete -``` - 3. View the status of the aws-load-balancer-operator-controller-manager deployment by running the following command: ```terminal diff --git a/ocp-product-docs-plaintext/4.19/networking/networking_operators/dns-operator.txt b/ocp-product-docs-plaintext/4.19/networking/networking_operators/dns-operator.txt index a3c92709..9c1d1b66 100644 --- a/ocp-product-docs-plaintext/4.19/networking/networking_operators/dns-operator.txt +++ b/ocp-product-docs-plaintext/4.19/networking/networking_operators/dns-operator.txt @@ -71,6 +71,12 @@ The Cluster Domain field is the base DNS domain used to construct fully qualified pod and service domain names. The Cluster IP is the address pods query for name resolution. The IP is defined as the 10th address in the service CIDR range. +2. To find the service CIDR range, such as 172.30.0.0/16, of your cluster, use the oc get command: + +```terminal +$ oc get networks.config/cluster -o jsonpath='{$.status.serviceNetwork}' +``` + # Using DNS forwarding @@ -131,7 +137,7 @@ spec: clusterDomain: cluster.local clusterIP: x.y.z.10 conditions: - ... +... ``` Must comply with the rfc6335 service name syntax. @@ -337,7 +343,7 @@ The string value can be a combination of units such as 0.5h10m and is converted 1. To review the change, look at the config map again by running the following command: ```terminal -oc get configmap/dns-default -n openshift-dns -o yaml +$ oc get configmap/dns-default -n openshift-dns -o yaml ``` 2. Verify that you see entries that look like the following example: @@ -368,19 +374,12 @@ The following are use cases for changing the DNS Operator managementState: oc patch dns.operator.openshift.io default --type merge --patch '{"spec":{"managementState":"Unmanaged"}}' ``` -2. Review managementState of the DNS Operator using the jsonpath command-line JSON parser: +2. Review managementState of the DNS Operator by using the jsonpath command-line JSON parser: ```terminal $ oc get dns.operator.openshift.io default -ojsonpath='{.spec.managementState}' ``` -Example output - -```terminal -"Unmanaged" -``` - - [NOTE] ---- diff --git a/ocp-product-docs-plaintext/4.19/networking/networking_operators/dpu-operator/configuring-dpu-operator.txt b/ocp-product-docs-plaintext/4.19/networking/networking_operators/dpu-operator/configuring-dpu-operator.txt index e3cb5e32..84062fd6 100644 --- a/ocp-product-docs-plaintext/4.19/networking/networking_operators/dpu-operator/configuring-dpu-operator.txt +++ b/ocp-product-docs-plaintext/4.19/networking/networking_operators/dpu-operator/configuring-dpu-operator.txt @@ -30,11 +30,9 @@ $ oc apply -f dpu-operator-host-config.yaml 4. You must label all nodes that either have an attached DPU or are functioning as a DPU. On the host cluster, this means labeling all compute nodes assuming each node has an attached DPU with dpu=true. On the DPU, where each MicroShift cluster consists of a single node, label that single node in each cluster with dpu=true. You can apply this label by running the following command: ```terminal -$ oc label node dpu=true +$ oc label node dpu=true ``` -Example -```terminal -$ oc label node worker-1 dpu=true -``` +where: +node_name:: Refers to the name of your node, such as worker-1. 
\ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.19/networking/networking_operators/dpu-operator/installing-dpu-operator.txt b/ocp-product-docs-plaintext/4.19/networking/networking_operators/dpu-operator/installing-dpu-operator.txt index eb6a1ad1..885f6a10 100644 --- a/ocp-product-docs-plaintext/4.19/networking/networking_operators/dpu-operator/installing-dpu-operator.txt +++ b/ocp-product-docs-plaintext/4.19/networking/networking_operators/dpu-operator/installing-dpu-operator.txt @@ -68,20 +68,13 @@ EOF ``` -1. Check that the Operator is installed by entering the following command: +1. To verify that the Operator is installed, enter the following command and then check that output shows Succeeded for the Operator: ```terminal $ oc get csv -n openshift-dpu-operator \ -o custom-columns=Name:.metadata.name,Phase:.status.phase ``` -Example output - -```terminal -Name Phase -dpu-operator.v.4.19-202503130333 Succeeded -``` - 2. Change to the openshift-dpu-operator project: ```terminal diff --git a/ocp-product-docs-plaintext/4.19/networking/networking_operators/dpu-operator/uninstalling-dpu-operator.txt b/ocp-product-docs-plaintext/4.19/networking/networking_operators/dpu-operator/uninstalling-dpu-operator.txt index c35ff22b..802d0822 100644 --- a/ocp-product-docs-plaintext/4.19/networking/networking_operators/dpu-operator/uninstalling-dpu-operator.txt +++ b/ocp-product-docs-plaintext/4.19/networking/networking_operators/dpu-operator/uninstalling-dpu-operator.txt @@ -29,7 +29,7 @@ $ oc delete OperatorGroup dpu-operators -n openshift-dpu-operator ``` 4. Uninstall the DPU Operator as follows: -1. Check the installed operators by running the following command: +1. Check the installed Operators by running the following command: ```terminal $ oc get csv -n openshift-dpu-operator @@ -55,17 +55,11 @@ $ oc delete namespace openshift-dpu-operator ``` -1. Verify that the DPU Operator is uninstalled by running the following command: +1. Verify that the DPU Operator is uninstalled by running the following command. An example of successful command output is No resources found in openshift-dpu-operator namespace. ```terminal $ oc get csv -n openshift-dpu-operator ``` -Example output - -```terminal -No resources found in openshift-dpu-operator namespace. -``` - * Deleting Operators from a cluster \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.19/networking/networking_operators/ebpf_manager/ebpf-manager-operator-deploy.txt b/ocp-product-docs-plaintext/4.19/networking/networking_operators/ebpf_manager/ebpf-manager-operator-deploy.txt index 29693792..eb241e2e 100644 --- a/ocp-product-docs-plaintext/4.19/networking/networking_operators/ebpf_manager/ebpf-manager-operator-deploy.txt +++ b/ocp-product-docs-plaintext/4.19/networking/networking_operators/ebpf_manager/ebpf-manager-operator-deploy.txt @@ -87,19 +87,5 @@ Example output ```text 2024/08/13 15:20:06 15016 packets received 2024/08/13 15:20:06 93581579 bytes received - -2024/08/13 15:20:09 19284 packets received -2024/08/13 15:20:09 99638680 bytes received - -2024/08/13 15:20:12 23522 packets received -2024/08/13 15:20:12 105666062 bytes received - -2024/08/13 15:20:15 27276 packets received -2024/08/13 15:20:15 112028608 bytes received - -2024/08/13 15:20:18 29470 packets received -2024/08/13 15:20:18 112732299 bytes received - -2024/08/13 15:20:21 32588 packets received -2024/08/13 15:20:21 113813781 bytes received +... 
``` diff --git a/ocp-product-docs-plaintext/4.19/networking/networking_operators/external_dns_operator/nw-configuring-cluster-wide-egress-proxy.txt b/ocp-product-docs-plaintext/4.19/networking/networking_operators/external_dns_operator/nw-configuring-cluster-wide-egress-proxy.txt index 45caae03..e4ecdcf0 100644 --- a/ocp-product-docs-plaintext/4.19/networking/networking_operators/external_dns_operator/nw-configuring-cluster-wide-egress-proxy.txt +++ b/ocp-product-docs-plaintext/4.19/networking/networking_operators/external_dns_operator/nw-configuring-cluster-wide-egress-proxy.txt @@ -26,14 +26,8 @@ $ oc -n external-dns-operator patch subscription external-dns-operator --type='j ``` -* After the deployment of the External DNS Operator is completed, verify that the trusted CA environment variable is added to the external-dns-operator deployment by running the following command: +* After the deployment of the External DNS Operator is completed, verify that the trusted CA environment variable is added to the external-dns-operator deployment by running the following command, which outputs trusted-ca: ```terminal $ oc -n external-dns-operator exec deploy/external-dns-operator -c external-dns-operator -- printenv TRUSTED_CA_CONFIGMAP_NAME ``` - -Example output - -```terminal -trusted-ca -``` diff --git a/ocp-product-docs-plaintext/4.19/networking/networking_operators/external_dns_operator/nw-creating-dns-records-on-aws.txt b/ocp-product-docs-plaintext/4.19/networking/networking_operators/external_dns_operator/nw-creating-dns-records-on-aws.txt index d899e333..475a42b8 100644 --- a/ocp-product-docs-plaintext/4.19/networking/networking_operators/external_dns_operator/nw-creating-dns-records-on-aws.txt +++ b/ocp-product-docs-plaintext/4.19/networking/networking_operators/external_dns_operator/nw-creating-dns-records-on-aws.txt @@ -7,22 +7,20 @@ You can create DNS records on AWS and AWS GovCloud by using the External DNS Ope You can create DNS records on a public hosted zone for AWS by using the Red Hat External DNS Operator. You can use the same instructions to create DNS records on a hosted zone for AWS GovCloud. -1. Check the user. The user must have access to the kube-system namespace. If you don’t have the credentials, as you can fetch the credentials from the kube-system namespace to use the cloud provider client: +1. Check the user profile, such as system:admin, by running the following command. The user profile must have access to the kube-system namespace. If you do not have the credentials, you can fetch the credentials from the kube-system namespace to use the cloud provider client: ```terminal $ oc whoami ``` -Example output +2. Fetch the values from the aws-creds secret present in the kube-system namespace. ```terminal -system:admin +$ export AWS_ACCESS_KEY_ID=$(oc get secrets aws-creds -n kube-system --template={{.data.aws_access_key_id}} | base64 -d) ``` -2. Fetch the values from aws-creds secret present in kube-system namespace. ```terminal -$ export AWS_ACCESS_KEY_ID=$(oc get secrets aws-creds -n kube-system --template={{.data.aws_access_key_id}} | base64 -d) $ export AWS_SECRET_ACCESS_KEY=$(oc get secrets aws-creds -n kube-system --template={{.data.aws_secret_access_key}} | base64 -d) ``` @@ -39,7 +37,7 @@ openshift-console console console-openshift-console.apps.te openshift-console downloads downloads-openshift-console.apps.testextdnsoperator.apacshift.support downloads http edge/Redirect None ``` -4. 
Get the list of dns zones to find the one which corresponds to the previously found route's domain: +4. Get the list of DNS zones and find the DNS zone that corresponds to the domain of the route that you previously queried: ```terminal $ aws route53 list-hosted-zones | grep testextdnsoperator.apacshift.support diff --git a/ocp-product-docs-plaintext/4.19/networking/networking_operators/external_dns_operator/nw-creating-dns-records-on-gcp.txt b/ocp-product-docs-plaintext/4.19/networking/networking_operators/external_dns_operator/nw-creating-dns-records-on-gcp.txt index 32c37150..d09fbc90 100644 --- a/ocp-product-docs-plaintext/4.19/networking/networking_operators/external_dns_operator/nw-creating-dns-records-on-gcp.txt +++ b/ocp-product-docs-plaintext/4.19/networking/networking_operators/external_dns_operator/nw-creating-dns-records-on-gcp.txt @@ -51,18 +51,12 @@ openshift-console console console-openshift-console.apps.te openshift-console downloads downloads-openshift-console.apps.test.gcp.example.com downloads http edge/Redirect None ``` -6. Get a list of managed zones by running the following command: +6. Get a list of managed zones, such as qe-cvs4g-private-zone test.gcp.example.com, by running the following command: ```terminal $ gcloud dns managed-zones list | grep test.gcp.example.com ``` -Example output - -```terminal -qe-cvs4g-private-zone test.gcp.example.com -``` - 7. Create a YAML file, for example, external-dns-sample-gcp.yaml, that defines the ExternalDNS object: Example external-dns-sample-gcp.yaml file diff --git a/ocp-product-docs-plaintext/4.19/networking/networking_operators/external_dns_operator/nw-installing-external-dns-operator-on-cloud-providers.txt b/ocp-product-docs-plaintext/4.19/networking/networking_operators/external_dns_operator/nw-installing-external-dns-operator-on-cloud-providers.txt index 5d82bab3..67559fa9 100644 --- a/ocp-product-docs-plaintext/4.19/networking/networking_operators/external_dns_operator/nw-installing-external-dns-operator-on-cloud-providers.txt +++ b/ocp-product-docs-plaintext/4.19/networking/networking_operators/external_dns_operator/nw-installing-external-dns-operator-on-cloud-providers.txt @@ -131,22 +131,8 @@ external-dns-operator-5584585fd7-5lwqm 2/2 Running 0 11m $ oc -n external-dns-operator get subscription ``` -Example output - -```terminal -NAME PACKAGE SOURCE CHANNEL -external-dns-operator external-dns-operator redhat-operators stable-v1 -``` - 5. Check the external-dns-operator version by running the following command: ```terminal $ oc -n external-dns-operator get csv ``` - -Example output - -```terminal -NAME DISPLAY VERSION REPLACES PHASE -external-dns-operator.v<1.y.z> ExternalDNS Operator <1.y.z> Succeeded -``` diff --git a/ocp-product-docs-plaintext/4.19/networking/networking_operators/external_dns_operator/understanding-external-dns-operator.txt b/ocp-product-docs-plaintext/4.19/networking/networking_operators/external_dns_operator/understanding-external-dns-operator.txt index 508d9cd6..89abe4d8 100644 --- a/ocp-product-docs-plaintext/4.19/networking/networking_operators/external_dns_operator/understanding-external-dns-operator.txt +++ b/ocp-product-docs-plaintext/4.19/networking/networking_operators/external_dns_operator/understanding-external-dns-operator.txt @@ -11,30 +11,18 @@ The External DNS Operator implements the External DNS API from the olm.openshift You can deploy the External DNS Operator on demand from the OperatorHub. Deploying the External DNS Operator creates a Subscription object. -1. 
Check the name of an install plan by running the following command: +1. Check the name of an install plan, such as install-zcvlr, by running the following command: ```terminal $ oc -n external-dns-operator get sub external-dns-operator -o yaml | yq '.status.installplan.name' ``` -Example output - -```terminal -install-zcvlr -``` - 2. Check if the status of an install plan is Complete by running the following command: ```terminal $ oc -n external-dns-operator get ip -o yaml | yq '.status.phase' ``` -Example output - -```terminal -Complete -``` - 3. View the status of the external-dns-operator deployment by running the following command: ```terminal diff --git a/ocp-product-docs-plaintext/4.19/networking/networking_operators/ingress-operator.txt b/ocp-product-docs-plaintext/4.19/networking/networking_operators/ingress-operator.txt index d7c6c3b4..244d3b1a 100644 --- a/ocp-product-docs-plaintext/4.19/networking/networking_operators/ingress-operator.txt +++ b/ocp-product-docs-plaintext/4.19/networking/networking_operators/ingress-operator.txt @@ -311,19 +311,12 @@ certificate authority that you configured in a custom PKI. * Your certificate meets the following requirements: * The certificate is valid for the ingress domain. * The certificate uses the subjectAltName extension to specify a wildcard domain, such as *.apps.ocp4.example.com. -* You must have an IngressController CR. You may use the default one: +* You must have an IngressController CR. The default IngressController CR satisfies this requirement. You can run the following command to check that you have an IngressController CR: ```terminal $ oc --namespace openshift-ingress-operator get ingresscontrollers ``` -Example output - -```terminal -NAME AGE -default 10m -``` - [NOTE] @@ -614,18 +607,12 @@ $ oc apply -f ingress-autoscaler.yaml * Verify that the default Ingress Controller is scaled out to match the value returned by the kube-state-metrics query by running the following commands: -* Use the grep command to search the Ingress Controller YAML file for replicas: +* Use the grep command to search the Ingress Controller YAML file for the number of replicas: ```terminal $ oc get -n openshift-ingress-operator ingresscontroller/default -o yaml | grep replicas: ``` -Example output - -```terminal - replicas: 3 -``` - * Get the pods in the openshift-ingress project: ```terminal @@ -667,39 +654,18 @@ Scaling is not an immediate action, as it takes time to create the desired numbe $ oc get -n openshift-ingress-operator ingresscontrollers/default -o jsonpath='{$.status.availableReplicas}' ``` -Example output - -```terminal -2 -``` - -2. Scale the default IngressController to the desired number of replicas using -the oc patch command. The following example scales the default IngressController -to 3 replicas: +2. Scale the default IngressController to the desired number of replicas by using the oc patch command. The following example scales the default IngressController to 3 replicas: ```terminal $ oc patch -n openshift-ingress-operator ingresscontroller/default --patch '{"spec":{"replicas": 3}}' --type=merge ``` -Example output - -```terminal -ingresscontroller.operator.openshift.io/default patched -``` - -3. Verify that the default IngressController scaled to the number of replicas -that you specified: +3. 
Verify that the default IngressController scaled to the number of replicas that you specified: ```terminal $ oc get -n openshift-ingress-operator ingresscontrollers/default -o jsonpath='{$.status.availableReplicas}' ``` -Example output - -```terminal -3 -``` - [TIP] ---- @@ -1522,18 +1488,12 @@ Optional: Domain for Red Hat OpenShift Container Platform infrastructure to use ---- Wait for the openshift-apiserver finish rolling updates before exposing the route. ---- -1. Expose the route: +1. Expose the route by entering the following command. The command outputs route.route.openshift.io/hello-openshift exposed to indicate that the route is exposed. ```terminal $ oc expose service hello-openshift ``` -Example output - -```terminal -route.route.openshift.io/hello-openshift exposed -``` - 2. Get a list of routes by running the following command: ```terminal diff --git a/ocp-product-docs-plaintext/4.19/networking/networking_operators/k8s-nmstate-about-the-k8s-nmstate-operator.txt b/ocp-product-docs-plaintext/4.19/networking/networking_operators/k8s-nmstate-about-the-k8s-nmstate-operator.txt index 4733f64c..595f6106 100644 --- a/ocp-product-docs-plaintext/4.19/networking/networking_operators/k8s-nmstate-about-the-k8s-nmstate-operator.txt +++ b/ocp-product-docs-plaintext/4.19/networking/networking_operators/k8s-nmstate-about-the-k8s-nmstate-operator.txt @@ -30,7 +30,7 @@ You can install the Kubernetes NMState Operator by using the web console or the ## Installing the Kubernetes NMState Operator by using the web console -You can install the Kubernetes NMState Operator by using the web console. After it is installed, the Operator can deploy the NMState State Controller as a daemon set across all of the cluster nodes. +You can install the Kubernetes NMState Operator by using the web console. After you install the Kubernetes NMState Operator, the Operator can deploy the NMState State Controller as a daemon set across all of the cluster nodes. * You are logged in as a user with cluster-admin privileges. @@ -49,8 +49,6 @@ The name restriction is a known issue. The instance is a singleton for the entir ---- 9. Accept the default settings and click Create to create the instance. -After you install the Kubernetes NMState Operator, the Operator has deployed the NMState State Controller as a daemon set across all of the cluster nodes. - ## Installing the Kubernetes NMState Operator by using the CLI You can install the Kubernetes NMState Operator by using the OpenShift CLI (oc). After it is installed, the Operator can deploy the NMState State Controller as a daemon set across all of the cluster nodes. @@ -112,13 +110,6 @@ $ oc get clusterserviceversion -n openshift-nmstate \ -o custom-columns=Name:.metadata.name,Phase:.status.phase ``` -Example output - -```terminal -Name Phase -kubernetes-nmstate-operator.4.19.0-202210210157 Succeeded -``` - 5. Create an instance of the nmstate Operator: ```terminal @@ -151,21 +142,12 @@ $ oc apply -f .yaml ``` -1. Verify that all pods for the NMState Operator are in a Running state: +* Verify that all pods for the NMState Operator have the Running status by entering the following command: ```terminal $ oc get pod -n openshift-nmstate ``` -Example output - -```terminal -Name Ready Status Restarts Age -pod/nmstate-handler-wn55p 1/1 Running 0 77s -pod/nmstate-operator-f6bb869b6-v5m92 1/1 Running 0 4m51s -... 
-``` - ## Viewing metrics collected by the Kubernetes NMState Operator diff --git a/ocp-product-docs-plaintext/4.19/networking/networking_operators/metallb-operator/metallb-operator-install.txt b/ocp-product-docs-plaintext/4.19/networking/networking_operators/metallb-operator/metallb-operator-install.txt index 52df270c..4538e591 100644 --- a/ocp-product-docs-plaintext/4.19/networking/networking_operators/metallb-operator/metallb-operator-install.txt +++ b/ocp-product-docs-plaintext/4.19/networking/networking_operators/metallb-operator/metallb-operator-install.txt @@ -119,20 +119,13 @@ install-wzg94 metallb-operator.4.19.0-nnnnnnnnnnnn Automatic true ---- Installation of the Operator might take a few seconds. ---- -2. To verify that the Operator is installed, enter the following command: +2. To verify that the Operator is installed, enter the following command and then check that output shows Succeeded for the Operator: ```terminal $ oc get clusterserviceversion -n metallb-system \ -o custom-columns=Name:.metadata.name,Phase:.status.phase ``` -Example output - -```terminal -Name Phase -metallb-operator.4.19.0-nnnnnnnnnnnn Succeeded -``` - # Starting MetalLB on your cluster diff --git a/ocp-product-docs-plaintext/4.19/networking/networking_operators/metallb-operator/metallb-upgrading-operator.txt b/ocp-product-docs-plaintext/4.19/networking/networking_operators/metallb-operator/metallb-upgrading-operator.txt index 189cbe55..157be37e 100644 --- a/ocp-product-docs-plaintext/4.19/networking/networking_operators/metallb-operator/metallb-upgrading-operator.txt +++ b/ocp-product-docs-plaintext/4.19/networking/networking_operators/metallb-operator/metallb-upgrading-operator.txt @@ -42,13 +42,6 @@ spec: $ oc -n metallb-system get csv ``` -Example output - -```terminal -NAME DISPLAY VERSION REPLACES PHASE -metallb-operator.v4.19.0 MetalLB Operator 4.19.0 Succeeded -``` - 4. Check the install plan that exists in the namespace by entering the following command. ```terminal @@ -76,19 +69,12 @@ $ oc edit installplan -n metallb-system After you edit the install plan, the upgrade operation starts. If you enter the oc -n metallb-system get csv command during the upgrade operation, the output might show the Replacing or the Pending status. ---- -1. 
Verify the upgrade was successful by entering the following command: +* To verify that the Operator is upgraded, enter the following command and then check that output shows Succeeded for the Operator: ```terminal $ oc -n metallb-system get csv ``` -Example output - -```terminal -NAME DISPLAY VERSION REPLACE PHASE -metallb-operator.v.0-202503102139 MetalLB Operator 4.19.0-202503102139 metallb-operator.v4.19.0-202502261233 Succeeded -``` - # Additional resources diff --git a/ocp-product-docs-plaintext/4.19/networking/networking_operators/sr-iov-operator/installing-sriov-operator.txt b/ocp-product-docs-plaintext/4.19/networking/networking_operators/sr-iov-operator/installing-sriov-operator.txt index cbd00e1e..8f91163c 100644 --- a/ocp-product-docs-plaintext/4.19/networking/networking_operators/sr-iov-operator/installing-sriov-operator.txt +++ b/ocp-product-docs-plaintext/4.19/networking/networking_operators/sr-iov-operator/installing-sriov-operator.txt @@ -78,20 +78,13 @@ EOF ``` -* Check that the Operator is installed by entering the following command: +* To verify that the Operator is installed, enter the following command and then check that output shows Succeeded for the Operator: ```terminal $ oc get csv -n openshift-sriov-network-operator \ -o custom-columns=Name:.metadata.name,Phase:.status.phase ``` -Example output - -```terminal -Name Phase -sriov-network-operator.4.19.0-202406131906 Succeeded -``` - ## Web console: Installing the SR-IOV Network Operator diff --git a/ocp-product-docs-plaintext/4.19/nodes/clusters/nodes-cluster-resource-configure.txt b/ocp-product-docs-plaintext/4.19/nodes/clusters/nodes-cluster-resource-configure.txt index 287e6726..75e0cd17 100644 --- a/ocp-product-docs-plaintext/4.19/nodes/clusters/nodes-cluster-resource-configure.txt +++ b/ocp-product-docs-plaintext/4.19/nodes/clusters/nodes-cluster-resource-configure.txt @@ -189,8 +189,7 @@ intended to be a helpful starting point. # Finding the memory request and limit from within a pod -An application wishing to dynamically discover its memory request and limit from -within a pod should use the Downward API. +An application wishing to dynamically discover its memory request and limit from within a pod should use the Downward API. * Configure the pod to add the MEMORY_REQUEST and MEMORY_LIMIT stanzas: 1. Create a YAML file similar to the following: @@ -202,7 +201,7 @@ metadata: name: test spec: securityContext: - runAsNonRoot: true + runAsNonRoot: false seccompProfile: type: RuntimeDefault containers: @@ -288,7 +287,7 @@ follows: 1. Access the pod using a remote shell: ```terminal -# oc rsh test +# oc rsh ``` 2. Run the following command to see the current OOM kill count in /sys/fs/cgroup/memory/memory.oom_control: @@ -315,21 +314,7 @@ Example output Killed ``` -4. Run the following command to view the exit status of the sed command: - -```terminal -$ echo $? -``` - -Example output - -```terminal -137 -``` - - -The 137 code indicates the container process exited with code 137, indicating it received a SIGKILL signal. -5. Run the following command to see that the OOM kill counter in /sys/fs/cgroup/memory/memory.oom_control incremented: +4. Run the following command to see that the OOM kill counter in /sys/fs/cgroup/memory/memory.oom_control incremented: ```terminal $ grep '^oom_kill ' /sys/fs/cgroup/memory/memory.oom_control @@ -347,7 +332,7 @@ exits, whether immediately or not, it will have phase Failed and reason OOMKilled. An OOM-killed pod might be restarted depending on the value of restartPolicy. 
If not restarted, controllers such as the replication controller will notice the pod’s failed status and create a new pod to replace the old one. -Use the follwing command to get the pod status: +Use the following command to get the pod status: ```terminal $ oc get pod test diff --git a/ocp-product-docs-plaintext/4.19/nodes/nodes/nodes-update-boot-images.txt b/ocp-product-docs-plaintext/4.19/nodes/nodes/nodes-update-boot-images.txt index 6c176af4..bd0e115b 100644 --- a/ocp-product-docs-plaintext/4.19/nodes/nodes/nodes-update-boot-images.txt +++ b/ocp-product-docs-plaintext/4.19/nodes/nodes/nodes-update-boot-images.txt @@ -43,7 +43,13 @@ How the cluster behaves after disabling or re-enabling the feature, depends upon Because a boot image is used only when a node is scaled up, this feature has no effect on existing nodes. ---- -To view the current boot image used in your cluster, examine a machine set: +To view the current boot image used in your cluster, examine a machine set. + + +[NOTE] +---- +The location and format of the boot image within the machine set differs, based on the platform. However, the boot image is always listed in the spec.template.spec.providerSpec. parameter. +---- ```yaml @@ -72,6 +78,27 @@ spec: This boot image is the same as the originally-installed Red Hat OpenShift Container Platform version, in this example Red Hat OpenShift Container Platform 4.12, regardless of the current version of the cluster. The way that the boot image is represented in the machine set depends on the platform, as the structure of the providerSpec field differs from platform to platform. +```yaml +apiVersion: machine.openshift.io/v1beta1 +kind: MachineSet +metadata: + name: ci-ln-hmy310k-72292-5f87z-worker-a + namespace: openshift-machine-api +spec: +# ... + template: +# ... + spec: +# ... + providerSpec: + value: + ami: + id: ami-0e8fd9094e487d1ff +# ... +``` + + + [IMPORTANT] ---- If any of the machine sets for which you want to enable boot image management use a *-user-data secret that is based on Ignition version 2.2.0, the Machine Config Operator converts the Ignition version to 3.4.0 when you enable the feature. Red Hat OpenShift Container Platform versions 4.5 and lower use Ignition version 2.2.0. If this conversion fails, the MCO or your cluster could degrade. An error message that includes err: converting ignition stub failed: failed to parse Ignition config is added to the output of the oc get ClusterOperator machine-config command. You can use the following general steps to correct the problem: @@ -156,7 +183,7 @@ status: mode: All ``` -* Get the boot image version by running the following command: +* Get the boot image version by running the following command. The location and format of the boot image within the machine set differs, based on the platform. However, the boot image is always listed in the spec.template.spec.providerSpec. parameter. ```terminal $ oc get machinesets -n openshift-machine-api -o yaml @@ -185,7 +212,7 @@ spec: disks: - autoDelete: true boot: true - image: projects/rhcos-cloud/global/images/rhcos-9-6-20250402-0-gcp-x86-64 1 + image: projects/rhcos-cloud/global/images/ 1 # ... ``` @@ -298,7 +325,7 @@ status: mode: All ``` -* Get the boot image version by running the following command: +* Get the boot image version by running the following command. The location and format of the boot image within the machine set differs, based on the platform. However, the boot image is always listed in the spec.template.spec.providerSpec. parameter. 
```terminal $ oc get machinesets -n openshift-machine-api -o yaml diff --git a/ocp-product-docs-plaintext/4.19/observability/monitoring/accessing-metrics/accessing-metrics-as-an-administrator.txt b/ocp-product-docs-plaintext/4.19/observability/monitoring/accessing-metrics/accessing-metrics-as-an-administrator.txt index cbdf9173..5094b338 100644 --- a/ocp-product-docs-plaintext/4.19/observability/monitoring/accessing-metrics/accessing-metrics-as-an-administrator.txt +++ b/ocp-product-docs-plaintext/4.19/observability/monitoring/accessing-metrics/accessing-metrics-as-an-administrator.txt @@ -43,7 +43,7 @@ or as a user with view permissions for all projects, you can access metrics for The Metrics UI includes predefined queries, for example, CPU, memory, bandwidth, or network packet for all projects. You can also run custom Prometheus Query Language (PromQL) queries. * You have access to the cluster as a user with the cluster-admin cluster role or with view permissions for all projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. In the Red Hat OpenShift Container Platform web console, click Observe -> Metrics. 2. To add one or more queries, perform any of the following actions: diff --git a/ocp-product-docs-plaintext/4.19/observability/monitoring/accessing-metrics/accessing-monitoring-apis-by-using-the-cli.txt b/ocp-product-docs-plaintext/4.19/observability/monitoring/accessing-metrics/accessing-monitoring-apis-by-using-the-cli.txt index 59ee398f..8e1495c6 100644 --- a/ocp-product-docs-plaintext/4.19/observability/monitoring/accessing-metrics/accessing-monitoring-apis-by-using-the-cli.txt +++ b/ocp-product-docs-plaintext/4.19/observability/monitoring/accessing-metrics/accessing-monitoring-apis-by-using-the-cli.txt @@ -88,7 +88,7 @@ Limit queries to a maximum of one every 30 seconds. If you need to forward large amounts of data outside the cluster, use remote write instead. For more information, see the Configuring remote write storage section. ---- -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). * You have access to the cluster as a user with the cluster-monitoring-view cluster role or have obtained a bearer token with get permission on the namespaces resource. [NOTE] diff --git a/ocp-product-docs-plaintext/4.19/observability/monitoring/configuring-core-platform-monitoring/configuring-alerts-and-notifications.txt b/ocp-product-docs-plaintext/4.19/observability/monitoring/configuring-core-platform-monitoring/configuring-alerts-and-notifications.txt index c253f9f3..8c94cd29 100644 --- a/ocp-product-docs-plaintext/4.19/observability/monitoring/configuring-core-platform-monitoring/configuring-alerts-and-notifications.txt +++ b/ocp-product-docs-plaintext/4.19/observability/monitoring/configuring-core-platform-monitoring/configuring-alerts-and-notifications.txt @@ -13,7 +13,7 @@ If you add the same external Alertmanager configuration for multiple clusters an * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config ConfigMap object. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -84,7 +84,7 @@ If you do not need the local Alertmanager, you can disable it by configuring the * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config config map. 
-* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the cluster-monitoring-config config map in the openshift-monitoring project: @@ -129,7 +129,7 @@ After you add a secret to the config map, the secret is mounted as a volume at / * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config config map. * You have created the secret to be configured in Alertmanager in the {namespace-name} project. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -180,7 +180,7 @@ You can attach custom labels to all time series and alerts leaving Prometheus by * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config ConfigMap object. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: diff --git a/ocp-product-docs-plaintext/4.19/observability/monitoring/configuring-core-platform-monitoring/configuring-metrics.txt b/ocp-product-docs-plaintext/4.19/observability/monitoring/configuring-core-platform-monitoring/configuring-metrics.txt index aa65995a..13aadfce 100644 --- a/ocp-product-docs-plaintext/4.19/observability/monitoring/configuring-core-platform-monitoring/configuring-metrics.txt +++ b/ocp-product-docs-plaintext/4.19/observability/monitoring/configuring-core-platform-monitoring/configuring-metrics.txt @@ -11,7 +11,7 @@ You can configure remote write storage to enable Prometheus to send ingested met * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config ConfigMap object. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). * You have set up a remote write compatible endpoint (such as Thanos) and know the endpoint URL. See the Prometheus remote endpoints and storage documentation for information about endpoints that are compatible with the remote write feature. [IMPORTANT] @@ -451,7 +451,7 @@ You can create cluster ID labels for metrics by adding the write_relabel setting * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config ConfigMap object. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). * You have configured remote write storage. 1. Edit the {configmap-name} config map in the {namespace-name} project: diff --git a/ocp-product-docs-plaintext/4.19/observability/monitoring/configuring-core-platform-monitoring/configuring-performance-and-scalability.txt b/ocp-product-docs-plaintext/4.19/observability/monitoring/configuring-core-platform-monitoring/configuring-performance-and-scalability.txt index 811fbae2..a58315ac 100644 --- a/ocp-product-docs-plaintext/4.19/observability/monitoring/configuring-core-platform-monitoring/configuring-performance-and-scalability.txt +++ b/ocp-product-docs-plaintext/4.19/observability/monitoring/configuring-core-platform-monitoring/configuring-performance-and-scalability.txt @@ -29,7 +29,7 @@ You cannot add a node selector constraint directly to an existing scheduled pod. * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config ConfigMap object. 
-* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. If you have not done so yet, add a label to the nodes on which you want to run the monitoring components: @@ -85,7 +85,7 @@ You can assign tolerations to any of the monitoring stack components to enable m * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config ConfigMap object. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -151,7 +151,7 @@ Prometheus then considers this target to be down and sets its up metric value to ---- * You have access to the cluster as a user with the cluster-admin cluster role. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring namespace: @@ -194,7 +194,7 @@ To configure CPU and memory resources, specify values for resource limits and re * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the ConfigMap object named cluster-monitoring-config. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -318,7 +318,7 @@ data: To choose a metrics collection profile for core Red Hat OpenShift Container Platform monitoring components, edit the cluster-monitoring-config ConfigMap object. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). * You have created the cluster-monitoring-config ConfigMap object. * You have access to the cluster as a user with the cluster-admin cluster role. @@ -377,7 +377,7 @@ You can configure pod topology spread constraints for monitoring pods by using t * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config ConfigMap object. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: diff --git a/ocp-product-docs-plaintext/4.19/observability/monitoring/configuring-core-platform-monitoring/preparing-to-configure-the-monitoring-stack.txt b/ocp-product-docs-plaintext/4.19/observability/monitoring/configuring-core-platform-monitoring/preparing-to-configure-the-monitoring-stack.txt index a45b87d1..5d1137e5 100644 --- a/ocp-product-docs-plaintext/4.19/observability/monitoring/configuring-core-platform-monitoring/preparing-to-configure-the-monitoring-stack.txt +++ b/ocp-product-docs-plaintext/4.19/observability/monitoring/configuring-core-platform-monitoring/preparing-to-configure-the-monitoring-stack.txt @@ -34,7 +34,7 @@ Each procedure that requires a change in the config map includes its expected ou You can configure the core Red Hat OpenShift Container Platform monitoring components by creating and updating the cluster-monitoring-config config map in the openshift-monitoring project. The Cluster Monitoring Operator (CMO) then configures the core components of the monitoring stack. * You have access to the cluster as a user with the cluster-admin cluster role. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. 
Check whether the cluster-monitoring-config ConfigMap object exists: diff --git a/ocp-product-docs-plaintext/4.19/observability/monitoring/configuring-core-platform-monitoring/storing-and-recording-data.txt b/ocp-product-docs-plaintext/4.19/observability/monitoring/configuring-core-platform-monitoring/storing-and-recording-data.txt index 43e06841..c4cc3be2 100644 --- a/ocp-product-docs-plaintext/4.19/observability/monitoring/configuring-core-platform-monitoring/storing-and-recording-data.txt +++ b/ocp-product-docs-plaintext/4.19/observability/monitoring/configuring-core-platform-monitoring/storing-and-recording-data.txt @@ -37,7 +37,7 @@ To use a persistent volume (PV) for monitoring components, you must configure a * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config ConfigMap object. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -113,7 +113,7 @@ You can only expand the size of the PVC. Shrinking the storage size is not possi * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config ConfigMap object. * You have configured at least one PVC for core Red Hat OpenShift Container Platform monitoring components. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Manually expand a PVC with the updated storage request. For more information, see "Expanding persistent volume claims (PVCs) with a file system" in Expanding persistent volumes. 2. Edit the {configmap-name} config map in the {namespace-name} project: @@ -187,7 +187,7 @@ Data compaction occurs every two hours. Therefore, a persistent volume (PV) migh * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config ConfigMap object. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -305,7 +305,7 @@ The default log level is info. * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config ConfigMap object. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -370,7 +370,7 @@ Because log rotation is not supported, only enable this feature temporarily when * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config ConfigMap object. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -436,7 +436,7 @@ For default platform monitoring in the openshift-monitoring project, you can ena Because log rotation is not supported, only enable this feature temporarily when you need to troubleshoot an issue. After you finish troubleshooting, disable query logging by reverting the changes you made to the ConfigMap object to enable the feature. ---- -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). * You have access to the cluster as a user with the cluster-admin cluster role. * You have created the cluster-monitoring-config ConfigMap object. 
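All of the monitoring storage, retention, and logging procedures above edit the same cluster-monitoring-config config map in the openshift-monitoring project. As a minimal sketch only, assuming placeholder values for the storage class name, volume size, and retention period, such a config map can look like the following:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      retention: 15d
      volumeClaimTemplate:
        spec:
          storageClassName: example-storage-class
          resources:
            requests:
              storage: 40Gi
```

You can open the config map for editing by running the oc -n openshift-monitoring edit configmap cluster-monitoring-config command.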
diff --git a/ocp-product-docs-plaintext/4.19/observability/monitoring/configuring-user-workload-monitoring/configuring-alerts-and-notifications-uwm.txt b/ocp-product-docs-plaintext/4.19/observability/monitoring/configuring-user-workload-monitoring/configuring-alerts-and-notifications-uwm.txt index c1d8482f..c55e71e7 100644 --- a/ocp-product-docs-plaintext/4.19/observability/monitoring/configuring-user-workload-monitoring/configuring-alerts-and-notifications-uwm.txt +++ b/ocp-product-docs-plaintext/4.19/observability/monitoring/configuring-user-workload-monitoring/configuring-alerts-and-notifications-uwm.txt @@ -13,7 +13,7 @@ If you add the same external Alertmanager configuration for multiple clusters an * You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -95,7 +95,7 @@ After you add a secret to the config map, the secret is mounted as a volume at / * You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. * You have created the secret to be configured in Alertmanager in the {namespace-name} project. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -146,7 +146,7 @@ You can attach custom labels to all time series and alerts leaving Prometheus by * You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -233,7 +233,7 @@ If you are a non-administrator user who has been given the alert-routing-edit cl * A cluster administrator has enabled monitoring for user-defined projects. * A cluster administrator has enabled alert routing for user-defined projects. * You are logged in as a user that has the alert-routing-edit cluster role for the project for which you want to create alert routing. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Create a YAML file for alert routing. The example in this procedure uses a file called example-app-alert-routing.yaml. 2. Add an AlertmanagerConfig YAML definition to the file. For example: @@ -278,7 +278,7 @@ All features of a supported version of upstream Alertmanager are also supported * You have access to the cluster as a user with the cluster-admin cluster role. * You have enabled a separate instance of Alertmanager for user-defined alert routing. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. 
Print the currently active Alertmanager configuration into the file alertmanager.yaml: diff --git a/ocp-product-docs-plaintext/4.19/observability/monitoring/configuring-user-workload-monitoring/configuring-metrics-uwm.txt b/ocp-product-docs-plaintext/4.19/observability/monitoring/configuring-user-workload-monitoring/configuring-metrics-uwm.txt index 35f522e5..b74018a1 100644 --- a/ocp-product-docs-plaintext/4.19/observability/monitoring/configuring-user-workload-monitoring/configuring-metrics-uwm.txt +++ b/ocp-product-docs-plaintext/4.19/observability/monitoring/configuring-user-workload-monitoring/configuring-metrics-uwm.txt @@ -11,7 +11,7 @@ You can configure remote write storage to enable Prometheus to send ingested met * You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). * You have set up a remote write compatible endpoint (such as Thanos) and know the endpoint URL. See the Prometheus remote endpoints and storage documentation for information about endpoints that are compatible with the remote write feature. [IMPORTANT] @@ -459,7 +459,7 @@ You cannot override this default configuration by setting the value of the honor * You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). * You have configured remote write storage. 1. Edit the {configmap-name} config map in the {namespace-name} project: diff --git a/ocp-product-docs-plaintext/4.19/observability/monitoring/configuring-user-workload-monitoring/configuring-performance-and-scalability-uwm.txt b/ocp-product-docs-plaintext/4.19/observability/monitoring/configuring-user-workload-monitoring/configuring-performance-and-scalability-uwm.txt index 93ff48f3..cb6254ff 100644 --- a/ocp-product-docs-plaintext/4.19/observability/monitoring/configuring-user-workload-monitoring/configuring-performance-and-scalability-uwm.txt +++ b/ocp-product-docs-plaintext/4.19/observability/monitoring/configuring-user-workload-monitoring/configuring-performance-and-scalability-uwm.txt @@ -28,7 +28,7 @@ It is not permitted to move components to control plane or infrastructure nodes. * You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. If you have not done so yet, add a label to the nodes on which you want to run the monitoring components: @@ -84,7 +84,7 @@ You can assign tolerations to the components that monitor user-defined projects, * You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). 
+* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -145,7 +145,7 @@ You can configure these limits and requests for monitoring components that monit To configure CPU and memory resources, specify values for resource limits and requests in the {configmap-name} ConfigMap object in the {namespace-name} namespace. * You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -239,7 +239,7 @@ If you set sample or label limits, no further sample data is ingested for that t * You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project: @@ -294,7 +294,7 @@ You can create alerts that notify you when: * You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. * You have limited the number of samples that can be accepted per target scrape in user-defined projects, by using enforcedSampleLimit. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Create a YAML file with alerts that inform you when the targets are down and when the enforced sample limit is approaching. The file in this example is called monitoring-stack-alerts.yaml: @@ -362,7 +362,7 @@ You can configure pod topology spread constraints for monitoring pods by using t * You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: diff --git a/ocp-product-docs-plaintext/4.19/observability/monitoring/configuring-user-workload-monitoring/preparing-to-configure-the-monitoring-stack-uwm.txt b/ocp-product-docs-plaintext/4.19/observability/monitoring/configuring-user-workload-monitoring/preparing-to-configure-the-monitoring-stack-uwm.txt index faf3d446..33249a8f 100644 --- a/ocp-product-docs-plaintext/4.19/observability/monitoring/configuring-user-workload-monitoring/preparing-to-configure-the-monitoring-stack-uwm.txt +++ b/ocp-product-docs-plaintext/4.19/observability/monitoring/configuring-user-workload-monitoring/preparing-to-configure-the-monitoring-stack-uwm.txt @@ -57,7 +57,7 @@ You must have access to the cluster as a user with the cluster-admin cluster rol ---- * You have access to the cluster as a user with the cluster-admin cluster role. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 
* You have created the cluster-monitoring-config ConfigMap object. * You have optionally created and configured the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project. You can add configuration options to this ConfigMap object for the components that monitor user-defined projects. @@ -118,7 +118,7 @@ As a cluster administrator, you can assign the user-workload-monitoring-config-e * You have access to the cluster as a user with the cluster-admin cluster role. * The user account that you are assigning the role to already exists. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Assign the user-workload-monitoring-config-edit role to a user in the openshift-user-workload-monitoring project: @@ -177,7 +177,7 @@ You can allow users to create user-defined alert routing configurations that use * You have access to the cluster as a user with the cluster-admin cluster role. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the cluster-monitoring-config ConfigMap object: @@ -260,7 +260,7 @@ You can grant users permission to configure alert routing for user-defined proje * You have access to the cluster as a user with the cluster-admin cluster role. * You have enabled monitoring for user-defined projects. * The user account that you are assigning the role to already exists. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). * Assign the alert-routing-edit cluster role to a user in the user-defined project: @@ -270,7 +270,7 @@ $ oc -n adm policy add-role-to-user alert-routing-edit 1 For , substitute the namespace for the user-defined project, such as ns1. For , substitute the username for the account to which you want to assign the role. -Configuring alert notifications +* Configuring alert notifications # Granting users permissions for monitoring for user-defined projects diff --git a/ocp-product-docs-plaintext/4.19/observability/monitoring/configuring-user-workload-monitoring/storing-and-recording-data-uwm.txt b/ocp-product-docs-plaintext/4.19/observability/monitoring/configuring-user-workload-monitoring/storing-and-recording-data-uwm.txt index e3ae38c9..96cbdbd3 100644 --- a/ocp-product-docs-plaintext/4.19/observability/monitoring/configuring-user-workload-monitoring/storing-and-recording-data-uwm.txt +++ b/ocp-product-docs-plaintext/4.19/observability/monitoring/configuring-user-workload-monitoring/storing-and-recording-data-uwm.txt @@ -37,7 +37,7 @@ To use a persistent volume (PV) for monitoring components, you must configure a * You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -118,7 +118,7 @@ You can only expand the size of the PVC. Shrinking the storage size is not possi * You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. 
* You have configured at least one PVC for components that monitor user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Manually expand a PVC with the updated storage request. For more information, see "Expanding persistent volume claims (PVCs) with a file system" in Expanding persistent volumes. 2. Edit the {configmap-name} config map in the {namespace-name} project: @@ -197,7 +197,7 @@ Data compaction occurs every two hours. Therefore, a persistent volume (PV) migh * You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -247,7 +247,7 @@ By default, for user-defined projects, Thanos Ruler automatically retains metric * You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project: @@ -311,7 +311,7 @@ The default log level is info. * You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: @@ -376,7 +376,7 @@ Because log rotation is not supported, only enable this feature temporarily when * You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the {configmap-name} config map in the {namespace-name} project: diff --git a/ocp-product-docs-plaintext/4.19/observability/monitoring/managing-alerts/managing-alerts-as-a-developer.txt b/ocp-product-docs-plaintext/4.19/observability/monitoring/managing-alerts/managing-alerts-as-a-developer.txt index 4e49524d..80edd5fc 100644 --- a/ocp-product-docs-plaintext/4.19/observability/monitoring/managing-alerts/managing-alerts-as-a-developer.txt +++ b/ocp-product-docs-plaintext/4.19/observability/monitoring/managing-alerts/managing-alerts-as-a-developer.txt @@ -176,7 +176,7 @@ To help users understand the impact and cause of the alert, ensure that your ale * You have enabled monitoring for user-defined projects. * You are logged in as a cluster administrator or as a user that has the monitoring-rules-edit cluster role for the project where you want to create an alerting rule. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Create a YAML file for alerting rules. 
In this example, it is called example-app-alerting-rule.yaml. 2. Add an alerting rule configuration to the YAML file. @@ -224,7 +224,7 @@ Therefore, you can have generic alerting rules that apply to multiple user-defin * The user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project to edit the user-workload-monitoring-config config map. * The monitoring-rules-edit cluster role for the project where you want to create an alerting rule. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project: @@ -299,7 +299,7 @@ To list alerting rules for a user-defined project, you must have been assigned t * You have enabled monitoring for user-defined projects. * You are logged in as a user that has the monitoring-rules-view cluster role for your project. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. To list alerting rules in : @@ -320,7 +320,7 @@ You can remove alerting rules for user-defined projects. * You have enabled monitoring for user-defined projects. * You are logged in as a cluster administrator or as a user that has the monitoring-rules-edit cluster role for the project where you want to create an alerting rule. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). * To remove rule in , run the following: diff --git a/ocp-product-docs-plaintext/4.19/observability/monitoring/managing-alerts/managing-alerts-as-an-administrator.txt b/ocp-product-docs-plaintext/4.19/observability/monitoring/managing-alerts/managing-alerts-as-an-administrator.txt index 8daeffe5..34a4a764 100644 --- a/ocp-product-docs-plaintext/4.19/observability/monitoring/managing-alerts/managing-alerts-as-an-administrator.txt +++ b/ocp-product-docs-plaintext/4.19/observability/monitoring/managing-alerts/managing-alerts-as-an-administrator.txt @@ -181,7 +181,7 @@ These alerting rules trigger alerts based on the values of chosen metrics. ---- * You have access to the cluster as a user that has the cluster-admin cluster role. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Create a new YAML configuration file named example-alerting-rule.yaml. 2. Add an AlertingRule resource to the YAML file. @@ -230,7 +230,7 @@ As a cluster administrator, you can modify core platform alerts before Alertmana For example, you can change the severity label of an alert, add a custom label, or exclude an alert from being sent to Alertmanager. * You have access to the cluster as a user with the cluster-admin cluster role. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Create a new YAML configuration file named example-modified-alerting-rule.yaml. 2. Add an AlertRelabelConfig resource to the YAML file. @@ -296,7 +296,7 @@ To help users understand the impact and cause of the alert, ensure that your ale * You have enabled monitoring for user-defined projects. * You are logged in as a cluster administrator or as a user that has the monitoring-rules-edit cluster role for the project where you want to create an alerting rule. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Create a YAML file for alerting rules. In this example, it is called example-app-alerting-rule.yaml. 2. 
Add an alerting rule configuration to the YAML file. @@ -344,7 +344,7 @@ Therefore, you can have generic alerting rules that apply to multiple user-defin * The user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project to edit the user-workload-monitoring-config config map. * The monitoring-rules-edit cluster role for the project where you want to create an alerting rule. * A cluster administrator has enabled monitoring for user-defined projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project: @@ -419,7 +419,7 @@ As a cluster administrator, you can list alerting rules for core Red Hat OpenShift Container Platform and user-defined projects together in a single view. * You have access to the cluster as a user with the cluster-admin role. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. In the Red Hat OpenShift Container Platform web console, go to Observe -> Alerting -> Alerting rules. 2. Select the Platform and User sources in the Filter drop-down menu. @@ -435,7 +435,7 @@ You can remove alerting rules for user-defined projects. * You have enabled monitoring for user-defined projects. * You are logged in as a cluster administrator or as a user that has the monitoring-rules-edit cluster role for the project where you want to create an alerting rule. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). * To remove rule in , run the following: @@ -452,7 +452,7 @@ Creating cross-project alerting rules for user-defined projects is enabled by de * To prevent buggy alerting rules from being applied to the cluster without having to identify the rule that causes the issue. * You have access to the cluster as a user with the cluster-admin cluster role. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. Edit the cluster-monitoring-config config map in the openshift-monitoring project: diff --git a/ocp-product-docs-plaintext/4.19/observability/monitoring/troubleshooting-monitoring-issues.txt b/ocp-product-docs-plaintext/4.19/observability/monitoring/troubleshooting-monitoring-issues.txt index 0498687e..274b1d37 100644 --- a/ocp-product-docs-plaintext/4.19/observability/monitoring/troubleshooting-monitoring-issues.txt +++ b/ocp-product-docs-plaintext/4.19/observability/monitoring/troubleshooting-monitoring-issues.txt @@ -200,7 +200,7 @@ Using attributes that are bound to a limited set of possible values reduces the * Enforce limits on the number of samples that can be scraped across user-defined projects. This requires cluster administrator privileges. * You have access to the cluster as a user with the cluster-admin cluster role. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. In the Red Hat OpenShift Container Platform web console, go to Observe -> Metrics. 2. Enter a Prometheus Query Language (PromQL) query in the Expression field. @@ -273,7 +273,7 @@ There are two KubePersistentVolumeFillingUp alerts: To address this issue, you can remove Prometheus time-series database (TSDB) blocks to create more space for the PV. * You have access to the cluster as a user with the cluster-admin cluster role. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. 
List the size of all TSDB blocks, sorted from oldest to newest, by running the following command: diff --git a/ocp-product-docs-plaintext/4.19/registry/configuring_registry_storage/configuring-registry-storage-vsphere.txt b/ocp-product-docs-plaintext/4.19/registry/configuring_registry_storage/configuring-registry-storage-vsphere.txt index 19af3fd5..dba94b13 100644 --- a/ocp-product-docs-plaintext/4.19/registry/configuring_registry_storage/configuring-registry-storage-vsphere.txt +++ b/ocp-product-docs-plaintext/4.19/registry/configuring_registry_storage/configuring-registry-storage-vsphere.txt @@ -56,13 +56,13 @@ testing that was possibly completed against these Red Hat OpenShift Container Pl components. ---- -1. To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. +1. Change the spec.storage.pvc field in the configs.imageregistry/cluster resource. [NOTE] ---- When you use shared storage, review your security settings to prevent outside access. ---- -2. Verify that you do not have a registry pod: +2. Verify that you do not have a registry pod by running the following command: ```terminal $ oc get pod -n openshift-image-registry -l docker-registry=default @@ -79,7 +79,7 @@ No resourses found in openshift-image-registry namespace ---- If you do have a registry pod in your output, you do not need to continue with this procedure. ---- -3. Check the registry configuration: +3. Check the registry configuration by running the following command: ```terminal $ oc edit configs.imageregistry.operator.openshift.io @@ -94,7 +94,7 @@ storage: ``` Leave the claim field blank to allow the automatic creation of an image-registry-storage persistent volume claim (PVC). The PVC is generated based on the default storage class. However, be aware that the default storage class might provide ReadWriteOnce (RWO) volumes, such as a RADOS Block Device (RBD), which can cause issues when you replicate to more than one replica. -4. Check the clusteroperator status: +4. Check the clusteroperator status by running the following command: ```terminal $ oc get clusteroperator image-registry @@ -103,8 +103,8 @@ $ oc get clusteroperator image-registry Example output ```terminal -NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE -image-registry 4.7 True False False 6h50m +NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE +image-registry 4.7 True False False 6h50m ``` @@ -155,7 +155,7 @@ have more than one replica. $ oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' ``` -2. Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. +2. Provision the persistent volume (PV) for the block storage device, and create a persistent volume claim (PVC) for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. 1. 
Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: ```yaml diff --git a/ocp-product-docs-plaintext/4.19/release_notes/addtl-release-notes.txt b/ocp-product-docs-plaintext/4.19/release_notes/addtl-release-notes.txt index c5c6f6d8..3a7f2307 100644 --- a/ocp-product-docs-plaintext/4.19/release_notes/addtl-release-notes.txt +++ b/ocp-product-docs-plaintext/4.19/release_notes/addtl-release-notes.txt @@ -22,6 +22,7 @@ Custom Metrics Autoscaler Operator D:: Red Hat Developer Hub Operator E:: External DNS Operator F:: File Integrity Operator +H:: Hosted control planes K:: Kube Descheduler Operator M:: Migration Toolkit for Containers (MTC) diff --git a/ocp-product-docs-plaintext/4.19/release_notes/ocp-4-19-release-notes.txt b/ocp-product-docs-plaintext/4.19/release_notes/ocp-4-19-release-notes.txt index fdb91db2..cadb704b 100644 --- a/ocp-product-docs-plaintext/4.19/release_notes/ocp-4-19-release-notes.txt +++ b/ocp-product-docs-plaintext/4.19/release_notes/ocp-4-19-release-notes.txt @@ -671,6 +671,10 @@ Previously, the self-signed loopback certificate for the Kubernetes API Server e The readiness probes for the API server have been modified to exclude etcd checks. This prevents client connections from being closed if etcd is temporarily unavailable. This means that client connections persist through brief etcd unavailability and minimizes temporary API server outages. +## Installer automatically removes leftover Cloud Native Storage (CNS) volumes + +The OpenShift installation program now automatically detects and removes leftover persistent storage volumes on VMware vSphere when you delete a cluster. This prevents orphaned volumes from consuming disk space and creating unnecessary alerts in vCenter. + # Deprecated and removed features Some features available in previous releases have been deprecated or removed. @@ -1262,6 +1266,11 @@ $ oc patch networks.operator.openshift.io cluster --type=merge -p \ After you run the command, the CNO collects must-gather logs that you can inspect. (OCPBUGS-52367) +* There is a known issue with Gateway API and Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure private clusters. The load balancer that is provisioned for a gateway is always configured to be external, which can cause errors or unexpected behavior: +* In an AWS private cluster, the load balancer becomes stuck in the pending state and reports the error: Error syncing load balancer: failed to ensure load balancer: could not find any suitable subnets for creating the ELB. +* In GCP and Azure private clusters, the load balancer is provisioned with an external IP address, when it should not have an external IP address. + +There is no supported workaround for this issue. (OCPBUGS-57440) * In the event of a crash, the mlx5_core NIC driver causes an out-of-memory issue and kdump does not save the vmcore file in /var/crash. To save the vmcore file, use the crashkernel setting to reserve 1024 MB of memory for the kdump kernel. (OCPBUGS-54520, RHEL-90663) @@ -1293,6 +1302,43 @@ This section will continue to be updated over time to provide notes on enhanceme For any Red Hat OpenShift Container Platform release, always review the instructions on updating your cluster properly. ---- +## RHSA-2025:12341 - Red Hat OpenShift Container Platform 4.19.7 image release, bug fix, and security update advisory + +Issued: 05 August 2025 + +Red Hat OpenShift Container Platform release 4.19.7, which includes security updates, is now available. 
The list of bug fixes that are included in the update is documented in the RHSA-2025:12341 advisory. The RPM packages that are included in the update are provided by the RHBA-2025:12342 advisory. + +Space precluded documenting all of the container images for this release in the advisory. + +You can view the container images in this release by running the following command: + + +```terminal +$ oc adm release info 4.19.7 --pullspecs +``` + + +### Enhancements + +* The KubeVirt Container Storage Interface (CSI) driver now supports volume expansion. Users can dynamically increase the size of their persistent volumes in their tenant cluster. This capability simplifies storage management, allowing for more flexible and scalable infrastructure. (OCPBUGS-58239) + +### Bug fixes + +* Before this update, a plugin conflict in the console modal occurred due to multiple plugins using the same CreateProjectModal extension point. As a consequence, only one plugin extension was used and the list order could not be changed. With this release, an update to the plugin store resolves extensions in the same order that are defined in the console operator configuration. As a result, anyone with permission to update the operator configuration can set the priority of the plugin. (OCPBUGS-56280) +* Before this update, when you clicked Configure in an AlertmanagerReceiversNotConfigured alert on the Overview page, a runtime error occurred. With this release, improved navigation handling ensures that no runtime errors occur when you click Configure. (OCPBUGS-57105) +* Before this update, the /metrics/usage endpoint was updated to include authentication and Cross-Site Request Forgery (CSRF) protections. Because of this, requests to this endpoint started failing with a "forbidden" error message because the requests lacked the necessary CSRF token in the request cookie. With this release, a CSRF token was added to the /metrics/usage request cookie, which resolved the “forbidden” error message. (OCPBUGS-58331) +* Before this update, when you configured an OpenID Connect (OIDC) provider for a HostedCluster resource with an Open ID cluster that did not specify a client secret, a default secret name was automatically generated. As a consequence, you could not configure OIDC public clients because these clients cannot use client secrets. With this release, a default secret name is not generated when no client secret is provided. As a result, you can configure OIDC public clients. (OCPBUGS-58683) +* Before this update, when a Bare Metal Host (BMH) was marked as Provisioned or ExternallyProvisioned, the system would try to deprovision it or power it off first and the DataImage attached to the BMH would also prevent deletion. This issue blocked or slowed down host removal, creating operational inefficiencies. With this release, if the BMH has the detached annotation status and deletion is requested, the BMH transitions to the deleting state, allowing for direct deletion. (OCPBUGS-59133) +* Before this update, downloads on control plane nodes were inconsistently scheduled because of a mismatch between the node selector for downloads and the console pods. As a consequence, downloads were scheduled on random nodes, which caused potential resource contention and sub-optimal performance. With this release, downloaded workloads consistently schedule on control plane nodes, which improves resource allocation. 
+* Before this update, a cluster upgrade to Red Hat OpenShift Container Platform 4.18 caused inconsistent egress IP allocation due to stale Network Address Translation (NAT) handling. This issue occurred only when you deleted an egress IP pod while the OVN-Kubernetes controller for an egress node was down. As a consequence, duplicate Logical Router Policies and egress IP usage occurred, which caused inconsistent traffic flow and outages. With this release, egress IP allocation cleanup ensures consistent and reliable egress IP allocation in Red Hat OpenShift Container Platform 4.18 clusters. (OCPBUGS-59530)
+* Before this update, if you did not have sufficient privileges when you logged into the console, the get started message occupied excessive space on pages. This issue prevented the complete display of important status messages such as no resources found. As a consequence, truncated versions of the messages were displayed. With this release, the get started message is resized and the page's disable property is removed to use less screen space and to allow scrolling. As a result, you can view complete statuses and information on all pages, and the get started content remains fully accessible through scrolling, which ensures the visibility of new user guidance and important system messages. (OCPBUGS-59639)
+* Before this update, when you cloned a .tar file with zero length, the oc-mirror ran indefinitely due to an empty archive file. As a consequence, no progress occurred when you mirrored a 0-byte .tar file. With this release, 0-byte .tar files are detected and reported as errors, which prevents the oc-mirror from hanging. (OCPBUGS-59779)
+* Before this update, the oc-mirror did not detect Helm Chart images that used an aliased sub-chart. As a consequence, the Helm Chart images were missing after mirroring. With this release, the oc-mirror detects and mirrors Helm Chart images with an aliased sub-chart. (OCPBUGS-59799)
+
+### Updating
+
+To update a Red Hat OpenShift Container Platform 4.19 cluster to this latest release, see Updating a cluster using the CLI.
+
 ## RHSA-2025:11673 - Red Hat OpenShift Container Platform 4.19.6 image release, bug fix, and security update advisory
 
 Issued: 29 July 2025
diff --git a/ocp-product-docs-plaintext/4.19/security/cert_manager_operator/cert-manager-customizing-api-fields.txt b/ocp-product-docs-plaintext/4.19/security/cert_manager_operator/cert-manager-customizing-api-fields.txt
index 7f2cffda..06da814e 100644
--- a/ocp-product-docs-plaintext/4.19/security/cert_manager_operator/cert-manager-customizing-api-fields.txt
+++ b/ocp-product-docs-plaintext/4.19/security/cert_manager_operator/cert-manager-customizing-api-fields.txt
@@ -1,13 +1,111 @@
-# Customizing cert-manager Operator API fields
+# Customizing the cert-manager Operator by using the CertManager custom resource
 
-You can customize the cert-manager Operator for Red Hat OpenShift API fields by overriding environment variables and arguments.
+After installing the cert-manager Operator for Red Hat OpenShift, you can perform the following actions by configuring the CertManager custom resource (CR):
+* Configure the arguments to modify the behavior of the cert-manager components, such as the cert-manager controller, CA injector, and Webhook.
+* Set environment variables for the controller pod.
+* Define resource requests and limits to manage CPU and memory usage.
+* Configure scheduling rules to control where pods run in your cluster. + +```yaml +apiVersion: operator.openshift.io/v1alpha1 +kind: CertManager +metadata: + name: cluster +spec: + controllerConfig: + overrideArgs: + - "--dns01-recursive-nameservers=8.8.8.8:53,1.1.1.1:53" + overrideEnv: + - name: HTTP_PROXY + value: http://proxy.example.com:8080 + overrideResources: + limits: + cpu: "200m" + memory: "512Mi" + requests: + cpu: "100m" + memory: "256Mi" + overrideScheduling: + nodeSelector: + custom: "label" + tolerations: + - key: "key1" + operator: "Equal" + value: "value1" + effect: "NoSchedule" + + webhookConfig: + overrideArgs: +#... + overrideResources: +#... + overrideScheduling: +#... + + cainjectorConfig: + overrideArgs: +#... + overrideResources: +#... + overrideScheduling: +#... +``` + [WARNING] ---- To override unsupported arguments, you can add spec.unsupportedConfigOverrides section in the CertManager resource, but using spec.unsupportedConfigOverrides is unsupported. ---- +# Explanation of fields in the CertManager custom resource + +You can use the CertManager custom resource (CR) to configure the following core components of the cert-manager Operator for Red Hat OpenShift: + +* Cert-manager controller: You can use the spec.controllerConfig field to configure the cert‑manager controller pod. +* Webhook: You can use the spec.webhookConfig field to configure the webhook pod, which handles validation and mutation requests. +* CA injector: You can use the spec.cainjectorConfig field to configure the CA injector pod. + +## Common configurable fields in the CertManager CR for the cert-manager components + +The following table lists the common fields that you can configure in the spec.controllerConfig, spec.webhookConfig, and spec.cainjectorConfig sections in the CertManager CR. + + + +## Overridable arguments for the cert-manager components + +You can configure the overridable arguments for the cert-manager components in the spec.controllerConfig, spec.webhookConfig, and spec.cainjectorConfig sections in the CertManager CR. + +The following table describes the overridable arguments for the cert-manager components: + + + +## Overridable environment variables for the cert-manager controller + +You can configure the overridable environment variables for the cert-manager controller in the spec.controllerConfig.overrideEnv field in the CertManager CR. + +The following table describes the overridable environment variables for the cert-manager controller: + + + +## Overridable resource parameters for the cert-manager components + +You can configure the CPU and memory limits for the cert-manager components in the spec.controllerConfig, spec.webhookConfig, and spec.cainjectorConfig sections in the CertManager CR. + +The following table describes the overridable resource parameters for the cert-manager components: + + + +## Overridable scheduling parameters for the cert-manager components + +You can configure the pod scheduling constrainsts for the cert-manager components in the spec.controllerConfig, spec.webhookConfig field, and spec.cainjectorConfig sections in the CertManager CR. 
+ +The following table describes the pod scheduling parameters for the cert-manager components: + + + +* Deleting a TLS secret automatically upon Certificate removal + # Customizing cert-manager by overriding environment variables from the cert-manager Operator API You can override the supported environment variables for the cert-manager Operator for Red Hat OpenShift by adding a spec.controllerConfig section in the CertManager resource. @@ -42,6 +140,11 @@ spec: Replace with the proxy server URL. Replace with a comma separated list of domains. These domains are ignored by the proxy server. + +[NOTE] +---- +For more information about the overridable environment variables, see "Overridable environment variables for the cert-manager components" in "Explanation of fields in the CertManager custom resource". +---- 3. Save your changes and quit the text editor to apply your changes. 1. Verify that the cert-manager controller pod is redeployed by running the following command: @@ -77,6 +180,8 @@ Example output ``` +* Explanation of fields in the CertManager custom resource + # Customizing cert-manager by overriding arguments from the cert-manager Operator API You can override the supported arguments for the cert-manager Operator for Red Hat OpenShift by adding a spec.controllerConfig section in the CertManager resource. @@ -102,30 +207,24 @@ spec: controllerConfig: overrideArgs: - '--dns01-recursive-nameservers=' 1 - - '--dns01-recursive-nameservers-only' 2 - - '--acme-http01-solver-nameservers=:' 3 - - '--v=' 4 - - '--metrics-listen-address=:' 5 - - '--issuer-ambient-credentials' 6 + - '--dns01-recursive-nameservers-only' + - '--acme-http01-solver-nameservers=:' + - '--v=' + - '--metrics-listen-address=:' + - '--issuer-ambient-credentials' + - '--acme-http01-solver-resource-limits-cpu=' + - '--acme-http01-solver-resource-limits-memory=' + - '--acme-http01-solver-resource-request-cpu=' + - '--acme-http01-solver-resource-request-memory=' webhookConfig: overrideArgs: - - '--v=4' 4 + - '--v=' cainjectorConfig: overrideArgs: - - '--v=2' 4 + - '--v=' ``` -Provide a comma-separated list of nameservers to query for the DNS-01 self check. The nameservers can be specified either as :, for example, 1.1.1.1:53, or use DNS over HTTPS (DoH), for example, https://1.1.1.1/dns-query. -Specify to only use recursive nameservers instead of checking the authoritative nameservers associated with that domain. -Provide a comma-separated list of : nameservers to query for the Automated Certificate Management Environment (ACME) HTTP01 self check. For example, --acme-http01-solver-nameservers=1.1.1.1:53. -Specify to set the log level verbosity to determine the verbosity of log messages. -Specify the host and port for the metrics endpoint. The default value is --metrics-listen-address=0.0.0.0:9402. -You must use the --issuer-ambient-credentials argument when configuring an ACME Issuer to solve DNS-01 challenges by using ambient credentials. - -[NOTE] ----- -DNS over HTTPS (DoH) is supported starting only from cert-manager Operator for Red Hat OpenShift version 1.13.0 and later. ----- +For information about the overridable aruguments, see "Overridable arguments for the cert-manager components" in "Explanation of fields in the CertManager custom resource". 3. Save your changes and quit the text editor to apply your changes. 
* Verify that arguments are updated for cert-manager pods by running the following command: @@ -176,6 +275,8 @@ Example output ``` +* Explanation of fields in the CertManager custom resource + # Deleting a TLS secret automatically upon Certificate removal You can enable the --enable-certificate-owner-ref flag for the cert-manager Operator for Red Hat OpenShift by adding a spec.controllerConfig section in the CertManager resource. The --enable-certificate-owner-ref flag sets the certificate resource as an owner of the secret where the TLS certificate is stored. @@ -248,7 +349,7 @@ Example output # Overriding CPU and memory limits for the cert-manager components -After installing the cert-manager Operator for Red Hat OpenShift, you can configure the CPU and memory limits from the cert-manager Operator for Red Hat OpenShift API for the cert-manager components such as cert-manager controller, CA injector, and Webhook. +After installing the cert-manager Operator for Red Hat OpenShift, you can configure the CPU and memory limits from the cert-manager Operator for Red Hat OpenShift API for the cert-manager components, such as the cert-manager controller, CA injector, and Webhook. * You have access to the Red Hat OpenShift Container Platform cluster as a user with the cluster-admin role. * You have installed version 1.12.0 or later of the cert-manager Operator for Red Hat OpenShift. @@ -316,48 +417,37 @@ Example output The spec.resources field is empty by default. The cert-manager components do not have CPU and memory limits. 3. To configure the CPU and memory limits for the cert-manager controller, CA injector, and Webhook, enter the following command: -```yaml +```terminal $ oc patch certmanager.operator cluster --type=merge -p=" spec: controllerConfig: - overrideResources: - limits: 1 - cpu: 200m 2 - memory: 64Mi 3 - requests: 4 - cpu: 10m 2 - memory: 16Mi 3 + overrideResources: 1 + limits: + cpu: 200m + memory: 64Mi + requests: + cpu: 10m + memory: 16Mi webhookConfig: overrideResources: - limits: 5 - cpu: 200m 6 - memory: 64Mi 7 - requests: 8 - cpu: 10m 6 - memory: 16Mi 7 + limits: + cpu: 200m + memory: 64Mi + requests: + cpu: 10m + memory: 16Mi cainjectorConfig: overrideResources: - limits: 9 - cpu: 200m 10 - memory: 64Mi 11 - requests: 12 - cpu: 10m 10 - memory: 16Mi 11 + limits: + cpu: 200m + memory: 64Mi + requests: + cpu: 10m + memory: 16Mi " ``` -Defines the maximum amount of CPU and memory that a single container in a cert-manager controller pod can request. -You can specify the CPU limit that a cert-manager controller pod can request. The default value is 10m. -You can specify the memory limit that a cert-manager controller pod can request. The default value is 32Mi. -Defines the amount of CPU and memory set by scheduler for the cert-manager controller pod. -Defines the maximum amount of CPU and memory that a single container in a CA injector pod can request. -You can specify the CPU limit that a CA injector pod can request. The default value is 10m. -You can specify the memory limit that a CA injector pod can request. The default value is 32Mi. -Defines the amount of CPU and memory set by scheduler for the CA injector pod. -Defines the maximum amount of CPU and memory Defines the maximum amount of CPU and memory that a single container in a Webhook pod can request. -You can specify the CPU limit that a Webhook pod can request. The default value is 10m. -You can specify the memory limit that a Webhook pod can request. The default value is 32Mi. 
-Defines the amount of CPU and memory set by scheduler for the Webhook pod. +For information about the overridable resource parameters, see "Overridable resource parameters for the cert-manager components" in "Explanation of fields in the CertManager custom resource". Example output ```terminal @@ -429,9 +519,11 @@ Example output ``` +* Explanation of fields in the CertManager custom resource + # Configuring scheduling overrides for cert-manager components -You can configure the pod scheduling from the cert-manager Operator for Red Hat OpenShift API for the cert-manager Operator for Red Hat OpenShift components such as cert-manager controller, CA injector, and Webhook. +You can configure the pod scheduling from the cert-manager Operator for Red Hat OpenShift API for the cert-manager Operator for Red Hat OpenShift components, such as the cert-manager controller, CA injector, and Webhook. * You have access to the Red Hat OpenShift Container Platform cluster as a user with the cluster-admin role. * You have installed version 1.15.0 or later of the cert-manager Operator for Red Hat OpenShift. @@ -442,37 +534,33 @@ You can configure the pod scheduling from the cert-manager Operator for Red Hat $ oc patch certmanager.operator cluster --type=merge -p=" spec: controllerConfig: - overrideScheduling: + overrideScheduling: 1 nodeSelector: - node-role.kubernetes.io/control-plane: '' 1 + node-role.kubernetes.io/control-plane: '' tolerations: - key: node-role.kubernetes.io/master operator: Exists - effect: NoSchedule 2 + effect: NoSchedule webhookConfig: overrideScheduling: nodeSelector: - node-role.kubernetes.io/control-plane: '' 3 + node-role.kubernetes.io/control-plane: '' tolerations: - key: node-role.kubernetes.io/master operator: Exists - effect: NoSchedule 4 + effect: NoSchedule cainjectorConfig: overrideScheduling: nodeSelector: - node-role.kubernetes.io/control-plane: '' 5 + node-role.kubernetes.io/control-plane: '' tolerations: - key: node-role.kubernetes.io/master operator: Exists - effect: NoSchedule" 6 + effect: NoSchedule" +" ``` -Defines the nodeSelector for the cert-manager controller deployment. -Defines the tolerations for the cert-manager controller deployment. -Defines the nodeSelector for the cert-manager webhook deployment. -Defines the tolerations for the cert-manager webhook deployment. -Defines the nodeSelector for the cert-manager cainjector deployment. -Defines the tolerations for the cert-manager cainjector deployment. +For information about the overridable scheduling parameters, see "Overridable scheduling parameters for the cert-manager components" in "Explanation of fields in the CertManager custom resource". 1. Verify pod scheduling settings for cert-manager pods: 1. 
Check the deployments in the cert-manager namespace to confirm they have the correct nodeSelector and tolerations by running the following command: @@ -517,3 +605,6 @@ cert-manager-webhook ```terminal $ oc get events -n cert-manager --field-selector reason=Scheduled ``` + + +* Explanation of fields in the CertManager custom resource \ No newline at end of file diff --git a/ocp-product-docs-plaintext/4.19/security/cert_manager_operator/cert-manager-operator-release-notes.txt b/ocp-product-docs-plaintext/4.19/security/cert_manager_operator/cert-manager-operator-release-notes.txt index 767fbc30..6f373fcd 100644 --- a/ocp-product-docs-plaintext/4.19/security/cert_manager_operator/cert-manager-operator-release-notes.txt +++ b/ocp-product-docs-plaintext/4.19/security/cert_manager_operator/cert-manager-operator-release-notes.txt @@ -5,6 +5,44 @@ The cert-manager Operator for Red Hat OpenShift is a cluster-wide service that p These release notes track the development of cert-manager Operator for Red Hat OpenShift. For more information, see About the cert-manager Operator for Red Hat OpenShift. +# cert-manager Operator for Red Hat OpenShift 1.17.0 + +Issued: 2025-08-06 + +The following advisories are available for the cert-manager Operator for Red Hat OpenShift 1.17.0: + +* RHBA-2025:13182 +* RHBA-2025:13134 +* RHBA-2025:13133 + +Version 1.17.0 of the cert-manager Operator for Red Hat OpenShift is based on the upstream cert-manager version v1.17.4. For more information, see the cert-manager project release notes for v1.17.4. + +## Bug fixes + +* Previously, the status field in the IstioCSR custom resource (CR) was not set to Ready even after the successful deployment of Istio‑CSR. With this fix, the status field is correctly set to Ready, ensuring consistent and reliable status reporting. (CM-546) + +## New features and enhancements + +Support to configure resource requests and limits for ACME HTTP‑01 solver pods + +With this release, the cert-manager Operator for Red Hat OpenShift supports configuring CPU and memory resource requests and limits for ACME HTTP‑01 solver pods. You can configure the CPU and memory resource requests and limits by using the following overridable arguments in the CertManager custom resource (CR): + +* --acme-http01-solver-resource-limits-cpu +* --acme-http01-solver-resource-limits-memory +* --acme-http01-solver-resource-request-cpu +* --acme-http01-solver-resource-request-memory + +For more information, see Overridable arguments for the cert‑manager components. + +## CVEs + +* CVE-2025-22866 +* CVE-2025-22868 +* CVE-2025-22872 +* CVE-2025-22870 +* CVE-2025-27144 +* CVE-2025-22871 + # cert-manager Operator for Red Hat OpenShift 1.16.1 Issued: 2025-07-10 diff --git a/ocp-product-docs-plaintext/4.19/service_mesh/v2x/servicemesh-release-notes.txt b/ocp-product-docs-plaintext/4.19/service_mesh/v2x/servicemesh-release-notes.txt index b5a46177..b2f115c8 100644 --- a/ocp-product-docs-plaintext/4.19/service_mesh/v2x/servicemesh-release-notes.txt +++ b/ocp-product-docs-plaintext/4.19/service_mesh/v2x/servicemesh-release-notes.txt @@ -2,14 +2,32 @@ +# Red Hat OpenShift Service Mesh version 2.6.9 + +This release of Red Hat OpenShift Service Mesh updates the Red Hat OpenShift Service Mesh Operator version to 2.6.9, and includes the following ServiceMeshControlPlane resource version updates: 2.6.9 and 2.5.12. + +This release addresses Common Vulnerabilities and Exposures (CVEs) and is supported on Red Hat OpenShift Container Platform 4.14 and later. 
+ +You can use the most current version of the Kiali Operator provided by Red Hat with all supported versions of Red Hat OpenShift Service Mesh. The version of Service Mesh automatically ensures a compatible version of Kiali. + +## Component updates + + + +# Red Hat OpenShift Service Mesh version 2.5.12 + +This release of Red Hat OpenShift Service Mesh is included with the Red Hat OpenShift Service Mesh Operator 2.6.9 and is supported on Red Hat OpenShift Container Platform 4.14 and later. This release addresses Common Vulnerabilities and Exposures (CVEs). + +## Component updates + + + # Red Hat OpenShift Service Mesh version 2.6.8 This release of Red Hat OpenShift Service Mesh updates the Red Hat OpenShift Service Mesh Operator version to 2.6.8, and includes the following ServiceMeshControlPlane resource version updates: 2.6.8 and 2.5.11. This release addresses Common Vulnerabilities and Exposures (CVEs) and is supported on Red Hat OpenShift Container Platform 4.14 and later. -The most current version of the Red Hat OpenShift Service Mesh Operator can be used with all supported versions of Service Mesh. The version of Service Mesh is specified using the ServiceMeshControlPlane. - You can use the most current version of the Kiali Operator provided by Red Hat with all supported versions of Red Hat OpenShift Service Mesh. The version of Service Mesh automatically ensures a compatible version of Kiali. ## Component updates diff --git a/ocp-product-docs-plaintext/4.19/support/troubleshooting/investigating-monitoring-issues.txt b/ocp-product-docs-plaintext/4.19/support/troubleshooting/investigating-monitoring-issues.txt index 54d01446..e89ca9ad 100644 --- a/ocp-product-docs-plaintext/4.19/support/troubleshooting/investigating-monitoring-issues.txt +++ b/ocp-product-docs-plaintext/4.19/support/troubleshooting/investigating-monitoring-issues.txt @@ -204,7 +204,7 @@ Using attributes that are bound to a limited set of possible values reduces the * Enforce limits on the number of samples that can be scraped across user-defined projects. This requires cluster administrator privileges. * You have access to the cluster as a user with the cluster-admin cluster role. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. In the Red Hat OpenShift Container Platform web console, go to Observe -> Metrics. 2. Enter a Prometheus Query Language (PromQL) query in the Expression field. @@ -275,7 +275,7 @@ There are two KubePersistentVolumeFillingUp alerts: To address this issue, you can remove Prometheus time-series database (TSDB) blocks to create more space for the PV. * You have access to the cluster as a user with the cluster-admin cluster role. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. 
List the size of all TSDB blocks, sorted from oldest to newest, by running the following command: diff --git a/ocp-product-docs-plaintext/4.19/support/troubleshooting/troubleshooting-installations.txt b/ocp-product-docs-plaintext/4.19/support/troubleshooting/troubleshooting-installations.txt index 799ed7a1..b8d436ec 100644 --- a/ocp-product-docs-plaintext/4.19/support/troubleshooting/troubleshooting-installations.txt +++ b/ocp-product-docs-plaintext/4.19/support/troubleshooting/troubleshooting-installations.txt @@ -110,7 +110,7 @@ $ ./openshift-install create ignition-configs --dir=./install_dir You can monitor high-level installation, bootstrap, and control plane logs as an Red Hat OpenShift Container Platform installation progresses. This provides greater visibility into how an installation progresses and helps identify the stage at which an installation failure occurs. * You have access to the cluster as a user with the cluster-admin cluster role. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). * You have SSH access to your hosts. * You have the fully qualified domain names of the bootstrap and control plane nodes. diff --git a/ocp-product-docs-plaintext/4.19/virt/about_virt/about-virt.txt b/ocp-product-docs-plaintext/4.19/virt/about_virt/about-virt.txt index 72791e96..848a5300 100644 --- a/ocp-product-docs-plaintext/4.19/virt/about_virt/about-virt.txt +++ b/ocp-product-docs-plaintext/4.19/virt/about_virt/about-virt.txt @@ -37,6 +37,8 @@ You can use OpenShift Virtualization with OVN-Kubernetes or one of the other cer You can check your OpenShift Virtualization cluster for compliance issues by installing the Compliance Operator and running a scan with the ocp4-moderate and ocp4-moderate-node profiles. The Compliance Operator uses OpenSCAP, a NIST-certified tool, to scan and enforce security policies. +For information about partnering with Independent Software Vendors (ISVs) and Services partners for specialized storage, networking, backup, and additional functionality, see the Red Hat Ecosystem Catalog. + # Comparing OpenShift Virtualization to VMware vSphere If you are familiar with VMware vSphere, the following table lists OpenShift Virtualization components that you can use to accomplish similar tasks. However, because OpenShift Virtualization is conceptually different from vSphere, and much of its functionality comes from the underlying Red Hat OpenShift Container Platform, OpenShift Virtualization does not have direct alternatives for all vSphere concepts or components. @@ -53,6 +55,8 @@ OpenShift Virtualization 4.18 is supported for use on Red Hat OpenShift Containe If you use the storage API with known storage providers, the volume and access modes are selected automatically. However, if you use a storage class that does not have a storage profile, you must configure the volume and access mode. +For a list of known storage providers for OpenShift Virtualization, see the Red Hat Ecosystem Catalog. + For best results, use the ReadWriteMany (RWX) access mode and the Block volume mode. This is important for the following reasons: * ReadWriteMany (RWX) access mode is required for live migration. 
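The exact manifest depends on your environment, but the following DataVolume is a minimal sketch of setting the access and volume modes explicitly when the storage class has no storage profile. The storage class name, image URL, and disk size are placeholder values, not recommendations:

```yaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: example-vm-rootdisk            # placeholder name
spec:
  source:
    http:
      url: "http://example.com/images/rhel9.qcow2"   # placeholder image URL
  pvc:
    storageClassName: example-no-profile-sc          # placeholder storage class without a storage profile
    accessModes:
    - ReadWriteMany                    # RWX is required for live migration
    volumeMode: Block                  # Block volume mode is recommended for VM disks
    resources:
      requests:
        storage: 30Gi                  # placeholder size
```

If the storage class has a storage profile, you can usually omit the access and volume modes because they are selected automatically, as described above.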
diff --git a/ocp-product-docs-plaintext/4.19/virt/install/preparing-cluster-for-virt.txt b/ocp-product-docs-plaintext/4.19/virt/install/preparing-cluster-for-virt.txt index f29c1b59..325e4885 100644 --- a/ocp-product-docs-plaintext/4.19/virt/install/preparing-cluster-for-virt.txt +++ b/ocp-product-docs-plaintext/4.19/virt/install/preparing-cluster-for-virt.txt @@ -179,6 +179,8 @@ To mark a storage class as the default for virtualization workloads, set the ann If you use the storage API with known storage providers, the volume and access modes are selected automatically. However, if you use a storage class that does not have a storage profile, you must configure the volume and access mode. +For a list of known storage providers for OpenShift Virtualization, see the Red Hat Ecosystem Catalog. + For best results, use the ReadWriteMany (RWX) access mode and the Block volume mode. This is important for the following reasons: * ReadWriteMany (RWX) access mode is required for live migration. diff --git a/ocp-product-docs-plaintext/4.19/virt/monitoring/virt-prometheus-queries.txt b/ocp-product-docs-plaintext/4.19/virt/monitoring/virt-prometheus-queries.txt index 4521bfe1..553e8adb 100644 --- a/ocp-product-docs-plaintext/4.19/virt/monitoring/virt-prometheus-queries.txt +++ b/ocp-product-docs-plaintext/4.19/virt/monitoring/virt-prometheus-queries.txt @@ -19,7 +19,7 @@ or as a user with view permissions for all projects, you can access metrics for The Metrics UI includes predefined queries, for example, CPU, memory, bandwidth, or network packet for all projects. You can also run custom Prometheus Query Language (PromQL) queries. * You have access to the cluster as a user with the cluster-admin cluster role or with view permissions for all projects. -* You have installed the OpenShift CLI (oc). +* You have installed the OpenShift CLI (`oc`). 1. In the Red Hat OpenShift Container Platform web console, click Observe -> Metrics. 2. To add one or more queries, perform any of the following actions: diff --git a/ocp-product-docs-plaintext/4.19/virt/vm_networking/virt-hot-plugging-network-interfaces.txt b/ocp-product-docs-plaintext/4.19/virt/vm_networking/virt-hot-plugging-network-interfaces.txt index c72800b1..8471ff9b 100644 --- a/ocp-product-docs-plaintext/4.19/virt/vm_networking/virt-hot-plugging-network-interfaces.txt +++ b/ocp-product-docs-plaintext/4.19/virt/vm_networking/virt-hot-plugging-network-interfaces.txt @@ -25,22 +25,12 @@ If you restart the VM after hot plugging an interface, that interface becomes pa Hot plug a secondary network interface to a virtual machine (VM) while the VM is running. * A network attachment definition is configured in the same namespace as your VM. +* The VM to which you want to hot plug the network interface is running. * You have installed the virtctl tool. -* You have installed the OpenShift CLI (oc). * You have permission to create and list VirtualMachineInstanceMigration objects. +* You have installed the OpenShift CLI (`oc`). -1. If the VM to which you want to hot plug the network interface is not running, start it by using the following command: - -```terminal -$ virtctl start -n -``` - -2. Use the following command to add the new network interface to the running VM. Editing the VM specification adds the new network interface to the VM and virtual machine instance (VMI) configuration but does not attach it to the running VM. - -```terminal -$ oc edit vm -``` - +1. 
Use your preferred text editor to edit the VirtualMachine manifest, as shown in the following example: Example VM configuration ```yaml @@ -71,7 +61,7 @@ template: Specifies the name of the new network interface. Specifies the name of the network. This must be the same as the name of the new network interface that you defined in the template.spec.domain.devices.interfaces list. Specifies the name of the NetworkAttachmentDefinition object. -3. To attach the network interface to the running VM, live migrate the VM by running the following command: +2. To attach the network interface to the running VM, live migrate the VM by running the following command: ```terminal $ virtctl migrate diff --git a/pdm.lock.cpu b/pdm.lock.cpu index 8b24e4dc..87e9ae81 100644 --- a/pdm.lock.cpu +++ b/pdm.lock.cpu @@ -12,7 +12,7 @@ requires_python = "==3.11.*" [[package]] name = "accelerate" -version = "1.8.1" +version = "1.10.0" requires_python = ">=3.9.0" summary = "Accelerate" groups = ["default"] @@ -26,8 +26,8 @@ dependencies = [ "torch>=2.0.0", ] files = [ - {file = "accelerate-1.8.1-py3-none-any.whl", hash = "sha256:c47b8994498875a2b1286e945bd4d20e476956056c7941d512334f4eb44ff991"}, - {file = "accelerate-1.8.1.tar.gz", hash = "sha256:f60df931671bc4e75077b852990469d4991ce8bd3a58e72375c3c95132034db9"}, + {file = "accelerate-1.10.0-py3-none-any.whl", hash = "sha256:260a72b560e100e839b517a331ec85ed495b3889d12886e79d1913071993c5a3"}, + {file = "accelerate-1.10.0.tar.gz", hash = "sha256:8270568fda9036b5cccdc09703fef47872abccd56eb5f6d53b54ea5fb7581496"}, ] [[package]] @@ -43,7 +43,7 @@ files = [ [[package]] name = "aiohttp" -version = "3.12.14" +version = "3.12.15" requires_python = ">=3.9" summary = "Async http client/server framework (asyncio)" groups = ["default"] @@ -58,24 +58,24 @@ dependencies = [ "yarl<2.0,>=1.17.0", ] files = [ - {file = "aiohttp-3.12.14-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:f4552ff7b18bcec18b60a90c6982049cdb9dac1dba48cf00b97934a06ce2e597"}, - {file = "aiohttp-3.12.14-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:8283f42181ff6ccbcf25acaae4e8ab2ff7e92b3ca4a4ced73b2c12d8cd971393"}, - {file = "aiohttp-3.12.14-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:040afa180ea514495aaff7ad34ec3d27826eaa5d19812730fe9e529b04bb2179"}, - {file = "aiohttp-3.12.14-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b413c12f14c1149f0ffd890f4141a7471ba4b41234fe4fd4a0ff82b1dc299dbb"}, - {file = "aiohttp-3.12.14-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:1d6f607ce2e1a93315414e3d448b831238f1874b9968e1195b06efaa5c87e245"}, - {file = "aiohttp-3.12.14-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:565e70d03e924333004ed101599902bba09ebb14843c8ea39d657f037115201b"}, - {file = "aiohttp-3.12.14-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:4699979560728b168d5ab63c668a093c9570af2c7a78ea24ca5212c6cdc2b641"}, - {file = "aiohttp-3.12.14-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ad5fdf6af93ec6c99bf800eba3af9a43d8bfd66dce920ac905c817ef4a712afe"}, - {file = "aiohttp-3.12.14-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:4ac76627c0b7ee0e80e871bde0d376a057916cb008a8f3ffc889570a838f5cc7"}, - {file = "aiohttp-3.12.14-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:798204af1180885651b77bf03adc903743a86a39c7392c472891649610844635"}, - {file = 
"aiohttp-3.12.14-cp311-cp311-musllinux_1_2_armv7l.whl", hash = "sha256:4f1205f97de92c37dd71cf2d5bcfb65fdaed3c255d246172cce729a8d849b4da"}, - {file = "aiohttp-3.12.14-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:76ae6f1dd041f85065d9df77c6bc9c9703da9b5c018479d20262acc3df97d419"}, - {file = "aiohttp-3.12.14-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:a194ace7bc43ce765338ca2dfb5661489317db216ea7ea700b0332878b392cab"}, - {file = "aiohttp-3.12.14-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:16260e8e03744a6fe3fcb05259eeab8e08342c4c33decf96a9dad9f1187275d0"}, - {file = "aiohttp-3.12.14-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:8c779e5ebbf0e2e15334ea404fcce54009dc069210164a244d2eac8352a44b28"}, - {file = "aiohttp-3.12.14-cp311-cp311-win32.whl", hash = "sha256:a289f50bf1bd5be227376c067927f78079a7bdeccf8daa6a9e65c38bae14324b"}, - {file = "aiohttp-3.12.14-cp311-cp311-win_amd64.whl", hash = "sha256:0b8a69acaf06b17e9c54151a6c956339cf46db4ff72b3ac28516d0f7068f4ced"}, - {file = "aiohttp-3.12.14.tar.gz", hash = "sha256:6e06e120e34d93100de448fd941522e11dafa78ef1a893c179901b7d66aa29f2"}, + {file = "aiohttp-3.12.15-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:d3ce17ce0220383a0f9ea07175eeaa6aa13ae5a41f30bc61d84df17f0e9b1117"}, + {file = "aiohttp-3.12.15-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:010cc9bbd06db80fe234d9003f67e97a10fe003bfbedb40da7d71c1008eda0fe"}, + {file = "aiohttp-3.12.15-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:3f9d7c55b41ed687b9d7165b17672340187f87a773c98236c987f08c858145a9"}, + {file = "aiohttp-3.12.15-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bc4fbc61bb3548d3b482f9ac7ddd0f18c67e4225aaa4e8552b9f1ac7e6bda9e5"}, + {file = "aiohttp-3.12.15-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:7fbc8a7c410bb3ad5d595bb7118147dfbb6449d862cc1125cf8867cb337e8728"}, + {file = "aiohttp-3.12.15-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:74dad41b3458dbb0511e760fb355bb0b6689e0630de8a22b1b62a98777136e16"}, + {file = "aiohttp-3.12.15-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:3b6f0af863cf17e6222b1735a756d664159e58855da99cfe965134a3ff63b0b0"}, + {file = "aiohttp-3.12.15-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b5b7fe4972d48a4da367043b8e023fb70a04d1490aa7d68800e465d1b97e493b"}, + {file = "aiohttp-3.12.15-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6443cca89553b7a5485331bc9bedb2342b08d073fa10b8c7d1c60579c4a7b9bd"}, + {file = "aiohttp-3.12.15-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:6c5f40ec615e5264f44b4282ee27628cea221fcad52f27405b80abb346d9f3f8"}, + {file = "aiohttp-3.12.15-cp311-cp311-musllinux_1_2_armv7l.whl", hash = "sha256:2abbb216a1d3a2fe86dbd2edce20cdc5e9ad0be6378455b05ec7f77361b3ab50"}, + {file = "aiohttp-3.12.15-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:db71ce547012a5420a39c1b744d485cfb823564d01d5d20805977f5ea1345676"}, + {file = "aiohttp-3.12.15-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:ced339d7c9b5030abad5854aa5413a77565e5b6e6248ff927d3e174baf3badf7"}, + {file = "aiohttp-3.12.15-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:7c7dd29c7b5bda137464dc9bfc738d7ceea46ff70309859ffde8c022e9b08ba7"}, + {file = "aiohttp-3.12.15-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:421da6fd326460517873274875c6c5a18ff225b40da2616083c5a34a7570b685"}, + {file = 
"aiohttp-3.12.15-cp311-cp311-win32.whl", hash = "sha256:4420cf9d179ec8dfe4be10e7d0fe47d6d606485512ea2265b0d8c5113372771b"}, + {file = "aiohttp-3.12.15-cp311-cp311-win_amd64.whl", hash = "sha256:edd533a07da85baa4b423ee8839e3e91681c7bfa19b04260a469ee94b778bf6d"}, + {file = "aiohttp-3.12.15.tar.gz", hash = "sha256:4fc61385e9c98d72fcdf47e6dd81833f47b2f77c114c29cd64a361be57a763a2"}, ] [[package]] @@ -123,9 +123,9 @@ files = [ [[package]] name = "anyio" -version = "4.9.0" +version = "4.10.0" requires_python = ">=3.9" -summary = "High level compatibility layer for multiple asynchronous event loop implementations" +summary = "High-level concurrency and networking framework on top of asyncio or Trio" groups = ["default"] dependencies = [ "exceptiongroup>=1.0.2; python_version < \"3.11\"", @@ -134,8 +134,8 @@ dependencies = [ "typing-extensions>=4.5; python_version < \"3.13\"", ] files = [ - {file = "anyio-4.9.0-py3-none-any.whl", hash = "sha256:9f76d541cad6e36af7beb62e978876f3b41e3e04f2c1fbf0884604c0a9c4d93c"}, - {file = "anyio-4.9.0.tar.gz", hash = "sha256:673c0c244e15788651a4ff38710fea9675823028a6f08a5eda409e0c9840a028"}, + {file = "anyio-4.10.0-py3-none-any.whl", hash = "sha256:60e474ac86736bbfd6f210f7a61218939c318f43f9972497381f1c5e930ed3d1"}, + {file = "anyio-4.10.0.tar.gz", hash = "sha256:3f3fae35c96039744587aa5b8371e7e8e603c0702999535961dd336026973ba6"}, ] [[package]] @@ -151,7 +151,7 @@ files = [ [[package]] name = "banks" -version = "2.1.3" +version = "2.2.0" requires_python = ">=3.9" summary = "A prompt programming language" groups = ["default"] @@ -164,8 +164,8 @@ dependencies = [ "pydantic", ] files = [ - {file = "banks-2.1.3-py3-none-any.whl", hash = "sha256:9e1217dc977e6dd1ce42c5ff48e9bcaf238d788c81b42deb6a555615ffcffbab"}, - {file = "banks-2.1.3.tar.gz", hash = "sha256:c0dd2cb0c5487274a513a552827e6a8ddbd0ab1a1b967f177e71a6e4748a3ed2"}, + {file = "banks-2.2.0-py3-none-any.whl", hash = "sha256:963cd5c85a587b122abde4f4064078def35c50c688c1b9d36f43c92503854e7d"}, + {file = "banks-2.2.0.tar.gz", hash = "sha256:d1446280ce6e00301e3e952dd754fd8cee23ff277d29ed160994a84d0d7ffe62"}, ] [[package]] @@ -209,37 +209,35 @@ files = [ [[package]] name = "certifi" -version = "2025.7.9" +version = "2025.8.3" requires_python = ">=3.7" summary = "Python package for providing Mozilla's CA Bundle." groups = ["default"] files = [ - {file = "certifi-2025.7.9-py3-none-any.whl", hash = "sha256:d842783a14f8fdd646895ac26f719a061408834473cfc10203f6a575beb15d39"}, - {file = "certifi-2025.7.9.tar.gz", hash = "sha256:c1d2ec05395148ee10cf672ffc28cd37ea0ab0d99f9cc74c43e588cbd111b079"}, + {file = "certifi-2025.8.3-py3-none-any.whl", hash = "sha256:f6c12493cfb1b06ba2ff328595af9350c65d6644968e5d3a2ffd78699af217a5"}, + {file = "certifi-2025.8.3.tar.gz", hash = "sha256:e564105f78ded564e3ae7c923924435e1daa7463faeab5bb932bc53ffae63407"}, ] [[package]] name = "charset-normalizer" -version = "3.4.2" +version = "3.4.3" requires_python = ">=3.7" summary = "The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet." 
groups = ["default"] files = [ - {file = "charset_normalizer-3.4.2-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:be1e352acbe3c78727a16a455126d9ff83ea2dfdcbc83148d2982305a04714c2"}, - {file = "charset_normalizer-3.4.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:aa88ca0b1932e93f2d961bf3addbb2db902198dca337d88c89e1559e066e7645"}, - {file = "charset_normalizer-3.4.2-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:d524ba3f1581b35c03cb42beebab4a13e6cdad7b36246bd22541fa585a56cccd"}, - {file = "charset_normalizer-3.4.2-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:28a1005facc94196e1fb3e82a3d442a9d9110b8434fc1ded7a24a2983c9888d8"}, - {file = "charset_normalizer-3.4.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:fdb20a30fe1175ecabed17cbf7812f7b804b8a315a25f24678bcdf120a90077f"}, - {file = "charset_normalizer-3.4.2-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0f5d9ed7f254402c9e7d35d2f5972c9bbea9040e99cd2861bd77dc68263277c7"}, - {file = "charset_normalizer-3.4.2-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:efd387a49825780ff861998cd959767800d54f8308936b21025326de4b5a42b9"}, - {file = "charset_normalizer-3.4.2-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:f0aa37f3c979cf2546b73e8222bbfa3dc07a641585340179d768068e3455e544"}, - {file = "charset_normalizer-3.4.2-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:e70e990b2137b29dc5564715de1e12701815dacc1d056308e2b17e9095372a82"}, - {file = "charset_normalizer-3.4.2-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:0c8c57f84ccfc871a48a47321cfa49ae1df56cd1d965a09abe84066f6853b9c0"}, - {file = "charset_normalizer-3.4.2-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:6b66f92b17849b85cad91259efc341dce9c1af48e2173bf38a85c6329f1033e5"}, - {file = "charset_normalizer-3.4.2-cp311-cp311-win32.whl", hash = "sha256:daac4765328a919a805fa5e2720f3e94767abd632ae410a9062dff5412bae65a"}, - {file = "charset_normalizer-3.4.2-cp311-cp311-win_amd64.whl", hash = "sha256:e53efc7c7cee4c1e70661e2e112ca46a575f90ed9ae3fef200f2a25e954f4b28"}, - {file = "charset_normalizer-3.4.2-py3-none-any.whl", hash = "sha256:7f56930ab0abd1c45cd15be65cc741c28b1c9a34876ce8c17a2fa107810c0af0"}, - {file = "charset_normalizer-3.4.2.tar.gz", hash = "sha256:5baececa9ecba31eff645232d59845c07aa030f0c81ee70184a90d35099a0e63"}, + {file = "charset_normalizer-3.4.3-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:b256ee2e749283ef3ddcff51a675ff43798d92d746d1a6e4631bf8c707d22d0b"}, + {file = "charset_normalizer-3.4.3-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:13faeacfe61784e2559e690fc53fa4c5ae97c6fcedb8eb6fb8d0a15b475d2c64"}, + {file = "charset_normalizer-3.4.3-cp311-cp311-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:00237675befef519d9af72169d8604a067d92755e84fe76492fef5441db05b91"}, + {file = "charset_normalizer-3.4.3-cp311-cp311-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:585f3b2a80fbd26b048a0be90c5aae8f06605d3c92615911c3a2b03a8a3b796f"}, + {file = "charset_normalizer-3.4.3-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:0e78314bdc32fa80696f72fa16dc61168fda4d6a0c014e0380f9d02f0e5d8a07"}, + {file = "charset_normalizer-3.4.3-cp311-cp311-musllinux_1_2_aarch64.whl", hash = 
"sha256:96b2b3d1a83ad55310de8c7b4a2d04d9277d5591f40761274856635acc5fcb30"}, + {file = "charset_normalizer-3.4.3-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:939578d9d8fd4299220161fdd76e86c6a251987476f5243e8864a7844476ba14"}, + {file = "charset_normalizer-3.4.3-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:fd10de089bcdcd1be95a2f73dbe6254798ec1bda9f450d5828c96f93e2536b9c"}, + {file = "charset_normalizer-3.4.3-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:1e8ac75d72fa3775e0b7cb7e4629cec13b7514d928d15ef8ea06bca03ef01cae"}, + {file = "charset_normalizer-3.4.3-cp311-cp311-win32.whl", hash = "sha256:6cf8fd4c04756b6b60146d98cd8a77d0cdae0e1ca20329da2ac85eed779b6849"}, + {file = "charset_normalizer-3.4.3-cp311-cp311-win_amd64.whl", hash = "sha256:31a9a6f775f9bcd865d88ee350f0ffb0e25936a7f930ca98995c05abf1faf21c"}, + {file = "charset_normalizer-3.4.3-py3-none-any.whl", hash = "sha256:ce571ab16d890d23b5c278547ba694193a45011ff86a9162a71307ed9f86759a"}, + {file = "charset_normalizer-3.4.3.tar.gz", hash = "sha256:6fce4b8500244f6fcb71465d4a4930d132ba9ab8e71a7859e6a5d59851068d14"}, ] [[package]] @@ -330,7 +328,7 @@ files = [ [[package]] name = "faiss-cpu" -version = "1.11.0" +version = "1.11.0.post1" requires_python = ">=3.9" summary = "A library for efficient similarity search and clustering of dense vectors." groups = ["default"] @@ -339,12 +337,15 @@ dependencies = [ "packaging", ] files = [ - {file = "faiss_cpu-1.11.0-cp311-cp311-macosx_14_0_arm64.whl", hash = "sha256:a90d1c81d0ecf2157e1d2576c482d734d10760652a5b2fcfa269916611e41f1c"}, - {file = "faiss_cpu-1.11.0-cp311-cp311-macosx_14_0_x86_64.whl", hash = "sha256:2c39a388b059fb82cd97fbaa7310c3580ced63bf285be531453bfffbe89ea3dd"}, - {file = "faiss_cpu-1.11.0-cp311-cp311-manylinux_2_28_aarch64.whl", hash = "sha256:a4e3433ffc7f9b8707a7963db04f8676a5756868d325644db2db9d67a618b7a0"}, - {file = "faiss_cpu-1.11.0-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:926645f1b6829623bc88e93bc8ca872504d604718ada3262e505177939aaee0a"}, - {file = "faiss_cpu-1.11.0-cp311-cp311-win_amd64.whl", hash = "sha256:931db6ed2197c03a7fdf833b057c13529afa2cec8a827aa081b7f0543e4e671b"}, - {file = "faiss_cpu-1.11.0.tar.gz", hash = "sha256:44877b896a2b30a61e35ea4970d008e8822545cb340eca4eff223ac7f40a1db9"}, + {file = "faiss_cpu-1.11.0.post1-cp311-cp311-macosx_13_0_x86_64.whl", hash = "sha256:2c8c384e65cc1b118d2903d9f3a27cd35f6c45337696fc0437f71e05f732dbc0"}, + {file = "faiss_cpu-1.11.0.post1-cp311-cp311-macosx_14_0_arm64.whl", hash = "sha256:36af46945274ed14751b788673125a8a4900408e4837a92371b0cad5708619ea"}, + {file = "faiss_cpu-1.11.0.post1-cp311-cp311-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:1b15412b22a05865433aecfdebf7664b9565bd49b600d23a0a27c74a5526893e"}, + {file = "faiss_cpu-1.11.0.post1-cp311-cp311-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:81c169ea74213b2c055b8240befe7e9b42a1f3d97cda5238b3b401035ce1a18b"}, + {file = "faiss_cpu-1.11.0.post1-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:0794eb035c6075e931996cf2b2703fbb3f47c8c34bc2d727819ddc3e5e486a31"}, + {file = "faiss_cpu-1.11.0.post1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:18d2221014813dc9a4236e47f9c4097a71273fbf17c3fe66243e724e2018a67a"}, + {file = "faiss_cpu-1.11.0.post1-cp311-cp311-win_amd64.whl", hash = "sha256:3ce8a8984a7dcc689fd192c69a476ecd0b2611c61f96fe0799ff432aa73ff79c"}, + {file = "faiss_cpu-1.11.0.post1-cp311-cp311-win_arm64.whl", hash = "sha256:8384e05afb7c7968e93b81566759f862e744c0667b175086efb3d8b20949b39f"}, + 
{file = "faiss_cpu-1.11.0.post1.tar.gz", hash = "sha256:06b1ea9ddec9e4d9a41c8ef7478d493b08d770e9a89475056e963081eed757d1"}, ] [[package]] @@ -398,37 +399,37 @@ files = [ [[package]] name = "fsspec" -version = "2025.5.1" +version = "2025.7.0" requires_python = ">=3.9" summary = "File-system specification" groups = ["default", "cpu"] files = [ - {file = "fsspec-2025.5.1-py3-none-any.whl", hash = "sha256:24d3a2e663d5fc735ab256263c4075f374a174c3410c0b25e5bd1970bceaa462"}, - {file = "fsspec-2025.5.1.tar.gz", hash = "sha256:2e55e47a540b91843b755e83ded97c6e897fa0942b11490113f09e9c443c2475"}, + {file = "fsspec-2025.7.0-py3-none-any.whl", hash = "sha256:8b012e39f63c7d5f10474de957f3ab793b47b45ae7d39f2fb735f8bbe25c0e21"}, + {file = "fsspec-2025.7.0.tar.gz", hash = "sha256:786120687ffa54b8283d942929540d8bc5ccfa820deb555a2b5d0ed2b737bf58"}, ] [[package]] name = "greenlet" -version = "3.2.3" +version = "3.2.4" requires_python = ">=3.9" summary = "Lightweight in-process concurrent programming" groups = ["default"] files = [ - {file = "greenlet-3.2.3-cp311-cp311-macosx_11_0_universal2.whl", hash = "sha256:784ae58bba89fa1fa5733d170d42486580cab9decda3484779f4759345b29822"}, - {file = "greenlet-3.2.3-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:0921ac4ea42a5315d3446120ad48f90c3a6b9bb93dd9b3cf4e4d84a66e42de83"}, - {file = "greenlet-3.2.3-cp311-cp311-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:d2971d93bb99e05f8c2c0c2f4aa9484a18d98c4c3bd3c62b65b7e6ae33dfcfaf"}, - {file = "greenlet-3.2.3-cp311-cp311-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:c667c0bf9d406b77a15c924ef3285e1e05250948001220368e039b6aa5b5034b"}, - {file = "greenlet-3.2.3-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:592c12fb1165be74592f5de0d70f82bc5ba552ac44800d632214b76089945147"}, - {file = "greenlet-3.2.3-cp311-cp311-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:29e184536ba333003540790ba29829ac14bb645514fbd7e32af331e8202a62a5"}, - {file = "greenlet-3.2.3-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:93c0bb79844a367782ec4f429d07589417052e621aa39a5ac1fb99c5aa308edc"}, - {file = "greenlet-3.2.3-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:751261fc5ad7b6705f5f76726567375bb2104a059454e0226e1eef6c756748ba"}, - {file = "greenlet-3.2.3-cp311-cp311-win_amd64.whl", hash = "sha256:83a8761c75312361aa2b5b903b79da97f13f556164a7dd2d5448655425bd4c34"}, - {file = "greenlet-3.2.3.tar.gz", hash = "sha256:8b0dd8ae4c0d6f5e54ee55ba935eeb3d735a9b58a8a1e5b5cbab64e01a39f365"}, + {file = "greenlet-3.2.4-cp311-cp311-macosx_11_0_universal2.whl", hash = "sha256:96378df1de302bc38e99c3a9aa311967b7dc80ced1dcc6f171e99842987882a2"}, + {file = "greenlet-3.2.4-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:1ee8fae0519a337f2329cb78bd7a8e128ec0f881073d43f023c7b8d4831d5246"}, + {file = "greenlet-3.2.4-cp311-cp311-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:94abf90142c2a18151632371140b3dba4dee031633fe614cb592dbb6c9e17bc3"}, + {file = "greenlet-3.2.4-cp311-cp311-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:4d1378601b85e2e5171b99be8d2dc85f594c79967599328f95c1dc1a40f1c633"}, + {file = "greenlet-3.2.4-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:0db5594dce18db94f7d1650d7489909b57afde4c580806b8d9203b6e79cdc079"}, + {file = "greenlet-3.2.4-cp311-cp311-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = 
"sha256:2523e5246274f54fdadbce8494458a2ebdcdbc7b802318466ac5606d3cded1f8"}, + {file = "greenlet-3.2.4-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:1987de92fec508535687fb807a5cea1560f6196285a4cde35c100b8cd632cc52"}, + {file = "greenlet-3.2.4-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:55e9c5affaa6775e2c6b67659f3a71684de4c549b3dd9afca3bc773533d284fa"}, + {file = "greenlet-3.2.4-cp311-cp311-win_amd64.whl", hash = "sha256:9c40adce87eaa9ddb593ccb0fa6a07caf34015a29bf8d344811665b573138db9"}, + {file = "greenlet-3.2.4.tar.gz", hash = "sha256:0dca0d95ff849f9a364385f36ab49f50065d76964944638be9691e1832e9f86d"}, ] [[package]] name = "griffe" -version = "1.7.3" +version = "1.11.0" requires_python = ">=3.9" summary = "Signatures for entire Python programs. Extract the structure, the frame, the skeleton of your project, to generate API documentation or find breaking changes in your API." groups = ["default"] @@ -436,8 +437,8 @@ dependencies = [ "colorama>=0.4", ] files = [ - {file = "griffe-1.7.3-py3-none-any.whl", hash = "sha256:c6b3ee30c2f0f17f30bcdef5068d6ab7a2a4f1b8bf1a3e74b56fffd21e1c5f75"}, - {file = "griffe-1.7.3.tar.gz", hash = "sha256:52ee893c6a3a968b639ace8015bec9d36594961e156e23315c8e8e51401fa50b"}, + {file = "griffe-1.11.0-py3-none-any.whl", hash = "sha256:dc56cc6af8d322807ecdb484b39838c7a51ca750cf21ccccf890500c4d6389d8"}, + {file = "griffe-1.11.0.tar.gz", hash = "sha256:c153b5bc63ca521f059e9451533a67e44a9d06cf9bf1756e4298bda5bd3262e8"}, ] [[package]] @@ -453,20 +454,20 @@ files = [ [[package]] name = "hf-xet" -version = "1.1.5" +version = "1.1.7" requires_python = ">=3.8" summary = "Fast transfer of large files with the Hugging Face Hub." groups = ["default"] marker = "platform_machine == \"x86_64\" or platform_machine == \"amd64\" or platform_machine == \"arm64\" or platform_machine == \"aarch64\"" files = [ - {file = "hf_xet-1.1.5-cp37-abi3-macosx_10_12_x86_64.whl", hash = "sha256:f52c2fa3635b8c37c7764d8796dfa72706cc4eded19d638331161e82b0792e23"}, - {file = "hf_xet-1.1.5-cp37-abi3-macosx_11_0_arm64.whl", hash = "sha256:9fa6e3ee5d61912c4a113e0708eaaef987047616465ac7aa30f7121a48fc1af8"}, - {file = "hf_xet-1.1.5-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:fc874b5c843e642f45fd85cda1ce599e123308ad2901ead23d3510a47ff506d1"}, - {file = "hf_xet-1.1.5-cp37-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:dbba1660e5d810bd0ea77c511a99e9242d920790d0e63c0e4673ed36c4022d18"}, - {file = "hf_xet-1.1.5-cp37-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:ab34c4c3104133c495785d5d8bba3b1efc99de52c02e759cf711a91fd39d3a14"}, - {file = "hf_xet-1.1.5-cp37-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:83088ecea236d5113de478acb2339f92c95b4fb0462acaa30621fac02f5a534a"}, - {file = "hf_xet-1.1.5-cp37-abi3-win_amd64.whl", hash = "sha256:73e167d9807d166596b4b2f0b585c6d5bd84a26dea32843665a8b58f6edba245"}, - {file = "hf_xet-1.1.5.tar.gz", hash = "sha256:69ebbcfd9ec44fdc2af73441619eeb06b94ee34511bbcf57cd423820090f5694"}, + {file = "hf_xet-1.1.7-cp37-abi3-macosx_10_12_x86_64.whl", hash = "sha256:60dae4b44d520819e54e216a2505685248ec0adbdb2dd4848b17aa85a0375cde"}, + {file = "hf_xet-1.1.7-cp37-abi3-macosx_11_0_arm64.whl", hash = "sha256:b109f4c11e01c057fc82004c9e51e6cdfe2cb230637644ade40c599739067b2e"}, + {file = "hf_xet-1.1.7-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6efaaf1a5a9fc3a501d3e71e88a6bfebc69ee3a716d0e713a931c8b8d920038f"}, + {file = "hf_xet-1.1.7-cp37-abi3-manylinux_2_28_aarch64.whl", hash = 
"sha256:751571540f9c1fbad9afcf222a5fb96daf2384bf821317b8bfb0c59d86078513"}, + {file = "hf_xet-1.1.7-cp37-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:18b61bbae92d56ae731b92087c44efcac216071182c603fc535f8e29ec4b09b8"}, + {file = "hf_xet-1.1.7-cp37-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:713f2bff61b252f8523739969f247aa354ad8e6d869b8281e174e2ea1bb8d604"}, + {file = "hf_xet-1.1.7-cp37-abi3-win_amd64.whl", hash = "sha256:2e356da7d284479ae0f1dea3cf5a2f74fdf925d6dca84ac4341930d892c7cb34"}, + {file = "hf_xet-1.1.7.tar.gz", hash = "sha256:20cec8db4561338824a3b5f8c19774055b04a8df7fff0cb1ff2cb1a0c1607b80"}, ] [[package]] @@ -503,14 +504,14 @@ files = [ [[package]] name = "huggingface-hub" -version = "0.33.4" +version = "0.34.4" requires_python = ">=3.8.0" summary = "Client library to download and publish models, datasets and other repos on the huggingface.co hub" groups = ["default"] dependencies = [ "filelock", "fsspec>=2023.5.0", - "hf-xet<2.0.0,>=1.1.2; platform_machine == \"x86_64\" or platform_machine == \"amd64\" or platform_machine == \"arm64\" or platform_machine == \"aarch64\"", + "hf-xet<2.0.0,>=1.1.3; platform_machine == \"x86_64\" or platform_machine == \"amd64\" or platform_machine == \"arm64\" or platform_machine == \"aarch64\"", "packaging>=20.9", "pyyaml>=5.1", "requests", @@ -518,24 +519,24 @@ dependencies = [ "typing-extensions>=3.7.4.3", ] files = [ - {file = "huggingface_hub-0.33.4-py3-none-any.whl", hash = "sha256:09f9f4e7ca62547c70f8b82767eefadd2667f4e116acba2e3e62a5a81815a7bb"}, - {file = "huggingface_hub-0.33.4.tar.gz", hash = "sha256:6af13478deae120e765bfd92adad0ae1aec1ad8c439b46f23058ad5956cbca0a"}, + {file = "huggingface_hub-0.34.4-py3-none-any.whl", hash = "sha256:9b365d781739c93ff90c359844221beef048403f1bc1f1c123c191257c3c890a"}, + {file = "huggingface_hub-0.34.4.tar.gz", hash = "sha256:a4228daa6fb001be3f4f4bdaf9a0db00e1739235702848df00885c9b5742c85c"}, ] [[package]] name = "huggingface-hub" -version = "0.33.4" +version = "0.34.4" extras = ["inference"] requires_python = ">=3.8.0" summary = "Client library to download and publish models, datasets and other repos on the huggingface.co hub" groups = ["default"] dependencies = [ "aiohttp", - "huggingface-hub==0.33.4", + "huggingface-hub==0.34.4", ] files = [ - {file = "huggingface_hub-0.33.4-py3-none-any.whl", hash = "sha256:09f9f4e7ca62547c70f8b82767eefadd2667f4e116acba2e3e62a5a81815a7bb"}, - {file = "huggingface_hub-0.33.4.tar.gz", hash = "sha256:6af13478deae120e765bfd92adad0ae1aec1ad8c439b46f23058ad5956cbca0a"}, + {file = "huggingface_hub-0.34.4-py3-none-any.whl", hash = "sha256:9b365d781739c93ff90c359844221beef048403f1bc1f1c123c191257c3c890a"}, + {file = "huggingface_hub-0.34.4.tar.gz", hash = "sha256:a4228daa6fb001be3f4f4bdaf9a0db00e1739235702848df00885c9b5742c85c"}, ] [[package]] @@ -598,7 +599,7 @@ files = [ [[package]] name = "llama-cloud" -version = "0.1.32" +version = "0.1.35" requires_python = "<4,>=3.8" summary = "" groups = ["default"] @@ -608,98 +609,78 @@ dependencies = [ "pydantic>=1.10", ] files = [ - {file = "llama_cloud-0.1.32-py3-none-any.whl", hash = "sha256:c42b2d5fb24acc8595bcc3626fb84c872909a16ab6d6879a1cb1101b21c238bd"}, - {file = "llama_cloud-0.1.32.tar.gz", hash = "sha256:cea98241127311ea91f191c3c006aa6558f01d16f9539ed93b24d716b888f10e"}, + {file = "llama_cloud-0.1.35-py3-none-any.whl", hash = "sha256:b7abab4423118e6f638d2f326749e7a07c6426543bea6da99b623c715b22af71"}, + {file = "llama_cloud-0.1.35.tar.gz", hash = "sha256:200349d5d57424d7461f304cdb1355a58eea3e6ca1e6b0d75c66b2e937216983"}, 
] [[package]] name = "llama-cloud-services" -version = "0.6.43" +version = "0.6.54" requires_python = "<4.0,>=3.9" summary = "Tailored SDK clients for LlamaCloud services." groups = ["default"] dependencies = [ - "click<9.0.0,>=8.1.7", - "eval-type-backport<0.3.0,>=0.2.0; python_version < \"3.10\"", - "llama-cloud==0.1.32", + "click<9,>=8.1.7", + "eval-type-backport<0.3,>=0.2.0; python_version < \"3.10\"", + "llama-cloud==0.1.35", "llama-index-core>=0.12.0", - "platformdirs<5.0.0,>=4.3.7", + "platformdirs<5,>=4.3.7", "pydantic!=2.10,>=2.8", - "python-dotenv<2.0.0,>=1.0.1", + "python-dotenv<2,>=1.0.1", "tenacity<10.0,>=8.5.0", ] files = [ - {file = "llama_cloud_services-0.6.43-py3-none-any.whl", hash = "sha256:2349195f501ba9151ea3ab384d20cae8b4dc4f335f60bd17607332626bdfa2e4"}, - {file = "llama_cloud_services-0.6.43.tar.gz", hash = "sha256:fa6be33bf54d467cace809efee8c2aeeb9de74ce66708513d37b40d738d3350f"}, + {file = "llama_cloud_services-0.6.54-py3-none-any.whl", hash = "sha256:07f595f7a0ba40c6a1a20543d63024ca7600fe65c4811d1951039977908997be"}, + {file = "llama_cloud_services-0.6.54.tar.gz", hash = "sha256:baf65d9bffb68f9dca98ac6e22908b6675b2038b021e657ead1ffc0e43cbd45d"}, ] [[package]] name = "llama-index" -version = "0.12.48" +version = "0.13.1" requires_python = "<4.0,>=3.9" summary = "Interface between LLMs and your data" groups = ["default"] dependencies = [ - "llama-index-agent-openai<0.5,>=0.4.0", - "llama-index-cli<0.5,>=0.4.2", - "llama-index-core<0.13,>=0.12.48", - "llama-index-embeddings-openai<0.4,>=0.3.0", + "llama-index-cli<0.6,>=0.5.0", + "llama-index-core<0.14,>=0.13.1", + "llama-index-embeddings-openai<0.6,>=0.5.0", "llama-index-indices-managed-llama-cloud>=0.4.0", - "llama-index-llms-openai<0.5,>=0.4.0", - "llama-index-multi-modal-llms-openai<0.6,>=0.5.0", - "llama-index-program-openai<0.4,>=0.3.0", - "llama-index-question-gen-openai<0.4,>=0.3.0", - "llama-index-readers-file<0.5,>=0.4.0", + "llama-index-llms-openai<0.6,>=0.5.0", + "llama-index-readers-file<0.6,>=0.5.0", "llama-index-readers-llama-parse>=0.4.0", "nltk>3.8.1", ] files = [ - {file = "llama_index-0.12.48-py3-none-any.whl", hash = "sha256:93a80de54a5cf86114c252338d7917bb81ffe94afa47f01c41c9ee04c0155db4"}, - {file = "llama_index-0.12.48.tar.gz", hash = "sha256:54b922fd94efde2c21c12be392c381cb4a0531a7ca8e482a7e3d1c6795af2da5"}, -] - -[[package]] -name = "llama-index-agent-openai" -version = "0.4.12" -requires_python = "<4.0,>=3.9" -summary = "llama-index agent openai integration" -groups = ["default"] -dependencies = [ - "llama-index-core<0.13,>=0.12.41", - "llama-index-llms-openai<0.5,>=0.4.0", - "openai>=1.14.0", -] -files = [ - {file = "llama_index_agent_openai-0.4.12-py3-none-any.whl", hash = "sha256:6dbb6276b2e5330032a726b28d5eef5140825f36d72d472b231f08ad3af99665"}, - {file = "llama_index_agent_openai-0.4.12.tar.gz", hash = "sha256:d2fe53feb69cfe45752edb7328bf0d25f6a9071b3c056787e661b93e5b748a28"}, + {file = "llama_index-0.13.1-py3-none-any.whl", hash = "sha256:e02b61cac0699c709a12e711bdaca0a2c90c9b8177d45f9b07b8650c9985d09e"}, + {file = "llama_index-0.13.1.tar.gz", hash = "sha256:0cf06beaf460bfa4dd57902e7f4696626da54350851a876b391a82acce7fe5c2"}, ] [[package]] name = "llama-index-cli" -version = "0.4.4" +version = "0.5.0" requires_python = "<4.0,>=3.9" summary = "llama-index cli" groups = ["default"] dependencies = [ - "llama-index-core<0.13,>=0.12.0", - "llama-index-embeddings-openai<0.4,>=0.3.1", - "llama-index-llms-openai<0.5,>=0.4.0", + "llama-index-core<0.14,>=0.13.0", + 
"llama-index-embeddings-openai<0.6,>=0.5.0", + "llama-index-llms-openai<0.6,>=0.5.0", ] files = [ - {file = "llama_index_cli-0.4.4-py3-none-any.whl", hash = "sha256:1070593cf79407054735ab7a23c5a65a26fc18d264661e42ef38fc549b4b7658"}, - {file = "llama_index_cli-0.4.4.tar.gz", hash = "sha256:c3af0cf1e2a7e5ef44d0bae5aa8e8872b54c5dd6b731afbae9f13ffeb4997be0"}, + {file = "llama_index_cli-0.5.0-py3-none-any.whl", hash = "sha256:e331ca98005c370bfe58800fa5eed8b10061d0b9c656b84a1f5f6168733a2a7b"}, + {file = "llama_index_cli-0.5.0.tar.gz", hash = "sha256:2eb9426232e8d89ffdf0fa6784ff8da09449d920d71d0fcc81d07be93cf9369f"}, ] [[package]] name = "llama-index-core" -version = "0.12.48" +version = "0.13.1" requires_python = "<4.0,>=3.9" summary = "Interface between LLMs and your data" groups = ["default"] dependencies = [ "aiohttp<4,>=3.8.6", "aiosqlite", - "banks<3,>=2.0.0", + "banks<3,>=2.2.0", "dataclasses-json", "deprecated>=1.2.9.3", "dirtyjson<2,>=1.0.8", @@ -713,6 +694,7 @@ dependencies = [ "nltk>3.8.1", "numpy", "pillow>=9.0.0", + "platformdirs", "pydantic>=2.8.0", "pyyaml>=6.0.1", "requests>=2.31.0", @@ -726,59 +708,60 @@ dependencies = [ "wrapt", ] files = [ - {file = "llama_index_core-0.12.48-py3-none-any.whl", hash = "sha256:0770119ab540605cb217dc9b26343b0bdf6f91d843cfb17d0074ba2fac358e56"}, - {file = "llama_index_core-0.12.48.tar.gz", hash = "sha256:a5cb2179495f091f351a41b4ef312ec6593660438e0066011ec81f7b5d2c93be"}, + {file = "llama_index_core-0.13.1-py3-none-any.whl", hash = "sha256:fde6c8c8bcacf7244bdef4908288eced5e11f47e9741d545846c3d1692830510"}, + {file = "llama_index_core-0.13.1.tar.gz", hash = "sha256:04a58cb26638e186ddb02a80970d503842f68abbeb8be5af6a387c51f7995eeb"}, ] [[package]] name = "llama-index-embeddings-huggingface" -version = "0.5.5" +version = "0.6.0" requires_python = "<4.0,>=3.9" summary = "llama-index embeddings huggingface integration" groups = ["default"] dependencies = [ "huggingface-hub[inference]>=0.19.0", - "llama-index-core<0.13,>=0.12.0", + "llama-index-core<0.14,>=0.13.0", "sentence-transformers>=2.6.1", ] files = [ - {file = "llama_index_embeddings_huggingface-0.5.5-py3-none-any.whl", hash = "sha256:8260e1561df17ca510e241a90504b37cc7d8ac6f2d6aaad9732d04ca3ad988d1"}, - {file = "llama_index_embeddings_huggingface-0.5.5.tar.gz", hash = "sha256:7f6e9a031d9146f235df597c0ccd6280cde96b9b437f99052ce79bb72e5fac5e"}, + {file = "llama_index_embeddings_huggingface-0.6.0-py3-none-any.whl", hash = "sha256:0c24aba5265a7dbd6591394a8d2d64d0b978bb50b4b97c4e88cbf698b69fdd10"}, + {file = "llama_index_embeddings_huggingface-0.6.0.tar.gz", hash = "sha256:3ece7d8c5b683d2055fedeca4457dea13f75c81a6d7fb94d77e878cd73d90d97"}, ] [[package]] name = "llama-index-embeddings-openai" -version = "0.3.1" +version = "0.5.0" requires_python = "<4.0,>=3.9" summary = "llama-index embeddings openai integration" groups = ["default"] dependencies = [ - "llama-index-core<0.13.0,>=0.12.0", + "llama-index-core<0.14,>=0.13.0", "openai>=1.1.0", ] files = [ - {file = "llama_index_embeddings_openai-0.3.1-py3-none-any.whl", hash = "sha256:f15a3d13da9b6b21b8bd51d337197879a453d1605e625a1c6d45e741756c0290"}, - {file = "llama_index_embeddings_openai-0.3.1.tar.gz", hash = "sha256:1368aad3ce24cbaed23d5ad251343cef1eb7b4a06d6563d6606d59cb347fef20"}, + {file = "llama_index_embeddings_openai-0.5.0-py3-none-any.whl", hash = "sha256:d817edb22e3ff475e8cd1833faf1147028986bc1d688f7894ef947558864b728"}, + {file = "llama_index_embeddings_openai-0.5.0.tar.gz", hash = 
"sha256:ac587839a111089ea8a6255f9214016d7a813b383bbbbf9207799be1100758eb"}, ] [[package]] name = "llama-index-indices-managed-llama-cloud" -version = "0.7.10" +version = "0.9.1" requires_python = "<4.0,>=3.9" summary = "llama-index indices llama-cloud integration" groups = ["default"] dependencies = [ - "llama-cloud==0.1.32", - "llama-index-core<0.13,>=0.12.0", + "deprecated==1.2.18", + "llama-cloud==0.1.35", + "llama-index-core<0.14,>=0.13.0", ] files = [ - {file = "llama_index_indices_managed_llama_cloud-0.7.10-py3-none-any.whl", hash = "sha256:f7edcfb8f694cab547cd9324be7835dc97470ce05150d0b8888fa3bf9d2f84a8"}, - {file = "llama_index_indices_managed_llama_cloud-0.7.10.tar.gz", hash = "sha256:53267907e23d8fbcbb97c7a96177a41446de18550ca6030276092e73b45ca880"}, + {file = "llama_index_indices_managed_llama_cloud-0.9.1-py3-none-any.whl", hash = "sha256:df33fb6d8c6b7ee22202ee7a19285a5672f0e58a1235a2504b49c90a7e1c8933"}, + {file = "llama_index_indices_managed_llama_cloud-0.9.1.tar.gz", hash = "sha256:7bee1a368a17ff63bf1078e5ad4795eb88dcdb87c259cfb242c19bd0f4fb978e"}, ] [[package]] name = "llama-index-instrumentation" -version = "0.2.0" +version = "0.4.0" requires_python = "<4.0,>=3.9" summary = "Add your description here" groups = ["default"] @@ -787,123 +770,76 @@ dependencies = [ "pydantic>=2.11.5", ] files = [ - {file = "llama_index_instrumentation-0.2.0-py3-none-any.whl", hash = "sha256:1055ae7a3d19666671a8f1a62d08c90472552d9fcec7e84e6919b2acc92af605"}, - {file = "llama_index_instrumentation-0.2.0.tar.gz", hash = "sha256:ae8333522487e22a33732924a9a08dfb456f54993c5c97d8340db3c620b76f13"}, + {file = "llama_index_instrumentation-0.4.0-py3-none-any.whl", hash = "sha256:83f73156be34dd0121dfe9e259883620e19f0162f152ac483e179ad5ad0396ac"}, + {file = "llama_index_instrumentation-0.4.0.tar.gz", hash = "sha256:f38ecc1f02b6c1f7ab84263baa6467fac9f86538c0ee25542853de46278abea7"}, ] [[package]] name = "llama-index-llms-openai" -version = "0.4.7" +version = "0.5.2" requires_python = "<4.0,>=3.9" summary = "llama-index llms openai integration" groups = ["default"] dependencies = [ - "llama-index-core<0.13,>=0.12.41", + "llama-index-core<0.14,>=0.13.0", "openai<2,>=1.81.0", ] files = [ - {file = "llama_index_llms_openai-0.4.7-py3-none-any.whl", hash = "sha256:3b8d9d3c1bcadc2cff09724de70f074f43eafd5b7048a91247c9a41b7cd6216d"}, - {file = "llama_index_llms_openai-0.4.7.tar.gz", hash = "sha256:564af8ab39fb3f3adfeae73a59c0dca46c099ab844a28e725eee0c551d4869f8"}, -] - -[[package]] -name = "llama-index-multi-modal-llms-openai" -version = "0.5.3" -requires_python = "<4.0,>=3.9" -summary = "llama-index multi-modal-llms openai integration" -groups = ["default"] -dependencies = [ - "llama-index-core<0.13,>=0.12.47", - "llama-index-llms-openai<0.5,>=0.4.0", -] -files = [ - {file = "llama_index_multi_modal_llms_openai-0.5.3-py3-none-any.whl", hash = "sha256:be6237df8f9caaa257f9beda5317287bbd2ec19473d777a30a34e41a7c5bddf8"}, - {file = "llama_index_multi_modal_llms_openai-0.5.3.tar.gz", hash = "sha256:b755a8b47d8d2f34b5a3d249af81d9bfb69d3d2cf9ab539d3a42f7bfa3e2391a"}, -] - -[[package]] -name = "llama-index-program-openai" -version = "0.3.2" -requires_python = "<4.0,>=3.9" -summary = "llama-index program openai integration" -groups = ["default"] -dependencies = [ - "llama-index-agent-openai<0.5,>=0.4.0", - "llama-index-core<0.13,>=0.12.0", - "llama-index-llms-openai<0.5,>=0.4.0", -] -files = [ - {file = "llama_index_program_openai-0.3.2-py3-none-any.whl", hash = 
"sha256:451829ae53e074e7b47dcc60a9dd155fcf9d1dcbc1754074bdadd6aab4ceb9aa"}, - {file = "llama_index_program_openai-0.3.2.tar.gz", hash = "sha256:04c959a2e616489894bd2eeebb99500d6f1c17d588c3da0ddc75ebd3eb7451ee"}, -] - -[[package]] -name = "llama-index-question-gen-openai" -version = "0.3.1" -requires_python = "<4.0,>=3.9" -summary = "llama-index question_gen openai integration" -groups = ["default"] -dependencies = [ - "llama-index-core<0.13,>=0.12.0", - "llama-index-llms-openai<0.5,>=0.4.0", - "llama-index-program-openai<0.4,>=0.3.0", -] -files = [ - {file = "llama_index_question_gen_openai-0.3.1-py3-none-any.whl", hash = "sha256:1ce266f6c8373fc8d884ff83a44dfbacecde2301785db7144872db51b8b99429"}, - {file = "llama_index_question_gen_openai-0.3.1.tar.gz", hash = "sha256:5e9311b433cc2581ff8a531fa19fb3aa21815baff75aaacdef11760ac9522aa9"}, + {file = "llama_index_llms_openai-0.5.2-py3-none-any.whl", hash = "sha256:f1cc5be83f704d217bd235b609ad1b128dbd42e571329b108f902920836c1071"}, + {file = "llama_index_llms_openai-0.5.2.tar.gz", hash = "sha256:53237fda8ff9089fdb2543ac18ea499b27863cc41095d3a3499f19e9cfd98e1a"}, ] [[package]] name = "llama-index-readers-file" -version = "0.4.11" +version = "0.5.0" requires_python = "<4.0,>=3.9" summary = "llama-index readers file integration" groups = ["default"] dependencies = [ "beautifulsoup4<5,>=4.12.3", "defusedxml>=0.7.1", - "llama-index-core<0.13,>=0.12.0", + "llama-index-core<0.14,>=0.13.0", "pandas<2.3.0", "pypdf<6,>=5.1.0", "striprtf<0.0.27,>=0.0.26", ] files = [ - {file = "llama_index_readers_file-0.4.11-py3-none-any.whl", hash = "sha256:e71192d8d6d0bf95131762da15fa205cf6e0cc248c90c76ee04d0fbfe160d464"}, - {file = "llama_index_readers_file-0.4.11.tar.gz", hash = "sha256:1b21cb66d78dd5f60e8716607d9a47ccd81bb39106d459665be1ca7799e9597b"}, + {file = "llama_index_readers_file-0.5.0-py3-none-any.whl", hash = "sha256:7fc47a9dbf11d07e78992581c20bca82b21bf336e646b4f53263f3909cb02c58"}, + {file = "llama_index_readers_file-0.5.0.tar.gz", hash = "sha256:f324617bfc4d9b32136d25ff5351b92bc0b569a296173ee2a8591c1f886eff0c"}, ] [[package]] name = "llama-index-readers-llama-parse" -version = "0.4.0" +version = "0.5.0" requires_python = "<4.0,>=3.9" summary = "llama-index readers llama-parse integration" groups = ["default"] dependencies = [ - "llama-index-core<0.13.0,>=0.12.0", + "llama-index-core<0.14,>=0.13.0", "llama-parse>=0.5.0", ] files = [ - {file = "llama_index_readers_llama_parse-0.4.0-py3-none-any.whl", hash = "sha256:574e48386f28d2c86c3f961ca4a4906910312f3400dd0c53014465bfbc6b32bf"}, - {file = "llama_index_readers_llama_parse-0.4.0.tar.gz", hash = "sha256:e99ec56f4f8546d7fda1a7c1ae26162fb9acb7ebcac343b5abdb4234b4644e0f"}, + {file = "llama_index_readers_llama_parse-0.5.0-py3-none-any.whl", hash = "sha256:e63ebf2248c4a726b8a1f7b029c90383d82cdc142942b54dbf287d1f3aee6d75"}, + {file = "llama_index_readers_llama_parse-0.5.0.tar.gz", hash = "sha256:891b21fb63fe1fe722e23cfa263a74d9a7354e5d8d7a01f2d4040a52f8d8feef"}, ] [[package]] name = "llama-index-vector-stores-faiss" -version = "0.4.0" +version = "0.5.0" requires_python = "<4.0,>=3.9" summary = "llama-index vector_stores faiss integration" groups = ["default"] dependencies = [ - "llama-index-core<0.13,>=0.12.0", + "llama-index-core<0.14,>=0.13.0", ] files = [ - {file = "llama_index_vector_stores_faiss-0.4.0-py3-none-any.whl", hash = "sha256:092907b38c70b7f9698ad294836389b31fd3a1273ea1d93082993dd0925c8a4b"}, - {file = "llama_index_vector_stores_faiss-0.4.0.tar.gz", hash = 
"sha256:59b58e4ec91880a5871a896bbdbd94cb781a447f92f400b5f08a62eb56a62e5c"}, + {file = "llama_index_vector_stores_faiss-0.5.0-py3-none-any.whl", hash = "sha256:2fa9848a4423ddb26f987d299749f1fa1c272b8e576332a03e0610d4ee236d09"}, + {file = "llama_index_vector_stores_faiss-0.5.0.tar.gz", hash = "sha256:4b6a1533c075b6e30985bf1eb778716c594ae0511691434df7f75b032ef964eb"}, ] [[package]] name = "llama-index-workflows" -version = "1.1.0" +version = "1.3.0" requires_python = ">=3.9" summary = "An event-driven, async-first, step-based way to control the execution flow of AI applications like Agents." groups = ["default"] @@ -911,24 +847,25 @@ dependencies = [ "eval-type-backport>=0.2.2; python_full_version < \"3.10\"", "llama-index-instrumentation>=0.1.0", "pydantic>=2.11.5", + "typing-extensions>=4.6.0", ] files = [ - {file = "llama_index_workflows-1.1.0-py3-none-any.whl", hash = "sha256:992fd5b012f56725853a4eed2219a66e19fcc7a6db85dc51afcc1bd2a5dd6db1"}, - {file = "llama_index_workflows-1.1.0.tar.gz", hash = "sha256:ff001d362100bfc2a3353cc5f2528a0adb52245e632191a86b4bddacde72b6af"}, + {file = "llama_index_workflows-1.3.0-py3-none-any.whl", hash = "sha256:328cc25d92b014ef527f105a2f2088c0924fff0494e53d93decb951f14fbfe47"}, + {file = "llama_index_workflows-1.3.0.tar.gz", hash = "sha256:9c1688e237efad384f16485af71c6f9456a2eb6d85bf61ff49e5717f10ff286d"}, ] [[package]] name = "llama-parse" -version = "0.6.43" +version = "0.6.54" requires_python = "<4.0,>=3.9" summary = "Parse files into RAG-Optimized formats." groups = ["default"] dependencies = [ - "llama-cloud-services>=0.6.43", + "llama-cloud-services>=0.6.54", ] files = [ - {file = "llama_parse-0.6.43-py3-none-any.whl", hash = "sha256:fe435309638c4fdec4fec31f97c5031b743c92268962d03b99bd76704f566c32"}, - {file = "llama_parse-0.6.43.tar.gz", hash = "sha256:d88e91c97e37f77b2619111ef43c02b7da61125f821cf77f918996eb48200d78"}, + {file = "llama_parse-0.6.54-py3-none-any.whl", hash = "sha256:c66c8d51cf6f29a44eaa8595a595de5d2598afc86e5a33a4cebe5fe228036920"}, + {file = "llama_parse-0.6.54.tar.gz", hash = "sha256:c707b31152155c9bae84e316fab790bbc8c85f4d8825ce5ee386ebeb7db258f1"}, ] [[package]] @@ -1010,7 +947,7 @@ files = [ [[package]] name = "mypy" -version = "1.16.1" +version = "1.17.1" requires_python = ">=3.9" summary = "Optional static typing for Python" groups = ["dev"] @@ -1021,14 +958,14 @@ dependencies = [ "typing-extensions>=4.6.0", ] files = [ - {file = "mypy-1.16.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:472e4e4c100062488ec643f6162dd0d5208e33e2f34544e1fc931372e806c0cc"}, - {file = "mypy-1.16.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:ea16e2a7d2714277e349e24d19a782a663a34ed60864006e8585db08f8ad1782"}, - {file = "mypy-1.16.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:08e850ea22adc4d8a4014651575567b0318ede51e8e9fe7a68f25391af699507"}, - {file = "mypy-1.16.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:22d76a63a42619bfb90122889b903519149879ddbf2ba4251834727944c8baca"}, - {file = "mypy-1.16.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:2c7ce0662b6b9dc8f4ed86eb7a5d505ee3298c04b40ec13b30e572c0e5ae17c4"}, - {file = "mypy-1.16.1-cp311-cp311-win_amd64.whl", hash = "sha256:211287e98e05352a2e1d4e8759c5490925a7c784ddc84207f4714822f8cf99b6"}, - {file = "mypy-1.16.1-py3-none-any.whl", hash = "sha256:5fc2ac4027d0ef28d6ba69a0343737a23c4d1b83672bf38d1fe237bdc0643b37"}, - {file = "mypy-1.16.1.tar.gz", hash = 
"sha256:6bd00a0a2094841c5e47e7374bb42b83d64c527a502e3334e1173a0c24437bab"}, + {file = "mypy-1.17.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:ad37544be07c5d7fba814eb370e006df58fed8ad1ef33ed1649cb1889ba6ff58"}, + {file = "mypy-1.17.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:064e2ff508e5464b4bd807a7c1625bc5047c5022b85c70f030680e18f37273a5"}, + {file = "mypy-1.17.1-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:70401bbabd2fa1aa7c43bb358f54037baf0586f41e83b0ae67dd0534fc64edfd"}, + {file = "mypy-1.17.1-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:e92bdc656b7757c438660f775f872a669b8ff374edc4d18277d86b63edba6b8b"}, + {file = "mypy-1.17.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:c1fdf4abb29ed1cb091cf432979e162c208a5ac676ce35010373ff29247bcad5"}, + {file = "mypy-1.17.1-cp311-cp311-win_amd64.whl", hash = "sha256:ff2933428516ab63f961644bc49bc4cbe42bbffb2cd3b71cc7277c07d16b1a8b"}, + {file = "mypy-1.17.1-py3-none-any.whl", hash = "sha256:a9f52c0351c21fe24c21d8c0eb1f62967b262d6729393397b6f443c3b773c3b9"}, + {file = "mypy-1.17.1.tar.gz", hash = "sha256:25e01ec741ab5bb3eec8ba9cdb0f769230368a22c959c4937360efb89b7e9f01"}, ] [[package]] @@ -1083,34 +1020,35 @@ files = [ [[package]] name = "numpy" -version = "2.3.1" +version = "2.3.2" requires_python = ">=3.11" summary = "Fundamental package for array computing in Python" groups = ["default"] files = [ - {file = "numpy-2.3.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:6ea9e48336a402551f52cd8f593343699003d2353daa4b72ce8d34f66b722070"}, - {file = "numpy-2.3.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:5ccb7336eaf0e77c1635b232c141846493a588ec9ea777a7c24d7166bb8533ae"}, - {file = "numpy-2.3.1-cp311-cp311-macosx_14_0_arm64.whl", hash = "sha256:0bb3a4a61e1d327e035275d2a993c96fa786e4913aa089843e6a2d9dd205c66a"}, - {file = "numpy-2.3.1-cp311-cp311-macosx_14_0_x86_64.whl", hash = "sha256:e344eb79dab01f1e838ebb67aab09965fb271d6da6b00adda26328ac27d4a66e"}, - {file = "numpy-2.3.1-cp311-cp311-manylinux_2_28_aarch64.whl", hash = "sha256:467db865b392168ceb1ef1ffa6f5a86e62468c43e0cfb4ab6da667ede10e58db"}, - {file = "numpy-2.3.1-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:afed2ce4a84f6b0fc6c1ce734ff368cbf5a5e24e8954a338f3bdffa0718adffb"}, - {file = "numpy-2.3.1-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:0025048b3c1557a20bc80d06fdeb8cc7fc193721484cca82b2cfa072fec71a93"}, - {file = "numpy-2.3.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:a5ee121b60aa509679b682819c602579e1df14a5b07fe95671c8849aad8f2115"}, - {file = "numpy-2.3.1-cp311-cp311-win32.whl", hash = "sha256:a8b740f5579ae4585831b3cf0e3b0425c667274f82a484866d2adf9570539369"}, - {file = "numpy-2.3.1-cp311-cp311-win_amd64.whl", hash = "sha256:d4580adadc53311b163444f877e0789f1c8861e2698f6b2a4ca852fda154f3ff"}, - {file = "numpy-2.3.1-cp311-cp311-win_arm64.whl", hash = "sha256:ec0bdafa906f95adc9a0c6f26a4871fa753f25caaa0e032578a30457bff0af6a"}, - {file = "numpy-2.3.1-pp311-pypy311_pp73-macosx_10_15_x86_64.whl", hash = "sha256:ad506d4b09e684394c42c966ec1527f6ebc25da7f4da4b1b056606ffe446b8a3"}, - {file = "numpy-2.3.1-pp311-pypy311_pp73-macosx_14_0_arm64.whl", hash = "sha256:ebb8603d45bc86bbd5edb0d63e52c5fd9e7945d3a503b77e486bd88dde67a19b"}, - {file = "numpy-2.3.1-pp311-pypy311_pp73-macosx_14_0_x86_64.whl", hash = "sha256:15aa4c392ac396e2ad3d0a2680c0f0dee420f9fed14eef09bdb9450ee6dcb7b7"}, - {file = 
"numpy-2.3.1-pp311-pypy311_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:c6e0bf9d1a2f50d2b65a7cf56db37c095af17b59f6c132396f7c6d5dd76484df"}, - {file = "numpy-2.3.1-pp311-pypy311_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:eabd7e8740d494ce2b4ea0ff05afa1b7b291e978c0ae075487c51e8bd93c0c68"}, - {file = "numpy-2.3.1-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:e610832418a2bc09d974cc9fecebfa51e9532d6190223bc5ef6a7402ebf3b5cb"}, - {file = "numpy-2.3.1.tar.gz", hash = "sha256:1ec9ae20a4226da374362cca3c62cd753faf2f951440b0e3b98e93c235441d2b"}, + {file = "numpy-2.3.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:852ae5bed3478b92f093e30f785c98e0cb62fa0a939ed057c31716e18a7a22b9"}, + {file = "numpy-2.3.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:7a0e27186e781a69959d0230dd9909b5e26024f8da10683bd6344baea1885168"}, + {file = "numpy-2.3.2-cp311-cp311-macosx_14_0_arm64.whl", hash = "sha256:f0a1a8476ad77a228e41619af2fa9505cf69df928e9aaa165746584ea17fed2b"}, + {file = "numpy-2.3.2-cp311-cp311-macosx_14_0_x86_64.whl", hash = "sha256:cbc95b3813920145032412f7e33d12080f11dc776262df1712e1638207dde9e8"}, + {file = "numpy-2.3.2-cp311-cp311-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:f75018be4980a7324edc5930fe39aa391d5734531b1926968605416ff58c332d"}, + {file = "numpy-2.3.2-cp311-cp311-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:20b8200721840f5621b7bd03f8dcd78de33ec522fc40dc2641aa09537df010c3"}, + {file = "numpy-2.3.2-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:1f91e5c028504660d606340a084db4b216567ded1056ea2b4be4f9d10b67197f"}, + {file = "numpy-2.3.2-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:fb1752a3bb9a3ad2d6b090b88a9a0ae1cd6f004ef95f75825e2f382c183b2097"}, + {file = "numpy-2.3.2-cp311-cp311-win32.whl", hash = "sha256:4ae6863868aaee2f57503c7a5052b3a2807cf7a3914475e637a0ecd366ced220"}, + {file = "numpy-2.3.2-cp311-cp311-win_amd64.whl", hash = "sha256:240259d6564f1c65424bcd10f435145a7644a65a6811cfc3201c4a429ba79170"}, + {file = "numpy-2.3.2-cp311-cp311-win_arm64.whl", hash = "sha256:4209f874d45f921bde2cff1ffcd8a3695f545ad2ffbef6d3d3c6768162efab89"}, + {file = "numpy-2.3.2-pp311-pypy311_pp73-macosx_10_15_x86_64.whl", hash = "sha256:14a91ebac98813a49bc6aa1a0dfc09513dcec1d97eaf31ca21a87221a1cdcb15"}, + {file = "numpy-2.3.2-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:71669b5daae692189540cffc4c439468d35a3f84f0c88b078ecd94337f6cb0ec"}, + {file = "numpy-2.3.2-pp311-pypy311_pp73-macosx_14_0_arm64.whl", hash = "sha256:69779198d9caee6e547adb933941ed7520f896fd9656834c300bdf4dd8642712"}, + {file = "numpy-2.3.2-pp311-pypy311_pp73-macosx_14_0_x86_64.whl", hash = "sha256:2c3271cc4097beb5a60f010bcc1cc204b300bb3eafb4399376418a83a1c6373c"}, + {file = "numpy-2.3.2-pp311-pypy311_pp73-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:8446acd11fe3dc1830568c941d44449fd5cb83068e5c70bd5a470d323d448296"}, + {file = "numpy-2.3.2-pp311-pypy311_pp73-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:aa098a5ab53fa407fded5870865c6275a5cd4101cfdef8d6fafc48286a96e981"}, + {file = "numpy-2.3.2-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:6936aff90dda378c09bea075af0d9c675fe3a977a9d2402f95a87f440f59f619"}, + {file = "numpy-2.3.2.tar.gz", hash = "sha256:e0486a11ec30cdecb53f184d496d1c6a20786c81e55e41640270130056f8ee48"}, ] [[package]] name = "openai" -version = "1.95.1" +version = "1.99.6" requires_python = ">=3.8" summary = "The official Python library for the openai API" groups = ["default"] @@ -1125,8 +1063,8 @@ 
dependencies = [ "typing-extensions<5,>=4.11", ] files = [ - {file = "openai-1.95.1-py3-none-any.whl", hash = "sha256:8bbdfeceef231b1ddfabbc232b179d79f8b849aab5a7da131178f8d10e0f162f"}, - {file = "openai-1.95.1.tar.gz", hash = "sha256:f089b605282e2a2b6776090b4b46563ac1da77f56402a222597d591e2dcc1086"}, + {file = "openai-1.99.6-py3-none-any.whl", hash = "sha256:e40d44b2989588c45ce13819598788b77b8fb80ba2f7ae95ce90d14e46f1bd26"}, + {file = "openai-1.99.6.tar.gz", hash = "sha256:f48f4239b938ef187062f3d5199a05b69711d8b600b9a9b6a3853cd271799183"}, ] [[package]] @@ -1314,7 +1252,7 @@ files = [ [[package]] name = "pypdf" -version = "5.7.0" +version = "5.9.0" requires_python = ">=3.8" summary = "A pure-python PDF library capable of splitting, merging, cropping, and transforming PDF files" groups = ["default"] @@ -1322,8 +1260,8 @@ dependencies = [ "typing-extensions>=4.0; python_version < \"3.11\"", ] files = [ - {file = "pypdf-5.7.0-py3-none-any.whl", hash = "sha256:203379453439f5b68b7a1cd43cdf4c5f7a02b84810cefa7f93a47b350aaaba48"}, - {file = "pypdf-5.7.0.tar.gz", hash = "sha256:68c92f2e1aae878bab1150e74447f31ab3848b1c0a6f8becae9f0b1904460b6f"}, + {file = "pypdf-5.9.0-py3-none-any.whl", hash = "sha256:be10a4c54202f46d9daceaa8788be07aa8cd5ea8c25c529c50dd509206382c35"}, + {file = "pypdf-5.9.0.tar.gz", hash = "sha256:30f67a614d558e495e1fbb157ba58c1de91ffc1718f5e0dfeb82a029233890a1"}, ] [[package]] @@ -1395,27 +1333,26 @@ files = [ [[package]] name = "regex" -version = "2024.11.6" -requires_python = ">=3.8" +version = "2025.7.34" +requires_python = ">=3.9" summary = "Alternative regular expression module, to replace re." groups = ["default"] files = [ - {file = "regex-2024.11.6-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:5478c6962ad548b54a591778e93cd7c456a7a29f8eca9c49e4f9a806dcc5d638"}, - {file = "regex-2024.11.6-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:2c89a8cc122b25ce6945f0423dc1352cb9593c68abd19223eebbd4e56612c5b7"}, - {file = "regex-2024.11.6-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:94d87b689cdd831934fa3ce16cc15cd65748e6d689f5d2b8f4f4df2065c9fa20"}, - {file = "regex-2024.11.6-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1062b39a0a2b75a9c694f7a08e7183a80c63c0d62b301418ffd9c35f55aaa114"}, - {file = "regex-2024.11.6-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:167ed4852351d8a750da48712c3930b031f6efdaa0f22fa1933716bfcd6bf4a3"}, - {file = "regex-2024.11.6-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2d548dafee61f06ebdb584080621f3e0c23fff312f0de1afc776e2a2ba99a74f"}, - {file = "regex-2024.11.6-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f2a19f302cd1ce5dd01a9099aaa19cae6173306d1302a43b627f62e21cf18ac0"}, - {file = "regex-2024.11.6-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:bec9931dfb61ddd8ef2ebc05646293812cb6b16b60cf7c9511a832b6f1854b55"}, - {file = "regex-2024.11.6-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:9714398225f299aa85267fd222f7142fcb5c769e73d7733344efc46f2ef5cf89"}, - {file = "regex-2024.11.6-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:202eb32e89f60fc147a41e55cb086db2a3f8cb82f9a9a88440dcfc5d37faae8d"}, - {file = "regex-2024.11.6-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:4181b814e56078e9b00427ca358ec44333765f5ca1b45597ec7446d3a1ef6e34"}, - {file = "regex-2024.11.6-cp311-cp311-musllinux_1_2_s390x.whl", hash = 
"sha256:068376da5a7e4da51968ce4c122a7cd31afaaec4fccc7856c92f63876e57b51d"}, - {file = "regex-2024.11.6-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:ac10f2c4184420d881a3475fb2c6f4d95d53a8d50209a2500723d831036f7c45"}, - {file = "regex-2024.11.6-cp311-cp311-win32.whl", hash = "sha256:c36f9b6f5f8649bb251a5f3f66564438977b7ef8386a52460ae77e6070d309d9"}, - {file = "regex-2024.11.6-cp311-cp311-win_amd64.whl", hash = "sha256:02e28184be537f0e75c1f9b2f8847dc51e08e6e171c6bde130b2687e0c33cf60"}, - {file = "regex-2024.11.6.tar.gz", hash = "sha256:7ab159b063c52a0333c884e4679f8d7a85112ee3078fe3d9004b2dd875585519"}, + {file = "regex-2025.7.34-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:da304313761b8500b8e175eb2040c4394a875837d5635f6256d6fa0377ad32c8"}, + {file = "regex-2025.7.34-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:35e43ebf5b18cd751ea81455b19acfdec402e82fe0dc6143edfae4c5c4b3909a"}, + {file = "regex-2025.7.34-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:96bbae4c616726f4661fe7bcad5952e10d25d3c51ddc388189d8864fbc1b3c68"}, + {file = "regex-2025.7.34-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:9feab78a1ffa4f2b1e27b1bcdaad36f48c2fed4870264ce32f52a393db093c78"}, + {file = "regex-2025.7.34-cp311-cp311-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:f14b36e6d4d07f1a5060f28ef3b3561c5d95eb0651741474ce4c0a4c56ba8719"}, + {file = "regex-2025.7.34-cp311-cp311-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:85c3a958ef8b3d5079c763477e1f09e89d13ad22198a37e9d7b26b4b17438b33"}, + {file = "regex-2025.7.34-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:37555e4ae0b93358fa7c2d240a4291d4a4227cc7c607d8f85596cdb08ec0a083"}, + {file = "regex-2025.7.34-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:ee38926f31f1aa61b0232a3a11b83461f7807661c062df9eb88769d86e6195c3"}, + {file = "regex-2025.7.34-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:a664291c31cae9c4a30589bd8bc2ebb56ef880c9c6264cb7643633831e606a4d"}, + {file = "regex-2025.7.34-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:f3e5c1e0925e77ec46ddc736b756a6da50d4df4ee3f69536ffb2373460e2dafd"}, + {file = "regex-2025.7.34-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:d428fc7731dcbb4e2ffe43aeb8f90775ad155e7db4347a639768bc6cd2df881a"}, + {file = "regex-2025.7.34-cp311-cp311-win32.whl", hash = "sha256:e154a7ee7fa18333ad90b20e16ef84daaeac61877c8ef942ec8dfa50dc38b7a1"}, + {file = "regex-2025.7.34-cp311-cp311-win_amd64.whl", hash = "sha256:24257953d5c1d6d3c129ab03414c07fc1a47833c9165d49b954190b2b7f21a1a"}, + {file = "regex-2025.7.34-cp311-cp311-win_arm64.whl", hash = "sha256:3157aa512b9e606586900888cd469a444f9b898ecb7f8931996cb715f77477f0"}, + {file = "regex-2025.7.34.tar.gz", hash = "sha256:9ead9765217afd04a86822dfcd4ed2747dfe426e887da413b15ff0ac2457e21a"}, ] [[package]] @@ -1437,58 +1374,58 @@ files = [ [[package]] name = "ruff" -version = "0.12.3" +version = "0.12.8" requires_python = ">=3.7" summary = "An extremely fast Python linter and code formatter, written in Rust." 
groups = ["dev"] files = [ - {file = "ruff-0.12.3-py3-none-linux_armv6l.whl", hash = "sha256:47552138f7206454eaf0c4fe827e546e9ddac62c2a3d2585ca54d29a890137a2"}, - {file = "ruff-0.12.3-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:0a9153b000c6fe169bb307f5bd1b691221c4286c133407b8827c406a55282041"}, - {file = "ruff-0.12.3-py3-none-macosx_11_0_arm64.whl", hash = "sha256:fa6b24600cf3b750e48ddb6057e901dd5b9aa426e316addb2a1af185a7509882"}, - {file = "ruff-0.12.3-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e2506961bf6ead54887ba3562604d69cb430f59b42133d36976421bc8bd45901"}, - {file = "ruff-0.12.3-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:c4faaff1f90cea9d3033cbbcdf1acf5d7fb11d8180758feb31337391691f3df0"}, - {file = "ruff-0.12.3-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:40dced4a79d7c264389de1c59467d5d5cefd79e7e06d1dfa2c75497b5269a5a6"}, - {file = "ruff-0.12.3-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:0262d50ba2767ed0fe212aa7e62112a1dcbfd46b858c5bf7bbd11f326998bafc"}, - {file = "ruff-0.12.3-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:12371aec33e1a3758597c5c631bae9a5286f3c963bdfb4d17acdd2d395406687"}, - {file = "ruff-0.12.3-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:560f13b6baa49785665276c963edc363f8ad4b4fc910a883e2625bdb14a83a9e"}, - {file = "ruff-0.12.3-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:023040a3499f6f974ae9091bcdd0385dd9e9eb4942f231c23c57708147b06311"}, - {file = "ruff-0.12.3-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:883d844967bffff5ab28bba1a4d246c1a1b2933f48cb9840f3fdc5111c603b07"}, - {file = "ruff-0.12.3-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:2120d3aa855ff385e0e562fdee14d564c9675edbe41625c87eeab744a7830d12"}, - {file = "ruff-0.12.3-py3-none-musllinux_1_2_i686.whl", hash = "sha256:6b16647cbb470eaf4750d27dddc6ebf7758b918887b56d39e9c22cce2049082b"}, - {file = "ruff-0.12.3-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:e1417051edb436230023575b149e8ff843a324557fe0a265863b7602df86722f"}, - {file = "ruff-0.12.3-py3-none-win32.whl", hash = "sha256:dfd45e6e926deb6409d0616078a666ebce93e55e07f0fb0228d4b2608b2c248d"}, - {file = "ruff-0.12.3-py3-none-win_amd64.whl", hash = "sha256:a946cf1e7ba3209bdef039eb97647f1c77f6f540e5845ec9c114d3af8df873e7"}, - {file = "ruff-0.12.3-py3-none-win_arm64.whl", hash = "sha256:5f9c7c9c8f84c2d7f27e93674d27136fbf489720251544c4da7fb3d742e011b1"}, - {file = "ruff-0.12.3.tar.gz", hash = "sha256:f1b5a4b6668fd7b7ea3697d8d98857390b40c1320a63a178eee6be0899ea2d77"}, + {file = "ruff-0.12.8-py3-none-linux_armv6l.whl", hash = "sha256:63cb5a5e933fc913e5823a0dfdc3c99add73f52d139d6cd5cc8639d0e0465513"}, + {file = "ruff-0.12.8-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:9a9bbe28f9f551accf84a24c366c1aa8774d6748438b47174f8e8565ab9dedbc"}, + {file = "ruff-0.12.8-py3-none-macosx_11_0_arm64.whl", hash = "sha256:2fae54e752a3150f7ee0e09bce2e133caf10ce9d971510a9b925392dc98d2fec"}, + {file = "ruff-0.12.8-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c0acbcf01206df963d9331b5838fb31f3b44fa979ee7fa368b9b9057d89f4a53"}, + {file = "ruff-0.12.8-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:ae3e7504666ad4c62f9ac8eedb52a93f9ebdeb34742b8b71cd3cccd24912719f"}, + {file = "ruff-0.12.8-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = 
"sha256:cb82efb5d35d07497813a1c5647867390a7d83304562607f3579602fa3d7d46f"}, + {file = "ruff-0.12.8-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:dbea798fc0065ad0b84a2947b0aff4233f0cb30f226f00a2c5850ca4393de609"}, + {file = "ruff-0.12.8-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:49ebcaccc2bdad86fd51b7864e3d808aad404aab8df33d469b6e65584656263a"}, + {file = "ruff-0.12.8-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:0ac9c570634b98c71c88cb17badd90f13fc076a472ba6ef1d113d8ed3df109fb"}, + {file = "ruff-0.12.8-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:560e0cd641e45591a3e42cb50ef61ce07162b9c233786663fdce2d8557d99818"}, + {file = "ruff-0.12.8-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:71c83121512e7743fba5a8848c261dcc454cafb3ef2934a43f1b7a4eb5a447ea"}, + {file = "ruff-0.12.8-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:de4429ef2ba091ecddedd300f4c3f24bca875d3d8b23340728c3cb0da81072c3"}, + {file = "ruff-0.12.8-py3-none-musllinux_1_2_i686.whl", hash = "sha256:a2cab5f60d5b65b50fba39a8950c8746df1627d54ba1197f970763917184b161"}, + {file = "ruff-0.12.8-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:45c32487e14f60b88aad6be9fd5da5093dbefb0e3e1224131cb1d441d7cb7d46"}, + {file = "ruff-0.12.8-py3-none-win32.whl", hash = "sha256:daf3475060a617fd5bc80638aeaf2f5937f10af3ec44464e280a9d2218e720d3"}, + {file = "ruff-0.12.8-py3-none-win_amd64.whl", hash = "sha256:7209531f1a1fcfbe8e46bcd7ab30e2f43604d8ba1c49029bb420b103d0b5f76e"}, + {file = "ruff-0.12.8-py3-none-win_arm64.whl", hash = "sha256:c90e1a334683ce41b0e7a04f41790c429bf5073b62c1ae701c9dc5b3d14f0749"}, + {file = "ruff-0.12.8.tar.gz", hash = "sha256:4cb3a45525176e1009b2b64126acf5f9444ea59066262791febf55e40493a033"}, ] [[package]] name = "safetensors" -version = "0.5.3" -requires_python = ">=3.7" +version = "0.6.2" +requires_python = ">=3.9" summary = "" groups = ["default"] files = [ - {file = "safetensors-0.5.3-cp38-abi3-macosx_10_12_x86_64.whl", hash = "sha256:bd20eb133db8ed15b40110b7c00c6df51655a2998132193de2f75f72d99c7073"}, - {file = "safetensors-0.5.3-cp38-abi3-macosx_11_0_arm64.whl", hash = "sha256:21d01c14ff6c415c485616b8b0bf961c46b3b343ca59110d38d744e577f9cce7"}, - {file = "safetensors-0.5.3-cp38-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:11bce6164887cd491ca75c2326a113ba934be596e22b28b1742ce27b1d076467"}, - {file = "safetensors-0.5.3-cp38-abi3-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:4a243be3590bc3301c821da7a18d87224ef35cbd3e5f5727e4e0728b8172411e"}, - {file = "safetensors-0.5.3-cp38-abi3-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:8bd84b12b1670a6f8e50f01e28156422a2bc07fb16fc4e98bded13039d688a0d"}, - {file = "safetensors-0.5.3-cp38-abi3-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:391ac8cab7c829452175f871fcaf414aa1e292b5448bd02620f675a7f3e7abb9"}, - {file = "safetensors-0.5.3-cp38-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:cead1fa41fc54b1e61089fa57452e8834f798cb1dc7a09ba3524f1eb08e0317a"}, - {file = "safetensors-0.5.3-cp38-abi3-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:1077f3e94182d72618357b04b5ced540ceb71c8a813d3319f1aba448e68a770d"}, - {file = "safetensors-0.5.3-cp38-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:799021e78287bac619c7b3f3606730a22da4cda27759ddf55d37c8db7511c74b"}, - {file = "safetensors-0.5.3-cp38-abi3-musllinux_1_2_armv7l.whl", hash = 
"sha256:df26da01aaac504334644e1b7642fa000bfec820e7cef83aeac4e355e03195ff"}, - {file = "safetensors-0.5.3-cp38-abi3-musllinux_1_2_i686.whl", hash = "sha256:32c3ef2d7af8b9f52ff685ed0bc43913cdcde135089ae322ee576de93eae5135"}, - {file = "safetensors-0.5.3-cp38-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:37f1521be045e56fc2b54c606d4455573e717b2d887c579ee1dbba5f868ece04"}, - {file = "safetensors-0.5.3-cp38-abi3-win32.whl", hash = "sha256:cfc0ec0846dcf6763b0ed3d1846ff36008c6e7290683b61616c4b040f6a54ace"}, - {file = "safetensors-0.5.3-cp38-abi3-win_amd64.whl", hash = "sha256:836cbbc320b47e80acd40e44c8682db0e8ad7123209f69b093def21ec7cafd11"}, - {file = "safetensors-0.5.3.tar.gz", hash = "sha256:b6b0d6ecacec39a4fdd99cc19f4576f5219ce858e6fd8dbe7609df0b8dc56965"}, + {file = "safetensors-0.6.2-cp38-abi3-macosx_10_12_x86_64.whl", hash = "sha256:9c85ede8ec58f120bad982ec47746981e210492a6db876882aa021446af8ffba"}, + {file = "safetensors-0.6.2-cp38-abi3-macosx_11_0_arm64.whl", hash = "sha256:d6675cf4b39c98dbd7d940598028f3742e0375a6b4d4277e76beb0c35f4b843b"}, + {file = "safetensors-0.6.2-cp38-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1d2d2b3ce1e2509c68932ca03ab8f20570920cd9754b05063d4368ee52833ecd"}, + {file = "safetensors-0.6.2-cp38-abi3-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:93de35a18f46b0f5a6a1f9e26d91b442094f2df02e9fd7acf224cfec4238821a"}, + {file = "safetensors-0.6.2-cp38-abi3-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:89a89b505f335640f9120fac65ddeb83e40f1fd081cb8ed88b505bdccec8d0a1"}, + {file = "safetensors-0.6.2-cp38-abi3-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:fc4d0d0b937e04bdf2ae6f70cd3ad51328635fe0e6214aa1fc811f3b576b3bda"}, + {file = "safetensors-0.6.2-cp38-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8045db2c872db8f4cbe3faa0495932d89c38c899c603f21e9b6486951a5ecb8f"}, + {file = "safetensors-0.6.2-cp38-abi3-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:81e67e8bab9878bb568cffbc5f5e655adb38d2418351dc0859ccac158f753e19"}, + {file = "safetensors-0.6.2-cp38-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:b0e4d029ab0a0e0e4fdf142b194514695b1d7d3735503ba700cf36d0fc7136ce"}, + {file = "safetensors-0.6.2-cp38-abi3-musllinux_1_2_armv7l.whl", hash = "sha256:fa48268185c52bfe8771e46325a1e21d317207bcabcb72e65c6e28e9ffeb29c7"}, + {file = "safetensors-0.6.2-cp38-abi3-musllinux_1_2_i686.whl", hash = "sha256:d83c20c12c2d2f465997c51b7ecb00e407e5f94d7dec3ea0cc11d86f60d3fde5"}, + {file = "safetensors-0.6.2-cp38-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:d944cea65fad0ead848b6ec2c37cc0b197194bec228f8020054742190e9312ac"}, + {file = "safetensors-0.6.2-cp38-abi3-win32.whl", hash = "sha256:cab75ca7c064d3911411461151cb69380c9225798a20e712b102edda2542ddb1"}, + {file = "safetensors-0.6.2-cp38-abi3-win_amd64.whl", hash = "sha256:c7b214870df923cbc1593c3faee16bec59ea462758699bd3fee399d00aac072c"}, + {file = "safetensors-0.6.2.tar.gz", hash = "sha256:43ff2aa0e6fa2dc3ea5524ac7ad93a9839256b8703761e76e2d0b2a3fa4f15d9"}, ] [[package]] name = "scikit-learn" -version = "1.7.0" +version = "1.7.1" requires_python = ">=3.10" summary = "A set of python modules for machine learning and data mining" groups = ["default"] @@ -1499,17 +1436,17 @@ dependencies = [ "threadpoolctl>=3.1.0", ] files = [ - {file = "scikit_learn-1.7.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:8ef09b1615e1ad04dc0d0054ad50634514818a8eb3ee3dee99af3bffc0ef5007"}, - {file = "scikit_learn-1.7.0-cp311-cp311-macosx_12_0_arm64.whl", 
hash = "sha256:7d7240c7b19edf6ed93403f43b0fcb0fe95b53bc0b17821f8fb88edab97085ef"}, - {file = "scikit_learn-1.7.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:80bd3bd4e95381efc47073a720d4cbab485fc483966f1709f1fd559afac57ab8"}, - {file = "scikit_learn-1.7.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9dbe48d69aa38ecfc5a6cda6c5df5abef0c0ebdb2468e92437e2053f84abb8bc"}, - {file = "scikit_learn-1.7.0-cp311-cp311-win_amd64.whl", hash = "sha256:8fa979313b2ffdfa049ed07252dc94038def3ecd49ea2a814db5401c07f1ecfa"}, - {file = "scikit_learn-1.7.0.tar.gz", hash = "sha256:c01e869b15aec88e2cdb73d27f15bdbe03bce8e2fb43afbe77c45d399e73a5a3"}, + {file = "scikit_learn-1.7.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:90c8494ea23e24c0fb371afc474618c1019dc152ce4a10e4607e62196113851b"}, + {file = "scikit_learn-1.7.1-cp311-cp311-macosx_12_0_arm64.whl", hash = "sha256:bb870c0daf3bf3be145ec51df8ac84720d9972170786601039f024bf6d61a518"}, + {file = "scikit_learn-1.7.1-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:40daccd1b5623f39e8943ab39735cadf0bdce80e67cdca2adcb5426e987320a8"}, + {file = "scikit_learn-1.7.1-cp311-cp311-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:30d1f413cfc0aa5a99132a554f1d80517563c34a9d3e7c118fde2d273c6fe0f7"}, + {file = "scikit_learn-1.7.1-cp311-cp311-win_amd64.whl", hash = "sha256:c711d652829a1805a95d7fe96654604a8f16eab5a9e9ad87b3e60173415cb650"}, + {file = "scikit_learn-1.7.1.tar.gz", hash = "sha256:24b3f1e976a4665aa74ee0fcaac2b8fccc6ae77c8e07ab25da3ba6d3292b9802"}, ] [[package]] name = "scipy" -version = "1.16.0" +version = "1.16.1" requires_python = ">=3.11" summary = "Fundamental algorithms for scientific computing in Python" groups = ["default"] @@ -1517,21 +1454,21 @@ dependencies = [ "numpy<2.6,>=1.25.2", ] files = [ - {file = "scipy-1.16.0-cp311-cp311-macosx_10_14_x86_64.whl", hash = "sha256:deec06d831b8f6b5fb0b652433be6a09db29e996368ce5911faf673e78d20085"}, - {file = "scipy-1.16.0-cp311-cp311-macosx_12_0_arm64.whl", hash = "sha256:d30c0fe579bb901c61ab4bb7f3eeb7281f0d4c4a7b52dbf563c89da4fd2949be"}, - {file = "scipy-1.16.0-cp311-cp311-macosx_14_0_arm64.whl", hash = "sha256:b2243561b45257f7391d0f49972fca90d46b79b8dbcb9b2cb0f9df928d370ad4"}, - {file = "scipy-1.16.0-cp311-cp311-macosx_14_0_x86_64.whl", hash = "sha256:e6d7dfc148135e9712d87c5f7e4f2ddc1304d1582cb3a7d698bbadedb61c7afd"}, - {file = "scipy-1.16.0-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:90452f6a9f3fe5a2cf3748e7be14f9cc7d9b124dce19667b54f5b429d680d539"}, - {file = "scipy-1.16.0-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:a2f0bf2f58031c8701a8b601df41701d2a7be17c7ffac0a4816aeba89c4cdac8"}, - {file = "scipy-1.16.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:6c4abb4c11fc0b857474241b812ce69ffa6464b4bd8f4ecb786cf240367a36a7"}, - {file = "scipy-1.16.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:b370f8f6ac6ef99815b0d5c9f02e7ade77b33007d74802efc8316c8db98fd11e"}, - {file = "scipy-1.16.0-cp311-cp311-win_amd64.whl", hash = "sha256:a16ba90847249bedce8aa404a83fb8334b825ec4a8e742ce6012a7a5e639f95c"}, - {file = "scipy-1.16.0.tar.gz", hash = "sha256:b5ef54021e832869c8cfb03bc3bf20366cbcd426e02a58e8a58d7584dfbb8f62"}, + {file = "scipy-1.16.1-cp311-cp311-macosx_10_14_x86_64.whl", hash = "sha256:c033fa32bab91dc98ca59d0cf23bb876454e2bb02cbe592d5023138778f70030"}, + {file = "scipy-1.16.1-cp311-cp311-macosx_12_0_arm64.whl", hash = 
"sha256:6e5c2f74e5df33479b5cd4e97a9104c511518fbd979aa9b8f6aec18b2e9ecae7"}, + {file = "scipy-1.16.1-cp311-cp311-macosx_14_0_arm64.whl", hash = "sha256:0a55ffe0ba0f59666e90951971a884d1ff6f4ec3275a48f472cfb64175570f77"}, + {file = "scipy-1.16.1-cp311-cp311-macosx_14_0_x86_64.whl", hash = "sha256:f8a5d6cd147acecc2603fbd382fed6c46f474cccfcf69ea32582e033fb54dcfe"}, + {file = "scipy-1.16.1-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:cb18899127278058bcc09e7b9966d41a5a43740b5bb8dcba401bd983f82e885b"}, + {file = "scipy-1.16.1-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:adccd93a2fa937a27aae826d33e3bfa5edf9aa672376a4852d23a7cd67a2e5b7"}, + {file = "scipy-1.16.1-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:18aca1646a29ee9a0625a1be5637fa798d4d81fdf426481f06d69af828f16958"}, + {file = "scipy-1.16.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:d85495cef541729a70cdddbbf3e6b903421bc1af3e8e3a9a72a06751f33b7c39"}, + {file = "scipy-1.16.1-cp311-cp311-win_amd64.whl", hash = "sha256:226652fca853008119c03a8ce71ffe1b3f6d2844cc1686e8f9806edafae68596"}, + {file = "scipy-1.16.1.tar.gz", hash = "sha256:44c76f9e8b6e8e488a586190ab38016e4ed2f8a038af7cd3defa903c0a2238b3"}, ] [[package]] name = "sentence-transformers" -version = "5.0.0" +version = "5.1.0" requires_python = ">=3.9" summary = "Embeddings, Retrieval, and Reranking" groups = ["default"] @@ -1546,8 +1483,8 @@ dependencies = [ "typing-extensions>=4.5.0", ] files = [ - {file = "sentence_transformers-5.0.0-py3-none-any.whl", hash = "sha256:346240f9cc6b01af387393f03e103998190dfb0826a399d0c38a81a05c7a5d76"}, - {file = "sentence_transformers-5.0.0.tar.gz", hash = "sha256:e5a411845910275fd166bacb01d28b7f79537d3550628ae42309dbdd3d5670d1"}, + {file = "sentence_transformers-5.1.0-py3-none-any.whl", hash = "sha256:fc803929f6a3ce82e2b2c06e0efed7a36de535c633d5ce55efac0b710ea5643e"}, + {file = "sentence_transformers-5.1.0.tar.gz", hash = "sha256:70c7630697cc1c64ffca328d6e8688430ebd134b3c2df03dc07cb3a016b04739"}, ] [[package]] @@ -1596,7 +1533,7 @@ files = [ [[package]] name = "sqlalchemy" -version = "2.0.41" +version = "2.0.42" requires_python = ">=3.7" summary = "Database Abstraction Library" groups = ["default"] @@ -1606,40 +1543,40 @@ dependencies = [ "typing-extensions>=4.6.0", ] files = [ - {file = "sqlalchemy-2.0.41-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:6375cd674fe82d7aa9816d1cb96ec592bac1726c11e0cafbf40eeee9a4516b5f"}, - {file = "sqlalchemy-2.0.41-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:9f8c9fdd15a55d9465e590a402f42082705d66b05afc3ffd2d2eb3c6ba919560"}, - {file = "sqlalchemy-2.0.41-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:32f9dc8c44acdee06c8fc6440db9eae8b4af8b01e4b1aee7bdd7241c22edff4f"}, - {file = "sqlalchemy-2.0.41-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:90c11ceb9a1f482c752a71f203a81858625d8df5746d787a4786bca4ffdf71c6"}, - {file = "sqlalchemy-2.0.41-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:911cc493ebd60de5f285bcae0491a60b4f2a9f0f5c270edd1c4dbaef7a38fc04"}, - {file = "sqlalchemy-2.0.41-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:03968a349db483936c249f4d9cd14ff2c296adfa1290b660ba6516f973139582"}, - {file = "sqlalchemy-2.0.41-cp311-cp311-win32.whl", hash = "sha256:293cd444d82b18da48c9f71cd7005844dbbd06ca19be1ccf6779154439eec0b8"}, - {file = "sqlalchemy-2.0.41-cp311-cp311-win_amd64.whl", hash = 
"sha256:3d3549fc3e40667ec7199033a4e40a2f669898a00a7b18a931d3efb4c7900504"}, - {file = "sqlalchemy-2.0.41-py3-none-any.whl", hash = "sha256:57df5dc6fdb5ed1a88a1ed2195fd31927e705cad62dedd86b46972752a80f576"}, - {file = "sqlalchemy-2.0.41.tar.gz", hash = "sha256:edba70118c4be3c2b1f90754d308d0b79c6fe2c0fdc52d8ddf603916f83f4db9"}, + {file = "sqlalchemy-2.0.42-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:c34100c0b7ea31fbc113c124bcf93a53094f8951c7bf39c45f39d327bad6d1e7"}, + {file = "sqlalchemy-2.0.42-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:ad59dbe4d1252448c19d171dfba14c74e7950b46dc49d015722a4a06bfdab2b0"}, + {file = "sqlalchemy-2.0.42-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f9187498c2149919753a7fd51766ea9c8eecdec7da47c1b955fa8090bc642eaa"}, + {file = "sqlalchemy-2.0.42-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1f092cf83ebcafba23a247f5e03f99f5436e3ef026d01c8213b5eca48ad6efa9"}, + {file = "sqlalchemy-2.0.42-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:fc6afee7e66fdba4f5a68610b487c1f754fccdc53894a9567785932dbb6a265e"}, + {file = "sqlalchemy-2.0.42-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:260ca1d2e5910f1f1ad3fe0113f8fab28657cee2542cb48c2f342ed90046e8ec"}, + {file = "sqlalchemy-2.0.42-cp311-cp311-win32.whl", hash = "sha256:2eb539fd83185a85e5fcd6b19214e1c734ab0351d81505b0f987705ba0a1e231"}, + {file = "sqlalchemy-2.0.42-cp311-cp311-win_amd64.whl", hash = "sha256:9193fa484bf00dcc1804aecbb4f528f1123c04bad6a08d7710c909750fa76aeb"}, + {file = "sqlalchemy-2.0.42-py3-none-any.whl", hash = "sha256:defcdff7e661f0043daa381832af65d616e060ddb54d3fe4476f51df7eaa1835"}, + {file = "sqlalchemy-2.0.42.tar.gz", hash = "sha256:160bedd8a5c28765bd5be4dec2d881e109e33b34922e50a3b881a7681773ac5f"}, ] [[package]] name = "sqlalchemy" -version = "2.0.41" +version = "2.0.42" extras = ["asyncio"] requires_python = ">=3.7" summary = "Database Abstraction Library" groups = ["default"] dependencies = [ "greenlet>=1", - "sqlalchemy==2.0.41", + "sqlalchemy==2.0.42", ] files = [ - {file = "sqlalchemy-2.0.41-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:6375cd674fe82d7aa9816d1cb96ec592bac1726c11e0cafbf40eeee9a4516b5f"}, - {file = "sqlalchemy-2.0.41-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:9f8c9fdd15a55d9465e590a402f42082705d66b05afc3ffd2d2eb3c6ba919560"}, - {file = "sqlalchemy-2.0.41-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:32f9dc8c44acdee06c8fc6440db9eae8b4af8b01e4b1aee7bdd7241c22edff4f"}, - {file = "sqlalchemy-2.0.41-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:90c11ceb9a1f482c752a71f203a81858625d8df5746d787a4786bca4ffdf71c6"}, - {file = "sqlalchemy-2.0.41-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:911cc493ebd60de5f285bcae0491a60b4f2a9f0f5c270edd1c4dbaef7a38fc04"}, - {file = "sqlalchemy-2.0.41-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:03968a349db483936c249f4d9cd14ff2c296adfa1290b660ba6516f973139582"}, - {file = "sqlalchemy-2.0.41-cp311-cp311-win32.whl", hash = "sha256:293cd444d82b18da48c9f71cd7005844dbbd06ca19be1ccf6779154439eec0b8"}, - {file = "sqlalchemy-2.0.41-cp311-cp311-win_amd64.whl", hash = "sha256:3d3549fc3e40667ec7199033a4e40a2f669898a00a7b18a931d3efb4c7900504"}, - {file = "sqlalchemy-2.0.41-py3-none-any.whl", hash = "sha256:57df5dc6fdb5ed1a88a1ed2195fd31927e705cad62dedd86b46972752a80f576"}, - {file = "sqlalchemy-2.0.41.tar.gz", hash = "sha256:edba70118c4be3c2b1f90754d308d0b79c6fe2c0fdc52d8ddf603916f83f4db9"}, 
+ {file = "sqlalchemy-2.0.42-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:c34100c0b7ea31fbc113c124bcf93a53094f8951c7bf39c45f39d327bad6d1e7"}, + {file = "sqlalchemy-2.0.42-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:ad59dbe4d1252448c19d171dfba14c74e7950b46dc49d015722a4a06bfdab2b0"}, + {file = "sqlalchemy-2.0.42-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f9187498c2149919753a7fd51766ea9c8eecdec7da47c1b955fa8090bc642eaa"}, + {file = "sqlalchemy-2.0.42-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1f092cf83ebcafba23a247f5e03f99f5436e3ef026d01c8213b5eca48ad6efa9"}, + {file = "sqlalchemy-2.0.42-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:fc6afee7e66fdba4f5a68610b487c1f754fccdc53894a9567785932dbb6a265e"}, + {file = "sqlalchemy-2.0.42-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:260ca1d2e5910f1f1ad3fe0113f8fab28657cee2542cb48c2f342ed90046e8ec"}, + {file = "sqlalchemy-2.0.42-cp311-cp311-win32.whl", hash = "sha256:2eb539fd83185a85e5fcd6b19214e1c734ab0351d81505b0f987705ba0a1e231"}, + {file = "sqlalchemy-2.0.42-cp311-cp311-win_amd64.whl", hash = "sha256:9193fa484bf00dcc1804aecbb4f528f1123c04bad6a08d7710c909750fa76aeb"}, + {file = "sqlalchemy-2.0.42-py3-none-any.whl", hash = "sha256:defcdff7e661f0043daa381832af65d616e060ddb54d3fe4476f51df7eaa1835"}, + {file = "sqlalchemy-2.0.42.tar.gz", hash = "sha256:160bedd8a5c28765bd5be4dec2d881e109e33b34922e50a3b881a7681773ac5f"}, ] [[package]] @@ -1691,7 +1628,7 @@ files = [ [[package]] name = "tiktoken" -version = "0.9.0" +version = "0.11.0" requires_python = ">=3.9" summary = "tiktoken is a fast BPE tokeniser for use with OpenAI's models" groups = ["default"] @@ -1700,18 +1637,18 @@ dependencies = [ "requests>=2.26.0", ] files = [ - {file = "tiktoken-0.9.0-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:f32cc56168eac4851109e9b5d327637f15fd662aa30dd79f964b7c39fbadd26e"}, - {file = "tiktoken-0.9.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:45556bc41241e5294063508caf901bf92ba52d8ef9222023f83d2483a3055348"}, - {file = "tiktoken-0.9.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:03935988a91d6d3216e2ec7c645afbb3d870b37bcb67ada1943ec48678e7ee33"}, - {file = "tiktoken-0.9.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8b3d80aad8d2c6b9238fc1a5524542087c52b860b10cbf952429ffb714bc1136"}, - {file = "tiktoken-0.9.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:b2a21133be05dc116b1d0372af051cd2c6aa1d2188250c9b553f9fa49301b336"}, - {file = "tiktoken-0.9.0-cp311-cp311-win_amd64.whl", hash = "sha256:11a20e67fdf58b0e2dea7b8654a288e481bb4fc0289d3ad21291f8d0849915fb"}, - {file = "tiktoken-0.9.0.tar.gz", hash = "sha256:d02a5ca6a938e0490e1ff957bc48c8b078c88cb83977be1625b1fd8aac792c5d"}, + {file = "tiktoken-0.11.0-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:4ae374c46afadad0f501046db3da1b36cd4dfbfa52af23c998773682446097cf"}, + {file = "tiktoken-0.11.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:25a512ff25dc6c85b58f5dd4f3d8c674dc05f96b02d66cdacf628d26a4e4866b"}, + {file = "tiktoken-0.11.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2130127471e293d385179c1f3f9cd445070c0772be73cdafb7cec9a3684c0458"}, + {file = "tiktoken-0.11.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:21e43022bf2c33f733ea9b54f6a3f6b4354b909f5a73388fb1b9347ca54a069c"}, + {file = "tiktoken-0.11.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = 
"sha256:adb4e308eb64380dc70fa30493e21c93475eaa11669dea313b6bbf8210bfd013"}, + {file = "tiktoken-0.11.0-cp311-cp311-win_amd64.whl", hash = "sha256:ece6b76bfeeb61a125c44bbefdfccc279b5288e6007fbedc0d32bfec602df2f2"}, + {file = "tiktoken-0.11.0.tar.gz", hash = "sha256:3c518641aee1c52247c2b97e74d8d07d780092af79d5911a6ab5e79359d9b06a"}, ] [[package]] name = "tokenizers" -version = "0.21.2" +version = "0.21.4" requires_python = ">=3.9" summary = "" groups = ["default"] @@ -1719,21 +1656,21 @@ dependencies = [ "huggingface-hub<1.0,>=0.16.4", ] files = [ - {file = "tokenizers-0.21.2-cp39-abi3-macosx_10_12_x86_64.whl", hash = "sha256:342b5dfb75009f2255ab8dec0041287260fed5ce00c323eb6bab639066fef8ec"}, - {file = "tokenizers-0.21.2-cp39-abi3-macosx_11_0_arm64.whl", hash = "sha256:126df3205d6f3a93fea80c7a8a266a78c1bd8dd2fe043386bafdd7736a23e45f"}, - {file = "tokenizers-0.21.2-cp39-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4a32cd81be21168bd0d6a0f0962d60177c447a1aa1b1e48fa6ec9fc728ee0b12"}, - {file = "tokenizers-0.21.2-cp39-abi3-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:8bd8999538c405133c2ab999b83b17c08b7fc1b48c1ada2469964605a709ef91"}, - {file = "tokenizers-0.21.2-cp39-abi3-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:5e9944e61239b083a41cf8fc42802f855e1dca0f499196df37a8ce219abac6eb"}, - {file = "tokenizers-0.21.2-cp39-abi3-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:514cd43045c5d546f01142ff9c79a96ea69e4b5cda09e3027708cb2e6d5762ab"}, - {file = "tokenizers-0.21.2-cp39-abi3-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:b1b9405822527ec1e0f7d8d2fdb287a5730c3a6518189c968254a8441b21faae"}, - {file = "tokenizers-0.21.2-cp39-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:fed9a4d51c395103ad24f8e7eb976811c57fbec2af9f133df471afcd922e5020"}, - {file = "tokenizers-0.21.2-cp39-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:2c41862df3d873665ec78b6be36fcc30a26e3d4902e9dd8608ed61d49a48bc19"}, - {file = "tokenizers-0.21.2-cp39-abi3-musllinux_1_2_armv7l.whl", hash = "sha256:ed21dc7e624e4220e21758b2e62893be7101453525e3d23264081c9ef9a6d00d"}, - {file = "tokenizers-0.21.2-cp39-abi3-musllinux_1_2_i686.whl", hash = "sha256:0e73770507e65a0e0e2a1affd6b03c36e3bc4377bd10c9ccf51a82c77c0fe365"}, - {file = "tokenizers-0.21.2-cp39-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:106746e8aa9014a12109e58d540ad5465b4c183768ea96c03cbc24c44d329958"}, - {file = "tokenizers-0.21.2-cp39-abi3-win32.whl", hash = "sha256:cabda5a6d15d620b6dfe711e1af52205266d05b379ea85a8a301b3593c60e962"}, - {file = "tokenizers-0.21.2-cp39-abi3-win_amd64.whl", hash = "sha256:58747bb898acdb1007f37a7bbe614346e98dc28708ffb66a3fd50ce169ac6c98"}, - {file = "tokenizers-0.21.2.tar.gz", hash = "sha256:fdc7cffde3e2113ba0e6cc7318c40e3438a4d74bbc62bf04bcc63bdfb082ac77"}, + {file = "tokenizers-0.21.4-cp39-abi3-macosx_10_12_x86_64.whl", hash = "sha256:2ccc10a7c3bcefe0f242867dc914fc1226ee44321eb618cfe3019b5df3400133"}, + {file = "tokenizers-0.21.4-cp39-abi3-macosx_11_0_arm64.whl", hash = "sha256:5e2f601a8e0cd5be5cc7506b20a79112370b9b3e9cb5f13f68ab11acd6ca7d60"}, + {file = "tokenizers-0.21.4-cp39-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:39b376f5a1aee67b4d29032ee85511bbd1b99007ec735f7f35c8a2eb104eade5"}, + {file = "tokenizers-0.21.4-cp39-abi3-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:2107ad649e2cda4488d41dfd031469e9da3fcbfd6183e74e4958fa729ffbf9c6"}, + {file = 
"tokenizers-0.21.4-cp39-abi3-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3c73012da95afafdf235ba80047699df4384fdc481527448a078ffd00e45a7d9"}, + {file = "tokenizers-0.21.4-cp39-abi3-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:f23186c40395fc390d27f519679a58023f368a0aad234af145e0f39ad1212732"}, + {file = "tokenizers-0.21.4-cp39-abi3-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:cc88bb34e23a54cc42713d6d98af5f1bf79c07653d24fe984d2d695ba2c922a2"}, + {file = "tokenizers-0.21.4-cp39-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:51b7eabb104f46c1c50b486520555715457ae833d5aee9ff6ae853d1130506ff"}, + {file = "tokenizers-0.21.4-cp39-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:714b05b2e1af1288bd1bc56ce496c4cebb64a20d158ee802887757791191e6e2"}, + {file = "tokenizers-0.21.4-cp39-abi3-musllinux_1_2_armv7l.whl", hash = "sha256:1340ff877ceedfa937544b7d79f5b7becf33a4cfb58f89b3b49927004ef66f78"}, + {file = "tokenizers-0.21.4-cp39-abi3-musllinux_1_2_i686.whl", hash = "sha256:3c1f4317576e465ac9ef0d165b247825a2a4078bcd01cba6b54b867bdf9fdd8b"}, + {file = "tokenizers-0.21.4-cp39-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:c212aa4e45ec0bb5274b16b6f31dd3f1c41944025c2358faaa5782c754e84c24"}, + {file = "tokenizers-0.21.4-cp39-abi3-win32.whl", hash = "sha256:6c42a930bc5f4c47f4ea775c91de47d27910881902b0f20e4990ebe045a415d0"}, + {file = "tokenizers-0.21.4-cp39-abi3-win_amd64.whl", hash = "sha256:475d807a5c3eb72c59ad9b5fcdb254f6e17f53dfcbb9903233b0dfa9c943b597"}, + {file = "tokenizers-0.21.4.tar.gz", hash = "sha256:fa23f85fbc9a02ec5c6978da172cdcbac23498c3ca9f3645c5c68740ac007880"}, ] [[package]] @@ -1772,13 +1709,13 @@ files = [ [[package]] name = "transformers" -version = "4.53.2" +version = "4.55.0" requires_python = ">=3.9.0" summary = "State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow" groups = ["default"] dependencies = [ "filelock", - "huggingface-hub<1.0,>=0.30.0", + "huggingface-hub<1.0,>=0.34.0", "numpy>=1.17", "packaging>=20.0", "pyyaml>=5.1", @@ -1789,13 +1726,13 @@ dependencies = [ "tqdm>=4.27", ] files = [ - {file = "transformers-4.53.2-py3-none-any.whl", hash = "sha256:db8f4819bb34f000029c73c3c557e7d06fc1b8e612ec142eecdae3947a9c78bf"}, - {file = "transformers-4.53.2.tar.gz", hash = "sha256:6c3ed95edfb1cba71c4245758f1b4878c93bf8cde77d076307dacb2cbbd72be2"}, + {file = "transformers-4.55.0-py3-none-any.whl", hash = "sha256:29d9b8800e32a4a831bb16efb5f762f6a9742fef9fce5d693ed018d19b106490"}, + {file = "transformers-4.55.0.tar.gz", hash = "sha256:15aa138a05d07a15b30d191ea2c45e23061ebf9fcc928a1318e03fe2234f3ae1"}, ] [[package]] name = "types-requests" -version = "2.32.4.20250611" +version = "2.32.4.20250809" requires_python = ">=3.9" summary = "Typing stubs for requests" groups = ["dev"] @@ -1803,8 +1740,8 @@ dependencies = [ "urllib3>=2", ] files = [ - {file = "types_requests-2.32.4.20250611-py3-none-any.whl", hash = "sha256:ad2fe5d3b0cb3c2c902c8815a70e7fb2302c4b8c1f77bdcd738192cdb3878072"}, - {file = "types_requests-2.32.4.20250611.tar.gz", hash = "sha256:741c8777ed6425830bf51e54d6abe245f79b4dcb9019f1622b773463946bf826"}, + {file = "types_requests-2.32.4.20250809-py3-none-any.whl", hash = "sha256:f73d1832fb519ece02c85b1f09d5f0dd3108938e7d47e7f94bbfa18a6782b163"}, + {file = "types_requests-2.32.4.20250809.tar.gz", hash = "sha256:d8060de1c8ee599311f56ff58010fb4902f462a1470802cf9f6ed27bc46c4df3"}, ] [[package]] diff --git a/pdm.lock.gpu b/pdm.lock.gpu index f94d30c1..9c8d87a3 100644 --- a/pdm.lock.gpu +++ b/pdm.lock.gpu 
@@ -12,7 +12,7 @@ requires_python = "==3.11.*" [[package]] name = "accelerate" -version = "1.8.1" +version = "1.10.0" requires_python = ">=3.9.0" summary = "Accelerate" groups = ["default"] @@ -26,8 +26,8 @@ dependencies = [ "torch>=2.0.0", ] files = [ - {file = "accelerate-1.8.1-py3-none-any.whl", hash = "sha256:c47b8994498875a2b1286e945bd4d20e476956056c7941d512334f4eb44ff991"}, - {file = "accelerate-1.8.1.tar.gz", hash = "sha256:f60df931671bc4e75077b852990469d4991ce8bd3a58e72375c3c95132034db9"}, + {file = "accelerate-1.10.0-py3-none-any.whl", hash = "sha256:260a72b560e100e839b517a331ec85ed495b3889d12886e79d1913071993c5a3"}, + {file = "accelerate-1.10.0.tar.gz", hash = "sha256:8270568fda9036b5cccdc09703fef47872abccd56eb5f6d53b54ea5fb7581496"}, ] [[package]] @@ -43,7 +43,7 @@ files = [ [[package]] name = "aiohttp" -version = "3.12.14" +version = "3.12.15" requires_python = ">=3.9" summary = "Async http client/server framework (asyncio)" groups = ["default"] @@ -58,24 +58,24 @@ dependencies = [ "yarl<2.0,>=1.17.0", ] files = [ - {file = "aiohttp-3.12.14-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:f4552ff7b18bcec18b60a90c6982049cdb9dac1dba48cf00b97934a06ce2e597"}, - {file = "aiohttp-3.12.14-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:8283f42181ff6ccbcf25acaae4e8ab2ff7e92b3ca4a4ced73b2c12d8cd971393"}, - {file = "aiohttp-3.12.14-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:040afa180ea514495aaff7ad34ec3d27826eaa5d19812730fe9e529b04bb2179"}, - {file = "aiohttp-3.12.14-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b413c12f14c1149f0ffd890f4141a7471ba4b41234fe4fd4a0ff82b1dc299dbb"}, - {file = "aiohttp-3.12.14-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:1d6f607ce2e1a93315414e3d448b831238f1874b9968e1195b06efaa5c87e245"}, - {file = "aiohttp-3.12.14-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:565e70d03e924333004ed101599902bba09ebb14843c8ea39d657f037115201b"}, - {file = "aiohttp-3.12.14-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:4699979560728b168d5ab63c668a093c9570af2c7a78ea24ca5212c6cdc2b641"}, - {file = "aiohttp-3.12.14-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ad5fdf6af93ec6c99bf800eba3af9a43d8bfd66dce920ac905c817ef4a712afe"}, - {file = "aiohttp-3.12.14-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:4ac76627c0b7ee0e80e871bde0d376a057916cb008a8f3ffc889570a838f5cc7"}, - {file = "aiohttp-3.12.14-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:798204af1180885651b77bf03adc903743a86a39c7392c472891649610844635"}, - {file = "aiohttp-3.12.14-cp311-cp311-musllinux_1_2_armv7l.whl", hash = "sha256:4f1205f97de92c37dd71cf2d5bcfb65fdaed3c255d246172cce729a8d849b4da"}, - {file = "aiohttp-3.12.14-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:76ae6f1dd041f85065d9df77c6bc9c9703da9b5c018479d20262acc3df97d419"}, - {file = "aiohttp-3.12.14-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:a194ace7bc43ce765338ca2dfb5661489317db216ea7ea700b0332878b392cab"}, - {file = "aiohttp-3.12.14-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:16260e8e03744a6fe3fcb05259eeab8e08342c4c33decf96a9dad9f1187275d0"}, - {file = "aiohttp-3.12.14-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:8c779e5ebbf0e2e15334ea404fcce54009dc069210164a244d2eac8352a44b28"}, - {file = "aiohttp-3.12.14-cp311-cp311-win32.whl", hash = 
"sha256:a289f50bf1bd5be227376c067927f78079a7bdeccf8daa6a9e65c38bae14324b"}, - {file = "aiohttp-3.12.14-cp311-cp311-win_amd64.whl", hash = "sha256:0b8a69acaf06b17e9c54151a6c956339cf46db4ff72b3ac28516d0f7068f4ced"}, - {file = "aiohttp-3.12.14.tar.gz", hash = "sha256:6e06e120e34d93100de448fd941522e11dafa78ef1a893c179901b7d66aa29f2"}, + {file = "aiohttp-3.12.15-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:d3ce17ce0220383a0f9ea07175eeaa6aa13ae5a41f30bc61d84df17f0e9b1117"}, + {file = "aiohttp-3.12.15-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:010cc9bbd06db80fe234d9003f67e97a10fe003bfbedb40da7d71c1008eda0fe"}, + {file = "aiohttp-3.12.15-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:3f9d7c55b41ed687b9d7165b17672340187f87a773c98236c987f08c858145a9"}, + {file = "aiohttp-3.12.15-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bc4fbc61bb3548d3b482f9ac7ddd0f18c67e4225aaa4e8552b9f1ac7e6bda9e5"}, + {file = "aiohttp-3.12.15-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:7fbc8a7c410bb3ad5d595bb7118147dfbb6449d862cc1125cf8867cb337e8728"}, + {file = "aiohttp-3.12.15-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:74dad41b3458dbb0511e760fb355bb0b6689e0630de8a22b1b62a98777136e16"}, + {file = "aiohttp-3.12.15-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:3b6f0af863cf17e6222b1735a756d664159e58855da99cfe965134a3ff63b0b0"}, + {file = "aiohttp-3.12.15-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b5b7fe4972d48a4da367043b8e023fb70a04d1490aa7d68800e465d1b97e493b"}, + {file = "aiohttp-3.12.15-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6443cca89553b7a5485331bc9bedb2342b08d073fa10b8c7d1c60579c4a7b9bd"}, + {file = "aiohttp-3.12.15-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:6c5f40ec615e5264f44b4282ee27628cea221fcad52f27405b80abb346d9f3f8"}, + {file = "aiohttp-3.12.15-cp311-cp311-musllinux_1_2_armv7l.whl", hash = "sha256:2abbb216a1d3a2fe86dbd2edce20cdc5e9ad0be6378455b05ec7f77361b3ab50"}, + {file = "aiohttp-3.12.15-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:db71ce547012a5420a39c1b744d485cfb823564d01d5d20805977f5ea1345676"}, + {file = "aiohttp-3.12.15-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:ced339d7c9b5030abad5854aa5413a77565e5b6e6248ff927d3e174baf3badf7"}, + {file = "aiohttp-3.12.15-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:7c7dd29c7b5bda137464dc9bfc738d7ceea46ff70309859ffde8c022e9b08ba7"}, + {file = "aiohttp-3.12.15-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:421da6fd326460517873274875c6c5a18ff225b40da2616083c5a34a7570b685"}, + {file = "aiohttp-3.12.15-cp311-cp311-win32.whl", hash = "sha256:4420cf9d179ec8dfe4be10e7d0fe47d6d606485512ea2265b0d8c5113372771b"}, + {file = "aiohttp-3.12.15-cp311-cp311-win_amd64.whl", hash = "sha256:edd533a07da85baa4b423ee8839e3e91681c7bfa19b04260a469ee94b778bf6d"}, + {file = "aiohttp-3.12.15.tar.gz", hash = "sha256:4fc61385e9c98d72fcdf47e6dd81833f47b2f77c114c29cd64a361be57a763a2"}, ] [[package]] @@ -123,9 +123,9 @@ files = [ [[package]] name = "anyio" -version = "4.9.0" +version = "4.10.0" requires_python = ">=3.9" -summary = "High level compatibility layer for multiple asynchronous event loop implementations" +summary = "High-level concurrency and networking framework on top of asyncio or Trio" groups = ["default"] dependencies = [ "exceptiongroup>=1.0.2; python_version < \"3.11\"", @@ 
-134,8 +134,8 @@ dependencies = [ "typing-extensions>=4.5; python_version < \"3.13\"", ] files = [ - {file = "anyio-4.9.0-py3-none-any.whl", hash = "sha256:9f76d541cad6e36af7beb62e978876f3b41e3e04f2c1fbf0884604c0a9c4d93c"}, - {file = "anyio-4.9.0.tar.gz", hash = "sha256:673c0c244e15788651a4ff38710fea9675823028a6f08a5eda409e0c9840a028"}, + {file = "anyio-4.10.0-py3-none-any.whl", hash = "sha256:60e474ac86736bbfd6f210f7a61218939c318f43f9972497381f1c5e930ed3d1"}, + {file = "anyio-4.10.0.tar.gz", hash = "sha256:3f3fae35c96039744587aa5b8371e7e8e603c0702999535961dd336026973ba6"}, ] [[package]] @@ -151,7 +151,7 @@ files = [ [[package]] name = "banks" -version = "2.1.3" +version = "2.2.0" requires_python = ">=3.9" summary = "A prompt programming language" groups = ["default"] @@ -164,8 +164,8 @@ dependencies = [ "pydantic", ] files = [ - {file = "banks-2.1.3-py3-none-any.whl", hash = "sha256:9e1217dc977e6dd1ce42c5ff48e9bcaf238d788c81b42deb6a555615ffcffbab"}, - {file = "banks-2.1.3.tar.gz", hash = "sha256:c0dd2cb0c5487274a513a552827e6a8ddbd0ab1a1b967f177e71a6e4748a3ed2"}, + {file = "banks-2.2.0-py3-none-any.whl", hash = "sha256:963cd5c85a587b122abde4f4064078def35c50c688c1b9d36f43c92503854e7d"}, + {file = "banks-2.2.0.tar.gz", hash = "sha256:d1446280ce6e00301e3e952dd754fd8cee23ff277d29ed160994a84d0d7ffe62"}, ] [[package]] @@ -209,37 +209,35 @@ files = [ [[package]] name = "certifi" -version = "2025.7.9" +version = "2025.8.3" requires_python = ">=3.7" summary = "Python package for providing Mozilla's CA Bundle." groups = ["default"] files = [ - {file = "certifi-2025.7.9-py3-none-any.whl", hash = "sha256:d842783a14f8fdd646895ac26f719a061408834473cfc10203f6a575beb15d39"}, - {file = "certifi-2025.7.9.tar.gz", hash = "sha256:c1d2ec05395148ee10cf672ffc28cd37ea0ab0d99f9cc74c43e588cbd111b079"}, + {file = "certifi-2025.8.3-py3-none-any.whl", hash = "sha256:f6c12493cfb1b06ba2ff328595af9350c65d6644968e5d3a2ffd78699af217a5"}, + {file = "certifi-2025.8.3.tar.gz", hash = "sha256:e564105f78ded564e3ae7c923924435e1daa7463faeab5bb932bc53ffae63407"}, ] [[package]] name = "charset-normalizer" -version = "3.4.2" +version = "3.4.3" requires_python = ">=3.7" summary = "The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet." 
groups = ["default"] files = [ - {file = "charset_normalizer-3.4.2-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:be1e352acbe3c78727a16a455126d9ff83ea2dfdcbc83148d2982305a04714c2"}, - {file = "charset_normalizer-3.4.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:aa88ca0b1932e93f2d961bf3addbb2db902198dca337d88c89e1559e066e7645"}, - {file = "charset_normalizer-3.4.2-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:d524ba3f1581b35c03cb42beebab4a13e6cdad7b36246bd22541fa585a56cccd"}, - {file = "charset_normalizer-3.4.2-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:28a1005facc94196e1fb3e82a3d442a9d9110b8434fc1ded7a24a2983c9888d8"}, - {file = "charset_normalizer-3.4.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:fdb20a30fe1175ecabed17cbf7812f7b804b8a315a25f24678bcdf120a90077f"}, - {file = "charset_normalizer-3.4.2-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0f5d9ed7f254402c9e7d35d2f5972c9bbea9040e99cd2861bd77dc68263277c7"}, - {file = "charset_normalizer-3.4.2-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:efd387a49825780ff861998cd959767800d54f8308936b21025326de4b5a42b9"}, - {file = "charset_normalizer-3.4.2-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:f0aa37f3c979cf2546b73e8222bbfa3dc07a641585340179d768068e3455e544"}, - {file = "charset_normalizer-3.4.2-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:e70e990b2137b29dc5564715de1e12701815dacc1d056308e2b17e9095372a82"}, - {file = "charset_normalizer-3.4.2-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:0c8c57f84ccfc871a48a47321cfa49ae1df56cd1d965a09abe84066f6853b9c0"}, - {file = "charset_normalizer-3.4.2-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:6b66f92b17849b85cad91259efc341dce9c1af48e2173bf38a85c6329f1033e5"}, - {file = "charset_normalizer-3.4.2-cp311-cp311-win32.whl", hash = "sha256:daac4765328a919a805fa5e2720f3e94767abd632ae410a9062dff5412bae65a"}, - {file = "charset_normalizer-3.4.2-cp311-cp311-win_amd64.whl", hash = "sha256:e53efc7c7cee4c1e70661e2e112ca46a575f90ed9ae3fef200f2a25e954f4b28"}, - {file = "charset_normalizer-3.4.2-py3-none-any.whl", hash = "sha256:7f56930ab0abd1c45cd15be65cc741c28b1c9a34876ce8c17a2fa107810c0af0"}, - {file = "charset_normalizer-3.4.2.tar.gz", hash = "sha256:5baececa9ecba31eff645232d59845c07aa030f0c81ee70184a90d35099a0e63"}, + {file = "charset_normalizer-3.4.3-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:b256ee2e749283ef3ddcff51a675ff43798d92d746d1a6e4631bf8c707d22d0b"}, + {file = "charset_normalizer-3.4.3-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:13faeacfe61784e2559e690fc53fa4c5ae97c6fcedb8eb6fb8d0a15b475d2c64"}, + {file = "charset_normalizer-3.4.3-cp311-cp311-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:00237675befef519d9af72169d8604a067d92755e84fe76492fef5441db05b91"}, + {file = "charset_normalizer-3.4.3-cp311-cp311-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:585f3b2a80fbd26b048a0be90c5aae8f06605d3c92615911c3a2b03a8a3b796f"}, + {file = "charset_normalizer-3.4.3-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:0e78314bdc32fa80696f72fa16dc61168fda4d6a0c014e0380f9d02f0e5d8a07"}, + {file = "charset_normalizer-3.4.3-cp311-cp311-musllinux_1_2_aarch64.whl", hash = 
"sha256:96b2b3d1a83ad55310de8c7b4a2d04d9277d5591f40761274856635acc5fcb30"}, + {file = "charset_normalizer-3.4.3-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:939578d9d8fd4299220161fdd76e86c6a251987476f5243e8864a7844476ba14"}, + {file = "charset_normalizer-3.4.3-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:fd10de089bcdcd1be95a2f73dbe6254798ec1bda9f450d5828c96f93e2536b9c"}, + {file = "charset_normalizer-3.4.3-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:1e8ac75d72fa3775e0b7cb7e4629cec13b7514d928d15ef8ea06bca03ef01cae"}, + {file = "charset_normalizer-3.4.3-cp311-cp311-win32.whl", hash = "sha256:6cf8fd4c04756b6b60146d98cd8a77d0cdae0e1ca20329da2ac85eed779b6849"}, + {file = "charset_normalizer-3.4.3-cp311-cp311-win_amd64.whl", hash = "sha256:31a9a6f775f9bcd865d88ee350f0ffb0e25936a7f930ca98995c05abf1faf21c"}, + {file = "charset_normalizer-3.4.3-py3-none-any.whl", hash = "sha256:ce571ab16d890d23b5c278547ba694193a45011ff86a9162a71307ed9f86759a"}, + {file = "charset_normalizer-3.4.3.tar.gz", hash = "sha256:6fce4b8500244f6fcb71465d4a4930d132ba9ab8e71a7859e6a5d59851068d14"}, ] [[package]] @@ -330,7 +328,7 @@ files = [ [[package]] name = "faiss-cpu" -version = "1.11.0" +version = "1.11.0.post1" requires_python = ">=3.9" summary = "A library for efficient similarity search and clustering of dense vectors." groups = ["default"] @@ -339,12 +337,15 @@ dependencies = [ "packaging", ] files = [ - {file = "faiss_cpu-1.11.0-cp311-cp311-macosx_14_0_arm64.whl", hash = "sha256:a90d1c81d0ecf2157e1d2576c482d734d10760652a5b2fcfa269916611e41f1c"}, - {file = "faiss_cpu-1.11.0-cp311-cp311-macosx_14_0_x86_64.whl", hash = "sha256:2c39a388b059fb82cd97fbaa7310c3580ced63bf285be531453bfffbe89ea3dd"}, - {file = "faiss_cpu-1.11.0-cp311-cp311-manylinux_2_28_aarch64.whl", hash = "sha256:a4e3433ffc7f9b8707a7963db04f8676a5756868d325644db2db9d67a618b7a0"}, - {file = "faiss_cpu-1.11.0-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:926645f1b6829623bc88e93bc8ca872504d604718ada3262e505177939aaee0a"}, - {file = "faiss_cpu-1.11.0-cp311-cp311-win_amd64.whl", hash = "sha256:931db6ed2197c03a7fdf833b057c13529afa2cec8a827aa081b7f0543e4e671b"}, - {file = "faiss_cpu-1.11.0.tar.gz", hash = "sha256:44877b896a2b30a61e35ea4970d008e8822545cb340eca4eff223ac7f40a1db9"}, + {file = "faiss_cpu-1.11.0.post1-cp311-cp311-macosx_13_0_x86_64.whl", hash = "sha256:2c8c384e65cc1b118d2903d9f3a27cd35f6c45337696fc0437f71e05f732dbc0"}, + {file = "faiss_cpu-1.11.0.post1-cp311-cp311-macosx_14_0_arm64.whl", hash = "sha256:36af46945274ed14751b788673125a8a4900408e4837a92371b0cad5708619ea"}, + {file = "faiss_cpu-1.11.0.post1-cp311-cp311-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:1b15412b22a05865433aecfdebf7664b9565bd49b600d23a0a27c74a5526893e"}, + {file = "faiss_cpu-1.11.0.post1-cp311-cp311-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:81c169ea74213b2c055b8240befe7e9b42a1f3d97cda5238b3b401035ce1a18b"}, + {file = "faiss_cpu-1.11.0.post1-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:0794eb035c6075e931996cf2b2703fbb3f47c8c34bc2d727819ddc3e5e486a31"}, + {file = "faiss_cpu-1.11.0.post1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:18d2221014813dc9a4236e47f9c4097a71273fbf17c3fe66243e724e2018a67a"}, + {file = "faiss_cpu-1.11.0.post1-cp311-cp311-win_amd64.whl", hash = "sha256:3ce8a8984a7dcc689fd192c69a476ecd0b2611c61f96fe0799ff432aa73ff79c"}, + {file = "faiss_cpu-1.11.0.post1-cp311-cp311-win_arm64.whl", hash = "sha256:8384e05afb7c7968e93b81566759f862e744c0667b175086efb3d8b20949b39f"}, + 
{file = "faiss_cpu-1.11.0.post1.tar.gz", hash = "sha256:06b1ea9ddec9e4d9a41c8ef7478d493b08d770e9a89475056e963081eed757d1"}, ] [[package]] @@ -398,37 +399,37 @@ files = [ [[package]] name = "fsspec" -version = "2025.5.1" +version = "2025.7.0" requires_python = ">=3.9" summary = "File-system specification" groups = ["default", "gpu"] files = [ - {file = "fsspec-2025.5.1-py3-none-any.whl", hash = "sha256:24d3a2e663d5fc735ab256263c4075f374a174c3410c0b25e5bd1970bceaa462"}, - {file = "fsspec-2025.5.1.tar.gz", hash = "sha256:2e55e47a540b91843b755e83ded97c6e897fa0942b11490113f09e9c443c2475"}, + {file = "fsspec-2025.7.0-py3-none-any.whl", hash = "sha256:8b012e39f63c7d5f10474de957f3ab793b47b45ae7d39f2fb735f8bbe25c0e21"}, + {file = "fsspec-2025.7.0.tar.gz", hash = "sha256:786120687ffa54b8283d942929540d8bc5ccfa820deb555a2b5d0ed2b737bf58"}, ] [[package]] name = "greenlet" -version = "3.2.3" +version = "3.2.4" requires_python = ">=3.9" summary = "Lightweight in-process concurrent programming" groups = ["default"] files = [ - {file = "greenlet-3.2.3-cp311-cp311-macosx_11_0_universal2.whl", hash = "sha256:784ae58bba89fa1fa5733d170d42486580cab9decda3484779f4759345b29822"}, - {file = "greenlet-3.2.3-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:0921ac4ea42a5315d3446120ad48f90c3a6b9bb93dd9b3cf4e4d84a66e42de83"}, - {file = "greenlet-3.2.3-cp311-cp311-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:d2971d93bb99e05f8c2c0c2f4aa9484a18d98c4c3bd3c62b65b7e6ae33dfcfaf"}, - {file = "greenlet-3.2.3-cp311-cp311-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:c667c0bf9d406b77a15c924ef3285e1e05250948001220368e039b6aa5b5034b"}, - {file = "greenlet-3.2.3-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:592c12fb1165be74592f5de0d70f82bc5ba552ac44800d632214b76089945147"}, - {file = "greenlet-3.2.3-cp311-cp311-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:29e184536ba333003540790ba29829ac14bb645514fbd7e32af331e8202a62a5"}, - {file = "greenlet-3.2.3-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:93c0bb79844a367782ec4f429d07589417052e621aa39a5ac1fb99c5aa308edc"}, - {file = "greenlet-3.2.3-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:751261fc5ad7b6705f5f76726567375bb2104a059454e0226e1eef6c756748ba"}, - {file = "greenlet-3.2.3-cp311-cp311-win_amd64.whl", hash = "sha256:83a8761c75312361aa2b5b903b79da97f13f556164a7dd2d5448655425bd4c34"}, - {file = "greenlet-3.2.3.tar.gz", hash = "sha256:8b0dd8ae4c0d6f5e54ee55ba935eeb3d735a9b58a8a1e5b5cbab64e01a39f365"}, + {file = "greenlet-3.2.4-cp311-cp311-macosx_11_0_universal2.whl", hash = "sha256:96378df1de302bc38e99c3a9aa311967b7dc80ced1dcc6f171e99842987882a2"}, + {file = "greenlet-3.2.4-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:1ee8fae0519a337f2329cb78bd7a8e128ec0f881073d43f023c7b8d4831d5246"}, + {file = "greenlet-3.2.4-cp311-cp311-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:94abf90142c2a18151632371140b3dba4dee031633fe614cb592dbb6c9e17bc3"}, + {file = "greenlet-3.2.4-cp311-cp311-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:4d1378601b85e2e5171b99be8d2dc85f594c79967599328f95c1dc1a40f1c633"}, + {file = "greenlet-3.2.4-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:0db5594dce18db94f7d1650d7489909b57afde4c580806b8d9203b6e79cdc079"}, + {file = "greenlet-3.2.4-cp311-cp311-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = 
"sha256:2523e5246274f54fdadbce8494458a2ebdcdbc7b802318466ac5606d3cded1f8"}, + {file = "greenlet-3.2.4-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:1987de92fec508535687fb807a5cea1560f6196285a4cde35c100b8cd632cc52"}, + {file = "greenlet-3.2.4-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:55e9c5affaa6775e2c6b67659f3a71684de4c549b3dd9afca3bc773533d284fa"}, + {file = "greenlet-3.2.4-cp311-cp311-win_amd64.whl", hash = "sha256:9c40adce87eaa9ddb593ccb0fa6a07caf34015a29bf8d344811665b573138db9"}, + {file = "greenlet-3.2.4.tar.gz", hash = "sha256:0dca0d95ff849f9a364385f36ab49f50065d76964944638be9691e1832e9f86d"}, ] [[package]] name = "griffe" -version = "1.7.3" +version = "1.11.0" requires_python = ">=3.9" summary = "Signatures for entire Python programs. Extract the structure, the frame, the skeleton of your project, to generate API documentation or find breaking changes in your API." groups = ["default"] @@ -436,8 +437,8 @@ dependencies = [ "colorama>=0.4", ] files = [ - {file = "griffe-1.7.3-py3-none-any.whl", hash = "sha256:c6b3ee30c2f0f17f30bcdef5068d6ab7a2a4f1b8bf1a3e74b56fffd21e1c5f75"}, - {file = "griffe-1.7.3.tar.gz", hash = "sha256:52ee893c6a3a968b639ace8015bec9d36594961e156e23315c8e8e51401fa50b"}, + {file = "griffe-1.11.0-py3-none-any.whl", hash = "sha256:dc56cc6af8d322807ecdb484b39838c7a51ca750cf21ccccf890500c4d6389d8"}, + {file = "griffe-1.11.0.tar.gz", hash = "sha256:c153b5bc63ca521f059e9451533a67e44a9d06cf9bf1756e4298bda5bd3262e8"}, ] [[package]] @@ -453,20 +454,20 @@ files = [ [[package]] name = "hf-xet" -version = "1.1.5" +version = "1.1.7" requires_python = ">=3.8" summary = "Fast transfer of large files with the Hugging Face Hub." groups = ["default"] marker = "platform_machine == \"x86_64\" or platform_machine == \"amd64\" or platform_machine == \"arm64\" or platform_machine == \"aarch64\"" files = [ - {file = "hf_xet-1.1.5-cp37-abi3-macosx_10_12_x86_64.whl", hash = "sha256:f52c2fa3635b8c37c7764d8796dfa72706cc4eded19d638331161e82b0792e23"}, - {file = "hf_xet-1.1.5-cp37-abi3-macosx_11_0_arm64.whl", hash = "sha256:9fa6e3ee5d61912c4a113e0708eaaef987047616465ac7aa30f7121a48fc1af8"}, - {file = "hf_xet-1.1.5-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:fc874b5c843e642f45fd85cda1ce599e123308ad2901ead23d3510a47ff506d1"}, - {file = "hf_xet-1.1.5-cp37-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:dbba1660e5d810bd0ea77c511a99e9242d920790d0e63c0e4673ed36c4022d18"}, - {file = "hf_xet-1.1.5-cp37-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:ab34c4c3104133c495785d5d8bba3b1efc99de52c02e759cf711a91fd39d3a14"}, - {file = "hf_xet-1.1.5-cp37-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:83088ecea236d5113de478acb2339f92c95b4fb0462acaa30621fac02f5a534a"}, - {file = "hf_xet-1.1.5-cp37-abi3-win_amd64.whl", hash = "sha256:73e167d9807d166596b4b2f0b585c6d5bd84a26dea32843665a8b58f6edba245"}, - {file = "hf_xet-1.1.5.tar.gz", hash = "sha256:69ebbcfd9ec44fdc2af73441619eeb06b94ee34511bbcf57cd423820090f5694"}, + {file = "hf_xet-1.1.7-cp37-abi3-macosx_10_12_x86_64.whl", hash = "sha256:60dae4b44d520819e54e216a2505685248ec0adbdb2dd4848b17aa85a0375cde"}, + {file = "hf_xet-1.1.7-cp37-abi3-macosx_11_0_arm64.whl", hash = "sha256:b109f4c11e01c057fc82004c9e51e6cdfe2cb230637644ade40c599739067b2e"}, + {file = "hf_xet-1.1.7-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6efaaf1a5a9fc3a501d3e71e88a6bfebc69ee3a716d0e713a931c8b8d920038f"}, + {file = "hf_xet-1.1.7-cp37-abi3-manylinux_2_28_aarch64.whl", hash = 
"sha256:751571540f9c1fbad9afcf222a5fb96daf2384bf821317b8bfb0c59d86078513"}, + {file = "hf_xet-1.1.7-cp37-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:18b61bbae92d56ae731b92087c44efcac216071182c603fc535f8e29ec4b09b8"}, + {file = "hf_xet-1.1.7-cp37-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:713f2bff61b252f8523739969f247aa354ad8e6d869b8281e174e2ea1bb8d604"}, + {file = "hf_xet-1.1.7-cp37-abi3-win_amd64.whl", hash = "sha256:2e356da7d284479ae0f1dea3cf5a2f74fdf925d6dca84ac4341930d892c7cb34"}, + {file = "hf_xet-1.1.7.tar.gz", hash = "sha256:20cec8db4561338824a3b5f8c19774055b04a8df7fff0cb1ff2cb1a0c1607b80"}, ] [[package]] @@ -503,14 +504,14 @@ files = [ [[package]] name = "huggingface-hub" -version = "0.33.4" +version = "0.34.4" requires_python = ">=3.8.0" summary = "Client library to download and publish models, datasets and other repos on the huggingface.co hub" groups = ["default"] dependencies = [ "filelock", "fsspec>=2023.5.0", - "hf-xet<2.0.0,>=1.1.2; platform_machine == \"x86_64\" or platform_machine == \"amd64\" or platform_machine == \"arm64\" or platform_machine == \"aarch64\"", + "hf-xet<2.0.0,>=1.1.3; platform_machine == \"x86_64\" or platform_machine == \"amd64\" or platform_machine == \"arm64\" or platform_machine == \"aarch64\"", "packaging>=20.9", "pyyaml>=5.1", "requests", @@ -518,24 +519,24 @@ dependencies = [ "typing-extensions>=3.7.4.3", ] files = [ - {file = "huggingface_hub-0.33.4-py3-none-any.whl", hash = "sha256:09f9f4e7ca62547c70f8b82767eefadd2667f4e116acba2e3e62a5a81815a7bb"}, - {file = "huggingface_hub-0.33.4.tar.gz", hash = "sha256:6af13478deae120e765bfd92adad0ae1aec1ad8c439b46f23058ad5956cbca0a"}, + {file = "huggingface_hub-0.34.4-py3-none-any.whl", hash = "sha256:9b365d781739c93ff90c359844221beef048403f1bc1f1c123c191257c3c890a"}, + {file = "huggingface_hub-0.34.4.tar.gz", hash = "sha256:a4228daa6fb001be3f4f4bdaf9a0db00e1739235702848df00885c9b5742c85c"}, ] [[package]] name = "huggingface-hub" -version = "0.33.4" +version = "0.34.4" extras = ["inference"] requires_python = ">=3.8.0" summary = "Client library to download and publish models, datasets and other repos on the huggingface.co hub" groups = ["default"] dependencies = [ "aiohttp", - "huggingface-hub==0.33.4", + "huggingface-hub==0.34.4", ] files = [ - {file = "huggingface_hub-0.33.4-py3-none-any.whl", hash = "sha256:09f9f4e7ca62547c70f8b82767eefadd2667f4e116acba2e3e62a5a81815a7bb"}, - {file = "huggingface_hub-0.33.4.tar.gz", hash = "sha256:6af13478deae120e765bfd92adad0ae1aec1ad8c439b46f23058ad5956cbca0a"}, + {file = "huggingface_hub-0.34.4-py3-none-any.whl", hash = "sha256:9b365d781739c93ff90c359844221beef048403f1bc1f1c123c191257c3c890a"}, + {file = "huggingface_hub-0.34.4.tar.gz", hash = "sha256:a4228daa6fb001be3f4f4bdaf9a0db00e1739235702848df00885c9b5742c85c"}, ] [[package]] @@ -598,7 +599,7 @@ files = [ [[package]] name = "llama-cloud" -version = "0.1.32" +version = "0.1.35" requires_python = "<4,>=3.8" summary = "" groups = ["default"] @@ -608,98 +609,78 @@ dependencies = [ "pydantic>=1.10", ] files = [ - {file = "llama_cloud-0.1.32-py3-none-any.whl", hash = "sha256:c42b2d5fb24acc8595bcc3626fb84c872909a16ab6d6879a1cb1101b21c238bd"}, - {file = "llama_cloud-0.1.32.tar.gz", hash = "sha256:cea98241127311ea91f191c3c006aa6558f01d16f9539ed93b24d716b888f10e"}, + {file = "llama_cloud-0.1.35-py3-none-any.whl", hash = "sha256:b7abab4423118e6f638d2f326749e7a07c6426543bea6da99b623c715b22af71"}, + {file = "llama_cloud-0.1.35.tar.gz", hash = "sha256:200349d5d57424d7461f304cdb1355a58eea3e6ca1e6b0d75c66b2e937216983"}, 
] [[package]] name = "llama-cloud-services" -version = "0.6.43" +version = "0.6.54" requires_python = "<4.0,>=3.9" summary = "Tailored SDK clients for LlamaCloud services." groups = ["default"] dependencies = [ - "click<9.0.0,>=8.1.7", - "eval-type-backport<0.3.0,>=0.2.0; python_version < \"3.10\"", - "llama-cloud==0.1.32", + "click<9,>=8.1.7", + "eval-type-backport<0.3,>=0.2.0; python_version < \"3.10\"", + "llama-cloud==0.1.35", "llama-index-core>=0.12.0", - "platformdirs<5.0.0,>=4.3.7", + "platformdirs<5,>=4.3.7", "pydantic!=2.10,>=2.8", - "python-dotenv<2.0.0,>=1.0.1", + "python-dotenv<2,>=1.0.1", "tenacity<10.0,>=8.5.0", ] files = [ - {file = "llama_cloud_services-0.6.43-py3-none-any.whl", hash = "sha256:2349195f501ba9151ea3ab384d20cae8b4dc4f335f60bd17607332626bdfa2e4"}, - {file = "llama_cloud_services-0.6.43.tar.gz", hash = "sha256:fa6be33bf54d467cace809efee8c2aeeb9de74ce66708513d37b40d738d3350f"}, + {file = "llama_cloud_services-0.6.54-py3-none-any.whl", hash = "sha256:07f595f7a0ba40c6a1a20543d63024ca7600fe65c4811d1951039977908997be"}, + {file = "llama_cloud_services-0.6.54.tar.gz", hash = "sha256:baf65d9bffb68f9dca98ac6e22908b6675b2038b021e657ead1ffc0e43cbd45d"}, ] [[package]] name = "llama-index" -version = "0.12.48" +version = "0.13.1" requires_python = "<4.0,>=3.9" summary = "Interface between LLMs and your data" groups = ["default"] dependencies = [ - "llama-index-agent-openai<0.5,>=0.4.0", - "llama-index-cli<0.5,>=0.4.2", - "llama-index-core<0.13,>=0.12.48", - "llama-index-embeddings-openai<0.4,>=0.3.0", + "llama-index-cli<0.6,>=0.5.0", + "llama-index-core<0.14,>=0.13.1", + "llama-index-embeddings-openai<0.6,>=0.5.0", "llama-index-indices-managed-llama-cloud>=0.4.0", - "llama-index-llms-openai<0.5,>=0.4.0", - "llama-index-multi-modal-llms-openai<0.6,>=0.5.0", - "llama-index-program-openai<0.4,>=0.3.0", - "llama-index-question-gen-openai<0.4,>=0.3.0", - "llama-index-readers-file<0.5,>=0.4.0", + "llama-index-llms-openai<0.6,>=0.5.0", + "llama-index-readers-file<0.6,>=0.5.0", "llama-index-readers-llama-parse>=0.4.0", "nltk>3.8.1", ] files = [ - {file = "llama_index-0.12.48-py3-none-any.whl", hash = "sha256:93a80de54a5cf86114c252338d7917bb81ffe94afa47f01c41c9ee04c0155db4"}, - {file = "llama_index-0.12.48.tar.gz", hash = "sha256:54b922fd94efde2c21c12be392c381cb4a0531a7ca8e482a7e3d1c6795af2da5"}, -] - -[[package]] -name = "llama-index-agent-openai" -version = "0.4.12" -requires_python = "<4.0,>=3.9" -summary = "llama-index agent openai integration" -groups = ["default"] -dependencies = [ - "llama-index-core<0.13,>=0.12.41", - "llama-index-llms-openai<0.5,>=0.4.0", - "openai>=1.14.0", -] -files = [ - {file = "llama_index_agent_openai-0.4.12-py3-none-any.whl", hash = "sha256:6dbb6276b2e5330032a726b28d5eef5140825f36d72d472b231f08ad3af99665"}, - {file = "llama_index_agent_openai-0.4.12.tar.gz", hash = "sha256:d2fe53feb69cfe45752edb7328bf0d25f6a9071b3c056787e661b93e5b748a28"}, + {file = "llama_index-0.13.1-py3-none-any.whl", hash = "sha256:e02b61cac0699c709a12e711bdaca0a2c90c9b8177d45f9b07b8650c9985d09e"}, + {file = "llama_index-0.13.1.tar.gz", hash = "sha256:0cf06beaf460bfa4dd57902e7f4696626da54350851a876b391a82acce7fe5c2"}, ] [[package]] name = "llama-index-cli" -version = "0.4.4" +version = "0.5.0" requires_python = "<4.0,>=3.9" summary = "llama-index cli" groups = ["default"] dependencies = [ - "llama-index-core<0.13,>=0.12.0", - "llama-index-embeddings-openai<0.4,>=0.3.1", - "llama-index-llms-openai<0.5,>=0.4.0", + "llama-index-core<0.14,>=0.13.0", + 
"llama-index-embeddings-openai<0.6,>=0.5.0", + "llama-index-llms-openai<0.6,>=0.5.0", ] files = [ - {file = "llama_index_cli-0.4.4-py3-none-any.whl", hash = "sha256:1070593cf79407054735ab7a23c5a65a26fc18d264661e42ef38fc549b4b7658"}, - {file = "llama_index_cli-0.4.4.tar.gz", hash = "sha256:c3af0cf1e2a7e5ef44d0bae5aa8e8872b54c5dd6b731afbae9f13ffeb4997be0"}, + {file = "llama_index_cli-0.5.0-py3-none-any.whl", hash = "sha256:e331ca98005c370bfe58800fa5eed8b10061d0b9c656b84a1f5f6168733a2a7b"}, + {file = "llama_index_cli-0.5.0.tar.gz", hash = "sha256:2eb9426232e8d89ffdf0fa6784ff8da09449d920d71d0fcc81d07be93cf9369f"}, ] [[package]] name = "llama-index-core" -version = "0.12.48" +version = "0.13.1" requires_python = "<4.0,>=3.9" summary = "Interface between LLMs and your data" groups = ["default"] dependencies = [ "aiohttp<4,>=3.8.6", "aiosqlite", - "banks<3,>=2.0.0", + "banks<3,>=2.2.0", "dataclasses-json", "deprecated>=1.2.9.3", "dirtyjson<2,>=1.0.8", @@ -713,6 +694,7 @@ dependencies = [ "nltk>3.8.1", "numpy", "pillow>=9.0.0", + "platformdirs", "pydantic>=2.8.0", "pyyaml>=6.0.1", "requests>=2.31.0", @@ -726,59 +708,60 @@ dependencies = [ "wrapt", ] files = [ - {file = "llama_index_core-0.12.48-py3-none-any.whl", hash = "sha256:0770119ab540605cb217dc9b26343b0bdf6f91d843cfb17d0074ba2fac358e56"}, - {file = "llama_index_core-0.12.48.tar.gz", hash = "sha256:a5cb2179495f091f351a41b4ef312ec6593660438e0066011ec81f7b5d2c93be"}, + {file = "llama_index_core-0.13.1-py3-none-any.whl", hash = "sha256:fde6c8c8bcacf7244bdef4908288eced5e11f47e9741d545846c3d1692830510"}, + {file = "llama_index_core-0.13.1.tar.gz", hash = "sha256:04a58cb26638e186ddb02a80970d503842f68abbeb8be5af6a387c51f7995eeb"}, ] [[package]] name = "llama-index-embeddings-huggingface" -version = "0.5.5" +version = "0.6.0" requires_python = "<4.0,>=3.9" summary = "llama-index embeddings huggingface integration" groups = ["default"] dependencies = [ "huggingface-hub[inference]>=0.19.0", - "llama-index-core<0.13,>=0.12.0", + "llama-index-core<0.14,>=0.13.0", "sentence-transformers>=2.6.1", ] files = [ - {file = "llama_index_embeddings_huggingface-0.5.5-py3-none-any.whl", hash = "sha256:8260e1561df17ca510e241a90504b37cc7d8ac6f2d6aaad9732d04ca3ad988d1"}, - {file = "llama_index_embeddings_huggingface-0.5.5.tar.gz", hash = "sha256:7f6e9a031d9146f235df597c0ccd6280cde96b9b437f99052ce79bb72e5fac5e"}, + {file = "llama_index_embeddings_huggingface-0.6.0-py3-none-any.whl", hash = "sha256:0c24aba5265a7dbd6591394a8d2d64d0b978bb50b4b97c4e88cbf698b69fdd10"}, + {file = "llama_index_embeddings_huggingface-0.6.0.tar.gz", hash = "sha256:3ece7d8c5b683d2055fedeca4457dea13f75c81a6d7fb94d77e878cd73d90d97"}, ] [[package]] name = "llama-index-embeddings-openai" -version = "0.3.1" +version = "0.5.0" requires_python = "<4.0,>=3.9" summary = "llama-index embeddings openai integration" groups = ["default"] dependencies = [ - "llama-index-core<0.13.0,>=0.12.0", + "llama-index-core<0.14,>=0.13.0", "openai>=1.1.0", ] files = [ - {file = "llama_index_embeddings_openai-0.3.1-py3-none-any.whl", hash = "sha256:f15a3d13da9b6b21b8bd51d337197879a453d1605e625a1c6d45e741756c0290"}, - {file = "llama_index_embeddings_openai-0.3.1.tar.gz", hash = "sha256:1368aad3ce24cbaed23d5ad251343cef1eb7b4a06d6563d6606d59cb347fef20"}, + {file = "llama_index_embeddings_openai-0.5.0-py3-none-any.whl", hash = "sha256:d817edb22e3ff475e8cd1833faf1147028986bc1d688f7894ef947558864b728"}, + {file = "llama_index_embeddings_openai-0.5.0.tar.gz", hash = 
"sha256:ac587839a111089ea8a6255f9214016d7a813b383bbbbf9207799be1100758eb"}, ] [[package]] name = "llama-index-indices-managed-llama-cloud" -version = "0.7.10" +version = "0.9.1" requires_python = "<4.0,>=3.9" summary = "llama-index indices llama-cloud integration" groups = ["default"] dependencies = [ - "llama-cloud==0.1.32", - "llama-index-core<0.13,>=0.12.0", + "deprecated==1.2.18", + "llama-cloud==0.1.35", + "llama-index-core<0.14,>=0.13.0", ] files = [ - {file = "llama_index_indices_managed_llama_cloud-0.7.10-py3-none-any.whl", hash = "sha256:f7edcfb8f694cab547cd9324be7835dc97470ce05150d0b8888fa3bf9d2f84a8"}, - {file = "llama_index_indices_managed_llama_cloud-0.7.10.tar.gz", hash = "sha256:53267907e23d8fbcbb97c7a96177a41446de18550ca6030276092e73b45ca880"}, + {file = "llama_index_indices_managed_llama_cloud-0.9.1-py3-none-any.whl", hash = "sha256:df33fb6d8c6b7ee22202ee7a19285a5672f0e58a1235a2504b49c90a7e1c8933"}, + {file = "llama_index_indices_managed_llama_cloud-0.9.1.tar.gz", hash = "sha256:7bee1a368a17ff63bf1078e5ad4795eb88dcdb87c259cfb242c19bd0f4fb978e"}, ] [[package]] name = "llama-index-instrumentation" -version = "0.2.0" +version = "0.4.0" requires_python = "<4.0,>=3.9" summary = "Add your description here" groups = ["default"] @@ -787,123 +770,76 @@ dependencies = [ "pydantic>=2.11.5", ] files = [ - {file = "llama_index_instrumentation-0.2.0-py3-none-any.whl", hash = "sha256:1055ae7a3d19666671a8f1a62d08c90472552d9fcec7e84e6919b2acc92af605"}, - {file = "llama_index_instrumentation-0.2.0.tar.gz", hash = "sha256:ae8333522487e22a33732924a9a08dfb456f54993c5c97d8340db3c620b76f13"}, + {file = "llama_index_instrumentation-0.4.0-py3-none-any.whl", hash = "sha256:83f73156be34dd0121dfe9e259883620e19f0162f152ac483e179ad5ad0396ac"}, + {file = "llama_index_instrumentation-0.4.0.tar.gz", hash = "sha256:f38ecc1f02b6c1f7ab84263baa6467fac9f86538c0ee25542853de46278abea7"}, ] [[package]] name = "llama-index-llms-openai" -version = "0.4.7" +version = "0.5.2" requires_python = "<4.0,>=3.9" summary = "llama-index llms openai integration" groups = ["default"] dependencies = [ - "llama-index-core<0.13,>=0.12.41", + "llama-index-core<0.14,>=0.13.0", "openai<2,>=1.81.0", ] files = [ - {file = "llama_index_llms_openai-0.4.7-py3-none-any.whl", hash = "sha256:3b8d9d3c1bcadc2cff09724de70f074f43eafd5b7048a91247c9a41b7cd6216d"}, - {file = "llama_index_llms_openai-0.4.7.tar.gz", hash = "sha256:564af8ab39fb3f3adfeae73a59c0dca46c099ab844a28e725eee0c551d4869f8"}, -] - -[[package]] -name = "llama-index-multi-modal-llms-openai" -version = "0.5.3" -requires_python = "<4.0,>=3.9" -summary = "llama-index multi-modal-llms openai integration" -groups = ["default"] -dependencies = [ - "llama-index-core<0.13,>=0.12.47", - "llama-index-llms-openai<0.5,>=0.4.0", -] -files = [ - {file = "llama_index_multi_modal_llms_openai-0.5.3-py3-none-any.whl", hash = "sha256:be6237df8f9caaa257f9beda5317287bbd2ec19473d777a30a34e41a7c5bddf8"}, - {file = "llama_index_multi_modal_llms_openai-0.5.3.tar.gz", hash = "sha256:b755a8b47d8d2f34b5a3d249af81d9bfb69d3d2cf9ab539d3a42f7bfa3e2391a"}, -] - -[[package]] -name = "llama-index-program-openai" -version = "0.3.2" -requires_python = "<4.0,>=3.9" -summary = "llama-index program openai integration" -groups = ["default"] -dependencies = [ - "llama-index-agent-openai<0.5,>=0.4.0", - "llama-index-core<0.13,>=0.12.0", - "llama-index-llms-openai<0.5,>=0.4.0", -] -files = [ - {file = "llama_index_program_openai-0.3.2-py3-none-any.whl", hash = 
"sha256:451829ae53e074e7b47dcc60a9dd155fcf9d1dcbc1754074bdadd6aab4ceb9aa"}, - {file = "llama_index_program_openai-0.3.2.tar.gz", hash = "sha256:04c959a2e616489894bd2eeebb99500d6f1c17d588c3da0ddc75ebd3eb7451ee"}, -] - -[[package]] -name = "llama-index-question-gen-openai" -version = "0.3.1" -requires_python = "<4.0,>=3.9" -summary = "llama-index question_gen openai integration" -groups = ["default"] -dependencies = [ - "llama-index-core<0.13,>=0.12.0", - "llama-index-llms-openai<0.5,>=0.4.0", - "llama-index-program-openai<0.4,>=0.3.0", -] -files = [ - {file = "llama_index_question_gen_openai-0.3.1-py3-none-any.whl", hash = "sha256:1ce266f6c8373fc8d884ff83a44dfbacecde2301785db7144872db51b8b99429"}, - {file = "llama_index_question_gen_openai-0.3.1.tar.gz", hash = "sha256:5e9311b433cc2581ff8a531fa19fb3aa21815baff75aaacdef11760ac9522aa9"}, + {file = "llama_index_llms_openai-0.5.2-py3-none-any.whl", hash = "sha256:f1cc5be83f704d217bd235b609ad1b128dbd42e571329b108f902920836c1071"}, + {file = "llama_index_llms_openai-0.5.2.tar.gz", hash = "sha256:53237fda8ff9089fdb2543ac18ea499b27863cc41095d3a3499f19e9cfd98e1a"}, ] [[package]] name = "llama-index-readers-file" -version = "0.4.11" +version = "0.5.0" requires_python = "<4.0,>=3.9" summary = "llama-index readers file integration" groups = ["default"] dependencies = [ "beautifulsoup4<5,>=4.12.3", "defusedxml>=0.7.1", - "llama-index-core<0.13,>=0.12.0", + "llama-index-core<0.14,>=0.13.0", "pandas<2.3.0", "pypdf<6,>=5.1.0", "striprtf<0.0.27,>=0.0.26", ] files = [ - {file = "llama_index_readers_file-0.4.11-py3-none-any.whl", hash = "sha256:e71192d8d6d0bf95131762da15fa205cf6e0cc248c90c76ee04d0fbfe160d464"}, - {file = "llama_index_readers_file-0.4.11.tar.gz", hash = "sha256:1b21cb66d78dd5f60e8716607d9a47ccd81bb39106d459665be1ca7799e9597b"}, + {file = "llama_index_readers_file-0.5.0-py3-none-any.whl", hash = "sha256:7fc47a9dbf11d07e78992581c20bca82b21bf336e646b4f53263f3909cb02c58"}, + {file = "llama_index_readers_file-0.5.0.tar.gz", hash = "sha256:f324617bfc4d9b32136d25ff5351b92bc0b569a296173ee2a8591c1f886eff0c"}, ] [[package]] name = "llama-index-readers-llama-parse" -version = "0.4.0" +version = "0.5.0" requires_python = "<4.0,>=3.9" summary = "llama-index readers llama-parse integration" groups = ["default"] dependencies = [ - "llama-index-core<0.13.0,>=0.12.0", + "llama-index-core<0.14,>=0.13.0", "llama-parse>=0.5.0", ] files = [ - {file = "llama_index_readers_llama_parse-0.4.0-py3-none-any.whl", hash = "sha256:574e48386f28d2c86c3f961ca4a4906910312f3400dd0c53014465bfbc6b32bf"}, - {file = "llama_index_readers_llama_parse-0.4.0.tar.gz", hash = "sha256:e99ec56f4f8546d7fda1a7c1ae26162fb9acb7ebcac343b5abdb4234b4644e0f"}, + {file = "llama_index_readers_llama_parse-0.5.0-py3-none-any.whl", hash = "sha256:e63ebf2248c4a726b8a1f7b029c90383d82cdc142942b54dbf287d1f3aee6d75"}, + {file = "llama_index_readers_llama_parse-0.5.0.tar.gz", hash = "sha256:891b21fb63fe1fe722e23cfa263a74d9a7354e5d8d7a01f2d4040a52f8d8feef"}, ] [[package]] name = "llama-index-vector-stores-faiss" -version = "0.4.0" +version = "0.5.0" requires_python = "<4.0,>=3.9" summary = "llama-index vector_stores faiss integration" groups = ["default"] dependencies = [ - "llama-index-core<0.13,>=0.12.0", + "llama-index-core<0.14,>=0.13.0", ] files = [ - {file = "llama_index_vector_stores_faiss-0.4.0-py3-none-any.whl", hash = "sha256:092907b38c70b7f9698ad294836389b31fd3a1273ea1d93082993dd0925c8a4b"}, - {file = "llama_index_vector_stores_faiss-0.4.0.tar.gz", hash = 
"sha256:59b58e4ec91880a5871a896bbdbd94cb781a447f92f400b5f08a62eb56a62e5c"}, + {file = "llama_index_vector_stores_faiss-0.5.0-py3-none-any.whl", hash = "sha256:2fa9848a4423ddb26f987d299749f1fa1c272b8e576332a03e0610d4ee236d09"}, + {file = "llama_index_vector_stores_faiss-0.5.0.tar.gz", hash = "sha256:4b6a1533c075b6e30985bf1eb778716c594ae0511691434df7f75b032ef964eb"}, ] [[package]] name = "llama-index-workflows" -version = "1.1.0" +version = "1.3.0" requires_python = ">=3.9" summary = "An event-driven, async-first, step-based way to control the execution flow of AI applications like Agents." groups = ["default"] @@ -911,24 +847,25 @@ dependencies = [ "eval-type-backport>=0.2.2; python_full_version < \"3.10\"", "llama-index-instrumentation>=0.1.0", "pydantic>=2.11.5", + "typing-extensions>=4.6.0", ] files = [ - {file = "llama_index_workflows-1.1.0-py3-none-any.whl", hash = "sha256:992fd5b012f56725853a4eed2219a66e19fcc7a6db85dc51afcc1bd2a5dd6db1"}, - {file = "llama_index_workflows-1.1.0.tar.gz", hash = "sha256:ff001d362100bfc2a3353cc5f2528a0adb52245e632191a86b4bddacde72b6af"}, + {file = "llama_index_workflows-1.3.0-py3-none-any.whl", hash = "sha256:328cc25d92b014ef527f105a2f2088c0924fff0494e53d93decb951f14fbfe47"}, + {file = "llama_index_workflows-1.3.0.tar.gz", hash = "sha256:9c1688e237efad384f16485af71c6f9456a2eb6d85bf61ff49e5717f10ff286d"}, ] [[package]] name = "llama-parse" -version = "0.6.43" +version = "0.6.54" requires_python = "<4.0,>=3.9" summary = "Parse files into RAG-Optimized formats." groups = ["default"] dependencies = [ - "llama-cloud-services>=0.6.43", + "llama-cloud-services>=0.6.54", ] files = [ - {file = "llama_parse-0.6.43-py3-none-any.whl", hash = "sha256:fe435309638c4fdec4fec31f97c5031b743c92268962d03b99bd76704f566c32"}, - {file = "llama_parse-0.6.43.tar.gz", hash = "sha256:d88e91c97e37f77b2619111ef43c02b7da61125f821cf77f918996eb48200d78"}, + {file = "llama_parse-0.6.54-py3-none-any.whl", hash = "sha256:c66c8d51cf6f29a44eaa8595a595de5d2598afc86e5a33a4cebe5fe228036920"}, + {file = "llama_parse-0.6.54.tar.gz", hash = "sha256:c707b31152155c9bae84e316fab790bbc8c85f4d8825ce5ee386ebeb7db258f1"}, ] [[package]] @@ -1010,7 +947,7 @@ files = [ [[package]] name = "mypy" -version = "1.16.1" +version = "1.17.1" requires_python = ">=3.9" summary = "Optional static typing for Python" groups = ["dev"] @@ -1021,14 +958,14 @@ dependencies = [ "typing-extensions>=4.6.0", ] files = [ - {file = "mypy-1.16.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:472e4e4c100062488ec643f6162dd0d5208e33e2f34544e1fc931372e806c0cc"}, - {file = "mypy-1.16.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:ea16e2a7d2714277e349e24d19a782a663a34ed60864006e8585db08f8ad1782"}, - {file = "mypy-1.16.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:08e850ea22adc4d8a4014651575567b0318ede51e8e9fe7a68f25391af699507"}, - {file = "mypy-1.16.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:22d76a63a42619bfb90122889b903519149879ddbf2ba4251834727944c8baca"}, - {file = "mypy-1.16.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:2c7ce0662b6b9dc8f4ed86eb7a5d505ee3298c04b40ec13b30e572c0e5ae17c4"}, - {file = "mypy-1.16.1-cp311-cp311-win_amd64.whl", hash = "sha256:211287e98e05352a2e1d4e8759c5490925a7c784ddc84207f4714822f8cf99b6"}, - {file = "mypy-1.16.1-py3-none-any.whl", hash = "sha256:5fc2ac4027d0ef28d6ba69a0343737a23c4d1b83672bf38d1fe237bdc0643b37"}, - {file = "mypy-1.16.1.tar.gz", hash = 
"sha256:6bd00a0a2094841c5e47e7374bb42b83d64c527a502e3334e1173a0c24437bab"}, + {file = "mypy-1.17.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:ad37544be07c5d7fba814eb370e006df58fed8ad1ef33ed1649cb1889ba6ff58"}, + {file = "mypy-1.17.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:064e2ff508e5464b4bd807a7c1625bc5047c5022b85c70f030680e18f37273a5"}, + {file = "mypy-1.17.1-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:70401bbabd2fa1aa7c43bb358f54037baf0586f41e83b0ae67dd0534fc64edfd"}, + {file = "mypy-1.17.1-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:e92bdc656b7757c438660f775f872a669b8ff374edc4d18277d86b63edba6b8b"}, + {file = "mypy-1.17.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:c1fdf4abb29ed1cb091cf432979e162c208a5ac676ce35010373ff29247bcad5"}, + {file = "mypy-1.17.1-cp311-cp311-win_amd64.whl", hash = "sha256:ff2933428516ab63f961644bc49bc4cbe42bbffb2cd3b71cc7277c07d16b1a8b"}, + {file = "mypy-1.17.1-py3-none-any.whl", hash = "sha256:a9f52c0351c21fe24c21d8c0eb1f62967b262d6729393397b6f443c3b773c3b9"}, + {file = "mypy-1.17.1.tar.gz", hash = "sha256:25e01ec741ab5bb3eec8ba9cdb0f769230368a22c959c4937360efb89b7e9f01"}, ] [[package]] @@ -1083,29 +1020,30 @@ files = [ [[package]] name = "numpy" -version = "2.3.1" +version = "2.3.2" requires_python = ">=3.11" summary = "Fundamental package for array computing in Python" groups = ["default"] files = [ - {file = "numpy-2.3.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:6ea9e48336a402551f52cd8f593343699003d2353daa4b72ce8d34f66b722070"}, - {file = "numpy-2.3.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:5ccb7336eaf0e77c1635b232c141846493a588ec9ea777a7c24d7166bb8533ae"}, - {file = "numpy-2.3.1-cp311-cp311-macosx_14_0_arm64.whl", hash = "sha256:0bb3a4a61e1d327e035275d2a993c96fa786e4913aa089843e6a2d9dd205c66a"}, - {file = "numpy-2.3.1-cp311-cp311-macosx_14_0_x86_64.whl", hash = "sha256:e344eb79dab01f1e838ebb67aab09965fb271d6da6b00adda26328ac27d4a66e"}, - {file = "numpy-2.3.1-cp311-cp311-manylinux_2_28_aarch64.whl", hash = "sha256:467db865b392168ceb1ef1ffa6f5a86e62468c43e0cfb4ab6da667ede10e58db"}, - {file = "numpy-2.3.1-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:afed2ce4a84f6b0fc6c1ce734ff368cbf5a5e24e8954a338f3bdffa0718adffb"}, - {file = "numpy-2.3.1-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:0025048b3c1557a20bc80d06fdeb8cc7fc193721484cca82b2cfa072fec71a93"}, - {file = "numpy-2.3.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:a5ee121b60aa509679b682819c602579e1df14a5b07fe95671c8849aad8f2115"}, - {file = "numpy-2.3.1-cp311-cp311-win32.whl", hash = "sha256:a8b740f5579ae4585831b3cf0e3b0425c667274f82a484866d2adf9570539369"}, - {file = "numpy-2.3.1-cp311-cp311-win_amd64.whl", hash = "sha256:d4580adadc53311b163444f877e0789f1c8861e2698f6b2a4ca852fda154f3ff"}, - {file = "numpy-2.3.1-cp311-cp311-win_arm64.whl", hash = "sha256:ec0bdafa906f95adc9a0c6f26a4871fa753f25caaa0e032578a30457bff0af6a"}, - {file = "numpy-2.3.1-pp311-pypy311_pp73-macosx_10_15_x86_64.whl", hash = "sha256:ad506d4b09e684394c42c966ec1527f6ebc25da7f4da4b1b056606ffe446b8a3"}, - {file = "numpy-2.3.1-pp311-pypy311_pp73-macosx_14_0_arm64.whl", hash = "sha256:ebb8603d45bc86bbd5edb0d63e52c5fd9e7945d3a503b77e486bd88dde67a19b"}, - {file = "numpy-2.3.1-pp311-pypy311_pp73-macosx_14_0_x86_64.whl", hash = "sha256:15aa4c392ac396e2ad3d0a2680c0f0dee420f9fed14eef09bdb9450ee6dcb7b7"}, - {file = 
"numpy-2.3.1-pp311-pypy311_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:c6e0bf9d1a2f50d2b65a7cf56db37c095af17b59f6c132396f7c6d5dd76484df"}, - {file = "numpy-2.3.1-pp311-pypy311_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:eabd7e8740d494ce2b4ea0ff05afa1b7b291e978c0ae075487c51e8bd93c0c68"}, - {file = "numpy-2.3.1-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:e610832418a2bc09d974cc9fecebfa51e9532d6190223bc5ef6a7402ebf3b5cb"}, - {file = "numpy-2.3.1.tar.gz", hash = "sha256:1ec9ae20a4226da374362cca3c62cd753faf2f951440b0e3b98e93c235441d2b"}, + {file = "numpy-2.3.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:852ae5bed3478b92f093e30f785c98e0cb62fa0a939ed057c31716e18a7a22b9"}, + {file = "numpy-2.3.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:7a0e27186e781a69959d0230dd9909b5e26024f8da10683bd6344baea1885168"}, + {file = "numpy-2.3.2-cp311-cp311-macosx_14_0_arm64.whl", hash = "sha256:f0a1a8476ad77a228e41619af2fa9505cf69df928e9aaa165746584ea17fed2b"}, + {file = "numpy-2.3.2-cp311-cp311-macosx_14_0_x86_64.whl", hash = "sha256:cbc95b3813920145032412f7e33d12080f11dc776262df1712e1638207dde9e8"}, + {file = "numpy-2.3.2-cp311-cp311-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:f75018be4980a7324edc5930fe39aa391d5734531b1926968605416ff58c332d"}, + {file = "numpy-2.3.2-cp311-cp311-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:20b8200721840f5621b7bd03f8dcd78de33ec522fc40dc2641aa09537df010c3"}, + {file = "numpy-2.3.2-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:1f91e5c028504660d606340a084db4b216567ded1056ea2b4be4f9d10b67197f"}, + {file = "numpy-2.3.2-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:fb1752a3bb9a3ad2d6b090b88a9a0ae1cd6f004ef95f75825e2f382c183b2097"}, + {file = "numpy-2.3.2-cp311-cp311-win32.whl", hash = "sha256:4ae6863868aaee2f57503c7a5052b3a2807cf7a3914475e637a0ecd366ced220"}, + {file = "numpy-2.3.2-cp311-cp311-win_amd64.whl", hash = "sha256:240259d6564f1c65424bcd10f435145a7644a65a6811cfc3201c4a429ba79170"}, + {file = "numpy-2.3.2-cp311-cp311-win_arm64.whl", hash = "sha256:4209f874d45f921bde2cff1ffcd8a3695f545ad2ffbef6d3d3c6768162efab89"}, + {file = "numpy-2.3.2-pp311-pypy311_pp73-macosx_10_15_x86_64.whl", hash = "sha256:14a91ebac98813a49bc6aa1a0dfc09513dcec1d97eaf31ca21a87221a1cdcb15"}, + {file = "numpy-2.3.2-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:71669b5daae692189540cffc4c439468d35a3f84f0c88b078ecd94337f6cb0ec"}, + {file = "numpy-2.3.2-pp311-pypy311_pp73-macosx_14_0_arm64.whl", hash = "sha256:69779198d9caee6e547adb933941ed7520f896fd9656834c300bdf4dd8642712"}, + {file = "numpy-2.3.2-pp311-pypy311_pp73-macosx_14_0_x86_64.whl", hash = "sha256:2c3271cc4097beb5a60f010bcc1cc204b300bb3eafb4399376418a83a1c6373c"}, + {file = "numpy-2.3.2-pp311-pypy311_pp73-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:8446acd11fe3dc1830568c941d44449fd5cb83068e5c70bd5a470d323d448296"}, + {file = "numpy-2.3.2-pp311-pypy311_pp73-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:aa098a5ab53fa407fded5870865c6275a5cd4101cfdef8d6fafc48286a96e981"}, + {file = "numpy-2.3.2-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:6936aff90dda378c09bea075af0d9c675fe3a977a9d2402f95a87f440f59f619"}, + {file = "numpy-2.3.2.tar.gz", hash = "sha256:e0486a11ec30cdecb53f184d496d1c6a20786c81e55e41640270130056f8ee48"}, ] [[package]] @@ -1289,7 +1227,7 @@ files = [ [[package]] name = "openai" -version = "1.95.1" +version = "1.99.6" requires_python = ">=3.8" summary = "The official Python library for the openai 
API" groups = ["default"] @@ -1304,8 +1242,8 @@ dependencies = [ "typing-extensions<5,>=4.11", ] files = [ - {file = "openai-1.95.1-py3-none-any.whl", hash = "sha256:8bbdfeceef231b1ddfabbc232b179d79f8b849aab5a7da131178f8d10e0f162f"}, - {file = "openai-1.95.1.tar.gz", hash = "sha256:f089b605282e2a2b6776090b4b46563ac1da77f56402a222597d591e2dcc1086"}, + {file = "openai-1.99.6-py3-none-any.whl", hash = "sha256:e40d44b2989588c45ce13819598788b77b8fb80ba2f7ae95ce90d14e46f1bd26"}, + {file = "openai-1.99.6.tar.gz", hash = "sha256:f48f4239b938ef187062f3d5199a05b69711d8b600b9a9b6a3853cd271799183"}, ] [[package]] @@ -1493,7 +1431,7 @@ files = [ [[package]] name = "pypdf" -version = "5.7.0" +version = "5.9.0" requires_python = ">=3.8" summary = "A pure-python PDF library capable of splitting, merging, cropping, and transforming PDF files" groups = ["default"] @@ -1501,8 +1439,8 @@ dependencies = [ "typing-extensions>=4.0; python_version < \"3.11\"", ] files = [ - {file = "pypdf-5.7.0-py3-none-any.whl", hash = "sha256:203379453439f5b68b7a1cd43cdf4c5f7a02b84810cefa7f93a47b350aaaba48"}, - {file = "pypdf-5.7.0.tar.gz", hash = "sha256:68c92f2e1aae878bab1150e74447f31ab3848b1c0a6f8becae9f0b1904460b6f"}, + {file = "pypdf-5.9.0-py3-none-any.whl", hash = "sha256:be10a4c54202f46d9daceaa8788be07aa8cd5ea8c25c529c50dd509206382c35"}, + {file = "pypdf-5.9.0.tar.gz", hash = "sha256:30f67a614d558e495e1fbb157ba58c1de91ffc1718f5e0dfeb82a029233890a1"}, ] [[package]] @@ -1574,27 +1512,26 @@ files = [ [[package]] name = "regex" -version = "2024.11.6" -requires_python = ">=3.8" +version = "2025.7.34" +requires_python = ">=3.9" summary = "Alternative regular expression module, to replace re." groups = ["default"] files = [ - {file = "regex-2024.11.6-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:5478c6962ad548b54a591778e93cd7c456a7a29f8eca9c49e4f9a806dcc5d638"}, - {file = "regex-2024.11.6-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:2c89a8cc122b25ce6945f0423dc1352cb9593c68abd19223eebbd4e56612c5b7"}, - {file = "regex-2024.11.6-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:94d87b689cdd831934fa3ce16cc15cd65748e6d689f5d2b8f4f4df2065c9fa20"}, - {file = "regex-2024.11.6-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1062b39a0a2b75a9c694f7a08e7183a80c63c0d62b301418ffd9c35f55aaa114"}, - {file = "regex-2024.11.6-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:167ed4852351d8a750da48712c3930b031f6efdaa0f22fa1933716bfcd6bf4a3"}, - {file = "regex-2024.11.6-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2d548dafee61f06ebdb584080621f3e0c23fff312f0de1afc776e2a2ba99a74f"}, - {file = "regex-2024.11.6-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f2a19f302cd1ce5dd01a9099aaa19cae6173306d1302a43b627f62e21cf18ac0"}, - {file = "regex-2024.11.6-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:bec9931dfb61ddd8ef2ebc05646293812cb6b16b60cf7c9511a832b6f1854b55"}, - {file = "regex-2024.11.6-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:9714398225f299aa85267fd222f7142fcb5c769e73d7733344efc46f2ef5cf89"}, - {file = "regex-2024.11.6-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:202eb32e89f60fc147a41e55cb086db2a3f8cb82f9a9a88440dcfc5d37faae8d"}, - {file = "regex-2024.11.6-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:4181b814e56078e9b00427ca358ec44333765f5ca1b45597ec7446d3a1ef6e34"}, - {file = 
"regex-2024.11.6-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:068376da5a7e4da51968ce4c122a7cd31afaaec4fccc7856c92f63876e57b51d"}, - {file = "regex-2024.11.6-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:ac10f2c4184420d881a3475fb2c6f4d95d53a8d50209a2500723d831036f7c45"}, - {file = "regex-2024.11.6-cp311-cp311-win32.whl", hash = "sha256:c36f9b6f5f8649bb251a5f3f66564438977b7ef8386a52460ae77e6070d309d9"}, - {file = "regex-2024.11.6-cp311-cp311-win_amd64.whl", hash = "sha256:02e28184be537f0e75c1f9b2f8847dc51e08e6e171c6bde130b2687e0c33cf60"}, - {file = "regex-2024.11.6.tar.gz", hash = "sha256:7ab159b063c52a0333c884e4679f8d7a85112ee3078fe3d9004b2dd875585519"}, + {file = "regex-2025.7.34-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:da304313761b8500b8e175eb2040c4394a875837d5635f6256d6fa0377ad32c8"}, + {file = "regex-2025.7.34-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:35e43ebf5b18cd751ea81455b19acfdec402e82fe0dc6143edfae4c5c4b3909a"}, + {file = "regex-2025.7.34-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:96bbae4c616726f4661fe7bcad5952e10d25d3c51ddc388189d8864fbc1b3c68"}, + {file = "regex-2025.7.34-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:9feab78a1ffa4f2b1e27b1bcdaad36f48c2fed4870264ce32f52a393db093c78"}, + {file = "regex-2025.7.34-cp311-cp311-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:f14b36e6d4d07f1a5060f28ef3b3561c5d95eb0651741474ce4c0a4c56ba8719"}, + {file = "regex-2025.7.34-cp311-cp311-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:85c3a958ef8b3d5079c763477e1f09e89d13ad22198a37e9d7b26b4b17438b33"}, + {file = "regex-2025.7.34-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:37555e4ae0b93358fa7c2d240a4291d4a4227cc7c607d8f85596cdb08ec0a083"}, + {file = "regex-2025.7.34-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:ee38926f31f1aa61b0232a3a11b83461f7807661c062df9eb88769d86e6195c3"}, + {file = "regex-2025.7.34-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:a664291c31cae9c4a30589bd8bc2ebb56ef880c9c6264cb7643633831e606a4d"}, + {file = "regex-2025.7.34-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:f3e5c1e0925e77ec46ddc736b756a6da50d4df4ee3f69536ffb2373460e2dafd"}, + {file = "regex-2025.7.34-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:d428fc7731dcbb4e2ffe43aeb8f90775ad155e7db4347a639768bc6cd2df881a"}, + {file = "regex-2025.7.34-cp311-cp311-win32.whl", hash = "sha256:e154a7ee7fa18333ad90b20e16ef84daaeac61877c8ef942ec8dfa50dc38b7a1"}, + {file = "regex-2025.7.34-cp311-cp311-win_amd64.whl", hash = "sha256:24257953d5c1d6d3c129ab03414c07fc1a47833c9165d49b954190b2b7f21a1a"}, + {file = "regex-2025.7.34-cp311-cp311-win_arm64.whl", hash = "sha256:3157aa512b9e606586900888cd469a444f9b898ecb7f8931996cb715f77477f0"}, + {file = "regex-2025.7.34.tar.gz", hash = "sha256:9ead9765217afd04a86822dfcd4ed2747dfe426e887da413b15ff0ac2457e21a"}, ] [[package]] @@ -1616,58 +1553,58 @@ files = [ [[package]] name = "ruff" -version = "0.12.3" +version = "0.12.8" requires_python = ">=3.7" summary = "An extremely fast Python linter and code formatter, written in Rust." 
groups = ["dev"] files = [ - {file = "ruff-0.12.3-py3-none-linux_armv6l.whl", hash = "sha256:47552138f7206454eaf0c4fe827e546e9ddac62c2a3d2585ca54d29a890137a2"}, - {file = "ruff-0.12.3-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:0a9153b000c6fe169bb307f5bd1b691221c4286c133407b8827c406a55282041"}, - {file = "ruff-0.12.3-py3-none-macosx_11_0_arm64.whl", hash = "sha256:fa6b24600cf3b750e48ddb6057e901dd5b9aa426e316addb2a1af185a7509882"}, - {file = "ruff-0.12.3-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e2506961bf6ead54887ba3562604d69cb430f59b42133d36976421bc8bd45901"}, - {file = "ruff-0.12.3-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:c4faaff1f90cea9d3033cbbcdf1acf5d7fb11d8180758feb31337391691f3df0"}, - {file = "ruff-0.12.3-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:40dced4a79d7c264389de1c59467d5d5cefd79e7e06d1dfa2c75497b5269a5a6"}, - {file = "ruff-0.12.3-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:0262d50ba2767ed0fe212aa7e62112a1dcbfd46b858c5bf7bbd11f326998bafc"}, - {file = "ruff-0.12.3-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:12371aec33e1a3758597c5c631bae9a5286f3c963bdfb4d17acdd2d395406687"}, - {file = "ruff-0.12.3-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:560f13b6baa49785665276c963edc363f8ad4b4fc910a883e2625bdb14a83a9e"}, - {file = "ruff-0.12.3-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:023040a3499f6f974ae9091bcdd0385dd9e9eb4942f231c23c57708147b06311"}, - {file = "ruff-0.12.3-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:883d844967bffff5ab28bba1a4d246c1a1b2933f48cb9840f3fdc5111c603b07"}, - {file = "ruff-0.12.3-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:2120d3aa855ff385e0e562fdee14d564c9675edbe41625c87eeab744a7830d12"}, - {file = "ruff-0.12.3-py3-none-musllinux_1_2_i686.whl", hash = "sha256:6b16647cbb470eaf4750d27dddc6ebf7758b918887b56d39e9c22cce2049082b"}, - {file = "ruff-0.12.3-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:e1417051edb436230023575b149e8ff843a324557fe0a265863b7602df86722f"}, - {file = "ruff-0.12.3-py3-none-win32.whl", hash = "sha256:dfd45e6e926deb6409d0616078a666ebce93e55e07f0fb0228d4b2608b2c248d"}, - {file = "ruff-0.12.3-py3-none-win_amd64.whl", hash = "sha256:a946cf1e7ba3209bdef039eb97647f1c77f6f540e5845ec9c114d3af8df873e7"}, - {file = "ruff-0.12.3-py3-none-win_arm64.whl", hash = "sha256:5f9c7c9c8f84c2d7f27e93674d27136fbf489720251544c4da7fb3d742e011b1"}, - {file = "ruff-0.12.3.tar.gz", hash = "sha256:f1b5a4b6668fd7b7ea3697d8d98857390b40c1320a63a178eee6be0899ea2d77"}, + {file = "ruff-0.12.8-py3-none-linux_armv6l.whl", hash = "sha256:63cb5a5e933fc913e5823a0dfdc3c99add73f52d139d6cd5cc8639d0e0465513"}, + {file = "ruff-0.12.8-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:9a9bbe28f9f551accf84a24c366c1aa8774d6748438b47174f8e8565ab9dedbc"}, + {file = "ruff-0.12.8-py3-none-macosx_11_0_arm64.whl", hash = "sha256:2fae54e752a3150f7ee0e09bce2e133caf10ce9d971510a9b925392dc98d2fec"}, + {file = "ruff-0.12.8-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c0acbcf01206df963d9331b5838fb31f3b44fa979ee7fa368b9b9057d89f4a53"}, + {file = "ruff-0.12.8-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:ae3e7504666ad4c62f9ac8eedb52a93f9ebdeb34742b8b71cd3cccd24912719f"}, + {file = "ruff-0.12.8-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = 
"sha256:cb82efb5d35d07497813a1c5647867390a7d83304562607f3579602fa3d7d46f"}, + {file = "ruff-0.12.8-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:dbea798fc0065ad0b84a2947b0aff4233f0cb30f226f00a2c5850ca4393de609"}, + {file = "ruff-0.12.8-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:49ebcaccc2bdad86fd51b7864e3d808aad404aab8df33d469b6e65584656263a"}, + {file = "ruff-0.12.8-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:0ac9c570634b98c71c88cb17badd90f13fc076a472ba6ef1d113d8ed3df109fb"}, + {file = "ruff-0.12.8-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:560e0cd641e45591a3e42cb50ef61ce07162b9c233786663fdce2d8557d99818"}, + {file = "ruff-0.12.8-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:71c83121512e7743fba5a8848c261dcc454cafb3ef2934a43f1b7a4eb5a447ea"}, + {file = "ruff-0.12.8-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:de4429ef2ba091ecddedd300f4c3f24bca875d3d8b23340728c3cb0da81072c3"}, + {file = "ruff-0.12.8-py3-none-musllinux_1_2_i686.whl", hash = "sha256:a2cab5f60d5b65b50fba39a8950c8746df1627d54ba1197f970763917184b161"}, + {file = "ruff-0.12.8-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:45c32487e14f60b88aad6be9fd5da5093dbefb0e3e1224131cb1d441d7cb7d46"}, + {file = "ruff-0.12.8-py3-none-win32.whl", hash = "sha256:daf3475060a617fd5bc80638aeaf2f5937f10af3ec44464e280a9d2218e720d3"}, + {file = "ruff-0.12.8-py3-none-win_amd64.whl", hash = "sha256:7209531f1a1fcfbe8e46bcd7ab30e2f43604d8ba1c49029bb420b103d0b5f76e"}, + {file = "ruff-0.12.8-py3-none-win_arm64.whl", hash = "sha256:c90e1a334683ce41b0e7a04f41790c429bf5073b62c1ae701c9dc5b3d14f0749"}, + {file = "ruff-0.12.8.tar.gz", hash = "sha256:4cb3a45525176e1009b2b64126acf5f9444ea59066262791febf55e40493a033"}, ] [[package]] name = "safetensors" -version = "0.5.3" -requires_python = ">=3.7" +version = "0.6.2" +requires_python = ">=3.9" summary = "" groups = ["default"] files = [ - {file = "safetensors-0.5.3-cp38-abi3-macosx_10_12_x86_64.whl", hash = "sha256:bd20eb133db8ed15b40110b7c00c6df51655a2998132193de2f75f72d99c7073"}, - {file = "safetensors-0.5.3-cp38-abi3-macosx_11_0_arm64.whl", hash = "sha256:21d01c14ff6c415c485616b8b0bf961c46b3b343ca59110d38d744e577f9cce7"}, - {file = "safetensors-0.5.3-cp38-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:11bce6164887cd491ca75c2326a113ba934be596e22b28b1742ce27b1d076467"}, - {file = "safetensors-0.5.3-cp38-abi3-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:4a243be3590bc3301c821da7a18d87224ef35cbd3e5f5727e4e0728b8172411e"}, - {file = "safetensors-0.5.3-cp38-abi3-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:8bd84b12b1670a6f8e50f01e28156422a2bc07fb16fc4e98bded13039d688a0d"}, - {file = "safetensors-0.5.3-cp38-abi3-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:391ac8cab7c829452175f871fcaf414aa1e292b5448bd02620f675a7f3e7abb9"}, - {file = "safetensors-0.5.3-cp38-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:cead1fa41fc54b1e61089fa57452e8834f798cb1dc7a09ba3524f1eb08e0317a"}, - {file = "safetensors-0.5.3-cp38-abi3-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:1077f3e94182d72618357b04b5ced540ceb71c8a813d3319f1aba448e68a770d"}, - {file = "safetensors-0.5.3-cp38-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:799021e78287bac619c7b3f3606730a22da4cda27759ddf55d37c8db7511c74b"}, - {file = "safetensors-0.5.3-cp38-abi3-musllinux_1_2_armv7l.whl", hash = 
"sha256:df26da01aaac504334644e1b7642fa000bfec820e7cef83aeac4e355e03195ff"}, - {file = "safetensors-0.5.3-cp38-abi3-musllinux_1_2_i686.whl", hash = "sha256:32c3ef2d7af8b9f52ff685ed0bc43913cdcde135089ae322ee576de93eae5135"}, - {file = "safetensors-0.5.3-cp38-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:37f1521be045e56fc2b54c606d4455573e717b2d887c579ee1dbba5f868ece04"}, - {file = "safetensors-0.5.3-cp38-abi3-win32.whl", hash = "sha256:cfc0ec0846dcf6763b0ed3d1846ff36008c6e7290683b61616c4b040f6a54ace"}, - {file = "safetensors-0.5.3-cp38-abi3-win_amd64.whl", hash = "sha256:836cbbc320b47e80acd40e44c8682db0e8ad7123209f69b093def21ec7cafd11"}, - {file = "safetensors-0.5.3.tar.gz", hash = "sha256:b6b0d6ecacec39a4fdd99cc19f4576f5219ce858e6fd8dbe7609df0b8dc56965"}, + {file = "safetensors-0.6.2-cp38-abi3-macosx_10_12_x86_64.whl", hash = "sha256:9c85ede8ec58f120bad982ec47746981e210492a6db876882aa021446af8ffba"}, + {file = "safetensors-0.6.2-cp38-abi3-macosx_11_0_arm64.whl", hash = "sha256:d6675cf4b39c98dbd7d940598028f3742e0375a6b4d4277e76beb0c35f4b843b"}, + {file = "safetensors-0.6.2-cp38-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1d2d2b3ce1e2509c68932ca03ab8f20570920cd9754b05063d4368ee52833ecd"}, + {file = "safetensors-0.6.2-cp38-abi3-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:93de35a18f46b0f5a6a1f9e26d91b442094f2df02e9fd7acf224cfec4238821a"}, + {file = "safetensors-0.6.2-cp38-abi3-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:89a89b505f335640f9120fac65ddeb83e40f1fd081cb8ed88b505bdccec8d0a1"}, + {file = "safetensors-0.6.2-cp38-abi3-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:fc4d0d0b937e04bdf2ae6f70cd3ad51328635fe0e6214aa1fc811f3b576b3bda"}, + {file = "safetensors-0.6.2-cp38-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8045db2c872db8f4cbe3faa0495932d89c38c899c603f21e9b6486951a5ecb8f"}, + {file = "safetensors-0.6.2-cp38-abi3-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:81e67e8bab9878bb568cffbc5f5e655adb38d2418351dc0859ccac158f753e19"}, + {file = "safetensors-0.6.2-cp38-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:b0e4d029ab0a0e0e4fdf142b194514695b1d7d3735503ba700cf36d0fc7136ce"}, + {file = "safetensors-0.6.2-cp38-abi3-musllinux_1_2_armv7l.whl", hash = "sha256:fa48268185c52bfe8771e46325a1e21d317207bcabcb72e65c6e28e9ffeb29c7"}, + {file = "safetensors-0.6.2-cp38-abi3-musllinux_1_2_i686.whl", hash = "sha256:d83c20c12c2d2f465997c51b7ecb00e407e5f94d7dec3ea0cc11d86f60d3fde5"}, + {file = "safetensors-0.6.2-cp38-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:d944cea65fad0ead848b6ec2c37cc0b197194bec228f8020054742190e9312ac"}, + {file = "safetensors-0.6.2-cp38-abi3-win32.whl", hash = "sha256:cab75ca7c064d3911411461151cb69380c9225798a20e712b102edda2542ddb1"}, + {file = "safetensors-0.6.2-cp38-abi3-win_amd64.whl", hash = "sha256:c7b214870df923cbc1593c3faee16bec59ea462758699bd3fee399d00aac072c"}, + {file = "safetensors-0.6.2.tar.gz", hash = "sha256:43ff2aa0e6fa2dc3ea5524ac7ad93a9839256b8703761e76e2d0b2a3fa4f15d9"}, ] [[package]] name = "scikit-learn" -version = "1.7.0" +version = "1.7.1" requires_python = ">=3.10" summary = "A set of python modules for machine learning and data mining" groups = ["default"] @@ -1678,17 +1615,17 @@ dependencies = [ "threadpoolctl>=3.1.0", ] files = [ - {file = "scikit_learn-1.7.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:8ef09b1615e1ad04dc0d0054ad50634514818a8eb3ee3dee99af3bffc0ef5007"}, - {file = "scikit_learn-1.7.0-cp311-cp311-macosx_12_0_arm64.whl", 
hash = "sha256:7d7240c7b19edf6ed93403f43b0fcb0fe95b53bc0b17821f8fb88edab97085ef"}, - {file = "scikit_learn-1.7.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:80bd3bd4e95381efc47073a720d4cbab485fc483966f1709f1fd559afac57ab8"}, - {file = "scikit_learn-1.7.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9dbe48d69aa38ecfc5a6cda6c5df5abef0c0ebdb2468e92437e2053f84abb8bc"}, - {file = "scikit_learn-1.7.0-cp311-cp311-win_amd64.whl", hash = "sha256:8fa979313b2ffdfa049ed07252dc94038def3ecd49ea2a814db5401c07f1ecfa"}, - {file = "scikit_learn-1.7.0.tar.gz", hash = "sha256:c01e869b15aec88e2cdb73d27f15bdbe03bce8e2fb43afbe77c45d399e73a5a3"}, + {file = "scikit_learn-1.7.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:90c8494ea23e24c0fb371afc474618c1019dc152ce4a10e4607e62196113851b"}, + {file = "scikit_learn-1.7.1-cp311-cp311-macosx_12_0_arm64.whl", hash = "sha256:bb870c0daf3bf3be145ec51df8ac84720d9972170786601039f024bf6d61a518"}, + {file = "scikit_learn-1.7.1-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:40daccd1b5623f39e8943ab39735cadf0bdce80e67cdca2adcb5426e987320a8"}, + {file = "scikit_learn-1.7.1-cp311-cp311-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:30d1f413cfc0aa5a99132a554f1d80517563c34a9d3e7c118fde2d273c6fe0f7"}, + {file = "scikit_learn-1.7.1-cp311-cp311-win_amd64.whl", hash = "sha256:c711d652829a1805a95d7fe96654604a8f16eab5a9e9ad87b3e60173415cb650"}, + {file = "scikit_learn-1.7.1.tar.gz", hash = "sha256:24b3f1e976a4665aa74ee0fcaac2b8fccc6ae77c8e07ab25da3ba6d3292b9802"}, ] [[package]] name = "scipy" -version = "1.16.0" +version = "1.16.1" requires_python = ">=3.11" summary = "Fundamental algorithms for scientific computing in Python" groups = ["default"] @@ -1696,21 +1633,21 @@ dependencies = [ "numpy<2.6,>=1.25.2", ] files = [ - {file = "scipy-1.16.0-cp311-cp311-macosx_10_14_x86_64.whl", hash = "sha256:deec06d831b8f6b5fb0b652433be6a09db29e996368ce5911faf673e78d20085"}, - {file = "scipy-1.16.0-cp311-cp311-macosx_12_0_arm64.whl", hash = "sha256:d30c0fe579bb901c61ab4bb7f3eeb7281f0d4c4a7b52dbf563c89da4fd2949be"}, - {file = "scipy-1.16.0-cp311-cp311-macosx_14_0_arm64.whl", hash = "sha256:b2243561b45257f7391d0f49972fca90d46b79b8dbcb9b2cb0f9df928d370ad4"}, - {file = "scipy-1.16.0-cp311-cp311-macosx_14_0_x86_64.whl", hash = "sha256:e6d7dfc148135e9712d87c5f7e4f2ddc1304d1582cb3a7d698bbadedb61c7afd"}, - {file = "scipy-1.16.0-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:90452f6a9f3fe5a2cf3748e7be14f9cc7d9b124dce19667b54f5b429d680d539"}, - {file = "scipy-1.16.0-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:a2f0bf2f58031c8701a8b601df41701d2a7be17c7ffac0a4816aeba89c4cdac8"}, - {file = "scipy-1.16.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:6c4abb4c11fc0b857474241b812ce69ffa6464b4bd8f4ecb786cf240367a36a7"}, - {file = "scipy-1.16.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:b370f8f6ac6ef99815b0d5c9f02e7ade77b33007d74802efc8316c8db98fd11e"}, - {file = "scipy-1.16.0-cp311-cp311-win_amd64.whl", hash = "sha256:a16ba90847249bedce8aa404a83fb8334b825ec4a8e742ce6012a7a5e639f95c"}, - {file = "scipy-1.16.0.tar.gz", hash = "sha256:b5ef54021e832869c8cfb03bc3bf20366cbcd426e02a58e8a58d7584dfbb8f62"}, + {file = "scipy-1.16.1-cp311-cp311-macosx_10_14_x86_64.whl", hash = "sha256:c033fa32bab91dc98ca59d0cf23bb876454e2bb02cbe592d5023138778f70030"}, + {file = "scipy-1.16.1-cp311-cp311-macosx_12_0_arm64.whl", hash = 
"sha256:6e5c2f74e5df33479b5cd4e97a9104c511518fbd979aa9b8f6aec18b2e9ecae7"}, + {file = "scipy-1.16.1-cp311-cp311-macosx_14_0_arm64.whl", hash = "sha256:0a55ffe0ba0f59666e90951971a884d1ff6f4ec3275a48f472cfb64175570f77"}, + {file = "scipy-1.16.1-cp311-cp311-macosx_14_0_x86_64.whl", hash = "sha256:f8a5d6cd147acecc2603fbd382fed6c46f474cccfcf69ea32582e033fb54dcfe"}, + {file = "scipy-1.16.1-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:cb18899127278058bcc09e7b9966d41a5a43740b5bb8dcba401bd983f82e885b"}, + {file = "scipy-1.16.1-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:adccd93a2fa937a27aae826d33e3bfa5edf9aa672376a4852d23a7cd67a2e5b7"}, + {file = "scipy-1.16.1-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:18aca1646a29ee9a0625a1be5637fa798d4d81fdf426481f06d69af828f16958"}, + {file = "scipy-1.16.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:d85495cef541729a70cdddbbf3e6b903421bc1af3e8e3a9a72a06751f33b7c39"}, + {file = "scipy-1.16.1-cp311-cp311-win_amd64.whl", hash = "sha256:226652fca853008119c03a8ce71ffe1b3f6d2844cc1686e8f9806edafae68596"}, + {file = "scipy-1.16.1.tar.gz", hash = "sha256:44c76f9e8b6e8e488a586190ab38016e4ed2f8a038af7cd3defa903c0a2238b3"}, ] [[package]] name = "sentence-transformers" -version = "5.0.0" +version = "5.1.0" requires_python = ">=3.9" summary = "Embeddings, Retrieval, and Reranking" groups = ["default"] @@ -1725,8 +1662,8 @@ dependencies = [ "typing-extensions>=4.5.0", ] files = [ - {file = "sentence_transformers-5.0.0-py3-none-any.whl", hash = "sha256:346240f9cc6b01af387393f03e103998190dfb0826a399d0c38a81a05c7a5d76"}, - {file = "sentence_transformers-5.0.0.tar.gz", hash = "sha256:e5a411845910275fd166bacb01d28b7f79537d3550628ae42309dbdd3d5670d1"}, + {file = "sentence_transformers-5.1.0-py3-none-any.whl", hash = "sha256:fc803929f6a3ce82e2b2c06e0efed7a36de535c633d5ce55efac0b710ea5643e"}, + {file = "sentence_transformers-5.1.0.tar.gz", hash = "sha256:70c7630697cc1c64ffca328d6e8688430ebd134b3c2df03dc07cb3a016b04739"}, ] [[package]] @@ -1775,7 +1712,7 @@ files = [ [[package]] name = "sqlalchemy" -version = "2.0.41" +version = "2.0.42" requires_python = ">=3.7" summary = "Database Abstraction Library" groups = ["default"] @@ -1785,40 +1722,40 @@ dependencies = [ "typing-extensions>=4.6.0", ] files = [ - {file = "sqlalchemy-2.0.41-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:6375cd674fe82d7aa9816d1cb96ec592bac1726c11e0cafbf40eeee9a4516b5f"}, - {file = "sqlalchemy-2.0.41-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:9f8c9fdd15a55d9465e590a402f42082705d66b05afc3ffd2d2eb3c6ba919560"}, - {file = "sqlalchemy-2.0.41-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:32f9dc8c44acdee06c8fc6440db9eae8b4af8b01e4b1aee7bdd7241c22edff4f"}, - {file = "sqlalchemy-2.0.41-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:90c11ceb9a1f482c752a71f203a81858625d8df5746d787a4786bca4ffdf71c6"}, - {file = "sqlalchemy-2.0.41-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:911cc493ebd60de5f285bcae0491a60b4f2a9f0f5c270edd1c4dbaef7a38fc04"}, - {file = "sqlalchemy-2.0.41-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:03968a349db483936c249f4d9cd14ff2c296adfa1290b660ba6516f973139582"}, - {file = "sqlalchemy-2.0.41-cp311-cp311-win32.whl", hash = "sha256:293cd444d82b18da48c9f71cd7005844dbbd06ca19be1ccf6779154439eec0b8"}, - {file = "sqlalchemy-2.0.41-cp311-cp311-win_amd64.whl", hash = 
"sha256:3d3549fc3e40667ec7199033a4e40a2f669898a00a7b18a931d3efb4c7900504"}, - {file = "sqlalchemy-2.0.41-py3-none-any.whl", hash = "sha256:57df5dc6fdb5ed1a88a1ed2195fd31927e705cad62dedd86b46972752a80f576"}, - {file = "sqlalchemy-2.0.41.tar.gz", hash = "sha256:edba70118c4be3c2b1f90754d308d0b79c6fe2c0fdc52d8ddf603916f83f4db9"}, + {file = "sqlalchemy-2.0.42-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:c34100c0b7ea31fbc113c124bcf93a53094f8951c7bf39c45f39d327bad6d1e7"}, + {file = "sqlalchemy-2.0.42-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:ad59dbe4d1252448c19d171dfba14c74e7950b46dc49d015722a4a06bfdab2b0"}, + {file = "sqlalchemy-2.0.42-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f9187498c2149919753a7fd51766ea9c8eecdec7da47c1b955fa8090bc642eaa"}, + {file = "sqlalchemy-2.0.42-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1f092cf83ebcafba23a247f5e03f99f5436e3ef026d01c8213b5eca48ad6efa9"}, + {file = "sqlalchemy-2.0.42-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:fc6afee7e66fdba4f5a68610b487c1f754fccdc53894a9567785932dbb6a265e"}, + {file = "sqlalchemy-2.0.42-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:260ca1d2e5910f1f1ad3fe0113f8fab28657cee2542cb48c2f342ed90046e8ec"}, + {file = "sqlalchemy-2.0.42-cp311-cp311-win32.whl", hash = "sha256:2eb539fd83185a85e5fcd6b19214e1c734ab0351d81505b0f987705ba0a1e231"}, + {file = "sqlalchemy-2.0.42-cp311-cp311-win_amd64.whl", hash = "sha256:9193fa484bf00dcc1804aecbb4f528f1123c04bad6a08d7710c909750fa76aeb"}, + {file = "sqlalchemy-2.0.42-py3-none-any.whl", hash = "sha256:defcdff7e661f0043daa381832af65d616e060ddb54d3fe4476f51df7eaa1835"}, + {file = "sqlalchemy-2.0.42.tar.gz", hash = "sha256:160bedd8a5c28765bd5be4dec2d881e109e33b34922e50a3b881a7681773ac5f"}, ] [[package]] name = "sqlalchemy" -version = "2.0.41" +version = "2.0.42" extras = ["asyncio"] requires_python = ">=3.7" summary = "Database Abstraction Library" groups = ["default"] dependencies = [ "greenlet>=1", - "sqlalchemy==2.0.41", + "sqlalchemy==2.0.42", ] files = [ - {file = "sqlalchemy-2.0.41-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:6375cd674fe82d7aa9816d1cb96ec592bac1726c11e0cafbf40eeee9a4516b5f"}, - {file = "sqlalchemy-2.0.41-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:9f8c9fdd15a55d9465e590a402f42082705d66b05afc3ffd2d2eb3c6ba919560"}, - {file = "sqlalchemy-2.0.41-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:32f9dc8c44acdee06c8fc6440db9eae8b4af8b01e4b1aee7bdd7241c22edff4f"}, - {file = "sqlalchemy-2.0.41-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:90c11ceb9a1f482c752a71f203a81858625d8df5746d787a4786bca4ffdf71c6"}, - {file = "sqlalchemy-2.0.41-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:911cc493ebd60de5f285bcae0491a60b4f2a9f0f5c270edd1c4dbaef7a38fc04"}, - {file = "sqlalchemy-2.0.41-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:03968a349db483936c249f4d9cd14ff2c296adfa1290b660ba6516f973139582"}, - {file = "sqlalchemy-2.0.41-cp311-cp311-win32.whl", hash = "sha256:293cd444d82b18da48c9f71cd7005844dbbd06ca19be1ccf6779154439eec0b8"}, - {file = "sqlalchemy-2.0.41-cp311-cp311-win_amd64.whl", hash = "sha256:3d3549fc3e40667ec7199033a4e40a2f669898a00a7b18a931d3efb4c7900504"}, - {file = "sqlalchemy-2.0.41-py3-none-any.whl", hash = "sha256:57df5dc6fdb5ed1a88a1ed2195fd31927e705cad62dedd86b46972752a80f576"}, - {file = "sqlalchemy-2.0.41.tar.gz", hash = "sha256:edba70118c4be3c2b1f90754d308d0b79c6fe2c0fdc52d8ddf603916f83f4db9"}, 
+ {file = "sqlalchemy-2.0.42-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:c34100c0b7ea31fbc113c124bcf93a53094f8951c7bf39c45f39d327bad6d1e7"}, + {file = "sqlalchemy-2.0.42-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:ad59dbe4d1252448c19d171dfba14c74e7950b46dc49d015722a4a06bfdab2b0"}, + {file = "sqlalchemy-2.0.42-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f9187498c2149919753a7fd51766ea9c8eecdec7da47c1b955fa8090bc642eaa"}, + {file = "sqlalchemy-2.0.42-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1f092cf83ebcafba23a247f5e03f99f5436e3ef026d01c8213b5eca48ad6efa9"}, + {file = "sqlalchemy-2.0.42-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:fc6afee7e66fdba4f5a68610b487c1f754fccdc53894a9567785932dbb6a265e"}, + {file = "sqlalchemy-2.0.42-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:260ca1d2e5910f1f1ad3fe0113f8fab28657cee2542cb48c2f342ed90046e8ec"}, + {file = "sqlalchemy-2.0.42-cp311-cp311-win32.whl", hash = "sha256:2eb539fd83185a85e5fcd6b19214e1c734ab0351d81505b0f987705ba0a1e231"}, + {file = "sqlalchemy-2.0.42-cp311-cp311-win_amd64.whl", hash = "sha256:9193fa484bf00dcc1804aecbb4f528f1123c04bad6a08d7710c909750fa76aeb"}, + {file = "sqlalchemy-2.0.42-py3-none-any.whl", hash = "sha256:defcdff7e661f0043daa381832af65d616e060ddb54d3fe4476f51df7eaa1835"}, + {file = "sqlalchemy-2.0.42.tar.gz", hash = "sha256:160bedd8a5c28765bd5be4dec2d881e109e33b34922e50a3b881a7681773ac5f"}, ] [[package]] @@ -1870,7 +1807,7 @@ files = [ [[package]] name = "tiktoken" -version = "0.9.0" +version = "0.11.0" requires_python = ">=3.9" summary = "tiktoken is a fast BPE tokeniser for use with OpenAI's models" groups = ["default"] @@ -1879,18 +1816,18 @@ dependencies = [ "requests>=2.26.0", ] files = [ - {file = "tiktoken-0.9.0-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:f32cc56168eac4851109e9b5d327637f15fd662aa30dd79f964b7c39fbadd26e"}, - {file = "tiktoken-0.9.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:45556bc41241e5294063508caf901bf92ba52d8ef9222023f83d2483a3055348"}, - {file = "tiktoken-0.9.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:03935988a91d6d3216e2ec7c645afbb3d870b37bcb67ada1943ec48678e7ee33"}, - {file = "tiktoken-0.9.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8b3d80aad8d2c6b9238fc1a5524542087c52b860b10cbf952429ffb714bc1136"}, - {file = "tiktoken-0.9.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:b2a21133be05dc116b1d0372af051cd2c6aa1d2188250c9b553f9fa49301b336"}, - {file = "tiktoken-0.9.0-cp311-cp311-win_amd64.whl", hash = "sha256:11a20e67fdf58b0e2dea7b8654a288e481bb4fc0289d3ad21291f8d0849915fb"}, - {file = "tiktoken-0.9.0.tar.gz", hash = "sha256:d02a5ca6a938e0490e1ff957bc48c8b078c88cb83977be1625b1fd8aac792c5d"}, + {file = "tiktoken-0.11.0-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:4ae374c46afadad0f501046db3da1b36cd4dfbfa52af23c998773682446097cf"}, + {file = "tiktoken-0.11.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:25a512ff25dc6c85b58f5dd4f3d8c674dc05f96b02d66cdacf628d26a4e4866b"}, + {file = "tiktoken-0.11.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2130127471e293d385179c1f3f9cd445070c0772be73cdafb7cec9a3684c0458"}, + {file = "tiktoken-0.11.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:21e43022bf2c33f733ea9b54f6a3f6b4354b909f5a73388fb1b9347ca54a069c"}, + {file = "tiktoken-0.11.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = 
"sha256:adb4e308eb64380dc70fa30493e21c93475eaa11669dea313b6bbf8210bfd013"}, + {file = "tiktoken-0.11.0-cp311-cp311-win_amd64.whl", hash = "sha256:ece6b76bfeeb61a125c44bbefdfccc279b5288e6007fbedc0d32bfec602df2f2"}, + {file = "tiktoken-0.11.0.tar.gz", hash = "sha256:3c518641aee1c52247c2b97e74d8d07d780092af79d5911a6ab5e79359d9b06a"}, ] [[package]] name = "tokenizers" -version = "0.21.2" +version = "0.21.4" requires_python = ">=3.9" summary = "" groups = ["default"] @@ -1898,21 +1835,21 @@ dependencies = [ "huggingface-hub<1.0,>=0.16.4", ] files = [ - {file = "tokenizers-0.21.2-cp39-abi3-macosx_10_12_x86_64.whl", hash = "sha256:342b5dfb75009f2255ab8dec0041287260fed5ce00c323eb6bab639066fef8ec"}, - {file = "tokenizers-0.21.2-cp39-abi3-macosx_11_0_arm64.whl", hash = "sha256:126df3205d6f3a93fea80c7a8a266a78c1bd8dd2fe043386bafdd7736a23e45f"}, - {file = "tokenizers-0.21.2-cp39-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4a32cd81be21168bd0d6a0f0962d60177c447a1aa1b1e48fa6ec9fc728ee0b12"}, - {file = "tokenizers-0.21.2-cp39-abi3-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:8bd8999538c405133c2ab999b83b17c08b7fc1b48c1ada2469964605a709ef91"}, - {file = "tokenizers-0.21.2-cp39-abi3-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:5e9944e61239b083a41cf8fc42802f855e1dca0f499196df37a8ce219abac6eb"}, - {file = "tokenizers-0.21.2-cp39-abi3-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:514cd43045c5d546f01142ff9c79a96ea69e4b5cda09e3027708cb2e6d5762ab"}, - {file = "tokenizers-0.21.2-cp39-abi3-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:b1b9405822527ec1e0f7d8d2fdb287a5730c3a6518189c968254a8441b21faae"}, - {file = "tokenizers-0.21.2-cp39-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:fed9a4d51c395103ad24f8e7eb976811c57fbec2af9f133df471afcd922e5020"}, - {file = "tokenizers-0.21.2-cp39-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:2c41862df3d873665ec78b6be36fcc30a26e3d4902e9dd8608ed61d49a48bc19"}, - {file = "tokenizers-0.21.2-cp39-abi3-musllinux_1_2_armv7l.whl", hash = "sha256:ed21dc7e624e4220e21758b2e62893be7101453525e3d23264081c9ef9a6d00d"}, - {file = "tokenizers-0.21.2-cp39-abi3-musllinux_1_2_i686.whl", hash = "sha256:0e73770507e65a0e0e2a1affd6b03c36e3bc4377bd10c9ccf51a82c77c0fe365"}, - {file = "tokenizers-0.21.2-cp39-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:106746e8aa9014a12109e58d540ad5465b4c183768ea96c03cbc24c44d329958"}, - {file = "tokenizers-0.21.2-cp39-abi3-win32.whl", hash = "sha256:cabda5a6d15d620b6dfe711e1af52205266d05b379ea85a8a301b3593c60e962"}, - {file = "tokenizers-0.21.2-cp39-abi3-win_amd64.whl", hash = "sha256:58747bb898acdb1007f37a7bbe614346e98dc28708ffb66a3fd50ce169ac6c98"}, - {file = "tokenizers-0.21.2.tar.gz", hash = "sha256:fdc7cffde3e2113ba0e6cc7318c40e3438a4d74bbc62bf04bcc63bdfb082ac77"}, + {file = "tokenizers-0.21.4-cp39-abi3-macosx_10_12_x86_64.whl", hash = "sha256:2ccc10a7c3bcefe0f242867dc914fc1226ee44321eb618cfe3019b5df3400133"}, + {file = "tokenizers-0.21.4-cp39-abi3-macosx_11_0_arm64.whl", hash = "sha256:5e2f601a8e0cd5be5cc7506b20a79112370b9b3e9cb5f13f68ab11acd6ca7d60"}, + {file = "tokenizers-0.21.4-cp39-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:39b376f5a1aee67b4d29032ee85511bbd1b99007ec735f7f35c8a2eb104eade5"}, + {file = "tokenizers-0.21.4-cp39-abi3-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:2107ad649e2cda4488d41dfd031469e9da3fcbfd6183e74e4958fa729ffbf9c6"}, + {file = 
"tokenizers-0.21.4-cp39-abi3-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3c73012da95afafdf235ba80047699df4384fdc481527448a078ffd00e45a7d9"}, + {file = "tokenizers-0.21.4-cp39-abi3-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:f23186c40395fc390d27f519679a58023f368a0aad234af145e0f39ad1212732"}, + {file = "tokenizers-0.21.4-cp39-abi3-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:cc88bb34e23a54cc42713d6d98af5f1bf79c07653d24fe984d2d695ba2c922a2"}, + {file = "tokenizers-0.21.4-cp39-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:51b7eabb104f46c1c50b486520555715457ae833d5aee9ff6ae853d1130506ff"}, + {file = "tokenizers-0.21.4-cp39-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:714b05b2e1af1288bd1bc56ce496c4cebb64a20d158ee802887757791191e6e2"}, + {file = "tokenizers-0.21.4-cp39-abi3-musllinux_1_2_armv7l.whl", hash = "sha256:1340ff877ceedfa937544b7d79f5b7becf33a4cfb58f89b3b49927004ef66f78"}, + {file = "tokenizers-0.21.4-cp39-abi3-musllinux_1_2_i686.whl", hash = "sha256:3c1f4317576e465ac9ef0d165b247825a2a4078bcd01cba6b54b867bdf9fdd8b"}, + {file = "tokenizers-0.21.4-cp39-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:c212aa4e45ec0bb5274b16b6f31dd3f1c41944025c2358faaa5782c754e84c24"}, + {file = "tokenizers-0.21.4-cp39-abi3-win32.whl", hash = "sha256:6c42a930bc5f4c47f4ea775c91de47d27910881902b0f20e4990ebe045a415d0"}, + {file = "tokenizers-0.21.4-cp39-abi3-win_amd64.whl", hash = "sha256:475d807a5c3eb72c59ad9b5fcdb254f6e17f53dfcbb9903233b0dfa9c943b597"}, + {file = "tokenizers-0.21.4.tar.gz", hash = "sha256:fa23f85fbc9a02ec5c6978da172cdcbac23498c3ca9f3645c5c68740ac007880"}, ] [[package]] @@ -1967,13 +1904,13 @@ files = [ [[package]] name = "transformers" -version = "4.53.2" +version = "4.55.0" requires_python = ">=3.9.0" summary = "State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow" groups = ["default"] dependencies = [ "filelock", - "huggingface-hub<1.0,>=0.30.0", + "huggingface-hub<1.0,>=0.34.0", "numpy>=1.17", "packaging>=20.0", "pyyaml>=5.1", @@ -1984,8 +1921,8 @@ dependencies = [ "tqdm>=4.27", ] files = [ - {file = "transformers-4.53.2-py3-none-any.whl", hash = "sha256:db8f4819bb34f000029c73c3c557e7d06fc1b8e612ec142eecdae3947a9c78bf"}, - {file = "transformers-4.53.2.tar.gz", hash = "sha256:6c3ed95edfb1cba71c4245758f1b4878c93bf8cde77d076307dacb2cbbd72be2"}, + {file = "transformers-4.55.0-py3-none-any.whl", hash = "sha256:29d9b8800e32a4a831bb16efb5f762f6a9742fef9fce5d693ed018d19b106490"}, + {file = "transformers-4.55.0.tar.gz", hash = "sha256:15aa138a05d07a15b30d191ea2c45e23061ebf9fcc928a1318e03fe2234f3ae1"}, ] [[package]] @@ -2000,7 +1937,7 @@ files = [ [[package]] name = "types-requests" -version = "2.32.4.20250611" +version = "2.32.4.20250809" requires_python = ">=3.9" summary = "Typing stubs for requests" groups = ["dev"] @@ -2008,8 +1945,8 @@ dependencies = [ "urllib3>=2", ] files = [ - {file = "types_requests-2.32.4.20250611-py3-none-any.whl", hash = "sha256:ad2fe5d3b0cb3c2c902c8815a70e7fb2302c4b8c1f77bdcd738192cdb3878072"}, - {file = "types_requests-2.32.4.20250611.tar.gz", hash = "sha256:741c8777ed6425830bf51e54d6abe245f79b4dcb9019f1622b773463946bf826"}, + {file = "types_requests-2.32.4.20250809-py3-none-any.whl", hash = "sha256:f73d1832fb519ece02c85b1f09d5f0dd3108938e7d47e7f94bbfa18a6782b163"}, + {file = "types_requests-2.32.4.20250809.tar.gz", hash = "sha256:d8060de1c8ee599311f56ff58010fb4902f462a1470802cf9f6ed27bc46c4df3"}, ] [[package]] diff --git a/requirements.cpu.txt b/requirements.cpu.txt index 
21629eaf..9f674ea8 100644 --- a/requirements.cpu.txt +++ b/requirements.cpu.txt @@ -1,31 +1,31 @@ # This file is @generated by PDM. # Please do not edit it manually. -accelerate==1.8.1 \ - --hash=sha256:c47b8994498875a2b1286e945bd4d20e476956056c7941d512334f4eb44ff991 \ - --hash=sha256:f60df931671bc4e75077b852990469d4991ce8bd3a58e72375c3c95132034db9 +accelerate==1.10.0 \ + --hash=sha256:260a72b560e100e839b517a331ec85ed495b3889d12886e79d1913071993c5a3 \ + --hash=sha256:8270568fda9036b5cccdc09703fef47872abccd56eb5f6d53b54ea5fb7581496 aiohappyeyeballs==2.6.1 \ --hash=sha256:c3f9d0113123803ccadfdf3f0faa505bc78e6a72d1cc4806cbd719826e943558 \ --hash=sha256:f349ba8f4b75cb25c99c5c2d84e997e485204d2902a9597802b0371f09331fb8 -aiohttp==3.12.14 \ - --hash=sha256:040afa180ea514495aaff7ad34ec3d27826eaa5d19812730fe9e529b04bb2179 \ - --hash=sha256:0b8a69acaf06b17e9c54151a6c956339cf46db4ff72b3ac28516d0f7068f4ced \ - --hash=sha256:16260e8e03744a6fe3fcb05259eeab8e08342c4c33decf96a9dad9f1187275d0 \ - --hash=sha256:1d6f607ce2e1a93315414e3d448b831238f1874b9968e1195b06efaa5c87e245 \ - --hash=sha256:4699979560728b168d5ab63c668a093c9570af2c7a78ea24ca5212c6cdc2b641 \ - --hash=sha256:4ac76627c0b7ee0e80e871bde0d376a057916cb008a8f3ffc889570a838f5cc7 \ - --hash=sha256:4f1205f97de92c37dd71cf2d5bcfb65fdaed3c255d246172cce729a8d849b4da \ - --hash=sha256:565e70d03e924333004ed101599902bba09ebb14843c8ea39d657f037115201b \ - --hash=sha256:6e06e120e34d93100de448fd941522e11dafa78ef1a893c179901b7d66aa29f2 \ - --hash=sha256:76ae6f1dd041f85065d9df77c6bc9c9703da9b5c018479d20262acc3df97d419 \ - --hash=sha256:798204af1180885651b77bf03adc903743a86a39c7392c472891649610844635 \ - --hash=sha256:8283f42181ff6ccbcf25acaae4e8ab2ff7e92b3ca4a4ced73b2c12d8cd971393 \ - --hash=sha256:8c779e5ebbf0e2e15334ea404fcce54009dc069210164a244d2eac8352a44b28 \ - --hash=sha256:a194ace7bc43ce765338ca2dfb5661489317db216ea7ea700b0332878b392cab \ - --hash=sha256:a289f50bf1bd5be227376c067927f78079a7bdeccf8daa6a9e65c38bae14324b \ - --hash=sha256:ad5fdf6af93ec6c99bf800eba3af9a43d8bfd66dce920ac905c817ef4a712afe \ - --hash=sha256:b413c12f14c1149f0ffd890f4141a7471ba4b41234fe4fd4a0ff82b1dc299dbb \ - --hash=sha256:f4552ff7b18bcec18b60a90c6982049cdb9dac1dba48cf00b97934a06ce2e597 +aiohttp==3.12.15 \ + --hash=sha256:010cc9bbd06db80fe234d9003f67e97a10fe003bfbedb40da7d71c1008eda0fe \ + --hash=sha256:2abbb216a1d3a2fe86dbd2edce20cdc5e9ad0be6378455b05ec7f77361b3ab50 \ + --hash=sha256:3b6f0af863cf17e6222b1735a756d664159e58855da99cfe965134a3ff63b0b0 \ + --hash=sha256:3f9d7c55b41ed687b9d7165b17672340187f87a773c98236c987f08c858145a9 \ + --hash=sha256:421da6fd326460517873274875c6c5a18ff225b40da2616083c5a34a7570b685 \ + --hash=sha256:4420cf9d179ec8dfe4be10e7d0fe47d6d606485512ea2265b0d8c5113372771b \ + --hash=sha256:4fc61385e9c98d72fcdf47e6dd81833f47b2f77c114c29cd64a361be57a763a2 \ + --hash=sha256:6443cca89553b7a5485331bc9bedb2342b08d073fa10b8c7d1c60579c4a7b9bd \ + --hash=sha256:6c5f40ec615e5264f44b4282ee27628cea221fcad52f27405b80abb346d9f3f8 \ + --hash=sha256:74dad41b3458dbb0511e760fb355bb0b6689e0630de8a22b1b62a98777136e16 \ + --hash=sha256:7c7dd29c7b5bda137464dc9bfc738d7ceea46ff70309859ffde8c022e9b08ba7 \ + --hash=sha256:7fbc8a7c410bb3ad5d595bb7118147dfbb6449d862cc1125cf8867cb337e8728 \ + --hash=sha256:b5b7fe4972d48a4da367043b8e023fb70a04d1490aa7d68800e465d1b97e493b \ + --hash=sha256:bc4fbc61bb3548d3b482f9ac7ddd0f18c67e4225aaa4e8552b9f1ac7e6bda9e5 \ + --hash=sha256:ced339d7c9b5030abad5854aa5413a77565e5b6e6248ff927d3e174baf3badf7 \ + 
--hash=sha256:d3ce17ce0220383a0f9ea07175eeaa6aa13ae5a41f30bc61d84df17f0e9b1117 \ + --hash=sha256:db71ce547012a5420a39c1b744d485cfb823564d01d5d20805977f5ea1345676 \ + --hash=sha256:edd533a07da85baa4b423ee8839e3e91681c7bfa19b04260a469ee94b778bf6d aiosignal==1.4.0 \ --hash=sha256:053243f8b92b990551949e63930a839ff0cf0b0ebbe0597b0f3fb19e1a0fe82e \ --hash=sha256:f47eecd9468083c2029cc99945502cb7708b082c232f9aca65da147157b251c7 @@ -35,15 +35,15 @@ aiosqlite==0.21.0 \ annotated-types==0.7.0 \ --hash=sha256:1f02e8b43a8fbbc3f3e0d4f0f4bfc8131bcb4eebe8849b8e5c773f3a1c582a53 \ --hash=sha256:aff07c09a53a08bc8cfccb9c85b05f1aa9a2a6f23728d790723543408344ce89 -anyio==4.9.0 \ - --hash=sha256:673c0c244e15788651a4ff38710fea9675823028a6f08a5eda409e0c9840a028 \ - --hash=sha256:9f76d541cad6e36af7beb62e978876f3b41e3e04f2c1fbf0884604c0a9c4d93c +anyio==4.10.0 \ + --hash=sha256:3f3fae35c96039744587aa5b8371e7e8e603c0702999535961dd336026973ba6 \ + --hash=sha256:60e474ac86736bbfd6f210f7a61218939c318f43f9972497381f1c5e930ed3d1 attrs==25.3.0 \ --hash=sha256:427318ce031701fea540783410126f03899a97ffc6f61596ad581ac2e40e3bc3 \ --hash=sha256:75d7cefc7fb576747b2c81b4442d4d4a1ce0900973527c011d1030fd3bf4af1b -banks==2.1.3 \ - --hash=sha256:9e1217dc977e6dd1ce42c5ff48e9bcaf238d788c81b42deb6a555615ffcffbab \ - --hash=sha256:c0dd2cb0c5487274a513a552827e6a8ddbd0ab1a1b967f177e71a6e4748a3ed2 +banks==2.2.0 \ + --hash=sha256:963cd5c85a587b122abde4f4064078def35c50c688c1b9d36f43c92503854e7d \ + --hash=sha256:d1446280ce6e00301e3e952dd754fd8cee23ff277d29ed160994a84d0d7ffe62 beautifulsoup4==4.13.4 \ --hash=sha256:9bbbb14bfde9d79f38b8cd5f8c7c85f4b8f2523190ebed90e950a8dea4cb1c4b \ --hash=sha256:dbb3c4e1ceae6aefebdaf2423247260cd062430a410e38c66f2baa50a8437195 @@ -54,25 +54,23 @@ black==25.1.0 \ --hash=sha256:96c1c7cd856bba8e20094e36e0f948718dc688dba4a9d78c3adde52b9e6c2299 \ --hash=sha256:a39337598244de4bae26475f77dda852ea00a93bd4c728e09eacd827ec929df0 \ --hash=sha256:bce2e264d59c91e52d8000d507eb20a9aca4a778731a08cfff7e5ac4a4bb7096 -certifi==2025.7.9 \ - --hash=sha256:c1d2ec05395148ee10cf672ffc28cd37ea0ab0d99f9cc74c43e588cbd111b079 \ - --hash=sha256:d842783a14f8fdd646895ac26f719a061408834473cfc10203f6a575beb15d39 -charset-normalizer==3.4.2 \ - --hash=sha256:0c8c57f84ccfc871a48a47321cfa49ae1df56cd1d965a09abe84066f6853b9c0 \ - --hash=sha256:0f5d9ed7f254402c9e7d35d2f5972c9bbea9040e99cd2861bd77dc68263277c7 \ - --hash=sha256:28a1005facc94196e1fb3e82a3d442a9d9110b8434fc1ded7a24a2983c9888d8 \ - --hash=sha256:5baececa9ecba31eff645232d59845c07aa030f0c81ee70184a90d35099a0e63 \ - --hash=sha256:6b66f92b17849b85cad91259efc341dce9c1af48e2173bf38a85c6329f1033e5 \ - --hash=sha256:7f56930ab0abd1c45cd15be65cc741c28b1c9a34876ce8c17a2fa107810c0af0 \ - --hash=sha256:aa88ca0b1932e93f2d961bf3addbb2db902198dca337d88c89e1559e066e7645 \ - --hash=sha256:be1e352acbe3c78727a16a455126d9ff83ea2dfdcbc83148d2982305a04714c2 \ - --hash=sha256:d524ba3f1581b35c03cb42beebab4a13e6cdad7b36246bd22541fa585a56cccd \ - --hash=sha256:daac4765328a919a805fa5e2720f3e94767abd632ae410a9062dff5412bae65a \ - --hash=sha256:e53efc7c7cee4c1e70661e2e112ca46a575f90ed9ae3fef200f2a25e954f4b28 \ - --hash=sha256:e70e990b2137b29dc5564715de1e12701815dacc1d056308e2b17e9095372a82 \ - --hash=sha256:efd387a49825780ff861998cd959767800d54f8308936b21025326de4b5a42b9 \ - --hash=sha256:f0aa37f3c979cf2546b73e8222bbfa3dc07a641585340179d768068e3455e544 \ - --hash=sha256:fdb20a30fe1175ecabed17cbf7812f7b804b8a315a25f24678bcdf120a90077f +certifi==2025.8.3 \ + 
--hash=sha256:e564105f78ded564e3ae7c923924435e1daa7463faeab5bb932bc53ffae63407 \ + --hash=sha256:f6c12493cfb1b06ba2ff328595af9350c65d6644968e5d3a2ffd78699af217a5 +charset-normalizer==3.4.3 \ + --hash=sha256:00237675befef519d9af72169d8604a067d92755e84fe76492fef5441db05b91 \ + --hash=sha256:0e78314bdc32fa80696f72fa16dc61168fda4d6a0c014e0380f9d02f0e5d8a07 \ + --hash=sha256:13faeacfe61784e2559e690fc53fa4c5ae97c6fcedb8eb6fb8d0a15b475d2c64 \ + --hash=sha256:1e8ac75d72fa3775e0b7cb7e4629cec13b7514d928d15ef8ea06bca03ef01cae \ + --hash=sha256:31a9a6f775f9bcd865d88ee350f0ffb0e25936a7f930ca98995c05abf1faf21c \ + --hash=sha256:585f3b2a80fbd26b048a0be90c5aae8f06605d3c92615911c3a2b03a8a3b796f \ + --hash=sha256:6cf8fd4c04756b6b60146d98cd8a77d0cdae0e1ca20329da2ac85eed779b6849 \ + --hash=sha256:6fce4b8500244f6fcb71465d4a4930d132ba9ab8e71a7859e6a5d59851068d14 \ + --hash=sha256:939578d9d8fd4299220161fdd76e86c6a251987476f5243e8864a7844476ba14 \ + --hash=sha256:96b2b3d1a83ad55310de8c7b4a2d04d9277d5591f40761274856635acc5fcb30 \ + --hash=sha256:b256ee2e749283ef3ddcff51a675ff43798d92d746d1a6e4631bf8c707d22d0b \ + --hash=sha256:ce571ab16d890d23b5c278547ba694193a45011ff86a9162a71307ed9f86759a \ + --hash=sha256:fd10de089bcdcd1be95a2f73dbe6254798ec1bda9f450d5828c96f93e2536b9c click==8.2.1 \ --hash=sha256:27c491cc05d968d271d5a1db13e3b5a184636d9d930f148c50b038f0d0646202 \ --hash=sha256:61a3265b914e850b85317d0b3109c7f8cd35a670f963866005d6ef1d5175a12b @@ -94,13 +92,16 @@ dirtyjson==1.0.8 \ distro==1.9.0 \ --hash=sha256:2fa77c6fd8940f116ee1d6b94a2f90b13b5ea8d019b98bc8bafdcabcdd9bdbed \ --hash=sha256:7bffd925d65168f85027d8da9af6bddab658135b840670a223589bc0c8ef02b2 -faiss-cpu==1.11.0 \ - --hash=sha256:2c39a388b059fb82cd97fbaa7310c3580ced63bf285be531453bfffbe89ea3dd \ - --hash=sha256:44877b896a2b30a61e35ea4970d008e8822545cb340eca4eff223ac7f40a1db9 \ - --hash=sha256:926645f1b6829623bc88e93bc8ca872504d604718ada3262e505177939aaee0a \ - --hash=sha256:931db6ed2197c03a7fdf833b057c13529afa2cec8a827aa081b7f0543e4e671b \ - --hash=sha256:a4e3433ffc7f9b8707a7963db04f8676a5756868d325644db2db9d67a618b7a0 \ - --hash=sha256:a90d1c81d0ecf2157e1d2576c482d734d10760652a5b2fcfa269916611e41f1c +faiss-cpu==1.11.0.post1 \ + --hash=sha256:06b1ea9ddec9e4d9a41c8ef7478d493b08d770e9a89475056e963081eed757d1 \ + --hash=sha256:0794eb035c6075e931996cf2b2703fbb3f47c8c34bc2d727819ddc3e5e486a31 \ + --hash=sha256:18d2221014813dc9a4236e47f9c4097a71273fbf17c3fe66243e724e2018a67a \ + --hash=sha256:1b15412b22a05865433aecfdebf7664b9565bd49b600d23a0a27c74a5526893e \ + --hash=sha256:2c8c384e65cc1b118d2903d9f3a27cd35f6c45337696fc0437f71e05f732dbc0 \ + --hash=sha256:36af46945274ed14751b788673125a8a4900408e4837a92371b0cad5708619ea \ + --hash=sha256:3ce8a8984a7dcc689fd192c69a476ecd0b2611c61f96fe0799ff432aa73ff79c \ + --hash=sha256:81c169ea74213b2c055b8240befe7e9b42a1f3d97cda5238b3b401035ce1a18b \ + --hash=sha256:8384e05afb7c7968e93b81566759f862e744c0667b175086efb3d8b20949b39f filelock==3.18.0 \ --hash=sha256:adbc88eabb99d2fec8c9c1b229b171f18afa655400173ddc653d5d01501fb9f2 \ --hash=sha256:c401f4f8377c4464e6db25fff06205fd89bdd83b65eb0488ed1b160f780e21de @@ -127,44 +128,44 @@ frozenlist==1.7.0 \ --hash=sha256:ce48b2fece5aeb45265bb7a58259f45027db0abff478e3077e12b05b17fb9da7 \ --hash=sha256:d50ac7627b3a1bd2dcef6f9da89a772694ec04d9a61b66cf87f7d9446b4a0c31 \ --hash=sha256:fe2365ae915a1fafd982c146754e1de6ab3478def8a59c86e1f7242d794f97d5 -fsspec==2025.5.1 \ - --hash=sha256:24d3a2e663d5fc735ab256263c4075f374a174c3410c0b25e5bd1970bceaa462 \ - 
--hash=sha256:2e55e47a540b91843b755e83ded97c6e897fa0942b11490113f09e9c443c2475 -greenlet==3.2.3 \ - --hash=sha256:0921ac4ea42a5315d3446120ad48f90c3a6b9bb93dd9b3cf4e4d84a66e42de83 \ - --hash=sha256:29e184536ba333003540790ba29829ac14bb645514fbd7e32af331e8202a62a5 \ - --hash=sha256:592c12fb1165be74592f5de0d70f82bc5ba552ac44800d632214b76089945147 \ - --hash=sha256:751261fc5ad7b6705f5f76726567375bb2104a059454e0226e1eef6c756748ba \ - --hash=sha256:784ae58bba89fa1fa5733d170d42486580cab9decda3484779f4759345b29822 \ - --hash=sha256:83a8761c75312361aa2b5b903b79da97f13f556164a7dd2d5448655425bd4c34 \ - --hash=sha256:8b0dd8ae4c0d6f5e54ee55ba935eeb3d735a9b58a8a1e5b5cbab64e01a39f365 \ - --hash=sha256:93c0bb79844a367782ec4f429d07589417052e621aa39a5ac1fb99c5aa308edc \ - --hash=sha256:c667c0bf9d406b77a15c924ef3285e1e05250948001220368e039b6aa5b5034b \ - --hash=sha256:d2971d93bb99e05f8c2c0c2f4aa9484a18d98c4c3bd3c62b65b7e6ae33dfcfaf -griffe==1.7.3 \ - --hash=sha256:52ee893c6a3a968b639ace8015bec9d36594961e156e23315c8e8e51401fa50b \ - --hash=sha256:c6b3ee30c2f0f17f30bcdef5068d6ab7a2a4f1b8bf1a3e74b56fffd21e1c5f75 +fsspec==2025.7.0 \ + --hash=sha256:786120687ffa54b8283d942929540d8bc5ccfa820deb555a2b5d0ed2b737bf58 \ + --hash=sha256:8b012e39f63c7d5f10474de957f3ab793b47b45ae7d39f2fb735f8bbe25c0e21 +greenlet==3.2.4 \ + --hash=sha256:0db5594dce18db94f7d1650d7489909b57afde4c580806b8d9203b6e79cdc079 \ + --hash=sha256:0dca0d95ff849f9a364385f36ab49f50065d76964944638be9691e1832e9f86d \ + --hash=sha256:1987de92fec508535687fb807a5cea1560f6196285a4cde35c100b8cd632cc52 \ + --hash=sha256:1ee8fae0519a337f2329cb78bd7a8e128ec0f881073d43f023c7b8d4831d5246 \ + --hash=sha256:2523e5246274f54fdadbce8494458a2ebdcdbc7b802318466ac5606d3cded1f8 \ + --hash=sha256:4d1378601b85e2e5171b99be8d2dc85f594c79967599328f95c1dc1a40f1c633 \ + --hash=sha256:55e9c5affaa6775e2c6b67659f3a71684de4c549b3dd9afca3bc773533d284fa \ + --hash=sha256:94abf90142c2a18151632371140b3dba4dee031633fe614cb592dbb6c9e17bc3 \ + --hash=sha256:96378df1de302bc38e99c3a9aa311967b7dc80ced1dcc6f171e99842987882a2 \ + --hash=sha256:9c40adce87eaa9ddb593ccb0fa6a07caf34015a29bf8d344811665b573138db9 +griffe==1.11.0 \ + --hash=sha256:c153b5bc63ca521f059e9451533a67e44a9d06cf9bf1756e4298bda5bd3262e8 \ + --hash=sha256:dc56cc6af8d322807ecdb484b39838c7a51ca750cf21ccccf890500c4d6389d8 h11==0.16.0 \ --hash=sha256:4e35b956cf45792e4caa5885e69fba00bdbc6ffafbfa020300e549b208ee5ff1 \ --hash=sha256:63cf8bbe7522de3bf65932fda1d9c2772064ffb3dae62d55932da54b31cb6c86 -hf-xet==1.1.5; platform_machine == "x86_64" or platform_machine == "amd64" or platform_machine == "arm64" or platform_machine == "aarch64" \ - --hash=sha256:69ebbcfd9ec44fdc2af73441619eeb06b94ee34511bbcf57cd423820090f5694 \ - --hash=sha256:73e167d9807d166596b4b2f0b585c6d5bd84a26dea32843665a8b58f6edba245 \ - --hash=sha256:83088ecea236d5113de478acb2339f92c95b4fb0462acaa30621fac02f5a534a \ - --hash=sha256:9fa6e3ee5d61912c4a113e0708eaaef987047616465ac7aa30f7121a48fc1af8 \ - --hash=sha256:ab34c4c3104133c495785d5d8bba3b1efc99de52c02e759cf711a91fd39d3a14 \ - --hash=sha256:dbba1660e5d810bd0ea77c511a99e9242d920790d0e63c0e4673ed36c4022d18 \ - --hash=sha256:f52c2fa3635b8c37c7764d8796dfa72706cc4eded19d638331161e82b0792e23 \ - --hash=sha256:fc874b5c843e642f45fd85cda1ce599e123308ad2901ead23d3510a47ff506d1 +hf-xet==1.1.7; platform_machine == "x86_64" or platform_machine == "amd64" or platform_machine == "arm64" or platform_machine == "aarch64" \ + --hash=sha256:18b61bbae92d56ae731b92087c44efcac216071182c603fc535f8e29ec4b09b8 \ + 
--hash=sha256:20cec8db4561338824a3b5f8c19774055b04a8df7fff0cb1ff2cb1a0c1607b80 \ + --hash=sha256:2e356da7d284479ae0f1dea3cf5a2f74fdf925d6dca84ac4341930d892c7cb34 \ + --hash=sha256:60dae4b44d520819e54e216a2505685248ec0adbdb2dd4848b17aa85a0375cde \ + --hash=sha256:6efaaf1a5a9fc3a501d3e71e88a6bfebc69ee3a716d0e713a931c8b8d920038f \ + --hash=sha256:713f2bff61b252f8523739969f247aa354ad8e6d869b8281e174e2ea1bb8d604 \ + --hash=sha256:751571540f9c1fbad9afcf222a5fb96daf2384bf821317b8bfb0c59d86078513 \ + --hash=sha256:b109f4c11e01c057fc82004c9e51e6cdfe2cb230637644ade40c599739067b2e httpcore==1.0.9 \ --hash=sha256:2d400746a40668fc9dec9810239072b40b4484b640a8c38fd654a024c7a1bf55 \ --hash=sha256:6e34463af53fd2ab5d807f399a9b45ea31c3dfa2276f15a2c3f00afff6e176e8 httpx==0.28.1 \ --hash=sha256:75e98c5f16b0f35b567856f597f06ff2270a374470a5c2392242528e3e3e42fc \ --hash=sha256:d909fcccc110f8c7faf814ca82a9a4d816bc5a6dbfea25d6591d6985b8ba59ad -huggingface-hub[inference]==0.33.4 \ - --hash=sha256:09f9f4e7ca62547c70f8b82767eefadd2667f4e116acba2e3e62a5a81815a7bb \ - --hash=sha256:6af13478deae120e765bfd92adad0ae1aec1ad8c439b46f23058ad5956cbca0a +huggingface-hub[inference]==0.34.4 \ + --hash=sha256:9b365d781739c93ff90c359844221beef048403f1bc1f1c123c191257c3c890a \ + --hash=sha256:a4228daa6fb001be3f4f4bdaf9a0db00e1739235702848df00885c9b5742c85c idna==3.10 \ --hash=sha256:12f65c9b470abda6dc35cf8e63cc574b1c52b11df2c86030af0ac09b01b13ea9 \ --hash=sha256:946d195a0d259cbba61165e88e65941f16e9b36ea6ddb97f00452bae8b1287d3 @@ -188,63 +189,51 @@ jiter==0.10.0 \ joblib==1.5.1 \ --hash=sha256:4719a31f054c7d766948dcd83e9613686b27114f190f717cec7eaa2084f8a74a \ --hash=sha256:f4f86e351f39fe3d0d32a9f2c3d8af1ee4cec285aafcb27003dda5205576b444 -llama-cloud==0.1.32 \ - --hash=sha256:c42b2d5fb24acc8595bcc3626fb84c872909a16ab6d6879a1cb1101b21c238bd \ - --hash=sha256:cea98241127311ea91f191c3c006aa6558f01d16f9539ed93b24d716b888f10e -llama-cloud-services==0.6.43 \ - --hash=sha256:2349195f501ba9151ea3ab384d20cae8b4dc4f335f60bd17607332626bdfa2e4 \ - --hash=sha256:fa6be33bf54d467cace809efee8c2aeeb9de74ce66708513d37b40d738d3350f -llama-index==0.12.48 \ - --hash=sha256:54b922fd94efde2c21c12be392c381cb4a0531a7ca8e482a7e3d1c6795af2da5 \ - --hash=sha256:93a80de54a5cf86114c252338d7917bb81ffe94afa47f01c41c9ee04c0155db4 -llama-index-agent-openai==0.4.12 \ - --hash=sha256:6dbb6276b2e5330032a726b28d5eef5140825f36d72d472b231f08ad3af99665 \ - --hash=sha256:d2fe53feb69cfe45752edb7328bf0d25f6a9071b3c056787e661b93e5b748a28 -llama-index-cli==0.4.4 \ - --hash=sha256:1070593cf79407054735ab7a23c5a65a26fc18d264661e42ef38fc549b4b7658 \ - --hash=sha256:c3af0cf1e2a7e5ef44d0bae5aa8e8872b54c5dd6b731afbae9f13ffeb4997be0 -llama-index-core==0.12.48 \ - --hash=sha256:0770119ab540605cb217dc9b26343b0bdf6f91d843cfb17d0074ba2fac358e56 \ - --hash=sha256:a5cb2179495f091f351a41b4ef312ec6593660438e0066011ec81f7b5d2c93be -llama-index-embeddings-huggingface==0.5.5 \ - --hash=sha256:7f6e9a031d9146f235df597c0ccd6280cde96b9b437f99052ce79bb72e5fac5e \ - --hash=sha256:8260e1561df17ca510e241a90504b37cc7d8ac6f2d6aaad9732d04ca3ad988d1 -llama-index-embeddings-openai==0.3.1 \ - --hash=sha256:1368aad3ce24cbaed23d5ad251343cef1eb7b4a06d6563d6606d59cb347fef20 \ - --hash=sha256:f15a3d13da9b6b21b8bd51d337197879a453d1605e625a1c6d45e741756c0290 -llama-index-indices-managed-llama-cloud==0.7.10 \ - --hash=sha256:53267907e23d8fbcbb97c7a96177a41446de18550ca6030276092e73b45ca880 \ - --hash=sha256:f7edcfb8f694cab547cd9324be7835dc97470ce05150d0b8888fa3bf9d2f84a8 -llama-index-instrumentation==0.2.0 \ - 
--hash=sha256:1055ae7a3d19666671a8f1a62d08c90472552d9fcec7e84e6919b2acc92af605 \ - --hash=sha256:ae8333522487e22a33732924a9a08dfb456f54993c5c97d8340db3c620b76f13 -llama-index-llms-openai==0.4.7 \ - --hash=sha256:3b8d9d3c1bcadc2cff09724de70f074f43eafd5b7048a91247c9a41b7cd6216d \ - --hash=sha256:564af8ab39fb3f3adfeae73a59c0dca46c099ab844a28e725eee0c551d4869f8 -llama-index-multi-modal-llms-openai==0.5.3 \ - --hash=sha256:b755a8b47d8d2f34b5a3d249af81d9bfb69d3d2cf9ab539d3a42f7bfa3e2391a \ - --hash=sha256:be6237df8f9caaa257f9beda5317287bbd2ec19473d777a30a34e41a7c5bddf8 -llama-index-program-openai==0.3.2 \ - --hash=sha256:04c959a2e616489894bd2eeebb99500d6f1c17d588c3da0ddc75ebd3eb7451ee \ - --hash=sha256:451829ae53e074e7b47dcc60a9dd155fcf9d1dcbc1754074bdadd6aab4ceb9aa -llama-index-question-gen-openai==0.3.1 \ - --hash=sha256:1ce266f6c8373fc8d884ff83a44dfbacecde2301785db7144872db51b8b99429 \ - --hash=sha256:5e9311b433cc2581ff8a531fa19fb3aa21815baff75aaacdef11760ac9522aa9 -llama-index-readers-file==0.4.11 \ - --hash=sha256:1b21cb66d78dd5f60e8716607d9a47ccd81bb39106d459665be1ca7799e9597b \ - --hash=sha256:e71192d8d6d0bf95131762da15fa205cf6e0cc248c90c76ee04d0fbfe160d464 -llama-index-readers-llama-parse==0.4.0 \ - --hash=sha256:574e48386f28d2c86c3f961ca4a4906910312f3400dd0c53014465bfbc6b32bf \ - --hash=sha256:e99ec56f4f8546d7fda1a7c1ae26162fb9acb7ebcac343b5abdb4234b4644e0f -llama-index-vector-stores-faiss==0.4.0 \ - --hash=sha256:092907b38c70b7f9698ad294836389b31fd3a1273ea1d93082993dd0925c8a4b \ - --hash=sha256:59b58e4ec91880a5871a896bbdbd94cb781a447f92f400b5f08a62eb56a62e5c -llama-index-workflows==1.1.0 \ - --hash=sha256:992fd5b012f56725853a4eed2219a66e19fcc7a6db85dc51afcc1bd2a5dd6db1 \ - --hash=sha256:ff001d362100bfc2a3353cc5f2528a0adb52245e632191a86b4bddacde72b6af -llama-parse==0.6.43 \ - --hash=sha256:d88e91c97e37f77b2619111ef43c02b7da61125f821cf77f918996eb48200d78 \ - --hash=sha256:fe435309638c4fdec4fec31f97c5031b743c92268962d03b99bd76704f566c32 +llama-cloud==0.1.35 \ + --hash=sha256:200349d5d57424d7461f304cdb1355a58eea3e6ca1e6b0d75c66b2e937216983 \ + --hash=sha256:b7abab4423118e6f638d2f326749e7a07c6426543bea6da99b623c715b22af71 +llama-cloud-services==0.6.54 \ + --hash=sha256:07f595f7a0ba40c6a1a20543d63024ca7600fe65c4811d1951039977908997be \ + --hash=sha256:baf65d9bffb68f9dca98ac6e22908b6675b2038b021e657ead1ffc0e43cbd45d +llama-index==0.13.1 \ + --hash=sha256:0cf06beaf460bfa4dd57902e7f4696626da54350851a876b391a82acce7fe5c2 \ + --hash=sha256:e02b61cac0699c709a12e711bdaca0a2c90c9b8177d45f9b07b8650c9985d09e +llama-index-cli==0.5.0 \ + --hash=sha256:2eb9426232e8d89ffdf0fa6784ff8da09449d920d71d0fcc81d07be93cf9369f \ + --hash=sha256:e331ca98005c370bfe58800fa5eed8b10061d0b9c656b84a1f5f6168733a2a7b +llama-index-core==0.13.1 \ + --hash=sha256:04a58cb26638e186ddb02a80970d503842f68abbeb8be5af6a387c51f7995eeb \ + --hash=sha256:fde6c8c8bcacf7244bdef4908288eced5e11f47e9741d545846c3d1692830510 +llama-index-embeddings-huggingface==0.6.0 \ + --hash=sha256:0c24aba5265a7dbd6591394a8d2d64d0b978bb50b4b97c4e88cbf698b69fdd10 \ + --hash=sha256:3ece7d8c5b683d2055fedeca4457dea13f75c81a6d7fb94d77e878cd73d90d97 +llama-index-embeddings-openai==0.5.0 \ + --hash=sha256:ac587839a111089ea8a6255f9214016d7a813b383bbbbf9207799be1100758eb \ + --hash=sha256:d817edb22e3ff475e8cd1833faf1147028986bc1d688f7894ef947558864b728 +llama-index-indices-managed-llama-cloud==0.9.1 \ + --hash=sha256:7bee1a368a17ff63bf1078e5ad4795eb88dcdb87c259cfb242c19bd0f4fb978e \ + --hash=sha256:df33fb6d8c6b7ee22202ee7a19285a5672f0e58a1235a2504b49c90a7e1c8933 
+llama-index-instrumentation==0.4.0 \ + --hash=sha256:83f73156be34dd0121dfe9e259883620e19f0162f152ac483e179ad5ad0396ac \ + --hash=sha256:f38ecc1f02b6c1f7ab84263baa6467fac9f86538c0ee25542853de46278abea7 +llama-index-llms-openai==0.5.2 \ + --hash=sha256:53237fda8ff9089fdb2543ac18ea499b27863cc41095d3a3499f19e9cfd98e1a \ + --hash=sha256:f1cc5be83f704d217bd235b609ad1b128dbd42e571329b108f902920836c1071 +llama-index-readers-file==0.5.0 \ + --hash=sha256:7fc47a9dbf11d07e78992581c20bca82b21bf336e646b4f53263f3909cb02c58 \ + --hash=sha256:f324617bfc4d9b32136d25ff5351b92bc0b569a296173ee2a8591c1f886eff0c +llama-index-readers-llama-parse==0.5.0 \ + --hash=sha256:891b21fb63fe1fe722e23cfa263a74d9a7354e5d8d7a01f2d4040a52f8d8feef \ + --hash=sha256:e63ebf2248c4a726b8a1f7b029c90383d82cdc142942b54dbf287d1f3aee6d75 +llama-index-vector-stores-faiss==0.5.0 \ + --hash=sha256:2fa9848a4423ddb26f987d299749f1fa1c272b8e576332a03e0610d4ee236d09 \ + --hash=sha256:4b6a1533c075b6e30985bf1eb778716c594ae0511691434df7f75b032ef964eb +llama-index-workflows==1.3.0 \ + --hash=sha256:328cc25d92b014ef527f105a2f2088c0924fff0494e53d93decb951f14fbfe47 \ + --hash=sha256:9c1688e237efad384f16485af71c6f9456a2eb6d85bf61ff49e5717f10ff286d +llama-parse==0.6.54 \ + --hash=sha256:c66c8d51cf6f29a44eaa8595a595de5d2598afc86e5a33a4cebe5fe228036920 \ + --hash=sha256:c707b31152155c9bae84e316fab790bbc8c85f4d8825ce5ee386ebeb7db258f1 markupsafe==3.0.2 \ --hash=sha256:0bff5e0ae4ef2e1ae4fdf2dfd5b76c75e5c2fa4132d05fc1b0dabcd20c7e28c4 \ --hash=sha256:1e084f686b92e5b83186b07e8a17fc09e38fff551f3602b249881fec658d3eca \ @@ -284,15 +273,15 @@ multidict==6.6.3 \ --hash=sha256:e995a34c3d44ab511bfc11aa26869b9d66c2d8c799fa0e74b28a473a692532d6 \ --hash=sha256:ef43b5dd842382329e4797c46f10748d8c2b6e0614f46b4afe4aee9ac33159df \ --hash=sha256:f114d8478733ca7388e7c7e0ab34b72547476b97009d643644ac33d4d3fe1821 -mypy==1.16.1 \ - --hash=sha256:08e850ea22adc4d8a4014651575567b0318ede51e8e9fe7a68f25391af699507 \ - --hash=sha256:211287e98e05352a2e1d4e8759c5490925a7c784ddc84207f4714822f8cf99b6 \ - --hash=sha256:22d76a63a42619bfb90122889b903519149879ddbf2ba4251834727944c8baca \ - --hash=sha256:2c7ce0662b6b9dc8f4ed86eb7a5d505ee3298c04b40ec13b30e572c0e5ae17c4 \ - --hash=sha256:472e4e4c100062488ec643f6162dd0d5208e33e2f34544e1fc931372e806c0cc \ - --hash=sha256:5fc2ac4027d0ef28d6ba69a0343737a23c4d1b83672bf38d1fe237bdc0643b37 \ - --hash=sha256:6bd00a0a2094841c5e47e7374bb42b83d64c527a502e3334e1173a0c24437bab \ - --hash=sha256:ea16e2a7d2714277e349e24d19a782a663a34ed60864006e8585db08f8ad1782 +mypy==1.17.1 \ + --hash=sha256:064e2ff508e5464b4bd807a7c1625bc5047c5022b85c70f030680e18f37273a5 \ + --hash=sha256:25e01ec741ab5bb3eec8ba9cdb0f769230368a22c959c4937360efb89b7e9f01 \ + --hash=sha256:70401bbabd2fa1aa7c43bb358f54037baf0586f41e83b0ae67dd0534fc64edfd \ + --hash=sha256:a9f52c0351c21fe24c21d8c0eb1f62967b262d6729393397b6f443c3b773c3b9 \ + --hash=sha256:ad37544be07c5d7fba814eb370e006df58fed8ad1ef33ed1649cb1889ba6ff58 \ + --hash=sha256:c1fdf4abb29ed1cb091cf432979e162c208a5ac676ce35010373ff29247bcad5 \ + --hash=sha256:e92bdc656b7757c438660f775f872a669b8ff374edc4d18277d86b63edba6b8b \ + --hash=sha256:ff2933428516ab63f961644bc49bc4cbe42bbffb2cd3b71cc7277c07d16b1a8b mypy-extensions==1.1.0 \ --hash=sha256:1be4cccdb0f2482337c4743e60421de3a356cd97508abadd57d47403e94f5505 \ --hash=sha256:52e68efc3284861e772bbcd66823fde5ae21fd2fdb51c62a211403730b916558 @@ -305,28 +294,29 @@ networkx==3.5 \ nltk==3.9.1 \ --hash=sha256:4fa26829c5b00715afe3061398a8989dc643b92ce7dd93fb4585a70930d168a1 \ 
--hash=sha256:87d127bd3de4bd89a4f81265e5fa59cb1b199b27440175370f7417d2bc7ae868 -numpy==2.3.1 \ - --hash=sha256:0025048b3c1557a20bc80d06fdeb8cc7fc193721484cca82b2cfa072fec71a93 \ - --hash=sha256:0bb3a4a61e1d327e035275d2a993c96fa786e4913aa089843e6a2d9dd205c66a \ - --hash=sha256:15aa4c392ac396e2ad3d0a2680c0f0dee420f9fed14eef09bdb9450ee6dcb7b7 \ - --hash=sha256:1ec9ae20a4226da374362cca3c62cd753faf2f951440b0e3b98e93c235441d2b \ - --hash=sha256:467db865b392168ceb1ef1ffa6f5a86e62468c43e0cfb4ab6da667ede10e58db \ - --hash=sha256:5ccb7336eaf0e77c1635b232c141846493a588ec9ea777a7c24d7166bb8533ae \ - --hash=sha256:6ea9e48336a402551f52cd8f593343699003d2353daa4b72ce8d34f66b722070 \ - --hash=sha256:a5ee121b60aa509679b682819c602579e1df14a5b07fe95671c8849aad8f2115 \ - --hash=sha256:a8b740f5579ae4585831b3cf0e3b0425c667274f82a484866d2adf9570539369 \ - --hash=sha256:ad506d4b09e684394c42c966ec1527f6ebc25da7f4da4b1b056606ffe446b8a3 \ - --hash=sha256:afed2ce4a84f6b0fc6c1ce734ff368cbf5a5e24e8954a338f3bdffa0718adffb \ - --hash=sha256:c6e0bf9d1a2f50d2b65a7cf56db37c095af17b59f6c132396f7c6d5dd76484df \ - --hash=sha256:d4580adadc53311b163444f877e0789f1c8861e2698f6b2a4ca852fda154f3ff \ - --hash=sha256:e344eb79dab01f1e838ebb67aab09965fb271d6da6b00adda26328ac27d4a66e \ - --hash=sha256:e610832418a2bc09d974cc9fecebfa51e9532d6190223bc5ef6a7402ebf3b5cb \ - --hash=sha256:eabd7e8740d494ce2b4ea0ff05afa1b7b291e978c0ae075487c51e8bd93c0c68 \ - --hash=sha256:ebb8603d45bc86bbd5edb0d63e52c5fd9e7945d3a503b77e486bd88dde67a19b \ - --hash=sha256:ec0bdafa906f95adc9a0c6f26a4871fa753f25caaa0e032578a30457bff0af6a -openai==1.95.1 \ - --hash=sha256:8bbdfeceef231b1ddfabbc232b179d79f8b849aab5a7da131178f8d10e0f162f \ - --hash=sha256:f089b605282e2a2b6776090b4b46563ac1da77f56402a222597d591e2dcc1086 +numpy==2.3.2 \ + --hash=sha256:14a91ebac98813a49bc6aa1a0dfc09513dcec1d97eaf31ca21a87221a1cdcb15 \ + --hash=sha256:1f91e5c028504660d606340a084db4b216567ded1056ea2b4be4f9d10b67197f \ + --hash=sha256:20b8200721840f5621b7bd03f8dcd78de33ec522fc40dc2641aa09537df010c3 \ + --hash=sha256:240259d6564f1c65424bcd10f435145a7644a65a6811cfc3201c4a429ba79170 \ + --hash=sha256:2c3271cc4097beb5a60f010bcc1cc204b300bb3eafb4399376418a83a1c6373c \ + --hash=sha256:4209f874d45f921bde2cff1ffcd8a3695f545ad2ffbef6d3d3c6768162efab89 \ + --hash=sha256:4ae6863868aaee2f57503c7a5052b3a2807cf7a3914475e637a0ecd366ced220 \ + --hash=sha256:6936aff90dda378c09bea075af0d9c675fe3a977a9d2402f95a87f440f59f619 \ + --hash=sha256:69779198d9caee6e547adb933941ed7520f896fd9656834c300bdf4dd8642712 \ + --hash=sha256:71669b5daae692189540cffc4c439468d35a3f84f0c88b078ecd94337f6cb0ec \ + --hash=sha256:7a0e27186e781a69959d0230dd9909b5e26024f8da10683bd6344baea1885168 \ + --hash=sha256:8446acd11fe3dc1830568c941d44449fd5cb83068e5c70bd5a470d323d448296 \ + --hash=sha256:852ae5bed3478b92f093e30f785c98e0cb62fa0a939ed057c31716e18a7a22b9 \ + --hash=sha256:aa098a5ab53fa407fded5870865c6275a5cd4101cfdef8d6fafc48286a96e981 \ + --hash=sha256:cbc95b3813920145032412f7e33d12080f11dc776262df1712e1638207dde9e8 \ + --hash=sha256:e0486a11ec30cdecb53f184d496d1c6a20786c81e55e41640270130056f8ee48 \ + --hash=sha256:f0a1a8476ad77a228e41619af2fa9505cf69df928e9aaa165746584ea17fed2b \ + --hash=sha256:f75018be4980a7324edc5930fe39aa391d5734531b1926968605416ff58c332d \ + --hash=sha256:fb1752a3bb9a3ad2d6b090b88a9a0ae1cd6f004ef95f75825e2f382c183b2097 +openai==1.99.6 \ + --hash=sha256:e40d44b2989588c45ce13819598788b77b8fb80ba2f7ae95ce90d14e46f1bd26 \ + --hash=sha256:f48f4239b938ef187062f3d5199a05b69711d8b600b9a9b6a3853cd271799183 
packaging==25.0 \ --hash=sha256:29572ef2b1f17581046b3a2227d5c611fb25ec70ca1ba8554b24b0e69331a484 \ --hash=sha256:d443872c98d677bf60f6a1f2f8c1cb748e8fe762d2bf9d3148b5599295b0fc4f @@ -421,9 +411,9 @@ pydantic-core==2.33.2 \ --hash=sha256:e799c050df38a639db758c617ec771fd8fb7a5f8eaaa4b27b101f266b216a246 \ --hash=sha256:f889f7a40498cc077332c7ab6b4608d296d852182211787d4f3ee377aaae66e8 \ --hash=sha256:fe5b32187cbc0c862ee201ad66c30cf218e5ed468ec8dc1cf49dec66e160cc4d -pypdf==5.7.0 \ - --hash=sha256:203379453439f5b68b7a1cd43cdf4c5f7a02b84810cefa7f93a47b350aaaba48 \ - --hash=sha256:68c92f2e1aae878bab1150e74447f31ab3848b1c0a6f8becae9f0b1904460b6f +pypdf==5.9.0 \ + --hash=sha256:30f67a614d558e495e1fbb157ba58c1de91ffc1718f5e0dfeb82a029233890a1 \ + --hash=sha256:be10a4c54202f46d9daceaa8788be07aa8cd5ea8c25c529c50dd509206382c35 python-dateutil==2.9.0.post0 \ --hash=sha256:37dd54208da7e1cd875388217d5e00ebd4179249f90fb72437e91a35459a0ad3 \ --hash=sha256:a8b2bc7bffae282281c8140a97d3aa9c14da0b136dfe83f850eea9a5f7470427 @@ -447,82 +437,81 @@ pyyaml==6.0.2 \ --hash=sha256:d584d9ec91ad65861cc08d42e834324ef890a082e591037abe114850ff7bbc3e \ --hash=sha256:e10ce637b18caea04431ce14fabcf5c64a1c61ec9c56b071a4b7ca131ca52d44 \ --hash=sha256:ff3824dc5261f50c9b0dfb3be22b4567a6f938ccce4587b38952d85fd9e9afe4 -regex==2024.11.6 \ - --hash=sha256:02e28184be537f0e75c1f9b2f8847dc51e08e6e171c6bde130b2687e0c33cf60 \ - --hash=sha256:068376da5a7e4da51968ce4c122a7cd31afaaec4fccc7856c92f63876e57b51d \ - --hash=sha256:1062b39a0a2b75a9c694f7a08e7183a80c63c0d62b301418ffd9c35f55aaa114 \ - --hash=sha256:167ed4852351d8a750da48712c3930b031f6efdaa0f22fa1933716bfcd6bf4a3 \ - --hash=sha256:202eb32e89f60fc147a41e55cb086db2a3f8cb82f9a9a88440dcfc5d37faae8d \ - --hash=sha256:2c89a8cc122b25ce6945f0423dc1352cb9593c68abd19223eebbd4e56612c5b7 \ - --hash=sha256:2d548dafee61f06ebdb584080621f3e0c23fff312f0de1afc776e2a2ba99a74f \ - --hash=sha256:4181b814e56078e9b00427ca358ec44333765f5ca1b45597ec7446d3a1ef6e34 \ - --hash=sha256:5478c6962ad548b54a591778e93cd7c456a7a29f8eca9c49e4f9a806dcc5d638 \ - --hash=sha256:7ab159b063c52a0333c884e4679f8d7a85112ee3078fe3d9004b2dd875585519 \ - --hash=sha256:94d87b689cdd831934fa3ce16cc15cd65748e6d689f5d2b8f4f4df2065c9fa20 \ - --hash=sha256:9714398225f299aa85267fd222f7142fcb5c769e73d7733344efc46f2ef5cf89 \ - --hash=sha256:ac10f2c4184420d881a3475fb2c6f4d95d53a8d50209a2500723d831036f7c45 \ - --hash=sha256:bec9931dfb61ddd8ef2ebc05646293812cb6b16b60cf7c9511a832b6f1854b55 \ - --hash=sha256:c36f9b6f5f8649bb251a5f3f66564438977b7ef8386a52460ae77e6070d309d9 \ - --hash=sha256:f2a19f302cd1ce5dd01a9099aaa19cae6173306d1302a43b627f62e21cf18ac0 +regex==2025.7.34 \ + --hash=sha256:24257953d5c1d6d3c129ab03414c07fc1a47833c9165d49b954190b2b7f21a1a \ + --hash=sha256:3157aa512b9e606586900888cd469a444f9b898ecb7f8931996cb715f77477f0 \ + --hash=sha256:35e43ebf5b18cd751ea81455b19acfdec402e82fe0dc6143edfae4c5c4b3909a \ + --hash=sha256:37555e4ae0b93358fa7c2d240a4291d4a4227cc7c607d8f85596cdb08ec0a083 \ + --hash=sha256:85c3a958ef8b3d5079c763477e1f09e89d13ad22198a37e9d7b26b4b17438b33 \ + --hash=sha256:96bbae4c616726f4661fe7bcad5952e10d25d3c51ddc388189d8864fbc1b3c68 \ + --hash=sha256:9ead9765217afd04a86822dfcd4ed2747dfe426e887da413b15ff0ac2457e21a \ + --hash=sha256:9feab78a1ffa4f2b1e27b1bcdaad36f48c2fed4870264ce32f52a393db093c78 \ + --hash=sha256:a664291c31cae9c4a30589bd8bc2ebb56ef880c9c6264cb7643633831e606a4d \ + --hash=sha256:d428fc7731dcbb4e2ffe43aeb8f90775ad155e7db4347a639768bc6cd2df881a \ + 
--hash=sha256:da304313761b8500b8e175eb2040c4394a875837d5635f6256d6fa0377ad32c8 \ + --hash=sha256:e154a7ee7fa18333ad90b20e16ef84daaeac61877c8ef942ec8dfa50dc38b7a1 \ + --hash=sha256:ee38926f31f1aa61b0232a3a11b83461f7807661c062df9eb88769d86e6195c3 \ + --hash=sha256:f14b36e6d4d07f1a5060f28ef3b3561c5d95eb0651741474ce4c0a4c56ba8719 \ + --hash=sha256:f3e5c1e0925e77ec46ddc736b756a6da50d4df4ee3f69536ffb2373460e2dafd requests==2.32.4 \ --hash=sha256:27babd3cda2a6d50b30443204ee89830707d396671944c998b5975b031ac2b2c \ --hash=sha256:27d0316682c8a29834d3264820024b62a36942083d52caf2f14c0591336d3422 -ruff==0.12.3 \ - --hash=sha256:023040a3499f6f974ae9091bcdd0385dd9e9eb4942f231c23c57708147b06311 \ - --hash=sha256:0262d50ba2767ed0fe212aa7e62112a1dcbfd46b858c5bf7bbd11f326998bafc \ - --hash=sha256:0a9153b000c6fe169bb307f5bd1b691221c4286c133407b8827c406a55282041 \ - --hash=sha256:12371aec33e1a3758597c5c631bae9a5286f3c963bdfb4d17acdd2d395406687 \ - --hash=sha256:2120d3aa855ff385e0e562fdee14d564c9675edbe41625c87eeab744a7830d12 \ - --hash=sha256:40dced4a79d7c264389de1c59467d5d5cefd79e7e06d1dfa2c75497b5269a5a6 \ - --hash=sha256:47552138f7206454eaf0c4fe827e546e9ddac62c2a3d2585ca54d29a890137a2 \ - --hash=sha256:560f13b6baa49785665276c963edc363f8ad4b4fc910a883e2625bdb14a83a9e \ - --hash=sha256:5f9c7c9c8f84c2d7f27e93674d27136fbf489720251544c4da7fb3d742e011b1 \ - --hash=sha256:6b16647cbb470eaf4750d27dddc6ebf7758b918887b56d39e9c22cce2049082b \ - --hash=sha256:883d844967bffff5ab28bba1a4d246c1a1b2933f48cb9840f3fdc5111c603b07 \ - --hash=sha256:a946cf1e7ba3209bdef039eb97647f1c77f6f540e5845ec9c114d3af8df873e7 \ - --hash=sha256:c4faaff1f90cea9d3033cbbcdf1acf5d7fb11d8180758feb31337391691f3df0 \ - --hash=sha256:dfd45e6e926deb6409d0616078a666ebce93e55e07f0fb0228d4b2608b2c248d \ - --hash=sha256:e1417051edb436230023575b149e8ff843a324557fe0a265863b7602df86722f \ - --hash=sha256:e2506961bf6ead54887ba3562604d69cb430f59b42133d36976421bc8bd45901 \ - --hash=sha256:f1b5a4b6668fd7b7ea3697d8d98857390b40c1320a63a178eee6be0899ea2d77 \ - --hash=sha256:fa6b24600cf3b750e48ddb6057e901dd5b9aa426e316addb2a1af185a7509882 -safetensors==0.5.3 \ - --hash=sha256:1077f3e94182d72618357b04b5ced540ceb71c8a813d3319f1aba448e68a770d \ - --hash=sha256:11bce6164887cd491ca75c2326a113ba934be596e22b28b1742ce27b1d076467 \ - --hash=sha256:21d01c14ff6c415c485616b8b0bf961c46b3b343ca59110d38d744e577f9cce7 \ - --hash=sha256:32c3ef2d7af8b9f52ff685ed0bc43913cdcde135089ae322ee576de93eae5135 \ - --hash=sha256:37f1521be045e56fc2b54c606d4455573e717b2d887c579ee1dbba5f868ece04 \ - --hash=sha256:391ac8cab7c829452175f871fcaf414aa1e292b5448bd02620f675a7f3e7abb9 \ - --hash=sha256:4a243be3590bc3301c821da7a18d87224ef35cbd3e5f5727e4e0728b8172411e \ - --hash=sha256:799021e78287bac619c7b3f3606730a22da4cda27759ddf55d37c8db7511c74b \ - --hash=sha256:836cbbc320b47e80acd40e44c8682db0e8ad7123209f69b093def21ec7cafd11 \ - --hash=sha256:8bd84b12b1670a6f8e50f01e28156422a2bc07fb16fc4e98bded13039d688a0d \ - --hash=sha256:b6b0d6ecacec39a4fdd99cc19f4576f5219ce858e6fd8dbe7609df0b8dc56965 \ - --hash=sha256:bd20eb133db8ed15b40110b7c00c6df51655a2998132193de2f75f72d99c7073 \ - --hash=sha256:cead1fa41fc54b1e61089fa57452e8834f798cb1dc7a09ba3524f1eb08e0317a \ - --hash=sha256:cfc0ec0846dcf6763b0ed3d1846ff36008c6e7290683b61616c4b040f6a54ace \ - --hash=sha256:df26da01aaac504334644e1b7642fa000bfec820e7cef83aeac4e355e03195ff -scikit-learn==1.7.0 \ - --hash=sha256:7d7240c7b19edf6ed93403f43b0fcb0fe95b53bc0b17821f8fb88edab97085ef \ - --hash=sha256:80bd3bd4e95381efc47073a720d4cbab485fc483966f1709f1fd559afac57ab8 \ - 
--hash=sha256:8ef09b1615e1ad04dc0d0054ad50634514818a8eb3ee3dee99af3bffc0ef5007 \ - --hash=sha256:8fa979313b2ffdfa049ed07252dc94038def3ecd49ea2a814db5401c07f1ecfa \ - --hash=sha256:9dbe48d69aa38ecfc5a6cda6c5df5abef0c0ebdb2468e92437e2053f84abb8bc \ - --hash=sha256:c01e869b15aec88e2cdb73d27f15bdbe03bce8e2fb43afbe77c45d399e73a5a3 -scipy==1.16.0 \ - --hash=sha256:6c4abb4c11fc0b857474241b812ce69ffa6464b4bd8f4ecb786cf240367a36a7 \ - --hash=sha256:90452f6a9f3fe5a2cf3748e7be14f9cc7d9b124dce19667b54f5b429d680d539 \ - --hash=sha256:a16ba90847249bedce8aa404a83fb8334b825ec4a8e742ce6012a7a5e639f95c \ - --hash=sha256:a2f0bf2f58031c8701a8b601df41701d2a7be17c7ffac0a4816aeba89c4cdac8 \ - --hash=sha256:b2243561b45257f7391d0f49972fca90d46b79b8dbcb9b2cb0f9df928d370ad4 \ - --hash=sha256:b370f8f6ac6ef99815b0d5c9f02e7ade77b33007d74802efc8316c8db98fd11e \ - --hash=sha256:b5ef54021e832869c8cfb03bc3bf20366cbcd426e02a58e8a58d7584dfbb8f62 \ - --hash=sha256:d30c0fe579bb901c61ab4bb7f3eeb7281f0d4c4a7b52dbf563c89da4fd2949be \ - --hash=sha256:deec06d831b8f6b5fb0b652433be6a09db29e996368ce5911faf673e78d20085 \ - --hash=sha256:e6d7dfc148135e9712d87c5f7e4f2ddc1304d1582cb3a7d698bbadedb61c7afd -sentence-transformers==5.0.0 \ - --hash=sha256:346240f9cc6b01af387393f03e103998190dfb0826a399d0c38a81a05c7a5d76 \ - --hash=sha256:e5a411845910275fd166bacb01d28b7f79537d3550628ae42309dbdd3d5670d1 +ruff==0.12.8 \ + --hash=sha256:0ac9c570634b98c71c88cb17badd90f13fc076a472ba6ef1d113d8ed3df109fb \ + --hash=sha256:2fae54e752a3150f7ee0e09bce2e133caf10ce9d971510a9b925392dc98d2fec \ + --hash=sha256:45c32487e14f60b88aad6be9fd5da5093dbefb0e3e1224131cb1d441d7cb7d46 \ + --hash=sha256:49ebcaccc2bdad86fd51b7864e3d808aad404aab8df33d469b6e65584656263a \ + --hash=sha256:4cb3a45525176e1009b2b64126acf5f9444ea59066262791febf55e40493a033 \ + --hash=sha256:560e0cd641e45591a3e42cb50ef61ce07162b9c233786663fdce2d8557d99818 \ + --hash=sha256:63cb5a5e933fc913e5823a0dfdc3c99add73f52d139d6cd5cc8639d0e0465513 \ + --hash=sha256:71c83121512e7743fba5a8848c261dcc454cafb3ef2934a43f1b7a4eb5a447ea \ + --hash=sha256:7209531f1a1fcfbe8e46bcd7ab30e2f43604d8ba1c49029bb420b103d0b5f76e \ + --hash=sha256:9a9bbe28f9f551accf84a24c366c1aa8774d6748438b47174f8e8565ab9dedbc \ + --hash=sha256:a2cab5f60d5b65b50fba39a8950c8746df1627d54ba1197f970763917184b161 \ + --hash=sha256:ae3e7504666ad4c62f9ac8eedb52a93f9ebdeb34742b8b71cd3cccd24912719f \ + --hash=sha256:c0acbcf01206df963d9331b5838fb31f3b44fa979ee7fa368b9b9057d89f4a53 \ + --hash=sha256:c90e1a334683ce41b0e7a04f41790c429bf5073b62c1ae701c9dc5b3d14f0749 \ + --hash=sha256:cb82efb5d35d07497813a1c5647867390a7d83304562607f3579602fa3d7d46f \ + --hash=sha256:daf3475060a617fd5bc80638aeaf2f5937f10af3ec44464e280a9d2218e720d3 \ + --hash=sha256:dbea798fc0065ad0b84a2947b0aff4233f0cb30f226f00a2c5850ca4393de609 \ + --hash=sha256:de4429ef2ba091ecddedd300f4c3f24bca875d3d8b23340728c3cb0da81072c3 +safetensors==0.6.2 \ + --hash=sha256:1d2d2b3ce1e2509c68932ca03ab8f20570920cd9754b05063d4368ee52833ecd \ + --hash=sha256:43ff2aa0e6fa2dc3ea5524ac7ad93a9839256b8703761e76e2d0b2a3fa4f15d9 \ + --hash=sha256:8045db2c872db8f4cbe3faa0495932d89c38c899c603f21e9b6486951a5ecb8f \ + --hash=sha256:81e67e8bab9878bb568cffbc5f5e655adb38d2418351dc0859ccac158f753e19 \ + --hash=sha256:89a89b505f335640f9120fac65ddeb83e40f1fd081cb8ed88b505bdccec8d0a1 \ + --hash=sha256:93de35a18f46b0f5a6a1f9e26d91b442094f2df02e9fd7acf224cfec4238821a \ + --hash=sha256:9c85ede8ec58f120bad982ec47746981e210492a6db876882aa021446af8ffba \ + 
--hash=sha256:b0e4d029ab0a0e0e4fdf142b194514695b1d7d3735503ba700cf36d0fc7136ce \ + --hash=sha256:c7b214870df923cbc1593c3faee16bec59ea462758699bd3fee399d00aac072c \ + --hash=sha256:cab75ca7c064d3911411461151cb69380c9225798a20e712b102edda2542ddb1 \ + --hash=sha256:d6675cf4b39c98dbd7d940598028f3742e0375a6b4d4277e76beb0c35f4b843b \ + --hash=sha256:d83c20c12c2d2f465997c51b7ecb00e407e5f94d7dec3ea0cc11d86f60d3fde5 \ + --hash=sha256:d944cea65fad0ead848b6ec2c37cc0b197194bec228f8020054742190e9312ac \ + --hash=sha256:fa48268185c52bfe8771e46325a1e21d317207bcabcb72e65c6e28e9ffeb29c7 \ + --hash=sha256:fc4d0d0b937e04bdf2ae6f70cd3ad51328635fe0e6214aa1fc811f3b576b3bda +scikit-learn==1.7.1 \ + --hash=sha256:24b3f1e976a4665aa74ee0fcaac2b8fccc6ae77c8e07ab25da3ba6d3292b9802 \ + --hash=sha256:30d1f413cfc0aa5a99132a554f1d80517563c34a9d3e7c118fde2d273c6fe0f7 \ + --hash=sha256:40daccd1b5623f39e8943ab39735cadf0bdce80e67cdca2adcb5426e987320a8 \ + --hash=sha256:90c8494ea23e24c0fb371afc474618c1019dc152ce4a10e4607e62196113851b \ + --hash=sha256:bb870c0daf3bf3be145ec51df8ac84720d9972170786601039f024bf6d61a518 \ + --hash=sha256:c711d652829a1805a95d7fe96654604a8f16eab5a9e9ad87b3e60173415cb650 +scipy==1.16.1 \ + --hash=sha256:0a55ffe0ba0f59666e90951971a884d1ff6f4ec3275a48f472cfb64175570f77 \ + --hash=sha256:18aca1646a29ee9a0625a1be5637fa798d4d81fdf426481f06d69af828f16958 \ + --hash=sha256:226652fca853008119c03a8ce71ffe1b3f6d2844cc1686e8f9806edafae68596 \ + --hash=sha256:44c76f9e8b6e8e488a586190ab38016e4ed2f8a038af7cd3defa903c0a2238b3 \ + --hash=sha256:6e5c2f74e5df33479b5cd4e97a9104c511518fbd979aa9b8f6aec18b2e9ecae7 \ + --hash=sha256:adccd93a2fa937a27aae826d33e3bfa5edf9aa672376a4852d23a7cd67a2e5b7 \ + --hash=sha256:c033fa32bab91dc98ca59d0cf23bb876454e2bb02cbe592d5023138778f70030 \ + --hash=sha256:cb18899127278058bcc09e7b9966d41a5a43740b5bb8dcba401bd983f82e885b \ + --hash=sha256:d85495cef541729a70cdddbbf3e6b903421bc1af3e8e3a9a72a06751f33b7c39 \ + --hash=sha256:f8a5d6cd147acecc2603fbd382fed6c46f474cccfcf69ea32582e033fb54dcfe +sentence-transformers==5.1.0 \ + --hash=sha256:70c7630697cc1c64ffca328d6e8688430ebd134b3c2df03dc07cb3a016b04739 \ + --hash=sha256:fc803929f6a3ce82e2b2c06e0efed7a36de535c633d5ce55efac0b710ea5643e setuptools==80.9.0 \ --hash=sha256:062d34222ad13e0cc312a4c02d73f059e86a4acbfbdea8f8f76b28c99f306922 \ --hash=sha256:f36b47402ecde768dbfafc46e8e4207b4360c654f1f3bb84475f0a28628fb19c @@ -535,17 +524,17 @@ sniffio==1.3.1 \ soupsieve==2.7 \ --hash=sha256:6e60cc5c1ffaf1cebcc12e8188320b72071e922c2e897f737cadce79ad5d30c4 \ --hash=sha256:ad282f9b6926286d2ead4750552c8a6142bc4c783fd66b0293547c8fe6ae126a -sqlalchemy[asyncio]==2.0.41 \ - --hash=sha256:03968a349db483936c249f4d9cd14ff2c296adfa1290b660ba6516f973139582 \ - --hash=sha256:293cd444d82b18da48c9f71cd7005844dbbd06ca19be1ccf6779154439eec0b8 \ - --hash=sha256:32f9dc8c44acdee06c8fc6440db9eae8b4af8b01e4b1aee7bdd7241c22edff4f \ - --hash=sha256:3d3549fc3e40667ec7199033a4e40a2f669898a00a7b18a931d3efb4c7900504 \ - --hash=sha256:57df5dc6fdb5ed1a88a1ed2195fd31927e705cad62dedd86b46972752a80f576 \ - --hash=sha256:6375cd674fe82d7aa9816d1cb96ec592bac1726c11e0cafbf40eeee9a4516b5f \ - --hash=sha256:90c11ceb9a1f482c752a71f203a81858625d8df5746d787a4786bca4ffdf71c6 \ - --hash=sha256:911cc493ebd60de5f285bcae0491a60b4f2a9f0f5c270edd1c4dbaef7a38fc04 \ - --hash=sha256:9f8c9fdd15a55d9465e590a402f42082705d66b05afc3ffd2d2eb3c6ba919560 \ - --hash=sha256:edba70118c4be3c2b1f90754d308d0b79c6fe2c0fdc52d8ddf603916f83f4db9 +sqlalchemy[asyncio]==2.0.42 \ + 
--hash=sha256:160bedd8a5c28765bd5be4dec2d881e109e33b34922e50a3b881a7681773ac5f \ + --hash=sha256:1f092cf83ebcafba23a247f5e03f99f5436e3ef026d01c8213b5eca48ad6efa9 \ + --hash=sha256:260ca1d2e5910f1f1ad3fe0113f8fab28657cee2542cb48c2f342ed90046e8ec \ + --hash=sha256:2eb539fd83185a85e5fcd6b19214e1c734ab0351d81505b0f987705ba0a1e231 \ + --hash=sha256:9193fa484bf00dcc1804aecbb4f528f1123c04bad6a08d7710c909750fa76aeb \ + --hash=sha256:ad59dbe4d1252448c19d171dfba14c74e7950b46dc49d015722a4a06bfdab2b0 \ + --hash=sha256:c34100c0b7ea31fbc113c124bcf93a53094f8951c7bf39c45f39d327bad6d1e7 \ + --hash=sha256:defcdff7e661f0043daa381832af65d616e060ddb54d3fe4476f51df7eaa1835 \ + --hash=sha256:f9187498c2149919753a7fd51766ea9c8eecdec7da47c1b955fa8090bc642eaa \ + --hash=sha256:fc6afee7e66fdba4f5a68610b487c1f754fccdc53894a9567785932dbb6a265e striprtf==0.0.26 \ --hash=sha256:8c8f9d32083cdc2e8bfb149455aa1cc5a4e0a035893bedc75db8b73becb3a1bb \ --hash=sha256:fdb2bba7ac440072d1c41eab50d8d74ae88f60a8b6575c6e2c7805dc462093aa @@ -558,41 +547,41 @@ tenacity==9.1.2 \ threadpoolctl==3.6.0 \ --hash=sha256:43a0b8fd5a2928500110039e43a5eed8480b918967083ea48dc3ab9f13c4a7fb \ --hash=sha256:8ab8b4aa3491d812b623328249fab5302a68d2d71745c8a4c719a2fcaba9f44e -tiktoken==0.9.0 \ - --hash=sha256:03935988a91d6d3216e2ec7c645afbb3d870b37bcb67ada1943ec48678e7ee33 \ - --hash=sha256:11a20e67fdf58b0e2dea7b8654a288e481bb4fc0289d3ad21291f8d0849915fb \ - --hash=sha256:45556bc41241e5294063508caf901bf92ba52d8ef9222023f83d2483a3055348 \ - --hash=sha256:8b3d80aad8d2c6b9238fc1a5524542087c52b860b10cbf952429ffb714bc1136 \ - --hash=sha256:b2a21133be05dc116b1d0372af051cd2c6aa1d2188250c9b553f9fa49301b336 \ - --hash=sha256:d02a5ca6a938e0490e1ff957bc48c8b078c88cb83977be1625b1fd8aac792c5d \ - --hash=sha256:f32cc56168eac4851109e9b5d327637f15fd662aa30dd79f964b7c39fbadd26e -tokenizers==0.21.2 \ - --hash=sha256:0e73770507e65a0e0e2a1affd6b03c36e3bc4377bd10c9ccf51a82c77c0fe365 \ - --hash=sha256:106746e8aa9014a12109e58d540ad5465b4c183768ea96c03cbc24c44d329958 \ - --hash=sha256:126df3205d6f3a93fea80c7a8a266a78c1bd8dd2fe043386bafdd7736a23e45f \ - --hash=sha256:2c41862df3d873665ec78b6be36fcc30a26e3d4902e9dd8608ed61d49a48bc19 \ - --hash=sha256:342b5dfb75009f2255ab8dec0041287260fed5ce00c323eb6bab639066fef8ec \ - --hash=sha256:4a32cd81be21168bd0d6a0f0962d60177c447a1aa1b1e48fa6ec9fc728ee0b12 \ - --hash=sha256:514cd43045c5d546f01142ff9c79a96ea69e4b5cda09e3027708cb2e6d5762ab \ - --hash=sha256:58747bb898acdb1007f37a7bbe614346e98dc28708ffb66a3fd50ce169ac6c98 \ - --hash=sha256:5e9944e61239b083a41cf8fc42802f855e1dca0f499196df37a8ce219abac6eb \ - --hash=sha256:8bd8999538c405133c2ab999b83b17c08b7fc1b48c1ada2469964605a709ef91 \ - --hash=sha256:b1b9405822527ec1e0f7d8d2fdb287a5730c3a6518189c968254a8441b21faae \ - --hash=sha256:cabda5a6d15d620b6dfe711e1af52205266d05b379ea85a8a301b3593c60e962 \ - --hash=sha256:ed21dc7e624e4220e21758b2e62893be7101453525e3d23264081c9ef9a6d00d \ - --hash=sha256:fdc7cffde3e2113ba0e6cc7318c40e3438a4d74bbc62bf04bcc63bdfb082ac77 \ - --hash=sha256:fed9a4d51c395103ad24f8e7eb976811c57fbec2af9f133df471afcd922e5020 +tiktoken==0.11.0 \ + --hash=sha256:2130127471e293d385179c1f3f9cd445070c0772be73cdafb7cec9a3684c0458 \ + --hash=sha256:21e43022bf2c33f733ea9b54f6a3f6b4354b909f5a73388fb1b9347ca54a069c \ + --hash=sha256:25a512ff25dc6c85b58f5dd4f3d8c674dc05f96b02d66cdacf628d26a4e4866b \ + --hash=sha256:3c518641aee1c52247c2b97e74d8d07d780092af79d5911a6ab5e79359d9b06a \ + --hash=sha256:4ae374c46afadad0f501046db3da1b36cd4dfbfa52af23c998773682446097cf \ + 
--hash=sha256:adb4e308eb64380dc70fa30493e21c93475eaa11669dea313b6bbf8210bfd013 \ + --hash=sha256:ece6b76bfeeb61a125c44bbefdfccc279b5288e6007fbedc0d32bfec602df2f2 +tokenizers==0.21.4 \ + --hash=sha256:1340ff877ceedfa937544b7d79f5b7becf33a4cfb58f89b3b49927004ef66f78 \ + --hash=sha256:2107ad649e2cda4488d41dfd031469e9da3fcbfd6183e74e4958fa729ffbf9c6 \ + --hash=sha256:2ccc10a7c3bcefe0f242867dc914fc1226ee44321eb618cfe3019b5df3400133 \ + --hash=sha256:39b376f5a1aee67b4d29032ee85511bbd1b99007ec735f7f35c8a2eb104eade5 \ + --hash=sha256:3c1f4317576e465ac9ef0d165b247825a2a4078bcd01cba6b54b867bdf9fdd8b \ + --hash=sha256:3c73012da95afafdf235ba80047699df4384fdc481527448a078ffd00e45a7d9 \ + --hash=sha256:475d807a5c3eb72c59ad9b5fcdb254f6e17f53dfcbb9903233b0dfa9c943b597 \ + --hash=sha256:51b7eabb104f46c1c50b486520555715457ae833d5aee9ff6ae853d1130506ff \ + --hash=sha256:5e2f601a8e0cd5be5cc7506b20a79112370b9b3e9cb5f13f68ab11acd6ca7d60 \ + --hash=sha256:6c42a930bc5f4c47f4ea775c91de47d27910881902b0f20e4990ebe045a415d0 \ + --hash=sha256:714b05b2e1af1288bd1bc56ce496c4cebb64a20d158ee802887757791191e6e2 \ + --hash=sha256:c212aa4e45ec0bb5274b16b6f31dd3f1c41944025c2358faaa5782c754e84c24 \ + --hash=sha256:cc88bb34e23a54cc42713d6d98af5f1bf79c07653d24fe984d2d695ba2c922a2 \ + --hash=sha256:f23186c40395fc390d27f519679a58023f368a0aad234af145e0f39ad1212732 \ + --hash=sha256:fa23f85fbc9a02ec5c6978da172cdcbac23498c3ca9f3645c5c68740ac007880 torch @ https://download.pytorch.org/whl/cpu/torch-2.6.0%2Bcpu-cp311-cp311-linux_x86_64.whl \ --hash=sha256:5b6ae523bfb67088a17ca7734d131548a2e60346c622621e4248ed09dd0790cc tqdm==4.67.1 \ --hash=sha256:26445eca388f82e72884e0d580d5464cd801a3ea01e63e5601bdff9ba6a48de2 \ --hash=sha256:f8aef9c52c08c13a65f30ea34f4e5aac3fd1a34959879d7e59e63027286627f2 -transformers==4.53.2 \ - --hash=sha256:6c3ed95edfb1cba71c4245758f1b4878c93bf8cde77d076307dacb2cbbd72be2 \ - --hash=sha256:db8f4819bb34f000029c73c3c557e7d06fc1b8e612ec142eecdae3947a9c78bf -types-requests==2.32.4.20250611 \ - --hash=sha256:741c8777ed6425830bf51e54d6abe245f79b4dcb9019f1622b773463946bf826 \ - --hash=sha256:ad2fe5d3b0cb3c2c902c8815a70e7fb2302c4b8c1f77bdcd738192cdb3878072 +transformers==4.55.0 \ + --hash=sha256:15aa138a05d07a15b30d191ea2c45e23061ebf9fcc928a1318e03fe2234f3ae1 \ + --hash=sha256:29d9b8800e32a4a831bb16efb5f762f6a9742fef9fce5d693ed018d19b106490 +types-requests==2.32.4.20250809 \ + --hash=sha256:d8060de1c8ee599311f56ff58010fb4902f462a1470802cf9f6ed27bc46c4df3 \ + --hash=sha256:f73d1832fb519ece02c85b1f09d5f0dd3108938e7d47e7f94bbfa18a6782b163 typing-extensions==4.14.1 \ --hash=sha256:38b39f4aeeab64884ce9f74c94263ef78f3c22467c8724005483154c26648d36 \ --hash=sha256:d1e1e3b58374dc93031d6eda2420a48ea44a36c2b4766a4fdeb3710755731d76 diff --git a/requirements.gpu.txt b/requirements.gpu.txt index cace6ad9..4067cd7d 100644 --- a/requirements.gpu.txt +++ b/requirements.gpu.txt @@ -1,31 +1,31 @@ # This file is @generated by PDM. # Please do not edit it manually. 
-accelerate==1.8.1 \ - --hash=sha256:c47b8994498875a2b1286e945bd4d20e476956056c7941d512334f4eb44ff991 \ - --hash=sha256:f60df931671bc4e75077b852990469d4991ce8bd3a58e72375c3c95132034db9 +accelerate==1.10.0 \ + --hash=sha256:260a72b560e100e839b517a331ec85ed495b3889d12886e79d1913071993c5a3 \ + --hash=sha256:8270568fda9036b5cccdc09703fef47872abccd56eb5f6d53b54ea5fb7581496 aiohappyeyeballs==2.6.1 \ --hash=sha256:c3f9d0113123803ccadfdf3f0faa505bc78e6a72d1cc4806cbd719826e943558 \ --hash=sha256:f349ba8f4b75cb25c99c5c2d84e997e485204d2902a9597802b0371f09331fb8 -aiohttp==3.12.14 \ - --hash=sha256:040afa180ea514495aaff7ad34ec3d27826eaa5d19812730fe9e529b04bb2179 \ - --hash=sha256:0b8a69acaf06b17e9c54151a6c956339cf46db4ff72b3ac28516d0f7068f4ced \ - --hash=sha256:16260e8e03744a6fe3fcb05259eeab8e08342c4c33decf96a9dad9f1187275d0 \ - --hash=sha256:1d6f607ce2e1a93315414e3d448b831238f1874b9968e1195b06efaa5c87e245 \ - --hash=sha256:4699979560728b168d5ab63c668a093c9570af2c7a78ea24ca5212c6cdc2b641 \ - --hash=sha256:4ac76627c0b7ee0e80e871bde0d376a057916cb008a8f3ffc889570a838f5cc7 \ - --hash=sha256:4f1205f97de92c37dd71cf2d5bcfb65fdaed3c255d246172cce729a8d849b4da \ - --hash=sha256:565e70d03e924333004ed101599902bba09ebb14843c8ea39d657f037115201b \ - --hash=sha256:6e06e120e34d93100de448fd941522e11dafa78ef1a893c179901b7d66aa29f2 \ - --hash=sha256:76ae6f1dd041f85065d9df77c6bc9c9703da9b5c018479d20262acc3df97d419 \ - --hash=sha256:798204af1180885651b77bf03adc903743a86a39c7392c472891649610844635 \ - --hash=sha256:8283f42181ff6ccbcf25acaae4e8ab2ff7e92b3ca4a4ced73b2c12d8cd971393 \ - --hash=sha256:8c779e5ebbf0e2e15334ea404fcce54009dc069210164a244d2eac8352a44b28 \ - --hash=sha256:a194ace7bc43ce765338ca2dfb5661489317db216ea7ea700b0332878b392cab \ - --hash=sha256:a289f50bf1bd5be227376c067927f78079a7bdeccf8daa6a9e65c38bae14324b \ - --hash=sha256:ad5fdf6af93ec6c99bf800eba3af9a43d8bfd66dce920ac905c817ef4a712afe \ - --hash=sha256:b413c12f14c1149f0ffd890f4141a7471ba4b41234fe4fd4a0ff82b1dc299dbb \ - --hash=sha256:f4552ff7b18bcec18b60a90c6982049cdb9dac1dba48cf00b97934a06ce2e597 +aiohttp==3.12.15 \ + --hash=sha256:010cc9bbd06db80fe234d9003f67e97a10fe003bfbedb40da7d71c1008eda0fe \ + --hash=sha256:2abbb216a1d3a2fe86dbd2edce20cdc5e9ad0be6378455b05ec7f77361b3ab50 \ + --hash=sha256:3b6f0af863cf17e6222b1735a756d664159e58855da99cfe965134a3ff63b0b0 \ + --hash=sha256:3f9d7c55b41ed687b9d7165b17672340187f87a773c98236c987f08c858145a9 \ + --hash=sha256:421da6fd326460517873274875c6c5a18ff225b40da2616083c5a34a7570b685 \ + --hash=sha256:4420cf9d179ec8dfe4be10e7d0fe47d6d606485512ea2265b0d8c5113372771b \ + --hash=sha256:4fc61385e9c98d72fcdf47e6dd81833f47b2f77c114c29cd64a361be57a763a2 \ + --hash=sha256:6443cca89553b7a5485331bc9bedb2342b08d073fa10b8c7d1c60579c4a7b9bd \ + --hash=sha256:6c5f40ec615e5264f44b4282ee27628cea221fcad52f27405b80abb346d9f3f8 \ + --hash=sha256:74dad41b3458dbb0511e760fb355bb0b6689e0630de8a22b1b62a98777136e16 \ + --hash=sha256:7c7dd29c7b5bda137464dc9bfc738d7ceea46ff70309859ffde8c022e9b08ba7 \ + --hash=sha256:7fbc8a7c410bb3ad5d595bb7118147dfbb6449d862cc1125cf8867cb337e8728 \ + --hash=sha256:b5b7fe4972d48a4da367043b8e023fb70a04d1490aa7d68800e465d1b97e493b \ + --hash=sha256:bc4fbc61bb3548d3b482f9ac7ddd0f18c67e4225aaa4e8552b9f1ac7e6bda9e5 \ + --hash=sha256:ced339d7c9b5030abad5854aa5413a77565e5b6e6248ff927d3e174baf3badf7 \ + --hash=sha256:d3ce17ce0220383a0f9ea07175eeaa6aa13ae5a41f30bc61d84df17f0e9b1117 \ + --hash=sha256:db71ce547012a5420a39c1b744d485cfb823564d01d5d20805977f5ea1345676 \ + 
--hash=sha256:edd533a07da85baa4b423ee8839e3e91681c7bfa19b04260a469ee94b778bf6d aiosignal==1.4.0 \ --hash=sha256:053243f8b92b990551949e63930a839ff0cf0b0ebbe0597b0f3fb19e1a0fe82e \ --hash=sha256:f47eecd9468083c2029cc99945502cb7708b082c232f9aca65da147157b251c7 @@ -35,15 +35,15 @@ aiosqlite==0.21.0 \ annotated-types==0.7.0 \ --hash=sha256:1f02e8b43a8fbbc3f3e0d4f0f4bfc8131bcb4eebe8849b8e5c773f3a1c582a53 \ --hash=sha256:aff07c09a53a08bc8cfccb9c85b05f1aa9a2a6f23728d790723543408344ce89 -anyio==4.9.0 \ - --hash=sha256:673c0c244e15788651a4ff38710fea9675823028a6f08a5eda409e0c9840a028 \ - --hash=sha256:9f76d541cad6e36af7beb62e978876f3b41e3e04f2c1fbf0884604c0a9c4d93c +anyio==4.10.0 \ + --hash=sha256:3f3fae35c96039744587aa5b8371e7e8e603c0702999535961dd336026973ba6 \ + --hash=sha256:60e474ac86736bbfd6f210f7a61218939c318f43f9972497381f1c5e930ed3d1 attrs==25.3.0 \ --hash=sha256:427318ce031701fea540783410126f03899a97ffc6f61596ad581ac2e40e3bc3 \ --hash=sha256:75d7cefc7fb576747b2c81b4442d4d4a1ce0900973527c011d1030fd3bf4af1b -banks==2.1.3 \ - --hash=sha256:9e1217dc977e6dd1ce42c5ff48e9bcaf238d788c81b42deb6a555615ffcffbab \ - --hash=sha256:c0dd2cb0c5487274a513a552827e6a8ddbd0ab1a1b967f177e71a6e4748a3ed2 +banks==2.2.0 \ + --hash=sha256:963cd5c85a587b122abde4f4064078def35c50c688c1b9d36f43c92503854e7d \ + --hash=sha256:d1446280ce6e00301e3e952dd754fd8cee23ff277d29ed160994a84d0d7ffe62 beautifulsoup4==4.13.4 \ --hash=sha256:9bbbb14bfde9d79f38b8cd5f8c7c85f4b8f2523190ebed90e950a8dea4cb1c4b \ --hash=sha256:dbb3c4e1ceae6aefebdaf2423247260cd062430a410e38c66f2baa50a8437195 @@ -54,25 +54,23 @@ black==25.1.0 \ --hash=sha256:96c1c7cd856bba8e20094e36e0f948718dc688dba4a9d78c3adde52b9e6c2299 \ --hash=sha256:a39337598244de4bae26475f77dda852ea00a93bd4c728e09eacd827ec929df0 \ --hash=sha256:bce2e264d59c91e52d8000d507eb20a9aca4a778731a08cfff7e5ac4a4bb7096 -certifi==2025.7.9 \ - --hash=sha256:c1d2ec05395148ee10cf672ffc28cd37ea0ab0d99f9cc74c43e588cbd111b079 \ - --hash=sha256:d842783a14f8fdd646895ac26f719a061408834473cfc10203f6a575beb15d39 -charset-normalizer==3.4.2 \ - --hash=sha256:0c8c57f84ccfc871a48a47321cfa49ae1df56cd1d965a09abe84066f6853b9c0 \ - --hash=sha256:0f5d9ed7f254402c9e7d35d2f5972c9bbea9040e99cd2861bd77dc68263277c7 \ - --hash=sha256:28a1005facc94196e1fb3e82a3d442a9d9110b8434fc1ded7a24a2983c9888d8 \ - --hash=sha256:5baececa9ecba31eff645232d59845c07aa030f0c81ee70184a90d35099a0e63 \ - --hash=sha256:6b66f92b17849b85cad91259efc341dce9c1af48e2173bf38a85c6329f1033e5 \ - --hash=sha256:7f56930ab0abd1c45cd15be65cc741c28b1c9a34876ce8c17a2fa107810c0af0 \ - --hash=sha256:aa88ca0b1932e93f2d961bf3addbb2db902198dca337d88c89e1559e066e7645 \ - --hash=sha256:be1e352acbe3c78727a16a455126d9ff83ea2dfdcbc83148d2982305a04714c2 \ - --hash=sha256:d524ba3f1581b35c03cb42beebab4a13e6cdad7b36246bd22541fa585a56cccd \ - --hash=sha256:daac4765328a919a805fa5e2720f3e94767abd632ae410a9062dff5412bae65a \ - --hash=sha256:e53efc7c7cee4c1e70661e2e112ca46a575f90ed9ae3fef200f2a25e954f4b28 \ - --hash=sha256:e70e990b2137b29dc5564715de1e12701815dacc1d056308e2b17e9095372a82 \ - --hash=sha256:efd387a49825780ff861998cd959767800d54f8308936b21025326de4b5a42b9 \ - --hash=sha256:f0aa37f3c979cf2546b73e8222bbfa3dc07a641585340179d768068e3455e544 \ - --hash=sha256:fdb20a30fe1175ecabed17cbf7812f7b804b8a315a25f24678bcdf120a90077f +certifi==2025.8.3 \ + --hash=sha256:e564105f78ded564e3ae7c923924435e1daa7463faeab5bb932bc53ffae63407 \ + --hash=sha256:f6c12493cfb1b06ba2ff328595af9350c65d6644968e5d3a2ffd78699af217a5 +charset-normalizer==3.4.3 \ + 
--hash=sha256:00237675befef519d9af72169d8604a067d92755e84fe76492fef5441db05b91 \ + --hash=sha256:0e78314bdc32fa80696f72fa16dc61168fda4d6a0c014e0380f9d02f0e5d8a07 \ + --hash=sha256:13faeacfe61784e2559e690fc53fa4c5ae97c6fcedb8eb6fb8d0a15b475d2c64 \ + --hash=sha256:1e8ac75d72fa3775e0b7cb7e4629cec13b7514d928d15ef8ea06bca03ef01cae \ + --hash=sha256:31a9a6f775f9bcd865d88ee350f0ffb0e25936a7f930ca98995c05abf1faf21c \ + --hash=sha256:585f3b2a80fbd26b048a0be90c5aae8f06605d3c92615911c3a2b03a8a3b796f \ + --hash=sha256:6cf8fd4c04756b6b60146d98cd8a77d0cdae0e1ca20329da2ac85eed779b6849 \ + --hash=sha256:6fce4b8500244f6fcb71465d4a4930d132ba9ab8e71a7859e6a5d59851068d14 \ + --hash=sha256:939578d9d8fd4299220161fdd76e86c6a251987476f5243e8864a7844476ba14 \ + --hash=sha256:96b2b3d1a83ad55310de8c7b4a2d04d9277d5591f40761274856635acc5fcb30 \ + --hash=sha256:b256ee2e749283ef3ddcff51a675ff43798d92d746d1a6e4631bf8c707d22d0b \ + --hash=sha256:ce571ab16d890d23b5c278547ba694193a45011ff86a9162a71307ed9f86759a \ + --hash=sha256:fd10de089bcdcd1be95a2f73dbe6254798ec1bda9f450d5828c96f93e2536b9c click==8.2.1 \ --hash=sha256:27c491cc05d968d271d5a1db13e3b5a184636d9d930f148c50b038f0d0646202 \ --hash=sha256:61a3265b914e850b85317d0b3109c7f8cd35a670f963866005d6ef1d5175a12b @@ -94,13 +92,16 @@ dirtyjson==1.0.8 \ distro==1.9.0 \ --hash=sha256:2fa77c6fd8940f116ee1d6b94a2f90b13b5ea8d019b98bc8bafdcabcdd9bdbed \ --hash=sha256:7bffd925d65168f85027d8da9af6bddab658135b840670a223589bc0c8ef02b2 -faiss-cpu==1.11.0 \ - --hash=sha256:2c39a388b059fb82cd97fbaa7310c3580ced63bf285be531453bfffbe89ea3dd \ - --hash=sha256:44877b896a2b30a61e35ea4970d008e8822545cb340eca4eff223ac7f40a1db9 \ - --hash=sha256:926645f1b6829623bc88e93bc8ca872504d604718ada3262e505177939aaee0a \ - --hash=sha256:931db6ed2197c03a7fdf833b057c13529afa2cec8a827aa081b7f0543e4e671b \ - --hash=sha256:a4e3433ffc7f9b8707a7963db04f8676a5756868d325644db2db9d67a618b7a0 \ - --hash=sha256:a90d1c81d0ecf2157e1d2576c482d734d10760652a5b2fcfa269916611e41f1c +faiss-cpu==1.11.0.post1 \ + --hash=sha256:06b1ea9ddec9e4d9a41c8ef7478d493b08d770e9a89475056e963081eed757d1 \ + --hash=sha256:0794eb035c6075e931996cf2b2703fbb3f47c8c34bc2d727819ddc3e5e486a31 \ + --hash=sha256:18d2221014813dc9a4236e47f9c4097a71273fbf17c3fe66243e724e2018a67a \ + --hash=sha256:1b15412b22a05865433aecfdebf7664b9565bd49b600d23a0a27c74a5526893e \ + --hash=sha256:2c8c384e65cc1b118d2903d9f3a27cd35f6c45337696fc0437f71e05f732dbc0 \ + --hash=sha256:36af46945274ed14751b788673125a8a4900408e4837a92371b0cad5708619ea \ + --hash=sha256:3ce8a8984a7dcc689fd192c69a476ecd0b2611c61f96fe0799ff432aa73ff79c \ + --hash=sha256:81c169ea74213b2c055b8240befe7e9b42a1f3d97cda5238b3b401035ce1a18b \ + --hash=sha256:8384e05afb7c7968e93b81566759f862e744c0667b175086efb3d8b20949b39f filelock==3.18.0 \ --hash=sha256:adbc88eabb99d2fec8c9c1b229b171f18afa655400173ddc653d5d01501fb9f2 \ --hash=sha256:c401f4f8377c4464e6db25fff06205fd89bdd83b65eb0488ed1b160f780e21de @@ -127,44 +128,44 @@ frozenlist==1.7.0 \ --hash=sha256:ce48b2fece5aeb45265bb7a58259f45027db0abff478e3077e12b05b17fb9da7 \ --hash=sha256:d50ac7627b3a1bd2dcef6f9da89a772694ec04d9a61b66cf87f7d9446b4a0c31 \ --hash=sha256:fe2365ae915a1fafd982c146754e1de6ab3478def8a59c86e1f7242d794f97d5 -fsspec==2025.5.1 \ - --hash=sha256:24d3a2e663d5fc735ab256263c4075f374a174c3410c0b25e5bd1970bceaa462 \ - --hash=sha256:2e55e47a540b91843b755e83ded97c6e897fa0942b11490113f09e9c443c2475 -greenlet==3.2.3 \ - --hash=sha256:0921ac4ea42a5315d3446120ad48f90c3a6b9bb93dd9b3cf4e4d84a66e42de83 \ - 
--hash=sha256:29e184536ba333003540790ba29829ac14bb645514fbd7e32af331e8202a62a5 \ - --hash=sha256:592c12fb1165be74592f5de0d70f82bc5ba552ac44800d632214b76089945147 \ - --hash=sha256:751261fc5ad7b6705f5f76726567375bb2104a059454e0226e1eef6c756748ba \ - --hash=sha256:784ae58bba89fa1fa5733d170d42486580cab9decda3484779f4759345b29822 \ - --hash=sha256:83a8761c75312361aa2b5b903b79da97f13f556164a7dd2d5448655425bd4c34 \ - --hash=sha256:8b0dd8ae4c0d6f5e54ee55ba935eeb3d735a9b58a8a1e5b5cbab64e01a39f365 \ - --hash=sha256:93c0bb79844a367782ec4f429d07589417052e621aa39a5ac1fb99c5aa308edc \ - --hash=sha256:c667c0bf9d406b77a15c924ef3285e1e05250948001220368e039b6aa5b5034b \ - --hash=sha256:d2971d93bb99e05f8c2c0c2f4aa9484a18d98c4c3bd3c62b65b7e6ae33dfcfaf -griffe==1.7.3 \ - --hash=sha256:52ee893c6a3a968b639ace8015bec9d36594961e156e23315c8e8e51401fa50b \ - --hash=sha256:c6b3ee30c2f0f17f30bcdef5068d6ab7a2a4f1b8bf1a3e74b56fffd21e1c5f75 +fsspec==2025.7.0 \ + --hash=sha256:786120687ffa54b8283d942929540d8bc5ccfa820deb555a2b5d0ed2b737bf58 \ + --hash=sha256:8b012e39f63c7d5f10474de957f3ab793b47b45ae7d39f2fb735f8bbe25c0e21 +greenlet==3.2.4 \ + --hash=sha256:0db5594dce18db94f7d1650d7489909b57afde4c580806b8d9203b6e79cdc079 \ + --hash=sha256:0dca0d95ff849f9a364385f36ab49f50065d76964944638be9691e1832e9f86d \ + --hash=sha256:1987de92fec508535687fb807a5cea1560f6196285a4cde35c100b8cd632cc52 \ + --hash=sha256:1ee8fae0519a337f2329cb78bd7a8e128ec0f881073d43f023c7b8d4831d5246 \ + --hash=sha256:2523e5246274f54fdadbce8494458a2ebdcdbc7b802318466ac5606d3cded1f8 \ + --hash=sha256:4d1378601b85e2e5171b99be8d2dc85f594c79967599328f95c1dc1a40f1c633 \ + --hash=sha256:55e9c5affaa6775e2c6b67659f3a71684de4c549b3dd9afca3bc773533d284fa \ + --hash=sha256:94abf90142c2a18151632371140b3dba4dee031633fe614cb592dbb6c9e17bc3 \ + --hash=sha256:96378df1de302bc38e99c3a9aa311967b7dc80ced1dcc6f171e99842987882a2 \ + --hash=sha256:9c40adce87eaa9ddb593ccb0fa6a07caf34015a29bf8d344811665b573138db9 +griffe==1.11.0 \ + --hash=sha256:c153b5bc63ca521f059e9451533a67e44a9d06cf9bf1756e4298bda5bd3262e8 \ + --hash=sha256:dc56cc6af8d322807ecdb484b39838c7a51ca750cf21ccccf890500c4d6389d8 h11==0.16.0 \ --hash=sha256:4e35b956cf45792e4caa5885e69fba00bdbc6ffafbfa020300e549b208ee5ff1 \ --hash=sha256:63cf8bbe7522de3bf65932fda1d9c2772064ffb3dae62d55932da54b31cb6c86 -hf-xet==1.1.5; platform_machine == "x86_64" or platform_machine == "amd64" or platform_machine == "arm64" or platform_machine == "aarch64" \ - --hash=sha256:69ebbcfd9ec44fdc2af73441619eeb06b94ee34511bbcf57cd423820090f5694 \ - --hash=sha256:73e167d9807d166596b4b2f0b585c6d5bd84a26dea32843665a8b58f6edba245 \ - --hash=sha256:83088ecea236d5113de478acb2339f92c95b4fb0462acaa30621fac02f5a534a \ - --hash=sha256:9fa6e3ee5d61912c4a113e0708eaaef987047616465ac7aa30f7121a48fc1af8 \ - --hash=sha256:ab34c4c3104133c495785d5d8bba3b1efc99de52c02e759cf711a91fd39d3a14 \ - --hash=sha256:dbba1660e5d810bd0ea77c511a99e9242d920790d0e63c0e4673ed36c4022d18 \ - --hash=sha256:f52c2fa3635b8c37c7764d8796dfa72706cc4eded19d638331161e82b0792e23 \ - --hash=sha256:fc874b5c843e642f45fd85cda1ce599e123308ad2901ead23d3510a47ff506d1 +hf-xet==1.1.7; platform_machine == "x86_64" or platform_machine == "amd64" or platform_machine == "arm64" or platform_machine == "aarch64" \ + --hash=sha256:18b61bbae92d56ae731b92087c44efcac216071182c603fc535f8e29ec4b09b8 \ + --hash=sha256:20cec8db4561338824a3b5f8c19774055b04a8df7fff0cb1ff2cb1a0c1607b80 \ + --hash=sha256:2e356da7d284479ae0f1dea3cf5a2f74fdf925d6dca84ac4341930d892c7cb34 \ + 
--hash=sha256:60dae4b44d520819e54e216a2505685248ec0adbdb2dd4848b17aa85a0375cde \ + --hash=sha256:6efaaf1a5a9fc3a501d3e71e88a6bfebc69ee3a716d0e713a931c8b8d920038f \ + --hash=sha256:713f2bff61b252f8523739969f247aa354ad8e6d869b8281e174e2ea1bb8d604 \ + --hash=sha256:751571540f9c1fbad9afcf222a5fb96daf2384bf821317b8bfb0c59d86078513 \ + --hash=sha256:b109f4c11e01c057fc82004c9e51e6cdfe2cb230637644ade40c599739067b2e httpcore==1.0.9 \ --hash=sha256:2d400746a40668fc9dec9810239072b40b4484b640a8c38fd654a024c7a1bf55 \ --hash=sha256:6e34463af53fd2ab5d807f399a9b45ea31c3dfa2276f15a2c3f00afff6e176e8 httpx==0.28.1 \ --hash=sha256:75e98c5f16b0f35b567856f597f06ff2270a374470a5c2392242528e3e3e42fc \ --hash=sha256:d909fcccc110f8c7faf814ca82a9a4d816bc5a6dbfea25d6591d6985b8ba59ad -huggingface-hub[inference]==0.33.4 \ - --hash=sha256:09f9f4e7ca62547c70f8b82767eefadd2667f4e116acba2e3e62a5a81815a7bb \ - --hash=sha256:6af13478deae120e765bfd92adad0ae1aec1ad8c439b46f23058ad5956cbca0a +huggingface-hub[inference]==0.34.4 \ + --hash=sha256:9b365d781739c93ff90c359844221beef048403f1bc1f1c123c191257c3c890a \ + --hash=sha256:a4228daa6fb001be3f4f4bdaf9a0db00e1739235702848df00885c9b5742c85c idna==3.10 \ --hash=sha256:12f65c9b470abda6dc35cf8e63cc574b1c52b11df2c86030af0ac09b01b13ea9 \ --hash=sha256:946d195a0d259cbba61165e88e65941f16e9b36ea6ddb97f00452bae8b1287d3 @@ -188,63 +189,51 @@ jiter==0.10.0 \ joblib==1.5.1 \ --hash=sha256:4719a31f054c7d766948dcd83e9613686b27114f190f717cec7eaa2084f8a74a \ --hash=sha256:f4f86e351f39fe3d0d32a9f2c3d8af1ee4cec285aafcb27003dda5205576b444 -llama-cloud==0.1.32 \ - --hash=sha256:c42b2d5fb24acc8595bcc3626fb84c872909a16ab6d6879a1cb1101b21c238bd \ - --hash=sha256:cea98241127311ea91f191c3c006aa6558f01d16f9539ed93b24d716b888f10e -llama-cloud-services==0.6.43 \ - --hash=sha256:2349195f501ba9151ea3ab384d20cae8b4dc4f335f60bd17607332626bdfa2e4 \ - --hash=sha256:fa6be33bf54d467cace809efee8c2aeeb9de74ce66708513d37b40d738d3350f -llama-index==0.12.48 \ - --hash=sha256:54b922fd94efde2c21c12be392c381cb4a0531a7ca8e482a7e3d1c6795af2da5 \ - --hash=sha256:93a80de54a5cf86114c252338d7917bb81ffe94afa47f01c41c9ee04c0155db4 -llama-index-agent-openai==0.4.12 \ - --hash=sha256:6dbb6276b2e5330032a726b28d5eef5140825f36d72d472b231f08ad3af99665 \ - --hash=sha256:d2fe53feb69cfe45752edb7328bf0d25f6a9071b3c056787e661b93e5b748a28 -llama-index-cli==0.4.4 \ - --hash=sha256:1070593cf79407054735ab7a23c5a65a26fc18d264661e42ef38fc549b4b7658 \ - --hash=sha256:c3af0cf1e2a7e5ef44d0bae5aa8e8872b54c5dd6b731afbae9f13ffeb4997be0 -llama-index-core==0.12.48 \ - --hash=sha256:0770119ab540605cb217dc9b26343b0bdf6f91d843cfb17d0074ba2fac358e56 \ - --hash=sha256:a5cb2179495f091f351a41b4ef312ec6593660438e0066011ec81f7b5d2c93be -llama-index-embeddings-huggingface==0.5.5 \ - --hash=sha256:7f6e9a031d9146f235df597c0ccd6280cde96b9b437f99052ce79bb72e5fac5e \ - --hash=sha256:8260e1561df17ca510e241a90504b37cc7d8ac6f2d6aaad9732d04ca3ad988d1 -llama-index-embeddings-openai==0.3.1 \ - --hash=sha256:1368aad3ce24cbaed23d5ad251343cef1eb7b4a06d6563d6606d59cb347fef20 \ - --hash=sha256:f15a3d13da9b6b21b8bd51d337197879a453d1605e625a1c6d45e741756c0290 -llama-index-indices-managed-llama-cloud==0.7.10 \ - --hash=sha256:53267907e23d8fbcbb97c7a96177a41446de18550ca6030276092e73b45ca880 \ - --hash=sha256:f7edcfb8f694cab547cd9324be7835dc97470ce05150d0b8888fa3bf9d2f84a8 -llama-index-instrumentation==0.2.0 \ - --hash=sha256:1055ae7a3d19666671a8f1a62d08c90472552d9fcec7e84e6919b2acc92af605 \ - --hash=sha256:ae8333522487e22a33732924a9a08dfb456f54993c5c97d8340db3c620b76f13 
-llama-index-llms-openai==0.4.7 \ - --hash=sha256:3b8d9d3c1bcadc2cff09724de70f074f43eafd5b7048a91247c9a41b7cd6216d \ - --hash=sha256:564af8ab39fb3f3adfeae73a59c0dca46c099ab844a28e725eee0c551d4869f8 -llama-index-multi-modal-llms-openai==0.5.3 \ - --hash=sha256:b755a8b47d8d2f34b5a3d249af81d9bfb69d3d2cf9ab539d3a42f7bfa3e2391a \ - --hash=sha256:be6237df8f9caaa257f9beda5317287bbd2ec19473d777a30a34e41a7c5bddf8 -llama-index-program-openai==0.3.2 \ - --hash=sha256:04c959a2e616489894bd2eeebb99500d6f1c17d588c3da0ddc75ebd3eb7451ee \ - --hash=sha256:451829ae53e074e7b47dcc60a9dd155fcf9d1dcbc1754074bdadd6aab4ceb9aa -llama-index-question-gen-openai==0.3.1 \ - --hash=sha256:1ce266f6c8373fc8d884ff83a44dfbacecde2301785db7144872db51b8b99429 \ - --hash=sha256:5e9311b433cc2581ff8a531fa19fb3aa21815baff75aaacdef11760ac9522aa9 -llama-index-readers-file==0.4.11 \ - --hash=sha256:1b21cb66d78dd5f60e8716607d9a47ccd81bb39106d459665be1ca7799e9597b \ - --hash=sha256:e71192d8d6d0bf95131762da15fa205cf6e0cc248c90c76ee04d0fbfe160d464 -llama-index-readers-llama-parse==0.4.0 \ - --hash=sha256:574e48386f28d2c86c3f961ca4a4906910312f3400dd0c53014465bfbc6b32bf \ - --hash=sha256:e99ec56f4f8546d7fda1a7c1ae26162fb9acb7ebcac343b5abdb4234b4644e0f -llama-index-vector-stores-faiss==0.4.0 \ - --hash=sha256:092907b38c70b7f9698ad294836389b31fd3a1273ea1d93082993dd0925c8a4b \ - --hash=sha256:59b58e4ec91880a5871a896bbdbd94cb781a447f92f400b5f08a62eb56a62e5c -llama-index-workflows==1.1.0 \ - --hash=sha256:992fd5b012f56725853a4eed2219a66e19fcc7a6db85dc51afcc1bd2a5dd6db1 \ - --hash=sha256:ff001d362100bfc2a3353cc5f2528a0adb52245e632191a86b4bddacde72b6af -llama-parse==0.6.43 \ - --hash=sha256:d88e91c97e37f77b2619111ef43c02b7da61125f821cf77f918996eb48200d78 \ - --hash=sha256:fe435309638c4fdec4fec31f97c5031b743c92268962d03b99bd76704f566c32 +llama-cloud==0.1.35 \ + --hash=sha256:200349d5d57424d7461f304cdb1355a58eea3e6ca1e6b0d75c66b2e937216983 \ + --hash=sha256:b7abab4423118e6f638d2f326749e7a07c6426543bea6da99b623c715b22af71 +llama-cloud-services==0.6.54 \ + --hash=sha256:07f595f7a0ba40c6a1a20543d63024ca7600fe65c4811d1951039977908997be \ + --hash=sha256:baf65d9bffb68f9dca98ac6e22908b6675b2038b021e657ead1ffc0e43cbd45d +llama-index==0.13.1 \ + --hash=sha256:0cf06beaf460bfa4dd57902e7f4696626da54350851a876b391a82acce7fe5c2 \ + --hash=sha256:e02b61cac0699c709a12e711bdaca0a2c90c9b8177d45f9b07b8650c9985d09e +llama-index-cli==0.5.0 \ + --hash=sha256:2eb9426232e8d89ffdf0fa6784ff8da09449d920d71d0fcc81d07be93cf9369f \ + --hash=sha256:e331ca98005c370bfe58800fa5eed8b10061d0b9c656b84a1f5f6168733a2a7b +llama-index-core==0.13.1 \ + --hash=sha256:04a58cb26638e186ddb02a80970d503842f68abbeb8be5af6a387c51f7995eeb \ + --hash=sha256:fde6c8c8bcacf7244bdef4908288eced5e11f47e9741d545846c3d1692830510 +llama-index-embeddings-huggingface==0.6.0 \ + --hash=sha256:0c24aba5265a7dbd6591394a8d2d64d0b978bb50b4b97c4e88cbf698b69fdd10 \ + --hash=sha256:3ece7d8c5b683d2055fedeca4457dea13f75c81a6d7fb94d77e878cd73d90d97 +llama-index-embeddings-openai==0.5.0 \ + --hash=sha256:ac587839a111089ea8a6255f9214016d7a813b383bbbbf9207799be1100758eb \ + --hash=sha256:d817edb22e3ff475e8cd1833faf1147028986bc1d688f7894ef947558864b728 +llama-index-indices-managed-llama-cloud==0.9.1 \ + --hash=sha256:7bee1a368a17ff63bf1078e5ad4795eb88dcdb87c259cfb242c19bd0f4fb978e \ + --hash=sha256:df33fb6d8c6b7ee22202ee7a19285a5672f0e58a1235a2504b49c90a7e1c8933 +llama-index-instrumentation==0.4.0 \ + --hash=sha256:83f73156be34dd0121dfe9e259883620e19f0162f152ac483e179ad5ad0396ac \ + 
--hash=sha256:f38ecc1f02b6c1f7ab84263baa6467fac9f86538c0ee25542853de46278abea7 +llama-index-llms-openai==0.5.2 \ + --hash=sha256:53237fda8ff9089fdb2543ac18ea499b27863cc41095d3a3499f19e9cfd98e1a \ + --hash=sha256:f1cc5be83f704d217bd235b609ad1b128dbd42e571329b108f902920836c1071 +llama-index-readers-file==0.5.0 \ + --hash=sha256:7fc47a9dbf11d07e78992581c20bca82b21bf336e646b4f53263f3909cb02c58 \ + --hash=sha256:f324617bfc4d9b32136d25ff5351b92bc0b569a296173ee2a8591c1f886eff0c +llama-index-readers-llama-parse==0.5.0 \ + --hash=sha256:891b21fb63fe1fe722e23cfa263a74d9a7354e5d8d7a01f2d4040a52f8d8feef \ + --hash=sha256:e63ebf2248c4a726b8a1f7b029c90383d82cdc142942b54dbf287d1f3aee6d75 +llama-index-vector-stores-faiss==0.5.0 \ + --hash=sha256:2fa9848a4423ddb26f987d299749f1fa1c272b8e576332a03e0610d4ee236d09 \ + --hash=sha256:4b6a1533c075b6e30985bf1eb778716c594ae0511691434df7f75b032ef964eb +llama-index-workflows==1.3.0 \ + --hash=sha256:328cc25d92b014ef527f105a2f2088c0924fff0494e53d93decb951f14fbfe47 \ + --hash=sha256:9c1688e237efad384f16485af71c6f9456a2eb6d85bf61ff49e5717f10ff286d +llama-parse==0.6.54 \ + --hash=sha256:c66c8d51cf6f29a44eaa8595a595de5d2598afc86e5a33a4cebe5fe228036920 \ + --hash=sha256:c707b31152155c9bae84e316fab790bbc8c85f4d8825ce5ee386ebeb7db258f1 markupsafe==3.0.2 \ --hash=sha256:0bff5e0ae4ef2e1ae4fdf2dfd5b76c75e5c2fa4132d05fc1b0dabcd20c7e28c4 \ --hash=sha256:1e084f686b92e5b83186b07e8a17fc09e38fff551f3602b249881fec658d3eca \ @@ -284,15 +273,15 @@ multidict==6.6.3 \ --hash=sha256:e995a34c3d44ab511bfc11aa26869b9d66c2d8c799fa0e74b28a473a692532d6 \ --hash=sha256:ef43b5dd842382329e4797c46f10748d8c2b6e0614f46b4afe4aee9ac33159df \ --hash=sha256:f114d8478733ca7388e7c7e0ab34b72547476b97009d643644ac33d4d3fe1821 -mypy==1.16.1 \ - --hash=sha256:08e850ea22adc4d8a4014651575567b0318ede51e8e9fe7a68f25391af699507 \ - --hash=sha256:211287e98e05352a2e1d4e8759c5490925a7c784ddc84207f4714822f8cf99b6 \ - --hash=sha256:22d76a63a42619bfb90122889b903519149879ddbf2ba4251834727944c8baca \ - --hash=sha256:2c7ce0662b6b9dc8f4ed86eb7a5d505ee3298c04b40ec13b30e572c0e5ae17c4 \ - --hash=sha256:472e4e4c100062488ec643f6162dd0d5208e33e2f34544e1fc931372e806c0cc \ - --hash=sha256:5fc2ac4027d0ef28d6ba69a0343737a23c4d1b83672bf38d1fe237bdc0643b37 \ - --hash=sha256:6bd00a0a2094841c5e47e7374bb42b83d64c527a502e3334e1173a0c24437bab \ - --hash=sha256:ea16e2a7d2714277e349e24d19a782a663a34ed60864006e8585db08f8ad1782 +mypy==1.17.1 \ + --hash=sha256:064e2ff508e5464b4bd807a7c1625bc5047c5022b85c70f030680e18f37273a5 \ + --hash=sha256:25e01ec741ab5bb3eec8ba9cdb0f769230368a22c959c4937360efb89b7e9f01 \ + --hash=sha256:70401bbabd2fa1aa7c43bb358f54037baf0586f41e83b0ae67dd0534fc64edfd \ + --hash=sha256:a9f52c0351c21fe24c21d8c0eb1f62967b262d6729393397b6f443c3b773c3b9 \ + --hash=sha256:ad37544be07c5d7fba814eb370e006df58fed8ad1ef33ed1649cb1889ba6ff58 \ + --hash=sha256:c1fdf4abb29ed1cb091cf432979e162c208a5ac676ce35010373ff29247bcad5 \ + --hash=sha256:e92bdc656b7757c438660f775f872a669b8ff374edc4d18277d86b63edba6b8b \ + --hash=sha256:ff2933428516ab63f961644bc49bc4cbe42bbffb2cd3b71cc7277c07d16b1a8b mypy-extensions==1.1.0 \ --hash=sha256:1be4cccdb0f2482337c4743e60421de3a356cd97508abadd57d47403e94f5505 \ --hash=sha256:52e68efc3284861e772bbcd66823fde5ae21fd2fdb51c62a211403730b916558 @@ -305,25 +294,26 @@ networkx==3.5 \ nltk==3.9.1 \ --hash=sha256:4fa26829c5b00715afe3061398a8989dc643b92ce7dd93fb4585a70930d168a1 \ --hash=sha256:87d127bd3de4bd89a4f81265e5fa59cb1b199b27440175370f7417d2bc7ae868 -numpy==2.3.1 \ - 
--hash=sha256:0025048b3c1557a20bc80d06fdeb8cc7fc193721484cca82b2cfa072fec71a93 \ - --hash=sha256:0bb3a4a61e1d327e035275d2a993c96fa786e4913aa089843e6a2d9dd205c66a \ - --hash=sha256:15aa4c392ac396e2ad3d0a2680c0f0dee420f9fed14eef09bdb9450ee6dcb7b7 \ - --hash=sha256:1ec9ae20a4226da374362cca3c62cd753faf2f951440b0e3b98e93c235441d2b \ - --hash=sha256:467db865b392168ceb1ef1ffa6f5a86e62468c43e0cfb4ab6da667ede10e58db \ - --hash=sha256:5ccb7336eaf0e77c1635b232c141846493a588ec9ea777a7c24d7166bb8533ae \ - --hash=sha256:6ea9e48336a402551f52cd8f593343699003d2353daa4b72ce8d34f66b722070 \ - --hash=sha256:a5ee121b60aa509679b682819c602579e1df14a5b07fe95671c8849aad8f2115 \ - --hash=sha256:a8b740f5579ae4585831b3cf0e3b0425c667274f82a484866d2adf9570539369 \ - --hash=sha256:ad506d4b09e684394c42c966ec1527f6ebc25da7f4da4b1b056606ffe446b8a3 \ - --hash=sha256:afed2ce4a84f6b0fc6c1ce734ff368cbf5a5e24e8954a338f3bdffa0718adffb \ - --hash=sha256:c6e0bf9d1a2f50d2b65a7cf56db37c095af17b59f6c132396f7c6d5dd76484df \ - --hash=sha256:d4580adadc53311b163444f877e0789f1c8861e2698f6b2a4ca852fda154f3ff \ - --hash=sha256:e344eb79dab01f1e838ebb67aab09965fb271d6da6b00adda26328ac27d4a66e \ - --hash=sha256:e610832418a2bc09d974cc9fecebfa51e9532d6190223bc5ef6a7402ebf3b5cb \ - --hash=sha256:eabd7e8740d494ce2b4ea0ff05afa1b7b291e978c0ae075487c51e8bd93c0c68 \ - --hash=sha256:ebb8603d45bc86bbd5edb0d63e52c5fd9e7945d3a503b77e486bd88dde67a19b \ - --hash=sha256:ec0bdafa906f95adc9a0c6f26a4871fa753f25caaa0e032578a30457bff0af6a +numpy==2.3.2 \ + --hash=sha256:14a91ebac98813a49bc6aa1a0dfc09513dcec1d97eaf31ca21a87221a1cdcb15 \ + --hash=sha256:1f91e5c028504660d606340a084db4b216567ded1056ea2b4be4f9d10b67197f \ + --hash=sha256:20b8200721840f5621b7bd03f8dcd78de33ec522fc40dc2641aa09537df010c3 \ + --hash=sha256:240259d6564f1c65424bcd10f435145a7644a65a6811cfc3201c4a429ba79170 \ + --hash=sha256:2c3271cc4097beb5a60f010bcc1cc204b300bb3eafb4399376418a83a1c6373c \ + --hash=sha256:4209f874d45f921bde2cff1ffcd8a3695f545ad2ffbef6d3d3c6768162efab89 \ + --hash=sha256:4ae6863868aaee2f57503c7a5052b3a2807cf7a3914475e637a0ecd366ced220 \ + --hash=sha256:6936aff90dda378c09bea075af0d9c675fe3a977a9d2402f95a87f440f59f619 \ + --hash=sha256:69779198d9caee6e547adb933941ed7520f896fd9656834c300bdf4dd8642712 \ + --hash=sha256:71669b5daae692189540cffc4c439468d35a3f84f0c88b078ecd94337f6cb0ec \ + --hash=sha256:7a0e27186e781a69959d0230dd9909b5e26024f8da10683bd6344baea1885168 \ + --hash=sha256:8446acd11fe3dc1830568c941d44449fd5cb83068e5c70bd5a470d323d448296 \ + --hash=sha256:852ae5bed3478b92f093e30f785c98e0cb62fa0a939ed057c31716e18a7a22b9 \ + --hash=sha256:aa098a5ab53fa407fded5870865c6275a5cd4101cfdef8d6fafc48286a96e981 \ + --hash=sha256:cbc95b3813920145032412f7e33d12080f11dc776262df1712e1638207dde9e8 \ + --hash=sha256:e0486a11ec30cdecb53f184d496d1c6a20786c81e55e41640270130056f8ee48 \ + --hash=sha256:f0a1a8476ad77a228e41619af2fa9505cf69df928e9aaa165746584ea17fed2b \ + --hash=sha256:f75018be4980a7324edc5930fe39aa391d5734531b1926968605416ff58c332d \ + --hash=sha256:fb1752a3bb9a3ad2d6b090b88a9a0ae1cd6f004ef95f75825e2f382c183b2097 nvidia-cublas-cu12==12.4.5.8; platform_system == "Linux" and platform_machine == "x86_64" \ --hash=sha256:0f8aa1706812e00b9f19dfe0cdb3999b092ccb8ca168c0db5b8ea712456fd9b3 \ --hash=sha256:2fc8da60df463fdefa81e323eef2e36489e1c94335b5358bcb38360adf75ac9b \ @@ -373,9 +363,9 @@ nvidia-nvtx-cu12==12.4.127; platform_system == "Linux" and platform_machine == " --hash=sha256:641dccaaa1139f3ffb0d3164b4b84f9d253397e38246a4f2f36728b48566d485 \ 
    --hash=sha256:781e950d9b9f60d8241ccea575b32f5105a5baf4c2351cab5256a24869f12a1a \
    --hash=sha256:7959ad635db13edf4fc65c06a6e9f9e55fc2f92596db928d169c0bb031e88ef3
-openai==1.95.1 \
-    --hash=sha256:8bbdfeceef231b1ddfabbc232b179d79f8b849aab5a7da131178f8d10e0f162f \
-    --hash=sha256:f089b605282e2a2b6776090b4b46563ac1da77f56402a222597d591e2dcc1086
+openai==1.99.6 \
+    --hash=sha256:e40d44b2989588c45ce13819598788b77b8fb80ba2f7ae95ce90d14e46f1bd26 \
+    --hash=sha256:f48f4239b938ef187062f3d5199a05b69711d8b600b9a9b6a3853cd271799183
packaging==25.0 \
    --hash=sha256:29572ef2b1f17581046b3a2227d5c611fb25ec70ca1ba8554b24b0e69331a484 \
    --hash=sha256:d443872c98d677bf60f6a1f2f8c1cb748e8fe762d2bf9d3148b5599295b0fc4f
@@ -470,9 +460,9 @@ pydantic-core==2.33.2 \
    --hash=sha256:e799c050df38a639db758c617ec771fd8fb7a5f8eaaa4b27b101f266b216a246 \
    --hash=sha256:f889f7a40498cc077332c7ab6b4608d296d852182211787d4f3ee377aaae66e8 \
    --hash=sha256:fe5b32187cbc0c862ee201ad66c30cf218e5ed468ec8dc1cf49dec66e160cc4d
-pypdf==5.7.0 \
-    --hash=sha256:203379453439f5b68b7a1cd43cdf4c5f7a02b84810cefa7f93a47b350aaaba48 \
-    --hash=sha256:68c92f2e1aae878bab1150e74447f31ab3848b1c0a6f8becae9f0b1904460b6f
+pypdf==5.9.0 \
+    --hash=sha256:30f67a614d558e495e1fbb157ba58c1de91ffc1718f5e0dfeb82a029233890a1 \
+    --hash=sha256:be10a4c54202f46d9daceaa8788be07aa8cd5ea8c25c529c50dd509206382c35
python-dateutil==2.9.0.post0 \
    --hash=sha256:37dd54208da7e1cd875388217d5e00ebd4179249f90fb72437e91a35459a0ad3 \
    --hash=sha256:a8b2bc7bffae282281c8140a97d3aa9c14da0b136dfe83f850eea9a5f7470427
@@ -496,82 +486,81 @@ pyyaml==6.0.2 \
    --hash=sha256:d584d9ec91ad65861cc08d42e834324ef890a082e591037abe114850ff7bbc3e \
    --hash=sha256:e10ce637b18caea04431ce14fabcf5c64a1c61ec9c56b071a4b7ca131ca52d44 \
    --hash=sha256:ff3824dc5261f50c9b0dfb3be22b4567a6f938ccce4587b38952d85fd9e9afe4
-regex==2024.11.6 \
-    --hash=sha256:02e28184be537f0e75c1f9b2f8847dc51e08e6e171c6bde130b2687e0c33cf60 \
-    --hash=sha256:068376da5a7e4da51968ce4c122a7cd31afaaec4fccc7856c92f63876e57b51d \
-    --hash=sha256:1062b39a0a2b75a9c694f7a08e7183a80c63c0d62b301418ffd9c35f55aaa114 \
-    --hash=sha256:167ed4852351d8a750da48712c3930b031f6efdaa0f22fa1933716bfcd6bf4a3 \
-    --hash=sha256:202eb32e89f60fc147a41e55cb086db2a3f8cb82f9a9a88440dcfc5d37faae8d \
-    --hash=sha256:2c89a8cc122b25ce6945f0423dc1352cb9593c68abd19223eebbd4e56612c5b7 \
-    --hash=sha256:2d548dafee61f06ebdb584080621f3e0c23fff312f0de1afc776e2a2ba99a74f \
-    --hash=sha256:4181b814e56078e9b00427ca358ec44333765f5ca1b45597ec7446d3a1ef6e34 \
-    --hash=sha256:5478c6962ad548b54a591778e93cd7c456a7a29f8eca9c49e4f9a806dcc5d638 \
-    --hash=sha256:7ab159b063c52a0333c884e4679f8d7a85112ee3078fe3d9004b2dd875585519 \
-    --hash=sha256:94d87b689cdd831934fa3ce16cc15cd65748e6d689f5d2b8f4f4df2065c9fa20 \
-    --hash=sha256:9714398225f299aa85267fd222f7142fcb5c769e73d7733344efc46f2ef5cf89 \
-    --hash=sha256:ac10f2c4184420d881a3475fb2c6f4d95d53a8d50209a2500723d831036f7c45 \
-    --hash=sha256:bec9931dfb61ddd8ef2ebc05646293812cb6b16b60cf7c9511a832b6f1854b55 \
-    --hash=sha256:c36f9b6f5f8649bb251a5f3f66564438977b7ef8386a52460ae77e6070d309d9 \
-    --hash=sha256:f2a19f302cd1ce5dd01a9099aaa19cae6173306d1302a43b627f62e21cf18ac0
+regex==2025.7.34 \
+    --hash=sha256:24257953d5c1d6d3c129ab03414c07fc1a47833c9165d49b954190b2b7f21a1a \
+    --hash=sha256:3157aa512b9e606586900888cd469a444f9b898ecb7f8931996cb715f77477f0 \
+    --hash=sha256:35e43ebf5b18cd751ea81455b19acfdec402e82fe0dc6143edfae4c5c4b3909a \
+    --hash=sha256:37555e4ae0b93358fa7c2d240a4291d4a4227cc7c607d8f85596cdb08ec0a083 \
+    --hash=sha256:85c3a958ef8b3d5079c763477e1f09e89d13ad22198a37e9d7b26b4b17438b33 \
+    --hash=sha256:96bbae4c616726f4661fe7bcad5952e10d25d3c51ddc388189d8864fbc1b3c68 \
+    --hash=sha256:9ead9765217afd04a86822dfcd4ed2747dfe426e887da413b15ff0ac2457e21a \
+    --hash=sha256:9feab78a1ffa4f2b1e27b1bcdaad36f48c2fed4870264ce32f52a393db093c78 \
+    --hash=sha256:a664291c31cae9c4a30589bd8bc2ebb56ef880c9c6264cb7643633831e606a4d \
+    --hash=sha256:d428fc7731dcbb4e2ffe43aeb8f90775ad155e7db4347a639768bc6cd2df881a \
+    --hash=sha256:da304313761b8500b8e175eb2040c4394a875837d5635f6256d6fa0377ad32c8 \
+    --hash=sha256:e154a7ee7fa18333ad90b20e16ef84daaeac61877c8ef942ec8dfa50dc38b7a1 \
+    --hash=sha256:ee38926f31f1aa61b0232a3a11b83461f7807661c062df9eb88769d86e6195c3 \
+    --hash=sha256:f14b36e6d4d07f1a5060f28ef3b3561c5d95eb0651741474ce4c0a4c56ba8719 \
+    --hash=sha256:f3e5c1e0925e77ec46ddc736b756a6da50d4df4ee3f69536ffb2373460e2dafd
requests==2.32.4 \
    --hash=sha256:27babd3cda2a6d50b30443204ee89830707d396671944c998b5975b031ac2b2c \
    --hash=sha256:27d0316682c8a29834d3264820024b62a36942083d52caf2f14c0591336d3422
-ruff==0.12.3 \
-    --hash=sha256:023040a3499f6f974ae9091bcdd0385dd9e9eb4942f231c23c57708147b06311 \
-    --hash=sha256:0262d50ba2767ed0fe212aa7e62112a1dcbfd46b858c5bf7bbd11f326998bafc \
-    --hash=sha256:0a9153b000c6fe169bb307f5bd1b691221c4286c133407b8827c406a55282041 \
-    --hash=sha256:12371aec33e1a3758597c5c631bae9a5286f3c963bdfb4d17acdd2d395406687 \
-    --hash=sha256:2120d3aa855ff385e0e562fdee14d564c9675edbe41625c87eeab744a7830d12 \
-    --hash=sha256:40dced4a79d7c264389de1c59467d5d5cefd79e7e06d1dfa2c75497b5269a5a6 \
-    --hash=sha256:47552138f7206454eaf0c4fe827e546e9ddac62c2a3d2585ca54d29a890137a2 \
-    --hash=sha256:560f13b6baa49785665276c963edc363f8ad4b4fc910a883e2625bdb14a83a9e \
-    --hash=sha256:5f9c7c9c8f84c2d7f27e93674d27136fbf489720251544c4da7fb3d742e011b1 \
-    --hash=sha256:6b16647cbb470eaf4750d27dddc6ebf7758b918887b56d39e9c22cce2049082b \
-    --hash=sha256:883d844967bffff5ab28bba1a4d246c1a1b2933f48cb9840f3fdc5111c603b07 \
-    --hash=sha256:a946cf1e7ba3209bdef039eb97647f1c77f6f540e5845ec9c114d3af8df873e7 \
-    --hash=sha256:c4faaff1f90cea9d3033cbbcdf1acf5d7fb11d8180758feb31337391691f3df0 \
-    --hash=sha256:dfd45e6e926deb6409d0616078a666ebce93e55e07f0fb0228d4b2608b2c248d \
-    --hash=sha256:e1417051edb436230023575b149e8ff843a324557fe0a265863b7602df86722f \
-    --hash=sha256:e2506961bf6ead54887ba3562604d69cb430f59b42133d36976421bc8bd45901 \
-    --hash=sha256:f1b5a4b6668fd7b7ea3697d8d98857390b40c1320a63a178eee6be0899ea2d77 \
-    --hash=sha256:fa6b24600cf3b750e48ddb6057e901dd5b9aa426e316addb2a1af185a7509882
-safetensors==0.5.3 \
-    --hash=sha256:1077f3e94182d72618357b04b5ced540ceb71c8a813d3319f1aba448e68a770d \
-    --hash=sha256:11bce6164887cd491ca75c2326a113ba934be596e22b28b1742ce27b1d076467 \
-    --hash=sha256:21d01c14ff6c415c485616b8b0bf961c46b3b343ca59110d38d744e577f9cce7 \
-    --hash=sha256:32c3ef2d7af8b9f52ff685ed0bc43913cdcde135089ae322ee576de93eae5135 \
-    --hash=sha256:37f1521be045e56fc2b54c606d4455573e717b2d887c579ee1dbba5f868ece04 \
-    --hash=sha256:391ac8cab7c829452175f871fcaf414aa1e292b5448bd02620f675a7f3e7abb9 \
-    --hash=sha256:4a243be3590bc3301c821da7a18d87224ef35cbd3e5f5727e4e0728b8172411e \
-    --hash=sha256:799021e78287bac619c7b3f3606730a22da4cda27759ddf55d37c8db7511c74b \
-    --hash=sha256:836cbbc320b47e80acd40e44c8682db0e8ad7123209f69b093def21ec7cafd11 \
-    --hash=sha256:8bd84b12b1670a6f8e50f01e28156422a2bc07fb16fc4e98bded13039d688a0d \
-    --hash=sha256:b6b0d6ecacec39a4fdd99cc19f4576f5219ce858e6fd8dbe7609df0b8dc56965 \
-    --hash=sha256:bd20eb133db8ed15b40110b7c00c6df51655a2998132193de2f75f72d99c7073 \
-    --hash=sha256:cead1fa41fc54b1e61089fa57452e8834f798cb1dc7a09ba3524f1eb08e0317a \
-    --hash=sha256:cfc0ec0846dcf6763b0ed3d1846ff36008c6e7290683b61616c4b040f6a54ace \
-    --hash=sha256:df26da01aaac504334644e1b7642fa000bfec820e7cef83aeac4e355e03195ff
-scikit-learn==1.7.0 \
-    --hash=sha256:7d7240c7b19edf6ed93403f43b0fcb0fe95b53bc0b17821f8fb88edab97085ef \
-    --hash=sha256:80bd3bd4e95381efc47073a720d4cbab485fc483966f1709f1fd559afac57ab8 \
-    --hash=sha256:8ef09b1615e1ad04dc0d0054ad50634514818a8eb3ee3dee99af3bffc0ef5007 \
-    --hash=sha256:8fa979313b2ffdfa049ed07252dc94038def3ecd49ea2a814db5401c07f1ecfa \
-    --hash=sha256:9dbe48d69aa38ecfc5a6cda6c5df5abef0c0ebdb2468e92437e2053f84abb8bc \
-    --hash=sha256:c01e869b15aec88e2cdb73d27f15bdbe03bce8e2fb43afbe77c45d399e73a5a3
-scipy==1.16.0 \
-    --hash=sha256:6c4abb4c11fc0b857474241b812ce69ffa6464b4bd8f4ecb786cf240367a36a7 \
-    --hash=sha256:90452f6a9f3fe5a2cf3748e7be14f9cc7d9b124dce19667b54f5b429d680d539 \
-    --hash=sha256:a16ba90847249bedce8aa404a83fb8334b825ec4a8e742ce6012a7a5e639f95c \
-    --hash=sha256:a2f0bf2f58031c8701a8b601df41701d2a7be17c7ffac0a4816aeba89c4cdac8 \
-    --hash=sha256:b2243561b45257f7391d0f49972fca90d46b79b8dbcb9b2cb0f9df928d370ad4 \
-    --hash=sha256:b370f8f6ac6ef99815b0d5c9f02e7ade77b33007d74802efc8316c8db98fd11e \
-    --hash=sha256:b5ef54021e832869c8cfb03bc3bf20366cbcd426e02a58e8a58d7584dfbb8f62 \
-    --hash=sha256:d30c0fe579bb901c61ab4bb7f3eeb7281f0d4c4a7b52dbf563c89da4fd2949be \
-    --hash=sha256:deec06d831b8f6b5fb0b652433be6a09db29e996368ce5911faf673e78d20085 \
-    --hash=sha256:e6d7dfc148135e9712d87c5f7e4f2ddc1304d1582cb3a7d698bbadedb61c7afd
-sentence-transformers==5.0.0 \
-    --hash=sha256:346240f9cc6b01af387393f03e103998190dfb0826a399d0c38a81a05c7a5d76 \
-    --hash=sha256:e5a411845910275fd166bacb01d28b7f79537d3550628ae42309dbdd3d5670d1
+ruff==0.12.8 \
+    --hash=sha256:0ac9c570634b98c71c88cb17badd90f13fc076a472ba6ef1d113d8ed3df109fb \
+    --hash=sha256:2fae54e752a3150f7ee0e09bce2e133caf10ce9d971510a9b925392dc98d2fec \
+    --hash=sha256:45c32487e14f60b88aad6be9fd5da5093dbefb0e3e1224131cb1d441d7cb7d46 \
+    --hash=sha256:49ebcaccc2bdad86fd51b7864e3d808aad404aab8df33d469b6e65584656263a \
+    --hash=sha256:4cb3a45525176e1009b2b64126acf5f9444ea59066262791febf55e40493a033 \
+    --hash=sha256:560e0cd641e45591a3e42cb50ef61ce07162b9c233786663fdce2d8557d99818 \
+    --hash=sha256:63cb5a5e933fc913e5823a0dfdc3c99add73f52d139d6cd5cc8639d0e0465513 \
+    --hash=sha256:71c83121512e7743fba5a8848c261dcc454cafb3ef2934a43f1b7a4eb5a447ea \
+    --hash=sha256:7209531f1a1fcfbe8e46bcd7ab30e2f43604d8ba1c49029bb420b103d0b5f76e \
+    --hash=sha256:9a9bbe28f9f551accf84a24c366c1aa8774d6748438b47174f8e8565ab9dedbc \
+    --hash=sha256:a2cab5f60d5b65b50fba39a8950c8746df1627d54ba1197f970763917184b161 \
+    --hash=sha256:ae3e7504666ad4c62f9ac8eedb52a93f9ebdeb34742b8b71cd3cccd24912719f \
+    --hash=sha256:c0acbcf01206df963d9331b5838fb31f3b44fa979ee7fa368b9b9057d89f4a53 \
+    --hash=sha256:c90e1a334683ce41b0e7a04f41790c429bf5073b62c1ae701c9dc5b3d14f0749 \
+    --hash=sha256:cb82efb5d35d07497813a1c5647867390a7d83304562607f3579602fa3d7d46f \
+    --hash=sha256:daf3475060a617fd5bc80638aeaf2f5937f10af3ec44464e280a9d2218e720d3 \
+    --hash=sha256:dbea798fc0065ad0b84a2947b0aff4233f0cb30f226f00a2c5850ca4393de609 \
+    --hash=sha256:de4429ef2ba091ecddedd300f4c3f24bca875d3d8b23340728c3cb0da81072c3
+safetensors==0.6.2 \
+    --hash=sha256:1d2d2b3ce1e2509c68932ca03ab8f20570920cd9754b05063d4368ee52833ecd \
+    --hash=sha256:43ff2aa0e6fa2dc3ea5524ac7ad93a9839256b8703761e76e2d0b2a3fa4f15d9 \
+    --hash=sha256:8045db2c872db8f4cbe3faa0495932d89c38c899c603f21e9b6486951a5ecb8f \
+    --hash=sha256:81e67e8bab9878bb568cffbc5f5e655adb38d2418351dc0859ccac158f753e19 \
+    --hash=sha256:89a89b505f335640f9120fac65ddeb83e40f1fd081cb8ed88b505bdccec8d0a1 \
+    --hash=sha256:93de35a18f46b0f5a6a1f9e26d91b442094f2df02e9fd7acf224cfec4238821a \
+    --hash=sha256:9c85ede8ec58f120bad982ec47746981e210492a6db876882aa021446af8ffba \
+    --hash=sha256:b0e4d029ab0a0e0e4fdf142b194514695b1d7d3735503ba700cf36d0fc7136ce \
+    --hash=sha256:c7b214870df923cbc1593c3faee16bec59ea462758699bd3fee399d00aac072c \
+    --hash=sha256:cab75ca7c064d3911411461151cb69380c9225798a20e712b102edda2542ddb1 \
+    --hash=sha256:d6675cf4b39c98dbd7d940598028f3742e0375a6b4d4277e76beb0c35f4b843b \
+    --hash=sha256:d83c20c12c2d2f465997c51b7ecb00e407e5f94d7dec3ea0cc11d86f60d3fde5 \
+    --hash=sha256:d944cea65fad0ead848b6ec2c37cc0b197194bec228f8020054742190e9312ac \
+    --hash=sha256:fa48268185c52bfe8771e46325a1e21d317207bcabcb72e65c6e28e9ffeb29c7 \
+    --hash=sha256:fc4d0d0b937e04bdf2ae6f70cd3ad51328635fe0e6214aa1fc811f3b576b3bda
+scikit-learn==1.7.1 \
+    --hash=sha256:24b3f1e976a4665aa74ee0fcaac2b8fccc6ae77c8e07ab25da3ba6d3292b9802 \
+    --hash=sha256:30d1f413cfc0aa5a99132a554f1d80517563c34a9d3e7c118fde2d273c6fe0f7 \
+    --hash=sha256:40daccd1b5623f39e8943ab39735cadf0bdce80e67cdca2adcb5426e987320a8 \
+    --hash=sha256:90c8494ea23e24c0fb371afc474618c1019dc152ce4a10e4607e62196113851b \
+    --hash=sha256:bb870c0daf3bf3be145ec51df8ac84720d9972170786601039f024bf6d61a518 \
+    --hash=sha256:c711d652829a1805a95d7fe96654604a8f16eab5a9e9ad87b3e60173415cb650
+scipy==1.16.1 \
+    --hash=sha256:0a55ffe0ba0f59666e90951971a884d1ff6f4ec3275a48f472cfb64175570f77 \
+    --hash=sha256:18aca1646a29ee9a0625a1be5637fa798d4d81fdf426481f06d69af828f16958 \
+    --hash=sha256:226652fca853008119c03a8ce71ffe1b3f6d2844cc1686e8f9806edafae68596 \
+    --hash=sha256:44c76f9e8b6e8e488a586190ab38016e4ed2f8a038af7cd3defa903c0a2238b3 \
+    --hash=sha256:6e5c2f74e5df33479b5cd4e97a9104c511518fbd979aa9b8f6aec18b2e9ecae7 \
+    --hash=sha256:adccd93a2fa937a27aae826d33e3bfa5edf9aa672376a4852d23a7cd67a2e5b7 \
+    --hash=sha256:c033fa32bab91dc98ca59d0cf23bb876454e2bb02cbe592d5023138778f70030 \
+    --hash=sha256:cb18899127278058bcc09e7b9966d41a5a43740b5bb8dcba401bd983f82e885b \
+    --hash=sha256:d85495cef541729a70cdddbbf3e6b903421bc1af3e8e3a9a72a06751f33b7c39 \
+    --hash=sha256:f8a5d6cd147acecc2603fbd382fed6c46f474cccfcf69ea32582e033fb54dcfe
+sentence-transformers==5.1.0 \
+    --hash=sha256:70c7630697cc1c64ffca328d6e8688430ebd134b3c2df03dc07cb3a016b04739 \
+    --hash=sha256:fc803929f6a3ce82e2b2c06e0efed7a36de535c633d5ce55efac0b710ea5643e
setuptools==80.9.0 \
    --hash=sha256:062d34222ad13e0cc312a4c02d73f059e86a4acbfbdea8f8f76b28c99f306922 \
    --hash=sha256:f36b47402ecde768dbfafc46e8e4207b4360c654f1f3bb84475f0a28628fb19c
@@ -584,17 +573,17 @@ sniffio==1.3.1 \
soupsieve==2.7 \
    --hash=sha256:6e60cc5c1ffaf1cebcc12e8188320b72071e922c2e897f737cadce79ad5d30c4 \
    --hash=sha256:ad282f9b6926286d2ead4750552c8a6142bc4c783fd66b0293547c8fe6ae126a
-sqlalchemy[asyncio]==2.0.41 \
-    --hash=sha256:03968a349db483936c249f4d9cd14ff2c296adfa1290b660ba6516f973139582 \
-    --hash=sha256:293cd444d82b18da48c9f71cd7005844dbbd06ca19be1ccf6779154439eec0b8 \
-    --hash=sha256:32f9dc8c44acdee06c8fc6440db9eae8b4af8b01e4b1aee7bdd7241c22edff4f \
-    --hash=sha256:3d3549fc3e40667ec7199033a4e40a2f669898a00a7b18a931d3efb4c7900504 \
-    --hash=sha256:57df5dc6fdb5ed1a88a1ed2195fd31927e705cad62dedd86b46972752a80f576 \
-    --hash=sha256:6375cd674fe82d7aa9816d1cb96ec592bac1726c11e0cafbf40eeee9a4516b5f \
-    --hash=sha256:90c11ceb9a1f482c752a71f203a81858625d8df5746d787a4786bca4ffdf71c6 \
-    --hash=sha256:911cc493ebd60de5f285bcae0491a60b4f2a9f0f5c270edd1c4dbaef7a38fc04 \
-    --hash=sha256:9f8c9fdd15a55d9465e590a402f42082705d66b05afc3ffd2d2eb3c6ba919560 \
-    --hash=sha256:edba70118c4be3c2b1f90754d308d0b79c6fe2c0fdc52d8ddf603916f83f4db9
+sqlalchemy[asyncio]==2.0.42 \
+    --hash=sha256:160bedd8a5c28765bd5be4dec2d881e109e33b34922e50a3b881a7681773ac5f \
+    --hash=sha256:1f092cf83ebcafba23a247f5e03f99f5436e3ef026d01c8213b5eca48ad6efa9 \
+    --hash=sha256:260ca1d2e5910f1f1ad3fe0113f8fab28657cee2542cb48c2f342ed90046e8ec \
+    --hash=sha256:2eb539fd83185a85e5fcd6b19214e1c734ab0351d81505b0f987705ba0a1e231 \
+    --hash=sha256:9193fa484bf00dcc1804aecbb4f528f1123c04bad6a08d7710c909750fa76aeb \
+    --hash=sha256:ad59dbe4d1252448c19d171dfba14c74e7950b46dc49d015722a4a06bfdab2b0 \
+    --hash=sha256:c34100c0b7ea31fbc113c124bcf93a53094f8951c7bf39c45f39d327bad6d1e7 \
+    --hash=sha256:defcdff7e661f0043daa381832af65d616e060ddb54d3fe4476f51df7eaa1835 \
+    --hash=sha256:f9187498c2149919753a7fd51766ea9c8eecdec7da47c1b955fa8090bc642eaa \
+    --hash=sha256:fc6afee7e66fdba4f5a68610b487c1f754fccdc53894a9567785932dbb6a265e
striprtf==0.0.26 \
    --hash=sha256:8c8f9d32083cdc2e8bfb149455aa1cc5a4e0a035893bedc75db8b73becb3a1bb \
    --hash=sha256:fdb2bba7ac440072d1c41eab50d8d74ae88f60a8b6575c6e2c7805dc462093aa
@@ -607,30 +596,30 @@ tenacity==9.1.2 \
threadpoolctl==3.6.0 \
    --hash=sha256:43a0b8fd5a2928500110039e43a5eed8480b918967083ea48dc3ab9f13c4a7fb \
    --hash=sha256:8ab8b4aa3491d812b623328249fab5302a68d2d71745c8a4c719a2fcaba9f44e
-tiktoken==0.9.0 \
-    --hash=sha256:03935988a91d6d3216e2ec7c645afbb3d870b37bcb67ada1943ec48678e7ee33 \
-    --hash=sha256:11a20e67fdf58b0e2dea7b8654a288e481bb4fc0289d3ad21291f8d0849915fb \
-    --hash=sha256:45556bc41241e5294063508caf901bf92ba52d8ef9222023f83d2483a3055348 \
-    --hash=sha256:8b3d80aad8d2c6b9238fc1a5524542087c52b860b10cbf952429ffb714bc1136 \
-    --hash=sha256:b2a21133be05dc116b1d0372af051cd2c6aa1d2188250c9b553f9fa49301b336 \
-    --hash=sha256:d02a5ca6a938e0490e1ff957bc48c8b078c88cb83977be1625b1fd8aac792c5d \
-    --hash=sha256:f32cc56168eac4851109e9b5d327637f15fd662aa30dd79f964b7c39fbadd26e
-tokenizers==0.21.2 \
-    --hash=sha256:0e73770507e65a0e0e2a1affd6b03c36e3bc4377bd10c9ccf51a82c77c0fe365 \
-    --hash=sha256:106746e8aa9014a12109e58d540ad5465b4c183768ea96c03cbc24c44d329958 \
-    --hash=sha256:126df3205d6f3a93fea80c7a8a266a78c1bd8dd2fe043386bafdd7736a23e45f \
-    --hash=sha256:2c41862df3d873665ec78b6be36fcc30a26e3d4902e9dd8608ed61d49a48bc19 \
-    --hash=sha256:342b5dfb75009f2255ab8dec0041287260fed5ce00c323eb6bab639066fef8ec \
-    --hash=sha256:4a32cd81be21168bd0d6a0f0962d60177c447a1aa1b1e48fa6ec9fc728ee0b12 \
-    --hash=sha256:514cd43045c5d546f01142ff9c79a96ea69e4b5cda09e3027708cb2e6d5762ab \
-    --hash=sha256:58747bb898acdb1007f37a7bbe614346e98dc28708ffb66a3fd50ce169ac6c98 \
-    --hash=sha256:5e9944e61239b083a41cf8fc42802f855e1dca0f499196df37a8ce219abac6eb \
-    --hash=sha256:8bd8999538c405133c2ab999b83b17c08b7fc1b48c1ada2469964605a709ef91 \
-    --hash=sha256:b1b9405822527ec1e0f7d8d2fdb287a5730c3a6518189c968254a8441b21faae \
-    --hash=sha256:cabda5a6d15d620b6dfe711e1af52205266d05b379ea85a8a301b3593c60e962 \
-    --hash=sha256:ed21dc7e624e4220e21758b2e62893be7101453525e3d23264081c9ef9a6d00d \
-    --hash=sha256:fdc7cffde3e2113ba0e6cc7318c40e3438a4d74bbc62bf04bcc63bdfb082ac77 \
-    --hash=sha256:fed9a4d51c395103ad24f8e7eb976811c57fbec2af9f133df471afcd922e5020
+tiktoken==0.11.0 \
+    --hash=sha256:2130127471e293d385179c1f3f9cd445070c0772be73cdafb7cec9a3684c0458 \
+    --hash=sha256:21e43022bf2c33f733ea9b54f6a3f6b4354b909f5a73388fb1b9347ca54a069c \
+    --hash=sha256:25a512ff25dc6c85b58f5dd4f3d8c674dc05f96b02d66cdacf628d26a4e4866b \
+    --hash=sha256:3c518641aee1c52247c2b97e74d8d07d780092af79d5911a6ab5e79359d9b06a \
+    --hash=sha256:4ae374c46afadad0f501046db3da1b36cd4dfbfa52af23c998773682446097cf \
+    --hash=sha256:adb4e308eb64380dc70fa30493e21c93475eaa11669dea313b6bbf8210bfd013 \
+    --hash=sha256:ece6b76bfeeb61a125c44bbefdfccc279b5288e6007fbedc0d32bfec602df2f2
+tokenizers==0.21.4 \
+    --hash=sha256:1340ff877ceedfa937544b7d79f5b7becf33a4cfb58f89b3b49927004ef66f78 \
+    --hash=sha256:2107ad649e2cda4488d41dfd031469e9da3fcbfd6183e74e4958fa729ffbf9c6 \
+    --hash=sha256:2ccc10a7c3bcefe0f242867dc914fc1226ee44321eb618cfe3019b5df3400133 \
+    --hash=sha256:39b376f5a1aee67b4d29032ee85511bbd1b99007ec735f7f35c8a2eb104eade5 \
+    --hash=sha256:3c1f4317576e465ac9ef0d165b247825a2a4078bcd01cba6b54b867bdf9fdd8b \
+    --hash=sha256:3c73012da95afafdf235ba80047699df4384fdc481527448a078ffd00e45a7d9 \
+    --hash=sha256:475d807a5c3eb72c59ad9b5fcdb254f6e17f53dfcbb9903233b0dfa9c943b597 \
+    --hash=sha256:51b7eabb104f46c1c50b486520555715457ae833d5aee9ff6ae853d1130506ff \
+    --hash=sha256:5e2f601a8e0cd5be5cc7506b20a79112370b9b3e9cb5f13f68ab11acd6ca7d60 \
+    --hash=sha256:6c42a930bc5f4c47f4ea775c91de47d27910881902b0f20e4990ebe045a415d0 \
+    --hash=sha256:714b05b2e1af1288bd1bc56ce496c4cebb64a20d158ee802887757791191e6e2 \
+    --hash=sha256:c212aa4e45ec0bb5274b16b6f31dd3f1c41944025c2358faaa5782c754e84c24 \
+    --hash=sha256:cc88bb34e23a54cc42713d6d98af5f1bf79c07653d24fe984d2d695ba2c922a2 \
+    --hash=sha256:f23186c40395fc390d27f519679a58023f368a0aad234af145e0f39ad1212732 \
+    --hash=sha256:fa23f85fbc9a02ec5c6978da172cdcbac23498c3ca9f3645c5c68740ac007880
torch==2.6.0 \
    --hash=sha256:46763dcb051180ce1ed23d1891d9b1598e07d051ce4c9d14307029809c4d64f7 \
    --hash=sha256:7979834102cd5b7a43cc64e87f2f3b14bd0e1458f06e9f88ffa386d07c7446e1 \
@@ -639,14 +628,14 @@ torch==2.6.0 \
tqdm==4.67.1 \
    --hash=sha256:26445eca388f82e72884e0d580d5464cd801a3ea01e63e5601bdff9ba6a48de2 \
    --hash=sha256:f8aef9c52c08c13a65f30ea34f4e5aac3fd1a34959879d7e59e63027286627f2
-transformers==4.53.2 \
-    --hash=sha256:6c3ed95edfb1cba71c4245758f1b4878c93bf8cde77d076307dacb2cbbd72be2 \
-    --hash=sha256:db8f4819bb34f000029c73c3c557e7d06fc1b8e612ec142eecdae3947a9c78bf
+transformers==4.55.0 \
+    --hash=sha256:15aa138a05d07a15b30d191ea2c45e23061ebf9fcc928a1318e03fe2234f3ae1 \
+    --hash=sha256:29d9b8800e32a4a831bb16efb5f762f6a9742fef9fce5d693ed018d19b106490
triton==3.2.0; platform_system == "Linux" and platform_machine == "x86_64" \
    --hash=sha256:8009a1fb093ee8546495e96731336a33fb8856a38e45bb4ab6affd6dbc3ba220
-types-requests==2.32.4.20250611 \
-    --hash=sha256:741c8777ed6425830bf51e54d6abe245f79b4dcb9019f1622b773463946bf826 \
-    --hash=sha256:ad2fe5d3b0cb3c2c902c8815a70e7fb2302c4b8c1f77bdcd738192cdb3878072
+types-requests==2.32.4.20250809 \
+    --hash=sha256:d8060de1c8ee599311f56ff58010fb4902f462a1470802cf9f6ed27bc46c4df3 \
+    --hash=sha256:f73d1832fb519ece02c85b1f09d5f0dd3108938e7d47e7f94bbfa18a6782b163
typing-extensions==4.14.1 \
    --hash=sha256:38b39f4aeeab64884ce9f74c94263ef78f3c22467c8724005483154c26648d36 \
    --hash=sha256:d1e1e3b58374dc93031d6eda2420a48ea44a36c2b4766a4fdeb3710755731d76