
Commit f345e92

Merge pull request #52099 from libander/RHDEVDOCS-4102-main-cp

Manual CP of RHDEVDOCS-4102 to main

2 parents 25f53fc + 422a10b, commit f345e92

30 files changed: +275 -51 lines
logging/cluster-logging-external.adoc (0 additions, 3 deletions)

@@ -181,7 +181,6 @@ include::modules/cluster-logging-collector-log-forward-loki.adoc[leveloffset=+1]
 
 include::modules/cluster-logging-troubleshooting-loki-entry-out-of-order-errors.adoc[leveloffset=+2]
 
-
 [role="_additional-resources"]
 .Additional resources

@@ -194,8 +193,6 @@ include::modules/cluster-logging-collector-log-forward-project.adoc[leveloffset=
 
 include::modules/cluster-logging-collector-log-forward-logs-from-application-pods.adoc[leveloffset=+1]
 
-include::modules/cluster-logging-collector-collecting-ovn-logs.adoc[leveloffset=+1]
-
 [role="_additional-resources"]
 .Additional resources

logging/cluster-logging-release-notes.adoc (2 additions, 2 deletions)

@@ -157,7 +157,7 @@ include::modules/cluster-logging-loki-tech-preview.adoc[leveloffset=+2]
 * link:https://access.redhat.com/security/cve/CVE-2022-21698[CVE-2022-21698]
 ** link:https://bugzilla.redhat.com/show_bug.cgi?id=2045880[BZ-2045880]
 
-//include::modules/cluster-logging-rn-5.3.12.adoc[leveloffset=+1]
+include::modules/cluster-logging-rn-5.3.12.adoc[leveloffset=+1]
 
 include::modules/cluster-logging-rn-5.3.11.adoc[leveloffset=+1]

@@ -930,7 +930,7 @@ This release includes link:https://access.redhat.com/errata/RHBA-2021:3393[RHBA-
 * This enhancement enables you to use a username and password to authenticate a log forwarding connection to an external Elasticsearch instance. For example, if you cannot use mutual TLS (mTLS) because a third-party operates the Elasticsearch instance, you can use HTTP or HTTPS and set a secret that contains the username and password. For more information, see xref:../logging/cluster-logging-external.adoc#cluster-logging-collector-log-forward-es_cluster-logging-external[Forwarding logs to an external Elasticsearch instance]. (link:https://issues.redhat.com/browse/LOG-1022[LOG-1022])
 
-* With this update, you can collect OVN network policy audit logs for forwarding to a logging server. For more information, see xref:../logging/cluster-logging-external.html#cluster-logging-collecting-ovn-audit-logs_cluster-logging-external[Collecting OVN network policy audit logs]. (link:https://issues.redhat.com/browse/LOG-1526[LOG-1526])
+* With this update, you can collect OVN network policy audit logs for forwarding to a logging server. (link:https://issues.redhat.com/browse/LOG-1526[LOG-1526])
 
 * By default, the data model introduced in {product-title} 4.5 gave logs from different namespaces a single index in common. This change made it harder to see which namespaces produced the most logs.
 +

logging/cluster-logging-upgrading.adoc (1 addition, 3 deletions)

@@ -17,6 +17,4 @@ To upgrade from cluster logging in {product-title} version 4.6 and earlier to Op
 
 To upgrade from a previous version of OpenShift Logging to the current version, you update OpenShift Elasticsearch Operator and Red Hat OpenShift Logging Operator to their current versions.
 
-include::modules/cluster-logging-updating-logging-to-5-0.adoc[leveloffset=+1]
-
-include::modules/cluster-logging-updating-logging-to-5-1.adoc[leveloffset=+1]
+include::modules/cluster-logging-updating-logging-to-current.adoc[leveloffset=+1]

logging/cluster-logging.adoc (0 additions, 2 deletions)

@@ -36,8 +36,6 @@ For information, see xref:../logging/cluster-logging-deploying.adoc#cluster-logg
 
 include::modules/cluster-logging-json-logging-about.adoc[leveloffset=+2]
 
-For information, see xref:../logging/cluster-logging.adoc#cluster-logging-json-logging-about_cluster-logging[About JSON Logging].
-
 include::modules/cluster-logging-collecting-storing-kubernetes-events.adoc[leveloffset=+2]
 
 For information, see xref:../logging/cluster-logging-eventrouter.adoc#cluster-logging-eventrouter[About collecting and storing Kubernetes events].

modules/cluster-logging-clo-status-comp.adoc (1 addition, 1 deletion)

@@ -10,7 +10,7 @@ You can view the status for a number of {logging} components.
 
 .Prerequisites
 
-* The {logging-title} and Elasticsearch must be installed.
+* The Red Hat OpenShift Logging and Elasticsearch Operators must be installed.
 
 .Procedure

modules/cluster-logging-clo-status.adoc (1 addition, 1 deletion)

@@ -10,7 +10,7 @@ You can view the status of your Red Hat OpenShift Logging Operator.
 
 .Prerequisites
 
-* The {logging-title} and Elasticsearch must be installed.
+* The Red Hat OpenShift Logging and Elasticsearch Operators must be installed.
 
 .Procedure

modules/cluster-logging-collector-log-forwarding-supported-plugins-5-1.adoc (2 additions, 0 deletions)

@@ -49,3 +49,5 @@ kafka 2.7.0
 ====
 Previously, the syslog output supported only RFC-3164. The current syslog output adds support for RFC-5424.
 ====
+
+//ENG-Feedback: How can we reformat this to accurately reflect 5.4?

modules/cluster-logging-collector-tolerations.adoc (1 addition, 1 deletion)

@@ -25,7 +25,7 @@ tolerations:
 
 .Prerequisites
 
-* The {logging-title} and Elasticsearch must be installed.
+* The Red Hat OpenShift Logging and Elasticsearch Operators must be installed.
 
 .Procedure

modules/cluster-logging-collector-tuning.adoc (7 additions, 3 deletions)

@@ -14,7 +14,7 @@ The {logging-title} includes multiple Fluentd parameters that you can use for tu
 Fluentd collects log data in a single blob called a _chunk_. When Fluentd creates a chunk, the chunk is considered to be in the _stage_, where the chunk gets filled with data. When the chunk is full, Fluentd moves the chunk to the _queue_, where chunks are held before being flushed, or written out to their destination. Fluentd can fail to flush a chunk for a number of reasons, such as network issues or capacity issues at the destination. If a chunk cannot be flushed, Fluentd retries flushing as configured.
 
-By default in {product-title}, Fluentd uses the _exponential backoff_ method to retry flushing, where Fluentd doubles the time it waits between attempts to retry flushing again, which helps reduce connection requests to the destination. You can disable exponential backoff and use the _periodic_ retry method instead, which retries flushing the chunks at a specified interval. By default, Fluentd retries chunk flushing indefinitely. In {product-title}, you cannot change the indefinite retry behavior.
+By default in {product-title}, Fluentd uses the _exponential backoff_ method to retry flushing, where Fluentd doubles the time it waits between attempts to retry flushing again, which helps reduce connection requests to the destination. You can disable exponential backoff and use the _periodic_ retry method instead, which retries flushing the chunks at a specified interval.
 
 These parameters can help you determine the trade-offs between latency and throughput.
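The two retry schedules described in this hunk (exponential backoff versus periodic) can be illustrated with a short sketch. This is illustrative code only, not Fluentd's implementation; `retry_wait` stands in for the module's `retryWait` parameter:

```python
def retry_waits(retry_type: str, retry_wait: float = 1.0, attempts: int = 5) -> list[float]:
    """Compute successive wait times between flush retries.

    exponential_backoff: the wait doubles after each failed flush.
    periodic: the wait is a constant interval (retryWait).
    """
    if retry_type == "exponential_backoff":
        return [retry_wait * (2 ** i) for i in range(attempts)]
    if retry_type == "periodic":
        return [retry_wait] * attempts
    raise ValueError(f"unknown retry type: {retry_type}")

print(retry_waits("exponential_backoff"))  # 1, 2, 4, 8, 16 seconds
print(retry_waits("periodic"))             # a constant 1 second between retries
```

This shows why exponential backoff reduces connection pressure on a struggling destination, while periodic retries give more predictable delivery latency.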
@@ -37,7 +37,7 @@ These parameters are:
 [options="header"]
 |===
-|Parmeter |Description |Default
+|Parameter |Description |Default
 
 |`chunkLimitSize`
 |The maximum size of each chunk. Fluentd stops writing data to a chunk when it reaches this size. Then, Fluentd sends the chunk to the queue and opens a new chunk.
@@ -82,6 +82,10 @@ a|The retry method when flushing fails:
 * `periodic`: Retries flushes periodically, based on the `retryWait` parameter.
 |`exponential_backoff`
 
+|`retryTimeOut`
+|The maximum time interval to attempt retries before the record is discarded.
+|`60m`
+
 |`retryWait`
 |The time in seconds before the next chunk flush.
 |`1s`
@@ -138,7 +142,7 @@ spec:
 +
 [source,terminal]
 ----
-$ oc get pods -n openshift-logging
+$ oc get pods -l component=collector -n openshift-logging
 ----
 
 . Check that the new values are in the `fluentd` config map:
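The tuning parameters documented in this module are set in the `ClusterLogging` custom resource (note the `spec:` context line in the hunk above). As a hedged sketch, assuming the `spec.forwarder.fluentd.buffer` stanza of the Logging 5.x `ClusterLogging` API, with field names taken from the module's parameter table (verify both against your installed version):

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  forwarder:
    fluentd:
      buffer:
        chunkLimitSize: 8m    # Fluentd opens a new chunk when a chunk reaches this size
        flushInterval: 5s
        retryType: periodic   # use periodic retries instead of exponential_backoff
        retryWait: 1s         # time before the next chunk flush
        retryTimeOut: 60m     # maximum interval to attempt retries (row added by this commit)
```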

modules/cluster-logging-deploy-cli.adoc (10 additions, 11 deletions)

@@ -10,8 +10,7 @@ You can use the {product-title} CLI to install the OpenShift Elasticsearch and R
 
 .Prerequisites
 
-* Ensure that you have the necessary persistent storage for Elasticsearch. Note that each Elasticsearch node
-requires its own storage volume.
+* Ensure that you have the necessary persistent storage for Elasticsearch. Note that each Elasticsearch node requires its own storage volume.
 +
 [NOTE]
 ====

@@ -140,7 +139,7 @@ spec:
     name: "elasticsearch-operator"
 ----
 <1> You must specify the `openshift-operators-redhat` namespace.
-<2> Specify `5.0`, `stable`, or `stable-5.<x>` as the channel. See the following note.
+<2> Specify `stable`, or `stable-5.<x>` as the channel. See the following note.
 <3> `Automatic` allows the Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available. `Manual` requires a user with appropriate credentials to approve the Operator update.
 <4> Specify `redhat-operators`. If your {product-title} cluster is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object created when you configured the Operator Lifecycle Manager (OLM).

@@ -241,7 +240,7 @@ spec:
   sourceNamespace: openshift-marketplace
 ----
 <1> You must specify the `openshift-logging` namespace.
-<2> Specify `5.0`, `stable`, or `stable-5.<x>` as the channel.
+<2> Specify `stable`, or `stable-5.<x>` as the channel.
 <3> Specify `redhat-operators`. If your {product-title} cluster is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object you created when you configured the Operator Lifecycle Manager (OLM).
 +
 [source,terminal]

@@ -386,7 +385,7 @@ This creates the {logging} components, the `Elasticsearch` custom resource and c
 
 . Verify the installation by listing the pods in the *openshift-logging* project.
 +
-You should see several pods for OpenShift Logging, Elasticsearch, Fluentd, and Kibana similar to the following list:
+You should see several pods for components of the Logging subsystem, similar to the following list:
 +
 [source,terminal]
 ----

@@ -401,11 +400,11 @@ cluster-logging-operator-66f77ffccb-ppzbg 1/1 Running 0 7m
 elasticsearch-cdm-ftuhduuw-1-ffc4b9566-q6bhp 2/2 Running 0 2m40s
 elasticsearch-cdm-ftuhduuw-2-7b4994dbfc-rd2gc 2/2 Running 0 2m36s
 elasticsearch-cdm-ftuhduuw-3-84b5ff7ff8-gqnm2 2/2 Running 0 2m4s
-fluentd-587vb 1/1 Running 0 2m26s
-fluentd-7mpb9 1/1 Running 0 2m30s
-fluentd-flm6j 1/1 Running 0 2m33s
-fluentd-gn4rn 1/1 Running 0 2m26s
-fluentd-nlgb6 1/1 Running 0 2m30s
-fluentd-snpkt 1/1 Running 0 2m28s
+collector-587vb 1/1 Running 0 2m26s
+collector-7mpb9 1/1 Running 0 2m30s
+collector-flm6j 1/1 Running 0 2m33s
+collector-gn4rn 1/1 Running 0 2m26s
+collector-nlgb6 1/1 Running 0 2m30s
+collector-snpkt 1/1 Running 0 2m28s
 kibana-d6d5668c5-rppqm 2/2 Running 0 2m39s
 ----
