
Commit 4257408
Merge pull request #51862 from openshift/revert-50781-RHDEVDOCS-4102-main
Revert "RHDEVDOCS-4102 - Logging corrections from Support Engineering Feedback"
2 parents: 6ec909e + c8f832f

30 files changed: 51 additions, 275 deletions

logging/cluster-logging-external.adoc

Lines changed: 3 additions & 0 deletions

@@ -181,6 +181,7 @@ include::modules/cluster-logging-collector-log-forward-loki.adoc[leveloffset=+1]

 include::modules/cluster-logging-troubleshooting-loki-entry-out-of-order-errors.adoc[leveloffset=+2]

+
 [role="_additional-resources"]
 .Additional resources

@@ -193,6 +194,8 @@ include::modules/cluster-logging-collector-log-forward-project.adoc[leveloffset=

 include::modules/cluster-logging-collector-log-forward-logs-from-application-pods.adoc[leveloffset=+1]

+include::modules/cluster-logging-collector-collecting-ovn-logs.adoc[leveloffset=+1]
+
 [role="_additional-resources"]
 .Additional resources

logging/cluster-logging-release-notes.adoc

Lines changed: 2 additions & 2 deletions

@@ -153,7 +153,7 @@ include::modules/cluster-logging-loki-tech-preview.adoc[leveloffset=+2]
 * link:https://access.redhat.com/security/cve/CVE-2022-21698[CVE-2022-21698]
 ** link:https://bugzilla.redhat.com/show_bug.cgi?id=2045880[BZ-2045880]

-include::modules/cluster-logging-rn-5.3.12.adoc[leveloffset=+1]
+//include::modules/cluster-logging-rn-5.3.12.adoc[leveloffset=+1]

 include::modules/cluster-logging-rn-5.3.11.adoc[leveloffset=+1]

@@ -926,7 +926,7 @@ This release includes link:https://access.redhat.com/errata/RHBA-2021:3393[RHBA-

 * This enhancement enables you to use a username and password to authenticate a log forwarding connection to an external Elasticsearch instance. For example, if you cannot use mutual TLS (mTLS) because a third-party operates the Elasticsearch instance, you can use HTTP or HTTPS and set a secret that contains the username and password. For more information, see xref:../logging/cluster-logging-external.adoc#cluster-logging-collector-log-forward-es_cluster-logging-external[Forwarding logs to an external Elasticsearch instance]. (link:https://issues.redhat.com/browse/LOG-1022[LOG-1022])

-* With this update, you can collect OVN network policy audit logs for forwarding to a logging server. (link:https://issues.redhat.com/browse/LOG-1526[LOG-1526])
+* With this update, you can collect OVN network policy audit logs for forwarding to a logging server. For more information, see xref:../logging/cluster-logging-external.html#cluster-logging-collecting-ovn-audit-logs_cluster-logging-external[Collecting OVN network policy audit logs]. (link:https://issues.redhat.com/browse/LOG-1526[LOG-1526])

 * By default, the data model introduced in {product-title} 4.5 gave logs from different namespaces a single index in common. This change made it harder to see which namespaces produced the most logs.
+

logging/cluster-logging-upgrading.adoc

Lines changed: 3 additions & 1 deletion

@@ -17,4 +17,6 @@ To upgrade from cluster logging in {product-title} version 4.6 and earlier to Op

 To upgrade from a previous version of OpenShift Logging to the current version, you update OpenShift Elasticsearch Operator and Red Hat OpenShift Logging Operator to their current versions.

-include::modules/cluster-logging-updating-logging-to-current.adoc[leveloffset=+1]
+include::modules/cluster-logging-updating-logging-to-5-0.adoc[leveloffset=+1]
+
+include::modules/cluster-logging-updating-logging-to-5-1.adoc[leveloffset=+1]

logging/cluster-logging.adoc

Lines changed: 2 additions & 0 deletions

@@ -36,6 +36,8 @@ For information, see xref:../logging/cluster-logging-deploying.adoc#cluster-logg

 include::modules/cluster-logging-json-logging-about.adoc[leveloffset=+2]

+For information, see xref:../logging/cluster-logging.adoc#cluster-logging-json-logging-about_cluster-logging[About JSON Logging].
+
 include::modules/cluster-logging-collecting-storing-kubernetes-events.adoc[leveloffset=+2]

 For information, see xref:../logging/cluster-logging-eventrouter.adoc#cluster-logging-eventrouter[About collecting and storing Kubernetes events].

modules/cluster-logging-clo-status-comp.adoc

Lines changed: 1 addition & 1 deletion

@@ -10,7 +10,7 @@ You can view the status for a number of {logging} components.

 .Prerequisites

-* The Red Hat OpenShift Logging and Elasticsearch Operators must be installed.
+* The {logging-title} and Elasticsearch must be installed.

 .Procedure

modules/cluster-logging-clo-status.adoc

Lines changed: 1 addition & 1 deletion

@@ -10,7 +10,7 @@ You can view the status of your Red Hat OpenShift Logging Operator.

 .Prerequisites

-* The Red Hat OpenShift Logging and Elasticsearch Operators must be installed.
+* The {logging-title} and Elasticsearch must be installed.

 .Procedure

modules/cluster-logging-collector-log-forwarding-supported-plugins-5-1.adoc

Lines changed: 0 additions & 2 deletions

@@ -49,5 +49,3 @@ kafka 2.7.0
 ====
 Previously, the syslog output supported only RFC-3164. The current syslog output adds support for RFC-5424.
 ====
-
-//ENG-Feedback: How can we reformat this to accurately reflect 5.4?

modules/cluster-logging-collector-tolerations.adoc

Lines changed: 1 addition & 1 deletion

@@ -25,7 +25,7 @@ tolerations:

 .Prerequisites

-* The Red Hat OpenShift Logging and Elasticsearch Operators must be installed.
+* The {logging-title} and Elasticsearch must be installed.

 .Procedure

modules/cluster-logging-collector-tuning.adoc

Lines changed: 3 additions & 7 deletions

@@ -14,7 +14,7 @@ The {logging-title} includes multiple Fluentd parameters that you can use for tu

 Fluentd collects log data in a single blob called a _chunk_. When Fluentd creates a chunk, the chunk is considered to be in the _stage_, where the chunk gets filled with data. When the chunk is full, Fluentd moves the chunk to the _queue_, where chunks are held before being flushed, or written out to their destination. Fluentd can fail to flush a chunk for a number of reasons, such as network issues or capacity issues at the destination. If a chunk cannot be flushed, Fluentd retries flushing as configured.

-By default in {product-title}, Fluentd uses the _exponential backoff_ method to retry flushing, where Fluentd doubles the time it waits between attempts to retry flushing again, which helps reduce connection requests to the destination. You can disable exponential backoff and use the _periodic_ retry method instead, which retries flushing the chunks at a specified interval.
+By default in {product-title}, Fluentd uses the _exponential backoff_ method to retry flushing, where Fluentd doubles the time it waits between attempts to retry flushing again, which helps reduce connection requests to the destination. You can disable exponential backoff and use the _periodic_ retry method instead, which retries flushing the chunks at a specified interval. By default, Fluentd retries chunk flushing indefinitely. In {product-title}, you cannot change the indefinite retry behavior.

 These parameters can help you determine the trade-offs between latency and throughput.

@@ -37,7 +37,7 @@ These parameters are:
 [options="header"]
 |===

-|Parameter |Description |Default
+|Parmeter |Description |Default

 |`chunkLimitSize`
 |The maximum size of each chunk. Fluentd stops writing data to a chunk when it reaches this size. Then, Fluentd sends the chunk to the queue and opens a new chunk.

@@ -82,10 +82,6 @@ a|The retry method when flushing fails:
 * `periodic`: Retries flushes periodically, based on the `retryWait` parameter.
 |`exponential_backoff`

-|`retryTimeOut`
-|The maximum time interval to attempt retries before the record is discarded.
-|`60m`
-
 |`retryWait`
 |The time in seconds before the next chunk flush.
 |`1s`

@@ -142,7 +138,7 @@ spec:
 +
 [source,terminal]
 ----
-$ oc get pods -l component=collector -n openshift-logging
+$ oc get pods -n openshift-logging
 ----

 . Check that the new values are in the `fluentd` config map:
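The exponential backoff versus periodic retry behavior that this module's changed paragraph describes can be sketched as a small model. This is an illustrative sketch only, not Fluentd's implementation, and the function name is invented:

```python
def flush_retry_delays(retry_type: str, retry_wait: float, attempts: int) -> list[float]:
    """Model the wait, in seconds, before each of `attempts` flush retries.

    `retry_type` mirrors the `retryType` parameter from the table in this
    module: with exponential backoff, Fluentd doubles the wait after every
    failed flush; with periodic retry, it waits a fixed `retryWait` interval.
    """
    if retry_type == "exponential_backoff":
        # Doubling the wait reduces connection pressure on a struggling destination.
        return [retry_wait * (2 ** i) for i in range(attempts)]
    if retry_type == "periodic":
        # Fixed interval between retries, based on retryWait.
        return [retry_wait] * attempts
    raise ValueError(f"unknown retryType: {retry_type}")
```

With the default `retryWait` of `1s`, the first four backoff waits are 1, 2, 4, and 8 seconds, while the periodic method waits 1 second before every retry.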

modules/cluster-logging-deploy-cli.adoc

Lines changed: 11 additions & 10 deletions

@@ -10,7 +10,8 @@ You can use the {product-title} CLI to install the OpenShift Elasticsearch and R

 .Prerequisites

-* Ensure that you have the necessary persistent storage for Elasticsearch. Note that each Elasticsearch node requires its own storage volume.
+* Ensure that you have the necessary persistent storage for Elasticsearch. Note that each Elasticsearch node
+requires its own storage volume.
 +
 [NOTE]
 ====

@@ -139,7 +140,7 @@ spec:
   name: "elasticsearch-operator"
 ----
 <1> You must specify the `openshift-operators-redhat` namespace.
-<2> Specify `stable`, or `stable-5.<x>` as the channel. See the following note.
+<2> Specify `5.0`, `stable`, or `stable-5.<x>` as the channel. See the following note.
 <3> `Automatic` allows the Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available. `Manual` requires a user with appropriate credentials to approve the Operator update.
 <4> Specify `redhat-operators`. If your {product-title} cluster is installed on a restricted network, also known as a disconnected cluster,
 specify the name of the CatalogSource object created when you configured the Operator Lifecycle Manager (OLM).

@@ -240,7 +241,7 @@ spec:
   sourceNamespace: openshift-marketplace
 ----
 <1> You must specify the `openshift-logging` namespace.
-<2> Specify `stable`, or `stable-5.<x>` as the channel.
+<2> Specify `5.0`, `stable`, or `stable-5.<x>` as the channel.
 <3> Specify `redhat-operators`. If your {product-title} cluster is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object you created when you configured the Operator Lifecycle Manager (OLM).
 +
 [source,terminal]

@@ -385,7 +386,7 @@ This creates the {logging} components, the `Elasticsearch` custom resource and c

 . Verify the installation by listing the pods in the *openshift-logging* project.
 +
-You should see several pods for components of the Logging subsystem, similar to the following list:
+You should see several pods for OpenShift Logging, Elasticsearch, Fluentd, and Kibana similar to the following list:
 +
 [source,terminal]
 ----

@@ -400,11 +401,11 @@ cluster-logging-operator-66f77ffccb-ppzbg 1/1 Running 0 7m
 elasticsearch-cdm-ftuhduuw-1-ffc4b9566-q6bhp 2/2 Running 0 2m40s
 elasticsearch-cdm-ftuhduuw-2-7b4994dbfc-rd2gc 2/2 Running 0 2m36s
 elasticsearch-cdm-ftuhduuw-3-84b5ff7ff8-gqnm2 2/2 Running 0 2m4s
-collector-587vb 1/1 Running 0 2m26s
-collector-7mpb9 1/1 Running 0 2m30s
-collector-flm6j 1/1 Running 0 2m33s
-collector-gn4rn 1/1 Running 0 2m26s
-collector-nlgb6 1/1 Running 0 2m30s
-collector-snpkt 1/1 Running 0 2m28s
+fluentd-587vb 1/1 Running 0 2m26s
+fluentd-7mpb9 1/1 Running 0 2m30s
+fluentd-flm6j 1/1 Running 0 2m33s
+fluentd-gn4rn 1/1 Running 0 2m26s
+fluentd-nlgb6 1/1 Running 0 2m30s
+fluentd-snpkt 1/1 Running 0 2m28s
 kibana-d6d5668c5-rppqm 2/2 Running 0 2m39s
 ----
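For orientation, the `channel` callouts changed in this file belong to OLM `Subscription` manifests. A rough sketch of such a manifest follows; the `metadata.name` and package `name` values here are assumptions for illustration, not taken from this diff:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: cluster-logging           # assumed name, not from this diff
  namespace: openshift-logging    # the namespace required by callout <1>
spec:
  channel: "stable"               # callout <2>: `5.0`, `stable`, or `stable-5.<x>` after the revert
  name: cluster-logging           # assumed package name
  source: redhat-operators        # callout <3>
  sourceNamespace: openshift-marketplace
```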
