@@ -926,7 +926,7 @@ This release includes link:https://access.redhat.com/errata/RHBA-2021:3393[RHBA-
* This enhancement enables you to use a username and password to authenticate a log forwarding connection to an external Elasticsearch instance. For example, if you cannot use mutual TLS (mTLS) because a third-party operates the Elasticsearch instance, you can use HTTP or HTTPS and set a secret that contains the username and password. For more information, see xref:../logging/cluster-logging-external.adoc#cluster-logging-collector-log-forward-es_cluster-logging-external[Forwarding logs to an external Elasticsearch instance]. (link:https://issues.redhat.com/browse/LOG-1022[LOG-1022])
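To illustrate the enhancement above, the following is a minimal sketch of a `ClusterLogForwarder` resource that authenticates to an external Elasticsearch instance with a secret; the URL and secret name are placeholders, not values from this release note:

[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: elasticsearch-external
    type: elasticsearch
    url: https://elasticsearch.example.com:9200 # placeholder URL
    secret:
      name: es-credentials # assumed secret containing username and password keys
  pipelines:
  - name: application-logs
    inputRefs:
    - application
    outputRefs:
    - elasticsearch-external
----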
-* With this update, you can collect OVN network policy audit logs for forwarding to a logging server. For more information, see xref:../logging/cluster-logging-external.html#cluster-logging-collecting-ovn-audit-logs_cluster-logging-external[Collecting OVN network policy audit logs]. (link:https://issues.redhat.com/browse/LOG-1526[LOG-1526])
+* With this update, you can collect OVN network policy audit logs for forwarding to a logging server. (link:https://issues.redhat.com/browse/LOG-1526[LOG-1526])
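As a hedged sketch of the OVN audit log feature above, a `ClusterLogForwarder` pipeline can reference the `audit` input, which carries OVN network policy audit logs once auditing is enabled; the output name here is hypothetical:

[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  pipelines:
  - name: audit-to-external
    inputRefs:
    - audit # includes OVN network policy audit logs when auditing is enabled
    outputRefs:
    - default # hypothetical: forward to the default log store
----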
* By default, the data model introduced in {product-title} 4.5 gave logs from different namespaces a single index in common. This change made it harder to see which namespaces produced the most logs.
logging/cluster-logging-upgrading.adoc (+1 -3)
@@ -17,6 +17,4 @@ To upgrade from cluster logging in {product-title} version 4.6 and earlier to Op
To upgrade from a previous version of OpenShift Logging to the current version, you update OpenShift Elasticsearch Operator and Red Hat OpenShift Logging Operator to their current versions.
modules/cluster-logging-collector-tuning.adoc (+7 -3)
@@ -14,7 +14,7 @@ The {logging-title} includes multiple Fluentd parameters that you can use for tu
Fluentd collects log data in a single blob called a _chunk_. When Fluentd creates a chunk, the chunk is considered to be in the _stage_, where the chunk gets filled with data. When the chunk is full, Fluentd moves the chunk to the _queue_, where chunks are held before being flushed, or written out to their destination. Fluentd can fail to flush a chunk for a number of reasons, such as network issues or capacity issues at the destination. If a chunk cannot be flushed, Fluentd retries flushing as configured.
-By default in {product-title}, Fluentd uses the _exponential backoff_ method to retry flushing, where Fluentd doubles the time it waits between attempts to retry flushing again, which helps reduce connection requests to the destination. You can disable exponential backoff and use the _periodic_ retry method instead, which retries flushing the chunks at a specified interval. By default, Fluentd retries chunk flushing indefinitely. In {product-title}, you cannot change the indefinite retry behavior.
+By default in {product-title}, Fluentd uses the _exponential backoff_ method to retry flushing, where Fluentd doubles the time it waits between attempts to retry flushing again, which helps reduce connection requests to the destination. You can disable exponential backoff and use the _periodic_ retry method instead, which retries flushing the chunks at a specified interval.
These parameters can help you determine the trade-offs between latency and throughput.
@@ -37,7 +37,7 @@ These parameters are:
[options="header"]
|===
-|Parmeter |Description |Default
+|Parameter |Description |Default
|`chunkLimitSize`
|The maximum size of each chunk. Fluentd stops writing data to a chunk when it reaches this size. Then, Fluentd sends the chunk to the queue and opens a new chunk.
@@ -82,6 +82,10 @@ a|The retry method when flushing fails:
* `periodic`: Retries flushes periodically, based on the `retryWait` parameter.
|`exponential_backoff`
+|`retryTimeOut`
+|The maximum time interval to attempt retries before the record is discarded.
+|`60m`
+
|`retryWait`
|The time in seconds before the next chunk flush.
|`1s`
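The buffer and retry parameters described above are set in the `ClusterLogging` custom resource. The following is a minimal sketch using the parameter names from the table; the `spec.forwarder.fluentd.buffer` path and the example values are assumptions for illustration:

[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  forwarder:
    fluentd:
      buffer:
        chunkLimitSize: 8m  # maximum size of each chunk
        retryType: periodic # disable exponential backoff
        retryWait: 1s       # wait between periodic retries
        retryTimeOut: 60m   # discard the record after retrying for this long
----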
@@ -138,7 +142,7 @@ spec:
+
[source,terminal]
----
-$ oc get pods -n openshift-logging
+$ oc get pods -l component=collector -n openshift-logging
----
. Check that the new values are in the `fluentd` config map:
modules/cluster-logging-deploy-cli.adoc (+10 -11)
@@ -10,8 +10,7 @@ You can use the {product-title} CLI to install the OpenShift Elasticsearch and R
.Prerequisites
-* Ensure that you have the necessary persistent storage for Elasticsearch. Note that each Elasticsearch node
-requires its own storage volume.
+* Ensure that you have the necessary persistent storage for Elasticsearch. Note that each Elasticsearch node requires its own storage volume.
+
[NOTE]
====
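The per-node storage prerequisite above can be sketched in the `ClusterLogging` custom resource as follows; the storage class name and size are placeholders for values appropriate to your cluster:

[source,yaml]
----
spec:
  logStore:
    type: elasticsearch
    elasticsearch:
      nodeCount: 3 # each Elasticsearch node gets its own volume
      storage:
        storageClassName: gp2 # assumption: a storage class available in the cluster
        size: 200G
----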
@@ -140,7 +139,7 @@ spec:
name: "elasticsearch-operator"
----
<1> You must specify the `openshift-operators-redhat` namespace.
-<2> Specify `5.0`, `stable`, or `stable-5.<x>` as the channel. See the following note.
+<2> Specify `stable` or `stable-5.<x>` as the channel. See the following note.
<3> `Automatic` allows the Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available. `Manual` requires a user with appropriate credentials to approve the Operator update.
<4> Specify `redhat-operators`. If your {product-title} cluster is installed on a restricted network, also known as a disconnected cluster,
specify the name of the CatalogSource object created when you configured the Operator Lifecycle Manager (OLM).
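For context, the callouts above annotate a `Subscription` object similar to the following sketch; the field values are reconstructed from the callouts rather than copied from the truncated snippet, so treat them as an assumption:

[source,yaml]
----
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: elasticsearch-operator
  namespace: openshift-operators-redhat # <1>
spec:
  channel: stable # <2> or stable-5.<x>
  installPlanApproval: Automatic # <3>
  source: redhat-operators # <4>
  sourceNamespace: openshift-marketplace
  name: elasticsearch-operator
----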
@@ -241,7 +240,7 @@ spec:
sourceNamespace: openshift-marketplace
----
<1> You must specify the `openshift-logging` namespace.
-<2> Specify `5.0`, `stable`, or `stable-5.<x>` as the channel.
+<2> Specify `stable` or `stable-5.<x>` as the channel.
<3> Specify `redhat-operators`. If your {product-title} cluster is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object you created when you configured the Operator Lifecycle Manager (OLM).
+
[source,terminal]
@@ -386,7 +385,7 @@ This creates the {logging} components, the `Elasticsearch` custom resource and c
. Verify the installation by listing the pods in the *openshift-logging* project.
+
-You should see several pods for OpenShift Logging, Elasticsearch, Fluentd, and Kibana similar to the following list:
+You should see several pods for components of the Logging subsystem, similar to the following list: