In Logging 5.8, Elasticsearch, Fluentd, and Kibana are deprecated and are planned to be removed in Logging 6.0, which is expected to be shipped alongside a future release of {product-title}. Red Hat will provide critical and above CVE bug fixes and support for these components during the current release lifecycle, but these components will no longer receive feature enhancements. The Vector-based collector provided by the Cluster Logging Operator and LokiStack provided by the {loki-op} are the preferred Operators for log collection and storage. We encourage all users to adopt the Vector and Loki log stack, as this will be the stack that will be enhanced going forward.

[id="logging-release-notes-5-8-0-enhancements"]
== Enhancements

=== Log Storage
* With this update, LokiStack administrators can have more fine-grained control over who can access which logs by granting access to logs on a namespace basis. (link:https://issues.redhat.com/browse/LOG-3841[LOG-3841])
* With this update, the {loki-op} introduces `PodDisruptionBudget` configuration on LokiStack deployments to ensure normal operations during {product-title} cluster restarts by keeping ingestion and the query path available. (link:https://issues.redhat.com/browse/LOG-3839[LOG-3839])
* With this update, the reliability of existing LokiStack installations is seamlessly improved by applying a set of default Affinity and Anti-Affinity policies.

modules/logging-rn-5.7.3.adoc

This release includes link:https://access.redhat.com/errata/RHSA-2023:3998[OpenShift Logging Bug Fix Release 5.7.3].

== Bug fixes
* Before this update, when viewing logs within the {product-title} web console, cached files caused the data to not refresh. With this update, the bootstrap files are not cached, resolving the issue. (link:https://issues.redhat.com/browse/LOG-4100[LOG-4100])

* Before this update, the {loki-op} reset errors in a way that made configuration problems difficult to troubleshoot. With this update, errors persist until the configuration error is resolved. (link:https://issues.redhat.com/browse/LOG-4156[LOG-4156])

* Before this update, the LokiStack ruler did not restart after changes were made to the `RulerConfig` custom resource (CR). With this update, the {loki-op} restarts the ruler pods after the `RulerConfig` CR is updated. (link:https://issues.redhat.com/browse/LOG-4161[LOG-4161])

* Before this update, the Vector collector terminated unexpectedly when input match label values contained a `/` character within the `ClusterLogForwarder`. This update resolves the issue by quoting the match label, enabling the collector to start and collect logs. (link:https://issues.redhat.com/browse/LOG-4176[LOG-4176])

* Before this update, the {loki-op} terminated unexpectedly when a `LokiStack` CR defined tenant limits, but not global limits. With this update, the {loki-op} can process `LokiStack` CRs without global limits, resolving the issue. (link:https://issues.redhat.com/browse/LOG-4198[LOG-4198])

* Before this update, Fluentd did not send logs to an Elasticsearch cluster when the private key provided was passphrase-protected. With this update, Fluentd properly handles passphrase-protected private keys when establishing a connection with Elasticsearch. (link:https://issues.redhat.com/browse/LOG-4258[LOG-4258])

modules/logging-rn-5.7.4.adoc

This release includes link:https://access.redhat.com/errata/RHSA-2023:4341[OpenShift Logging Bug Fix Release 5.7.4].

* Before this update, the Vector collector occasionally panicked with the following error message in its log:
`thread 'vector-worker' panicked at 'all branches are disabled and there is no else branch', src/kubernetes/reflector.rs:26:9`. With this update, the error has been resolved. (link:https://issues.redhat.com/browse/LOG-4275[LOG-4275])

* Before this update, an issue in the {loki-op} caused the `alert-manager` configuration for the application tenant to disappear if the Operator was configured with additional options for that tenant. With this update, the generated Loki configuration now contains both the custom and the auto-generated configuration. (link:https://issues.redhat.com/browse/LOG-4361[LOG-4361])

* Before this update, when multiple roles were used to authenticate using STS with AWS CloudWatch forwarding, a recent update caused the credentials to be non-unique. With this update, multiple combinations of STS roles and static credentials can once again be used to authenticate with AWS CloudWatch. (link:https://issues.redhat.com/browse/LOG-4368[LOG-4368])

modules/logging-upgrading-loki.adoc

To update the {loki-op} to a new major release version, you must modify the update channel.

. Select the *openshift-operators-redhat* project.
. Click the *{loki-op}*.
. Click *Subscription*. In the *Subscription details* section, click the *Update channel* link. This link text might be *stable* or *stable-5.y*, depending on your current update channel.
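+
If you prefer to change the channel from the CLI, the relevant fields of the Operator's `Subscription` resource are shown in the following sketch. The Subscription name `loki-operator` and the channel value are assumptions; verify them in your cluster, for example with `oc get subscriptions -n openshift-operators-redhat`, before editing.
+
[source,yaml]
----
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: loki-operator                   # assumed Subscription name; confirm it in your cluster
  namespace: openshift-operators-redhat
spec:
  channel: stable-5.y                   # set this to the update channel you want to switch to
----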

For more information about creating a `cluster-admin` group, see the "Additional resources" section.

.Procedure
. Navigate to *Operators*->*Installed Operators*, viewing *All projects* from the *Project* dropdown.
. Look for *{loki-op}*. In the details, under *Provided APIs*, select *LokiStack*.
. Click *Create LokiStack*.
. Ensure the following fields are specified in either *Form View* or *YAML view*:
+
[source,yaml]
----
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: loki
  namespace: netobserv <1>
spec:
  size: 1x.small
  storage:
    schemas:
    - version: v12
      effectiveDate: '2022-06-01'
    secret:
      name: loki-s3
      type: s3
  storageClassName: gp3 <2>
  tenants:
    mode: openshift-network
----
<1> The installation examples in this documentation use the same namespace, `netobserv`, across all components. You can optionally use a different namespace.
<2> Use a storage class name that is available on the cluster for `ReadWriteOnce` access mode. You can use `oc get storageclasses` to see what is available on your cluster.
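
The `LokiStack` resource in the previous example references an object storage secret named `loki-s3`. A minimal sketch of that secret follows; the key names assume the S3 object storage secret format used by the {loki-op}, and the angle-bracket values are placeholders that you must replace with your own bucket details.

[source,yaml]
----
apiVersion: v1
kind: Secret
metadata:
  name: loki-s3
  namespace: netobserv
stringData:
  bucketnames: <bucket_name>                   # S3 bucket that Loki stores chunks and indexes in
  endpoint: https://s3.<region>.amazonaws.com  # S3 endpoint URL
  region: <region>
  access_key_id: <access_key_id>
  access_key_secret: <access_key_secret>
----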

Using the `oc logs` command, you can view the logs for containers, build configs, and deployments in real time. Different users can have different levels of access to logs:
* Users who have access to a project are able to see the logs for that project by default.
* Users with admin roles can access all container logs.
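
For example, assuming a deployment and a build config both named `my-app` in a project you can access, a minimal sketch of viewing their logs is:

[source,terminal]
----
$ oc logs -f deployment/my-app   # follow the logs of the pods behind a deployment
$ oc logs -f bc/my-app           # follow the logs of the latest build for a build config
----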
To save your logs for further audit and analysis, you can enable the `cluster-logging` add-on feature to collect, manage, and view system, container, and audit logs. You can deploy, manage, and upgrade OpenShift Logging through the {es-op} and {clo}.
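
When the logging add-on is managed by the {clo}, the log store and collector are configured through a `ClusterLogging` custom resource. The following is a minimal sketch only, assuming a Vector collector and a LokiStack log store named `logging-loki`; the field names follow the Logging 5.x API, so verify them against your installed Operator version.

[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance               # the ClusterLogging CR is named "instance"
  namespace: openshift-logging
spec:
  managementState: Managed
  logStore:
    type: lokistack            # assumes a LokiStack-based log store
    lokistack:
      name: logging-loki       # example LokiStack resource name
  collection:
    type: vector               # Vector-based collector
----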

modules/troubleshooting-network-observability-loki-tenant-rate-limit.adoc

You can update the LokiStack CRD with the `perStreamRateLimit` and `perStreamRateLimitBurst` specifications.

.Procedure
. Navigate to *Operators*->*Installed Operators*, viewing *All projects* from the *Project* dropdown.
. Look for *{loki-op}*, and select the *LokiStack* tab.
. Create or edit an existing *LokiStack* instance using the *YAML view* to add the `perStreamRateLimit` and `perStreamRateLimitBurst` specifications:
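+
A minimal sketch of the relevant fields, assuming the limits are set under `spec.limits.global.ingestion` of the `LokiStack` resource (the values shown are placeholders):
+
[source,yaml]
----
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: loki
  namespace: netobserv
spec:
  limits:
    global:
      ingestion:
        perStreamRateLimit: 6        # per-stream rate limit, in MB
        perStreamRateLimitBurst: 15  # per-stream burst allowance, in MB
----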
. Click *Save*.

.Verification
Once you update the `perStreamRateLimit` and `perStreamRateLimitBurst` specifications, the pods in your cluster restart and the 429 rate-limit error no longer occurs.

* Since the 1.2.0 release of the Network Observability Operator, using {loki-op} 5.6, a Loki certificate change periodically affects the `flowlogs-pipeline` pods and results in dropped flows rather than flows written to Loki. The problem self-corrects after some time, but it still causes temporary flow data loss during the Loki certificate change. This issue has only been observed in large-scale environments of 120 nodes or greater. (link:https://issues.redhat.com/browse/NETOBSERV-980[*NETOBSERV-980*])

* Currently, when `spec.agent.ebpf.features` includes DNSTracking, larger DNS packets require the `eBPF` agent to look for the DNS header outside of the first socket buffer (SKB) segment. A new `eBPF` agent helper function needs to be implemented to support it. There is no workaround for this issue. (link:https://issues.redhat.com/browse/NETOBSERV-1304[*NETOBSERV-1304*])
You must switch your channel from `v1.0.x` to `stable` to receive future Operator updates.

[id="authToken-host"]
==== Deprecated configuration parameter setting
The release of Network Observability Operator 1.3 deprecates the `spec.Loki.authToken` `HOST` setting. When using the {loki-op}, you must now only use the `FORWARD` setting.
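
For reference, this setting lives under `spec.loki.authToken` in the `FlowCollector` resource. The following is a minimal sketch only; it assumes the `flows.netobserv.io/v1beta1` API version, so adjust it to match your installed Operator.

[source,yaml]
----
apiVersion: flows.netobserv.io/v1beta1
kind: FlowCollector
metadata:
  name: cluster          # the FlowCollector resource is cluster-wide and named "cluster"
spec:
  loki:
    authToken: FORWARD   # HOST is deprecated; use FORWARD with the {loki-op}
----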
=== Known issues
* When `processor.metrics.tls` is set to `PROVIDED` in the `FlowCollector`, the `flowlogs-pipeline` `servicemonitor` is not adapted to the TLS scheme. (link:https://issues.redhat.com/browse/NETOBSERV-1087[*NETOBSERV-1087*])

* Since the 1.2.0 release of the Network Observability Operator, using {loki-op} 5.6, a Loki certificate change periodically affects the `flowlogs-pipeline` pods and results in dropped flows rather than flows written to Loki. The problem self-corrects after some time, but it still causes temporary flow data loss during the Loki certificate change. This issue has only been observed in large-scale environments of 120 nodes or greater. (link:https://issues.redhat.com/browse/NETOBSERV-980[*NETOBSERV-980*])

* In the 1.2.0 release of the Network Observability Operator, using {loki-op} 5.6, a Loki certificate transition periodically affects the `flowlogs-pipeline` pods and results in dropped flows rather than flows written to Loki. The problem self-corrects after some time, but it still causes temporary flow data loss during the Loki certificate transition. (link:https://issues.redhat.com/browse/NETOBSERV-980[*NETOBSERV-980*])

The Network Observability Operator is now stable and the release channel is upgraded to `stable`.

=== Bug fix
* Previously, unless the Loki `authToken` configuration was set to `FORWARD` mode, authentication was no longer enforced, allowing any user who could connect to the {product-title} console in an {product-title} cluster to retrieve flows without authentication.
Now, regardless of the Loki `authToken` mode, only cluster administrators can retrieve flows. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2169468[*BZ#2169468*])