Commit 6024757

Merge pull request #35445 from rolfedh/logging-5.2
RHDEVDOCS-3226 Logging 5.2 docs aggregated branch
2 parents e3ab61b + ba45d89

32 files changed, +785 -636 lines changed

logging/cluster-logging-external.adoc
Lines changed: 29 additions & 3 deletions

@@ -1,6 +1,6 @@
 :context: cluster-logging-external
 [id="cluster-logging-external"]
-= Forwarding logs to third-party systems
+= Forwarding logs to external third-party logging systems
 include::modules/common-attributes.adoc[]
 
 toc::[]
@@ -11,7 +11,7 @@ To send logs to other log aggregators, you use the {product-title} Cluster Log F
 
 [NOTE]
 ====
-To send audit logs to the internal log store, use the Cluster Log Forwarder as described in xref:../logging/config/cluster-logging-log-store.adoc#cluster-logging-elasticsearch-audit_cluster-logging-store[Forward audit logs to the log store].
+To send audit logs to the default internal Elasticsearch log store, use the Cluster Log Forwarder as described in xref:../logging/config/cluster-logging-log-store.adoc#cluster-logging-elasticsearch-audit_cluster-logging-store[Forward audit logs to the log store].
 ====
 
 When you forward logs externally, the Red Hat OpenShift Logging Operator creates or modifies a Fluentd config map to send logs using your desired protocols. You are responsible for configuring the protocol on the external log aggregator.
@@ -29,12 +29,38 @@ You cannot use the config map methods and the Cluster Log Forwarder in the same
 // assemblies.
 
 include::modules/cluster-logging-collector-log-forwarding-about.adoc[leveloffset=+1]
+
 include::modules/cluster-logging-collector-log-forwarding-supported-plugins-5-1.adoc[leveloffset=+1]
+
+include::modules/cluster-logging-collector-log-forwarding-supported-plugins-5-2.adoc[leveloffset=+1]
+
 include::modules/cluster-logging-collector-log-forward-es.adoc[leveloffset=+1]
+
 include::modules/cluster-logging-collector-log-forward-fluentd.adoc[leveloffset=+1]
+
 include::modules/cluster-logging-collector-log-forward-syslog.adoc[leveloffset=+1]
-include::modules/cluster-logging-collector-log-forward-kafka.adoc[leveloffset=+1]
+
+include::modules/cluster-logging-collector-log-forward-cloudwatch.adoc[leveloffset=+1]
+
+include::modules/cluster-logging-collector-log-forward-loki.adoc[leveloffset=+1]
+
+.Additional resources
+
+* xref:../logging/cluster-logging-exported-fields.adoc#cluster-logging-exported-fields-kubernetes_cluster-logging-exported-fields[Log Record Fields].
+
 include::modules/cluster-logging-collector-log-forward-project.adoc[leveloffset=+1]
+
 include::modules/cluster-logging-collector-log-forward-logs-from-application-pods.adoc[leveloffset=+1]
+
+include::modules/cluster-logging-collector-collecting-ovn-logs.adoc[leveloffset=+1]
+
+.Additional resources
+
+* xref:../networking/network_policy/logging-network-policy.adoc#nw-networkpolicy-audit-concept_logging-network-policy[Network policy audit logging]
+
+
 include::modules/cluster-logging-collector-legacy-fluentd.adoc[leveloffset=+1]
+
 include::modules/cluster-logging-collector-legacy-syslog.adoc[leveloffset=+1]
+
+include::modules/cluster-logging-troubleshooting-log-forwarding.adoc[leveloffset=+1]
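
For orientation, the new `cloudwatch` and `loki` includes document `ClusterLogForwarder` output types along the lines of the following minimal sketch. The output names, region, URL, and secret names are illustrative placeholders, not values taken from this commit:

[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: cw                 # illustrative output name
    type: cloudwatch
    cloudwatch:
      groupBy: logType       # group CloudWatch log streams by log type
      region: us-east-2      # illustrative AWS region
    secret:
      name: cw-secret        # assumed to hold aws_access_key_id and aws_secret_access_key
  - name: loki-server        # illustrative output name
    type: loki
    url: https://loki.example.com:3100
    secret:
      name: loki-secret
  pipelines:
  - name: forward-infra
    inputRefs:
    - infrastructure
    outputRefs:
    - cw
    - loki-server
----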
Lines changed: 76 additions & 10 deletions

@@ -1,27 +1,93 @@
 [id="cluster-logging-release-notes"]
-= Release notes for Red Hat OpenShift Logging 5.1
+= Release notes for Red Hat OpenShift Logging 5.2
 include::modules/common-attributes.adoc[]
 :context: cluster-logging-release-notes-v5x
 
 toc::[]
 
-[id="openshift-logging-about-this-release"]
-== About this release
-
-The following advisories are available for OpenShift Logging 5.1.x:
-
-* link:https://access.redhat.com/errata/RHBA-2021:2885[RHBA-2021:2885 - Bug Fix Advisory. Openshift Logging Bug Fix Release 5.1.1]
-* link:https://access.redhat.com/errata/RHBA-2021:2112[RHBA-2021:2112 - Bug Fix Advisory. OpenShift Logging Bug Fix Release 5.1.0]
-
 [id="openshift-logging-supported-versions"]
 == Supported versions
 
-* OpenShift Logging version 5.1 runs on {product-title} versions 4.7 and 4.8.
+* OpenShift Logging versions 5.0, 5.1, and 5.2 run on {product-title} versions 4.7 and 4.8.
 
 [id="openshift-logging-inclusive-language"]
 == Making open source more inclusive
 
 Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see link:https://www.redhat.com/en/blog/making-open-source-more-inclusive-eradicating-problematic-language[Red Hat CTO Chris Wright’s message].
 
 // Release Notes by version
+
+[id="cluster-logging-release-notes-5-2-0"]
+== OpenShift Logging 5.2.0
+
+This release includes link:https://access.redhat.com/errata/RHBA-2021:3393[RHBA-2021:3393 OpenShift Logging Bug Fix Release 5.2.0].
+
+[id="openshift-logging-5-2-0-new-features-and-enhancements"]
+=== New features and enhancements
+
+* With this update, you can forward log data to Amazon CloudWatch, which provides application and infrastructure monitoring. For more information, see xref:../logging/cluster-logging-external.html#cluster-logging-collector-log-forward-cloudwatch_cluster-logging-external[Forwarding logs to Amazon CloudWatch]. (link:https://issues.redhat.com/browse/LOG-1173[LOG-1173])
+
+* With this update, you can forward log data to Grafana Loki, a horizontally scalable, highly available, multi-tenant log aggregation system. For more information, see xref:../logging/cluster-logging-external.html#cluster-logging-collector-log-forward-loki_cluster-logging-external[Forwarding logs to Grafana Loki]. (link:https://issues.redhat.com/browse/LOG-684[LOG-684])
+
+* With this update, if you use the Fluentd forward protocol to forward log data over a TLS-encrypted connection, you can now use a password-encrypted private key file and specify the passphrase in the Cluster Log Forwarder configuration. For more information, see xref:../logging/cluster-logging-external.html#cluster-logging-collector-log-forward-fluentd_cluster-logging-external[Forwarding logs using the Fluentd forward protocol]. (link:https://issues.redhat.com/browse/LOG-1525[LOG-1525])
+
+* This enhancement enables you to use a username and password to authenticate a log forwarding connection to an external Elasticsearch instance. For example, if you cannot use mutual TLS (mTLS) because a third party operates the Elasticsearch instance, you can use HTTP or HTTPS and set a secret that contains the username and password. For more information, see xref:../logging/cluster-logging-external.adoc#cluster-logging-collector-log-forward-es_cluster-logging-external[Forwarding logs to an external Elasticsearch instance]. (link:https://issues.redhat.com/browse/LOG-1022[LOG-1022])
+
+* With this update, you can collect OVN network policy audit logs for forwarding to a logging server. For more information, see xref:../logging/cluster-logging-external.html#cluster-logging-collecting-ovn-audit-logs_cluster-logging-external[Collecting OVN network policy audit logs]. (link:https://issues.redhat.com/browse/LOG-1526[LOG-1526])
+
+* By default, the data model introduced in {product-title} 4.5 gave logs from different namespaces a single index in common. This change made it harder to see which namespaces produced the most logs.
++
+The current release, OpenShift Logging 5.2, adds namespace metrics to the *Logging* dashboard in the {product-title} console. With these metrics, you can see which namespaces produce logs and how many logs each namespace produces for a given timestamp.
++
+To see these metrics, open the *Administrator* perspective in the {product-title} web console, and navigate to *Monitoring* -> *Dashboards* -> *Logging/Elasticsearch*. (link:https://issues.redhat.com/browse/LOG-1680[LOG-1680])
+
+* The current release, OpenShift Logging 5.2, enables two new metrics: For a given timestamp or duration, you can see the total logs produced or logged by individual containers, and the total logs collected by the collector. These metrics are labeled by namespace, pod, and container name, so you can see how many logs each namespace and pod collects and produces. (link:https://issues.redhat.com/browse/LOG-1213[LOG-1213])
+
+[id="openshift-logging-5-2-0-bug-fixes"]
+=== Bug fixes
+
+* link:https://issues.redhat.com/browse/LOG-1130[LOG-1130] "BZ#1927249 - fieldmanager.go:186 - SHOULD NOT HAPPEN - failed to update managedFields...duplicate entries for key 'name="POLICY_MAPPING'"
+* link:https://issues.redhat.com/browse/LOG-1268[LOG-1268] "elasticsearch-im-{app,infra,audit} successfully run but fail to run their tasks."
+* link:https://issues.redhat.com/browse/LOG-1271[LOG-1271] "Logs Produced Line Chart does not display podname and namespace labels"
+* link:https://issues.redhat.com/browse/LOG-1273[LOG-1273] "The index management job status is always 'Completed' even when there has an error in the job log."
+* link:https://issues.redhat.com/browse/LOG-1385[LOG-1385] "APIRemovedInNextReleaseInUse alert for priorityclasses"
+* link:https://issues.redhat.com/browse/LOG-1420[LOG-1420] "Operators missing disconnected annotation"
+* link:https://issues.redhat.com/browse/LOG-1440[LOG-1440] "BZ#1966561 - OLM bug workaround for workload partitioning (PR#1042)"
+* link:https://issues.redhat.com/browse/LOG-1499[LOG-1499] "release-5.2 - Error 'com.fasterxml.jackson.core.JsonParseException: Invalid UTF-8 start byte 0x92' in Elasticsearch/Fluentd logs"
+* link:https://issues.redhat.com/browse/LOG-1567[LOG-1567] "Use correct variable for nextIndex"
+* link:https://issues.redhat.com/browse/LOG-1446[LOG-1446] "kibana-proxy CrashLoopBackoff with error Invalid configuration cookie_secret must be 16, 24, or 32 bytes to create an AES cipher"
+* link:https://issues.redhat.com/browse/LOG-1625[LOG-1625] "ds/fluentd is not created due to: 'system:serviceaccount:openshift-logging:cluster-logging-operator' cannot create resource 'securitycontextconstraints' in API group 'security.openshift.io' at the cluster scope"
+* link:https://issues.redhat.com/browse/LOG-1071[LOG-1071] "fluentd configuration posting all messages to its own log"
+* link:https://issues.redhat.com/browse/LOG-1276[LOG-1276] "Update Elasticsearch/kibana to use opendistro security plugin 2.10.5.1"
+* link:https://issues.redhat.com/browse/LOG-1353[LOG-1353] "No datapoints found on top 10 containers dashboard"
+* link:https://issues.redhat.com/browse/LOG-1411[LOG-1411] "Underestimate queued_chunks_limit_size value with chunkLimitSize and totalLimitSize tuning parameters"
+* link:https://issues.redhat.com/browse/LOG-1558[LOG-1558] "Update OD security dependency to resolve kibana index migration issue"
+* link:https://issues.redhat.com/browse/LOG-1562[LOG-1562] "CVE-2021-32740 logging-fluentd-container: rubygem-addressable: ReDoS in templates - openshift-logging-5"
+* link:https://issues.redhat.com/browse/LOG-1570[LOG-1570] "Bug 1981579: Fix built-in application behavior to collect all of logs"
+* link:https://issues.redhat.com/browse/LOG-1589[LOG-1589] "There are lots of dockercfg secrets and service-account-token secrets for ES and Kibana in openshift-logging namespace after deploying EFK pods."
+* link:https://issues.redhat.com/browse/LOG-1590[LOG-1590] "Vendored viaq/logerr dependency is missing a license file"
+* link:https://issues.redhat.com/browse/LOG-1623[LOG-1623] "Metric`log_collected_bytes_total` is not exposed"
+* link:https://issues.redhat.com/browse/LOG-1624[LOG-1624] "Index management cronjobs are using wrong image in CSV/elasticsearch-operator.5.2.0-1"
+* link:https://issues.redhat.com/browse/LOG-1634[LOG-1634] "Logging 5.2 - The CSV version is not changed in new bundles."
+* link:https://issues.redhat.com/browse/LOG-1647[LOG-1647] "Fluentd pods raise error `Prometheus::Client::LabelSetValidator::InvalidLabelSetError` when forward logs to external logStore"
+* link:https://issues.redhat.com/browse/LOG-1657[LOG-1657] "Index management jobs failing with error while attemping to determine the active write alias no permissions"
+* link:https://issues.redhat.com/browse/LOG-1681[LOG-1681] "Fluentd pod metric not able to scrape , Fluentd target down"
+* link:https://issues.redhat.com/browse/LOG-1683[LOG-1683] "Loki output not present in CLO CRD"
+* link:https://issues.redhat.com/browse/LOG-1702[LOG-1702] "Entry out of order when forward logs to loki"
+* link:https://issues.redhat.com/browse/LOG-1714[LOG-1714] "Memory/CPU spike issues seen with Logging 5.2 on Power"
+* link:https://issues.redhat.com/browse/LOG-1722[LOG-1722] "The value of card `Total Namespace Count` in Logging/Elasticsearch dashboard is not correct."
+* link:https://issues.redhat.com/browse/LOG-1723[LOG-1723] "In fluentd config, flush_interval can't be set with flush_mode=immediate"
+
+[id="openshift-logging-5-2-0-known-issues"]
+=== Known issues
+
+* If you forward logs to an external Elasticsearch server and then change a configured value in the pipeline secret, such as the username and password, the fluentd forwarder loads the new secret but uses the old value to connect to an external Elasticsearch server. This issue happens because the Red Hat OpenShift Logging Operator does not currently monitor secrets for content changes. (link:https://issues.redhat.com/browse/LOG-1652[LOG-1652])
++
+As a workaround, if you change the secret, you can force the Fluentd pods to redeploy by entering:
++
+[source,terminal]
+----
+$ oc delete pod -l component=fluentd
+----
+
 include::modules/cluster-logging-release-notes-5.1.0.adoc[leveloffset=+1]
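
A rough sketch of the LOG-1022 username/password enhancement noted above: create a secret that carries `username` and `password` keys, then reference it from an `elasticsearch` output. The secret name and URL here are placeholders, not values from this commit:

[source,terminal]
----
$ oc create secret generic es-secret \
    --from-literal=username=<username> \
    --from-literal=password=<password> \
    -n openshift-logging
----

[source,yaml]
----
  outputs:
  - name: external-es
    type: elasticsearch
    url: https://elasticsearch.example.com:9200  # HTTPS without mTLS
    secret:
      name: es-secret  # carries the username and password keys
----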

logging/cluster-logging-upgrading.adoc
Lines changed: 1 addition & 6 deletions

@@ -5,7 +5,7 @@ include::modules/common-attributes.adoc[]
 
 toc::[]
 
-{product-title} versions 4.7 and 4.8 both support OpenShift Logging versions 5.0 and 5.1.
+{product-title} versions 4.7 and 4.8 support OpenShift Logging versions 5.0, 5.1, and 5.2.
 
 To upgrade from cluster logging in {product-title} version 4.6 and earlier to OpenShift Logging 5.x, you update the {product-title} cluster to version 4.7 or 4.8. Then, you update the following operators:
 
@@ -14,10 +14,5 @@ To upgrade from cluster logging in {product-title} version 4.6 and earlier to Op
 
 To upgrade from a previous version of OpenShift Logging to the current version, you update OpenShift Elasticsearch Operator and Red Hat OpenShift Logging Operator to their current versions.
 
-// The following include statements pull in the module files that comprise
-// the assembly. Include any combination of concept, procedure, or reference
-// modules required to cover the user story. You can also include other
-// assemblies.
-
 include::modules/cluster-logging-updating-logging-to-5-0.adoc[leveloffset=+1]
 include::modules/cluster-logging-updating-logging-to-5-1.adoc[leveloffset=+1]
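
To confirm which operator versions are installed before and after an update, you can list the cluster service versions in the logging namespace. This is a general OLM check, not a step taken from this commit:

[source,terminal]
----
$ oc get csv -n openshift-logging
----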

logging/cluster-logging.adoc
Lines changed: 1 addition & 1 deletion

@@ -20,7 +20,7 @@ OpenShift Logging aggregates the following types of logs:
 
 [NOTE]
 ====
-Because the internal {product-title} Elasticsearch log store does not provide secure storage for audit logs, audit logs are not stored in the internal Elasticsearch instance by default. If you want to send the audit logs to the internal log store, for example to view the audit logs in Kibana, you must use the Log Forwarding API as described in xref:../logging/config/cluster-logging-log-store.adoc#cluster-logging-elasticsearch-audit_cluster-logging-store[Forward audit logs to the log store].
+Because the internal {product-title} Elasticsearch log store does not provide secure storage for audit logs, audit logs are not stored in the internal Elasticsearch instance by default. If you want to send the audit logs to the default internal Elasticsearch log store, for example to view the audit logs in Kibana, you must use the Log Forwarding API as described in xref:../logging/config/cluster-logging-log-store.adoc#cluster-logging-elasticsearch-audit_cluster-logging-store[Forward audit logs to the log store].
 ====
 endif::[]
 
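The Log Forwarding API approach that this note references comes down to a pipeline that lists `audit` as an input and the `default` internal log store as an output. A minimal excerpt of a `ClusterLogForwarder` spec, with an illustrative pipeline name:

[source,yaml]
----
  pipelines:
  - name: audit-to-internal  # illustrative name
    inputRefs:
    - audit
    outputRefs:
    - default                # the internal Elasticsearch log store
----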
Lines changed: 88 additions & 0 deletions

@@ -0,0 +1,88 @@
+[id="cluster-logging-collecting-ovn-audit-logs_{context}"]
+= Collecting OVN network policy audit logs
+
+You can collect the OVN network policy audit logs from the `/var/log/ovn/acl-audit-log.log` file on OVN-Kubernetes pods and forward them to logging servers.
+
+.Prerequisites
+
+* You are using {product-title} version 4.8 or later.
+* You are using Cluster Logging version 5.2 or later.
+* You have already set up a `ClusterLogForwarder` custom resource (CR) object.
+* The {product-title} cluster is configured for OVN-Kubernetes network policy audit logging. See the following "Additional resources" section.
+
+[NOTE]
+====
+Often, logging servers that store audit data must meet organizational and governmental requirements for compliance and security.
+====
+
+.Procedure
+
+. Create or edit a YAML file that defines the `ClusterLogForwarder` CR object as described in other topics on forwarding logs to third-party systems.
+
+. In the YAML file, add the `audit` log type to the `inputRefs` element in a pipeline. For example:
++
+[source,yaml]
+----
+pipelines:
+- name: audit-logs
+  inputRefs:
+  - audit <1>
+  outputRefs:
+  - secure-logging-server <2>
+----
+<1> Specify `audit` as one of the log types to input.
+<2> Specify the output that connects to your logging server.
+
+. Recreate the updated CR object:
++
+[source,terminal]
+----
+$ oc create -f <file-name>.yaml
+----
+
+.Verification
+
+Verify that audit log entries from the nodes that you are monitoring are present among the log data gathered by the logging server.
+
+Find an original audit log entry in `/var/log/ovn/acl-audit-log.log` and compare it with the corresponding log entry on the logging server.
+
+For example, an original log entry in `/var/log/ovn/acl-audit-log.log` might look like this:
+
+[source,txt]
+----
+2021-07-06T08:26:58.687Z|00004|acl_log(ovn_pinctrl0)|INFO|name="verify-audit-
+logging_deny-all", verdict=drop, severity=alert:
+icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:12,dl_dst=0a:58:0a:81:02:14,nw_src=10
+.129.2.18,nw_dst=10.129.2.20,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0
+----
+
+And the corresponding OVN audit log entry you find on the logging server might look like this:
+
+[source,json]
+----
+{
+  "@timestamp": "2021-07-06T08:26:58.687000+00:00",
+  "hostname": "ip.abc.internal",
+  "level": "info",
+  "message": "2021-07-06T08:26:58.687Z|00004|acl_log(ovn_pinctrl0)|INFO|name=\"verify-audit-logging_deny-all\", verdict=drop, severity=alert: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:12,dl_dst=0a:58:0a:81:02:14,nw_src=10.129.2.18,nw_dst=10.129.2.20,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0"
+}
+----
+
+Where:
+
+* `@timestamp` is the timestamp of the log entry.
+* `hostname` is the node from which the log originated.
+* `level` is the severity level of the log entry.
+* `message` is the original audit log message.
+
+[NOTE]
+====
+On an Elasticsearch server, look for log entries whose indices begin with `audit-00000`.
+====
+
+.Troubleshooting
+
+. Verify that your {product-title} cluster meets all the prerequisites.
+. Verify that you have completed the procedure.
+. Verify that the nodes generating OVN logs are enabled and have `/var/log/ovn/acl-audit-log.log` files.
+. Check the Fluentd pod logs for issues.
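
For that final troubleshooting step, one way to inspect the collector logs is by label selector, assuming the Fluentd pods carry the `component=fluentd` label used in the known-issues workaround elsewhere in this commit:

[source,terminal]
----
$ oc logs -l component=fluentd -n openshift-logging
----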

modules/cluster-logging-collector-legacy-fluentd.adoc
Lines changed: 0 additions & 11 deletions

@@ -1,7 +1,3 @@
-// Module included in the following assemblies:
-//
-// * logging/cluster-logging-external.adoc
-
 [id="cluster-logging-collector-legacy-fluentd_{context}"]
 = Forwarding logs using the legacy Fluentd method
 
@@ -113,10 +109,3 @@ To use Mutual TLS (mTLS) authentication, see the link:https://docs.fluentd.org/o
 ----
 $ oc create configmap secure-forward --from-file=secure-forward.conf -n openshift-logging
 ----
-
-The Red Hat OpenShift Logging Operator redeploys the Fluentd pods. If the pods do not redeploy, you can delete the Fluentd pods to force them to redeploy.
-
-[source,terminal]
-----
-$ oc delete pod --selector logging-infra=fluentd
-----

modules/cluster-logging-collector-legacy-syslog.adoc
Lines changed: 0 additions & 12 deletions

@@ -1,7 +1,3 @@
-// Module included in the following assemblies:
-//
-// * logging/cluster-logging-external.adoc
-
 [id="cluster-logging-collector-legacy-syslog_{context}"]
 = Forwarding logs using the legacy syslog method
 
@@ -113,11 +109,3 @@ rfc 3164 <5>
 ----
 $ oc create configmap syslog --from-file=syslog.conf -n openshift-logging
 ----
-
-The Red Hat OpenShift Logging Operator redeploys the Fluentd pods. If the pods do not redeploy, you can delete the Fluentd
-pods to force them to redeploy.
-
-[source,terminal]
-----
-$ oc delete pod --selector logging-infra=fluentd
-----
