- The following fields can be present in log records exported by OpenShift Logging system. Although log records are typically formatted as JSON objects, the same data model can be applied to other encodings.
+ The following fields can be present in log records exported by OpenShift Logging. Although log records are typically formatted as JSON objects, the same data model can be applied to other encodings.
To search these fields from Elasticsearch and Kibana, use the full dotted field name when searching. For example, with an Elasticsearch */_search URL*, to look for a Kubernetes pod name, use `/_search?q=kubernetes.pod_name:name-of-my-pod`.
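For illustration, a sketch of a minimal exported log record, shown here in YAML for readability; actual records are JSON objects, and all values below are hypothetical placeholders:

[source,yaml]
----
# A hypothetical exported log record; values are placeholders.
message: "my application log message"   # the original log entry text
level: "info"                           # logging level of the entry
"@timestamp": "2021-03-17T10:00:00.000000+00:00"  # collection time
kubernetes:                             # metadata added by the collector
  pod_name: "name-of-my-pod"            # matches the search example above
  namespace_name: "my-namespace"
  container_name: "my-container"
----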
- // The logging system can forward JSON-formatted log entries to external systems. These log entries are formatted as a fluentd message with extra fields such as `kubernetes`. The fields exported by the logging system and available for searching from Elasticsearch and Kibana are documented at the end of this document.
+ // The logging system can parse JSON-formatted log entries to external systems. These log entries are formatted as a fluentd message with extra fields such as `kubernetes`. The fields exported by the logging system and available for searching from Elasticsearch and Kibana are documented at the end of this document.
logging/cluster-logging-external.adoc (+2 -2)
@@ -1,13 +1,13 @@
:context: cluster-logging-external
[id="cluster-logging-external"]
- = Forwarding logs to thirdparty systems
+ = Forwarding logs to third-party systems
include::modules/common-attributes.adoc[]

toc::[]

By default, OpenShift Logging sends container and infrastructure logs to the default internal Elasticsearch log store defined in the `ClusterLogging` custom resource. However, it does not send audit logs to the internal store because it does not provide secure storage. If this default configuration meets your needs, you do not need to configure the Cluster Log Forwarder.

- To send logs to other log aggregators, you use the {product-title} Cluster Log Forwarder. This API enables you to send container, infrastructure, and audit logs to specific endpoints within or outside your cluster. In addition, you can send different types of logs to various systems so that various individuals can access each type. You can also enable TLS support to send logs securely, as required by your organization.
+ To send logs to other log aggregators, you use the {product-title} Cluster Log Forwarder. This API enables you to send container, infrastructure, and audit logs to specific endpoints within or outside your cluster. In addition, you can send different types of logs to various systems so that various individuals can access each type. You can also enable Transport Layer Security (TLS) support to send logs securely, as required by your organization.
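As orientation for the examples changed below, a minimal `ClusterLogForwarder` sketch with one output and one pipeline; the output name and URL are hypothetical:

[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: my-fluentd                              # hypothetical output name
    type: fluentdForward
    url: 'tls://fluentdserver.example.com:24224'  # hypothetical endpoint
  pipelines:
  - name: forward-app-logs
    inputRefs:
    - application                                 # forward only application logs
    outputRefs:
    - my-fluentd
----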
modules/cluster-logging-collector-legacy-syslog.adoc (+2 -2)
@@ -57,7 +57,7 @@ You can configure the following `syslog` parameters. For more information, see t
** `15` or `solaris-cron` for the scheduling daemon
** `16`–`23` or `local0`–`local7` for locally used facilities
* payloadKey: The record field to use as payload for the syslog message.
- * rfc: The RFC to be used for sending log using syslog.
+ * rfc: The RFC to be used for sending logs using syslog.
* severity: The link:https://tools.ietf.org/html/rfc3164#section-4.1.1[syslog severity] to set on outgoing syslog records. The value can be a decimal integer or a case-insensitive keyword:
** `0` or `Emergency` for messages indicating the system is unusable
** `1` or `Alert` for messages indicating action must be taken immediately
@@ -67,7 +67,7 @@ You can configure the following `syslog` parameters. For more information, see t
** `5` or `Notice` for messages indicating normal but significant conditions
** `6` or `Informational` for messages indicating informational messages
** `7` or `Debug` for messages indicating debug-level messages, the default
- * tag: The record field to use as tag on the syslog message.
+ * tag: The record field to use as a tag on the syslog message.
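A sketch of how these parameters might fit together, assuming the syslog output type of the `ClusterLogForwarder` API rather than the legacy config-map method; the output name, URL, and `tag` value are hypothetical:

[source,yaml]
----
outputs:
- name: rsyslog-example                       # hypothetical output name
  type: syslog
  syslog:
    facility: local0                          # see the facility values above
    rfc: RFC5424                              # or RFC3164
    severity: informational                   # see the severity values above
    payloadKey: message                       # record field sent as the payload
    tag: mytag                                # hypothetical record field used as the tag
  url: 'tls://rsyslogserver.example.com:514'  # hypothetical endpoint
----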
modules/cluster-logging-collector-log-forward-es.adoc (+6 -4)
@@ -45,9 +45,10 @@ spec:
outputRefs:
- elasticsearch-secure <9>
- default <10>
+ parse: json <11>
labels:
- logs: application <11>
- - name: infrastructure-audit-logs <12>
+ myLabel: myValue <12>
+ - name: infrastructure-audit-logs <13>
inputRefs:
- infrastructure
outputRefs:
@@ -65,8 +66,9 @@ spec:
<8> Specify which log types should be forwarded using that pipeline: `application`, `infrastructure`, or `audit`.
<9> Specify the output to use with that pipeline for forwarding the logs.
<10> Optional: Specify the `default` output to send the logs to the internal Elasticsearch instance.
- <11> Optional: One or more labels to add to the logs.
- <12> Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
+ <11> Optional: Forward structured JSON log entries as JSON objects in the `structured` field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the `structured` field and instead sends the log entry to the default index, `app-00000x`.
+ <12> Optional: One or more labels to add to the logs.
+ <13> Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
** Optional. A name to describe the pipeline.
** The `inputRefs` is the log type to forward using that pipeline: `application`, `infrastructure`, or `audit`.
** The `outputRefs` is the name of the output to use.
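Read together, the updated callouts describe a pipeline fragment like the following sketch; the pipeline name and `inputRefs` are assumed, while `elasticsearch-secure`, `parse: json`, and `myLabel: myValue` come from the example above:

[source,yaml]
----
pipelines:
- name: application-logs        # assumed pipeline name
  inputRefs:
  - application                 # assumed log type
  outputRefs:
  - elasticsearch-secure        # <9> in the example above
  - default                     # <10> internal Elasticsearch instance
  parse: json                   # <11> keep structured JSON in the `structured` field
  labels:
    myLabel: myValue            # <12> label added to each forwarded record
----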
= Forwarding logs using the Fluentd forward protocol
- You can use the Fluentd *forward* protocol to send a copy of your logs to an external log aggregator that you have configured to accept the protocol. You can do this in addition to, or instead of, using the default Elasticsearch log store. You must also configure the external log aggregator to receive log data from {product-title}.
+ You can use the Fluentd *forward* protocol to send a copy of your logs to an external log aggregator configured to accept the protocol instead of, or in addition to, the default Elasticsearch log store. You are responsible for configuring the external log aggregator to receive the logs from {product-title}.
To configure log forwarding using the *forward* protocol, create a `ClusterLogForwarder` custom resource (CR) with one or more outputs to the Fluentd servers and pipelines that use those outputs. The Fluentd output can use a TCP (insecure) or TLS (secure TCP) connection.
@@ -14,10 +14,6 @@ To configure log forwarding using the *forward* protocol, create a `ClusterLogFo
Alternatively, you can use a config map to forward logs using the *forward* protocol. However, this method is deprecated in {product-title} and will be removed in a future release.
====

- .Prerequisites
-
- * An external log aggregator that is configured to receive log data from {product-title} using the Fluentd *forward* protocol.
-
.Procedure

. Create a `ClusterLogForwarder` CR YAML file similar to the following:
@@ -47,9 +43,10 @@ spec:
outputRefs:
- fluentd-server-secure <9>
- default <10>
+ parse: json <11>
labels:
- clusterId: C1234 <11>
- - name: forward-to-fluentd-insecure <12>
+ clusterId: C1234 <12>
+ - name: forward-to-fluentd-insecure <13>
inputRefs:
- infrastructure
outputRefs:
@@ -67,8 +64,9 @@ spec:
<8> Specify which log types should be forwarded using that pipeline: `application`, `infrastructure`, or `audit`.
<9> Specify the output to use with that pipeline for forwarding the logs.
<10> Optional. Specify the `default` output to forward logs to the internal Elasticsearch instance.
- <11> Optional. One or more labels to add to the logs.
- <12> Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
+ <11> Optional: Forward structured JSON log entries as JSON objects in the `structured` field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the `structured` field and instead sends the log entry to the default index, `app-00000x`.
+ <12> Optional. One or more labels to add to the logs.
+ <13> Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
** Optional. A name to describe the pipeline.
** The `inputRefs` is the log type to forward using that pipeline: `application`, `infrastructure`, or `audit`.
** The `outputRefs` is the name of the output to use.
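As with the Elasticsearch example, a sketch of the updated pipeline fragment; the pipeline name and `inputRefs` are assumed, while `fluentd-server-secure`, `parse: json`, and `clusterId: C1234` come from the example above:

[source,yaml]
----
pipelines:
- name: forward-to-fluentd-secure   # assumed pipeline name
  inputRefs:
  - application                     # assumed log type
  outputRefs:
  - fluentd-server-secure           # <9> in the example above
  - default                         # <10> internal Elasticsearch instance
  parse: json                       # <11> keep structured JSON in the `structured` field
  labels:
    clusterId: C1234                # <12> label value from the example above
----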
modules/cluster-logging-collector-log-forward-kafka.adoc (+9 -7)
@@ -41,9 +41,10 @@ spec:
- application
outputRefs: <10>
- app-logs
+ parse: json <11>
labels:
- logType: application <11>
- - name: infra-topic <12>
+ logType: application <12>
+ - name: infra-topic <13>
inputRefs:
- infrastructure
outputRefs:
@@ -55,7 +56,7 @@ spec:
- audit
outputRefs:
- audit-logs
- - default <13>
+ - default <14>
labels:
logType: audit
----
@@ -69,15 +70,16 @@ spec:
<8> Optional: Specify a name for the pipeline.
<9> Specify which log types should be forwarded using that pipeline: `application`, `infrastructure`, or `audit`.
<10> Specify the output to use with that pipeline for forwarding the logs.
- <11> Optional: One or more labels to add to the logs.
- <12> Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
+ <11> Optional: Forward structured JSON log entries as JSON objects in the `structured` field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the `structured` field and instead sends the log entry to the default index, `app-00000x`.
+ <12> Optional: One or more labels to add to the logs.
+ <13> Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
** Optional. A name to describe the pipeline.
** The `inputRefs` is the log type to forward using that pipeline: `application`, `infrastructure`, or `audit`.
** The `outputRefs` is the name of the output to use.
** Optional: One or more labels to add to the logs.
- <13> Optional: Specify `default` to forward logs to the internal Elasticsearch instance.
+ <14> Optional: Specify `default` to forward logs to the internal Elasticsearch instance.
- . Optional: To forward a single output to multiple kafka brokers, specify an array of kafka brokers as shown in this example:
+ . Optional: To forward a single output to multiple Kafka brokers, specify an array of Kafka brokers as shown in this example:
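A sketch of such an output, reusing the `app-logs` output name from the example above; the secret name, broker URLs, and topic are hypothetical:

[source,yaml]
----
outputs:
- name: app-logs
  type: kafka
  secret:
    name: kafka-secret                       # hypothetical secret holding TLS materials
  kafka:
    brokers:                                 # multiple brokers for a single output
    - tls://kafka-broker1.example.com:9093/
    - tls://kafka-broker2.example.com:9093/
    topic: app-topic                         # hypothetical topic to write to
----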