Commit ef1f1f8

Merge pull request #34891 from rolfedh/RHDEVDOCS-2677

RHDEVDOCS-2677 Document "Allow storing and querying of structured log…

2 parents 536ca22 + c96d1bb

19 files changed: +373 additions, -134 deletions

_topic_map.yml
Lines changed: 2 additions & 0 deletions

@@ -1871,6 +1871,8 @@ Topics:
 - Name: Forwarding logs to third party systems
   File: cluster-logging-external
   Distros: openshift-enterprise,openshift-origin
+- Name: Enabling JSON logging
+  File: cluster-logging-enabling-json-logging
 - Name: Collecting and storing Kubernetes events
   File: cluster-logging-eventrouter
   Distros: openshift-enterprise,openshift-origin
logging/cluster-logging-enabling-json-logging.adoc (new file)
Lines changed: 16 additions & 0 deletions

@@ -0,0 +1,16 @@
+:context: cluster-logging-enabling-json-logging
+[id="cluster-logging-enabling-json-logging"]
+= Enabling JSON logging
+include::modules/common-attributes.adoc[]
+
+toc::[]
+
+You can configure the Log Forwarding API to parse JSON strings into a structured object.
+
+include::modules/cluster-logging-json-log-forwarding.adoc[leveloffset=+1]
+include::modules/cluster-logging-configuration-of-json-log-data-for-default-elasticsearch.adoc[leveloffset=+1]
+include::modules/cluster-logging-forwarding-json-logs-to-the-default-elasticsearch.adoc[leveloffset=+1]
+
+.Additional resources
+
+* xref:../logging/cluster-logging-external.adoc#cluster-logging-external[Forwarding logs to third-party systems]
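The `parse: json` feature this commit documents can be illustrated with a minimal `ClusterLogForwarder` sketch. The pipeline name below is illustrative; the `parse: json` field and the `structured` field behavior come from the callout text added elsewhere in this commit:

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging        # standard namespace for OpenShift Logging
spec:
  pipelines:
  - name: parse-json-app-logs         # hypothetical pipeline name
    inputRefs:
    - application
    outputRefs:
    - default                         # internal Elasticsearch log store
    parse: json                       # parse JSON strings into a structured object
```

With `parse: json` set, log entries that contain valid structured JSON are forwarded as JSON objects in the `structured` field; entries without valid JSON have the `structured` field removed and go to the default index instead.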

logging/cluster-logging-exported-fields.adoc
Lines changed: 3 additions & 2 deletions

@@ -5,11 +5,12 @@ include::modules/common-attributes.adoc[]
 
 toc::[]
 
-The following fields can be present in log records exported by OpenShift Logging system. Although log records are typically formatted as JSON objects, the same data model can be applied to other encodings.
+The following fields can be present in log records exported by OpenShift Logging. Although log records are typically formatted as JSON objects, the same data model can be applied to other encodings.
 
 To search these fields from Elasticsearch and Kibana, use the full dotted field name when searching. For example, with an Elasticsearch */_search URL*, to look for a Kubernetes pod name, use `/_search/q=kubernetes.pod_name:name-of-my-pod`.
 
-// The logging system can forward JSON-formatted log entries to external systems. These log entries are formatted as a fluentd message with extra fields such as `kubernetes`. The fields exported by the logging system and available for searching from Elasticsearch and Kibana are documented at the end of this document.
+// The logging system can parse JSON-formatted log entries to external systems. These log entries are formatted as a fluentd message with extra fields such as `kubernetes`. The fields exported by the logging system and available for searching from Elasticsearch and Kibana are documented at the end of this document.
 
 include::modules/cluster-logging-exported-fields-top-level-fields.adoc[leveloffset=0]
 include::modules/cluster-logging-exported-fields-kubernetes.adoc[leveloffset=0]
+// add modules/cluster-logging-exported-fields-openshift when available

logging/cluster-logging-external.adoc
Lines changed: 2 additions & 2 deletions

@@ -1,13 +1,13 @@
 :context: cluster-logging-external
 [id="cluster-logging-external"]
-= Forwarding logs to third party systems
+= Forwarding logs to third-party systems
 include::modules/common-attributes.adoc[]
 
 toc::[]
 
 By default, OpenShift Logging sends container and infrastructure logs to the default internal Elasticsearch log store defined in the `ClusterLogging` custom resource. However, it does not send audit logs to the internal store because it does not provide secure storage. If this default configuration meets your needs, you do not need to configure the Cluster Log Forwarder.
 
-To send logs to other log aggregators, you use the {product-title} Cluster Log Forwarder. This API enables you to send container, infrastructure, and audit logs to specific endpoints within or outside your cluster. In addition, you can send different types of logs to various systems so that various individuals can access each type. You can also enable TLS support to send logs securely, as required by your organization.
+To send logs to other log aggregators, you use the {product-title} Cluster Log Forwarder. This API enables you to send container, infrastructure, and audit logs to specific endpoints within or outside your cluster. In addition, you can send different types of logs to various systems so that various individuals can access each type. You can also enable Transport Layer Security (TLS) support to send logs securely, as required by your organization.
 
 [NOTE]
 ====

logging/cluster-logging.adoc
Lines changed: 1 addition & 1 deletion

@@ -68,4 +68,4 @@ For information, see xref:../logging/cluster-logging-eventrouter.adoc#cluster-lo
 
 include::modules/cluster-logging-forwarding-about.adoc[leveloffset=+2]
 
-For information, see xref:../logging/cluster-logging-external.adoc#cluster-logging-external[Forwarding logs to third party systems].
+For information, see xref:../logging/cluster-logging-external.adoc#cluster-logging-external[Forwarding logs to third-party systems].

logging/config/cluster-logging-collector.adoc
Lines changed: 1 addition & 1 deletion

@@ -26,4 +26,4 @@ include::modules/cluster-logging-removing-unused-components-if-no-elasticsearch.
 
 .Additional resources
 
-* xref:../../logging/cluster-logging-external.adoc#cluster-logging-external[Forwarding logs to third party systems]
+* xref:../../logging/cluster-logging-external.adoc#cluster-logging-external[Forwarding logs to third-party systems]

modules/cluster-logging-collector-legacy-syslog.adoc
Lines changed: 2 additions & 2 deletions

@@ -57,7 +57,7 @@ You can configure the following `syslog` parameters. For more information, see t
 ** `15` or `solaris-cron` for the scheduling daemon
 ** `16`–`23` or `local0` – `local7` for locally used facilities
 * payloadKey: The record field to use as payload for the syslog message.
-* rfc: The RFC to be used for sending log using syslog.
+* rfc: The RFC to be used for sending logs using syslog.
 * severity: The link:https://tools.ietf.org/html/rfc3164#section-4.1.1[syslog severity] to set on outgoing syslog records. The value can be a decimal integer or a case-insensitive keyword:
 ** `0` or `Emergency` for messages indicating the system is unusable
 ** `1` or `Alert` for messages indicating action must be taken immediately
@@ -67,7 +67,7 @@ You can configure the following `syslog` parameters. For more information, see t
 ** `5` or `Notice` for messages indicating normal but significant conditions
 ** `6` or `Informational` for messages indicating informational messages
 ** `7` or `Debug` for messages indicating debug-level messages, the default
-* tag: The record field to use as tag on the syslog message.
+* tag: The record field to use as a tag on the syslog message.
 * trimPrefix: The prefix to remove from the tag.
 
 .Procedure
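The syslog parameters listed in this module might be combined as in the following sketch of a `syslog` output in a `ClusterLogForwarder` CR. This is illustrative only: the output name, URL, and chosen parameter values are hypothetical, and this module itself documents the legacy config-map method rather than the CR:

```yaml
spec:
  outputs:
  - name: rsyslog-east                # hypothetical output name
    type: syslog
    syslog:
      facility: local0                # 16, a locally used facility
      rfc: RFC3164                    # RFC to use when sending logs
      severity: informational         # 6, informational messages
      tag: mytag                      # record field to use as a tag
      payloadKey: message             # record field to use as payload
    url: 'tls://rsyslogserver.example.com:514'  # hypothetical endpoint
```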

modules/cluster-logging-collector-log-forward-es.adoc
Lines changed: 6 additions & 4 deletions

@@ -45,9 +45,10 @@ spec:
     outputRefs:
     - elasticsearch-secure <9>
     - default <10>
+    parse: json <11>
     labels:
-      logs: application <11>
-  - name: infrastructure-audit-logs <12>
+      myLabel: myValue <12>
+  - name: infrastructure-audit-logs <13>
     inputRefs:
     - infrastructure
     outputRefs:
@@ -65,8 +66,9 @@ spec:
 <8> Specify which log types should be forwarded using that pipeline: `application`, `infrastructure`, or `audit`.
 <9> Specify the output to use with that pipeline for forwarding the logs.
 <10> Optional: Specify the `default` output to send the logs to the internal Elasticsearch instance.
-<11> Optional: One or more labels to add to the logs.
-<12> Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
+<11> Optional: Forward structured JSON log entries as JSON objects in the `structured` field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the `structured` field and instead sends the log entry to the default index, `app-00000x`.
+<12> Optional: One or more labels to add to the logs.
+<13> Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
 ** Optional. A name to describe the pipeline.
 ** The `inputRefs` is the log type to forward using that pipeline: `application`, `infrastructure`, or `audit`.
 ** The `outputRefs` is the name of the output to use.

modules/cluster-logging-collector-log-forward-fluentd.adoc
Lines changed: 7 additions & 9 deletions

@@ -5,7 +5,7 @@
 [id="cluster-logging-collector-log-forward-fluentd_{context}"]
 = Forwarding logs using the Fluentd forward protocol
 
-You can use the Fluentd *forward* protocol to send a copy of your logs to an external log aggregator that you have configured to accept the protocol. You can do this in addition to, or instead of, using the default Elasticsearch log store. You must also configure the external log aggregator to receive log data from {product-title}.
+You can use the Fluentd *forward* protocol to send a copy of your logs to an external log aggregator configured to accept the protocol instead of, or in addition to, the default Elasticsearch log store. You are responsible for configuring the external log aggregator to receive the logs from {product-title}.
 
 To configure log forwarding using the *forward* protocol, create a `ClusterLogForwarder` custom resource (CR) with one or more outputs to the Fluentd servers and pipelines that use those outputs. The Fluentd output can use a TCP (insecure) or TLS (secure TCP) connection.
@@ -14,10 +14,6 @@ To configure log forwarding using the *forward* protocol, create a `ClusterLogFo
 Alternately, you can use a config map to forward logs using the *forward* protocols. However, this method is deprecated in {product-title} and will be removed in a future release.
 ====
 
-.Prerequisites
-
-* An external log aggregator that is configured to receive log data from {product-title} using the Fluentd *forward* protocol.
-
 .Procedure
 
 . Create a `ClusterLogForwarder` CR YAML file similar to the following:
@@ -47,9 +43,10 @@ spec:
     outputRefs:
     - fluentd-server-secure <9>
     - default <10>
+    parse: json <11>
     labels:
-      clusterId: C1234 <11>
-  - name: forward-to-fluentd-insecure <12>
+      clusterId: C1234 <12>
+  - name: forward-to-fluentd-insecure <13>
     inputRefs:
     - infrastructure
     outputRefs:
@@ -67,8 +64,9 @@ spec:
 <8> Specify which log types should be forwarded using that pipeline: `application`, `infrastructure`, or `audit`.
 <9> Specify the output to use with that pipeline for forwarding the logs.
 <10> Optional. Specify the `default` output to forward logs to the internal Elasticsearch instance.
-<11> Optional. One or more labels to add to the logs.
-<12> Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
+<11> Optional: Forward structured JSON log entries as JSON objects in the `structured` field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the `structured` field and instead sends the log entry to the default index, `app-00000x`.
+<12> Optional. One or more labels to add to the logs.
+<13> Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
 ** Optional. A name to describe the pipeline.
 ** The `inputRefs` is the log type to forward using that pipeline: `application`, `infrastructure`, or `audit`.
 ** The `outputRefs` is the name of the output to use.

modules/cluster-logging-collector-log-forward-kafka.adoc
Lines changed: 9 additions & 7 deletions

@@ -41,9 +41,10 @@ spec:
     - application
     outputRefs: <10>
     - app-logs
+    parse: json <11>
     labels:
-      logType: application <11>
-  - name: infra-topic <12>
+      logType: application <12>
+  - name: infra-topic <13>
     inputRefs:
     - infrastructure
     outputRefs:
@@ -55,7 +56,7 @@ spec:
     - audit
     outputRefs:
     - audit-logs
-    - default <13>
+    - default <14>
     labels:
       logType: audit
 ----
@@ -69,15 +70,16 @@ spec:
 <8> Optional: Specify a name for the pipeline.
 <9> Specify which log types should be forwarded using that pipeline: `application`, `infrastructure`, or `audit`.
 <10> Specify the output to use with that pipeline for forwarding the logs.
-<11> Optional: One or more labels to add to the logs.
-<12> Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
+<11> Optional: Forward structured JSON log entries as JSON objects in the `structured` field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the `structured` field and instead sends the log entry to the default index, `app-00000x`.
+<12> Optional: One or more labels to add to the logs.
+<13> Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
 ** Optional. A name to describe the pipeline.
 ** The `inputRefs` is the log type to forward using that pipeline: `application`, `infrastructure`, or `audit`.
 ** The `outputRefs` is the name of the output to use.
 ** Optional: One or more labels to add to the logs.
-<13> Optional: Specify `default` to forward logs to the internal Elasticsearch instance.
+<14> Optional: Specify `default` to forward logs to the internal Elasticsearch instance.
 
-. Optional: To forward a single output to multiple kafka brokers, specify an array of kafka brokers as shown in this example:
+. Optional: To forward a single output to multiple Kafka brokers, specify an array of Kafka brokers as shown in this example:
 +
 [source,yaml]
 ----
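The multi-broker example body is truncated in this diff. Under the `brokers` array pattern the step describes, such an output might look like the following sketch; the broker hosts, secret name, and topic are hypothetical:

```yaml
spec:
  outputs:
  - name: app-logs
    type: kafka
    secret:
      name: kafka-secret            # hypothetical secret holding TLS credentials
    kafka:
      brokers:                      # array of brokers for a single output
      - tls://kafka-broker1.example.com:9093/
      - tls://kafka-broker2.example.com:9093/
      topic: app-topic              # hypothetical topic name
```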
