Commit ba90c21

Merge pull request #35248 from rolfedh/RHDEVDOCS-3217
RHDEVDOCS-3217 Tweaks to JSON parsing topics
2 parents 05616f0 + 4dc7ccf commit ba90c21

5 files changed: +12 -9 lines changed

logging/cluster-logging.adoc

Lines changed: 1 addition & 1 deletion
@@ -16,7 +16,7 @@ OpenShift Logging aggregates the following types of logs:
 
 * `application` - Container logs generated by user applications running in the cluster, except infrastructure container applications.
 * `infrastructure` - Logs generated by infrastructure components running in the cluster and {product-title} nodes, such as journal logs. Infrastructure components are pods that run in the `openshift*`, `kube*`, or `default` projects.
-* `audit` - Logs generated by auditd, the node audit system, which are stored in the */var/log/audit/audit.log* file, and the audit logs from the Kubernetes apiserver and the OpenShift apiserver.
+* `audit` - Logs generated by auditd, the node audit system, which are stored in the */var/log/audit/audit.log* file, and the audit logs from the Kubernetes apiserver and the OpenShift apiserver.
 
 [NOTE]
 ====

modules/cluster-logging-configuration-of-json-log-data-for-default-elasticsearch.adoc

Lines changed: 2 additions & 4 deletions
@@ -1,15 +1,13 @@
 [id="cluster-logging-configuration-of-json-log-data-for-default-elasticsearch_{context}"]
 = Configuring JSON log data for Elasticsearch
 
-When forwarding JSON logs to an Elasticsearch log store, you must create an index for each format if the JSON log entries _have different formats_.
+If your JSON logs follow more than one schema, storing them in a single index might cause type conflicts and cardinality problems. To avoid that, you must configure the `ClusterLogForwarder` custom resource (CR) to group each schema into a single output definition. This way, each schema is forwarded to a separate index.
 
 [IMPORTANT]
 ====
-You must create a separate index for each different JSON log format. Otherwise, forwarding different formats to the same index can cause type conflicts and cardinality problems.
+If you forward JSON logs to the default Elasticsearch instance managed by OpenShift Logging, it generates new indices based on your configuration. To avoid performance issues associated with having too many indices, consider keeping the number of possible schemas low by standardizing to common schemas.
 ====
 
-To provide a different index for each format, you configure the `ClusterLogForwarder` custom resource (CR). You use a structure type from which to construct the index name.
-
 .Structure types
 
 You can use the following structure types in the `ClusterLogForwarder` CR to construct index names for the Elasticsearch log store:
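
The structure types themselves fall outside the diff context shown above. For illustration only, a minimal sketch of how two structure types might appear in a `ClusterLogForwarder` CR; the field names `structuredTypeKey` and `structuredTypeName` and the label name `logFormat` are assumptions here, not part of this commit:

    outputDefaults:
      elasticsearch:
        structuredTypeKey: kubernetes.labels.logFormat  # build the index name from the value of this field, for example a pod label
        structuredTypeName: nologformat                  # fallback index name for records that lack that field

Under these assumptions, records from a pod labeled `logFormat=apache` would be grouped into an index derived from `apache`, while unlabeled records would fall back to the `nologformat` index.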

modules/cluster-logging-forwarding-json-logs-to-the-default-elasticsearch.adoc

Lines changed: 8 additions & 1 deletion
@@ -1,7 +1,14 @@
 [id="cluster-logging-forwarding-json-logs-to-the-default-elasticsearch_{context}"]
 = Forwarding JSON logs to the Elasticsearch log store
 
-For the Elasticsearch log store that OpenShift Logging manages, you must create a different index for each format in advance if your JSON log entries _have different formats_. Otherwise, forwarding different formats to the same index can cause type conflicts and cardinality problems.
+For an Elasticsearch log store, if your JSON log entries _follow different schemas_, configure the `ClusterLogForwarder` custom resource (CR) to group each JSON schema into a single output definition. This way, Elasticsearch uses a separate index for each schema.
+
+[IMPORTANT]
+====
+Because forwarding different schemas to the same index can cause type conflicts and cardinality problems, you must perform this configuration before you forward data to the Elasticsearch store.
+
+To avoid performance issues associated with having too many indices, consider keeping the number of possible schemas low by standardizing to common schemas.
+====
 
 .Procedure
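
The procedure steps are outside the diff context. For illustration only, a minimal sketch of a `ClusterLogForwarder` CR that parses JSON and forwards application logs to the default Elasticsearch log store; the metadata values and the `kubernetes.labels.logFormat` key are assumptions, not part of this commit:

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: instance
      namespace: openshift-logging
    spec:
      outputDefaults:
        elasticsearch:
          structuredTypeKey: kubernetes.labels.logFormat  # records sharing a label value (one schema) go to one index
          structuredTypeName: nologformat                  # fallback for records without the label
      pipelines:
        - inputRefs:
            - application
          outputRefs:
            - default                                      # the Elasticsearch log store managed by OpenShift Logging
          parse: json                                      # parse the JSON payload into a structured object

In this sketch, each distinct `logFormat` value maps to its own index, which is the grouping behavior the added paragraph describes.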

modules/cluster-logging-json-log-forwarding.adoc

Lines changed: 1 addition & 1 deletion
@@ -1,7 +1,7 @@
 [id="cluster-logging-json-log-forwarding_{context}"]
 = Parsing JSON logs
 
-Logs including JSON logs are usually represented as a string inside the `message` field. That makes it hard for users to query specific fields inside a JSON document. OpenShift Logging's Log Forwarding API enables you to parse JSON logs into a structured object and forward them to either Red Hat's managed Elasticsearch or any other third-party system supported by the Log Forwarding API.
+Logs including JSON logs are usually represented as a string inside the `message` field. That makes it hard for users to query specific fields inside a JSON document. OpenShift Logging's Log Forwarding API enables you to parse JSON logs into a structured object and forward them to either OpenShift Logging-managed Elasticsearch or any other third-party system supported by the Log Forwarding API.
 
 To illustrate how this works, suppose that you have the following structured JSON log entry.
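
The example entry itself is outside the diff context. As an illustration with invented values, a record whose JSON payload arrives as a string in the `message` field could, after parsing, carry a structured copy alongside it (the `structured` field name and the sample values below are assumptions):

    "message": "{\"level\":\"info\",\"name\":\"checkout\",\"duration_ms\":42}",
    "structured": {
      "level": "info",
      "name": "checkout",
      "duration_ms": 42
    }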

modules/cluster-logging-maintenance-support-list.adoc

Lines changed: 0 additions & 2 deletions
@@ -27,8 +27,6 @@ Explicitly unsupported cases include:
 
 * *Throttling log collection*. You cannot throttle down the rate at which the logs are read in by the log collector.
 
-* *Configuring log collection JSON parsing*. You cannot format log messages in JSON.
-
 * *Configuring the logging collector using environment variables*. You cannot use environment variables to modify the log collector.
 
 * *Configuring how the log collector normalizes logs*. You cannot modify default log normalization.

0 commit comments
