Commit 775cd09

OBSDOCS-616 - Removal of parse:json from CLF code examples - structuredkey fix w/ peer rev

1 parent: 718b75c

10 files changed: +49 −60 lines

modules/cluster-logging-collector-log-forward-es.adoc

Lines changed: 4 additions & 6 deletions
@@ -46,10 +46,9 @@ spec:
     outputRefs:
     - elasticsearch-secure <10>
     - default <11>
-    parse: json <12>
     labels:
-      myLabel: "myValue" <13>
-  - name: infrastructure-audit-logs <14>
+      myLabel: "myValue" <12>
+  - name: infrastructure-audit-logs <13>
     inputRefs:
     - infrastructure
     outputRefs:
@@ -68,9 +67,8 @@ spec:
 <9> Specify which log types to forward by using the pipeline: `application,` `infrastructure`, or `audit`.
 <10> Specify the name of the output to use when forwarding logs with this pipeline.
 <11> Optional: Specify the `default` output to send the logs to the internal Elasticsearch instance.
-<12> Optional: Specify whether to forward structured JSON log entries as JSON objects in the `structured` field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the `structured` field and instead sends the log entry to the default index, `app-00000x`.
-<13> Optional: String. One or more labels to add to the logs.
-<14> Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
+<12> Optional: String. One or more labels to add to the logs.
+<13> Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
 ** A name to describe the pipeline.
 ** The `inputRefs` is the log type to forward by using the pipeline: `application,` `infrastructure`, or `audit`.
 ** The `outputRefs` is the name of the output to use.
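For orientation, the pipeline that results from this change can be sketched as follows. The output names, the label, and the pipeline name `infrastructure-audit-logs` come from the hunks above; the surrounding fields and the first pipeline's name are assumptions added for illustration:

```yaml
# Sketch of the edited pipeline block after `parse: json` is removed
# and the callouts shift from <10>-<14> to <10>-<13>.
pipelines:
- name: application-logs            # assumed name, not shown in the hunk
  inputRefs:
  - application
  outputRefs:
  - elasticsearch-secure            # <10> named output for this pipeline
  - default                         # <11> internal Elasticsearch instance
  labels:
    myLabel: "myValue"              # <12> optional string label
- name: infrastructure-audit-logs   # <13> second pipeline from the diff
  inputRefs:
  - infrastructure
```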

modules/cluster-logging-collector-log-forward-fluentd.adoc

Lines changed: 4 additions & 6 deletions
@@ -40,10 +40,9 @@ spec:
     outputRefs:
     - fluentd-server-secure <10>
     - default <11>
-    parse: json <12>
     labels:
-      clusterId: "C1234" <13>
-  - name: forward-to-fluentd-insecure <14>
+      clusterId: "C1234" <12>
+  - name: forward-to-fluentd-insecure <13>
     inputRefs:
     - infrastructure
     outputRefs:
@@ -62,9 +61,8 @@ spec:
 <9> Specify which log types to forward by using the pipeline: `application,` `infrastructure`, or `audit`.
 <10> Specify the name of the output to use when forwarding logs with this pipeline.
 <11> Optional: Specify the `default` output to forward logs to the internal Elasticsearch instance.
-<12> Optional: Specify whether to forward structured JSON log entries as JSON objects in the `structured` field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the `structured` field and instead sends the log entry to the default index, `app-00000x`.
-<13> Optional: String. One or more labels to add to the logs.
-<14> Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
+<12> Optional: String. One or more labels to add to the logs.
+<13> Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
 ** A name to describe the pipeline.
 ** The `inputRefs` is the log type to forward by using the pipeline: `application,` `infrastructure`, or `audit`.
 ** The `outputRefs` is the name of the output to use.

modules/cluster-logging-collector-log-forward-kafka.adoc

Lines changed: 6 additions & 8 deletions
@@ -43,10 +43,9 @@ spec:
     - application
     outputRefs: <10>
     - app-logs
-    parse: json <11>
     labels:
-      logType: "application" <12>
-  - name: infra-topic <13>
+      logType: "application" <11>
+  - name: infra-topic <12>
     inputRefs:
     - infrastructure
     outputRefs:
@@ -58,7 +57,7 @@ spec:
     - audit
     outputRefs:
     - audit-logs
-    - default <14>
+    - default <13>
     labels:
       logType: "audit"
 ----
@@ -72,14 +71,13 @@ spec:
 <8> Optional: Specify a name for the pipeline.
 <9> Specify which log types to forward by using the pipeline: `application,` `infrastructure`, or `audit`.
 <10> Specify the name of the output to use when forwarding logs with this pipeline.
-<11> Optional: Specify whether to forward structured JSON log entries as JSON objects in the `structured` field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the `structured` field and instead sends the log entry to the default index, `app-00000x`.
-<12> Optional: String. One or more labels to add to the logs.
-<13> Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
+<11> Optional: String. One or more labels to add to the logs.
+<12> Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
 ** A name to describe the pipeline.
 ** The `inputRefs` is the log type to forward by using the pipeline: `application,` `infrastructure`, or `audit`.
 ** The `outputRefs` is the name of the output to use.
 ** Optional: String. One or more labels to add to the logs.
-<14> Optional: Specify `default` to forward logs to the internal Elasticsearch instance.
+<13> Optional: Specify `default` to forward logs to the internal Elasticsearch instance.
 
 . Optional: To forward a single output to multiple Kafka brokers, specify an array of Kafka brokers as shown in the following example:
 +
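The step above mentions specifying an array of Kafka brokers for a single output; the example itself is truncated out of this hunk. A hedged sketch of that shape, with illustrative broker URLs and topic name that are not taken from this diff, might look like:

```yaml
# One Kafka output fanning out to several brokers; URLs and topic are placeholders.
outputs:
- name: app-logs
  type: kafka
  kafka:
    brokers:
    - tls://kafka-broker1.example.com:9093/
    - tls://kafka-broker2.example.com:9093/
    topic: app-topic    # all listed brokers receive the same topic
```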

modules/cluster-logging-collector-log-forward-logs-from-application-pods.adoc

Lines changed: 8 additions & 10 deletions
@@ -24,30 +24,28 @@ spec:
   pipelines:
   - inputRefs: [ myAppLogData ] <3>
     outputRefs: [ default ] <4>
-    parse: json <5>
-  inputs: <6>
+  inputs: <5>
   - name: myAppLogData
     application:
       selector:
-        matchLabels: <7>
+        matchLabels: <6>
           environment: production
           app: nginx
-      namespaces: <8>
+      namespaces: <7>
       - app1
       - app2
-  outputs: <9>
+  outputs: <8>
   - default
 ...
 ----
 <1> The name of the `ClusterLogForwarder` CR must be `instance`.
 <2> The namespace for the `ClusterLogForwarder` CR must be `openshift-logging`.
 <3> Specify one or more comma-separated values from `inputs[].name`.
 <4> Specify one or more comma-separated values from `outputs[]`.
-<5> Optional: Specify whether to forward structured JSON log entries as JSON objects in the `structured` field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the `structured` field and instead sends the log entry to the default index, `app-00000x`.
-<6> Define a unique `inputs[].name` for each application that has a unique set of pod labels.
-<7> Specify the key-value pairs of pod labels whose log data you want to gather. You must specify both a key and value, not just a key. To be selected, the pods must match all the key-value pairs.
-<8> Optional: Specify one or more namespaces.
-<9> Specify one or more outputs to forward your log data to. The optional `default` output shown here sends log data to the internal Elasticsearch instance.
+<5> Define a unique `inputs[].name` for each application that has a unique set of pod labels.
+<6> Specify the key-value pairs of pod labels whose log data you want to gather. You must specify both a key and value, not just a key. To be selected, the pods must match all the key-value pairs.
+<7> Optional: Specify one or more namespaces.
+<8> Specify one or more outputs to forward your log data to. The optional `default` output shown here sends log data to the internal Elasticsearch instance.
 
 . Optional: To restrict the gathering of log data to specific namespaces, use `inputs[].name.application.namespaces`, as shown in the preceding example.
 

modules/cluster-logging-collector-log-forward-project.adoc

Lines changed: 4 additions & 6 deletions
@@ -42,10 +42,9 @@ spec:
     - my-app-logs
     outputRefs: <10>
     - fluentd-server-insecure
-    parse: json <11>
     labels:
-      project: "my-project" <12>
-  - name: forward-to-fluentd-secure <13>
+      project: "my-project" <11>
+  - name: forward-to-fluentd-secure <12>
     inputRefs:
     - application
     - audit
@@ -66,9 +65,8 @@ spec:
 <8> Configuration for a pipeline to use the input to send project application logs to an external Fluentd instance.
 <9> The `my-app-logs` input.
 <10> The name of the output to use.
-<11> Optional: Specify whether to forward structured JSON log entries as JSON objects in the `structured` field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the `structured` field and instead sends the log entry to the default index, `app-00000x`.
-<12> Optional: String. One or more labels to add to the logs.
-<13> Configuration for a pipeline to send logs to other log aggregators.
+<11> Optional: String. One or more labels to add to the logs.
+<12> Configuration for a pipeline to send logs to other log aggregators.
 ** Optional: Specify a name for the pipeline.
 ** Specify which log types to forward by using the pipeline: `application,` `infrastructure`, or `audit`.
 ** Specify the name of the output to use when forwarding logs with this pipeline.

modules/cluster-logging-collector-log-forward-syslog.adoc

Lines changed: 4 additions & 6 deletions
@@ -51,11 +51,10 @@ spec:
     outputRefs: <10>
     - rsyslog-east
     - default <11>
-    parse: json <12>
     labels:
-      secure: "true" <13>
+      secure: "true" <12>
       syslog: "east"
-  - name: syslog-west <14>
+  - name: syslog-west <13>
     inputRefs:
     - infrastructure
     outputRefs:
@@ -75,9 +74,8 @@ spec:
 <9> Specify which log types to forward by using the pipeline: `application,` `infrastructure`, or `audit`.
 <10> Specify the name of the output to use when forwarding logs with this pipeline.
 <11> Optional: Specify the `default` output to forward logs to the internal Elasticsearch instance.
-<12> Optional: Specify whether to forward structured JSON log entries as JSON objects in the `structured` field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the `structured` field and instead sends the log entry to the default index, `app-00000x`.
-<13> Optional: String. One or more labels to add to the logs. Quote values like "true" so they are recognized as string values, not as a boolean.
-<14> Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
+<12> Optional: String. One or more labels to add to the logs. Quote values like "true" so they are recognized as string values, not as a boolean.
+<13> Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
 ** A name to describe the pipeline.
 ** The `inputRefs` is the log type to forward by using the pipeline: `application,` `infrastructure`, or `audit`.
 ** The `outputRefs` is the name of the output to use.

modules/cluster-logging-collector-log-forwarding-about.adoc

Lines changed: 8 additions & 10 deletions
@@ -90,23 +90,22 @@ spec:
     outputRefs:
     - elasticsearch-secure
     - default
-    parse: json <8>
     labels:
-      secure: "true" <9>
+      secure: "true" <8>
       datacenter: "east"
-  - name: infrastructure-logs <10>
+  - name: infrastructure-logs <9>
     inputRefs:
     - infrastructure
     outputRefs:
     - elasticsearch-insecure
     labels:
       datacenter: "west"
-  - name: my-app <11>
+  - name: my-app <10>
     inputRefs:
     - my-app-logs
     outputRefs:
    - default
-  - inputRefs: <12>
+  - inputRefs: <11>
     - application
     outputRefs:
     - kafka-app
@@ -134,15 +133,14 @@ spec:
 ** The `inputRefs` is the log type, in this example `audit`.
 ** The `outputRefs` is the name of the output to use, in this example `elasticsearch-secure` to forward to the secure Elasticsearch instance and `default` to forward to the internal Elasticsearch instance.
 ** Optional: Labels to add to the logs.
-<8> Optional: Specify whether to forward structured JSON log entries as JSON objects in the `structured` field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the `structured` field and instead sends the log entry to the default index, `app-00000x`.
-<9> Optional: String. One or more labels to add to the logs. Quote values like "true" so they are recognized as string values, not as a boolean.
-<10> Configuration for a pipeline to send infrastructure logs to the insecure external Elasticsearch instance.
-<11> Configuration for a pipeline to send logs from the `my-project` project to the internal Elasticsearch instance.
+<8> Optional: String. One or more labels to add to the logs. Quote values like "true" so they are recognized as string values, not as a boolean.
+<9> Configuration for a pipeline to send infrastructure logs to the insecure external Elasticsearch instance.
+<10> Configuration for a pipeline to send logs from the `my-project` project to the internal Elasticsearch instance.
 ** A name to describe the pipeline.
 ** The `inputRefs` is a specific input: `my-app-logs`.
 ** The `outputRefs` is `default`.
 ** Optional: String. One or more labels to add to the logs.
-<12> Configuration for a pipeline to send logs to the Kafka broker, with no pipeline name:
+<11> Configuration for a pipeline to send logs to the Kafka broker, with no pipeline name:
 ** The `inputRefs` is the log type, in this example `application`.
 ** The `outputRefs` is the name of the output to use.
 ** Optional: String. One or more labels to add to the logs.

modules/cluster-logging-configuration-of-json-log-data-for-default-elasticsearch.adoc

Lines changed: 2 additions & 2 deletions
@@ -12,11 +12,11 @@ If you forward JSON logs to the default Elasticsearch instance managed by OpenSh
 
 You can use the following structure types in the `ClusterLogForwarder` CR to construct index names for the Elasticsearch log store:
 
-* `structuredTypeKey` (string, optional) is the name of a message field. The value of that field, if present, is used to construct the index name.
+* `structuredTypeKey` is the name of a message field. The value of that field is used to construct the index name.
 ** `kubernetes.labels.<key>` is the Kubernetes pod label whose value is used to construct the index name.
 ** `openshift.labels.<key>` is the `pipeline.label.<key>` element in the `ClusterLogForwarder` CR whose value is used to construct the index name.
 ** `kubernetes.container_name` uses the container name to construct the index name.
-* `structuredTypeName`: (string, optional) If `structuredTypeKey` is not set or its key is not present, OpenShift Logging uses the value of `structuredTypeName` as the structured type. When you use both `structuredTypeKey` and `structuredTypeName` together, `structuredTypeName` provides a fallback index name if the key in `structuredTypeKey` is missing from the JSON log data.
+* `structuredTypeName`: If the `structuredTypeKey` field is not set or its key is not present, the `structuredTypeName` value is used as the structured type. When you use both the `structuredTypeKey` field and the `structuredTypeName` field together, the `structuredTypeName` value provides a fallback index name if the key in the `structuredTypeKey` field is missing from the JSON log data.
 
 [NOTE]
 ====
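A minimal sketch of how the two structure types combine, reusing the same values this commit adds to cluster-logging-forwarding-separate-indices.adoc:

```yaml
# structuredTypeKey picks the index from the pod's logFormat label;
# structuredTypeName is the fallback when that label is absent.
outputDefaults:
  elasticsearch:
    structuredTypeKey: kubernetes.labels.logFormat
    structuredTypeName: nologformat
```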

modules/cluster-logging-forwarding-json-logs-to-the-default-elasticsearch.adoc

Lines changed: 4 additions & 4 deletions
@@ -28,13 +28,13 @@ pipelines:
     parse: json
 ----
 
-. Optional: Use `structuredTypeKey` to specify one of the log record fields, as described in the documentation about "Configuring JSON log data for Elasticsearch". Otherwise, remove this line.
+. Use the `structuredTypeKey` field to specify one of the log record fields.
 
-. Optional: Use `structuredTypeName` to specify a `<name>`, as described in the documentation about "Configuring JSON log data for Elasticsearch". Otherwise, remove this line.
+. Use the `structuredTypeName` field to specify a name.
 +
 [IMPORTANT]
 ====
-To parse JSON logs, you must set either `structuredTypeKey` or `structuredTypeName`, or both `structuredTypeKey` and `structuredTypeName`.
+To parse JSON logs, you must set both the `structuredTypeKey` and `structuredTypeName` fields.
 ====
 
 . For `inputRefs`, specify which log types to forward by using that pipeline, such as `application,` `infrastructure`, or `audit`.
@@ -48,7 +48,7 @@ To parse JSON logs, you must set either `structuredTypeKey` or `structuredTypeNa
 $ oc create -f <filename>.yaml
 ----
 +
-The Red Hat OpenShift Logging Operator redeploys the Fluentd pods. However, if they do not redeploy, delete the Fluentd pods to force them to redeploy.
+The Red Hat OpenShift Logging Operator redeploys the collector pods. However, if they do not redeploy, delete the collector pods to force them to redeploy.
 +
 [source,terminal]
 ----
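Putting the steps of this procedure together, a minimal `ClusterLogForwarder` CR might look like the following sketch. The `structuredTypeKey` and `structuredTypeName` values are assumptions borrowed from the separate-indices example elsewhere in this commit, not required values:

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance                  # required name
  namespace: openshift-logging    # required namespace
spec:
  outputDefaults:
    elasticsearch:
      structuredTypeKey: kubernetes.labels.logFormat   # assumed key
      structuredTypeName: nologformat                  # assumed fallback
  pipelines:
  - inputRefs:
    - application
    outputRefs:
    - default
    parse: json
```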

modules/cluster-logging-forwarding-separate-indices.adoc

Lines changed: 5 additions & 2 deletions
@@ -29,7 +29,9 @@ metadata:
 spec:
   outputDefaults:
     elasticsearch:
-      enableStructuredContainerLogs: true <1>
+      structuredTypeKey: kubernetes.labels.logFormat <1>
+      structuredTypeName: nologformat
+      enableStructuredContainerLogs: true <2>
   pipelines:
   - inputRefs:
     - application
@@ -38,7 +40,8 @@ spec:
     - default
     parse: json
 ----
-<1> Enables multi-container outputs.
+<1> Uses the value of the key-value pair that is formed by the Kubernetes `logFormat` label.
+<2> Enables multi-container outputs.
 
 . Create or edit a YAML file that defines the `Pod` CR object:
 +