Commit 56ba8b6

Merge pull request #35411 from rolfedh/RHDEVDOCS-3180
RHDEVDOCS-3180 Issue in ClusterLogForwarder custom resource sample
2 parents c8c0310 + 8a4dca0 commit 56ba8b6

10 files changed (+69 -75 lines)

modules/cluster-logging-collector-log-forward-es.adoc

Lines changed: 5 additions & 5 deletions
@@ -51,14 +51,14 @@ spec:
     - default <10>
     parse: json <11>
     labels:
-      myLabel: myValue <12>
+      myLabel: "myValue" <12>
   - name: infrastructure-audit-logs <13>
     inputRefs:
     - infrastructure
     outputRefs:
     - elasticsearch-insecure
     labels:
-      logs: audit-infra
+      logs: "audit-infra"
 ----
 <1> The name of the `ClusterLogForwarder` CR must be `instance`.
 <2> The namespace for the `ClusterLogForwarder` CR must be `openshift-logging`.
@@ -71,12 +71,12 @@ spec:
 <9> Specify the output to use with that pipeline for forwarding the logs.
 <10> Optional: Specify the `default` output to send the logs to the internal Elasticsearch instance.
 <11> Optional: Forward structured JSON log entries as JSON objects in the `structured` field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the `structured` field and instead sends the log entry to the default index, `app-00000x`.
-<12> Optional: One or more labels to add to the logs.
-<13> Optional: Configure multiple outputs to forward logs to other external log aggregtors of any supported type:
+<12> Optional: String. One or more labels to add to the logs.
+<13> Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
 ** Optional. A name to describe the pipeline.
 ** The `inputRefs` is the log type to forward using that pipeline: `application,` `infrastructure`, or `audit`.
 ** The `outputRefs` is the name of the output to use.
-** Optional: One or more labels to add to the logs.
+** Optional: String. One or more labels to add to the logs.
 
 . Create the CR object:
 +

modules/cluster-logging-collector-log-forward-fluentd.adoc

Lines changed: 5 additions & 5 deletions
@@ -49,14 +49,14 @@ spec:
     - default <10>
     parse: json <11>
     labels:
-      clusterId: C1234 <12>
+      clusterId: "C1234" <12>
   - name: forward-to-fluentd-insecure <13>
     inputRefs:
     - infrastructure
     outputRefs:
     - fluentd-server-insecure
     labels:
-      clusterId: C1234
+      clusterId: "C1234"
 ----
 <1> The name of the `ClusterLogForwarder` CR must be `instance`.
 <2> The namespace for the `ClusterLogForwarder` CR must be `openshift-logging`.
@@ -69,12 +69,12 @@ spec:
 <9> Specify the output to use with that pipeline for forwarding the logs.
 <10> Optional. Specify the `default` output to forward logs to the internal Elasticsearch instance.
 <11> Optional: Forward structured JSON log entries as JSON objects in the `structured` field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the `structured` field and instead sends the log entry to the default index, `app-00000x`.
-<12> Optional. One or more labels to add to the logs.
-<13> Optional: Configure multiple outputs to forward logs to other external log aggregtors of any supported type:
+<12> Optional: String. One or more labels to add to the logs.
+<13> Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
 ** Optional. A name to describe the pipeline.
 ** The `inputRefs` is the log type to forward using that pipeline: `application,` `infrastructure`, or `audit`.
 ** The `outputRefs` is the name of the output to use.
-** Optional: One or more labels to add to the logs.
+** Optional: String. One or more labels to add to the logs.
 
 . Create the CR object:
 +

modules/cluster-logging-collector-log-forward-kafka.adoc

Lines changed: 6 additions & 6 deletions
@@ -43,22 +43,22 @@ spec:
     - app-logs
     parse: json <11>
     labels:
-      logType: application <12>
+      logType: "application" <12>
   - name: infra-topic <13>
     inputRefs:
     - infrastructure
     outputRefs:
     - infra-logs
     labels:
-      logType: infra
+      logType: "infra"
   - name: audit-topic
     inputRefs:
     - audit
     outputRefs:
     - audit-logs
     - default <14>
     labels:
-      logType: audit
+      logType: "audit"
 ----
 <1> The name of the `ClusterLogForwarder` CR must be `instance`.
 <2> The namespace for the `ClusterLogForwarder` CR must be `openshift-logging`.
@@ -71,12 +71,12 @@ spec:
 <9> Specify which log types should be forwarded using that pipeline: `application,` `infrastructure`, or `audit`.
 <10> Specify the output to use with that pipeline for forwarding the logs.
 <11> Optional: Forward structured JSON log entries as JSON objects in the `structured` field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the `structured` field and instead sends the log entry to the default index, `app-00000x`.
-<12> Optional: One or more labels to add to the logs.
-<13> Optional: Configure multiple outputs to forward logs to other external log aggregtors of any supported type:
+<12> Optional: String. One or more labels to add to the logs.
+<13> Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
 ** Optional. A name to describe the pipeline.
 ** The `inputRefs` is the log type to forward using that pipeline: `application,` `infrastructure`, or `audit`.
 ** The `outputRefs` is the name of the output to use.
-** Optional: One or more labels to add to the logs.
+** Optional: String. One or more labels to add to the logs.
 <14> Optional: Specify `default` to forward logs to the internal Elasticsearch instance.
 
 . Optional: To forward a single output to multiple Kafka brokers, specify an array of Kafka brokers as shown in this example:

modules/cluster-logging-collector-log-forward-project.adoc

Lines changed: 5 additions & 5 deletions
@@ -46,8 +46,8 @@ spec:
     outputRefs: <10>
     - fluentd-server-insecure
     parse: json <11>
-    labels: <12>
-      project: my-project
+    labels:
+      project: "my-project" <12>
   - name: forward-to-fluentd-secure <13>
     inputRefs:
     - application
@@ -57,7 +57,7 @@ spec:
     - fluentd-server-secure
     - default
     labels:
-      clusterId: C1234
+      clusterId: "C1234"
 ----
 <1> The name of the `ClusterLogForwarder` CR must be `instance`.
 <2> The namespace for the `ClusterLogForwarder` CR must be `openshift-logging`.
@@ -70,13 +70,13 @@ spec:
 <9> The `my-app-logs` input.
 <10> The name of the output to use.
 <11> Optional: Forward structured JSON log entries as JSON objects in the `structured` field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the `structured` field and instead sends the log entry to the default index, `app-00000x`.
-<12> Optional: A label to add to the logs.
+<12> Optional: String. One or more labels to add to the logs.
 <13> Configuration for a pipeline to send logs to other log aggregators.
 ** Optional: Specify a name for the pipeline.
 ** Specify which log types should be forwarded using that pipeline: `application,` `infrastructure`, or `audit`.
 ** Specify the output to use with that pipeline for forwarding the logs.
 ** Optional: Specify the `default` output to forward logs to the internal Elasticsearch instance.
-** Optional: One or more labels to add to the logs.
+** Optional: String. One or more labels to add to the logs.
 
 . Create the CR object:
 +

modules/cluster-logging-collector-log-forward-syslog.adoc

Lines changed: 6 additions & 6 deletions
@@ -61,16 +61,16 @@ spec:
     - default <11>
     parse: json <12>
     labels:
-      syslog: east <13>
-      secure: true
+      secure: "true" <13>
+      syslog: "east"
   - name: syslog-west <14>
     inputRefs:
     - infrastructure
     outputRefs:
     - rsyslog-west
     - default
     labels:
-      syslog: west
+      syslog: "west"
 ----
 <1> The name of the `ClusterLogForwarder` CR must be `instance`.
 <2> The namespace for the `ClusterLogForwarder` CR must be `openshift-logging`.
@@ -84,12 +84,12 @@ spec:
 <10> Specify the output to use with that pipeline for forwarding the logs.
 <11> Optional: Specify the `default` output to forward logs to the internal Elasticsearch instance.
 <12> Optional: Forward structured JSON log entries as JSON objects in the `structured` field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the `structured` field and instead sends the log entry to the default index, `app-00000x`.
-<13> Optional: One or more labels to add to the logs.
-<14> Optional: Configure multiple outputs to forward logs to other external log aggregtors of any supported type:
+<13> Optional: String. One or more labels to add to the logs. Quote values like "true" so they are recognized as string values, not as a boolean.
+<14> Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
 ** Optional. A name to describe the pipeline.
 ** The `inputRefs` is the log type to forward using that pipeline: `application,` `infrastructure`, or `audit`.
 ** The `outputRefs` is the name of the output to use.
-** Optional: One or more labels to add to the logs.
+** Optional: String. One or more labels to add to the logs.
 
 . Create the CR object:
 +
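The recurring fix in these diffs, quoting label values, guards against YAML's implicit typing: an unquoted plain scalar such as `true` or `1234` is parsed as a boolean or an integer rather than a string. The following sketch mimics (in simplified form, not using the real PyYAML resolver) how a YAML 1.1 parser types plain scalars:

```python
import re

# Simplified sketch of YAML 1.1 implicit typing for plain (unquoted)
# scalars; real resolvers such as PyYAML cover more patterns (floats,
# timestamps, null, and so on).
_BOOLS = {s for w in ("yes", "no", "true", "false", "on", "off")
          for s in (w, w.capitalize(), w.upper())}

def resolve_plain_scalar(text: str) -> type:
    """Return the Python type a YAML 1.1 parser would infer for `text`."""
    if text in _BOOLS:
        return bool
    if re.fullmatch(r"[-+]?\d+", text):
        return int
    return str

# Unquoted `secure: true` yields a boolean, not the string "true";
# quoting the value (secure: "true") keeps it a string.
print(resolve_plain_scalar("true"))   # <class 'bool'>
print(resolve_plain_scalar("east"))   # <class 'str'>
print(resolve_plain_scalar("1234"))   # <class 'int'>
```

This is why `secure: "true"` must be quoted while `syslog: "east"` would survive unquoted; quoting all values consistently avoids having to reason about the resolver rules case by case.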

modules/cluster-logging-collector-log-forwarding-about.adoc

Lines changed: 13 additions & 12 deletions
@@ -90,26 +90,26 @@ spec:
     - default
     parse: json <8>
     labels:
-      datacenter: east
-      secure: true
-  - name: infrastructure-logs <9>
+      secure: "true" <9>
+      datacenter: "east"
+  - name: infrastructure-logs <10>
     inputRefs:
     - infrastructure
     outputRefs:
     - elasticsearch-insecure
     labels:
-      datacenter: west
-  - name: my-app <10>
+      datacenter: "west"
+  - name: my-app <11>
     inputRefs:
     - my-app-logs
     outputRefs:
     - default
-  - inputRefs: <11>
+  - inputRefs: <12>
     - application
     outputRefs:
     - kafka-app
     labels:
-      datacenter: south
+      datacenter: "south"
 ----
 <1> The name of the `ClusterLogForwarder` CR must be `instance`.
 <2> The namespace for the `ClusterLogForwarder` CR must be `openshift-logging`.
@@ -133,16 +133,17 @@ spec:
 ** The `outputRefs` is the name of the output to use, in this example `elasticsearch-secure` to forward to the secure Elasticsearch instance and `default` to forward to the internal Elasticsearch instance.
 ** Optional: Labels to add to the logs.
 <8> Optional: Forward structured JSON log entries as JSON objects in the `structured` field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the `structured` field and instead sends the log entry to the default index, `app-00000x`.
-<9> Configuration for a pipeline to send infrastructure logs to the insecure external Elasticsearch instance.
-<10> Configuration for a pipeline to send logs from the `my-project` project to the internal Elasticsearch instance.
+<9> Optional: String. One or more labels to add to the logs. Quote values like "true" so they are recognized as string values, not as a boolean.
+<10> Configuration for a pipeline to send infrastructure logs to the insecure external Elasticsearch instance.
+<11> Configuration for a pipeline to send logs from the `my-project` project to the internal Elasticsearch instance.
 ** Optional. A name to describe the pipeline.
 ** The `inputRefs` is a specific input: `my-app-logs`.
 ** The `outputRefs` is `default`.
-** Optional: A label to add to the logs.
-<11> Configuration for a pipeline to send logs to the Kafka broker, with no pipeline name:
+** Optional: String. One or more labels to add to the logs.
+<12> Configuration for a pipeline to send logs to the Kafka broker, with no pipeline name:
 ** The `inputRefs` is the log type, in this example `application`.
 ** The `outputRefs` is the name of the output to use.
-** Optional: A label to add to the logs.
+** Optional: String. One or more labels to add to the logs.
 
 [discrete]
 [id="cluster-logging-external-fluentd"]
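Since every label value in these pipelines is expected to reach the collector as a string, a small pre-flight check over the parsed CR can flag values that YAML coerced to another type before the CR is applied. This helper is hypothetical, a validation sketch rather than part of the Red Hat OpenShift Logging Operator:

```python
def check_label_values(pipelines):
    """Return (pipeline, key, value) tuples for label values that are not
    strings, e.g. an unquoted `true` that YAML parsed as a boolean.
    Hypothetical pre-flight helper, not part of the Logging Operator."""
    problems = []
    for p in pipelines:
        for key, value in p.get("labels", {}).items():
            if not isinstance(value, str):
                problems.append((p.get("name", "<unnamed>"), key, value))
    return problems

# Simulate a parsed `spec.pipelines` list in which `secure: true`
# was left unquoted and therefore parsed as a boolean:
pipelines = [
    {"name": "application-logs",
     "labels": {"secure": True, "datacenter": "east"}},
    {"name": "infrastructure-logs",
     "labels": {"datacenter": "west"}},
]
print(check_label_values(pipelines))  # [('application-logs', 'secure', True)]
```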

modules/cluster-logging-deploy-cli.adoc

Lines changed: 18 additions & 25 deletions
@@ -29,9 +29,9 @@ endif::[]
 
 To install the OpenShift Elasticsearch Operator and Red Hat OpenShift Logging Operator using the CLI:
 
-. Create a Namespace for the OpenShift Elasticsearch Operator.
+. Create a namespace for the OpenShift Elasticsearch Operator.
 
-.. Create a Namespace object YAML file (for example, `eo-namespace.yaml`) for the OpenShift Elasticsearch Operator:
+.. Create a namespace object YAML file (for example, `eo-namespace.yaml`) for the OpenShift Elasticsearch Operator:
 +
 [source,yaml]
 ----
@@ -44,17 +44,10 @@ metadata:
   labels:
     openshift.io/cluster-monitoring: "true" <2>
 ----
-<1> You must specify the `openshift-operators-redhat` Namespace. To prevent
-possible conflicts with metrics, you should configure the Prometheus Cluster
-Monitoring stack to scrape metrics from the `openshift-operators-redhat`
-Namespace and not the `openshift-operators` Namespace. The `openshift-operators`
-Namespace might contain Community Operators, which are untrusted and could publish
-a metric with the same name as an {product-title} metric, which would cause
-conflicts.
-<2> You must specify this label as shown to ensure that cluster monitoring
-scrapes the `openshift-operators-redhat` Namespace.
+<1> You must specify the `openshift-operators-redhat` namespace. To prevent possible conflicts with metrics, you should configure the Prometheus Cluster Monitoring stack to scrape metrics from the `openshift-operators-redhat` namespace and not the `openshift-operators` namespace. The `openshift-operators` namespace might contain community Operators, which are untrusted and could publish a metric with the same name as an {product-title} metric, which would cause conflicts.
+<2> String. You must specify this label as shown to ensure that cluster monitoring scrapes the `openshift-operators-redhat` namespace.
 
-.. Create the Namespace:
+.. Create the namespace:
 +
 [source,terminal]
 ----
@@ -68,9 +61,9 @@ For example:
 $ oc create -f eo-namespace.yaml
 ----
 
-. Create a Namespace for the Red Hat OpenShift Logging Operator:
+. Create a namespace for the Red Hat OpenShift Logging Operator:
 
-.. Create a Namespace object YAML file (for example, `olo-namespace.yaml`) for the Red Hat OpenShift Logging Operator:
+.. Create a namespace object YAML file (for example, `olo-namespace.yaml`) for the Red Hat OpenShift Logging Operator:
 +
 [source,yaml]
 ----
@@ -84,7 +77,7 @@ metadata:
     openshift.io/cluster-monitoring: "true"
 ----
 
-.. Create the Namespace:
+.. Create the namespace:
 +
 [source,terminal]
 ----
@@ -111,7 +104,7 @@ metadata:
   namespace: openshift-operators-redhat <1>
 spec: {}
 ----
-<1> You must specify the `openshift-operators-redhat` Namespace.
+<1> You must specify the `openshift-operators-redhat` namespace.
 
 .. Create an Operator Group object:
 +
@@ -128,7 +121,7 @@ $ oc create -f eo-og.yaml
 ----
 
 .. Create a Subscription object YAML file (for example, `eo-sub.yaml`) to
-subscribe a Namespace to the OpenShift Elasticsearch Operator.
+subscribe a namespace to the OpenShift Elasticsearch Operator.
 +
 .Example Subscription
 [source,yaml]
@@ -145,7 +138,7 @@ spec:
   sourceNamespace: "openshift-marketplace"
   name: "elasticsearch-operator"
 ----
-<1> You must specify the `openshift-operators-redhat` Namespace.
+<1> You must specify the `openshift-operators-redhat` namespace.
 <2> Specify `5.0`, `stable`, or `stable-5.<x>` as the channel. See the following note.
 <3> Specify `redhat-operators`. If your {product-title} cluster is installed on a restricted network, also known as a disconnected cluster,
 specify the name of the CatalogSource object created when you configured the Operator Lifecycle Manager (OLM).
@@ -172,7 +165,7 @@ For example:
 $ oc create -f eo-sub.yaml
 ----
 +
-The OpenShift Elasticsearch Operator is installed to the `openshift-operators-redhat` Namespace and copied to each project in the cluster.
+The OpenShift Elasticsearch Operator is installed to the `openshift-operators-redhat` namespace and copied to each project in the cluster.
 
 .. Verify the Operator installation:
 +
@@ -196,7 +189,7 @@ openshift-authentication elasticsearch-operator.5
 ...
 ----
 +
-There should be an OpenShift Elasticsearch Operator in each Namespace. The version number might be different than shown.
+There should be an OpenShift Elasticsearch Operator in each namespace. The version number might be different than shown.
 
 . Install the Red Hat OpenShift Logging Operator by creating the following objects:
 
@@ -213,7 +206,7 @@ spec:
   targetNamespaces:
   - openshift-logging <1>
 ----
-<1> You must specify the `openshift-logging` Namespace.
+<1> You must specify the `openshift-logging` namespace.
 
 .. Create an Operator Group object:
 +
@@ -230,7 +223,7 @@ $ oc create -f olo-og.yaml
 ----
 
 .. Create a Subscription object YAML file (for example, `olo-sub.yaml`) to
-subscribe a Namespace to the Red Hat OpenShift Logging Operator.
+subscribe a namespace to the Red Hat OpenShift Logging Operator.
 +
 [source,yaml]
 ----
@@ -245,7 +238,7 @@ spec:
   source: redhat-operators <3>
   sourceNamespace: openshift-marketplace
 ----
-<1> You must specify the `openshift-logging` Namespace.
+<1> You must specify the `openshift-logging` namespace.
 <2> Specify `5.0`, `stable`, or `stable-5.<x>` as the channel.
 <3> Specify `redhat-operators`. If your {product-title} cluster is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object you created when you configured the Operator Lifecycle Manager (OLM).
 +
@@ -261,11 +254,11 @@ For example:
 $ oc create -f olo-sub.yaml
 ----
 +
-The Red Hat OpenShift Logging Operator is installed to the `openshift-logging` Namespace.
+The Red Hat OpenShift Logging Operator is installed to the `openshift-logging` namespace.
 
 .. Verify the Operator installation.
 +
-There should be a Red Hat OpenShift Logging Operator in the `openshift-logging` Namespace. The Version number might be different than shown.
+There should be a Red Hat OpenShift Logging Operator in the `openshift-logging` namespace. The Version number might be different than shown.
 +
 [source,terminal]
 ----
modules/cluster-logging-deploy-multitenant.adoc

Lines changed: 1 addition & 1 deletion
@@ -52,5 +52,5 @@ spec:
   - from:
     - namespaceSelector:
         matchLabels:
-          project: openshift-operators-redhat
+          project: "openshift-operators-redhat"
 ----
