modules/cluster-logging-collector-log-forward-es.adoc (5 additions, 5 deletions)

```diff
@@ -51,14 +51,14 @@ spec:
      - default <10>
      parse: json <11>
      labels:
-       myLabel: myValue <12>
+       myLabel: "myValue" <12>
    - name: infrastructure-audit-logs <13>
      inputRefs:
      - infrastructure
      outputRefs:
      - elasticsearch-insecure
      labels:
-       logs: audit-infra
+       logs: "audit-infra"
 ----
 <1> The name of the `ClusterLogForwarder` CR must be `instance`.
 <2> The namespace for the `ClusterLogForwarder` CR must be `openshift-logging`.
@@ -71,12 +71,12 @@ spec:
 <9> Specify the output to use with that pipeline for forwarding the logs.
 <10> Optional: Specify the `default` output to send the logs to the internal Elasticsearch instance.
 <11> Optional: Forward structured JSON log entries as JSON objects in the `structured` field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the `structured` field and instead sends the log entry to the default index, `app-00000x`.
-<12> Optional: One or more labels to add to the logs.
-<13> Optional: Configure multiple outputs to forward logs to other external log aggregtors of any supported type:
+<12> Optional: String. One or more labels to add to the logs.
+<13> Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
 ** Optional. A name to describe the pipeline.
 ** The `inputRefs` is the log type to forward using that pipeline: `application,` `infrastructure`, or `audit`.
 ** The `outputRefs` is the name of the output to use.
-** Optional: One or more labels to add to the logs.
+** Optional: String. One or more labels to add to the logs.
```
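For orientation, the label change above sits inside a `ClusterLogForwarder` pipeline. A minimal sketch of the resulting pipeline entry, assuming the standard field layout (the names come from the example in the diff; surrounding `spec` fields are omitted):

```yaml
# Sketch only: label values are quoted so they are treated as strings.
pipelines:
- name: infrastructure-audit-logs
  inputRefs:
  - infrastructure
  outputRefs:
  - elasticsearch-insecure
  labels:
    logs: "audit-infra"
```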
modules/cluster-logging-collector-log-forward-fluentd.adoc (5 additions, 5 deletions)

```diff
@@ -49,14 +49,14 @@ spec:
      - default <10>
      parse: json <11>
      labels:
-       clusterId: C1234 <12>
+       clusterId: "C1234" <12>
    - name: forward-to-fluentd-insecure <13>
      inputRefs:
      - infrastructure
      outputRefs:
      - fluentd-server-insecure
      labels:
-       clusterId: C1234
+       clusterId: "C1234"
 ----
 <1> The name of the `ClusterLogForwarder` CR must be `instance`.
 <2> The namespace for the `ClusterLogForwarder` CR must be `openshift-logging`.
@@ -69,12 +69,12 @@ spec:
 <9> Specify the output to use with that pipeline for forwarding the logs.
 <10> Optional. Specify the `default` output to forward logs to the internal Elasticsearch instance.
 <11> Optional: Forward structured JSON log entries as JSON objects in the `structured` field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the `structured` field and instead sends the log entry to the default index, `app-00000x`.
-<12> Optional. One or more labels to add to the logs.
-<13> Optional: Configure multiple outputs to forward logs to other external log aggregtors of any supported type:
+<12> Optional: String. One or more labels to add to the logs.
+<13> Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
 ** Optional. A name to describe the pipeline.
 ** The `inputRefs` is the log type to forward using that pipeline: `application,` `infrastructure`, or `audit`.
 ** The `outputRefs` is the name of the output to use.
-** Optional: One or more labels to add to the logs.
+** Optional: String. One or more labels to add to the logs.
```
modules/cluster-logging-collector-log-forward-kafka.adoc (6 additions, 6 deletions)

```diff
@@ -43,22 +43,22 @@ spec:
      - app-logs
      parse: json <11>
      labels:
-       logType: application <12>
+       logType: "application" <12>
    - name: infra-topic <13>
      inputRefs:
      - infrastructure
      outputRefs:
      - infra-logs
      labels:
-       logType: infra
+       logType: "infra"
    - name: audit-topic
      inputRefs:
      - audit
      outputRefs:
      - audit-logs
      - default <14>
      labels:
-       logType: audit
+       logType: "audit"
 ----
 <1> The name of the `ClusterLogForwarder` CR must be `instance`.
 <2> The namespace for the `ClusterLogForwarder` CR must be `openshift-logging`.
@@ -71,12 +71,12 @@ spec:
 <9> Specify which log types should be forwarded using that pipeline: `application,` `infrastructure`, or `audit`.
 <10> Specify the output to use with that pipeline for forwarding the logs.
 <11> Optional: Forward structured JSON log entries as JSON objects in the `structured` field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the `structured` field and instead sends the log entry to the default index, `app-00000x`.
-<12> Optional: One or more labels to add to the logs.
-<13> Optional: Configure multiple outputs to forward logs to other external log aggregtors of any supported type:
+<12> Optional: String. One or more labels to add to the logs.
+<13> Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
 ** Optional. A name to describe the pipeline.
 ** The `inputRefs` is the log type to forward using that pipeline: `application,` `infrastructure`, or `audit`.
 ** The `outputRefs` is the name of the output to use.
-** Optional: One or more labels to add to the logs.
+** Optional: String. One or more labels to add to the logs.
 <14> Optional: Specify `default` to forward logs to the internal Elasticsearch instance.
 
 . Optional: To forward a single output to multiple Kafka brokers, specify an array of Kafka brokers as shown in this example:
```
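The multi-broker example that the final context line refers to falls outside this diff. The following is a hedged sketch of a Kafka output with a brokers array; the broker URLs, secret name, and topic are placeholders, and the field layout assumes the standard `ClusterLogForwarder` Kafka output:

```yaml
# Hypothetical Kafka output fanning a single output out to two brokers.
spec:
  outputs:
  - name: app-logs
    type: kafka
    secret:
      name: kafka-secret
    kafka:
      brokers:
      - tls://kafka-broker1.example.com:9093/
      - tls://kafka-broker2.example.com:9093/
      topic: app-topic
```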
modules/cluster-logging-collector-log-forward-project.adoc (5 additions, 5 deletions)

```diff
@@ -46,8 +46,8 @@ spec:
      outputRefs: <10>
      - fluentd-server-insecure
      parse: json <11>
-     labels: <12>
-       project: my-project
+     labels:
+       project: "my-project" <12>
    - name: forward-to-fluentd-secure <13>
      inputRefs:
      - application
@@ -57,7 +57,7 @@ spec:
      - fluentd-server-secure
      - default
      labels:
-       clusterId: C1234
+       clusterId: "C1234"
 ----
 <1> The name of the `ClusterLogForwarder` CR must be `instance`.
 <2> The namespace for the `ClusterLogForwarder` CR must be `openshift-logging`.
@@ -70,13 +70,13 @@ spec:
 <9> The `my-app-logs` input.
 <10> The name of the output to use.
 <11> Optional: Forward structured JSON log entries as JSON objects in the `structured` field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the `structured` field and instead sends the log entry to the default index, `app-00000x`.
-<12> Optional: A label to add to the logs.
+<12> Optional: String. One or more labels to add to the logs.
 <13> Configuration for a pipeline to send logs to other log aggregators.
 ** Optional: Specify a name for the pipeline.
 ** Specify which log types should be forwarded using that pipeline: `application,` `infrastructure`, or `audit`.
 ** Specify the output to use with that pipeline for forwarding the logs.
 ** Optional: Specify the `default` output to forward logs to the internal Elasticsearch instance.
-** Optional: One or more labels to add to the logs.
+** Optional: String. One or more labels to add to the logs.
```
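Callout <9> refers to the `my-app-logs` input, which is defined earlier in this module but not shown in the diff. A sketch of such a project-scoped input, assuming the standard `inputs` selector fields (the project name is a placeholder):

```yaml
# Hypothetical input that limits application logs to a single project.
inputs:
- name: my-app-logs
  application:
    namespaces:
    - my-project
```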
modules/cluster-logging-collector-log-forward-syslog.adoc (6 additions, 6 deletions)

```diff
@@ -61,16 +61,16 @@ spec:
      - default <11>
      parse: json <12>
      labels:
-       syslog: east <13>
-       secure: true
+       secure: "true" <13>
+       syslog: "east"
    - name: syslog-west <14>
      inputRefs:
      - infrastructure
      outputRefs:
      - rsyslog-west
      - default
      labels:
-       syslog: west
+       syslog: "west"
 ----
 <1> The name of the `ClusterLogForwarder` CR must be `instance`.
 <2> The namespace for the `ClusterLogForwarder` CR must be `openshift-logging`.
@@ -84,12 +84,12 @@ spec:
 <10> Specify the output to use with that pipeline for forwarding the logs.
 <11> Optional: Specify the `default` output to forward logs to the internal Elasticsearch instance.
 <12> Optional: Forward structured JSON log entries as JSON objects in the `structured` field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the `structured` field and instead sends the log entry to the default index, `app-00000x`.
-<13> Optional: One or more labels to add to the logs.
-<14> Optional: Configure multiple outputs to forward logs to other external log aggregtors of any supported type:
+<13> Optional: String. One or more labels to add to the logs. Quote values like "true" so they are recognized as string values, not as a boolean.
+<14> Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
 ** Optional. A name to describe the pipeline.
 ** The `inputRefs` is the log type to forward using that pipeline: `application,` `infrastructure`, or `audit`.
 ** The `outputRefs` is the name of the output to use.
-** Optional: One or more labels to add to the logs.
+** Optional: String. One or more labels to add to the logs.
```
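The quoting rule added in callout <13> comes from YAML implicit typing: an unquoted `true` is resolved to a boolean, while `"true"` stays a string. The sketch below mimics the YAML 1.1 plain-scalar boolean rule for illustration; it is not the collector's actual parser.

```python
# Why label values like "true" must be quoted in YAML: a loader resolves
# unquoted plain scalars such as true/false to booleans, but a quoted
# scalar always stays a string. Simplified model of YAML 1.1 typing.
TRUE_WORDS = {"true", "yes", "on"}
FALSE_WORDS = {"false", "no", "off"}

def resolve_scalar(raw: str):
    """Return the value a YAML loader would produce for a scalar node."""
    if raw.startswith('"') and raw.endswith('"') and len(raw) >= 2:
        return raw[1:-1]          # quoted: always a string
    lowered = raw.lower()
    if lowered in TRUE_WORDS:
        return True               # plain scalar: implicit boolean
    if lowered in FALSE_WORDS:
        return False
    return raw                    # anything else stays a string

print(resolve_scalar("true"))     # boolean True: rejected for a string-typed label
print(resolve_scalar('"true"'))   # the string 'true': a valid label value
```

So `secure: true` would set a boolean where the label schema expects a string, which is why the diff rewrites it as `secure: "true"`.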
modules/cluster-logging-collector-log-forwarding-about.adoc (13 additions, 12 deletions)

```diff
@@ -90,26 +90,26 @@ spec:
      - default
      parse: json <8>
      labels:
-       datacenter: east
-       secure: true
-   - name: infrastructure-logs <9>
+       secure: "true" <9>
+       datacenter: "east"
+   - name: infrastructure-logs <10>
      inputRefs:
      - infrastructure
      outputRefs:
      - elasticsearch-insecure
      labels:
-       datacenter: west
-   - name: my-app <10>
+       datacenter: "west"
+   - name: my-app <11>
      inputRefs:
      - my-app-logs
      outputRefs:
      - default
-   - inputRefs: <11>
+   - inputRefs: <12>
      - application
      outputRefs:
      - kafka-app
      labels:
-       datacenter: south
+       datacenter: "south"
 ----
 <1> The name of the `ClusterLogForwarder` CR must be `instance`.
 <2> The namespace for the `ClusterLogForwarder` CR must be `openshift-logging`.
@@ -133,16 +133,17 @@ spec:
 ** The `outputRefs` is the name of the output to use, in this example `elasticsearch-secure` to forward to the secure Elasticsearch instance and `default` to forward to the internal Elasticsearch instance.
 ** Optional: Labels to add to the logs.
 <8> Optional: Forward structured JSON log entries as JSON objects in the `structured` field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the `structured` field and instead sends the log entry to the default index, `app-00000x`.
-<9> Configuration for a pipeline to send infrastructure logs to the insecure external Elasticsearch instance.
-<10> Configuration for a pipeline to send logs from the `my-project` project to the internal Elasticsearch instance.
+<9> Optional: String. One or more labels to add to the logs. Quote values like "true" so they are recognized as string values, not as a boolean.
+<10> Configuration for a pipeline to send infrastructure logs to the insecure external Elasticsearch instance.
+<11> Configuration for a pipeline to send logs from the `my-project` project to the internal Elasticsearch instance.
 ** Optional. A name to describe the pipeline.
 ** The `inputRefs` is a specific input: `my-app-logs`.
 ** The `outputRefs` is `default`.
-** Optional: A label to add to the logs.
-<11> Configuration for a pipeline to send logs to the Kafka broker, with no pipeline name:
+** Optional: String. One or more labels to add to the logs.
+<12> Configuration for a pipeline to send logs to the Kafka broker, with no pipeline name:
 ** The `inputRefs` is the log type, in this example `application`.
 ** The `outputRefs` is the name of the output to use.
-** Optional: A label to add to the logs.
+** Optional: String. One or more labels to add to the logs.
```
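Callout <8> (`parse: json`) recurs across these modules. As a hedged illustration of its effect (the field values are invented, and fields beyond `message` and `structured` are omitted), a log entry whose message is valid JSON is forwarded with a parsed copy in the `structured` field:

```json
{
  "message": "{\"level\":\"info\",\"msg\":\"starting up\"}",
  "structured": {
    "level": "info",
    "msg": "starting up"
  }
}
```

If the message is not valid JSON, the `structured` field is removed and the entry goes to the default index, as the callout states.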
modules/cluster-logging-deploy-cli.adoc (18 additions, 25 deletions)

```diff
@@ -29,9 +29,9 @@ endif::[]
 
 To install the OpenShift Elasticsearch Operator and Red Hat OpenShift Logging Operator using the CLI:
 
-. Create a Namespace for the OpenShift Elasticsearch Operator.
+. Create a namespace for the OpenShift Elasticsearch Operator.
 
-.. Create a Namespace object YAML file (for example, `eo-namespace.yaml`) for the OpenShift Elasticsearch Operator:
+.. Create a namespace object YAML file (for example, `eo-namespace.yaml`) for the OpenShift Elasticsearch Operator:
 +
 [source,yaml]
 ----
@@ -44,17 +44,10 @@ metadata:
   labels:
     openshift.io/cluster-monitoring: "true" <2>
 ----
-<1> You must specify the `openshift-operators-redhat` Namespace. To prevent
-possible conflicts with metrics, you should configure the Prometheus Cluster
-Monitoring stack to scrape metrics from the `openshift-operators-redhat`
-Namespace and not the `openshift-operators` Namespace. The `openshift-operators`
-Namespace might contain Community Operators, which are untrusted and could publish
-a metric with the same name as an {product-title} metric, which would cause
-conflicts.
-<2> You must specify this label as shown to ensure that cluster monitoring
-scrapes the `openshift-operators-redhat` Namespace.
+<1> You must specify the `openshift-operators-redhat` namespace. To prevent possible conflicts with metrics, you should configure the Prometheus Cluster Monitoring stack to scrape metrics from the `openshift-operators-redhat` namespace and not the `openshift-operators` namespace. The `openshift-operators` namespace might contain community Operators, which are untrusted and could publish a metric with the same name as an {product-title} metric, which would cause conflicts.
+<2> String. You must specify this label as shown to ensure that cluster monitoring scrapes the `openshift-operators-redhat` namespace.
 
-.. Create the Namespace:
+.. Create the namespace:
 +
 [source,terminal]
 ----
@@ -68,9 +61,9 @@ For example:
 $ oc create -f eo-namespace.yaml
 ----
 
-. Create a Namespace for the Red Hat OpenShift Logging Operator:
+. Create a namespace for the Red Hat OpenShift Logging Operator:
 
-.. Create a Namespace object YAML file (for example, `olo-namespace.yaml`) for the Red Hat OpenShift Logging Operator:
+.. Create a namespace object YAML file (for example, `olo-namespace.yaml`) for the Red Hat OpenShift Logging Operator:
 +
 [source,yaml]
 ----
@@ -84,7 +77,7 @@ metadata:
     openshift.io/cluster-monitoring: "true"
 ----
 
-.. Create the Namespace:
+.. Create the namespace:
 +
 [source,terminal]
 ----
@@ -111,7 +104,7 @@ metadata:
   namespace: openshift-operators-redhat <1>
 spec: {}
 ----
-<1> You must specify the `openshift-operators-redhat` Namespace.
+<1> You must specify the `openshift-operators-redhat` namespace.
 
 .. Create an Operator Group object:
 +
@@ -128,7 +121,7 @@ $ oc create -f eo-og.yaml
 ----
 
 .. Create a Subscription object YAML file (for example, `eo-sub.yaml`) to
-subscribe a Namespace to the OpenShift Elasticsearch Operator.
+subscribe a namespace to the OpenShift Elasticsearch Operator.
 +
 .Example Subscription
 [source,yaml]
@@ -145,7 +138,7 @@ spec:
   sourceNamespace: "openshift-marketplace"
   name: "elasticsearch-operator"
 ----
-<1> You must specify the `openshift-operators-redhat` Namespace.
+<1> You must specify the `openshift-operators-redhat` namespace.
 <2> Specify `5.0`, `stable`, or `stable-5.<x>` as the channel. See the following note.
 <3> Specify `redhat-operators`. If your {product-title} cluster is installed on a restricted network, also known as a disconnected cluster,
 specify the name of the CatalogSource object created when you configured the Operator Lifecycle Manager (OLM).
@@ -172,7 +165,7 @@ For example:
 $ oc create -f eo-sub.yaml
 ----
 +
-The OpenShift Elasticsearch Operator is installed to the `openshift-operators-redhat` Namespace and copied to each project in the cluster.
+The OpenShift Elasticsearch Operator is installed to the `openshift-operators-redhat` namespace and copied to each project in the cluster.
@@ … @@
-There should be an OpenShift Elasticsearch Operator in each Namespace. The version number might be different than shown.
+There should be an OpenShift Elasticsearch Operator in each namespace. The version number might be different than shown.
 
 
 . Install the Red Hat OpenShift Logging Operator by creating the following objects:
@@ -213,7 +206,7 @@ spec:
   targetNamespaces:
   - openshift-logging <1>
 ----
-<1> You must specify the `openshift-logging` Namespace.
+<1> You must specify the `openshift-logging` namespace.
 
 .. Create an Operator Group object:
 +
@@ -230,7 +223,7 @@ $ oc create -f olo-og.yaml
 ----
 
 .. Create a Subscription object YAML file (for example, `olo-sub.yaml`) to
-subscribe a Namespace to the Red Hat OpenShift Logging Operator.
+subscribe a namespace to the Red Hat OpenShift Logging Operator.
 +
 [source,yaml]
 ----
@@ -245,7 +238,7 @@ spec:
   source: redhat-operators <3>
   sourceNamespace: openshift-marketplace
 ----
-<1> You must specify the `openshift-logging` Namespace.
+<1> You must specify the `openshift-logging` namespace.
 <2> Specify `5.0`, `stable`, or `stable-5.<x>` as the channel.
 <3> Specify `redhat-operators`. If your {product-title} cluster is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object you created when you configured the Operator Lifecycle Manager (OLM).
 +
@@ -261,11 +254,11 @@ For example:
 $ oc create -f olo-sub.yaml
 ----
 +
-The Red Hat OpenShift Logging Operator is installed to the `openshift-logging` Namespace.
+The Red Hat OpenShift Logging Operator is installed to the `openshift-logging` namespace.
 
 .. Verify the Operator installation.
 +
-There should be a Red Hat OpenShift Logging Operator in the `openshift-logging` Namespace. The Version number might be different than shown.
+There should be a Red Hat OpenShift Logging Operator in the `openshift-logging` namespace. The Version number might be different than shown.
```
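The diff shows only fragments of the namespace object from the first step. A sketch of a complete `eo-namespace.yaml`, assuming the fields implied by the callouts; the `openshift.io/node-selector` annotation is an assumption, as it is not visible in this diff:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-operators-redhat    # <1> must be this namespace
  annotations:
    openshift.io/node-selector: ""    # assumption: not shown in this diff
  labels:
    openshift.io/cluster-monitoring: "true"  # <2> quoted string, required for monitoring
```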