
Commit a194b4c

Merge pull request #69027 from abrennan89/clf-examples
OBSDOCS-612: Clean up CLF examples
2 parents 78130f6 + fc59293

14 files changed (+310, -286)

logging/log_collection_forwarding/log-forwarding.adoc

Lines changed: 26 additions & 24 deletions
@@ -153,45 +153,47 @@ $ oc apply -f output/manifests/openshift-logging-<your_role_name>-credentials.ya
 +
 [source,yaml]
 ----
-apiVersion: "logging.openshift.io/v1"
+apiVersion: logging.openshift.io/v1
 kind: ClusterLogForwarder
 metadata:
-  name: instance <1>
-  namespace: openshift-logging <2>
+  name: <log_forwarder_name> <1>
+  namespace: <log_forwarder_namespace> <2>
 spec:
+  serviceAccountName: clf-collector <3>
   outputs:
-  - name: cw <3>
-    type: cloudwatch <4>
+  - name: cw <4>
+    type: cloudwatch <5>
     cloudwatch:
-      groupBy: logType <5>
-      groupPrefix: <group prefix> <6>
-      region: us-east-2 <7>
+      groupBy: logType <6>
+      groupPrefix: <group prefix> <7>
+      region: us-east-2 <8>
     secret:
-      name: <your_role_name> <8>
+      name: <your_secret_name> <9>
   pipelines:
-  - name: to-cloudwatch <9>
-    inputRefs: <10>
+  - name: to-cloudwatch <10>
+    inputRefs: <11>
     - infrastructure
     - audit
    - application
     outputRefs:
-    - cw <11>
+    - cw <12>
 ----
-<1> The name of the `ClusterLogForwarder` CR must be `instance`.
-<2> The namespace for the `ClusterLogForwarder` CR must be `openshift-logging`.
-<3> Specify a name for the output.
-<4> Specify the `cloudwatch` type.
-<5> Optional: Specify how to group the logs:
+<1> In legacy implementations, the CR name must be `instance`. In multi log forwarder implementations, you can use any name.
+<2> In legacy implementations, the CR namespace must be `openshift-logging`. In multi log forwarder implementations, you can use any namespace.
+<3> Specify the `clf-collector` service account. The service account is only required in multi log forwarder implementations if the log forwarder is not deployed in the `openshift-logging` namespace.
+<4> Specify a name for the output.
+<5> Specify the `cloudwatch` type.
+<6> Optional: Specify how to group the logs:
 +
-* `logType` creates log groups for each log type
+* `logType` creates log groups for each log type.
 * `namespaceName` creates a log group for each application namespace. Infrastructure and audit logs are unaffected, remaining grouped by `logType`.
 * `namespaceUUID` creates a new log group for each application namespace UUID. It also creates separate log groups for infrastructure and audit logs.
-<6> Optional: Specify a string to replace the default `infrastructureName` prefix in the names of the log groups.
-<7> Specify the AWS region.
-<8> Specify the name of the secret that contains your AWS credentials.
-<9> Optional: Specify a name for the pipeline.
-<10> Specify which log types to forward by using the pipeline: `application,` `infrastructure`, or `audit`.
-<11> Specify the name of the output to use when forwarding logs with this pipeline.
+<7> Optional: Specify a string to replace the default `infrastructureName` prefix in the names of the log groups.
+<8> Specify the AWS region.
+<9> Specify the name of the secret that contains your AWS credentials.
+<10> Optional: Specify a name for the pipeline.
+<11> Specify which log types to forward by using the pipeline: `application`, `infrastructure`, or `audit`.
+<12> Specify the name of the output to use when forwarding logs with this pipeline.
 endif::[]

 [role="_additional-resources"]
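Taken together, the updated hunk resolves to a complete CR like the following sketch. The values `my-forwarder`, `my-logging`, and `my-role-credentials` are hypothetical placeholders for illustration, not names from the source; legacy implementations would instead use `instance` and `openshift-logging`.

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: my-forwarder     # hypothetical; legacy implementations must use "instance"
  namespace: my-logging  # hypothetical; legacy implementations must use "openshift-logging"
spec:
  serviceAccountName: clf-collector
  outputs:
  - name: cw
    type: cloudwatch
    cloudwatch:
      groupBy: logType
      region: us-east-2
    secret:
      name: my-role-credentials  # hypothetical secret containing the AWS credentials
  pipelines:
  - name: to-cloudwatch
    inputRefs:
    - infrastructure
    - audit
    - application
    outputRefs:
    - cw
```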

modules/cluster-logging-collector-log-forward-cloudwatch.adoc

Lines changed: 28 additions & 26 deletions
@@ -33,45 +33,47 @@ $ oc apply -f cw-secret.yaml
 +
 [source,yaml]
 ----
-apiVersion: "logging.openshift.io/v1"
+apiVersion: logging.openshift.io/v1
 kind: ClusterLogForwarder
 metadata:
-  name: instance <1>
-  namespace: openshift-logging <2>
+  name: <log_forwarder_name> <1>
+  namespace: <log_forwarder_namespace> <2>
 spec:
+  serviceAccountName: <service_account_name> <3>
   outputs:
-  - name: cw <3>
-    type: cloudwatch <4>
+  - name: cw <4>
+    type: cloudwatch <5>
     cloudwatch:
-      groupBy: logType <5>
-      groupPrefix: <group prefix> <6>
-      region: us-east-2 <7>
+      groupBy: logType <6>
+      groupPrefix: <group prefix> <7>
+      region: us-east-2 <8>
     secret:
-      name: cw-secret <8>
+      name: cw-secret <9>
   pipelines:
-  - name: infra-logs <9>
-    inputRefs: <10>
+  - name: infra-logs <10>
+    inputRefs: <11>
     - infrastructure
     - audit
    - application
     outputRefs:
-    - cw <11>
-----
-<1> The name of the `ClusterLogForwarder` CR must be `instance`.
-<2> The namespace for the `ClusterLogForwarder` CR must be `openshift-logging`.
-<3> Specify a name for the output.
-<4> Specify the `cloudwatch` type.
-<5> Optional: Specify how to group the logs:
+    - cw <12>
+----
+<1> In legacy implementations, the CR name must be `instance`. In multi log forwarder implementations, you can use any name.
+<2> In legacy implementations, the CR namespace must be `openshift-logging`. In multi log forwarder implementations, you can use any namespace.
+<3> The name of your service account. The service account is only required in multi log forwarder implementations if the log forwarder is not deployed in the `openshift-logging` namespace.
+<4> Specify a name for the output.
+<5> Specify the `cloudwatch` type.
+<6> Optional: Specify how to group the logs:
 +
-* `logType` creates log groups for each log type
+* `logType` creates log groups for each log type.
 * `namespaceName` creates a log group for each application namespace. It also creates separate log groups for infrastructure and audit logs.
 * `namespaceUUID` creates a new log group for each application namespace UUID. It also creates separate log groups for infrastructure and audit logs.
-<6> Optional: Specify a string to replace the default `infrastructureName` prefix in the names of the log groups.
-<7> Specify the AWS region.
-<8> Specify the name of the secret that contains your AWS credentials.
-<9> Optional: Specify a name for the pipeline.
-<10> Specify which log types to forward by using the pipeline: `application,` `infrastructure`, or `audit`.
-<11> Specify the name of the output to use when forwarding logs with this pipeline.
+<7> Optional: Specify a string to replace the default `infrastructureName` prefix in the names of the log groups.
+<8> Specify the AWS region.
+<9> Specify the name of the secret that contains your AWS credentials.
+<10> Optional: Specify a name for the pipeline.
+<11> Specify which log types to forward by using the pipeline: `application`, `infrastructure`, or `audit`.
+<12> Specify the name of the output to use when forwarding logs with this pipeline.

 . Create the CR object:
 +
@@ -287,4 +289,4 @@ $ aws --output json logs describe-log-groups | jq .logGroups[].logGroupName
 "mycluster-7977k.infrastructure"
 ----

-The `groupBy` field affects the application log group only. It does not affect the `audit` and `infrastructure` log groups.
+The `groupBy` field affects the application log group only. It does not affect the `audit` and `infrastructure` log groups.
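As a sketch of the behavior described above: switching the output to `groupBy: namespaceName` changes only the application log groups. The cluster prefix `mycluster-7977k` is reused from the example output, and the application namespace `app` is hypothetical.

```yaml
cloudwatch:
  groupBy: namespaceName  # application logs: one log group per namespace
  region: us-east-2
# Expected log groups (hypothetical application namespace "app"):
#   mycluster-7977k.app             <- affected by groupBy
#   mycluster-7977k.audit           <- unchanged
#   mycluster-7977k.infrastructure  <- unchanged
```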

modules/cluster-logging-collector-log-forward-es.adoc

Lines changed: 29 additions & 27 deletions
@@ -23,52 +23,54 @@ If you want to forward logs to *only* the internal {product-title} Elasticsearch
 +
 [source,yaml]
 ----
-apiVersion: "logging.openshift.io/v1"
+apiVersion: logging.openshift.io/v1
 kind: ClusterLogForwarder
 metadata:
-  name: instance <1>
-  namespace: openshift-logging <2>
+  name: <log_forwarder_name> <1>
+  namespace: <log_forwarder_namespace> <2>
 spec:
+  serviceAccountName: <service_account_name> <3>
   outputs:
-  - name: elasticsearch-insecure <3>
-    type: "elasticsearch" <4>
-    url: http://elasticsearch.insecure.com:9200 <5>
+  - name: elasticsearch-insecure <4>
+    type: "elasticsearch" <5>
+    url: http://elasticsearch.insecure.com:9200 <6>
   - name: elasticsearch-secure
    type: "elasticsearch"
-    url: https://elasticsearch.secure.com:9200 <6>
+    url: https://elasticsearch.secure.com:9200 <7>
     secret:
-      name: es-secret <7>
+      name: es-secret <8>
   pipelines:
-  - name: application-logs <8>
-    inputRefs: <9>
+  - name: application-logs <9>
+    inputRefs: <10>
     - application
     - audit
     outputRefs:
-    - elasticsearch-secure <10>
-    - default <11>
+    - elasticsearch-secure <11>
+    - default <12>
     labels:
-      myLabel: "myValue" <12>
-  - name: infrastructure-audit-logs <13>
+      myLabel: "myValue" <13>
+  - name: infrastructure-audit-logs <14>
     inputRefs:
     - infrastructure
    outputRefs:
     - elasticsearch-insecure
     labels:
       logs: "audit-infra"
 ----
-<1> The name of the `ClusterLogForwarder` CR must be `instance`.
-<2> The namespace for the `ClusterLogForwarder` CR must be `openshift-logging`.
-<3> Specify a name for the output.
-<4> Specify the `elasticsearch` type.
-<5> Specify the URL and port of the external Elasticsearch instance as a valid absolute URL. You can use the `http` (insecure) or `https` (secure HTTP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP Address.
-<6> For a secure connection, you can specify an `https` or `http` URL that you authenticate by specifying a `secret`.
-<7> For an `https` prefix, specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the `openshift-logging` project, and must have keys of: *tls.crt*, *tls.key*, and *ca-bundle.crt* that point to the respective certificates that they represent. Otherwise, for `http` and `https` prefixes, you can specify a secret that contains a username and password. For more information, see the following "Example: Setting secret that contains a username and password."
-<8> Optional: Specify a name for the pipeline.
-<9> Specify which log types to forward by using the pipeline: `application,` `infrastructure`, or `audit`.
-<10> Specify the name of the output to use when forwarding logs with this pipeline.
-<11> Optional: Specify the `default` output to send the logs to the internal Elasticsearch instance.
-<12> Optional: String. One or more labels to add to the logs.
-<13> Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
+<1> In legacy implementations, the CR name must be `instance`. In multi log forwarder implementations, you can use any name.
+<2> In legacy implementations, the CR namespace must be `openshift-logging`. In multi log forwarder implementations, you can use any namespace.
+<3> The name of your service account. The service account is only required in multi log forwarder implementations if the log forwarder is not deployed in the `openshift-logging` namespace.
+<4> Specify a name for the output.
+<5> Specify the `elasticsearch` type.
+<6> Specify the URL and port of the external Elasticsearch instance as a valid absolute URL. You can use the `http` (insecure) or `https` (secure HTTP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address.
+<7> For a secure connection, you can specify an `https` or `http` URL that you authenticate by specifying a `secret`.
+<8> For an `https` prefix, specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the `openshift-logging` project, and must have keys of *tls.crt*, *tls.key*, and *ca-bundle.crt* that point to the respective certificates that they represent. Otherwise, for `http` and `https` prefixes, you can specify a secret that contains a username and password. For more information, see the following "Example: Setting a secret that contains a username and password."
+<9> Optional: Specify a name for the pipeline.
+<10> Specify which log types to forward by using the pipeline: `application`, `infrastructure`, or `audit`.
+<11> Specify the name of the output to use when forwarding logs with this pipeline.
+<12> Optional: Specify the `default` output to send the logs to the internal Elasticsearch instance.
+<13> Optional: String. One or more labels to add to the logs.
+<14> Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
 ** A name to describe the pipeline.
 ** The `inputRefs` is the log type to forward by using the pipeline: `application`, `infrastructure`, or `audit`.
 ** The `outputRefs` is the name of the output to use.
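Callout <8> above refers to a secret containing a username and password for authenticated `http`/`https` outputs. A minimal sketch of such a secret follows; the name `es-secret` matches the example output, while the credential values are hypothetical placeholders.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: es-secret               # referenced by spec.outputs[].secret.name in the CR
  namespace: openshift-logging  # legacy implementations read secrets from this namespace
type: Opaque
stringData:                     # stringData avoids manual base64 encoding
  username: <username>          # hypothetical placeholder values
  password: <password>
```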

modules/cluster-logging-collector-log-forward-gcp.adoc

Lines changed: 14 additions & 10 deletions
@@ -29,30 +29,34 @@ $ oc -n openshift-logging create secret generic gcp-secret --from-file google-ap
 +
 [source,yaml]
 ----
-apiVersion: "logging.openshift.io/v1"
-kind: "ClusterLogForwarder"
+apiVersion: logging.openshift.io/v1
+kind: ClusterLogForwarder
 metadata:
-  name: "instance"
-  namespace: "openshift-logging"
+  name: <log_forwarder_name> <1>
+  namespace: <log_forwarder_namespace> <2>
 spec:
+  serviceAccountName: <service_account_name> <3>
   outputs:
   - name: gcp-1
     type: googleCloudLogging
     secret:
       name: gcp-secret
     googleCloudLogging:
-      projectId : "openshift-gce-devel" <1>
-      logId : "app-gcp" <2>
+      projectId : "openshift-gce-devel" <4>
+      logId : "app-gcp" <5>
   pipelines:
   - name: test-app
-    inputRefs: <3>
+    inputRefs: <6>
     - application
     outputRefs:
     - gcp-1
 ----
-<1> Set either a `projectId`, `folderId`, `organizationId`, or `billingAccountId` field and its corresponding value, depending on where you want to store your logs in the link:https://cloud.google.com/resource-manager/docs/cloud-platform-resource-hierarchy[GCP resource hierarchy].
-<2> Set the value to add to the `logName` field of the link:https://cloud.google.com/logging/docs/reference/v2/rest/v2/LogEntry[Log Entry].
-<3> Specify which log types to forward by using the pipeline: `application`, `infrastructure`, or `audit`.
+<1> In legacy implementations, the CR name must be `instance`. In multi log forwarder implementations, you can use any name.
+<2> In legacy implementations, the CR namespace must be `openshift-logging`. In multi log forwarder implementations, you can use any namespace.
+<3> The name of your service account. The service account is only required in multi log forwarder implementations if the log forwarder is not deployed in the `openshift-logging` namespace.
+<4> Set a `projectId`, `folderId`, `organizationId`, or `billingAccountId` field and its corresponding value, depending on where you want to store your logs in the link:https://cloud.google.com/resource-manager/docs/cloud-platform-resource-hierarchy[GCP resource hierarchy].
+<5> Set the value to add to the `logName` field of the link:https://cloud.google.com/logging/docs/reference/v2/rest/v2/LogEntry[Log Entry].
+<6> Specify which log types to forward by using the pipeline: `application`, `infrastructure`, or `audit`.

 [role="_additional-resources"]
 .Additional resources
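Callout <4> notes that exactly one of `projectId`, `folderId`, `organizationId`, or `billingAccountId` is set, depending on the target level of the GCP resource hierarchy. As a sketch, the `folderId` variant of the output stanza might look like the following; the folder ID value is a hypothetical placeholder.

```yaml
googleCloudLogging:
  folderId : "123456789012"  # hypothetical folder ID; set instead of projectId
  logId : "app-gcp"
```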

modules/cluster-logging-collector-log-forward-kafka.adoc

Lines changed: 26 additions & 26 deletions
@@ -20,32 +20,33 @@ To configure log forwarding to an external Kafka instance, you must create a `Cl
 apiVersion: logging.openshift.io/v1
 kind: ClusterLogForwarder
 metadata:
-  name: instance <1>
-  namespace: openshift-logging <2>
+  name: <log_forwarder_name> <1>
+  namespace: <log_forwarder_namespace> <2>
 spec:
+  serviceAccountName: <service_account_name> <3>
   outputs:
-  - name: app-logs <3>
-    type: kafka <4>
-    url: tls://kafka.example.devlab.com:9093/app-topic <5>
+  - name: app-logs <4>
+    type: kafka <5>
+    url: tls://kafka.example.devlab.com:9093/app-topic <6>
     secret:
-      name: kafka-secret <6>
+      name: kafka-secret <7>
   - name: infra-logs
     type: kafka
-    url: tcp://kafka.devlab2.example.com:9093/infra-topic <7>
+    url: tcp://kafka.devlab2.example.com:9093/infra-topic <8>
   - name: audit-logs
     type: kafka
     url: tls://kafka.qelab.example.com:9093/audit-topic
     secret:
       name: kafka-secret-qe
   pipelines:
-  - name: app-topic <8>
-    inputRefs: <9>
+  - name: app-topic <9>
+    inputRefs: <10>
     - application
-    outputRefs: <10>
+    outputRefs: <11>
     - app-logs
     labels:
-      logType: "application" <11>
-  - name: infra-topic <12>
+      logType: "application" <12>
+  - name: infra-topic <13>
     inputRefs:
     - infrastructure
     outputRefs:
@@ -57,27 +58,26 @@ spec:
     - audit
     outputRefs:
     - audit-logs
-    - default <13>
     labels:
       logType: "audit"
 ----
-<1> The name of the `ClusterLogForwarder` CR must be `instance`.
-<2> The namespace for the `ClusterLogForwarder` CR must be `openshift-logging`.
-<3> Specify a name for the output.
-<4> Specify the `kafka` type.
-<5> Specify the URL and port of the Kafka broker as a valid absolute URL, optionally with a specific topic. You can use the `tcp` (insecure) or `tls` (secure TCP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address.
-<6> If using a `tls` prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the `openshift-logging` project, and must have keys of: *tls.crt*, *tls.key*, and *ca-bundle.crt* that point to the respective certificates that they represent.
-<7> Optional: To send an insecure output, use a `tcp` prefix in front of the URL. Also omit the `secret` key and its `name` from this output.
-<8> Optional: Specify a name for the pipeline.
-<9> Specify which log types to forward by using the pipeline: `application,` `infrastructure`, or `audit`.
-<10> Specify the name of the output to use when forwarding logs with this pipeline.
-<11> Optional: String. One or more labels to add to the logs.
-<12> Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
+<1> In legacy implementations, the CR name must be `instance`. In multi log forwarder implementations, you can use any name.
+<2> In legacy implementations, the CR namespace must be `openshift-logging`. In multi log forwarder implementations, you can use any namespace.
+<3> The name of your service account. The service account is only required in multi log forwarder implementations if the log forwarder is not deployed in the `openshift-logging` namespace.
+<4> Specify a name for the output.
+<5> Specify the `kafka` type.
+<6> Specify the URL and port of the Kafka broker as a valid absolute URL, optionally with a specific topic. You can use the `tcp` (insecure) or `tls` (secure TCP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address.
+<7> If using a `tls` prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the `openshift-logging` project, and must have keys of *tls.crt*, *tls.key*, and *ca-bundle.crt* that point to the respective certificates that they represent.
+<8> Optional: To send an insecure output, use a `tcp` prefix in front of the URL. Also omit the `secret` key and its `name` from this output.
+<9> Optional: Specify a name for the pipeline.
+<10> Specify which log types to forward by using the pipeline: `application`, `infrastructure`, or `audit`.
+<11> Specify the name of the output to use when forwarding logs with this pipeline.
+<12> Optional: String. One or more labels to add to the logs.
+<13> Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
 ** A name to describe the pipeline.
 ** The `inputRefs` is the log type to forward by using the pipeline: `application`, `infrastructure`, or `audit`.
 ** The `outputRefs` is the name of the output to use.
 ** Optional: String. One or more labels to add to the logs.
-<13> Optional: Specify `default` to forward logs to the internal Elasticsearch instance.

 . Optional: To forward a single output to multiple Kafka brokers, specify an array of Kafka brokers as shown in the following example:
 +
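Restating callout <8> as a standalone stanza: for an insecure Kafka output, the `tcp` prefix replaces `tls` and the `secret` block is omitted entirely. The broker host is reused from the example; this is a sketch, not a full CR.

```yaml
outputs:
- name: infra-logs
  type: kafka
  url: tcp://kafka.devlab2.example.com:9093/infra-topic  # tcp = insecure; no secret block
```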
