Commit 6d716fe

Merge pull request #89176 from theashiot/OBSDOCS-1730
OBSDOCS-1730: Port Configuring the logging collector to 5.8 and 6.y docs
2 parents 8681b34 + ef3c593 commit 6d716fe

13 files changed: 493 additions & 85 deletions

_topic_maps/_topic_map.yml

Lines changed: 8 additions & 0 deletions
@@ -2999,6 +2999,8 @@ Topics:
     File: log6x-about-6.2
   - Name: Configuring log forwarding
     File: log6x-clf-6.2
+  - Name: Configuring the logging collector
+    File: 6x-cluster-logging-collector-6.2
   - Name: Configuring LokiStack storage
     File: log6x-loki-6.2
   - Name: Visualization for logging
@@ -3014,6 +3016,8 @@ Topics:
     File: log6x-about-6.1
   - Name: Configuring log forwarding
     File: log6x-clf-6.1
+  - Name: Configuring the logging collector
+    File: 6x-cluster-logging-collector-6.1
   - Name: Configuring LokiStack storage
     File: log6x-loki-6.1
   - Name: Configuring LokiStack for OTLP
@@ -3033,6 +3037,8 @@ Topics:
     File: log6x-upgrading-to-6
   - Name: Configuring log forwarding
     File: log6x-clf
+  - Name: Configuring the logging collector
+    File: 6x-cluster-logging-collector-6.0
   - Name: Configuring LokiStack storage
     File: log6x-loki
   - Name: Visualization for logging
@@ -3044,6 +3050,8 @@ Topics:
   Topics:
   - Name: Release notes
     File: logging-5-8-release-notes
+  - Name: Configuring the logging collector
+    File: cluster-logging-collector
 # - Name: Support
 # File: cluster-logging-support
 # - Name: Troubleshooting logging
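For orientation, a hedged sketch of how one of the updated topic-map sections reads after this change; the entries are taken from the 6.2 hunk above, and the indentation is reconstructed rather than copied from the full file.

[source,yaml]
----
# _topic_maps/_topic_map.yml (excerpt, reconstructed from the 6.2 hunk)
- Name: Configuring log forwarding
  File: log6x-clf-6.2
- Name: Configuring the logging collector
  File: 6x-cluster-logging-collector-6.2
- Name: Configuring LokiStack storage
  File: log6x-loki-6.2
----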

modules/log-collector-http-server.adoc

Lines changed: 4 additions & 1 deletion
@@ -2,6 +2,9 @@
 //
 // * observability/logging/log_collection_forwarding/cluster-logging-collector.adoc
 
+
+//This file is for Logging 5.x
+
 :_mod-docs-content-type: PROCEDURE
 [id="log-collector-http-server_{context}"]
 = Configuring the collector to receive audit logs as an HTTP server
@@ -23,7 +26,7 @@ You can configure your log collector to listen for HTTP connections and receive
 .Example `ClusterLogForwarder` CR if you are using a multi log forwarder deployment
 [source,yaml]
 ----
-apiVersion: logging.openshift.io/v1beta1
+apiVersion: logging.openshift.io/v1
 kind: ClusterLogForwarder
 metadata:
 # ...
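As a quick check on the API-version fix, here is a hedged sketch of the top of a multi log forwarder `ClusterLogForwarder` CR with the corrected `logging.openshift.io/v1` group; the placeholder names and the `serviceAccountName` field are illustrative assumptions, not part of this diff.

[source,yaml]
----
apiVersion: logging.openshift.io/v1   # corrected from v1beta1
kind: ClusterLogForwarder
metadata:
  name: <log_forwarder_name>          # hypothetical placeholder
  namespace: <log_forwarder_namespace>
spec:
  serviceAccountName: <service_account_name>  # assumed to be required for multi log forwarder deployments
# ...
----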

modules/log-collector-rsyslog-server.adoc

Lines changed: 0 additions & 84 deletions
This file was deleted.
Lines changed: 37 additions & 0 deletions
@@ -0,0 +1,37 @@
+// Module included in the following assemblies:
+//
+// * observability/logging/cluster-logging-collector.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="log6x-cluster-logging-collector-limits_{context}"]
+= Configuring log collector CPU and memory limits
+
+You can adjust the CPU and memory limits for the log collector by modifying the `ClusterLogForwarder` custom resource (CR).
+
+.Procedure
+
+* Edit the `ClusterLogForwarder` custom resource (CR):
++
+[source,terminal]
+----
+$ oc -n openshift-logging edit ClusterLogForwarder instance
+----
++
+[source,yaml]
+----
+apiVersion: observability.openshift.io/v1
+kind: ClusterLogForwarder
+metadata:
+  name: instance
+  namespace: openshift-logging
+spec:
+  collector:
+    resources:
+      limits: <1>
+        memory: 736Mi
+      requests:
+        cpu: 100m
+        memory: 736Mi
+# ...
+----
+<1> Specify the CPU and memory limits and requests as needed. The values shown are the default values.
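If an interactive edit is not convenient, a non-interactive merge patch is an alternative; this is a hedged sketch that assumes the `ClusterLogForwarder` CR is named `instance` in the `openshift-logging` namespace and that the `clusterlogforwarder` resource name resolves unambiguously on the cluster.

[source,terminal]
----
$ oc -n openshift-logging patch clusterlogforwarder instance --type=merge \
    -p '{"spec":{"collector":{"resources":{"limits":{"memory":"736Mi"},"requests":{"cpu":"100m","memory":"736Mi"}}}}}'
----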
Lines changed: 45 additions & 0 deletions
@@ -0,0 +1,45 @@
+// Module included in the following assemblies:
+//
+// * observability/logging/cluster-logging-deploying.adoc
+// * observability/logging/log_collection_forwarding/cluster-logging-collector.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="log6x-configuring-logging-collector_{context}"]
+= Configuring the log collector
+
+You can configure which log collector type your {logging} uses by modifying the `ClusterLogging` custom resource (CR).
+
+.Prerequisites
+
+* You have administrator permissions.
+* You have installed the {oc-first}.
+* You have installed the {clo}.
+* You have created a `ClusterLogging` CR.
+
+.Procedure
+
+. Modify the `ClusterLogging` CR `collection` spec:
++
+.`ClusterLogging` CR example
+[source,yaml]
+----
+apiVersion: logging.openshift.io/v1
+kind: ClusterLogging
+metadata:
+# ...
+spec:
+# ...
+  collection:
+    type: <log_collector_type> <1>
+    resources: {}
+    tolerations: {}
+# ...
+----
+<1> The log collector type you want to use for the {logging}.
+
+. Apply the `ClusterLogging` CR by running the following command:
++
+[source,terminal]
----
+$ oc apply -f <filename>.yaml
+----
Lines changed: 52 additions & 0 deletions
@@ -0,0 +1,52 @@
+// Module included in the following assemblies:
+//
+// * observability/logging/log_collection_forwarding/cluster-logging-collector.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="log6x-creating-logfilesmetricexporter_{context}"]
+= Creating a LogFileMetricExporter resource
+
+To generate metrics from the logs produced by running containers, you must create a `LogFileMetricExporter` custom resource (CR).
+
+If you do not create the `LogFileMetricExporter` CR, you might see a *No datapoints found* message in the {product-title} web console dashboard for *Produced Logs*.
+
+.Prerequisites
+
+* You have administrator permissions.
+* You have installed the {clo}.
+* You have installed the {oc-first}.
+
+.Procedure
+
+. Create a `LogFileMetricExporter` CR as a YAML file:
++
+.Example `LogFileMetricExporter` CR
+[source,yaml]
+----
+apiVersion: logging.openshift.io/v1alpha1
+kind: LogFileMetricExporter
+metadata:
+  name: instance
+  namespace: openshift-logging
+spec:
+  nodeSelector: {} # <1>
+  resources: # <2>
+    limits:
+      cpu: 500m
+      memory: 256Mi
+    requests:
+      cpu: 200m
+      memory: 128Mi
+  tolerations: [] # <3>
+# ...
+----
+<1> Optional: The `nodeSelector` stanza defines which nodes the pods are scheduled on.
+<2> The `resources` stanza defines resource requirements for the `LogFileMetricExporter` CR.
+<3> Optional: The `tolerations` stanza defines the tolerations that the pods accept.
+
+. Apply the `LogFileMetricExporter` CR by running the following command:
++
+[source,terminal]
+----
+$ oc apply -f <filename>.yaml
+----
Lines changed: 119 additions & 0 deletions
@@ -0,0 +1,119 @@
+// Module included in the following assemblies:
+//
+// * observability/logging/log_collection_forwarding/cluster-logging-collector.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="log6x-log-collector-http-server_{context}"]
+= Configuring the collector to receive audit logs as an HTTP server
+
+You can configure your log collector to listen for HTTP connections and receive only audit logs by specifying `http` as a receiver input in the `ClusterLogForwarder` custom resource (CR).
+
+:feature-name: HTTP receiver input
+include::snippets/logging-http-sys-input-support.adoc[]
+
+
+.Prerequisites
+
+* You have administrator permissions.
+* You have installed the {oc-first}.
+* You have installed the {clo}.
+* You have created a `ClusterLogForwarder` CR.
+
+.Procedure
+
+. Modify the `ClusterLogForwarder` CR to add configuration for the `http` receiver input:
++
+--
+.Example `ClusterLogForwarder` CR
+[source,yaml]
+----
+apiVersion: observability.openshift.io/v1
+kind: ClusterLogForwarder
+metadata:
+# ...
+spec:
+  inputs:
+  - name: http-receiver # <1>
+    type: receiver
+    receiver:
+      type: http # <2>
+      port: 8443 # <3>
+      http:
+        format: kubeAPIAudit # <4>
+  outputs:
+  - name: default-lokistack
+    lokiStack:
+      authentication:
+        token:
+          from: serviceAccount
+      target:
+        name: logging-loki
+        namespace: openshift-logging
+      tls:
+        ca:
+          key: service-ca.crt
+          configMapName: openshift-service-ca.crt
+    type: lokiStack
+# ...
+  pipelines: # <5>
+  - name: http-pipeline
+    inputRefs:
+    - http-receiver
+    outputRefs:
+    - <output_name>
+# ...
+----
+<1> Specify a name for your input receiver.
+<2> Specify the input receiver type as `http`.
+<3> Optional: Specify the port that the input receiver listens on. This must be a value between `1024` and `65535`. The default value is `8443`.
+<4> Currently, only the `kube-apiserver` webhook format is supported for `http` input receivers.
+<5> Configure a pipeline for your input receiver.
+--
+
+. Apply the changes to the `ClusterLogForwarder` CR by running the following command:
++
+[source,terminal]
+----
+$ oc apply -f <filename>.yaml
+----
+
+.Verification
+
+. Verify that the collector is listening on the service that has a name in the `<clusterlogforwarder_resource_name>-<input_name>` format by running the following command:
++
+[source,terminal]
+----
+$ oc get svc
+----
++
+.Example output
++
+[source,terminal]
+----
+NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)     AGE
+collector                 ClusterIP   172.30.85.239    <none>        24231/TCP   3m6s
+collector-http-receiver   ClusterIP   172.30.205.160   <none>        8443/TCP    3m6s
+----
++
+In this example output, the service name is `collector-http-receiver`.
+
+. Extract the certificate authority (CA) certificate file by running the following command:
++
+[source,terminal]
+----
+$ oc extract cm/openshift-service-ca.crt -n <namespace>
+----
+
+. Use the `curl` command to send logs by running the following command:
++
+[source,terminal]
+----
+$ curl --cacert <openshift_service_ca.crt> https://collector-http-receiver.<namespace>.svc:8443 -XPOST -d '{"<prefix>":"<message>"}'
+----
++
+Replace `<openshift_service_ca.crt>` with the extracted CA certificate file.
++
+[NOTE]
+====
+By following the verification steps, you can forward logs only from within the cluster.
+====
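To make the `curl` step more concrete, here is a hedged example that sends a payload shaped like a `kube-apiserver` audit event; the field values, the `openshift-logging` namespace, and the exact validation the collector applies to the body are assumptions for illustration only.

[source,terminal]
----
$ curl --cacert service-ca.crt \
    -H 'Content-Type: application/json' \
    -XPOST https://collector-http-receiver.openshift-logging.svc:8443 \
    -d '{"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"Metadata","auditID":"test-0001","stage":"ResponseComplete","verb":"get","requestURI":"/api/v1/namespaces","requestReceivedTimestamp":"2024-01-01T00:00:00.000000Z","stageTimestamp":"2024-01-01T00:00:01.000000Z"}'
----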
