Commit 42c18a5
OBSDOCS-1679: Port the Configuring log forwarding chapter
1 parent 5f73e09
File tree: 5 files changed (+128, -4 lines)

_topic_maps/_topic_map.yml

Lines changed: 2 additions & 0 deletions
@@ -2993,6 +2993,8 @@ Topics:
   Topics:
   - Name: Release notes
     File: log6x-release-notes-6.2
+  - Name: Configuring log forwarding
+    File: log6x-clf-6.2
   - Name: Logging 6.1
     Dir: logging-6.1
     Topics:

modules/log6x-collection-setup.adoc

Lines changed: 2 additions & 2 deletions
@@ -92,7 +92,7 @@ rules: <1>
 - logs <7>
 verbs: <8>
 - create <9>
-Annotations
+----
 <1> rules: Specifies the permissions granted by this ClusterRole.
 <2> apiGroups: Refers to the API group loki.grafana.com, which relates to the Loki logging system.
 <3> loki.grafana.com: The API group for managing Loki-related resources.
@@ -102,7 +102,7 @@ Annotations
 <7> logs: Refers to the log resources that can be created.
 <8> verbs: The actions allowed on the resources.
 <9> create: Grants permission to create new logs in the Loki system.
-----
+

 === Writing audit logs
 The write-audit-logs-clusterrole.yaml file defines a ClusterRole that grants permissions to create audit logs in the Loki logging system.

observability/logging/logging-6.0/log6x-clf.adoc

Lines changed: 1 addition & 1 deletion
@@ -80,7 +80,7 @@ Outputs are configured in an array under `spec.outputs`. Each output must have a

 azureMonitor:: Forwards logs to Azure Monitor.
 cloudwatch:: Forwards logs to AWS CloudWatch.
-elasticsearch:: Forwards logs to an external Elasticsearch instance.
+//elasticsearch:: Forwards logs to an external Elasticsearch instance.
 googleCloudLogging:: Forwards logs to Google Cloud Logging.
 http:: Forwards logs to a generic HTTP endpoint.
 kafka:: Forwards logs to a Kafka broker.

observability/logging/logging-6.1/log6x-clf-6.1.adoc

Lines changed: 1 addition & 1 deletion
@@ -81,7 +81,7 @@ Outputs are configured in an array under `spec.outputs`. Each output must have a

 azureMonitor:: Forwards logs to Azure Monitor.
 cloudwatch:: Forwards logs to AWS CloudWatch.
-elasticsearch:: Forwards logs to an external Elasticsearch instance.
+//elasticsearch:: Forwards logs to an external Elasticsearch instance.
 googleCloudLogging:: Forwards logs to Google Cloud Logging.
 http:: Forwards logs to a generic HTTP endpoint.
 kafka:: Forwards logs to a Kafka broker.
Lines changed: 122 additions & 0 deletions
@@ -0,0 +1,122 @@
:_mod-docs-content-type: ASSEMBLY
include::_attributes/common-attributes.adoc[]
[id="log6x-clf-6-2"]
= Configuring log forwarding
:context: logging-6x-6.2

toc::[]

The `ClusterLogForwarder` (CLF) allows users to configure forwarding of logs to various destinations. It provides a flexible way to select log messages from different sources, send them through a pipeline that can transform or filter them, and forward them to one or more outputs.

.Key Functions of the ClusterLogForwarder
* Selects log messages using inputs
* Forwards logs to external destinations using outputs
* Filters, transforms, and drops log messages using filters
* Defines log forwarding pipelines connecting inputs, filters, and outputs

include::modules/log6x-collection-setup.adoc[leveloffset=+1]
[id="modifying-log-level_6-2_{context}"]
== Modifying log level in collector

To modify the log level in the collector, set the `observability.openshift.io/log-level` annotation to one of the following values: `trace`, `debug`, `info`, `warn`, `error`, or `off`.

.Example log level annotation
[source,yaml]
----
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
  annotations:
    observability.openshift.io/log-level: debug
# ...
----
[id="managing-the-operator_6-2_{context}"]
37+
== Managing the Operator
38+
39+
The `ClusterLogForwarder` resource has a `managementState` field that controls whether the operator actively manages its resources or leaves them Unmanaged:
40+
41+
Managed:: (default) The operator will drive the logging resources to match the desired state in the CLF spec.
42+
43+
Unmanaged:: The operator will not take any action related to the logging components.
44+
45+
This allows administrators to temporarily pause log forwarding by setting `managementState` to `Unmanaged`.
46+
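For example, pausing log forwarding might look like the following minimal sketch; the resource name `collector` is illustrative:

[source,yaml]
----
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector # illustrative resource name
spec:
  managementState: Unmanaged # set back to Managed to resume forwarding
# ...
----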
[id="clf-structure_6-2_{context}"]
== Structure of the ClusterLogForwarder

The CLF has a `spec` section that contains the following key components:

Inputs:: Select log messages to be forwarded. Built-in input types `application`, `infrastructure`, and `audit` forward logs from different parts of the cluster. You can also define custom inputs.

Outputs:: Define destinations to forward logs to. Each output has a unique name and type-specific configuration.

Pipelines:: Define the path logs take from inputs, through filters, to outputs. Pipelines have a unique name and consist of a list of input, output, and filter names.

Filters:: Transform or drop log messages in the pipeline. Users can define filters that match certain log fields and drop or modify the messages. Filters are applied in the order specified in the pipeline.
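Taken together, these components give the `spec` the following overall shape. This is a schematic sketch only; each array is detailed in its own section:

[source,yaml]
----
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector  # illustrative resource name
spec:
  inputs: []    # optional custom inputs that select log messages
  outputs: []   # forwarding destinations
  filters: []   # optional transformations applied in pipelines
  pipelines: [] # connect inputs, filters, and outputs by name
----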
[id="clf-inputs_6-2_{context}"]
=== Inputs

Inputs are configured in an array under `spec.inputs`. There are three built-in input types:

application:: Selects logs from all application containers, excluding those in infrastructure namespaces.

infrastructure:: Selects logs from nodes and from infrastructure components running in the following namespaces:
** `default`
** `kube`
** `openshift`
** Namespaces with the `kube-` or `openshift-` prefix

audit:: Selects logs from the OpenShift API server audit logs, Kubernetes API server audit logs, ovn audit logs, and node audit logs from auditd.

Users can define custom inputs of type `application` that select logs from specific namespaces or by using pod labels.
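For example, a custom `application` input restricted to a single namespace might be sketched as follows; the input and namespace names are hypothetical:

[source,yaml]
----
spec:
  inputs:
  - name: my-app-input   # hypothetical input name
    type: application
    application:
      namespaces:
      - my-namespace     # hypothetical namespace to collect logs from
# ...
----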
[id="clf-outputs_6-2_{context}"]
=== Outputs

Outputs are configured in an array under `spec.outputs`. Each output must have a unique name and a type. Supported types are:

azureMonitor:: Forwards logs to Azure Monitor.
cloudwatch:: Forwards logs to AWS CloudWatch.
//elasticsearch:: Forwards logs to an external Elasticsearch instance.
googleCloudLogging:: Forwards logs to Google Cloud Logging.
http:: Forwards logs to a generic HTTP endpoint.
kafka:: Forwards logs to a Kafka broker.
loki:: Forwards logs to a Loki logging backend.
lokistack:: Forwards logs to the logging-supported combination of Loki and a web proxy with {Product-Title} authentication integration. LokiStack's proxy uses {Product-Title} authentication to enforce multi-tenancy.
otlp:: Forwards logs using the OpenTelemetry Protocol.
splunk:: Forwards logs to Splunk.
syslog:: Forwards logs to an external syslog server.

Each output type has its own configuration fields.
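As an illustration of the shape of an output entry, the following sketch defines an `http` output. The output name and URL are hypothetical; consult the configuration fields of each output type for the exact schema:

[source,yaml]
----
spec:
  outputs:
  - name: my-http-output            # hypothetical output name
    type: http
    http:
      url: https://example.com/logs # hypothetical endpoint
# ...
----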
include::modules/log6x-configuring-otlp-output.adoc[leveloffset=+1]

[id="clf-pipelines_6-2_{context}"]
=== Pipelines

Pipelines are configured in an array under `spec.pipelines`. Each pipeline must have a unique name and consists of:

inputRefs:: Names of inputs whose logs should be forwarded to this pipeline.
outputRefs:: Names of outputs to send logs to.
filterRefs:: (optional) Names of filters to apply.

The order of `filterRefs` matters, as they are applied sequentially. Earlier filters can drop messages that will not be processed by later filters.
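For example, a pipeline that forwards application logs through one filter to one output might be sketched as follows; the output and filter names are hypothetical and must match entries defined under `spec.outputs` and `spec.filters`:

[source,yaml]
----
spec:
  pipelines:
  - name: my-pipeline  # hypothetical pipeline name
    inputRefs:
    - application      # built-in input type
    outputRefs:
    - my-http-output   # must match an output name in spec.outputs
    filterRefs:        # optional
    - my-filter        # must match a filter name in spec.filters
# ...
----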
[id="clf-filters_6-2_{context}"]
=== Filters

Filters are configured in an array under `spec.filters`. They can match incoming log messages based on the value of structured fields and modify or drop them.

Administrators can configure the following types of filters:

include::modules/log6x-multiline-except.adoc[leveloffset=+2]
include::modules/log6x-content-filter-drop-records.adoc[leveloffset=+2]
include::modules/log6x-audit-log-filtering.adoc[leveloffset=+2]
include::modules/log6x-input-spec-filter-labels-expressions.adoc[leveloffset=+2]
include::modules/log6x-content-filter-prune-records.adoc[leveloffset=+2]
include::modules/log6x-input-spec-filter-audit-infrastructure.adoc[leveloffset=+1]
include::modules/log6x-input-spec-filter-namespace-container.adoc[leveloffset=+1]
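The included modules above document each filter type in detail. As a structural sketch only, a `drop` filter entry might look like the following; the filter name, field path, and match value are hypothetical:

[source,yaml]
----
spec:
  filters:
  - name: my-filter                       # hypothetical filter name
    type: drop
    drop:
    - test:
      - field: .kubernetes.namespace_name # hypothetical field path
        matches: "noisy-namespace"        # hypothetical match value
# ...
----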
