
Commit 9e3e4a4

Merge pull request #89739 from openshift-cherrypick-robot/cherry-pick-89407-to-logging-docs-6.2-4.17
[logging-docs-6.2-4.17] OBSDOCS-1640: Document HTTP output proxy should be configurable
2 parents 6d716fe + c07c278 commit 9e3e4a4

3 files changed: +194 −5 lines
Lines changed: 124 additions & 0 deletions
@@ -0,0 +1,124 @@
// Module included in the following assemblies:
//
// * observability/logging/logging-6.2/log6x-clf-6.2.adoc

:_mod-docs-content-type: PROCEDURE
[id="cluster-logging-collector-log-forward-syslog-6x_{context}"]
= Forwarding logs using the syslog protocol

You can use the syslog link:https://tools.ietf.org/html/rfc3164[RFC3164] or link:https://tools.ietf.org/html/rfc5424[RFC5424] protocol to send a copy of your logs to an external log aggregator that is configured to accept the protocol instead of, or in addition to, the default Elasticsearch log store. You are responsible for configuring the external log aggregator, such as a syslog server, to receive the logs from {product-title}.

To configure log forwarding using the syslog protocol, you must create a `ClusterLogForwarder` custom resource (CR) with one or more outputs to the syslog servers, and pipelines that use those outputs. The syslog output can use a UDP, TCP, or TLS connection.

.Prerequisites

* You must have a logging server that is configured to receive the logging data using the specified protocol or format.

.Procedure

. Create or edit a YAML file that defines the `ClusterLogForwarder` CR object:
+
[source,yaml]
----
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
spec:
  managementState: Managed
  outputs:
  - name: rsyslog-east # <1>
    syslog:
      appName: <app_name> # <2>
      enrichment: KubernetesMinimal
      facility: <facility_value> # <3>
      msgId: <message_ID> # <4>
      payloadKey: <record_field> # <5>
      procId: <process_ID> # <6>
      rfc: <RFC3164_or_RFC5424> # <7>
      severity: informational # <8>
      tuning:
        deliveryMode: <AtLeastOnce_or_AtMostOnce> # <9>
      url: <url> # <10>
    tls: # <11>
      ca:
        key: ca-bundle.crt
        secretName: syslog-secret
    type: syslog
  pipelines:
  - inputRefs: # <12>
    - application
    name: syslog-east # <13>
    outputRefs:
    - rsyslog-east
  serviceAccount: # <14>
    name: logcollector
----
<1> Specify a name for the output.
<2> Optional: Specify the value for the `APP-NAME` part of the syslog message header. The value must conform with link:https://datatracker.ietf.org/doc/html/rfc5424[The Syslog Protocol]. The value can be a combination of static and dynamic values, where a dynamic value consists of a field path followed by `||` and then another field path or a static value. The final value is truncated to a maximum of 48 characters. You must enclose a dynamic value in curly brackets, and the value must be followed by a static fallback value separated by `||`. Static values can only contain alphanumeric characters along with dashes, underscores, dots, and forward slashes. Example value: `<value1>-{.<value2>||"none"}`. For a filled-in example, see the sketch after this procedure.
<3> Optional: Specify the value for the `Facility` part of the syslog message header.
<4> Optional: Specify the value for the `MSGID` part of the syslog message header. The value can be a combination of static and dynamic values, where a dynamic value consists of a field path followed by `||` and then another field path or a static value. The final value is truncated to a maximum of 32 characters. You must enclose a dynamic value in curly brackets, and the value must be followed by a static fallback value separated by `||`. Static values can only contain alphanumeric characters along with dashes, underscores, dots, and forward slashes. Example value: `<value1>-{.<value2>||"none"}`.
<5> Optional: Specify the record field to use as the payload. The `payloadKey` value must be a single field path enclosed in single curly brackets `{}`. Example: `{.<value>}`.
<6> Optional: Specify the value for the `PROCID` part of the syslog message header. The value must conform with link:https://datatracker.ietf.org/doc/html/rfc5424[The Syslog Protocol]. The value can be a combination of static and dynamic values, where a dynamic value consists of a field path followed by `||` and then another field path or a static value. The final value is truncated to a maximum of 48 characters. You must enclose a dynamic value in curly brackets, and the value must be followed by a static fallback value separated by `||`. Static values can only contain alphanumeric characters along with dashes, underscores, dots, and forward slashes. Example value: `<value1>-{.<value2>||"none"}`.
<7> Optional: Set the RFC that the generated messages conform to. The value can be `RFC3164` or `RFC5424`.
<8> Optional: Set the severity level for the message. For more information, see link:https://datatracker.ietf.org/doc/html/rfc5424#section-6.2.1[The Syslog Protocol].
<9> Optional: Set the delivery mode for log forwarding. The value can be either `AtLeastOnce` or `AtMostOnce`.
<10> Specify the absolute URL with a scheme. Valid schemes are: `tcp`, `tls`, and `udp`. For example: `tls://syslog-receiver.example.com:6514`.
<11> Specify the settings for controlling options of the transport layer security (TLS) client connections.
<12> Specify which log types to forward by using the pipeline: `application`, `infrastructure`, or `audit`.
<13> Specify a name for the pipeline.
<14> The name of your service account.
. Create the CR object:
+
[source,terminal]
----
$ oc create -f <filename>.yaml
----

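The placeholders for `appName`, `msgId`, and `procId` accept the dynamic value syntax described in the callouts. The following is a non-normative sketch of a filled-in syslog output; the field paths `.kubernetes.labels.app`, `.log_type`, and `.kubernetes.container_name` are illustrative assumptions about the log record, not values required by this procedure:

[source,yaml]
----
# ...
  outputs:
  - name: rsyslog-east
    type: syslog
    syslog:
      appName: app-{.kubernetes.labels.app||"none"} # dynamic field path with a static fallback
      msgId: id-{.log_type||"none"}
      procId: proc-{.kubernetes.container_name||"none"}
      rfc: RFC5424
      severity: informational
      url: tls://syslog-receiver.example.com:6514
# ...
----

The `tls` stanza in the example references a secret named `syslog-secret` that carries the CA bundle under the `ca-bundle.crt` key. A minimal sketch of creating that secret, assuming it must exist in the same namespace as the `ClusterLogForwarder` CR and that `<path_to_ca_bundle>` points to your CA certificate file:

[source,terminal]
----
$ oc create secret generic syslog-secret \
    --from-file=ca-bundle.crt=<path_to_ca_bundle> \
    -n <namespace>
----
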
[id="cluster-logging-collector-log-forward-examples-syslog-log-source_{context}"]
80+
== Adding log source information to the message output
81+
82+
You can add `namespace_name`, `pod_name`, and `container_name` elements to the `message` field of the record by adding the `enrichment` field to your `ClusterLogForwarder` custom resource (CR).
83+
84+
[source,yaml]
85+
----
86+
# ...
87+
spec:
88+
outputs:
89+
- name: syslogout
90+
syslog:
91+
enrichment: KubernetesMinimal: true
92+
facility: user
93+
payloadKey: message
94+
rfc: RFC3164
95+
severity: debug
96+
tag: mytag
97+
type: syslog
98+
url: tls://syslog-receiver.example.com:6514
99+
pipelines:
100+
- inputRefs:
101+
- application
102+
name: test-app
103+
outputRefs:
104+
- syslogout
105+
# ...
106+
----
[NOTE]
====
This configuration is compatible with both RFC3164 and RFC5424.
====

.Example syslog message output with `enrichment: None`
[source,text]
----
2025-03-03T11:48:01+00:00 example-worker-x syslogsyslogserverd846bb9b: {...}
----

.Example syslog message output with `enrichment: KubernetesMinimal`
[source,text]
----
2025-03-03T11:48:01+00:00 example-worker-x syslogsyslogserverd846bb9b: namespace_name=cakephp-project container_name=mysql pod_name=mysql-1-wr96h,message: {...}
----
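
After the `ClusterLogForwarder` CR from this module is created, one way to confirm that the configuration was accepted is to inspect the resource status. This is a sketch rather than part of the documented procedure; `collector` is the CR name used in the first example, and `<namespace>` is the namespace where you created the CR:

[source,terminal]
----
$ oc get clusterlogforwarder collector -n <namespace> -o yaml
----

The operator reports validation results for the outputs and pipelines under the `status` field of the resource.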
Lines changed: 62 additions & 0 deletions
@@ -0,0 +1,62 @@
// Module included in the following assemblies:
//
// * observability/logging/logging-6.2/log6x-clf-6.2.adoc

:_mod-docs-content-type: PROCEDURE
[id="logging-http-forward-6-2_{context}"]
= Forwarding logs over HTTP

To enable forwarding logs over HTTP, specify `http` as the output type in the `ClusterLogForwarder` custom resource (CR).

.Procedure

* Create or edit the `ClusterLogForwarder` CR using the template below:
+
.Example ClusterLogForwarder CR
[source,yaml]
----
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: <log_forwarder_name>
  namespace: <log_forwarder_namespace>
spec:
  managementState: Managed
  outputs:
  - name: <output_name>
    type: http
    http:
      headers: # <1>
        h1: v1
        h2: v2
      authentication:
        username:
          key: username
          secretName: <http_auth_secret>
        password:
          key: password
          secretName: <http_auth_secret>
      timeout: 300
      proxyURL: <proxy_url> # <2>
      url: <url> # <3>
    tls:
      insecureSkipVerify: # <4>
      ca:
        key: <ca_certificate>
        secretName: <secret_name> # <5>
  pipelines:
  - inputRefs:
    - application
    name: pipe1
    outputRefs:
    - <output_name> # <6>
  serviceAccount:
    name: <service_account_name> # <7>
----
<1> Additional headers to send with the log record.
<2> Optional: URL of the HTTP or HTTPS proxy to use when forwarding logs from this output. This setting overrides any default proxy settings for the cluster or the node.
<3> Destination address for logs.
<4> Values are either `true` or `false`.
<5> Secret name for destination credentials.
<6> This value should be the same as the output name.
<7> The name of your service account.
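
The template references an authentication secret (`<http_auth_secret>`) with `username` and `password` keys, and a TLS secret (`<secret_name>`) that holds the CA certificate under the `<ca_certificate>` key. A minimal sketch of creating both secrets, assuming they must live in the same namespace as the `ClusterLogForwarder` CR and with all literal values as placeholders:

[source,terminal]
----
$ oc create secret generic <http_auth_secret> \
    --from-literal=username=<username> \
    --from-literal=password=<password> \
    -n <log_forwarder_namespace>

$ oc create secret generic <secret_name> \
    --from-file=<ca_certificate>=<path_to_ca_file> \
    -n <log_forwarder_namespace>
----

An illustrative value for `<proxy_url>` takes the form `http://proxy.example.com:3128`; when set, logs from this output are sent through that proxy rather than the default cluster or node proxy settings.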

observability/logging/logging-6.2/log6x-clf-6.2.adoc

Lines changed: 8 additions & 5 deletions
@@ -113,10 +113,13 @@ Filters are configured in an array under `spec.filters`. They can match incoming
 
 Administrators can configure the following types of filters:
 
-include::modules/log6x-multiline-except.adoc[leveloffset=+2]
-include::modules/log6x-content-filter-drop-records.adoc[leveloffset=+2]
-include::modules/log6x-audit-log-filtering.adoc[leveloffset=+2]
-include::modules/log6x-input-spec-filter-labels-expressions.adoc[leveloffset=+2]
-include::modules/log6x-content-filter-prune-records.adoc[leveloffset=+2]
+include::modules/log6x-multiline-except.adoc[leveloffset=+1]
+include::modules/log6x-logging-http-forward-6-2.adoc[leveloffset=+1]
+include::modules/log6x-cluster-logging-collector-log-forward-syslog.adoc[leveloffset=+1]
+include::modules/log6x-content-filter-drop-records.adoc[leveloffset=+1]
+include::modules/log6x-audit-log-filtering.adoc[leveloffset=+1]
+include::modules/log6x-input-spec-filter-labels-expressions.adoc[leveloffset=+1]
+include::modules/log6x-content-filter-prune-records.adoc[leveloffset=+1]
 include::modules/log6x-input-spec-filter-audit-infrastructure.adoc[leveloffset=+1]
 include::modules/log6x-input-spec-filter-namespace-container.adoc[leveloffset=+1]
+