Commit eb5ea52

[OBSDOCS-1471] Logging 5.8 docs missing from 4.17
include assemblies remove elasticsearch instances no1 fix topicmap no1
1 parent: 47e8e56

File tree: 60 files changed, 3,908 additions and 328 deletions


_topic_maps/_topic_map.yml

Lines changed: 20 additions & 2 deletions
@@ -3081,8 +3081,26 @@ Topics:
 Topics:
 - Name: Release notes
   File: logging-5-8-release-notes
-- Name: Installing Logging
-  File: cluster-logging-deploying
+- Name: Logging overview
+  File: about-logging
+- Name: Cluster logging support
+  File: cluster-logging-support
+- Name: Visualization for logging
+  File: logging-visualization
+- Name: Quick start
+  File: quick-start
+- Name: Installing logging
+  File: installing-logging
+- Name: Configuring log forwarding
+  File: configuring-log-forwarding
+- Name: Configuring LokiStack storage
+  File: configuring-lokistack-storage
+- Name: Configuring LokiStack for OTLP
+  File: configuring-lokistack-otlp
+- Name: OpenTelemetry data model
+  File: opentelemetry-data-model
+- Name: Upgrading to Logging 6.0
+  File: upgrading-to-logging-60
 # - Name: Configuring the logging collector
 #   File: cluster-logging-collector
 # - Name: Support
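As an illustrative aside (not part of the commit), entries like the ones added above follow a strict pairing: every `- Name:` topic must be followed by a `File:` line, and the file names are kebab-case. A minimal Python sketch of a lint for that pairing, with the rules assumed from the entries shown here:

```python
import re

def check_topic_entries(lines):
    """Return a list of problems found in a flat list of topic-map lines.

    Assumed rules (inferred from the diff above, not from the real build
    tooling): each "- Name:" entry is immediately followed by a "File:"
    line, and file names are lowercase kebab-case.
    """
    problems = []
    for i, line in enumerate(lines):
        stripped = line.strip()
        if stripped.startswith("- Name:"):
            # The entry's File: line is expected on the following line.
            if i + 1 >= len(lines) or not lines[i + 1].strip().startswith("File:"):
                problems.append(f"missing File: after entry: {stripped}")
        elif stripped.startswith("File:"):
            fname = stripped.split(":", 1)[1].strip()
            if not re.fullmatch(r"[a-z0-9][a-z0-9-]*", fname):
                problems.append(f"unexpected file name format: {fname}")
    return problems

entries = [
    "- Name: Logging overview",
    "  File: about-logging",
    "- Name: Upgrading to Logging 6.0",
    "  File: upgrading-to-logging-60",
]
print(check_topic_entries(entries))  # [] when every entry is well formed
```

A check like this catches the most common topic-map mistake: a renamed `Name` whose `File` line was left pointing at a deleted module.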

modules/cluster-logging-collector-limits.adoc

Lines changed: 0 additions & 26 deletions
@@ -36,29 +36,3 @@ spec:
 # ...
 ----
 <1> Specify the CPU and memory limits and requests as needed. The values shown are the default values.
-
-////
-[source,yaml]
-----
-$ oc edit ClusterLogging instance
-
-apiVersion: "logging.openshift.io/v1"
-kind: "ClusterLogging"
-metadata:
-  name: "instance"
-
-....
-
-spec:
-  collection:
-    logs:
-      rsyslog:
-        resources:
-          limits: <1>
-            memory: 358Mi
-          requests:
-            cpu: 100m
-            memory: 358Mi
-----
-<1> Specify the CPU and memory limits and requests as needed. The values shown are the default values.
-////

modules/cluster-logging-collector-log-forward-syslog.adoc

Lines changed: 55 additions & 124 deletions
@@ -2,9 +2,9 @@
 [id="cluster-logging-collector-log-forward-syslog_{context}"]
 = Forwarding logs using the syslog protocol

-You can use the *syslog* link:https://tools.ietf.org/html/rfc3164[RFC3164] or link:https://tools.ietf.org/html/rfc5424[RFC5424] protocol to send a copy of your logs to an external log aggregator that is configured to accept the protocol instead of, or in addition to, the default Elasticsearch log store. You are responsible for configuring the external log aggregator, such as a syslog server, to receive the logs from {product-title}.
+You can use the syslog link:https://tools.ietf.org/html/rfc3164[RFC3164] or link:https://tools.ietf.org/html/rfc5424[RFC5424] protocol to send a copy of your logs to an external log aggregator that is configured to accept the protocol instead of, or in addition to, the default Elasticsearch log store. You are responsible for configuring the external log aggregator, such as a syslog server, to receive the logs from {product-title}.

-To configure log forwarding using the *syslog* protocol, you must create a `ClusterLogForwarder` custom resource (CR) with one or more outputs to the syslog servers, and pipelines that use those outputs. The syslog output can use a UDP, TCP, or TLS connection.
+To configure log forwarding using the syslog protocol, you must create a `ClusterLogForwarder` custom resource (CR) with one or more outputs to the syslog servers, and pipelines that use those outputs. The syslog output can use a UDP, TCP, or TLS connection.

 .Prerequisites

@@ -16,72 +16,54 @@ To configure log forwarding using the syslog protocol, you must create a `Clus
 +
 [source,yaml]
 ----
-apiVersion: logging.openshift.io/v1
+apiVersion: observability.openshift.io/v1
 kind: ClusterLogForwarder
 metadata:
-  name: <log_forwarder_name> <1>
-  namespace: <log_forwarder_namespace> <2>
+  name: collector
 spec:
-  serviceAccountName: <service_account_name> <3>
+  managementState: Managed
   outputs:
-  - name: rsyslog-east <4>
-    type: syslog <5>
-    syslog: <6>
-      facility: local0
-      rfc: RFC3164
-      payloadKey: message
-      severity: informational
-    url: 'tls://rsyslogserver.east.example.com:514' <7>
-    secret: <8>
-      name: syslog-secret
-  - name: rsyslog-west
-    type: syslog
-    syslog:
-      appName: myapp
-      facility: user
-      msgID: mymsg
-      procID: myproc
-      rfc: RFC5424
-      severity: debug
-    url: 'tcp://rsyslogserver.west.example.com:514'
+  - name: rsyslog-east # <1>
+    syslog:
+      appName: <app_name> # <2>
+      enrichment: KubernetesMinimal
+      facility: <facility_value> # <3>
+      msgId: <message_ID> # <4>
+      payloadKey: <record_field> # <5>
+      procId: <process_ID> # <6>
+      rfc: <RFC3164_or_RFC5424> # <7>
+      severity: informational # <8>
+      tuning:
+        deliveryMode: <AtLeastOnce_or_AtMostOnce> # <9>
+      url: <url> # <10>
+    tls: # <11>
+      ca:
+        key: ca-bundle.crt
+        secretName: syslog-secret
+    type: syslog
   pipelines:
-  - name: syslog-east <9>
-    inputRefs: <10>
-    - audit
-    - application
-    outputRefs: <11>
-    - rsyslog-east
-    - default <12>
-    labels:
-      secure: "true" <13>
-      syslog: "east"
-  - name: syslog-west <14>
-    inputRefs:
-    - infrastructure
-    outputRefs:
-    - rsyslog-west
-    - default
-    labels:
-      syslog: "west"
+  - inputRefs: # <12>
+    - application
+    name: syslog-east # <13>
+    outputRefs:
+    - rsyslog-east
+  serviceAccount: # <14>
+    name: logcollector
 ----
-<1> In legacy implementations, the CR name must be `instance`. In multi log forwarder implementations, you can use any name.
-<2> In legacy implementations, the CR namespace must be `openshift-logging`. In multi log forwarder implementations, you can use any namespace.
-<3> The name of your service account. The service account is only required in multi log forwarder implementations if the log forwarder is not deployed in the `openshift-logging` namespace.
-<4> Specify a name for the output.
-<5> Specify the `syslog` type.
-<6> Optional: Specify the syslog parameters, listed below.
-<7> Specify the URL and port of the external syslog instance. You can use the `udp` (insecure), `tcp` (insecure) or `tls` (secure TCP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address.
-<8> If using a `tls` prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must contain a `ca-bundle.crt` key that points to the certificate it represents. In legacy implementations, the secret must exist in the `openshift-logging` project.
-<9> Optional: Specify a name for the pipeline.
-<10> Specify which log types to forward by using the pipeline: `application,` `infrastructure`, or `audit`.
-<11> Specify the name of the output to use when forwarding logs with this pipeline.
-<12> Optional: Specify the `default` output to forward logs to the internal Elasticsearch instance.
-<13> Optional: String. One or more labels to add to the logs. Quote values like "true" so they are recognized as string values, not as a boolean.
-<14> Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
-** A name to describe the pipeline.
-** The `inputRefs` is the log type to forward by using the pipeline: `application,` `infrastructure`, or `audit`.
-** The `outputRefs` is the name of the output to use.
-** Optional: String. One or more labels to add to the logs.
+<1> Specify a name for the output.
+<2> Optional: Specify the value for the `APP-NAME` part of the syslog message header. The value must conform with link:https://datatracker.ietf.org/doc/html/rfc5424[The Syslog Protocol]. The value can be a combination of static and dynamic values consisting of field paths followed by `||`, and then followed by another field path or a static value. The final value is truncated to a maximum of 48 characters. You must encase a dynamic value in curly brackets, and the value must be followed by a static fallback value separated by `||`. Static values can only contain alphanumeric characters along with dashes, underscores, dots, and forward slashes. Example value: <value1>-{.<value2>||"none"}.
+<3> Optional: Specify the value for the `Facility` part of the syslog-msg header.
+<4> Optional: Specify the value for the `MSGID` part of the syslog-msg header. The value can be a combination of static and dynamic values consisting of field paths followed by `||`, and then followed by another field path or a static value. The final value is truncated to a maximum of 32 characters. You must encase a dynamic value in curly brackets, and the value must be followed by a static fallback value separated by `||`. Static values can only contain alphanumeric characters along with dashes, underscores, dots, and forward slashes. Example value: <value1>-{.<value2>||"none"}.
+<5> Optional: Specify the record field to use as the payload. The `payloadKey` value must be a single field path encased in single curly brackets `{}`. Example: {.<value>}.
+<6> Optional: Specify the value for the `PROCID` part of the syslog message header. The value must conform with link:https://datatracker.ietf.org/doc/html/rfc5424[The Syslog Protocol]. The value can be a combination of static and dynamic values consisting of field paths followed by `||`, and then followed by another field path or a static value. The final value is truncated to a maximum of 48 characters. You must encase a dynamic value in curly brackets, and the value must be followed by a static fallback value separated by `||`. Static values can only contain alphanumeric characters along with dashes, underscores, dots, and forward slashes. Example value: <value1>-{.<value2>||"none"}.
+<7> Optional: Set the RFC that the generated messages conform to. The value can be `RFC3164` or `RFC5424`.
+<8> Optional: Set the severity level for the message. For more information, see link:https://datatracker.ietf.org/doc/html/rfc5424#section-6.2.1[The Syslog Protocol].
+<9> Optional: Set the delivery mode for log forwarding. The value can be either `AtLeastOnce` or `AtMostOnce`.
+<10> Specify the absolute URL with a scheme. Valid schemes are: `tcp`, `tls`, and `udp`. For example: `tls://syslog-receiver.example.com:6514`.
+<11> Specify the settings for controlling options of the transport layer security (TLS) client connections.
+<12> Specify which log types to forward by using the pipeline: `application`, `infrastructure`, or `audit`.
+<13> Specify a name for the pipeline.
+<14> The name of your service account.

 . Create the CR object:
 +
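Callouts <2>, <4>, and <6> in the hunk above describe a template syntax in which a dynamic field path in curly brackets falls back to a static value after `||`, with the result truncated (48 characters for `appName` and `procId`, 32 for `msgId`). As a rough illustration only — this is not the collector's actual implementation, and the parsing rules here are assumptions based on the callout text — the resolution could be sketched as:

```python
import re

def resolve_dynamic_value(template, record, max_len=48):
    """Resolve a value like 'myapp-{.kubernetes.labels.app||"none"}'.

    Each {.path||"fallback"} is replaced with the field looked up in the
    log record, or with the static fallback when the path is missing; the
    final value is truncated to max_len characters.
    """
    def lookup(match):
        path, fallback = match.group(1), match.group(2)
        value = record
        for key in path.lstrip(".").split("."):
            if not isinstance(value, dict) or key not in value:
                return fallback
            value = value[key]
        return str(value)

    resolved = re.sub(r'\{(\.[^|}]+)\|\|"([^"]*)"\}', lookup, template)
    return resolved[:max_len]

record = {"kubernetes": {"labels": {"app": "checkout"}}}
print(resolve_dynamic_value('myapp-{.kubernetes.labels.app||"none"}', record))
# myapp-checkout
print(resolve_dynamic_value('myapp-{.missing.field||"none"}', record))
# myapp-none
```

The truncation step matters in practice: a long pod-derived value is silently cut to the RFC 5424 field limit rather than rejected.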
@@ -90,99 +72,48 @@ spec:
 $ oc create -f <filename>.yaml
 ----

-[id=cluster-logging-collector-log-forward-examples-syslog-log-source]
-== Adding log source information to message output
+[id="cluster-logging-collector-log-forward-examples-syslog-log-source_{context}"]
+== Adding log source information to the message output

-You can add `namespace_name`, `pod_name`, and `container_name` elements to the `message` field of the record by adding the `AddLogSource` field to your `ClusterLogForwarder` custom resource (CR).
+You can add `namespace_name`, `pod_name`, and `container_name` elements to the `message` field of the record by adding the `enrichment` field to your `ClusterLogForwarder` custom resource (CR).

 [source,yaml]
 ----
+# ...
 spec:
   outputs:
   - name: syslogout
     syslog:
-      addLogSource: true
+      enrichment: KubernetesMinimal
       facility: user
       payloadKey: message
       rfc: RFC3164
      severity: debug
-      tag: mytag
     type: syslog
-    url: tls://syslog-receiver.openshift-logging.svc:24224
+    url: tls://syslog-receiver.example.com:6514
  pipelines:
   - inputRefs:
     - application
     name: test-app
     outputRefs:
     - syslogout
+# ...
 ----

 [NOTE]
 ====
 This configuration is compatible with both RFC3164 and RFC5424.
 ====

-.Example syslog message output without `AddLogSource`
+.Example syslog message output with `enrichment: None`
 [source, text]
 ----
-<15>1 2020-11-15T17:06:14+00:00 fluentd-9hkb4 mytag - - - {"msgcontent"=>"Message Contents", "timestamp"=>"2020-11-15 17:06:09", "tag_key"=>"rec_tag", "index"=>56}
+2025-03-03T11:48:01+00:00 example-worker-x syslogsyslogserverd846bb9b: {...}
 ----

-.Example syslog message output with `AddLogSource`
+.Example syslog message output with `enrichment: KubernetesMinimal`

 [source, text]
 ----
-<15>1 2020-11-16T10:49:37+00:00 crc-j55b9-master-0 mytag - - - namespace_name=clo-test-6327,pod_name=log-generator-ff9746c49-qxm7l,container_name=log-generator,message={"msgcontent":"My life is my message", "timestamp":"2020-11-16 10:49:36", "tag_key":"rec_tag", "index":76}
+2025-03-03T11:48:01+00:00 example-worker-x syslogsyslogserverd846bb9b: namespace_name=cakephp-project container_name=mysql pod_name=mysql-1-wr96h,message: {...}
 ----
-
-[id=cluster-logging-collector-log-forward-examples-syslog-parms]
-== Syslog parameters
-
-You can configure the following for the `syslog` outputs. For more information, see the syslog link:https://tools.ietf.org/html/rfc3164[RFC3164] or link:https://tools.ietf.org/html/rfc5424[RFC5424] RFC.
-
-* facility: The link:https://tools.ietf.org/html/rfc5424#section-6.2.1[syslog facility]. The value can be a decimal integer or a case-insensitive keyword:
-** `0` or `kern` for kernel messages
-** `1` or `user` for user-level messages, the default.
-** `2` or `mail` for the mail system
-** `3` or `daemon` for system daemons
-** `4` or `auth` for security/authentication messages
-** `5` or `syslog` for messages generated internally by syslogd
-** `6` or `lpr` for the line printer subsystem
-** `7` or `news` for the network news subsystem
-** `8` or `uucp` for the UUCP subsystem
-** `9` or `cron` for the clock daemon
-** `10` or `authpriv` for security authentication messages
-** `11` or `ftp` for the FTP daemon
-** `12` or `ntp` for the NTP subsystem
-** `13` or `security` for the syslog audit log
-** `14` or `console` for the syslog alert log
-** `15` or `solaris-cron` for the scheduling daemon
-** `16`–`23` or `local0` – `local7` for locally used facilities
-* Optional: `payloadKey`: The record field to use as payload for the syslog message.
-+
-[NOTE]
-====
-Configuring the `payloadKey` parameter prevents other parameters from being forwarded to the syslog.
-====
-+
-* rfc: The RFC to be used for sending logs using syslog. The default is RFC5424.
-* severity: The link:https://tools.ietf.org/html/rfc5424#section-6.2.1[syslog severity] to set on outgoing syslog records. The value can be a decimal integer or a case-insensitive keyword:
-** `0` or `Emergency` for messages indicating the system is unusable
-** `1` or `Alert` for messages indicating action must be taken immediately
-** `2` or `Critical` for messages indicating critical conditions
-** `3` or `Error` for messages indicating error conditions
-** `4` or `Warning` for messages indicating warning conditions
-** `5` or `Notice` for messages indicating normal but significant conditions
-** `6` or `Informational` for messages indicating informational messages
-** `7` or `Debug` for messages indicating debug-level messages, the default
-* tag: Tag specifies a record field to use as a tag on the syslog message.
-* trimPrefix: Remove the specified prefix from the tag.
-
-[id=cluster-logging-collector-log-forward-examples-syslog-5424]
-== Additional RFC5424 syslog parameters
-
-The following parameters apply to RFC5424:
-
-* appName: The APP-NAME is a free-text string that identifies the application that sent the log. Must be specified for `RFC5424`.
-* msgID: The MSGID is a free-text string that identifies the type of message. Must be specified for `RFC5424`.
-* procID: The PROCID is a free-text string. A change in the value indicates a discontinuity in syslog reporting. Must be specified for `RFC5424`.
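The `enrichment: KubernetesMinimal` example output in this module prepends the log source as `key=value` pairs before the message body. A small sketch of how a receiver might split that payload back apart; the delimiter handling is an assumption based solely on the example line shown above, not on any documented receiver API:

```python
import re

def parse_enriched_payload(payload):
    """Split 'namespace_name=... container_name=... pod_name=...,message: ...'.

    Assumes the format shown in the KubernetesMinimal example output:
    space-separated key=value pairs, then ',message: ' before the body.
    """
    fields = dict(re.findall(
        r"(namespace_name|container_name|pod_name)=([^\s,]+)", payload))
    # Everything after 'message:' is the original log message body.
    match = re.search(r"message:\s*(.*)$", payload, re.S)
    fields["message"] = match.group(1) if match else payload
    return fields

payload = ('namespace_name=cakephp-project container_name=mysql '
           'pod_name=mysql-1-wr96h,message: {"log":"..."}')
print(parse_enriched_payload(payload)["namespace_name"])  # cakephp-project
```

This also shows why the enrichment exists: without the prepended pairs, a downstream syslog consumer has no way to recover which namespace, pod, or container produced the record.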

modules/cluster-logging-deploying-about.adoc

Lines changed: 4 additions & 3 deletions
@@ -155,10 +155,11 @@ spec:
     nodeCount: 3
     resources:
       limits:
-        memory: 32Gi
+        cpu: 200m
+        memory: 16Gi
       requests:
-        cpu: 3
-        memory: 32Gi
+        cpu: 200m
+        memory: 16Gi
     storage:
       storageClassName: "gp2"
       size: "200G"

modules/cluster-logging-elasticsearch-audit.adoc

Lines changed: 1 addition & 1 deletion
@@ -10,7 +10,7 @@ include::snippets/audit-logs-default.adoc[]

 .Procedure

-To use the Log Forward API to forward audit logs to the internal Elasticsearch instance:
+To use the Log Forwarding API to forward audit logs to the internal Elasticsearch instance:

 . Create or edit a YAML file that defines the `ClusterLogForwarder` CR object:
 +

modules/cluster-logging-kibana-limits.adoc

Lines changed: 0 additions & 1 deletion
@@ -2,7 +2,6 @@
 //
 // * observability/logging/cluster-logging-visualizer.adoc

-:_mod-docs-content-type: PROCEDURE
 [id="cluster-logging-kibana-limits_{context}"]
 = Configure the CPU and memory limits for the log visualizer


modules/cluster-logging-kibana-scaling.adoc

Lines changed: 0 additions & 3 deletions
@@ -19,8 +19,6 @@ $ oc -n openshift-logging edit ClusterLogging instance
 +
 [source,yaml]
 ----
-$ oc edit ClusterLogging instance
-
 apiVersion: "logging.openshift.io/v1"
 kind: "ClusterLogging"
 metadata:
@@ -35,4 +33,3 @@ spec:
     replicas: 1 <1>
 ----
 <1> Specify the number of Kibana nodes.
-

modules/cluster-logging-maintenance-support-list-6x.adoc

Lines changed: 0 additions & 6 deletions
@@ -1,9 +1,3 @@
-// Module included in the following assemblies:
-//
-// * observability/logging/logging-6.0/log60-cluster-logging-support.adoc
-// * observability/logging/logging-6.1/log61-cluster-logging-support.adoc
-// * observability/logging/logging-6.2/log62-cluster-logging-support.adoc
-
 :_mod-docs-content-type: REFERENCE
 [id="cluster-logging-maintenance-support-list_{context}"]
 = Unsupported configurations
