
Commit 15c390a

committed

[OBSDOCS-1471] Logging 5.8 docs missing from 4.17

Squashed fixups: include assemblies; remove elasticsearch instances no1; fix topicmap no1; fix asciidoc no1; add loki-statement-snip; fix opentelemetry attributes; fix opentelemetry attributes no2; fix opentelemetry attributes no3; add logging 5.x from 4.16; topic-map fix no2; topic-map fix no3; topic-map fix no4; release note fix no1; kibana fix
1 parent 47e8e56 commit 15c390a

File tree

67 files changed: +3935 −474 lines


_topic_maps/_topic_map.yml

Lines changed: 85 additions & 6 deletions
@@ -3076,13 +3076,92 @@ Topics:
 #     File: log6x-visual
 #   - Name: API reference 6.0
 #     File: log6x-api-reference
-  - Name: Logging 5.8
-    Dir: logging_release_notes
+  - Name: Release notes for Logging 5.8
+    File: logging-5-8-release-notes
+  - Name: Support
+    File: cluster-logging-support
+  - Name: Troubleshooting logging
+    Dir: troubleshooting
+    Topics:
+    - Name: Viewing Logging status
+      File: cluster-logging-cluster-status
+    - Name: Troubleshooting log forwarding
+      File: log-forwarding-troubleshooting
+  - Name: About Logging
+    File: cluster-logging
+  - Name: Installing Logging
+    File: cluster-logging-deploying
+  - Name: Updating Logging
+    File: cluster-logging-upgrading
+    Distros: openshift-enterprise,openshift-origin
+  - Name: Visualizing logs
+    Dir: log_visualization
+    Topics:
+    - Name: About log visualization
+      File: log-visualization
+    - Name: Log visualization with the web console
+      File: log-visualization-ocp-console
+  - Name: Configuring your Logging deployment
+    Dir: config
+    Distros: openshift-enterprise,openshift-origin
+    Topics:
+    - Name: Configuring CPU and memory limits for Logging components
+      File: cluster-logging-memory
+    - Name: Configuring systemd-journald for Logging
+      File: cluster-logging-systemd
+  - Name: Log collection and forwarding
+    Dir: log_collection_forwarding
+    Topics:
+    - Name: About log collection and forwarding
+      File: log-forwarding
+    - Name: Log output types
+      File: logging-output-types
+    - Name: Enabling JSON log forwarding
+      File: cluster-logging-enabling-json-logging
+    - Name: Configuring log forwarding
+      File: configuring-log-forwarding
+    - Name: Configuring the logging collector
+      File: cluster-logging-collector
+    - Name: Collecting and storing Kubernetes events
+      File: cluster-logging-eventrouter
+  - Name: Log storage
+    Dir: log_storage
+    Topics:
+    - Name: Installing log storage
+      File: installing-log-storage
+    - Name: Configuring the LokiStack log store
+      File: cluster-logging-loki
+  - Name: Logging alerts
+    Dir: logging_alerts
+    Topics:
+    - Name: Default logging alerts
+      File: default-logging-alerts
+    - Name: Custom logging alerts
+      File: custom-logging-alerts
+  - Name: Performance and reliability tuning
+    Dir: performance_reliability
     Topics:
-    - Name: Release notes
-      File: logging-5-8-release-notes
-    - Name: Installing Logging
-      File: cluster-logging-deploying
+    - Name: Flow control mechanisms
+      File: logging-flow-control-mechanisms
+  - Name: Scheduling resources
+    Dir: scheduling_resources
+    Topics:
+    - Name: Using node selectors to move logging resources
+      File: logging-node-selectors
+    - Name: Using tolerations to control logging pod placement
+      File: logging-taints-tolerations
+  - Name: Uninstalling Logging
+    File: cluster-logging-uninstall
+#   - Name: Exported fields
+#     File: cluster-logging-exported-fields
+#     Distros: openshift-enterprise,openshift-origin
+  - Name: API reference
+    Dir: api_reference
+    Topics:
+    - Name: 5.8 Logging API reference
+      File: logging-5-8-reference
+#     - Name: 5.7 Logging API reference
+#       File: logging-5-7-reference
 #   - Name: Configuring the logging collector
 #     File: cluster-logging-collector
 #   - Name: Support

modules/cluster-logging-collector-limits.adoc

Lines changed: 0 additions & 26 deletions
@@ -36,29 +36,3 @@ spec:
 # ...
 ----
 <1> Specify the CPU and memory limits and requests as needed. The values shown are the default values.
-
-////
-[source,yaml]
-----
-$ oc edit ClusterLogging instance
-
-apiVersion: "logging.openshift.io/v1"
-kind: "ClusterLogging"
-metadata:
-  name: "instance"
-
-....
-
-spec:
-  collection:
-    logs:
-      rsyslog:
-        resources:
-          limits: <1>
-            memory: 358Mi
-          requests:
-            cpu: 100m
-            memory: 358Mi
-----
-<1> Specify the CPU and memory limits and requests as needed. The values shown are the default values.
-////

modules/cluster-logging-collector-log-forward-syslog.adoc

Lines changed: 55 additions & 124 deletions
@@ -2,9 +2,9 @@
 [id="cluster-logging-collector-log-forward-syslog_{context}"]
 = Forwarding logs using the syslog protocol
 
-You can use the *syslog* link:https://tools.ietf.org/html/rfc3164[RFC3164] or link:https://tools.ietf.org/html/rfc5424[RFC5424] protocol to send a copy of your logs to an external log aggregator that is configured to accept the protocol instead of, or in addition to, the default Elasticsearch log store. You are responsible for configuring the external log aggregator, such as a syslog server, to receive the logs from {product-title}.
+You can use the syslog link:https://tools.ietf.org/html/rfc3164[RFC3164] or link:https://tools.ietf.org/html/rfc5424[RFC5424] protocol to send a copy of your logs to an external log aggregator that is configured to accept the protocol instead of, or in addition to, the default Elasticsearch log store. You are responsible for configuring the external log aggregator, such as a syslog server, to receive the logs from {product-title}.
 
-To configure log forwarding using the *syslog* protocol, you must create a `ClusterLogForwarder` custom resource (CR) with one or more outputs to the syslog servers, and pipelines that use those outputs. The syslog output can use a UDP, TCP, or TLS connection.
+To configure log forwarding using the syslog protocol, you must create a `ClusterLogForwarder` custom resource (CR) with one or more outputs to the syslog servers, and pipelines that use those outputs. The syslog output can use a UDP, TCP, or TLS connection.
 
 .Prerequisites
 
@@ -16,72 +16,54 @@ To configure log forwarding using the *syslog* protocol, you must create a `Clus
 +
 [source,yaml]
 ----
-apiVersion: logging.openshift.io/v1
+apiVersion: observability.openshift.io/v1
 kind: ClusterLogForwarder
 metadata:
-  name: <log_forwarder_name> <1>
-  namespace: <log_forwarder_namespace> <2>
+  name: collector
 spec:
-  serviceAccountName: <service_account_name> <3>
+  managementState: Managed
   outputs:
-  - name: rsyslog-east <4>
-    type: syslog <5>
-    syslog: <6>
-      facility: local0
-      rfc: RFC3164
-      payloadKey: message
-      severity: informational
-    url: 'tls://rsyslogserver.east.example.com:514' <7>
-    secret: <8>
-      name: syslog-secret
-  - name: rsyslog-west
-    type: syslog
-    syslog:
-      appName: myapp
-      facility: user
-      msgID: mymsg
-      procID: myproc
-      rfc: RFC5424
-      severity: debug
-    url: 'tcp://rsyslogserver.west.example.com:514'
+  - name: rsyslog-east # <1>
+    syslog:
+      appName: <app_name> # <2>
+      enrichment: KubernetesMinimal
+      facility: <facility_value> # <3>
+      msgId: <message_ID> # <4>
+      payloadKey: <record_field> # <5>
+      procId: <process_ID> # <6>
+      rfc: <RFC3164_or_RFC5424> # <7>
+      severity: informational # <8>
+      tuning:
+        deliveryMode: <AtLeastOnce_or_AtMostOnce> # <9>
+      url: <url> # <10>
+    tls: # <11>
+      ca:
+        key: ca-bundle.crt
+        secretName: syslog-secret
+    type: syslog
   pipelines:
-  - name: syslog-east <9>
-    inputRefs: <10>
-    - audit
-    - application
-    outputRefs: <11>
-    - rsyslog-east
-    - default <12>
-    labels:
-      secure: "true" <13>
-      syslog: "east"
-  - name: syslog-west <14>
-    inputRefs:
-    - infrastructure
-    outputRefs:
-    - rsyslog-west
-    - default
-    labels:
-      syslog: "west"
+  - inputRefs: # <12>
+    - application
+    name: syslog-east # <13>
+    outputRefs:
+    - rsyslog-east
+  serviceAccount: # <14>
+    name: logcollector
 ----
-<1> In legacy implementations, the CR name must be `instance`. In multi log forwarder implementations, you can use any name.
-<2> In legacy implementations, the CR namespace must be `openshift-logging`. In multi log forwarder implementations, you can use any namespace.
-<3> The name of your service account. The service account is only required in multi log forwarder implementations if the log forwarder is not deployed in the `openshift-logging` namespace.
-<4> Specify a name for the output.
-<5> Specify the `syslog` type.
-<6> Optional: Specify the syslog parameters, listed below.
-<7> Specify the URL and port of the external syslog instance. You can use the `udp` (insecure), `tcp` (insecure) or `tls` (secure TCP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address.
-<8> If using a `tls` prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must contain a `ca-bundle.crt` key that points to the certificate it represents. In legacy implementations, the secret must exist in the `openshift-logging` project.
-<9> Optional: Specify a name for the pipeline.
-<10> Specify which log types to forward by using the pipeline: `application,` `infrastructure`, or `audit`.
-<11> Specify the name of the output to use when forwarding logs with this pipeline.
-<12> Optional: Specify the `default` output to forward logs to the internal Elasticsearch instance.
-<13> Optional: String. One or more labels to add to the logs. Quote values like "true" so they are recognized as string values, not as a boolean.
-<14> Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
-** A name to describe the pipeline.
-** The `inputRefs` is the log type to forward by using the pipeline: `application,` `infrastructure`, or `audit`.
-** The `outputRefs` is the name of the output to use.
-** Optional: String. One or more labels to add to the logs.
+<1> Specify a name for the output.
+<2> Optional: Specify the value for the `APP-NAME` part of the syslog message header. The value must conform with link:https://datatracker.ietf.org/doc/html/rfc5424[The Syslog Protocol]. The value can be a combination of static and dynamic values consisting of field paths followed by `||`, and then followed by another field path or a static value. The final value is truncated to a maximum of 48 characters. You must encase a dynamic value in curly brackets, and the value must be followed by a static fallback value separated with `||`. Static values can only contain alphanumeric characters along with dashes, underscores, dots, and forward slashes. Example value: <value1>-{.<value2>||"none"}.
+<3> Optional: Specify the value for the `Facility` part of the syslog message header.
+<4> Optional: Specify the value for the `MSGID` part of the syslog message header. The value can be a combination of static and dynamic values consisting of field paths followed by `||`, and then followed by another field path or a static value. The final value is truncated to a maximum of 32 characters. You must encase a dynamic value in curly brackets, and the value must be followed by a static fallback value separated with `||`. Static values can only contain alphanumeric characters along with dashes, underscores, dots, and forward slashes. Example value: <value1>-{.<value2>||"none"}.
+<5> Optional: Specify the record field to use as the payload. The `payloadKey` value must be a single field path encased in single curly brackets `{}`. Example: {.<value>}.
+<6> Optional: Specify the value for the `PROCID` part of the syslog message header. The value must conform with link:https://datatracker.ietf.org/doc/html/rfc5424[The Syslog Protocol]. The value can be a combination of static and dynamic values consisting of field paths followed by `||`, and then followed by another field path or a static value. The final value is truncated to a maximum of 48 characters. You must encase a dynamic value in curly brackets, and the value must be followed by a static fallback value separated with `||`. Static values can only contain alphanumeric characters along with dashes, underscores, dots, and forward slashes. Example value: <value1>-{.<value2>||"none"}.
+<7> Optional: Set the RFC that the generated messages conform to. The value can be `RFC3164` or `RFC5424`.
+<8> Optional: Set the severity level for the message. For more information, see link:https://datatracker.ietf.org/doc/html/rfc5424#section-6.2.1[The Syslog Protocol].
+<9> Optional: Set the delivery mode for log forwarding. The value can be either `AtLeastOnce` or `AtMostOnce`.
+<10> Specify the absolute URL with a scheme. Valid schemes are: `tcp`, `tls`, and `udp`. For example: `tls://syslog-receiver.example.com:6514`.
+<11> Specify the settings for controlling the options of the transport layer security (TLS) client connections.
+<12> Specify which log types to forward by using the pipeline: `application`, `infrastructure`, or `audit`.
+<13> Specify a name for the pipeline.
+<14> The name of your service account.
 
 . Create the CR object:
 +
@@ -90,99 +72,48 @@ spec:
 $ oc create -f <filename>.yaml
 ----
 
-[id=cluster-logging-collector-log-forward-examples-syslog-log-source]
-== Adding log source information to message output
+[id="cluster-logging-collector-log-forward-examples-syslog-log-source_{context}"]
+== Adding log source information to the message output
 
-You can add `namespace_name`, `pod_name`, and `container_name` elements to the `message` field of the record by adding the `AddLogSource` field to your `ClusterLogForwarder` custom resource (CR).
+You can add `namespace_name`, `pod_name`, and `container_name` elements to the `message` field of the record by adding the `enrichment` field to your `ClusterLogForwarder` custom resource (CR).
 
 [source,yaml]
 ----
+# ...
 spec:
   outputs:
   - name: syslogout
     syslog:
-      addLogSource: true
+      enrichment: KubernetesMinimal
       facility: user
       payloadKey: message
       rfc: RFC3164
       severity: debug
-      tag: mytag
     type: syslog
-    url: tls://syslog-receiver.openshift-logging.svc:24224
+    url: tls://syslog-receiver.example.com:6514
   pipelines:
   - inputRefs:
     - application
    name: test-app
    outputRefs:
    - syslogout
+# ...
 ----
 
 [NOTE]
 ====
 This configuration is compatible with both RFC3164 and RFC5424.
 ====
 
-.Example syslog message output without `AddLogSource`
+.Example syslog message output with `enrichment: None`
 [source, text]
 ----
-<15>1 2020-11-15T17:06:14+00:00 fluentd-9hkb4 mytag - - - {"msgcontent"=>"Message Contents", "timestamp"=>"2020-11-15 17:06:09", "tag_key"=>"rec_tag", "index"=>56}
+2025-03-03T11:48:01+00:00 example-worker-x syslogsyslogserverd846bb9b: {...}
 ----
 
-.Example syslog message output with `AddLogSource`
+.Example syslog message output with `enrichment: KubernetesMinimal`
 
 [source, text]
 ----
-<15>1 2020-11-16T10:49:37+00:00 crc-j55b9-master-0 mytag - - - namespace_name=clo-test-6327,pod_name=log-generator-ff9746c49-qxm7l,container_name=log-generator,message={"msgcontent":"My life is my message", "timestamp":"2020-11-16 10:49:36", "tag_key":"rec_tag", "index":76}
+2025-03-03T11:48:01+00:00 example-worker-x syslogsyslogserverd846bb9b: namespace_name=cakephp-project container_name=mysql pod_name=mysql-1-wr96h,message: {...}
 ----
-
-[id=cluster-logging-collector-log-forward-examples-syslog-parms]
-== Syslog parameters
-
-You can configure the following for the `syslog` outputs. For more information, see the syslog link:https://tools.ietf.org/html/rfc3164[RFC3164] or link:https://tools.ietf.org/html/rfc5424[RFC5424] RFC.
-
-* facility: The link:https://tools.ietf.org/html/rfc5424#section-6.2.1[syslog facility]. The value can be a decimal integer or a case-insensitive keyword:
-** `0` or `kern` for kernel messages
-** `1` or `user` for user-level messages, the default.
-** `2` or `mail` for the mail system
-** `3` or `daemon` for system daemons
-** `4` or `auth` for security/authentication messages
-** `5` or `syslog` for messages generated internally by syslogd
-** `6` or `lpr` for the line printer subsystem
-** `7` or `news` for the network news subsystem
-** `8` or `uucp` for the UUCP subsystem
-** `9` or `cron` for the clock daemon
-** `10` or `authpriv` for security authentication messages
-** `11` or `ftp` for the FTP daemon
-** `12` or `ntp` for the NTP subsystem
-** `13` or `security` for the syslog audit log
-** `14` or `console` for the syslog alert log
-** `15` or `solaris-cron` for the scheduling daemon
-** `16`–`23` or `local0` – `local7` for locally used facilities
-* Optional: `payloadKey`: The record field to use as payload for the syslog message.
-+
-[NOTE]
-====
-Configuring the `payloadKey` parameter prevents other parameters from being forwarded to the syslog.
-====
-+
-* rfc: The RFC to be used for sending logs using syslog. The default is RFC5424.
-* severity: The link:https://tools.ietf.org/html/rfc5424#section-6.2.1[syslog severity] to set on outgoing syslog records. The value can be a decimal integer or a case-insensitive keyword:
-** `0` or `Emergency` for messages indicating the system is unusable
-** `1` or `Alert` for messages indicating action must be taken immediately
-** `2` or `Critical` for messages indicating critical conditions
-** `3` or `Error` for messages indicating error conditions
-** `4` or `Warning` for messages indicating warning conditions
-** `5` or `Notice` for messages indicating normal but significant conditions
-** `6` or `Informational` for messages indicating informational messages
-** `7` or `Debug` for messages indicating debug-level messages, the default
-* tag: Tag specifies a record field to use as a tag on the syslog message.
-* trimPrefix: Remove the specified prefix from the tag.
-
-[id=cluster-logging-collector-log-forward-examples-syslog-5424]
-== Additional RFC5424 syslog parameters
-
-The following parameters apply to RFC5424:
-
-* appName: The APP-NAME is a free-text string that identifies the application that sent the log. Must be specified for `RFC5424`.
-* msgID: The MSGID is a free-text string that identifies the type of message. Must be specified for `RFC5424`.
-* procID: The PROCID is a free-text string. A change in the value indicates a discontinuity in syslog reporting. Must be specified for `RFC5424`.
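The `appName`, `msgId`, and `procId` callouts in the updated forwarder example above all describe the same template syntax: static text mixed with dynamic field paths written as `{.<field>||"fallback"}`, with the final value truncated to a maximum length. The following is a minimal sketch of those documented semantics only; the function name `resolve_template` and the regular expression are illustrative assumptions, not the collector's implementation.

```python
import re

# Illustrative pattern for the documented dynamic-value form:
# a field path in curly brackets followed by || and a quoted static fallback,
# for example {.kubernetes.labels.app||"none"}.
_DYNAMIC = re.compile(r'\{\.([A-Za-z0-9_.]+)\|\|"([^"]*)"\}')

def resolve_template(template: str, record: dict, max_len: int = 48) -> str:
    """Resolve dynamic parts against a log record, then truncate the result."""
    def lookup(match: re.Match) -> str:
        value = record
        for part in match.group(1).split("."):
            if not isinstance(value, dict) or part not in value:
                return match.group(2)  # use the static fallback after ||
            value = value[part]
        return str(value)
    # Replace each dynamic part, then enforce the documented length limit
    # (48 characters for APP-NAME and PROCID, 32 for MSGID).
    return _DYNAMIC.sub(lookup, template)[:max_len]
```

For example, `resolve_template('myapp-{.kubernetes.labels.app||"none"}', record)` yields `myapp-frontend` when the label exists in the record and `myapp-none` when it does not.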

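The two example outputs in the module above differ only in the source prefix that `enrichment: KubernetesMinimal` prepends to the message. A hedged sketch of that prepending step, shaped to match the documented example output (the helper name `enrich_minimal` and the record layout are assumptions):

```python
def enrich_minimal(record: dict) -> str:
    """Illustrative only, not the collector's code: prepend the Kubernetes
    source fields to the message, matching the documented example output:
    namespace_name=... container_name=... pod_name=...,message: ..."""
    k8s = record.get("kubernetes", {})
    prefix = (
        f"namespace_name={k8s.get('namespace_name', '')} "
        f"container_name={k8s.get('container_name', '')} "
        f"pod_name={k8s.get('pod_name', '')}"
    )
    return f"{prefix},message: {record.get('message', '')}"
```

With `enrichment: None`, by contrast, the message body is forwarded without this prefix, as in the first example output.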
modules/cluster-logging-deploying-about.adoc

Lines changed: 4 additions & 3 deletions
@@ -155,10 +155,11 @@ spec:
     nodeCount: 3
     resources:
       limits:
-        memory: 32Gi
+        cpu: 200m
+        memory: 16Gi
       requests:
-        cpu: 3
-        memory: 32Gi
+        cpu: 200m
+        memory: 16Gi
     storage:
       storageClassName: "gp2"
       size: "200G"

modules/cluster-logging-elasticsearch-audit.adoc

Lines changed: 1 addition & 1 deletion
@@ -10,7 +10,7 @@ include::snippets/audit-logs-default.adoc[]
 
 .Procedure
 
-To use the Log Forward API to forward audit logs to the internal Elasticsearch instance:
+To use the Log Forwarding API to forward audit logs to the internal Elasticsearch instance:
 
 . Create or edit a YAML file that defines the `ClusterLogForwarder` CR object:
 +

modules/cluster-logging-kibana-limits.adoc

Lines changed: 0 additions & 1 deletion
@@ -2,7 +2,6 @@
 //
 // * observability/logging/cluster-logging-visualizer.adoc
 
-:_mod-docs-content-type: PROCEDURE
 [id="cluster-logging-kibana-limits_{context}"]
 = Configure the CPU and memory limits for the log visualizer
 