[OBSDOCS-1471] Logging 5.8 docs missing from 4.17 #97452

Open
wants to merge 1 commit into
base: enterprise-4.17
22 changes: 20 additions & 2 deletions _topic_maps/_topic_map.yml
Original file line number Diff line number Diff line change
@@ -3081,8 +3081,26 @@ Topics:
Topics:
- Name: Release notes
File: logging-5-8-release-notes
- Name: Installing Logging
File: cluster-logging-deploying
- Name: Logging overview
File: about-logging
- Name: Cluster logging support
File: cluster-logging-support
- Name: Visualization for logging
File: logging-visualization
- Name: Quick start
File: quick-start
- Name: Installing logging
File: installing-logging
- Name: Configuring log forwarding
File: configuring-log-forwarding
- Name: Configuring LokiStack storage
File: configuring-lokistack-storage
- Name: Configuring LokiStack for OTLP
File: configuring-lokistack-otlp
- Name: OpenTelemetry data model
File: opentelemetry-data-model
- Name: Upgrading to Logging 6.0
File: upgrading-to-logging-60
# - Name: Configuring the logging collector
# File: cluster-logging-collector
# - Name: Support
26 changes: 0 additions & 26 deletions modules/cluster-logging-collector-limits.adoc
@@ -36,29 +36,3 @@ spec:
# ...
----
<1> Specify the CPU and memory limits and requests as needed. The values shown are the default values.
////
[source,yaml]
----
$ oc edit ClusterLogging instance
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
name: "instance"
....
spec:
collection:
logs:
rsyslog:
resources:
limits: <1>
memory: 358Mi
requests:
cpu: 100m
memory: 358Mi
----
<1> Specify the CPU and memory limits and requests as needed. The values shown are the default values.
////
179 changes: 55 additions & 124 deletions modules/cluster-logging-collector-log-forward-syslog.adoc
@@ -2,9 +2,9 @@
[id="cluster-logging-collector-log-forward-syslog_{context}"]
= Forwarding logs using the syslog protocol

You can use the *syslog* link:https://tools.ietf.org/html/rfc3164[RFC3164] or link:https://tools.ietf.org/html/rfc5424[RFC5424] protocol to send a copy of your logs to an external log aggregator that is configured to accept the protocol instead of, or in addition to, the default Elasticsearch log store. You are responsible for configuring the external log aggregator, such as a syslog server, to receive the logs from {product-title}.
You can use the syslog link:https://tools.ietf.org/html/rfc3164[RFC3164] or link:https://tools.ietf.org/html/rfc5424[RFC5424] protocol to send a copy of your logs to an external log aggregator that is configured to accept the protocol instead of, or in addition to, the default Elasticsearch log store. You are responsible for configuring the external log aggregator, such as a syslog server, to receive the logs from {product-title}.

To configure log forwarding using the *syslog* protocol, you must create a `ClusterLogForwarder` custom resource (CR) with one or more outputs to the syslog servers, and pipelines that use those outputs. The syslog output can use a UDP, TCP, or TLS connection.
To configure log forwarding using the syslog protocol, you must create a `ClusterLogForwarder` custom resource (CR) with one or more outputs to the syslog servers, and pipelines that use those outputs. The syslog output can use a UDP, TCP, or TLS connection.

.Prerequisites

@@ -16,72 +16,54 @@ To configure log forwarding using the *syslog* protocol, you must create a `Clus
+
[source,yaml]
----
apiVersion: logging.openshift.io/v1
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
name: <log_forwarder_name> <1>
namespace: <log_forwarder_namespace> <2>
name: collector
spec:
serviceAccountName: <service_account_name> <3>
managementState: Managed
outputs:
- name: rsyslog-east <4>
type: syslog <5>
syslog: <6>
facility: local0
rfc: RFC3164
payloadKey: message
severity: informational
url: 'tls://rsyslogserver.east.example.com:514' <7>
secret: <8>
name: syslog-secret
- name: rsyslog-west
type: syslog
syslog:
appName: myapp
facility: user
msgID: mymsg
procID: myproc
rfc: RFC5424
severity: debug
url: 'tcp://rsyslogserver.west.example.com:514'
- name: rsyslog-east # <1>
syslog:
appName: <app_name> # <2>
enrichment: KubernetesMinimal
facility: <facility_value> # <3>
msgId: <message_ID> # <4>
payloadKey: <record_field> # <5>
procId: <process_ID> # <6>
rfc: <RFC3164_or_RFC5424> # <7>
severity: informational # <8>
tuning:
deliveryMode: <AtLeastOnce_or_AtMostOnce> # <9>
url: <url> # <10>
tls: # <11>
ca:
key: ca-bundle.crt
secretName: syslog-secret
type: syslog
pipelines:
- name: syslog-east <9>
inputRefs: <10>
- audit
- application
outputRefs: <11>
- rsyslog-east
- default <12>
labels:
secure: "true" <13>
syslog: "east"
- name: syslog-west <14>
inputRefs:
- infrastructure
outputRefs:
- rsyslog-west
- default
labels:
syslog: "west"
- inputRefs: # <12>
- application
name: syslog-east # <13>
outputRefs:
- rsyslog-east
serviceAccount: # <14>
name: logcollector
----
<1> In legacy implementations, the CR name must be `instance`. In multi log forwarder implementations, you can use any name.
<2> In legacy implementations, the CR namespace must be `openshift-logging`. In multi log forwarder implementations, you can use any namespace.
<3> The name of your service account. The service account is only required in multi log forwarder implementations if the log forwarder is not deployed in the `openshift-logging` namespace.
<4> Specify a name for the output.
<5> Specify the `syslog` type.
<6> Optional: Specify the syslog parameters, listed below.
<7> Specify the URL and port of the external syslog instance. You can use the `udp` (insecure), `tcp` (insecure) or `tls` (secure TCP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address.
<8> If using a `tls` prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must contain a `ca-bundle.crt` key that points to the certificate it represents. In legacy implementations, the secret must exist in the `openshift-logging` project.
<9> Optional: Specify a name for the pipeline.
<10> Specify which log types to forward by using the pipeline: `application`, `infrastructure`, or `audit`.
<11> Specify the name of the output to use when forwarding logs with this pipeline.
<12> Optional: Specify the `default` output to forward logs to the internal Elasticsearch instance.
<13> Optional: String. One or more labels to add to the logs. Quote values like "true" so they are recognized as string values, not as a boolean.
<14> Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
** A name to describe the pipeline.
** The `inputRefs` value is the log type to forward by using the pipeline: `application`, `infrastructure`, or `audit`.
** The `outputRefs` is the name of the output to use.
** Optional: String. One or more labels to add to the logs.
<1> Specify a name for the output.
<2> Optional: Specify the value for the `APP-NAME` part of the syslog message header. The value must conform with link:https://datatracker.ietf.org/doc/html/rfc5424[The Syslog Protocol]. The value can be a combination of static and dynamic values consisting of field paths followed by `||`, and then followed by another field path or a static value. You must encase a dynamic value in curly brackets, and it must be followed by a static fallback value separated by `||`. Static values can contain only alphanumeric characters along with dashes, underscores, dots, and forward slashes. The final value is truncated to a maximum of 48 characters. Example value: <value1>-{.<value2>||"none"}.
<3> Optional: Specify the value for the `Facility` part of the syslog message header.
<4> Optional: Specify the value for the `MSGID` part of the syslog message header. The value can be a combination of static and dynamic values consisting of field paths followed by `||`, and then followed by another field path or a static value. You must encase a dynamic value in curly brackets, and it must be followed by a static fallback value separated by `||`. Static values can contain only alphanumeric characters along with dashes, underscores, dots, and forward slashes. The final value is truncated to a maximum of 32 characters. Example value: <value1>-{.<value2>||"none"}.
<5> Optional: Specify the record field to use as the payload. The `payloadKey` value must be a single field path encased in single curly brackets `{}`. Example: {.<value>}.
<6> Optional: Specify the value for the `PROCID` part of the syslog message header. The value must conform with link:https://datatracker.ietf.org/doc/html/rfc5424[The Syslog Protocol]. The value can be a combination of static and dynamic values consisting of field paths followed by `||`, and then followed by another field path or a static value. You must encase a dynamic value in curly brackets, and it must be followed by a static fallback value separated by `||`. Static values can contain only alphanumeric characters along with dashes, underscores, dots, and forward slashes. The final value is truncated to a maximum of 48 characters. Example value: <value1>-{.<value2>||"none"}.
<7> Optional: Set the RFC that the generated messages conform to. The value can be `RFC3164` or `RFC5424`.
<8> Optional: Set the severity level for the message. For more information, see link:https://datatracker.ietf.org/doc/html/rfc5424#section-6.2.1[The Syslog Protocol].
<9> Optional: Set the delivery mode for log forwarding. The value can be either `AtLeastOnce`, or `AtMostOnce`.
<10> Specify the absolute URL with a scheme. Valid schemes are: `tcp`, `tls`, and `udp`. For example: `tls://syslog-receiver.example.com:6514`.
<11> Specify the settings for controlling options of the transport layer security (TLS) client connections.
<12> Specify which log types to forward by using the pipeline: `application`, `infrastructure`, or `audit`.
<13> Specify a name for the pipeline.
<14> The name of your service account.
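The dynamic-value syntax used by `appName`, `msgId`, and `procId` (a field path in curly brackets, followed by `||` and a quoted static fallback) can be modeled with a small resolver. This is an illustrative sketch only; the function name and record layout are assumptions, not the collector's actual implementation:

```python
import re

def resolve_template(template, record):
    """Resolve a syslog header template such as 'myapp-{.kubernetes.labels.app||"none"}'.

    Each {<field_path>||"<fallback>"} group is replaced with the record field
    at that path, or with the fallback when the field is missing. The final
    value is truncated to 48 characters, matching the APP-NAME limit above.
    """
    def lookup(path):
        node = record
        for part in path.lstrip(".").split("."):
            if not isinstance(node, dict) or part not in node:
                return None
            node = node[part]
        return node

    def substitute(match):
        value = lookup(match.group(1))
        return str(value) if value is not None else match.group(2)

    # Matches {.field.path||"fallback"} groups inside the template.
    pattern = re.compile(r'\{(\.[A-Za-z0-9_.-]+)\|\|"([^"]*)"\}')
    return pattern.sub(substitute, template)[:48]
```

For example, with a record containing `kubernetes.labels.app: billing`, the template `myapp-{.kubernetes.labels.app||"none"}` resolves to `myapp-billing`, and falls back to `myapp-none` when the field is absent.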

. Create the CR object:
+
@@ -90,99 +72,48 @@ spec:
$ oc create -f <filename>.yaml
----

[id=cluster-logging-collector-log-forward-examples-syslog-log-source]
== Adding log source information to message output
[id="cluster-logging-collector-log-forward-examples-syslog-log-source_{context}"]
== Adding log source information to the message output

You can add `namespace_name`, `pod_name`, and `container_name` elements to the `message` field of the record by adding the `AddLogSource` field to your `ClusterLogForwarder` custom resource (CR).
You can add `namespace_name`, `pod_name`, and `container_name` elements to the `message` field of the record by adding the `enrichment` field to your `ClusterLogForwarder` custom resource (CR).

[source,yaml]
----
# ...
spec:
outputs:
- name: syslogout
syslog:
addLogSource: true
enrichment: KubernetesMinimal
facility: user
payloadKey: message
rfc: RFC3164
severity: debug
tag: mytag
type: syslog
url: tls://syslog-receiver.openshift-logging.svc:24224
url: tls://syslog-receiver.example.com:6514
pipelines:
- inputRefs:
- application
name: test-app
outputRefs:
- syslogout
# ...
----

[NOTE]
====
This configuration is compatible with both RFC3164 and RFC5424.
====

.Example syslog message output without `AddLogSource`
.Example syslog message output with `enrichment: None`
[source, text]
----
<15>1 2020-11-15T17:06:14+00:00 fluentd-9hkb4 mytag - - - {"msgcontent"=>"Message Contents", "timestamp"=>"2020-11-15 17:06:09", "tag_key"=>"rec_tag", "index"=>56}
2025-03-03T11:48:01+00:00 example-worker-x syslogsyslogserverd846bb9b: {...}
----

.Example syslog message output with `AddLogSource`
.Example syslog message output with `enrichment: KubernetesMinimal`

[source, text]
----
<15>1 2020-11-16T10:49:37+00:00 crc-j55b9-master-0 mytag - - - namespace_name=clo-test-6327,pod_name=log-generator-ff9746c49-qxm7l,container_name=log-generator,message={"msgcontent":"My life is my message", "timestamp":"2020-11-16 10:49:36", "tag_key":"rec_tag", "index":76}
2025-03-03T11:48:01+00:00 example-worker-x syslogsyslogserverd846bb9b: namespace_name=cakephp-project container_name=mysql pod_name=mysql-1-wr96h,message: {...}
----
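The difference between the two outputs above can be modeled as a pre-processing step that the collector applies before the message leaves the cluster. The function below is a hypothetical sketch based only on the example output; the function name and record layout are assumptions:

```python
def enrich_message(record, enrichment="KubernetesMinimal"):
    """Prepend Kubernetes source metadata to the message payload.

    With enrichment set to KubernetesMinimal, the namespace, container,
    and pod names are prefixed to the message, as in the second example.
    Otherwise the message is forwarded unchanged, as in the first.
    """
    if enrichment != "KubernetesMinimal":
        return record["message"]
    k8s = record["kubernetes"]
    prefix = (f"namespace_name={k8s['namespace_name']} "
              f"container_name={k8s['container_name']} "
              f"pod_name={k8s['pod_name']}")
    return f"{prefix},message: {record['message']}"
```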

[id=cluster-logging-collector-log-forward-examples-syslog-parms]
== Syslog parameters

You can configure the following for the `syslog` outputs. For more information, see the syslog link:https://tools.ietf.org/html/rfc3164[RFC3164] or link:https://tools.ietf.org/html/rfc5424[RFC5424] RFC.

* facility: The link:https://tools.ietf.org/html/rfc5424#section-6.2.1[syslog facility]. The value can be a decimal integer or a case-insensitive keyword:
** `0` or `kern` for kernel messages
** `1` or `user` for user-level messages, the default.
** `2` or `mail` for the mail system
** `3` or `daemon` for system daemons
** `4` or `auth` for security/authentication messages
** `5` or `syslog` for messages generated internally by syslogd
** `6` or `lpr` for the line printer subsystem
** `7` or `news` for the network news subsystem
** `8` or `uucp` for the UUCP subsystem
** `9` or `cron` for the clock daemon
** `10` or `authpriv` for security authentication messages
** `11` or `ftp` for the FTP daemon
** `12` or `ntp` for the NTP subsystem
** `13` or `security` for the syslog audit log
** `14` or `console` for the syslog alert log
** `15` or `solaris-cron` for the scheduling daemon
** `16`–`23` or `local0`–`local7` for locally used facilities
* Optional: `payloadKey`: The record field to use as payload for the syslog message.
+
[NOTE]
====
Configuring the `payloadKey` parameter prevents other parameters from being forwarded to the syslog.
====
+
* rfc: The RFC to be used for sending logs using syslog. The default is RFC5424.
* severity: The link:https://tools.ietf.org/html/rfc5424#section-6.2.1[syslog severity] to set on outgoing syslog records. The value can be a decimal integer or a case-insensitive keyword:
** `0` or `Emergency` for messages indicating the system is unusable
** `1` or `Alert` for messages indicating action must be taken immediately
** `2` or `Critical` for messages indicating critical conditions
** `3` or `Error` for messages indicating error conditions
** `4` or `Warning` for messages indicating warning conditions
** `5` or `Notice` for messages indicating normal but significant conditions
** `6` or `Informational` for messages indicating informational messages
** `7` or `Debug` for messages indicating debug-level messages, the default
* tag: Specifies a record field to use as a tag on the syslog message.
* trimPrefix: Removes the specified prefix from the tag.
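The facility and severity keywords combine into the numeric `PRI` value that starts every syslog message: `PRI = facility * 8 + severity` (RFC 5424, section 6.2.1). A minimal sketch of that mapping, using the keyword tables above:

```python
# Facility and severity keyword tables, as listed in this section.
FACILITIES = {
    "kern": 0, "user": 1, "mail": 2, "daemon": 3, "auth": 4, "syslog": 5,
    "lpr": 6, "news": 7, "uucp": 8, "cron": 9, "authpriv": 10, "ftp": 11,
    "ntp": 12, "security": 13, "console": 14, "solaris-cron": 15,
    **{f"local{n}": 16 + n for n in range(8)},
}
SEVERITIES = {
    "emergency": 0, "alert": 1, "critical": 2, "error": 3,
    "warning": 4, "notice": 5, "informational": 6, "debug": 7,
}

def pri(facility, severity):
    """Compute the syslog PRI value from keywords or decimal integers.

    Keywords are case-insensitive; integers are used as-is.
    """
    f = facility if isinstance(facility, int) else FACILITIES[facility.lower()]
    s = severity if isinstance(severity, int) else SEVERITIES[severity.lower()]
    return f * 8 + s
```

For example, `pri("user", "debug")` is `15`, which matches the `<15>` prefix shown in the example messages earlier in this module.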

[id=cluster-logging-collector-log-forward-examples-syslog-5424]
== Additional RFC5424 syslog parameters

The following parameters apply to RFC5424:

* appName: The APP-NAME is a free-text string that identifies the application that sent the log. Must be specified for `RFC5424`.
* msgID: The MSGID is a free-text string that identifies the type of message. Must be specified for `RFC5424`.
* procID: The PROCID is a free-text string. A change in the value indicates a discontinuity in syslog reporting. Must be specified for `RFC5424`.
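Putting the RFC5424 pieces together, the message prefix is laid out as `<PRI>VERSION TIMESTAMP HOSTNAME APP-NAME PROCID MSGID`, with the NILVALUE `-` standing in for unset fields. A minimal sketch, with an illustrative function name:

```python
def rfc5424_header(pri_value, timestamp, hostname,
                   app_name="-", proc_id="-", msg_id="-"):
    """Assemble an RFC 5424 message prefix.

    VERSION is fixed at 1; the trailing '-' is the NILVALUE for the
    STRUCTURED-DATA element that follows the header.
    """
    return f"<{pri_value}>1 {timestamp} {hostname} {app_name} {proc_id} {msg_id} -"
```

This reproduces the shape of the earlier example line, `<15>1 2020-11-15T17:06:14+00:00 fluentd-9hkb4 mytag - - - ...`.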
7 changes: 4 additions & 3 deletions modules/cluster-logging-deploying-about.adoc
@@ -155,10 +155,11 @@ spec:
nodeCount: 3
resources:
limits:
memory: 32Gi
cpu: 200m
memory: 16Gi
requests:
cpu: 3
memory: 32Gi
cpu: 200m
memory: 16Gi
storage:
storageClassName: "gp2"
size: "200G"
2 changes: 1 addition & 1 deletion modules/cluster-logging-elasticsearch-audit.adoc
@@ -10,7 +10,7 @@ include::snippets/audit-logs-default.adoc[]

.Procedure

To use the Log Forward API to forward audit logs to the internal Elasticsearch instance:
To use the Log Forwarding API to forward audit logs to the internal Elasticsearch instance:

. Create or edit a YAML file that defines the `ClusterLogForwarder` CR object:
+
1 change: 0 additions & 1 deletion modules/cluster-logging-kibana-limits.adoc
@@ -2,7 +2,6 @@
//
// * observability/logging/cluster-logging-visualizer.adoc

:_mod-docs-content-type: PROCEDURE
[id="cluster-logging-kibana-limits_{context}"]
= Configure the CPU and memory limits for the log visualizer

3 changes: 0 additions & 3 deletions modules/cluster-logging-kibana-scaling.adoc
@@ -19,8 +19,6 @@ $ oc -n openshift-logging edit ClusterLogging instance
+
[source,yaml]
----
$ oc edit ClusterLogging instance
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
@@ -35,4 +33,3 @@ spec:
replicas: 1 <1>
----
<1> Specify the number of Kibana nodes.

6 changes: 0 additions & 6 deletions modules/cluster-logging-maintenance-support-list-6x.adoc
@@ -1,9 +1,3 @@
// Module included in the following assemblies:
//
// * observability/logging/logging-6.0/log60-cluster-logging-support.adoc
// * observability/logging/logging-6.1/log61-cluster-logging-support.adoc
// * observability/logging/logging-6.2/log62-cluster-logging-support.adoc

:_mod-docs-content-type: REFERENCE
[id="cluster-logging-maintenance-support-list_{context}"]
= Unsupported configurations