
Commit be3176d

OBSDOCS-642: Add docs for creating the LogFileMetricExporter CR

1 parent e049463 commit be3176d

File tree

3 files changed: +80 -1 lines changed


logging/log_collection_forwarding/cluster-logging-collector.adoc

Lines changed: 2 additions & 0 deletions

@@ -12,6 +12,8 @@ All supported modifications to the log collector can be performed through the `spec
 
 include::modules/configuring-logging-collector.adoc[leveloffset=+1]
 
+include::modules/creating-logfilesmetricexporter.adoc[leveloffset=+1]
+
 include::modules/log-collector-resources-scheduling.adoc[leveloffset=+1]
 
 include::modules/cluster-logging-collector-pod-location.adoc[leveloffset=+1]
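The assembly picks up the new module through an `include::` directive like the ones above. As a quick local sanity check (a hypothetical helper, not part of this commit), the module paths can be extracted from an assembly with `sed`; the sketch below demonstrates only the extraction step on a sample file:

```shell
# Hypothetical helper: extract include:: targets from an AsciiDoc assembly.
# The path sits between "include::" and the bracketed attribute list.
list_includes() {
  sed -n 's/^include::\([^[]*\)\[.*$/\1/p' "$1"
}

# Demo on a sample assembly that mirrors the ordering in this commit.
cat > /tmp/sample-assembly.adoc <<'EOF'
include::modules/configuring-logging-collector.adoc[leveloffset=+1]

include::modules/creating-logfilesmetricexporter.adoc[leveloffset=+1]

include::modules/log-collector-resources-scheduling.adoc[leveloffset=+1]
EOF

list_includes /tmp/sample-assembly.adoc
```

In the openshift-docs layout these paths are relative to the repository root, so a follow-up `[ -f "$module" ]` check per printed line can confirm that each referenced module file actually exists.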
modules/creating-logfilesmetricexporter.adoc (new file)

Lines changed: 71 additions & 0 deletions

@@ -0,0 +1,71 @@
// Module included in the following assemblies:
//
// * logging/log_collection_forwarding/cluster-logging-collector.adoc

:_mod-docs-content-type: PROCEDURE
[id="creating-logfilesmetricexporter_{context}"]
= Creating a LogFileMetricExporter resource

In {logging} version 5.8 and later, the LogFileMetricExporter is no longer deployed with the collector by default. You must manually create a `LogFileMetricExporter` custom resource (CR) to generate metrics from the logs produced by running containers.

If you do not create the `LogFileMetricExporter` CR, you might see a *No datapoints found* message in the {product-title} web console dashboard for *Produced Logs*.

.Prerequisites

* You have administrator permissions.
* You have installed the {clo}.
* You have installed the {oc-first}.

.Procedure

. Create a `LogFileMetricExporter` CR as a YAML file:
+
.Example `LogFileMetricExporter` CR
[source,yaml]
----
apiVersion: logging.openshift.io/v1alpha1
kind: LogFileMetricExporter
metadata:
  name: instance
  namespace: openshift-logging
spec:
  nodeSelector: {} # <1>
  resources: # <2>
    limits:
      cpu: 500m
      memory: 256Mi
    requests:
      cpu: 200m
      memory: 128Mi
  tolerations: [] # <3>
# ...
----
<1> Optional: The `nodeSelector` stanza defines which nodes the pods are scheduled on.
<2> The `resources` stanza defines resource requirements for the `LogFileMetricExporter` CR.
<3> Optional: The `tolerations` stanza defines the tolerations that the pods accept.

. Apply the `LogFileMetricExporter` CR by running the following command:
+
[source,terminal]
----
$ oc apply -f <filename>.yaml
----

.Verification

A `logfilesmetricexporter` pod runs concurrently with a `collector` pod on each node.

* Verify that the `logfilesmetricexporter` pods are running in the namespace where you created the `LogFileMetricExporter` CR by running the following command and observing the output:
+
[source,terminal]
----
$ oc get pods -l app.kubernetes.io/component=logfilesmetricexporter -n openshift-logging
----
+
.Example output
[source,terminal]
----
NAME                           READY   STATUS    RESTARTS   AGE
logfilesmetricexporter-9qbjj   1/1     Running   0          2m46s
logfilesmetricexporter-cbc4v   1/1     Running   0          2m46s
----
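The create-and-apply steps above can be collapsed into one small script. This is a sketch, not part of the documented procedure: the manifest values mirror the example CR (with the optional `nodeSelector` and `tolerations` stanzas omitted), and the `oc` steps only run when the CLI is actually on the `PATH` and logged in to a cluster:

```shell
# Sketch of the procedure above: write the LogFileMetricExporter manifest,
# then apply it with oc. Values mirror the example CR in the module.
manifest=/tmp/logfilemetricexporter.yaml

cat > "$manifest" <<'EOF'
apiVersion: logging.openshift.io/v1alpha1
kind: LogFileMetricExporter
metadata:
  name: instance
  namespace: openshift-logging
spec:
  resources:
    limits:
      cpu: 500m
      memory: 256Mi
    requests:
      cpu: 200m
      memory: 128Mi
EOF

# Apply and verify only if oc is available (requires a logged-in cluster).
if command -v oc >/dev/null 2>&1; then
  oc apply -f "$manifest"
  oc get pods -l app.kubernetes.io/component=logfilesmetricexporter -n openshift-logging
else
  echo "oc not found; apply $manifest manually"
fi
```

Splitting the manifest into a file rather than piping it to `oc apply -f -` keeps it around for later `oc diff -f` or version control.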

modules/logging-release-notes-5-8-0.adoc

Lines changed: 7 additions & 1 deletion

@@ -6,13 +6,17 @@ This release includes link:https://access.redhat.com/errata/RHBA-2023:6139[OpenS
 
 [id="logging-release-notes-5-8-0-deprecation-notice"]
 == Deprecation notice
+
 In Logging 5.8, Elasticsearch, Fluentd, and Kibana are deprecated and are planned to be removed in Logging 6.0, which is expected to be shipped alongside a future release of {product-title}. Red Hat will provide critical and above CVE bug fixes and support for these components during the current release lifecycle, but these components will no longer receive feature enhancements. The Vector-based collector provided by the {clo} and LokiStack provided by the {loki-op} are the preferred Operators for log collection and storage. We encourage all users to adopt the Vector and Loki log stack, as this stack will be enhanced going forward.
 
 [id="logging-release-notes-5-8-0-enhancements"]
 == Enhancements
 
 [id="logging-release-notes-5-8-0-log-collection"]
 === Log Collection
+
+* With this update, the LogFileMetricExporter is no longer deployed with the collector by default. You must manually create a `LogFileMetricExporter` custom resource (CR) to generate metrics from the logs produced by running containers. If you do not create the `LogFileMetricExporter` CR, you might see a *No datapoints found* message in the {product-title} web console dashboard for *Produced Logs*. (link:https://issues.redhat.com/browse/LOG-3819[LOG-3819])
+
 * With this update, you can deploy multiple, isolated, and RBAC-protected `ClusterLogForwarder` custom resource (CR) instances in any namespace. This allows independent groups to forward desired logs to any destination while isolating their configuration from other collector deployments. (link:https://issues.redhat.com/browse/LOG-1343[LOG-1343])
 +
 [IMPORTANT]

@@ -28,6 +32,7 @@ In order to support multi-cluster log forwarding in additional namespaces other
 
 [id="logging-release-notes-5-8-0-log-storage"]
 === Log Storage
+
 * With this update, LokiStack administrators can have more fine-grained control over who can access which logs by granting access to logs on a namespace basis. (link:https://issues.redhat.com/browse/LOG-3841[LOG-3841])
 
 * With this update, the {loki-op} introduces `PodDisruptionBudget` configuration on LokiStack deployments to ensure normal operations during {product-title} cluster restarts by keeping ingestion and the query path available. (link:https://issues.redhat.com/browse/LOG-3839[LOG-3839])

@@ -43,12 +48,14 @@ In order to support multi-cluster log forwarding in additional namespaces other
 
 [id="logging-release-notes-5-8-0-log-console"]
 === Log Console
+
 * With this update, you can enable the Logging Console Plugin when Elasticsearch is the default log store. (link:https://issues.redhat.com/browse/LOG-3856[LOG-3856])
 
 * With this update, {product-title} application owners can receive notifications for application log-based alerts in the {product-title} web console *Developer* perspective for {product-title} version 4.14 and later. (link:https://issues.redhat.com/browse/LOG-3548[LOG-3548])
 
 [id="logging-release-notes-5-8-0-known-issues"]
 == Known Issues
+
 * Currently, there is a flaw in handling multiplexed streams in the HTTP/2 protocol, where you can repeatedly make a request for a new multiplex stream and immediately send an `RST_STREAM` frame to cancel it. This creates extra work for the server setting up and tearing down the streams, resulting in a denial of service due to server resource consumption. There is currently no workaround for this issue. (link:https://issues.redhat.com/browse/LOG-4609[LOG-4609])
 
 * Currently, when using FluentD as the collector, the collector pod cannot start on the {product-title} IPv6-enabled cluster. The pod logs produce the `fluentd pod [error]: unexpected error error_class=SocketError error="getaddrinfo: Name or service not known` error. There is currently no workaround for this issue. (link:https://issues.redhat.com/browse/LOG-4706[LOG-4706])

@@ -59,7 +66,6 @@ In order to support multi-cluster log forwarding in additional namespaces other
 
 * Currently, when deploying the {logging} version 5.8 on a FIPS-enabled cluster, the collector pods cannot start and are stuck in `CrashLoopBackOff` status, while using FluentD as a collector. There is currently no workaround for this issue. (link:https://issues.redhat.com/browse/LOG-3933[LOG-3933])
 
-
 [id="logging-release-notes-5-8-0-CVEs"]
 == CVEs
 * link:https://access.redhat.com/security/cve/CVE-2023-40217[CVE-2023-40217]
