Commit 72bcce0

Merge pull request #25456 from mburke5678/BZ-1867137-46
4.6 After installation infra and audit index pattern not available in Kibana
2 parents 13653b0 + 3cec1d4 commit 72bcce0

File tree

3 files changed: +29 −76 lines changed


logging/config/cluster-logging-log-store.adoc

Lines changed: 1 addition & 1 deletion

@@ -38,7 +38,7 @@ include::modules/cluster-logging-elasticsearch-audit.adoc[leveloffset=+1]
 
 .Additional resources
 
-For more information on the Log Forward API, see xref:../../logging/cluster-logging-external.adoc#cluster-logging-external[Forwarding logs using the Log Forwarding API].
+For more information on the Log Forwarding API, see xref:../../logging/cluster-logging-external.adoc#cluster-logging-external[Forwarding logs using the Log Forwarding API].
 
 include::modules/cluster-logging-elasticsearch-retention.adoc[leveloffset=+1]
 

modules/cluster-logging-elasticsearch-audit.adoc

Lines changed: 27 additions & 74 deletions

@@ -18,120 +18,73 @@ The internal {product-title} Elasticsearch log store does not provide secure sto
 
 To use the Log Forward API to forward audit logs to the internal Elasticsearch instance:
 
-. If the Log Forward API is not enabled:
-
-.. Edit the Cluster Logging Custom Resource (CR) in the `openshift-logging` project:
-+
-----
-$ oc edit ClusterLogging instance
-----
-
-.. Add the `clusterlogging.openshift.io/logforwardingtechpreview` annotation and set to `enabled`:
-+
-[source,yaml]
-----
-apiVersion: "logging.openshift.io/v1"
-kind: "ClusterLogging"
-metadata:
-  annotations:
-    clusterlogging.openshift.io/logforwardingtechpreview: enabled <1>
-  name: "instance"
-  namespace: "openshift-logging"
-spec:
-
-...
-
-  collection: <2>
-    logs:
-      type: "fluentd"
-      fluentd: {}
-----
-<1> Enables and disables the Log Forwarding API. Set to `enabled` to use log forwarding.
-<2> The `spec.collection` block must be defined to use Fluentd in the Cluster Logging CR.
-
 . Create a Log Forwarding CR YAML file or edit your existing CR:
 +
 * Create a CR to send all log types to the internal Elasticsearch instance. You can use the following example without making any changes:
 +
 [source,yaml]
 ----
-apiVersion: logging.openshift.io/v1alpha1
-kind: LogForwarding
+apiVersion: logging.openshift.io/v1
+kind: ClusterLogForwarder
 metadata:
   name: instance
   namespace: openshift-logging
 spec:
-  disableDefaultForwarding: true
-  outputs:
-  - name: clo-es
-    type: elasticsearch
-    endpoint: 'elasticsearch.openshift-logging.svc:9200' <1>
-    secret:
-      name: fluentd
-  pipelines:
-  - name: audit-pipeline <2>
-    inputSource: logs.audit
-    outputRefs:
-    - clo-es
-  - name: app-pipeline <3>
-    inputSource: logs.app
-    outputRefs:
-    - clo-es
-  - name: infra-pipeline <4>
-    inputSource: logs.infra
-    outputRefs:
-    - clo-es
+  pipelines: <1>
+  - name: all-to-default
+    inputRefs:
+    - infrastructure
+    - application
+    - audit
+    outputRefs:
+    - default
 ----
-<1> The `endpoint` parameter points to the internal Elasticsearch instance.
-<2> This parameter sends the audit logs to the specified endpoint.
-<3> This parameter sends the application logs to the specified endpoint.
-<4> This parameter sends the infrastructure logs to the specified endpoint.
+<1> A pipeline defines the type of logs to forward using the specified output. The default output forwards logs to the internal Elasticsearch instance.
 +
 [NOTE]
 ====
-You must configure a pipeline and output for all three types of logs: application, infrastructure, and audit. If you do not specify a pipeline and output for a log type, those logs are not stored and will be lost.
+You must specify all three types of logs in the pipeline: application, infrastructure, and audit. If you do not specify a log type, those logs are not stored and will be lost.
 ====
 +
-* If you have an existing LogForwarding CR, add an output for the internal Elasticsearch instance and a pipeline to that output for the audit logs. For example:
+* If you have an existing LogForwarding CR, add a pipeline to the default output for the audit logs. You do not need to define the default output. For example:
 +
 [source,yaml]
 ----
-apiVersion: "logging.openshift.io/v1alpha1"
+apiVersion: "logging.openshift.io/v1"
 kind: "LogForwarding"
 metadata:
   name: instance
   namespace: openshift-logging
 spec:
-  disableDefaultForwarding: true
   outputs:
-  - name: elasticsearch <1>
-    type: "elasticsearch"
-    endpoint: elasticsearch.openshift-logging.svc:9200
-    secret:
-      name: fluentd
   - name: elasticsearch-insecure
     type: "elasticsearch"
-    endpoint: elasticsearch-insecure.svc.messaging.cluster.local
+    url: elasticsearch-insecure.svc.messaging.cluster.local
     insecure: true
+  - name: elasticsearch-secure
+    type: "elasticsearch"
+    url: elasticsearch-secure.svc.messaging.cluster.local
+    secret:
+      name: es-audit
   - name: secureforward-offcluster
     type: "forward"
-    endpoint: https://secureforward.offcluster.com:24224
+    url: https://secureforward.offcluster.com:24224
     secret:
       name: secureforward
   pipelines:
   - name: container-logs
-    inputSource: logs.app
+    inputRefs: application
    outputRefs:
     - secureforward-offcluster
   - name: infra-logs
-    inputSource: logs.infra
+    inputRefs: infrastructure
    outputRefs:
     - elasticsearch-insecure
   - name: audit-logs
-    inputSource: logs.audit
+    inputRefs: audit
    outputRefs:
-    - elasticsearch <2>
+    - elasticsearch-secure
+    - default <1>
 ----
-<1> An output for the internal Elasticsearch instance.
-<2> A pipeline for sending the audit logs to the internal Elasticsearch instance.
+<1> This pipeline sends the audit logs to the internal Elasticsearch instance in addition to an external instance.
 
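The commit stops at the CR definition and does not show how the updated ClusterLogForwarder is applied or verified. As a minimal sketch (not part of this commit), assuming the new CR YAML above is saved to a hypothetical file named clusterlogforwarder.yaml:

----
# Hypothetical file name; contains the ClusterLogForwarder CR shown above.
$ oc apply -f clusterlogforwarder.yaml

# Confirm the CR was accepted and the pipelines are present.
$ oc get clusterlogforwarder instance -n openshift-logging -o yaml
----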

modules/cluster-logging-visualizer-launch.adoc

Lines changed: 1 addition & 1 deletion

@@ -10,7 +10,7 @@ pie charts, heat maps, built-in geospatial support, and other visualizations.
 
 .Prerequisites
 
-* To list the *infra* and *audit* indices in Kibana, a user must have the `cluster-admin` role, the `cluster-reader` role, or both roles. The default `kubeadmin` user does not have proper permissions to list these indices.
+* To list the *infra* and *audit* indices in Kibana, a user must have the `cluster-admin` role, the `cluster-reader` role, or both roles. The default `kubeadmin` user has proper permissions to list these indices.
 +
 If you can view the Pods and logs in the `default` project, you should be able to access these indices. You can use the following command to check if the current user has proper permissions:
 +
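The diff view is truncated before the permission-check command that this module references. One plausible form of the check (an assumption, not shown in this commit) uses the standard `oc auth can-i` syntax against the pod log subresource in the `default` project:

----
# Returns "yes" if the current user can read pod logs, which suggests
# the user can also list the infra and audit indices in Kibana.
$ oc auth can-i get pods/log -n default
yes
----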
