
Commit 62117be

RHIDP-4570: Document how to send RHDH audit logs to Splunk (redhat-developer#711)
* RHIDP-4570: Document how to send RHDH audit logs to Splunk
* Moved chapter
* Incorporated review comments
* Incorporated QE review
1 parent 2e353e8 commit 62117be

File tree: 3 files changed, +211 −3 lines changed

artifacts/attributes.adoc

Lines changed: 2 additions & 0 deletions

@@ -26,6 +26,8 @@
 :ocp-very-short: RHOCP
 :osd-brand-name: Red Hat OpenShift Dedicated
 :osd-short: OpenShift Dedicated
+:logging-brand-name: Red Hat OpenShift Logging
+:logging-short: OpenShift Logging
 // minimum and current latest supported versions
 :ocp-version-min: 4.14
 :ocp-version: 4.17

assemblies/assembly-audit-log.adoc

Lines changed: 5 additions & 3 deletions

@@ -33,14 +33,16 @@ Audit logs are not forwarded to the internal log store by default because this d
 * For a complete list of fields that a {product-short} audit log can include, see xref:ref-audit-log-fields.adoc_{context}[]
 * For a list of scaffolder events that a {product-short} audit log can include, see xref:ref-audit-log-scaffolder-events.adoc_{context}[]

-include::modules/observe/con-audit-log-config.adoc[leveloffset=+1]
+include::modules/observe/con-audit-log-config.adoc[]

-include::modules/observe/proc-audit-log-view.adoc[leveloffset=+1]
+include::modules/observe/proc-forward-audit-log-splunk.adoc[leveloffset=+2]
+
+include::modules/observe/proc-audit-log-view.adoc[]

 include::modules/observe/ref-audit-log-fields.adoc[leveloffset=+2]

 include::modules/observe/ref-audit-log-scaffolder-events.adoc[leveloffset=+2]

 include::modules/observe/ref-audit-log-catalog-events.adoc[leveloffset=+2]

-include::modules/observe/ref-audit-log-file-rotation-overview.adoc[leveloffset=+1]
+include::modules/observe/ref-audit-log-file-rotation-overview.adoc[]
modules/observe/proc-forward-audit-log-splunk.adoc

Lines changed: 204 additions & 0 deletions

@@ -0,0 +1,204 @@
[id='proc-forward-audit-log-splunk_{context}']
= Forwarding {product} audit logs to Splunk

You can use the {logging-brand-name} ({logging-short}) Operator and a `ClusterLogForwarder` instance to capture the streamed audit logs from a {product-short} instance and forward them to the HTTPS endpoint associated with your Splunk instance.

.Prerequisites

* You have a cluster running on a supported {ocp-short} version.
* You have an account with `cluster-admin` privileges.
* You have a Splunk Cloud account or Splunk Enterprise installation.

.Procedure

. Log in to your {ocp-short} cluster.
. Install the {logging-short} Operator in the `openshift-logging` namespace and switch to the namespace:
+
--
.Example command to switch to a namespace
[source,bash]
----
oc project openshift-logging
----
--
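+
--
If you prefer to install the Operator from the CLI instead of the web console, a minimal sketch follows; the channel name is an assumption and can differ between {logging-short} versions:

.Example `OperatorGroup` and `Subscription` resources (sketch)
[source,yaml]
----
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: cluster-logging
  namespace: openshift-logging
spec:
  targetNamespaces:
    - openshift-logging
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: cluster-logging
  namespace: openshift-logging
spec:
  channel: stable # assumption: verify the channel for your Logging version
  name: cluster-logging
  source: redhat-operators
  sourceNamespace: openshift-marketplace
----
--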
. Create a `serviceAccount` named `log-collector` and bind the `collect-application-logs` cluster role to the `serviceAccount`:
+
--
.Example command to create a `serviceAccount`
[source,bash]
----
oc create sa log-collector
----

.Example command to bind a role to a `serviceAccount`
[source,bash]
----
oc create clusterrolebinding log-collector --clusterrole=collect-application-logs --serviceaccount=openshift-logging:log-collector
----
--
. Generate a `hecToken` in your Splunk instance.
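+
--
You can create the token in the Splunk web UI under *Settings > Data inputs > HTTP Event Collector*. As a sketch, you can also create it through the Splunk management REST API; the host, the management port `8089`, and the token name `rhdh-audit-hec` are placeholder assumptions:

.Example command to create an HEC token (sketch)
[source,bash]
----
curl -k -u admin:<password> \
  https://<splunk-host>:8089/services/data/inputs/http \
  -d name=rhdh-audit-hec
----
--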
. Create a key/value secret in the `openshift-logging` namespace and verify the secret:
+
--
.Example command to create a key/value secret with `hecToken`
[source,bash]
----
oc -n openshift-logging create secret generic splunk-secret --from-literal=hecToken=<HEC_Token>
----

.Example command to verify a secret
[source,bash]
----
oc -n openshift-logging get secret/splunk-secret -o yaml
----
--
. Create a basic `ClusterLogForwarder` resource YAML file as follows:
+
--
.Example `ClusterLogForwarder` resource YAML file
[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
----

For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html-single/logging/index#logging-create-clf_configuring-log-forwarding[Creating a log forwarder].
--
. Define the following `ClusterLogForwarder` configuration using the {ocp-short} web console or CLI:
.. Specify `log-collector` as the `serviceAccount` in the YAML file:
+
--
.Example `serviceAccount` configuration
[source,yaml]
----
serviceAccount:
  name: log-collector
----
--
.. Configure `inputs` to specify the type and source of logs to forward. The following configuration enables the forwarder to capture logs from all applications in a provided namespace:
+
--
.Example `inputs` configuration
[source,yaml]
----
inputs:
  - name: my-app-logs-input
    type: application
    application:
      includes:
        - namespace: my-developer-hub-namespace
      containerLimit:
        maxRecordsPerSecond: 100
----

For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html-single/logging/index#cluster-logging-collector-log-forward-logs-from-application-pods_configuring-log-forwarding[Forwarding application logs from specific pods].
--
.. Configure `outputs` to specify where the captured logs are sent. In this step, focus on the `splunk` type. If the Splunk endpoint uses self-signed TLS certificates, you can either use the `tls.insecureSkipVerify` option (not recommended) or provide the certificate chain by using a secret.
+
--
.Example `outputs` configuration
[source,yaml]
----
outputs:
  - name: splunk-receiver-application
    type: splunk
    splunk:
      authentication:
        token:
          key: hecToken
          secretName: splunk-secret
      index: main
      url: 'https://my-splunk-instance-url'
    rateLimit:
      maxRecordsPerSecond: 250
----

For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html-single/logging/index#logging-forward-splunk_configuring-log-forwarding[Forwarding logs to Splunk] in the {ocp-short} documentation.
--
.. Optional: Filter logs to include only audit logs:
+
--
.Example `filters` configuration
[source,yaml]
----
filters:
  - name: audit-logs-only
    type: drop
    drop:
      - test:
          - field: .message
            notMatches: isAuditLog
----

For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html-single/logging/index#logging-content-filtering[Filtering logs by content] in the {ocp-short} documentation.
--
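+
--
The `drop` filter keeps only records whose message contains the `isAuditLog` marker that {product-short} writes into its audit log entries. The following fragment is purely illustrative; the event name and message values are hypothetical:

.Example log message fragment that the filter keeps (illustrative)
[source,json]
----
{"isAuditLog": true, "eventName": "example-event", "message": "..."}
----
--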
.. Configure `pipelines` to route logs from specific inputs to designated outputs. Use the names of the defined inputs and outputs to specify multiple `inputRefs` and `outputRefs` in each pipeline:
+
--
.Example `pipelines` configuration
[source,yaml]
----
pipelines:
  - name: my-app-logs-pipeline
    detectMultilineErrors: true
    inputRefs:
      - my-app-logs-input
    outputRefs:
      - splunk-receiver-application
    filterRefs:
      - audit-logs-only
----
--
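+
--
Combining the preceding snippets gives a complete resource similar to the following sketch. It assumes that all of the fragments belong under `spec`, and it reuses the placeholder namespace and Splunk URL from the earlier examples:

.Example combined `ClusterLogForwarder` resource (sketch)
[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  serviceAccount:
    name: log-collector
  inputs:
    - name: my-app-logs-input
      type: application
      application:
        includes:
          - namespace: my-developer-hub-namespace
  outputs:
    - name: splunk-receiver-application
      type: splunk
      splunk:
        authentication:
          token:
            key: hecToken
            secretName: splunk-secret
        index: main
        url: 'https://my-splunk-instance-url'
  filters:
    - name: audit-logs-only
      type: drop
      drop:
        - test:
            - field: .message
              notMatches: isAuditLog
  pipelines:
    - name: my-app-logs-pipeline
      inputRefs:
        - my-app-logs-input
      outputRefs:
        - splunk-receiver-application
      filterRefs:
        - audit-logs-only
----
--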
. Run the following command to apply the `ClusterLogForwarder` configuration:
+
--
.Example command to apply `ClusterLogForwarder` configuration
[source,bash]
----
oc apply -f <ClusterLogForwarder-configuration.yaml>
----
--
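+
--
To confirm that the configuration was accepted and the collector pods are running, commands such as the following can help; the resource name `instance` matches the earlier example:

.Example commands to check the forwarder status (sketch)
[source,bash]
----
oc -n openshift-logging get clusterlogforwarder instance -o yaml
oc -n openshift-logging get pods
----
--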
. Optional: To reduce the risk of log loss, configure your `ClusterLogForwarder` pods using the following options:
.. Define the resource requests and limits for the log collector as follows:
+
--
.Example `collector` configuration
[source,yaml]
----
collector:
  resources:
    requests:
      cpu: 250m
      memory: 64Mi
      ephemeral-storage: 250Mi
    limits:
      cpu: 500m
      memory: 128Mi
      ephemeral-storage: 500Mi
----
--
.. Define `tuning` options for log delivery, including `delivery`, `compression`, `minRetryDuration`, and `maxRetryDuration`. Tuning can be applied per output as needed.
+
--
.Example `tuning` configuration
[source,yaml]
----
tuning:
  delivery: AtLeastOnce <1>
  compression: none
  minRetryDuration: 1s
  maxRetryDuration: 10s
----

<1> `AtLeastOnce` delivery mode means that if the log forwarder crashes or is restarted, any logs that were read before the crash but not sent to their destination are re-sent. It is possible that some logs are duplicated after a crash.
--
.Verification

. Confirm that logs are being forwarded to your Splunk instance by viewing them in the Splunk dashboard, for example, by running a search similar to the one shown after this list.
. Troubleshoot any issues using {ocp-short} and Splunk logs as needed.
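+
--
A minimal search sketch, assuming the `main` index from the `outputs` example:

.Example Splunk search (sketch)
[source]
----
index="main" "isAuditLog"
----
--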
