[id='proc-forward-audit-log-splunk_{context}']
= Forwarding {product} audit logs to Splunk

You can use the {logging-brand-name} ({logging-short}) Operator and `ClusterLogForwarder` to capture the streamed audit logs from a {product-short} instance and forward them to the HTTPS endpoint associated with your Splunk instance.

.Prerequisites

* You have a cluster running on a supported {ocp-short} version.
* You have an account with `cluster-admin` privileges.
* You have a Splunk Cloud account.

.Procedure

. Log in to your {ocp-short} cluster.
. Install the {logging-short} Operator in the `openshift-logging` namespace and switch to the namespace:
+
--
.Example command to switch to a namespace
[source,bash]
----
oc project openshift-logging
----
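
If the {logging-short} Operator is not installed yet, you can install it from OperatorHub or by creating a `Subscription` resource. The following is a minimal sketch that assumes the `openshift-logging` namespace and an Operator group already exist in it; the channel, catalog source, and package name are assumptions that can differ in your environment:

.Example `Subscription` for installing the {logging-short} Operator (sketch)
[source,yaml]
----
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: cluster-logging   # assumed package name for the Logging Operator
  namespace: openshift-logging
spec:
  channel: stable         # assumption: use the channel that matches your Logging Operator version
  name: cluster-logging
  source: redhat-operators
  sourceNamespace: openshift-marketplace
----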
--
. Create a `serviceAccount` named `log-collector` and bind the `collect-application-logs` cluster role to the `serviceAccount`:
+
--
.Example command to create a `serviceAccount`
[source,bash]
----
oc create sa log-collector
----

.Example command to bind a role to a `serviceAccount`
[source,bash]
----
oc create clusterrolebinding log-collector --clusterrole=collect-application-logs --serviceaccount=openshift-logging:log-collector
----
--
. Generate an HTTP Event Collector (HEC) token (`hecToken`) in your Splunk instance.
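+
--
Optionally, you can confirm that the token and the HEC endpoint accept events before you continue. The following command is a sketch: the URL, port, and placeholders are assumptions, so adjust them to match your Splunk deployment. The `-k` flag skips TLS verification and is suitable for testing only.

.Example command to send a test event to the Splunk HEC endpoint (sketch)
[source,bash]
----
# Replace the URL and <HEC_Token> with the values for your Splunk instance
curl -k "https://my-splunk-instance-url:8088/services/collector/event" \
  -H "Authorization: Splunk <HEC_Token>" \
  -d '{"event": "HEC connectivity test"}'
----
--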
. Create a key/value secret in the `openshift-logging` namespace and verify the secret:
+
--
.Example command to create a key/value secret with `hecToken`
[source,bash]
----
oc -n openshift-logging create secret generic splunk-secret --from-literal=hecToken=<HEC_Token>
----

.Example command to verify a secret
[source,bash]
----
oc -n openshift-logging get secret/splunk-secret -o yaml
----
--
. Define the following `ClusterLogForwarder` configuration by using the OpenShift web console or the OpenShift CLI. The snippets in the following sub-steps combine into a complete example, shown after the last sub-step:
.. Specify `log-collector` as the `serviceAccount` in the YAML file:
+
--
.Example `serviceAccount` configuration
[source,yaml]
----
serviceAccount:
  name: log-collector
----
--
.. Configure `inputs` to specify the type and source of logs to forward. The following configuration enables the forwarder to capture logs from all applications in the specified namespace:
+
--
.Example `inputs` configuration
[source,yaml]
----
inputs:
  - name: my-app-logs-input
    type: application
    application:
      includes:
        - namespace: my-developer-hub-namespace
      containerLimit:
        maxRecordsPerSecond: 100
----

For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html-single/logging/index#cluster-logging-collector-log-forward-logs-from-application-pods_configuring-log-forwarding[Forwarding application logs from specific pods].
--
.. Configure `outputs` to specify where the captured logs are sent. In this step, focus on the `splunk` type. If the Splunk endpoint uses self-signed TLS certificates, you can either set the `tls.insecureSkipVerify` option (not recommended) or provide the certificate chain by using a secret.
+
--
.Example `outputs` configuration
[source,yaml]
----
outputs:
  - name: splunk-receiver-application
    type: splunk
    splunk:
      authentication:
        token:
          key: hecToken
          secretName: splunk-secret
      index: main
      url: 'https://my-splunk-instance-url'
      rateLimit:
        maxRecordsPerSecond: 250
----

For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html-single/logging/index#logging-forward-splunk_configuring-log-forwarding[Forwarding logs to Splunk] in {ocp-short} documentation.
--
.. Optional: Filter logs to include only audit logs:
+
--
.Example `filters` configuration
[source,yaml]
----
filters:
  - name: audit-logs-only
    type: drop
    drop:
      - test:
          - field: .message
            notMatches: isAuditLog
----

For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html-single/logging/index#logging-content-filtering[Filtering logs by content] in {ocp-short} documentation.
--
.. Configure `pipelines` to route logs from specific inputs to designated outputs. Use the names of the defined inputs and outputs to specify multiple `inputRefs` and `outputRefs` in each pipeline:
+
--
.Example `pipelines` configuration
[source,yaml]
----
pipelines:
  - name: my-app-logs-pipeline
    detectMultilineErrors: true
    inputRefs:
      - my-app-logs-input
    outputRefs:
      - splunk-receiver-application
    filterRefs:
      - audit-logs-only
----
--
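+
--
For reference, the snippets from the previous sub-steps combine into a single `ClusterLogForwarder` resource, as in the following sketch. The `apiVersion`, resource name, and namespace are assumptions that depend on your {logging-short} version and environment:

.Example assembled `ClusterLogForwarder` configuration (sketch)
[source,yaml]
----
apiVersion: observability.openshift.io/v1   # assumption: adjust to your Logging Operator version
kind: ClusterLogForwarder
metadata:
  name: my-log-forwarder                    # example name
  namespace: openshift-logging
spec:
  serviceAccount:
    name: log-collector
  inputs:
    - name: my-app-logs-input
      type: application
      application:
        includes:
          - namespace: my-developer-hub-namespace
        containerLimit:
          maxRecordsPerSecond: 100
  outputs:
    - name: splunk-receiver-application
      type: splunk
      splunk:
        authentication:
          token:
            key: hecToken
            secretName: splunk-secret
        index: main
        url: 'https://my-splunk-instance-url'
        rateLimit:
          maxRecordsPerSecond: 250
  filters:
    - name: audit-logs-only
      type: drop
      drop:
        - test:
            - field: .message
              notMatches: isAuditLog
  pipelines:
    - name: my-app-logs-pipeline
      detectMultilineErrors: true
      inputRefs:
        - my-app-logs-input
      outputRefs:
        - splunk-receiver-application
      filterRefs:
        - audit-logs-only
----
--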

. Run the following command to apply the `ClusterLogForwarder` configuration:
+
--
.Example command to apply `ClusterLogForwarder` configuration
[source,bash]
----
oc apply -f <ClusterLogForwarder-configuration.yaml>
----
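
Optionally, verify that the configuration was accepted and that the {logging-short} Operator has started collector pods. The exact pod names are a detail that varies with your `ClusterLogForwarder` name and {logging-short} version:

.Example commands to check the forwarder and collector pods (sketch)
[source,bash]
----
# List ClusterLogForwarder resources and their status
oc -n openshift-logging get clusterlogforwarder

# List pods in the namespace; look for the collector pods created for your forwarder
oc -n openshift-logging get pods
----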
--
. Optional: Customize the `ClusterLogForwarder` collector pods and log delivery by using the following options:
.. Define the resource requests and limits for the log collector as follows:
+
--
.Example `collector` configuration
[source,yaml]
----
collector:
  resources:
    requests:
      cpu: 250m
      memory: 64Mi
      ephemeral-storage: 250Mi
    limits:
      cpu: 500m
      memory: 128Mi
      ephemeral-storage: 500Mi
----
--
.. Define `tuning` options for log delivery, including `delivery`, `compression`, `minRetryDuration`, and `maxRetryDuration`. Tuning can be applied per output as needed.
+
--
.Example `tuning` configuration
[source,yaml]
----
tuning:
  delivery: AtLeastOnce
  compression: none
  minRetryDuration: 1s
  maxRetryDuration: 10s
----
--

.Verification
. Confirm that logs are being forwarded to your Splunk instance by viewing them in the Splunk dashboard.
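+
--
For example, in the Splunk *Search & Reporting* app, you can search the index that you configured in the `outputs` section. The index name and the `isAuditLog` marker in the following search come from the earlier examples and are assumptions if your configuration differs:

.Example Splunk search (sketch)
[source,text]
----
index="main" "isAuditLog"
----
--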
. Troubleshoot any issues by using the {ocp-short} and Splunk logs as needed.