RHIDP-4570: Document how to send RHDH audit logs to Splunk #711
Merged: Gerry-Forde merged 4 commits into redhat-developer:main from hmanwani-rh:RHIDP-4570-main on Nov 21, 2024.
[id='proc-forward-audit-log-splunk_{context}']
= Forwarding {product} audit logs to Splunk

You can use the {logging-brand-name} ({logging-short}) Operator and a `ClusterLogForwarder` instance to capture the streamed audit logs from a {product-short} instance and forward them to the HTTPS endpoint associated with your Splunk instance.

.Prerequisites

* You have a cluster running on a supported {ocp-short} version.
* You have an account with `cluster-admin` privileges.
* You have a Splunk Cloud account or a Splunk Enterprise installation.

.Procedure

. Log in to your {ocp-short} cluster.
. Install the {logging-short} Operator in the `openshift-logging` namespace and switch to the namespace:
+
--
.Example command to switch to a namespace
[source,bash]
----
oc project openshift-logging
----
--
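+
--
If the Operator is not yet installed, you can install it through Operator Lifecycle Manager (OLM). The following `Subscription` is a minimal sketch: the channel name is an assumption, so verify it against the OperatorHub entry for your {ocp-short} version, and ensure an `OperatorGroup` exists in the namespace:

.Example `Subscription` resource for the {logging-short} Operator (sketch)
[source,yaml]
----
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: cluster-logging
  namespace: openshift-logging
spec:
  channel: stable # assumption: confirm the channel for your {logging-short} version
  name: cluster-logging
  source: redhat-operators
  sourceNamespace: openshift-marketplace
----
--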
. Create a `serviceAccount` named `log-collector` and bind the `collect-application-logs` role to the `serviceAccount`:
+
--
.Example command to create a `serviceAccount`
[source,bash]
----
oc create sa log-collector
----

.Example command to bind a role to a `serviceAccount`
[source,bash]
----
oc create clusterrolebinding log-collector --clusterrole=collect-application-logs --serviceaccount=openshift-logging:log-collector
----
--
. Generate a `hecToken` in your Splunk instance.
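+
--
On Splunk Cloud, you generate the HTTP Event Collector (HEC) token in the Splunk web UI under *Settings > Data Inputs > HTTP Event Collector*. On Splunk Enterprise, you can also create the token through the management REST API. The following command is a minimal sketch; the credentials, host, and input name are placeholders:

.Example command to create an HEC token on Splunk Enterprise (sketch)
[source,bash]
----
# Creates a new HEC input named rhdh-audit-hec through the Splunk management API
curl -k -u <admin_user>:<password> \
  https://<splunk_host>:8089/services/data/inputs/http \
  -d name=rhdh-audit-hec
----
--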
. Create a key/value secret in the `openshift-logging` namespace and verify the secret:
+
--
.Example command to create a key/value secret with `hecToken`
[source,bash]
----
oc -n openshift-logging create secret generic splunk-secret --from-literal=hecToken=<HEC_Token>
----

.Example command to verify a secret
[source,bash]
----
oc -n openshift-logging get secret/splunk-secret -o yaml
----
--
. Create a basic `ClusterLogForwarder` resource YAML file as follows:
+
--
.Example `ClusterLogForwarder` resource YAML file
[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
----

For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html-single/logging/index#logging-create-clf_configuring-log-forwarding[Creating a log forwarder].
--
. Define the following `ClusterLogForwarder` configuration by using the {ocp-short} web console or CLI:
.. Specify `log-collector` as the `serviceAccount` in the YAML file:
+
--
.Example `serviceAccount` configuration
[source,yaml]
----
serviceAccount:
  name: log-collector
----
--
.. Configure `inputs` to specify the type and source of logs to forward. The following configuration enables the forwarder to capture logs from all applications in a provided namespace:
+
--
.Example `inputs` configuration
[source,yaml]
----
inputs:
  - name: my-app-logs-input
    type: application
    application:
      includes:
        - namespace: my-developer-hub-namespace
      containerLimit:
        maxRecordsPerSecond: 100
----

For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html-single/logging/index#cluster-logging-collector-log-forward-logs-from-application-pods_configuring-log-forwarding[Forwarding application logs from specific pods].
--
.. Configure `outputs` to specify where the captured logs are sent. In this step, focus on the `splunk` type. If the Splunk endpoint uses self-signed TLS certificates, you can either set the `tls.insecureSkipVerify` option (not recommended) or provide the certificate chain by using a secret.
+
--
.Example `outputs` configuration
[source,yaml]
----
outputs:
  - name: splunk-receiver-application
    type: splunk
    splunk:
      authentication:
        token:
          key: hecToken
          secretName: splunk-secret
      index: main
      url: 'https://my-splunk-instance-url'
      rateLimit:
        maxRecordsPerSecond: 250
----

For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html-single/logging/index#logging-forward-splunk_configuring-log-forwarding[Forwarding logs to Splunk] in the {ocp-short} documentation.
--
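+
--
To provide the certificate chain instead of skipping TLS verification, you can reference a secret that contains the Splunk CA certificate from the output `tls` stanza. The following is a minimal sketch; the `splunk-ca` secret name and `ca-bundle.crt` key are assumptions, so verify the exact field names against the `ClusterLogForwarder` schema for your {logging-short} version:

.Example `tls` configuration for the Splunk output (sketch)
[source,yaml]
----
outputs:
  - name: splunk-receiver-application
    type: splunk
    tls:
      ca:
        key: ca-bundle.crt # assumption: key holding the CA certificate in the secret
        secretName: splunk-ca
----
--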
.. Optional: Filter logs to include only audit logs:
+
--
.Example `filters` configuration
[source,yaml]
----
filters:
  - name: audit-logs-only
    type: drop
    drop:
      - test:
          - field: .message
            notMatches: isAuditLog
----

For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html-single/logging/index#logging-content-filtering[Filtering logs by content] in the {ocp-short} documentation.
--
.. Configure `pipelines` to route logs from specific inputs to designated outputs. Use the names of the defined inputs and outputs to specify multiple `inputRefs` and `outputRefs` in each pipeline:
+
--
.Example `pipelines` configuration
[source,yaml]
----
pipelines:
  - name: my-app-logs-pipeline
    detectMultilineErrors: true
    inputRefs:
      - my-app-logs-input
    outputRefs:
      - splunk-receiver-application
    filterRefs:
      - audit-logs-only
----
--

. Run the following command to apply the `ClusterLogForwarder` configuration:
+
--
.Example command to apply the `ClusterLogForwarder` configuration
[source,bash]
----
oc apply -f <ClusterLogForwarder-configuration.yaml>
----
--
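+
--
For reference, a complete `ClusterLogForwarder` resource assembled from the preceding snippets, with the configuration blocks placed under `spec`, looks as follows:

.Example assembled `ClusterLogForwarder` resource YAML file
[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  serviceAccount:
    name: log-collector
  inputs:
    - name: my-app-logs-input
      type: application
      application:
        includes:
          - namespace: my-developer-hub-namespace
        containerLimit:
          maxRecordsPerSecond: 100
  filters:
    - name: audit-logs-only
      type: drop
      drop:
        - test:
            - field: .message
              notMatches: isAuditLog
  outputs:
    - name: splunk-receiver-application
      type: splunk
      splunk:
        authentication:
          token:
            key: hecToken
            secretName: splunk-secret
        index: main
        url: 'https://my-splunk-instance-url'
        rateLimit:
          maxRecordsPerSecond: 250
  pipelines:
    - name: my-app-logs-pipeline
      detectMultilineErrors: true
      inputRefs:
        - my-app-logs-input
      outputRefs:
        - splunk-receiver-application
      filterRefs:
        - audit-logs-only
----
--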
. Optional: To reduce the risk of log loss, configure your `ClusterLogForwarder` pods by using the following options:
.. Define the resource requests and limits for the log collector as follows:
+
--
.Example `collector` configuration
[source,yaml]
----
collector:
  resources:
    requests:
      cpu: 250m
      memory: 64Mi
      ephemeral-storage: 250Mi
    limits:
      cpu: 500m
      memory: 128Mi
      ephemeral-storage: 500Mi
----
--
.. Define `tuning` options for log delivery, including `delivery`, `compression`, `minRetryDuration`, and `maxRetryDuration`. Tuning can be applied per output as needed.
+
--
.Example `tuning` configuration
[source,yaml]
----
tuning:
  delivery: AtLeastOnce <1>
  compression: none
  minRetryDuration: 1s
  maxRetryDuration: 10s
----

<1> `AtLeastOnce` delivery mode means that if the log forwarder crashes or is restarted, any logs that were read before the crash but not sent to their destination are re-sent. It is possible that some logs are duplicated after a crash.
--

.Verification
. Confirm that logs are being forwarded to your Splunk instance by viewing them in the Splunk dashboard.
. Troubleshoot any issues by using {ocp-short} and Splunk logs as needed.
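+
--
You can also check that the collector pods are running and query Splunk for the forwarded events. The following is a minimal sketch; the `component=collector` pod label and the `isAuditLog` search string are assumptions that depend on your {logging-short} version and audit log format:

.Example verification commands (sketch)
[source,bash]
----
# Check that the collector pods are running in the logging namespace
oc -n openshift-logging get pods -l component=collector

# In the Splunk Search & Reporting app, search the target index, for example:
#   index=main "isAuditLog"
----
--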