
Commit 129e055

Merge branch 'main' into RHIDP-4896
2 parents 94a4854 + 2a2dcd5 commit 129e055

13 files changed, +248 -5 lines changed

artifacts/attributes.adoc

Lines changed: 2 additions & 0 deletions
@@ -26,6 +26,8 @@
 :ocp-very-short: RHOCP
 :osd-brand-name: Red Hat OpenShift Dedicated
 :osd-short: OpenShift Dedicated
+:logging-brand-name: Red Hat OpenShift Logging
+:logging-short: OpenShift Logging
 // minimum and current latest supported versions
 :ocp-version-min: 4.14
 :ocp-version: 4.17

assemblies/assembly-about-rhdh.adoc

Lines changed: 1 addition & 1 deletion
@@ -19,4 +19,4 @@ This platform is driven by a centralized software catalog, providing efficiency
 Use {product} to simplify decision-making through a selection of internally approved tools, programming languages, and developer resources within a self-managed portal.


-include::modules/discover/con-benefits-of-rhdh.adoc[leveloffset=+1]
+include::modules/about/con-benefits-of-rhdh.adoc[leveloffset=+1]

assemblies/assembly-audit-log.adoc

Lines changed: 5 additions & 3 deletions
@@ -33,14 +33,16 @@ Audit logs are not forwarded to the internal log store by default because this d
 * For a complete list of fields that a {product-short} audit log can include, see xref:ref-audit-log-fields.adoc_{context}[]
 * For a list of scaffolder events that a {product-short} audit log can include, see xref:ref-audit-log-scaffolder-events.adoc_{context}[]

-include::modules/observe/con-audit-log-config.adoc[leveloffset=+1]
+include::modules/observe/con-audit-log-config.adoc[]

-include::modules/observe/proc-audit-log-view.adoc[leveloffset=+1]
+include::modules/observe/proc-forward-audit-log-splunk.adoc[leveloffset=+2]
+
+include::modules/observe/proc-audit-log-view.adoc[]

 include::modules/observe/ref-audit-log-fields.adoc[leveloffset=+2]

 include::modules/observe/ref-audit-log-scaffolder-events.adoc[leveloffset=+2]

 include::modules/observe/ref-audit-log-catalog-events.adoc[leveloffset=+2]

-include::modules/observe/ref-audit-log-file-rotation-overview.adoc[leveloffset=+1]
+include::modules/observe/ref-audit-log-file-rotation-overview.adoc[]

assemblies/assembly-running-rhdh-behind-a-proxy.adoc

Lines changed: 2 additions & 1 deletion
@@ -6,8 +6,9 @@ You can run the {product-very-short} application behind a corporate proxy by set
 * `HTTP_PROXY`: Denotes the proxy to use for HTTP requests.
 * `HTTPS_PROXY`: Denotes the proxy to use for HTTPS requests.

-Additionally, you can set the `NO_PROXY` environment variable to exclude certain domains from proxying. The variable value is a comma-separated list of hostnames that do not require a proxy to get reached, even if one is specified.
+Additionally, set the `NO_PROXY` environment variable to bypass the proxy for certain domains. The variable value is a comma-separated list of hostnames or IP addresses that can be accessed without the proxy, even if one is specified.

+include::modules/admin/procedure-understanding-no-proxy.adoc[leveloffset=+1]

 include::modules/admin/proc-configuring-proxy-in-helm-deployment.adoc[leveloffset=+1]
 include::modules/admin/proc-configuring-proxy-in-operator-deployment.adoc[leveloffset=+1]
File renamed without changes.
Lines changed: 34 additions & 0 deletions
@@ -0,0 +1,34 @@
[id="understanding-no-proxy"]
= Understanding the `NO_PROXY` exclusion rules

`NO_PROXY` is a comma-separated or space-separated list of hostnames or IP addresses, with optional port numbers. If the input URL matches any of the entries listed in `NO_PROXY`, a direct request fetches that URL, bypassing the proxy settings.

[NOTE]
====
The default value for `NO_PROXY` in {product-very-short} is `localhost,127.0.0.1`. If you override it, include at least `localhost` or `localhost:7007` in the list; otherwise, the {product-very-short} backend might fail.
====

Matching follows these rules:

* `NO_PROXY=*` bypasses the proxy for all requests.

* Spaces and commas can separate the entries in the `NO_PROXY` list. For example, `NO_PROXY="localhost,example.com"`, `NO_PROXY="localhost example.com"`, and `NO_PROXY="localhost, example.com"` have the same effect.

* If `NO_PROXY` contains no entries, configuring the `HTTP(S)_PROXY` settings makes the backend send all requests through the proxy.

* The backend does not perform a DNS lookup to determine whether a request should bypass the proxy. For example, if DNS resolves `example.com` to `1.2.3.4`, setting `NO_PROXY=1.2.3.4` has no effect on requests sent to `example.com`. Only requests sent to the IP address `1.2.3.4` bypass the proxy.

* If you add a port after the hostname or IP address, the request must match both the host or IP address and the port to bypass the proxy. For example, `NO_PROXY=example.com:1234` bypasses the proxy for requests to `http(s)://example.com:1234`, but not for requests on other ports, such as `http(s)://example.com`.

* If you do not specify a port after the hostname or IP address, all requests to that host or IP address bypass the proxy regardless of the port. For example, `NO_PROXY=localhost` bypasses the proxy for requests sent to URLs such as `http(s)://localhost:7077` and `http(s)://localhost:8888`.

* IP address blocks in CIDR notation do not work. For example, setting `NO_PROXY=10.11.0.0/16` has no effect, even if the backend sends a request to an IP address in that block.

* Only IPv4 addresses are supported. IPv6 addresses such as `::1` do not work.

* Generally, the proxy is bypassed only if the hostname is an exact match for an entry in the `NO_PROXY` list. The only exceptions are entries that start with a dot (`.`) or with a wildcard (`*`); in that case, the proxy is bypassed if the hostname ends with the entry.

[NOTE]
====
To exclude a given domain and all of its subdomains, list both the domain and the dot-prefixed domain. For example, set `NO_PROXY=example.com,.example.com` to bypass the proxy for requests sent to `http(s)://example.com` and `http(s)://subdomain.example.com`.
====
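
For illustration only, the following hypothetical value shows how these rules combine. The hostnames and ports are placeholders, not values required by {product-very-short}:

.Example `NO_PROXY` value and its effect
[source,bash]
----
# Keep the defaults so the backend can reach itself, then add your own hosts.
NO_PROXY="localhost,127.0.0.1,.internal.example.com,registry.example.com:5000"

# Bypassed: http://localhost:7007, https://api.internal.example.com,
#           https://registry.example.com:5000
# Proxied:  https://registry.example.com (port does not match),
#           https://1.2.3.4 (the IP address is not listed, and DNS is not consulted)
----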
Lines changed: 204 additions & 0 deletions
@@ -0,0 +1,204 @@
[id='proc-forward-audit-log-splunk_{context}']
= Forwarding {product} audit logs to Splunk

You can use the {logging-brand-name} ({logging-short}) Operator and a `ClusterLogForwarder` instance to capture the streamed audit logs from a {product-short} instance and forward them to the HTTPS endpoint associated with your Splunk instance.

.Prerequisites

* You have a cluster running on a supported {ocp-short} version.
* You have an account with `cluster-admin` privileges.
* You have a Splunk Cloud account or a Splunk Enterprise installation.

.Procedure

. Log in to your {ocp-short} cluster.
. Install the {logging-short} Operator in the `openshift-logging` namespace and switch to the namespace:
+
--
.Example command to switch to a namespace
[source,bash]
----
oc project openshift-logging
----
--
. Create a `serviceAccount` named `log-collector` and bind the `collect-application-logs` role to the `serviceAccount`:
+
--
.Example command to create a `serviceAccount`
[source,bash]
----
oc create sa log-collector
----

.Example command to bind a role to a `serviceAccount`
[source,bash]
----
oc create clusterrolebinding log-collector --clusterrole=collect-application-logs --serviceaccount=openshift-logging:log-collector
----
--
. Generate a `hecToken` in your Splunk instance.
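+
--
If you want to confirm that the token works before wiring it into the log forwarder, you can send a test event to the Splunk HTTP Event Collector (HEC) endpoint. This is an optional sketch: the host is a placeholder, and `8088` is only the default HEC port, which might differ in your environment.

.Example command to send a test event to the Splunk HEC endpoint
[source,bash]
----
# Replace <splunk_host> and <HEC_Token> with your own values.
curl -k "https://<splunk_host>:8088/services/collector/event" \
  -H "Authorization: Splunk <HEC_Token>" \
  -d '{"event": "connectivity test"}'
----

A successful request typically returns a response such as `{"text":"Success","code":0}`.
--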
. Create a key/value secret in the `openshift-logging` namespace and verify the secret:
+
--
.Example command to create a key/value secret with `hecToken`
[source,bash]
----
oc -n openshift-logging create secret generic splunk-secret --from-literal=hecToken=<HEC_Token>
----

.Example command to verify a secret
[source,bash]
----
oc -n openshift-logging get secret/splunk-secret -o yaml
----
--
. Create a basic `ClusterLogForwarder` resource YAML file as follows:
+
--
.Example `ClusterLogForwarder` resource YAML file
[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
----

For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html-single/logging/index#logging-create-clf_configuring-log-forwarding[Creating a log forwarder].
--
. Define the following `ClusterLogForwarder` configuration by using the {ocp-short} web console or CLI:
.. Specify `log-collector` as the `serviceAccount` in the YAML file:
+
--
.Example `serviceAccount` configuration
[source,yaml]
----
serviceAccount:
  name: log-collector
----
--
.. Configure `inputs` to specify the type and source of logs to forward. The following configuration enables the forwarder to capture logs from all applications in a provided namespace:
+
--
.Example `inputs` configuration
[source,yaml]
----
inputs:
  - name: my-app-logs-input
    type: application
    application:
      includes:
        - namespace: my-developer-hub-namespace
      containerLimit:
        maxRecordsPerSecond: 100
----

For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html-single/logging/index#cluster-logging-collector-log-forward-logs-from-application-pods_configuring-log-forwarding[Forwarding application logs from specific pods].
--
.. Configure `outputs` to specify where the captured logs are sent. In this step, focus on the `splunk` type. If the Splunk endpoint uses self-signed TLS certificates, you can either use the `tls.insecureSkipVerify` option (not recommended) or provide the certificate chain by using a secret.
+
--
.Example `outputs` configuration
[source,yaml]
----
outputs:
  - name: splunk-receiver-application
    type: splunk
    splunk:
      authentication:
        token:
          key: hecToken
          secretName: splunk-secret
      index: main
      url: 'https://my-splunk-instance-url'
      rateLimit:
        maxRecordsPerSecond: 250
----

For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html-single/logging/index#logging-forward-splunk_configuring-log-forwarding[Forwarding logs to Splunk] in the {ocp-short} documentation.
--
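+
--
If you choose the self-signed certificate route, the dotted option name suggests that the flag sits in a `tls` block on the output. The following is a sketch under that assumption, for testing only:

.Example `tls.insecureSkipVerify` configuration (assumption, testing only)
[source,yaml]
----
outputs:
  - name: splunk-receiver-application
    type: splunk
    tls:
      insecureSkipVerify: true  # skips certificate validation; avoid in production
----
--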
.. Optional: Filter logs to include only audit logs:
+
--
.Example `filters` configuration
[source,yaml]
----
filters:
  - name: audit-logs-only
    type: drop
    drop:
      - test:
          - field: .message
            notMatches: isAuditLog
----

For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html-single/logging/index#logging-content-filtering[Filtering logs by content] in the {ocp-short} documentation.
--
.. Configure `pipelines` to route logs from specific inputs to designated outputs. Use the names of the defined inputs and outputs to specify multiple `inputRefs` and `outputRefs` in each pipeline:
+
--
.Example `pipelines` configuration
[source,yaml]
----
pipelines:
  - name: my-app-logs-pipeline
    detectMultilineErrors: true
    inputRefs:
      - my-app-logs-input
    outputRefs:
      - splunk-receiver-application
    filterRefs:
      - audit-logs-only
----
--
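+
--
The previous sub-steps show the sections of the forwarder one at a time. As a convenience, the following sketch puts them together into a single resource; it assumes that the `serviceAccount`, `inputs`, `outputs`, `filters`, and `pipelines` sections all belong under the resource `spec`, and it reuses the placeholder names from the earlier examples. Adjust the namespace, Splunk URL, and secret to match your environment before use:

.Example assembled `ClusterLogForwarder` resource (sketch)
[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  serviceAccount:
    name: log-collector
  inputs:
    - name: my-app-logs-input
      type: application
      application:
        includes:
          - namespace: my-developer-hub-namespace
        containerLimit:
          maxRecordsPerSecond: 100
  outputs:
    - name: splunk-receiver-application
      type: splunk
      splunk:
        authentication:
          token:
            key: hecToken
            secretName: splunk-secret
        index: main
        url: 'https://my-splunk-instance-url'
        rateLimit:
          maxRecordsPerSecond: 250
  filters:
    - name: audit-logs-only
      type: drop
      drop:
        - test:
            - field: .message
              notMatches: isAuditLog
  pipelines:
    - name: my-app-logs-pipeline
      detectMultilineErrors: true
      inputRefs:
        - my-app-logs-input
      outputRefs:
        - splunk-receiver-application
      filterRefs:
        - audit-logs-only
----
--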
. Run the following command to apply the `ClusterLogForwarder` configuration:
+
--
.Example command to apply the `ClusterLogForwarder` configuration
[source,bash]
----
oc apply -f <ClusterLogForwarder-configuration.yaml>
----
--
. Optional: To reduce the risk of log loss, configure your `ClusterLogForwarder` pods by using the following options:
.. Define the resource requests and limits for the log collector as follows:
+
--
.Example `collector` configuration
[source,yaml]
----
collector:
  resources:
    requests:
      cpu: 250m
      memory: 64Mi
      ephemeral-storage: 250Mi
    limits:
      cpu: 500m
      memory: 128Mi
      ephemeral-storage: 500Mi
----
--
.. Define `tuning` options for log delivery, including `delivery`, `compression`, and the retry duration settings. Tuning can be applied per output as needed.
+
--
.Example `tuning` configuration
[source,yaml]
----
tuning:
  delivery: AtLeastOnce <1>
  compression: none
  minRetryDuration: 1s
  maxRetryDuration: 10s
----

<1> `AtLeastOnce` delivery mode means that if the log forwarder crashes or is restarted, any logs that were read before the crash but not sent to their destination are re-sent. It is possible that some logs are duplicated after a crash.
--

.Verification
. Confirm that logs are being forwarded to your Splunk instance by viewing them in the Splunk dashboard.
. Troubleshoot any issues by using {ocp-short} and Splunk logs as needed.
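+
--
To narrow down where a problem sits, you can inspect the forwarder on the cluster and then search for the forwarded events in Splunk. The commands below assume the resource name `instance` and the `openshift-logging` namespace from the earlier examples; the Splunk query assumes the `main` index and the `isAuditLog` marker used by the filter in this procedure:

.Example commands to inspect the log forwarder
[source,bash]
----
# Check the status conditions reported on the ClusterLogForwarder resource.
oc -n openshift-logging get clusterlogforwarder instance -o yaml

# List the pods in the logging namespace, including the collector pods.
oc -n openshift-logging get pods
----

In the Splunk search UI, a query such as `index="main" "isAuditLog"` returns the forwarded audit log events if forwarding is working.
--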
