// modules/compliance-anatomy.adoc
The compliance content is stored in `Profile` objects that are generated from a `ProfileBundle`:

[source,terminal]
----
$ oc get -n openshift-compliance profilebundle.compliance
----
[source,terminal]
----
$ oc get -n openshift-compliance profile.compliance
----
The `ProfileBundle` objects are processed by deployments labeled with the `Bundle` name. To troubleshoot an issue with the `Bundle`, you can find the deployment and view logs of the pods in a deployment:
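For example, assuming the default `rhcos4` bundle and that the deployments carry a `profile-bundle=<name>` label with a parser container named `profileparser` (both assumptions for illustration), the deployment and its logs might be found with:

[source,terminal]
----
$ oc get -n openshift-compliance deployments -l profile-bundle=rhcos4
$ oc logs -n openshift-compliance -l profile-bundle=rhcos4 -c profileparser
----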
These config maps are used by the scanner pods. If you ever need to modify the scanner behavior, change the scanner debug level, or print the raw results, modifying the config maps is the way to go. Afterwards, a persistent volume claim is created per scan to store the raw ARF results:
[source,terminal]
----
$ oc get pvc -n openshift-compliance -lcompliance.openshift.io/scan-name=rhcos4-e8-worker
----
The PVCs are mounted by a per-scan `ResultServer` deployment. A `ResultServer` is a simple HTTP server to which the individual scanner pods upload the full ARF results. Each server can run on a different node. The full ARF results might be very large, and you cannot presume that it would be possible to create a volume that could be mounted from multiple nodes at the same time. After the scan is finished, the `ResultServer` deployment is scaled down. The PVC with the raw results can be mounted from another custom pod and the results can be fetched or inspected. The traffic between the scanner pods and the `ResultServer` is protected by mutual TLS.
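A minimal sketch of such a custom pod, assuming the PVC from the example above and an arbitrary helper image (the pod name, image, and mount path are illustrative):

[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: pv-extract            # illustrative name
  namespace: openshift-compliance
spec:
  containers:
  - name: pv-extract
    image: registry.access.redhat.com/ubi9/ubi   # any image with a shell works
    command: ["sleep", "3000"]                   # keep the pod alive while fetching results
    volumeMounts:
    - name: scan-vol
      mountPath: /scan-results
  volumes:
  - name: scan-vol
    persistentVolumeClaim:
      claimName: rhcos4-e8-worker                # PVC created for the example scan
----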
[source,terminal]
----
$ oc get pods -lcompliance.openshift.io/scan-name=rhcos4-e8-worker,workload=scanner
----

The scan then proceeds to the Running phase.

[id="compliance-scan-running-phase_{context}"]
=== Running phase
The running phase waits until the scanner pods finish. The following terms and processes are in use by the running phase:

* *scanner*: This container runs the scan. For node scans, the container mounts the node filesystem as `/host` and mounts the content delivered by the init container. The container also mounts the `entrypoint` `ConfigMap` created in the Launching phase and executes it. The default script in the entrypoint `ConfigMap` executes OpenSCAP and stores the result files in the `/results` directory shared between the pod's containers. Logs from this pod can be viewed to determine what the OpenSCAP scanner checked. More verbose output can be viewed with the `debug` flag.
* *logcollector*: The logcollector container waits until the scanner container finishes. Then, it uploads the full ARF results to the `ResultServer` and separately uploads the XCCDF results along with the scan result and OpenSCAP result code as a `ConfigMap`. These result config maps are labeled with the scan name (`compliance.openshift.io/scan-name=rhcos4-e8-worker`).
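Continuing with the example scan name, the result config maps could then be listed by that same label selector:

[source,terminal]
----
$ oc get cm -n openshift-compliance -l compliance.openshift.io/scan-name=rhcos4-e8-worker
----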
After the scan reaches the Done phase, nothing else happens on its own unless the remediations are set to be applied automatically with `autoApplyRemediations: true`. The {product-title} administrator would now review the remediations and apply them as needed. If the remediations are set to be applied automatically, the `ComplianceSuite` controller takes over in the Done phase, pauses the machine config pool to which the scan maps, and applies all the remediations in one go. If a remediation is applied, the `ComplianceRemediation` controller takes over.
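Automatic remediation is opted into on the scan setting. A minimal sketch of the relevant fields (values illustrative, drawn from the `ScanSetting` fields described later):

[source,yaml]
----
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSetting
metadata:
  name: default-auto-apply
  namespace: openshift-compliance
autoApplyRemediations: true    # apply remediations without manual review
autoUpdateRemediations: true   # re-apply when content updates change a remediation
schedule: "0 1 * * *"
roles:
- worker
- master
----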
The remediation loop ends once the rendered machine config is updated, if needed.
After the Compliance Operator processes the applied remediation, the `status.ApplicationState` attribute changes to *Applied*, or to *Error* if incorrect. When a machine config remediation is applied, that remediation, along with all other applied remediations, is rendered into a `MachineConfig` object named `75-$scan-name-$suite-name`. That `MachineConfig` object is subsequently rendered by the Machine Config Operator and finally applied to all the nodes in a machine config pool by an instance of the machine config daemon running on each node.
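Because the rendered objects follow the `75-$scan-name-$suite-name` naming convention, they can be listed with a simple filter:

[source,terminal]
----
$ oc get mc | grep "^75-"
----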
// modules/compliance-crd-profile-bundle.adoc
:_content-type: CONCEPT
[id="profile-bundle-object_{context}"]
= ProfileBundle object

When you install the Compliance Operator, it includes ready-to-run `ProfileBundle` objects. The Compliance Operator parses the `ProfileBundle` object and creates a `Profile` object for each profile in the bundle. It also parses `Rule` and `Variable` objects, which are used by the `Profile` object.
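For illustration, a `ProfileBundle` pairs a content image with a data-stream file inside that image. A minimal sketch (the content image path is a placeholder):

[source,yaml]
----
apiVersion: compliance.openshift.io/v1alpha1
kind: ProfileBundle
metadata:
  name: rhcos4
  namespace: openshift-compliance
spec:
  contentImage: <content_image_path>  # placeholder: container image that ships the SCAP content
  contentFile: ssg-rhcos4-ds.xml      # data-stream file within the image
----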
// modules/compliance-crd-scan-setting.adoc
By default, the Compliance Operator creates the following `ScanSetting` objects:

.Example `ScanSetting` object
[source,yaml]
----
Name:                      default-auto-apply
Namespace:                 openshift-compliance
Labels:                    <none>
Annotations:               <none>
API Version:               compliance.openshift.io/v1alpha1
Auto Apply Remediations:   true
Auto Update Remediations:  true
Kind:                      ScanSetting
Metadata:
  Creation Timestamp:  2022-10-18T20:21:00Z
  Generation:          1
  Managed Fields:
    API Version:  compliance.openshift.io/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:autoApplyRemediations: <1>
      f:autoUpdateRemediations: <2>
      f:rawResultStorage:
        .:
        f:nodeSelector:
          .:
          f:node-role.kubernetes.io/master:
        f:pvAccessModes:
        f:rotation:
        f:size:
        f:tolerations:
      f:roles:
      f:scanTolerations:
      f:schedule:
      f:showNotApplicable:
      f:strictNodeScan:
    Manager:         compliance-operator
    Operation:       Update
    Time:            2022-10-18T20:21:00Z
  Resource Version:  38840
  UID:               8cb0967d-05e0-4d7a-ac1c-08a7f7e89e84
Raw Result Storage:
  Node Selector:
    node-role.kubernetes.io/master:
  Pv Access Modes:
    ReadWriteOnce
  Rotation:  3 <3>
  Size:      1Gi <4>
  Tolerations:
    Effect:              NoSchedule
    Key:                 node-role.kubernetes.io/master
    Operator:            Exists
    Effect:              NoExecute
    Key:                 node.kubernetes.io/not-ready
    Operator:            Exists
    Toleration Seconds:  300
    Effect:              NoExecute
    Key:                 node.kubernetes.io/unreachable
    Operator:            Exists
    Toleration Seconds:  300
    Effect:              NoSchedule
    Key:                 node.kubernetes.io/memory-pressure
    Operator:            Exists
Roles: <6>
  master
  worker
Scan Tolerations:
  Operator:  Exists
Schedule:  "0 1 * * *" <5>
Show Not Applicable:  false
Strict Node Scan:     true
Events:               <none>
----
<1> Set to `true` to enable auto remediations. Set to `false` to disable auto remediations.
<2> Set to `true` to enable auto remediations for content updates. Set to `false` to disable auto remediations for content updates.
<3> Specify the number of stored scans in the raw result format. The default value is `3`. As the older results get rotated, the administrator must store the results elsewhere before the rotation happens.
<4> Specify the storage size that should be created for the scan to store the raw results. The default value is `1Gi`.
<5> Specify how often the scan should be run, in cron format.