The compliance content is stored in `Profile` objects that are generated from a `ProfileBundle` object. The Compliance Operator creates a `ProfileBundle` object for the cluster and another for the cluster nodes.
[source,terminal]
----
$ oc get profilebundle.compliance
----
[source,terminal]
----
$ oc get profile.compliance
----

The `ProfileBundle` objects are processed by deployments that are labeled with the bundle name. To troubleshoot content parsing, check the logs of the `profileparser` container and the state of the related deployments and pods:

[source,terminal]
----
$ oc logs -lprofile-bundle=ocp4 -c profileparser
----
[source,terminal]
----
$ oc get deployments,pods -lprofile-bundle=ocp4
----
[source,terminal]
----
$ oc logs pods/<pod-name>
----
[source,terminal]
----
$ oc describe pod/<pod-name> -c profileparser
----

== The ScanSetting and ScanSettingBinding object lifecycle and debugging

With valid compliance content sources, the high-level `ScanSetting` and `ScanSettingBinding` objects can be used to generate `ComplianceSuite` and `ComplianceScan` objects:
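
For example, a minimal `ScanSettingBinding` object might look like the following sketch. The binding name is arbitrary, and the `rhcos4-e8` profile and the `default` scan setting are assumed to already exist in your cluster:

[source,yaml]
----
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSettingBinding
metadata:
  name: my-companys-constraints
profiles:
  # Scan the nodes against the rhcos4-e8 profile (assumed to be installed)
  - name: rhcos4-e8
    kind: Profile
    apiGroup: compliance.openshift.io/v1alpha1
settingsRef:
  # Reuse the default ScanSetting provided by the Compliance Operator
  name: default
  kind: ScanSetting
  apiGroup: compliance.openshift.io/v1alpha1
----
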
Now a `ComplianceSuite` object is created. The flow continues to reconcile the newly created `ComplianceSuite`.
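
You can verify that both kinds of objects were generated with a query similar to the following sketch, run in the Compliance Operator namespace:

[source,terminal]
----
$ oc get compliancesuites,compliancescans
----
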
== ComplianceSuite custom resource lifecycle and debugging

The `ComplianceSuite` CR is a wrapper around `ComplianceScan` CRs. The `ComplianceSuite` CR is handled by a controller tagged with `logger=suitectrl`.
This controller handles creating scans from a suite, reconciling and aggregating individual Scan statuses into a single Suite status. If a suite is set to execute periodically, the `suitectrl` also handles creating a `CronJob` CR that re-runs the scans in the suite after the initial run is done:
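
For example, the re-runner `CronJob` object can be listed with a standard query; this sketch assumes the Compliance Operator namespace is selected:

[source,terminal]
----
$ oc get cronjobs
----
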

For the most important issues, events are emitted. View them with `oc describe compliancesuites/<name>`. The `Suite` objects also have a `Status` subresource that is updated when any of the `Scan` objects that belong to this suite update their `Status` subresource. After all expected scans are created, control is passed to the scan controller.
== ComplianceScan custom resource lifecycle and debugging
The `ComplianceScan` CRs are handled by the `scanctrl` controller. This is also where the actual scans happen and the scan results are created. Each scan goes through several phases:
[id="compliance-scan-pending-phase_{context}"]
=== Pending phase

The scan is validated for correctness in this phase. If some parameters like storage size are invalid, the scan transitions to DONE with an ERROR result; otherwise, it proceeds to the Launching phase.

[id="compliance-scan-launching-phase_{context}"]
=== Launching phase

In this phase, several config maps are created that contain either the environment for the scanner pods or directly the script that the scanner pods will be evaluating. List the config maps:
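
The following sketch assumes a scan named `rhcos4-e8-worker` and relies on the `compliance.openshift.io/scan-name` label that is applied to objects created for a scan; the exact set of labels on the config maps might differ:

[source,terminal]
----
$ oc get configmaps -l compliance.openshift.io/scan-name=rhcos4-e8-worker
----
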
In the aggregating phase, the scan controller spawns yet another pod called the aggregator pod. Its purpose is to take the result `ConfigMap` objects, read the results, and create the corresponding Kubernetes object for each check result. If the check failure can be automatically remediated, a `ComplianceRemediation` object is created. To provide human-readable metadata for the checks and remediations, the aggregator pod also mounts the OpenSCAP content using an init container.

The results of the aggregating phase are `ComplianceCheckResult` objects:

[source,terminal]
----
$ oc get compliancecheckresults -lcompliance.openshift.io/scan-name=rhcos4-e8-worker
----
.Example output
[source,terminal]
----
NAME STATUS SEVERITY
rhcos4-e8-worker-accounts-no-uid-except-zero PASS high
rhcos4-e8-worker-audit-rules-dac-modification-chmod FAIL medium
----

and `ComplianceRemediation` objects:

[source,terminal]
----
$ oc get complianceremediations -lcompliance.openshift.io/scan-name=rhcos4-e8-worker
----

After these CRs are created, the aggregator pod exits and the scan moves on to the Done phase.

[id="compliance-scan-done-phase_{context}"]
=== Done phase

In the final scan phase, the scan resources are cleaned up if needed and the `ResultServer` deployment is either scaled down (if the scan was one-time) or deleted if the scan is continuous; the next scan instance would then recreate the deployment.

After the scan reaches the Done phase, nothing else happens on its own unless the remediations are set to be applied automatically with `autoApplyRemediations: true`. The {product-title} administrator would now review the remediations and apply them as needed. If the remediations are set to be applied automatically, the `ComplianceSuite` controller takes over in the Done phase, pauses the machine config pool to which the scan maps, and applies all the remediations in one go. If a remediation is applied, the `ComplianceRemediation` controller takes over.
== `ComplianceRemediation` lifecycle and debugging
Each pod that is spawned by the Compliance Operator is labeled specifically with the scan it belongs to and the work it does. The scan identifier is labeled with the `compliance.openshift.io/scan-name` label. The workload identifier is labeled with the `workload` label.
The Compliance Operator schedules the following workloads:
* *scanner*: Performs the compliance scan.

When debugging and logs are required for a certain workload, run:
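
The following sketch uses the `workload` label described above; `<workload_name>` and `<container_name>` are placeholders for the workload and its container:

[source,terminal]
----
$ oc logs -l workload=<workload_name> -c <container_name>
----
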
Although you can use the `autoApplyRemediations` boolean parameter in a `ComplianceSuite` object, you can alternatively annotate the object with `compliance.openshift.io/apply-remediations`. This allows the Operator to apply all of the created remediations.
.Procedure
* Apply the `compliance.openshift.io/apply-remediations` annotation by running:
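+
For example, assuming a suite named `<suite_name>` in the Compliance Operator namespace, a command of the following form applies the annotation:
+
[source,terminal]
----
$ oc annotate compliancesuites/<suite_name> compliance.openshift.io/apply-remediations=
----
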
In some cases, a scan with newer content might mark remediations as `OUTDATED`. As an administrator, you can apply the `compliance.openshift.io/remove-outdated` annotation to apply new remediations and remove the outdated ones.
.Procedure
* Apply the `compliance.openshift.io/remove-outdated` annotation:
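+
The following sketch assumes the annotation is set on the affected scan object, with `<scan_name>` as a placeholder:
+
[source,terminal]
----
$ oc annotate compliancescans/<scan_name> compliance.openshift.io/remove-outdated=
----
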

modules/compliance-custom-storage.adoc

A related parameter is `rawResultStorage.rotation`, which controls how many scans are retained in the PV before the older scans are rotated. The default value is 3; setting the rotation policy to 0 disables rotation. Given the default rotation policy and an estimate of 100 MB per raw ARF scan report, you can calculate the right PV size for your environment.
Because {product-title} can be deployed in a variety of public clouds or bare metal, the Compliance Operator cannot determine available storage configurations. By default, the Compliance Operator will try to create the PV for storing results using the default storage class of the cluster, but a custom storage class can be configured using the `rawResultStorage.StorageClassName` attribute.
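
For example, a `ScanSetting` object that overrides the raw result storage might look like the following sketch. The values are illustrative, the storage class name is an assumption, and unrelated fields such as the scan schedule are omitted:

[source,yaml]
----
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSetting
metadata:
  name: custom-storage-scansetting
rawResultStorage:
  storageClassName: standard # use a storage class that exists in your cluster
  size: 2Gi                  # requested PV size for the raw ARF results
  rotation: 5                # keep the five most recent raw result sets
roles:
  - worker
----
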
By default, the `ComplianceCheckResult` objects are labeled with several useful labels that allow you to query the checks and decide on the next steps after the results are generated.
List checks that belong to a specific suite:
[source,terminal]
----
$ oc get compliancecheckresults -l compliance.openshift.io/suite=example-compliancesuite
----
List checks that belong to a specific scan:
[source,terminal]
----
$ oc get compliancecheckresults -l compliance.openshift.io/scan=example-compliancescan
----
Not all `ComplianceCheckResult` objects create `ComplianceRemediation` objects. Only `ComplianceCheckResult` objects that can be remediated automatically do. A `ComplianceCheckResult` object has a related remediation if it is labeled with the `compliance.openshift.io/automated-remediation` label. The name of the remediation is the same as the name of the check.
List all failing checks that can be remediated automatically:
[source,terminal]
----
$ oc get compliancecheckresults -l 'compliance.openshift.io/check-status=FAIL,compliance.openshift.io/automated-remediation'
----
List all failing checks that must be remediated manually:
[source,terminal]
----
$ oc get compliancecheckresults -l 'compliance.openshift.io/check-status=FAIL,!compliance.openshift.io/automated-remediation'
----
The manual remediation steps are typically stored in the `description` attribute in the `ComplianceCheckResult` object.
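
For example, assuming the `description` field sits at the top level of the object, the manual steps for a single check can be printed with a JSONPath query, where `<check_name>` is a placeholder:

[source,terminal]
----
$ oc get compliancecheckresults <check_name> -o jsonpath='{.description}'
----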

modules/compliance-inconsistent.adoc

[id="compliance-inconsistent_{context}"]
= Inconsistent remediations
The `ScanSetting` object lists the node roles that the compliance scans generated from the `ScanSetting` or `ScanSettingBinding` objects would scan. Each node role usually maps to a machine config pool.
[IMPORTANT]
====
It is expected that all machines in a machine config pool are identical and all scan results from the nodes in a pool should be identical.
====
If some of the results are different from others, the Compliance Operator flags a `ComplianceCheckResult` object where some of the nodes will report as `INCONSISTENT`. All `ComplianceCheckResult` objects are also labeled with `compliance.openshift.io/inconsistent-check`.

Because the number of machines in a pool might be quite large, the Compliance Operator attempts to find the most common state and list the nodes that differ from the common state. The most common state is stored in the `compliance.openshift.io/most-common-status` annotation and the annotation `compliance.openshift.io/inconsistent-source` contains pairs of `hostname:status` of check statuses that differ from the most common status. If no common state can be found, all the `hostname:status` pairs are listed in the `compliance.openshift.io/inconsistent-source` annotation.
If possible, a remediation is still created so that the cluster can converge to a compliant status. However, this might not always be possible and correcting the difference between nodes must be done manually. The compliance scan must be re-run to get a consistent result by annotating the scan with the `compliance.openshift.io/rescan=` option:
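
For example, with `<scan_name>` as a placeholder for the affected `ComplianceScan` object:

[source,terminal]
----
$ oc annotate compliancescans/<scan_name> compliance.openshift.io/rescan=
----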