
Commit e0ffb51

Merge pull request #29673 from ahardin-rh/cmp-operator-updates
Updating Compliance Operator docs
2 parents ff5950c + 344aa99 commit e0ffb51

16 files changed: +438, -22 lines changed

_topic_map.yml

Lines changed: 6 additions & 2 deletions
@@ -590,16 +590,20 @@ Topics:
   - Name: Compliance Operator
     Dir: compliance_operator
     Topics:
+    - Name: Installing the Compliance Operator
+      File: compliance-operator-installation
+    - Name: Compliance Operator scans
+      File: compliance-scans
     - Name: Understanding the Compliance Operator
       File: compliance-operator-understanding
     - Name: Managing the Compliance Operator
       File: compliance-operator-manage
-    - Name: Managing Compliance Operator remediation
-      File: compliance-operator-remediation
     - Name: Tailoring the Compliance Operator
       File: compliance-operator-tailor
     - Name: Retrieving Compliance Operator raw results
       File: compliance-operator-raw-results
+    - Name: Managing Compliance Operator remediation
+      File: compliance-operator-remediation
     - Name: Performing advanced Compliance Operator tasks
       File: compliance-operator-advanced
     - Name: Troubleshooting the Compliance Operator

modules/compliance-anatomy.adoc

Lines changed: 68 additions & 12 deletions
@@ -2,17 +2,22 @@
 //
 // * security/compliance_operator/compliance-operator-troubleshooting.adoc

-[id="compliance_anatomy_{context}"]
+[id="compliance-anatomy_{context}"]
 = Anatomy of a scan

 The following sections outline the components and stages of Compliance Operator scans.

+[id="compliance-anatomy-compliance-sources_{context}"]
 == Compliance sources
-The compliance content is stored in `Profile` objects that are generated from a `ProfileBundle`. The Compliance Operator creates a `ProfileBundle` for the cluster and another for the cluster nodes.
+The compliance content is stored in `Profile` objects that are generated from a `ProfileBundle` object. The Compliance Operator creates a `ProfileBundle` object for the cluster and another for the cluster nodes.

 [source,terminal]
 ----
 $ oc get profilebundle.compliance
+----
+
+[source,terminal]
+----
 $ oc get profile.compliance
 ----

@@ -21,12 +26,25 @@ The `ProfileBundle` objects are processed by deployments labeled with the `Bundl
 [source,terminal]
 ----
 $ oc logs -lprofile-bundle=ocp4 -c profileparser
+----
+
+[source,terminal]
+----
 $ oc get deployments,pods -lprofile-bundle=ocp4
+----
+
+[source,terminal]
+----
 $ oc logs pods/<pod-name>
+----
+
+[source,terminal]
+----
 $ oc describe pod/<pod-name> -c profileparser
 ----

-== The `ScanSetting` and `ScanSettingBinding` lifecycle and debugging
+[id="compliance-anatomy-scan-setting-scan-binding-lifecycle_{context}"]
+== The ScanSetting and ScanSettingBinding objects lifecycle and debugging
 With valid compliance content sources, the high-level `ScanSetting` and `ScanSettingBinding` objects can be used to generate `ComplianceSuite` and `ComplianceScan` objects:

 [source,yaml]
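
The binding YAML itself falls outside the displayed context of this hunk. As a point of reference only, a minimal `ScanSettingBinding` sketch consistent with the profile and scan names used later in this commit (the binding name and the `default` ScanSetting are assumptions, not part of the diff):

[source,yaml]
----
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSettingBinding
metadata:
  name: my-companys-compliance-requirements
profiles:
  # Profile generated from the rhcos4 ProfileBundle; drives the rhcos4-e8-worker scan shown below
  - name: rhcos4-e8
    kind: Profile
    apiGroup: compliance.openshift.io/v1alpha1
settingsRef:
  # Assumed ScanSetting name
  name: default
  kind: ScanSetting
  apiGroup: compliance.openshift.io/v1alpha1
----
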
@@ -72,7 +90,8 @@ Events:

 Now a `ComplianceSuite` object is created. The flow continues to reconcile the newly created `ComplianceSuite`.

-== `ComplianceSuite` lifecycle and debugging
+[id="compliance-suite-lifecycle-debugging_{context}"]
+== ComplianceSuite custom resource lifecycle and debugging
 The `ComplianceSuite` CR is a wrapper around `ComplianceScan` CRs. The `ComplianceSuite` CR is handled by a controller tagged with `logger=suitectrl`.
 This controller handles creating scans from a suite, reconciling and aggregating individual Scan statuses into a single Suite status. If a suite is set to execute periodically, the `suitectrl` also handles creating a `CronJob` CR that re-runs the scans in the suite after the initial run is done:

@@ -90,12 +109,15 @@ NAME SCHEDULE SUSPEND ACTIVE LA

 For the most important issues, events are emitted. View them with `oc describe compliancesuites/<name>`. The `Suite` objects also have a `Status` subresource that is updated when any of the `Scan` objects that belong to this suite update their `Status` subresource. After all expected scans are created, control is passed to the scan controller.

-== `ComplianceScan` lifecycle and debugging
+[id="compliance-scan-lifecycle-debugging_{context}"]
+== ComplianceScan custom resource lifecycle and debugging
 The `ComplianceScan` CRs are handled by the `scanctrl` controller. This is also where the actual scans happen and the scan results are created. Each scan goes through several phases:

+[id="compliance-scan-pending-phase_{context}"]
 === Pending phase
 The scan is validated for correctness in this phase. If some parameters like storage size are invalid, the scan transitions to DONE with an ERROR result; otherwise it proceeds to the Launching phase.

+[id="compliance-scan-launching-phase_{context}"]
 === Launching phase
 In this phase, several config maps are created that contain either the environment for the scanner pods or directly the script that the scanner pods will evaluate. List the config maps:

@@ -128,6 +150,7 @@ rhcos4-e8-worker-ip-10-0-169-90.eu-north-1.compute.internal-pod 0/2 Comple
 At this point, the scan proceeds to the Running phase.
 ----

+[id="compliance-scan-running-phase_{context}"]
 === Running phase
 The running phase waits until the scanner pods finish. The following terms and processes are in use in the running phase:

@@ -139,7 +162,12 @@ The running phase waits until the scanner pods finish. The following terms and p
 +
 [source,terminal]
 ----
-$ oc describe cm/rhcos4-e8-worker-ip-10-0-169-90.eu-north-1.compute.internal-pod
+$ oc describe cm/rhcos4-e8-worker-ip-10-0-169-90.eu-north-1.compute.internal-pod
+----
++
+.Example output
+[source,terminal]
+----
 Name:         rhcos4-e8-worker-ip-10-0-169-90.eu-north-1.compute.internal-pod
 Namespace:    openshift-compliance
 Labels:       compliance.openshift.io/scan-name-scan=rhcos4-e8-worker
@@ -169,6 +197,7 @@ Scanner pods for `Platform` scans are similar, except:

 When the scanner pods are done, the scans move on to the Aggregating phase.

+[id="compliance-scan-aggregating-phase_{context}"]
 === Aggregating phase
 In the aggregating phase, the scan controller spawns yet another pod called the aggregator pod. Its purpose is to take the result `ConfigMap` objects, read the results, and for each check result create the corresponding Kubernetes object. If the check failure can be automatically remediated, a `ComplianceRemediation` object is created. To provide human-readable metadata for the checks and remediations, the aggregator pod also mounts the OpenSCAP content using an init container.

@@ -177,6 +206,11 @@ When a config map is processed by an aggregator pod, it is labeled the `complian
 [source,terminal]
 ----
 $ oc get compliancecheckresults -lcompliance.openshift.io/scan-name=rhcos4-e8-worker
+----
+
+.Example output
+[source,terminal]
+----
 NAME                                                  STATUS   SEVERITY
 rhcos4-e8-worker-accounts-no-uid-except-zero          PASS     high
 rhcos4-e8-worker-audit-rules-dac-modification-chmod   FAIL     medium
@@ -186,6 +220,11 @@ and `ComplianceRemediation` objects:
 [source,terminal]
 ----
 $ oc get complianceremediations -lcompliance.openshift.io/scan-name=rhcos4-e8-worker
+----
+
+.Example output
+[source,terminal]
+----
 NAME                                                  STATE
 rhcos4-e8-worker-audit-rules-dac-modification-chmod   NotApplied
 rhcos4-e8-worker-audit-rules-dac-modification-chown   NotApplied
@@ -197,6 +236,7 @@ rhcos4-e8-worker-audit-rules-execution-setfiles NotApplied

 After these CRs are created, the aggregator pod exits and the scan moves on to the Done phase.

+[id="compliance-scan-done-phase_{context}"]
 === Done phase
 In the final scan phase, the scan resources are cleaned up if needed and the `ResultServer` deployment is either scaled down (if the scan was one-time) or deleted if the scan is continuous; the next scan instance would then recreate the deployment again.

@@ -209,7 +249,8 @@ $ oc annotate compliancescans/<scan_name> compliance.openshift.io/rescan=

 After the scan reaches the Done phase, nothing else happens on its own unless the remediations are set to be applied automatically with `autoApplyRemediations: true`. The {product-title} administrator would now review the remediations and apply them as needed. If the remediations are set to be applied automatically, the `ComplianceSuite` controller takes over in the Done phase, pauses the machine config pool to which the scan maps and applies all the remediations in one go. If a remediation is applied, the `ComplianceRemediation` controller takes over.

-== `ComplianceRemediation` lifecycle and debugging
+[id="compliance-remediation-lifecycle-debugging_{context}"]
+== ComplianceRemediation controller lifecycle and debugging
 The example scan has reported some findings. One of the remediations can be enabled by toggling its `apply` attribute to `true`:

 [source,terminal]
@@ -224,6 +265,11 @@ The `MachineConfig` object always begins with `75-` and is named after the scan
 [source,terminal]
 ----
 $ oc get mc | grep 75-
+----
+
+.Example output
+[source,terminal]
+----
 75-rhcos4-e8-worker-my-companys-compliance-requirements   2.2.0   2m46s
 ----

@@ -232,6 +278,11 @@ The remediations the `mc` currently consists of are listed in the machine config
 [source,terminal]
 ----
 $ oc describe mc/75-rhcos4-e8-worker-my-companys-compliance-requirements
+----
+
+.Example output
+[source,terminal]
+----
 Name:         75-rhcos4-e8-worker-my-companys-compliance-requirements
 Labels:       machineconfiguration.openshift.io/role=worker
 Annotations:  remediation/rhcos4-e8-worker-audit-rules-dac-modification-chmod:
@@ -257,16 +308,21 @@ The scan will run and finish. Check for the remediation to pass:
 [source,terminal]
 ----
 $ oc get compliancecheckresults/rhcos4-e8-worker-audit-rules-dac-modification-chmod
+----
+
+.Example output
+[source,terminal]
+----
 NAME                                                  STATUS   SEVERITY
 rhcos4-e8-worker-audit-rules-dac-modification-chmod   PASS     medium
 ----

-// TODO: This shouldn't be a level one in this module
-= Useful labels
+[id="compliance-operator-useful-labels_{context}"]
+== Useful labels

-Each pod that is spawned by the compliance-operator is labeled specifically with the scan it belongs to and the work it does. The scan identifier is labeled with the `compliance.openshift.io/scan-name` label. The workload identifier is labeled with the `workload` label.
+Each pod that is spawned by the Compliance Operator is labeled specifically with the scan it belongs to and the work it does. The scan identifier is labeled with the `compliance.openshift.io/scan-name` label. The workload identifier is labeled with the `workload` label.

-The compliance-operator schedules the following workloads:
+The Compliance Operator schedules the following workloads:

 * *scanner*: Performs the compliance scan.

@@ -282,5 +338,5 @@ When debugging and logs are required for a certain workload, run:

 [source,terminal]
 ----
-$ oc logs -l workload=<workload_name> -c <container-name>
+$ oc logs -l workload=<workload_name> -c <container_name>
 ----
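
As a usage note for the labels described above, the scan-name and workload labels can be combined to narrow the logs to a single workload of a single scan; a sketch reusing the scan from this module (the `scanner` container name is an assumption):

[source,terminal]
----
$ oc logs -l compliance.openshift.io/scan-name=rhcos4-e8-worker,workload=scanner -c scanner
----
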
Lines changed: 17 additions & 0 deletions
@@ -0,0 +1,17 @@
+// Module included in the following assemblies:
+//
+// * security/compliance_operator/compliance-operator-advanced.adoc
+
+[id="installing-compliance-operator-cli_{context}"]
+= Applying remediations generated by suite scans
+
+Although you can use the `autoApplyRemediations` boolean parameter in a `ComplianceSuite` object, you can alternatively annotate the object with `compliance.openshift.io/apply-remediations`. This allows the Operator to apply all of the created remediations.
+
+.Procedure
+
+* Apply the `compliance.openshift.io/apply-remediations` annotation by running:
+
+[source,terminal]
+----
+$ oc annotate compliancesuites/<suite_name> compliance.openshift.io/apply-remediations=
+----
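
For orientation, the `autoApplyRemediations` parameter mentioned in this module sits in the `ComplianceSuite` spec; a minimal sketch with an illustrative suite name and the scan definition elided:

[source,yaml]
----
apiVersion: compliance.openshift.io/v1alpha1
kind: ComplianceSuite
metadata:
  name: example-compliancesuite
  namespace: openshift-compliance
spec:
  autoApplyRemediations: true
  scans:
    - name: rhcos4-e8-worker
      # remaining scan fields omitted
----
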
Lines changed: 19 additions & 0 deletions
@@ -0,0 +1,19 @@
+// Module included in the following assemblies:
+//
+// * security/compliance_operator/compliance-operator-advanced.adoc
+
+[id="automatically-update-remediations_{context}"]
+= Automatically update remediations
+
+In some cases, a scan with newer content might mark remediations as `OUTDATED`. As an administrator, you can apply the `compliance.openshift.io/remove-outdated` annotation to apply new remediations and remove the outdated ones.
+
+.Procedure
+
+* Apply the `compliance.openshift.io/remove-outdated` annotation:
+
+[source,terminal]
+----
+$ oc annotate compliancesuites/<suite_name> compliance.openshift.io/remove-outdated=
+----
+
+Alternatively, set the `autoUpdateRemediations` flag in a `ScanSetting` or `ComplianceSuite` object to update the remediations automatically.
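
A sketch of that alternative, with the flags set on a `ScanSetting` object (the top-level field placement and the object name are assumptions, not part of the diff):

[source,yaml]
----
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSetting
metadata:
  name: default-auto-apply
  namespace: openshift-compliance
autoApplyRemediations: true
autoUpdateRemediations: true
----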

modules/compliance-custom-storage.adoc

Lines changed: 1 addition & 1 deletion
@@ -8,7 +8,7 @@ While the custom resources such as `ComplianceCheckResult` represent an aggregat

 A related parameter is `rawResultStorage.rotation`, which controls how many scans are retained in the PV before the older scans are rotated. The default value is 3; setting the rotation policy to 0 disables the rotation. Given the default rotation policy and an estimate of 100MB per raw ARF scan report, you can calculate the right PV size for your environment.

-
+[id="using-custom-result-storage-values_{context}"]
 == Using custom result storage values
 Because {product-title} can be deployed in a variety of public clouds or bare metal, the Compliance Operator cannot determine available storage configurations. By default, the Compliance Operator will try to create the PV for storing results using the default storage class of the cluster, but a custom storage class can be configured using the `rawResultStorage.StorageClassName` attribute.

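A minimal sketch of the custom result storage discussed in this module, set on a `ScanSetting` object (field names assumed from the operator CRD; the storage class name and size are illustrative):

[source,yaml]
----
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSetting
metadata:
  name: default
  namespace: openshift-compliance
rawResultStorage:
  storageClassName: standard   # custom storage class for the raw-results PV
  size: "2G"                   # illustrative size, estimated from ~100MB per ARF report
  rotation: 3                  # default rotation policy described above
----
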
Lines changed: 40 additions & 0 deletions
@@ -0,0 +1,40 @@
+// Module included in the following assemblies:
+//
+// * security/compliance_operator/compliance-operator-remediation.adoc
+
+[id="filtering-failed-compliance-check-results_{context}"]
+= Filters for failed compliance check results
+
+By default, the `ComplianceCheckResult` objects are labeled with several useful labels that allow you to query the checks and decide on the next steps after the results are generated.
+
+List checks that belong to a specific suite:
+
+[source,terminal]
+----
+$ oc get compliancecheckresults -l compliance.openshift.io/suite=example-compliancesuite
+----
+
+List checks that belong to a specific scan:
+
+[source,terminal]
+----
+$ oc get compliancecheckresults -l compliance.openshift.io/scan=example-compliancescan
+----
+
+Not all `ComplianceCheckResult` objects create `ComplianceRemediation` objects. Only `ComplianceCheckResult` objects that can be remediated automatically do. A `ComplianceCheckResult` object has a related remediation if it is labeled with the `compliance.openshift.io/automated-remediation` label. The name of the remediation is the same as the name of the check.
+
+List all failing checks that can be remediated automatically:
+
+[source,terminal]
+----
+$ oc get compliancecheckresults -l 'compliance.openshift.io/check-status=FAIL,compliance.openshift.io/automated-remediation'
+----
+
+List all failing checks that must be remediated manually:
+
+[source,terminal]
+----
+$ oc get compliancecheckresults -l 'compliance.openshift.io/check-status=FAIL,!compliance.openshift.io/automated-remediation'
+----
+
+The manual remediation steps are typically stored in the `description` attribute in the `ComplianceCheckResult` object.
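
A sketch of reading that `description` attribute for one failing check, reusing a check name shown earlier in this commit:

[source,terminal]
----
$ oc get compliancecheckresults/rhcos4-e8-worker-audit-rules-dac-modification-chmod -o=jsonpath='{.description}'
----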

modules/compliance-inconsistent.adoc

Lines changed: 3 additions & 3 deletions
@@ -4,18 +4,18 @@

 [id="compliance-inconsistent_{context}"]
 = Inconsistent remediations
-The `ScanSetting` lists the node roles that the compliance scans generated from the `ScanSetting` or `ScanSettingBinding` would scan. Each node role usually maps to a machine config pool.
+The `ScanSetting` object lists the node roles that the compliance scans generated from the `ScanSetting` or `ScanSettingBinding` objects would scan. Each node role usually maps to a machine config pool.

 [IMPORTANT]
 ====
-It is expected that all machines in a machine config pool are identical and all scan results from the nodes in a Pool should be identical.
+It is expected that all machines in a machine config pool are identical and all scan results from the nodes in a pool should be identical.
 ====

 If some of the results are different from others, the Compliance Operator flags a `ComplianceCheckResult` object where some of the nodes will report as `INCONSISTENT`. All `ComplianceCheckResult` objects are also labeled with `compliance.openshift.io/inconsistent-check`.

 Because the number of machines in a pool might be quite large, the Compliance Operator attempts to find the most common state and list the nodes that differ from the common state. The most common state is stored in the `compliance.openshift.io/most-common-status` annotation and the annotation `compliance.openshift.io/inconsistent-source` contains pairs of `hostname:status` of check statuses that differ from the most common status. If no common state can be found, all the `hostname:status` pairs are listed in the `compliance.openshift.io/inconsistent-source` annotation.

-If possible, a remediation is still created so that the cluster can converge to a compliant status. However, this might not always be possible and correcting the difference between nodes must be done manually. The `ComplianceScan` must be re-run to get a consistent result by annotating the scan with the `compliance.openshift.io/rescan=` option:
+If possible, a remediation is still created so that the cluster can converge to a compliant status. However, this might not always be possible and correcting the difference between nodes must be done manually. The compliance scan must be re-run to get a consistent result by annotating the scan with the `compliance.openshift.io/rescan=` option:

 [source,terminal]
 ----
