
Commit a96d2c2

Terminology style updates for security book
1 parent 895a060 commit a96d2c2

29 files changed: +100 -92 lines

modules/checking-file-intergrity-cr-status.adoc

Lines changed: 1 addition & 1 deletion
@@ -3,7 +3,7 @@
// * security/file_integrity_operator/file-integrity-operator-understanding.adoc

[id="checking-the-file-integrity-CR-status_{context}"]
- = Checking the FileIntegrity custom resource status
+ = Checking the `FileIntegrity` custom resource status

The `FileIntegrity` custom resource (CR) reports its status through the
`.status.phase` subresource.
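As an illustration of reading that subresource, a command along these lines could be used. This is a sketch only; the resource name and the `openshift-file-integrity` namespace are assumptions, not part of this commit:

[source,terminal]
----
$ oc get fileintegrities/<name> -n openshift-file-integrity -o jsonpath="{ .status.phase }"
----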

modules/compliance-anatomy.adoc

Lines changed: 22 additions & 21 deletions
@@ -26,7 +26,7 @@ $ oc logs pods/<pod-name>
$ oc describe pod/<pod-name> -c profileparser
----

- == The ScanSetting and ScanSettingBinding lifecycle and debugging
+ == The `ScanSetting` and `ScanSettingBinding` lifecycle and debugging
With valid compliance content sources, the high-level `ScanSetting` and `ScanSettingBinding` objects can be used to generate `ComplianceSuite` and `ComplianceScan` objects:

[source,yaml]
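The YAML that follows in the file is cut off at this hunk boundary. Purely as an illustration, a `ScanSettingBinding` that drives this generation could look like the sketch below; the binding name and the `rhcos4-e8` profile reuse names that appear later in this commit, and the reference to a `ScanSetting` named `default` is an assumption:

[source,yaml]
----
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSettingBinding
metadata:
  name: my-companys-compliance-requirements
profiles:
  # Profile to scan against (name reused from elsewhere in this diff)
  - apiGroup: compliance.openshift.io/v1alpha1
    kind: Profile
    name: rhcos4-e8
settingsRef:
  # Assumed reference to an existing ScanSetting named "default"
  apiGroup: compliance.openshift.io/v1alpha1
  kind: ScanSetting
  name: default
----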
@@ -72,9 +72,9 @@ Events:

Now a `ComplianceSuite` object is created. The flow continues to reconcile the newly created `ComplianceSuite`.

- == ComplianceSuite lifecycle and debugging
+ == `ComplianceSuite` lifecycle and debugging
The `ComplianceSuite` CR is a wrapper around `ComplianceScan` CRs. The `ComplianceSuite` CR is handled by controller tagged with `logger=suitectrl`.
- This controller handles creating Scans from a Suite, reconciling and aggregating individual Scan statuses into a single Suite status. If a Suite is set to execute periodically, the `suitectrl` also handles creating a `CronJob` CR that re-runs the Scans in the Suite after the initial run is done:
+ This controller handles creating scans from a suite, reconciling and aggregating individual Scan statuses into a single Suite status. If a suite is set to execute periodically, the `suitectrl` also handles creating a `CronJob` CR that re-runs the scans in the suite after the initial run is done:

[source,terminal]
----
@@ -85,26 +85,26 @@ $ oc get cronjobs
[source,terminal]
----
NAME          SCHEDULE    SUSPEND   ACTIVE   LAST SCHEDULE   AGE
- <cron-name>   0 1 * * *   False     0        <none>          151m
+ <cron_name>   0 1 * * *   False     0        <none>          151m
----

- For the most important issues, Events are emitted. View them with `oc describe compliancesuites/$name`. The Suite objects also have a Status subresource that is updated when any of Scan objects that belong to this suite update their Status subresource. After all expected scans are created, control is passed to the scan controller.
+ For the most important issues, events are emitted. View them with `oc describe compliancesuites/<name>`. The `Suite` objects also have a `Status` subresource that is updated when any of `Scan` objects that belong to this suite update their `Status` subresource. After all expected scans are created, control is passed to the scan controller.

- == ComplianceScan lifecycle and debugging
+ == `ComplianceScan` lifecycle and debugging
The `ComplianceScan` CRs are handled by the `scanctrl` controller. This is also where the actual scans happen and the scan results are created. Each scan goes through several phases:

=== Pending phase
The scan is validated for correctness in this phase. If some parameters like storage size are invalid, the scan transitions to DONE with ERROR result, otherwise proceeds to the Launching phase.

=== Launching phase
- In this phase, several `ConfigMaps` that contain either environment for the scanner pods or directly the script that the scanner pods will be evaluating. List the ConfigMaps:
+ In this phase, several config maps that contain either environment for the scanner pods or directly the script that the scanner pods will be evaluating. List the config maps:

[source,terminal]
----
$ oc get cm -lcompliance.openshift.io/scan-name=rhcos4-e8-worker,complianceoperator.openshift.io/scan-script=
----

- These `ConfigMaps` will be used by the scanner pods. If you ever needed to modify the scanner behavior, change the scanner debug level or print the raw results, modifying the `ConfigMaps` is the way to go. Afterwards, a `PersistentVolumeClaim` is created per scan in order to store the raw ARF results:
+ These config maps will be used by the scanner pods. If you ever needed to modify the scanner behavior, change the scanner debug level or print the raw results, modifying the config maps is the way to go. Afterwards, a persistent volume claim is created per scan in order to store the raw ARF results:

[source,terminal]
----
@@ -135,7 +135,7 @@ The running phase waits until the scanner pods finish. The following terms and p

* *scanner*: This container runs the scan. For node scans, the container mounts the node filesystem as `/host` and mounts the content delivered by the init container. The container also mounts the `entrypoint` `ConfigMap` created in the Launching phase and executes it. The default script in the entrypoint `ConfigMap` executes OpenSCAP and stores the result files in the `/results` directory shared between the pod's containers. Logs from this pod can be viewed to determine what the OpenSCAP scanner checked. More verbose output can be viewed with the `debug` flag.

- * *logcollector*: The logcollector container waits until the scanner container finishes. Then, it uploads the full ARF results to the `ResultServer` and separately uploads the XCCDF results along with scan result and OpenSCAP result code as a `ConfigMap.` These result Configaps are labeled with the scan name (`compliance.openshift.io/scan-name=$scan_name`):
+ * *logcollector*: The logcollector container waits until the scanner container finishes. Then, it uploads the full ARF results to the `ResultServer` and separately uploads the XCCDF results along with scan result and OpenSCAP result code as a `ConfigMap.` These result config maps are labeled with the scan name (`compliance.openshift.io/scan-name=<scan_name>`):
+
[source,terminal]
----
@@ -165,14 +165,14 @@ Scanner pods for `Platform` scans are similar, except:

* There is one extra init container called `api-resource-collector` that reads the OpenSCAP content provided by the content-container init, container, figures out which API resources the content needs to examine and stores those API resources to a shared directory where the `scanner` container would read them from.

- * The `scanner` container does not need to mount the host filesystem.
+ * The `scanner` container does not need to mount the host file system.

When the scanner pods are done, the scans move on to the Aggregating phase.

=== Aggregating phase
- In the aggregating phase, the scan controller spawns yet another pod called the aggregator pod. Its purpose it to take the result `ConfigMap` objects, read the results and for each check result create the corresponding k8s object. If the check failure can be automatically remediated, a `ComplianceRemediation` object is created. To provide human-readable metadata for the checks and remediations, the aggregator pod also mounts the OpenSCAP content using an init container.
+ In the aggregating phase, the scan controller spawns yet another pod called the aggregator pod. Its purpose it to take the result `ConfigMap` objects, read the results and for each check result create the corresponding Kubernetes object. If the check failure can be automatically remediated, a `ComplianceRemediation` object is created. To provide human-readable metadata for the checks and remediations, the aggregator pod also mounts the OpenSCAP content using an init container.

- When a `ConfigMap` is processed by an aggregator pod,it is labeled the `compliance-remediations/processed` label. The result of this phase are `ComplianceCheckResult` objects:
+ When a config map is processed by an aggregator pod, it is labeled the `compliance-remediations/processed` label. The result of this phase are `ComplianceCheckResult` objects:

[source,terminal]
----
@@ -207,17 +207,17 @@ It is also possible to trigger a re-run of a scan in the Done phase by annotatin
$ oc annotate compliancescans/<scan_name> compliance.openshift.io/rescan=
----

- After the scan reaches the Done phase, nothing else happens on its own unless the remediations are set to be applied automatically with `autoApplyRemediations: true`. The {product-title} administrator would now review the remediations and apply them as needed. If the remediations are set to be applied automatically, the `ComplianceSuite` controller takes over in the Done phase, pauses the `MachineConfigPool` to which the scan maps to and applies all the remediations in one go. If a remediation is applied, the `ComplianceRemediation` controller takes over.
+ After the scan reaches the Done phase, nothing else happens on its own unless the remediations are set to be applied automatically with `autoApplyRemediations: true`. The {product-title} administrator would now review the remediations and apply them as needed. If the remediations are set to be applied automatically, the `ComplianceSuite` controller takes over in the Done phase, pauses the machine config pool to which the scan maps to and applies all the remediations in one go. If a remediation is applied, the `ComplianceRemediation` controller takes over.

- = ComplianceRemediation lifecycle and debugging
- The example scan has reported some findings. One of the remediations can be enabled by toggling it's `apply` attribute to `true`:
+ == `ComplianceRemediation` lifecycle and debugging
+ The example scan has reported some findings. One of the remediations can be enabled by toggling its `apply` attribute to `true`:

[source,terminal]
----
$ oc patch complianceremediations/rhcos4-e8-worker-audit-rules-dac-modification-chmod --patch '{"spec":{"apply":true}}' --type=merge
----

- The ComplianceRemediation controller (`logger=remediationctrl`) reconciles the modified object. The result of the reconciliation is change of status of the remediation object that is reconciled, but also a change of the rendered per-suite `MachineConfig` object that contains all the applied remediations.
+ The `ComplianceRemediation` controller (`logger=remediationctrl`) reconciles the modified object. The result of the reconciliation is change of status of the remediation object that is reconciled, but also a change of the rendered per-suite `MachineConfig` object that contains all the applied remediations.

The `MachineConfig` object always begins with `75-` and is named after the scan and the suite:

@@ -227,7 +227,7 @@ $ oc get mc | grep 75-
75-rhcos4-e8-worker-my-companys-compliance-requirements   2.2.0   2m46s
----

- The remediations the `mc` currently consists of are listed in the MachineConfig's annotations:
+ The remediations the `mc` currently consists of are listed in the machine config's annotations:

[source,terminal]
----
@@ -237,15 +237,15 @@ Labels: machineconfiguration.openshift.io/role=worker
Annotations: remediation/rhcos4-e8-worker-audit-rules-dac-modification-chmod:
----

- The ComplianceRemediation controller's algorithm works like this:
+ The `ComplianceRemediation` controller's algorithm works like this:

* All currently applied remediations are read into an initial remediation set.
* If the reconciled remediation is supposed to be applied, it is added to the set.
* A `MachineConfig` object is rendered from the set and annotated with names of remediations in the set. If the set is empty (the last remediation was unapplied), the rendered `MachineConfig` object is removed.
- * If and only if the rendered `MachineConfig` is different from the one already applied in the cluster, the applied MC is updated (or created, or deleted).
- * Creating or modifying a `MachineConfig` object triggers a reboot of nodes that match the `machineconfiguration.openshift.io/role` label - see the MachineConfig Operator documentation for more details.
+ * If and only if the rendered machine config is different from the one already applied in the cluster, the applied MC is updated (or created, or deleted).
+ * Creating or modifying a `MachineConfig` object triggers a reboot of nodes that match the `machineconfiguration.openshift.io/role` label - see the Machine Config Operator documentation for more details.

- The remediation loop ends once the rendered MachineConfig is updated, if needed, and the reconciled remediation object status is updated. In our case, applying the remediation would trigger a reboot. After the reboot, annotate the scan to re-run it:
+ The remediation loop ends once the rendered machine config is updated, if needed, and the reconciled remediation object status is updated. In our case, applying the remediation would trigger a reboot. After the reboot, annotate the scan to re-run it:

[source,terminal]
----
@@ -261,6 +261,7 @@ NAME STATUS SEVERITY
rhcos4-e8-worker-audit-rules-dac-modification-chmod   PASS   medium
----

+ // TODO: This shouldn't be a level one in this module
= Useful labels

Each pod that is spawned by the compliance-operator is labeled specifically with the scan it belongs to and the work it does. The scan identifier is labeled with the `compliance.openshift.io/scan-name` label. The workload identifier is labeled with the `workload` label.
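For example, the scanner pods of the `rhcos4-e8-worker` scan used throughout this diff could be listed by those labels roughly as follows. This is a sketch: the `openshift-compliance` namespace and the `scanner` workload value are assumptions, while the `compliance.openshift.io/scan-name` label comes from the module itself:

[source,terminal]
----
$ oc get pods -n openshift-compliance -l compliance.openshift.io/scan-name=rhcos4-e8-worker,workload=scanner
----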

modules/compliance-applying.adoc

Lines changed: 4 additions & 4 deletions
@@ -5,18 +5,18 @@
[id="compliance-applying_{context}"]
= Applying a remediation

- The boolean attribute `spec.apply` controls whether the remediation should be applied by the Compliance Operator. We can apply the remediation by setting the attribute to true:
+ The boolean attribute `spec.apply` controls whether the remediation should be applied by the Compliance Operator. You can apply the remediation by setting the attribute to `true`:

[source,terminal]
----
$ oc patch complianceremediations/<scan_name>-sysctl-net-ipv4-conf-all-accept-redirects --patch '{"spec":{"apply":true}}' --type=merge
----

- After the Compliance Operator processes the applied remediation, the `status.ApplicationState` attribute would change to *Applied* or to *Error* if incorrect. When a MachineConfig remediation is applied, that remediation along with all other applied remediations are rendered into a `MachineConfig` object named `75-$scan-name-$suite-name`. That `MachineConfig` object is subsequently rendered by the MachineConfigOperator and finally applied to all the nodes in a MachineConfigPool by an instance of the MachineControlDaemon running on each node.
+ After the Compliance Operator processes the applied remediation, the `status.ApplicationState` attribute would change to *Applied* or to *Error* if incorrect. When a machine config remediation is applied, that remediation along with all other applied remediations are rendered into a `MachineConfig` object named `75-$scan-name-$suite-name`. That `MachineConfig` object is subsequently rendered by the Machine Config Operator and finally applied to all the nodes in a machine config pool by an instance of the machine control daemon running on each node.

- Note that when the MachineConfigOperator applies a new `MachineConfig` object to nodes in a pool, all the nodes belonging to the pool are rebooted. This might be inconvenient when applying multiple remediations, each of which re-renders the composite `75-$scan-name-$suite-name` `MachineConfig` object. To prevent applying the remediation immediately, you can pause the MachineConfigPool by setting the `.spec.paused` attribute of a MachineConfigPool to `true`.
+ Note that when the Machine Config Operator applies a new `MachineConfig` object to nodes in a pool, all the nodes belonging to the pool are rebooted. This might be inconvenient when applying multiple remediations, each of which re-renders the composite `75-$scan-name-$suite-name` `MachineConfig` object. To prevent applying the remediation immediately, you can pause the machine config pool by setting the `.spec.paused` attribute of a `MachineConfigPool` object to `true`.

- The Compliance Operator can apply remediations automatically. Set `autoApplyRemediations: true` in the ScanSetting top-level object.
+ The Compliance Operator can apply remediations automatically. Set `autoApplyRemediations: true` in the `ScanSetting` top-level object.

[WARNING]
====
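To illustrate the pausing step described in this module, a patch of the following form could be used before queueing several remediations. This is a sketch; the `worker` pool name is an assumption:

[source,terminal]
----
$ oc patch machineconfigpools/worker --patch '{"spec":{"paused":true}}' --type=merge
----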

modules/compliance-custom-storage.adoc

Lines changed: 3 additions & 3 deletions
@@ -4,7 +4,7 @@

[id="compliance-custom-storage_{context}"]
= Setting custom storage size for results
- While the custom resources such as ComplianceCheckResult represent an aggregated result of one check across all scanned nodes, it can be useful to review the raw results as produced by the scanner. The raw results are produced in the ARF format and can be large (tens of megabytes per node), it is impractical to store them in a Kubernetes resource backed by the `etcd` key-value store. Instead, every scan creates a PersistentVolume which defaults to 1GB size. Depending on your environment, you may want to increase the PV size accordingly. This is done using the `rawResultStorage.size` attribute that is exposed in both the ScanSetting and ComplianceScan resources.
+ While the custom resources such as `ComplianceCheckResult` represent an aggregated result of one check across all scanned nodes, it can be useful to review the raw results as produced by the scanner. The raw results are produced in the ARF format and can be large (tens of megabytes per node), it is impractical to store them in a Kubernetes resource backed by the `etcd` key-value store. Instead, every scan creates a persistent volume (PV) which defaults to 1GB size. Depending on your environment, you may want to increase the PV size accordingly. This is done using the `rawResultStorage.size` attribute that is exposed in both the `ScanSetting` and `ComplianceScan` resources.

A related parameter is `rawResultStorage.rotation` which controls how many scans are retained in the PV before the older scans are rotated. The default value is 3, setting the rotation policy to 0 disables the rotation. Given the default rotation policy and an estimate of 100MB per a raw ARF scan report, you can calculate the right PV size for your environment.

@@ -17,9 +17,9 @@ Because {product-title} can be deployed in a variety of public clouds or bare me
If your cluster does not specify a default storage class, this attribute must be set.
====

- Configure the ScanSetting CustomResource to use a standard storage class and create PersistentVolumes that are 10GB in size and keep the last 10 results:
+ Configure the `ScanSetting` custom resource to use a standard storage class and create persistent volumes that are 10GB in size and keep the last 10 results:

- .Example ScanSetting CR
+ .Example `ScanSetting` CR

[source,yaml]
----
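The committed example block is cut off at this hunk boundary. A minimal sketch of a `ScanSetting` matching the paragraph above could look like the following; `size`, `rotation`, and `storageClassName` are the fields named in this module, while the `standard` storage class, roles, and schedule values are illustrative assumptions:

[source,yaml]
----
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSetting
metadata:
  name: default
  namespace: openshift-compliance
rawResultStorage:
  storageClassName: standard   # assumed storage class name
  rotation: 10                 # keep the last 10 raw results
  size: 10Gi                   # 10GB persistent volumes
roles:                         # node roles to scan (illustrative)
- worker
- master
schedule: '0 1 * * *'          # illustrative schedule, reused from this diff
----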
