
Commit d8e18b4

Merge pull request #50778 from sheriff-rh/co-0.1.55-rns
Compliance Operator v0.1.57 release notes and enhancements added
2 parents b28ffe9 + 4b7bad0 commit d8e18b4

15 files changed: 416 additions & 6 deletions
Lines changed: 22 additions & 0 deletions
@@ -0,0 +1,22 @@
// Module included in the following assemblies:
//
// * security/compliance_operator/compliance-scans.adoc

:_content-type: CONCEPT
[id="compliance-applying-resource-requests-and-limits_{context}"]
= Applying resource requests and limits

When the kubelet starts a container as part of a Pod, the kubelet passes that container's requests and limits for memory and CPU to the container runtime. In Linux, the container runtime configures the kernel cgroups that apply and enforce the limits you defined.

The CPU limit defines how much CPU time the container can use. During each scheduling interval, the Linux kernel checks to see if this limit is exceeded. If so, the kernel waits before allowing the cgroup to resume execution.

If several different containers (cgroups) want to run on a contended system, workloads with larger CPU requests are allocated more CPU time than workloads with small requests. The memory request is used during Pod scheduling. On a node that uses cgroups v2, the container runtime might use the memory request as a hint to set `memory.min` and `memory.low` values.

If a container attempts to allocate more memory than this limit, the Linux kernel out-of-memory subsystem activates and intervenes by stopping one of the processes in the container that tried to allocate memory. The memory limit for the Pod or container can also apply to pages in memory-backed volumes, such as an `emptyDir` volume.

The kubelet tracks `tmpfs` `emptyDir` volumes as container memory use, rather than as local ephemeral storage. If a container exceeds its memory request and the node that it runs on becomes short of memory overall, the Pod's container might be evicted.

[IMPORTANT]
====
A container might not be allowed to exceed its CPU limit for extended periods. Container runtimes do not stop Pods or containers for excessive CPU usage. To determine whether a container cannot be scheduled or is being killed due to resource limits, see _Troubleshooting the Compliance Operator_.
====
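
The memory accounting for memory-backed volumes can be illustrated with a minimal Pod sketch. The Pod name, image, and sizes below are illustrative only and are not part of the Compliance Operator documentation; the point is that data written to the `tmpfs` volume counts against the container memory limit.

.Example memory-backed `emptyDir` volume counted against the memory limit
[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: memory-backed-volume-example
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest
    resources:
      limits:
        memory: "256Mi" <1>
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir:
      medium: Memory <2>
----
<1> Process memory plus the contents of the memory-backed volume must stay under this limit.
<2> `medium: Memory` makes the volume a `tmpfs` file system, so its pages are tracked as container memory use.
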
Lines changed: 94 additions & 0 deletions
@@ -0,0 +1,94 @@
// Module included in the following assemblies:
//
// * security/compliance_operator/compliance-operator-remediation.adoc

:_content-type: PROCEDURE
[id="compliance-custom-node-pools_{context}"]
= Scanning custom node pools

The Compliance Operator does not maintain a copy of each node pool configuration. The Compliance Operator aggregates consistent configuration options for all nodes within a single node pool into one copy of the configuration file. The Compliance Operator then uses the configuration file for a particular node pool to evaluate rules against nodes within that pool.

If your cluster uses custom node pools outside the default `worker` and `master` node pools, you must supply additional variables to ensure the Compliance Operator aggregates a configuration file for that node pool.

.Procedure

. To check the configuration against all pools in an example cluster containing `master`, `worker`, and custom `example` node pools, set the value of the `ocp4-var-role-master` and `ocp4-var-role-worker` fields to `example` in the `TailoredProfile` object:
+
[source,yaml]
----
apiVersion: compliance.openshift.io/v1alpha1
kind: TailoredProfile
metadata:
  name: cis-example-tp
spec:
  extends: ocp4-cis
  title: My modified NIST profile to scan example nodes
  setValues:
  - name: ocp4-var-role-master
    value: example
    rationale: test for example nodes
  - name: ocp4-var-role-worker
    value: example
    rationale: test for example nodes
  description: cis-example-scan
----

. Add the `example` role to the `ScanSetting` object that will be referenced by the `ScanSettingBinding` CR:
+
[source,yaml]
----
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSetting
metadata:
  name: default
  namespace: openshift-compliance
rawResultStorage:
  rotation: 3
  size: 1Gi
roles:
- worker
- master
- example
scanTolerations:
- effect: NoSchedule
  key: node-role.kubernetes.io/master
  operator: Exists
schedule: '0 1 * * *'
----

. Create a scan that uses the `ScanSettingBinding` CR:
+
[source,yaml]
----
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSettingBinding
metadata:
  name: cis
  namespace: openshift-compliance
profiles:
- apiGroup: compliance.openshift.io/v1alpha1
  kind: Profile
  name: ocp4-cis
- apiGroup: compliance.openshift.io/v1alpha1
  kind: Profile
  name: ocp4-cis-node
- apiGroup: compliance.openshift.io/v1alpha1
  kind: TailoredProfile
  name: cis-example-tp
settingsRef:
  apiGroup: compliance.openshift.io/v1alpha1
  kind: ScanSetting
  name: default
----

The Compliance Operator checks the runtime `KubeletConfig` through the `Node/Proxy` API object and then uses variables such as `ocp4-var-role-master` and `ocp4-var-role-worker` to determine the nodes it performs the check against. In the `ComplianceCheckResult`, the `KubeletConfig` rules are shown as `ocp4-cis-kubelet-*`. The scan passes only if all selected nodes pass this check.

.Verification

* The Platform KubeletConfig rules are checked through the `Node/Proxy` object. You can find those rules by running the following command:
+
[source,terminal]
----
$ oc get rules -o json | jq '.items[] | select(.checkType == "Platform") | select(.metadata.name | contains("ocp4-kubelet-")) | .metadata.name'
----
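
As an optional follow-up that is not part of the documented procedure, you can confirm that the kubelet rules were evaluated after the scans finish by filtering the `ComplianceCheckResult` objects by name:

[source,terminal]
----
$ oc get compliancecheckresults -n openshift-compliance | grep kubelet
----
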
Lines changed: 13 additions & 0 deletions
@@ -0,0 +1,13 @@
// Module included in the following assemblies:
//
// * security/compliance_operator/compliance-operator-remediation.adoc

:_content-type: CONCEPT
[id="compliance-evaluate-kubeletconfig-rules_{context}"]
= Evaluating KubeletConfig rules against default configuration values

{product-title} infrastructure might contain incomplete configuration files at run time, and nodes assume default configuration values for missing configuration options. Some configuration options can be passed as command line arguments. As a result, the Compliance Operator cannot verify if the configuration file on the node is complete, because the file might be missing options used in the rule checks.

To prevent false negative results, where the default configuration value would pass a check, the Compliance Operator uses the Node/Proxy API to fetch the configuration for each node in a node pool. All configuration options that are consistent across the nodes in the node pool are then stored in a single file that represents the configuration for all nodes within that node pool. This increases the accuracy of the scan results.

No additional configuration changes are required to use this feature with the default `master` and `worker` node pool configurations.
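
For reference only, and not part of this module, you can inspect the runtime kubelet configuration that the Node/Proxy API exposes for a single node; replace `<node_name>` with the name of a node in the pool:

[source,terminal]
----
$ oc get --raw /api/v1/nodes/<node_name>/proxy/configz | jq '.kubeletconfig'
----
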
Lines changed: 31 additions & 0 deletions
@@ -0,0 +1,31 @@
// Module included in the following assemblies:
//
// * security/compliance_operator/compliance-operator-troubleshooting.adoc

:_content-type: PROCEDURE
[id="compliance-increasing-operator-limits_{context}"]
= Increasing Compliance Operator resource limits

In some cases, the Compliance Operator might require more memory than the default limits allow. The best way to mitigate this issue is to set custom resource limits.

To increase the default memory and CPU limits of scanner pods, see _`ScanSetting` Custom Resource_.

.Procedure

. To increase the Operator's memory limits to 500 Mi, create the following patch file named `co-memlimit-patch.yaml`:
+
[source,yaml]
----
spec:
  config:
    resources:
      limits:
        memory: 500Mi
----

. Apply the patch file:
+
[source,terminal]
----
$ oc patch sub compliance-operator -n openshift-compliance --patch-file co-memlimit-patch.yaml --type=merge
----
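
Optionally, as a verification step that is not part of the documented procedure, you can confirm that OLM propagated the new limit to the Operator deployment. This assumes the default deployment name `compliance-operator` in the `openshift-compliance` namespace:

[source,terminal]
----
$ oc get deployment compliance-operator -n openshift-compliance \
  -o jsonpath='{.spec.template.spec.containers[0].resources.limits.memory}'
----
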
Lines changed: 18 additions & 0 deletions
@@ -0,0 +1,18 @@
// Module included in the following assemblies:
//
// * security/compliance_operator/compliance-operator-remediation.adoc

:_content-type: PROCEDURE
[id="compliance-kubeletconfig-sub-pool-remediation_{context}"]
= Remediating `KubeletConfig` sub pools

`KubeletConfig` remediation labels can be applied to `MachineConfigPool` sub-pools.

.Procedure

* Add a label to the sub-pool `MachineConfigPool` CR:
+
[source,terminal]
----
$ oc label mcp <sub-pool-name> pools.operator.machineconfiguration.openshift.io/<sub-pool-name>=
----
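
As an optional check that is not part of the original module, you can confirm that the label was applied to the sub-pool:

[source,terminal]
----
$ oc get mcp <sub-pool-name> --show-labels
----
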

modules/compliance-priorityclass.adoc

Lines changed: 54 additions & 0 deletions
@@ -0,0 +1,54 @@
// Module included in the following assemblies:
//
// * security/compliance_operator/compliance-operator-advanced.adoc

:_content-type: PROCEDURE
[id="compliance-priorityclass_{context}"]
= Setting `PriorityClass` for `ScanSetting` scans

In large-scale environments, the default `PriorityClass` object can be too low to guarantee Pods execute scans on time. For clusters that must maintain compliance or guarantee automated scanning, it is recommended to set the `PriorityClass` variable to ensure that the Compliance Operator is always given priority in resource-constrained situations.

.Procedure

* Set the `PriorityClass` variable:
+
[source,yaml]
----
apiVersion: compliance.openshift.io/v1alpha1
strictNodeScan: true
metadata:
  name: default
  namespace: openshift-compliance
priorityClass: compliance-high-priority <1>
kind: ScanSetting
showNotApplicable: false
rawResultStorage:
  nodeSelector:
    node-role.kubernetes.io/master: ''
  pvAccessModes:
  - ReadWriteOnce
  rotation: 3
  size: 1Gi
  tolerations:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
    operator: Exists
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  - effect: NoSchedule
    key: node.kubernetes.io/memory-pressure
    operator: Exists
schedule: 0 1 * * *
roles:
- master
- worker
scanTolerations:
- operator: Exists
----
<1> If the `PriorityClass` referenced in the `ScanSetting` cannot be found, the Operator leaves the `PriorityClass` empty, issues a warning, and continues scheduling scans without a `PriorityClass`.
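
The `compliance-high-priority` class referenced above must already exist in the cluster. The following is a minimal sketch of such an object, not taken from the Compliance Operator documentation; the `value` shown is illustrative, so choose one that fits the priority classes already defined in your cluster:

[source,yaml]
----
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: compliance-high-priority
value: 1000000
globalDefault: false
description: "Priority class for Compliance Operator scan pods."
----
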
Lines changed: 16 additions & 0 deletions
@@ -0,0 +1,16 @@
// Module included in the following assemblies:
//
// * security/compliance_operator/compliance-scans.adoc

:_content-type: CONCEPT
[id="compliance-scansetting-cr_{context}"]
= `ScanSetting` Custom Resource

The `ScanSetting` Custom Resource now allows you to override the default CPU and memory limits of scanner pods through the scan limits attribute. The Compliance Operator uses defaults of 500Mi memory and 100m CPU for the scanner container, and 200Mi memory and 100m CPU for the `api-resource-collector` container. To set the memory limits of the Operator itself, modify the `Subscription` object if the Operator was installed through OLM, or modify the Operator deployment directly.

To increase the default CPU and memory limits of the Compliance Operator, see _Increasing Compliance Operator resource limits_.

[IMPORTANT]
====
Increasing the memory limit for the Compliance Operator or the scanner pods is needed only if the default limits are not sufficient and the Operator or scanner pods are terminated by the out-of-memory (OOM) killer.
====
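
As a sketch of what the scan limits override might look like — the attribute name `scanLimits` and the value below are assumptions, so confirm the exact field name against the `ScanSetting` CRD in your cluster before using it — a `ScanSetting` that raises the scanner memory limit could be written as:

[source,yaml]
----
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSetting
metadata:
  name: default
  namespace: openshift-compliance
scanLimits:
  memory: 1024Mi
----
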
Lines changed: 59 additions & 0 deletions
@@ -0,0 +1,59 @@
// Module included in the following assemblies:
//
// * security/compliance_operator/compliance-scans.adoc

:_content-type: CONCEPT
[id="compliance-scheduling-pods-with-resource-requests_{context}"]
= Scheduling Pods with resource requests

When a Pod is created, the scheduler selects a Node for the Pod to run on. Each node has a maximum capacity for each resource type in the amount of CPU and memory it can provide for Pods. The scheduler ensures that the sum of the resource requests of the scheduled containers is less than the capacity of the node for each resource type.

Although the actual memory or CPU resource usage on a node might be very low, the scheduler still refuses to place a Pod on the node if the capacity check fails. This protects against a resource shortage on the node when resource usage increases later.

For each container, you can specify the following resource limits and requests:

[source,terminal]
----
spec.containers[].resources.limits.cpu
spec.containers[].resources.limits.memory
spec.containers[].resources.limits.hugepages-<size>
spec.containers[].resources.requests.cpu
spec.containers[].resources.requests.memory
spec.containers[].resources.requests.hugepages-<size>
----

[NOTE]
====
Although you can specify requests and limits only for individual containers, it is also useful to consider the overall resource requests and limits for a pod. For a particular resource, a pod resource request or limit is the sum of the resource requests or limits of that type for each container in the pod.
====

.Example Pod resource requests and limits
[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: app
    image: images.my-company.example/app:v4
    resources:
      requests: <1>
        memory: "64Mi"
        cpu: "250m"
      limits: <2>
        memory: "128Mi"
        cpu: "500m"
  - name: log-aggregator
    image: images.my-company.example/log-aggregator:v6
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
----
<1> Each container requests 64 Mi of memory and 250 m CPU.
<2> Each container is limited to 128 Mi of memory and 500 m CPU. The overall Pod request is therefore 128 Mi of memory and 500 m CPU, and the overall Pod limit is 256 Mi of memory and 1 CPU.
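
As an aside not present in the original module, after the Pod is created you can print the requests and limits that were actually recorded on its spec:

[source,terminal]
----
$ oc get pod frontend -o jsonpath='{.spec.containers[*].resources}'
----
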

modules/compliance-supported-profiles.adoc

Lines changed: 2 additions & 0 deletions
@@ -86,12 +86,14 @@ The Compliance Operator provides the following compliance profiles:
 |0.1.47+
 |link:https://www.pcisecuritystandards.org/document_library?document=pci_dss[PCI Security Standards &#174; Council Document Library]
 |`x86_64`
+`ppc64le`

 |ocp4-pci-dss-node
 |PCI-DSS v3.2.1 Control Baseline for Red Hat OpenShift Container Platform 4
 |0.1.47+
 |link:https://www.pcisecuritystandards.org/document_library?document=pci_dss[PCI Security Standards &#174; Council Document Library]
 |`x86_64`
+`ppc64le`

 |ocp4-high
 |NIST 800-53 High-Impact Baseline for Red Hat OpenShift - Platform level
Lines changed: 36 additions & 0 deletions
@@ -0,0 +1,36 @@
// Module included in the following assemblies:
//
// * security/compliance_operator/compliance-operator-troubleshooting.adoc

:_content-type: REFERENCE
[id="operator-resource-constraints_{context}"]
= Configuring Operator resource constraints

The `resources` field defines resource constraints for all the containers in the Pod created by the Operator Lifecycle Manager (OLM).

[NOTE]
====
Resource constraints applied in this process overwrite the existing resource constraints.
====

.Procedure

* Inject a request of 0.25 CPU and 64 Mi of memory, and a limit of 0.5 CPU and 128 Mi of memory in each container by editing the `Subscription` object:
+
[source,yaml]
----
kind: Subscription
metadata:
  name: custom-operator
spec:
  package: etcd
  channel: alpha
  config:
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
----
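
In the Compliance Operator's case, the `Subscription` object to edit is typically named `compliance-operator` in the `openshift-compliance` namespace, although the name can vary depending on how the Operator was installed:

[source,terminal]
----
$ oc edit subscription compliance-operator -n openshift-compliance
----
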
