
# Commit 7cfb4cb

Merge branch 'main' into feat/stabilize_scs-compatible-iaas_v5

2 parents: 47a27c5 + 7a2662a

10 files changed: +181 −40 lines

## Standards/scs-XXXX-v1-security-of-iaas-service-software.md → Standards/scs-0124-v1-security-of-iaas-service-software.md

File renamed without changes.

## Standards/scs-XXXX-w1-security-of-iaas-service-software.md → Standards/scs-0124-w1-security-of-iaas-service-software.md

1 addition, 1 deletion:

```diff
@@ -4,7 +4,7 @@ type: Supplement
 track: IaaS
 status: Draft
 supplements:
-- scs-XXXX-v1-security-of-iaas-service-software.md
+- scs-0124-v1-security-of-iaas-service-software.md
 ---

 ## Testing or Detecting security updates in software
```
## Standards/scs-0214-v1-k8s-node-distribution.md

0 additions, 36 deletions:

```diff
@@ -80,42 +80,6 @@ If the standard is used by a provider, the following decisions are binding and v
   can also be scaled vertically first before scaling horizontally.
 - Worker node distribution MUST be indicated to the user through some kind of labeling
   in order to enable (anti)-affinity for workloads over "failure zones".
-- To provide metadata about the node distribution, which also enables testing of this standard,
-  providers MUST label their K8s nodes with the labels listed below.
-  - `topology.kubernetes.io/zone`
-
-    Corresponds with the label described in [K8s labels documentation][k8s-labels-docs].
-    It provides a logical zone of failure on the side of the provider, e.g. a server rack
-    in the same electrical circuit or multiple machines bound to the internet through a
-    singular network structure. How this is defined exactly is up to the plans of the provider.
-    The field gets autopopulated most of the time by either the kubelet or external mechanisms
-    like the cloud controller.
-
-  - `topology.kubernetes.io/region`
-
-    Corresponds with the label described in [K8s labels documentation][k8s-labels-docs].
-    It describes the combination of one or more failure zones into a region or domain, therefore
-    showing a larger entity of logical failure zone. An example for this could be a building
-    containing racks that are put into such a zone, since they're all prone to failure, if e.g.
-    the power for the building is cut. How this is defined exactly is also up to the provider.
-    The field gets autopopulated most of the time by either the kubelet or external mechanisms
-    like the cloud controller.
-
-  - `topology.scs.community/host-id`
-
-    This is an SCS-specific label; it MUST contain the hostID of the physical machine running
-    the hypervisor (NOT: the hostID of a virtual machine). Here, the hostID is an arbitrary identifier,
-    which need not contain the actual hostname, but it should nonetheless be unique to the host.
-    This helps identify the distribution over underlying physical machines,
-    which would be masked if VM hostIDs were used.
-
-## Conformance Tests
-
-The script `k8s-node-distribution-check.py` checks the nodes available with a user-provided
-kubeconfig file. It then determines based on the labels `kubernetes.io/hostname`, `topology.kubernetes.io/zone`,
-`topology.kubernetes.io/region` and `node-role.kubernetes.io/control-plane`, if a distribution
-of the available nodes is present. If this isn't the case, the script produces an error.
-It also produces warnings and informational outputs, if e.g. labels don't seem to be set.

 [k8s-ha]: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/
 [k8s-large-clusters]: https://kubernetes.io/docs/setup/best-practices/cluster-large/
```
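The conformance script removed here, `k8s-node-distribution-check.py`, inferred node distribution from labels such as `topology.kubernetes.io/zone`. A minimal sketch of that label-based idea, using hypothetical in-memory node data instead of a live kubeconfig (node names and zones are invented; only the label key comes from the standard):

```python
# Hypothetical node label data, standing in for what the removed
# k8s-node-distribution-check.py read from a cluster via kubeconfig.
nodes = [
    {"kubernetes.io/hostname": "node-a", "topology.kubernetes.io/zone": "zone-1"},
    {"kubernetes.io/hostname": "node-b", "topology.kubernetes.io/zone": "zone-1"},
    {"kubernetes.io/hostname": "node-c", "topology.kubernetes.io/zone": "zone-2"},
]

def distinct_zones(nodes):
    """Number of distinct failure zones the labeled nodes span."""
    zones = {n.get("topology.kubernetes.io/zone") for n in nodes}
    zones.discard(None)  # nodes without the label don't count as a zone
    return len(zones)

# A distribution over failure zones exists only if more than one zone is covered.
assert distinct_zones(nodes) == 2
```

The real check additionally considered `topology.kubernetes.io/region` and `node-role.kubernetes.io/control-plane`, and produced warnings when labels appeared unset.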

## Standards/scs-0219-v1-kaas-networking.md

2 additions, 1 deletion:

```diff
@@ -1,7 +1,8 @@
 ---
 title: KaaS Networking Standard
 type: Standard
-status: Draft
+status: Stable
+stabilized_at: 2024-11-21
 track: KaaS
 ---

```
## (file name not shown in this view)

2 additions, 2 deletions:

````diff
@@ -1,11 +1,12 @@
-# Plugin for provisioning k8s clusters and performing conformance tests on these clusters
+# Test suite for SCS-compatible KaaS

 ## Development environment

 ### requirements

 * [docker](https://docs.docker.com/engine/install/)
 * [kind](https://kind.sigs.k8s.io/docs/user/quick-start/#installation)
+* [sonobuoy](https://sonobuoy.io/docs/v0.57.1/#installation)

 ### setup for development

@@ -19,7 +20,6 @@
 (venv) curl -sS https://bootstrap.pypa.io/get-pip.py | python3.10
 (venv) python3.10 -m pip install --upgrade pip
 (venv) python3.10 -m pip --version
-
 ```

 2. Install dependencies:
````
## (file name not shown in this view)

1 addition, 0 deletions:

```diff
@@ -1,2 +1,3 @@
 pytest-kind
 kubernetes
+junitparser
```
## (file name not shown in this view)

2 additions, 0 deletions:

```diff
@@ -16,6 +16,8 @@ google-auth==2.34.0
     # via kubernetes
 idna==3.8
     # via requests
+junitparser==3.2.0
+    # via -r requirements.in
 kubernetes==30.1.0
     # via -r requirements.in
 oauthlib==3.2.2
```
## New file, 26 additions (presumably `run_sonobuoy.py`, as referenced from `Tests/scs-compatible-kaas.yaml`)

```python
#!/usr/bin/env python3
# vim: set ts=4 sw=4 et:
#
import logging
import sys

import click

from sonobuoy_handler import SonobuoyHandler

logger = logging.getLogger(__name__)


@click.command()
@click.option("-k", "--kubeconfig", "kubeconfig", required=True, type=click.Path(exists=True), help="path/to/kubeconfig_file.yaml",)
@click.option("-r", "--result_dir_name", "result_dir_name", type=str, default="sonobuoy_results", help="directory name to store results at",)
@click.option("-c", "--check", "check_name", type=str, default="sonobuoy_executor", help="this MUST be the same name as the id in 'scs-compatible-kaas.yaml'",)
@click.option("-a", "--arg", "args", multiple=True)
def sonobuoy_run(kubeconfig, result_dir_name, check_name, args):
    sonobuoy_handler = SonobuoyHandler(check_name, kubeconfig, result_dir_name, args)
    sys.exit(sonobuoy_handler.run())


if __name__ == "__main__":
    logging.basicConfig(format='%(levelname)s: %(message)s', level=logging.DEBUG)
    sonobuoy_run()
```
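The repeated `-a`/`--arg` option collects raw strings; `SonobuoyHandler` (added in this commit) later flattens them with `shlex.split` before handing them to `sonobuoy run`. A small sketch of that splitting behavior, with example values mirroring the ones in `Tests/scs-compatible-kaas.yaml`:

```python
import shlex

# Raw values as click collects them from repeated -a options; each value
# may itself contain several CLI tokens.
raw_args = ("--mode=certified-conformance", "--plugin-env e2e.E2E_DRYRUN=true")

# Same flattening as in SonobuoyHandler.__init__.
flat = [tok for arg in raw_args for tok in shlex.split(str(arg))]

assert flat == ["--mode=certified-conformance", "--plugin-env", "e2e.E2E_DRYRUN=true"]
```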
## New file, 133 additions (presumably `sonobuoy_handler.py`, providing the `SonobuoyHandler` imported above)

```python
from collections import Counter
import json
import logging
import os
import shlex
import shutil
import subprocess

from junitparser import JUnitXml

logger = logging.getLogger(__name__)


class SonobuoyHandler:
    """
    A class that handles both the execution of sonobuoy and
    the generation of the results for a test report
    """

    kubeconfig_path = None
    working_directory = None

    def __init__(
        self,
        check_name="sonobuoy_handler",
        kubeconfig=None,
        result_dir_name="sonobuoy_results",
        args=(),
    ):
        self.check_name = check_name
        logger.debug(f"kubeconfig: {kubeconfig} ")
        if kubeconfig is None:
            raise RuntimeError("No kubeconfig provided")
        self.kubeconfig_path = kubeconfig
        self.working_directory = os.getcwd()
        self.result_dir_name = result_dir_name
        self.sonobuoy = shutil.which('sonobuoy')
        logger.debug(f"working from {self.working_directory}")
        logger.debug(f"placing results at {self.result_dir_name}")
        logger.debug(f"sonobuoy executable at {self.sonobuoy}")
        self.args = (arg0 for arg in args for arg0 in shlex.split(str(arg)))

    def _invoke_sonobuoy(self, *args, **kwargs):
        inv_args = (self.sonobuoy, "--kubeconfig", self.kubeconfig_path) + args
        logger.debug(f'invoking {" ".join(inv_args)}')
        return subprocess.run(args=inv_args, capture_output=True, check=True, **kwargs)

    def _sonobuoy_run(self):
        self._invoke_sonobuoy("run", "--wait", *self.args)

    def _sonobuoy_delete(self):
        self._invoke_sonobuoy("delete", "--wait")

    def _sonobuoy_status_result(self):
        process = self._invoke_sonobuoy("status", "--json")
        json_data = json.loads(process.stdout)
        counter = Counter()
        for entry in json_data["plugins"]:
            logger.debug(f"plugin:{entry['plugin']}:{entry['result-status']}")
            for result, count in entry["result-counts"].items():
                counter[result] += count
        return counter

    def _eval_result(self, counter):
        """evaluate test results and return return code"""
        result_str = ', '.join(f"{counter[key]} {key}" for key in ('passed', 'failed', 'skipped'))
        result_message = f"sonobuoy reports {result_str}"
        if counter['failed']:
            logger.error(result_message)
            return 3
        logger.info(result_message)
        return 0

    def _preflight_check(self):
        """
        Preflight test to ensure that everything is set up correctly for execution
        """
        if not self.sonobuoy:
            raise RuntimeError("sonobuoy executable not found; is it in PATH?")

    def _sonobuoy_retrieve_result(self):
        """
        This method invokes sonobuoy to store the results in a subdirectory of
        the working directory. The JUnit results file contained in it is then
        analyzed in order to interpret the relevant information it contains
        """
        logger.debug(f"retrieving results to {self.result_dir_name}")
        result_dir = os.path.join(self.working_directory, self.result_dir_name)
        if os.path.exists(result_dir):
            raise Exception("result directory already existing")
        os.mkdir(result_dir)

        # XXX use self._invoke_sonobuoy
        os.system(
            # ~ f"sonobuoy retrieve {result_dir} -x --filename='{result_dir}' --kubeconfig='{self.kubeconfig_path}'"
            f"sonobuoy retrieve {result_dir} --kubeconfig='{self.kubeconfig_path}'"
        )
        logger.debug(
            f"parsing JUnit result from {result_dir + '/plugins/e2e/results/global/junit_01.xml'} "
        )
        xml = JUnitXml.fromfile(result_dir + "/plugins/e2e/results/global/junit_01.xml")
        counter = Counter()
        for suite in xml:
            for case in suite:
                if case.is_passed is True:  # XXX why `is True`???
                    counter['passed'] += 1
                elif case.is_skipped is True:
                    counter['skipped'] += 1
                else:
                    counter['failed'] += 1
                    logger.error(f"{case.name}")
        return counter

    def run(self):
        """
        This method is to be called to run the plugin
        """
        logger.info(f"running sonobuoy for testcase {self.check_name}")
        self._preflight_check()
        try:
            self._sonobuoy_run()
            return_code = self._eval_result(self._sonobuoy_status_result())
            print(self.check_name + ": " + ("PASS", "FAIL")[min(1, return_code)])
            return return_code

            # ERROR: currently disabled due to: "error retrieving results: unexpected EOF"
            # might be related to following bug: https://github.com/vmware-tanzu/sonobuoy/issues/1633
            # self._sonobuoy_retrieve_result(self)
        except BaseException:
            logger.exception("something went wrong")
            return 112
        finally:
            self._sonobuoy_delete()
```
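`_sonobuoy_status_result` aggregates the per-plugin `result-counts` from `sonobuoy status --json` into a single `Counter`, which `_eval_result` then maps to a return code. A self-contained sketch of that aggregation; the field names match those read by the handler, while the plugin names and counts are invented for illustration:

```python
import json
from collections import Counter

# Illustrative `sonobuoy status --json` payload (plugin names and counts
# are made up; the field names are the ones the handler reads).
status_json = json.dumps({
    "plugins": [
        {"plugin": "e2e", "result-status": "complete",
         "result-counts": {"passed": 5, "skipped": 2}},
        {"plugin": "systemd-logs", "result-status": "complete",
         "result-counts": {"passed": 1, "failed": 1}},
    ]
})

# Same aggregation loop as _sonobuoy_status_result.
counter = Counter()
for entry in json.loads(status_json)["plugins"]:
    for result, count in entry["result-counts"].items():
        counter[result] += count

# With any failure present, _eval_result logs an error and returns 3.
assert counter == Counter(passed=6, skipped=2, failed=1)
```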

## Tests/scs-compatible-kaas.yaml

14 additions, 0 deletions:

```diff
@@ -9,6 +9,10 @@ modules:
   - id: cncf-k8s-conformance
     name: CNCF Kubernetes conformance
     url: https://github.com/cncf/k8s-conformance/tree/master
+    run:
+      - executable: ./kaas/sonobuoy_handler/run_sonobuoy.py
+        args: -k {subject_root}/kubeconfig.yaml -r {subject_root}/sono-results -c 'cncf-k8s-conformance' -a '--mode=certified-conformance'
+        #~ args: -k {subject_root}/kubeconfig.yaml -r {subject_root}/sono-results -c 'cncf-k8s-conformance' -a '--plugin-env e2e.E2E_DRYRUN=true'
     testcases:
       - id: cncf-k8s-conformance
         tags: [mandatory]
@@ -30,6 +34,15 @@ modules:
     testcases:
       - id: node-distribution-check
         tags: [mandatory]
+  - id: scs-0219-v1
+    name: KaaS networking
+    url: https://docs.scs.community/standards/scs-0219-v1-kaas-networking
+    run:
+      - executable: ./kaas/sonobuoy_handler/run_sonobuoy.py
+        args: -k {subject_root}/kubeconfig.yaml -r {subject_root}/sono-results -c 'kaas-networking-check' -a '--e2e-focus "NetworkPolicy"'
+    testcases:
+      - id: kaas-networking-check
+        tags: [mandatory]
 timeline:
   - date: 2024-02-28
     versions:
@@ -40,5 +53,6 @@ versions:
       - cncf-k8s-conformance
       - scs-0210-v2
       - scs-0214-v2
+      - scs-0219-v1
     targets:
       main: mandatory
```
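The `args` strings in the YAML carry a `{subject_root}` placeholder, which the test runner presumably substitutes with the subject's directory before invoking the executable. A sketch of such a substitution using `str.format` (the runner's actual templating mechanism is an assumption here, as is the `/tmp/subject` path):

```python
# args string as in scs-compatible-kaas.yaml (shortened); {subject_root} is
# a placeholder the test runner fills in. Using str.format to render it is
# an assumption for illustration, not the runner's documented behavior.
template = "-k {subject_root}/kubeconfig.yaml -r {subject_root}/sono-results -c 'kaas-networking-check'"
rendered = template.format(subject_root="/tmp/subject")

assert rendered == "-k /tmp/subject/kubeconfig.yaml -r /tmp/subject/sono-results -c 'kaas-networking-check'"
```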
