
Commit 48ba4d7

Merge branch 'main' into 526-refine-cve-check-in-scs-0210-v2-test-script

2 parents f51a46c + 53b5e45

27 files changed: +1309 −133 lines

Standards/scs-XXXX-v1-security-of-iaas-service-software.md renamed to Standards/scs-0124-v1-security-of-iaas-service-software.md

File renamed without changes.

Standards/scs-XXXX-w1-security-of-iaas-service-software.md renamed to Standards/scs-0124-w1-security-of-iaas-service-software.md

Lines changed: 1 addition & 1 deletion
@@ -4,7 +4,7 @@ type: Supplement
 track: IaaS
 status: Draft
 supplements:
-  - scs-XXXX-v1-security-of-iaas-service-software.md
+  - scs-0124-v1-security-of-iaas-service-software.md
 ---

 ## Testing or Detecting security updates in software

Standards/scs-0125-v1-secure-connections.md

Lines changed: 277 additions & 0 deletions
Large diffs are not rendered by default.

Standards/scs-0214-v1-k8s-node-distribution.md

Lines changed: 0 additions & 1 deletion
@@ -84,4 +84,3 @@ If the standard is used by a provider, the following decisions are binding and v
 [k8s-ha]: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/
 [k8s-large-clusters]: https://kubernetes.io/docs/setup/best-practices/cluster-large/
 [scs-0213-v1]: https://github.com/SovereignCloudStack/standards/blob/main/Standards/scs-0213-v1-k8s-nodes-anti-affinity.md
-[k8s-labels-docs]: https://kubernetes.io/docs/reference/labels-annotations-taints/#topologykubernetesiozone

Standards/scs-0214-v2-k8s-node-distribution.md

Lines changed: 2 additions & 18 deletions
@@ -1,7 +1,8 @@
 ---
 title: Kubernetes Node Distribution and Availability
 type: Standard
-status: Draft
+status: Stable
+stabilized_at: 2024-11-21
 replaces: scs-0214-v1-k8s-node-distribution.md
 track: KaaS
 ---
@@ -100,23 +101,6 @@ These labels MUST be kept up to date with the current state of the deployment.
 The field gets autopopulated most of the time by either the kubelet or external mechanisms
 like the cloud controller.

-- `topology.scs.community/host-id`
-
-  This is an SCS-specific label; it MUST contain the hostID of the physical machine running
-  the hypervisor (NOT: the hostID of a virtual machine). Here, the hostID is an arbitrary identifier,
-  which need not contain the actual hostname, but it should nonetheless be unique to the host.
-  This helps identify the distribution over underlying physical machines,
-  which would be masked if VM hostIDs were used.
-
-## Conformance Tests
-
-The script `k8s-node-distribution-check.py` checks the nodes available with a user-provided
-kubeconfig file. Based on the labels `topology.scs.community/host-id`,
-`topology.kubernetes.io/zone`, `topology.kubernetes.io/region` and `node-role.kubernetes.io/control-plane`,
-the script then determines whether the nodes are distributed according to this standard.
-If this isn't the case, the script produces an error.
-It also produces warnings and informational outputs, e.g., if labels don't seem to be set.
-
 ## Previous standard versions

 This is version 2 of the standard; it extends [version 1](scs-0214-v1-k8s-node-distribution.md) with the

Standards/scs-0214-w1-k8s-node-distribution-implementation-testing.md

Lines changed: 5 additions & 15 deletions
@@ -16,25 +16,15 @@ Worker nodes can also be distributed over "failure zones", but this isn't a requ
 Distribution must be shown through labelling, so that users can access these information.

 Node distribution metadata is provided through the usage of the labels
-`topology.kubernetes.io/region`, `topology.kubernetes.io/zone` and
-`topology.scs.community/host-id` respectively.
-
-At the moment, not all labels are set automatically by most K8s cluster utilities, which incurs
-additional setup and maintenance costs.
+`topology.kubernetes.io/region` and `topology.kubernetes.io/zone`.

 ## Automated tests

-### Notes
-
-The test for the [SCS K8s Node Distribution and Availability](https://github.com/SovereignCloudStack/standards/blob/main/Standards/scs-0214-v2-k8s-node-distribution.md)
-checks if control-plane nodes are distributed over different failure zones (distributed into
-physical machines, zones and regions) by observing their labels defined by the standard.
-
-### Implementation
+Currently, automated testing is not readily possible because we cannot access information about
+the underlying host of a node (as opposed to its region and zone). Therefore, the test will only output
+a tentative result.

-The script [`k8s_node_distribution_check.py`](https://github.com/SovereignCloudStack/standards/blob/main/Tests/kaas/k8s-node-distribution/k8s_node_distribution_check.py)
-connects to an existing K8s cluster and checks if a distribution can be detected with the labels
-set for the nodes of this cluster.
+The current implementation can be found in the script [`k8s_node_distribution_check.py`](https://github.com/SovereignCloudStack/standards/blob/main/Tests/kaas/k8s-node-distribution/k8s_node_distribution_check.py).

 ## Manual tests
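The automated check described above boils down to counting distinct failure-zone label values across control-plane nodes. A minimal standalone sketch of that idea over plain label dictionaries (this is a hypothetical helper for illustration, not the actual `k8s_node_distribution_check.py`; the node data is invented):

```python
# Hypothetical sketch: count distinct zones among control-plane nodes.
# Label keys follow the standard; helper name and sample data are made up.
CONTROL_PLANE = "node-role.kubernetes.io/control-plane"
ZONE = "topology.kubernetes.io/zone"


def zones_of_control_plane(nodes):
    """Return the set of distinct zones seen on labelled control-plane nodes.

    `nodes` is a list of label dicts, as they would be reported via the K8s API.
    """
    return {labels[ZONE] for labels in nodes
            if CONTROL_PLANE in labels and ZONE in labels}


# Invented example: two control-plane nodes in different zones, one worker.
nodes = [
    {CONTROL_PLANE: "", ZONE: "az-1"},
    {CONTROL_PLANE: "", ZONE: "az-2"},
    {ZONE: "az-1"},  # worker node: no control-plane label, so it is ignored
]
print(len(zones_of_control_plane(nodes)))  # 2 distinct zones
```

A real check would additionally compare the zone count against the standard's minimum and, as the text notes, could only give a tentative verdict without host information.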

Standards/scs-0219-v1-kaas-networking.md

Lines changed: 2 additions & 1 deletion
@@ -1,7 +1,8 @@
 ---
 title: KaaS Networking Standard
 type: Standard
-status: Draft
+status: Stable
+stabilized_at: 2024-11-21
 track: KaaS
 ---

Tests/.gitignore

Lines changed: 1 addition & 0 deletions
@@ -1,2 +1,3 @@
 htmlcov/
 .coverage
+.secret

Tests/add_subject.py

Lines changed: 90 additions & 0 deletions
#!/usr/bin/env python3
# vim: set ts=4 sw=4 et:
#
# add_subject.py
#
# (c) Matthias Büchse <[email protected]>
# SPDX-License-Identifier: Apache-2.0
import base64
import getpass
import os
import os.path
import re
import shutil
import signal
import subprocess
import sys

try:
    from passlib.context import CryptContext
    import argon2  # noqa:F401
except ImportError:
    print('Missing passlib and/or argon2. Please do:\npip install passlib argon2_cffi', file=sys.stderr)
    sys.exit(1)

# see ../compliance-monitor/monitor.py
CRYPTCTX = CryptContext(schemes=('argon2', 'bcrypt'), deprecated='auto')
SSH_KEYGEN = shutil.which('ssh-keygen')
SUBJECT_RE = re.compile(r"[a-zA-Z0-9_\-]+")


def main(argv, cwd):
    if len(argv) != 1:
        raise RuntimeError("Need to supply precisely one argument: name of subject")
    subject = argv[0]
    print(f"Attempt to add subject {subject!r}")
    keyfile_path = os.path.join(cwd, '.secret', 'keyfile')
    tokenfile_path = os.path.join(cwd, '.secret', 'tokenfile')
    if os.path.exists(keyfile_path):
        raise RuntimeError(f"Keyfile {keyfile_path} already present. Please proceed manually")
    if os.path.exists(tokenfile_path):
        raise RuntimeError(f"Tokenfile {tokenfile_path} already present. Please proceed manually")
    if not SUBJECT_RE.fullmatch(subject):
        raise RuntimeError(f"Subject name {subject!r} using disallowed characters")
    sanitized_subject = subject.replace('-', '_')
    print("Creating API key...")
    while True:
        password = getpass.getpass("Enter passphrase: ")
        if password == getpass.getpass("Repeat passphrase: "):
            break
        print("No match. Try again...")
    token = base64.b64encode(f"{subject}:{password}".encode('utf-8'))
    hash_ = CRYPTCTX.hash(password)
    with open(tokenfile_path, "wb") as fileobj:
        os.fchmod(fileobj.fileno(), 0o600)
        fileobj.write(token)
    print("Creating key file using `ssh-keygen`...")
    subprocess.check_call([SSH_KEYGEN, '-t', 'ed25519', '-C', sanitized_subject, '-f', keyfile_path, '-N', '', '-q'])
    with open(keyfile_path + '.pub', "r") as fileobj:
        pubkey_components = fileobj.readline().split()
    print(f'''
The following SECRET files have been created:

  - {keyfile_path}
  - {tokenfile_path}

They are required for submitting test reports. You MUST keep them secure and safe.

Insert the following snippet into compliance-monitor/bootstrap.yaml:

  - subject: {subject}
    api_keys:
      - "{hash_}"
    keys:
      - public_key: "{pubkey_components[1]}"
        public_key_type: "{pubkey_components[0]}"
        public_key_name: "primary"

Make sure to submit a pull request with the changed file. Otherwise, the reports cannot be submitted.
''', end='')


if __name__ == "__main__":
    try:
        sys.exit(main(sys.argv[1:], cwd=os.path.dirname(sys.argv[0]) or os.getcwd()) or 0)
    except RuntimeError as e:
        print(str(e), file=sys.stderr)
        sys.exit(1)
    except KeyboardInterrupt:
        print("Interrupted", file=sys.stderr)
        sys.exit(128 + signal.SIGINT)
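The token that `add_subject.py` writes to the tokenfile is simply the Base64 encoding of `subject:passphrase`, the same shape as an HTTP Basic auth credential; the server side (per the comment, `compliance-monitor/monitor.py`) verifies the passphrase against the stored argon2/bcrypt hash. A quick sketch of just the token encoding, using only the standard library (the subject and passphrase values here are made-up examples):

```python
import base64

# Made-up example values, not real credentials.
subject, password = "example-subject", "correct horse battery staple"

# Encode as in add_subject.py: Base64 over "subject:password".
token = base64.b64encode(f"{subject}:{password}".encode("utf-8"))

# Decoding recovers the pair, as in HTTP Basic authentication; split on the
# first ':' only, since the passphrase itself may contain colons.
decoded_subject, decoded_password = (
    base64.b64decode(token).decode("utf-8").split(":", 1)
)
print(decoded_subject)  # example-subject
```

Note that Base64 is an encoding, not encryption, which is why the script chmods the tokenfile to 0o600 and why only the argon2/bcrypt hash, never the token, goes into bootstrap.yaml.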

Tests/iaas/mandatory-services/mandatory-iaas-services.py

File mode changed from 100644 to 100755
Lines changed: 50 additions & 72 deletions
@@ -1,3 +1,4 @@
+#!/usr/bin/env python3
 """Mandatory APIs checker
 This script retrieves the endpoint catalog from Keystone using the OpenStack
 SDK and checks whether all mandatory APi endpoints, are present.
@@ -26,54 +27,30 @@
 block_storage_service = ["volume", "volumev3", "block-storage"]


-def connect(cloud_name: str) -> openstack.connection.Connection:
-    """Create a connection to an OpenStack cloud
-    :param string cloud_name:
-        The name of the configuration to load from clouds.yaml.
-    :returns: openstack.connnection.Connection
-    """
-    return openstack.connect(
-        cloud=cloud_name,
-    )
-
-
-def check_presence_of_mandatory_services(cloud_name: str, s3_credentials=None):
-    try:
-        connection = connect(cloud_name)
-        services = connection.service_catalog
-    except Exception as e:
-        print(str(e))
-        raise Exception(
-            f"Connection to cloud '{cloud_name}' was not successfully. "
-            f"The Catalog endpoint could not be accessed. "
-            f"Please check your cloud connection and authorization."
-        )
+def check_presence_of_mandatory_services(conn: openstack.connection.Connection, s3_credentials=None):
+    services = conn.service_catalog

     if s3_credentials:
         mandatory_services.remove("object-store")
     for svc in services:
         svc_type = svc['type']
         if svc_type in mandatory_services:
             mandatory_services.remove(svc_type)
-            continue
-        if svc_type in block_storage_service:
+        elif svc_type in block_storage_service:
             block_storage_service.remove(svc_type)

     bs_service_not_present = 0
     if len(block_storage_service) == 3:
         # neither block-storage nor volume nor volumev3 is present
         # we must assume, that there is no volume service
-        logger.error("FAIL: No block-storage (volume) endpoint found.")
+        logger.error("No block-storage (volume) endpoint found.")
         mandatory_services.append(block_storage_service[0])
         bs_service_not_present = 1
-    if not mandatory_services:
-        # every mandatory service API had an endpoint
-        return 0 + bs_service_not_present
-    else:
-        # there were multiple mandatory APIs not found
-        logger.error(f"FAIL: The following endpoints are missing: "
-                     f"{mandatory_services}")
-        return len(mandatory_services) + bs_service_not_present
+    if mandatory_services:
+        # some mandatory APIs were not found
+        logger.error(f"The following endpoints are missing: "
+                     f"{', '.join(mandatory_services)}.")
+    return len(mandatory_services) + bs_service_not_present


 def list_containers(conn):
@@ -167,8 +144,8 @@ def s3_from_ostack(creds, conn, endpoint):
     # pass


-def check_for_s3_and_swift(cloud_name: str, s3_credentials=None):
-    # If we get credentials we assume, that there is no Swift and only test s3
+def check_for_s3_and_swift(conn: openstack.connection.Connection, s3_credentials=None):
+    # If we get credentials, we assume that there is no Swift and only test s3
     if s3_credentials:
         try:
             s3 = s3_conn(s3_credentials)
@@ -183,58 +160,46 @@ def check_for_s3_and_swift(cloud_name: str, s3_credentials=None):
         if s3_buckets == [TESTCONTNAME]:
             del_bucket(s3, TESTCONTNAME)
         # everything worked, and we don't need to test for Swift:
-        print("SUCCESS: S3 exists")
+        logger.info("SUCCESS: S3 exists")
         return 0
     # there were no credentials given, so we assume s3 is accessable via
     # the service catalog and Swift might exist too
-    try:
-        connection = connect(cloud_name)
-        connection.authorize()
-    except Exception as e:
-        print(str(e))
-        raise Exception(
-            f"Connection to cloud '{cloud_name}' was not successfully. "
-            f"The Catalog endpoint could not be accessed. "
-            f"Please check your cloud connection and authorization."
-        )
     s3_creds = {}
     try:
-        endpoint = connection.object_store.get_endpoint()
-    except Exception as e:
-        logger.error(
-            f"FAIL: No object store endpoint found in cloud "
-            f"'{cloud_name}'. No testing for the s3 service possible. "
-            f"Details: %s", e
+        endpoint = conn.object_store.get_endpoint()
+    except Exception:
+        logger.exception(
+            "No object store endpoint found. No testing for the s3 service possible."
         )
         return 1
     # Get S3 endpoint (swift) and ec2 creds from OpenStack (keystone)
-    s3_from_ostack(s3_creds, connection, endpoint)
+    s3_from_ostack(s3_creds, conn, endpoint)
     # Overrides (var names are from libs3, in case you wonder)
     s3_from_env(s3_creds, "HOST", "S3_HOSTNAME", "https://")
     s3_from_env(s3_creds, "AK", "S3_ACCESS_KEY_ID")
     s3_from_env(s3_creds, "SK", "S3_SECRET_ACCESS_KEY")

-    s3 = s3_conn(s3_creds, connection)
+    s3 = s3_conn(s3_creds, conn)
     s3_buckets = list_s3_buckets(s3)
     if not s3_buckets:
         s3_buckets = create_bucket(s3, TESTCONTNAME)
     assert s3_buckets

     # If we got till here, s3 is working, now swift
-    swift_containers = list_containers(connection)
+    swift_containers = list_containers(conn)
     # if not swift_containers:
-    #     swift_containers = create_container(connection, TESTCONTNAME)
+    #     swift_containers = create_container(conn, TESTCONTNAME)
     result = 0
     if Counter(s3_buckets) != Counter(swift_containers):
-        print("WARNING: S3 buckets and Swift Containers differ:\n"
-              f"S3: {sorted(s3_buckets)}\nSW: {sorted(swift_containers)}")
+        logger.warning("S3 buckets and Swift Containers differ:\n"
+                       f"S3: {sorted(s3_buckets)}\nSW: {sorted(swift_containers)}")
         result = 1
     else:
-        print("SUCCESS: S3 and Swift exist and agree")
+        logger.info("SUCCESS: S3 and Swift exist and agree")
     # Clean up
     # FIXME: Cleanup created EC2 credential
     # if swift_containers == [TESTCONTNAME]:
-    #     del_container(connection, TESTCONTNAME)
+    #     del_container(conn, TESTCONTNAME)
     # Cleanup created S3 bucket
     if s3_buckets == [TESTCONTNAME]:
         del_bucket(s3, TESTCONTNAME)
@@ -266,34 +231,47 @@ def main():
         help="Enable OpenStack SDK debug logging"
     )
     args = parser.parse_args()
+    logging.basicConfig(
+        format="%(levelname)s: %(message)s",
+        level=logging.DEBUG if args.debug else logging.INFO,
+    )
     openstack.enable_logging(debug=args.debug)

     # parse cloud name for lookup in clouds.yaml
-    cloud = os.environ.get("OS_CLOUD", None)
-    if args.os_cloud:
-        cloud = args.os_cloud
-    assert cloud, (
-        "You need to have the OS_CLOUD environment variable set to your cloud "
-        "name or pass it via --os-cloud"
-    )
+    cloud = args.os_cloud or os.environ.get("OS_CLOUD", None)
+    if not cloud:
+        raise RuntimeError(
+            "You need to have the OS_CLOUD environment variable set to your "
+            "cloud name or pass it via --os-cloud"
+        )

     s3_credentials = None
     if args.s3_endpoint:
         if (not args.s3_access) or (not args.s3_access_secret):
-            print("WARNING: test for external s3 needs access key and access secret.")
+            logger.warning("test for external s3 needs access key and access secret.")
         s3_credentials = {
             "AK": args.s3_access,
             "SK": args.s3_access_secret,
             "HOST": args.s3_endpoint
         }
     elif args.s3_access or args.s3_access_secret:
-        print("WARNING: access to s3 was given, but no endpoint provided.")
+        logger.warning("access to s3 was given, but no endpoint provided.")

-    result = check_presence_of_mandatory_services(cloud, s3_credentials)
-    result = result + check_for_s3_and_swift(cloud, s3_credentials)
+    with openstack.connect(cloud) as conn:
+        result = check_presence_of_mandatory_services(conn, s3_credentials)
+        result += check_for_s3_and_swift(conn, s3_credentials)
+
+    print('service-apis-check: ' + ('PASS', 'FAIL')[min(1, result)])

     return result


 if __name__ == "__main__":
-    main()
+    try:
+        sys.exit(main())
+    except SystemExit:
+        raise
+    except BaseException as exc:
+        logging.debug("traceback", exc_info=True)
+        logging.critical(str(exc))
+        sys.exit(1)
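The refactored `check_presence_of_mandatory_services` is essentially set subtraction over the service catalog, with the three block-storage type aliases counting as one service. A standalone sketch of that core logic against a mocked catalog (the helper name and catalog data are invented for illustration, and only an illustrative subset of mandatory service types is listed; the real script talks to Keystone via the OpenStack SDK):

```python
# Illustrative subset of mandatory service types; the real list lives in the script.
MANDATORY = {"compute", "identity", "image", "network", "object-store"}
# Any one of these aliases satisfies the block-storage requirement.
BLOCK_STORAGE_ALIASES = {"volume", "volumev3", "block-storage"}


def missing_services(catalog_types):
    """Return the set of mandatory services with no endpoint in the catalog.

    `catalog_types` mocks the 'type' fields of the Keystone service catalog.
    """
    missing = MANDATORY - set(catalog_types)
    # Block storage is present if at least one of its aliases appears.
    if not BLOCK_STORAGE_ALIASES & set(catalog_types):
        missing.add("block-storage")
    return missing


# Mock catalog: block storage present under its 'volumev3' alias,
# but no object-store endpoint.
catalog = ["compute", "identity", "image", "network", "volumev3"]
print(sorted(missing_services(catalog)))  # ['object-store']
```

The script's exit logic then maps any nonzero count of missing services to a FAIL verdict via `('PASS', 'FAIL')[min(1, result)]`.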
