Commit dedef28

Fix linting issues on the merge of origin/main
1 parent 684123f · commit dedef28

6 files changed: +43 additions, -40 deletions

ansible/roles/topology/README.md
Lines changed: 17 additions & 17 deletions

@@ -1,5 +1,4 @@
-topology
-========
+# topology
 
 Templates out /etc/slurm/topology.conf file based on an OpenStack project for use by
 Slurm's [topology/tree plugin.](https://slurm.schedmd.com/topology.html) Models
@@ -12,22 +11,23 @@ reconfigure an already running cluster after a `ansible/site.yml` run. You will
 to run the `ansible/adhoc/restart-slurm.yml` playbook for changes to topology.conf to be
 recognised.
 
-Role Variables
---------------
+## Role Variables
 
 - `topology_nodes:`: Required list of strs. List of inventory hostnames of nodes to include in topology tree. Must be set to include all compute nodes in Slurm cluster. Default `[]`.
 - `topology_conf_template`: Optional str. Path to Jinja2 template of topology.conf file. Default
   `templates/topology.conf.j2`
-- `topology_above_rack_topology`: Optionally multiline str. Used to define topology above racks/AZs if
-  you wish to partition racks further under different logical switches. New switches above should be
-  defined as [SwitchName lines](https://slurm.schedmd.com/topology.html#hierarchical) referencing
-  rack Availability Zones under that switch in their `Switches fields`. These switches must themselves
-  be under a top level switch. e.g
-  ```
-  topology_above_rack_topology: |
-    SwitchName=rack-group-1 Switches=rack-az-1,rack-az-2
-    SwitchName=rack-group-2 Switches=rack-az-3,rack-az-4
-    SwitchName=top-level Switches=rack-group-1,rack-group-2
-  ```
-  Defaults to an empty string, which causes all AZs to be put under a
-  single top level switch.
+- `topology_above_rack_topology`: Optionally multiline str. Used to define topology above racks/AZs if
+  you wish to partition racks further under different logical switches. New switches above should be
+  defined as [SwitchName lines](https://slurm.schedmd.com/topology.html#hierarchical) referencing
+  rack Availability Zones under that switch in their `Switches fields`. These switches must themselves
+  be under a top level switch. e.g
+
+  ```yaml
+  topology_above_rack_topology: |
+    SwitchName=rack-group-1 Switches=rack-az-1,rack-az-2
+    SwitchName=rack-group-2 Switches=rack-az-3,rack-az-4
+    SwitchName=top-level Switches=rack-group-1,rack-group-2
+  ```
+
+  Defaults to an empty string, which causes all AZs to be put under a
+  single top level switch.
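To make the README's description more concrete, here is a minimal, self-contained sketch of how a host mapping of the shape produced by this role's `map_hosts.py` module could be turned into topology.conf-style `SwitchName` lines. It is illustrative only: the role actually renders the file through the `topology_conf_template` Jinja2 template, and the `render_topology` function, the switch-naming scheme and the sample data below are invented for this example.

```python
# Illustrative sketch only -- the role renders topology.conf from a Jinja2
# template; this just shows the intended switch hierarchy in plain Python.
def render_topology(topo, above_rack_topology=""):
    """Render {az: {host_id_prefix: [nodename, ...]}} as topology.conf-style lines."""
    lines = []
    for az, hosts in topo.items():
        for host_id, nodes in hosts.items():
            # One leaf switch per hypervisor, containing its compute nodes
            lines.append(f"SwitchName={az}-{host_id} Nodes={','.join(nodes)}")
        # One switch per availability zone, containing its hypervisor switches
        lines.append(f"SwitchName={az} Switches=" + ",".join(f"{az}-{h}" for h in hosts))
    if above_rack_topology:
        # User-supplied switch layers above the AZs (topology_above_rack_topology)
        lines.extend(above_rack_topology.strip().splitlines())
    else:
        # Default: every AZ sits under a single top-level switch
        lines.append("SwitchName=top-level Switches=" + ",".join(topo))
    return "\n".join(lines)


# Invented example data: one AZ with one hypervisor hosting two compute nodes
print(render_topology({"rack-az-1": {"6e569": ["mycluster-compute-0", "mycluster-compute-1"]}}))
```

With an empty `topology_above_rack_topology` this prints one hypervisor switch, one AZ switch and a single top-level switch, matching the default behaviour described above.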

ansible/roles/topology/defaults/main.yml
Lines changed: 0 additions & 1 deletion

@@ -5,4 +5,3 @@ topology_nodes: []
 topology_conf_template: templates/topology.conf.j2
 
 topology_above_rack_topology: ""
-

ansible/roles/topology/library/map_hosts.py
Lines changed: 20 additions & 16 deletions

@@ -1,10 +1,10 @@
-#!/usr/bin/python
+#!/usr/bin/python # pylint: disable=missing-module-docstring
 
 # Copyright: (c) 2025, StackHPC
 # Apache 2 License
 
-from ansible.module_utils.basic import AnsibleModule
-import openstack
+import openstack # pylint: disable=import-error
+from ansible.module_utils.basic import AnsibleModule # pylint: disable=import-error
 
 DOCUMENTATION = """
 ---
@@ -47,50 +47,54 @@
     - mycluster-compute-1
 """
 
+
 def min_prefix(uuids, start=4):
-    """ Take a list of uuids and return the smallest length >= start which keeps them unique """
+    """Take a list of uuids and return the smallest length >= start which keeps them unique"""
     for length in range(start, len(uuids[0])):
         prefixes = set(uuid[:length] for uuid in uuids)
         if len(prefixes) == len(uuids):
             return length
+    # Fallback to returning the full length
+    return len(uuids[0])
+
 
-def run_module():
-    module_args = dict(
-        compute_vms=dict(type='list', elements='str', required=True)
-    )
+def run_module(): # pylint: disable=missing-function-docstring
+    module_args = {"compute_vms": {"type": "list", "elements": "str", "required": True}}
     module = AnsibleModule(argument_spec=module_args, supports_check_mode=True)
 
     conn = openstack.connection.from_config()
 
-    servers = [s for s in conn.compute.servers() if s["name"] in module.params["compute_vms"]]
+    servers = [
+        s for s in conn.compute.servers() if s["name"] in module.params["compute_vms"]
+    ]
 
     topo = {}
     all_host_ids = []
     for s in servers:
-        az = s['availability_zone']
-        host_id = s['host_id']
-        if host_id != '': # empty string if e.g. server is shelved
+        az = s["availability_zone"]
+        host_id = s["host_id"]
+        if host_id != "": # empty string if e.g. server is shelved
            all_host_ids.append(host_id)
            if az not in topo:
                topo[az] = {}
            if host_id not in topo[az]:
                topo[az][host_id] = []
-            topo[az][host_id].append(s['name'])
+            topo[az][host_id].append(s["name"])
 
     uuid_len = min_prefix(list(set(all_host_ids)))
 
     for az in topo:
        topo[az] = dict((k[:uuid_len], v) for (k, v) in topo[az].items())
 
     result = {
-        "changed": False,
+        "changed": False,
        "topology": topo,
     }
-
+
     module.exit_json(**result)
 
 
-def main():
+def main(): # pylint: disable=missing-function-docstring
    run_module()
 
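As a quick check on the refactored `min_prefix` helper and the shape of the data the module builds, the following standalone snippet exercises the function body shown in the diff above. The `host_id` values are invented for illustration, and the example "topology" result in the comments (AZ name, hostnames) is likewise only indicative.

```python
def min_prefix(uuids, start=4):
    """Take a list of uuids and return the smallest length >= start which keeps them unique"""
    for length in range(start, len(uuids[0])):
        prefixes = set(uuid[:length] for uuid in uuids)
        if len(prefixes) == len(uuids):
            return length
    # Fallback to returning the full length
    return len(uuids[0])


# Invented host_id values that share their first four characters
host_ids = ["6e569c45a1b2c3d4", "6e56b7d8c9e0f1a2"]
print(min_prefix(host_ids))  # 5 -- four characters collide, five are unique
# run_module() then truncates host_ids to this length, so "topology" looks like:
# {"nova": {"6e569": ["mycluster-compute-0"], "6e56b": ["mycluster-compute-1"]}}
```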

ansible/roles/topology/tasks/main.yml
Lines changed: 1 addition & 1 deletion

@@ -13,4 +13,4 @@
     dest: /etc/slurm/topology.conf
     owner: root
     group: root
-    mode: 0644
+    mode: "0644"

ansible/slurm.yml
Lines changed: 2 additions & 2 deletions

@@ -60,14 +60,14 @@
   tags:
     - openhpc
   tasks:
-    - include_role:
+    - ansible.builtin.include_role:
         name: topology
       # Gated on topology group having compute nodes but role also
       # needs to run on control and login nodes
       when:
         - appliances_mode == 'configure'
         - groups['topology'] | length > 0
-    - include_role:
+    - ansible.builtin.include_role:
         name: stackhpc.openhpc
         tasks_from: "{{ 'runtime.yml' if appliances_mode == 'configure' else 'main.yml' }}"

environments/skeleton/{{cookiecutter.environment}}/tofu/node_group/variables.tf
Lines changed: 3 additions & 3 deletions

@@ -149,9 +149,9 @@ variable "match_ironic_node" {
 }
 
 variable "availability_zone" {
-  type = string
-  description = "Name of availability zone. If undefined, defaults to 'nova' if match_ironic_node is true, deferred to OpenStack otherwise"
-  default = null
+  type        = string
+  description = "Name of availability zone. If undefined, defaults to 'nova' if match_ironic_node is true, deferred to OpenStack otherwise"
+  default     = null
 }
 
 variable "baremetal_nodes" {
