sambacc: avoid logging an error if cluster is being torn down
Saw this in a ceph teuthology run:
```
2024-08-20 20:39:57,289: DEBUG: Creating RADOS connection
2024-08-20 20:39:57,333: INFO: cluster meta content changed
2024-08-20 20:39:57,333: DEBUG: cluster meta: previous={'nodes':
[{'pnn': 0, 'identity': 'smb.adctdb1.0.0.ceph0.kdlxgn', 'node':
'192.168.76.200', 'state': 'ready'}, {'pnn': 1, 'identity':
'smb.adctdb1.1.0.ceph1.ngbqkk', 'node': '192.168.76.201', 'state':
'ready'}, {'pnn': 2, 'identity': 'smb.adctdb1.2.0.ceph2.rhmqnu', 'node':
'192.168.76.202', 'state': 'ready'}], '_source': 'cephadm'} current={}
2024-08-20 20:39:57,333: ERROR: error during ctdb_monitor_nodes: max()
arg is an empty sequence, count=0
Traceback (most recent call last):
File "/usr/lib/python3.9/site-packages/sambacc/commands/ctdb.py", line
479, in catch
yield
File "/usr/lib/python3.9/site-packages/sambacc/commands/ctdb.py", line
360, in ctdb_monitor_nodes
ctdb.monitor_cluster_meta_changes(
File "/usr/lib/python3.9/site-packages/sambacc/ctdb.py", line 561, in
monitor_cluster_meta_changes
expected_nodes = _cluster_meta_to_ctdb_nodes(
File "/usr/lib/python3.9/site-packages/sambacc/ctdb.py", line 506, in
_cluster_meta_to_ctdb_nodes
pnn_max = max(n["pnn"] for n in nodes) + 1 # pnn is zero indexed
ValueError: max() arg is an empty sequence
```
I could see from the ceph logs that the smb cluster was being removed right
around this time. If we had nodes and they suddenly vanished, we are likely
in the process of being removed: we raced a bit with cephadm removing
services while the smb mgr module was removing the contents of the .smb
pool. In that case an empty node list is expected, not an error worth
logging.
Signed-off-by: John Mulligan <[email protected]>