
fix: ensure Slurm scheduler initializes independently of gRPC connection #1

Open
fluidnumerics-joe wants to merge 1 commit into main from claude/slurm-prometheus-conversion-7dTYW

Conversation

@fluidnumerics-joe
Member

The Slurm scheduler (file-based) was initialized after the gRPC client setup in Init(). If the gRPC connection to the GPU agent service failed, initalizeScheduler() was never called, leaving slurmScheduler as nil. During reconnect cycles, Close() would destroy the existing Slurm client, and if Init() failed again at initclients(), the Slurm data was permanently lost, causing empty job_id, job_user, and job_partition labels in Prometheus metrics.

Changes:

  • Move initalizeScheduler() before initclients() in Init() so the Slurm scheduler is always created regardless of gRPC connection state (see the sketch below)
  • Make scheduler init failure non-fatal (log and continue)
  • Preserve the Slurm scheduler across reconnect() cycles to avoid losing accumulated workload data
  • Add a test verifying that Slurm labels flow into Prometheus label output
  • Add a test verifying that the Slurm scheduler survives reconnect cycles

https://claude.ai/code/session_017EwiweAXuPWUcAgisyuuBk
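
To make the ordering concrete, here is a minimal Go sketch of the intended control flow, not the actual exporter code. Only initalizeScheduler(), initclients(), reconnect(), and the slurmScheduler field come from the description above; the Exporter type and the remaining helpers are hypothetical stand-ins.

```go
package exporter

import "log"

// SlurmScheduler is a hypothetical stand-in for the file-based
// Slurm job metadata source described above.
type SlurmScheduler struct{}

type Exporter struct {
	slurmScheduler *SlurmScheduler
	// gRPC client fields elided
}

func initalizeScheduler() (*SlurmScheduler, error) {
	// Reads Slurm job files; no gRPC dependency.
	return &SlurmScheduler{}, nil
}

func (e *Exporter) initclients() error {
	// Dials the GPU agent gRPC service; may fail transiently.
	return nil
}

// Init creates the Slurm scheduler first, so a gRPC failure can no
// longer leave slurmScheduler nil.
func (e *Exporter) Init() error {
	if e.slurmScheduler == nil {
		s, err := initalizeScheduler()
		if err != nil {
			// Non-fatal: log and continue so GPU metrics still
			// export, just without Slurm job labels.
			log.Printf("slurm scheduler init failed: %v", err)
		} else {
			e.slurmScheduler = s
		}
	}
	return e.initclients() // gRPC setup happens last
}

// reconnect tears down and re-establishes the gRPC connection but
// deliberately leaves e.slurmScheduler untouched, preserving the
// accumulated workload data across cycles.
func (e *Exporter) reconnect() error {
	e.closeClients() // close gRPC clients only, not the Slurm client
	return e.Init()
}

func (e *Exporter) closeClients() { /* gRPC teardown elided */ }
```

The key design point is that the Slurm scheduler lives outside the gRPC teardown path, so a failed or looping reconnect can no longer wipe the job data that feeds the job_id, job_user, and job_partition labels.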

