
Revert "Revert "fix(scheduling): query "/" to check if a runner is ready""#174

Merged
ericcurtin merged 1 commit intomainfrom
revert-173-revert-170-wait-runner
Sep 25, 2025
Merged

Revert "Revert "fix(scheduling): query "/" to check if a runner is ready""#174
ericcurtin merged 1 commit intomainfrom
revert-173-revert-170-wait-runner

Conversation

@doringeman (Contributor) commented Sep 25, 2025

Reverts #173, which reverted #170.

I reverted it because it suddenly stopped working: it started returning 404 instead of 503, and I wanted to make sure I test it properly rather than leave it on main like that. The 404 turned out to come from testing with DD's latest tagged llama.cpp, which is expected to return 404.

From #170:

The llama.cpp server returns an error if the model is still loading: https://github.com/ggml-org/llama.cpp/blob/459c0c2c1a400f960d7b8e8d94d31a8426f80986/tools/server/server.cpp#L4220. Wait for it to be loaded using the correct endpoint, as /v1/models doesn't return 503 while the model is loading.

To test this, run it, send a request to a big model so it starts loading, and look for level=info msg="srv log_server_r: request: GET / 503" component=llama.cpp in the logs.

make docker-run LLAMA_SERVER_VERSION=v0.0.16-rc1
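
For context, here is a minimal sketch of the kind of readiness loop this change restores. This is not the actual runner.go code: the function name, polling interval, timeout handling, and runner address are illustrative assumptions; only the probed path ("/") and the 503-while-loading behavior come from the PR.

```go
package main

// Minimal sketch of a readiness loop, assuming a plain net/http client and a
// hypothetical runner address; the real implementation lives in
// pkg/inference/scheduling/runner.go and may differ in structure.

import (
	"fmt"
	"net/http"
	"time"
)

// waitForRunner polls GET "/" until the runner stops answering 503.
// llama.cpp's server returns 503 on "/" while the model is still loading,
// whereas /v1/models does not, which is why "/" is the endpoint to probe.
func waitForRunner(baseURL string, timeout time.Duration) error {
	client := &http.Client{Timeout: 2 * time.Second}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(baseURL + "/")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode != http.StatusServiceUnavailable {
				return nil // no longer 503: the model has finished loading
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("runner at %s not ready after %s", baseURL, timeout)
}

func main() {
	// Hypothetical local runner address.
	if err := waitForRunner("http://localhost:8080", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}
```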

Summary by Sourcery

Bug Fixes:

  • Use GET "/" for the HTTP readiness probe instead of "/v1/models" to correctly detect a loading runner

@doringeman requested review from a team and Copilot on September 25, 2025 09:01
@sourcery-ai bot (Contributor) commented Sep 25, 2025

Reviewer's Guide

This PR reverts a previous revert to restore the readiness probe by querying the root endpoint (“/”) instead of “/v1/models”, adjusting the HTTP request in runner.go to correctly detect when the llama.cpp server is still loading.

File-Level Changes

Change: Reinstate the readiness check using the root ("/") endpoint
Details: Changed the HTTP GET request URL from "/v1/models" to "/" in the readiness probe (see the sketch below)
Files: pkg/inference/scheduling/runner.go
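
To make the file-level change concrete, here is a hypothetical before/after of the probe request. Names and structure are illustrative; the actual request construction in pkg/inference/scheduling/runner.go may differ.

```go
package main

import "net/http"

// Hypothetical sketch; names are illustrative, not the actual runner.go code.
// probePath was "/v1/models" before this PR; that endpoint never returns 503
// while a model is loading, so the runner looked ready too early. Probing "/"
// yields 503 until loading completes.
const probePath = "/" // previously "/v1/models"

// newReadinessRequest builds the probe request against a runner's base URL.
func newReadinessRequest(baseURL string) (*http.Request, error) {
	return http.NewRequest(http.MethodGet, baseURL+probePath, nil)
}

func main() {}
```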

Copilot AI (Contributor) left a comment

Pull Request Overview

This pull request reverts a previous revert, restoring a fix that changes the health check endpoint from /v1/models to / when determining if a runner is ready. The change addresses an issue where the llama.cpp server returns different HTTP status codes depending on the endpoint when the model is still loading.

  • Changes the readiness check endpoint from /v1/models to / to properly detect when a runner is ready
  • Ensures the health check receives a 503 status code (instead of 404) when the model is still loading
  • Restores functionality that was working correctly but was temporarily reverted for testing


@gemini-code-assist bot (Contributor) commented

Summary of Changes

Hello @doringeman, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request restores a critical fix for the llama.cpp server's readiness check. It reverts a prior revert, ensuring that the system accurately detects when a model is still loading by querying the appropriate endpoint. This change prevents premature scheduling to runners that are not yet fully operational, improving the overall stability and reliability of the inference scheduling process.

Highlights

  • Reinstating Readiness Check Fix: This pull request reverts a previous revert, effectively bringing back the original fix that changes the llama.cpp server readiness check to query the root endpoint (/) instead of /v1/models.
  • Correct Status Code for Loading Models: The change ensures that when a llama.cpp model is still loading, the readiness check correctly receives a 503 (Service Unavailable) status, providing an accurate indication of the runner's state.
  • Addressing Testing Misunderstanding: The initial revert was due to a misunderstanding during testing, where a 404 was observed with a specific llama.cpp version, which was expected behavior for that version, not an issue with the original fix.

@sourcery-ai bot left a comment

Hey there - I've reviewed your changes and they look great!



@gemini-code-assist bot left a comment

Code Review

This pull request reverts a previous revert, effectively re-introducing a change that uses the "/" endpoint for the HTTP readiness probe instead of "/v1/models". The change aims to correctly detect when a runner is loading a model, as the "/models" endpoint doesn't return a 503 status during the loading process. The review focuses on ensuring the correctness of the endpoint used for the readiness probe.

@ericcurtin merged commit 9ae8b80 into main on Sep 25, 2025 (5 checks passed).
@ericcurtin deleted the revert-173-revert-170-wait-runner branch on September 25, 2025 at 10:54.