Conversation

@roomote roomote bot commented Aug 16, 2025

Summary

This PR fixes the issue where LM Studio models appeared twice in the Provider Configuration Profile when they were both downloaded and loaded.

Problem

The previous implementation fetched models from two different LM Studio APIs:

  • listDownloadedModels() - showed all models on disk
  • listLoaded() - showed models currently in memory

This caused duplicates when a model was both downloaded and loaded, with inconsistent keys (model.path vs model.modelKey).

Solution

As suggested by @daniel-lxs, this PR simplifies the implementation by:

  • Removing the listDownloadedModels() call entirely (lines 74-83)
  • Only showing loaded models - these are the only ones that can actually process requests
  • Using model.path as the consistent key to prevent any potential duplicates
  • Updating tests to reflect the simplified implementation
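The keying strategy above can be sketched as follows. This is a hedged illustration, not the actual lmstudio.ts code: the `LoadedModel` shape (`path`, `contextLength`) is an assumption standing in for the real `@lmstudio/sdk` types, and only the dedup-by-path idea mirrors the PR.

```typescript
// Assumed stand-in for the SDK's loaded-model type; field names are
// illustrative, not the real @lmstudio/sdk definitions.
interface LoadedModel {
	path: string
	contextLength: number
}

// Key the result map by model.path so the same model can never appear
// twice, no matter how many times the API reports it.
function buildModelMap(loaded: LoadedModel[]): Record<string, LoadedModel> {
	const models: Record<string, LoadedModel> = {}
	for (const model of loaded) {
		models[model.path] = model
	}
	return models
}
```

Because an object can hold only one value per key, duplicate paths collapse automatically and no separate deduplication pass is needed.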

Benefits

  • ✅ Eliminates duplicates entirely (no deduplication logic needed)
  • ✅ Only shows models that can actually process requests
  • ✅ Provides accurate runtime metadata (contextLength vs maxContextLength)
  • ✅ Simpler, more maintainable code
  • ✅ Better performance (one less API call)

UX Change

Users will need to load models in LM Studio before they appear in Roo Code. This is already required to use them anyway, so it aligns the UI with the actual functionality.

Testing

  • All existing tests have been updated and pass
  • Removed tests for listDownloadedModels functionality
  • Updated test to verify that loaded models use model.path as the key

Fixes #6954


Important

Fixes duplicate LM Studio models by removing listDownloadedModels() and using model.path as the key in lmstudio.ts.

  • Behavior:
    • Removes listDownloadedModels() call in lmstudio.ts, only using listLoaded() to fetch models.
    • Uses model.path as the key to prevent duplicates.
  • Testing:
    • Updates tests in lmstudio.test.ts to reflect removal of listDownloadedModels().
    • Verifies that loaded models use model.path as the key.
  • Misc:
    • Users must load models in LM Studio for them to appear in Roo Code.

This description was created by Ellipsis for 6f7c4bc.

- Remove listDownloadedModels() call and its try/catch block
- Only fetch loaded models as they are the only ones that can process requests
- Use model.path as consistent key to prevent duplicates
- Update tests to reflect the simplified implementation

This fixes the issue where models appeared twice in the Provider Configuration
Profile when they were both downloaded and loaded in LM Studio.

Fixes #6954
@roomote roomote bot requested review from cte, jr and mrubens as code owners August 16, 2025 03:44
@dosubot dosubot bot added size:M This PR changes 30-99 lines, ignoring generated files. bug Something isn't working labels Aug 16, 2025
@roomote roomote bot left a comment

Reviewing my own code is like debugging in a mirror - everything looks backwards but the bugs are still mine.

- models[lmstudioModel.modelKey] = parseLMStudioModel(lmstudioModel)
- modelsWithLoadedDetails.add(lmstudioModel.modelKey)
+ // Use model.path as the consistent key to prevent duplicates
+ models[lmstudioModel.path] = parseLMStudioModel(lmstudioModel)
Good simplification! Using model.path as the consistent key should effectively prevent duplicates. However, I notice that the forceFullModelDetailsLoad function (not in this diff) still uses modelId to add to modelsWithLoadedDetails. Should that be updated to use the model path for consistency?
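The inconsistency this comment flags can be illustrated with a hypothetical sketch: if the model map is keyed by path but the loaded-details set still records the model id, membership checks silently fail. All names below are assumptions for illustration, not the actual lmstudio.ts code.

```typescript
// Tracks which models have had full details loaded.
const modelsWithLoadedDetails = new Set<string>()

// Hypothetical: records the model id, as the comment says
// forceFullModelDetailsLoad still does.
function markLoaded(modelId: string) {
	modelsWithLoadedDetails.add(modelId)
}

// Hypothetical: a lookup keyed by path, matching the new model-map key,
// will never find an entry recorded under the id.
function hasLoadedDetails(path: string): boolean {
	return modelsWithLoadedDetails.has(path)
}
```

With mixed keys, `markLoaded("my-model")` followed by `hasLoadedDetails("models/my-model.gguf")` returns false even though the details were loaded, which is why using one key everywhere matters.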

- console.warn("Failed to list downloaded models, falling back to loaded models only")
- }
- // We want to list loaded models *anyway* since they provide valuable extra info (context size)
+ // Only get loaded models - these are the only ones that can actually process requests
Nice change to only fetch loaded models. This makes sense since only loaded models can actually process requests. The comment clearly explains the rationale.


  const expectedParsedModel = parseLMStudioModel(mockRawModel)
- expect(result).toEqual({ [mockRawModel.modelKey]: expectedParsedModel })
+ // Now using model.path as the key instead of modelKey
Good test update! The assertion correctly verifies that we're now using model.path as the key instead of modelKey.

  })

- it("should fall back to listLoaded when listDownloadedModels fails", async () => {
+ it("should fetch only loaded models and use model.path as key", async () => {
Excellent test name update that clearly describes the new behavior - fetching only loaded models and using model.path as the key.

@mechanicmuthu

Not a good feature to have:

LM Studio, when configured with Just-in-Time (JIT) Model Loading, can automatically load an unloaded model upon receiving an API call that requests that specific model.

This behavior has been in effect since October 2024: https://lmstudio.ai/blog/lmstudio-v0.3.5#on-demand-model-loading

There is also an automatic model unload after a 60-minute timeout, so it is better to show all models with simple deduplication.

@daniel-lxs
Member

Closing this PR based on feedback from @mechanicmuthu about LM Studio's JIT Model Loading feature (available since v0.3.5). The current approach of only showing loaded models doesn't align well with the JIT loading capability where models can be automatically loaded on-demand. Will open a new PR with a simpler deduplication approach that shows all models while preventing duplicates.
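The follow-up deduplication approach described here could look roughly like the sketch below: show every downloaded model, but let a loaded entry (which carries runtime metadata) overwrite the downloaded one on a path collision. Field names and the `mergeModels` helper are assumptions for illustration, not code from any PR.

```typescript
// Assumed minimal model shape; `contextLength` is only known for
// loaded models, mirroring the runtime-metadata point made in the PR.
interface ModelEntry {
	path: string
	loaded: boolean
	contextLength?: number
}

// Merge downloaded and loaded model lists into one map keyed by path.
function mergeModels(
	downloaded: ModelEntry[],
	loaded: ModelEntry[],
): Record<string, ModelEntry> {
	const models: Record<string, ModelEntry> = {}
	for (const m of downloaded) models[m.path] = m
	// Loaded entries win on collision, so runtime metadata is preserved
	// while unloaded-but-downloaded models remain visible for JIT loading.
	for (const m of loaded) models[m.path] = m
	return models
}
```

This keeps all models visible (matching JIT loading, where an unloaded model can still serve requests) while preventing the duplicate entries the original issue describes.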

Successfully merging this pull request may close these issues: LM Studio: Loaded models appear twice in Provider Configuration Profile