feat: add LM Studio as first-class LLM provider #1

Merged
dpsoft merged 5 commits into main from feature/lm-studio-provider on Feb 22, 2026
Conversation

@ramirolaso
Collaborator

Summary

  • Adds LlmProvider::LmStudio variant alongside the existing Claude, Ollama, and Custom providers
  • Injects ANTHROPIC_BASE_URL=http://10.0.2.2:1234 and ANTHROPIC_API_KEY=lm-studio into the guest VM so claude-code talks to the local LM Studio server via the SLIRP gateway — no proxy needed (LM Studio 0.3.x+ exposes a native Anthropic-compatible Messages API)
  • Adds lm_studio(model) and lm_studio_with_host(model, host) constructors, and extends all match arms: cli_args(), env_vars(), is_local(), description(), and the model() builder
  • Adds LM_STUDIO_MODEL detection in detect_llm_provider() (checked before OLLAMA_MODEL)
  • Adds examples/lm_studio_local.rs — a runnable example mirroring ollama_local.rs, with KVM/mock fallback and LM Studio-specific prerequisites documented
  • Bumps default VM memory to 2048 MB across all local-LLM examples and common::make_box — necessary because claude-code (~250 MB RSS) + a full initramfs (~299 MB decompressed) exceeds 1 GB
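The provider shape described above can be sketched as follows. Only the names listed in this summary (`LmStudio`, `lm_studio`, `lm_studio_with_host`, `cli_args`, `env_vars`, `is_local`) come from the PR; the field layout and the elided match arms are assumptions for illustration.

```rust
#[derive(Debug, Clone, PartialEq)]
enum LlmProvider {
    Claude,
    Ollama { model: String },
    LmStudio { model: String, host: String },
    Custom { base_url: String },
}

impl LlmProvider {
    /// Mirrors `lm_studio(model)`: defaults to the SLIRP gateway
    /// address as seen from inside the guest VM.
    fn lm_studio(model: &str) -> Self {
        Self::lm_studio_with_host(model, "http://10.0.2.2:1234")
    }

    fn lm_studio_with_host(model: &str, host: &str) -> Self {
        Self::LmStudio { model: model.into(), host: host.into() }
    }

    /// Environment variables injected into the guest so claude-code
    /// targets the local server instead of the hosted Anthropic API.
    fn env_vars(&self) -> Vec<(String, String)> {
        match self {
            Self::LmStudio { host, .. } => vec![
                ("ANTHROPIC_BASE_URL".into(), host.clone()),
                ("ANTHROPIC_API_KEY".into(), "lm-studio".into()),
            ],
            _ => Vec::new(), // other arms elided in this sketch
        }
    }

    fn is_local(&self) -> bool {
        matches!(self, Self::Ollama { .. } | Self::LmStudio { .. })
    }
}
```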

Test plan

  • cargo test -p void-box -- llm::tests — all 17 tests pass (5 new LM Studio tests added)
  • cargo build — compiles clean
  • cargo clippy -- -D warnings — no warnings
  • Mock mode: LM_STUDIO_MODEL=deepseek-r1-distill-llama-8b cargo run --example lm_studio_local runs without KVM
  • KVM mode: example boots guest, vsock connects, and claude-code calls LM Studio at http://10.0.2.2:1234

LM Studio prerequisite

In the LM Studio app → Local Server tab → Start Server (default port 1234). The server must listen on 0.0.0.0, not just 127.0.0.1, for the SLIRP gateway to reach it from inside the VM — same constraint as Ollama.
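A quick host-side probe (a sketch, assuming the default port 1234; `lm_studio_reachable` is a hypothetical helper, not part of the PR) can catch a server that is down before booting the VM. Note that a successful connect to 127.0.0.1 does not prove the 0.0.0.0 binding; probe a non-loopback interface address of the host to verify that part.

```rust
use std::net::{SocketAddr, TcpStream};
use std::time::Duration;

// Returns true if a TCP connection to `addr` (e.g. "127.0.0.1:1234")
// succeeds within the timeout; false on parse failure or no listener.
fn lm_studio_reachable(addr: &str) -> bool {
    match addr.parse::<SocketAddr>() {
        Ok(sock) => TcpStream::connect_timeout(&sock, Duration::from_secs(2)).is_ok(),
        Err(_) => false,
    }
}
```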

@dpsoft dpsoft requested a review from Copilot February 22, 2026 15:48
@dpsoft dpsoft merged commit ca63302 into main Feb 22, 2026
10 checks passed
@dpsoft dpsoft deleted the feature/lm-studio-provider branch February 22, 2026 15:50

Copilot AI left a comment


Pull request overview

This PR adds LM Studio as a first-class LLM provider alongside the existing Claude, Ollama, and Custom providers. It enables void-box to use local LM Studio instances running on the host machine, leveraging LM Studio 0.3.x+'s native Anthropic-compatible API via SLIRP networking. The implementation also includes a refactoring of the kernel module installation script to support newer .ko.zst compression formats, and bumps VM memory for local LLM examples to accommodate the larger resource requirements.

Changes:

  • Adds LlmProvider::LmStudio variant with constructor methods and full integration into the provider system
  • Refactors kernel module installation in build script to support .ko.xz, .ko.zst, and uncompressed .ko formats
  • Adds comprehensive test coverage for LM Studio provider (5 new tests)
  • Increases VM memory from 1024 MB to 2048 MB for local LLM examples to handle claude-code + initramfs requirements
  • Adds LM_STUDIO_MODEL detection to the provider auto-detection logic

Reviewed changes

Copilot reviewed 5 out of 6 changed files in this pull request and generated 2 comments.

Summary per file:

  • src/llm.rs: Adds LmStudio enum variant, constructors, and implementations for all provider methods (cli_args, env_vars, is_local, description, model builder); includes comprehensive test coverage
  • scripts/build_guest_image.sh: Refactors kernel module installation to handle multiple compression formats (.ko.xz, .ko.zst, .ko) with built-in module detection
  • examples/lm_studio_local.rs: New runnable example demonstrating LM Studio integration with KVM/mock fallback, mirroring the ollama_local.rs pattern
  • examples/ollama_local.rs: Updates memory allocation from 256 MB to 2048 MB for local LLM workloads
  • examples/common/mod.rs: Adds LM_STUDIO_MODEL detection in detect_llm_provider() and updates default memory to 2048 MB
  • .gitignore: Adds IDE and local utility directories (.claude, .idea, .local-utils)


Comment on lines 33 to 37
/// Detect the LLM provider from environment variables.
///
/// - `OLLAMA_MODEL=qwen3-coder` -> Ollama with that model
/// - `LLM_BASE_URL=...` -> Custom provider
/// - Otherwise -> Claude (default)

Copilot AI Feb 22, 2026


The documentation comment should be updated to include the newly added LM Studio detection. It currently lists Ollama and LLM_BASE_URL options, but omits LM_STUDIO_MODEL which is now checked first in the implementation.
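One possible shape for the updated doc comment and detection order is sketched below. The real detect_llm_provider() presumably reads the process environment; taking the environment as a parameter here is an assumption made to keep the sketch self-contained and testable, and the returned strings are illustrative only.

```rust
use std::collections::HashMap;

/// Detect the LLM provider from environment variables.
///
/// - `LM_STUDIO_MODEL=...` -> LM Studio with that model (checked first)
/// - `OLLAMA_MODEL=qwen3-coder` -> Ollama with that model
/// - `LLM_BASE_URL=...` -> Custom provider
/// - Otherwise -> Claude (default)
fn detect_llm_provider(env: &HashMap<String, String>) -> String {
    if let Some(m) = env.get("LM_STUDIO_MODEL") {
        format!("lm-studio:{m}")
    } else if let Some(m) = env.get("OLLAMA_MODEL") {
        format!("ollama:{m}")
    } else if env.contains_key("LLM_BASE_URL") {
        "custom".to_string()
    } else {
        "claude".to_string()
    }
}
```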

//! 4. Build the guest initramfs:
//! ```
//! CLAUDE_CODE_BIN=$(which claude) BUSYBOX=/usr/bin/busybox \
//! scripts/build_claude_rootfs.sh

Copilot AI Feb 22, 2026


The documentation references scripts/build_claude_rootfs.sh, but the Ollama example uses scripts/build_guest_image.sh. These should be consistent. Verify which script name is correct and update the documentation accordingly.

Suggested change:

- //! scripts/build_claude_rootfs.sh
+ //! scripts/build_guest_image.sh


3 participants