feat: add LM Studio as first-class LLM provider #1
Conversation
Pull request overview
This PR adds LM Studio as a first-class LLM provider alongside the existing Claude, Ollama, and Custom providers. It lets void-box use a local LM Studio instance running on the host machine, reached from inside the VM over SLIRP networking via LM Studio 0.3.x+'s native Anthropic-compatible API. The PR also refactors the kernel module installation step of the build script to support the newer .ko.zst compression format, and bumps VM memory for the local LLM examples to accommodate claude-code plus the initramfs.
Changes:
- Adds `LlmProvider::LmStudio` variant with constructor methods and full integration into the provider system
- Refactors kernel module installation in the build script to support `.ko.xz`, `.ko.zst`, and uncompressed `.ko` formats
- Adds comprehensive test coverage for LM Studio provider (5 new tests)
- Increases VM memory from 1024 MB to 2048 MB for local LLM examples to handle claude-code + initramfs requirements
- Adds LM_STUDIO_MODEL detection to the provider auto-detection logic
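The detection order described above could be sketched as follows. This is a hedged sketch, not the actual `examples/common/mod.rs` code: the `Provider` enum and `detect_from` helper are illustrative names, not the crate's API.

```rust
use std::env;

// Illustrative sketch of the auto-detection order described in this PR;
// the names here (Provider, detect_from) are hypothetical.
#[derive(Debug, PartialEq)]
enum Provider {
    LmStudio(String),
    Ollama(String),
    Custom(String),
    Claude,
}

// Separated from direct env access so the precedence order is easy to test.
fn detect_from(get: impl Fn(&str) -> Option<String>) -> Provider {
    // LM_STUDIO_MODEL is checked first, then the pre-existing options.
    if let Some(model) = get("LM_STUDIO_MODEL") {
        return Provider::LmStudio(model);
    }
    if let Some(model) = get("OLLAMA_MODEL") {
        return Provider::Ollama(model);
    }
    if let Some(url) = get("LLM_BASE_URL") {
        return Provider::Custom(url);
    }
    Provider::Claude // default
}

#[allow(dead_code)]
fn detect_llm_provider() -> Provider {
    detect_from(|key| env::var(key).ok())
}
```

Splitting the lookup out of `detect_llm_provider` keeps the precedence rules unit-testable without mutating process environment variables.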
Reviewed changes
Copilot reviewed 5 out of 6 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| src/llm.rs | Adds LmStudio enum variant, constructors, and implementations for all provider methods (cli_args, env_vars, is_local, description, model builder); includes comprehensive test coverage |
| scripts/build_guest_image.sh | Refactors kernel module installation to handle multiple compression formats (.ko.xz, .ko.zst, .ko) with built-in module detection |
| examples/lm_studio_local.rs | New runnable example demonstrating LM Studio integration with KVM/mock fallback, mirroring the ollama_local.rs pattern |
| examples/ollama_local.rs | Updates memory allocation from 256 MB to 2048 MB for local LLM workloads |
| examples/common/mod.rs | Adds LM_STUDIO_MODEL detection in detect_llm_provider() and updates default memory to 2048 MB |
| .gitignore | Adds IDE and local utility directories (.claude, .idea, .local-utils) |
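The multi-format module handling described for `scripts/build_guest_image.sh` could look roughly like the sketch below. This is an assumption-laden sketch, not the PR's actual code: `find_and_install_module` and its arguments are hypothetical, and the real script may locate modules differently (e.g. via `modinfo`).

```shell
# Hypothetical sketch: locate a kernel module in any supported compression
# format (.ko.zst, .ko.xz, plain .ko) and install a decompressed copy into
# the initramfs staging directory. The third argument overrides the module
# search root (defaults to the running kernel's module tree).
find_and_install_module() {
    mod="$1"; dest="$2"
    base="${3:-/lib/modules/$(uname -r)}"
    if f=$(find "$base" -name "${mod}.ko.zst" 2>/dev/null | head -n1) && [ -n "$f" ]; then
        zstd -q -d -c "$f" > "$dest/${mod}.ko"
    elif f=$(find "$base" -name "${mod}.ko.xz" 2>/dev/null | head -n1) && [ -n "$f" ]; then
        xz -d -c "$f" > "$dest/${mod}.ko"
    elif f=$(find "$base" -name "${mod}.ko" 2>/dev/null | head -n1) && [ -n "$f" ]; then
        cp "$f" "$dest/${mod}.ko"
    else
        # Not an error per se: the module may be built into the kernel.
        echo "warning: ${mod} not found; it may be built into the kernel" >&2
        return 1
    fi
}
```

Decompressing at build time keeps the guest side simple: the initramfs only ever sees plain `.ko` files regardless of how the host distribution ships its modules.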
```rust
/// Detect the LLM provider from environment variables.
///
/// - `OLLAMA_MODEL=qwen3-coder` -> Ollama with that model
/// - `LLM_BASE_URL=...` -> Custom provider
/// - Otherwise -> Claude (default)
```
The documentation comment should be updated to include the newly added LM Studio detection. It currently lists Ollama and LLM_BASE_URL options, but omits LM_STUDIO_MODEL which is now checked first in the implementation.
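Addressing this, the updated doc comment might read (a suggestion, assuming `LM_STUDIO_MODEL` maps to the LmStudio variant as described in the PR summary):

```rust
/// Detect the LLM provider from environment variables.
///
/// - `LM_STUDIO_MODEL=...` -> LM Studio with that model (checked first)
/// - `OLLAMA_MODEL=qwen3-coder` -> Ollama with that model
/// - `LLM_BASE_URL=...` -> Custom provider
/// - Otherwise -> Claude (default)
```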
````rust
//! 4. Build the guest initramfs:
//! ```
//! CLAUDE_CODE_BIN=$(which claude) BUSYBOX=/usr/bin/busybox \
//!     scripts/build_claude_rootfs.sh
````
The documentation references scripts/build_claude_rootfs.sh, but the Ollama example uses scripts/build_guest_image.sh. These should be consistent. Verify which script name is correct and update the documentation accordingly.
Suggested change:
```diff
-//! scripts/build_claude_rootfs.sh
+//! scripts/build_guest_image.sh
```
Summary
Anthropic-compatible Messages API)
Test plan
LM Studio prerequisite
In the LM Studio app → Local Server tab → Start Server (default port 1234). The server must listen on 0.0.0.0, not just 127.0.0.1, for the SLIRP gateway to reach it from inside the VM — same constraint as Ollama.
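One way to verify the 0.0.0.0 binding from the host before launching the VM is a quick check against `/proc/net/tcp`. This is a Linux-only sketch; the helper name is an assumption, and 1234 is LM Studio's default server port.

```shell
# Succeeds if some process is listening on the given TCP port on all
# interfaces (0.0.0.0 or [::]) rather than only loopback. Linux-only:
# parses /proc/net/tcp, where an all-zeros local address means INADDR_ANY.
listening_on_all_interfaces() {
    port="${1:-1234}"                  # LM Studio's default server port
    hexport=$(printf '%04X' "$port")   # /proc/net/tcp stores ports in hex
    grep -qE "^ *[0-9]+: 00000000:${hexport} " /proc/net/tcp 2>/dev/null ||
        grep -qE "^ *[0-9]+: 0{32}:${hexport} " /proc/net/tcp6 2>/dev/null
}

# Usage:
#   listening_on_all_interfaces 1234 || echo "LM Studio is loopback-only"
```

A loopback-only bind shows up as `0100007F` (127.0.0.1) in `/proc/net/tcp` and will not match, which is exactly the failure mode that breaks SLIRP reachability from the guest.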