Local llm #83
Conversation
Pull Request Overview
This PR adds drop-in local LLM support via LM Studio as an OpenAI-compatible alternative and implements comprehensive test isolation infrastructure. The changes maintain full backward compatibility with OpenAI cloud services while enabling local development workflows.
Key changes include:
- LM Studio integration with robust structured output fallback mechanisms
- Local HuggingFace embeddings as an alternative to OpenAI embeddings
- Comprehensive test suite with mocked external dependencies for fast, isolated testing
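
Since LM Studio exposes an OpenAI-compatible API, the drop-in behavior amounts to pointing the standard client at a local base URL. A minimal sketch, assuming the `OPENAI_API_BASE` default that appears later in this diff (the placeholder key is an assumption; LM Studio does not validate it):

```python
import os

from openai import OpenAI

# The same OpenAI SDK client talks to LM Studio when the base URL points at
# its local server (port 1234 by default). The api_key fallback is a
# placeholder; LM Studio accepts any value.
client = OpenAI(
    base_url=os.getenv("OPENAI_API_BASE", "http://127.0.0.1:1234/v1"),
    api_key=os.getenv("OPENAI_API_KEY", "lm-studio"),
)
```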
Reviewed Changes
Copilot reviewed 13 out of 14 changed files in this pull request and generated 6 comments.
| File | Description |
|---|---|
| `llm_service.py` | Adds LM Studio provider support and multi-tier structured output fallback |
| `embedder_service.py` | Implements provider-aware embedding selection (OpenAI vs HuggingFace) |
| `requirements.txt` | Adds langchain_huggingface dependency for local embeddings |
| `Dockerfile` | Installs sentence_transformers and langchain-huggingface packages |
| `pytest.ini` | Configures test discovery and exclusion patterns |
| `genai/tests/*` | New isolated test suite with comprehensive mocking |
| `.env.example` | Updates configuration examples for LM Studio setup |
| `README.md` | Adds detailed LM Studio setup and usage documentation |
```python
openai_api_base=os.getenv("OPENAI_API_BASE", "http://127.0.0.1:1234/v1"),
)

if provider in ("openai", "llmstudio", "local"):
```
Copilot AI commented on Jul 20, 2025
The conditional check includes 'llmstudio' but this case is already handled by the previous condition on line 25. This creates unreachable code for the 'llmstudio' case in this block.
Suggested change:
```diff
- if provider in ("openai", "llmstudio", "local"):
+ if provider in ("openai", "local"):
```
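
To illustrate why the `'llmstudio'` member is dead code, here is a reconstructed sketch of the control flow (the helper functions and provider wiring are hypothetical, not the PR's actual code):

```python
def make_lmstudio_client():   # hypothetical helper
    return "lmstudio-client"

def make_openai_client():     # hypothetical helper
    return "openai-compatible-client"

provider = "llmstudio"

# The first branch already claims "llmstudio", so listing it again in the
# second branch's tuple can never match: that member is dead code.
if provider == "llmstudio":
    client = make_lmstudio_client()
elif provider in ("openai", "local"):
    client = make_openai_client()
```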
```python
assert client is not None
return object()

monkeypatch.setenv("LLM_PROVIDER", "local")
```
Copilot AI commented on Jul 20, 2025
This line duplicates the setenv call from line 7. The duplicate environment variable setting is unnecessary.
Suggested change:
```diff
- monkeypatch.setenv("LLM_PROVIDER", "local")
```
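
The fix is simply to drop the repeated call. A minimal sketch of the intended pattern (test name hypothetical):

```python
import os

def test_local_provider(monkeypatch):
    # One setenv per variable is enough; pytest's monkeypatch fixture
    # restores the original environment when the test finishes.
    monkeypatch.setenv("LLM_PROVIDER", "local")
    assert os.environ["LLM_PROVIDER"] == "local"
```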
```diff
@@ -0,0 +1,42 @@
+# tests/conftest.py
```
Copilot AI commented on Jul 20, 2025
The file comment indicates this is 'tests/conftest.py' but the actual file is 'test_api.py'. This misleading comment should be corrected.
Suggested change:
```diff
- # tests/conftest.py
+ # genai/tests/test_api.py - Test cases and fixtures for API testing
```
```dockerfile
# Install curl & the new embeddings lib
RUN apt-get update \
    && apt-get install -y --no-install-recommends curl \
    && pip install --no-cache-dir sentence_transformers langchain-huggingface \
```
Copilot AI commented on Jul 20, 2025
Installing packages directly in the runtime stage bypasses dependency management. Consider adding these to requirements.txt and installing via the existing pip install command for consistency.
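
One way to apply this suggestion, as a sketch rather than the PR's actual change: add `sentence_transformers` and `langchain-huggingface` to `requirements.txt`, and keep this layer to system packages only.

```dockerfile
# Sketch: system packages only in this layer; the Python dependencies move
# to requirements.txt and are installed by the existing
# `pip install --no-cache-dir -r requirements.txt` step assumed elsewhere.
RUN apt-get update \
    && apt-get install -y --no-install-recommends curl \
    && rm -rf /var/lib/apt/lists/*
```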
```python
}

resp = client.chat.completions.create(
    model=os.getenv("OPENAI_MODEL"),
```
Copilot AI commented on Jul 20, 2025
The model parameter could be None if OPENAI_MODEL environment variable is not set, which may cause the OpenAI client to fail. Consider providing a default value.
Suggested change:
```diff
- model=os.getenv("OPENAI_MODEL"),
+ model=os.getenv("OPENAI_MODEL", "gpt-4o-mini"),
```
```python
if provider == "openai":
    embeddings_model = OpenAIEmbeddings(model="text-embedding-3-small")
else:  # lmstudio → local HF model
```
Copilot AI commented on Jul 20, 2025
The comment suggests this path is only for 'lmstudio' but the condition catches any non-openai provider. Consider making the condition more explicit or updating the comment to reflect the broader scope.
Suggested change:
```diff
- else:  # lmstudio → local HF model
+ else:  # Handles all non-"openai" providers (e.g., lmstudio → local HF model)
```
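
An explicit dispatch could look like the sketch below; the import paths, the extra provider names, and the error branch are assumptions, not the PR's code.

```python
from langchain_huggingface import HuggingFaceEmbeddings
from langchain_openai import OpenAIEmbeddings

def build_embeddings(provider: str):
    if provider == "openai":
        return OpenAIEmbeddings(model="text-embedding-3-small")
    if provider in ("lmstudio", "local"):
        # Local sentence-transformers model via langchain_huggingface
        return HuggingFaceEmbeddings()
    raise ValueError(f"Unsupported embeddings provider: {provider!r}")
```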
…iate dependencies
GravityDarkLab left a comment
Tests work - tested with act - we can skip the pipelines as they take long.
✅ Enable Drop-in Local LLM (LM Studio) Support & Robust Test Isolation
✨ Summary
This PR adds first-class LM Studio support as a drop-in OpenAI replacement and overhauls the test suite for fully isolated, fast testing. All changes are backwards-compatible with OpenAI cloud.
🚀 Features

🔁 Drop-in LLM Support
- Point the app at LM Studio through the OpenAI-compatible settings in `.env` (e.g., model `qwen1.5-7b-chat`)

📦 Structured Output Fallback
- Multi-tier chain: `openai_parser` → `function_calling` → `strict_json_prompt` (see the sketch below)
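
A minimal sketch of how such a chain might be driven; only the tier names above come from the PR, and the generic runner below is illustrative rather than the actual implementation:

```python
def structured_completion(prompt, strategies):
    """Try each structured-output tier in order and return the first success."""
    errors = []
    for strategy in strategies:
        try:
            return strategy(prompt)  # e.g. openai_parser, then function_calling, ...
        except Exception as err:     # parse or validation failure: try the next tier
            errors.append(err)
    raise RuntimeError(f"All structured-output strategies failed: {errors}")
```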
🧠 Local Embeddings
- `langchain_huggingface` + `sentence_transformers`

🧪 Test Suite Overhaul
- `pytest.ini`, `tests/README.md`

🛠️ Changed Files (Highlights)
- `llm_service.py`, `embedder_service.py`
- `requirements.txt`, `Dockerfile`
- `tests/`, `pytest.ini`
- `.env.example`, `README.md`

⚙️ Usage
🖥️ LM Studio (Local)
- Configure `.env` as shown above.

☁️ OpenAI (Cloud)
- Use regular OpenAI env settings.
🧪 Run Tests
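A typical invocation, assuming the suite lives under `genai/tests/` as the file table indicates:

```bash
pytest genai/tests -q
```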
📝 Notes
- Detailed LM Studio setup and usage instructions are in the README.