Multi-LLM Collaborative Chat Interface - Select 2-3 local Ollama models to work together on dev projects
Imagine having multiple AI coding specialists working together in the same chat. Select your best local Ollama models, give them a project, and watch them plan, code, test, and build together.
You: "Hey team, I want to build a REST API with authentication"
🧠 DeepSeek-Coder: "I'll design the architecture and core endpoints..."
🧠 Qwen2.5-Coder: "I'll implement the auth middleware and JWT handling..."
🧪 CodeGemma: "I'll write the test suite and security checks..."
[Models read and build upon each other's responses]
- 🎛️ Model Selector - Dropdown to pick 2-3 Ollama models from your 67+ local arsenal
- 💬 Unified Chat - Single interface where all selected models participate
- 🔄 Inter-Model Communication - Models see and respond to each other's outputs
- 📁 Project Context - Load codebase files for context-aware responses
- ▶️ Code Execution - Run generated code snippets directly
- 💾 Export/Save - Save conversations and generated code
- 🌙 Dark Mode - Easy on the eyes for those late-night coding sessions
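Inter-model communication comes down to context building: before each turn, the shared transcript is flattened into the message list one model receives, with the other models' replies attributed by name. A minimal sketch, assuming this transcript shape (`buildContext` and the field names are illustrative, not the app's actual API):

```javascript
// Sketch of context building; names are illustrative, not the app's real API.
function buildContext(transcript, forModel) {
  return transcript.map((entry) => {
    if (entry.role === "user") {
      return { role: "user", content: entry.content };
    }
    // The model's own replies stay "assistant"; other models' replies are
    // attributed by name so it can read and build upon them.
    return entry.model === forModel
      ? { role: "assistant", content: entry.content }
      : { role: "user", content: `[${entry.model}]: ${entry.content}` };
  });
}

const transcript = [
  { role: "user", content: "Build a REST API with authentication" },
  { role: "assistant", model: "deepseek-coder:33b", content: "Architecture first..." },
];
const ctx = buildContext(transcript, "qwen2.5-coder:32b");
```

Here `qwen2.5-coder` sees DeepSeek's reply as an attributed message from another speaker, which is what lets it respond to a teammate rather than to itself.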
┌─────────────────────────────────────────────────────────────┐
│                    Electron Desktop App                     │
├─────────────────────────────────────────────────────────────┤
│  ┌─────────────────┐  ┌─────────────────┐  ┌─────────────┐  │
│  │ Model Selector  │  │   Chat Panel    │  │  Code View  │  │
│  │  (2-3 models)   │  │  (all models)   │  │  (Monaco)   │  │
│  └────────┬────────┘  └────────┬────────┘  └──────┬──────┘  │
│           │                    │                  │         │
│  ┌────────┴────────────────────┴──────────────────┴──────┐  │
│  │              Conversation Orchestrator                │  │
│  │  • Turn management  • Context building  • Streaming   │  │
│  └───────────────────────────┬───────────────────────────┘  │
└──────────────────────────────┼──────────────────────────────┘
                               │
                      ┌────────┴─────────┐
                      │    Ollama API    │
                      │  localhost:11434 │
                      │    67+ Models    │
                      └──────────────────┘
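The orchestrator's turn loop can be sketched against Ollama's real `/api/chat` endpoint; `nextSpeaker` and `runTurn` are illustrative names, not the app's actual code:

```javascript
// Minimal sketch of the orchestrator turn loop, assuming round-robin turns.
// /api/chat is Ollama's real chat endpoint; helper names are illustrative.
function nextSpeaker(models, turn) {
  return models[turn % models.length]; // simple round-robin turn management
}

async function runTurn(models, messages, turn) {
  const model = nextSpeaker(models, turn);
  const res = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model, messages, stream: false }),
  });
  const { message } = await res.json();
  // Attribute the reply so the next model can build on it.
  messages.push({ role: "assistant", content: `[${model}]: ${message.content}` });
  return messages;
}
```

For token-by-token streaming in the UI you would instead pass `stream: true` and read the newline-delimited JSON chunks from the response body.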
- Node.js 18+
- Ollama running locally
- At least 2-3 models pulled (e.g., deepseek-coder, qwen2.5-coder, codellama)
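You can verify the prerequisite models are pulled by querying Ollama's real `/api/tags` endpoint; the helpers below are a hypothetical sketch, not part of this repo:

```javascript
// Sketch: report which required models are missing from the local Ollama
// install. missingModels/checkPrereqs are illustrative helper names.
function missingModels(required, installedNames) {
  // Compare base names so "deepseek-coder" matches "deepseek-coder:33b".
  const bases = new Set(installedNames.map((n) => n.split(":")[0]));
  return required.filter((r) => !bases.has(r.split(":")[0]));
}

async function checkPrereqs(required) {
  const res = await fetch("http://localhost:11434/api/tags");
  const { models } = await res.json(); // e.g. [{ name: "deepseek-coder:33b", ... }]
  return missingModels(required, models.map((m) => m.name));
}
```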
# Clone the repository
git clone https://github.com/VisionaryArchitects/Copilot_LLM_Chat_Studio.git
cd Copilot_LLM_Chat_Studio
# Install dependencies
npm install
# Start development
npm run dev
# Or build and run
npm run build
npm start

| Model | Specialty | Size |
|---|---|---|
| deepseek-coder:33b | Advanced code generation | 33B |
| qwen2.5-coder:32b | Multi-language coding | 32B |
| codellama:34b | Code completion & infilling | 34B |
| codegemma:7b | Fast code assistant | 7B |
| starcoder2:15b | Code generation | 15B |
Use the model selector to pick 2-3 models. Mix specialists for best results!
Type your project idea or coding task. All selected models will see your message.
Models read each other's responses and build upon them. You can:
- Let them auto-collaborate
- Direct specific models with @model-name
- Request specific actions: "plan", "code", "review", "test"
- Run code snippets directly
- Copy generated code
- Export full conversations
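The @model-name direction above can be sketched as a small router that narrows the recipient list when a message opens with a mention (`routeMessage` is an illustrative name, not the app's actual API):

```javascript
// Sketch of @model-name routing: a message starting with a mention of a
// selected model goes only to that model; otherwise all models collaborate.
function routeMessage(text, selectedModels) {
  const match = text.match(/^@([\w.:-]+)\s+/);
  if (match) {
    // Prefix match lets "@codegemma" resolve to "codegemma:7b".
    const target = selectedModels.find((m) => m.startsWith(match[1]));
    if (target) return { models: [target], text: text.slice(match[0].length) };
  }
  return { models: selectedModels, text };
}
```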
Create a .env file:
OLLAMA_API_URL=http://localhost:11434
MAX_CONTEXT_LENGTH=8192
DEFAULT_MODELS=deepseek-coder:33b,qwen2.5-coder:32b

# Run in development mode
npm run dev
# Lint code
npm run lint
# Run tests
npm test
# Package for distribution
npm run package

Copilot_LLM_Chat_Studio/
├── src/
│   ├── main/            # Electron main process
│   ├── renderer/        # React frontend
│   │   ├── components/  # UI components
│   │   ├── hooks/       # Custom React hooks
│   │   ├── contexts/    # React contexts
│   │   ├── utils/       # Utility functions
│   │   └── styles/      # CSS/Tailwind
│   └── shared/          # Shared types & constants
├── server/              # Local Express server (optional)
├── docs/                # Documentation & assets
└── package.json
We welcome contributions! Please see CONTRIBUTING.md for guidelines.
MIT License - see LICENSE for details.
Visionary Architects - AI Development Ecosystem
Built with ❤️ for the AI-first development community