> **Note:** This repository was archived by the owner on Feb 28, 2026. It is now read-only.

🤖 Copilot LLM Chat Studio

Multi-LLM Collaborative Chat Interface - Select 2-3 local Ollama models to work together on dev projects

Built by Visionary Architects with Electron, React, and Ollama.

🎯 Vision

Imagine having multiple AI coding specialists working together in the same chat. Select your best local Ollama models, give them a project, and watch them collaborate, plan, code, test, and build together.

You: "Hey team, I want to build a REST API with authentication"

🧠 DeepSeek-Coder: "I'll design the architecture and core endpoints..."
🔧 Qwen2.5-Coder: "I'll implement the auth middleware and JWT handling..."
🧪 CodeGemma: "I'll write the test suite and security checks..."

[Models read and build upon each other's responses]

✨ Features

  • 🎚️ Model Selector - Dropdown to pick 2-3 of your 67+ local Ollama models
  • 💬 Unified Chat - Single interface where all selected models participate
  • 🔄 Inter-Model Communication - Models see and respond to each other's outputs
  • 📁 Project Context - Load codebase files for context-aware responses
  • ▶️ Code Execution - Run generated code snippets directly
  • 💾 Export/Save - Save conversations and generated code
  • 🌙 Dark Mode - Easy on the eyes for late-night coding sessions

πŸ—οΈ Architecture

┌──────────────────────────────────────────────────────────┐
│                  Electron Desktop App                    │
├──────────────────────────────────────────────────────────┤
│  ┌───────────────┐  ┌───────────────┐  ┌─────────────┐   │
│  │ Model Selector│  │  Chat Panel   │  │  Code View  │   │
│  │  (2-3 models) │  │ (all models)  │  │  (Monaco)   │   │
│  └───────┬───────┘  └───────┬───────┘  └──────┬──────┘   │
│          │                  │                 │          │
│  ┌───────┴──────────────────┴─────────────────┴───────┐  │
│  │             Conversation Orchestrator              │  │
│  │  • Turn management • Context building • Streaming  │  │
│  └─────────────────────────┬──────────────────────────┘  │
└────────────────────────────┼─────────────────────────────┘
                             │
                    ┌────────▼────────┐
                    │   Ollama API    │
                    │ localhost:11434 │
                    │   67+ Models    │
                    └─────────────────┘
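As a rough sketch of the orchestrator's job, each model's turn can fold its teammates' replies into the shared context before the request is sent to Ollama's /api/chat endpoint. The `ChatTurn` and `buildChatRequest` names below are illustrative, not taken from the repo:

```typescript
// Sketch: build one model's request body for Ollama's /api/chat endpoint,
// folding the other models' replies into the shared context.
// ChatTurn and buildChatRequest are hypothetical names, not from this repo.

interface ChatTurn {
  speaker: string; // "user" or a model tag, e.g. "deepseek-coder:33b"
  content: string;
}

interface OllamaChatRequest {
  model: string;
  messages: { role: "user" | "assistant"; content: string }[];
  stream: boolean;
}

function buildChatRequest(model: string, transcript: ChatTurn[]): OllamaChatRequest {
  const messages = transcript.map((turn) => {
    if (turn.speaker === "user") {
      return { role: "user" as const, content: turn.content };
    }
    // Prefix other models' replies with their name so the current
    // model can "read" what its teammates said.
    const prefix = turn.speaker === model ? "" : `[${turn.speaker}]: `;
    return { role: "assistant" as const, content: prefix + turn.content };
  });
  return { model, messages, stream: true };
}
```

The resulting object can be POSTed as JSON to `http://localhost:11434/api/chat`; with `stream: true`, Ollama returns newline-delimited JSON chunks that the UI can render incrementally.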

🚀 Quick Start

Prerequisites

  • Node.js 18+
  • Ollama running locally
  • At least 2-3 models pulled (e.g., deepseek-coder, qwen2.5-coder, codellama)

Installation

# Clone the repository
git clone https://github.com/VisionaryArchitects/Copilot_LLM_Chat_Studio.git
cd Copilot_LLM_Chat_Studio

# Install dependencies
npm install

# Start development
npm run dev

# Or build and run
npm run build
npm start

Recommended Models

| Model | Specialty | Size |
|-------|-----------|------|
| `deepseek-coder:33b` | Advanced code generation | 33B |
| `qwen2.5-coder:32b` | Multi-language coding | 32B |
| `codellama:34b` | Code completion & infilling | 34B |
| `codegemma:7b` | Fast code assistant | 7B |
| `starcoder2:15b` | Code generation | 15B |

📖 Usage

1. Select Your Team

Use the model selector to pick 2-3 models. Mix specialists for best results!

2. Start a Conversation

Type your project idea or coding task. All selected models will see your message.

3. Watch Them Collaborate

Models read each other's responses and build upon them. You can:

  • Let them auto-collaborate
  • Direct specific models with @model-name
  • Request specific actions: "plan", "code", "review", "test"
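Routing by @model-name could be as simple as a mention scan, sketched here with a hypothetical `parseMentions` helper (not from the repo):

```typescript
// Hypothetical sketch: extract @model-name mentions from a user message
// so the orchestrator can route the turn to specific models only.

function parseMentions(message: string, available: string[]): string[] {
  // Model tags like "qwen2.5-coder:32b" contain dots, dashes, and a colon.
  const mentioned = [...message.matchAll(/@([\w.-]+(?::[\w.-]+)?)/g)].map((m) => m[1]);
  // Keep only mentions that match a currently selected model.
  return available.filter((model) => mentioned.includes(model));
}
```

If the result is empty, the orchestrator can fall back to auto-collaboration across all selected models.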

4. Execute & Export

  • Run code snippets directly
  • Copy generated code
  • Export full conversations
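Running a generated snippet from the Electron main process might look like the hypothetical helper below, which executes the code in a separate Node process with a timeout; a real implementation would need proper sandboxing:

```typescript
// Hypothetical sketch: run a generated JavaScript snippet in a separate
// Node process from the Electron main process. This is NOT safe sandboxing;
// real code execution would need resource limits and isolation.
import { execFileSync } from "node:child_process";

function runSnippet(code: string): string {
  // A 5-second timeout guards against runaway generated code.
  return execFileSync(process.execPath, ["-e", code], {
    encoding: "utf8",
    timeout: 5000,
  });
}
```

The captured stdout can then be appended to the chat so all models see the execution result.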

🔧 Configuration

Create a .env file:

OLLAMA_API_URL=http://localhost:11434
MAX_CONTEXT_LENGTH=8192
DEFAULT_MODELS=deepseek-coder:33b,qwen2.5-coder:32b
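How the app consumes these variables is not documented here; a plausible sketch (hypothetical `loadConfig`, falling back to the defaults above when a variable is unset) could look like:

```typescript
// Hypothetical sketch of reading the .env configuration with fallbacks.
// StudioConfig and loadConfig are illustrative names, not from this repo.

interface StudioConfig {
  ollamaApiUrl: string;
  maxContextLength: number;
  defaultModels: string[];
}

function loadConfig(env: Record<string, string | undefined>): StudioConfig {
  return {
    ollamaApiUrl: env.OLLAMA_API_URL ?? "http://localhost:11434",
    maxContextLength: Number(env.MAX_CONTEXT_LENGTH ?? 8192),
    // DEFAULT_MODELS is a comma-separated list of model tags.
    defaultModels: (env.DEFAULT_MODELS ?? "")
      .split(",")
      .map((m) => m.trim())
      .filter(Boolean),
  };
}
```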

πŸ› οΈ Development

# Run in development mode
npm run dev

# Lint code
npm run lint

# Run tests
npm test

# Package for distribution
npm run package

πŸ“ Project Structure

Copilot_LLM_Chat_Studio/
├── src/
│   ├── main/           # Electron main process
│   ├── renderer/       # React frontend
│   │   ├── components/ # UI components
│   │   ├── hooks/      # Custom React hooks
│   │   ├── contexts/   # React contexts
│   │   ├── utils/      # Utility functions
│   │   └── styles/     # CSS/Tailwind
│   └── shared/         # Shared types & constants
├── server/             # Local Express server (optional)
├── docs/               # Documentation & assets
└── package.json

🤝 Contributing

We welcome contributions! Please see CONTRIBUTING.md for guidelines.

📜 License

MIT License - see LICENSE for details.

🏢 Organization

Visionary Architects - AI Development Ecosystem


Built with ❤️ for the AI-first development community
