# LLM Parallel Run

This project demonstrates how to load and run multiple LLMs side by side using Ollama and Python (Streamlit and Requests).

## Why This Project?

I've been using Ollama and wanted a simple graphical user interface (GUI) for it. I tried OpenWebUI (a very good product), but it felt too complex for my basic needs, and it requires Docker, which adds setup steps and consumes extra memory, disk, and CPU. So I built this project as a lighter option that lets me select multiple models, run them simultaneously, and add or remove models as needed.

## Prerequisites

- Ollama (installed locally from ollama.com)
- Ollama models (any models you like)
- Python 3.13.5 or higher
- pip (Python package installer)

## Tested On

- Python 3.13.5
- Windows Server 2022
- Ollama 0.9.0

## Setting Up the Environment

### Windows

1. Open Command Prompt.
2. Create a virtual environment:

```bash
python -m venv LLM_Parallel_Run
```

3. Activate the virtual environment:

```bash
.\LLM_Parallel_Run\Scripts\activate
```

4. Install the required packages:

```bash
pip install streamlit requests
```

### Linux

1. Open a terminal.
2. Create a virtual environment:

```bash
python3 -m venv LLM_Parallel_Run
```

3. Activate the virtual environment:

```bash
source LLM_Parallel_Run/bin/activate
```

4. Install the required packages:

```bash
pip install streamlit requests
```

### macOS

1. Open a terminal.
2. Create a virtual environment:

```bash
python3 -m venv LLM_Parallel_Run
```

3. Activate the virtual environment:

```bash
source LLM_Parallel_Run/bin/activate
```

4. Install the required packages:

```bash
pip install streamlit requests
```

## Running the Application

Once the environment is set up and the packages are installed, copy the code for the view you prefer from this repository (for example, "Horizontal View - app.py" or "Vertical View - app.py"), save it as app.py, and start the Streamlit application with the commands below. Streamlit prints a local URL you can open in your browser.

```bash
# Windows
.\LLM_Parallel_Run\Scripts\activate
streamlit run app.py

# macOS and Linux
source LLM_Parallel_Run/bin/activate
streamlit run app.py
```

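The core idea behind the app, sending one prompt to several Ollama models at once, can be sketched roughly as follows. This is a minimal illustration using only the Python standard library (the repository's app.py uses Streamlit and Requests instead); it assumes Ollama's default HTTP API at `localhost:11434` with the `/api/generate` endpoint, and the helper names (`build_payload`, `ask`, `ask_all`) are made up for this example:

```python
import json
import urllib.request
from concurrent.futures import ThreadPoolExecutor

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def build_payload(model: str, prompt: str) -> dict:
    """Request body for /api/generate; stream=False returns one JSON object."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    """Send one prompt to one model and return its response text."""
    data = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

def ask_all(models: list[str], prompt: str) -> dict[str, str]:
    """Query every selected model in parallel; map model name -> answer."""
    with ThreadPoolExecutor(max_workers=max(len(models), 1)) as pool:
        results = pool.map(lambda m: ask(m, prompt), models)
    return dict(zip(models, results))
```

With the Ollama server running and the models pulled, a call such as `ask_all(["llama3", "mistral"], "Say hello in one sentence.")` would return a dict mapping each model name to its reply, with the requests issued concurrently rather than one after the other.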
## Run without terminal

To run the Streamlit app from a Python virtual environment without opening a terminal, create a shortcut or script that activates the virtual environment and starts the app. Here's how to do it on different platforms:

---

**Windows (.bat file):**

1. Create a `run_streamlit.bat` file with the following content:

```bat
@echo off
call C:\path\to\venv\Scripts\activate.bat
streamlit run C:\path\to\your_app.py
```

Next, create a `.vbs` file (e.g., `launch_app.vbs`) in the same folder with the content below. Double-clicking it opens the app in the browser directly, without showing a terminal window.

```vbscript
Set WshShell = CreateObject("WScript.Shell")
WshShell.Run chr(34) & "C:\path\to\run_streamlit.bat" & chr(34), 0
Set WshShell = Nothing
```

2. Double-click the `.vbs` file to launch the app. If closing the browser does not also stop the `python` and `streamlit.exe` processes, kill them manually; otherwise they pile up every time you launch the app.
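One way to stop those leftover processes on Windows is `taskkill` (a sketch; note that killing `python.exe` ends every running Python process on the machine, so skip that line if you have others running):

```bat
taskkill /IM streamlit.exe /F
taskkill /IM python.exe /F
```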

---

**macOS/Linux (.sh file):**

1. Create a `run_streamlit.sh` script:

```bash
#!/bin/bash
source /path/to/venv/bin/activate
streamlit run /path/to/your_app.py
```

2. Make it executable:

```bash
chmod +x run_streamlit.sh
```

3. Run it via double-click or from a launcher depending on your desktop environment.

---

## Note

- Vertical view means the prompt and model selection are laid out vertically on the left; horizontal view means they are laid out horizontally.
- In vertical view, you can drag the prompt/model pane to the right or left as needed to give more space to the output window.
- With this Streamlit app you can run multiple LLMs at the same time. If the results appear one after the other instead of together, set OLLAMA_MAX_LOADED_MODELS=2 (or any number your hardware supports). Refer to the Ollama documentation for how to set it on your OS.
- If you download a new model while the Streamlit app is running, stop the app and rerun it; otherwise the new model will not be detected and will not appear in the model dropdown.
- Source files are provided for horizontal and vertical views of the prompt and model selection. Use whichever you like and rename it to app.py.
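For example, on Linux or macOS the model limit can be set for the current shell before starting Ollama (a sketch; the value 2 is just an illustration, and on Windows you would set the variable through System Environment Variables or `setx` instead):

```bash
# Allow Ollama to keep two models loaded in memory at once
export OLLAMA_MAX_LOADED_MODELS=2
echo "$OLLAMA_MAX_LOADED_MODELS"
```

The variable must be set in the environment of the Ollama server process, so restart the server after setting it for the new limit to take effect.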

## Prompt & Model Selection Horizontal View - Quick Look with 3 Models

![Horizontal view](https://github.com/ChayScripts/Run-LLMs-in-Parallel/blob/main/Horizontal%20View.png)

## Prompt & Model Selection Vertical View - Quick Look with 3 Models

![Vertical view](https://github.com/ChayScripts/Run-LLMs-in-Parallel/blob/main/Vertical%20View.png)

### Authors

* **Chay** - [ChayScripts](https://github.com/ChayScripts)

### Contributing

Please follow [GitHub flow](https://guides.github.com/introduction/flow/index.html) for contributing.

### License

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
