Commit d012acf

add note on fork
resolves #7
1 parent 9e4a1db commit d012acf

File tree

1 file changed: +34 −24 lines changed


README.md

Lines changed: 34 additions & 24 deletions
@@ -1,5 +1,7 @@
 # nextjs-vllm-ui
 
+Forked from https://github.com/jakobhoeg/nextjs-ollama-llm-ui
+
 <div align="center">
 <img src="ollama-nextjs-ui.gif">
 </div>
@@ -30,8 +32,8 @@ https://github.com/jakobhoeg/nextjs-ollama-llm-ui/assets/114422072/08eaed4f-9deb
 
 To use the web interface, these requisites must be met:
 
-1. Download [vLLM](https://docs.vllm.ai/en/latest/) and have it running. Or run it in a Docker container.
-2. [Node.js](https://nodejs.org/en/download) (18+), [npm](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) and [yarn](https://classic.yarnpkg.com/lang/en/docs/install/#mac-stable) is required.
+1. Download [vLLM](https://docs.vllm.ai/en/latest/) and have it running, or run it in a Docker container.
+2. [Node.js](https://nodejs.org/en/download) (18+), [npm](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm), and [yarn](https://classic.yarnpkg.com/lang/en/docs/install/#mac-stable) are required.
 
 # Usage 🚀
 
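The Node.js 18+ requisite above is easy to verify with a small helper; `node_major` below is a hypothetical name for illustration, not part of the project:

```shell
# Hypothetical helper (not part of the project): extract the major
# version from `node -v`-style output, e.g. "v18.19.0" -> "18".
node_major() {
  echo "${1#v}" | cut -d. -f1
}

node_major "v18.19.0"   # prints 18
```

Locally you would run `node_major "$(node -v)"` and check that the result is at least 18.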
@@ -42,11 +44,13 @@ docker run --rm -d -p 3000:3000 -e VLLM_URL=http://host.docker.internal:8000 ghc
 ```
 
 If you're using Ollama, you need to set the `VLLM_MODEL`:
+
 ```
 docker run --rm -d -p 3000:3000 -e VLLM_URL=http://host.docker.internal:11434 -e VLLM_TOKEN_LIMIT=8192 -e VLLM_MODEL=llama3 ghcr.io/yoziru/nextjs-vllm-ui:latest
 ```
 
 If your server is running on a different IP address or port, you can use the `--network host` mode in Docker, e.g.:
+
 ```
 docker run --rm -d --network host -e VLLM_URL=http://192.1.0.110:11434 -e VLLM_TOKEN_LIMIT=8192 -e VLLM_MODEL=llama3 ghcr.io/yoziru/nextjs-vllm-ui:latest
 ```
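To see how the flags in these commands compose, the `docker run` invocation can be assembled from the same environment variables the image reads. A minimal sketch, assuming the Ollama example's values as defaults; nothing in this snippet is part of the project:

```shell
# Assemble the `docker run` command from environment variables; the
# defaults mirror the Ollama example above (illustrative only).
VLLM_URL="${VLLM_URL:-http://host.docker.internal:11434}"
VLLM_TOKEN_LIMIT="${VLLM_TOKEN_LIMIT:-8192}"
VLLM_MODEL="${VLLM_MODEL:-llama3}"

# `echo` prints the command instead of running it; drop it to execute.
echo docker run --rm -d -p 3000:3000 \
  -e "VLLM_URL=$VLLM_URL" \
  -e "VLLM_TOKEN_LIMIT=$VLLM_TOKEN_LIMIT" \
  -e "VLLM_MODEL=$VLLM_MODEL" \
  ghcr.io/yoziru/nextjs-vllm-ui:latest
```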
@@ -58,42 +62,48 @@ Then go to [localhost:3000](http://localhost:3000) and start chatting with your 
 To install and run a local environment of the web interface, follow the instructions below.
 
 1. **Clone the repository to a directory on your pc via command prompt:**
-```
-git clone https://github.com/yoziru/nextjs-vllm-ui
-```
+
+   ```
+   git clone https://github.com/yoziru/nextjs-vllm-ui
+   ```
 
 1. **Open the folder:**
-```
-cd nextjs-vllm-ui
-```
+
+   ```
+   cd nextjs-vllm-ui
+   ```
 
 1. **Rename the `.example.env` to `.env`:**
-```
-mv .example.env .env
-```
+
+   ```
+   mv .example.env .env
+   ```
 
 1. **If your instance of vLLM is NOT running on the default ip-address and port, change the variable in the .env file to fit your usecase:**
-```
-VLLM_URL="http://localhost:8000"
-VLLM_API_KEY="your-api-key"
-VLLM_MODEL="llama3:8b"
-VLLM_TOKEN_LIMIT=4096
-```
+
+   ```
+   VLLM_URL="http://localhost:8000"
+   VLLM_API_KEY="your-api-key"
+   VLLM_MODEL="llama3:8b"
+   VLLM_TOKEN_LIMIT=4096
+   ```
 
 1. **Install dependencies:**
-```
-yarn install
-```
+
+   ```
+   yarn install
+   ```
 
 1. **Start the development server:**
-```
-yarn dev
-```
 
-1. **Go to [localhost:3000](http://localhost:3000) and start chatting with your favourite model!**
+   ```
+   yarn dev
+   ```
 
+1. **Go to [localhost:3000](http://localhost:3000) and start chatting with your favourite model!**
 
 You can also build and run the docker image locally with this command:
+
 ```sh
 docker build . -t ghcr.io/yoziru/nextjs-vllm-ui:latest \
 && docker run --rm \
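The `.env` step in the diff above can also be scripted. A minimal sketch; the values are the README's own examples, not recommendations, so adjust them to your setup:

```shell
# Write the .env described in the steps above, using the example values
# from the README (adjust URL, key, model, and token limit as needed).
cat > .env <<'EOF'
VLLM_URL="http://localhost:8000"
VLLM_API_KEY="your-api-key"
VLLM_MODEL="llama3:8b"
VLLM_TOKEN_LIMIT=4096
EOF

# Quick sanity check: count the settings written.
grep -c '^VLLM_' .env   # prints 4
```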
