
Commit 0abf69b

Adds api-server and model eval chat ENVs to native mode example
Signed-off-by: Brent Salisbury <[email protected]>
1 parent 94ddd85 commit 0abf69b

2 files changed: +7 -5 lines changed

.env.native.example

Lines changed: 2 additions & 0 deletions
````diff
@@ -12,3 +12,5 @@ NEXT_PUBLIC_TAXONOMY_ROOT_DIR=
 NEXT_PUBLIC_EXPERIMENTAL_FEATURES=false
 
 # IL_FILE_CONVERSION_SERVICE=http://localhost:8000 # Uncomment and fill in the http://host:port if the docling conversion service is running.
+# NEXT_PUBLIC_API_SERVER=http://localhost:8080 # Uncomment and point to the URL the api-server is running on. Native mode only; the api-server must run on the same host as the UI.
+# NEXT_PUBLIC_MODEL_SERVER_URL=http://x.x.x.x # Used for the model chat evaluation vLLM instances. Server-side rendering is not currently supported, so the client must be able to reach this address for model chat evaluation to work in the UI. Ports 8000 and 8001 are currently hardcoded, which is why they are not configurable here.
````
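For orientation, here is a minimal sketch (not part of the commit) of how a Next.js client might read the two new variables; the derived chat URLs and exported names are assumptions based on the comment about the hardcoded ports 8000 and 8001.

```ts
// Illustrative sketch only: consuming the two new env vars in the UI.
// NEXT_PUBLIC_* values are inlined by Next.js at build time.
const apiServer = process.env.NEXT_PUBLIC_API_SERVER;         // e.g. http://localhost:8080
const modelServer = process.env.NEXT_PUBLIC_MODEL_SERVER_URL; // e.g. http://x.x.x.x

// Ports 8000 and 8001 are hardcoded per the comment above, so the two chat
// evaluation endpoints are derived rather than configured. URL shapes are assumptions.
export const preTrainChatUrl = modelServer ? `${modelServer}:8000` : undefined;
export const postTrainChatUrl = modelServer ? `${modelServer}:8001` : undefined;

// The api-server integration is native mode only and must run on the same
// host as the UI; an unset value simply leaves the feature disabled.
export const apiServerEnabled = Boolean(apiServer);
```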

api-server/README.md

Lines changed: 5 additions & 5 deletions
````diff
@@ -166,7 +166,7 @@ Starts a training job.
 {
   "modelName": "name-of-the-model",
   "branchName": "name-of-the-branch",
-  "epochs": 10 // Optional
+  "epochs": 10
 }
 ```
````

````diff
@@ -199,7 +199,7 @@ Combines data generation and training into a single pipeline job.
 {
   "modelName": "name-of-the-model",
   "branchName": "name-of-the-branch",
-  "epochs": 10 // Optional
+  "epochs": 10
 }
 ```
````
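The two hunks above drop the `// Optional` marker from the example request bodies (JSON has no comment syntax); the removed annotation marked `epochs` as optional, which a client-side sketch can make explicit. The endpoint paths and helper below are assumptions for illustration, not taken from the README.

```ts
// Hypothetical client sketch; the api-server route names are not shown in this
// diff, so "/jobs/train" and "/jobs/pipeline" are assumptions.
interface JobRequest {
  modelName: string;
  branchName: string;
  epochs?: number; // optional, per the "// Optional" annotation removed above
}

async function startJob(kind: "train" | "pipeline", body: JobRequest): Promise<void> {
  const apiServer = process.env.NEXT_PUBLIC_API_SERVER ?? "http://localhost:8080";
  const res = await fetch(`${apiServer}/jobs/${kind}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });
  if (!res.ok) {
    throw new Error(`${kind} job request failed with HTTP ${res.status}`);
  }
}

// The same body shape covers both the training job and the combined
// data-generation-plus-training pipeline job shown in the two hunks above.
void startJob("train", {
  modelName: "name-of-the-model",
  branchName: "name-of-the-branch",
  epochs: 10,
});
```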

````diff
@@ -230,7 +230,7 @@ Serves the latest model checkpoint on port `8001`.
 
 ```json
 {
-  "checkpoint": "samples_12345" // Optional
+  "checkpoint": "samples_12345"
 }
 ```
````
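Likewise for the checkpoint-serving example: the dropped comment said `checkpoint` is optional, which a small type sketch (names assumed) can capture.

```ts
// Sketch only: the removed annotation marked "checkpoint" as optional, so a
// client could either pin a specific checkpoint or omit the field entirely.
type ServeCheckpointRequest = { checkpoint?: string };

const servePinned: ServeCheckpointRequest = { checkpoint: "samples_12345" };
const serveDefault: ServeCheckpointRequest = {}; // no checkpoint specified
```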

````diff
@@ -353,7 +353,7 @@ Unloads a specific VLLM container.
 
 ```json
 {
-  "model_name": "pre-train" // Must be either "pre-train" or "post-train" for meow
+  "model_name": "pre-train"
 }
 ```
````
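The comment removed from the unload example limited `model_name` to "pre-train" or "post-train"; a type-level sketch (names illustrative) of that constraint:

```ts
// Sketch: the deleted comment constrained model_name to "pre-train" or
// "post-train"; a union type enforces that at compile time.
type VllmModelName = "pre-train" | "post-train";

interface UnloadRequest {
  model_name: VllmModelName;
}

const unloadPreTrain: UnloadRequest = { model_name: "pre-train" };
// const invalid: UnloadRequest = { model_name: "base" }; // rejected by the compiler
```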

````diff
@@ -387,7 +387,7 @@ Fetches the status of a specific VLLM model.
 
 ```json
 {
-  "status": "running" // Possible values: "running", "loading", "stopped"
+  "status": "running"
 }
 ```
````
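Similarly, the deleted comment in the status example enumerated the possible values; a small sketch of how a client might narrow them (only the field name and values come from the diff):

```ts
// Sketch: possible status values per the comment removed above.
type VllmStatus = "running" | "loading" | "stopped";

function describeStatus(status: VllmStatus): string {
  switch (status) {
    case "running":
      return "model is serving requests";
    case "loading":
      return "model is still loading";
    case "stopped":
      return "model is not loaded";
  }
}
```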
