Commit eb4a201
Update README
1 parent 634609a

1 file changed: README.md, 68 additions and 17 deletions
@@ -236,27 +236,33 @@ List of command-line flags
 </summary>
 
 ```txt
-usage: server.py [-h] [--multi-user] [--model MODEL] [--lora LORA [LORA ...]] [--model-dir MODEL_DIR] [--lora-dir LORA_DIR] [--model-menu] [--settings SETTINGS]
+usage: server.py [-h] [--user-data-dir USER_DATA_DIR] [--multi-user] [--model MODEL] [--lora LORA [LORA ...]] [--model-dir MODEL_DIR] [--lora-dir LORA_DIR] [--model-menu] [--settings SETTINGS]
 [--extensions EXTENSIONS [EXTENSIONS ...]] [--verbose] [--idle-timeout IDLE_TIMEOUT] [--image-model IMAGE_MODEL] [--image-model-dir IMAGE_MODEL_DIR] [--image-dtype {bfloat16,float16}]
 [--image-attn-backend {flash_attention_2,sdpa}] [--image-cpu-offload] [--image-compile] [--image-quant {none,bnb-8bit,bnb-4bit,torchao-int8wo,torchao-fp4,torchao-float8wo}]
 [--loader LOADER] [--ctx-size N] [--cache-type N] [--model-draft MODEL_DRAFT] [--draft-max DRAFT_MAX] [--gpu-layers-draft GPU_LAYERS_DRAFT] [--device-draft DEVICE_DRAFT]
 [--ctx-size-draft CTX_SIZE_DRAFT] [--spec-type {none,ngram-mod,ngram-simple,ngram-map-k,ngram-map-k4v,ngram-cache}] [--spec-ngram-size-n SPEC_NGRAM_SIZE_N]
 [--spec-ngram-size-m SPEC_NGRAM_SIZE_M] [--spec-ngram-min-hits SPEC_NGRAM_MIN_HITS] [--gpu-layers N] [--cpu-moe] [--mmproj MMPROJ] [--streaming-llm] [--tensor-split TENSOR_SPLIT]
 [--row-split] [--no-mmap] [--mlock] [--no-kv-offload] [--batch-size BATCH_SIZE] [--ubatch-size UBATCH_SIZE] [--threads THREADS] [--threads-batch THREADS_BATCH] [--numa]
-[--extra-flags EXTRA_FLAGS] [--cpu] [--cpu-memory CPU_MEMORY] [--disk] [--disk-cache-dir DISK_CACHE_DIR] [--load-in-8bit] [--bf16] [--no-cache] [--trust-remote-code]
-[--force-safetensors] [--no_use_fast] [--attn-implementation IMPLEMENTATION] [--load-in-4bit] [--use_double_quant] [--compute_dtype COMPUTE_DTYPE] [--quant_type QUANT_TYPE]
-[--gpu-split GPU_SPLIT] [--enable-tp] [--tp-backend TP_BACKEND] [--cfg-cache] [--cpp-runner]
-[--alpha_value ALPHA_VALUE] [--rope_freq_base ROPE_FREQ_BASE] [--compress_pos_emb COMPRESS_POS_EMB] [--listen] [--listen-port LISTEN_PORT] [--listen-host LISTEN_HOST] [--share]
-[--auto-launch] [--gradio-auth GRADIO_AUTH] [--gradio-auth-path GRADIO_AUTH_PATH] [--ssl-keyfile SSL_KEYFILE] [--ssl-certfile SSL_CERTFILE] [--subpath SUBPATH] [--old-colors]
-[--portable] [--api] [--public-api] [--public-api-id PUBLIC_API_ID] [--api-port API_PORT] [--api-key API_KEY] [--admin-key ADMIN_KEY] [--api-enable-ipv6] [--api-disable-ipv4]
-[--nowebui]
+[--parallel PARALLEL] [--fit-target FIT_TARGET] [--extra-flags EXTRA_FLAGS] [--cpu] [--cpu-memory CPU_MEMORY] [--disk] [--disk-cache-dir DISK_CACHE_DIR] [--load-in-8bit] [--bf16]
+[--no-cache] [--trust-remote-code] [--force-safetensors] [--no_use_fast] [--attn-implementation IMPLEMENTATION] [--load-in-4bit] [--use_double_quant] [--compute_dtype COMPUTE_DTYPE]
+[--quant_type QUANT_TYPE] [--gpu-split GPU_SPLIT] [--enable-tp] [--tp-backend TP_BACKEND] [--cfg-cache] [--alpha_value ALPHA_VALUE] [--rope_freq_base ROPE_FREQ_BASE]
+[--compress_pos_emb COMPRESS_POS_EMB] [--listen] [--listen-port LISTEN_PORT] [--listen-host LISTEN_HOST] [--share] [--auto-launch] [--gradio-auth GRADIO_AUTH]
+[--gradio-auth-path GRADIO_AUTH_PATH] [--ssl-keyfile SSL_KEYFILE] [--ssl-certfile SSL_CERTFILE] [--subpath SUBPATH] [--old-colors] [--portable] [--api] [--public-api]
+[--public-api-id PUBLIC_API_ID] [--api-port API_PORT] [--api-key API_KEY] [--admin-key ADMIN_KEY] [--api-enable-ipv6] [--api-disable-ipv4] [--nowebui] [--temperature N]
+[--dynatemp-low N] [--dynatemp-high N] [--dynatemp-exponent N] [--smoothing-factor N] [--smoothing-curve N] [--min-p N] [--top-p N] [--top-k N] [--typical-p N] [--xtc-threshold N]
+[--xtc-probability N] [--epsilon-cutoff N] [--eta-cutoff N] [--tfs N] [--top-a N] [--top-n-sigma N] [--adaptive-target N] [--adaptive-decay N] [--dry-multiplier N]
+[--dry-allowed-length N] [--dry-base N] [--repetition-penalty N] [--frequency-penalty N] [--presence-penalty N] [--encoder-repetition-penalty N] [--no-repeat-ngram-size N]
+[--repetition-penalty-range N] [--penalty-alpha N] [--guidance-scale N] [--mirostat-mode N] [--mirostat-tau N] [--mirostat-eta N] [--do-sample | --no-do-sample]
+[--dynamic-temperature | --no-dynamic-temperature] [--temperature-last | --no-temperature-last] [--sampler-priority N] [--dry-sequence-breakers N]
+[--enable-thinking | --no-enable-thinking] [--reasoning-effort N] [--chat-template-file CHAT_TEMPLATE_FILE]
 
 Text Generation Web UI
 
 options:
 -h, --help show this help message and exit
 
 Basic settings:
+--user-data-dir USER_DATA_DIR Path to the user data directory. Default: auto-detected.
 --multi-user Multi-user mode. Chat histories are not saved or automatically loaded. Warning: this is likely not safe for sharing publicly.
 --model MODEL Name of the model to load by default.
 --lora LORA [LORA ...] The list of LoRAs to load. If you want to load more than one LoRA, write the names separated by spaces.
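The Basic settings flags above follow standard argparse conventions. As a minimal, hypothetical sketch (flag names come from the help text; the declarations themselves are assumed, not the project's actual code), a few of them could be declared like this:

```python
import argparse

# Illustrative only: re-declaring a few of the documented flags with argparse.
parser = argparse.ArgumentParser(prog="server.py")
parser.add_argument("--user-data-dir", help="Path to the user data directory.")
parser.add_argument("--multi-user", action="store_true",
                    help="Multi-user mode; chat histories are not saved.")
parser.add_argument("--model", help="Name of the model to load by default.")
parser.add_argument("--lora", nargs="+",
                    help="List of LoRAs to load, separated by spaces.")

# argparse converts dashes to underscores on the namespace.
args = parser.parse_args(["--user-data-dir", "/tmp/data", "--lora", "a", "b"])
print(args.user_data_dir, args.lora, args.multi_user)
```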
@@ -280,12 +286,12 @@ Image model:
 Quantization method for image model.
 
 Model loader:
---loader LOADER Choose the model loader manually, otherwise, it will get autodetected. Valid options: Transformers, llama.cpp, ExLlamav3_HF, ExLlamav3,
-TensorRT-LLM.
+--loader LOADER Choose the model loader manually, otherwise, it will get autodetected. Valid options: Transformers, llama.cpp, ExLlamav3_HF, ExLlamav3, TensorRT-
+LLM.
 
 Context and cache:
---ctx-size N, --n_ctx N, --max_seq_len N Context size in tokens. llama.cpp: 0 = auto if gpu-layers is also -1.
---cache-type N, --cache_type N KV cache type; valid options: llama.cpp - fp16, q8_0, q4_0; ExLlamaV3 - fp16, q2 to q8 (can specify k_bits and v_bits separately, e.g. q4_q8).
+--ctx-size, --n_ctx, --max_seq_len N Context size in tokens. llama.cpp: 0 = auto if gpu-layers is also -1.
+--cache-type, --cache_type N KV cache type; valid options: llama.cpp - fp16, q8_0, q4_0; ExLlamaV3 - fp16, q2 to q8 (can specify k_bits and v_bits separately, e.g. q4_q8).
 
 Speculative decoding:
 --model-draft MODEL_DRAFT Path to the draft model for speculative decoding.
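The --cache-type help text says ExLlamaV3 values can carry separate k_bits and v_bits, e.g. q4_q8. A minimal sketch of parsing that format (the helper name and fallback behavior are assumptions, not the project's own parser):

```python
def parse_cache_type(value: str):
    """Split a cache-type string such as 'q4_q8' into (k_bits, v_bits).

    Per the help text, 'fp16' means an unquantized cache, a single 'qN'
    applies one width to both keys and values, and 'qK_qV' sets them
    separately. Illustrative sketch only.
    """
    if value == "fp16":
        return None, None
    parts = value.split("_")
    k_bits = int(parts[0].lstrip("q"))
    v_bits = int(parts[1].lstrip("q")) if len(parts) > 1 else k_bits
    return k_bits, v_bits
```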
@@ -300,7 +306,7 @@ Speculative decoding:
 --spec-ngram-min-hits SPEC_NGRAM_MIN_HITS Minimum n-gram hits for ngram-map speculative decoding.
 
 llama.cpp:
---gpu-layers N, --n-gpu-layers N Number of layers to offload to the GPU. -1 = auto.
+--gpu-layers, --n-gpu-layers N Number of layers to offload to the GPU. -1 = auto.
 --cpu-moe Move the experts to the CPU (for MoE models).
 --mmproj MMPROJ Path to the mmproj file for vision models.
 --streaming-llm Activate StreamingLLM to avoid re-evaluating the entire prompt when old messages are removed.
@@ -314,13 +320,17 @@ llama.cpp:
 --threads THREADS Number of threads to use.
 --threads-batch THREADS_BATCH Number of threads to use for batches/prompt processing.
 --numa Activate NUMA task allocation for llama.cpp.
+--parallel PARALLEL Number of parallel request slots. The context size is divided equally among slots. For example, to have 4 slots with 8192 context each, set
+ctx_size to 32768.
+--fit-target FIT_TARGET Target VRAM margin per device for auto GPU layers, comma-separated list of values in MiB. A single value is broadcast across all devices.
+Default: 1024.
 --extra-flags EXTRA_FLAGS Extra flags to pass to llama-server. Format: "flag1=value1,flag2,flag3=value3". Example: "override-tensor=exps=CPU"
 
 Transformers/Accelerate:
 --cpu Use the CPU to generate text. Warning: Training on CPU is extremely slow.
 --cpu-memory CPU_MEMORY Maximum CPU memory in GiB. Use this for CPU offloading.
 --disk If the model is too large for your GPU(s) and CPU combined, send the remaining layers to the disk.
---disk-cache-dir DISK_CACHE_DIR Directory to save the disk cache to. Defaults to "user_data/cache".
+--disk-cache-dir DISK_CACHE_DIR Directory to save the disk cache to.
 --load-in-8bit Load the model with 8-bit precision (using bitsandbytes).
 --bf16 Load the model with bfloat16 precision. Requires NVIDIA Ampere GPU.
 --no-cache Set use_cache to False while generating text. This reduces VRAM usage slightly, but it comes at a performance cost.
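The new --parallel and --fit-target flags both encode simple arithmetic that the help text states in words: the context is split equally among slots, and a single fit-target value is broadcast to every device. A sketch of that arithmetic (function names are hypothetical; the behavior follows only what the help text says):

```python
def per_slot_ctx(ctx_size: int, parallel: int) -> int:
    # --parallel splits the total context equally among request slots:
    # the README's example, ctx_size=32768 with 4 slots, gives 8192 each.
    return ctx_size // parallel

def broadcast_fit_target(fit_target: str, n_devices: int) -> list[int]:
    # --fit-target is a comma-separated list of MiB values; a single
    # value is broadcast across all devices (the documented default is 1024).
    values = [int(v) for v in fit_target.split(",")]
    return values * n_devices if len(values) == 1 else values
```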
@@ -341,9 +351,6 @@ ExLlamaV3:
 --tp-backend TP_BACKEND The backend for tensor parallelism. Valid options: native, nccl. Default: native.
 --cfg-cache Create an additional cache for CFG negative prompts. Necessary to use CFG with that loader.
 
-TensorRT-LLM:
---cpp-runner Use the ModelRunnerCpp runner, which is faster than the default ModelRunner.
-
 RoPE:
 --alpha_value ALPHA_VALUE Positional embeddings alpha factor for NTK RoPE scaling. Use either this or compress_pos_emb, not both.
 --rope_freq_base ROPE_FREQ_BASE If greater than 0, will be used instead of alpha_value. Those two are related by rope_freq_base = 10000 * alpha_value ^ (64 / 63).
@@ -373,6 +380,50 @@ API:
 --api-enable-ipv6 Enable IPv6 for the API
 --api-disable-ipv4 Disable IPv4 for the API
 --nowebui Do not launch the Gradio UI. Useful for launching the API in standalone mode.
+
+API generation defaults:
+--temperature N Temperature
+--dynatemp-low N Dynamic temperature low
+--dynatemp-high N Dynamic temperature high
+--dynatemp-exponent N Dynamic temperature exponent
+--smoothing-factor N Smoothing factor
+--smoothing-curve N Smoothing curve
+--min-p N Min P
+--top-p N Top P
+--top-k N Top K
+--typical-p N Typical P
+--xtc-threshold N XTC threshold
+--xtc-probability N XTC probability
+--epsilon-cutoff N Epsilon cutoff
+--eta-cutoff N Eta cutoff
+--tfs N TFS
+--top-a N Top A
+--top-n-sigma N Top N Sigma
+--adaptive-target N Adaptive target
+--adaptive-decay N Adaptive decay
+--dry-multiplier N DRY multiplier
+--dry-allowed-length N DRY allowed length
+--dry-base N DRY base
+--repetition-penalty N Repetition penalty
+--frequency-penalty N Frequency penalty
+--presence-penalty N Presence penalty
+--encoder-repetition-penalty N Encoder repetition penalty
+--no-repeat-ngram-size N No repeat ngram size
+--repetition-penalty-range N Repetition penalty range
+--penalty-alpha N Penalty alpha
+--guidance-scale N Guidance scale
+--mirostat-mode N Mirostat mode
+--mirostat-tau N Mirostat tau
+--mirostat-eta N Mirostat eta
+--do-sample, --no-do-sample Do sample
+--dynamic-temperature, --no-dynamic-temperature Dynamic temperature
+--temperature-last, --no-temperature-last Temperature last
+--sampler-priority N Sampler priority
+--dry-sequence-breakers N DRY sequence breakers
+--enable-thinking, --no-enable-thinking Enable thinking
+--reasoning-effort N Reasoning effort
+--chat-template-file CHAT_TEMPLATE_FILE Path to a chat template file (.jinja, .jinja2, or .yaml) to use as the default instruction template for API requests. Overrides the model's
+built-in template.
 ```
 
 </details>
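The --extra-flags value documented in the llama.cpp section uses the format "flag1=value1,flag2,flag3=value3", where the example "override-tensor=exps=CPU" shows that a value may itself contain '='. A small sketch of splitting such a string (the function name is hypothetical, not the project's actual parser):

```python
def parse_extra_flags(spec: str) -> dict:
    """Parse an --extra-flags string like "flag1=value1,flag2,flag3=value3".

    Flags without '=' map to None (boolean-style flags). Only the first
    '=' splits, so a value such as "exps=CPU" in the documented example
    "override-tensor=exps=CPU" stays intact. Illustrative sketch only.
    """
    flags = {}
    for item in spec.split(","):
        if not item:
            continue
        key, sep, value = item.partition("=")
        flags[key] = value if sep else None
    return flags
```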
