4 files changed, +4 −4

@@ -9,7 +9,7 @@ Install Docker Model Runner (Docker Engine only)
 | :-----------------| :---------| :------------| :-------------------------------------------------------------------------------------------------------|
 | `--backend`       | `string`  |              | Specify backend (llama.cpp\| vllm). Default: llama.cpp                                                   |
 | `--do-not-track`  | `bool`    |              | Do not track models usage in Docker Model Runner                                                         |
-| `--gpu`           | `string`  | `auto`       | Specify GPU support (none\| auto\| cuda\| rocm\| cann\| musa)                                            |
+| `--gpu`           | `string`  | `auto`       | Specify GPU support (none\| auto\| cuda\| rocm\| musa\| cann)                                            |
 | `--host`          | `string`  | `127.0.0.1`  | Host address to bind Docker Model Runner                                                                 |
 | `--port`          | `uint16`  | `0`          | Docker container port for Docker Model Runner (default: 12434 for Docker Engine, 12435 for Cloud mode)   |

@@ -9,7 +9,7 @@ Reinstall Docker Model Runner (Docker Engine only)
 | :-----------------| :---------| :------------| :-------------------------------------------------------------------------------------------------------|
 | `--backend`       | `string`  |              | Specify backend (llama.cpp\| vllm). Default: llama.cpp                                                   |
 | `--do-not-track`  | `bool`    |              | Do not track models usage in Docker Model Runner                                                         |
-| `--gpu`           | `string`  | `auto`       | Specify GPU support (none\| auto\| cuda\| cann\| musa)                                                   |
+| `--gpu`           | `string`  | `auto`       | Specify GPU support (none\| auto\| cuda\| musa\| rocm\| cann)                                            |
 | `--host`          | `string`  | `127.0.0.1`  | Host address to bind Docker Model Runner                                                                 |
 | `--port`          | `uint16`  | `0`          | Docker container port for Docker Model Runner (default: 12434 for Docker Engine, 12435 for Cloud mode)   |

@@ -8,7 +8,7 @@ Restart Docker Model Runner (Docker Engine only)
 | Name              | Type      | Default      | Description                                                                                              |
 | :-----------------| :---------| :------------| :-------------------------------------------------------------------------------------------------------|
 | `--do-not-track`  | `bool`    |              | Do not track models usage in Docker Model Runner                                                         |
-| `--gpu`           | `string`  | `auto`       | Specify GPU support (none\| auto\| cuda\| cann\| musa)                                                   |
+| `--gpu`           | `string`  | `auto`       | Specify GPU support (none\| auto\| cuda\| musa\| rocm\| cann)                                            |
 | `--host`          | `string`  | `127.0.0.1`  | Host address to bind Docker Model Runner                                                                 |
 | `--port`          | `uint16`  | `0`          | Docker container port for Docker Model Runner (default: 12434 for Docker Engine, 12435 for Cloud mode)   |

@@ -9,7 +9,7 @@ Start Docker Model Runner (Docker Engine only)
 | :-----------------| :---------| :--------| :-------------------------------------------------------------------------------------------------------|
 | `--backend`       | `string`  |          | Specify backend (llama.cpp\| vllm). Default: llama.cpp                                                   |
 | `--do-not-track`  | `bool`    |          | Do not track models usage in Docker Model Runner                                                         |
-| `--gpu`           | `string`  | `auto`   | Specify GPU support (none\| auto\| cuda\| cann\| musa)                                                   |
+| `--gpu`           | `string`  | `auto`   | Specify GPU support (none\| auto\| cuda\| musa\| rocm\| cann)                                            |
 | `--port`          | `uint16`  | `0`      | Docker container port for Docker Model Runner (default: 12434 for Docker Engine, 12435 for Cloud mode)   |
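The common thread across these hunks is the enumerated value set accepted by `--gpu`. As a minimal sketch of how a wrapper script might validate that flag before invoking the runner: the accepted values come from the option descriptions above, while the `validate_gpu` helper itself is hypothetical and not part of the Docker Model Runner CLI.

```shell
# Hypothetical helper: accept only the --gpu values documented in the
# tables above (none, auto, cuda, rocm, musa, cann); reject anything else.
validate_gpu() {
  case "$1" in
    none|auto|cuda|rocm|musa|cann)
      echo "gpu backend: $1"
      ;;
    *)
      echo "unsupported --gpu value: $1" >&2
      return 1
      ;;
  esac
}

validate_gpu auto   # prints: gpu backend: auto
```

A `case` statement keeps the whitelist in one place, so adding a new backend to the docs means touching a single pattern line.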