
Commit 4dbe384

committed: dmr: gui docs
1 parent dc01252 commit 4dbe384

File tree

3 files changed: +48 −188 lines changed


content/manuals/_index.md

Lines changed: 1 addition & 1 deletion
@@ -39,7 +39,7 @@ params:
   - title: Docker Model Runner
     description: View and manage your local models.
     icon: view_in_ar
-    link: /model-runner/
+    link: /ai/model-runner/
   - title: MCP Catalog and Toolkit
     description: Augment your AI workflow with MCP servers.
    icon: /assets/icons/toolbox.svg

content/manuals/ai/model-runner.md renamed to content/manuals/ai/model-runner/_index.md

Lines changed: 44 additions & 185 deletions
@@ -8,27 +8,30 @@ params:
   group: AI
   weight: 20
 description: Learn how to use Docker Model Runner to manage and run AI models.
-keywords: Docker, ai, model runner, docker deskotp, llm
+keywords: Docker, ai, model runner, docker desktop, llm
 aliases:
   - /desktop/features/model-runner/
-  - /ai/model-runner/
+  - /model-runner/
 ---

 {{< summary-bar feature_name="Docker Model Runner" >}}

-The Docker Model Runner plugin lets you:
+## Key features

-- [Pull models from Docker Hub](https://hub.docker.com/u/ai)
-- Run AI models directly from the command line
-- Manage local models (add, list, remove)
-- Interact with models using a submitted prompt or in chat mode in the CLI or Docker Desktop Dashboard
-- Push models to Docker Hub
+- [Pull and push models to and from Docker Hub](https://hub.docker.com/u/ai)
+- Run and interact with AI models directly from the command line or from the Docker Desktop GUI
+- Manage local models and display logs
+
+## How it works

 Models are pulled from Docker Hub the first time they're used and stored locally. They're loaded into memory only at runtime when a request is made, and unloaded when not in use to optimize resources. Since models can be large, the initial pull may take some time — but after that, they're cached locally for faster access. You can interact with the model using [OpenAI-compatible APIs](#what-api-endpoints-are-available).
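The retained paragraph above points readers at the OpenAI-compatible APIs, and a note kept further down in this file confirms the `POST /engines/v1/chat/completions` path. A minimal sketch of calling it from a host process could accompany that paragraph; the base URL `http://localhost:12434`, the model name, and the `ask` helper are illustrative assumptions, not part of this commit:

```python
import json
import urllib.request


def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat-completions payload for Model Runner."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


def ask(base_url: str, model: str, prompt: str) -> str:
    """POST the payload to the OpenAI-compatible endpoint and return the reply text."""
    req = urllib.request.Request(
        f"{base_url}/engines/v1/chat/completions",
        data=json.dumps(build_chat_request(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


if __name__ == "__main__":
    # Assumed host and port; adjust to wherever Model Runner is reachable.
    print(ask("http://localhost:12434", "ai/smollm2", "Hi"))
```

Because the model is loaded on demand, the first call after a pull may take noticeably longer than subsequent ones.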

 > [!TIP]
 >
-> Using Testcontainers or Docker Compose? [Testcontainers for Java](https://java.testcontainers.org/modules/docker_model_runner/) and [Go](https://golang.testcontainers.org/modules/dockermodelrunner/), and [Docker Compose](/manuals/compose/how-tos/model-runner.md) now support Docker Model Runner.
+> Using Testcontainers or Docker Compose?
+> [Testcontainers for Java](https://java.testcontainers.org/modules/docker_model_runner/)
+> and [Go](https://golang.testcontainers.org/modules/dockermodelrunner/), and
+> [Docker Compose](/manuals/compose/how-tos/model-runner.md) now support Docker Model Runner.

 ## Enable Docker Model Runner

@@ -45,192 +48,58 @@ Models are pulled from Docker Hub the first time they're used and stored locally

 You can now use the `docker model` command in the CLI and view and interact with your local models in the **Models** tab in the Docker Desktop Dashboard.
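Since the new page drops the full CLI reference, a quick sanity check could still follow this sentence; both commands below come verbatim from the section deleted further down:

```console
$ docker model status
$ docker model help
```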

-## Available commands
-
-### Model runner status
-
-Check whether the Docker Model Runner is active and displays the current inference engine:
-
-```console
-$ docker model status
-```
-
-### View all commands
-
-Displays help information and a list of available subcommands.
-
-```console
-$ docker model help
-```
-
-Output:
-
-```text
-Usage: docker model COMMAND
-
-Commands:
-  list        List models available locally
-  pull        Download a model from Docker Hub
-  rm          Remove a downloaded model
-  run         Run a model interactively or with a prompt
-  status      Check if the model runner is running
-  version     Show the current version
-```
-
-### Pull a model
-
-Pulls a model from Docker Hub to your local environment.
-
-```console
-$ docker model pull <model>
-```
-
-Example:
-
-```console
-$ docker model pull ai/smollm2
-```
-
-Output:
-
-```text
-Downloaded: 257.71 MB
-Model ai/smollm2 pulled successfully
-```
+## Pull a model

-The models also display in the Docker Desktop Dashboard.
+Models are cached locally.

-#### Pull from Hugging Face
+{{< tabs >}}
+{{< tab name="From Docker Desktop">}}

-You can also pull GGUF models directly from [Hugging Face](https://huggingface.co/models?library=gguf).
+1. Select **Models**, then select the **Docker Hub** tab.
+2. Find the model of your choice and select **Pull**.

-```console
-$ docker model pull hf.co/<model-you-want-to-pull>
-```
-
-For example:
-
-```console
-$ docker model pull hf.co/bartowski/Llama-3.2-1B-Instruct-GGUF
-```
+{{< /tab >}}
+{{< tab name="From the Docker CLI">}}

-Pulls the [bartowski/Llama-3.2-1B-Instruct-GGUF](https://huggingface.co/bartowski/Llama-3.2-1B-Instruct-GGUF).
+Use the [`docker model pull` command](/reference/cli/docker/).

-### List available models
+{{< /tab >}}
+{{< /tabs >}}
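For the CLI tab, a short transcript would make the command concrete; the command and output below are reused verbatim from the section deleted above:

```console
$ docker model pull ai/smollm2
Downloaded: 257.71 MB
Model ai/smollm2 pulled successfully
```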

-Lists all models currently pulled to your local environment.
+## Run a model

-```console
-$ docker model list
-```
+{{< tabs >}}
+{{< tab name="From Docker Desktop">}}

-You will see something similar to:
+Select **Models**, select the **Local** tab, and click the play button.
+The interactive chat screen opens.

-```text
-MODEL       PARAMETERS  QUANTIZATION    ARCHITECTURE  MODEL ID      CREATED     SIZE
-ai/smollm2  361.82 M    IQ2_XXS/Q4_K_M  llama         354bf30d0aa3  3 days ago  256.35 MiB
-```
+{{< /tab >}}
+{{< tab name="From the Docker CLI">}}

-### Run a model
+Use the [`docker model run` command](/reference/cli/docker/).

-Run a model and interact with it using a submitted prompt or in chat mode. When you run a model, Docker
-calls an Inference Server API endpoint hosted by the Model Runner through Docker Desktop. The model
-stays in memory until another model is requested, or until a pre-defined inactivity timeout is reached (currently 5 minutes).
+{{< /tab >}}
+{{< /tabs >}}
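As with pulling, a transcript would help the CLI tab; this one-time prompt example and its reply are taken from the deleted section. Running the same command without a prompt starts interactive chat mode instead:

```console
$ docker model run ai/smollm2 "Hi"
Hello! How can I assist you today?
```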

-You do not have to use `docker model run` before interacting with a specific model from a
-host process or from within a container. Model Runner transparently loads the requested model on-demand, assuming it has been
-pulled beforehand and is locally available.
+## Troubleshooting

-#### One-time prompt
+To troubleshoot potential issues, display the logs:

-```console
-$ docker model run ai/smollm2 "Hi"
-```
+{{< tabs >}}
+{{< tab name="From Docker Desktop">}}

-Output:
+Select **Models** and select the **Logs** tab.

-```text
-Hello! How can I assist you today?
-```
+{{< /tab >}}
+{{< tab name="From the Docker CLI">}}

-#### Interactive chat
+Use the [`docker model logs` command](/reference/cli/docker/).

-```console
-$ docker model run ai/smollm2
-```
+{{< /tab >}}
+{{< /tabs >}}
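The deleted reference also documented two flags for the logs command that are worth carrying over here; for example, to stream logs in real time while excluding inference engine output:

```console
$ docker model logs -f --no-engines
```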

-Output:
-
-```text
-Interactive chat mode started. Type '/bye' to exit.
-> Hi
-Hi there! It's SmolLM, AI assistant. How can I help you today?
-> /bye
-Chat session ended.
-```
-
-> [!TIP]
->
-> You can also use chat mode in the Docker Desktop Dashboard when you select the model in the **Models** tab.
-
-### Push a model to Docker Hub
-
-To push your model to Docker Hub:
-
-```console
-$ docker model push <namespace>/<model>
-```
-
-### Tag a model
-
-To specify a particular version or variant of the model:
-
-```console
-$ docker model tag
-```
-
-If no tag is provided, Docker defaults to `latest`.
-
-### View the logs
-
-Fetch logs from Docker Model Runner to monitor activity or debug issues.
-
-```console
-$ docker model logs
-```
-
-The following flags are accepted:
-
-- `-f`/`--follow`: View logs with real-time streaming
-- `--no-engines`: Exclude inference engine logs from the output
-
-### Remove a model
-
-Removes a downloaded model from your system.
-
-```console
-$ docker model rm <model>
-```
-
-Output:
-
-```text
-Model <model> removed successfully
-```
-
-### Package a model
-
-Packages a GGUF file into a Docker model OCI artifact, with optional licenses, and pushes it to the specified registry.
-
-```console
-$ docker model package \
-    --gguf ./model.gguf \
-    --licenses license1.txt \
-    --licenses license2.txt \
-    --push registry.example.com/ai/custom-model
-```
-
-## Integrate the Docker Model Runner into your software development lifecycle
+## Example: Integrate Docker Model Runner into your software development lifecycle

 You can now start building your Generative AI application powered by the Docker Model Runner.

@@ -290,7 +159,6 @@ with `/exp/vDD4.40`.
 > [!NOTE]
 > You can omit `llama.cpp` from the path. For example: `POST /engines/v1/chat/completions`.

-
 ### How do I interact through the OpenAI API?

 #### From within a container
@@ -402,12 +270,3 @@ The Docker Model CLI currently lacks consistent support for specifying models by
 ## Share feedback

 Thanks for trying out Docker Model Runner. Give feedback or report any bugs you may find through the **Give feedback** link next to the **Enable Docker Model Runner** setting.
-
-## Disable the feature
-
-To disable Docker Model Runner:
-
-1. Open the **Settings** view in Docker Desktop.
-2. Navigate to the **Beta** tab in **Features in development**.
-3. Clear the **Enable Docker Model Runner** checkbox.
-4. Select **Apply & restart**.

data/redirects.yml

Lines changed: 3 additions & 2 deletions
@@ -298,9 +298,10 @@
   - /go/hub-pull-limits/

 # Desktop DMR
-"/model-runner/":
+
+"/ai/model-runner/":
   - /go/model-runner/
-
+
 # Desktop MCP Toolkit
 "/ai/mcp-toolkit/":
   - /go/mcp-toolkit/