Commit 45ba5e7

dmr: gui docs

1 parent a746d0b

File tree

3 files changed: +48 -197 lines changed

content/manuals/_index.md

Lines changed: 1 addition & 1 deletion
@@ -39,7 +39,7 @@ params:
   - title: Docker Model Runner
     description: View and manage your local models.
     icon: view_in_ar
-    link: /model-runner/
+    link: /ai/model-runner/
   - title: MCP Catalog and Toolkit
     description: Augment your AI workflow with MCP servers.
     icon: /assets/icons/toolbox.svg

content/manuals/ai/model-runner.md renamed to content/manuals/ai/model-runner/_index.md

Lines changed: 44 additions & 194 deletions
@@ -8,27 +8,30 @@ params:
   group: AI
   weight: 20
   description: Learn how to use Docker Model Runner to manage and run AI models.
-  keywords: Docker, ai, model runner, docker deskotp, llm
+  keywords: Docker, ai, model runner, docker desktop, llm
   aliases:
   - /desktop/features/model-runner/
-  - /ai/model-runner/
+  - /model-runner/
 ---
 
 {{< summary-bar feature_name="Docker Model Runner" >}}
 
-The Docker Model Runner plugin lets you:
+## Key features
 
-- [Pull models from Docker Hub](https://hub.docker.com/u/ai)
-- Run AI models directly from the command line
-- Manage local models (add, list, remove)
-- Interact with models using a submitted prompt or in chat mode in the CLI or Docker Desktop Dashboard
-- Push models to Docker Hub
+- [Pull and push models to and from Docker Hub](https://hub.docker.com/u/ai)
+- Run and interact with AI models directly from the command line or from the Docker Desktop GUI
+- Manage local models and display logs
+
+## How it works
 
 Models are pulled from Docker Hub the first time they're used and stored locally. They're loaded into memory only at runtime when a request is made, and unloaded when not in use to optimize resources. Since models can be large, the initial pull may take some time — but after that, they're cached locally for faster access. You can interact with the model using [OpenAI-compatible APIs](#what-api-endpoints-are-available).
 
 > [!TIP]
 >
-> Using Testcontainers or Docker Compose? [Testcontainers for Java](https://java.testcontainers.org/modules/docker_model_runner/) and [Go](https://golang.testcontainers.org/modules/dockermodelrunner/), and [Docker Compose](/manuals/compose/how-tos/model-runner.md) now support Docker Model Runner.
+> Using Testcontainers or Docker Compose?
+> [Testcontainers for Java](https://java.testcontainers.org/modules/docker_model_runner/)
+> and [Go](https://golang.testcontainers.org/modules/dockermodelrunner/), and
+> [Docker Compose](/manuals/compose/how-tos/model-runner.md) now support Docker Model Runner.
 
 ## Enable Docker Model Runner

@@ -44,201 +47,58 @@ Models are pulled from Docker Hub the first time they're used and stored locally
 
 You can now use the `docker model` command in the CLI and view and interact with your local models in the **Models** tab in the Docker Desktop Dashboard.

-### Enable DMR in Docker Engine
-
-1. Ensure you have installed [Docker Engine](/engine/install/).
-2. DMR is available as a package. To install it, run:
-
-   ```console
-   apt install docker-model-plugin
-   ```
-
-## Available commands
-
-### Model runner status
-
-Check whether the Docker Model Runner is active and displays the current inference engine:
-
-```console
-$ docker model status
-```
-
-### View all commands
-
-Displays help information and a list of available subcommands.
-
-```console
-$ docker model help
-```
-
-Output:
-
-```text
-Usage:  docker model COMMAND
-
-Commands:
-  list        List models available locally
-  pull        Download a model from Docker Hub
-  rm          Remove a downloaded model
-  run         Run a model interactively or with a prompt
-  status      Check if the model runner is running
-  version     Show the current version
-```
-
-### Pull a model
-
-Pulls a model from Docker Hub to your local environment.
-
-```console
-$ docker model pull <model>
-```
-
-Example:
-
-```console
-$ docker model pull ai/smollm2
-```
-
-Output:
-
-```text
-Downloaded: 257.71 MB
-Model ai/smollm2 pulled successfully
-```
-
-The models also display in the Docker Desktop Dashboard.
-
-#### Pull from Hugging Face
-
-You can also pull GGUF models directly from [Hugging Face](https://huggingface.co/models?library=gguf).
-
-```console
-$ docker model pull hf.co/<model-you-want-to-pull>
-```
-
-For example:
-
-```console
-$ docker model pull hf.co/bartowski/Llama-3.2-1B-Instruct-GGUF
-```
-
-Pulls the [bartowski/Llama-3.2-1B-Instruct-GGUF](https://huggingface.co/bartowski/Llama-3.2-1B-Instruct-GGUF).
-
-### List available models
-
-Lists all models currently pulled to your local environment.
+## Pull a model

-```console
-$ docker model list
-```
-
-You will see something similar to:
-
-```text
-MODEL       PARAMETERS  QUANTIZATION    ARCHITECTURE  MODEL ID      CREATED     SIZE
-ai/smollm2  361.82 M    IQ2_XXS/Q4_K_M  llama         354bf30d0aa3  3 days ago  256.35 MiB
-```
-
-### Run a model
-
-Run a model and interact with it using a submitted prompt or in chat mode. When you run a model, Docker
-calls an Inference Server API endpoint hosted by the Model Runner through Docker Desktop. The model
-stays in memory until another model is requested, or until a pre-defined inactivity timeout is reached (currently 5 minutes).
-
-You do not have to use `docker model run` before interacting with a specific model from a
-host process or from within a container. Model Runner transparently loads the requested model on-demand, assuming it has been
-pulled beforehand and is locally available.
-
-#### One-time prompt
-
-```console
-$ docker model run ai/smollm2 "Hi"
-```
-
-Output:
-
-```text
-Hello! How can I assist you today?
-```
-
-#### Interactive chat
-
-```console
-$ docker model run ai/smollm2
-```
-
-Output:
-
-```text
-Interactive chat mode started. Type '/bye' to exit.
-> Hi
-Hi there! It's SmolLM, AI assistant. How can I help you today?
-> /bye
-Chat session ended.
-```
+
+Models are cached locally.
 
-> [!TIP]
->
-> You can also use chat mode in the Docker Desktop Dashboard when you select the model in the **Models** tab.
+{{< tabs >}}
+{{< tab name="From Docker Desktop">}}
 
-### Push a model to Docker Hub
+1. Select **Models**, then select the **Docker Hub** tab.
+2. Find the model of your choice and select **Pull**.
 
-To push your model to Docker Hub:
+{{< /tab >}}
+{{< tab name="From the Docker CLI">}}
 
-```console
-$ docker model push <namespace>/<model>
-```
+Use the [`docker model pull` command](/reference/cli/docker/).
 
-### Tag a model
+{{< /tab >}}
+{{< /tabs >}}
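As a concrete CLI example (model name and output taken from the command walkthrough this page previously carried), pulling the `ai/smollm2` model from Docker Hub looks like this:

```console
$ docker model pull ai/smollm2
Downloaded: 257.71 MB
Model ai/smollm2 pulled successfully
```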
 
-To specify a particular version or variant of the model:
+## Run a model
 
-```console
-$ docker model tag
-```
+{{< tabs >}}
+{{< tab name="From Docker Desktop">}}
 
-If no tag is provided, Docker defaults to `latest`.
+Select **Models**, select the **Local** tab, and then select the play button.
+The interactive chat screen opens.
 
-### View the logs
+{{< /tab >}}
+{{< tab name="From the Docker CLI">}}
 
-Fetch logs from Docker Model Runner to monitor activity or debug issues.
+Use the [`docker model run` command](/reference/cli/docker/).
 
-```console
-$ docker model logs
-```
+{{< /tab >}}
+{{< /tabs >}}
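For example, a one-time prompt against the `ai/smollm2` model (transcript reproduced from the earlier revision of this page; your model's reply may differ):

```console
$ docker model run ai/smollm2 "Hi"
Hello! How can I assist you today?
```

Running the same command without a prompt starts an interactive chat session; type `/bye` to exit.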
 
-The following flags are accepted:
+## Troubleshooting
 
-- `-f`/`--follow`: View logs with real-time streaming
-- `--no-engines`: Exclude inference engine logs from the output
+To troubleshoot potential issues, display the logs:
 
-### Remove a model
+{{< tabs >}}
+{{< tab name="From Docker Desktop">}}
 
-Removes a downloaded model from your system.
+Select **Models**, then select the **Logs** tab.
 
-```console
-$ docker model rm <model>
-```
+{{< /tab >}}
+{{< tab name="From the Docker CLI">}}
 
-Output:
+Use the [`docker model logs` command](/reference/cli/docker/).
 
-```text
-Model <model> removed successfully
-```
-
-### Package a model
+{{< /tab >}}
+{{< /tabs >}}
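For example, to stream the logs in real time, use the `-f`/`--follow` flag (documented for `docker model logs` in the earlier revision of this page):

```console
$ docker model logs -f
```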
 
-Packages a GGUF file into a Docker model OCI artifact, with optional licenses, and pushes it to the specified registry.
-
-```console
-$ docker model package \
-    --gguf ./model.gguf \
-    --licenses license1.txt \
-    --licenses license2.txt \
-    --push registry.example.com/ai/custom-model
-```
-
-## Integrate the Docker Model Runner into your software development lifecycle
+## Example: Integrate Docker Model Runner into your software development lifecycle
 
 You can now start building your Generative AI application powered by the Docker Model Runner.

@@ -298,7 +158,6 @@ with `/exp/vDD4.40`.
 > [!NOTE]
 > You can omit `llama.cpp` from the path. For example: `POST /engines/v1/chat/completions`.
-
 
 ### How do I interact through the OpenAI API?
 
 #### From within a container
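A minimal sketch, assuming the `model-runner.docker.internal` hostname that Docker Desktop exposes to containers (not shown in this excerpt) and that `ai/smollm2` has already been pulled:

```console
$ curl http://model-runner.docker.internal/engines/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
      "model": "ai/smollm2",
      "messages": [{"role": "user", "content": "Hi"}]
    }'
```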
@@ -410,12 +269,3 @@ The Docker Model CLI currently lacks consistent support for specifying models by
 ## Share feedback
 
 Thanks for trying out Docker Model Runner. Give feedback or report any bugs you may find through the **Give feedback** link next to the **Enable Docker Model Runner** setting.
-
-## Disable the feature
-
-To disable Docker Model Runner:
-
-1. Open the **Settings** view in Docker Desktop.
-2. Navigate to the **Beta** tab in **Features in development**.
-3. Clear the **Enable Docker Model Runner** checkbox.
-4. Select **Apply & restart**.

data/redirects.yml

Lines changed: 3 additions & 2 deletions

@@ -298,9 +298,10 @@
   - /go/hub-pull-limits/
 
 # Desktop DMR
-"/model-runner/":
+
+"/ai/model-runner/":
   - /go/model-runner/
-
+
 # Desktop MCP Toolkit
 "/ai/mcp-toolkit/":
   - /go/mcp-toolkit/

0 commit comments