content/manuals/desktop/features/gordon/_index.md (2 additions, 0 deletions)
@@ -97,6 +97,8 @@ If you have concerns about data collection or usage, you can
 9. Select **Apply & restart**.
 
+You can also enable Ask Gordon from the **Ask Gordon** tab if you have selected the **Access experimental features** setting. Simply select the **Enable Ask Gordon** button, and then accept the Docker AI terms of service agreement.
+
 ## Using Ask Gordon
 
 The primary interfaces to Docker's AI capabilities are through the **Ask
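Once Ask Gordon is enabled through either of the paths above, it can also be queried from the terminal. A minimal sketch, assuming the `docker ai` entry point documented for Gordon is available; the question itself is illustrative.

```bash
# Ask Gordon a question from the CLI (assumes Ask Gordon has been enabled
# in Docker Desktop and that the `docker ai` command is available).
docker ai "What can you help me with?"
```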
content/manuals/desktop/features/model-runner.md (16 additions, 1 deletion)
@@ -17,7 +17,8 @@ The Docker Model Runner plugin lets you:
 - [Pull models from Docker Hub](https://hub.docker.com/u/ai)
 - Run AI models directly from the command line
 - Manage local models (add, list, remove)
-- Interact with models using a submitted prompt or in chat mode
+- Interact with models using a submitted prompt or in chat mode in the CLI or Docker Desktop Dashboard
+- Push models to Docker Hub
 
 Models are pulled from Docker Hub the first time they're used and stored locally. They're loaded into memory only at runtime when a request is made, and unloaded when not in use to optimize resources. Since models can be large, the initial pull may take some time — but after that, they're cached locally for faster access. You can interact with the model using [OpenAI-compatible APIs](#what-api-endpoints-are-available).
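Since the paragraph above points to the OpenAI-compatible APIs, here is a minimal request sketch. It assumes the host-side TCP endpoint is enabled on its default port (12434) and that `ai/smollm2` has already been pulled; the exact base URL and port are documented in the API endpoints section of the page and may differ in your setup.

```bash
# Sketch only: base URL and port are assumptions (host-side TCP access on 12434).
curl http://localhost:12434/engines/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "ai/smollm2",
        "messages": [{"role": "user", "content": "Say hello in one sentence."}]
      }'
```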
@@ -31,6 +32,8 @@ Models are pulled from Docker Hub the first time they're used and stored locally
 6. Navigate to **Features in development**.
 7. From the **Beta** tab, check the **Enable Docker Model Runner** setting.
 
+You can now use the `docker model` command in the CLI and view and interact with your local models in the **Models** tab in the Docker Desktop Dashboard.
+
 ## Available commands
 
 ### Model runner status
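The added sentence above refers to the `docker model` CLI. A short walkthrough sketch follows; the subcommands mirror the sections later on this page, while the model name and prompt are illustrative.

```bash
# Confirm the Model Runner backend is running (assumes it was enabled as described above).
docker model status

# Pull a model from Docker Hub; it is cached locally after the first pull.
docker model pull ai/smollm2

# Send a one-off prompt to the model from the CLI.
docker model run ai/smollm2 "Give me a fact about whales."
```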
@@ -84,6 +87,8 @@ Downloaded: 257.71 MB
 Model ai/smollm2 pulled successfully
 ```
 
+The models also display in the Docker Desktop Dashboard.
+
 ### List available models
 
 Lists all models currently pulled to your local environment.
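To accompany the list command described above, a sketch of listing and cleaning up local models; `docker model rm` is assumed here to be the removal counterpart implied by "Manage local models (add, list, remove)".

```bash
# Show every model currently pulled to the local store.
docker model list

# Remove a local model that is no longer needed (assumed removal command).
docker model rm ai/smollm2
```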
@@ -131,6 +136,16 @@ Hi there! It's SmolLM, AI assistant. How can I help you today?
 Chat session ended.
 ```
 
+> [!TIP]
+>
+> You can also use chat mode in the Docker Desktop Dashboard when you select the model in the **Models** tab.
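For the CLI side of the tip above, the transcript that ends just before it ("Chat session ended.") comes from an interactive session; the sketch below assumes that running a model without a prompt argument starts such a session.

```bash
# Assumed invocation for interactive chat mode: no prompt argument is given,
# so the command opens a chat session instead of answering a single prompt.
docker model run ai/smollm2
```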