In case you want to interact with the API from the host, but use TCP instead of a Docker socket, you can enable the host-side TCP support from the Docker Desktop GUI, or via the [Docker Desktop CLI](/manuals/desktop/features/desktop-cli.md). For example, using `docker desktop enable model-runner --tcp <port>`.
Afterwards, interact with it as previously documented, using `localhost` and the chosen (or default) port.
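For example, assuming TCP support was enabled on port 12434, a request from the host might look like the following. The endpoint path and model name here are illustrative; adjust them to match your setup and the API reference:

```bash
# Send a chat completion request to Model Runner over TCP from the host.
# Port, endpoint path, and model name are examples -- adjust as needed.
curl http://localhost:12434/engines/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "ai/smollm2",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```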
## Known issues

Currently, Docker Model Runner doesn't include safeguards to prevent you from launching models that exceed the system's available resources. Attempting to run a model that is too large for the host machine may result in severe slowdowns or render the system temporarily unusable. This issue is particularly common when running large language models without sufficient GPU memory or system RAM.
### `model run` drops into chat even if pull fails
If a model image fails to pull successfully, for example due to network issues or lack of disk space, the `docker model run` command will still drop you into the chat interface, even though the model isn’t actually available. This can lead to confusion, as the chat will not function correctly without a running model.
As a workaround, manually retry the `docker model pull` command to ensure the image is available before running the model again.
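Until this is fixed, one way to avoid landing in a broken chat session is to chain the pull and run steps so the chat only starts when the pull succeeds. The model name below is just an example:

```bash
# Only start the chat if the pull actually succeeded;
# "ai/smollm2" is an illustrative model name.
docker model pull ai/smollm2 && docker model run ai/smollm2
```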
### No consistent digest support in Model CLI
The Docker Model CLI currently lacks consistent support for specifying models by image digest. As a temporary workaround, you should refer to models by name instead of digest.
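For example, reference a model by name (and optionally a tag) rather than pinning it to a digest. The model name below is illustrative:

```bash
# Works: reference the model by name
docker model pull ai/smollm2

# May not behave consistently yet: reference by digest
# docker model pull ai/smollm2@sha256:...
```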
### Misleading pull progress after failed initial attempt
In some cases, if an initial `docker model pull` fails partway through, a subsequent successful pull may misleadingly report “0 bytes” downloaded even though data is being fetched in the background. This can give the impression that nothing is happening, when in fact the model is being retrieved. Despite the incorrect progress output, the pull typically completes as expected.
## Share feedback
Thanks for trying out Docker Model Runner. Give feedback or report any bugs you may find through the **Give feedback** link next to the **Enable Docker Model Runner** setting.