diff --git a/content/manuals/_index.md b/content/manuals/_index.md
index 43eb7dc6cebb..f90b12bc4887 100644
--- a/content/manuals/_index.md
+++ b/content/manuals/_index.md
@@ -39,7 +39,7 @@ params:
   - title: Docker Model Runner
     description: View and manage your local models.
     icon: view_in_ar
-    link: /ai/model-runner/
+    link: /model-runner/
   - title: MCP Catalog and Toolkit
     description: Augment your AI workflow with MCP servers.
     icon: /icons/toolkit.svg
diff --git a/content/manuals/ai/model-runner/_index.md b/content/manuals/ai/model-runner.md
similarity index 83%
rename from content/manuals/ai/model-runner/_index.md
rename to content/manuals/ai/model-runner.md
index 56dc11dcae8c..279336575d95 100644
--- a/content/manuals/ai/model-runner/_index.md
+++ b/content/manuals/ai/model-runner.md
@@ -11,27 +11,24 @@ description: Learn how to use Docker Model Runner to manage and run AI models.
 keywords: Docker, ai, model runner, docker desktop, docker engine, llm
 aliases:
   - /desktop/features/model-runner/
-  - /model-runner/
+  - /ai/model-runner/
 ---
 
 {{< summary-bar feature_name="Docker Model Runner" >}}
 
-## Key features
+The Docker Model Runner plugin lets you:
 
-- [Pull and push models to and from Docker Hub](https://hub.docker.com/u/ai)
-- Run and interact with AI models directly from the command line or from the Docker Desktop GUI
-- Manage local models and display logs
-
-## How it works
+- [Pull models from Docker Hub](https://hub.docker.com/u/ai)
+- Run AI models directly from the command line
+- Manage local models (add, list, remove)
+- Interact with models from the CLI or the Docker Desktop Dashboard, either with a one-off prompt or in chat mode
+- Push models to Docker Hub
 
 Models are pulled from Docker Hub the first time they're used and stored locally. They're loaded into memory only at runtime when a request is made, and unloaded when not in use to optimize resources.
 Since models can be large, the initial pull may take some time — but after that, they're cached locally for faster access. You can interact with the model using [OpenAI-compatible APIs](#what-api-endpoints-are-available).
 
 > [!TIP]
 >
-> Using Testcontainers or Docker Compose?
-> [Testcontainers for Java](https://java.testcontainers.org/modules/docker_model_runner/)
-> and [Go](https://golang.testcontainers.org/modules/dockermodelrunner/), and
-> [Docker Compose](/manuals/compose/how-tos/model-runner.md) now support Docker Model Runner.
+> Using Testcontainers or Docker Compose? [Testcontainers for Java](https://java.testcontainers.org/modules/docker_model_runner/), [Testcontainers for Go](https://golang.testcontainers.org/modules/dockermodelrunner/), and [Docker Compose](/manuals/compose/how-tos/model-runner.md) now support Docker Model Runner.
 
 ## Enable Docker Model Runner
 
@@ -79,58 +76,7 @@ You can now use the `docker model` command in the CLI and view and interact with
    ```console
    $ docker model run ai/smollm2
    ```
 
-## Pull a model
-
-Models are cached locally.
-
-{{< tabs >}}
-{{< tab name="From Docker Desktop">}}
-
-1. Select **Models** and select the **Docker Hub** tab.
-2. Find the model of your choice and select **Pull**.
-
-{{< /tab >}}
-{{< tab name="From the Docker CLI">}}
-
-Use the [`docker model pull` command](/reference/cli/docker/).
-
-{{< /tab >}}
-{{< /tabs >}}
-
-## Run a model
-
-{{< tabs >}}
-{{< tab name="From Docker Desktop">}}
-
-Select **Models** and select the **Local** tab and click the play button.
-The interactive chat screen opens.
-
-{{< /tab >}}
-{{< tab name="From the Docker CLI">}}
-
-Use the [`docker model run` command](/reference/cli/docker/).
-
-{{< /tab >}}
-{{< /tabs >}}
-
-## Troubleshooting
-
-To troubleshoot potential issues, display the logs:
-
-{{< tabs >}}
-{{< tab name="From Docker Desktop">}}
-
-Select **Models** and select the **Logs** tab.
-
-{{< /tab >}}
-{{< tab name="From the Docker CLI">}}
-
-Use the [`docker model log` command](/reference/cli/docker/).
-
-{{< /tab >}}
-{{< /tabs >}}
-
-## Example: Integrate Docker Model Runner into your software development lifecycle
+## Integrate Docker Model Runner into your software development lifecycle
 
 You can now start building your Generative AI application powered by the Docker Model Runner.
 
@@ -218,6 +164,7 @@ with `/exp/vDD4.40`.
 
 > [!NOTE]
 > You can omit `llama.cpp` from the path. For example: `POST /engines/v1/chat/completions`.
+
 ### How do I interact through the OpenAI API?
 
 #### From within a container
 
@@ -333,3 +280,12 @@ The Docker Model CLI currently lacks consistent support for specifying models by
 ## Share feedback
 
 Thanks for trying out Docker Model Runner. Give feedback or report any bugs you may find through the **Give feedback** link next to the **Enable Docker Model Runner** setting.
+
+## Disable the feature
+
+To disable Docker Model Runner:
+
+1. Open the **Settings** view in Docker Desktop.
+2. Navigate to the **Beta** tab in **Features in development**.
+3. Clear the **Enable Docker Model Runner** checkbox.
+4. Select **Apply & restart**.
diff --git a/data/redirects.yml b/data/redirects.yml
index 2fa714d311e1..aed83e6b2667 100644
--- a/data/redirects.yml
+++ b/data/redirects.yml
@@ -284,8 +284,7 @@
   - /go/mcp-toolkit/
 
 # Desktop DMR
-
-"/ai/model-runner/":
+"/model-runner/":
   - /go/model-runner/
 
 # Docker Desktop - volumes cloud backup
@@ -339,3 +338,4 @@
   - /go/permissions/
 "/desktop/setup/install/mac-permission-requirements/#binding-privileged-ports":
   - /go/port-mapping/
+
\ No newline at end of file
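
The renamed page keeps pointing readers at OpenAI-compatible endpoints such as `POST /engines/v1/chat/completions`. As a minimal client sketch, not part of this patch: the base URL below assumes host-side TCP access to Docker Model Runner is enabled on port `12434` (check your Docker Desktop settings), and `ai/smollm2` is simply the model the page's own example pulls.

```python
import json
from urllib import request

# Assumed endpoint: host-side TCP access enabled on port 12434.
# From inside a container, a different base URL applies; see the page itself.
URL = "http://localhost:12434/engines/v1/chat/completions"


def build_chat_request(model: str, prompt: str) -> bytes:
    """Build an OpenAI-compatible chat completion request body."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return json.dumps(payload).encode("utf-8")


def send(body: bytes) -> dict:
    """POST the request; requires a running Docker Model Runner instance."""
    req = request.Request(
        URL, data=body, headers={"Content-Type": "application/json"}
    )
    with request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Build the request only; call send() when Model Runner is running.
    body = build_chat_request("ai/smollm2", "Say hello in one word.")
    print(body.decode("utf-8"))
```

Because the API surface is OpenAI-compatible, existing OpenAI client libraries should also work once pointed at the Model Runner base URL.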