Commit ad5f1e8

Add Docker Model Runner
1 parent 12698a8 commit ad5f1e8

File tree

2 files changed: +15 -1 lines changed


content/guides/genai-pdf-bot/containerize.md

Lines changed: 1 addition & 1 deletion
@@ -12,7 +12,7 @@ aliases:
  > [!NOTE]
  >
- > GenAI applications can often benefit from GPU acceleration. Currently Docker Desktop supports GPU acceleration only on [Windows with the WSL2 backend](/manuals/desktop/features/gpu.md#using-nvidia-gpus-with-wsl2). Linux users can also access GPU acceleration using a native installation of the [Docker Engine](/manuals/engine/install/_index.md).
+ > GenAI applications can often benefit from GPU acceleration. Currently, Docker Desktop supports GPU acceleration only on [Windows with the WSL2 backend](/manuals/desktop/features/gpu.md#using-nvidia-gpus-with-wsl2). Linux users can also access GPU acceleration using a native installation of the [Docker Engine](/manuals/engine/install/_index.md). On Mac, you can use [Docker Model Runner](https://docs.docker.com/desktop/features/model-runner/) to run models natively.

  - You have installed the latest version of [Docker Desktop](/get-started/get-docker.md) or, if you are a Linux user and are planning to use GPU acceleration, [Docker Engine](/manuals/engine/install/_index.md). Docker adds new features regularly and some parts of this guide may work only with the latest version of Docker Desktop.
  - You have a [git client](https://git-scm.com/downloads). The examples in this section use a command-line based git client, but you can use any client.
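For readers skimming the updated note, here is a minimal sketch of the two paths it describes, assuming the official `ollama/ollama` image and its default port 11434 for the GPU-accelerated cases, and Docker Desktop 4.40+ on an Apple silicon Mac for the Model Runner case; the model name is only illustrative:

```console
# Windows (WSL2 backend) or native Linux Docker Engine:
# pass the GPU through to the Ollama container
$ docker run -d --gpus=all -p 11434:11434 --name ollama ollama/ollama

# macOS: run the model natively with Docker Model Runner instead
$ docker model pull ai/llama3.3
```

Since containers on macOS do not get GPU acceleration, the note points Mac users at Model Runner's native execution rather than an accelerated container.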

content/guides/genai-pdf-bot/develop.md

Lines changed: 14 additions & 0 deletions
@@ -201,6 +201,20 @@ To run Ollama outside of a container:
  $ ollama pull llama2
  ```

+ {{< /tab >}}
+ {{< tab name="Use Docker Model Runner" >}}
+
+ Docker Model Runner exposes an Ollama-compatible API.
+
+ 1. Make sure your OS and Docker Desktop support [Docker Model Runner](https://docs.docker.com/desktop/features/model-runner/).
+    Docker Model Runner was initially released for Apple silicon Macs in Docker Desktop 4.40.
+ 2. Download the model of your choice:
+    ```console
+    $ docker model pull ai/llama3.3
+    ```
+ 3. Update the `OLLAMA_BASE_URL` value in your `.env` file to
+    `http://model-runner.docker.internal`.
+
  {{< /tab >}}
  {{< tab name="Use OpenAI" >}}
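As a rough end-to-end check of the new tab's steps, one possible sequence is sketched below, assuming Docker Model Runner is already enabled in Docker Desktop; `docker model list` is used here only to confirm the pull, and the `OLLAMA_BASE_URL` variable name comes from the guide itself:

```console
# Pull the model once; Model Runner serves it on demand afterwards
$ docker model pull ai/llama3.3

# Confirm the model is available locally
$ docker model list

# .env (excerpt): point the app at Model Runner instead of an Ollama container
# OLLAMA_BASE_URL=http://model-runner.docker.internal
```

Because the endpoint is Ollama-compatible, changing the base URL should be the only application-side change this diff requires.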
