Commit 8bc6a69

add Compose how-to page for Docker Model Runner support with Compose
Signed-off-by: Guillaume Lours <[email protected]>
1 parent 4a3f007 commit 8bc6a69

File tree: 1 file changed, 66 additions and 0 deletions
---
title: Using Docker Model Runner with Compose
description: Learn how to integrate Docker Model Runner with Docker Compose to build AI-powered applications
keywords: compose, docker compose, model runner, ai, llm, artificial intelligence, machine learning
weight: 20
---

## Using Docker Model Runner with Compose

Docker Model Runner can be integrated with Docker Compose to run AI models as part of your multi-container applications.
This allows you to define and run AI-powered applications alongside your other services.

### Prerequisites

- Docker Compose v2.35 or later
- Docker Desktop 4.41 or later
- Docker Model Runner enabled in Docker Desktop
- A Mac with Apple Silicon (Docker Model Runner is currently only available on Macs with Apple Silicon)

### Enabling Docker Model Runner

Before you can use Docker Model Runner with Compose, you need to enable it in Docker Desktop, as described in the [Docker Model Runner documentation](/manuals/desktop/features/model-runner/).

### Provider services

Compose introduces a new service type called `provider` that allows you to declare platform capabilities required by your application. For AI models, you can use the `model` type to declare model dependencies.

Here's an example of how to define a model provider:

```yaml
services:
  chat:
    image: my-chat-app
    depends_on:
      - ai-runner

  ai-runner:
    provider:
      type: model
      options:
        model: ai/smollm2
```

Notice the dedicated `provider` attribute in the `ai-runner` service.
This attribute specifies that the service is a model provider and lets you define options such as the name of the model to use.

There is also a `depends_on` attribute in the `chat` service.
This attribute specifies that the `chat` service depends on the `ai-runner` service, which means the `ai-runner` service starts before the `chat` service so that model information can be injected into the `chat` service.

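The injected model information arrives as environment variables named after the provider service. As a small illustrative sketch, the helper below derives those names; the uppercasing rule is an assumption inferred from the `ai-runner` example on this page, not a documented guarantee.

```python
def provider_env_names(provider_service: str) -> tuple[str, str]:
    # Compose injects model details into dependent services as
    # environment variables prefixed with the provider service's
    # name. The uppercase transform is inferred from the example
    # on this page (ai-runner -> AI-RUNNER_URL / AI-RUNNER_MODEL).
    prefix = provider_service.upper()
    return f"{prefix}_URL", f"{prefix}_MODEL"
```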
### How it works

During the `docker compose up` process, Docker Model Runner automatically pulls and runs the specified model.
It also sends Compose the model tag name and the URL at which the model runner can be reached.

This information is then passed to any service that declares a dependency on the model provider.
In the example above, the `chat` service receives two environment variables prefixed by the provider service's name:

- `AI-RUNNER_URL`, with the URL to access the model runner
- `AI-RUNNER_MODEL`, with the model name, which can be passed along with the URL when requesting the model

This allows the `chat` service to interact with the model and use it for its own purposes.

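As a rough sketch of how the `chat` service might consume these variables, the snippet below builds a request to the model runner from `AI-RUNNER_URL` and `AI-RUNNER_MODEL`. The `/chat/completions` path and the OpenAI-style request body are assumptions about the model runner's API, not something this page specifies; check the Model Runner documentation for the actual endpoint.

```python
import json
import os
import urllib.request


def build_chat_request(prompt: str) -> urllib.request.Request:
    """Build a request for the model runner from the variables
    Compose injects. The chat-completions endpoint path and JSON
    shape are assumptions, shown for illustration only."""
    base_url = os.environ["AI-RUNNER_URL"].rstrip("/")
    model = os.environ["AI-RUNNER_MODEL"]
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
```

Because the variable names are derived from the provider service's name, renaming `ai-runner` in the Compose file changes the variables the `chat` service must read.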
64+
For more information, see:
65+
- [Docker Model Runner documentation](/manuals/desktop/features/model-runner/)
66+
