---
title: Define AI models in Docker Compose applications
linkTitle: Use AI models in Compose
description: Learn how to define and use AI models in Docker Compose applications using the models top-level element
keywords: compose, docker compose, models, ai, machine learning, cloud providers, specification
weight: 10
params:
  sidebar:
    badge:
      color: green
      text: New
---

{{< summary-bar feature_name="Compose models" >}}

Compose lets you define AI models as core components of your application, so you can declare model dependencies alongside services and run the application on any platform that supports the Compose Specification.

## Prerequisites

- Docker Compose v2.38 or later
- A platform that supports Compose models, such as Docker Model Runner or compatible cloud providers

## What are Compose models?

Compose `models` are a standardized way to define AI model dependencies in your application. By using the [`models` top-level element](/reference/compose-file/models.md) in your Compose file, you can:

- Declare which AI models your application needs
- Specify model configurations and requirements
- Make your application portable across different platforms
- Let the platform handle model provisioning and lifecycle management

## Basic model definition

To define models in your Compose application, use the `models` top-level element:

```yaml
services:
  chat-app:
    image: my-chat-app
    models:
      - llm

models:
  llm:
    model: ai/smollm2
```

This example defines:
- A service called `chat-app` that uses a model named `llm`
- A model definition for `llm` that references the `ai/smollm2` model image

## Model configuration options

Models support various configuration options:

```yaml
models:
  llm:
    model: ai/smollm2
    context_size: 1024
    runtime_flags:
      - "--a-flag"
      - "--another-flag=42"
```

Common configuration options include:
- `model` (required): The OCI artifact identifier for the model. This is what Compose pulls and runs via the model runner.
- `context_size`: Defines the maximum token context size for the model.
- `runtime_flags`: A list of raw command-line flags passed to the inference engine when the model is started.
- Platform-specific options may also be available via extension attributes (`x-*`).

## Service model binding

Services can reference models in two ways: short syntax and long syntax.

### Short syntax

The short syntax is the simplest way to bind a model to a service:

```yaml
services:
  app:
    image: my-app
    models:
      - llm
      - embedding-model

models:
  llm:
    model: ai/smollm2
  embedding-model:
    model: ai/all-minilm
```

With short syntax, the platform automatically generates environment variables based on the model name:
- `LLM_URL` - URL to access the llm model
- `LLM_MODEL` - Model identifier for the llm model
- `EMBEDDING_MODEL_URL` - URL to access the embedding-model
- `EMBEDDING_MODEL_MODEL` - Model identifier for the embedding-model
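
For example, a Python service can read the generated variables and call the model. The following is a minimal sketch, not platform documentation: it assumes the URL injected in `LLM_URL` points at an OpenAI-compatible `chat/completions` endpoint, as Docker Model Runner provides; the exact route and response shape can vary by platform.

```python
import json
import os
import urllib.request

# Injected automatically by the platform (short syntax, model named "llm")
base_url = os.environ["LLM_URL"]    # endpoint URL for the model
model_id = os.environ["LLM_MODEL"]  # model identifier, e.g. "ai/smollm2"

# Assumption: the endpoint serves an OpenAI-compatible /chat/completions route
req = urllib.request.Request(
    base_url.rstrip("/") + "/chat/completions",
    data=json.dumps({
        "model": model_id,
        "messages": [{"role": "user", "content": "Hello!"}],
    }).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    reply = json.load(resp)

# Assumption: OpenAI-style response shape
print(reply["choices"][0]["message"]["content"])
```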

### Long syntax

The long syntax allows you to customize environment variable names:

```yaml
services:
  app:
    image: my-app
    models:
      llm:
        endpoint_var: AI_MODEL_URL
        model_var: AI_MODEL_NAME
      embedding-model:
        endpoint_var: EMBEDDING_URL
        model_var: EMBEDDING_NAME

models:
  llm:
    model: ai/smollm2
  embedding-model:
    model: ai/all-minilm
```

With this configuration, your service receives:
- `AI_MODEL_URL` and `AI_MODEL_NAME` for the LLM model
- `EMBEDDING_URL` and `EMBEDDING_NAME` for the embedding model
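
Your application code then reads the names you chose rather than the generated ones; a brief sketch in the same vein:

```python
import os

# These names match the endpoint_var and model_var values declared above
llm_url = os.environ["AI_MODEL_URL"]
llm_model = os.environ["AI_MODEL_NAME"]
embedding_url = os.environ["EMBEDDING_URL"]
embedding_model = os.environ["EMBEDDING_NAME"]
```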

## Platform portability

One of the key benefits of using Compose models is portability across different platforms that support the Compose Specification.

### Docker Model Runner

When Docker Model Runner is enabled:

```yaml
services:
  chat-app:
    image: my-chat-app
    models:
      - llm

models:
  llm:
    model: ai/smollm2
```

Docker Model Runner will:
- Pull and run the specified model locally
- Provide endpoint URLs for accessing the model
- Inject environment variables into the service

### Cloud providers

The same Compose file can run on cloud providers that support Compose models:

```yaml
services:
  chat-app:
    image: my-chat-app
    models:
      - llm

models:
  llm:
    model: ai/smollm2
    # Cloud-specific configurations
    labels:
      - "cloud.instance-type=gpu-small"
      - "cloud.region=us-west-2"
```

Cloud providers might:
- Use managed AI services instead of running models locally
- Apply cloud-specific optimizations and scaling
- Provide additional monitoring and logging capabilities
- Handle model versioning and updates automatically

## Reference

- [`models` top-level element](/reference/compose-file/models.md)
- [`models` attribute](/reference/compose-file/services.md#models)
- [Docker Model Runner documentation](/manuals/ai/model-runner.md)
- [Compose Model Runner documentation](/manuals/compose/how-tos/model-runner.md)