`content/manuals/ai/compose/models-and-compose.md` (4 additions, 4 deletions)
```diff
@@ -81,8 +81,8 @@ Common configuration options include:
 > possible for your use case.
 
 - `runtime_flags`: A list of raw command-line flags passed to the inference engine when the model is started.
-  For example, if If you use llama.cpp, you can pass any of [the available parameters](https://github.com/ggml-org/llama.cpp/blob/master/tools/server/README.md).
-- Platform-specific options may also be available via extensions attributes `x-*`
+  For example, if you use llama.cpp, you can pass any of [the available parameters](https://github.com/ggml-org/llama.cpp/blob/master/tools/server/README.md).
+- Platform-specific options may also be available via extension attributes `x-*`
 
 ## Service model binding
 
```
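In the corrected passage, `runtime_flags` is a list of raw command-line flags handed to the inference engine when the model starts. As a minimal sketch of how that could look in a Compose file, assuming the `models` top-level element this page documents; the model reference `ai/smollm2` and the llama.cpp flags shown are illustrative, not taken from the diff:

```yaml
# Hypothetical compose.yaml -- names and values are examples only
services:
  app:
    image: my-app
    models:
      - llm                # bind the model defined below to this service

models:
  llm:
    model: ai/smollm2      # illustrative model reference
    runtime_flags:         # raw flags passed through to the inference engine
      - "--ctx-size"       # llama.cpp server flag: prompt context size
      - "8192"
      - "--temp"           # llama.cpp server flag: sampling temperature
      - "0.2"
```

Because `runtime_flags` is passed through verbatim, any flag accepted by the engine's server binary should work, which is why the docs link directly to the llama.cpp server parameter list.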
```diff
@@ -170,11 +170,11 @@ Docker Model Runner will:
 - Provide endpoint URLs for accessing the model
 - Inject environment variables into the service
 
-#### Alternative configuration with Provider services
+#### Alternative configuration with provider services
 
 > [!TIP]
 >
-> This approach is deprecated. Use the [`models` top-level element](#use-models-definition) instead.
+> This approach is deprecated. Use the [`models` top-level element](#basic-model-definition) instead.
 
 You can also use the `provider` service type, which allows you to declare platform capabilities required by your application.
 For AI models, you can use the `model` type to declare model dependencies.
```
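As a sketch of the deprecated alternative this hunk renames, assuming the `provider` service type with `type: model` named in the text; the service names, the `options` key, and the model reference are illustrative:

```yaml
# Hypothetical compose.yaml using the deprecated provider-service approach
services:
  chat:
    image: my-chat-app
    depends_on:
      - ai_runner          # start the model provider before the app

  ai_runner:
    provider:
      type: model          # declare an AI model dependency
      options:
        model: ai/smollm2  # illustrative model reference
```

Per the bullets above, Docker Model Runner would then inject the endpoint URL and model name into the dependent service's environment, which is the same outcome the recommended `models` top-level element provides.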