Commit 347716c

fixes

1 parent 1e8dba8

File tree

1 file changed, 4 insertions(+), 4 deletions(-)

content/manuals/ai/compose/models-and-compose.md

Lines changed: 4 additions & 4 deletions
@@ -81,8 +81,8 @@ Common configuration options include:
 > possible for your use case.
 
 - `runtime_flags`: A list of raw command-line flags passed to the inference engine when the model is started.
-  For example, if If you use llama.cpp, you can pass any of [the available parameters](https://github.com/ggml-org/llama.cpp/blob/master/tools/server/README.md).
-- Platform-specific options may also be available via extensions attributes `x-*`
+  For example, if you use llama.cpp, you can pass any of [the available parameters](https://github.com/ggml-org/llama.cpp/blob/master/tools/server/README.md).
+- Platform-specific options may also be available via extension attributes `x-*`
 
 ## Service model binding
 
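For context on the `runtime_flags` wording this hunk corrects, here is a minimal sketch of how the option sits under the `models` top-level element in a Compose file; the model reference and the llama.cpp flag values are illustrative assumptions, not part of this commit:

```yaml
# Minimal compose.yaml sketch (illustrative; names and values are assumptions).
models:
  llm:
    model: ai/smollm2        # example model reference
    runtime_flags:           # raw flags forwarded to the inference engine
      - "--ctx-size"         # llama.cpp server flag: context window size
      - "8192"
```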
@@ -170,11 +170,11 @@ Docker Model Runner will:
 - Provide endpoint URLs for accessing the model
 - Inject environment variables into the service
 
-#### Alternative configuration with Provider services
+#### Alternative configuration with provider services
 
 > [!TIP]
 >
-> This approach is deprecated. Use the [`models` top-level element](#use-models-definition) instead.
+> This approach is deprecated. Use the [`models` top-level element](#basic-model-definition) instead.
 
 You can also use the `provider` service type, which allows you to declare platform capabilities required by your application.
 For AI models, you can use the `model` type to declare model dependencies.
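For reference, a sketch of the deprecated `provider` service binding this hunk documents; the service names and model reference are hypothetical:

```yaml
# Sketch of the deprecated provider-service approach (hypothetical names).
services:
  my-app:
    image: my-app            # hypothetical application image
    depends_on:
      - llm
  llm:
    provider:
      type: model            # declares a model capability requirement
      options:
        model: ai/smollm2    # example model reference
```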
