
Commit 823c9cd

fix documentation

Parent: 29a0ccf

1 file changed: docs/resources/inference_custom_model.md (+6, -6)
@@ -7,7 +7,7 @@ page_title: "Scaleway: scaleway_inference_deployment"
 
 The scaleway_inference_custom_model resource allows you to upload and manage custom inference models in the Scaleway Inference ecosystem. Once registered, a custom model can be used in an scaleway_inference_deployment resource.
 
-## Exemple Usage
+## Example Usage
 
 ### Basic
 
@@ -58,8 +58,8 @@ resource "scaleway_inference_deployment" "main" {
 - `parameter_size_bits` - Size, in bits, of the model parameters.
 - `size_bits` - Total size, in bytes, of the model archive.
 - `nodes_support` - List of supported node types and their quantization options. Each entry contains:
-- `node_type_name` - The type of node supported.
-- `quantization` - A list of supported quantization options, including:
-- `quantization_bits` - Number of bits used for quantization (e.g., 8, 16).
-- `allowed` - Whether this quantization is allowed.
-- `max_context_size` - Maximum context length supported by this quantization.
+  - `node_type_name` - The type of node supported.
+  - `quantization` - A list of supported quantization options, including:
+    - `quantization_bits` - Number of bits used for quantization (e.g., 8, 16).
+    - `allowed` - Whether this quantization is allowed.
+    - `max_context_size` - Maximum context length supported by this quantization.
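
For context, here is a minimal usage sketch of the pairing described in the first hunk (register a custom model, then consume it from a deployment). It is an illustration only, not taken from the file being changed: the argument names `url`, `model_id`, and `node_type_name`, and the node type value, are assumptions to be checked against the provider's published schema.

```hcl
# Hypothetical sketch: the argument names below (url, model_id, node_type_name)
# are assumptions and may differ from the schema documented in
# docs/resources/inference_custom_model.md.

resource "scaleway_inference_custom_model" "my_model" {
  name = "my-custom-model"
  # Assumed: URL pointing at the model archive to upload.
  url = "https://example.com/models/my-model.tar"
}

resource "scaleway_inference_deployment" "main" {
  name = "my-deployment"
  # Assumed: the deployment references the registered custom model by ID.
  model_id = scaleway_inference_custom_model.my_model.id
  # Assumed node type name; pick one reported in the model's nodes_support.
  node_type_name = "H100"
}
```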

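Similarly, a hedged sketch of how the `nodes_support` attribute documented in the second hunk could be inspected from Terraform; the commented shape and values are illustrative only, inferred from the attribute list above.

```hcl
# Expose the computed nodes_support attribute so `terraform output` shows which
# node types and quantization options the uploaded model supports.
output "my_model_nodes_support" {
  value = scaleway_inference_custom_model.my_model.nodes_support
}

# Illustrative shape only, inferred from the attribute list in this diff:
# nodes_support = [
#   {
#     node_type_name = "H100"
#     quantization = [
#       { quantization_bits = 16, allowed = true, max_context_size = 32768 },
#       { quantization_bits = 8,  allowed = true, max_context_size = 32768 },
#     ]
#   },
# ]
```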