The `scaleway_inference_model` resource allows you to upload and manage custom inference models in the Scaleway Inference ecosystem. Once registered, a model can be used in any `scaleway_inference_deployment` resource.
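A minimal usage sketch, reusing the example Hugging Face repository cited in the argument list below (the resource label and `name` value are placeholders):

```terraform
resource "scaleway_inference_model" "deepcoder" {
  # Unique name within the project (placeholder; any unique string works).
  name = "agentica-org/DeepCoder-14B-Preview"

  # Publicly accessible source URL for the model.
  url = "https://huggingface.co/agentica-org/DeepCoder-14B-Preview"
}
```

The resulting model can then be referenced from a `scaleway_inference_deployment` resource.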
- `name` - (Required) The name of the model. This must be unique within the project.
- `url` - (Required) The HTTPS source URL from which the model will be downloaded. This is typically a Hugging Face repository URL (e.g., https://huggingface.co/agentica-org/DeepCoder-14B-Preview). The URL must be publicly accessible, or valid credentials must be provided via `secret`.
- `secret` - (Optional, Sensitive) Authentication token used to pull the model from a private or gated URL (e.g., a Hugging Face access token with read permission); see the example after this list.
- `region` - (Defaults to [provider](../index.md#region) `region`) The [region](../guides/regions_and_zones.md#regions) in which the model is created.
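For a private or gated repository, the token supplied through `secret` would normally come from a sensitive variable rather than being hard-coded. A sketch under that assumption (the variable name, model name, and repository URL are illustrative):

```terraform
# Hugging Face access token with read permission, provided at plan/apply time.
variable "hf_read_token" {
  type      = string
  sensitive = true
}

resource "scaleway_inference_model" "gated" {
  name   = "my-gated-model"
  url    = "https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct"
  secret = var.hf_read_token
}
```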
## Import
Models can be imported using `{region}/{id}`, as shown below:
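For example, with a model declared as `scaleway_inference_model.my_model` in the `fr-par` region (the resource address, region, and UUID below are placeholders):

```bash
terraform import scaleway_inference_model.my_model fr-par/11111111-1111-1111-1111-111111111111
```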