Commit 62a3d3b (1 parent: 5263078)

Update pages/managed-inference/faq.mdx

File tree: 1 file changed (+1, -1 lines)


pages/managed-inference/faq.mdx

Lines changed: 1 addition & 1 deletion
@@ -60,7 +60,7 @@ You can select the Instance type based on your model’s computational needs and
 Billing is based on the Instance type and usage duration. Unlike [Generative APIs](/generative-apis/quickstart/), which are billed per token, Managed Inference provides predictable costs based on the allocated infrastructure.
 Pricing details can be found on the [Scaleway pricing page](https://www.scaleway.com/en/pricing/model-as-a-service/#managed-inference).
 
-## Can I pause Managed Inference billing when the Instance is not in use?
+## Can I pause Managed Inference billing when the instance is not in use?
 When a Managed Inference deployment is running, corresponding resources are provisioned and thus billed. Resources can therefore not be paused.
 However, you can still optimize your Managed Inference deployment to fit within specific time ranges (such as during working hours). To do so, you can automate deployment creation and deletion using the [Managed Inference API](https://www.scaleway.com/en/developers/api/inference/), [Terraform](https://registry.terraform.io/providers/scaleway/scaleway/latest/docs/resources/inference_deployment) or [Scaleway SDKs](https://www.scaleway.com/en/docs/scaleway-sdk/). These actions can be programmed using [Serverless Jobs](/serverless-jobs/) to be automatically carried out periodically.
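The Terraform-based automation mentioned in the FAQ answer above could be sketched as follows. This is a minimal illustration using the `scaleway_inference_deployment` resource from the linked provider documentation; the attribute names and values shown (`node_type`, `model_name`, the model identifier, the region) are illustrative placeholders and may differ across provider versions, so check the resource docs before use.

```hcl
# Sketch: a Managed Inference deployment defined in Terraform.
# Applying this config creates (and bills for) the deployment;
# destroying it deletes the deployment and stops billing, which is
# how creation/deletion can be scheduled (e.g. from a Serverless Job).
resource "scaleway_inference_deployment" "working_hours" {
  name       = "faq-example-deployment"          # illustrative name
  node_type  = "L4"                              # GPU Instance type, per model needs
  model_name = "meta/llama-3.1-8b-instruct:fp8"  # placeholder model identifier
  region     = "fr-par"
}
```

In this pattern, a scheduled job runs `terraform apply` at the start of the working window and `terraform destroy` at the end, so the deployment only exists (and is only billed) inside that time range.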
