
Commit d6167b5

bene2k1, jcirinosclwy, and RoRoJ authored
Apply suggestions from code review
Co-authored-by: Jessica <[email protected]>
Co-authored-by: Rowena Jones <[email protected]>
1 parent 6e726be commit d6167b5

File tree

1 file changed: +2 -2 lines changed


faq/managed-inference.mdx

Lines changed: 2 additions & 2 deletions
```diff
@@ -26,7 +26,7 @@ All models are currently hosted in a secure data center located in Paris, France
 You can find detailed information regarding the policies applied to Scaleway's AI services in our [Data, privacy, and security for Scaleway's AI services](/managed-inference/reference-content/data-privacy-security-scaleway-ai-services/) documentation.
 
 ## Is Managed Inference compatible with Open AI APIs?
-Managed Inference aims to achieve seamless compatibility with OpenAI APIs. You can detailed information in the following documentation: [Scaleway Managed Inference as drop-in replacement for the OpenAI APIs](/managed-inference/reference-content/openai-compatibility/).
+Managed Inference aims to achieve seamless compatibility with OpenAI APIs. Find detailed information in the [Scaleway Managed Inference as drop-in replacement for the OpenAI APIs](/managed-inference/reference-content/openai-compatibility/) documentation.
 
 ## What are the SLAs applicable to Managed Inference?
 We are currently working on defining our SLAs for Managed Inference. We will provide more information on this topic soon.
```
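The OpenAI compatibility referenced in the hunk above means an existing OpenAI client can usually target a Managed Inference deployment simply by swapping the base URL and API key. Below is a minimal sketch in Python, assuming the official `openai` client library; the endpoint URL and model name are hypothetical placeholders for your own deployment's values.

```python
# Minimal sketch: pointing the OpenAI Python client at a Managed Inference
# deployment. The base_url and model below are hypothetical placeholders;
# use the endpoint and model name shown for your deployment in the console.
from openai import OpenAI

client = OpenAI(
    base_url="https://<your-deployment-id>.ifr.fr-par.scaleway.com/v1",  # placeholder endpoint
    api_key="<your-scaleway-iam-api-key>",  # a Scaleway IAM key, not an OpenAI key
)

response = client.chat.completions.create(
    model="<deployed-model-name>",  # the model chosen at deployment time
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```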
```diff
@@ -41,7 +41,7 @@ You can deploy a variety of models, including:
 * Image processing models
 * Audio recognition models
 * Custom AI models (through API only yet)
-Managed Inference supports both open-source models and proprietary models that you upload.
+Managed Inference supports both open-source models and your own uploaded proprietary models.
 
 ## How do I deploy a model using Managed Inference?
 Deployment is done through Scaleway's [console](https://console.scaleway.com/inference/deployments) or [API](https://www.scaleway.com/en/developers/api/inference/). You can choose a model from Scaleway’s selection or import your own directly from Hugging Face's repositories, configure [Instance types](/gpu/reference-content/choosing-gpu-instance-type/), set up networking options, and start inference with minimal setup.
```
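Since deployment can also be driven through the API linked in the hunk above, here is a hypothetical sketch of a create-deployment request. The exact path, version, and payload field names are assumptions; the Inference API reference linked in the diff is the authoritative schema. The `X-Auth-Token` header carrying an IAM secret key follows Scaleway's standard API authentication pattern.

```python
# Hypothetical sketch: creating a Managed Inference deployment via the
# Scaleway API. The path, version, and payload field names are assumptions;
# check https://www.scaleway.com/en/developers/api/inference/ for the
# real schema before relying on this.
import os
import requests

API_URL = "https://api.scaleway.com/inference/v1/regions/fr-par/deployments"  # assumed path

payload = {
    "name": "my-llm-deployment",                 # illustrative deployment name
    "project_id": os.environ["SCW_PROJECT_ID"],  # your Scaleway project
    "model_name": "<model-from-catalog>",        # or a model imported from Hugging Face
    "node_type_name": "<gpu-instance-type>",     # see the Instance types guide linked above
}

resp = requests.post(
    API_URL,
    json=payload,
    headers={"X-Auth-Token": os.environ["SCW_SECRET_KEY"]},  # Scaleway IAM secret key
)
resp.raise_for_status()
print(resp.json())
```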
