3 changes: 3 additions & 0 deletions faq/managed-inference.mdx
@@ -35,6 +35,9 @@ We are currently working on defining our SLAs for Managed Inference. We will pro
Managed Inference provides dedicated resources, ensuring predictable performance and lower latency compared to Generative APIs, which are a shared, serverless offering optimized for infrequent traffic with moderate peak loads. Managed Inference is ideal for workloads that require consistent response times, high availability, or custom hardware configurations, or that generate extreme peak loads within a narrow time window.
Compared to Generative APIs, no usage quota is applied to the number of tokens generated per second, since throughput is limited only by the size and number of GPU Instances in your Managed Inference deployment.

## How can I monitor performance?
Managed Inference metrics and logs are available in [Scaleway Cockpit](https://console.scaleway.com/cockpit/overview). You can follow your deployment metrics in real time, including token throughput, request latency, GPU power usage, and GPU VRAM usage.
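For programmatic checks outside the Cockpit dashboards, a script can poll a Prometheus-compatible query endpoint. The sketch below is a minimal illustration under stated assumptions: the endpoint URL, metric name, token variable, and authentication header are hypothetical placeholders, not the actual Cockpit API, so refer to the Cockpit documentation for the real data source details.

```python
# Minimal sketch: poll a deployment metric from a Prometheus-compatible
# query API. The URL, metric name, and auth header are assumptions for
# illustration; replace them with the values from your Cockpit data source.
import os

import requests

QUERY_URL = "https://metrics.cockpit.example/prometheus/api/v1/query"  # placeholder endpoint
TOKEN = os.environ["COCKPIT_TOKEN"]  # read-access token (assumed variable name)

resp = requests.get(
    QUERY_URL,
    params={"query": "inference_token_throughput"},  # hypothetical metric name
    headers={"Authorization": f"Bearer {TOKEN}"},    # assumed auth scheme
    timeout=10,
)
resp.raise_for_status()

# A Prometheus instant query returns {"data": {"result": [...]}}.
for series in resp.json()["data"]["result"]:
    print(series["metric"], series["value"])
```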

## What types of models can I deploy with Managed Inference?
You can deploy a variety of models, including:
* Large language models (LLMs)