diff --git a/faq/managed-inference.mdx b/faq/managed-inference.mdx
index e66d2e3c6c..03594e72ba 100644
--- a/faq/managed-inference.mdx
+++ b/faq/managed-inference.mdx
@@ -35,6 +35,9 @@ We are currently working on defining our SLAs for Managed Inference. We will pro
 
 Managed Inference provides dedicated resources, ensuring predictable performance and lower latency compared to Generative APIs, which are a shared, serverless offering optimized for infrequent traffic with moderate peak loads. Managed Inference is ideal for workloads that require consistent response times, high availability, custom hardware configurations or generate extreme peak loads during a narrow period of time. Compared to Generative APIs, no usage quota is applied to the number of tokens per second generated, since the output is limited by the GPU Instances size and number of your Managed Inference Deployment.
 
+## How can I monitor performance?
+Managed Inference metrics and logs are available in [Scaleway Cockpit](https://console.scaleway.com/cockpit/overview). You can follow your deployment metrics in real time, including token throughput, request latency, GPU power usage, and GPU VRAM usage.
+
 ## What types of models can I deploy with Managed Inference?
 You can deploy a variety of models, including:
 * Large language models (LLMs)