diff --git a/src/content/docs/guides/version-specific/enterprise/model-hosting.md b/src/content/docs/guides/version-specific/enterprise/model-hosting.md
index a49ce9d..1599587 100644
--- a/src/content/docs/guides/version-specific/enterprise/model-hosting.md
+++ b/src/content/docs/guides/version-specific/enterprise/model-hosting.md
@@ -3,7 +3,7 @@ title: Enterprise Refact Edition - Model Hosting
 description: What Enterprise Refact is and how it works.
 ---
 
-Refact Enterprise Refact is a version that is optimized for enterprise use cases. It allows you to use all of the models avaliable in Refact.ai Self-hosted and also supports vLLM models.
+Refact Enterprise is a version that is optimized for enterprise use cases. It allows you to use all of the models available in Refact.ai Self-hosted and also supports vLLM models.
 
 ### Enabling vLLM
 
@@ -12,5 +12,5 @@ With the enterprise version of Refact, you can use an inference engine that uses
 To enable vLLM select one the available vLLM models in the **Model Hosting** page. The full list of available models can be found on the [Supported Models page](https://docs.refact.ai/supported-models/).
 
 :::note
-vLLM models are suitable for a fast inference. The limiatation with the vLLM models is that they **don't support sharding**.
+vLLM models are suitable for fast inference. The limitation of vLLM models is that they **don't support sharding**.
 :::
\ No newline at end of file