
Commit 22f3cbb

docs(infr): remove dedicated model pages (#4877)
* docs(infr): remove dedicated model pages
* Apply suggestions from code review
* docs(infr): fix broken links
* docs(infr): fix 404
1 parent 382792d · commit 22f3cbb

29 files changed (+30, -1910 lines)

changelog/february2025/2025-02-14-managed-inference-added-new-model-preview-deepseek-r1-d.mdx

Lines changed: 1 addition & 1 deletion
@@ -9,7 +9,7 @@ category: ai-data
 product: managed-inference
 ---

-[DeepSeek R1 Distilled Llama 70B](/managed-inference/reference-content/deepseek-r1-distill-llama-70b/) is now available on Managed Inference.
+[DeepSeek R1 Distilled Llama 70B](/managed-inference/reference-content/model-catalog/#deepseek-r1-distill-llama-70b) is now available on Managed Inference.

 DeepSeek R1 Distilled Llama improves Llama model performance on reasoning use cases like mathematics or code.

changelog/september2024/2024-09-05-managed-inference-added-model-library-expanded.mdx

Lines changed: 1 addition & 1 deletion
@@ -9,7 +9,7 @@ category: ai-data
 product: managed-inference
 ---

-[Meta Llama 3.1 8b](/managed-inference/reference-content/llama-3.1-8b-instruct/), [Meta Llama 3.1 70b](/managed-inference/reference-content/llama-3.1-70b-instruct/) and [Mistral Nemo](/managed-inference/reference-content/mistral-nemo-instruct-2407/) are available for deployment on Managed Inference.
+[Meta Llama 3.1 8b](/managed-inference/reference-content/model-catalog/#llama-31-8b-instruct), [Meta Llama 3.1 70b](/managed-inference/reference-content/model-catalog/llama-31-70b-instruct) and [Mistral Nemo](/managed-inference/reference-content/model-catalog/#mistral-nemo-instruct-2407) are available for deployment on Managed Inference.

 Released July 2024, these models all support a very large context window of up to 128k tokens, particularly useful for RAG applications.

menu/navigation.json

Lines changed: 0 additions & 72 deletions
@@ -883,78 +883,6 @@
       {
         "label": "Managed Inference model catalog",
         "slug": "model-catalog"
-      },
-      {
-        "label": "BGE-Multilingual-Gemma2 model",
-        "slug": "bge-multilingual-gemma2"
-      },
-      {
-        "label": "Llama-3-8b-instruct model",
-        "slug": "llama-3-8b-instruct"
-      },
-      {
-        "label": "Llama-3-70b-instruct model",
-        "slug": "llama-3-70b-instruct"
-      },
-      {
-        "label": "Llama-3.1-8b-instruct model",
-        "slug": "llama-3.1-8b-instruct"
-      },
-      {
-        "label": "Llama-3.1-70b-instruct model",
-        "slug": "llama-3.1-70b-instruct"
-      },
-      {
-        "label": "Llama-3.1-nemotron-70b-instruct model",
-        "slug": "llama-3.1-nemotron-70b-instruct"
-      },
-      {
-        "label": "Llama-3.3-70b-instruct model",
-        "slug": "llama-3.3-70b-instruct"
-      },
-      {
-        "label": "DeepSeek-R1-Distill-Llama-70B model",
-        "slug": "deepseek-r1-distill-llama-70b"
-      },
-      {
-        "label": "DeepSeek-R1-Distill-Llama-8B model",
-        "slug": "deepseek-r1-distill-llama-8b"
-      },
-      {
-        "label": "Mistral-7b-instruct-v0.3 model",
-        "slug": "mistral-7b-instruct-v0.3"
-      },
-      {
-        "label": "Mistral-nemo-instruct-2407 model",
-        "slug": "mistral-nemo-instruct-2407"
-      },
-      {
-        "label": "Mixtral-8x7b-instruct-v0.1 model",
-        "slug": "mixtral-8x7b-instruct-v0.1"
-      },
-      {
-        "label": "Molmo-72b-0924 model",
-        "slug": "molmo-72b-0924"
-      },
-      {
-        "label": "Moshika-0.1-8b model",
-        "slug": "moshika-0.1-8b"
-      },
-      {
-        "label": "Moshiko-0.1-8b model",
-        "slug": "moshiko-0.1-8b"
-      },
-      {
-        "label": "Pixtral-12b-2409 model",
-        "slug": "pixtral-12b-2409"
-      },
-      {
-        "label": "Qwen2.5-coder-32b-instruct model",
-        "slug": "qwen2.5-coder-32b-instruct"
-      },
-      {
-        "label": "Sentence-t5-xxl model",
-        "slug": "sentence-t5-xxl"
       }
     ],
     "label": "Additional Content",

pages/generative-apis/faq.mdx

Lines changed: 1 addition & 1 deletion
@@ -55,7 +55,7 @@ Note that in this example, the first line where the free tier applies will not d
 ## What is a token and how are they counted?
 A token is the minimum unit of content that is seen and processed by a model. Hence, token definitions depend on input types:
 - For text, on average, `1` token corresponds to `~4` characters, and thus `0.75` words (as words are on average five characters long)
-- For images, `1` token corresponds to a square of pixels. For example, [pixtral-12b-2409 model](https://www.scaleway.com/en/docs/managed-inference/reference-content/pixtral-12b-2409/#frequently-asked-questions) image tokens of `16x16` pixels (16-pixel height, and 16-pixel width, hence `256` pixels in total).
+- For images, `1` token corresponds to a square of pixels. For example, `pixtral-12b-2409` model image tokens of `16x16` pixels (16-pixel height, and 16-pixel width, hence `256` pixels in total).

 The exact token count and definition depend on [tokenizers](https://huggingface.co/learn/llm-course/en/chapter2/4) used by each model. When this difference is significant (such as for image processing), you can find detailed information in each model documentation (for instance in [`pixtral-12b-2409` size limit documentation](https://www.scaleway.com/en/docs/managed-inference/reference-content/pixtral-12b-2409/#frequently-asked-questions)). Otherwise, when the model is open, you can find this information in the model files on platforms such as Hugging Face, usually in the `tokenizer_config.json` file.
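
The token arithmetic quoted in this hunk (about `4` characters per text token, one image token per `16x16`-pixel square) can be turned into a rough estimator. The sketch below is only an approximation under those stated averages, not the exact tokenizer behavior, and the function names are made up for illustration.

```ts
// Rough token estimation based only on the averages quoted in the FAQ above;
// real counts depend on each model's tokenizer (see its tokenizer_config.json).
const CHARS_PER_TEXT_TOKEN = 4;  // ~4 characters per text token on average
const IMAGE_TOKEN_EDGE_PX = 16;  // pixtral-12b-2409: one token per 16x16-pixel square

function estimateTextTokens(text: string): number {
  return Math.ceil(text.length / CHARS_PER_TEXT_TOKEN);
}

function estimateImageTokens(widthPx: number, heightPx: number): number {
  // Each 16x16 block (256 pixels) counts as one token.
  return Math.ceil(widthPx / IMAGE_TOKEN_EDGE_PX) * Math.ceil(heightPx / IMAGE_TOKEN_EDGE_PX);
}

console.log(estimateTextTokens("Hello, Managed Inference!")); // 25 characters -> ~7 tokens
console.log(estimateImageTokens(512, 512));                   // (512/16) * (512/16) = 1024 tokens
```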

pages/managed-inference/how-to/import-custom-model.mdx

Lines changed: 1 addition & 1 deletion
@@ -48,4 +48,4 @@ Scaleway provides a selection of common models for deployment from the Scaleway
 - Estimated cost.
 Once checked, click **Begin import** to finalize the process.

-Your imported model will now appear in the model library. You can proceed to [deploy your model on Managed Inference](/ai-data/managed-inference/how-to/create-deployment/).
+Your imported model will now appear in the model library. You can proceed to [deploy your model on Managed Inference](/managed-inference/how-to/create-deployment/).

pages/managed-inference/reference-content/bge-multilingual-gemma2.mdx

Lines changed: 0 additions & 70 deletions
This file was deleted.

pages/managed-inference/reference-content/deepseek-r1-distill-llama-70b.mdx

Lines changed: 0 additions & 82 deletions
This file was deleted.

pages/managed-inference/reference-content/deepseek-r1-distill-llama-8b.mdx

Lines changed: 0 additions & 83 deletions
This file was deleted.
