
Commit 4ee4c0b (1 parent: b8bd443)

Apply suggestions from code review

File tree: 1 file changed (+7, -7)

pages/generative-apis/faq.mdx

Lines changed: 7 additions & 7 deletions
@@ -21,9 +21,9 @@ Our Generative APIs support a range of popular models, including:
 
 ## How does the free tier work?
 The free tier allows you to process up to 1,000,000 tokens without incurring any costs. After reaching this limit, you will be charged per million tokens processed. Free tier usage is calculated by adding all input and output tokens consumed from all models used.
-For more information, refer to our [pricing page](https://www.scaleway.com/en/pricing/model-as-a-service/#generative-apis) or access your bills by token types and models in [billing section from Scaleway Console](https://console.scaleway.com/billing/payment) (past and previsional during the current month).
+For more information, refer to our [pricing page](https://www.scaleway.com/en/pricing/model-as-a-service/#generative-apis) or access your bills by token types and models in the [billing section of the Scaleway Console](https://console.scaleway.com/billing/payment) (past and provisional bills for the current month).
 
-Note that when your consumption exceeds free tier, you will be billed for each additional token consumed by model and token types. The minimum billing unit is 1 million tokens. Here are two examples for low volume consumptions:
+Note that when your consumption exceeds the free tier, you will be billed for each additional token consumed by model and token type. The minimum billing unit is 1 million tokens. Here are two examples for low volume consumption:
 
 Example 1: Free Tier only
 
@@ -50,14 +50,14 @@ Total tokens consumed: `900k`
 Total billed consumption: `6 million tokens`
 Total bill: `3.20€`
 
-Note that in this example, the first line where free tier applies will not display in your current Scaleway bills by model, but will instead be listed under `Generative APIs Free Tier - First 1M tokens for free`.
+Note that in this example, the first line where the free tier applies will not display in your current Scaleway bills by model but will instead be listed under `Generative APIs Free Tier - First 1M tokens for free`.
 
 ## What is a token and how are they counted?
-A token is the minimum unit of content that is seen and processed by a model. Hence token definitions depends on input types:
-- For text, on average, `1` token corresponds to `~4` characters, and thus `0.75` words (as words are on average 5 characters long)
-- For images, `1` token corresponds to a square of pixels. For example, [pixtral-12b-2409 model](https://www.scaleway.com/en/docs/managed-inference/reference-content/pixtral-12b-2409/#frequently-asked-questions) image tokens of `16x16` pixels (16 pixel height, and 16 pixel width, hence `256` pixels in total).
+A token is the minimum unit of content that is seen and processed by a model. Hence, token definitions depend on input types:
+- For text, on average, `1` token corresponds to `~4` characters, and thus `0.75` words (as words are on average five characters long)
+- For images, `1` token corresponds to a square of pixels. For example, the [pixtral-12b-2409 model](https://www.scaleway.com/en/docs/managed-inference/reference-content/pixtral-12b-2409/#frequently-asked-questions) uses image tokens of `16x16` pixels (16-pixel height and 16-pixel width, hence `256` pixels in total).
 
-Exact tokens count and definition depends on [tokenizers](https://huggingface.co/learn/llm-course/en/chapter2/4) used by each models. When this difference is significant (such as for image processing), you can find detailed information in each model documentation (for instance in [`pixtral-12b-2409` size limit documentation](https://www.scaleway.com/en/docs/managed-inference/reference-content/pixtral-12b-2409/#frequently-asked-questions)). Otherwise, when the model is open, you can find this information in the model files on platforms such as Hugging Face, usually in the `tokenizer_config.json` file.
+The exact token count and definition depend on the [tokenizers](https://huggingface.co/learn/llm-course/en/chapter2/4) used by each model. When this difference is significant (such as for image processing), you can find detailed information in each model's documentation (for instance, in the [`pixtral-12b-2409` size limit documentation](https://www.scaleway.com/en/docs/managed-inference/reference-content/pixtral-12b-2409/#frequently-asked-questions)). Otherwise, when the model is open, you can find this information in the model files on platforms such as Hugging Face, usually in the `tokenizer_config.json` file.
 
 ## How can I monitor my token consumption?
 You can see your token consumption in [Scaleway Cockpit](/cockpit/). You can access it from the Scaleway console under the [Metrics tab](https://console.scaleway.com/generative-api/metrics).
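To make the free-tier and minimum-billing-unit rules in the updated FAQ text concrete, here is a minimal Python sketch of how a bill could be estimated from raw token counts. It assumes the 1M-token free tier is deducted from total usage first and that the remainder is rounded up to the next full million per model and token type; the model name and per-million prices are placeholders, not Scaleway's actual catalogue or rates (refer to the pricing page for those).

```python
import math

FREE_TIER_TOKENS = 1_000_000  # first 1M tokens (all models, input + output) are free
BILLING_UNIT = 1_000_000      # minimum billing unit: 1 million tokens

# Hypothetical per-million prices by (model, token type) -- placeholders only.
PRICE_PER_MILLION_EUR = {
    ("example-8b-instruct", "input"): 0.20,
    ("example-8b-instruct", "output"): 0.20,
}

def estimate_bill(consumption: dict[tuple[str, str], int]) -> float:
    """Apply the shared free tier, then bill the rest per (model, token type)."""
    free_remaining = FREE_TIER_TOKENS
    total_bill = 0.0
    for (model, token_type), tokens in consumption.items():
        # Spread the free tier across usage lines until it is exhausted
        # (the exact allocation order is an assumption of this sketch).
        free_applied = min(tokens, free_remaining)
        free_remaining -= free_applied
        billable = tokens - free_applied
        if billable == 0:
            continue
        # Round up to the next full million (minimum billing unit).
        units = math.ceil(billable / BILLING_UNIT)
        total_bill += units * PRICE_PER_MILLION_EUR[(model, token_type)]
    return total_bill

usage = {
    ("example-8b-instruct", "input"): 2_500_000,
    ("example-8b-instruct", "output"): 700_000,
}
print(f"Estimated bill: {estimate_bill(usage):.2f}€")
```

With the sample usage shown, 2.5M + 0.7M tokens minus the 1M free tier leaves 1.5M input and 0.7M output billable tokens, which round up to 2 + 1 billable millions, or 0.60€ at the placeholder rates.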
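The rules of thumb in the token section (roughly 4 characters per text token, 16x16-pixel squares for pixtral-12b-2409 image tokens) translate directly into a rough estimator, sketched below. These are only approximations under those stated averages; exact counts depend on the model's tokenizer and, for images, on any resizing and special tokens the model applies.

```python
import math

def estimate_text_tokens(text: str) -> int:
    """Rough estimate: ~4 characters per token on average (~0.75 words per token)."""
    return math.ceil(len(text) / 4)

def estimate_image_tokens(width_px: int, height_px: int, patch_px: int = 16) -> int:
    """Rough estimate for models that tokenize images into fixed squares of pixels,
    e.g. 16x16-pixel patches as described for pixtral-12b-2409."""
    return math.ceil(width_px / patch_px) * math.ceil(height_px / patch_px)

print(estimate_text_tokens("The free tier covers the first million tokens."))  # ~12 tokens
print(estimate_image_tokens(1024, 768))  # 64 * 48 = 3072 patch tokens (before any resizing)
```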
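For exact counts on open models, the FAQ points to the tokenizer files published on Hugging Face (typically `tokenizer_config.json`). As an illustration, the snippet below loads a tokenizer with the `transformers` library and counts tokens; `gpt2` is only a freely available stand-in model ID, not a model served by Generative APIs, and the count obtained this way is informational rather than the figure Scaleway bills on.

```python
# Requires: pip install transformers
from transformers import AutoTokenizer

# "gpt2" is a stand-in model ID; substitute the Hugging Face repository of the model
# you actually call (its tokenizer settings usually live in tokenizer_config.json).
tokenizer = AutoTokenizer.from_pretrained("gpt2")

text = "The free tier covers the first million tokens."
token_ids = tokenizer.encode(text)
print(len(token_ids), token_ids)
```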
