
Commit 320e59f

bene2k1 authored
Apply suggestions from code review
Co-authored-by: Jessica <[email protected]>
Co-authored-by: ldecarvalho-doc <[email protected]>
1 parent c5e4eeb commit 320e59f

2 files changed: +9, -9 lines changed


pages/managed-inference/reference-content/deepseek-r1-distill-llama-70b.mdx

Lines changed: 5 additions & 5 deletions
@@ -36,15 +36,15 @@ deepseek/deepseek-r1-distill-llama-70b:bf16
 
 ## Model introduction
 
-Released January 21, 2025, Deepseek’s R1 Distilled Llama 70B is a distilled version of Llama model family based on Deepseek R1.
-DeepSeek R1 Distill Llama 70B is designed to improve performance of Llama models on reasoning use case such as mathematics and coding tasks.
+Released January 21, 2025, Deepseek’s R1 Distilled Llama 70B is a distilled version of the Llama model family based on Deepseek R1.
+DeepSeek R1 Distill Llama 70B is designed to improve the performance of Llama models on reasoning use cases such as mathematics and coding tasks.
 
 ## Why is it useful?
 
 It is great to see Deepseek improving open(weight) models, and we are excited to fully support their mission with integration in the Scaleway ecosystem.
 
 - DeepSeek-R1-Distill-Llama was optimized to reach accuracy close to Deepseek-R1 in tasks like mathematics and coding, while keeping inference costs limited and token speed efficient.
-- DeepSeek-R1-Distill-Llama supports a context window up to 56K tokens and tool calling, keeping interaction with other components possible.
+- DeepSeek-R1-Distill-Llama supports a context window of up to 56K tokens and tool calling, keeping interaction with other components possible.
 
 ## How to use it
 
@@ -71,9 +71,9 @@ Make sure to replace `<IAM API key>` and `<Deployment UUID>` with your actual [I
 This model is better used without `system prompt`, as suggested by the model provider.
 </Message>
 
-### Receiving Inference responses
+### Receiving inference responses
 
-Upon sending the HTTP request to the public or private endpoints exposed by the server, you will receive inference responses from the managed Managed Inference server.
+Upon sending the HTTP request to the public or private endpoints exposed by the server, you will receive inference responses from the Managed Inference server.
 Process the output data according to your application's needs. The response will contain the output generated by the LLM model based on the input provided in the request.
 
 <Message type="note">
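The hunks above describe the request/response round trip with `<IAM API key>` and `<Deployment UUID>` placeholders. A minimal sketch of that round trip follows; the `https://<Deployment UUID>.ifr.fr-par.scaleway.com` URL shape, the OpenAI-compatible `/v1/chat/completions` route, and the `max_tokens` parameter are assumptions here, not confirmed by this diff. Only the model name and the placeholders come from the doc.

```python
# Minimal sketch of the round trip described above.
# Assumptions (not confirmed by this diff): the deployment exposes an
# OpenAI-compatible /v1/chat/completions route at this URL shape, and the
# response follows the OpenAI "choices" schema.
import requests

IAM_API_KEY = "<IAM API key>"          # placeholder from the doc: your IAM API key
DEPLOYMENT_UUID = "<Deployment UUID>"  # placeholder from the doc: your deployment UUID

url = f"https://{DEPLOYMENT_UUID}.ifr.fr-par.scaleway.com/v1/chat/completions"

payload = {
    "model": "deepseek/deepseek-r1-distill-llama-70b:bf16",
    # Per the note in the diff, this model is better used without a system prompt.
    "messages": [{"role": "user", "content": "Solve: what is 12 * 17?"}],
    "max_tokens": 512,
}

resp = requests.post(
    url,
    json=payload,
    headers={"Authorization": f"Bearer {IAM_API_KEY}"},
    timeout=120,
)
resp.raise_for_status()

# Process the output according to your application's needs.
print(resp.json()["choices"][0]["message"]["content"])
```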

pages/managed-inference/reference-content/deepseek-r1-distill-llama-8b.mdx

Lines changed: 4 additions & 4 deletions
@@ -37,15 +37,15 @@ deepseek/deepseek-r1-distill-llama-8b:bf16
 
 ## Model introduction
 
-Released January 21, 2025, Deepseek’s R1 Distilled Llama 8B is a distilled version of Llama model family based on Deepseek R1.
-DeepSeek R1 Distill Llama 8B is designed to improve performance of Llama models on reasoning use case such as mathematics and coding tasks.
+Released January 21, 2025, Deepseek’s R1 Distilled Llama 8B is a distilled version of the Llama model family based on Deepseek R1.
+DeepSeek R1 Distill Llama 8B is designed to improve the performance of Llama models on reasoning use cases such as mathematics and coding tasks.
 
 ## Why is it useful?
 
 It is great to see Deepseek improving open(weight) models, and we are excited to fully support their mission with integration in the Scaleway ecosystem.
 
 - DeepSeek-R1-Distill-Llama was optimized to reach accuracy close to Deepseek-R1 in tasks like mathematics and coding, while keeping inference costs limited and token speed efficient.
-- DeepSeek-R1-Distill-Llama supports a context window up to 131K tokens and tool calling, keeping interaction with other components possible.
+- DeepSeek-R1-Distill-Llama supports a context window of up to 131K tokens and tool calling, keeping interaction with other components possible.
 
 ## How to use it
 
@@ -72,7 +72,7 @@ Make sure to replace `<IAM API key>` and `<Deployment UUID>` with your actual [I
 This model is better used without `system prompt`, as suggested by the model provider.
 </Message>
 
-### Receiving Inference responses
+### Receiving inference responses
 
 Upon sending the HTTP request to the public or private endpoints exposed by the server, you will receive inference responses from the Managed Inference server.
 Process the output data according to your application's needs. The response will contain the output generated by the LLM model based on the input provided in the request.
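The 8B hunk above also notes tool-calling support. A hedged sketch of declaring a tool in the same assumed OpenAI-compatible request format follows; the `get_weather` function, its schema, and the endpoint/route shape are illustrative assumptions, while the model name and placeholders come from the doc.

```python
# Sketch of a tool-calling request against the deployment.
# The get_weather tool and the endpoint/route shape are illustrative
# assumptions, not taken from this diff.
import requests

IAM_API_KEY = "<IAM API key>"          # placeholder from the doc
DEPLOYMENT_UUID = "<Deployment UUID>"  # placeholder from the doc

url = f"https://{DEPLOYMENT_UUID}.ifr.fr-par.scaleway.com/v1/chat/completions"

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool, for illustration only
        "description": "Return the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = requests.post(
    url,
    json={
        "model": "deepseek/deepseek-r1-distill-llama-8b:bf16",
        "messages": [{"role": "user", "content": "What is the weather in Paris?"}],
        "tools": tools,
    },
    headers={"Authorization": f"Bearer {IAM_API_KEY}"},
    timeout=120,
)
resp.raise_for_status()
message = resp.json()["choices"][0]["message"]

# If the model chose to call the tool, the arguments arrive as a JSON string.
for call in message.get("tool_calls") or []:
    print(call["function"]["name"], call["function"]["arguments"])
```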
