Commit a61b15b

Merge pull request #2609 from MicrosoftDocs/release-foundry-model-ds
Release foundry model ds
2 parents: b4a11d2 + 38179c0

2 files changed: +7 -7 lines changed

articles/ai-studio/how-to/deploy-models-deepseek.md

Lines changed: 6 additions & 6 deletions
@@ -110,7 +110,7 @@ print("Model provider name:", model_info.model_provider_name)
 ```console
 Model name: DeepSeek-R1
 Model type: chat-completions
-Model provider name: Deepseek
+Model provider name: DeepSeek
 ```
 
 ### Create a chat completion request
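
For context: the console output in this hunk is produced by the model-information snippet the article documents just above it (the `@@` header shows the Python `print` line). A minimal sketch of that call with the `azure-ai-inference` package looks roughly like the following; the endpoint URL and key are placeholders, not values from this commit:

```python
# Minimal sketch (not part of this commit): read model metadata from a
# serverless DeepSeek-R1 endpoint. Endpoint URL and key are placeholders.
from azure.ai.inference import ChatCompletionsClient
from azure.core.credentials import AzureKeyCredential

client = ChatCompletionsClient(
    endpoint="https://<your-deployment>.<your-region>.models.ai.azure.com",
    credential=AzureKeyCredential("<your-api-key>"),
)

model_info = client.get_model_info()
print("Model name:", model_info.model_name)                    # DeepSeek-R1
print("Model type:", model_info.model_type)                    # chat-completions
print("Model provider name:", model_info.model_provider_name)  # DeepSeek (the casing this commit documents)
```

The change in this hunk only corrects the documented output casing from `Deepseek` to `DeepSeek`; the call itself is unchanged.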
@@ -359,7 +359,7 @@ console.log("Model provider name: ", model_info.body.model_provider_name)
 ```console
 Model name: DeepSeek-R1
 Model type: chat-completions
-Model provider name: Deepseek
+Model provider name: DeepSeek
 ```
 
 ### Create a chat completion request
@@ -652,7 +652,7 @@ Console.WriteLine($"Model provider name: {modelInfo.Value.ModelProviderName}");
 ```console
 Model name: DeepSeek-R1
 Model type: chat-completions
-Model provider name: Deepseek
+Model provider name: DeepSeek
 ```
 
 ### Create a chat completion request
@@ -903,7 +903,7 @@ The response is as follows:
 {
 "model_name": "DeepSeek-R1",
 "model_type": "chat-completions",
-"model_provider_name": "Deepseek"
+"model_provider_name": "DeepSeek"
 }
 ```
 
@@ -1127,7 +1127,7 @@ The following example shows how to handle events when the model detects harmful
 
 ## More inference examples
 
-For more examples of how to use Deepseek models, see the following examples and tutorials:
+For more examples of how to use DeepSeek models, see the following examples and tutorials:
 
 | Description | Language | Sample |
 |-------------------------------------------|-------------------|-----------------------------------------------------------------|
@@ -1136,7 +1136,7 @@ For more examples of how to use Deepseek models, see the following examples and
 | Azure AI Inference package for C# | C# | [Link](https://aka.ms/azsdk/azure-ai-inference/csharp/samples) |
 | Azure AI Inference package for Java | Java | [Link](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/ai/azure-ai-inference/src/samples) |
 
-## Cost and quota considerations for Deepseek models deployed as serverless API endpoints
+## Cost and quota considerations for DeepSeek models deployed as serverless API endpoints
 
 Quota is managed per deployment. Each deployment has a rate limit of 200,000 tokens per minute and 1,000 API requests per minute. However, we currently limit one deployment per model per project. Contact Microsoft Azure Support if the current rate limits aren't sufficient for your scenarios.
 
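The quota note above caps each deployment at 200,000 tokens per minute and 1,000 API requests per minute. As an illustration only (not part of this commit), one way to tolerate hitting that limit from Python is to back off and retry when the service returns HTTP 429; the endpoint, key, and helper name below are placeholders:

```python
# Illustrative sketch (not part of this commit): retry a chat completion with
# exponential backoff when the per-deployment rate limit returns HTTP 429.
import time

from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import UserMessage
from azure.core.credentials import AzureKeyCredential
from azure.core.exceptions import HttpResponseError

client = ChatCompletionsClient(
    endpoint="https://<your-deployment>.<your-region>.models.ai.azure.com",  # placeholder
    credential=AzureKeyCredential("<your-api-key>"),                         # placeholder
)

def complete_with_retry(messages, max_retries=5):
    """Call the endpoint, sleeping 1s, 2s, 4s, ... after each 429 response."""
    for attempt in range(max_retries):
        try:
            return client.complete(messages=messages)
        except HttpResponseError as err:
            if err.status_code != 429 or attempt == max_retries - 1:
                raise
            time.sleep(2 ** attempt)

response = complete_with_retry([UserMessage(content="Summarize the DeepSeek-R1 model in one sentence.")])
print(response.choices[0].message.content)
```
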
articles/ai-studio/toc.yml

Lines changed: 1 addition & 1 deletion
@@ -140,7 +140,7 @@ items:
   href: how-to/deploy-models-cohere-embed.md
 - name: Cohere Rerank models
   href: how-to/deploy-models-cohere-rerank.md
-- name: Deepseek-R1 reasoning models
+- name: DeepSeek-R1 reasoning models
   href: how-to/deploy-models-deepseek.md
 - name: Meta Llama models
   items:
