Commit 5529093

Merge pull request #284207 from santiagxf/santiagxf-patch-3
Santiagxf patch 3
2 parents 57a205f + 11c9311; commit 5529093

10 files changed: +175 additions, −44 deletions

articles/ai-studio/how-to/deploy-models-cohere-command.md

Lines changed: 3 additions & 5 deletions

````diff
@@ -135,7 +135,7 @@ The response is as follows:
 ```python
 print("Model name:", model_info.model_name)
 print("Model type:", model_info.model_type)
-print("Model provider name:", model_info.model_provider)
+print("Model provider name:", model_info.model_provider_name)
 ```
 
 ```console
@@ -209,14 +209,12 @@ To visualize the output, define a helper function to print the stream.
 ```python
 def print_stream(result):
     """
-    Prints the chat completion with streaming. Some delay is added to simulate
-    a real-time conversation.
+    Prints the chat completion with streaming.
     """
     import time
     for update in result:
         if update.choices:
             print(update.choices[0].delta.content, end="")
-            time.sleep(0.05)
 ```
 
 You can visualize how streaming generates content:
@@ -1364,7 +1362,7 @@ catch (RequestFailedException ex)
 {
     if (ex.ErrorCode == "content_filter")
     {
-        Console.WriteLine($"Your query has trigger Azure Content Safeaty: {ex.Message}");
+        Console.WriteLine($"Your query has trigger Azure Content Safety: {ex.Message}");
     }
     else
     {
````
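
The corrected `print_stream` helper can be exercised standalone. In the sketch below, `Update`, `Choice`, and `Delta` are hypothetical mocks standing in for the streaming types the Azure AI Inference client returns; they are not the real SDK classes.

```python
# Standalone sketch of the corrected print_stream helper from this commit.
# Update/Choice/Delta are hypothetical stand-ins for the SDK streaming types.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Delta:
    content: Optional[str]


@dataclass
class Choice:
    delta: Delta


@dataclass
class Update:
    choices: List[Choice]


def print_stream(result):
    """
    Prints the chat completion with streaming.
    """
    for update in result:
        if update.choices:
            print(update.choices[0].delta.content, end="")


# A plain list stands in for the server-sent stream of updates.
stream = [
    Update(choices=[Choice(delta=Delta(content="Hello"))]),
    Update(choices=[]),  # updates without choices are skipped
    Update(choices=[Choice(delta=Delta(content=" world"))]),
]
print_stream(stream)  # prints "Hello world"
```

Note that the hunk deletes the `time.sleep(0.05)` call but leaves `import time` behind, now unused; the sketch above drops the import as well.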

articles/ai-studio/how-to/deploy-models-jais.md

Lines changed: 11 additions & 5 deletions

````diff
@@ -27,6 +27,8 @@ JAIS 30b Chat is an autoregressive bi-lingual LLM for **Arabic** & **English**.
 
 ::: zone pivot="programming-language-python"
 
+## Jais chat models
+
 
 
 You can learn more about the models in their respective model card:
@@ -103,7 +105,7 @@ The response is as follows:
 ```python
 print("Model name:", model_info.model_name)
 print("Model type:", model_info.model_type)
-print("Model provider name:", model_info.model_provider)
+print("Model provider name:", model_info.model_provider_name)
 ```
 
 ```console
@@ -177,14 +179,12 @@ To visualize the output, define a helper function to print the stream.
 ```python
 def print_stream(result):
     """
-    Prints the chat completion with streaming. Some delay is added to simulate
-    a real-time conversation.
+    Prints the chat completion with streaming.
     """
     import time
     for update in result:
         if update.choices:
             print(update.choices[0].delta.content, end="")
-            time.sleep(0.05)
 ```
 
 You can visualize how streaming generates content:
@@ -278,6 +278,8 @@ except HttpResponseError as ex:
 
 ::: zone pivot="programming-language-javascript"
 
+## Jais chat models
+
 
 
 You can learn more about the models in their respective model card:
@@ -550,6 +552,8 @@ catch (error) {
 
 ::: zone pivot="programming-language-csharp"
 
+## Jais chat models
+
 
 
 You can learn more about the models in their respective model card:
@@ -821,7 +825,7 @@ catch (RequestFailedException ex)
 {
     if (ex.ErrorCode == "content_filter")
     {
-        Console.WriteLine($"Your query has trigger Azure Content Safeaty: {ex.Message}");
+        Console.WriteLine($"Your query has trigger Azure Content Safety: {ex.Message}");
     }
     else
     {
@@ -838,6 +842,8 @@ catch (RequestFailedException ex)
 
 ::: zone pivot="programming-language-rest"
 
+## Jais chat models
+
 
 
 You can learn more about the models in their respective model card:
````

articles/ai-studio/how-to/deploy-models-jamba.md

Lines changed: 11 additions & 5 deletions

````diff
@@ -26,6 +26,8 @@ The Jamba-Instruct model is AI21's production-grade Mamba-based large language m
 
 ::: zone pivot="programming-language-python"
 
+## Jamba-Instruct chat models
+
 
 
 You can learn more about the models in their respective model card:
@@ -102,7 +104,7 @@ The response is as follows:
 ```python
 print("Model name:", model_info.model_name)
 print("Model type:", model_info.model_type)
-print("Model provider name:", model_info.model_provider)
+print("Model provider name:", model_info.model_provider_name)
 ```
 
 ```console
@@ -176,14 +178,12 @@ To visualize the output, define a helper function to print the stream.
 ```python
 def print_stream(result):
     """
-    Prints the chat completion with streaming. Some delay is added to simulate
-    a real-time conversation.
+    Prints the chat completion with streaming.
     """
     import time
     for update in result:
         if update.choices:
             print(update.choices[0].delta.content, end="")
-            time.sleep(0.05)
 ```
 
 You can visualize how streaming generates content:
@@ -277,6 +277,8 @@ except HttpResponseError as ex:
 
 ::: zone pivot="programming-language-javascript"
 
+## Jamba-Instruct chat models
+
 
 
 You can learn more about the models in their respective model card:
@@ -549,6 +551,8 @@ catch (error) {
 
 ::: zone pivot="programming-language-csharp"
 
+## Jamba-Instruct chat models
+
 
 
 You can learn more about the models in their respective model card:
@@ -820,7 +824,7 @@ catch (RequestFailedException ex)
 {
     if (ex.ErrorCode == "content_filter")
    {
-        Console.WriteLine($"Your query has trigger Azure Content Safeaty: {ex.Message}");
+        Console.WriteLine($"Your query has trigger Azure Content Safety: {ex.Message}");
     }
     else
     {
@@ -837,6 +841,8 @@ catch (RequestFailedException ex)
 
 ::: zone pivot="programming-language-rest"
 
+## Jamba-Instruct chat models
+
 
 
 You can learn more about the models in their respective model card:
````

articles/ai-studio/how-to/deploy-models-llama.md

Lines changed: 3 additions & 5 deletions

````diff
@@ -159,7 +159,7 @@ The response is as follows:
 ```python
 print("Model name:", model_info.model_name)
 print("Model type:", model_info.model_type)
-print("Model provider name:", model_info.model_provider)
+print("Model provider name:", model_info.model_provider_name)
 ```
 
 ```console
@@ -233,14 +233,12 @@ To visualize the output, define a helper function to print the stream.
 ```python
 def print_stream(result):
     """
-    Prints the chat completion with streaming. Some delay is added to simulate
-    a real-time conversation.
+    Prints the chat completion with streaming.
     """
     import time
     for update in result:
         if update.choices:
             print(update.choices[0].delta.content, end="")
-            time.sleep(0.05)
 ```
 
 You can visualize how streaming generates content:
@@ -1038,7 +1036,7 @@ catch (RequestFailedException ex)
 {
     if (ex.ErrorCode == "content_filter")
     {
-        Console.WriteLine($"Your query has trigger Azure Content Safeaty: {ex.Message}");
+        Console.WriteLine($"Your query has trigger Azure Content Safety: {ex.Message}");
     }
     else
     {
````
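
The attribute rename that recurs across these files (`model_provider` → `model_provider_name`) can be illustrated with a mocked response object. The values below are illustrative; in the articles, the object comes from the inference client's `get_model_info()` call.

```python
# Sketch of the corrected attribute access. model_info is mocked here; the
# field values are made up for illustration.
from types import SimpleNamespace

model_info = SimpleNamespace(
    model_name="Meta-Llama-3.1-405B-Instruct",  # illustrative value
    model_type="chat-completion",
    model_provider_name="Meta",
)

print("Model name:", model_info.model_name)
print("Model type:", model_info.model_type)
# This commit renames the accessed attribute: model_provider -> model_provider_name
print("Model provider name:", model_info.model_provider_name)
```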

articles/ai-studio/how-to/deploy-models-mistral-nemo.md

Lines changed: 3 additions & 5 deletions

````diff
@@ -113,7 +113,7 @@ The response is as follows:
 ```python
 print("Model name:", model_info.model_name)
 print("Model type:", model_info.model_type)
-print("Model provider name:", model_info.model_provider)
+print("Model provider name:", model_info.model_provider_name)
 ```
 
 ```console
@@ -187,14 +187,12 @@ To visualize the output, define a helper function to print the stream.
 ```python
 def print_stream(result):
     """
-    Prints the chat completion with streaming. Some delay is added to simulate
-    a real-time conversation.
+    Prints the chat completion with streaming.
     """
     import time
     for update in result:
         if update.choices:
             print(update.choices[0].delta.content, end="")
-            time.sleep(0.05)
 ```
 
 You can visualize how streaming generates content:
@@ -1385,7 +1383,7 @@ catch (RequestFailedException ex)
 {
     if (ex.ErrorCode == "content_filter")
     {
-        Console.WriteLine($"Your query has trigger Azure Content Safeaty: {ex.Message}");
+        Console.WriteLine($"Your query has trigger Azure Content Safety: {ex.Message}");
     }
     else
     {
````

articles/ai-studio/how-to/deploy-models-mistral-open.md

Lines changed: 2 additions & 4 deletions

````diff
@@ -158,7 +158,7 @@ The response is as follows:
 ```python
 print("Model name:", model_info.model_name)
 print("Model type:", model_info.model_type)
-print("Model provider name:", model_info.model_provider)
+print("Model provider name:", model_info.model_provider_name)
 ```
 
 ```console
@@ -235,14 +235,12 @@ To visualize the output, define a helper function to print the stream.
 ```python
 def print_stream(result):
     """
-    Prints the chat completion with streaming. Some delay is added to simulate
-    a real-time conversation.
+    Prints the chat completion with streaming.
     """
     import time
     for update in result:
         if update.choices:
             print(update.choices[0].delta.content, end="")
-            time.sleep(0.05)
 ```
 
 You can visualize how streaming generates content:
````

articles/ai-studio/how-to/deploy-models-mistral.md

Lines changed: 3 additions & 5 deletions

````diff
@@ -143,7 +143,7 @@ The response is as follows:
 ```python
 print("Model name:", model_info.model_name)
 print("Model type:", model_info.model_type)
-print("Model provider name:", model_info.model_provider)
+print("Model provider name:", model_info.model_provider_name)
 ```
 
 ```console
@@ -217,14 +217,12 @@ To visualize the output, define a helper function to print the stream.
 ```python
 def print_stream(result):
     """
-    Prints the chat completion with streaming. Some delay is added to simulate
-    a real-time conversation.
+    Prints the chat completion with streaming.
     """
     import time
     for update in result:
         if update.choices:
             print(update.choices[0].delta.content, end="")
-            time.sleep(0.05)
 ```
 
 You can visualize how streaming generates content:
@@ -1475,7 +1473,7 @@ catch (RequestFailedException ex)
 {
     if (ex.ErrorCode == "content_filter")
     {
-        Console.WriteLine($"Your query has trigger Azure Content Safeaty: {ex.Message}");
+        Console.WriteLine($"Your query has trigger Azure Content Safety: {ex.Message}");
     }
     else
     {
````

articles/ai-studio/how-to/deploy-models-phi-3-vision.md

Lines changed: 10 additions & 4 deletions

````diff
@@ -23,6 +23,8 @@ The Phi-3 family of small language models (SLMs) is a collection of instruction-
 
 ::: zone pivot="programming-language-python"
 
+## Phi-3 chat models with vision
+
 Phi-3 Vision is a lightweight, state-of-the-art open multimodal model built upon datasets which include - synthetic data and filtered publicly available websites - with a focus on very high-quality, reasoning dense data both on text and vision. The model belongs to the Phi-3 model family, and the multimodal version comes with 128K context length (in tokens) it can support. The model underwent a rigorous enhancement process, incorporating both supervised fine-tuning and direct preference optimization to ensure precise instruction adherence and robust safety measures.
 
 
@@ -114,7 +116,7 @@ The response is as follows:
 ```python
 print("Model name:", model_info.model_name)
 print("Model type:", model_info.model_type)
-print("Model provider name:", model_info.model_provider)
+print("Model provider name:", model_info.model_provider_name)
 ```
 
 ```console
@@ -191,14 +193,12 @@ To visualize the output, define a helper function to print the stream.
 ```python
 def print_stream(result):
     """
-    Prints the chat completion with streaming. Some delay is added to simulate
-    a real-time conversation.
+    Prints the chat completion with streaming.
     """
     import time
     for update in result:
         if update.choices:
             print(update.choices[0].delta.content, end="")
-            time.sleep(0.05)
 ```
 
 You can visualize how streaming generates content:
@@ -343,6 +343,8 @@ Usage:
 
 ::: zone pivot="programming-language-javascript"
 
+## Phi-3 chat models with vision
+
 Phi-3 Vision is a lightweight, state-of-the-art open multimodal model built upon datasets which include - synthetic data and filtered publicly available websites - with a focus on very high-quality, reasoning dense data both on text and vision. The model belongs to the Phi-3 model family, and the multimodal version comes with 128K context length (in tokens) it can support. The model underwent a rigorous enhancement process, incorporating both supervised fine-tuning and direct preference optimization to ensure precise instruction adherence and robust safety measures.
 
 
@@ -684,6 +686,8 @@ Usage:
 
 ::: zone pivot="programming-language-csharp"
 
+## Phi-3 chat models with vision
+
 Phi-3 Vision is a lightweight, state-of-the-art open multimodal model built upon datasets which include - synthetic data and filtered publicly available websites - with a focus on very high-quality, reasoning dense data both on text and vision. The model belongs to the Phi-3 model family, and the multimodal version comes with 128K context length (in tokens) it can support. The model underwent a rigorous enhancement process, incorporating both supervised fine-tuning and direct preference optimization to ensure precise instruction adherence and robust safety measures.
 
 
@@ -1022,6 +1026,8 @@ Usage:
 
 ::: zone pivot="programming-language-rest"
 
+## Phi-3 chat models with vision
+
 Phi-3 Vision is a lightweight, state-of-the-art open multimodal model built upon datasets which include - synthetic data and filtered publicly available websites - with a focus on very high-quality, reasoning dense data both on text and vision. The model belongs to the Phi-3 model family, and the multimodal version comes with 128K context length (in tokens) it can support. The model underwent a rigorous enhancement process, incorporating both supervised fine-tuning and direct preference optimization to ensure precise instruction adherence and robust safety measures.
 
 
````

articles/ai-studio/how-to/deploy-models-phi-3.md

Lines changed: 3 additions & 5 deletions

````diff
@@ -169,7 +169,7 @@ The response is as follows:
 ```python
 print("Model name:", model_info.model_name)
 print("Model type:", model_info.model_type)
-print("Model provider name:", model_info.model_provider)
+print("Model provider name:", model_info.model_provider_name)
 ```
 
 ```console
@@ -246,14 +246,12 @@ To visualize the output, define a helper function to print the stream.
 ```python
 def print_stream(result):
     """
-    Prints the chat completion with streaming. Some delay is added to simulate
-    a real-time conversation.
+    Prints the chat completion with streaming.
    """
     import time
     for update in result:
         if update.choices:
             print(update.choices[0].delta.content, end="")
-            time.sleep(0.05)
 ```
 
 You can visualize how streaming generates content:
@@ -1068,7 +1066,7 @@ catch (RequestFailedException ex)
 {
     if (ex.ErrorCode == "content_filter")
     {
-        Console.WriteLine($"Your query has trigger Azure Content Safeaty: {ex.Message}");
+        Console.WriteLine($"Your query has trigger Azure Content Safety: {ex.Message}");
     }
     else
     {
````
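
The C# hunks in this commit fix only the "Safeaty" typo; the message still reads "has trigger". A Python rendering of the same error-handling pattern, with the grammar corrected to "has triggered" and a hypothetical exception class mimicking the shape of the SDK's `RequestFailedException`, might look like:

```python
# Hypothetical sketch of the content-filter branch the C# hunks touch.
# RequestFailedError mimics the SDK exception's shape; it is not the real type.
class RequestFailedError(Exception):
    def __init__(self, error_code: str, message: str):
        super().__init__(message)
        self.error_code = error_code
        self.message = message


def describe_failure(ex: RequestFailedError) -> str:
    # Content-filter rejections get a dedicated message; everything else
    # falls through to a generic one.
    if ex.error_code == "content_filter":
        return f"Your query has triggered Azure Content Safety: {ex.message}"
    return f"Request failed ({ex.error_code}): {ex.message}"


print(describe_failure(RequestFailedError("content_filter", "harmful content")))
# prints "Your query has triggered Azure Content Safety: harmful content"
```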
