
Commit 554f928

Authored by Jill Grant
Merge pull request #24 from dargilco/dargilco/post-beta-4-python-sdk-release
Fix setting response format for Python Inference SDK
2 parents 12bda57 + e851d56 commit 554f928

16 files changed: +41 −35 lines changed
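
In short, the commit replaces the earlier pattern of passing a dictionary built from the `ChatCompletionsResponseFormat` enum with the dedicated response-format classes that the post-beta-4 `azure-ai-inference` package expects. A minimal before/after sketch, assuming a serverless endpoint and key stored in environment variables (the variable names and the surrounding client setup are illustrative, not part of this commit):

```python
import os

from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import (
    ChatCompletionsResponseFormatText,
    SystemMessage,
    UserMessage,
)
from azure.core.credentials import AzureKeyCredential

# Hypothetical endpoint/key variable names; substitute your own deployment values.
client = ChatCompletionsClient(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["AZURE_INFERENCE_CREDENTIAL"]),
)

response = client.complete(
    messages=[
        SystemMessage(content="You are a helpful assistant."),
        UserMessage(content="How many languages are in the world?"),
    ],
    # Before this commit the docs showed:
    #   response_format={ "type": ChatCompletionsResponseFormat.TEXT },
    # After beta 4, pass an instance of the response-format class instead:
    response_format=ChatCompletionsResponseFormatText(),
)
print(response.choices[0].message.content)
```

The JSON variant works the same way with `ChatCompletionsResponseFormatJSON()`, as the per-file diffs below show.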

articles/ai-studio/how-to/deploy-models-cohere-command.md

Lines changed: 5 additions & 3 deletions
@@ -231,7 +231,7 @@ print_stream(result)
 Explore other parameters that you can specify in the inference client. For a full list of all the supported parameters and their corresponding documentation, see [Azure AI Model Inference API reference](https://aka.ms/azureai/modelinference).

 ```python
-from azure.ai.inference.models import ChatCompletionsResponseFormat
+from azure.ai.inference.models import ChatCompletionsResponseFormatText

 response = client.complete(
     messages=[
@@ -244,7 +244,7 @@ response = client.complete(
     stop=["<|endoftext|>"],
     temperature=0,
     top_p=1,
-    response_format={ "type": ChatCompletionsResponseFormat.TEXT },
+    response_format=ChatCompletionsResponseFormatText(),
 )
 ```

@@ -256,13 +256,15 @@ Cohere Command chat models can create JSON outputs. Set `response_format` to `js


 ```python
+from azure.ai.inference.models import ChatCompletionsResponseFormatJSON
+
 response = client.complete(
     messages=[
         SystemMessage(content="You are a helpful assistant that always generate responses in JSON format, using."
                       " the following format: { \"answer\": \"response\" }."),
         UserMessage(content="How many languages are in the world?"),
     ],
-    response_format={ "type": ChatCompletionsResponseFormat.JSON_OBJECT }
+    response_format=ChatCompletionsResponseFormatJSON()
 )
 ```

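The JSON hunk above pairs naturally with decoding the reply. A short usage sketch under the same assumptions as the sketch near the top of this page (the `client` object is reused; the `json.loads` step is illustrative, not part of the commit):

```python
import json

from azure.ai.inference.models import (
    ChatCompletionsResponseFormatJSON,
    SystemMessage,
    UserMessage,
)

response = client.complete(
    messages=[
        SystemMessage(content="You are a helpful assistant that always generates"
                      " responses in JSON format, using the following format:"
                      " { \"answer\": \"response\" }."),
        UserMessage(content="How many languages are in the world?"),
    ],
    response_format=ChatCompletionsResponseFormatJSON(),
)

# The model returns a JSON string; decode it before use.
payload = json.loads(response.choices[0].message.content)
print(payload["answer"])
```
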
articles/ai-studio/how-to/deploy-models-jais.md

Lines changed: 2 additions & 2 deletions
@@ -201,7 +201,7 @@ print_stream(result)
 Explore other parameters that you can specify in the inference client. For a full list of all the supported parameters and their corresponding documentation, see [Azure AI Model Inference API reference](https://aka.ms/azureai/modelinference).

 ```python
-from azure.ai.inference.models import ChatCompletionsResponseFormat
+from azure.ai.inference.models import ChatCompletionsResponseFormatText

 response = client.complete(
     messages=[
@@ -214,7 +214,7 @@ response = client.complete(
     stop=["<|endoftext|>"],
     temperature=0,
     top_p=1,
-    response_format={ "type": ChatCompletionsResponseFormat.TEXT },
+    response_format=ChatCompletionsResponseFormatText(),
 )
 ```

articles/ai-studio/how-to/deploy-models-llama.md

Lines changed: 2 additions & 2 deletions
@@ -255,7 +255,7 @@ print_stream(result)
 Explore other parameters that you can specify in the inference client. For a full list of all the supported parameters and their corresponding documentation, see [Azure AI Model Inference API reference](https://aka.ms/azureai/modelinference).

 ```python
-from azure.ai.inference.models import ChatCompletionsResponseFormat
+from azure.ai.inference.models import ChatCompletionsResponseFormatText

 response = client.complete(
     messages=[
@@ -268,7 +268,7 @@ response = client.complete(
     stop=["<|endoftext|>"],
     temperature=0,
     top_p=1,
-    response_format={ "type": ChatCompletionsResponseFormat.TEXT },
+    response_format=ChatCompletionsResponseFormatText(),
 )
 ```

articles/ai-studio/how-to/deploy-models-mistral-nemo.md

Lines changed: 5 additions & 3 deletions
@@ -209,7 +209,7 @@ print_stream(result)
 Explore other parameters that you can specify in the inference client. For a full list of all the supported parameters and their corresponding documentation, see [Azure AI Model Inference API reference](https://aka.ms/azureai/modelinference).

 ```python
-from azure.ai.inference.models import ChatCompletionsResponseFormat
+from azure.ai.inference.models import ChatCompletionsResponseFormatText

 response = client.complete(
     messages=[
@@ -222,7 +222,7 @@ response = client.complete(
     stop=["<|endoftext|>"],
     temperature=0,
     top_p=1,
-    response_format={ "type": ChatCompletionsResponseFormat.TEXT },
+    response_format=ChatCompletionsResponseFormatText(),
 )
 ```

@@ -234,13 +234,15 @@ Mistral Nemo chat model can create JSON outputs. Set `response_format` to `json_


 ```python
+from azure.ai.inference.models import ChatCompletionsResponseFormatJSON
+
 response = client.complete(
     messages=[
         SystemMessage(content="You are a helpful assistant that always generate responses in JSON format, using."
                       " the following format: { \"answer\": \"response\" }."),
         UserMessage(content="How many languages are in the world?"),
     ],
-    response_format={ "type": ChatCompletionsResponseFormat.JSON_OBJECT }
+    response_format=ChatCompletionsResponseFormatJSON()
 )
 ```

articles/ai-studio/how-to/deploy-models-mistral-open.md

Lines changed: 2 additions & 2 deletions
@@ -257,7 +257,7 @@ print_stream(result)
 Explore other parameters that you can specify in the inference client. For a full list of all the supported parameters and their corresponding documentation, see [Azure AI Model Inference API reference](https://aka.ms/azureai/modelinference).

 ```python
-from azure.ai.inference.models import ChatCompletionsResponseFormat
+from azure.ai.inference.models import ChatCompletionsResponseFormatText

 response = client.complete(
     messages=[
@@ -270,7 +270,7 @@ response = client.complete(
     stop=["<|endoftext|>"],
     temperature=0,
     top_p=1,
-    response_format={ "type": ChatCompletionsResponseFormat.TEXT },
+    response_format=ChatCompletionsResponseFormatText(),
 )
 ```

articles/ai-studio/how-to/deploy-models-mistral.md

Lines changed: 5 additions & 3 deletions
@@ -239,7 +239,7 @@ print_stream(result)
 Explore other parameters that you can specify in the inference client. For a full list of all the supported parameters and their corresponding documentation, see [Azure AI Model Inference API reference](https://aka.ms/azureai/modelinference).

 ```python
-from azure.ai.inference.models import ChatCompletionsResponseFormat
+from azure.ai.inference.models import ChatCompletionsResponseFormatText

 response = client.complete(
     messages=[
@@ -252,7 +252,7 @@ response = client.complete(
     stop=["<|endoftext|>"],
     temperature=0,
     top_p=1,
-    response_format={ "type": ChatCompletionsResponseFormat.TEXT },
+    response_format=ChatCompletionsResponseFormatText(),
 )
 ```

@@ -264,13 +264,15 @@ Mistral premium chat models can create JSON outputs. Set `response_format` to `j


 ```python
+from azure.ai.inference.models import ChatCompletionsResponseFormatJSON
+
 response = client.complete(
     messages=[
         SystemMessage(content="You are a helpful assistant that always generate responses in JSON format, using."
                       " the following format: { \"answer\": \"response\" }."),
         UserMessage(content="How many languages are in the world?"),
     ],
-    response_format={ "type": ChatCompletionsResponseFormat.JSON_OBJECT }
+    response_format=ChatCompletionsResponseFormatJSON()
 )
 ```

articles/ai-studio/how-to/deploy-models-phi-3-5-moe.md

Lines changed: 2 additions & 2 deletions
@@ -219,7 +219,7 @@ print_stream(result)
 Explore other parameters that you can specify in the inference client. For a full list of all the supported parameters and their corresponding documentation, see [Azure AI Model Inference API reference](https://aka.ms/azureai/modelinference).

 ```python
-from azure.ai.inference.models import ChatCompletionsResponseFormat
+from azure.ai.inference.models import ChatCompletionsResponseFormatText

 response = client.complete(
     messages=[
@@ -232,7 +232,7 @@ response = client.complete(
     stop=["<|endoftext|>"],
     temperature=0,
     top_p=1,
-    response_format={ "type": ChatCompletionsResponseFormat.TEXT },
+    response_format=ChatCompletionsResponseFormatText(),
 )
 ```

articles/ai-studio/how-to/deploy-models-phi-3-5-vision.md

Lines changed: 2 additions & 2 deletions
@@ -215,7 +215,7 @@ print_stream(result)
 Explore other parameters that you can specify in the inference client. For a full list of all the supported parameters and their corresponding documentation, see [Azure AI Model Inference API reference](https://aka.ms/azureai/modelinference).

 ```python
-from azure.ai.inference.models import ChatCompletionsResponseFormat
+from azure.ai.inference.models import ChatCompletionsResponseFormatText

 response = client.complete(
     messages=[
@@ -228,7 +228,7 @@ response = client.complete(
     stop=["<|endoftext|>"],
     temperature=0,
     top_p=1,
-    response_format={ "type": ChatCompletionsResponseFormat.TEXT },
+    response_format=ChatCompletionsResponseFormatText(),
 )
 ```

articles/ai-studio/how-to/deploy-models-phi-3-vision.md

Lines changed: 2 additions & 2 deletions
@@ -215,7 +215,7 @@ print_stream(result)
 Explore other parameters that you can specify in the inference client. For a full list of all the supported parameters and their corresponding documentation, see [Azure AI Model Inference API reference](https://aka.ms/azureai/modelinference).

 ```python
-from azure.ai.inference.models import ChatCompletionsResponseFormat
+from azure.ai.inference.models import ChatCompletionsResponseFormatText

 response = client.complete(
     messages=[
@@ -228,7 +228,7 @@ response = client.complete(
     stop=["<|endoftext|>"],
     temperature=0,
     top_p=1,
-    response_format={ "type": ChatCompletionsResponseFormat.TEXT },
+    response_format=ChatCompletionsResponseFormatText(),
 )
 ```

articles/ai-studio/how-to/deploy-models-phi-3.md

Lines changed: 2 additions & 2 deletions
@@ -256,7 +256,7 @@ print_stream(result)
 Explore other parameters that you can specify in the inference client. For a full list of all the supported parameters and their corresponding documentation, see [Azure AI Model Inference API reference](https://aka.ms/azureai/modelinference).

 ```python
-from azure.ai.inference.models import ChatCompletionsResponseFormat
+from azure.ai.inference.models import ChatCompletionsResponseFormatText

 response = client.complete(
     messages=[
@@ -269,7 +269,7 @@ response = client.complete(
     stop=["<|endoftext|>"],
     temperature=0,
     top_p=1,
-    response_format={ "type": ChatCompletionsResponseFormat.TEXT },
+    response_format=ChatCompletionsResponseFormatText(),
 )
 ```
