articles/ai-studio/reference/reference-model-inference-api.md: 7 additions & 2 deletions
@@ -116,6 +116,8 @@ model = ChatCompletionsClient(
 )
 ```
 
+Explore our [samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/ai/azure-ai-inference/samples) and read the [API reference documentation](https://aka.ms/azsdk/azure-ai-inference/python/reference) to get yourself started.
+
 # [JavaScript](#tab/javascript)
 
 Install the package `@azure-rest/ai-inference` using npm:
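Stepping back to the Python tab that the hunk above touches: the hunk header shows the client creation truncated at `model = ChatCompletionsClient(`. As a quick orientation, a minimal sketch of that setup with the `azure-ai-inference` package might look like the following; the environment variable names are placeholders, not values taken from this diff.

```python
import os

from azure.ai.inference import ChatCompletionsClient
from azure.core.credentials import AzureKeyCredential

# Placeholder configuration: the environment variable names below are
# illustrative, not taken from this diff or the referenced article.
endpoint = os.environ["AZUREAI_ENDPOINT_URL"]
key = os.environ["AZUREAI_ENDPOINT_KEY"]

# Matches the `model = ChatCompletionsClient(` context shown in the hunk header.
model = ChatCompletionsClient(
    endpoint=endpoint,
    credential=AzureKeyCredential(key),
)
```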
@@ -150,6 +152,8 @@ const client = new ModelClient(
 );
 ```
 
+Explore our [samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/ai/ai-inference-rest/samples) and read the [API reference documentation](https://aka.ms/AAp1kxa) to get yourself started.
+
 # [REST](#tab/rest)
 
 Use the reference section to explore the API design and which parameters are available. For example, the reference section for [Chat completions](reference-model-inference-chat-completions.md) details how to use the route `/chat/completions` to generate predictions based on chat-formatted instructions:
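The sentence above introduces the REST example for the `/chat/completions` route, which isn't itself part of this diff. As a rough illustration only, a call to that route could look like the sketch below; the endpoint, the `api-key` header name, and the `api-version` value are assumptions to verify against the linked Chat completions reference.

```python
import os

import requests  # generic HTTP client, used here only for illustration

endpoint = os.environ["AZUREAI_ENDPOINT_URL"]  # placeholder
key = os.environ["AZUREAI_ENDPOINT_KEY"]       # placeholder

# Header name and api-version are assumptions; confirm both in the
# Chat completions reference linked above.
response = requests.post(
    f"{endpoint}/chat/completions",
    params={"api-version": "2024-05-01-preview"},
    headers={"api-key": key, "Content-Type": "application/json"},
    json={
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "How many languages are in the world?"},
        ]
    },
)
print(response.json())
```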
-> When using Azure AI Inference SDK, using passing extra parameters using `model_extras` configures the request with `extra-parameters: pass-through` automatically for you.
+> When using Azure AI Inference SDK, using `model_extras` configures the request with `extra-parameters: pass-through` automatically for you.
 
 # [JavaScript](#tab/javascript)
 
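To make the `model_extras` note above concrete, here is a minimal Python sketch. The client setup uses placeholder environment variables, and the extra field `safe_mode` is a hypothetical, model-specific parameter chosen purely for illustration.

```python
import os

from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import UserMessage
from azure.core.credentials import AzureKeyCredential

model = ChatCompletionsClient(
    endpoint=os.environ["AZUREAI_ENDPOINT_URL"],   # placeholder
    credential=AzureKeyCredential(os.environ["AZUREAI_ENDPOINT_KEY"]),
)

# Per the note above, anything placed in `model_extras` is passed through to the
# model, and the SDK sets the `extra-parameters: pass-through` header for you.
# `safe_mode` is a hypothetical, model-specific field used only as an example.
response = model.complete(
    messages=[UserMessage(content="How many languages are in the world?")],
    model_extras={"safe_mode": True},
)
print(response.choices[0].message.content)
```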
@@ -255,9 +259,9 @@ The following example shows the response for a chat completion request indicatin
 # [Python](#tab/python)
 
 ```python
+import json
 from azure.ai.inference.models import SystemMessage, UserMessage, ChatCompletionsResponseFormat
 from azure.core.exceptions import HttpResponseError
-import json
 
 try:
     response = model.complete(
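The hunk above only moves `import json` to the top of the imports, but it hints at the surrounding pattern: the completion call sits in a `try`/`except HttpResponseError` block and `json` is used to work with the payload. A minimal sketch of that pattern follows, with a placeholder client setup and an illustrative way of reporting the error; the `ChatCompletionsResponseFormat` import from the full example is not exercised here.

```python
import json
import os

from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential
from azure.core.exceptions import HttpResponseError

model = ChatCompletionsClient(
    endpoint=os.environ["AZUREAI_ENDPOINT_URL"],   # placeholder
    credential=AzureKeyCredential(os.environ["AZUREAI_ENDPOINT_KEY"]),
)

try:
    response = model.complete(
        messages=[
            SystemMessage(content="You are a helpful assistant."),
            UserMessage(content="How many languages are in the world?"),
        ],
    )
    print(response.choices[0].message.content)
except HttpResponseError as ex:
    # Surface the failure as JSON; the fields chosen here (status_code, message)
    # are generic azure-core attributes, not logic taken from this diff.
    print(json.dumps({"status_code": ex.status_code, "message": ex.message}))
```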
@@ -371,6 +375,7 @@ The following example shows the response for a chat completion request that has
 
 ```python
 from azure.ai.inference.models import AssistantMessage, UserMessage, SystemMessage
+from azure.core.exceptions import HttpResponseError
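The last hunk adds only the `HttpResponseError` import, but together the imports (`AssistantMessage`, `UserMessage`, `SystemMessage`) show how a multi-turn conversation is represented. A brief sketch with an invented conversation and the same placeholder setup as above:

```python
import os

from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import AssistantMessage, SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential
from azure.core.exceptions import HttpResponseError

model = ChatCompletionsClient(
    endpoint=os.environ["AZUREAI_ENDPOINT_URL"],   # placeholder
    credential=AzureKeyCredential(os.environ["AZUREAI_ENDPOINT_KEY"]),
)

try:
    # Earlier turns are replayed as message objects; this conversation is invented.
    response = model.complete(
        messages=[
            SystemMessage(content="You are a helpful assistant."),
            UserMessage(content="Suggest a name for a hiking club."),
            AssistantMessage(content="How about 'Trailblazers'?"),
            UserMessage(content="Give me two more options."),
        ],
    )
    print(response.choices[0].message.content)
except HttpResponseError as ex:
    print(f"Request failed: {ex.message}")
```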