```python
import os

from azure.ai.inference import ChatCompletionsClient
from azure.identity import DefaultAzureCredential

client = ChatCompletionsClient(
    endpoint=os.environ["AZUREAI_ENDPOINT_URL"],
    credential=DefaultAzureCredential(),
)
```
Explore our [samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/ai/ai-inference-rest/samples) and read the [API reference documentation](https://aka.ms/azsdk/azure-ai-inference/js/reference) to get started.

# [C#](#tab/csharp)
Install the Azure AI inference library with the following command:
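A minimal sketch of the install command, assuming the NuGet package name `Azure.AI.Inference` (the `--prerelease` flag is an assumption, needed only while the package is in preview):

```shell
# Add the Azure AI inference client library to the current .NET project
dotnet add package Azure.AI.Inference --prerelease
```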
Explore our [samples](https://aka.ms/azsdk/azure-ai-inference/csharp/samples) and read the [API reference documentation](https://aka.ms/azsdk/azure-ai-inference/csharp/reference) to get started.

# [REST](#tab/rest)
Use the reference section to explore the API design and which parameters are available. For example, the reference section for [Chat completions](reference-model-inference-chat-completions.md) details how to use the route `/chat/completions` to generate predictions based on chat-formatted instructions:
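As a sketch, a chat completions request to this route might look as follows. The host name, token, and `api-version` value are placeholders, not taken from this article; the message payload mirrors the examples used in the other tabs:

```http
POST /chat/completions?api-version=<api-version> HTTP/1.1
Host: <your-endpoint>.inference.ai.azure.com
Authorization: Bearer <token>
Content-Type: application/json

{
    "messages": [
        { "role": "system", "content": "You are a helpful assistant." },
        { "role": "user", "content": "How many languages are in the world?" }
    ]
}
```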
The following example shows a request passing the parameter `safe_prompt` supported by Mistral models:

# [Python](#tab/python)
```python
response = client.complete(
    messages=[
        SystemMessage(content="You are a helpful assistant."),
        UserMessage(content="How many languages are in the world?"),
    ],
    # `safe_prompt` is not part of the standard API surface; it is passed
    # through to the model as an extra, model-specific parameter.
    model_extras={"safe_prompt": True},
)
```
```javascript
console.log(response.body.choices[0].message.content);
```

# [C#](#tab/csharp)

```csharp
var requestOptions = new ChatCompletionsOptions()
{
    Messages = {
        new ChatRequestSystemMessage("You are a helpful assistant."),
        new ChatRequestUserMessage("How many languages are in the world?")
    },
};
```

```csharp
        Console.WriteLine($"Your query has triggered Azure AI Content Safety: {ex.Message}");
    }
    else
    {
        throw;
    }
}
```

# [REST](#tab/rest)
__Request__
The client library `@azure-rest/ai-inference` does inference, including chat completions, for AI models deployed by Azure AI Studio and Azure Machine Learning Studio. It supports Serverless API endpoints and Managed Compute endpoints (formerly known as Managed Online Endpoints).

Explore our [samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/ai/ai-inference-rest/samples) and read the [API reference documentation](https://aka.ms/azsdk/azure-ai-inference/js/reference) to get started.

# [C#](#tab/csharp)
The client library `Azure.AI.Inference` does inference, including chat completions, for AI models deployed by Azure AI Studio and Azure Machine Learning Studio. It supports Serverless API endpoints and Managed Compute endpoints (formerly known as Managed Online Endpoints).
Explore our [samples](https://aka.ms/azsdk/azure-ai-inference/csharp/samples) and read the [API reference documentation](https://aka.ms/azsdk/azure-ai-inference/csharp/reference) to get started.

# [REST](#tab/rest)
Explore the reference section of the Azure AI model inference API to see parameters and options to consume models, including chat completions models, deployed by Azure AI Studio and Azure Machine Learning Studio. It supports Serverless API endpoints and Managed Compute endpoints (formerly known as Managed Online Endpoints).