articles/ai-studio/reference/reference-model-inference-api.md
126 additions & 1 deletion
@@ -56,7 +56,7 @@ Models deployed to [managed inference](../concepts/deployments-overview.md):
> [!div class="checklist"]
> * [Meta Llama 3 instruct](../how-to/deploy-models-llama.md) family of models
> * [Phi-3](../how-to/deploy-models-phi-3.md) family of models
-> * Mixtral famility of models
+> * [Mistral](../how-to/deploy-models-mistral-open.md) and [Mixtral](../how-to/deploy-models-mistral-open.md?tabs=mistral-8x7B-instruct) family of models.

The API is compatible with Azure OpenAI model deployments.
@@ -154,6 +154,48 @@ const client = new ModelClient(
Explore our [samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/ai/ai-inference-rest/samples) and read the [API reference documentation](https://aka.ms/AAp1kxa) to get yourself started.

# [C#](#tab/csharp)

Install the Azure AI inference library with the following command:
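The command itself was not captured in this view of the diff. As a sketch, assuming the library ships as the `Azure.AI.Inference` NuGet package named later in this article (prerelease at the time of writing), installation would look like:

```shell
# Assumption: package id Azure.AI.Inference; --prerelease is needed while the SDK is in beta
dotnet add package Azure.AI.Inference --prerelease
```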
Explore our [samples](https://aka.ms/azsdk/azure-ai-inference/csharp/samples) and read the [API reference documentation](https://aka.ms/azsdk/azure-ai-inference/csharp/reference) to get yourself started.
# [REST](#tab/rest)
Use the reference section to explore the API design and which parameters are available. For example, the reference section for [Chat completions](reference-model-inference-chat-completions.md) details how to use the route `/chat/completions` to generate predictions based on chat-formatted instructions:
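For illustration, a minimal request against that route might look like the following sketch. The `api-version` value, host placeholder, and header names here are assumptions; the [Chat completions](reference-model-inference-chat-completions.md) reference is the authoritative source for the request shape:

```http
POST https://<your-endpoint>/chat/completions?api-version=2024-05-01-preview
Authorization: Bearer <your-key>
Content-Type: application/json

{
  "messages": [
    { "role": "system", "content": "You are a helpful assistant." },
    { "role": "user", "content": "How many languages are in the world?" }
  ]
}
```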
@@ -215,6 +257,22 @@ var response = await client.path("/chat/completions").post({
console.log(response.choices[0].message.content)
```

# [C#](#tab/csharp)

```csharp
var requestOptions = new ChatCompletionsOptions()
{
    Messages = {
        new ChatRequestSystemMessage("You are a helpful assistant."),
        new ChatRequestUserMessage("How many languages are in the world?")
    // ... (diff lines between the request setup and the exception handler were not captured in this view) ...
        Console.WriteLine($"Your query has triggered Azure Content Safety: {ex.Message}");
    }
    else
    {
        throw;
    }
}
```

# [REST](#tab/rest)
__Request__
@@ -485,6 +604,12 @@ The client library `@azure-rest/ai-inference` does inference, including chat com
Explore our [samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/ai/ai-inference-rest/samples) and read the [API reference documentation](https://aka.ms/AAp1kxa) to get yourself started.

# [C#](#tab/csharp)

The client library `Azure.AI.Inference` does inference, including chat completions, for AI models deployed by Azure AI Studio and Azure Machine Learning Studio. It supports Serverless API endpoints and Managed Compute endpoints (formerly known as Managed Online Endpoints).
Explore our [samples](https://aka.ms/azsdk/azure-ai-inference/csharp/samples) and read the [API reference documentation](https://aka.ms/azsdk/azure-ai-inference/csharp/reference) to get yourself started.
# [REST](#tab/rest)
Explore the reference section of the Azure AI model inference API to see parameters and options to consume models, including chat completions models, deployed by Azure AI Studio and Azure Machine Learning Studio. It supports Serverless API endpoints and Managed Compute endpoints (formerly known as Managed Online Endpoints).