Commit d4a8f64
Merge pull request #2707 from santiagxf/santiagxf-patch-1
Update code-create-chat-client-entra.md
2 parents 67d45ff + 4741a15 commit d4a8f64

File tree

1 file changed: +18 −5 lines


articles/ai-foundry/model-inference/includes/code-create-chat-client-entra.md

Lines changed: 18 additions & 5 deletions
@@ -25,9 +25,10 @@ import os
 from azure.ai.inference import ChatCompletionsClient
 from azure.identity import DefaultAzureCredential
 
-model = ChatCompletionsClient(
+client = ChatCompletionsClient(
     endpoint="https://<resource>.services.ai.azure.com/models",
     credential=DefaultAzureCredential(),
+    credential_scopes=["https://cognitiveservices.azure.com/.default"],
     model="mistral-large-2407",
 )
 ```
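A note on the scope the Python change adds: the value passed to `credential_scopes` follows the Entra ID convention of appending `/.default` to a resource audience, which requests a token covering the application's statically consented permissions. A minimal plain-Python sketch of that convention (the helper name is my own, not part of the SDK):

```python
# Hedged sketch: how the credential scope in the + line above is formed.
# The audience identifies the Azure Cognitive Services resource family;
# "/.default" asks Entra ID for all statically consented permissions.
AUDIENCE = "https://cognitiveservices.azure.com"

def default_scope(audience: str) -> str:
    """Build an Entra ID '.default' scope from a resource audience."""
    return audience.rstrip("/") + "/.default"

scope = default_scope(AUDIENCE)
print(scope)  # https://cognitiveservices.azure.com/.default
```

The same scope string reappears in the JavaScript, C#, and REST changes below; only the mechanism for passing it differs per SDK.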
@@ -47,10 +48,13 @@ import ModelClient from "@azure-rest/ai-inference";
 import { isUnexpected } from "@azure-rest/ai-inference";
 import { DefaultAzureCredential } from "@azure/identity";
 
+const clientOptions = { credentials: { scopes: ["https://cognitiveservices.azure.com/.default"] } };
+
 const client = new ModelClient(
     "https://<resource>.services.ai.azure.com/models",
     new DefaultAzureCredential(),
-    "mistral-large-2407"
+    "mistral-large-2407",
+    clientOptions,
 );
 ```

@@ -79,10 +83,16 @@ using Azure.AI.Inference;
 Then, you can use the package to consume the model. The following example shows how to create a client to consume chat completions with Entra ID:
 
 ```csharp
+var credential = new DefaultAzureCredential();
+AzureAIInferenceClientOptions clientOptions = new AzureAIInferenceClientOptions();
+BearerTokenAuthenticationPolicy tokenPolicy = new BearerTokenAuthenticationPolicy(credential, new string[] { "https://cognitiveservices.azure.com/.default" });
+clientOptions.AddPolicy(tokenPolicy, HttpPipelinePosition.PerRetry);
+
 ChatCompletionsClient client = new ChatCompletionsClient(
     new Uri("https://<resource>.services.ai.azure.com/models"),
-    new DefaultAzureCredential(includeInteractiveCredentials: true),
-    "mistral-large-2407"
+    credential,
+    "mistral-large-2407",
+    clientOptions
 );
 ```
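The C# change routes authentication through a `BearerTokenAuthenticationPolicy` installed per retry, so every request attempt carries a token for the configured scope. A rough plain-Python illustration of that idea (this is not the Azure SDK; the class and names here are hypothetical, and the real Azure.Core policy also caches and refreshes tokens):

```python
# Hypothetical sketch of a per-attempt bearer-token policy: before each
# request attempt, fetch a token for the configured scopes and attach it
# as an Authorization header. Token caching/refresh is omitted.
from typing import Callable, Dict, List

class BearerTokenPolicy:
    def __init__(self, get_token: Callable[[List[str]], str], scopes: List[str]):
        self._get_token = get_token  # e.g. a credential's token method
        self._scopes = scopes

    def on_request(self, headers: Dict[str, str]) -> Dict[str, str]:
        """Return a copy of the headers with a bearer token attached."""
        out = dict(headers)
        out["Authorization"] = "Bearer " + self._get_token(self._scopes)
        return out

# Usage with a stub token source:
policy = BearerTokenPolicy(lambda scopes: "token-for:" + scopes[0],
                           ["https://cognitiveservices.azure.com/.default"])
sent = policy.on_request({"Content-Type": "application/json"})
```

Running the policy per retry (rather than once per logical request) matters because a token can expire between attempts of a long retry sequence.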

@@ -106,8 +116,9 @@ Add the package to your project:
 Then, you can use the package to consume the model. The following example shows how to create a client to consume chat completions:
 
 ```java
+TokenCredential defaultCredential = new DefaultAzureCredentialBuilder().build();
 ChatCompletionsClient client = new ChatCompletionsClientBuilder()
-    .credential(new DefaultAzureCredential()))
+    .credential(defaultCredential)
     .endpoint("https://<resource>.services.ai.azure.com/models")
     .model("mistral-large-2407")
     .buildClient();
@@ -127,6 +138,8 @@ Authorization: Bearer <bearer-token>
 Content-Type: application/json
 ```
 
+Tokens must be issued with the scope `https://cognitiveservices.azure.com/.default`.
+
 For testing purposes, the easiest way to get a valid token for your user account is to use the Azure CLI. In a console, run the following Azure CLI command:
 
 ```azurecli
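Putting the REST pieces together: once a token with that scope is in hand, the call is an ordinary HTTPS POST carrying the two headers shown above. A small sketch of assembling such a request (plain Python; the token and model are placeholders, and nothing is actually sent):

```python
import json

def build_chat_request(token: str, model: str, user_message: str):
    """Assemble headers and JSON body for a chat completions call over REST."""
    headers = {
        "Authorization": f"Bearer {token}",  # token issued for the .default scope
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    })
    return headers, body

headers, body = build_chat_request("<bearer-token>", "mistral-large-2407", "Hello")
```

In practice the token string would come from the Azure CLI command above or from an Entra ID credential flow, and the body schema should be checked against the model inference API reference.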
