
Commit 89e0bdf

Merge pull request #2716 from MicrosoftDocs/main
02/03/2025 PM Publishing
2 parents 849dc26 + 3a9e993 commit 89e0bdf

33 files changed: +365 -111 lines changed

articles/ai-foundry/model-inference/includes/code-create-chat-client-entra.md

Lines changed: 18 additions & 5 deletions
@@ -25,9 +25,10 @@ import os
 from azure.ai.inference import ChatCompletionsClient
 from azure.identity import DefaultAzureCredential
 
-model = ChatCompletionsClient(
+client = ChatCompletionsClient(
     endpoint="https://<resource>.services.ai.azure.com/models",
     credential=DefaultAzureCredential(),
+    credential_scopes=["https://cognitiveservices.azure.com/.default"],
     model="mistral-large-2407",
 )
 ```
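
For reference, a minimal end-to-end sketch of how the corrected Python client could be used to send a chat request. The client creation matches the hunk above; the message text and the printed field are illustrative assumptions, not part of this commit:

```python
import os

from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.identity import DefaultAzureCredential

# Create the client with Microsoft Entra ID, including the token scope
# required by the Azure AI model inference endpoint (from the change above).
client = ChatCompletionsClient(
    endpoint="https://<resource>.services.ai.azure.com/models",
    credential=DefaultAzureCredential(),
    credential_scopes=["https://cognitiveservices.azure.com/.default"],
    model="mistral-large-2407",
)

# Send a simple chat completion request; the prompt is an illustrative placeholder.
response = client.complete(
    messages=[
        SystemMessage(content="You are a helpful assistant."),
        UserMessage(content="Explain Microsoft Entra ID in one sentence."),
    ]
)

print(response.choices[0].message.content)
```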
@@ -47,10 +48,13 @@ import ModelClient from "@azure-rest/ai-inference";
 import { isUnexpected } from "@azure-rest/ai-inference";
 import { DefaultAzureCredential } from "@azure/identity";
 
+const clientOptions = { credentials: { scopes: ["https://cognitiveservices.azure.com/.default"] } };
+
 const client = new ModelClient(
     "https://<resource>.services.ai.azure.com/models",
     new DefaultAzureCredential(),
-    "mistral-large-2407"
+    "mistral-large-2407",
+    clientOptions,
 );
 ```
 

@@ -79,10 +83,16 @@ using Azure.AI.Inference;
 Then, you can use the package to consume the model. The following example shows how to create a client to consume chat completions with Entra ID:
 
 ```csharp
+var credential = new DefaultAzureCredential();
+AzureAIInferenceClientOptions clientOptions = new AzureAIInferenceClientOptions();
+BearerTokenAuthenticationPolicy tokenPolicy = new BearerTokenAuthenticationPolicy(credential, new string[] { "https://cognitiveservices.azure.com/.default" });
+clientOptions.AddPolicy(tokenPolicy, HttpPipelinePosition.PerRetry);
+
 ChatCompletionsClient client = new ChatCompletionsClient(
     new Uri("https://<resource>.services.ai.azure.com/models"),
-    new DefaultAzureCredential(includeInteractiveCredentials: true),
-    "mistral-large-2407"
+    credential,
+    "mistral-large-2407",
+    clientOptions
 );
 ```
 

@@ -106,8 +116,9 @@ Add the package to your project:
 Then, you can use the package to consume the model. The following example shows how to create a client to consume chat completions:
 
 ```java
+TokenCredential defaultCredential = new DefaultAzureCredentialBuilder().build();
 ChatCompletionsClient client = new ChatCompletionsClientBuilder()
-    .credential(new DefaultAzureCredential()))
+    .credential(defaultCredential)
     .endpoint("https://<resource>.services.ai.azure.com/models")
     .model("mistral-large-2407")
     .buildClient();
@@ -127,6 +138,8 @@ Authorization: Bearer <bearer-token>
 Content-Type: application/json
 ```
 
+Tokens have to be issued with scope `https://cognitiveservices.azure.com/.default`.
+
 For testing purposes, the easiest way to get a valid token for your user account is to use the Azure CLI. In a console, run the following Azure CLI command:
 
 ```azurecli

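As a side note on the scope requirement added above, a minimal Python sketch (using `azure-identity`, which the snippets above already depend on) of how a token with that scope could be requested manually; it is not part of this commit:

```python
from azure.identity import DefaultAzureCredential

# Request an access token for the Azure AI model inference endpoint.
# The scope below is the one the documentation says tokens must carry.
credential = DefaultAzureCredential()
token = credential.get_token("https://cognitiveservices.azure.com/.default")

# The bearer token is then sent in the Authorization header of each request.
print(f"Authorization: Bearer {token.token[:16]}...")
```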
articles/ai-foundry/model-inference/includes/configure-entra-id/bicep.md

Lines changed: 4 additions & 2 deletions
@@ -100,8 +100,10 @@ Once you configured Microsoft Entra ID in your resource, you need to update your
 
 [!INCLUDE [about-credentials](about-credentials.md)]
 
+## Disable key-based authentication in the resource
 
+Disabling key-based authentication is advisable when you implemented Microsoft Entra ID and fully addressed compatibility or fallback concerns in all the applications that consume the service. You can achieve it by changing the property `disableLocalAuth`:
 
-## Disable key-based authentication in the resource
+__modules/ai-services-template.bicep__
 
-Disabling key-based authentication is advisable when you implemented Microsoft Entra ID and fully addressed compatibility or fallback concerns in all the applications that consume the service.
+:::code language="bicep" source="~/azureai-model-inference-bicep/infra/modules/ai-services-template.bicep" highlight="10-11,42":::

articles/ai-services/agents/includes/quickstart-javascript.md

Lines changed: 9 additions & 3 deletions
@@ -4,7 +4,7 @@ author: aahill
 ms.author: aahi
 ms.service: azure-ai-agent-service
 ms.topic: include
-ms.date: 01/28/2025
+ms.date: 02/03/2025
 ms.custom: devx-track-js
 ---
 
@@ -38,6 +38,12 @@ npm install @azure/ai-projects
 npm install @azure/identity
 ```
 
+Next, to authenticate your API requests and run the program, use the [az login](/cli/azure/authenticate-azure-cli-interactively) command to sign into your Azure subscription.
+
+```azurecli
+az login
+```
+
 Use the following code to create and run an agent. To run this code, you will need to create a connection string using information from your project. This string is in the format:
 
 `<HostName>;<AzureSubscriptionId>;<ResourceGroup>;<ProjectName>`
@@ -87,7 +93,7 @@ export async function main() {
   const codeInterpreterTool = ToolUtility.createCodeInterpreterTool();
 
   // Step 2 an agent
-  const agent = await client.agents.createAgent("gpt-35-turbo", {
+  const agent = await client.agents.createAgent("gpt-4o-mini", {
     name: "my-agent",
     instructions: "You are a helpful agent",
     tools: [codeInterpreterTool.definition],
@@ -143,7 +149,7 @@ export async function main() {
   // messages[0] is the most recent
   for (let i = messages.data.length - 1; i >= 0; i--) {
     const m = messages.data[i];
-    if (isOutputOfType<MessageTextContentOutput>(m.content[0], "text")) {
+    if (isOutputOfType(m.content[0], "text")) {
      const textContent = m.content[0];
      console.log(`${textContent.text.value}`);
      console.log(`---------------------------------`);

articles/ai-services/agents/includes/quickstart-typescript.md

Lines changed: 8 additions & 2 deletions
@@ -4,7 +4,7 @@ author: aahill
 ms.author: aahi
 ms.service: azure-ai-agent-service
 ms.topic: include
-ms.date: 01/28/2025
+ms.date: 02/03/2025
 ms.custom: devx-track-ts
 ---
 
@@ -39,6 +39,12 @@ npm install @azure/ai-projects
 npm install @azure/identity
 ```
 
+Next, to authenticate your API requests and run the program, use the [az login](/cli/azure/authenticate-azure-cli-interactively) command to sign into your Azure subscription.
+
+```azurecli
+az login
+```
+
 Use the following code to create and run an agent. To run this code, you will need to create a connection string using information from your project. This string is in the format:
 
 `<HostName>;<AzureSubscriptionId>;<ResourceGroup>;<ProjectName>`
@@ -93,7 +99,7 @@ export async function main(): Promise<void> {
   const codeInterpreterTool = ToolUtility.createCodeInterpreterTool();
 
   // Step 2: Create an agent
-  const agent = await client.agents.createAgent("gpt-35-turbo", {
+  const agent = await client.agents.createAgent("gpt-4o-mini", {
     name: "my-agent",
     instructions: "You are a helpful agent",
     tools: [codeInterpreterTool.definition],

articles/ai-services/computer-vision/spatial-analysis-container.md

Lines changed: 2 additions & 2 deletions
@@ -159,9 +159,9 @@ Follow these steps to remotely connect from a Windows client.
 
 Follow these instructions if your host computer isn't an Azure Stack Edge device.
 
-#### Install NVIDIA CUDA Toolkit and Nvidia graphics drivers on the host computer
+#### Install NVIDIA CUDA Toolkit and NVIDIA graphics drivers on the host computer
 
-Use the following bash script to install the required Nvidia graphics drivers, and CUDA Toolkit.
+Use the following bash script to install the required NVIDIA graphics drivers, and CUDA Toolkit.
 
 ```bash
 wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-ubuntu1804.pin

articles/ai-services/content-safety/how-to/containers/install-run-container.md

Lines changed: 1 addition & 1 deletion
@@ -86,7 +86,7 @@ Even with identical GPUs, performance can fluctuate based on the GPU load and th
 
 ## Install the NVIDIA container toolkit
 
-The `host` is the computer that runs the docker container. The host must support Nvidia container toolkit. Follow the below guidance to install the toolkit in your environment.
+The `host` is the computer that runs the docker container. The host must support NVIDIA container toolkit. Follow the below guidance to install the toolkit in your environment.
 
 [Install the NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html)
 

articles/ai-services/document-intelligence/train/custom-labels.md

Lines changed: 1 addition & 1 deletion
@@ -41,7 +41,7 @@ A labeled dataset consists of several files:
 
 * We explore how to create a balanced data set and select the right documents to label. This process sets you on the path to higher quality models.
 
-> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RWWHru]
+> [!VIDEO 1190c010-ef3e-4cc6-8ffc-6d896fbb9711]
 
 ## Create a balanced dataset
 
articles/ai-services/language-service/concepts/model-lifecycle.md

Lines changed: 2 additions & 2 deletions
@@ -41,8 +41,8 @@ Use the table below to find which model versions are supported by each feature:
 | Sentiment Analysis and opinion mining | `latest*` | |
 | Language Detection | `latest*` | |
 | Entity Linking | `latest*` | |
-| Named Entity Recognition (NER) | `latest*` | `2023-04-15-preview**` |
-| Personally Identifiable Information (PII) detection | `latest*` | `2023-04-15-preview**` |
+| Named Entity Recognition (NER) | `latest*` | `2024-04-15-preview**` |
+| Personally Identifiable Information (PII) detection | `latest*` | `2024-04-15-preview**` |
 | PII detection for conversations | `latest*` | `2024-11-01-preview**` |
 | Question answering | `latest*` | |
 | Text Analytics for health | `latest*` | `2022-08-15-preview`, `2023-01-01-preview**`|
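
Where the model version matters in code, it can be pinned explicitly. A rough sketch with the `azure-ai-textanalytics` Python SDK, not part of this commit; the endpoint, key, and sample text are placeholder assumptions:

```python
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

# Placeholder endpoint and key; replace with your Language resource values.
client = TextAnalyticsClient(
    endpoint="https://<resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<key>"),
)

# Pin NER to the preview model version listed in the table above;
# omit model_version (or pass "latest") to use the latest GA model.
result = client.recognize_entities(
    ["Contoso was founded in Seattle."],
    model_version="2024-04-15-preview",
)

for doc in result:
    if not doc.is_error:
        for entity in doc.entities:
            print(entity.text, entity.category)
```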

articles/ai-services/language-service/summarization/how-to/use-containers.md

Lines changed: 1 addition & 1 deletion
@@ -35,7 +35,7 @@ The following table describes the minimum and recommended specifications for the
 | Container Type | Recommended number of CPU cores | Recommended memory | Notes |
 |----------------------------|----------------------------------|--------------------|-------|
 | Summarization CPU container| 16 | 48 GB | |
-| Summarization GPU container| 2 | 24 GB | Requires an Nvidia GPU that supports Cuda 11.8 with 16GB VRAM.|
+| Summarization GPU container| 2 | 24 GB | Requires an NVIDIA GPU that supports Cuda 11.8 with 16GB VRAM.|
 
 CPU core and memory correspond to the `--cpus` and `--memory` settings, which are used as part of the `docker run` command.
 
articles/ai-services/openai/how-to/code-interpreter.md

Lines changed: 7 additions & 2 deletions
@@ -6,7 +6,7 @@ services: cognitive-services
 manager: nitinme
 ms.service: azure-ai-openai
 ms.topic: how-to
-ms.date: 01/28/2025
+ms.date: 02/03/2025
 author: aahill
 ms.author: aahi
 recommendations: false
@@ -104,7 +104,12 @@ curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/assistants?api-version=2
     "tools": [
      { "type": "code_interpreter" }
     ],
-    "model": "gpt-4-1106-preview"
+    "model": "gpt-4-1106-preview",
+    "tool_resources": {
+      "code_interpreter": {
+        "file_ids": ["assistant-123abc456"]
+      }
+    }
   }'
 ```
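
The same request can also be made from the OpenAI Python SDK against Azure OpenAI. A rough sketch only, not part of this commit; the API key, API version, and file ID are placeholder assumptions:

```python
import os

from openai import AzureOpenAI

# Placeholder connection details; set these for your Azure OpenAI resource.
client = AzureOpenAI(
    azure_endpoint="https://YOUR_RESOURCE_NAME.openai.azure.com",
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-05-01-preview",
)

# Create an assistant with the code interpreter tool and attach a file to it,
# mirroring the "tool_resources" block in the curl request above.
assistant = client.beta.assistants.create(
    model="gpt-4-1106-preview",
    tools=[{"type": "code_interpreter"}],
    tool_resources={"code_interpreter": {"file_ids": ["assistant-123abc456"]}},
)

print(assistant.id)
```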
