
Commit 8d0b994

Merge pull request #2552 from MicrosoftDocs/main

01/28/2025 AM Publishing

2 parents 5c1ad19 + f3ad5d1

File tree

13 files changed: +36 −17 lines

articles/ai-foundry/model-inference/concepts/deployment-types.md

Lines changed: 3 additions & 2 deletions

@@ -31,12 +31,13 @@ To learn more about deployment options for Azure OpenAI models see [Azure OpenAI
 
 Models from third-party model providers with pay-as-you-go billing (collectively called Models-as-a-Service) are made available in Azure AI model inference under **standard** deployments with a Global processing option (`Global-Standard`).
 
-Models-as-a-Service offers regional deployment options under [Serverless API endpoints](../../../ai-studio/how-to/deploy-models-serverless.md) in Azure AI Foundry. Prompts and outputs are processed within the geography specified during deployment. However, those deployments can't be accessed using the Azure AI model inference endpoint in Azure AI Services.
-
 ### Global-Standard
 
 Global deployments leverage Azure's global infrastructure to dynamically route traffic to the data center with the best availability for each request. Global standard provides the highest default quota and eliminates the need to load balance across multiple resources. Data stored at rest remains in the designated Azure geography, while data may be processed for inferencing in any Azure location. Learn more about [data residency](https://azure.microsoft.com/explore/global-infrastructure/data-residency/).
 
+> [!NOTE]
+> Models-as-a-Service offers regional deployment options under [Serverless API endpoints](../../../ai-studio/how-to/deploy-models-serverless.md) in Azure AI Foundry. Prompts and outputs are processed within the geography specified during deployment. However, those deployments can't be accessed using the Azure AI model inference endpoint in Azure AI Services.
+
 ## Control deployment options
 
 Administrators can control which model deployment types are available to their users by using Azure Policies. Learn more about [How to control AI model deployment with custom policies](../../../ai-studio/how-to/custom-policy-model-deployment.md).

articles/ai-foundry/model-inference/faq.yml

Lines changed: 1 addition & 1 deletion

@@ -5,7 +5,7 @@ metadata:
   description: Get answers to the most popular questions about Azure AI model inference
   #services: cognitive-services
   manager: nitinme
-  ms.service: azure-ai-models
+  ms.service: azure-ai-model-inference
   ms.topic: faq
   ms.date: 1/21/2025
   ms.author: fasantia

articles/ai-foundry/model-inference/index.yml

Lines changed: 1 addition & 1 deletion

@@ -6,7 +6,7 @@ summary: Azure AI model inference provides access to the most powerful models av
 metadata:
   title: Azure AI model inference documentation - Quickstarts, How-to's, API Reference - Azure AI Foundry | Microsoft Docs
   description: Learn how to use flagship models available in the Azure AI model catalog from the key model providers in the industry, including OpenAI, Microsoft, Meta, Mistral, Cohere, G42, and AI21 Labs.
-  ms.service: azure-ai-models
+  ms.service: azure-ai-model-inference
   ms.custom:
   ms.topic: landing-page
   author: mrbullwinkle

articles/ai-services/agents/includes/quickstart-csharp.md

Lines changed: 7 additions & 0 deletions

@@ -38,6 +38,12 @@ dotnet add package Azure.AI.Projects
 dotnet add package Azure.Identity
 ```
 
+Next, to authenticate your API requests and run the program, use the [az login](/cli/azure/authenticate-azure-cli-interactively) command to sign into your Azure subscription.
+
+```azurecli
+az login
+```
+
 Use the following code to create and run an agent. To run this code, you will need to create a connection string using information from your project. This string is in the format:
 
 `<HostName>;<AzureSubscriptionId>;<ResourceGroup>;<ProjectName>`
@@ -56,6 +62,7 @@ For example, your connection string may look something like:
 
 Set this connection string as an environment variable named `PROJECT_CONNECTION_STRING`.
 
+
 ```csharp
 // Copyright (c) Microsoft Corporation. All rights reserved.
 // Licensed under the MIT License.
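The connection string described in this diff is a plain semicolon-delimited value. As a rough illustration (a standalone sketch, not part of the Azure.AI.Projects SDK; all values below are placeholders), it can be stored in `PROJECT_CONNECTION_STRING` and pulled apart like this:

```python
import os

def parse_connection_string(conn_str: str) -> dict:
    """Split a project connection string of the form
    <HostName>;<AzureSubscriptionId>;<ResourceGroup>;<ProjectName>
    into named parts. Illustrative helper only, not an SDK function."""
    parts = conn_str.split(";")
    if len(parts) != 4:
        raise ValueError("expected 4 semicolon-separated fields")
    keys = ("host_name", "subscription_id", "resource_group", "project_name")
    return dict(zip(keys, parts))

# Placeholder values, not a real project:
example = "eastus.api.azureml.ms;00000000-0000-0000-0000-000000000000;my-rg;my-project"
os.environ["PROJECT_CONNECTION_STRING"] = example

parsed = parse_connection_string(os.environ["PROJECT_CONNECTION_STRING"])
print(parsed["project_name"])  # my-project
```

The SDK consumes the string whole; splitting it is only useful for validating that all four fields are present before running the quickstart.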

articles/ai-services/agents/includes/quickstart-python-openai.md

Lines changed: 6 additions & 0 deletions

@@ -41,6 +41,12 @@ pip install azure-identity
 pip install openai
 ```
 
+Next, to authenticate your API requests and run the program, use the [az login](/cli/azure/authenticate-azure-cli-interactively) command to sign into your Azure subscription.
+
+```azurecli
+az login
+```
+
 Use the following code to create and run an agent. To run this code, you will need to create a connection string using information from your project. This string is in the format:
 
 `<HostName>;<AzureSubscriptionId>;<ResourceGroup>;<ProjectName>`

articles/ai-services/agents/includes/quickstart-python.md

Lines changed: 6 additions & 1 deletion

@@ -38,6 +38,11 @@ Run the following commands to install the python packages.
 pip install azure-ai-projects
 pip install azure-identity
 ```
+Next, to authenticate your API requests and run the program, use the [az login](/cli/azure/authenticate-azure-cli-interactively) command to sign into your Azure subscription.
+
+```azurecli
+az login
+```
 
 Use the following code to create and run an agent. To run this code, you will need to create a connection string using information from your project. This string is in the format:
 
@@ -115,7 +120,7 @@ with project_client:
     print(f"Messages: {messages}")
 
     # Get the last message from the sender
-    last_msg = messages.get_last_text_message_by_sender("assistant")
+    last_msg = messages.get_last_text_message_by_role("assistant")
     if last_msg:
         print(f"Last Message: {last_msg.text.value}")
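The fix in the second hunk renames the accessor to `get_last_text_message_by_role`, which reads as "the most recent text message authored by the given role". A standalone sketch of that behavior, with plain dicts standing in for SDK message objects (this is not the Azure AI Projects implementation):

```python
def get_last_text_message_by_role(messages: list[dict], role: str):
    """Return the most recent message whose 'role' matches, or None.
    Illustrative stand-in for the SDK method of the same name."""
    for msg in reversed(messages):  # newest message last, so walk backward
        if msg.get("role") == role:
            return msg
    return None

thread = [
    {"role": "user", "text": "Hi"},
    {"role": "assistant", "text": "Hello!"},
    {"role": "user", "text": "Thanks"},
]
last = get_last_text_message_by_role(thread, "assistant")
print(last["text"])  # Hello!
```

Keying on the message's role ("assistant", "user") rather than a sender name is what makes the renamed method work regardless of how the agent is labeled.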

articles/ai-services/immersive-reader/how-to-cache-token.md

Lines changed: 1 addition & 1 deletion

@@ -61,7 +61,7 @@ public async Task<string> GetTokenAsync()
 
 The `AuthenticationResult` object has an `AccessToken` property, which is the actual token you use when launching the Immersive Reader using the SDK. It also has an `ExpiresOn` property that denotes when the token expires. Before launching the Immersive Reader, you can check whether the token is expired, and acquire a new token only if it has expired.
 
-## Using Node.JS
+## Using Node.js
 
 Add the [request](https://www.npmjs.com/package/request) npm package to your project. Use the following code to acquire a token, using the authentication values you got when you [created the Immersive Reader resource](./how-to-create-immersive-reader.md).
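The caching pattern this article describes (store the token, compare `ExpiresOn` before reuse, reacquire only when needed) can be sketched as follows. The `fetch_token` callable and the five-minute refresh margin are assumptions for illustration, not part of the Immersive Reader SDK:

```python
import time

class TokenCache:
    """Cache a token and refresh it only when it is (nearly) expired."""

    def __init__(self, fetch_token, margin_seconds=300):
        self._fetch = fetch_token      # callable returning (token, expires_on_unix)
        self._margin = margin_seconds  # refresh slightly early as a safety margin
        self._token = None
        self._expires_on = 0.0

    def get(self):
        # Reacquire only if we have no token or it is about to expire.
        if self._token is None or time.time() >= self._expires_on - self._margin:
            self._token, self._expires_on = self._fetch()
        return self._token

# Fake fetcher to demonstrate that repeated calls reuse the cached token.
calls = []
def fake_fetch():
    calls.append(1)
    return f"token-{len(calls)}", time.time() + 3600  # valid for one hour

cache = TokenCache(fake_fetch)
a = cache.get()
b = cache.get()
print(a == b, len(calls))  # True 1  (second call reused the cached token)
```

The early-refresh margin matters in practice: a token that expires mid-launch fails anyway, so treating a nearly expired token as expired avoids that edge case.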

articles/search/hybrid-search-how-to-query.md

Lines changed: 1 addition & 1 deletion

@@ -226,7 +226,7 @@ POST https://{{search-service-name}}.search.windows.net/indexes/{{index-name}}/d
 
 ## Semantic hybrid search
 
-Assuming that you [enabled semantic ranker](semantic-how-to-enable-disable.md) and your index definition includes a [semantic configuration](semantic-how-to-query-request.md), you can formulate a query that includes vector search and keyword search, with semantic ranking over the merged result set. Optionally, you can add captions and answers.
+Assuming that you [have semantic ranker](semantic-how-to-enable-disable.md) and your index definition includes a [semantic configuration](semantic-how-to-query-request.md), you can formulate a query that includes vector search and keyword search, with semantic ranking over the merged result set. Optionally, you can add captions and answers.
 
 Whenever you use semantic ranking with vectors, make sure `k` is set to 50. Semantic ranker uses up to 50 matches as input. Specifying fewer than 50 deprives the semantic ranking models of necessary inputs.
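A hybrid query honoring the `k` = 50 guidance might be shaped like the payload below. This is a hedged sketch of an Azure AI Search request body built in Python: the index field name, semantic configuration name, and the vectorizable-text query kind are placeholders and assume an API version that supports them, none of which come from this commit:

```python
# Sketch of a semantic hybrid search request body. All names here
# ("contentVector", "my-semantic-config") are hypothetical placeholders.
query_body = {
    "search": "historic hotels near the beach",  # keyword part of the hybrid query
    "vectorQueries": [
        {
            "kind": "text",                      # let the service vectorize the text
            "text": "historic hotels near the beach",
            "fields": "contentVector",
            "k": 50,                             # semantic ranker takes up to 50 inputs
        }
    ],
    "queryType": "semantic",
    "semanticConfiguration": "my-semantic-config",
    "top": 10,
}
print(query_body["vectorQueries"][0]["k"])  # 50
```

The point of the sketch is the relationship between the two settings: `k` feeds 50 candidates from the vector leg into the merged result set, while `top` controls how many reranked documents the caller actually receives.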

articles/search/index.yml

Lines changed: 1 addition & 1 deletion

@@ -5,7 +5,7 @@ summary: Information retrieval at scale for vector and text content in tradition
 metadata:
   title: Azure AI Search documentation
   description: Information retrieval at scale for vector and text content in traditional or generative search scenarios.
-  ms.service: service
+  ms.service: azure-ai-search
   ms.custom:
     - ignite-2023
     - ignite-2024

articles/search/search-manage-rest.md

Lines changed: 1 addition & 1 deletion

@@ -229,7 +229,7 @@ PATCH https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegrou
 
 ## Disable semantic ranker
 
-Although [semantic ranker isn't enabled](semantic-how-to-enable-disable.md) by default, you could lock down the feature at the service level for greater certainty it can't be used.
+[Semantic ranker is enabled](semantic-how-to-enable-disable.md) by default on the free plan, which allows up to 1,000 requests per month at no charge. You can lock down the feature at the service level to prevent usage.
 
 ```http
 ### disable semantic ranker
