Commit f90e577

Merge pull request #267237 from MicrosoftDocs/repo_sync_working_branch

Confirm merge from repo_sync_working_branch to main to sync with https://github.com/MicrosoftDocs/azure-docs (branch main)

2 parents e5d1a5c + ad109ac commit f90e577

File tree

8 files changed (+15 -11 lines)

articles/ai-services/openai/how-to/embeddings.md

Lines changed: 1 addition & 0 deletions

@@ -140,6 +140,7 @@ Our embedding models may be unreliable or pose social risks in certain cases, an
 * Store your embeddings and perform vector (similarity) search using your choice of Azure service:
   * [Azure AI Search](../../../search/vector-search-overview.md)
   * [Azure Cosmos DB for MongoDB vCore](../../../cosmos-db/mongodb/vcore/vector-search.md)
+  * [Azure SQL Database](/azure/azure-sql/database/ai-artificial-intelligence-intelligent-applications?view=azuresql&preserve-view=true#vector-search)
   * [Azure Cosmos DB for NoSQL](../../../cosmos-db/vector-search.md)
   * [Azure Cosmos DB for PostgreSQL](../../../cosmos-db/postgresql/howto-use-pgvector.md)
   * [Azure Database for PostgreSQL - Flexible Server](../../../postgresql/flexible-server/how-to-use-pgvector.md)
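Every service in the list above ranks stored embeddings by similarity to a query embedding, most commonly cosine similarity. As a minimal illustration of what such a vector search does at query time, here is a brute-force sketch in plain Python; the toy two-dimensional vectors and the store keys are made up for the example (real embedding vectors have on the order of a thousand dimensions, and these services use indexed, not brute-force, search).

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity: dot product divided by the product of vector norms."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def nearest(query: list[float], store: dict[str, list[float]]) -> str:
    """Return the key of the stored embedding most similar to the query."""
    return max(store, key=lambda key: cosine_similarity(query, store[key]))

# Toy store: two documents with hypothetical 2-D "embeddings".
store = {"cats": [0.9, 0.1], "stocks": [0.1, 0.9]}
best = nearest([0.8, 0.2], store)  # closest in direction to "cats"
```

The same ranking logic applies whichever of the listed services holds the vectors; only the indexing and query syntax differ.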

articles/ai-studio/includes/evaluations/from-data/python.md

Lines changed: 1 addition & 1 deletion

@@ -128,7 +128,7 @@ For the supported scenarios mentioned previously, we provide default metrics by
 | Question Answering | `qa` | `gpt_groundedness` (requires context), `gpt_relevance` (requires context), `gpt_coherence` | `gpt_groundedness`, `gpt_relevance`, `gpt_coherence`, `gpt_fluency`, `gpt_similarity`, `f1_score`, `exact_match`, `ada_similarity` |
 | Single and multi-turn conversation (context required) | `chat` | `gpt_groundedness`, `gpt_relevance`, `gpt_retrieval_score` |`gpt_groundedness`, `gpt_relevance`, `gpt_retrieval_score` |
 
-### Set up your Azure Open AI configurations for AI-assisted metrics
+### Set up your Azure OpenAI configurations for AI-assisted metrics
 
 Before you call the `evaluate()` function, your environment needs to set up your large language model deployment configuration that's required for generating the AI-assisted metrics.
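The "Azure OpenAI configurations" the corrected heading refers to are typically an endpoint, an API key, and a deployment name read from the environment. The dictionary below is a hedged sketch of that shape, not the SDK's authoritative schema: the key names and environment-variable names are assumptions for illustration, so check the evaluation SDK documentation for your version before relying on them.

```python
import os

# Assumed shape of an Azure OpenAI model configuration for AI-assisted
# metrics. Key names and env-var names here are illustrative placeholders.
model_config = {
    "azure_endpoint": os.getenv(
        "AZURE_OPENAI_ENDPOINT", "https://<resource>.openai.azure.com"
    ),
    "api_key": os.getenv("AZURE_OPENAI_API_KEY", "<key>"),
    "azure_deployment": os.getenv("AZURE_OPENAI_DEPLOYMENT", "<deployment>"),
}
```

Populating the environment variables ahead of time keeps secrets out of source code, which is why the doc frames this as environment setup before calling `evaluate()`.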

articles/app-service/includes/tutorial-connect-msi-azure-database/code-postgres-mi.md

Lines changed: 2 additions & 2 deletions

@@ -95,7 +95,7 @@ For more information, see the following resources:
 
 # Uncomment the following lines according to the authentication type.
 # For system-assigned identity.
-# credential = DefaultAzureCredential()
+# cred = DefaultAzureCredential()
 
 # For user-assigned identity.
 # managed_identity_client_id = os.getenv('AZURE_POSTGRESQL_CLIENTID')
@@ -155,4 +155,4 @@ For more information, see the following resources:
 
 -----
 
-For more code samples, see [Create a passwordless connection to a database service via Service Connector](/azure/service-connector/tutorial-passwordless?tabs=user%2Cappservice&pivots=postgresql#connect-to-a-database-with-microsoft-entra-authentication).
+For more code samples, see [Create a passwordless connection to a database service via Service Connector](/azure/service-connector/tutorial-passwordless?tabs=user%2Cappservice&pivots=postgresql#connect-to-a-database-with-microsoft-entra-authentication).
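The snippet edited above configures `DefaultAzureCredential` for passwordless PostgreSQL access. As a sketch of the surrounding pattern, assuming a system-assigned managed identity: the credential fetches a Microsoft Entra access token, and that token is then used as the database password. The host, database, and user names below are hypothetical placeholders, and the credential lines are commented out because they require the `azure-identity` package and an Azure-hosted environment with a managed identity to succeed.

```python
# Token acquisition (requires azure-identity and a managed identity):
# from azure.identity import DefaultAzureCredential
# cred = DefaultAzureCredential()
# token = cred.get_token(
#     "https://ossrdbms-aad.database.azure.com/.default"
# ).token

def build_conninfo(host: str, dbname: str, user: str, token: str) -> str:
    """Assemble a libpq-style connection string using the token as password."""
    return (
        f"host={host} dbname={dbname} user={user} "
        f"password={token} sslmode=require"
    )

# Placeholder values; in practice the token comes from get_token() above.
conninfo = build_conninfo(
    "demo.postgres.database.azure.com", "postgres",
    "my-app-identity", "<entra-access-token>",
)
```

Because Entra tokens expire, real code refreshes the token rather than caching the connection string indefinitely.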

articles/azure-cache-for-redis/cache-tutorial-semantic-cache.md

Lines changed: 2 additions & 2 deletions

@@ -12,7 +12,7 @@ ms.date: 01/08/2024
 
 # Tutorial: Use Azure Cache for Redis as a semantic cache
 
-In this tutorial, you use Azure Cache for Redis as a semantic cache with an AI-based large language model (LLM). You use Azure Open AI Service to generate LLM responses to queries and cache those responses using Azure Cache for Redis, delivering faster responses and lowering costs.
+In this tutorial, you use Azure Cache for Redis as a semantic cache with an AI-based large language model (LLM). You use Azure OpenAI Service to generate LLM responses to queries and cache those responses using Azure Cache for Redis, delivering faster responses and lowering costs.
 
 Because Azure Cache for Redis offers built-in vector search capability, you can also perform _semantic caching_. You can return cached responses for identical queries and also for queries that are similar in meaning, even if the text isn't the same.
 
@@ -78,7 +78,7 @@ See [Deploy a model](/azure/ai-services/openai/how-to/create-resource?pivots=web
 
 To successfully make a call against Azure OpenAI, you need an **endpoint** and a **key**. You also need an **endpoint** and a **key** to connect to Azure Cache for Redis.
 
-1. Go to your Azure Open AI resource in the Azure portal.
+1. Go to your Azure OpenAI resource in the Azure portal.
 
 1. Locate **Endpoint and Keys** in the **Resource Management** section of your Azure OpenAI resource. Copy your endpoint and access key because you need both for authenticating your API calls. An example endpoint is: `https://docs-test-001.openai.azure.com`. You can use either `KEY1` or `KEY2`.

articles/azure-monitor/agents/agents-overview.md

Lines changed: 1 addition & 2 deletions

@@ -258,10 +258,9 @@ The Azure Monitoring Agent for Linux now officially supports various hardening s
 Currently supported hardening standards:
 - SELinux
 - CIS Lvl 1 and 2<sup>1</sup>
-
-On the roadmap
 - STIG
 - FIPs
+- FedRamp
 
 | Operating system | Azure Monitor agent <sup>1</sup> | Log Analytics agent (legacy) <sup>1</sup> | Diagnostics extension <sup>2</sup>|
 |:---|:---:|:---:|:---:|

articles/azure-monitor/vm/vminsights-performance.md

Lines changed: 6 additions & 2 deletions

@@ -14,7 +14,7 @@ ms.date: 09/28/2023
 
 VM insights includes a set of performance charts that target several key [performance indicators](vminsights-log-query.md#performance-records) to help you determine how well a virtual machine is performing. The charts show resource utilization over a period of time. You can use them to identify bottlenecks and anomalies. You can also switch to a perspective that lists each machine to view resource utilization based on the metric selected.
 
-VM insights monitors key operating system performance indicators related to processor, memory, network adapter, and disk utilization. Performance complements the health monitoring feature and helps to:
+VM insights monitors key operating system performance indicators related to processor, memory, network adapter, and disk utilization. Performance helps to:
 
 - Expose issues that indicate a possible system component failure.
 - Support tuning and optimization to achieve efficiency.
@@ -43,7 +43,7 @@ To access from Azure Monitor:
 <!-- convertborder later -->
 :::image type="content" source="media/vminsights-performance/vminsights-performance-aggview-01.png" lightbox="media/vminsights-performance/vminsights-performance-aggview-01.png" alt-text="Screenshot that shows a VM insights Performance Top N List view." border="false":::
 
-On the **Top N Charts** tab, if you have more than one Log Analytics workspace, select the workspace enabled with the solution from the **Workspace** selector at the top of the page. The **Group** selector returns subscriptions, resource groups, [computer groups](../logs/computer-groups.md), and virtual machine scale sets of computers related to the selected workspace that you can use to further filter results presented in the charts on this page and across the other pages. Your selection only applies to the Performance feature and doesn't carry over to Health or Map.
+On the **Top N Charts** tab, if you have more than one Log Analytics workspace, select the workspace enabled with the solution from the **Workspace** selector at the top of the page. The **Group** selector returns subscriptions, resource groups, [computer groups](../logs/computer-groups.md), and virtual machine scale sets of computers related to the selected workspace that you can use to further filter results presented in the charts on this page and across the other pages. Your selection only applies to the Performance feature and doesn't carry over to Map.
 
 By default, the charts show performance counters for the last hour. By using the **TimeRange** selector, you can query for historical time ranges of up to 30 days to show how performance looked in the past.
 
@@ -55,6 +55,10 @@ Five capacity utilization charts are shown on the page:
 * **Bytes Sent Rate**: Shows the top five machines with the highest average of bytes sent.
 * **Bytes Receive Rate**: Shows the top five machines with the highest average of bytes received.
 
+>[!NOTE]
+>Each chart described above only shows the top 5 machines.
+>
+
 Selecting the pushpin icon in the upper-right corner of a chart pins it to the last Azure dashboard you viewed. From the dashboard, you can resize and reposition the chart. Selecting the chart from the dashboard redirects you to VM insights and loads the correct scope and view.
 
 Select the icon to the left of the pushpin icon on a chart to open the **Top N List** view. This list view shows the resource utilization for a performance metric by individual VM. It also shows which machine is trending the highest.

articles/defender-for-cloud/quickstart-onboard-aws.md

Lines changed: 1 addition & 1 deletion

@@ -115,7 +115,7 @@ Ensure that your SSM Agent has the managed policy [AmazonSSMManagedInstanceCore]
 **You must have the SSM Agent for auto provisioning Arc agent on EC2 machines. If the SSM doesn't exist, or is removed from the EC2, the Arc provisioning won't be able to proceed.**
 
 > [!NOTE]
-> As part of the cloud formation template that is run during the onboarding process, an automation process is created and triggered every 30 days, over all the EC2s that existed during the initial run of the cloud formation. The goal of this scheduled scan is to ensure that all the relevant EC2s have an IAM profile with the required IAM policy that allows Defender for Cloud to access, manage, and provide the relevant security features (including the Arc agent provisioning). The scan does not apply to EC2s that were created after the run of the cloud formation.
+> As part of the CloudFormation template that is run during the onboarding process, an automation process is created and triggered every 30 days, over all the EC2s that existed during the initial run of the CloudFormation. The goal of this scheduled scan is to ensure that all the relevant EC2s have an IAM profile with the required IAM policy that allows Defender for Cloud to access, manage, and provide the relevant security features (including the Arc agent provisioning). The scan does not apply to EC2s that were created after the run of the CloudFormation.
 
 If you want to manually install Azure Arc on your existing and future EC2 instances, use the [EC2 instances should be connected to Azure Arc](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/231dee23-84db-44d2-bd9d-c32fbcfb42a3) recommendation to identify instances that don't have Azure Arc installed.

articles/service-connector/includes/code-postgres-me-id.md

Lines changed: 1 addition & 1 deletion

@@ -109,7 +109,7 @@ For more tutorials, see [Use Spring Data JDBC with Azure Database for PostgreSQL
 
 # Uncomment the following lines according to the authentication type.
 # For system-assigned identity.
-# credential = DefaultAzureCredential()
+# cred = DefaultAzureCredential()
 
 # For user-assigned identity.
 # managed_identity_client_id = os.getenv('AZURE_POSTGRESQL_CLIENTID')
