| Single and multi-turn conversation (context required) |`chat`|`gpt_groundedness`, `gpt_relevance`, `gpt_retrieval_score`|`gpt_groundedness`, `gpt_relevance`, `gpt_retrieval_score`|
-### Set up your Azure Open AI configurations for AI-assisted metrics
+### Set up your Azure OpenAI configurations for AI-assisted metrics
Before you call the `evaluate()` function, your environment needs to set up your large language model deployment configuration that's required for generating the AI-assisted metrics.
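The deployment configuration described above can be sketched in a few lines. This is an illustrative helper only: the environment-variable names and the shape of the resulting dictionary are assumptions for the sketch, not a contract confirmed by this diff — consult the evaluation SDK's documentation for the exact `evaluate()` parameters.

```python
import os

# Hypothetical helper: collect the Azure OpenAI deployment settings that
# AI-assisted metrics (for example gpt_groundedness) need before calling
# evaluate(). The variable names are illustrative assumptions.
def build_model_config(env=os.environ):
    config = {
        "azure_endpoint": env.get("AZURE_OPENAI_ENDPOINT", ""),
        "api_key": env.get("AZURE_OPENAI_API_KEY", ""),
        "azure_deployment": env.get("AZURE_OPENAI_DEPLOYMENT", ""),
    }
    missing = [key for key, value in config.items() if not value]
    if missing:
        raise ValueError(f"Missing model configuration values: {missing}")
    return config
```

Validating the configuration up front, as sketched here, surfaces a missing endpoint or key before the evaluation run starts rather than midway through metric generation.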
@@ -155,4 +155,4 @@ For more information, see the following resources:
-----
-For more code samples, see [Create a passwordless connection to a database service via Service Connector](/azure/service-connector/tutorial-passwordless?tabs=user%2Cappservice&pivots=postgresql#connect-to-a-database-with-microsoft-entra-authentication).
+For more code samples, see [Create a passwordless connection to a database service via Service Connector](/azure/service-connector/tutorial-passwordless?tabs=user%2Cappservice&pivots=postgresql#connect-to-a-database-with-microsoft-entra-authentication).
articles/azure-cache-for-redis/cache-tutorial-semantic-cache.md (2 additions, 2 deletions)
@@ -12,7 +12,7 @@ ms.date: 01/08/2024
# Tutorial: Use Azure Cache for Redis as a semantic cache
-In this tutorial, you use Azure Cache for Redis as a semantic cache with an AI-based large language model (LLM). You use Azure Open AI Service to generate LLM responses to queries and cache those responses using Azure Cache for Redis, delivering faster responses and lowering costs.
+In this tutorial, you use Azure Cache for Redis as a semantic cache with an AI-based large language model (LLM). You use Azure OpenAI Service to generate LLM responses to queries and cache those responses using Azure Cache for Redis, delivering faster responses and lowering costs.
Because Azure Cache for Redis offers built-in vector search capability, you can also perform _semantic caching_. You can return cached responses for identical queries and also for queries that are similar in meaning, even if the text isn't the same.
@@ -78,7 +78,7 @@ See [Deploy a model](/azure/ai-services/openai/how-to/create-resource?pivots=web
To successfully make a call against Azure OpenAI, you need an **endpoint** and a **key**. You also need an **endpoint** and a **key** to connect to Azure Cache for Redis.
-1. Go to your Azure Open AI resource in the Azure portal.
+1. Go to your Azure OpenAI resource in the Azure portal.
1. Locate **Endpoint and Keys** in the **Resource Management** section of your Azure OpenAI resource. Copy your endpoint and access key because you need both for authenticating your API calls. An example endpoint is: `https://docs-test-001.openai.azure.com`. You can use either `KEY1` or `KEY2`.
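The semantic-caching idea this tutorial's diff describes — returning a cached response for queries that are similar in meaning, not just identical — can be sketched with plain cosine similarity. A minimal sketch under stated assumptions: embeddings are supplied by the caller, and an in-memory list stands in for Azure Cache for Redis vector search, which a real deployment would use instead.

```python
import math

# Illustrative semantic cache. Real deployments would store embeddings in
# Azure Cache for Redis and use its built-in vector search; this in-memory
# version only demonstrates the similarity-based lookup.
class SemanticCache:
    def __init__(self, threshold=0.9):
        self.threshold = threshold  # minimum cosine similarity for a hit
        self.entries = []           # list of (embedding, cached_response)

    @staticmethod
    def _cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm if norm else 0.0

    def get(self, embedding):
        # Return the best cached response whose similarity clears the threshold.
        best_response, best_sim = None, 0.0
        for stored_embedding, response in self.entries:
            sim = self._cosine(embedding, stored_embedding)
            if sim > best_sim:
                best_response, best_sim = response, sim
        return best_response if best_sim >= self.threshold else None

    def put(self, embedding, response):
        self.entries.append((embedding, response))
```

On a cache miss the application would call the LLM, then `put` the new embedding and response; on a hit it skips the LLM call entirely, which is where the faster responses and lower costs come from.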
articles/azure-monitor/vm/vminsights-performance.md (6 additions, 2 deletions)
@@ -14,7 +14,7 @@ ms.date: 09/28/2023
VM insights includes a set of performance charts that target several key [performance indicators](vminsights-log-query.md#performance-records) to help you determine how well a virtual machine is performing. The charts show resource utilization over a period of time. You can use them to identify bottlenecks and anomalies. You can also switch to a perspective that lists each machine to view resource utilization based on the metric selected.
-VM insights monitors key operating system performance indicators related to processor, memory, network adapter, and disk utilization. Performance complements the health monitoring feature and helps to:
+VM insights monitors key operating system performance indicators related to processor, memory, network adapter, and disk utilization. Performance helps to:
- Expose issues that indicate a possible system component failure.
- Support tuning and optimization to achieve efficiency.
@@ -43,7 +43,7 @@ To access from Azure Monitor:
<!-- convertborder later -->
:::image type="content" source="media/vminsights-performance/vminsights-performance-aggview-01.png" lightbox="media/vminsights-performance/vminsights-performance-aggview-01.png" alt-text="Screenshot that shows a VM insights Performance Top N List view." border="false":::
-On the **Top N Charts** tab, if you have more than one Log Analytics workspace, select the workspace enabled with the solution from the **Workspace** selector at the top of the page. The **Group** selector returns subscriptions, resource groups, [computer groups](../logs/computer-groups.md), and virtual machine scale sets of computers related to the selected workspace that you can use to further filter results presented in the charts on this page and across the other pages. Your selection only applies to the Performance feature and doesn't carry over to Health or Map.
+On the **Top N Charts** tab, if you have more than one Log Analytics workspace, select the workspace enabled with the solution from the **Workspace** selector at the top of the page. The **Group** selector returns subscriptions, resource groups, [computer groups](../logs/computer-groups.md), and virtual machine scale sets of computers related to the selected workspace that you can use to further filter results presented in the charts on this page and across the other pages. Your selection only applies to the Performance feature and doesn't carry over to Map.
47
47
48
48
By default, the charts show performance counters for the last hour. By using the **TimeRange** selector, you can query for historical time ranges of up to 30 days to show how performance looked in the past.
@@ -55,6 +55,10 @@ Five capacity utilization charts are shown on the page:
* **Bytes Sent Rate**: Shows the top five machines with the highest average of bytes sent.
* **Bytes Receive Rate**: Shows the top five machines with the highest average of bytes received.
+>[!NOTE]
+>Each chart described above only shows the top 5 machines.
+>
+
Selecting the pushpin icon in the upper-right corner of a chart pins it to the last Azure dashboard you viewed. From the dashboard, you can resize and reposition the chart. Selecting the chart from the dashboard redirects you to VM insights and loads the correct scope and view.
Select the icon to the left of the pushpin icon on a chart to open the **Top N List** view. This list view shows the resource utilization for a performance metric by individual VM. It also shows which machine is trending the highest.
articles/defender-for-cloud/quickstart-onboard-aws.md (1 addition, 1 deletion)
@@ -115,7 +115,7 @@ Ensure that your SSM Agent has the managed policy [AmazonSSMManagedInstanceCore]
**You must have the SSM Agent for auto provisioning Arc agent on EC2 machines. If the SSM doesn't exist, or is removed from the EC2, the Arc provisioning won't be able to proceed.**
> [!NOTE]
-> As part of the cloud formation template that is run during the onboarding process, an automation process is created and triggered every 30 days, over all the EC2s that existed during the initial run of the cloud formation. The goal of this scheduled scan is to ensure that all the relevant EC2s have an IAM profile with the required IAM policy that allows Defender for Cloud to access, manage, and provide the relevant security features (including the Arc agent provisioning). The scan does not apply to EC2s that were created after the run of the cloud formation.
+> As part of the CloudFormation template that is run during the onboarding process, an automation process is created and triggered every 30 days, over all the EC2s that existed during the initial run of the CloudFormation. The goal of this scheduled scan is to ensure that all the relevant EC2s have an IAM profile with the required IAM policy that allows Defender for Cloud to access, manage, and provide the relevant security features (including the Arc agent provisioning). The scan does not apply to EC2s that were created after the run of the CloudFormation.
If you want to manually install Azure Arc on your existing and future EC2 instances, use the [EC2 instances should be connected to Azure Arc](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/231dee23-84db-44d2-bd9d-c32fbcfb42a3) recommendation to identify instances that don't have Azure Arc installed.
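The scheduled scan described in the note above checks whether each EC2 instance carries the required IAM instance profile. A hypothetical sketch of that core check — not the template's actual automation — operating on data shaped like the response of boto3's `describe_instances` (where instances lacking a profile simply omit the `IamInstanceProfile` key):

```python
# Hypothetical re-implementation of the scan's core check: find EC2
# instances that lack an IAM instance profile. The input mirrors the
# "Reservations" list returned by boto3's ec2.describe_instances().
def instances_missing_profile(reservations):
    missing = []
    for reservation in reservations:
        for instance in reservation.get("Instances", []):
            if "IamInstanceProfile" not in instance:
                missing.append(instance["InstanceId"])
    return missing
```

Instances this check flags would be candidates for attaching the IAM profile that lets Defender for Cloud provision the Arc agent and other security features.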