Commit dc9caeb (merge, 2 parents: 98104ef + 1d9a3f8)

File tree: 457 files changed (+5292, −3628 lines)


.gitignore

Lines changed: 2 additions & 0 deletions
@@ -11,6 +11,8 @@ _repo.*/
 
 .openpublishing.buildcore.ps1
 
+.vscode/
+
 *sec.endpointdlp
 
 # CoPilot instructions and prompts

.openpublishing.publish.config.json

Lines changed: 6 additions & 0 deletions
@@ -182,6 +182,12 @@
       "branch": "main",
       "branch_mapping": {}
     },
+    {
+      "path_to_root": "azure-search-javascript-samples",
+      "url": "https://github.com/Azure-Samples/azure-search-javascript-samples",
+      "branch": "main",
+      "branch_mapping": {}
+    },
     {
       "path_to_root": "azureai-model-inference-bicep",
       "url": "https://github.com/Azure-Samples/azureai-model-inference-bicep",

.vscode/settings.json

Lines changed: 0 additions & 5 deletions
This file was deleted.

articles/ai-foundry/concepts/architecture.md

Lines changed: 1 addition & 2 deletions
@@ -16,8 +16,7 @@ author: Blackmist
 
 # Azure AI Foundry architecture
 
-> [!NOTE]
-> The architecture discussed in this article is specific to a **[!INCLUDE [hub](../includes/hub-project-name.md)]**. For more information, see [Types of projects](../what-is-azure-ai-foundry.md#project-types).
+[!INCLUDE [hub-only-alt](../includes/uses-hub-only-alt.md)]
 
 Azure AI Foundry provides a unified experience for AI developers and data scientists to build, evaluate, and deploy AI models through a web portal, SDK, or CLI. Azure AI Foundry is built on capabilities and services provided by other Azure services.
 

articles/ai-foundry/concepts/concept-playgrounds.md

Lines changed: 144 additions & 131 deletions
Large diffs are not rendered by default.

articles/ai-foundry/concepts/encryption-keys-portal.md

Lines changed: 3 additions & 0 deletions
@@ -117,6 +117,9 @@ Managed identity must be enabled as a prerequisite for using customer-managed ke
 
 Customer-managed key encryption is configured via Azure portal in a similar way for each Azure resource:
 
+> [!IMPORTANT]
+> The Azure Key Vault used for encryption **must be in the same resource group** as the AI Foundry project. Key Vaults in other resource groups are not currently supported by the deployment wizards or project configuration workflows.
+
 1. Create a new Azure AI Foundry resource in the [Azure portal](https://portal.azure.com/).
 1. Under the **Encryption** tab, select **Customer-managed key**, **Select vault and key**, and then select the key vault and key to use.

articles/ai-foundry/concepts/evaluation-evaluators/agent-evaluators.md

Lines changed: 3 additions & 3 deletions
@@ -70,7 +70,7 @@ intent_resolution(
 
 ### Intent resolution output
 
-The numerical score on a likert scale (integer 1 to 5) and a higher score is better. Given a numerical threshold (default to 3), we also output "pass" if the score <= threshold, or "fail" otherwise. Using the reason and additional fields can help you understand why the score is high or low.
+The numerical score on a likert scale (integer 1 to 5) and a higher score is better. Given a numerical threshold (default to 3), we also output "pass" if the score >= threshold, or "fail" otherwise. Using the reason and additional fields can help you understand why the score is high or low.
 
 ```python
 {
@@ -137,7 +137,7 @@ tool_call_accuracy(
 
 ### Tool call accuracy output
 
-The numerical score (passing rate of correct tool calls) is 0-1 and a higher score is better. Given a numerical threshold (default to 3), we also output "pass" if the score <= threshold, or "fail" otherwise. Using the reason and tool call detail fields can help you understand why the score is high or low.
+The numerical score (passing rate of correct tool calls) is 0-1 and a higher score is better. Given a numerical threshold (default to 3), we also output "pass" if the score >= threshold, or "fail" otherwise. Using the reason and tool call detail fields can help you understand why the score is high or low.
 
 ```python
 {
@@ -174,7 +174,7 @@ task_adherence(
 
 ### Task adherence output
 
-The numerical score on a likert scale (integer 1 to 5) and a higher score is better. Given a numerical threshold (default to 3), we also output "pass" if the score <= threshold, or "fail" otherwise. Using the reason field can help you understand why the score is high or low.
+The numerical score on a likert scale (integer 1 to 5) and a higher score is better. Given a numerical threshold (default to 3), we also output "pass" if the score >= threshold, or "fail" otherwise. Using the reason field can help you understand why the score is high or low.
 
 ```python
 {
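The evaluator hunks in this commit all make the same one-character fix: the pass/fail label is derived by comparing the score against the threshold with `>=`, not `<=` (with `<=`, a perfect score would have been labeled "fail"). A minimal sketch of the corrected comparison — a hypothetical helper for illustration, not an API from the Azure AI Evaluation SDK:

```python
def pass_fail(score: float, threshold: float = 3.0) -> str:
    """Label a score "pass" when it meets or exceeds the threshold.

    Hypothetical helper illustrating the corrected logic from the diffs
    above: higher scores are better, so the comparison must be >=.
    The docs above use a default threshold of 3 for Likert-scale (1-5)
    evaluators and 0.5 for 0-1 float metrics such as F1 and BLEU.
    """
    return "pass" if score >= threshold else "fail"


# The old "<=" comparison would have labeled a perfect Likert score of 5
# as "fail"; with ">=" it is correctly labeled "pass".
assert pass_fail(5) == "pass"
assert pass_fail(2) == "fail"
assert pass_fail(0.7, threshold=0.5) == "pass"
```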

articles/ai-foundry/concepts/evaluation-evaluators/general-purpose-evaluators.md

Lines changed: 3 additions & 3 deletions
@@ -59,7 +59,7 @@ coherence(
 
 ### Coherence output
 
-The numerical score on a likert scale (integer 1 to 5) and a higher score is better. Given a numerical threshold (default to 3), we also output "pass" if the score <= threshold, or "fail" otherwise. Using the reason field can help you understand why the score is high or low.
+The numerical score on a likert scale (integer 1 to 5) and a higher score is better. Given a numerical threshold (default to 3), we also output "pass" if the score >= threshold, or "fail" otherwise. Using the reason field can help you understand why the score is high or low.
 
 ```python
 {
@@ -88,7 +88,7 @@ fluency(
 
 ### Fluency output
 
-The numerical score on a likert scale (integer 1 to 5) and a higher score is better. Given a numerical threshold (default to 3), we also output "pass" if the score <= threshold, or "fail" otherwise. Using the reason field can help you understand why the score is high or low.
+The numerical score on a likert scale (integer 1 to 5) and a higher score is better. Given a numerical threshold (default to 3), we also output "pass" if the score >= threshold, or "fail" otherwise. Using the reason field can help you understand why the score is high or low.
 
 ```python
 {
@@ -127,7 +127,7 @@ qa_eval(
 
 ### QA output
 
-While F1 score outputs a numerical score on 0-1 float scale, the other evaluators output numerical scores on a likert scale (integer 1 to 5) and a higher score is better. Given a numerical threshold (default to 3), we also output "pass" if the score <= threshold, or "fail" otherwise. Using the reason field can help you understand why the score is high or low.
+While F1 score outputs a numerical score on 0-1 float scale, the other evaluators output numerical scores on a likert scale (integer 1 to 5) and a higher score is better. Given a numerical threshold (default to 3), we also output "pass" if the score >= threshold, or "fail" otherwise. Using the reason field can help you understand why the score is high or low.
 
 ```python
 {

articles/ai-foundry/concepts/evaluation-evaluators/rag-evaluators.md

Lines changed: 5 additions & 5 deletions
@@ -63,7 +63,7 @@ retrieval(
 
 ### Retrieval output
 
-The numerical score on a likert scale (integer 1 to 5) and a higher score is better. Given a numerical threshold (a default is set), we also output "pass" if the score <= threshold, or "fail" otherwise. Using the reason field can help you understand why the score is high or low.
+The numerical score on a likert scale (integer 1 to 5) and a higher score is better. Given a numerical threshold (a default is set), we also output "pass" if the score >= threshold, or "fail" otherwise. Using the reason field can help you understand why the score is high or low.
 
 ```python
 {
@@ -163,7 +163,7 @@ document_retrieval_evaluator(retrieval_ground_truth=retrieval_ground_truth, retr
 
 ### Document retrieval output
 
-All numerical scores have `high_is_better=True` except for `holes` and `holes_ratio` which have `high_is_better=False`. Given a numerical threshold (default to 3), we also output "pass" if the score <= threshold, or "fail" otherwise.
+All numerical scores have `high_is_better=True` except for `holes` and `holes_ratio` which have `high_is_better=False`. Given a numerical threshold (default to 3), we also output "pass" if the score >= threshold, or "fail" otherwise.
 
 ```python
 {
@@ -206,7 +206,7 @@ groundedness(
 
 ### Groundedness output
 
-The numerical score on a likert scale (integer 1 to 5) and a higher score is better. Given a numerical threshold (default to 3), we also output "pass" if the score <= threshold, or "fail" otherwise. Using the reason field can help you understand why the score is high or low.
+The numerical score on a likert scale (integer 1 to 5) and a higher score is better. Given a numerical threshold (default to 3), we also output "pass" if the score >= threshold, or "fail" otherwise. Using the reason field can help you understand why the score is high or low.
 
 ```python
 {
@@ -276,7 +276,7 @@ relevance(
 
 ### Relevance output
 
-The numerical score on a likert scale (integer 1 to 5) and a higher score is better. Given a numerical threshold (default to 3), we also output "pass" if the score <= threshold, or "fail" otherwise. Using the reason field can help you understand why the score is high or low.
+The numerical score on a likert scale (integer 1 to 5) and a higher score is better. Given a numerical threshold (default to 3), we also output "pass" if the score >= threshold, or "fail" otherwise. Using the reason field can help you understand why the score is high or low.
 
 ```python
 {
@@ -306,7 +306,7 @@ response_completeness(
 
 ### Response completeness output
 
-The numerical score on a likert scale (integer 1 to 5) and a higher score is better. Given a numerical threshold (default to 3), we also output "pass" if the score <= threshold, or "fail" otherwise. Using the reason field can help you understand why the score is high or low.
+The numerical score on a likert scale (integer 1 to 5) and a higher score is better. Given a numerical threshold (default to 3), we also output "pass" if the score >= threshold, or "fail" otherwise. Using the reason field can help you understand why the score is high or low.
 
 ```python
 {

articles/ai-foundry/concepts/evaluation-evaluators/textual-similarity-evaluators.md

Lines changed: 6 additions & 6 deletions
@@ -58,7 +58,7 @@ similarity(
 
 ### Similarity output
 
-The numerical score on a likert scale (integer 1 to 5) and a higher score means a higher degree of similarity. Given a numerical threshold (default to 3), we also output "pass" if the score <= threshold, or "fail" otherwise. Using the reason field can help you understand why the score is high or low.
+The numerical score on a likert scale (integer 1 to 5) and a higher score means a higher degree of similarity. Given a numerical threshold (default to 3), we also output "pass" if the score >= threshold, or "fail" otherwise. Using the reason field can help you understand why the score is high or low.
 
 ```python
 {
@@ -87,7 +87,7 @@ f1_score(
 
 ### F1 score output
 
-The numerical score is a 0-1 float and a higher score is better. Given a numerical threshold (default to 0.5), we also output "pass" if the score <= threshold, or "fail" otherwise.
+The numerical score is a 0-1 float and a higher score is better. Given a numerical threshold (default to 0.5), we also output "pass" if the score >= threshold, or "fail" otherwise.
 
 ```python
 {
@@ -115,7 +115,7 @@ bleu_score(
 
 ### BLEU output
 
-The numerical score is a 0-1 float and a higher score is better. Given a numerical threshold (default to 0.5), we also output "pass" if the score <= threshold, or "fail" otherwise.
+The numerical score is a 0-1 float and a higher score is better. Given a numerical threshold (default to 0.5), we also output "pass" if the score >= threshold, or "fail" otherwise.
 
 ```python
 {
@@ -144,7 +144,7 @@ gleu_score(
 
 ### GLEU score output
 
-The numerical score is a 0-1 float and a higher score is better. Given a numerical threshold (default to 0.5), we also output "pass" if the score <= threshold, or "fail" otherwise.
+The numerical score is a 0-1 float and a higher score is better. Given a numerical threshold (default to 0.5), we also output "pass" if the score >= threshold, or "fail" otherwise.
 
 ```python
 {
@@ -173,7 +173,7 @@ rouge(
 
 ### ROUGE score output
 
-The numerical score is a 0-1 float and a higher score is better. Given a numerical threshold (default to 0.5), we also output "pass" if the score <= threshold, or "fail" otherwise.
+The numerical score is a 0-1 float and a higher score is better. Given a numerical threshold (default to 0.5), we also output "pass" if the score >= threshold, or "fail" otherwise.
 
 ```python
 {
@@ -208,7 +208,7 @@ meteor_score(
 
 ### METEOR score output
 
-The numerical score is a 0-1 float and a higher score is better. Given a numerical threshold (default to 0.5), we also output "pass" if the score <= threshold, or "fail" otherwise.
+The numerical score is a 0-1 float and a higher score is better. Given a numerical threshold (default to 0.5), we also output "pass" if the score >= threshold, or "fail" otherwise.
 
 ```python
 {
