
Commit 7f5c960

Merge branch 'main' of https://github.com/MicrosoftDocs/azure-ai-docs-pr into imagen
2 parents 139b1e2 + 00a41cf commit 7f5c960

1,239 files changed (+24,719 additions, -8,710 deletions)


.github/policies/disallow-edits.yml

Lines changed: 11 additions & 7 deletions
@@ -19,18 +19,18 @@ configuration:
 @${issueAuthor} - You tried to add an index file to this repository; this is not permitted so your pull request will be closed automatically.
 - closePullRequest

-- description: Close PRs to the "ai-services/personalizer" and "ai-services/responsible-ai" folders where the author isn't a member of the MicrosoftDocs org (i.e. PRs in public repo).
+- description: Close PRs to the "personalizer" and "responsible-ai" folders where the author isn't a member of the MicrosoftDocs org (i.e. PRs in public repo).
 if:
 - payloadType: Pull_Request
 - isAction:
 action: Opened
 - or:
 - filesMatchPattern:
 matchAny: true
-pattern: articles/ai-services/personalizer/*
+pattern: articles/ai-foundry/responsible-ai/*
 - filesMatchPattern:
 matchAny: true
-pattern: articles/ai-services/responsible-ai/*
+pattern: articles/ai-services/personalizer/*
 - not:
 activitySenderHasAssociation:
 association: Member
@@ -40,14 +40,18 @@ configuration:
 @${issueAuthor} - Pull requests that modify files in this folder aren't accepted from public contributors.
 - closePullRequest

-- description: \@mention specific people when a PR is opened in the "ai-services/personalizer" folder.
+- description: \@mention specific people when a PR is opened in the "personalizer" or "responsible-ai" folder.
 if:
 - payloadType: Pull_Request
 - isAction:
 action: Opened
-- filesMatchPattern:
-matchAny: true
-pattern: articles/ai-services/personalizer/*
+- or:
+- filesMatchPattern:
+matchAny: true
+pattern: articles/ai-foundry/responsible-ai/*
+- filesMatchPattern:
+matchAny: true
+pattern: articles/ai-services/personalizer/*
 - activitySenderHasAssociation:
 association: Member
 - not:
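
The consolidated rules now close and @mention on either protected folder: the two `filesMatchPattern` entries sit under one `or:` block with `matchAny: true`. A minimal Python sketch of that matching logic (illustrative only, not part of this commit; the policy service's actual pattern semantics may differ):

```python
# Illustrative sketch, not part of the commit: approximate the rule's
# `or:` of two filesMatchPattern entries with matchAny semantics.
from fnmatch import fnmatch

PROTECTED_PATTERNS = [
    "articles/ai-foundry/responsible-ai/*",
    "articles/ai-services/personalizer/*",
]

def pr_touches_protected_folder(changed_files: list[str]) -> bool:
    """Return True if any changed file matches any protected pattern."""
    return any(
        fnmatch(path, pattern)
        for path in changed_files
        for pattern in PROTECTED_PATTERNS
    )

# Example: a PR touching Personalizer docs matches; an agents doc does not.
print(pr_touches_protected_folder(["articles/ai-services/personalizer/quickstart.md"]))  # True
print(pr_touches_protected_folder(["articles/ai-foundry/agents/overview.md"]))           # False
```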

.gitignore

Lines changed: 2 additions & 0 deletions
@@ -18,3 +18,5 @@ _repo.*/
 # CoPilot instructions and prompts
 .github/copilot-instructions.md
 .github/prompts/*.md
+.github/prompts/*.zip
+.github/patterns/*.md

.openpublishing.publish.config.json

Lines changed: 6 additions & 0 deletions
@@ -182,6 +182,12 @@
 "branch": "main",
 "branch_mapping": {}
 },
+{
+"path_to_root": "azure-search-javascript-samples",
+"url": "https://github.com/Azure-Samples/azure-search-javascript-samples",
+"branch": "main",
+"branch_mapping": {}
+},
 {
 "path_to_root": "azureai-model-inference-bicep",
 "url": "https://github.com/Azure-Samples/azureai-model-inference-bicep",

.openpublishing.redirection.json

Lines changed: 167 additions & 2 deletions
@@ -1,4 +1,4 @@
-{
+{
 "redirections": [
 {
 "source_path": "articles/ai-foundry/concepts/connections.md",
@@ -319,6 +319,171 @@
 "source_path_from_root": "/articles/ai-services/language-service/tutorials/prompt-flow.md",
 "redirect_url": "/azure/ai-services/language-service/tutorials/power-automate",
 "redirect_document_id": false
+},
+{
+"source_path": "articles/ai-foundry/model-inference/concepts/content-filter.md",
+"redirect_url": "../../foundry-models/concepts/content-filter",
+"redirect_document_id": false
+},
+{
+"source_path": "articles/ai-foundry/model-inference/concepts/default-safety-policies.md",
+"redirect_url": "../../foundry-models/concepts/default-safety-policies",
+"redirect_document_id": false
+},
+{
+"source_path": "articles/ai-foundry/model-inference/concepts/deployment-types.md",
+"redirect_url": "../../foundry-models/concepts/deployment-types",
+"redirect_document_id": false
+},
+{
+"source_path": "articles/ai-foundry/model-inference/concepts/endpoints.md",
+"redirect_url": "../../foundry-models/concepts/endpoints",
+"redirect_document_id": false
+},
+{
+"source_path": "articles/ai-foundry/model-inference/concepts/models.md",
+"redirect_url": "../../foundry-models/concepts/models",
+"redirect_document_id": false
+},
+{
+"source_path": "articles/ai-foundry/model-inference/concepts/model-versions.md",
+"redirect_url": "../../foundry-models/concepts/model-versions",
+"redirect_document_id": false
+},
+{
+"source_path": "articles/ai-foundry/model-inference/how-to/configure-content-filters.md",
+"redirect_url": "../../foundry-models/how-to/configure-content-filters",
+"redirect_document_id": false
+},
+{
+"source_path": "articles/ai-foundry/model-inference/how-to/configure-deployment-policies.md",
+"redirect_url": "../../foundry-models/how-to/configure-deployment-policies",
+"redirect_document_id": false
+},
+{
+"source_path": "articles/ai-foundry/model-inference/how-to/configure-entra-id.md",
+"redirect_url": "../../foundry-models/how-to/configure-entra-id",
+"redirect_document_id": false
+},
+{
+"source_path": "articles/ai-foundry/model-inference/how-to/configure-marketplace.md",
+"redirect_url": "../../foundry-models/how-to/configure-marketplace",
+"redirect_document_id": false
+},
+{
+"source_path": "articles/ai-foundry/model-inference/how-to/configure-project-connection.md",
+"redirect_url": "../../foundry-models/how-to/configure-project-connection",
+"redirect_document_id": false
+},
+{
+"source_path": "articles/ai-foundry/model-inference/how-to/create-model-deployments.md",
+"redirect_url": "../../foundry-models/how-to/create-model-deployments",
+"redirect_document_id": false
+},
+{
+"source_path": "articles/ai-foundry/model-inference/how-to/inference.md",
+"redirect_url": "../../foundry-models/how-to/inference",
+"redirect_document_id": false
+},
+{
+"source_path": "articles/ai-foundry/model-inference/how-to/manage-costs.md",
+"redirect_url": "../../foundry-models/how-to/manage-costs",
+"redirect_document_id": false
+},
+{
+"source_path": "articles/ai-foundry/model-inference/how-to/monitor-models.md",
+"redirect_url": "../../foundry-models/how-to/monitor-models",
+"redirect_document_id": false
+},
+{
+"source_path": "articles/ai-foundry/model-inference/how-to/quickstart-ai-project.md",
+"redirect_url": "../../foundry-models/how-to/quickstart-ai-project",
+"redirect_document_id": false
+},
+{
+"source_path": "articles/ai-foundry/model-inference/how-to/quickstart-create-resources.md",
+"redirect_url": "../../foundry-models/how-to/quickstart-create-resources",
+"redirect_document_id": false
+},
+{
+"source_path": "articles/ai-foundry/model-inference/how-to/quickstart-github-models.md",
+"redirect_url": "../../foundry-models/how-to/quickstart-github-models",
+"redirect_document_id": false
+},
+{
+"source_path": "articles/ai-foundry/model-inference/how-to/use-blocklists.md",
+"redirect_url": "../../foundry-models/how-to/use-blocklists",
+"redirect_document_id": false
+},
+{
+"source_path": "articles/ai-foundry/model-inference/how-to/use-chat-completions.md",
+"redirect_url": "../../foundry-models/how-to/use-chat-completions",
+"redirect_document_id": false
+},
+{
+"source_path": "articles/ai-foundry/model-inference/how-to/use-chat-multi-modal.md",
+"redirect_url": "../../foundry-models/how-to/use-chat-multi-modal",
+"redirect_document_id": false
+},
+{
+"source_path": "articles/ai-foundry/model-inference/how-to/use-chat-reasoning.md",
+"redirect_url": "../../foundry-models/how-to/use-chat-reasoning",
+"redirect_document_id": false
+},
+{
+"source_path": "articles/ai-foundry/model-inference/how-to/use-embeddings.md",
+"redirect_url": "../../foundry-models/how-to/use-embeddings",
+"redirect_document_id": false
+},
+{
+"source_path": "articles/ai-foundry/model-inference/how-to/use-image-embeddings.md",
+"redirect_url": "../../foundry-models/how-to/use-image-embeddings",
+"redirect_document_id": false
+},
+{
+"source_path": "articles/ai-foundry/model-inference/how-to/use-structured-outputs.md",
+"redirect_url": "../../foundry-models/how-to/use-structured-outputs",
+"redirect_document_id": false
+},
+{
+"source_path": "articles/ai-foundry/model-inference/how-to/github/create-model-deployments.md",
+"redirect_url": "../../../foundry-models/how-to/github/create-model-deployments",
+"redirect_document_id": false
+},
+{
+"source_path": "articles/ai-foundry/model-inference/tutorials/get-started-deepseek-r1.md",
+"redirect_url": "../../foundry-models/tutorials/get-started-deepseek-r1",
+"redirect_document_id": false
+},
+{
+"source_path": "articles/ai-foundry/model-inference/overview.md",
+"redirect_url": "../foundry-models/overview",
+"redirect_document_id": false
+},
+{
+"source_path": "articles/ai-foundry/model-inference/quotas-limits.md",
+"redirect_url": "../foundry-models/quotas-limits",
+"redirect_document_id": false
+},
+{
+"source_path": "articles/ai-foundry/model-inference/supported-languages.md",
+"redirect_url": "../foundry-models/supported-languages",
+"redirect_document_id": false
+},
+{
+"source_path": "articles/ai-foundry/model-inference/supported-languages-openai.md",
+"redirect_url": "../foundry-models/supported-languages-openai",
+"redirect_document_id": false
+},
+{
+"source_path": "articles/ai-foundry/model-inference/faq.yml",
+"redirect_url": "../foundry-models/faq",
+"redirect_document_id": false
+},
+{
+"source_path": "articles/ai-foundry/model-inference/index.yml",
+"redirect_url": "../foundry-models/index",
+"redirect_document_id": false
 }
 ]
-}
+}
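
Most of the added entries use relative `redirect_url` values, which resolve against the published URL of the retired article rather than its repository path. A minimal sketch of that resolution (illustrative only, not part of this commit; it assumes `articles/<path>.md` publishes at `/azure/<path>`, and the Open Publishing build's actual rules may differ):

```python
# Illustrative sketch, not part of the commit: resolve a relative redirect_url
# against the published URL of the source article.
# Assumption: articles/<path>.md publishes at /azure/<path> for this docset.
import posixpath

def resolve_redirect(source_path: str, redirect_url: str) -> str:
    if redirect_url.startswith("/"):
        return redirect_url  # already an absolute site path
    published = "/azure/" + posixpath.splitext(source_path)[0].removeprefix("articles/")
    return posixpath.normpath(posixpath.join(posixpath.dirname(published), redirect_url))

print(resolve_redirect(
    "articles/ai-foundry/model-inference/concepts/content-filter.md",
    "../../foundry-models/concepts/content-filter",
))
# -> /azure/ai-foundry/foundry-models/concepts/content-filter
```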

articles/ai-foundry/.openpublishing.redirection.ai-studio.json

Lines changed: 5 additions & 0 deletions
@@ -350,6 +350,11 @@
 "redirect_url": "/azure/ai-foundry/how-to/evaluate-results",
 "redirect_document_id": true
 },
+{
+"source_path_from_root": "/articles/ai-foundry/how-to/deploy-nvidia-inference-microservice.md",
+"redirect_url": "/azure/ai-foundry/how-to/deploy-models-managed-pay-go#nvidia",
+"redirect_document_id": true
+},
 {
 "source_path_from_root": "/articles/ai-studio/how-to/fine-tune-managed-compute.md",
 "redirect_url": "/azure/ai-foundry/how-to/fine-tune-managed-compute",

articles/ai-services/agents/breadcrumb/toc.yml renamed to articles/ai-foundry/agents/breadcrumb/toc.yml

Lines changed: 2 additions & 2 deletions
@@ -14,9 +14,9 @@
 topicHref: /azure/index
 items:
 - name: AI Foundry # Original doc set name
-tocHref: /legal/cognitive-services/openai # Destination doc set route
+tocHref: /azure/ai-foundry/responsible-ai/openai # Destination doc set route
 topicHref: /azure/ai-services/agents/index # Original doc set route
 items:
 - name: Agent Service # Destination doc set name
-tocHref: /legal/cognitive-services/openai # Destination doc set route
+tocHref: /azure/ai-foundry/responsible-ai/openai # Destination doc set route
 topicHref: /azure/ai-services/agents/index # Original doc set route

articles/ai-services/agents/concepts/agent-catalog.md renamed to articles/ai-foundry/agents/concepts/agent-catalog.md

Lines changed: 3 additions & 5 deletions
@@ -15,14 +15,12 @@ ms.custom:
 # Get started with the Agent Catalog

 Accelerate your agent development using code samples and best practices for creating agents. Each agent sample below links to a GitHub Repository, where you can browse the agent's configuration files, setup instructions and source code to start integrating them into your own project in code.
-With agents you create using these code samples, be sure to assess safety and legal implications, and to comply with all applicable laws and safety standards. See the [transparency note](/legal/cognitive-services/agents/transparency-note) for more information.
-
-[!INCLUDE [feature-preview](../../../ai-foundry/includes/feature-preview.md)]
+With agents you create using these code samples, be sure to assess safety and legal implications, and to comply with all applicable laws and safety standards. See the [transparency note](/azure/ai-foundry/responsible-ai/agents/transparency-note) for more information.

 ## Prerequisites

 - [Azure subscription](https://azure.microsoft.com/free)
-- An [Azure AI Foundry project](../../../ai-foundry/how-to/create-projects.md).
+- An [Azure AI Foundry project](../../how-to\create-projects.md).

 ## Find the Agent Catalog in the Azure AI Foundry portal

@@ -31,7 +29,7 @@ With agents you create using these code samples, be sure to assess safety and le
 1. On the left pane, select **Agents**.
 1. Near the top of the screen, select **Catalog**. Find the code sample you want to use.

-:::image type="content" source="../media/agent-catalog.png" alt-text="A screenshot of the model catalog." lightbox="../media/agent-catalog.png":::
+:::image type="content" source="../media\agent-catalog.png" alt-text="A screenshot of the model catalog." lightbox="../media\agent-catalog.png":::

 1. Select **Open in Github** to view the entire sample application.

articles/ai-services/agents/concepts/model-region-support.md renamed to articles/ai-foundry/agents/concepts/model-region-support.md

Lines changed: 3 additions & 3 deletions
@@ -22,13 +22,13 @@ Azure OpenAI provides customers with choices on the hosting structure that fits
 - **Standard** is offered with a global deployment option, routing traffic globally to provide higher throughput.
 - **Provisioned** is also offered with a global deployment option, allowing customers to purchase and deploy provisioned throughput units across Azure global infrastructure.

-All deployments can perform the exact same inference operations, however the billing, scale, and performance are substantially different. To learn more about Azure OpenAI deployment types see [deployment types guide](../../openai/how-to/deployment-types.md).
+All deployments can perform the exact same inference operations, however the billing, scale, and performance are substantially different. To learn more about Azure OpenAI deployment types see [deployment types guide](../../../ai-services/openai/how-to/deployment-types.md).

 Azure AI Foundry Agent Service supports the following Azure OpenAI models in the listed regions.

 > [!NOTE]
-> * The following table is for standard deployment availability. For information on Provisioned Throughput Unit (PTU) availability, see [provisioned throughput](../../openai/concepts/provisioned-throughput.md) in the Azure OpenAI documentation. `GlobalStandard` customers also have access to [global standard models](../../openai/concepts/models.md#global-standard-model-availability).
-> * [Hub based projects](../../../ai-foundry/what-is-azure-ai-foundry.md#project-types) are limited to the following models: gpt-4o, gpt-4o-mini, gpt-4, gpt-35-turbo
+> * The following table is for serverless API deployment availability. For information on Provisioned Throughput Unit (PTU) availability, see [provisioned throughput](../../../ai-services/openai/concepts/provisioned-throughput.md) in the Azure OpenAI documentation. `GlobalStandard` customers also have access to [global standard models](../../../ai-services/openai/concepts/models.md#global-standard-model-availability).
+> * [Hub based projects](../../what-is-azure-ai-foundry.md#project-types) are limited to the following models: gpt-4o, gpt-4o-mini, gpt-4, gpt-35-turbo

 | REGION | o1 | o3-mini | gpt-4.1, 2025-04-14 | gpt-4.1-mini, 2025-04-14 | gpt-4.1-nano, 2025-04-14 | gpt-4o, 2024-05-13 | gpt-4o, 2024-08-06 | gpt-4o, 2024-11-20 | gpt-4o-mini, 2024-07-18 | gpt-4, 0613 | gpt-4, turbo-2024-04-09 | gpt-4-32k, 0613 | gpt-35-turbo, 1106 | gpt-35-turbo, 0125 |
 |------------------|----|---------|---------------------|--------------------------|--------------------------|--------------------|--------------------|--------------------|-------------------------|-------------|-------------------------|-----------------|--------------------|--------------------|
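
Because the file moved from `articles/ai-services/agents/` to `articles/ai-foundry/agents/`, each relative link picks up the extra `../ai-services/` hop needed to keep pointing at the same Azure OpenAI articles. A quick check that the old and new links resolve to the same repository path (illustrative only, not part of this commit):

```python
# Illustrative sketch, not part of the commit: confirm the re-rooted relative
# links in the moved file still resolve to the same target file.
import posixpath

def resolve(doc_path: str, relative_link: str) -> str:
    """Resolve a relative Markdown link against the directory of the containing doc."""
    return posixpath.normpath(posixpath.join(posixpath.dirname(doc_path), relative_link))

old = resolve("articles/ai-services/agents/concepts/model-region-support.md",
              "../../openai/how-to/deployment-types.md")
new = resolve("articles/ai-foundry/agents/concepts/model-region-support.md",
              "../../../ai-services/openai/how-to/deployment-types.md")
print(old)         # articles/ai-services/openai/how-to/deployment-types.md
print(old == new)  # True
```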

articles/ai-services/agents/concepts/standard-agent-setup.md renamed to articles/ai-foundry/agents/concepts/standard-agent-setup.md

Lines changed: 10 additions & 11 deletions
@@ -27,18 +27,17 @@ Both standard setup configurations are designed to give you complete control ove

 By bundling these BYO features (file storage, search, and thread storage), the standard setup guarantees that your deployment is secure by default. All data processed by Azure AI Foundry Agent Service is automatically stored at rest in your own Azure resources, helping you meet internal policies, compliance requirements, and enterprise security standards.

-## Project-Level Data Isolation
+### Azure Cosmos DB for NoSQL
+
+Your existing Azure Cosmos DB for NoSQL Account used in standard setup must have a total throughput limit of at least **3000 RU/s**. Both **Provisioned Throughput** and **Serverless** modes are supported.

-Azure AI Foundry enforces project-level data isolation by default. When you configure your own resources in the project capability host:
-* **Azure Storage**: Two Blob containers are automatically provisioned:
-* One for uploaded files
-* One for intermediate system data (for example, chunks, embeddings)
-* **Azure Cosmos DB**: Three containers are provisioned under a dedicated enterprise_memory database:
-* thread-message-store: End-user conversations
-* system-thread-message-store: Internal system messages
-* agent-entity-store: Model inputs and outputs
+When you use standard setup, **three containers** will be provisioned in your existing Cosmos DB account, and **each container requires 1000 RU/s**.
+* thread-message-store: End-user conversations
+* system-thread-message-store: Internal system messages
+* agent-entity-store: Agent metadata including their instructions, tools, name, etc.

-This default behavior was chosen to reduce configuration complexity while still enforcing strict data boundaries—ensuring each project has a clean, isolated storage footprint without requiring manual setup.
+## Project-Level Data Isolation
+Standard setup enforces project-level data isolation by default. Two blob storage containers will automatically be provisioned in your storage account, one for files and one for intermediate system data (chunks, embeddings) and three containers will be provisioned in your Cosmos DB, one for user systems, one for system messages, and one for user inputs related to created agents such as their instructions, tools, name, etc. This default behavior was chosen to reduce setup complexity while still enforcing strict data boundaries between projects.

 ## Capability hosts
 **Capability hosts** are sub-resources on both the Account and Project, enabling interaction with the Azure AI Foundry Agent Service.
@@ -85,4 +84,4 @@ This default behavior was chosen to reduce configuration complexity while still
 * Assign role: Cosmos DB Built-in Data Contributor
 * Cosmos DB for NoSQL container: `<'${projectWorkspaceId}>-agent-entity-store'`
 * Assign role: Cosmos DB Built-in Data Contributor
-11. Once all resources are provisioned, all developers who want to create/edit agents in the project should be assigned the role: Azure AI User on the project scope.
+11. Once all resources are provisioned, all developers who want to create/edit agents in the project should be assigned the role: Azure AI User on the project scope.
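
The rewritten Cosmos DB section ties the account-level requirement to the per-container provisioning: three agent containers at 1000 RU/s each is what drives the 3000 RU/s minimum. A trivial sketch of that arithmetic (illustrative only, not part of this commit):

```python
# Illustrative sketch, not part of the commit: the 3000 RU/s account minimum
# is just three agent containers at 1000 RU/s each.
CONTAINER_RU = {
    "thread-message-store": 1000,         # end-user conversations
    "system-thread-message-store": 1000,  # internal system messages
    "agent-entity-store": 1000,           # agent metadata (instructions, tools, name, ...)
}

print(sum(CONTAINER_RU.values()))  # 3000 RU/s minimum account throughput
```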

articles/ai-services/agents/concepts/threads-runs-messages.md renamed to articles/ai-foundry/agents/concepts/threads-runs-messages.md

Lines changed: 2 additions & 2 deletions
@@ -26,7 +26,7 @@ When you use an Agent, there are a series of steps that are involved.
 - **Check the run status:** Monitor the run until it has completed.
 - **Getting the response:** After the agent has created a response, display it to the user.

-:::image type="content" source="../media/run-thread-model.png" alt-text="A diagram showing an example of an agent run." lightbox="../media/run-thread-model.png":::
+:::image type="content" source="../media\run-thread-model.png" alt-text="A diagram showing an example of an agent run." lightbox="../media\run-thread-model.png":::

 ## Agent

@@ -46,4 +46,4 @@ A run involves invoking the agent on the thread, where it processes the messages

 ## Next steps

-* [Quickstart: create an agent](../quickstart.md)
+* [Quickstart: create an agent](../quickstart.md)
