Commit 9366ce7

Merge pull request #7377 from MicrosoftDocs/main
Auto Publish – main to live - 2025-09-30 17:13 UTC

2 parents 5a2d21a + c1fdb46, commit 9366ce7
18 files changed: +234 −64 lines changed

articles/ai-foundry/agents/how-to/tools/model-context-protocol-samples.md

Lines changed: 1 addition & 4 deletions

@@ -7,7 +7,7 @@ manager: nitinme
 ms.service: azure-ai-foundry
 ms.subservice: azure-ai-foundry-agent-service
 ms.topic: how-to
-ms.date: 09/04/2025
+ms.date: 09/30/2025
 author: aahill
 ms.author: aahi
 zone_pivot_groups: selection-mcp-code
@@ -16,9 +16,6 @@ ms.custom: azure-ai-agents-code
 
 # How to use the Model Context Protocol tool (preview)
 
-> [!NOTE]
-> Supported regions are `westus`, `westus2`, `uaenorth`, `southindia`, and `switzerlandnorth`.
-
 Use this article to find code samples for connecting Azure AI Foundry Agent Service with Model Context Protocol (MCP) servers.
 
 ## Prerequisites

articles/ai-foundry/agents/how-to/tools/model-context-protocol.md

Lines changed: 1 addition & 5 deletions

@@ -7,17 +7,13 @@ manager: nitinme
 ms.service: azure-ai-foundry
 ms.subservice: azure-ai-foundry-agent-service
 ms.topic: how-to
-ms.date: 09/04/2025
+ms.date: 09/30/2025
 author: aahill
 ms.author: aahi
-ms.custom: references_regions
 ---
 
 # Connect to Model Context Protocol servers (preview)
 
-> [!NOTE]
-> Supported regions are `westus`, `westus2`, `uaenorth`, `southindia`, and `switzerlandnorth`.
-
 > [!NOTE]
 > When using a [Network Secured Azure AI Foundry](../../how-to/virtual-networks.md), private MCP servers deployed in the same virtual network is not supported, only publicly accessible MCP servers are supported.

articles/ai-foundry/openai/concepts/prompt-engineering.md

Lines changed: 2 additions & 19 deletions

@@ -4,7 +4,7 @@ titleSuffix: Azure OpenAI
 description: Learn how to use prompt engineering to optimize your work with Azure OpenAI.
 ms.service: azure-ai-openai
 ms.topic: conceptual
-ms.date: 09/23/2025
+ms.date: 09/30/2025
 ms.custom: references_regions, build-2023, build-2023-dataai
 manager: nitinme
 author: mrbullwinkle
@@ -102,27 +102,10 @@ Supporting content is information that the model can utilize to influence the ou
 
 ## Scenario-specific guidance
 
-While the principles of prompt engineering can be generalized across many different model types, certain models expect a specialized prompt structure. For Azure OpenAI GPT models, there are currently two distinct APIs where prompt engineering comes into play:
-
-- Chat Completion API.
-- Completion API.
-
-Each API requires input data to be formatted differently, which in turn impacts overall prompt design. The **Chat Completion API** supports the GPT-35-Turbo and GPT-4 models. These models are designed to take input formatted in a [specific chat-like transcript](../how-to/chatgpt.md) stored inside an array of dictionaries.
-
-The **Completion API** supports the older GPT-3 models and has much more flexible input requirements in that it takes a string of text with no specific format rules.
-
 The techniques in this section will teach you strategies for increasing the accuracy and grounding of responses you generate with a Large Language Model (LLM). It is, however, important to remember that even when using prompt engineering effectively you still need to validate the responses the models generate. Just because a carefully crafted prompt worked well for a particular scenario doesn't necessarily mean it will generalize more broadly to certain use cases. Understanding the [limitations of LLMs](/azure/ai-foundry/responsible-ai/openai/transparency-note#limitations), is just as important as understanding how to leverage their strengths.
 
-#### [Chat completion APIs](#tab/chat)
-
 [!INCLUDE [Prompt Chat Completion](../includes/prompt-chat-completion.md)]
 
-#### [Completion APIs](#tab/completion)
-
-[!INCLUDE [Prompt Completion](../includes/prompt-completion.md)]
-
----
-
 ## Best practices
 
 - **Be Specific**. Leave as little to interpretation as possible. Restrict the operational space.
@@ -133,7 +116,7 @@ The techniques in this section will teach you strategies for increasing the accu
 
 ## Space efficiency
 
-While the input size increases with each new generation of GPT models, there will continue to be scenarios that provide more data than the model can handle. GPT models break words into "tokens." While common multi-syllable words are often a single token, less common words are broken in syllables. Tokens can sometimes be counter-intuitive, as shown by the example below which demonstrates token boundaries for different date formats. In this case, spelling out the entire month is more space efficient than a fully numeric date. The current range of token support goes from 2,000 tokens with earlier GPT-3 models to up to 32,768 tokens with the 32k version of the latest GPT-4 model.
+While the input size increases with each new generation of GPT models, there will continue to be scenarios that provide more data than the model can handle. GPT models break words into "tokens." While common multi-syllable words are often a single token, less common words are broken in syllables. Tokens can sometimes be counter-intuitive, as shown by the example below which demonstrates token boundaries for different date formats. In this case, spelling out the entire month is more space efficient than a fully numeric date.
 
 :::image type="content" source="../media/prompt-engineering/space-efficiency.png" alt-text="Screenshot of a string of text with highlighted colors delineating token boundaries." lightbox="../media/prompt-engineering/space-efficiency.png":::
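The prompt-engineering hunk above removes guidance contrasting the Chat Completion API's chat-like transcript (an array of dictionaries) with the Completion API's single free-form string. For reference, the difference in prompt shape looks like this (a minimal Python sketch; the prompt text is invented and no API call is made):

```python
# Chat Completion API: a chat-like transcript stored in an array of
# dictionaries, each entry carrying a role and content.
chat_prompt = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize the plot of Hamlet in one sentence."},
]

# Completion API (older GPT-3 models): a single string with no format rules.
completion_prompt = "Summarize the plot of Hamlet in one sentence.\n\nSummary:"

# Both carry the same request; only the packaging differs.
assert all({"role", "content"} <= set(message) for message in chat_prompt)
assert isinstance(completion_prompt, str)
```

The array form is what lets the chat models distinguish system instructions from user turns, which is why the two APIs called for different prompt designs.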

articles/ai-foundry/openai/how-to/fine-tuning.md

Lines changed: 1 addition & 1 deletion

@@ -6,7 +6,7 @@ manager: nitinme
 ms.service: azure-ai-openai
 ms.custom: build-2023, build-2023-dataai, devx-track-python, references_regions
 ms.topic: how-to
-ms.date: 07/02/2025
+ms.date: 09/30/2025
 author: mrbullwinkle
 ms.author: mbullwin
 zone_pivot_groups: openai-fine-tuning

articles/ai-foundry/openai/how-to/function-calling.md

Lines changed: 2 additions & 4 deletions

@@ -7,7 +7,7 @@ ms.author: mbullwin #delegenz
 ms.service: azure-ai-openai
 ms.custom: devx-track-python
 ms.topic: how-to
-ms.date: 09/15/2025
+ms.date: 09/30/2025
 manager: nitinme
 ---
 
@@ -31,9 +31,6 @@ At a high level you can break down working with functions into three steps:
 
 * `gpt-35-turbo` (`1106`)
 * `gpt-35-turbo` (`0125`)
-* `gpt-4` (`1106-Preview`)
-* `gpt-4` (`0125-Preview`)
-* `gpt-4` (`vision-preview`)
 * `gpt-4` (`2024-04-09`)
 * `gpt-4o` (`2024-05-13`)
 * `gpt-4o` (`2024-08-06`)
@@ -44,6 +41,7 @@ At a high level you can break down working with functions into three steps:
 * `gpt-5` (`2025-08-07`)
 * `gpt-5-mini` (`2025-08-07`)
 * `gpt-5-nano` (`2025-08-07`)
+* `gpt-5-codex` (`2025-09-11`)
 
 Support for parallel function was first added in API version [`2023-12-01-preview`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json)
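The hunks above sit in an article that, per its hunk headers, breaks working with functions into three high-level steps: describe the function to the model, let the model request a call, then execute it and return the result. A minimal offline sketch of that loop, with the model's tool call mocked as a JSON payload (the function name, schema, and values here are illustrative, not the article's code):

```python
import json

# Step 1: describe a callable function to the model (illustrative schema).
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

def get_weather(city: str) -> dict:
    # Stand-in for a real weather lookup.
    return {"city": city, "temp_c": 21}

# Step 2: the model responds with the function name plus JSON-encoded
# arguments. Mocked here instead of calling the API.
mock_tool_call = {"name": "get_weather",
                  "arguments": json.dumps({"city": "Seattle"})}

# Step 3: run the requested function and package its result as a tool
# message to send back to the model on the next turn.
args = json.loads(mock_tool_call["arguments"])
result = {"get_weather": get_weather}[mock_tool_call["name"]](**args)
followup_message = {"role": "tool", "content": json.dumps(result)}

print(followup_message["content"])  # {"city": "Seattle", "temp_c": 21}
```

Parallel function calling (mentioned in the hunk's closing context line) simply means step 2 can return several such tool calls at once, each handled the same way.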

articles/ai-foundry/openai/how-to/json-mode.md

Lines changed: 1 addition & 1 deletion

@@ -6,7 +6,7 @@ services: cognitive-services
 manager: nitinme
 ms.service: azure-ai-openai
 ms.topic: how-to
-ms.date: 07/02/2025
+ms.date: 09/30/2025
 author: mrbullwinkle
 ms.author: mbullwin
 recommendations: false

articles/ai-foundry/openai/how-to/predicted-outputs.md

Lines changed: 1 addition & 1 deletion

@@ -6,7 +6,7 @@ services: cognitive-services
 manager: nitinme
 ms.service: azure-ai-openai
 ms.topic: how-to
-ms.date: 06/17/2025
+ms.date: 09/30/2025
 author: mrbullwinkle
 ms.author: mbullwin
 recommendations: false

articles/ai-foundry/openai/how-to/switching-endpoints.yml

Lines changed: 1 addition & 1 deletion

@@ -7,7 +7,7 @@ metadata:
 author: mrbullwinkle
 ms.author: mbullwin
 manager: nitinme
-ms.date: 09/15/2025
+ms.date: 09/30/2025
 ms.service: azure-ai-openai
 ms.topic: how-to
 ms.custom:

articles/ai-foundry/openai/includes/embeddings-powershell.md

Lines changed: 1 addition & 1 deletion

@@ -3,7 +3,7 @@ services: cognitive-services
 manager: nitinme
 ms.service: azure-ai-openai
 ms.topic: include
-ms.date: 12/05/2023
+ms.date: 09/30/2025
 author: mrbullwinkle #noabenefraim
 ms.author: mbullwin
 ---

articles/ai-foundry/openai/includes/embeddings-python.md

Lines changed: 1 addition & 1 deletion

@@ -3,7 +3,7 @@ services: cognitive-services
 manager: nitinme
 ms.service: azure-ai-openai
 ms.topic: include
-ms.date: 09/01/2025
+ms.date: 09/30/2025
 author: mrbullwinkle #noabenefraim
 ms.author: mbullwin
 ---
