
Commit 22dd9f2

Merge pull request #3762 from mrbullwinkle/mrb_03_26_2025_freshness_003
[Azure OpenAI] Freshness update 003
2 parents 93810df + 4e0e870 commit 22dd9f2

7 files changed: +20 -21 lines changed


articles/ai-services/openai/concepts/advanced-prompt-engineering.md

Lines changed: 4 additions & 4 deletions

@@ -1,12 +1,12 @@
 ---
 title: Design system messages with Azure OpenAI
 titleSuffix: Azure OpenAI Service
-description: Learn about system message design with GPT-3, GPT-35-Turbo, and GPT-4 models.
+description: Learn about system message design
 author: mrbullwinkle
 ms.author: mbullwin
 ms.service: azure-ai-openai
 ms.topic: conceptual
-ms.date: 09/05/2024
+ms.date: 03/26/2025
 manager: nitinme
 keywords: ChatGPT, GPT-4, meta prompts, chain of thought
 ---
@@ -17,9 +17,9 @@ This guide will walk you through some techniques in system message design.



-## What is a system message?
+## What is a system message?

-A system message is a feature-specific set of instructions or contextual frameworks given to a generative AI model (e.g. GPT4-o, GPT3.5 Turbo, etc.) to direct and improve the quality and safety of a model’s output. This is particularly helpful in situations that need certain degrees of formality, technical language, or industry-specific terms.
+A system message is a feature-specific set of instructions or contextual frameworks given to a generative AI model (e.g. GPT-4o, GPT-3.5 Turbo, etc.) to direct and improve the quality and safety of a model’s output. This is particularly helpful in situations that need certain degrees of formality, technical language, or industry-specific terms.


 There is no prescribed length. A system message can be one short sentence:
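
For readers following the system-message prose above, here is a minimal sketch of how a system message is typically passed through the chat completions API with the current `openai` Python library. The deployment name, endpoint variables, and the system text itself are illustrative placeholders, not values from this commit; the API version simply mirrors the one used elsewhere in this PR.

```python
# Minimal sketch: supplying a system message via the chat completions API.
# Deployment name and system text are placeholders, not content from this commit.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    api_version="2024-10-21",
)

response = client.chat.completions.create(
    model="YOUR-DEPLOYMENT-NAME-HERE",  # your deployment name, not the underlying model name
    messages=[
        # The system message sets the tone, scope, and safety constraints for the model.
        {"role": "system", "content": "You are a support assistant for Contoso. Answer only questions about Contoso products, in a formal tone."},
        {"role": "user", "content": "How do I reset my password?"},
    ],
)
print(response.choices[0].message.content)
```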

articles/ai-services/openai/how-to/latency.md

Lines changed: 1 addition & 1 deletion

@@ -5,7 +5,7 @@ description: Learn about performance and latency with Azure OpenAI
 manager: nitinme
 ms.service: azure-ai-openai
 ms.topic: how-to
-ms.date: 09/20/2024
+ms.date: 03/26/2025
 author: mrbullwinkle
 ms.author: mbullwin
 recommendations: false

articles/ai-services/openai/how-to/provisioned-get-started.md

Lines changed: 3 additions & 4 deletions

@@ -8,7 +8,7 @@ ms.custom: openai
 ms.topic: how-to
 author: mrbullwinkle
 ms.author: mbullwin
-ms.date: 09/20/2024
+ms.date: 03/26/2025
 recommendations: false
 ---

@@ -136,14 +136,13 @@ The inferencing code for provisioned deployments is the same a standard deployme


 ```python
-#Note: The openai-python library support for Azure OpenAI is in preview.
 import os
 from openai import AzureOpenAI

 client = AzureOpenAI(
     azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT"),
     api_key=os.getenv("AZURE_OPENAI_API_KEY"),
-    api_version="2024-02-01"
+    api_version="2024-10-21"
 )

 response = client.chat.completions.create(
@@ -203,7 +202,7 @@ from openai import AzureOpenAI
 client = AzureOpenAI(
     azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT"),
     api_key=os.getenv("AZURE_OPENAI_API_KEY"),
-    api_version="2024-02-01",
+    api_version="2024-10-21",
     max_retries=5, # default is 2
 )

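
Because the hunks above show only fragments of the client setup, here is a sketch of what the updated snippet plausibly looks like after this change, with the new API version and the retry override. Only the `api_version` and `max_retries` values come from the diff; the deployment name and request body are assumptions for illustration.

```python
# Sketch of the post-change client configuration; deployment name is a placeholder.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    api_version="2024-10-21",  # API version this PR moves to
    max_retries=5,             # default is 2; extra retries help absorb throttling responses
)

response = client.chat.completions.create(
    model="YOUR-DEPLOYMENT-NAME-HERE",  # provisioned deployment name (placeholder)
    messages=[{"role": "user", "content": "Summarize provisioned throughput in one sentence."}],
)
print(response.choices[0].message.content)
```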

articles/ai-services/openai/how-to/reproducible-output.md

Lines changed: 5 additions & 5 deletions

@@ -6,7 +6,7 @@ services: cognitive-services
 manager: nitinme
 ms.service: azure-ai-openai
 ms.topic: how-to
-ms.date: 09/20/2024
+ms.date: 03/26/2025
 author: mrbullwinkle
 ms.author: mbullwin
 recommendations: false
@@ -50,7 +50,7 @@ from openai import AzureOpenAI
 client = AzureOpenAI(
     azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT"),
     api_key=os.getenv("AZURE_OPENAI_API_KEY"),
-    api_version="2024-02-01"
+    api_version="2024-10-21"
 )

 for i in range(3):
@@ -79,7 +79,7 @@ for i in range(3):
 $openai = @{
     api_key = $Env:AZURE_OPENAI_API_KEY
     api_base = $Env:AZURE_OPENAI_ENDPOINT # like the following https://YOUR_RESOURCE_NAME.openai.azure.com/
-    api_version = '2024-02-01' # may change in the future
+    api_version = '2024-10-21' # may change in the future
     name = 'YOUR-DEPLOYMENT-NAME-HERE' # name you chose for your deployment
 }

@@ -145,7 +145,7 @@ from openai import AzureOpenAI
 client = AzureOpenAI(
     azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT"),
     api_key=os.getenv("AZURE_OPENAI_API_KEY"),
-    api_version="2024-02-01"
+    api_version="2024-10-21"
 )

 for i in range(3):
@@ -174,7 +174,7 @@ for i in range(3):
 $openai = @{
     api_key = $Env:AZURE_OPENAI_API_KEY
     api_base = $Env:AZURE_OPENAI_ENDPOINT # like the following https://YOUR_RESOURCE_NAME.openai.azure.com/
-    api_version = '2024-02-01' # may change in the future
+    api_version = '2024-10-21' # may change in the future
     name = 'YOUR-DEPLOYMENT-NAME-HERE' # name you chose for your deployment
 }

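
The hunks above touch only the `api_version` lines of the reproducible-output samples, so for orientation here is a hedged sketch of the pattern that article demonstrates: repeating the same chat completion with a fixed `seed` and comparing results. The seed value, prompt, and deployment name are illustrative assumptions, not content from this commit.

```python
# Sketch: same request repeated with a fixed seed for (best-effort) reproducible output.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    api_version="2024-10-21",
)

for i in range(3):
    response = client.chat.completions.create(
        model="YOUR-DEPLOYMENT-NAME-HERE",  # deployment name placeholder
        seed=42,          # fixed seed steers sampling toward repeatable output
        temperature=0.7,
        messages=[{"role": "user", "content": "Tell me a short story about a lighthouse."}],
    )
    # system_fingerprint helps detect backend changes that can alter output despite a fixed seed.
    print(i, response.system_fingerprint, response.choices[0].message.content[:80])
```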

articles/ai-services/openai/how-to/work-with-code.md

Lines changed: 5 additions & 5 deletions

@@ -5,17 +5,17 @@ description: Learn how to use the Codex models on Azure OpenAI to handle a varie
 manager: nitinme
 ms.service: azure-ai-openai
 ms.topic: how-to
-ms.date: 09/20/2024
+ms.date: 03/26/2025
 author: mrbullwinkle
 ms.author: mbullwin
 ---

-# Codex models and Azure OpenAI Service
+# Completion models and code with Azure OpenAI Service

 > [!IMPORTANT]
-> This article was authored and tested against the [legacy code generation models](/azure/ai-services/openai/concepts/legacy-models). These models use the completions API, and its prompt/completion style of interaction. If you wish to test the techniques described in this article verbatim we recommend using the `gpt-35-turbo-instruct` model which allows access to the completions API. However, for code generation the chat completions API and the latest GPT-4o models will yield the best results, but the prompts would need to be converted to the conversational style specific to interacting with those models.
+> This article was authored and tested against the [legacy code generation models](/azure/ai-services/openai/concepts/legacy-models). These models use the completions API, and its prompt/completion style of interaction. If you wish to test the techniques described in this article verbatim we recommend using the `gpt-35-turbo-instruct` model which allows access to the completions API. However, for code generation the chat completions API and the latest GPT-4o and o-series models will yield the best results. To use these newer models these prompts would need to be converted to the conversational style specific to interacting with those models.

-The Codex model series is a descendant of our GPT-3 series that's been trained on both natural language and billions of lines of code. It's most capable in Python and proficient in over a dozen languages including C#, JavaScript, Go, Perl, PHP, Ruby, Swift, TypeScript, SQL, and even Shell.
+The Codex model series was a descendant of our GPT-3 series that's been trained on both natural language and billions of lines of code. It's most capable in Python and proficient in over a dozen languages including C#, JavaScript, Go, Perl, PHP, Ruby, Swift, TypeScript, SQL, and even Shell.

 You can use Codex for a variety of tasks including:

@@ -27,7 +27,7 @@ You can use Codex for a variety of tasks including:

 ## How to use completions models with code

-Here are a few examples of using Codex that can be tested in the [Azure AI Foundry](https://ai.azure.com) playground with a deployment of a Codex series model, such as `code-davinci-002`.
+Here are a few examples of using completion models that can be tested in the [Azure AI Foundry](https://ai.azure.com) playground with a deployment of `gpt-35-turbo-instruct`.

 ### Saying "Hello" (Python)

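
The updated note above says these completion-style prompts need to be converted to the conversational style of the chat completions API for GPT-4o and o-series models. Here is a rough sketch of that conversion; the deployment names and prompt text are placeholders, not content from this commit.

```python
# Sketch: the same code-generation intent expressed against both APIs.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    api_version="2024-10-21",
)

# Legacy completions style (e.g. a gpt-35-turbo-instruct deployment): a bare prompt string.
completion = client.completions.create(
    model="gpt-35-turbo-instruct-deployment",  # placeholder deployment name
    prompt='"""\nAsk the user for their name and say "Hello"\n"""',
    max_tokens=100,
)
print(completion.choices[0].text)

# Chat completions style (e.g. a GPT-4o deployment): the same intent as messages.
chat = client.chat.completions.create(
    model="gpt-4o-deployment",  # placeholder deployment name
    messages=[
        {"role": "system", "content": "You write short, correct Python snippets."},
        {"role": "user", "content": 'Write Python that asks the user for their name and says "Hello".'},
    ],
)
print(chat.choices[0].message.content)
```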

articles/ai-services/openai/includes/fine-tune.md

Lines changed: 1 addition & 1 deletion

@@ -6,7 +6,7 @@ author: mrbullwinkle
 ms.author: mbullwin
 ms.service: azure-ai-openai
 ms.topic: include
-ms.date: 09/01/2023
+ms.date: 03/26/2025
 manager: nitinme
 keywords: ChatGPT

articles/ai-services/openai/tutorials/fine-tune.md

Lines changed: 1 addition & 1 deletion

@@ -5,7 +5,7 @@ description: Learn how to use Azure OpenAI's latest fine-tuning capabilities wit
 manager: nitinme
 ms.service: azure-ai-openai
 ms.topic: tutorial
-ms.date: 09/09/2024
+ms.date: 03/26/2025
 author: mrbullwinkle
 ms.author: mbullwin
 recommendations: false
